Distinction between inner and outer environment in decision-making
Furthermore, a rational decision is one that leads to selecting the appropriate means to reach particular ends. (That is, a rational decision is one that leads to a goal in the most effective way.) Thus a rational decision-maker is the person who selects the means that lead to this effective decision (Simon 1947, p. 61).
This chapter presents the theoretical framework for our analysis of rational decisions within organizations. This framework will be outlined through the study of two different levels of analysis. First, the general features of the inner environment of a decision—i.e., the psychological aspects of rationality in the human mind—will be presented according to standard theories in cognitive science. Second, the external environment of a decision—i.e., the outer shape of means and ends, provided generally by the organization—will be outlined according to major trends in organization theory such as organizational decision analysis and naturalistic decision making. In the next section we will focus on the first part of this issue, centering our analysis on the inner, cognitive level of decision analysis. Section 3.3 will be devoted to the role of organizations as patterns of events for rational decisions.
3.2 The Inner Environment of Decisions
Let us now recall the rational choice approach to decision-making in organizations briefly presented in the previous chapter. One of the main criticisms leveled against it is that it does not give a realistic account of how decisions are made in real situations, but rather delivers idealized visions of what a rational decision ought to be. In particular, some of its results are inaccurate because they cannot explain:
The actual motivations that bind the decisions of all the agents implied in a (strategic-oriented, game-like) decision situation.
The processes that lead to maximizing decisions (how the agents acquire the information needed, how they make the calculations, or whether they are capable of making the kinds of decisions they are supposed to perform).
Moreover, these theories do not give any account of how the decision-maker acquires knowledge about future events, or probability distributions of future events, knowledge about the available alternatives, knowledge about the consequences of these alternatives, and knowledge about his or her own ordered preferences. These theories do not even take account of the actual computational processes by which humans arrive at the conclusion that one option is better than another one. In a nutshell, they presume that humans maximize preferences, but they do not explain how.
Our objective in this section is to present a more realistic account of the role played by cognition in rational decision-making as it has been developed in the behavioral sciences, thus connecting cognition with goal formation and goal attainment. Therefore, our aim here is not to fully discuss the concept of rationality but to assess what level of rationality human beings are capable of.
In fact, even an authoritative source such as the MIT Encyclopedia of the Cognitive Sciences (Wilson and Keil 1999) does not contain a single entry for the bare noun rationality. Instead, rational almost always appears as an adjective (e.g., rational agency, rational choice theory, rational decision-making), except when it is a noun modified by another adjective, as in bounded rationality, which does have an entry of its own.
With the aim of establishing a working definition, as a general basis, a rational decision is “choosing among alternatives in a way that ‘properly’ accords with the preferences and beliefs of an individual decision maker or those of a group making a joint decision” (Doyle 1999). Thus it includes (a) a decision-maker (individual or group), (b) a set of preferences and beliefs, (c) a set of alternatives before the decision-maker, and (d) a (“proper”) choosing action.
In the classical economic theory of rational decision-making, preferences are shaped by a utility function, which implies a number of assumptions:
The whole set of alternatives from which the decision maker has to choose is given and appears before the decision maker. Theories of rational choice disagree on this issue, delivering different models for certainty, risk, and uncertainty.2
Each alternative has a complete set of consequences attached.
The decision maker chooses the alternative that best matches the most preferred set of consequences. In the case of certainty, there is no ambiguity in the choosing action. In the case of risk or uncertainty, the expression expected utility has been coined to define a rational decision in terms of the probabilities of occurrence of the utilities that all possible consequences would generate (Savage 1951; Friedman and Savage 1952; Savage 1954). In “risky” situations, the probabilities of the outcomes are supposed to be known (e.g., in gambling situations); in “uncertain” or “ambiguous” situations, their precise likelihood is unknown (Shafir 1999). Thus in these cases the preference comparison is over prospects rather than outcomes (Wellman 1999), i.e., over expectations about possible outcomes (Friedman and Savage 1952). It is agreed that rational decision under uncertainty is difficult, and some explanations and models of it have been delivered regarding both uncertainty and risk (Bell and Raiffa 1988; Bell 1988; Fishburn 1988). What has not reached a parallel degree of agreement is whether all actual human decisions are surrounded by a notable degree of uncertainty.
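The expected-utility rule can be made concrete with a short computation. The following is a minimal sketch, not part of the cited theories; the prospects, outcomes, and probabilities are invented for illustration. A prospect is a set of probability-weighted outcomes, and the SEU-rational choice is the prospect with the highest expected utility.

```python
# Minimal expected-utility sketch: a "prospect" is a list of
# (probability, utility) pairs over possible outcomes, and the
# decision rule picks the prospect with the highest expected utility.

def expected_utility(prospect):
    """Sum of probability-weighted utilities over the prospect's outcomes."""
    return sum(p * u for p, u in prospect)

# Hypothetical choice under risk: a sure gain vs. a 50/50 gamble.
sure_thing = [(1.0, 80.0)]                 # 80 with certainty
gamble     = [(0.5, 200.0), (0.5, 0.0)]    # 200 or nothing

prospects = {"sure_thing": sure_thing, "gamble": gamble}
best = max(prospects, key=lambda name: expected_utility(prospects[name]))
print(best, expected_utility(prospects[best]))  # gamble, EU = 100 > 80
```

Note that the behavioral findings reviewed later in this section suggest actual subjects often prefer the sure gain even when the gamble has the higher expected utility.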
Given this enumeration, it is apparent that the conditions under which a decision could be viewed as rational—from the classical point of view—are rather demanding on the decision-maker. In effect, it is assumed (a) that all the alternatives are given, (b) that the decision-maker knows all the consequences of each alternative, and (c) that the decision-maker has a utility-ordering function for all the consequences (and thus for all the alternatives). The behavioral approach to decision-making has produced a bulk of empirical research that challenges this rather optimistic view of human capabilities for knowing, ordering preferences, and making choices.
These features of classical economic theory led Herbert A. Simon to the first systematic attempt to present the alternative notion of bounded rationality and its consequences for economic, administrative, and political decision making. This early work was mainly done during the 1950s (e.g., Simon 1955, 1957),3 the period in which he turned to cognitive psychology and a nascent computer science in order to better study and describe the psychology of human problem solving and “discovering the symbolic processes that people use in thinking” (Simon 1996a, p. 189).4 Noriega (2006, p. xxxiii) has identified three basic dimensions through which Simon covers the different problems produced by the classical theory of human rationality:
The difference between an omniscient rationality and a rationality that must contend with restrictions on information, time, and analysis capacity.
The difficulty of finding an optimal solution for some decision problems, which is usually overcome by lowering the aspiration level from optimal to satisfactory or satisficing (Simon 1996b, Chaps. 2 and 5)—namely, both satisfying and sufficient.5 It also refers to the problem of the uniqueness of solutions (Simon 1955).
The difference between substantive rationality of “economic man” and procedural rationality (Simon 1976), which is twofold:
A process through which individuals explore a space (of possible solutions) searching for satisficing solutions.
A process through which organizations evolve in order to survive in a satisfactory manner within their environment.
These three dimensions refer, consecutively, to different levels of analysis (see Table 3.1). The first one accounts for the cognitive limitations all human beings have for contending with a complex environment. The second applies the knowledge about these limits on human cognition and draws a picture of what levels of satisfaction human beings are able to achieve in their goal-oriented action. Finally, the third dimension refers to the particular strategies human beings (or organizations) carry out in order to achieve those satisficing levels of goal-attainment in solving their problems.
Table 3.1 Correspondence between Simon’s different dimensions of rationality and their levels of analysis

Dimensions of rationality      Levels of analysis
Bounded vs. Omniscient         Cognitive limitations
Satisficing vs. Optimizing     Levels of satisfaction (aspiration)
Procedural vs. Substantive     Strategies for goal-attainment
The three dimensions are touched upon in this section in this particular order, presenting a view of human rationality that both challenges the major assumptions of the neoclassical approach to rational decision-making and, in our opinion, helps us understand the mechanisms that, in spite of natural human limitations, allow for rational decisions.
Therefore, the rest of this section will be devoted to discussing the contrast between maximizing rationality and bounded rationality with respect to three different features: the cognitive limits that shape human rationality (Sect. 3.2.1), the problem of aspiration levels (Sect. 3.2.2), and, finally, the strategies for goal attainment (Sect. 3.2.3).
3.2.1 Bounded vs. Omniscient Rationality
In behavioral theories of decision-making, adaptation is the key. The individual is seen as an adaptive system whose goals draw the interface between its inner and outer environments. This poses a problem with two different questions. The first is where to put this interface—where to draw the line between the two environments. The second is what consequences we should expect from deciding where to draw this line.
To face this two-fold problem, Simon came up with the celebrated example of an ant making “his laborious way across a wind- and wave-molded beach” (Simon 1996b, p. 51), whose path from point A to point B is represented in Fig. 3.2.
Fig. 3.2 Representation of the path followed from point A to B by an agent. Behavior (the path) is mostly explained by the complexity of the environment
As to the first question, the interface between the inner and the outer environment is defined by the goals the ant (decision-maker) wants to attain, and thus its behavior will depend (a) on what it longs to do, and (b) on the environment in which it will be trying to do it. Some features of its behavior will be explained by the inner characteristics of the ant, but most of the behavior will be accounted for by its goals and the shape of the beach. In Sect. 2.4.3 we cited part of the formulation of the notion of bounded rationality delivered by Simon (1996b, p. 53). We may put it here in its complete form:
Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves.
The main advantage of this viewpoint is that few things about the inner environment need to be known or accounted for in order to understand an individual’s behavior—e.g., in the form of her decisions. In a simplified form, we just need to know (a) some features of the individual’s cognitive system, and (b) the individual’s goals. For these features bound the adaptation of the deciding individual to her environment, the rest being the strategies the individual will follow in order to solve a problem.6
It is important to note, though, that the decision on the boundary line between the inner and outer environments is subject to considerable discretion: we shall decide to draw it depending on our units of analysis—individuals, organizations, states, communities, tribes.
In our case, the goal is to study the process by which individuals (junior judges) make decisions in a particular institutional environment (on-call situations). The outer environment to which he or she must adapt will then be constituted, first, by the space of the problem that must be solved,7 and, then, by the “space” represented by the on-call institution, within a more general external environment represented by the judicial organization, which in turn is part of an even more general external environment, and so on. As with the parts of a Matryoshka doll, the boundaries of both inner and outer environments change as we go up and down in our level of analysis. In our case, the inner environment will be constituted by the cognitive architecture of the decision-maker, and the interface will be outlined by his or her goals.
Thus observed, junior judges’ accounts of the conditions for their behavior in particular situations will give us rich information about the complexity of the environment to which these decision-makers must adapt in order to solve their problems.
As to the inner part, it would be far beyond our scope to attempt a detailed and accurate presentation of the human cognitive system in order to explore the limits that affect the adaptive capacity of humans to their environments when solving problems. We will tackle these limits by referring to their most salient aspects, including the acquisition of concepts, the capacity and speed to process and store information, and our ability to process natural language. They are briefly presented in the following sections.
3.2.1.1 Concept Attainment and Categorization
Concepts are “the elements from which propositional thought is constructed, thus providing a means of understanding the world” (Hampton 1999, p. 176), including our own experiences, which in turn are related to our history. As foundational elements of knowledge, they have obviously received attention, both in studies related to the way concepts are attained and processed (studies with an explanatory aim, e.g., memory) and in research about the ways these concepts are to be represented (studies with a constructive aim, e.g., knowledge representation) (Gärdenfors 2004). This brief subsection refers to the former.
A great deal of research has been devoted to study the limits of human beings in processes of concept learning, concept attainment, and, in general, categorization, which is “the process by which distinct entities are treated as equivalent” and which “permits us to understand and make predictions about objects and events in our world” (Medin and Aguilar 1999, p. 104).8
Typical experiments for concept attainment are designed to test the ability to acquire a particular concept through the manipulation of artificially produced concepts—namely, concepts built only for the sake of the research. In this context, concept learning is thought of as involving “the processes of generalization and abstraction, so it is not sufficient to show simply that subjects can learn to discriminate the set of stimuli with which they are trained” (Eysenck 1990, p. 73).
In concept learning experiments, concept acquisition capacity is measured as the time an individual needs to recognize concepts non-randomly, i.e., until the subjects are able to classify new stimuli according to the knowledge (concepts) previously acquired. These concepts are conceived of as a set of features or attributes applicable to an object.
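The attribute-set view of concepts can be written down directly. This is a toy sketch, with an invented concept, of the classification criterion such experiments assume: a stimulus instantiates the concept when it carries all of the concept's defining attributes.

```python
# Sketch of concept attainment as attribute matching: a concept is a
# set of defining attributes, and a stimulus (also a set of attributes)
# is a positive instance when it includes every defining attribute.

concept = {"red", "round"}          # invented "learned" concept

def is_instance(stimulus):
    """True when the stimulus carries all defining attributes."""
    return concept <= stimulus      # subset test

print(is_instance({"red", "round", "small"}))   # True: positive instance
print(is_instance({"red", "square"}))           # False: missing "round"
```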
In this context, a classical approach is that of Bruner et al. (1986), who gave an account of a number of laboratory experiments on concept learning. They found that a number of strategies for concept learning could be identified and evaluated “in a relatively systematic way, both in terms of their objectives and in terms of the steps taken to achieve these” (Bruner et al. 1986, p. 235). They also reported, though, that a number of factors can appreciably diminish the subjects’ efficiency. In effect, when subjects are faced with time and information constraints, their results are less successful and accurate. However, the authors were “struck by the notable flexibility and intelligence of [the] subjects in adapting their strategies to the information, capacity, and risk requirements […] imposed to them” (Bruner et al. 1986, p. 238).
In this sense, other evidence has been presented suggesting that humans have perceivable limitations on their speed in attaining concepts under particular (though common) situations (Lebowitz 1986; Michalski 1987; Simon 1996b, pp. 59–63), and also in their concept (symbol) processing activity, as shall be seen in the next section.
3.2.1.2 Storing and Processing Information
According to standard cognitive science (Anderson 1983; Newell 1990), memory is not a unitary system but is divided into a short-term memory (STM) and a long-term memory (LTM), in such a way that information may be transferred from one memory to the other.
STM is seen as a working memory, a kind of “temporal” store for the information needed in a particular process.9 It is functionally limited because it “is used to hold the coded knowledge that is to be processed for the current task [but then] it is necessary to replace that knowledge when the current task changes” (Newell 1990, p. 355). These functional limits have been measured and refer (a) to the speed with which STM removes the “old” data to be replaced, (b) to how long STM is able to sustain information-processing activity in a single problem space (Newell 1990, p. 355), (c) to the time it needs in order to store new information to be processed (Simon 1996b, 63–66), and (d) to the amount of information it is able to hold (Miller 1956).10
As to our capacity for processing information, Miller (1956) reported in a classical study the “severe limitations on the amount of information that we are able to receive, process, and remember”. Then he added that “by organizing the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break (or at least stretch) this information bottleneck”. As to the measurement of that limit to process information at a given time, the author reports that in the experiments carried out “this span is usually somewhere in the neighborhood of seven” chunks of information, plus or minus two.
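Miller's chunking idea can be illustrated with a toy example; the grouping and the span value of seven are taken from the text, while the digit string is invented. A 12-digit string exceeds a span of about seven items, but recoded into three four-digit chunks it fits comfortably.

```python
# Toy illustration of Miller's "magical number seven": a raw digit
# string exceeds an assumed STM span of ~7 items, but recoding it
# into larger chunks brings the item count back under the span.

STM_SPAN = 7  # illustrative value: "seven, plus or minus two"

def chunk(seq, size):
    """Recode a sequence into consecutive chunks of the given size."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

digits = "149217761945"        # 12 single-digit items: over the span
chunks = chunk(digits, 4)      # ["1492", "1776", "1945"]: 3 items

print(len(digits) > STM_SPAN)  # True  -> raw digits exceed the span
print(len(chunks) <= STM_SPAN) # True  -> chunked version fits easily
```

The recoding works, of course, only because each four-digit group can be treated as one meaningful unit (here, year-like numbers), which is exactly Miller's point about organizing the stimulus input.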
On the other hand, the organization of long-term memory (LTM) and the way information is stored, retrieved, and indexed have also been a fruitful field for psychological research. Basically, LTM is viewed as an infinite “library” of stored and indexed information: it seems to have no limits on its storing capacity and presents an associative structure (Newell and Simon 1972, p. 792). Experiments with chess players and other kinds of experts have shown that information is stored in a relational form and that memory shows an organization of list structures (lists that contain listed elements), which leads to pattern recognition and expertise.11
The main consequence of the limits of our memory system (the information bottleneck) is seriality in our information-processing activity—i.e., the necessity for humans to do things one at a time (Simon 1980).12 A mechanism of bounded rationality for dealing with these limits is attention (Simon 1983, p. 21), i.e., the need to focus on what we are doing at a given moment. Attention is usually filtered by emotions, which can either distract us or call our attention to something. Castelfranchi et al. (2006) thus refer to emotions as modifiers of knowledge accessibility.13
In fact, a notable amount of research shows an increasing interest in the role of emotions (and intuition) in traditionally “hard” fields such as financial decision making (Lipshitz and Shulimovitz 2005). Emotions have also received renewed attention in some behavioral accounts of political behavior and decision making (Marcus 1988, 2000).14
3.2.1.3 Natural Language Processing
Apart from the debates around the nature of language and discussions about linguistic universals, a great deal of research has focused on the way humans process, utter, and comprehend language.15 A brief sample of what Cognitive Science, the leading journal of the field, has published on this particular issue in the last 30 years will serve as a picture of such interest.
As for structural features of human performance in language processing, Christiansen and Chater (1999a) offer a connectionist explanation of human performance in processing recursive language structures that occur in speech, and Smolensky (1999) combines generative grammar and a connectionist approach to language.16 Gerken and Bever (1986) study the relation between particular linguistic features (such as intuition) and basic cognitive processes. Jurafsky (1996) explores disambiguation, Langacker (1986) introduces a cognitive grammar, and Schank (1980) reflects on meaning, conceptual representations, and memory.
As for linguistic comprehension, Carrithers and Bever (1984) observe eye-fixation patterns in readers in order to assess a comprehension-based model of reading and listening. Chaiklin (1984) considers the role of verbal rules in the use of procedural knowledge for problem-solving, while others (Chi et al. 1994) explore the role of self-explanations in improving text comprehension. Dascal (1989) and Gibbs (1989) explore the role of literal meaning and context in understanding.17 Riesbeck (1980) studies the role of spatial reasoning in reading directions for the first time, and explores conversation comprehension. Winograd (1980) has focused on the relation of natural language understanding as a cognitive process to be applied to computers.18
As for language utterance, many studies have been devoted to the nature and use of metaphors (Lakoff and Johnson 1980; Indurkhya 1987; Gerrig 1989; Martin 1992; Clausner and Croft 1997; Fernández-Duque and Johnson 1999), discourse contribution (Clark and Schaefer 1989),19 the use of referring expressions (Dale and Reiter 1995), the role of emotions in narratives (Dyer 1983a),20 coreference processing (Gordon and Hendrick 1998), the pragmatics of locative expressions (Herskovits 1985), the relation between discourse processing and conceptual structure (Morrow 1986), and other variables such as interestingness in discourse (Hidi and Baird 1986).
3.2.2 Satisficing Rationality vs. Optimizing Rationality
Put in the simplest way, the main conclusion of the previous section could be that “what a person cannot do he or she will not do” (Simon 1996b, p. 28). This section goes a little further in this argument by showing the features of a satisficing policy.
3.2.2.1 Preference Reversal
A principal requirement of an optimizing organism—as the theory of subjective expected utility (SEU) puts it (Friedman and Savage 1952; Savage 1954)—is that all alternatives must be measurable in terms of a utility function. As we saw in the introduction, that function expresses the individual’s preferences on an ordinal scale, with the principle of transitivity in preferences being a key assumption of maximizing rationality.21
Leaving aside the serious problem of majority cycling and agenda manipulation and their consequences for transitivity in voting (Arrow 1963),22 one of the actual problems emphasized by several behavioral studies in the field is that, while there is no evidence whatsoever of the existence of such a utility function in human beings, empirical research shows that human preferences in certain situations are far from consistent in general, and hardly transitive in particular, and thus that many strong assumptions of SEU theory—such as invariance, apart from transitivity—do not accord with actual human behavior.
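The transitivity requirement can be stated operationally: given a record of pairwise choices, one can check for cycles such as A preferred to B, B to C, and C to A, which no utility function can represent. A minimal sketch, with invented preference data:

```python
# Check a set of pairwise preferences for violations of transitivity.
# A cycle such as A > B, B > C, C > A is exactly the kind of
# inconsistency that an ordinal utility function cannot represent.

def is_transitive(prefs):
    """prefs: set of (x, y) pairs meaning 'x is preferred to y'.
    Returns False if some required preference (a, c) is missing."""
    items = {x for pair in prefs for x in pair}
    for a in items:
        for b in items:
            for c in items:
                if (a, b) in prefs and (b, c) in prefs and a != c \
                        and (a, c) not in prefs:
                    return False
    return True

consistent = {("A", "B"), ("B", "C"), ("A", "C")}   # a coherent ordering
cyclic     = {("A", "B"), ("B", "C"), ("C", "A")}   # a preference cycle

print(is_transitive(consistent))  # True
print(is_transitive(cyclic))      # False
```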
Simon (1979) made a strong point on this lack of empirical validity of neoclassical accounts of rational behavior, focusing on the weaknesses of the latter explaining decision processes in business organizations which included varying levels of uncertainty and imperfect competition. At the same time, he made an equally strong point around the empirical basis of the bounded rationality model, supporting his arguments with a notable amount of empirical research (Simon 1979, pp. 501–502).
In this direction, Daniel Kahneman and Amos Tversky pioneered research on the empirical basis of certain aspects of the neoclassical approach. For instance, testing SEU theory in decisions under risk (Kahneman and Tversky 1979), they uncovered phenomena that linked choice with risk aversion—e.g., the certainty effect and the reflection effect—in the sense that “certainty increases the aversiveness of losses as well as the desirability of gains” (Kahneman and Tversky 1979, p. 269).23 They also showed that people are poor predictors of the future (Kahneman and Tversky 1979; Shapira 2008).24
But facts have also been reported that describe preference reversal when choices are framed in different manners, in what is referred to as framing effects (Tversky and Kahneman 1981). Tversky and Kahneman (1981) deliver data that cast doubt on the adequacy of such criteria as coherence and consistency for evaluating rational behavior, proposing bounded rationality as a framework under which preference reversal can be explained. In the same direction, Tversky and Kahneman (1986) showed that strong assumptions of SEU theory such as cancellation and invariance were not met in actual decision behavior, and that framing effects were the main cause of preference inconsistency.
In a more recent study, Tversky et al. (1990) elaborate on the causes of preference reversal in gambling situations and seriously challenge those rational models that explain preference reversal simply as violations of specific axioms such as independence and transitivity.25
In the political science field, Druckman (2004) recently carried out research on the impact of framing effects on changing citizens’ preferences, incorporating elite competition over frames and citizens’ conversations about frames as the key variables that diminish framing effects on political behavior. The main conclusion was that framing effects are not as pervasive as initially assumed, at least among citizens with a high level of expertise.
Yet he also concluded that among non-experts, “when framing effects persist, they can be even more pernicious than often thought—not only do they suggest incoherent preferences but they also stimulate increased confidence in those preferences”. This view of a rather narrow role for framing effects contrasts with those of other political scientists who assume a wider influence, even when their conclusions are also arrived at through due empirical research. This is the case of Quattrone and Tversky (1988), who conclude that “errors [in choice and judgment] are common and systematic, rather than idiosyncratic or random, hence they cannot be dismissed as noise”.
3.2.2.2 Satisficing as a Trigger for Search
Bounded rationality is not only related to preference reversal and the framing of decisions, but also, and above all, to the simpler notion of relative achievement of goals, which implies choosing without examining all possible behavior alternatives and without ascertaining that these are in fact all the alternatives (March 1990a). Yet surely decision-makers, in the absence of a utility function that guides their choice, must have some measure of aspiration in order to direct their action.
Briefly, the concept of aspiration level refers to a multidimensional measure of what people can or want to attain, and turns out to be the mechanism humans use for satisficing in their goal-oriented action.26 Research on decision-making in firms (Cyert and Simon 1956; Cyert and March 1963) explored the way in which organizations (firms) set levels of aspiration (but not maximizing functions).
In particular, Cyert and Simon (1956) pioneered the study of firm decision-making in oligopolistic markets by changing general assumptions made by classical theories of firm decision-making, such as “the entrepreneurial imperative of profit maximization” (Cyert and Simon 1956, p. 45). Building on the framework given by previous work on goal-setting in organizations (March 1954) and on the empirical validity of the maximization principle (Simon 1955; March 1955), they showed that establishing an acceptable level of profits—plus taking other variables into account, such as the impact of planning procedures and the allocation of the firm’s resources—allowed for a new vision and analysis of planning processes.
In a nutshell, the main conclusion was that search behavior (for new alternatives) in an organization is triggered by the perception of a failure to meet its (acceptable) goals, just as other behavior, such as organizational slack—i.e., behavior that deviates from the main organizational goals, resulting in a misuse of limited resources—can be explained by a perception that the acceptable levels have already been attained or surpassed (March and Simon 1958; Cyert et al. 1959).
The main idea here is that aspiration levels are mechanisms of bounded rationality for (a) allocating attention to particular matters (those that do not satisfice enough), and (b) triggering search for new alternatives (thus re-allocating our attention to search activities).27
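This trigger mechanism can be sketched as a simple control loop. The following is an illustrative sketch, not a model from the cited literature; the aspiration value, the search budget, and the alternatives are invented. Search is active only while the value of the alternatives found falls short of the aspiration level, and it stops at the first good-enough alternative, not at the best one.

```python
import random

# Sketch of satisficing search driven by an aspiration level: new
# alternatives are generated only while performance falls short of
# the aspiration level, and search stops at the first acceptable one.

def satisficing_search(evaluate, generate, aspiration, max_tries=1000):
    """Return the first alternative whose value meets the aspiration."""
    for _ in range(max_tries):
        alt = generate()
        if evaluate(alt) >= aspiration:
            return alt              # good enough: attention moves on
    return None                     # aspiration not met within budget

# Hypothetical problem: alternatives are numbers, value is the number.
random.seed(0)
choice = satisficing_search(
    evaluate=lambda x: x,
    generate=lambda: random.uniform(0, 100),
    aspiration=90,                  # an "acceptable level", not the optimum
)
print(choice)  # some value >= 90, almost surely not the maximum possible
```

Lowering the aspiration parameter makes search stop sooner; raising it keeps attention allocated to search for longer, which mirrors the two roles described above.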
The application of satisfactory levels instead of optimal ones in game or competitive situations has also proved capable of leading to stable situations that would otherwise be impossible (i.e., under optimality). One example is given by the Prisoner’s Dilemma, which shows that cooperative strategies can be rational when a satisficing policy is substituted for an optimizing one (Simon 1983, 1996b).28
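This point can be sketched in a toy simulation. It is a hypothetical illustration using the standard textbook payoffs, not Simon's own model: a satisficing player keeps its current action as long as the payoff meets its aspiration level, and two such players settle into stable mutual cooperation, whereas a payoff-maximizer facing a cooperator always defects (5 > 3), driving play toward mutual defection.

```python
# Repeated Prisoner's Dilemma sketch: satisficing players keep their
# current action while its payoff meets their aspiration level, and
# switch only when dissatisfied. Payoffs are the usual textbook values.

PAYOFF = {                      # (my_action, other_action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(aspiration, rounds=20):
    a1, a2 = "C", "C"           # both start by cooperating
    for _ in range(rounds):
        p1, p2 = PAYOFF[(a1, a2)], PAYOFF[(a2, a1)]
        # Satisficing rule: switch only if the payoff is unsatisfactory.
        a1 = a1 if p1 >= aspiration else ("D" if a1 == "C" else "C")
        a2 = a2 if p2 >= aspiration else ("D" if a2 == "C" else "C")
    return a1, a2

# With an aspiration of 3, mutual cooperation satisfices and persists.
# An optimizer would instead defect against a cooperator (5 > 3),
# and mutual optimizing leads to mutual defection with payoff 1 each.
print(play(aspiration=3))   # ('C', 'C')
```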
The psychological foundation of this “trigger for search” is the notion of generate-and-test, which refers to the very capacity of humans to solve problems: “we have a problem if we know what we want to do (the test), and if we don’t know immediately how to do it” (Newell and Simon 1976, p. 121) in which case we have the ability to generate (search for) possible solutions for the test.
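The generate-and-test idea can be written down directly. In this minimal sketch the specific problem (finding a nontrivial divisor) is an invented illustration: we know what we want, expressed as the test, but not how to obtain it directly, so a generator proposes candidates and the test recognizes an acceptable one.

```python
# Generate-and-test: we know what we want (the test) but not how to
# get it directly, so we generate candidates and test each in turn.

def generate_and_test(generate, test):
    """Return the first generated candidate that passes the test."""
    for candidate in generate:
        if test(candidate):
            return candidate
    return None                     # generator exhausted without success

# Hypothetical problem: find a nontrivial divisor of 91.
n = 91
solution = generate_and_test(
    generate=range(2, n),           # candidate solutions, in order
    test=lambda d: n % d == 0,      # "do we have what we want?"
)
print(solution)  # 7
```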
Summing up, this draws a picture of rational human beings rather different from the one drawn by theories that assume perfect rationality in human action, especially regarding two features of bounded rationality. First, the very existence of a utility function is challenged by observations of preference reversal and inconsistency in choice situations. Second, the notion of aspiration level can be used both as a mechanism for attention allocation and as an inducement to change situations with unsatisfactory levels of goal attainment.
We are thus left with a view of a human decision-maker who has perceivable limits on his computational abilities, who is not always able to maintain the coherence and consistency of his preferences, and who needs to focus his attention to search for alternatives when the ones tested are not sufficiently satisfactory. In the next section, a final step is made in exploring bounded rationality. In particular, it deals with the mechanisms for search.
3.2.3 Procedural Rationality vs. Substantive Rationality
Applied to decision-making, the difference between substantive and procedural rationality is the difference between the ability to choose the correct course of action and the ability to choose a good (enough) course of action.
The picture of economic man given by neoclassical economic theory is that of a human being who purports to deal with the “real world” in all its complexity (and who may achieve her goals in it by using rationality). This view assumes a theory of human problem solving in which optimal solutions (decisions) are arrived at through a chain of mental processes of the kind depicted in the previous chapter (foundations of rational-choice theories, Sect. 2.3.1) and in our working definition of rationality.
The model of a boundedly rational human being, in contrast, draws a picture of an individual who cannot perceive the world in such a detailed manner. Instead, she makes do with a highly simplified image of it, in which one situation (problem) is only loosely connected with another. Thus she can make decisions with relatively simple rules of thumb that do not make impossible demands upon her capacity for thought.
The problem of procedural versus substantive rationality is thus the problem of how humans develop strategies to adapt to their environments—i.e., to achieve their goals (solve their problems). This raises the further question of whether performing complex tasks requires a highly complex memory organization. In other words, it puts forth our main problem: how problems that require huge amounts of expert knowledge (stored in memory) are actually worked out.
According to the viewpoint adopted in the previous sections, the search for solutions in rich domains will be determined by the complex structure of these problems rather than by the limits of human memory.
3.2.3.1 Procedural Rationality in Organizations
The study of the way humans develop search strategies in particularly complex problem spaces ran parallel to the development of techniques and tools for helping organizations achieve higher levels of efficiency in their decisions, that is, to achieve procedural rationality. A number of techniques have been proposed, operations research (OR) and artificial intelligence (AI) being among the most popular.
These developments are based upon simplified models of real-world problems, although they enable taking a huge number of different variables into account in order to find an optimal solution to a problem. Yet unless the model of the problem used in the analysis and the real-world problem have the very same properties, the solutions will hardly be optimal.
OR typically works with simplified models of problems so that solutions can be worked out, but these solutions are optimal only within the simplified model that has been formalized; for the real-world problem, the solution will at best be good enough (Simon 1996b). Moreover, OR techniques such as linear programming have limited application to ill-structured problems, compared with the heuristic search enabled by symbolic AI-based applications, which ease working with less well-structured problem spaces.29 The price is that with AI optima are rarely found. The trade-off lies between “satisficing in a near-realistic model (AI) against optimizing in a greatly simplified model (OR)” (Simon 1996b, p. 28).
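The trade-off can be caricatured in a toy resource-allocation task (all project names, costs, values, and the aspiration level are hypothetical, and the sketch is ours, not Simon's): exhaustive, OR-style optimization is guaranteed to find the best feasible choice, but only because the model has been cut down to four items; a heuristic, AI-style routine merely satisfices, trading the guarantee of optimality for applicability to larger, less tidy problem spaces.

```python
# A caricature of the OR/AI trade-off on a toy resource-allocation
# task. All names and numbers are hypothetical.
from itertools import combinations

projects = {"p1": (4, 5), "p2": (3, 4), "p3": (5, 6), "p4": (2, 3)}
BUDGET = 7  # (cost, value) pairs; total cost must stay within budget

def optimize_simplified(projects, budget):
    """OR-style: enumerate every subset exactly. Feasible only
    because the model has been cut down to a handful of items."""
    best, best_value = set(), 0
    names = list(projects)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(projects[p][0] for p in subset)
            value = sum(projects[p][1] for p in subset)
            if cost <= budget and value > best_value:
                best, best_value = set(subset), value
    return best, best_value

def satisfice_greedy(projects, budget, aspiration):
    """AI-style heuristic: take high value-per-cost items until the
    aspiration level is met; no guarantee of optimality."""
    chosen, value, spent = set(), 0, 0
    ranked = sorted(projects,
                    key=lambda p: projects[p][1] / projects[p][0],
                    reverse=True)
    for p in ranked:
        cost, val = projects[p]
        if spent + cost <= budget:
            chosen.add(p)
            spent, value = spent + cost, value + val
        if value >= aspiration:
            break
    return chosen, value
```

Here the exhaustive routine finds a strictly better allocation than the greedy one, but its cost grows exponentially with the number of items, which is exactly why the model must stay small.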
3.2.3.2 A Computer Model of the Mind
The heuristic models developed under the label of artificial intelligence were applied bearing in mind the idea that these tools actually helped to better understand human heuristic processes precisely because they performed them in the same way humans actually do. Behind this idea resided the strong belief that human thinking could be reduced to a set of symbol-manipulating (information-processing) processes, in such a way that computers (being symbol-processing systems themselves) would be able to simulate them.30
This idea, which triggered the development of artificial intelligence jointly with cognitive science, was the result of work done since the early 1950s by Herbert A. Simon (at the Carnegie Institute of Technology), Allen Newell and J. Cliff Shaw (at RAND’s Systems Research Laboratory), and others. An early formulation of these ideas may be found in a report on the General Problem Solver (GPS)—a joint research project of RAND and Carnegie Tech—for the RAND Corporation (Newell et al. 1958b). The main objective of this research was to “understand the information processes that underlie human intellectual, adaptive, and creative abilities” by means of constructing “computer programs that can solve problems requiring intelligence and adaptation” (Newell et al. 1958b). The same ideas were soon publicly stated with notable optimism (Simon and Newell 1958; Newell et al. 1958a; Simon 1961).31
It is also worth noting that the analogy was not drawn between the computer and the neurophysiological structure of the human brain, but between two physical systems that process symbols in order to perform activities regarded as thinking or intelligent.
Finally, the computer was also viewed as a way to test theories on complex human thinking processes such as problem solving and learning, which gave a definite impulse to cognitive sciences as a meeting point between other disciplines such as computer science, economics, linguistics, and notably artificial intelligence (Simon 1980).
In effect, underlying all these developments—which involved research and development of information-processing languages, production-system languages, and the simulation of search in complex environments—was a brand-new information-based theory of human cognition and, in particular, a novel theory of human problem solving. The notion of problem soon led Allen Newell to the notion of heuristics, or “things that aid discovery” (Newell et al. 1958b, pp. 1–2),32 in the sense that “a genuine problem-solving process involves the repeated use of available information to initiate exploration, which discloses, in turn, more information until a way to attain the solution is finally discovered” (Newell et al. 1958b, p. 1).33 These pieces of information are the heuristics. A theory of human problem solving was thus set forth, including a “space” in which the problem solver searches for a solution.
3.2.3.3 Task Environment and Problem Space
Regarding “space”, Newell (1990, p. 98) describes problem space as “the space the agent has created within which to search for the solution to whatever problem it is currently attending. It is the agent’s attempt to bound the problem so it can be worked on. Thus, the agent adopts some problem space in which to solve a problem. The agent is then located at some current state in the space”. Therefore, while the task environment (the objective structure of the problem, also known as problem environment) molds the structure of the problem space, the problem space is the representation of that task environment in the information-processing system (the problem solver).34
The distinction between task environment and problem space amounts, in fact, to two different representations of the same thing made by two different agents. The task environment itself may be represented by the observer. The problem space is the internal representation of the task environment by the behaving subject (the decision maker). Therefore, the problem of analyzing a subject’s decision process will mostly imply the problem of interpreting and representing the way in which the subject represents and interprets the objective environment of the problem to be solved.
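A minimal way to make the notion of a problem space concrete (a hypothetical sketch of ours, using a small water-jug puzzle as a stand-in task environment; none of the code is drawn from Newell and Simon) is to encode it as a set of states, operators that transform states, and a goal test; problem solving is then movement through this space:

```python
# A hypothetical rendering of a problem space: states, operators,
# and a goal test. The task environment is a small water-jug puzzle
# (jugs of capacity 4 and 3; reach 2 units in the first jug).
from collections import deque

def operators(state):
    """Legal moves from a state (a, b)."""
    a, b = state
    yield (4, b)                      # fill jug A
    yield (a, 3)                      # fill jug B
    yield (0, b)                      # empty jug A
    yield (a, 0)                      # empty jug B
    pour = min(a, 3 - b)              # pour A into B
    yield (a - pour, b + pour)
    pour = min(b, 4 - a)              # pour B into A
    yield (a + pour, b - pour)

def search(start, goal_test):
    """Breadth-first movement through the problem space."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if goal_test(path[-1]):
            return path
        for nxt in operators(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

solution = search((0, 0), lambda s: s[0] == 2)  # a path of states
```

The same search procedure works for any task environment once it has been represented this way, which is the sense in which the problem space, rather than the outer task itself, is what the agent actually searches.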
3.2.3.4 LTM and Information-Rich Domains
Considering what has been explored so far, the notion of the decision-maker as an adaptive system becomes clearer. On the one hand, the structure of the problem to be solved (within a particular problem space) is the essential constraint to which the system has to adapt in order to solve it. On the other hand, the actual behavior of this adaptive individual solving a problem will tell us (a) how complex the problem is, and (b) “what aspects of behavior are determined by the psychology of the problem solver” (Newell and Simon 1972, p. 79).
Given a problem, a problem space (providing the paths that may lead to a solution), and a problem solver (who is an adaptive system), Newell and Simon (1972, p. 83) proposed that the level of adaptivity (or intelligence, or expertise, we may add) is a measure of the success the problem solver shows with problems of a certain kind, thus exploring the relation between certain problems and the skill needed to solve them.
Solving the problem does not require special skills for an adult person with a basic level of literacy. Solving the problems that professionals of any field (medicine, the judiciary, fire departments) must solve every day, however, requires special skills. These skills are needed because the environment of the problems they face requires the availability, recovery, and management of huge amounts of information stored in these professionals’ long-term memory (LTM).
Our simpler calculation problem did not require retrieving any amount of information from the LTM. Rather, its solution comes generally “as automatic, largely unconscious, and relatively undemanding of computational capacity” (Stanovich and West 2000).35 The question is not whether expert professionals are inherently more intelligent than other adults but whether they possess a sufficient store of information related to their field and the ability to recover that information in a sufficiently efficient way so that problems may be solved. This is why some people are considered experts. While their basic cognitive system is invariant and the same as the average human being (Newell 1990), the difference between the layman and the expert lies (a) in the amount of information about a particular domain stored in his LTM, and (b) in his ability to search, recognize, associate, and retrieve this information in order to make certain decisions about that domain (Simon 1996b, p. 87).36
Insofar as the LTM is thus viewed as a “place” where search occurs (in particular kinds of problems), it may be regarded as standing beyond the goals of the problem solver, which mark the boundary between the two environments. Therefore, the long-term memory is another part of the outer environment to which the subject has to adapt, especially when deciding over information-rich domains (Simon 1996b).
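The layman/expert difference described above can be sketched as follows (a purely hypothetical illustration of ours: the cues, “chunks”, and medical examples are invented). Both agents share the same retrieval mechanism; what differs is the stock of indexed knowledge in long-term memory:

```python
# A hypothetical sketch of the layman/expert difference: identical
# retrieval machinery, differently stocked long-term memories. The
# cues and "chunks" below are invented for illustration.

class LongTermMemory:
    def __init__(self):
        self.chunks = {}  # cue -> list of knowledge chunks it indexes

    def store(self, cues, knowledge):
        """Index one chunk of knowledge under each of its cues."""
        for cue in cues:
            self.chunks.setdefault(cue, []).append(knowledge)

    def recognize(self, situation_cues):
        """Retrieval by recognition: cues present in the situation
        trigger the chunks they index; no deliberate search occurs."""
        retrieved = []
        for cue in situation_cues:
            for chunk in self.chunks.get(cue, []):
                if chunk not in retrieved:
                    retrieved.append(chunk)
        return retrieved

layman, expert = LongTermMemory(), LongTermMemory()
expert.store({"fever", "stiff neck"}, "consider meningitis; act fast")
expert.store({"fever"}, "common infection; routine workup")

case = {"fever", "stiff neck", "headache"}
expert_view = expert.recognize(case)   # two distinct chunks retrieved
layman_view = layman.recognize(case)   # nothing: no chunks stored
```

On this picture, expertise is a property of memory content and indexing, not of the basic cognitive machinery, in line with the invariance claim cited above.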
3.2.4 Concluding Remarks: The Limits that Shape Rationality
In this section the main features of the inner environment of a typical decision-maker have been put forth in order to assess what level of rationality is to be expected in normal human behavior. There is a notable amount of empirical research in the cognitive sciences, organization and management, and information-processing psychology supporting the view of a (boundedly) rational human being. Human thought processes in difficult problem-solving, concept-attainment, and decision-making tasks have been successfully described in terms of basic information-processing processes.
We have presented the main characteristics of the inner environment in different steps. First, we discussed the concept of rationality presenting an alternative to the one adopted by neoclassical economics and rational-choice approaches to political decision-making.
Second, this notion of rationality has led us to consider three different dimensions along which these two views of rationality may be analyzed. The first deals with the difference between bounded and omniscient rationality, which refers to the cognitive limits all human beings share in their processing of information. The second takes account of the difference between satisficing and optimizing as valid criteria for interpreting human behavior as goal-oriented action, introducing the notion of aspiration level. The third accounts for the difference between procedural and substantive rationality. While the latter refers to the outcomes of rational choice, the former centers on the process through which a rational choice is arrived at (Simon 1978). In this last part a number of issues have been presented and discussed, such as technological means to reach procedural rationality in organizations, the historical and theoretical grounds of a computer model of the mind, and the relevant notions of problem space and task environment.
These issues provide the framework in which human problem solving can be understood as a process by which humans use certain methods (heuristics) within a problem space in order to attain a goal, namely the solution of a problem.
The purpose of this work required the presentation and explanation of a theory of human problem solving, although to give a detailed account of the theory of heuristic search would fall beyond our scope.37 Moreover, we are aware that the theory presented here has been extended since the 1970s through a great deal of research in cognitive science, mainly on cognitive architectures.38
In this sense, three main cognitive architectures have been developed in order to incorporate and expand the theory of human problem solving. Soar (Laird et al. 1987) was proposed as an architecture for general intelligence and as a candidate for a unified theory of cognition (Newell 1990, p. 38). Seen as an intelligent system, it performs small tasks (Tower of Hanoi, resolution theorem proving, means-ends analysis, etc.) as well as larger ones (algorithm discovery, production scheduling of replacement windshields, etc.). It also learns by chunking on all tasks it performs (Newell 1990, pp. 216–217).39
ACT-R (Adaptive Character of Thought) (Anderson 1996)—elaborating ACT* (Anderson 1983), which built upon Newell’s contributions on the role of production rules in procedural knowledge (Anderson 1996)—has centered upon complex cognition through the decomposition of cognition (Anderson and Gluck 2001). It has received attention and has been applied in different research areas such as conflict resolution (Belavkin and Ritter 2004), linguistic metaphor analysis (Budiu and Anderson 2003), and the theory of choice (Lovett 1998).
Finally, Icarus (Langley et al. 2004) has been presented as an “extended theory of human problem solving” (Langley and Rogers 2005) which deals with many features regarding representation, performance, and learning in puzzle-like problem-solving situations (e.g., the Tower of Hanoi).
Although these advances are impressive, and the applications numerous, the core assumptions of Newell and Simon’s theory of human problem solving seem to stand and are still referred to not only as seminal work but also as a still valid general framework (Anderson 1987).
In conclusion, then, these assumptions and the features of the inner environment have set forth the basic limits that shape human rationality, which might be summarized as:
Limits on what we can know (cognitive limits)
Limits on what we can expect to do (aspiration levels)
Limits on what we can and actually do (task environment)
3.3 The External Environment of Decisions
This section rests on two simple ideas that have already been presented in this chapter. The first is that individuals are adaptive systems who act in an environment in order to solve their problems. In effect, at the beginning of this chapter we stated that placing the boundary between the inner and the outer environment of a deciding system was quite arbitrary, since it depends upon whose decisions we want to deal with. Many early students within the behavioral tradition (Cyert and Simon 1956; Cyert and March 1955) opted for setting that boundary at the fringes of the organization rather than placing it at the individual level. Such a decision implied the systematic study of the mechanisms of organizational bounded rationality, in much the same way that others have focused their efforts on the mechanisms of individual bounded rationality. Some of the latter have already been pointed out in previous sections. For instance, we saw how humans deal with their poor ability to attend to several things at a time by resorting to the allocation of attention (and to emotions). Similarly, organizations may have trouble attacking a great number of important matters at the same time and have to make use of the allocation of attention (e.g., in the form of resources) in order to ensure that a particular problem is tackled in due time and by appropriate means (March 1990a).
This and other mechanisms—such as routinized behavior or departmentalization—make organizations adaptive systems that are able to achieve goals and make decisions in ways that single individuals would never have dreamt of. Building the “Big Mac” between Lake Michigan and Lake Huron, “discovering” America, and successfully carrying out the Brécourt Manor Assault are all instances of such achievements. These mechanisms are part of an organization’s inner environment in the sense that they allow for adaptation to the external environment.40
As we have also pointed out, we decided to set the boundary at the individual level, so these organizational mechanisms, which otherwise would be considered the inner environment, are viewed here as the most important portion of the external environment to which the individual has to adapt and within which he or she makes choices—decisions. Thus in our case, the problems junior judges have to face when on-call will have to be understood as demands of this external environment rather than as the judicial organization’s mechanisms for dealing with its bounded rationality.
So while in the account of these demands (in the rest of this chapter) we shall give a brief introduction to some general features to be taken into consideration, by no means are they to be considered fixed or given, for it will be our analysis of the individual’s adaptive problem-solving behavior that tells us the precise nature, characteristics, and complexity of this external environment and of the particular problems it puts forth. This is the second idea.
Therefore, at this point we should make a decision on what features of the external environment are to be most relevant to our analysis. In order to make that choice for this tentative presentation, we have used the conclusions sketched in the few earlier works on the matter (Rodrigo et al. 2004; Casanovas et al. 2005). In a preliminary overview on Spanish judges, Rodrigo et al. (2004, p. 15) stressed the characteristics of the on-call period arguing that judges’ decision-making in these circumstances “may entail simultaneous decision making over a number of parallel issues (raised by the police, lawyers, prosecutors, etc.)”, which points out the problem of allocation of attention, i.e., the problem of paying attention to a few things at a time.
Furthermore, it was detected that “the need for quick decisions makes it difficult to review jurisprudence or precedents, so inexperienced judges have to rely on uncertain consultation with peers or senior judges”. This last point raises, first, the general problem of decision-making under uncertainty, shaped in particular by the lack of complete information about the problem to be solved. Second, in this particular context it also refers to the lack of experience—i.e., of knowledge gathered from past problem-solving activities—and expertise, which in turn leads to the need for devices such as adaptive rules or routines—usually viewed as the quintessential mode of bureaucratic behavior—in order to deal with common problems and situations.
In a further account, Casanovas et al. (2005, p. 20) analyzed samples of qualitative data on the conditions under which junior judges usually work and detected another potential problem regarding personnel at courts. In particular it was observed that:
One of the perennial problems of the Spanish judicial system is that first instance courts in remote areas are plagued with vacancies of the judicial staff. […] [I]n some Autonomous Communities members of the judicial staff still depend on the Ministry of Justice and they are organized at the national level. After a compulsory period of permanency in a judicial unit, judicial staff tends to move to another area, usually closer to their homeland. This also holds true for civil servants of the Autonomous Communities with competencies over judicial staff. As a result, in remote areas judges may occasionally find at their arrival either a deserted unit (i.e. the officials have moved to another area and the new ones have not yet arrived), a unit filled with substitute and poorly trained personnel, or a unit filled with newly recruited staff who aims to spend a short period of time there before moving to another area.
Furthermore, the assessment of these data also uncovered that “one of the main concerns of judges at the beginning of their service is how to manage judicial staff”, as they are heads of the judicial office and thus sometimes have to “establish some organizational ground rules”. This point should make us aware of the potential for conflict in organizations, which usually refers to the existence of mutually inconsistent preferences within the same organizational unit.
Therefore our focus will purposely emphasize those aspects that are potentially most relevant to our further case-study analysis (such as decisions under uncertainty or the allocation of attention), and it shall thus necessarily de-emphasize other features. This is in no case to be understood as meaning that these other features are unimportant in different analyses of an administrative organization.41
Bearing this in mind, this section deals with a number of features and mechanisms that structure organizations in general and their behavior as they have been studied in the tradition of behavioral organizational decision analysis.