Heuristics in judgment and decision-making




In psychology, heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others.[1][2][3] These rules work well under most circumstances, but they can lead to systematic deviations from logic, probability or rational choice theory. The resulting errors are called "cognitive biases" and many different types have been documented. These have been shown to affect people's choices in situations like valuing a house, deciding the outcome of a legal case, or making an investment decision. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information.


Cognitive scientist Herbert A. Simon originally proposed that human judgments are limited by available information, time constraints, and cognitive limitations, calling this bounded rationality.[4] In the early 1970s, psychologists Amos Tversky and Daniel Kahneman demonstrated three heuristics that underlie a wide range of intuitive judgments. These findings set in motion the heuristics and biases research program,[5] which studies how people make real-world judgments and the conditions under which those judgments are unreliable. This research challenged the idea that human beings are rational actors, but provided a theory of information processing to explain how people make estimates or choices. This research, which first gained worldwide attention in 1974 with the Science paper "Judgment Under Uncertainty: Heuristics and Biases",[6] has guided almost all current theories of decision-making,[7] and although the originally proposed heuristics have been challenged in the further debate, this research program has changed the field by permanently setting the research questions.[8]


This heuristics-and-biases tradition has been criticised by Gerd Gigerenzer and others for being too focused on how heuristics lead to errors.[9] The critics argue that heuristics can be seen as rational in an underlying sense: they are good enough for most purposes without being too demanding on the brain's resources. Another theoretical perspective sees heuristics as fully rational in that they are rapid, can be used without full information, and can be as accurate as more complicated procedures. By understanding the role of heuristics in human psychology, marketers and other persuaders can influence decisions, such as the prices people pay for goods or the quantities they buy.








Types


In their initial research, Tversky and Kahneman proposed three heuristics—availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called "judgment heuristics". Another type, called "evaluation heuristics", are used to judge the desirability of possible choices.[10]



Availability



In psychology, availability is the ease with which a particular idea can be brought to mind. When people estimate how likely or how frequent an event is on the basis of its availability, they are using the availability heuristic.[11] When an infrequent event can be brought easily and vividly to mind, people tend to overestimate its likelihood.[12] For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or terrorism. Dramatic, violent deaths are usually more highly publicised and therefore have a higher availability.[13] On the other hand, common but mundane events are hard to bring to mind, so their likelihoods tend to be underestimated. These include deaths from suicides, strokes, and diabetes. This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence.[14] It may also play a role in the appeal of lotteries: to someone buying a ticket, the well-publicised, jubilant winners are more available than the millions of people who have won nothing.[13]


When people judge whether more English words begin with T or with K, the availability heuristic gives a quick way to answer the question. Words that begin with T come more readily to mind, and so subjects give a correct answer without counting out large numbers of words. However, this heuristic can also produce errors. When people are asked whether there are more English words with K in the first position or with K in the third position, they use the same process. It is easy to think of words that begin with K, such as kangaroo, kitchen, or kept. It is harder to think of words with K as the third letter, such as lake, or acknowledge, although objectively these are three times more common. This leads people to the incorrect conclusion that K is more common at the start of words.[15] In another experiment, subjects heard the names of many celebrities, roughly equal numbers of whom were male and female. The subjects were then asked whether the list of names included more men or more women. When the men in the list were more famous, a great majority of subjects incorrectly thought there were more of them, and vice versa for women. Tversky and Kahneman's interpretation of these results is that judgments of proportion are based on availability, which is higher for the names of better-known people.[11]
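
The letter-position claim can be checked directly against a word list. The sketch below is a minimal illustration; the path /usr/share/dict/words is an assumption (any list of English words will do), and the exact ratio depends on whether one counts distinct words or weights words by their frequency in running text.

```python
# Count English words with "k" as the first letter versus the third letter.
# Assumes a newline-delimited word list at /usr/share/dict/words (an assumed
# path; substitute any list of English words).

def count_k_positions(path="/usr/share/dict/words"):
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if len(word) >= 3 and word.isalpha():
                if word[0] == "k":
                    first += 1
                if word[2] == "k":
                    third += 1
    return first, third

if __name__ == "__main__":
    first, third = count_k_positions()
    print(f"words with K first: {first}, words with K third: {third}")
```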


In one experiment that occurred before the 1976 U.S. Presidential election, some participants were asked to imagine Gerald Ford winning, while others did the same for a Jimmy Carter victory. Each group subsequently viewed their allocated candidate as significantly more likely to win. The researchers found a similar effect when students imagined a good or a bad season for a college football team.[16] The effect of imagination on subjective likelihood has been replicated by several other researchers.[14]


A concept's availability can be affected by how recently and how frequently it has been brought to mind. In one study, subjects were given partial sentences to complete. The words were selected to activate the concept either of hostility or of kindness: a process known as priming. They then had to interpret the behavior of a man described in a short, ambiguous story. Their interpretation was biased towards the emotion they had been primed with: the more priming, the greater the effect. A greater interval between the initial task and the judgment decreased the effect.[17]


Tversky and Kahneman offered the availability heuristic as an explanation for illusory correlations in which people wrongly judge two events to be associated with each other. They explained that people judge correlation on the basis of the ease of imagining or recalling the two events together.[11][15]



Representativeness



The representativeness heuristic is seen when people use categories, for example when deciding whether or not a person is a criminal. An individual thing has a high representativeness for a category if it is very similar to a prototype of that category. When people categorise things on the basis of representativeness, they are using the representativeness heuristic. "Representative" is here meant in two different senses: the prototype used for comparison is representative of its category, and representativeness is also a relation between that prototype and the thing being categorised.[15][18] While it is effective for some problems, this heuristic involves attending to the particular characteristics of the individual, ignoring how common those categories are in the population (called the base rates). Thus, people can overestimate the likelihood that something has a very rare property, or underestimate the likelihood of a very common property. This is called the base rate fallacy. Representativeness explains this and several other ways in which human judgments break the laws of probability.[15]


The representativeness heuristic is also an explanation of how people judge cause and effect: when they make these judgements on the basis of similarity, they are also said to be using the representativeness heuristic. This can lead to a bias, incorrectly finding causal relationships between things that resemble one another and missing them when the cause and effect are very different. Examples of this include both the belief that "emotionally relevant events ought to have emotionally relevant causes", and magical associative thinking.[19]



Ignorance of base rates



A 1973 experiment used a psychological profile of Tom W., a fictional graduate student.[20] One group of subjects had to rate Tom's similarity to a typical student in each of nine academic areas (including Law, Engineering and Library Science). Another group had to rate how likely it is that Tom specialised in each area. If these ratings of likelihood are governed by probability, then they should resemble the base rates, i.e. the proportion of students in each of the nine areas (which had been separately estimated by a third group). If people based their judgments on probability, they would say that Tom is more likely to study Humanities than Library Science, because there are many more Humanities students, and the additional information in the profile is vague and unreliable. Instead, the ratings of likelihood matched the ratings of similarity almost perfectly, both in this study and a similar one where subjects judged the likelihood of a fictional woman taking different careers. This suggests that rather than estimating probability using base rates, subjects had substituted the more accessible attribute of similarity.[20]
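
The normative standard against which such judgments are compared is Bayes' rule, which weights the evidence by the base rates. The sketch below uses purely hypothetical numbers, not figures from the study, to show how a low base rate can outweigh a profile that fits a category well.

```python
# Hypothetical illustration of Bayes' rule for a Tom W.-style judgment.
# The base rates and likelihoods are invented for illustration only.

base_rate = {"library science": 0.03, "humanities": 0.20}
# How well the profile fits each field (its "representativeness").
likelihood = {"library science": 0.5, "humanities": 0.1}

unnormalised = {field: base_rate[field] * likelihood[field] for field in base_rate}
total = sum(unnormalised.values())
posterior = {field: value / total for field, value in unnormalised.items()}

print(posterior)
# Although the profile fits library science five times better, the much larger
# base rate makes humanities the more probable field (0.020 vs 0.015 before
# normalisation).
```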



Conjunction fallacy



When people rely on representativeness, they can fall into an error which breaks a fundamental law of probability.[18] Tversky and Kahneman gave subjects a short character sketch of a woman called Linda, describing her as, "31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations". People reading this description then ranked the likelihood of different statements about Linda. Amongst others, these included "Linda is a bank teller", and, "Linda is a bank teller and is active in the feminist movement". People showed a strong tendency to rate the latter, more specific statement as more likely, even though a conjunction of the form "Linda is both X and Y" can never be more probable than the more general statement "Linda is X". The explanation in terms of heuristics is that the judgment was distorted because, for the readers, the character sketch was representative of the sort of person who might be an active feminist but not of someone who works in a bank. A similar exercise concerned Bill, described as "intelligent but unimaginative". A great majority of people reading this character sketch rated "Bill is an accountant who plays jazz for a hobby", as more likely than "Bill plays jazz for a hobby".[21]
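
The law of probability that these judgments violate can be stated in one line: because a conditional probability cannot exceed 1, a conjunction can never be more probable than either of its conjuncts.

```latex
P(X \wedge Y) = P(X)\,P(Y \mid X) \le P(X)
```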


Without success, Tversky and Kahneman used what they described as "a series of increasingly desperate manipulations" to get their subjects to recognise the logical error. In one variation, subjects had to choose between a logical explanation of why "Linda is a bank teller" is more likely, and a deliberately illogical argument which said that "Linda is a feminist bank teller" is more likely "because she resembles an active feminist more than she resembles a bank teller". Sixty-five percent of subjects found the illogical argument more convincing.[21][22]
Other researchers also carried out variations of this study, exploring the possibility that people had misunderstood the question. They did not eliminate the error.[23][24] Individuals with high scores on the Cognitive Reflection Test (CRT) have been shown to be significantly less likely to commit the conjunction fallacy.[25] The error disappears when the question is posed in terms of frequencies. Everyone in these versions of the study recognised that out of 100 people fitting an outline description, the conjunction statement ("She is X and Y") cannot apply to more people than the general statement ("She is X").[26]



Ignorance of sample size



Tversky and Kahneman asked subjects to consider a problem about random variation. Imagining for simplicity that exactly half of the babies born in a hospital are male, the ratio will not be exactly half in every time period. On some days, more girls will be born and on others, more boys. The question was, does the likelihood of deviating from exactly half depend on whether there are many or few births per day? It is a well-established consequence of sampling theory that proportions will vary much more day-to-day when the typical number of births per day is small. However, people's answers to the problem do not reflect this fact. They typically reply that the number of births in the hospital makes no difference to the likelihood of more than 60% male babies in one day. The explanation in terms of the heuristic is that people consider only how representative the figure of 60% is of the previously given average of 50%.[15][27]
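
The sampling fact behind the hospital problem is easy to verify by simulation. The sketch below is a minimal illustration; the daily totals of 15 and 45 births are illustrative values rather than a quotation of the original question.

```python
import random

# Estimate how often a day has more than 60% male births in a small and a
# large hospital, assuming each birth is male with probability 0.5.

def fraction_of_extreme_days(births_per_day, trials=100_000, threshold=0.6):
    extreme = 0
    for _ in range(trials):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > threshold:
            extreme += 1
    return extreme / trials

print("small hospital (15/day):", fraction_of_extreme_days(15))  # roughly 0.15
print("large hospital (45/day):", fraction_of_extreme_days(45))  # roughly 0.07
```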



Dilution effect


Richard E. Nisbett and colleagues suggest that representativeness explains the dilution effect, in which irrelevant information weakens the effect of a stereotype. Subjects in one study were asked whether "Paul" or "Susan" was more likely to be assertive, given no other information than their first names. They rated Paul as more assertive, apparently basing their judgment on a gender stereotype. Another group, told that Paul's and Susan's mothers each commute to work in a bank, did not show this stereotype effect; they rated Paul and Susan as equally assertive. The explanation is that the additional information about Paul and Susan made them less representative of men or women in general, and so the subjects' expectations about men and women had a weaker effect.[28] In other words, adding irrelevant, non-diagnostic information about an issue can dilute the influence of whatever diagnostic information people do have.[29]



Misperception of randomness


Representativeness explains systematic errors that people make when judging the probability of random events. For example, in a sequence of coin tosses, each of which comes up heads (H) or tails (T), people reliably tend to judge a clearly patterned sequence such as HHHTTT as less likely than a less patterned sequence such as HTHTTH. These sequences have exactly the same probability, but people tend to see the more clearly patterned sequences as less representative of randomness, and so less likely to result from a random process.[15][30] Tversky and Kahneman argued that this effect underlies the gambler's fallacy: the tendency to expect outcomes to even out over the short run, like expecting a roulette wheel to come up black because the last several throws came up red.[18][31] They emphasised that even experts in statistics were susceptible to this illusion: in a 1971 survey of professional psychologists, they found that respondents expected samples to be overly representative of the population they were drawn from. As a result, the psychologists systematically overestimated the statistical power of their tests, and underestimated the sample size needed for a meaningful test of their hypotheses.[15][31]
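
For a fair coin, any specific sequence of six tosses has the same probability, however patterned it looks:

```latex
P(\mathrm{HHHTTT}) = P(\mathrm{HTHTTH}) = \left(\tfrac{1}{2}\right)^{6} = \tfrac{1}{64}
```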



Anchoring and adjustment



Anchoring and adjustment is a heuristic used in many situations where people estimate a number.[32] According to Tversky and Kahneman's original description, it involves starting from a readily available number—the "anchor"—and shifting either up or down to reach an answer that seems plausible.[32] In Tversky and Kahneman's experiments, people did not shift far enough away from the anchor. Hence the anchor contaminates the estimate, even if it is clearly irrelevant. In one experiment, subjects watched a number being selected from a spinning "wheel of fortune". They had to say whether a given quantity was larger or smaller than that number. For instance, they might be asked, "Is the percentage of African countries which are members of the United Nations larger or smaller than 65%?" They then tried to guess the true percentage. Their answers correlated well with the arbitrary number they had been given.[32][33] Insufficient adjustment from an anchor is not the only explanation for this effect. An alternative theory is that people form their estimates on evidence which is selectively brought to mind by the anchor.[34]




The amount of money people will pay in an auction for a bottle of wine can be influenced by considering an arbitrary two-digit number.


The anchoring effect has been demonstrated by a wide variety of experiments both in laboratories and in the real world.[33][35] It remains when the subjects are offered money as an incentive to be accurate, or when they are explicitly told not to base their judgment on the anchor.[35] The effect is stronger when people have to make their judgments quickly.[36] Subjects in these experiments lack introspective awareness of the heuristic, denying that the anchor affected their estimates.[36]


Even when the anchor value is obviously random or extreme, it can still contaminate estimates.[35] One experiment asked subjects to estimate the year of Albert Einstein's first visit to the United States. Anchors of 1215 and 1992 contaminated the answers just as much as more sensible anchor years.[36] Other experiments asked subjects if the average temperature in San Francisco is more or less than 558 degrees, or whether there had been more or fewer than 100,025 top ten albums by The Beatles. These deliberately absurd anchors still affected estimates of the true numbers.[33]


Anchoring results in a particularly strong bias when estimates are stated in the form of a confidence interval. An example is where people predict the value of a stock market index on a particular day by defining an upper and lower bound so that they are 98% confident the true value will fall in that range. A reliable finding is that people anchor their upper and lower bounds too close to their best estimate.[15] This leads to an overconfidence effect. One much-replicated finding is that when people are 98% certain that a number is in a particular range, they are wrong about thirty to forty percent of the time.[15][37]


Anchoring also causes particular difficulty when many numbers are combined into a composite judgment. Tversky and Kahneman demonstrated this by asking a group of people to rapidly estimate the product 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1. Another group had to estimate the same product in reverse order; 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8. Both groups underestimated the answer by a wide margin, but the latter group's average estimate was significantly smaller.[38] The explanation in terms of anchoring is that people multiply the first few terms of each product and anchor on that figure.[38] A less abstract task is to estimate the probability that an aircraft will crash, given that there are numerous possible faults each with a likelihood of one in a million. A common finding from studies of these tasks is that people anchor on the small component probabilities and so underestimate the total.[38] A corresponding effect happens when people estimate the probability of multiple events happening in sequence, such as an accumulator bet in horse racing. For this kind of judgment, anchoring on the individual probabilities results in an overestimation of the combined probability.[38]
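
One way to see how the order of the factors changes the anchor is to compare the partial products that a few seconds of mental multiplication would produce. A minimal sketch (the true value, 8! = 40,320, is the same in either order):

```python
from math import prod

# The full product is identical in both orders, but the partial products that
# a quick mental calculation anchors on differ enormously.

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = list(reversed(descending))

print("true value:", prod(descending))                            # 40320
print("first three factors, descending:", prod(descending[:3]))   # 336
print("first three factors, ascending:", prod(ascending[:3]))     # 6
```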



Examples


People's valuation of goods, and the quantities they buy, respond to anchoring effects. In one experiment, people wrote down the last two digits of their social security numbers. They were then asked to consider whether they would pay this number of dollars for items whose value they did not know, such as wine, chocolate, and computer equipment. They then entered an auction to bid for these items. Those with the highest two-digit numbers submitted bids that were many times higher than those with the lowest numbers.[39][40] When a stack of soup cans in a supermarket was labelled, "Limit 12 per customer", the label influenced customers to buy more cans.[36] In another experiment, real estate agents appraised the value of houses on the basis of a tour and extensive documentation. Different agents were shown different listing prices, and these affected their valuations. For one house, the appraised value ranged from US$114,204 to $128,754.[41][42]


Anchoring and adjustment has also been shown to affect grades given to students. In one experiment, 48 teachers were given bundles of student essays, each of which had to be graded and returned. They were also given a fictional list of the students' previous grades. The mean of these grades affected the grades that teachers awarded for the essay.[43]


One study showed that anchoring affected the sentences in a fictional rape trial.[44] The subjects were trial judges with, on average, more than fifteen years of experience. They read documents including witness testimony, expert statements, the relevant penal code, and the final pleas from the prosecution and defence. The two conditions of this experiment differed in just one respect: the prosecutor demanded a 34-month sentence in one condition and 12 months in the other; there was an eight-month difference between the average sentences handed out in these two conditions.[44] In a similar mock trial, the subjects took the role of jurors in a civil case. They were either asked to award damages "in the range from $15 million to $50 million" or "in the range from $50 million to $150 million". Although the facts of the case were the same each time, jurors given the higher range decided on an award that was about three times higher. This happened even though the subjects were explicitly warned not to treat the requests as evidence.[39]


Assessments can also be influenced by the stimuli provided. One review reported that when a stimulus is perceived as important or as carrying "weight" for a situation, people are more likely to judge that stimulus as physically heavier.[45]



Affect heuristic



"Affect", in this context, is a feeling such as fear, pleasure or surprise. It is shorter in duration than a mood, occurring rapidly and involuntarily in response to a stimulus. While reading the words "lung cancer" might generate an affect of dread, the words "mother's love" can create an affect of affection and comfort. When people use affect ("gut responses") to judge benefits or risks, they are using the affect heuristic.[46] The affect heuristic has been used to explain why messages framed to activate emotions are more persuasive than those framed in a purely factual way.[47]



Others




  • Control heuristic

  • Contagion heuristic

  • Effort heuristic

  • Familiarity heuristic

  • Fluency heuristic

  • Gaze heuristic

  • Hot-hand fallacy

  • Naive diversification

  • Peak-end rule

  • Recognition heuristic

  • Scarcity heuristic

  • Similarity heuristic

  • Simulation heuristic

  • Social proof



Theories


There are competing theories of human judgment, which differ on whether the use of heuristics is irrational. A cognitive laziness approach argues that heuristics are inevitable shortcuts given the limitations of the human brain. According to the natural assessments approach, some complex calculations are already done rapidly and automatically by the brain, and other judgments make use of these processes rather than calculating from scratch. This has led to a theory called "attribute substitution", which says that people often handle a complicated question by answering a different, related question, without being aware that this is what they are doing.[48] A third approach argues that heuristics perform just as well as more complicated decision-making procedures, but more quickly and with less information. This perspective emphasises the "fast and frugal" nature of heuristics.[49]



Cognitive laziness



An effort-reduction framework proposed by Anuj K. Shah and Daniel M. Oppenheimer states that people use a variety of techniques to reduce the effort of making decisions.[50]



Attribute substitution





A visual example of attribute substitution. This illusion works because the 2D size of parts of the scene is judged on the basis of 3D (perspective) size, which is rapidly calculated by the visual system.


In 2002 Daniel Kahneman and Shane Frederick proposed a process called attribute substitution which happens without conscious awareness. According to this theory, when somebody makes a judgment (of a target attribute) which is computationally complex, a rather more easily calculated heuristic attribute is substituted.[51] In effect, a difficult problem is dealt with by answering a rather simpler problem, without the person being aware this is happening.[48] This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.[48][51][52]


This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system.[53] Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place.[48][51]


In 1975, psychologist Stanley Smith Stevens proposed that the strength of a stimulus (e.g. the brightness of a light, the severity of a crime) is encoded by brain cells in a way that is independent of modality. Kahneman and Frederick built on this idea, arguing that the target attribute and heuristic attribute could be very different in nature.[48]



[P]eople are not accustomed to thinking hard, and are often content to trust a plausible judgment that comes to mind.

Daniel Kahneman, American Economic Review 93 (5) December 2003, p. 1450[52]



Kahneman and Frederick propose three conditions for attribute substitution:[48]


  1. The target attribute is relatively inaccessible.
    Substitution is not expected to take place in answering factual questions that can be retrieved directly from memory ("What is your birthday?") or about current experience ("Do you feel thirsty now?").

  2. An associated attribute is highly accessible.
    This might be because it is evaluated automatically in normal perception or because it has been primed. For example, someone who has been thinking about their love life and is then asked how happy they are might substitute how happy they are with their love life rather than other areas.

  3. The substitution is not detected and corrected by the reflective system.
    For example, when asked "A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?" many subjects incorrectly answer $0.10 (the correct answer is worked out below).[52] An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do. Whether they feel that is the right answer will depend on whether they check the calculation with their reflective system.
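
Writing b for the price of the ball in dollars, the bat-and-ball sum works out as follows:

```latex
b + (b + 1.00) = 1.10 \;\Longrightarrow\; 2b = 0.10 \;\Longrightarrow\; b = 0.05
```

So the ball costs $0.05 and the bat $1.05; the intuitive answer of $0.10 would make the total $1.20.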

Kahneman gives an example in which some Americans were offered insurance against their own death in a terrorist attack while on a trip to Europe, while another group were offered insurance that would cover death of any kind on the trip. Even though "death of any kind" includes "death in a terrorist attack", the former group were willing to pay more than the latter. Kahneman suggests that the attribute of fear is being substituted for a calculation of the total risks of travel.[54] Fear of terrorism for these subjects was stronger than a general fear of dying on a foreign trip. See Morewedge and Kahneman (2010) for a summary of attribute substitution.[53]



Fast and frugal


Gerd Gigerenzer and colleagues have argued that heuristics can be used to make judgments that are accurate rather than biased. According to them, heuristics are "fast and frugal" alternatives to more complicated procedures, giving answers that are just as good.[55]



Consequences



Efficient decision heuristics


Warren Thorngate, an emeritus social psychologist, implemented ten simple decision rules, or heuristics, as subroutines in a computer simulation program. He determined how often each heuristic selected alternatives with highest-through-lowest expected value in a series of randomly generated decision situations. He found that most of the simulated heuristics usually selected alternatives with the highest expected value and almost never selected alternatives with the lowest expected value. More detail about the simulation can be found in his article "Efficient decision heuristics" (1980).[56]
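
A simulation in the same spirit can be written in a few lines. The sketch below is not a reproduction of Thorngate's ten rules; it implements two illustrative heuristics, "equiprobable" (average the payoffs, ignoring probabilities) and "minimax" (avoid the worst outcome), and measures how often each agrees with the expected-value choice in randomly generated decision situations.

```python
import random

# A decision situation: several alternatives, each with a payoff for every
# outcome, plus a randomly generated probability for each outcome.

def random_situation(n_alternatives=3, n_outcomes=4):
    probs = [random.random() for _ in range(n_outcomes)]
    total = sum(probs)
    probs = [p / total for p in probs]
    payoffs = [[random.random() for _ in range(n_outcomes)]
               for _ in range(n_alternatives)]
    return probs, payoffs

def expected_value_choice(probs, payoffs):
    return max(range(len(payoffs)),
               key=lambda a: sum(p * x for p, x in zip(probs, payoffs[a])))

def equiprobable_choice(probs, payoffs):
    # Ignore the probabilities entirely; pick the highest total payoff.
    return max(range(len(payoffs)), key=lambda a: sum(payoffs[a]))

def minimax_choice(probs, payoffs):
    # Ignore the probabilities; pick the alternative with the best worst case.
    return max(range(len(payoffs)), key=lambda a: min(payoffs[a]))

def agreement(heuristic, trials=10_000):
    hits = 0
    for _ in range(trials):
        probs, payoffs = random_situation()
        if heuristic(probs, payoffs) == expected_value_choice(probs, payoffs):
            hits += 1
    return hits / trials

print("equiprobable agrees with expected value:", agreement(equiprobable_choice))
print("minimax agrees with expected value:", agreement(minimax_choice))
```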



"Beautiful-is-familiar" effect


Psychologist Benoît Monin reports a series of experiments in which subjects, looking at photographs of faces, have to judge whether they have seen those faces before. It is repeatedly found that attractive faces are more likely to be mistakenly labeled as familiar.[57] Monin interprets this result in terms of attribute substitution. The heuristic attribute in this case is a "warm glow", a positive feeling towards someone that might be due either to their being familiar or to their being attractive. This interpretation has been criticised, because not all the variance in familiarity is accounted for by the attractiveness of the photograph.[50]



Judgments of morality and fairness


Legal scholar Cass Sunstein has argued that attribute substitution is pervasive when people reason about moral, political or legal matters.[58] Given a difficult, novel problem in these areas, people search for a more familiar, related problem (a "prototypical case") and apply its solution as the solution to the harder problem. According to Sunstein, the opinions of trusted political or religious authorities can serve as heuristic attributes when people are asked their own opinions on a matter. Another source of heuristic attributes is emotion: people's moral opinions on sensitive subjects like sexuality and human cloning may be driven by reactions such as disgust, rather than by reasoned principles.[59] Sunstein has been challenged as not providing enough evidence that attribute substitution, rather than other processes, is at work in these cases.[50]



Persuasion



See also




  • Behavioral economics

  • Bounded rationality

  • Debiasing

  • Ecological rationality

  • Great Rationality Debate

  • List of cognitive biases

  • List of memory biases

  • Low information voter



Citations




  1. ^ Lewis, Alan (17 April 2008). The Cambridge Handbook of Psychology and Economic Behaviour. Cambridge University Press. p. 43. ISBN 978-0-521-85665-2. Retrieved 7 February 2013.


  2. ^ Harris, Lori A. (21 May 2007). CliffsAP Psychology. John Wiley & Sons. p. 65. ISBN 978-0-470-19718-9. Retrieved 7 February 2013.


  3. ^ Nevid, Jeffrey S. (1 October 2008). Psychology: Concepts and Applications. Cengage Learning. p. 251. ISBN 978-0-547-14814-4. Retrieved 7 February 2013.


  4. ^ Bazerman, M. H. (2017). "Judgment and decision making". In R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign, IL: DEF publishers.


  5. ^ Kahneman, Daniel; Klein, Gary (2009). "Conditions for intuitive expertise: A failure to disagree". American Psychologist. 64 (6): 515–526. doi:10.1037/a0016755. PMID 19739881.


  6. ^ Kahneman, Daniel (2011). "Introduction". Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-1-4299-6935-2.


  7. ^ Plous 1999, p. 109


  8. ^ Fiedler, Klaus; von Sydow, Momme (2015). "Heuristics and Biases: Beyond Tversky and Kahneman's (1974) Judgment under Uncertainty" (PDF). In Eysenck, Michael W.; Groome, David. Cognitive Psychology: Revising the Classical Studies. Sage, London. pp. 146–161. ISBN 978-1-4462-9447-5.


  9. ^ Gigerenzer, G. (1996). "On narrow norms and vague heuristics: A reply to Kahneman and Tversky". Psychological Review. 103 (3): 592–596. doi:10.1037/0033-295X.103.3.592.


  10. ^ Hastie & Dawes 2009, pp. 210–211


  11. ^ abc Tversky, Amos; Kahneman, Daniel (1973), "Availability: A Heuristic for Judging Frequency and Probability", Cognitive Psychology, 5: 207–232, doi:10.1016/0010-0285(73)90033-9, ISSN 0010-0285


  12. ^ Morewedge, Carey K.; Todorov, Alexander (24 January 2012). "The Least Likely Act: Overweighting Atypical Past Behavior in Behavioral Predictions". Social Psychological and Personality Science. 3 (6): 760–766. doi:10.1177/1948550611434784.


  13. ^ ab Sutherland 2007, pp. 16–17


  14. ^ ab Plous 1993, pp. 123–124


  15. ^ abcdefghi Tversky & Kahneman 1974


  16. ^ Carroll, J. (1978). "The Effect of Imagining an Event on Expectations for the Event: An Interpretation in Terms of the Availability Heuristic". Journal of Experimental Social Psychology. 14 (1): 88–96. doi:10.1016/0022-1031(78)90062-8. ISSN 0022-1031.


  17. ^ Srull, Thomas K.; Wyer, Robert S. (1979). "The Role of Category Accessibility in the Interpretation of Information About Persons: Some Determinants and Implications". Journal of Personality and Social Psychology. 37 (10): 1660–1672. doi:10.1037/0022-3514.37.10.1660. ISSN 0022-3514.


  18. ^ abc Plous 1993, pp. 109–120


  19. ^ Nisbett, Richard E.; Ross, Lee (1980). Human inference: strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall. pp. 115–118. ISBN 9780134450735.


  20. ^ ab Kahneman, Daniel; Amos Tversky (July 1973). "On the Psychology of Prediction". Psychological Review. American Psychological Association. 80 (4): 237–251. doi:10.1037/h0034747. ISSN 0033-295X.


  21. ^ ab Tversky, Amos; Kahneman, Daniel (1983). "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment". Psychological Review. 90 (4): 293–315. doi:10.1037/0033-295X.90.4.293. reprinted in Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, eds. (2002), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 19–48, ISBN 9780521796798, OCLC 47364085


  22. ^ Poundstone 2010, p. 89


  23. ^ Tentori, K.; Bonini, N.; Osherson, D. (1 May 2004). "The conjunction fallacy: a misunderstanding about conjunction?". Cognitive Science. 28 (3): 467–477. doi:10.1016/j.cogsci.2004.01.001.


  24. ^ Moro, Rodrigo (29 July 2008). "On the nature of the conjunction fallacy". Synthese. 171 (1): 1–24. doi:10.1007/s11229-008-9377-8.


  25. ^ Oechssler, Jörg; Roider, Andreas; Schmitz, Patrick W. (2009). "Cognitive abilities and behavioral biases". Journal of Economic Behavior & Organization. 72 (1): 147–152. doi:10.1016/j.jebo.2009.04.018. ISSN 0167-2681.


  26. ^ Gigerenzer, Gerd (1991). "How to make cognitive illusions disappear: Beyond "heuristics and biases". European Review of Social Psychology. 2: 83–115. doi:10.1080/14792779143000033.


  27. ^ Kunda 1999, pp. 70–71


  28. ^ Kunda 1999, pp. 68–70


  29. ^ Zukier, Henry (1982). "The dilution effect: The role of the correlation and the dispersion of predictor variables in the use of nondiagnostic information". Journal of Personality and Social Psychology. 43 (6): 1163–1174. doi:10.1037/0022-3514.43.6.1163.


  30. ^ Kunda 1999, pp. 71–72


  31. ^ ab Tversky, Amos; Kahneman, Daniel (1971). "Belief in the law of small numbers". Psychological Bulletin. 76 (2): 105–110. doi:10.1037/h0031322. reprinted in Daniel Kahneman; Paul Slovic; Amos Tversky, eds. (1982). Judgment under uncertainty: heuristics and biases. Cambridge: Cambridge University Press. pp. 23–31. ISBN 9780521284141.


  32. ^ abc Baron 2000, p. 235


  33. ^ abc Plous 1993, pp. 145–146


  34. ^ Koehler & Harvey 2004, p. 99


  35. ^ abc Mussweiler, Englich & Strack 2004, pp. 185–186, 197


  36. ^ abcd Yudkowsky 2008, pp. 102–103


  37. ^ Lichtenstein, Sarah; Fischoff, Baruch; Phillips, Lawrence D. (1982), "Calibration of probabilities: The state of the art to 1980", in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and biases, Cambridge University Press, pp. 306–334, ISBN 9780521284141


  38. ^ abcd Sutherland 2007, pp. 168–170


  39. ^ ab Hastie & Dawes 2009, pp. 78–80


  40. ^ George Loewenstein (2007), Exotic Preferences: Behavioral Economics and Human Motivation, Oxford University Press, pp. 284–285, ISBN 9780199257072


  41. ^ Mussweiler, Englich & Strack 2004, p. 188


  42. ^ Plous 1993, pp. 148–149


  43. ^ Caverni, Jean-Paul; Péris, Jean-Luc (1990), "The Anchoring-Adjustment Heuristic in an 'Information-Rich, Real World Setting': Knowledge Assessment by Experts", in Caverni, Jean-Paul; Fabré, Jean-Marc; González, Michel, Cognitive biases, Elsevier, pp. 35–45, ISBN 9780444884138


  44. ^ ab Mussweiler, Englich & Strack 2004, p. 183


  45. ^ Rabelo, A. L., Keller, V. N., Pilati, R., & Wicherts, J. M. (2015). No effect of weight on judgments of importance in the moral domain and evidence of publication bias from a meta-analysis. PLOS One, 10(8), e0134808.


  46. ^ Finucane, M.L.; Alhakami, A.; Slovic, P.; Johnson, S.M. (January 2000). "The Affect Heuristic in Judgment of Risks and Benefits". Journal of Behavioral Decision Making. 13 (1): 1–17. doi:10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S.


  47. ^ Keller, Carmen; Siegrist, Michael; Gutscher, Heinz (June 2006). "The Role of Affect and Availability Heuristics in Risk Analysis". Risk Analysis. 26 (3): 631–639. doi:10.1111/j.1539-6924.2006.00773.x. PMID 16834623.


  48. ^ abcdef Kahneman, Daniel; Frederick, Shane (2002), "Representativeness Revisited: Attribute Substitution in Intuitive Judgment", in Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 49–81, ISBN 9780521796798, OCLC 47364085


  49. ^ Hardman 2009, pp. 13–16


  50. ^ abc Shah, Anuj K.; Daniel M. Oppenheimer (March 2008). "Heuristics Made Easy: An Effort-Reduction Framework". Psychological Bulletin. American Psychological Association. 134 (2): 207–222. doi:10.1037/0033-2909.134.2.207. ISSN 1939-1455. PMID 18298269.


  51. ^ abc Newell, Benjamin R.; David A. Lagnado; David R. Shanks (2007). Straight choices: the psychology of decision making. Routledge. pp. 71–74. ISBN 9781841695884.


  52. ^ abc Kahneman, Daniel (December 2003). "Maps of Bounded Rationality: Psychology for Behavioral Economics" (PDF). American Economic Review. American Economic Association. 93 (5): 1449–1475. doi:10.1257/000282803322655392. ISSN 0002-8282.


  53. ^ ab Morewedge, Carey K.; Kahneman, Daniel (October 2010). "Associative processes in intuitive judgment". Trends in Cognitive Sciences. 14 (10): 435–440. doi:10.1016/j.tics.2010.07.004. PMC 5378157. PMID 20696611.


  54. ^ Kahneman, Daniel (2007). "Short Course in Thinking About Thinking". Edge.org. Edge Foundation. Retrieved 2009-06-03.


  55. ^ Gigerenzer, Gerd; Todd, Peter M.; the ABC Research Group (1999). Simple Heuristics That Make Us Smart. Oxford, UK: Oxford University Press. ISBN 0-19-514381-7.



  56. ^ Thorngate, Warren (1980). "Efficient decision heuristics". Behavioral Science. 25 (3): 219–225. doi:10.1002/bs.3830250306.


  57. ^ Monin, Benoît; Daniel M. Oppenheimer (2005), "Correlated Averages vs. Averaged Correlations: Demonstrating the Warm Glow Heuristic Beyond Aggregation" (PDF), Social Cognition, 23 (3): 257–278, doi:10.1521/soco.2005.23.3.257, ISSN 0278-016X


  58. ^ Sunstein, Cass R. (2005). "Moral heuristics". Behavioral and Brain Sciences. Cambridge University Press. 28 (4): 531–542. doi:10.1017/S0140525X05000099. ISSN 0140-525X. PMID 16209802.


  59. ^ Sunstein, Cass R. (2009). "Some Effects of Moral Indignation on Law" (PDF). Vermont Law Review. Vermont Law School. 33 (3): 405–434. SSRN 1401432. Archived from the original (PDF) on November 29, 2014. Retrieved 2009-09-15.




References



  • Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0521650305, OCLC 316403966


  • Fiedler, Klaus; von Sydow, Momme (2015), "Heuristics and Biases: Beyond Tversky and Kahneman's (1974) Judgment under Uncertainty" (PDF), in Eysenck, Michael W.; Groome, David, Cognitive Psychology: Revising the Classical Studies, Sage, London, pp. 146–161, ISBN 9781446294475


  • Gigerenzer, G. (1996), "On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Heuristic", Psychological Review, 103 (3): 592–596, doi:10.1037/0033-295X.103.3.592


  • Gilovich, Thomas; Griffin, Dale W. (2002), "Introduction – Heuristics and Biases: Then and Now", in Gilovich, Thomas; Griffin, Dale W.; Kahneman, Daniel, Heuristics and biases: the psychology of intuitive judgement, Cambridge University Press, pp. 1–18, ISBN 9780521796798


  • Hardman, David (2009), Judgment and decision making: psychological perspectives, Wiley-Blackwell, ISBN 9781405123983


  • Hastie, Reid; Dawes, Robyn M. (29 September 2009), Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making, SAGE, ISBN 9781412959032


  • Koehler, Derek J.; Harvey, Nigel (2004), Blackwell handbook of judgment and decision making, Wiley-Blackwell, ISBN 9781405107464


  • Kunda, Ziva (1999), Social Cognition: Making Sense of People, MIT Press, ISBN 978-0-262-61143-5, OCLC 40618974


  • Mussweiler, Thomas; Englich, Birte; Strack, Fritz (2004), "Anchoring effect", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 183–200, ISBN 9781841693514, OCLC 55124398


  • Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 9780070504776, OCLC 26931106


  • Poundstone, William (2010), Priceless: the myth of fair value (and how to take advantage of it), Hill and Wang, ISBN 9780809094691


  • Reber, Rolf (2004), "Availability", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 147–163, ISBN 9781841693514, OCLC 55124398


  • Sutherland, Stuart (2007), Irrationality (2nd ed.), London: Pinter and Martin, ISBN 9781905177073, OCLC 72151566


  • Teigen, Karl Halvor (2004), "Judgements by representativeness", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 165–182, ISBN 9781841693514, OCLC 55124398


  • Tversky, Amos; Kahneman, Daniel (1974), "Judgments Under Uncertainty: Heuristics and Biases" (PDF), Science, 185 (4157): 1124–1131, doi:10.1126/science.185.4157.1124, PMID 17835457 reprinted in Daniel Kahneman; Paul Slovic; Amos Tversky, eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. pp. 3–20. ISBN 9780521284141.


  • Yudkowsky, Eliezer (2008), "Cognitive biases potentially affecting judgment of global risks", in Bostrom, Nick; Ćirković, Milan M., Global catastrophic risks, Oxford University Press, pp. 91–129, ISBN 9780198570509


Further reading



  • Slovic, Paul; Melissa Finucane; Ellen Peters; Donald G. MacGregor (2002). "The Affect Heuristic". In Thomas Gilovich; Dale Griffin; Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press. pp. 397–420. ISBN 9780521796798.


External links


  • Test Yourself: Decision Making and the Availability Heuristic






