Judgment under Uncertainty: Heuristics and Biases
AMOS TVERSKY; DANIEL KAHNEMAN
September 27, 1974
This paper is where the discussion of behavioural finance started, and hence it deserves a special place.
When faced with uncertainty, we assign subjective probabilities to the likelihood of outcomes, relying on heuristics (short cuts) to do so. While these heuristics are often useful, they can lead to severe and systematic errors. For instance, our estimate of an object's distance is partly determined by how clearly the object can be seen: we tend to underestimate the distance of a clearly visible object and to overestimate the distance of one seen in poor visibility.
The paper describes three heuristics that people apply when assessing subjective probabilities, and lists the biases that result from each.
The representativeness heuristic assigns probability by the degree to which A resembles B. As an illustration, consider this description of a neighbour: Steve is shy and withdrawn, very helpful but with little interest in what others are doing. How do people assess the probability that Steve is a farmer, salesman, airline pilot, librarian or physician? Under the representativeness heuristic, Steve's characteristics closely match the stereotype of a librarian, so a high probability is assigned to Steve being a librarian. This approach to judgement can lead to serious errors.
Insensitivity to prior probabilities of outcomes
Base rate frequency has no impact on representativeness, but it does influence the actual probabilities. In Steve's case, the fact that there are many more farmers than librarians in the population should enter into any reasonable estimate of the probability that Steve is a farmer rather than a librarian; yet base rate frequency does not affect Steve's similarity to the stereotypes of farmers and librarians. If, however, people were asked to assign probabilities to Steve's likely occupation without any description of his characteristics, they would rate farmer more probable than librarian, because base rate frequency is then the only information available.
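A short numerical sketch of why the base rate matters (all numbers here are hypothetical, chosen only for illustration): even if Steve's description fits librarians far better than farmers, a large enough base rate can still make farmer the more probable occupation.

```python
# Hypothetical numbers: 20 farmers per librarian in the population;
# Steve's description fits 40% of librarians but only 5% of farmers.
base_farmer, base_librarian = 20 / 21, 1 / 21
fit_given_farmer, fit_given_librarian = 0.05, 0.40

# Bayes' rule: weight each hypothesis by its base rate.
joint_farmer = base_farmer * fit_given_farmer
joint_librarian = base_librarian * fit_given_librarian
p_farmer = joint_farmer / (joint_farmer + joint_librarian)

print(round(p_farmer, 2))  # ~0.71: farmer is still the better bet
```

Judging by representativeness alone amounts to comparing only the two "fit" numbers and ignoring the base rates entirely.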
Insensitivity to sample size
Assume that the average height of males in a population is 6 feet. If 10 males were randomly picked from this population, people readily believe that this sample will also have an average height of about 6 feet. In fact, subjects assigned the same probability to obtaining an average height greater than 6 feet for samples of 10,000, 100 and 10 men. Sampling theory, however, says that the average height of a sample of 10 will vary far more than that of a sample of 10,000: the smaller the sample, the more likely its average is to stray from the population average.
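This can be checked with a quick simulation. The 3-inch standard deviation of individual heights is an assumption made for illustration, and sample sizes of 10, 100 and 1,000 stand in for those in the study:

```python
import random

random.seed(0)

def frac_far_from_mean(n, trials=500, mu=72.0, sigma=3.0):
    """Fraction of size-n samples whose average height (in inches)
    lands more than one inch above the population mean mu."""
    hits = 0
    for _ in range(trials):
        avg = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        if avg > mu + 1.0:
            hits += 1
    return hits / trials

# Small samples stray from the population average far more often.
for n in (10, 100, 1000):
    print(n, frac_far_from_mean(n))
```

The fraction of extreme averages drops sharply as the sample grows, because the standard deviation of a sample mean shrinks in proportion to the square root of the sample size.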
Misconceptions of chance
In considering tosses of a fair coin, people regard the sequence H-T-H-T-T-H as more likely than H-H-H-T-T-T, which does not appear random, and also as more likely than H-H-H-H-T-H, which does not represent the fairness of the coin. In fact, all three specific sequences are equally likely. People expect the essential characteristics of a chance process to be represented not only globally, in the entire sequence, but also locally, in each of its parts. A consequence of this belief in local representativeness is the gambler's fallacy: after observing a long run of red on the roulette wheel, most people believe that black is now due, as though chance were a self-correcting process that restores equilibrium.
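A minimal check of both claims: every specific sequence of six fair tosses has the same probability, and even after a run of five heads the next toss is still 50-50.

```python
from itertools import product

# All 2**6 = 64 specific sequences of six fair tosses share the same
# probability; "HTHTTH" is no more likely than "HHHHTH".
sequences = ["".join(s) for s in product("HT", repeat=6)]
p_each = (1 / 2) ** 6
print(len(sequences), p_each)  # 64 0.015625

# Gambler's fallacy: among sequences that begin with five heads,
# exactly half end in tails -- tails is never "due".
runs = [s for s in sequences if s.startswith("HHHHH")]
frac_tails = sum(s.endswith("T") for s in runs) / len(runs)
print(frac_tails)  # 0.5
```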
Insensitivity to predictability
People often make numerical predictions about the future value of a stock or the outcome of a football game. If one were asked to predict the future profits of a company from a given description, a very high profit would appear most representative of a favourable description, and a mediocre performance most representative of a mediocre description. Here, the probability assigned to the outcome is insensitive to the reliability of the description and to the expected accuracy of the prediction.
The illusion of validity
The unwarranted confidence produced by a good fit between the predicted outcome and the input information may be called the illusion of validity. Thus, people exhibit high confidence in predicting that a person is a librarian when a given description of his personality matches the stereotype of librarians, even if the description is scanty, unreliable or outdated. This illusion arises from observing consistent patterns among highly redundant and correlated input variables. Redundancy decreases accuracy even as it increases confidence in predictions that are quite off the mark.
Misconceptions of regression
People generally do not expect regression in many contexts where it is bound to occur. Also, when they do recognise the occurrence of regression, they often invent spurious causal explanations for it.
In a discussion of flight training, instructors observed that when they praised a candidate for a smooth landing, the next landing was usually not as smooth. Similarly, when they admonished a candidate for a poor landing, the next landing was usually much better. They concluded that verbal rewards and criticisms shaped a candidate's performance. This conclusion was flawed: even without any comments, an exceptionally good performance would tend to be followed by a poorer one, and vice versa, because of regression to the mean.
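A small simulation of the flight-training story (all numbers are hypothetical): each landing is a stable skill level plus independent luck. Praise and criticism play no role at all, yet extreme landings are still followed by more ordinary ones.

```python
import random

random.seed(1)

SKILL = 5.0  # stable skill level (hypothetical units)

def landing():
    """One landing score: fixed skill plus an independent luck term."""
    return SKILL + random.gauss(0, 2.0)

after_best, after_worst = [], []
for _ in range(5000):
    first, second = landing(), landing()
    if first > SKILL + 2:      # smooth landing, would earn praise
        after_best.append(second)
    elif first < SKILL - 2:    # poor landing, would earn criticism
        after_worst.append(second)

# With no feedback whatsoever, the landing after a "praised" one is
# worse on average, and the one after a "criticised" one is better:
# both groups regress toward the mean skill of 5.0.
print(sum(after_best) / len(after_best))
print(sum(after_worst) / len(after_worst))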
The failure to understand the effect of regression leads one to overestimate the effectiveness of punishment and to underestimate the effectiveness of reward.
One may assess the risk of heart attack among middle-aged people by recalling such occurrences among one's acquaintances. Similarly, one may evaluate the probability that a business venture will fail by imagining the difficulties it could encounter. This judgemental heuristic is called availability. Availability is useful for assessing frequency or probability, because instances of large classes are usually recalled faster and better than instances of less frequent classes. However, availability is also influenced by factors other than frequency and probability, which leads to predictable biases.
Biases due to the retrievability of instances
Two groups were given a list of names; the first group's list contained more famous men than women, and the second group's list more famous women than men. When asked whether their list contained more names of men or of women, the first group mostly answered men and the second group, women. In each case, subjects erroneously judged the sex with the more famous personalities to be the more numerous. Besides familiarity, factors such as salience and recency have the same effect: an event experienced first-hand or recently seems more probable than other outcomes.
Biases due to the effectiveness of a search set
If a word of three letters or more were sampled at random from an English text, is it more likely that the word starts with r or that r is its third letter? People approach this problem by recalling words that begin with r (road) and words that have r in the third position (car). Because it is much easier to recall words by their first letter than by their third, they judge words beginning with r to be more numerous, even though a typical text contains more words with r in the third position.
Biases of imaginability
The risk involved in an adventurous expedition is evaluated by imagining contingencies with which the expedition is not equipped to cope. If many such difficulties are vividly imagined, the expedition seems exceedingly dangerous, although the ease with which disasters are imagined is not a measure of their likelihood.
A group of people was given information about several hypothetical mental patients, together with a drawing made by each of them, and asked to judge which condition each patient suffered from. The respondents markedly associated peculiar eyes with suspiciousness or paranoia. This illusory correlation persisted even when they were presented with contradictory data, perhaps because suspiciousness is more easily associated with the eyes than with any other body part.
Anchoring and Adjustment
In many situations, people estimate an initial value that is adjusted to yield the final answer. This initial value may be part of the problem or result of a partial computation. Either way, different initial values can lead to different answers.
A group of people was asked to estimate the percentage of African countries in the United Nations. First, a number between 0 and 100 was determined by spinning a wheel of fortune in their presence. The respondents were asked whether the true percentage was higher or lower than the spun number, and then to estimate the exact percentage. The median estimates were 25 and 45 for groups that received 10 and 65, respectively, as starting points. Anchoring occurs even when the initial value arises from an incomplete or unrelated computation.
Biases in the evaluation of conjunctive and disjunctive events
In a conjunctive structure, such as the development of a new product, the venture succeeds only if every event in a chain occurs; people tend to overestimate the probability of such conjunctions because each individual step seems likely. In a disjunctive structure, such as the failure of a complex system like a nuclear reactor, the event occurs if any one of many components fails; people tend to underestimate the probability of such disjunctions because each individual failure seems unlikely. Thus the chain-like structure of conjunctions leads to overestimation of a project's chances of success, while the funnel-like structure of disjunctions leads to underestimation of a system's risk of failure.
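The arithmetic behind both biases is the same anchor on the elementary event (the per-step probabilities below are hypothetical):

```python
# Conjunctive event: every one of n steps must succeed.
# Disjunctive event: it occurs if any one of n components fails.
p_step = 0.95   # probability each stage of a project succeeds
p_fail = 0.001  # probability each component of a system fails
n = 20

p_conjunctive = p_step ** n            # chain succeeds end to end
p_disjunctive = 1 - (1 - p_fail) ** n  # at least one component fails

print(round(p_conjunctive, 3))  # ~0.358: far below the 0.95 anchor
print(round(p_disjunctive, 3))  # ~0.02: far above the 0.001 anchor
```

Judging the whole by the elementary event therefore overstates the conjunction and understates the disjunction, exactly as the paper describes.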
Anchoring in the assessment of subjective probability distributions
One group was asked to state values of the Dow-Jones average corresponding to specified percentiles of their subjective probability distribution; another group was asked to judge the probability that the Dow-Jones average would lie above or below a specified value. Since both groups were essentially estimating the same distribution, their answers should have agreed. However, events to which the first group assigned a 10% probability actually occurred in 24% of the cases: their distributions were too extreme. The second group, in contrast, was too conservative, assigning a probability of 34% to events that occurred in only 26% of the cases. These results illustrate that the procedure of elicitation influences the degree of extremity of the estimates.
This paper is concerned with cognitive biases that stem from judgemental heuristics. These biases are not attributable to motivational effects such as payoffs and penalties; indeed, several of the errors occurred even when subjects were encouraged and rewarded for accuracy. Nor is reliance on heuristics restricted to laymen: experienced researchers are prone to the same biases when they think intuitively.