JAMA: The Journal of the American Medical Association

Copyright 1993 by the American Medical Association, 515 N State St, Chicago, IL 60610.

Volume 270(1)             Jul 7, 1993             pp 72-76

Understanding Patients' Decisions: Cognitive and Emotional Perspectives
[Special Communication]

Redelmeier, Donald A.; Rozin, Paul; Kahneman, Daniel

From the University of Toronto (Ontario) and the Wellesley Hospital Research Institute (Dr Redelmeier); the University of Pennsylvania, Philadelphia (Dr Rozin); the University of California, Berkeley (Dr Kahneman).
Reprint requests to Wellesley Hospital Research Institute, Jones Bldg, Room 123, 160 Wellesley St E, Toronto, Ontario, Canada M4Y 1J3 (Dr Redelmeier).

Objective--To describe ways in which intuitive thought processes and feelings may lead patients to make suboptimal medical decisions.

Design--Review of past studies from the psychology literature.

Results--Intuitive decision making is often appropriate and results in reasonable choices; in some situations, however, intuitions lead patients to make choices that are not in their best interests. People sometimes treat safety and danger categorically, undervalue the importance of a partial risk reduction, are influenced by the way in which a problem is framed, and inappropriately evaluate an action by its subsequent outcome. These strategies help explain examples where risk perceptions conflict with standard scientific analyses. In the domain of emotions, people tend to consider losses as more significant than the corresponding gains, are imperfect at predicting future preferences, distort their memories of past personal experiences, have difficulty resolving inconsistencies between emotions and rationality, and worry with an intensity disproportionate to the actual danger. In general, such intangible aspects of clinical care have received little attention in the medical literature.

Conclusion--We suggest that an awareness of how people reason is an important clinical skill that can be promoted by knowledge of selected past studies in psychology.

(JAMA. 1993;270:72-76)

Helping patients reach reasonable decisions is an important part of the art of medicine. The patient's perspective can be decisive in choosing when to seek help, how closely to comply with treatment, and whether to change one's life-style. In a break from some traditional views, active patient participation is now considered desirable because of the ethical principle of patient autonomy and the legal requirement of informed consent. A patient who takes responsibility for a decision may also be less likely to blame the physician if things go wrong (1). Yet it is not always easy to elicit reasonable and informed decisions from patients. Their beliefs are susceptible to biases and their preferences sometimes appear irrational (2). This article reviews recent findings from research on judgment and decision making that help explain some of the difficulties that arise when patients are called on to make decisions about their care.

The ideal decision maker would gather all available information about the situation, calculate the costs and benefits of every feasible option, and then decide on the optimal choice (3). In contrast to the ideal, people reach judgments based on simplifying heuristic rules and search until they find an acceptable solution, not necessarily the best (4). These strategies are often effective because they economize on time and usually lead to sensible decisions. However, the departures from strict rationality can also lead to mistakes (5). Errors in reasoning arise from many sources, such as misinformation, denial, overconfidence, distrust, and confusion. In this article we present examples of research on the common biases in people's perceptions of risk and describe some new lines of investigation that focus on the role of emotion in decision making.

Categorical Safety and Danger
One feature of human judgment is the tendency to categorize an entity as either "dangerous" or "safe" without recognizing that low and high levels of exposure can have different or even opposite effects. For example, many respondents in a recent survey believed that a teaspoon of ice cream has more calories than a pint of cottage cheese (P.R., M. Markwith, M. Aschmore, unpublished data, March 1992). Presumably, ice cream is considered inherently high in calories and cottage cheese low, regardless of the amount consumed. The same type of thinking that brands an activity as healthy or dangerous independent of dose can lead individuals to believe that a person cannot ingest too many vitamins, that infinitesimal amounts of salt or sugar are harmful, or that even the most trivial contact with anyone having the acquired immunodeficiency syndrome (AIDS) is dangerous.

Insensitivity to dosage and to the frequency of exposure also appears in situations that are coded as safe. For example, automobile travel entails only a one in 1 million risk of dying during an average trip, yet a one in 50 risk over a lifetime of driving (6). Fear diminishes as the anticipated danger fails to materialize on successive occasions, even though the cumulative risk over many trips is substantial. One method for lessening people's tendency to categorize an entity as either dangerous or safe is to explicitly discuss dose-response relationships; for example, emphasize that ingesting large quantities of vitamin C can cause kidney stones, that taking higher doses of antibiotics will not lead to faster recovery, or that years of uneventful cigarette smoking do not indicate invulnerability to the adverse effects of future cigarettes.
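
The compounding of small per-trip risks described above follows from elementary probability. A minimal sketch (the 1-in-1-million per-trip figure comes from the example; the assumption of roughly 20 000 trips per driving lifetime is ours, chosen only to make the per-trip and lifetime figures consistent):

```python
def cumulative_risk(p_per_exposure, n_exposures):
    """Probability of at least one event across n independent exposures."""
    return 1 - (1 - p_per_exposure) ** n_exposures

per_trip = 1e-6          # 1-in-1-million risk of dying on an average trip
lifetime_trips = 20_000  # assumed trips in a driving lifetime (illustrative)

risk = cumulative_risk(per_trip, lifetime_trips)
print(f"Lifetime risk: {risk:.4f}")  # about 0.02, ie, roughly 1 in 50
```

The point of the sketch is that no single trip ever feels dangerous, yet the cumulative probability grows steadily with repetition.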

The Enchanting Appeal of Zero Risk
People often discriminate more sharply than is appropriate between interventions that eliminate a risk and interventions that merely reduce it. Imagine a container of pesticide priced at $10 that produces a harmful toxic reaction 15 times for every 10 000 containers used. How much is an equally effective product worth if it reduces the risk to 10, 5, or 0 incidents per 10 000 containers? The results of a survey showed that people were willing to pay $1.04 extra for the reduction from 15 to 10 reactions, whereas they would pay $2.41 extra for the reduction from 5 to 0 events per 10 000 containers (7). Apparently, the absolute elimination of a risk is more attractive than a mere reduction of the probability of harm. However, the special premium that people are willing to pay to avoid worry cannot be justified as a purchase of improved safety. Indeed, the hope of eliminating risk is usually illusory given the many similar hazards that remain. In this particular scenario, for example, there are multiple sources of toxic reactions in any home so that complete protection against one possibility provides only partial protection against the overall risk of poisoning.
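
A quick expected-value check, using only the survey numbers quoted above, shows why the premium for the final step to zero is hard to justify: both reductions remove the same 5 incidents per 10 000 containers, yet respondents priced them very differently.

```python
def incidents_avoided(risk_before, risk_after):
    """Expected incidents avoided per 10 000 containers used."""
    return risk_before - risk_after

step_down = incidents_avoided(15, 10)  # priced at $1.04 extra in the survey
to_zero = incidents_avoided(5, 0)      # priced at $2.41 extra in the survey

assert step_down == to_zero == 5       # identical expected benefit...
price_ratio = 2.41 / 1.04              # ...at more than double the price
print(f"Price ratio for the same expected benefit: {price_ratio:.2f}")
```

The extra dollars buy relief from worry, not additional safety.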

Lessening the odds is the realistic aim in most cases, yet it has far less appeal than the illusion of perfect safety (8). One explanation is that small probabilities all seem similar. "One chance in 20 000" is, for most people, an abstract notion: no living organism can accumulate sufficient personal experience to encompass such a low frequency. Thus, it is not surprising that, biologically, we have little intuitive understanding of the difference between a risk of one in 20 000 and a risk of one in 200 000. Helping patients to understand such small probabilities requires listing concrete events that have similar likelihoods; for example, one in 20 000 corresponds to the chance that a person will live more than 100 years and one in 200 000 to the chance that an airline flight will be hijacked (9). Additionally, epidemiologic analyses can help explain the importance of competing risks; for example, estimates suggest that a perfect cure for cancer would yield less than a 4-year increase in average life expectancy in North America (10). Finally, medical outcomes data can illustrate that even well-accepted treatments are imperfect; for example, in symptomatic patients with high-grade carotid stenosis, endarterectomy performed by expert surgeons reduces the annual risk of stroke from about 14% to 6%--not to 0% (11).
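
The endarterectomy figures above translate into the standard epidemiologic effect measures; a sketch of the routine arithmetic (the 14% and 6% annual risks are those cited, while the derived quantities are textbook definitions, not additional trial results):

```python
control_risk = 0.14  # annual stroke risk without surgery
treated_risk = 0.06  # annual stroke risk after endarterectomy by expert surgeons

arr = control_risk - treated_risk  # absolute risk reduction (8 percentage points)
rrr = arr / control_risk           # relative risk reduction (about 57%)
nnt = 1 / arr                      # patients treated per stroke averted, per year

print(f"ARR = {arr:.0%}, RRR = {rrr:.0%}, NNT = {nnt:.1f}")
```

Even a treatment with an impressive relative risk reduction leaves a residual 6% annual risk, which is the realistic "lessening of the odds" rather than the illusory zero.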

Framing and Presentation
Peoples' interpretation of most events depends on both the nature of the experience and the manner in which the situation is presented or "framed". For example, a foul odor near a sewer may be disgusting, whereas the same aroma emanating from a cheese counter can be enticing. Psychological research has shown that people are often sensitive to the presentation of problems and that they fail to realize the extent to which their preferences can be altered by an inconsequential change in formulation (12).

Consider how patients interpret data from medical research. Based on published reports, the outcome statistics for 100 middle-aged men undergoing surgery for lung cancer can be described as "90 live through the postoperative period . . . and 34 are alive at the end of 5 years". Now consider an alternative way of expressing the same results: "10 die during the postoperative period . . . and 66 die by the end of 5 years". Is one of these formulations more frightening than the other when compared with radiation therapy for lung cancer, which carries minimal risk of early mortality? Using these data, an experiment showed that surgery appeared much less attractive than radiation therapy when described using mortality rather than survival statistics (13). The explanation is that the difference between 10% mortality and 0% mortality is more impressive than the difference between 90% survival and 100% survival. The framing effect was just as large with physicians as with lay people.
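
The two descriptions in the surgery example are arithmetic complements of the same data, which a few lines make explicit (the figures are the study's outcome statistics quoted above):

```python
def mortality_frame(survival_per_100):
    """Restate a survival figure as the equivalent mortality figure."""
    return 100 - survival_per_100

survival = {"postoperative": 90, "5-year": 34}  # surgery outcomes per 100 patients
mortality = {k: mortality_frame(v) for k, v in survival.items()}

print(mortality)  # {'postoperative': 10, '5-year': 66} -- same data, different frame
```

Nothing in the data changes between the two frames; only the psychological reference point does.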

Framing effects are difficult to avoid because there is no one optimal method for presenting statistics. However, it is disturbing that the preferences of both physicians and patients can be swayed by the accidental choice of one formulation of results rather than another. Equally troubling is the possibility of deliberately exploiting framing effects, as often occurs in the marketing of drugs and surgical supplies (14). Two useful defenses against framing effects are to consider the data from multiple perspectives and to discuss the issue with a person whose opinion on the same problem is different (15). When different formulations yield identical choices, one gains confidence in the final assessment (16). In situations where framing effects cannot be avoided, they may sometimes be redirected; for example, presenting data as the negative consequences of nonadherence, rather than the positive outcomes of adherence, might allow clinicians to be more persuasive when recommending health-promoting behaviors (17,18).

Hindsight Bias
Peer review organizations often presume that a detailed review of a patient's medical chart provides an unbiased assessment of quality of care. In contrast, psychologists have shown that such retrospective judgments can be distorted by knowledge of the ultimate outcome (19). For example, expert reviewers were asked to judge the appropriateness of care in surgical cases that involved an adverse anesthetic outcome (20). Each case was formulated in one of two versions, describing the patient's outcome as either permanent or temporary. The two versions otherwise contained identical information and were assigned randomly to reviewers. However, the reviewers were more likely to rate the quality of care as "less than appropriate" if the outcome was permanent rather than temporary. Furthermore, knowledge of the outcome influenced both the harshness of the rating and the willingness to offer a judgment.

Hindsight bias arises when people examine past decisions: they tend to highlight data that were consistent with the final outcome and to de-emphasize data that were contradictory or ambiguous. Estimates of probability and risk are particularly prone to this bias; for example, people are more likely to indicate that a given outcome was inevitable if they later learn that it occurred (19). Individuals are more likely to classify a decision as a mistake if it was followed by a significant adverse consequence (21). Even the clinical dictum of Sutton's law ("to go where the money is") can be deceptive because the most rewarding strategy is often apparent only after the fact (22). Physicians can lessen the illusory confidence that arises from retrospective second-guessing by providing explicit statements of probabilities prior to obtaining informed consent. Doing so might promote a more proactive attitude about untoward medical events, lessen the chances of subsequent distortions, and prevent unduly negative evaluations of the wisdom of choices that were made prospectively (23).

The Role of Emotions
In science there is a strong tendency to neglect variables that cannot be measured accurately. Mortality is far easier to quantify than emotion, so medical research tends to focus on mortality rather than on quality of life. However, many medical interventions provide benefits primarily to a patient's sense of well-being, and the absence of objective measures can lead to misleading conclusions about therapeutic efficacy. Patients often seek medical care for the sympathy, reassurance, and validation provided; thus, these outcomes should be included when measuring the effectiveness of a clinical intervention. We cannot afford to ignore feelings just because they present difficult scientific problems. To be sure, quality-of-life measures are gaining popularity in medical studies, yet standard instruments focus primarily on physical disabilities, mental impairments, and functional status rather than on comfort, satisfaction, and peace of mind.

The intangible benefits of medical treatment were disregarded, for example, when the Food and Drug Administration decided to withdraw silicone-gel breast implants from the market. These medical devices represent little danger to most recipients (<10% will rupture), and most women indicate satisfaction with the results (>90% in one survey) (24). Yet in evaluating risks and benefits, the Food and Drug Administration acted as if these subjective gains were unimportant (25). Greater attention to intangible factors, such as patient satisfaction, happiness, and well-being, might help lessen the tension between medical science and clinical practice (26). However, incorporating people's feelings about health states and medical outcomes raises difficulties beyond those of measurement and diversity. In the following sections we review selected research findings on problematic aspects of people's preferences and emotions.

Loss Aversion and Preference for Status Quo
A fully rational agent would not distinguish between an action that leads to a loss (of money, health, or time) and a missed opportunity to realize the equivalent gain. An actual loss and a foregone gain are similar because both represent the same failure to achieve the best possible outcome. However, people take actual losses far more seriously than foregone gains and are therefore reluctant to accept a loss in one dimension of life to achieve an improvement in another dimension (27). Recall the pesticide example, in which respondents were willing to pay only about $1.04 to purchase extra safety (decreasing the risk from 15 to 10 incidents per 10 000 containers). When asked whether they would be willing to sell safety (increasing the risk from 15 to 16 incidents per 10 000 containers), most respondents refused altogether, and the mean price the others demanded was $2.86 (7). Apparently, the trade-off between money and safety is different when people evaluate deteriorations rather than improvements.
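
The buying/selling asymmetry in the pesticide survey can be made explicit by converting each answer into an implied price per unit of risk; the dollar figures are those quoted above, and the per-unit comparison is our illustration.

```python
# Willingness to pay: $1.04 to cut the risk from 15 to 10 incidents per 10 000
wtp_per_unit = 1.04 / (15 - 10)

# Willingness to accept: $2.86 demanded to raise the risk from 15 to 16 per 10 000
wta_per_unit = 2.86 / (16 - 15)

ratio = wta_per_unit / wtp_per_unit
print(f"Buying safety: ${wtp_per_unit:.2f} per unit; selling it: ${wta_per_unit:.2f} per unit")
print(f"Asymmetry: {ratio:.1f}x")
```

A consistent valuation of safety would make the two per-unit prices roughly equal; the gap of more than an order of magnitude is the signature of loss aversion.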

Losses loom larger than corresponding gains in many clinical situations. Patients are often hesitant to take chloramphenicol because of its one in 25 000 risk of producing a fatal reaction but remain unenthusiastic about hepatitis vaccinations, which offer a one in 10 000 chance of preventing death (28,29). A similar pattern occurs in people's preferences for the status quo; for example, scientific evidence suggests that dipyridamole is ineffective at preventing strokes, yet many patients with cerebrovascular disease who have been receiving this drug are unwilling to accept its discontinuation (30). More generally, people's reluctance to relinquish current routines helps explain the difficulties in encouraging patients to change their diet, drive carefully, and exercise regularly. One method for lessening people's differential attitudes toward losses and gains is to confront them with different reference points, such as highlighting that if the individual were newly diagnosed with cerebrovascular disease, the medical management would not entail initiation of dipyridamole therapy.

What factors determine the patient's reference point? In most cases the reference point corresponds to the status quo, the customary outcome, or some other value that is considered normal. However, reference points can also be designated or suggested almost arbitrarily. For example, people are willing to sit for 5 hours in an airplane during a transcontinental flight but can become annoyed if their baggage is returned 5 minutes late. In medical contexts, the sequence in which options are presented, the designation of some outcomes as normal, and the description of costs and benefits relative to the norm all can influence patients' choices and satisfaction. The physician has considerable power to define the dominant reference point from which patients will judge outcomes, and this power should be used with care.

Predicting Future Feelings
To understand the role of feelings and emotions as costs or benefits in decision making requires examining how these intangibles are represented in the mind. Typically, a person chooses between alternatives by trying to imagine what it will feel like to experience each of the available options. As a consequence, medicolegal doctrine holds that obtaining informed consent entails describing to patients the possible outcomes of a given treatment and their relative probabilities. Yet psychologists have shown that people are prone to err when making decisions about long-term consequences because they fail to anticipate how their preferences will change over time. An informed decision about whether to undergo surgery for rectal cancer, for example, requires that patients understand both how their lives are likely to change with a colostomy and how their own attitudes toward a colostomy are likely to change (31).

One demonstration of the difficulty of anticipating future preferences appears in a recent set of experiments involving tastes for food (32). In one experiment, students were given a serving of ice cream and were asked to predict how much they would enjoy the experience after eating it daily for about a week. Subjects rated how much they liked the ice cream on first tasting and again 1 week later, after sampling it daily. As expected, some individuals developed an increased liking for the experience, whereas for others it became increasingly unpleasant. However, the results showed almost no correlation between predicted and actual changes in liking. Evidently, people are not good at anticipating their future preferences, even when facing mundane experiences.

Medical outcomes can be much more difficult for patients to envision because of their unfamiliar character and the uncertain nature of human adaptation. Most individuals are surprised to learn that 1 year after one group of people won a large lottery and another group became paraplegic, the quality-of-life ratings of the two groups were fairly similar (33). Likewise, people may gauge their reaction to colostomy by how they would feel about it immediately after the operation, rather than considering whether these concerns will fade over time (31). To the extent that physicians anticipate their patients' adapting to different health states, true informed consent would require highlighting these future preferences when discussing current therapeutic alternatives (34). This might include both statistics on and interviews with people who underwent each therapeutic alternative months or years previously.

Memories of Past Experiences
When people make a medical decision between alternatives that they have already experienced (eg, a second round of radiation therapy), they compare their mental representations of each of the alternatives and, presumably, choose the alternative that they remember as less unpleasant. But memories are imperfect and subject to systematic error. One major distortion is that duration may not be as well represented as peak intensity (35). Thus, a few days of intense acute pain may be remembered as more unpleasant than many weeks of moderate chronic pain, in the same way that a brief media soundbite may be more memorable than a longer detailed report. When choosing between two unpleasant medical procedures, a patient might reasonably select the procedure that is shorter but more intense because it entails a smaller loss in quality of life, productivity, and well-being. However, the patient and the physician need to realize, especially if subsequent choices of the same sort are to be made, that the memory for the shorter procedure may be more potent and more aversive (36).

The difference between a patient's experience of an unpleasant medical procedure and the patient's memory of the experience also poses ethical problems for clinicians. For example, dressing changes on burn patients can be extremely painful. Should patients be prescribed amnesia-producing medications to deliberately lessen their memories of the experience? Doing so may not change the essential characteristics of the procedure but might increase compliance with subsequent treatments. Similarly, when choosing between equally effective treatments, should one recommend a treatment that is more unpleasant as it is experienced but may be remembered more favorably? Physicians who wish to respect patient autonomy will face conflicting priorities in situations where prospective and retrospective evaluations lead to different treatment recommendations.

Irrational Concerns
Emotional responses can be intense; in such cases, people may "realize" that their reactions are not "rational," yet they remain unable to resolve the conflict. Consider the case of disgust. Almost all college students claim that they would not drink a glass of their favorite juice if a cockroach had been briefly dipped into the beverage and then removed. Respondents typically attribute this aversion to the possibility of bodily harm from dangerous microorganisms on the cockroach. However, almost everyone continues to claim reluctance to drink another glass of their favorite juice into which a dead, sterilized cockroach has been dipped and removed (37). Under these circumstances, people change the account of their rejection, ultimately concluding that there must be something inherently bad about cockroaches. Yet one third of these subjects are also reluctant to drink a third glass of their favorite juice after it has been stirred by a brand-new fly swatter (38). Apparently, the mere association of fly swatters with an offensive entity, such as a cockroach, accounts for the negative response. Hence, disgust can influence decisions in situations where people may recognize that they have no rational account for their reaction.

A similar pattern of response can occur in peoples' reactions to other aversive situations. Many college students are reluctant to put on a sweater after it has been worn by a person with AIDS, justifying this decision in terms of health risk (P.R., M. Markwith, C. McCauley, PhD, unpublished data, November 1990). However, a sweater that was sterilized after being worn by a person with AIDS is almost as aversive as the original sweater. Furthermore, this same type of reluctance occurs in reactions to wearing the sterilized sweaters of people who have tuberculosis or an amputated foot, or who have committed moral offenses (P.R., M. Markwith, C. McCauley, PhD, unpublished data, November 1990). The reaction to contact, therefore, is only weakly related to the risk of infection; instead, disgust may be under the control of cognitions that are not immediately acknowledged and that cannot be classified as rational. In accord with what has been described as the "law of contagion" (once in contact, always in contact), people may believe that temporary contact has permanently transferred the properties of AIDS into the sweater (39). Physicians need to realize that such beliefs are often not volunteered by the patient unless directly questioned, may be influenced by thoughts that the patient is unable to fully articulate, and can persist despite a discussion of valid scientific counterarguments.

Worry Management
Worry fulfills a useful purpose in helping to motivate health-related behaviors, yet sometimes it serves only to decrease quality of life without any medical gains. For example, cancer fears related to the purported carcinogenic effects of electromagnetic waves caused real anxiety in many Americans and radical behavior in a few individuals (40). Furthermore, people have some sense of their susceptibility to worry and may avoid situations and information that would give rise to this unpleasant emotion (P. Linville, PhD, L. Hasher, PhD, unpublished data, January 1991). For example, some women are reluctant to seek medical attention after detecting a breast lump despite the known medical advantages of early treatment. These direct and indirect effects of worry may help explain why people's compliance with a preventive medicine recommendation is not always proportional to the intervention's expected benefits. Physicians should try to reassure patients who are worrying too much. Situations where patients have inappropriately low levels of worry, however, can be more difficult to manage because individuals may experience urges to escape medical interaction when worrisome issues are identified.

Mismatches between the subjective intensity of worry and the objective level of danger may also explain some of the cognitive biases in risk perception. The disproportionate appeal of zero risk, for example, may indicate that a risk reduction from 15 to 10 chances in 10 000 has little effect on reducing the intensity of worry, whereas a drop to zero eliminates the emotion entirely. Furthermore, people's capacity to pay attention is limited, and they can allocate only so much of it to the activity of worrying (41). They attend to what seems most important among the multiple sources of concern in their lives. In general, sudden problems with short deadlines tend to be most compelling, whereas intermittent problems of insidious onset are often overlooked. As a result, serious problems may be neglected because there is no particular reason to worry about them today (42). Clinicians who are able to identify what is--and is not--worrisome to individuals have the opportunity to address an important determinant of patients' behavior and the potential to show empathy and insight.

The movement toward increasing involvement of patients in medical decision making calls for an understanding of the psychology of intuitive thoughts and feelings. Physicians need to provide guidance not only about the actual decision, but also about common errors in thinking. The aim of this article is to encourage such enlightened discussion by enabling physicians to identify pitfalls in reasoning and to recognize the role of emotions. We have offered specific examples of heuristics and biases that can influence a patient's perception of risk. We have also reviewed some of the ways in which emotions can affect decision making. Risk and emotion, along with other decision-making fundamentals, are common in medical situations and salient to patients. An awareness of how people reason is a necessary clinical skill and could be taught as a basic medical science.

We thank the John D. and Catherine T. MacArthur Foundation Network on Determinants and Consequences of Health-Promoting and Health-Damaging Behavior, Chicago, Ill, and the Physicians' Services Inc Foundation, Toronto, Ontario. We are also grateful to Claire Bombardier, MD, Allan Detsky, MD, PhD, Dale Dotten, MD, Janet Hux, MD, and Miriam Shuchman, MD, and members of the Health and Behavior Network for comments on earlier drafts of the manuscript.


1. Wagener JJ, Taylor SE. What else could I have done? patients' responses to failed treatment decisions. Health Psychol. 1986;5:481-496.

2. Merz JF, Fischhoff B. Informed consent does not mean rational consent: cognitive limitations on decision-making. J Leg Med. 1990;11:321-350.

3. Sox HC, Blatt MA, Higgins MC, Marton KI. Medical Decision Making. Woburn, Mass: Butterworths; 1988.

4. Simon H. Models of Man: Social and Rational. New York, NY: John Wiley & Sons Inc; 1959.

5. Bell DE, Raiffa H, Tversky A, eds. Decision Making. New York, NY: Cambridge University Press; 1988.

6. Sleet DA. Motor vehicle trauma and safety belt use in the context of public health priorities. J Trauma. 1987;27:695-702.

7. Viscusi WK, Magat WA, Huber J. An investigation of the rationality of consumer valuations of multiple health risks. RAND J Economics. 1987;18:465-479.

8. Cates W, Hinman AR. AIDS and absolutism: the demand for perfection in prevention. N Engl J Med. 1992;327:492-494.

9. McGervey JD. Probabilities in Everyday Life. Chicago, Ill: Nelson-Hall Publishers; 1986.

10. Olshansky SJ, Carnes BA, Cassel C. In search of Methuselah: estimating the upper limits to human longevity. Science. 1990;250:634-640.

11. North American Symptomatic Carotid Endarterectomy Trial Collaborators. Beneficial effect of carotid endarterectomy in symptomatic patients with high-grade stenosis. N Engl J Med. 1991;325:445-453.

12. Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211:453-458.

13. McNeil BJ, Pauker SG, Sox HC, Tversky A. On the elicitation of preferences for alternative therapies. N Engl J Med. 1982;306:1259-1269.

14. Naylor CD, Chen E, Strauss B. Measured enthusiasm: does the method of reporting trial results alter perceptions of therapeutic effectiveness? Ann Intern Med. 1992;117:916-921.

15. Redelmeier DA, Tversky A. On the framing of multiple prospects. Psychol Sci. 1992;3:191-193.

16. Christensen C, Heckerling PS, Mackesky ME, Bernstein LM, Elstein AS. Framing bias among expert and novice physicians. Acad Med. 1991;9:S76-S78.

17. Meyerowitz BE, Chaiken S. The effect of message framing on breast self-examination: attitudes, intentions, and behavior. J Pers Soc Psychol. 1987;52:500-510.

18. Wilson DK, Wallston KA, King JE. Effects of contract framing, motivation to quit, and self-efficacy on smoking reduction. J Appl Soc Psychol. 1990;20:531-547.

19. Fischhoff B. Hindsight does not equal foresight: the effect of outcome knowledge on judgment under uncertainty. J Exp Psychol Hum Percept. 1975;41:288-299.

20. Caplan RA, Posner KL, Cheney FW. Effect of outcome on physician judgments of appropriateness of care. JAMA. 1991;265:1957-1960.

21. Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? JAMA. 1991;265:2089-2094.

22. Kassirer JP, Kopelman RI. A fatal flaw in Sutton's law. In: Kassirer JP, Kopelman RI. Learning Clinical Reasoning. Baltimore, Md: Williams & Wilkins; 1991.

23. Dawson NV, Arkes HR, Siciliano C, Blinkhorn R, Lakshmanan M, Petrelli M. Hindsight bias: an impediment to accurate probability estimation in clinicopathological conferences. Med Decis Making. 1988;8:259-264.

24. Kessler DA. The basis for the FDA's decision on breast implants. N Engl J Med. 1992;326:1713-1715.

25. Angell M. Breast implants: protection or paternalism? N Engl J Med. 1992;326:1695-1696.

26. Llewellyn-Thomas HA, Naylor CD, Cohen MM, Basinski AS, Ferris LE, Williams JE. Studying patients' preferences in health care decision making. Can Med Assoc J. 1992;147:859-864.

27. Kahneman D, Knetsch JL, Thaler RH. The endowment effect, loss aversion, and status quo bias. J Econ Perspect. 1991;5:193-206.

28. Polak BCP, Wessling H, Shut D, et al. Blood dyscrasias attributed to chloramphenicol: a review of 576 published and unpublished cases. Acta Med Scand. 1972;192:409-414.

29. Owens DK, Nease RF. Occupational exposure to human immunodeficiency virus and hepatitis B virus: a comparative analysis of risk. Am J Med. 1992;92:503-512.

30. Harker LA, Bernstein EF, Dilley RB, et al. Failure of aspirin plus dipyridamole to prevent restenosis after carotid endarterectomy. Ann Intern Med. 1992;116:731-736.

31. Boyd NF, Sutherland HJ, Heasman KZ, Tritchler DL, Cummings BJ. Whose utilities for decision analysis? Med Decis Making. 1990;1:58-67.

32. Kahneman D, Snell J. Predicting utility. In: Hogarth RM, ed. Insights in Decision Making. Chicago, Ill: University of Chicago Press; 1990.

33. Brickman P, Coates D. Lottery winners and accident victims: is happiness relative? J Pers Soc Psychol. 1978;36:917-927.

34. Christensen-Szalanski JJ. Discount functions and the measurement of patients' values: women's decisions during childbirth. Med Decis Making. 1984;4:47-58.

35. Fredrickson BL, Kahneman D. Duration neglect in retrospective evaluations of affective episodes. J Pers Soc Psychol. In press.

36. Redelmeier DA, Kahneman D. The pain of invasive procedures: an evaluation of patients' experiences and memories of colonoscopy. Med Decis Making. 1992;12:338. Abstract.

37. Rozin P, Fallon AE. A perspective on disgust. Psychol Rev. 1987;94:23-41.

38. Rozin P, Millman L, Nemeroff C. Operation of the laws of sympathetic magic in disgust and other domains. J Pers Soc Psychol. 1986;50:703-712.

39. Rozin P, Nemeroff C. The laws of sympathetic magic: a psychological analysis of similarity and contagion. In: Stigler J, Herdt G, Shweder RA, eds. Cultural Psychology: Essays on Comparative Human Development. New York, NY: Cambridge University Press; 1990.

40. Jauchem JR. Epidemiologic studies of electric and magnetic fields and cancer: a case study of distortions by the media. J Clin Epidemiol. 1992;45:1137-1142.

41. Linville PW, Fischer GW. Preferences for separating and combining events. J Pers Soc Psychol. 1991;60:5-23.

42. Tversky A, Shafir E. Choice under conflict: the dynamics of deferred decision. Psychol Sci. 1992;3:358-361.