Thursday, November 26, 2009

Interesting video by Professor Robin Hogarth at Universitat Pompeu Fabra

Behavioral Economics: How Do People Evaluate Risk in Everyday Situations?


Relevant papers by Professor Robin Hogarth and his colleagues

Is confidence in decisions related to feedback? Evidence – and lack of evidence – from random samples of real-world behavior
Confidence in decision making is an important dimension of managerial behavior. However, what is the relation between confidence, on the one hand, and the fact of receiving or expecting to receive feedback on decisions taken, on the other hand? To explore this and related issues in the context of everyday decision making, use was made of the ESM (Experience Sampling Method) to sample decisions taken by undergraduates and business executives. For several days, participants received 4 or 5 SMS messages daily (on their mobile telephones) at random moments at which point they completed brief questionnaires about their current decision making activities. Issues considered here include differences between the types of decisions faced by the two groups, their structure, feedback (received and expected), and confidence in decisions taken as well as in the validity of feedback. No relation was found between confidence in decisions and whether participants received or expected to receive feedback on those decisions. In addition, although participants are clearly aware that feedback can provide both “confirming” and “disconfirming” evidence, their ability to specify appropriate feedback is imperfect. Finally, difficulties experienced in using the ESM are discussed as are possibilities for further research using this methodology.


What risks do people perceive in everyday life? A perspective gained from the experience sampling method (ESM)
The experience sampling method (ESM) was used to collect data from 74 part-time students who described and assessed the risks involved in their current activities when interrupted at random moments by text messages. The major categories of perceived risk were short-term in nature and involved “loss of time or materials” related to work and “physical damage” (e.g., from transportation). Using techniques of multilevel analysis, we demonstrate effects of gender, emotional state, and types of risk on assessments of risk. Specifically, females do not differ from males in assessing the potential severity of risks but they see these as more likely to occur. Also, participants assessed risks to be lower when in more positive self-reported emotional states. We further demonstrate the potential of ESM by showing that risk assessments associated with current actions exceed those made retrospectively. We conclude by noting advantages and disadvantages of ESM for collecting data about risk perceptions.


On heuristic and linear models of judgment: Mapping the demand for knowledge
Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet, other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from “as if” linear models. This paper illuminates the distinctions in these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of “lens model” research with novel methodology developed to specify the effectiveness of heuristics in different environments and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge – and thus “maps” – of when and which heuristic to employ.
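As background for the “lens model” tools mentioned in this abstract, the standard lens model equation may help; it is conventional Brunswikian notation rather than anything quoted from the paper. It decomposes achievement, the correlation r_a between a judge's predictions and the criterion, as follows:

```latex
% Lens model equation (standard form); the symbols are conventional, not the paper's own.
r_a = G \, R_s \, R_e + C \sqrt{(1 - R_s^{2})(1 - R_e^{2})}
% r_a : achievement (correlation between judgments and the criterion)
% R_s : environmental predictability (multiple correlation of the criterion with the cues)
% R_e : consistency of the judge (multiple correlation of the judgments with the cues)
% G   : correlation between the two linear predictions ("knowledge")
% C   : correlation between the residuals of the two regressions
```

Read against this decomposition, the trade-off described above is that a mechanically applied linear model buys consistency (high R_e) at a cognitive cost, whereas a heuristic avoids that cost but depends on knowing which rule suits which environment.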


Regions of rationality: Maps for bounded agents
An important problem in descriptive and prescriptive research in decision making is to identify “regions of rationality,” i.e., the areas for which heuristics are and are not effective. To map the contours of such regions, we derive probabilities that heuristics identify the best of m alternatives (m > 2) characterized by k attributes or cues (k > 1). The heuristics include a single variable (lexicographic), variations of elimination-by-aspects, equal weighting, hybrids of the preceding, and models exploiting dominance. We use twenty simulated and four empirical datasets for illustration. We further provide an overview by regressing heuristic performance on factors characterizing environments. Overall, “sensible” heuristics generally yield similar choices in many environments. However, selection of the appropriate heuristic can be important in some regions (e.g., if there is low inter-correlation among attributes/cues). Since our work assumes a “hit or miss” decision criterion, we conclude by outlining extensions for exploring the effects of different loss functions.
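As a rough illustration of the kind of question asked here (this is a hypothetical sketch, not the authors' code, datasets, or parameter values), the probability that a heuristic identifies the best of m alternatives can be estimated by simulation:

```python
# Hypothetical sketch: estimate how often two simple heuristics pick the best of
# m alternatives described by k cues. Weights, noise level, and sizes are
# illustrative assumptions only, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
m, k, trials = 3, 4, 10_000                       # alternatives, cues, choice problems
cue_weights = np.array([0.5, 0.3, 0.15, 0.05])    # assumed importance of each cue

hits = {"lexicographic (single best cue)": 0, "equal weighting": 0}
for _ in range(trials):
    cues = rng.normal(size=(m, k))                              # cue values per alternative
    criterion = cues @ cue_weights + rng.normal(scale=0.3, size=m)
    best = criterion.argmax()

    # Lexicographic: pick the alternative that scores highest on the most valid cue.
    hits["lexicographic (single best cue)"] += int(cues[:, 0].argmax() == best)
    # Equal weighting: sum all cues with unit weights and pick the highest total.
    hits["equal weighting"] += int(cues.sum(axis=1).argmax() == best)

for name, h in hits.items():
    print(f"{name}: chose the best alternative in {h / trials:.1%} of problems")
```

Changing the cue inter-correlations, weights, or noise in such a simulation is one way to see how the “regions” in which different heuristics succeed or fail shift with the environment.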

Sci-Phi: Rational decisions

The following article discusses recent experimental work by Norbert Schwarz that assesses the role of declarative information in human judgment and decision making.

Written by: Mathew Iredale

Posted by: TPM November 26, 2009

A recent review of research into rational decision making, led by Dr. Norbert Schwarz of the Institute for Social Research at the University of Michigan, has once again illustrated the extraordinary fallibility of human judgment.

Research going back decades has consistently shown that our ability to make what we consider to be rational decisions can sometimes fall far short of a rational ideal. Over the years an increasing number of systematic biases have been discovered which underlie the errors in our judgment and decision making.

To earlier researchers, the solution to such fallibility seemed obvious: if people only thought enough about the issues at hand, considered all the relevant information, and employed proper reasoning strategies, their decision making would surely improve. But as Schwarz et al. report, these attempts to improve decision making often fail to achieve their goals, even under conditions assumed to foster rational judgment.

For example, models of rational choice assume that people will expend more time and effort on getting it right when the stakes are high, in which case providing proper incentives should improve judgment. But the experimental evidence shows that it rarely does. Similarly, increasing people’s accountability for their decisions improves performance in some cases, but impedes it in others. A further problem described by Schwarz is that increased effort will only improve performance when people already possess strategies that are appropriate for the task at hand; “in the absence of such strategies, they will just do the wrong thing with more gusto.”

But even when no particularly sophisticated strategy is required, trying harder will not necessarily lead to better decision making. For example, asking people to “consider the opposite” is one of the most widely recommended debiasing strategies. And yet the more people try to consider the reasons why their initial judgment might be wrong, the more they convince themselves that their initial judgment was right on target.

Why should this be so? Schwarz argues that the strategy of “consider the opposite” often fails to achieve the desired effect because it ignores the metacognitive experiences that accompany the reasoning process.

Most theories of judgment and decision making focus on the role of declarative information, that is, on what people think about, and on the inference rules they apply to accessible thought content. But human reasoning is accompanied by a variety of metacognitive experiences: the ease or difficulty with which information can be brought to mind and thoughts can be generated, and the fluency with which new information can be processed, as well as emotional reactions to that information.

According to Schwarz, these experiences qualify the implications of accessible declarative information, with the result that we can accurately predict people’s judgments only by taking the interplay of declarative and experiential information into account.

A similar situation occurs with another popular strategy used to counter false beliefs: presenting contradictory evidence. Given its use in public information campaigns, this is perhaps the most widespread mechanism for countering erroneous beliefs. It is perhaps also the most dangerous, given that it often doesn’t work. Amazingly, this rather pertinent piece of information has been common knowledge for some 60 years (ever since Floyd Allport and Milton Lepkin’s pioneering research into erroneous beliefs during the Second World War), and yet the contradictory evidence strategy is still very much in use. And it still doesn’t work, as a recent study by Ian Skurnik, Carolyn Yoon, and Schwarz himself has shown.

The Centers for Disease Control and Prevention (CDC) in America has published a flyer, available online, which health professionals can download and give to their patients. It illustrates a common format of information campaigns that counter misleading information by confronting “myths” with “facts.” In this case, the myths are erroneous beliefs about flu vaccination (e.g. the side effects are worse than the flu), which are confronted with a number of facts (e.g. not everyone can take flu vaccine).

Skurnik et al split their participants into two groups, giving one the CDC’s “Facts & Myths” flyer and the other a “Facts” version of the flyer (presenting only the facts). They were interested to learn how the different flyers would affect participants’ beliefs about the flu and their intention to receive the flu vaccination. These measures were assessed either immediately after participants read the respective flyer or 30 minutes later.

Participants who read the “Facts & Myths” flyer received a list of statements that repeated the facts and myths and indicated for each statement whether it was true or false. Right after reading the flyer, participants had good memory for the presented information and made only a few random errors, identifying 4% of the myths as true and 3% of the facts as false. But after only thirty minutes, their judgments showed a systematic error pattern: they now misidentified 15% of the myths as true (their misidentification of facts as false remained at 2%).

Schwarz comments: “This is the familiar pattern of illusion-of-truth effects: once memory for substantive details fades, familiar statements are more likely to be accepted as true than to be rejected as false. This familiarity bias results in a higher rate of erroneous judgments when the statement is false rather than true, as observed in the present study. On the applied side, these findings illustrate how the attempt to debunk myths facilitates their acceptance after a delay of only 30 minutes.”

These findings suggest that participants drew on the declarative information provided by the flyers when it was highly accessible. As this information faded from memory, they increasingly relied on the perceived familiarity of the information to determine its truth value, resulting in the observed backfire effects.

As with the “consider the opposite” strategy, Schwarz concludes that the failure of the “Facts & Myths” flyer arises “because the educational strategy focuses solely on information content and ignores the metacognitive experiences that are part and parcel of the reasoning process.”

Unfortunately, such errors of judgment are all too common in decision making that involves memory recall. For example, people wrongly assume that information that is well represented in memory is easier to recall than information that is poorly represented; that recent events are easier to recall than distant events; that important events are easier to recall than unimportant ones; and that thought generation is easier when one has high rather than low expertise relevant to the subject matter of the memory.

How, then, can we guard ourselves against such errors? The answer, at the present time, is not entirely clear. Despite years of research, “much remains to be learned about the role of metacognitive experiences,” says Schwarz.

In the end, it may be the case that we simply cannot avoid making mistakes; that our thought processes are simply too complicated, too rich with emotion and content, to avoid systematic biases and the errors that they give rise to. And if this is the price that we have to pay for a full conscious experience, then we should not be too despondent; it is probably one that is well worth paying.

Suggested reading
“Metacognitive Experiences and the Intricacies of Setting People Straight: Implications for Debiasing and Public Information Campaigns” by Norbert Schwarz, Lawrence J. Sanna, Ian Skurnik, and Carolyn Yoon (2007), Advances in Experimental Social Psychology, Vol. 39, pp. 127–161.

Tuesday, November 18, 2008

Judgment researchers question Implicit Association Test (I.A.T.)

In Bias Test, Shades of Gray

Neal Dawson (regular attendee at Brunswik meetings) and Hal Arkes (past president of JDM) have written a paper questioning results of an I.A.T. study suggesting that physicians have unconscious racial bias. The New York Times article includes a link to their paper.

Friday, September 19, 2008

How Wall Street Lied to Its Computers

New York Times, September 18, 2008

How Wall Street Lied to Its Computers

By Saul Hansell

This is an example of what can happen when people and computer models interact. It exposes both the models' deficiencies in dealing with extreme situations and people's willingness to override a model when it gets in their way.

Sunday, August 31, 2008

Visualizing data

Published: August 30, 2008

Many eyes website

I haven't had time to look at this site, but I plan to. If anyone has time to review it, please post here. I wonder if it has ideas that will be useful in visualizing data.


Wednesday, August 27, 2008

Evidence-based forecasting?

Derailing the Boondoggle

"A Danish professor promotes a cure for billion-dollar cost overruns in government megaprojects: Use past boondoggles as a baseline."

The problem is optimism bias and organizational incentives to under-forecast costs. The solution proposed by Tversky, Kahneman, and Lovallo is "reference-class forecasting." Simply use past experience. This looks like a correspondence solution to a coherence problem. Right? Or is optimism bias really a correspondence issue?
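For concreteness, here is a minimal sketch of the reference-class idea with entirely made-up numbers (none of them come from the article): take the distribution of cost overruns in a class of comparable past projects and uplift the current "inside view" estimate by a chosen percentile of that distribution.

```python
# Hypothetical sketch of reference-class forecasting; all figures are illustrative.
import numpy as np

# Cost overruns in a reference class of comparable past projects,
# expressed as (actual cost / forecast cost) - 1.
past_overruns = np.array([0.10, 0.25, 0.45, 0.80, 0.30, 0.60, 0.15, 1.20, 0.05, 0.50])

inside_view_estimate = 2_000   # the project's own cost estimate (hypothetical, in millions)

# The "outside view": uplift the estimate by a chosen percentile of past overruns,
# e.g. the 80th percentile if the planner wants roughly 80% confidence of staying in budget.
uplift = np.percentile(past_overruns, 80)
reference_class_forecast = inside_view_estimate * (1 + uplift)

print(f"Inside-view estimate: {inside_view_estimate}m")
print(f"80th-percentile historical overrun: {uplift:.0%}")
print(f"Reference-class forecast: {reference_class_forecast:,.0f}m")
```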

Tuesday, August 12, 2008

Expert hot tubbing?

Published: August 12, 2008
Unlike in American courts, in most of the rest of the world expert witnesses are selected by judges and are meant to be neutral.

"In most of the rest of the world, expert witnesses are selected by judges and are meant to be neutral and independent. Many foreign lawyers have long questioned the American practice of allowing the parties to present testimony from experts they have chosen and paid.

The European judge who visits the United States experiences “something bordering on disbelief when he discovers that we extend the sphere of partisan control to the selection and preparation of experts,” John H. Langbein, a law professor at Yale, wrote in a classic article in The University of Chicago Law Review more than 20 years ago."

"Dr. Frank Gersh, the defense expert in the case, did not respond to a request for comment. But Dr. Leonard Welsh, the psychologist who testified for the state, said he sometimes found his work compromising."

“After you come out of court,” Dr. Welsh said, “you feel like you need a shower. They’re asking you to be certain of things you can’t be certain of.”

"He might have preferred a new way of hearing expert testimony that Australian lawyers call hot tubbing."

"In that procedure, also called concurrent evidence, experts are still chosen by the parties, but they testify together at trial — discussing the case, asking each other questions, responding to inquiries from the judge and the lawyers, finding common ground and sharpening the open issues. In the Wilkins case, by contrast, the two experts “did not exchange information,” the Court of Appeals for Iowa noted in its decision last year."