Tuesday, November 18, 2008

Judgment researchers question the Implicit Association Test (I.A.T.)

In Bias Test, Shades of Gray

Neal Dawson (a regular attendee at Brunswik meetings) and Hal Arkes (a past president of the Society for Judgment and Decision Making) have written a paper questioning the results of an I.A.T. study suggesting that physicians have unconscious racial bias. The New York Times article includes a link to their paper.

Friday, September 19, 2008

How Wall Street Lied to Its Computers

New York Times, September 18, 2008


By Saul Hansell

This is an example of what can happen when people and computer models interact: it exposes both the models' weaknesses in handling extreme situations and people's willingness to override a model that gets in their way.

Sunday, August 31, 2008

Visualizing data

Published: August 30, 2008

Many Eyes website

I haven't had time to look at this site, but I plan to. If anyone has time to review it, please post here. I wonder if it has ideas that will be useful in visualizing data.


Wednesday, August 27, 2008

Evidence based forecasting?

Derailing the Boondoggle

"A Danish professor promotes a cure for billion-dollar cost overruns in government megaprojects: Use past boondoggles as a baseline."

The problem is optimism bias combined with organizational incentives to underforecast costs. The solution proposed by Tversky, Kahneman, and Lovallo is "reference-class forecasting": simply use past experience. This looks like a correspondence solution to a coherence problem. Right? Or is optimism bias really a correspondence issue?
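The mechanics of reference-class forecasting can be sketched in a few lines. This is only an illustration: the overrun ratios, the 80th-percentile uplift, and the function names are all invented for the example, not taken from the actual method or its data.

```python
# A minimal sketch of reference-class forecasting, assuming we have a
# "reference class" of similar past projects and their actual/estimated
# cost ratios. The numbers below are invented for illustration.
import statistics

past_overrun_ratios = [1.4, 1.1, 2.0, 1.6, 1.3, 1.8, 1.2]  # hypothetical

def reference_class_forecast(inside_view_estimate, ratios, quantile=0.8):
    """Adjust the project's own ('inside view') cost estimate by an
    uplift taken from the distribution of past overruns."""
    # statistics.quantiles with n=10 returns the nine decile cut points
    uplift = statistics.quantiles(ratios, n=10)[int(quantile * 10) - 1]
    return inside_view_estimate * uplift

# An inside-view budget of $100M gets inflated by the 80th-percentile
# historical overrun, rather than trusted at face value.
print(reference_class_forecast(100e6, past_overrun_ratios))
```

The point of the outside view is visible in the code: the project's own estimate enters only as a scaling factor, and the distribution of past boondoggles does the real work.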

Tuesday, August 12, 2008

Expert hot tubbing?

Published: August 12, 2008
Unlike in American courts, in most of the rest of the world expert witnesses are selected by judges and are meant to be neutral.

"In most of the rest of the world, expert witnesses are selected by judges and are meant to be neutral and independent. Many foreign lawyers have long questioned the American practice of allowing the parties to present testimony from experts they have chosen and paid.

The European judge who visits the United States experiences “something bordering on disbelief when he discovers that we extend the sphere of partisan control to the selection and preparation of experts,” John H. Langbein, a law professor at Yale, wrote in a classic article in The University of Chicago Law Review more than 20 years ago."

"Dr. Frank Gersh, the defense expert in the case, did not respond to a request for comment. But Dr. Leonard Welsh, the psychologist who testified for the state, said he sometimes found his work compromising."

“After you come out of court,” Dr. Welsh said, “you feel like you need a shower. They’re asking you to be certain of things you can’t be certain of.”

"He might have preferred a new way of hearing expert testimony that Australian lawyers call hot tubbing."

"In that procedure, also called concurrent evidence, experts are still chosen by the parties, but they testify together at trial — discussing the case, asking each other questions, responding to inquiries from the judge and the lawyers, finding common ground and sharpening the open issues. In the Wilkins case, by contrast, the two experts “did not exchange information,” the Court of Appeals for Iowa noted in its decision last year."


Friday, August 8, 2008

Prospect theory in court

Study Finds Settling Is Better Than Going to Trial
By JONATHAN D. GLATER, New York Times

Published: August 7, 2008

From the article:

The findings are consistent with research on human behavior and responses to risk, said Martin A. Asher, an economist at the University of Pennsylvania and a co-author. For example, psychologists have found that people are more averse to taking a risk when they are expecting to gain something, and more willing to take a risk when they have something to lose.

“If you approach a class of students and say, I’ll either write you a check for $200, or we can flip a coin and I will pay you nothing or $500,” most students will take the $200 rather than risk getting nothing, Mr. Asher said.

But reverse the situation, so that students have to write the check, and they will choose to flip the coin, risking a bigger loss because they hope to pay nothing at all, he continued. “They’ll take the gamble.”
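The reflection effect Asher describes can be reproduced with a toy value function that is concave for gains and convex for losses. The square-root curvature below is an assumption chosen for simplicity, not an estimate from prospect theory, and the probability-weighting component of the theory is omitted.

```python
def value(x, alpha=0.5):
    """Illustrative prospect-theory-style value function: concave for
    gains, convex for losses. alpha < 1 is an assumed curvature
    parameter, not an empirical estimate."""
    return x ** alpha if x >= 0 else -((-x) ** alpha)

def prospect_value(outcomes):
    """Subjective value of a gamble given (probability, outcome) pairs."""
    return sum(p * value(x) for p, x in outcomes)

# Gain frame: $200 for sure vs. a coin flip for $500 or nothing
sure_gain = prospect_value([(1.0, 200)])
gamble_gain = prospect_value([(0.5, 500), (0.5, 0)])

# Loss frame: pay $200 for sure vs. a coin flip for -$500 or nothing
sure_loss = prospect_value([(1.0, -200)])
gamble_loss = prospect_value([(0.5, -500), (0.5, 0)])

print(sure_gain > gamble_gain)    # True: risk-averse for gains
print(gamble_loss > sure_loss)    # True: risk-seeking for losses
```

Concavity over gains makes the sure $200 feel worth more than half the feel of $500; convexity over losses reverses the preference, which is exactly the classroom result Asher describes.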

Tuesday, May 6, 2008

How many cases are needed for judgment analysis?

Knofczynski, G. T., & Mundfrom, D. (2008). Sample Sizes When Using Multiple Linear Regression for Prediction. Educational and Psychological Measurement, 68, 431-442.

http://epm.sagepub.com/cgi/reprint/68/3/431
(the link will get you the article if you are on the campus network)

This paper is based on Monte Carlo simulation.

"Unlike the previously mentioned studies, this study is not concerned with finding an accurate value of the squared multiple correlation coefficient or minimizing the shrinkage of the squared multiple correlation coefficient. Instead, this research attends to the task of finding sample regression models that predict similarly to population regression models. More precisely, what sample size is needed to ensure, with a desired amount of accuracy, that the sample regression equation will perform similarly to the population regression equation? These minimum sample sizes were determined by conducting a series of Monte Carlo simulations. This study determines minimum sample sizes for a wide range of population correlation structures." (p. 433)

The chart below shows how recommended sample size varies with R2.

In judgment analysis, we can assume that R2 will be around .7. The tables in this paper suggest that our general guideline of "the number of cases should be at least six times the number of cues" is reasonable. I am a little unclear on how cue intercorrelations factor into their results. They indicate that they did explore different correlation matrices, but I don't see these reflected in their tables.

They conclude:

"When utilizing MLR for prediction purposes, any author or researcher who does not take some aspect of the relationship between variables into consideration when making a sample size recommendation will seldom determine an appropriate sample size needed for the study. Also, the number of predictor variables is an important factor in determining the minimum required sample size. Authors and researchers who do not use the number of predictor variables as a determining factor when selecting appropriate sample sizes will probably end up with sample sizes that are too small or too large.

We recommend using the sample sizes presented in this article as a guideline when using multiple regression for predictive purposes. The results of this study are not recommended when using multiple regression for purposes other than prediction. Different applications of multiple regression usually require different minimum sample sizes (Brooks & Barcikowski, 1995, 1996; Cascio et al., 1978; Darlington, 1990; Gross, 1973; Pedhazur, 1997; Tabachnick & Fidell, 2001)." (p. 441)

In other words, if you are interested in estimating R2 or weights, these sample size estimates don't apply. They only indicate that the sample model will predict about as well in the sample as the population model predicts in the population.
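For the curious, the paper's Monte Carlo logic can be sketched roughly as follows. This is my own reconstruction, not the authors' code: the unit population weights, the uncorrelated standard-normal cues, and the accuracy criterion (correlation between sample-model and population-model predictions on fresh cases) are all simplifying assumptions.

```python
# Hedged sketch of the Monte Carlo logic: draw samples from a known
# "population" regression model, fit a sample model, and check how
# closely the sample model's predictions track the population model's.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, n_cues=3, pop_r2=0.7, n_reps=500):
    """Median correlation between sample-model and population-model
    predictions on fresh data, for sample size n."""
    beta = np.ones(n_cues)          # assumed population weights
    signal_var = n_cues             # uncorrelated unit-variance cues
    # scale the error so the population R^2 is roughly pop_r2
    noise_sd = np.sqrt(signal_var * (1 - pop_r2) / pop_r2)
    cors = []
    for _ in range(n_reps):
        X = rng.standard_normal((n, n_cues))
        y = X @ beta + noise_sd * rng.standard_normal(n)
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        X_new = rng.standard_normal((1000, n_cues))
        cors.append(np.corrcoef(X_new @ b, X_new @ beta)[0, 1])
    return float(np.median(cors))

# e.g. with 3 cues, compare n = 18 ("six times the cues") to a larger n
print(simulate(18), simulate(100))
```

Running something like this for different numbers of cues and levels of R2 is, in spirit, how the paper's minimum-sample-size tables were built; the higher the population R2, the smaller the sample needed for the sample model to predict like the population model.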

Sunday, May 4, 2008

Brunswikian versus Gibsonian research

Click here to see summary of Vicente's article

Source: Vicente, K. J. (2003). “Beyond the Lens Model and Direct Perception: Toward a Broader Ecological Psychology.” In Ecological Psychology. Volume 15, Issue 3, pp. 241-267

Any suggestions/edits/thoughts on the summary of Vicente's article are most welcome.

Friday, April 18, 2008

Prospect Theory or Image Theory?

We had a disagreement about whether Prospect Theory or Image Theory involves a two-stage decision-making process: (1) editing followed by (2) evaluation.

The quoted text below is from Payne's 1981 Psychological Bulletin article, "Contingent Decision Behavior." It is excerpted from the last page, in which he contrasts Hammond's (1980) hypothesis that cognition will oscillate with the two-stage process of Prospect Theory.



"The idea of switching among modes of thought seems reasonable. The relation between time and modes of thought, however, may have even more order than Hammond suggested. Consider, for example, prospect theory (Kahneman & Tversky, 1979). A key concept in prospect theory is that risky-choice behavior consists of a two-phase process. The first phase involves editing the given decision problem into a simpler representation in order to make the second phase of evaluation and choice of gambles easier for the decision maker. Included in the first phase are such editing operations as coding, cancellation, and segregation (Kahneman & Tversky, 1979; Tversky & Kahneman, 1981). Editing operations would seem to correspond to the intuitive and perceptual mode of thought. Evaluation would be more an analytical mode of thought. Consequently, a combination of the Hammond and Kahneman and Tversky ideas suggests that a complex risky choice problem will involve a progression from intuitive to analytical cognition. This suggests that the types of errors observed and the influence of various task variables
will vary systematically over the course of the risky-problem-solving episode. Of course, the possibility exists that the process of intuitive to analytical cognition could be short circuited at any time."

How Tasks Change

Tom raises an interesting question about how task properties change. I have thought about two ways in which they change: (1) through cognitive engineering efforts, and (2) certain properties - cue intercorrelations and cue validities (note: two components of Re) - change as more information is added. The first is self-explanatory, but the second deserves an example.

Consider again the Moneyball example we discussed at the meeting (which was fun, by the way!). I have seven years' worth of baseball statistics for each team. Simple analysis clearly shows that the validities and intercorrelations for all the cues (with the obvious exception of the wins-losses relationship) change from year to year.

The question I have is: how the heck do we model that, say, in the Continuum Standard Model I am proposing? It may be wisest to assume randomness and add a noise seed to those specified task properties. Feedback anyone? (Get it? It's a joke. You're right - not funny)
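One way to implement the "noise seed" idea is to let the specified task properties drift as a random walk around their baseline values. Everything below - the baseline validities, the drift size, the clipping - is a made-up illustration, not part of the Continuum Standard Model as proposed.

```python
# Sketch of a "noise seed" on task properties: cue validities drift from
# year to year as a seeded random walk around specified baseline values.
import numpy as np

rng = np.random.default_rng(42)   # the "noise seed"

def drifting_validities(baseline, n_years, drift_sd=0.05):
    """Yield one validity vector per year: the baseline plus accumulated
    Gaussian drift, clipped to the valid correlation range [-1, 1]."""
    v = np.array(baseline, dtype=float)
    for _ in range(n_years):
        v = np.clip(v + rng.normal(0.0, drift_sd, size=v.shape), -1.0, 1.0)
        yield v.copy()

baseline = [0.6, 0.4, 0.3]        # hypothetical cue validities
for year, v in enumerate(drifting_validities(baseline, 7), start=1):
    print(year, np.round(v, 2))
```

The same trick would work on the cue intercorrelation matrix, though there one has to take care that the perturbed matrix stays positive definite (e.g., by perturbing a factor of the matrix rather than the correlations directly).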

Monday, April 7, 2008

Birnbaum's presentations and talks

In case you have not seen it on the JDM listserv, Michael H. Birnbaum has a number of interesting tutorials and talks on the following site:
http://psych.fullerton.edu/mbirnbaum/talks

In addition, he has an Archive of Experiments and Data at:
http://psych.fullerton.edu/mbirnbaum/archive.htm

Monday, March 31, 2008

Let's Go Team!

JDMers,

I recently came across a research report in the journal Science that quantitatively measured the impact of teamwork on the production of knowledge. The main findings of Wuchty, Jones, and Uzzi (2007) include: (a) research is increasingly done in teams in virtually all fields (scientific journals and patents), (b) teams produce the highest-impact (i.e., most highly cited) research, and (c) these trends are increasing over time.

I met this past weekend with Andy Whitmore - a fellow doctoral student at Albany (also a groomsman in my wedding) - and we discussed a strategy for conducting research that will benefit us both. Knowing that our primary goal is to graduate ASAP, we have chosen research questions that will aid us in attaining this goal.

We should use this forum - as well as our monthly meetings in Albany - to find avenues of collaboration that can benefit us all. Individually none of us may be the likes of Ken Hammond, but together we can become the next hub of JDM research!

~RMT

Friday, March 28, 2008

Berkeley Interview - Daniel Kahneman

Hey All,

I think this interview with Kahneman is really interesting.

http://globetrotter.berkeley.edu/people7/Kahneman/kahneman-con0.html


~RMT

CCT Citation Network

Hello All,

I am currently working on a literature review of Cognitive Continuum Theory from 1980 to 2007. Social network analysis is being used to map the literature in the form of a citation network. The image below is a visualization of the work-in-progress - a few more nodes will be added shortly. However, I thought I'd give you a snippet. Node size is a function of the number of times a particular work is cited. Hammond (1996) and Hammond, Hamm, Grassia, & Pearson (1987) are the two largest nodes. The green nodes are part of the fastest growing CCT subfield, those works in the domain of medical decision making. Enjoy!
~RMT


Wednesday, January 2, 2008

Availability cascade: Heuristics & biases meet loops

In 2008, a 100 Percent Chance of Alarm
By JOHN TIERNEY
Published: January 1, 2008

"Thanks to availability entrepreneurs, misinterpreting the weather is getting easier and easier."

"Today's interpreters of the weather are what social scientists call availability entrepreneurs: the activists, journalists and publicity-savvy scientists who selectively monitor the globe looking for newsworthy evidence of a new form of sinfulness, burning fossil fuels."

Availability entrepreneurs? So now availability is a market good?

"When judging risks, we often go wrong by using what's called the availability heuristic: we gauge a danger according to how many examples of it are readily available in our minds. Thus we overestimate the odds of dying in a terrorist attack or a plane crash because we've seen such dramatic deaths so often on television; we underestimate the risks of dying from a stroke because we don't have so many vivid images readily available."

"Slow warming doesn't make for memorable images on television or in people's minds, so activists, journalists and scientists have looked to hurricanes, wild fires and starving polar bears instead. They have used these images to start an "availability cascade," a term coined by Timur Kuran, a professor of economics and law at the University of Southern California, and Cass R. Sunstein, a law professor at the University of Chicago."

"The availability cascade is a self-perpetuating process: the more attention a danger gets, the more worried people become, leading to more news coverage and more fear. Once the images of Sept. 11 made terrorism seem a major threat, the press and the police lavished attention on potential new attacks and supposed plots. After Three Mile Island and "The China Syndrome," minor malfunctions at nuclear power plants suddenly became newsworthy."


So the availability heuristic is used to create an availability cascade that can be used by availability entrepreneurs to deceive us.