George Miller

George Miller, one of the instigators of the “cognitive revolution”, has died. He passed away on Sunday, July 22nd, of natural causes, aged 92. I mentioned Miller’s classic 1956 paper in my first post on this blog. Here’s a nice article he wrote for Trends in Cognitive Sciences in 2003: “The cognitive revolution: a historical perspective”. The article starts:

Cognitive science is a child of the 1950s, the product of a time when psychology, anthropology and linguistics were redefining themselves and computer science and neuroscience as disciplines were coming into existence. Psychology could not participate in the cognitive revolution until it had freed itself from behaviorism, thus restoring cognition to scientific respectability. By then, it was becoming clear in several disciplines that the solution to some of their problems depended crucially on solving problems traditionally allocated to other disciplines. Collaboration was called for: this is a personal account of how it came about.

Miller writes “[a]t the time it was happening I did not realize that I was,
in fact, a revolutionary”. He was, of course, one of the most important figures in psychology, and he will be missed.

The Psychology of Science

The Psychology of Science is the study of

any form of scientific or technological thought or behavior, with the understanding that scientific thought and behavior are defined both narrowly and broadly. Narrowly defined, the field refers to thought and behavior of professional scientists and technologists. Broadly defined, the field includes thought and behavior of any person or people (past or present) of any age (infants to the elderly) engaged in theory construction, learning scientific or mathematical concepts, model building, hypothesis testing, scientific reasoning, problem finding or solving, or creating or working on technology. Indeed, mathematical, engineering, and invention activities are included as well.

This weekend the 4th Biennial Conference of the International Society for the Psychology of Science and Technology (ISPST) is being held in Pittsburgh. Past conferences have been held in Zacatecas (2006), Berlin (2008), and Berkeley (2010). Although I did not go to the first conference, I have been to the others, and the ISPST conference ranks among my favorites, given that the focus of most of my work falls under the Psychology of Science umbrella. At these conferences I have met and/or listened to talks by several excellent scholars, and have made many new friends.

I am looking forward to hearing the keynote addresses by Paul Thagard (who has a nice blog), Clark Chinn, Iris Tabak, and Katy Börner. I am also expecting interesting presentations from Keisha Varma, Amy Masnick, Susanne Koerber, Bill Brewer, and Ryan Tweney (among others).

Dishonesty, Bias, and Replication in Psychology

In the wake of the cases of Diederik Stapel last year, and Marc Hauser the year before, another depressing case of academic fraud has come to light:

Smeesters, who was Professor of Consumer and Society in the Rotterdam School of Management [Part of Erasmus University – SC], was found guilty by the Inquiry of ‘data selection’ and failing to keep suitable data records. Smeesters resigned his post after admitting to using a ‘blue dot technique’ whereby, after achieving a null result, he omitted participants who failed to read the instructions properly (7 to 10 per study, he claims), thus lifting the findings into statistical significance – a procedure he failed to detail in his affected papers. However, Smeesters blamed the unavailability of his raw data on nothing more heinous than a computer crash and a lab move. The Inquiry said it ‘doubted the credibility’ of these reasons.

On the face of it, removing participants who did not demonstrate that they had read the instructions does not seem problematic. However, the press release from Erasmus University states:

Two articles were found to have irregularities with findings that, in a statistical sense, are highly unlikely. The raw data forming the basis of these articles was not available for inspection by third parties, and the professor indicated that he had selected data so that the sought-after effects were statistically significant.

The problems with these papers came to light when a U.S. scientist analyzed the published data from one of Smeesters’ papers and found that the data were “too good to be true.” The technique used by this scientist is a new method for detecting unlikely patterns in data; at present, it remains an unpublished secret. The investigating panel at Erasmus University

asked two statistical experts to analyze the method; after concluding it was “valid,” it took a close look at the papers co-authored by Smeesters—including those still under review— for which he had control over the data. The statistical method could be applied to a total of 22 experiments; of those, three experiments were problematic…. [T]he university panel goes on to say that it can’t determine whether the numbers Smeesters says he massaged existed at all. He could not supply raw data for the three problematic experiments; they had been stored on a computer at his home that had crashed in September 2011 and whose data his brother-in-law had assured him were irretrievable. In addition, the “paper-and-pencil data” had also been lost when Smeesters moved his office at the school. The panel says it cannot establish Smeesters committed fraud, but says he is responsible for the loss of the raw data and their massaging.

This certainly looks like a case in which it would be easy to be cynical about the “conveniently” missing electronic and paper data; however, it is possible that the data did exist and really were lost. So, unlike Stapel, Smeesters may not be guilty of fabricating data, but I find his cavalier attitude toward the data massaging rather troubling.

According to the report, Smeesters said this type of massaging was nothing out of the ordinary. He “repeatedly indicates that the culture in his field and his department is such that he does not feel personally responsible, and is convinced that in the area of marketing and (to a lesser extent) social psychology, many consciously leave out data to reach significance without saying so.”
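
To see why this sort of post hoc “data selection” matters, here is a minimal sketch, in Python with made-up numbers and a deliberately crude exclusion rule, of what happens to the false-positive rate when, as in the “blue dot technique” described above, a handful of participants are dropped only after the initial analysis comes back non-significant. Nothing here is Smeesters’ actual procedure; the simulation simply assumes no true effect, so every “significant” result it produces is spurious.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=40, k=8):
    """Two groups drawn from the same distribution (no true effect).
    If the first test is non-significant, drop the k participants whose
    scores most oppose the 'sought-after' direction and test again."""
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    p = stats.ttest_ind(a, b).pvalue
    if p > 0.05:
        a = np.sort(a)[k // 2:]      # drop the lowest scores in group a
        b = np.sort(b)[: -(k // 2)]  # drop the highest scores in group b
        p = stats.ttest_ind(a, b).pvalue
    return p < 0.05

rate = np.mean([one_study() for _ in range(5000)])
print(f"False-positive rate with conditional exclusion: {rate:.2f}")
```

In runs of this sketch the error rate lands well above the nominal 5%, and that is the point: exclusions made only when the first result is unwelcome, and then left unreported, quietly break the guarantee the p-value is supposed to provide.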

Ed Yong, who has also commented on the Smeesters case, published an article in Nature in May in which he discusses the attitudes of some psychologists towards biased data collection and reporting:

In a survey of more than 2,000 psychologists, Leslie John, a consumer psychologist from Harvard Business School in Boston, Massachusetts, showed that more than 50% had waited to decide whether to collect more data until they had checked the significance of their results, thereby allowing them to hold out until positive results materialize. More than 40% had selectively reported studies that “worked”. On average, most respondents felt that these practices were defensible. “Many people continue to use these approaches because that is how they were taught,” says Brent Roberts, a psychologist at the University of Illinois at Urbana–Champaign.
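
The first practice in that survey, peeking at the results and then deciding whether to collect more data, sounds harmless, so it is worth seeing what it does to error rates. Below is a small sketch in Python (arbitrary sample sizes, not anyone’s actual procedure) of “optional stopping”: testing repeatedly as participants trickle in and stopping the moment p drops below .05, with no true effect present.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def optional_stopping(n_start=20, n_max=60, step=10):
    """Start with n_start per group, test, and keep adding participants
    in batches until p < .05 or n_max is reached. No true effect exists."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05 or len(a) >= n_max:
            return p < 0.05
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))

rate = np.mean([optional_stopping() for _ in range(5000)])
print(f"False-positive rate with optional stopping: {rate:.2f}")
```

Each individual test here is perfectly legitimate; it is the stop-as-soon-as-it-is-significant rule that pushes the false-positive rate well above the advertised 5%, which is why the survey finding is more worrying than it might first appear.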

What is to be done? I think there are two key issues. First, replication studies need to be conducted with greater frequency, and journals need to be willing to publish such studies. When Daryl Bem published a paper last year in which he provided evidence for psi phenomena, many were naturally skeptical. Stuart Ritchie, Chris French, and Richard Wiseman carried out three attempts to replicate one of Bem’s experiments, but their results did not show evidence for psi. When they submitted their work to the Journal of Personality and Social Psychology, the same journal that published Bem’s paper, it was rejected on the grounds that JPSP does not publish replications. According to Chris French:

We then submitted it to Science Brevia and received the same response. The same thing happened when we submitted it to Psychological Science… When we submitted it to the British Journal of Psychology, it was finally sent for peer review. One referee was very positive about it but the second had reservations and the editor rejected the paper. We were pretty sure that the second referee was, in fact, none other than Daryl Bem himself, a suspicion that the good professor kindly confirmed for us. It struck us that he might possibly have a conflict of interest with respect to our submission. Furthermore, we did not agree with the criticisms and suggested that a third referee be brought in to adjudicate. The editor rejected our appeal.

Ritchie, Wiseman, and French’s paper was eventually published in PLoS ONE.

The second issue that needs addressing is the type of papers that psychology journals want to publish. As discussed above, editors are not keen on replications. In addition, they (in my experience and that of many of my colleagues) tend to prefer “sexy”, “newsworthy” research to solid incremental science. It is not uncommon for a psychologist to be told “your work is sound and you’ve conducted the experiments well, but all you do is build on the main findings that have already been published by X and Y. Sure, you show the boundary conditions under which the effect occurs and extend the work to a novel sample, but here at the Journal of Exciting Psychology we want to publish novel findings.”