Dishonesty, Bias, and Replication in Psychology

In the wake of the cases of Diederik Stapel last year, and Marc Hauser the year before, another depressing case of academic fraud has come to light:

Smeesters, who was Professor of Consumer and Society in the Rotterdam School of Management [Part of Erasmus University – SC], was found guilty by the Inquiry of ‘data selection’ and failing to keep suitable data records. Smeesters resigned his post after admitting to using a ‘blue dot technique’ whereby, after achieving a null result, he omitted participants who failed to read the instructions properly (7 to 10 per study, he claims), thus lifting the findings into statistical significance – a procedure he failed to detail in his affected papers. However, Smeesters blamed the unavailability of his raw data on nothing more heinous than a computer crash and a lab move. The Inquiry said it ‘doubted the credibility’ of these reasons.

On the face of it, removing participants who did not demonstrate that they had read the instructions does not seem problematic. However, the press release from Erasmus University states:

Two articles were found to have irregularities with findings that, in a statistical sense, are highly unlikely. The raw data forming the basis of these articles was not available for inspection by third parties, and the professor indicated that he had selected data so that the sought-after effects were statistically significant.

The problems with these papers came to light when a U.S. scientist analyzed the published data from one of Smeesters’ papers and found that the data were “too good to be true.” The technique used by this scientist is a new method for detecting unlikely patterns in data; at present, the technique is an unpublished secret. The investigating panel at Erasmus University

asked two statistical experts to analyze the method; after concluding it was “valid,” it took a close look at the papers co-authored by Smeesters—including those still under review— for which he had control over the data. The statistical method could be applied to a total of 22 experiments; of those, three experiments were problematic…. [T]he university panel goes on to say that it can’t determine whether the numbers Smeesters says he massaged existed at all. He could not supply raw data for the three problematic experiments; they had been stored on a computer at his home that had crashed in September 2011 and whose data his brother-in-law had assured him were irretrievable. In addition, the “paper-and-pencil data” had also been lost when Smeesters moved his office at the school. The panel says it cannot establish Smeesters committed fraud, but says he is responsible for the loss of the raw data and their massaging.
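
The report does not say how the detection method works, so any specifics would be guesswork. Still, to give a feel for what “too good to be true” can mean statistically, here is a purely illustrative sketch in Python (all numbers invented, and not necessarily anything like the actual method): when several independent condition means are reported, sampling error alone should make them scatter by roughly sd/√n, so means that cluster much more tightly than that are suspiciously tidy.

```python
# Illustrative only: the panel's actual method was unpublished at the time.
# The idea sketched here is generic: under the null that all conditions share
# one true mean, condition means should vary by about sd/sqrt(n); if the
# reported means are far more similar than that, luck is a poor explanation.
import numpy as np

rng = np.random.default_rng(2)

reported_means = np.array([5.02, 5.04, 5.03, 5.05])  # hypothetical condition means
reported_sd = 1.2                                     # hypothetical within-condition SD
n_per_condition = 25                                  # hypothetical group size

observed_spread = reported_means.std(ddof=1)

# Simulate the spread of condition means expected from sampling error alone.
n_sims = 100_000
sim_means = rng.normal(loc=reported_means.mean(),
                       scale=reported_sd / np.sqrt(n_per_condition),
                       size=(n_sims, len(reported_means)))
sim_spread = sim_means.std(axis=1, ddof=1)

p_too_good = (sim_spread <= observed_spread).mean()
print(f"Chance of means this similar by luck alone: {p_too_good:.4f}")
```

A tiny probability here does not prove fraud on its own, but across 22 experiments it is the kind of red flag that prompts a panel to ask for the raw data.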

This certainly looks like a case in which it would be easy to be cynical about the “conveniently” missing electronic and paper data; however, it is possible that the data did exist and really did get lost. So, unlike Stapel, Smeesters may not be guilty of fabricating data, but I find his cavalier attitude toward the data massaging rather troubling.

According to the report, Smeesters said this type of massaging was nothing out of the ordinary. He “repeatedly indicates that the culture in his field and his department is such that he does not feel personally responsible, and is convinced that in the area of marketing and (to a lesser extent) social psychology, many consciously leave out data to reach significance without saying so.”
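
It is worth spelling out why this attitude matters. The sketch below is not Smeesters’ actual procedure (which was never fully documented); it simply assumes a two-group design with no true effect and shows how quietly dropping a handful of “inconvenient” participants after a null result pushes the false-positive rate well above the nominal 5%.

```python
# Hypothetical illustration of post-hoc exclusion, not Smeesters' procedure:
# simulate many two-group experiments with NO real effect, and whenever the
# t-test is non-significant, drop a few participants who most oppose the
# hoped-for difference and test again.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_drop, n_sims, alpha = 30, 4, 5000, 0.05
honest_hits = massaged_hits = 0

for _ in range(n_sims):
    a = rng.normal(size=n_per_group)  # "treatment" group; no true effect
    b = rng.normal(size=n_per_group)  # control group
    p = stats.ttest_ind(a, b).pvalue
    honest_hits += p < alpha
    if p >= alpha:
        # Null result: remove the lowest scores in a and the highest in b,
        # which pushes the group means apart, then re-run the test.
        a_trim = np.sort(a)[n_drop:]
        b_trim = np.sort(b)[:-n_drop]
        p = stats.ttest_ind(a_trim, b_trim).pvalue
    massaged_hits += p < alpha

print(f"False-positive rate, analysis as planned: {honest_hits / n_sims:.3f}")
print(f"False-positive rate, after exclusions:    {massaged_hits / n_sims:.3f}")
```

Excluding participants who genuinely failed a comprehension check is fine if the rule is set in advance; doing it only when the result is null, and only until significance appears, is what turns a judgment call into massaging.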

Ed Yong, who has also commented on the Smeesters case, published an article in Nature in May in which he discusses the attitudes of some psychologists towards biased data collection and reporting:

In a survey of more than 2,000 psychologists, Leslie John, a consumer psychologist from Harvard Business School in Boston, Massachusetts, showed that more than 50% had waited to decide whether to collect more data until they had checked the significance of their results, thereby allowing them to hold out until positive results materialize. More than 40% had selectively reported studies that “worked”. On average, most respondents felt that these practices were defensible. “Many people continue to use these approaches because that is how they were taught,” says Brent Roberts, a psychologist at the University of Illinois at Urbana–Champaign.
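
The first practice John describes, checking significance before deciding whether to collect more data, deserves a concrete illustration. A minimal simulation sketch (illustrative only, not a re-analysis of the survey): with no true effect, testing after every additional batch of participants and stopping as soon as p < .05 drives the false-positive rate far beyond the nominal 5%.

```python
# Optional stopping: keep adding participants and re-testing until p < .05
# or the budget runs out. There is no true effect in these simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
start_n, batch, max_n, alpha, n_sims = 20, 10, 100, 0.05, 5000
hits = 0

for _ in range(n_sims):
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:            # "it worked": stop and write it up
            hits += 1
            break
        if len(a) >= max_n:      # out of participants: give up
            break
        a.extend(rng.normal(size=batch))  # otherwise, run another batch
        b.extend(rng.normal(size=batch))

print(f"False-positive rate with optional stopping: {hits / n_sims:.3f}")
# A fixed-n design tested only once would sit near the nominal 0.05.
```

There are legitimate sequential designs that allow interim looks at the data, but they require correcting the significance threshold for each look; the informal version described in the survey does not.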

What is to be done? I think there are two key issues. First, replication studies need to be conducted with greater frequency and journals need to be willing to publish such studies. When Daryl Bem published a paper in which he provided evidence for psi phenomena last year, many were naturally skeptical. Stuart Ritchie, Chris French, and Richard Wiseman carried out three attempts to replicate one of Bem’s experiments, but their results did not show evidence for psi. When they submitted their work to the Journal of Personality and Social Psychology, the same journal that published Bem’s paper, it was rejected on the grounds that JPSP does not publish replications. According to Chris French:

We then submitted it to Science Brevia and received the same response. The same thing happened when we submitted it to Psychological Science… When we submitted it to the British Journal of Psychology, it was finally sent for peer review. One referee was very positive about it but the second had reservations and the editor rejected the paper. We were pretty sure that the second referee was, in fact, none other than Daryl Bem himself, a suspicion that the good professor kindly confirmed for us. It struck us that he might possibly have a conflict of interest with respect to our submission. Furthermore, we did not agree with the criticisms and suggested that a third referee be brought in to adjudicate. The editor rejected our appeal.

Ritchie, Wiseman, and French’s paper was eventually published in PLoS ONE.

The second issue that needs addressing is the type of papers that psychology journals want to publish. As discussed above, editors are not keen on replications. In addition, they (in my experience and that of many of my colleagues) tend to prefer “sexy”, “newsworthy” research to solid incremental science. It is not uncommon for a psychologist to be told “your work is sound and you’ve conducted the experiments well, but all you do is build on the main findings that have already been published by X and Y. Sure, you show the boundary conditions under which the effect occurs and extend the work to a novel sample, but here at the Journal of Exciting Psychology we want to publish novel findings.”


2 thoughts on “Dishonesty, Bias, and Replication in Psychology”

  1. One of my grad school professors taught me that it was an appropriate practice to collect data and then, before collecting more data, to check the results to see if they were significant. If not, collect more data. The professor’s assumption (which has some merit) was that non-significant results were likely due to low statistical power. If the effect size was larger than expected then perhaps the results would be significant with a smaller-than-expected sample; thus, save time and cease the experiment when significant results were obtained. If, on the other hand, the effect size was smaller than expected, it would be necessary to collect data from a larger sample. I never questioned this method, but it is easy to see how it plays into the researcher’s hand and favors significant findings.

    • I remember similar conversations in grad school. I had a conversation with a postdoc in which he pointed out that he had done a similar thing: non-significant effect -> collect more data -> bingo! We were concerned that any lack of a significant finding could thus be “corrected”. Having said that, your point about power is well made.
