Recommendations for peer review in the current (strained?) climate

In this post, I discuss challenges that arise when peer reviewing submitted articles in the current tense climate. This climate stems from the growing recognition that we need to report our methods and results more openly and fully, avoiding questionable research practices and hence questionable conclusions. The post is inspired by a recent piece in which two authors felt unfairly accused of “nefarious practices”, and also by some of my own recent experiences peer reviewing articles.

The goal of peer review — for empirical articles at least — is to carefully evaluate research to make sure that the conclusions drawn from the evidence are valid (i.e., correct). This involves evaluating many aspects of the reported research, including whether correct statistical analyses were carried out, whether appropriate experimental designs were used, and whether any confounds were unintentionally introduced, to name a few.

Another concern, which has recently received a lot more attention, is to assess the extent to which flexibility in design and/or analyses may have contributed to the reported results (Simmons et al., 2011; Gelman & Loken, 2013). That is, if a set of data is analyzed in many different ways and such analytic multiplicity isn’t appropriately accounted for, incorrect conclusions can be drawn from the evidence due to an inflated false positive error rate (e.g., incorrectly concluding that an IV had a causal effect on a DV when in fact the data are entirely consistent with what one would expect from sampling error alone, assuming the null is true).
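A small simulation makes this inflation concrete. The sketch below (my own illustration, not from any of the papers discussed) simulates data where the null is true, then compares a single pre-specified test against a "flexible" strategy that also tries two hypothetical post-hoc exclusion rules and keeps whichever result looks best. The exclusion rules and critical value are illustrative assumptions, not anyone's actual analysis pipeline.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(0)
n_sims, n, crit = 4000, 40, 1.99   # crit: approx. two-tailed .05 cutoff

hits_single = hits_flexible = 0
for _ in range(n_sims):
    # Both groups come from the same distribution: the null is true.
    a, b = rng.normal(size=n), rng.normal(size=n)
    # Pre-specified analysis: one test on the full sample.
    t_full = abs(welch_t(a, b))
    # Flexible analysis: also try two post-hoc exclusion rules
    # (both hypothetical) and keep the most "significant" version.
    t_trim = abs(welch_t(np.sort(a)[1:-1], np.sort(b)[1:-1]))  # drop extremes
    t_cut = abs(welch_t(a[np.abs(a) < 2], b[np.abs(b) < 2]))   # |z| < 2 rule
    hits_single += t_full > crit
    hits_flexible += max(t_full, t_trim, t_cut) > crit

print(f"single analysis:   {hits_single / n_sims:.3f}")   # near .05
print(f"flexible analyses: {hits_flexible / n_sims:.3f}")  # above .05
```

Even with only three candidate analyses, cherry-picking the best one pushes the false positive rate above the nominal 5% level — and no individual analysis looks suspicious on its own, which is why a reviewer can only catch this by asking what else was tried.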

Hence, a crucial task when reviewing an empirical article is to rule out the possibility that flexibility in analyses (and/or design, e.g., the data collection termination rule) accounts for the reported results, and hence to avoid invalid conclusions being drawn. From my perspective, however, it is really important that we as reviewers do this carefully, so that the authors whose work is being reviewed do not feel accused of intentional p-hacking or research misconduct.

Here’s an example to demonstrate my point. During peer review of an article on goal-directed bias in memory judgments (at Consciousness & Cognition), O’Connor & Mill felt unfairly accused of “unconventional and nefarious practices” in analyzing their data (see here for details). We don’t have all of the details, but it appears one of the reviewers was concerned about how the authors made exclusions with regard to (1) an overly low sensitivity index (d’) and (2) native language requirements. This reviewer went on to say that “the authors must accept the consequences of data that might disagree with their hypotheses”. To be clear, this reviewer was completely justified in worrying that flexibility in the exclusion criteria could have led to invalid conclusions regarding the target phenomenon (i.e., how goals can bias memory processes). However, in my opinion, the language used to express this concern was inappropriate, because it insinuated that such flexibility may have been intentionally exploited.

Another example comes from a recent paper I reviewed that reported evidence that “response effort” may moderate the impact of cleanliness priming on moral judgments (under review at Frontiers). On the surface, the evidence seemed very strong, but upon closer inspection I realized there was quite a bit of flexibility with respect to (1) how “response effort” was operationalized across the four reported studies and (2) the criteria used to exclude participants who exhibited “insufficient effort responding”. Concerned that such flexibility may have contributed to an inflated false positive error rate (and hence invalid conclusions), I carefully delineated these concerns and concluded my review by stating:

“In sum, the main problem is that based on the methods and results presented in the current manuscript, we cannot rule out the possibility that unintentional confirmation bias inadvertently (1) biased the operationalization of “response effort” and (2) biased the chosen exclusion criteria, which in combination represents a potential alternative explanation for the current pattern of results.”

Note that I intentionally framed my concern in terms of flexibility in analyses having possibly biased the results unintentionally. This is crucial because most authors are probably not aware that flexibility in analyses/methods may have unduly influenced their reported results. Of course they will become defensive if you insinuate that they intentionally exploited such flexibility when in fact they did not. This would be akin to insinuating that researchers intentionally confounded their experimental manipulation! The point is that flexibility in analyses/design — just like an experimental confound — needs to be ruled out, and this is necessary for valid inference regardless of whether the problem was introduced intentionally or unintentionally.


With all of this in mind, here are my recommendations for reviewers:

1. Always frame your concerns about flexibility in analyses/design (or any other concern) using language that focuses on the ideas rather than on the authors.
2. Give the benefit of the doubt to authors and always assume that flexibility in analyses/design may have unintentionally influenced the reported results.
3. Use a standard reviewer statement that has been specifically designed to help with such matters. The statement (developed by Uri Simonsohn, Joe Simmons, Leif Nelson, Don Moore, and myself) can be used by any reviewer to request disclosure of additional methodological details, which helps in assessing the extent to which flexibility in analyses/design may have contributed to the reported results. Using this standard statement is another way to avoid making the authors feel as though you are insinuating they have intentionally done something questionable.

“I request that the authors add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes. The authors should, of course, add any additional text to ensure the statement is accurate. This is the standard reviewer disclosure request endorsed by the Center for Open Science [see]. I include it in every review.”
