Etienne P. LeBel & Anne Scheel
[Version 2.4; We thank Nick Brown for valuable feedback on a previous version of this blog post.]

Imagine your child is diagnosed with cancer. You have the choice between two drugs: One was developed and tested in a series of registered studies1, the other in non-registered studies. Which one do you choose? You would probably feel that the answer is a no-brainer — you want the drug whose efficacy was based on evidence least influenced by bias.
The extremely high stakes of pharmaceutical research, in the form of billion-dollar revenues generated from FDA-approved drugs, led the World Medical Association (WMA) in 2008 to institute mandatory study registration for all clinical trials reporting evidence on drug efficacy. This was preceded by the International Committee of Medical Journal Editors' (ICMJE) decision in 2005 that non-registered clinical trials would no longer be considered for publication. The logic is that the risk posed by researcher biases in the analysis and reporting of study results, including bias in reporting inconclusive or negative studies, is so high that non-registered studies simply cannot and should not be trusted.
The modern era of hyper-competitive, high-output academic research culture has also created extremely high stakes for individual researchers in the form of personal rewards such as prestigious jobs, promotions, book deals, outside financial interests, social status, and media attention. Consequently, there are no intellectually honest and defensible reasons against applying this same requirement to all published research involving human subjects. The person who prefers the cancer drug from registered studies cannot simultaneously dismiss the requirement of study registration for their own psychology studies. It follows that human-subjects research that is not publicly registered should not even be considered for publication in any scientific journal (psychology or otherwise).
Indeed, the latest revision of the Declaration of Helsinki ethical principles, from 2013, dictates precisely such a requirement:
- 35. Every research study involving human subjects must be registered in a publicly accessible database before recruitment of the first subject.
- 36. Researchers, authors, sponsors, editors and publishers all have ethical obligations with regard to the publication and dissemination of the results of research. Researchers have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports. All parties should adhere to accepted guidelines for ethical reporting. Negative and inconclusive as well as positive results must be published or otherwise made publicly available. Sources of funding, institutional affiliations and conflicts of interest must be declared in the publication. Reports of research not in accordance with the principles of this Declaration should not be accepted for publication.
Given that study registration is not yet mandatory in psychology, however, professional psychology researchers are not yet complying with these new ethical principles.2 Because of the high-stakes personal rewards of the current academic research culture, we strongly believe it is time that all professional psychology researchers abide by these new ethical principles requiring mandatory study registration, in addition to minimal reporting standards, open materials/data, and hypothesis pre-registration.
Anything short of this, given the environment in which researchers operate, fails to adhere to fundamental scientific principles: reporting and testing hypotheses with sufficient transparency, and thus falsifiability, to maximize the likelihood that we as a research community can conclude a hypothesis is wrong if it is in fact wrong (something that can now easily be achieved given new technologies3):
- Without study registration at a centralized public registry, it is impossible, for us as researchers, to account for the selective file-drawering of “failed” or inconclusive studies.
- Without a pre-registered method protocol (specified prior to data collection), it is near-impossible for us to account for the multitude of ways researchers may have (un)intentionally exploited analytic and design flexibility to achieve a publishable result.
- Without minimal reporting standards (e.g., 21-word solution, BASIC 4 Psychological Science reporting standard), we cannot properly evaluate the strength of the reported evidence.
- Without open materials, we cannot properly scrutinize the experimental design, nor can we conduct diagnostic independent replicability tests.
- Without open data, we cannot verify the analytic reproducibility or the analytic robustness of the reported results, which should be independently confirmed before precious research resources are invested in expensive independent replications.
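To make concrete what such a verification involves, here is a minimal sketch of an analytic reproducibility check against an openly archived data set; the file name, column names, and reported test statistics below are hypothetical placeholders rather than values from any actual study:

```python
# Minimal sketch of an analytic reproducibility check (all names/values hypothetical).
import pandas as pd
from scipy import stats

df = pd.read_csv("open_data.csv")  # hypothetical openly archived data file

# Re-run the reported analysis, here assumed to be an independent-samples t-test
treatment = df.loc[df["condition"] == "treatment", "score"]
control = df.loc[df["condition"] == "control", "score"]
result = stats.ttest_ind(treatment, control)

reported_t, reported_p = 2.31, 0.023  # hypothetical values copied from the paper
print(f"recomputed: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
print("matches reported t:", abs(result.statistic - reported_t) < 0.01)
```

Without the underlying data file, no reader or reviewer can run even this basic check, let alone probe how robust the result is to alternative analytic choices.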
Being a scientist is a special and precious privilege. It is not an irrevocable right. As credentialed professionals, public intellectuals, and mentors, we have an inordinate amount of influence on citizens, the media and journalists, industry research and corporations, government agencies, NGOs, and other researchers both within and outside our respective fields. But with such importance and respect comes great responsibility.
It follows that insufficiently transparent, and hence insufficiently falsifiable, research should be considered professionally unethical, for the following reasons:
- When the public funds research, taxpayers provide money in good faith that the funded projects will advance knowledge and help address societal problems. Non-falsifiable research wastes public funds that could otherwise be spent on social services and programs that reduce suffering and save lives.
- Non-falsifiable research also wastes additional public funds spent misguidedly trying to replicate and build upon such research.
- Non-falsifiable research also leads to costly and ineffective practical implementation attempts, which can have grave consequences on real-world practical, legal, and political decisions.
- Non-falsifiable research wastes the time of volunteering human subjects and in some cases unjustly puts their well-being at risk.
- Non-falsifiable research erodes the public's trust in scientists, invites further research funding cuts, and stifles society's evolution toward evidence-based policy-making.
We propose that all professional psychologists need to abide by the new 2013 Declaration of Helsinki ethical principles, which are consistent with current, lower-bar, country-based professional society codes of ethics, including those of the APA, CPA, DGPs, and VSNU, and with the European Code of Conduct for Research Integrity (as has been previously argued here). This is gravely needed for us to finally be accountable to the public: accountable for ensuring that all published research actually follows fundamental scientific principles, providing the degree of transparency and falsifiability required for scientific progress (building upon existing softer, voluntary initiatives such as the Commitment to Research Transparency and the TOP guidelines).
Such a new ethical code of conduct would explicitly stipulate the following standards for all published scientific research4:
- Public registration of all studies at a field-relevant centralized registry, including a pre-registered method protocol document that clearly describes the study rationale, the sample and design, and the planned data-analytic approaches (e.g., the documents submitted for IRB ethics approval).
- Compliance with fundamental reporting standards relevant to the reported research (e.g., BASIC 4; the CONSORT standard for experimental studies; the STROBE standard for observational/correlational studies).
- Open materials: Public online archiving of all relevant procedural details, materials, and measures, unless proprietary exclusions apply, to allow for proper scrutiny of experimental design and independent replicability tests.
- Open data: Public online archiving of all relevant data, whether raw or transformed, unless proprietary or confidentiality exclusions apply, to allow verification of the analytic reproducibility and analytic robustness of reported results.
Compliance with this new code of ethics could be implemented by having each stakeholder in a researcher's ecosystem (i.e., journals, professional societies, funding agencies, university employment contracts) require that individual researchers explicitly consent to following such a code. This is akin to the Hippocratic Oath for medical professionals, guided by the more general Hippocratic Oath proposed for all scientists (see also here). Once such an oath has been taken, violations of the new standards should be treated as unethical conduct and investigated as researcher misconduct by the appropriate stakeholder(s) involved.
We urgently need a serious discussion within the psychological research community about the minimum scientific standards that must be met to be an ethical researcher in this modern era of high-stakes, hyper-competitive, high-output academic research culture. This discussion should incite calls to action to ensure that all stakeholders vigilantly enforce compliance with this new code of ethics. Otherwise, the reputation of all professional psychologists will continue to be tarnished, extensive research waste and direct and indirect harm to society will continue, and the public's trust in science will be further eroded.
***Footnotes***
1. “Registered studies” as in studies registered in public centralized study registries prior to data collection, such as ClinicalTrials.gov.
2. We must emphasize, however, that a growing minority of psychologists have made admirable efforts to pre-register and provide open materials/open data for some or all of their studies.
3. E.g., technologies to safely store and share data and materials, preregister studies, establish a reproducible workflow, conduct multi-lab collaborations, verify the accuracy of one's own and others' reported results, and make manuscripts publicly available for pre-publication peer feedback.
4. These standards should not be misconstrued as guaranteeing scientific knowledge, but rather as minimal standards that need to be in place to allow the possibility of achieving valid and generalizable knowledge about how our world works.
***Comments***
I applaud the field’s shift towards pre-registering many studies (and had a hand in it, as part of the Badges initiative), agree with the need for a stronger code of ethics, and am doing preregistration in my own lab. But I don’t think preregistration should be required. Exploratory findings are ok if they are marked as exploratory – we should not prohibit their publication. Because I sometimes study effects with extremely large effect sizes, I run a lot of experiments with only 2 to 6 subjects. Almost invariably, I follow up with a larger experiment and can preregister that, but if for some reason I run out of resources, I don’t think we should prohibit publication of that initial experiment if it stumbled into something big, as long as the finding is marked exploratory so we take it with a grain of false-positive salt. If radio astronomers, while calibrating their equipment, receive a message that seems to be from aliens, I’d like them to publish that rather than waiting for a preregistered replication, as the aliens might not send another message. I have a few other problems with requiring all the things you suggest (there are many kinds of human subjects research, and not all should be burdened with, or fit well with, the standardized reporting etc. that you mention), but that’s it for now…
“But I don’t think preregistration should be required. Exploratory findings are ok if they are marked as exploratory – we should not prohibit their publication”
How do we determine exactly how exploratory they were?
For instance, if I measure 100 variables in some experiment and report only the 5 that were significant, marking them as “exploratory”, my published results could surely be considered “weaker” evidence than if I had measured 10 variables and reported only the 5 that were significant, also marked as “exploratory”. In both cases I adhered to “marking them as exploratory”, but I would reason that the additional information about exactly how many variables I measured is very important for determining just how many “grains of false-positive salt” to take these findings with.
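To put a rough number on that intuition, here is a minimal simulation sketch (illustrative only; the sample size, seed, and use of one-sample t-tests are arbitrary assumptions, not taken from any actual study) of how many variables reach p < .05 by chance alone when 100 versus 10 purely null variables are measured:

```python
# Illustrative only: expected number of chance "significant" results when
# 100 vs. 10 purely null variables are measured in one experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 30        # hypothetical sample size
n_simulations = 5_000  # number of simulated experiments

def expected_false_positives(n_variables):
    """Average count of variables with p < .05 when every true effect is zero."""
    counts = []
    for _ in range(n_simulations):
        data = rng.standard_normal((n_variables, n_subjects))   # pure noise
        pvals = stats.ttest_1samp(data, 0.0, axis=1).pvalue     # test each variable
        counts.append(np.sum(pvals < 0.05))
    return float(np.mean(counts))

print(expected_false_positives(100))  # roughly 5 false positives per experiment
print(expected_false_positives(10))   # roughly 0.5 false positives per experiment
```

Five “exploratory” hits look very different depending on which of those two worlds they came from, and without registration the reader has no way of knowing which world that was.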
It seems to me that the following 3 arguments in the post above are very relevant to answering this question. If that makes any sense, it could be argued that pre-registration is important for *all* research simply because it makes it possible to determine whether, and to what extent, the research is exploratory or confirmatory:
- Without study registration at a centralized public registry, it is impossible, for us as researchers, to account for the selective file-drawering of “failed” or inconclusive studies.
- Without a pre-registered method protocol (specified prior to data collection), it is near-impossible for us to account for the multitude of ways researchers may have (un)intentionally exploited analytic and design flexibility to achieve a publishable result.
- Without minimal reporting standards (e.g., 21-word solution, BASIC 4 Psychological Science reporting standard), we cannot properly evaluate the strength of the reported evidence.
Great points, Alex! We’re definitely not suggesting the prohibition of publishing exploratory findings. As suggested by Anonymous’ comment below, however, registered pre-registration is simply required to be able to truly distinguish exploratory research from confirmatory research. The stakes are so high that people *will* try to get ahead by presenting cherry-picked results as “exploratory” in a non-transparent way, making their results appear more compelling than they actually are.
Of course this is annoying for honest people like you, but no matter how honest you think you’re being, how do you know that you’re not just massively fooling yourself when interpreting a set of findings as merely “exploratory” when in fact they were planned as confirmatory from the outset? (I.e., exploratory results that don’t pan out are file-drawered and never talked about, whereas exploratory results that do work can be non-transparently presented as more compelling than they actually are.)
Registered pre-registration of all exploratory and confirmatory research studies fixes all of these problems (and to us, it is the ONLY way to overcome all of these serious problems). We realize this sounds strange and very onerous, but we really can’t think of any other solution given the high stakes and risk of bias involved.
That said, it’s important to emphasize that researchers would still be able to publish papers based on secondary data *without* registration (though planned analyses could and should still be pre-registered), and of course researchers would still be able to publish theoretical, conceptual, and simulation-based papers, none of which would need to be registered.
It is only when NEW data are collected that registered pre-registration would be required (pilot testing of stimuli and pilot testing related to the calibration of instruments would of course NOT require registration).
Thanks for some good points. So you would ban journal publication of my finding from an unpreregistered 3-subject experiment which I couldn’t follow up on due to lack of resources?
Some days I do 3 pilot experiments in one afternoon. I am concerned that having to preregister each of these will greatly retard the rate of new discoveries in the field of psychophysics. And I am dismayed by the prospect of ethics/IRB panels becoming involved, as that is sure to result in burdensome virtual paperwork, based on past experience (http://www.chronicle.com/article/Long-Sought-Research/239459).
The proposal for required preregistration etc. is described as applying to “all published scientific research”, which as I mentioned in the original comment, seems to preclude the publication in a journal of a serendipitous finding of a message from aliens. This would seem to be an instance of the “NEW data” you mention above.
Actually, no: pilot studies would be exempt from the registered pre-registration requirement, so you could still run such studies, unencumbered, just as you do now.
The ultimate problem is this:
If UNREGISTERED research can still “produce” positive evidence for ESP phenomena (e.g., precognitive ability to see into the future), as Bem continues to do in pre-registered but UNREGISTERED studies (NO JOKE, he recently confirmed this with me via email; D. Bem, personal communication, March 13, 2017), then is it really ethical for taxpayers to continue paying for such UNREGISTERED research?
Just thinking out loud about possible ways to share exploratory vs. confirmatory findings. I agree that an exciting finding is typically something worth following up on and worth sharing (with all appropriate caveats about not presenting it, accidentally, as confirmatory). I wonder if a future ideal would be to share such findings as preprints, distribute it as much as you can with the emphasis on “this is a possibly neat idea, but needs to be confirmed! I’m not going to do that right now, but have at it, or let’s collaborate!”. I think the ideal is to reward more of the boring confirmatory work while making sure that exploratory work is given the more appropriate qualifications that it probably deserves.
Again, just thinking out loud about the best ways to minimize type 1 and 2 errors.
“I wonder if a future ideal would be to share such findings as preprints, distribute it as much as you can with the emphasis on “this is a possibly neat idea, but needs to be confirmed! I’m not going to do that right now, but have at it, or let’s collaborate!”.”
Cool idea! It could also be useful to combine this sort of thing with 1) StudySwap https://osf.io/view/studyswap/ and 2) Registered Reports https://osf.io/8mpji/wiki/home/
Researchers could then perform an exploratory study which could possibly result in “finding something interesting”. They could then post it as a preprint, post a “need” on StudySwap with a link to the preprint in the description, and then propose a Registered Report with any potential collaborators.
Yes yes yes! All I hear in ethics teaching is “the identity of the participants should not be revealed” and “don’t give people shocks” and “it may not be ethical to mislead participants” again and again… Completely missing the point that non-informative research is deceiving your participants (and funders, and society) big time.
21st century research ethics demand transparency.
Two stray thoughts:
1) I’m pretty confident that outcome switching will be one of the next problems after we start registering studies. It already undermines medicine* – I’m wondering if anyone knows whether/how the COMPare protocol has been adapted for psychology reviewers? It would be really useful if that could be done in a short time, as Goldacre mentions.
*http://compare-trials.org/blog/jama-reject-all-correction-letters/
2) Nice post on research waste here: http://blogs.bmj.com/bmj/2016/01/14/paul-glasziou-and-iain-chalmers-is-85-of-health-research-really-wasted/
Thanks for the comment, Matti, and for the relevant links!
I agree that outcome switching will become a huge problem in psychology’s near future once pre-registration becomes more popular. (And no, I’m not aware whether the COMPare protocol has been adapted for reviewing psychology findings.)