theory testing

Insufficiently open science — not theory — obstructs empirical progress!

I stumbled upon Greenwald et al.’s (1986) “Under what conditions does theory obstruct research progress?” article the other day and decided to re-read it. It was fascinating to revisit in the context of current controversies about p-hacking and replication difficulties! Very prescient indeed.

In the article, Greenwald et al. argued that theory obstructs research progress when:
1. testing theory is the central goal of research, and
2. the researcher has more faith in the correctness of the theory than in the suitability of the procedures used to test the theory.

Though I agree with their main argument (& indeed we’ve made a very similar argument here), I don’t think it’s completely correct (or at least it’s incomplete given what we now know about modal research practices).

I want to put forward the possibility that it is insufficiently open research practices — rather than theory-confirming practices — that obstruct empirical progress! Testing theory has always involved the (precarious) goal of producing experimental results that confirm novel, theory-derived empirical predictions. Such endeavors almost always involve repeated tweaking and refinement of procedures and calibration of instruments. As long as researchers are sufficiently open about the methods used to execute their experimental tests, however, such theory-confirming practices *can* lead to empirical progress. This is because openness lets other researchers gauge more objectively all of the methodological tweaks that were required to get the theory-confirming result, and also because being open encourages stronger methods and better thought-out experimental designs to begin with. Consequently, being more open means theory-derived empirical predictions are more exposed to disconfirmation (given that disconfirmation requires strong methods), which actually substantially *accelerates* research progress! Don’t take my word for it; here’s what Richard Feynman had to say on the subject:

“We are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress.”


Here are the two quotes from Greenwald et al.’s article that inspired this post:

“The theory-testing approach runs smoothly enough when theoretically predicted results are obtained. However, when predictions are not confirmed, the researcher faces a predicament that can be called the disconfirmation dilemma (Greenwald & Ronis, 1981). This dilemma is resolved by the researcher’s choosing between proceeding (a) as if the theory being tested is incorrect (e.g., by reporting the disconfirming results), or (b) as if the theory is still likely to be correct. The researcher who preserves faith in the theory’s correctness will persevere at testing the theory — perhaps by conducting additional data analyses, by collecting more data, or by revising procedures and then collecting more data.” (p. 219).

“A theory-confirming researcher perseveres by modifying procedures until prediction-supporting results are obtained. Particularly if several false starts have occurred, the resulting confirmation may well depend on conditions introduced while modifying procedures in response to initial disconfirmations. However, because no systematic empirical comparison of the evolved (confirming) procedures with earlier (disconfirming) ones has been attempted, the researcher is unlikely to detect the confirmation’s dependence on the evolved details of procedure. Although the conclusions from such research need to be qualified by reference to the tried-and-abandoned procedures, those conclusions are often stated only in the more general terms of the guiding theory. Such conclusions constitute avoidable overgeneralizations.” (p. 220)