Last week, a group of researchers gathered around Brian Nosek and the Center for Open Science in Virginia published a large replication study of psychological research from three journals. Negotiations over the interpretation of the results started before the virtual ink had dried on the article - heck, they started within the article itself (Nosek et al., 2015):
After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice. Humans desire certainty, and science infrequently provides it. As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation. The original studies examined here offered tentative evidence; the replications we conducted offered additional, confirmatory evidence. In some cases, the replications increase confidence in the reliability of the original results; in other cases, the replications suggest that more investigation is needed to establish the validity of the original findings. Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims.
I like the rhetorical panache in this: we just did the most comprehensive replication project of psychological research out there, and we are certain of nothing. The virtuous ideal scientist stands humbled before the uncertainty and, as a good Popperian, can only falsify ad infinitum and hope for the best. I am somewhat skeptical of this representation of things: there is a reason why Nosek and his colleagues chose the studies they did and invested so much time and effort into a comprehensive test of their hypotheses, and I am pretty sure it was not done to claim: I know that I know nothing.
Their aim was to improve reproducibility in psychology, as they state themselves:
The present results suggest that there is room to improve reproducibility in psychology. Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should. Hypotheses abound that the present culture in science may be negatively affecting the reproducibility of findings. An ideological response would discount the arguments, discredit the sources, and proceed merrily along. The scientific process is not ideological. Science does not always provide comfort for what we wish to be; it confronts us with what is.
They provided an empirical answer to many discussions about the state of psychological research and the knowledge produced within its boundaries. A few episodes come to mind, in no way an exhaustive list: Kahneman's skepticism of social priming, the kerfuffle over the Bem precognition studies, the Stapel affair and all the resulting research misconduct investigations, the debates over p-hacking that eerily remind me of the decades-long war between Bayesians and frequentists over p-values. A spectre is haunting psychology - the spectre of negotiations of objectivity.
As far as I can tell, from talking to people and reading as much as I can about the reactions to the study online, there are a couple of possible interpretations:
Psychological research is all false and we should burn it down and start from scratch (something along the lines of this wonderful, ahem, article about abolishing social science).
Certain subdisciplines of psychology have finally gotten what they deserve - their research is on shaky foundations at best (yes, social psychology, we're looking at you).
This is just the sophisticated manifestation of the self-correcting critical enterprise that is science. Let's all celebrate how cool that is!
You should have gone Bayesian, you know? (I had to include that one, if only because of this pretty awesome re-analysis of the data.)
I can't say I subscribe to any of the above interpretations - probably nobody really does, but I think they touch upon the opinions on reproducibility now forming and being written out. The above are just straw men, set up to crystallize potentially opposing positions that have many more shades to them when fully stated by their exponents.
I think my own opinion forms where the reproducibility study's article ends - literally, somewhere after the last paragraph of its conclusion:
We conducted this project because we care deeply about the health of our discipline and believe in its promise for accumulating knowledge about human behavior that can advance the quality of the human condition. Reproducibility is central to that aim. Accumulating evidence is the scientific community’s method of self-correction and is the best available option for achieving that ultimate goal: truth.
I think all of the latest debates surrounding the uncertain scientific status of psychology converge on a defining factor of psychological research in the second half of the 20th century: a lack of theories. Nosek's conclusion instantly rings of that, at least to my ear. The truth that psychologists seek is the one that advances the quality of the human condition, not the one that spells out what the human condition is and how we should conceptualize it to begin with. Psychologists are very prone to methodological debates instead of substantive theoretical answers. Or, in Gigerenzer's words, psychologists are adept at generating surrogates for theories.
What do I mean by that? Am I arguing for more practical (applied!) research, or for the supposed opposite, generating more abstract content disconnected from the context human beings live in?
Let me explain it through a sketch of a history - not a comprehensive historical account, because that's beyond both my expertise and the format of a blog post, but a historical speculation, if you will.
My replication blues
In the second half of the twentieth century, psychology grew really big. I mean, really big. Whether you count the number of psychologists out there, the number of journals, or the number of published articles, it's a burgeoning beast. Psychologists love discussing how the discipline is disintegrating under this expansion (there's a nice impression of this by Chris Green, and I apologize, it's paywalled). Psychology expanded into all kinds of things, and a great many of these things were quite unaffiliated with universities: clinics, hospitals, managerial offices, the military, schools, sports, politics, public opinion research, marketing, governance, spying, ergonomics, the arts - you name it, there's a psychology of it. This is already an old story: the expansion of an administrative science that filled the gaping vacuum of interpreting meaning and inner mental life for us. It also provided some nifty tools for managing people in the liberal democracies of the West.
Psychologists could go about this work, generating a lot of psychological knowledge, because they had quite a few tools at their disposal to do it. They did away with the big theoretical systems that constrained the types of research that could be done (it was the age of eclecticism and the cognitive revolution); doing psychology came to mean being anti-metaphysical. The methodologies and statistics of the old correlational and experimental traditions fused into a definite method that provided the actual tools for generating data and testing hypotheses, and the whole combined into a universal approach that was agnostic about the object of research it was applied to.
You can investigate and conceptualize psychologically pretty much anything: from schizophrenia to learning environments, from creativity to work burnout. By conceptualizing psychologically I don't mean anything abstract - it's a very down-to-earth representation in the familiar language and conceptual horizon of variables and psychological constructs.
And you can unleash this research program on countless problems facing the human condition, write up the results in short reports that fulfill the negotiated and mutually agreed-upon criteria of objectivity, and start publishing within an academic ecology that values people who publish a lot, and publish fast.
The above is an oversimplification, more a statement of my gut feeling. I hope to craft it into a more solid account, but that takes time. For now, that gut feeling is what gets articulated when I read the discussions and negotiations over the proper interpretation of the replication study. So I'm offering my candidate: the replication discussion is yet another surrogate for the lack of systematic theorizing and theoretical integration in psychology. A distraction trick, the cynic might say; another proposed solution to a systemic problem psychologists somehow recognize, the more charitable critic would retort.
Yes, we need to replicate more, but who's going to connect the dots in that growing confetti factory? Do we even have an idea of how to do it - at least one that's not a meta-analysis, a literature review, a call for more applied research, or a call back to behaviorism, psychoanalysis, or the abolition of psychology altogether?
I'm not so sure about that, and thus we end with the replication blues.
Cover image credit: http://www.flickr.com/photos/heiner1947/4388600906/sizes/l/in/set-72157623509439544