science, replication, and journalism on econtalk
This was one of a series of interviews with economist Russ Roberts, whose podcast subjects range broadly through the human enterprise: how and why people do what they do, and what happens as a result. A recurring theme has been to question how we know what we know - how much is "science" and how much is storytelling.
This interview focused on that question with science writer Ed Yong, who blogs at Discover magazine.
A large part of this interview discussed what has apparently become an open topic among psychology researchers: the lack of replication of many studies, with poor documentation of experimental protocol. Added to that is the tendency of science journals to avoid publishing negative results (which might refute earlier results) and, in other cases, to avoid publishing confirmatory studies (which reduces the incentive for researchers to even attempt replication). Yong even cites cases where independent researchers don't believe their own refuting results, discounting their own work on the assumption that they are simply less observant than the original researchers. Yong's work in this area is the subject of an article in Nature, and a subsequent response by one of the researchers whose work could not be replicated.
While physics, chemistry, and the like are not immune to this problem, the social sciences have it worse inasmuch as the researchers are themselves subject to the behavior under study, and to biases of their own that are hard to eliminate even when they try to do so. Roberts, as an economist, is well aware of the tendency of researchers in his own profession to find data that conveniently aligns with their political beliefs.
Compared with the hard sciences, measurements in the social sciences are much more subjective, while both are subject to the challenge of handling statistics correctly. Adding to that difficulty is the need to show statistical significance in the results - a publication criterion - which leads people to hunt through the data for correlations after the fact. I didn't fully understand this point when I was first exposed to the principle, because it was explained in terms of findings that had been shown again and again through the centuries; at the time I did not appreciate how easy it is for us to fool ourselves about correlations that look significant, because our brains are so good at pattern matching.
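To see how easily that after-the-fact hunting produces "significant" correlations, here is a small simulation - my own illustration, not from the episode, and the sample size and variable count are arbitrary choices:

```python
# Illustration of post-hoc correlation hunting ("data dredging"):
# test one outcome against many unrelated variables, and some will
# look "significant" at p < 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_subjects = 50      # hypothetical sample size
n_variables = 100    # hypothetical number of measured variables

outcome = rng.normal(size=n_subjects)
false_positives = 0
for _ in range(n_variables):
    predictor = rng.normal(size=n_subjects)  # pure noise; no real relationship
    r, p = stats.pearsonr(predictor, outcome)
    if p < 0.05:
        false_positives += 1

# At a 0.05 threshold, roughly 5 of the 100 noise variables should
# cross it; each one would look publishable if reported in isolation.
print(f"{false_positives} of {n_variables} noise variables passed p < 0.05")
```

Run it and you will typically see about five "discoveries", every one of them noise - which is why pre-registering hypotheses and correcting for multiple comparisons matter.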
All this provides good cause for skepticism, especially toward social science research. And to be clear, I don't want it said that I am skeptical of science - far from it, as I believe the scientific process is one of the greatest achievements of mankind. But I will argue there is room for improvement in that process: far more information should be published about experimental protocols, negative results should be given greater prominence, all source data should be published to allow independent analysis, and refuting papers should be linked to the earlier works so that future researchers can see clearly where a prior result was found to be invalid.
Perhaps this topic is in the zeitgeist, but for whatever reason a recent episode of "On the Media" (which I also listen to as a podcast) had three related segments: a piece about how the British norm of less critical science journalism gets transposed to the States along with the same news stories, one about the increasing number of retractions in science journals, and a story about the people who run a web site devoted to tracking such retractions. RetractionWatch is the name of that site, which gives brief explanations, mundane and scandalous, for retractions from science journals (and a few others).