problems with science
Some things can look like science (e.g., be published in peer-reviewed journals) but still be subject to subtle procedural, statistical, and other errors and biases that call the results into question.
I wouldn't say that this one event demands we condemn this person's whole body of work. Statistics is hard enough on its own, but there are other problems, like publication bias, that can skew the literature by causing more "positive" results to be submitted and published. And who knows why the Lancet, a reasonably prestigious UK medical journal, took until last year to retract its 1998 publication of flawed research that alleged connections between vaccines and autism. FWIW, the ESP article appears in the Journal of Personality and Social Psychology, which ranks 9 out of 210 journals in the Psychology category according to http://www.eigenfactor.org/map; but maybe we should be a little more skeptical of it now.

The social sciences have a harder time than the physical sciences: measurement is complicated by the constantly changing psychology of the experimenter and the subjects, parameters can be nebulous, human differences introduce many uncontrolled variables, and test conditions can be subject to environmental effects that are never recorded in the lab notes or published work. My sense is that the social sciences often overreach when they try to quantify individual and group behavior, because many of these difficulties cannot be overcome beyond the most rudimentary of conditions. Perhaps for these reasons we should be a little more skeptical of other broad claims and predictions in the social sciences.
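To make the publication-bias point concrete, here's a toy simulation (my own sketch, with made-up study sizes, not anything from the articles above): many labs each test an effect that is genuinely zero, but only the studies that happen to produce a "positive, significant" result get submitted and published. The published record then shows a consistent positive effect that doesn't exist.

```python
import random
import statistics

# Hypothetical toy model: 1000 labs each run one small null experiment,
# but only "significant-looking" positive results reach the journals.
random.seed(42)

N_STUDIES = 1000   # labs each running one experiment
N_SUBJECTS = 20    # subjects per study (assumed, for illustration)

published = []
for _ in range(N_STUDIES):
    # The true effect is 0: every measurement is pure noise.
    sample = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N_SUBJECTS ** 0.5
    # Crude filter: only studies whose mean lands more than ~2 standard
    # errors above zero look "significant" and get published.
    if mean > 2 * se:
        published.append(mean)

print(f"{len(published)} of {N_STUDIES} studies published")
print(f"average published effect: {statistics.mean(published):.2f}")
# Every published estimate is positive even though the true effect is 0.
```

The filter here is deliberately crude, but the mechanism is the point: if submission depends on the result, averaging the published literature overstates the effect.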
The interesting part of the original NYT story is how it brings out to the general public that the practice of science is not perfect, and that good researchers and journals don't always produce good results. Science is done by human beings with all of our foibles and faults, so the conclusion is not that science cannot provide meaningful answers, but that scientists don't always do so. There is also room for science publications to improve in overcoming those deficiencies, inasmuch as statistical problems can be repeatedly missed in the review process, and I think there are efforts in that community to be more careful about that and other problems.
For us lay people, we need to keep in mind that single studies do not necessarily represent a scientific consensus, and that extraordinary claims require extraordinary evidence. If an effect is real, better studies will make the signal stronger; otherwise the signal will dissipate.
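That last claim is easy to check numerically. A small sketch (again my own toy numbers, not from the post): pool the estimates from an increasing number of replications. A genuine effect converges on its true value as studies accumulate, while a nonexistent effect's estimate shrinks toward zero.

```python
import random
import statistics

random.seed(1)

def pooled_mean(true_effect, n_studies, n_subjects=20):
    """Average effect estimate across n_studies replications,
    each drawing n_subjects noisy measurements of true_effect."""
    estimates = []
    for _ in range(n_studies):
        sample = [random.gauss(true_effect, 1) for _ in range(n_subjects)]
        estimates.append(statistics.mean(sample))
    return statistics.mean(estimates)

for n in (1, 10, 100):
    real = pooled_mean(0.5, n)   # a genuine effect of size 0.5
    null = pooled_mean(0.0, n)   # no effect at all
    print(f"{n:4d} studies: real-effect estimate {real:+.2f}, "
          f"null-effect estimate {null:+.2f}")
# With more replications the real-effect estimate settles near 0.5,
# while the null-effect estimate settles near 0.
```

A single study can land anywhere; a hundred replications leave little room for a phantom signal to survive.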