This post reflects a theme much in the science news lately: our current culture's romance with data (or, to be more market-savvy about it, Big Data). I was led to write it after listening to an episode of The Inquiry, an ongoing BBC Radio series of discussions of current topics. This particular episode is titled Is The Knowledge Factory Broken?
Replicability: a problem and a symptom
The answer is pretty clearly yes. One of the clearest bits of evidence is the now widespread recognition that too many scientific results, even those published in 'major' journals, are not replicable. Even when the same lab tries to reproduce its own previous results, it often fails. The biggest recent noise about this has been in the social, psychological, and biomedical sciences, but The Inquiry suggests that chemistry and physics have the problem as well. If so, the bottom line is that we really do have a general problem!
But what is the nature of the problem? If the world out there actually exists and is the result of the physical properties of Nature, then properly done studies that aim to describe that world should mostly be replicable. I say 'mostly' because measurement and other wholly innocent errors may lead to some false conclusions. Surprise findings that are just the luck of the draw, innocent flukes, draw headlines and are selectively accepted by the top journals. Properly applied, statistical methods are designed to account for these sorts of things. Even then, in what is very well known as the 'winner's curse', there will always be flukes that survive the test, are touted by the major journals, but pass into history unrepeated (and often unrepentant).
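The statistical point about innocent flukes is easy to see in a toy simulation. The sketch below is my own minimal illustration, not anything from the program: it runs many hypothetical studies of an effect that is truly zero, tests each at the conventional 0.05 level, and then tries to replicate each 'discovery' with fresh data. About 5% of the null studies pass the test anyway, the apparent effect sizes among those winners are inflated (the winner's curse), and almost all of them fail to replicate.

```python
import numpy as np

rng = np.random.default_rng(0)

n_studies, n_per_group = 10_000, 30

def one_study():
    # Two-group comparison in which the true group difference is exactly zero.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
    return diff, abs(diff / se) > 1.96  # roughly a two-sided test at alpha = 0.05

# First round: every 'discovery' here is a fluke, since no real effect exists.
results = [one_study() for _ in range(n_studies)]
hits = [diff for diff, significant in results if significant]
print(f"false 'discoveries': {len(hits) / n_studies:.1%}")       # about 5%
print(f"mean |effect| among hits: {np.mean(np.abs(hits)):.2f}")  # inflated: the winner's curse

# Second round: fresh attempts to replicate each fluke mostly fail.
replicated = sum(one_study()[1] for _ in hits)
print(f"flukes that replicate: {replicated / len(hits):.1%}")    # about 5% again
```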
This, however, is just the tip of the bad-luck iceberg. Non-reproducibility is so much more widespread that it is better seen as a symptom of underlying issues in today's scientific enterprise than as an easily fixable problem. The best fix is to own up to the underlying problem, and address it.
Is it rats, or scientists, who are on the treadmill?
Scientists today are in a rat race, self-developed and self-driven, out of an insatiable need for resources, ever-newer technology, faculty salaries, hungry universities... and this system can arguably be said to inhibit better ideas. One can liken the problem to the famous candy-factory skit on the old TV show I Love Lucy, in which the conveyor belt speeds up until the workers can't possibly keep pace. That is how it feels to many of those in academic science today.
This Inquiry episode about the broken knowledge factory tells it like it is... almost. Despite concluding that science is "sending careers down research dead-ends, wasting talent and massive resources, misleading all of us", in my view the program is not critical enough. It suggests what I think are plain-vanilla, clearly manipulable 'solutions': that researchers should post their actual data and computer code in public view so their claims can be scrutinized, that researchers should have better statistical training, and that we should stop publishing just flashy findings. In my view, this doesn't stress the root-and-branch reform of the research system that is really necessary.
Indeed, some of this is being done already. But the deeper practical reality is that scientific reports are typically very densely detailed, and investigators can make weaknesses hard to spot. This can happen inadvertently, or sometimes intentionally, as authors try to make their findings seem dramatic enough for a major journal (and here I'm not referring to the relatively rare cases of actual fraud).
Another reality is that everyone is far too busy on what amounts to a research treadmill. The tsunami of papers and their online supporting documentation is overwhelming, and other investigators, including readers, reviewers, and even co-authors, are too busy with their own research to give adequate scrutiny to the work of others. Given the extent of the problem, open publishing of raw data, computer code, and the like will not generally be very useful.
Science, like any system, will always be imperfect, because it's run by us fallible humans. But things can be reformed, at least by clearing the money and job-security incentives out of the system; that means really digging out what the problem is. How to support research better, and so get better research, when research certainly requires resources, is not so simple, but that is what should be addressed, and seriously.
We've made some of these points before, but, with apologies, they really do bear stressing and repeating. Appropriate measures should include:
(1) Stop paying faculty salaries from grants; have the universities that employ faculty pay them;
(2) Stop using manipulable scores, impact factors, or other paper-counting metrics to evaluate faculty performance, and instead try to evaluate work in terms of quality rather than quantity;
(3) Stop counting grants secured when evaluating faculty members;
(4) Place limits on the money, number of projects, students, and post-docs, and even a seniority cap, for any individual investigator;
(5) Reduce university overhead costs, including the bevy of administrators, to reduce the incentive to secure grants by any means;
(6) Hold researchers seriously accountable, in some way, for the reproducibility of their published work and for any claims made about its 'transformative' nature;
(7) Make grants smaller in amount but more numerous (helping more investigators) and longer in term, so that one doesn't have to start scrambling for the next grant just after receiving the current one;
(8) Provide every faculty position whose responsibilities include research with at least adequate baseline working funds, not limited to start-up funds;
(9) Reward faculty for doing good research that addresses an important problem but does not require external funding;
(10) Reduce the number of graduate students, at least until the overpopulation ebbs as people retire, or at least remove such head-counts from faculty performance evaluations.
Well, perhaps these are snarky, repetitive bleats. But real reform, beyond symbolic band-aids, is never easy, because so many people's lives depend on the system, one we've spent more than half a century building into what it is today (some authors saw this coming decades ago and wrote warnings). It can't be changed overnight, but it can be changed, and it can be done humanely.
The Inquiry program reflects problems that are now more often openly acknowledged. Collectively, we can work to form a more cooperative, substantial world of science. I think we all know what the problems are. The public deserves better. We deserve better!
P.S.: In a coming post, I'll consider a more 'anthropological' way of viewing what is happening to our purported 'knowledge factory'.
Even deeper, in regard to the science itself and underlying many of these issues, are the modes of thought and the tools of inference in science. These involve fundamental epistemological issues and the very basic assumptions of scientific reasoning: ideas about whether the universe is actually universal, whether it is parametric, and whether its phenomena are replicable. We've discussed aspects of these many times, and will add some relevant thoughts in the near future.