
The 'knowledge factory'

This post reflects on much that is in the science news, in particular our current culture's romance with data (or, to be more market-savvy about it, Big Data).  I was led to write it after listening to an episode of The Inquiry, a BBC Radio series of discussions of current topics.  This particular episode is titled Is The Knowledge Factory Broken?

Replicability: a problem and a symptom
The answer is pretty clearly yes.  One of the clearest bits of evidence is the now widespread recognition that too many scientific results, even those published in 'major' journals, are not replicable.  When even the same lab tries to reproduce previous results, they often fail.  The biggest recent noise on this has been in the social, psychological, and biomedical sciences, but The Inquiry suggests that chemistry and physics also have this problem.  If this is true, the bottom line is that we really do have a general problem!

But what is the nature of the problem?  If the world out there actually exists and is the result of physical properties of Nature, then properly done studies that aim to describe that world should mostly be replicable.  I say 'mostly' because measurement and other wholly innocent errors may lead to the occasional false conclusion.  Surprise findings that are the luck of the draw, just innocent flukes, draw headlines and are selectively accepted by the top journals.  Properly applied, statistical methods are designed to account for these sorts of things.  Even then, in what is well known as the 'winner's curse', there will always be flukes that survive the test, are touted by the major journals, but pass into history unrepeated (and often unrepentant).

This, however, is just the tip of the bad-luck iceberg.  Non-reproducibility is so much more widespread that what we face is more a symptom of underlying issues in the nature of the scientific enterprise itself today than an easily fixable problem.  The best fix is to own up to the underlying problem, and address it.

Is it rats, or scientists, on the treadmill?
Scientists today are in a rat-race, self-developed and self-driven by an insatiable need for resources, ever-newer technology, faculty salaries, and hungry universities, and this system arguably inhibits better ideas.  One can liken the problem to the famous candy-factory skit on the old TV show I Love Lucy.  That is how it feels to many of those in academic science today.

This Inquiry episode about the broken knowledge factory tells it like it is...almost.  The program concludes that science is "sending careers down research dead-ends, wasting talent and massive resources, misleading all of us", but in my view even this is not critical enough.  It suggests what I think are plain-vanilla, easily manipulable 'solutions': researchers should post their actual data and computer code in public view so their claims can be scrutinized, researchers should have better statistical training, and journals should stop publishing just flashy findings.  This doesn't stress the root-and-branch reform of the research system that is really necessary.

Indeed, some of this is being done already.  But the deeper practical reality is that scientific reports are typically very densely detailed, and investigators can make weaknesses hard to spot.  This can happen inadvertently, or sometimes intentionally, as authors try to make their findings seem dramatic enough for a major journal (and here I'm not referring to the relatively rare cases of actual fraud).

A deeper reality is that everyone is far too busy on what amounts to a research treadmill.  The tsunami of papers and their online supporting documentation is overwhelming, and other investigators, including readers, reviewers, and even co-authors, are far too busy with their own research to give adequate scrutiny to work they review.  The reality is that open publishing of raw data, computer code, and the like will not generally be very useful, given the extent of the problem.

Science, like any system, will always be imperfect because it's run by us fallible humans.  But things can be reformed, at least, by clearing the money and job-security incentives out of the system--really digging out what the problem is.  How we can support research better, to get better research, when it certainly requires resources, is not so simple, but is what should be addressed, and seriously.

We've made some of these points before, but with apology, they really do bear stressing and repeating.  Appropriate measures should include:

     (1) Stop paying faculty salaries on grants (have the universities who employ them, pay them);

     (2) Stop using manipulable score- or impact-factor counting of papers or other counting-based items to evaluate faculty performance, and try instead to evaluate work in terms of better measures of quality rather than quantity;

     (3) Stop considering grants secured when evaluating faculty members;

     (4) Place limits on money, numbers of projects, students or post-docs, and even a seniority cap, for any individual investigator;

     (5) Reduce university overhead costs, including the bevy of administrators, to reduce the incentive for securing grants by any means;

     (6) Hold researchers seriously accountable, in some way, for their published work in terms of its reproducibility or claims made for its 'transformative' nature.

     (7) Grants should be smaller in amount, but more numerous (helping more investigators) and for longer terms, so one doesn't have to start scrambling for the next grant just after having received the current one.

     (8) Every faculty position whose responsibilities include research should come with at least adequate baseline working funds, not limited to start-up funds.

     (9)  Faculty should be rewarded for doing good research that does not require external funding but does address an important problem.

     (10)  Reduce the number of graduate students, at least until the overpopulation ebbs as people retire, or, at least, remove such number-counts from faculty performance evaluation.

Well, these are perhaps snarky and repetitive bleats.  But real reform, beyond symbolic band-aids, is never easy, because so many people's lives depend on the system, one we've spent more than half a century building into what it is today (some authors saw this coming decades ago and wrote warnings).  It can't be changed overnight, but it can be changed, and it can be done humanely.

The Inquiry program reflects things now more often being openly acknowledged. Collectively, we can work to form a more cooperative, substantial world of science.  I think we all know what the problems are.  The public deserves better.  We deserve better!

P.S.:  In a coming post, I'll consider a more 'anthropological' way of viewing what is happening to our purported 'knowledge factory'.

Even deeper, in regard to the science itself, and underlying many of these issues are aspects of the modes of thought and the tools of inference in science.  These have to do with fundamental epistemological issues, and the very basic assumptions of scientific reasoning.  They involve ideas about whether the universe is actually universal, or is parametric, or its phenomena replicable.  We've discussed aspects of these many times, but will add some relevant thoughts in the near future.

Reforming research funding and universities

Any aspect of society needs to be examined on a continual basis to see how it could be improved.  University research, such as that which depends on grants from the National Institutes of Health, is one area that needs reform.  It has gradually become an enormous, money-directed, and largely self-serving industry, and its need for external grant funding turns science into a factory-like enterprise, undermining what science should be about: advancing knowledge for the benefit of society.

The Trump policy, if there is one, is unclear, as with much of what he says on the spur of the moment. He's threatened to reduce the NIH budget, but he's also said to favor an increase, so it's hard to know whether this represents whims du jour or policy.  But regardless of what comes from on high, it is clear to many of us with experience in the system that health and other science research has become very costly relative to its promise and too largely mechanical rather than inspired.

For these reasons, it is worth considering what reforms could be taken--knowing that changing the direction of a dependency behemoth like NIH research funding has to be slow because too many people's self-interests will be threatened--if we were to deliver in a more targeted and cost-efficient way on what researchers promise.  Here's a list of some changes that are long overdue.  In what follows, I have a few FYI asides for readers who are unfamiliar with the issues.

1.  Reduce grant overhead amounts
[FYI:  Federal grants come with direct and indirect costs.  Direct costs pay for the research staff, supplies and equipment, travel, data collection, and so on.  Indirect costs ('overhead') are negotiated with each university and are awarded on top of the direct costs--and given to the university administration.  If I get $100,000 on a grant, my university will get $50,000 or more, sometimes even more than $100K.  Their claim to this money is that they have to provide the labs, libraries, electricity, water, administrative support, and so on for the project, and that without the project they'd not have these expenses.  Indeed, one indicator of the fat in overhead is that, as an 'incentive' or 'reward', some overhead is returned as extra cash to the investigator who generated it.]
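The overhead arithmetic described above can be sketched as a toy calculation.  The function and the 50% and 100% rates below are hypothetical illustrations, not any university's actual negotiated figures:

```python
# Toy illustration of how indirect costs (overhead) are awarded on top of
# a grant's direct costs. The rates here are hypothetical examples only.

def total_award(direct_costs: float, indirect_rate: float) -> float:
    """Total money awarded: direct costs plus overhead paid to the university."""
    return direct_costs * (1 + indirect_rate)

direct = 100_000
# At a hypothetical 50% rate, a $100,000 grant brings the university $50,000:
print(total_award(direct, 0.50))  # 150000.0
# At a 100% rate, overhead matches the direct costs dollar for dollar:
print(total_award(direct, 1.00))  # 200000.0
```

The point of the sketch is simply that overhead scales with direct costs, which is why salaries charged to grants (discussed next) also generate overhead for the administration.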

University administrations have notoriously been ballooning.  Administrators and their often fancy offices depend on individual grant overhead, which naturally puts intense pressure on faculty members to 'deliver'.  Educational institutions should be lean and efficient. Universities should pay for their own buildings and libraries and pare back bureaucracy. Some combination of state support, donations, and bloc grants could be developed to cover infrastructure, if not tied to individual projects or investigators' grants. 

2.  No faculty salaries on grants
[FYI:  Federal grants, from NIH at least, allow faculty investigators' salaries to be paid from grant funds.  That means that in many health-science universities, the university itself pays only a fraction, often tiny and sometimes none, of its faculty's salaries.  Faculty without salary-paying grants will be paid some fraction of their nominal salaries, and often for a limited time only.  And salaries generate overhead, so everybody wins: higher pay, higher overhead for administrators!  Duh, a no-brainer!]

Universities should pay their faculty's salaries from their own resources.  Originally, as I understand it, faculty investigators' salaries were paid on grants so the university could hire temporary faculty to cover the PI's teaching and administrative obligations while s/he was doing the research.  Otherwise, if faculty are already paid to do research, what's the need?  Faculty salaries on grants should only be allowed to be used in this way, not just as a source of cash.  Faculty should not be paid on soft money, because the need to steadily hustle one's own salary is an obvious corrupting force on scientific originality and creativity.

3.  Limit on how much external funding any faculty member or lab could have
There is far too much reward for empire-builders. Some do, or at least started out doing, really good work, but that's not always the case and diminishing returns for expanding cost is typical.  One consequence is that new faculty are getting reduced teaching and administrative duties so they can (must!) write grant applications. Research empires are typically too large to be effective and often have absentee PIs off hustling, and are under pressure to keep the factory running.  That understandably generates intense pressure to play it safe (though claiming to be innovative); but good science is not a predictable factory product. 

4.  A unified national health database
We need health care reform, and if we had a single national health database it would reduce medical costs and could be anonymized so research could be done, by any qualified person, without additional grants.  One can question the research value of such huge databases, as is true even of the current ad hoc database systems we pay for, but they would at least be cost-effective.

5. Temper the growth ethic 
We are over-producing PhDs, and this is largely to satisfy the game of the current faculty by which status is gained by large labs.  There are too many graduate students and post-docs for the long-term job market.  This is taking a heavy personal toll on aspiring scientists.  Meanwhile, there is inertia at the top, where we have been prevented from imposing mandatory retirement ages.  Amicably changing this system will be hard and will require creative thinking; but it won't be as cruel as the system we have now.

6. An end to deceptive publication characteristics  
We routinely see papers listing more authors than there are residents in the NY phone book.  This is pure careerism in our factory-production mode.  As once was the standard, every author should in principle be able to explain his/her paper on short notice (I've heard 15 minutes suggested).  Those who helped on a paper, such as by providing some DNA samples, should be acknowledged, but not listed as authors.  Dividing papers into least-publishable units isn't new, but with the proliferation of journals it's out of hand.  Limiting CV lengths (and not including grants on them) when it comes to promotion and tenure could focus researchers' attention on doing what's really important rather than chaff-building.  Chairs and Deans would have to recognize this, and move away from safe but gameable bean-counting.

[FYI: We've moved towards judging people internally, and sometimes externally in grant applications, on the quantity of their publications rather than the quality, or on supposedly 'objective' (computer-tallied) citation counts.  This is play-it-safe bureaucracy and obviously encourages CV padding, which is reinforced by the proliferation of for-profit publishing.  Of course some people are highly successful both in the real scientific sense of making a major discovery and in publishing their work.  But it is naive not to realize that many, often the big players grant-wise, manipulate any counting-based system.  For example, they can cite their own work in ways that inflate the 'citation count' that Deans see.  Papers with very many authors also lead to credit-claiming that is highly exaggerated relative to the actual scientific contribution.  Scientists quickly learn how to manipulate such 'objective' evaluation systems.]

7.  No more too-big-and-too-long-to-kill projects
The Manhattan Project and many others taught us that if we propose huge, open-ended projects we can have funding for life.  That's what the 'omics era and other epidemiological projects reflect today.  But projects that are so big they become politically invulnerable rarely continue to deliver the goods.  Of course, the PIs, the founders and subsequent generations, naturally cry that stopping their important project after having invested so much money will be wasteful!  But it's not as wasteful as continuing to invest in diminishing returns.  Project duration should be limited and known to all from the beginning.

8.  A re-recognition that science addressing focal questions is the best science
Really good science is risky because serious new findings can't be ordered up like hamburgers at McD's.  We have to allow scientists to try things.  Most ideas won't go anywhere.  But we don't have to allow open-ended 'projects' to scale up interminably as has been the case in the 'Big Data' era, where despite often-forced claims and PR spin, most of those projects don't go very far, either, though by their size alone they generate a blizzard of results. 

9. Stopping rules need to be in place  
For many multi-year or large-scale projects, an honest assessment part-way through would show that the original question or hypothesis was wrong or won't be answered.  Such a project (and its funds) should have to be ended when it is clear that its promise will not be met.  It should be a credit to an investigator who acknowledges that an idea just isn't working out, and those who don't should be barred for some years from further federal funding.  This is not a radical new idea: there is precedent in drug trials, where stopping rules are standard, and we should do the same in research.

It should be routine for universities to provide continuity funding for productive investigators so they don't have to cling to go-nowhere projects.  Faculty investigators should always have an operating budget so that they can do research without an active external grant.  Right now, they have to piggy-back their next idea on funds in their current grant, and without internal continuity funding this naturally leads to safe, 'fundable' projects rather than really innovative ones.  The reality is that truly innovative projects typically are not funded, because it's easy for grant review panels to find fault and move on to safer proposals.

10. Research funding should not be a university welfare program
Universities are important to society and need support.  Universities as well as scientists become entrenched.  It's natural.  But society deserves something for its funding generosity, and one of the facts of funding life could be that funds move.  Scientists shouldn't have a lock on funding any more than anybody else. Universities should be structured so they are not addicted to external funding on grants. Will this threaten jobs?  Most people in society have to deal with that, and scientists are generally very skilled people, so if one area of research shrinks others will expand.

11.  Rein in costly science publishing
Science publishing has become what one might call a greedy racket.  There are far too many journals, rushing out half-reviewed papers for pay-as-you-go authors.  Papers are typically paid for on grant budgets (though one can ask how often young investigators shell out their own money to keep their careers going).  Profiteering journals are proliferating to serve the CV-padding, hyper-hasty, bean-counting science industry that we have established.  Yet the vast majority of papers have basically no impact.  That money should go to actual research.

12.  Other ways to trim budgets without harming the science 
Budgets could be trimmed in many other ways, too: no buying journal subscriptions on a grant (universities have subscriptions), less travel to meetings (we have Skype and Hangouts!), shared costly equipment rather than a sequencer in every lab.  Grants should be smaller but of longer duration, so investigators can spend their time on research rather than hustling new grants.  Junk the use of 'impact' factors and other bean-counting ways of judging faculty.  Such measures had a point once, to reduce discrimination and be more objective, but they have long been strategized and manipulated, substituting quantity for quality.  Better means of evaluation are needed.

These suggestions are perhaps rather radical, but to the extent that they can somehow be implemented, it would have to be done humanely.  After all, people playing the game today are only doing what they were taught they must do.  Real reform is hard because science is now an entrenched part of society.  Nonetheless, a fair-minded (but determined!) phase-out of the abuses that have gradually developed would be good for science, and hence for the society that pays for it.

***NOTES:  As this was being edited, New York state has apparently just made its universities tuition-free for those whose families are not wealthy.  If true, what a step back towards sanity and the public good!  The more states can get off the hooks of grants and strings-attached private donations, the more independent they should be able to be.

Also, the Apr 12 Wall St Journal has a story (paywall, unless you search for it on Twitter) showing the faults of an over-stressed health research system, including some of the points made here.  The article points out problems of non-replicability and other technical mistakes that are characteristic of our heavily over-burdened system.  But it doesn't go after the System as such, the bureaucracy and wastefulness and the pressure for 'big data' studies rather than focused research, and the need to be hasty and 'productive' in order to survive.
