The Problem With Science Is Scientists

Five years ago, astrophysicist and science communicator Neil deGrasse Tyson posted a memorable, much-quoted tweet:

“Earth needs a virtual country: #Rationalia, with a one-line Constitution: All policy shall be based on the weight of evidence”

Tyson’s ideal world appealed to many people fatigued by emotion-driven, knee-jerk politics and the tribal warfare that had invaded every arena of public life, including science. It also appealed to many of his fellow scientists, people trained to think objectively and test hypotheses against observations of the natural world.

The only problem: the overwhelming weight of evidence shows that the virtual country of Rationalia just plain ain’t ever gonna happen.

That’s because, for humans, thinking rationally takes a tremendous amount of energy and effort. As a result, most of the time we don’t bother. Instead, the vast majority of our thinking is guided by intuition—our instincts alone, with none of that pesky interfering rational thought.

This dichotomy is masterfully explained in exquisite detail by Nobel Laureate Daniel Kahneman in his book Thinking, Fast and Slow, and covered with a focus on political divisions in Jonathan Haidt’s masterpiece The Righteous Mind. Both are fantastic works in their own right, and provide fascinating explanations for why people hold different views and why those views are so difficult to change.

More importantly, this cognitive dichotomy applies to everyone, even scientists. That may be surprising to some (including some scientists, apparently), as the media and politicians have portrayed scientists (at least the ones they agree with) as imbued with a magical ability to discern and pronounce absolute truth.

This couldn’t be further from reality. I often tell people that the difference between a scientist and the average person is that a scientist is more aware of what they don’t know about their specific field, whereas the average person doesn’t know what they don’t know. In other words, everyone suffers from crushing ignorance, but scientists are (one hopes) usually more aware of the depth of theirs. They might occasionally have an idea about how to slightly expand a particular body of knowledge, and sometimes that idea might even prove successful. But for the most part they spend their time contemplating a deep chasm of missing knowledge specific to their field.

Scientists are often hindered by their own years of experience and the potentially misleading intuition that develops as a result. In the book Virus Hunter, authors C.J. Peters and Mark Olshaker relate how a former CDC director observed that “young, inexperienced EIS (Epidemic Intelligence Service) officers CDC customarily sent out to investigate mystery disease outbreaks and epidemics actually had some advantage over their more experienced and seasoned elders. While having first-rate training and the support of the entire CDC organization, they hadn’t seen enough to have preset opinions and might therefore have been more open to new possibilities and had the energy to pursue them.” Experts are also terrible at making predictions: as explained by researcher and author Philip Tetlock in his book Expert Political Judgment, they are no more accurate at forecasting than the average person. The more recent failures of pandemic prediction models have only strengthened this conclusion.

Most successful scientists can trace their crowning achievements to work done early in their careers. This happens not only because scientists gain more job security over time, but because they become hampered by their own experiences and biases. When I was a lab technician in the late 90s, I remember asking an immunologist for advice on an experiment I was planning. He gave me a list of reasons why there was no good way to do the experiment and get useful information. When I told a postdoc about this encounter, I recall her saying, “Don’t listen to him. That guy can talk you out of doing anything.” Experienced scientists are acutely aware of what doesn’t work, and that can result in an unwillingness to take risks.

Scientists operate in a highly competitive environment where they are forced to spend most of their time seeking research funding through endless grant writing; the vast majority of grant applications go unfunded. To stay competitive for this limited pool, researchers put the most positive spin on their work and publish their most positive results. Even if a study veers off-track from what was originally planned, the resulting manuscript rarely reads that way. These pressures often push data analysis onto an error-prone spectrum, from innocently emphasizing positive results, to ignoring negative or contrary data, to outright fabrication. Detailed examples are given by author Stuart Ritchie in his book Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Not only does Ritchie explain how science gets distorted by pressures for recognition and funding acting on well-meaning scientists, he gets into the gory details of some of the most prolific fraudsters. Another excellent resource covering scientific errors and research malfeasance is the website Retraction Watch. The sheer number of retracted papers, many by the same scientists, highlights the importance of documenting and attacking scientific fraud.

The problems with research data reporting and replicability have been known for years. In 2005, Stanford Professor John Ioannidis, himself among the most highly cited scientists, published one of the most highly cited articles (over 1,600 citations), Why Most Published Research Findings Are False. In the study, Ioannidis used mathematical simulations to show “that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.” Ioannidis also offered six corollaries derived from his conclusions:

  1. The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
  2. The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
  3. The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
  4. The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
  5. The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
  6. The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

If you look at the list carefully, 5 and 6 should jump out and scream at you. Here’s a closer look:

“Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u [u is Ioannidis’s symbol for bias]. Conflicts of interest are very common in biomedical research, and typically they are inadequately and sparsely reported. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings (emphasis mine). Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable.”

“Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This seemingly paradoxical corollary follows because, as stated above, the PPV (positive predictive value) of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results…”
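The PPV arithmetic behind these corollaries is simple enough to sketch in a few lines. The formulas below follow Ioannidis’s 2005 paper, but the parameter values are my own illustrative choices, not figures from the article:

```python
# Rough sketch of the PPV arithmetic in Ioannidis (2005).
# R: pre-study odds that a probed relationship is true.
# alpha: significance threshold; 1 - beta: statistical power.
# Parameter values below are illustrative, not from the paper.

def ppv(R, alpha=0.05, beta=0.2):
    """Probability that a single claimed 'positive' finding is true."""
    return (1 - beta) * R / ((1 - beta) * R + alpha)

def ppv_n_teams(R, n, alpha=0.05, beta=0.2):
    """Corollary 6: PPV when n independent teams probe the same
    relationship and any one team's 'positive' result gets published."""
    return R * (1 - beta**n) / (R + 1 - (1 - alpha)**n - R * beta**n)

# Exploratory research: suppose pre-study odds of 1:10 that a
# probed relationship is real.
print(round(ppv(0.1), 3))  # prints 0.615
for n in (1, 5, 10):
    # PPV falls as the field heats up and more teams pile in
    print(n, round(ppv_n_teams(0.1, n), 3))
```

Even with conventional power and significance levels, a lone “positive” finding in exploratory research is right only about 60% of the time under these assumptions, and the hotter the field (larger n), the lower the chance that any single headline result is true.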

Scientists prejudiced because of their beliefs, motivated by the “heat” of the field, and thus prioritizing positive results are all blazingly obvious sources of bias in SARS-CoV-2 research. Ioannidis and colleagues have published on the explosion of published SARS-CoV-2 research, noting “210,863 papers as relevant to COVID-19, which accounts for 3.7% of the 5,728,015 papers across all science published and indexed in Scopus in the period 1 January 2020 until 1 August 2021.” Authors of COVID-19-related articles were experts in almost every field, including “fisheries, ornithology, entomology or architecture”. By the end of 2020, Ioannidis wrote, “only automobile engineering didn’t have scientists publishing on COVID-19. By early 2021, the automobile engineers had their say, too.” Others have also commented on the “covidization” of research, highlighting the reduction of quality of research as COVID mania drove researchers from unrelated fields towards the hottest and most lucrative game in town.

https://brownstone.org/articles/the-problem-with-science-is-scientists/

Corollaries 5 and 6 are also relevant to the myth of anthropogenic “global climate change”, another rationale for depopulation used by interests that have made a business model of environmental catastrophe. Fear-based marketing is an important clue to scientific fraud.

“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.”
— Dr. Marcia Angell, 2009 http://www.nybooks.com/articles/archives/2009/jan/15/drug-companies-doctorsa-story-of-corruption/

“The case against science is straightforward: Much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity [i.e. pervasiveness within the scientific culture] of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.”
— Richard Horton, editor-in-chief of The Lancet http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736%2815%2960696-1.pdf
