Conditional Science: Extraordinary Claims Beget Lucrative Tangents


There's a kind of scientific project that bothers me. First, an extravagant claim is made by a researcher. This then encourages other researchers to pursue a set of loosely-related questions that don't test the original claim. The importance of these derivative questions is then conditional on the truth of the initial, untested claim.

This situation is distinct from pure, curiosity-driven research. The research is not necessarily interesting in itself; it would be interesting if the original claim were true. The research is motivated by — and constrained by — a mere possibility, which becomes a foundational axiom[1] of the research program.

A few years ago a scientist approached me about a potential collaboration. There had been a big fuss about a particular technology. Another scientist had claimed that this technology could be responsible for an unusually wide range of health issues affecting human populations.

Significant time had passed since this claim was made and there was still little supporting evidence. The mechanistic justification for the claim had been roundly criticised by the scientific community. That the original scientist had since built a profitable company purporting to sell a "healthier" version of the technology attracted some well-earned suspicion.

In my office, the scientist and I both shared the view that the original claim was likely bogus. They reminded me, however, that there was a lot of interest in this topic and that it could be the basis of a lucrative funding proposal. The scientist assured me that while the work would focus on this technology, none of the research would relate directly to the tricky question of its health effects.

The technology in question was a commercial food product[2], and it was claimed that this product was a major contributor to human disease. If that were true, some healthier alternatives would need to be developed. We might then focus on the science of their flavour and texture, so that, if they were ever needed, people might find them tolerable to consume. This is a common tendency in food science because many food scientists do not have the expertise to study health effects. A product will generate interest and prestige because of its health effects (real or not) and food scientists will study everything but those health effects.

Funding proposals, press releases and lectures relating to such a project would then all lead with the premise that the technology under investigation could be a major factor in human disease, even if this claim was not going to be tested in that same project.

It became apparent in our conversation that even if the original claim was false, it was true that there was now a market interest in "healthier" versions of the technology. There were companies that wanted to invest in the manufacture of these alternatives. Implementing them effectively required good science, and so governments and corporations made funding available.

The average member of the public would soon be confronted with technologies that (prematurely) claimed to address their health issues. In parallel, there would be press releases and newspaper articles about the legitimate science behind the development of these technologies. It would not be surprising then if many people started to buy these products and assume that they were important in their lives.

To the scientists involved there was somehow no conflict between thinking that the original health claim was probably bullshit and securing research funding on the basis that the health claim was probably not bullshit. Those who did acknowledge the conflict might have appealed to pragmatism/desperation ("I have to get my research funded somehow"), theory/misdirection ("Good science is found in the unlikeliest of places") or specialisation/ignorance ("Oh I'm not an expert in health so who is to know really").

Let's consider briefly why this might be an undesirable set of circumstances:

  • Other real problems of actual consequence exist and need to be solved. Problems known to affect people right now should be the focus of investment and research, not possible issues that can reasonably be assumed not to exist.

  • Public perception is influenced by the rhetoric of science. If a technology is being marketed as important then scientific research on the technology that doesn't test its efficacy may lend legitimacy to that technology, especially when the science is motivated by an untested claim to efficacy.

In popular discourse there is sometimes a skepticism about applied sciences. I think these criticisms are often grandiose and unfounded, frequently missing the important contributions that these sciences have made to society. However, I think that there is sometimes a cynicism and opportunism in how scientists choose their projects, especially when the scientific problems relate to a technology of potential commercial value. Technologies making extraordinary claims come and go by the dozen. Some eventually shrink their claims in response to regulatory scrutiny[3]. In the intervening period — which can span many years — scientists help shape the perception of technology, among students, funders and the public. They therefore have some responsibility for considering the impact of their choices, in terms of public understanding of science, the effective use of resources and the perceived efficacy of their own discipline.

I explained to the scientist[4] that I was not interested in researching the technology. They later received funding for their project. It was a bad career decision on my part and I'm glad I made it.


[1] In a mathematical proof an axiom is taken to be a self-evident truth, stipulated so that the proof can be constructed.

[2] I do not want to focus on the particular technology but rather the general issue it represents.

[3] A claim to treat a disease may become a promise of providing more "comfort".

[4] A lovely person and great colleague by the way.