One of my favorite subjects to read about is the philosophy and history of science. It helps me understand why things are the way they are and where the scientific endeavor might be heading. It is also very useful to realize that, above all, science is the result of a collective human adventure. As Jacques Monod put it in Le hasard et la nécessité, “The ultimate aim of the whole of science is … to clarify man’s relationship to the universe.”
A description and explanation of the scientific endeavor for beginners is nicely presented in Peter Godfrey-Smith’s book Theory and Reality: An Introduction to the Philosophy of Science. Throughout the book, there is an underlying idea: the philosophy of science is about understanding science, whose purpose is to describe the universe. There is a clear distinction between science and its applications, as if technology were something different: both have to be kept apart to preserve science’s independence, which is one of its main strengths. Some quotes from the book in that respect:
“For (Thomas) Kuhn, science depends on the good normal scientist’s keen interest in puzzle-solving for its own sake. Looking outside the paradigm too often to applications and external rewards is not good for normal science.”
“Kuhn warned that the insulation of science from pushes and pulls deriving from external political and economic life was a key source of science’s strength. We do not know how fragile the social structure of science might be.”
“Science has a tradition of rather unconstrained inquiry, of scientists “following their noses” without worrying too much about practical implications for their work or externally set priorities…Science is an empirical mode of investigation that is not tied down to specific practical projects…The early modern period gave us the combination of empirically based work, full of practical tinkering, but led by fundamental questions and not held to a particular set of practical concerns. This is not a celebration of the un-useful, a disdain for the practical applications of science. Instead, the picture is that if you let people follow their noses, they will find out things that will eventually be very useful, though we can’t predict where they will be found. The result is a practical justification for “pure” inquiry.”
In other words, if you direct scientific work towards practical applications, you might kill the hen that lays the golden eggs.
According to Godfrey-Smith, the applications of science are a byproduct of science itself: they are not inherent to it, so the description of the world that science produces is independent of its applications and, indeed, more effective without them. That is the thesis I want to challenge here.
I think that view corresponds to the origins of modern science, when scientific work and technological development could follow different paths. That status quo is less and less tenable, for two main reasons: the complexity of the systems studied and the level of detail to which they are described, and the technology this requires.
Looking back in time in different fields, we can hardly connect the experiments performed back then, with their systems and methods, to the highly specialized and well-defined systems used at present. Today, these systems are analyzed to an incredible level of detail, using complex equipment developed and operated by highly specialized staff and producing gigantic amounts of information, whose analysis and interpretation have become one of the main challenges of contemporary science.
This complexity has had major consequences on how science is performed nowadays. The first is the omnipresent use of computational tools to curate, analyze, and interpret large datasets: typical examples are sequencing whole genomes or processing the responses of large cohorts of patients in clinical trials. This, in turn, has led to the use of artificial intelligence to help make sense of all those letters and numbers.
The second consequence is worrisome, and it has to do with the lack of reproducibility of academic research published in scientific journals. It is estimated that many published research findings are false or exaggerated, and that roughly 85% of research resources are wasted. In cancer research, for instance, the situation is particularly bad: at least 50% of high-impact studies (which will likely lead to some sort of clinical strategy or intervention in humans) are not reproducible.

The lack of reproducibility has two main sources. The first is related to how prizes and recognition are awarded. During the last decades, there has been a disproportionate increase in the pressure on scientists to be productive, reflected in the rise of a plethora of flawed indexes that have become the standard for measuring the quality of scientists’ work. The outcome is a crazy race to publish a lot in what are considered high-impact journals, which set the agenda for what is worth publishing and what is not, with a lot of politics and networking behind the scenes. This culture is summed up in the phrase “publish or perish”, which of course pushes scientists to cut corners and publish data that is poorly corroborated, if corroborated at all.
The second source for the lack of reproducibility is the complexity of the questions and the systems being studied, as well as the intricacy and size of the data.
In today’s complex scientific research, how most experimental models work is only partially known. Some systems are just too complex for us to know all their parts and interactions, and thus to fully understand what is going on when we measure an output from a system we have perturbed. A typical example is biological systems. A mammalian cell, for instance, is the brick with which mammalian organisms are built. Despite more than a century of studying these tiny entities, there is still a lot we do not know. Moreover, when we perturb them to see what happens, what we detect is the summary of a huge number of events, most of which we do not even suspect. If you zoom out to organs or whole organisms, the complexity is very difficult to imagine. That is why these systems are often nicknamed black boxes: you know what you put into them (your action upon the system) and you measure an output, but what happens inside the box is very difficult to describe. In many cases, it is close to impossible to perform exactly the same experiment two independent times, because so many variables can change, most of them unnoticed by the experimenter.
Concerning the first source of irreproducibility, I think it is pure human politics and convention, and can therefore be changed. The second source is much more difficult to tackle, as it is inherent to scientific research. Nevertheless, I will risk proposing that applying scientific knowledge is a way to improve reproducibility.
The application of fundamental research to technologies, or technology/knowledge transfer, is a long, arduous, laborious, risky, very expensive, and chaotic process. Decades, hundreds of millions of whatever currency, and the careers of a huge number of people are at stake, and most of the time it does not work. The reasons are many and very diverse. There are plenty of people out there with their more or less realistic sets of rules for predicting what might work and what might not, but, as far as I know, none of them are very convincing. The causes of failure can be present from the beginning and only become visible when it is too late to go back to the drawing board. They can be inherent to the project (the problem the technology is meant to solve is not the right one, bad planning, etc.) or external (a competitor with a better solution, a new government policy that makes the application impossible). In many cases a new technology reaches the market but, surprise surprise, no one uses it.
Interestingly, a topic that is not so recurrent in the development of technologies, or at least less so than in scientific research, is reproducibility. That is for two reasons. One is that, as explained above, it is so difficult to reach a successful ending that you have to be very sure that the idea upon which you will build your whole edifice is correct. The second is traceability. When you develop a new technology that you hope will be used by society, or part of it, governments have to make sure that whatever reaches the market is safe and does the job it claims to do. The gatekeepers are the so-called regulatory agencies (for instance, the FDA in the US or the EMA in Europe), which are often cast as the bad guys because by default they will refuse to approve your idea. They are the most boring and annoying thing you have to deal with, but they play a very important role. One of those roles is to check that the data you submit when asking for approval was properly obtained. So companies spend a lot of time and money on another annoying activity: traceability. Every single detail of every experiment and clinical trial has to be reported in a way that can be easily checked by a regulatory authority during an inspection.
Traceability is not really a topic in scientific research. Here, the game is to publish your data fast in a well-known scientific journal. Reviewers for these journals usually will not check the source data from which the results were obtained; they review publications for free and do not have the time for such an activity. If you publish a paper using doubtful data, it might get noticed or it might not (some examples can be hilarious). Traceability in scientific research, at least in my experience, is very limited. Research institutions have no clear rules on how to do it, and I have never heard of an inspection of a research lab. Scientists might argue that it would take a lot of time and money they just do not have and, more importantly, that it would undermine the flexibility of scientific research that allows you to try things out all the time. If you had to record, following a long set of rules, every little thing you do, then trying things out would become very difficult, scientists would do it less and less, and scientific creativity would suffer. That is a fair point.
So, how might the rules of technology development help scientific research increase reproducibility? Well, develop some application of your fancy new idea. If it passes the first filters, you will convince a lot of people that your idea is reproducible. Some scientists might argue that another way of increasing reproducibility is to repeat experiments in different labs around the world: if a result is reproduced, it gets incorporated into the corpus of knowledge. I would counter that an incredible number of scientific “facts” are based on very poor data, yet taken for granted by scientific communities and used as foundations for whole theories. This proposal does not mean putting fundamental research at the service of technology applications, or limiting its agenda to commercial purposes; of course, that is a risk, but it is not inherent to the interaction. I imagine a virtuous circle that goes back and forth between scientific research and its applications: scientific research is translated into technology, and successful technologies are used by scientific research to explore new questions with well-validated tools. Indeed, in biomedical research that is already the case, because scientists use medicines with well-known and validated mechanisms of action to test new hypotheses.
I think science is seen as one of humankind’s success stories. But that sentiment can also create a romantic or idealistic view of how it was built, making people think that it should not be changed lest we kill the hen that lays the golden eggs. The truth is that in some scientific fields, especially biomedicine, if you look back at how things were done not so long ago, it is hard to understand how we got here from there, and we have no clue what things will look like in the future. I believe that a much bigger risk than subjecting science to technology development (which is not a good idea) is to cling to the mindset and ways of working of the heroes of the scientific Odyssey and keep them forever, thinking that otherwise science will die.
Looking at the history of science as a whole, I think its main strength has been to help us understand the place of man in the universe and to solve the challenges societies have faced through time; in other words, being synchronized with, and synchronizing, the times. Science brought new perspectives and change; I do not see why science itself should not be subject to new perspectives and change.
Bruyninckx, J. (2020, November 16). “Nanobubbles”: how, when and why does science fail to correct itself? (ERC Synergy). Maastricht University STS – Research Group on Science, Technology and Society Studies at Maastricht University. https://www.maastrichtsts.nl/erc-synergy-nanobubbles-how-when-and-why-does-science-fail-to-correct-itself/
Godfrey-Smith, P. (2003). Theory and Reality: An Introduction to the Philosophy of Science (Science and Its Conceptual Foundations series) (1st ed.). University of Chicago Press.
Ioannidis, J. P. A. (2014). How to Make More Published Research True. PLoS Medicine, 11(10), e1001747. https://doi.org/10.1371/journal.pmed.1001747
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Monod, J. (2018). Le Hasard Et La Nécessité: Essai Sur La Philosophie Naturelle de la Biologie Moderne (Najafizadeh.org Series in Philosophy and History of Science in Persian) (Persian Edition). Createspace Independent Publishing Platform.
Image Credit: Museo Nacional del Prado. Duelo a garrotazos. Francisco de Goya y Lucientes.