Many of our deepest and most celebrated scientific laws and insights have depended crucially on quantitative approaches, from Newton’s laws to recent advances in quantitative biology. I believe that quantitative approaches hold much promise for the future, and we should aim to improve our ability to quantify and deeply understand nature as much as we can.
The phenomenal successes of quantitative approaches in some scientific realms have inspired scholars to apply pseudo-quantitative approaches to realms that we do not yet know how to quantify. Some prominent historical examples come to mind. Karl Marx claimed to have identified the immutable laws of history that indicated the inevitability of communism. Similarly, the early protagonists of eugenics believed that extending the beautifully quantitative laws of genetics supported their erroneous conclusions of ethnic supremacy. A more recent example is the quantitative assessment of individual scientific articles by the journal impact factor (JIF) of the journals in which the articles are published. This unfortunate practice has been brought to a new level by the innumerate advertisement of JIFs with 5 significant figures. Technically, this is a case of false precision, since the precision of the input data is often below 5 significant figures (Wiki article with explanation). This innumeracy is only the tip of the iceberg. The inadequacy of the approach (articulated in the San Francisco Declaration on Research Assessment) is a much more serious problem.
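The false-precision point can be made concrete with a back-of-the-envelope calculation. A JIF is a ratio of citation counts to citable items, so the statistical noise in the counts limits how many digits are meaningful. The numbers below are hypothetical, and treating citation counts as roughly Poisson-distributed is a simplifying assumption, not an official methodology:

```python
import math

# Hypothetical inputs: citations received in the 2-year window and
# the number of citable items published in that window.
citations = 41234
items = 1278

jif = citations / items

# Assuming Poisson-like counting noise, the standard error of the
# citation total is sqrt(citations), giving a relative error of:
rel_err = math.sqrt(citations) / citations
uncertainty = jif * rel_err

print(f"JIF = {jif:.5g} +/- {uncertainty:.2g}")
# → JIF = 32.264 +/- 0.16
```

With an uncertainty on the order of ±0.1, only about three digits carry information; the remaining digits of a 5-significant-figure JIF are numerical decoration.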
Making important decisions based on pseudo-quantitative data and analysis is more than innumeracy; it is a HUGE problem. Being quantitative means respecting the limits of what can be quantified and limiting claims to what can be substantiated. The fact that somebody can somehow compute a number does not mean that the number should be taken seriously. Regretfully, pseudo-quantitative approaches sometimes prove mightily influential.
In one of my favorite essays of all time, Marc Kirschner makes a compelling argument for why pretending that we can evaluate the significance of scientific results contemporaneously with the research is a fool’s errand. We cannot, certainly not with 5 significant figures. Pretending otherwise is more than innumeracy; it is a HUGE problem. We should try by all means possible to improve our ability to assess and quantify research quality, but let’s not pretend we can do that at present.
In the meantime, before we have the verdict of time and the community in the rear-view mirror, we should be more humble. We should be honest about what we can and what we cannot do. Rather than pretending to quantify what we cannot, we should quantify at least what we can: the reproducibility of the results. We should also avoid increasing the inequality of grant money distribution based on shaky contemporaneous estimates of significance and impact. After all, the best and the brightest should prove their brilliance with the quality of their thought and research, not with the quantity of their research output. At least historically, the scientific achievements that I judge as great and significant tend to be great in quality, not great in quantity. They depended more on creative thought, inspiration, and serendipity than on great concentrations of resources based on flawed, albeit quantitative, statistics.
Maybe in our efforts to be quantitative and objective, we have focused on what can be easily quantified (quantity) and pretended that it reflects what really counts (quality). A measure of humility and realignment is in order if we are to preserve and further the research enterprise.
4 thoughts on “Respect for the limits of quantification”
Pingback: Distribution of NIH funding per PI | Forest Vista
Interesting and accurate points, Nikolai! I wonder how to deal with the deluge of papers and scientific output these days. Relying on Impact Factor is just another way of saying “Help, I can’t keep up with the literature!”
Evaluating the deluge of scientific output these days is certainly very challenging and I do not know of a perfect solution. My best suggestions are:
1) Stimulate a scientific culture and implement policy measures that shift the competition from quantity to quality, i.e., stimulate individual researchers to reduce the quantity and increase the quality of their output so that a large fraction of the published literature is worth reading. I know that is easier said than done and I do not expect it to work perfectly. However, I do think that we can take very practical steps (as I discussed in the linked posts) to move in that direction.
2) Be more active as a community in writing very short summaries and sharing good papers. I am more likely to read a single-cell paper published in a mid/low-tier journal for which you write two strong sentences in PubMed Commons than a single-cell paper published in Nature. I might be in the minority here. However, I think that most scientists would be influenced strongly by the comments of their respected colleagues. The collective comments of a few hundred scientists whom I respect and follow are likely to be a better, albeit imperfect, heuristic for selecting good papers than the current journals.
Pingback: The bigger picture | Slavov Lab Blog