Which publications are citable?

The number of references to a scientific publication is frequently used as an objective measure of its significance. This metric is far less precise than it may appear, and in the short to medium term it certainly fails to capture the most visionary and creative research. Consider, for example, that the publications of Richard Feynman are currently cited 6-10 times more frequently than they were during the peak of Feynman’s career. The corresponding comparison for citations to the publications of Albert Einstein is even more extreme. Still, citations are an influential metric, and thus the community standards of what is citable are very influential as well.

The community standards should be set by a community-wide discussion. Here, I will express my opinion in the hope of stimulating discussion and soliciting more opinions. To be citable, a publication must (i) be permanent and traceable (e.g., it must have a DOI), and (ii) provide scientific support for the statement for which it is cited. The second criterion is loaded and needs clarification. By scientific support, I mean data, reasoning and theoretical/computational results that are verifiable and refutable. Crucially, the assessment of this “scientific support” must be made by the authors referring to the work. Most magazines and journals do not publish the names of the editors and the peer-reviewers, or even the contents of the peer-reviews. Such hidden assessment by anonymous people cannot possibly assume responsibility for the scientific merits of what is published. The scientific merits of a paper, and the extent to which it provides scientific support, must be evaluated by the authors who cite it.

These two criteria of what is citable apply equally to traditional papers that have undergone open or hidden peer-review and to preprints uploaded to permanent servers guaranteeing timestamps and traceability. In fact, the data suggest that some communities have long recognized and adopted these standards. To my delight, I have also seen much enthusiasm among broader communities that have not yet adopted them.

I am optimistic that scientists will embrace their duty of independent critical assessment of the publications they refer to. I would love to hear your thoughts and ideas on what the criteria and community practices for citing scientific publications should be.

High-quality journals with low-quality peer-reviews

There is much outcry about the increasing competition in scientific research. Yet, I do not hear comparable outcry about the increasing competition in the Olympic 100-meter dash. I see competition as a very powerful driving force; whether it drives positive or negative changes depends on our metrics and the system. Unlike the metrics for the Olympic 100-meter dash, the metrics for research performance seem rather poor. It is the metrics that make all the difference between competition driving better research and competition undermining the research enterprise.

Peer-review is the central metric in our current system for scientific research. Yet, sometimes PIs are embarrassed to show the reviews to the students who did all the work. How can a magazine or journal be any good if it uses poor reviews to judge what to publish and what not to publish? It cannot!

Evaluating research results will never be as simple as timing a 100-meter dash. However, I think that is no justification for the sorry state of our current system, and no reason to despair. We can improve the quality of peer-reviews by making their contents public, as some journals (such as Molecular Systems Biology, eLife and others) already do. I cannot think of a good reason for other journals not to adopt this good practice, yet most refuse to do so. I think that publishing peer-reviews will help increase their quality and will provide interested readers with useful background and additional (hopefully thoughtful) discussion of the research. The contents of the peer-reviews will also give us a useful indication of the quality of a journal, perhaps much more useful and meaningful than the summary statistics (first moment) of its visibility and the attendant availability bias.

I would love to hear arguments for and against making the contents of peer-reviews openly accessible for published papers. Stay tuned for a post on whether reviewers should be anonymous. What are your thoughts?

“Publishing in PLOS Genetics will hurt my reputation”

I have heard extreme opinions defending and opposing the importance of scientific journals and magazines. The extremes have desensitized me. Yet, this opinion shocked me: “Publishing in PLOS Genetics will hurt my reputation.”

I am too shocked to be sure what to make of it. Perhaps the reputation of Enrico Fermi was ruined when he described theoretically a law of nature (the weak force) in the lowly Zeitschrift für Physik after Nature magazine rejected the seminal work. Perhaps Hans Krebs’s reputation was similarly ruined when he described the Krebs cycle in the lowly Enzymologia after Nature magazine rejected that seminal work. In fact, there is a long list of scientists whose careers were not ruined by publishing seminal and visionary results in obscure journals after rejections by the prestigious journals and magazines. I can only conclude that scientists fearing damage to their reputations from publishing in PLOS Genetics have low confidence in the timeless importance of their “high-impact” work!

Respect for the limits of quantification

Many of our deepest and most celebrated scientific laws and insights have depended crucially on quantitative approaches, from Newton’s laws to recent advances in quantitative biology. I believe that quantitative approaches hold much promise for the future, and we should aim to improve our ability to quantify and deeply understand nature as much as we can.

The phenomenal successes of quantitative approaches in some scientific realms have inspired scholars to apply pseudo-quantitative approaches to realms that we do not yet know how to quantify. Some prominent historical examples come to mind. Karl Marx claimed to have identified immutable laws of history that indicated the inevitability of communism. Similarly, the early protagonists of eugenics believed that extending the beautifully quantitative laws of genetics supported their erroneous conclusions of ethnic supremacy. A more recent example is the quantitative assessment of individual scientific articles by the journal impact factor (JIF) of the journals in which the articles are published. This unfortunate practice has been brought to a new level by the innumerate advertisement of JIFs with 5 significant figures. Technically, this is a case of false precision, since the precision of the input data is often below 5 significant figures (see the Wikipedia article on false precision). This innumeracy is only the tip of the iceberg; the inadequacy of the approach itself (articulated in the San Francisco Declaration on Research Assessment) is a much more serious problem.
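
To see how little the advertised digits mean, consider a minimal sketch in Python. All counts below are made up for a hypothetical journal; the JIF itself is simply the ratio of citations received in a year to the number of citable items the journal published in the two preceding years.

```python
# Minimal sketch of the false-precision point; all counts are made up
# for a hypothetical journal.

citations = 12_345      # citations received in year Y to items from years Y-1 and Y-2
citable_items = 2_100   # "citable" items the journal published in years Y-1 and Y-2

jif = citations / citable_items
print(f"JIF advertised to 5 significant figures: {jif:.4f}")  # 5.8786

# Which items count as "citable" is a judgment call, and citation-database
# coverage is imperfect; even a 1% uncertainty in the denominator scrambles
# everything past the first one or two significant figures.
low = citations / (citable_items * 1.01)
high = citations / (citable_items * 0.99)
print(f"Range under 1% input uncertainty: {low:.4f} to {high:.4f}")  # 5.8204 to 5.9380
```

Under even this modest input uncertainty, only the leading digit or two survive, so quoting a JIF to 5 significant figures conveys precision that the underlying data cannot support.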

Making important decisions based on pseudo-quantitative data and analysis is more than innumeracy; it is a HUGE problem. Being quantitative means respecting the limits of what can be quantified and limiting claims to what can be substantiated. The fact that somebody can somehow compute a number does not mean that the number should be taken seriously. Regrettably, pseudo-quantitative approaches sometimes prove mightily influential.

In one of my favorite essays of all time, Marc Kirschner makes a compelling argument that pretending we can evaluate the significance of scientific results contemporaneously with the research is a fool’s errand. We should try by all means possible to improve our ability to assess and quantify research quality, but let’s not pretend we can do so at present. We cannot, certainly not to 5 significant figures.

In the meantime, before we have the verdict of time and of the community in the rear-view mirror, we should be more humble. We should be honest about what we can and cannot do. Rather than pretending to quantify what we cannot, we should quantify at least what we can: the reproducibility of the results. We should also avoid increasing the inequality of grant-money distribution based on shaky contemporaneous estimates of significance and impact. After all, the best and the brightest should prove their brilliance with the quality of their thought and research, not with the quantity of their research output. At least historically, the scientific achievements that I judge great and significant tend to be great in quality, not in quantity. They depended more on creative thought, inspiration and serendipity than on great concentrations of resources allocated on the basis of flawed, albeit quantitative, statistics.

Maybe in our efforts to be quantitative and objective, we have focused on what can be easily quantified (quantity) and pretended that it reflects what really counts (quality). A measure of humility and realignment is in order if we are to preserve and further the research enterprise.

Papers that triumphed over their rejections

Most of us know of significant, foundational scientific results that were rejected by the major journals and magazines but have nonetheless stood the test of time and proven of exceptional importance to science. The goal of this post (a work in progress) is to compile a list of such papers. I have limited the list below to papers that proved exceptionally influential and for which there are reliable and traceable accounts of their rejections. Although the discoveries described by most of these rejected papers have been awarded the Nobel Prize, this has not been a criterion in compiling the list, nor will it be as I expand it. Suggestions are most welcome!

Bose–Einstein statistics and condensate, 1924

Bose, S (1924). Plancks Gesetz und Lichtquantenhypothese. Z. Physik 26: 178. doi:10.1007/BF01327326

Late in 1923 [Bose] submitted a paper on the subject to the Philosophical Magazine. Six months later the editors of the magazine informed him that (regrettably) the referee’s reports on his paper were negative. Undeterred, he sent the rejected manuscript to Einstein …

[Bose & Wali, 2009, page 523]

The weak interaction (beta decay), 1933

Fermi, E (1934). An attempt at a theory of beta radiation. Z. Physik 88: 161.

Nature Editors: It contained speculations too remote from reality to be of interest to the reader

[Rajasekaran, 2014, page 20], Wikipedia

The Krebs cycle, 1937

Krebs, H, Johnson, WA (1937) The role of citric acid in intermediate metabolism in animal tissues. Enzymologia, 4, 148-156.

Hans Krebs: The paper was returned [from Nature] to me five days later accompanied by a letter of rejection written in the formal style of those days. This was the first time in my career, after having published more than fifty papers, that I had [a] rejection or semi-rejection

[Krebs, 1981, page 98]

A year before Enzymologia published Krebs’s work, Nature published a welcome for Enzymologia that is remarkably relevant to our current concerns!

Laser, 1960

Maiman TH (1960). Stimulated Optical Radiation in Ruby. Nature 187: 493–494.

Charles H. Townes: He [Theodore Maiman] promptly submitted a short report of the work [report of the first laser] to the journal Physical Review Letters, but the editors turned it down.

[Townes, 2003]

The Higgs model, 1966

Higgs, PW (1966). Spontaneous symmetry breakdown without massless bosons. Physical Review, 145(4), 1156.

Peter Higgs: Higgs wrote a second short paper describing what came to be called “the Higgs model” and submitted it to Physics Letters, but it was rejected on the grounds that it did not warrant rapid publication.

[Higgs, 2013]

FT NMR, 1966

Ernst, RR, Anderson WA (1966) Application of Fourier transform spectroscopy to magnetic resonance. Review of Scientific Instruments, 37, 93-102.

Richard Ernst: The paper that described our achievements [awarded the 1991 Nobel Prize in Chemistry] was rejected twice by the Journal of Chemical Physics to be finally accepted and published in the Review of Scientific Instruments.

[Ernst, 1991]

Endosymbiotic theory, 1967

Sagan/Margulis, L. (1967). On the origin of mitosing cells. Journal of Theoretical Biology 14 (3): 225–274. PMID 11541392

Lynn Margulis: In 1966, I wrote a paper on symbiogenesis called “The Origin of Mitosing [Eukaryotic] Cells,” dealing with the origin of all cells except bacteria. (The origin of bacterial cells is the origin of life itself.) The paper was rejected by about fifteen scientific journals, because it was flawed; also, it was too new and nobody could evaluate it. Finally, James F. Danielli, the editor of The Journal of Theoretical Biology, accepted it and encouraged me. At the time, I was an absolute nobody, and, what was unheard of, this paper received eight hundred reprint requests.

[Brockman, 1995], Wikipedia

Magnetic Resonance Imaging (MRI), 1973

Lauterbur, PC (1973). Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature, 242(5394), 190-191.

Paul Lauterbur: You could write the entire history of science in the last 50 years in terms of papers rejected by Science or Nature.

[Wade, 2003], Wikipedia

The Cell Division Cycle, 1974

Hartwell LH, Culotti J, Pringle JR, Reid BJ (1974) Genetic control of the cell division cycle in yeast. Science 183:46–51.

John Pringle: Hartwell et al. (1974) was rejected without review by Nature, leaving a bad taste that has lasted…

[Pringle, 2013]

Missing data, 1976

Rubin DB (1976) Inference and missing data. Biometrika, 63, 581-592

Molenberghs (2007) wrote: … it is fair to say that the advent of missing data methodology as a genuine field within statistics, with its proper terminology, taxonomy, notation and body of results, was initiated by Rubin’s (1976) landmark paper.

DB Rubin wrote: …But was this a bear to get published! It was rejected, I think twice, from both sides of JASA; also from JRSS B and I believe JRSS A. … But I did not give up even though all the comments I received were very negative; but to me, these comments were also very confused and very wrong.

[Lin, 2014]

Descriptive versus normative economic theory, 1980

Thaler, R. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior & Organization, 1(1), 39-60.

Richard Thaler: Toward a Positive Theory of Consumer Choice was rejected by six or seven major journals

[Thaler, 2015]

Quasicrystals, 1984

Shechtman, D., Blech, I., Gratias, D., & Cahn, J. W. (1984). Metallic phase with long-range orientational order and no translational symmetry. Physical Review Letters, 53(20), 1951.

Dan Shechtman: It was rejected on the grounds that it will not interest physicists

[Shechtman, 2011]

Site-directed mutagenesis, 1978

Hutchison, C.A., Phillips S., Edgell M.H., Gillam S., Jahnke P., and Smith, M. Mutagenesis at a specific position in a DNA sequence. Journal of Biological Chemistry 253, no. 18 (1978): 6551-6560.

Michael Smith: When Michael Smith submitted his first article on site-directed mutagenesis for publication in Cell, a leading academic journal, it was rejected; the editors said it was not of general interest.

[Smith, 1993, 2011]

Interpreting mass-spectra, 1994

Eng, Jimmy K., Ashley L. McCormack, and John R. Yates. “An approach to correlate tandem mass spectral data of peptides with amino acid sequences in a protein database.” Journal of the American Society for Mass Spectrometry 5.11 (1994): 976-989.

John Yates: Fred McLafferty sent it back out to Biemann and whoever else and they rejected it again.

[Yates, 2018]

Cluster analysis and display, 1998

Eisen, MB, Spellman, PT, Brown, PO, & Botstein, D (1998). Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences, 95(25), 14863-14868.

David Botstein: The only thing I remember telling her [the science editor] was that it was my thought that this would someday be a citation classic, and in this case I was right

[Botstein, 2009]

Please suggest other papers that belong to this list!

References

Botstein D. (2009), Personal communication. See also Riding Out Rejection that followed up this post and interviewed David.

Brockman J. (1995), The Third Culture, New York: Touchstone, 144.

Ernst R. (1991) Biographical, http://www.nobelprize.org/

Higgs P. (2013) Biographical, http://www.nobelprize.org/, Brief History

Krebs, H. (1981), Reminiscences and Reflections, Clarendon Press, Oxford.

Lin, X., Genest, C., Banks, D. L., Scott, D. W., Molenberghs, G., & Wang, J. L. (2014). Past, present, and future of statistical science. Taylor and Francis.

Mullis, K. (1998), Dancing Naked in the Mind Field, Vintage Books, New York

Pringle, J. R. (2013). An enduring enthusiasm for academic science, but with concerns. Molecular biology of the cell, 24(21), 3281-3284.

Rajasekaran, G. (2014). Fermi and the theory of weak interactions. Resonance, 19(1), 18-44.

Bose, S., & Wali, K. C. (2009). Satyendra Nath Bose: his life and times: selected works (with commentary). World Scientific.

Shechtman D. (2011) Nobel Lecture, http://www.nobelprize.org/

Smith, M. (2011) Science.ca

Smith, M. (1993) Biographical, http://www.nobelprize.org/

Thaler, R. H. (2015). Misbehaving: The Making of Behavioral Economics. WW Norton & Company.

Townes CH. (2003) A Century of Nature: Twenty-One Discoveries that Changed Science and the World, University of Chicago Press.

Wade N. (2003) American and Briton Win Nobel for Using Chemists’ Test for M.R.I.’s, The New York Times.

Yates, JR, The Invention of SEQUEST, SCP2018, Northeastern University

Tell me about the science, not the prizes!

The more we focus on awards and advertise career building, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

The independent and critical assessment of data and analysis is at the core of the scientific method. Yet, the rapid growth of the scientific enterprise and the explosion of the scientific literature have made it not merely hard but impossible to read, think deeply about, and independently assess all published papers, or even the subset relevant to one’s own research. This is alarming, and it has spurred many people to think of creative and effective ways to evaluate the quality of scientific research. This exceptionally hard endeavor has attracted much-needed attention, and I am hopeful that progress will be made.

In this essay, I suggest another approach to alleviating the problem, starting with two related questions: Why is low-quality “science” written up and submitted for publication, and what can we do to curb such submissions? These questions touch upon the poorly quantifiable subject of human motivation. Scientists have a complex set of incentives that include understanding nature, developing innovative solutions to important problems, and aspirations for social status, prestige and successful careers. All these incentives are part of our human nature; they have always existed and always will. Yet, the balance among them can powerfully affect the problems that we approach and the level of evidence that we demand to convince ourselves of the truths about nature.

In my opinion, scientific culture can powerfully affect the incentives of scientists and, in the process, harness the independent thought of individual scientists, not only that of external reviewers, in raising the standards and rigor of their own work. I see a culture focused on prizes and career building as inimical to science. If the efforts of bright young people are focused on building careers, they will find ways to game the system. Many already have. As long as the motivation of scientists is dominated by factors other than meeting one’s own high standards of scientific rigor, finding the scientific results worthy of our attention will remain a challenge even with the best heuristics for ranking research papers. However, if “the pleasure of finding things out”, to use Feynman’s memorable words, is a dominant incentive, then the reward, the pleasure, cannot be achieved unless one can convince oneself of the veracity of the findings. The more prominent this reward intrinsic to scientific discovery, the lower the tendency to game the system and the smaller the need for external peer review.

A scientific culture that emphasizes research results, not their external reflections in prizes and career advancement, is likely to diminish the tendency to use publishing primarily as a means of career advancement, and thus to enrich the scientific literature with papers worthy of our attention. We know that racial stereotypes can be very destructive, and we have lessened their destructive influence by changing the popular culture. How can we apply this lesson to our scientific culture, focusing it on the critical and independent assessment of research and thus lessening the negative aspects of career building and glamour seeking?

A great place to begin is by replacing headlines focused on distinctions and career building with headlines focused on factual science. For example, the “awards” section in CVs, faculty profiles and applications for grants and tenure-track faculty positions can be replaced by a “discoveries” section that outlines, factually, significant research findings. Similarly, great scientists should be introduced at public meetings with their significant contributions rather than with long lists of the prizes and grants they have received. One might introduce Egas Moniz as the great Nobel laureate and Dmitri Mendeleev as a chemist with few great awards. Much more informatively, however, one should introduce Egas Moniz as an influential protagonist of lobotomy and Dmitri Mendeleev as the co-inventor of the periodic table of the elements.

Admittedly, Mendeleev and Moniz are prominent outliers, but they are far from the only examples of discrepancy between awarded prizes and scientific contributions. Still, the reinforcement of political misattribution of credit is not the worst aspect of focusing on prizes, grants and career building; far worse is its insidious influence on scientific culture. The more we celebrate awards, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

We should celebrate research and examples like those of Einstein and Feynman, not the prizes that purport to reflect such research. Focusing on the work and not the prize would hardly diminish the credit. Indeed, the Nobel Prize derives its prestige from scientists like Einstein and Feynman, not the other way around. A prize may or may not reflect significant contributions, and we should be given the opportunity to evaluate the contributions independently. We should focus on scientific contributions not only because the critical and independent evaluation of data is the mainstay of science, but also because it nourishes a constructive scientific culture, a culture focused on understanding nature rather than on gaming the system. Only such a culture of independent assessment can give the best ideas a chance to prevail over the most popular ideas.

The next time you have a chance to introduce an accomplished colleague, respect their contributions with an explicit reference to their work, not their prizes. With this act of respect you will help realign our scientific culture with its north star: the independent and critical evaluation of experiments, data, ideas, and conceptual contributions.

An edited version of this opinion essay was published by The Scientist as “Accomplishments Over Accolades”.