Label-free single-cell proteomics

Recently, Matthias Mann and colleagues published a preprint (doi: 10.1101/2020.12.22.423933) reporting a label-free mass-spectrometry method for single-cell proteomics. Many colleagues asked me what I think about the preprint, and I summarized a few comments in the peer review below. I did not examine all aspects of the work, but I hope my comments are useful:

Dear Matthias and colleagues,

I found your preprint interesting, especially as it focuses on an area that has recently received much attention. Methods for single-cell protein analysis by label-free mass-spectrometry have made significant gains over the last few years, and the method that you report looks promising. Below, I suggest how it might be improved further and benchmarked more rigorously.

To analyze single HeLa cells, you combined the recently reported diaPASEF method with Evosep LC and timsTOF improvements developed in collaboration with Bruker. This is a logical next step and sounds like a good approach for label-free MS. The method quantifies about 1000 proteins per HeLa cell, a coverage comparable to that of published DDA label-free methods (doi: 10.1039/D0SC03636F) and to that reported by the Aebersold group for a DIA method performed on a Lumos instrument (data presented at the third Single-Cell Proteomics Conference). This is good coverage, though given the advantages of diaPASEF and the timsTOF improvements, there is potential for even better performance. I look forward to exploring the raw data.

The major advantage of your label-free MS approach is its speed. It is faster than previously reported label-free single-cell proteomics methods, which allowed you to analyze over 400 single HeLa cells, generating the largest label-free single-cell proteomics dataset to date. This increased speed is a major advance for label-free single-cell proteomics. The speed (and thus throughput) can be increased further by multiplexing with the isobaric carrier approach.

You combine HeLa data from single-cell MS analysis with HeLa data from two scRNA-seq methods. This is good, and I think such joint analysis of protein and RNA should be an integral part of analyzing single-cell MS proteomics data. The results shown in Fig. 5A,B are straightforward to interpret and indicate that your method compares favorably to scRNA-seq in terms of reproducibility and missing data. The interpretation of Fig. 5C is more confounded by systematic biases. Both mass-spec and sequencing have significant biases, such as sequence-specific biases and peptide-specific ionization propensities. These biases contribute to estimates of absolute abundances (doi: 10.1038/nmeth.2031, 10.1038/nbt.2957) and might contribute to the variance captured by PC2 in Fig. 5C, and thus may alter your conclusion.
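
To make this concern concrete, below is a minimal sketch of how one might test whether a principal component in such a joint protein/RNA projection captures platform bias rather than biological variability. This is not the authors' pipeline; the file names, preprocessing choices, and matrix layout are all assumptions for illustration.

```python
# Minimal sketch: does a PC of a joint protein/RNA PCA separate platforms
# (suggesting systematic measurement bias) or cells (suggesting biology)?
# File names and preprocessing below are hypothetical assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical inputs: genes x cells matrices from each platform,
# restricted to genes/proteins quantified by both.
protein = pd.read_csv("hela_protein_by_cell.csv", index_col=0)
rna = pd.read_csv("hela_rna_by_cell.csv", index_col=0)
shared = protein.index.intersection(rna.index)

def standardize(x):
    # Log-transform and z-score each gene within its platform; this removes
    # gene-specific platform offsets (e.g., ionization propensities,
    # sequence-specific capture) before the joint projection.
    x = np.log2(x + 1)
    return x.sub(x.mean(axis=1), axis=0).div(x.std(axis=1) + 1e-9, axis=0)

joint = pd.concat([standardize(protein.loc[shared]),
                   standardize(rna.loc[shared])], axis=1)

pca = PCA(n_components=5)
scores = pca.fit_transform(joint.T.fillna(0).values)  # cells x PCs

# If a PC separates cells by platform rather than by cell state, the
# variance it captures likely reflects systematic bias, not biology.
platform = np.array(["MS"] * protein.shape[1] + ["RNA"] * rna.shape[1])
for pc in range(scores.shape[1]):
    gap = abs(scores[platform == "MS", pc].mean() -
              scores[platform == "RNA", pc].mean())
    print(f"PC{pc+1}: {pca.explained_variance_ratio_[pc]:.1%} variance, "
          f"platform separation = {gap:.2f}")
```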

I have a few suggestions:

— Benchmark the accuracy of relative quantification. Ideally, this can be done by measuring protein abundance in single cells by an independent method (such as fluorescent proteins quantified by FACS) and comparing these measurements to the MS estimates; a minimal sketch of such a comparison follows this list. You might also choose other approaches, such as spiked-in protein/peptide standards. Benchmarks of accuracy (rather than merely reproducibility) would strengthen your study.

— Order the unperturbed HeLa cells by cell division cycle (CDC) phase and display the abundances of the periodic proteins.

— Provide more discussion positioning your work in the context of the field and other approaches, in terms of technology, depth of coverage, throughput, and biological applications.
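
As a minimal sketch of the first suggestion, the accuracy benchmark could look roughly like the code below. It assumes hypothetical data in which cells expressing a fluorescent protein fusion are index-sorted, so that each cell's FACS fluorescence can be matched to the MS-based estimate of the same protein; the file and column names are illustrative, not part of the preprint.

```python
# Minimal sketch of an accuracy (not just reproducibility) benchmark:
# compare per-cell MS protein estimates against an independent FACS
# measurement of the same protein. Data and names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

# One row per single cell: FACS fluorescence and the MS intensity
# of the same (tagged) protein in that cell.
df = pd.read_csv("facs_vs_ms_per_cell.csv")  # columns: cell, facs, ms

# Work on the log scale, where fold changes are additive and the
# relationship between the two measurements should be linear.
x = np.log2(df["facs"])
y = np.log2(df["ms"])

# Reproducibility only requires agreement between replicate MS runs;
# accuracy requires agreement with the independent FACS measurement.
r, p = stats.pearsonr(x, y)
rho, _ = stats.spearmanr(x, y)
slope, intercept, *_ = stats.linregress(x, y)

print(f"Pearson r = {r:.2f} (p = {p:.1e}), Spearman rho = {rho:.2f}")
# A slope near 1 on the log-log scale indicates that relative fold
# changes are quantified accurately, not merely reproducibly.
print(f"log-log slope = {slope:.2f}, intercept = {intercept:.2f}")
```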


Nikolai Slavov
slavovlab.net

Evaluating preprints

I am hugely enthusiastic about communicating research by preprints. So naturally, I am happy to see the president and strategic advisers of one of the most elite funding institutes embrace preprints.

For centuries, publishing a scientific article was just about sharing the results. More recently, publishing research articles in a journal has served two distinct functions: (i) Public disclosure and (ii) Partial validation by peer-review (Vale & Hyman, 2016). The partial validation is sometimes followed up by strong validation: (iii) Independent reproduction and building upon the published work.

Preprints clearly can serve the first function, public disclosure. It has been less clear to me how to validate and curate the highly heterogeneous research that is published as preprints. I think this question remains open, though I have seen signs that some preprints are strongly validated (independently reproduced & built upon) even before the more conventional partial validation by peer-review.

For example, the methods and ideas underlying Single Cell ProtEomics by Mass Spectrometry (SCoPE-MS) were independently validated by multiple laboratories. Some presented their results at conferences before our preprint was peer-reviewed.

Several groups published their results after our preprint was published in a peer-reviewed journal, crediting the preprint for the ideas.

More (that I know of) are underway, all inspired by a preprint. I see this as a datapoint indicating that preprints can receive strong validation even outside the boundaries of the peer-review system that has dominated our field for the last few decades. It’s not a complete solution for evaluating all preprints, but I think it’s very encouraging evidence that preprints can be strongly validated even before the partial validation of peer-review!

Which publications are citable?

The number of references to a scientific publication is frequently used as an objective measure of the significance of the publication. This metric is far less precise than it may appear, and in the short and medium term it certainly fails to capture the most visionary and creative research. Consider, for example, that the publications of Richard Feynman are currently cited 6–10 times more frequently than during the peak of Feynman’s career. The respective comparison for the citations to publications by Albert Einstein is even more extreme. Still, citations are used as an influential metric, and thus the community standards of what is citable are very influential as well.

The community standards should be set by a community-wide discussion. Here, I will express my opinion in the hope of stimulating discussion and soliciting more opinions. To be citable, a publication must (i) be permanent and traceable (e.g., it must have a DOI), and (ii) provide scientific support for the statements for which it is cited. The second criterion is loaded and needs clarification. By scientific support, I mean data, reasoning, and theoretical/computational results that are verifiable and refutable. Crucially, this assessment of “scientific support” must be made by the authors referring to the work. Most magazines and journals do not publish the names of the editors and the peer-reviewers, or even the contents of the peer-reviews. This type of hidden assessment and the anonymous people involved in it cannot possibly assume responsibility for the scientific merits of what is published. The scientific merits of a paper and the extent to which it provides scientific support must be evaluated by the authors referring to it.

These two criteria of what is citable apply equally to traditional papers that have undergone open or hidden peer-review and to preprints uploaded on permanent servers guaranteeing timestamps and traceability. In fact, the data suggest that some communities have long recognized and adopted these standards. To my delight, I saw much enthusiasm among broader communities who have not yet adopted them.

I am optimistic that scientists will embrace their duties of independent critical assessment of the publications they refer to. I would love to hear your thoughts and ideas on what should be the criteria and the community practices for citing scientific publications.

High-quality journals with low-quality peer-reviews

There is much outcry about the increasing competition in scientific research. Yet, I do not hear comparable outcry about the increasing competition in the Olympic 100-meter dash. I see competition as a very powerful driving force; whether it drives positive or negative changes depends on our metrics and the system. Unlike the metrics for the Olympic 100-meter dash, the metrics for research performance seem rather poor. It is the metrics that make all the difference between competition driving better research or undermining the research enterprise.

Peer-review is the central metric in our current system for scientific research. Yet, sometimes PIs are embarrassed to show the reviews to their students who did all the work. How can a magazine or journal be any good if it uses poor reviews to judge what to publish and what not to publish? It cannot!

Evaluating research results will never be as simple as measuring the time of a 100-meter dash. However, I think that is not a justification for the sorry state of our current system and not a reason to despair. We can improve the quality of peer-reviews by making their contents public, as some journals (such as Molecular Systems Biology, eLife, and others) already do. I cannot think of a good reason for other journals not to adopt this good practice. However, most journals refuse to do so. I think that publishing peer-reviews will help increase their quality and will provide useful background and additional (hopefully thoughtful) discussion of the research for interested readers. The content of the peer-reviews will also give us a useful indication of the quality of the journal, perhaps much more useful and meaningful than summary statistics (first moments) of its visibility, which are subject to availability bias.

I would love to hear arguments for and against making the contents of peer-reviews openly accessible for published papers. Stay tuned for a post on whether reviewers should be anonymous. What are your thoughts?

Papers that triumphed over their rejections

Most of us know of very significant foundational scientific results that were rejected by the major journals and magazines but have nonetheless stood the test of time and proven of exceptional importance to science. The goal of this post (a work in progress) is to compile a list of such papers. I have limited the list below to papers that proved to be exceptionally influential and for which there are reliable and traceable accounts of their rejections. Although the discoveries described by most of these rejected papers have been awarded the Nobel Prize, this has not been a criterion in compiling this list, nor will it be as I expand it. Suggestions are most welcome!

Bose–Einstein statistics and condensate, 1924

Bose, S (1924). Plancks Gesetz und Lichtquantenhypothese. Z. Physik 26: 178. doi:10.1007/BF01327326

Late in 1923 [Bose] submitted a paper on the subject to the Philosophical Magazine. Six months later the editors of the magazine informed him that (regrettably) the referee’s reports on his paper were negative. Undeterred, he sent the rejected manuscript to Einstein …

[Bose & Wali, 2009, page 523]

The weak interaction (beta decay), 1933

Fermi, E (1934). Versuch einer Theorie der β-Strahlen [An attempt at a theory of beta radiation]. Z. Physik, 88, 161.

Nature Editors: It contained speculations too remote from reality to be of interest to the reader.

[Rajasekaran, 2014, page 20], Wikipedia

The Krebs cycle, 1937

Krebs, H, Johnson, WA (1937) The role of citric acid in intermediate metabolism in animal tissues. Enzymologia, 4, 148-156.

Hans Krebs: The paper was returned [from Nature] to me five days later accompanied by a letter of rejection written in the formal style of those days. This was the first time in my career, after having published more than fifty papers, that I experienced a rejection or semi-rejection.

[Krebs, 1981, page 98]

A year before Enzymologia published Krebs’s work, Nature published a welcome for Enzymologia that is remarkably relevant to our current concerns!

Laser, 1960

Maiman TH (1960). Stimulated Optical Radiation in Ruby. Nature 187: 493–494.

Charles H. Townes: He [Theodore Maiman] promptly submitted a short report of the work [report of the first laser] to the journal Physical Review Letters, but the editors turned it down.

[Townes, 2003]

The Higgs model, 1966

Higgs, PW (1966). Spontaneous symmetry breakdown without massless bosons. Physical Review, 145(4), 1156.

Peter Higgs: Higgs wrote a second short paper describing what came to be called “the Higgs model” and submitted it to Physics Letters, but it was rejected on the grounds that it did not warrant rapid publication.

[Higgs, 2013]

FT NMR, 1966

Ernst, RR, Anderson WA (1966) Application of Fourier transform spectroscopy to magnetic resonance. Review of Scientific Instruments, 37, 93-102.

Richard Ernst: The paper that described our achievements [awarded the 1991 Nobel Prize in Chemistry] was rejected twice by the Journal of Chemical Physics to be finally accepted and published in the Review of Scientific Instruments.

[Ernst, 1991]

Endosymbiotic theory, 1967

Sagan, L. [later Margulis] (1967). On the origin of mitosing cells. Journal of Theoretical Biology 14 (3): 225–274. PMID 11541392

Lynn Margulis: In 1966, I wrote a paper on symbiogenesis called “The Origin of Mitosing [Eukaryotic] Cells,” dealing with the origin of all cells except bacteria. (The origin of bacterial cells is the origin of life itself.) The paper was rejected by about fifteen scientific journals, because it was flawed; also, it was too new and nobody could evaluate it. Finally, James F. Danielli, the editor of The Journal of Theoretical Biology, accepted it and encouraged me. At the time, I was an absolute nobody, and, what was unheard of, this paper received eight hundred reprint requests.

[Brockman, 1995], Wikipedia

Magnetic Resonance Imaging (MRI), 1973

Lauterbur, PC (1973). Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature, 242(5394), 190-191.

Paul Lauterbur: You could write the entire history of science in the last 50 years in terms of papers rejected by Science or Nature.

[Wade, 2003], Wikipedia

The Cell Division Cycle, 1974

Hartwell LH, Culotti J, Pringle JR, Reid BJ (1974) Genetic control of the cell division cycle in yeast. Science 183:46–51.

John Pringle: Hartwell et al. (1974) was rejected without review by Nature, leaving a bad taste that has lasted…

[Pringle, 2013]

Missing data, 1976

Rubin DB (1976) Inference and missing data. Biometrika, 63, 581-592

Molenberghs (2007) wrote: … it is fair to say that the advent of missing data methodology as a genuine field within statistics, with its proper terminology, taxonomy, notation and body of results, was initiated by Rubin’s (1976) landmark paper.

DB Rubin wrote: … But was this a bear to get published! It was rejected, I think twice, from both sides of JASA; also from JRSS B and I believe JRSS A. … But I did not give up even though all the comments I received were very negative; but to me, these comments were also very confused and very wrong.

[Lin, 2014]

Descriptive versus normative economic theory, 1980

Thaler, R. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior & Organization, 1(1), 39-60.

Richard Thaler: “Toward a Positive Theory of Consumer Choice” was rejected by six or seven major journals.

[Thaler, 2015]

Quasicrystals, 1984

Shechtman, D., Blech, I., Gratias, D., & Cahn, J. W. (1984). Metallic phase with long-range orientational order and no translational symmetry. Physical Review Letters, 53(20), 1951.

Dan Shechtman: It was rejected on the grounds that it will not interest physicists.

[Shechtman, 2011]

Site-directed mutagenesis, 1978

Hutchison, C.A., Phillips S., Edgell M.H., Gillam S., Jahnke P., and Smith, M. Mutagenesis at a specific position in a DNA sequence. Journal of Biological Chemistry 253, no. 18 (1978): 6551-6560.

Michael Smith: When Michael Smith submitted his first article on site-directed mutagenesis for publication in Cell, a leading academic journal, it was rejected; the editors said it was not of general interest.

[Smith, 1993, 2011]

Interpreting mass-spectra, 1994

Eng, JK, McCormack, AL, Yates, JR (1994). An approach to correlate tandem mass spectral data of peptides with amino acid sequences in a protein database. Journal of the American Society for Mass Spectrometry, 5(11), 976-989.

John Yates: Fred McLafferty sent it back out to Biemann and whoever else and they rejected it again.

[Yates, 2018]

Cluster analysis and display, 1998

Eisen, MB, Spellman, PT, Brown, PO, & Botstein, D (1998). Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences, 95(25), 14863-14868.

David Botstein: The only thing I remember telling her [the science editor] was that it was my thought that this would someday be a citation classic, and in this case I was right.

[Botstein, 2009]

Please suggest other papers that belong to this list!

References

Botstein D. (2009), Personal communication. See also Riding Out Rejection that followed up this post and interviewed David.

Brockman J. (1995), The Third Culture, New York: Touchstone, 144.

Ernst R. (1991) Biographical, http://www.nobelprize.org/

Higgs P. (2013) Biographical, http://www.nobelprize.org/, Brief History

Krebs, H. (1981), Reminiscences and Reflections, Clarendon Press, Oxford.

Lin, X., Genest, C., Banks, D. L., Scott, D. W., Molenberghs, G., & Wang, J. L. (2014). Past, present, and future of statistical science. Taylor and Francis.

Mullis, K. (1998), Dancing Naked in the Mind Field, Vintage Books, New York

Pringle, J. R. (2013). An enduring enthusiasm for academic science, but with concerns. Molecular biology of the cell, 24(21), 3281-3284.

Rajasekaran, G. (2014). Fermi and the theory of weak interactions. Resonance, 19(1), 18-44.

Bose, S., & Wali, K. C. (2009). Satyendra Nath Bose: his life and times: selected works (with commentary). World Scientific. link

Shechtman D. (2011) Nobel Lecture, http://www.nobelprize.org/

Smith, M. (2011) Science.ca

Smith, M. (1993) Biographical, http://www.nobelprize.org/

Thaler, R. H. (2015). Misbehaving: The Making of Behavioral Economics. WW Norton & Company.

Townes CH. (2003) A Century of Nature: Twenty-One Discoveries that Changed Science and the World, University of Chicago Press, Link

Wade N. (2003) American and Briton Win Nobel for Using Chemists’ Test for M.R.I.’s, The New York Times, Link

Yates, JR, The Invention of SEQUEST, SCP2018, Northeastern University

Tell me about the science, not the prizes!

The more we focus on awards and advertise career building, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

The independent and critical assessment of data and of analysis is at the core of the scientific method. Yet, the rapid growth of the scientific enterprise and the explosion of the scientific literature have made it not merely hard but impossible to read, think deeply about, and assess independently all published papers, or even the subset of papers relevant to one’s research. This is alarming. It has alarmed many people, who are thinking of creative and effective ways to evaluate the quality of scientific research. This exceptionally hard endeavor has attracted much-needed attention, and I am hopeful that progress will be made.

In this essay, I suggest another approach to alleviating the problem, starting with two related questions: Why is low-quality “science” written up and submitted for publication, and what can we do to curb such submissions? These questions touch upon the poorly quantifiable subject of human motivation. Scientists have a complex set of incentives that include understanding nature, developing innovative solutions to important problems, and aspirations for social status, prestige, and successful careers. All these incentives are part of our human nature, have always existed, and always will. Yet, the balance among them can powerfully affect the problems that we approach and the level of evidence that we demand to convince ourselves of the truths about nature.

In my opinion, scientific culture can powerfully affect the incentives of scientists and, in the process, harness the independent thought of individual scientists — not only the external reviewers — in raising the standards and rigor of their own work. I see a culture focused on prizes and career building as inimical to science. If the efforts of bright young people are focused on building careers, they will find ways to game the system. Many already have. As long as the motivation of scientists is dominated by factors other than meeting one’s own high standards of scientific rigor, finding the scientific results worthy of our attention will remain a challenge even with the best heuristics for ranking research papers. However, if “the pleasure of finding things out” — to use Feynman’s memorable words — is a dominant incentive, the reward, the pleasure, cannot be achieved unless one can convince oneself of the veracity of the findings. The more prominent this reward intrinsic to scientific discovery, the lower the tendency to game the system and the smaller the need for external peer review.

A scientific culture that emphasizes the research results — not their external reflections in prizes and career advancement — is likely to diminish the tendency to use publishing primarily as a means of career advancement, and thus to enrich the scientific literature with papers worthy of our attention. We know that racial stereotypes can be very destructive, and we have lessened their destructive influences by changing the popular culture. How can we apply this lesson to our scientific culture to focus on the critical and independent assessment of research, and thus lessen the negative aspects of career building and glamour seeking?

A great place to begin is by replacing the headlines focused on distinctions and career building with headlines focused on factual science. For example, the “awards” section in CVs, faculty profiles, and applications for grants and tenure-track faculty positions can be replaced by a “discoveries” section that outlines, factually, significant research findings. Similarly, great scientists should be introduced at public meetings with their significant contributions rather than with long lists of the prizes and grants they have received. One might introduce Egas Moniz as the great Nobel laureate and Dmitri Mendeleev as a chemist with few great awards. Much more informatively, however, one should introduce Egas Moniz as an influential protagonist of lobotomy and Dmitri Mendeleev as the co-inventor of the periodic table of elements.

Admittedly, Mendeleev and Moniz are prominent outliers, but they are far from the only examples of a discrepancy between awarded prizes and scientific contributions. Still, the reinforcement of politically misattributed credit is not the worst aspect of focusing on prizes, grants, and career building; far worse is its insidious influence on the scientific culture. The more we celebrate awards, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

We should celebrate research and examples like those of Einstein and Feynman, not the prizes that purport to reflect such research. Focusing on the work and not the prize would hardly diminish the credit. Indeed, the Nobel Prize derives its prestige from scientists like Einstein and Feynman, and not the other way around. A prize may or may not reflect significant contributions, and we should be given the opportunity to evaluate the contributions independently. We should focus on the scientific contributions not only because critical and independent evaluation of data is the mainstay of science but also because it nourishes a constructive scientific culture, a culture focused on understanding nature and not on gaming the system. Only such a culture of independent assessment can give the best ideas a chance to prevail over the most popular ideas.

The next time you have a chance to introduce an accomplished colleague, respect their contributions with an explicit reference to their work, not their prizes. With this act of respect, you will help realign our scientific culture with its north star: the independent and critical evaluation of experiments, data, ideas, and conceptual contributions.

An edited version of this opinion essay was published by The Scientist as “Accomplishments Over Accolades”.