Accessible single-cell proteomics

Recently, single-cell mass-spectrometry analysis has made it possible to quantify thousands of proteins in single mammalian cells. Yet these technologies have been adopted by relatively few mass-spectrometry laboratories. Increasing their adoption can help reveal biochemical mechanisms that underpin health and disease, and it requires robust methods that can be widely deployed in any mass-spectrometry laboratory.

This aim for a “model T” of single-cell proteomics has been the guiding philosophy in the development of Single Cell ProtEomics by Mass Spectrometry (SCoPE-MS) and its version 2 (SCoPE2). We aimed to make every step easy to reproduce, from sample preparation and the optimization of experimental parameters to an open-source data-analysis pipeline. The emphasis has been on accuracy and accessibility, which has facilitated replication (video) and adoption of SCoPE2. Yet, we still found that some groups adopting these single-cell technologies fail to make quantitatively accurate protein measurements because they skip important quality-control steps of sample preparation (such as negative controls and labeling-efficiency checks) and of mass-spectrometry analysis (such as apex sampling and the purity of MS2 spectra).
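To make the labeling-efficiency check concrete: it reduces to a simple fraction, the share of identified peptides that actually carry the isobaric label. The sketch below is a minimal illustration with hypothetical counts; it is not the SCoPE2 pipeline, and the function and variable names are invented for this example.

```python
# Illustrative sketch only (not the SCoPE2 pipeline): a labeling-efficiency
# check computed from a database search in which the isobaric label was set
# as a variable modification. All counts and names below are hypothetical.

def labeling_efficiency(n_labeled: int, n_total: int) -> float:
    """Fraction of identified peptides whose N-terminus carries the label."""
    if n_total == 0:
        raise ValueError("no peptides identified")
    return n_labeled / n_total

eff = labeling_efficiency(n_labeled=9412, n_total=9600)
print(f"N-terminal labeling efficiency: {eff:.1%}")  # prints 98.0%
```

In practice, a low value flags incomplete labeling, which should be resolved before any single-cell quantification is attempted.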

These observations motivated us to write a detailed protocol for multiplexed single-cell proteomics. The protocol emphasizes quality controls that are required for accurately quantifying protein abundance in single cells and scaling up the analysis to thousands of single cells. The protocol and its associated video and web resources should make single-cell proteomics accessible to the wider research community.

Multimodal single-cell proteomics

After a lag phase of skepticism, single-cell proteomics by mass spectrometry is gathering speed. It is attracting the interest of colleagues seeking to develop and improve the methods and of colleagues seeking to apply the technology to their problems of interest.

The opportunities are beckoning. Which are likely to be the most fruitful and impactful directions? I do not have a crystal ball, but I am more excited about some directions, which I summarize below.

Empower applications
Current methods allow quantifying more than 3,000 proteins across thousands of single cells, at a throughput exceeding 200 single cells per day. These methods are ready to power many applications, and I think all method developers should release detailed protocols to facilitate broad adoption. This is a step with a huge marginal benefit. If our experience with the SCoPE2 protocol is indicative, writing a detailed protocol can also be constructive for the team.

Quantifying protein functions: Multimodal single-cell proteomics
The need and opportunities for increasing the depth and accuracy of measuring protein abundances are clear. I am confident that we will see much progress on this front, but I am more excited about harnessing the power of single-cell MS to measure protein functions, not merely abundance. Yes, that is hard. I think it’s also clearly possible and exciting. I alluded to one approach for measuring protein conformations, but there are many other exciting opportunities. Mass-spectrometry analysis of bulk samples affords numerous functional assays and so will single-cell mass-spectrometry analysis. It is up to our creativity to figure out assays for functional multimodal single-cell proteomics, and we will see some examples of those at the Single Cell Proteomics Conference.  

Respect for the limits of quantification

Many of our deepest and most celebrated scientific laws and insights have depended crucially on quantitative approaches, from Newton’s laws to recent advances in quantitative biology. I believe that quantitative approaches hold much promise for the future, and we should aim to improve our ability to quantify and deeply understand nature as much as we can.

The phenomenal successes of quantitative approaches in some scientific realms have inspired scholars to apply pseudo-quantitative approaches to realms that we do not yet know how to quantify. Some prominent historical examples come to mind. Karl Marx claimed to have identified the immutable laws of history that indicated the inevitability of communism. Similarly, the early protagonists of eugenics believed that extending the beautifully quantitative laws of genetics supported their erroneous conclusions of ethnic supremacy. A more recent example is the quantitative assessment of individual scientific articles by the journal impact factor (JIF) of the journals in which the articles are published. This unfortunate practice has been brought to a new level by the innumerate advertisement of JIFs with 5 significant figures. Technically, this is a case of false precision, since the precision of the input data is often below 5 significant figures (Wiki article with explanation). This innumeracy is only the tip of the iceberg. The inadequacy of the approach (articulated in the San Francisco Declaration on Research Assessment) is a much more serious problem.
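The false-precision point can be made concrete with a toy calculation: a ratio of two counts cannot support more significant figures than its inputs. The numbers below are hypothetical, chosen only to illustrate the rounding.

```python
from math import floor, log10

def round_sig(x: float, sig: int = 2) -> float:
    """Round x to `sig` significant figures."""
    return round(x, sig - 1 - floor(log10(abs(x))))

# Hypothetical citation counts for one journal over a two-year window
citations = 12345   # citations received
items = 2971        # citable items published

jif = citations / items
print(f"{jif:.4f}")       # prints 4.1552: five significant figures of false precision
print(round_sig(jif, 2))  # prints 4.2: a precision the inputs can actually support
```

Reporting the two-significant-figure value concedes nothing of substance; reporting five figures implies a reproducibility that citation counts simply do not have.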

Making important decisions based on pseudo-quantitative data and analysis is more than innumeracy; it is a HUGE problem. Being quantitative means respecting the limits of what can be quantified and limiting claims to what can be substantiated. The fact that somebody can somehow compute a number does not mean that the number should be taken seriously. Regretfully, pseudo-quantitative approaches sometimes prove mightily influential.

In one of my favorite essays of all time, Marc Kirschner makes a compelling argument for why pretending that we can evaluate the significance of scientific results contemporaneously with the scientific research is a fool’s errand. We cannot, certainly not with 5 significant figures. Pretending otherwise is more than innumeracy; it is a HUGE problem. We should try by all means possible to improve our ability to assess and quantify research quality, but let’s not pretend we can do that at present.

In the meantime, before we have the verdict of time and the community in the rear-view mirror, we should be more humble. We should be honest about what we can and cannot do. Rather than pretending to quantify what we cannot, we should quantify at least what we can: the reproducibility of the results. We should also avoid increasing the inequality of grant-money distribution based on shaky contemporaneous estimates of significance and impact. After all, the best and the brightest should prove their brilliance with the quality of their thought and research, not with the quantity of their research output. At least historically, the scientific achievements that I judge as great and significant tend to be great in quality, not in quantity. They depended more on creative thought, inspiration, and serendipity than on great concentrations of resources based on flawed, albeit quantitative, statistics.

Maybe in our efforts to be quantitative and objective, we have focused on what can be easily quantified (quantity) and pretended that it reflects what really counts (quality). A measure of humility and realignment is in order if we are to preserve and further the research enterprise.

My best publishing experience

My experience publishing scientific research has ranged from receiving valuable critical and constructive feedback to receiving vague, utterly unsubstantiated criticisms. One example, however, stands apart. I want to share it, to spread the word about this good practice.

Last autumn, I measured changes in the composition of the ribosomes that defy our decades-old understanding of this structure. Exceptional claims require exceptional evidence; I did my best to test the evidence, and by the time I was ready to submit a manuscript to a peer-reviewed journal, I was too impatient to wait a few months before I started receiving feedback, before I knew what my peers thought about the work. This impatience gave me the final push to try the new preprint server for biology research, bioRxiv. There are very many good idealistic reasons why we should all use it and a few pragmatic fears that inhibit its use. I tried it. I uploaded my manuscript.

My experience exceeded my expectations, by far! I received very constructive and thoughtful feedback from some of the most knowledgeable experts in the field, both by email and in comments on the server. Isn’t this what publication is supposed to do: disseminate results widely to our colleagues and solicit their feedback? bioRxiv worked marvelously in this regard, providing all the essential aspects of publishing research results!

How about the stamp of approval of a “prestigious” journal/magazine? We see an increasing number of cases undermining the credibility of this stamp. Still, I am far from denying the value of critical peer feedback; it is a very valuable initial screening, and I already got this feedback from bioRxiv. The ultimate stamp of validation is when the work is successfully reproduced by independent researchers and tested again and again over the coming decades. Of course, I cannot be 100% certain that my work (or any other recent work) will survive this test of time, and no peer-reviewed journal can tell me that; only time can. The essential functions of peer-reviewed journals — providing wide dissemination and initial feedback from peers — are also provided by bioRxiv, only much faster and at no expense. Use it! Let’s make publishing about sharing results and critical feedback, not about journal/magazine politics.

Slavov N., Semrau S., Airoldi E.M., Budnik B., van Oudenaarden A. (2014)
Variable stoichiometry among core ribosomal proteins. bioRxiv, PDF. doi: http://dx.doi.org/10.1101/005553

Reproducible science

How can we increase the reproducibility of scientific results? In my opinion, the solution depends on a simple principle: People respond to incentives and cultural values. Until our scientific system begins to incentivize reproducible research, we will have to deal with the problem of irreproducible results. Actions, incentives and values speak louder than words.

As I wrote previously, our scientific culture has a profound influence on the research that we do and its quality. An element of this culture is the attention that we pay to the research methods and their description. The fact that an increasing number of journals and magazines relegate the methods section to the end of their scientific articles or even just to the online supporting materials is sending a strong message that the methods are less important than the findings. Yet the findings mean nothing without solid, reliable methods. Before I am interested in the findings, I would like to know whether the methods can support them, i.e., whether the findings might be true. I agree with John Pringle that the relegation of the materials and methods section is a very unfortunate and pernicious trend for scientific journals. It is an appropriate practice for news magazines but not for journals publishing original scientific research.

Tell me about the science, not the prizes!

The more we focus on awards and advertise career building, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

The independent and critical assessment of data and of analysis is at the core of the scientific method. Yet, the rapid growth of the scientific enterprise and the explosion of the scientific literature have made it not only hard but impossible to read, think deeply, and assess independently all published papers, or even the subset of all papers relevant to one’s research. This is alarming. It has alarmed many people thinking of creative and effective ways to evaluate the quality of scientific research. This exceptionally hard endeavor has attracted much needed attention and I am hopeful that progress will be made.

In this essay, I suggest another approach to alleviating the problem, starting with two related questions: Why is low-quality “science” written up and submitted for publication, and what can we do to curb such submissions? These questions touch upon the poorly quantifiable subject of human motivation. Scientists have a complex set of incentives that include understanding nature, developing innovative solutions to important problems, and aspirations for social status, prestige, and successful careers. All these incentives are part of our human nature; they have always existed and always will. Yet the balance among them can powerfully affect the problems that we approach and the level of evidence that we demand to convince ourselves of the truths about nature.

In my opinion, scientific culture can powerfully affect the incentives of scientists and, in the process, harness the independent thought of individual scientists — not only of external reviewers — in raising the standards and rigor of their own work. I see a culture focused on prizes and career building as inimical to science. If the efforts of bright young people are focused on building careers, they will find ways to game the system. Many already have. As long as the motivation of scientists is dominated by factors other than meeting one’s own high standards of scientific rigor, finding the scientific results worthy of our attention will remain a challenge even with the best heuristics for ranking research papers. However, if “the pleasure of finding things out” — to use Feynman’s memorable words — is a dominant incentive, the reward, the pleasure, cannot be achieved unless one can convince oneself of the veracity of the findings. The higher the prominence of this reward intrinsic to scientific discovery, the lower the tendency to game the system and the need for external peer review.

A scientific culture that emphasizes the research results — not their external reflections in prizes and career advancement — is likely to diminish the tendency to use publishing primarily as a means of career advancement, and thus enrich the scientific literature with papers worthy of our attention. We know that racial stereotypes can be very destructive, and we have lessened their destructive influences by changing the popular culture. How can we apply this lesson to our scientific culture, to focus on the critical and independent assessment of research and thus lessen the negative aspects of career building and glamour seeking?

A great place to begin is by replacing headlines focused on distinctions and career building with headlines focused on factual science. For example, the “awards” section in CVs, faculty profiles, and applications for grants and tenure-track faculty positions can be replaced by a “discoveries” section that outlines, factually, significant research findings. Similarly, great scientists should be introduced at public meetings with their significant contributions rather than with long lists of the prizes and grants they have received. One might introduce Egas Moniz as the great Nobel laureate and Dmitri Mendeleev as a chemist with few great awards. Much more informatively, however, one should introduce Egas Moniz as an influential protagonist of lobotomy and Dmitri Mendeleev as the co-inventor of the periodic table of elements.

Admittedly, Mendeleev and Moniz are prominent outliers, but they are far from the only examples of discrepancy between awarded prizes and scientific contributions. Still, the worst aspect of focusing on prizes, grants, and career building is not the reinforcement of political misattribution of credit; far worse is the insidious influence of an excessive focus on prizes and career building on the scientific culture. The more we celebrate awards, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

We should celebrate research and examples like those of Einstein and Feynman, not the prizes that purport to reflect such research. Focusing on the work and not the prize would hardly diminish the credit. Indeed, the Nobel Prize derives its prestige from scientists like Einstein and Feynman, not the other way around. A prize may or may not reflect significant contributions, and we should be given the opportunity to evaluate the contributions independently. We should focus on the scientific contributions not only because critical and independent evaluation of data is the mainstay of science but also because it nourishes a constructive scientific culture, a culture focused on understanding nature and not on gaming the system. Only such a culture of independent assessment can give the best ideas a chance to prevail over the most popular ideas.

The next time you have a chance to introduce an accomplished colleague, respect their contributions with an explicit reference to their work, not their prizes. With this act of respect you will help realign our scientific culture with its north star: the independent and critical evaluation of experiments, data, ideas, and conceptual contributions.

An edited version of this opinion essay was published by The Scientist: Accomplishments Over Accolades

The Best Projects Are Least Obvious

We are fortunate to live in an exciting time. Today, new technologies enable the design and execution of straightforward experiments, many of which were not possible just a few years ago. These experiments hold the potential to bring new discoveries and to improve medical care. An abundance of obvious-next-step experiments creates a buzz of activity and excitement that is quite palpable among graduate students, postdocs, and professors alike.

Such enthusiasm permeates the air and stimulates; it also overwhelms. It seems there is always so much to do and never enough time to do it. Recent findings have opened up many new research avenues, and emerging technologies are ever-alluring. How are investigators to pursue all of these things, given our limited time? Or, failing that, how can we at least choose the best leads to follow?

Much of the aforementioned buzz is often the result of an overabundance of next-step projects that are obvious to most researchers. Many of these projects are quite good, but rarely are they exceptional — at least in the sense that they result in a nontrivial connection. It’s not often that these projects help researchers advance their fields. Many such projects use novel, fashionable technologies, but bring little new perspective to the scientific community. Yet I have seen colleagues become so busy pursuing such experiments that they lack the time to complete most of their projects, or to even think conceptually and creatively.

Of course, some next-step experiments are poised to become major landmarks, as were the first gene-expression measurements by RNA-seq, the first comprehensive mass-spectrometry-based quantification of a eukaryotic proteome, the first gene-deletion collection, the first analysis of conserved DNA sequences in mammalian genomes, and the first induction of pluripotent stem cells. If I do not pursue the obvious experiments likely to become landmarks, someone else will, and science will progress without delay. These tempting experiments typically lure multiple independent groups, at least some of which abandon the projects once their competitors’ first big paper has been published.

Thus, none of the many tempting next-step experiments — even among those that are poised to be landmarks — is likely the best to do if I want to make a difference. After all, the many experiments that are obvious to me are likely to be obvious to most of my colleagues. Few of the most tempting experiments are likely to bring genuinely new perspectives to standing problems or to find important new problems. In fact, I find that the more obvious an experiment is to me, the less likely it is to evoke a new perspective, no matter what new and fashionable technologies are used. What’s more, the more tied up I become with next-step experiments, the less time I have to think of truly great ones.

The overabundance of stimulating next-step experiments contrasts strikingly with a dearth of genuinely new perspectives. Focusing on genuinely creative ideas recasts the original question of “How can I possibly follow all of the many tempting avenues?” as a harder, but potentially much more fruitful question: “How can I chart a course that is truly worth following?”

An edited version of this opinion essay was published by The Scientist: The Best Projects Are Least Obvious

The mission of MIT

I still remember very clearly the key reason behind my decision to attend MIT about a decade ago. It was a statement that set MIT apart from the other top schools. On one of the MIT webpages, I read that an MIT education is a calling about understanding nature, not about building a career. Throughout my time at MIT, both as an undergrad and as a postdoc, I have seen many examples supporting this mission statement, and they have always made MIT special for me.

Recently, however, I have been hearing more voices of an alternative culture, one that puts career first and science second. I often hear colleagues more concerned about “spinning” and “selling” a paper than about understanding nature. I hear MIT students and postdocs for whom the “impact factor” (IF) of the magazine/journal in which they publish is more important than the substance of what they publish. This worship of the IF, computed and published by the Thomson corporation, is particularly odd for scientists given the methods of computing the IF, and particularly out of place at MIT (see this excellent editorial for more information). I still believe that MIT is a special place; I have met too many students and faculty passionate about science to think otherwise. Yet, I also think that we as a community should make a concerted effort to counteract the cancerous spread of IF worship and preserve what makes MIT special. The personal example of senior members of the community who put science first can be a particularly effective and inspiring part of such an effort. I know from personal experience, because I have benefited tremendously from the example of my mentors.

This emphasis on the IF can be seen as a particular example of the general trend of decoupling merit from social reward. Such decoupling is rather widespread in all realms of life, whether actively fostered by specious advertising or passively allowed by hiring and promotion committees focusing excessively on the IF. The decoupling is perhaps more common in business than in science, perhaps more common in other academic institutions than at MIT. Yet, I find it particularly unacceptable in science and completely incongruous with MIT’s culture and mission.

This opinion appeared in The MIT Tech

On the Wings of a Seagull

Time had stopped. Watching the chaotic ballet of ripples on the lake, I was thinking of the deep connections that unify seemingly disparate phenomena. The direction of the wind fluctuated from second to second, switching the direction of the ripples almost in synchrony with my thoughts. In the Platonic tradition, I have always searched for and marveled at the deep unifying connections between different aspects of life and the world around me.

Still, I never suspected that the same fundamental ideas and symmetries that lie at the heart of nuclear physics, quantum chromodynamics and a huge variety of critical phenomena in physical systems would apply to biochemical networks, would govern the finely orchestrated regulation of tens of thousands of genes, would make the miracle of life possible. I wanted to savor the glory of the moment, not to be conscious of time.

Nonetheless, my thoughts were interrupted by a couple of seagulls soaring majestically in the air. The same fickle blows that caused the ripples to lurch back and forth were converted into a graceful flight. Through occasional adroit movements of their wings, the seagulls were gliding elegantly in the air. They were harnessing the erratic wind for their graceful ascent.

My immediate impulse was to emulate the seagulls, not only literally – because of my desire to glide through the air and soar toward the sky – but also figuratively, because of my desire to harness the power of the winds and storms in my life that I cannot even hope to control. I felt an irresistible urge to make the most of what I have with the grace and mastery of nature, with an insightful understanding of and respect for the deep principles shaping the universe. The seagulls went sky-high without controlling the wind, an example worthy of admiration! An example of gentleness and power synergistically united into a beautiful ascent.

Basking in the light of these thoughts, I sensed the blueprint of the erratic wind imprinted onto the random Hamiltonian matrices of atomic nuclei. Similarly to the seagulls, an elegant flap of the wings (chiral symmetry) is enough to transform randomness into a robust, system-characteristic behavior. This is a recurrent, pervasive pattern that charms with uncanny, magnetic appeal: complex systems transcending stochasticity and shaping randomness into exquisite creative dynamics. I know of nothing that can rival the grace and power of nature to leave freedom for creativity and still direct and orchestrate the large-scale dynamics. The world is a beautiful picture painted with gentle and adroit touches of nature’s brush.

Beauty

I open my eyes and see beauty, a gratifying perception that inspires respect and reverence, that directs my actions and endows them with meaning and purpose. What is it? It is beyond me to articulate it verbally, and yet I feel compelled to share the emotion and power that beauty engenders in me, even to try to understand its essence; it is so easy to trivialize and so hard to capture the deep meaning of beauty with its multifarious forms, from a drop of dew and a falling autumn leaf to a majestic mountaintop, from a bird song to a symphony orchestra, from a snowdrop to a beautiful woman.

What is the unifying feature among all of these? Is it just spatiotemporally organized neuronal activity, the response of my brain to fundamentally different stimuli, or is it a multifaceted diamond presenting itself in a holographic mosaic of images? Is it the dynamic interplay of elegant simplicity creating astounding diversity, an intricate world that puzzles with its variety and charms with its few powerful unifying principles?

I can hardly approach these questions without biases! Educated in the Western scientific tradition, I seek and marvel at the immutable forms from that sublime world that Plato called superlunary and Aristotle considered quintessential. Now we call it by different names, but more often we just tacitly assume that it exists; the implicit assumption that we can explain diverse phenomena in terms of universal principles is the very foundation of much of modern science. It was the assumption that Newton made in inferring the laws of motion from astronomical observations of “the heavenly bodies” and then boldly generalized to all terrestrial phenomena. We have pursued this seductive assumption with remarkable success and found common unifying principles behind ostensibly disparate phenomena.

I would go further and claim that even our ability to perceive the world testifies to its structure, to a set of statistical dependencies that consistently and reliably guide us through the dynamic interplay of thousands of variables; when I fail, it is because my intuition cannot grasp the sharp turns in the enormous phase space of my life, the high-order combinatorial nonlinearities that paint the dynamical portraits of life, portraits that often appear quite impressionistic while carrying their distinctive characters: fractal portraits imprinted by stochastic influences and yet unmistakably reflecting their underlying intrinsic dynamics. I perceive and marvel and still cannot articulate the awe-inspiring grandeur that animates my life, the beauty that surrounds me — it is a feeling, a wonderfully gratifying perception!