Research fest

Academics like to discuss paper rejections and gatekeeping. At the other end of this spectrum is highlighting research that deserves our attention.

Let’s promote good research: Share it in accessible and engaging ways. Put it in context and help your colleagues appreciate it. The more we can put substance ahead of hype, the more science and our colleagues benefit from our highlights.

Below are links to tweets about papers that I find interesting and worth sharing!

Inadvertent Support

Every day, thousands of colleagues use social media to express surprise, dislike, or even outrage at the impact factor, at articles in luxury journals, at closed access, at Trump, and so on and so forth.

This voluminous response carries a powerful self-sabotaging message about the influence and visibility of what is criticized; this response and its hyperlinks tell internet search engines just how influential, and thus how highly ranked, the criticized pages should be. It is a self-defeating response, one that provides strong and vital support to the very nemeses it targets. This support is unintended and inadvertent, but powerful.

There is another option. Focus on spreading and sharing what you like and admire, i.e., what is worth sharing. Whether it is a great paper in a luxury journal or a great paper in a less visible journal, share it on its own merits. Emphasize the good, since the bad is not worth your time or mine, or the high rank that search engines will give it. And what about those transgressions that you find outrageous? Ignoring them is a far more powerful and effective message than honoring them with your attention. They do not deserve attention. Consider this:

Ellsworth Toohey: There’s the building that should have been yours. There are buildings going up all over the city which are great chances refused and given to incompetent fools. You’re walking the streets while they’re doing the work that you love but cannot obtain. This city is closed to you. It is I who have done it! Don’t you want to know my motive?
Howard Roark: No!
Ellsworth Toohey: I’m fighting you and shall fight you in every way I can.
Howard Roark: You’re free to do what you please!
Ellsworth Toohey: Mr. Roark, we’re alone here. Why don’t you tell me what you think of me in any words you wish.
Howard Roark: But I don’t think of you!
[Roark walks away and Toohey’s head slumps down]

– The Fountainhead

Magnanimity pays off

Earlier this year, I read an inspiring recollection (by Sydney Brenner) of a grand scientific milestone: the elucidation of the genetic code. How do DNA nucleotides code for the amino-acid sequence of proteins? This fundamental question had captivated numerous scientists, including Francis Crick and Sydney Brenner. The punchline of this wonderful interview/recollection is a magnanimous act by Francis Crick:

In August 1961, more than 5,000 scientists came to Moscow for five days of research talks at the International Congress of Biochemistry. A couple of days in, Matt Meselson, a friend of Crick’s, told him the news: The first word of the genetic code had been solved, by somebody else. In a small Friday afternoon talk at the Congress, in a mostly empty room, Marshall Nirenberg—an American biochemist and a complete unknown to Crick and Brenner—reported that he had fed a single repeated letter into a system for making proteins, and had produced a protein made of repeating units of just one of the amino acids. The first word of the code was solved. And it was clear that Nirenberg’s approach would soon solve the entire code.

Here’s where I like to imagine what I would have done if I were Crick. For someone driven solely by curiosity, Nirenberg’s result was terrific news: The long-sought answer was arriving. The genetic code would be cracked. But for someone with the human urge to attach one’s name to discoveries, the news could not have been worse. Much of nearly a decade’s worth of Crick and Brenner’s work on the coding problem was about to be made redundant.

I’d like to believe I would have reacted honorably. I wouldn’t have explained away Nirenberg’s finding to myself, concocting reasons why it wasn’t convincing. I wouldn’t have returned to my lab and worked a little faster to publish my own work sooner. I’ve seen scientists react like this to competition. I’d like to believe that I would have conceded defeat and congratulated Nirenberg. Of course, I’ll never know what I would have done.

Crick’s response was, to me, remarkable and exemplary. He implored Nirenberg to give his talk again, this time to announce the result to more than 1,000 people in a large symposium that Crick was chairing. Crick’s Moscow meeting booklet survives as an artifact of his decision, with a hand-written “Nirenberg” in blue ink, and a long arrow inserting into an already-packed schedule the scientist who had just scooped him. And when Nirenberg reached the stage, he reported that his lab had just solved a second word of the code.

by Bob Goldstein 

I admire Crick’s reaction. It is very honorable. In the long run, it helped both science and Crick’s reputation. Nirenberg had a correct result and sooner or later, he was going to receive credit for it. Crick facilitated this process, and in the process Crick only added to his own credit. Our current admiration for Crick’s reaction at the Moscow conference is the only proof I need.

Any interpretation that sees Crick’s magnanimous act as good only for the science but bad for Crick’s personal reputation is myopic; it misses the long run. It misses my (and hopefully your) opinion of Crick’s magnanimous act.

Chalk talks are awesome!

Many articles discuss the process of obtaining an independent academic position. Some aim at objective quantitative analysis of data, while others present particularly clean and well-controlled cases. To this collection, I want to add my own experience, focusing on important aspects that cannot be captured by systematic analyses of aggregated data. My account, on the other hand, is certain to be biased by aspects specific to my background and research.


The good

I interviewed for 12 positions and I enjoyed the interviews, every single one of them. There are many good experiences to discuss, but I will focus on the highlights. The first highlight is that in every search committee, some professors had read my research articles carefully and understood them deeply. This enabled very thoughtful and fun discussions. The second highlight, in a similar vein, was the chalk talks. I absolutely loved those. At every single chalk talk, I received engaging and thoughtful questions; my chalk talk at EMBL was particularly memorable in this regard. I feel that the collective experience of my 12 chalk talks contributed much to refining my future research program.


The bad

For most of my interviews (all except 2), the heads of the search committees and the senior professors told me that they had decided to make me an offer and were in the process of finalizing the formal steps. In many cases, the formal steps were never finalized, and I never received offers. In some cases, I know that the deans decided that they could not afford to spend the allocated budget on a start-up package. In another case, the offer was halted by interdepartmental differences in preference. Such promises that did not materialize were my only significant negative experience.


The bottom line

My experience of very thoughtful interviews contrasts markedly with the widely discussed cynical view that selection is based mostly on grants and on papers in magazines with the highest impact factors (IF). It could be that none of the IF-focused committees invited me for interviews because I never published in Nature or Science. My doctoral mentor refused to be a senior author on papers submitted to Nature or Science. I accepted this attitude in large part because none of his influential papers were published in these magazines; in fact, many of the most influential papers transformed scientific paradigms despite being published in journals without exceptional prestige.

I was, at the time, very disappointed that some of the offers fell through, particularly for one of the places that seemed to be absolutely ideal for me. In retrospect, however, I feel less certain that in the long run that place would have been better for me than Northeastern University, especially considering my personal life. There are many ways leading to success, and their beginnings are not always predictive of their future meanderings.

Based on my experience, interviews for tenure track positions are not to be feared but to be anticipated and enjoyed. If your experience is anything like mine, you will meet many thoughtful colleagues and perhaps even start new collaborations. Enjoy the adventure!

The dark age of 20th century science


Wegener fossil map

Continental displacement/drift theory is among the most significant paradigm changes in science. It transformed our understanding of the planet Earth; a static picture of fixed continents gave way to a model of dynamic tectonic plates that shape our landscape, from mountains to continents. At present, the movement of continents is supported by direct measurements and established with the highest confidence allowed by the scientific method. However, the scientific establishment persistently rejected continental displacement/drift theory for over half a century despite the compelling evidence that Alfred Wegener provided in support of the theory. Why? Why did geologists and geophysicists ignore the evidence supporting Wegener’s theory?

The reasons include (i) the dogmatic belief that the oceanic crust was too firm for the continents to “simply plough through”, (ii) that most geologists viewed Wegener as an outsider to their field, and (iii) that Wegener’s data and arguments were misunderstood. Perhaps having biases about what is possible (i.e., reason i) and about people (i.e., reason ii) is inevitable human folly. Yet, allowing these biases to be justified by misinterpreting and ignoring data (i.e., reason iii) seems less inevitable. Rather, allowing dogma to suppress reason is the very antithesis of the scientific method, the very weakness of the human mind that the scientific method attempts to avoid.

How and why were Wegener’s data and arguments misunderstood? One of Wegener’s arguments was that the coastlines of different continents complement one another. After continents separate, erosion sculpts their coastlines and thus complicates the analysis of coastline complementarity. To mitigate these complications, Wegener used the 200 m isobath, not the modern coastlines. Still, his contemporaries incorrectly claimed that Wegener had used modern coastlines, ignoring his use of the 200 m isobath. At the same time, mainstream geologists condoned the grave problems of the alternative theory favored by the establishment; this theory posited, without empirical support, the existence of sunken land bridges between the continents as an alternative explanation of the observed fossil distributions.

This dark age of 20th century science ended with data and reason triumphing over dogma, as other discoveries have triumphed over their rejections. That is very encouraging. We should also remember, however, that dogma held reason back for half a century. This did not happen in medieval Europe; it happened in the age of “modern science” and the atomic bomb. It happened because of sloppy reasoning and the twisting of arguments in favor of the preferred conclusion, the dogma of the time. What could be more inimical to science? What is the antidote?

Can mere hype succeed in the long term?

Michael I. Jordan, one of the most prominent statisticians of our time, warns that “the overeager adoption of big data is likely to result in catastrophes…”. This comment raises a question: if much of the enthusiasm for big data is indeed mere hype, is the big data movement likely to deliver results in the long term?

I believe that the big data movement will deliver many significant results, but not because it is a particularly good new idea substantiated by real progress in statistics, machine learning, or useful data collection. I agree that we “… are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.” Much of big data nowadays seems to be about manipulating people for political and mercenary purposes based on simplistic statistical models.

I think that any hype is likely to “succeed” if it attracts enough attention and resources and if it is stated broadly and vaguely enough. Resources and prominence attract great people, and those people bring many ideas, most of which are far beyond the scope and imagination of the people who started the hype. Some of these ideas are likely to be good and to result in significant progress. A decade from now, we will be able to look back and see that many significant methodological developments in statistics were enabled by the big data hype and most likely would not have happened without it. What we will not be able to see are all the other, perhaps much more significant and beneficial, developments that would have happened if the resources and attention had been allocated to other areas.

Yes, mere hype can succeed in the long term. Attracting resources and attention is the key to the success of any huge initiative, and hype is perhaps the easiest way to attract them. Huge initiatives attract at least some good people, who are likely to come up with at least some good ideas that deliver something useful. If the hype was stated broadly and vaguely enough, these results will seem to validate the vision of the people who started the movement. The hype is declared a success.

Honor or Disrespect

Before presenting, a scientist is introduced to a group of expert scientists with a list of prizes, awards, grants, titles, and fellowships, but without a word about the research that led to these accolades. I have been in such audiences many times, and every single time I felt disrespected. I also felt that the presenter’s research was being disrespected.

I previously wrote about why I find this type of introduction subversive to our scientific culture. Why is it disrespectful to the presenter and the audience? On the surface, some may see it as honoring the presenter and their research. However, I see it as delegating critical thinking and evaluation to external, often anonymous, and sometimes highly political committees. Instead of providing useful background framing the research we are about to hear, it asserts the authority of the presenter based on accolades. Assertion of authority is inimical to critical thinking. If the presenter has done significant research, the audience of scientists — attending the talk because of their interest and expertise in the research area — should be able to understand and appreciate it. The audience would fail to appreciate the significance only if they lack expertise or if the research accomplishments of the presenter are not highly significant. These two possibilities are exactly my interpretation of an introduction enumerating a list of accolades. Both possibilities are disrespectful!

Respect for the limits of quantification

Many of our deepest and most celebrated scientific laws and insights have depended crucially on quantitative approaches, from Newton’s laws to recent advances in quantitative biology. I believe that quantitative approaches hold much promise for the future, and we should aim to improve our ability to quantify and deeply understand nature as much as we can.

The phenomenal successes of quantitative approaches in some scientific realms have inspired scholars to apply pseudo-quantitative approaches to realms that we do not yet know how to quantify. Some prominent historical examples come to mind. Karl Marx claimed to have identified immutable laws of history that indicated the inevitability of communism. Similarly, the early protagonists of eugenics believed that extending the beautifully quantitative laws of genetics supported their erroneous conclusions of ethnic supremacy. A more recent example is the quantitative assessment of individual scientific articles by the journal impact factor (JIF) of the journals in which the articles are published. This unfortunate practice has been taken to a new level by the innumerate advertisement of JIFs with 5 significant figures. Technically, this is a case of false precision, since the precision of the input data is often below 5 significant figures. This innumeracy is only the tip of the iceberg. The inadequacy of the approach (articulated in the San Francisco Declaration on Research Assessment) is a much more serious problem.
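The false-precision point can be made concrete with a small Python sketch. The citation and item counts below are invented for illustration, not real JIF inputs; only the arithmetic is real.

```python
# Illustrative sketch of false precision; all numbers below are invented.
# A journal impact factor is roughly: citations received in year Y to items
# published in years Y-1 and Y-2, divided by the number of those items.
citations = 12345  # hypothetical citation count
items = 2263       # hypothetical number of citable items

jif = citations / items
print(f"Advertised with 5 significant figures: {jif:.4f}")

# Citation counts are noisy (indexing errors, disputed item types, etc.).
# Even a modest +/-2% uncertainty in the numerator swamps the last digits,
# so only the leading digit or two of the ratio is meaningful.
low = 0.98 * citations / items
high = 1.02 * citations / items
print(f"With ~2% input uncertainty: {low:.4f} to {high:.4f}")
```

With these toy inputs, the uncertainty band is wider than 0.2, dwarfing the fourth and fifth “significant” figures of the advertised ratio.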

Making important decisions based on pseudo-quantitative data and analysis is more than innumeracy; it is a HUGE problem. Being quantitative means respecting the limits of what can be quantified and limiting claims to what can be substantiated. The fact that somebody can somehow compute a number does not mean that the number should be taken seriously. Regrettably, pseudo-quantitative approaches sometimes prove mightily influential.

In one of my favorite essays of all time, Marc Kirschner makes a compelling argument for why pretending that we can evaluate the significance of scientific results contemporaneously with the research is a fool’s errand. We cannot, certainly not with 5 significant figures. Pretending otherwise is more than innumeracy; it is a HUGE problem. We should try by all means possible to improve our ability to assess and quantify research quality, but let’s not pretend that we can do so at present. We cannot, certainly not with 5 significant figures.

In the meantime, before we have the verdict of time and of the community in the rear-view mirror, we should be more humble. We should be honest about what we can and cannot do. Rather than pretending to quantify what we cannot, we should quantify at least what we can: the reproducibility of the results. We should also avoid increasing the inequality of grant-money distribution based on shaky contemporaneous estimates of significance and impact. After all, the best and the brightest should prove their brilliance with the quality of their thought and research, not with the quantity of their research output. At least historically, the scientific achievements that I judge as great and significant tend to be great in quality, not in quantity. They depended more on creative thought, inspiration, and serendipity than on great concentrations of resources allocated based on flawed, albeit quantitative, statistics.

Maybe in our efforts to be quantitative and objective, we have focused on what can be easily quantified (quantity) and pretended that it reflects what really counts (quality). A measure of humility and realignment is in order if we are to preserve and further the research enterprise.

Tell me about the science, not the prizes!

The more we focus on awards and advertise career building, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

The independent and critical assessment of data and analysis is at the core of the scientific method. Yet the rapid growth of the scientific enterprise and the explosion of the scientific literature have made it not merely hard but impossible to read, think deeply about, and assess independently all published papers, or even the subset relevant to one’s research. This is alarming. It has alarmed many people, who are thinking of creative and effective ways to evaluate the quality of scientific research. This exceptionally hard endeavor has attracted much-needed attention, and I am hopeful that progress will be made.

In this essay, I suggest another approach to alleviating the problem, starting with two related questions: why is low-quality “science” written up and submitted for publication, and what can we do to curb such submissions? These questions touch upon the poorly quantifiable subject of human motivation. Scientists have a complex set of incentives that include understanding nature, developing innovative solutions to important problems, and aspirations for social status, prestige, and successful careers. All these incentives are part of our human nature; they have always existed and always will. Yet the balance among them can powerfully affect the problems that we approach and the level of evidence that we demand to convince ourselves of the truths about nature.

In my opinion, scientific culture can powerfully affect the incentives of scientists and, in the process, harness the independent thought of individual scientists — not only of external reviewers — in raising the standards and rigor of their own work. I see a culture focused on prizes and career building as inimical to science. If the efforts of bright young people are focused on building careers, they will find ways to game the system. Many already have. As long as the motivation of scientists is dominated by factors other than meeting one’s own high standards of scientific rigor, finding the scientific results worthy of our attention will remain a challenge, even with the best heuristics for ranking research papers. However, if “the pleasure of finding things out” — to use Feynman’s memorable words — is a dominant incentive, the reward, the pleasure, cannot be achieved unless one can convince oneself of the veracity of the findings. The more prominent this reward intrinsic to scientific discovery is, the lower the tendency to game the system and the smaller the need for external peer review.

A scientific culture that emphasizes research results — not their external reflections in prizes and career advancement — is likely to diminish the tendency to use publishing primarily as a means of career advancement, and thus to enrich the scientific literature with papers worthy of our attention. We know that racial stereotypes can be very destructive, and we have lessened their destructive influence by changing the popular culture. How can we apply this lesson to our scientific culture, to focus on the critical and independent assessment of research and thus lessen the negative aspects of career building and glamour seeking?

A great place to begin is by replacing headlines focused on distinctions and career building with headlines focused on factual science. For example, the “awards” section in CVs, faculty profiles, and applications for grants and tenure-track faculty positions can be replaced by a “discoveries” section that outlines, factually, significant research findings. Similarly, great scientists should be introduced at public meetings with their significant contributions rather than with long lists of the prizes and grants they have received. One might introduce Egas Moniz as a great Nobel laureate and Dmitri Mendeleev as a chemist with few great awards. Much more informatively, however, one should introduce Egas Moniz as an influential protagonist of lobotomy and Dmitri Mendeleev as the co-inventor of the periodic table of the elements.

Admittedly, Mendeleev and Moniz are prominent outliers, but they are far from the only examples of a discrepancy between awarded prizes and scientific contributions. Still, the worst aspect of focusing on prizes, grants, and career building is not the reinforcement of political misattribution of credit; far worse is the insidious influence of this excessive focus on the scientific culture. The more we celebrate awards, the more we attract people seeking awards and glamorous careers, and the bigger the burden on the peer review system.

We should celebrate research and examples like those of Einstein and Feynman, not the prizes that purport to reflect such research. Focusing on the work and not the prize would hardly diminish the credit. Indeed, the Nobel prize derives its prestige from scientists like Einstein and Feynman, not the other way around. A prize may or may not reflect significant contributions, and we should be given the opportunity to evaluate the contributions independently. We should focus on scientific contributions not only because the critical and independent evaluation of data is the mainstay of science, but because it nourishes a constructive scientific culture, one focused on understanding nature and not on gaming the system. Only such a culture of independent assessment can give the best ideas a chance to prevail over the most popular ideas.

The next time you have a chance to introduce an accomplished colleague, respect their contributions with an explicit reference to their work, not their prizes. With this act of respect you will help realign our scientific culture with its north star: the independent and critical evaluation of experiments, data, ideas, and conceptual contributions.

An edited version of this opinion essay was published by The Scientist as “Accomplishments Over Accolades.”

Discoveries lie hidden behind the façade of popular assumptions

This Q&A appeared on the blog of Cell Reports: Discoveries lie hidden behind the façade of popular assumptions: Q&A with Nikolai Slavov. The article is also highlighted in this news blog.

The latest published work in Cell Reports includes this intriguing paper from Nikolai Slavov and Alexander van Oudenaarden and their colleagues at Harvard and MIT: Constant Growth Rate Can Be Supported by Decreasing Energy Flux and Increasing Aerobic Glycolysis. They show (in yeast batch cultures) that exponential growth at a constant growth rate represents not a single metabolic/physiological state but a continuum of changing states characterized by different oxidative- and heat-stress resistance, protein expression, and metabolic fluxes. We asked Dr. Slavov to tell us more about the work, his ideas, and his experiences.

How did you get into this area? What drew you to this question?

Cells can produce energy (ATP) via fermentation or via respiration. Although respiration has a higher ATP yield per glucose molecule, cancer and yeast cells tend to ferment most glucose into lactate or ethanol even in the presence of sufficient oxygen to support respiration, a phenomenon known as aerobic glycolysis. This apparently counter-intuitive metabolic strategy of using the less energy-efficient pathway is conserved from yeast to humans and has been extensively studied for decades; yet it remains poorly understood.

One can come up with very many reasonable trade-offs that could theoretically account for aerobic glycolysis. Such hypotheses make sense and appear plausible, yet they diametrically oppose each other. For example, aerobic glycolysis could either increase the total rate of ATP production (if the flux of fermented glucose increases enough to overcompensate for the reduction in ATP flux generated by respiration) or decrease it (if the flux of fermented glucose does not increase enough to compensate for that reduction). These hypotheses are exactly the opposite of each other, but both appear plausible and both have indeed been suggested and hotly contested in the literature. Yet, in the absence of direct measurements of the absolute rates of respiration and fermentation, these hypotheses cannot be distinguished.
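The two opposing hypotheses can be made concrete with a toy back-of-the-envelope calculation. The ATP yields below are textbook approximations, and the glucose fluxes are invented for illustration, not measurements from the paper.

```python
# Toy illustration of the trade-off; yields are textbook approximations
# and the glucose fluxes are made up (arbitrary units).
ATP_PER_GLUCOSE_RESPIRED = 30.0   # approximate yield with respiration
ATP_PER_GLUCOSE_FERMENTED = 2.0   # yield from glycolysis/fermentation alone

def total_atp_flux(glucose_respired, glucose_fermented):
    """Total ATP production rate for given glucose fluxes."""
    return (ATP_PER_GLUCOSE_RESPIRED * glucose_respired
            + ATP_PER_GLUCOSE_FERMENTED * glucose_fermented)

baseline = total_atp_flux(glucose_respired=1.0, glucose_fermented=0.0)

# Shift toward fermentation: whether total ATP flux rises or falls
# depends entirely on how much the fermentative flux increases.
overcompensated = total_atp_flux(glucose_respired=0.5, glucose_fermented=10.0)
undercompensated = total_atp_flux(glucose_respired=0.5, glucose_fermented=5.0)
```

Both scenarios halve the respiratory flux, yet one raises and the other lowers total ATP production, which is why only direct measurements of the absolute rates can tell the hypotheses apart.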

Our motivation was to collect direct and accurate measurements of the absolute rates of respiration and fermentation that could distinguish the trade-offs relevant to cells from those that appear plausible and theoretically possible but are not relevant to living cells. Direct measurements were essential. We wanted to detect and quantify carbon dioxide and oxygen directly, not their surrogates, such as changes in pH or the fluorescence of oxygen-binding fluorophores.

Any interesting moments/stories from your early life as a scientist?

I did my doctoral research in the Botstein lab, which was a great learning experience. I found David’s opinions to be substantiated by deep insight and compelling data. There was one exception: David claimed that yeast cells do not reach steady state under the standard batch conditions of cultivation. I did not believe that claim. My disbelief came from assuming that exponentially growing cells are at steady state and from having convinced myself that the growth of a yeast batch culture can be exponential; I had measured carefully the growth of yeast batch cultures and found that the deviations from exponential growth at low biomass densities, if any, were smaller than my measurement error (<0.2%). I took such exponential growth over several doublings at a constant rate as evidence of steady state.
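The kind of check described above can be sketched in a few lines of Python. The time points and densities below are made-up toy data forming a perfect exponential; real measurements would show noise at roughly the level mentioned above.

```python
import math

# Toy sketch of the exponential-growth check; the data are invented and
# perfectly exponential, so deviations here are essentially zero.
times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]               # hours (hypothetical)
densities = [0.05 * 2 ** (t / 1.5) for t in times]   # doubling time 1.5 h

# If growth is exponential at a constant rate, log(density) is linear in time.
logs = [math.log(d) for d in densities]

# Least-squares slope of log(density) vs time = specific growth rate.
n = len(times)
t_mean = sum(times) / n
l_mean = sum(logs) / n
slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
         / sum((t - t_mean) ** 2 for t in times))

# Deviations in log space approximate relative deviations in density;
# for a real culture, these would be compared against measurement error.
deviations = [abs(l - (l_mean + slope * (t - t_mean))) for t, l in zip(times, logs)]
max_rel_dev = max(deviations)
```

For real batch-culture data, the question is whether `max_rel_dev` stays below the measurement error over several doublings, which is exactly the criterion described above.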

The data in our Cell Reports paper convinced me that – contrary to my assumption – exponentially growing cells can represent not a single metabolic/physiological state but a continuum of changing states characterized by different metabolic fluxes. This result perfectly reconciles my measurements of exponential growth in batch cultures with the claim that batch cultures do not reach a steady state. This reconciliation was not part of my motivation for doing the experiments, but it is nonetheless a particularly gratifying resolution of a long-standing question in my mind.

What were some of the key factors that facilitated the success of your research?

One key factor was collecting quantitative measurements in a well-controlled system. Quantitative data are often essential even for making qualitative observations. For example, I find the observations that aerobic glycolysis increases and the total ATP flux decreases during the first exponential growth phase very interesting even as qualitative observations. However, these qualitative observations depended crucially on collecting and analyzing quantitative data.

Another key factor was making direct measurements. I found my data and their implications so surprising that if my measurements were not direct – no matter how quantitative – I would have ignored the results, at least until I could come up with a direct approach to measuring the relevant fluxes. For example, if I had estimated the carbon dioxide flux by an indirect surrogate – such as changes in the pH – I would not have had the confidence to overturn long-standing assumptions.

What are the big questions right now in your field? The big challenges? Big changes?

A primary challenge in systems biology, which we also encountered during the work on our Cell Reports paper, is the causal interpretation and conceptual understanding of coincident or correlated events during complex physiological responses. We do not have a general approach, experimental or theoretical, to confidently deconvolve direct causal interactions from the many indirect correlations that we observe. We can easily make computational inferences, based on a myriad of algorithms, that are likely correct, but not inferences that are certainly correct. We can also overexpress and delete individual genes or small groups of genes, which is very helpful. However, even such perturbation experiments fall far short of identifying and understanding the mechanisms of biological dynamics that depend on multiple molecules, as physiological responses often do.

Any interesting stories about this work? Setbacks or unexpected insights? Mistakes, humor, epic experiments, all-nighters?

The surprising trends in the data brought both thrilling excitement and excruciating discomfort from the possibility of artifacts. I had plenty of all-nighters during the long time-courses (over 50 hours) and many early-morning visits to the lab since I would wake up before sunrise wondering how the data from the new experiment running overnight looked and whether they remained consistent with the current model. Initially I was very skeptical of the pervasive dynamics during exponential growth and did a lot of control experiments – some of which provided interesting new leads – just to convince myself and rule out artifacts.

What would you like non-scientists to know about your work?

In my opinion, the most general lesson is to always be a little skeptical of well-established assumptions, especially those that allow convenient simplifications and have been accepted before precise quantitative measurements were available. Rather, one should collect the most direct empirical data that one can. We have a lot to learn and understand about even the most widely used and studied scientific model systems if we approach them quantitatively. I strongly believe that much of this understanding is essential to developing effective therapies with minimal unintended consequences. Without understanding, we may engineer desirable results but cannot rule out potential unintended consequences of our assumptions.

What are the next steps for your group and/or this project?

I think that our results raise many questions. One question that I find intriguing, even though we did not discuss it in our report, is whether some of the measured dynamics might reflect anticipatory cellular responses. Scientific systems are often chosen or assumed to be at steady state, since the steady-state assumption simplifies the analysis. However, cells in the real world often exist in a more dynamic environment. Optimal responses to dynamic environments require sensing environmental changes and hedging against possible future conditions. My speculative guess is that sensing the dynamic changes in growth conditions is among the factors causing the dynamics that we observed during growth at a constant rate. Coming up with clever experiments to characterize such dynamic sensing and responses could add significantly to our understanding of cellular physiology in changing environments.


Slavov N.*, Budnik B., Schwab D., Airoldi E.M., van Oudenaarden A.* (2014)
Constant Growth Rate Can Be Supported by Decreasing Energy Flux and Increasing Aerobic Glycolysis
Cell Reports, vol. 7