Academics like to discuss paper rejections and gatekeeping. The other end of this spectrum is highlighting research that deserves our attention.
Let’s promote good research: Share it in accessible and engaging ways. Put it in context and help your colleagues appreciate it. The more we can put substance ahead of hype, the more science and our colleagues benefit from our highlights.
Below are links to tweets of papers that I find interesting and worth sharing!
Over the last 15 years, my opinion about the significance of publishing in elite journals has evolved considerably. Below are some of the main phases and the factors that have shaped my opinion:
As a beginning PhD student, I took several classes based on discussing primary research papers. Many of these papers, especially in my biochemistry course, reported milestone results and were published in elite journals. This created the impression that a disproportionate number of influential papers are published in elite journals and that elite journals are where one finds such papers.
Later in my PhD, I became an expert in a small field. The two best-known papers in that field were published in Science, and I was as confident as I could be that these papers were deeply problematic, with incorrect interpretations. (I still believe that, and I think that Science is a good journal.) Being well known and influential, these papers stymied further developments in the field. It did not help that the senior author of these papers extended a death threat to me.
Simultaneously, my PhD mentor (David Botstein) expressed strong misgivings towards elite journals and even declined being a senior author for one of my papers if we submitted it to Nature or Science. I have always had deep respect for David (he is a brilliant scientist), and his opinion was very influential for me. Given that he was very accomplished and successful by any metric of merit, I became convinced that one does not need to publish in elite journals to be successful in academic research. (David has published many influential milestone papers, but only a small fraction of them are published in Nature or Science.)
During my postdoctoral and PI years, my opinion about elite journals grew increasingly complex and nuanced. Here is what I think now:
Elite journals can significantly promote papers in the short term and make a big difference for papers that do not stand out on their own.
The establishment generally resists innovation. For various reasons (mostly not related to the editors), papers reporting very original results seem to encounter much more resistance in the competition for the limited space in elite journals. This may be frustrating for their authors, but it is much less frustrating once we realize that these are the papers that need advertisement the least.
Ultimately, I aim to publish our papers in good journals that are read by colleagues interested in our results and have broad visibility. Yet, my opinion continues to evolve: I see that our results are read, cited, and reproduced even when shared via preprints, and the importance of the publication venue is declining in my opinion.
“I am a scientist focused on conducting research, not on promoting it.” This thinking strongly resonated with me when I was a PhD student. If it resonates with you, read on to learn why and how you should promote (your) research.
Common approaches to promoting your research include aiming to publish your papers in elite journals (they invest in advertising and some have dedicated professionals focusing on advertising) and attending conferences. I will not focus on these approaches because they are commonly known and often used more than I think is useful. Rather, I prefer more thoughtful and organic approaches outlined below.
The first (and my favorite) approach is clear and compelling communication. If you make important discoveries but fail to communicate them clearly and compellingly, you may be the only one (or one of the few people) knowing they are important. Such results are unlikely to drive scientific progress. Clear communication means clear logic without hype, vague phrases, and unnecessary adjectives. It means good framing with the relevant background needed to understand the questions and approaches, without extraneous clutter or meandering into tangential but not essential discussions.
The second and related approach is to communicate your results to the communities interested in them, which includes presenting at relevant conferences. It also includes scientific social networks. Since the most prominent of those is Twitter, I will make a few suggestions with Twitter in mind:
When I tweet about a paper of a colleague, I think of the tweet as a mini (and thus very limited) peer review that highlights substantive findings or elements of a paper. The format does not allow a rigorous scholarly treatment, but it does allow pointing to something specific that you genuinely think is exciting. If you tell me you are excited about sharing your paper or publishing it in a particular journal, I do not learn new scientific information. Make your tweet as informative about the science as you can.
Promote all good work that you come across. This includes your work, but also the work of your colleagues more broadly. I think of it as a good service to the scientific community.
I particularly like highlighting research that otherwise may not get noticed. Papers that are promoted by a sophisticated advertisement system don’t need my help as much.
As you can tell from these brief remarks, my definition of promoting research is enhancing its communication, both in the formal and rigorous description of the research itself and in providing thoughtful and informative comments that will attract attention to that formal description. I think such communication is an important component of the scientific ecosystem, and I strongly encourage all students to participate in it. It helps you, and it helps your scientific community.
Update: For colleagues who are active on Twitter, it is a major means for disseminating research articles.
There can be many definitions of successful research and many factors that contribute to it. Comprehensively discussing all of these can fill the pages of several books and is beyond the aims of this post. Rather, I want to focus on two factors that I consider most important for success as I define it: Accelerating the rate of progress. Making discoveries that stand out and stand the test of time.
The first factor is the direction that we choose for our research. Choosing a worthwhile direction is essential: No amount of work can compensate for a misguided direction. Choosing a fruitful and original direction is very hard, and I consider it one of the most limiting factors in advancing biomedical research at the moment. By fruitful I mean a question that is worth asking and that can be meaningfully answered with existing tools and resources. By original I mean a question not already pursued by investigators with similar skill sets and tools.
People and culture
The second factor is the people, both their individual abilities and their ability to work as a team. The pool of colleagues whom a PI can successfully recruit is defined by the research itself: the vision, the tools and past success. Having recruited strong people, a PI should help them grow both individually and as members of a team pursuing a shared vision. This mentoring is the second major role of a PI (in addition to developing and articulating a compelling vision) and deserves a devoted future post.
Of course, other factors can be influential and even intertwined with the vision and the people seeking to realize it, e.g., prestige and resources can help attract more capable people. Still, I consider all other factors secondary, especially for resourceful leaders. Overthinking some of these secondary factors may waste precious time. For example, should you buy all equipment as soon as you start your lab or buy equipment when you need it? I am not sure how to determine which is better, but I think either way can work well, and the benefit of the “better” option is unlikely to make the difference between success and failure or even to deserve the time spent on deliberation.
I think we — the research community — can be much more successful if we invest more time and effort in what matters: Coming up with original new leads and helping each other grow as scientists and people. Happy & successful new year to everybody!
It is tempting to think of scientific progress in discrete units: papers. Indeed, graduate students often devote many years to a single paper, and it looms large. Its significance then may be codified by neat numeric metrics. Yet, this view is rather myopic. Revising it changes not only the way we feel about our work (principles) but also the way we choose to communicate research (practice).
Some papers stand out as exceptionally important, but even such exceptional papers depend critically on a body of related research. Take for example Einstein’s theories of special and general relativity. As brilliant as they might be, they would mean very little without the follow-up experimental papers that confirmed their theoretical predictions. More generally, a single paper can never establish a big discovery. A paper can report a big discovery, but the discovery will not be established until independently reproduced and cross-validated in sequel studies.
What about the more typical paper? It’s a part of a continuous body of research that is reported in discrete units mostly because of old customs. The discrete units are intertwined to shape a bigger picture, and thus the significance of a single unit intimately depends on its role in the bigger picture. This thinking shifts the focus from the success of a particular paper to the success of the overall research agenda: If the overall body of work is visible and important, so are its components, even if some of them are published in relatively obscure journals. Thus, if your research is important and at least some parts of it are visible, publishing a particular part of it in a top journal is unlikely to be essential for it to be recognized and become influential.
Single-cell analysis is trendy for good reasons: It has enabled asking and answering important questions. Of course, the substantive reasons are surrounded by much hype. Sometimes colleagues tell me they want to add single-cell RNA-seq analysis since it will help them publish their paper in a more prestigious journal, and sadly there is perhaps more truth to that than I want to believe.
On the other end of the spectrum, some colleagues from the mass-spec community are puzzled by our efforts to develop methods for single-cell mass-spec analysis. At HUPO, I have been repeatedly asked: “Why analyze single cells when you can identify more peptides in bulk samples?”
So, when do we need single-cell analysis? Can’t we just FACS sort cells based on markers and analyze the sorted cells? Indeed, that may be a good strategy when the cells we analyze fall into relatively homogeneous clusters (they will never be perfectly homogeneous) and we have a reliable marker for each cluster. If these assumptions hold, the averaging out of differences between individual cells will give us very useful coarse graining. Unfortunately, bulk analysis of the sorted cells cannot validate the assumption of homogeneity. For example, we can easily sort B-cells and T-cells from blood samples because we have well-defined markers for each cell type. However, the bulk analysis of the sorted cells will not provide any information on the homogeneity of the sorted T-cells. Yet, a wealth of single-cell analysis has demonstrated the existence of multiple states within T-cell subpopulations, states for which we rarely have well-defined markers allowing efficient FACS sorting and follow-up bulk analysis.
FACS sorting is especially inadequate when cell heterogeneity is not easily captured by discrete subpopulations / clusters of cells. Consider, for example, the continuous gradient of macrophage states that we recently observed in our SCoPE2 data.
In some cases, e.g., analysis of small clonal populations, the benefits of single-cell analysis may be too small to justify the increased cost. Sometimes, we can gain single-cell information from analyzing small groups of cells, e.g., Shaffer et al., 2018. Sometimes, nobody can be sure if single-cell analysis is needed. If we assume it’s needed and perform it, the data can refute our assumption and show us that there is not much heterogeneity, at least at the level of what we could measure. If we assume that there is no heterogeneity and thus no need for single-cell analysis, e.g., we FACS sort T-cells, the bulk analysis of the sorted cells will not correct our assumption. We can feel the assumption is validated while being blinded to what might be the most meaningful cellular diversity in the system. So, single-cell analysis is not always needed, but it is much better at correcting our assumptions and teaching us whether it is needed.
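The asymmetry above can be illustrated with a toy simulation (the numbers and the two hidden states are my own illustrative assumptions, not real data): a homogeneous population and a mixture of two hidden cell states can produce identical bulk averages, and only the single-cell distribution exposes the difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # cells per sorted sample

# Population A: homogeneous, protein X at ~5 units in every cell.
homogeneous = rng.normal(loc=5.0, scale=0.5, size=n)

# Population B: a 50/50 mix of two hidden states (~2 and ~8 units)
# that share the same sorting marker.
states = rng.choice([2.0, 8.0], size=n)
heterogeneous = rng.normal(loc=states, scale=0.5)

# Bulk analysis sees only the mean: both populations look like ~5 units,
# so the homogeneity assumption appears "validated" either way.
print(homogeneous.mean(), heterogeneous.mean())

# Single-cell analysis sees the full distribution: the much larger
# spread of population B exposes the hidden mixture of states.
print(homogeneous.std(), heterogeneous.std())
```

The point is not the specific numbers but the asymmetry: the bulk averages are indistinguishable, while the cell-level spread immediately flags the heterogeneity.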
Have you seen a paper failing to cite a very relevant source? The chances are you have and that you felt — more than once — that your work was not referenced when it should have been.
Authors may choose not to cite a reference for many reasons, some legitimate, e.g., they find the evidence unconvincing, and others less so, e.g., they believe that a reference will undermine the novelty of their work. If the latter sounds incredible to you, here is a quote:
I am sorry for not referencing your paper, but it would have undermined the novelty of our work. You know how Nature editors think.
Missed citations are hot potatoes. If you complain that your papers are not cited when you believe they should be, most of your colleagues are unlikely to take you seriously. Indeed, authors are likely to be biased toward their own work and to seek more references in these citation-obsessed times. Why care about missed citations?
This is SO true. I call it the "impact factor scoop": you publish first, and then someone pubs same thing after, but gets more attention.
I think most scientists will agree that we should give credit where credit is due. So, how can we fight the “impact factor scoop”? Here is an idea:
We can start a PubPeer-style database in which everybody who has been a corresponding author for a PubMed paper can list missed citations to papers for which they are not authors. The latter is essential to avoid the biases of authors who believe that a global conspiracy against them is the only reason why everybody is not citing them. Furthermore, the database should collect missed citations in a machine-readable form so that they can be analyzed more easily.
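To make the idea concrete, here is a minimal sketch of what one machine-readable missed-citation record could look like. The schema, field names, and identifiers are all hypothetical illustrations, not an existing standard; the one rule taken from the proposal above is that reporters may not be authors of the paper they claim was missed.

```python
import json

# Hypothetical record for one missed-citation report.
# All field names and IDs are illustrative placeholders.
report = {
    "citing_paper": "PMID:00000000",   # paper that omitted the reference
    "missed_paper": "PMID:11111111",   # paper that should have been cited
    "reporter_orcid": "0000-0000-0000-0000",
    "reporter_is_author_of_missed": False,  # must be False, per the proposal
    "reason": "Reports the same finding, published earlier.",
}

def is_admissible(r):
    """Reject self-serving reports: reporters may not list their own papers."""
    return not r["reporter_is_author_of_missed"]

print(is_admissible(report))         # True
print(json.dumps(report, indent=2))  # machine-readable, easy to aggregate
```

Storing reports in a simple structured form like this would let anyone aggregate them, e.g., to count how often a given paper is reported as missed across the literature.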
What do you think about the above idea? Do you have suggestions for other approaches that can improve citation practices?
Every day, thousands of colleagues use social media to express surprise, dislike, or even outrage at the impact factor, at articles in luxury journals, at closed access, at Trump, and so on and so forth.
This voluminous response carries a powerful message about the influence and visibility of what is criticized; this response and its hyperlinks tell internet search engines just how influential, and thus how highly ranked, the criticized pages should be. It is a self-defeating response, a response providing strong and vital support for the nemeses. This support is inadvertent but powerful.
There is another option. Focus on spreading and sharing what you like and admire, i.e., what is worth sharing. Whether that is a great paper in a luxury journal or a great paper in a less visible journal, share it for its own merits. Emphasize the good, since the bad is not worth your time or my time, or the high rank that search engines will give it. And what about the transgressions that you find outrageous? Ignoring them is a far more powerful and effective message than honoring them with your attention. They do not deserve attention. Consider this:
Ellsworth Toohey: There’s the building that should have been yours. There are buildings going up all over the city which are great chances refused and given to incompetent fools. You’re walking the streets while they’re doing the work that you love but cannot obtain. This city is closed to you. It is I who have done it! Don’t you want to know my motive?
Howard Roark: No!
Ellsworth Toohey: I’m fighting you and shall fight you in every way I can.
Howard Roark: You’re free to do what you please!
Ellsworth Toohey: Mr. Roark, we’re alone here. Why don’t you tell me what you think of me in any words you wish.
Howard Roark: But I don’t think of you!
[Roark walks away and Toohey’s head slumps down]
Earlier this year, I read an inspiring recollection (by Sydney Brenner) of a grand scientific milestone: the elucidation of the genetic code. How do DNA nucleotides code for the amino-acid sequence of proteins? This fundamental question had captivated numerous scientists, including Francis Crick and Sydney Brenner. The punchline of this wonderful interview/recollection is a magnanimous act by Francis Crick:
In August 1961, more than 5,000 scientists came to Moscow for five days of research talks at the International Congress of Biochemistry. A couple of days in, Matt Meselson, a friend of Crick’s, told him the news: The first word of the genetic code had been solved, by somebody else. In a small Friday afternoon talk at the Congress, in a mostly empty room, Marshall Nirenberg—an American biochemist and a complete unknown to Crick and Brenner—reported that he had fed a single repeated letter into a system for making proteins, and had produced a protein made of repeating units of just one of the amino acids. The first word of the code was solved. And it was clear that Nirenberg’s approach would soon solve the entire code.
Here’s where I like to imagine what I would have done if I were Crick. For someone driven solely by curiosity, Nirenberg’s result was terrific news: The long-sought answer was arriving. The genetic code would be cracked. But for someone with the human urge to attach one’s name to discoveries, the news could not have been worse. Much of nearly a decade’s worth of Crick and Brenner’s work on the coding problem was about to be made redundant.
I’d like to believe I would have reacted honorably. I wouldn’t have explained away Nirenberg’s finding to myself, concocting reasons why it wasn’t convincing. I wouldn’t have returned to my lab and worked a little faster to publish my own work sooner. I’ve seen scientists react like this to competition. I’d like to believe that I would have conceded defeat and congratulated Nirenberg. Of course, I’ll never know what I would have done.
Crick’s response was, to me, remarkable and exemplary. He implored Nirenberg to give his talk again, this time to announce the result to more than 1,000 people in a large symposium that Crick was chairing. Crick’s Moscow meeting booklet survives as an artifact of his decision, with a hand-written “Nirenberg” in blue ink, and a long arrow inserting into an already-packed schedule the scientist who had just scooped him. And when Nirenberg reached the stage, he reported that his lab had just solved a second word of the code.
I admire Crick’s reaction. It is very honorable. In the long run, it helped both science and Crick’s reputation. Nirenberg had a correct result and sooner or later, he was going to receive credit for it. Crick facilitated this process, and in the process Crick only added to his own credit. Our current admiration for Crick’s reaction at the Moscow conference is the only proof I need.
Any interpretation that sees Crick’s magnanimous act as being good only for the science but bad for Crick’s personal reputation is myopic; it misses the long run. It misses my (and hopefully your) opinion of Crick’s magnanimous act.
The news buzzes with excitement about human genome editing, even human germline engineering. Successful germline engineering requires (1) a technology for editing DNA safely and (2) knowledge of what to edit and how to edit it, based on understanding the underlying biology. We are approaching (1), which is the easier part; we do not have (2), and we are far from achieving it for most desired “edits”.
A huge hurdle to germline engineering is that, beyond a few simple cases, our understanding does not allow achieving desired effects while avoiding unintended consequences. Unlike DNA sequencing, silicon chips, and DNA editing, our understanding of complex combinatorial multi-gene interactions has made very little progress over the last few decades. Until we make more progress and better understand gene interactions and the respective health outcomes, germline engineering is akin to medieval quack therapies: based on the technology to bleed patients and feed them various concoctions, but with very limited understanding of the medical consequences, and with plenty of unintended ones. We can fix the unintended consequences later, and then fix the unintended consequences of the fixing, and we will keep trying!