Science, Nature, and Cell Aren’t the Problem, Exactly

Randy Schekman made news this week with a column in the Guardian in which he proclaimed that his lab would be boycotting Science, Nature, and Cell, probably the three most prominent scientific journals.

There is a lot to be happy about in Schekman’s column. Most of all for its existence: Schekman just won the Nobel Prize in Physiology or Medicine, and he is using his fifteen minutes at the Bully Pulpit to draw attention to our deeply flawed system of valuing science, including how we fund and publish it. At a minimum, his column has reignited interest in an extremely important topic, and has already spawned a number of responses, including interesting thoughts from Michael Eisen, Retraction Watch, Luboš Motl, PZ Myers, Junk Science, Scholarly Kitchen, and mathbionerd.[1]

But is he right about the problem? The solution? I’m not sure.

Schekman argues that a key problem is the influence of these “luxury” journals. Yes, they publish some good and interesting science, but not everything they publish is good, and not everything good gets published there. In fact, there is an argument to be made that a paper published in a quality field-specific journal is, on average, more likely to contain good, solid science than a typical luxury-journal paper.

Yet, in many fields, publication in one of these fancy journals is a, if not the, primary determinant of who gets that tenure-track slot at the big research university. This, then, distorts the incentives for scientists. Rather than trying to do good science, young scientists feel that they need to do something flashy. This can lead to asking the questions that sound deep in a cocktail-party setting, rather than the questions that actually are deep, and that move the field forward in a meaningful way.

He’s right about this, of course. In fact, there are a couple of additional problems that arise from the Science/Nature/Cell-publication-equals-job system. One is stochasticity. There is always going to be a random element that goes into getting a paper accepted by these journals. There is also a degree of randomness in the nature of science itself. Sometimes you ask the right question, and the answer turns out to be a little dry, or a lot complicated. That means that, no matter how skilled a scientist you are, you’re not going to be publishing your work in one of the glossy magazines.

The other issue is a sort of nepotism. Academia, like our current economic system, is riddled with features that create rich-get-richer dynamics. The best predictor for publishing in one of the luxury magazines is having published in them before. Because, well, then you’re the sort of scientist who publishes in those magazines, so obviously your work belongs in those magazines, and so on. So, if you wind up going to the right grad school, and land in the right lab, you can co-author with one of those Science/Nature/Cell scientists, and next thing you know, you are one of those Science/Nature/Cell scientists.

For my money, this starts to get closer to the actual core of the problem: the lazy use of proxies to evaluate quality and assert expertise.

The “Hire the person with the Nature publication” phenomenon is just one facet of the systemic rot throughout academia — the thing that triggers the “Emperor has no clothes” reaction from people who are not immersed in the system. The fact is, it is extremely rare for one academic to put in the actual time and effort required to understand another academic’s research. Yet, we are always more than happy to lay out our value judgments.

You know that thing, where you read an article on the internet, and then you look at the comments, and the first comment is something super judgmental, or scolding, or something defensive and fawning? Yet it is blindingly obvious from the comment that the commenter did not actually finish reading the article?

That dynamic pretty well describes the faculty discussion of candidates in every job search in academia.

Except, on the internet, there is usually some other commenter who points out that the first one did not read the whole article. Now imagine an internet comment thread where no one finished reading the article, but where everyone felt compelled to express an opinion.

That, kids, is how tenure-track positions are filled.

Academics are deeply habituated to making quick value judgments — partly out of necessity. The pace and scope of scientific publishing is absolutely insane, and keeping up with the literature is daunting, even in a narrow field. A typical tenure-track position will receive hundreds of applications, which have to be evaluated and ranked by people who are already working crazy long hours.

This habituation is also driven partly by the social dynamics of academia. When you articulate a judgment of a paper or a candidate, you assert your own authority. You are a person with expertise and intelligence and taste, as demonstrated by your informed opinion. The broader the set of subjects on which you can express an opinion, the broader the domain of your expertise. The stronger your opinion is, the keener your intelligence. The more critical you are, the more refined your tastes must be.

Of course, these inferences only make sense if your judgments are actually correct. The problem is that, in many academic settings, asserted judgments don’t get fact-checked. Maybe no one else in the room has the requisite expertise to know if you’re full of shit or not. Maybe no one in the world can evaluate your judgment until years in the future, when some experiment validates or invalidates it.

The result is that quick, firm value judgments carry many short-term benefits. The costs associated with making bad value judgments — of being wrong — are typically deferred and diffuse. If you hire the “wrong” job candidate, it might not be obvious for years, and the cost is borne by the entire department. Plus, you never really know for sure, because you don’t have the appropriate controls (such as access to parallel universes in which you hired each of the other candidates).

So, you start to rely on proxies:

Where did the person go to college? Where did they go to grad school? Who was their advisor? How many publications do they have? In which journals? How many citations?

But how bad are those proxies? After all, each of these pieces of information probably does individually correlate with the thing you’re actually interested in — the quality of their work. And, of course, they let you make your evaluation quickly, which is critical if you have to work your way through a pile of, say, three hundred applications, and the new season of American Idol is coming up.

But the correlations are noisy. And, perhaps more to the point, they are correlated among themselves in a way that reflects that rich-get-richer dynamic.

So, candidate A went to Harvard, and they worked with a National Academy member, and they’ve got a paper in Cell. Awesome!

Except that maybe their paper got into Cell — at least partly — because it was co-authored with their National-Academy-member advisor. And they got to work with that advisor because they got into Harvard. And maybe the advisor was elected to the National Academy — at least partly — because he/she landed a job at Harvard, where he/she got to know some other National Academy members, who then nominated him/her to the Academy.

By my reckoning, the number of independent data points you have about candidate A is somewhere in the vicinity of one.
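
To make that point a bit more concrete, here is a minimal simulation sketch (my own toy illustration, not anything from Schekman’s column, and every parameter value is an assumption chosen only to show the qualitative effect). Three proxies, think school prestige, advisor fame, and journal gloss, are each driven mostly by a shared “pedigree” factor and only weakly by the quality you actually care about.

```python
# Toy illustration (assumed parameters, not real data): proxies that share a
# common "pedigree" factor carry far less independent information about
# quality than their number suggests.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # hypothetical candidates

quality = rng.normal(size=n)                              # what we actually care about
pedigree = 0.5 * quality + rng.normal(scale=0.9, size=n)  # rich-get-richer factor

def proxy():
    # Each proxy is mostly pedigree, a little quality, plus its own noise.
    return 0.8 * pedigree + 0.2 * quality + rng.normal(scale=0.5, size=n)

school, advisor, journal = proxy(), proxy(), proxy()

print("school vs quality:", round(np.corrcoef(school, quality)[0, 1], 2))
print("school vs advisor:", round(np.corrcoef(school, advisor)[0, 1], 2))

# Averaging three highly intercorrelated proxies barely beats using one alone,
# because the shared pedigree "noise" does not average away.
combo = (school + advisor + journal) / 3
print("one proxy vs quality:", round(np.corrcoef(school, quality)[0, 1], 2))
print("all three vs quality:", round(np.corrcoef(combo, quality)[0, 1], 2))
```

The exact numbers are meaningless; the point is that the second and third lines of a glossy CV mostly restate the first one.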

When you’re in the triage phase, with your pile of hundreds of CVs, you might rely on these proxies out of necessity. But when you’re down to a manageable pile, you really have to do better. You have to read the papers, understand them, understand the research program.[2] If everyone in the room does this, your discussion will naturally focus on the quality of the work, which is what we all care about, right? Right?

Too much to hope for? Well, consider this. If even one person in the room carefully reads the work, they can at least call out when someone else is making their judgments based on superficial (or even incorrect) aspects of a candidate and their work.

If you don’t have the time or the background knowledge to understand the research, well, you should not be in a position where you decide who gets hired and promoted and funded and published. Even though, the way academia is structured, you can probably remain in that position and get away with it for years.

In summary:

Goofus says, “George Price published a paper in Nature. He must be really smart. And I am smart because I have smartly recognized his smartness,” because Goofus did not actually read the paper, or maybe did not understand it, and is lazily relying on the journal name to signal quality and establish authority.

Gallant says, “George Price published a paper where he integrates ideas about group and kin selection through the hierarchical use of covariances. In the future, some people will view this work as true, but with limited utility in the real world — sort of like Fisher’s Fundamental Theorem. Some will even say that it is tautological and meaningless. Others will view it with an almost religious reverence, a sort of Rosetta Stone of population genetics,” because Gallant read and understood the paper and its implications, and is attempting to provide an intellectually honest evaluation.
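
For readers who have not run into it, the covariance bookkeeping that Gallant is describing is the Price equation. A standard statement (my gloss, not a quotation from Price or from Schekman’s column) is:

\[
\Delta \bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}} \;+\; \frac{\operatorname{E}\!\left(w_i \, \Delta z_i\right)}{\bar{w}}
\]

where z_i is the trait value of entity i, w_i is its fitness, and \bar{w} is the mean fitness. The “hierarchical” part is that the second term can be expanded by applying the same equation again one level down (individuals within groups, say), which is how group and kin selection wind up inside a single framework.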

–––––––––––––––––––––––––––––

[1] The responses include a lot of “Hear! Hear!”, especially from people who have been fighting this battle for years. Folks are also (rightly) calling out Schekman for a degree of hypocrisy — he’s built his career through the luxury journals, publishing in Science as recently as this year.

Also, the journals that are included and excluded are a bit — not suspicious, exactly, but something like that. If you were to extend the list of “luxury” journals to four, the fourth would probably be PNAS. Coincidentally, Schekman was the editor of PNAS for about five years.

On the other side, Schekman proposes Open Access publication as the key to solving this problem. Open Access is awesome for many reasons, but those reasons are really orthogonal to arguments about “sexy” science versus “solid” science; that’s a subject for a different post. In particular, he calls out three Open Access publishers: PLoS, BMC, and eLife. Here, if you were to cut your list down to two, there is no question that the two would be PLoS and BMC. Coincidentally, Schekman is an editor at eLife.

There’s nothing unusual about — or even necessarily wrong with — promoting entities with which you have an association. It’s just that, to me, it smacks a bit of the type of “branding” that he accuses the luxury journals of in the same column.

[2] To be fair, Schekman does argue that we also need to stop evaluating papers based on where they are published, but this point is limited to a few sentences that are tangential to his central argument:

Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal’s brand, that matters.

This post is a perspective of the author, and does not necessarily reflect the views of the Ronin Institute.

22 Comments

  1. Pingback:On Schekman’s Call to Boycott the “Luxury” Science Journals | Lost in Transcription

  2. Superb article! I thought Randy Schekman’s most important point (on the use of “luxury” journals for short-cutting faculty selections) was lost in the somewhat hypocritical stance of the boycott. The problem is not the journals but how we use them. It’s true that many journals tout their Impact Factors while simultaneously noting they are flawed, but it’s the use of false proxies and non-independent metrics/correlations to cut corners in faculty assessment that is the real issue. We know all too well of these errors in scientific data collection and analysis, yet we apply them to one of the most important elements of academic life – assessment of our peers. Hopefully, this aspect of Schekman’s pronouncement will survive the talk of a rather frivolous boycott.

    • Yeah, it’s like he knows that there is something wrong that needs to be fixed, but he has not actually spent enough time thinking about it yet. And, yes, I second your hope that this part of the message will survive.

      Like I say, at a minimum, this has stirred up some discussion around an important issue. And, there are a lot of folks out there who have been thinking about this for a long time. Hopefully, their voices will gain some traction in the wake of the discussion.

  3. [I’ll comment here as I did at Jon’s personal blog.]

    I agree completely. I don’t have much more to say, because (a) after all my years in academia, I’m really tired of all this – the nepotism, the posturing, etc. – and (b) now that I’m no longer an academic, I don’t have to deal with it anymore, thank goodness.

    • [I’ll reply here as I did at my personal blog.]

      Yeah, many of these problems are standard human behavior that you encounter everywhere, but there is a certain type of posturing that gets taken to extremes in academia that you just don’t see anywhere else.

  4. I like the ArXiv model. There is nothing to judge papers on but their quality and people can make comments on them, etc. It seems close to the way someone would design a system from scratch in the early 21st century.

  5. Evelyn Ch'ien

    It’s not just journal branding but the impact factor obsession that has skewed people’s sense of value. Original ideas take a lot of time to accept and absorb, and at first don’t make an impact… just as ideas that are too forward thinking are sometimes beyond the purview of conservative editorial boards… we need to have people who decide on tenure / make major publication decisions who think this way…

    • Yup. It seems to me there is a fundamental problem of mismatch between the desire to support and promote groundbreaking work on the one hand, and the institutional drive towards risk-averse hiring and promotion on the other. Unfortunately, I think it is a natural — perhaps unavoidable — consequence of having academia built around a relatively small number of permanent positions. In the post-Revolution Ronin world, you can imagine a more continuous distribution of resources, which would drive the system towards originality and innovation.

  6. Dmitri PETROV

    I agree with everything you say. I am honestly trying to do better myself whenever I am in a position to pass judgement on job candidates but I know (i) that people often do not even try and (ii) it is hard to do better when you try. Other people’s work is often very hard to penetrate. But we must try in any case.

    At the same time I agree with Schekman that NSC are distorting the system in a lot of ways, and boycotting them seems like an actual thing one can do. So I agree with you, but also salute Schekman wholeheartedly for trying to do something.

    • Yes. It is an incredibly difficult problem. There is the time and expertise issue, which is challenging even in the best-case scenario, where everyone is trying in good faith to hire good people. If only a few people in the room are really trying, it is that much harder. Then, of course, if you have one or two senior folks in the department who are more interested in hiring minions/allies than good scientists, it becomes almost hopeless.

      I think you make a great point about the fact that Schekman’s proposal is at least something concrete — and something that probably at least moves the ball down the field in the right direction, even if it is not an ideal solution.

  7. Pingback:Some Academic Agonistes | basil.CA

  8. Robert Austin

    I wouldn’t get too excited about the Brave New World that Schekman is leading us towards. He has not disclosed that he is the editor of a new journal designed to compete with Nature, Science and Cell (eLife Sciences), which is no ArXiv. They have every intent of being as exclusionary as the Big Three. What he really wants is for you to boycott the Big Three so you will submit your best papers to his journal, where he will happily cull the submissions and reject most without review, just like Nature, Science and Cell.

    If you don’t believe me, try submitting a paper to eLife Sciences and see what happens.

  9. Pingback:What we’re reading: Coevolutionary diversification, replication, archiving, and the real trouble with “luxury” journals | The Molecular Ecologist

  10. Pingback:Publishing and open access world links. | Åse Fixes Science

  11. Pingback:Research Universities’ Excellence Adventure | Ronin Institute

  12. excellent (so to speak)

  13. Pingback:Science, performance and collaboration at CESTEMER | Ronin Institute

  14. Pingback:Cell / Nature / Science – as a grad student, am I really missing out when publishing? – Confessions of the Brown Lab Researchers

  15. Pingback:Nature’s Cultural Blindspot – Ronin Institute
