Randy Schekman made news this week when he published a column in the Guardian, in which he announced that his lab would be boycotting Science, Nature, and Cell, probably the three most prominent scientific journals.
There is a lot to be happy about in Schekman’s column. Most of all for its existence: Schekman just won the Nobel Prize in Physiology or Medicine, and he is using his fifteen minutes at the bully pulpit to draw attention to our deeply flawed system of valuing science, including how we fund and publish it. At a minimum, his column has reignited interest in an extremely important topic, and has already spawned a number of responses, including interesting thoughts from Michael Eisen, Retraction Watch, Luboš Motl, PZ Myers, Junk Science, Scholarly Kitchen, and mathbionerd.
But is he right about the problem? The solution? I’m not sure.
Schekman argues that a key problem is the influence of these “luxury” journals. Yes, they publish some good and interesting science, but not everything they publish is good, and not everything good gets published there. In fact, there is an argument to be made that, on average, a paper published in a quality field-specific journal is more likely to contain good, solid science than a typical luxury journal paper.
Yet, in many fields, publication in one of these fancy journals is a, if not the, primary determinant of who gets that tenure-track slot at the big research university. This, then, distorts the incentives on scientists. Rather than trying to do good science, young scientists feel that they need to do something flashy. This can lead to asking the questions that sound deep in a cocktail-party setting, rather than the questions that actually are deep, and that move the field forward in a meaningful way.
He’s right about this, of course. In fact, there are a couple of additional problems that arise from the Science/Nature/Cell-publication-equals-job system. One is stochasticity. There is always going to be a random element that goes into getting a paper accepted by these journals. There is also a degree of randomness in the nature of science itself. Sometimes you ask the right question, and the answer turns out to be a little dry, or a lot complicated. That means that, no matter how skilled a scientist you are, you’re not going to be publishing your work in one of the glossy magazines.
The other issue is a sort of nepotism. Academia, like our current economic system, is riddled with features that create rich-get-richer dynamics. The best predictor for publishing in one of the luxury magazines is having published in them before. Because, well, then you’re the sort of scientist who publishes in those magazines, so obviously your work belongs in those magazines, and so on. So, if you wind up going to the right grad school, and land in the right lab, you can co-author with one of those Science/Nature/Cell scientists, and next thing you know, you are one of those Science/Nature/Cell scientists.
For my money, this starts to get closer to the actual core of the problem: the lazy use of proxies to evaluate quality and assert expertise.
The “Hire the person with the Nature publication” phenomenon is just one facet of the systemic rot throughout academia — the thing that triggers the “Emperor has no clothes” reaction from people who are not immersed in the system. The fact is, it is extremely rare for one academic to put in the actual time and effort required to understand another academic’s research. Yet, we are always more than happy to lay out our value judgments.
You know that thing, where you read an article on the internet, and then you look at the comments, and the first comment is something super judgmental, or scolding, or something defensive and fawning? Yet it is blindingly obvious from the comment that the commenter did not actually finish reading the article?
That dynamic pretty well describes the faculty discussion of candidates in every job search in academia.
Except, on the internet, there is usually some other commenter who points out that the first one did not read the whole article. Now imagine an internet comment thread where no one finished reading the article, but where everyone felt compelled to express an opinion.
That, kids, is how tenure-track positions are filled.
Academics are deeply habituated to making quick value judgments — partly out of necessity. The pace and scope of scientific publishing is absolutely insane, and keeping up with the literature is daunting, even in a narrow field. A typical tenure-track position will receive hundreds of applications, which have to be evaluated and ranked by people who are already working crazy long hours.
This habituation is also driven partly by the social dynamics of academia. When you articulate a judgment of a paper or a candidate, you assert your own authority. You are a person with expertise and intelligence and taste, as demonstrated by your informed opinion. The broader the set of subjects on which you can express an opinion, the broader the domain of your expertise. The stronger your opinion is, the keener your intelligence. The more critical you are, the more refined your tastes must be.
Of course, these inferences only make sense if your judgments are actually correct. The problem is that, in many academic settings, asserted judgments don’t get fact-checked. Maybe no one else in the room has the requisite expertise to know whether you’re full of shit. Maybe no one in the world can evaluate your judgment until years in the future, when some experiment validates or invalidates it.
The resulting situation is that there are many short-term benefits to quick and firm value judgments. The costs associated with making bad value judgments — of being wrong — are typically deferred and diffuse. If you hire the “wrong” job candidate, it might not be obvious for years, and the cost is borne by the entire department. Plus, you never really know for sure, because you don’t have the appropriate controls (such as access to parallel universes in which you hired each of the other candidates).
So, you start to rely on proxies:
Where did the person go to college? Where did they go to grad school? Who was their advisor? How many publications do they have? In which journals? How many citations?
But how bad are those proxies? After all, each of these pieces of information probably does individually correlate with the thing you’re actually interested in — the quality of their work. And, of course, they let you make your evaluation quickly, which is critical if you have to work your way through a pile of, say, three hundred applications, and the new season of American Idol is coming up.
But the correlations are noisy. And, perhaps more to the point, they are correlated among themselves in a way that reflects that rich-get-richer dynamic.
So, candidate A went to Harvard, and they worked with a National Academy member, and they’ve got a paper in Cell. Awesome!
Except that maybe their paper got into Cell — at least partly — because it was co-authored with their National-Academy-member advisor. And they got to work with that advisor because they got into Harvard. And maybe the advisor was elected to the National Academy — at least partly — because he/she landed a job at Harvard, where he/she got to know some other National Academy members, who then nominated him/her to the Academy.
By my reckoning, the number of independent data points you have about candidate A is somewhere in the vicinity of one.
When you’re in the triage phase, with your pile of hundreds of CVs, you might rely on these proxies out of necessity. But when you’re down to a manageable pile, you really have to do better. You have to read the papers, understand them, understand the research program. If everyone in the room does this, your discussion will naturally focus on the quality of the work, which is what we all care about, right? Right?
Too much to hope for? Well, consider this. If even one person in the room carefully reads the work, they can at least call out when someone else is making their judgments based on superficial (or even incorrect) aspects of a candidate and their work.
If you don’t have the time or the background knowledge to understand the research, well, you should not be in a position where you decide who gets hired and promoted and funded and published. Even though, the way academia is structured, you can probably remain in that position and get away with it for years.
Goofus says, “George Price published a paper in Nature. He must be really smart. And I am smart because I have smartly recognized his smartness,” because Goofus did not actually read the paper, or maybe did not understand it, and is lazily relying on the journal name to signal quality and establish authority.
Gallant says, “George Price published a paper where he integrates ideas about group and kin selection through the hierarchical use of covariances. In the future, some people will view this work as true, but with limited utility in the real world — sort of like Fisher’s Fundamental Theorem. Some will even say that it is tautological and meaningless. Others will view it with an almost religious reverence, a sort of Rosetta Stone of population genetics,” because Gallant read and understood the paper and its implications, and is attempting to provide an intellectually honest evaluation.
–––––––––––––––––––––––––––––
The responses include a lot of “Hear! Hear!”, especially from people who have been fighting this battle for years. Folks are also (rightly) calling out Schekman for a degree of hypocrisy — he’s built his career through the luxury journals, publishing in Science as recently as this year.
Also, the journals that are included and excluded are a bit — not suspicious, exactly, but something like that. If you were to extend the list of “luxury” journals to four, the fourth would probably be PNAS. Coincidentally, Schekman was the editor of PNAS for about five years.
On the other side, Schekman proposes Open Access publication as the key to solving this problem. Open Access is awesome for many reasons, but those reasons are really orthogonal to arguments about “sexy” science versus “solid” science — but that’s a subject for a different post. In particular, he calls out three Open Access publishers: PLoS, BMC, and eLife. Here, if you were to cut your list down to two, there is no question that the two would be PLoS and BMC. Coincidentally, Schekman is an editor at eLife.
There’s nothing unusual about — or even necessarily wrong with — promoting entities with which you have an association. It’s just that, to me, it smacks a bit of the type of “branding” that he accuses the luxury journals of in the same column. To be fair, Schekman does make the argument that we also need to stop evaluating papers based on where they are published, but this point is limited to a few sentences that are tangential to his central argument:
“Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal’s brand, that matters.”