A Rant on Refereeing

Before I started this blog, I posted an essay on my webpage about refereeing called A World Without Referees. There was a bit of discussion about it on the blogosphere. I argued that our peer review system is outdated and unfair.

David Banks has raised this issue here in the Amstat News. Karl Rohe also has an excellent commentary here.

Since I have never posted my original essay on my blog I decided that I should do so now. Here it is. Comments welcome as always.

(For a dissenting view, see Nicolas Chopin’s post on Christian’s blog here.)

Note: For those who have already read this essay, please note that at the end I have added a short postscript which wasn’t in the original.

A World Without Referees

Our current peer review is an authoritarian system resembling a priesthood or a guild. It made sense in the 1600’s when it was invented. Over 300 years later we are still using the same system. It is time to modernize and democratize our approach to scientific publishing.

1. Introduction

The peer review system that we use was invented by Henry Oldenburg, the first editor of the Philosophical Transactions of the Royal Society in 1665. We are using a refereeing system that is almost 350 years old. If we used the same printing methods as we did in 1665 it would be considered laughable. And yet few question our ancient refereeing process.

In this essay I argue that our current peer review process is bad and should be eliminated.

2. The Problem With Peer Review

The refereeing process is very noisy, time consuming and arbitrary. We should be disseminating our research as widely as possible. Instead, we let two or three referees stand in between our work and the rest of our field. I think that most people are so used to our system, that they reflexively defend it when it is criticized. The purpose of doing research is to create new knowledge. This knowledge is useless unless it is disseminated. Refereeing is an impediment to dissemination.

Every experienced researcher that I know has many stories about having papers rejected because of unfair referee reports. Some of this can be written off as sour grapes, but not all of it. In the last 24 years I have been an author, referee, associate editor and editor. I have seen many cases where one referee rejected a paper and another equally qualified referee accepted it. I am quite sure that if I had sent the paper to two other referees, anything could have happened. Referee reports are strongly affected by the personality, mood and disposition of the referee. Is it fair that you work hard on something for two years only to have it casually dismissed by a couple of people who might happen to be in a bad mood or who feel they have to be critical for the sake of being critical?

Some will argue that refereeing provides quality control. This is an illusion. Plenty of bad papers get published and plenty of good papers get rejected. Many think that the stamp of approval by having a paper accepted by the refereeing process is crucial for maintaining the integrity of the field. This attitude treats a field as if it is a priesthood with a set of infallible, wise elders deciding what is good and what is bad. It is also like a guild, which protects itself by making it harder for outsiders to compete with insiders.

We should think about our field like a marketplace of ideas. Everyone should be free to put their ideas out there. There is no need for referees. Good ideas will get recognized, used and cited. Bad ideas will be ignored. This process will be imperfect. But is it really better to have two or three people decide the fate of your work?

Imagine a world without refereeing. Imagine the time and money saved by not having journals, editors, or associate editors, and imagine never having to referee a paper again. It’s easy if you try.

3. A World Without Referees

Young statisticians (and some of us not so young ones) put our papers on the preprint server arXiv (www.arXiv.org). This is the best and easiest way to disseminate research. If you don’t check arXiv for new papers every day, then you are really missing out.

So a simple idea is just to post your papers on arXiv. If the paper is good, people will read it. If they find mistakes, you can thank them and post a revision. Pretty simple.

Walter Noll is a Professor of Mathematics at Carnegie Mellon. He suggests that we all just post our papers on our own websites. Here is a quote from his paper The Future of Scientific Publication.

1) Every author should put an invitation like the following on his or her website: Any comments, reviews, critiques, or objections are invited and should be sent to the author by e-mail. (I have this on my website.) The author should reply to any response and initiate a discussion.

2) Every author should notify his or her worldwide colleagues as soon as a new paper has been published on the website.

3) The traditional review journals (e.g. Mathematical Reviews and Zentralblatt), or perhaps a new online journal, should invite the appropriate public to submit reviews, counter-reviews, and discussions of papers on websites and publish them with only minor editing.

4) Promotion committees in universities should give credit to faculty members for writing reviews.

The “publish on your own website” model can be used in concert with the arXiv model.

4. Questions and Answers

Question: Won’t we be deluged by papers? I rely on referees to filter out the bad papers.

Answer: I hope we are deluged with papers. That would be great. But I doubt it will be a problem. Math and physics, which rely heavily on the arXiv model, have done just fine.

If you rely on referees to filter papers for you then I think you are making a huge error. Do you really want referees deciding what papers you get to read? Would you like two referees to decide which wines can be sold at the wine store? Isn’t the overwhelming selection of wine a positive rather than a negative? Wouldn’t you prefer having a wide selection so you can decide yourself? Do you really want your choices limited by others? Anyway, if there does end up being a flood of papers then smart, enterprising people will respond by creating websites and blogs that tell you what’s out there, review papers, etc. That’s a much more open, democratic approach.

Question: What is the role of journals in a world without referees?

Answer: The same as the role of punch cards.

Question: How about grants?

Answer: I think we still do need referees here. (Although flying 20 people to Washington for a panel review is ludicrous and unnecessary, but that’s another story.)

Question: How about bad papers?

Answer: Ignore them or critique them. But don’t suppress them.

Question: How about promotion cases?

Answer: Every promotion case includes a few letter writers who know the area and will be able to write substantial letters. They don’t need the approval of a journal to tell them whether the papers are good. But there will also be some letter writers who are less familiar with the candidate or the field. Sometimes these people just count papers in big journals. But they can always just look at the candidate’s CV and quickly peruse a few of the papers. That doesn’t take much time and is certainly no worse than paper counting.

Question: How about medical research?

Answer: There is arguably danger in bad medical papers. But again, I think the answer is to critique rather than suppress. However, I am mainly focusing on areas I am more familiar with, like statistics, computer science etc.

5. Conclusion

When I criticize the peer review process I find that people are quick to agree with me. But when I suggest getting rid of it, I usually find that people rush to defend it. Is it because the system is good or is it because we are so used to it that we just assume it has to be this way?

In three years we will reach the 350th birthday of the peer review system. Let’s hope we can come up with better ideas before then. At the very least we can have a discussion about it.

6. Postscript: An Analogy

In her book The Future and Its Enemies, Virginia Postrel discusses in detail the fact that the birth of new ideas is a messy, unpredictable process. She describes people who accept the unsupervised, unpredictable nature of progress as dynamists. She describes those who fear the disorderly, trial-and-error process of knowledge discovery, as stasists. She divides the stasists into two groups: the reactionaries who oppose progress and the technocrats who try to control progress with bureaucracy and centralized decision making. I classify our current system as technocratic and I am arguing for a more dynamist approach.



  1. Joe Pickrell
    Posted October 20, 2012 at 6:54 pm | Permalink

    This is a common sentiment. The question is now: what can we do to move towards rapid dissemination of results? In population genetics, a few of us are trying to encourage people to post their papers to arXiv prior to review, and we’ve started a forum for discussion and promotion of the best preprints:

    Though we just started this site a couple of months ago, it has definitely had some effect in encouraging rapid sharing of results in our (relatively small) field, and I think similar efforts in statistics would be great. In general, talk is cheap, do something! 🙂



    • Posted October 20, 2012 at 7:03 pm | Permalink

      Great idea! We need something like that in statistics.

  2. Posted October 20, 2012 at 8:09 pm | Permalink

    A GREAT proposal. I saw your paper earlier, and it was very clear at that time that you were raising an IMPORTANT issue about the broken quid pro quo academic publication system. I loved your manuscript. Inspired by your article, I also thought about a remedy and proposed: http://www.stat.tamu.edu/~deep/peerR.pdf, albeit not fool-proof. We need to “modernize” arXiv so that readers can quickly detect interesting papers (signals) on a specific topic. I highly welcome your effort, vision and courage. Thank you so much for speaking on behalf of the victims.

  3. Posted October 20, 2012 at 10:05 pm | Permalink

    I am not ready to live in a world without referees. I would welcome, however, an incentive-based peer-review system that promotes fast, high-quality reviews. For example, whenever I provide a review, the editor gives it a grade: +, neutral, or −. For every N “+” reviews I provide within 15 days to a journal, I can request N/2 “+” reviews for a paper I submit to the same journal, within the same time frame. There should be penalties as well to discourage lazy, low-quality reviews. It almost sounds like a research problem in statistical mechanism design… anyone?
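The credit scheme in this comment is concrete enough to prototype. Below is a minimal, hypothetical sketch in Python; the class name, the grading rules, and the N/2 exchange rate are assumptions taken from the comment, not any real journal's system.

```python
# Hypothetical sketch of the incentive-based review scheme described above.
# All names and rules are illustrative assumptions, not a real system.

class ReviewLedger:
    """Tracks graded reviews per reviewer and converts them into credits."""

    def __init__(self, fast_days=15):
        self.fast_days = fast_days
        self.plus_reviews = {}   # reviewer -> count of on-time '+' reviews
        self.spent = {}          # reviewer -> credits already redeemed

    def record_review(self, reviewer, grade, turnaround_days):
        # Only '+' reviews delivered within the deadline earn credit.
        if grade == "+" and turnaround_days <= self.fast_days:
            self.plus_reviews[reviewer] = self.plus_reviews.get(reviewer, 0) + 1

    def available_credits(self, reviewer):
        # N fast '+' reviews entitle the reviewer to N // 2 fast reviews
        # of their own submissions.
        earned = self.plus_reviews.get(reviewer, 0) // 2
        return earned - self.spent.get(reviewer, 0)

    def redeem(self, reviewer, n):
        # Spending more credits than earned is the "penalty" boundary.
        if n > self.available_credits(reviewer):
            raise ValueError("not enough review credits")
        self.spent[reviewer] = self.spent.get(reviewer, 0) + n


ledger = ReviewLedger()
for days in (10, 12, 14, 20):             # four '+' reviews, one too slow
    ledger.record_review("alice", "+", days)
print(ledger.available_credits("alice"))  # 3 on-time reviews -> 1 credit
```

The interesting design question, as the comment notes, is the incentive structure: the exchange rate and the penalty for low-quality reviews would have to be tuned so that gaming the ledger is not worthwhile.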

  4. Posted October 20, 2012 at 11:43 pm | Permalink

    I believe you’re exactly right about this and really hope this is the way of the future. I was surprised to see Walter Noll mentioned. His thesis advisor and mentor, Clifford Truesdell, often made very strong criticisms of the academic guild system long before there was an internet. It is likely that Noll picked up much of his attitude from Truesdell. Truesdell in his time ran several journals with an unusual system that didn’t involve peer review. Truesdell implicitly makes a strong argument in much of his historical writing that the cost of suppressing one good paper is far greater than the benefit of suppressing thousands of poor papers. Truesdell’s books are true gems but are hard to find now and seem to have been forgotten. A couple that I would highly recommend that have some relation to this topic are:

    An Idiot’s Fugitive Essays on Science: Methods, Criticism, Training, Circumstances
    The Tragicomical History of Thermodynamics 1822-1854
    Essays in the History of Mechanics

    Incidentally, John Nash of “Nash Equilibrium” fame was one of Truesdell’s students for a time.

    • Posted October 21, 2012 at 8:49 am | Permalink

      Interesting. I don’t even remember how I came across Noll.

  5. Posted October 21, 2012 at 6:02 am | Permalink

    Take affirmative action: accept all papers you get for review. If many of us do this, the current system will collapse.

    • Joe
      Posted October 21, 2012 at 9:42 pm | Permalink

      I do this. Sometimes I accept conditionally upon major revisions, such as “rethink the implications of …”, but I haven’t rejected a paper in many years.

  6. Posted October 21, 2012 at 9:25 am | Permalink

    Just to point out a longer version of Nicolas’s column, in which Nicolas Chopin, Andrew Gelman, Kerrie Mengersen, and I wrote about the refereeing system. (ArXived and already rejected many times!)

  7. Christian Hennig
    Posted October 21, 2012 at 11:14 am | Permalink

    There are many good arguments against the peer reviewing system. However, without taking a clear position, I want to say that
    1) The vast majority of reviewer reports that I got for my own papers were fair and led to a substantial improvement of many of them.
    2) We may be rejected unfairly once or even twice, but I think it’s very rare that a good paper won’t find its way into any OK journal at all.
    3) As somebody who is an associate editor and does quite a bit of reviewing, I’m of course biased, but I’m quite convinced that it was no bad thing to have played my part in the rejection of quite a large number of bad papers, by which I don’t mean papers with controversial content, but rather papers that were almost unreadable, full of errors, or obviously unoriginal. It’s no bad thing that thousands of researchers don’t waste even 10 minutes of their time looking into these.
    4) As a reviewer, I put much more effort into reading a paper than when browsing new papers, and my browsing behaviour will only get worse if more papers with interesting titles/abstracts are out in the open. Apart from the papers I’m reviewing, I read only a handful of papers that thoroughly, the ones I really need to understand in all detail.

  8. Posted October 21, 2012 at 12:12 pm | Permalink

    I agree that the present system is unsatisfactory for various reasons*. Still, I think that many scholars, especially younger ones, are likely to benefit from feedback, both to improve clarity** and to become aware of existing work on the problem. We don’t want people starting from scratch on problems. Even now it is not unusual to see ideas promoted as original when in fact someone put forward the argument/idea years ago. In this connection, many authors feel that publishing in a journal/book offers some protection against another person claiming credit for their ideas. Perhaps in a Wasserman world with no journals, individuals could copyright their own work.
    *In my field I think the current system encourages work that sticks very, very closely to the reigning views and popular trends.
    **Over the years I feel I’ve spent enormous amounts of time fixing up papers and helping authors to strengthen weak arguments; I now do so more sparingly.

    • Posted October 21, 2012 at 1:39 pm | Permalink

      Feedback is indeed useful, but that can be done independently of refereeing.

      Re: priority claims. Posting a paper on arXiv creates a public record of who had what idea when.

      • Posted October 21, 2012 at 4:15 pm | Permalink

        Sure, but who is going to police that? Mary posts on arXiv that Max is putting forward her idea as if it were original with him; she is ignored and gets no response from Max. Mary is non-confrontational, and the Maxes of this world get accolades for their brilliance.

      • Posted October 21, 2012 at 4:17 pm | Permalink

        That happens already … even with refereeing.

  9. Posted October 21, 2012 at 1:52 pm | Permalink

    There are many problems with the existing peer review system. The biggest, in my opinion, is that the system isn’t double-blind: while the author doesn’t know who his reviewers are, the reviewers do know whose paper they are reviewing–that seems to be exactly backwards, if the aim is to reduce bias.

    A big oversight of this proposal, though, is that it leaves no guidance for people outside of the profession. For example, you sometimes hear politicians citing peer-reviewed studies when they advocate their policy ideas. Without a peer review system, anyone would be able to post a study, which means that there will be a lot of bogus studies out there with an ideological agenda, and the public won’t be able to sort the legitimate studies from the bogus ones. So I’d say that it is very important to have some sort of peer-review process to certify a study as methodologically objective and correct. Something to filter the pseudoscience from actual research.

    • Posted October 21, 2012 at 2:39 pm | Permalink

      Actually many journals do use double-blind systems.

      • A.C. Thomas
        Posted October 21, 2012 at 11:39 pm | Permalink

        While this is true, it’s way easier to break the double-blind one way than the other: aside from pre-prints, people cite themselves and build off their own ideas.

  10. Posted October 21, 2012 at 4:15 pm | Permalink

    What we need is
    1) a system in which reviewing follows publication (and does not precede it)
    2) one or several web-based mechanisms by which papers are vetted, commented on, reviewed, and brought to the attention of the community if they deserve to be.
    I have devised such a system, described here: http://bit.ly/RPs7vF

    • Posted October 21, 2012 at 6:04 pm | Permalink

      I took a look at your proposal.
      I think it is an excellent idea.


    • Eric Hunsberger
      Posted December 8, 2012 at 1:20 pm | Permalink

      Reading your “Questions, Problems and Issues” section got me thinking. My thoughts:
      1) Even with double-blind review in the current system, being an unknown author or from a small institution can still be a problem. A good paper needs to be read and cited by the community to have an impact, which may not happen as long as a phobia of unknown authors and institutions exists. I think that this phobia is a problem that we have to address in either the current or the proposed system.
      2) I wonder how many people would actually publish half-baked ideas in the proposed system. Surely some, but personally I would not, given that there will be a permanent record of my half-baked paper with the associated critiques available for the community to see. This would not be something I would like to have on my record.

  11. dzrlib
    Posted October 21, 2012 at 4:19 pm | Permalink

    This idea is really only appropriate for small narrowly focused fields like HEP, Math, Statistics, etc. … where one can easily identify authors worth reading. It clearly would not work for many other fields, because of the enormous volume of papers … like chemistry, neuroscience, etc.

    • Eric Hunsberger
      Posted December 8, 2012 at 1:27 pm | Permalink

      There are also a lot of web pages on the internet, but we are still able to find the ones worth reading, thanks in part to innovative algorithms like Google’s PageRank. Surely we can think up analogous algorithms for filtering published papers. As you noted, fields like chemistry and neuroscience already have enormous volumes of papers under the current system, so filtering algorithms seem necessary anyway, and they could likely be implemented better under the proposed system because all reviews, comments, and critiques are available to the community.
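The PageRank idea in this reply can be made concrete. Below is a minimal sketch of the classic power-iteration algorithm applied to a toy "paper cites paper" graph; the graph, paper names, and parameter values are illustrative assumptions, not real data or a real ranking service.

```python
# Minimal PageRank-style sketch for ranking papers by who cites whom.
# The citation graph below is a toy example for illustration only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each paper to the list of papers it cites."""
    papers = set(links) | {p for cited in links.values() for p in cited}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in papers}
        for paper, cited in links.items():
            if cited:
                # Distribute this paper's rank evenly over the papers it cites.
                share = damping * rank[paper] / len(cited)
                for c in cited:
                    new_rank[c] += share
            else:
                # Dangling paper (cites nothing): spread its rank uniformly.
                for p in papers:
                    new_rank[p] += damping * rank[paper] / n
        rank = new_rank
    return rank

citations = {
    "A": ["C"],
    "B": ["C"],
    "C": [],
    "D": ["C", "A"],
}
scores = pagerank(citations)
best = max(scores, key=scores.get)
print(best)  # "C" is cited by every other paper, so it ranks highest
```

Real paper-ranking would of course need more signals than raw citations (reviews, comments, recency), but the same iterative idea applies: scores flow along endorsement edges until they stabilize.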

  12. wayne mueller
    Posted October 21, 2012 at 11:10 pm | Permalink

    What I find interesting about this idea is that statistics will probably decide what is a good paper and what is not. Good papers will get more readers/downloads. And if you set up a like/dislike function you can get refereeing too. Perhaps those who read more and comment more can become expert commenters/referees whose scoring carries more weight, making the refereeing more valuable. Just be sure every submitter and every referee is identified.

    • Posted October 22, 2012 at 12:11 pm | Permalink

      I fully endorse this idea. A like/dislike button is a cool idea 🙂 We could also add a rating system, just like IMDb’s, to quantify the “like/dislike” aspect; example: http://www.imdb.com/title/tt0068646/. An IMDb-like rating system would not only rank the papers (movies) but also automatically rank the authors by their impact.

  13. Posted October 22, 2012 at 3:11 pm | Permalink

    I think 1665 was a pretty good year. Look back at the history of sharing of scientific knowledge earlier in the 17th century. A lot of intellectual theft. On the other hand, I guess I have been following your advice for years in a certain way. Most of my work is published only in the proceedings of ASA conferences.

  14. Anonymous
    Posted October 23, 2012 at 3:58 pm | Permalink

    Kind of related to this, http://thatsmathematics.com/blog/archives/102

    A randomly-generated paper accepted by a journal, and the referees even made comments on it.

  15. Posted October 26, 2012 at 11:54 pm | Permalink

    I am hurt* that you didn’t mention my reply.

    *: For an indiscernible if not negative value of “hurt”.

    • Posted December 21, 2012 at 1:27 am | Permalink

      I strongly agree with your points on filtering. This is especially important when crossing disciplinary lines — something that is critical to the future of the research enterprise.

  16. Posted December 21, 2012 at 1:26 am | Permalink

    I wish to quibble with your opening lines. We are still using our human brains to evaluate papers, even though the design of the brain is thousands of years old. The period of time that a procedure has been employed is not evidence that it is time for a change. This is a spurious argument. One could just as easily argue that it has been highly successful and should be continued.

    Journals (and other reviewing organizations) provide a key service: They recruit reviewers to carefully read submitted manuscripts. An unknown author can post a paper on arXiv any time, but if no one reads it, it could go undiscovered for years. Maybe no one reads it because the title isn’t appropriately descriptive or the language needs polishing. None of the proposals for new systems that I have seen address this issue. The review process helps authors improve their papers and helps good work get better dissemination.

    In the early days of machine learning, there was a lot of grumbling because we had many papers rejected from the then-available venues. When we launched the Machine Learning journal, it forced us to set standards for our work. We found ourselves rejecting some of “our own” papers. Pat Langley–the first Executive Editor–did a tremendous service to the machine learning community by working hard with authors to improve the quality of their papers (including both the writing and the methodology). This helped improve the quality of research across the entire machine learning field.

    Nowadays, I have the impression that the review process is being abused, particularly at refereed CS conferences. The vast size of the research literature, combined with tight conference deadlines, discourages people from doing their scholarly homework: reading the previous research deeply. Manuscripts are submitted before their ideas have been sufficiently tested and related to existing work. Authors expect the referees to help them find the relevant literature. Because of double-blind review, there is no reputation penalty for this behavior. The result is that the research literature becomes even more vast, and good papers are lost in the fog of least-publishable units.

    I like Yann LeCun’s proposals to separate dissemination from review. A challenge is to create the right incentive structure for people to write good papers and for people to do good reviewing.

    • Posted December 21, 2012 at 8:36 am | Permalink

      Your point about the brain is a good one. But we don’t use our brains the same way we did thousands of years ago. For example, we augment our brains with a huge knowledge base (i.e. the cloud).

      Your story about the Machine Learning journal is interesting. I guess my perspective comes from statistics which, having come from math, has had journals since prehistoric times.

4 Trackbacks

  1. By Links for 10-21-2012 | The Penn Ave Post on October 21, 2012 at 3:15 am

    […] at 3:31 on October 21, 2012 by Mark Thoma Different approaches to austerity – mainly macro A Rant on Refereeing – Normal Deviate Bubble, Bubble, Conceptual Trouble – Paul Krugman […]

  2. […] can read about academic publishing, its issues and the proposed future systems here, here, here and […]

  3. By Links for 10-21-2012 | FavStocks on October 25, 2012 at 4:20 am

    […] A Rant on Refereeing – Normal Deviate […]

  4. […] This post from Normal Deviate presents a situation that is quite common in academic settings: the author spends months writing an original manuscript, revises it, sends it to some journal, and then simply has the publication rejected; often good articles are discarded far more for reasons of form than of content, while articles that do nothing more than pile up citations get published. […]
