Let’s get rid of Journal Rankings (and Journals)

I got the following today from F1000, a company that I know reasonably well; I get on well with those of its people I have met, including Vitek Tracz, for whom I have very high regard. But I am not in favour of this…

On Wed, Oct 5, 2011 at 12:06 PM, <Eleanor.Howell@f1000.com> wrote:

Dear Peter,

I’m excited to let you know that we at Faculty of 1000 have launched the beta version of our new F1000 Journal Rankings.

The rankings enable researchers to see where the best research is being published, as judged by the F1000 Faculty. Each month F1000 will publish current rankings based on evaluations of research papers received in the previous 12 months. Each year we will make available historical rankings, based on a calendar year’s worth of articles, for easy comparison with the Journal Impact Factor.

Our press release, including technical details of how the rankings are calculated, can be viewed here: http://f1000.com/resources/Journal_Rankings_PressRelease_Web.pdf

The journal rankings themselves are here: http://f1000.com/rankings/journals/year/current

Please contact me with any queries or comments.

So I wrote:

I think ranking journals is outdated and pernicious. It leads to glory-oriented branding, editorial coziness, and arbitrary office-made decisions, and it distorts scientific publishing.

I approve of per-article metrics done by humans reading the papers. If you have those, publish them for the individual articles.

I would also note that a very high proportion of your journals are closed access; it would be useful to indicate which journals are open, as 99.99+% of the human race can only read the open ones.

I also commented that publishing rankings to 4 significant figures when the raw data could vary by 20% was ridiculous and unscientific.
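To make the precision point concrete, here is a minimal sketch in Python. The score and the 20% figure are made-up illustrations, not F1000’s actual data:

    # Minimal sketch (made-up numbers, not real F1000 scores): why quoting
    # a ranking to 4 significant figures is spurious when the underlying
    # data can vary by ~20%.
    score = 12.34          # hypothetical journal score, quoted to 4 sig figs
    relative_error = 0.20  # assumed ~20% variation in the raw evaluations

    low = score * (1 - relative_error)
    high = score * (1 + relative_error)
    print(f"quoted score:    {score}")                  # 12.34
    print(f"plausible range: {low:.1f} to {high:.1f}")  # 9.9 to 14.8

    # Every value in that range is consistent with the data, so only the
    # first significant figure or so is meaningful; the rest is noise.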

I now think conventional journals per se are outdated. There are, perhaps, a few places where journals make sense, but most are vehicles for commercial (and commercially minded non-profit) companies to compete with other commercial companies. The decisions on what a journal is, what’s in it, why it exists, and what its policy is are increasingly undemocratic and distorting. There is also publisher-think, where even well-intentioned publishers get sucked into the reader-doesn’t-matter and we-don’t-care-about-the-science syndromes.

That’s one reason for wishing to see journal rankings abolished. But while we still have journals, there are other reasons. The main one is that rankings trivialize the role of the individual article against the collective standing of the journal: *where* you publish becomes more important than *what* you publish. I accept that some of this is probably inevitable while Planck’s Law for editors still operates [1]. However, the increasing pressure of commercial greed on scientific publishing distorts editorial judgments or even bypasses them completely. If we want changes in publishing, do we wish to delegate our decisions to the marketing departments of commercial companies? (And I think the F1000 impact factor is a clear indication of marketing triumphing.)

Journal impact factors will never be morally or ethically acceptable as long as the primary motivation for journals is commercial. And even without that, the concept is seriously flawed.

So I am sorry to write this about F1000 as they actually do more than most to try to assess science. But this is retrograde.


[1] http://en.wikiquote.org/wiki/Max_Planck: “Science progresses one funeral at a time.”


3 Responses to Let’s get rid of Journal Rankings (and Journals)

  1. rpg says:

    Hello Peter,
    Thanks for your feedback. I have a few comments in response.
    1. We agree that there are significant misuses of journal rankings. But as you say, we have journals and we have to live with them, and we do think it is a benefit to working scientists to have an alternative ranking that is done differently from the others. Whether or not we should have journals at all is another matter.
    2. The main reason for our journal rankings (a simple calculation from the article evaluations) is to provide a service to authors when they are trying to decide where to publish a paper. It’s not “glory-oriented branding”; we dislike that as much as you do. You can examine the F1000 rankings within very narrow specialities relevant to you, and then determine exactly which papers contributed to that ranking.
    3. All our calculations are open and “auditable”: you can see exactly how each number was arrived at, which articles contributed to the rankings, who evaluated them, and what they said about them.
    4. The rankings exist in a world of many imperfect tools, but by being open and auditable we allow the user to decide how much value to put on them.
    5. We are in beta. We are watching how the information is used and misused, and we encourage open criticism of what we do. We’re trying to make it a useful service to active scientists.
    And a quick technical note: going to only 2 decimal places would have meant quite a few journals sharing the same position. How precise to go is a fairly arbitrary decision, but we thought extra precision was better than lots of journals sharing the same rank lower down. Yes, the systematic error is larger than that, but we had to draw the line somewhere.
    I’ve written a bit more on the F1000 blog at http://blog.the-scientist.com/2011/10/05/f1000-rankings/.
    Thanks again for your comments,
    Richard

    • pm286 says:

      Thanks for commenting, Richard,
      I appreciate the Openness. Presumably until now only F1000 subscribers could see individual rankings. Is that still true?

  2. rpg says:

    Yes–unfortunately rankings–individual or otherwise–is something we have to keep behind the paywall. I’d love to make it open, but that kind of business model isn’t going to work for F1000. Our reviewers and their labs do get access though.
