scifoo: the mindless impact factory

More SciFoo follow-up from Richard Akerman. No comment from me needed. I’m leaving the Second Life picture because …
open science and the impact factory


Jean-Claude Bradley instigated a session in Second Life, “SciFoo Lives On: Open Science”.
[Image SF-SL-004: the SciFoo Lives On session in Second Life]
Next week’s session will be something like “Medicine 2.0”.
You can see in the transcript that one part of SciFoo definitely lived on: a discussion around Open Science and webliometrics, covering both definitions and how to handle impact. It seems to me that we get tangled in endless debates about definitions. I have proposed that the nodalpoint Open Science wiki page be used to come to a consensus definition, but in the meantime:
open science
Opening your scientific activities up to public examination, and making work available without it having gone through formal peer review.
peer review
The process of a group of scientific peers assessing the quality of a submitted piece of scientific work. Currently it is most commonly associated with gatekeeping for scientific publications, where it may also involve improving both the scientific thinking in the paper and its expression. There is no relationship between peer review and closed or open access.
open access
Making a publication available without a subscription fee, but possibly with usage limitations.
free access
An unfortunate term, given the existing definition of open access; it adds the element of unrestricted usage and reuse (e.g. text mining).
impact factor
An imperfect measure of the scientific “importance” of an entire journal, misused to measure the quality of individual scientific output.

(Marked up using HTML definition lists, which you have probably never heard of; that, incidentally, is why the Semantic Web will fail.)
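If you really have never seen one, here is a minimal sketch of what such markup could look like, using the first definition above (I am not showing this post’s actual source, so treat the details as illustrative):

    <!-- HTML definition list: <dt> holds the term, <dd> its definition -->
    <dl>
      <dt>open science</dt>
      <dd>Opening your scientific activities up to public examination,
          and making work available without it having gone through
          formal peer review.</dd>
    </dl>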
Yes, there are many types of peer review in different disciplines, and yes, things are often considered published and citable without having gone through peer review, such as conference papers and presentations, which often go through a sort of editorial board selection instead.
I know these definitions are far from perfect, but good lord, can we get to good enough and go beyond this debate?
What I keep hearing is: how can we impact-factorize open science? Well, the answer is: you can’t. Let’s stop trying to find some magic algorithm whereby a machine tells us what quality science is. What’s completely mad to me about this is that we already have processes to assess science quality. Every time you review a new student, every time you look at a grant proposal, heck, even on the infamous tenure committees and research assessments, a group of humans looks at a portfolio of existing or proposed work and decides whether it is good enough.
So, if I may modestly propose: let’s continue to do that, and no one other than journal publishers should ever look at impact factor numbers again. Arise, qualitative assessment; begone, quantitative nonsense.
There is still a place for technology, but it’s not in providing some bogus seemingly quantitative quality measure. It’s in enabling us all to present our scientific portfolios online, or, to use Euan’s words, our “professional lifestreams”. And there is a real problem to be solved. It starts with students and their scholarly output stuck in closed university systems. Students move around. Scientists move around. Their work history should move with them, not be lost in some scholarly dark web, or frozen as some web page at a previous institution that they can no longer access.
The European e-Portfolio is one effort to address this for students.
Electronic Theses and Dissertations is another piece.
The next step is to have those integrate into some, shall we say, flow or… flux (sekrit inside Nature joke) of the rest of their scholarly activity when they graduate. Bookmarks created, databases curated, papers reviewed, etc. etc.
That’s the technology piece.
The other piece, however, cannot be solved with technology: finding better ways for humans to review scholarly portfolios and to make decisions based on them. That’s going to address this problem of evaluation far better than anything else.
SIDEBAR
And of course you can do some side bits with technology once you have all this info circulating around, like relevance ranking to help people find the best, most relevant work in the flood of science that is sloshing around. Usage factor and other metrics may all help in recommending things to read.
END SIDEBAR
References
Richard Monastersky, “The Number That’s Devouring Science”, Chronicle of Higher Education, Volume 52, Issue 8, Page A12 (2005)
The PLoS Medicine Editors, “The Impact Factor Game”, PLoS Med 3(6): e291 doi:10.1371/journal.pmed.0030291 (2006)
Peter A. Lawrence, “The Mismeasurement of Science”. Current Biology, 17 (15), r583. doi:10.1016/j.cub.2007.06.014 (2007)
Bruno Granier, “Impact of research assessment on scientific publication in Earth Sciences” (PDF), a presentation at ICSTI June 2007 Public Conference on Assessing the quality and impact of research: practices and initiatives in scholarly information
Richard Akerman, “Web tools for peer reviewers…and everyone” (PDF), a presentation at ICSTI June 2007 Public Conference on Assessing the quality and impact of research: practices and initiatives in scholarly information
Corie Lok, “Scifoo: day 1; open science” (2007)
Alex Palazzo, “Scifoo – Day 2 – Science Communication” (2007)
Alex Palazzo, “Scifoo – Day 3 (well that was yesterday, but I just didn’t have the time …)” (2007)
Previously:
June 2007 Science Library Pad: ICSTI 2007 category


4 Responses to scifoo: the mindless impact factory

  1. You’re absolutely right about the misapplication of metrics. If you rely too heavily on them, you start thinking that if it can’t get a number, it doesn’t exist (or matter). Then you have people gaming the system (counting how many papers) instead of doing what needs to get done.
    I do hope you’ll be able to make our SciFoo Lives On session on Definitions in Open Science. If you email me a brief presentation (ppt or images), I’ll be happy to create a poster for you in Second Life.

  2. Mr. Gunn says:

    You’ve really put your finger on why the obsession with journal ranking is so bad: it’s distracting from the whole purpose. Incidentally, that’s also kinda why Second Life is so bad. It’s technology for technology’s sake, the servant becoming the master, and not a useful tool for collaboration, just as more and more advanced bibliometrics aren’t a useful tool for assessing scientific merit.
    That’s not to say people shouldn’t keep pushing the envelope, but it does seem like a little bit of stfu-n-gbtw is in order after a while, doesn’t it?

  3. Like any technology, Second Life has strengths and weaknesses. In my experience, it can work very well for networking with people at poster sessions.
    A common reason for having a negative experience with Second Life is an inadequate video card.

  4. Pingback: advantage web » plos medicine impact factor
