Here’s a comment left on a blog a few days ago which is so compelling that I reproduce it in full. It needs little comment from me.
I think scientific publications are a victim of our own “research success measurement yardstick”. I did my EECS graduate work at a far-east university. The situation there is that your productivity as a researcher equals the number of publications you write a year. On my first day in graduate school, the head of research summoned me and said, “I want you to publish a journal paper and a conference paper every year! I won’t accept your thesis until you publish 2 journal papers.” In other words, he was putting the status quo — publish or perish — into a few sentences. This pressure is even worse for junior academics who are trying to build an academic career. Unless they author or co-author 20+ journal papers a year, their advancement in academic institutions is most often ill-fated.
I think this is deleterious for science as a whole. Such quantitative success measures place enormous pressure on researchers, which eventually leads to:
1. Publishing poor-quality papers with half-baked ideas or insufficiently rigorous experimental evidence
2. Helping unheard-of, unrecognized journals proliferate
3. Researchers losing their integrity, and the proliferation of research malpractice
e.g. fabrication of data, dishonesty, plagiarism, fragmenting a single publication into multiple publications (just to get the brownie points), intellectual piracy (trying to get your name onto colleagues’ publications), and publishing the same results in multiple journals under different titles.
I was quite frustrated in academia, and it eventually led to my untimely departure, as I couldn’t stand what was happening around me. By the time I publish one paper, after a long and thorough investigation, I see others have published half a dozen by means of the malpractices listed above. From a university administrator’s perspective, I am nothing but unproductive “dead wood”. Finally I decided to take a 9–5 job in industry to earn my bread, and to do research in my spare time. This way I avoid the drawbacks of being attached to an academic institution, and it allows me to be a more independent and honest researcher.
I wish the science community (as well as universities) rewarded “quality” research and publications rather than pure volume. It is my hypothesis that this is the key reason why science is not progressing at present. Since most researchers have to “survive” in their respective institutions, they work on research that leads to predictable results, which translate into papers, rather than engaging in high-quality, productive research, which always comes with high risk, long-term rigorous investigation and, more often than not, a big price tag.
I empathize with these comments – I have seen similar ones in the blogosphere over the last few years, though I have no idea how common such experiences are. It was very worrying that two years ago Acta Crystallographica detected 50+ publications from one institution which were all fake. The credit goes to the crystallographic community for detecting this – I suspect that a significant amount (though probably not a large percentage) of scientific publications are partly or wholly fraudulent. In cheminformatics, for example (where I am on an editorial board), there is a culture of not publishing data (its IP is protected, and withholding it gives the authors an “advantage”), using closed software (you make money from it), and not revealing all your analysis methods in detail. Although some editors are trying to change this, the culture of not allowing reproducibility (and not being interested in it) is still there. Almost by definition, very few cheminformatics papers can be reproduced from what is published in the paper. I am not saying any of the published work is fraudulent (I think quite a lot of it is meaningless, and that also leads to unnecessary forms of publication), but it would be difficult to detect problems simply by reading the paper.