Another PDF hamburger; why must scientific publishing destroy science?

#jiscxyz #okfn #quixotechem

I’m off to #JISCMRD (Managing Research Data) to hear about the new round of projects including our own JISCXYZ. Ours concentrates on the publication of data and we are working with publishers to save and validate data at early stages in the publication process.

Meanwhile here’s an indication of how to destroy data (supplemental data):


That’s the commonest method. And here’s another ( ). This file could have released useful data to the world. In fact it destroyed that data by putting it into PDF. The file should have looked like:



D001 with INT=ULTRAFINE\,2\C,0.1063168353,0.3005635652,-0.5502851935

3808,-0.5162435,-0.9249031\PG=C01 [X(C4H8Cl1)]\\@

Notice the precise formatting. This is REQUIRED to read the file in. Instead the author or the publisher (neither of whom apparently cares) tipped it into PDF, which introduced spurious line ends. It’s UNREADABLE by a machine. Follow the link and read the file and see what I mean.
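To make the point concrete, here is a minimal Python sketch of a fixed-column read in the Fortran style. The field widths and layout are invented for illustration (the real file’s format is not reproduced here), but the failure mode is exactly the one above: once a PDF re-wrap splits the line, the record is unrecoverable.

```python
# Sketch only: three fixed-width coordinate fields, 15 columns each.
# Widths are invented for illustration, not taken from the real file.
def read_coords(line):
    """Fixed-column read in the Fortran style: three 15-wide fields."""
    return tuple(float(line[i:i + 15]) for i in (0, 15, 30))

# The intact 45-column record parses exactly:
good = "%15.10f%15.10f%15.10f" % (0.1063168353, 0.3005635652, -0.5502851935)
print(read_coords(good))   # (0.1063168353, 0.3005635652, -0.5502851935)

# A PDF viewer re-wraps at an arbitrary column, say column 22:
fragment = good[:22]
# The truncated second field still "reads", but as 0.3 instead of
# 0.3005635652 (silently wrong), and the empty third field raises
# ValueError, so a careful parser can only give up:
try:
    read_coords(fragment)
except ValueError:
    print("unreadable after re-wrap")
```

Note the worst part: the half-parse is not even an error. The second field quietly becomes a different, plausible-looking number.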

It’s beautiful and garbage. A sickly hamburger.

That’s because almost all publishers don’t care about data. Which means that many of their publications are second-rate. Many are suspect scientifically because the data aren’t published.


7 Responses to Another PDF hamburger; why must scientific publishing destroy science?

  1. Nick Barnes says:

    A parser which understood the required format could certainly remove spurious line-endings, unless they introduce ambiguity (which these ones seem not to do, but I don’t know the format).
    But yes, PDF is absolutely the wrong answer. All the publishers have is a PDF hammer, so everything looks to them like a textual nail.

    • pm286 says:

      The problem is that there is significant TRAILING whitespace! It’s Fortran, after all. If the lines are split we don’t know where the whitespace was.
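pm286’s point can be sketched in a few lines of Python (the column positions are invented for illustration): in fixed-width output, trailing blanks are data, and once a PDF round-trip strips them a reader cannot tell a deliberately blank field from a truncated line.

```python
# Sketch, with invented column positions: trailing blanks are data in
# fixed-width output, and PDF extraction routinely strips them.
def field(line, start, width):
    """Return the fixed-width field, or None if the line is too short."""
    chunk = line[start:start + width]
    return chunk if len(chunk) == width else None

rec = "C   1.0     "          # 12 columns; the last field is blank on purpose
stripped = rec.rstrip()       # what typically survives a PDF round-trip

print(repr(field(rec, 8, 4)))       # '    ': an intentionally blank field
print(field(stripped, 8, 4))        # None: the blanks, hence the field, are gone
```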

  2. Henry Rzepa says:

    Talking of space, lack of it can cause even greater problems. Thus the purveyor of a well known quantum mechanics program recently released a new version. We needed it to explore a spectroscopic method known as ROA (Raman optical activity, a very powerful method for assigning absolute configurations of chiral molecules). The output is complex, and demands graphical interpretation. So it was loaded up into (the only program I know which recognises ROA data) and displayed … nothing. Turns out the ROA scattering intensities for the (otherwise relatively unremarkable) molecule were rather larger than normal. The relevant field is identified in the output with the string ROA- and the numerical value is identified as -9999.9. Well, for our molecule, this ended up as ROA--10000.0 (the numbers are fictitious, to illustrate the problem). You can see how one missing space totally messed up the interpretations. I am also reminded how, when we first did a calculation on a molecule containing TWO iodine atoms, the energy display similarly vanished. Yes, you guessed, the total energy reached -10000.00 (remember, these programs started life in an era where even the thought of an all-electron calculation on a single iodine was beyond the pale).
    Of course, all my suggestions, urgings, etc. to persuade the purveyors of the two programs involved in the story above to adopt a structured format in which white space is less potentially destructive have thus far failed.
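Henry’s missing-space failure mode is easy to reproduce. Here is a hedged Python sketch: the "ROA=" label, the 8-column field width, and the values are all stand-ins (as the numbers in his story are fictitious too), but the mechanism is the same: a fixed-width write loses its separating space exactly when the value needs every column.

```python
# Stand-in sketch: label, width and values are invented for illustration.
def write_line(value):
    """An 8-column fixed-width write after a label."""
    return "ROA=" + "%8.1f" % value

def scan_roa(line):
    """A naive downstream reader: split on whitespace, look for the label."""
    toks = line.split()
    if "ROA=" in toks:
        return float(toks[toks.index("ROA=") + 1])
    return None                      # label not found: display nothing

print(write_line(-9999.9))    # 'ROA= -9999.9' : value fits, space survives
print(write_line(-10000.0))   # 'ROA=-10000.0' : value fills the field

print(scan_roa(write_line(-9999.9)))    # -9999.9
print(scan_roa(write_line(-10000.0)))   # None: the reader sees nothing
```

A structured format (XML, JSON, even CSV) makes the boundary between label and value explicit instead of hanging it on a single space, which is exactly the change Henry says he has been urging.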

  3. Pingback: Twitter Trackbacks for Unilever Centre for Molecular Informatics, Cambridge - Another PDF hamburger; why must scientific publishing destroy science? « petermr’s blog

  4. Henry Rzepa says:

    Here is an interesting discussion on data and its interpretation. It’s all about how the fine structure constant varies according to where in the universe it is measured. The controversy is apparently because the raw data used for the analysis have not always been made openly available. And this from the physics community, which is actually rather good about this sort of thing.

  5. Richard says:

    Lovely – one of ours. This is from 2002 – I don’t know for sure whether we received a txt or doc file from the author, but it could have been a mistake by us. Our policy isn’t to convert txt to pdf, but mistakes can occasionally be made. We should remember what the issue was in 2002 though… we were more worried then about how often the doc format changed, and PDFing the suppdata was the best bet to ensure the files would still be readable in future. I fully support retaining and publishing more raw data alongside and within publications, but the block to this isn’t with the publishers.

  6. Duff Johnson says:

    The error was made by the author, period.
    Blaming “PDF” for this is like blaming your car for a crash. Yes… sometimes it’s the car’s fault, but c’mon… not that often.
