TheContentMine is Ready for Business and will make scientific and medical facts available to everyone on a massive scale.

 
It’s a year since I started TheContentMine (contentmine.org) – a project funded by the Shuttleworth Foundation. In ContentMine we intend to extract all the world’s scientific and medical facts from the scholarly literature and make them available to everyone under permissive Open licences. We have been so busy – writing code, lobbying politically, building the team, designing the system, giving workshops, creating content, writing tutorials, etc. that I haven’t had time to blog.
This week we launched, without fanfare, at a workshop sponsored by Robert Kiley of the Wellcome Trust:
[Image: Robert Kiley presenting, with AMI, the mascot of TheContentMine]
Robert (and WT) have been magnificent in supporting ContentMining. He has advocated, organised, corralled, pushed, challenged over many years. The success of the workshop owes a great deal to him.
On Monday and Tuesday (2015-04-13/14) we ran a two-day workshop – training, hacking, and advocacy/policy. We advertised the workshop, primarily to Early Career Researchers, and were overwhelmed – FOUR TIMES oversubscribed [1]. Jenny Molloy organised the days, roughly as follows:

  • Day 1
    • tutorials and simple hands-on sessions with the technology
    • aspects of policy and protocols
    • planning projects
  • Day 2
    • hacking projects for 6 hours
    • a 2-hour policy/advocacy session with key UK and EU attendees

It worked very well and showed that ContentMine is now viable in many areas:

  • We have unique software that has a completely new approach to searching scientific and medical literature.
  • We have an infrastructure that allows automatic processing of the literature through CRAWLing, SCRAPE-ing, NORMAlising and MINING (AMI).
  • We have a back-end/server CATalogue (contracted through CottageLabs) which has ingested and analysed a million articles.
  • We have novel search interfaces and display of results.
  • We have established that, in the UK, THE RIGHT TO READ IS THE RIGHT TO MINE.
  • We have built a team, and shown how to build communities.
  • We have tested training sessions that can be used to train trainers and spread the word.
  • And we are credible at the policy level.

[Image: part of the policy session]
We are delighted that a dozen funders, policy makers and others came. They included JISC, IPO, LIBER, RLUK, RCUK, HEFCE, CUL, WT, BIS, UbiquityPress and NatureNews. The discussion took for granted that ContentMining is critically important and addressed how it could be supported and encouraged.
My slides for the policy session are at http://www.slideshare.net/petermurrayrust/content-mining-at-wellcome-trust.
I will blog more details later and show more pictures, and so will Graham (McDawg) Steel. But the highlight for me was the speed and efficiency of the Early Career Researchers in adopting, using, modifying and promoting the system. They came mainly from bioscience and medicine, and ranged from UNIX geeks to those who had never seen a command line. In their projects they were able to make the ContentMine software work for them and extract facts from the literature. One group wrote additional processing software; another created a novel display with D3.
Best of all they said they’d be happy to learn how to run a workshop and take the ideas and software (which is completely Open Apache2/CC BY/CC0) to their communities.
NOTE: The Hargreaves exception allows UK researchers to mine ANYTHING (that they have the legal right to read) for non-commercial use. The publishers cannot stop them, either by technical means or by contracts with libraries.
This should make the UK the content-mining capital of the world. Please join us!
 


32-year-old Elsevier paper could have averted Ebola, but Liberians would have had to pay to read it

I am very angry with the publishing industry.
Last week the NY Times reported that the Ministry of Health in Liberia had discovered a 32-year-old paper that, if they had known about it, might have alerted Liberians to the possibility of Ebola. See a report in TechDirt (https://www.techdirt.com/articles/20150409/17514230608/dont-think-open-access-is-important-it-might-have-prevented-much-ebola-outbreak.shtml) and the article in the NY Times itself (http://www.nytimes.com/2015/04/08/opinion/yes-we-were-warned-about-ebola.html). The paper itself (http://www.sciencedirect.com/science/article/pii/S0769261782800282) is in Science Direct and paywalled (31 USD for ca. 1000 words / 3.5 pages). I’ll write more on what the Liberians had to say and how they feel about the publishing industry and Western academia (they are incredibly restrained). I am not restrained, and this makes me very angry.
This paper contains the words:
“The results seem to indicate that Liberia has to be included in the Ebola virus endemic zone.” In the future, the authors asserted, “medical personnel in Liberian health centers should be aware of the possibility that they may come across active cases and thus be prepared to avoid nosocomial epidemics.”
The Liberians argue that if they had known about this risk some of the effects of Ebola could have been prevented.
Suppose I’m a medical educational organization in Liberia and I want to distribute this paper to 50 centers in the country. I am forbidden to do this by Elsevier unless I pay 12 USD per 3-page reprint (from https://s100.copyright.com):
[Screenshot: copyright.com reprint pricing]
I adamantly maintain “Closed access means people die”.
This is self-evidently true to me, though I am still criticized for not doing a scientific study (which would necessarily be unethical). But the Liberian Ministry is not impressed with academia either:
There is an adage in public health: “The road to inaction is paved with research papers.”
We’ve paid 100 BILLION USD over the last 10 years to “publish” science and medicine. Ebola is a massive systems failure which I’ll analyze shortly.
 
 


Content Mining Hackday in Cambridge this Friday (2015-01-23) – all welcome

We are having a ContentMine hackday – open to all – this Friday in Cambridge: https://www.eventbrite.co.uk/e/contentmining-hackday-in-cambridge-facilitated-by-contentmine-tickets-716287435
We are VERY grateful to Laura James, from our Advisory Board, who also set up the Cambridge Makespace where the event will be held. This event will cover everything – technical, science, sociolegal, etc. We are delighted that Professor Charles Oppenheim, another member of our Advisory Board, will be present. Charles is a world expert on scholarship, including the policy and legality of mining. For example, he flagged up today that the EU and its citizens are pushing for reform…
We’re also expecting colleagues from Cambridge University Library so we can have a lively political stream. And we’ve got scientific publishers in Cambridge – love to see you.
There’ll be a technical stream – integrating the components of quickscrape, Norma, AMI and our API, created by Mark MacGillivray and colleagues at CottageLabs. All the technology is brand new and everything is offered Openly (including for commercial use).
And there’ll be a group of subprojects based on scientific disciplines. They include:

  • clinical trials
  • farming and agronomy
  • crystallography

If you have an area you’d like to mine, come along. You’ll need a good idea of your sources (journals, theses, etc.), and some idea of what you’d like to extract. And, ideally, you’ll need energy and stamina and friends…
Oh, and in the unlikely event you get bored we are 15 metres from the Cambridge Winter Beer Festival.


This month's typographical horror: Researchers PAY typesetters to corrupt information

One of the “benefits” we get from paying publishers to publish our work is that they “typeset” it. Actually they don’t. They pay typesetters to mutilate it. I don’t know how much they pay but it’s probably > 10 USD per page. This means that when you pay APCs (Article Processing Charges) YOU are paying typesetters – maybe 200 USD.
Maybe you or your funder is happy with this?
I’m not. Typesetters destroy information. Badly. Often enough to blur or change the science. ALL journals do this. I happen to be hacking PLoSONE today (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0115884), but this is unlikely to be specific to them:
[Screenshot: excerpt from the PLoS ONE paper, as rendered]
So what’s the typographical symbol (or symbols) in the last line? Hint: it’s NOT what it SHOULD be, namely the Unicode character PLUS-MINUS SIGN (U+00B1): ±
So what’s happened? Try cutting and pasting the last line into a text editor. Mine gives:
(TY/SVL = 0.05+0.01 in males, 0.06+0.01 in females versus 0.08+0.01 in both sexes in L.
This is a DESTRUCTION of information.
So authors should be able to refuse charges for typesetting, save over 100 USD, and thereby improve science.
BTW the same horror appears in the XML. So when the publishers tell you how wonderful XML is, make your own judgment.
There are other horrors of the same sort (besides plus-minus) in the document. Can you spot them?
The only good news is that ContentMine sets out to normalize and remove such junk. It will be a long slog, but if you are committed to proper communication of science, lend a hand.
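For a feel of what that normalization involves, here is a toy sketch in Python (my illustration, not ContentMine’s actual Norma code); the single repair rule is an assumption about one corruption pattern, and real rules need far more context:

import re

# Toy repair table: one illustrative rule that restores a plus-minus sign
# in "0.05+0.01"-style statistics. A bare '+' can be legitimate, so real
# normalization needs context before applying a rule like this.
RULES = [
    (re.compile(r"(\d+\.\d+)\+(\d+\.\d+)"), "\\1±\\2"),  # ± is U+00B1
]

def normalize(text):
    # Apply each repair rule in turn.
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

print(normalize("TY/SVL = 0.05+0.01 in males, 0.06+0.01 in females"))
# -> TY/SVL = 0.05±0.01 in males, 0.06±0.01 in females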
 
 


FORCE2015 ContentMine Workshop/hack – we are going to index the scientific literature and clinical trials…

TL;DR We had a great session at FORCE2015 yesterday in Oxford – people liked it, understood it, and are wanting to join us.
We ran a pre-conference workshop for 3 hours, followed by an extra hack session. It was open to all, and all sorts of people came, including:

  • library
  • publisher
  • academic
  • hacker
  • scholarly poor
  • legal
  • policy
  • campaigner

So we deliberately didn’t have a set program but we promised that anyone could learn about many of the things that ContentMine does and get their hands dirty. Our team presented the current state of play and then we broke into subgroups looking at legal/policy, science, and techie.
ContentMining is at a very early stage and the community, including ContentMine, is still developing tools and protocols. There’s a lot to know and a certain amount of misunderstanding and disinformation. So very simply:

  • facts are uncopyrightable
  • large chunks of scientific publications are facts
  • in the UK we have the legal right to mine these documents for facts for non-commercial activity/research
  • the ContentMine welcomes collaborators who want to carry out this activity – it’s inclusive – YOU are part of US. ContentMine is not built centrally but by volunteers.
  • Our technology is part alpha, part beta. “alpha” means that it works for us, and so yesterday was about the community finding out whether it worked for them.

And it did. The two aspects yesterday were (a) scraping and (b) regexes in AMI. The point is that YOU can learn how to do these in about 30 minutes. That means that YOU can build your bit of the Macroscope (“information telescope”) that is ContentMine. Rory’s interested in farms, so he, not us, is building regexes for agriculture. (A week ago he didn’t know what a regex was.) Yesterday the community built a scraper for PeerJ – so if you want anything from that journal, it’s now added to the repertoire (and available to anyone). We’ve identified clinical trials as one of the areas that we can mine – and we’d love volunteers here.
What can we mine? Anything factual from anywhere. What are facts (asked by one publisher yesterday)? There’s the legal answer (“what the UK judge decides when the publisher takes a miner to court”) and I hope we can move beyond that – that publishers will recognize the value of mining and want to promote a community approach. Operationally, a fact is anything which can be reliably parsed by machine into a formal language and regenerated without loss. So here are some facts: “DOI 123456 contains…”

  • this molecule
  • this species
  • this star, galaxy
  • this elementary particle.

and relationships (“triples” in RDF-speak)

  • [salicylic acid] [was dissolved in] [methanol]
  • [23] [fairy penguins] [breed] [in St Kilda, Victoria]

Everything in […] is precisely definable in ontologies and can be precisely annotated by current ContentMine technologies.
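As a purely illustrative sketch (mine, not ContentMine’s output format), such facts can be held as subject-predicate-object triples; real output would attach ontology identifiers to each slot rather than bare labels:

# Illustrative only: facts as (subject, predicate, object) triples.
# Real output would carry ontology IDs (e.g. a ChEBI ID for
# salicylic acid), not just human-readable labels.
facts = [
    ("DOI 123456", "contains_molecule", "salicylic acid"),
    ("salicylic acid", "was_dissolved_in", "methanol"),
    ("fairy penguins", "breed_in", "St Kilda"),
]

for subject, predicate, obj in facts:
    print(f"[{subject}] [{predicate}] [{obj}]")
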
We can do chemistry (in depth), phylogenetics, agriculture, etc., but what about clinical trials? For those we need to build:

  • a series of scrapers for appropriate journals
  • a series of regexes for terms in clinical trials, e.g. “23 adult females between the ages of …” (a toy sketch follows this list)

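Here is a toy version of such a regex in Python (AMI declares its regexes in its own plugin format, so treat this purely as an illustration):

import re

# Toy clinical-trials pattern: capture cohort size, sex, and age range
# from phrases like "23 adult females between the ages of 18 and 35".
COHORT = re.compile(
    r"(?P<n>\d+)\s+adult\s+(?P<sex>males|females)"
    r"\s+between\s+the\s+ages\s+of\s+(?P<lo>\d+)\s+and\s+(?P<hi>\d+)"
)

match = COHORT.search("We recruited 23 adult females between the ages of 18 and 35.")
if match:
    print(match.groupdict())
# -> {'n': '23', 'sex': 'females', 'lo': '18', 'hi': '35'}
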
For the really committed we will also be able to analyze tables, figures and phrases in text using Natural Language Processing. If this is you, it will be very exciting.


FORCE2015 Workshop: How ContentMine works for you and what you can bring

TL;DR: We outline the tools and pipeline which ContentMine will show on Sunday at FORCE2015. They are very general and accessible to everyone…
ContentMine technology and community are maturing quickly. We’ve just had a wonderful three days in Berlin with Johnny West, a fellow Shuttleworth Fellow. Johnny runs http://openoil.net/ – a project to find public information about the extractive industries (oil/gas, mining). Technically his tasks and ours are very similar – the information is there but hard to find and locked in legacy formats. So at the last Shuttleworth gathering we suggested a hack/workshop to see how we could help each other.
I thought this would initially be about OCR, but it turns out that our architecture for text analysis and searching is exactly what OpenOil needs. By using regexes on HTML (or PDF-converted-to-HTML) we can find company names and relations, aspects of contracts, etc. The immediate point is that ContentMine can be used out-of-the-box for a wide range of information tasks.
[Diagram: the ContentMine pipeline architecture]

  1. We start with a collection of documents. Our mainstream activity will be all papers published in a day – somewhere between 2000 and 3000 (no one quite knows). We need a list of those, and there are several sources such as CrossRef or JournalTOCs. We may also use publishers’ feeds. The list is usually a list of references – DOIs or URLs – which we use in the scraping. But we can also use other sources such as repositories. (We’d love to find people at FORCE2015 who would like their repositories searched and indexed – including theses, which are currently very badly indexed indeed.) And ContentMine can also be used on personal collections such as hard drives.
  2. The links are then fed to Richard Smith-Unna’s quickscrape, which can determine all the documents associated with a publication (PDF text, HTML text, XML, supplementary files, images, DOCX, etc.). This needs volunteers to write scrapers, but quite a lot of this has already been done. A scraper for a journal can often be written in 30 minutes, and no special programming is required. This introduces the idea of community. Because ContentMine is Open and kept so, the contributions will remain part of the community. We’re looking for community, and this is “us”, not “we”-and-“you”. And the community has already started, with Rory Aaronson (also a Shuttleworth Fellow) starting a sub-project on agriculture (https://openfarm.cc/). We’re going to find all papers that contain farming terms and extract the FACTs.
    The result of scraping is a collection of files. They’re messy and irregular – some articles have only a PDF, others have tens of figures and tables. Many are difficult to read. We are going to take these and make them usable.
  3. The next stage is normalization (Norma). The result of Norma’s processing is tagged, structured HTML – or “scholarly HTML” (http://scholarlyhtml.org/), which a group of us designed 3 years ago. At that time we were thinking of authoring, but because proper scholarship closes the information loop, it’s also an output.
    Yes, Scholarly HTML is a universal approach to publishing. Because HTML can carry any general structure, because it can host foreign namespaces (MathML, CML), because it has semantic graphics (SVG), and because it has tables and lists and manages figures and links, it has everything. So Norma will turn everything into sHTML/SVG/PNG.
    That’s a massive step forward. It means we have a single, simple, tested, supported format for everything scholarly.
    Norma has to do some tricky stuff. PDF has no structure, and much raw HTML is badly structured. So we have to mark up sections for the different parts and roles in the document (abstract, introduction, … references, licence…). That means we can restrict the analysis to just one or a few parts of the article (“tagging”). That’s a huge win for precision, speed and usability. A paper about E. coli infection (“Introduction” or “Discussion”) is very different from one that uses E. coli as a tool for cloning (“Materials”).
  4. So we now have normalized sHTML. AMI provides a wide and community-fuelled set of services to analyze and process this. There are at least three distinct tasks: (a) indexing (information retrieval, or classification), where we want to know what sort of a paper it is and find it later; (b) information extraction, where we pull out chunks of Facts from the paper (e.g. all the chemical reactions); and (c) transformation, where we create something new out of one or more papers – for example calculating the physical properties of materials from the chemical composition.
    AMI does this through a plugin architecture. Plugins can be very sophisticated, such as OSCAR and ChemicalTagger, which recognise and interpret chemical names and phrases and are large Java programs in their own right, or Phylotree, which interprets pixel diagrams and turns them into semantic NexML trees. These took years. But at the other end we can search text for concepts using regular expressions, and our Berlin and OpenFarm experience shows that people can learn these in 30 minutes! (A toy sketch of the whole pipeline follows.)

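To make the shape of the pipeline concrete, here is a toy end-to-end sketch in Python. The real tools are quickscrape (Node.js) and Norma/AMI (Java); every function below is my own stand-in, not their actual APIs:

import re

def crawl():
    """Stand-in for the CrossRef/JournalTOCs feeds: return article links."""
    return ["http://example.org/article/1"]

def scrape(url):
    """Stand-in for quickscrape: fetch every file belonging to one article."""
    return {"fulltext.html": "<p>E. coli was grown in LB medium.</p>"}

def normalize(files):
    """Stand-in for Norma: reduce everything to tagged scholarly HTML."""
    return files["fulltext.html"]

def analyze(shtml):
    """Stand-in for an AMI plugin: a naive regex search for species names."""
    return re.findall(r"[A-Z]\.\s[a-z]+", shtml)

for url in crawl():
    print(analyze(normalize(scrape(url))))  # -> ['E. coli']
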
In summary, then, we are changing the way people search for scientific information, and changing the power balance. Currently people wait passively for large organizations to create push-button technology. If it does what you want, fine (perhaps, if you don’t mind snoop-and-control); if it doesn’t, you’re hosed. With ContentMine YOU == WE decide what we want to do, and then just do it.
We/you need y/our involvement in autonomous communities.
Join us on Sunday. It’s all free and Open.
It will range from very straightforward (using our website) to running your own applications in a downloadable virtual machine which you control. No programming experience is required, but bring a lively mind. It will help if we know how many are coming, and if you download the virtual machine beforehand, just to check it works. It’s easy, but the 1.8 GB download takes a bit of time.


ContentMine Update and FORCE2015: we read and index the daily scholarly literature

We’ve been very busy and I haven’t blogged as much as I’d have liked. Here’s an update and news about immediate events.
Firstly, a welcome to Graham Steel (McDawg), who is joining us as community manager. Graham is a massive figure in the UK and the world in fighting for Open. We’ve known each other for several years. Graham is a tireless, fearless fighter for access to scholarly information. He’s one of the #scholarlypoor (i.e. not employed by a rich university), so he doesn’t have access to the literature. Nonetheless he fights for justice and access.
Here’s a blog post from 4 years ago where I introduced him. He’ll be with us this weekend at FORCE2015; more later.
We have made large advances in the ContentMine technology. I’m really happy with the architecture which CottageLabs, Richard Smith-Unna and I have been hacking. Essentially we automate the process of reading the daily scientific literature – between 1000 and 4000 articles, depending on what you count. Each is perhaps 5-20 pages, many with figures. Our tools (quickscrape, Norma, and AMI) carry out the process of:

  • scraping: downloading all the components of a paper (XML, HTML, PDF, CSV, DOC, TXT, etc.)
  • normalising and tagging the papers (Norma): we convert PDF and XML to HTML5, which is essentially Scholarly HTML. We extract the figures and interpret them where possible. We also identify the sections and tag them, so – for example – we can look at just the Materials and Methods section, or just the Licence.
  • indexing and transformation (AMI): AMI now has several well-tested plugins – chemistry, species, sequences, phylogenetic trees – and, more generally, regular expressions designed for community creation.

Mark MacGillivray and colleagues have created a lovely faceted search index so it’s possible to ask scientific questions with a facility and precision that we think is completely novel.
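As an illustration of what “faceted” means here, consider counting facet values over extracted facts. This is a sketch of the idea only – the field names are invented and this is not the CottageLabs API:

from collections import Counter

# Toy index: each record lists the facts extracted from one paper.
documents = [
    {"doi": "10.1000/a", "species": ["E. coli"], "journal": "PLoS ONE"},
    {"doi": "10.1000/b", "species": ["E. coli", "H. sapiens"], "journal": "PeerJ"},
]

def facet_counts(docs, field):
    """Count how often each value of `field` occurs across the documents."""
    counts = Counter()
    for doc in docs:
        value = doc[field]
        counts.update(value if isinstance(value, list) else [value])
    return counts

print(facet_counts(documents, "species"))  # Counter({'E. coli': 2, 'H. sapiens': 1})
print(facet_counts(documents, "journal"))  # Counter({'PLoS ONE': 1, 'PeerJ': 1})
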
We’re doing a workshop on this at FORCE2015 next Sunday (Jan 11) for 3 hours and hacking thereafter. The software is now easily used on contentmine.org or distributable in virtual machines. Everything is Open, so there is no control by third parties. The workshop will start by searching for species, and then move on to building custom searches and browsing. For those who can’t be there, Graham/McDawg is hoping to create a livestream – but no promises.
I’ve spent a wonderful 3 days in Berlin with fellow Shuttleworth Fellow Johnny West. Johnny’s OpenOil project – http://openoil.net/ – is about creating Open information about the extractive industries. It turns out that the technology we are using in ContentMine is extremely useful for understanding corporate reports. So I’ve been hacking corporate structure diagrams, which are extremely similar to metabolic networks or software flowcharts.
More later, as we have to wrap up the hack….
 


Wiley's "Free to read" actually means "pay 35 USD"

[Screenshot: the tweet from Wiley]
I got the above unwanted tweet from Wiley (I have checked as far as possible that it’s genuine). It seems to be Wiley advertising a free-to-read article. I have pasted the message so you can try this at home:
Progress in #nanotechnology within the last several decades review from @unifr is #freetoread! http://ow.ly/FXDFQ
I checked the poster (https://twitter.com/ChemEurJ/status/544832871564050432/photo/1) and it seems to be a genuine site. So off I go to get my free copy (sorry, my free set of photons for sighted readers)…
 
[Screenshot: the article landing page]
I click the “View Full Article (HTML)” and get…
[Screenshot: the paywall – 35 USD to view the full article]
So Wiley equate “35 USD” with “free to read”.
I don’t.
I’m sure it’s a BUMP-ON-THE-ROAD (an Elsevier excuse).
But this is the fourth independent publisher foul-up I have come across in the last four days. We pay them 20 billion USD and they can’t get it right.
 


How publishers destroy science: Elsevier's XML, API and the disappearing chemical bond. DO NOT BUY XML

TL;DR: Elsevier typesetting turns double bonds into garbage.
Those of you who follow this blog will know that I contend that publishers corrupt manuscripts and thereby destroy science.
Those of you who follow this blog will know that Elsevier publicly stated that I could not use the new “Hargreaves” law to mine articles on their web pages and that I must do this through their API. Originally there were zillions of conditions, which – under our constant criticism – have gradually (but nowhere near completely) disappeared. They now allow me to mine from the web page, but insist that their XML API gives better content.
I have consistently refused to use Elsevier’s API for legal, political and social reasons (I don’t want to sign my rights away, be monitored, have to ask permission, etc.). But I also know from at least 5 years of trying to interpret publishers’ PDFs and HTML that information is corrupted. By this I mean that what the author submits is turned into something different lexically, typographically and often semantically. (Yes, that means that by changing the way something looks, you can change its meaning.)
Anyway, yesterday Chris Shillum, who was part of the team I challenged, tweeted that he would let me have a paper – in XML format – from the Elsevier API. For those who don’t know, XML is designed to hold information in a style-free form. It can be rendered by a stylesheet or program (e.g. FOP) into whatever font you like. I’m very familiar with XML, having run the developers’ list with Henry Rzepa in 1997 and been co-author of the universal SAX protocol. Henry and I have developed Chemical Markup Language (CML) precisely for the purpose of chemical publishing (among many other things).
 
But Elsevier don’t use CML; they use typographers who know nothing about chemistry. At school you may have heard of a “double bond” (http://en.wikipedia.org/wiki/Double_bond). It’s normally represented by two lines between the atoms. We used to draw these with rapidographs, but now we type them. So every chemist in the world will type carbon dioxide as
O=C=O
capital-O equals capital-C equals capital-O
You can do it – nothing terrible happens. You can even search chemical databases using this. They all understand “equals”.
But that’s not good enough for Elsevier (and most of the others). It has to look “pretty”. It’s more important that a publication looks pretty than that it’s correct. And that’s one of the major ways they corrupt information. So here’s the paper that Chris Shillum sent me.
First as a PDF.
[Screenshot: excerpt from the PDF, containing “(C=O stretching)”]
Can you see the C=O double bond in the middle – “(C=O stretching)”? It’s no longer an equals, but a special publisher-only symbol they think looks prettier. Among other things, if I search for “C=O” I won’t find the double bond in the text. That’s bad enough. But what’s far worse is that this symbol has been included in their XML. And this gets transmitted to the HTML – which looks like this (you can try it yourself: http://www.sciencedirect.com/science/article/pii/S0014579301033130):
[Screenshot: the same text rendered as HTML – the double bond appears as a hollow square]
???
What’s happened??? Do you also see a square? The double bond has disappeared.
The square is Firefox saying “I have been given a character I don’t understand and the best I can do is draw a square” – sorry. Safari does the same. Do ANY of you get anything useful? I doubt it.
Because Elsevier has created a special Elsevier-only method of displaying chemistry. It probably only works inside Elsevier’s back room – it won’t work in any normal browser. Here’s what has happened.
Elsevier wanted a symbol to display a double bond. “Equals” – which all the rest of the world uses – isn’t good enough. So they created their own special Elsevier-double-bond. It’s not a standard Unicode codepoint – it’s in a Private Use Area (http://en.wikipedia.org/wiki/Private_Use_Areas). This is reserved for a single organisation to use and is not intended for unrestricted public use. In certain cases groups have, with mutual agreement, developed communities of practice. But I know of no community outside Elsevier that uses this. (BTW the XML uses 6 Elsevier-only DTDs and can only be understood by reading a 500-page manual – the chemistry is hidden somewhere at the end. This is the monstrosity that Elsevier wishes to force us to use.)
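Detecting this kind of abuse mechanically is easy. Here is a minimal sketch (my own, in Python); the codepoint in the test string is an arbitrary Private Use Area value standing in for whatever Elsevier actually uses:

def private_use_chars(text):
    """Return (offset, codepoint) for every Private Use Area character.

    PUA ranges: U+E000-U+F8FF, plus planes 15-16 (U+F0000-U+FFFFD and
    U+100000-U+10FFFD). Any hit means a glyph with no standard meaning.
    """
    def is_pua(c):
        return (0xE000 <= c <= 0xF8FF or 0xF0000 <= c <= 0xFFFFD
                or 0x100000 <= c <= 0x10FFFD)
    return [(i, f"U+{ord(ch):04X}") for i, ch in enumerate(text) if is_pua(ord(ch))]

# "\uE5F8" is an arbitrary PUA codepoint, NOT necessarily Elsevier's.
print(private_use_chars("(C\uE5F8O stretching)"))  # -> [(2, 'U+E5F8')]
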
It’s highly dangerous. If you change a double bond to a triple bond (ethylene => acetylene) the compound can explode and blow you up. But double and triple bonds are both represented by a hollow square if you try to view Elsevier HTML. And goodness knows what else.
So Elsevier destroys information.
Chris Shillum tells me on Twitter that it’s not a problem. But it is. Using the Private Use Area without the agreement of the community is utterly irresponsible. No one even knew that Elsevier was doing it.
Why’s it irresponsible? Because many languages use it for other purposes. See Wikipedia above. Estonian, Tibetan, Chinese … If an Elsevier-double-bond is used in these documents (e.g. an Estonian chemistry department) there will be certain corruption of both the chemistry and the Estonian. There are probably 10 million chemical compounds with double bonds and all will be corrupted.
But it’s also arrogant. “We’re Elsevier. We’re not going to work with existing DTDs (XML specifications) – we’re going to invent our own.” Who uses it outside Elsevier? “And we are going to force text-miners to use this monstrosity.”
And it’s the combined arrogance and incompetence of publishers that destroys science during the manuscript processing. I’ve been through it. I know.
 
 


Publishers' typesetting destroys science: They are all as bad as each other. Can you spot the error?

I’ve just been trying to mine publicly visible scientific publications from scholarly publishers. (That’s right – “publicly visible” – Hargreaves comes later).
AND THE TECHNICAL QUALITY IS AWFUL. PUBLISHERS DESTROY SCIENCE THROUGH THEIR TECHNICAL INCOMPETENCE AND INDIFFERENCE.
They destroy the text. They destroy the images and diagrams. And we pay them money – usually more than a thousand dollars – for this. Sometimes many thousands. And when I talk to them – which I do regularly – they all say something like:
“Oh, we can’t change our workflow – it would take years” (or something similar). As if this was a law of the universe.
Unfortunately it’s a law of publishing arrogance. They don’t give a stuff about the reader. There are no market forces – the only thing that the PublisherAcademic complex worries about is the shh-don’t-mention-the-Impact-Factor.
And it’s not just the TollAccess ones but also the OpenAccess ones. So today’s destruction of quality comes from BMC. (I shall be even handed in my criticism).
I’m trying to get my machines to read HTML from BMC’s site. Why HTML? Well, publishers’ PDF is awful (I’ll come to that tomorrow or sometime), whereas HTML is a standard of many years and so it’s straightforward to parse. Yes –
unless it comes from a scholarly publisher…
PUZZLE TODAY. What’s (seriously) wrong with the following? [Kaveh, you will spot it, but give the others a chance to puzzle!] It’s verbatim from http://www.biomedcentral.com/1471-2229/14/106 (I have added some CRs to make it readable):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html id="nojs" xmlns="http://www.w3.org/1999/xhtml"
    xmlns:m="http://www.w3.org/1998/Math/MathML"
    xmlns:og="http://ogp.me/ns#" xml:lang="en-GB"
    lang="en-GB" xmlns:wb=“http://open.weibo.com/wb”>
<head> ... [rest of document snipped]

When you see it you’ll be as horrified as I was. There is no excuse for this rubbish. Why do we put up with this?
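Incidentally, you don’t need eyes to catch this class of error: a strict XML parser will reject any XHTML page that isn’t well-formed. Here is a minimal sketch using Python’s standard library; the toy input has a deliberately different error, so the puzzle stays a puzzle:

import xml.etree.ElementTree as ET

# Toy input with a deliberate well-formedness error (mismatched tags).
bad_page = "<html><head></html>"

try:
    ET.fromstring(bad_page)
except ET.ParseError as err:
    print("not well-formed:", err)  # e.g. "mismatched tag: line 1, column 14"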
