petermr's blog

A Scientist and the Web


Archive for the ‘Uncategorized’ Category

Jean-Claude Bradley Memorial Symposium ; Updates, including live streaming

Sunday, July 13th, 2014

Tomorrow we have the Memorial Symposium for Jean-Claude Bradley in Cambridge:

We have 13 speakers and other items related to JCB. The lecture theatre is nearly full (ca 48 people)

** We have arranged live streaming and recording so those who cannot attend in person can follow and we will also have a recording (don’t know how long that will take to edit) **

Here are the notes – please try them out:

Meeting Name: Unilever Centre Lecture Theatre

Invited By: IT Support Chemistry

To join the meeting:


If you have never attended an Adobe Connect meeting before:

Test your connection:

Get a quick overview:


I suggest a hashtag of #jcbmemorial

We meet tonight in the Anchor pub in Cambridge – TonyW and I will be there at 1800 – I will have to leave ca 1830.



Content Mining: Extraction of data from Images into CSV files – step 0

Wednesday, July 9th, 2014

Last week I showed how we can automatically extract data from images. The example was a phylogenetic tree, and although lots of people think these are wonderful, even more will have switched off. So now I’m going to show how we can analyse a “graph” and extract a CSV file. This will be in instalments so that you will be left on a daily cliff-edge… (actually it’s because I am still refining and testing the code). I am taking the example from “Acoustic Telemetry Validates a Citizen Science Approach for Monitoring Sharks on Coral Reefs” (I’ve not read it, but I assume they got volunteers to see how long they could evade being eaten with and without the control).

Anyway here’s our graph. I think most people can understand it. There’s:

  • an x-axis, with ticks, numbers (0-14), title (“Sharks detected”) and units (“Individuals/day”)
  • a y-axis, with ticks, numbers (0-20), title (“Sharks observed”) and units (“Individuals/day”)
  • 12 points (black diamonds)
  • 12 error bars (like TIE fighters) appearing to be symmetric
  • one “best line” through the points


We’d like to capture this as CSV. If you want to sing along, follow: (the link will point to a static version – i.e. not updated as I add code).

This may look simple, but let’s magnify it:


Whatever has happened? The problem is that we have a finite number of pixels. We might paint them black (0) or white (255) but this gives a jaggy effect which humans don’t like. So the plotting software adds gray pixels to fool your eye. It’s called antialiasing (not a word I would have thought of). So this means the image is actually gray.

Interpreting gray-scale images is tough, and most algorithms can only count up to 1 (binary), so we “binarize” the image. That means that each pixel becomes either 0 (black) or 1 (white). This has the advantage that the file/memory can be much smaller and also that we can do topological analyses as in the last blog post. But it throws information away, and if we are looking at (say) small characters this can be problematic. However it’s a standard first step for many people and we’ll take it.

The simplest way to binarize a gray scale (which goes from 0 to 255 in unit steps) is to classify 0-127 as “black” and 128-255 as “white”. So let’s do that:
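That thresholding step can be sketched in a few lines (a toy pure-Python version on a grid of gray values – an illustration, not the actual code, which works on real image rasters):

```python
def binarize(gray, threshold=128):
    """Classify each gray value 0-127 as 0 (black) and 128-255 as 1 (white)."""
    return [[0 if pixel < threshold else 1 for pixel in row]
            for row in gray]

# A toy 2x3 "image": black and antialiased-dark pixels, then lighter ones.
gray = [[0, 100, 127],
        [128, 200, 255]]
print(binarize(gray))  # [[0, 0, 0], [1, 1, 1]]
```

Note the cliff-edge at 127/128: a single gray level decides which side an antialiased pixel falls on, which is exactly why small characters can suffer.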



Now if we zoom in we can see the pixels are binary:


So this is the next step on our journey – how are we going to turn this into a CSV file? Not quite as simple as I have made it out – keep your brain in gear…

I’ll leave you on the cliff edge…



Social Machines, SOCIAM, WWMM, machine-human symbiosis, Wikipedia and the Scientist’s Amanuensis

Tuesday, July 8th, 2014

Over 10 years ago, when peer-to-peer was an exciting and (through Napster) a liberating idea, I proposed the World Wide Molecular Matrix (Cambridge), (wikipedia) as a new approach to managing scientific information. It was bottom-up, semantic, and allowed scientists to share data as peers. It was ahead of the technology and ahead of the culture.

I also regularly listed tasks that a semi-artificially-intelligent chemical machine – the Scientists’ Amanuensis – could do, such as read the literature, find new information, compute the results and republish to the community. I ended with:

“pass a first year university chemistry exam”

That would be possible today – by the end of this year – we could feed past questions into the machine and devise heuristics, machine learning and regurgitation that would get a 40% pass mark. Most of the software was envisaged in the 1970s in the Stanford and Harvard AI/Chemistry labs.

The main thing stopping us doing it today is that the exam papers are Copyright. And that most of published science is Copyright. And I am spending my time fighting publishers rather than building the system. Oh dear!

Humans by themselves cannot solve the problem – the volume is too great – 1500 new scientific papers each day. And machines can’t solve it, as they have no judgment. Ask them to search for X and they’ll often find 0 hits or 100,000.

But a human-machine symbiosis can do wonderfully. Its time has now come – epitomised by the SOCIAM project, which involves Southampton and Edinburgh (and others). Its aim is to build human-machine communities. I have a close link as Dave Murray-Rust (son) is part of the project and asked if The Content Mine could provide some synergy/help for a meeting today in Oxford. I can’t be there, and suggested that Jenny Molloy could (and I think she’ll meet in the bar after she has fed her mosquitoes).

There’s great synergy already. The world of social machines relies on trust – that various collaborators provide bits of the solution and that the whole is larger than the parts. Academic in-fighting and meaningless metrics destroy progress in the modern world – the only thing worse is publishers’ lawyers. The Content Mine is happy to collaborate with anyone – the more you use what we can provide the better for everyone.

Dave and I have talked about possible SOCIAM/ContentMine projects. It’s hard to design them because a key part is human enthusiasm and willingness to help build the first examples. So it’s got to be something where there is a need, where the technology is close to the surface, where people want to share and where the results will wow the world. At present that looks like bioscience – and CM will be putting out result feeds of various sorts and seeing who is interested. We think that evolutionary biology, especially of dinosaurs, but also of interesting or threatened species, would resonate.

The technology is now so much better and, more importantly, so much better known. The culture is ready for social machines. We can output the results of searches and scrapings in JSON, link to DBPedia using RDF – reformat and repurpose using XPath or CSS. The collaboration doesn’t need to be top-down – each partner says “here’s what we’ve got” and the others say “OK, here’s how we glue it together”. The vocabularies in bioscience are good. We can use social media such as Twitter – you don’t need an RDF schema to understand #tyrannosaurus_rex. One of the great things about species is that the binomial names are unique (unless you’re a taxonomist!) and that Wikipedia contains all the scientific knowledge we need.

There don’t seem to be any major problems [1]. If it breaks we’ll add glue just as TimBL did for URLs in the early web. Referential and semantic integrity are not important in social machines – we can converge onto solutions. If people want to communicate they’ll evolve to the technology that works for them – it may not be formally correct but it will work most of the time. And for science that’s good enough (half the science in the literature is potentially flawed anyway).



[1] One problem. The STM publishers are throwing money at politicians desperately trying to stop us. Join us in opposing them.


Why I am fortunate to live and work in Cambridge

Monday, July 7th, 2014


Today was the Tour de France; third day – Cambridge to London. A once-in-a-lifetime opportunity. Should I “take the morning off” to watch the race – or should I continue to hack code for freedom. After all we are in a neck and neck race with those who wish to control scientific information and restrict our work in the interests of capitalist shareholders.

I’m very fortunate in that I can do both. I’m a 7-minute cycle from the historic centre of Cambridge. I can carry my laptop in my sack, find a convenient wall to sit on – and later stand on – and spend the waiting time hacking code. And when I got into the Centre I found the “eduroam” network. Eduroam is an academic network which is common in parts of the anglophone world, especially the British Commonwealth. So I could sit in front of the Norman Round Church – 1000 years old – and pick up eduroam, perhaps from St John’s College.

The peloton rode ceremonially through Cambridge (it sped up 2 kilometres down the road) but even so it only took 20 seconds to pass.

So I can do my work anywhere in Cambridge – on a punt, in a pub, in the Market Square, at home

and sometimes even in the Chemistry Department…

So thank you everyone who makes the networks work in Cambridge.

And here, if you can see it half way up the left-hand side (to the left of the red shirt), is the bearsuit who came to watch the race.


Jean Claude Bradley Memorial Symposium; July 14th; let’s take Open Notebook Science to everyone

Friday, July 4th, 2014

On July 14th we are holding a memorial meeting for Jean-Claude Bradley in Cambridge. Do come; it’s open for all. [NOTE: we hope to get live streaming for those who can't come.]

Jean-Claude Bradley was one of the most influential open scientists of our time. He was an innovator in all that he did, from Open Education to bleeding edge Open Science; in 2006, he coined the phrase Open Notebook Science. His loss is felt deeply by friends and colleagues around the world.

On Monday July 14, 2014 we shall gather at Cambridge University to honour his memory and the legacy he leaves behind with a highly distinguished set of invited speakers to revisit and build upon the ideas which inspired and defined his life’s work.


Simon Coles, University of Southampton, UK
Robert Hanson, St. Olaf College, USA
Nina Jeliazkova, Ideaconsult, Bulgaria
Andrew Lang, Oral Roberts University, USA
Daniel Lowe, NextMove Software, UK
Cameron Neylon, PLOS, USA
Peter Murray-Rust, Cambridge University, UK
Noel O’Boyle, NextMove Software, UK
Henry Rzepa, Imperial College London, UK
Valery Tkachenko, Royal Society of Chemistry, UK
Matthew Todd, University of Sydney, Australia
Antony Williams, Royal Society of Chemistry, UK
Egon Willighagen, Maastricht University, Netherlands

For me this is not to look back but forward. Science, and science communication, is in crisis. We need bold, simple visions to take us out of this, and Open Notebook Science (ONS) does exactly that. It:

  • is inclusive. Anyone can be involved at any level. You don’t have to be an academic.
  • is honest. Everything that is done is Open, so there is no fraud, no misrepresentation.
  • is immediate. The science is available as it happens. Publication is not an operation, but an attitude of mind
  • is preserved. ONS ensures that the record, and the full record, persists.
  • is repeatable or falsifiable. The full details of what was done are there so the experiment can be challenged or repeated at any time
  • is inexpensive. We waste 100 billion USD/year of science through bad practice, so we save that immediately. But also we get rid of paywalls, lawyers, opportunity costs, nineteenth-century publishing practices, etc.

and a lot more. I shall take the opportunity to show the opportunities:

“Open Notebook Science NOW!” – Peter Murray-Rust, University of Cambridge and Shuttleworth Fellow
Open Notebook Science can revolutionise science in the same way as Open Source has changed software. Its impact will be massive: greatly increased quality, removal of waste and duplication, and an inclusive approach to involving citizens in science. It’s straightforward to do in many areas of science, especially computational. I shall present an ONS model which we can all follow and adapt. The challenge is changing minds and to do that we should start young.


Mozilla Global Science Hack – A must-attend event for scientists who want programs

Wednesday, July 2nd, 2014

In 3 weeks from now we’ll have a massive global hack for science. Many scientists probably think software is something that other people do. “I’m not a programmer” is a frequent cry. But things are changing. Programming is increasingly about finding out what the problem is, and finding tools and people who can help solve it. If you can run a chromatograph, or a mass spectrometer or a PCR machine you can use and build programs.

The main thing is your frame of mind. If you can organize and run an experiment, you can organize data. If you can organize data you are effectively doing computing. I had the great opportunity to go to a Software Carpentry course last year and it changed my life. It showed me that I needed to understand how I think and how I work and that the rest comes relatively naturally. And it showed the value of friends.

You want a program to do X? Thinking of writing it? Chances are that much of it exists already. Much of what programs do is universal – sorting, matching, transforming, searching. And we have great toolkits – R, Python, Apache, and for visualisation D3, etc. So much of the solution is knowing what, and who, is out there.

So I’m off to Mozilla, in the heart of London. I went there for the first time a month ago – a great place that is human-friendly. Here’s the blurb – join us!

A multi-site sprint this July

(Also posted on the Software Carpentry blog.)

We’ll be holding our first-ever global sprint on July 22-23, 2014. This event will be modeled on Random Hacks of Kindness: people will work with friends and colleagues at sites around the globe, then hand off to participants west of them as their days end and others’ begin. We will set up video conferencing between the various locations and a show-and-tell at the end (and yes, there will be stickers and t-shirts).

We have booked space for the sprint at the Mozilla offices in Paris, London, Toronto, Vancouver, and San Francisco. If you aren’t in one of those cities, but are willing to help organize in your area, please add yourself to this Etherpad. We’ll hash out the what and how at the next Software Carpentry lab meeting—it’s a community event, so we’d like the community to choose what to sprint on—but please get the date in your calendar: it just wouldn’t be a party without you.

Visit of Richard Stallman (RMS) to Cambridge

Tuesday, July 1st, 2014

Richard Stallman (RMS) from MIT stayed with us for 2 days last week. Since RMS has a 9000-word rider on what he needs and doesn’t need when visiting, I hope I will help future hosts by adding some comments. TL;DR It’s hard work.


[RMS (St IGNUsias) selling PMR a GNU; (C) Murray-Rust, CC-BY]

I have a great regard for what RMS has done – Emacs, GNU, the 4 Freedoms. I heard him talk some years ago on Software Patents in Europe and it was great – he knew far more about the European system of government than I did; he had a clear political plan of action (who to write to, and when).  We’d corresponded but only met very briefly in a noisy room.

I posted on the dangers of publishers taking over our data, and he wrote and said he was coming to Cambridge (to talk at OWASP) and would like to talk. He mailed subsequently and said he was looking for somewhere to stay, so we offered him a bed. We’d read the rider – food requirements, temperature, music, dinner guests, etc. We were prepared for a somewhat eclectic visitor.

In retrospect we should have prepared for an Old Testament prophet or mediaeval itinerant monk. (The dressing up as St IGNUsias – above – is actually quite a close parallel and a valuable addition to the rider.) Be prepared to arrange/fund taxi rides, random food browsing, and a flexible timetable.  In fact RMS didn’t require an internet cable – he used our wireless.

But the strange thing was that we had nothing to say to each other. RMS no longer writes software and does not seem engaged in practical politics or action other than raising money for FSF through sale of swag. His message – at least for these two days – was “everyone is snooping on us” (PMR agrees and is equally concerned) and “We must only run Free software” (Free as in speech, epitomised by GPL). For me GPL has the virtue of forestalling SW patents but when I raised it he seemed to downplay it. If he has a current agenda it’s not clear to me. The “Open” word is verboten in discourse – I wished to explore whether there was any difference between Free Data and Open Data (a term I promoted 9 years ago) but we didn’t.  So there was neither a practical agenda nor a dialectic.

The visit probably had the same impact on the household as most itinerant Prophets have.

And the animals are very happy to have a new addition (Connochaetes gnou). If you believe in the GNU-slash-Linux binitarian theology, here it is:





Content Mining: we can now mine images (of phylogenetic trees and more)

Wednesday, June 25th, 2014

The reason I use “content mining” and not “Text and Data Mining” is that science consists of more than text – images, audio, video, code and much more. Text is the best known and the most immediately tractable, and many scientific disciplines have developed Natural Language Processing (NLP). In our group Lezan Hawizy, Peter Corbett, David Jessop, Daniel Lowe and others have developed ChemicalTagger, OSCAR, Patent Analysis, and OPSIN. So The Content Mine is exactly that – an org that mines content.

But words are often a poor way of representing science, and images are common. A general approach to processing all images is very hard and 2 years ago I thought it was effectively impossible. However with hard work some subsets can be tractable. Here we show you some of the possibilities in phylogenetic trees (evolutionary trees). What is described below is simple to follow and simple to carry out, but it took me some months of exploration to find the best strategy. And I owe a great debt to Noureddin Sadawi who introduced me to thinning – I haven’t used his code but his experience was invaluable.

But you don’t need to worry. Here’s a typical tree. It’s from PLoS ONE – “Adaptive Evolution of HIV at HLA Epitopes Is Associated with Ethnicity in Canada”.


The tree has been wrapped into a circle with the Root at the centre and the leaves/tips on the edge of the circle. To transcribe this manually would take hours – we show it being done in a second.

There isn’t always a standard way of doing things but for many diagrams we have to:

  • flatten (remove shades of gray)
  • separate colours (often by flattening them)
  • threshold (remove noise and background)
  • thin (remove all pixels except the 1-pixel-thick backbone)

and here is the thinned diagram:


You’ll see that the lines are all still there but exactly 1 pixel thick. (We’ve lost a few colours, but that’s irrelevant for this example). Now we are going to look at the tree (and ignore the labels):


This has been selected automatically on pixel count, but we can also use bounding boxes and many shape characteristics.

We now analyse the structure and break it into connected components – a topological tree – by standard traversal methods. We end up with nodes and edges – this is a snapshot of an SVG.
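A minimal sketch of how the nodes can be identified on the thinned bitmap (pure Python, where 1 marks a skeleton pixel – an illustration of the idea, not the actual code): on a 1-pixel-thick skeleton, a pixel with one 8-connected neighbour is a tip (leaf), two neighbours is an ordinary line pixel, and three or more is a branch node.

```python
def neighbours(img, y, x):
    """8-connected skeleton-pixel neighbours of (y, x) in a 0/1 grid."""
    h, w = len(img), len(img[0])
    return [(ny, nx)
            for ny in range(max(0, y - 1), min(h, y + 2))
            for nx in range(max(0, x - 1), min(w, x + 2))
            if (ny, nx) != (y, x) and img[ny][nx] == 1]

def classify_pixels(img):
    """Return (tips, branch nodes) of a thinned skeleton by neighbour count."""
    tips, branches = [], []
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v == 1:
                n = len(neighbours(img, y, x))
                if n == 1:
                    tips.append((y, x))
                elif n >= 3:
                    branches.append((y, x))
    return tips, branches

# A tiny Y-shaped skeleton: two leaves joining at a branch, then a stem.
skeleton = [
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
tips, branches = classify_pixels(skeleton)
print(tips)      # three leaf tips
print(branches)  # one branch node
```

The edges are then traced by walking from node to node along the 2-neighbour pixels.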


[The black lines are artifacts of Inkscape]. So we have identified every node and every edge. The next thing is to trace the edges – that’s easy if they are straight, but here they are curved. Ideally we plan to fit circles, but we’ll use segments for the time being:


The curves are actually straight-line segments, but… no matter.

It’s now a proper phylogenetic tree! And we can serialize it as Newick (or NexML if we wanted).
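Newick is just nested parentheses, so serialization is a short recursion. A sketch (pure Python, using a hypothetical (label, children) tuple representation of the extracted tree – not the actual codebase):

```python
def to_newick(node):
    """Serialize a (label, children) tuple tree to Newick notation."""
    label, children = node
    if not children:
        return label
    return "(" + ",".join(to_newick(c) for c in children) + ")" + label

# A hypothetical extracted tree: root with leaf A and an unlabelled
# internal node holding leaves B and C.
tree = ("root", [("A", []), ("", [("B", []), ("C", [])])])
print(to_newick(tree) + ";")  # (A,(B,C))root;
```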


And here is an interactive tree by posting that string into (try it yourself).


So – to summarize – we have taken a phylogenetic tree – one that may have taken hundreds of hours to compute – and extracted the key data in a second. (Smart people will ask “what about the text labels?” – be patient, that’s coming.)

That scales to over a million images per year on my single laptop! And the technology scales to many other disciplines and it’s completely Open Source (Apache2). So YOU can use it – as long as you give us the credit for writing it.




Is this a scam or a new low for Elsevier?

Friday, June 20th, 2014

I got the following mail today. I genuinely don’t know whether it’s a scam or an unacceptable spam from Elsevier:

Measurement <>


Dear Dr. Peter Murray-Rust,
You have received this system-generated message because you have been registered by an Editor for the Elsevier Editorial System (EES) – the online submission and peer review tracking system for Measurement.

Here is your username and confidential password, which you will need to access EES at
Your username is: REDACTED
Your password is: REDACTED

The first time you log into this new account, you will be guided through the process of creating a consolidated ‘parent’ profile to which you can link all your EES accounts.

If you have already created a consolidated profile, please use the username and password above to log into this site. You will then be guided through an easy process to add this new account to your existing consolidated profile.

Once you have logged in, you can always view or change your password and other personal information by selecting the “change details” option on the menu bar at the top of the page. Here you can also opt-out for marketing e-mails, in case you do not wish to receive news, promotions and special offers about our products and services.

1) Please ensure that your e-mail server allows receipt of e-mails from the domain ““, otherwise you may not receive vital e-mails.
2) We would strongly advise that you download the latest version of Acrobat Reader, which is available free at:
3) For first-time users of Elsevier Editorial System, detailed instructions and tutorials for Authors and for Reviewers are available at:

Kind regards,
Elsevier Editorial System

For further assistance, please visit our customer support site at Here you can search for solutions on a range of topics, find answers to frequently asked questions and learn more about EES via interactive tutorials. You will also find our 24/7 support contact details should you need any further assistance from one of our customer support representatives.

I went to the sites and although they had Elsevier logos they were of low quality and didn’t have the normal branding that is so beloved of Elsevier.
So I think it’s a scam with fake emails and URLs.
But if it isn’t, then it’s appalling. Taking me and my email into a company system and registering me without my permission is unacceptable. If it turns out to be Elsevier I shall write to David Willetts, MP.
And of course they are wasting their time as I have publicly committed to have nothing to do with helping Elsevier.

Content Mining hackday in Edinburgh; we solve Scraping

Friday, June 20th, 2014


[P Murray-Rust, CC0]
We had our hack day in Edinburgh yesterday on content mining.
First, massive thanks to:
  • Mark MacGillivray for organising the event in Informatics Forum
  • Informatic Forum for being organised
  • Claire and Ianthe from Edinburgh library for sparkle and massive contributions to content mining
  • PT (Sefton) for organising material for the publishing and forbearance when it got squeezed in the program
  • Richard Smith-Unna who took time off holiday to develop his quickscrape code.
  • CottageLabs in person and remotely
  • CameronNeylon and PLoS for Grub/Tucker etc.
  • and everyone who attended
Several participants tweeted that they enjoyed it
Claire Knowles @cgknowles Thanks to @ptsefton for inviting us and @petermurrayrust for a fun day hacking #dinosaur data with @kimshepherd @ianthe88 & @cottagelabs
So now it’s official – content mining is fun! You’ll remember we were going to
  • SCRAPE material from PLOS (and other Open) articles. And some of these are FUN! They’re about DINOSAURS!!
  • EXTRACT the information. Which papers talk about DINOSAURS? Do they have pictures?
  • REPUBLISH as a book. Make your OWN E-BOOK with Pictures of DINOSAURS with their FULL LATIN NAMES!!

About 15 people passed through, and Richard Smith-Unna and Ross Mounce were online. Like all hackdays it had its own dynamics and I was really excited by the end. We had lots of discussion, several small groups crystallised and we also covered molecular dynamics. We probably didn’t do full justice to PT’s republishing technology – that’s how it goes. But we came up with graphic art for DINOSAUR games!

We made huge progress on the overall architecture (see image) and particularly on SCRAPING. Ross had provided us with 15 sets of URLs from different publishers, all relating to Open DINOSAURS.


  • APP-dinosaur-DOIs.txt – APP CC-BY articles; there are more that are free access but I have on…
  • BioMedCentral-dinosaur-articlelinks.txt – BMC article links (NOT DOIs); filtered out ‘free’ but not CC BY articles
  • Dinosauria_valid_genera.csv – list of valid genera in Dinosauria downloaded from PaleoDB; it includ…
  • Elsevier-CCBY-dinosaur-DOIs.txt – 3 Elsevier CC BY articles
  • FrontiersIn-dinosaur-35articlelinks.txt – FrontiersIn
  • Hindawi-dinosaur-DOIs.txt – Pensoft & Hindawi
  • JournalofGeographyandGeology_DOI.txt
  • Koedoe-DOI.txt – PDF scan but CC BY, from 1986
  • MDPI-dinosaur-DOI.txt – MDPI, one article
  • RoyalSocietyOA-dinosaur-DOIs.txt – just one
  • SAJournalofScience-DOI.txt – 1 CC BY article on African dinosaurs
  • SATNT-DOI.txt – 1 CC-BY article in Afrikaans
  • Wiley-CCBY-dinosaurs.txt – one Evolution (Wiley) article
  • peerj-dinosaur-DOIs.txt – 8 PeerJ article DOIs
  • pensoft-dinosaur-DOIs.txt – Pensoft & Hindawi
  • plos-biology-dinosaurs-DOIs.txt – 20 PLOS Biology
  • plos-one-dinosaur-DOIs.txt
Hard work, and we hope to automate it through CRAWLING, but that’s another day. So could we scrape files from these? Remember they are all Open, so we don’t even have to invoke the mighty power of Hargreaves yet. However the technology is the same whether it’s Open or paywalled-and-readable-because-Cambridge-pays-lots-of-money.
We need a different scraper for each publisher (although sometimes a generic one works). Richard Smith-Unna has created the quickscrape platform. In this you have to create a *.json for each publisher (or even journal).
The first thing is to install quickscrape. Node.js, like Java, is WORA – write-once-run-anywhere (parodied as WODE – write-once-debug-everywhere). RSU has put a huge amount of effort into this so that most people installed it OK, but a few had problems. This isn’t RSU’s fault; it’s a feature of dependencies in any modern language – versions and platforms and libraries. Thanks to all yesterday’s hackers for being patient and to RSU for breaking his holiday to support them. (Note – we haven’t quite hacked Windows yet, but we will.) For non-hacker workshops – i.e. where we don’t expect so many technical computer experts – we have a generic approach to distributions.
Then you have to decide WHAT can be scraped. This varies from whole articles (e.g. HTML) to images (PNG) to snippets of text (e.g. licences). What really excited and delighted me was how quickly the group understood what to do and then went about it without any problems. The first task was to list all the scrapable material and we used a Google Spreadsheet for this. It’s not secret (quite the reverse) but I’m just checking permissions and other technicalities before we release the URL with world access.
You’ll see (just) that we have 15 publishers and about 20 attributes. Who did it? Which scraper? (Note with pleasure that RSU’s generic scraper was pretty good!) Did it work? If not, this means customising the scraper. 9.5/15 is wonderful at this stage.
The great thing is that we have built the development architecture. If I have the Journal of Irreproducible Dinosaurs then I can write a scraper. And if I can’t, it will get mailed out to the Content Mine community and they/we’ll solve it. So fairly shortly we’ll have a spreadsheet showing how we can scrape all the journals we want. In many instances (e.g. BioMedCentral) all the journals (ca 250) use the same technology so one-scraper-fits-all.
If YOU have a favourite journal and can hack a bit of Xpath/HTML then we’ll be telling you how you can tackle it and add to the spreadsheet. For the moment just leave a comment on this blog.
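To give a flavour of what such a scraper does, here is a toy one in pure Python (stdlib html.parser; the class="license" markup is a hypothetical example – real publisher pages differ, which is exactly why each publisher needs its own scraper definition):

```python
from html.parser import HTMLParser

class LicenceScraper(HTMLParser):
    """Collect href values of <a> tags whose class mentions 'license'."""
    def __init__(self):
        super().__init__()
        self.licences = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # attrs arrives as (name, value) pairs
        if tag == "a" and "license" in (a.get("class") or ""):
            self.licences.append(a.get("href"))

# Hypothetical publisher markup for illustration only.
page = ('<div><a class="license" '
        'href="https://creativecommons.org/licenses/by/4.0/">CC BY</a></div>')
scraper = LicenceScraper()
scraper.feed(page)
print(scraper.licences)
```

A real quickscrape definition does the equivalent declaratively, with a selector per item to extract.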