CopyCamp2017 4: What is (Responsible) ContentMining?

My non-profit organization has the goal of making contentmining universally available to everyone through three arms:

  • Advocacy. Explaining why mining is so valuable, persuading others, and pressing for restrictions to be removed.
  • Community. We need a large, vibrant public community of practice.
  • Tools. We need to be able to do this easily.

There is a lot of apathy and a considerable amount of push-back and obfuscation (mainly from mega-publishers), and it's important that we do things correctly. So four of us wrote a document on how to do it responsibly:

Responsible Content Mining
Maximilian Haeussler, Jennifer Molloy,
Peter Murray-Rust and Charles Oppenheim
The prospect of widespread content mining of the scholarly literature is emerging, driven by the promise of increased permissions due to copyright reform in countries such as the UK and the support of some publishers, particularly those that publish Open Access journals. In parallel, the growing software toolset for mining, and the availability of ontologies such as DBPedia mean that many scientists can start to mine the literature with relatively few technical barriers. We believe that content mining can be carried out in a responsible, legal manner causing no technical issues for any parties. In addition, ethical concerns including the need for formal accreditation and citation can be addressed, with the further possibility of machine-supported metrics. This chapter sets out some approaches to act as guidelines for those starting mining activities.

Content mining refers to automated searching, indexing and analysis of the digital scholarly literature by software. Typically this would involve searching for particular objects to extract, e.g. chemical structures, particular types of images, mathematical formulae, datasets or accession numbers for specific databases. At other times, the aim is to use natural language processing to understand the structure of an article and create semantic links to other content.
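To make this concrete, here is a minimal, hypothetical sketch of the pattern-based extraction described above. The regexes and names are illustrative only; real mining pipelines use curated dictionaries and format-aware parsers rather than bare regexes.

```python
import re

# Hypothetical patterns for two kinds of extractable "facts":
# GenBank-style accession numbers and DOIs.
PATTERNS = {
    "accession": re.compile(r"\b[A-Z]{1,2}\d{5,6}\b"),
    "doi": re.compile(r"\b10\.\d{4,9}/\S+\b"),
}

def extract_facts(text):
    """Return every pattern match found in a plain-text article."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

article = "The sequence (accession AF086833) is described in doi 10.1000/xyz123."
print(extract_facts(article))
# → {'accession': ['AF086833'], 'doi': ['10.1000/xyz123']}
```

In practice the same loop would run over thousands of downloaded full-text articles, with each match carrying provenance back to its source paper.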

We also gave a typical workflow (which will be useful when we discuss copyright).


Of course there are variants, particularly where we start with bulk downloading and then search. For example, we are now downloading all Open content, processing it and indexing it against Wikidata. There is little point in everybody doing the same thing and, because the result is Open, everyone can share the results of processing.

We'll use this diagram in later posts.


Posted in Uncategorized | Leave a comment

CopyCamp2017 3: The Hague Declaration and why ContentMining is important

In 2015 LIBER (the European body for Research Libraries) collected a number of leading figures in the Library and Scholarship world to create the Hague Declaration on freedom for Text and Data Mining. This stated not only the aspirations but also the reasons for demanding freedom, and I reproduce chunks of it here for CopyCamp2017 to consider.

The Hague Declaration aims to foster agreement about how to best enable access to facts, data and ideas for knowledge discovery in the Digital Age. By removing barriers to accessing and analysing the wealth of data produced by society, we can find answers to great challenges such as climate change, depleting natural resources and globalisation.

PMR: note that this is about why it's so important - the answers to the health of the planet and the beings on it may be hidden in the scientific literature, and Mining can pull them out.



New technologies are revolutionising the way humans can learn about the world and about themselves. These technologies are not only a means of dealing with Big Data, they are also a key to knowledge discovery in the digital age; and their power is predicated on the increasing availability of data itself. Factors such as increasing computing power, the growth of the web, and governmental commitment to open access to publicly-funded research are serving to increase the availability of facts, data and ideas.

However, current legislative frameworks in different legal jurisdictions may not be cast in a way which supports the introduction of new approaches to undertaking research, in particular content mining. Content mining is the process of deriving information from machine-readable material. It works by copying large quantities of material, extracting the data, and recombining it to identify patterns and trends.

At the same time, intellectual property laws from a time well before the advent of the web limit the power of digital content analysis techniques such as text and data mining (for text and data) or content mining (for computer analysis of content in all formats). These factors are also creating inequalities in access to knowledge discovery in the digital age. The legislation in question might be copyright law, law governing patents or database laws – all of which may restrict the ability of the user to perform detailed content analysis.

Researchers should have the freedom to analyse and pursue intellectual curiosity without fear of monitoring or repercussions. These freedoms must not be eroded in the digital environment. Likewise, ethics around the use of data and content mining continue to evolve in response to changing technology.

Computer analysis of content in all formats, that is content mining, enables access to undiscovered public knowledge and provides important insights across every aspect of our economic, social and cultural life. Content mining will also have a profound impact for understanding society and societal movements (for example, predicting political uprisings, analysing demographical changes). Use of such techniques has the potential to revolutionise the way research is performed – both academic and commercial.

PMR: This shows clearly the potential of ContentMining and the friction that the current legal system (mainly copyright) places on it, by default.
And a non-exhaustive list of benefits:


The potential benefits of content mining are vast and include:

  • Addressing grand challenges such as climate change and global epidemics

  • Improving population health, wealth and development

  • Creating new jobs and employment

  • Exponentially increasing the speed and progress of science through new insights and greater efficiency of research

  • Increasing transparency of governments and their actions

  • Fostering innovation and collaboration and boosting the impact of open science

  • Creating tools for education and research

  • Providing new and richer cultural insights

  • Speeding economic and social development in all parts of the globe

So what should be done? I'll leave that to the next post.



CopyCamp 2: workshop on ContentMining - what is it and how to do it

In the last post I explained why I became interested in contentmining to do scientific research and started to explain how it is still a major political and legal challenge. I am excited that I have been asked to run a workshop at CopyCamp, and here is the information I am giving to participants. (You may also find my slides useful.)

Workshops on TDM/contentmining cover many areas and the precise format of this one will depend on the participants. On the program notes I suggested:

  • hackers who can use tools such as R and Python to do exciting things
  • scientists (including citizens) who want to explore questions in bioscience
  • librarians who want to explore C21st ways of creating knowledge
  • open activists who want to change policy both by political means and using tools
  • young people: we have had wonderful contributions from a 15-year-old

So if everyone wants to talk about European and UK copyright politics, that's fine. But we also have tools and tutorials showing how mining is done, and we suggest people get some hands-on experience. It's probably going to be a good idea to work in small groups where there are complementary skills:

Dear workshop participant:
I am delighted that you have signed up to my workshop on Friday 29th at CopyCamp.
Wikidata, ContentMine and the automatic liberation of factual data: (The Right to Read is the Right To Mine). The workshop will explore how Open Source tools can extract factual information from the Open Access scientific literature (specialising in BioMedicine). We will introduce Wikidata, a rapidly growing collection of 30 million high-quality data and metadata items, and use it to index scientific articles. Participants will query the literature at EuropePMC using "getpapers" and retrieve hundreds or thousands of full-text articles [snip...]
We will adapt the workshop to the skills and wishes of participants when we assemble, though please contact me earlier if there are things you would like to do. Topics can be chosen from:
* online demo of mining
* installation of full ContentMine software stack, and use of public repositories (EuropePubMedCentral, arXiv)
* introduction to WikiFactMine for extracting facts from open access publications.
* political and legal aspects of contentmining (with a European and UK slant)
If any participants are connected with (Polish) Wikipedia that could be valuable and exciting. (By default we shall use English Wikipedia). Note that Wikidata carries a large number of links to other language Wikipedias and this may be a valuable resource to explore.
If you want to run the full ContentMine stack it's a good idea to install beforehand, so here are the instructions for *adventurous* members of the workshop:

This is a VM and should be independent of the operating system of the host machine. It has been tested in several installations but there may be problems with non-US/UK keyboards and encodings. By default the tutorial is in English (all the resources, EuropePMC, dictionaries are also in English and generally use only ASCII 32-127).

Of course anyone anywhere can also try out the tutorials.
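"getpapers" itself is a Node.js tool; as a rough illustration of what it does under the hood, here is a sketch that composes a query URL for the Europe PMC REST search service. Only the URL is built (no network request is made), and the open-access filter syntax is an assumption about the query language rather than a tested getpapers invocation.

```python
from urllib.parse import urlencode

# Europe PMC's RESTful search endpoint, which getpapers queries.
EUROPEPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def build_search_url(query, page_size=25):
    """Compose a Europe PMC search URL that returns JSON results."""
    params = {"query": query, "format": "json", "pageSize": page_size}
    return EUROPEPMC_SEARCH + "?" + urlencode(params)

# Restrict to open-access full text, as the workshop does:
url = build_search_url("zika AND OPEN_ACCESS:y")
print(url)
```

From the JSON response a tool like getpapers would then page through the hit list and download each full-text article for local mining.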

CopyCamp: why Copyright reform has failed TDM / ContentMining - 1 The vision and the tragedy

I am honoured to have been invited to speak at CopyCamp2017,  "The Internet of Copyrighted Things" .  I've not been to CopyCamp before, but I've been to similar events and I'm delighted to see it is sponsored by organisations, some of which I belong to, that are fighting for digital freedom. In these posts I'll show why copyright has failed science; this post shows why knowledge is valuable and must be free.

I'm giving a workshop on Thursday and talking on Friday (after scares from Ryanair) and I'm blogging (as I often do) to clear my thoughts and help add to the static slides. This is the latest in a 40-year journey of hope, which is increasingly destroyed by copyright maximalism. I am being turned from an innovative scientist who had a dream of building something excitingly new into an angry activist who is fighting for everyone's rights. I can accept when science doesn't work, because it often just doesn't; I get angry when mega-capitalists use science as a way to generate money and in the process destroy something potentially wonderful.

Here's the story. 45 years ago I had my first scientific insight - working with Jack Dunitz in Zurich - that by collecting many seemingly unrelated observations (in this case crystal structures) I could find new science by looking at the patterns between them ("reaction pathways"). This is knowledge-driven research, where a scientist takes the results of others and interprets them in different ways. It's as old as science itself, exemplified in chemistry by Mendeleev's collection of the properties of compounds and analysis in the Periodic Table of the Elements. Mendeleev didn't measure all those properties - many will have been reported in the scientific literature - his genius was to make sense out of seemingly unrelated properties.

40 years ago chemists started to use computers to carry out simple chemical artificial intelligence - analysis of spectra and chemical synthesis. I was entranced by the prospect, but realised it relied on large amounts of knowledge to take it further. I was transformed by TimBL's vision of the Semantic Web - where knowledge could be computed. I moved to Cambridge in 1999 with the long-term aim to create "chemical AI".  I created a dream - the WorldWide Molecular Matrix - where knowledge would be constantly captured, formalized and logic or knowledge engines would extract, or even create, new chemical insights.

To do this we'd need automatic extraction of information using machines - thousands of articles or even more. In 2005-2010 I was funded (with others) by EPSRC and JISC to develop tools to extract chemical knowledge from the scientific literature. It's hard and horrible because scientific papers are not authored to be read by machines. I have spent years writing code to do this and now have a toolset which can read tens of thousands of papers a day (or more if we pay for clouds) and extract high quality chemistry. This chemistry is novel because it's too expensive and boring to extract by hand, and it would be an important addition to what we have. As an example, Nick Day in my group built CrystalEye, which extracted 250,000 crystal structures, improved them and published them under an Open Licence; we've now joined forces with the wonderful Crystallography Open Database. Later Peter Corbett, Daniel Lowe, and Lezan Hawizy built novel, Open, software for extracting chemistry from the text of papers.

So now I have everything I want - thousands of scientific articles every day, maybe 10-15% containing some chemistry, and a set of Open tools that anyone can use and improve. I'm ready to try the impossible dream - of building a chemical AI...

What will it find?

NOTHING. Because if I or anyone else use it without the PUBLISHER's permission, the University will be immediately cut off by the publisher because ...

... because it might upset their market. Or their perceived dominance over researchers. This isn't a scare or over-reaction - there are enough stories of scientists in many disciplines being cut off arbitrarily to show it's standard practice. One day two years ago the American Chemical Society's automatic triggers cut off 200 universities. Publishers send bullying emails saying "you have been illegally downloading content" (totally untrue), or "stealing" (also untrue).

This is now so common that many researchers and even more librarians are scared of publishers. This blog has outlined much of this in the past and it's not getting better. My dream has been destroyed by avarice, fear and conservatism. I'll outline the symptoms, what needs to be done, and urge citizens to own this problem and assert that they have a fundamental right to open scientific knowledge.

My slides at CopyCamp provide additional material.


WLIC/IFLA2017: UBER for scholarly communications and libraries? It’s already here…


You all know of the digital revolution that is changing the world of service - Amazon, UBER, AirBnB, coupled to Facebook, Google, Siri, etc. The common feature is a large corporation (usually from Silicon Valley) which builds a digital infrastructure that controls and feeds off service providers. UBER doesn’t own taxis, and takes no responsibility for their actions. AirBnB doesn’t own hotels, Amazon doesn’t have shopfronts. But they act as the central point for searches, and they design and control the infrastructure. Could it happen for scholcom / libraries? TL;DR it’s already happened.

You may love UBER, may accept it as part of change, or rebel against it.  If you want to save money or save time it’s probably great. If you don’t care whether the drivers are insured or maintain their vehicles, fine. If you don’t care about regulation, and think that a neoliberal market will determine best practices, I can’t convince you.

But if you are a conventional service provider (hotels, taxis) you probably resent the newcomers. If you are blind, or have reduced mobility,  and are used to service provision by taxis you’ll probably be sidelined. UBER and the rest provide what is most cost-effective for them, not what the community needs.

So could it happen in scholarly communications and academic libraries? Where the merit of works is determined by communities of practice? Where all the material is created by academics, and reviewed by academics? Isn’t the dissemination overseen by the Universities and their libraries? And isn’t there public oversight of the practices?


It’s overseen and tightly controlled by commercial companies who have no public governance, who make the rules, and who can break the rules and get away with it. While the non-profit organizations are nominally academic societies, in practice many are controlled by managers whose primary requirement is often to generate income as much as to spread knowledge. The worth of scientists is determined not by community acclaim or considered debate but by algorithms run by the mega-companies. Journals are, for the most part, created and managed by corporations. Society journals exist, and new journals are created, but many increasingly end up being commercialised. What role does the Library have?

Very little.

It nominally carries out the purchase - but has little freedom in a market which is designed for the transfer of money, not knowledge. In the digital era, libraries should be massively innovating new types of knowledge, not simply acting as agents for commercial publishers.

So now Libraries have a chance to change. Where they can take part in the creation of new knowledge. To help researchers. To defend freedom.

It’s probably the last great opportunity for libraries:

Content-mining (aka Text and Data Mining, TDM).

This is a tailor-made opportunity for Libraries to show what they can contribute. TDM has been made legal and encouraged in the UK for 3 years. Yet no UK Library has made a significant investment, no UK Vice Chancellor has spoken positively of the possibilities, no researchers have been encouraged. [1]

And many have been discouraged - formally - including me.

Mining is as revolutionary as the printing press. Libraries should be welcoming it rather than neglecting or even obstructing it. If they don’t embrace it, then the science library will go the way of the corner shop, the family taxi, the pub. These are becoming flattened by US mega-corporations. Products are designed and disseminated by cash-fed algorithms.

The same is happening with libraries.

There is still time to act. Perhaps 6 months. Universities spend USD 20 billion per year on scholarly publishing - almost all of it goes to mega-corporations. If they spent as little as 1% of that (USD 200 million) on changing the world it would be transformative. And if they did this by supporting Early Career Researchers (of all ages) it could change the world.

If you are interested, read the next blog post. Tomorrow.

[1] The University of Cambridge Office of Scholarly Communication ran the first UK University meeting on TDM last month.



ContentMine at IFLA2017: The future of Libraries and Scholarly Communications



I am delighted to have been invited to talk at IFLA, the global overarching body for Libraries of all sorts. I’m in Session 232:
Congress Programme, IASE Conference Room 24.08.2017, 10:45 – 12:45

Session 232 Being Open About Open - Academic & Research Libraries, FAIFE and Copyright and Other Legal Matters


What’s FAIFE? It’s

The overall objective of IFLA/FAIFE is to raise awareness of the essential correlation between the library concept and the values of intellectual freedom  ...
Monitor the state of intellectual freedom within the library community
Respond to violations of free access to information and freedom of expression

I share these views. But freedom of access and freedom of expression are under threat in the digital world. Mega-corporations control content and services and are actively trying to claw more control, for example by controlling the right to post hyperlinks to scholarly articles - even open access ones (the “Link Tax”).

I have spent 3-4 years on the edge of the political arena and I’ve seen how hard companies fight to remove our rights and take control for themselves.


And we need your help.

If you are a librarian, then you can only protect access to knowledge by actively fighting for it.

That means you. Not waiting for someone to create a product that you can buy.


By actively creating the scholarly infrastructure of the future and embedding rights for everyone.

Now, for the first and possibly the last time, we have an opportunity for libraries to make their own contribution to freedom.


I’ve set up the non-profit organization which promotes three areas for fighting for freedom:


  • Community. The community deserves better from academia, and the community is willing to help, if given the chance. The biggest communal knowledge creation is in Wikimedia and we are working with them to make high-quality knowledge universally created and universally available.



We now have tools which can create the next generation of scholarly knowledge - for everyone.


But YOU can and must help.


IFLA has very generously given us workshop time for a demonstration and discussion of Text and Data Mining (TDM).


Imperial Hall 23.08.2017, 11:45 – 13:30

Session 199 Text and Data Mining (TDM) Workshop for Data Discovery and Analytics - Big Data Special Interest Group (SIG)


We’ll be giving simple hands-on web demonstrations of Mining, interspersed with the chance to discuss policy and investment in tools, practices and people. Especially young people. No prior knowledge required.

This is (hopefully) the first of several blogs.



What is TextAndData/ContentMining?


I prefer “ContentMining” to the formal legal phrase “Text and Data Mining” because it emphasizes all kinds of content - audio, photos, videos, diagrams, chemistry, etc. I chose it to assert that non-textual content - such as diagrams - can be factual and therefore uncopyrightable. And because it adds a huge, exciting extra dimension.


Mining is the process of finding useful information where the producer hadn’t created it for that specific purpose. For example, the log books of the British navy - which recorded data on weather - are now being used to study climate change (certainly not what the British Admiralty had in mind). Records of an eclipse in ancient China have been used to study the rotation of the earth. Forty years ago I studied hundreds of papers of individual crystal structures to determine reaction pathways - again completely unexpected by the original authors.


In science, mining is a way to dramatically increase our human knowledge simply by running software over existing publications. Initially I had to type this in by hand (the papers really were paper) and then I developed ways of using electronic information. About 15 years ago I developed tools which could trawl over the whole of the crystallographic literature and extract the structures, and we built this into CrystalEye - where the software added much more information than in the original paper. (We have now merged this with the Crystallography Open Database.) My vision was to do this for all chemical information - structures, melting points, molecular mass, etc. Ambitious, but technically not impossible. We had useful funding and collaboration with the Royal Society of Chemistry and developed OSCAR as software specifically to extract chemistry from text. Ten years ago things looked exciting - everyone seemed to accept that having access to electronic publications meant that you could extract facts by machine. It stood to reason that machines were simply a better, more accurate, faster way of extracting facts than pencil and retyping.


So what new science can we find by mining?

  • More comprehensive coverage. In 1974 I read and analyzed 100-200 papers in 6 months. In 2017 my software can read 10000 papers in less than a day.
  • More comprehensive within a paper. Very often I would limit the information because I didn’t have time (e.g. the anisotropic displacements of atoms). Now it’s trivial to include everything.
  • Aggregation and intra-domain analytics. By analysing thousands of papers you can extract trends and patterns that you couldn’t do before. In 1980 I wanted to ask “How valid is the 18-electron rule?” - there wasn’t enough data/time. Now I could answer this within minutes.
  • Aggregation and inter-domain analytics. I think this is where the real win is for most people. “What pesticides are used in what countries where Zika virus is endemic and mosquito control is common?”. You cannot get an answer from a traditional search engine - but if we search the full-text literature for pesticide+country+disease+species we can rapidly find those papers with the raw information and then extract and analyze it. “Which antibodies to viruses have been discovered in Liberia?”. An easy question for our software to answer, except it was behind a paywall - no-one saw it and the Ebola outbreak was unexpected.
  • Added information. If I find “Chikungunya” in an article, the first thing I do is link it electronically to Wikidata/Wikipedia. This tells me immediately the whole information hinterland of every concept I encounter. It’s also computable - if I find a terpene chemical I can compute the molecular properties on-the-fly. I can, for example, predict the boiling point and hence the volatility without this being mentioned in the article. The literature is a knowledge symbiont.
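The "link it electronically to Wikidata" step in the last bullet can be sketched as URL construction against the public MediaWiki API. Only the URL is built here (no request is sent); `wbsearchentities` is the standard entity-search action, and the helper name is my own.

```python
from urllib.parse import urlencode

# Wikidata's public API endpoint for entity search.
WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def wikidata_lookup_url(term):
    """Build a Wikidata entity-search URL for a term mined from an article."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": "en",
        "format": "json",
    }
    return WIKIDATA_API + "?" + urlencode(params)

print(wikidata_lookup_url("Chikungunya"))
```

The JSON response lists candidate items with their identifiers and descriptions, from which a pipeline can pick the sense it means and attach the item's "information hinterland" to the mined term.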


Everyone is already using the results of Text Mining. Google and other search engines have sophisticated language analysis tools that find all sources with (say) “Chikungunya”. What I want to excite you about is the chance to go much further.
Why do we need other search engines when we have “Google”?


  • Google shows you what it wants you to see. (The same is true for Elsevinger). You do not know how these were selected, it’s not reproducible, and you have no control. (Also, if you care, Google and Elsevinger monitor everything you do and either monetize it or sell it back to your Vice-Chancellor).
  • Google does not allow you to collect all the papers that fit a given search. They give links - but try to scrape all these links and you will be cut off. By contrast Rik Smith-Unna, working with ContentMine (CM), developed “getpapers” - which is exactly what the research scientist needs: an organized collection of the papers resulting from a search. ContentMine tools such as “AMI” allow detailed analysis of the contents of the papers.
  • Google can’t be searched by numeric values. Try asking for papers with patients in the age range 12-18 and it’s impossible (you might be lucky that this precise string is used but generally you get nothing). In contrast CM tools can search for numbers, search within graphs, search species and much more. “Find all diterpene volatiles from conifers over 10 metres high at sea level in tropical latitudes” is a straightforward concept for CM software.
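A numeric search of the kind described in the last bullet can be approximated, very roughly, with a pattern over downloaded full text. The regex and helper below are illustrative only, not ContentMine's actual implementation.

```python
import re

# Hypothetical pattern for reported age ranges such as "aged 13 to 17"
# or "aged 25-40" in the running text of a paper.
AGE_RANGE = re.compile(r"age[sd]?\s+(\d{1,3})\s*(?:-|to)\s*(\d{1,3})", re.I)

def mentions_age_range(text, lo=12, hi=18):
    """True if any reported age range falls entirely inside [lo, hi]."""
    for m in AGE_RANGE.finditer(text):
        a, b = int(m.group(1)), int(m.group(2))
        if lo <= a and b <= hi:
            return True
    return False

print(mentions_age_range("Patients aged 13 to 17 were enrolled."))  # True
print(mentions_age_range("Adults aged 25-40 took part."))           # False
```

The point is that the filter compares numbers, not strings: a keyword engine can only match the literal text "12-18", while a mining tool can accept any range inside the bounds.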


That’s a brief introduction - and I’ll show real demos tomorrow.



Text and Data Mining: Overview

Text and Data Mining: Overview

Tomorrow The University of Cambridge Office of Scholarly Communication is running a 1-day Symposium on Text and Data Mining. I have been asked to present a project funded by the Shuttleworth Foundation through a personal Fellowship, which has evolved into a not-for-profit company.

I hope to write several blog posts before tomorrow, and maybe some afterwards. I have been involved in mining science from the semi-structured literature for about 40 years and shall give a scientific slant. As I have 20-25 minutes, I am recording thoughts here so people can have time to explore the more complex aspects.

Machines are now part of our information present and future, but many sectors, including academia, have not embraced this. Whereas supermarkets, insurance and social media have all modernised, scholarly communication still works with “papers”. These papers contain literally billions of dollars of unrealised value, but very few people care about this. As a result we are not getting the full value of technical and medical funding, much of which is wasted through the archaic physical format and outdated attitudes.

These blog posts will cover the following questions - how many depends on how the story develops. They include:

  • What mining could be used for and why it could revolutionise science and scholarship
  • Why TDM in the UK and Europe (and probably globally) has been a total political and organizational failure.
  • What directions are we going in? (TL;DR you won’t enjoy them unless you are a monopolistic C21st exploiter, in which case you’ll rejoice.)
  • What I personally am doing to fight the cloud of digital enclosure.

There are 3 arms to ContentMine activities:

  • Advocacy/political. Trying to change the way we work top-down, through legal reforms, funding, etc. (TL;DR it’s not looking bright)
  • Tools. ContentMining needs a new generation of Open tools and we are developing these. The vision is to create intelligent scientific information rather than e-paper (PDF). Much of this is recently enhanced by the development of
  • Community. The great hope is the creative activity of young people (and people young at heart). Young people are sick of the tired publisher-academic complex which epitomises everything old, with meretricious values.

This sounds very idealistic - and perhaps it is. But the Academic-Publisher complex is all-pervasive - it kills young people’s hopes and actions. Our values are managed by algorithms that Elsevinger sells to Vice-chancellors to manage “their” research. The AP complex has destroyed the potential of TDM in the UK and elsewhere and so we must look to alternative approaches.

For me there is a personal sadness. 15 years ago I could mine the literature and no-one cared. I had visions of building the Open shared scientific information of the future. I called it the WorldWide Molecular Matrix, after Gibson's vision of the matrix in cyberspace. It draws on the vision of TimBL and the semantic web, and the idea of global free information. It was technically ahead of its time by perhaps 15 years, but now - with Wikidata, and modern version control (Git) - we can actually build this.

So my vision is to mine the whole of the scientific literature and create a free scientific resource for the whole world.

It’s technically possible and we have developed the means to do it. And we've started. And we will show you how, and how you can help.

But we can only do it on a small part of the literature because the Academic-Publisher complex has forbidden it on the rest.




How Wikidata can change the world of scientific information 1/n


We're getting involved in Wikidata! It will change the world of scientific (and other) information. So here is an emerging conversation, hopefully over several blog posts.

Wicki: Hang on! What's Wikidata? And Wikimedia? I've heard of Wikipedia, but...

Dater: Wikipedia is a free encyclopedia. It doesn't do everything. It's one of about 12 projects under the aegis of the Wikimedia Foundation. It's the one everyone has heard of, but there are lots of others which are also about making structured information and knowledge available for free and freely reusable by everyone. For example Wikimedia Commons is a huge resource of free images, videos, etc. Many of them are linked from Wikipedia articles but there are lots more which can be re-used in all sorts of ways. Teaching, research, new media ...

Wicki: OK, so Wikidata is the same thing for data? ...

Dater: Yes, but it's not "all the world's free data". It's carefully described data, carefully selected, and with clear provenance. When you find some Wikidata you know:

  • what it is
  • where it came from
  • how it can be used
  • what other data it is related to

W: So give me an example. If I want to find out where Zika is endemic, can I find it in Wikidata?

D: Yes. Good example. Actually "Zika" represents quite a lot of different things. It represents a virus...

W: Yes, but surely that's it?

D: No, it also represents the fever caused by the virus. They aren't the same ...

W: OK, I can see that. OK there would have to be two entries...

D: No there's more. Do you know where Zika virus was first discovered ?

W: In Africa? But no idea where...

D: In the Zika forest - in Uganda. The virus was named after the forest. So it's got a separate identifier. Lots of diseases are named after the place where they were first identified.

And then there are people called "Zika".

W: But they wouldn't cause any confusion?

D: Yes, some of them are authors of scientific papers. Which have nothing to do with Zika virus, Zika forest, Zika fever...

W: H'mm. So if I search for "Zika" in G**gle, I'll get all of these?

D: G**gle will guess what you want, and add in what it and its sponsors want you to see. So I didn't find any authors in the first 4 pages. It's powerful, but it's not objective, and it's not reproducible. If you search tomorrow you'll get different results.

W: And Wikidata is more objective?

D: Yes. Wikidata has different entries (items) for each of the categories above. The virus, the fever, the forest and the authors all have different identifiers.

W: identifiers?

D: Yes. Good information systems have unique identifiers for each piece of information. Your passport number is unique - that's what the machines read at airports. So here are some of the identifiers we found when we searched Wikidata for "Zika":

[screenshot: Wikidata search results for "Zika", 2016-11-08]

Have a look at the Zika virus item - it's got masses of information about Zika virus.

Oh, and here's a botanist, Peter Francis Zika,  whose Wikidata identifier is Q21613657.

W: Help - that's too much at once

D: understood

W: H'm. So does everything in the scientific world have an identifier in Wikidata?

D: no - there's far too much. Even G**gle won't get everything. But everything with a Wikipedia article will (or should) have a Wikidata item.
And lots of things are in Wikidata that don't have articles.
The Wikidata community has imported lots of information directly from authoritative sources.

W: OK, so I can assume that every *important* scientific fact is in Wikidata?

D: That depends on what counts as "important". But there are already huge amounts of bioscientific information: drugs, diseases ...

W: Hm, my brain is really starting to overheat. Let's take a break and come back. Maybe with some more examples??

D: certainly with some more examples. I'll show you how items can be linked together by properties...

W: OK. We've not even talked about how it will change science. You may have to reteach me some of this when we next meet...

D: Just remember "Wikidata". Be seeing you!
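To make the dialogue concrete, here is a minimal sketch (Python 3, standard library only) of how a program might ask Wikidata the same "Zika" question through its public wbsearchentities API. The endpoint and parameters are real; the identifiers in the canned sample below are illustrative placeholders, not actual look-ups.

```python
import urllib.parse

API = "https://www.wikidata.org/w/api.php"

def search_url(term, language="en"):
    """Build a wbsearchentities query URL for a search term."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
    }
    return API + "?" + urllib.parse.urlencode(params)

def parse_results(payload):
    """Extract (identifier, label, description) triples from an API response."""
    return [
        (hit["id"], hit.get("label", ""), hit.get("description", ""))
        for hit in payload.get("search", [])
    ]

# A canned response in the shape the API returns (identifiers here are
# illustrative, not real look-ups):
sample = {
    "search": [
        {"id": "Q12345", "label": "Zika virus", "description": "species of virus"},
        {"id": "Q67890", "label": "Zika fever", "description": "infectious disease"},
    ]
}
for qid, label, description in parse_results(sample):
    print(qid, label, "-", description)
```

Fetching `search_url("Zika")` over the network and feeding the JSON response to `parse_results` gives the distinct items for the virus, the fever, the forest and so on, each with its own Q-identifier.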

Posted in Uncategorized | Leave a comment

The critical role of e-Theses: award acceptance speech at NDLTD

I am honoured by this award. I'll describe the current struggle for ownership of digital scholarly knowledge, emphasize young people and machine-understandable theses, and suggest some practices.


Early Career Researchers see the digital literature – including theses – as a primary research resource. We've set up ContentMine – a non-profit supporting machine reading and analysis of scholarship. There are 10,000–20,000 journal articles published every day – and several hundred theses – so machines are essential. Today we're announcing 6 ContentMine fellows – all of whom have exciting projects to create new bioscience from the scholarly literature.


But this brave new world is often opposed by the Publisher-Academic complex. Academia feeds knowledge and public money into companies who in return define the scholarly infrastructure and the rules by which Academia has to play.

The key issue is who controls scholarship? Universities? Students? Researchers? Or corporations only answerable to their shareholders? How many universities have been arbitrarily cut off by publishers with the accusation that “their” content is being stolen? Knowledge that should be available to the whole world is being controlled and monitored. Increasingly, universities are acquiescent and even required by publishers to police “compliance”.


Last month one of our fellows – a graduate student in the Netherlands – was legally mining the literature to detect malpractice, such as unjustifiable statistical procedures. After 30,000 downloads a publisher cut off his university and – without discussion – wrote denouncing him for "stealing" content. They required that his research be stopped. The university complied. Then another publisher. And a third. Last month Cambridge was cut off for 3 weeks by one publisher. No explanation. No dialogue.


Europe is trying to reform copyright to support research. I am working with them, but there’s massive lobbying by publishers. They want to control and monitor everything. Textual content, repositories for data, metadata, metrics for academic glory.


Machine-understandable e-theses represent one of the remaining areas not controlled by publishers. They are a new opportunity for universities and a knowledge resource for everyone – citizens as well as academics. They report billions of dollars of research, and are often the only place where it’s published. To maximize the spread of knowledge – which young people are passionate about – some suggestions.

  • Be proud of theses.
  • Think of “use” rather than “deposit”
  • Make theses globally discoverable.
  • Involve citizens everywhere. Think of the Global South.
  • Don’t repeat the mistakes of the “West”. Do it differently.
  • Release immediately.
  • Use DOCX, TeX, CSV, SVG and XHTML alongside PDF.
  • Use versioned text and data (Git, Dat, …).
  • Use openly controlled international repositories.
  • Use permissive licences allowing mining and re-use.
  • Do not hand over rights for content, discovery or access.
  • Don't buy systems - encourage young people to build them.
  • Experiment with Open Notebook Science.
  • Encourage and use e-theses as a primary tool for research.
  • Use Wikipedia / Wikidata as the default metadata for scholarship.
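As a minimal sketch of the "versioned text and data" suggestion above (all file names here are hypothetical), putting thesis sources under Git might look like:

```shell
set -e
# create a repository for the thesis sources (hypothetical file names)
mkdir -p my-thesis
git -C my-thesis init -q
printf '\\documentclass{article}\n' > my-thesis/thesis.tex
printf 'sample,value\n1,2\n' > my-thesis/results.csv
git -C my-thesis add thesis.tex results.csv
# the -c identity flags keep the example self-contained on a fresh machine
git -C my-thesis -c user.name="Student" -c user.email="student@example.org" \
    commit -q -m "First public draft of thesis sources"
git -C my-thesis log --oneline
```

Every draft then becomes a recoverable, citable state of the text and data, which is exactly what machine-readable formats like TeX and CSV make possible and PDF alone does not.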


And a warning: Unless libraries take this type of opportunity now they will be increasingly replaced by commercial services and disappear. E-theses and young people are your chance.


Posted in Uncategorized | Leave a comment