European copyright: Cancel Articles 3, 11 and 13

The proposed reforms to European copyright will be disastrous. If you don't know about them, the links below are the seminal introductions. When you understand, write to your MEPs as well. You are welcome to refer to this letter and copy some or all of it.

See Glyn Moody’s comprehensive and accurate arguments against the articles:
Thanks to the wonderful which makes this easy and professional


Thursday 7 June 2018

Dear David Campbell Bannerman, Stuart Agnew, Patrick O’Flynn, Tim Aker, John Flack, Alex Mayer and Geoffrey Van Orden,
Articles 3, 11 and 13 of the Proposed Copyright Directive
I write as a scientist, Reader Emeritus of the University of Cambridge and also as founder of a Cambridge non-profit which employs high-tech staff in Cambridge.
I am desperately concerned about the proposed copyright reforms on which you will soon vote. The issues are concisely summarised by MEP Julia Reda. (I have corresponded with Julia for 5 years and she understands all the issues, both technical and political. She has installed and run our software!)
In summary, the proposals are a mess and unworkable. They bring confusion rather than clarity, and by default hand total power to "copyright owners". If they are passed they will destroy knowledge-based innovation in Europe, which will pass to Silicon Valley, SE Asia or the Middle East. Knowledge innovators and companies in Europe are already "chilled" by copyright law and fearful of legal action. By default they will move to countries with more permissive laws, or simply close.
I am one of the most prominent TDM experts in Europe who both develops software and applications and publicly campaigns for European copyright reform. My goal, and ContentMine's, is to develop machine technology which reads the whole scientific/medical literature and extracts validated factual knowledge, daily, for the benefit of us all (environment, health, bioscience, etc.). We were partners in the H2020 project "FutureTDM", which analysed the problems that European TDM faces.
Europe spends 100 billion euro on STEM research but much of it (perhaps 80%) is underutilised because we need machines to help us. We work with clients who have to review 50,000 medical articles – taking months – to advise medicines agencies about treatments and drugs. We build machine-assisted knowledge tools that speed this process 10-fold or more. Article 3 will kill this.
And if we use machines we are likely to fall foul of copyright law and be chilled or even face prosecution (as has happened elsewhere). As an example, we have had several approaches from companies around the world who want us to mine the literature for them. We always have to consider the copyright problems upfront. For example, TDM can only be carried out (without permission) by "Public Interest Research Organizations" for "non-commercial" purposes. Is PM-R a PIRO? If not, then citizen-based innovation is being killed. Are we non-commercial? The only way to find out is to be taken to court. Julia Reda has proposed that every citizen should be allowed to carry out TDM for any purpose. That, and only that, is legal certainty.
Do we have to limit our innovation because of the lobbying by European “publishers”, many of whom do not even create their own content but act as rent collectors on 100 B Euro of publicly funded science and medicine?

  • Article 13 stops us publishing knowledge
  • Article 11 stops us telling people about knowledge
  • Article 3 stops us reading knowledge.

Please oppose the current drafts and work with Julia Reda for a positive, innovative copyright future for Europe. With your help we can be world leaders.
Yours sincerely,
Peter Murray-Rust

Posted in Uncategorized | Leave a comment

IMLS Forum on Text and Data Mining – 1 – Background

I am honoured to have been invited to Chicago to take part in an Institute of Museum and Library Services (IMLS) forum. Here's the occasion:

Data Mining Research Using In-copyright and Limited-access Text Datasets

National Forum, April 5 & 6, 2018, Chicago, Illinois
This project will bring together experts and thought leaders for a 1.5 day meeting to articulate an agenda that provides guidelines for libraries to facilitate research access, implement best practices, and mitigate issues associated with methods, approaches, policy, security, and replicability in research that incorporates text datasets that are subject to intellectual property (IP) rights.
Forum attendees will include librarians, researchers, and content providers who will be called to explore issues and challenges for scholars performing data mining and analysis on in-copyright and limited-access text datasets. These datasets are subject to restrictions that lead researchers to obtain permission for each use, to perform non-consumptive research where they do not have read access to the full text corpus, or to work with their library to identify whether the content provider's licensing terms and agreements allow for use.
This project is funded by the Institute of Museum and Library Services award LG-73-17-0070-17.
PMR> I'm excited and hope I can help. The point is that official, legal Text and Data Mining ("ContentMining") of in-copyright datasets is effectively non-existent in the UK. That's surprising, as the UK passed an exception ("Hargreaves") specifically allowing it for non-commercial research purposes, and promoted it as a great opportunity for wealth generation.
I would like to do it, but I do not, and may comment on this later. The uncertainties are so great and the difficulties so forbidding that no UK university that I know of actively supports it through infrastructure, tools, or financial and legal support. Researchers in the UK are left to find their own solutions, in the face of continued obstruction from content providers. Legal language uses "chilling" to describe when people or organizations are frightened of being sued or otherwise penalized. UK TDM free of publisher restrictions has entered an ice age, with little prospect of warming up.
I'll cover the reasons why in later posts. In the next I will post my own submission to the meeting.
The meeting aims to create a concerted way forward. The US has a different structure from the UK, both legal and academic – US copyright supports Fair Use, whereas the UK offers the reader/miner very little certainty.
The University of Illinois at Urbana-Champaign (UIUC) has very kindly invited me to visit before IMLS. I'll be talking to Digital Humanities, Computer Science, Libraries, etc., and giving a general talk on Tuesday 3 April. I'm not sure whether it will be streamed. Then I'll catch a ride with the others going to Chicago.
I hope the IMLS meeting will generate progress – we need it. But it's very difficult to satisfy all parties without becoming fuzzy and anodyne. That requires a strong sense of purpose.

CopyCamp2017 4: What is (Responsible) ContentMining?

My non-profit organization has the goal of making contentmining universally available to everyone through three arms:

  • Advocacy. Why it’s so valuable and why you should convince others and why restrictions should be removed.
  • Community. We need a large, vibrant public community of practice.
  • Tools. We need to be able to do this easily.

There is a lot of apathy and a considerable amount of push-back and obfuscation (mainly from mega-publishers), and it's important that we do things correctly. So four of us wrote a document on how to do it responsibly:

Responsible Content Mining
Maximilian Haeussler, Jennifer Molloy,
Peter Murray-Rust and Charles Oppenheim
The prospect of widespread content mining of the scholarly literature is emerging, driven by the promise of increased permissions due to copyright reform in countries such as the UK and the support of some publishers, particularly those that publish Open Access journals. In parallel, the growing software toolset for mining, and the availability of ontologies such as DBPedia mean that many scientists can start to mine the literature with relatively few technical barriers. We believe that content mining can be carried out in a responsible, legal manner causing no technical issues for any parties. In addition, ethical concerns including the need for formal accreditation and citation can be addressed, with the further possibility of machine-supported metrics. This chapter sets out some approaches to act as guidelines for those starting mining activities.
Content mining refers to automated searching, indexing and analysis of the digital scholarly literature by software. Typically this would involve searching for particular objects to extract, e.g. chemical structures, particular types of images, mathematical formulae, datasets or accession numbers for specific databases. At other times, the aim is to use natural language processing to understand the structure of an article and create semantic links to other content.
and we gave a typical workflow (which will be useful when we discuss copyright).

Of course there are variants, and particularly where we start with bulk downloading and then searching. For example we are now downloading all Open content, processing it and indexing against Wikidata. There is little point in everybody doing the same thing and, because the result is Open, everyone can share the results of processing.
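To make the "search for particular objects" step above concrete, here is a minimal sketch in Python. The accession-number regex and the sample text are purely illustrative (real pipelines use curated, per-database patterns and large dictionaries):

```python
import re

# Illustrative pattern only: GenBank-style accession numbers
# (one or two capital letters followed by five or six digits).
ACCESSION = re.compile(r"\b[A-Z]{1,2}\d{5,6}\b")

def extract_accessions(text):
    """Return the unique accession-like tokens found in an article's text."""
    return sorted(set(ACCESSION.findall(text)))

print(extract_accessions(
    "Sequences AB123456 and U49845 were deposited; see also AB123456."
))  # prints ['AB123456', 'U49845']
```

The same shape – compile a pattern or dictionary, scan each paper, deduplicate and index the hits – underlies the workflow described above, whatever the object type.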
We’ll use this diagram in later posts.


CopyCamp2017 3: The Hague Declaration and why ContentMining is important

In 2015 LIBER (the European body for Research Libraries) collected a number of leading figures in the library and scholarship world to create the Hague Declaration on freedom for Text and Data Mining. This stated not only the aspirations but also the reasons for demanding freedom, and I reproduce a chunk of it here for CopyCamp2017 to consider.

The Hague Declaration aims to foster agreement about how to best enable access to facts, data and ideas for knowledge discovery in the Digital Age. By removing barriers to accessing and analysing the wealth of data produced by society, we can find answers to great challenges such as climate change, depleting natural resources and globalisation.

PMR: note that this is about why it’s so important – the answers to the health of the planet and the beings on it may be hidden in the scientific literature and Mining can pull this out.


New technologies are revolutionising the way humans can learn about the world and about themselves. These technologies are not only a means of dealing with Big Data, they are also a key to knowledge discovery in the digital age; and their power is predicated on the increasing availability of data itself. Factors such as increasing computing power, the growth of the web, and governmental commitment to open access to publicly-funded research are serving to increase the availability of facts, data and ideas.
However, current legislative frameworks in different legal jurisdictions may not be cast in a way which supports the introduction of new approaches to undertaking research, in particular content mining. Content mining is the process of deriving information from machine-readable material. It works by copying large quantities of material, extracting the data, and recombining it to identify patterns and trends.
At the same time, intellectual property laws from a time well before the advent of the web limit the power of digital content analysis techniques such as text and data mining (for text and data) or content mining (for computer analysis of content in all formats). These factors are also creating inequalities in access to knowledge discovery in the digital age. The legislation in question might be copyright law, law governing patents or database laws – all of which may restrict the ability of the user to perform detailed content analysis.
Researchers should have the freedom to analyse and pursue intellectual curiosity without fear of monitoring or repercussions. These freedoms must not be eroded in the digital environment. Likewise, ethics around the use of data and content mining continue to evolve in response to changing technology.
Computer analysis of content in all formats, that is content mining, enables access to undiscovered public knowledge and provides important insights across every aspect of our economic, social and cultural life. Content mining will also have a profound impact for understanding society and societal movements (for example, predicting political uprisings, analysing demographical changes). Use of such techniques has the potential to revolutionise the way research is performed – both academic and commercial.

PMR: This shows clearly the potential of ContentMining and the friction that the current legal system (mainly copyright) places on it, by default.
And a non-exhaustive list of benefits:


The potential benefits of content mining are vast and include:

  • Addressing grand challenges such as climate change and global epidemics

  • Improving population health, wealth and development

  • Creating new jobs and employment

  • Exponentially increasing the speed and progress of science through new insights and greater efficiency of research

  • Increasing transparency of governments and their actions

  • Fostering innovation and collaboration and boosting the impact of open science

  • Creating tools for education and research

  • Providing new and richer cultural insights

  • Speeding economic and social development in all parts of the globe

So what should be done? I’ll leave that to the next post.


CopyCamp 2: workshop on ContentMining – what is it and how to do it

In the last post I explained why I became interested in contentmining to do scientific research, and started to explain how it is still a major political and legal challenge. I am excited that I have been asked to run a workshop at CopyCamp, and here is the information I am giving to participants. (You may also find my slides useful.)
Workshops on TDM/contentmining cover many areas and the precise format of this one will depend on the participants. On the program notes I suggested:
  • hackers who can use tools such as R, Python, etc. to do exciting things
  • scientists (including citizens) who want to explore questions in bioscience
  • librarians who want to explore C21st ways of creating knowledge
  • open activists who want to change policy both by political means and using tools
  • young people – we have had wonderful contributions from a 15-year-old

So if everyone wants to talk about European and UK copyright politics, that's fine. But we also have tools and tutorials showing how mining is done, and we suggest people get some hands-on experience. It's probably going to be a good idea to work in small groups with complementary skills:

Dear workshop participant:
I am delighted that you have signed up to my workshop on Friday 29th at CopyCamp.
Wikidata, ContentMine and the automatic liberation of factual data (The Right to Read is the Right to Mine). The workshop will explore how Open Source tools can extract factual information from the Open Access scientific literature (specialising in biomedicine). We will introduce Wikidata, a rapidly growing collection of 30 million high-quality data and metadata items, and use it to index scientific articles. Participants will query the literature at EuropePMC using "getpapers" and retrieve hundreds or thousands of full-text articles [snip…]
We will adapt the workshop to the skills and wishes of participants when we assemble, though please contact me earlier if there are things you would like to do. Topics can be chosen from:
* online demo of mining
* installation of full ContentMine software stack, and use of public repositories (EuropePubMedCentral, arXiv)
* introduction to WikiFactMine for extracting facts from open access publications.
* political and legal aspects of contentmining (with a European and UK slant)
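As a flavour of the first step, here is a minimal sketch of building a Europe PMC REST search URL of the kind getpapers issues under the hood. The helper function is my own illustration; getpapers itself handles paging, downloading and output formats:

```python
from urllib.parse import urlencode

# Base URL of the public Europe PMC REST search endpoint.
EUPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def build_search_url(query, page_size=25, fmt="json"):
    """Build a Europe PMC search URL, restricted to open-access full text.

    'OPEN_ACCESS:y' is a standard Europe PMC search filter; combining it
    with the user query keeps the results minable without permission issues.
    """
    params = {
        "query": f"({query}) AND OPEN_ACCESS:y",
        "format": fmt,
        "pageSize": page_size,
    }
    return f"{EUPMC_SEARCH}?{urlencode(params)}"

print(build_search_url("zika AND mosquito"))
```

Fetching that URL returns metadata for matching articles, from which the full-text XML of open-access papers can then be downloaded for mining.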
If any participants are connected with (Polish) Wikipedia that could be valuable and exciting. (By default we shall use English Wikipedia). Note that Wikidata carries a large number of links to other language Wikipedias and this may be a valuable resource to explore.
If you want to run the full ContentMine stack it’s a good idea to install beforehand, so here are the instructions for *adventurous* members of the workshop:

This is a VM and should be independent of the operating system of the host machine. It has been tested in several installations, but there may be problems with non-US/UK keyboards and encodings. By default the tutorial is in English (all the resources – EuropePMC, dictionaries – are also in English and generally use only ASCII 32–127).

Of course anyone anywhere can also try out the tutorials.

CopyCamp: why Copyright reform has failed TDM / ContentMining – 1 The vision and the tragedy

I am honoured to have been invited to speak at CopyCamp2017,  “The Internet of Copyrighted Things” .  I’ve not been to CopyCamp before, but I’ve been to similar events and I’m delighted to see it is sponsored by organisations, some of which I belong to, that are fighting for digital freedom. In these posts I’ll show why copyright has failed science; this post shows why knowledge is valuable and must be free.
I'm giving a workshop on Thursday and talking on Friday (after scares from Ryanair), and I'm blogging (as I often do) to clear my thoughts and help add to the static slides. This is the latest in a 40-year journey of hope, which is increasingly destroyed by copyright maximalism. I am being turned from an innovative scientist who had a dream of building something excitingly new into an angry activist who is fighting for everyone's rights. I can accept when science doesn't work, because it often just doesn't; I get angry when mega-capitalists use science as a way to generate money and in the process destroy something potentially wonderful.
Here’s the story. 45 years ago I had my first scientific insight – working with Jack Dunitz in Zurich – that by collecting many seemingly unrelated observations (in this case crystal structures) I could find new science by looking at the patterns between them (“reaction pathways”). This is knowledge-driven research, where a scientist takes the results of others and interprets them in different ways. It’s as old as science itself, exemplified in chemistry by Mendeleev’s collection of the properties of compounds and analysis in the Periodic Table of the Elements. Mendeleev didn’t measure all those properties – many will have been reported in the scientific literature – his genius was to make sense out of seemingly unrelated properties.
40 years ago chemists started to use computers to carry out simple chemical artificial intelligence – analysis of spectra and chemical synthesis. I was entranced by the prospect, but realised it relied on large amounts of knowledge to take it further. I was transformed by TimBL’s vision of the Semantic Web – where knowledge could be computed. I moved to Cambridge in 1999 with the long-term aim to create “chemical AI”.  I created a dream – the WorldWide Molecular Matrix – where knowledge would be constantly captured, formalized and logic or knowledge engines would extract, or even create, new chemical insights.
To do this we'd need automatic extraction of information using machines – thousands of articles or even more. In 2005-2010 I was funded (with others) by EPSRC and JISC to develop tools to extract chemical knowledge from the scientific literature. It's hard and horrible, because scientific papers are not authored to be read by machines. I have spent years writing code to do this and now have a toolset which can read tens of thousands of papers a day (or more if we pay for clouds) and extract high-quality chemistry. This chemistry is novel because it's too expensive and boring to extract by hand, and would be an important addition to what we have. As an example, Nick Day in my group built CrystalEye, which extracted 250,000 crystal structures, improved them and published them under an Open Licence – we've now joined forces with the wonderful Crystallography Open Database. Later Peter Corbett, Daniel Lowe, and Lezan Hawizy built novel, Open software for extracting chemistry from the text of papers.
So now I have everything I want – thousands of scientific articles every day, maybe 10-15% containing some chemistry, and a set of Open tools that anyone can use and improve. I’m ready to try the impossible dream – of building a chemical AI…
What will it find?
NOTHING. Because if I or anyone use it without the PUBLISHER's permission, the University will be immediately cut off by the publisher because …
… because it might upset their market. Or their perceived dominance over researchers. This isn't a scare or over-reaction – there are enough stories of scientists of many disciplines being cut off arbitrarily to show it's standard practice. One day two years ago the American Chemical Society's automatic triggers cut off 200 universities. Publishers send bullying mails – "you have been illegally downloading content" (totally untrue), or "stealing" (also untrue).
This is now so common that many researchers and even more librarians are scared of publishers. This blog has outlined much of this in the past, and it's not getting better. My dream has been destroyed by avarice, fear and conservatism. I'll outline the symptoms and what needs to be done, and urge citizens to own this problem and assert that they have a fundamental right to open scientific knowledge.
My slides at CopyCamp provide additional material.


WLIC/IFLA2017: UBER for scholarly communications and libraries? It’s already here…

You all know of the digital revolution that is changing the world of service – Amazon, UBER, AirBnB, coupled to Facebook, Google, Siri, etc. The common feature is a large corporation (usually from Silicon Valley) which builds a digital infrastructure that controls and feeds off service providers. UBER doesn't own taxis, and takes no responsibility for their actions. AirBnB doesn't own hotels; Amazon doesn't have shopfronts. But they act as the central point for searches, and they design and control the infrastructure. Could it happen for scholcom / libraries? TL;DR: it's already happened.
You may love UBER, may accept it as part of change, or rebel against it.  If you want to save money or save time it’s probably great. If you don’t care whether the drivers are insured or maintain their vehicles, fine. If you don’t care about regulation, and think that a neoliberal market will determine best practices, I can’t convince you.
But if you are a conventional service provider (hotels, taxis) you probably resent the newcomers. If you are blind, or have reduced mobility,  and are used to service provision by taxis you’ll probably be sidelined. UBER and the rest provide what is most cost-effective for them, not what the community needs.
So could it happen in scholarly communications and academic libraries? Where the merit of works is determined by communities of practice? Where all the material is created by academics, and reviewed by academics? Isn’t the dissemination overseen by the Universities and their libraries? And isn’t there public oversight of the practices?
No. It's overseen and tightly controlled by commercial companies who have no public governance, who make the rules and who can break the rules and get away with it. While the non-profit organizations are nominally academic societies, in practice many are controlled by managers whose primary requirement is often to generate income as much as to spread knowledge. The worth of scientists is determined not by community acclaim or considered debate but by algorithms run by the mega-companies. Journals are, for the most part, created and managed by corporations. Society journals exist, and new journals are created, but many increasingly end up being commercialised. What role does the Library have?
Very little.
It nominally carries out the purchase – but has little freedom in a market which is designed for the transfer of money, not knowledge. In the digital era, libraries should be massively innovating new types of knowledge, not simply acting as agents for commercial publishers.
So now Libraries have a chance to change. Where they can take part in the creation of new knowledge. To help researchers. To defend freedom.
It’s probably the last great opportunity for libraries:
Content-mining (aka Text and Data Mining, TDM).
This is a tailor-made opportunity for Libraries to show what they can contribute. TDM has been made legal and encouraged in the UK for 3 years. Yet no UK Library has made a significant investment, no UK Vice Chancellor has spoken positively of the possibilities, no researchers have been encouraged. [1]
And many have been discouraged – formally – including me.
Mining is as revolutionary as the printing press. Libraries should be welcoming it rather than neglecting or even obstructing it. If they don’t embrace it, then the science library will go the way of the corner shop, the family taxi, the pub. These are becoming flattened by US mega-corporations. Products are designed and disseminated by cash-fed algorithms.
The same is happening with libraries.
There is still time to act. Perhaps 6 months. Universities spend 20,000,000,000 USD per year (20 Billion) on scholarly publishing – almost all goes to mega-corporations. If they spent as little as 1% of that (== 200 Million USD) on changing the world it would be transformative. And if they did this by supporting Early Career Researchers (of all ages) it could change the world.
If you are interested, read the next blog post. Tomorrow.
[1] The University of Cambridge Office of Scholarly Communication ran the first UK University meeting on TDM last month.


ContentMine at IFLA2017: The future of Libraries and Scholarly Communications

I am delighted to have been invited to talk at IFLA, the global overarching body for libraries of all sorts. I'm in Session 232 with:
Congress Programme, IASE Conference Room 24.08.2017, 10:45 – 12:45
Session 232 Being Open About Open – Academic & Research Libraries, FAIFE and Copyright and Other Legal Matters
What's FAIFE? In IFLA's own words:
The overall objective of IFLA/FAIFE is to raise awareness of the essential correlation between the library concept and the values of intellectual freedom …
  • Monitor the state of intellectual freedom within the library community
  • Respond to violations of free access to information and freedom of expression
I share these views. But freedom of access and freedom of expression are under threat in the digital world. Mega-corporations control content and services, and are actively trying to claw back more control, for example by controlling the right to post hyperlinks to scholarly articles – even open access ones ("Link Tax").
And recently
I have spent 3-4 years on the edge of the political arena and I’ve seen how hard companies fight to remove our rights and to give them control.
And we need your help.
If you are a librarian, then you can only protect access to knowledge by actively fighting for it.

That means you. Not waiting for someone to create a product that you can buy, but actively creating the scholarly infrastructure of the future and embedding rights for everyone.
Now, for the first and possibly the last time, we have an opportunity for libraries to make their own contribution to freedom.
I've set up the non-profit organization which promotes three areas for fighting for freedom:


  • Community. The community deserves better from academia, and the community is willing to help, if given the chance. The biggest communal knowledge creation is in Wikimedia and we are working with them to make high-quality knowledge universally created and universally available.

We now have tools which can create the next generation of scholarly knowledge – for everyone.
But YOU can and must help.
IFLA has very generously given us workshop time for a demonstration and discussion of Text and Data Mining (TDM):
Imperial Hall 23.08.2017, 11:45 – 13:30
Session 199 Text and Data Mining (TDM) Workshop for Data Discovery and Analytics – Big Data Special Interest Group (SIG)
We'll be giving simple hands-on web demonstrations of mining, interspersed with the chance to discuss policy and investment in tools, practices and people. Especially young people. No prior knowledge required.
This is (hopefully) the first of several blogs.


What is TextAndData/ContentMining?

I prefer “ContentMining” to the formal legal phrase “Text and Data Mining” because it emphasizes all kinds of content – audio, photos, videos, diagrams, chemistry, etc. I chose it to assert that non-textual content – such as diagrams – could be factual and therefore uncopyrightable. And because it’s a huge extra exciting dimension.
Mining is the process of finding useful information in material whose producer didn't create it for that purpose. For example the log books of the British navy – which recorded data on weather – are now being used to study climate change (certainly not what the British Admiralty had in mind). Records of an eclipse in ancient China have been used to study the rotation of the earth. And forty years ago I studied hundreds of papers on individual crystal structures to determine reaction pathways – again completely unexpected to the original authors.
In science, mining is a way to dramatically increase our human knowledge simply by running software over existing publications. Initially I had to type data in by hand (the papers really were paper) and then I developed ways of using electronic information. Ca. 15 years ago I developed tools which could trawl over the whole of the crystallographic literature and extract the structures, and we built this into CrystalEye – where the software added much more information than was in the original paper. (We have now merged this with the Crystallography Open Database.) My vision was to do this for all chemical information – structures, melting points, molecular mass, etc. Ambitious, but technically not impossible. We had useful funding and collaboration with the Royal Society of Chemistry, and developed OSCAR as software specifically to extract chemistry from text. Ten years ago things looked exciting – everyone seemed to accept that having access to electronic publications meant that you could extract facts by machine. It stood to reason that machines were simply a better, more accurate, faster way of extracting facts than pencil and retyping.
So what new science can we find by mining?

  • More comprehensive coverage. In 1974 I read and analyzed 1-200 papers in 6 months. In 2017 my software can read 10000 papers in less than a day.
  • More comprehensive within a paper. Very often I would limit the information because I didn't have time (e.g. the anisotropic displacements of atoms). Now it's trivial to include everything.
  • Aggregation and intra-domain analytics. By analysing thousands of papers you can extract trends and patterns that you couldn’t do before. In 1980 I wanted to ask “How valid is the 18-electron rule?” – there wasn’t enough data/time. Now I could answer this within minutes.
  • Aggregation and inter-domain analytics. I think this is where the real win is for most people. “What pesticides are used in what countries where Zika virus is endemic and mosquito control is common?”. You cannot get an answer from a traditional search engine – but if we search the full-text literature for pesticide+country+disease+species we can rapidly find those papers with the raw information and then extract and analyze it. “Which antibodies to viruses have been discovered in Liberia?”. An easy question for our software to answer, except it was behind a paywall – no-one saw it and the Ebola outbreak was unexpected.
  • Added information. If I find “Chikungunya” in an article, the first thing I do is link it electronically to Wikidata/Wikipedia. This tells me immediately the whole information hinterland of every concept I encounter. It’s also computable – if I find a terpene chemical I can compute the molecular properties on-the-fly. I can, for example, predict the boiling point and hence the volatility without this being mentioned in the article. The literature is a knowledge symbiont.

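The Wikidata linking step in the last bullet can be sketched as a dictionary lookup. The terms and Q-identifiers below are purely illustrative placeholders (not verified Wikidata IDs); a real pipeline would resolve terms against the live Wikidata service:

```python
# Minimal sketch of dictionary-based entity linking, as in the
# Chikungunya example above. The term -> QID mapping is a hypothetical
# stand-in; a real system would be built from a Wikidata dump or API.
import re

WIKIDATA_IDS = {
    "chikungunya": "Q123456",   # placeholder IDs, for illustration only
    "zika virus": "Q234567",
    "limonene": "Q345678",      # a terpene
}

def link_entities(text):
    """Return (term, qid, url) for every dictionary term found in the text."""
    lower = text.lower()
    hits = []
    for term, qid in WIKIDATA_IDS.items():
        if re.search(r"\b" + re.escape(term) + r"\b", lower):
            hits.append((term, qid, f"https://www.wikidata.org/wiki/{qid}"))
    return hits

print(link_entities("An outbreak of Chikungunya was reported."))
```

Once a term is linked to its Wikidata item, the whole "information hinterland" – properties, related concepts, identifiers in other databases – becomes computable, which is exactly the symbiosis the bullet describes.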
Everyone is already using the results of Text Mining. Google and other search engines have sophisticated language analysis tools that find all sources with (say) “Chikungunya”. What I want to excite you about is the chance to go much further.
Why do we need other search engines when we have “Google”?

  • Google shows you what it wants you to see. (The same is true for Elsevinger). You do not know how these were selected, it’s not reproducible, and you have no control. (Also, if you care, Google and Elsevinger monitor everything you do and either monetize it or sell it back to your Vice-Chancellor).
  • Google does not allow you to collect all the papers that fit a given search. They give links – but try to scrape all those links and you will be cut off. By contrast Rik Smith-Unna, working with ContentMine (CM), developed “getpapers” – which is exactly what the research scientist needs: an organized collection of the papers resulting from a search. ContentMine tools such as “AMI” then allow detailed analysis of the contents of those papers.
  • Google can’t search by numeric value. Try asking for papers with patients in the age range 12-18 and it’s impossible (you might be lucky if that precise string is used, but generally you get nothing). In contrast CM tools can search for numbers, search within graphs, search species and much more. “Find all diterpene volatiles from conifers over 10 metres high at sea level in tropical latitudes” is a straightforward concept for CM software.

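The numeric-range query in the last bullet can be sketched as an interval-overlap test over extracted numbers. The pattern below is my own simplification for illustration, not the actual ContentMine implementation:

```python
# Sketch of a numeric-range search over full text - the kind of query a
# keyword engine cannot answer. The regex is a deliberate simplification.
import re

AGE_RANGE = re.compile(r"aged?\s+(\d+)\s*(?:-|to)\s*(\d+)", re.IGNORECASE)

def matches_age_query(text, lo, hi):
    """True if the text mentions a patient age range overlapping [lo, hi]."""
    for m in AGE_RANGE.finditer(text):
        a, b = int(m.group(1)), int(m.group(2))
        if a <= hi and b >= lo:
            return True
    return False

papers = [
    "We enrolled patients aged 14 to 17 with dengue fever.",
    "Participants were adults aged 35-60.",
]
print([matches_age_query(p, 12, 18) for p in papers])  # [True, False]
```

The point is that "12-18" is treated as an interval, not a literal string: "aged 14 to 17" matches the query even though the characters "12-18" never appear in the paper.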
That’s a brief introduction – and I’ll show real demos tomorrow.


Text and Data Mining: Overview

Tomorrow the University of Cambridge Office of Scholarly Communication is running a one-day Symposium on Text and Data Mining. I have been asked to present ContentMine, a project funded by the Shuttleworth Foundation through a personal Fellowship, which has evolved into a not-for-profit company.
I hope to write several blog posts before tomorrow, and maybe some afterwards. I have been involved in mining science from the semi-structured literature for about 40 years and shall give a scientific slant. As I have only 20-25 minutes, I am recording thoughts here so people can take time to explore the more complex aspects.
Machines are now part of our information present and future, but many sectors, including academia, have not embraced this. Whereas supermarkets, insurers and social media have all modernised, scholarly communication still works with “papers”. These papers contain literally billions of dollars of unrealised value, but very few people care about this. As a result we are not getting the full value of technical and medical funding, much of which is wasted through the archaic physical format and outdated attitudes.
These blog posts will cover the following questions – how many depends on how the story develops. They include:

  • What mining could be used for and why it could revolutionise science and scholarship
  • Why TDM in the UK and Europe (and probably globally) has been a total political and organizational failure.
  • What directions are we going in? (TL;DR you won’t enjoy them unless you are a monopolistic C21st exploiter, in which case you’ll rejoice.)
  • What I personally am doing to fight the cloud of digital enclosure.

There are 3 arms to ContentMine activities:

  • Advocacy/political. Trying to change the way we work top-down, through legal reforms, funding, etc. (TL;DR it’s not looking bright)
  • Tools. ContentMining needs a new generation of Open tools and we are developing these. The vision is to create intelligent scientific information rather than e-paper (PDF). Much of this is recently enhanced by the development of
  • Community. The great hope is the creative activity of young people (and people young at heart). Young people are sick of the tired publisher-academic complex which epitomises everything old, with meretricious values.

This sounds very idealistic – and perhaps it is. But the Academic-Publisher complex is all-pervasive – it kills young people’s hopes and actions. Our values are managed by algorithms that Elsevinger sells to Vice-chancellors to manage “their” research. The AP complex has destroyed the potential of TDM in the UK and elsewhere and so we must look to alternative approaches.
For me there is a personal sadness. 15 years ago I could mine the literature and no-one cared. I had visions of building the Open shared scientific information of the future. I called it – after Gibson’s vision of the matrix in cyberspace. It draws on the vision of TimBL and the semantic web, and the idea of global free information. It was technically ahead of its time by perhaps 15 years, but now – with Wikidata and modern version control (Git) – we can actually build this.
So my vision is to mine the whole of the scientific literature and create a free scientific resource for the whole world.
It’s technically possible and we have developed the means to do it. And we’ve started. And we will show you how, and how you can help.
But we can only do it on a small part of the literature because the Academic-Publisher complex has forbidden it on the rest.
