Monthly Archives: December 2010

PMR Hackfest: Augmented Molecular Reality – volunteers welcomed

#pmrhack

The contents and activities of our hackfest (http://blogs.ch.cam.ac.uk/pmr/2010/12/23/pmr-events-at-unilever-centre-january-1516-and-17/ ) will be determined by YOUR inventiveness, not our planning. So here’s a typical example of a mashup created in 24 hours. It comes mainly from Dave Murray-Rust (http://mo-seph.com) with minor inputs from the rest of the soporific or slightly sick family.

Dave has found http://www.layar.com which “layers” 3D objects (as Wavefront files) onto reality. It relies on a modern phone (mine isn’t – it’s a year old) which must have a compass and GPS as well as accelerometers. The phone “knows” where it is and in which direction it’s pointing. If a virtual object is within range and within the field of view, it is shown superimposed on the camera’s view. Here’s Layar’s example:

The animalcules are built of triangles with material properties and are located somewhere along the current line of sight. All you need to do is walk round the locality, keep orienting your phone, and you should be able to locate them.
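The geometry behind this is simple. Here is a rough sketch (ours, not Layar’s code; the coordinates, thresholds and function names are made up for illustration) of how a phone might decide whether a geolocated object is within range and within the camera’s field of view, given its GPS position and compass heading:

    import math

    def bearing_to(lat1, lon1, lat2, lon2):
        """Initial compass bearing (degrees) from point 1 to point 2."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def distance_m(lat1, lon1, lat2, lon2, r=6371000):
        """Great-circle distance in metres (haversine)."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def object_in_view(phone, heading, obj, max_range=500, fov=60):
        """True if obj (lat, lon) is within max_range metres and within the camera's field of view."""
        if distance_m(*phone, *obj) > max_range:
            return False
        diff = (bearing_to(*phone, *obj) - heading + 180) % 360 - 180
        return abs(diff) <= fov / 2

    # Made-up Cambridge-ish coordinates: phone facing north-east, object ~40 m away
    print(object_in_view((52.199, 0.124), 45.0, (52.1993, 0.1244)))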

So we’ve done the same with molecules. Here’s one of the first examples – it’s “in our garden”. If we walked to the other end of the garden we’d see the other side!

 

For the chemists, what is it? (We hope to brighten the red and blue atoms later).

To create this we need a Wavefront file:
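As a flavour of what such a file contains, here is a toy sketch (our own illustration – neither Blue Obelisk code nor what Dave actually used) that writes each atom of a hard-coded molecule as a crude octahedron in Wavefront OBJ format; a real exporter would produce smooth spheres, bonds and material files:

    # Minimal sketch: each atom becomes a small octahedron in a .obj file.
    ATOMS = [                      # element, x, y, z in Angstroms (a water molecule)
        ("O",  0.000, 0.000, 0.000),
        ("H",  0.757, 0.586, 0.000),
        ("H", -0.757, 0.586, 0.000),
    ]
    RADII = {"O": 0.66, "H": 0.31, "C": 0.76, "N": 0.71}   # rough covalent radii

    def write_obj(atoms, path="molecule.obj"):
        with open(path, "w") as f:
            offset = 0                       # OBJ vertex indices are global and 1-based
            for i, (elem, x, y, z) in enumerate(atoms):
                r = RADII.get(elem, 0.7)
                # six vertices of an octahedron centred on the atom
                verts = [(x + r, y, z), (x - r, y, z), (x, y + r, z),
                         (x, y - r, z), (x, y, z + r), (x, y, z - r)]
                faces = [(1, 3, 5), (3, 2, 5), (2, 4, 5), (4, 1, 5),
                         (3, 1, 6), (2, 3, 6), (4, 2, 6), (1, 4, 6)]
                f.write("o %s%d\n" % (elem, i))
                for v in verts:
                    f.write("v %.3f %.3f %.3f\n" % v)
                for a, b, c in faces:
                    f.write("f %d %d %d\n" % (a + offset, b + offset, c + offset))
                offset += 6

    write_obj(ATOMS)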

QUESTION. Which Blue Obelisk software can create Wavefronts of molecules?

QUESTION. Which readers/volunteers/attendees are able to use Layar software? Our current guidelines are iPhone >= 3GS or 4 and Android >= 1.5.

QUESTION: Are there Layar or other augmented-reality hackers who’d like to help? You don’t need to know any chemistry.

QUESTION: Can you think of a fun game? At present we plan to locate famous molecules on famous places in Cambridge and see if people can work out the connection (some will have a JMR and PMR theme).

We hope to be coming up with a whole set of ideas for the hackfest so please start thinking now if you want to be involved. Nothing is out of scope (barring legality and contract law – i.e. we cannot use actual scientific content from Closed Access journals).

PMR Hackfest and Blue Obelisk activity/Dinner

As part of the PMR hackfest (http://blogs.ch.cam.ac.uk/pmr/2010/12/23/pmr-events-at-unilever-centre-january-1516-and-17/) we plan to have a Blue Obelisk dinner and a BlueOb hack.

The Blue Obelisk (http://www.blueobelisk.org) is a group of like-minded chemists/programmers who create and promote ODOSOS (Open Data, Open Standards and Open Source) in chemistry and related disciplines. Year on year the BO creates more and better resources and it’s fair to say that much of this is the equal of commercial offerings or even better. (There have been virtually no new fundamental developments in chemical software and cheminformatics in the last ten years, much of it being rehashing, widget-frosting and integration.) For a discipline which cares (or should care) about data quality and reproducible science, ODOSOS is the only meaningful way to go in the future.

The BlueOb has no membership, no formal agendas, no minutes. It is an unsociety. It keeps in touch daily through email, Twitter, FriendFeed, Bitbucket, Skype, etc. Everyone knows what the others are doing. There’s no deliberate competition though duplication can be useful (especially on different platforms). For example Jmol and Avogadro are both molecular viewers and they have overlaps and dissimilarities.

Anyone is welcome to the dinner, which will be in the Panton Arms – simple pub grub; before and after it, people will be hacking in the Chemistry Dept. We plan to have a BlueOb display on the Monday at the symposium, and the hackfest is an opportunity to mash up some of the software and resources.

The Panton is open to anyone (it’s a pub!). If you want to come to the hackfest and/or symposium just register (free) at http://www-pmr.ch.cam.ac.uk/wiki/Visions_of_a_%28Semantic%29_Molecular_Future_Registration

PMR symposium – how can we broad/multi/share/cast it?

We’re planning the infrastructure for “Visions of a Semantic (Molecular) Future” in Cambridge on Jan 17th

http://blogs.ch.cam.ac.uk/pmr/2010/12/23/pmr-events-at-unilever-centre-january-1516-and-17/

and want to make this available live to as many people as possible. (We know that there are many people who have clashes – e.g. http://scienceonline2011.com/ ). So we want to stream the sessions to the world and this post is to ask for suggestions as to what the best way(s) are. It’s possible that we may have more than one option or solution.

I’ve been involved with interactive events over the years – the first were text-based realities built on chats and MOOs (such as BioMOO, derived from the great LambdaMOO). In 1998 Henry Rzepa and I, with the help of Wendy Warr and sponsored by the (now defunct) ChemWeb, held a virtual launch of Chemical Markup Language. I think there were about registrants. More recently JISC streamed my and other presentations at the Library of the Future event last year (http://www.jisc.ac.uk/events/2009/04/lotf.aspx). That was impressive – a speakers’ screen, another with Second Life and a third with Twitterfall. It went well – we got questions from around the world – and I assumed that the technology was fairly standard. That’s not the case, so I owe JISC an extra bit of praise for making it work.

The other was a fairly impromptu – quasi-hand-held – streaming of my session on the Green Chain Reaction at Science Online 2010 London (http://scienceonlinelondon.wikidot.com/topics:green-chain-reaction); I forget the details – I was far too rushed to pay much attention to the software.

So what options are available for streaming the speakers’ presentations? Videos of talking heads aren’t usually very compelling, so it’s the audio that is essential. And screenshots of projected slides are usually rather fuzzy. On the assumption that speakers can and will provide slides in advance (I usually don’t!), how can we broadcast the slides in higher digital quality? I’ve seen presentations where the slides are preloaded and advanced remotely by the presenter.

We can’t ask all speakers to do anything special though I hope we can ask for slides in advance – if not it will have to be something that works in real-time – at least a video camera on the screen.

I’d really welcome suggestions, especially if you have used the approaches yourself.

PMR events at Unilever Centre January 15/16 and 17

“Visions of a Semantic (Molecular) Future”

The weekend and Monday events in Cambridge are now shaping up well and there is already a very promising amount of interest and registration. The following is the material just sent out to participants:

Overview The main event is a one-day symposium (January 17th) in the Unilever Centre/Department of Chemistry, to which a number of distinguished scientists have agreed to contribute, both in real life and remotely:

Robert Glen, Tom Blundell, Cameron Neylon, Henry Rzepa, Tony Hey, Alex Wade, John Wilbanks, Dan Zaharevitz, Douglas Kell, PMR

The theme is to look forward to how new technologies, motivations and ways of working might change our practices in scientific scholarship in this decade. The symposium is preceded by a separate weekend hackfest in the Centre to explore some of the technologies and practices with hands-on activities.

Delegates may register for either or both parts of the event.

Timetable

January 15th

0900    Hackfest opens

1300    Lunch (by default in the now-famous Panton Arms)

1800    Blue Obelisk dinner in the Panton (all welcome – reserved room)

January 16th

1700    Hackfest finishes

January 17th “Visions of a Semantic (Molecular) Future” Symposium

Symposium We invite the speakers to guess parts of the future and to indicate areas that scientists should be active in. We are planning to provide streaming video/audio so that people unable to be present can follow the symposium. We shall have a hashtag and coordinate the twittersphere, e.g. live twitterfall, which will include remote contributions. With their permission, speakers’ contributions will be openly available under CC-BY.

At lunch there will be short 3-minute presentations (possibly Pecha-Kucha style) from those providing demos during the event, and a 10-minute contribution from Anita de Waard from the BeyondThePdf event. During the breaks and reception there will be demos by approximately 6 groups.

Demos There will be about 6 demos from the PMR group and extended community showcasing the software and projects. The current list is:

(a) Chemical Markup Language (CML and Chem4Word). CML is the growing de facto semantic approach to chemistry. (EPSRC, Microsoft Research)

(b) AMI. An intelligent fume cupboard conversing with an intelligent lab-coat. Pervasive computing in chemistry (sponsor JISC)

(c) Patenteye. Automatic interpretation of chemical reactions in patents using OSCAR4 and ChemicalTagger (EPSRC, JISC)

(d) Lensfield-Quixote. An Open community-based infrastructure for computational chemistry. (Community)

(e) Blue Obelisk. Open Data, Open Standards, Open Source in chemistry (community)

(f) Open Bibliography and Open Scholarship . Protocols and practices for Open Scholarship, driven by bibliography (JISC)

(g) Open Climate Code. Making climate research Open and Reproducible. (Climate Code)

Hackfest An unstructured (but responsibly run) gathering, where geeks meet to create something within a weekend. Resources will include:

  • physical devices (Arduinos, Kinect, Wii, sensors, etc.)
  • mashup targets (data.gov, the Linked Open Data cloud, DBpedia, the British National Bibliography, UKPubMedCentral, etc.)
  • Open software (Blue Obelisk for chemistry, OKF software, Climate Code)

 

The outcomes can be technical (a new sensor for AMI), societal (new ways of creating communities), mashups, entries in OKF’s CKAN etc. Experiments in new media. Open Wifi available. Bring laptops and ideas – anyone can start up an un-activity.

Publication Jan Kuras from the Journal of Cheminformatics (BioMed Central) has invited all authors and demonstrators to submit manuscripts for a special issue of the journal.

Remote Participation. We are actively investigating streaming of video or audio and slide show. There will be a hashtag and a Twitterfall. More details later.

Connections PMR has been connected with the following communities and activities and anyone interested or involved in them should enjoy either/both events: Open Knowledge Foundation, Blue Obelisk, Climate Code, Quixote-chem, British Library, UKPubMedCentral, and JISC.

Links The primary reference page is http://www-pmr.ch.cam.ac.uk/wiki/Main_Page, which gives links to other pages (including registration for either or both events – free, but required). PMR will also blog developments at http://blogs.ch.cam.ac.uk/pmr/ (the old URL redirects).

Please let us know via registration if you would like to come, and also let others know of the existence of the events.

I shall be posting regularly about progress and the motivation of some of the components.

 

Open Access – why we need Open Bibliography

Stevan Harnad has commented on the discussion on publishing Open Access:

December 20, 2010 at 11:15 am

    Why not just publish in your preferred journal and self-archive the peer-reviewed final draft (“Green OA”)?

For those who don’t know, Stevan is one of the pioneers of OA and has been tireless in taking the struggle forward. We agree on many things – the need for Openness of scholarly information and free (carefully chosen word) access to it. We disagree on the details and strategy of achieving those aims.

The Green Road to Open Access should now – I hope – be labeled as “gratis” – “free as in beer”. It’s useful, but I don’t think it’s useful enough in science and I’ll explain why.

But first I’ll commend the Open Access movement on finally coming round to using the terms “gratis” and “libre” (“free as in speech”). For many years the OA movement did not describe how Open Access documents could be used. Obviously if a document is visible on the web a human can read it – while it is mounted – but there is no guarantee of re-use. For example I may violate copyright restrictions if I want to use a diagram in a gratis OA document. This is true whether it’s in a repository or on a personal web page. Moreover repositories are extremely bad (?lazy) at adding formal notices of rights to their contents and the default is simple: “you cannot re-use anything in this repository for any purpose unless explicitly allowed to do so”. Explicit permission can only be given by adding a formal licence to the documents, such as CC-BY or CC0 or PDDL. The Green Road philosophy which maintains that anything publicly visible on the web can be text-mined, re-used, copied, etc. is counter to legal practice and is no defence against being pursued in the courts by the real or presumed copyright owner. We cannot build semantic certainty on legal quicksands. So, unless the author labels the self-archived copy as Libre I cannot afford to re-use it.

Even if the self-archived documents are libre, they are of little use to data-driven science, which needs a systematic way of discovering them. Randomly archived documents are not systematically searchable, especially when the percentage of self-archiving is very low. Sometimes this is dictated by publishers who forbid self-archiving (guess which I’m talking about) but the very low level of compliance is the real problem. Almost all scientific papers in closed access publications are not self-archived. Stevan’s argument is that if we all make the right effort we’ll solve the problem – I simply don’t believe this will happen. Some institutions such as QUT and Soton mandate this – and get great reward for doing so – but most universities are incapable of the political effort (I’ll deal with this in later posts).

But let’s assume that everyone DID self-archive their publications. How do we discover them? The journals provide services for searching their own pages, but not surprisingly do not index the self-archived copies. Google, etc. may or may not do a comprehensive job in scraping the academic web but even so you can only use a few results of their search – Google does not provide useful APIs to everyone for free.

The solution is relatively simple to state and, technically, to create. If we create an Open Bibliography for scientific articles, then any self-archiving author can add their URLs to it with almost zero effort. Self-archival into any responsible repository could automatically register the deposit in the Open Bibliography. By searching the Open Bibliography you then discover all self-archived articles. If we are paying repository managers to support self-archiving then they should be providing an index to the reposited material. Everyone benefits – including a forward-looking publisher.
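To make that concrete, here is a toy sketch of what a repository’s deposit hook might send. The endpoint and the record shape are entirely hypothetical (loosely modelled on BibJSON); the point is only that registering a self-archived copy need be no more than one small POST:

    import json
    import urllib.request

    # Hypothetical endpoint - there is no such service at this address.
    OPEN_BIB_ENDPOINT = "https://openbiblio.example.org/api/records"

    def register_self_archived(doi, title, authors, repository_url):
        """Tell the (hypothetical) Open Bibliography where a green OA copy lives."""
        record = {
            "type": "article",
            "identifier": [{"type": "doi", "id": doi}],
            "title": title,
            "author": [{"name": a} for a in authors],
            "link": [{"rel": "green-oa-copy", "url": repository_url}],
        }
        req = urllib.request.Request(
            OPEN_BIB_ENDPOINT,
            data=json.dumps(record).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status   # e.g. 201: the bibliography now indexes the copy

    # A repository deposit hook could call this with almost zero author effort:
    # register_self_archived("10.1000/example123", "An Example Paper", ["A. Author"],
    #                        "https://repository.example.ac.uk/id/eprint/1234")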

So we have to create an Open Bibliography.

We have the technology.

YOU have to provide the political will.

Why you and I should avoid NC licences

Richard Kidd – a friend and collaborator from the Royal Society of Chemistry – asks why I don’t like NC (non-commercial) Open Source licences:

December 16, 2010 at 10:32 am

But what about the authors’ intentions when they put a NC license on a piece of work? They want to share, but not for others to make money off the back of their work. When you say they shouldn’t specify that, what choices are they left with? Will this lead to less work being made available as open source?

You’re asking people to make an additional leap of faith, but not addressing the reasons why people pick a standard NC license.

I wouldn’t have thought that anyone would sensibly class teaching as commercial; and for things like writing a book – why not ask? They’ll either agree it can be used in situation X, or not. Or ask for a charitable donation. Or a postcard. Or a small payment. Aren’t we forgetting that human communication can deal with nuance a whole lot better than a standard set of words on a page?

I understand the sentiments and have shared them in the past. In fact I started this blog as CC-NC, but then moved to CC-BY (we may have lost that in the current blog-move but rest assured this blog is CC-BY).

What is the motivation for NC? I can see the following:

  • To create a monopoly for me to exploit the work. This is still, I think, a valid motive in the creative arts, but not in scholarly work.
  • Because I don’t like the commercial sector.
  • To attract other volunteers to the project.

Addressing some of the points:

  • There is every evidence that for code specifically, NC is less useful than BY. If BY had constrained adoption, the community would have moved to NC – but NC prevents huge take-up by other sectors.
  • If I run a workshop, and you visit and we charge you a registration fee, that’s commercial. When lecturers provide lecture notes they have to pay publishers to use copyright material. That’s a commercial transaction (in the opposite direction). When students pay fees to a private university that’s a commercial activity.
  • Why not ask? Because most of the time you don’t get a reply. And that’s true of publishers – we’ve been waiting years for a reply from two chemical publishers (not RSC). And software cannot ask people; human activity doesn’t scale. When I abstract 100,000 documents I cannot write to every author and every publisher. I can only go by the licence, and if it’s NC I can’t use the material.

There are many other reasons why NC doesn’t work. Here’s a good resource http://freedomdefined.org/Licenses/NC :

The key problems with -NC licenses are as follows:

  • They make your work incompatible with a growing body of free content, even if you do want to allow derivative works or combinations.
  • They may rule out other basic and beneficial uses which you want to allow.
  • They support current, near-infinite copyright terms.
  • They are unlikely to increase the potential profit from your work, and a share-alike license serves the goal to protect your work from exploitation equally well.

I converted from CC-NC to CC-BY and haven’t regretted it. Why should we prevent commercial exploitation of our work? Content will become zero-cost in the future – it’s what is done with it that matters.

 

 

Should I publish Open Access?

In a reply to the last post Pablo (a leader of the Quixote project) asks:

Pablo Echenique says:

December 10, 2010 at 5:12 pm

I have another question for you, Peter. I have thought about emailing you but maybe this is a better place:

I would like that all my publications go to open access journals, like the PLoS ones… but… there are so few! and many of them are low impact factor, which may make more difficult my future career and, specially, that of my youngest collaborators, who still do not have a permanent position.

What are your views on this?

Thank you.

I think millions of young people are asking this and it’s a very difficult question. As you read this remember I am 0105 years old and so I cannot give completely objective advice. In this area I do not force my ideas on my coworkers. Left to myself I will publish in Open Access – with other authors I am fairly quiet.

Firstly, why publish? There are a number of possible reasons:

  1. To record one’s work and to get priority
  2. To communicate your work to others
  3. To offer your work for peer-review (whether formal or not)
  4. To receive merit from the community (e.g. “citations”)
  5. To preserve your work for posterity
  6. To fulfil various obligations (e.g. to funders)

     

In some cases (6) gives you no choice. Wellcome, NIH and many research councils require Open Access publishing. If you don’t like it, you don’t have to take their funding. I’m guessing that (5) is not at the top of young people’s lists – after all, young people are immortal. And almost every publication satisfies (1), although I have published in a journal (the Internet Journal of Chemistry) and I think the papers are lost.

So it comes down to (2), (3) and (4).

Let’s first dispose of peer-review. PR is given for free by academics and others. There is no evidence that Closed Access journals have better peer-review than Open Access, or vice versa. (Yes, the ludicrous PRISM of Closed Access publishers lambasted Open Access as “junk science”, but no honest Closed Access publisher will take that view.) Your fees (whether author-side or reader-side) go to the management of PR, not PR itself.

So the decision rests on “do you want people to read your work?” and “do you wish people to rank your work?”.

It’s obvious that if an article is Open Access there are more people who can potentially read it. Many people are put off by the hassle of reading Closed Access (e.g. if you have to go through some paywall). And many people are put off by the actual cost: 30 USD for 48 hours’ rental is very high. Moreover, if you do not know what is in the paper before you read it you may decide not to read it. After all, even a 10-second glance at a paper can tell you it’s of no relevance, and it still costs 30 USD (or more). How many times have you (a fortunate university reader) glanced at a paper for 10 seconds and then moved on?

However, in the information-saturated world you can’t read everything, and traditionally journals have been a way of bundling content into packets for particular readers. In the electronic and multidisciplinary world this is no longer necessary (although it’s still common). So journals have become branding labels. They are a simplistic way of saying “this paper is better than that paper”. It’s a bit like Gramophone records used to be. Or book publishers. A very blunt approach, but it had its supporters. So we’ve moved to a situation where scientists follow brands rather than make rational decisions. The university system reinforces this. People get promoted if they have a NatSci paper as opposed to PLoS. And the publishing houses can make a lot of money out of promoting brands. Bibliometrics shows that one publishing house not far from Kings Cross has done exceptionally well in promoting its brand across all sorts of disciplines. Does this mean that their papers are better, or simply that their marketeers are better? Why do people buy one fragrance as opposed to another? Or any other fashion accessory? It’s not the raw value of the item – it’s the perception that has been built up.

So in my opinion the scientific publishing market is based on perception rather than value. But what about citations? Well, citations are a very, very blunt tool. They come after the fact, they often don’t recognize new or controversial value, they are subject-biased and they can be heavily slanted towards – say – methods. Worse, the Impact Factor (how many academics voted to introduce impact factors?) is an average over a journal. It flattens and distorts the individual.

All this is known, but not widely enough.

This will change. The first change will be that we become good at discovering individual papers and measuring their value. Journals become irrelevant if (but only if) the academic world wakes up and stops kowtowing to this out-of-date concept of a journal. In which case *where* you publish should not matter as far as readership is concerned, except that if it’s Open Access it will have more readers. However, the CA publishers will react against this and I would predict a greater introduction of restrictive contracts with libraries. For example, not allowing access to “journal X” unless you also buy Y. Or increasing charges because more people read the material (I have heard this is starting to come in. Resist it with your life). We now see greater pressure on library budgets.

We are in a prisoner’s dilemma. It’s clear that universal Open Access is superior for humanity in general (except for shareholders of some companies who will start to miss out). But there is no easy, smooth path there. Change puts greater financial pressure on all players.

In the best of all possible worlds I’d like to see the role of publishers diminish sharply and academia reclaim what it produces and owns. I’m not sanguine. Vice-Chancellors and Principals fight against each other. They could, if they wished, redesign the system so money was more efficiently spent and scholarship was published more widely. But I doubt they will. So I predict continuing mess, fewer scientists reading publications, and even fewer of the general public reading them.

In this broken world, Pablo, I don’t know what should be done. I think there’s a chance of a grass-roots Open revolution, where people move away from Closed Access. Many other sectors are becoming Open – academia may be seen as an unacceptable anachronism. When students (in the UK) riot about the cost of fees, why shouldn’t they riot against expensive publications? (Lecturers cannot copy their own papers for students to use without paying fees.)

The positive force is that people’s work will become known by means other than their publications. For informal recognition you will become known for Quixote – and hopefully widely. I communicate to more people through this blog than through papers. It doesn’t work for everyone, but it’s an increasing trend.

The real problem is (4). The authors of the Blue Obelisk software are widely known and highly regarded. Si monumentum requiris, circumspice: Christopher Wren is known for his cathedrals, not his academic publications. Joe Townsend’s Chem4Word has had 250,000 downloads. But that doesn’t even equal one citation in the sad world of academia.

 

I think and hope that aspiring young scientists will buck the system and publish where and how they feel fit. I hope, with less conviction, that academia will value that.


 

“What free software licence should I use?”

I often ask questions and answer them on Stack Overflow – an Open site available to anyone who has programming questions. It’s so successful that over about 2-3 years it has had over 1 MILLION questions. Yes, one million. That’s a wonderful example of crowdsourcing meeting a need.

One of its features is a leaderboard/merit-points system. It’s very well done and it’s addictive – so if you are thinking of a crowdsourcing project it’s worth considering building one in. I get periods of addiction! Anyway, here’s one question I have added an answer to:

http://stackoverflow.com/questions/4406755/what-free-software-license-should-i-use/

Hello,

I’m creating a program which i would like to let other users use it for free. (Its a reusable library (C/C++)) and need to know what license would be suitable for this project.

  • The library should be used for non commercial purposes, commercial use should not be allowed.
  • The library should retain all copyright notices (That i created it), but not in a way that says i’m re distributing it.
  • No warranty of any type what so ever.

Would anyone be able to suggest a free software license suitable for this?

These are reasonable requests, but several people have answered that non-commercial is not compatible with Free (== libre) software: you cannot limit fields of endeavour. That was an excellent decision, and Richard Stallman deserves great credit for being firm about the need for complete Openness. Here’s my answer (http://stackoverflow.com/questions/4406755/what-free-software-license-should-i-use/4406921#4406921):

You can, in principle, write any licence you like as long as it does not violate the laws of your country and the countries that it will be used in (e.g. you must not break discrimination laws). However writing your own licence is normally a bad idea. Some organizations do this, but they take extensive legal advice and it’s costly.

Therefore most people (rightly) choose from existing and well-tried licences. Most of the common ones are OSI-compliant and this means that there is no restriction on field of endeavour (i.e. they can be used for commercial purposes and they can be used for military purposes, etc.).

AFAIK there are no common non-commercial licences for software and I’d ask you to consider dropping this condition. There’s a purely pragmatic argument – “what is commercial?”. Is teaching commercial? Possibly. Is writing a book commercial? Certainly. And so on.

I am intimately involved with the Open Knowledge Foundation and we cover a number of types of material – software, data, media, etc. We feel that the only reasonable approach is to avoid the NC condition. The motivation is understandable, but it doesn’t actually work.

Be brave and drop it. I don’t think you’ll regret it. It will certainly be less hassle than writing your own licence.

As I’ve said Non-Commercial in scholarship and research causes many problems and IMO solves none. Don’t do it.

The biggest mistake that the Open Access movement made was not to think out the practicalities of practice. There were vague terms like “light-green” or “free” which were never precisely defined. This has landed the movement in all sorts of mess, which many publishers have taken advantage of – charging academics for rights which are poorly defined and may, in fact, not be legally enforceable. It’s taken ten years to come to the realisation that we need licences and we need to differentiate between gratis (“beer”) and libre (“speech”).

Openness and freedom need definition!

Wikileaks – (Web) democracy is in the balance; WE must act:

We are now in the middle of a defining point in human history – an increasing struggle between those who believe that information should be free and those who wish to control it – for many reasons (political, commercial, religious). Nothing fundamental has changed – this country like many others has millennia of history of protest. I have been brought up in traditions where – ultimately – people may have to suffer for their beliefs.

If you haven’t time to read my apologia – just go to the petition… http://www.avaaz.org/en/wikileaks_petition/?vl

One of our fundamental rights – trial by jury – was won by those who refused to accept the arbitrary power of the state. From Wikipedia:

[in 1670] Penn demonstrated no remorse for his aggressive stance and vowed to keep fighting against the wrongs of the Church and the King. For its part, the Crown continued to confiscate Quaker property and put thousands of Quakers in jail. From then on, Penn’s religious views effectively exiled him from English society; he was sent down (expelled) from Christ Church, Oxford for being a Quaker, and was arrested several times. Among the most famous of these was the trial following his 1670 arrest with William Meade. Penn was accused of preaching before a gathering in the street, which Penn had deliberately provoked in order to test the validity of the new law against assembly. Penn pleaded for his right to see a copy of the charges laid against him and the laws he had supposedly broken, but the judge (the Lord Mayor of London) refused – even though this right was guaranteed by the law. Furthermore, the judge directed the jury to come to a verdict without hearing the defence.[51]

Despite heavy pressure from the Lord Mayor to convict Penn, the jury returned a verdict of “not guilty”. When invited by the judge to reconsider their verdict and to select a new foreman, they refused and were sent to a cell over several nights to mull over their decision. The Lord Mayor then told the jury, “You shall go together and bring in another verdict, or you shall starve”, and not only had Penn sent to jail in loathsome Newgate Prison (on a charge of contempt of court), but the full jury followed him, and they were additionally fined the equivalent of a year’s wages each.[52][53] The members of the jury, fighting their case from prison in what became known as Bushel’s Case, managed to win the right for all English juries to be free from the control of judges.[54] This case was one of the more important trials that shaped the future concept of American freedom (see jury nullification)[55] and was a victory for the use of the writ of habeas corpus as a means of freeing those unlawfully detained.

I was brought up in this tradition and was prepared to go to jail rather than be conscripted into the armed forces. One of my scientific and spiritual heroes, Kathleen Lonsdale (the first female member of the Royal Society) went to prison during the war for refusing to watch for enemy aircraft. So if Julian Assange is jailed, it is in a long tradition of protest and reform through personal sacrifice.

For the last 10 years I have expected this battle for freedom to appear. It’s not just the Internet – it’s the control of thought through the new media. The misuse of copyright for commercial control. The digital goldrush where large corporations can claim rights to the public domain. It’s been bubbling for several years and now it’s erupted.

It is difficult to see a middle ground. Either information is free, or we are information slaves – able to do and say only what our masters say. The Military-Industrial-Media complex, perhaps?

Ultimately Penn won through the good sense of individuals in the judicial system in this country. The supreme courts will have to decide our question. Before they do I expect that people will go to jail.

Here’s the justification for the Wikileaks petition.

=========================================================
Dear friends,

The chilling intimidation campaign against WikiLeaks (when they have broken no laws) is an attack on freedom of the press and democracy. We urgently need a massive public outcry to stop the crackdown — let’s get to 1 million voices and take out full page ads in US newspapers this week!


The massive campaign of intimidation against WikiLeaks is sending a chill through free press advocates everywhere.

Legal experts say WikiLeaks has likely broken no laws. Yet top US politicians have called it a terrorist group and commentators have urged assassination of its staff. The organization has come under massive government and corporate attack, but WikiLeaks is only publishing information provided by a whistleblower. And it has partnered with the world’s leading newspapers (NYT, Guardian, Spiegel etc) to carefully vet the information it publishes.

The massive extra-judicial intimidation of WikiLeaks is an attack on democracy. We urgently need a public outcry for freedom of the press and expression. Sign the petition to stop the crackdown and forward this email to everyone — let’s get to 1 million voices and take out full page ads in US newspapers this week!

http://www.avaaz.org/en/wikileaks_petition/?vl

WikiLeaks isn’t acting alone — it’s partnered with the top newspapers in the world (New York Times, The Guardian, Der Spiegel, etc) to carefully review 250,000 US diplomatic cables and remove any information that it is irresponsible to publish. Only 800 cables have been published so far. Past WikiLeaks publications have exposed government-backed torture, the murder of innocent civilians in Iraq and Afghanistan, and corporate corruption.

The US government is currently pursuing all legal avenues to stop WikiLeaks from publishing more cables, but the laws of democracies protect freedom of the press. The US and other governments may not like the laws that protect our freedom of expression, but that’s exactly why it’s so important that we have them, and why only a democratic process can change them.

Reasonable people can disagree on whether WikiLeaks and the leading newspapers it’s partnered with are releasing more information than the public should see. Whether the releases undermine diplomatic confidentiality and whether that’s a good thing. Whether WikiLeaks founder Julian Assange has the personal character of a hero or a villain. But none of this justifies a vicious campaign of intimidation to silence a legal media outlet by governments and corporations. Click below to join the call to stop the crackdown:

http://www.avaaz.org/en/wikileaks_petition/?vl

Ever wonder why the media so rarely gives the full story of what happens behind the scenes? This is why – because when they do, governments can be vicious in their response. And when that happens, it’s up to the public to stand up for our democratic rights to a free press and freedom of expression. Never has there been a more vital time for us to do so.

With hope,
Ricken, Emma, Alex, Alice, Maria Paz and the rest of the Avaaz team.

SOURCES:

Law experts say WikiLeaks in the clear (ABC)
http://www.abc.net.au/worldtoday/content/2010/s3086781.htm

WikiLeaks are a bunch of terrorists, says leading U.S. congressman (Mail Online)
http://www.dailymail.co.uk/news/article-1333879/WikiLeaks-terrorists-says-leading-US-congressman-Peter-King.html

Cyber guerrillas can help US (Financial Times)
http://www.ft.com/cms/s/0/d3dd7c40-ff15-11df-956b-00144feab49a.html#axzz17QvQ4Ht5

Amazon drops WikiLeaks under political pressure (Yahoo)
http://news.yahoo.com/s/afp/20101201/tc_afp/usdiplomacyinternetwikileakscongressamazon

“WikiLeaks avenged by hacktivists” (PC World):
http://www.pcworld.com/businesscenter/article/212701/operation_payback_wikileaks_avenged_by_hacktivists.html

US Gov shows true control over Internet with WikiLeaks containment (Tippett.org)
http://www.tippett.org/2010/12/us-gov-shows-true-control-over-internet-with-wikileaks-containment/

US embassy cables culprit should be executed, says Mike Huckabee (The Guardian)
http://www.guardian.co.uk/world/2010/dec/01/us-embassy-cables-executed-mike-huckabee

WikiLeaks ditched by MasterCard, Visa. Who’s next? (The Christian Science Monitor)
http://www.csmonitor.com/Innovation/Horizons/2010/1207/WikiLeaks-ditched-by-MasterCard-Visa.-Who-s-next

Assange’s Interpol Warrant Is for Having Sex Without a Condom (The Slatest)
http://slatest.slate.com/id/2276690/


Support the Avaaz community! We’re entirely funded by donations and receive no money from governments or corporations. Our dedicated team ensures even the smallest contributions go a long way — donate here.

Do you love books? Get involved! Bibliography wants to be Open

#jiscopenbib

Books are part of the lifeblood of our culture. Their content, their physical form, their impact continues to entrance us. (Yesterday an Audubon was sold for several million). You don’t need to be a librarian or an academic to love books. I am sure that many of you have carefully sorted your books by size, domain, condition, etc. and I’d guess that some of you actually have an index. That’s not just an index, it’s a BIBLIOGRAPHY!

We now have a wonderful resource in the British National Bibliography. This is an index of most of the most important books – over 3 million of them. If you love books here’s your chance to get involved. From http://openbiblio.net/2010/12/06/jisc-openbibliography-development-ideas/ where Mark McGillivray presents the opportunity:

Now that we have a queryable British National Bibliography dataset, we are investigating useful functionality to take advantage of the data.

The team have listed a few development ideas based both on our own interests and on discussion with others in the community:

  1. flagging – attaching notes to bibliographic records highlighting possible updates
  2. wikipedia – link to Wikipedia by author / title / ISBN for further information
  3. book crossing – search an ISBN, find where a copy of it is available
  4. public libraries – search by ISBN and find out which local public library it is in
  5. exporting records – for example to bibtex
  6. google scholar lookup

We are moving forward with these, however we know that it is not possible for us to guess all the uses that the community might find for such data, so we would appreciate further comments and new ideas. It would be great to have a list of use cases that are valued by the community, and to enable as many of them as possible by project end.
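As a flavour of idea 5 (exporting records), here is a minimal illustration of our own – the field names are invented, not the BNB’s actual schema – turning a simple flat record into a BibTeX entry:

    def to_bibtex(record, key=None):
        """Render a simple flat bibliographic record as a BibTeX @book entry.
        The field names are illustrative, not the BNB's actual schema."""
        key = key or (record.get("author", "anon").split(",")[0] + str(record.get("year", "")))
        fields = [(k, record[k]) for k in ("title", "author", "publisher", "year", "isbn") if k in record]
        body = ",\n".join("  %s = {%s}" % (k, v) for k, v in fields)
        return "@book{%s,\n%s\n}" % (key.replace(" ", ""), body)

    example = {"title": "An Example Book", "author": "Author, Ann", "year": "2010"}
    print(to_bibtex(example))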

If you are interested in the Semantic Web, Linked Open Data, etc. and are looking for a project, then Open Bibliography is a great place to start. It’s heavily supported by identifier systems – and this is both a good thing and a bad thing. It’s got a lot of excellent bibliographic data and it’s got some not-quite-so-good data. Bibliographic data is entered by humans and humans show variability!

We are starting to get other Open bibliographies. If you are involved with a library, make your data available.

The point of Open Bibliography is NOT to create one-big-clean-universal-bibliography. It’s to build a system that can relate different bibliographies to each other. Here’s a great post by John Wilkin from Michigan (http://blog.okfn.org/2010/11/29/open-bibliographic-data-how-should-the-ecosystem-work/ )

The problem with both the arguments OCLC makes and many of the arguments for openness seem to be predicated on the view that bibliographic data are largely inert, lifeless “records” and that these records are the units that should be distributed and consumed.

Nothing could be further from the truth. Good bibliographic data are in a state of fairly constant, even if minor, flux. There are periodic refinements to names and terms (through authority work), corrections to or amplifications of discrete elements (e.g., dates, titles, authors), and constant augmentation of the records through connection with ancillary data (e.g., statements about the copyright status of the specific manifestation of the work).

In fact, bibliographic data are the classic example of data that need to live in the linked data space, where not only constant fixes but constant annotation and augmentation can take place. That fact and the fact that most of the bibliographic data we have has been created through a kind of collaborative paradigm (e.g., in OCLC’s WorldCat) makes the OCLC position all the more offensive.

Locking bibliographic data up, particularly through arguments around community norms, means that they won’t be as used or as useful as they might be, and that we will rarely receive the benefits of community in creating and maintaining them. The way these data are often used when shared, however, makes the hue and cry of the other side, which essentially says “give me a copy of your data,” all the more nonsensical: by disseminating these records all over the networked world, we undermine our collective opportunities.

[…]

By walling off the data, we, the members of the OCLC cooperative, lose any possibility of community input around a whole host of problems bigger than the collectivity of libraries: Author death dates? Copyright determination? Unknown authors or places of publication?

These problems can best be solved by linked data and crowd-sourcing. And all of this should happen with a free and generous flow of data. OCLC should define its preeminence not by how big or how strong the walls are, but by how good and how well-integrated the data are. If WorldCat were in the flow of work, with others building services and activities around it, no one would care whether copies of the records existed elsewhere, and most of the legitimate requests for copies of the records would morph into linked data projects.

The role of our library community around the data should not be that we are the only ones privileged to touch the data, but that we play some coordinating management role with a world of very interested users contributing effort to the enterprise.

On the other hand, every time someone says this is a problem that should be solved by having records all over the Internet like so many flower seeds on the wind, I see a “solution” that produces exactly what the metaphor implies, a thousand flowers blooming, each metaphorical flower an instance of the same bibliographic record.

What is being argued is that having bibliographic records move around in this way is the sine qua non and even the purpose of openness. When we do that, instead of the collective action we need, we get dispersed and diluted action. Where we need authority, we get babel.

[…]

I wanted to use this blog forum as an opportunity to make this point, and also, seemingly incongruously, to announce the availability of nearly 700,000 records from the University of Michigan catalog with a CC-0 license, records that can also be found in OCLC. They are now available here: http://www.lib.umich.edu/open-access-bibliographic-records (CKAN package for the Michigan records).

[…]

That said, I believe having the records out there will stimulate even more discussion about the value of openness and the role of OCLC. I’ll have my staff update the file periodically, and in the next release will add the CC-0 mark to the records themselves. I hope the records prove useful to all sorts of initiatives, but I also hope that their availability and my argument helps spur more collective action around solving these problems through linking and associated strategies of openness, and not through file sharing.

There’s a lot more in John’s post, particularly about the role of OCLC (http://en.wikipedia.org/wiki/Online_Computer_Library_Center – the “O” used to be Ohio). The problem is common to many fields (and chemistry is a good example). An organization was set up in the 20th century to abstract and manage the world’s data and metadata. The org did a good job, but it needed lots of human input and set up a business model which required charging for products and services. Because this is a large task, only one such organization is usually created (actually chemistry has two), and it then effectively becomes a monopoly. And, by 1993 (WWW0), it starts to become inefficient and out-of-date. The language in John’s post is exactly the same as for any other abstracting service – books, law, medicine, chemistry, citations, etc. The organization must change, or face increasing bottom-up challenge.

Because in the Internet era we have web democracy. Yesterday in the UK Clay Shirky and (???) were debating on Newsnight (http://www.bbc.co.uk/blogs/newsnight/fromthewebteam/2010/12/tuesday_7_december_2010.html ) the arrest of Julian Assange of Wikileaks. (???) argued that information wanted to be free and Shirky pointed out that Assange was being curtailed by non-democratic and non-legal methods.

So bibliography wants to be free. If OCLC resists that it will perish in the bottom-up web revolution.

Unless of course the web itself is destroyed. And we all have to be vigilant.
