scifoo: academic publishing and what can computer scientists do?

Jim Hendler has summarised several scifoo sessions related to publishing and peer-review and added thoughts for the future (there’s more to come).  It’s long, and I didn’t feel anything could be selectively deleted, so I’ve kept only the last paragraph, which has a slight change of subject – speculation on what computer scientists could do to help.

15:16 14/08/2007, Planet SciFoo
Here’s a pre-edited preprint of my editorial for the next issue of IEEE Intelligent Systems. I welcome your comments – Jim H.
=======================
[… very worthwhile summary snipped …]
I believe it is time for us as computer scientists to take a leading role in helping to create innovation in this area. Some ideas are very simple, for example providing overlay journals that link already existing Web publications together, thus increasing the visibility (and therefore impact) of research that cuts across fields. Others may require more work, such as exploring how we can easily embed semantic markup into authoring tools and return some value (for example, automatic reference suggestions) via the use of user-extensible ontologies. In part II of this editorial, next issue, I’ll discuss some ideas being explored with respect to new technologies for the future of academic communication that we as a field may be able to help bring into being, and some of the obstacles thereto. I look forward to hearing your thoughts on the subject.

PMR: I’d love to see some decent semantic authoring tools – and before that, just some decent authoring tools. For example, I had hoped to contribute code and markup examples to this blog and I simply can’t. Yes, there are various plugins, but I haven’t got them to work reliably. So the first step is syntactic wikis, blogs, etc. We have to be able to write code in our blogs as naturally as we create it in – say – Eclipse. To have it checked for syntax. To allow others to extract it. And the same goes for RDF and MathML. SVG is a disaster: I hailed it in 1998 as a killer app, and 9 years later we are still struggling to get it working in the average browser. These things can be done if we try hard enough, but we shouldn’t have to try.
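As a rough sketch of what “allow others to extract it” could look like – assuming, purely for illustration, a convention that code samples in a post are wrapped in <pre class="code"> elements (an invented convention, not an existing standard), and that the post itself is well-formed XHTML – a few lines of XOM are enough:

import nu.xom.Builder;
import nu.xom.Document;
import nu.xom.Nodes;

// Sketch only: pull code samples out of a blog post so readers (or machines)
// can reuse them. The <pre class="code"> convention and the URL are hypothetical.
public class CodeExtractor {

    public static void main(String[] args) throws Exception {
        // XOM insists on well-formed XML, which is itself a crude syntactic
        // check on the page as a whole.
        Builder builder = new Builder();
        Document post = builder.build("http://example.org/blog/some-post.xhtml");

        // Find every <pre class="code"> element, ignoring namespaces for brevity.
        Nodes samples = post.query("//*[local-name()='pre'][@class='code']");

        for (int i = 0; i < samples.size(); i++) {
            System.out.println("=== sample " + (i + 1) + " ===");
            System.out.println(samples.get(i).getValue());
        }
    }
}

Nothing clever – which is the point. If the authoring tools produced clean markup in the first place, extraction and syntax checking would be this cheap.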
It’s even more difficult to create and embed semantic chemistry (CML) and semantic GIS. But these are truly killer apps. The chemical blogosphere is doing its best with really awful baseline technology. Ideas such as embedding metadata in PNGs are better than nothing, but almost certain to decay within a year or so. Hiding stuff in PDFs? Hardly semantic. We don’t even have a portable mechanism for transferring compound HTML documents reliably (*.mht and so on).  So until we have solved some of this I think the semantic layer will continue to break. The message of Web 2.0 is that we love lashups and mashups, but it is not yet clear that this scales to formal semantic systems.
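To make “embed semantic chemistry” concrete, here is a minimal sketch (the CML and XHTML namespaces are real; the water molecule and the page around it are only illustrative) of building a compound document with XOM so the chemistry stays machine-readable inside the page rather than being flattened into an image:

import nu.xom.Attribute;
import nu.xom.Document;
import nu.xom.Element;
import nu.xom.Serializer;

// Sketch only: an XHTML page carrying a CML fragment in place.
public class EmbedCml {

    static final String XHTML = "http://www.w3.org/1999/xhtml";
    static final String CML = "http://www.xml-cml.org/schema";

    public static void main(String[] args) throws Exception {
        Element html = new Element("html", XHTML);
        Element body = new Element("body", XHTML);
        html.appendChild(body);

        Element p = new Element("p", XHTML);
        p.appendChild("The structure discussed in this post is water:");
        body.appendChild(p);

        // The semantic payload: a CML molecule, readable by software as well as people.
        Element molecule = new Element("cml:molecule", CML);
        molecule.addAttribute(new Attribute("id", "h2o"));
        Element atomArray = new Element("cml:atomArray", CML);
        molecule.appendChild(atomArray);
        String[][] atoms = { { "a1", "O" }, { "a2", "H" }, { "a3", "H" } };
        for (String[] a : atoms) {
            Element atom = new Element("cml:atom", CML);
            atom.addAttribute(new Attribute("id", a[0]));
            atom.addAttribute(new Attribute("elementType", a[1]));
            atomArray.appendChild(atom);
        }
        body.appendChild(molecule);

        // Pretty-print the compound document.
        Serializer out = new Serializer(System.out, "UTF-8");
        out.setIndent(2);
        out.write(new Document(html));
    }
}

Producing the markup is easy; the problem is that no mainstream browser or authoring tool will preserve it, render it, or let a reader get it back out again.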
What’s the answer? I’m not sure, since we are in the hands of the browser manufacturers at present and they have no commitment to semantics. They are focussed on centralised servers providing for individual visitors. It’s great that blogs and wikis can work with current browsers, but they work in spite of the browsers rather than being enabled by them. The trend is towards wikis and blogs mounted on other people’s sites rather than our own desktops, instead of enabling the power of the individual on their own machine.
Having been part of the UK eScience program (== cyberinfrastructure) for 5 years I’ve seen the heavy concentration on “the Grid” and very little on the browser. My opinion is that the middleware systems developed are too heavy for innovation. Like good citizens we installed SOAP, WSDL etc. and then found we couldn’t share any of it – the installation wasn’t portable. So now we are moving to a much lighter, more rapid environment based on minimalist approaches such as REST: RDF rather than SQL, XOM rather than DOM, and a mixture of whatever scripts and templating tools fit the problem – but with a basic philosophy that we need to build it with sustainability in mind.
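For contrast with the SOAP/WSDL stack, here is a rough sketch of the lighter style – the resource URL is hypothetical – where the entire “client” is an HTTP GET plus an XML parse, with no WSDL, no generated stubs and nothing to install:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import nu.xom.Builder;
import nu.xom.Document;

// Sketch only: a minimalist REST client. The URL and resource are hypothetical.
public class RestClient {

    public static void main(String[] args) throws Exception {
        URL resource = new URL("http://example.org/compounds/benzene.cml");
        HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/xml");

        // The whole "service contract" is: GET the URL, read the XML.
        InputStream in = conn.getInputStream();
        try {
            Document doc = new Builder().build(in);
            System.out.println("Root element: " + doc.getRootElement().getQualifiedName());
        } finally {
            in.close();
        }
    }
}

The point is not that this is sophisticated – it isn’t – but that anyone can run it, share it and rebuild it without a heavyweight installation.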
The Grid suits communities already used to heavy engineering – physics, space, etc. But it doesn’t map onto the liberated Web 2.0. An important part of the Grid was controlling who could do what, where. The modern web is liberated by assuming that we live our informatics lives in public. Perhaps the next rounds of funding should place more emphasis on enabling individuals to share information.
