petermr's blog

A Scientist and the Web

 

Archive for the ‘scifoo’ Category

What do we mean by open science?

Friday, August 24th, 2007
There seems to be a critical mass of activity in the Open Science camp – possibly sparked off (or at least amplified) by scifoo. Here is a very useful summary from Bill Hooker (Timo, invite Bill to scifoo next year). Bill missed the Second Life event (so did I and I’m disappointed, but I really had other things to do)…

(Addressed in absentia to “Tools for Open Science”, Second Life, Aug 20 2007. I am sorry I could not be there.) I think we all know what we want, and I think we all want much the same thing, which boils down to just this: cooperation. A way forward for science, a way out of the spiralling inefficiency of patent thickets, secret experiments and dog-eat-dog competition. But we use a variety of terms, and probably mean slightly different things even when we use the same terms. It might — I am not sure — be useful at this point to come together on an agreed definition for an agreed term or set of terms — something equivalent to the Berlin/Bethesda/Budapest Open Access Declarations.

If this does not seem like a “tool for open science”, consider what the BBB definition has done for Open Access. It provides cohesion, a point of reference and a standard introduction for newcomers, and acts as a nucleation center for an effective movement with clear and agreed goals. Since this SL session takes off from SciFoo, and SciFoo is by all accounts very good at converting brainstorming sessions into practical outcomes, I thought perhaps the idea of a definition or declaration of Open Science might be a suitable topic. In what I hope is the spirit of SciFoo, here are some ideas that might be useful in such a discussion.

Terms

Whatever this thing is, what should we call it? There are a number of terms in use:

  • Open Science — has the weight of Creative Commons/Science Commons behind it, via iCommons
  • Open Source Science — Jamais Cascio, Chemists Without Borders
  • Open Source Biology — Molecular Biosciences Institute
  • I think “biology” is too narrow — there seems little point in Open Chemistry, Open Microbiology, Open Foo all having different names. I think “Open Source Foo” is too likely to lead to confusion with software initiatives, and too likely to lead to pointless arguments about what the “source code” is.
  • That leaves Open Science, which would be my choice for an umbrella term. A case can be made, though, for Open Research, on the same basis on which I argue against Open Biology etc — see this comment from Matthias Röder
  • Another “inclusive” possibility is to focus on information — Open Data, as per PMR’s wikipedia entry, or the broader Open Content. In the same vein, the Open Knowledge Foundation provides a fairly comprehensive definition of Open Knowledge.
  • I have seen “Science 2.0” around quite a bit lately, though it’s a bit too marketing-speak for my taste
  • Open Notebook Science is a very specific subset of Open Science: if your notebook is open to the world, there’s not much confusion about access barriers! It even comes with its own motto: “no insider information”. This is as Open as Open gets.

Sources and Models

We don’t have to re-invent the wheel:

Flexibility

We don’t want to start a cult, and we don’t want to bog anyone down in semantics. There’s no purity test or loyalty oath. My own view is that Open Science (or whatever we end up calling it) is not an ideology but an hypothesis: that openly shared, collaborative research models will prove more productive than the highly competitive “standard model” under which we now operate.

Openness in scientific research covers a range of practices, from tentative explorations with a single small side-project all the way to Open Notebook Science à la Jean-Claude, and we should welcome every step away from the current hypercompetitive model. Open Notebook Science provides a useful marker for the Open end of the spectrum; perhaps all a Declaration need do is identify the minimum requirements that mark the other end of the spectrum?


Conditions

What standards must a research project or programme meet in order to be considered Open?

  • obvious: Open Access publication
  • equally crucial: Open Data, that is, raw data as freely available (including machine access) as OA text
  • probably indispensable: Open Licensing so as to avoid confusion as to what is truly available and for what purposes; as per Peters Suber and Murray-Rust, this must be
    • explicit
    • conspicuous
    • machine-readable
  • Open Semantics: perhaps none of this will be much good without metadata and standards to allow interoperability and free flow of information
  • desirable: Free/Open Source Software
  • David Wiley: “four Rs” of Open Content (cf. Stallman’s four fundamental freedoms for software):
    • Reuse – Use the work verbatim, just exactly as you found it
    • Rework – Alter or transform the work so that it better meets your needs
    • Remix – Combine the (verbatim or altered) work with other works to better meet your needs
    • Redistribute – Share the verbatim work, the reworked work, or the remixed work with others
  • OKF definition of Open Knowledge

PMR: This is really useful. I can’t think of significant alterations. No-one is suggesting that science is altruistic – it can be hard and cruel as well as beautiful. And science doesn’t care who wins, but knows that the more who play by the rules the greater the progress and enlightenment.

Open availability of tools, methods, specimens, results, recipes, codes, data, etc. MUST enhance science. Not providing them simply impoverishes the field and provides personal gain at the expense of the rest. Scientists are people and they want to succeed personally.

I am very fortunate that the scientists I have known and who have acted as my mentors have been fantastic people. They have nurtured younger scientists, built a sense of community, fostered international science, cared about the human race. That is not a necessary part of science, but it is sufficiently common that it is worth striving for even if, occasionally, it leads to a non-optimal decision in the prisoner’s dilemma.

scifoo: Cameron Neylon on Open Notebook Science

Friday, August 24th, 2007

More on Open Science from Jean-Claude Bradley. It’s sad to see how paper-driven we have become. It’s critical to publish, but I continually sense an increasing pressure of “I need a paper – what’s the most cost-effective way of getting one?” This is Jean-Claude on Cameron Neylon:

22:22 23/08/2007, Jean-Claude Bradley, Useful Chemistry
There has been a lot of discussion lately about the philosophy of Open Science in general terms.

This is certainly worthwhile but I think it is even more interesting to discuss the mechanics of its implementation. That is what I was trying to push a little more by setting up the “Tools of Open Science” session on SciFoo Lives On.

That’s why I’ve been very impressed by Cameron Neylon’s recent posts in his blog “Science in the Open”.

He has been discussing details of the brand of Open Science that interests me most: Open Notebook Science, where a researcher’s laboratory notebook is completely public.

Cameron has been looking at how our UsefulChem experiments could be mapped onto his system and this has sparked off some interesting discussion. I am becoming more convinced than ever that the differences between how scientific fields and individual researchers operate are much deeper than we usually assume.

By focussing almost entirely on the sausage (traditional articles), we tend to forget just how bloody it actually is to make it and we probably assume that everybody makes their sausage the same way.

The basic paradigm of generating a hypothesis then attempting to prove it false is certainly a cornerstone of the scientific process, but it is not the whole story. However, after reading a lot of papers and proposals, one gets the impression that science is done as an orderly repetition of that process.

What I have observed in my own career, after working and collaborating with several chemists, is that most of the experiments we do are done for the purpose of writing papers! The reasoning is that if it is not published in a journal, it never happened. This often leads to the sunk-cost syndrome, similar to a gambler throwing good money after bad, trying to win back his initial loss.

After a usually brief discovery phase, the logical scientist will try to conceive of the fewest experiments (preferably of lowest cost and difficulty) needed to obtain a paper. In this system, as in a courtroom, an unambiguous story and conclusion is the preferred outcome. Reality rarely cooperates that easily, and that is why the selection of experiments to perform is truly an art form.

We’re currently going through that process. We have an interesting result observed for a few compounds and a working hypothesis. That’s not enough for a paper in my field. We cannot prove the hypothesis without doing an infinite number of experiments but we are expected to make a decent attempt at trying to falsify it. I know from experience roughly the number of experiments we need with clear cut outcomes to write a traditional paper.

So how much more value to the scientific community is that paper relative to the single experiment where this effect was first disclosed on our wiki then summarized on our blog?

Is this really the most efficient system for doing science or is this the tail wagging the dog?

When the scientific process becomes more automated, I predict that the single experiments will be of more value than standard articles created for human consumption and career validation.

[...]
One of the most useful outcomes of Open Notebook Science (and why I’m highlighting Cameron’s work) might be the insight it will bring to the science of how science actually gets done. (Researchers like Heather Piwowar should appreciate that)

This is where it starts – the passion, the innovation and publicity of people who want to change the current complacency. The exciting thing is that the Internet makes that possible. Within months.

scifoo: academic publishing and what can computer scientists do?

Tuesday, August 14th, 2007

Jim Hendler has summarised several scifoo sessions related to publishing and peer-review and added thoughts for the future (there’s more to come). It’s long, and I didn’t feel anything could be selectively deleted, so I’ve quoted only the last para, which has a slight change of subject – speculation on what computer scientists could do to help.

15:16 14/08/2007, Planet SciFoo
Here’s a pre-edited preprint of my editorial for the next issue of IEEE Intelligent Systems. I welcome your comments – Jim H.

=======================

[... very worthwhile summary snipped ...]
I believe it is time for us as computer scientists to take a leading role in helping to create innovation in this area. Some ideas are very simple, for example providing overlay journals that link already existing Web publications together, thus increasing the visibility (and therefore impact) of research that cuts across fields. Others may require more work, such as exploring how we can easily embed semantic markup into authoring tools and return some value (for example, automatic reference suggestions) via the use of user-extensible ontologies. In part II of this editorial, next issue, I’ll discuss some ideas being explored with respect to new technologies for the future of academic communication that we as a field may be able to help bring into being, and some of the obstacles thereto. I look forward to hearing your thoughts on the subject.

PMR: I’d love to see some decent semantic authoring tools – and before that just some decent authoring tools. For example, I hoped to have contributed code and markup examples to this blog and I simply can’t. Yes, there are various plugins but I haven’t got them to work reliably. So the first step is syntactic wikis, blogs, etc. We have to be able to write code in our blogs as naturally as we create it in – say – Eclipse. To have it checked for syntax. To allow others to extract it. And the same goes for RDF and MathML. SVG is a disaster. I hailed it in 1998 as a killer app – 9 years later we are struggling to get it working in the average browser. These things can be done if we try hard enough, but we shouldn’t have to try.

It’s even more difficult to create and embed semantic chemistry (CML) and semantic GIS. But these are truly killer apps. The chemical blogosphere is doing its best with really awful baseline technology. Ideas such as embedding metadata in PNGs are better than nothing, but almost certain to decay within a year or so. Hiding stuff in PDFs? Hardly semantic. We don’t even have a portable mechanism for transferring compound HTML documents reliably (*.mht and so on). So until we have solved some of this I think the semantic layer will continue to break. The message of Web 2.0 is that we love lashups and mashups, but it is not yet clear that this scales to formal semantic systems.
What’s the answer? I’m not sure, since we are in the hands of the browser manufacturers at present and they have no commitment to semantics. They are focussed on centralised servers providing for individual visitors. It’s great that blogs and wikis can work with current browsers, but they work in spite of the browsers rather than being enabled by them. The trend is towards wikis and blogs mounted on other people’s sites rather than our own desktops, instead of enabling the power of the individual on their own machine.
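
Going back to the PNG idea above: here is a rough sketch of what “embedding metadata in PNGs” amounts to – not what any of the chemistry bloggers actually do; the chunk names, file names and the methane InChI are invented placeholders. It also shows why the trick is fragile: the text chunks vanish as soon as another tool re-encodes the image.

    # Sketch only: stash chemical metadata in PNG text chunks (assumes Pillow).
    # Keys and values here are placeholders, not an agreed convention.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    img = Image.open("structure.png")                 # a rendered structure diagram
    meta = PngInfo()
    meta.add_text("chem:inchi", "InChI=1S/CH4/h1H4")  # placeholder identifier
    meta.add_text("chem:source", "https://example.org/experiment/1")
    img.save("structure_with_metadata.png", pnginfo=meta)

    # Reading it back works – until someone recompresses or screenshots the image.
    print(Image.open("structure_with_metadata.png").text)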

Having been part of the UK eScience program (== cyberinfrastructure) for 5 years I’ve seen the heavy concentration on “the Grid” and very little on the browser. My opinion is that the middleware systems developed are too heavy for innovation. Like good citizens we installed SOAP, WSDL etc. and then found we couldn’t share any of it – the installation wasn’t portable. So now we are moving to a much lighter, more rapid environment based on minimalist approaches such as REST: RDF rather than SQL, XOM rather than DOM, and a mixture of whatever scripts and templating tools fit the problem. But with a basic philosophy that we need to build it with sustainability in mind.

The Grid suits communities already used to heavy engineering – physics, space, etc. But it doesn’t map onto the liberated Web 2.0. An important part of the Grid was controlling who could do what, where. The modern web is liberated by assuming that we live our informatics lives in public. Perhaps the next rounds of funding should put more emphasis on enabling individuals to share information.

miniblogosphere

Sunday, August 12th, 2007

Here’s Pimm (attilachordash) with a nice picture of the linkages in the scifoo tag cloud.

SciFoo links visualized by TouchGraph Google Browser

Posted by attilachordash on August 11th, 2007

The Google Hacks book from O’Reilly was one of the free goodies at SciFoo last weekend. Hack #3 is “Visualize Google Results with the TouchGraph”, a Java applet that allows you to visually explore the connections between related websites. Of course I started with the term “scifoo”, with single nodes filtered out of the network in order to see the separate groups of nodes behind them.

[scifootouchgraph]

Explore the detailed properties of the SciFoo URL cloud by double clicking the individual nodes in the network.

PMR: (the click didn’t work for me in either Firefox or IE – maybe something has to be enabled). Perhaps someone would like to do this for the chemical blogosphere?

scifoo: images

Thursday, August 9th, 2007

This blog doesn’t have many pictures but these remind me of three sessions at scifoo with a chance to say a little more after the event. I shan’t (== can’t) identify everyone so feel free to annotate…
[cimg1269a.JPG]

Andrew Walkingshaw presenting his Golem system. Tim O’Reilly (under the rocket) listened attentively. Golem addresses the important question of how we find out what is in data files when we know the vocabulary used, but not the structure of the document. Data was a key issue in the meeting.
[cimg1270a.JPG]

The blogosphere (part). Deepak Singh (closest) and Jean-Claude Bradley. There were more people than this photo suggests. As we skipped from blogger to blogger, Bora Zivkovic brought up their blog on the screen and scrolled through it.

[cimg1273a.JPG]

Andrew Walkingshaw (left) and Alex Palazzo (right) in animated conversation with Philip Campbell (centre, Nature) after the session A+A ran on young scientists and the culture of fear. This was probably the highlight of the meeting for me – where else could an idea surface at 0930 one day and, 26 hours later, be the subject of a deep debate among equals?

blogging 101

Thursday, August 9th, 2007

Today I seem to be catching up with the continuing background radiation from scifoo and it’s a good way to wind down the jetlag. Here’s Richard Akerman again, showing that we really went to scifoo. This learning session was also responsible for the two very short posts on this blog where we were showing how it works…

17:55 09/08/2007, Richard Akerman, scifoo2007, web/tech, weblogs, Science Library Pad
This post lists a few basics about blogging (and feeds) and the tools that I use; it also serves as an example of why I blog: sure, I could send this as an email, or bookmark links for my own use, but if I’m going to that effort, I might as well just share it with everyone.

[DSC00450]
Peter Murray-Rust showing his blog

John Santini had the perhaps-misfortune of asking Peter Murray-Rust and me about both the reasons for and the mechanics of blogging; we proceeded to outgeek one another with dueling laptops showing the following:

www.typepad.com is what I use for a blogging platform; you have to pay, but that does have the benefit of separating your site out from the unfortunate profusion of spam blogs on

www.blogger.com, Google’s free blogging platform

To prevent the flood of spam comments that inevitably flow to all blogs, Peter has a filtering system plus moderation, and I use TypePad’s CAPTCHA system and moderation. It’s unfortunately not possible to filter trackbacks in this way, although you can moderate them.

To get a full picture of your visitors, you need to track both web hits and (RSS) feed hits. I use StatCounter for my web hits, and both Peter and I use FeedBurner (now owned by Google) to track our feed hits. Google Analytics is another web-hit tracking option, but it’s more for high-volume sites. All these tracking tools are free.

You can also track references to your blog through Technorati and other blog/feed search tools, e.g. here are links to Peter’s blog:

http://www.technorati.com/blogs/wwmm.ch.cam.ac.uk/blogs/murrayrust/?reactions

Peter uses Feed Reader to read RSS feeds, I use Bloglines (you can see what I read at http://www.bloglines.com/public/rakerman ).

In terms of reasons and other meta-blogging areas, I blog mainly to have online searchable notes of stuff that I am sure to forget, and also to connect into the library technology community, which I entered only a few years ago. If making connections like that is important to you, make sure to be generous with your outbound links.

John asked about how much of your identity you have to reveal online; you have every choice, ranging from fully anonymous to complete disclosure. Depending on your topic, revealing at least your work title may help to establish your position in the community for people who are reading your blog.

That’s about it; it’s quite easy to start blogging, and through the magic of linking and Google, if you write it, they will come.

Peter has blogged some of his thoughts on the topic in scifoo: blogsession.

PMR: and the photo shows off the CML t-shirt that Mo-seph created for my Christmas present. (His t-shirt style is very individual and I think elegantly simple. But I am not an independent reviewer).

scifoo: data-driven science and storage

Tuesday, August 7th, 2007

I managed to get out to a few sessions at scifoo outside my immediate concerns, of which two were on the Large Synoptic Survey Telescope and on Google’s ability and willingness to manage scientific data. They come together because the astronomers are producing hundreds of terabytes every day(?) and academia isn’t always the most suitable place to manage the data. So some of them have considered, or started, shipping it to Google. Obviously it has to be Open Data: there cannot be human-related restrictions that would require managed access.

Everyone thinks they are being overwhelmed with data. Where to keep it temporarily? Can we find it in a year’s time? Should we expect CrystalEye data to remain on WWMM indefinitely? But our problems are minute compared with the astronomers’, which are probably three orders of magnitude greater.

How would you obtain bandwidth to ship data to someone like Google? Remarkably, the fastest way to transmit it is on hard disk. Four 750 GB disks (i.e. 3 TB) fit nicely into a padded box and can be shipped by any major shipping company. And disk storage cost is decreasing at 78% per year.

I’m tempted to start putting our data into the “cloud” in this way. It’s Open, so we don’t mind what happens to it (as long as we are recognised as the original creators). It’s peanuts for the large players. If we allocate a megabyte for each new published compound (structure, spectra, crystallography, computation, links, etc., and the full text if we are allowed) and assume a million compounds a year, that is just ONE terabyte. The whole of the world’s new chemical data each year can fit on a single disk! What the astronomers collect in one minute!
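
For anyone who wants to check the back-of-envelope numbers, here they are as a tiny Python sketch (the figures are the rough ones quoted above, not measurements):

    # Back-of-envelope arithmetic using the rough figures quoted above.
    disks_per_box = 4
    disk_size_gb = 750
    print(disks_per_box * disk_size_gb / 1000, "TB per shipped box")      # 3.0 TB

    compounds_per_year = 1_000_000   # assumed rate of newly published compounds
    mb_per_compound = 1              # structure, spectra, crystallography, links...
    print(compounds_per_year * mb_per_compound / 1_000_000,
          "TB of new chemical data per year")                             # 1.0 TB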

But before we all rush off to do this we must think about semantics and metadata. The astronomers have been doing this for years. They haven’t solved it fully, but they’ve made a lot of progress and have some communal dictionaries and ontologies.

So we could have all the world’s chemical information on our desktops or access it through GYM (Google/Yahoo/Microsoft).
I wonder why we don’t.

scifoo: Open Science

Tuesday, August 7th, 2007

One of the themes at scifoo was “Open Science” or “Open Notebook Science” – the latter term coined by Jean-Claude Bradley: the idea that science is publicly recorded as it is done. The very first bottom-up session (i.e. Saturday morning) was run by J-CB and Bora Zivkovic of PLoS ONE. Here are two comments:

Corie Lok et al. Scifoo: day 1; open science

It’s late and so I’ll keep this short. I’ll write more detailed accounts of Scifoo soon, but here are some highlights so far.

My day today started off with a contentious talk about open science. It quickly veered off into a complaint session about how the slow publication process in biology and the fear of not being credited and of being scooped are hindering open science (putting prepublication info and data online). But the physicists in the room quickly got annoyed by the complaining (not exactly new complaints either) and so the discussion got back on track to focus on current efforts to put more data and discussion of prepublication research online (such as Jean-Claude Bradley’s open notebook efforts). The session set the stage for several other related ones later in the day. It also spawned one taking place tomorrow about the culture of fear among young scientists: fear of doing open science, at the risk of jeopardizing career prospects. I’ll definitely be at that one. For another perspective on this session, check out Anna’s post on it.

PMR: here’s Anna’s post:

Swimming in the Ocean

Date:
Saturday, 04 Aug 2007 – 22:46 GMT
Have you heard the expression “small fish in a big pond”? I have an updated version. How about, “plankton in an ocean”? That’s me. I am the plankton, spending the weekend with CEOs of major corporations, editors-in-chief, a couple of Nobel prize winners, people advancing science and media in ways I can hardly comprehend… and Martha Stewart. That, in a nutshell (or an ocean, as the case may be) is Science Foo Camp, where I am currently sitting with mouth hanging open and ears open wide. One of the major themes of this free-form gathering has been open access publishing. In a group discussion led by Bora Zivkovic of PLoS ONE, tempers flared (which made it even more fun than staring at science celebrities), and the many complications, pros and cons of open access were raised. Does the term “open access” refer to pre- or post-publication open access? Is it open, non-peer-reviewed publication of articles or even complete lab notebooks, or access to reviewed, published articles free of charge? That aside, will open access publishing negatively affect the hiring potential of young faculty looking for tenure-track positions or funding from organizations such as the Wellcome Trust and the NIH?

What about intellectual property? How does one protect findings aired in a public forum? One attendee replied that you don’t, it doesn’t matter, it should all be free and open. As much as I personally admire this free-love, Birkenstock/Woodstock approach to science and research, I do not believe it to be feasible at the moment. Science is run by money. In order to get money or funding, one must publish. The changes and minor revolutions that need to occur in publishing before the concept of the science paper becomes obsolete are staggering. They are also occurring as we speak.

Back to gaping at people far smarter than me.

PMR: and comments to Anna (so far):

Comments

  • Bora Zivkovic said:
    Small fish? No way – I was very excited to get to meet you in person.
  • Anna Kushnir said:
    The pleasure was all mine. I am happy I got the chance to meet you!
  • Jean-Claude Bradley said:
    Concerning the question of intellectual property, I am guessing that you are referring to my comment. I was not saying that all research should be open and free – just that people who are interested in intellectual property protection should probably not do Open Notebook Science. And this is no different than in the traditional publication process. People who are interested in intellectual property should not publish manuscripts without filing a patent (at least a provisional US patent). This is an expensive route and completely unrealistic for most scientific research projects. Money is not the sole motivation of scientists. If that were the case, who would study fields like archaeology and cosmology? I wish that we had more time to discuss these issues during the session.

  • Deepak Singh said:
    I think the IP issue didn’t get brought up enough, especially with the peer2patent and other IP types there. In many cases the flaws are not in intent, but in the system itself. That said, I think as a community, we know what the problems are. We should just focus on solutions rather than trying to go into what’s wrong in excruciating detail :)

PMR: and Duncan Hull


9.30am: open science 2.0: where we are, where we’re going

After breakfast at Googley’s, I head off to a session on Open Science 2.0. This session is a game of two halves: in the first half there is much talk of how publishing is a roadblock to many things we would like to achieve with science on the web. Peter Murray-Rust talks of “conservative chemistry”, where (un-named) publishers are the problem, not the solution, and block the whole of the University of Cambridge from accessing content in unapproved ways (text-mining). Paul Sereno and chemist Carl Djerassi discuss the importance of publications in getting jobs and tenure at Stanford. There is talk of the dangerous power of journal editors, who ultimately decide careers that they are blind to. They don’t just accept papers when they publish, they make and break people’s livelihoods. Andrew Walkingshaw tells of a common perception amongst young scientists about the importance of the h-index and other publication metrics. Eric Lander points out that publication isn’t everything for young scientists; a lot of it comes down to letters of recommendation in job applications, and this fact is often overlooked by young scientists. Pamela Silver talks of how the publish-or-perish mentality is slow like molasses, and sends many talented young scientists at Harvard running and screaming from academia into the arms of anywhere else that will have them, which is a great loss to science. We move on to Open Access; Tim Hubbard, head of informatics at Sanger, tells how the Wellcome Trust insists any publications that arise from its funded research projects must be freely available within six months after publication. Jonathan Eisen talks of different types of open access, which is not just about reading papers for free, but reusing them for free too, as in Creative Commons. Somebody, possibly Richard Jefferson, talks of a reputation engine called Carmleon? (not sure of spelling).

All of this makes young scientists risk-averse and paranoid, which is bad. The only people who can take risks are established scientists, which is a shame. But the discussion takes a u-turn when Paul Ginsparg (arXiv.org) and Dave Carlson point out we should be having fun, not moaning about publishing. We didn’t all come here to whinge; we should be talking about the technology that will enable us to break the publishing roadblock and make science a better place to live, work and play. On this note, Bora Zivkovic tells of publication turnaround times at PLoS, which are now “9 weeks not 9 months”. This is great for young scientists, who often don’t have time to wait for the glacial turnaround times of many publishing companies. He asks: what would cyberinfrastructure look like in 2015? Jean-Claude Bradley gives a demo of UsefulChem (see for example this experiment); tools like blogs and wikis will make an important contribution in this area.

Summary

Science is becoming more open, but it will be a slow evolution, not a rapid revolution. We’re heading in the right direction, and some of the tools for doing it are beginning to work. PLoS asks people to be courageous and send their papers in; this can be a gamble, when scientists often favour the old favourites of Nature, Science and PNAS. This session was typical of scifoo: it’s a mashup of different ideas from very different people working in different areas. It doesn’t always summarise neatly, but that’s life. A session on this came later on, called the Culture of Fear, led by Andrew Walkingshaw and Alex Palazzo.

PMR: The session didn’t go as planned – JCB had produced material to demonstrate and didn’t get to show it till near the end. The meeting got hijacked by the theme of Open Access, and I helped in the hijack when I probably should have stayed quiet. It meant that we didn’t explore the bright future but reiterated the less inspiring present. But somehow that was the burden that a lot of people had brought with them. Scifoo doesn’t run on predictable lines, and one good thing was that Alex and Andrew were inspired to run a session (young scientists and the culture of fear) they hadn’t planned to when they came.

“Open Science” is a concept whose time has arrived. I prefer “Open Notebook Science” because there is less chance of confusion with other terms which have nothing to do with the concept. Under Open Research, Wikipedia has a stub which lists a few examples – add some more.

scifoo: young scientists and the culture of fear

Tuesday, August 7th, 2007

On the last day, and as an inspiration from the previous sessions and the community atmosphere of the meeting, Andrew Walkingshaw and Alex Palazzo ran a session on the problems of being a postdoc under the pressure of having to publish in high-impact journals. They explained how the very high sense of competition and the pressure to conform to a single measure of success constrained innovation – their sense of concern came through very clearly. Here are their blog entries (AW first):

The Scifoo nature

Scifoo was a blast.

Alex Palazzo and I ran a session today on the politics of scientific communication/open access, particularly for young scientists: he writes about our thoughts here. I was really delighted with how it went; many people, including some very successful academics and the editor-in-chief of Nature, Philip Campbell, came along and shared their thoughts.

There’ll be more on what we actually discussed in due course, but the thing happening was itself staggering; from half-formed idea to a really deep round-table discussion in less than forty-eight hours. Creating a space where that can happen is priceless; I can’t thank the organisers enough for inviting me, and, equally importantly, everyone there for their generosity of spirit and openness.

PMR: Then AP. Read this in full, and also the commentary it has generated (and may continue to generate):

Scifoo – Day 3 (well that was yesterday, but I just didn’t have the time …)

Category: art, food, music, citylife and other mental stimuli
Posted on: August 6, 2007 10:46 AM, by Alex Palazzo

Our session on Scientific Communication and Young Scientists, the Culture of Fear, was great. Many bigwigs in the scientific publishing industry were present and a lot of ideas were pitched around. I would also like to thank Andrew Walkingshaw who co-hosted the session, Eric Lander for encouraging us to pursue this discussion, Pam Silver for giving a nice perspective on the whole issue, and all the other participants for giving their views.

Now, someone had asked that we vlog the session; we actually tried to set it up but we didn’t have the time. In retrospect I’m glad we didn’t. It became clear at the last session of scifoo, where attendees voiced their comments on the logistics of scifoo, that many conference-goers preferred to keep video and audio recording devices away from the sessions, as they impede open discussion. Conversations off the record can be more honest and more productive.

So about the session …

The main point that we wanted to make was that there are problems with the current way that we are communicating science, and due to developments with Web 2.0 applications there is a big push to change how this is done. But we must keep in mind the anxieties and fears of scientists. How we communicate not only affects how information is disseminated but also affects the careers of the scientists who produce content. Until there is general consensus from the scientific publishing industry, the major funding institutions, and the higher echelons of academia (for example junior faculty search committees), junior scientists are unlikely to participate in novel and innovative modes of scientific communication. The bottom line is that it is just too risky to do so.

There are two main areas that remain to be clarified by the scientific establishment.

1) Credit. How do we ascertain who deserves credit for an original idea, model or piece of data?

2) Peer-review. Although most scientists and futurists who promote the open-access model of scientific publishing support some type of peer-review in which the science or consistency of a particular body of work is evaluated, there remains some confusion as to whether peer-review should continue to assess the “value” of a particular manuscript. Right now, manuscripts that are submitted to any scientific publication must attain some level of importance that is at least equal to the standards of that particular journal. When evaluating the scientific contribution of any given scientist, close attention is paid to their publication record and particularly to where their manuscripts are published. Whether we should continue to follow this model, where editors and senior scientists determine the scientific validity of any given manuscript, is now being questioned. In an alternative model, many technologists are pushing post-publication evaluation processes which assess the importance of any single manuscript after it has been released following minimal peer-review. These include not only citation indices but also newer metrics that are currently being developed by many information scientists. There are many problems with these systems, the most critical being that most of the value cannot be assessed until many years after the publication date. An important piece of work may take years to have an impact in a given field. Until the scientific establishment reaches a consensus as to whether these post-publication metrics are indeed useful for determining the credentials of a scientist in the shorter term (<2 years post-publication), it is unlikely that any scientist would risk publishing their findings in a minimally peer-reviewed journal.

There was a strong feeling that the top journals do provide a valuable filtering service. They go through all the crap in order to publish the best work. OK, they don’t always succeed, but competition between all the big journals promotes a high standard. And many scientists are reluctant to give up this model. Journals also help to improve the quality of the published manuscripts; this function would be lost if all we had was PLoS ONE and Nature Precedings. To all those who think that journals must be eliminated in favour of an arXiv.org model: you are now warned.

PMR: I kept quiet during this session – I have no easy answer. It’s clear that the pressure to get scientific jobs is increasing – whereas not so long ago institutions could choose from those they knew (with all the pluses and minuses) now they try to create a “level playing field”. And what measure do they have when everyone has rave references? It’s difficult not to count the numbers. We did hear that one leading systems biology lab did not simply look at publications but wanted to choose people who could provide a major shift in emphasis and might have a relatively unconventional paper trail. But it’s not common.

Much credit to Alex and Andrew for their bravery in running this session, and to scifoo for it being the sort of place where it could happen.

scifoo: blogsession

Tuesday, August 7th, 2007

As I’ve mentioned, at scifoo the programme was evolved by the participants in a first-come, first-accepted process whereby we signed up for free slots. It was hardly surprising that the blogosphere gained a slot, and on Sunday we found a community of about 10-15 bloggers discussing how and why they did it. Here are some of the blogs that scifoo members have created, some of whose authors were at the session. (Andrew Walkingshaw created Planet SciFoo, an aggregation of the blogs updated every half-hour.) Nothing special about my selection… they weren’t all at the session.

So we spent an hour talking about why we did it, what we got out of it, etc. At one end are the compulsive writers – Henry Gee explained how he couldn’t help blogging – it was in the journalistic tradition. I sometimes feel like this, but not to the extent of being driven to communicate something, whatever it is. Many of us feel we have an “audience”, community, whatever, with whom we have a fragile rapport. Some bloggers get a lot of feedback, others very little. Often we are dependent on real-life contacts for feedback (I generally get little unless I unwittingly or otherwise turn up the “outrage button” and find out who is at the other end). Many bloggers who act as transducers for the immediate are appreciated by their following – the stream of consciousness of unprepared commentary on the world makes contact.

Some – such as Richard Akerman – have been blogging for years; others, like me, have yet to reach their first blogversary. Some, especially those in clear employment (e.g. publishers), have boundaries that should not be overstepped. What those boundaries are is not always clear. Some have more than one blog – a day blog and a more anonymous non-work one. Some feel “soft constraints”, especially when they are partially hosted by – say – a publisher’s umbrella. But I think most would be prepared to speak their mind – here (A Letter to Martha) is Anna Kushnir criticising Martha Stewart for failing to live up to the promise of Scifoo. (I wasn’t there, but it sounds like a valid comment.)
So blogs are of all sorts. Mine has a life of its own.