PMR: This is really useful. I can't think of significant alterations. No-one is suggesting that science is altruistic – it can be hard and cruel as well as beautiful. And science doesn't care who wins, but knows that the more who play by the rules, the greater the progress and enlightenment. Open availability of tools, methods, specimens, results, recipes, codes, data, etc. MUST enhance science. Not providing them simply impoverishes the field and provides personal gain at the expense of the rest. Scientists are people and they want to succeed personally. I am very fortunate that the scientists I have known and who have acted as my mentors have been fantastic people. They have nurtured younger scientists, built a sense of community, fostered international science, cared about the human race. That is not a necessary part of science, but it is sufficiently common that it is worth striving for, even if, occasionally, it leads to a non-optimal decision in the prisoner's dilemma.

(Addressed in absentia to "Tools for Open Science", Second Life, Aug 20 2007. I am sorry I could not be there.)

I think we all know what we want, and I think we all want much the same thing, which boils down to just this: cooperation. A way forward for science, a way out of the spiralling inefficiency of patent thickets, secret experiments and dog-eat-dog competition. But we use a variety of terms, and probably mean slightly different things even when we use the same terms. It might – I am not sure – be useful at this point to come together on an agreed definition for an agreed term or set of terms – something equivalent to the Berlin/Bethesda/Budapest Open Access Declarations. If this does not seem like a "tool for open science", consider what the BBB definition has done for Open Access. It provides cohesion, a point of reference and a standard introduction for newcomers, and acts as a nucleation center for an effective movement with clear and agreed goals.
Since this SL session takes off from SciFoo, and SciFoo is by all accounts very good at converting brainstorming sessions into practical outcomes, I thought perhaps the idea of a definition or declaration of Open Science might be a suitable topic. In what I hope is the spirit of SciFoo, here are some ideas that might be useful in such a discussion.

Terms

Whatever this thing is, what should we call it? There are a number of terms in use:
- Open Science — has the weight of Creative Commons/Science Commons behind it, via iCommons
- Open Source Science — Jamais Cascio, Chemists Without Borders
- Open Source Biology — Molecular Biosciences Institute
- I think “biology” too narrow — there seems little point in Open Chemistry, Open Microbiology, Open Foo all having different names. I think Open Source Foo too likely to lead to confusion with software initiatives, and too likely to lead to pointless arguments about what the “source code” is.
- That leaves Open Science, which would be my choice for an umbrella term. A case can be made, though, for Open Research, on the same basis on which I argue against Open Biology etc — see this comment from Matthias Röder
- Another “inclusive” possibility is to focus on information — Open Data, as per PMR’s wikipedia entry, or the broader Open Content. In the same vein, the Open Knowledge Foundation provides a fairly comprehensive definition of Open Knowledge.
- I have seen "Science 2.0" around quite a bit lately, though it's a bit too marketing-speak for my taste
- Open Notebook Science is a very specific subset of Open Science: if your notebook is open to the world, there’s not much confusion about access barriers! It even comes with its own motto: “no insider information”. This is as Open as Open gets.
Sources and Models

We don't have to re-invent the wheel:
Flexibility

We don't want to start a cult, and we don't want to bog anyone down in semantics. There's no purity test or loyalty oath. My own view is that Open Science (or whatever we end up calling it) is not an ideology but an hypothesis: that openly shared, collaborative research models will prove more productive than the highly competitive "standard model" under which we now operate. Openness in scientific research covers a range of practices, from tentative explorations with a single small side-project all the way to Open Notebook Science à la Jean-Claude, and we should welcome every step away from the current hypercompetitive model. Open Notebook Science provides a useful marker for the Open end of the spectrum; perhaps all a Declaration need do is identify the minimum requirements that mark the other end of the spectrum?

Conditions

What standards must a research project or programme meet in order to be considered Open?
- Open Access declarations: Bethesda, Berlin, Budapest
- Creative Commons Licenses, particularly CC-BY
- David Wiley’s Open Education License, an attempt to put legal muscle into a Public Domain dedication; the linked post contains an argument against copyleft
- SPARC / Science Commons Author Addendum allowing authors to retain copyright and self-archive
- my attempt at a Data Addendum, based on the SPARC addendum
- Free Software Foundation definition of free software
- Open Source Initiative definition of Open Source software
- Open Knowledge Foundation definition of Open Knowledge
- enormous body of expertise on the idea of a "public good"; a brief definition might simply be that Open Science is science produced and intended as a public good.
- obvious: Open Access publication
- equally crucial: Open Data, that is, raw data as freely available (including machine access) as OA text
- probably indispensable: Open Licensing, so as to avoid confusion as to what is truly available and for what purposes; as per Peter Suber and Peter Murray-Rust, this must be
- Open Semantics: perhaps none of this will be much good without metadata and standards to allow interoperability and free flow of information
- desirable: Free/Open Source Software
- David Wiley: “four Rs” of Open Content (cf. Stallman’s four fundamental freedoms for software):
- Reuse – Use the work verbatim, just exactly as you found it
- Rework – Alter or transform the work so that it better meets your needs
- Remix – Combine the (verbatim or altered) work with other works to better meet your needs
- Redistribute – Share the verbatim work, the reworked work, or the remixed work with others
- OKF definition of Open Knowledge
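The Open Data and Open Licensing conditions above boil down to something concrete: publish an explicit, machine-readable statement of what the data is and how it may be reused, alongside the data itself. A minimal sketch of such a record – the field names, dataset title and data URL here are purely illustrative, not any established standard:

```python
import json

# A hypothetical machine-readable record pairing a dataset with an explicit
# license, so both human readers and crawlers can tell what reuse is allowed.
record = {
    "title": "Example solubility dataset",              # illustrative name
    "data_url": "https://example.org/data/solubility.csv",
    "format": "text/csv",
    "license": "http://creativecommons.org/licenses/by/3.0/",
    "machine_access": True,  # raw data retrievable by software, not just humans
}

# Serialise so that any harvester can parse the licensing terms automatically.
print(json.dumps(record, indent=2))
```

A crawler that finds such a record can check the `license` field against a list of open licenses before reusing the data – which is exactly the ambiguity that explicit Open Licensing is meant to remove.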
This is where it starts – the passion, the innovation and publicity of people who want to change the current complacency. The exciting thing is that the Internet makes that possible. Within months.

22:22 23/08/2007, Useful Chemistry

There has been a lot of discussion lately about the philosophy of Open Science in general terms. This is certainly worthwhile, but I think it is even more interesting to discuss the mechanics of its implementation. That is what I was trying to push a little more by setting up the "Tools of Open Science" session on SciFoo Lives On. That's why I've been very impressed by Cameron Neylon's recent posts in his blog "Science in the Open". He has been discussing details of the brand of Open Science that interests me most: Open Notebook Science, where a researcher's laboratory notebook is completely public. Cameron has been looking at how our UsefulChem experiments could be mapped onto his system, and this has sparked off some interesting discussion. I am becoming more convinced than ever that the differences between how scientific fields and individual researchers operate are much deeper than we usually assume. By focussing almost entirely on the sausage (traditional articles), we tend to forget just how bloody it actually is to make it, and we probably assume that everybody makes their sausage the same way. The basic paradigm of generating a hypothesis then attempting to prove it false is certainly a cornerstone of the scientific process, but it is certainly not the whole story. However, after reading a lot of papers and proposals, one gets the impression that science is done as an orderly repetition of that process. What I have observed in my own career, after working and collaborating with several chemists, is that most of the experiments we do are done for the purpose of writing papers! The reasoning is that if it is not published in a journal, it never happened.
This often leads to the syndrome of sunk costs, similar to a gambler throwing good money after bad, trying to win back his initial loss. After a usually brief discovery phase, the logical scientist will try to conceive of the fewest experiments (preferably of lowest cost and difficulty) needed to obtain a paper. In this system, as in a courtroom, an unambiguous story and conclusion is the preferred outcome. Reality rarely cooperates that easily, and that is why the selection of experiments to perform is truly an art form. We're currently going through that process. We have an interesting result observed for a few compounds and a working hypothesis. That's not enough for a paper in my field. We cannot prove the hypothesis without doing an infinite number of experiments, but we are expected to make a decent attempt at trying to falsify it. I know from experience roughly the number of experiments with clear-cut outcomes we need to write a traditional paper. So how much more value to the scientific community is that paper, relative to the single experiment where this effect was first disclosed on our wiki and then summarized on our blog? Is this really the most efficient system for doing science, or is this the tail wagging the dog? When the scientific process becomes more automated, I predict that single experiments will be of more value than standard articles created for human consumption and career validation.

[...]

One of the most useful outcomes of Open Notebook Science (and why I'm highlighting Cameron's work) might be the insight it will bring to the science of how science actually gets done. (Researchers like Heather Piwowar should appreciate that.)
PMR: I'd love to see some decent semantic authoring tools – and before that, just some decent authoring tools. For example, I hoped to have contributed code and markup examples to this blog and I simply can't. Yes, there are various plugins, but I haven't got them to work reliably. So the first step is syntactic wikis, blogs, etc. We have to be able to write code in our blogs as naturally as we create it in – say – Eclipse. To have it checked for syntax. To allow others to extract it. And the same goes for RDF and MathML. SVG is a disaster: I hailed it in 1998 as a killer app, and 9 years later we are struggling to get it working in the average browser. These things can be done if we try hard enough, but we shouldn't have to try. It's even more difficult to create and embed semantic chemistry (CML) and semantic GIS. But these are truly killer apps. The chemical blogosphere is doing its best with really awful baseline technology. Ideas such as embedding metadata in PNGs are better than nothing, but almost certain to decay within a year or so. Hiding stuff in PDFs? Hardly semantic. We don't even have a portable mechanism for transferring compound HTML documents reliably (*.mht and so on). So until we have solved some of this, I think the semantic layer will continue to break. The message of Web 2.0 is that we love lashups and mashups, but it is not yet clear that this scales to formal semantic systems. What's the answer? I'm not sure, since we are in the hands of the browser manufacturers at present, and they have no commitment to semantics. They are focussed on centralised servers providing for individual visitors. It's great that blogs and wikis can work with current browsers, but they work in spite of the browsers rather than being enabled by them. The trend is towards wikis and blogs mounted on other sites rather than our own desktop, rather than enabling the power of the individual on their own machine.
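The "embedding metadata in PNGs" idea above relies on the PNG format's ancillary text chunks (`tEXt`), which can carry key/value pairs – say, a chemical identifier – inside the image file itself. A minimal sketch of how such a chunk is built and spliced in after the fixed-size IHDR header (the "InChI" keyword is illustrative; and note this is exactly the fragility PMR describes, since most image tools silently drop unknown chunks on re-save):

```python
import struct
import zlib

def text_chunk(keyword: str, value: str) -> bytes:
    """Build a PNG tEXt chunk: 4-byte big-endian length, chunk type,
    keyword + NUL + value, then a 4-byte CRC over type and data."""
    data = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    body = b"tEXt" + data
    crc = zlib.crc32(body) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + body + struct.pack(">I", crc)

def embed_metadata(png: bytes, keyword: str, value: str) -> bytes:
    """Splice a tEXt chunk in directly after IHDR.
    Layout: 8-byte signature + IHDR chunk (4 length + 4 type + 13 data + 4 CRC)."""
    ihdr_end = 8 + 4 + 4 + 13 + 4
    return png[:ihdr_end] + text_chunk(keyword, value) + png[ihdr_end:]
```

The chunk survives byte-for-byte copies of the file, but any editor that rewrites the image without preserving ancillary chunks loses it – hence "better than nothing but almost certain to decay".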
Having been part of the UK eScience program (== cyberinfrastructure) for 5 years, I've seen the heavy concentration on "the Grid" and very little on the browser. My opinion is that the middleware systems developed are too heavy for innovation. Like good citizens we installed SOAP, WSDL etc. and then found we couldn't share any of it – the installation wasn't portable. So now we are moving to a much lighter, more rapid environment based on minimalist approaches such as REST: RDF rather than SQL, XOM rather than DOM, and a mixture of whatever scripts and templating tools fit the problem. But with a basic philosophy that we need to build it with sustainability in mind. The Grid suits communities already used to heavy engineering – physics, space, etc. But it doesn't map onto the liberated Web 2.0. An important part of the Grid was controlling who could do what where. The modern web is liberated by assuming that we live our informatics lives in public. Perhaps the next rounds of funding should concentrate on increasing the emphasis on enabling individuals to share information.

15:16 14/08/2007, Planet SciFoo

Here's a pre-edited preprint of my editorial for the next issue of IEEE Intelligent Systems. I welcome your comments – Jim H.

=======================

[... very worthwhile summary snipped ...]

I believe it is time for us as computer scientists to take a leading role in helping to create innovation in this area. Some ideas are very simple, for example providing overlay journals that link already existing Web publications together, thus increasing the visibility (and therefore impact) of research that cuts across fields. Others may require more work, such as exploring how we can easily embed semantic markup into authoring tools and return some value (for example, automatic reference suggestions) via the use of user-extensible ontologies.
In part II of this editorial, next issue, I’ll discuss some ideas being explored with respect to new technologies for the future of academic communication that we as a field may be able to help bring into being, and some of the obstacles thereto. I look forward to hearing your thoughts on the subject.
PMR: (the click didn’t work for me either in Firefox or IE – maybe something has to be enabled). Perhaps someone would like to do this for the chemical blogosphere?
Posted by attilachordash on August 11th, 2007

The Google Hacks book from O'Reilly was one of the free goodies at SciFoo last weekend. Hack #3 is Visualize Google Results with the TouchGraph Java applet, which allows you to visually explore the connections between related websites. Of course I started with the term "scifoo", with the setting that filters single nodes out of the network, in order to see the separate groups of nodes behind it.
Explore the detailed properties of the SciFoo URL cloud by double clicking the individual nodes in the network.
PMR: and the photo shows off the CML t-shirt that Mo-seph created for my Christmas present. (His t-shirt style is very individual and I think elegantly simple. But I am not an independent reviewer.)

This post lists a few basics about blogging (and feeds) and the tools that I use. It also serves as an example of why I blog: sure, I could send this as an email, or bookmark links for my own use, but if I'm going to that effort, I might as well just share it with everyone.

Peter Murray-Rust showing his blog

John Santini had the perhaps-misfortune of asking Peter Murray-Rust and me about both the reasons for and the mechanics of blogging; we proceeded to outgeek one another with dueling laptops showing the following. www.typepad.com is what I use for a blogging platform; you have to pay, but that does have the benefit of separating your site out from the unfortunate profusion of spam blogs on www.blogger.com, Google's free blogging platform. To prevent the flood of spam comments that inevitably flows to all blogs, Peter has a filtering system plus moderation, and I use TypePad's CAPTCHA system and moderation. It's unfortunately not possible to filter trackbacks in this way, although you can moderate them. To get a full picture of your visitors, you need to track both web hits and (RSS) feed hits. I use StatCounter for my web hits, and both Peter and I use FeedBurner (now owned by Google) to track our feed hits. Google Analytics is another web hit tracking option, but it's more for high-volume sites. All these tracking tools are free. You can also track references to your blog through Technorati and other blog/feed search tools, e.g. here are links to Peter's blog: http://www.technorati.com/blogs/wwmm.ch.cam.ac.uk/blogs/murrayrust/?reactions Peter uses Feed Reader to read RSS feeds; I use Bloglines (you can see what I read at http://www.bloglines.com/public/rakerman ).
In terms of reasons and other meta-blogging areas, I blog mainly to have online searchable notes of stuff that I am sure to forget, and also to connect into the library technology community, which I entered only a few years ago. If making connections like that is important to you, make sure to be generous with your outbound links. John asked about how much of your identity you have to reveal online; you have every choice, ranging from fully anonymous to complete disclosure. Depending on your topic, revealing at least your work title may help to establish your position in the community for people who are reading your blog. That's about it – it's quite easy to start blogging, and through the magic of linking and Google, if you write it, they will come. Peter has blogged some of his thoughts on the topic in scifoo: blogsession.
open notebook efforts). The session set the stage for several other related ones later in the day. It also spawned one taking place tomorrow about the culture of fear among young scientists: fear of doing open science, at the risk of jeopardizing career prospects. I'll definitely be at that one. For another perspective on this session, check out Anna's post on it.

PMR: here's Anna's post:
PMR: and comments to Anna (so far):
Swimming in the Ocean
- Saturday, 04 August 2007 – 22:46 GMT

Have you heard the expression "small fish in a big pond"? I have an updated version. How about "plankton in an ocean"? That's me. I am the plankton, spending the weekend with CEOs of major corporations, editors-in-chief, a couple of Nobel prize winners, people advancing science and media in ways I can hardly comprehend… and Martha Stewart. That, in a nutshell (or an ocean, as the case may be) is Science Foo Camp, where I am currently sitting with mouth hanging open and ears open wide.

One of the major themes of this free-form gathering has been open access publishing. In a group discussion led by Bora Zivkovic of PLoS ONE, tempers flared (which made it even more fun than staring at science celebrities), and the many complications, pros and cons of open access were raised. Does the term "open access" refer to pre- or post-publication open access? Is it open, non-peer-reviewed publication of articles or even complete lab notebooks, or access to reviewed, published articles free of charge? That aside, will open access publishing negatively affect the hiring potential of young faculty looking for tenure track positions or funding from organizations such as the Wellcome Trust and the NIH? What about intellectual property? How does one protect findings aired in a public forum? One attendee replied that you don't, it doesn't matter, it should all be free and open. As much as I personally admire this free love, Birkenstock/Woodstock approach to science and research, I do not believe it to be feasible at the moment. Science is run by money. In order to get money or funding, one must publish. The changes and minor revolutions that need to occur in publishing before the concept of the science paper becomes obsolete are staggering. They are also occurring as we speak. Back to gaping at people far smarter than me.
PMR: and Duncan Hull
- Bora Zivkovic said:
- Small fish? No way – I was very excited to get to meet you in person.
- Anna Kushnir said:
- The pleasure was all mine. I am happy I got the chance to meet you!
- Jean-Claude Bradley said:
- Concerning the question of intellectual property, I am guessing that you are referring to my comment. I was not saying that all research should be open and free – just that people who are interested in intellectual property protection should probably not do Open Notebook Science. And this is no different than in the traditional publication process. People who are interested in intellectual property should not publish manuscripts without filing a patent (at least a provisional US patent). This is an expensive route and completely unrealistic for most scientific research projects. Money is not the sole motivation of scientists. If that were the case, who would study fields like archaeology and cosmology? I wish that we had more time to discuss these issues during the session.
- Deepak Singh said:
- I think the IP issue didn't get brought up enough, especially with the peer2patent and other IP types there. In many cases the flaws are not in intent, but in the system itself. That said, I think as a community we know what the problems are. We should just focus on solutions rather than going into what's wrong in excruciating detail.
Mon, 2007-08-06 14:43 — Duncan

PMR: The session didn't go as planned – JCB had produced material to demonstrate and didn't get to show it till near the end. The meeting got hijacked by the theme of Open Access, and I helped in the hijack when I probably should have stayed quiet. It meant that we didn't explore the bright future but reiterated the less inspiring present. But somehow that was the burden that a lot of people had brought with them. Scifoo doesn't run on predictable lines, and one good thing was that Alex and Andrew were inspired to run a session (young scientists and the culture of fear) they hadn't planned to when they came. "Open Science" is a concept whose time has arrived. I prefer "Open Notebook Science" because there is less chance of confusion with other terms which have nothing to do with the concept. Under Open Research, WP has a stub which lists a few examples – add some more.
9.30am: open science 2.0: where we are, where we're going

After breakfast at Googley's, I head off to a session on Open Science 2.0. This session is a game of two halves; in the first half there is much talk of how publishing is a roadblock to many things we would like to achieve with science on the web. Peter Murray-Rust talks of "conservative chemistry", where (un-named) publishers are the problem, not the solution, and block the whole of the University of Cambridge from accessing content in unapproved ways (text-mining). Paul Sereno and chemist Carl Djerassi discuss the importance of publications in getting jobs and tenure at Stanford. There is talk of the dangerous power of the editors of journals, who ultimately decide careers to which they are blind. They don't just accept papers when they publish; they make and break people's livelihoods. Andrew Walkingshaw tells of a common perception amongst young scientists about the importance of the h-index and other publication metrics. Eric Lander points out that publication isn't everything for young scientists; a lot of it comes down to letters of recommendation in job applications, and this fact is often overlooked by young scientists. Pamela Silver talks of how the publish-or-perish mentality is slow like molasses, and sends many talented young scientists at Harvard running and screaming from academia into the arms of anywhere else that will have them, which is a great loss to science. We move on to Open Access. Tim Hubbard, head of informatics at Sanger, tells how the Wellcome Trust insists any publications that arise from its funded research projects must be freely available within six months after publication. Jonathan Eisen talks of different types of open access, which is not just about reading papers for free, but reusing them for free too, as in Creative Commons. Somebody, possibly Richard Jefferson, talks of a reputation engine called Carmleon? (not sure of spelling).
All of this makes young scientists risk-averse and paranoid, which is bad. The only people who can take risks are established scientists, which is a shame. But the discussion takes a u-turn when Paul Ginsparg (arXiv.org) and Dave Carlson point out we should be having fun, not moaning about publishing. We didn't all come here to whinge; we should be talking about the technology that will enable us to break the publishing roadblock and make science a better place to live, work and play. On this note, Bora Zivkovic tells of publication turnaround times at PLOS, which are now "9 weeks not 9 months". This is great for young scientists, who often don't have time to wait for the glacial turnaround times of many publishing companies. He asks: what would cyberinfrastructure look like in 2015? Jean-Claude Bradley gives a demo of UsefulChem (see for example this experiment); tools like blogs and wikis will make an important contribution in this area.
Summary

Science is becoming more open, but it will be a slow evolution, not a rapid revolution. We're heading in the right direction, and some of the tools for doing it are beginning to work. PLOS asks people to be courageous and send their papers in; this can be a gamble, when scientists often favour the old favourites of Nature, Science and PNAS. This session was typical of scifoo: it's a mashup of different ideas from very different people working in different areas. It doesn't always summarise neatly, but that's life. A session on this came later on, called the Culture of Fear, led by Andrew Walkingshaw and Alex Palazzo.
he writes about our thoughts here. I was really delighted with how it went; many people, including some very successful academics and the editor-in-chief of Nature, Philip Campbell, came along and shared their thoughts. There'll be more on what we actually discussed in due course, but the thing happening was itself staggering: from half-formed idea to a really deep round-table discussion in less than forty-eight hours. Creating a space where that can happen is priceless; I can't thank the organisers enough for inviting me, and, equally importantly, everyone there for their generosity of spirit and openness.

PMR: Then AP. Read this in full, and also the commentary it has generated (and may continue to generate):
PMR: I kept quiet during this session – I have no easy answer. It’s clear that the pressure to get scientific jobs is increasing – whereas not so long ago institutions could choose from those they knew (with all the pluses and minuses) now they try to create a “level playing field”. And what measure do they have when everyone has rave references? It’s difficult not to count the numbers. We did hear that one leading systems biology lab did not simply look at publications but wanted to choose people who could provide a major shift in emphasis and might have a relatively unconventional paper trail. But it’s not common. Much credit to Alex and Andrew for their bravery in running this session, and to scifoo for it being the sort of place where it could happen.
Category: art, food, music, citylife and other mental stimuli

Posted on: August 6, 2007 10:46 AM, by Alex Palazzo

Our session on Scientific Communication and Young Scientists, the Culture of Fear, was great. Many bigwigs in the scientific publishing industry were present and a lot of ideas were pitched around. I would also like to thank Andrew Walkingshaw, who co-hosted the session, Eric Lander for encouraging us to pursue this discussion, Pam Silver for giving a nice perspective on the whole issue, and all the other participants for giving their views. Now, someone had asked that we vlog the session; we actually tried to set it up, but we didn't have the time. In retrospect I'm glad we didn't. It became clear at the last session of scifoo, where attendees voiced their comments on the logistics of scifoo, that many conference goers preferred to keep video and audio recording devices away from the sessions, as they impede open discussion. Conversations off the record can be more honest and more productive.

So about the session … The main point that we wanted to make was that there are problems with the current way that we are communicating science, and due to developments with Web 2.0 applications there is a big push to change how this is done. But we must keep in mind the anxieties and fears of scientists. How we communicate not only impacts how information is disseminated but also affects the careers of the scientists who produce content. Until there is general consensus from the scientific publishing industry, the major funding institutions, and the higher echelons of academia (for example, junior faculty search committees), junior scientists are unlikely to participate in novel and innovative modes of scientific communication. The bottom line is that it is just too risky to do so. There are two main areas that remain to be clarified by the scientific establishment.

1) Credit. How do we ascertain who deserves credit for an original idea, model or piece of data?
2) Peer review. Although most scientists and futurists who promote much of the open-access model of scientific publishing support some type of peer review, where the science or consistency of a particular body of work is evaluated, there remains some confusion as to whether peer review should continue to assess the "value" of a particular manuscript. Right now, manuscripts that are submitted to any scientific publication must attain some level of importance that is at least equal to the standards of that particular journal. When evaluating the scientific contribution of any given scientist, close attention is paid to their publication record and particularly where their manuscripts are published. Now, whether we should continue to follow this model, where editors and senior scientists determine the scientific validity of any given manuscript, is being questioned. In an alternative model, many technologists are pushing post-publication evaluation processes, which assess the importance of a manuscript after it has been released with minimal peer review. These include not only citation indices but also newer metrics that are currently being developed by many information scientists. There are many problems with these systems, the most critical being that most of the value cannot be assessed until many years after the publication date. An important piece of work may take years to have an impact in a given field. Until the scientific establishment reaches a consensus as to whether these post-publication metrics are indeed useful for determining the credentials of a scientist in the shorter term (<2 years post-publication), it is unlikely that any scientist would risk publishing their findings in a minimally peer-reviewed journal. There was a strong feeling that the top journals do provide a valuable filtering service. They go through all the crap in order to publish the best work.
OK, they don't always succeed, but competition between all the big journals promotes a high standard, and many scientists are reluctant to give up this model. Journals also help to improve the quality of the published manuscripts; this function would be lost if all we had was PLoS One and Nature Precedings. To all those who think that journals must be eliminated in favour of an arXiv.org model: you are now warned.
- AlexPalazzo: The Daily Transcript http://scienceblogs.com/transcript/
- AndrewWalkingshaw: Brighten the Corners http://wwmm.ch.cam.ac.uk/blogs/walkingshaw/
- AnnaKushnir: Lab Life
- AttilaCsordas: Pimm Partial Immortalization http://pimm.wordpress.com/
- BoraZivkovic: A Blog Around the Clock http://scienceblogs.com/clock/
- Corie Lok: Nature Network Boston
- DaveCarlson: Dave’s Blog http://web.mac.com/clsmfmly/iWeb/Carlson%20Family/Dave%27s%20Blog/Dave%27s%20Blog.html
- DeepakSingh: http://mndoci.com/blog/
- DuncanHull: http://www.nodalpoint.org/blog/duncan
- EuanAdie: http://www.ghastlyfop.com/blog/
- GiaMilinovich: http://www.giagia.co.uk/
- Henry Gee: The End Of The Pier Show
- JCBradley: http://usefulchem.blogspot.com/
- Jim Hendler (and colleagues): http://www.mindswap.org/blog/
- JonathanEisen: http://phylogenomics.blogspot.com/
- PatTufts: Pinhead’s Progress
- PeterMurrayRust: http://wwmm.ch.cam.ac.uk/blogs/murrayrust/
- PZMyers: http://scienceblogs.com/pharyngula/
- RichardAkerman: http://scilib.typepad.com/science_library_pad/
- RichardJefferson: CAMBIA βiος – Taking the Red Pill
- RobCarlson: http://synthesis.typepad.com
- ThomasGoetz: http://epidemix.org/
- VaughanBell: http://mindhacks.com