Data-intensive Science: The JISC I2S2 project.
Typed into Arcturus. I’m in Bath for a JISC meeting – the I2S2 meeting. All JISC projects have acronyms – I2S2 stands for Infrastructure for Integration in Structural Sciences and involves a number of experimentalists in finding the structure of materials (more on the I2S2 project at its website: http://www.ukoln.ac.uk/projects/I2S2/).

For example Martin Dove (from Earth Sciences in Cambridge) is looking at how atoms in silicates move, and how this changes the structure of minerals. Since much of the Earth’s crust is made of silicates, this matters for understanding tectonic movements, exploration for minerals, etc. Here’s an example of the multidisciplinary nature of science – to find out what happens hundreds of kilometres (10^5 metres) deep in the Earth we have to understand how atoms behave at the picometre scale (10^-12 metres). That is a factor of 17 powers of ten – and it’s remarkable how often the very small and the very large interact.

Martin collaborates with the Rutherford Appleton Laboratory near Harwell, run by STFC. He uses neutrons to determine how the atoms move and needs a special “facility” (ISIS) to do this. Here (http://www.isis.stfc.ac.uk/instruments/instruments2105.html) are some of the many projects at ISIS, which include ways of improving mobile phones, medical diagnostics and much more. Science underpins our modern life, and however we are to escape from our present plight we must see science at the centre. It’s something that the rest of the world admires in the UK.

ISIS produces DATA. And that’s what the I2S2 project is about. The data are expensive to produce (neutrons are not cheap) and the data are complex. STFC also has a large resource in developing new approaches to information, and Brian Matthews from STFC is therefore also on the project. This is “large science”. But I2S2 also covers “long-tail” science – where lots of science is done by individuals.
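The length-scale comparison works out like this:

```latex
\frac{10^{5}\ \text{m (crustal depths)}}{10^{-12}\ \text{m (atomic scale)}} = 10^{17}
```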
Simon Coles runs the National Crystallography Service in Southampton, where hundreds of researchers submit their samples and his group “solves the structure” and returns the data. Here the data are likely to be in hundreds of separate packages.

What’s characteristic of these projects is that the data often drive the science, so managing the data is critical. And we’ve just been talking about problems of scale. If we get 10 times more data the problem becomes intrinsically more difficult – it’s not just “buying another disc”. New bugs arise and integration issues become essential.

So I2S2 is looking to see whether there can be a unified approach to managing data. This requires an information model, because only when we understand the model can we create the software and glueware to automate the process. This is not easy even when “most of the experiments are similar”. It needs expert understanding of the domain and a vocabulary (more technically, an ontology) for the data and the processes. Moreover it’s not a static process – we often keep refining the processes for transforming and managing data. And the result of experiment A is often the input for project B, so the process is often shown as cyclic – the research cycle.

A key concept is “data reuse” – in this area ideas often build on existing data (which is why I and others keep banging on about publishing data). Here’s a (relatively simple!) diagram for the research cycle in I2S2. Note the cycle round the outside; start at the NE corner. Not everyone maps their research in precisely these terms but most do something fairly similar. The data-intensive part is mostly at the bottom.

Data are not simple – usually the “raw” data need processing before being interpreted. For example an experiment may collect data as photons (flashes of radiation) and these need integrating locally. Or they need transformation between different mathematical domains (a “Fourier transform”).
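The Fourier-transform step mentioned above can be sketched in a few lines of Python. This is a toy example, not real instrument data – the signal, sample rate and frequencies are all made up for illustration:

```python
import numpy as np

# Simulated "raw" detector signal sampled in the time domain:
# two oscillations (50 Hz and 120 Hz) plus a little noise.
sample_rate = 1000  # samples per second (assumed for this toy example)
t = np.arange(0, 1, 1 / sample_rate)
raw = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
raw += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Transform from the time domain to the frequency domain.
spectrum = np.fft.rfft(raw)
freqs = np.fft.rfftfreq(raw.size, d=1 / sample_rate)

# The two strongest peaks should sit at the input frequencies.
peaks = sorted(freqs[np.argsort(np.abs(spectrum))[-2:]])
print(peaks)  # two peaks, at 50.0 Hz and 120.0 Hz
```

The point is that the “interpreted” data (a spectrum with peaks) looks nothing like the “raw” data (a noisy time series), yet the transformation between them is completely mechanical – exactly the kind of step that should be automated and recorded.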
Or they are raw numbers from computer simulations. It’s critical that any transformation is openly inspectable so that the rest of the world does not suspect the authors of “manipulating their data to fit the theories”. That’s one reason why it’s so important to agree on the data transformation process, and why anyone (not just scientists) should be able to verify that it has been done responsibly. This is a microcosm of science – data are everywhere – and all of those projects will be thinking about how their data can be reliably and automatically processed, because automation gives reproducibility and also saves costs. So when scientists say they need resources for processing data, trust them – they do.
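One simple way to make a transformation openly inspectable – a minimal sketch, not the I2S2 approach – is to publish, alongside the output, a provenance record with checksums of the input and output and a description of the processing step, so anyone can check that the stated transformation produced the stated result. The transform below is a trivial stand-in:

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    """Checksum used to pin down exactly which bytes went in and came out."""
    return hashlib.sha256(data).hexdigest()

def transform(raw: bytes) -> bytes:
    # Stand-in for a real processing step, e.g. local integration of
    # photon counts or a Fourier transform of a detector signal.
    return raw.upper()

raw_data = b"raw detector counts ..."
processed = transform(raw_data)

# Provenance record published alongside the processed data.
provenance = {
    "step": "example transform (uppercase stand-in)",
    "input_sha256": sha256(raw_data),
    "output_sha256": sha256(processed),
}
print(json.dumps(provenance, indent=2))
```

Given the raw data and the (open) transformation code, anyone can rerun the step and confirm both checksums – which is precisely what automation buys: the same inputs always give the same, verifiable outputs.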