Where should we get our computing?

Three independent events have made me re-ask the question – where do we get our computing from? They are:

  • A visit to the Barcelona supercomputing centre. We had a personal tour during the COST D37 visit – thanks very much to all involved. It’s a splendid sight – a huge glass box occupying most of the interior of a former chapel, with racks of blades (some 2500 I think). And the lovely coloured wiring of the fibre, electricity, coolant, etc.
  • I’m chair of the Computer Services Committee in the Department. We’re quite a federated organization and that means there are several server rooms. All of them require power. All of them require space. All of them require cooling. All of them need backing up. All of them chop and change as the kit wears out and my colleagues get new grants. All of them need connecting to the network. We have an excellent group of Computer Officers so I don’t have to think about them but it’s a lot of work and a lot of money.
  • And we have a High Performance Computing facility in the University. It got into the top something-or-other for size or power or … I’m not sure what the finances are (well I’m not going to blog them) but we are urged to consider it as a primary resource.

And today Jim sent me a recent critique of HPC: HPC Considered Harmful. I have some sympathy with these views (like “Making sure their programs produce correct answers”). So why am I not enthusiastic about HPC?
HPC comes out of the “big science” tradition. CERN, NASA, etc. Where there are teams of engineers on hand to solve problems. Where there are managed projects with project managers. Where there are career staff to support the facilities. Where there are facilities.
Chemistry is long-tail science. Where the unit of allegiance is the lab. There are certainly problems which actively require large machines with large memory. But they often hit the problems of scale. You don’t usually get sixteen times as much power by building a machine sixteen times as big. OK, you don’t always get sixteen times more science with sixteen times more graduate students either.
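(As a rough illustration of why a sixteen-times-bigger machine rarely gives sixteen times the power, Amdahl’s law is the usual back-of-envelope argument: the serial fraction of a job caps the speedup no matter how much hardware you add. The sketch below is my own, and the parallel fractions in it are hypothetical examples, not measurements of any real chemistry code.)

```python
# Illustrative sketch of Amdahl's law: the speedup from running on N times
# as much hardware is limited by the fraction of the work that parallelises.
# The parallel fractions below are hypothetical, chosen only to show the shape
# of the curve.

def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Speedup = 1 / ((1 - p) + p / N) for parallel fraction p on N processors."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

for p in (0.50, 0.90, 0.99):
    print(f"parallel fraction {p:.0%}: "
          f"speedup on a 16x machine = {amdahl_speedup(p, 16):.1f}x")
```

Even a job that is 90% parallel gets only about a 6x speedup from sixteen times the hardware; you need to be well above 99% parallel before the big machine starts to pay for itself.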
The Australian e-research effort identified four potential bottlenecks:

  • cpu
  • bandwidth
  • storage
  • data

and concluded that the biggest bottleneck was data.
I’d agree. Often the primary problem is that we don’t have data. That’s what much of the blog is about. And, at the other end, it’s often much easier to produce simulated data than to use it.
So who knows how to manage large-scale computing? The large companies. Amazon, etc. COST D37 had a talk from (I think) Rod Jones at CERN who said that in building computing systems he always benchmarked the costs against Amazon.
I’m certainly looking in that sort of direction.


