The following appeared on noticeboards in the Chemistry Department – the Panton Arms is just 200 metres away.
Maybe some reader of this blog can help…
But the real message is that Data Management Plans are most valuable to the people actually involved. So I would start graduate training with a data management course…
But NOT run by the Library or Department or the University.
Run by THIRD-YEAR PhD students. These are the people who know the problems of not doing things properly in year one. When they come to write their theses, they know the pain of not having all the data at hand.
And I would develop a thesis-oriented tool that takes ideas from versioned repositories (such as Mercurial (http://en.wikipedia.org/wiki/Mercurial ) and Git (http://en.wikipedia.org/wiki/Git_%28software%29 )) and merges them with a continuous integration tool such as Jenkins (http://en.wikipedia.org/wiki/Jenkins_%28software%29 ). This is what we do for software, and the peace of mind it gives is enormous. I *know* that all our software is permanently archived and that it works.
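The versioned-repository half of this is something a student could start today with nothing but git. A minimal sketch of the idea, assuming git is installed; the directory and file names are purely illustrative:

```shell
# Treat the thesis like a software project from day one.
mkdir -p thesis-demo
cd thesis-demo
git init -q
git config user.email "student@example.org"   # placeholder identity
git config user.name "PhD Student"

# Chapters, raw data, and figures all live in the same repository.
echo '\chapter{Introduction}' > chapter1.tex
mkdir -p data figures
echo 'sample,value' > data/results.csv

# Every commit is a permanent, recoverable snapshot of the whole thesis.
git add .
git commit -q -m "Day-one snapshot: draft chapter plus raw data"
git log --oneline
```

Pushing the repository to a hosted remote is then what makes the archive survive a stolen laptop; the commit history is what lets a third-year student recover exactly what they did in year one.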
Wouldn’t you like to know that your thesis is permanently archived during the whole of your research (not just after submission)? And that it is up to date (i.e. you have all the figures and tables already prepared and certified as fit-for-publish)?
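"Certified as fit-for-publish" is exactly the kind of thing a continuous integration job can check on every commit. A hedged sketch of one such check — verify that every figure a LaTeX chapter references actually exists on disk — where the file names are again illustrative, not part of any real tool:

```shell
# Set up a toy thesis directory so the check has something to run against.
mkdir -p ci-demo/figures
cd ci-demo
printf '\\includegraphics{figures/plot1.pdf}\n' > chapter2.tex
touch figures/plot1.pdf

# CI step: fail the build if any referenced figure is missing.
missing=0
for fig in $(grep -oh '\\includegraphics{[^}]*}' *.tex | sed 's/.*{//; s/}//'); do
    [ -f "$fig" ] || { echo "MISSING: $fig"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "All figures present"
```

A real system would wire a script like this into Jenkins (or any CI server) so the thesis is rebuilt and re-checked automatically, the same way software test suites run on every push.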
It’s technically straightforward. It simply needs universities to provide a system. IMO this is more important than putting resources into traditional institutional repositories (IRs).
- Scientists do not use IRs in their present form, and never will. But they will use a good data management tool.
- If libraries or information-support services wish to engage more with researchers (there is very poor engagement at present), then supporting continuous data integration during the research itself is the best place to invest effort.
Perhaps this pub is familiar? See http://pantonprinciples.org/about/
Scientists are concerned about archiving for the here and now, not for 100 years. Bitbucket and Sourceforge are quite stable enough.