The Science Code Manifesto

There’s a manifesto that you can sign, calling for a more sensible approach to the use of software in science. It says:

Software is a cornerstone of science. Without software, twenty-first century science would be impossible. Without better software, science cannot progress.

But the culture and institutions of science have not yet adjusted to this reality. We need to reform them to address this challenge, by adopting these five principles:

Code: All source code written specifically to process data for a published paper must be available to the reviewers and readers of the paper.

Copyright: The copyright ownership and license of any released source code must be clearly stated.

Citation: Researchers who use or adapt science source code in their research must credit the code’s creators in resulting publications.

Credit: Software contributions must be included in systems of scientific assessment, credit, and recognition.

Curation: Source code must remain available, linked to related materials, for the useful lifetime of the publication.

The founding signatories are:

• Nick Barnes and David Jones of the Climate Code Foundation,

• Peter Norvig, the director of research at Google,

• Cameron Neylon of Science in the Open,

• Rufus Pollock of the Open Knowledge Foundation,

• Joseph Jackson of the Open Science Foundation.

I was the 312th person to sign. How about joining?

There’s a longer discussion of each point of the manifesto here. It ties in nicely with the philosophy of the Azimuth Code Project, namely:

Many papers in climate science present results that cannot be reproduced. The authors present a pretty diagram, but don’t explain which software they used to make it, don’t make this software available, and don’t really explain how they did what they did. This needs to change! Scientific results need to be reproducible. Therefore, any software used should be versioned and published alongside any scientific results.

All of this is true for large climate models such as General Circulation Models, as well—but there the problem becomes much more serious, because these models have long outgrown the point where a single developer could understand all the code. This is a kind of phase transition in software development: it necessitates a different toolset and a different approach to software development.
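
To make “versioned and published alongside the results” concrete, here is a minimal sketch of the idea: a script that stamps each result file with the exact Git commit of the analysis code that produced it, so a reader can check out precisely the version behind a figure. (All names here are hypothetical, just for illustration.)

    # Minimal provenance sketch: stamp each result file with the exact
    # code version that produced it, so readers can reproduce the figure.
    import json
    import subprocess
    from datetime import datetime, timezone

    def current_commit() -> str:
        """Return the Git commit hash of the analysis code, or 'unknown'."""
        try:
            return subprocess.check_output(
                ["git", "rev-parse", "HEAD"], text=True
            ).strip()
        except (OSError, subprocess.CalledProcessError):
            return "unknown"

    def save_result(values, path="result.json"):
        """Write the numerical result together with its provenance record."""
        record = {
            "values": values,
            "code_commit": current_commit(),
            "generated": datetime.now(timezone.utc).isoformat(),
        }
        with open(path, "w") as f:
            json.dump(record, f, indent=2)

    if __name__ == "__main__":
        save_result([1.2, 3.4, 5.6])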

As Nick Barnes points out, these ideas

… are simply extensions of the core principle of science: publication. Publication is what distinguishes science from alchemy, and is what has propelled science—and human society—so far and so fast in the last 300 years. The Manifesto is the natural application of this principle to the relatively new, and increasingly important, area of science software.

15 Responses to The Science Code Manifesto

  1. Nick Barnes says:

    Thanks, John. I’d add that climate science isn’t any kind of outlier in this: most code, in most fields of science, isn’t released, and even where it is released the other problems addressed by the manifesto still apply: it’s not properly curated or acknowledged.

    Most of the GCMs do have available source code (unlike large complex software in many other fields): in climate science the availability problem mainly applies to smaller models, and to the small pieces of analytical code written for individual publications.

  2. Nick Barnes says:

    And also: we’re not alone in this, and it’s not a case of outsiders interfering in science. Increasing numbers of scientists, across all disciplines, are talking about and acting on these problems and other ‘open science’ issues. The Manifesto, like the Panton Principles, is meant simply to be a banner which can unite many disparate voices.

  3. Aaron Denney says:

    I feel a bit conflicted about how useful requiring code to be open will be. When conclusions depend on the code, peer review that doesn’t review the code isn’t satisfactory. On the other hand, when code is shared it isn’t rewritten, and confirmations are no longer independent. I think the first concern trumps the second, but I’m uneasy about how much code is already passed around without understanding.

  4. Jess says:

    Stoked: I’m signatory #400. :)

    I think that releasing code is just as important as releasing papers. One thing that I found useful for releasing code was Matt Might’s Community Research and Academic Programming License (or CRAPL), an academic-strength open source license which covers a lot of what the Manifesto contains.

    You can find it here: http://matt.might.net/articles/crapl/

  5. Nick Barnes says:

    The CRAPL has some interesting aspects, but is unfortunately a shrink-wrap contract, not a license (and also is about 10x too verbose for my liking). It might be possible to deliver some of the same value (clause IV in the CRAPL) in a license.

  6. noodly says:

    There are two comments on HN that I agree with (I’m the author of the second):
    http://news.ycombinator.com/item?id=3112705
    http://news.ycombinator.com/item?id=3112984
    (from thread http://news.ycombinator.com/item?id=3112274)

    tl;dr – “I didn’t see any description on this page of the problems this manifesto wants to solve” (I see more problems that it creates)
    “The code is not as important as descriptions of algorithms, and the ideas behind code”

    I would also like to add:
    Math will not go anywhere soon, but programming languages become obsolete much faster. So it’s more important that a paper contain as much detail as is required to replicate the results without the code than that the code be easily accessible. Easy access to code can even degrade the quality of papers: for example, when a paper misses some important detail of an algorithm and the code is written in some kind of assembly, the code still works (you can run it and get the same results), so a lazy researcher would use it without understanding it, even if he couldn’t reconstruct the algorithm from the paper, raising the chance of replicating bugs.
    That’s not how science should work.

  7. David Corfield says:

    I’m thoroughly in favour of code being published, not for its own sake but with a view to allowing replication and criticism. So

    …are simply extensions of the core principle of science: publication. Publication is what distinguishes science from alchemy, and is what has propelled science—and human society—so far and so fast in the last 300 years. The Manifesto is the natural application of this principle to the relatively new, and increasingly important, area of science software.

    strikes me as a very odd thing to write. Weren’t Zosimos of Panopolis’s books publications? I could just about understand the quotation with ‘replication’ in place of ‘publication’.

    • Nick Barnes says:

      As is often the case, precision was sacrificed in this simile, for the sake of brevity and rhetorical effect. “Alchemy” is standing in for the hermetic and esoteric traditions often followed in that discipline: a method might never be published, or if published might be enciphered, or described in metaphorical or allegorical ways, or steps might be omitted or misrepresented. These obfuscations were used to prevent replication, or to restrict it to an elite circle of initiates. The effect was that advances were slow and often lost. Some of these traditions died hard in the 17th century, at the birth of modern science: scientists wanted to keep their discoveries to themselves. Henry Oldenburg had to badger people into publication, and (as I recall) on occasion resorted to trickery to achieve it.

      • David Corfield says:

        Yes, it’s tricky to say what you wanted to say briefly. And as I said, I’m very sympathetic to what you are trying to achieve.

        A good case of secrecy holding up progress is that of the Renaissance court mathematicians challenging each other to solve specific problems, while keeping their techniques to themselves.

        • John Baez says:

          Yeah, we all tend to pick on the poor alchemists. There was a lot of secrecy in some alchemical traditions, though. Isaac Newton is a great example of that!

  8. John Baez says:

    Here are some of the comments I got on my thread about this over on Google+.

    Toby Bartels wrote:

    I don’t agree with this line in the supporting discussion:

    Adapting someone else’s code without permission and citation is plagiarism.

    The word ‘permission’ is incorrect here. The authors of the discussion appear to be conflating copyright and plagiarism; permission applies to copyright, while citation applies to plagiarism. (Also, ‘adapting’ is misleading but makes sense in context.)

    For copyright, the possessive in ‘someone else’s code’ refers to the legal owner of the copyright. Publishing someone else’s code (whether adapted or not) without permission is a violation of laws that almost every jurisdiction has adopted, but citation as such is not required (although it’s usually made a condition of permission). Abiding by the law is often wise; however, these laws have nothing to do with academic standards.

    For plagiarism, the possessive in ‘someone else’s code’ refers to the actual originator of the work. Using someone else’s code (whether adapted or not) without citation is a violation of the academic standards that we normally adopt, but permission as such is not required (although it’s usually obtained for legal reasons). Abiding by academic standards is a necessity for any decent researcher.

    As I said, ‘adapting’ makes sense in context, so they should simply remove the text ‘permission and’.

    Benjamin Ramage wrote:

    I like this idea, but I wonder about some of the finer points of what would be acceptable. Ecological data sets are often extremely messy (especially when they have been collected by dozens of different people over many years, e.g., long-term Forest Service inventory data), and thus code can be littered with notes about why particular plots/transects etc. have been excluded from analysis. Even in first-hand data sets, omissions and notes can be common; for instance, I have had to exclude plots with notes like “field assistant appeared extremely hungover during data collection – data do not make any sense at all”. In cases like this, would it be ethical to just delete the relevant parts of code before posting (assuming that the data set was not also provided)? If the raw data had to be provided, would it be ethical to delete the relevant parts of code AND delete the erroneous records from the data set? More generally, I guess I’m wondering how much code cleaning would be accepted (or even expected)?

    Carlos Scheidegger wrote:

    Benjamin: the things you point out are directly analogous to the non-reasons people give for not publishing their source code. There is a cultural aspect: the community is expected to understand that all code starts messy. Open source writers have had to come to grips with this as well. Let’s get the high-order bits right first!

    Also, notice that the alternative (writing messy software but not publishing it) is much worse.

    I would also like to add that this manifesto should include publishing data as well. So many papers are irreproducible (or incomparable) even with open source software, simply because the data on which the experiments are based cannot be acquired.

    Jane Shevtsov wrote:

    I’m also an ecologist and I think what should be published are the core algorithms, not the stuff that’s specific to processing your data. (Good programming means keeping the two as separate as possible anyway.) Of course, the data should be released, too, but that’s a separate issue.

    The question is which code is interesting or unique enough to publish. When I developed a new method for analyzing stock-flow networks, the code was published as an appendix to the paper. But does anyone really want to see R code for a bootstrap two-group comparison?
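
    (To be fair, such a bootstrap two-group comparison is only a handful of lines. Here is a sketch in Python rather than R, with made-up data, just to show how routine it is:)

        # Bootstrap two-group comparison: resample each group with
        # replacement to get a confidence interval for the difference
        # in means. The data values are made up for illustration.
        import random

        random.seed(0)
        group_a = [4.1, 5.3, 4.8, 5.0, 4.6]
        group_b = [5.9, 6.2, 5.5, 6.8, 6.1]

        def resample_mean(xs):
            """Mean of one bootstrap resample of xs."""
            return sum(random.choice(xs) for _ in xs) / len(xs)

        diffs = sorted(resample_mean(group_b) - resample_mean(group_a)
                       for _ in range(10000))
        lo, hi = diffs[250], diffs[9750]  # central 95% of resampled diffs
        print(f"95% bootstrap CI for difference in means: [{lo:.2f}, {hi:.2f}]")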

    Benjamin Ramage wrote:

    +Jane Shevtsov, I agree that what is important to publish are the core algorithms and anything that makes an analysis unique or novel, but the proposed science code manifesto says “All source code written specifically to process data for a published paper must be available to the reviewers and readers of the paper”.

    +Carlos Scheidegger, I guess what I’m asking for are guidelines about how much cleaning would be acceptable. In my opinion, there’s a delicate balance between transparency and readability. Papers (and code) that acknowledge and explain every excluded data point (regardless of how trivial the reason or non-influential the outcome) can be nearly unreadable. At the other extreme, if a researcher excludes data points (as well as any reference to these data points) simply because they are outliers (i.e. without considering the potential mechanisms), questions can and should be raised about the legitimacy of the results. I know these are not new issues, but they are highly relevant to initiatives like The Science Code Manifesto. If clear guidelines are part of the manifesto, I think it might increase the chances of widespread adoption.
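
    One possible middle ground, sketched below with hypothetical plot IDs, is to keep exclusions as data with stated reasons, so the cleaning itself stays reviewable instead of being silently deleted:

        # Keep data exclusions explicit and reviewable instead of silently
        # deleting records: each excluded plot ID carries a stated reason.
        EXCLUDED_PLOTS = {
            "plot_017": "transect re-surveyed after a recording error",
            "plot_042": "field notes flag the data as unreliable",
        }

        def clean(records):
            """Drop excluded records, logging each exclusion with its reason."""
            kept = []
            for rec in records:
                reason = EXCLUDED_PLOTS.get(rec["plot_id"])
                if reason:
                    print(f"excluding {rec['plot_id']}: {reason}")
                else:
                    kept.append(rec)
            return kept

        records = [{"plot_id": "plot_001", "value": 3.2},
                   {"plot_id": "plot_042", "value": 9.9}]
        print(clean(records))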

    Carlos Scheidegger wrote:

    +Benjamin Ramage There should never be a penalty for publishing too much code. The culture should be that it’s ok if your code is messy, as long as I can run it on my data.

    There are two good side effects of this. First, it acknowledges that the current situation is terrible, and that an over-correction is sometimes necessary. Second, there’s a long-term incentive to clean up code and create default standard libraries.

    Miguel Angel wrote:

    +John Baez What are your thoughts about unit tests in science-related source code? I think unit tests would be great to ensure that the code does what it is supposed to do, and to give other researchers who adapt the code some confidence that they’re not breaking anything in the process of changing it.

    John Baez wrote:

    +Miguel Angel I’m not really the right person to answer that question; I’m a mere mathematician. Someone else here could do better.

    I’ve just noticed, as I’m trying to read papers about climate science, how often it’s impossible to tell how the authors have processed their data. And it’s not just climate science; it seems to be all science. Back when publishing meant “printing words on paper”, there was a good excuse for this.

    Miguel Angel wrote:

    +John Baez It’s alright. I just thought it would be a nice addition to the manifesto. Unit testing is used widely in software development, and is particularly useful when you have to deal with a lot of complexity in your code (Wikipedia does a better job of explaining it than I can – http://en.wikipedia.org/wiki/Unit_testing).
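
    For instance, a minimal unit test for a small piece of analysis code might look like this (the function and values are hypothetical, just to illustrate the idea):

        # A tiny unit test: pin down the expected behaviour of a helper
        # so later changes that break it are caught automatically.
        import unittest

        def anomaly(series, baseline):
            """Return each value minus the mean of the baseline period."""
            mean = sum(baseline) / len(baseline)
            return [x - mean for x in series]

        class TestAnomaly(unittest.TestCase):
            def test_baseline_mean_gives_zero_anomaly(self):
                # A series equal to the baseline mean has anomaly zero.
                self.assertEqual(anomaly([2.0, 2.0], [1.0, 3.0]), [0.0, 0.0])

            def test_values_above_mean_are_positive(self):
                self.assertGreater(anomaly([5.0], [1.0, 3.0])[0], 0.0)

        if __name__ == "__main__":
            unittest.main()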

    Carlos Scheidegger wrote:

    +Miguel Angel In engineering and computational science there’s a large research effort in “verification and validation” (“V&V”). It’s closely related to unit testing. A quick Google search yielded this page on V&V for computational fluid dynamics: http://www.grc.nasa.gov/WWW/wind/valid/tutorial/tutorial.html
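
    As a toy example of the “verification” half (not taken from that tutorial; the numbers are only illustrative), one can check that a centred finite difference converges at its theoretical second order:

        # Order-of-accuracy check: halving the step size h should cut the
        # error of a second-order centred difference by about a factor of 4.
        import math

        def centred_diff(f, x, h):
            """Second-order centred approximation to f'(x)."""
            return (f(x + h) - f(x - h)) / (2.0 * h)

        exact = math.cos(1.0)  # d/dx sin(x) at x = 1
        for h in [0.1, 0.05, 0.025]:
            err = abs(centred_diff(math.sin, 1.0, h) - exact)
            print(f"h = {h:7.3f}   error = {err:.2e}")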

    Miguel Angel wrote:

    +Carlos Scheidegger Publicly available here: https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B_6AKCRES-6tNjUyNGM0YTUtOGE1OS00YzgxLTkxOTEtMGY2ODFiNjcxYWQy&hl=en

    It’s a bit late here so I will read it tomorrow, but it looks interesting.

    F. Lengyel wrote:

    I was expecting an elaborated formal theory of validation and verification, as opposed to a catalog of recommended heuristics and methodological practices in plain English that researchers generally follow (or ought to follow) in the course of their work. IV&V and V&V seem to differ in who is doing the V&V, so it is misleading to suggest that they are radically different. The extensive literature on formal methods familiar to a logician or computer scientist (e.g., formal logical and programming languages adapted to program verification, model checking, and program transformation) receives only passing mention. In any case, V&V advocacy would require its own manifesto.

    Tim van Beek wrote:

    I don’t think there is any need to mention coding techniques in the manifesto. It is good and important as it is, to get people to recognize that their code should be part of their publication.

    Of course most scientists aren’t very good at programming, but most computer scientists aren’t either. Getting them to recognize that is an entirely different endeavor. Usually people think they are good at programming when they have succeeded in compiling a small program and getting it to do what it was supposed to do. It’s like thinking you know all about math because you can add and multiply natural numbers.

    Unit tests are just one example of a tool in a large toolbox that helps you design and code better programs and save a lot of time along the way. Projects that develop big, highly critical software systems like the Windows operating system would have failed miserably decades ago if they hadn’t developed these tools. Most scientists who develop software are at least three decades behind the state of the art in this sense. So there certainly is a lot of potential in a closer interaction between scientists who need to develop software and professional programmers, but the former will have to understand this first.

    Me, for example, I won’t run after scientists to tell them about professional software development if they aren’t interested. And I won’t waste my time trying to explain to them why they should be interested.

    BTW, I’m the 418th “endorser” :-)

    • Vasileios Anagnostopoulos says:

      A notable omission from this manifesto, related to one of Stallman’s views on software, is the necessity of including a document describing the procedure for compiling the source code into an executable (imagine a 30,000 LOC project without a makefile). Moreover, equally important (in my view and Stallman’s) is the ability to run or interpret the code in at least one free (as in beer) environment. If someone has written the software in C++ using HP/UX system calls, it is not useful to a researcher who doesn’t have the money to buy an HP/UX workstation. A notable example is Darwin’s code, which cannot be cross-compiled from another OS to produce a base system (not a scientific example, but an example of a lack of executability).

      • Nick Barnes says:

        The manifesto is principally concerned with publication: that readers should be able at least to *read* the code (because without this, an important aspect of method is not published). Being able to *run* the code opens a subsidiary can of worms, and in particular is not going to be possible in many sciences at present. Many scientists, and some entire disciplines, rely on proprietary – and sometimes very expensive – third party software components. I don’t much like it, but I can’t hope to change it, and as an outsider I can’t even get traction towards changing it (although the discussion document does address this subject).

        The manifesto is aimed at things which *can* be changed, today.

        Finally, although I personally have a great deal of respect for RMS’s work and achievements – and have been a satisfied user of GCC and emacs for more than 20 years – he’s not any sort of authority in science. Why should scientists care what he thinks?

        • davidtweed says:

          I’m very tempted to take the opposite view: being able to just read the code isn’t much use. What happens if the author and I disagree about the correctness of some point? It’ll almost certainly come down to a response I’ve used myself on occasion: “It works on my machine, dunno what you think is wrong.” In contrast, if I can run the code, then even if I can’t understand it I can produce “examples of misbehaviour” that are more difficult to just brush under the carpet.

          As you say, actually getting independently compilable code is incredibly difficult (and I’m guilty of not cleaning up some code enough to put it on the Azimuth wiki, so I’m a very black pot here), but I suspect it’s the only thing that will be effective in spotting errors, some of which will have led to bigger “overall picture interpretation” mistakes.

  9. […] Azimuth points you to the Science Code Manifesto — if you code, go sign it! […]
