Gas station without pumps

2012 August 27

Accountable research software

Filed under: Uncategorized — gasstationwithoutpumps @ 09:03

Iddo Friedberg asks the seemingly reasonable question “Can we make accountable research software?” on his blog Byte Size Biology. As he points out, most research software is built by rapid prototyping methods, rather than careful software development methods, because we usually have no idea what algorithms and data structures are going to work when we start writing the code.  The point of the research is often to discover new methods for analyzing data, which means that there are a lot of false starts and dead ends in the process.

The result, however, is that research code is often incredibly difficult to distribute or maintain.  Like some others in the bioinformatics community, he feels that the solution is for code to be rewritten and carefully tested before publication of results.  He is aware of at least one of the reasons this is not currently done—it is damned expensive and funding agencies have shown almost no willingness to support rewriting research code into distributable code (I know, as I’ve tried to get funding for that).

The rapid prototyping skills needed for research programming and the careful specification, error checking, and testing needed for software engineering are almost completely disjoint. Some people would even argue that the thinking styles needed for the two types of programming are incompatible.  I wouldn’t go quite that far, but they are certainly very different modes of programming.  It will often be the case that different programmers need to be hired for developing research code and for converting it into distributable code.
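To make the gap concrete, here is a toy sketch (my own hypothetical example, not code from any project discussed here) of the same small computation, the GC content of a DNA sequence, written first in rapid-prototyping style and then in the defensive style that distributable code needs:

```python
# Prototype style: assumes clean input; crashes on an empty string
# and silently counts lowercase or non-DNA characters as non-GC.
def gc_prototype(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)


# Distribution style: documents its contract, normalizes case,
# and rejects input it cannot interpret.
def gc_content(seq: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence.

    Accepts upper- or lowercase A, C, G, T, and N.
    Raises ValueError on an empty sequence or other characters.
    """
    seq = seq.upper()
    if not seq:
        raise ValueError("empty sequence")
    bad = set(seq) - set("ACGTN")
    if bad:
        raise ValueError("non-DNA characters in sequence: %r" % sorted(bad))
    return (seq.count("G") + seq.count("C")) / len(seq)
```

The first version is fine while you are exploring; the second, plus a test suite, is what a stranger can safely run. Multiply that difference across a whole analysis pipeline and the cost of the rewrite becomes obvious.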

The “solution” that Iddo proposes, passed on from Ben Templeton, is the Bioinformatics Testing Consortium, which is a volunteer group of researchers to do some of the quality assurance (QA) steps of software development for each other (code review and testing).  Quite frankly, I don’t see this as being much of a solution.  First, the software has to be in a nearly finished, polished state before the QA steps that they propose make much sense—and getting the code to that state is 90% of the problem.  Second, the volunteer nature of the consortium could easily result in the “tragedy of the commons”, where everyone wants to take more out of the system than they put in.  This is already happening in peer review of papers, with people writing more papers than they review, with the result that editors are finding it harder and harder to get competent reviewers. Third, the people involved are either going to be careful software developers (who are not the main problem in undistributable research code) or rapid prototypers who don’t have the patience and methodical approach of professional testers.

Note: I think that the Bioinformatics Testing Consortium is a good idea. Like many other volunteer projects, it is addressing a real need, though only a small part of the need and with inadequate resources.

I do worry a little about one of the justifications given for distributing research code: the need to replicate experiments.  A proper replication of a computational method is not running the same code over again (and thus making the same mistakes), but re-implementing the method independently.  Having access to the original code is then useful for tracking down discrepancies, as it is often the case that the good results of a method are due to something quite different from what the original researchers thought.  I fear that the push to have highly polished distributable code for all publications will result in a lot less scientific validation of methods by reimplementation, and more “ritual magic” invocation of code that no one understands.  I’ve seen this already with code like DSSP, which almost all protein-structure people use for identifying protein secondary structure with almost no understanding of what DSSP really does or exactly how it defines H-bonds.  It does a good enough job of identifying secondary structure, so no one thinks about the problems.
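For readers who have never looked inside DSSP: its hydrogen-bond definition is, as far as I can tell from the Kabsch and Sander (1983) paper, a crude electrostatic model with a fixed energy cutoff. A minimal sketch (the function names are mine; the constants are from the paper):

```python
def dssp_hbond_energy(d_ON, d_CH, d_OH, d_CN):
    """Kabsch-Sander electrostatic H-bond energy in kcal/mol.

    The four arguments are distances in Angstroms between the
    carbonyl O and C of one residue and the amide N and H of
    another.  Partial charges of 0.42e and 0.20e times the
    dimensional factor 332 give the constant 27.888.
    """
    return 27.888 * (1.0 / d_ON + 1.0 / d_CH - 1.0 / d_OH - 1.0 / d_CN)


def is_hbond(d_ON, d_CH, d_OH, d_CN):
    """DSSP calls it a hydrogen bond when the energy is below -0.5 kcal/mol."""
    return dssp_hbond_energy(d_ON, d_CH, d_OH, d_CN) < -0.5
```

Secondary structure is then assigned by pattern-matching on which residues bond to which (a repeated bond from residue i to residue i+4 marks an alpha helix, for example). The model is a serviceable approximation, not physics, and that is exactly my point: it works well enough that almost no one who runs it could write it down.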

I fear that the push for polished code from researchers is an attempt to replace computational researchers with software publishing teams. The notion is that the product of the research is not the ideas and the papers, but just free code for others to use.  It treats bioinformaticians as servants of “real” researchers, rather than as researchers in their own right.  It’s like demanding that no papers on possible drug leads be published until Phase III trials have been completed (though not quite that expensive), and then that the drug be distributed for free.

Certainly there is a place for bioinformatics as a service: the UCSC genome browser is a good example, and the team of developers, QA people, and IT people needed to build and maintain such a service is big and expensive, more expensive than the researchers involved in the effort.  There are enough uses and enough users for that service to justify the price, but if we hold all bioinformatics researchers to that level of code quality, we'll stifle a lot of new ideas.

Requiring that code be turnkey software before publication is not a desirable goal for bioinformatics as a research community.
