By Konrad Hinsen

A central problem with modern computer-aided research is that research based on long computations becomes difficult to verify. It is thus unclear how, and why, such results should be trusted. The most widely discussed aspect of this problem is the (non-)reproducibility of computational work, but the issue goes deeper: even 100% reproducible work can be wrong because of a software bug, an undocumented implicit assumption in the code, and similar defects.

One measure to improve the situation is to require authors who use computers in their research to state, in a short section of each paper, (1) what they have done to validate their methods and their software, and (2) how readers can attempt an independent verification of the work. Reviewers would then check whether those statements match or exceed the current state of the art. The goal is to bring the discussion of the problem out into the open, in a way that nobody can simply ignore. The measure acts as both carrot and stick: good verifiability approaches can be showcased, and insufficient ones can be recognized more easily.


Published: 25 Nov, 2016

License: CC BY