By Girija Goyal

A valuable but hidden set of data: research trainees spend months to years confirming previously published data, yet these confirmatory experiments are not considered publishable. As a result, other trainees in the same field "reinvent the wheel" for lack of publicly available information.

A simple idea: imagine the savings in time and resources, the reduction in retractions, and the added impact of each experiment if we knew, for example, that 10 labs had repeated it, 7 found a phenotype and 3 did not. What if a single click could highlight the differences in methodology between those experiments?
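To make the idea concrete, here is a minimal sketch of the kind of data model such a portal could use: each experiment collects replications, each replication records its outcome and methodology, and simple queries yield reproducibility statistics and methodology differences. All names here (`Replication`, `phenotype_found`, the methodology fields) are illustrative assumptions, not part of any proposed schema.

```python
from dataclasses import dataclass, field


@dataclass
class Replication:
    """One lab's attempt to repeat a published experiment (hypothetical schema)."""
    lab: str
    phenotype_found: bool
    methodology: dict = field(default_factory=dict)  # e.g. {"cell_line": "HeLa"}


def reproducibility_stats(replications):
    """Summarise how many labs repeated the experiment and how many saw the phenotype."""
    total = len(replications)
    positive = sum(r.phenotype_found for r in replications)
    return {"repeated": total, "found": positive, "not_found": total - positive}


def methodology_diff(a, b):
    """Return the methodology fields that differ between two replications."""
    keys = set(a.methodology) | set(b.methodology)
    return {
        k: (a.methodology.get(k), b.methodology.get(k))
        for k in keys
        if a.methodology.get(k) != b.methodology.get(k)
    }


reps = [
    Replication("Lab A", True, {"cell_line": "HeLa", "passage": "low"}),
    Replication("Lab B", True, {"cell_line": "HeLa", "passage": "high"}),
    Replication("Lab C", False, {"cell_line": "HEK293", "passage": "low"}),
]

print(reproducibility_stats(reps))       # counts of positive vs. negative replications
print(methodology_diff(reps[0], reps[2]))  # the "single click" methodology comparison
```

The point of the sketch is that once replications are stored in a structured form, both the headline statistic ("7 of 10 labs found the phenotype") and the methodology comparison fall out of a few lines of code.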

Our vision is to create a gamified portal for researchers (initially focused on students and postdocs) to publish replicated data, compare methodologies and provide reproducibility statistics for individual experiments. A user ladder with increasing prestige (pioneer → scholar → catalyst), reviewing and commenting rights, and goodies donated by biotech companies (antibodies, kits, etc.), coupled with the real benefit of gaining publications and peer-reviewing experience, will attract trainees and encourage return usage.

The core act of comparing several submissions and their associated methodologies has already received overwhelming support from a large network of fellow trainees and mentors (~100). We request a technical partnership to enable single-experiment submission, review by several "players" and assessment of reproducibility.

Attachment: miniReproducibility_Mozfest_2015.pdf (227 KB)


There is a very similar project for replications in computational science: ReScience.

Konrad Hinsen · 1 Oct, 2015

Post-publication review by author: Was just talking to a programmer friend at lunch: the miniReproducibility Project is to science as GitHub is to code, providing a test of whether the science "works", version control and branch points. With goodies!

girija goyal · 4 Oct, 2015

Published: 30 Sep, 2015

License: CC BY