RCR: Replicated Computational Results Initiative
Authors of articles nearly accepted in JEA will now be invited to apply for an RCR
certificate attached to their article. For those who accept, a further reviewer will be
appointed to ensure that their experimental results can be replicated. The RCR initiative aims to improve the reproducibility of experimental results in the community and adds to the trustworthiness of the experimental results in articles published in JEA.
The RCR process includes the following steps:
- RCR invitation:
When a manuscript passes sufficient rounds of review to require only minor revisions,
the Editor in Chief will email the contact author to invite the authors to participate in the RCR Initiative. The authors may decline and continue with the traditional process.
- Reproducibility referee assignment:
If the authors agree to participate, a Reproducibility Referee will be assigned to the article as an extra reviewer for the final phases of reviewing. This reviewer's sole responsibility is to ensure that the computational results in the manuscript can be replicated. The reviewer will be known to the authors and will work together with them throughout the process. Reproducibility referees are chosen from a permanent board.
The Board of Reproducibility Referees will be listed on the JEA page, together with the Editorial Board.
- RCR review process:
We expect that this extra reviewing will be light and will not noticeably extend the reviewing period, as it will run in parallel with the last stages of standard reviewing (the referee will communicate with the authors directly via email). The Reproducibility Referee may advise the authors on what additional information to include in the article so that readers can reproduce the results faithfully. Reacting to this advice works the same way as with standard referees: the parties iterate until the referee is satisfied. Ultimately, this referee will declare whether or not the computational results in the manuscript are reproducible.
- RCR Determination:
We anticipate that manuscripts submitted for RCR designation will almost always eventually achieve it; even when one does not, that failure does not imply that the presented results are incorrect, and it will not be an obstacle to acceptance. If, however, the RCR process demonstrates an actual failure in the experimental results, the authors will be required to correct them. The Editor in Chief will manage such situations, seeking to ultimately accept the manuscript in corrected form.
A manuscript whose computational results are successfully replicated will be published with a special RCR designation. The Reproducibility Referee involved should be acknowledged in the published paper.
- Methods for Replicating Results:
We rely on the expertise of the Reproducibility Referee to make the final determination of the RCR designation. Presently we have two basic approaches for assessing replicability. The first is more desirable, but not always possible.
Review of computational results artifacts:
The authors provide the RCR reviewer access to, or a sufficient description of, the computational platform used to produce the manuscript results. Access could be:
- A direct transfer of the software to the reviewer, or a pointer to an archive of the software, together with a description of a commonly available computer system the reviewer can access.
- A guest account and access to the software on the system used to produce the results.
- Detailed observation of the authors replicating the results.
In some situations, authors may not be able to readily replicate computational results. Results may come from a system that is no longer available, or from a leadership-class computing system to which access is very limited. In these situations, careful documentation of the process used to produce the results can be sufficient for an RCR designation. In this case, the software should have its own substantial verification process to give the reviewer confidence that the computations were performed correctly. If timing results are reported, the authors' artifacts should include validation testing of the timers used to report those results.
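As one illustration of such timer validation, a minimal sketch (hypothetical; the function name, delay, and tolerance are assumptions, not RCR requirements) could compare the clock used for reported timings against a known sleep interval:

```python
import time

def validate_timer(timer=time.perf_counter, delay=0.2, tolerance=0.05):
    """Sanity-check a timer by measuring a known sleep interval.

    Returns True if the measured duration is within `tolerance`
    seconds of the requested delay (an illustrative acceptance
    bound, not a prescribed one).
    """
    start = timer()
    time.sleep(delay)
    elapsed = timer() - start
    return abs(elapsed - delay) <= tolerance

# Artifacts might include one such check per clock used in the paper,
# e.g. validate_timer(time.monotonic) alongside the default.
print(validate_timer())
```

A check of this kind documents that the timer has at least coarse agreement with wall-clock time; a thorough artifact would also report the timer's resolution (e.g. via `time.get_clock_info`).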