Journal of Experimental Algorithmics (JEA)


JEA Guidelines for Referees

To the Reviewer: Welcome, and thank you for agreeing to review an article submitted to JEA. Thoughtful and timely manuscript review is one of the most important services you can perform for your research community. This page is intended to clarify the standards of JEA and your responsibilities as a reviewer.

If you are reviewing a paper in the Manuscript Central system, please use the review form provided.

If you are reviewing a paper not in Manuscript Central, you can use the review form available at .

Evaluating Manuscripts Submitted to JEA

JEA is one of ACM's flagship publications, and the standard of quality is correspondingly high. Articles are judged on:

  • The relevance of the work to JEA's aims and scope.
  • The originality and significance of the results, which may lie in their likely impact on an application area, their contribution to the understanding of discrete algorithms and data structures, or their contribution to experimental methodology.
  • The thoroughness of the experimentation, and the methodology of the research. This includes the range and relevance of test data, the elucidation of curious or negative results, the quality of conclusions drawn from the experimental results, and how well the data supports those conclusions. For a tutorial or survey, consider the breadth and depth of coverage and the ties made to experimental algorithmics.

    Referees should not hesitate to ask for additional experimental work to support claims made in the paper, or to clarify trends, but should keep in mind that a single paper cannot be expected to resolve all major questions about an algorithm or application.
  • The quality of writing, including organization, clarity, and style.

Checking Software and Data

Most submissions to JEA will include code and test data. Authors are expected to provide complete code packages (preferably written in C or C++) that can be installed on a standard Unix platform by a moderately knowledgeable reader with a few instructions (typically by executing make or some similar building tool). Test data (whether problem instances or output from various programs) should be well documented, including types of instances (if applicable), relevance to testing, significance of results, etc.; much of this documentation may of course appear in the paper itself.
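As a rough illustration of what such a package might look like, a submission could include a Makefile along these lines (a minimal sketch; the program name `solver` and the file names are hypothetical, not taken from any particular submission):

```makefile
# Hypothetical build file for a JEA code submission (all names illustrative).
CC      = cc
CFLAGS  = -O2 -Wall

solver: solver.c
	$(CC) $(CFLAGS) -o solver solver.c

clean:
	rm -f solver
```

A moderately knowledgeable reader would then build the package by running `make` and tidy up with `make clean`, matching the level of effort the guidelines describe.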

Referees are asked to evaluate the software only to the point of verifying that:

  • The package submitted is easily built on some commonly available Unix platform.
  • The instructions for use are reasonably easy to follow.
  • The software generally performs as described when run on a few test files.

In particular, referees are not expected to certify that the software is correct or that all the experimental results reported in the manuscript are valid.

JEA expects all data relevant to the paper to be reported in the paper itself; reviewers are not expected to examine separate data files or to compare them to the results in the paper.

Reviewers are invited to comment if data and other files lack sufficient documentation to be understood or interpreted, or if features do not work as claimed.   


General ACM policies

For more information on specific topics, see the ACM publications policies.
