Java Test Case Generation Tool Competition

Timeline

  • 20 Jan’23: Tool submission.
  • 20 Feb’23: Notification of the results for structural metrics (code coverage and mutation score).
  • 08 Mar’23: Notification of the understandability results.
  • 15 Mar’23: Tool report deadline.
  • May’23: Official competition results and tool presentations live at the SBFT workshop.

Benchmarking Platform

The infrastructure for the Java tool competition is available on GitHub and can also be run using Docker: https://github.com/JUnitContest/JUGE. Please refer to the GitHub repository for instructions.

Related publication
Devroey, Xavier, Alessio Gambi, Juan Pablo Galeotti, René Just, Fitsum Kifetew, Annibale Panichella, and Sebastiano Panichella. “JUGE: An Infrastructure for Benchmarking Java Unit Test Generators.” arXiv preprint arXiv:2106.07520 (2021). https://arxiv.org/abs/2106.07520

Competition Process

One of the biggest challenges of automated test case generation is that the produced test cases are often difficult to understand. This is one of the main limitations hindering wider adoption of automated test case generators in practice. In this edition of the Java Tool Competition we will therefore also evaluate the understandability of the generated test cases. Intuitively, a test case is understandable if a human can easily grasp its semantics.
The assessment will be conducted as a study with human evaluators, who will assign an understandability score to a random sample of the generated test cases. The average understandability score across this sample will count for 20% of the final score used to rank the tools.
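Purely as an illustration, and assuming the remaining 80% of the final score comes from the structural metrics (code coverage and mutation score), the ranking score would be combined along the lines of:

  final score = 0.8 x structural score + 0.2 x average understandability score

The exact weighting among the structural components is determined by the organisers and is not specified here.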

Tool submission (by 20 Jan’23 AoE)
Fill in this Google form: https://forms.gle/EwPbyGu1xKCBdRS67 and send a Zip file containing your tool to the organisers: Gunel Jahangirova - gunel.jahangirova@kcl.ac.uk and Valerio Terragni - v.terragni@auckland.ac.nz.

Notification of the results for structural metrics (code coverage and mutation score) (by 20 Feb’23 AoE)
We will run your tool on the benchmark and send you the code coverage and mutation score results by email. You will need these data to write the report.

Notification of the understandability results (by 08 Mar’23 AoE)
We will conduct a human evaluation of your test cases to assess their understandability and send you the results, which you should also discuss in your report.

Tool report deadline (by 15 Mar’23 AoE)
You will need to submit a report, which will be included in the workshop proceedings. The report must follow the IEEE conference format, with a limit of 2 pages including references. Please submit the PDF to the organisers by the deadline: Gunel Jahangirova - gunel.jahangirova@kcl.ac.uk and Valerio Terragni - v.terragni@auckland.ac.nz.

Results (May’23)
Finally, the winners will be announced live during the SBFT workshop in May.

Prizes and awards

Will be announced later. Stay tuned!

Organizers

The Java tool competition is organized jointly by Gunel Jahangirova - gunel.jahangirova@kcl.ac.uk and Valerio Terragni - v.terragni@auckland.ac.nz.