Whenever participants submit their algorithm through the Grand Challenge, inference and evaluation proceed automatically in the backend on a hidden dataset.
We provide a high-level explanation of the evaluation process at https://ocelot2023.grand-challenge.org/evaluation-metric/; we also provide the actual code in `evaluation/eval.py`.
Follow the steps below to try the evaluation with the training dataset.
- Save a single JSON file that stores cell predictions, following the format described in https://github.com/lunit-io/ocelot23algo/blob/main/README.md#input-and-output (a hedged example is sketched right after this list).
- Save a single JSON file that stores the ground-truth cells, which are originally provided as a list of ground-truth CSV files in Zenodo. This can be done easily with `python convert_gt_csvs_to_json.py -d DATASET_PATH -s train`, which matches the ground-truth format with the JSON for the cell predictions.
- Properly update `algorithm_output_path` and `gt_path` in `evaluation/eval.py` (see the illustrative snippet at the end of this section).
- Run `python evaluation/eval.py`
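
As a rough illustration of the first step, the sketch below writes a cell-prediction JSON in a Grand-Challenge-style "Multiple points" layout. The field names, class encoding, and version block are assumptions here and must be checked against the Input and Output section of the README linked above; treat this as a format sketch, not the authoritative schema.

```python
import json

# Hypothetical predictions: one entry per detected cell.
# "name" identifies the image/patch, "point" is assumed to be [x, y, class_id],
# and "probability" is the detection confidence. Verify the exact schema in the
# README's "Input and Output" section before submitting.
predictions = [
    {"name": "image_0", "point": [128, 620, 1], "probability": 0.92},
    {"name": "image_0", "point": [301, 154, 2], "probability": 0.81},
]

output = {
    "type": "Multiple points",     # assumed Grand-Challenge-style wrapper
    "points": predictions,
    "version": {"major": 1, "minor": 0},
}

with open("cell_predictions.json", "w") as f:
    json.dump(output, f, indent=2)
```

As noted in the second bullet, the ground-truth JSON produced by `convert_gt_csvs_to_json.py` is brought into the same format as the cell predictions, so both files can be consumed by `evaluation/eval.py`.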
Note that `evaluation/eval.py` uses exactly the same code as the one embedded in the Grand Challenge for auto-evaluation.
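
For the path-update step, the two variables only need to point at the files prepared above. A minimal illustration with placeholder paths (the surrounding structure of `evaluation/eval.py` is not reproduced here):

```python
# Inside evaluation/eval.py: illustrative placeholder values only, adapt to your paths.
algorithm_output_path = "cell_predictions.json"  # your cell-prediction JSON
gt_path = "cell_gt_train.json"                   # JSON produced by convert_gt_csvs_to_json.py
```

After updating both paths, running `python evaluation/eval.py` executes the same metric computation used by the Grand Challenge backend.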