
Add logic to validate the outcome of the examples #37

Open
3 tasks
jluethi opened this issue Mar 16, 2023 · 0 comments

Comments


jluethi commented Mar 16, 2023

Currently, we check that the examples run through and that the images and segmentations look visually appropriate. We'll likely still change some parameters over time, so I wouldn't expect to fully reproduce the same results (e.g. the segmentation parameters aren't optimized for all examples yet, so it makes sense to change them).

It would nevertheless make sense to have a script we can run for some examples (e.g. just example 01 to start with) that checks some of the content of the results.

Ideas for what to check:

  • Is the OME-Zarr structure still the same? (How do we serialize the structure?)
  • Do we get the same number of segmented objects (unique labels, or number of measurements in the measurement table)?
  • Either compare full measurement tables or some summary statistics (for example 01 it's feasible to just check against the expected table; that wouldn't make sense for larger examples).
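A minimal sketch of the last two checks, assuming the label image and measurement table have been loaded as a numpy array and a pandas DataFrame (the toy data and function names below are hypothetical, just to illustrate the idea):

```python
import numpy as np
import pandas as pd


def count_segmented_objects(labels: np.ndarray) -> int:
    # Number of segmented objects = unique label values, excluding background (0)
    return int(len(np.unique(labels[labels > 0])))


def summary_stats(table: pd.DataFrame) -> pd.DataFrame:
    # Compact per-column summary to compare against a stored reference
    # when full-table comparison is too heavy for larger examples
    return table.describe()


# Toy data standing in for a real segmentation result and its measurement table
labels = np.array([[0, 1, 1], [2, 2, 0], [0, 3, 3]])
table = pd.DataFrame({"area": [2.0, 2.0, 2.0]}, index=[1, 2, 3])

# Consistency check: one measurement row per segmented object
assert count_segmented_objects(labels) == len(table)
```

For example 01, `table` could instead be compared directly against a stored expected table (e.g. with `pandas.testing.assert_frame_equal`), while larger examples would only compare the `summary_stats` output.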

These would then be a bit like integration tests, but we probably don't want to run them at every commit. It can still be valuable to run them e.g. before new releases.
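One way to keep such checks out of the regular test runs (sketch only; the marker name is hypothetical): register a custom pytest marker and select it explicitly, e.g. before a release.

```ini
# pytest.ini — register a custom "validation" marker
[pytest]
markers =
    validation: slow checks that validate example outputs
```

Tests decorated with `@pytest.mark.validation` would then run only with `pytest -m validation`, and regular CI could exclude them with `pytest -m "not validation"`.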
