Assess the quality of user submitted annotations #192

Open
goodb opened this issue Oct 3, 2016 · 1 comment

Comments


goodb commented Oct 3, 2016

The project needs a way to know how user-submitted annotations compare to expert and machine-generated annotations. Where a gold standard exists, it should be used to benchmark the data and to inform decisions about how the project should proceed. Many evaluation metrics would be applicable, e.g. those from https://f1000research.com/articles/3-96
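For concreteness, one simple benchmark would be exact-span precision/recall/F1 of user annotations against the gold-standard set. A minimal sketch (the `Annotation` structure and the example spans are hypothetical, not Mark2Cure code):

```python
# Minimal sketch: score user-submitted span annotations against a gold
# standard using exact-match precision, recall, and F1.
# The Annotation tuple and example data are hypothetical, not Mark2Cure code.
from collections import namedtuple

Annotation = namedtuple("Annotation", ["doc_id", "start", "end", "type"])

def score(user_anns, gold_anns):
    """Exact-span matching; partial-overlap credit would need a looser rule."""
    user, gold = set(user_anns), set(gold_anns)
    tp = len(user & gold)  # annotations that match the gold standard exactly
    precision = tp / len(user) if user else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    gold = [Annotation("PMID:1", 0, 7, "disease"),
            Annotation("PMID:1", 20, 28, "gene")]
    user = [Annotation("PMID:1", 0, 7, "disease"),
            Annotation("PMID:1", 30, 35, "gene")]
    print(score(user, gold))  # -> (0.5, 0.5, 0.5)
```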

@x0xMaximus (Member) commented:

Potential Discussion Points:

  • Possible sources of gold-standard comparisons
  • Sources of expert annotations: within the Mark2Cure environment, or comparison to expert annotations from other annotation projects
  • Level of refinement for user comparison: is our user population subdivided and organized in specific ways?
  • Does "make decisions about how the project should proceed" imply automatic document import and task-type distribution (in a multi-task environment)?
