[Context: At the biodiversity_next conference workshop, we asked participants to discuss citizen science data quality issues and write their responses on post-its. These are the responses.]
Hard to compare between users -- different experiences, abilities
All data have issues
Non-systematic collection of data
People are not experts and make errors when identifying species
Citizen scientists don't follow instructions
All data not collected by scientists are bad
No measure of effort in biodiversity recording
Trade-off between educational value and scientific value
Trade-off between fun and work
Biased in space and time
Inaccurate species identification
Bias for interesting-looking species
Problems are at the project level
No way to estimate user expertise
Prejudiced perception of citizen scientists' capabilities vs. those of paid employees
Lack of "skill"
Lack of recognition that citizen science data includes highly standardized data collection
Poor identification
Lack of guides/guidelines and standards
Inconsistencies
People use common names for different species
Misidentification
No trust in knowledge of citizen scientists
Variable and unknown quality of datapoints
Quality increases/changes as people learn
Lack of scientific attitude, misuse of scientific methods, ethics, etc.
Lack of quality control of data and data collectors