Clarification Needed on Evaluation Results and Input Files #263

Open
BaharMahdavi opened this issue Nov 22, 2024 · 0 comments

Dear Developers,

Thank you for developing and sharing this excellent tool, and for providing comprehensive benchmarking and detailed evaluation metrics. I am currently running the tool on scenario 20 of the synthetic data without noise, trying to reproduce the evaluation metrics (TP, FP, FN, and F1 score) reported in the supplementary files. However, I have run into a discrepancy and would appreciate clarification on a few points.

I compared SBS96_De-Novo_Signatures.txt against ground.truth.syn.sigs.csv using the evaluation function in subroutines.py. With a cosine similarity cutoff of 0.9, my results differ from the supplementary data; interestingly, they align closely when I lower the cutoff to 0.8. The matching procedure I am following is sketched below.
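
In case it helps pinpoint the discrepancy, here is a minimal sketch of the matching as I understand it. This is my own simplified re-implementation, not the code from subroutines.py; the helper names, file-parsing choices (tab-separated de novo file, comma-separated ground truth, mutation types as the index), and the greedy one-to-one matching are my assumptions.

```python
import numpy as np
import pandas as pd

def cosine(a, b):
    """Cosine similarity between two 96-channel signature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def evaluate(denovo_path, truth_path, cutoff=0.9):
    # Assumption: de novo file is tab-separated, ground truth is comma-separated,
    # and both have the 96 mutation types as the first (index) column.
    denovo = pd.read_csv(denovo_path, sep="\t", index_col=0)
    truth = pd.read_csv(truth_path, index_col=0)
    truth = truth.loc[denovo.index]  # align the mutation-type rows

    # Pairwise cosine similarity: rows = de novo signatures, columns = ground truth.
    sim = np.array([[cosine(denovo[d].values, truth[t].values)
                     for t in truth.columns] for d in denovo.columns])

    # Greedy one-to-one matching above the cutoff, best-matching de novo sigs first.
    matched_truth = set()
    tp = 0
    for i in np.argsort(-sim.max(axis=1)):
        for j in np.argsort(-sim[i]):
            if sim[i, j] >= cutoff and j not in matched_truth:
                matched_truth.add(j)
                tp += 1
                break

    fp = denovo.shape[1] - tp  # extracted signatures with no ground truth match
    fn = truth.shape[1] - tp   # ground truth signatures that were missed
    f1 = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
    return tp, fp, fn, f1

print(evaluate("SBS96_De-Novo_Signatures.txt", "ground.truth.syn.sigs.csv", cutoff=0.9))
```

With this procedure, only the 0.8 cutoff reproduces the supplementary numbers on my end, which is what prompted the question.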

Since the input data includes three ground truth files (ground.truth.syn.catalog, ground.truth.syn.exposures, and ground.truth.syn.sigs) and the outputs include multiple signature extraction files (e.g., All_Solutions and Suggested_Solution), could you kindly confirm whether I am using the correct files and procedure? If I am missing something, I would greatly appreciate your guidance on how to resolve this.

Thank you very much for your help!

Best regards,
Bahar
