
Add files via upload #45

Open · wants to merge 2 commits into master

Conversation

@Seymour22

Added code to print statistical difference between consecutive models with n+1 subtypes

Reran simulated data to print statistical difference between consecutive models with n+1 subtypes
@Seymour22 (Author) left a comment

Added code in AbstractSustain.py to print the statistical difference between consecutive models with n+1 subtypes

@Seymour22 (Author) left a comment

Updated

@noxtoby (Member) commented Apr 13, 2023

Had a quick look on my phone. Is this a t-test between BICs? Are you sure that this is valid?

@Seymour22 (Author)

Yes, this is a t-test between the BICs. I've checked it on the simulated data, which supports a ground truth of two subtypes rather than three. See the "SuStaIn tutorial using simulated data updated stats between models" file in notebooks.
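For reference, the kind of test being discussed can be sketched as a paired t-test on per-fold cross-validation BIC values of two consecutive SuStaIn models. This is a minimal illustrative sketch, not the pySuStaIn API: the function name `compare_models_ttest` and the fold-level BIC arrays are assumptions.

```python
# Hypothetical sketch: paired t-test between per-fold CV BIC values of
# consecutive models (n vs n+1 subtypes). Names and data are illustrative.
import numpy as np
from scipy import stats

def compare_models_ttest(bic_model_n, bic_model_n1):
    """Paired t-test on per-fold BIC values of two nested models."""
    bic_n = np.asarray(bic_model_n, dtype=float)
    bic_n1 = np.asarray(bic_model_n1, dtype=float)
    t_stat, p_value = stats.ttest_rel(bic_n, bic_n1)
    return t_stat, p_value

# Example with made-up fold BICs over 10 CV folds
rng = np.random.default_rng(0)
bic_2 = 1000 + rng.normal(0, 5, size=10)        # 2-subtype model
bic_3 = bic_2 + 8 + rng.normal(0, 5, size=10)   # 3-subtype model, higher BIC
t, p = compare_models_ttest(bic_2, bic_3)
print(f"t = {t:.2f}, p = {p:.4f}")
```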

@sea-shunned (Member)

I think Neil's comment is more about how meaningful a t-test between BICs is. That justification is key, and I think it'd be prudent to have it rather than integrate this based only on empirical results on the simulated data.

From a code perspective, if we were to do this, you should loop over all pairs of subtype models rather than the for loop currently implemented. After this, there should also be an option for multiple-comparison correction.
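The suggestion above (all pairs plus a correction) can be sketched as follows. This is an assumed structure, not pySuStaIn code: `cv_bics` maps a subtype count to its per-fold BIC array, and a simple Bonferroni correction stands in for whatever correction would actually be chosen.

```python
# Illustrative sketch: paired t-tests over every pair of subtype models,
# with a Bonferroni multiple-comparison correction. Data are made up.
from itertools import combinations
import numpy as np
from scipy import stats

def pairwise_bic_tests(cv_bics):
    """Compare all pairs of models; return Bonferroni-corrected p-values."""
    pairs = list(combinations(sorted(cv_bics), 2))
    results = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(cv_bics[a], cv_bics[b])
        results[(a, b)] = (t, min(p * len(pairs), 1.0))  # Bonferroni
    return results

rng = np.random.default_rng(1)
cv_bics = {k: 1000 + 10 * k + rng.normal(0, 5, size=10) for k in (1, 2, 3)}
for (a, b), (t, p_corr) in pairwise_bic_tests(cv_bics).items():
    print(f"{a} vs {b} subtypes: t={t:.2f}, corrected p={p_corr:.4f}")
```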

@Seymour22 (Author)

Thanks for clarifying the issue @sea-shunned. I guess increasing the sample size for the t-test, i.e. the number of cross-validation folds, wouldn't work either, as we'd end up with similar BIC values for each of the cross-validations. I think a simpler method is to compare the difference of BICs for each of the models trained on the main dataset. I'll update the code in another notebook.
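The simpler alternative mentioned above amounts to interpreting the raw BIC difference (ΔBIC) between two models fit on the full dataset. A minimal sketch, using the common Kass & Raftery rule-of-thumb thresholds; the function name and BIC values are made up for illustration:

```python
# Hypothetical sketch: interpret the BIC difference between two candidate
# models using conventional evidence thresholds (Kass & Raftery style).
def delta_bic_evidence(bic_a, bic_b):
    """Return the BIC difference, its evidence strength, and the preferred model."""
    delta = abs(bic_a - bic_b)
    if delta < 2:
        strength = "not worth more than a bare mention"
    elif delta < 6:
        strength = "positive"
    elif delta < 10:
        strength = "strong"
    else:
        strength = "very strong"
    preferred = "A" if bic_a < bic_b else "B"  # lower BIC is preferred
    return delta, strength, preferred

delta, strength, preferred = delta_bic_evidence(1012.4, 1030.9)
print(f"delta BIC = {delta:.1f} ({strength} evidence for model {preferred})")
# → delta BIC = 18.5 (very strong evidence for model A)
```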
