
[Docs] Minor typos #62

Merged
merged 5 commits into from
Mar 12, 2024
Conversation

DominiqueMakowski
Contributor

Also removed the reference to "of each accumulator", as exgauss doesn't have multiple accumulators.

@DominiqueMakowski
Contributor Author

By the way, what do you think about making the Wald & Exgauss compatible with a specification of type:

ExGaussian([0.8], 0.5, 0.3)

i.e., accepting a vector of length 1 for the drift/sigma? I think it might add the (small) benefit of a consistent syntax with other models in Turing (i.e., using filldist and vectorized operations).
(I'm obviously biased, given my current use case of fitting different models to the same no-choice data.)

Contributor

github-actions bot commented Mar 10, 2024

Benchmark Results

| Benchmark | master | 715a231... | master/715a231e242f20... |
|---|---|---|---|
| logpdf/("SequentialSamplingModels.DDM", 10) | 1.75 ± 0.22 μs | 1.54 ± 0.12 μs | 1.14 |
| logpdf/("SequentialSamplingModels.DDM", 100) | 17.2 ± 0.71 μs | 17.2 ± 1.1 μs | 0.999 |
| logpdf/("SequentialSamplingModels.LBA", 10) | 2.45 ± 0.19 μs | 2.31 ± 0.21 μs | 1.06 |
| logpdf/("SequentialSamplingModels.LBA", 100) | 22.4 ± 1.9 μs | 23.2 ± 1.7 μs | 0.965 |
| logpdf/("SequentialSamplingModels.LNR", 10) | 1 ± 0.15 μs | 0.999 ± 0.15 μs | 1 |
| logpdf/("SequentialSamplingModels.LNR", 100) | 8.64 ± 0.35 μs | 8.66 ± 0.33 μs | 0.997 |
| logpdf/("SequentialSamplingModels.RDM", 10) | 2.63 ± 0.27 μs | 2.62 ± 0.25 μs | 1 |
| logpdf/("SequentialSamplingModels.RDM", 100) | 24.7 ± 1 μs | 24.7 ± 0.83 μs | 0.998 |
| logpdf/("SequentialSamplingModels.Wald", 10) | 0.227 ± 0.17 μs | 0.225 ± 0.17 μs | 1.01 |
| logpdf/("SequentialSamplingModels.Wald", 100) | 1.86 ± 0.15 μs | 2 ± 0.049 μs | 0.933 |
| logpdf/("SequentialSamplingModels.WaldMixture", 10) | 1.08 ± 0.15 μs | 1.09 ± 0.16 μs | 0.99 |
| logpdf/("SequentialSamplingModels.WaldMixture", 100) | 10.6 ± 0.79 μs | 9.97 ± 0.84 μs | 1.06 |
| rand/("SequentialSamplingModels.DDM", 10) | 3.92 ± 0.57 μs | 3.76 ± 0.55 μs | 1.04 |
| rand/("SequentialSamplingModels.DDM", 100) | 0.0378 ± 0.0028 ms | 0.0378 ± 0.0026 ms | 0.998 |
| rand/("SequentialSamplingModels.LBA", 10) | 3.26 ± 0.3 μs | 3.24 ± 0.3 μs | 1.01 |
| rand/("SequentialSamplingModels.LBA", 100) | 16.3 ± 1.2 μs | 16.3 ± 1.2 μs | 1 |
| rand/("SequentialSamplingModels.LCA", 10) | 0.756 ± 0.22 ms | 0.76 ± 0.23 ms | 0.994 |
| rand/("SequentialSamplingModels.LCA", 100) | 8.17 ± 0.24 ms | 8.23 ± 0.26 ms | 0.993 |
| rand/("SequentialSamplingModels.LNR", 10) | 1.09 ± 0.41 μs | 1.09 ± 0.44 μs | 0.999 |
| rand/("SequentialSamplingModels.LNR", 100) | 7.36 ± 3.2 μs | 7.48 ± 3.7 μs | 0.985 |
| rand/("SequentialSamplingModels.RDM", 10) | 1.37 ± 0.44 μs | 1.28 ± 0.35 μs | 1.07 |
| rand/("SequentialSamplingModels.RDM", 100) | 10.6 ± 3.5 μs | 10.7 ± 3 μs | 0.987 |
| rand/("SequentialSamplingModels.Wald", 10) | 0.465 ± 0.16 μs | 0.465 ± 0.16 μs | 0.999 |
| rand/("SequentialSamplingModels.Wald", 100) | 2.76 ± 0.25 μs | 2.89 ± 0.22 μs | 0.955 |
| rand/("SequentialSamplingModels.WaldMixture", 10) | 1.18 ± 0.16 μs | 1.31 ± 0.15 μs | 0.901 |
| rand/("SequentialSamplingModels.WaldMixture", 100) | 11.3 ± 0.21 μs | 12.6 ± 0.19 μs | 0.891 |
| simulate/SequentialSamplingModels.DDM | 3.63 ± 1.5 μs | 3.51 ± 1.4 μs | 1.03 |
| simulate/SequentialSamplingModels.LBA | 3.69 ± 0.47 μs | 3.77 ± 0.34 μs | 0.979 |
| simulate/SequentialSamplingModels.LCA | 0.101 ± 0.02 ms | 0.0962 ± 0.021 ms | 1.05 |
| simulate/SequentialSamplingModels.RDM | 0.0666 ± 0.026 ms | 0.0642 ± 0.026 ms | 1.04 |
| simulate/SequentialSamplingModels.Wald | 9.07 ± 4.9 μs | 9.05 ± 4.4 μs | 1 |
| simulate/SequentialSamplingModels.WaldMixture | 4.3 ± 1.6 μs | 3.73 ± 1.4 μs | 1.15 |
| time_to_load | 3.65 ± 0.053 s | 3.67 ± 0.08 s | 0.993 |

Benchmark Plots

A plot of the benchmark results has been uploaded as an artifact to the workflow run for this PR.
Go to "Actions" -> "Benchmark a pull request" -> [the most recent run] -> "Artifacts" (at the bottom).

@codecov-commenter

codecov-commenter commented Mar 10, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.97%. Comparing base (8a42af7) to head (715a231).

Additional details and impacted files
@@           Coverage Diff           @@
##           master      #62   +/-   ##
=======================================
  Coverage   89.97%   89.97%           
=======================================
  Files          23       23           
  Lines        1416     1416           
=======================================
  Hits         1274     1274           
  Misses        142      142           


docs/src/lca.md Outdated
@@ -1,6 +1,6 @@
# Leaky Competing Accumulator

The Leaky Competing Accumulator (LCA; Usher & McClelland, 2001) is a sequential sampling model in which evidence for options races independently. The LBA makes an additional simplification that evidence accumulates in a linear and ballistic fashion, meaning there is no intra-trial noise. Instead, evidence accumulates deterministically and linearly until it hits the threshold.
The Leaky Competing Accumulator (LCA; Usher & McClelland, 2001) is a sequential sampling model in which evidence for options races independently. the LCA is similar to the Linear Ballistic Accumulator (LBA), but additionally assumes an intra-trial noise in the form of evidence accumulation leakage (in contrast, the The LBA assumes that evidence accumulates in a ballistic fashion, i.e., linearly and deterministically until it hits the threshold).
Owner

"The" needs to be capitalized.

Owner
"the The" in the parentheses

Owner
Noise and leakage are separate concepts. I would say something like:

"intra-trial noise and leakage (in contrast, the LBA assumes that evidence accumulates in a ballistic fashion, i.e., linearly and deterministically until it hits the threshold)."

@itsdfish
Owner

Thanks for the fixes to the docs. There were a few minor issues in the changes.

I'll have to think about it some more, but I don't think making the drift rate a vector is a good idea. Using a scalar communicates to the user that there is only one drift rate, and prevents accidental use of multiple drift rates.

@DominiqueMakowski
Contributor Author

Using a scalar communicates to the user that there is only one drift rate

Fair (though one could also throw an informative error: "only one accumulator is possible for this model").

@DominiqueMakowski
Contributor Author

Another minor consistency point:

LBA has the following order: drift, A, k, τ
But RDM has: drift, k, A, τ

Does it make sense to standardize when possible?

@itsdfish
Owner

Yes. Standardizing the order is on my agenda. It became apparent I should do that when you opened the issue about the LBA doc strings the other day. Thanks for the suggestion.

@itsdfish
Owner

Fair (though one could also throw an informative error "only one accumulator is possible for this model")

I still don't think that would be good to support officially, but to simplify your particular use case, you can define ExGaussian(mu::AbstractArray, sigma::Real, tau::Real) = ExGaussian(only(mu), sigma, tau) in your script (you might also need to import ExGaussian).

Thanks again for the fixes to the docs.
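[Editor's note] The workaround above relies on Julia's multiple dispatch: a one-line method accepting a length-1 vector forwards to the scalar constructor. The sketch below illustrates the pattern with a minimal stand-in struct — the real ExGaussian type lives in SequentialSamplingModels.jl and has more fields and checks; only the dispatch idea is shown here.

```julia
# Minimal stand-in for illustration only; NOT the actual
# SequentialSamplingModels.jl type.
struct ExGaussian
    mu::Float64      # drift / mean of the Gaussian component
    sigma::Float64   # standard deviation of the Gaussian component
    tau::Float64     # mean of the exponential component
end

# Convenience method: accept a length-1 vector for mu so calling code can use
# the same vectorized syntax as multi-accumulator models.
# `only` throws an informative ArgumentError if the vector has != 1 element,
# which matches the "only one accumulator is possible" suggestion above.
ExGaussian(mu::AbstractArray, sigma::Real, tau::Real) =
    ExGaussian(only(mu), sigma, tau)

dist = ExGaussian([0.8], 0.5, 0.3)  # equivalent to ExGaussian(0.8, 0.5, 0.3)
```

Because the vector method dispatches to the scalar constructor (rather than calling itself), there is no recursion, and `ExGaussian([0.8, 0.9], 0.5, 0.3)` fails loudly instead of silently dropping a drift rate.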

@itsdfish itsdfish merged commit 329fcc7 into itsdfish:master Mar 12, 2024
6 checks passed