
Starting the monty section #6

Merged
merged 41 commits into from
Nov 19, 2024
Changes from 15 commits
41 commits
8fd6b67
Starting the monty section
MJomaba Nov 4, 2024
94d487f
More editing
MJomaba Nov 7, 2024
ba7c859
More editing
MJomaba Nov 7, 2024
995024d
Add code at beginning of pages to homogenise content and improve visu…
MJomaba Nov 8, 2024
69c1611
Correct plot labelling
MJomaba Nov 8, 2024
a7bebc3
Starting the monty section
MJomaba Nov 4, 2024
a2c25f6
More editing
MJomaba Nov 7, 2024
44f526f
More editing
MJomaba Nov 7, 2024
6685184
Add code at beginning of pages to homogenise content and improve visu…
MJomaba Nov 8, 2024
de186bd
Correct plot labelling
MJomaba Nov 8, 2024
89d33e1
Merge branch 'monty-general' of https://github.com/mrc-ide/odin-monty…
MJomaba Nov 8, 2024
8048ac1
Expand text
MJomaba Nov 12, 2024
cde7aab
Update branch with main content (merge branch 'main' into monty-general)
MJomaba Nov 12, 2024
6343ebf
Restructuring and adding content + placeholders
MJomaba Nov 12, 2024
0fff7b2
Merge main into current branch (merge branch 'main' into monty-general)
MJomaba Nov 12, 2024
57ce74b
some light edits
edknock Nov 13, 2024
87a98a9
Spacing of formula
MJomaba Nov 13, 2024
8a501b1
Ironing up spacing
MJomaba Nov 13, 2024
c9a44a3
Amending text
MJomaba Nov 13, 2024
4a13741
Consistent spacing
MJomaba Nov 13, 2024
99e02d2
Intect to use allow_multiple_parameters to replace Vectorize
MJomaba Nov 14, 2024
1b09bf5
Merge branch 'main' into monty-general
MJomaba Nov 14, 2024
093f609
Update _quarto.yml
MJomaba Nov 14, 2024
737b139
Change name of files to match book style
MJomaba Nov 14, 2024
211104d
Remove mcstate to avoid confusion and WinBUGS and JAGS as example of …
MJomaba Nov 14, 2024
83a5ca7
Remove mcstate to avoid confusion and WinBUGS and JAGS as example of …
MJomaba Nov 14, 2024
c4c0b72
Removing advanced monty section as too little content for now
MJomaba Nov 14, 2024
8f5632c
Remove composability section as too little content for now, will be p…
MJomaba Nov 14, 2024
de2fcba
Changing 'l_distribution' to more explicit name 'mixture_model'
MJomaba Nov 14, 2024
f8543c6
Change variance of 1-D VCV matrix to avoid confusion between dimensio…
MJomaba Nov 14, 2024
40c0dac
Removing runners section as too little content
MJomaba Nov 14, 2024
d8b94bc
Working on the rewording and deleting a miniscule section that was ou…
MJomaba Nov 14, 2024
037dc79
Reworking of the parameter section
MJomaba Nov 14, 2024
93bba6a
Merge branch 'main' into monty-general
richfitz Nov 19, 2024
42291a8
Correct typo
MJomaba Nov 19, 2024
eb3a878
Merge branch 'monty-general' of https://github.com/mrc-ide/odin-monty…
MJomaba Nov 19, 2024
af8ce8d
Update monty.qmd
MJomaba Nov 19, 2024
d3be774
Update monty.qmd
MJomaba Nov 19, 2024
7f40429
Amend text to remove missing link
MJomaba Nov 19, 2024
c5ae337
Merge branch 'monty-general' of https://github.com/mrc-ide/odin-monty…
MJomaba Nov 19, 2024
e0e036a
Correct spelling
MJomaba Nov 19, 2024
1 change: 1 addition & 0 deletions 2009fluUK.qmd
@@ -0,0 +1 @@
# 2009 pandemic in the UK
7 changes: 7 additions & 0 deletions _quarto.yml
Expand Up @@ -21,10 +21,17 @@ book:
- part: "Monty"
chapters:
- monty.qmd
- model.qmd
- montyDSL.qmd
- composability.qmd
- samplers.qmd
- runners.qmd
- advanced_monty.qmd
- part: "Odin & Monty"
chapters:
- inference.qmd
- differentiability.qmd
- 2009fluUK.qmd
MJomaba marked this conversation as resolved.
- references.qmd
appendices:
- installation.qmd
Expand Down
14 changes: 14 additions & 0 deletions advanced_monty.qmd
@@ -0,0 +1,14 @@
# Advanced monty

```{r}
#| include: false
source("common.R")
```

## Observers

## Working with other packages

Exporting chains to work with other packages
Member:

this should be basic, not advanced. perhaps just drop this whole file for now

Contributor Author:

Removed file for now.


## Nested models
29 changes: 29 additions & 0 deletions composability.qmd
@@ -0,0 +1,29 @@
# Composability {#sec-monty-combine}

```{r}
#| include: false
source("common.R")
```

```{r}
library(monty)
```

```{r}
# A simple example; a model that contains something of interest,
# and a simple prior from monty_dsl
likelihood <- monty_example("banana")
prior <- monty_dsl({
alpha ~ Normal(0, 1)
beta ~ Normal(0, 10)
})
posterior <- likelihood + prior
posterior

# # The same thing, more explicitly:
# monty_model_combine(likelihood, prior)
#
# # Control properties of the combined model:
# monty_model_combine(likelihood, prior,
# monty_model_properties(has_gradient = FALSE))
```
Member:

not sure what this file is trying to show, drop for now?

Contributor Author:

Dropped for now

114 changes: 114 additions & 0 deletions model.qmd
@@ -0,0 +1,114 @@
# About (monty) Models {#sec-monty-models}

```{r}
#| include: false
source("common.R")
```

In this book, we explore two fundamentally different yet interconnected approaches to modelling: **dynamical systems** and **statistical models**. To illustrate these differences, let’s begin with a familiar (but maybe dated!) metaphor: the game [*SimCity*](https://en.wikipedia.org/wiki/SimCity).

## Models vs. Models: dynamical and statistical perspectives

### SimCity and dynamical systems

Imagine a city model, as in *SimCity*, where thousands of virtual inhabitants follow routines, interact, and respond to changes like new buildings or natural disasters. These elements evolve according to predefined rules and parameters, similar to how **dynamical systems** simulate real-world processes over time.

In a [**dynamical model**](https://en.wikipedia.org/wiki/Dynamical_system), we track changes in a system's "state" over time. The state is a summary of key information at a specific time point; for example, in an epidemiological model, the state might include the numbers of susceptible, infected, and recovered individuals. More detailed models may add variables like age, location, health risks, and symptoms, allowing us to simulate interventions, such as vaccination campaigns, and explore their potential impacts. Many formalisms exist to describe dynamical systems, incorporating e.g. specific input and output functions in control theory, but the central element of these systems is the notion of "state": a mathematical object summarising the system at a given time point.

The `odin` package, which supports differential and difference equations, is an effective tool for building dynamical models, enabling users to simulate time-evolving systems. Dynamical models are well-suited for exploring how specific scenarios or interventions impact a system over time, making them particularly useful for modelling real-world phenomena in a structured way.
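Until that example is written, the central idea, a state updated by fixed rules over time, can be sketched in plain R (a hypothetical discrete-time SIR system with illustrative parameter values, not odin code):

```r
# A discrete-time SIR system: the "state" is the vector c(S, I, R),
# updated at each time step by deterministic rules.
sir_step <- function(state, beta = 0.3, gamma = 0.1) {
  n <- sum(state)
  infections <- beta * state[["S"]] * state[["I"]] / n
  recoveries <- gamma * state[["I"]]
  c(S = state[["S"]] - infections,
    I = state[["I"]] + infections - recoveries,
    R = state[["R"]] + recoveries)
}

state <- c(S = 990, I = 10, R = 0)
trajectory <- matrix(NA_real_, nrow = 100, ncol = 3,
                     dimnames = list(NULL, c("S", "I", "R")))
for (t in seq_len(100)) {
  trajectory[t, ] <- state
  state <- sir_step(state)
}
head(trajectory)  # a time-indexed suite of "states"
```

Running the model yields exactly the kind of output discussed above: one state per time point, with the total population conserved by construction.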

TODO: simple odin deterministic model with discussion about results being a time indexed suite of "states"

### Statistical models and uncertainty

[**Statistical models**](https://en.wikipedia.org/wiki/Statistical_model), on the other hand, focus on capturing uncertainty in outcomes. Instead of simulating the system’s evolution over time, they represent possible states of the world probabilistically. Statistical models summarise data patterns or help make predictions based on uncertainties, such as estimating the range of future cases of a disease or identifying risk factors in a population.

The `monty` package facilitates statistical modelling, providing tools for working with probability distributions and Monte Carlo methods. Where `odin` enables us to explore the dynamics of changing systems, `monty` is ideal for making probabilistic inferences and assessing variability across potential outcomes without necessarily modelling time-evolving dynamics.
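As a placeholder for that example, the essential ingredients of such a model, a parameter, its domain, and a density function, can be sketched in plain R (a hypothetical standalone Gaussian, not yet a monty model):

```r
# A Gaussian statistical model reduced to its essentials: the parameter
# mu is the quantity we reason about, and the model is its log-density.
log_density <- function(mu) {
  dnorm(mu, mean = 0, sd = 1, log = TRUE)
}

mu <- seq(-4, 4, by = 0.1)
plot(mu, exp(log_density(mu)), type = "l", ylab = "density")
```

A function of this shape is exactly what `monty_model_function()` wraps to build a monty model.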

TODO: simple gaussian model - discussion about parameter being actually the sampling space and explain that it connects with Bayesian inference, show that the important component of the model is the parameters and domain support + density function, maybe shows that the model can be built in different way (R function, monty_model(), DSL).
Member:

Can you move this into an issue please


## Bridging dynamical systems and statistical models

While dynamical systems and statistical models serve distinct purposes, they are not strictly separate. In fact, they can be connected through two powerful approaches that bring probabilistic reasoning into dynamical systems: **stochastic processes** and **Bayesian inference**.

### Stochastic processes: adding uncertainty to dynamical models
Member:

I'm not convinced this belongs here, but leave it here for now. It might belong earlier in some general introduction

Contributor Author:

Agree - it touches on odin models, that are upwards in the book.


A natural way to bridge dynamical and statistical models is by introducing [**stochastic processes**](https://en.wikipedia.org/wiki/Stochastic_process). Traditional dynamical models use fixed parameters and deterministic rules to evolve a system’s state over time. However, many real-world systems have inherent randomness or uncertainty that deterministic models cannot capture.

In a **stochastic process**, the system’s state is no longer a single deterministic value but a collection of potential outcomes, each weighted by a probability. This probabilistic view enables us to model fluctuations and uncertainties within the dynamical framework, capturing both the system’s evolution and the uncertainty in each state. Stochastic processes are thus a natural extension of dynamical models, adding an extra layer of realism by treating system evolution as inherently uncertain.
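The idea can be sketched in plain R with a simple chain-binomial epidemic (a hypothetical example with illustrative parameter values, not dust code):

```r
# A chain-binomial epidemic: the number of new infections at each step
# is a random draw, so repeated runs of the same model differ.
run_epidemic <- function(n_steps = 50, n = 1000, beta = 0.3, gamma = 0.1) {
  s <- 990
  i <- 10
  infected <- numeric(n_steps)
  for (t in seq_len(n_steps)) {
    new_inf <- rbinom(1, s, 1 - exp(-beta * i / n))
    new_rec <- rbinom(1, i, 1 - exp(-gamma))
    s <- s - new_inf
    i <- i + new_inf - new_rec
    infected[t] <- i
  }
  infected
}

set.seed(1)
runs <- replicate(20, run_epidemic())
matplot(runs, type = "l", lty = 1, col = "#00000055",
        xlab = "time step", ylab = "infected")
```

Each of the 20 trajectories is one realisation of the same stochastic process; the spread between them is the uncertainty that repeated runs are meant to capture.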

The `odin` package provides an intuitive framework for writing a class of stochastic systems, making it easier to define models that incorporate randomness in their evolution over time. The `dust` package complements `odin` by enabling efficient large-scale simulations, allowing users to capture and analyse the uncertainty inherent to these systems through repeated runs. Together, `odin` and `dust` offer a powerful toolkit for developing and exploring stochastic models that reflect the complexity and variability of real-world dynamics.

TODO simple dust example or just link with relevant dust section in the book

### Bayesian inference: statistical modelling of model parameters

**Bayesian inference** is another approach to linking dynamical and statistical models by treating model parameters as random variables rather than fixed values. This introduces a probability distribution over possible parameter values, making parameter estimation a statistical problem.

Using Bayes’ theorem, Bayesian inference combines:

- **The likelihood**: the probability of observing the data given specific parameter values, and
- **The prior distribution**: our initial assumptions about the parameter values before observing the data.

The result is the **posterior distribution** of a parameter $\theta$ given data $y$:

$$
p(\theta | y) = \frac{p(y|\theta) p(\theta)}{p(y)}
$$

where:

- $p(y|\theta)$ is the likelihood,
- $p(\theta)$ is the prior distribution, and
- $p(y)$ is a normalising constant to ensure the posterior integrates to 1 over all $\theta$ values.

Bayesian inference updates our beliefs about parameters based on observed data, yielding a posterior distribution that reflects both model assumptions and data influence. This probabilistic framework around a dynamical system’s parameters creates a flexible statistical model that adapts as new data becomes available.
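As a concrete (hypothetical) illustration, this update can be carried out numerically on a grid, here estimating the probability $p$ of a coin landing heads after observing 7 heads in 10 tosses, with a uniform prior:

```r
# Grid approximation of Bayes' theorem: likelihood times prior,
# normalised so the posterior sums to 1 over the grid.
p <- seq(0, 1, by = 0.01)
prior <- rep(1, length(p))                      # uniform prior
likelihood <- dbinom(7, size = 10, prob = p)    # p(y | theta)
unnormalised <- likelihood * prior              # p(y | theta) p(theta)
posterior <- unnormalised / sum(unnormalised)
p[which.max(posterior)]                         # posterior mode: 0.7
```

Dividing by `sum(unnormalised)` plays the role of the normalising constant $p(y)$ in the discrete setting.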

In this way, **stochastic processes** add an uncertainty dimension to the state of dynamical systems, while **Bayesian inference** applies probabilistic modelling to the parameters, merging dynamical and statistical perspectives. Together, these methods provide a rich toolkit for modelling complex systems that evolve over time and accounting for both randomness and uncertainty.

## Parameters and model complexity

Member:

There's a missing paragraph here outlining what you are trying to cover in this section.

Contributor Author:

The whole section has been reworked, for hopefully adding flow and clarity.

### Parameters in dynamical systems

In dynamical models, parameters define the structure and dynamics of the scenario. For example, a detailed epidemiological model may need parameters for transmission rates, contact patterns, or intervention schedules. These inputs allow us to examine "what-if" scenarios, enabling decision-makers to predict and manage changes in real-time.

### Parameters in statistical models

Statistical models, meanwhile, use parameters to define probability distributions over possible outcomes rather than scenario-specific details. Here, parameters describe uncertainties, often quantifying risks, making predictions, or summarising patterns in data.

The `odin` package allows us to build and explore the behaviours of dynamical models over time, while `monty` enables us to quantify uncertainties and make probabilistic inferences without simulating detailed dynamics.

## Bayesian or non-Bayesian modelling

Many statistical modelling tools, such as `stan`, `mcstate`, and `BayesTools`, are inherently Bayesian. However, the `monty` package is **Bayesian-agnostic**: it does not require a Bayesian interpretation. This flexibility allows users to work with probabilistic models either within a Bayesian framework or outside of it, depending on the needs of the analysis.
Member:

We should drop mcstate from this, it will just be confusing to people

Contributor Author:

Done


## Probability densities: normalised vs. unnormalised
Member:

This does not feel like it belongs here? Perhaps it belongs in a details section and then linked back to where you want to refer to it?

Contributor Author:

This has been reworked.


A key concept in Bayesian inference and Monte Carlo methods is the distinction between **normalised** and **unnormalised** probability densities.

### Unnormalised density
Member:

these are just paragraphs, not sections - it would read more smoothly without the headings I think

Contributor Author:

Agree. Done.


In Bayesian inference, rather than using the "full" Bayes formula as above, we often work with a posterior density defined as being proportional to a product of densities:

$$
p(\theta | y) \propto p(y|\theta) p(\theta)
$$

where:

- $p(y|\theta)$ is the likelihood, and
- $p(\theta)$ is the prior distribution, as above.

Note that this expression no longer involves the normalising constant $p(y)$. Since calculating $p(y)$ can be very difficult, we often work with the **unnormalised posterior density** $p(y|\theta)p(\theta)$. This unnormalised form is sufficient for many Monte Carlo methods, where only relative densities matter.

### Normalised density

A **normalised density** integrates to 1 over its entire parameter space. This is necessary for direct probability interpretations and for certain Bayesian methods, like model comparison using [Bayes factors](https://en.wikipedia.org/wiki/Bayes_factor).
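The practical consequence, that normalised and unnormalised densities carry the same relative information, can be checked numerically (a hypothetical sketch using a Gaussian with its constant dropped):

```r
# An unnormalised standard normal density (constant 1/sqrt(2*pi) dropped)
# gives the same ratio between any two points as the normalised dnorm().
unnorm <- function(x) exp(-x^2 / 2)
unnorm(1) / unnorm(2)   # identical ratio to the normalised version below
dnorm(1) / dnorm(2)
```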

## Monte Carlo methods with unnormalised densities

Monte Carlo algorithms, such as **Metropolis-Hastings** and **importance sampling**, often operate using unnormalised densities, focusing on relative probabilities rather than absolute values. This makes them efficient for high-dimensional problems where calculating a normalising constant is intractable.
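A minimal sketch of this idea (a hypothetical random walk Metropolis-Hastings in plain R, targeting an unnormalised standard normal): the accept/reject step only ever uses the difference of unnormalised log-densities, so the normalising constant cancels.

```r
# Random walk Metropolis-Hastings on an unnormalised log-density:
# only log-density differences (i.e. density ratios) are needed.
log_target <- function(x) -x^2 / 2  # standard normal, constant dropped

set.seed(42)
n <- 5000
x <- numeric(n)
x[1] <- 3  # deliberately start away from the mode
for (i in 2:n) {
  proposal <- x[i - 1] + rnorm(1, sd = 1)
  log_ratio <- log_target(proposal) - log_target(x[i - 1])
  if (log(runif(1)) < log_ratio) {
    x[i] <- proposal
  } else {
    x[i] <- x[i - 1]
  }
}
mean(x[1001:n])  # should be close to 0, the mean of the target
```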

## Linking to Physics: partition functions and probability densities

Normalisation in probability densities has parallels in physics. In statistical mechanics, the **partition function** $Z$ normalises probabilities over all possible states, much like normalising densities in Bayesian inference. This connection highlights why algorithms like **Metropolis-Hastings** use unnormalised densities: they mirror physical systems where absolute energies are less relevant than energy differences.
103 changes: 91 additions & 12 deletions monty.qmd
@@ -1,16 +1,95 @@
# Monty
# Getting started with monty

Before showing how to fit odin models to data, we'll focus for a bit on monty itself. If you are anxious to get on and fit the model from @sec-data-filter, you might skip ahead to @sec-inference, where we resume this.
```{r}
#| include: false
source("common.R")
```

There are a bunch of things we want to cover in this chapter so likely it will be split into a few:
The `monty` R package is designed for modular work with statistical distributions, leveraging Monte Carlo methods for sampling. It enables the construction of increasingly complex models through four approaches:

* defining a model in monty, both easily with `monty_function` and less easily with `monty_model`
* fitting models with MCMC
* writing priors with the `monty_dsl`
* using observers to follow along with the model (if we can come up with a good example)
* working with nested models (once we write the code that would allow this)
* running MCMC chains in parallel
* exporting chains to work with other packages
* different samplers
1. Simple R code,
2. A dedicated domain-specific language (DSL),
3. Integration of `odin` models,
4. Composition of two `monty` models.

Hmm, there really is quite a bit of ground to cover here!
The most basic method to define a `monty` model is through a simple R function, as shown in this chapter. More advanced examples are covered in @sec-monty-models, in particular in the context of Bayesian statistics. @sec-monty-dsl introduces the monty DSL for more versatile model building, using a small probabilistic DSL similar to BUGS. @sec-monty-combine explains how to compose new monty models by combining two existing ones. The last part of this book, starting from @sec-inference, demonstrates how to incorporate `odin` models into `monty` workflows.

Contributor:

Referring to sec-monty-combine here, doesn't seem to exist currently

Contributor Author:

Removed reference.

## A simple example

```{r}
library(monty)
```

We can define a simple Gaussian [mixture model](https://en.wikipedia.org/wiki/Mixture_model) of two sub-populations using `monty_model_function()` from the package. The model represents the distribution of a quantity $l$, sampled with probability $p$ from a normal distribution of mean $m_{1}$ and with probability $1-p$ from another normal distribution of mean $m_{2}$. Both subpopulations have a variance equal to 1.

Member:

I am sure we could come up with a less abstract example to help people think about this, but this can be done in the next PR

Contributor Author:

I think there might be a nice biology example with antibody titres and positive vs negative subpopulations - but yes can wait for another iteration.

To build our `monty` model, we start by defining an R function returning the log-density of our statistical model. This function takes as arguments $l$, the quantity of interest, and the three parameters $p$, $m_{1}$ and $m_{2}$ defining the two normally distributed subpopulations.

MJomaba marked this conversation as resolved.
```{r}
fn <- function(l, p, m1, m2) {
  log(p * dnorm(l, mean = m1) + (1 - p) * dnorm(l, mean = m2))
}
MJomaba marked this conversation as resolved.
MJomaba marked this conversation as resolved.
```

Assuming that the population is 75% from subpopulation 1 (normally distributed with mean 3) and 25% from subpopulation 2 (also normally distributed but with mean 7), we can build our `monty` model by indicating that the parameters of the subpopulations are fixed at these given values.

```{r}
l_distribution <- monty_model_function(fn,
fixed = list(p = 0.75, m1 = 3, m2=7))
Member:

Why l_distribution - can you use a better name here?

Contributor Author:

Changed to "mixture_model"

```
Member:

Suggested change
fixed = list(p = 0.75, m1 = 3, m2=7))
fixed = list(p = 0.75, m1 = 3, m2 = 7),
allow_multiple_parameters = TRUE)

this is what you need to avoid vectorisation (plus some consistency in spacing)

MJomaba marked this conversation as resolved.

We have just created a `monty` model.

```{r}
l_distribution
```

We can plot the density of our model for values of $l$ between 0 and 10 and check visually that the two subpopulations appear in the correct proportions, with modes at 3 and 7.

MJomaba marked this conversation as resolved.
```{r}
l <- seq(from = 0, to = 10, by = 0.1)
plot(l,
     exp(Vectorize(l_distribution$density)(l)),
     type = "l",
     ylab = "density")
```

Contributor Author:

@richfitz I need to use Vectorize() here to pass multiple values of the argument - let me know if a better interface can be shown.

Member:

See above. You will need to pass a 1-row matrix in to vectorise, so

    l <- matrix(seq(from = 0, to = 10, by = 0.1), 1)
    monty_distribution_density(l_distribution, l)

Please do not use $density in docs

Contributor Author (MJomaba, Nov 14, 2024):

Not sure how to make this work. The following code:

    fn <- function(l, p, m1, m2) {
      log(p * dnorm(l, mean = m1) + (1 - p) * dnorm(l, mean = m2))
    }
    l_distribution <- monty_model_function(fn,
                                           fixed = list(p = 0.75, m1 = 3, m2 = 7),
                                           allow_multiple_parameters = TRUE)
    l <- matrix(seq(from = 0, to = 10, by = 0.1), 1)
    monty_model_density(l_distribution, l)

returns the following error:

    Error in packer$unpack(): ! Can't unpack a matrix where the unpacker uses 'fixed'
    Run rlang::last_trace() to see where the error occurred.

which I interpret as a conflict with having some parameters set as fixed. I could instead pass a 4x101 matrix without the fixed argument, like this (which works fine):

    l_distribution <- monty_model_function(fn,
                                           allow_multiple_parameters = TRUE)
    l <- matrix(c(seq(from = 0, to = 10, by = 0.1), rep(0.75, 101), rep(3, 101), rep(7, 101)), 4)
    monty_model_density(l_distribution, l)

but then it makes things quite a bit more complex to understand, and it is not the same: the monty model then has 4 "parameters" (with a mixed interpretation, as one is the actual sampling variable of interest and the others are parameters of the distributions in the statistical sense) instead of one.

## Sampling from our example distribution

We now want to sample from this model using the `monty_sample()` function. For this we need to tell `monty` which sampler we want to use to explore our distribution. There is a variety of samplers available and you can learn about them in @sec-samplers. One of the simplest is the random walk [Metropolis-Hastings](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) algorithm, which should work almost out of the box (though not necessarily efficiently) in most cases.

The random walk sampler uses a variance-covariance (VCV) matrix to guide its exploration, determining the 'jump' from the current point to the next in a random walk by drawing from a multivariate normal distribution parameterised by this matrix. For a single-parameter model, we use a 1x1 matrix (`matrix(1)`) as our VCV matrix.

The choice of the VCV matrix is critical for the sampler's efficiency, especially in more complex cases where the tuning of this matrix can significantly affect performance. A well-chosen VCV matrix optimises movement across the parameter space, making the random walk sampler more effective in exploring the distribution of interest.

```{r}
sampler <- monty_sampler_random_walk(matrix(1))
```
Member:

People may get confused with this being matrix(1) (it's a 1x1 matrix with a value of 1). If you used matrix(0.5) (say) then it would be easier to talk about (a 1x1 matrix with a variance in the proposal of 0.5)

Contributor Author:

Done. Changed to a variance of 2 - not much of a difference in sampling performance, with proposal steps on average sqrt(2) bigger.


We are now ready to sample from our distribution using `monty_sample()` with 2000 samples, starting our MCMC chain at the value 3 and running 4 chains simultaneously.

Member:

Suggested change
We are now ready to sample from our distribution using `monty_sample()` with 2000 samples, starting our MCMC chain at the value 3 and running 4 chains simultaneously.
We are now ready to sample from our distribution using `monty_sample()` with 2000 samples, starting our MCMC chain at the value 3 and running 4 chains sequentially.

These run sequentially, not simultaneously

Contributor Author:

Changed.

MJomaba marked this conversation as resolved.
```{r}
samples <- monty_sample(l_distribution, sampler, 2000, initial = 3, n_chains = 4)
```

We can visualise our 4 chains.
```{r}
matplot(samples$density, type = "l", lty = 1,
xlab = "sample", ylab = "log posterior density", col = "#00000055")
```

We can also check that our samples are correctly representing our distribution:
```{r}
hist(samples$pars["l",,], breaks=100)
```
MJomaba marked this conversation as resolved.

Member:

Suggested change
```{r}
hist(samples$pars["l",,], breaks=100)
```
```{r}
hist(samples$pars["l", , ], breaks = 100)
```

Contributor Author:

Changed.

## Connection of `monty` with other software

`monty` includes a simple probabilistic domain-specific language (DSL) that is inspired by languages of the BUGS family such as `stan` and [Statistical Rethinking](https://xcelab.net/rm/). It is designed to make some tasks a bit easier, particularly when defining priors for your model. We expect that this DSL is not sufficiently advanced to represent most interesting models but it may get more clever and flexible in the future. In particular we do not expect the DSL to be useful in writing likelihood functions for comparison to data; we expect that if your model is simple enough for this you would be better off using `stan` or some similarly flexible system.

Member:

This section feels like it is now in the wrong place

*mention SSM, dr jacoby, mcstate, BayesTools

## Going further

We've discussed a little bit about the philosophy of `monty` and built a simple model from an R function with `monty_model_function()`. The example introduces the basics of defining and sampling from a `monty` model.

For more advanced applications, refer to @sec-monty-dsl for constructing models using `monty`'s domain-specific language, which enables greater flexibility for complex structures. To combine multiple models, see @sec-monty-combine, which demonstrates how to compose and integrate existing `monty` models. Lastly, @sec-inference illustrates how to incorporate `odin` models into `monty` workflows, expanding the package’s potential for Bayesian inference with more sophisticated, system-based models. Each section provides detailed examples to guide you in leveraging `monty`’s full capabilities.