# Order of events for discrete-time stochastic models {#sec-order}

```{r}
#| include: false
source("common.R")
```

```{r}
library(odin2)
library(dust2)
```

Here we have a discrete-time stochastic model:

```{r}
sir <- odin({
  update(S) <- S - n_SI
  update(I) <- I + n_SI - n_IR
  update(R) <- R + n_IR
  update(incidence) <- incidence + n_SI
  beta <- interpolate(beta_time, beta_value, "linear")
  beta_time <- parameter(constant = TRUE)
  beta_value <- parameter(constant = TRUE)
  dim(beta_time, beta_value) <- parameter(rank = 1)
  p_SI <- 1 - exp(-beta * I / N * dt)
  p_IR <- 1 - exp(-gamma * dt)
  n_SI <- Binomial(S, p_SI)
  n_IR <- Binomial(I, p_IR)
  initial(S) <- N - I0
  initial(I) <- I0
  initial(R) <- 0
  initial(incidence, zero_every = 1) <- 0
  N <- parameter(1000)
  I0 <- parameter(10)
  gamma <- parameter(0.1)

  ## Comparison to data
  phi <- interpolate(phi_time, phi_value, "constant")
  phi_time <- parameter(constant = TRUE)
  phi_value <- parameter(constant = TRUE)
  dim(phi_time, phi_value) <- parameter(rank = 1)
  cases <- data()
  exp_noise <- Exponential(1e6)
  cases ~ Poisson(phi * incidence + exp_noise)
})
```

This model includes many components we have introduced in preceding sections, such as [variables that reset periodically](time.qmd#sec-reset-variables), [use of interpolation functions](interpolation.qmd#sec-interpolation-functions) and [comparison to data](data.qmd).

It is useful to understand the order in which events will be evaluated as the model updates from `time = t0` to `time = t0 + dt`. There will first be a series of updates to the variables, followed by a comparison to data (assuming you are comparing the model to data).

## Updates {#sec-order-update}

At the beginning of the update we have `time = t0` and then the updates proceed in the following order:

### Reset any variables that use `zero_every`

Any variable that uses `zero_every = x` will be reset to 0 if `t0` is a multiple of `x`. In our model we have

```r
initial(incidence, zero_every = 1) <- 0
```

so `incidence` will reset to 0 whenever `t0` is an integer.

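The reset interval need not be 1. As a minimal sketch (not part of the model above; `incidence_week` is a hypothetical extra variable), a weekly incidence could be accumulated alongside the daily one, resetting whenever `t0` is a multiple of 7:

```r
## Hypothetical addition: accumulate cases over each 7-day window
initial(incidence_week, zero_every = 7) <- 0
update(incidence_week) <- incidence_week + n_SI
```
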
### Read from variables

All variables will be read at their current values. In our example these are `S`, `I`, `R` and `incidence`.

### Look up interpolation

Any uses of `interpolate` relating to updates are evaluated using `time = t0`. Thus in our example

```r
beta <- interpolate(beta_time, beta_value, "linear")
```

is evaluated to give `beta` its value at `time = t0`.

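As a worked example (the numbers here are purely illustrative, not values used elsewhere), suppose the interpolation inputs were supplied as

```r
pars <- list(beta_time = c(0, 50, 100),
             beta_value = c(0.2, 0.5, 0.2))
```

then at `time = t0 = 25` the `"linear"` mode would give `beta = 0.35`, halfway between the values at times 0 and 50.
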
### Evaluate assignments

Assignment equations are evaluated in dependency order (circular dependencies are detected and will prevent compilation). For instance, in our model it would naturally evaluate

```r
p_SI <- 1 - exp(-beta * I / N * dt)
p_IR <- 1 - exp(-gamma * dt)
```

and then

```r
n_SI <- Binomial(S, p_SI)
n_IR <- Binomial(I, p_IR)
```

Equations will be evaluated using the variables that were read in earlier (note that this is after any variables that use `zero_every` have been reset to 0). As with interpolation, any equations that use `time` (see @sec-interpolation-time) will use `time = t0`.

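For instance, an assignment written directly in terms of `time`, such as a hypothetical seasonally forced transmission rate (`beta0` and `amp` are made-up parameters, and this would replace the interpolated `beta` above), would also be evaluated here with `time = t0`:

```r
## Hypothetical alternative to the interpolated beta; evaluated with time = t0
beta <- beta0 * (1 + amp * cos(2 * 3.141593 * time / 365))
```
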
### Write out new values of state

Here we write out new values of variables as given by `update` equations. In our example these are

```r
update(S) <- S - n_SI
update(I) <- I + n_SI - n_IR
update(R) <- R + n_IR
update(incidence) <- incidence + n_SI
```

As with the other assignment equations, these are evaluated using the values of the variables that were read in earlier. Thus the equation

```r
update(S) <- S - n_SI
```

gives the value that `S` will take at `time = t0 + dt`, but the `S` on the right-hand side is its value at `time = t0`.

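In mathematical notation this corresponds to

$$S(t_0 + dt) = S(t_0) - n_{SI}$$
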
::: {.callout-warning}
You should be careful to avoid introducing lags between state variables in your `update` equations.

For example, for this simple model

\begin{gather*}
X(t + dt) = X(t) + \mathrm{Normal}(0, 1)\\
Y(t + dt) = Y(t) + \mathrm{Normal}(0, 1)\\
Z(t) = X(t) + Y(t)
\end{gather*}

it would be incorrect to code this as

```{r}
m <- odin({
  update(X) <- X + dX
  update(Y) <- Y + dY
  update(Z) <- X + Y
  dX <- Normal(0, 1)
  dY <- Normal(0, 1)
  initial(X) <- 0
  initial(Y) <- 0
  initial(Z) <- 0
})
```

as running it shows `Z` lagging by one time-step relative to `X` and `Y`:

```{r}
sys <- dust_system_create(m, list(), seed = 1)
dust_system_set_state_initial(sys)
t <- seq(0, 5)
y <- dust_system_simulate(sys, t)
y <- dust_unpack_state(sys, y)
y$X + y$Y
y$Z
```

This is because

```r
update(Z) <- X + Y
```

corresponds to $Z(t + dt) = X(t) + Y(t)$ and not $Z(t) = X(t) + Y(t)$. Thus we need to account for the updates to `X` and `Y` on the right-hand side.

A correct way to write it could be

```r
update(Z) <- X + dX + Y + dY
```

However, in more complex models this approach can become unwieldy, and any further change to the updates for `X` and `Y` must be replicated in the update to `Z`.

A safer approach is to introduce intermediate quantities representing the updated values of `X` and `Y`:

```{r}
m <- odin({
  update(X) <- X_new
  update(Y) <- Y_new
  update(Z) <- X_new + Y_new
  dX <- Normal(0, 1)
  X_new <- X + dX
  dY <- Normal(0, 1)
  Y_new <- Y + dY
  initial(X) <- 0
  initial(Y) <- 0
  initial(Z) <- 0
})
```

which means that subsequent changes to how `X` updates are automatically factored into how `Z` updates. We can see this version no longer results in a lag:

```{r}
sys <- dust_system_create(m, list(), seed = 1)
dust_system_set_state_initial(sys)
t <- seq(0, 5)
y <- dust_system_simulate(sys, t)
y <- dust_unpack_state(sys, y)
y$X + y$Y
y$Z
```
:::

### Update time to `t0 + dt`

The series of updates ends with `time` being increased by `dt`.

## Comparison to data {#sec-order-comparison}

The comparison to data follows on from the above series of updates; correspondingly, it is common to write all equations relating to the comparison to data in a block at the end of the odin code.

Equations relating to the comparison to data are evaluated after time has been updated to `t0 + dt`, and only if you have data at `time = t0 + dt`.

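As a rough sketch of how this fits together (the data values and particle count below are made up, and this assumes the filter interface introduced in [comparison to data](data.qmd)), the comparison equations would only be evaluated at `time = 1, 2, 3`, the times present in the data:

```r
d <- data.frame(time = c(1, 2, 3), cases = c(4, 10, 16))
filter <- dust_filter_create(sir, time_start = 0, data = d, n_particles = 128)
dust_likelihood_run(filter, list(beta_time = c(0, 10), beta_value = c(0.3, 0.3),
                                 phi_time = c(0, 10), phi_value = c(1, 1)))
```
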
### Read from variables

All variables will be read at their current values, which are the values corresponding to `time = t0 + dt`. Again, in our example the variables are `S`, `I`, `R` and `incidence`, although only `incidence` is used in the comparison to data.

### Look up interpolation

Any uses of `interpolate` in the comparison to data are evaluated using `time = t0 + dt` (not `time = t0`, as was used earlier in the update step). Compilation of the `odin` code will detect which interpolations are used in the comparison to data.

In our example we have

```r
phi <- interpolate(phi_time, phi_value, "constant")
```

where `phi` might represent a time-varying ascertainment rate of cases (which could be affected by, for example, whether schools are open, or day-of-the-week effects).

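For example, `phi_time` and `phi_value` might encode lower ascertainment at weekends (the values below are purely illustrative):

```r
pars <- list(phi_time = c(0, 5, 7, 12),
             phi_value = c(1.0, 0.6, 1.0, 0.6))
```

With `"constant"` interpolation, `phi` holds each value until the next change point, so here it would be 1 for `0 <= time < 5`, 0.6 for `5 <= time < 7`, and so on.
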
### Evaluate assignments

As with interpolation, compilation of `odin` code will detect which assignments are used only in the comparison to data. For instance in our example this would be

```r
exp_noise <- Exponential(1e6)
```

Any use of variables in these equations on the right-hand side will be evaluated using their values at `time = t0 + dt`.

### Compare to data

Finally, we evaluate the density given by any relationship equations. In our example this is

```r
cases ~ Poisson(phi * incidence + exp_noise)
```

As with assignments in the comparison to data step, any use of variables in relationship equations on the right-hand side will be evaluated using their values at `time = t0 + dt`. No actual assignment is done by relationship equations; instead, the log-density accumulates over all non-NA data entries.

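Conceptually, the contribution from each such data point is equivalent to the following R expression, evaluated with the values of `phi`, `incidence` and `exp_noise` at `time = t0 + dt`:

```r
dpois(cases, lambda = phi * incidence + exp_noise, log = TRUE)
```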