First rows of the ChickWeight dataset (weight in grams, Time in days):

   weight Time Chick Diet
1      42    0     1    1
2      51    2     1    1
3      59    4     1    1
4      64    6     1    1
5      76    8     1    1
6      93   10     1    1
7     106   12     1    1
8     125   14     1    1
9     149   16     1    1
10    171   18     1    1
Assignment 8
LOO-CV model comparison
1 General information
The maximum amount of points from this assignment is 6.
We have prepared a quarto template specific to this assignment (html, qmd, pdf) to help you get started.
We recommend you use jupyter.cs.aalto.fi or the docker container.
Reading instructions:
- The reading instructions for BDA3 Chapter 6 (posterior predictive checking).
- The reading instructions for BDA3 Chapter 7 (predictive performance).
- The ‘loo’ package vignette on the basics of LOO shows an example of how to modify Stan code and use the package with Stan models.
- Also read about PSIS-LOO in the PSIS-LOO paper.
- CV-FAQ includes a lot of informative answers to frequent questions and misconceptions.
Grading instructions:
The grading will be done in peergrade. All grading questions and evaluations for this assignment are contained within this document in the collapsible Rubric blocks.
Installing and using CmdStanR:

See the Stan demos on how to use Stan in R (or Python). Aalto JupyterHub has a working R and CmdStanR/RStan environment and is probably the easiest way to use Stan.

To use CmdStanR in Aalto JupyterHub:
library(cmdstanr)
set_cmdstan_path('/coursedata/cmdstan')
The Aalto Ubuntu desktops also have the necessary libraries installed.
To install Stan on your laptop, run install.packages("cmdstanr", repos = c("https://mc-stan.org/r-packages/", getOption("repos"))) in R. If you encounter problems, see the additional answers in the FAQ. If you don’t succeed in a short amount of time, it is probably easier to use Aalto JupyterHub.
If you use Aalto JupyterHub, all necessary packages have been pre-installed. On your laptop, install the package cmdstanr. Installation instructions for Linux, Mac and Windows can be found at https://mc-stan.org/cmdstanr/. Additional useful packages are loo, bayesplot and posterior (but you don’t need these in this assignment). For Python users, the PyStan, CmdStanPy, and ArviZ packages are useful.
The Stan manual can be found at https://mc-stan.org/users/documentation/. On this website, you can also find a lot of other useful material about Stan.
If you edit files ending in .stan in RStudio, you can click “Check” in the editor toolbar to run a syntax check. This can significantly speed up writing a working Stan model.
For posterior statistics of interest, only report digits that are not completely random based on the Monte Carlo standard error (MCSE).
Example: if you estimate \(E(\mu) \approx 1.234\) with MCSE(\(E(\mu)\)) = 0.01, then the true expectation is likely to be between \(1.204\) and \(1.264\), so it makes sense to report \(E(\mu) \approx 1.2\).
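This rule of thumb can be sketched numerically. The snippet below is only an illustration assuming independent draws; for real MCMC output you should use an ESS-based MCSE, e.g. from the posterior package.

```r
# Sketch: choose reported digits from the MCSE (assumes independent draws;
# for MCMC output use an ESS-based MCSE, e.g. posterior::mcse_mean()).
set.seed(1)
draws <- rnorm(4000, mean = 1.234, sd = 0.6)  # stand-in for posterior draws of mu
est   <- mean(draws)
mcse  <- sd(draws) / sqrt(length(draws))       # approx. 0.0095 here
# Keep only decimal places whose value exceeds roughly 3 * MCSE:
digits <- max(0, -ceiling(log10(3 * mcse)))
round(est, digits)                             # report with this many digits
```

With MCSE around 0.01, this keeps one decimal place, matching the example above.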
See Lecture video 4.1, the chapter notes, and a case study for more information.
- The recommended tool in this course is R (with the IDE RStudio).
- Instead of installing R and RStudio on your own computer, see how to use R and RStudio remotely.
- If you want to install R and RStudio locally, download R and RStudio.
- There are tons of tutorials, videos and introductions to R and RStudio online. You can find some initial hints from RStudio Education pages.
- When working with R, we recommend writing the report using quarto and the provided template. The template includes the formatting instructions and shows how to include code and figures.
- Instead of quarto, you can use other software to make the PDF report, but the same formatting instructions should be used.
- Report all results in a single, anonymous *.pdf file and submit it in peergrade.io.
- The course has its own R package aaltobda with data and functionality to simplify coding. The package is pre-installed in JupyterHub. To install the package on your own system, run the following code (upgrade="never" skips the question about updating other packages):
install.packages("aaltobda", repos = c("https://avehtari.github.io/BDA_course_Aalto/", getOption("repos")))
- Many of the exercises can be checked automatically using the R package markmyassignment (pre-installed in JupyterHub). Information on how to install and use the package can be found in the markmyassignment documentation. There is no need to include markmyassignment results in the report.
- Recommended additional self-study exercises for each chapter in BDA3 are listed on the course web page. These will help you gain a deeper understanding of the topic.
- Common questions and answers regarding installation and technical problems can be found in Frequently Asked Questions (FAQ).
- Deadlines for all assignments can be found on the course web page and in Peergrade. You can set email alerts for the deadlines in Peergrade settings.
- You are allowed to discuss assignments with your friends, but it is not allowed to copy solutions directly from other students or from the internet.
- You can copy, e.g., plotting code from the course demos, but really try to solve the actual assignment problems with your own code and explanations.
- Do not share your answers publicly.
- Do not copy answers from the internet or from previous years. We compare the answers to the answers from previous years and to the answers from other students this year.
- Use of AI is allowed on the course, but most of the work needs to be done by the student, and you need to report whether you used AI and in which way you used it (see points 5 and 6 in the Aalto guidelines for use of AI in teaching).
- All suspected plagiarism will be reported and investigated. See more about the Aalto University Code of Academic Integrity and Handling Violations Thereof.
- Do not submit empty PDFs, almost empty PDFs, copies of the questions, or nonsense generated by yourself or AI, as these just harm the other students, who can’t do peergrading for empty or nonsense submissions. Violations of this rule will be reported and investigated in the same way as plagiarism.
- If you have any suggestions or improvements to the course material, please post in the course chat feedback channel, create an issue, or submit a pull request to the public repository!
2 A hierarchical model for chicken weight time series
2.1 Exploratory data analysis
In the first part of this assignment, you will explore the dataset ChickWeight. In particular, you will see what information is recorded in the dataset, and how you can use visualisation to learn more about it. More information can be found on the corresponding page of the R documentation.
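As a starting point, a base-R sketch of such exploration is shown below; ChickWeight ships with R, and the plotting choices here are just one option among many.

```r
# ChickWeight is a built-in R dataset: weight (g), Time (days), Chick id, Diet.
data("ChickWeight")
str(ChickWeight)

# Spaghetti plot: one growth trajectory per chick, coloured by diet.
plot(weight ~ Time, data = ChickWeight, type = "n",
     main = "Chick growth trajectories by diet")
for (id in unique(ChickWeight$Chick)) {
  d <- ChickWeight[ChickWeight$Chick == id, ]
  lines(d$Time, d$weight, col = as.integer(d$Diet[1]))
}
legend("topleft", legend = levels(ChickWeight$Diet), lty = 1, col = 1:4)
```

A plot like this already hints at increasing variance over time and at per-chick differences in growth rate, both of which matter for the models below.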
2.2 Linear regression
In this section, you will build a model that predicts the weight of a chicken over time and depending on the diet. After sampling from the posteriors, you will use posterior predictive checks to see how well the predictions match the observations. Then you will adjust the model by adding more complexity, and check again.
For the prior on Time, consider how much the weight of a chicken (in grams) could possibly change each day. For the priors on the effects of different diets, consider how much average weight difference would be possible between diets.
Note that as Diet is a categorical variable, the priors need to be specified for each category (apart from Diet1, which is taken to be the baseline).
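In brms, priors along these lines could be encoded roughly as follows. This is only a sketch: the prior scales are placeholders you should justify yourself, and the template may use a different formula or names.

```r
library(brms)

# Hypothetical weakly informative priors (scales are placeholders):
priors <- c(
  set_prior("normal(0, 10)", class = "b", coef = "Time"),  # daily growth in grams
  set_prior("normal(0, 50)", class = "b")                  # diet contrasts vs. Diet1
)
fit1 <- brm(weight ~ 1 + Time + Diet,
            data = ChickWeight, prior = priors, family = gaussian())
```

Because Diet is a factor, the unnamed class = "b" prior covers all diet contrast coefficients, while the coef = "Time" prior overrides it for the time slope.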
Next, you can use the bayesplot package to check the posterior predictions in relation to the observed data using the pp_check function. The function plots the \(y\) values, which are the observed data, and the \(y_\text{rep}\) values, which are replicated data sets from the posterior predictive distribution.
The default density plot is not always informative, but bayesplot has different settings that can be used to create plots more appropriate for specific data.
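For example, assuming a fitted brms model fit1, a few of the available PPC plot types look like this (the type names come from bayesplot):

```r
pp_check(fit1)                               # default: density overlay of y and yrep
pp_check(fit1, type = "intervals_grouped",
         group = "Diet")                     # predictive intervals per diet
pp_check(fit1, type = "stat", stat = "min")  # can reveal e.g. negative predictions
```

Checking a statistic such as the minimum is one way to spot that a Gaussian observation model can predict impossible negative weights.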
2.3 Log-normal linear regression
Based on the identified issues from the posterior predictive check, the model can be improved. It is advisable to change only one or a few things about a model at once. At this stage, focus on changing the observation model family to better account for the observed data.
One option is to use the log-normal observation model, which only allows positive values. In brms, you can change the observation model family by setting the argument family = "lognormal". Note that when using the log-normal observation model, the regression coefficients represent the change in the log weight of a chicken. The priors have been adjusted accordingly in the template.
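A sketch of the change, keeping the same formula as before (the priors on the log scale are omitted here; use the ones from the template):

```r
# Log-normal observation model: predictions are strictly positive, and the
# coefficients now act on log(weight) rather than weight.
fit2 <- brm(weight ~ 1 + Time + Diet,
            data = ChickWeight, family = "lognormal")
```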
2.4 Hierarchical log-normal linear regression
The model can be further improved by directly considering potential differences in growth rate between individual chickens. Some chickens may innately grow faster than others, and this difference can be captured by including both population-level and group-level effects in the model.
To include a group effect in brms, the code + (predictor | group) can be added to the model formula. In this case, the predictor is Time and the group is Chick.
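A sketch of the hierarchical version, giving each chick its own intercept and Time slope (prior details again omitted):

```r
# (Time | Chick): each chick gets its own intercept and growth-rate deviation,
# both partially pooled towards the population-level effects.
fit3 <- brm(weight ~ 1 + Time + Diet + (Time | Chick),
            data = ChickWeight, family = "lognormal")
```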
2.5 Model comparison using the ELPD
There are many ways of comparing models1. Commonly, we evaluate point predictions, such as the mean of the predictive distribution2, or the accuracy of the whole posterior predictive distribution. Whether we prioritise point or density predictive accuracy may serve different purposes and lead to different outcomes for model choice3. It is common, however, to report density predictive accuracy via log-scores and point-predictive accuracy via the root-mean-squared error (RMSE) based on the empirical average of the predictive distribution. To cross-validate both metrics on left-out observations without the need to sample from each leave-one-out posterior, we use Pareto-smoothed importance sampling, as discussed in the course materials (see Lecture 9).
We start by comparing models based on the log-score. Use loo::loo() and loo::loo_compare() to quantify the differences in predictive performance.
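For instance, assuming fitted brms models fit2 and fit3 as above:

```r
loo2 <- loo(fit2)        # PSIS-LOO elpd for the log-normal model
loo3 <- loo(fit3)        # ... and for the hierarchical model
loo_compare(loo2, loo3)  # elpd_diff and se_diff; best model in the first row
```

An elpd difference that is small relative to its standard error suggests the models are hard to distinguish on log-score alone.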
For hierarchical models, it may be more important to think about how well individual groups are predicted, and how many observations there are in a group compared to the number of parameters estimated. Also check the CV-FAQ on high Pareto-\(\hat{k}\) values.
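The Pareto-\(\hat{k}\) diagnostics can be inspected directly, and brms can refit the model exactly for the problematic observations (a sketch, assuming loo3 from above):

```r
k <- loo3$diagnostics$pareto_k
sum(k > 0.7)                      # observations with unreliable PSIS estimates
plot(loo3)                        # k value per observation

# Exact leave-one-out refits for problematic observations (slow: one refit
# per high-k observation):
loo3_exact <- loo(fit3, reloo = TRUE)
```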
2.6 Model comparison using the RMSE
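One way to obtain both a within-sample RMSE and a LOO-RMSE without refitting the model for every left-out observation is to combine posterior predictive draws with PSIS weights via loo::E_loo(). This is a sketch, assuming a brms fit fit3 on ChickWeight:

```r
yrep <- posterior_predict(fit3)   # draws-by-observations matrix
y    <- ChickWeight$weight

# RMSE of the posterior predictive mean (within-sample):
rmse <- sqrt(mean((colMeans(yrep) - y)^2))

# LOO-RMSE: leave-one-out predictive means via PSIS importance weights.
log_ratios <- -log_lik(fit3)      # negative pointwise log-likelihood
psis_obj   <- loo::psis(log_ratios)
loo_mean   <- loo::E_loo(yrep, psis_obj, type = "mean",
                         log_ratios = log_ratios)$value
loo_rmse   <- sqrt(mean((loo_mean - y)^2))
```

The LOO-RMSE is typically somewhat larger than the within-sample RMSE, since each prediction excludes the corresponding observation.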
3 Overall quality of the report
Footnotes
In principle, when comparing models based on accuracy in predictions or parameter estimation (if true parameter values are available to you, as e.g. in simulation studies), we want to use so-called strictly proper scoring rules, which always indicate when a “better” model is better, and for which the score reaches its uniquely defined best value at the “true” model, if that is well defined. See Gneiting and Raftery (2007) for an in-depth treatment of this topic.↩︎
NOT predictions based on the mean of the posterior parameters, but first generating the predictive distribution and then computing an average.↩︎
For instance, a unimodal and bimodal predictive density may have the same expected value, but very different areas of high posterior density and therefore very different log-scores.↩︎