Bayesian Data Analysis Global South (GSU) 2023

Project work details

Project work involves choosing a data set and performing a complete analysis following all the parts of the Bayesian workflow studied during the course. In this course instance there are no project presentations, but you will get feedback from your peers. You can do the project work in groups if you like.

Project schedule

  • See the overall schedule of the GSU 2023 course
  • Start working on the project around the midpoint of the course.
  • Project report deadline 15th May. Submit in peergrade (a separate “class”; the class code will be posted in the course chat announcements).
  • Project report peer grading 18th-20th May.

Groups

In this course instance the project work can be done in groups of 1-4 persons, but you don’t need to find a group.

If you don’t have a group, you can ask other students in the group chat channel #project. Tell what kind of data you are interested in (e.g. medicine, health, biology, engineering, politics, business), whether you prefer R or Python, and whether you already have a more concrete idea for the topic.

Evaluation

In this course instance the evaluation of the project work consists only of

  • the peer-graded project report (40%) (within peergrade: submission 80% and feedback 20%)

Project report

In the project report you practice presenting the problem and the data analysis results, which means that a minimal listing of code and figures does not make a good report. There are different levels of detail at which a data analysis project can be reported; this report should be more than a summary of results without the workflow steps. Describe the steps and decisions made during the workflow, but to keep the report readable, some of the diagnostic outputs and code can be placed in an appendix. If you are uncertain, you can ask the TAs in the TA sessions whether your level of detail is appropriate.

The report should be at most 20 pages long and should include

  1. Introduction describing
    • the motivation
    • the problem
    • and the main modeling idea.
    • Showing some illustrative figure is recommended.
  2. Description of the data and the analysis problem. Provide information on where the data was obtained, whether it has been used previously in some online case study, and how your analysis differs from the existing analyses.
  3. Description of at least two models, for example:
    • non-hierarchical and hierarchical,
    • linear and non-linear,
    • variable selection with many models.
  4. Informative or weakly informative priors, and justification of their choices.
  5. Stan, rstanarm or brms code (minimal illustrative sketches are given after this list).
  6. How the Stan model was run, that is, which options were used. This is clearer as a combination of a textual explanation and the actual code line.
  7. Convergence diagnostics (\(\widehat{R}\), ESS, divergences) and what was done if convergence was not good on the first try. This should be reported for all models.
  8. Posterior predictive checks and what was done to improve the model. This should be reported for all models.
  9. Optional/Bonus: Predictive performance assessment if applicable (e.g. classification accuracy) and an evaluation of the practical usefulness of that accuracy. This should be reported for all models as well.
  10. Sensitivity analysis with respect to prior choices (i.e. checking whether the result changes substantially if the prior is changed). This should be reported for all models.
  11. Model comparison (e.g. with LOO-CV).
  12. Discussion of issues and potential improvements.
  13. Conclusion describing what was learned from the data analysis.
  14. Self-reflection on what you/your group learned while making the project.
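
To give an idea of the expected level of detail for items 4-6, here is a minimal sketch using brms. The data frame `d`, the outcome `y`, the predictor `x`, and the grouping variable `group` are hypothetical placeholders; your own priors and sampler options should of course be chosen and justified for your data.

```r
library(brms)

# Explicit weakly informative priors (state and justify these in the report)
priors <- c(
  prior(normal(0, 2.5), class = b),          # regression coefficients
  prior(normal(0, 10), class = Intercept),   # intercept
  prior(exponential(1), class = sd)          # group-level standard deviation
)

# Report also how the model was run: chains, iterations, warmup, seed,
# and any control options such as adapt_delta.
fit <- brm(
  y ~ x + (1 | group),
  data = d,
  family = gaussian(),
  prior = priors,
  chains = 4, iter = 2000, warmup = 1000,
  seed = 1234,
  control = list(adapt_delta = 0.95)
)
```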
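
A similar sketch for the convergence diagnostics and posterior predictive checks (items 7-8), assuming `fit` is a fitted brms model (a brmsfit object):

```r
library(brms)

summary(fit)                  # Rhat and bulk/tail ESS for all parameters
max(rhat(fit), na.rm = TRUE)  # should be close to 1 (e.g. below 1.01)
min(neff_ratio(fit), na.rm = TRUE)

# Number of divergent transitions; if it is not zero, report what you
# changed (e.g. a higher adapt_delta or a reparameterization).
np <- nuts_params(fit)
sum(subset(np, Parameter == "divergent__")$Value)

# Posterior predictive check: compare replicated data sets to the observed data
pp_check(fit, ndraws = 100)
```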
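
And a sketch for the prior sensitivity analysis and model comparison (items 10-11). Here `fit` is again a fitted brms model and `fit2` a hypothetical alternative model (e.g. a non-hierarchical version); the wider priors below are only an example of an alternative prior choice:

```r
library(brms)

# Prior sensitivity: refit with clearly different (here wider) priors and
# compare the posterior summaries of the quantities of interest.
fit_wide <- update(
  fit,
  prior = c(
    prior(normal(0, 25), class = b),
    prior(exponential(0.1), class = sd)
  )
)
posterior_summary(fit, variable = "b_x")
posterior_summary(fit_wide, variable = "b_x")

# Model comparison with PSIS-LOO; also check the Pareto k diagnostics
# printed by loo().
loo_fit  <- loo(fit)
loo_fit2 <- loo(fit2)
loo_compare(loo_fit, loo_fit2)
```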

Data sets

As some data sets have been overused for these particular goals, note that the following ones are forbidden in this work (more may be added to this list, so make sure to check it regularly):

  • extremely common data sets like titanic, mtcars, iris
  • Baseball batting (used in Bob Carpenter’s StanCon case study).
  • Data sets used in the course demos

It’s best to use a dataset for which there is no ready-made analysis on the internet, but if you choose a dataset already used in some online case study, provide links to the previous studies and report how your analysis differs from them (for example, if someone has made a non-Bayesian analysis and you do a full Bayesian analysis).

Depending on the model and the structure of the data, a good data set has more than 100 observations but fewer than 1 million. If you know an interesting big data set, you can use a smaller subset of the data to keep the computation times feasible. It is good if the data has some structure, so that it is sensible to use multilevel/hierarchical models.

If you’re looking for inspiration or you’re not sure where to begin, browse this list of datasets arranged by topic, the datasets mentioned in the lecture slides (see slide 6), or look at some of these publicly accessible databases:

Model requirements

  • Every parameter needs to have an explicit proper prior. Improper flat priors are not allowed.
  • A hierarchical model is a model where the prior of a certain parameter contains other parameters that are also estimated in the model. For instance, b ~ normal(mu, sigma), mu ~ normal(0, 1), sigma ~ exponential(1).
  • Do not impose hard constraints on a parameter unless they are natural for it. uniform(a, b) should not be used unless the boundaries are truly logical boundaries and values beyond them are completely impossible.
  • At least some models should include covariates. Modelling the outcome without predictors is likely too simple for the project.
  • brms and rstanarm can be used, but you need to report the priors used (including the priors brms and rstanarm assign by default); see the sketch below for one way to check and report them.
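
One way to check which parameters need priors and to document the priors actually used (including any brms defaults you kept) is sketched below; the formula, the data frame `d`, and the specific priors are hypothetical:

```r
library(brms)

# Lists all parameters of the model and their current (default) priors
get_prior(y ~ x + (1 | group), data = d, family = gaussian())

fit <- brm(
  y ~ x + (1 | group),
  data = d,
  family = gaussian(),
  prior = c(
    prior(normal(0, 2.5), class = b),
    prior(exponential(1), class = sd),     # hierarchical part: sd of group-level intercepts
    prior(exponential(1), class = sigma)   # residual standard deviation
  ),
  chains = 4, seed = 1
)

# Include this output (or a table based on it) in the report
prior_summary(fit)
```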

Some examples

The following case study examples demonstrate how text, equations, figures, code, and inference results can be included in one report. These examples don’t necessarily have all the workflow steps required in your report, but different steps are illustrated in different case studies, and you can get good ideas for your report just by browsing through them.