Causal analysis of Sesame Street experiment. See Chapters 18 and 21 in Regression and Other Stories.


Load packages

library("rprojroot")
root<-has_file(".ROS-Examples-root")$make_fix_file()
library("rstanarm")
library("brms")

Set random seed for reproducibility

SEED <- 1234

Load data

sesame <- read.csv(root("Sesame/data","sesame.csv"))
head(sesame)
  rownames id site sex age viewcat setting viewenc prebody prelet preform
1        1  1    1   1  66       1       2       1      16     23      12
2        2  2    1   2  67       3       2       1      30     26       9
3        3  3    1   1  56       3       2       2      22     14       9
4        4  4    1   1  49       1       2       2      23     11      10
5        5  5    1   1  69       4       2       2      32     47      15
6        6  6    1   2  54       3       2       2      29     26      10
  prenumb prerelat preclasf postbody postlet postform postnumb postrelat
1      40       14       20       18      30       14       44        14
2      39       16       22       30      37       17       39        14
3       9        9        8       21      46       15       40         9
4      14        9       13       21      14       13       19         8
5      51       17       22       32      63       18       54        14
6      33       14       14       27      36       14       39        16
  postclasf peabody agecat encour X_Isite_2 X_Isite_3 X_Isite_4 X_Isite_5
1        23      62      1      1         0         0         0         0
2        22       8      1      1         0         0         0         0
3        19      32      1      0         0         0         0         0
4        15      27      0      0         0         0         0         0
5        21      71      1      0         0         0         0         0
6        24      32      1      0         0         0         0         0
  regular watched encouraged  y pretest
1       0       0          1 30      23
2       1       1          1 37      26
3       1       1          0 46      14
4       0       0          0 14      11
5       1       1          0 63      47
6       1       1          0 36      26

Compliance

(sesame_tab <- table(sesame[,c('watched','encouraged')]))
       encouraged
watched   0   1
      0  40  14
      1  48 138
round(prop.table(sesame_tab, margin=2), digits=2)
       encouraged
watched    0    1
      0 0.45 0.09
      1 0.55 0.91
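The compliance table alone already gives the share of children induced to watch by the encouragement: the difference in the watching proportions between the encouraged and control groups. A quick check (a sketch; the counts are read off the table printed above):

```r
# Proportion watching among the encouraged vs. the controls,
# using the cell counts from the compliance table above
p_encouraged <- 138 / (14 + 138)
p_control    <- 48 / (40 + 48)
round(p_encouraged - p_control, 2)
```

The difference, 0.36, is exactly what the first-stage regression below estimates.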

Wald estimator

Estimate the intent-to-treat (ITT) effect of the instrument (encouragement) on the treatment (regular watching), that is, the proportion of children actually induced to watch Sesame Street by the intervention.

itt_zt <- stan_glm(watched ~ encouraged, data=sesame, seed=SEED, refresh=0)
print(itt_zt, digits=2)
stan_glm
 family:       gaussian [identity]
 formula:      watched ~ encouraged
 observations: 240
 predictors:   2
------
            Median MAD_SD
(Intercept) 0.55   0.04  
encouraged  0.36   0.05  

Auxiliary parameter(s):
      Median MAD_SD
sigma 0.38   0.02  

------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg

Estimate the intent-to-treat (ITT) effect on the outcome (post-treatment letter identification).

itt_zy <- stan_glm(postlet ~ encouraged, data=sesame, seed=SEED, refresh=0)
print(itt_zy, digits=1)
stan_glm
 family:       gaussian [identity]
 formula:      postlet ~ encouraged
 observations: 240
 predictors:   2
------
            Median MAD_SD
(Intercept) 24.9    1.4  
encouraged   2.8    1.8  

Auxiliary parameter(s):
      Median MAD_SD
sigma 13.3    0.6  

------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg

Calculate the Wald estimate, that is, the ratio of the two estimates above.

wald_est <- coef(itt_zy)["encouraged"] / coef(itt_zt)["encouraged"]
round(wald_est, digits=1)
encouraged 
       7.9 
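The point estimate alone hides the uncertainty in the ratio. One way to propagate it is to divide the posterior draws of the two coefficients (a sketch, assuming `itt_zt` and `itt_zy` from the chunks above are still in the workspace; note that ratios of draws can be unstable when the denominator is close to zero):

```r
# Posterior distribution of the Wald ratio, formed from the draws
# of the two ITT coefficients
draws_zy <- as.matrix(itt_zy)[, "encouraged"]
draws_zt <- as.matrix(itt_zt)[, "encouraged"]
wald_draws <- draws_zy / draws_zt
c(median = median(wald_draws), mad_sd = mad(wald_draws))
```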

Two-stage approach

Predict the "treatment" variable on the randomized instrument

The first step is to regress the "treatment" variable---an indicator for regular watching (watched)---on the randomized instrument, encouragement to watch (encouraged).

fit_2a <- stan_glm(watched ~ encouraged, data=sesame, seed=SEED, refresh=0)
print(fit_2a, digits=2)
stan_glm
 family:       gaussian [identity]
 formula:      watched ~ encouraged
 observations: 240
 predictors:   2
------
            Median MAD_SD
(Intercept) 0.55   0.04  
encouraged  0.36   0.05  

Auxiliary parameter(s):
      Median MAD_SD
sigma 0.38   0.02  

------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
summary(fit_2a$fitted, digits=2)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   0.55    0.55    0.91    0.77    0.91    0.91 
sesame$watched_hat <- fit_2a$fitted

Then we plug predicted values of watched into the equation predicting the letter recognition outcome.

fit_2b <- stan_glm(postlet ~ watched_hat, data=sesame, seed=SEED, refresh=0)
print(fit_2b, digits = 1)
stan_glm
 family:       gaussian [identity]
 formula:      postlet ~ watched_hat
 observations: 240
 predictors:   2
------
            Median MAD_SD
(Intercept) 20.6    3.9  
watched_hat  7.9    4.9  

Auxiliary parameter(s):
      Median MAD_SD
sigma 13.3    0.6  

------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg

Two-stage approach with instrumental variables

A two-stage approach, adjusting for covariates within an instrumental variables framework.

Predict the "treatment" variable on the randomized instrument and pre-treatment variables.

The first step is to regress the "treatment" variable on the randomized instrument, encouragement to watch (encouraged), and on pre-treatment variables.

fit_3a <- stan_glm(watched ~ encouraged + prelet + as.factor(site) + setting,
                   data=sesame, seed=SEED, refresh=0)
print(fit_3a, digits=2)
stan_glm
 family:       gaussian [identity]
 formula:      watched ~ encouraged + prelet + as.factor(site) + setting
 observations: 240
 predictors:   8
------
                 Median MAD_SD
(Intercept)       0.66   0.11 
encouraged        0.34   0.05 
prelet            0.01   0.00 
as.factor(site)2  0.03   0.07 
as.factor(site)3 -0.11   0.07 
as.factor(site)4 -0.34   0.07 
as.factor(site)5 -0.29   0.10 
setting          -0.05   0.05 

Auxiliary parameter(s):
      Median MAD_SD
sigma 0.35   0.02  

------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
summary(fit_3a$fitted, digits=2)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   0.21    0.64    0.81    0.77    0.97    1.10 
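Note that the fitted values now range from 0.21 to 1.10: the first stage is a linear probability model, so nothing constrains its predictions to [0, 1]. A quick check (a sketch, assuming `fit_3a` from the chunk above):

```r
# Share of first-stage fitted "probabilities" falling outside [0, 1]
mean(fit_3a$fitted < 0 | fit_3a$fitted > 1)
```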

Then we plug predicted values of watched into the equation predicting the letter recognition outcome.

watched_hat_3 <- fit_3a$fitted
fit_3b <- stan_glm(postlet ~ watched_hat_3 + prelet + as.factor(site) + setting,
                   data=sesame, seed=SEED, refresh=0)
print(fit_3b, digits = 1)
stan_glm
 family:       gaussian [identity]
 formula:      postlet ~ watched_hat_3 + prelet + as.factor(site) + setting
 observations: 240
 predictors:   8
------
                 Median MAD_SD
(Intercept)       1.3    4.7  
watched_hat_3    14.0    4.2  
prelet            0.7    0.1  
as.factor(site)2  8.4    1.8  
as.factor(site)3 -4.0    1.8  
as.factor(site)4  1.0    2.5  
as.factor(site)5  2.7    2.9  
setting           1.6    1.5  

Auxiliary parameter(s):
      Median MAD_SD
sigma 9.7    0.4   

------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg

Estimate the standard errors

Use the predictor matrix from this second-stage regression.

X_adj <- X <- model.matrix(fit_3b)
X_adj[,"watched_hat_3"] <- sesame$watched
n <- nrow(X)
p <- ncol(X)

Compute the standard deviation of the adjusted residuals

RMSE1 <- sqrt(sum((sesame$postlet - X %*% coef(fit_3b))^2)/(n-p))
RMSE2 <- sqrt(sum((sesame$postlet - X_adj %*% coef(fit_3b))^2)/(n-p))
se_adj <- se(fit_3b)["watched_hat_3"] * sqrt(RMSE1 / RMSE2)
print(se_adj, digits=2)
watched_hat_3 
          4.2 
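As a cross-check, classical two-stage least squares produces a similar point estimate together with a correctly computed standard error in a single call (a sketch, assuming the AER package is installed; `ivreg` takes the structural formula before `|` and the instruments plus exogenous covariates after it):

```r
library("AER")
# Two-stage least squares: outcome model | instruments + exogenous covariates
iv_fit <- ivreg(postlet ~ watched + prelet + as.factor(site) + setting |
                  encouraged + prelet + as.factor(site) + setting,
                data = sesame)
summary(iv_fit)
```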

Two-stage approach with brms

Predict the "treatment" variable on the randomized instrument

f1 <- bf(watched ~ encour)
f2 <- bf(postlet ~ watched)
IV_brm_a <- brm(f1 + f2, data=sesame, seed=SEED)
print(IV_brm_a, digits=1)
 Family: MV(gaussian, gaussian) 
  Links: mu = identity; sigma = identity
         mu = identity; sigma = identity 
Formula: watched ~ encour 
         postlet ~ watched 
   Data: sesame (Number of observations: 240) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
                  Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
watched_Intercept      0.5       0.0      0.5      0.6  1.0     4360     2975
postlet_Intercept     20.6       3.8     13.5     28.4  1.0     1976     2405
watched_encour         0.4       0.1      0.3      0.5  1.0     4335     2665
postlet_watched        7.9       4.8     -1.9     16.9  1.0     1958     2150

Family Specific Parameters: 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_watched      0.4       0.0      0.4      0.4  1.0     4121     2791
sigma_postlet     12.7       0.7     11.4     14.2  1.0     2971     2170

Residual Correlations: 
                        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
rescor(watched,postlet)      0.2       0.2     -0.1      0.5  1.0     2108
                        Tail_ESS
rescor(watched,postlet)     2117

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
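Recent brms versions ask that the residual correlation, which carries the instrumental-variables information here, be requested explicitly; a sketch of the updated call (an assumption about newer brms syntax, not part of the original analysis):

```r
# Explicitly request the residual correlation between the two equations
IV_brm_a <- brm(f1 + f2 + set_rescor(TRUE), data = sesame, seed = SEED)
```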

Incorporate other pre-treatment variables as controls

f1 <- bf(watched ~ encour + prelet + setting + factor(site))
f2 <- bf(postlet ~ watched + prelet + setting + factor(site))
IV_brm_b <- brm(f1 + f2, data=sesame, seed=SEED)
print(IV_brm_b, digits=1)
 Family: MV(gaussian, gaussian) 
  Links: mu = identity; sigma = identity
         mu = identity; sigma = identity 
Formula: watched ~ encour + prelet + setting + factor(site) 
         postlet ~ watched + prelet + setting + factor(site) 
   Data: sesame (Number of observations: 240) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
watched_Intercept        0.7       0.1      0.5      0.9  1.0     4353     2782
postlet_Intercept        1.4       4.6     -8.1     10.2  1.0     1845     1696
watched_encour           0.3       0.1      0.2      0.4  1.0     4902     3139
watched_prelet           0.0       0.0     -0.0      0.0  1.0     6567     2378
watched_setting         -0.1       0.1     -0.2      0.0  1.0     4974     2889
watched_factorsite2      0.0       0.1     -0.1      0.2  1.0     3803     2777
watched_factorsite3     -0.1       0.1     -0.2      0.0  1.0     3255     2857
watched_factorsite4     -0.3       0.1     -0.5     -0.2  1.0     3805     3189
watched_factorsite5     -0.3       0.1     -0.5     -0.1  1.0     3981     3031
postlet_watched         13.9       3.9      6.5     21.8  1.0     1772     1648
postlet_prelet           0.7       0.1      0.5      0.9  1.0     4456     2728
postlet_setting          1.5       1.4     -1.3      4.4  1.0     3386     2894
postlet_factorsite2      8.4       1.8      4.9     11.9  1.0     4151     3085
postlet_factorsite3     -4.0       1.8     -7.4     -0.5  1.0     3415     2988
postlet_factorsite4      0.9       2.4     -3.8      5.8  1.0     2116     2481
postlet_factorsite5      2.7       2.9     -2.8      8.5  1.0     2800     2725

Family Specific Parameters: 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_watched      0.4       0.0      0.3      0.4  1.0     5925     2720
sigma_postlet      9.4       0.5      8.5     10.7  1.0     2718     1818

Residual Correlations: 
                        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
rescor(watched,postlet)     -0.2       0.2     -0.5      0.1  1.0     1794
                        Tail_ESS
rescor(watched,postlet)     1762

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).