Estimating Power Analyses for Diff-Diff
But first, I want a fresh understanding of the alternatives to Diff-Diff designs.
Synthetic Control Method
I read about this on Sunday and have totally forgotten how it differs from PSM.
Synthetic control method (SCM) matches on the Y variable itself in the pre-intervention periods, treated as a time series. Untreated comparison cases are identified by their similarity to the treated case (typically one, or a few) over that pre-intervention period.
– Preferred when the parallel trends assumption (which Diff-Diff requires) is dubious
– Assumes unobservable confounders influence the Y variable; the desire is the most accurate (how?) estimate of the treatment effect $\alpha = Y_{i=1,\,t=1}^{treated} - Y_{i=1,\,t=1}^{untreated}$, where the untreated term is the synthetic control's outcome standing in for the treated unit's counterfactual
– Economists with stronger design backgrounds tend to pool multiple treated cases; notably, their settings have also involved multiple treatments and multiple cases. The inventors of SCM (Abadie et al.) suggested aggregating multiple treated units into a single treated unit.
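The matching step above can be sketched as a constrained least-squares problem: choose nonnegative weights over untreated units, summing to one, so that the weighted combination tracks the treated unit's pre-intervention Y series. This is a minimal sketch on simulated data (all names and values are illustrative, not from any real study):

```python
# SCM weight-finding sketch: fit weights to the treated unit's
# pre-intervention outcome series (simulated data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T_pre, n_controls = 8, 5
Y0_pre = rng.normal(size=(T_pre, n_controls))   # untreated units' pre-period outcomes
w_true = np.array([0.6, 0.4, 0.0, 0.0, 0.0])    # hypothetical "true" mixture
Y1_pre = Y0_pre @ w_true + rng.normal(scale=0.01, size=T_pre)  # treated unit

def loss(w):
    # squared pre-period gap between treated unit and synthetic control
    return np.sum((Y1_pre - Y0_pre @ w) ** 2)

res = minimize(
    loss,
    x0=np.full(n_controls, 1 / n_controls),
    bounds=[(0, 1)] * n_controls,                       # weights nonnegative
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},  # weights sum to 1
    method="SLSQP",
)
w_hat = res.x
# In a post period t, the effect estimate would be
# alpha_t = Y1_post[t] - Y0_post[t] @ w_hat
```

With the simulated data above, most weight lands on the first two controls, mirroring `w_true`; the post-period gap between the treated unit and the weighted combination is then the treatment-effect estimate from the bullet above.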
Kreif, Noémi, Richard Grieve, Dominik Hangartner, Alex James Turner, Silviya Nikolova, and Matt Sutton. “Examination of the Synthetic Control Method for Evaluating Health Policies with Multiple Treated Units.” Health Economics 25, no. 12 (2016): 1514–28. https://doi.org/10.1002/hec.3258.
“This paper extends the limited extant literature on the synthetic control method for multiple treated units. A working paper by Acemoglu et al. (2013) uses the synthetic control method to construct the treatment‐free potential outcome for each multiple treated unit and is similar to the approach we take in the sensitivity analysis, but weights the estimated unit‐level treatment effects according to the closeness of the synthetic control. Their inferential procedure is similar to the one developed here, in that they re‐sample placebo‐treated units from the control pool. Dube and Zipperer (2013) pool multiple estimates of treatment effects to generalise inference for a setting with multiple treated units and policies. Xu (2015) propose a generalisation for the synthetic control approach, for multiple treated units with a factor model that predicts counterfactual outcomes. Our approach is most closely related to the suggestion initially made by Abadie et al. (2010), to aggregate multiple treated units into a single treated unit. In preliminary simulation studies, we find the method reports relatively low levels of bias in similar settings to the AQ study.”
Propensity Score Matching
PSM is ideal in cases where:
– Assignment to the treatment group correlates with variables relevant to the outcome variable (treatment assignment bias)
– Few cases eligible for the comparison group are comparable to the treated cases (on covariates deemed relevant)
– Many relevant dimensions on which to match
Tactic: Generate a “propensity score” via logit regression of participation on confounders, giving the predicted probability of participation in the treatment group.
Then: Each treated case gets one or more matched comparison cases, based on the confounding variables that determine its propensity to have participated. To do this, we need measures and thresholds of nearness.
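The two steps can be sketched on simulated data (names, coefficients, and the 0.05 caliper are illustrative assumptions); in this common variant, nearness is measured on the estimated score P-hat itself:

```python
# PSM sketch: (1) logit of participation on confounders gives p_hat,
# (2) nearest-neighbor match on p_hat within a caliper (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))                          # confounders
logits = X @ np.array([1.0, -0.5, 0.25])             # illustrative coefficients
treated = rng.random(n) < 1 / (1 + np.exp(-logits))  # participation depends on X

# Step 1: propensity scores = predicted probability of participation
p_hat = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: each treated case gets the nearest comparison case on p_hat,
# kept only if the distance is within the caliper
caliper = 0.05
treat_idx = np.flatnonzero(treated)
ctrl_idx = np.flatnonzero(~treated)
matches = {}
for i in treat_idx:
    d = np.abs(p_hat[ctrl_idx] - p_hat[i])
    j = d.argmin()
    if d[j] <= caliper:
        matches[i] = ctrl_idx[j]   # treated index -> matched comparison index
```

Matching with replacement, as above, lets one comparison case serve several treated cases; the caliper is the nearness threshold, dropping treated cases with no sufficiently close match.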
I’m unclear about: is nearness measured on P-hat, or on the confounders themselves? If the latter, does the logic run like this: there is a participation variable P with P~X and P~Y, so we model P~X, pick comparison cases that look like the group for whom P=1, and then assume the relationship to Y operates similarly in the treatment and comparison groups?