Abstract
This paper compares matching and Difference-in-Difference (DID) matching as estimators of the effect of a program on a dynamic outcome. I detail the sources of bias of each estimator in a model of entry into a Job Training Program (JTP) and of earnings dynamics that serves as a working example. I show that there are plausible settings in which DID is consistent while matching on past outcomes is not. Unfortunately, the consistency of both estimators relies on conditions that are at odds with known properties of earnings dynamics. Using calibration and Monte Carlo simulations, I show that deviations from the most favorable conditions severely bias both estimators. The behavior of matching is nevertheless less erratic: its bias generally decreases as more past outcomes are controlled for, and it generally provides a lower bound on the true treatment effect. Finally, I point to previously unnoticed empirical results confirming that DID does well, and generally better than matching on past outcomes, at replicating the results of an experimental benchmark.
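The contrast the abstract draws can be illustrated with a minimal simulation sketch. All parameter values and the earnings process below are hypothetical illustrations, not the paper's calibration: earnings are a permanent component plus noise, and selection into the program depends only on the permanent component. In that setting DID recovers the true effect, while conditioning on a single noisy past outcome (a linear stand-in for matching on the past outcome) does not, because the past outcome measures the permanent component with error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 200_000, 1.0  # sample size and true treatment effect (hypothetical)

# Toy earnings process: y_it = alpha_i + eps_it
alpha = rng.normal(0, 1, n)       # permanent component of earnings
eps_pre = rng.normal(0, 1, n)     # transitory shock, pre-period
eps_post = rng.normal(0, 1, n)    # transitory shock, post-period

# Selection on the permanent component only (the case favorable to DID)
d = (alpha + rng.normal(0, 1, n) < 0).astype(float)

y_pre = alpha + eps_pre
y_post = alpha + eps_post + tau * d

# DID: differencing removes alpha_i, then compare treated and controls
did = (y_post - y_pre)[d == 1].mean() - (y_post - y_pre)[d == 0].mean()

# Conditioning on the past outcome: OLS of y_post on d and y_pre;
# y_pre controls alpha_i only imperfectly, so the d coefficient is biased
X = np.column_stack([np.ones(n), d, y_pre])
beta, *_ = np.linalg.lstsq(X, y_post, rcond=None)
match = beta[1]
```

Under this selection process `did` lands near the true effect of 1.0, while `match` is biased downward, consistent with matching on past outcomes providing a lower bound in the favorable-to-DID setting.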
Keywords
Matching - Difference in Difference - Evaluation of Job Training Programs
JEL codes
- C21: Cross-Sectional Models • Spatial Models • Treatment Effect Models • Quantile Regressions
- C23: Panel Data Models • Spatio-temporal Models
Replaced by
Sylvain Chabé-Ferret, “Analysis of the bias of Matching and Difference-in-Difference under alternative earnings and selection processes”, Journal of Econometrics, vol. 185, n. 1, March 2015, pp. 110–123.
Reference
Sylvain Chabé-Ferret, “Matching vs Differencing when Estimating Treatment Effects with Panel Data: the Example of the Effect of Job Training Programs on Earnings”, TSE Working Paper, n. 12-356, October 2012.