See also Statistical Methods for Missing Data
Altman, D. G. (1985). Comparability of randomised groups. Statistician, 34, 125-126. ♦
Angrist, J. D. (2006). Instrumental variables methods in experimental criminological research: What, why and how. Journal of Experimental Criminology, 2(1), 23-44. ♦
Angrist, J. D., & Imbens, G. W. (1995). Two stage least squares estimation of average causal effects in models with variable treatment intensity. Journal of the American Statistical Association, 90, 431-442. ♦
Angrist, J. D., Imbens, G. W., & Rubin, D. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91, 444-455. ♦
Excellent paper and recommended by Michael Sobel for reading on intermediate outcomes.
Angrist, J. D., & Krueger, A. (1991). Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics, 106, 979-1014.
Atkins, D. C. (2009). Clinical trials methodology: Randomization, intent-to-treat, and random-effects regression. Depression and Anxiety, 26(8), 697-700. doi: 10.1002/da.20594 ♦
Austin, P. C. (2008). A critical appraisal of propensity-score matching in the medical literature between 1996 and 2003. Statistics in Medicine, 27, 2037-2049. doi: 10.1002/sim.3150 ♦
Austin, P. C. (2009). Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Statistics in Medicine, 28, 3083-3107. ♦
Austin, P. C. (2011). A tutorial and case study in propensity score analysis: An application to estimating the effect of in-hospital smoking cessation counseling on mortality. Multivariate Behavioral Research, 46(1), 119-151. doi: 10.1080/00273171.2011.540480
Austin, P. C. (2011). An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research, 46(3), 399-424. doi: 10.1080/00273171.2011.568786
Bang, H., & Davis, C. (2007). On estimating treatment effects under non-compliance in randomized clinical trials: Are intent-to-treat or instrumental variables analyses perfect solutions? Statistics In Medicine, 26(5), 954-964. ♦
Baron, J. (2000). Thinking and deciding (3rd ed.). New York: Cambridge University Press. ◊
See chapter 7 on hypothesis testing.
Bloom, H. S. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review, 8, 225-246. ♦
Cochran, W. G., & Rubin, D. B. (1973). Controlling bias in observational studies: A review. Sankhya: The Indian Journal of Statistics, Series A, 35(Part 4), 417-446.
Cohen, D. K., Raudenbush, S. W., & Ball, D. L. (2003). Resources, instruction, and research. Educational Evaluation and Policy Analysis, 25(2), 119-142. ♦
Connell, A. M., Dishion, T. J., Yasui, M., & Kavanagh, K. (2007). An adaptive approach to family intervention: Linking engagement in family-centered intervention to reductions in adolescent problem behavior. Journal of Consulting and Clinical Psychology, 75(4), 568-579. ♦
A practical example of the use of "Complier Average Causal Effect analysis (CACE; see G. Imbens & D. Rubin, 1997) to examine the impact of an adaptive approach to family intervention in the public schools on rates of substance use and antisocial behavior among students ages 11-17" (abstract). The study, however, may have included some important methodological flaws. This paper follows the same sample as Véronneau, Dishion, Connell, and Kavanagh (2016) and Stormshak, Connell, and Dishion (2009), but the papers have important differences. Stormshak et al. report that "when the students moved on to high school, FRC services were discontinued" (p. 225); Connell et al. (2007) agree with that statement but, paradoxically, then state that "students . . . were offered services if they remained in the county" (p. 571). Véronneau et al. went further, reporting that "FCUs were also offered in high school (in Grades 10–11) for those families remaining in the school district" (p. 6) and noting that 44.7% of noncompliers in middle school participated in the FCU in high school. Because Stormshak et al. follow students through Grade 11, it is not clear how to reconcile the three reports. See the notes for Véronneau et al. on the Parenting Practices bibliography page about problems with the CACE models, which likely apply to this study as well.
Cox, D. R. (1958). Planning of experiments. New York: Wiley.
Notable quote: "There is no 'interference' between different units if the observation on one unit [is] unaffected by the particular assignment of treatments to the other units" (Cox 1958, p. 19). Stated in terms of the potential outcomes framework (see Rubin, 1986, 2005), "the [potential outcome] observation on one unit should be unaffected by the particular assignment of treatments to the other units" (Cox 1958, p. 19).
D'Agostino, R. B., Jr. (1998). Propensity score methods for bias reduction for the comparison of a treatment to a non-randomized control group. Statistics in Medicine, 17(19), 2265-2281. ♦
Recommended by Michael Sobel for reading on causal inference.
D'Agostino, R. B., Jr. (2007). Propensity scores in cardiovascular research. Circulation, 115(17), 2340-2343. ♦
D'Agostino, R. B., Jr., & D'Agostino, R. B., Sr. (2007). Estimating treatment effects using observational data. Journal of the American Medical Association, 297(3), 314-316. ♦
D'Agostino, R. B., Sr., & Kwan, H. (1995). Measuring effectiveness: What to expect without a randomized control group. Medical Care, 33(4 suppl), AS95-AS105. ♦
Dunn, G., Maracy, M., Dowrick, C., Ayuso-Mateos, J. L., Dalgard, O. S., Page, H., Lehtinen, V., Casey, P., Wilkinson, C., Vázquez-Barquero, J. L., & Wilkinson, G. (2003). Estimating psychological treatment effects from a randomised controlled trial with both non-compliance and loss to follow-up. British Journal of Psychiatry, 183, 323-331. ♦
Dunn, P. M. (1997). James Lind (1716-94) of Edinburgh and the treatment of scurvy. Archives of Disease in Childhood, 76, F64-F65. ♦
Dunn (1997) describes James Lind's first known attempt to conduct a controlled clinical trial (nonrandom) to investigate the treatment of scurvy. See Lind (1753).
Fisher, L. D., Dixon, D. O., Herson, J., Frankowski, R. K., Hearron, M. S., & Peace, K. E. (1990). Intention-to-treat in clinical trials. In K. E. Peace (Ed.), Statistical issues in drug research and development. New York: Marcel Dekker. ♦
Fisher et al. (1990) suggest that analysts should include all randomized patients in the groups to which they were randomly assigned, regardless of their adherence with the entry criteria, regardless of the treatment they actually received, and regardless of subsequent withdrawal from treatment or deviation from the protocol.
Freedman, D. A. (1991). Statistical models and shoe leather. Sociological Methodology, 21, 291-313. ♦
Freedman, D. A. (1997). From association to causation via regression. Advances in Applied Mathematics, 18, 59-110. ♦
Garrido, M. M., Kelley, A. S., Paris, J., Roza, K., Meier, D. E., Morrison, R. S., & Aldridge, M. D. (2014). Methods for constructing and assessing propensity scores. Health Services Research, 49(5), 1701-1720. doi: 10.1111/1475-6773.12182 ♦
Gennetian, L. A., Morris, P. A., Bos, J. M., & Bloom, H. S. (2005). Constructing instrumental variables from experimental data to explore how treatments produce effects. In H. S. Bloom (Ed.), Learning more from social experiments (pp. 75-114). New York: Russell Sage. ♦
Greenland, S. (1996). Basic methods for sensitivity analysis of biases. International Journal of Epidemiology, 25(6), 1107-1116. ♦
Greenland, S., & Robins, J. M. (2009). Identifiability, exchangeability and confounding revisited. Epidemiologic Perspectives and Innovations, 6(4). doi:10.1186/1742-5573-6-4 [Retrieved from http://www.epi-perspectives.com/] ♦
Hayduk, L., Cummings, G., Stratkotter, R., Nimmo, M., Grygoryev, K., Dosman, D., Gillespie, M., Pazderka-Robinson, H., & Boadu, K. (2003). Pearl's d-separation: One more step into causal thinking. Structural Equation Modeling, 10(2), 289-311. ♦
Heckman, J. J. (2005). The scientific model of causality. Sociological Methodology, 35(1), 1-98. ♦
Hill, J. (2008). Discussion of research using propensity-score matching: Comments on 'A critical appraisal of propensity-score matching in the medical literature between 1996 and 2003' by Peter Austin, Statistics in Medicine. Statistics in Medicine, 27, 2055-2061. doi: 10.1002/sim.3245 ♦
Ho, D., Imai, K., King, G., & Stuart, E. A. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis, 15(3), 199-236. ♦
Ho, Imai, King, and Stuart (2007) "propose a unified approach [to matching] that makes it possible for researchers to preprocess data with matching . . . and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences" (abstract). This paper is used to justify the WWC standard that baseline differences in RCTs and QEDs fall below an effect size (Hedges' g) of 0.25 standard deviations, although it appears that Ho et al. are addressing a different issue.
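To make the preprocessing idea concrete, here is a minimal sketch of one-to-one nearest-neighbor matching on the propensity score, in the spirit of Ho et al.'s "match first, then apply the parametric model you would have used anyway." All names and scores are illustrative assumptions, not taken from Ho et al.

```python
# Hedged sketch of nearest-neighbor matching as a preprocessing step.
# Assumes propensity scores have already been estimated by some model.

def nearest_neighbor_match(treated, control):
    """Match each treated unit to the closest unmatched control unit.

    `treated` and `control` are lists of (unit_id, propensity_score) pairs.
    Returns a list of (treated_id, control_id) matched pairs, matching
    without replacement in the order treated units are given.
    """
    available = dict(control)  # control_id -> score, still unmatched
    pairs = []
    for t_id, t_score in treated:
        if not available:
            break  # more treated units than controls remain
        # pick the remaining control with the smallest score distance
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        pairs.append((t_id, c_id))
        del available[c_id]
    return pairs

# Illustrative (fabricated) units and scores:
treated = [("t1", 0.62), ("t2", 0.35)]
control = [("c1", 0.10), ("c2", 0.60), ("c3", 0.33)]
print(nearest_neighbor_match(treated, control))
# [('t1', 'c2'), ('t2', 'c3')]
```

The matched sample would then be passed to whatever outcome model the analyst intended to fit, which is the sense in which matching here is preprocessing rather than estimation.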
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945-960. ♦
Holland, P. W. (1988). Causal inference, path analysis, and recursive structural equation models. In C. Clogg & G. Arminger (Eds.), Sociological Methodology, Volume 18 (pp. 449-484). Washington, DC: American Sociological Association. ♦
Recommended by Michael Sobel for reading on intermediate outcomes. See book on Amazon.com.
Holland, P. W. (1993). Which comes first, cause or effect? In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 273-282). Hillsdale, NJ: Lawrence Erlbaum Associates.
Holland, P. W., & Rubin, D. B. (1982). On Lord's Paradox (Technical Report No. 82-34). Princeton, NJ: Educational Testing Service. Retrieved from the Wiley Online Library: http://onlinelibrary.wiley.com/journal/10.1002/(ISSN)2330-8516 ♦
Holland and Rubin (1982) conclude that "the blind use of complicated statistical procedures, like analysis of covariance, is doomed to lead to absurd conclusions" (p. 30). That said, Holland and Rubin argue that analysis of covariance can provide valuable answers in certain situations but that causal statements must be made explicit, ideally through the use of mathematics, rather than in natural language, which can be "vague and potentially misleading" (p. 30).
Holland, P. W., & Rubin, D. B. (1983). On Lord's Paradox. In H. Wainer & S. Messick (Eds.), Principles of modern psychological measurement (pp. 3-35). Hillsdale, NJ: Lawrence Erlbaum.
Holland, P. W., & Rubin, D. B. (1988). Causal inference in retrospective studies. Evaluation Review, 12(3), 203-231. doi: 10.1177/0193841X8801200301 ♦
Hollis, S., & Campbell, F. (1999). What is meant by intention to treat analysis? Survey of published randomised controlled trials. British Medical Journal, 319(7211), 670-674. ♦
Imai, K., King, G., & Stuart, E. A. (2008). Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society A, 171(2), 481-502. doi: 10.1111/j.1467-985X.2007.00527.x ♦
Imai, King, and Stuart (2008) discuss random sampling, random treatment assignment, blocking before assignment, and matching after data collection. The authors also discuss the absurdity of baseline testing for balance in randomized trials.
Imbens, G. W. (2004). Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and Statistics, 86(1), 4-29. ♦
Recommended by Michael Sobel for reading on causal inference.
Imbens, G. W. (2010). An economist's perspective on Shadish (2010) and West and Thoemmes (2010). Psychological Methods, 15(1), 47-55. doi: 10.1037/a0018538 ♦
Imbens, G. W. & Angrist, J. D. (1994). Identification and estimation of local average treatment effects. Econometrica, 62, 467-475. ♦
Imbens, G. W., & Rubin, D. B. (1997). Estimating outcome distributions for compliers in instrumental variables models. Review of Economic Studies, 64(4), 555-574. ♦
Imbens, G. W., & Rubin, D. B. (2015). Causal inference for statistics, social, and biomedical sciences: An introduction. New York: Cambridge University Press. ◊
Imbens and Rubin (2015) provide an exceptional introduction to the use of data and statistics to make causal inferences.
Jo, B. (2002). Estimation of intervention effects with noncompliance: Alternative model specifications (with discussion). Journal of Educational and Behavioral Statistics, 27, 385-420. doi: 10.3102/10769986027004385 ♦
See also Jo's rejoinder to comments by Rubin and Mealli in JEBS.
Jo, B. (2002). Model misspecification sensitivity analysis in estimating causal effects of interventions with non-compliance. Statistics in Medicine, 21, 3161-3181.
Jo, B. (2002). Statistical power in randomized intervention studies with noncompliance. Psychological Methods, 7(2), 178-193. ♦
Jo, B. (2008). Causal inference in randomized experiments with mediational processes. Psychological Methods, 13, 314-336. doi: 10.1037/a0014207 ♦
Jo, B., Asparouhov, T., Muthén, B. O., Ialongo, N. S., & Brown, C. H. (2008). Cluster randomized trials with treatment noncompliance. Psychological Methods, 13(1), 1-18. ♦
Jo, B., & Muthén, B. (2001). Modeling of intervention effects with noncompliance: A latent variable approach for randomized trials. In G. A. Marcoulides & R. E. Schumacker (Eds.), New developments and techniques in structural equation modeling (pp. 57-87). Mahwah, NJ: Lawrence Erlbaum Associates. ♦
Joffe, M. M., Small, D., Hsu, C-S. (2007). Defining and estimating intervention effects for groups that will develop an auxiliary outcome. Statistical Science, 22(1), 74-97. ♦
From the abstract: "It has recently become popular to define treatment effects for subsets of the target population characterized by variables not observable at the time a treatment decision is made. Characterizing and estimating such treatment effects is tricky; the most popular but naive approach inappropriately adjusts for variables affected by treatment and so is biased. We consider several appropriate ways to formalize the effects. . . ."
Kim, Y., & Steiner, P. (2016). Quasi-experimental designs for causal inference. Educational Psychologist, 51(3-4), 395-405. doi: 10.1080/00461520.2016.1207177
La Caze, A., Djulbegovic, B., & Senn, S. (2012). What does randomisation achieve? Evidence-Based Medicine, 17(1), 1-2. doi: 10.1136/ebm.2011.100061 ♦
Lachin, J. M. (2000). Statistical considerations in the intent-to-treat principle. Controlled Clinical Trials, 21(3), 167-189. ♦
Due to potential bias that can be introduced by postrandomization exclusions, "especially in a large study, the inflation in type I error probability can be severe, 0.50 or higher, even when the null hypothesis is true" (abstract).
Lanza, S., Moore, J., & Butera, N. (2013). Drawing causal inferences using propensity scores: A practical guide for community psychologists. American Journal of Community Psychology, 52(3/4), 380-392. doi: 10.1007/s10464-013-9604-4
Larsen, R. J., & Marx, M. L. (1986). An introduction to mathematical statistics and its applications (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2014). Why ineffective psychotherapies appear to work: A taxonomy of causes of spurious therapeutic effectiveness. Perspectives on Psychological Science, 9(4), 355-387. doi: 10.1177/1745691614535216 ♦
Lilienfeld, Ritschel, Lynn, Cautin, and Latzman (2014) outline the potential causes of spurious treatment effects for psychological interventions, which explain why interventions may appear to work when, in fact, they do not. They discuss these causes in terms of the perceptions of interventionists, treatment recipients, and their associates (e.g., family and friends). The authors locate each cause of spurious effects within four broad cognitive barriers: naïve realism, confirmation bias, illusory causation, and illusion of control. Many of the 26 potential causes of spurious effects have parallels for educational, social-behavioral, or other interventions, curricula, policies, and prevention programs.
Lind, J. (1753). A treatise on the scurvy: In three parts. Edinburgh: Sands, Murray, and Cochran for A. Kincaid & A. Donaldson. Retrieved from the James Lind Library, http://www.jameslindlibrary.org/
Lind (1753) reported, "On the 20th of May 1747, I selected twelve patients in the scurvy, on board the Salisbury at sea. Their cases were as similar as I could have them" (p. 191). See also Dunn (1997).
Little, R. J., & Yau, L. H. Y. (1998). Statistical techniques for analyzing data from prevention trials: Treatment of no-shows using Rubin's causal model. Psychological Methods, 3, 147-159. ♦
Luellen, J. K. (2007). A comparison of propensity score estimation and adjustment methods on simulated data. Dissertation Abstracts International: Section B: The Sciences and Engineering, 68(5-B), 3433.
From the abstract: "This study used simulated data to examine the relative performance of five methods of estimating propensity scores (logistic regression, classification trees, bootstrap aggregation, boosted regression, and random forests) crossed with four types of adjustments that utilize propensity scores (matching, stratification, covariance adjustment, and weighting) at two levels of sample sizes (N = 200 and N = 1,000). . . . All combinations of propensity score methods led to at least some average reduction in selection bias, and for most combinations of methods these reductions were statistically significant. However, this seemingly promising finding is tempered by the fact that bias was actually introduced in many replicates, especially when the level of sample size was 200 [emphasis added]. The traditional approach to estimating propensity scores, logistic regression, worked well at reducing selection bias, on average, at both sample sizes and tended to result in more precise estimates of the treatment effect with less potential for introducing bias. . . . Matching, stratification, and covariance adjustment were fairly competitive and a clear favorite was not discerned."
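One of the adjustment methods Luellen compares, stratification (subclassification) on the propensity score, can be sketched in a few lines: units are grouped into score quantiles, and within-stratum mean outcome differences are averaged with strata weighted by size. The data below are fabricated for illustration only.

```python
# Hedged sketch of propensity-score stratification, one of the four
# adjustments Luellen compares. Assumes scores are already estimated.

def stratified_effect(units, n_strata=2):
    """units: list of (propensity_score, treated_flag, outcome) tuples.

    Ranks units by propensity score, cuts them into n_strata equal-size
    groups, and returns the size-weighted average of within-stratum
    differences in mean outcomes (strata lacking either group are skipped).
    """
    ranked = sorted(units)                 # order by propensity score
    size = len(ranked) // n_strata
    total, weight = 0.0, 0
    for i in range(n_strata):
        stratum = ranked[i*size:(i+1)*size] if i < n_strata - 1 else ranked[i*size:]
        t = [y for _, d, y in stratum if d == 1]
        c = [y for _, d, y in stratum if d == 0]
        if t and c:
            diff = sum(t) / len(t) - sum(c) / len(c)
            total += diff * len(stratum)
            weight += len(stratum)
    return total / weight if weight else None

# Fabricated toy data: (score, treated, outcome)
data = [(0.2, 0, 1.0), (0.3, 1, 2.0), (0.7, 0, 3.0), (0.8, 1, 5.0)]
print(stratified_effect(data, n_strata=2))
# low stratum diff = 1.0, high stratum diff = 2.0 -> weighted mean 1.5
```

In practice five strata at the propensity-score quintiles are common; the two strata here are only to keep the toy example readable.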
Lundh, A., & Gøtzsche, P. C. (2008). Recommendations by Cochrane Review Groups for assessment of the risk of bias in studies. BMC Medical Research Methodology, 8, e22. doi: 10.1186/1471-2288-8-22 ♦
Maxwell, S. E. (2010). Introduction to the special section on Campbell's and Rubin's conceptualizations of causality. Psychological Methods, 15(1), 1-2. doi: 10.1037/a0018825 ♦
Maxwell (2010) introduces a special section on two perspectives on causal inference, those developed by Donald Campbell and Donald Rubin. Commentaries were provided by Shadish (2010) and West and Thoemmes (2010). See also Rubin's (2010) and Imbens's (2010) discussions of Shadish and of West and Thoemmes.
Mealli, F., & Rubin, D. B. (2002). Discussion of 'Estimation of intervention effects with noncompliance: Alternative model specifications' By Booil Jo. Journal of Educational and Behavioral Statistics, 27(4), 411-415. doi: 10.3102/10769986027004411
Mealli and Rubin (2002) offer a commentary on Jo's (2002) paper in the same journal.
Morgan, S. L., & Winship, C. (2007). Counterfactuals and causal inference: Methods and principles for social research. New York: Cambridge University Press. ◊
Pearl, J. (2000). Causality: Models, reasoning, and inference. New York: Cambridge University Press.
Pearl, J. (2009). Causality: Models, reasoning, and inference (2nd ed.). New York: Cambridge University Press.
Pearl, J., Glymour, M., & Jewell, N. P. (2016). Causal inference in statistics: A primer. New York: John Wiley & Sons. ♦¹
Posavac, E. J. (2002). Using p values to estimate the probability of a statistically significant replication. Understanding Statistics, 1(2), 101-112. ♦
Raudenbush, S. W. (2001). Comparing personal trajectories and drawing causal inferences from longitudinal data. Annual Review of Psychology, 52, 501-25. ♦
Raudenbush, S. W. (2005). How do we study 'what happens next'? Annals of the American Academy of Political and Social Science, 602(1), 131-144. doi: 10.1177/0002716205280900 ♦
Raudenbush, S. W. (2008). Advancing policy by advancing research on instruction. American Educational Research Journal, 45(1), 206-230. ♦
This theoretical yet accessible paper presents several challenges for educational research that tests "instructional regimes" at the classroom or school level. These include the application of randomization and the stable unit treatment value assumption, both critical requirements for causal inference, within the framework of clustered trials. The paper also argues for measurement of intervention activities, in this case measurement of the experienced, as opposed to the intended, instructional regimes. "Intended regimes are well measured and accessible to randomized trials, whereas experienced instruction is measured with error and not amenable to randomization" (abstract). Raudenbush also raises challenges associated with multiyear sequences of instruction.
Raudenbush, S. W., Reardon, S. F., & Nomi, T. (2012). Statistical analysis for multisite trials using instrumental variables with random coefficients. Journal of Research on Educational Effectiveness, 5(3), 303-332. ♦
Robins, J. M. (2000). Marginal structural models versus structural nested models as tools for causal inference. In M. E. Halloran & D. Berry (Eds.), Statistical models in epidemiology, the environment, and clinical trials (pp. 95-133). New York: Springer-Verlag. ♦
Robins, J. M. (2003). Semantics of causal DAG models and the identification of direct and indirect effects. In P. J. Green, N. L. Hjort, & S. Richardson (Eds.), Highly structured stochastic systems (pp. 70-81). New York: Oxford University Press.
Robins, J. M., & Greenland, S. (1992). Identifiability and exchangeability for direct and indirect effects. Epidemiology, 3(2), 143-155. ♦
Rogosa, D. (1987). Causal models do not support scientific conclusions: A comment in support of Freedman. Journal of Educational Statistics, 12, 185-195. ♦
Rogosa, D. (1988). Myths about longitudinal research. In K. W. Schaie, R. T. Campbell, W. Meredith, & S. C. Rawlings (Eds.), Methodological Issues in Aging Research (pp. 171-209). New York: Springer. ♦
This chapter is concerned with methods for the analysis of longitudinal data and seeks to convey "right thinking" about longitudinal research. Its heroes are statistical models for collections of individual growth (learning) curves; its myths illustrate beliefs that have impeded good longitudinal research.
Rosen, L., Manor, O., Engelhard, D., & Zucker, D. (2006). In defense of the randomized controlled trial for health promotion research. American Journal of Public Health, 96(7), 1181-1186. doi: 10.2105/AJPH.2004.061713 ♦
Rosenbaum, P. R. (2002). Covariance adjustment in randomized experiments and observational studies. Statistical Science, 17(3), 286-327. ♦
Includes comments by Angrist and Imbens, Hill, and Robins, with a rejoinder by Rosenbaum.
Rosenbaum, P. R. (2002). Observational studies (2nd ed.). New York: Springer.
Rosenbaum, P. R. (2007). Interference between units in randomized experiments. Journal of the American Statistical Association, 102(477), 191-200. doi: 10.1198/016214506000001112 ♦
Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41-55.
Rosenbaum, P. R., & Rubin, D. B. (1984). Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79(387), 516-524. ♦
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688-701. ♦
Rubin (1974) defines and defends the randomized experiment. That is, he provides a clear explanation of the importance of experimental control, created by randomization for most social science experiments. Rubin also compares the relative value of observational studies to experiments. This excellent and interesting paper should be read periodically, along with Jacob Cohen's (1990) "Things I have learned (so far)."
Rubin, D. B. (1977). Assignment to treatment group on the basis of a covariate. Journal of Educational Statistics, 2, 1-26. ♦
Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6, 34-58. ♦
Rubin, D. B. (1980). Discussion of "Randomization analysis of experimental data in the Fisher randomization test" by Basu. Journal of the American Statistical Association, 75, 591-93. ♦
Rubin coins the phrase stable unit treatment value assumption (SUTVA) in his discussion of an article by D. Basu. For more on SUTVA, see Rubin (1986).
Rubin, D. B. (1981). Estimation in parallel randomized experiments. Journal of Educational Statistics, 6, 377-400. ♦
Rubin, D. B. (1986). Which ifs have causal answers? Discussion of "Statistics and causal inference" by Holland. Journal of the American Statistical Association, 81(396), 961-962. ♦
Rubin, D. B. (1990). Formal modes of statistical inference for causal effects. Journal of Statistical Planning and Inference, 25, 279-292. ♦
The first six sections provide an interesting overview of causal effects and their defining characteristics. The following eight sections describe several modes of inference.
Rubin, D. B. (1991). Practical implications of models of statistical inference for causal effects and the critical role of random assignment. Biometrics, 47, 1213-1234. ♦
Rubin, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469), 322-331. ♦
Rubin, D. B. (2007). The design versus the analysis of observational studies for causal effects: Parallels with the design of randomized studies. Statistics in Medicine, 26(1), 20-36. ♦
See comments on this article by Ian Shrier's (2008) letter with Rubin's reply ♦, Ian Shrier's (2009) letter on propensity scores ♦, and Judea Pearl's (2009) letter with Rubin's reply ♦.
Rubin, D. B. (2010). Reflections stimulated by the comments of Shadish (2010) and West and Thoemmes (2010). Psychological Methods, 15(1), 38-46. doi: 10.1037/a0018537 ♦
Rubin, D. B., & Thomas, N. (1996). Matching using estimated propensity scores: Relating theory to practice. Biometrics, 52, 249-264. ♦
Schmidt, F. (2010). Detecting and correcting the lies that data tell. Perspectives on Psychological Science, 5(3), 233-242. ♦
Schochet, P. Z., & Burghardt, J. (2007). Using propensity scoring to estimate program-related subgroup impacts in experimental program evaluations. Evaluation Review, 31, 95-120. ♦
Schochet and Burghardt (2007) explain how to address variability in program impacts based on specific program features, which may include implementation fidelity.
Schulz, K. F., & Grimes, D. A. (2002). Sample size slippages in randomized trials: Exclusions and the lost and wayward. Lancet, 359, 781-785. ♦
Senn, S. (2013). Seven myths of randomisation in clinical trials. Statistics in Medicine, 32(9), 1439-1450. doi: 10.1002/sim.5713 ♦
Shadish, W. R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods, 15(1), 3-17. doi: 10.1037/a0015916 ♦
Shadish, W. R., & Ragsdale, K. (1996). Random versus nonrandom assignment in controlled experiments: Do you get the same answer? Journal of Consulting and Clinical Psychology, 64(6), 1290-1305. ♦
"It is concluded that studies using nonrandom assignment may produce acceptable approximations to results from randomized experiments under some circumstances but that reliance on results from randomized experiments as the gold standard is still well founded" (abstract). Nonetheless, "a slightly degraded randomized experiment may still produce better effect estimates than many quasi-experiments (Shadish & Ragsdale, 1996)" (Shadish, Cook, & Campbell, 2002, p. 229).
Sheiner, L. B. (2002). Is intent-to-treat analysis always (ever) enough? British Journal of Clinical Pharmacology, 54(2), 203-211. doi:10.1046/j.1365-2125.2002.01628.x ♦
Smith, G. C. S., & Pell, J. P. (2003). Parachute use to prevent death and major trauma related to gravitational challenge: Systematic review of randomised controlled trials. British Medical Journal, 327, 1459-1461. doi:10.1136/bmj.327.7429.1459 ♦
Sobel, M. E. (1995). Causal inference in the social and behavioral sciences. In G. Arminger, C. C. Clogg, & M. E. Sobel (Eds), Handbook of statistical modeling for the social and behavioral sciences. New York: Plenum. ♦
Sobel, M. E. (1996). An introduction to causal inference. Sociological Methods and Research, 24(3), 353-379. ♦
Recommended by Michael Sobel for reading on causal inference.
Sobel, M. E. (2008). Identification of causal parameters in randomized studies with mediating variables. Journal of Educational and Behavioral Statistics, 33(2), 230-251. ♦
Recommended by Michael Sobel for reading on intermediate outcomes.
Sobel, M. (2009). Causal inference in randomized and non-randomized studies: The definition, identification, and estimation of causal parameters. In The Sage handbook of quantitative methods in psychology (pp. 3-22). Thousand Oaks, CA: Sage. ♦
Stuart, E. A., Perry, D. F., Le, H.-N., & Ialongo, N. S. (2008). Estimating intervention effects of prevention programs: Accounting for noncompliance. Prevention Science, 9, 288-298. doi: 10.1007/s11121-008-0104-y. ♦
Ten Have, T. R., Elliott, M. R., Joffe, M., Zanutto, E., & Datto, C. (2004). Causal models for randomized physician encouragement trials in treating primary care depression. Journal of the American Statistical Association, 99, 16-25. ♦
Ten Have, T. R., Joffe, M., Lynch, K., Brown, G., & Maisto, S. (2005). Causal mediation analyses with structural mean models (Biostatistics working paper). University of Pennsylvania. ♦
van den Berg, G. J. (2007). An economic analysis of exclusion restrictions for instrumental variable estimation (IZA Discussion Paper No. 2585). Bonn, Germany: Institute for the Study of Labor. Retrieved from the Institute for the Study of Labor (IZA), http://legacy.iza.org/en/webcontent/publications/papers ♦
West, S. G., & Thoemmes, F. (2010). Campbell's and Rubin's perspectives on causal inference. Psychological Methods, 15(1), 18-37. doi: 10.1037/a0015917 ♦
Winship, C., & Morgan, S. L. (1999). The estimation of causal effects from observational data. Annual Review of Sociology, 25, 659-706. ♦
Wilcox, A., & Wacholder, S. (2008). Observational data and clinical trials: Narrowing the gap? Epidemiology, 19(6), 765. ♦
Introduction to a debate, captured in issue 19(6) of Epidemiology, about the use of observational data to measure clinical outcomes in the context of postmenopausal hormone therapy and coronary heart disease, studied through the Nurses' Health Study and the Women's Health Initiative. See references to all relevant papers on the General Public Health bibliography page.
Wu, M., & Cheng, P. W. (1999). Why causation need not follow from statistical association: Boundary conditions for the evaluation of generative and preventive causal powers. Psychological Science, 10(2), 92-97. ♦