Causal inference

Theme Co-ordinators: Rhian Daniel, Simon Cousens, Bianca De Stavola, Karla Diaz-Ordaz, Richard Silverwood, Ruth Keogh

Please see here for slides and audio recordings of previous seminars relating to this theme.

A brief overview of a vast and rapidly-expanding subject

Causal inference is a central aim of many empirical investigations, and arguably most studies in the fields of medicine, epidemiology and public health. We would like to know ‘does this treatment work?’, ‘how harmful is this exposure?’, or ‘what would be the impact of this policy change?’.

The gold standard approach to answering such questions is to conduct a controlled experiment in which treatments/exposures are allocated at random, all participants adhere perfectly to the treatment assigned, and all the relevant data are collected and measured without error. Provided that we can then discount ‘chance’ alone as an explanation, any observed differences between treatment groups can be given a causal interpretation (albeit in a population that may differ from the one in which we are interested).

In the real world, however, such experiments rarely attain this ideal, and for many important questions an experiment would not even be ethically, practically or economically feasible; our investigations must instead be based on observational data. Causal inference is therefore a very ambitious goal. Since it is nevertheless the only useful goal in so many contexts, we must try our best. This involves carefully formulating the causal question to be tackled, explicitly stating the assumptions under which the answers may be trusted, often adopting novel analysis methods that rest on weaker assumptions than traditional approaches require, and finally using sensitivity analyses to explore how robust the conclusions are to violations of those assumptions.

Historically, even when attempting causal inference, the role of statistics was seen to be to quantify the extent to which ‘chance’ could explain the results, with concerns over systematic biases due to the non-ideal nature of the data relegated to the qualitative discussion of the results. The field known as causal inference has changed this state of affairs, setting causal questions within a coherent framework which facilitates explicit statement of all the assumptions underlying the analysis and allows extensive exploration of potential biases. In the paragraphs that follow, we will attempt a brief overview.

A language for causal inference (potential outcomes and counterfactuals)

Over the last thirty years, a formal statistical language has been developed in which causal effects can be unambiguously defined, and the assumptions needed for their identification clearly stated. Although alternative frameworks have been suggested (see, for example, Dawid, 2000) and developed, the language which has gained most traction in the health sciences is that of potential outcomes, also called counterfactuals (Rubin, 1978).

Suppose X is a binary exposure, Y a binary outcome, and C a collection of potential confounders, measured before X. We write Y0 and Y1 for the two potential outcomes; the first is the outcome that would be seen if X were set (possibly counter to fact) to 0, and the second is what would be seen if X were set to 1. Causal effects can then be expressed as contrasts of aspects of the distribution of these potential outcomes. For example:

  1. E(Y1) – E(Y0)
  2. E(Y1|X=1) / E(Y0|X=1)
  3. log[E(Y1|C)/{1–E(Y1|C)}] – log[E(Y0|C)/{1–E(Y0|C)}]

The first is the average causal effect (ACE) of X on Y expressed as a marginal risk difference; the second is the average causal effect in the exposed (also called the average treatment effect in the treated, or ATT) expressed as a risk ratio (marginal with respect to the confounders C); the third is a conditional causal log odds ratio, given C.

Sufficient conditions for these and other similar parameters to be identified can also be expressed in terms of potential outcomes. For the ACE, for example, these are:

  1. Consistency: For x=0,1, if X=x then Yx=Y
  2. Conditional exchangeability: For x=0,1, Yx ⫫ X | C, i.e. Yx is independent of X given C

The latter formalises the notion of “no unmeasured confounders”.
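
To make these conditions concrete, here is a minimal simulation sketch (our own construction in Python, with an invented data-generating process): a binary confounder C drives both exposure and outcome, the naive exposed-unexposed contrast is biased, and standardising the stratum-specific risks over the distribution of C recovers the ACE.

```python
import numpy as np

rng = np.random.default_rng(2016)
n = 1_000_000
C = rng.binomial(1, 0.5, n)                      # binary confounder
X = rng.binomial(1, np.where(C == 1, 0.7, 0.3))  # exposure depends on C
Y0 = rng.binomial(1, 0.2 + 0.3 * C)              # potential outcome under X=0
Y1 = rng.binomial(1, 0.4 + 0.3 * C)              # potential outcome under X=1
Y = np.where(X == 1, Y1, Y0)                     # consistency links Yx to Y

naive = Y[X == 1].mean() - Y[X == 0].mean()      # confounded contrast

# Standardisation: average the C-specific risk differences over P(C)
standardised = sum(
    (Y[(X == 1) & (C == c)].mean() - Y[(X == 0) & (C == c)].mean())
    * (C == c).mean()
    for c in (0, 1))

print(Y1.mean() - Y0.mean())   # true ACE, ~0.20
print(naive)                   # biased, ~0.32
print(standardised)            # ~0.20, recovered under the two conditions
```

The standardised contrast is the g-formula in its simplest form, foreshadowing the methods for time-varying exposures discussed below.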

The increased clarity afforded by this language has led to increased awareness of causal pitfalls (such as the ‘birthweight paradox’ – see Hernández-Díaz et al, 2006) and the building of a new and extensive toolbox of statistical methods especially designed for making causal inferences from non-ideal data under transparent, less restrictive and more plausible assumptions than were hitherto required.

Of course this does not mean that all causal questions can be answered, but at least they can be precisely formulated and the plausibility of the required assumptions assessed.

Considerations of causality are not new. Neyman used potential outcomes in his PhD thesis in the 1920s, and who could forget Bradford Hill’s much-cited guidelines published in 1965? The last few decades, however, have seen the focus move towards developing solutions, as well as acknowledging limitations.

Traditional methods

Not all reliable causal inference requires novel methodology. A carefully-considered regression model, with an appropriate set of potential confounders (possibly identified using a causal diagram – see below) measured and included as covariates, is a reasonable approach in some settings.
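
As a hedged illustration (the linear data-generating process and its coefficients are invented for this sketch), the following shows ordinary least-squares adjustment for a measured confounder recovering the causal coefficient, while the unadjusted model does not:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100_000
C = rng.normal(size=n)                       # measured confounder
X = rng.binomial(1, 1 / (1 + np.exp(-C)))    # exposure more likely when C high
Y = 1.0 * X + 2.0 * C + rng.normal(size=n)   # true effect of X on Y is 1.0

unadjusted = sm.OLS(Y, sm.add_constant(X)).fit()
adjusted = sm.OLS(Y, sm.add_constant(np.column_stack([X, C]))).fit()

print(unadjusted.params[1])   # biased upwards, since C is omitted
print(adjusted.params[1])     # ~1.0, close to the true value
```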

Causal diagrams

A ubiquitous feature of methods for estimating causal effects from non-ideal data is the need for untestable assumptions regarding the causal structure of the variables being analysed (from which conditions such as conditional exchangeability can be deduced). Such assumptions are often represented in a causal diagram or graph, with variables identified by nodes and the relationships between them by edges. The simplest and most commonly-used class of causal diagram is the (causal) directed acyclic graph (DAG), in which all edges are arrows and there are no cycles, i.e. no variable explains itself (Greenland et al, 1999). These are used not only to represent assumptions but also to inform the choice of a causally-interpretable analysis, specifically to help decide which variables should, and which should not, be included as confounders.
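
A short simulated example (ours, not from the cited paper) of the kind of pitfall a DAG makes visible: if X and Y are marginally independent but both affect a third variable S (a 'collider', X → S ← Y), then restricting the analysis to those with S = 1, or adjusting for S as if it were a confounder, manufactures a spurious X-Y association.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
X = rng.binomial(1, 0.5, n)
Y = rng.binomial(1, 0.5, n)                   # independent of X by design
S = rng.binomial(1, 0.1 + 0.4 * X + 0.4 * Y)  # collider: X -> S <- Y

def risk_diff(x, y):
    """Difference in the mean of y between x = 1 and x = 0."""
    return y[x == 1].mean() - y[x == 0].mean()

print(risk_diff(X, Y))                  # ~0.00: no marginal association
print(risk_diff(X[S == 1], Y[S == 1]))  # ~-0.19: selection on S induces bias
```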

Fully-parametric approaches to problems involving many variables

Another common feature of causal inference methods is that, as we move further from the ideal experimental setting, more aspects of the joint distribution of the variables, aspects that would have been ancillary had the data arisen from a perfect experiment, must be modelled. Structural equation modelling (SEM) (Kline, 2011) is a fully-parametric approach, in which the relationship between each node in the graph and its parents is specified parametrically. This approach offers an elegant (full likelihood) treatment of ignorable missing data, and of measurement error when it affects any variable for which validation or replication data are available.
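
A small sketch of the fully-parametric idea (toy model and numbers ours, not Kline's): for the DAG C → X → Y with C → Y, a linear SEM specifies one equation per node, and fitting each node on its graph parents estimates every path coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 50_000
C = rng.normal(size=n)
X = 0.8 * C + rng.normal(size=n)            # X has parent C
Y = 0.5 * X + 0.6 * C + rng.normal(size=n)  # Y has parents X and C

fit_x = sm.OLS(X, sm.add_constant(C)).fit()
fit_y = sm.OLS(Y, sm.add_constant(np.column_stack([X, C]))).fit()

print(fit_x.params[1])    # ~0.8, the C -> X path
print(fit_y.params[1:])   # ~[0.5, 0.6], the X -> Y and C -> Y paths
```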

Semiparametric approaches

Concerns over the potential impact of model misspecification in fully-parametric approaches have led to the development of alternative semiparametric approaches to causal inference, in which the number of additional aspects to be modelled is reduced. These include methods based on the propensity score (Rosenbaum and Rubin, 1983), such as inverse probability weighting, as well as g-estimation and the so-called doubly-robust estimators proposed by Robins, Rotnitzky and others.
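
The following hedged sketch (simulation and models ours, not the cited authors' code) contrasts inverse probability weighting with an augmented, doubly-robust estimator; the latter remains consistent if either the propensity model or the outcome model is correctly specified, though here both are.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 200_000
C = rng.normal(size=n)
X = rng.binomial(1, 1 / (1 + np.exp(-1.5 * C)))   # confounded exposure
Y = 1.0 * X + 2.0 * C + rng.normal(size=n)        # true ACE = 1.0

# 1. Propensity score from a (correctly specified) logistic regression
ps = sm.Logit(X, sm.add_constant(C)).fit(disp=0).predict()

# 2. Inverse probability weighted estimator of E(Y1) - E(Y0)
ipw = np.mean(X * Y / ps) - np.mean((1 - X) * Y / (1 - ps))

# 3. Doubly-robust estimator: augment IPW with an outcome regression
out = sm.OLS(Y, sm.add_constant(np.column_stack([X, C]))).fit()
mu1 = out.predict(np.column_stack([np.ones(n), np.ones(n), C]))   # E(Y|X=1,C)
mu0 = out.predict(np.column_stack([np.ones(n), np.zeros(n), C]))  # E(Y|X=0,C)
dr = np.mean(mu1 - mu0
             + X * (Y - mu1) / ps
             - (1 - X) * (Y - mu0) / (1 - ps))

print(ipw, dr)   # both close to the true value 1.0
```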

Inferring the effects of time-varying exposures

Novel causal inference methods are particularly relevant for studying the causal effect of a time-varying exposure on an outcome, because standard methods fail to give causally-interpretable estimators when there exist time-varying confounders of the exposure and outcome that are themselves affected by previous levels of the exposure. Methods developed to deal with this problem include the fully-parametric g-computation formula (Robins, 1986), and two semiparametric approaches: g-estimation of structural nested models (Robins et al, 1992), and inverse probability weighted estimation of marginal structural models (Robins et al, 2000). For an accessible tutorial on these methods, see Daniel et al (2013). Related to this longitudinal setting is the identification of optimal treatment regimes, for example in HIV/AIDS research where questions such as ‘at what level of CD4 should HAART (highly active antiretroviral therapy) be initiated?’ are often asked. These can be addressed using the methods listed above, and other related methods (see Chakraborty and Moodie, 2013).
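
A toy two-visit sketch may help fix ideas (the data-generating process is invented; see Daniel et al, 2013 for proper implementations): exposure X0 raises a time-varying confounder L1, which then affects both the later exposure X1 and the outcome, so conditioning on L1 in a standard regression would block part of X0's effect, whereas the g-computation formula standardises over P(L1 | X0).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
X0 = rng.binomial(1, 0.5, n)          # visit-1 exposure (randomised here)
L1 = rng.binomial(1, 0.3 + 0.4 * X0)  # time-varying confounder, affected by X0
X1 = rng.binomial(1, 0.2 + 0.5 * L1)  # visit-2 exposure, driven by L1
Y = 0.2 + 0.1 * X0 + 0.1 * X1 + 0.2 * L1 + rng.normal(0, 0.1, n)

def g_formula(x0, x1):
    """E[Y(x0, x1)]: standardise E(Y | X0, L1, X1) over P(L1 | X0 = x0)."""
    return sum(
        Y[(X0 == x0) & (L1 == l) & (X1 == x1)].mean()
        * (L1[X0 == x0] == l).mean()
        for l in (0, 1))

# 'Always exposed' vs 'never exposed'; the truth is
# 0.1 + 0.1 + 0.2 * (0.7 - 0.3) = 0.28, including the pathway through L1
print(g_formula(1, 1) - g_formula(0, 0))   # ~0.28
```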

Instrumental variables and Mendelian Randomisation

It is important to appreciate that non-ideal experimental data (e.g. suffering from non-compliance, missing data or measurement error) are not on a par with data arising from observational studies, despite what the discussion above might suggest. Randomisation can be used as a tool to aid causal inference even when the randomised experiment is 'broken', for example as a result of non-compliance with the randomised treatment. Such methods use randomisation as an instrumental variable (Angrist and Pischke, 2009). Instrumental variables have even been used with observational data, in particular when the instrument is a variable that holds genetic information (in which case the approach is known as Mendelian randomisation; see Davey Smith and Ebrahim, 2003), with genotype used in place of randomisation. This is motivated by the idea that genes are 'randomly' passed down from parents to offspring in the same way that treatment is allocated in double-blind randomised trials. Although this assumption is generally untestable (Hernán and Robins, 2006), there are situations in which it may be deemed more plausible than the other candidate set of untestable assumptions, namely conditional exchangeability.
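
A minimal sketch of the instrument idea in a 'broken' trial (the non-compliance mechanism and numbers are our invention): assignment Z is randomised, influences the treatment X actually received, and affects the outcome only through X, while U confounds X and Y. The Wald ratio of the two intention-to-treat contrasts then recovers the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(21)
n = 1_000_000
Z = rng.binomial(1, 0.5, n)                  # randomised assignment
U = rng.normal(size=n)                       # unmeasured confounder
X = rng.binomial(1, 1 / (1 + np.exp(-(2 * Z - 1 + U))))  # non-compliance
Y = 1.0 * X + 1.5 * U + rng.normal(size=n)   # true effect of X is 1.0

naive = Y[X == 1].mean() - Y[X == 0].mean()       # confounded by U
wald = ((Y[Z == 1].mean() - Y[Z == 0].mean())     # ITT effect on Y ...
        / (X[Z == 1].mean() - X[Z == 0].mean()))  # ... scaled by ITT on X

print(naive)   # well above 1.0
print(wald)    # ~1.0 (treatment effects are homogeneous in this simulation)
```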

Mediation analysis

Approaches (such as SEM) amenable to complex causal structures have opened the way to looking beyond the causal effect of an exposure on an outcome as a black box, and to asking ‘how does this exposure act?’. For example, if income has a positive effect on certain health outcomes, does this act simply by increasing access to health care, or are there other important pathways? Addressing such questions is the goal of mediation analysis and the estimation of direct/indirect effects (see Emsley et al, 2010, for a review). This area has seen an explosion of new methodology in recent years, with several semiparametric alternatives to SEM introduced.
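
As a hedged toy example (linear models and coefficients invented): with a single mediator M, no exposure-mediator interaction and no unmeasured confounding, the indirect effect is the product of the X → M and M → Y coefficients, and the direct effect is the X coefficient with M held fixed.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 100_000
X = rng.binomial(1, 0.5, n)                 # exposure (e.g. higher income)
M = 0.7 * X + rng.normal(size=n)            # mediator (e.g. access to care)
Y = 0.3 * X + 0.5 * M + rng.normal(size=n)  # health outcome

fit_m = sm.OLS(M, sm.add_constant(X)).fit()
fit_y = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()

direct = fit_y.params[1]                      # ~0.30, not through M
indirect = fit_m.params[1] * fit_y.params[2]  # ~0.35 = 0.7 * 0.5, through M
print(direct, indirect, direct + indirect)    # total effect ~0.65
```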

Suggested introductory reading

Hernán MA, Robins JM (to appear, 2015) Causal Inference. Chapman & Hall/CRC. [First fifteen chapters available for download here.]

Greenland S, Pearl J, Robins JM (1999) Causal diagrams for epidemiologic research. Epidemiology. 10(1):37–48.

Hernán MA, Hernández-Díaz S, Werler MM, Mitchell AA (2002) Causal knowledge as a prerequisite for confounding evaluation: an application to birth defects epidemiology. American Journal of Epidemiology. 155:176–184.

Pearl J (2010) An Introduction to Causal Inference. The International Journal of Biostatistics. 6(2): Article 7.

Angrist JD, Pischke J (2009) Mostly harmless econometrics: an empiricist’s companion. Princeton University Press.

Other references

Chakraborty B, Moodie EEM (2013) Statistical methods for dynamic treatment regimes. Springer.

Daniel RM, Cousens SN, De Stavola BL, Kenward MG, Sterne JAC (2013) Methods for dealing with time-dependent confounding. Statistics in Medicine. 32(9):1584–1618.

Davey Smith G, Ebrahim S (2003) ‘Mendelian randomization’: can genetic epidemiology contribute to understanding environmental determinants of disease? International Journal of Epidemiology. 32:1–22.

Dawid AP (2000) Causal inference without counterfactuals. Journal of the American Statistical Association. 95(450):407–448.

Emsley RA, Dunn G, White IR (2010) Mediation and moderation of treatment effects in randomised controlled trials of complex interventions. Statistical Methods in Medical Research. 19(3):237–270.

Hernán MA, Robins JM (2006) Instruments for causal inference: an epidemiologist’s dream? Epidemiology. 17:360–372.

Hernández-Díaz S, Schisterman EF, Hernán MA (2006) The birth weight “paradox” uncovered? American Journal of Epidemiology. 164: 1115–1120.

Kline RB (2011) Principles and Practice of Structural Equation Modeling, 3rd ed. The Guilford Press.

Robins JM (1986) A new approach to causal inference in mortality studies with a sustained exposure period – application to control of the healthy worker survivor effect. Mathematical Modelling. 7:1393–1512.

Robins JM, Blevins D, Ritter G, Wulfsohn M (1992) G-estimation of the effect of prophylaxis therapy for Pneumocystis carinii pneumonia on the survival of AIDS patients. Epidemiology. 3:319–336.

Robins JM, Hernán MA, Brumback B (2000) Marginal structural models and causal inference in epidemiology. Epidemiology. 11:550–560.

Rosenbaum PR, Rubin DB (1983) The central role of the propensity score in observational studies for causal effects. Biometrika. 70(1):41–55.

Rubin DB (1978) Bayesian inference for causal effects: the role of randomization. The Annals of Statistics. 6:34–58.

Recent and on-going methodological research in Causal Inference at LSHTM

Causal mediation analysis

Researchers from LSHTM are involved in several strands of research on mediation analysis, including dealing with multiple mediators, intermediate confounding and latent variables, specifically in studies of birthweight and infant mortality.

Mediation analysis in the presence of intermediate confounders and the links with SEM

Bianca De Stavola, Rhian Daniel (LSHTM) and George Ploubidis (IoE)

Intermediate confounders, i.e. variables that confound the mediator-outcome relationship and are affected by the exposure, are problematic for the decomposition of causal effects into direct and indirect components. The sufficient conditions most commonly cited for identifying natural direct and indirect effects (Pearl, 2001) include the so-called “cross-world assumption”: that, conditionally on baseline confounders C, the counterfactuals Y(x,m) and M(x*) should be independent, even when x≠x*. This assumption precludes the existence of intermediate confounders. However, identification is also possible when this assumption is replaced by a weaker one (Petersen et al, 2006), namely that E{Y(1,m)-Y(0,m)|M(0)=m,C=c} = E{Y(1,m)-Y(0,m)|C=c}. Alternatively, Robins and Greenland (1992) showed that identification is possible when it is replaced by the condition that there be no X-M interaction even at the individual level, i.e. that, for each subject i, Yi(1,m)-Yi(0,m) is the same for all levels of m. Both the Petersen et al assumption and that of Robins and Greenland can hold when intermediate confounding is present, but they imply restrictions on the form of the associational models to be fitted. In this work, we discuss these restrictions, together with further results, and in so doing clarify the link between the causal inference and SEM approaches to mediation analysis.

We have also written a routine in Stata (gformula) for estimating controlled direct effects and natural direct and indirect effects (or their randomized interventional analogues) in the presence of intermediate confounding using a fully-parametric approach via Monte Carlo simulation.
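
For intuition, here is a bare-bones Monte Carlo version of that idea in Python (our own sketch; the authors' implementation is the Stata command gformula, which additionally handles intermediate confounding): fit models for M given (X, C) and for Y given (X, M, C), simulate the mediator under one exposure level, and combine it with outcome predictions under another.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(19)
n = 100_000
C = rng.normal(size=n)
X = rng.binomial(1, 0.5, n)
M = 0.6 * X + 0.5 * C + rng.normal(size=n)
Y = 0.4 * X + 0.5 * M + 0.4 * C + rng.normal(size=n)

m_mod = sm.OLS(M, sm.add_constant(np.column_stack([X, C]))).fit()
y_mod = sm.OLS(Y, sm.add_constant(np.column_stack([X, M, C]))).fit()
sd_m = np.sqrt(m_mod.mse_resid)

def mean_y(x, x_star):
    """Monte Carlo estimate of E[Y(x, M(x*))]."""
    m_sim = (m_mod.predict(np.column_stack([np.ones(n), np.full(n, x_star), C]))
             + rng.normal(0, sd_m, n))        # draw M under X = x_star
    return y_mod.predict(
        np.column_stack([np.ones(n), np.full(n, x), m_sim, C])).mean()

nde = mean_y(1, 0) - mean_y(0, 0)   # natural direct effect, ~0.4
nie = mean_y(1, 1) - mean_y(1, 0)   # natural indirect effect, ~0.3 = 0.6*0.5
print(nde, nie)
```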

Daniel RM, De Stavola BL and Cousens SN (2011) gformula: Estimating causal effects in the presence of time-varying confounding or mediation using the g-computation formula. The Stata Journal. 11(4):479–517.

De Stavola BL, Daniel RM, Ploubidis GB, Micali N (2015) Mediation analysis with intermediate confounding: structural equation modelling viewed through the causal inference lens. American Journal of Epidemiology. 181(1):64–80.

Mediation analysis with multiple mediators

Rhian Daniel, Bianca De Stavola and Simon Cousens (LSHTM) and Stijn Vansteelandt (University of Ghent)

The many recent contributions to the causal inference approach to mediation analysis have focused almost entirely on settings with a single mediator of interest, or a set of mediators considered en bloc; in many applications, however, researchers attempt a much more ambitious decomposition into numerous path-specific effects through many mediators. In this work, we gave counterfactual definitions of such path-specific estimands in settings with multiple mediators, when earlier mediators may affect later ones, showing that there are many ways in which decomposition can be done. We discussed the strong assumptions under which the effects are identified, suggesting a sensitivity analysis approach when a particular subset of the assumptions cannot be justified. The aim was to bridge the gap from “single mediator theory” to “multiple mediator practice,” highlighting the ambitious nature of this endeavour and giving practical suggestions on how to proceed.

Daniel RM, De Stavola BL, Cousens SN, Vansteelandt S (2015) Causal mediation analysis with multiple mediators. Biometrics. 71(1):1–14.

The (low) birthweight paradox

Bianca De Stavola, Richard Silverwood (LSHTM)

Overall, maternal factors such as low socio-economic position and smoking lead to a higher incidence of infant mortality. However, this relationship has been found to be reversed for babies of low birthweight, with factors such as maternal smoking appearing protective. This has been termed the (low) birthweight paradox, and various explanations have been offered. One of these is that the apparent reversal of the effect is due to unaccounted-for confounding between birthweight and infant mortality. We are currently investigating this phenomenon in the ONS and Scottish Longitudinal Studies. The methodological aspect of this work involves incorporating a latent class approach to account for some of the unmeasured confounding.

Longitudinal causal effects and time-dependent confounding

In the setting of longitudinal data, LSHTM researchers are working on methods for inferring short-term and total effects, methods for use when there is strong confounding, and methods for use with routinely-collected data, as well as having recently been involved in pedagogic and software-development work.

Daniel RM, Cousens SN, De Stavola BL, Kenward MG, Sterne JAC (2013) Methods for dealing with time-dependent confounding. Statistics in Medicine. 32(9):1584–1618.

Daniel RM, De Stavola BL and Cousens SN (2011) gformula: Estimating causal effects in the presence of time-varying confounding or mediation using the g-computation formula. The Stata Journal. 11(4):479–517.

Application of Marginal Structural Models (MSMs) with Inverse Probability of Treatment Weighting (IPTW) to primary care records in the clinical area of diabetes

Ruth Farmer, Krishnan Bhaskaran (LSHTM) and Debbie Ford (MRC CTU)

Existing research conflicts over whether metformin, the first-line treatment for type 2 diabetes, is protective against the development of cancer. Within this context, time-varying measures such as blood glucose level (HbA1c) and BMI are determinants of treatment, may be affected by prior treatment, and may also have an independent effect on the risk of cancer. Work is ongoing to apply MSMs with IPTW to deal with time-dependent confounding in this context, using data from the Clinical Practice Research Datalink (CPRD). This will be one of the first attempts to apply MSM methodology to a real-world problem in a “big data” setting. There is a particular methodological focus on how the diabetes context and the use of routinely collected data may lead to violation of the underlying assumptions needed for the MSM to produce valid causal inferences, and on potential solutions.
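
A self-contained toy sketch of the MSM-with-IPTW mechanics (the simulation is ours, not CPRD data): treatment at two visits, an HbA1c-like time-varying confounder L1 that is affected by earlier treatment, stabilised weights to break the dependence of X1 on L1, and a weighted regression to fit the marginal structural model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(31)
n = 500_000
X0 = rng.binomial(1, 0.5, n)          # visit-1 treatment
L1 = rng.binomial(1, 0.3 + 0.4 * X0)  # confounder, affected by X0
X1 = rng.binomial(1, 0.2 + 0.5 * L1)  # visit-2 treatment, driven by L1
Y = 0.2 + 0.1 * X0 + 0.1 * X1 + 0.2 * L1 + rng.normal(0, 0.5, n)

# Stabilised weight for X1: P(X1 | X0) / P(X1 | X0, L1)
num = sm.Logit(X1, sm.add_constant(X0)).fit(disp=0).predict()
den = sm.Logit(X1, sm.add_constant(np.column_stack([X0, L1]))).fit(disp=0).predict()
sw = np.where(X1 == 1, num / den, (1 - num) / (1 - den))

# MSM: E[Y(x0, x1)] = b0 + b1*x0 + b2*x1, fitted by weighted least squares
msm = sm.WLS(Y, sm.add_constant(np.column_stack([X0, X1])), weights=sw).fit()
print(msm.params[1:])   # ~[0.18, 0.10]; b1 includes the pathway through L1
```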

Inferring short-term and total effects from longitudinal data

Ruth Keogh (LSHTM) and Stijn Vansteelandt (University of Ghent)

In longitudinal studies in which measures of exposures and outcomes are observed at repeated visits, interest may lie in studying short-term or long-term exposure effects. A short-term effect is defined as the effect of an exposure at a given time on a concurrent outcome. Long-term effects are the effects of earlier exposures on any subsequent outcome, and interest may be in two types of long-term effect: (1) the total effect of an exposure at a given time on a subsequent outcome, including both indirect effects mediated through intermediate exposures and direct effects; (2) the joint effects of exposures at different time points on a subsequent outcome, which requires a separation of direct and indirect effects.

The emphasis in the statistical causal inference literature has been on studying joint effects, and in particular on special methods for handling the complications of time-dependent confounding that arise in this situation when time-varying confounders are affected by past outcomes and that cannot be handled by standard regression adjustment. However, investigating short-term or total exposure effects provides a simpler starting point. Moreover, these effect estimates may often be the most useful, for example for a doctor making a decision about starting a patient on a treatment. In this work we have shown how, with careful control for confounding by past exposures, outcomes and time-varying covariates, short-term and total effects can be estimated using conventional regression analysis. This approach is based on sequential conditional mean models (CMMs), including an extension to propensity score adjustment. We have used simulation studies to compare this approach with inverse probability weighting (IPW), finding that sequential CMMs give more precise estimates than IPW and provide double-robustness via propensity score adjustment. As part of this work we have also developed a new test of whether there are direct effects of past exposures on a subsequent outcome not mediated through intermediate exposures.

A manuscript is forthcoming.
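
In the meantime, a rough toy illustration of the idea (simulation and models entirely ours, not the paper's code): the short-term effect of the current exposure is estimated adjusting for the full history, while the total effect of an earlier exposure is estimated adjusting only for covariates measured before it, leaving the mediated pathways intact.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 200_000
L1 = rng.normal(size=n)                        # visit-1 confounder
X1 = rng.binomial(1, 1 / (1 + np.exp(-L1)))    # visit-1 exposure
L2 = 0.5 * L1 + 0.4 * X1 + rng.normal(size=n)  # covariate affected by X1
X2 = 0.5 * X1 + 0.7 * L2 + rng.normal(size=n)  # visit-2 exposure
Y = 1.0 * X2 + 0.5 * X1 + 0.8 * L2 + 0.3 * L1 + rng.normal(size=n)

# Short-term effect of X2: adjust for the full past (X1, L2, L1); truth 1.0
short = sm.OLS(Y, sm.add_constant(np.column_stack([X2, X1, L2, L1]))).fit()

# Total effect of X1: adjust only for the pre-X1 covariate L1, NOT for the
# intermediates L2 and X2 through which part of the effect is mediated;
# truth is 0.5 + 0.5 * 1.0 + 0.4 * (0.8 + 0.7 * 1.0) = 1.6
total = sm.OLS(Y, sm.add_constant(np.column_stack([X1, L1]))).fit()

print(short.params[1])   # ~1.0
print(total.params[1])   # ~1.6
```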

Propensity score adjustment and strong confounding

Rhian Daniel (LSHTM) and Stijn Vansteelandt (University of Ghent)

Regression adjustment for the propensity score (p(C)=Pr(X=1|C), where X is the exposure and C are confounders) is a rarely-used alternative to the other propensity score methods, namely stratification, matching and inverse weighting. In recent work, we clarified the rationale for its use, in particular for estimating the information-standardised effect:

E{w(C)(Y1-Y0)}  (*)

where w(C)=[p(C){1-p(C)}] / E[p(C){1-p(C)}]: in GLMs, adjustment for the propensity score leads to a consistent estimator of (*) provided the propensity score model is correctly specified, even if the conditional relationship between the propensity score and the outcome (in the GLM) has been misspecified.

The estimand (*) is attractive in settings with strong confounding, since it gives greatest weight to those in the centre of the propensity score distribution, and least weight to those who, on the basis of their confounders, were nearly certain either to be exposed or unexposed, and about whom the observational data therefore carry very little information on the treatment effect. When there is strong confounding, so that some subjects are all but certain to be exposed (or unexposed), estimating the more usual estimand E(Y1-Y0) may be too ambitious, and may in any case not be of interest, since it asks what would happen if everyone were exposed, even though we know that some subjects, on the basis of their confounders, would never be.
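
A numerical sketch of this estimand (data-generating process ours): under strong confounding the weights w(C) concentrate on subjects whose propensity score is away from 0 and 1, and linear regression of Y on X and the true propensity score targets E{w(C)(Y1-Y0)} rather than E(Y1-Y0). Potential outcomes are used directly below, purely to make the two estimands visible.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(17)
n = 1_000_000
C = rng.normal(size=n)
p = 1 / (1 + np.exp(-2.5 * C))   # strong confounding: many p near 0 or 1
X = rng.binomial(1, p)
Y0 = 2 * C + rng.normal(size=n)
Y1 = Y0 + 1 + C**2               # individual effect varies with C
Y = np.where(X == 1, Y1, Y0)

w = p * (1 - p) / np.mean(p * (1 - p))   # information-standardised weights
print(np.mean(Y1 - Y0))                  # usual ACE: ~2.0
print(np.mean(w * (Y1 - Y0)))            # estimand (*): ~1.3

# Regression of Y on X and the propensity score recovers the weighted estimand
fit = sm.OLS(Y, sm.add_constant(np.column_stack([X, p]))).fit()
print(fit.params[1])                     # ~1.3
```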

In on-going work, we are extending this thinking to longitudinal studies where the problem of strong confounding becomes arguably even more acute. Typically, the regimes that are compared by g-methods are “always treat”, “never treat” etc. More pragmatic estimands may be sensible in situations where very few subjects have a propensity to be always/never treated.

Vansteelandt S, Daniel RM (2014) On regression adjustment for the propensity score. Statistics in Medicine; 33(23):4053–4072.

Causal inference and missing data

There are many links between the concepts and methods used in the fields of causal inference and missing data, and several LSHTM researchers are working on this intersection:

A doubly robust estimator to handle missing data and confounding simultaneously, with a focus on data from e-health records

Elizabeth (Fizz) Williamson (LSHTM)

Fizz recently developed a doubly robust estimator that combines an element of robustness to the models used to handle missing data with an element of robustness to the models used to handle confounding. She is currently extending this estimator to more realistic scenarios, particularly those with several partially missing variables. She is also working on a series of projects investigating methods for handling missing data within propensity score analyses, with an emphasis on analyses using data drawn from electronic health records.

Williamson EJ, Forbes A, Wolfe R (2012) Doubly robust estimators of causal exposure effects with missing data in the outcome, exposure or a confounder. Statistics in Medicine. 31(30):4382–4400.

Methods for estimating treatment effect when there are departures from protocol (non-compliance and missing data) in a randomised trial

Karla Diaz-Ordaz (LSHTM)

Mendelian randomization

The technique of Mendelian randomization is used in applied research at LSHTM, particularly in the field of cardiovascular and other non-communicable diseases. Alongside this, methodological work inspired by the applied problems is also carried out.

Investigating non-linear effects with Mendelian randomization

Richard Silverwood and Frank Dudbridge (LSHTM)

Mendelian randomization studies have so far restricted attention to linear associations relating the genetic instrument to the exposure, and the exposure to the outcome, but this may not always be appropriate. For example, alcohol consumption is consistently reported as having a U-shaped association with cardiovascular events in observational studies. Richard Silverwood, Frank Dudbridge and others (Silverwood et al, 2014) proposed a novel method to assess non-linear causal effects in Mendelian randomization studies using a binary genotype, based on estimating local average treatment effects for discrete levels of the exposure range and then testing for a linear trend in those effects. In extensive simulations, their method gave a conservative test for non-linearity under realistic violations of its key assumption, making it useful for inferring departure from linearity when only a binary instrument is available. They found evidence for a non-linear causal effect of alcohol intake on several cardiovascular traits in the Alcohol-ADH1B Consortium, using the single nucleotide polymorphism rs1229984 in ADH1B as a genetic instrument.

Silverwood RJ, Holmes MV, Dale CE, et al. (2014) Testing for non-linear causal effects using a binary genotype in a Mendelian randomization study: application to alcohol and cardiovascular traits. International Journal of Epidemiology. 43(6):1781–1790.

Causal inference for Health Economics

One of the most active causal inference research groups at the School is led by Richard Grieve and focuses on the area of health economics. As such, there is overlap with the CSM theme of the same name. The methodological aspects of this work are outlined below.

Causal inference approaches for handling external validity, estimating the effects of continuous treatments, and handling aspects of time-varying confounding

Richard Grieve, Noemi Kreif, Karla Diaz-Ordaz, Manuel Gomes, Zia Sadique (LSHTM) and Jasjeet Sekhon (UC Berkeley)

LSHTM researchers are extending causal inference approaches to: identify population treatment effects from RCTs, estimate the effects of ‘continuous’ treatments, and handle aspects of time-varying confounding in evaluating new health policies from observational data.

A general concern is that RCTs may fail to provide unbiased estimates of population average treatment effects. We have derived the requisite assumptions for identifying population average treatment effects from RCTs. Our research provides placebo tests, which follow formally from the identifying assumptions and can assess whether they hold. We offer new research designs for estimating population effects that use non-randomised studies to adjust the RCT data. This approach is illustrated in a study evaluating the clinical effectiveness and cost-effectiveness of a clinical intervention, pulmonary artery catheterisation (see Hartman et al, 2015).

These placebo tests reveal that, in some trial settings, the requisite assumptions for estimating population treatment effects are not satisfied. This external validity concern was illustrated by an RCT of a primary care intervention, ‘Telehealth’, in which only 20% of eligible patients agreed to participate. To address the external validity issue, we developed sensitivity analyses that combine RCT and observational data to re-estimate treatment effects (Steventon et al, 2015).

When evaluating the effects of continuous treatments (for example, different dosages of a drug), the generalised propensity score (GPS) can be used to adjust for confounding. However, unbiased estimation of the dose-response function requires that both the GPS model and the outcome-treatment relationship are correctly specified. We introduce a machine learning method, the “Super Learner”, for model selection in estimating continuous treatment effects. We compare this Super Learner approach to parametric implementations of the GPS, and to outcome regression methods, in a re-analysis of the Risk Adjustment In Neurocritical care (RAIN) cohort study. Our paper highlights the importance of principled model selection for applied empirical analysis (Kreif et al, 2015).
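
As a rough sketch of one simple parametric GPS implementation in the spirit of Hirano and Imbens (the simulation and model choices are ours, not the RAIN re-analysis): model the treatment given covariates, evaluate each subject's GPS, include it alongside the dose in the outcome model, and average predictions to trace the dose-response curve. The curve is only as good as these two specifications, which is precisely what motivates data-adaptive model selection via the Super Learner.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(29)
n = 100_000
C = rng.normal(size=n)
T = 0.8 * C + rng.normal(size=n)            # continuous 'dose'
Y = 0.5 * T + 1.0 * C + rng.normal(size=n)  # outcome, confounded by C

# 1. Treatment model; the GPS is the conditional density of the observed dose
tmod = sm.OLS(T, sm.add_constant(C)).fit()
sigma = np.sqrt(tmod.mse_resid)
gps = norm.pdf(T, loc=tmod.predict(), scale=sigma)

# 2. Outcome model in dose and GPS (a quadratic specification)
design = np.column_stack([np.ones(n), T, T**2, gps, gps**2, T * gps])
ymod = sm.OLS(Y, design).fit()

# 3. Dose-response: for each dose t, predict with the GPS evaluated at t
for t in (-1.0, 0.0, 1.0):
    r = norm.pdf(t, loc=tmod.predict(), scale=sigma)
    mu = ymod.predict(np.column_stack(
        [np.ones(n), np.full(n, t), np.full(n, t**2), r, r**2, t * r])).mean()
    print(t, mu)   # estimated E[Y(t)]; truth is 0.5*t, up to model error
```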

A further strand of research considers alternative approaches for handling confounding in studies where outcomes are measured before and after an intervention. Our research contrasts the synthetic control method for the evaluation of health policies, with difference-in-differences (DiD) estimation. The synthetic control approach estimates treatment effects by constructing a weighted combination of control units, to represent outcomes the treated group would have experienced in the absence of receiving the treatment. DiD estimation assumes that pre-treatment, the outcomes between the treated and control groups follow parallel trends over time, whereas the synthetic control method allows for non-parallel trends. We extend the synthetic control approach to settings where there are multiple treated units (for example hospitals), in re-evaluating the effects of a recent hospital pay-for-performance (P4P) scheme on risk-adjusted hospital mortality. Ongoing research is contrasting the synthetic control, and DiD approaches with matching and regression methods.
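
Difference-in-differences itself reduces to a simple contrast, sketched below with invented numbers; the synthetic control method replaces the single control-group change with the change in an optimally weighted combination of control units.

```python
# Hypothetical mean risk-adjusted mortality (%) before and after a P4P scheme
treated_pre, treated_post = 7.0, 6.1   # hospitals under the scheme
control_pre, control_post = 7.4, 7.2   # comparison hospitals

# Under parallel trends, the control change stands in for the change the
# treated group would have seen without the scheme
did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)   # -0.7 percentage points attributed to the scheme
```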

Hartman E, Grieve R, Ramsahai R, Sekhon JS (2015) From sample average treatment effect to population average treatment effect on the treated: combining experimental with observational studies to estimate population treatment effects. Journal of the Royal Statistical Society: Series A. doi: 10.1111/rssa.12094

Steventon A, Grieve R, Bardsley M (2015). An approach to assess generalizability in comparative effectiveness research: a case study of the Whole Systems Demonstrator cluster randomized trial comparing telehealth with usual care for patients with chronic health conditions. Medical Decision Making (in press).

Kreif N, Grieve R, Díaz I, Harrison D (2015) Evaluation of the effect of a continuous treatment: a machine learning approach with an application to treatment for traumatic brain injury. Health Economics (in press).

Causal inference methods for economic evaluations of longitudinal interventions

Noemi Kreif (LSHTM)

Noemi holds a Medical Research Council Early Career Fellowship in the Economics of Health, on improving statistical methods to address confounding in the economic evaluation of health interventions, including a collaboration with Dr Maya Petersen at the UC Berkeley Division of Biostatistics. She is investigating advanced causal inference methods for the setting of economic evaluations of longitudinal interventions. In particular, she is using targeted maximum likelihood estimation and machine learning to compare dynamic treatment regimes, using a non-randomised study of the nutritional intake of critically ill children.

Short course

A short course entitled Causal Inference in Epidemiology: recent methodological developments is held at LSHTM for one week every November.


As part of the CSM’s activities, seminars on causal inference are often organised. Past speakers include Philip Dawid, Vanessa Didelez, Richard Emsley, Miguel Hernán, Erica Moodie and Anders Skrondal.

Details of upcoming meetings can be found here.

UK Causal Inference Meeting 2016

We are delighted to host – together with the London School of Economics and Political Science – the 4th annual UK Causal Inference Meeting, which will take place in LSHTM’s John Snow Lecture Theatre from 13 to 15 April 2016. More details will be published soon, but for now please save the date!

Other events

As well as regular research seminars, we occasionally organise one-off events on topics of particular interest. For example, we are currently planning a half-day meeting on recent controversies in propensity scores. Details of upcoming events can be found here.