Design and analysis for dependent data

Theme Co-ordinators: Mike Kenward, Chris Frost, Richard Silverwood, Richard Hayes, Shabbar Jaffar

The following topics are grouped within this theme: Cluster randomised trials, Using linear mixed models to inform the design of clinical trials, Design and analysis of cross-over trials, Small sample inference for the multivariate linear model, and Growth modelling.

Cluster randomised trials

Cluster randomised trials (CRTs), in which experimental conditions are assigned to groups or clusters rather than individual subjects, are used increasingly to evaluate the effects of health interventions. They are particularly valuable for interventions that are delivered at community level, or to minimise the contamination that can occur when treatments are assigned to individuals living in the same community, or – particularly for infectious diseases – to capture the indirect/herd effects of intervening in entire populations. The School has considerable experience in the design, conduct and analysis of CRTs, and has collaborated in large scale trials in both developed and developing countries, some of which have had a major impact on health policy.

Although important advances in methodology for CRTs have been made in the past 10-20 years, methods for this trial design remain less well developed than those for traditional individually randomised trials. The main feature of such trials that requires special methodology is the within-cluster correlation between observations induced by randomising groups rather than individuals. A wide range of design and analysis issues require further research. These include the evaluation of alternative analytical methods for different types of endpoint, using simulation studies; analytical methods for pair-matched and stratified studies; methods of testing for interactions, including assessment of dose-response at cluster or individual levels; and methods of accounting for non-compliance when estimating intervention effects in CRTs.
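
The scale of this issue can be illustrated with the standard design effect, which inflates the sample size required under individual randomisation by a factor of 1 + (m - 1) x ICC, where m is the number of subjects per cluster and ICC is the intracluster correlation coefficient. The short Python sketch below is purely illustrative; the function name and the numerical values are assumptions, not taken from the work cited in this theme.

```python
import math

def clusters_per_arm(n_individual, m, icc):
    """Illustrative only: inflate an individually randomised sample size by
    the design effect 1 + (m - 1) * ICC and convert to clusters per arm."""
    design_effect = 1 + (m - 1) * icc
    n_clustered = n_individual * design_effect   # individuals needed per arm
    return math.ceil(n_clustered / m)            # clusters needed per arm

# Hypothetical example: 200 subjects per arm would suffice under individual
# randomisation; with clusters of 20 and an ICC of 0.05 the requirement grows.
print(clusters_per_arm(n_individual=200, m=20, icc=0.05))  # 20 clusters per arm
```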

Using linear mixed models to inform the design of clinical trials

Linear mixed models (Verbeke and Molenberghs 2000) have great utility in the analysis of randomised controlled trials with repeated measures of continuous outcome variables. Analysis of repeated measures of potential outcome variables from clinical trials and longitudinal studies using linear mixed models can also inform the design of future clinical trials. In particular, when designing a clinical trial, choices have to be made concerning a number of factors, such as the length of the trial and whether outcomes should be collected at interim visits, and if so at what intervals. The advantages of extending follow-up and adding interim visits depend upon the between- and within-subject components of variability in absolute levels and the between-subject variability in rates of change, and these can be quantified by fitting appropriate linear mixed models (e.g. Schott et al. 2006). Such models can also inform decisions over stratification factors and whether or not to adopt novel designs such as those incorporating a run-in period (Frost et al. 2008) or utilising a staggered start.
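
As a hedged illustration of how these variance components can be estimated, the sketch below simulates repeated measures for a hypothetical study and fits a random-intercept, random-slope linear mixed model with the Python package statsmodels; the software choice, the simulated values and all names are assumptions rather than anything taken from the studies cited above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate 60 subjects measured at four annual visits, with hypothetical
# between-subject variation in level and in rate of change.
rows = []
for i in range(60):
    level = rng.normal(0.0, 2.0)      # between-subject variation in level
    slope = rng.normal(-0.5, 0.4)     # between-subject variation in rate of change
    for t in (0.0, 1.0, 2.0, 3.0):
        y = 10.0 + level + slope * t + rng.normal(0.0, 1.0)  # within-subject error
        rows.append({"id": i, "time": t, "y": y})
df = pd.DataFrame(rows)

# Random-intercept, random-slope model fitted by REML.
fit = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time").fit(reml=True)

print(fit.cov_re)   # estimated between-subject covariance of level and slope
print(fit.scale)    # estimated within-subject (residual) variance
```

Estimates of this kind are exactly what is needed when weighing, for a planned trial, the precision gained by lengthening follow-up against that gained by adding interim visits.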

Design and analysis of cross-over trials

In a cross-over design each subject is randomised to a sequence of treatments, with the aim of comparing the effects of the individual treatments. Such trials are restricted to fairly stable conditions and treatments with reversible effects. When within-subject dependence of the outcome is high, such designs can be very much more efficient than parallel group studies with the same number of subjects. However, because randomisation applies to the sequences, not the individual treatments, the justification for the analysis of data from such trials depends more on model-based assumptions than in a typical parallel group study. The most commonly used cross-over design is the so-called two-period two-treatment design, in which subjects are randomised to one of two sequences: A then B, or B then A. Provided it can be assumed that the consequences of treatment allocation in the first period do not bias the treatment comparison in the second period (carryover), the analysis of such trials is very simple. It turns out that in most trials there will be very little information in the data on possible carryover, and therefore no useful statistical procedure exists either for its assessment or for adjustment for it. As a consequence, prior knowledge and adequate washout periods need to be used to justify the analysis in such settings. Higher-order designs with more than two periods and more than two sequences exist both for the two-treatment setting and for more than two treatments. Efficient designs can be constructed that allow adjustment for carryover, but only when it is expressed in particular, limited, mathematical forms. Data from cross-over designs are examples of conventional repeated measures, and analyses can draw on the range of tools available for these, with both continuous and discrete outcomes.
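
The simple analysis referred to above for the two-period two-treatment design can be sketched as follows; the data are invented and no carryover is assumed. The treatment effect is half the difference in mean within-subject period differences between the two sequence groups.

```python
import numpy as np
from scipy import stats

# Hypothetical outcomes: rows are subjects, columns are period 1 and period 2.
seq_AB = np.array([[5.1, 4.0], [6.3, 5.2], [5.8, 4.9], [6.0, 4.6]])  # A then B
seq_BA = np.array([[4.2, 5.5], [4.8, 5.9], [5.0, 6.1], [4.5, 5.4]])  # B then A

# Within-subject period differences (period 1 minus period 2).
d_AB = seq_AB[:, 0] - seq_AB[:, 1]
d_BA = seq_BA[:, 0] - seq_BA[:, 1]

# Treatment effect (A minus B), assuming no carryover, and the corresponding
# two-sample t-test comparing period differences between sequence groups.
effect = (d_AB.mean() - d_BA.mean()) / 2
t_stat, p_value = stats.ttest_ind(d_AB, d_BA)
print(effect, t_stat, p_value)
```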

Early use of cross-over trials was in agricultural experimentation, with the first separation of direct and indirect (or carryover) effects by Cochran (1939). The earliest use in a human setting appears to be Simpson (1938), who used them to compare the effects of childhood diets. Their use evolved over the next 50 years in a wide variety of settings and the first book to gather together the disparate literature on these designs was Jones and Kenward (1989), with a second edition in 2003. A text devoted to the design in a clinical setting was provided by Senn in 1993, with a second edition in 2002. Together these two texts have become the standard references on the subject.

In the School, research interest has largely been in the analysis of data from cross-over trials, especially in issues arising from the typically small samples (Kenward and Roger 1997, 2009) and the use of baseline measurements (Kenward and Roger 2010).

Small sample inference for the multivariate linear model

Many modern statistical analyses for continuous outcomes are based on the multivariate Gaussian linear model. Examples are those based on linear mixed models and multivariate models with patterned covariance matrices. Only in very simple balanced settings will the inferences from such analyses be based on exact small sample results, and in practice finite sample methods must be based on approximations. Until the paper of Kenward and Roger (1997), such approximations were ad hoc in nature and lacked generality. The Kenward-Roger procedure provided both a small sample adjusted estimate of precision and a Wald-type inference approach derived from it. The full procedure has been implemented in SAS PROC MIXED, where it is the recommended approach for the multivariate linear model. As a consequence it is now very widely used, and there is a growing literature assessing its performance in a range of settings. An improved approximation for non-linear covariance structures, together with the establishment of important invariance properties, has recently been developed (Kenward and Roger 2009).
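
In outline only (a hedged sketch of the general shape of the statistic; the adjusted covariance and the moment-matching that yield the scale and the denominator degrees of freedom are given in Kenward and Roger 1997), the procedure refers a scaled Wald statistic, built from a small sample adjusted covariance of the fixed effect estimates, to an F distribution with estimated denominator degrees of freedom:

```latex
% Sketch: testing L' \beta = 0, where L has rank \ell, \hat{\Phi}_A is the
% adjusted covariance matrix of \hat{\beta}, and the scale \lambda and the
% denominator degrees of freedom m are estimated from the data.
F \;=\; \frac{\lambda}{\ell}\,
  \hat{\beta}^{\top} L \left( L^{\top} \hat{\Phi}_A L \right)^{-1} L^{\top} \hat{\beta}
  \;\;\approx\;\; F_{\ell,\, m}
```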

Apart from very simple balanced settings, all methods that use empirical generalized least squares, including the Kenward-Roger procedure, break down if the sample size is small enough. An alternative is to avoid the use of the covariance structure in estimation and in the calculation of precision, and to correct the final inferential pivot for this. Such a procedure, developed recently in the School, has been shown to work well in very small samples (Skene and Kenward 2010a,b). Work continues on such problems, including high-dimensional meta-analysis and other design and analysis issues.

Growth modelling

Understanding how certain biological dimensions change over time is of interest in various clinical and epidemiological contexts. Childhood growth is one such example, as are systolic blood pressure and cognitive function trajectories in adult life.

There are numerous approaches to modelling this type of serial data. The simplest of these is the so-called marginal model, that is, a model for the mean trajectory. This could be specified as a polynomial (or fractional polynomial), or alternatively as a piecewise linear model or, more generally, a regression spline. Valid inference requires that the correlated nature of the observations belonging to the same individual be taken into account, for example by adopting restricted moment methods such as those based on generalized estimating equations (Liang and Zeger 1986). However, in most instances we are not just interested in the mean. If this is the case, mixed effects models can be used. Here, a model is postulated in which some parameters are assumed to vary across individuals, and in this way they capture each individual’s intrinsic growth pattern. Such models are very widely used in practice, especially assuming a normal distribution for random effects and errors, in both linear (e.g. Verbeke and Molenberghs 2000) and non-linear (Davidian 2009) forms. Covariates can be incorporated in a straightforward way, while smoothing splines can be used instead of a polynomial function of time to better capture the trajectory. Splines can be accommodated in a relatively simple way within the mixed model framework (e.g. Verbyla et al. 1999, Welham 2009, Silverwood et al. 2009), considerably extending the range of growth patterns that can be fitted to the data.
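
As a hedged sketch of the marginal modelling approach described above (simulated data; the Python package statsmodels and all numerical values are assumptions rather than anything drawn from the cited work), a regression spline can describe the mean while generalized estimating equations with an exchangeable working correlation account for the within-individual dependence:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical serial data: 100 individuals each measured at five ages.
rows = []
for i in range(100):
    u = rng.normal(0.0, 1.0)                      # shared individual-level deviation
    for age in (2.0, 4.0, 6.0, 8.0, 10.0):
        y = 15.0 + 0.8 * age - 0.03 * age ** 2 + u + rng.normal(0.0, 0.5)
        rows.append({"id": i, "age": age, "y": y})
df = pd.DataFrame(rows)

# Marginal (population-average) growth curve: regression spline for the mean,
# GEE with an exchangeable working correlation for the dependence, and robust
# standard errors (the statsmodels default) for inference.
gee_fit = smf.gee("y ~ bs(age, df=3)", groups="id", data=df,
                  family=sm.families.Gaussian(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee_fit.summary())
```

When interest extends beyond the mean to individual-specific trajectories, a random-intercept, random-slope mixed model of the kind sketched earlier in this theme is the natural starting point.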

Our interest is in generalizing such models in several ways, including latent class growth analysis and growth mixture modelling (e.g. Nagin 1999, Muthen and Shedden 1999), and simultaneous modelling of multiple dimensions.

Selected references

Cheung YB, Jeffries D, Thomson A and Milligan P (2008). A simple approach to test for interaction between intervention and an individual-level variable in community randomized trials. Tropical Medicine and International Health, 13: 247-255.

Davidian M (2009). Non-linear mixed effects models, in Longitudinal Data Analysis, G Fitzmaurice, Editor. Chapman & Hall/CRC. p. 107-142.

Frost C, Kenward MG, Fox NC (2008). Optimizing the design of clinical trials where the outcome is a rate. Can estimating a baseline rate in a run-in period increase efficiency? Statistics in Medicine, 27: 3717-3731.

Hayes RJ and Bennett S (1999). Simple sample size calculation for cluster-randomized trials. International Journal of Epidemiology, 28: 319-326.

Hayes RJ and Moulton LH (2009). Cluster randomised trials. Boca Raton: Chapman & Hall/CRC.

Jones B and Kenward MG (1989, 2003). Design and Analysis of Cross-over Trials. London: Chapman & Hall/CRC.

Kenward MG and Roger JH (1997). Small sample inference for fixed effects from restricted maximum likelihood. Biometrics, 53: 983-997.

Kenward MG and Roger JH (2009). An improved approximation to the precision of fixed effects from restricted maximum likelihood. Computational Statistics and Data Analysis, 53: 2583-2595.

Kenward MG and Roger JH (2010) The use of baseline covariates in cross-over studies. Biostatistics, 11: 1-17.

Liang K-Y and Zeger SL (1986). Longitudinal data analysis using generalized linear models. Biometrika, 73: 13-22.

Muthen B and Shedden K (1999). Finite mixture modelling with mixture outcomes using the EM algorithm. Biometrics, 55: 463-469.

Nagin DS (1999). Analysing developmental trajectories: a semi-parametric, group-based approach. Psychological Methods, 4: 139-157.

Schott JM, Frost C, Whitwell JL, MacManus DG, Boyes RG, Rossor MN, Fox NC (2006). Combining short interval MRI in Alzheimer’s disease: Implications for therapeutic trials. Journal of Neurology, 253: 1147-1153.

Senn S (1993, 2002). Cross-over Trials in Clinical Research. Chichester: Wiley.

Silverwood RJ, De Stavola BL, Cole TJ, Leon DA (2009). BMI peak in infancy as a predictor for later BMI in the Uppsala Family Study. International Journal of Obesity, 33: 929-937.

Skene S and Kenward MG (2010a). The analysis of very small samples of repeated measurements. I: An adjusted sandwich estimator. Statistics in Medicine, 29: 2825-2837.

Skene S and Kenward MG (2010b). The analysis of very small samples of repeated measurements. II: A modified Box correction. Statistics in Medicine, 29: 2838-2856.

Verbyla AP et al. (1999). The analysis of designed experiments and longitudinal data using smoothing splines (with discussion). Applied Statistics, 48: 269-312.

Verbeke G and Molenberghs G (2000). Linear Mixed Models for Longitudinal Data. New York: Springer.

Welham SJ (2009). Smoothing spline models for longitudinal data, in Longitudinal Data Analysis, G Fitzmaurice, Editor. Chapman & Hall/CRC. p. 253-290.
