“Statistical Methods for Big Data” Symposium
The Centre for Statistical Methodology held a half-day symposium on Statistical Methods for Big Data on 7 July 2017.
“Big Data” are being promoted as a revolutionary development in the future of health research. They are increasingly available across fields, from molecular and environmental epidemiology to pharmaco-epidemiology and public policy. Different applications encounter different challenges, for example high-dimensionality of populations, exposures, time points, or locations, or of a combination of these. A distinction between “made big data” (such as those derived from -omics platforms) and “found big data” (such as those obtained by linkage across electronic health records and administrative databases) may be useful for identifying and addressing the challenges posed by these novel sources of information.
The symposium aimed to discuss these features, the challenges they pose for statistical methods, and future directions of research. Speakers from different methodological perspectives presented examples across a wide spectrum of applications:
Elizabeth Williamson (LSHTM): Big data in health research: opportunities and challenges
Stephen Evans (LSHTM): A perspective from and for pharmacoepidemiology
Pietro Ferrari (IARC, Lyon): Understanding complex data
Joel Schwartz (Harvard T.H. Chan School of Public Health): Big data in environmental epidemiology: p > n, large n, and machine learning
Jas Sekhon (University of California, Berkeley): Policy and evaluation in the age of big data
The talks were followed by a panel discussion.
Slides and recordings of the talks will soon be available here.