Statistical Methods for the Analysis of Epidemiological Studies of Familial Cancer

2015 ◽  
pp. 199-201
Author(s):  
G. J. Draper



2017 ◽  
Vol 76 (3) ◽  
pp. 213-219 ◽  
Author(s):  
Johanna Conrad ◽  
Ute Nöthlings

Valid estimation of usual dietary intake in epidemiological studies is a topic of present interest. The aim of the present paper is to review recent literature on innovative approaches, focusing on: (1) the requirements to assess usual intake and (2) the application in large-scale settings. Recently, a number of technology-based self-administered tools have been developed, including short-term instruments such as web-based 24-h recalls, mobile food records or simple closed-ended questionnaires that assess the food intake of the previous 24 h. Due to their advantages in terms of feasibility and cost-effectiveness, these tools may be superior to conventional assessment methods in large-scale settings. New statistical methods have been developed to combine dietary information from repeated 24-h dietary recalls and FFQ. Conceptually, these statistical methods presume that the usual food intake of a subject equals the probability of consuming a food on a given day, multiplied by the average amount of intake of that food on a typical consumption day. Repeated 24-h recalls from the same individual provide information on consumption probability and amount. In addition, the FFQ can add information on intake frequency of rarely consumed foods. It has been suggested that this combined approach may provide high-quality dietary information. A promising direction for estimation of usual intake in large-scale settings is the integration of both statistical methods and new technologies. Studies are warranted to assess the validity of estimated usual intake in comparison with biomarkers.
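The decomposition described above can be illustrated with a minimal sketch. The data below are hypothetical, and the calculation is only the conceptual core: published methods (e.g. the NCI method) fit mixed models to separate within- and between-person variation rather than using simple means.

```python
import numpy as np

# Hypothetical repeated 24-h recalls for one subject:
# amount of a food consumed (g/day); 0.0 means no consumption that day.
recalls = np.array([0.0, 150.0, 0.0, 0.0, 120.0, 0.0, 180.0])

# Usual intake = P(consuming the food on a given day)
#              * mean amount on days when it is consumed.
p_consume = np.mean(recalls > 0)                 # 3 of 7 recall days
mean_amount = recalls[recalls > 0].mean()        # average over consumption days
usual_intake = p_consume * mean_amount

print(f"P(consume) = {p_consume:.3f}")
print(f"mean amount on consumption days = {mean_amount:.1f} g")
print(f"estimated usual intake = {usual_intake:.1f} g/day")
```

For rarely consumed foods, few or no recall days capture a consumption event, which is where FFQ frequency information is folded into the probability term.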



Author(s):  
Mark Elwood

This book presents a system of critical appraisal applicable to clinical, epidemiological and public health studies and to many other fields. It assumes no prior knowledge. The methods are relevant to students, practitioners and policymakers. The book shows how to assess whether the results of one study or of many studies show a causal effect. The book discusses study designs: randomised and non-randomised trials, cohort studies, case-control studies, and surveys, showing the presentation of results including person-time and survival analysis, and issues in the selection of subjects. The system shows how to describe a study, how to detect and assess selection biases, observation bias, confounding, and chance variation, and how to assess internal validity and external validity (generalisability). Statistical methods are presented assuming no previous knowledge, with applications shown for each study design. Positive features of causation, including strength, dose-response, and consistency, are discussed. The book shows how to do systematic reviews and meta-analyses, and discusses publication bias. Systems of assessing all evidence are shown, leading to a general method of critical appraisal based on 20 key questions in five groups, which can be applied to any type of study or any topic. Six chapters show the application of this method to randomised trials, prospective and retrospective cohort studies, and case-control studies. An appendix summarises key statistical methods, each with a worked example. Each main chapter has self-test questions, with answers provided.



2010 ◽  
Vol 39 (5) ◽  
pp. 1345-1359 ◽  
Author(s):  
Simon Thompson ◽  
Stephen Kaptoge ◽  
Ian White ◽  
Angela Wood ◽  
Philip Perry ◽  
...  


2006 ◽  
Vol 45 (04) ◽  
pp. 409-413 ◽  
Author(s):  
M. E. Schmidt ◽  
K. Steindorf

Summary Objectives: Questionnaires used in epidemiological studies should be validated. However, there is a lack of clarity about the appropriate statistical methods for, and interpretation of, validation studies. Thus, we investigated the theory and practice of statistical evaluation approaches. Methods: Using three approaches (a literature review, our own simulations, and a validation study that we performed ourselves), we worked out relevant limitations, advantages, and new important aspects of the evaluation methods. Results: Our systematic literature review, based on physical activity questionnaires, revealed that correlation coefficients are still the common approach in validation studies, found in 41 of 46 reviewed publications (89.1%). This practice has been criticized in the theoretically oriented literature for more than 20 years. Appropriate evaluation methods as recommended by Bland and Altman were found in only ten publications (21.7%). We showed that serious bias in questionnaires can be revealed by Bland-Altman plots but may remain undetected by correlation coefficients. With our simulations we refuted the argument that correlation coefficients properly investigate whether a questionnaire ranks the subjects sufficiently well. Further, with Bland-Altman analyses we could evaluate differential errors with respect to case-control status in our validation study. Yet, this was not possible with correlation coefficients, because they generally do not identify systematic bias. In addition, we show a potential pitfall in the interpretation of Bland-Altman plots that might occur in specific rare instances. Conclusions: The commonly used correlation approach can yield misleading conclusions in validation studies. A more frequent and proper use of the Bland-Altman methods would be desirable to improve epidemiological data quality.
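The central point (that correlation can stay high while a systematic bias goes undetected) can be reproduced with a small simulation. The data below are entirely synthetic: a questionnaire that proportionally overestimates a reference measure still correlates almost perfectly with it, yet the Bland-Altman mean difference exposes the bias.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "true" activity (reference method) and a questionnaire
# that systematically overestimates it by 20% plus random noise.
reference = rng.uniform(10, 50, size=200)
questionnaire = 1.2 * reference + rng.normal(0, 2, size=200)

# Correlation coefficient: remains high despite the systematic bias.
r = np.corrcoef(reference, questionnaire)[0, 1]

# Bland-Altman quantities: mean difference (bias) and limits of agreement.
diff = questionnaire - reference
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)

print(f"correlation r = {r:.2f}")               # close to 1
print(f"Bland-Altman bias = {bias:.1f}")        # clearly nonzero
print(f"limits of agreement = ({loa_low:.1f}, {loa_high:.1f})")
```

The correlation summarizes only how well the two measures co-vary, whereas the mean difference and limits of agreement describe how far apart they actually are, which is the quantity a validation study needs.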



2013 ◽  
Vol 2013 (1) ◽  
pp. 5869
Author(s):  
Marisa Estarlich ◽  
Carmen Iñiguez ◽  
Spanish C Switzerland ◽  
Ana Esplugues ◽  
Gerard Hoek ◽  
...  


1982 ◽  
Vol 109 (2) ◽  
pp. 203-223 ◽  
Author(s):  
S. Haberman

The paper describes in some detail the statistical methods that may be used for analysing mortality data from medical and epidemiological studies, with particular reference to quantifying survival and the risk of mortality and to comparing the experience with a standard.
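One standard way of comparing a cohort's mortality experience with a standard, in the spirit of the methods reviewed above, is the standardized mortality ratio (SMR): observed deaths divided by the deaths expected if standard-population age-specific rates applied to the cohort's person-years. The figures below are hypothetical and serve only to show the arithmetic.

```python
# Hypothetical cohort person-years at risk by age group.
person_years = {"40-49": 1200.0, "50-59": 900.0, "60-69": 400.0}

# Hypothetical standard-population death rates (per person-year).
standard_rates = {"40-49": 0.004, "50-59": 0.010, "60-69": 0.025}

observed_deaths = 28  # hypothetical observed count in the cohort

# Expected deaths: sum over age groups of person-years * standard rate.
expected = sum(person_years[age] * standard_rates[age] for age in person_years)

smr = observed_deaths / expected
print(f"expected deaths = {expected:.1f}")
print(f"SMR = {smr:.2f}")  # >1 indicates excess mortality vs the standard
```

An SMR above 1 suggests mortality in excess of the standard population, after indirect adjustment for the cohort's age structure.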



1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate systematic and random errors associated with trigonometric parallaxes. First, systematic errors which arise when using trigonometric parallaxes to calibrate luminosity systems are discussed. Next, determination of the external errors of parallax measurement is reviewed. Observatory corrections are discussed. Schilt's point, that because the causes of these systematic differences between observatories are not known the computed corrections cannot be applied appropriately, is emphasized. However, modern parallax work is sufficiently accurate that it is necessary to determine observatory corrections if full use is to be made of the potential precision of the data. To this end, it is suggested that an experimental design specified in advance is required. Past experience has shown that accidental overlap of observing programs will not suffice to determine observatory corrections which are meaningful.


