Ontological Models Supporting Covariates Selection in Observational Studies

Author(s):  
Thibaut Pressat Laffouilhère ◽  
Julien Grosjean ◽  
Jacques Bénichou ◽  
Stefan J. Darmoni ◽  
Lina F. Soualmia

In the context of causal inference, biostatisticians use causal diagrams to select covariates for multivariable models. These diagrams represent the variables of a dataset and their relations but have some limitations (e.g., representing interactions and bidirectional causal relations). The MetBrAYN project aims to build an ontology-based process to tackle these issues. The knowledge acquired by the biostatistician during a methodological consultation for a research question will be represented in a general ontology, which will act as a wrapper to aggregate various forms of knowledge. Ontology-based causal diagrams will be semi-automatically built. Based on inference rules, the global system will help biostatisticians curate these diagrams and visualize recommended covariates for their research question.
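The covariate-selection idea described here can be sketched in miniature. The diagram, variable names, and the common-ancestor heuristic below are illustrative assumptions only, not the MetBrAYN implementation; a full treatment would apply the back-door criterion.

```python
# Minimal sketch: selecting candidate confounders from a causal diagram.
# The DAG is stored as a parent map; the common-ancestor heuristic is a
# simplification of the back-door criterion, used here for illustration.

def ancestors(parents, node):
    """Return all ancestors of `node` in a DAG given as a parent map."""
    seen, stack = set(), list(parents.get(node, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, []))
    return seen

# Hypothetical diagram: Age -> Exposure, Age -> Outcome,
# Exposure -> Mediator -> Outcome
parents = {
    "Exposure": ["Age"],
    "Mediator": ["Exposure"],
    "Outcome": ["Age", "Mediator"],
}

# Candidate confounders: common ancestors of exposure and outcome.
confounders = ancestors(parents, "Exposure") & ancestors(parents, "Outcome")
print(confounders)  # {'Age'}; the mediator is correctly not selected
```

Note that the heuristic correctly excludes the mediator, which should not be adjusted for when estimating the total effect of the exposure.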

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Emilia Gvozdenović ◽  
Lucio Malvisi ◽  
Elisa Cinconze ◽  
Stijn Vansteelandt ◽  
Phoebe Nakanwagi ◽  
...  

Abstract Background Randomized controlled trials are considered the gold standard to evaluate causal associations, whereas assessing causality in observational studies is challenging. Methods We applied Hill’s criteria, counterfactual reasoning, and causal diagrams to evaluate a potentially causal relationship between an exposure and outcome in three published observational studies: a) one burden-of-disease cohort study to determine the association between type 2 diabetes and herpes zoster, b) one post-authorization safety cohort study to assess the effect of AS04-HPV-16/18 vaccine on the risk of autoimmune diseases, and c) one matched case-control study to evaluate the effectiveness of a rotavirus vaccine in preventing hospitalization for rotavirus gastroenteritis. Results Among the 9 Hill’s criteria, 8 (Strength, Consistency, Specificity, Temporality, Plausibility, Coherence, Analogy, Experiment) were considered met for study c, 3 (Temporality, Plausibility, Coherence) for study a, and 2 (Temporality, Plausibility) for study b. For the counterfactual reasoning criteria, exchangeability, the most critical assumption, could not be tested. Using these tools, we concluded that causality was very unlikely in study b, unlikely in study a, and very likely in study c. Directed acyclic graphs provided complementary visual structures that identified confounding bias and helped determine the most accurate design and analysis to assess causality. Conclusions Based on our assessment, we found Hill’s causal criteria and counterfactual thinking valuable in determining some level of certainty about causality in observational studies. Application of causal inference frameworks should be considered in designing and interpreting observational studies.
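The criterion counts reported in the results can be tallied mechanically. In this sketch, the study labels and criterion sets are taken directly from the abstract; only the tallying code is new.

```python
# Tally how many of Hill's 9 criteria were considered met per study,
# using the sets reported in the abstract.
HILL_CRITERIA = [
    "Strength", "Consistency", "Specificity", "Temporality", "Plausibility",
    "Coherence", "Biological gradient", "Experiment", "Analogy",
]

met = {
    "a (diabetes / herpes zoster)": {"Temporality", "Plausibility", "Coherence"},
    "b (HPV vaccine / autoimmune)": {"Temporality", "Plausibility"},
    # Study c met all criteria except the biological gradient.
    "c (rotavirus vaccine)": set(HILL_CRITERIA) - {"Biological gradient"},
}

counts = {study: len(criteria) for study, criteria in met.items()}
# counts: 8 criteria for study c, 3 for study a, 2 for study b
```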


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Hendrika J. Luijendijk ◽  
Matthew J. Page ◽  
Huibert Burger ◽  
Xander Koolman

Abstract Background Evidence-based medicine aims to integrate scientific evidence, clinical experience, and patient values and preferences. Individual health care professionals need to appraise the evidence from randomized trials and observational studies when guidelines are not yet available. To date, tools for assessment of bias and terminologies for bias are specific to each study design. Moreover, most tools appeal only to methodological knowledge to detect bias, not to subject matter knowledge, i.e., in-depth medical knowledge about a topic. We propose a unified framework that enables the coherent assessment of bias across designs. Methods Epidemiologists traditionally distinguish between three types of bias in observational studies: confounding, information bias, and selection bias. These biases result from, respectively, a common cause of the intervention and outcome, systematic error in measurement, and conditioning on a common effect of the intervention and outcome. We applied this conceptual framework to randomized trials and show how it can be used to identify bias. The three sources of bias were illustrated with graphs that visually represent researchers’ assumptions about the relationships between the investigated variables (causal diagrams). Results Critical appraisal of evidence started with the definition of the research question in terms of the population of interest, the compared interventions, and the main outcome. Next, we used causal diagrams to illustrate how each source of bias can lead to over- or underestimated treatment effects. Then, we discussed how randomization, blinded outcome measurement, and intention-to-treat analysis minimize bias in trials. Finally, we identified study aspects that can only be appraised with subject matter knowledge, irrespective of study design. Conclusions The unified framework encompassed the three main sources of bias for the effect of an assigned intervention on an outcome. It facilitated the integration of methodological and subject matter knowledge in the assessment of bias. We hope that graphical diagrams will help clarify debate among professionals by reducing misunderstandings based on different terminology for bias.
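The confounding mechanism the framework describes, a common cause of intervention and outcome, can be illustrated with a small simulation. The probabilities below are arbitrary assumptions chosen so that the true treatment effect is zero; the crude estimate is then biased by confounding while the stratified (adjusted) estimate is not.

```python
import random

# Simulate a null treatment effect with a binary confounder C that raises
# both the probability of treatment T and the probability of outcome Y.
random.seed(0)
rows = []
for _ in range(50_000):
    c = random.random() < 0.5                    # confounder
    t = random.random() < (0.7 if c else 0.3)    # treatment depends on C
    y = random.random() < (0.5 if c else 0.2)    # outcome depends on C only
    rows.append((c, t, y))

def risk(treated, stratum=None):
    """Outcome risk among treated/untreated, optionally within a C-stratum."""
    sel = [y for c, t, y in rows
           if t == treated and (stratum is None or c == stratum)]
    return sum(sel) / len(sel)

crude = risk(True) - risk(False)            # inflated by confounding
adjusted = sum(risk(True, s) - risk(False, s) for s in (True, False)) / 2
# `adjusted` is close to the true effect of zero; `crude` is not
```

Stratifying on C and averaging the stratum-specific risk differences (with equal weights, since C is balanced here) is the analytic counterpart of the "account for measured traits" approaches discussed in these abstracts.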


2020 ◽  
Vol 16 (1) ◽  
pp. 25-48 ◽  
Author(s):  
Brian M. D'Onofrio ◽  
Arvid Sjölander ◽  
Benjamin B. Lahey ◽  
Paul Lichtenstein ◽  
A. Sara Öberg

The goal of this review is to enable clinical psychology researchers to more rigorously test competing hypotheses when studying risk factors in observational studies. We argue that there is a critical need for researchers to leverage recent advances in epidemiology/biostatistics related to causal inference and to use innovative approaches to address a key limitation of observational research: the need to account for confounding. We first review theoretical issues related to the study of causation, how causal diagrams can facilitate the identification and testing of competing hypotheses, and the current limitations of observational research in the field. We then describe two broad approaches that help account for confounding: analytic approaches that account for measured traits and designs that account for unmeasured factors. We provide descriptions of several such approaches and highlight their strengths and limitations, particularly as they relate to the etiology and treatment of behavioral health problems.


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S323-S323 ◽  
Author(s):  
Anja K Leist

Abstract Rationale: There is an urgent need to better understand how to maintain cognitive functioning at older ages through social and behavioral interventions, given that no medical cure is currently available to prevent, halt, or reverse the progression of cognitive decline and dementia. However, in current models it is still not well established which factors (e.g., education, BMI, physical activity, sleep, depression) matter most at which ages, and which behavioral profiles are most protective against cognitive decline. In recent years, advances in the fields of causal inference and machine learning have equipped epidemiology and the social sciences with methods and models to approach causal questions in observational studies. Method: The presentation will give an overview of the causal inference framework and different machine learning approaches to investigate cognitive aging. First, we will present relevant research questions on the role of social and behavioral factors in cognitive aging in observational studies. Second, we will introduce the causal inference framework and recent methods to visualize and compute the strength of causal paths. Third, we will present promising machine learning approaches for arriving at robust predictions. The 13-year follow-up of the European SHARE survey, which employs well-established cognitive performance tests, is used to demonstrate the usefulness of the approach. Discussion: The causal inference framework, combined with recent machine learning approaches and applied in observational studies, provides a robust alternative to intervention research. Advantages of investigations under the new framework, e.g., fewer ethical considerations compared with intervention research, as well as limitations, are discussed.


2019 ◽  
pp. 004912411985237
Author(s):  
Peter Abell ◽  
Ofer Engel

The article explores the role that subjective evidence of causality and associated counterfactuals and counterpotentials might play in the social sciences where comparative cases are scarce. This scarcity rules out statistical inference based upon frequencies and usually invites in-depth ethnographic studies. Thus, if causality is to be preserved in such situations, a conception of ethnographic causal inference is required. Ethnographic causality inverts the standard statistical concept of causal explanation in observational studies, whereby comparison and generalization, across a sample of cases, are both necessary prerequisites for any causal inference. Ethnographic causality allows, in contrast, for causal explanation prior to any subsequent comparison or generalization.


2004 ◽  
Vol 29 (3) ◽  
pp. 343-367 ◽  
Author(s):  
Donald B. Rubin

Inference for causal effects is a critical activity in many branches of science and public policy. The field of statistics is the one field most suited to address such problems, whether from designed experiments or observational studies. Consequently, it is arguably essential that departments of statistics teach courses in causal inference to both graduate and undergraduate students. This article discusses an outline of such courses based on repeated experience over more than a decade.

