Distributional Robustness
Recently Published Documents


TOTAL DOCUMENTS: 17 (FIVE YEARS: 4)

H-INDEX: 4 (FIVE YEARS: 1)

2021 · Author(s): Martin Emil Jakobsen, Jonas Peters

Abstract: While causal models are robust in that they are prediction optimal under arbitrarily strong interventions, they may not be optimal when the interventions are bounded. We prove that the classical K-class estimator satisfies such optimality by establishing a connection between K-class estimators and anchor regression. This connection further motivates a novel estimator in instrumental variable settings that minimizes the mean squared prediction error subject to the constraint that the estimator lies in an asymptotically valid confidence region of the causal coefficient. We call this estimator PULSE (p-uncorrelated least squares estimator), relate it to work on invariance, show that it can be computed efficiently as a data-driven K-class estimator even though the underlying optimization problem is non-convex, and prove consistency. We evaluate the estimators on real data and perform simulation experiments which illustrate that PULSE exhibits less variability and outperforms other estimators in several settings, including weak-instrument settings.
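As a minimal illustration of the K-class family referenced in this abstract (notation ours; this is not code from the paper, and the data-driven choice of kappa that defines PULSE is omitted): the K-class estimator has the closed form beta(kappa) = (X'(I - kappa*M_Z)X)^{-1} X'(I - kappa*M_Z)Y, where M_Z is the residual maker of the instruments Z; kappa = 0 gives ordinary least squares and kappa = 1 gives two-stage least squares. In Python:

import numpy as np

def k_class_estimator(Y, X, Z, kappa):
    # Closed-form K-class estimator: kappa = 0 is OLS, kappa = 1 is TSLS.
    n = Z.shape[0]
    P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the column span of Z
    M_Z = np.eye(n) - P_Z                     # residual maker of the instruments
    W = np.eye(n) - kappa * M_Z
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)

Schematically, PULSE then minimizes the in-sample mean squared prediction error over coefficients that lie in an asymptotically valid confidence region of the causal coefficient, which the paper shows can be solved as a data-driven choice within this K-class family.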


2021 ◽  
Vol 33 (2) ◽  
pp. 226-247 ◽  
Author(s):  
Sebastian Weichwald ◽  
Jonas Peters

Whereas probabilistic models describe the dependence structure between observed variables, causal models go one step further: They predict, for example, how cognitive functions are affected by external interventions that perturb neuronal activity. In this review and perspective article, we introduce the concept of causality in the context of cognitive neuroscience and review existing methods for inferring causal relationships from data. Causal inference is an ambitious task that is particularly challenging in cognitive neuroscience. We discuss two difficulties in more detail: the scarcity of interventional data and the challenge of finding the right variables. We argue for distributional robustness as a guiding principle to tackle these problems. Robustness (or invariance) is a fundamental principle underlying causal methodology. A (correctly specified) causal model of a target variable generalizes across environments or subjects as long as these environments leave the causal mechanisms of the target intact. Consequently, if a candidate model does not generalize, then either it does not consist of the target variable's causes or the underlying variables do not represent the correct granularity of the problem. In this sense, assessing generalizability may be useful when defining relevant variables and can be used to partially compensate for the lack of interventional data.
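A rough formalization of the invariance principle described here (our notation, not from the article): if X_{PA(Y)} denotes the direct causes of a target Y, a correctly specified causal model satisfies P^e(Y | X_{PA(Y)}) = P^f(Y | X_{PA(Y)}) for all environments e and f that leave the mechanism of Y itself intact. A candidate model whose conditional changes across such environments therefore either conditions on non-causes or on variables at the wrong granularity.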


2020 · Vol 35(6), pp. 4908-4911 · Author(s): Yang Cao, Wei Wei, Shengwei Mei, Miadreza Shafie-khah, Joao P. S. Catalao

2020 · Vol 34(04), pp. 5511-5518 · Author(s): Ashkan Rezaei, Rizal Fathony, Omid Memarrast, Brian Ziebart

Developing classification methods with high accuracy that also avoid unfair treatment of different groups has become increasingly important for data-driven decision making in social applications. Many existing methods enforce fairness constraints on a selected classifier (e.g., logistic regression) by directly forming constrained optimizations. We instead re-derive a new classifier from the first principles of distributional robustness that incorporates fairness criteria into a worst-case logarithmic loss minimization. This construction takes the form of a minimax game and produces a parametric exponential family conditional distribution that resembles truncated logistic regression. We present the theoretical benefits of our approach in terms of its convexity and asymptotic convergence. We then demonstrate the practical advantages of our approach on three benchmark fairness datasets.
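Read schematically (our notation; the exact placement of the constraints follows the paper), the construction is a minimax game between a predictor \hat{P} and a worst-case evaluation distribution \check{P}:

\min_{\hat{P}(\cdot\mid x)} \; \max_{\check{P} \in \Xi} \; \mathbb{E}_{X \sim \tilde{P},\, Y \sim \check{P}(\cdot\mid X)} \big[ -\log \hat{P}(Y \mid X) \big],

where \tilde{P} is the empirical feature distribution and \Xi collects conditional label distributions consistent with statistics of the training data, with the fairness criteria entering as additional constraints on the game; per the abstract, the solution takes the exponential-family form described above.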


2018 · Vol 17(1), pp. 1-15 · Author(s): Shinji Ijichi, Naomi Ijichi, Yukina Ijichi, Chikako Imamura, Hisami Sameshima, ...
