Log-Nonlinear Formulations for Robust High-dimensional Modeling

2021 ◽  
Author(s):  
Taylor W Webb ◽  
Kiyofumi Miyoshi ◽  
Tsz Yan So ◽  
Sivananda Rajananda ◽  
Hakwan Lau

Previous work has sought to understand decision confidence as a prediction of the probability that a decision will be correct, leading to debate over whether these predictions are optimal, and whether they rely on the same decision variable as decisions themselves. This work has generally relied on idealized, low-dimensional modeling frameworks, such as signal detection theory or Bayesian inference, leaving open the question of how decision confidence operates in the domain of high-dimensional, naturalistic stimuli. To address this, we developed a deep neural network model optimized to assess decision confidence directly given high-dimensional inputs such as images. The model naturally accounts for a number of puzzling dissociations between decisions and confidence, suggests a principled explanation of these dissociations in terms of optimization for the statistics of sensory inputs, and makes the surprising prediction that, despite these dissociations, decisions and confidence depend on a common decision variable.
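The abstract describes the model only at a high level, but the core idea, a single network trained to produce both a perceptual decision and a confidence estimate that predicts the decision's accuracy, can be sketched as follows. This is a minimal illustration, not the authors' published architecture; the layer sizes, the 32x32 grayscale inputs, and the combined loss are assumptions made for the sketch.

```python
# Sketch: a convolutional network with two heads, one for the decision and one
# for confidence. The confidence head is trained to predict whether the
# decision head's choice will be correct, so confidence is optimized directly
# as a prediction of decision accuracy. All sizes are illustrative.
import torch
import torch.nn as nn

class DecisionConfidenceNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        hidden = 32 * 8 * 8  # feature size for 32x32 inputs after two 2x pools
        self.decision_head = nn.Linear(hidden, n_classes)
        self.confidence_head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.features(x)
        return self.decision_head(h), self.confidence_head(h).squeeze(-1)

def training_step(model, images, labels):
    # Decision loss: cross-entropy on the stimulus category.
    # Confidence loss: binary cross-entropy against whether the decision
    # head's argmax choice is correct.
    logits, confidence = model(images)
    decision_loss = nn.functional.cross_entropy(logits, labels)
    correct = (logits.argmax(dim=1) == labels).float()
    confidence_loss = nn.functional.binary_cross_entropy(confidence, correct)
    return decision_loss + confidence_loss

model = DecisionConfidenceNet()
images = torch.randn(8, 1, 32, 32)   # dummy batch of 8 grayscale images
labels = torch.randint(0, 2, (8,))
loss = training_step(model, images, labels)
loss.backward()
```

Because both heads share the same feature extractor, the sketch also makes concrete the abstract's claim that decisions and confidence can rely on a common decision variable while still dissociating in behavior.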


2020 ◽  
Vol 15 (3) ◽  
pp. 909-935 ◽  
Author(s):  
Xinming Yang ◽  
Naveen N. Narisetty

2017 ◽  
Vol 43 (1) ◽  
pp. 3-31 ◽  
Author(s):  
Adam C. Sales ◽  
Ben B. Hansen ◽  
Brian Rowan

In causal matching designs, some control subjects are often left unmatched, and some covariates are often left unmodeled. This article introduces “rebar,” a method using high-dimensional modeling to incorporate these commonly discarded data without sacrificing the integrity of the matching design. After constructing a match, a researcher uses the unmatched control subjects—the remnant—to fit a machine learning model predicting control potential outcomes as a function of the full covariate matrix. The resulting predictions in the matched set are used to adjust the causal estimate to reduce confounding bias. We present theoretical results to justify the method’s bias-reducing properties as well as a simulation study that demonstrates them. Additionally, we illustrate the method in an evaluation of a school-level comprehensive educational reform program in Arizona.
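The workflow described above can be sketched in a few lines. This is not the authors' code; the gradient-boosting learner, the variable names, and the simple difference-in-mean-residuals estimator are assumptions standing in for whatever model and matched-set estimator a real analysis would use.

```python
# Rough sketch of the rebar idea: fit a flexible model on the unmatched
# controls (the remnant), predict control potential outcomes for every matched
# subject, and estimate the treatment effect on the prediction residuals
# rather than on the raw outcomes.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def rebar_estimate(X_remnant, y_remnant, X_matched, y_matched, treated_matched):
    """X_*: covariate matrices; y_*: observed outcomes;
    treated_matched: boolean treatment indicator within the matched set."""
    # Step 1: learn E[Y(0) | X] from the remnant (unmatched controls) only.
    model = GradientBoostingRegressor()
    model.fit(X_remnant, y_remnant)

    # Step 2: residualize matched-set outcomes against the predictions.
    residuals = y_matched - model.predict(X_matched)

    # Step 3: difference in mean residuals, treated vs. matched controls.
    # (A real analysis would respect the matched-pair/set structure.)
    return residuals[treated_matched].mean() - residuals[~treated_matched].mean()

# Toy usage with simulated data (true treatment effect = 2.0)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y0 = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)
remnant, matched = slice(0, 300), slice(300, 500)   # pretend split
treated = rng.random(200) < 0.5
y_matched = y0[matched] + 2.0 * treated
print(rebar_estimate(X[remnant], y0[remnant], X[matched], y_matched, treated))
```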


1999 ◽  
Vol 09 (01) ◽  
pp. 41-59 ◽  
Author(s):  
Chun-Shin Lin ◽  
Chien-Kuo Li

The paper presents a novel memory-based Self-Generated Basis Function Neural Network (SGBFN) composed of small CMACs. The SGBFN requires much less memory than the conventional CMAC and converges much more reliably during learning than multilayer neural networks. Each CMAC in the new structure takes a subset of the problem inputs as its inputs. Several CMACs with different input subsets form a submodule, and a group of submodules forms the neural network. The output of a submodule is the product of its CMACs' outputs; each submodule thereby implements a self-generated basis function developed during learning. The output of the network is the sum of the submodule outputs. Using only a subset of inputs in each CMAC greatly reduces the memory required for high-dimensional modeling, and with the same amount of memory the new structure achieves a much smaller learning error than the conventional CMAC.
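The product-of-CMACs structure described above can be sketched compactly. This is a simplified illustration, not the paper's algorithm: each CMAC is reduced to a one-input lookup table (real CMACs use overlapping receptive fields and hashing), and the table sizes, learning rate, and LMS-style update rule are assumptions.

```python
# Sketch of the SGBFN structure: each submodule multiplies the outputs of its
# CMACs (forming a self-generated basis function), and the network output is
# the sum of the submodule outputs.
import numpy as np

class TinyCMAC:
    """A one-input lookup table standing in for a small CMAC."""
    def __init__(self, n_cells=16, lo=0.0, hi=1.0):
        self.table = np.ones(n_cells)  # start at 1 so products are nontrivial
        self.lo, self.hi, self.n = lo, hi, n_cells

    def cell(self, x):
        idx = int((x - self.lo) / (self.hi - self.lo) * self.n)
        return min(max(idx, 0), self.n - 1)

    def output(self, x):
        return self.table[self.cell(x)]

class SGBFN:
    def __init__(self, input_subsets):
        # One submodule per subset of input indices; one TinyCMAC per index.
        self.submodules = [[(i, TinyCMAC()) for i in subset]
                           for subset in input_subsets]

    def forward(self, x):
        # Submodule output = product of its CMAC outputs; network output =
        # sum over submodules.
        return sum(np.prod([c.output(x[i]) for i, c in sub])
                   for sub in self.submodules)

    def train_step(self, x, target, lr=0.05):
        # LMS-style update: credit each active cell with the output error,
        # scaled by the product of the other CMAC outputs in its submodule.
        error = target - self.forward(x)
        for sub in self.submodules:
            outs = [c.output(x[i]) for i, c in sub]
            for j, (i, c) in enumerate(sub):
                others = np.prod([o for k, o in enumerate(outs) if k != j])
                c.table[c.cell(x[i])] += lr * error * others

# Toy usage: learn f(x) = x0 * x1 + x2 on the unit cube.
rng = np.random.default_rng(1)
net = SGBFN(input_subsets=[(0, 1), (2,)])
for _ in range(5000):
    x = rng.random(3)
    net.train_step(x, x[0] * x[1] + x[2])
print(net.forward(np.array([0.3, 0.6, 0.2])))  # near 0.38, up to discretization error
```

The memory saving the abstract refers to comes from the input subsets: a full CMAC over all inputs would need a table over the joint input space, whereas each small CMAC here only indexes its own low-dimensional subset.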


2015 ◽  
Vol 51 (1) ◽  
pp. 220-239 ◽  
Author(s):  
Jessica M. Franklin ◽  
William H. Shrank ◽  
Joyce Lii ◽  
Alexis K. Krumme ◽  
Olga S. Matlin ◽  
...  
