Improving Probability Judgment in Intelligence Analysis: From Structured Analysis to Statistical Aggregation

Risk Analysis, 2020, Vol 40 (5), pp. 1040-1057
Author(s): Christopher W. Karvetski, David R. Mandel, Daniel Irwin

As in other areas of expert judgment, intelligence analysis often requires judging the probability that hypotheses are true. Intelligence organizations promote the use of structured methods such as “Analysis of Competing Hypotheses” (ACH) to improve judgment accuracy and analytic rigor, but these methods have received little empirical testing. In this experiment, we pitted ACH against a factorized Bayes theorem (FBT) method, and we examined the value of recalibration (coherentization) and aggregation methods for improving the accuracy of probability judgment. Analytic techniques such as ACH and FBT were ineffective in improving accuracy and handling correlated evidence, and ACH in fact decreased the coherence of probability judgments. In contrast, statistical post-analytic methods (i.e., coherentization and aggregation) yielded large accuracy gains. A wide range of methods for instantiating these techniques was tested. The interactions among the factors considered suggest that prescriptive theorists and interventionists should examine the value of ensembles of judgment-support methods.
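For readers unfamiliar with the post-analytic steps named above, the Python sketch below illustrates one common way to instantiate them: coherentization as a Euclidean projection of an analyst's judged probabilities for mutually exclusive, exhaustive hypotheses onto the probability simplex, followed by aggregation as an unweighted average across analysts. The projection routine and the toy numbers are illustrative assumptions, not the specific recalibration and aggregation variants tested in the paper.

```python
import numpy as np

def project_to_simplex(p):
    """Euclidean projection of a judgment vector onto the probability simplex,
    making probabilities for mutually exclusive, exhaustive hypotheses coherent
    (non-negative and summing to one)."""
    p = np.asarray(p, dtype=float)
    u = np.sort(p)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(p)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(p + theta, 0.0)

# Three analysts judge the probabilities of the same three hypotheses (toy data).
judgments = np.array([
    [0.70, 0.40, 0.10],   # incoherent: sums to 1.20
    [0.50, 0.30, 0.10],   # incoherent: sums to 0.90
    [0.60, 0.30, 0.10],   # already coherent
])

coherentized = np.array([project_to_simplex(j) for j in judgments])
aggregated = coherentized.mean(axis=0)   # unweighted linear opinion pool
print(coherentized)
print(aggregated, aggregated.sum())      # the aggregate sums to 1 by construction
```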


2019
Author(s): Naruki Yoshikawa, Geoffrey Hutchison

Rapidly predicting an accurate three-dimensional geometry of a molecule is a crucial task in cheminformatics and a range of molecular modeling applications. A fast, accurate, and open implementation of structure prediction is necessary for reproducible cheminformatics research. We introduce fragment-based coordinate generation for Open Babel, a widely accepted open-source toolkit for cheminformatics. The new implementation significantly improves speed and stereochemical accuracy, while retaining or improving the accuracy of bond lengths, bond angles, and dihedral torsions. We first separate an input molecule into fragments by cutting at rotatable bonds. Coordinates of the fragments are then set according to a fragment library prepared from open crystallographic databases. Because the coordinates of multiple atoms are determined at once, coordinate prediction is faster than the previous rule-based implementation or the widely used distance geometry methods in RDKit. This new implementation will be beneficial for a wide range of applications, including computational property prediction in polymers, molecular materials, and drug design.
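As a usage illustration (not the paper's benchmark protocol), the following sketch generates a 3D geometry for a SMILES string with Open Babel's Python bindings; the molecule, force field, and step count are arbitrary choices made for this example.

```python
# Minimal sketch: 3D coordinate generation with Open Babel's Python bindings.
# Requires Open Babel 3.x with its `openbabel`/`pybel` Python package installed.
from openbabel import pybel

# Caffeine is used here purely as an example input; any valid SMILES works.
mol = pybel.readstring("smi", "Cn1cnc2c1c(=O)n(C)c(=O)n2C")

# make3D() builds an initial 3D geometry and runs a short force-field cleanup;
# the force field and number of steps below are illustrative defaults.
mol.make3D(forcefield="mmff94", steps=100)

print(mol.write("sdf"))  # emit the generated geometry as an SD file block
```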


2021
Vol 27 (1), pp. 62-73
Author(s): Alexandru Kis, Peter Tvaruška, Oliver Tarcala

The article examines particular aspects of the analytical process within the intelligence cycle, taking the framework of strategic intelligence as its reference. Starting from a proposed model for analysis of competing hypotheses that uses phase-tailored tools, intended to improve the quality of all-source intelligence analysis and its final products, we assess its applicability to HUMINT (Human Intelligence) analysis. The model of intelligence analysis as a problem-solving method, with a focus on predictive analysis, serves to clarify what is expected of a single-source collection discipline (in our case, HUMINT) in data gathering and reporting, and how those expectations connect to the roles of HUMINT analysts in the specialized branches.
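Because the article builds on Analysis of Competing Hypotheses (ACH), a minimal, generic sketch of the classic ACH scoring step may help: each hypothesis accumulates a penalty for evidence rated inconsistent with it, and the least-penalized hypothesis survives. The evidence items, weights, and ratings below are invented for illustration and are not taken from the article's proposed model.

```python
# Generic ACH scoring sketch. Ratings: -1 inconsistent, 0 neutral, +1 consistent.
hypotheses = ["H1: deliberate deception", "H2: routine activity", "H3: exercise"]
evidence = [
    # (description, weight, rating per hypothesis) -- all values invented
    ("Unusual signal traffic",    2, [+1, -1,  0]),
    ("Denied-area overflight",    1, [+1,  0, +1]),
    ("Public schedule unchanged", 1, [-1, +1, +1]),
]

def inconsistency_score(h_index):
    """Sum of weights of evidence items rated inconsistent with hypothesis h."""
    return sum(w for _, w, ratings in evidence if ratings[h_index] < 0)

scores = {h: inconsistency_score(i) for i, h in enumerate(hypotheses)}
for h, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{s:>2}  {h}")   # lower inconsistency score = more viable hypothesis
```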


2019
Vol 2019, pp. 1-12
Author(s): Hyun Jae Baek, Min Hye Chang, Jeong Heo, Kwang Suk Park

Brain-computer interfaces (BCIs) aim to enable people to interact with the external world through an alternative, nonmuscular communication channel that uses brain signal responses to complete specific cognitive tasks. BCI research has been growing rapidly during the past few years, with most of it focusing on system performance, such as improving accuracy or information transfer rate. Despite these advances, BCI research and development is still in its infancy and requires further consideration to significantly affect human experience in most real-world environments. This paper reviews the most recent studies and findings about ergonomic issues in BCIs. We review dry electrodes that can detect brain signals with high enough quality for use in BCIs and discuss their advantages, disadvantages, and performance. We also provide an overview of the wide range of recent efforts to create new interface designs that do not induce fatigue or discomfort during everyday, long-term use. The basic principles of each technique are described, along with examples of current applications in BCI research. Finally, we demonstrate a user-friendly interface paradigm that uses dry capacitive electrodes requiring no preparation procedure for EEG signal acquisition. We explore the capacitively measured steady-state visual evoked potential (SSVEP) response to an amplitude-modulated visual stimulus and the auditory steady-state response (ASSR) to an auditory stimulus modulated by familiar natural sounds to verify their suitability for BCI. We report the first results of an online demonstration that adopted this ergonomic approach to evaluating BCI applications. We expect BCI to become a routine clinical, assistive, and commercial tool through advanced EEG monitoring techniques and innovative interface designs.
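To make the SSVEP paradigm concrete, the sketch below detects which of several candidate flicker frequencies is present in a synthetic EEG trial using canonical correlation analysis against sine/cosine reference signals, a standard approach in the SSVEP-BCI literature. The sampling rate, frequencies, and signal model are assumptions for this illustration, not the paper's acquisition setup.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                          # sampling rate in Hz (assumed)
T = 2.0                           # trial length in seconds
t = np.arange(0, T, 1.0 / FS)
stim_freqs = [8.0, 10.0, 12.0]    # candidate flicker frequencies (assumed)

# Synthetic 4-channel EEG trial containing a 10 Hz SSVEP plus noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t)[:, None] * np.array([0.8, 1.0, 0.6, 0.9])
eeg = eeg + 0.5 * rng.standard_normal(eeg.shape)

def reference_signals(f, n_harmonics=2):
    """Sine/cosine references at f and its harmonics (samples x features)."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(refs)

def cca_score(trial, f):
    """First canonical correlation between the EEG trial and references for f."""
    u, v = CCA(n_components=1).fit_transform(trial, reference_signals(f))
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

scores = {f: cca_score(eeg, f) for f in stim_freqs}
print(scores)                                     # largest correlation at 10 Hz
print("detected target:", max(scores, key=scores.get))
```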


2019
Vol 33 (6), pp. 1080-1090
Author(s): Mandeep K. Dhami, Ian K. Belton, David R. Mandel

2020
Vol 14 (01), pp. 107-135
Author(s): Salah Rabba, Matthew Kyan, Lei Gao, Azhar Quddus, Ali Shahidi Zandi, ...

Improving the accuracy of multi-feature information for head-pose and gaze estimation remains an outstanding challenge. The proposed framework employs discriminative analysis for head-pose and gaze estimation using kernel discriminative multiple canonical correlation analysis (K-DMCCA). The feature extraction component of the framework includes spatial indexing and statistical and geometrical elements. Head-pose and gaze estimates are obtained by aggregating these features and transforming them into a higher-dimensional space using K-DMCCA for accurate estimation. The two main contributions are enhancing fusion performance through the use of kernel-based DMCCA and introducing an improved iris region descriptor based on quadtrees. The overall approach also includes statistical and geometrical indexing that is calibration-free (it requires no subsequent adjustment). We validate the robustness of the proposed framework across a wide variety of datasets, which cover different modalities (RGB and depth), constraints (a wide range of head poses, not only frontal), quality (accurately labelled for validation), occlusion (due to glasses, hair bangs, facial hair), and illumination. Our method achieved head-pose and gaze estimation accuracies of 4.8° on Cave, 4.6° on MPII, 5.1° on ACS, 5.9° on EYEDIAP, 4.3° on OSLO, and 4.6° on UULM datasets.
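As a simplified, two-view illustration of CCA-based feature fusion (the paper's K-DMCCA adds kernelization, multiple views, and class-discriminative terms that are not reproduced here), the sketch below projects two synthetic feature sets into a shared correlated space with scikit-learn's linear CCA and concatenates the projections for a downstream regressor.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_samples = 200
view_geom = rng.standard_normal((n_samples, 12))   # synthetic geometric features
view_iris = rng.standard_normal((n_samples, 8))    # synthetic iris-region descriptors

# Project both views into a shared 4-dimensional correlated subspace.
cca = CCA(n_components=4)
z_geom, z_iris = cca.fit_transform(view_geom, view_iris)

# A common fusion choice: concatenate the projected views, then feed the fused
# representation to a regressor that predicts head-pose and gaze angles.
fused = np.hstack([z_geom, z_iris])
print(fused.shape)   # (200, 8)
```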


2021
Author(s): Julian Hocker, Taryn Bipat, David W. McDonald, Mark Zachry

Qualitative science methods have largely been omitted from discussions of open science. Platforms focused on qualitative science that support open sharing of data and methods are rare. Sharing and exchanging coding schemas has great potential for supporting traceability in qualitative research as well as for facilitating the re-use of coding schemas. In this study, we describe and evaluate QualiCO, an ontology for qualitative coding schemas. QualiCO is designed to describe a wide range of qualitative coding schemas. Twenty qualitative researchers used QualiCO to complete two coding tasks. We present task performance and interview data that focus on participants' engagement with the ontology. Participants used QualiCO to complete the coding tasks, decreasing time on task while improving accuracy, indicating that QualiCO enabled the reuse of qualitative coding schemas. Our discussion elaborates on issues participants encountered and highlights how conceptual framing and prior practice shape their interpretation of how QualiCO can be used.
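To give a sense of what describing a coding schema in a structured, shareable form can look like, here is a hypothetical sketch using a plain JSON record; the field names are invented for this illustration and are not QualiCO's actual vocabulary.

```python
import json

# Hypothetical, simplified description of a qualitative coding schema; the
# field names below are illustrative and do NOT reflect QualiCO's ontology.
coding_schema = {
    "title": "Online collaboration breakdowns",
    "methodology": "grounded theory",
    "creator": "Example Research Group",
    "codes": [
        {"name": "coordination_failure",
         "definition": "Work stalls because responsibilities are unclear.",
         "example_quote": "Nobody knew who was supposed to merge the doc."},
        {"name": "repair_attempt",
         "definition": "A participant tries to restore shared understanding."},
    ],
}

print(json.dumps(coding_schema, indent=2))
```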


2018
Author(s): Jian-Qiao Zhu, Adam N Sanborn, Nick Chater

Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly and instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of “noise” in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities, and we show in a new experiment that this model better captures these judgments both qualitatively and quantitatively.
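To make the sampling idea concrete, the sketch below contrasts a naïve relative-frequency estimate computed from a small sample with an estimate regularized by a symmetric Beta prior, which is the kind of generic-prior correction the Bayesian sampler posits. The sample size, prior parameter, and event probability are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def naive_estimate(true_p, n_samples):
    """Relative frequency of the event within a small sample."""
    k = rng.binomial(n_samples, true_p)
    return k / n_samples

def bayesian_sampler_estimate(true_p, n_samples, beta=1.0):
    """Posterior-mean estimate under a symmetric Beta(beta, beta) generic prior:
    (k + beta) / (n + 2 * beta). Small samples are pulled toward 0.5, trading
    coherence for accuracy as described in the abstract."""
    k = rng.binomial(n_samples, true_p)
    return (k + beta) / (n_samples + 2 * beta)

true_p, n = 0.05, 10   # a rare event judged from only 10 samples (illustrative)
naive = np.array([naive_estimate(true_p, n) for _ in range(10_000)])
smoothed = np.array([bayesian_sampler_estimate(true_p, n) for _ in range(10_000)])

print("mean naive estimate:   ", naive.mean())     # often exactly 0 for rare events
print("mean smoothed estimate:", smoothed.mean())   # pulled toward 0.5, less extreme
print("share of naive estimates equal to 0:", (naive == 0.0).mean())
```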

