joint probabilities
Recently Published Documents

TOTAL DOCUMENTS: 91 (five years: 10)
H-INDEX: 11 (five years: 0)

2021 ◽  
Author(s):  
James Magnuson ◽  
Samantha Grubb ◽  
Anne Marie Crinnion ◽  
Sahil Luthra ◽  
Phoebe Gaston

Norris and Cutler (in press) revisit their arguments that (lexical-to-sublexical) feedback cannot improve word recognition performance, based on the assumption that feedback must boost signal and noise equally. They also argue that demonstrations that feedback improves performance (Magnuson, Mirman, Luthra, Strauss, & Harris, 2018) in the TRACE model of spoken word recognition (McClelland & Elman, 1986) were artifacts of converting activations to response probabilities. We first evaluate their claim that feedback in an interactive activation model must boost noise and signal equally. This is not true in a fully interactive activation model such as TRACE, where the feedback signal does not simply mirror the feedforward signal; it is instead shaped by joint probabilities over lexical patterns and by the dynamics of lateral inhibition. Thus, even under high levels of noise, lexical feedback selectively boosts signal more than noise. We demonstrate that feedback promotes faster word recognition and preserves accuracy under noise whether one uses raw activations or response probabilities. We then document that lexical feedback selectively boosts signal (i.e., lexically coherent sequences of phonemes) more than noise by tracking sublexical (phoneme) activations under noise with and without feedback. Thus, feedback in a model like TRACE does improve word recognition, precisely by selective reinforcement of the lexically coherent signal. We conclude that whether lexical feedback is integral to human speech processing is an empirical question, and we briefly review a growing body of behavioral and neural evidence that is consistent with feedback and inconsistent with autonomous (non-feedback) architectures.
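To make the feedback argument concrete, here is a toy interactive-activation sketch (a minimal two-layer network, not TRACE itself): lexical feedback is routed only through word units that the input supports, so phonemes belonging to a real word receive more of the boost than noise-driven phonemes. All unit names, weights and parameter values are illustrative assumptions.

```python
# Toy interactive-activation sketch (not TRACE): phoneme and word layers, feedforward
# support, lateral inhibition between words, and top-down lexical feedback.
import numpy as np

rng = np.random.default_rng(0)

phonemes = ["k", "ae", "t", "d", "o", "g"]
lexicon = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"]}

# Word-to-phoneme connection matrix: 1 where the word contains the phoneme.
W = np.array([[1.0 if p in phones else 0.0 for p in phonemes]
              for phones in lexicon.values()])

# Noisy bottom-up input: the spoken word is "cat", with noise on every phoneme channel.
signal = np.array([0.6, 0.6, 0.6, 0.0, 0.0, 0.0])
bottom_up = signal + 0.3 * rng.random(len(phonemes))

def run(feedback_gain, steps=60, rate=0.1, inhibition=0.5):
    phon = np.zeros(len(phonemes))
    word = np.zeros(len(lexicon))
    for _ in range(steps):
        # Feedforward: each word pools its phonemes' activation; words compete
        # through lateral inhibition.
        word_input = (W @ phon) / W.sum(axis=1) - inhibition * (word.sum() - word)
        word = np.clip(word + rate * (word_input - word), 0.0, 1.0)
        # Feedback: phonemes are driven by the bottom-up input plus the activation
        # of the words that contain them, scaled by the feedback gain.
        phon_input = bottom_up + feedback_gain * (W.T @ word)
        phon = np.clip(phon + rate * (phon_input - phon), 0.0, 1.0)
    return phon, word

for gain in (0.0, 0.5):
    phon, word = run(gain)
    coherent = phon[:3].mean()   # phonemes of the intended word "cat"
    spurious = phon[3:].mean()   # phonemes supported only by noise
    print(f"feedback gain {gain}: coherent={coherent:.2f}, spurious={spurious:.2f}, "
          f"words={dict(zip(lexicon, np.round(word, 2)))}")
```

With the gain at zero, phoneme activations simply track the input; with feedback on, "cat" wins the lexical competition and selectively tops up only its own phonemes, so the gap between coherent and spurious activations widens rather than staying fixed.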


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 878
Author(s):  
C. T. J. Dodson ◽  
John Soldera ◽  
Jacob Scharcanski

Secure user access to devices and datasets is widely enabled by fingerprint or face recognition. Organising the necessarily large datasets of secure digital objects, whose content may consist of images, text, video or audio, requires efficient classification and feature retrieval, which in turn calls for multidimensional methods applicable to data represented by a family of probability distributions. Information geometry provides an appropriate context for such analysis, whether with maximum likelihood fitted distributions or with empirical frequency distributions. Its key contribution is a natural geometric measure structure on families of probability distributions, obtained by representing them as Riemannian manifolds. The distributions are then points lying in this manifold, so different features can be identified, dissimilarities computed, and neighbourhoods of objects near a given example object constructed. This can reveal clustering, and projections onto smaller eigen-subspaces can make comparisons easier to interpret. Geodesic distances provide a natural dissimilarity metric for data described by probability distributions. Exploiting this property, we propose a new face recognition method that scores dissimilarities between face images by multiplying geodesic distance approximations between 3-variate RGB Gaussians representative of the colour face images, and also by obtaining joint probabilities. The experimental results show that this new method achieves higher recognition rates than published state-of-the-art methods.
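To make the scoring step concrete, the sketch below fits a 3-variate Gaussian to the RGB pixels of each image and computes one commonly used approximation to the geodesic (Fisher-Rao) distance between the two Gaussians, combining a Mahalanobis-type term for the means with the closed-form distance between covariance matrices. This is an illustrative approximation with our own function names, not necessarily the approximation used in the paper.

```python
# Minimal sketch: fit a trivariate Gaussian to RGB pixels and score dissimilarity with
# an approximation to the Fisher-Rao geodesic distance (no exact closed form is known
# for general multivariate Gaussians).
import numpy as np
from scipy.linalg import eigvalsh

def fit_rgb_gaussian(image):
    """image: H x W x 3 array of RGB values. Returns mean (3,) and covariance (3, 3)."""
    pixels = image.reshape(-1, 3).astype(float)
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

def gaussian_geodesic_approx(mu1, cov1, mu2, cov2):
    """Combine a Mahalanobis-type term for the means with the fixed-mean Fisher-Rao
    distance between the covariance matrices."""
    cov_avg = 0.5 * (cov1 + cov2)
    d_mean2 = (mu1 - mu2) @ np.linalg.solve(cov_avg, mu1 - mu2)
    # Generalised eigenvalues of (cov2, cov1); the covariance part is
    # sqrt(1/2 * sum(log(lambda_i)^2)).
    lam = eigvalsh(cov2, cov1)
    d_cov2 = 0.5 * np.sum(np.log(lam) ** 2)
    return np.sqrt(d_mean2 + d_cov2)

# Hypothetical usage with two synthetic "face images".
rng = np.random.default_rng(1)
img_a = rng.normal(loc=[180, 140, 120], scale=20, size=(64, 64, 3))
img_b = rng.normal(loc=[170, 150, 125], scale=25, size=(64, 64, 3))
score = gaussian_geodesic_approx(*fit_rgb_gaussian(img_a), *fit_rgb_gaussian(img_b))
print(f"dissimilarity score: {score:.3f}")
```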


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 872
Author(s):  
Aldo F. G. Solis-Labastida ◽  
Melina Gastelum ◽  
Jorge G. Hirsch

Since the experimental observation of the violation of the Bell-CHSH inequalities, much has been said about the non-local and contextual character of the underlying system. However, the hypotheses from which Bell's inequalities are derived differ according to the probability space used to write them. The violation of Bell's inequalities can, alternatively, be explained by assuming that the hidden variables do not exist at all, that they exist but their values cannot be simultaneously assigned, that the values can be assigned but joint probabilities cannot be properly defined, or that averages taken in different contexts cannot be combined. All of the above are valid options, selected by different communities to provide support for their particular research programs.
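For reference, the CHSH inequality in its standard textbook form, with each correlator built from the joint probabilities of the two measurement outcomes (notation ours, not the paper's):

$$E(a,b) \;=\; \sum_{x,y\in\{+1,-1\}} x\,y\; P(x, y \mid a, b), \qquad \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2 .$$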


2020 ◽  
Vol 50 (12) ◽  
pp. 1921-1933
Author(s):  
Giacomo Mauro D’Ariano

It is almost universally believed that in quantum theory the two following statements hold: (1) all transformations are achieved by a unitary interaction followed by a von Neumann measurement; (2) all mixed states are marginals of pure entangled states. I name this doctrine the dogma of purification ontology. The source of the dogma is the original von Neumann axiomatisation of the theory, which relies largely on the Schrödinger equation as a postulate, which holds in a nonrelativistic context and whose operator version holds only in free quantum field theory, but no longer in the interacting theory. In the present paper I prove that both ontologies of unitarity and state-purity are unfalsifiable, even in principle, and therefore axiomatically spurious. I propose instead a minimal four-postulate axiomatisation: (1) associate a Hilbert space $\mathcal{H}_\mathrm{A}$ to each system $\mathrm{A}$; (2) compose two systems by the tensor product rule $\mathcal{H}_{\mathrm{AB}} = \mathcal{H}_\mathrm{A} \otimes \mathcal{H}_\mathrm{B}$; (3) associate a transformation from system $\mathrm{A}$ to $\mathrm{B}$ to a quantum operation, i.e. to a completely positive trace-non-increasing map between the trace-class operators of $\mathrm{A}$ and $\mathrm{B}$; (4) (Born rule) evaluate all joint probabilities through that of a special type of quantum operation: the state preparation. I then conclude that quantum paradoxes, such as Schrödinger's cat and, most relevantly, the information paradox, originate only from the dogma of purification ontology, and they are no longer paradoxes of the theory in the minimal formulation. For the same reason, most interpretations of the theory (e.g. many-worlds, relational, Darwinism, transactional, von Neumann–Wigner, time-symmetric, ...) interpret the same dogma, not the strict theory stripped of the spurious postulates.
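Postulate (3) can be illustrated numerically: a quantum operation is a linear map whose Choi matrix is positive semidefinite (complete positivity) and which never increases the trace. The sketch below checks both properties; the heralded amplitude-damping example and all function names are our own illustration, not taken from the paper.

```python
# Minimal numerical check that a map is a quantum operation: completely positive
# (via the Choi matrix) and trace-non-increasing (via its partial trace).
import numpy as np

def choi_matrix(channel, dim):
    """Choi matrix J = sum_{ij} |i><j| (x) channel(|i><j|); PSD iff the map is CP."""
    J = np.zeros((dim * dim, dim * dim), dtype=complex)
    for i in range(dim):
        for j in range(dim):
            E = np.zeros((dim, dim), dtype=complex)
            E[i, j] = 1.0
            J += np.kron(E, channel(E))
    return J

def is_quantum_operation(channel, dim, tol=1e-10):
    J = choi_matrix(channel, dim)
    cp = bool(np.all(np.linalg.eigvalsh(J) >= -tol))
    # Partial trace of J over the output factor equals (sum_k K_k^dag K_k)^T,
    # so trace-non-increasing <=> this matrix is <= identity.
    M = np.einsum('iaja->ij', J.reshape(dim, dim, dim, dim))
    tni = bool(np.all(np.linalg.eigvalsh(np.eye(dim) - M) >= -tol))
    return cp, tni

# Amplitude damping (gamma) kept only with heralding probability p < 1, so the map
# is trace-non-increasing rather than trace-preserving.
gamma, p = 0.3, 0.8
K0 = np.sqrt(p) * np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
K1 = np.sqrt(p) * np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
channel = lambda rho: K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
print(is_quantum_operation(channel, dim=2))   # expected: (True, True)
```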


Author(s):  
Hossein Estiri ◽  
Sebastien Vasey ◽  
Shawn N Murphy

Objective: Because of the complex set of processes involved in recording health information in Electronic Health Records (EHRs), the truthfulness of EHR diagnosis records is questionable. We present a computational approach to estimate the probability that a single diagnosis record in the EHR reflects the true disease. Materials and Methods: Using EHR data on 18 diseases from the Mass General Brigham (MGB) Biobank, we develop generative classifiers on a small set of disease-agnostic features from EHRs that aim to represent Patients, pRoviders, and their Interactions within the healthcare SysteM (PRISM features). Results: We demonstrate that PRISM features and the generative PRISM classifiers are effective for estimating disease probabilities and exhibit generalizable and transferable distributional characteristics across diseases and patient populations; the joint probabilities learned about diseases through the PRISM features via the PRISM generative models transfer and generalize to multiple diseases. Discussion: The Generative Transfer Learning (GTL) approach with PRISM classifiers enables scalable validation of computable phenotypes in EHRs without requiring domain-specific knowledge about particular disease processes. Conclusion: Probabilities computed from the generative PRISM classifier can enhance and accelerate applied machine learning research and discoveries with EHR data.
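As an illustration of the kind of generative modelling the abstract describes (not the actual PRISM implementation, features, or data), the sketch below fits a simple generative classifier to a few hypothetical disease-agnostic EHR features and returns, per record, an estimated probability that the recorded diagnosis reflects true disease.

```python
# Minimal sketch of a generative classifier over disease-agnostic EHR features:
# model the feature distributions per class and apply Bayes' rule. The feature names,
# the synthetic data, and the Gaussian Naive Bayes choice are illustrative assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
n = 2000

# Hypothetical PRISM-like features: distinct encounter days on which the code appears,
# distinct providers recording it, and total code count.
true_disease = rng.binomial(1, 0.4, size=n)
encounter_days = rng.poisson(lam=np.where(true_disease == 1, 6.0, 1.2))
distinct_providers = rng.poisson(lam=np.where(true_disease == 1, 3.0, 1.0))
code_count = rng.poisson(lam=np.where(true_disease == 1, 10.0, 1.5))
X = np.column_stack([encounter_days, distinct_providers, code_count])

# Fit the generative model on one "disease" and score held-out records; applying the
# fitted model to another disease's records is the analogue of the transfer step.
model = GaussianNB().fit(X[:1500], true_disease[:1500])
probs = model.predict_proba(X[1500:])[:, 1]
print("estimated P(true disease | record), first five held-out records:",
      np.round(probs[:5], 2))
```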


2020 ◽  
Author(s):  
James Daniell ◽  
Andreas Schaefer ◽  
Hugo Winter ◽  
Pierre Gehl ◽  
Phil Vardon ◽  
...  

Within the EU project NARSIS (New Approach to Reactor Safety ImprovementS), sites of decommissioned nuclear power plants (NPPs) were investigated for external hazards using a multi-hazard approach.

The starting point was a review of existing multi-hazard frameworks and their application to real-world locations. From this knowledge, after significant screening, external hazards were analysed at different site locations in Europe using stochastic event sets for earthquake, flood, lightning, tornado, tsunami, hail and other perils in order to identify key scenarios along the hazard curves. These were built from existing national and supranational stochastic event sets.

The joint probability at each site of certain threshold events occurring was calculated, and relevant risk scenarios were chosen based on these hazard thresholds. Most importantly, the concept of joint operational time windows was investigated. Because the overall hazard for individual events is generally low, the chance of two low-probability events coinciding is often screened out. However, during the damage and recovery window of one event (the operational time), the joint probability of a second event affecting the same infrastructure is much higher. Including cascading effects, aftershocks, secondary effects and associated event sequences provides new insight into the probabilities of multi-hazard events and the implications for multi-risk.

Historical events from the loss database CATDAT and other records in which joint operational time windows occurred are used as empirical examples of past joint occurrences and cascades, both European and international.

Joint probabilities for significant events at decommissioned NPPs are presented within the NARSIS project, together with their application to multi-risk within Probabilistic Safety Assessments (PSA). However, it is the application to other industrial facilities and infrastructure that shows the need to integrate multi-hazard (coinciding or cascading) events into operational management plans, as well as into building standards and use considerations.
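The joint operational time window argument can be illustrated with a small probabilistic sketch (illustrative rates and window length, not NARSIS values): two hazards that individually are rare, and practically never coincide instantaneously, have a noticeably higher probability of overlapping once the damage-and-recovery window of the first event is taken into account.

```python
# Minimal sketch: two hazards as independent Poisson processes; estimate the probability
# that hazard B strikes while the site is still inside the recovery window of a hazard-A
# event. Rates, window and horizon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

rate_a = 1 / 50      # hazard A: one event per 50 years on average
rate_b = 1 / 30      # hazard B: one event per 30 years on average
window = 2.0         # years of damage/recovery ("operational time") after a hazard-A event
horizon = 60.0       # exposure period of the site in years
n_sims = 50_000

def simulate_poisson_times(rate, horizon, rng):
    n = rng.poisson(rate * horizon)
    return np.sort(rng.uniform(0.0, horizon, size=n))

hits = 0
for _ in range(n_sims):
    times_a = simulate_poisson_times(rate_a, horizon, rng)
    times_b = simulate_poisson_times(rate_b, horizon, rng)
    # Does any hazard-B event fall inside the recovery window of any hazard-A event?
    if any(((times_b >= t) & (times_b <= t + window)).any() for t in times_a):
        hits += 1

p_joint_window = hits / n_sims
# Rough analytic check (ignores window overlap and horizon truncation):
# P ~= 1 - exp(-rate_a * horizon * (1 - exp(-rate_b * window)))
p_analytic = 1 - np.exp(-rate_a * horizon * (1 - np.exp(-rate_b * window)))
print(f"simulated: {p_joint_window:.3f}, approximate analytic: {p_analytic:.3f}")
```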


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Yi Chang ◽  
Yanshuo Mai ◽  
Linli Yi ◽  
Liuming Yu ◽  
Ying Chen ◽  
...  

Initial cracks, careless construction, and extreme load conditions mean that components with brittle behavior may exist in a structural system, and such brittle behavior is usually accompanied by low strength. Existing methods for calculating the reliability of structures with brittle components, however, are either rather complicated or intractable. By decomposing the entire system into a set of subsystems, this paper proposes a method to estimate bounds on the failure probability of a k-out-of-n system of components with potentially brittle behavior, using the universal generating function (UGF) and linear programming (LP). Based on the individual component state probabilities and the joint probabilities of the states of a small number of components, the proposed method provides bounds on the failure probability of a system with a large number of components. The accuracy and efficiency of the proposed method are investigated using numerical examples.
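For the UGF part of the method, a minimal sketch of the standard construction for independent binary components is given below (the paper's linear-programming bounds from partial joint-probability information are not reproduced): the coefficients of the composed generating function give the distribution of the number of surviving components, from which the k-out-of-n failure probability follows.

```python
# Minimal UGF sketch for a k-out-of-n system of independent binary components: the
# coefficient of z^m in the product of the component UGFs is P(exactly m survive).
import numpy as np

def kofn_failure_probability(survival_probs, k):
    """survival_probs: per-component survival probabilities; the system survives if
    at least k of the n components survive."""
    # Each component UGF is [P(fail), P(survive)] as coefficients of z^0, z^1.
    ugf = np.array([1.0])
    for p in survival_probs:
        ugf = np.convolve(ugf, [1.0 - p, p])   # UGF composition for independent components
    # ugf[m] = P(exactly m components survive); failure = fewer than k survivors.
    return float(ugf[:k].sum())

# Hypothetical example: 8 components, at least 5 must survive.
probs = [0.95, 0.9, 0.92, 0.88, 0.97, 0.9, 0.85, 0.93]
print(f"P(system failure) = {kofn_failure_probability(probs, k=5):.4e}")
```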


2019 ◽  
Vol 29 (7) ◽  
pp. 1950-1959
Author(s):  
Tsung-Shan Tsou ◽  
Wei-Cheng Hsiao

Recently, Perera et al. introduced two new binocular accuracy measures to evaluate diagnostic tests for paired organs, adopting a Gaussian copula model to account for correlation between fellow eyes. Because the measures are functions of several joint probabilities, and owing to the nature of the joint models, the variability of the estimates of the two new measures was assessed via bootstrapping. We provide a different approach to inference about these two measures. In our view, when patients are independent, binomial models suffice for inference about the parameters of interest, and inference becomes simple and straightforward. We perform numerical studies and analyse the data set of Perera et al. for illustration. We also investigate through simulations the robustness of the Gaussian copula and binomial models under model misspecification.
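A minimal sketch of the binomial argument (with invented counts and a hypothetical per-patient event, not the measures or data of Perera et al.): because patients are independent, any accuracy measure that is the probability of a per-patient event can be estimated, and given a confidence interval, directly from a binomial model, without modelling the between-eye correlation.

```python
# Minimal sketch: binomial-model inference for a per-patient event probability
# (here a hypothetical "both eyes correctly classified" event).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 250
# Hypothetical per-patient indicators of the event of interest.
events = rng.binomial(1, 0.82, size=n)

k = int(events.sum())
p_hat = k / n
# Wilson score interval from the binomial model.
ci = stats.binomtest(k, n).proportion_ci(confidence_level=0.95, method="wilson")
print(f"estimate = {p_hat:.3f}, 95% CI = ({ci.low:.3f}, {ci.high:.3f})")
```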


This paper presents the software implementation of SVPWM and hybrid PWM based direct torque control (DTC) of an induction motor drive for evaluating the power spectral density (PSD) and the total harmonic distortion (THD) of the line currents. The PWM algorithm uses three PWM techniques, conventional SVPWM, AZPWM3 and hybrid PWM, for the evaluation of the power spectra and harmonic spectra. In the power-spectra assessment the magnitudes of the power concentrated at specific frequencies are considered, and in the harmonic spectra the harmonic band magnitudes at the different switching frequencies. To verify the PWM algorithms, numerical simulation is performed using MATLAB/Simulink.

Telugu (తెలుగు) is a morphologically rich Dravidian language. Like other languages, it contains polysemous words that take different meanings in different contexts. Several language models exist to solve the word sense disambiguation problem for languages such as English, Chinese, Hindi and Kannada. The proposed method addresses word sense disambiguation using the n-gram technique, which has given good results in many other languages. The methodology finds the co-occurring words of the target polysemous word, which we call n-grams. A Telugu corpus is given as input to the training phase to estimate n-gram joint probabilities; using these joint probabilities, the target polysemous word is assigned the correct sense in the testing phase. We evaluate the proposed method on a set of polysemous Telugu nouns and verbs. The methodology achieves an F-measure of 0.94 when tested on a Telugu corpus collected from CIIL, various newspapers and story books. The method can give better results as the size of the training corpus increases, and in future we plan to evaluate it on all words, not only nouns and verbs.
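A minimal sketch of the n-gram joint-probability idea behind the disambiguation method (the toy English-language corpus, the add-one smoothing, and the fixed context window are our own assumptions, not the paper's Telugu resources or exact settings):

```python
# Minimal sketch of n-gram-based word sense disambiguation: estimate joint probabilities
# of (sense, context word) pairs from a sense-tagged training corpus, then assign the
# sense that maximises the joint probability of the observed context.
from collections import Counter

# Tiny sense-tagged training data: (sense label, context words around the ambiguous word).
training = [
    ("river_bank", ["water", "fish", "flow"]),
    ("river_bank", ["flood", "water", "boat"]),
    ("money_bank", ["loan", "account", "cash"]),
    ("money_bank", ["deposit", "account", "interest"]),
]

sense_counts = Counter(sense for sense, _ in training)
pair_counts = Counter((sense, w) for sense, words in training for w in words)
vocab = {w for _, words in training for w in words}

def score(sense, context):
    """Product of smoothed joint probabilities P(sense, word) over the context words."""
    total = sum(pair_counts.values()) + len(vocab) * len(sense_counts)
    p = 1.0
    for w in context:
        p *= (pair_counts[(sense, w)] + 1) / total   # add-one smoothing
    return p

def disambiguate(context):
    return max(sense_counts, key=lambda s: score(s, context))

print(disambiguate(["water", "boat"]))       # expected: river_bank
print(disambiguate(["loan", "interest"]))    # expected: money_bank
```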


2019 ◽  
Vol 29 (1) ◽  
pp. 282-292
Author(s):  
Tsung-Shan Tsou

We introduce a robust likelihood approach to inference about marginal distributional characteristics for paired data without modeling the correlation/joint probabilities. The method is reproducible in the sense that it applies to paired settings of various sizes. The virtue of the new strategy is illustrated by testing marginal homogeneity in the paired-triplet scenario. We use simulations and real data analysis to demonstrate the merit of our robust likelihood methodology.
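A related, simpler illustration of inference about marginals without a joint model (not the paper's robust likelihood method, and using ordinary pairs rather than the paper's triplets): for paired binary outcomes, the difference of the two marginal proportions can be tested with a variance computed from the per-pair differences, which is valid whatever the within-pair dependence.

```python
# Minimal sketch: test marginal homogeneity for paired binary data without modelling
# the joint probabilities, using the variance of the per-pair differences.
import numpy as np

rng = np.random.default_rng(11)
n = 300
# Hypothetical correlated paired binary outcomes (member 1 and member 2 of each pair).
latent = rng.normal(size=n)
y1 = (latent + rng.normal(scale=1.0, size=n) > 0.0).astype(float)
y2 = (latent + rng.normal(scale=1.0, size=n) > 0.3).astype(float)

diff = y1 - y2                      # per-pair difference of indicators
est = diff.mean()                   # difference of marginal proportions
se = diff.std(ddof=1) / np.sqrt(n)  # robust to within-pair correlation; no joint model needed
z = est / se
print(f"difference of marginals = {est:.3f}, robust z = {z:.2f}")
```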

