probabilistic version — Recently Published Documents

Total documents: 48 (five years: 11)
H-index: 6 (five years: 0)

Author(s):  
Edward Flemming

MaxEnt grammar is a probabilistic version of Harmonic Grammar in which the harmony scores of candidates are mapped onto probabilities. It has become the tool of choice for analyzing phonological phenomena involving probabilistic variation or gradient acceptability, but there is a competing proposal for making Harmonic Grammar probabilistic: Noisy Harmonic Grammar, in which variation is derived by adding random ‘noise’ to constraint weights. In this paper these grammar frameworks, and variants of them, are analyzed by reformulating them all in a format where noise is added to candidate harmonies, so that the differences between frameworks lie in the distribution of this noise. This analysis reveals a basic difference between the models: in MaxEnt the relative probabilities of two candidates depend only on the difference in their harmony scores, whereas in Noisy Harmonic Grammar they also depend on the differences in the constraint violations incurred by the two candidates. This difference leads to testable predictions which are evaluated against data on the variable realization of schwa in French (Smith & Pater 2020). The results support MaxEnt over Noisy Harmonic Grammar.
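The MaxEnt mapping from harmony scores to probabilities is the standard softmax, where a candidate's harmony is the negated weighted sum of its constraint violations. A minimal sketch (the constraint weights and violation profiles are invented for illustration, not from the paper):

```python
import math

def maxent_probs(harmonies):
    """Map candidate harmony scores to probabilities via softmax (MaxEnt)."""
    exps = [math.exp(h) for h in harmonies]
    z = sum(exps)
    return [e / z for e in exps]

# Harmony = -(weighted sum of constraint violations); toy weights/violations.
weights = [2.0, 1.0]                  # hypothetical constraint weights
violations = [[1, 0], [0, 1]]         # two candidates' violation profiles
harmonies = [-sum(w * v for w, v in zip(weights, vs)) for vs in violations]
probs = maxent_probs(harmonies)
```

Note that the ratio of the two probabilities equals exp(H1 − H2): exactly the property the abstract highlights, since the relative probability depends only on the harmony difference, not on which constraints produced it.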


2021 ◽  
Vol 22 (2) ◽  
pp. 435
Author(s):  
Ravindra K. Bisht ◽  
Vladimir Rakocević

A Meir-Keeler type fixed point theorem for a family of mappings is proved in Menger probabilistic metric spaces (Menger PM-spaces). We establish that completeness of the space is equivalent to the fixed point property for a larger class of mappings that includes both continuous and discontinuous mappings. In addition, a probabilistic fixed point theorem for (ϵ-δ)-type non-expansive mappings is established.
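For orientation, the classical metric-space form of the Meir-Keeler contraction condition that such theorems generalize reads as follows (the Menger PM-space version replaces the metric d with probabilistic distribution functions, which is what the paper develops):

```latex
\forall \varepsilon > 0 \;\; \exists \delta > 0 : \quad
\varepsilon \le d(x, y) < \varepsilon + \delta
\;\Longrightarrow\;
d(Tx, Ty) < \varepsilon .
```

This is strictly weaker than a Banach contraction, which is why Meir-Keeler theorems cover a larger class of mappings, including discontinuous ones away from the fixed point.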


2021 ◽  
Author(s):  
Yingqi Jing ◽  
Damián Ezequiel Blasi ◽  
Balthasar Bickel

A prominent principle in explaining a range of word order regularities is dependency locality, i.e. a principle that minimizes the linear distances (dependency lengths) between the head and its dependents. However, it remains unclear to what extent language users in fact observe locality when producing sentences under diverse conditions of cross-categorical harmony (such as the placement of verbal and nominal heads on the same vs. different sides of their dependents), dependency direction (head-final vs. head-initial) and parallel vs. hierarchical dependency structures (e.g. multiple adjectives dependent on the same head vs. nested genitive dependents). Using 45 dependency-annotated corpora of diverse languages, we find that after controlling for harmony and conditioning on dependency types, dependency length minimization (DLM) is inversely correlated with the overall presence of head-final dependencies. This anti-DLM effect in sentences with more head-final dependencies is specifically associated with an accumulation of dependents in parallel structures and with disharmonic orders in hierarchical structures. We propose a detailed interpretation of these results and tentatively suggest a role for a probabilistic principle that favors embedding head-initial (e.g. VO) structures inside equally head-initial and thereby length-minimizing structures (e.g. relative clauses after the head noun), while head-final (OV) structures have a less pronounced preference for harmony and DLM. This is in line with earlier findings in research on the Greenbergian word order universals and with a probabilistic version of what has more recently been suggested as the Final-Over-Final Condition.
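In corpus studies of this kind, dependency length is typically the linear distance between a word and its head, summed over the sentence. A minimal sketch over a CoNLL-style head array (the example sentence and indices are invented; real corpora would supply the head annotations):

```python
def total_dependency_length(heads):
    """Sum of |dependent - head| over all words.

    heads[i] is the 1-based index of the head of word i+1,
    with 0 marking the root (which contributes no dependency).
    """
    return sum(abs((i + 1) - h) for i, h in enumerate(heads) if h != 0)

# "She reads old books": 'reads' is the root; 'She' and 'books' depend on
# 'reads', 'old' depends on 'books' (1-based head indices, 0 = root).
heads = [2, 0, 4, 2]
length = total_dependency_length(heads)  # |1-2| + |3-4| + |4-2| = 4
```

DLM studies then compare this observed total against the lengths of alternative linearizations of the same dependency tree.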


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1409
Author(s):  
Marija Boričić Joksimović

We give some simple examples of applying some of the well-known elementary probability theory inequalities and properties in the field of logical argumentation. A probabilistic version of the hypothetical syllogism inference rule is as follows: if propositions A, B, C, A→B, and B→C have probabilities a, b, c, r, and s, respectively, then for the probability p of A→C, we have f(a,b,c,r,s)≤p≤g(a,b,c,r,s), for some functions f and g of the given parameters. In this paper, after a short overview of known rules related to conjunction and disjunction, we propose some probabilized forms of the hypothetical syllogism inference rule, with the best possible bounds for the probability of the conclusion, covering simultaneously the probabilistic versions of both modus ponens and modus tollens rules, as already considered by Suppes, Hailperin, and Wagner.
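Reading A→B as the material conditional ¬A∨B, resolution gives (A→B)∧(B→C) ⊨ A→C, so p ≥ r + s − 1 is one simple lower bound of the kind the abstract describes (the paper's best-possible bounds also involve a, b, and c). A randomized sanity check of this bound over random joint distributions on A, B, C, offered as a sketch rather than a proof:

```python
import random

def check_lower_bound(trials=1000, seed=0):
    """Empirically verify P(A->C) >= P(A->B) + P(B->C) - 1 under
    material-conditional readings, for random joint distributions."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Random joint distribution over the 8 truth assignments of (A, B, C).
        w = [rng.random() for _ in range(8)]
        z = sum(w)
        p = [x / z for x in w]

        def prob(event):  # event: predicate on (A, B, C)
            return sum(p[i] for i in range(8)
                       if event(bool(i & 4), bool(i & 2), bool(i & 1)))

        r = prob(lambda a, b, c: (not a) or b)   # P(A -> B)
        s = prob(lambda a, b, c: (not b) or c)   # P(B -> C)
        t = prob(lambda a, b, c: (not a) or c)   # P(A -> C)
        if t < r + s - 1 - 1e-12:
            return False
    return True

ok = check_lower_bound()
```

The bound is the hypothetical-syllogism analogue of the familiar Fréchet inequality P(X∧Y) ≥ P(X) + P(Y) − 1.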


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Armin W Thomas ◽  
Felix Molter ◽  
Ian Krajbich

How do we choose when confronted with many alternatives? There is surprisingly little decision modelling work with large choice sets, despite their prevalence in everyday life. Even further, there is an apparent disconnect between research in small choice sets, supporting a process of gaze-driven evidence accumulation, and research in larger choice sets, arguing for models of optimal choice, satisficing, and hybrids of the two. Here, we bridge this divide by developing and comparing different versions of these models in a many-alternative value-based choice experiment with 9, 16, 25, or 36 alternatives. We find that human choices are best explained by models incorporating an active effect of gaze on subjective value. A gaze-driven, probabilistic version of satisficing generally provides slightly better fits to choices and response times, while the gaze-driven evidence accumulation and comparison model provides the best overall account of the data when also considering the empirical relation between gaze allocation and choice.
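The "active effect of gaze on subjective value" in this model family is commonly implemented by discounting an item's value while it is not fixated. A minimal sketch of that mechanism (the discount parameter theta and the values are invented for illustration, not the authors' fitted model):

```python
def gaze_weighted_value(value, gaze_share, theta=0.3):
    """Average subjective value of an item: full value while fixated,
    discounted by theta while not fixated.

    gaze_share: fraction of trial time spent looking at the item.
    theta: hypothetical discount factor in [0, 1] for unattended time.
    """
    return gaze_share * value + (1 - gaze_share) * theta * value

# Two equally valued items; the one looked at more gains an advantage.
v_attended = gaze_weighted_value(10.0, gaze_share=0.7)
v_ignored = gaze_weighted_value(10.0, gaze_share=0.1)
```

In a gaze-driven satisficing variant, such gaze-weighted values would then feed a stochastic stopping rule over the items inspected so far.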


2020 ◽  
Vol 30 (2) ◽  
pp. 205-243
Author(s):  
Cornelis Middelburg

We first present a probabilistic version of ACP that rests on the principle that probabilistic choices are always resolved before the choices involved in alternative composition and parallel composition are resolved. We then extend this probabilistic version of ACP with a form of interleaving in which parallel processes are interleaved according to what is known as a process-scheduling policy in the field of operating systems. We use the term strategic interleaving for this more constrained form of interleaving. The extension covers probabilistic process-scheduling policies.
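The idea of a scheduling policy constraining interleaving order can be illustrated with generator-based "processes" and a round-robin policy. This is a toy operational sketch, not ACP's algebraic semantics; the process representation and policy are invented for illustration:

```python
from collections import deque

def round_robin(processes):
    """Interleave steps of several processes under a round-robin
    scheduling policy; each process is an iterator of atomic actions."""
    queue = deque(enumerate(processes))
    trace = []
    while queue:
        pid, proc = queue.popleft()
        try:
            trace.append((pid, next(proc)))
            queue.append((pid, proc))   # re-schedule after one step
        except StopIteration:
            pass                        # process finished
    return trace

trace = round_robin([iter("ab"), iter("xy")])
```

A probabilistic scheduling policy, as covered by the paper's extension, would instead draw the next process from a distribution rather than cycling deterministically.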


2020 ◽  
Author(s):  
Fabrizio Riguzzi ◽  
Elena Bellodi ◽  
Riccardo Zese ◽  
Marco Alberti ◽  
Evelina Lamma

Probabilistic logical models deal effectively with the uncertain relations and entities typical of many real-world domains. In the field of probabilistic logic programming, the aim is usually to learn these kinds of models to predict specific atoms or predicates of the domain, called target atoms/predicates. However, it might also be useful to learn classifiers for interpretations as a whole: to this end, we consider the models produced by the inductive constraint logic system, represented by sets of integrity constraints, and we propose a probabilistic version of them. Each integrity constraint is annotated with a probability, and the resulting probabilistic logical constraint model assigns a probability of being positive to interpretations. To learn both the structure and the parameters of such probabilistic models we propose the system PASCAL for "probabilistic inductive constraint logic". Parameter learning can be performed using gradient descent or L-BFGS. PASCAL has been tested on 11 datasets and compared with a few statistical relational systems and a system that builds relational decision trees (TILDE): we demonstrate that this system achieves better or comparable results in terms of area under the precision-recall and receiver operating characteristic curves, in a comparable execution time.
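One natural way probability-annotated constraints can score an interpretation, sketched here under the assumption (not taken from the paper) that each violated grounding of constraint i independently rules the interpretation negative with probability p_i, so that P(positive) = ∏ᵢ (1 − pᵢ)^{mᵢ} with mᵢ the number of violated groundings:

```python
def prob_positive(constraints, interpretation):
    """constraints: list of (p_i, count_violations) pairs, where
    count_violations(interpretation) -> number of violated groundings.
    Each violated grounding independently votes 'negative' with prob p_i."""
    prob = 1.0
    for p_i, count_violations in constraints:
        prob *= (1.0 - p_i) ** count_violations(interpretation)
    return prob

# Toy constraint: "no one is their own parent", violated once below.
constraints = [(0.8, lambda interp: sum(1 for (a, b) in interp if a == b))]
interp = {("alice", "alice"), ("alice", "bob")}
p_pos = prob_positive(constraints, interp)  # (1 - 0.8)**1 = 0.2
```

Under this reading, parameter learning (gradient descent or L-BFGS, as in PASCAL) would fit the pᵢ to maximize the likelihood of labeled positive and negative interpretations.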


Episteme ◽  
2020 ◽  
pp. 1-23
Author(s):  
Kurtis Hagen

In an article based on a recent address to the Royal Institute of Philosophy, Keith Harris has argued that there is something epistemically wrong with conspiracy theorizing. Although he finds "standard criticisms" of conspiracy theories wanting, he argues that there are three subtle but significant problems with conspiracy theorizing: (1) It relies on an invalid probabilistic version of modus tollens. (2) It involves a problematic combination of both epistemic virtues and vices. And (3) it lacks an adequate basis for trust in its information sources. In response to Harris, this article argues that, like previous criticisms, these criticisms do little to undermine conspiracy theorizing as such. And they do not give us good reasons to dismiss any particular conspiracy theory without consideration of the relevant evidence.


2020 ◽  
Author(s):  
Armin Thomas ◽  
Felix Molter ◽  
Ian Krajbich

How do we choose when confronted with many alternatives? There is surprisingly little decision modeling work with large choice sets, despite their prevalence in everyday life. Even further, there is an apparent disconnect between research in small choice sets, supporting a process of gaze-driven evidence accumulation, and research in larger choice sets, arguing for models of optimal choice, satisficing, and hybrids of the two. Here, we bridge this divide by developing and comparing different versions of these models in a many-alternative value-based choice experiment with 9, 16, 25, or 36 alternatives. We find that human choices are best explained by models incorporating an active effect of gaze on subjective value. A gaze-driven, probabilistic version of satisficing generally outperforms the other models, though gaze-driven evidence accumulation and comparison performs comparably well with 9 alternatives and is overall most accurate in capturing the relation between gaze allocation and choice.


2020 ◽  
Vol 18 (01) ◽  
pp. 2050004
Author(s):  
Gábor Balogh ◽  
Stephan H. Bernhart ◽  
Peter F. Stadler ◽  
Jana Schor

The number of genes belonging to a multi-gene family usually varies substantially over their evolutionary history as a consequence of gene duplications and losses. A first step toward analyzing these histories in detail is the inference of the changes in copy number that take place along the individual edges of the underlying phylogenetic tree. The corresponding maximum parsimony approach minimizes the total number of changes along the edges of the species tree. Incorrectly determined numbers of family members, however, may influence the estimates drastically. We therefore augment the analysis by introducing a probabilistic model that also considers suboptimal assignments of changes. Technically, this amounts to a partition function variant of Sankoff's parsimony algorithm. As a showcase application, we reanalyze the gain and loss patterns of metazoan microRNA families. As expected, the differences between the probabilistic and the parsimony methods are moderate: in the limit of [Formula: see text], i.e. very little tolerance for deviations from parsimony, the total number of reconstructed changes is the same. However, we find that the partition function approach systematically predicts fewer gains and more loss events, showing that the data admit co-optimal solutions among which the parsimony approach selects biased representatives.
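The core move in a partition function variant of Sankoff's algorithm is replacing the hard minimum over candidate costs with a soft minimum, which weights in suboptimal assignments and recovers plain parsimony as the inverse-temperature parameter grows. A minimal sketch of that soft-min step (β and the costs are invented; the full algorithm would apply this at every tree node):

```python
import math

def soft_min(costs, beta):
    """Soft minimum -(1/beta) * log(sum(exp(-beta * c))): approaches
    min(costs) as beta grows, but credits suboptimal costs otherwise."""
    m = min(costs)  # subtract the minimum for numerical stability
    return m - (1.0 / beta) * math.log(
        sum(math.exp(-beta * (c - m)) for c in costs))

costs = [1.0, 2.0, 5.0]          # hypothetical change counts for child states
hard = min(costs)                # Sankoff parsimony: pick the cheapest
soft_hi_beta = soft_min(costs, beta=50.0)   # ~ the parsimony limit
soft_lo_beta = soft_min(costs, beta=0.5)    # tolerates suboptimal changes
```

Running the same recursion at low β yields the co-optimal and near-optimal reconstructions over which gain and loss probabilities can be averaged, which is how the biased selection of single parsimony representatives becomes visible.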

