statistical structure
Recently Published Documents

TOTAL DOCUMENTS: 240 (five years: 46)
H-INDEX: 32 (five years: 3)

Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 6
Author(s):  
Diederik Aerts ◽  
Lester Beltran

In previous research, we showed that ‘texts that tell a story’ exhibit a statistical structure that is not Maxwell–Boltzmann but Bose–Einstein. Our explanation is that this is due to the presence of ‘indistinguishability’ in human language: the same words in different parts of the story are indistinguishable from one another, much as ‘indistinguishability’ in quantum mechanics leads there, too, to Bose–Einstein rather than Maxwell–Boltzmann statistics. In the current article, we set out to explain these Bose–Einstein statistics in human language. We show that it is the presence of ‘meaning’ in ‘texts that tell a story’ that gives rise to the lack of independence characteristic of Bose–Einstein statistics, and we provide conclusive evidence that ‘words can be considered the quanta of human language’, structurally similar to how ‘photons are the quanta of electromagnetic radiation’. Drawing on several studies on entanglement from our Brussels research group, and introducing the von Neumann entropy for human language, we also show that it is likewise the presence of ‘meaning’ in texts that makes the entropy of a total text smaller than the entropy of the words composing it. We explain how these new insights fit within the research domain called ‘quantum cognition’, where quantum probability models and quantum vector spaces are used to model human cognition; how they are relevant to the use of quantum structures in information retrieval and natural language processing; and how they introduce ‘quantization’ and ‘Bose–Einstein statistics’ as relevant quantum effects there. Inspired by the conceptuality interpretation of quantum mechanics, and relying on these new insights, we put forward hypotheses about the nature of physical reality. In doing so, we note how this new type of decrease in entropy, and its explanation, may be important for the development of quantum thermodynamics.
We likewise note how it can give rise to an original explanatory picture of the nature of physical reality on the surface of planet Earth, in which human culture emerges as a reinforcing continuation of life.
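As a toy illustration of the contrast the abstract draws (not the authors' actual method or data), one can compare how well a Maxwell–Boltzmann occupation function and a Bose–Einstein occupation function fit a Zipf-like ranked word-count profile; the counts below are invented, and treating word rank as an "energy level" is an assumption:

```python
import numpy as np

# Hypothetical ranked word counts from a short story (invented numbers
# with a Zipf-like 1/rank decay, which storytelling texts typically show).
counts = np.array([120., 60., 40., 30., 24., 20., 17., 15., 13., 12.])
energies = np.arange(len(counts))  # rank i plays the role of energy E_i

def maxwell_boltzmann(E, A, B):
    # MB occupation: N(E) = A * exp(-E / B)
    return A * np.exp(-E / B)

def bose_einstein(E, B, mu):
    # BE occupation: N(E) = 1 / (exp((E - mu) / B) - 1), with mu < 0
    return 1.0 / (np.exp((E - mu) / B) - 1.0)

def sse(pred):
    # Sum of squared errors against the observed counts.
    return float(np.sum((pred - counts) ** 2))

# Crude grid search keeps the sketch dependency-free (no scipy).
best_mb = min((sse(maxwell_boltzmann(energies, A, B)), A, B)
              for A in np.linspace(20, 300, 80)
              for B in np.linspace(0.5, 20, 80))
best_be = min((sse(bose_einstein(energies, B, mu)), B, mu)
              for B in np.linspace(1, 300, 120)
              for mu in np.linspace(-3, -0.05, 60))

print("Maxwell-Boltzmann best SSE:", best_mb[0])
print("Bose-Einstein best SSE:", best_be[0])
```

For a power-law-like count profile, the Bose–Einstein form (which behaves like B/(E − μ) for small arguments) fits far better than the exponential Maxwell–Boltzmann form, which is the qualitative signature the abstract describes.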


2021 ◽  
Author(s):  
Amadeus Maes ◽  
Mauricio Barahona ◽  
Claudia Clopath

The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions; in other words, the network spends more time in states that encode high-probability stimuli. Existing spiking network models that implement sampling lack the ability to learn the statistical structure from observed stimuli and instead often hard-code the dynamics. Here, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
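The computational core named in the abstract, sampling by inverting a cumulative distribution function, can be sketched in a few lines (inverse-transform sampling over an invented discrete stimulus distribution; this illustrates the principle, not the spiking implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete stimulus distribution the network should learn.
probs = np.array([0.1, 0.2, 0.4, 0.3])
cdf = np.cumsum(probs)  # cumulative distribution function

def sample_inverse_cdf(n):
    # Inverse-transform sampling: push uniform noise through the
    # inverse of the CDF; each uniform draw lands in the bin whose
    # CDF interval contains it.
    u = rng.random(n)
    return np.searchsorted(cdf, u)

samples = sample_inverse_cdf(100_000)
empirical = np.bincount(samples, minlength=4) / len(samples)
print(empirical)  # approaches [0.1, 0.2, 0.4, 0.3]
```

The empirical state-occupancy frequencies converge to the target distribution, which is exactly the "more time in high-probability states" property the sampling theory requires.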


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Stefan Landmann ◽  
Caroline M Holmes ◽  
Mikhail Tikhonov

Bacteria live in environments that are continuously fluctuating and changing. Exploiting any predictability of such fluctuations can lead to increased fitness. On longer timescales, bacteria can ‘learn’ the structure of these fluctuations through evolution. However, on shorter timescales, inferring the statistics of the environment and acting upon this information would need to be accomplished by physiological mechanisms. Here, we use a model of metabolism to show that a simple generalization of a common regulatory motif (end-product inhibition) is sufficient both for learning continuous-valued features of the statistical structure of the environment and for translating this information into predictive behavior; moreover, it accomplishes these tasks near-optimally. We discuss plausible genetic circuits that could instantiate the mechanism we describe, including one similar to the architecture of two-component signaling, and argue that the key ingredients required for such predictive behavior are readily accessible to bacteria.
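A heavily simplified caricature of how a slow regulatory variable can "learn" an environmental statistic (this is a generic first-order estimator, assumed for illustration, not the paper's metabolic model): a quantity that relaxes slowly toward the current nutrient influx ends up encoding a running estimate of the influx's mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Caricature: a slow internal variable (think enzyme level under
# end-product inhibition) relaxes toward the current nutrient influx,
# and so comes to encode a running estimate of the environment's mean.
true_mean = 5.0
influx = true_mean + rng.normal(0.0, 1.0, size=20_000)  # fluctuating supply

tau = 200.0     # slow regulatory timescale (assumed)
estimate = 0.0
for x in influx:
    estimate += (x - estimate) / tau  # first-order relaxation

print(estimate)  # settles near the environment's mean influx, ~5.0
```

The timescale separation (tau much longer than the fluctuations) is what turns a simple feedback variable into a statistic of the environment rather than a copy of its instantaneous state.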


2021 ◽  
Author(s):  
Miriam K. Forbes

Research on the patterns of covariation among mental disorders has proliferated, as summarized in the Hierarchical Taxonomy of Psychopathology (HiTOP). This brief letter sought to examine whether symptom overlap represents an important source of bias in the statistical structure of psychopathology. I found that 358 pairs of the DSM-5 diagnoses covered by the HiTOP framework had one or more overlapping symptoms, and that a third (n = 130; 34%) of the unique constituent symptoms reinforce the higher-order structure of HiTOP through repetition within dimensions and/or between dimensions in the same superspectrum. By contrast, 86% of the possible pairs of diagnoses did not have any shared symptoms, and the majority of the unique constituent symptoms (n = 222; 58%) do not influence the structure through repetition; a fifth (n = 71; 19%) work against the HiTOP structure at the subfactor, spectrum, and superspectrum level. I conclude that symptom-level homogeneity likely inflates the similarity, and consequent covariation, of some DSM diagnoses (e.g., in the Antisocial Behavior dimension), and that research on the statistical structure of psychopathology should account for this potential source of bias. However, the patterns of symptom overlap in the DSM are not strong enough to make the HiTOP structure a foregone conclusion.
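The core counting exercise behind such an analysis can be sketched by representing each diagnosis as a set of constituent symptoms and counting pairs that share at least one; the diagnoses and symptoms below are invented placeholders, not DSM-5 content:

```python
from itertools import combinations

# Toy sketch of the overlap analysis: each diagnosis is a set of symptoms,
# and we count diagnosis pairs sharing at least one symptom.
# All names here are invented placeholders, not DSM-5 content.
diagnoses = {
    "Dx_A": {"low mood", "insomnia", "fatigue"},
    "Dx_B": {"worry", "insomnia", "restlessness"},
    "Dx_C": {"impulsivity", "aggression"},
}

overlapping_pairs = [
    (a, b) for a, b in combinations(diagnoses, 2)
    if diagnoses[a] & diagnoses[b]  # non-empty set intersection
]
print(overlapping_pairs)  # [('Dx_A', 'Dx_B')] - they share 'insomnia'
```

Scaled up to all diagnoses covered by HiTOP, the same intersection logic yields the pair counts and symptom-repetition tallies the letter reports.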


Symmetry ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1459
Author(s):  
Tong Wu ◽  
Yong Wang

In this paper, we classify three-dimensional Lorentzian Lie groups on which Ricci tensors associated with Bott connections, canonical connections and Kobayashi–Nomizu connections are Codazzi tensors associated with these connections. We also classify three-dimensional Lorentzian Lie groups with the quasi-statistical structure associated with Bott connections, canonical connections and Kobayashi–Nomizu connections.
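For orientation, the standard defining conditions can be stated as follows (a reference sketch in general notation, not necessarily the paper's conventions; in the quasi-statistical case the torsion-free requirement on the connection is relaxed):

```latex
% A (0,2)-tensor field T is a Codazzi tensor for a connection \nabla if
(\nabla_X T)(Y,Z) = (\nabla_Y T)(X,Z) \qquad \text{for all vector fields } X, Y, Z.
% A pair (g, \nabla) with \nabla torsion-free is a statistical structure
% if the metric g itself satisfies this Codazzi condition:
(\nabla_X g)(Y,Z) = (\nabla_Y g)(X,Z).
```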


Author(s):  
Avinash J. Dalal ◽  
Amanda Lohss ◽  
Daniel Parry

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Dylan Festa ◽  
Amir Aschner ◽  
Aida Davila ◽  
Adam Kohn ◽  
Ruben Coen-Cagli

Neuronal activity in sensory cortex fluctuates over time and across repetitions of the same input. This variability is often considered detrimental to neural coding. The theory of neural sampling proposes instead that variability encodes the uncertainty of perceptual inferences. In primary visual cortex (V1), modulation of variability by sensory and non-sensory factors supports this view. However, it is unknown whether V1 variability reflects the statistical structure of visual inputs, as would be required for inferences correctly tuned to the statistics of the natural environment. Here we combine analysis of image statistics and recordings in macaque V1 to show that probabilistic inference tuned to natural image statistics explains the widely observed dependence between spike count variance and mean, and the modulation of V1 activity and variability by spatial context in images. Our results show that the properties of a basic aspect of cortical responses (their variability) can be explained by a probabilistic representation tuned to naturalistic inputs.
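The variance-mean dependence mentioned above is often captured by a doubly stochastic (modulated) Poisson picture: counts are Poisson given a rate, but the rate itself fluctuates across trials, producing super-Poisson variance. A minimal simulation of that generic picture (parameter values assumed, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Doubly stochastic Poisson sketch: spike counts are Poisson given a
# rate, but the rate carries a multiplicative, unit-mean gamma gain
# that fluctuates trial to trial.
mean_rate = 10.0
gain_var = 0.2      # variance of the unit-mean gain (assumed)
n_trials = 200_000

gains = rng.gamma(shape=1.0 / gain_var, scale=gain_var, size=n_trials)
counts = rng.poisson(mean_rate * gains)

print(counts.mean())  # ~ mean_rate
print(counts.var())   # ~ mean_rate + gain_var * mean_rate**2, i.e. > mean
```

A pure Poisson process would give variance equal to the mean; the rate fluctuations add a term growing with the squared mean, which is the kind of variance-mean relationship the abstract ties to probabilistic inference.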


Author(s):  
Been Kim ◽  
Emily Reif ◽  
Martin Wattenberg ◽  
Samy Bengio ◽  
Michael C. Mozer

The Gestalt laws of perceptual organization, which describe how visual elements in an image are grouped and interpreted, have traditionally been thought of as innate. Given past research showing that these laws have ecological validity, we investigate whether deep learning methods infer Gestalt laws from the statistics of natural scenes. We examine the law of closure, which asserts that human visual perception tends to “close the gap” by assembling elements that can jointly be interpreted as a complete figure or object. We demonstrate that a state-of-the-art convolutional neural network, trained to classify natural images, exhibits closure on synthetic displays of edge fragments, as assessed by similarity of internal representations. This finding provides further support for the hypothesis that the human perceptual system is even more elegant than the Gestaltists imagined: a single law, adaptation to the statistical structure of the environment, might suffice as fundamental.
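The "similarity of internal representations" test can be sketched in the abstract: if a network exhibits closure, its representation of a fragmented outline should lie closer to that of the complete figure than to that of scrambled fragments. The activation vectors below are invented stand-ins, not real network features:

```python
import numpy as np

# Representational-similarity sketch: compare a fragmented display's
# internal representation to those of a complete figure and of
# scrambled fragments. Vectors are invented stand-ins.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

complete   = np.array([0.9, 0.1, 0.8, 0.3])
fragmented = np.array([0.8, 0.2, 0.7, 0.4])  # same figure, gaps in edges
scrambled  = np.array([0.1, 0.9, 0.2, 0.8])  # same fragments, no figure

closure_index = cosine(fragmented, complete) - cosine(fragmented, scrambled)
print(closure_index > 0)  # positive index = closure-like grouping
```

A positive index under this comparison is the operational signature of closure the abstract describes.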


2021 ◽  
Vol 41 (14) ◽  
pp. 3234-3253
Author(s):  
Seong-Hah Cho ◽  
Trinity Crapse ◽  
Piercesare Grimaldi ◽  
Hakwan Lau ◽  
Michele A. Basso


Author(s):  
Bettoni Roberta ◽  
Valentina Riva ◽  
Chiara Cantiani ◽  
Elena Maria Riboldi ◽  
Massimo Molteni ◽  
...  

Statistical learning (SL) refers to the ability to extract the statistical relations embedded in a sequence, and it plays a crucial role in the development of the communicative and social skills that are impacted in Autism Spectrum Disorder (ASD). Here, we investigated the relationship between infants’ SL ability and autistic traits in their parents. Using a visual habituation task, we tested infant offspring of non-diagnosed adults who show high (HAT infants) versus low (LAT infants) autistic traits. Results demonstrated that LAT infants learned the statistical structure embedded in a visual sequence, while HAT infants failed to do so. Moreover, infants’ SL ability was related to autistic traits in their parents, further suggesting that early dysfunctions in SL might contribute to variability in ASD symptoms.
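The statistic an infant could extract from such a sequence is typically the transition probability between adjacent items: high within a repeating triplet, low across triplet boundaries. A sketch with an invented symbol sequence (not the study's actual stimuli):

```python
from collections import Counter

# Transition probabilities of adjacent items in a structured sequence.
# The sequence below is invented: a repeating triplet "ABC" with one
# boundary item "X", not the stimuli used in the study.
sequence = list("ABCABCXABCABC")
pairs = Counter(zip(sequence, sequence[1:]))   # adjacent-item bigrams
firsts = Counter(sequence[:-1])                # occurrences as first item

tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}
print(tp[("A", "B")])  # within-triplet transition: 1.0
print(tp[("C", "X")])  # across-boundary transition: ~0.33
```

Learning this contrast between within-unit and between-unit transition probabilities is what the habituation task probes.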

