statistical regularity
Recently Published Documents


TOTAL DOCUMENTS: 47 (FIVE YEARS: 12)

H-INDEX: 6 (FIVE YEARS: 1)

2021 ◽  
Vol 15 ◽  
Author(s):  
Olivier Penacchio ◽  
Sarah M. Haigh ◽  
Xortia Ross ◽  
Rebecca Ferguson ◽  
Arnold J. Wilkins

Visual discomfort is related to the statistical regularity of visual images. The contribution of luminance contrast to visual discomfort is well understood and can be framed in terms of a theory of efficient coding of natural stimuli, and linked to metabolic demand. While color is important in our interaction with nature, the effect of color on visual discomfort has received less attention. In this study, we build on the established association between visual discomfort and differences in chromaticity across space. We average the local differences in chromaticity in an image and show that this average is a good predictor of visual discomfort from the image. It accounts for part of the variance left unexplained by variations in luminance. We show that the local chromaticity difference in uncomfortable stimuli is high compared to that typical in natural scenes, except in particular infrequent conditions such as the arrangement of colorful fruits against foliage. Overall, our study discloses a new link between visual ecology and discomfort whereby discomfort arises when adaptive perceptual mechanisms are overstimulated by specific classes of stimuli rarely found in nature.
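The sketch below illustrates the kind of measure the abstract describes: averaging local differences in chromaticity across an image. It is not the authors' exact metric; the opponent-channel approximation of chromaticity and the adjacent-pixel differencing are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' exact metric): average local
# chromaticity difference of an image, using a crude red-green / blue-yellow
# opponent representation as a stand-in for calibrated chromaticity.
import numpy as np

def mean_local_chromaticity_difference(rgb):
    """rgb: H x W x 3 array of floats in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                      # red-green opponent channel (approximation)
    by = b - 0.5 * (r + g)          # blue-yellow opponent channel (approximation)
    chroma = np.stack([rg, by], axis=-1)
    # Average absolute difference between horizontally and vertically adjacent pixels.
    dx = np.abs(np.diff(chroma, axis=1)).mean()
    dy = np.abs(np.diff(chroma, axis=0)).mean()
    return 0.5 * (dx + dy)

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)          # placeholder image
    print(mean_local_chromaticity_difference(img))
```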


Author(s):  
Дмитро Сергійович Гаврилов ◽  
Сергій Степанович Бучік ◽  
Юрій Михайлович Бабенко ◽  
Сергій Сергійович Шульгін ◽  
Олександр Васильович Слободянюк

The subject of research in this article is video processing based on the JPEG platform for data transmission in an information and telecommunication network. The aim is to build a method for processing a video image that allows it to be protected at the quantization stage, with subsequent arithmetic coding, so that, while preserving structural and statistical regularity, the necessary levels of accessibility, reliability, and confidentiality are ensured when transmitting video data. Task: to survey known methods of selective video image processing and then formalize a video image processing procedure, based on the JPEG platform, that operates at the quantization stage and applies statistical coding to significant blocks. The methods used are an algorithm based on the JPEG platform, methods for selecting significant informative blocks, and arithmetic coding. The following results were obtained. A method for processing a video image, with the possibility of protecting it at the quantization stage with subsequent arithmetic coding, has been developed. While preserving structural and statistical regularity, this method meets the stated requirements for accessible, reliable, and confidential transmission of video data. The required level of availability is associated with a 30% reduction in the video image volume compared to the original. At the same time, the required level of reliability is confirmed by an estimate of the peak signal-to-noise ratio for an authorized user, which is dB, and the required level of confidentiality is confirmed by an estimate of the peak signal-to-noise ratio in the case of unauthorized access, which is equal to dB. Conclusions. The scientific novelty of the results is as follows: for the first time, two methods for processing video images at the quantization stage have been proposed. The proposed technologies fulfill the assigned tasks of ensuring the required level of confidentiality at a given level of reliability. The method based on encryption tables has higher cryptographic strength than the method based on the key matrix, owing to a more complex mathematical apparatus, which in turn increases the data processing time. To meet the requirement of data availability, it is proposed to use arithmetic coding for the informative blocks, which should be more efficient than code-table methods. Thus, the method based on encryption tables has greater cryptographic strength, while the method based on the key matrix has higher performance. At the same time, the use of arithmetic coding satisfies the need for accessibility by reducing the initial volume.
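For orientation, the sketch below shows the generic JPEG-style quantization stage at which the proposed protection is applied: an 8×8 block is transformed with the 2-D DCT and divided by a quantization table before entropy coding. This reproduces only the standard JPEG step, not the article's key-matrix or encryption-table protection.

```python
# Generic JPEG-style quantization of one 8x8 luminance block. This is the
# stage at which the article applies its protection; the protection step
# itself (key matrix / encryption tables) is not reproduced here.
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize_block(block):
    """block: 8x8 array of pixel values in [0, 255]."""
    coeffs = dctn(block - 128.0, norm="ortho")   # level shift + forward 2-D DCT
    return np.round(coeffs / Q).astype(int)      # lossy quantization step

def dequantize_block(qcoeffs):
    """Approximate reconstruction of the original block."""
    return idctn(qcoeffs * Q, norm="ortho") + 128.0

if __name__ == "__main__":
    block = np.random.randint(0, 256, (8, 8)).astype(float)
    q = quantize_block(block)     # these coefficients are what the entropy coder receives
    print(q)
```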


Author(s):  
Manoj Kumar ◽  
Kara D Federmeier ◽  
Diane M Beck

Abstract Predictive coding models can simulate known perceptual or neuronal phenomena, but there have been fewer attempts to identify a reliable neural signature of predictive coding for complex stimuli. In a pair of studies, we test whether the N300 component of the event-related potential, occurring 250–350 ms post-stimulus-onset, has the response properties expected for such a signature of perceptual hypothesis testing at the level of whole objects and scenes. We show that N300 amplitudes are smaller in response to representative (“good exemplar”) than to less representative (“bad exemplar”) items from natural scene categories. Integrating these results with patterns observed for objects, we establish that, across a variety of visual stimuli, the N300 is responsive to statistical regularity, or the degree to which the input is “expected” (either explicitly or implicitly) based on prior knowledge, with statistically regular images evoking a reduced response. Moreover, we show that the measure exhibits context-dependency; that is, we find N300 sensitivity to category representativeness when stimuli are congruent with, but not when they are incongruent with, a category pre-cue. Thus, we argue that the N300 is the best candidate to date for an index of perceptual hypothesis testing for complex visual objects and scenes.
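As a minimal sketch of the kind of measurement involved, the snippet below averages single-channel epoched EEG over the 250–350 ms window for two conditions. The data layout, sampling rate, and trial counts are assumptions; the authors' actual analysis pipeline is not reproduced.

```python
# Minimal sketch (assumed data layout): mean ERP amplitude in the 250-350 ms
# post-stimulus window, the kind of N300 measure compared across conditions.
import numpy as np

def mean_window_amplitude(epochs, times, t_min=0.250, t_max=0.350):
    """epochs: trials x samples (single channel, microvolts);
    times: 1-D array of sample times in seconds, aligned to stimulus onset."""
    mask = (times >= t_min) & (times <= t_max)
    return epochs[:, mask].mean()          # mean over trials and window samples

if __name__ == "__main__":
    times = np.arange(-0.2, 0.8, 0.002)             # hypothetical 500 Hz epoch
    good = np.random.randn(40, times.size)          # placeholder "good exemplar" trials
    bad = np.random.randn(40, times.size)           # placeholder "bad exemplar" trials
    print(mean_window_amplitude(good, times), mean_window_amplitude(bad, times))
```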


Author(s):  
Wayne C. Myrvold

This chapter begins with a puzzle: how is reliable prediction ever possible in physics? The reason this is puzzling is that, even if the systems we are making predictions about are governed by deterministic laws that are known to us, the information available to us is a minuscule fraction of what might in principle be required to make a prediction. The answer to the puzzle lies in the phenomenon of statistical regularity, first identified in the social sciences. In a sufficiently large population, reliable predictions can be made about the total number of events that, taken individually, are unpredictable. Aggregate order arises out of individual disorder. This means that, as James Clerk Maxwell already perceived in the nineteenth century, all observed regularities are statistical regularities. To understand them requires the use of probabilistic concepts. This means that probabilistic reasoning is required even in our most certain predictions. Probability permeates physics, and we are going to have to make sense of it.
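The chapter's point that aggregate order arises out of individual disorder can be illustrated with a toy simulation (not drawn from the chapter itself): each coin flip is unpredictable, yet the fraction of heads in a large run is highly predictable.

```python
# Toy illustration: individual outcomes are unpredictable, but the aggregate
# frequency in a large population settles near a stable value.
import random

random.seed(0)
for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n={n:>7}: fraction of heads = {heads / n:.3f}")
```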


2020 ◽  
Vol 20 (11) ◽  
pp. 146
Author(s):  
Pei-Ling Yang ◽  
Evan G Center ◽  
Diane M Beck

Author(s):  
Dariusz Skotarek

Zipf’s Law states that within a given text the frequency of any word is inversely proportional to its rank in the frequency table of the words used in that text. It is a power-law statistical regularity that occurs ubiquitously in language: so far, every language that has been tested has been found to display the Zipfian distribution. Toki Pona is an experimental artificial language spoken by hundreds of users. It is extremely minimalistic: its vocabulary consists of a mere 120 words. A comparative statistical analysis of two parallel texts in French and Toki Pona showed that even a language with such a scarce vocabulary adheres to Zipf’s Law just like natural languages.
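The relationship the abstract describes is easy to check on any text: if frequency is inversely proportional to rank, then rank times frequency should be roughly constant. The sketch below does this with a simple whitespace tokenizer; "corpus.txt" is a placeholder filename, and this is not the statistical analysis used in the study.

```python
# Sketch of the rank-frequency check behind Zipf's Law: if frequency is
# inversely proportional to rank, then rank * frequency stays roughly constant
# (equivalently, the log-log rank-frequency slope is about -1).
from collections import Counter

def rank_frequency(text):
    counts = Counter(text.lower().split())          # naive whitespace tokenization
    freqs = sorted(counts.values(), reverse=True)
    return [(rank, f, rank * f) for rank, f in enumerate(freqs, start=1)]

if __name__ == "__main__":
    sample = open("corpus.txt", encoding="utf-8").read()   # any sizeable text
    for rank, freq, product in rank_frequency(sample)[:10]:
        print(rank, freq, product)      # the product should stay roughly stable
```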


Author(s):  
Chiung-Yu Chang ◽  
Feng-fan Hsieh

This study investigates how statistical regularity concerning the distribution of lexical tones and consonantal onsets in Mandarin, formulated here as the “tone-consonant co-occurrence probability (TCCP)”, influences the results of a wordlikeness judgment task. Native speakers were asked to rate the wordlikeness of monosyllabic real words and pseudowords with existing segmental combinations. Overall, real words with high probability were considered more wordlike than those with low probability. On the other hand, the probability effect was not significant for the well-formedness ratings of the pseudowords. These findings suggest that speakers are sensitive to the tone-consonant co-occurrence patterns, which follow gradual tendencies rather than an “all-or-nothing” pattern, but such sensitivity is probably limited to existing forms and cannot be extended to hypothetical ones.
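As a hedged sketch of how such a co-occurrence probability could be estimated, the snippet below treats TCCP as the relative frequency of an (onset, tone) pair in a syllable lexicon. Both this formulation and the toy lexicon are assumptions for illustration; the paper's exact TCCP definition may differ.

```python
# Hedged sketch: estimating a tone-consonant co-occurrence probability as the
# relative frequency of an (onset, tone) pair in a lexicon. The lexicon below
# is made up, and the paper's exact TCCP formula may differ.
from collections import Counter

lexicon = [("p", 1), ("p", 4), ("m", 2), ("m", 3), ("t", 1), ("t", 1)]  # (onset, tone) pairs

pair_counts = Counter(lexicon)
total = sum(pair_counts.values())

def tccp(onset, tone):
    """Relative frequency of the (onset, tone) combination in the lexicon."""
    return pair_counts[(onset, tone)] / total

print(tccp("t", 1))   # 2/6 ≈ 0.33 in this toy example
```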


Author(s):  
Xin Zhang

Abstract Apollonian gaskets are formed by repeatedly filling the gaps between three mutually tangent circles with further tangent circles. In this paper we give explicit formulas for the limiting pair correlation and the limiting nearest neighbor spacing of centers of circles from a fixed Apollonian gasket. These are corollaries of the convergence of moments that we prove. The input from ergodic theory is an extension of Mohammadi–Oh’s theorem on the equidistribution of expanding horospheres in the frame bundles of infinite volume hyperbolic spaces.
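For readers unfamiliar with the spacing statistic involved, the sketch below computes an empirical nearest-neighbor spacing distribution for a finite set of planar points (such as circle centers). It is a generic computation on placeholder data; the paper derives the limiting law analytically, which this sketch does not attempt.

```python
# Generic computation of the empirical nearest-neighbor spacing distribution
# for a finite set of planar points (e.g. circle centers); placeholder data,
# not the paper's analytic limiting formulas.
import numpy as np

def nearest_neighbor_spacings(points):
    """points: N x 2 array; returns each point's distance to its nearest neighbor."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)      # ignore zero self-distances
    return dists.min(axis=1)

if __name__ == "__main__":
    pts = np.random.rand(500, 2)                 # placeholder point set
    spacings = nearest_neighbor_spacings(pts)
    # Normalize by the mean spacing, as is standard for spacing statistics.
    print(np.histogram(spacings / spacings.mean(), bins=10)[0])
```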


2019 ◽  
Vol 19 (10) ◽  
pp. 120c
Author(s):  
Su-Ling Yeh ◽  
Shuo-Heng Li ◽  
Li Jingling ◽  
Joshua Oon Soo Goh ◽  
Yi-Ping Chao ◽  
...  
