Lawful relation between perceptual bias and discriminability

2017 ◽  
Vol 114 (38) ◽  
pp. 10244-10249 ◽  
Author(s):  
Xue-Xin Wei ◽  
Alan A. Stocker

Perception of a stimulus can be characterized by two fundamental psychophysical measures: how well the stimulus can be discriminated from similar ones (discrimination threshold) and how strongly the perceived stimulus value deviates on average from the true stimulus value (perceptual bias). We demonstrate that perceptual bias and discriminability, as functions of the stimulus value, follow a surprisingly simple mathematical relation. The relation, which is derived from a theory combining optimal encoding and decoding, is well supported by a wide range of reported psychophysical data including perceptual changes induced by contextual modulation. The large empirical support indicates that the proposed relation may represent a psychophysical law in human perception. Our results imply that the computational processes of sensory encoding and perceptual decoding are matched and optimized based on identical assumptions about the statistical structure of the sensory environment.
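
The abstract does not restate the relation itself; in the published paper it takes the form b(θ) ∝ d/dθ [D(θ)²], i.e., perceptual bias is proportional to the slope of the squared discrimination threshold. Below is a minimal numerical sketch of that form under a toy stimulus prior; the prior, parameters, and units are arbitrary illustrations, not data from the study.

```python
import numpy as np

# Toy sketch of the reported law: bias proportional to the slope of the
# squared discrimination threshold, b(theta) ~ d/dtheta [D(theta)**2].
# The prior below is an arbitrary illustration, not data from the paper.

theta = np.linspace(0.0, 2.0 * np.pi, 1000)
prior = 2.0 - np.abs(np.sin(theta))   # toy stimulus prior (unnormalized;
                                      # constants drop out of a proportionality)

# Efficient coding allots sensory resolution by frequency of occurrence,
# so discrimination thresholds scale inversely with the prior.
D = 1.0 / prior

# Predicted bias, up to a positive constant.
bias = np.gradient(D**2, theta)

print(bias[::100])   # bias crosses zero at extrema of the prior
```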

2016 ◽  
Author(s):  
Xue-Xin Wei ◽  
Alan A. Stocker

Perception is a subjective experience that depends on the expectations and beliefs of an observer [1]. Psychophysical measures provide an objective yet indirect characterization of this experience by describing the dependency between the physical properties of a stimulus and the corresponding perceptually guided behavior [2]. Two fundamental psychophysical measures characterize an observer’s perception of a stimulus: how well the observer can discriminate the stimulus from similar ones (discrimination threshold) and how strongly the observer’s perceived stimulus value deviates from the true stimulus value (perceptual bias). It has long been thought that these two perceptual characteristics are independent [3]. Here we demonstrate that discrimination threshold and perceptual bias show a surprisingly simple mathematical relation. The relation, which we derived from assumptions of optimal sensory encoding and decoding [4], is well supported by a wide range of reported psychophysical data [5–16] including perceptual changes induced by spatial [17,18] and temporal [19–23] context, and attention [24]. The large empirical support suggests that the proposed relation represents a new law of human perception. Our results imply that universal rules govern the computational processes underlying human perception.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nadia Paraskevoudi ◽  
Iria SanMiguel

The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection and discrimination sensitivity (i.e., detection thresholds and Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., Point of Subjective Equality). Supra-threshold self-generated sounds were perceived as softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived as louder than externally-generated ones. Our findings provide empirical support for recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that seem to be driven by a biased estimate of perceived loudness rather than by changes in detection and discrimination sensitivity.
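
A minimal sketch, on synthetic data, of how the two discrimination-task measures named above are typically estimated: fit a cumulative Gaussian psychometric function and read the PSE off its 50% point and the JND off its spread. This is an assumed standard recipe, not the authors' analysis code; all numbers are made up.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    # x: comparison-minus-standard level (dB); pse: 50% point; sigma: spread
    return norm.cdf(x, loc=pse, scale=sigma)

levels = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])        # dB offsets
p_louder = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.93, 0.98])  # synthetic

(pse, sigma), _ = curve_fit(psychometric, levels, p_louder, p0=[0.0, 2.0])
jnd = sigma * norm.ppf(0.75)   # half the 25%-to-75% distance

print(f"PSE = {pse:.2f} dB, JND = {jnd:.2f} dB")
```

On this recipe, a self-generation effect on the PSE with an unchanged JND is exactly the dissociation the abstract reports: a shift in perceived loudness without a change in sensitivity.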


2021 ◽  
pp. medethics-2021-107671
Author(s):  
Marcus Dahlquist ◽  
Henrik D Kugelberg

A wide range of non-pharmaceutical interventions (NPIs) have been introduced to stop or slow down the COVID-19 pandemic. Examples include school closures, environmental cleaning and disinfection, mask mandates, restrictions on freedom of assembly and lockdowns. These NPIs depend on coercion for their effectiveness, either directly or indirectly. A widely held view is that coercive policies need to be publicly justified—justified to each citizen—to be legitimate. Standardly, this is thought to entail that there is a scientific consensus on the factual propositions that are used to support the policies. In this paper, we argue that such a consensus has been lacking on the factual propositions justifying most NPIs. Consequently, they would on the standard view be illegitimate. This is regrettable since there are good reasons for granting the state the legitimate authority to enact NPIs under conditions of uncertainty. The upshot of our argument is that it is impossible to have both the standard interpretation of the permissibility of empirical claims in public justification and an effective pandemic response. We provide an alternative view that allows the state sufficient room for action while precluding the possibility of it acting without empirical support.


2014 ◽  
Vol 10 (6) ◽  
pp. 20140261 ◽  
Author(s):  
John P. DeLong

The parameters that drive population dynamics typically show a relationship with body size. By contrast, there is no theoretical or empirical support for a body-size dependence of mutual interference, which links foraging rates to consumer density. Here, I develop a model to predict that interference may be positively or negatively related to body size depending on how resource body size scales with consumer body size. Over a wide range of body sizes, however, the model predicts that interference will be body-size independent. This prediction was supported by a new dataset on interference and consumer body size. The stabilizing effect of intermediate interference therefore appears to be roughly constant across size, while the effect of body size on population dynamics is mediated through other parameters.
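
As a concrete anchor for how interference links foraging rate to consumer density, here is a toy functional response in the Hassell–Varley tradition, where the attack rate is scaled by C^(−m). The paper's own model is not reproduced here, and all parameter values are arbitrary.

```python
import numpy as np

def foraging_rate(R, C, a=0.5, h=0.1, m=0.5):
    """Per-capita intake: type II response with interference exponent m."""
    eff_a = a * C**(-m)                  # crowding lowers the attack rate
    return eff_a * R / (1.0 + eff_a * h * R)

R = 100.0
for C in (1.0, 2.0, 4.0, 8.0):
    # per-capita intake falls as consumers crowd (for m > 0)
    print(f"C = {C}: intake = {foraging_rate(R, C):.2f}")
```

Setting m = 0 recovers an interference-free response; the paper's finding that interference is roughly body-size independent amounts to saying that m stays approximately constant across consumer sizes.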


2020 ◽  
Vol 31 (9) ◽  
pp. 1107-1116
Author(s):  
Alexander Martin ◽  
Jennifer Culbertson

Similarities among the world’s languages may be driven by universal features of human cognition or perception. For example, in many languages, complex words are formed by adding suffixes to the ends of simpler words, but adding prefixes is much less common: Why might this be? Previous research suggests this is due to a domain-general perceptual bias: Sequences differing at their ends are perceived as more similar to each other than sequences differing at their beginnings. However, as is typical in psycholinguistic research, the evidence comes exclusively from one population—English speakers—who have extensive experience with suffixing. Here, we provided a much stronger test of this claim by investigating perceptual-similarity judgments in speakers of Kîîtharaka, a heavily prefixing Bantu language spoken in rural Kenya. We found that Kîîtharaka speakers (N = 72) showed the opposite judgments to English speakers (N = 51), which calls into question whether a universal bias in human perception can explain the suffixing preference in the world’s languages.


2013 ◽  
Vol 13 (1) ◽  
pp. 111 ◽  
Author(s):  
Casilda Garcia de la Maza

Adverbial modification, affectedness, and the aspectual characteristics of the verb phrase have usually been invoked as principles governing the possibility for a verb to appear in the middle mode, as defended by Roberts (1987), Fagan (1992), Doron and Rappaport-Hovav (1991) and Levin (1993), inter alia. This paper presents the results of a data collection project aimed at unravelling the conditions on middle formation. The data show how existing accounts are deficient in a number of ways and leave a wide range of data unaccounted for. Instead, the data reveal that pragmatic relevance has a major role to play and provide empirical support for the essentially “pragmatic value” (Green 2004) of the construction. Some of the formal properties of middles that had formerly been put down to syntactic constraints are then reanalysed in the light of this characterisation, including the apparent requirement for adverbial modification, which can now be approached from a fresh perspective.


Author(s):  
T. Clifton Morgan ◽  
Glenn Palmer

The “two-good theory” is a theory of foreign policy that is meant to apply to all states in all situations; that is, it is general. The theory is simple and assumes that states pursue two things with their foreign policies: change (altering aspects of the status quo that they do not like) and maintenance (protecting aspects of the status quo that they do like). It also assumes that states have finite resources. In making these assumptions, the theory focuses on the trade-offs that states face in constructing their most desired foreign policy portfolios. Further, the theory assumes that protecting realized outcomes is easier than bringing about desired changes in the status quo. The theory assumes that states pursue two goods instead of the more traditional one good; for realism, that good is “power,” and for neorealism, it is “security.” This small step in theoretical development is very fruitful and leads to more interesting hypotheses, many of which enjoy empirical support. The theory captures more of the dynamics of international relations and of foreign policy choices than more traditional approaches do. A number of empirical tests of the implications of the two-good theory have been conducted and support the theory; as the theory can speak to a variety of foreign policy behaviors, these tests appropriately cover a wide range of activities, including conflict initiation and foreign aid allocation. If the research relaxes some of the parameters of the theory, the investigator can derive a series of corollaries. For example, the initial variant of the theory keeps a number of parameters constant to determine the effect of changes in capability. If, however, the investigator allows preferences to vary in a systematic and justifiable manner (consistent with the theory but not established by it), she can see how leaders in a range of situations can be expected to behave. The proposed research strategy, in other words, is to utilize the general nature of the two-good theory to investigate a number of interesting and surprising implications. For example, what may one expect to see if the United States supplies a recipient state with military aid to counter a rebellion? Under reasonable circumstances, the two-good theory predicts that the recipient would increase its change-seeking behavior by, for instance, engaging in negotiations to lower trade barriers.
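
The budget trade-off at the heart of the theory can be made concrete with a toy allocation model. The Cobb-Douglas utility, prices, and numbers below are stand-ins of ours, not Morgan and Palmer's formal apparatus; the sketch only illustrates how finite resources and a cheaper maintenance good shape a portfolio.

```python
# A state splits a fixed budget B between change-seeking (c) and
# maintenance (m), with maintenance assumed cheaper per unit.

def allocate(B, alpha=0.4, price_change=2.0, price_maint=1.0):
    # Cobb-Douglas demands: a share alpha of the budget goes to change,
    # the rest to maintenance.
    c = alpha * B / price_change
    m = (1.0 - alpha) * B / price_maint
    return c, m

for budget in (10.0, 20.0):  # growing capability shifts the whole portfolio
    c, m = allocate(budget)
    print(f"B = {budget}: change = {c:.1f}, maintenance = {m:.1f}")
```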


2020 ◽  
Author(s):  
Wen-Kai You ◽  
Shreesh P. Mysore

Mice are increasingly used to study visual behaviors, but the time course of their perceptual dynamics is unclear. Here, using conditional accuracy analysis, a powerful method used to analyze human perception, together with drift diffusion modeling, we investigated the dynamics and limits of mouse visual perception with a 2AFC orientation discrimination task. We found that it includes two stages: a short sensory encoding stage lasting ∼300 ms, which involves a speed-accuracy tradeoff, and a longer stage dependent on visual short-term memory (VSTM) lasting ∼1700 ms. Manipulating stimulus features or adding a foil affected the sensory encoding stage, whereas manipulating stimulus duration altered the VSTM stage. Additionally, mice discriminated targets as brief as 100 ms and exhibited classic psychometric curves in a visual search task. Our results reveal surprising parallels between mouse and human visual perceptual processes and provide a quantitative scaffold for exploring neural circuit mechanisms of visual perception.
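
A compact sketch of the two analysis tools named above, drift diffusion modeling and conditional accuracy analysis, with made-up parameters rather than the paper's fits.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift=1.0, bound=1.0, dt=1e-3, non_decision=0.2):
    x, t = 0.0, 0.0
    while abs(x) < bound:                      # accumulate noisy evidence
        x += drift * dt + rng.normal(0.0, np.sqrt(dt))
        t += dt
    return t + non_decision, x > 0             # RT (s), correct at upper bound

trials = [ddm_trial() for _ in range(2000)]
rts = np.array([t for t, _ in trials])
correct = np.array([c for _, c in trials])

# Conditional accuracy: bin trials by RT quantile, average accuracy per bin.
edges = np.quantile(rts, np.linspace(0.0, 1.0, 6))
edges[-1] += 1e-9                              # make the top bin inclusive
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (rts >= lo) & (rts < hi)
    print(f"RT {lo:.2f}-{hi:.2f} s: accuracy = {correct[sel].mean():.2f}")
```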


2015 ◽  
Author(s):  
Thanh Thieu

For years, scientists have grappled with the problem of machine intelligence. Learning classes of objects and then classifying objects into those classes is a common task in machine intelligence. Two object representation schemes are often used for this task: a vector-based representation and a graph-based representation. While the vector representation has a sound mathematical foundation and mature optimization tools, it cannot encode relations between patterns and their parts, and thus falls short of the complexity of human perception. The graph-based representation, on the other hand, naturally captures intrinsic structural properties, but the available algorithms usually have exponential complexity. In this work, we build an inductive learning algorithm that relies on graph-based representations of objects and their classes, and we test the framework on a competitive dataset of human actions in static images. The method incorporates three primary measures of class representation: likelihood probability, family resemblance typicality, and minimum description length. Empirical benchmarking shows that the method is robust to noisy input, scales well to real-world datasets, and achieves performance comparable to current learning techniques. Moreover, our method has the advantage of an intuitive representation of both patterns and classes. While applied here to the specific problem of human pose recognition, our framework, named graphical Evolving Transformation System (gETS), can have a wide range of applications and can be used in other machine learning tasks.
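
A toy contrast of the two representation schemes discussed above; the gETS formalism itself is not reproduced, and the part names and relations are hypothetical.

```python
import networkx as nx

# Vector scheme: a fixed-length list of joint coordinates; the relations
# between parts are left implicit in the ordering.
pose_vector = [0.5, 0.9, 0.5, 0.7, 0.35, 0.6, 0.65, 0.6]  # (x, y) per joint

# Graph scheme: parts as nodes, structural relations as labeled edges.
pose_graph = nx.Graph()
pose_graph.add_nodes_from(["head", "torso", "left_arm", "right_arm"])
pose_graph.add_edge("head", "torso", relation="above")
pose_graph.add_edge("left_arm", "torso", relation="attached")
pose_graph.add_edge("right_arm", "torso", relation="attached")

# Structural queries are natural on the graph but opaque in the vector.
print(list(pose_graph.neighbors("torso")))
print(pose_graph.edges["head", "torso"]["relation"])
```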


2019 ◽  
Vol 242 (1) ◽  
pp. T81-T94 ◽  
Author(s):  
Clare M Reynolds ◽  
Mark H Vickers

Alterations in the environment during critical periods of development, including altered maternal nutrition, can increase the risk for the development of a range of metabolic, cardiovascular and reproductive disorders in offspring in adult life. Following the original epidemiological observations of David Barker that linked perturbed fetal growth to adult disease, a wide range of experimental animal models have provided empirical support for the developmental programming hypothesis. Although the mechanisms remain poorly defined, adipose tissue has been highlighted as playing a key role in the development of many disorders that manifest in later life. In particular, adipokines, including leptin and adiponectin, primarily secreted by adipose tissue, have now been shown to be important mediators of processes underpinning several phenotypic features associated with developmental programming, including obesity, insulin sensitivity and reproductive disorders. Moreover, manipulation of adipokines in early life has suggested potential strategies to ameliorate or reverse the adverse sequelae associated with aberrant programming and has provided insight into some of the mechanisms involved in the development of chronic disease across the life course.

