perceptual unit
Recently Published Documents

Total documents: 19 (5 in the last five years)
H-index: 6 (five-year H-index: 1)

2021, Vol 12
Author(s): Yannick Joye, Sabrina Bruyneel, Bob M. Fennis

In the present work we extend research into the unit bias effect and its extension—the portion size effect—by demonstrating the existence of a “Gestalt bias.” Drawing on the tenets of Gestalt psychology, we show that a unit bias effect can be observed for food portions that are composed of identical basic units, but which are subjectively grouped into, or perceived as, a Gestalt—a larger whole. In three studies, we find that such subjectively constructed food wholes constitute a new (perceptual) unit that is perceived as bigger than the units it is constructed from, thereby prompting increased eating and desire to eat.


2021
Author(s): Joseph Nah, Joy Geng

Objects are fundamental units of vision that convey meaning, yet how different types of semantic knowledge affect perception is not fully understood. The concept literature divides semantic information into taxonomic and thematic types. Taxonomic relationships reflect categorization by similarities (e.g., dog – wolf); thematic groups are based on complementary relationships shared within a common event (e.g., swimsuit – goggles – pool). A critical difference between these two types of information is that thematic relationships are learned from the experienced co-occurrence of objects, whereas taxonomic relationships are learned abstractly. In two studies, we test the hypothesis that visual processing of thematically related objects is more rapid because they serve as mutual visual primes and form a perceptual unit. The results demonstrate that learned co-occurrence not only shapes semantic knowledge, but also affects low-level visual processing, revealing a link between how information is acquired (e.g., experienced vs. unobserved) and how it modulates perception.
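
The thematic/taxonomic distinction drawn above can be made concrete with a toy co-occurrence count; the scene lists, object names, and the cooccurrence() helper below are invented purely for illustration and are not the authors' stimuli or analysis.

```python
from collections import Counter
from itertools import combinations

# Toy "experience": each scene is a set of objects encountered together.
scenes = [
    {"swimsuit", "goggles", "pool", "towel"},
    {"swimsuit", "goggles", "pool"},
    {"goggles", "pool", "towel"},
    {"dog", "leash", "park"},
    {"wolf", "forest", "snow"},
]

# Count how often each unordered pair of objects co-occurs across scenes.
pair_counts = Counter()
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        pair_counts[(a, b)] += 1

def cooccurrence(a, b):
    return pair_counts[tuple(sorted((a, b)))]

# Thematic pairs are experienced together often ...
print(cooccurrence("swimsuit", "goggles"))  # 2
print(cooccurrence("goggles", "pool"))      # 3
# ... while a taxonomic pair (similar in kind) may never co-occur.
print(cooccurrence("dog", "wolf"))          # 0
```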


2019, Vol 24 (1), pp. 71-84
Author(s): Felix A. Dobrowohl, Andrew J. Milne, Roger T. Dean

Perceptual dimensions underlying timbre and sound-source identification have received considerable scientific attention. While these scholarly insights help us in understanding the nature of sound within a multidimensional timbral space, they carry little meaning for the majority of musicians. To help address this, we conducted two experiments to establish listeners’ perceptual thresholds (PT) for changes in sound using a staircase procedure. Unlike most timbre perception research, these changes were sonic manipulations that are common in synthesisers, audio processors and instruments familiar to musicians and producers, and occurred within continuous sounds (rather than between discrete pairs of sounds). In experiment 1, two sounds (variants of a sawtooth oscillation), both with the same fundamental frequency (F1: 80 Hz, 240 Hz or 600 Hz), were played with no intervening gap. In each trial, the two sounds’ partials differed in amplitudes or frequencies to produce a timbre change. The sonic manipulations were varied in size to detect thresholds for the perceived timbre change – listeners were instructed to indicate whether or not they perceived a change within the sound. In experiment 2, we modified stimulus presentation to introduce the factor of transition time (TT). Rather than occurring instantaneously (as in experiment 1), the timbre manipulations were introduced gradually over the course of a 100 ms or a 1000 ms TT. Results revealed that PTs were significantly affected by the manipulations in experiment 1, and additionally by TT in experiment 2. Importantly, the data revealed an interaction between the F1 and the timbre manipulations, such that there were differential effects of timbre changes on the perceptual system depending on pitch height. Musicians (n=11) showed significantly smaller PTs compared to non-musicians (n=10). However, PTs for musicians and non-musicians were highly correlated (r=.83) across different sonic manipulations, indicating similar perceptual patterns in both groups. We hope that by establishing PTs for commonly used timbre manipulations, we can provide musicians with a general perceptual unit for each manipulation that can guide music composition and assessment.
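
The staircase procedure itself is not spelled out in the abstract, so the following is only a minimal sketch of a generic 2-down/1-up adaptive staircase of the kind commonly used for threshold estimation; the starting level, step sizes, reversal count, and the simulated_listener() response function are illustrative assumptions, not details taken from the paper.

```python
import math
import random


def run_staircase(respond, start=10.0, step=1.0, min_step=0.25, n_reversals=8):
    """Generic 2-down/1-up adaptive staircase (illustrative sketch only).

    respond(level) should return True if a change of the given size was
    detected on that trial, False otherwise.
    """
    level = start
    correct_streak = 0
    last_direction = None      # +1 = level just went up, -1 = level just went down
    reversal_levels = []

    while len(reversal_levels) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak < 2:
                continue                      # wait for two detections in a row
            correct_streak = 0
            direction = -1                    # make the change harder to hear
            new_level = max(level - step, 0.0)
        else:
            correct_streak = 0
            direction = +1                    # make the change easier to hear
            new_level = level + step

        if last_direction is not None and direction != last_direction:
            reversal_levels.append(level)     # log the level at each reversal
            step = max(step / 2.0, min_step)  # shrink the step after a reversal
        last_direction = direction
        level = new_level

    # Threshold estimate: mean of the last six reversal levels.
    return sum(reversal_levels[-6:]) / len(reversal_levels[-6:])


def simulated_listener(level, true_threshold=3.0):
    """Toy listener whose detection probability follows a logistic function."""
    p_detect = 1.0 / (1.0 + math.exp(-(level - true_threshold)))
    return random.random() < p_detect


if __name__ == "__main__":
    print("Estimated threshold:", run_staircase(simulated_listener))
```

A 2-down/1-up rule converges on roughly the 70.7%-correct point of the psychometric function, which is one common operationalization of a perceptual threshold.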


2019, pp. 127-153
Author(s): Kevin Connolly

This chapter argues that multisensory perceptions are learned, not the result of an automatic feature binding mechanism. For example, suppose you are at a live jazz show. The drummer begins a solo. You see the cymbal jolt and hear the clang. But you are also aware that the jolt and the clang are part of the same event. Psychologists have assumed that multisensory perceptions like this one are the result of an automatic feature binding mechanism. This chapter argues instead that when you experience the jolt and the clang as part of the same event, it is the result of a perceptual learning process. The jolt and the clang are best understood as a single learned perceptual unit, not as automatically bound. This chapter details the perceptual learning process of “unitization,” whereby we come to “chunk” the world into multisensory units, and argues that unitization best explains multisensory perception.


2018
Author(s): Fan-Gang Zeng

Psychophysical laws quantitatively relate perceptual magnitude to stimulus intensity. While most people have accepted Stevens’s power function as the psychophysical law, few believe in Fechner’s original idea of using just-noticeable differences (jnd) as a constant perceptual unit to educe psychophysical laws. Here I present a unified theory in hearing, starting with a general form of Zwislocki’s loudness function (1965) to derive a general form of Brentano’s law. I will arrive at a general form of the loudness-jnd relationship that unifies previous loudness-jnd theories. Specifically, the “slope”, “proportional-jnd”, and “equal-loudness, equal-jnd” theories are three additive terms in the new unified theory. I will also show that the unified theory is consistent with empirical data in both acoustic and electric hearing. Without any free parameters, the unified theory uses loudness balance functions to successfully predict the jnd function in a wide range of hearing situations. These situations include loudness recruitment and its jnd functions in sensorineural hearing loss and simultaneous masking; loudness enhancement and the midlevel hump in forward and backward masking; and abnormal loudness and jnd functions in cochlear implant subjects. Predictions of these loudness-jnd functions were thought to be questionable at best in simultaneous masking or not possible at all in forward masking. The unified theory and its successful applications suggest that although the specific form of Fechner’s law needs to be revised, his original idea is valid in the wide range of hearing situations discussed here.
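
For readers unfamiliar with the formulas being contrasted, the standard textbook forms of Weber's, Fechner's, and Stevens's laws are shown below; the abstract does not give the parameterization of the unified loudness-jnd relationship itself, so it is not reproduced here.

```latex
% Weber's law: the just-noticeable difference grows in proportion to intensity,
\[
  \frac{\Delta I}{I} = k_{W} .
\]
% Fechner summed jnd-sized steps to obtain a logarithmic psychophysical law,
\[
  \psi(I) = c \,\ln\!\left(\frac{I}{I_{0}}\right),
\]
% whereas Stevens's power law relates sensation magnitude directly to intensity,
\[
  \psi(I) = k \, I^{\beta},
\]
% with I_0 the absolute threshold and k_W, c, k, \beta empirical constants.
```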


2016, Vol 33 (3), pp. 287-305
Author(s): Chelsea Douglas, Jason Noble, Stephen McAdams

Sound mass has been an influential trend in music since the 1950s, and yet many questions about its perception remain unanswered. Approaching sound mass from the perspective of auditory scene analysis, we define it as a type of auditory grouping that retains an impression of multiplicity even as it is perceived as a perceptual unit. Sound mass requires all markers of the individual identities of sounds to be deemphasized to prevent them from splitting off into separate streams. Seeking to determine how consistent listeners are in their perception of sound mass, and whether it is possible to determine sound parameters and threshold values that predict sound mass perception, we conducted two perceptual studies on Ligeti’s Continuum. This piece consists of an extremely rapid, steady stream of eighth-note dyads with no tempo changes. We addressed the claim by Ligeti and others that the fusion into a continuous texture or sound mass occurs at ca. 20 attacks/s, hypothesizing that other factors such as pitch organization, emergent rhythm, timbre, and register would affect this value. A variety of factors were found to affect sound mass perception, suggesting that the threshold value is not absolute but varies according to principles of auditory scene analysis.
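
The ca. 20 attacks/s figure can be related to inter-onset interval and tempo with simple arithmetic; the short sketch below is purely illustrative and its tempo values are not drawn from the study.

```python
def attack_rate(bpm, notes_per_beat=2.0):
    """Attacks per second for a steady stream of equal note values.

    bpm is the quarter-note tempo; notes_per_beat=2 corresponds to eighth notes.
    """
    return bpm / 60.0 * notes_per_beat


def inter_onset_interval_ms(attacks_per_second):
    """Time between successive attacks, in milliseconds."""
    return 1000.0 / attacks_per_second


if __name__ == "__main__":
    # Illustrative only: 20 attacks/s corresponds to a 50 ms inter-onset
    # interval, i.e. continuous eighth notes at a quarter-note tempo of 600 bpm.
    print(inter_onset_interval_ms(20.0))   # 50.0
    print(attack_rate(600.0))              # 20.0
```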


2015, Vol 33 (1), pp. 3-11
Author(s): Caroline Palmer

Melody has been defined as a distinct perceptual unit that exhibits stability and coherence to listeners and performers. These psychological processes (distinctiveness, stability, coherence) contribute to the foundations of three theories of music cognition (Bregman, 1990; Krumhansl, 1990; Narmour, 1990), yet several mysteries still exist in the human experience of melody. From early exposure to lullabies and brief exposures in advertising jingles, to the full-length concert exposure of complex musical works, listeners’ imagination and focus are captured in unique ways by the experience of melody. People with various amounts of musical training hum, tap, clap, and find other ways of interacting with a melody; they perform to it. Listeners report the experience of a recurring melody playing in their minds (earworms). I discuss neuroscience findings that aid in modeling the fine-level time course of melodic experiences, and address how the listener/performer identifies a melody as distinct in a complex auditory scene, how expectations unfold in implications and realizations that contribute to coherence, and how hierarchical tonal relationships of stability are detected. The life cycle of a melody in the ears, brain, and heart of a listener/performer sheds light on the human experience of music.


Ethology, 2010, Vol 101 (2), pp. 89-100
Author(s): Geoffrey E. Gerstner, Victoria A. Fazio
