Hearing in a world of light: why, where, and how visual and auditory information are connected by the brain

2019 ◽  
Vol 12 (7) ◽  
Author(s):  
Jennifer M. Groh

Keynote by Jenny Groh (Duke University) at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 19.8.2019. Video stream: https://vimeo.com/356576513 Abstract: Information about eye movements with respect to the head is required for reconciling visual and auditory space. This keynote presentation describes recent findings concerning how eye movements affect early auditory processing via motor processes in the ear (eye movement-related eardrum oscillations, or EMREOs). Computational efforts to understand how eye movements are factored into auditory processing to produce a reference frame aligned with visual space uncovered a second critical issue: unlike visual space, sound location is not map-coded but rate (meter) coded in the primate brain. Meter coding would appear to limit the representation of multiple simultaneous sounds. The second part of this presentation concerns how such a meter code could use fluctuating activity patterns to circumvent this limitation.
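To make the map-versus-meter distinction concrete, here is a minimal numerical sketch (not from the talk; the tuning function and all values are hypothetical) of why a single rate-coded readout is ambiguous for two simultaneous sounds, and how activity that fluctuates between per-source rates across time bins could carry both locations:

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_code(azimuth_deg):
    # Monotonic "meter" code: firing rate grows with azimuth, so one
    # rate value stands for one location (hypothetical tuning).
    return 50.0 + 0.5 * azimuth_deg  # spikes/s

def decode(rate):
    return (rate - 50.0) / 0.5

print(decode(rate_code(40.0)))                     # one sound: -> 40.0

# Two simultaneous sounds: an averaged rate is ambiguous.
avg = 0.5 * (rate_code(-20.0) + rate_code(40.0))
print(decode(avg))                                 # -> 10.0, a phantom location

# Fluctuating activity: the rate switches between the two
# source-driven values across time bins, so both locations survive.
sources = [-20.0, 40.0]
bins = np.array([rate_code(sources[rng.integers(2)]) for _ in range(200)])
print(sorted(decode(r) for r in np.unique(bins)))  # -> [-20.0, 40.0]
```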

2021 ◽  
pp. 1-33
Author(s):  
Sixin Liao ◽  
Lili Yu ◽  
Jan-Louis Kruger ◽  
Erik D. Reichle

Abstract This study investigated how semantically relevant auditory information might affect the reading of subtitles, and whether such effects might be modulated by the concurrent video content. Thirty-four native Chinese speakers with English as their second language watched videos with English subtitles in six conditions defined by manipulating the nature of the audio (Chinese/L1 audio vs. English/L2 audio vs. no audio) and the presence versus absence of video content. Global eye-movement analyses showed that participants tended to rely less on the subtitles with Chinese or English audio than without audio, and these effects of audio were more pronounced when video content was present. Lexical processing of the subtitles was not modulated by the audio. However, Chinese audio, which presumably obviated the need to read the subtitles, resulted in more superficial post-lexical processing of the subtitles relative to either English or no audio. By contrast, English audio accentuated post-lexical processing of the subtitles compared with Chinese audio or no audio, indicating that participants might use English audio to support subtitle reading (or vice versa) and thus engaged in deeper processing of the subtitles. These findings suggest that, in multimodal reading situations, eye movements are not only controlled by the processing difficulties associated with properties of words (e.g., their frequency and length) but are also guided by metacognitive strategies involved in monitoring comprehension and its online modulation by different information sources.
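The paper's exact statistical models are not reproduced here, but analyses of such a 3 (audio) x 2 (video) within-subject design are commonly run as linear mixed-effects models; a minimal sketch, with a made-up file name and hypothetical column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-trial data: 'dwell' = proportion of time spent in
# the subtitle area; 'audio' in {"L1", "L2", "none"}; 'video' in
# {"present", "absent"}; 'subj' = participant identifier.
df = pd.read_csv("subtitle_eye_movements.csv")

# Audio x video interaction on subtitle dwell time, with a random
# intercept per participant (random slopes omitted for brevity).
model = smf.mixedlm("dwell ~ C(audio) * C(video)", df, groups=df["subj"])
print(model.fit().summary())
```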


2019 ◽  
Vol 12 (7) ◽  
Author(s):  
Alexandra Spichtig ◽  
Christian Vorstius ◽  
Ronan Reilly ◽  
Jochen Laubrock

Video stream: https://vimeo.com/362645755 Eye-movement recording has made it possible to achieve a detailed understanding of oculomotor and cognitive behavior during reading and of changes in this behavior across the stages of reading development. Given that many students struggle to attain even basic reading skills, a logical extension of eye-movement research involves its applications in both the diagnostic and instructional areas of reading education. The focus of this symposium is on eye-movement research with potential implications for reading education. Christian Vorstius will review results from a large-scale longitudinal study that examined the development of spatial parameters in fixation patterns within three cohorts, ranging from elementary to early middle school, discussing an early developmental window and its potential influences on reading ability and orthography. Ronan Reilly and Xi Fan will present longitudinal data on developmental changes in reading-related eye movements in Chinese. Their findings indicate increasing sensitivity to lexical predictability and sentence coherence, and the authors suggest that delays in the emergence of these reading behaviors may be an early signal of increased risk of reading difficulty. Jochen Laubrock's presentation will focus on the development of the perceptual span and explore dimensions of this phenomenon with potential educational implications, such as the modulation of the perceptual span by cognitive load, as well as preview effects during oral and silent reading and while reading comic books.


2019 ◽  
Vol 121 (2) ◽  
pp. 646-661 ◽  
Author(s):  
Marie E. Bellet ◽  
Joachim Bellet ◽  
Hendrikje Nienborg ◽  
Ziad M. Hafed ◽  
Philipp Berens

Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when they are generated in coordination with other tracking eye movements, such as smooth pursuit, or when the saccade amplitude is close to eye-tracker noise levels, as with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network to automatically detect saccades at human-level accuracy and with minimal training examples. Our algorithm surpasses the state of the art according to common performance metrics and could facilitate studies of neurophysiological processes underlying saccade generation and visual processing. NEW & NOTEWORTHY Detecting saccades in eye movement recordings can be a difficult task, but it is a necessary first step in many applications. We present a convolutional neural network that can automatically identify saccades with human-level accuracy and with minimal training examples. We show that our algorithm performs better than other available algorithms by comparing performance on a wide range of data sets. We offer an open-source implementation of the algorithm as well as a web service.
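The authors' trained network and exact architecture are not reproduced here, but the core idea of per-sample saccade labeling with a 1-D convolutional network can be sketched as follows (PyTorch; the layer sizes and the SaccadeNet name are hypothetical):

```python
import torch
import torch.nn as nn

class SaccadeNet(nn.Module):
    """Labels every time sample of a 2-channel (x, y) gaze trace as
    saccade (1) or not (0). Illustrative only, not the published
    architecture."""
    def __init__(self, channels=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),   # per-sample logit
        )

    def forward(self, xy):                 # xy: (batch, 2, time)
        return self.net(xy).squeeze(1)     # (batch, time) logits

net = SaccadeNet()
trace = torch.randn(1, 2, 1000)            # 1 s of x/y gaze at 1 kHz
mask = torch.sigmoid(net(trace)) > 0.5     # boolean saccade mask
loss_fn = nn.BCEWithLogitsLoss()           # train against human labels
```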


Author(s):  
Nandini Iyer ◽  
Eric R. Thompson ◽  
Brian D. Simpson

Auditory cues, when coupled with visual objects, have led to reduced response times in visual search tasks, suggesting that adding auditory information can potentially aid Air Force operators in complex scenarios. These benefits are substantial when the spatial transformations that one has to make are relatively simple, i.e., mapping a 3-D auditory space to a 3-D visual scene. The current study focused on listeners' abilities to map sound surrounding a listener to a 2-D visual space, by measuring performance in localization tasks that required the following responses: 1) Head-pointing: turn and face the loudspeaker from which a sound emanated; 2) Tablet: point to an icon representing a loudspeaker displayed in an array on a 2-D GUI; or 3) Hybrid: turn and face the loudspeaker from which a sound emanated and then indicate that location on a 2-D GUI. Results indicated that listeners' localization errors were small when the response modality was head-pointing, and localization errors roughly doubled when they were asked to make a complex transformation of auditory-visual space (i.e., while using a hybrid response); surprisingly, the hybrid response technique reduced errors compared with the tablet response conditions. These results have significant implications for the design of auditory displays that require listeners to make complex, non-intuitive transformations of auditory-visual space.
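A common way to score such localization responses is the great-circle angular error between target and response directions; a minimal sketch (the function and example values are illustrative, not the study's analysis code):

```python
import numpy as np

def angular_error_deg(az_t, el_t, az_r, el_r):
    """Great-circle angle between target and response directions,
    each given as azimuth/elevation in degrees."""
    az_t, el_t, az_r, el_r = np.radians([az_t, el_t, az_r, el_r])
    cos_a = (np.sin(el_t) * np.sin(el_r)
             + np.cos(el_t) * np.cos(el_r) * np.cos(az_t - az_r))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# A head-pointing response 5 degrees off a target at (30, 0):
print(angular_error_deg(30, 0, 35, 0))  # -> 5.0
```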


2021 ◽  
Author(s):  
Ifedayo-Emmanuel Adeyefa-Olasupo

Despite the incessant retinal disruptions that necessarily accompany eye movements, our percept of the visual world remains continuous and stable, a phenomenon referred to as spatial constancy. How the visual system achieves spatial constancy remains unclear despite almost four centuries' worth of experimentation. Here I measured visual sensitivity at geometrically symmetric locations, observing transient sensitivity differences between them where none should be observed if the cells that support spatial constancy indeed faithfully translate or converge. These differences, recapitulated by a novel neurobiological mechanical model, reflect an overriding influence of putative visually transient error signals that curve visual space. Intermediate eccentric locations likely to contain retinal disruptions are uniquely affected by curved visual space, suggesting that visual processing at these locations is transiently turned off before an eye movement and, with the gating off of these error signals, turned back on after the eye movement, a possible mechanism underlying spatial constancy.


2000 ◽  
Vol 59 (2) ◽  
pp. 85-88 ◽  
Author(s):  
Rudolf Groner ◽  
Marina T. Groner ◽  
Kazuo Koga

2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing eye movement data obtained from young (mean age = 21 years) and old (mean age = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccades). Estimates were obtained of the standardized mean difference, d, between the age groups on all six measures. The results showed positive combined effect-size estimates in favor of the young adult group (between 0.54 and 3.66 across measures), although the difference in mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccades during reading than older adults, and they also read faster. The meta-analysis confirms statistically the most common patterns observed in previous research; eye movements therefore seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses proposed to explain the observed aging effects, namely neurodegenerative problems and the adoption of compensatory strategies.
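As a worked illustration of the effect-size computation (the numbers are hypothetical, not data from the meta-analysis), the standardized mean difference and a fixed-effect inverse-variance combination can be sketched as:

```python
import numpy as np

def standardized_d(m_young, m_old, sd_young, sd_old, n_young, n_old):
    # Pooled-SD standardized mean difference (Cohen's d).
    pooled = np.sqrt(((n_young - 1) * sd_young**2 +
                      (n_old - 1) * sd_old**2) /
                     (n_young + n_old - 2))
    return (m_old - m_young) / pooled

def combine(ds, variances):
    # Fixed-effect combined estimate: inverse-variance weighted mean
    # of the per-experiment d values.
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(ds)) / np.sum(w))

# Hypothetical mean fixation durations (ms) from two experiments:
d1 = standardized_d(220, 250, 30, 35, 25, 25)   # ~0.92
d2 = standardized_d(210, 245, 28, 40, 30, 28)
print(combine([d1, d2], variances=[0.09, 0.08]))
```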


Author(s):  
Laura Hurley

The inferior colliculus (IC) receives prominent projections from centralized neuromodulatory systems. These systems include extra-auditory clusters of cholinergic, dopaminergic, noradrenergic, and serotonergic neurons. Although these modulatory sites are not explicitly part of the auditory system, they receive projections from primary auditory regions and are responsive to acoustic stimuli. This bidirectional influence suggests the existence of auditory-modulatory feedback loops. A characteristic of neuromodulatory centers is that they integrate inputs from anatomically widespread and functionally diverse sets of brain regions. This connectivity gives neuromodulatory systems the potential to import information into the auditory system on situational variables that accompany acoustic stimuli, such as context, internal state, or experience. Once released, neuromodulators functionally reconfigure auditory circuitry through a variety of receptors expressed by auditory neurons. In addition to shaping ascending auditory information, neuromodulation within the IC influences behaviors that arise subcortically, such as prepulse inhibition of the startle response. Neuromodulatory systems therefore provide a route for integrative behavioral information to access auditory processing from its earliest levels.


2020 ◽  
Vol 10 (5) ◽  
pp. 92
Author(s):  
Ramtin Zargari Marandi ◽  
Camilla Ann Fjelsted ◽  
Iris Hrustanovic ◽  
Rikke Dan Olesen ◽  
Parisa Gazerani

The affective dimension of pain contributes to pain perception, and cognitive load may influence pain-related feelings. Eye tracking has proven useful for detecting cognitive-load effects objectively through relevant eye movement characteristics. In this study, we investigated whether eye movement characteristics differ in response to pain-related feelings under low and high cognitive loads. A set of validated control and pain-related sounds was used to provoke pain-related feelings. Twelve healthy young participants (six females) performed a cognitive task at two load levels, once with the control sounds and once with the pain-related sounds, in randomized order. During the tasks, eye movements and task performance were recorded. Afterwards, the participants filled out questionnaires on their pain perception in response to the applied cognitive loads. Our findings indicate that increased cognitive load was associated with decreased saccade peak velocity, saccade frequency, and fixation frequency, as well as increased fixation duration and pupil dilation range. Among the oculometrics, pain-related feelings were reflected only in the pupillary responses under low cognitive load. Task performance decreased, and perceived cognitive load increased, with task load level; neither was influenced by the pain-related sounds. Pain-related feelings were lower when performing the task than when no task was being performed in an independent group of participants, possibly because of cognitive engagement during the task. This study demonstrates that cognitive processing can moderate the feelings associated with pain perception.
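As a sketch of how such oculometrics are typically derived from raw gaze samples (simple velocity-threshold event detection; the sampling rate, threshold, and function name are illustrative, not the study's pipeline):

```python
import numpy as np

def oculometrics(x, y, pupil, fs=500.0, vel_thresh=30.0):
    """Derive the study's eye-movement features from raw gaze samples
    (x, y in degrees; pupil in arbitrary units) using velocity-threshold
    event detection. Settings here are illustrative defaults."""
    v = np.hypot(np.gradient(x), np.gradient(y)) * fs  # gaze speed, deg/s
    moving = v > vel_thresh                            # saccade samples
    onsets = np.flatnonzero(np.diff(moving.astype(int)) == 1)
    n_sacc, dur_s = len(onsets), len(x) / fs
    return {
        "saccade_freq_hz": n_sacc / dur_s,
        "saccade_peak_vel": float(v[moving].max()) if moving.any() else 0.0,
        "fixation_freq_hz": (n_sacc + 1) / dur_s,
        "pupil_dilation_range": float(np.max(pupil) - np.min(pupil)),
    }
```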

