Intact Dynamic Visual Capture in People With One Eye

2018 ◽  
Vol 31 (7) ◽  
pp. 675-688 ◽  
Author(s):  
Stefania S. Moro ◽  
Jennifer K. E. Steeves

Abstract Observing motion in one modality can influence the perceived direction of motion in a second modality (dynamic capture). For example, observing a square moving in depth can make a sound be perceived as increasing in loudness. The current study investigates whether people who have lost one eye are susceptible to audiovisual dynamic capture in the depth plane in the same way as binocular-viewing and eye-patched control participants. Partial deprivation of the visual system through the loss of one eye early in life results in changes in the remaining intact senses, such as hearing. Linearly expanding or contracting discs were paired with tones that increased or decreased in loudness, and participants were asked to indicate the direction of the auditory stimulus. The magnitude of dynamic visual capture was measured in people with one eye and compared with that of eye-patched and binocular-viewing controls. People with one eye showed the same susceptibility to dynamic visual capture as controls: they perceived the direction of the auditory signal to be moving in the direction of the incongruent visual signal, despite previously showing a lack of visual dominance for audiovisual cues. This behaviour may result from directing attention to the visual modality, their partially deficient sense, in order to gain important information about approaching and receding stimuli, which in the former case could be life-threatening. These results contribute to the growing body of research showing that people with one eye display unique accommodations in audiovisual processing that are likely adaptive in each unique sensory situation.
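A concrete way to see the stimulus logic is as paired ramps: the disc's radius and the tone's amplitude rise (approach) or fall (recede) over the same interval, and incongruent trials cross the two directions. Below is a minimal Python sketch of such a pairing; it is not the authors' stimulus code, and the duration, tone frequency, frame rate, and radius range are illustrative assumptions.

```python
# Minimal sketch (not the authors' actual stimulus code): a looming
# audiovisual pair, i.e., a linearly expanding disc-radius schedule and a
# tone whose intensity rises over the same interval. All parameter values
# are assumptions for illustration.
import numpy as np

SR = 44100          # audio sample rate (Hz)
DUR = 1.0           # stimulus duration (s), assumed
F0 = 500.0          # tone frequency (Hz), assumed

t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)
ramp = np.linspace(0.1, 1.0, t.size)        # rising amplitude -> "approaching"
approaching_tone = ramp * np.sin(2 * np.pi * F0 * t)
receding_tone = ramp[::-1] * np.sin(2 * np.pi * F0 * t)

# Matching visual schedule: disc radius (pixels) per video frame at 60 Hz.
frames = int(DUR * 60)
expanding_radius = np.linspace(10, 200, frames)   # congruent with approach
contracting_radius = expanding_radius[::-1]       # used for incongruent pairs

# An incongruent trial pairs, e.g., approaching_tone with contracting_radius
# and asks the participant to judge the direction of the *sound*.
```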

2020 ◽  
Vol 52 (06) ◽  
pp. 347-356
Author(s):  
Sven Gruber ◽  
Felix Beuschlein

Abstract Hypokalemia is closely linked with the pathophysiology of primary aldosteronism (PA). Although hypokalemic PA is less common than the normokalemic course of the disease, hypokalemia is of particular importance for the manifestation and development of comorbidities. Specifically, a growing body of evidence demonstrates that hypokalemia in PA patients is associated with a more severe disease course in terms of cardiovascular and metabolic morbidity and mortality. It is also well appreciated that low potassium levels per se can promote or exacerbate hypertension. The spectrum of hypokalemia-related symptoms ranges from asymptomatic courses to life-threatening conditions. Hypokalemia is found in 9–37% of all cases of PA, with a predominance in patients with aldosterone-producing adenoma. Conversely, hypokalemia resolves in almost 100% of cases after specific treatment of the disease, whether medical or surgical. However, to date, high-level evidence on the prevalence of primary aldosteronism in a hypokalemic population is lacking. Epidemiological data are expected from the recently launched IPAHK+ study (“Incidence of Primary Aldosteronism in Patients with Hypokalemia”).


2011 ◽  
Vol 139 (1-2) ◽  
pp. 69-75
Author(s):  
Sladjana Martinovic-Mitrovic ◽  
Aleksandra Dickov ◽  
Dragan Mitrovic ◽  
Veselin Dickov ◽  
Mirjana Jovanovic ◽  
...  

Introduction. The consequences of heroin abuse include organic damage to cerebral structures. The degree of impairment is directly and positively related to the length of heroin abuse. Objective. The aim of this research was to evaluate reaction time in heroin addicts with different lengths of substance abuse. Methods. Ninety examinees were divided into three groups according to the length of heroin abuse. Data collection included a questionnaire on socio-demographic and addiction characteristics. A specially designed programme was used to evaluate reaction time to audio/visual signals. Results. For reaction time as an overall model, the difference between examinees with different lengths of heroin abuse was at the margin of significance (F=1.69; df=12; p=0.07). In the visual modality, increasing length of heroin abuse led to significant prolongation of both simple (the first visual sign: F=3.29; df=2; p=0.04) and choice reaction time (the second visual sign: F=4.97; df=2; p=0.00; the third visual sign: F=3.08; df=2; p=0.05). Longer heroin consumption also led to prolongation of simple (the first auditory task: F=3.41; df=2; p=0.04) and complex auditory reaction time (the second auditory task: F=5.67; df=2; p=0.01; the third auditory task: F=6.42; df=2; p=0.00). Conclusion. Heroin abuse leads to prolongation of both simple and choice reaction time in the visual as well as the auditory modality. The average daily dose of opiates was the most important predictor of the above-mentioned cognitive dysfunction.
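The reported F, df, and p values correspond to standard analysis-of-variance comparisons across the three abuse-duration groups. As a hedged illustration of that analysis (simulated data, not the study's), a one-way ANOVA on reaction times might look like this:

```python
# Minimal sketch of the kind of group comparison the abstract reports (not
# the authors' code): a one-way ANOVA on reaction times across three groups
# defined by length of heroin abuse. Group means and spreads are assumed.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Simulated simple reaction times (ms) for three abuse-duration groups,
# 30 examinees each, with means drifting upward as in the reported results.
short_use = rng.normal(320, 40, 30)
medium_use = rng.normal(345, 40, 30)
long_use = rng.normal(370, 40, 30)

F, p = f_oneway(short_use, medium_use, long_use)
print(f"F={F:.2f}, p={p:.3f}")   # compare with the reported F and p values
```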


Author(s):  
Russell E. Lewis

Survival from many life-threatening invasive fungal diseases requires the timely administration of an effective systemic antifungal agent at the correct dose. Although some new antifungal agents have been introduced into clinical practice over the last two decades, each of these antifungals has limitations regarding spectrum, pharmacokinetic/pharmacodynamic properties, toxicity, and cost. Therefore, the selection and dosing of antifungal therapy need to be highly individualized. A growing body of evidence suggests that antifungal therapy is often underdosed, especially in critically ill patients with sepsis, hypoalbuminaemia, and extracorporeal circuits. This underdosing may contribute to poor outcomes and increase the risk of antifungal resistance. This chapter discusses some of the drug-specific and host-specific variables clinicians must consider when selecting and dosing antifungal therapy for invasive fungal diseases.


2003 ◽  
Vol 26 (1) ◽  
pp. 31-32
Author(s):  
Stephen Handel ◽  
Molly L. Erickson

Abstract There are 2,000 hair cells in the cochlea, but only three cone types in the retina. This disparity can be understood in terms of the differences between the physical characteristics of the auditory signal (discrete excitations and resonances, requiring many narrowly tuned receptors) and those of the visual signal (smooth daylight excitations and reflectances, requiring only a few broadly tuned receptors). We argue that this match supports the physicalism of color and timbre.
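The tuning-width argument can be made concrete numerically: a few broad channels suffice to cover a smooth spectrum, while resolving discrete resonances calls for many narrow channels. The sketch below is our illustration, not the commentary's; all centers and bandwidths are assumed values.

```python
# Illustrative sketch of the tuning-width argument (not from the target
# article): a smooth spectrum can be encoded by a few broadly tuned
# receptors, whereas narrow resonances require many narrowly tuned ones.
import numpy as np

def gaussian_tuning(x, centers, width):
    """Responses of receptors with Gaussian tuning curves to inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Vision: three broad cone-like channels across 400-700 nm.
wavelengths = np.linspace(400, 700, 301)
cone_centers = np.array([440.0, 540.0, 570.0])     # S, M, L (approximate)
cone_resp = gaussian_tuning(wavelengths, cone_centers, width=60.0)

# Audition: many narrow channels across 100-8000 Hz (log-frequency axis).
freqs = np.logspace(2, np.log10(8000), 500)
channel_centers = np.logspace(2, np.log10(8000), 40)
ear_resp = gaussian_tuning(np.log(freqs), np.log(channel_centers), width=0.05)

print(cone_resp.shape, ear_resp.shape)  # (301, 3) vs (500, 40)
```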


2020 ◽  
Vol 82 (7) ◽  
pp. 3544-3557 ◽  
Author(s):  
Jemaine E. Stacey ◽  
Christina J. Howard ◽  
Suvobrata Mitra ◽  
Paula C. Stacey

Abstract Seeing a talker’s face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signals affects AV integration. Where people look on the face in this context is also of interest; Buchan, Paré and Munhall (Brain Research, 1242, 162–171, 2008) found that fixations on the mouth increased in the presence of auditory noise, whilst Wilson, Alsius, Paré and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601–615, 2016) found that mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech, and in Condition 2, participants listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased McGurk responses. Participants fixated the mouth more on trials in which the McGurk effect was perceived. Adding auditory noise led people to fixate the mouth more, while visual degradation led people to fixate the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables vary according to the quality of information available.
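Vocoded speech of the kind described is typically produced with a noise vocoder: the signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise. The sketch below shows a generic version of that recipe, not the authors' exact processing; the band count and frequency edges are assumptions.

```python
# Minimal noise-vocoder sketch of the kind used to simulate cochlear-implant
# hearing (a generic recipe, not the authors' exact pipeline): split speech
# into log-spaced bands, extract each band's Hilbert envelope, and use it to
# modulate band-limited noise.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, sr, n_bands=8, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_bands + 1)    # log-spaced band edges (Hz)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for i in range(n_bands):
        b, a = butter(4, [edges[i], edges[i + 1]], btype="band", fs=sr)
        band = filtfilt(b, a, speech)
        env = np.abs(hilbert(band))              # amplitude envelope
        carrier = filtfilt(b, a, rng.standard_normal(speech.size))
        out += env * carrier                     # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)

# Example on a synthetic amplitude-modulated tone standing in for speech:
sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 150 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speech, sr)
```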


2012 ◽  
Vol 25 (0) ◽  
pp. 112 ◽  
Author(s):  
Lukasz Piwek ◽  
Karin Petrini ◽  
Frank E. Pollick

Multimodal perception of emotions has typically been examined using displays of a solitary character (e.g., the face–voice and/or body–sound of one actor). We extend this investigation to more complex, dyadic point-light displays combined with speech. A motion and voice capture system was used to record twenty actors interacting in couples with happy, angry and neutral emotional expressions. The stimuli obtained were validated in a pilot study and used in the present study to investigate multimodal perception of emotional social interactions. Participants were required to categorize happy and angry expressions displayed visually, auditorily, or using emotionally congruent and incongruent bimodal displays. In a series of cross-validation experiments, we found that sound dominated the visual signal in the perception of emotional social interaction. Although participants’ judgments were faster in the bimodal condition, the accuracy of judgments was similar for the bimodal and auditory-only conditions. When participants watched emotionally mismatched bimodal displays, they predominantly oriented their judgments towards the auditory rather than the visual signal. This auditory dominance persisted even when the reliability of the auditory signal was decreased with noise, although visual information had some effect on judgments of emotions when it was combined with a noisy auditory signal. Our results suggest that when judging emotions from an observed social interaction, we rely primarily on vocal cues from the conversation rather than on visual cues from the actors’ body movements.
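Auditory dominance in this design is read off the incongruent trials: the proportion of emotion judgments that follow the voice rather than the point-light display. A hedged sketch with simulated data follows; the 80% bias is an assumption for illustration, not the study's figure.

```python
# Hedged sketch (simulated data, not the study's): quantify auditory
# dominance on incongruent bimodal trials as the proportion of judgments
# consistent with the auditory channel.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 120
auditory_emotion = rng.choice(["happy", "angry"], n_trials)
visual_emotion = np.where(auditory_emotion == "happy", "angry", "happy")

# Simulate a response bias towards audition (80% here, by assumption).
follows_audio = rng.random(n_trials) < 0.8
response = np.where(follows_audio, auditory_emotion, visual_emotion)

auditory_dominance = np.mean(response == auditory_emotion)
print(f"Proportion of auditory-consistent judgments: {auditory_dominance:.2f}")
```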


2019 ◽  
Vol 62 (10) ◽  
pp. 3860-3875 ◽  
Author(s):  
Kaylah Lalonde ◽  
Lynne A. Werner

Purpose This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1–3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset–offset cue for detection, but the same cue did not improve their discrimination. The onset–offset cue benefited infants for both detection and discrimination. Whereas the onset–offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions These results suggest that infants' use of visual onset–offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
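The mixed-linear-modeling approach named in the Results can be sketched as fixed effects of age group and cue condition with a random effect of participant. The following is a minimal illustration with simulated data and simplified factors, not the authors' actual model specification.

```python
# Minimal sketch of a mixed linear model of visual benefit (simulated data;
# the factor structure and effect sizes are assumptions, not the study's).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for group, n in [("infant", 60), ("adult", 24)]:
    for pid in range(n):
        subj = f"{group}_{pid}"
        for cond in ["auditory_only", "onset_offset", "visual_speech"]:
            # Simulated detection benefit (dB); condition means are assumed.
            benefit = {"auditory_only": 0.0, "onset_offset": 1.5,
                       "visual_speech": 2.5}[cond] + rng.normal(0, 1)
            rows.append({"subject": subj, "group": group,
                         "condition": cond, "benefit": benefit})
df = pd.DataFrame(rows)

# Fixed effects of group and condition; random intercept per participant.
model = smf.mixedlm("benefit ~ group * condition", df, groups=df["subject"])
print(model.fit().summary())
```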


2020 ◽  
Author(s):  
Christopher W Robinson

The current study examined how simple tones affect speeded visual responses in a visual-spatial sequence learning task. Across the three reported experiments, participants were presented with a visual target that appeared in different locations on a touchscreen monitor, and they were instructed to touch the visual targets as quickly as possible. Response times typically sped up across training, and participants were slower to respond to the visual stimuli when the sequences were paired with tones. Moreover, these interference effects were more pronounced early in training, and explicit instructions directing attention to the visual modality had little effect on eliminating auditory interference, suggesting that these interference effects may stem from bottom-up factors and do not appear to be under attentional control. These findings have implications for tasks that require the processing of simultaneously presented auditory and visual information and provide support for a proposed mechanism underlying auditory dominance in a task that is typically better suited to the visual modality.
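The central interference effect is a within-participant slowing of visual response times when tones accompany the sequence. A hedged sketch of that comparison on simulated data follows; the effect size and sample size are assumptions.

```python
# Hedged sketch (simulated data): paired comparison of touch response times
# for visual-only versus tone-paired sequence blocks, as in the auditory
# interference effect the abstract describes.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_participants = 30
visual_only_rt = rng.normal(450, 50, n_participants)            # ms, assumed
tone_paired_rt = visual_only_rt + rng.normal(40, 20, n_participants)

t, p = ttest_rel(tone_paired_rt, visual_only_rt)
print(f"slowing = {np.mean(tone_paired_rt - visual_only_rt):.1f} ms, "
      f"t={t:.2f}, p={p:.4f}")
```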


2021 ◽  
Vol 15 ◽  
Author(s):  
Thorben Hülsdünker ◽  
David Riedel ◽  
Hannes Käsbauer ◽  
Diemo Ruhnow ◽  
Andreas Mierau

Although vision is the dominant sensory system in sports, many situations require multisensory integration. Faster processing of auditory information in the brain may facilitate time-critical abilities such as reaction speed; however, previous research was limited by generic auditory and visual stimuli that did not consider audio-visual characteristics in ecologically valid environments. This study investigated reaction speed in response to sport-specific monosensory (visual and auditory) and multisensory (audio-visual) stimulation. Neurophysiological analyses identified the neural processes contributing to differences in reaction speed. Nineteen elite badminton players participated in this study. In a first recording phase, the sound profile and shuttle speed of smash and drop strokes were identified on a badminton court using high-speed video cameras and binaural recordings. The speed and sound characteristics were transferred into auditory and visual stimuli and presented in a lab-based experiment, where participants reacted to sport-specific monosensory or multisensory stimulation. Auditory signal presentation was delayed by 26 ms to account for realistic audio-visual signal interaction on the court. N1 and N2 event-related potentials, as indicators of auditory and visual information perception/processing, respectively, were identified using a 64-channel EEG. Despite the 26 ms delay, auditory reactions were significantly faster than visual reactions (236.6 ms vs. 287.7 ms, p < 0.001) but still slower than reactions to multisensory stimulation (224.4 ms, p = 0.002). Across conditions, response times to smashes were faster than to drops (233.2 ms vs. 265.9 ms, p < 0.001). Faster reactions were paralleled by a lower latency and higher amplitude of the auditory N1 and visual N2 potentials. The results emphasize the potential of auditory information to accelerate reaction time in sport-specific multisensory situations. This highlights auditory processes as a promising target for training interventions in racquet sports.
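The 26 ms figure follows from simple acoustics: on court, the stroke sound must travel to the receiver at the speed of sound, whereas the visual signal arrives effectively instantaneously. A back-of-envelope check, with the player separation as our assumption:

```python
# Back-of-envelope check of the 26 ms audio delay (our arithmetic, not the
# authors'): the stroke sound reaches the receiver at the speed of sound,
# while the visual signal arrives effectively instantaneously.
SPEED_OF_SOUND = 343.0      # m/s in air at ~20 degrees C
court_distance = 8.9        # m, assumed stroke-to-receiver distance

delay_s = court_distance / SPEED_OF_SOUND
print(f"audio lag ~= {delay_s * 1000:.0f} ms")   # ~26 ms
```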


2018 ◽  
Vol 30 (3) ◽  
pp. 319-337 ◽  
Author(s):  
David M. Simon ◽  
Mark T. Wallace

Multisensory integration of visual mouth movements with auditory speech is known to offer substantial perceptual benefits, particularly under challenging (i.e., noisy) acoustic conditions. Previous work characterizing this process has found that ERPs to auditory speech are of shorter latency and smaller magnitude in the presence of visual speech. We sought to determine the dependency of these effects on the temporal relationship between the auditory and visual speech streams using EEG. We found that reductions in ERP latency and suppression of ERP amplitude are maximal when the visual signal precedes the auditory signal by a small interval and that increasing amounts of asynchrony reduce these effects in a continuous manner. Time–frequency analysis revealed that these effects are found primarily in the theta (4–8 Hz) and alpha (8–12 Hz) bands, with a central topography consistent with auditory generators. Theta effects also persisted in the lower portion of the band (3.5–5 Hz), and this late activity was more frontally distributed. Importantly, the magnitude of these late theta oscillations not only differed with the temporal characteristics of the stimuli but also served to predict participants' task performance. Our analysis thus reveals that suppression of single-trial brain responses by visual speech depends strongly on the temporal concordance of the auditory and visual inputs. It further illustrates that processes in the lower theta band, which we suggest as an index of incongruity processing, might serve to reflect the neural correlates of individual differences in multisensory temporal perception.
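Band-limited activity of the kind analyzed here can be extracted in several ways; one generic recipe is a bandpass filter followed by a Hilbert envelope. The sketch below illustrates that recipe on a synthetic signal. It is not the paper's exact time-frequency pipeline, and the sampling rate and test signal are assumptions; only the theta (4–8 Hz) and alpha (8–12 Hz) band definitions come from the abstract.

```python
# Minimal sketch of extracting theta- and alpha-band power from a single
# EEG channel (generic bandpass + Hilbert-envelope recipe, not the paper's
# exact time-frequency analysis).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(x, sr, lo, hi):
    """Instantaneous power in the [lo, hi] Hz band via the Hilbert envelope."""
    b, a = butter(4, [lo, hi], btype="band", fs=sr)
    analytic = hilbert(filtfilt(b, a, x))
    return np.abs(analytic) ** 2

sr = 250                                    # sampling rate (Hz), assumed
t = np.arange(2 * sr) / sr
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)  # toy signal

theta = band_power(eeg, sr, 4.0, 8.0)       # theta band, as in the paper
alpha = band_power(eeg, sr, 8.0, 12.0)      # alpha band, as in the paper
print(theta.mean(), alpha.mean())
```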

