The Effect of Irrelevant Environmental Noise on the Performance of Visual-to-Auditory Sensory Substitution Devices Used by Blind Adults

2019 ◽  
Vol 32 (2) ◽  
pp. 87-109 ◽  
Author(s):  
Galit Buchs ◽  
Benedetta Heimler ◽  
Amir Amedi

Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation that aim to convey whole-scene visual information through the intact auditory modality. Although proven effective in lab environments, the use of SSDs has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks via SSDs (i.e., shape, color, and the conjunction of the two features). Their performance was compared in two separate conditions: a silent baseline, and with irrelevant background sounds from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the two conditions (i.e., noisy; silent) for any of the tasks; in the conjunction task (shape and color) we found only a non-significant trend towards a disturbing effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be successfully used in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step towards the actual use of SSDs in real-life situations while potentially impacting rehabilitation of sensory-deprived individuals.

2012 ◽  
Vol 25 (0) ◽  
pp. 191
Author(s):  
Ella Striem-Amit ◽  
Miriam Guendelman ◽  
Amir Amedi

Sensory Substitution Devices (SSDs) convey visual information through sounds or touch, thus theoretically enabling a form of visual rehabilitation in the blind. However, for clinical use, these devices must provide fine-detailed visual information, which had not yet been demonstrated for this or other means of visual restoration. To test the possible functional acuity conveyed by such devices, we used the Snellen acuity test conveyed through a high-resolution visual-to-auditory SSD (The vOICe). We show that congenitally fully blind adults can exceed the World Health Organization (WHO) blindness acuity threshold using SSDs, reaching the highest acuity reported yet with any visual rehabilitation approach. Preliminary findings of a neuroimaging study of a similar reading task using SSDs suggest the specific involvement of the congenitally blind visual cortex in processing sights-from-sounds. These results demonstrate the potential capacity of SSDs as inexpensive, non-invasive visual rehabilitation aids, as well as their advantage in charting the retention of functional properties of the visual cortex of the blind.
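To make the visual-to-auditory conversion these studies rely on more concrete, here is a toy sketch of a vOICe-style image-to-sound mapping: the image is swept left to right, vertical position maps to pitch, and pixel brightness maps to loudness. This is an illustration of the general scheme, not the actual vOICe or EyeMusic software; the frequency range, sweep duration, and normalization are assumptions.

```python
# Illustrative vOICe-style image-to-sound mapping (not the actual device code).
import numpy as np

def image_to_sound(image, sr=44100, sweep_s=1.0, f_min=500.0, f_max=5000.0):
    """image: 2D array of brightness values in [0, 1], row 0 at the top."""
    n_rows, n_cols = image.shape
    col_len = int(sr * sweep_s / n_cols)           # samples per image column
    t = np.arange(col_len) / sr
    # Higher rows (smaller row index) map to higher frequencies.
    freqs = np.logspace(np.log10(f_max), np.log10(f_min), n_rows)
    audio = []
    for c in range(n_cols):                        # left-to-right sweep
        col = np.zeros(col_len)
        for r in range(n_rows):
            amp = image[r, c]                      # brightness -> loudness
            if amp > 0:
                col += amp * np.sin(2 * np.pi * freqs[r] * t)
        audio.append(col / max(n_rows, 1))         # crude normalization
    return np.concatenate(audio)

# Example: a bright diagonal line produces a tone rising in pitch over one second.
img = np.eye(16)[::-1]                             # diagonal from bottom-left to top-right
signal = image_to_sound(img)
```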


2021 ◽  
Author(s):  
Katarzyna Ciesla ◽  
T. Wolak ◽  
A. Lorens ◽  
H. Skarżyński ◽  
A. Amedi

Understanding speech in background noise is challenging, and wearing face masks during the COVID-19 pandemic made it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD), that can deliver speech simultaneously through audition and as vibrations on the fingertips. After a short training session, 16 out of 17 participants significantly improved in speech-in-noise understanding when the added vibrations corresponded to low frequencies extracted from the sentence. The level of understanding was maintained after training even when the loudness of the background noise doubled (mean group improvement of ~10 decibels). This result indicates that our solution can be very useful for hearing-impaired patients. Even more interestingly, the improvement transferred to a post-training situation in which the touch input was removed, showing that the setup can be applied to auditory rehabilitation in cochlear-implant users. Future wearable implementations of our SSD could also be used in real-life situations, such as talking on the phone or learning a foreign language. We discuss the basic-science implications of our findings; for example, we show that even in adulthood a new pairing can be established between a neuronal computation (speech processing) and an atypical sensory modality (tactile). Speech is indeed a multisensory signal, but it is learned from birth in an audio-visual context. Interestingly, adding lip-reading cues to speech in noise provides a benefit of the same or lower magnitude than the one we report here for adding touch.
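As a minimal sketch of the kind of processing described above (not the authors' actual pipeline), one way to derive a fingertip-vibration signal from a spoken sentence is to keep only its low-frequency content. The 250 Hz cutoff, the Butterworth low-pass filter, and the file name are illustrative assumptions; the authors' extraction method may differ.

```python
# Hedged sketch: speech waveform -> low-frequency vibration drive signal.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

def speech_to_vibration(wav_path, cutoff_hz=250.0, order=4):
    sr, speech = wavfile.read(wav_path)               # speech recording
    speech = speech.astype(np.float64)
    if speech.ndim > 1:                               # fold stereo to mono
        speech = speech.mean(axis=1)
    b, a = butter(order, cutoff_hz / (sr / 2), btype="low")
    vibration = filtfilt(b, a, speech)                # zero-phase low-pass filter
    vibration /= np.max(np.abs(vibration)) or 1.0     # normalize actuator drive
    return sr, vibration

# sr, vib = speech_to_vibration("sentence.wav")       # "sentence.wav" is hypothetical
```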


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Haoze Chen ◽  
Zhijie Zhang

Because the audio signatures of different vehicle types are distinct, a vehicle can be identified accurately from its audio signal alone: in real life, determining the type of a vehicle does not require visual information, only audio. In this paper, we extract and stitch together features from different aspects: Mel-frequency cepstral coefficients for perceptual characteristics, a pitch class profile for psychoacoustic characteristics, and short-term energy for acoustic characteristics. In addition, we improve the neural-network classifier by fusing an LSTM unit into the convolutional neural network. Finally, we feed the novel feature into the hybrid neural network to recognize different vehicles. The results suggest that the novel feature proposed in this paper increases the recognition rate by 7%; that randomly corrupting the training data by superimposing different kinds of noise improves the anti-noise ability of our identification system; and that, because LSTM has great advantages in modeling time series, adding LSTM to the network improves the recognition rate by 3.39%.
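A minimal sketch (not the authors' code) of the pipeline described above: stitch MFCCs, a pitch-class (chroma) profile, and short-term energy into one frame-level feature matrix, then classify with a convolutional front end feeding an LSTM layer. The use of librosa and Keras, and all hyperparameters, are illustrative assumptions.

```python
# Stitched audio features + CNN-LSTM hybrid classifier (illustrative sketch).
import numpy as np
import librosa
import tensorflow as tf

def stitched_features(wav_path, sr=22050):
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # perceptual characteristics
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)      # pitch class profile
    energy = librosa.feature.rms(y=y)                     # short-term energy
    feats = np.vstack([mfcc, chroma, energy])             # (13 + 12 + 1, frames)
    return feats.T                                        # (frames, 26)

def build_cnn_lstm(n_frames, n_feats, n_classes):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_frames, n_feats)),
        tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64),                          # LSTM unit fused after the CNN
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# model = build_cnn_lstm(n_frames=128, n_feats=26, n_classes=5)
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```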


1988 ◽  
Vol 32 (2) ◽  
pp. 75-75
Author(s):  
Thomas Z. Strybel

Development of head-coupled control/display systems has focused primarily on the display of three-dimensional visual information, as the visual system is the optimal sensory channel for the acquisition of spatial information in humans. The auditory system improves the efficiency of vision, however, by obtaining spatial information about relevant objects outside of the visual field of view. This auditory information can be used to direct head and eye movements. Head-coupled display systems can also benefit from the addition of auditory spatial information, as it provides a natural method of signaling the location of important events outside of the visual field of view. This symposium will report on current efforts in the development of head-coupled display systems, with an emphasis on the auditory spatial component. The first paper, “Virtual Interface Environment Workstations” by Scott S. Fisher, will report on the development of a prototype virtual environment. This environment consists of a head-mounted, wide-angle, stereoscopic display system which is controlled by operator position, voice, and gesture. With this interface, an operator can virtually explore a 360-degree synthesized environment and viscerally interact with its components. The second paper, “A Virtual Display System For Conveying Three-Dimensional Acoustic Information” by Elizabeth M. Wenzel, Frederic L. Wightman, and Scott H. Foster, will report on the development of a method of synthetically generating three-dimensional sound cues for the above-mentioned interface. The development of simulated auditory spatial cues is limited, to some extent, by our knowledge of auditory spatial processing. The remaining papers will report on two areas of auditory space perception that have received little attention until recently. “Perception of Real and Simulated Motion in the Auditory Modality”, by Thomas Z. Strybel, will review recent research on auditory motion perception, because a natural acoustic environment must contain moving sounds. This review will consider applications of this knowledge to head-coupled display systems. The last paper, “Auditory Psychomotor Coordination”, will examine the interplay between the auditory, visual, and motor systems. The specific emphasis of this paper is the use of auditory spatial information in the regulation of motor responses so as to allow efficient application of the visual channel.


2014 ◽  
Vol 26 (12) ◽  
pp. 2827-2839 ◽  
Author(s):  
Maria J. S. Guerreiro ◽  
Joaquin A. Anguera ◽  
Jyoti Mishra ◽  
Pascal W. M. Van Gerven ◽  
Adam Gazzaley

Selective attention involves top–down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top–down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top–down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.


2009 ◽  
Vol 21 (02) ◽  
pp. 131-137
Author(s):  
Rong Liu

A critical challenge in telerobotic systems is data communication over networks without performance guarantees. This paper proposes a novel way of using auditory feedback as the sensory feedback to ensure that a teleoperated robotic system still functions in real time under unfavorable communication conditions, such as image losses, visual failures, and low-bandwidth communication links. The proposed method is tested through psychoacoustic experiments with 10 subjects conducting real-time robotic navigation tasks. Performance is analyzed from an objective point of view (time to finish the task, distance-to-target measurements) as well as through subjective workload assessments for the different sensory feedbacks. Moreover, the bandwidth consumed when auditory information is used is considerably lower than that required for visual information. Preliminary results demonstrate the feasibility of auditory display as a complement or substitute to visual display for remote robotic navigation.
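As a hedged illustration (not the paper's method) of how auditory feedback can replace a video stream in such a setting, one simple sonification maps the robot's distance to the target onto the pitch of a short beep, so only a few bytes per update need to cross the network. The frequency range and beep duration below are assumptions.

```python
# Toy distance-to-pitch sonification for remote navigation feedback.
import numpy as np

def distance_to_beep(distance_m, max_distance_m=10.0, sr=16000, dur_s=0.1):
    """Closer targets produce higher-pitched beeps; the mapping is illustrative."""
    d = np.clip(distance_m, 0.0, max_distance_m) / max_distance_m
    freq = 200.0 + (1.0 - d) * 1800.0       # 200 Hz when far .. 2000 Hz at the target
    t = np.arange(int(sr * dur_s)) / sr
    return 0.5 * np.sin(2 * np.pi * freq * t)

# beep = distance_to_beep(2.5)              # ~1.55 kHz tone for a target 2.5 m away
```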


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


2021 ◽  
Author(s):  
Olivier Bonnot ◽  
Vladimir Adrien ◽  
Veronique Venelle ◽  
Dominique Bonneau ◽  
Fanny Gollier-Briant ◽  
...  

BACKGROUND Conflicting data emerge from the literature regarding the actual use of smartphone applications in medicine, with some authors considering them a breakthrough while others suggest that real-life use is disappointing. Digital tools are nonetheless ever more present in medicine. We developed SMARTAUTISM, a smartphone application focused on empowerment that provides day-to-day help for parents of a child with Autism Spectrum Disorder (ASD) by asking questions and providing a feedback screen with simple curves. OBJECTIVE To evaluate the qualitative and quantitative usage of a smartphone application by caregivers of ASD individuals. METHODS This is a prospective, longitudinal, exploratory, open study with a 6-month follow-up period of families having one child with ASD. Data were recorded longitudinally, and the outcome criteria were: (i) overall filling rate, (ii) filling rate by degree of completion and by users' interest in our feedback screen, and a qualitative questionnaire based on attrition. RESULTS Participants had a very high intent to use our app during the six-month period (95%). However, secondary analysis shows that only 46 of the subjects had a constant filling rate over 50%. Interestingly, these high-profile users are characterized by higher use of, and satisfaction with, the feedback screen when compared to low (p<0.001) and moderate (p=0.007) users. CONCLUSIONS Real or perceived utility is an important incentive for the use of empowerment smartphone apps. CLINICALTRIAL NCT03020277 INTERNATIONAL REGISTERED REPORT RR2-10.1136/bmjopen-2016-012135


2008 ◽  
Vol 2 (2) ◽  
Author(s):  
Glenn Nordehn ◽  
Spencer Strunic ◽  
Tom Soldner ◽  
Nicholas Karlisch ◽  
Ian Kramer ◽  
...  

Introduction: Cardiac auscultation accuracy is poor: 20% to 40%. Audio-only training with 500 heart-sound cycles over a short time period significantly improved auscultation scores. Hypothesis: adding visual information to an audio-only format significantly (p<.05) improves short- and long-term accuracy. Methods: Twenty-two 1st- and 2nd-year medical students participated and took an audio-only pre-test. Seven students, comprising our audio-only training cohort, heard audio only of 500 heart-sound repetitions. Fifteen students, comprising our paired visual-with-audio cohort, heard and simultaneously watched video spectrograms of the heart sounds. Immediately after training, both cohorts took audio-only post-tests; the visual-with-audio cohort also took a visual-with-audio post-test, a test providing audio with simultaneous video spectrograms. All tests were repeated after six months. Results: All tests given immediately after training showed significant improvement, with no significant difference between the cohorts. Six months later, neither cohort maintained significant improvement on the audio-only post-tests. Six months later, the visual-with-audio cohort maintained significant improvement (p<.05) on the visual-with-audio post-test. Conclusions: Retention of heart-sound recognition from audio is not maintained whether training uses audio only or visual with audio. Providing visual with audio in both training and testing allows retention of auscultation accuracy. Devices providing visual information during auscultation could prove beneficial.
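For readers unfamiliar with the visual format used in that training, here is a minimal sketch of computing and plotting a spectrogram of a heart-sound recording, of the kind shown alongside the audio. The file name, STFT parameters, and frequency limit are illustrative assumptions, not the study's settings.

```python
# Sketch: spectrogram of a mono heart-sound recording (illustrative parameters).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, heart = wavfile.read("heart_sound.wav")            # hypothetical mono recording
f, t, Sxx = spectrogram(heart, fs=sr, nperseg=1024, noverlap=512)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")   # dB scale
plt.ylim(0, 600)        # most heart-sound energy sits below a few hundred Hz
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```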

