Identification of sound source in machine vision sequences using audio information

Author(s):  
A. Vahedian


2015 ◽
Vol 713-715 ◽  
pp. 966-969
Author(s):  
Sheng Dong ◽  
Ai Guo Zhao ◽  
Li Yun Xing ◽  
Fei Wang ◽  
Fang Lei Song ◽  
...  

The purpose of this paper is to design an independent rescue system for family or battlefield use, based on passive sound source localization and machine vision. The system consists of five modules: a sound acquisition and processing module based on an FPGA, an image acquisition and processing module based on the TMS320DM6437, a video capture and communication control module based on the TMS320DM355, a motion control module based on the MSP430F149, and a PC server with a smart handheld mobile client. The system can automatically track a target person who needs help by recognizing his or her voice and image, provide emergency medicines and communication tools, transmit real-time video over a wireless network to the PC server and an Android mobile client, and be controlled remotely.
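The abstract does not detail the FPGA localization algorithm; as a rough illustration of the passive sound source localization principle such a module typically builds on, the sketch below estimates a bearing from the time difference of arrival (TDOA) between two microphones via cross-correlation. All function names, the microphone spacing, and the sample rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival between two microphone
    signals from the peak of their cross-correlation.
    Returns (arrival time at A) - (arrival time at B) in seconds,
    so a negative value means the sound reached mic A first."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # peak offset in samples
    return lag / fs

def azimuth_from_tdoa(tdoa, mic_spacing, c=343.0):
    """Convert a TDOA into a bearing for a two-microphone array under a
    far-field assumption. Returns degrees from broadside."""
    s = np.clip(c * tdoa / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic check: a click that arrives 5 samples later at mic B.
fs = 16000
click = np.zeros(1024)
click[100] = 1.0
mic_a = click
mic_b = np.roll(click, 5)  # delayed copy of the same click
tdoa = estimate_tdoa(mic_a, mic_b, fs)
```

A real array would use more microphones and a noise-robust correlation (e.g., GCC-PHAT), but the peak-picking idea is the same.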


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2794
Author(s):  
Mohammadreza Mirzaei ◽  
Peter Kán ◽  
Hannes Kaufmann

Sound source localization is important for spatial awareness and immersive Virtual Reality (VR) experiences. Deaf and Hard-of-Hearing (DHH) persons have limitations in completing sound-related VR tasks efficiently because they perceive audio information differently. This paper presents and evaluates a special haptic VR suit that helps DHH persons efficiently complete sound-related VR tasks. Our proposed VR suit receives sound information from the VR environment wirelessly and indicates the direction of the sound source to the DHH user by using vibrotactile feedback. Our study suggests that using different setups of the VR suit can significantly improve VR task completion times compared to not using a VR suit. Additionally, the results of mounting haptic devices on different positions of users’ bodies indicate that DHH users can complete a VR task significantly faster when two vibro-motors are mounted on their arms and ears compared to their thighs. Our quantitative and qualitative analysis demonstrates that DHH persons prefer using the system without the VR suit and prefer mounting vibro-motors in their ears. In an additional study, we did not find a significant difference in task completion time when using four vibro-motors with the VR suit compared to using only two vibro-motors in users’ ears without the VR suit.
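The suit's core mapping, as described, is from a sound-source direction to a vibro-motor placed on the body. A toy sketch of that idea, assuming a hypothetical motor layout and angle convention (0° straight ahead, positive clockwise); none of these names or placements come from the paper:

```python
# Hypothetical vibro-motor placements around the user, in degrees.
MOTORS = {"left_ear": -90, "right_ear": 90, "left_arm": -150, "right_arm": 150}

def nearest_motor(source_azimuth_deg):
    """Pick the vibro-motor whose placement angle is closest to the
    sound-source azimuth, using shortest angular distance on the circle."""
    def ang_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(MOTORS, key=lambda m: ang_dist(MOTORS[m], source_azimuth_deg))
```

For example, a source at -80° (front-left) would drive the left-ear motor, while one at 140° (rear-right) would drive the right-arm motor.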


2017 ◽  
Vol 29 (1) ◽  
pp. 146-153 ◽  
Author(s):  
Ryo Suzuki ◽  
Takuto Takahashi ◽  
Hiroshi G. Okuno

[Figure: Children calling Cocoron to come closer] We have developed a self-propelled robotic pet equipped with the robot audition software HARK (Honda Research Institute Japan Audition for Robots with Kyoto University) to provide sound source localization, enabling it to move in the direction of sound sources. The robot, which has no cameras or speakers, communicates with humans using only its own movements and the surrounding audio information obtained through a microphone. Field experiments, in which participants gained hands-on experience with the robot, confirmed that participants behaved and felt as if they were touching a real pet. We also found that its high-precision sound source localization contributed to promoting and facilitating human-robot interaction.
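HARK's localization stage typically yields an azimuth estimate per detected source; moving "in the direction of sound sources" then reduces to a turn-toward behavior. A minimal, hypothetical control sketch (the gain and rate limit are illustrative choices, not values from the paper):

```python
def steering_command(azimuth_deg, gain=0.02, max_rate=1.0):
    """Turn-rate command (rad/s) proportional to the localized source
    azimuth: the robot rotates until the source is dead ahead (0 deg),
    with the rate clipped to a safe maximum."""
    rate = gain * azimuth_deg
    return max(-max_rate, min(max_rate, rate))
```

A source straight ahead yields no turn; a source far to the side saturates at the rate limit, so the robot sweeps smoothly toward the caller.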


Author(s):  
Wesley E. Snyder ◽  
Hairong Qi

1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones played in random order by eight sound sources in the horizontal plane. Subjects either could or could not use information supplied by their pinnae (external ears) and their head movements. We found that both the pinnae and head movements had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive: the absence of either factor produced the same loss of localization accuracy and even much the same error pattern. Head movement analysis showed that subjects turned their faces towards the emitting sound source, except for sources located exactly in front or exactly behind, which were identified by turning the head to both sides. Head movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.


2013 ◽  
Author(s):  
Susanne Mayr ◽  
Gunnar Regenbrecht ◽  
Kathrin Lange ◽  
Albertgeorg Lang ◽  
Axel Buchner

2013 ◽  
Author(s):  
Agoston Torok ◽  
Daniel Mestre ◽  
Ferenc Honbolygo ◽  
Pierre Mallet ◽  
Jean-Marie Pergandi ◽  
...  
