Wearable Hearing Assist System to Provide Hearing-Dog Functionality

Robotics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 49
Author(s):  
Ryuichi Shimoyama

This study developed a wearable hearing-assist system that identifies the direction of a sound source from short-term interaural time differences (ITDs) of sound pressure and conveys that direction to a hearing-impaired person via vibrators attached to his or her shoulders. The system, which is equipped with two microphones, could dynamically detect and convey the direction of front, side, and even rear sound sources. A male subject wearing the developed system was able to turn his head toward continuous or intermittent sound sources within approximately 2.8 s. The sound source direction tends to be overestimated when the distance between the two ears is smaller. When the subject can use vision, this may help in tracking the location of the target sound source, especially once the target comes into view, and it may shorten the tracking period.
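
The abstract does not give the estimation method in detail; the sketch below is only a minimal illustration of how a short-term ITD could be obtained from two microphone signals by cross-correlation and mapped to a horizontal direction with a simple far-field model. The sampling rate, microphone spacing, and function names are assumptions for the example, not the paper's implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.18       # m, assumed ear-to-ear microphone distance
FS = 16000               # Hz, assumed sampling rate

def estimate_itd(left, right, fs=FS):
    """Estimate the interaural time difference (s) of a short frame from the
    peak of the cross-correlation.  A positive ITD means the sound reached
    the left microphone first (source on the listener's left)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return -lag / fs

def itd_to_azimuth(itd, spacing=MIC_SPACING, c=SPEED_OF_SOUND):
    """Map an ITD to a horizontal direction with the simple far-field model
    itd = (spacing / c) * sin(azimuth); positive azimuth = listener's left."""
    s = np.clip(itd * c / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Example: a 1 kHz tone arriving from roughly 30 degrees to the left.
t = np.arange(0, 0.05, 1 / FS)
true_delay = MIC_SPACING / SPEED_OF_SOUND * np.sin(np.radians(30))
left = np.sin(2 * np.pi * 1000 * t)
right = np.sin(2 * np.pi * 1000 * (t - true_delay))
print(itd_to_azimuth(estimate_itd(left, right)))   # close to +30 degrees
```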

1998 ◽  
Vol 10 (1) ◽  
pp. 62-68
Author(s):  
Manabu Ishihara ◽  
Makoto Matsuo ◽  
Jun Shirataki

In this study, we used noise as a sound source and defined the source volume as sound pressure. We used the analytic hierarchy process (AHP) to analyze the relationship between sonority and sound pressure in the median front and its surroundings, and characterized the resulting auditory sensation. The results showed that the farther the perceived sound moved from the center, the worse the consistency index became, taking values of 0.1-0.9 in such cases; that is, consistency degraded as the sound moved away from the median front. In addition, the consistency index was 0.10-0.27 in the median front after correcting for up-down sound images that were perceived inversely. The consistency index was 0.12-0.24 when the sound sources were in the same direction and 0.16-0.63 when they were in different directions. The correction was evident in the experimental results, but consistency worsened as the sound moved away from the center, so a satisfactory correction can be expected only in the median front.
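
The consistency index referred to here is the standard AHP quantity CI = (λ_max − n) / (n − 1) computed from a reciprocal pairwise comparison matrix. A minimal sketch of that calculation follows; the example judgement matrix is illustrative only, not data from the study.

```python
import numpy as np

def ahp_consistency_index(pairwise):
    """Consistency index CI = (lambda_max - n) / (n - 1) of a reciprocal
    pairwise comparison matrix, as used in the analytic hierarchy process."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    lambda_max = np.linalg.eigvals(a).real.max()
    return (lambda_max - n) / (n - 1)

# Illustrative 3x3 judgement matrix (not experimental data): item 1 is judged
# 3x louder than item 2 and 5x louder than item 3, etc.
m = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(round(ahp_consistency_index(m), 3))   # near 0 => consistent judgements
```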


2005 ◽  
Vol 13 (01) ◽  
pp. 187-201 ◽  
Author(s):  
Bodo Nolte

The self-developed boundary element code BEMCUP-3D solves structural-dynamic and acoustic problems as well as fluid-structure interaction phenomena in the frequency domain. Among the attainable outputs of this program are the system matrices. The inverse acoustic problem (sound source identification) is considered without inversion of matrices. The envelope surface (measurement surface), which encloses the entire arbitrarily shaped sound source, is treated as an exterior problem. The Dirichlet data on this surface are given by the sound pressure distribution of the sound source itself, which is likewise treated as an exterior problem. This ensures that the corresponding velocity values on the measurement surface are exactly the same for both problems. Next, the region between the sound source (an arbitrarily vibrating structure) and the envelope surface (a measurement surface in experimental investigations) is treated as an interior problem. The boundary conditions on the outer (measurement) surface for this problem are the Dirichlet data together with the already available Neumann data, i.e., the sound pressure and velocity distributions. An algorithm ensures that, after the system is solved, the unknown sound pressure and velocity values of the sound source appear in the solution vector. Simple sound sources are used to investigate the stability as well as the optimal shape and position of the measurement surface.
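
BEMCUP-3D and its interior/exterior BEM formulation are not reproduced in the abstract. As a loosely related toy illustration of inverse source identification from data on a measurement surface, the sketch below recovers the strengths of two assumed monopoles from sound pressures sampled on a surrounding sphere by least squares (an equivalent-source-style stand-in; all geometry, frequencies, and names are invented for the example and are not the paper's method).

```python
import numpy as np

def monopole_pressure(src_pos, src_strength, field_pos, k):
    """Free-field pressure of a monopole: p = A * exp(-i k r) / (4 pi r)."""
    r = np.linalg.norm(field_pos - src_pos, axis=-1)
    return src_strength * np.exp(-1j * k * r) / (4 * np.pi * r)

k = 2 * np.pi * 500 / 343.0                               # wavenumber at 500 Hz
sources = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])    # assumed source points
true_strengths = np.array([1.0 + 0j, 0.5j])

# "Measurement surface": 50 points on a sphere of radius 1 m around the sources.
rng = np.random.default_rng(0)
directions = rng.normal(size=(50, 3))
mics = directions / np.linalg.norm(directions, axis=1, keepdims=True)

# Simulated measured pressures (superposition of the two monopoles).
p_meas = sum(monopole_pressure(s, a, mics, k)
             for s, a in zip(sources, true_strengths))

# Build the transfer matrix and recover the source strengths by least squares.
H = np.column_stack([monopole_pressure(s, 1.0, mics, k) for s in sources])
recovered, *_ = np.linalg.lstsq(H, p_meas, rcond=None)
print(np.round(recovered, 3))                             # matches true_strengths
```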


1989 ◽  
Vol 111 (4) ◽  
pp. 480-485 ◽  
Author(s):  
Ken’iti Kido ◽  
Hiroshi Kanai ◽  
Masato Abe

This paper describes further investigations of an active noise control system in which an additional sound source is set close to the primary (noise) source. Successful application of this method to duct noise control has already been reported (Kido, 1987). The synthesized sound radiated by the additional source is identical to that of the primary source, except in polarity. The additional and primary sources form a dipole sound source with reduced effective radiation power. In theory, the distance between the two sound sources should be much less than the shortest wavelength in the required frequency range to realize an ideal dipole source. The total sound pressure would then be expected to decay in inverse proportion to the square of the distance from the center of the sources, and little sound power would be radiated. In practice, however, the distance cannot be made small enough, so there is only a relatively small region around the dipole where the sound pressure decays in inverse proportion to the square of the distance; farther away, it decays in inverse proportion to the distance itself, and the achievable noise reduction is therefore limited. This paper describes the effects and the performance limits of such a system as a function of wavelength and the dimensions of the sound sources.
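
A quick numerical check of the decay behavior described above: the sketch below sums the free-field pressures of two opposite-phase monopoles a small distance apart and prints the local decay exponent of the on-axis amplitude, which is roughly -2 close to the pair and approaches -1 farther away. The frequency and spacing are chosen only for illustration and are not values from the paper.

```python
import numpy as np

C = 343.0          # m/s
F = 50.0           # Hz, illustrative frequency
K = 2 * np.pi * F / C
D = 0.05           # m, assumed spacing of the primary and additional source

def dipole_pressure(r):
    """On-axis pressure magnitude of two opposite-phase monopoles of unit
    strength separated by D, at distance r from their midpoint (free field)."""
    r1, r2 = r - D / 2, r + D / 2
    p = (np.exp(-1j * K * r1) / (4 * np.pi * r1)
         - np.exp(-1j * K * r2) / (4 * np.pi * r2))
    return np.abs(p)

radii = np.array([0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6])
levels = dipole_pressure(radii)

# Local log-log slope: about -2 near the pair (kr << 1), about -1 far away.
slopes = np.diff(np.log(levels)) / np.diff(np.log(radii))
for r, s in zip(radii[1:], slopes):
    print(f"r = {r:5.1f} m   decay exponent = {s:5.2f}")
```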


Author(s):  
Yukihito Niino ◽  
Toshihiko Shiraishi ◽  
Shin Morishita

Humans can recognize individual speech signals within mixtures produced by two or more simultaneous speakers, an ability known as the cocktail party effect. Applying this effect to engineering would enable novel blind source separation systems, such as automatic speech recognition systems and active noise control systems operating under environmental noise. A variety of methods have been developed to improve the performance of blind source separation in the presence of background noise or interfering speech. Because blind source separation mimics a human capability, artificial neural networks are well suited to it. In this paper, we propose a method of blind source separation using a neural network that adaptively separates sound sources while training its internal parameters. The network is three-layered. Sound pressure was emitted from two sound sources and the mixed sound was measured with two microphones. The time history of the microphone signals was fed to the input layer. The two outputs of the hidden layer correspond to the two separated sound pressures, while the two outputs of the output layer correspond to the microphone signals expected at the next time step; these are compared with the actual microphone signals at the next time step to train the network by backpropagation. In this procedure, the signal from each sound source is adaptively separated. Two sound source conditions were used: sinusoidal signals of 440 and 1000 Hz. To assess the performance of the neural network numerically and experimentally, a basic independent component analysis (ICA) was conducted for comparison. The results show that the separation performance of the neural network was higher than that of the basic ICA and that the network can successfully separate the sound sources regardless of their positions.
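
The abstract does not give the network equations, so the sketch below is only a rough guess at the described architecture: a three-layer network whose two hidden units are read out as the separated signals and whose two outputs are trained by backpropagation to predict the next-step microphone samples. The window length, activations, learning rate, and mixing matrix are all assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)
FS = 8000
t = np.arange(0, 1.0, 1 / FS)

# Two assumed sources (440 Hz and 1000 Hz tones, as in the experiment) mixed
# by an arbitrary instantaneous matrix standing in for the two microphones.
sources = np.vstack([np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 1000 * t)])
mixing = np.array([[1.0, 0.6], [0.5, 1.0]])
mics = mixing @ sources                                  # shape (2, N)

WINDOW = 8                                               # past samples per mic fed to the net
W1 = rng.normal(scale=0.1, size=(2, 2 * WINDOW))         # input -> 2 hidden units
W2 = rng.normal(scale=0.1, size=(2, 2))                  # hidden -> next-step prediction
LR = 0.002

for step in range(30000):
    n = rng.integers(WINDOW, mics.shape[1] - 1)
    x = mics[:, n - WINDOW:n].reshape(-1)                # recent history of both mics
    target = mics[:, n]                                  # mic samples at the next step
    h = W1 @ x                                           # hidden layer: candidate separated signals
    y = W2 @ h                                           # predicted next mic samples
    err = y - target
    grad_W2 = np.outer(err, h)                           # backprop of squared error,
    grad_W1 = np.outer(W2.T @ err, x)                    # linear activations
    W2 -= LR * grad_W2
    W1 -= LR * grad_W1

# Read out the hidden-unit signals and print how strongly each correlates with
# each original tone (a crude check of whether any separation emerged).
frames = np.array([mics[:, n - WINDOW:n].reshape(-1)
                   for n in range(WINDOW, mics.shape[1])])
separated = frames @ W1.T
for i in range(2):
    corr = [abs(np.corrcoef(separated[:, i], s[WINDOW:])[0, 1]) for s in sources]
    print(f"hidden unit {i}: corr with 440 Hz = {corr[0]:.2f}, 1000 Hz = {corr[1]:.2f}")
```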


2015 ◽  
Vol 23 (03) ◽  
pp. 1550014 ◽  
Author(s):  
Krzysztof Szemela

The sound radiation inside an acoustic canyon has been analyzed for a surface sound source located at the bottom. Based on rigorous mathematical manipulations, computationally efficient formulas describing the sound pressure and sound power have been obtained. They can easily be adapted to describe the sound radiation of an arbitrary system of sound sources. As an example of their application, the sound radiation of a piston has been investigated, and asymptotic formulas for the sound power modal coefficients have been obtained; these can be used to significantly improve the numerical calculation of the sound power.
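
The canyon formulas derived in the paper are not given in the abstract. Purely as a reference point for the piston example, the sketch below evaluates the classical free-field sound power of a baffled circular piston, W = ½ ρ c S |v|² R₁(2ka) with R₁(x) = 1 − 2J₁(x)/x; this is the standard textbook result, not the canyon geometry treated in the paper, and the piston size and velocity are arbitrary.

```python
import numpy as np
from scipy.special import j1

RHO = 1.21      # kg/m^3, air density
C = 343.0       # m/s, speed of sound

def piston_sound_power(radius, velocity_amp, freq):
    """Time-averaged sound power of a circular piston of the given radius (m)
    vibrating with the given velocity amplitude (m/s) in an infinite baffle."""
    k = 2 * np.pi * freq / C
    x = 2 * k * radius
    resistance_ratio = 1 - 2 * j1(x) / x          # R1(2ka)
    area = np.pi * radius ** 2
    return 0.5 * RHO * C * area * velocity_amp ** 2 * resistance_ratio

for f in (100, 500, 2000):
    w = piston_sound_power(radius=0.1, velocity_amp=0.01, freq=f)
    print(f"{f:5d} Hz   W = {w:.3e} W   ({10 * np.log10(w / 1e-12):.1f} dB re 1 pW)")
```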


Materials ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 429
Author(s):  
Jiangming Jin ◽  
Hao Cheng ◽  
Tianwei Xie ◽  
Huancai Lu

Controlling low-frequency noise in an interior sound field is always a challenge in engineering because it is hard to localize the sound source accurately. Spherical acoustic holography can reconstruct the 3D distributions of acoustic quantities in the interior sound field and identify low-frequency sound sources, but the ultimate goal of controlling interior noise is to improve the sound quality of the interior field, so it is essential to know the contributions of the sound sources to the sound quality objective parameters. This paper presents a methodology for mapping sound pressure to sound quality objective parameters, in which the parameters are calculated from the sound pressure at each specific point; the 3D distributions of loudness and sharpness are then obtained by evaluating them at every point in the interior sound field. The reconstruction errors of these quantities as functions of reconstruction distance, sound frequency, and intersection angle are analyzed in numerical simulations of one- and two-monopole source sound fields, and verification experiments have been conducted in an anechoic chamber. Simulation and experimental results demonstrate that sound source localization based on the 3D distributions of sound quality objective parameters differs from that based on sound pressure.
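
The paper maps reconstructed pressure to standardized loudness and sharpness (Zwicker-type models), which are too long to reproduce here. The sketch below therefore uses much cruder proxies (an overall sound pressure level and a spectral-centroid-style "sharpness" stand-in) simply to illustrate the idea of computing a per-point map of quality metrics from pressure time series on a grid; the grid, signal, and metric definitions are all assumptions for the example.

```python
import numpy as np

P_REF = 20e-6    # Pa, reference pressure

def spl_db(p):
    """Overall sound pressure level (dB) of a pressure time series."""
    return 20 * np.log10(np.sqrt(np.mean(p ** 2)) / P_REF)

def sharpness_proxy(p, fs):
    """Crude stand-in for sharpness: power-weighted mean frequency (Hz).
    Standardized sharpness (acum) would require a full loudness model."""
    spec = np.abs(np.fft.rfft(p)) ** 2
    freqs = np.fft.rfftfreq(len(p), 1 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

# Illustrative "reconstructed" field: a 200 Hz tone whose amplitude decays
# with distance from a corner source, sampled at a 3 x 3 grid of points.
FS, T = 8000, 0.5
t = np.arange(0, T, 1 / FS)
for ix in range(3):
    for iy in range(3):
        r = 1.0 + np.hypot(ix, iy)
        p = (0.1 / r) * np.sin(2 * np.pi * 200 * t)
        print(f"point ({ix},{iy}):  SPL = {spl_db(p):5.1f} dB   "
              f"mean freq = {sharpness_proxy(p, FS):6.1f} Hz")
```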


Signals ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 490-507
Author(s):  
Ryuichi Shimoyama

We developed a hearing assistance system that enables hearing-impaired people to track the horizontal movement of a single sound source. The movement of the sound source is presented to the subject through vibrators on both shoulders, driven according to the distance to and direction of the sound source, which are estimated from the acoustic signals detected by microphones attached to both ears. The direction is conveyed by changing the ratio of the intensities of the two vibrators, and the distance by increasing the overall intensity as the sound source gets closer. By turning the face toward the direction where the intensities of both vibrators are equal, the subject can recognize an approaching sound source as a change in vibration intensity. The direction of the moving sound source can be tracked with an accuracy of better than 5° when an analog vibration pattern is added to indicate the source direction. By presenting the source direction with high accuracy, it is possible to convey to subjects the approach and departure of a sound source.
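
The abstract describes presenting direction by the ratio of the two vibrator intensities and distance by the overall intensity. The sketch below shows one simple way such a mapping could look; the panning law, distance scaling, and intensity range are assumptions for illustration, not the authors' calibration.

```python
import numpy as np

MAX_LEVEL = 1.0      # assumed full-scale vibrator drive
MIN_LEVEL = 0.05     # assumed faint drive for distant sources
MAX_RANGE = 5.0      # m, assumed distance at which the cue fades to MIN_LEVEL

def vibrator_levels(azimuth_deg, distance_m):
    """Map a source direction (degrees, positive = listener's left) and
    distance (m) to drive levels for the left and right shoulder vibrators.
    Equal levels mean the source is straight ahead; the overall level grows
    as the source approaches."""
    pan = np.clip(azimuth_deg / 90.0, -1.0, 1.0)            # -1 = right, +1 = left
    closeness = np.clip(1.0 - distance_m / MAX_RANGE, 0.0, 1.0)
    overall = MIN_LEVEL + (MAX_LEVEL - MIN_LEVEL) * closeness
    left = overall * (1.0 + pan) / 2.0
    right = overall * (1.0 - pan) / 2.0
    return left, right

for az, dist in [(0, 4.0), (0, 1.0), (30, 2.0), (-60, 2.0)]:
    l, r = vibrator_levels(az, dist)
    print(f"azimuth {az:4d} deg, distance {dist:.1f} m -> left {l:.2f}, right {r:.2f}")
```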


1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones played in random order by eight sound sources in the horizontal plane. Subjects either could or could not use the information supplied by their pinnae (external ears) and their head movements. We found that both the pinnae and head movements had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive; the absence of either factor produced the same loss of localization accuracy and even much the same error pattern. Head movement analysis showed that subjects turn their face toward the emitting sound source, except for sources exactly in front or exactly behind, which are identified by turning the head to both sides. The head movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.


Author(s):  
Andrew Hadfield

There were few subjects that animated people in early modern Europe more than lying. The subject is endlessly represented and discussed in literature; treatises on rhetoric and courtiership; theology, philosophy, and jurisprudence; travel writing; pamphlets and news books; science and empirical observation; popular culture, especially books about strange, unexplained phenomena; and, of course, legal discourse. For many, lying could be controlled and limited even if not eradicated; for others, lying was a necessary element of a casuistical tradition, liars balancing complicated issues and short-term pragmatic considerations in the expectation of solving more problems than they caused through their deceit....


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 532
Author(s):  
Henglin Pu ◽  
Chao Cai ◽  
Menglan Hu ◽  
Tianping Deng ◽  
Rong Zheng ◽  
...  

Multiple blind sound source localization is a key technology for a myriad of applications such as robotic navigation and indoor localization. However, existing solutions can only locate a few sound sources simultaneously because of the limitation imposed by the number of microphones in an array. To this end, this paper proposes a novel multiple blind sound source localization algorithm using Source seParation and BeamForming (SPBF). Our algorithm overcomes the limitations of existing solutions and can locate more blind sources than there are microphones in the array. Specifically, we propose a novel microphone layout that enables effective separation of multiple sources while preserving their arrival time information. We then localize each demixed source via beamforming. This design minimizes mutual interference between sound sources, thereby enabling finer angle of arrival (AoA) estimation. To further enhance localization performance, we design a new spectral weighting function that improves the signal-to-noise ratio, allowing a relatively narrow beam and thus finer AoA estimation. Simulation experiments under typical indoor conditions demonstrate a maximum localization error of only 4° even with up to 14 sources.
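
The SPBF pipeline itself (the microphone layout, the separation stage, and the proposed spectral weighting) is not specified in the abstract, so the sketch below only illustrates the beamforming step it builds on: a plain delay-and-sum scan over candidate angles for one already-demixed source on an assumed four-microphone linear array. The array geometry, sampling rate, and test signal are assumptions for the example.

```python
import numpy as np

C = 343.0                                     # m/s
FS = 16000                                    # Hz
MIC_X = np.array([0.0, 0.05, 0.10, 0.15])     # assumed linear array positions (m)

def delay_and_sum_aoa(signals, candidates_deg):
    """Scan candidate arrival angles: for each angle, phase-align every channel
    by its far-field delay and sum; the angle with the highest output power is
    the AoA estimate.  `signals` has shape (n_mics, n_samples)."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spectra = np.fft.rfft(signals, axis=1)
    powers = []
    for theta in np.radians(candidates_deg):
        delays = MIC_X * np.sin(theta) / C                     # s, relative to mic 0
        steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        beam = np.sum(spectra * steering, axis=0)              # align and sum
        powers.append(np.sum(np.abs(beam) ** 2))
    candidates = np.asarray(candidates_deg)
    return candidates[int(np.argmax(powers))]

# Example: a 700 Hz tone arriving from 25 degrees.
t = np.arange(0, 0.1, 1 / FS)
true_delays = MIC_X * np.sin(np.radians(25)) / C
signals = np.vstack([np.sin(2 * np.pi * 700 * (t - d)) for d in true_delays])
print(delay_and_sum_aoa(signals, np.arange(-90, 91, 1)))       # prints 25
```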

