PocketEAR: An Assistive Sound Classification System for Hearing-Impaired

Author(s):  
Kamil Ekštein
2008, Vol 15 (1), pp. 85-94
Author(s):
Enrique Alexandre, Lucas Cuadra, Lorena Álvarez, Manuel Rosa-Zurera, Francisco López-Ferreras

Author(s):  
Ria Sinha

Abstract: This paper describes a digital assistant designed to help hearing-impaired people sense ambient sounds. The assistant captures audio signals from the user's surroundings and analyses them with a machine learning model that uses spectral signatures as features. The model classifies each signal into an audio category (e.g., emergency, animal sounds) and a specific audio type within that category (e.g., ambulance siren, dog barking), and notifies the user via a mobile or wearable device. The user can configure active notification preferences and view historical logs. The classifier is periodically retrained externally on labeled audio samples. Additional system features include an audio amplification option and a speech-to-text option for transcribing human speech to text output.

Keywords: assistive technology, sound classification, machine learning, audio processing, spectral fingerprinting
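The pipeline the abstract describes (spectral signatures extracted as features, then classified into a sound category) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the coarse band-energy `spectral_signature` and the nearest-centroid `classify` below are simplified stand-ins for the actual spectral fingerprinting features and the trained model.

```python
import numpy as np

def spectral_signature(signal, n_bands=8):
    """Coarse spectral signature: mean log power in n_bands
    equal-width frequency bands of the signal's spectrum.
    (Illustrative stand-in for spectral fingerprinting.)"""
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(power, n_bands)
    return np.array([np.log1p(b.mean()) for b in bands])

def classify(signature, centroids):
    """Nearest-centroid classification: return the label whose
    stored signature is closest in Euclidean distance."""
    return min(centroids, key=lambda label: np.linalg.norm(signature - centroids[label]))
```

A real deployment would replace the centroid lookup with the externally trained classifier the abstract mentions, but the interface is the same: features in, category label out.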


2020, Vol MA2020-01 (26), pp. 1853-1853
Author(s):
Oleksii Kudin, Anastasiia Kryvokhata, Vitaliy Ivanovich Gorbenko

2020
Author(s):
Sai Priyamka Kotha, Sravani Nallagari, Jinan Fiaidhi

Speech is the most efficient and convenient way of communication. The learning capabilities of deep learning architectures can be used to build a sound classification system that overcomes the efficiency limitations of traditional systems. We propose to develop a model that classifies audio by speaker.
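As a rough sketch of what inference in such a deep-learning classifier looks like, the following implements a single-hidden-layer network forward pass over a fixed-length audio feature vector in NumPy. The dimensions, weights, and names here are illustrative placeholders, not the authors' architecture; a real system would learn the weights by backpropagation on labeled audio.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(features, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer classifier:
    ReLU hidden layer, then softmax over speaker classes."""
    h = np.maximum(0.0, W1 @ features + b1)  # hidden activations
    return softmax(W2 @ h + b2)              # class probabilities
```

The output is a probability distribution over classes; the predicted speaker is its argmax.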


2020, Vol 43 (2), pp. 505-515
Author(s):
Palani Thanaraj Krishnan, Parvathavarthini Balasubramanian, Snekhalatha Umapathy
