Spoken Language Identification System for Kashmiri and Related Languages Using Mel-Spectrograms and Deep Learning Approach

Author(s): Irshad Ahmad Thukroo, Rumaan Bashir

2020, Vol 32, pp. 01010
Author(s): Shubham Godbole, Vaishnavi Jadhav, Gajanan Birajdar

Spoken language is the most common medium of human communication today. Efforts to build language identification systems for Indian languages have been quite limited owing to the problem of speaker availability and language readability, yet the need for spoken language identification (SLID) in civil and defence applications is growing daily. Feature extraction is a basic and essential step in language identification (LID). An audio sample is converted into a spectrogram, a visual representation that characterises its spectrum of frequencies over time. Three such spectrogram representations, namely the Log Spectrogram, Gammatonegram and IIR-CQT Spectrogram, were generated for audio samples from the standardized IIIT-H Indic Speech Database. These visual representations capture language-specific details and the character of each language. The spectrogram images were then used as input to a convolutional neural network (CNN), and a classification accuracy of 98.86% was obtained using the proposed methodology.
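The first step described above, converting an audio sample into a log spectrogram image, can be sketched as follows. This is a minimal illustration using SciPy's short-time spectrogram, not the exact pipeline of the paper; the window and hop sizes, and the synthetic tone standing in for a speech sample, are assumptions for demonstration.

```python
import numpy as np
from scipy.signal import spectrogram

def log_spectrogram(audio, sample_rate, n_fft=512, hop=128):
    """Compute a log-magnitude spectrogram (frequency bins x time frames)."""
    freqs, times, sxx = spectrogram(
        audio, fs=sample_rate, nperseg=n_fft, noverlap=n_fft - hop
    )
    # Log compression stabilises the dynamic range before the image
    # is handed to a CNN as input.
    return freqs, times, np.log(sxx + 1e-10)

# A 1-second synthetic 440 Hz tone stands in for a real speech sample.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)

freqs, times, spec = log_spectrogram(audio, sr)
print(spec.shape)  # (frequency bins, time frames)
```

In the described system, such 2-D arrays (rendered as images, and likewise for the Gammatonegram and IIR-CQT variants) form the CNN's training input.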


2019, Vol 31 (12), pp. 8483-8501
Author(s): Himadri Mukherjee, Subhankar Ghosh, Shibaprasad Sen, Obaidullah Sk Md, K. C. Santosh, ...

2021
Author(s): P. Golda Jeyasheeli, N. Indumathi

About 1 percent of the Indian population is deaf and mute. Deaf and mute people use gestures to interact with each other, but ordinary people often fail to grasp the meaning of these gestures, which makes interaction between the two groups difficult. To help ordinary citizens understand the signs, an automated sign language identification system is proposed. A smart wearable hand device is designed by attaching different sensors to a glove to capture the gestures. Each gesture produces unique sensor values, and those values are collected as spreadsheet data. The characteristics of the movements are extracted and categorised with the aid of a convolutional neural network (CNN), which then identifies gestures from the test set according to this classification. The objective of this system is to bridge the interaction gap between people who are deaf or hard of hearing and the rest of society.
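The gesture-recognition idea above, mapping a vector of glove sensor readings to a gesture label, can be sketched with a simple nearest-template classifier. This is a hypothetical stand-in for the CNN classifier in the paper: the five flex-sensor layout, the gesture names, and the hard-coded readings are all illustrative assumptions, not values from the described system.

```python
import numpy as np

# Hypothetical reference readings: five flex-sensor values per gesture.
# In the described system such rows would be exported from the glove
# into a spreadsheet; here they are hard-coded stand-ins.
GESTURES = {
    "hello":     np.array([0.9, 0.8, 0.7, 0.8, 0.9]),
    "thank_you": np.array([0.2, 0.3, 0.9, 0.3, 0.2]),
}

def classify(sample, templates):
    """Return the label whose template is closest in Euclidean distance.
    A nearest-template rule standing in for the paper's CNN classifier."""
    return min(templates, key=lambda g: np.linalg.norm(sample - templates[g]))

reading = np.array([0.85, 0.75, 0.72, 0.81, 0.88])  # a new glove reading
label = classify(reading, GESTURES)
print(label)
```

A trained CNN replaces the fixed templates with learned feature extractors, but the input/output contract is the same: one sensor vector in, one gesture label out.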


Author(s):  
Mitsuru Baba ◽  
Naoto Hoshikawa ◽  
Hirotaka Nakayama ◽  
Tomoyoshi Ito ◽  
Atsushi Shiraki
