Articulatory-feature-based methods for performance improvement of Multilingual Phone Recognition Systems using Indian languages

Sadhana ◽  
2020 ◽  
Vol 45 (1) ◽  
Author(s):  
K E Manjunath ◽  
Dinesh Babu Jayagopi ◽  
K Sreenivasa Rao ◽  
V Ramasubramanian

Author(s):  
Manjunath K. E. ◽  
Srinivasa Raghavan K. M. ◽  
K. Sreenivasa Rao ◽  
Dinesh Babu Jayagopi ◽  
V. Ramasubramanian

In this study, we evaluate and compare two different approaches for multilingual phone recognition in code-switched and non-code-switched scenarios. The first approach is a front-end Language Identification (LID) stage that switches to a monolingual phone recognizer (LID-Mono), trained individually on each of the languages present in the multilingual dataset. In the second approach, a common multilingual phone-set derived from the International Phonetic Alphabet (IPA) transcription of the multilingual dataset is used to develop a Multilingual Phone Recognition System (Multi-PRS). The bilingual code-switching experiments are conducted using Kannada and Urdu. In the first approach, LID is performed using state-of-the-art i-vectors. Both the monolingual and multilingual phone recognition systems are trained using Deep Neural Networks. The performance of the LID-Mono and Multi-PRS approaches is compared and analysed in detail. It is found that the Multi-PRS approach is superior to the more conventional LID-Mono approach in both code-switched and non-code-switched scenarios. For code-switched speech, the effect of the length of the segments used to perform LID on the performance of the LID-Mono system is studied by varying the window size from 500 ms to 5.0 s, and the full utterance. The LID-Mono approach depends heavily on the accuracy of the LID system, and LID errors cannot be recovered from. The Multi-PRS system, by contrast, does not require front-end LID switching and is designed around the common multilingual phone-set derived from several languages; it is therefore not constrained by the accuracy of the LID system and performs effectively on both code-switched and non-code-switched speech, offering lower Phone Error Rates than the LID-Mono system.
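The contrast between the two decoding pipelines can be sketched as below. This is a minimal illustration, not the systems trained in the study: the LID and recognizer functions are hypothetical stubs, and the window size is just an example within the 500 ms to 5.0 s range discussed above.

```python
# Minimal sketch (not the authors' implementation) contrasting the two decoding
# pipelines described above. All recognizer/LID functions are hypothetical stubs.

from typing import Callable, Dict, List

def lid_mono_decode(frames: List[float],
                    identify_language: Callable[[List[float]], str],
                    mono_recognizers: Dict[str, Callable[[List[float]], List[str]]],
                    window_size: int = 50) -> List[str]:
    """LID-Mono: run LID on fixed-length windows, then route each window to the
    matching monolingual phone recognizer. Any LID error propagates, because the
    wrong recognizer decodes that window."""
    phones: List[str] = []
    for start in range(0, len(frames), window_size):
        window = frames[start:start + window_size]
        lang = identify_language(window)           # e.g. i-vector based LID
        phones.extend(mono_recognizers[lang](window))
    return phones

def multi_prs_decode(frames: List[float],
                     multilingual_recognizer: Callable[[List[float]], List[str]]) -> List[str]:
    """Multi-PRS: a single recognizer trained on the common IPA-derived
    phone-set decodes the whole utterance; no front-end LID switching."""
    return multilingual_recognizer(frames)

# Toy usage with stub components (purely illustrative).
if __name__ == "__main__":
    frames = [0.0] * 200                            # stand-in for acoustic frames
    stub_lid = lambda w: "kannada"                  # always guesses Kannada
    stub_mono = {"kannada": lambda w: ["a", "k"], "urdu": lambda w: ["q", "i"]}
    stub_multi = lambda w: ["a", "k", "q", "i"]     # common IPA phone-set output
    print(lid_mono_decode(frames, stub_lid, stub_mono))
    print(multi_prs_decode(frames, stub_multi))
```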


2019 ◽  
Vol 22 (1) ◽  
pp. 157-168 ◽  
Author(s):  
K. E. Manjunath ◽  
Dinesh Babu Jayagopi ◽  
K. Sreenivasa Rao ◽  
V. Ramasubramanian

2013 ◽  
Vol 6 (1) ◽  
pp. 266-271
Author(s):  
Anurag Upadhyay ◽  
Chitranjanjit Kaur

This paper addresses the problem of speech recognition to identify various modes of speech data. Speaker sounds are the acoustic sounds of speech. Statistical models of speech have been widely used for speech recognition under neural networks. In this paper we propose and try to justify a new model in which speech coarticulation, the effect of phonetic context on speech sounds, is modeled explicitly under a statistical framework. We study speech phone recognition by recurrent neural networks and SOUL neural networks. A general framework for recurrent neural networks and considerations for network training are discussed in detail. The SOUL NN clusters the large vocabulary, compressing huge speech data sets. The project also covers different Indian languages uttered by different speakers in different modes such as aggressive, happy, sad, and angry. Many alternative energy measures and training methods are proposed and implemented. A speaker-independent phone recognition rate of 82% with a 25% frame error rate has been achieved on the neural database. Neural speech recognition experiments on the NTIMIT database result in a phone recognition rate of 68% correct. The research results in this thesis are competitive with the best results reported in the literature.
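As a concrete illustration of frame-level phone recognition with a recurrent network, the sketch below (assuming PyTorch is available) runs a single gradient step on random stand-in features; the layer sizes, 39-dimensional features and 40-phone inventory are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch of frame-level phone classification with a recurrent network,
# in the spirit of the recurrent-neural-network framework discussed above.

import torch
import torch.nn as nn

class FramePhoneRNN(nn.Module):
    def __init__(self, n_features: int = 39, n_phones: int = 40, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)   # recurrent encoder
        self.out = nn.Linear(hidden, n_phones)                    # per-frame phone scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)           # (batch, frames, hidden)
        return self.out(h)           # (batch, frames, n_phones)

if __name__ == "__main__":
    model = FramePhoneRNN()
    feats = torch.randn(2, 100, 39)              # stand-in for MFCC frames
    labels = torch.randint(0, 40, (2, 100))      # per-frame phone labels
    logits = model(feats)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 40), labels.reshape(-1))
    loss.backward()                              # gradients for one training step
    print(float(loss))
```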


Algorithms ◽  
2019 ◽  
Vol 12 (10) ◽  
pp. 217 ◽  
Author(s):  
Alaa E. Abdel Hakim ◽  
Wael Deabes

In supervised Activities of Daily Living (ADL) recognition systems, annotating the collected sensor readings is an essential, yet exhausting, task. Readings are collected from activity-monitoring sensors in a 24/7 manner. The size of the produced dataset is so huge that it is almost impossible for a human annotator to give a certain label to every single instance in the dataset. This results in annotation gaps in the input data to the adopting learning system, and the performance of the recognition system is negatively affected by these gaps. In this work, we propose and investigate three different paradigms to handle these gaps. In the first paradigm, the gaps are taken out by dropping all unlabeled readings. In the second paradigm, a single “Unknown” or “Do-Nothing” label is given to the unlabeled readings. The last paradigm handles the gaps by giving every set of them a unique label derived from the certain labels that encapsulate it. We also propose a semantic preprocessing method for annotation gaps by constructing a hybrid combination of some of these paradigms for further performance improvement. The performance of the proposed three paradigms and their hybrid combination is evaluated using an ADL benchmark dataset containing more than 2.5 × 10⁶ sensor readings collected over more than nine months. The evaluation results emphasize the performance contrast under the operation of each paradigm and support a specific gap-handling approach for better performance.
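The three gap-handling paradigms can be sketched as below; this is an illustrative outline, not the authors' implementation, and the reading format and gap-label scheme are assumptions.

```python
# Minimal sketch of the three annotation-gap paradigms:
# (1) drop unlabeled readings, (2) map them to a single "Unknown" label,
# (3) give each gap a unique label derived from the certain labels around it.

from typing import List, Optional, Tuple

Reading = Tuple[float, Optional[str]]   # (sensor value, label or None for a gap)

def drop_gaps(data: List[Reading]) -> List[Reading]:
    return [(x, y) for x, y in data if y is not None]

def unknown_label(data: List[Reading]) -> List[Reading]:
    return [(x, y if y is not None else "Unknown") for x, y in data]

def bounded_gap_labels(data: List[Reading]) -> List[Reading]:
    out: List[Reading] = []
    prev = "start"
    for i, (x, y) in enumerate(data):
        if y is not None:
            prev = y
            out.append((x, y))
        else:
            # next certain label after this gap, if any
            nxt = next((l for _, l in data[i + 1:] if l is not None), "end")
            out.append((x, f"gap:{prev}->{nxt}"))
    return out

if __name__ == "__main__":
    stream = [(0.1, "Sleep"), (0.2, None), (0.3, None), (0.4, "Breakfast")]
    print(drop_gaps(stream))
    print(unknown_label(stream))
    print(bounded_gap_labels(stream))
```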


Author(s):  
N. Shobha Rani ◽  
Sanjay Kumar Verma ◽  
Anitta Joseph

Realization of high accuracy and efficiency in South Indian character recognition systems is one of the principal goals to be pursued repeatedly so as to promote the usage of optical character recognition (OCR) for South Indian languages like Telugu. The process of character recognition comprises pre-processing, segmentation, feature extraction, classification and recognition. The feature extraction stage is meant for uniquely representing each character image for the purpose of classifying it. The selection of a feature extraction algorithm is very critical and important for any image processing application, and most of the time it depends directly on the type of image objects to be identified. For optical technologies like South Indian OCR, the feature extraction technique plays a vital role in recognition accuracy because of the huge character sets. In this work we mainly focus on evaluating the performance of various feature extraction techniques with respect to Telugu character recognition systems and analyze their efficiency and accuracy in recognizing the Telugu character set.
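As an illustration of how such a comparison can be set up, the sketch below evaluates two common hand-crafted feature extractors (zoning densities and projection profiles) under a simple 1-NN classifier. The 32×32 binarized glyphs and the random toy data are assumptions made for a self-contained example, not the techniques or data evaluated in the paper.

```python
# Illustrative comparison of two feature extractors under one classifier.

import numpy as np

def zoning_features(glyph: np.ndarray, zones: int = 4) -> np.ndarray:
    """Split the glyph into zones x zones cells and use per-cell pixel density."""
    h, w = glyph.shape
    cells = glyph.reshape(zones, h // zones, zones, w // zones)
    return cells.mean(axis=(1, 3)).ravel()

def projection_features(glyph: np.ndarray) -> np.ndarray:
    """Row and column pixel-count profiles concatenated into one vector."""
    return np.concatenate([glyph.sum(axis=0), glyph.sum(axis=1)]).astype(float)

def nearest_neighbour_accuracy(extract, train, test) -> float:
    """train/test: lists of (glyph, label); 1-NN accuracy under `extract`."""
    feats = np.stack([extract(g) for g, _ in train])
    labels = [l for _, l in train]
    hits = 0
    for g, l in test:
        d = np.linalg.norm(feats - extract(g), axis=1)
        hits += labels[int(d.argmin())] == l
    return hits / len(test)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = [(rng.integers(0, 2, (32, 32)), k % 3) for k in range(30)]  # toy glyphs
    train, test = data[:20], data[20:]
    for name, fe in [("zoning", zoning_features), ("projection", projection_features)]:
        print(name, nearest_neighbour_accuracy(fe, train, test))
```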


Author(s):  
R. SANJEEV KUNTE ◽  
R. D. SUDHAKER SAMUEL

Optical Character Recognition (OCR) systems have been effectively developed for the recognition of printed characters of non-Indian languages. Efforts are underway for the development of efficient OCR systems for Indian languages, especially for Kannada, a popular South Indian language. We present in this paper an OCR system developed for the recognition of basic characters in printed Kannada text, which can handle different font sizes and font sets. Wavelets, which have been progressively used in pattern recognition and on-line character recognition systems, are used in our system to extract the features of printed Kannada characters. Neural classifiers have been effectively used for the classification of characters based on the wavelet features. The system methodology can be extended for the recognition of other South Indian languages, especially Telugu.
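A minimal sketch of this kind of pipeline, assuming PyWavelets and PyTorch are available, is given below: a single-level 2-D wavelet decomposition supplies the features, and a small feed-forward network acts as the neural classifier. The Haar wavelet, layer sizes and 49-class output are illustrative assumptions rather than the paper's exact configuration.

```python
# Wavelet features of a character image feeding a small neural classifier.

import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_features(glyph: np.ndarray, wavelet: str = "haar") -> torch.Tensor:
    """Single-level 2-D DWT; concatenate approximation and detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(glyph.astype(float), wavelet)
    return torch.tensor(np.concatenate([c.ravel() for c in (cA, cH, cV, cD)]),
                        dtype=torch.float32)

def make_classifier(n_features: int, n_classes: int) -> nn.Module:
    """A small feed-forward neural classifier over the wavelet features."""
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_classes))

if __name__ == "__main__":
    glyph = np.random.randint(0, 2, (32, 32))            # stand-in for a Kannada glyph
    feats = wavelet_features(glyph)                      # 4 sub-bands of 16x16 -> 1024 values
    clf = make_classifier(feats.numel(), n_classes=49)   # assumed number of basic characters
    print(clf(feats.unsqueeze(0)).shape)                 # (1, 49) class scores
```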


2013 ◽  
Vol 10 (2) ◽  
pp. 1330-1338
Author(s):  
Vasudha S ◽  
Neelamma K. Patil ◽  
Dr. Lokesh R. Boregowda

Face recognition is one of the important applications of image processing, and it has gained significant attention in a wide range of law enforcement areas in which security is of prime concern. Although the existing automated machine recognition systems have a certain level of maturity, their accomplishments are limited by real-time challenges. Face recognition systems are highly sensitive to appearance variations due to lighting, expression and aging. The major metric in modeling the performance of a face recognition system is its recognition accuracy. This paper proposes a novel method which improves the recognition accuracy and also prevents face datasets from being tampered with through image splicing techniques. The proposed method uses a non-statistical procedure which avoids a training step for face samples, thereby avoiding the generalizability problem caused by statistical learning procedures. The method performs well on images with partial occlusion and images with lighting variations, as the face is divided into several different local patches. A considerable performance improvement in recognition rate and storage space is shown by storing training images in the compressed domain and selecting significant features from a superset of feature vectors for actual recognition.
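The patch-based, training-free matching idea can be sketched as follows. This is an illustrative outline under an assumed patch size and an assumed Euclidean patch distance, not the paper's algorithm; the compressed-domain storage and feature-selection steps are omitted.

```python
# Divide each face into local patches and score a probe against stored gallery
# faces patch by patch, so occluded or badly lit patches only affect part of the score.

import numpy as np

def to_patches(face: np.ndarray, p: int = 16) -> np.ndarray:
    """Split an HxW face image into non-overlapping p x p patches."""
    h, w = face.shape
    return (face[:h - h % p, :w - w % p]
            .reshape(h // p, p, w // p, p)
            .swapaxes(1, 2)
            .reshape(-1, p, p))

def patch_match_score(probe: np.ndarray, gallery_face: np.ndarray) -> float:
    """Mean of per-patch similarities; no training step is involved."""
    pp, gp = to_patches(probe), to_patches(gallery_face)
    dists = np.linalg.norm((pp - gp).reshape(len(pp), -1), axis=1)
    return float(-dists.mean())          # higher is more similar

def identify(probe: np.ndarray, gallery: dict) -> str:
    return max(gallery, key=lambda name: patch_match_score(probe, gallery[name]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gallery = {"alice": rng.random((64, 64)), "bob": rng.random((64, 64))}
    probe = gallery["alice"] + 0.05 * rng.random((64, 64))   # lighting perturbation
    print(identify(probe, gallery))
```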

