Inversion of speech by non-linear transformation of time

2016 ◽  
Vol 1 (1) ◽  
pp. 139-150
Author(s):  
Robert Wielgat ◽  
Anita Lorenc

Electromagnetic Articulography (EMA) is a precise method for assessing speech articulators, carried out with sensors placed mainly on the tongue. Various methods are being developed to avoid direct measurement with EMA sensors; one of them is speech inversion. Here, preliminary research on speech inversion based on the dynamic time warping (DTW) method is described. Mel-frequency cepstral coefficients (MFCCs) were chosen for parametrization of the acoustic speech signal. Root mean square errors (RMSE) of the evaluation are presented and discussed.
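
As an illustration of the DTW-based inversion idea described above (not the authors' implementation), the following Python sketch aligns the MFCCs of a test utterance to a reference utterance whose EMA trajectories are known, transfers the reference trajectories along the warping path, and scores the prediction with RMSE. The function names and the use of librosa's DTW routine are assumptions made for the sketch.

```python
import numpy as np
import librosa

def invert_by_dtw(mfcc_test, mfcc_ref, ema_ref):
    """Predict articulatory (EMA) trajectories for a test utterance by
    DTW-aligning its MFCCs to a reference utterance with known EMA data.

    mfcc_test : (n_mfcc, M) test-utterance cepstral features
    mfcc_ref  : (n_mfcc, N) reference-utterance cepstral features
    ema_ref   : (N, n_channels) reference EMA sensor trajectories
    """
    # Cumulative cost matrix and optimal warping path (pairs of frame indices)
    _, wp = librosa.sequence.dtw(X=mfcc_ref, Y=mfcc_test, metric='euclidean')

    # For every test frame, keep the last reference frame it was aligned to
    ema_pred = np.zeros((mfcc_test.shape[1], ema_ref.shape[1]))
    for ref_idx, test_idx in wp[::-1]:          # the path is returned end-to-start
        ema_pred[test_idx] = ema_ref[ref_idx]
    return ema_pred

def rmse(pred, target):
    """Root mean square error between predicted and measured trajectories."""
    return np.sqrt(np.mean((pred - target) ** 2))
```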

2018 ◽  
Vol 7 (4.15) ◽  
pp. 486
Author(s):  
Mohammed Arif Mazumder ◽  
Rosalina Abdul Salam

The Al-Quran is the most recited holy book in the Arabic language. Over 1.3 billion Muslims all over the world have an obligation to recite and learn the Al-Quran. Learners from non-Arabic as well as Arabic speaking communities face difficulties with Al-Quran recitation when no teacher (ustad) is around. Advances in speech recognition technology make it possible to develop a system capable of listening to and validating the recitation. This paper investigates the speech recognition accuracy of template-based acoustic models and proposes enhancement methods to improve the accuracy. A new scheme consisting of an enhanced Normalized Least Mean Square (NLMS) filter and the Dynamic Time Warping (DTW) algorithm is proposed. The recognition accuracy was further improved by incorporating adaptive optimal filtering with a modified Hamming window for the MFCC (Mel-frequency cepstral coefficient) front end, matched using dynamic-programming-based DTW. The proposed scheme achieves a 5.5% relative improvement in recognition accuracy over the conventional speech recognition process.
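
For context, a minimal NumPy sketch of the Normalized Least Mean Square (NLMS) adaptive filter mentioned above is shown below. The function name, filter length, and step size are illustrative choices rather than the authors' settings, and the error signal is taken as the enhanced output, as in typical noise-cancellation setups.

```python
import numpy as np

def nlms_filter(x, d, n_taps=32, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter (illustrative parameters).

    x : reference/input signal (e.g., the noisy observation)
    d : desired signal the filter tries to track
    Returns the filter output y and the error e = d - y, which is commonly
    taken as the enhanced signal in noise-cancellation configurations.
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        x_n = x[n - n_taps:n][::-1]             # most recent samples first
        y[n] = np.dot(w, x_n)
        e[n] = d[n] - y[n]
        # NLMS update: step size normalized by the input energy in the window
        w += (mu / (eps + np.dot(x_n, x_n))) * e[n] * x_n
    return y, e
```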


Author(s):  
Ersa Triansyah ◽  
Youllia Indrawaty N

Pattern recognition can identify speech by extracting features from the speech signal and then comparing the pattern of a test utterance with that of a reference utterance. Obtaining these features requires a method for extracting them from the speech signal so that the needed features are available. MFCC (Mel Frequency Cepstral Coefficients) is one such extraction method, producing the cepstral coefficients of the speech signal. The cepstral coefficients obtained from the extraction are then compared to measure the match between the test utterance and the reference utterance; DTW (Dynamic Time Warping) is one algorithm for performing this comparison. Pronunciation of the hijaiyyah letters is usually taught through talaqqi (intensive face-to-face learning) between a teacher and students, where the assessment is subjective and depends on the teacher's hearing. A hijaiyyah pronunciation application is therefore an alternative for recognizing and testing the correctness of a pronunciation objectively, through mathematical calculation based on voice pattern recognition. In the tests carried out, 6 subjects each pronounced 29 letters with 3 diacritics, repeated 5 times; the resulting voice-match percentage was above 90% at a threshold value of 1.3. Keywords: Speech Recognition, Pattern Recognition, MFCC, DTW, Hijaiyyah
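
A hedged Python sketch of the MFCC + DTW matching with a threshold decision described above might look as follows. It relies on librosa for feature extraction and DTW; the function name, the per-coefficient normalization, and the threshold scale are assumptions for the sketch, not the authors' implementation, so the 1.3 value reported in the abstract will not transfer directly.

```python
import numpy as np
import librosa

def matches_reference(test_wav, ref_wav, threshold=1.3, n_mfcc=13):
    """Compare a test pronunciation against a reference recording.

    Returns (is_match, cost): the path-length-normalized DTW cost between
    the two MFCC sequences and whether it falls below the threshold.
    """
    y_t, sr_t = librosa.load(test_wav, sr=None)
    y_r, sr_r = librosa.load(ref_wav, sr=None)

    # Per-coefficient z-score normalization keeps the DTW cost on a comparable scale
    def features(y, sr):
        m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return (m - m.mean(axis=1, keepdims=True)) / (m.std(axis=1, keepdims=True) + 1e-8)

    D, wp = librosa.sequence.dtw(X=features(y_r, sr_r), Y=features(y_t, sr_t),
                                 metric='euclidean')
    cost = D[-1, -1] / len(wp)      # normalize total cost by warping-path length
    return cost <= threshold, cost
```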


2018 ◽  
Vol 10 (1) ◽  
pp. 49-58
Author(s):  
Candra Dinata ◽  
Diyah Puspitaningrum ◽  
Ernawati Erna

Voice/speech is one of the ways we as humans communicate and express ourselves. Speech to text (STT), a branch of computer science in the field of speech processing, is the translation of spoken words into text. STT processes a speech signal, extracts features from it, and then compares them with the features extracted from other speech signals in order to recognize similarities. This research designs and builds a Speech to Text application capable of identifying a speech signal, implemented in MATLAB R2016a. There are two main processes in speech processing: feature extraction and feature matching. In this system, mel-frequency cepstral coefficients are used for feature extraction and dynamic time warping (DTW) is used for feature matching; the DTW method computes the distance, or difference, between the two compared data sequences. The average accuracy obtained in the experiments was 95.85% on the word test and 94% on the sentence test. How to Cite: Dinata, C., Puspitaningrum, D., & Erna, E. (2017). Implementasi Teknik Dynamic Time Warping (DTW) pada Aplikasi Speech to Text. Jurnal Teknik Informatika, 10(1), 49-58. doi:10.15408/jti.v10i1.6816
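
Although the authors worked in MATLAB R2016a, the template-matching step they describe can be sketched in Python as follows. The function names and the use of librosa's DTW routine are assumptions; the snippet simply picks the reference word whose MFCC template has the smallest path-normalized DTW cost.

```python
import librosa

def dtw_cost(mfcc_a, mfcc_b):
    """Path-normalized DTW cost between two MFCC matrices of shape (n_mfcc, frames)."""
    D, wp = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_b, metric='euclidean')
    return D[-1, -1] / len(wp)

def recognize(test_mfcc, templates):
    """templates: dict mapping a word label to its reference MFCC matrix.

    Returns the label whose template has the smallest DTW cost, plus all costs.
    """
    costs = {word: dtw_cost(ref, test_mfcc) for word, ref in templates.items()}
    return min(costs, key=costs.get), costs
```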


Author(s):  
Saeed MIAN QAISAR

This paper proposes an original approach to achieving a Cymatics-based visual perception of isolated speech commands. The idea is to smartly combine effective speech processing and analysis methods with the phenomenon of Cymatics. In this context, an effective approach for automatic recognition of isolated spoken messages is proposed. The incoming speech segment is enhanced by applying appropriate pre-emphasis filtering, noise thresholding, and zero-alignment operations. Mel-frequency cepstral coefficients (MFCCs), delta coefficients, and delta-delta coefficients are extracted from the enhanced speech segment. The dynamic time warping (DTW) technique is then employed to compare these extracted features with the reference templates, and the comparison outcomes are used to make the classification decision. The classification decision is transformed into a methodical excitation, which is finally converted into systematic visual perceptions via the phenomenon of Cymatics. The system functionality is tested with an experimental setup and the results are presented. The approach is novel and can be employed in various applications such as visual art, encryption, education, archeology, architecture, inclusion of people with impairments, etc.
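
A minimal Python sketch of the front end described above (pre-emphasis followed by MFCC, delta, and delta-delta extraction) is given below. It omits the noise-thresholding and zero-alignment steps, and the function name, pre-emphasis coefficient, and use of librosa are assumptions for the sketch, not the author's implementation.

```python
import numpy as np
import librosa

def front_end(y, sr, alpha=0.97, n_mfcc=13):
    """Pre-emphasis followed by MFCC, delta, and delta-delta extraction."""
    # First-order pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]
    y_pre = np.append(y[0], y[1:] - alpha * y[:-1])

    mfcc = librosa.feature.mfcc(y=y_pre, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)            # first-order derivatives
    delta2 = librosa.feature.delta(mfcc, order=2)  # second-order derivatives
    return np.vstack([mfcc, delta, delta2])        # (3 * n_mfcc, frames) feature matrix
```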

