Chord-aware automatic music transcription based on hierarchical Bayesian integration of acoustic and language models

Author(s):  
Yuta Ojima ◽  
Eita Nakamura ◽  
Katsutoshi Itoyama ◽  
Kazuyoshi Yoshii

This paper describes automatic music transcription with chord estimation for music audio signals. We focus on the fact that concurrent structures of musical notes such as chords form the basis of harmony and are central to music composition. Since chords and musical notes are deeply linked with each other, we propose joint pitch and chord estimation based on a hierarchical Bayesian model that consists of an acoustic model representing the generative process of a spectrogram and a language model representing the generative process of a piano roll. The acoustic model is formulated as a variant of non-negative matrix factorization that has binary variables indicating a piano roll. The language model is formulated as a hidden Markov model that has chord labels as latent variables and emits a piano roll, so the sequential dependency of the piano roll is represented in the language model. The two models are integrated through the shared piano roll in a hierarchical Bayesian manner. All latent variables and parameters are estimated using Gibbs sampling. The experimental results showed the great potential of the proposed method for unified music transcription and grammar induction.
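As a rough illustration of the acoustic-model side only (a minimal sketch, not the paper's exact formulation), the following Python toy gates NMF activations with binary piano-roll variables and resamples them Gibbs-style under a flat prior; in the paper, that resampling step is instead coupled to the chord language model, and all sizes, priors, and names below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: F frequency bins, T frames, K pitch candidates.
F, T, K = 64, 100, 12
X = np.abs(rng.normal(size=(F, T)))   # observed magnitude spectrogram
W = np.abs(rng.normal(size=(F, K)))   # one spectral basis per pitch
H = np.abs(rng.normal(size=(K, T)))   # continuous activations
S = rng.integers(0, 2, size=(K, T))   # binary piano-roll indicators

def recon(W, H, S):
    # Spectrogram model: bases times activations, gated by the piano roll.
    return W @ (H * S)

for _ in range(30):
    V = recon(W, H, S) + 1e-9
    # Multiplicative NMF updates (Euclidean cost) with activations gated by S.
    HS = H * S
    W *= (X @ HS.T) / (V @ HS.T + 1e-9)
    num, den = W.T @ X, W.T @ (recon(W, H, S) + 1e-9)
    H = np.where(S == 1, H * num / (den + 1e-9), H)
    # Gibbs-style resampling of each pitch row: compare reconstruction error
    # with the note on vs. off (flat prior here; the paper couples this step
    # to the chord HMM instead).
    for k in range(K):
        on, off = S.copy(), S.copy()
        on[k], off[k] = 1, 0
        err_on = np.sum((X - recon(W, H, on)) ** 2, axis=0)
        err_off = np.sum((X - recon(W, H, off)) ** 2, axis=0)
        p_on = 1.0 / (1.0 + np.exp(np.clip(0.5 * (err_on - err_off), -30, 30)))
        S[k] = (rng.random(T) < p_on).astype(int)
```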

Author(s):  
Ryo Nishikimi ◽  
Eita Nakamura ◽  
Masataka Goto ◽  
Kazuyoshi Yoshii

This paper describes an automatic singing transcription (AST) method that estimates a human-readable musical score of a sung melody from an input music signal. Because of the considerable pitch and temporal variation of a singing voice, a naive cascading approach that estimates an F0 contour and quantizes it with estimated tatum times cannot avoid many pitch and rhythm errors. To solve this problem, we formulate a unified generative model of a music signal that consists of a semi-Markov language model representing the generative process of latent musical notes conditioned on musical keys and an acoustic model based on a convolutional recurrent neural network (CRNN) representing the generative process of an observed music signal from the notes. The resulting CRNN-HSMM hybrid model enables us to estimate the most likely musical notes from a music signal with the Viterbi algorithm, while leveraging both grammatical knowledge about musical notes and the expressive power of the CRNN. The experimental results showed that the proposed method outperformed the conventional state-of-the-art method and that integrating the musical language model with the acoustic model has a positive effect on AST performance.
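The decoding step can be pictured with a plain Viterbi pass. The sketch below simplifies heavily: the paper's language model is a hidden semi-Markov model with explicit note durations and its acoustic model is a CRNN, whereas here durations are collapsed into an ordinary HMM and the CRNN posteriors are stand-in random matrices.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    # Most likely state path for an HMM.
    # log_init: (S,); log_trans: (S, S) prev -> cur; log_emit: (T, S).
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
S = 13  # e.g., 12 pitch classes plus a rest state
log_trans = np.log(rng.dirichlet(np.ones(S), size=S))   # key-conditioned note transitions
log_emit = np.log(rng.dirichlet(np.ones(S), size=100))  # stand-in for CRNN posteriors
notes = viterbi(np.full(S, -np.log(S)), log_trans, log_emit)
print(notes)
```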


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 634
Author(s):  
Alakbar Valizada ◽  
Natavan Akhundova ◽  
Samir Rustamov

In this paper, various methodologies for acoustic and language models, as well as labeling methods, for automatic speech recognition of spoken dialogues in emergency call centers were investigated and comparatively analyzed. Because dialogue speech in call centers involves specific contexts and noisy, emotional environments, off-the-shelf speech recognition systems show poor performance. Therefore, in order to recognize dialogue speech accurately, the main modules of speech recognition systems, namely language models and acoustic training methodologies, as well as symmetric data labeling approaches, were investigated and analyzed. To find an effective acoustic model for dialogue data, different types of Gaussian Mixture Model/Hidden Markov Model (GMM/HMM) and Deep Neural Network/Hidden Markov Model (DNN/HMM) methodologies were trained and compared. Additionally, effective language models for dialogue systems were identified using extrinsic and intrinsic evaluation methods. Lastly, our suggested data labeling approaches with spelling correction were compared with common labeling methods and outperformed them by a notable margin. Based on the results of the experiments, we determined that a DNN/HMM acoustic model, a trigram language model with Kneser–Ney discounting, and applying spelling correction to the training data as the labeling method form an effective configuration for dialogue speech recognition in emergency call centers. This research was conducted with two datasets collected from emergency calls: the Dialogue dataset (27 h), which contains the call agents' speech, and the Summary dataset (53 h), which contains voiced summaries of those dialogues describing the emergency cases. Even though the speech taken from the emergency call center is in the Azerbaijani language, which belongs to the Turkic group of languages, our approaches are not tightly tied to specific language features; hence, we anticipate that the suggested approaches can be applied to other languages of the same group.
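The selected language-model configuration, a trigram with Kneser–Ney discounting, is easy to reproduce on toy data. The sketch below uses NLTK's KneserNeyInterpolated as one possible implementation (the paper does not specify a toolkit), and the sentences are invented stand-ins for the Dialogue/Summary transcripts.

```python
# pip install nltk
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

# Invented stand-ins for transcribed emergency-call utterances.
sentences = [
    "there is a fire on the second floor".split(),
    "send an ambulance to the main street".split(),
    "the fire is spreading to the next building".split(),
]

order = 3  # trigram, as selected in the paper
train, vocab = padded_everygram_pipeline(order, sentences)
lm = KneserNeyInterpolated(order)  # Kneser-Ney discounting
lm.fit(train, vocab)

# Probability of a word given its two-word history.
print(lm.score("fire", ("is", "a")))       # seen trigram: high probability
print(lm.score("building", ("is", "a")))   # unseen trigram: discounted mass
```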


Author(s):  
ROMAN BERTOLAMI ◽  
HORST BUNKE

Current multiple classifier systems for unconstrained handwritten text recognition do not provide a straightforward way to utilize language model information. In this paper, we describe a generic method to integrate a statistical n-gram language model into the combination of multiple offline handwritten text line recognizers. The proposed method first builds a word transition network and then rescores this network with an n-gram language model. Experimental evaluation conducted on a large dataset of offline handwritten text lines shows that the proposed approach improves the recognition accuracy over a reference system as well as over the original combination method that does not include a language model.
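A minimal sketch of the two-stage idea follows, with everything invented for illustration: a toy word transition network built from aligned recognizer outputs, a stand-in bigram table in place of a trained n-gram model, and a hypothetical LM weight lam. Dynamic programming then selects the rescored path.

```python
# Toy word transition network: one slot per word position, holding the
# alternatives (with recognition log-scores) proposed by the aligned
# ensemble of handwriting recognizers.
network = [
    {"the": -0.1, "tie": -1.2},
    {"quick": -0.3, "quack": -0.9},
    {"fox": -0.2, "for": -0.8},
]

def lm_logprob(prev, word):
    # Stand-in bigram table; a real system queries a trained n-gram model.
    table = {("the", "quick"): -0.5, ("quick", "fox"): -0.7}
    return table.get((prev, word), -3.0)  # flat penalty for unseen bigrams

lam = 0.8  # hypothetical LM weight, tuned on validation data in practice

def rescore(network):
    # Dynamic programming over slots: best (score, path) ending in each word.
    best = {w: (s, [w]) for w, s in network[0].items()}
    for slot in network[1:]:
        best = {
            w: max(
                (ps + s + lam * lm_logprob(pw, w), path + [w])
                for pw, (ps, path) in best.items()
            )
            for w, s in slot.items()
        }
    return max(best.values())

score, words = rescore(network)
print(words, score)  # ['the', 'quick', 'fox'] with its combined score
```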


AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 1-16
Author(s):  
Juan Cruz-Benito ◽  
Sanjay Vishwakarma ◽  
Francisco Martin-Fernandez ◽  
Ismael Faro

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that reads as if written by humans, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable applications of this type of modeling is programming languages. For years, the machine learning community has studied this software engineering area, pursuing goals like auto-completing, generating, fixing, or evaluating code programmed by humans. Given the increasing popularity of deep-learning-enabled language models, we found a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, namely Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD Quasi-Recurrent Neural Networks (QRNNs), and Transformers, using transfer learning and different forms of tokenization, to see how they behave when building language models over a Python dataset for code-generation and mask-filling tasks. Considering the results, we discuss each approach's strengths and weaknesses and the gaps we found in evaluating the language models or applying them in a real programming context.
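One axis of that comparison, the tokenization, can be shown in isolation. The sketch below contrasts a naive whitespace split with Python's own lexer on a small snippet; the architectures and transfer-learning setup from the paper are deliberately out of scope here.

```python
import io
import tokenize

source = "def add(a, b):\n    return a + b\n"

# Word-like tokenization: naive whitespace split.
print(source.split())
# ['def', 'add(a,', 'b):', 'return', 'a', '+', 'b']

# Language-aware tokenization with Python's own lexer.
tokens = [
    tok.string
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
    if tok.string.strip()
]
print(tokens)
# ['def', 'add', '(', 'a', ',', 'b', ')', ':', 'return', 'a', '+', 'b']
```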


Author(s):  
NUR ZALIKHA MAT RADZI ◽  
NASIRIN ABDILLAH ◽  
DAENG HALIZA DAENG JAMAL

Hatimu Aisyah is a work by Malaysia's 13th National Laureate, Zurinah Hassan, who also received the Southeast Asian Writers Award (S.E.A. Write Award) in 2004. Her string of successes has made her a focus for researchers examining women's authorship. Hatimu Aisyah is the first novel produced by Zurinah Hassan, and it dwells on the customary practices of earlier generations that have since been swept away by modernization. The novel foregrounds women who put custom first in the course of communal life. This study of Zurinah Hassan's work applies Elaine Showalter's model of language, from the perspective of gynocriticism, to examine the female characters. The discussion focuses on symbolic language and on language as an expression of women's consciousness. The overall findings show that Zurinah Hassan uses language consistent with Showalter's model, though somewhat mutedly, owing to the limits that the sociocultural norms of Malay society place on language use. The study's findings on the model of women's language are visible in the symbolic language and in language as an expression of women's consciousness. Looking ahead, the benefit of this finding is that it shows women expressing protest and criticism through the patterns of their writing, even while remaining restrained.


2020 ◽  
Vol 14 (4) ◽  
pp. 471-484
Author(s):  
Suraj Shetiya ◽  
Saravanan Thirumuruganathan ◽  
Nick Koudas ◽  
Gautam Das

Accurate selectivity estimation for string predicates is a long-standing research challenge in databases. Supporting pattern matching on strings (such as prefix, substring, and suffix) makes this problem much more challenging, thereby necessitating a dedicated study. Traditional approaches often build pruned summary data structures such as tries, followed by selectivity estimation using statistical correlations. However, this produces insufficiently accurate cardinality estimates, resulting in the selection of sub-optimal plans by the query optimizer. Recently proposed deep learning based approaches leverage techniques from natural language processing, such as embeddings, to encode the strings and use them to train a model. While this improves over traditional approaches, there remains substantial room for improvement. We propose Astrid, a framework for string selectivity estimation that synthesizes ideas from traditional and deep learning based approaches. We make two complementary contributions. First, we propose an embedding algorithm that is query-type (prefix, substring, and suffix) and selectivity aware. Consider three strings 'ab', 'abc' and 'abd' whose prefix frequencies are 1000, 800 and 100 respectively. Our approach ensures that the embedding for 'ab' is closer to 'abc' than to 'abd'. Second, we describe how neural language models can be used for selectivity estimation. While they work well for prefix queries, their performance for substring queries is sub-optimal. We modify the objective function of the neural language model so that it can be used for estimating selectivities of pattern matching queries, and we propose a novel and efficient algorithm for optimizing the new objective function. We conduct extensive experiments over benchmark datasets and show that our proposed approaches achieve state-of-the-art results.
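The selectivity-aware embedding idea can be reduced to the paper's own 'ab'/'abc'/'abd' example. The sketch below is not Astrid's architecture: it trains free per-string embeddings with a standard triplet loss so that strings with similar prefix frequencies end up closer, which is the stated property.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Prefix frequencies from the paper's example.
freq = {"ab": 1000, "abc": 800, "abd": 100}
idx = {s: i for i, s in enumerate(freq)}

# One free embedding per string (the real model embeds characters).
emb = torch.nn.Embedding(len(freq), 8)
opt = torch.optim.Adam(emb.parameters(), lr=0.05)

# Selectivity-aware triplet: 'ab' and 'abc' have similar frequencies, so
# ('ab', 'abc') is the positive pair and ('ab', 'abd') the negative one.
anchor, pos, neg = (torch.tensor([idx[s]]) for s in ("ab", "abc", "abd"))
for _ in range(200):
    opt.zero_grad()
    loss = F.triplet_margin_loss(emb(anchor), emb(pos), emb(neg), margin=1.0)
    loss.backward()
    opt.step()

dist = lambda x, y: torch.dist(emb(x), emb(y)).item()
print(dist(anchor, pos) < dist(anchor, neg))  # True once training converges
```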


2019 ◽  
Vol 36 (1) ◽  
pp. 20-30 ◽  
Author(s):  
Emmanouil Benetos ◽  
Simon Dixon ◽  
Zhiyao Duan ◽  
Sebastian Ewert

Author(s):  
Kelvin Guu ◽  
Tatsunori B. Hashimoto ◽  
Yonatan Oren ◽  
Percy Liang

We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.
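The two-stage generative process is easy to caricature. In the sketch below, everything is a stand-in: the corpus is two invented sentences, and a single random synonym substitution plays the role of decoding from a latent edit vector, which in the paper is done by a neural editor.

```python
import random

random.seed(0)

# Invented prototype corpus; the model samples prototypes from training data.
corpus = [
    "the food was great and the service friendly",
    "the movie was long but the acting superb",
]

synonyms = {"great": "excellent", "long": "slow", "friendly": "attentive"}

def sample_edit(sentence):
    # Stand-in for decoding from a latent edit vector: one random
    # lexical substitution plays the role of the sampled edit.
    words = sentence.split()
    editable = [i for i, w in enumerate(words) if w in synonyms]
    if editable:
        i = random.choice(editable)
        words[i] = synonyms[words[i]]
    return " ".join(words)

# Prototype-then-edit generation: sample a prototype, then edit it.
prototype = random.choice(corpus)
print(prototype)
print(sample_edit(prototype))
```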

