The dynamics of social deprivation in Mexico

2021
pp. 1-20
Author(s):  
José Carlos Ramírez

This paper aims to model the dynamics of social deprivation in Mexico using a Markovian approach. First, we establish a scenario in which a list of items characterizing social deprivation evolves as a first-order Markov chain over the sample period (2002-2012). Then, we estimate the latent states and ergodic vectors of a hidden Markov model to verify the strength of the conclusions drawn from that scenario. Collecting the results of both analyses, we find a similar pattern of impoverishment. The paper concludes that Mexico's deprivation profile may worsen slightly in the near future.
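
A minimal sketch (not the paper's code) of the two quantities the abstract mentions for the first-order case: a transition matrix estimated from panel trajectories of deprivation states, and its ergodic (stationary) vector. The state labels and the toy trajectories are illustrative assumptions.

```python
import numpy as np

# Illustrative deprivation states (assumption, not the paper's item list):
# 0 = not deprived, 1 = moderately deprived, 2 = severely deprived
trajectories = [
    [0, 0, 1, 1, 2],   # one household's states across the survey waves
    [1, 2, 2, 1, 2],
    [0, 1, 0, 1, 1],
]

n_states = 3
counts = np.zeros((n_states, n_states))
for traj in trajectories:
    for s, t in zip(traj[:-1], traj[1:]):
        counts[s, t] += 1

# Row-normalize transition counts into a first-order transition matrix P
P = counts / counts.sum(axis=1, keepdims=True)

# Ergodic (stationary) vector: left eigenvector of P for eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print("Transition matrix:\n", P)
print("Ergodic vector:", pi)
```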

The proposed research is dedicated to verifying the claimed emotion of a speaker in a speaker-independent and text-independent manner, based on three different classifiers: HMM3 (third-order Hidden Markov Model), HMM2 (second-order Hidden Markov Model), and HMM1 (first-order Hidden Markov Model). Our work has been evaluated on our collected Emirati-accented speech corpus, which comprises 50 speakers of Emirati origin (25 female and 25 male) uttering sentences in six emotions, with features extracted using Mel-Frequency Cepstral Coefficients (MFCCs). Our results show that HMM3 is superior to both HMM1 and HMM2 at authenticating the claimed emotion. The results achieved with HMM3 are very similar to those attained in a subjective evaluation by Arab listeners.
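
A hedged sketch of the first-order (HMM1) baseline only, using librosa for MFCC extraction and hmmlearn's GaussianHMM: one model is trained per emotion and a claim is accepted when the claimed emotion's model scores best. File paths, emotion names, model sizes, and the decision rule are illustrative assumptions; the second- and third-order models in the study are not covered by this library.

```python
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

def mfcc_features(wav_path, n_mfcc=13):
    """Extract an (n_frames, n_mfcc) MFCC matrix from one utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # hmmlearn expects (n_samples, n_features)

def train_emotion_hmm(wav_paths, n_states=4):
    """Fit one first-order Gaussian HMM (HMM1) on all training utterances of one emotion."""
    feats = [mfcc_features(p) for p in wav_paths]
    X = np.vstack(feats)
    lengths = [f.shape[0] for f in feats]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

def verify_claimed_emotion(wav_path, claimed, models):
    """Accept the claim if the claimed emotion's HMM gives the highest log-likelihood
    (a simple decision rule for illustration, not necessarily the paper's)."""
    x = mfcc_features(wav_path)
    scores = {emo: m.score(x) for emo, m in models.items()}
    return max(scores, key=scores.get) == claimed

# Hypothetical usage, assuming train_files maps emotion names to lists of wav paths:
# models = {emo: train_emotion_hmm(train_files[emo]) for emo in ["anger", "happiness", "sadness"]}
# accepted = verify_claimed_emotion("test.wav", claimed="anger", models=models)
```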


Author(s):  
M. Vidyasagar

This chapter considers the basic properties of hidden Markov processes (HMPs) or hidden Markov models (HMMs), a special type of stochastic process. It begins with a discussion of three distinct types of HMMs and shows that they are all equivalent from the standpoint of their expressive power or modeling ability: the Type 1 hidden Markov model, an HMM of the deterministic function of a Markov chain type; the Type 2 hidden Markov model, an HMM of the random function of a Markov chain type; and the Type 3 hidden Markov model, an HMM of the joint Markov process type. The chapter also examines various issues related to the computation of likelihoods in an HMM before concluding with an overview of the Viterbi algorithm and the Baum–Welch algorithm.
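
As a companion to the Viterbi algorithm mentioned above, the following is a generic log-domain implementation for a discrete-output HMM; it is a standard textbook recursion, not the chapter's own code, and the initial distribution, transition matrix A, and emission matrix B in the example are chosen purely for illustration.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-output HMM (log domain).
    obs: observation indices; pi: initial probs; A: transition matrix; B: emission probs."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, N))           # best log-prob of any path ending in state j at time t
    psi = np.zeros((T, N), dtype=int)  # back-pointers to the best previous state
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # (previous state) x (current state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    # Backtrack from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Illustrative two-state, three-symbol example (values are assumptions, not from the chapter)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
```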


2012
Vol 2012
pp. 1-13
Author(s):  
Małgorzata Wiktoria Korolkiewicz

We propose a dependent hidden Markov model of credit quality. We suppose that the "true" credit quality is not observed directly but only through noisy observations given by posted credit ratings. The model is formulated in discrete time with a Markov chain observed in martingale noise, where the "noise" terms of the state and observation processes are possibly dependent. The model provides estimates of the state of the Markov chain governing the evolution of the credit rating process and of the model parameters, the latter obtained with the EM algorithm. The dependent dynamics allow for the so-called "rating momentum" discussed in the credit literature and also provide a convenient test of independence between the state and observation dynamics.
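
A minimal sketch of the filtering step such a model relies on: given an assumed transition matrix for the hidden credit quality and an assumed noise (misreporting) matrix for posted ratings, the forward recursion below yields the conditional state probabilities. The EM re-estimation of these matrices, the martingale formulation, and the dependence between state and observation noise are not shown; all matrices here are illustrative assumptions.

```python
import numpy as np

def filter_credit_quality(ratings, pi, A, C):
    """Forward filter: P(hidden credit quality at time t | posted ratings up to t).
    ratings: observed rating indices; pi: initial distribution over true qualities;
    A[i, j] = P(quality j at t+1 | quality i at t);
    C[i, r] = P(posted rating r | true quality i)."""
    alpha = pi * C[:, ratings[0]]
    alpha /= alpha.sum()
    filtered = [alpha]
    for r in ratings[1:]:
        alpha = (alpha @ A) * C[:, r]   # predict one step, then correct with the new rating
        alpha /= alpha.sum()
        filtered.append(alpha)
    return np.array(filtered)

# Illustrative three-grade example (matrices are assumptions, not estimates from the paper)
pi = np.array([0.5, 0.3, 0.2])
A = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
C = np.array([[0.85, 0.10, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.10, 0.85]])
print(filter_credit_quality([0, 0, 1, 2, 2], pi, A, C))
```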


2020
Vol 8 (1)
pp. 296-303
Author(s):  
Sergey S Yulin ◽  
Irina N Palamar

The problem of recognizing patterns when little training data is available is particularly relevant and arises when collecting training data is expensive or essentially impossible. This work proposes a new probability model, MC&CL (Markov Chain and Clusters), based on a combination of a Markov chain and a clustering algorithm (Kohonen self-organizing map, k-means method), to solve the problem of classifying sequences of observations when the training dataset is small. An original experimental comparison is made between the developed model (MC&CL) and a number of other popular models for classifying sequences: HMM (Hidden Markov Model), HCRF (Hidden Conditional Random Fields), LSTM (Long Short-Term Memory), and kNN+DTW (k-Nearest Neighbors algorithm + Dynamic Time Warping algorithm). The comparison uses synthetic random sequences generated from a hidden Markov model, with noise added to the training specimens. The suggested model shows the best classification accuracy among those under review when the amount of training data is small.
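
A hedged sketch of the general idea the abstract describes (the authors' exact formulation may differ): observation vectors are quantized with k-means, a first-order Markov chain over cluster labels is fitted per class, and a test sequence is assigned to the class with the highest log-likelihood. Function names, the Laplace smoothing, and the cluster count are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_mc_cl(sequences_by_class, n_clusters=8, alpha=1.0):
    """Quantize all observation vectors with k-means, then fit one Laplace-smoothed
    first-order Markov chain over cluster labels for each class."""
    all_vectors = np.vstack([np.vstack(seqs) for seqs in sequences_by_class.values()])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(all_vectors)
    chains = {}
    for label, seqs in sequences_by_class.items():
        counts = np.full((n_clusters, n_clusters), alpha)   # Laplace smoothing
        start = np.full(n_clusters, alpha)
        for seq in seqs:
            states = km.predict(np.asarray(seq))
            start[states[0]] += 1
            for s, t in zip(states[:-1], states[1:]):
                counts[s, t] += 1
        chains[label] = (start / start.sum(), counts / counts.sum(axis=1, keepdims=True))
    return km, chains

def classify(seq, km, chains):
    """Assign the sequence to the class whose Markov chain gives the highest log-likelihood."""
    states = km.predict(np.asarray(seq))
    scores = {}
    for label, (start, P) in chains.items():
        ll = np.log(start[states[0]])
        ll += sum(np.log(P[s, t]) for s, t in zip(states[:-1], states[1:]))
        scores[label] = ll
    return max(scores, key=scores.get)
```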

