maximum entropy estimation
Recently Published Documents


TOTAL DOCUMENTS

82
(FIVE YEARS 8)

H-INDEX

13
(FIVE YEARS 1)

Author(s):  
Yash Sharma

Audio sentiment analysis using automatic speech recognition is an emerging research area in which the opinion or sentiment expressed by a speaker is detected from natural audio. It is relatively under-explored compared with text-based sentiment detection, and extracting speaker sentiment from natural audio sources is a challenging problem. Generic methods for sentiment extraction typically use transcripts from a speech recognition system and process the transcripts with text-based sentiment classifiers. In this study, we show that this standard pipeline is suboptimal for audio sentiment extraction. Instead, a new architecture using keyword spotting (KWS) is proposed for sentiment detection. In the new architecture, a text-based sentiment classifier is used to automatically determine the most useful and discriminative sentiment-bearing keyword terms, which are then used as the term list for KWS. To obtain a compact yet discriminative sentiment term list, iterative feature optimization of the maximum entropy estimation model is proposed to reduce model complexity while maintaining effective classification accuracy. The proposed solution is evaluated on audio obtained from videos on youtube.com and on the UT-Opinion corpus. Our experimental results show that the proposed KWS-based framework significantly outperforms the conventional architecture in detecting sentiment for challenging real-world tasks.
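As a rough illustration of the term-selection idea only (the vocabulary, counts, and hyperparameters below are invented, not taken from the paper), a maximum-entropy (logistic) sentiment classifier can be trained on bag-of-words text features and its largest-magnitude weights kept as a compact spotting term list:

```python
import numpy as np

# Toy sketch: fit a maximum-entropy (logistic) sentiment classifier on
# document-term counts, then keep the highest-|weight| terms as a compact
# keyword-spotting term list. All data here are illustrative.
vocab = ["great", "awful", "fine", "terrible", "love", "the"]
X = np.array([           # document-term counts
    [2, 0, 1, 0, 1, 3],  # positive review
    [1, 0, 0, 0, 2, 2],  # positive review
    [0, 2, 0, 1, 0, 3],  # negative review
    [0, 1, 1, 2, 0, 2],  # negative review
], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)

w = np.zeros(X.shape[1])
for _ in range(500):                    # plain gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / len(y)

def top_terms(weights, vocabulary, k=4):
    """Return the k terms with the most discriminative (largest |weight|) scores."""
    order = np.argsort(-np.abs(weights))
    return [vocabulary[i] for i in order[:k]]

print(top_terms(w, vocab))  # sentiment-bearing words rank first; "the" carries no signal
```

The classifier's weights act as a ranking of sentiment-bearing terms, so truncating the list trades coverage against keyword-spotting cost.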


Author(s):  
Chenyang Song ◽  
Liguo Wang ◽  
Zeshui Xu

The logistic regression model is one of the most widely used classification models. In some practical situations, small samples and massive uncertain information pose additional challenges to the application of traditional logistic regression. This paper takes advantage of the hesitant fuzzy set (HFS) in depicting uncertain information and develops a logistic regression model under a hesitant fuzzy environment. Considering the complexity and uncertainty in the application of this logistic regression, the concept of hesitant fuzzy information flow (HFIF) and the correlation coefficient between HFSs are introduced to determine the main factors. To better handle situations with small samples, a new optimized method based on maximum entropy estimation is also proposed to determine the parameters. The Levenberg–Marquardt algorithm (LMA) under the hesitant fuzzy environment is then developed to solve the parameter estimation problem with few samples and uncertain information in the logistic regression model. A specific implementation process for the optimized logistic regression model based on maximum entropy estimation under the hesitant fuzzy environment is also provided. Moreover, we apply the proposed model to the prediction problem of the Emergency Extreme Air Pollution Event (EEAPE). A comparative analysis and a sensitivity analysis are further conducted to illustrate the advantages of the optimized logistic regression model under the hesitant fuzzy environment.
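For concreteness, one common inner-product definition of the correlation coefficient between hesitant fuzzy sets can be sketched as follows (the membership values are invented, and the authors' exact definition may differ):

```python
import numpy as np

# Sketch of an HFS correlation coefficient: each element of an HFS carries a
# sorted list of possible membership values (equal lengths assumed here), and
# the coefficient is an inner-product correlation normalized by the sets'
# informational energies. Values below are illustrative.
A = [[0.2, 0.4], [0.5, 0.7], [0.6, 0.8]]   # hesitant fuzzy elements
B = [[0.3, 0.5], [0.4, 0.6], [0.7, 0.9]]

def informational_energy(H):
    return sum(np.mean(np.array(h) ** 2) for h in H)

def correlation(H1, H2):
    return sum(np.mean(np.array(a) * np.array(b)) for a, b in zip(H1, H2))

def correlation_coefficient(H1, H2):
    return correlation(H1, H2) / np.sqrt(
        informational_energy(H1) * informational_energy(H2))

print(round(correlation_coefficient(A, B), 3))  # close to 1 for similar HFSs
```

In a factor-selection setting, a factor whose HFS correlates strongly with the response HFS would be retained as a main factor.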


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 548
Author(s):  
Yuri S. Popkov

The problem of randomized maximum entropy estimation for the probability density function of random model parameters with real data and measurement noises was formulated. This estimation procedure maximizes an information entropy functional on a set of integral equalities depending on the real data set. The technique of Gâteaux derivatives is developed to solve this problem in analytical form. The probability density function estimates depend on Lagrange multipliers, which are obtained by balancing the model's output with real data. A global theorem for the implicit dependence of these Lagrange multipliers on the data sample's length is established using the rotation of homotopic vector fields. A theorem for the asymptotic efficiency of the randomized maximum entropy estimate in terms of stationary Lagrange multipliers is formulated and proved. The proposed method is illustrated on the problem of forecasting the evolution of the thermokarst lake area in Western Siberia.
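In generic maximum-entropy notation (a standard form, not necessarily the paper's exact functional), the estimate maximizes entropy subject to data-balancing integral constraints, and stationarity yields an exponential family indexed by Lagrange multipliers:

```latex
% Generic maximum-entropy estimation of a parameter density p(\theta):
\max_{p}\; H[p] = -\int p(\theta)\,\ln p(\theta)\,\mathrm{d}\theta
\quad\text{s.t.}\quad
\int p(\theta)\,\mathrm{d}\theta = 1,\qquad
\int f_k(\theta)\,p(\theta)\,\mathrm{d}\theta = y_k,\quad k=1,\dots,m.

% The Gateaux stationarity condition gives an exponential-family estimate,
% with the multipliers \lambda_k fixed by the data-balance constraints above:
p^{*}(\theta)
  = \frac{\exp\!\bigl(-\sum_{k=1}^{m}\lambda_k f_k(\theta)\bigr)}
         {\int \exp\!\bigl(-\sum_{k=1}^{m}\lambda_k f_k(\theta')\bigr)\,\mathrm{d}\theta'}.
```

Solving the constraint equations for the multipliers is what the abstract refers to as balancing the model's output with the real data.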


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rafael Renteria ◽  
Mario Chong ◽  
Irineu de Brito Junior ◽  
Ana Luna ◽  
Renato Quiliche

Purpose
This paper aims to design a vulnerability assessment model that takes a multidimensional and systematic approach to disaster risk and vulnerability. The model serves both the risk mitigation and the disaster preparedness phases of humanitarian logistics.

Design/methodology/approach
A survey of 27,218 households in Pueblo Rico and Dosquebradas was conducted to obtain information about disaster risk for landslides, floods and collapses. We adopted a cross-entropy-based measure of disaster vulnerability (Kullback–Leibler divergence) and a maximum entropy estimation for the reconstruction of the a priori risk categorization (logistic regression). Sen's capabilities approach theoretically supported our multidimensional assessment of disaster vulnerability.

Findings
Disaster vulnerability is shaped by economic indicators, such as the physical attributes of households, and by health indicators, specifically the morbidity indicators that appear to affect vulnerability outcomes. Vulnerability is heterogeneous across communities/districts according to formal comparisons of Kullback–Leibler divergence. Neither the social dimension nor chronic illness indicators appear to shape vulnerability, at least for Pueblo Rico and Dosquebradas.

Research limitations/implications
The results need qualitative or case-study validation at the community/district level.

Practical implications
We discuss how risk mitigation policies and disaster preparedness strategies can be driven by the empirical results. For example, the type of stock to preposition can vary according to the disaster, and alternative policies can be formulated on the basis of the strong relationship between morbidity and disaster risk.

Originality/value
Entropy-based metrics, as well as empirical data-driven techniques, are not widely used in the humanitarian logistics literature.
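The divergence comparison between districts can be sketched in a few lines (the vulnerability-category proportions below are invented for illustration, not survey results):

```python
import numpy as np

# Sketch: compare the disaster-vulnerability category distributions of two
# districts with the Kullback-Leibler divergence. Bins must be strictly
# positive for this simple form. Proportions are illustrative only.
def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

pueblo_rico  = [0.50, 0.30, 0.20]   # low / medium / high vulnerability shares
dosquebradas = [0.35, 0.35, 0.30]

# D(P||Q) is zero iff the distributions match, and is asymmetric in general.
print(kl_divergence(pueblo_rico, dosquebradas))
```

A larger divergence between two districts' category distributions signals that vulnerability is heterogeneous across them, which motivates district-specific preparedness strategies.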


2020 ◽  
Vol 495 (3) ◽  
pp. 3350-3372 ◽  
Author(s):  
B Coleman ◽  
D Paterson ◽  
C Gordon ◽  
O Macias ◽  
H Ploeg

ABSTRACT The abundance and narrow magnitude dispersion of Red Clump (RC) stars make them a popular candidate for mapping the morphology of the bulge region of the Milky Way. Using an estimate of the RC’s intrinsic luminosity function, we extracted the three-dimensional density distribution of the RC from deep photometric catalogues of the VISTA Variables in the Via Lactea (VVV) survey. We used maximum entropy-based deconvolution to extract the spatial distribution of the bulge from Ks-band star counts. We obtained our extrapolated non-parametric model of the bulge over the inner 40° × 40° region of the Galactic centre. Our reconstruction also naturally matches on to a parametric fit to the bulge outside the VVV region and inpaints overcrowded and high-extinction regions. We found a range of bulge properties consistent with other recent investigations based on the VVV data. In particular, we estimated the bulge mass to be in the range $[1.3,1.7]\times 10^{10} \, \mathrm{M}_\odot$, the X-component to be between 18 per cent and 25 per cent of the bulge mass, and the bulge angle with respect to the Sun–Galactic centre line to be between 18° and 32°. Studies of the Fermi Large Area Telescope (LAT) gamma-ray Galactic centre excess suggest that the excess may be traced by Galactic bulge distributed sources. We applied our deconvolved density in a template-fitting analysis of this Fermi–LAT GeV excess and found an improvement in the fit compared to previous parametric-based templates.


2018 ◽  
Vol 1053 ◽  
pp. 012021
Author(s):  
Wilawan Srichaikul ◽  
Woraphon Yamaka ◽  
Paravee Maneejuk ◽  
Songsak Sriboonchitta

2018 ◽  
Author(s):  
Longxia Qian ◽  
Ren Zhang ◽  
Chengzu Bai ◽  
Yangjun Wang ◽  
Hongrui Wang

Abstract. In drought years, it is important to have an estimate or prediction of the probability that a water shortage risk will occur, to enable risk mitigation. This study developed an improved logistic probability prediction model for water shortage risk in situations where there are insufficient data. First, information flow was applied to select water shortage risk factors. Then, the logistic regression model was used to describe the relation between water shortage risk and its factors, and an alternative method of parameter estimation (maximum entropy estimation) was proposed for situations where insufficient data were available. Water shortage risk probabilities in Beijing were predicted under different inflow scenarios using the model. There were two main findings. (1) The water shortage risk probability was predicted to be very high in 2020, although this was not the case under some high-inflow conditions. (2) After using the transferred and reclaimed water, the water shortage risk probability declined under all inflow conditions (59.1% on average), but it remained high under some low-inflow conditions.
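The prediction step alone is straightforward once parameters are estimated; a minimal sketch, with entirely hypothetical coefficients (not the paper's estimates), is:

```python
import math

# Sketch of the prediction step only: given estimated logistic-regression
# parameters, the water shortage risk probability under an inflow scenario x
# follows the logistic link. Coefficients below are invented for illustration.
beta0, beta1 = 2.0, -1.5          # hypothetical intercept / inflow coefficient

def risk_probability(inflow):
    """P(shortage) = 1 / (1 + exp(-(beta0 + beta1 * inflow)))."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * inflow)))

for inflow in (0.5, 1.0, 2.0):    # standardized inflow scenarios, low to high
    print(inflow, round(risk_probability(inflow), 3))
```

With a negative inflow coefficient, the predicted shortage probability falls monotonically as the inflow scenario improves, mirroring the high-inflow versus low-inflow contrast in the findings.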

