mixture of distributions
Recently Published Documents


TOTAL DOCUMENTS: 62 (last five years: 3)

H-INDEX: 15 (last five years: 0)

2021 ◽  
Vol 3 ◽  
pp. 1-8
Author(s):  
José Rodríguez-Avi ◽  
Francisco Javier Ariza-López

Abstract. We propose modelling the altimetric error by means of a mixture of normal distributions. This alternative avoids the problems caused by the lack of normality of the altimetric error, which have been pointed out numerous times. The conceptual basis of the mixture of distributions is presented and its application is demonstrated with a worked example. In the example, the altimetric errors between a DEM with 5 × 5 m resolution and a reference DEM with 2 × 2 m resolution are modelled. The application demonstrates the feasibility and analytical power of the proposal.
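The core technique of the abstract above, fitting a mixture of normal distributions to one-dimensional error values, can be sketched with a plain EM loop. The data below are simulated, not the paper's DEM errors, and the two-component setup is an illustrative assumption:

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200, seed=0):
    """Fit a k-component 1-D Gaussian mixture by EM (illustrative only)."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                      # mixing weights
    mu = rng.choice(x, size=k, replace=False)    # initial means
    var = np.full(k, x.var())                    # initial variances
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k, 1e-6)
    return w, mu, var

# Simulated altimetric errors: a narrow well-modelled component plus a wider,
# biased component (hypothetical values, not taken from the paper).
rng = np.random.default_rng(1)
err = np.concatenate([rng.normal(0.0, 0.3, 8000), rng.normal(1.5, 1.0, 2000)])
w, mu, var = fit_gmm_1d(err)
```

A property worth noting: after each M-step the mixture mean, the weighted sum of component means, matches the sample mean exactly, which is a quick sanity check on any EM implementation.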


2021 ◽  
Vol 5 (1) ◽  
pp. 34
Author(s):  
Armand Taranco ◽  
Vincent Geronimi

This paper presents an analysis of the long-term dynamics of the terms of trade of primary commodities (TTPC), using an extended data set covering the whole period 1900–2020. Following our original contribution, we implement three time-series approaches: the finite mixture of distributions, the Markov finite mixture of distributions, and the Markov regime-switching model. Our results confirm the hypothesis of a succession of three different dynamic regimes in the TTPC over the 1900–2020 period. The uncertainty characterising the long-term dynamics of the TTPC is better captured with a Markov hypothesis for the transition from one regime to another than without it. In addition, this hypothesis improves the quality of the segmentation of the time series into regimes.
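The Markov regime-switching approach mentioned above rests on the Hamilton filter, which turns observations into filtered probabilities of being in each regime. The sketch below is a minimal two-regime Gaussian version with made-up parameters, not the TTPC model or its estimates:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a Gaussian Markov-switching mean model.
    P[i, j] = Pr(state_t = j | state_{t-1} = i)."""
    # Stationary distribution of the chain as the initial state probabilities.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()
    filt = np.zeros((len(y), len(mu)))
    pred = pi
    loglik = 0.0
    for t, yt in enumerate(y):
        dens = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
        joint = pred * dens            # Pr(state, y_t | past)
        lik = joint.sum()
        loglik += np.log(lik)
        filt[t] = joint / lik          # Pr(state | y_1..y_t)
        pred = filt[t] @ P             # one-step-ahead state probabilities
    return filt, loglik

# Hypothetical series: a low-mean regime followed by a high-mean regime.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(4.0, 1.0, 60)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
filt, loglik = hamilton_filter(y, np.array([0.0, 4.0]), np.array([1.0, 1.0]), P)
```

Segmenting the series then amounts to assigning each date to the regime with the highest filtered (or smoothed) probability; the sticky diagonal of P is what distinguishes this from an independent finite mixture.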


2021 ◽  
Author(s):  
Natalia Martinez ◽  
Guillermo Sapiro ◽  
Allen Tannenbaum ◽  
Travis J. Hollmann ◽  
Saad Nadeem

Segmenting noisy multiplex spatial tissue images is a challenging task, since the characteristics of both the noise and the biology being imaged differ significantly across tissues and modalities; this is compounded by the high monetary and time costs associated with manual annotations. It is therefore imperative to build algorithms that can accurately segment noisy images based on a small number of annotations. Recently, techniques to derive such an algorithm from a few scribbled annotations have been proposed, mostly relying on the refinement and estimation of pseudo-labels. Other techniques leverage the success of self-supervised denoising as a parallel task to potentially improve the segmentation objective when few annotations are available. In this paper, we propose a method that augments the segmentation objective via self-supervised multi-channel quantized imputation, meaning that each class of the segmentation objective can be characterized by a mixture of distributions. This approach leverages the observation that perfect pixel-wise reconstruction or denoising of the image is not needed for accurate segmentation, and introduces a self-supervised classification objective that aligns better with the overall segmentation goal. We demonstrate the superior performance of our approach on a variety of cancer datasets acquired with different highly multiplexed imaging modalities in real clinical settings. Code for our method along with a benchmarking dataset is available at https://github.com/natalialmg/ImPartial.


2018 ◽  
Vol 23 ◽  
pp. 00034
Author(s):  
Wiesław Szulczewski ◽  
Wojciech Jakubowski ◽  
Tamara Tokarczyk

Statistical models of freshet flows are the basis for the design of hydrotechnical structures and for all activities related to flood threat. With regard to how the data are prepared for estimation, and to the estimation procedure itself, the methods applied in such situations can be divided into two groups: FFA (Flood Frequency Analysis) and POT (Peak Over Threshold). In this study the two methods are compared, using an original mixture of distributions (FFA) and an original distribution-estimation procedure (POT), for six selected water gauges on the river Odra.
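The FFA/POT distinction above is, at its core, a difference in how the estimation sample is extracted from a discharge series: one annual maximum per year versus all independent exceedances of a threshold. A minimal sketch of both extractions, on synthetic data with a crude declustering rule (operational studies use hydrological independence criteria, and the gamma series below is purely illustrative):

```python
import numpy as np

def annual_maxima(flows, years):
    """FFA sample: the single largest discharge observed in each year."""
    return np.array([flows[years == y].max() for y in np.unique(years)])

def peaks_over_threshold(flows, threshold, separation=5):
    """POT sample: exceedances of a threshold, merging peaks closer than
    `separation` steps and keeping the larger one (crude independence rule)."""
    idx = np.flatnonzero(flows > threshold)
    peaks = []
    for i in idx:
        if peaks and i - peaks[-1][0] < separation:
            if flows[i] > peaks[-1][1]:
                peaks[-1] = (i, flows[i])   # same event: keep the higher peak
        else:
            peaks.append((i, flows[i]))     # new independent event
    return np.array([v for _, v in peaks])

# Hypothetical daily discharge series for three "years" of 365 days each.
rng = np.random.default_rng(2)
flows = rng.gamma(shape=2.0, scale=50.0, size=3 * 365)
years = np.repeat([2015, 2016, 2017], 365)
am = annual_maxima(flows, years)
pot = peaks_over_threshold(flows, threshold=300.0)
```

The distributions are then fitted to these samples: typically a (mixture of) extreme-value distributions for the annual maxima, and a generalized Pareto distribution for the threshold excesses.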


2017 ◽  
Vol 19 (2) ◽  
pp. 109-139
Author(s):  
Marc Comas-Cufí ◽  
Josep A Martín-Fernández ◽  
Glòria Mateu-Figueras

Methods in parametric cluster analysis commonly assume that the data can be modelled by means of a finite mixture of distributions. However, associating each mixture component with one cluster is frequently misleading, because different mixture components can overlap; the associated clusters then overlap too, suggesting a single cluster. A number of approaches have been proposed to construct the clusters by merging components using the posterior probabilities. This article presents a generic approach for building a hierarchy of mixture components that integrates and generalizes several techniques proposed earlier in the literature. Using this proposal, two new techniques based on the log-ratio of posterior probabilities are introduced. Moreover, two new methods are presented for deciding the final number of clusters. Simulated and real datasets are used to illustrate the methodology.
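The basic building block of the merging approaches discussed above is simple: combining two components means summing their columns of posterior responsibilities, and a criterion such as the mean posterior entropy measures how much the merge sharpens the clustering. The sketch below uses entropy for concreteness; the paper's own criteria are based on log-ratios of posteriors, and the toy posteriors are invented:

```python
import numpy as np

def merge_components(resp, i, j):
    """Merge mixture components i and j by summing their posterior
    responsibilities (a common building block of component-merging methods)."""
    keep = [c for c in range(resp.shape[1]) if c not in (i, j)]
    return np.column_stack([resp[:, keep], resp[:, i] + resp[:, j]])

def mean_posterior_entropy(resp):
    """Average entropy of the posterior memberships; overlapping components
    inflate it, and merging them always (weakly) reduces it."""
    p = np.clip(resp, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

# Hypothetical posteriors for 4 points over 3 components; components 0 and 1
# overlap heavily, so merging them yields near-certain memberships.
resp = np.array([[0.45, 0.45, 0.10],
                 [0.50, 0.40, 0.10],
                 [0.05, 0.05, 0.90],
                 [0.40, 0.50, 0.10]])
merged = merge_components(resp, 0, 1)
```

Repeating the best-scoring merge until the criterion stops improving yields exactly the kind of hierarchy of mixture components the article formalizes.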


2017 ◽  
Vol 0 (0) ◽  
pp. 0-0
Author(s):  
M. Aslam ◽  
M. Tahir ◽  
Z. Hussain

2017 ◽  
Vol 64 (1) ◽  
pp. 45-59 ◽  
Author(s):  
Hassan Ezzat ◽  
Berna Kirkulak-Uludag

This paper investigates the validity of the Mixture of Distributions Hypothesis (MDH), using trading volume and the number of trades as contemporaneous proxies for information arrival in 15 sector indices of the Saudi Stock Exchange (Tadawul) and a TGARCH model. The findings provide strong evidence for the validity of the MDH in the Saudi market. Volatility persistence decreases when trading volume and the number of trades are included in the conditional variance equation. The most striking finding is that the contemporaneous number of trades is a better proxy for information arrival than trading volume, interacting with volatility in the manner anticipated under the MDH. This can be attributed to a unique characteristic of the Saudi equity market, where only domestic investors are allowed to execute trades. Further, the results reveal that the leverage effect was amplified, indicating a more pronounced asymmetric effect of bad news on volatility.
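What "including volume in the conditional variance equation" means mechanically is an extra term in the TGARCH recursion. The sketch below writes out that recursion for a GJR/TGARCH(1,1) with a volume regressor; all parameter values and the simulated inputs are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

def tgarch_variance(eps, volume, omega, alpha, gamma, beta, delta):
    """Conditional variance of a GJR/TGARCH(1,1) model augmented with a
    volume (or number-of-trades) term, as used in MDH tests:
    h_t = omega + (alpha + gamma*I(eps_{t-1}<0)) * eps_{t-1}^2
          + beta * h_{t-1} + delta * V_t."""
    h = np.empty_like(eps)
    h[0] = eps.var()  # simple initialisation of the recursion
    for t in range(1, len(eps)):
        neg = 1.0 if eps[t - 1] < 0 else 0.0   # leverage indicator
        h[t] = (omega
                + (alpha + gamma * neg) * eps[t - 1] ** 2
                + beta * h[t - 1]
                + delta * volume[t])
    return h

# Hypothetical return shocks and a positive volume proxy.
rng = np.random.default_rng(3)
eps = rng.normal(0.0, 1.0, 500)
volume = rng.gamma(2.0, 1.0, 500)
h = tgarch_variance(eps, volume, omega=0.05, alpha=0.05,
                    gamma=0.08, beta=0.85, delta=0.01)
```

The MDH-style test then asks whether the persistence parameters (alpha + beta, roughly) shrink once delta is freed, which is the pattern the paper reports for the Tadawul indices.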


Energy ◽  
2016 ◽  
Vol 112 ◽  
pp. 935-962 ◽  
Author(s):  
Qinghua Hu ◽  
Yun Wang ◽  
Zongxia Xie ◽  
Pengfei Zhu ◽  
Daren Yu
