PARALLELIZATION OF THE α‐STABLE MODELLING ALGORITHMS

2007 ◽  
Vol 12 (4) ◽  
pp. 409-418
Author(s):  
Igoris Belovas ◽  
Vadimas Starikovičius

Stable distributions have a wide sphere of application: probability theory, physics, electronics, economics, sociology. They play a particularly important role in financial mathematics, since the classical models of the financial market, which are based on the normality hypothesis, often become inadequate. However, the practical implementation of stable models is a nontrivial task, because the probability density functions of α‐stable distributions have no analytical representations (with a few exceptions). In this work we exploit parallel computing technologies to accelerate the numerical solution of stable modelling problems. Specifically, we solve the stable law parameter estimation problem by the maximum likelihood method. When a large number of long financial series must be processed, only parallel technologies allow us to obtain results in acceptable time. We have distinguished and defined several hierarchical levels of parallelism. We show that coarse‐grained Multi‐Sets parallelization is very efficient on computer clusters, while the fine‐grained Maximum Likelihood level is very efficient on shared memory machines with Symmetric multiprocessing and Hyper‐threading technologies. A hybrid application utilizing both levels shows superior performance compared to the single‐level (MS) parallel application on a cluster of Pentium 4 HT nodes.
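
The coarse‐grained Multi‐Sets level described above parallelizes over independent series, which can be sketched with ordinary process-level parallelism. The snippet below is only a minimal illustration: it assumes SciPy's levy_stable as a stand-in for the paper's own maximum-likelihood fitter and uses simulated data in place of real financial series.

from multiprocessing import Pool

import numpy as np
from scipy.stats import levy_stable


def fit_one_series(returns):
    # Estimate (alpha, beta, loc, scale) for a single return series.
    return levy_stable.fit(returns)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for "a large number of long financial series".
    series = [levy_stable.rvs(1.7, 0.0, size=2000, random_state=rng)
              for _ in range(8)]
    with Pool() as pool:  # one series per worker process (Multi-Sets level)
        estimates = pool.map(fit_one_series, series)
    for params in estimates:
        print(params)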

2011 ◽  
Vol 52-54 ◽  
pp. 546-549
Author(s):  
Shi Bo Xin

Using the fact that the sample mean of a sample drawn from a normal distribution is itself normally distributed, we derive the equations for estimating the parameters of a normal distribution by the bootstrap method. We then carry out a simulation analysis and compare the quality of the parameter estimates obtained with the traditional maximum likelihood method and with the bootstrap method.
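
As a rough illustration of the comparison described above, the sketch below resamples a simulated normal sample with replacement, averages the per-resample estimates, and prints them next to the plain maximum-likelihood estimates. The sample size, number of resamples and true parameters are illustrative assumptions; the paper's specific bootstrap equations are not reproduced.

import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=2.0, size=100)   # simulated normal sample

# Maximum-likelihood estimates (sample mean, biased standard deviation)
mu_ml, sigma_ml = x.mean(), x.std(ddof=0)

# Bootstrap estimates: average the estimates over B resamples
B = 2000
boot = np.array([rng.choice(x, size=x.size, replace=True) for _ in range(B)])
mu_boot = boot.mean(axis=1).mean()
sigma_boot = boot.std(axis=1, ddof=0).mean()

print("ML:       ", mu_ml, sigma_ml)
print("Bootstrap:", mu_boot, sigma_boot)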


2012 ◽  
Vol 10 (2) ◽  
pp. 35-49
Author(s):  
Jan Purczyński

Simplified Method of GED Distribution Parameters Estimation
In this paper a simplified method of estimating GED distribution parameters is proposed. The method uses the first, second and 0.5-th order absolute moments. Unlike the maximum likelihood method, which involves solving a set of equations containing special mathematical functions, the solution is given in the form of a simple relation. Applying three different approximations of the value of Euler's gamma function yields three different sets of results, for which the χ² test is conducted. As the final solution (the estimate of the distribution parameters), the set yielding the smallest value of the χ² test statistic is chosen. The method proposed in this paper yields a χ² test statistic that does not exceed the statistic obtained for a distribution whose parameters were estimated with the maximum likelihood method.
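
For orientation, the sketch below estimates the GED (exponential power) shape parameter from an absolute-moment ratio, but solves the moment equation numerically rather than through the paper's closed-form relation and gamma-function approximations. The bracket for the shape parameter and the simulated Laplace sample are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln


def moment_ratio(s):
    # E|X|^2 / (E|X|)^2 for a zero-mean GED with shape parameter s
    return np.exp(gammaln(3.0 / s) + gammaln(1.0 / s) - 2.0 * gammaln(2.0 / s))


def fit_ged_shape(x):
    x = x - x.mean()
    r = np.mean(x**2) / np.mean(np.abs(x)) ** 2
    # Solve moment_ratio(s) = r for s on an illustrative bracket
    return brentq(lambda s: moment_ratio(s) - r, 0.3, 10.0)


rng = np.random.default_rng(1)
sample = rng.laplace(size=5000)   # Laplace corresponds to shape s = 1
print(fit_ged_shape(sample))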


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6204
Author(s):  
Qinghong Liu ◽  
Yong Qin ◽  
Zhengyu Xie ◽  
Zhiwei Cao ◽  
Limin Jia

Trains shuttle in semi-open environments, and the surrounding environment plays an important role in the safety of train operation. The weather is one of the factors that affect the surroundings of railways. Under haze conditions, railway monitoring and staff vision can be blurred, threatening railway safety. This paper tackles image dehazing for railways. The contributions of this paper for railway video image dehazing are as follows: (1) we propose an end-to-end residual block-based haze removal method, called RID-Net (Railway Image Dehazing Network), which consists of two subnetworks, a fine-grained and a coarse-grained network, and directly generates a clean image from the input hazy image. (2) A combined loss function (per-pixel loss and perceptual loss) is proposed to capture both low-level and high-level features and thus generate high-quality restored images. (3) We use full-reference criteria (PSNR and SSIM), object detection, running time, and visual inspection to evaluate the proposed dehazing method. Experimental results on a synthesized railway dataset, a benchmark indoor dataset, and a real-world dataset demonstrate that our method has superior performance compared to state-of-the-art methods.
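
The combined objective in contribution (2) can be sketched as a per-pixel term plus a feature-space term. The choice of VGG-16 layers, the weighting factor lam, and the assumption that inputs are already normalized for VGG are illustrative, not the values used in RID-Net.

import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights


class CombinedLoss(nn.Module):
    """Per-pixel MSE plus a perceptual loss on pretrained VGG-16 features."""

    def __init__(self, lam=0.05):
        super().__init__()
        self.lam = lam
        self.mse = nn.MSELoss()
        # Frozen feature extractor up to relu3_3 (an illustrative choice)
        features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
        for p in features.parameters():
            p.requires_grad_(False)
        self.vgg = features

    def forward(self, restored, clean):
        pixel = self.mse(restored, clean)                           # low-level term
        perceptual = self.mse(self.vgg(restored), self.vgg(clean))  # high-level term
        return pixel + self.lam * perceptual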


2006 ◽  
Vol 324-325 ◽  
pp. 615-618 ◽  
Author(s):  
Bing Yang ◽  
Yong Xiang Zhao

A new method is proposed to estimate the parameters of probabilistic fatigue crack growth rate models, including the Paris equation and its improved forms. To take the statistical characteristics of the whole test data set into account, the method inherits the idea of the general maximum likelihood method, which is widely used for parameter estimation of fatigue S-N curves, ε-N curves, and da/dN–ΔK curves, and extends the conventional correlation coefficient optimization method to evaluate parameters not only of the mean curve but also of the standard deviation curve and the probabilistic curves. Analysis of test data for 16MnR steel indicates that the present method is feasible. Compared with the general maximum likelihood method, the present method has a simpler algorithm and avoids constructing and solving the likelihood function, so it is faster in calculation.
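
A minimal sketch of fitting the Paris equation da/dN = C (ΔK)^m by linear regression in log-log coordinates is given below, with a scatter estimate about the mean curve. The crack-growth data are hypothetical, and the paper's correlation-coefficient optimisation for the probabilistic curves is not reproduced.

import numpy as np

# Hypothetical test data: stress-intensity range dK (MPa*sqrt(m))
# and crack growth rate da/dN (m/cycle).
dK = np.array([12.0, 15.0, 20.0, 26.0, 33.0, 40.0])
dadN = np.array([2.1e-9, 5.3e-9, 1.9e-8, 5.6e-8, 1.4e-7, 2.8e-7])

x, y = np.log10(dK), np.log10(dadN)
m, logC = np.polyfit(x, y, 1)      # slope m, intercept log10(C) of the mean curve
resid = y - (m * x + logC)
sigma = resid.std(ddof=2)          # scatter of log10(da/dN) about the mean curve

print(f"m = {m:.2f}, C = {10**logC:.3e}, sigma(log10 da/dN) = {sigma:.3f}")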


2018 ◽  
Vol 48 (1) ◽  
pp. 70-93 ◽  
Author(s):  
Sanku Dey ◽  
Mazen Nassar ◽  
Devendra Kumar ◽  
Fahad Alaboud

In this paper, a new three-parameter distribution called the Alpha Logarithm Transformed Fréchet (ALTF) distribution is introduced, which offers a more flexible distribution for modeling lifetime data. Various properties of the proposed distribution are derived, including explicit expressions for the quantiles, moments, incomplete moments, conditional moments, moment generating function, Rényi and δ-entropies, stochastic ordering, stress-strength reliability and order statistics. The new distribution can have decreasing, reversed J-shaped and upside-down bathtub failure rate functions depending on its parameter values. The maximum likelihood method is used to estimate the distribution parameters. A simulation study is conducted to evaluate the performance of the maximum likelihood estimates. Finally, the proposed extended model is applied to real data sets, and the results illustrate the superior performance of the ALTF distribution compared to some other well-known distributions.
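
The maximum-likelihood step can be sketched as numerical minimisation of the negative log-likelihood. Since the ALTF density is not reproduced in this abstract, the two-parameter Fréchet (SciPy's invweibull) log-density is used below purely as a stand-in; the starting values and simulated data are also illustrative.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import invweibull   # Fréchet (inverse Weibull) in SciPy


def neg_log_lik(theta, x):
    shape, scale = theta
    if shape <= 0 or scale <= 0:
        return np.inf                 # keep the optimiser in the valid region
    return -invweibull.logpdf(x, c=shape, scale=scale).sum()


rng = np.random.default_rng(7)
data = invweibull.rvs(c=2.5, scale=3.0, size=500, random_state=rng)
res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print(res.x)   # estimated (shape, scale)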


2020 ◽  
Vol 36 (16) ◽  
pp. 4458-4465 ◽  
Author(s):  
Ruichu Cai ◽  
Xuexin Chen ◽  
Yuan Fang ◽  
Min Wu ◽  
Yuexing Hao

Abstract Motivation Synthetic lethality (SL) is a promising form of gene interaction for cancer therapy, as it is able to identify specific genes to target in cancer cells without disrupting normal cells. As high-throughput wet-lab settings are often costly and face various challenges, computational approaches have become a practical complement. In particular, predicting SLs can be formulated as a link prediction task on a graph of interacting genes. Although matrix factorization techniques have been widely adopted in link prediction, they focus on mapping genes to latent representations in isolation, without aggregating information from neighboring genes. Graph convolutional networks (GCN) can capture such neighborhood dependency in a graph. However, it is still challenging to apply GCN for SL prediction, as SL interactions are extremely sparse, which makes overfitting more likely. Results In this article, we propose a novel dual-dropout GCN (DDGCN) for learning more robust gene representations for SL prediction. We employ both coarse-grained node dropout and fine-grained edge dropout to address the issue that standard dropout in vanilla GCN is often inadequate in reducing overfitting on sparse graphs. In particular, coarse-grained node dropout can efficiently and systematically enforce dropout at the node (gene) level, while fine-grained edge dropout can further fine-tune the dropout at the interaction (edge) level. We further present a theoretical framework to justify our model architecture. Finally, we conduct extensive experiments on human SL datasets and the results demonstrate the superior performance of our model in comparison with state-of-the-art methods. Availability and implementation DDGCN is implemented in Python 3.7, open-source and freely available at https://github.com/CXX1113/Dual-DropoutGCN. Supplementary information Supplementary data are available at Bioinformatics online.
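
The two dropout schemes named above can be sketched on a dense adjacency matrix during training: node dropout removes whole genes (rows and columns), while edge dropout removes individual interactions. How DDGCN wires these into its GCN layers is described in the paper; the dropout rates and toy graph below are illustrative assumptions.

import torch


def node_dropout(adj, p, generator=None):
    # Coarse-grained: drop whole nodes by zeroing their rows and columns.
    keep = (torch.rand(adj.size(0), generator=generator) > p).float()
    return adj * keep.unsqueeze(0) * keep.unsqueeze(1)


def edge_dropout(adj, p, generator=None):
    # Fine-grained: drop individual edges, keeping the graph symmetric.
    mask = (torch.rand(adj.shape, generator=generator) > p).float()
    mask = torch.maximum(mask, mask.t())
    return adj * mask


g = torch.Generator().manual_seed(0)
adj = (torch.rand(6, 6, generator=g) > 0.7).float()
adj = torch.maximum(adj, adj.t())          # toy symmetric SL graph
print(edge_dropout(node_dropout(adj, 0.2, g), 0.1, g))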


Author(s):  
Wang Zheng-fang ◽  
Z.F. Wang

This study focuses on the evaluation of the chloride SCC resistance of a duplex stainless steel, 00Cr18Ni5Mo3Si2 (18-5Mo), and of its welded coarse-grained zone (CGZ). 18-5Mo is a dual-phase (A+F) stainless steel with a yield strength of 512 N/mm². The secondary phase (A phase) accounts for 30-35% of the total, with fine-grained and homogeneously distributed A and F phases (Fig. 1). After the material is subjected to a specific welding thermal cycle, i.e. Tmax = 1350°C and t8/5 = 20 s, the microstructure may change from a fine-grained to a coarse-grained morphology and from a homogeneous distribution of the A phase to a concentration of the A phase (Fig. 2). Meanwhile, the proportion of the A phase is reduced from 35% to 5-10%. For this reason the region is known as the welded coarse-grained zone (CGZ). Because the microstructures of the base metal and the welded CGZ differ, their chloride SCC resistance also differs. Test procedure: constant load tensile tests (CLTT) were performed to record the Esce-t curve, by which corrosion crack growth can be described; the time to fracture, tf, was also recorded and is taken as an electrochemical and mechanical indicator for SCC resistance evaluation. Test environment: a boiling 42% MgCl2 solution at 143°C was used. In addition, microanalysis was conducted with light microscopy (LM), SEM, TEM, and Auger electron spectroscopy (AES) to reveal the correlation between the CLTT results and the microanalysis.


Author(s):  
Anggis Sagitarisman ◽  
Aceng Komarudin Mutaqin

Abstract Car manufacturers in Indonesia need to determine reasonable warranty costs that burden neither the company nor the consumer. Several statistical approaches have been developed to analyze warranty costs. One of them is the Gertsbakh-Kordonsky method, which reduces the two-dimensional warranty problem to one dimension. In this research, we apply the Gertsbakh-Kordonsky method to estimate the warranty cost for car type A in company XYZ. The one-dimensional data are tested with the Kolmogorov-Smirnov test to determine their distribution, and the distribution parameters are estimated with the maximum likelihood method. There are three approaches to estimating the distribution parameters; they differ in how the mileage is calculated for units that make no claim within the warranty period. In the application, we use claim data for car type A. The data exploration indicates that the failures of car type A are mostly due to the age of the vehicle. The Kolmogorov-Smirnov test shows that the most appropriate distribution for the claim data is the three-parameter Weibull. The estimate obtained with the Gertsbakh-Kordonsky method shows that the warranty cost for car type A is around 3.54% of the selling price of the car without warranty, i.e. around Rp 4,248,000 per unit. Keywords: warranty costs; the Gertsbakh-Kordonsky method; maximum likelihood estimation; Kolmogorov-Smirnov test.
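
The distribution-fitting step described above can be sketched as a three-parameter Weibull maximum-likelihood fit followed by a Kolmogorov-Smirnov check. The usage data below are simulated stand-ins; the Gertsbakh-Kordonsky reduction and the warranty-cost calculation are not shown.

import numpy as np
from scipy.stats import weibull_min, kstest

rng = np.random.default_rng(3)
# Simulated one-dimensional usage values after the two-to-one reduction
usage = weibull_min.rvs(c=1.8, loc=0.5, scale=2.0, size=300, random_state=rng)

shape, loc, scale = weibull_min.fit(usage)                 # 3-parameter Weibull MLE
stat, pvalue = kstest(usage, weibull_min(shape, loc, scale).cdf)
print(shape, loc, scale)
print("KS statistic:", stat, "p-value:", pvalue)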


2020 ◽  
Vol 70 (5) ◽  
pp. 1211-1230
Author(s):  
Abdus Saboor ◽  
Hassan S. Bakouch ◽  
Fernando A. Moala ◽  
Sheraz Hussain

Abstract In this paper, a bivariate extension of the exponentiated Fréchet distribution is introduced, namely the bivariate exponentiated Fréchet (BvEF) distribution, whose marginals are univariate exponentiated Fréchet distributions. Several properties of the proposed distribution are discussed, such as the joint survival function, joint probability density function, marginal probability density functions, conditional probability density function, moments, and marginal and bivariate moment generating functions. Moreover, the proposed distribution is obtained via the Marshall-Olkin survival copula. Estimation of the parameters is investigated by maximum likelihood with the observed information matrix. In addition to the maximum likelihood estimation method, we consider Bayesian inference and least squares estimation and compare these three methodologies for the BvEF. A simulation study is carried out to compare the performance of the estimators obtained by the presented estimation methods. The proposed bivariate distribution, together with other related bivariate distributions, is fitted to a real-life paired data set. It is shown that the BvEF distribution has superior performance among the compared distributions according to several goodness-of-fit tests.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Guanglei Xu ◽  
William S. Oates

Abstract Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications such as image recognition, drug discovery, and materials design. The Boltzmann probability distribution is used as a model to identify network parameters by optimizing the likelihood of predicting an output given hidden states trained on available data. Training such networks often requires sampling over a large probability space that must be approximated during gradient-based optimization. Quantum annealing has been proposed as a means to search this space more efficiently, and it has been experimentally investigated on D-Wave hardware. The D-Wave implementation requires selection of an effective inverse temperature or hyperparameter (β) within the Boltzmann distribution, which can strongly influence optimization. Here, we show how this parameter can be estimated as a hyperparameter applied to D-Wave hardware during neural network training by maximizing the likelihood or minimizing the Shannon entropy. We find that both methods improve the training of RBMs, based on experimental validation on D-Wave hardware for an image recognition problem. Neural network image reconstruction errors are evaluated using Bayesian uncertainty analysis, which shows a more than an order of magnitude lower image reconstruction error with the maximum likelihood method than with manual optimization of the hyperparameter. The maximum likelihood method is also shown to outperform minimizing the Shannon entropy for image reconstruction.
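
Estimating an effective inverse temperature by maximum likelihood can be sketched for a small, fully enumerable state space: the Boltzmann log-likelihood of the observed samples is maximised directly over β. Real D-Wave samples and RBM energies would replace the toy energies and synthetic draws below, which are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize_scalar


def neg_log_lik(beta, energies, counts):
    # Boltzmann log-likelihood: sum over samples of (-beta*E - log Z)
    logZ = np.log(np.exp(-beta * energies).sum())
    return -(counts * (-beta * energies - logZ)).sum()


rng = np.random.default_rng(0)
energies = rng.uniform(-2.0, 2.0, size=16)      # toy energy of each state
true_beta = 1.3
p = np.exp(-true_beta * energies)
p /= p.sum()
samples = rng.choice(energies.size, size=5000, p=p)   # stand-in for annealer output
counts = np.bincount(samples, minlength=energies.size)

res = minimize_scalar(neg_log_lik, bounds=(0.01, 10.0),
                      args=(energies, counts), method="bounded")
print(res.x)   # estimate of the effective beta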

