ADAPTIVE LASSO-TYPE ESTIMATION FOR MULTIVARIATE DIFFUSION PROCESSES

2012 ◽  
Vol 28 (4) ◽  
pp. 838-860 ◽  
Author(s):  
Alessandro De Gregorio ◽  
Stefano M. Iacus

The least absolute shrinkage and selection operator (LASSO) is a widely used statistical methodology for simultaneous estimation and variable selection. It is a shrinkage estimation method that allows one to select parsimonious models: it estimates redundant parameters as exactly zero in large samples and reduces the variance of the estimates. In recent years, many authors have analyzed this technique from both theoretical and applied points of view. We introduce and study the adaptive LASSO problem for discretely observed multivariate diffusion processes. We prove oracle properties and also derive the asymptotic distribution of the LASSO estimator. This is a nontrivial extension of previous results by Wang and Leng (2007, Journal of the American Statistical Association, 102(479), 1039–1048) on LASSO estimation, because the estimators of the drift and diffusion coefficients converge at different rates. We perform simulations and real data analysis to provide some evidence of the applicability of this method.
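The adaptive weighting mechanism that drives sparsity can be sketched in the orthonormal-design special case, where the adaptive LASSO reduces to coordinate-wise soft-thresholding with weights built from an initial consistent estimate. This is a generic illustration, not the authors' diffusion-specific estimator; the weight form `lam / |beta|^gamma` is the standard adaptive LASSO choice.

```python
import math

def adaptive_lasso_orthonormal(beta_init, lam=0.5, gamma=1.0):
    """Adaptive LASSO in the orthonormal-design special case: soft-
    threshold each initial estimate with weight lam / |beta_init|^gamma,
    so large coefficients are barely shrunk while small (likely
    redundant) ones are set exactly to zero."""
    out = []
    for b in beta_init:
        w = lam / abs(b) ** gamma if b != 0 else float("inf")
        out.append(math.copysign(max(abs(b) - w, 0.0), b))
    return out

# The strong coefficient survives nearly intact; the weak ones vanish.
est = adaptive_lasso_orthonormal([3.0, 0.4, -0.1])
```

The data-driven weights are exactly what produces the oracle behavior: coefficients with large initial estimates receive a small penalty, while near-zero ones receive a huge penalty and are eliminated.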

Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1835
Author(s):  
Antonio Barrera ◽  
Patricia Román-Román ◽  
Francisco Torres-Ruiz

A joint and unified vision of stochastic diffusion models associated with the family of hyperbolastic curves is presented. The motivation behind this approach stems from the fact that all hyperbolastic curves satisfy a linear differential equation of Malthusian type. By virtue of this, and by adding multiplicative noise to said ordinary differential equation, a diffusion process whose mean function is the curve in question may be associated with each curve. The inference in the resulting processes is presented jointly, as well as the strategies developed to obtain the initial solutions needed for the numerical resolution of the system of equations resulting from the application of the maximum likelihood method. The common perspective presented is especially useful for implementing the procedures needed to fit the models to real data. Some examples based on simulated data support the suitability of the development described in the present paper.
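The basic construction (a Malthusian-type linear ODE perturbed by multiplicative noise) can be sketched with an Euler-Maruyama simulation; here `h` is only a placeholder for the curve-specific growth function, not one of the paper's hyperbolastic rates.

```python
import math
import random

def simulate_malthusian_sde(h, x0, sigma, t_end=1.0, n=1000, seed=42):
    """Euler-Maruyama scheme for dX = h(t) X dt + sigma X dW: the
    Malthusian-type linear ODE x' = h(t) x perturbed by multiplicative
    Brownian noise, as in the diffusion-process construction above."""
    random.seed(seed)
    dt = t_end / n
    x, t, path = x0, 0.0, [x0]
    for _ in range(n):
        dw = random.gauss(0.0, math.sqrt(dt))
        x += h(t) * x * dt + sigma * x * dw
        t += dt
        path.append(x)
    return path

# A constant h gives (noisy) exponential growth; a time-dependent h
# would encode a particular hyperbolastic mean curve.
path = simulate_malthusian_sde(lambda t: 0.5, x0=1.0, sigma=0.1)
```

Choosing `h` so that the solution of the noise-free ODE is a given hyperbolastic curve is precisely what ties one diffusion process to each curve in the family.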


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Ali Kargarnejad ◽  
Mohsen Taherbaneh ◽  
Amir Hosein Kashefi

Tracking the maximum power point of a solar panel is of interest in most photovoltaic applications. Modeling a solar panel exclusively from manufacturer data is also of particular interest: manufacturers generally give the electrical specifications of their products at a single operating condition, yet in many cases the specifications under other conditions are needed. In this research, a comprehensive one-diode model for a solar panel is developed based only on datasheet values, with the model parameters' dependence on environmental conditions taken into account as far as possible. Comparison between real data and simulation results shows that the proposed model attains the maximum accuracy obtainable from datasheet information alone. A new fuzzy-based controller to track the maximum power point of the solar panel is then proposed, which offers a better response in terms of speed, accuracy, and stability than the commonly used controllers developed previously.
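For concreteness, the implicit single-diode panel equation can be sketched as follows; the parameter values (`Iph`, `I0`, `Rs`, `Rsh`, ideality factor `n`, `Ns` cells) are illustrative placeholders, not the datasheet-derived values of the paper.

```python
import math

def diode_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, n=1.3,
                  Ns=60, T=298.15):
    """Solve the implicit single-diode panel equation
        I = Iph - I0 * (exp((V + I*Rs) / (Ns*n*Vt)) - 1) - (V + I*Rs) / Rsh
    for the output current I by damped fixed-point iteration."""
    Vt = 1.380649e-23 * T / 1.602176634e-19   # thermal voltage kT/q
    I = Iph                                    # start from the photocurrent
    for _ in range(200):
        Vd = V + I * Rs                        # internal diode-branch voltage
        I_new = Iph - I0 * math.expm1(Vd / (Ns * n * Vt)) - Vd / Rsh
        I = 0.5 * I + 0.5 * I_new              # damping helps convergence
    return I

# Short-circuit current is close to Iph; current sags at higher voltage.
i_sc = diode_current(0.0)
i_load = diode_current(35.0)
```

Because the current appears on both sides of the equation, some iterative or Lambert-W-based solution is required; the datasheet-only identification in the paper amounts to choosing these five parameters so the curve reproduces the published short-circuit, open-circuit, and maximum-power points.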


1984 ◽  
Vol 16 (3) ◽  
pp. 492-561 ◽  
Author(s):  
E. J. Hannan ◽  
L. Kavalieris

This paper is in three parts. The first deals with the algebraic and topological structure of spaces of rational transfer function linear systems (ARMAX systems, as they have been called). This structure theory is dominated by the concept of a space of systems of order, or McMillan degree, n, because this space, M(n), can be realised as a kind of high-dimensional algebraic surface of dimension n(2s + m), where s and m are the numbers of outputs and inputs. In principle, therefore, fitting a rational transfer model to data can be viewed as the problem of determining n and then the appropriate element of M(n). However, the fact that M(n) appears to need a large number of coordinate neighbourhoods to cover it complicates the task. The problems associated with this program, as well as the theory necessary for the analysis of algorithms to carry out aspects of it, are discussed in this first part of the paper, Sections 1 and 2. The second part, Sections 3 and 4, deals with algorithms for fitting a model and exhibits these algorithms through simulations and the analysis of real data. The third part discusses the asymptotic properties of the algorithm. These properties depend on uniform rates of convergence being established for covariances up to some lag that increases indefinitely with the length of record, T. The necessary limit theorems and the analysis of the algorithms are given in Section 5. Many of these results are of interest independently of the algorithms being studied.


2021 ◽  
Author(s):  
Masaki Uto

Performance assessment, in which human raters assess examinee performance in a practical task, often involves the use of a scoring rubric consisting of multiple evaluation items to increase the objectivity of evaluation. However, even when a rubric is used, assigned scores are known to depend on characteristics of the rubric’s evaluation items and of the raters, decreasing the accuracy of ability measurement. To resolve this problem, item response theory (IRT) models have been proposed that can estimate examinee ability while accounting for the effects of these characteristics. These IRT models assume unidimensionality, meaning that a rubric measures one latent ability. In practice, however, this assumption might not be satisfied because a rubric’s evaluation items are often designed to measure multiple sub-abilities that constitute a targeted ability. To address this issue, this study proposes a multidimensional IRT model for rubric-based performance assessment. Specifically, the proposed model is formulated as a multidimensional extension of a generalized many-facet Rasch model. Moreover, a No-U-Turn variant of the Hamiltonian Monte Carlo algorithm, a Markov chain Monte Carlo method, is adopted for parameter estimation in the proposed model. The proposed model is useful not only for improving the accuracy of ability measurement, but also for detailed analysis of rubric quality and rubric construct validity. The study demonstrates the effectiveness of the proposed model through simulation experiments and application to real data.
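The facet idea behind such models can be sketched in a heavily simplified dichotomous form, where the score probability depends on examinee ability, item difficulty, and rater severity. This is only a toy version of the generalized polytomous model in the paper, and the one-dimension-per-item loading used below is an assumption for illustration.

```python
import math

def mfrm_prob(theta, beta_item, gamma_rater):
    """Dichotomous many-facet Rasch model: probability that a rater of
    severity gamma_rater credits an examinee of ability theta on an
    evaluation item of difficulty beta_item."""
    return 1.0 / (1.0 + math.exp(-(theta - beta_item - gamma_rater)))

def multidim_mfrm_prob(theta_vec, item_dim, beta_item, gamma_rater):
    """Multidimensional variant under a simplifying assumption: each
    rubric item loads on exactly one sub-ability dimension."""
    return mfrm_prob(theta_vec[item_dim], beta_item, gamma_rater)

# The same examinee, scored on an item measuring sub-ability 0,
# by a lenient (gamma = -0.5) versus a severe (gamma = +0.5) rater.
p_lenient = multidim_mfrm_prob([0.8, -0.3], 0, 0.0, -0.5)
p_severe = multidim_mfrm_prob([0.8, -0.3], 0, 0.0, 0.5)
```

Replacing the scalar `theta` with a vector of sub-abilities is what breaks the unidimensionality assumption the abstract describes; estimating all facets jointly is what the NUTS-based MCMC machinery is for.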


Entropy ◽  
2018 ◽  
Vol 20 (11) ◽  
pp. 813 ◽  
Author(s):  
José Amigó ◽  
Sámuel Balogh ◽  
Sergio Hernández

Entropy appears in many contexts (thermodynamics, statistical mechanics, information theory, measure-preserving dynamical systems, topological dynamics, etc.) as a measure of different properties (energy that cannot produce work, disorder, uncertainty, randomness, complexity, etc.). In this review, we focus on the so-called generalized entropies, which from a mathematical point of view are nonnegative functions defined on probability distributions that satisfy the first three Shannon–Khinchin axioms: continuity, maximality and expansibility. While these three axioms are expected to be satisfied by all macroscopic physical systems, the fourth axiom (separability or strong additivity) is in general violated by non-ergodic systems with long-range forces, which has been the main reason for exploring weaker axiomatic settings. Currently, non-additive generalized entropies are also being used to study new phenomena in complex dynamics (multifractality), quantum systems (entanglement), soft sciences, and more. Besides going through the axiomatic framework, we review the characterization of generalized entropies via two scaling exponents introduced by Hanel and Thurner. The first of these exponents is related to the diffusion scaling exponent of diffusion processes, as we also discuss. Applications are addressed along the way, as the description of the main generalized entropies advances.
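A minimal concrete example of a non-additive generalized entropy is the Tsallis entropy, which violates the fourth Shannon-Khinchin axiom for q != 1 but recovers the Shannon entropy in the limit q -> 1:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (natural log) of a probability distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), a standard
    non-additive generalized entropy; S_q -> Shannon as q -> 1."""
    if q == 1:
        return shannon_entropy(p)
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
# Taking q closer and closer to 1 makes S_q approach the Shannon value.
```

The entropic index q plays the role of the deformation parameter that the Hanel-Thurner scaling exponents classify.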


2020 ◽  
Vol 9 (1) ◽  
pp. 61-81
Author(s):  
Lazhar BENKHELIFA

A new lifetime model, with four positive parameters, called the Weibull Birnbaum-Saunders distribution is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained including expansions for the cumulative and density functions, moments, generating function, mean deviations, order statistics and reliability. Estimation of the model parameters is carried out by the maximum likelihood estimation method. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.
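One common way such Weibull-G extensions are built is via the generator F(t) = 1 - exp(-a * (G/(1-G))^b) applied to the Birnbaum-Saunders baseline CDF. The sketch below uses this construction, which is an assumption and may differ from the paper's exact parameterization.

```python
import math

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders CDF: Phi((sqrt(t/beta) - sqrt(beta/t)) / alpha)."""
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def weibull_bs_cdf(t, a, b, alpha, beta):
    """Weibull Birnbaum-Saunders CDF via the Weibull-G generator
    F(t) = 1 - exp(-a * (G / (1 - G))**b), with G the BS baseline CDF.
    (An assumed construction; the paper's form may differ.)"""
    G = bs_cdf(t, alpha, beta)
    return 1.0 - math.exp(-a * (G / (1.0 - G)) ** b)

# With a = b = 1, the generator gives F(beta) = 1 - exp(-1) at the
# baseline median t = beta, since G(beta) = 0.5 there.
```

The two extra Weibull-generator parameters (a, b) are what supply the additional shape flexibility over the two-parameter Birnbaum-Saunders baseline.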


2020 ◽  
Author(s):  
Marta Pelizzola ◽  
Merle Behr ◽  
Housen Li ◽  
Axel Munk ◽  
Andreas Futschik

Since haplotype information is of widespread interest in biomedical applications, considerable effort has been put into haplotype reconstruction. Here, we propose a new, computationally efficient method, called haploSep, that is able to accurately infer major haplotypes and their frequencies just from multiple samples of allele frequency data. Our approach seems to be the first that is able to estimate more than one haplotype given such data. Even the accuracy of experimentally obtained allele frequencies can be improved by re-estimating them from our reconstructed haplotypes. From a methodological point of view, we model our problem as a multivariate regression problem in which both the design matrix and the coefficient matrix are unknown. The design matrix, with 0/1 entries, models haplotypes, and the columns of the coefficient matrix represent the frequencies of the haplotypes, which are non-negative and sum to one. We illustrate our method on simulated and real data, focusing on experimental evolution and microbial data.
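The forward model underlying this regression formulation is easy to state: observed allele frequencies are the product of a 0/1 haplotype matrix and a frequency matrix whose columns sum to one. A minimal sketch of that forward map follows; inverting it, with both factors unknown, is the hard part the method addresses.

```python
def allele_frequencies(haplotypes, freqs):
    """Forward model: Y = X F, where X (loci x haplotypes) has 0/1
    entries marking which haplotype carries each allele, and each
    column of F gives the haplotype frequencies in one sample."""
    n_hap = len(haplotypes[0])
    n_samples = len(freqs[0])
    return [[sum(row[h] * freqs[h][s] for h in range(n_hap))
             for s in range(n_samples)]
            for row in haplotypes]

# Two haplotypes mixed with different weights in two samples.
X = [[1, 0],   # locus 1: derived allele only on haplotype 1
     [1, 1],   # locus 2: derived allele on both haplotypes
     [0, 1]]   # locus 3: derived allele only on haplotype 2
F = [[0.7, 0.2],   # haplotype 1 frequency in samples 1 and 2
     [0.3, 0.8]]   # haplotype 2 frequency (columns sum to one)
Y = allele_frequencies(X, F)
```

Re-estimating allele frequencies from reconstructed haplotypes, as the abstract mentions, amounts to projecting noisy observations back onto the low-rank structure Y = X F.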


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2105
Author(s):  
Claudia Angelini ◽  
Daniela De Canditiis ◽  
Anna Plaksienko

In this paper, we consider the problem of estimating multiple Gaussian graphical models from high-dimensional datasets. We assume that these datasets are sampled from different distributions with the same conditional independence structure, but not the same precision matrix. We propose jewel, a joint estimation method that uses a node-wise penalized regression approach. In particular, jewel uses a group Lasso penalty to simultaneously guarantee the symmetry of the resulting adjacency matrix and the joint learning of the graphs. We solve the minimization problem using the group descent algorithm and propose two procedures for estimating the regularization parameter. Furthermore, we establish the consistency of the estimator. Finally, we illustrate the estimator’s performance through simulated and real data examples on gene regulatory networks.
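The node-wise idea can be sketched in its simplest single-dataset form: neighborhood selection via a per-node LASSO with an OR symmetrization rule. jewel's actual group Lasso across multiple datasets is more involved, and the tiny coordinate-descent solver below is only illustrative.

```python
def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate-descent LASSO for (1/2)||y - X b||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j excluded
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n)) or 1.0
            beta[j] = max(abs(rho) - lam, 0.0) * (1.0 if rho > 0 else -1.0) / z
    return beta

def neighborhood_graph(data, lam):
    """Regress each variable on all the others and connect i-j whenever
    either regression keeps a nonzero coefficient (OR rule)."""
    p = len(data[0])
    adj = [[0] * p for _ in range(p)]
    for j in range(p):
        y = [row[j] for row in data]
        X = [[row[k] for k in range(p) if k != j] for row in data]
        beta = lasso_cd(X, y, lam)
        for b, k in zip(beta, [k for k in range(p) if k != j]):
            if b != 0.0:
                adj[j][k] = adj[k][j] = 1
    return adj

# Variables 0 and 1 are identical copies; variable 2 is unrelated.
data = [[1, 1, 1], [2, 2, -1], [-1, -1, 2], [-2, -2, -2], [0.5, 0.5, 0.3]]
adj = neighborhood_graph(data, lam=2.0)
```

The group Lasso penalty in jewel replaces the per-coefficient penalty here with a penalty on the group of coefficients that a candidate edge contributes across all datasets, which is what enforces one common, symmetric adjacency matrix by construction rather than by an ad hoc OR/AND rule.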


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Hisham M. Almongy ◽  
Ehab M. Almetwally ◽  
Randa Alharbi ◽  
Dalia Alnagar ◽  
E. H. Hafez ◽  
...  

This paper is concerned with the estimation of the parameters of the Weibull generalized exponential distribution (WGED) based on an adaptive Type-II progressive (ATIIP) censored sample. Maximum likelihood estimation (MLE), maximum product spacing (MPS), and Bayesian estimation based on Markov chain Monte Carlo (MCMC) methods are applied in order to determine the best estimation method. A Monte Carlo simulation is used to compare the three estimation methods on ATIIP-censored samples, and bootstrap confidence intervals are also constructed. Real data on single carbon fibers and electrical data are analyzed to show how the schemes work in practice.
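The maximum product spacing principle can be sketched on a one-parameter exponential model (not the four-parameter WGED, and without censoring): choose the parameter maximizing the sum of log-spacings of the fitted CDF evaluated at the order statistics.

```python
import math

def mps_exponential_rate(sample, grid=None):
    """Maximum product spacing (MPS) estimate of an exponential rate:
    maximize sum(log(F(x_(i+1)) - F(x_(i)))) over the fitted CDF
    F(x) = 1 - exp(-rate * x), with F(x_(0)) = 0 and F(x_(n+1)) = 1."""
    xs = sorted(sample)

    def log_spacing_sum(rate):
        F = [0.0] + [1.0 - math.exp(-rate * x) for x in xs] + [1.0]
        return sum(math.log(max(F[i + 1] - F[i], 1e-300))
                   for i in range(len(F) - 1))

    grid = grid or [0.05 * k for k in range(1, 200)]  # crude grid search
    return max(grid, key=log_spacing_sum)

# For this sample the MPS estimate lands in the same range as the
# maximum likelihood estimate 1/mean.
sample = [0.1, 0.3, 0.45, 0.7, 0.2, 0.9, 0.15, 0.5, 0.35, 0.6]
rate_hat = mps_exponential_rate(sample)
```

MPS favors parameters that make the transformed order statistics roughly uniformly spaced on [0, 1]; with censoring, as in the ATIIP scheme, the spacings contribution of censored observations must be adjusted accordingly.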


2017 ◽  
Vol 40 (1) ◽  
pp. 165-203 ◽  
Author(s):  
Sanku Dey ◽  
Enayetur Raheem ◽  
Saikat Mukherjee

This article addresses the various properties and different methods of estimation of the unknown parameters of the Transmuted Rayleigh (TR) distribution from the frequentist point of view. Although our main focus is on frequentist estimation, various mathematical and statistical properties of the TR distribution (such as quantiles, moments, the moment generating function, conditional moments, the hazard rate, mean residual lifetime, mean past lifetime, mean deviation about the mean and median, stochastic ordering, various entropies, the stress-strength parameter and order statistics) are also derived. We briefly describe different frequentist estimation approaches, namely maximum likelihood estimators, moment estimators, L-moment estimators, percentile-based estimators, least squares estimators, the method of maximum product of spacings, the method of Cramér-von Mises, and the methods of Anderson-Darling and right-tail Anderson-Darling, and compare them using extensive numerical simulations. Monte Carlo simulations are performed to compare the performance of the proposed estimation methods for both small and large samples. Finally, the potential of the model is analyzed by means of two real data sets, further illustrated by obtaining the bias and standard errors of the estimates and bootstrap percentile confidence intervals via bootstrap resampling.
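The transmuted construction itself is compact: F(x) = (1 + lam) * G(x) - lam * G(x)^2 with |lam| <= 1, where G is the baseline Rayleigh CDF. A minimal sketch:

```python
import math

def rayleigh_cdf(x, sigma):
    """Baseline Rayleigh CDF G(x) = 1 - exp(-x^2 / (2 sigma^2))."""
    return 1.0 - math.exp(-x * x / (2.0 * sigma * sigma))

def transmuted_rayleigh_cdf(x, sigma, lam):
    """Transmuted Rayleigh CDF F = (1 + lam) G - lam G^2, |lam| <= 1;
    lam = 0 recovers the baseline Rayleigh distribution."""
    G = rayleigh_cdf(x, sigma)
    return (1.0 + lam) * G - lam * G * G

# lam = 0 reduces to the Rayleigh CDF; lam != 0 tilts probability mass
# while F remains a valid CDF, increasing from 0 to 1.
```

Because F is a quadratic in G, quantiles (and hence percentile-based estimators) follow from solving (1 + lam) G - lam G^2 = u for G and then inverting the Rayleigh CDF in closed form.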

