Monte Carlo Simulation in Quantile Regression Parameter for Sparsity Estimate

2021 ◽  
Vol 2123 (1) ◽  
pp. 012027
Author(s):  
A Hapsery ◽  
A B Tribhuwaneswari

Abstract Monte Carlo is a method for generating data from a specified distribution and resampling until the parameter estimates of the method under study converge. The purpose of this simulation is, first, to show that quantile regression with an estimated sparsity-function parameter can model data whose distribution is not uniform, and second, to show that quantile regression is an extension of linear regression. Data with a non-uniform spread are generally referred to as heterogeneous, while data with a uniform spread are called homogeneous. In this study, data are generated for small and large samples under both homogeneous and heterogeneous settings, using error variances of 0.25, 1, and 4. Data generation and parameter estimation are resampled 1000 times. The simulation studies conclude that the parameter estimates from classical regression coincide with the quantile regression estimates at quantile 0.5, and that quantile regression can be applied to heterogeneous and homogeneous data regardless of sample size or variance.
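As an illustration of the simulation design described above, the sketch below (not the authors' code; the sample size, coefficients, and error structure are assumptions) generates heteroscedastic data, fits ordinary least squares and median (quantile 0.5) regression with statsmodels, and averages the estimates over the resamples. With symmetric errors the two sets of estimates should coincide, in line with the stated conclusion.

```python
# Minimal sketch: compare OLS and median (q = 0.5) quantile regression
# estimates on simulated heteroscedastic data, averaged over resamples.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_rep = 100, 1000                      # sample size and number of resamples
beta = np.array([1.0, 2.0])               # assumed true intercept and slope

ols_est, qr_est = [], []
for _ in range(n_rep):
    x = rng.uniform(0, 10, n)
    X = sm.add_constant(x)
    # heteroscedastic errors: spread grows with x; variances 0.25, 1, or 4 could be swapped in
    eps = rng.normal(0.0, 0.5 * (1 + 0.2 * x))
    y = X @ beta + eps
    ols_est.append(sm.OLS(y, X).fit().params)
    qr_est.append(sm.QuantReg(y, X).fit(q=0.5).params)

print("mean OLS estimate:       ", np.mean(ols_est, axis=0))
print("mean QR estimate (q=0.5):", np.mean(qr_est, axis=0))
```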

2012 ◽  
Vol 2012 ◽  
pp. 1-17 ◽  
Author(s):  
Hiroyuki Taniai ◽  
Takayuki Shiohama

We propose a semiparametrically efficient estimator for α-risk-minimizing portfolio weights. Based on the work of Bassett et al. (2004), an α-risk-minimizing portfolio optimization is formulated as a linear quantile regression problem. The quantile regression method uses a pseudolikelihood based on an asymmetric Laplace reference density, and asymptotic properties such as consistency and asymptotic normality are obtained. We apply the results of Hallin et al. (2008) to the problem of constructing α-risk-minimizing portfolios using residual signs and ranks and a general reference density. Monte Carlo simulations assess the performance of the proposed method. Empirical applications are also investigated.
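A minimal sketch of the Bassett et al. (2004) reformulation, using simulated returns and omitting the target-mean-return constraint used in the paper (the asset count, return distribution, and risk level α are assumptions): minimizing the α-risk of a fully invested portfolio reduces to a quantile regression of one asset's return on return differences.

```python
# Sketch: alpha-risk-minimizing portfolio weights via linear quantile regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T, p, alpha = 500, 4, 0.05                   # observations, assets, risk level (assumptions)
R = rng.normal(0.001, 0.02, size=(T, p))     # simulated asset returns

y = R[:, -1]                                 # return of the asset chosen as numeraire
X = sm.add_constant(R[:, -1:] - R[:, :-1])   # differences r_p - r_j plus an intercept (the VaR term)

fit = sm.QuantReg(y, X).fit(q=alpha)
w_head = np.asarray(fit.params)[1:]          # weights of assets 1..p-1
w = np.append(w_head, 1 - w_head.sum())      # last weight from the budget constraint
print("alpha-risk-minimizing weights:", w)
```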


2018 ◽  
Vol 19 (5) ◽  
pp. 501-523
Author(s):  
Xi Liu ◽  
Keming Yu ◽  
Qifa Xu ◽  
Xueqing Tang

We investigate a new kernel-weighted likelihood smoothing quantile regression method. The likelihood is based on a normal scale-mixture representation of the asymmetric Laplace distribution (ALD). This approach enjoys the same good design adaptation as local quantile regression (Spokoiny et al., 2013, Journal of Statistical Planning and Inference, 143, 1109–1129), particularly for smoothing extreme quantile curves, and ensures non-crossing quantile curves for any given sample. The performance of the proposed method is evaluated via extensive Monte Carlo simulation studies and one real data analysis.
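For readers unfamiliar with the mixture representation mentioned above, here is an illustrative sketch (not the paper's code) of the standard normal scale-mixture form of the ALD, u = θw + ψ√w·z with w ~ Exp(1) and z ~ N(0, 1). The check exploits the fact that the τ-quantile of this ALD is zero, so about a fraction τ of the draws should fall below zero.

```python
# Sketch: draw from ALD(0, 1, tau) via its normal scale-mixture representation.
import numpy as np

rng = np.random.default_rng(2)
tau, n = 0.9, 200_000
theta = (1 - 2 * tau) / (tau * (1 - tau))     # location coefficient of the mixing variable
psi = np.sqrt(2 / (tau * (1 - tau)))          # scale coefficient of the normal component

w = rng.exponential(1.0, n)                   # exponential mixing variable
z = rng.normal(0.0, 1.0, n)                   # standard normal component
u = theta * w + psi * np.sqrt(w) * z          # ALD(0, 1, tau) draws

# the tau-quantile of ALD(0, 1, tau) is 0, so roughly tau of the mass lies below 0
print("P(U <= 0) ~", np.mean(u <= 0), "(target:", tau, ")")
```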


2020 ◽  
Author(s):  
Ahmad Sudi Pratikno

Data presentation, data variation, and data type are the three main elements used in analyzing a study. Data presentation is the way researchers show the results of research to readers and display the data to be analyzed; common forms include bar charts, line charts, pie charts, scatter diagrams, and symbol charts. Data variation refers to how the data are distributed, and in this respect data may be homogeneous or heterogeneous. As for data type, there are several kinds, such as dichotomous, nominal, ordinal, and interval data, among others.


1999 ◽  
Vol 15 (2) ◽  
pp. 91-98 ◽  
Author(s):  
Lutz F. Hornke

Summary: Item parameters for several hundred items were estimated from empirical data on several thousand subjects. Estimates under the logistic one-parameter (1PL) and two-parameter (2PL) models were evaluated. However, model fit showed that only a subset of items complied sufficiently, so the well-fitting items were assembled into item banks. In several simulation studies, 5000 simulated responses were generated, along with person parameters, in accordance with a computerized adaptive testing (CAT) procedure. A general reliability of .80, or a standard error of measurement of .44, was used as the stopping rule to end CAT testing. We also recorded how often each item was used across all simulees. Person-parameter estimates based on CAT correlated above .90 with the simulated true values. For all 1PL-fitting item banks, most simulees needed more than 20 but fewer than 30 items to reach the preset level of measurement error. Testing based on item banks that complied with the 2PL, however, revealed that on average only 10 items were sufficient to end testing at the same measurement error level. Both results clearly demonstrate the precision and economy of computerized adaptive testing. Empirical evaluations from everyday use will show whether these trends hold up in practice. If so, CAT will become feasible and reasonable with some 150 well-calibrated 2PL items.
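The following sketch (an illustration with an assumed item bank, not the study's procedure) shows the core CAT loop implied by the summary: maximum-information item selection under the 2PL, EAP ability updating on a grid, and stopping once the standard error falls below .44 (roughly reliability .80).

```python
# Sketch of a 2PL computerized adaptive test for a single simulee.
import numpy as np

rng = np.random.default_rng(3)
n_items = 150
a = rng.uniform(0.8, 2.0, n_items)        # discriminations (assumed bank)
b = rng.normal(0.0, 1.0, n_items)         # difficulties (assumed bank)
theta_true = 0.5                          # simulee's true ability
grid = np.linspace(-4, 4, 161)            # quadrature grid for EAP
prior = np.exp(-0.5 * grid**2)            # standard normal prior (unnormalized)

def p2pl(theta, a_i, b_i):
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

like = np.ones_like(grid)
used, theta_hat, se = [], 0.0, np.inf
while se > 0.44 and len(used) < n_items:
    # pick the most informative unused item at the current ability estimate
    info = a**2 * p2pl(theta_hat, a, b) * (1 - p2pl(theta_hat, a, b))
    info[used] = -np.inf
    j = int(np.argmax(info))
    used.append(j)
    resp = rng.random() < p2pl(theta_true, a[j], b[j])          # simulate the response
    pj = p2pl(grid, a[j], b[j])
    like *= pj if resp else (1 - pj)
    post = like * prior
    post /= post.sum()
    theta_hat = float(np.sum(grid * post))                      # EAP ability estimate
    se = float(np.sqrt(np.sum((grid - theta_hat)**2 * post)))   # posterior SD as standard error

print(f"items used: {len(used)}, theta_hat = {theta_hat:.2f}, SE = {se:.2f}")
```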


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
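A minimal sketch of the kind of Markov chain Monte Carlo calibration described above, using a toy one-parameter model and synthetic data (the model, prior, and error scale are assumptions, not the paper's setup): a random-walk Metropolis chain samples the parameter posterior, whose mode or mean serves as the best-fit value and whose spread quantifies parameter uncertainty.

```python
# Sketch: random-walk Metropolis sampling of a single model parameter's posterior.
import numpy as np

rng = np.random.default_rng(4)

def model(k, forcing):
    return k * forcing                      # toy stand-in for the numerical model

forcing = rng.uniform(0.5, 1.5, 200)        # synthetic forcing data
obs = model(0.8, forcing) + rng.normal(0, 0.1, forcing.size)   # synthetic observations, true k = 0.8

def log_post(k, sigma=0.1):
    if not (0.0 < k < 2.0):                 # uniform prior on (0, 2)
        return -np.inf
    resid = obs - model(k, forcing)
    return -0.5 * np.sum((resid / sigma)**2)

chain, k = [], 1.0
for _ in range(20_000):
    prop = k + rng.normal(0, 0.02)          # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(k):
        k = prop                            # accept
    chain.append(k)

samples = np.array(chain[5000:])            # discard burn-in
print(f"posterior mean {samples.mean():.3f}, 95% interval "
      f"[{np.quantile(samples, 0.025):.3f}, {np.quantile(samples, 0.975):.3f}]")
```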


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Ikbal Taleb ◽  
Mohamed Adel Serhani ◽  
Chafik Bouhaddioui ◽  
Rachida Dssouli

Abstract Big Data is an essential research area for governments, institutions, and private agencies to support their analytics decisions. Big Data concerns all aspects of data: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may have unpredictable consequences, because confidence in the data and its source, and hence their worthiness, is lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a very costly and time-consuming process, since excessive computing resources are required. Maintaining quality throughout the Big Data lifecycle requires quality profiling and verification before any processing decision. A BDQ Management Framework for enhancing the pre-processing activities while strengthening data control is proposed. The proposed framework uses a new concept called the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the framework's profiling and sampling components, a fast and efficient data quality estimation is carried out before and after an intermediate pre-processing phase. The exploratory profiling component plays the initial role in quality profiling; it uses a set of predefined quality metrics to evaluate important data quality dimensions and generates quality rules by applying various pre-processing activities and their related functions. These rules feed the Data Quality Profile and result in quality scores for the selected quality attributes. The framework implementation and the dataflow management across the quality management processes are discussed, and the paper concludes with ongoing work on framework evaluation and deployment to support quality evaluation decisions.
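As a rough illustration of the quality-profile idea (the function name quality_profile and the chosen dimensions are hypothetical, not the framework's API), the sketch below scores a data sample on a few quality dimensions and flags columns that violate a completeness rule.

```python
# Sketch: build a small "quality profile" from a sampled slice of a larger dataset.
import numpy as np
import pandas as pd

def quality_profile(sample: pd.DataFrame, min_completeness: float = 0.95) -> dict:
    scores = {
        "completeness": float(1 - sample.isna().to_numpy().mean()),   # share of non-missing cells
        "uniqueness": float(1 - sample.duplicated().mean()),          # share of non-duplicate rows
        "validity_age": float(sample["age"].between(0, 120).mean()),  # toy domain rule on one attribute
    }
    flagged = [col for col in sample.columns
               if sample[col].isna().mean() > 1 - min_completeness]   # columns needing cleaning rules
    return {"scores": scores, "columns_needing_cleaning": flagged}

# hypothetical sampled slice of a much larger dataset
sample = pd.DataFrame({"age": [25, 31, np.nan, 140, 52],
                       "city": ["A", "B", "B", None, "C"]})
print(quality_profile(sample))
```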


2021 ◽  
Vol 29 ◽  
pp. 95-115
Author(s):  
Rafal Kozubski ◽  
Graeme E. Murch ◽  
Irina V. Belova

We review the results of our Monte Carlo simulation studies carried out within the past two decades in the area of atomic-migration-controlled phenomena in intermetallic compounds. The review aims at showing the high potential of Monte Carlo methods in modelling both the equilibrium states of the systems and the kinetics of the running processes. We focus on three particular problems: (i) the atomistic origin of the complexity of the ‘order-order’ relaxations in γ’-Ni3Al; (ii) surface-induced ordering phenomena in γ-FePt and (iii) ‘order-order’ kinetics and self-diffusion in the ‘triple-defect’ β-NiAl. The latter investigation demonstrated how diverse Monte Carlo techniques may be used to model the phenomena where equilibrium thermodynamics interplays and competes with kinetic effects.
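As a generic illustration of the kind of lattice Monte Carlo used in such studies (not the authors' models; the square lattice, pair energy, and temperature are assumptions), the sketch below runs Kawasaki-type Metropolis swaps of unlike neighbouring atoms on a binary lattice and reports a simple checkerboard order parameter.

```python
# Sketch: Kawasaki-type Metropolis Monte Carlo of chemical ordering on a 2D binary lattice.
import numpy as np

rng = np.random.default_rng(5)
L, T, V = 32, 0.4, 1.0                     # lattice size, reduced temperature, ordering energy
spins = rng.choice([-1, 1], size=(L, L))   # two atomic species on a square lattice
moves = np.array([[0, 1], [1, 0], [0, -1], [-1, 0]])

def site_energy(s, i, j):
    # pair energy favouring unlike nearest neighbours (drives checkerboard order)
    nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return V * s[i, j] * nn

for _ in range(200_000):
    i, j = rng.integers(L, size=2)
    di, dj = moves[rng.integers(4)]
    i2, j2 = (i + di) % L, (j + dj) % L
    if spins[i, j] == spins[i2, j2]:
        continue                                                  # swapping identical atoms changes nothing
    e_old = site_energy(spins, i, j) + site_energy(spins, i2, j2)
    spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]       # trial swap
    e_new = site_energy(spins, i, j) + site_energy(spins, i2, j2)
    if e_new - e_old > 0 and rng.random() >= np.exp(-(e_new - e_old) / T):
        spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]   # reject: swap back

# staggered "checkerboard" magnetization as a simple degree-of-order measure
mask = (-1) ** np.add.outer(np.arange(L), np.arange(L))
print("checkerboard order parameter:", abs((spins * mask).mean()))
```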

