location parameter
Recently Published Documents

TOTAL DOCUMENTS: 391 (FIVE YEARS: 62)
H-INDEX: 24 (FIVE YEARS: 2)

2021 ◽  
Vol 2021 ◽  
pp. 1-28
Author(s):  
Zahid Rasheed ◽  
Hongying Zhang ◽  
Muhammad Arslan ◽  
Babar Zaman ◽  
Syed Masroor Anwar ◽  
...  

Nonparametric (NP) control charts are well known for detecting shifts in the process parameters (location and/or dispersion) when the underlying process characteristic does not satisfy distributional assumptions. Similarly, when the cost of measurement is high and the ranking of observations is relatively simple, the ranked set sampling (RSS) technique is preferred over the simple random sampling (SRS) technique. Moreover, the NP triple exponentially weighted moving average (TEWMA) control chart based on SRS is superior to the SRS-based NP EWMA and NP double EWMA (DEWMA) charts in detecting shifts in the process location. This study designs an advanced form of the NP TEWMA Wilcoxon signed-rank chart based on RSS, denoted the TEWMA-SR_RSS control chart, to identify shifts in the process location parameter. The Monte Carlo simulation method is used to assess the performance of the proposed TEWMA-SR_RSS control chart against the SRS-based NP TEWMA signed-rank (TEWMA-SR), SRS-based NP TEWMA sign (TEWMA-SN), SRS-based TEWMA-X̄, and RSS-based NP DEWMA signed-rank (DEWMA-SR_RSS) control charts. The study shows that the proposed TEWMA-SR_RSS control chart is more efficient than the existing charts in identifying shifts (especially small shifts) in the process location. Finally, a real-life application is provided to illustrate the practical implementation of the proposed TEWMA-SR_RSS control chart.
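The triple-EWMA recursion behind such charts is compact enough to sketch. Below is a minimal numpy illustration of a TEWMA chart applied to Wilcoxon signed-rank statistics under SRS; the subgroup size, smoothing constant, zero start values, and function names are illustrative assumptions, and the RSS ranking step and control-limit calibration are omitted (this is not the authors' code).

```python
import numpy as np

def signed_rank(sample, target=0.0):
    """Wilcoxon signed-rank statistic of one subgroup about a known target."""
    d = np.asarray(sample, dtype=float) - target
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |d|, 1..n
    return float(np.sum(np.sign(d) * ranks))

def tewma(stats, lam=0.1):
    """Triple EWMA: chain three EWMA recursions and plot the third one, z3."""
    z1 = z2 = z3 = 0.0        # start at the in-control mean of SR (zero)
    path = []
    for s in stats:
        z1 = lam * s + (1 - lam) * z1
        z2 = lam * z1 + (1 - lam) * z2
        z3 = lam * z2 + (1 - lam) * z3
        path.append(z3)
    return path

rng = np.random.default_rng(1)
# 50 in-control subgroups (N(0,1)) followed by 50 with a 0.5-sigma location shift
srs = [signed_rank(rng.normal(0.0, 1.0, 10)) for _ in range(50)] + \
      [signed_rank(rng.normal(0.5, 1.0, 10)) for _ in range(50)]
z3_path = tewma(srs)
```

A signal would be declared when z3 crosses control limits calibrated by simulation to a target in-control average run length; the extra smoothing stages are what give TEWMA-type charts their sensitivity to small shifts.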


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Bart Niyibizi ◽  
B. Wade Brorsen ◽  
Eunchun Park

Purpose: The purpose of this paper is to estimate crop yield densities considering time trends in the first three moments and spatially varying coefficients.

Design/methodology/approach: Yield density parameters are assumed to be spatially correlated through a Gaussian spatial process. This study spatially smooths multiple parameters using Bayesian kriging.

Findings: Assuming that county yields follow skew normal distributions, the location parameter increased faster in the eastern and northwestern counties of Iowa, the scale parameter increased faster in southern counties, and the shape parameter increased more (implying less left skewness) in southwestern counties. Over time, the mean has increased sharply, while the variance and left skewness increased modestly.

Originality/value: Bayesian kriging can smooth time-varying yield distributions, handle unbalanced panel data and provide estimates when data are missing. Most past models used a two-stage estimation procedure, while our procedure estimates parameters jointly.
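As a toy illustration of the distributional setup (not the authors' hierarchical model), one can simulate a single county's yields from a skew normal whose location parameter trends upward, then run the kind of two-stage detrend-and-fit baseline the paper improves upon. All parameter values here are assumptions for illustration.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(0)
years = np.arange(1980, 2020)
# hypothetical county yields: skew normal with an upward-trending location
# parameter (shape a < 0 gives the left skew typical of crop yields)
yields = skewnorm.rvs(a=-2.0, loc=50.0 + 0.8 * (years - 1980), scale=8.0,
                      size=years.size, random_state=rng)

# two-stage baseline: remove the linear time trend in the location, then
# fit a single skew normal to the residuals
slope, intercept = np.polyfit(years, yields, 1)
resid = yields - (slope * years + intercept)
a_hat, loc_hat, scale_hat = skewnorm.fit(resid)
```

The paper instead estimates the time-varying parameters jointly and smooths them across counties via Bayesian kriging, which is what lets it handle missing county-years gracefully.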


2021 ◽  
Vol 13 (17) ◽  
pp. 3466
Author(s):  
Gustavo de Araújo Carvalho ◽  
Peter J. Minnett ◽  
Nelson F. F. Ebecken ◽  
Luiz Landau

Linear discriminant analysis (LDA) is a mathematically robust multivariate data analysis approach that is sometimes used for surface oil slick signature classification. Our goal is to rank the effectiveness of LDAs at differentiating oil spills from look-alike slicks. We explored multiple combinations of (i) variables (size information, Meteorological-Oceanographic (metoc), and geo-location parameters) and (ii) data transformations (non-transformed, cube root, log10). Active and passive satellite-based measurements from RADARSAT, QuikSCAT, AVHRR, SeaWiFS, and MODIS were used. Results from two experiments are reported and discussed: (i) an investigation of 60 combinations of several attributes subjected to the same data transformation and (ii) a survey of 54 other data combinations of three selected variables subjected to different data transformations. In Experiment 1, the best discrimination was reached using ten cube-root-transformed attributes: ~85% overall accuracy using six pieces of size information, three metoc variables, and one geo-location parameter. In Experiment 2, two combinations of three variables tied as the most effective: ~81% overall accuracy using area (log-transformed), length-to-width ratio (log- or cube-root-transformed), and number of feature parts (non-transformed). After verifying the classification accuracy of 114 algorithms against expert interpretations, we concluded that applying different data transformations and accounting for metoc and geo-location attributes optimizes the accuracy of binary classifiers (oil spill vs. look-alike slicks) using the simple LDA technique.
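The core classification step can be sketched with a plain-numpy two-class Fisher LDA on synthetic features. The feature names, distributions, and the per-attribute mix of transformations below are illustrative assumptions, not the paper's satellite-derived data.

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher LDA: w = pooled_cov^-1 (mu1 - mu0), midpoint cutoff."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = (np.cov(X0.T) * (len(X0) - 1) +
              np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
    w = np.linalg.solve(pooled, mu1 - mu0)
    cut = w @ (mu0 + mu1) / 2
    return w, cut

rng = np.random.default_rng(42)
n = 400
y = rng.integers(0, 2, n)                       # 0 = look-alike, 1 = oil spill
area = rng.lognormal(2.0 + 0.8 * y, 0.6)        # slick area, larger for spills
ratio = rng.lognormal(1.0 + 0.5 * y, 0.4)       # length-to-width ratio
parts = rng.poisson(2 + 2 * y).astype(float)    # number of feature parts

# mirror the paper's idea of mixing transformations across attributes
X = np.column_stack([np.log10(area), np.cbrt(ratio), parts])
w, cut = fit_lda(X, y)
acc = ((X @ w > cut).astype(int) == y).mean()
```

Applying a different transformation to each attribute before the pooled-covariance step is exactly the degree of freedom the paper's Experiment 2 explores.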


2021 ◽  
Vol 71 (4) ◽  
pp. 1019-1026
Author(s):  
Dragan Jukić ◽  
Tomislav Marošević

Abstract In a recent paper [JUKIĆ, D.: A necessary and sufficient criterion for the existence of the global minima of a continuous lower bounded function on a noncompact set, J. Comput. Appl. Math. 375 (2020)], a new existence level was introduced and then used to obtain a necessary and sufficient criterion for the existence of the global minima of a continuous lower bounded function on a noncompact set. In this paper, we determine that existence level for the residual sum of squares of the power-law regression with an unknown location parameter, and thereby obtain a necessary and sufficient condition that guarantees the existence of the least squares estimate.
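For context, the object being minimized is the residual sum of squares of the three-parameter power-law model f(x) = a*(x - c)^b with unknown location c. A minimal scipy sketch on made-up data (starting values and bounds are assumptions; the paper's contribution is the criterion telling us when such a minimizer exists at all):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b, c):
    """Three-parameter power law with unknown location (shift) c; needs x > c."""
    return a * np.power(x - c, b)

rng = np.random.default_rng(7)
x = np.linspace(2.0, 10.0, 60)
y = power_law(x, 1.5, 0.8, 1.0) + rng.normal(0.0, 0.05, x.size)

# residual-sum-of-squares minimization; c is bounded below min(x) so that
# x - c stays positive over the data
popt, _ = curve_fit(power_law, x, y, p0=(1.0, 1.0, 0.5),
                    bounds=([0.0, 0.0, -np.inf], [np.inf, 5.0, x.min() - 1e-6]))
rss = float(np.sum((y - power_law(x, *popt)) ** 2))
```

Numerical solvers like this one simply assume a minimizer exists; the existence-level criterion is what justifies that assumption for a given data set.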


Water ◽  
2021 ◽  
Vol 13 (15) ◽  
pp. 2082
Author(s):  
Aditya Kapoor ◽  
Deepak Kashyap

Pilot point methodology (PPM) permits estimation of transmissivity at unsampled pilot points by solving the hydraulic-head-based inverse problem. Especially relevant to areas with sparse transmissivity data, the methodology supplements the limited field data. Presented herein is an approach for estimating the parameters of PPM honoring two objectives: refinement of the transmissivity (T) interpolation and model calibration. The parameters are the locations and number of pilot transmissivity points. The location parameter is estimated by defining a qualifying matrix Q comprising a weighted sum of the hydraulic-head-sensitivity and kriging-variance fields. Whereas the former component of Q promotes model calibration, the latter leads to improved T interpolation by locating pilot points in unsampled tracts. Further, a three-stage methodology is proposed for an objective determination of the number of pilot points. It is based upon sequential updating of the variogram as pilot points are added to the database, ensuring its convergence with the head-based optimal variogram. The model is illustrated by applying it to the Satluj-Beas interbasin, wherein the pumping test data are not only sparse but also unevenly distributed.
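The qualifying-matrix idea reduces to an argmax over a weighted sum of two normalized fields. A hedged numpy sketch with random stand-in fields (the weight w, the grid size, and the normalization are arbitrary choices here, not the paper's values):

```python
import numpy as np

def qualifying_matrix(sensitivity, krig_var, w=0.5):
    """Q = w * normalized head-sensitivity + (1 - w) * normalized kriging
    variance; the next pilot point is placed at the cell maximizing Q."""
    norm = lambda f: (f - f.min()) / (f.max() - f.min())
    return w * norm(sensitivity) + (1.0 - w) * norm(krig_var)

rng = np.random.default_rng(3)
sens = rng.random((20, 20))      # stand-in |d(head)/d(log T)| field
kvar = rng.random((20, 20))      # stand-in kriging variance of the T field

Q = qualifying_matrix(sens, kvar)
i, j = np.unravel_index(np.argmax(Q), Q.shape)   # candidate pilot-point cell
```

The design intent survives even in this toy form: the sensitivity term pulls pilot points toward cells that matter for calibration, while the variance term pulls them into unsampled tracts, the two objectives the abstract names.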


Author(s):  
Sangita Das ◽  
Suchandan Kayal ◽  
N. Balakrishnan

Abstract Let $\{Y_{1},\ldots,Y_{n}\}$ be a collection of interdependent nonnegative random variables, with $Y_{i}$ having an exponentiated location-scale model with location parameter $\mu_i$, scale parameter $\delta_i$ and shape (skewness) parameter $\beta_i$, for $i\in \mathbb{I}_{n}=\{1,\ldots,n\}$. Furthermore, let $\{L_1^{*},\ldots,L_n^{*}\}$ be a set of independent Bernoulli random variables, independent of the $Y_{i}$'s, with $E(L_{i}^{*})=p_{i}^{*}$, for $i\in \mathbb{I}_{n}$. Under this setup, the portfolio of risks is the collection $\{T_{1}^{*}=L_{1}^{*}Y_{1},\ldots,T_{n}^{*}=L_{n}^{*}Y_{n}\}$, wherein $T_{i}^{*}=L_{i}^{*}Y_{i}$ represents the $i$th claim amount. This article then presents several sufficient conditions under which the smallest claim amounts are compared in terms of the usual stochastic and hazard rate orders. The comparison results are obtained when the dependence structure among the claim severities is modeled by (i) an Archimedean survival copula and (ii) a general survival copula. Several examples are also presented to illustrate the established results.


2021 ◽  
Author(s):  
Lei Yan ◽  
Lihua Xiong ◽  
Gusong Ruan ◽  
Chong-Yu Xu ◽  
Mengjie Zhang

Abstract In traditional flood frequency analysis, a minimum of 30 observations is required to guarantee the accuracy of design results within an allowable uncertainty; however, no corresponding recommendation exists for the record length required in nonstationary flood frequency analysis (NFFA). Therefore, this study was carried out with three aims: (i) to evaluate the predictive capabilities of nonstationary (NS) and stationary (ST) models with varying flood record lengths; (ii) to examine the impacts of flood record length on the NS and ST design floods and associated uncertainties; and (iii) to recommend probable requirements for flood record length in NFFA. To achieve these objectives, 20 stations in Norway with record lengths longer than 100 years were selected and investigated using both GEV (generalized extreme value)-ST models and GEV-NS models with a linearly varying location parameter (denoted GEV-NS0). The results indicate that the fitting quality and predictive capabilities of GEV-NS0 outperform those of the GEV-ST models when the record length exceeds approximately 60 years for most stations, and that the stability of both GEV-ST and GEV-NS0 improves as record lengths increase. Therefore, a minimum of 60 years of flood observations is recommended for NFFA for the selected basins in Norway.
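A GEV-NS0 fit (linear trend in the location parameter only) can be sketched by direct likelihood maximization with scipy. The synthetic record, starting values, and optimizer choice are assumptions for illustration; note that scipy's genextreme parameterizes the shape as c = -xi relative to the usual extreme-value convention.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(11)
t = np.arange(100.0)
# synthetic 100-year annual-maximum series whose location parameter
# increases linearly: mu(t) = 100 + 0.5 * t
floods = genextreme.rvs(c=-0.1, loc=100.0 + 0.5 * t, scale=20.0,
                        size=t.size, random_state=rng)

def neg_loglik(theta):
    """GEV-NS0 negative log-likelihood with mu(t) = mu0 + mu1 * t."""
    mu0, mu1, log_scale, c = theta
    return -np.sum(genextreme.logpdf(floods, c=c, loc=mu0 + mu1 * t,
                                     scale=np.exp(log_scale)))

res = minimize(neg_loglik,
               x0=[np.mean(floods), 0.0, np.log(np.std(floods)), 0.0],
               method="Nelder-Mead")
mu0_hat, mu1_hat = res.x[0], res.x[1]
```

Comparing this fit's likelihood against a stationary GEV fit (mu1 fixed at zero) is the kind of model comparison the study repeats across record lengths.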


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 897
Author(s):  
J. Agustín García ◽  
Mario M. Pizarro ◽  
F. Javier Acero ◽  
M. Isabel Parra

A Bayesian hierarchical framework with a Gaussian copula and a generalized extreme value (GEV) marginal distribution is proposed for the description of spatial dependencies in data. This spatial copula model was applied to extreme summer temperatures over the Extremadura Region, in the southwest of Spain, during the period 1980–2015, and compared with the spatial noncopula model. The Bayesian hierarchical model was implemented with a Markov chain Monte Carlo (MCMC) method that allows the distribution of the model’s parameters to be estimated. The results show the GEV distribution’s shape parameter to take constant negative values, the location parameter to be altitude-dependent, and the scale parameter values to be concentrated around the same value throughout the region. Further, the spatial copula model chosen presents lower deviance information criterion (DIC) values when spatial distributions are assumed for the GEV distribution’s location and scale parameters than when the scale parameter is taken to be constant over the region.


Econometrics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 27
Author(s):  
Arifatus Solikhah ◽  
Heri Kuswanto ◽  
Nur Iriawan ◽  
Kartika Fithriasari

We generalize the Gaussian Mixture Autoregressive (GMAR) model to the Fisher’s z Mixture Autoregressive (ZMAR) model for modeling nonlinear time series. The model consists of a mixture of K-component Fisher’s z autoregressive models with mixing proportions that change over time. This model can capture time series with both heteroskedasticity and a multimodal conditional distribution, using the Fisher’s z distribution as the innovation distribution in the MAR model. The ZMAR model is classified as a nonlinear-in-the-level (or mode) model because the mode of the Fisher’s z distribution is stable in its location parameter, whether the distribution is symmetric or asymmetric. Using a Markov chain Monte Carlo (MCMC) algorithm, namely the No-U-Turn Sampler (NUTS), we conducted a simulation study to investigate the model's performance compared to the GMAR model and the Student t Mixture Autoregressive (TMAR) model. The models are applied to daily IBM stock prices and monthly Brent crude oil prices. The results show that the proposed model outperforms the existing ones, as indicated by the Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO) minimum criterion.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Antoine Vanier ◽  
Véronique Sébille ◽  
Myriam Blanchin ◽  
Jean-Benoit Hardouin

Abstract

Background: Patient-Reported Outcomes (PROs) are standardized questionnaires used to measure subjective outcomes such as quality of life in healthcare. They are considered paramount for assessing the results of therapeutic interventions. However, because their calibration is relative to internal standards in people’s minds, changes in PRO scores are difficult to interpret. Knowing the smallest change in score that the patient perceives as change can help. An estimator linking the answers to a Patient Global Rating of Change (PGRC: a question measuring the overall feeling of change) with change in PRO scores is frequently used to obtain this value. In the last 30 years, a plethora of methods have been used to obtain these estimates, but there is no consensus on the appropriate method and no formal definition of this value.

Methods: We propose a model to explain changes in PRO scores and PGRC answers.

Results: A PGRC measures a construct called the Perceived Change (PC), whose determinants are elicited. Answering a PGRC requires discretizing a continuous PC into a category using threshold values that are random variables. Therefore, the populational value of the Minimal Perceived Change (MPC) is the value of the location parameter of the threshold on the PC continuum that defines the switch from absence of change to change.

Conclusions: We show how this model can help to hypothesize which methods are appropriate for estimating the MPC, and its potential to serve as a rigorous theoretical basis for future work on the interpretation of change in PRO scores.
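The threshold model described in the Results can be illustrated with a small simulation. The Gaussian choices for the latent Perceived Change and for the random thresholds, and all numeric values, are assumptions made only to show how the MPC appears as the location parameter of the threshold distribution.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
pc = rng.normal(0.3, 1.0, n)              # latent Perceived Change continuum
mpc, spread = 0.5, 0.2                    # threshold location parameter = MPC
thresholds = rng.normal(mpc, spread, n)   # each respondent's own cutoff
said_changed = pc > thresholds            # PGRC answer discretizes the PC

# recover the MPC as the PC value where P("changed") crosses one half
centers = np.linspace(-0.45, 1.45, 20)
fracs = np.array([said_changed[np.abs(pc - c) < 0.05].mean() for c in centers])
mpc_hat = centers[np.argmin(np.abs(fracs - 0.5))]
```

Because each respondent's threshold is a random variable, no single cutoff separates "no change" from "change"; the population MPC emerges only as the location of the threshold distribution, which is the point the article formalizes.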

