parameter estimate
Recently Published Documents

TOTAL DOCUMENTS: 170 (FIVE YEARS: 52)

H-INDEX: 19 (FIVE YEARS: 2)
2022 ◽  
Vol 9 (2) ◽  
pp. 104-108
Author(s):  
Zakaria et al. ◽  

The method of higher-order L-moments (LH-moments) was proposed as a more robust alternative to classical L-moments for characterizing extreme events. New derivations are presented for Mielke-Johnson’s Kappa and the Three-Parameter Kappa Type-II (K3D-II) distributions based on the LH-moments approach. Maximum monthly rainfall data for the Embong station in Terengganu were used as a case study. The analyses were conducted using the classical L-moments method (η=0) and LH-moments methods with η=1, η=2, η=3 and η=4, for both the complete data series and the upper parts of the distributions. The most suitable distributions were selected using the Mean Absolute Deviation Index (MADI), the Mean Square Deviation Index (MSDI), and the correlation coefficient (r); L-moment and LH-moment ratio diagrams provided visual support for the results. The analysis showed that higher-order LH-moment fits of the K3D-II distribution describe the upper part of the maximum monthly rainfall distribution at the Embong station better than classical L-moments do. The results also showed that as η increases, LH-moments reflect more of the characteristics of the upper part of the distribution, suggesting that LH-moment estimates are superior to L-moments for fitting the upper tail of maximum monthly rainfall data.
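To illustrate the idea behind LH-moments (this is a generic sketch, not the authors' code): the level-η sample LH-moments can be estimated directly from subsample order statistics, with η=0 recovering the classical L-moments. The function name `sample_lh_moments` is hypothetical, and the combinatorial form below is only practical for small samples:

```python
from itertools import combinations

def sample_lh_moments(data, eta=0):
    """Direct (combinatorial) estimates of the first two LH-moments of
    level eta; eta=0 reduces to the classical L-moments lambda1, lambda2."""
    x = sorted(data)
    # lambda1: mean of the maximum over all subsamples of size eta+1
    subs1 = list(combinations(x, eta + 1))
    lam1 = sum(max(s) for s in subs1) / len(subs1)
    # lambda2: half the mean of (largest - second largest) over all
    # subsamples of size eta+2
    subs2 = list(combinations(x, eta + 2))
    lam2 = sum((sorted(s)[-1] - sorted(s)[-2]) / 2 for s in subs2) / len(subs2)
    return lam1, lam2
```

Because λ1 at level η is the expected maximum of η+1 observations, raising η shifts the weight of the estimates toward the upper tail, which is the behavior the abstract describes.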


Author(s):  
Riswan Riswan

The Item Response Theory (IRT) model contains one or more parameters, which are unknown and must be estimated. This paper aims (1) to determine the effect of sample size (N) on the stability of the item parameter estimates, (2) to determine the effect of test length (n) on the stability of the examinee parameter estimates, (3) to determine the effect of the model on the stability of the item and examinee parameter estimates, (4) to determine the joint effect of sample size and test length on the stability of the item and examinee parameter estimates, and (5) to determine the joint effect of sample size, test length, and model on those estimates. This paper is a simulation study in which latent trait (θ) samples are drawn from a standard normal population, θ ~ N(0, 1), with specific sample sizes (N) and test lengths (n) under the 1PL, 2PL, and 3PL models using WinGen. Item analysis was carried out using both the classical test theory approach and modern test theory (IRT), and the data were analyzed in R with the ltm package. The results showed that the larger the sample size (N), the more stable the item parameter estimates, and the greater the test length (n), the more stable the examinee parameter (θ) estimates.
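The response models compared in such simulations nest inside the 3PL item response function P(θ) = c + (1 − c) / (1 + exp(−a(θ − b))). A minimal generic sketch of the data-generating step (the study itself used dedicated IRT software; these helper names are hypothetical):

```python
import numpy as np

def irt_prob(theta, a=1.0, b=0.0, c=0.0):
    """P(correct) under the 3PL model; a=1, c=0 gives the 1PL (Rasch)
    model and c=0 alone gives the 2PL model."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def simulate_responses(n_persons, items, rng=None):
    """items: list of (a, b, c) tuples; returns an n_persons x n_items
    0/1 response matrix and the true latent traits."""
    rng = np.random.default_rng(rng)
    theta = rng.standard_normal(n_persons)  # latent trait ~ N(0, 1)
    p = np.column_stack([irt_prob(theta, *it) for it in items])
    return (rng.random(p.shape) < p).astype(int), theta
```

Repeating the simulation while varying `n_persons` (sample size N) and the length of `items` (test length n) is the basic design the paper describes: larger N stabilizes item parameter recovery, longer tests stabilize θ recovery.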


2021 ◽  
Vol 6 ◽  
Author(s):  
Stephen Humphry ◽  
Paul Montuoro

This article demonstrates that the Rasch model cannot reveal systematic differential item functioning (DIF) in single tests. The person total score is the sufficient statistic for the person parameter estimate, eliminating the possibility of residuals at the test level. An alternative approach is to use subset DIF analysis to search for DIF in item subsets that form the components of the broader latent trait. In this methodology, person parameter estimates are initially calculated using all test items. Then, in separate analyses, these person estimates are compared to the observed means in each subset, and the residuals are assessed. As such, this methodology tests the assumption that the person locations in each factor group are invariant across subsets. The first objective is to demonstrate that, in single tests, differences between factor groups appear as differences in the mean person estimates and in the distributions of these estimates. The second objective is to demonstrate how subset DIF analysis reveals differences between person estimates and the observed means in subsets. Implications for practitioners are discussed.
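The core comparison in subset DIF analysis (observed subset score versus the score expected from the whole-test person estimate) can be sketched as follows; this is an illustrative reading of the described procedure, not the authors' implementation, and the function names are hypothetical:

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch probability of a correct response for ability theta,
    item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def subset_residuals(theta_hat, difficulties, responses, subset_idx):
    """Compare each person's observed score on an item subset with the
    score expected from the whole-test person estimate theta_hat.
    A systematically nonzero mean residual within a factor group
    flags DIF for that subset."""
    expected = rasch_prob(theta_hat[:, None],
                          difficulties[subset_idx][None, :]).sum(axis=1)
    observed = responses[:, subset_idx].sum(axis=1)
    return observed - expected
```

Averaging these residuals separately within each factor group (e.g., by gender) then tests whether person locations are invariant across subsets, as the abstract describes.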


Author(s):  
Matthias Himmelsbach ◽  
Andreas Kroll

Abstract: This paper is concerned with the analysis of optimization procedures for optimal experiment design for locally affine Takagi-Sugeno (TS) fuzzy models based on the Fisher Information Matrix (FIM). The FIM is used to estimate the covariance matrix of a parameter estimate. It depends on the model parameters as well as the regression variables; because of the dependency on the model parameters, good initial models are required. Since the FIM is a matrix, a scalar measure of the FIM is optimized. Different measures and optimization goals are investigated in three case studies.
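The abstract does not name the scalar measures it investigates, but the standard choices in optimal experiment design are the D-, A-, and E-optimality criteria; a minimal NumPy sketch for illustration:

```python
import numpy as np

def d_optimality(fim):
    """D-criterion: log-determinant of the FIM (to be maximized);
    minimizes the volume of the parameter confidence ellipsoid."""
    return np.linalg.slogdet(fim)[1]

def a_optimality(fim):
    """A-criterion: trace of the inverse FIM (to be minimized);
    the sum of the asymptotic parameter variances."""
    return np.trace(np.linalg.inv(fim))

def e_optimality(fim):
    """E-criterion: smallest eigenvalue of the FIM (to be maximized);
    controls the worst-conditioned parameter direction."""
    return np.linalg.eigvalsh(fim)[0]
```

Because the FIM of a TS model depends on the unknown parameters, these criteria are evaluated at an initial parameter estimate, which is why the abstract notes that good initial models are required.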


Prosthesis ◽  
2021 ◽  
Vol 3 (4) ◽  
pp. 331-341
Author(s):  
James L. Sheets ◽  
David B. Marx ◽  
Nina Ariani ◽  
Valentim A. R. Barão ◽  
Alvin G. Wee

The objective was to compare the repeatability between dental faculty, whose clinical practice was primarily restorative dentistry, and final-year dental students in categorizing the inherent translucency of randomly selected images using either a 3- or 7-point scale (translucent to opaque). Digital images of anterior dentition were randomly selected based on inherent translucency. Thirty images (five repeated) were randomized and categorized by 20 dental students and 20 faculty members according to their inherent translucency. Statistical analysis was performed using an F-test for analysis of variance at a 95% confidence level. A covariance parameter estimate (CPE) was computed to compare the inter-rater variability of the dental faculty and the dental students. Statistically, more variability occurred between slides (CPE = 0.185, p = 0.001) and between subject and slide (CPE = 0.122, p = 0.0002) than within subjects (CPE = 0.021, p = 0.083). Viewing repeated slides, students (CPE = 0.16) were more consistent (p < 0.05) than faculty (CPE = 1.8) using the 3-point scale, while the CPE was the same (CPE = 0.669) using the 7-point scale. Dental students and faculty were consistent using the 7-point scale to judge repeated slides, while dental students in this limited pilot study were more consistent when viewing a repeated slide using the 3-point scale.


Author(s):  
Amey Thakur

Abstract: Neuro-fuzzy is a hybrid approach that combines artificial neural networks with fuzzy logic, and the term is commonly used for any system that integrates both techniques. Neural network and fuzzy system research follows two basic streams: modeling aspects of the human brain (structure, reasoning, learning, perception, and so on), and building artificial systems for data-driven tasks such as pattern clustering and recognition, function approximation, and system parameter estimation. In general, neural networks and fuzzy logic systems are parameterized nonlinear computing methods for numerical data processing (signals, images, stimuli). These algorithms can be integrated into dedicated hardware or implemented on a general-purpose computer. The network acquires knowledge through a learning process, and the learned information is stored in internal parameters (weights). Keywords: Artificial Neural Networks (ANNs), Neural Networks (NNs), Fuzzy Logic (FL), Neuro-Fuzzy, Probability Reasoning, Soft Computing, Fuzzification, Defuzzification, Fuzzy Inference Systems, Membership Function.
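The fuzzification step listed in the keywords maps a crisp input to membership degrees in labeled fuzzy sets; a minimal sketch with triangular membership functions (the temperature sets are hypothetical examples, not from the paper):

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x, sets):
    """Map a crisp input x to membership degrees over labeled fuzzy sets."""
    return {label: tri_mf(x, *params) for label, params in sets.items()}

# hypothetical linguistic sets for a temperature input
temp_sets = {"cold": (-10, 0, 15), "warm": (10, 20, 30), "hot": (25, 35, 45)}
```

In a neuro-fuzzy system, the parameters of such membership functions play the role of the network's trainable weights: a learning procedure adjusts them from data instead of fixing them by hand.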


Author(s):  
Matteo Borella ◽  
Graziano Martello ◽  
Davide Risso ◽  
Chiara Romualdi

Abstract:
Motivation: Single-cell RNA sequencing (scRNA-seq) enables transcriptome-wide gene expression measurements at single-cell resolution, providing a comprehensive view of the composition and dynamics of tissue and organism development. The evolution of scRNA-seq protocols has led to a dramatic increase in cell throughput, exacerbating many of the computational and statistical issues that previously arose for bulk sequencing. In particular, with scRNA-seq data all the analysis steps, including normalization, have become computationally intensive, both in terms of memory usage and computational time. In this perspective, new accurate methods that scale efficiently are desirable.
Results: Here, we propose PsiNorm, a between-sample normalization method based on the shape-parameter estimate of the power-law Pareto distribution. We show that the Pareto distribution resembles scRNA-seq data well, especially data coming from platforms that use unique molecular identifiers. Motivated by this result, we implement PsiNorm, a simple and highly scalable normalization method. We benchmark PsiNorm against seven other methods in terms of cluster identification, concordance, and computational resources required. We demonstrate that PsiNorm is among the top-performing methods, showing a good trade-off between accuracy and scalability. Moreover, PsiNorm does not need a reference, a characteristic that makes it useful in supervised classification settings in which new out-of-sample data need to be normalized.
Availability and implementation: PsiNorm is implemented in the scone Bioconductor package, available at https://bioconductor.org/packages/scone/.
Supplementary information: Supplementary data are available at Bioinformatics online.
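A rough sketch of the idea of a Pareto-shape-based size factor, as the abstract describes it. This is NOT the package implementation (use the scone Bioconductor package for that); the assumption here is a Pareto minimum of 1, so the shape MLE reduces to n over the sum of log(count + 1) per cell:

```python
import numpy as np

def pareto_shape_factors(counts):
    """Per-cell Pareto shape-parameter MLE computed on counts + 1,
    used as a normalization factor. counts: genes x cells raw matrix.
    Deeper cells have larger log-sums and therefore smaller shape
    estimates, so multiplying by the factor scales them down."""
    n_genes = counts.shape[0]
    return n_genes / np.log1p(counts).sum(axis=0)

def normalize(counts):
    """Scale each cell (column) by its Pareto-shape factor."""
    return counts * pareto_shape_factors(counts)[None, :]
```

The appeal of this scheme, as the abstract notes, is scalability: the factor is a single pass over the matrix, with no reference sample and no iterative fitting.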


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1834
Author(s):  
Vlad Stefan Barbu ◽  
Alex Karagrigoriou ◽  
Andreas Makrides

Semi-Markov processes are typical tools for modeling multi-state systems, as they allow several distributions for the sojourn times. In this work, we focus on a general class of distributions based on an arbitrary parent continuous distribution function G, with the Kumaraswamy distribution as the baseline, and discuss some of its properties, including the advantageous property of being closed under minima. In addition, an estimate is provided for the so-called stress–strength reliability parameter, which measures the performance of a system in mechanical engineering. In this work, the sojourn times of the multi-state system are considered to follow a distribution with two shape parameters belonging to the proposed general class. Furthermore, for a multi-state system, we provide parameter estimates for the above general class, which are assumed to vary over the states of the system. The theoretical part of the work also includes the asymptotic theory for the proposed estimators, with and without censoring, as well as expressions for classical reliability characteristics. The performance and effectiveness of the proposed methodology are investigated via simulations, which show remarkable results with the help of statistical tools (for the parameter estimates) and graphical tools (for the reliability parameter estimate).
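The closure-under-minima property is easy to verify for the plain Kumaraswamy case (a sketch of the general mechanism, with G taken as the identity on (0, 1)): the survival function is S(x) = (1 − x^a)^b, so for independent variables sharing the first shape a, the survival functions multiply and the second shapes add:

```python
def kuma_sf(x, a, b):
    """Kumaraswamy survival function S(x) = (1 - x**a)**b on (0, 1)."""
    return (1.0 - x**a) ** b

# For independent X1 ~ K(a, b1) and X2 ~ K(a, b2):
#   P(min(X1, X2) > x) = S(x; a, b1) * S(x; a, b2) = (1 - x**a)**(b1 + b2),
# i.e. the minimum is again Kumaraswamy with shapes (a, b1 + b2).
x, a, b1, b2 = 0.3, 2.0, 1.5, 2.5
assert abs(kuma_sf(x, a, b1) * kuma_sf(x, a, b2) - kuma_sf(x, a, b1 + b2)) < 1e-12
```

This is what makes the class convenient for multi-state systems: the time to the first transition out of a state, the minimum of competing sojourn times, stays within the same family.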


2021 ◽  
Vol 14 (1) ◽  
pp. 68-78
Author(s):  
Titin Siswantining ◽  
Muhammad Ihsan ◽  
Saskya Mary Soemartojo ◽  
Devvi Sarwinda ◽  
Herley Shaori Al-Ash ◽  
...  

Missing values are a problem often encountered in many fields and must be addressed to obtain good statistical inference, such as parameter estimation. Missing values can occur in any type of data, including count data that follows a Poisson distribution. One solution is to apply multiple imputation techniques. The multiple imputation technique for count data consists of three main stages: imputation, analysis, and parameter pooling. The use of the normal distribution refers to the sampling distribution obtained via the central limit theorem for discrete distributions. This study also includes numerical simulations, which compare accuracy based on the resulting bias. Based on the study, the proposed solutions for handling missing values in count data yield satisfactory results, indicated by the small bias of the parameter estimates. However, the bias tends to increase as the percentage of missing observations increases and when the parameter values are small.
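The three stages can be sketched for a simple Poisson mean, pooling with Rubin's rules; this is a generic illustration under assumptions of my own (missing-completely-at-random entries, imputation from Poisson(λ̂)), not the authors' procedure:

```python
import numpy as np

def poisson_mi(y, m=20, rng=None):
    """Multiple imputation for a Poisson sample with missing entries (NaN):
    impute m times from Poisson(lambda_hat), estimate lambda on each
    completed data set, then pool with Rubin's rules."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    missing = np.isnan(y)
    lam_hat = y[~missing].mean()              # estimate from observed cases
    estimates = []
    for _ in range(m):
        filled = y.copy()
        filled[missing] = rng.poisson(lam_hat, missing.sum())  # imputation
        estimates.append(filled.mean())       # analysis on completed data
    q = np.array(estimates)
    pooled = q.mean()                         # pooling: point estimate
    between = q.var(ddof=1)                   # between-imputation variance
    within = pooled / len(y)                  # Var(mean) of Poisson: lambda/n
    total_var = within + (1 + 1 / m) * between  # Rubin's total variance
    return pooled, total_var
```

The normal sampling distribution the abstract mentions enters here: by the central limit theorem the completed-data mean is approximately normal, which justifies pooling the m estimates and their variances in this way.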


2021 ◽  
Vol 13 (9) ◽  
pp. 1828
Author(s):  
Hongjian Wei ◽  
Yingping Huang ◽  
Fuzhi Hu ◽  
Baigan Zhao ◽  
Zhiyang Guo ◽  
...  

Motion estimation is crucial for predicting where other traffic participants will be at a given time and, accordingly, for planning the route of the ego-vehicle. This paper presents a novel approach to estimating the motion state by using region-level instance segmentation and an extended Kalman filter (EKF). Motion estimation involves three stages: object detection, tracking, and parameter estimation. We first use region-level segmentation to accurately locate the object region for the latter two stages. The region-level segmentation combines color, temporal (optical flow), and spatial (depth) information as the basis for segmentation, using super-pixels and a Conditional Random Field. The optical flow is then employed to track the feature points within the object area. In the parameter-estimation stage, we develop a relative motion model of the ego-vehicle and the object and, accordingly, establish an EKF model for point tracking and parameter estimation. The EKF model integrates the ego-motion, optical flow, and disparity to generate optimized motion parameters. During tracking and parameter estimation, we apply an edge-point constraint and a consistency constraint to eliminate outliers among the tracking points, so that the feature points used for tracking lie within the object body and the parameter estimates are refined using inner points. Experiments conducted on the KITTI dataset demonstrate that our method performs excellently and outperforms other state-of-the-art methods in both object segmentation and parameter estimation.
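The predict/update cycle at the heart of any EKF can be sketched as follows. This is a generic filter step, not the paper's relative-motion model; with linear F and H it is the standard Kalman filter, which is the linear core that an EKF obtains by linearizing its motion and measurement models at each step:

```python
import numpy as np

def ekf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a (linearized) Kalman filter.
    x, P: prior state and covariance; z: measurement;
    F, H: state-transition and measurement Jacobians;
    Q, R: process and measurement noise covariances."""
    # predict: propagate state and covariance through the motion model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update: weigh the measurement by the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For example, a 1-D constant-velocity target with position-only measurements uses F = [[1, dt], [0, 1]] and H = [[1, 0]]; the paper's EKF plays the same role but fuses ego-motion, optical flow, and disparity in its measurement model.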

