Point Estimation
Recently Published Documents

Total documents: 1024 (last five years: 246)
H-index: 36 (last five years: 5)

2022, pp. 1-24
Author(s): Kohei Ichikawa, Asaki Kataoka

Abstract: Animals make efficient probabilistic inferences from uncertain and noisy information in their environment. Probabilistic population codes, which have been proposed as a neural basis for encoding probability distributions, are known to allow general neural networks (NNs) to perform near-optimal point estimation. However, the mechanism of sampling-based probabilistic inference has remained unclear. In this study, we trained two types of artificial NNs, a feedforward NN (FFNN) and a recurrent NN (RNN), to perform sampling-based probabilistic inference, and then analyzed and compared their sampling mechanisms. We found that, unlike the FFNN, the RNN performed sampling through a mechanism that efficiently exploits the properties of dynamical systems. In addition, sampling in the RNN acted as an inductive bias, enabling more accurate estimation than maximum a posteriori (MAP) estimation. These results provide important arguments for the relationship between dynamical systems and information processing in NNs.
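
The claim that sampling can beat MAP estimation has a simple statistical intuition: for a skewed posterior, the posterior mean (which averaging samples approximates) minimizes squared error, while the posterior mode (MAP) does not. A minimal NumPy/SciPy sketch of that intuition, entirely separate from the paper's networks and using an assumed Gamma posterior:

```python
import numpy as np
from scipy import stats

# Assumed skewed posterior over a latent stimulus: Gamma(shape=3, scale=1).
shape, scale = 3.0, 1.0
posterior = stats.gamma(a=shape, scale=scale)

map_est = (shape - 1.0) * scale                      # MAP = posterior mode = 2.0
samples = posterior.rvs(size=10_000, random_state=0)
sampling_est = samples.mean()                        # sampling-based estimate ~ 3.0

# Expected squared error under the posterior itself favors the sampling estimate.
theta = posterior.rvs(size=100_000, random_state=1)
print(f"MSE(MAP)      = {np.mean((theta - map_est) ** 2):.3f}")       # ~4.0
print(f"MSE(sampling) = {np.mean((theta - sampling_est) ** 2):.3f}")  # ~3.0
```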


Sensors, 2022, Vol. 22 (2), 545
Author(s): Bor-Jiunn Hwang, Hui-Hui Chen, Chaur-Heh Hsieh, Deng-Yu Huang

Experimental observations show a correlation between time and consecutive gaze positions in visual behavior. Previous studies on gaze point estimation usually train on still images, without taking the sequential relationship between frames into account. In this paper, videos rather than still images are used as input, so that temporal features can be exploited in addition to spatial features to improve accuracy. To capture spatial and temporal features simultaneously, a convolutional neural network (CNN) and a long short-term memory (LSTM) network are combined in one training model: the CNN extracts spatial features, and the LSTM correlates them over time. This paper presents a CNN Concatenating LSTM Network (CCLN) that concatenates spatial and temporal features to improve gaze estimation performance when time-series videos are the training input. The proposed model is further optimized by exploring the number of LSTM layers and the influence of batch normalization (BN) and global average pooling (GAP) on the CCLN. Since larger amounts of training data generally lead to better models, we also propose a method for constructing video datasets for gaze point estimation, and we study the effectiveness of commonly used general models and the impact of transfer learning. Extensive evaluation shows that the proposed method achieves better prediction accuracy than existing CNN-based methods: the best model reaches 93.1%, compared with 92.6% for the general MobileNet-based model.
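
The CNN-plus-LSTM combination described above is commonly built by applying the CNN to each frame and feeding the per-frame feature vectors to an LSTM. A generic tf.keras sketch of that pattern (not the authors' CCLN; all layer sizes and the input shape are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(seq_len=8, h=64, w=64, c=3):
    """Per-frame CNN features (TimeDistributed) -> LSTM over time -> gaze (x, y)."""
    inp = layers.Input(shape=(seq_len, h, w, c))
    x = layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"))(inp)
    x = layers.TimeDistributed(layers.BatchNormalization())(x)      # BN, as explored in the paper
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    x = layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu"))(x)
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)  # GAP, as explored in the paper
    x = layers.LSTM(128)(x)                                         # correlates temporal features
    out = layers.Dense(2)(x)                                        # predicted gaze point (x, y)
    return models.Model(inp, out)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="mse")
model.summary()
```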


Author(s): Chen Zhongshan, Feng Xinning, Oscar Sanjuán Martínez, Rubén González Crespo

Hand pose estimation is essential in human-computer interaction and virtual reality. Experimental analysis on public biometric datasets shows that a system with low hand pose estimation error opens significant opportunities for new hand pose estimation applications. However, the structure of hand images is intricate because of fluctuations, self-occlusion, and specific modulations. Hence, this paper proposes a hybrid approach based on machine learning (HABoML) to improve on current approaches to hand shape and key point estimation. The machine learning algorithm is combined with a hybrid approach to better compensate for self-occlusion and to estimate unusual hand shapes and poses. The experimental results, which show a high level of efficiency and performance, help define a set of follow-up experiments for the proposed systems in this field. The HABoML strategy reduced estimation error by 9.33% in the analysis, making it the better solution.


2022
Author(s): Sepideh Etemadi, Mehdi Khashei

Abstract: Modeling and forecasting are among the most powerful and widely used tools in decision support systems. Fuzzy Linear Regression (FLR) is the most fundamental method in fuzzy modeling: it estimates an uncertain relationship between the target and explanatory variables and has been used effectively in a broad range of real-world applications. The underlying logic is to minimize the vagueness of the model, defined as the sum of the individual spreads of the fuzzy coefficients. Although this process is coherent and obtains the narrowest α-cut intervals, and thus highly accurate results, on the training data, it cannot guarantee the desired level of generalization. Yet the quality of managerial decisions based on such models depends on the generalization ability of the method, and generalizability in turn depends simultaneously on both the precision and the reliability of the results. This paper presents a novel methodology for fuzzy linear regression modeling in which, in contrast to conventional methods, the reliability of the constructed model is maximized rather than its vagueness minimized. In the proposed model, fuzzy parameters are estimated so that the variation of the model's ambiguity across different data conditions is minimized; in other words, the unknown fuzzy parameters are estimated by minimizing the weighted variance of the ambiguities across validation data situations. To comprehensively assess the proposed method's performance, 74 benchmark datasets from the UCI repository are considered. Empirical results show that in 64.86% of the case studies the proposed method generalizes better than the classic versions, i.e., it yields narrower α-cut intervals and more accurate interval and point estimates. This demonstrates the importance of the reliability of the outcomes, in addition to their precision, which traditional FLR modeling does not consider. Hence, the presented EFLR method can be considered a suitable alternative in fuzzy modeling, especially when greater generalization is desirable.
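
The vagueness-minimizing baseline that the paper contrasts against is the classic (Tanaka-style) FLR, which can be solved as a small linear program: with symmetric triangular fuzzy coefficients (center a_i, spread c_i), minimize the total spread subject to every observation lying inside the predicted interval. A minimal SciPy sketch of that classic formulation (the paper's reliability-maximizing EFLR is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def classic_flr(X, y, h=0.0):
    """Classic FLR with symmetric triangular coefficients (a_i, c_i).

    Minimizes total vagueness sum_j sum_i c_i * |x_ij| subject to each
    observation y_j lying inside the (1 - h)-scaled predicted interval.
    """
    X = np.column_stack([np.ones(len(y)), X])  # intercept column
    n, p = X.shape
    absX = np.abs(X)

    # Decision vector z = [a_1..a_p, c_1..c_p]; only spreads carry cost.
    cost = np.concatenate([np.zeros(p), absX.sum(axis=0)])

    # y_j <= a.x_j + (1-h) c.|x_j|  ->  -a.x_j - (1-h) c.|x_j| <= -y_j
    # y_j >= a.x_j - (1-h) c.|x_j|  ->   a.x_j - (1-h) c.|x_j| <=  y_j
    A_ub = np.vstack([
        np.hstack([-X, -(1 - h) * absX]),
        np.hstack([ X, -(1 - h) * absX]),
    ])
    b_ub = np.concatenate([-y, y])

    bounds = [(None, None)] * p + [(0, None)] * p  # centers free, spreads >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[p:]  # centers a, spreads c

# Toy data: y roughly 2 + 3x with noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 20)
y = 2 + 3 * x + rng.normal(0, 1, 20)
a, c = classic_flr(x.reshape(-1, 1), y)
print("centers:", a, "spreads:", c)
```

Here h is the required membership level: raising it forces wider spreads, illustrating how the classic objective trades vagueness against coverage of the training data.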


2021
Author(s): Tiago Dias Domingues, Helena Mourino, Nuno Sepulveda

In this work, we apply mixture models based on distributions from the SMSN (scale mixtures of skew-normal) family to antibody data against four SARS-CoV-2 antigens. Furthermore, since the true infection status of each individual is known a priori, performance measures such as sensitivity, specificity, and accuracy are calculated for the proposed cutoff point estimation methods. The results of a simulation study are also presented.
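
A minimal sketch of the mixture-based cutoff idea, using a plain two-component Gaussian mixture (the normal is the simplest member of the SMSN family) from scikit-learn rather than the paper's models; the antibody data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic antibody log-titres: seronegative vs seropositive groups.
rng = np.random.default_rng(0)
neg = rng.normal(1.0, 0.5, 300)
pos = rng.normal(3.0, 0.7, 200)
values = np.concatenate([neg, pos]).reshape(-1, 1)
truth = np.concatenate([np.zeros(300), np.ones(200)])  # known status a priori

# Fit the two-component mixture to the pooled values.
gm = GaussianMixture(n_components=2, random_state=0).fit(values)

# Cutoff: the point between the component means where the posterior
# membership probabilities of the two components are equal.
lo, hi = np.sort(gm.means_.ravel())
grid = np.linspace(lo, hi, 10_000).reshape(-1, 1)
post = gm.predict_proba(grid)
cutoff = grid[np.argmin(np.abs(post[:, 0] - post[:, 1]))][0]

# With the true status known, evaluate the cutoff directly.
pred = values.ravel() >= cutoff
sens = pred[truth == 1].mean()
spec = (~pred)[truth == 0].mean()
acc = (pred == truth.astype(bool)).mean()
print(f"cutoff={cutoff:.2f} sens={sens:.2f} spec={spec:.2f} acc={acc:.2f}")
```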


2021
Author(s): Rui Chen, Dihua Sun, Min Zhao, Zhizong Liu, Shuai Huang, ...

2021, Vol. 9
Author(s): Xueqian Fu, Xianping Wu, Nian Liu

New energy power systems with high-penetration photovoltaic and wind power are high-dimensional, dynamic, large-scale systems with nonlinear, uncertain, and complex operating characteristics. The uncertainty of new energy sources makes detailed analysis of operating conditions and efficient planning of distribution networks challenging. Probabilistic power flows (PPFs) are effective tools for uncertainty analysis of distribution networks and can be applied in stochastic programming, risk assessment, and other fields. Based on point estimation, we propose PPF results in a different form: origin (raw) moments rather than means and variances. We design a stochastic programming model suitable for practical new energy planning, in which the PPF results improve stochastic programming methods through the principle of maximum entropy (POME) and quadratic fourth-order moment (QFM) estimation. Using standard probability identities, the origin moments of the PPFs are transformed into the central moments that QFM takes as input. QFM efficiently estimates the constraint probability levels of stochastic optimal planning models, and the proposed method is verified on an IEEE 33-node distribution network.
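
The transformation from origin (raw) moments to central moments uses standard probability identities. A small sketch of that step, which would feed a QFM-style estimator (the QFM and POME steps themselves are not reproduced):

```python
import numpy as np

def raw_to_central(m):
    """Convert raw moments m = [m1, m2, m3, m4] about zero into
    central moments [mu1, mu2, mu3, mu4] about the mean, using
      mu2 = m2 - m1^2
      mu3 = m3 - 3*m1*m2 + 2*m1^3
      mu4 = m4 - 4*m1*m3 + 6*m1^2*m2 - 3*m1^4
    """
    m1, m2, m3, m4 = m
    return np.array([
        0.0,  # first central moment is zero by definition
        m2 - m1**2,
        m3 - 3 * m1 * m2 + 2 * m1**3,
        m4 - 4 * m1 * m3 + 6 * m1**2 * m2 - 3 * m1**4,
    ])

# Sanity check against a sample: raw moments of N(2, 1).
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, 1_000_000)
raw = np.array([np.mean(x**k) for k in range(1, 5)])
print(raw_to_central(raw))  # approx [0, 1, 0, 3] for the standard-width normal
```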


Author(s): Rafael Weißbach, Dominik Wied

Abstract: For a sample of Exponentially distributed durations, we aim at point estimation of, and a confidence interval for, its parameter. A duration is observed only if it has ended within a certain time interval, determined by a Uniform distribution. Hence, the data form a truncated empirical process, which we can approximate by a Poisson process when only a small portion of the sample is observed, as is the case in our applications. We derive the likelihood from standard arguments for point processes, treating the size of the latent sample as a second parameter, and derive the maximum likelihood estimator for both. Consistency and asymptotic normality of the estimator for the Exponential parameter follow from standard results on M-estimation. We compare this design with a simple random sample assumption for the observed durations. Theoretically, the derivative of the log-likelihood is less steep in the truncation design for small parameter values, indicating a larger computational effort for root finding and a larger standard error. In applications from the social and economic sciences, and in simulations, we indeed find a moderately increased standard error when acknowledging truncation.
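
The truncation effect can be simulated in a simplified setting: a fixed observation window [0, T] instead of the paper's Uniform-determined window. A sketch comparing the truncation-aware MLE (by root-finding the score) with the naive simple-random-sample MLE; all parameter values are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

# Durations X ~ Exp(theta); only durations ending within [0, T] are observed.
rng = np.random.default_rng(0)
theta_true, T = 0.5, 1.0
x = rng.exponential(1 / theta_true, 50_000)
obs = x[x <= T]  # only a portion of the latent sample is observed

# Truncated density: f(x) = theta * exp(-theta*x) / (1 - exp(-theta*T)).
# Score equation: 1/theta - T*exp(-theta*T) / (1 - exp(-theta*T)) = mean(obs).
xbar = obs.mean()

def score(theta):
    return 1 / theta - T * np.exp(-theta * T) / (1 - np.exp(-theta * T)) - xbar

theta_trunc = brentq(score, 1e-6, 100.0)  # MLE acknowledging truncation
theta_naive = 1 / xbar                     # SRS assumption: biased upward here,
                                           # since long durations were discarded
print(f"true={theta_true}, truncated MLE={theta_trunc:.3f}, naive={theta_naive:.3f}")
```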

