error distribution — Recently Published Documents

Total documents: 512 (five years: 112)
H-index: 26 (five years: 2)

Author(s): H. S. Lee, T. A. Musa, W. A. Wan Aris, A. Z. Sha’ameri

Abstract. Broadcast orbits are compared against the final orbits to obtain the broadcast orbit error. The errors are analysed by presenting them over space, especially longitude. The satellite trajectory is divided into three sectors, namely the northern, southern, and transitional sectors. The spatial analysis shows that the error is correlated with latitude and longitude, and consistent patterns can be observed in the distribution of the error. Standard deviation (SD) is used to quantify this consistency, providing more quantitative insight into the spatial analysis. Four patterns can be observed in the error distribution: consistency within the northern and southern sectors, consistency within the transitional sector, changes after the transitional sector, and correlation between the ΔX and ΔY components. The spatial analysis shows potential for use in broadcast orbit error estimation and prediction. A model that uses the predicted broadcast orbit error as a correction will be designed in future work to improve broadcast orbit accuracy.
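A minimal sketch of the kind of per-sector consistency check described above (not the authors' code; the latitude threshold, error arrays, and data below are illustrative assumptions):

```python
import numpy as np

def sector_of(lat_deg, transition=20.0):
    """Assign a sample to the northern, southern, or transitional sector
    (the +/-20 degree threshold is an illustrative assumption)."""
    if lat_deg > transition:
        return "northern"
    if lat_deg < -transition:
        return "southern"
    return "transitional"

def sector_sd(lat_deg, dx, dy):
    """Standard deviation of the broadcast-minus-final error components per sector."""
    sectors = np.array([sector_of(lat) for lat in lat_deg])
    stats = {}
    for name in ("northern", "southern", "transitional"):
        mask = sectors == name
        stats[name] = (np.std(dx[mask]), np.std(dy[mask]))
    return stats

# Synthetic demo: 1000 epochs of latitude and orbit-error components.
rng = np.random.default_rng(0)
lat = rng.uniform(-55, 55, 1000)   # GPS-like inclination band
dx = rng.normal(0.0, 0.5, 1000)    # metres, illustrative only
dy = rng.normal(0.0, 0.7, 1000)
print(sector_sd(lat, dx, dy))
```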


Stats, 2022, Vol 5 (1), pp. 70-88
Author(s): Johannes Ferreira, Ané van der Merwe

This paper proposes a previously unconsidered generalization of the Lindley distribution that allows for a measure of noncentrality. Essential structural characteristics are investigated and derived in explicit and tractable forms, and the estimability of the model is illustrated by fitting it to real data. Subsequently, this model is used as a candidate for the parameter of a Poisson model, which allows for departure from the usual equidispersion restriction that the Poisson imposes when modelling count data. This Poisson–noncentral Lindley distribution is also systematically investigated and its characteristics are derived. The value of the count model is illustrated by implementing it as the count error distribution in an integer autoregressive environment and juxtaposing it against other popular models. The effect of the systematically induced noncentrality parameter is illustrated and paves the way for future flexible modelling, not only as a standalone contender in continuous Lindley-type scenarios but also in discrete and discrete time-series scenarios where the often-assumed equidispersion does not hold in practical data environments.
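For reference, the classical one-parameter Lindley density and the Poisson–Lindley count model obtained by mixing a Poisson mean over it are shown below; the noncentral generalization introduced in the paper modifies this baseline and is not reproduced here.

```latex
% Classical one-parameter Lindley density (theta > 0, x > 0):
\[
  f(x;\theta) = \frac{\theta^{2}}{\theta + 1}\,(1 + x)\,e^{-\theta x}.
\]
% Mixing a Poisson mean over this density yields the Poisson--Lindley pmf:
\[
  P(N = n) = \frac{\theta^{2}\,(n + \theta + 2)}{(\theta + 1)^{\,n+3}},
  \qquad n = 0, 1, 2, \dots
\]
```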


Entropy, 2021, Vol 24 (1), pp. 73
Author(s): Dragana Bajić, Nina Japundžić-Žigon

Approximate and sample entropies are acclaimed tools for quantifying the regularity and unpredictability of time series. This paper analyses the causes of their inconsistencies. It is shown that the major problem is the coarse quantization of matching probabilities, which causes a large error between their estimated and true values. The error distribution is symmetric, so in sample entropy, where matching probabilities are summed directly, the errors cancel each other. In approximate entropy, the errors accumulate, as the sums involve logarithms of the matching probabilities. Increasing the time series length increases the number of quantization levels, and the entropy errors disappear in both approximate and sample entropies. The distribution of the time series also affects the errors. If it is asymmetric, the matching probabilities are asymmetric as well, so the matching probability errors cease to cancel each other and cause a persistent entropy error. Contrary to the accepted opinion, the influence of self-matching is marginal, as it merely shifts the error distribution along the error axis by one matching-probability quantization step. Artificially lengthening the time series by interpolation, on the other hand, induces a large error, as interpolated samples are statistically dependent and destroy the level of unpredictability inherent to the original signal.
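A minimal sketch (an assumed implementation, not the authors' code) of the structural difference the abstract highlights: approximate entropy averages logarithms of per-template matching probabilities (self-matches included), while sample entropy sums the matches first (self-matches excluded) and takes a single logarithm:

```python
import numpy as np

def _templates(x, m):
    """All overlapping templates of length m, as an (n_templates, m) array."""
    return np.array([x[i:i + m] for i in range(len(x) - m + 1)])

def approximate_entropy(x, m=2, r=0.2):
    """ApEn: averages logarithms of per-template matching probabilities
    (self-matches included), so per-template errors accumulate."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def phi(mm):
        t = _templates(x, mm)
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        c = np.mean(d <= tol, axis=1)   # matching probability of each template
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

def sample_entropy(x, m=2, r=0.2):
    """SampEn: sums the matches first (self-matches excluded) and takes one
    logarithm, so symmetric estimation errors largely cancel."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def matches(mm):
        # Keep the same number of templates for lengths m and m + 1.
        t = _templates(x, mm)[: len(x) - m]
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d <= tol) - len(t)   # drop self-matches on the diagonal

    return -np.log(matches(m + 1) / matches(m))

# Illustrative use on a white-noise series.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
print(approximate_entropy(x), sample_entropy(x))
```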


Energies, 2021, Vol 15 (1), pp. 147
Author(s): Tianyu Hu, Mengran Zhou, Kai Bian, Wenhao Lai, Ziwei Zhu

Short-term load forecasting is an important part of load forecasting and is of great significance to optimal power flow and to guaranteeing power supply in the power system. In this paper, we propose a load series reconstruction method that combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and sample entropy (SE). The load series is decomposed by ICEEMDAN and reconstructed into trend, periodic, and random components by comparing each mode's sample entropy with that of the original series. An extreme learning machine optimized by the salp swarm algorithm (SSA-ELM) is used to predict each component, and the final point forecast is obtained by superposing the predictions of the three components. Then, the training-set prediction errors are divided into four load intervals according to the predicted value, and kernel density estimation is applied to obtain the error distribution within each interval. Combining the point forecast on the prediction set with the error distribution of the corresponding load interval yields the load prediction interval. The method is verified using hourly load data from a region in Denmark in 2019. The experimental results show that the proposed method achieves high prediction accuracy for short-term load forecasting.
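A minimal sketch of the interval-construction step described above (synthetic data and a plain Gaussian KDE; the binning rule and all names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.stats import gaussian_kde

def interval_from_errors(train_pred, train_err, test_pred, n_bins=4, alpha=0.10):
    """Bin training errors by predicted load level, fit a KDE per bin, and
    turn each point forecast into a (lower, upper) prediction interval."""
    edges = np.quantile(train_pred, np.linspace(0, 1, n_bins + 1))
    lowers, uppers = [], []
    for p in np.atleast_1d(test_pred):
        b = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, n_bins - 1)
        mask = (train_pred >= edges[b]) & (train_pred <= edges[b + 1])
        kde = gaussian_kde(train_err[mask])
        draws = kde.resample(5000, seed=0).ravel()      # sample the error distribution
        lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
        lowers.append(p + lo)
        uppers.append(p + hi)
    return np.array(lowers), np.array(uppers)

# Synthetic demo: heteroscedastic errors that grow with the load level.
rng = np.random.default_rng(0)
train_pred = rng.uniform(200, 800, 2000)
train_err = rng.normal(0, 0.05 * train_pred)
lo, hi = interval_from_errors(train_pred, train_err, np.array([250.0, 750.0]))
print(lo, hi)
```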


Modelling, 2021, Vol 3 (1), pp. 1-13
Author(s): Régis Santos, Osman Crespo, Wendell Medeiros-Leal, Ana Novoa-Pabon, Mário Pinho

Abstract: Indices of abundance are usually a key input parameter for fitting a stock assessment model, as they provide abundance estimates representative of the fraction of the stock that is vulnerable to fishing. These indices can be estimated from catches derived from fishery-dependent sources, such as catch per unit effort (CPUE) and landings per unit effort (LPUE), or from scientific survey data (e.g., relative population number, RPN). However, fluctuations in many factors (e.g., vessel size, period, area, gear) may affect the catch rates, making it necessary to evaluate the appropriateness of the statistical models used for standardization. In this research, we analysed different generalized linear models to select the best technique for standardizing catch rates of target and non-target species from fishery-dependent (CPUE and LPUE) and fishery-independent (RPN) data. The error distributions examined were the gamma, lognormal, Tweedie, and hurdle models. For the hurdle models, positive observations were analysed assuming a lognormal (hurdle–lognormal) or gamma (hurdle–gamma) error distribution. Based on deviance table analyses and diagnostic checks, the hurdle–lognormal was the statistical model that best satisfied the underlying characteristics of the different data sets. Finally, catch rates (CPUE, LPUE, and RPN) of the thornback ray Raja clavata, blackbelly rosefish Helicolenus dactylopterus, and common mora Mora moro from the NE Atlantic (Azores region) were standardized. The analyses confirmed the spatial and temporal nature of their distribution.
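A minimal two-part sketch of a hurdle–lognormal standardization of this kind, assuming statsmodels and entirely synthetic covariates and catches (not the study's data set or model terms):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic catch data: year and area covariates, with a share of zero catches.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "year": rng.integers(2010, 2020, n).astype(str),
    "area": rng.choice(["north", "south"], n),
})
positive = rng.random(n) < 0.7
df["cpue"] = np.where(positive, np.exp(rng.normal(0.0, 0.5, n)), 0.0)
df["present"] = (df["cpue"] > 0).astype(int)

# Part 1: probability of a positive catch (binomial GLM, logit link).
presence = smf.glm("present ~ year + area", data=df,
                   family=sm.families.Binomial()).fit()

# Part 2: lognormal model for the positive catches only.
pos = df[df["cpue"] > 0].assign(log_cpue=lambda d: np.log(d["cpue"]))
magnitude = smf.ols("log_cpue ~ year + area", data=pos).fit()

# Year coefficients from both parts feed the standardized index.
print(presence.params.filter(like="year"))
print(magnitude.params.filter(like="year"))
```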


Author(s): Svetlana Grishina

The article continues research on adapting well-developed methods of systems theory to economic systems. It is shown that economic objects are, as a rule, nonlinear. The analysis and evaluation of the accuracy of nonlinear economic systems are considered. It is shown that using statistical methods based on the statistical approximation of a nonlinear transformation for these purposes causes difficulties, associated with the requirement of a normal distribution law at the output of the nonlinear element and with a limited ability to assess the magnitude and range of effects under which the system loses stability. The article substantiates the possibility and expediency of using the methods of random Markov processes to determine the density of the error distribution of a nonlinear system. The main tasks to be solved in the study of nonlinear economic systems are highlighted, and the direction of further research is presented.
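For context, a common tool for this purpose (an assumption about the standard machinery, not a formula taken from the article) is the Fokker–Planck–Kolmogorov equation, which governs the evolution of the probability density p(x, t) of a scalar Markov diffusion with drift a(x, t) and diffusion coefficient b(x, t):

```latex
\[
  \frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[a(x,t)\,p(x,t)\bigr]
    + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[b(x,t)\,p(x,t)\bigr].
\]
```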


2021, Vol 9 (12), pp. 10-16
Author(s): Wilson Moseki Thupeng

The economy of Botswana relies heavily on mineral exports (mainly diamond exports), which are largely dependent on the exchange rate, and the US Dollar is one of the most important currencies in the basket to which the Botswana Pula is pegged. Therefore, this paper seeks to empirically establish the baseline characteristics of the Botswana Pula (BWP) to US Dollar (USD) exchange rate and to identify the most plausible probability distribution from the skewed generalized t (SGT) family for modelling the log-returns of the daily BWP/USD exchange rate over the period January 2001 to December 2020. The SGT family is a highly versatile class of models that can capture the skewness and leptokurtosis inherent in financial time series. Four probability distributions are considered in this study: the skewed t, skewed generalized error, generalized t, and skewed generalized t. The maximum likelihood approach is used to estimate the parameters of each model. Model comparison and selection are based on the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The results show that the daily BWP/USD exchange rate series is non-normal, negatively skewed, and heavy-tailed. It is also found that, based on the values of both the AIC and BIC, the model that gives the best fit to the data is the skewed t, closely followed by the skewed generalized error distribution, while the generalized t gives the worst fit.
Keywords: Pula/US Dollar exchange rate, log returns, generalized t distribution, skewed generalized error distribution, skewed generalized t distribution, skewed t distribution, skewness, kurtosis, maximum likelihood
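A minimal sketch of the fit-and-compare workflow (synthetic log-returns, with scipy's symmetric Student t, generalized-error, and normal distributions as stand-ins; the paper's skewed SGT-family members are not implemented here):

```python
import numpy as np
from scipy import stats

# Synthetic heavy-tailed log-returns standing in for the BWP/USD series.
rng = np.random.default_rng(0)
returns = stats.t.rvs(df=4, scale=0.006, size=5000, random_state=rng)

candidates = {
    "student_t": stats.t,        # heavy tails
    "gen_error": stats.gennorm,  # generalized error distribution
    "normal": stats.norm,        # thin-tailed benchmark
}

for name, dist in candidates.items():
    params = dist.fit(returns)                     # maximum likelihood estimates
    loglik = np.sum(dist.logpdf(returns, *params))
    k, n = len(params), len(returns)
    aic = 2 * k - 2 * loglik                       # Akaike information criterion
    bic = k * np.log(n) - 2 * loglik               # Bayesian information criterion
    print(f"{name:10s}  loglik={loglik:10.1f}  AIC={aic:10.1f}  BIC={bic:10.1f}")
```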


2021, Vol 72 (2), pp. 353-370
Author(s): Martina Ivanová, Miroslava Kyseľová, Anna Gálisová

Abstract. The paper deals with the acquisition of Slovak word order in written texts by students of Slovak as a foreign language. It focuses on identifying the correct and incorrect placement of enclitic components, and their erroneous usage is analysed with respect to the variables investigated (type of enclitic component, type of syntactic construction, distance from the lexical/syntactic anchor, and realization in pre- or post-verbal position). The paper also examines the error rate at individual proficiency levels and compares the error distribution between two language groups, Slavic and non-Slavic learners.


Econometrics, 2021, Vol 9 (4), pp. 41
Author(s): Mustafa Salamh, Liqun Wang

Many financial and economic time series exhibit nonlinear patterns or relationships. However, most statistical methods for time series analysis are developed for mean-stationary processes and therefore require transformations of the data, such as differencing. In this paper, we study a dynamic regression model with a nonlinear, time-varying mean function and autoregressive conditionally heteroscedastic errors. We propose an estimation approach based on the first two conditional moments of the response variable, which does not require specification of the error distribution. Strong consistency and asymptotic normality of the proposed estimator are established under a strong-mixing condition, so that the results apply to both stationary and mean-nonstationary processes. Moreover, the proposed approach is shown to be superior to the commonly used quasi-likelihood approach, and the efficiency gain is significant when the (conditional) error distribution is asymmetric. We demonstrate through a real data example that the proposed method can identify a more accurate model than the quasi-likelihood method.
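An illustrative form of the kind of model described, written here as an assumption rather than the authors' exact specification:

```latex
\[
  y_t = f(x_t;\theta) + \varepsilon_t, \qquad
  \varepsilon_t = \sigma_t u_t, \qquad
  \sigma_t^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2},
\]
\[
  \mathrm{E}\,[\,y_t \mid \mathcal{F}_{t-1}\,] = f(x_t;\theta), \qquad
  \operatorname{Var}\,[\,y_t \mid \mathcal{F}_{t-1}\,] = \sigma_t^{2}.
\]
% Estimation uses only these first two conditional moments, so no
% distribution needs to be assumed for the standardized errors u_t.
```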


Author(s): Encarnación Castro, María C. Cañadas, Marta Molina, Susana Rodríguez-Domingo

Abstract. This paper describes the difficulties faced by a group of middle school students (13- to 15-year-olds) attempting to translate algebraic statements written in verbal language into symbolic language and vice versa. The data were drawn from their replies to a written quiz and from semi-structured interviews. In the quiz, students were confronted with a series of algebraic statements and asked to choose the sole translation, of the four proposed for each, that was semantically congruent with the original. The results show that most of the errors detected were due to arithmetic issues, especially the distinction between product and exponent, or between sum and product, in connection with the notions of perimeter and area. As a rule, the error distribution by type varied depending on the type of task involved.
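For instance (an illustrative example, not an item from the quiz), confusing a product with an exponent corresponds to conflating the perimeter and the area of a square of side s:

```latex
\[
  P = 4s \quad\text{(four times the side: a product)}, \qquad
  A = s \cdot s = s^{2} \quad\text{(the side multiplied by itself: an exponent)}.
\]
```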

