statistical efficiency
Recently Published Documents

TOTAL DOCUMENTS: 176 (five years: 56)
H-INDEX: 20 (five years: 4)

2021 ◽  
Author(s):  
Guillermo Ferreira ◽  
Jorge Mateu ◽  
Emilio Porcu ◽  
Alfredo Alegría

Abstract Interest in models for multivariate spatio-temporal processes has increased in recent years. Some of these models are highly flexible and can capture both marginal and cross spatial associations among the components of the multivariate process. To contribute to the statistical analysis of these models, this paper deals with the estimation and prediction of multivariate spatio-temporal processes using multivariate state-space models. In this context, a multivariate spatio-temporal process is represented through the well-known Wold decomposition. This approach allows for a straightforward implementation of the Kalman filter to estimate linear temporal processes exhibiting both short- and long-range dependence, together with a spatial correlation structure. We illustrate, through simulation experiments, that our method offers a good balance between statistical efficiency and computational complexity. Finally, we apply the method to a bivariate dataset of average daily temperatures and maximum daily solar radiation from 21 meteorological stations located in a portion of south-central Chile.
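The Kalman filter mentioned in this abstract can be illustrated with a minimal univariate sketch; this is not the authors' multivariate spatio-temporal implementation, just the standard predict/update recursion for a linear-Gaussian AR(1) state-space model, with all parameter names (`phi`, `q`, `r`) chosen here for illustration:

```python
import numpy as np

def kalman_filter(y, phi, q, r, m0=0.0, p0=1.0):
    """Filter a univariate state-space model:
        state:       x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)
        observation: y_t = x_t + v_t,            v_t ~ N(0, r)
    Returns filtered state means and variances."""
    m, p = m0, p0
    means, variances = [], []
    for obs in y:
        # predict step: propagate mean and variance through the state equation
        m_pred = phi * m
        p_pred = phi * phi * p + q
        # update step: blend prediction with the new observation
        k = p_pred / (p_pred + r)          # Kalman gain
        m = m_pred + k * (obs - m_pred)
        p = (1.0 - k) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)
```

As observations accumulate, the filtered variance contracts toward its steady-state value, which is what makes the recursion attractive for the long temporal records typical of spatio-temporal data.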


Author(s):  
Łukasz Knypiński

Purpose The purpose of this paper is to carry out an efficiency analysis of selected metaheuristic algorithms (MAs) based on benchmark analytical functions and on optimization of a permanent magnet motor. Design/methodology/approach A comparative performance analysis was conducted for selected MAs. Optimization calculations were performed for the genetic algorithm (GA), particle swarm optimization (PSO), the bat algorithm, the cuckoo search algorithm (CS) and the only-best-individual algorithm (OBI). All of the optimization algorithms were implemented as computer scripts. Next, all optimization procedures were applied to search for the optimum of a line-start permanent magnet synchronous motor by means of a multi-objective objective function. Findings The results show that the best statistical efficiency (mean objective function and standard deviation [SD]) is obtained for the PSO and CS algorithms, while the best single-run results are obtained for PSO and GA. The type of optimization algorithm should be selected with the duration of a single optimization process in mind: for time-consuming processes, algorithms with low SD should be used. Originality/value The newly proposed simple nondeterministic algorithm can also be applied to simple optimization calculations. On the basis of the presented simulation results, it is possible to assess the quality of the compared MAs.
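Of the metaheuristics compared above, PSO is the most compact to sketch. The following is a minimal textbook implementation for minimization, not the authors' motor-design code; the inertia and acceleration coefficients (`w`, `c1`, `c2`) are common default values assumed here for illustration:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization (minimization).
    w: inertia weight; c1/c2: cognitive/social acceleration coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # pull each particle toward its own best and the swarm's best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())
```

Because the algorithm is stochastic, the mean and standard deviation of the final objective value over repeated runs are exactly the "statistical efficiency" measures the abstract compares.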


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2474
Author(s):  
Nitzan Cohen ◽  
Yakir Berchenko

Information criteria such as the Akaike information criterion (AIC) and Bayesian information criterion (BIC) are commonly used for model selection. However, the current theory does not support unconventional data, so naive use of these criteria is not suitable for data with missing values. Imputation, at the core of most alternative methods, is both distorted and computationally demanding. We propose a new approach that enables the use of classic, well-known information criteria for model selection when there are missing data. We adapt the current theory of information criteria through normalization, accounting for the different sample sizes used for each candidate model (focusing on AIC and BIC). Interestingly, when the sample sizes differ, our theoretical analysis finds that AIC_j/n_j is the proper correction of AIC_j that we need to optimize (where n_j is the sample size available to the jth model), while -(BIC_j - BIC_i)/(n_j - n_i) is the correction of BIC. Furthermore, we find that the computational complexity of normalized information criteria methods is exponentially better than that of imputation methods. In a series of simulation studies, we find that normalized-AIC and normalized-BIC outperform previous methods (i.e., normalized-AIC is more efficient, and normalized-BIC includes only important variables, although it tends to exclude some of them in cases of large correlation). We propose three additional methods aimed at increasing the statistical efficiency of normalized-AIC: post-selection imputation, Akaike sub-model averaging, and minimum-variance averaging. The last of these succeeds in increasing efficiency further.
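The per-observation normalization AIC_j/n_j can be sketched for Gaussian linear models, where AIC reduces (up to an additive constant) to n·log(RSS/n) + 2k. This is a generic illustration of the normalization idea, not the authors' implementation:

```python
import numpy as np

def aic_gaussian(y, X):
    """AIC of a Gaussian linear model fit by least squares,
    with additive constants dropped: n*log(RSS/n) + 2*k."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * k

def normalized_aic(y, X):
    """AIC_j / n_j: the per-observation criterion, which stays
    comparable when candidate models see different sample sizes
    (e.g. after listwise deletion of rows with missing covariates)."""
    n = X.shape[0]
    return aic_gaussian(y, X) / n
```

With complete data the division by n does not change the ranking of models fitted on the same rows; its value is precisely that it keeps criteria comparable when each candidate model retains a different number of complete cases.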


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Lifu Zhang ◽  
Tarek S. Abdelrahman

The growth in size and complexity of convolutional neural networks (CNNs) is forcing the partitioning of a network across multiple accelerators during training and the pipelining of backpropagation computations over these accelerators. Pipelining results in the use of stale weights. Existing approaches to pipelined training avoid or limit the use of stale weights with techniques that either underutilize accelerators or increase the training memory footprint. This paper contributes a pipelined backpropagation scheme that uses stale weights to maximize accelerator utilization while keeping memory overhead modest. It explores the impact of stale weights on statistical efficiency and performance using four CNNs (LeNet-5, AlexNet, VGG, and ResNet) and shows that when pipelining is introduced in early layers, training with stale weights converges and yields models with inference accuracies comparable to those of nonpipelined training (accuracy drops of 0.4%, 4%, 0.83%, and 1.45% for the four networks, respectively). However, when pipelining is deeper in the network, inference accuracies drop significantly (by up to 12% for VGG and 8.5% for ResNet-20). The paper also contributes a hybrid training scheme that combines pipelined with nonpipelined training to address this drop. The potential performance improvement of the proposed scheme is demonstrated with a proof-of-concept pipelined backpropagation implementation in PyTorch on 2 GPUs using ResNet-56/110/224/362, achieving speedups of up to 1.8X over a 1-GPU baseline.
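The effect of stale weights can be modeled in miniature: in a pipeline of depth d, a stage may apply a gradient that was computed with weights from d steps earlier. The toy below (an assumed illustration, not the paper's PyTorch scheme) runs gradient descent with exactly that delay:

```python
import numpy as np

def sgd_with_staleness(grad, w0, lr=0.1, steps=100, delay=3):
    """Gradient descent where each update uses the gradient evaluated
    at the weights from `delay` steps earlier -- a toy model of the
    stale weights introduced by pipelined backpropagation."""
    history = [np.asarray(w0, dtype=float)]
    w = history[0].copy()
    for t in range(steps):
        stale_w = history[max(0, t - delay)]   # weights the "pipeline" saw
        w = w - lr * grad(stale_w)
        history.append(w.copy())
    return w
```

For a well-conditioned objective and a small enough learning rate, the delayed iteration still converges, which mirrors the paper's observation that training with modest staleness (pipelining in early layers) converges to comparable accuracy, while larger effective delays degrade it.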


Author(s):  
Zekâi Şen ◽  
Eyüp Şişman ◽  
Burak Kızılöz

Abstract In every field of scientific research, model predictions require calibration and validation to establish how well they represent recorded measurements. The literature offers a myriad of formulations, empirical expressions, algorithms and software for model efficiency assessment. In general, model predictions are curve-fitting procedures built on a set of assumptions that many studies do not treat with sufficient care; often only a single-value comparison between measurements and predictions is considered, and the researcher then decides on the model efficiency. Among the classical statistical efficiency formulations, the most widely used are bias (BI), mean square error (MSE), correlation coefficient (CC) and Nash-Sutcliffe efficiency (NSE), all of which are embedded within the visual inspection and numerical analysis (VINAM) square graph, a scatter diagram of measurements versus predictions. The VINAM provides a set of verbal interpretations and then numerical improvements embracing all the previous statistical efficiency formulations. The fundamental criterion in the VINAM is the 1:1 (45°) main diagonal, along which all visual, science-philosophical, logical, rational and mathematical procedures converge for model validation. The application of the VINAM approach is presented for artificial neural network (ANN) and adaptive network-based fuzzy inference system (ANFIS) model predictions.
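The four classical measures named above are standard and easy to compute from the same measurement-versus-prediction pairs that populate the VINAM scatter diagram; the sketch below uses their textbook definitions (a perfect model sits on the 1:1 diagonal with BI = 0, MSE = 0, CC = 1, NSE = 1):

```python
import numpy as np

def efficiency_metrics(obs, pred):
    """Classical model-efficiency measures for observed vs. predicted
    values: bias (BI), mean square error (MSE), correlation
    coefficient (CC) and Nash-Sutcliffe efficiency
    (NSE = 1 - SSE / SST, relative to the observed mean)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    bias = float(np.mean(pred - obs))
    mse = float(np.mean((pred - obs) ** 2))
    cc = float(np.corrcoef(obs, pred)[0, 1])
    nse = 1.0 - float(np.sum((obs - pred) ** 2)
                      / np.sum((obs - obs.mean()) ** 2))
    return {"BI": bias, "MSE": mse, "CC": cc, "NSE": nse}
```

Note the point the abstract is making: a constant offset leaves CC at 1 while moving every point off the 1:1 diagonal and degrading BI, MSE and NSE, which is why no single value suffices for validation.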


2021 ◽  
Vol 17 (8) ◽  
pp. e1009266
Author(s):  
Yangqing Deng ◽  
Wei Pan

There is great interest in, and potential for, discovering causal relationships between pairs of exposures and outcomes by using genetic variants as instrumental variables (IVs) to deal with hidden confounding in observational studies. The two most popular approaches are Mendelian randomization (MR), which usually uses independent genetic variants/SNPs across the genome, and transcriptome-wide association studies (TWAS) (or their generalizations), which use cis-SNPs local to a gene (or some genome-wide, likely dependent SNPs) as IVs. In spite of their many promising applications, both approaches face a major challenge: the validity of their causal conclusions depends on three critical assumptions about valid IVs, and more generally on other modeling assumptions, which may not hold in practice. The most likely and most challenging situation arises from widespread horizontal pleiotropy, which violates two of the three IV assumptions and thus biases statistical inference. More generally, we would like to conduct a goodness-of-fit (GOF) test to check the model being used. Although some methods have been proposed as robust, to various degrees, to the violation of some modeling assumptions, they often give different and even conflicting results owing to their own modeling assumptions and possibly lower statistical efficiency, making it difficult for the practitioner to choose among, and interpret, the varying results across methods. Hence, it would help to test directly whether any assumption is violated. In particular, such tests are lacking for TWAS. We propose a new and general GOF test, called TEDE (TEsting Direct Effects), applicable to both correlated and independent SNPs/IVs (as commonly used in TWAS and MR, respectively).
Through simulation studies and real data examples, we demonstrate the high statistical power and other advantages of our new method, while confirming the frequent violation of modeling (including valid IV) assumptions in practice and thus the importance of model checking by applying such a test in MR/TWAS analysis.


2021 ◽  
Author(s):  
Olugbenga Falode ◽  
Christopher Udomboso

Abstract Crude oil, the base for more than 6,000 products that we use on a daily basis, accounts for 33% of global energy consumption. However, the outbreak and transmission of COVID-19 had significant implications for the entire value chain of the oil industry. The price crash and the fluctuations in price are known to have far-reaching effects on global economies, with Nigeria hit hard. It has therefore become imperative to develop a tool for forecasting the price of crude oil, both to minimise the risks associated with volatility in oil prices and to enable proper planning. Hence, this article proposes a hybrid forecasting model combining classical and machine learning techniques, an autoregressive neural network, for determining crude oil prices. The monthly data used were obtained from the Central Bank of Nigeria website and span January 2006 to October 2020. Statistical efficiency was computed for the hybrid model, and for the models from which the proposed hybrid was built, using percent relative efficiency. Analyses showed that the efficiency of the hybrid model, at 20 and 100 hidden neurons, was higher than that of the individual models, with the latter configuration performing best. The study recommends urgent diversification of the economy so that the nation is not plunged into a seemingly unending recession.
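Percent relative efficiency, the comparison measure used above, has a short definition. One common convention (assumed here; the abstract does not spell out its formula) scales the reference model's mean square error by the candidate's, so values above 100 favor the candidate:

```python
def percent_relative_efficiency(mse_candidate, mse_reference):
    """Percent relative efficiency of a candidate model against a
    reference model: PRE = 100 * MSE_reference / MSE_candidate.
    PRE > 100 means the candidate is more efficient (lower MSE)
    than the reference."""
    if mse_candidate <= 0:
        raise ValueError("candidate MSE must be positive")
    return 100.0 * mse_reference / mse_candidate
```

Under this convention a hybrid model whose forecast MSE is half that of the standalone classical model would score a PRE of 200 against it.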


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 934
Author(s):  
Yuxuan Zhang ◽  
Kaiwei Liu ◽  
Wenhao Gui

To improve the statistical efficiency of estimators in life-testing experiments, generalized Type-I hybrid censoring has lately been implemented, guaranteeing that experiments terminate only after a certain number of failures occur. Given the wide application of bathtub-shaped distributions in engineering and the recently introduced generalized Type-I hybrid censoring scheme, and since no existing work combines this type of censoring model with a bathtub-shaped distribution, we consider parameter inference under generalized Type-I hybrid censoring. First, estimates of the unknown scale parameter and the reliability function are obtained by the Bayesian method based on LINEX and squared-error loss functions with a conjugate gamma prior. Estimates under the E-Bayesian method are compared for different prior distributions and loss functions. Additionally, Bayesian and E-Bayesian estimation with two unknown parameters is introduced. Furthermore, to verify the robustness of the above estimates, a Monte Carlo simulation study is conducted. Finally, the practical application of the discussed inference is illustrated by analyzing a real data set.
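With a conjugate gamma posterior, both Bayes estimators mentioned above have closed forms. Under squared-error loss the estimator is the posterior mean; under LINEX loss L(d, θ) = exp(a(d − θ)) − a(d − θ) − 1 it is −(1/a)·log E[exp(−aθ)], which for a Gamma(α, rate β) posterior evaluates via the moment generating function. This sketch states the general conjugate-gamma formulas, not the paper's specific bathtub-distribution derivation:

```python
import numpy as np

def squared_error_estimate(alpha, beta):
    """Bayes estimator under squared-error loss: the posterior mean
    of a Gamma(alpha, rate=beta) posterior."""
    return alpha / beta

def linex_estimate(alpha, beta, a):
    """Bayes estimator under LINEX loss with shape parameter a for a
    Gamma(alpha, rate=beta) posterior (requires beta + a > 0):
        d* = -(1/a) * log E[exp(-a*theta)]
           = (alpha/a) * log(1 + a/beta)."""
    if beta + a <= 0:
        raise ValueError("requires beta + a > 0")
    return (alpha / a) * np.log1p(a / beta)
```

For a > 0 the LINEX estimate sits below the posterior mean (overestimation is penalized more heavily), and it converges to the posterior mean as a → 0, which is why the two loss functions are compared side by side.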


2021 ◽  
Author(s):  
Gang Chen ◽  
Daniel S Pine ◽  
Melissa A Brotman ◽  
Ashley R Smith ◽  
Robert W Cox ◽  
...  

Big data initiatives have gained popularity for leveraging large samples of subjects to study a wide range of effect magnitudes in the brain. On the other hand, most task-based FMRI designs feature a relatively small number of subjects, so the resulting parameter estimates may have compromised precision. Nevertheless, little attention has been given to another important dimension of experimental design that can equally boost a study's statistical efficiency: the trial sample size. Here, we systematically explore the different factors that impact effect uncertainty, drawing on evidence from hierarchical modeling, simulations and an FMRI dataset of 42 subjects who completed a large number of trials of a commonly used cognitive task. We find that, due to the presence of relatively large cross-trial variability: 1) trial sample size has nearly the same impact as subject sample size on statistical efficiency; 2) increasing both trials and subjects improves statistical efficiency more effectively than focusing on subjects alone; 3) trial sample size can be traded off against the number of subjects to improve the cost-effectiveness of an experimental design; 4) for small trial sample sizes, trial-level modeling, rather than the common practice of condition-level modeling through summary statistics, may be necessary to accurately assess the standard error of an effect estimate. Lastly, we make practical recommendations for improving experimental designs across neuroimaging and behavioral studies.
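The trade-off between subjects and trials follows from the standard two-level variance decomposition: with N subjects and T trials each, the variance of the group-level effect estimate is approximately σ²_subject/N + σ²_trial/(N·T). A minimal sketch of that formula (an illustration of the general hierarchical result, not the authors' model):

```python
def effect_variance(sigma_subj2, sigma_trial2, n_subjects, n_trials):
    """Approximate variance of a group-level effect estimate in a
    two-level hierarchical design: the cross-subject component shrinks
    with the number of subjects, the cross-trial component with
    subjects * trials."""
    return (sigma_subj2 / n_subjects
            + sigma_trial2 / (n_subjects * n_trials))
```

When cross-trial variability dominates (σ²_trial much larger than σ²_subject), the second term controls the total, so adding trials improves precision almost as effectively as adding subjects, which is the abstract's first finding.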

