error measures
Recently Published Documents


TOTAL DOCUMENTS

192
(FIVE YEARS 43)

H-INDEX

24
(FIVE YEARS 3)

Materials ◽  
2022 ◽  
Vol 15 (2) ◽  
pp. 626
Author(s):  
Ireneusz Marzec ◽  
Jerzy Bobiński

Results of numerical simulations of the size effect phenomenon for concrete are presented and compared with experimental data. In-plane geometrically similar notched and unnotched beams under three-point bending are analyzed. The eXtended Finite Element Method (XFEM) with a cohesive softening law is used. A comprehensive parametric study with respect to the tensile strength and the initial fracture energy is performed, and the sensitivity of the results to the material parameters and the specimen geometry is investigated. Three different softening laws are examined: first, a bilinear softening definition is utilized; then, an exponential curve is taken; finally, a rational Bezier curve is tested. The ambiguity in choosing material parameters and softening curve definitions is discussed. Numerical results are compared with experimental outcomes recently reported in the literature. Two error measures are defined and used to quantitatively assess the calculated maximum forces (nominal strengths) against experimental values as the primary criterion; the force-displacement curves are also analyzed. It is shown that all softening curves produce results consistent with the experimental data. Moreover, with different softening laws assumed, different initial fracture energies should be taken to obtain proper results.
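As an illustration of the softening laws the abstract names, the sketch below defines exponential and bilinear cohesive traction-separation curves parameterized by the tensile strength ft and fracture energy Gf. The Petersson-style kink placement and the concrete-like parameter values are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def softening_exponential(w, ft, Gf):
    """Exponential cohesive law sigma(w) = ft * exp(-ft * w / Gf);
    its integral over w in [0, inf) equals the fracture energy Gf."""
    return ft * np.exp(-ft * w / Gf)

def softening_bilinear(w, ft, Gf):
    """Petersson-style bilinear law: kink at (0.8*Gf/ft, ft/3), full
    opening at 3.6*Gf/ft; the enclosed area is exactly Gf.
    (Kink placement is an assumed textbook choice.)"""
    w1, wc = 0.8 * Gf / ft, 3.6 * Gf / ft
    seg1 = ft + (ft / 3 - ft) * w / w1               # first, steeper branch
    seg2 = (ft / 3) * (wc - w) / (wc - w1)           # second, shallower branch
    return np.where(w <= w1, seg1, np.clip(seg2, 0.0, None))

# Assumed concrete-like parameters: ft in Pa, Gf in N/m
ft, Gf = 3.0e6, 100.0
w = np.linspace(0.0, 3.6 * Gf / ft, 200)
print(softening_bilinear(w, ft, Gf)[:3], softening_exponential(w, ft, Gf)[:3])
```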


2021 ◽  
Vol 20 (4) ◽  
pp. 158-165
Author(s):  
Pardeep Singla ◽  
Manoj Duhan ◽  
Sumit Saroha

Renewable energy systems (RES) are no longer confined to stand-alone use in the modern era. These RES, especially solar panels, are also coupled to grid power systems to supply electricity. However, precise forecasting of solar irradiance is necessary to ensure that the grid operates in a balanced and planned manner. Various solar forecasting models (SFM) are presented in the literature to produce accurate solar forecasts, and each model must be evaluated for accuracy using some error measure. Many error measures are discussed in the literature for deterministic as well as probabilistic solar forecasting, but each study selects its own, which can lead to a wrong interpretation of results if the measure is chosen inappropriately. As a result, this paper offers a critical assessment of several common error metrics, with the goal of discussing alternative error metrics and establishing a viable set of metrics for deterministic and probabilistic solar forecasting. Based on highly cited research from the last three years (2019-2021), error measures for both types of forecasting are presented with their basic functionality, advantages, and limitations, equipping the reader to pick compatible metrics.
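For concreteness, the sketch below implements a few deterministic metrics commonly surveyed in this literature (MAE, RMSE, MBE) plus the pinball (quantile) loss used for probabilistic forecasts; the selection and the toy irradiance values are illustrative, not the paper's definitive set.

```python
import numpy as np

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    """Root mean squared error; penalizes large misses more than MAE."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def mbe(y, yhat):
    """Mean bias error: positive values mean the forecast over-predicts."""
    return np.mean(yhat - y)

def pinball_loss(y, yhat_q, q):
    """Pinball (quantile) loss for a probabilistic forecast of quantile q."""
    diff = y - yhat_q
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Toy irradiance series in W/m^2 (assumed values)
y    = np.array([420.0, 515.0, 610.0, 580.0])
yhat = np.array([400.0, 530.0, 590.0, 600.0])
print(mae(y, yhat), rmse(y, yhat), mbe(y, yhat))
print(pinball_loss(y, yhat * 0.9, q=0.1))   # a 10th-percentile forecast
```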


MAUSAM ◽  
2021 ◽  
Vol 48 (2) ◽  
pp. 205-212
Author(s):  
JOHNNY C. L. CHAN

ABSTRACT. This paper reviews the methods by which techniques for predicting tropical cyclone (TC) motion can be evaluated. Different error measures (forecast error, systematic error, and cross-track and along-track errors) are described in detail. Examples are then given to show how these techniques can be further evaluated by stratifying the forecasts based on factors related to the TC, including latitude, longitude, intensity change, size and past movement. Application of the Empirical-Orthogonal-Function (EOF) approach to represent the environmental flow associated with the TCs is also proposed. The magnitudes of the EOF coefficients can then be used to stratify the forecasts since these coefficients represent different types of flow fields. A complete evaluation of a forecast technique then consists of a combination of analyzing the different error measures based on both the storm-related factors and the EOF coefficients.
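A minimal sketch of the cross-track/along-track decomposition the review describes, assuming a flat-Earth local approximation around the observed position and using the observed storm motion to define the along-track direction; the function name and the 111-km-per-degree conversion are illustrative choices.

```python
import numpy as np

KM_PER_DEG = 111.0  # rough degrees-to-km conversion (flat-Earth approximation)

def track_errors(obs_lat, obs_lon, fcst_lat, fcst_lon, motion_lat, motion_lon):
    """Decompose the forecast position error into along-track and cross-track
    components relative to the observed motion vector (degrees in, km out)."""
    coslat = np.cos(np.radians(obs_lat))
    # Error vector from observed to forecast position, east/north in km
    err = np.array([(fcst_lon - obs_lon) * coslat,
                    fcst_lat - obs_lat]) * KM_PER_DEG
    # Unit vector along the observed storm motion
    mot = np.array([motion_lon * coslat, motion_lat])
    mot = mot / np.linalg.norm(mot)
    along = err @ mot                            # + means ahead of the storm
    cross = mot[0] * err[1] - mot[1] * err[0]    # + means left of the track
    return along, cross

# Forecast 0.5 deg east of a storm observed moving toward the northwest
print(track_errors(20.0, 130.0, 20.0, 130.5, motion_lat=0.1, motion_lon=-0.1))
```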


2021 ◽  
Vol 5 (4) ◽  
pp. 190
Author(s):  
Lin Ma ◽  
Jun Li ◽  
Ye Zhao

Rural community population forecasting has important guiding significance for rural construction and development. In this study, a novel grey Bernoulli model combined with an improved Aquila Optimizer (IAO) was used to forecast the rural community population of China. First, this study improved the Aquila Optimizer by combining a quasi-opposition learning strategy and a wavelet mutation strategy, yielding the new IAO algorithm; comparisons with other algorithms on the CEC2017 test functions show that IAO converges faster and more accurately. Second, based on data on China's rural community population from 1990 to 2019, a conformable fractional accumulation nonhomogeneous grey Bernoulli model, CFANGBM(1, 1, b, c), was established for rural population forecasting. The proposed IAO algorithm was used to optimize the parameters of the model, and the rural population of China was then predicted. Four error measures were used to evaluate the model, and comparisons with other forecasting models show that the proposed model had the smallest error between the forecasted and real values, which illustrates the effectiveness of using the IAO algorithm to solve CFANGBM(1, 1, b, c). At the end of the paper, forecasts of China's rural population from 2020 to 2024 are given for reference.
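As a sketch of the accumulation step such grey models build on, the code below implements the conformable fractional accumulation operator commonly written as X_k = Σ_{i=1..k} x_i / i^(1-r) for order r in (0, 1]; this is the standard operator from the CFA literature, and whether the paper uses exactly this form is an assumption.

```python
import numpy as np

def conformable_fractional_accumulation(x, r):
    """r-order conformable fractional accumulation, 0 < r <= 1:
    X_k = sum_{i=1..k} x_i / i**(1 - r).  For r = 1 this reduces to the
    ordinary first-order accumulated generating operation of grey models."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, len(x) + 1)
    return np.cumsum(x / i ** (1.0 - r))

# Toy population series in millions (assumed values, not the paper's data)
x = [841, 838, 834, 830, 825]
print(conformable_fractional_accumulation(x, r=0.8))
print(conformable_fractional_accumulation(x, r=1.0))  # plain cumulative sum
```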


2021 ◽  
Vol 14 (10) ◽  
pp. 486
Author(s):  
Dante Miller ◽  
Jong-Min Kim

In this study, we predicted the log returns of the top 10 cryptocurrencies based on market cap, using univariate and multivariate machine learning methods such as recurrent neural networks, deep learning neural networks, Holt’s exponential smoothing, autoregressive integrated moving average, ForecastX, and long short-term memory networks. The multivariate long short-term memory networks performed better than the univariate machine learning methods in terms of the prediction error measures.
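The log-return transform these models are trained on is simple to reproduce; the sketch below computes log returns from a price series and scores a forecast with RMSE, one of the usual prediction error measures (the price values and the zero-return baseline are made up for illustration).

```python
import numpy as np

def log_returns(prices):
    """r_t = ln(P_t / P_{t-1}); one element shorter than the input series."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

def rmse(y, yhat):
    """Root mean squared prediction error."""
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))

prices = [41200.0, 42100.0, 40800.0, 43550.0]   # toy daily closes
r = log_returns(prices)
naive = np.zeros_like(r)                        # zero-return baseline forecast
print(r, rmse(r, naive))
```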


Author(s):  
Hyuk-Jae Roh

This paper develops a weather traffic model and verifies its validity through temporal transferability within the context of the Alberta highway network. This research used traffic and weather data collected at four weigh-in-motion (WIM) sites and weather stations over five years to develop models for two vehicle classes. We collected an additional year of traffic data from the same four WIM locations to conduct a temporal transferability test. We evaluated the estimation accuracy resulting from the temporal transferability by measuring the value of R² and the magnitude of five error measures. All in all, the developed models transferred successfully to a different year. The study results reveal that different structural types of models could be better suited to each vehicle type and road function.
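A minimal sketch of the transferability check described here, assuming a model fit on the development years is re-scored, unchanged, on a held-out year using R²; the synthetic traffic-versus-temperature data and the linear model are placeholders, not the paper's model form.

```python
import numpy as np

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)

# Fit a simple traffic-vs-temperature model on the development years ...
temp_dev = rng.uniform(-30, 25, 200)                        # deg C
traffic_dev = 5000 + 40 * temp_dev + rng.normal(0, 150, 200)
coef = np.polyfit(temp_dev, traffic_dev, deg=1)

# ... then score it, without refitting, on the held-out transfer year
temp_new = rng.uniform(-30, 25, 50)
traffic_new = 5000 + 40 * temp_new + rng.normal(0, 150, 50)
print(r_squared(traffic_new, np.polyval(coef, temp_new)))
```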


Author(s):  
Betania Sánchez-Santamaría ◽  
Boris Mederos ◽  
Delfino Cornejo-Monroy ◽  
Rey David Molina-Arredondo ◽  
Víctor Castaño

Accelerated degradation tests (ADT) are widely used in the manufacturing industry to obtain information on the reliability of components and materials by degrading the lifespan of the product through an acceleration factor that damages the material. The main objective is to obtain information quickly, model it to estimate the characteristics of the material's life under normal conditions of use, and save time and expense. The purpose of this work is to estimate the lifespan distribution of gold nanoparticles stabilized with lipoic acid (GNPs@LA) through accelerated degradation tests applying sodium chloride (NaCl) as the acceleration factor. For this, the synthesis of GNPs@LA was carried out, a constant-stress ADT (CSADT) was applied, and a non-linear Wiener process with random effects, measurement errors, and different covariate structures was proposed to fit the degradation signals. The information obtained from the test and analysis yields the life distribution of GNPs@LA; the results make it possible to determine the guaranteed time for a possible commercialization and successful application based on the stability of the material. In addition, the Akaike information criterion and bootstrapping were used for model evaluation and selection.
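A minimal sketch of a non-linear Wiener degradation path with random effects and measurement error, of the general form Y(t) = η Λ(t) + σ_B B(Λ(t)) + ε with time transformation Λ(t) = t^q; the transformation, parameter values, and noise levels are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def wiener_degradation_path(t, eta_mean, eta_sd, sigma_B, sigma_eps, q):
    """Simulate one observed degradation path:
       Y(t) = eta * Lambda(t) + sigma_B * B(Lambda(t)) + eps,
    with Lambda(t) = t**q (non-linear drift), eta ~ N(eta_mean, eta_sd^2)
    as a unit-to-unit random effect, and eps ~ N(0, sigma_eps^2) as
    measurement error."""
    lam = np.asarray(t, float) ** q
    eta = rng.normal(eta_mean, eta_sd)                     # random effect
    dB = rng.normal(0.0, np.sqrt(np.diff(lam, prepend=0.0)))
    path = eta * lam + sigma_B * np.cumsum(dB)             # drift + diffusion
    return path + rng.normal(0.0, sigma_eps, len(lam))     # measurement error

t = np.linspace(0.5, 48, 96)                               # hours of exposure
y = wiener_degradation_path(t, eta_mean=0.8, eta_sd=0.1,
                            sigma_B=0.3, sigma_eps=0.05, q=0.7)
print(y[:5])
```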


Materials ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4178
Author(s):  
Olaf Popczyk ◽  
Grzegorz Dziatkiewicz

New engineering materials exhibit a complex internal structure that determines their properties. For thermal metamaterials, it is essential to shape the spatial variability of their thermophysical parameters to ensure unique heat flux control properties. Modeling heterogeneous materials such as thermal metamaterials is a current research problem, and meshless methods are quite popular for such simulations. The main problem when using new modeling methods is the selection of their optimal parameters. The Kansa method is a well-established method of solving problems described by partial differential equations; however, one unsolved problem that hinders its popularization is choosing the optimal shape parameter value of the radial basis functions. The algorithm proposed by Fasshauer and Zhang is, as of today, one of the most popular and best-established algorithms for finding a good shape parameter value for the Kansa method. However, it turns out that it is not suitable for all classes of computational problems, e.g., for modeling 1D heat conduction in non-homogeneous materials, as in the present paper. This work proposes two new algorithms for finding a good shape parameter value: one based on analyzing the condition number of a matrix obtained by performing specific operations on the interpolation matrix, and the other a modification of the Fasshauer algorithm. According to the error measures used in this work, the proposed algorithms provide, for the considered class of problems, shape parameter values that lead to better results than the classic Fasshauer algorithm.
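The condition-number idea can be sketched compactly: scan candidate shape parameters, build the RBF collocation matrix for each, and keep the first value whose condition number falls in a target range (a common heuristic for double precision is roughly 10^12 to 10^16). The multiquadric basis, the 1D grid, and the target range below are assumptions, not the specific algorithm proposed in the paper.

```python
import numpy as np

def multiquadric_matrix(x, c):
    """Multiquadric RBF interpolation matrix on collocation points x."""
    r = np.abs(x[:, None] - x[None, :])
    return np.sqrt(r ** 2 + c ** 2)

def shape_by_condition(x, c_grid, cond_lo=1e12, cond_hi=1e16):
    """Return the first shape parameter whose interpolation matrix has a
    condition number inside [cond_lo, cond_hi] (a Fasshauer-style
    heuristic range for double precision)."""
    for c in c_grid:
        kappa = np.linalg.cond(multiquadric_matrix(x, c))
        if cond_lo <= kappa <= cond_hi:
            return c, kappa
    return None

x = np.linspace(0.0, 1.0, 25)            # 1D collocation points
print(shape_by_condition(x, np.linspace(0.05, 2.0, 40)))
```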


2021 ◽  
Author(s):  
Caitlyn McColeman ◽  
Fumeng Yang ◽  
Timothy F. Brady ◽  
Steven Franconeri

Data can be visually represented using visual channels like position, length, or luminance. An existing ranking of these visual channels is based on how accurately participants could report the ratio between two depicted values. There is an assumption that this ranking should hold for different tasks and for different numbers of marks. However, there is surprisingly little existing work that tests this assumption, especially given that visually computing ratios is relatively unimportant in real-world visualizations compared to seeing, remembering, and comparing trends and motifs across displays that almost universally depict more than two values. To simulate the information extracted from a glance at a visualization, we instead asked participants to immediately reproduce a set of values from memory after they were shown the visualization. These values could be shown in a bar graph (position (bar)), line graph (position (line)), heat map (luminance), bubble chart (area), misaligned bar graph (length), or 'wind map' (angle). With a Bayesian multilevel modeling approach, we show how the rank positions of visual channels shift across different numbers of marks (2, 4, or 8) and for bias, precision, and error measures. The ranking did not hold, even for reproductions of only 2 marks, and the new probabilistic ranking was highly inconsistent for reproductions of different numbers of marks. Other factors besides channel choice had an order of magnitude more influence on performance, such as the number of values in the series (e.g., more marks led to larger errors) or the value of each mark (e.g., small values were systematically overestimated). Every visual channel was worse for displays with 8 marks than 4, consistent with established limits on visual memory. These results point to the need for a body of empirical studies that move beyond two-value ratio judgments as a baseline for reliably ranking the quality of a visual channel, including testing new tasks (detection of trends or motifs), timescales (immediate computation, or later comparison), and the number of values (from a handful to thousands).
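A small sketch of how the three reported quantities can be computed from reproduction data, assuming "bias" is the mean signed error, "precision" the spread of errors, and "error" the mean absolute error; these operationalizations are plausible readings of the abstract, not the paper's exact multilevel model.

```python
import numpy as np

def reproduction_measures(true_vals, reproduced):
    """Bias, precision, and error for one value-reproduction trial."""
    err = np.asarray(reproduced, float) - np.asarray(true_vals, float)
    return {
        "bias": err.mean(),              # signed: + means overestimation
        "precision": err.std(ddof=1),    # spread of errors across marks
        "error": np.abs(err).mean(),     # overall magnitude of mistakes
    }

# 4-mark trial where small values are overestimated (toy numbers)
true_vals  = [0.10, 0.30, 0.60, 0.90]
reproduced = [0.18, 0.34, 0.58, 0.86]
print(reproduction_measures(true_vals, reproduced))
```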


2021 ◽  
Vol 18 (2) ◽  
pp. 40-55
Author(s):  
Lídio Mauro Lima Campos ◽  
Jherson Haryson Almeida Pereira ◽  
Danilo Souza Duarte ◽  
Roberto Célio Limão Oliveira ◽  
...  

The aim of this paper is to introduce a biologically inspired approach that can automatically generate deep neural networks (DNNs) with good prediction capacity, smaller error, and large tolerance to noise. To do this, three biological paradigms were used: genetic algorithms (GA), Lindenmayer systems (L-systems), and neural networks. The final sections of the paper present experiments investigating the method's ability to forecast the price of energy in the Brazilian market. The proposed model considers multi-step-ahead price prediction (12, 24, and 36 weeks ahead). The results for MLP and LSTM networks show a good ability to predict peaks and satisfactory accuracy according to error measures, compared with other methods.
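An L-system is iterated string rewriting, which is why it can encode network topologies compactly; the sketch below shows the rewriting mechanism with a toy grammar in which symbols could stand for layers or branches. The production rules here are invented for illustration and are not the paper's grammar.

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite each symbol of the string via its production rule;
    symbols without a rule are copied through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy grammar: read 'D' as a dense layer, 'R' as a recurrent layer,
# and '[' ']' as opening/closing a parallel branch (hypothetical encoding).
rules = {"D": "D[R]D", "R": "RR"}
print(lsystem("D", rules, 1))   # D[R]D
print(lsystem("D", rules, 2))   # D[R]D[RR]D[R]D
```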

