fitting error
Recently Published Documents


TOTAL DOCUMENTS: 95 (five years: 20)

H-INDEX: 10 (five years: 2)

2022 · Vol 2022 · pp. 1-8
Author(s): Yang Li, Lijing Zhang, Yuan Tian, Wanqiang Qi

This paper establishes a quality evaluation system for hybrid teaching practice in colleges and constructs a hybrid teaching quality evaluation model based on a deep belief network (DBN). The Pearson correlation coefficient and the root mean square error (RMSE) are used to measure, respectively, the closeness and the fluctuation between the online teaching quality scores produced by this method and the actual teaching quality results. The experimental results show the following: (1) As the number of iterations increases, the fitting error of the DBN model decreases significantly; when the number of iterations reaches 20, the fitting error stabilizes below 0.01, indicating that the model has good learning and training performance and a low fitting error. (2) The evaluation correlation coefficients are all greater than 0.85 and the evaluation RMSE is less than 0.45, indicating that the evaluation results of this method are close to the actual evaluation level with small errors, so the method can be effectively applied to online teaching quality evaluation in colleges and universities.
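As a reference for the two agreement metrics named above, the following minimal Python sketch computes the Pearson correlation coefficient and the RMSE between model-produced and actual evaluation scores; the score arrays are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: the two agreement metrics named in the abstract
# (Pearson correlation and RMSE) between model scores and actual scores.
import numpy as np
from scipy.stats import pearsonr

predicted = np.array([82.1, 75.4, 90.3, 68.7, 88.0])  # model evaluations (hypothetical)
actual    = np.array([80.5, 77.0, 91.2, 70.1, 86.4])  # expert evaluations (hypothetical)

r, _ = pearsonr(predicted, actual)                    # closeness of the two score series
rmse = np.sqrt(np.mean((predicted - actual) ** 2))    # fluctuation between them

print(f"Pearson r = {r:.3f}, RMSE = {rmse:.3f}")
```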


2021 · Vol 2021 · pp. 1-13
Author(s): Ziqi Yin, Kai Zhang

Forecasting the depth of groundwater in arid and semiarid areas is a great challenge because these areas have complex hydrogeological environments and limited observational data. To deal with this problem, a grey seasonal index model is proposed. The seasonal characteristics of the time series are represented by indices, and a grey model with fractional-order accumulation is employed to fit and forecast the periodic indices and the long-term trend, respectively; the two sets of predictions are then combined to obtain the final forecast. To verify its performance, the proposed model is applied to groundwater prediction in the Yinchuan Plain. The results show that the fitting error of the proposed model is 2.08%, whereas, for comparison, the fitting errors of the data-grouping grey model and the Holt–Winters model are 3.94% and 5%, respectively. Applied in the same way to the Weining Plain, the proposed model achieves a fitting error of 2.26%. On the whole, the groundwater depth in the Ningxia Plain, which includes the Yinchuan and Weining Plains, is expected to increase further.
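The fractional-order accumulation on which the grey model is built can be sketched as follows. This implements only the standard r-order accumulated generating operation (r-AGO), not the full seasonal index model from the paper, and the depth series is a hypothetical placeholder.

```python
# Sketch of fractional-order accumulation (r-AGO): x_r[k] = sum_i C(k-i+r-1, k-i) * x[i],
# with generalized binomial coefficients. For r = 1 this reduces to a cumulative sum.
import numpy as np
from scipy.special import gammaln

def fractional_ago(x, r):
    n = len(x)
    out = np.zeros(n)
    for k in range(n):
        for i in range(k + 1):
            j = k - i  # lag
            # C(j + r - 1, j) computed via log-gamma for numerical stability
            coef = np.exp(gammaln(j + r) - gammaln(j + 1) - gammaln(r))
            out[k] += coef * x[i]
    return out

depths = np.array([3.1, 3.3, 3.0, 3.6, 3.8])  # hypothetical groundwater depths (m)
print(fractional_ago(depths, r=0.5))
```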


2021 · Vol 3 (1)
Author(s): Yujiang He, Bernhard Sick

Catastrophic forgetting means that a trained neural network model gradually forgets previously learned tasks when retrained on new ones. Overcoming forgetting is a major challenge in machine learning. Numerous continual learning algorithms are very successful at incremental learning of classification tasks, where new samples with their labels appear frequently. To the best of our knowledge, however, no existing research addresses catastrophic forgetting in regression tasks. This problem has emerged as one of the primary constraints in some applications, such as renewable energy forecasting. This article clarifies the problem-related definitions and proposes a new methodological framework that can forecast targets and update itself by means of continual learning. The framework consists of forecasting neural networks and buffers that store newly collected data from a non-stationary data stream in an application. Changes in the probability distribution of the data stream, once identified by the framework, are learned sequentially. The framework is called CLeaR (Continual Learning for Regression Tasks), and its components can be flexibly customized for a specific application scenario. We design two sets of experiments to evaluate the CLeaR framework with respect to fitting error (training), prediction error (test), and forgetting ratio. The first is based on an artificial time series and explores how hyperparameters affect the CLeaR framework. The second is designed with data collected from European wind farms and evaluates the CLeaR framework's performance in a real-world application. The experimental results demonstrate that the CLeaR framework can continually acquire knowledge from the data stream and improve prediction accuracy. The article concludes with further research issues arising from requirements to extend the framework.
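A minimal sketch of the buffer-and-update loop described above might look like the following. The class name, buffer size, error threshold, and update rule are illustrative assumptions, not the CLeaR implementation; the wrapped model is assumed to expose sklearn-style fit/predict and to be pre-trained on the initial task.

```python
# Illustrative CLeaR-style loop: buffer new samples from a non-stationary
# stream and refit the forecaster when the buffered error suggests a
# distribution change. Thresholds and the update rule are assumptions.
from collections import deque
import numpy as np

class BufferedForecaster:
    def __init__(self, model, capacity=256, error_threshold=0.1):
        self.model = model                    # any regressor with fit/predict
        self.buffer = deque(maxlen=capacity)  # recent (features, target) pairs
        self.error_threshold = error_threshold

    def step(self, x, y):
        """Forecast y from feature vector x, buffer the pair, update if needed."""
        pred = self.model.predict(np.atleast_2d(x))[0]
        self.buffer.append((x, y))
        # Mean absolute buffer error as a crude distribution-change indicator.
        errors = [abs(self.model.predict(np.atleast_2d(bx))[0] - by)
                  for bx, by in self.buffer]
        if np.mean(errors) > self.error_threshold:
            X = np.array([bx for bx, _ in self.buffer])
            Y = np.array([by for _, by in self.buffer])
            self.model.fit(X, Y)              # learn the changed distribution
        return pred
```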


Risks · 2021 · Vol 9 (7) · pp. 124
Author(s): Ewa Dziwok, Marta A. Karaś

The paper presents an alternative approach to measuring systemic illiquidity, applicable to countries with frontier and emerging financial markets where other existing methods cannot be used. We develop a novel Systemic Illiquidity Noise (SIN)-based measure, using the Nelson–Siegel–Svensson methodology, in which we utilize the curve-fitting error as an indicator of financial system illiquidity. We apply our method empirically to a set of ten Central and Eastern European countries (Bulgaria, Croatia, Czechia, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, and Slovakia) over the period 2006–2020. The results show three periods of increased risk in the sample period: the global financial crisis, the European public debt crisis, and the COVID-19 pandemic. They also allow us to identify three distinct groups of countries with different systemic liquidity risk characteristics. The analysis further illustrates the impact of the introduction of the euro on systemic illiquidity risk. The proposed methodology may be of use to financial system regulators and macroprudential bodies: it allows for contemporaneous monitoring of the discussed risk at minimal cost, using well-known models and easily accessible data.
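A minimal sketch of the underlying computation, assuming hypothetical yield quotes and starting values: fit a Nelson–Siegel–Svensson curve by least squares and read the RMS residual as the illiquidity-noise signal. The exact weighting and data treatment of the SIN measure are not reproduced here.

```python
# Fit the Nelson–Siegel–Svensson yield curve and use the curve-fitting error
# as an illiquidity-noise proxy. Maturities/yields are hypothetical quotes.
import numpy as np
from scipy.optimize import least_squares

def nss(tau, b0, b1, b2, b3, l1, l2):
    f1 = (1 - np.exp(-tau / l1)) / (tau / l1)
    f2 = f1 - np.exp(-tau / l1)
    f3 = (1 - np.exp(-tau / l2)) / (tau / l2) - np.exp(-tau / l2)
    return b0 + b1 * f1 + b2 * f2 + b3 * f3

maturities = np.array([0.25, 1, 2, 5, 10])        # years (hypothetical)
yields     = np.array([0.8, 1.1, 1.4, 2.0, 2.6])  # percent (hypothetical)

res = least_squares(
    lambda p: nss(maturities, *p) - yields,
    x0=[2.5, -1.5, 0.5, 0.5, 1.0, 5.0],           # illustrative starting values
)
noise = np.sqrt(np.mean(res.fun ** 2))            # RMS fitting error
print(f"curve-fitting noise: {noise:.4f}")
```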


Entropy · 2021 · Vol 23 (6) · pp. 699
Author(s): David Romero-Bascones, Maitane Barrenechea, Ane Murueta-Goyena, Marta Galdós, Juan Carlos Gómez-Esteban, ...

Disentangling the cellular anatomy that gives rise to human visual perception is one of the main challenges of ophthalmology. Of particular interest is the foveal pit, a concave depression at the center of the retina that captures light from the gaze center. In recent years, there has been growing interest in studying the morphology of the foveal pit by extracting geometrical features from optical coherence tomography (OCT) images. Despite this, research has devoted little attention to comparing existing approaches for two key methodological steps: locating the foveal center and mathematically modelling the foveal pit. Building upon a dataset of 185 healthy subjects imaged twice, the present paper first studies the image alignment accuracy of four different foveal center location methods. Second, state-of-the-art foveal pit mathematical models are compared in terms of fitting error, repeatability, and bias. The results indicate the importance of using a robust foveal center location method to align images. Moreover, we show that foveal pit models can improve the agreement between different acquisition protocols. Nevertheless, they can also introduce important biases in the parameter estimates that should be taken into account.
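For illustration, the sketch below fits one simple radially symmetric pit model, a Gaussian depression in the retinal thickness profile, and reports the fitting error. The paper compares several published models; this stand-in and its synthetic profile are assumptions for demonstration only.

```python
# Fit a Gaussian-depression model of the foveal pit to a synthetic thickness
# profile and report the fitting RMSE (illustrative stand-in model).
import numpy as np
from scipy.optimize import curve_fit

def gaussian_pit(r, base, depth, sigma):
    """Thickness profile: flat retina minus a Gaussian-shaped pit at r = 0."""
    return base - depth * np.exp(-(r ** 2) / (2 * sigma ** 2))

r = np.linspace(-2, 2, 101)                                 # eccentricity (mm), synthetic
profile = gaussian_pit(r, 320, 120, 0.5) + np.random.normal(0, 3, r.size)

params, _ = curve_fit(gaussian_pit, r, profile, p0=[300, 100, 0.6])
rmse = np.sqrt(np.mean((gaussian_pit(r, *params) - profile) ** 2))
print(f"fitted (base, depth, sigma) = {params}, fitting RMSE = {rmse:.2f} um")
```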


Author(s): Cristina G. Wilson, Feifei Qian, Douglas J. Jerolmack, Sonia Roberts, Jonathan Ham, ...

How do scientists generate and weight candidate queries for hypothesis testing, and how does learning from observations or experimental data affect query selection? Field sciences offer a compelling context for these questions because query selection and adaptation involve consideration of the spatiotemporal arrangement of data and therefore closely parallel classic search and foraging behavior. Here we conduct a novel simulated data-foraging study, together with a complementary real-world case study, to determine how spatiotemporal data collection decisions are made in the field sciences and how search is adapted in response to in-situ data. Expert geoscientists evaluated a hypothesis by collecting environmental data using a mobile robot. At any point, participants could stop the robot and change their search strategy or draw a conclusion about the hypothesis. We identified spatiotemporal reasoning heuristics to which scientists strongly anchored, displaying limited adaptation to new data. We analyzed two key decision factors: variable-space coverage and fitting error to the hypothesis. We found that, despite varied search strategies, the majority of scientists drew a conclusion as the fitting error converged. Scientists who concluded prematurely, with insufficient variable-space coverage or before the fitting error stabilized, were more prone to incorrect conclusions. We also found that novice undergraduates used the same heuristics as expert geoscientists in a simplified version of the scenario. We believe these findings could be used to improve field science training in data foraging and to aid the development of technologies that support data collection decisions.
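The convergence criterion described above can be sketched as a simple stopping rule: conclude once the fitting error stops changing by more than a tolerance over a recent window. Window size, tolerance, and the error history below are illustrative assumptions, not values from the study.

```python
# Stop data collection once the hypothesis fitting error has converged.
import numpy as np

def fitting_error_converged(errors, window=3, tol=0.01):
    """True if the last `window` fitting errors vary by less than `tol`."""
    if len(errors) < window:
        return False
    recent = np.asarray(errors[-window:])
    return recent.max() - recent.min() < tol

history = [0.42, 0.25, 0.16, 0.115, 0.112, 0.108]  # error after each new sample
print(fitting_error_converged(history))             # True: a safe point to conclude
```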


Author(s): Xiaole Guo, Xixiang Liu, Miaomiao Zhao, Jie Yan, Wenqiang Yang, ...

To accurately track body attitude in highly dynamic environments, a new attitude updating algorithm for the strapdown inertial navigation system is proposed by applying a higher-degree polynomial to the quaternion Picard iteration (QPI) algorithm. With QPI, the calculation error introduced by the Picard approximation can be eliminated, but the angular-rate fitting error introduced by substituting a polynomial for the body's angular rate still affects the accuracy of attitude updating algorithms designed on a polynomial model. Hence, a method for constructing a five-degree (rather than three-degree) polynomial from four samples of gyro outputs under a coning-motion constraint is designed and tested. Simulation results indicate that the proposed method is more accurate than QPI, the optimal coning algorithm, and Fourth4Rot in both low and high dynamic environments.
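For context, the baseline step the paper extends can be sketched as follows: recovering a three-degree angular-rate polynomial (per axis) from four gyro angle increments by solving a small linear system. The increments and update interval are hypothetical, and the paper's five-degree construction with the coning-motion constraint is not reproduced here.

```python
# Illustrative baseline only: cubic angular-rate fit from four angle increments.
import numpy as np

h = 0.01                  # attitude update interval (s), hypothetical
T = h / 4                 # four equal gyro sampling subintervals

# Integrating t^k over [i*T, (i+1)*T] gives the linear map from the polynomial
# coefficients (a, b, c, d) of w(t) = a + b*t + c*t^2 + d*t^3 to the increments.
A = np.array([[(((i + 1) * T) ** (k + 1) - (i * T) ** (k + 1)) / (k + 1)
               for k in range(4)]
              for i in range(4)])

d_theta = np.array([2.5e-4, 2.6e-4, 2.8e-4, 3.1e-4])  # hypothetical increments (rad)
coeffs = np.linalg.solve(A, d_theta)                   # a, b, c, d of the fitted rate
print(coeffs)
```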


Sensors · 2021 · Vol 21 (4) · pp. 1304
Author(s): Wenchao Wu, Yongguang Hu, Yongzong Lu

Plant leaf 3D architecture changes during growth and shows a sensitive response to environmental stresses. In recent years, acquisition and segmentation methods for leaf point clouds have developed rapidly, but 3D modelling of leaf point clouds has not gained much attention. In this study, a parametric surface modelling method was proposed for accurately fitting tea leaf point clouds. Firstly, principal component analysis was utilized to adjust the posture and position of the point cloud. Then, the point cloud was sliced into multiple sections, and some sections were selected to generate a point set to be fitted (PSF). Finally, the PSF was fitted with a non-uniform rational B-spline (NURBS) surface. Two methods were developed to generate an ordered PSF and an unordered PSF, respectively. The PSF was first fitted as a B-spline surface and then transformed to NURBS form by minimizing the fitting error, which was solved by particle swarm optimization (PSO). The fitting error was defined as a weighted sum of the root-mean-square error (RMSE) and the maximum value (MV) of the Euclidean distances between the fitted surface and a subset of the point cloud. The results showed that the proposed modelling method can be used even when the point cloud is heavily simplified (RMSE < 1 mm, MV < 2 mm, without performing PSO). Future studies will model a wider range of leaves as well as incomplete point clouds.
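A minimal sketch of the fitting-error objective described above, as a weighted sum of RMSE and MV over point-to-surface distances. The surface samples, the point subset, and the weight alpha = 0.5 are illustrative assumptions; evaluating the actual NURBS surface is not reproduced here.

```python
# Weighted RMSE + MV objective of the kind minimized by PSO in the abstract.
import numpy as np

def fitting_error(surface_pts, cloud_pts, alpha=0.5):
    """Weighted sum of RMSE and the maximum of pointwise Euclidean distances."""
    d = np.linalg.norm(surface_pts - cloud_pts, axis=1)   # per-point distances
    rmse = np.sqrt(np.mean(d ** 2))
    mv = d.max()
    return alpha * rmse + (1 - alpha) * mv

cloud = np.random.rand(200, 3)                            # stand-in point subset
fitted = cloud + np.random.normal(0, 1e-3, cloud.shape)   # stand-in surface samples
print(f"objective for PSO: {fitting_error(fitted, cloud):.5f}")
```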

