STATISTICAL AND MACHINE LEARNING HIGH-FREQUENCY TIME SERIES FORECASTING METHODS IN AUTOMATIC MODE

Author(s):  
Sai Van Cuong ◽  
M. V. Shcherbakov

This research is devoted to the problem of automatic high-frequency time series forecasting (without an expert in the loop). The efficiency of high-frequency time series forecasting using different statistical and machine learning models is investigated. Classical statistical forecasting methods are compared with neural network models on 1000 synthetic high-frequency datasets. The neural network models give better prediction results; however, they take more time to compute than the statistical approaches.
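As a rough illustration of the kind of comparison carried out, the sketch below fits a classical exponential smoothing model and a small feedforward network to the same synthetic high-frequency series and reports error and fit time. It is an assumption for illustration, not the study's code: the libraries, series, window length, and parameters are all illustrative, and the neural model predicts one step ahead from true lags, so the comparison is only indicative.

    # Illustrative sketch (not the study's code): statistical vs. neural forecasting
    # on a synthetic high-frequency series.
    import time
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.arange(5000)
    series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)  # synthetic series
    train, test = series[:4500], series[4500:]

    # Classical statistical baseline: exponential smoothing, multi-step forecast.
    start = time.time()
    ses = ExponentialSmoothing(train, trend="add").fit()
    stat_pred = ses.forecast(len(test))
    stat_time = time.time() - start

    # Simple neural model: predict the next value from the previous 20 observations.
    lags = 20
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = series[lags:]
    split = 4500 - lags
    start = time.time()
    mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    mlp.fit(X[:split], y[:split])
    nn_pred = mlp.predict(X[split:])            # one-step-ahead, using true lags
    nn_time = time.time() - start

    print("statistical  MAE %.4f  time %.2fs" % (np.mean(np.abs(stat_pred - test)), stat_time))
    print("neural net   MAE %.4f  time %.2fs" % (np.mean(np.abs(nn_pred - y[split:])), nn_time))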

2020 ◽  
pp. 1-22 ◽  
Author(s):  
D. Sykes ◽  
A. Grivas ◽  
C. Grover ◽  
R. Tobin ◽  
C. Sudlow ◽  
...  

Abstract Using natural language processing, it is possible to extract structured information from raw text in the electronic health record (EHR) at reasonably high accuracy. However, the accurate distinction between negated and non-negated mentions of clinical terms remains a challenge. EHR text includes cases where diseases are stated not to be present or are only hypothesised, meaning a disease can be mentioned in a report when it is not being reported as present. This makes tasks such as document classification and summarisation more difficult. We have developed the rule-based EdIE-R-Neg, part of an existing text mining pipeline called EdIE-R (Edinburgh Information Extraction for Radiology reports, https://www.ltg.ed.ac.uk/software/edie-r/) developed to process brain imaging reports, and two machine learning approaches: one using a bidirectional long short-term memory network and another using a feedforward neural network. These were developed on data from the Edinburgh Stroke Study (ESS) and tested on routine reports from NHS Tayside (Tayside). Both datasets consist of written reports from medical scans. These models are compared with two existing rule-based models: pyConText (Harkema et al. 2009. Journal of Biomedical Informatics 42(5), 839–851), a Python implementation of a generalisation of NegEx, and NegBio (Peng et al. 2017. NegBio: A high-performance tool for negation and uncertainty detection in radiology reports. arXiv e-prints, arXiv:1712.05898), which identifies negation scopes through patterns applied to a syntactic representation of the sentence. On both the test set of the dataset from which our models were developed and the largely similar Tayside test set, the neural network models and our custom-built rule-based system outperformed the existing methods. EdIE-R-Neg scored highest on F1 score, particularly on the Tayside test set, from which no development data were used in these experiments, showing the power of custom-built rule-based systems for negation detection on datasets of this size. The performance gap between the machine learning models and EdIE-R-Neg on the Tayside test set was reduced by adding development Tayside data to the ESS training set, demonstrating the adaptability of the neural network models.
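For context, the rule-based family these systems belong to can be reduced to a toy NegEx-style check: a clinical term is flagged as negated if a negation cue appears within a short window before it. The sketch below is a heavily simplified illustration with an assumed cue list and window size; it does not reproduce EdIE-R-Neg, pyConText, or NegBio.

    # Toy NegEx-style negation check (illustrative only): a term is negated if a
    # negation cue occurs within a few tokens before it.
    NEG_CUES = {"no", "not", "without", "denies"}      # assumed cue list
    WINDOW = 5                                         # assumed scope window

    def is_negated(tokens, term_index, cues=NEG_CUES, window=WINDOW):
        """Return True if a negation cue appears within `window` tokens before the term."""
        start = max(0, term_index - window)
        return any(tok.lower() in cues for tok in tokens[start:term_index])

    tokens = "there is no evidence of acute infarct".split()
    print(is_negated(tokens, tokens.index("infarct")))   # True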


Recently, stock market prediction has become one of the essential application areas of time-series forecasting research. Successful prediction of the stock market can guide investors to maximize their profit and minimize the risk of investment. Stock market data are highly complex, non-linear and dynamic, which still makes the task challenging. In recent years, deep learning has become one of the most popular machine learning approaches for time-series forecasting due to its temporal feature extraction capabilities. In this paper, we propose a novel Deep Learning-based Integrated Stacked Model (DISM) that integrates a 1D convolutional neural network and an LSTM recurrent neural network to capture both the spatial and temporal features of stock market data. The proposed DISM is applied to forecast the stock market. We also compare the proposed DISM with single structured stacked LSTM and 1D convolutional neural network models, as well as some other statistical models. We observe that the proposed DISM produces better results in terms of accuracy and stability.
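A minimal sketch of the kind of hybrid architecture described, stacking a 1D convolution (local pattern extraction) in front of an LSTM (temporal dependencies), is given below in Keras. The layer sizes, window length, and input features are assumptions for illustration and are not taken from the paper.

    # Hypothetical Conv1D + LSTM stack for next-step forecasting; sizes are illustrative.
    from tensorflow import keras
    from tensorflow.keras import layers

    window = 30          # assumed look-back length (past time steps)
    n_features = 5       # assumed inputs, e.g. open/high/low/close/volume

    model = keras.Sequential([
        keras.Input(shape=(window, n_features)),
        layers.Conv1D(32, kernel_size=3, activation="relu"),  # local (spatial) patterns
        layers.MaxPooling1D(2),
        layers.LSTM(64),                                       # temporal dependencies
        layers.Dense(1),                                       # next-step target value
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()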


Author(s):  
Hyun-il Lim

A neural network is a machine learning approach that trains the connected nodes of a model to predict the results of specific problems. The prediction model is trained using previously collected training data. In training neural network models, overfitting can arise from excessive dependence on the training data and from structural problems of the models. In this paper, we analyze the effect of DropConnect for controlling overfitting in neural networks. The effect is analyzed with respect to the DropConnect rate and the number of nodes used in designing the neural network. The results of this analysis help in understanding the effect of DropConnect in neural networks, so that an effective neural network model can be designed by applying DropConnect with appropriate parameters.
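To make the mechanism concrete: DropConnect randomly zeroes individual weights during training, rather than whole activations as standard dropout does. The PyTorch sketch below is a minimal illustration assuming the common inverted-scaling formulation; it is not the implementation analyzed in the paper.

    # Minimal DropConnect sketch (illustrative): randomly zero individual weights
    # during training instead of whole activations as in standard dropout.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DropConnectLinear(nn.Linear):
        def __init__(self, in_features, out_features, drop_rate=0.5):
            super().__init__(in_features, out_features)
            self.drop_rate = drop_rate    # fraction of weights dropped per forward pass

        def forward(self, x):
            if self.training and self.drop_rate > 0:
                mask = torch.rand_like(self.weight) >= self.drop_rate
                weight = self.weight * mask / (1.0 - self.drop_rate)  # inverted scaling (assumed)
                return F.linear(x, weight, self.bias)
            return F.linear(x, self.weight, self.bias)

    layer = DropConnectLinear(10, 4, drop_rate=0.3)
    layer.train()
    print(layer(torch.randn(2, 10)).shape)   # torch.Size([2, 4])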


Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3642
Author(s):  
Alessandro Simeone ◽  
Elliot Woolley ◽  
Josep Escrig ◽  
Nicholas James Watson

Effectively cleaning equipment is essential for the safe production of food but requires a significant amount of time and resources such as water, energy, and chemicals. To optimize the cleaning of food production equipment, there is a need for innovative technologies to monitor the removal of fouling from equipment surfaces. In this work, optical and ultrasonic sensors are used to monitor the removal of food fouling materials with different physicochemical properties from a benchtop rig. Tailored signal and image processing procedures are developed to monitor the cleaning process, and a neural network regression model is developed to predict the amount of fouling remaining on the surface. The results show that the three dissimilar food fouling materials investigated were removed from the test section via different cleaning mechanisms, and the neural network models were able to predict the area and volume of fouling present during cleaning with accuracies as high as 98% and 97%, respectively. This work demonstrates that sensors and machine learning methods can be effectively combined to monitor cleaning processes.
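The regression step could, in principle, resemble the sketch below: a small feedforward network mapping features derived from the optical and ultrasonic signals to the remaining fouling quantity. The feature set, network size, and synthetic data are assumptions for illustration, not the authors' pipeline.

    # Hypothetical sketch: regress remaining fouling from sensor-derived features
    # (e.g. image coverage, ultrasonic energy); features and sizes are assumed.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.random((500, 4))                    # stand-in features from optical/ultrasonic processing
    y = 100 * X[:, 0] * (1 - 0.3 * X[:, 1])     # stand-in "remaining fouling area" target

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    reg = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    reg.fit(X_tr, y_tr)
    print("R^2 on held-out data: %.3f" % reg.score(X_te, y_te))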


A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and for diagnosing cutting tool wear are developed. Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice.


2021 ◽  
Vol 12 (6) ◽  
pp. 1-21
Author(s):  
Jayant Gupta ◽  
Carl Molnar ◽  
Yiqun Xie ◽  
Joe Knight ◽  
Shashi Shekhar

Spatial variability is a prominent feature of various geographic phenomena such as climatic zones, USDA plant hardiness zones, and terrestrial habitat types (e.g., forest, grasslands, wetlands, and deserts). However, current deep learning methods follow a spatial one-size-fits-all (OSFA) approach, training single deep neural network models that do not account for spatial variability. Quantification of spatial variability can be challenging due to the influence of many geophysical factors. In preliminary work, we proposed a spatial variability aware neural network (SVANN-I, formerly called SVANN) approach where the weights are a function of location but the neural network architecture is location independent. In this work, we explore a more flexible SVANN-E approach where the neural network architecture varies across geographic locations. In addition, we provide a taxonomy of SVANN types and a physics-inspired interpretation model. Experiments with aerial-imagery-based wetland mapping show that SVANN-I outperforms OSFA and SVANN-E performs best of all.
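A crude way to picture the SVANN-I idea (shared architecture, location-dependent weights) against the OSFA baseline is to fit one model per spatial zone rather than a single global model, as in the sketch below. The zoning scheme, classifier, and synthetic data are illustrative assumptions, not the paper's method.

    # Illustrative contrast: one-size-fits-all (single model) vs. location-dependent
    # weights (same architecture, separate weights per assumed spatial zone).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_osfa(X, y):
        """Single global model trained on data from all locations."""
        return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

    def train_svann_i(X, y, zone_ids):
        """Same architecture everywhere, but weights fitted separately per zone."""
        models = {}
        for zone in set(zone_ids):
            idx = [i for i, z in enumerate(zone_ids) if z == zone]
            models[zone] = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X[idx], y[idx])
        return models

    rng = np.random.default_rng(0)
    X = rng.random((300, 6))
    y = (X[:, 0] > 0.5).astype(int)               # stand-in labels
    zones = rng.integers(0, 3, size=300).tolist()  # three assumed spatial zones
    global_model = train_osfa(X, y)
    zone_models = train_svann_i(X, y, zones)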


2018 ◽  
Vol 8 (8) ◽  
pp. 1290 ◽  
Author(s):  
Beata Mrugalska

Increasing expectations of industrial system reliability require the development of more effective and robust fault diagnosis methods. The paper presents a framework for improving the quality of a neural model applied for fault detection purposes. In particular, the proposed approach starts with an adaptation of the modified quasi-outer-bounding algorithm to non-linear neural network models. Subsequently, its convergence is proven using the quadratic boundedness paradigm. The obtained algorithm is then equipped with a sequential D-optimum experimental design mechanism allowing gradual reduction of the neural model uncertainty. Finally, a robust fault detection framework based on the neural network uncertainty description as adaptive thresholds is proposed.
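The final step, residual-based fault detection with adaptive thresholds derived from the model uncertainty, can be pictured with the generic sketch below; the uncertainty description and thresholding rule here are simplified assumptions and do not reproduce the quasi-outer-bounding or D-optimum machinery of the paper.

    # Generic sketch of residual-based fault detection with adaptive thresholds:
    # flag a fault when the residual leaves an uncertainty-dependent band.
    # Simplified assumption; not the paper's algorithm.
    import numpy as np

    def detect_faults(y_measured, y_model, sigma_model, k=3.0):
        """Fault where |measurement - model output| exceeds k times the model uncertainty."""
        residual = np.abs(np.asarray(y_measured) - np.asarray(y_model))
        threshold = k * np.asarray(sigma_model)    # adaptive: follows the model uncertainty
        return residual > threshold

    y_meas = [1.0, 1.1, 2.5, 1.0]     # measured outputs
    y_nn = [1.0, 1.05, 1.1, 1.0]      # neural model outputs
    sigma = [0.1, 0.1, 0.1, 0.1]      # assumed per-sample model uncertainty
    print(detect_faults(y_meas, y_nn, sigma))      # [False False  True False]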

