PREDICTION AND MAPPING OF COCHLODINIUM POLYKRIKOIDES RED TIDE USING MACHINE LEARNING UNDER IMBALANCED DATA

Author(s):  
S. H. Bak ◽  
D. H. Hwang ◽  
U. Enkhjargal ◽  
H. J. Yoon

Abstract. Cochlodinium polykrikoides (C. polykrikoides) is a phytoplankton that causes red tides every year in the middle of the South Sea of Korea. C. polykrikoides is a harmful alga with migratory ability; once a red tide occurs, it causes fisheries damage over a wide sea area for a long period. To minimize red tide damage, it is important to anticipate the timing and location of red tide occurrence in advance and prepare for it. In this study, we predicted the occurrence of C. polykrikoides red tides using machine learning techniques and compared the results of each algorithm. A logistic regression model, a decision tree model, and a multilayer neural network model were used to predict red tide occurrence. To produce the data set for model training, we used the red tide occurrence maps provided by the National Institute of Fisheries Science, the Local Data Assimilation and Prediction System (LDAPS) provided by the Korea Meteorological Administration, and the G1SST provided by the National Oceanic and Atmospheric Administration (NOAA). The feature vectors used for modeling consisted of 59 elements derived from air temperature, water temperature, precipitation, solar radiation, wind direction, and wind speed. Only a very small number of red tide cases can be collected compared with non-occurrence cases, so the data set suffers from a class imbalance problem. To overcome it, we oversampled the red tide occurrence data and added noise to the copies, equalizing the number of samples in the two classes. The data set was split 8:2 to prevent over-fitting: 80% was used as training data, and the remaining 20% was used to evaluate the performance of each model. In this evaluation, the multilayer neural network model showed the highest prediction accuracy.
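The balancing step described here can be sketched in a few lines of Python. The following is a minimal illustration, not the authors' code: it oversamples the minority (red tide) class with replacement and perturbs the copies with Gaussian noise. The function name, variable names, and noise scale are assumptions.

```python
import numpy as np

def oversample_with_noise(X, y, minority_label=1, noise_scale=0.01, seed=None):
    """Balance a binary data set by resampling the minority class
    (with replacement) and perturbing the copies with Gaussian noise.

    X: (n_samples, n_features) feature matrix; y: (n_samples,) labels.
    noise_scale is relative to each feature's standard deviation.
    """
    rng = np.random.default_rng(seed)
    minority = X[y == minority_label]
    majority = X[y != minority_label]
    n_extra = len(majority) - len(minority)
    if n_extra <= 0:
        return X, y  # already balanced

    idx = rng.integers(0, len(minority), size=n_extra)
    noise = rng.normal(0.0, noise_scale * X.std(axis=0),
                       size=(n_extra, X.shape[1]))
    X_new = minority[idx] + noise
    y_new = np.full(n_extra, minority_label)

    X_bal = np.vstack([X, X_new])
    y_bal = np.concatenate([y, y_new])
    perm = rng.permutation(len(y_bal))  # shuffle before the 8:2 split
    return X_bal[perm], y_bal[perm]

# Illustrative use with a 59-element feature vector, as in the abstract:
# X has shape (n, 59); y is 1 where a red tide occurred, else 0.
```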

2020 ◽  
Vol 4 (2) ◽  
pp. 90-96
Author(s):  
Ishita Charkraborty ◽  
Brent Vyvial

With the advent of machine learning, data-based models can be used to increase efficiency and reduce cost in characterizing various anomalies in pipelines. In this work, artificial intelligence is used to classify pipeline dents into risk categories directly from in-line inspection (ILI) data. A deep neural network model is built with available ILI data, and the resulting machine learning model requires only the ILI data as input to classify dents into different risk categories. Using a machine learning-based model eliminates the need for detailed engineering analysis to determine the effects of dents on the integrity of the pipeline. Concepts from computer vision are used to build the deep neural network from the available data. The deep neural network model is trained on a subset of the available ILI data and tested for accuracy on a previously unseen set. The developed model predicts the risk factors associated with a dent with 94% accuracy on previously unseen data.
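As a rough illustration of this kind of classifier, the sketch below builds a small convolutional network that maps an ILI-derived dent patch to one of several risk categories. It is not the authors' model: the input patch size, number of categories, layer widths, and the random arrays standing in for real ILI data are all assumptions.

```python
import numpy as np
import tensorflow as tf

# Hypothetical input: each dent as a 64x64 patch of ILI geometry
# (radial deviation) data; three illustrative risk categories.
N_CATEGORIES = 3

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CATEGORIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on a subset and evaluate on a held-out set, as the abstract does.
# Placeholder arrays stand in for real ILI patches and risk labels:
X = np.random.rand(200, 64, 64, 1).astype("float32")
y = np.random.randint(0, N_CATEGORIES, size=200)
model.fit(X[:160], y[:160], epochs=5, verbose=0)
print(model.evaluate(X[160:], y[160:], verbose=0))  # [loss, accuracy]
```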


2020 ◽  
Author(s):  
Nicholas Menghi ◽  
Kemal Kacar ◽  
Will Penny

Abstract. This paper uses constructs from the field of multitask machine learning to define pairs of learning tasks that either shared or did not share a common subspace. Human subjects then learnt these tasks using a feedback-based approach. We found, as hypothesised, that subject performance was significantly higher on the second task if it shared the same subspace as the first, an advantage that played out most strongly at the beginning of the second task. Additionally, accuracy was positively correlated over subjects learning same-subspace tasks but was not correlated for those learning different-subspace tasks. These results, and other aspects of learning dynamics, were compared to the behaviour of a neural network model trained using sequential Bayesian inference. Human performance was found to be consistent with a Soft Parameter Sharing variant of this model that constrained representations to be similar among tasks, but only when this aided learning. We propose that the concept of shared subspaces provides a useful framework for the experimental study of human multitask and transfer learning.

Author summary. How does knowledge gained from previous experience affect learning of new tasks? This question of “Transfer Learning” has been addressed by teachers, psychologists, and more recently by researchers in the fields of neural networks and machine learning. Leveraging constructs from machine learning, we designed pairs of learning tasks that either shared or did not share a common subspace. We compared the dynamics of transfer learning in humans with those of a multitask neural network model, finding that human performance was consistent with a soft parameter sharing variant of the model. Learning was boosted in the early stages of the second task if the same subspace was shared between tasks. Additionally, accuracy between tasks was positively correlated, but only when they shared the same subspace. Our results highlight the role of subspaces, showing how they can act as a learning boost if shared and be detrimental if not.
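For readers unfamiliar with soft parameter sharing, the sketch below shows the core idea in a generic TensorFlow training step: two task networks are trained jointly while an L2 penalty discourages their weights from diverging. This illustrates the standard technique, not the paper's sequential Bayesian model; the architecture, coupling strength, and placeholder data are assumptions.

```python
import numpy as np
import tensorflow as tf

def make_net():
    # One small network per task; identical shapes so weights align.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(8,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

net_a, net_b = make_net(), make_net()
opt = tf.keras.optimizers.Adam()
bce = tf.keras.losses.BinaryCrossentropy()
LAMBDA = 0.1  # illustrative coupling strength

@tf.function
def train_step(xa, ya, xb, yb):
    with tf.GradientTape() as tape:
        loss = bce(ya, net_a(xa)) + bce(yb, net_b(xb))
        # Soft parameter sharing: penalize divergence between the two
        # tasks' weights rather than forcing a single shared network.
        for wa, wb in zip(net_a.trainable_variables, net_b.trainable_variables):
            loss += LAMBDA * tf.reduce_sum(tf.square(wa - wb))
    variables = net_a.trainable_variables + net_b.trainable_variables
    opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

# Placeholder task data (32 trials, 8 stimulus features each):
xa = np.random.rand(32, 8).astype("float32")
xb = np.random.rand(32, 8).astype("float32")
ya = np.random.randint(0, 2, (32, 1)).astype("float32")
yb = np.random.randint(0, 2, (32, 1)).astype("float32")
print(float(train_step(xa, ya, xb, yb)))
```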


2020 ◽  
Vol 8 (10) ◽  
pp. 766
Author(s):  
Dohan Oh ◽  
Julia Race ◽  
Selda Oterkus ◽  
Bonguk Koo

Mechanical damage is recognized as a problem that reduces the performance of oil and gas pipelines and has been the subject of continuous research. The artificial neural network, in the spotlight recently, is expected to offer another solution to problems relating to pipelines. The deep neural network, a machine learning method based on the artificial neural network algorithm, is applied in this study. The applicability of machine learning techniques such as the deep neural network to the prediction of burst pressure has been investigated for dented API 5L X-grade pipelines. To this end, supervised learning is employed; the deep neural network model has four layers, of which three are hidden, and uses fully connected layers. The burst pressure computed by the deep neural network model has been compared with the results of a parametric study based on finite element analysis and with the burst pressure calculated from experimental results, showing good agreement. Therefore, it is concluded that deep neural networks can be another solution for predicting the burst pressure of dented API 5L X-grade pipelines.
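The network shape described, four layers of which three are hidden and all fully connected, can be sketched as follows. The input features, layer widths, and training details are assumptions for illustration, not the authors' configuration.

```python
import tensorflow as tf

# Fully connected regressor with three hidden layers and one output
# (burst pressure). Example inputs might be pipe diameter, wall
# thickness, dent depth/length/width, and material grade (assumed).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layer 1
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layer 2
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layer 3
    tf.keras.layers.Dense(1),                      # predicted burst pressure
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(X_train, y_train, ...) with FEA-derived and experimental
# burst-pressure data, as in the study.
```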


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1514
Author(s):  
Seung-Ho Lim ◽  
WoonSik William Suh ◽  
Jin-Young Kim ◽  
Sang-Young Cho

The optimization of hardware processors and systems for performing deep learning operations such as Convolutional Neural Networks (CNNs) on resource-limited embedded devices is an active area of recent research. To run an optimized deep neural network model within the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. An Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs Deep Neural Network (DNN) operations on top of a RISC-V Virtual Platform implemented in SystemC, enabling rapid and diverse analysis of deep learning operations on an embedded device built around the recently emerging RISC-V processor. The developed RISC-V-based DLA prototype can analyze hardware requirements for a given CNN data set through configuration of the CNN DLA architecture; it can also run RISC-V compiled software on the platform and execute a real neural network model such as Darknet. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed with it by examining the DLA architecture for various data sets.
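The kind of per-layer hardware-requirement analysis such a prototype enables can be illustrated with a back-of-the-envelope calculation in plain Python: multiply-accumulate counts and weight/activation memory footprints for a small Darknet-style CNN. The layer shapes below are assumptions; the actual platform performs this analysis through its SystemC DLA model rather than a script like this.

```python
# (name, in_channels, out_channels, kernel, out_height, out_width)
LAYERS = [
    ("conv1", 3, 16, 3, 224, 224),
    ("conv2", 16, 32, 3, 112, 112),
    ("conv3", 32, 64, 3, 56, 56),
]

for name, cin, cout, k, h, w in LAYERS:
    macs = cin * cout * k * k * h * w   # multiply-accumulate operations
    weights = cin * cout * k * k        # parameter count
    activations = cout * h * w          # output feature map size
    print(f"{name}: {macs / 1e6:.1f} MMACs, "
          f"{weights * 4 / 1024:.0f} KiB weights, "
          f"{activations * 4 / 1024:.0f} KiB activations (fp32)")
```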


2021 ◽  
Vol 72 (1) ◽  
pp. 11-20
Author(s):  
Mingtao He ◽  
Wenying Li ◽  
Brian K. Via ◽  
Yaoqi Zhang

Abstract. Firms engaged in producing, processing, marketing, or using lumber and lumber products invest in futures markets to reduce the risk of lumber price volatility. Accurate prediction of real-time prices can help companies and investors hedge risks and make correct market decisions. This paper explores whether Internet browsing habits can accurately nowcast the lumber futures price, using Google Trends index data related to lumber prices as predictors. The study offers a fresh perspective on nowcasting the lumber price accurately. Employing both machine learning and deep learning methods shows that, despite the high predictive power of both, deep learning models on average capture trends better and provide more accurate predictions than machine learning models. The artificial neural network model is the most competitive, followed by the recurrent neural network model.
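A minimal sketch of a recurrent nowcaster of this kind, assuming weekly Google Trends indices as inputs: the window length, number of query series, and placeholder data below are illustrative, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

# Predict this week's lumber futures price from the last 8 weeks of
# Google Trends indices for 5 lumber-related queries (assumed shapes).
WINDOW, N_QUERIES = 8, 5
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_QUERIES)),
    tf.keras.layers.SimpleRNN(16),
    tf.keras.layers.Dense(1),  # nowcast of the futures price
])
model.compile(optimizer="adam", loss="mse")

# Placeholder arrays standing in for Trends indices and prices:
X = np.random.rand(300, WINDOW, N_QUERIES).astype("float32")
y = np.random.rand(300, 1).astype("float32")
model.fit(X[:240], y[:240], epochs=5, verbose=0)
print(model.evaluate(X[240:], y[240:], verbose=0))  # held-out MSE
```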


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Bo Liu ◽  
Qilin Wu ◽  
Yiwen Zhang ◽  
Qian Cao

Pruning is a method of compressing the size of a neural network model, which affects the accuracy and computing time of the model's predictions. This paper puts forward the hypothesis that the pruning proportion is positively correlated with the compression scale of the model, but not with the prediction accuracy or the calculation time. To test the hypothesis, a group of experiments is designed: MNIST is used as the data set to train a neural network model in TensorFlow, and pruning experiments are carried out on this model to investigate the relationship between pruning proportion and compression effect. For comparison, six different pruning proportions are set, and the experimental results confirm the above hypothesis.
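The pruning operation at the heart of such experiments can be sketched as simple magnitude pruning in NumPy: for a given proportion, the smallest-magnitude weights are set to zero. The sweep values below are assumptions standing in for the paper's six proportions.

```python
import numpy as np

def prune_by_magnitude(weights, proportion):
    """Zero out the given proportion of smallest-magnitude weights.

    weights: NumPy array of model parameters.
    proportion: fraction in [0, 1] of entries to set to zero.
    """
    flat = np.abs(weights).ravel()
    k = int(proportion * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Sweep several pruning proportions (the exact values are assumptions):
w = np.random.randn(784, 128)  # e.g. an MNIST input layer's weights
for p in (0.1, 0.3, 0.5, 0.7, 0.9, 0.95):
    sparsity = np.mean(prune_by_magnitude(w, p) == 0.0)
    print(f"proportion={p:.2f} -> sparsity={sparsity:.2f}")
```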


Author(s):  
A. Saravanan ◽  
J. Jerald ◽  
A. Delphin Carolina Rani

Abstract. The objective of the paper is to develop a new method to model the manufacturing cost–tolerance relation and to optimize the tolerance values along with their manufacturing cost. A cost–tolerance relation exhibits a complex nonlinear correlation. The properties of a neural network make it possible to model this complex correlation, and the genetic algorithm (GA) is integrated with the best neural network model to optimize the tolerance values. The proposed method used three types of neural network models (multilayer perceptron, backpropagation network, and radial basis function), developed separately for prismatic and rotational parts. For the construction of the network models, part size and tolerance values were used as input neurons, and the reference manufacturing cost was assigned as the output neuron. The qualitative production data set was gathered in a workshop and partitioned into three files for training, testing, and validation, respectively. The architecture of the network model was identified based on the best regression coefficient and root-mean-square error value. The best network model was integrated into the GA, and the role of the genetic operators was also studied. Finally, two case studies from the literature were demonstrated in order to validate the proposed method. The new neural-network-based methodology enables design and process planning engineers to make intelligent decisions irrespective of their experience.
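A skeletal illustration of the GA-plus-surrogate loop: a simple real-coded genetic algorithm searches for tolerance values whose cost is predicted by a surrogate model, subject to a stack-up constraint. Here a reciprocal cost function stands in for the trained neural network, and all constants (bounds, stack limit, population size) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_cost(tols):
    # Stand-in for the trained network's cost prediction: tighter
    # tolerances cost more (a common reciprocal relation).
    return np.sum(1.0 / tols, axis=-1)

STACK_LIMIT = 0.5          # illustrative assembly stack-up limit
BOUNDS = (0.01, 0.2)       # illustrative tolerance bounds
N_DIM, POP, GENS = 4, 60, 100

def fitness(pop):
    # Penalize designs whose tolerance stack-up exceeds the limit.
    penalty = 1e3 * np.maximum(pop.sum(axis=1) - STACK_LIMIT, 0.0)
    return surrogate_cost(pop) + penalty

pop = rng.uniform(*BOUNDS, size=(POP, N_DIM))
for _ in range(GENS):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:POP // 2]]            # selection
    pair = rng.integers(0, len(parents), (POP, 2))
    alpha = rng.random((POP, N_DIM))
    pop = (alpha * parents[pair[:, 0]]
           + (1 - alpha) * parents[pair[:, 1]])        # blend crossover
    pop += rng.normal(0, 0.005, pop.shape)             # mutation
    pop = np.clip(pop, *BOUNDS)

best = pop[np.argmin(fitness(pop))]
print("optimized tolerances:", best, "cost:", surrogate_cost(best))
```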


Mathematics ◽  
2019 ◽  
Vol 7 (10) ◽  
pp. 890
Author(s):  
Zhihao Zhang ◽  
Zhe Wu ◽  
David Rincon ◽  
Panagiotis Christofides

Machine learning has attracted extensive interest in the process engineering field, due to its capability to model complex nonlinear process behavior. This work presents a method for combining neural network models with first-principles models in real-time optimization (RTO) and model predictive control (MPC) and demonstrates its application to two chemical process examples. First, the proposed methodology, which integrates a neural network model and a first-principles model in the optimization problems of RTO and MPC, is discussed. Then, two chemical process examples are presented. In the first example, a continuous stirred tank reactor (CSTR) with a reversible exothermic reaction is studied. A feed-forward neural network model is used to approximate the nonlinear reaction rate and is combined with a first-principles model in RTO and MPC. The RTO is designed to find the optimal reactor operating condition balancing energy cost and reactant conversion, and the MPC is designed to drive the process to that optimal operating condition. A variation in energy price is introduced to demonstrate that the developed RTO scheme minimizes operating cost and yields closed-loop performance very close to that attained by RTO/MPC using the first-principles model. In the second example, a distillation column demonstrates an industrial application of machine learning to model nonlinearities in RTO. A feed-forward neural network is first built to obtain the phase equilibrium properties and then combined with a first-principles model in RTO, which is designed to maximize the operating profit and calculate optimal set-points for the controllers. A variation in feed concentration is introduced to demonstrate that the developed RTO scheme can increase operating profit under all considered conditions.
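The hybrid formulation for the CSTR example can be sketched as follows, with a placeholder callable standing in for the trained feed-forward network: the first-principles steady-state balance consumes the learned reaction rate, and scipy solves the resulting RTO problem. All constants and the Arrhenius-like stand-in rate are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

def nn_rate(cA, T):
    # Placeholder for the trained network's rate prediction,
    # e.g. model.predict([[cA, T]]); Arrhenius-like stand-in here.
    return 5e3 * np.exp(-4000.0 / T) * cA

F, V, cA_in = 1.0, 10.0, 2.0   # flow, reactor volume, feed concentration
ENERGY_PRICE = 0.005           # illustrative heating cost per kelvin

def steady_state_cA(T):
    # First-principles mass balance F*(cA_in - cA) = V * r(cA, T);
    # the stand-in rate is linear in cA, so k = nn_rate(1.0, T) and
    # the balance solves in closed form.
    k = nn_rate(1.0, T)
    return F * cA_in / (F + V * k)

def rto_objective(x):
    T = x[0]
    cA = steady_state_cA(T)
    conversion = (cA_in - cA) / cA_in
    # Trade reactant conversion off against heating cost.
    return -conversion + ENERGY_PRICE * (T - 300.0)

res = minimize(rto_objective, x0=[350.0], bounds=[(300.0, 450.0)])
print("optimal reactor temperature:", res.x[0])
```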

