Ship As a Wave Buoy: Estimating Relative Wave Direction From In-Service Ship Motion Measurements Using Machine Learning

Author(s):  
Bart Mak ◽  
Bülent Düz

Abstract: For operations at sea, it is important to have a good estimate of the current local sea state. Often, sea state information comes from wave buoys or weather forecasts. Sometimes wave radars are used. These sources are not always available or reliable. Being able to reliably use ship motions to estimate sea state characteristics reduces the dependency on external and/or expensive sources. In this paper, we present a method to estimate sea state characteristics from time series of 6-DOF ship motions using machine learning. The available data consists of ship motion and wave scanning radar measurements recorded over a period of two years on a frigate-type vessel. The research focused on estimating the relative wave direction, since this is the most difficult characteristic to estimate using traditional methods. Time series are well suited as input, since the phase differences between motion signals hold the information relevant for this case. This type of input data requires machine learning algorithms that can capture both the relation between the input channels and the time dependence. To this end, convolutional neural networks (CNN) and recurrent neural networks (RNN) are adopted in this study for multivariate time series regression. The results show that the estimation of the relative wave direction is acceptable, assuming that the data set is large enough and covers enough sea states. An investigation of the chronological properties of the data set showed that this is not yet the case. The paper includes a discussion of how to interpret the results and how to treat temporal data in a more general sense.
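
To make the idea concrete, the following is a minimal sketch of how a 1D CNN could map fixed-length windows of 6-DOF motion time series to a relative wave direction estimate. It is not the authors' architecture: the window length, layer sizes, and the sin/cos target encoding are illustrative assumptions.

```python
# Minimal sketch (not the paper's architecture): a 1D CNN mapping windows of
# 6-DOF ship motion time series to a relative wave direction estimate.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 512   # samples per window (assumed)
CHANNELS = 6   # surge, sway, heave, roll, pitch, yaw

def build_model():
    model = models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.Conv1D(32, kernel_size=9, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=9, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        # Predict sin/cos of the relative wave direction to avoid the 0/360 deg wrap-around.
        layers.Dense(2, activation="tanh"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Dummy arrays standing in for measured motions and radar-derived directions.
    X = np.random.randn(128, WINDOW, CHANNELS).astype("float32")
    theta = np.random.uniform(0, 2 * np.pi, size=128)
    y = np.stack([np.sin(theta), np.cos(theta)], axis=1).astype("float32")
    model = build_model()
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    pred = model.predict(X[:4])
    est_deg = np.degrees(np.arctan2(pred[:, 0], pred[:, 1])) % 360
    print(est_deg)
```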

2013 ◽  
Vol 712-715 ◽  
pp. 1550-1554
Author(s):  
Xin Dong Yang ◽  
Zuo Chao Wang ◽  
Ai Guo Shi ◽  
Bo Liu ◽  
Li Li

Wind and waves have a particularly significant influence on the battle effectiveness of naval vessels. It is therefore urgent to improve the Navy's ability to carry out combat service normally in severe sea states. This paper aims to obtain accurate predictions of ship motions in real waves with a prediction horizon on the order of seconds. According to the characteristics of ship motion, research on extremely short-time prediction of ship motion has been carried out based on multivariate chaotic time series analysis, and the effectiveness of ship motion prediction in real waves is greatly improved.
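
The paper's multivariate method is not reproduced here, but one common building block of chaotic time series prediction is delay-coordinate embedding followed by local nearest-neighbour prediction. The sketch below illustrates that idea for a single motion channel; the embedding dimension, delay, and neighbour count are illustrative assumptions only.

```python
# Hedged sketch of one standard chaotic time series technique (delay embedding plus
# nearest-neighbour local prediction); parameters are illustrative, not the paper's.
import numpy as np

def delay_embed(x, dim=5, tau=4):
    """Build a delay-coordinate embedding of a scalar motion signal."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

def predict_next(x, dim=5, tau=4, k=5):
    """Predict the next sample from the k nearest delay vectors in the history."""
    emb = delay_embed(x, dim, tau)
    query = emb[-1]
    history, targets = emb[:-1], x[(dim - 1) * tau + 1:]
    dists = np.linalg.norm(history - query, axis=1)
    idx = np.argsort(dists)[:k]
    return targets[idx].mean()

if __name__ == "__main__":
    t = np.arange(0, 200, 0.1)
    roll = np.sin(0.6 * t) + 0.3 * np.sin(1.7 * t)   # toy "roll" record
    print(predict_next(roll))
```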


2020 ◽  
Vol 73 ◽  
pp. 01004
Author(s):  
Tomàš Brabenec ◽  
Petr Šuleř

International trade is an important factor of economic growth. While foreign trade has existed throughout history, its political, economic and social importance has grown significantly in recent centuries. The objective of the contribution is to use machine learning forecasting to predict the balance of trade of the Czech Republic (CR) with the People's Republic of China (PRC) by analysing and forecasting the CR import from the PRC and the CR export to the PRC. The data set includes monthly trade balance data from January 2000 to June 2019. The contribution investigates and subsequently smooths two time series: the CR import from the PRC and the CR export to the PRC. From the perspective of the CR, the balance of trade between the two countries is negative over the entire monitored period. A total of 10,000 neural networks are generated, and the 5 neural structures with the best characteristics are retained. The neural networks are able to capture both the trend of the entire time series and its seasonal fluctuations, but it is necessary to work with a time series lag. The CR import from the PRC is growing and is expected to grow in the future. The CR export to the PRC is also growing and is expected to grow in the future, but its increase in absolute values will be slower than the increase of the CR import from the PRC.
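
As an illustration of the "work with a time series lag" point, the sketch below forecasts a monthly series with a small neural network trained on lagged values. The 12-month lag order and network size are assumptions, not the study's retained structures, and the generated series is a synthetic stand-in for the trade data.

```python
# Illustrative sketch only: lagged-feature neural network forecast of a monthly series.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags=12):
    """Turn a 1-D series into (lag-matrix, target) pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    months = np.arange(234)   # Jan 2000 .. Jun 2019
    imports = 100 + 0.5 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 234)

    X, y = make_lagged(imports, n_lags=12)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X[:-12], y[:-12])          # hold out the last year
    forecast = model.predict(X[-12:])
    print(np.round(forecast, 1))
```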


Author(s):  
R Vinayakumar ◽  
K.P. Soman ◽  
Prabaharan Poornachandran

This article describes how sequential data modeling is a relevant task in Cybersecurity. Sequences are attributed temporal characteristics either explicitly or implicitly. Recurrent neural networks (RNNs) are a subset of artificial neural networks (ANNs) which have emerged as a powerful, principled approach to learning dynamic temporal behaviors in large-scale sequence data of arbitrary length. Furthermore, stacked recurrent neural networks (S-RNNs) have the potential to learn complex temporal behaviors quickly, including sparse representations. To leverage this, the authors model network traffic as a time series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range, with a supervised learning method, using millions of known good and bad network connections. To find the best architecture, the authors complete a comprehensive review of various RNN architectures with their network parameters and network structures. As a test bed, they use the existing benchmark Defense Advanced Research Projects Agency (DARPA) / Knowledge Discovery and Data Mining (KDD) Cup '99 intrusion detection (ID) contest data set to show the efficacy of these various RNN architectures. All deep learning experiments are run for up to 1000 epochs with learning rates in the range [0.01, 0.5] on GPU-enabled TensorFlow, while the experiments with traditional machine learning algorithms are done using Scikit-learn. The family of RNN architectures achieved a low false positive rate in comparison to the traditional machine learning classifiers. The primary reason is that RNN architectures are able to store information for long-term dependencies over time lags and to adapt to successive connection sequence information. In addition, the effectiveness of the RNN architectures is shown for the UNSW-NB15 data set.
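
A minimal sketch of the setup is given below: an LSTM (one member of the RNN family evaluated in the article) classifying short sequences of connection feature vectors as normal or attack. The sequence length, layer sizes, and preprocessing are placeholder assumptions; the article itself evaluates many RNN variants beyond this.

```python
# Minimal sketch (assumptions throughout): a stacked LSTM classifying sequences of
# connection feature vectors as normal (0) vs. attack (1).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 10      # connections per sequence (assumed)
N_FEATURES = 41   # KDD Cup '99 connection features

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),                         # stacking two recurrent layers (S-RNN idea)
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy stand-ins for preprocessed, scaled connection records.
X = np.random.rand(256, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```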


Author(s):  
Son Nguyen ◽  
Anthony Park

This chapter compares the performance of multiple Big Data techniques for time series forecasting against traditional time series models on three Big Data sets. The traditional time series models, autoregressive integrated moving average (ARIMA) and exponential smoothing, are used as baseline models against Big Data analysis methods from machine learning. These Big Data techniques include regression trees, Support Vector Machines (SVM), Multilayer Perceptrons (MLP), Recurrent Neural Networks (RNN), and long short-term memory (LSTM) neural networks. Across the three time series data sets used (unemployment rate, bike rentals, and transportation), this study finds that LSTM neural networks perform the best. In conclusion, this study points out that Big Data machine learning algorithms applied to time series can outperform traditional time series models. The computations in this work are done in Python, one of the most popular open-source platforms for data science and Big Data analysis.
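
A hedged sketch of this kind of baseline-versus-deep-learning comparison on a single series is shown below: an ARIMA baseline next to a small LSTM trained on sliding windows, both scored by RMSE on a held-out year. The ARIMA order, window length, and network size are illustrative, not the chapter's tuned configurations, and a synthetic series stands in for the real data.

```python
# Sketch: ARIMA baseline vs. a small LSTM on the same synthetic monthly series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(1)
t = np.arange(300)
series = 5 + 0.02 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, 300)
train, test = series[:288], series[288:]

# Baseline: ARIMA(2, 1, 2) forecast of the last 12 points.
arima_fit = ARIMA(train, order=(2, 1, 2)).fit()
arima_pred = arima_fit.forecast(steps=12)

# LSTM on sliding windows of the same training data.
WINDOW = 24
X = np.array([train[i:i + WINDOW] for i in range(len(train) - WINDOW)])[..., None]
y = train[WINDOW:]
lstm = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y, epochs=20, batch_size=32, verbose=0)

# Recursive 12-step-ahead LSTM forecast.
window = list(train[-WINDOW:])
lstm_pred = []
for _ in range(12):
    nxt = float(lstm.predict(np.array(window)[None, :, None], verbose=0)[0, 0])
    lstm_pred.append(nxt)
    window = window[1:] + [nxt]

print("ARIMA RMSE:", np.sqrt(np.mean((arima_pred - test) ** 2)))
print("LSTM  RMSE:", np.sqrt(np.mean((np.array(lstm_pred) - test) ** 2)))
```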


2019 ◽  
Vol 14 ◽  
pp. 155892501988346 ◽  
Author(s):  
Mine Seçkin ◽  
Ahmet Çağdaş Seçkin ◽  
Aysun Coşkun

Although textile production is heavily automation-based, it is viewed as a virgin area with regard to Industry 4.0. When these developments are integrated into the textile sector, efficiency is expected to increase. When data mining and machine learning studies in the textile sector are examined, a lack of data sharing about the production process is apparent in enterprises because of commercial concerns and confidentiality. In this study, a method is presented for simulating a production process and performing regression on the resulting time series data with machine learning. The simulation was prepared for the annual production plan, and the corresponding faults were generated based on information and production data received from a textile glove enterprise. The data set was applied to various machine learning methods within the scope of supervised learning to compare their learning performance. The errors that occur in the production process were created using random parameters in the simulation. In order to verify the hypothesis that the errors can be forecast, various machine learning algorithms were trained using the data set in the form of time series. The variable showing the number of faulty products could be forecast very successfully. When forecasting the faulty product parameter, the random forest algorithm demonstrated the highest success. As these error values gave high accuracy even in a simulation that works with uniformly distributed random parameters, highly accurate forecasts can be made in real-life applications as well.
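
The sketch below illustrates the supervised framing with the algorithm the study reports as most successful: a random forest regressor forecasting the number of faulty products from lagged, simulated production features. The feature names, lag structure, and simulated signal are illustrative assumptions, not the study's simulation.

```python
# Sketch only: random forest regression on lagged simulated production features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n_weeks = 520
planned_output = rng.uniform(800, 1200, n_weeks)   # hypothetical production plan
machine_load = rng.uniform(0.5, 1.0, n_weeks)
faulty = 0.02 * planned_output * machine_load + rng.normal(0, 2, n_weeks)

# Supervised framing: predict this week's faults from last week's plan, load, and faults.
X = np.column_stack([planned_output[:-1], machine_load[:-1], faulty[:-1]])
y = faulty[1:]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
print("R^2 on held-out weeks:", round(r2_score(y_test, rf.predict(X_test)), 3))
```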


2018 ◽  
Vol 210 ◽  
pp. 04019 ◽  
Author(s):  
Hyontai SUG

Recent Go matches between humans and the artificial intelligence called AlphaGo showed the great advances in machine learning technologies. While AlphaGo was trained using real-world data, AlphaGo Zero was trained using massive amounts of random data, and the fact that AlphaGo Zero beat AlphaGo completely revealed that diversity and size of training data are important for better performance of machine learning algorithms, especially deep learning algorithms for neural networks. On the other hand, artificial neural networks and decision trees are widely accepted machine learning algorithms because of their robustness to errors and their comprehensibility, respectively. In this paper, in order to show empirically that data diversity and size are important factors for better performance of machine learning algorithms, these two representative algorithms are used in an experiment. A real-world data set called breast tissue was chosen because it consists of real numbers, which is a very good property for generating artificial random data. The result of the experiment confirmed that the diversity and size of data are very important factors for better performance.
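
A rough sketch of this kind of size experiment with the two representative algorithms is shown below: a decision tree and a small neural network are trained on increasingly large samples and evaluated on a fixed test set. A synthetic data set stands in for the UCI breast tissue data, so the numbers are purely illustrative.

```python
# Illustrative sketch: test accuracy of a decision tree and a small neural network
# as the training sample grows (synthetic stand-in for the breast tissue data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=9, n_informative=6,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (100, 500, 2000, len(X_train)):
    tree = DecisionTreeClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0).fit(X_train[:n], y_train[:n])
    print(n,
          round(accuracy_score(y_test, tree.predict(X_test)), 3),
          round(accuracy_score(y_test, net.predict(X_test)), 3))
```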


Author(s):  
Guilherme Loriato Potratz ◽  
Smith Washington Arauco Canchumuni ◽  
Jose David Bermudez Castro ◽  
Júlia Potratz ◽  
Marco Aurélio C. Pacheco

One of the critical processes in the exploration of hydrocarbons is the identification and prediction of the lithofacies that constitute the reservoir. One of the cheapest and most efficient ways to carry out that process is the interpretation of well log data, which are often obtained continuously and in the majority of drilled wells. The main methodologies used to correlate log data with data obtained from well cores are based on statistical analyses, machine learning models and artificial neural networks. This study aims to test a dimensionality reduction algorithm together with an unsupervised classification method for predicting lithofacies automatically. The performance of the presented methodology was compared to predictions made with artificial neural networks. We used t-Distributed Stochastic Neighbor Embedding (t-SNE) to map the well logging data into a smaller feature space; facies are then predicted using a K-nearest neighbors (KNN) algorithm. The method is assessed on the public data set of the Hugoton and Panoma fields. Prediction of facies with traditional artificial neural networks obtained an accuracy of 69%, whereas facies predicted with the t-SNE + KNN algorithm reached an accuracy of 79%. Considering the nature of the data, which have high dimensionality and are not linearly correlated, the efficiency of t-SNE + KNN can be explained by the ability of the algorithm to identify hidden patterns across fuzzy boundaries in the data set. It is important to stress that the application of machine learning algorithms offers relevant benefits to the hydrocarbon exploration sector, such as identifying hidden patterns in high-dimensional data sets, searching for complex and non-linear relationships, and avoiding the need for a preliminary definition of mathematical relations among the model's input data.
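
The pipeline idea can be sketched as follows: embed the well-log feature vectors with t-SNE and assign facies with k-nearest neighbors inside the embedding. Because scikit-learn's TSNE cannot embed unseen points, the sketch embeds labelled and unlabelled samples together; the feature counts, sample sizes, and random data are placeholders, not the Hugoton/Panoma data.

```python
# Rough sketch of the t-SNE + KNN idea on placeholder well-log feature vectors.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
n_labelled, n_unlabelled, n_logs = 300, 100, 7        # e.g. GR, resistivity, porosity...
X = rng.normal(size=(n_labelled + n_unlabelled, n_logs))
facies_labels = rng.integers(0, 9, size=n_labelled)   # 9 facies classes as placeholders

# Map all samples into a 2-D feature space.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Classify the unlabelled points from their labelled neighbours in the embedding.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(embedding[:n_labelled], facies_labels)
predicted_facies = knn.predict(embedding[n_labelled:])
print(predicted_facies[:10])
```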


Landslides can be devastating to human life and property. The increasing rate of human settlement in mountainous areas has raised safety concerns. Landslides have caused economic losses of between 1% and 2% of GDP in many developing countries. In this study, we discuss a deep learning approach to detect landslides. Convolutional Neural Networks are used for feature extraction in our proposed model. As no exact and precise data set was available for feature extraction, a new data set was built for testing the model. We tested and compared our proposed model with other machine learning algorithms such as Logistic Regression, Random Forest, AdaBoost, K-Nearest Neighbors and Support Vector Machine. Our proposed deep learning model produces a classification accuracy of 96.90%, outperforming the classical machine learning algorithms.
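
For orientation, a small 2-D CNN of the kind used for such image-based detection is sketched below. It is not the paper's architecture: the patch size, channel count, and layer widths are illustrative assumptions, and the random arrays stand in for the custom-built data set.

```python
# Minimal sketch: a small 2-D CNN classifying image patches as landslide / no landslide.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

PATCH, CHANNELS = 64, 3   # assumed RGB image patches

model = models.Sequential([
    layers.Input(shape=(PATCH, PATCH, CHANNELS)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # landslide (1) vs. no landslide (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy patches standing in for the custom-built data set described above.
X = np.random.rand(64, PATCH, PATCH, CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(X, y, epochs=2, verbose=0)
```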


Author(s):  
Supun Nakandala ◽  
Marta M. Jankowska ◽  
Fatima Tuz-Zahra ◽  
John Bellettiere ◽  
Jordan A. Carlson ◽  
...  

Background: Machine learning has been used for classification of physical behavior bouts from hip-worn accelerometers; however, this research has been limited due to the challenges of directly observing and coding human behavior “in the wild.” Deep learning algorithms, such as convolutional neural networks (CNNs), may offer better representation of data than other machine learning algorithms without the need for engineered features and may be better suited to dealing with free-living data. The purpose of this study was to develop a modeling pipeline for evaluation of a CNN model on a free-living data set and compare CNN inputs and results with the commonly used machine learning random forest and logistic regression algorithms. Method: Twenty-eight free-living women wore an ActiGraph GT3X+ accelerometer on their right hip for 7 days. A concurrently worn thigh-mounted activPAL device captured ground truth activity labels. The authors evaluated logistic regression, random forest, and CNN models for classifying sitting, standing, and stepping bouts. The authors also assessed the benefit of performing feature engineering for this task. Results: The CNN classifier performed best (average balanced accuracy for bout classification of sitting, standing, and stepping was 84%) compared with the other methods (56% for logistic regression and 76% for random forest), even without performing any feature engineering. Conclusion: Using the recent advancements in deep neural networks, the authors showed that a CNN model can outperform other methods even without feature engineering. This has important implications for both the model’s ability to deal with the complexity of free-living data and its potential transferability to new populations.
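
A hedged sketch of the CNN side of this comparison is shown below: a 1-D CNN that takes raw triaxial hip-accelerometer windows, classifies each window as sitting, standing, or stepping, and is scored with balanced accuracy. The window length, sampling assumptions, and layer sizes are illustrative only, and random arrays stand in for the ActiGraph data with activPAL-derived labels.

```python
# Sketch: 1-D CNN classifying raw accelerometer windows into sitting/standing/stepping.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import balanced_accuracy_score

WINDOW, AXES, CLASSES = 300, 3, 3   # e.g. 10 s at 30 Hz (assumed)

model = models.Sequential([
    layers.Input(shape=(WINDOW, AXES)),
    layers.Conv1D(32, 7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 7, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy windows standing in for accelerometer data with ground-truth labels.
X = np.random.randn(200, WINDOW, AXES).astype("float32")
y = np.random.randint(0, CLASSES, size=200)
model.fit(X[:160], y[:160], epochs=2, verbose=0)
pred = model.predict(X[160:], verbose=0).argmax(axis=1)
print("Balanced accuracy:", round(balanced_accuracy_score(y[160:], pred), 3))
```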

