Leeway Prediction of Oceanic Disastrous Target via Support Vector Regression

Author(s):
Nipon Theera-Umpon
Udomsak Boonprasert

This paper demonstrates an application of the support vector machine (SVM) to oceanic disaster search and rescue operations. Support vector regression (SVR) for system identification of a nonlinear black-box model is utilized in this research. The SVR-based ocean model helps the search and rescue unit by predicting the disastrous target's position at any given time instant. A predicted location closer to the actual location shortens the search time and minimizes losses. One of the most popular ocean models, namely the Princeton ocean model, is applied to provide the ground truth of the target leeway. The experimental results on simulated data show that the proposed SVR-based ocean model provides a good prediction compared to the Princeton ocean model. Moreover, the experimental results on real data collected by the Royal Thai Navy show that the proposed model can be used as an auxiliary tool in search and rescue operations.
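A black-box drift model of this kind can be sketched with an off-the-shelf SVR. This is a hedged illustration only: the feature names (elapsed time, wind, current) and the synthetic leeway law are invented stand-ins, not the paper's model; real inputs would come from an ocean model such as POM.

```python
# Illustrative sketch: SVR as a nonlinear black-box drift model.
# Features and drift dynamics are synthetic assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Inputs: elapsed time (h), wind speed (m/s), surface current (m/s).
X = rng.uniform([0, 0, 0], [24, 15, 2], size=(200, 3))
# Target: along-track displacement (km), a simple synthetic leeway law.
y = 0.03 * X[:, 0] * X[:, 1] + 1.2 * X[:, 0] * X[:, 2]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(X, y)

# Predict the target's displacement 6 hours after the incident.
pred = model.predict([[6.0, 10.0, 0.5]])[0]
```

Once trained on hindcast drift trajectories, such a model can return a position estimate for any requested time instant without re-running a full hydrodynamic simulation.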

2014
Vol 2014
pp. 1-11
Author(s):
Francesca Pizzorni Ferrarese
Flavio Simonetti
Roberto Israel Foroni
Gloria Menegaz

Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists in predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow benchmarking the performance with respect to the ground truth, real data enable assessing the robustness of the methodology in real contexts, as well as determining the suitability of synthetic data for the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathologic ones. Results show that the proposed model not only provides good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios.


2021
Vol 13 (5)
pp. 2426
Author(s):  
David Bienvenido-Huertas
Jesús A. Pulido-Arcas
Carlos Rubio-Bellido
Alexis Pérez-Fargallo

In recent times, studies about the accuracy of algorithms to predict different aspects of energy use in the building sector have flourished, with energy poverty being one of the issues that has received considerable critical attention. Previous studies in this field have characterized it using different indicators, but they have failed to develop instruments to predict the risk of low-income households falling into energy poverty. This research explores how accurately six regression algorithms can forecast the risk of energy poverty by means of the fuel poverty potential risk index. Using data from the national survey of socioeconomic conditions of Chilean households and generating data for different typologies of social dwellings (e.g., form ratio or roof surface area), this study simulated 38,880 cases and compared the accuracy of six algorithms. Multilayer perceptron, M5P and support vector regression delivered the best accuracy, with correlation coefficients over 99.5%. In terms of computing time, M5P outperforms the rest. Although these results suggest that energy poverty can be accurately predicted using simulated data, it remains necessary to test the algorithms against real data. These results can be useful in devising policies to tackle energy poverty in advance.
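A comparison of this kind, scoring regressors on simulated data by correlation coefficient, can be sketched as follows. The feature set and target formula here are invented placeholders, not the study's fuel poverty index, and only two of the six algorithms are shown.

```python
# Illustrative sketch: comparing regression algorithms on simulated data
# by correlation coefficient. Features and target are toy stand-ins.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 4))          # e.g., form ratio, roof area, ...
y = 3 * X[:, 0] + X[:, 1] ** 2 - 2 * X[:, 2] + 0.05 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "SVR": SVR(C=10.0),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
scores = {name: np.corrcoef(m.fit(X_tr, y_tr).predict(X_te), y_te)[0, 1]
          for name, m in models.items()}
```

Timing each `fit` call (e.g., with `time.perf_counter`) would reproduce the study's second comparison axis, computing cost.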


2020
Author(s):
Yoonjee Kang
Denis Thieffry
Laura Cantini

Abstract
Networks are powerful tools to represent and investigate biological systems. The development of algorithms inferring regulatory interactions from functional genomics data has been an active area of research. With the advent of single-cell RNA-seq (scRNA-seq) data, numerous methods specifically designed to take advantage of single-cell datasets have been proposed. However, published benchmarks on single-cell network inference are mostly based on simulated data. Once applied to real data, these benchmarks take into account only a small set of genes and only compare the inferred networks with an imposed ground truth.

Here, we benchmark four single-cell network inference methods based on their reproducibility, i.e. their ability to infer similar networks when applied to two independent datasets for the same biological condition. We tested each of these methods on real data from three biological conditions: human retina, T-cells in colorectal cancer, and human hematopoiesis.

GENIE3 proves to be the most reproducible algorithm, independently of the single-cell sequencing platform, the cell type annotation system, the number of cells constituting the dataset, or the thresholding applied to the links of the inferred networks. To ensure the reproducibility of this benchmark study and to ease its extension, we implemented all the analyses in scNET, a Jupyter notebook available at https://github.com/ComputationalSystemsBiology/scNET.
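The reproducibility criterion described above can be quantified as the overlap between the edge sets inferred from two independent datasets, for instance with a Jaccard index. The edge lists below are toy placeholders, not the benchmark's actual networks.

```python
# Illustrative sketch: reproducibility as edge-set overlap between two
# inferred networks. Gene pairs are hypothetical examples.
def jaccard(edges_a, edges_b):
    """Jaccard index between two sets of (regulator, target) edges."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a | b else 1.0

net_rep1 = [("GATA1", "KLF1"), ("TAL1", "GATA1"), ("SPI1", "CEBPA")]
net_rep2 = [("GATA1", "KLF1"), ("TAL1", "GATA1"), ("RUNX1", "SPI1")]
score = jaccard(net_rep1, net_rep2)  # 2 shared edges of 4 distinct -> 0.5
```

In practice such a score would be computed after thresholding both inferred networks to their top-ranked links, which is why the benchmark checks robustness to the threshold choice.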


2020
Vol 10 (19)
pp. 6648
Author(s):  
Gabriel Astudillo
Raúl Carrasco
Christian Fernández-Campusano
Máx Chacón

Predicting the copper price is essential for making decisions that affect companies and governments dependent on the copper mining industry. Copper prices follow a time series that is nonlinear and non-stationary, and that has periods that change as a result of potential growth, cyclical fluctuation and errors. Sometimes, the trend and cyclical components together are referred to as a trend-cycle. In order to make predictions, it is necessary to consider the different characteristics of a trend-cycle. In this paper, we study a copper price prediction method using support vector regression (SVR). This work explores the potential of SVR with external recurrences to make predictions 5, 10, 15, 20 and 30 days into the future for the copper closing price at the London Metal Exchange. The best model for each forecast interval is selected using a grid search and balanced cross-validation. In experiments on real data sets, our results indicate that the parameters (C, ε, γ) of the support vector regression model do not differ between the different prediction intervals. Additionally, the number of preceding values used to make the estimates does not vary with the predicted interval. Results show that the support vector regression model has a lower prediction error and is more robust. Our results show that the presented model is able to predict copper price volatility close to reality, as the root-mean-square error (RMSE) was equal to or less than 2.2% for prediction periods of 5 and 10 days.
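The setup described, SVR fed with external recurrences (lagged closing prices) and tuned by a grid search over (C, ε, γ), can be sketched as below. The price series is synthetic and the lag count, grid and horizon are illustrative assumptions; the paper's balanced cross-validation is approximated here with an ordinary time-series split.

```python
# Illustrative sketch: 5-day-ahead SVR forecast from 5 lagged prices,
# with (C, epsilon, gamma) chosen by grid search. Data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(2)
t = np.arange(400)
price = 6000 + 300 * np.sin(t / 30) + rng.normal(0, 5, size=400)

p, horizon = 5, 5                       # 5 lags, 5-day-ahead forecast
X = np.column_stack([price[i:len(price) - p - horizon + i + 1]
                     for i in range(p)])
y = price[p + horizon - 1:]

grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [10, 100, 1000], "epsilon": [0.1, 1.0], "gamma": ["scale", 0.1]},
    cv=TimeSeriesSplit(n_splits=4),
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
```

Changing `horizon` to 10, 15, 20 or 30 reuses the same pipeline for the other forecast intervals studied in the paper.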


2021
Vol 19 (3)
pp. 85-109
Author(s):  
Jingying Lin
Caio Almeida

Pricing American options accurately is of great theoretical and practical importance. We propose using machine learning methods, including support vector regression and classification and regression trees. These more advanced techniques extend the traditional Longstaff-Schwartz approach, replacing the OLS regression step in the Monte Carlo simulation. We apply our approach to both simulated data and market data from the S&P 500 Index option market in 2019. Our results suggest that support vector regression can be an alternative to the existing OLS-based pricing method, requiring fewer simulations and reducing the vulnerability to misspecification of basis functions.
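The substitution described, replacing the OLS step of Longstaff-Schwartz with SVR, can be sketched with a minimal Monte Carlo pricer for an American put. The market parameters, path counts and SVR settings below are illustrative assumptions, not the paper's calibration.

```python
# Illustrative sketch: Longstaff-Schwartz with SVR (instead of OLS on
# basis functions) estimating the continuation value. Parameters are toy.
import numpy as np
from sklearn.svm import SVR

def american_put_lsm_svr(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                         T=1.0, steps=50, n_paths=2000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    # Simulate geometric Brownian motion paths under the risk-neutral measure.
    z = rng.standard_normal((n_paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])

    cash = np.maximum(K - S[:, -1], 0.0)           # payoff at maturity
    for t in range(steps - 1, 0, -1):
        cash *= np.exp(-r * dt)                    # discount one step back
        itm = K - S[:, t] > 0                      # in-the-money paths
        if itm.sum() > 10:
            # SVR estimate of the continuation value (replaces OLS).
            cont = SVR(C=10.0).fit(S[itm, t:t + 1], cash[itm]) \
                              .predict(S[itm, t:t + 1])
            exercise = np.maximum(K - S[itm, t], 0.0) > cont
            idx = np.where(itm)[0][exercise]
            cash[idx] = K - S[idx, t]              # exercise now
    return float(np.exp(-r * dt) * cash.mean())

price = american_put_lsm_svr()
```

The only structural change from the classical algorithm is the regression call: fewer, more flexible fits replace the polynomial basis regression, which is what makes the approach less sensitive to basis misspecification.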


Entropy
2020
Vol 22 (11)
pp. 1312
Author(s):  
Sangyeol Lee
Chang Kyeom Kim
Dongwuk Kim

This paper considers monitoring for anomalies in sequentially observed time series with heteroscedastic conditional volatilities, based on the cumulative sum (CUSUM) method combined with support vector regression (SVR). The proposed online monitoring process is designed to detect a significant change in the volatility of financial time series. The tuning parameters are chosen optimally using particle swarm optimization (PSO). We conduct Monte Carlo simulation experiments to illustrate the validity of the proposed method. A real data analysis with the S&P 500 index, the Korea Composite Stock Price Index (KOSPI), and the stock price of Microsoft Corporation is presented to demonstrate the versatility of our model.
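The CUSUM side of such a monitor can be sketched in a few lines. Here it is applied to squared returns to flag a volatility increase; the reference value, drift and threshold are illustrative choices, not the paper's PSO-tuned parameters, and the SVR volatility model is omitted.

```python
# Illustrative sketch: one-sided CUSUM monitor on squared returns.
# Tuning constants are assumptions; the paper selects them via PSO.
import numpy as np

def cusum_alarm(x, target, drift, threshold):
    """Return the first index where the CUSUM statistic exceeds threshold."""
    s = 0.0
    for i, v in enumerate(x):
        s = max(0.0, s + (v - target - drift))
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(3)
# Volatility doubles at t = 300 (std 1 -> 2, i.e. variance 1 -> 4).
returns = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 2, 200)])
alarm = cusum_alarm(returns**2, target=1.0, drift=0.5, threshold=25.0)
```

In the paper's setting the raw squared returns would be replaced by quantities standardized with SVR-estimated conditional volatilities, so that the monitor reacts to genuine regime changes rather than ordinary heteroscedasticity.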


Entropy
2020
Vol 22 (5)
pp. 578
Author(s):  
Sangyeol Lee
Chang Kyeom Kim
Sangjo Lee

This study considers the problem of detecting a change in the conditional variance of time series with time-varying volatilities, based on the cumulative sum (CUSUM) of squares test using the residuals from support vector regression (SVR)-generalized autoregressive conditional heteroscedastic (GARCH) models. To compute the residuals, we first fit SVR-GARCH models with different tuning parameters on a training portion of the time series. We then select the best SVR-GARCH model, with the optimal tuning parameters, on a validation portion. Subsequently, based on the selected model, we obtain the residuals, as well as the estimates of the conditional volatility, and employ these to construct the residual CUSUM of squares test. We conduct Monte Carlo simulation experiments to illustrate its validity with various linear and nonlinear GARCH models. A real data analysis with the S&P 500 index, the Korea Composite Stock Price Index (KOSPI), and the Korean won/U.S. dollar (KRW/USD) exchange rate datasets is provided to exhibit its scope of application.


2018
Vol 14 (04)
pp. 137
Author(s):  
Wei Zhai

This paper aims to present a reliable prediction method for oceanographic trends. To this end, an online monitoring scheme was prepared to collect accurate oceanographic hydrological data based on a wireless sensor network (WSN) and computer technology. The data collected by the WSN were then processed by a support vector regression algorithm. To obtain the optimal parameters of the algorithm, particle swarm optimization was introduced to search for the global optimal solution through coopetition between the particles. After that, an oceanographic hydrological data collection and observation system was created based on the hydrological situation of New York harbour. Then, traditional support vector regression and the proposed method were applied to predict oceanographic trends based on water temperature, salinity and other indices. The results show that the proposed algorithm enhanced the data utilization rate of the WSN and achieved good prediction accuracy. The research provides important insights into the application of advanced technology in oceanographic forecasting.
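A bare-bones version of the PSO-tuned SVR described above can be sketched as follows: each particle encodes a candidate (C, γ) pair and is scored by cross-validated error. The sensor features, swarm size, bounds and coefficients are all illustrative assumptions.

```python
# Illustrative sketch: particle swarm search over SVR's (C, gamma) on
# synthetic sensor-style data. All constants are assumed, not the paper's.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(150, 2))            # e.g., temperature, salinity
y = np.sin(3 * X[:, 0]) + X[:, 1] + 0.05 * rng.normal(size=150)

def fitness(log_c, log_g):
    svr = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(svr, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

n, iters = 8, 15
pos = rng.uniform([-1, -2], [3, 1], size=(n, 2))    # log10 C, log10 gamma
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n, 1))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -2], [3, 1])
    f = np.array([fitness(*p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()
best_C, best_gamma = 10.0 ** gbest
```

The competition-and-cooperation ("coopetition") the abstract refers to is visible in the velocity update: each particle balances its own best position against the swarm's.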


2000
Vol 12 (10)
pp. 2385-2404
Author(s):  
G. Baudat
F. Anouar

We present a new method, which we call generalized discriminant analysis (GDA), to deal with nonlinear discriminant analysis using a kernel function operator. The underlying theory is close to that of support vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into a high-dimensional feature space. In the transformed space, linear properties make it easy to extend and generalize classical linear discriminant analysis (LDA) to nonlinear discriminant analysis. The formulation is expressed as the resolution of an eigenvalue problem. By using different kernels, one can cover a wide class of nonlinearities. For both simulated data and alternate kernels, we give classification results, as well as the shape of the decision function. The results are confirmed on real data for a seed classification task.
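The flavor of the method can be conveyed with a two-class kernel Fisher discriminant, a close relative of GDA: the data are mapped through an RBF kernel and the discriminant direction is obtained in the dual from a (regularized) linear system rather than the paper's full eigendecomposition. The data and kernel width below are toy assumptions.

```python
# Illustrative sketch: two-class kernel discriminant analysis on data
# that are not linearly separable in input space. Constants are toy.
import numpy as np

rng = np.random.default_rng(5)

def ring(radius, n):
    """Noisy points on a circle of the given radius."""
    a = rng.uniform(0, 2 * np.pi, n)
    return (np.column_stack([radius * np.cos(a), radius * np.sin(a)])
            + 0.1 * rng.normal(size=(n, 2)))

X = np.vstack([ring(1.0, 50), ring(3.0, 50)])
labels = np.array([0] * 50 + [1] * 50)

gamma = 0.5
K = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))  # kernel matrix
m0 = K[:, labels == 0].mean(axis=1)      # class means in feature space (dual)
m1 = K[:, labels == 1].mean(axis=1)
# Within-class scatter in the dual, with ridge regularization.
N = sum(K[:, labels == c] @ (np.eye(50) - np.full((50, 50), 1 / 50))
        @ K[:, labels == c].T for c in (0, 1)) + 1e-3 * np.eye(100)
alpha = np.linalg.solve(N, m1 - m0)      # discriminant direction (dual)

proj = K @ alpha                         # 1-D projections of training points
threshold = (proj[labels == 0].mean() + proj[labels == 1].mean()) / 2
pred = (proj > threshold).astype(int)
accuracy = (pred == labels).mean()
```

In the transformed space the two rings become linearly separable, which is exactly the property GDA exploits to carry classical LDA over to nonlinear problems.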


Author(s):  
M Perzyk ◽  
R Biernacki ◽  
J Kozlowski

Determination of the most significant manufacturing process parameters using collected past data can be very helpful in solving important industrial problems, such as the detection of root causes of deteriorating product quality, the selection of the most efficient parameters to control the process, and the prediction of breakdowns of machines, equipment, etc. A methodology for determining the relative significances of process variables, and possible interactions between them, based on interrogations of generalized regression models, is proposed and tested. The performance of several types of data mining tools, such as artificial neural networks, support vector machines, regression trees, classification trees, and a naïve Bayesian classifier, is compared. Also, some simple non-parametric statistical methods, based on an analysis of variance (ANOVA) and contingency tables, are evaluated for comparison purposes. The tests were performed using simulated data sets with assumed hidden relationships, as well as on real data collected in the foundry industry. It was found that the significance and interaction factors obtained from regression models, and in particular from neural networks, perform satisfactorily, while the other methods appeared to be less accurate and/or less reliable.
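Interrogating a fitted model for variable significance can be sketched with permutation importance, one common form of such interrogation (the study's own procedure may differ). The simulated data below have an assumed hidden relationship in which one variable matters strongly and another not at all.

```python
# Illustrative sketch: ranking process variables by permutation importance
# of a fitted model, on simulated data with a known hidden relationship.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.uniform(size=(500, 3))
y = 4 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=500)   # x2 is irrelevant

model = SVR(C=10.0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = result.importances_mean.argsort()[::-1]        # most significant first
```

Because the relevance of each variable is known by construction, such simulated sets let one verify that the interrogation method recovers the true significance ordering before trusting it on real foundry data.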

