Underwater sound speed inversion by joint artificial neural network and ray theory

Author(s): Wei Huang, Deshi Li, Peng Jiang
2021 ◽ Vol 9 (5) ◽ pp. 488
Author(s): Jin Huang, Yu Luo, Jian Shi, Xin Ma, Qian-Qian Li, ...

Ocean sound speed is an essential foundation for marine scientific research and marine engineering applications. In this article, a model based on a comprehensively optimized back-propagation (BP) artificial neural network is developed. The Levenberg–Marquardt (LM) algorithm is used to optimize the network, and a momentum term, normalization, and early termination are used to predict the marine sound speed profile with high precision. The sound speed profile is described by five indicators: date, time, latitude, longitude, and depth. The model uses data from the CTD observation dataset of scientific investigations over the South China Sea (2009–2012) (108°–120°E, 6°–8°N), which comprises comprehensive scientific investigation data from four voyages. The feasibility of modeling the sound speed field in the South China Sea is investigated. The proposed model incorporates the momentum term, normalization, and early termination into a traditional BP artificial neural network structure, mitigating overtraining and the difficulty of determining the BP network parameters. Combined with the LM algorithm, this fast modeling method for the sound field effectively achieves the precision required for sound speed prediction. Through prediction and verification on the 2009–2012 data, the newly proposed optimized BP network model is shown to dramatically reduce training time and improve precision compared with the traditional network model: the root mean squared error decreased from 1.7903 m/s to 0.95732 m/s, and the training time decreased from 612.43 s to 4.231 s. Finally, sound ray tracing simulations confirm that the model meets the accuracy requirements of acoustic sounding and verify its feasibility for real-time prediction of the vertical sound speed in saltwater bodies.
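As a concrete illustration of the training scheme the abstract describes, the sketch below fits a small feed-forward network with the Levenberg–Marquardt algorithm by treating training as a nonlinear least-squares problem. This is a minimal sketch, not the authors' code: the synthetic stand-in data, the five-input/eight-hidden-unit network size, and the use of scipy.optimize.least_squares(method="lm") are all assumptions, and the paper's momentum and early-termination steps (which belong to gradient-descent BP training) are omitted.

```python
# Minimal sketch (assumptions noted above): an LM-fitted feed-forward net.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic stand-in for CTD samples: five normalized indicators
# (date, time, latitude, longitude, depth) -> sound speed in m/s.
X = rng.uniform(-1.0, 1.0, size=(200, 5))
y = 1500.0 + 20.0 * np.tanh(X @ rng.normal(size=5)) + rng.normal(0.0, 0.1, 200)

n_in, n_hid = 5, 8  # one small hidden layer (illustrative choice)

def unpack(theta):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid]; i += n_hid
    b2 = theta[i]
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def residuals(theta):
    # Work on normalized residuals, mirroring the paper's normalization step.
    return (forward(theta, X) - y) / y.std()

theta0 = rng.normal(scale=0.1, size=n_in * n_hid + 2 * n_hid + 1)
fit = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt

rmse = np.sqrt(np.mean((forward(fit.x, X) - y) ** 2))
print(f"training RMSE: {rmse:.4f} m/s")
```

LM is attractive here for the reason the abstract reports: on small networks it typically converges in far fewer iterations than plain gradient-descent BP, which is consistent with the reported drop in training time from 612.43 s to 4.231 s.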


2000 ◽ Vol 25 (4) ◽ pp. 325-325
Author(s): J.L.N. Roodenburg, H.J. Van Staveren, N.L.P. Van Veen, O.C. Speelman, J.M. Nauta, ...

2004 ◽ Vol 171 (4S) ◽ pp. 502-503
Author(s): Mohamed A. Gomha, Khaled Z. Sheir, Saeed Showky, Khaled Madbouly, Emad Elsobky, ...

1998 ◽ Vol 49 (7) ◽ pp. 717-722
Author(s): M C M de Carvalho, M S Dougherty, A S Fowkes, M R Wardman

2020 ◽ Vol 39 (6) ◽ pp. 8463-8475
Author(s): Palanivel Srinivasan, Manivannan Doraipandian

Rare event detection is performed using spatial-domain and frequency-domain procedures. Footage from now-omnipresent surveillance cameras is growing exponentially over time, and monitoring every event manually is impractical and time-consuming, so an automated rare event detection mechanism is required to make the process manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events in a video stream, and an Artificial Neural Network (ANN) is trained on the CFG. A set of dedicated algorithms performs frame splitting, edge detection, and background subtraction, and converts the processed data into the CFG. The developed CFG is then converted into nodes and edges to form a graph, which is fed to the input layer of an ANN to classify events as normal or rare; the graph derived from the CFG on the input video stream is used to train the ANN. The performance of the resulting Artificial Neural Network Based Context-Free Grammar – Rare Event Detection (ACFG-RED) approach is compared with existing techniques using metrics such as accuracy, precision, sensitivity, recall, average processing time, and average processing power. The ANN-CFG model yields better values for these metrics than the other techniques, and the developed model provides a better solution for detecting rare events in video streams.
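To make the frame-level preprocessing concrete, the sketch below implements the three stages the abstract names (frame splitting, edge detection, background subtraction) with OpenCV and reduces each frame to a small numeric descriptor. It is a minimal sketch, not the published ACFG-RED code: the input file name, the MOG2 background subtractor, the Canny thresholds, and the two-number descriptor are illustrative assumptions, and the CFG, graph, and ANN stages are not reproduced.

```python
# Minimal sketch (assumptions noted above): frame split, edge detection,
# and background subtraction for a surveillance stream, using OpenCV.
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")      # hypothetical input file
backsub = cv2.createBackgroundSubtractorMOG2()  # background subtraction

features = []
while True:
    ok, frame = cap.read()                      # frame split
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)           # edge detection
    fgmask = backsub.apply(frame)               # moving-foreground mask
    # Crude per-frame descriptor: edge density and foreground ratio.
    # The paper instead maps such symbols into a CFG and then a graph;
    # this sketch stops at numeric features a classifier could consume.
    features.append([edges.mean() / 255.0, (fgmask > 0).mean()])
cap.release()

features = np.asarray(features)
print(features.shape)  # (n_frames, 2)
```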

