A Simulation Method for the Design of a 3-D Acoustical Imaging System for Sub-Bottom Investigation

Author(s): M. Palmese, A. Trucco
2014, Vol 39 (4), pp. 620-629
Author(s): Yeqiang Han, Xiang Tian, Fan Zhou, Rongxin Jiang, Yaowu Chen
2021, Vol 8
Author(s): Woen-Sug Choi, Derek R. Olson, Duane Davis, Mabel Zhang, Andy Racson, ...

One of the key distinguishing aspects of underwater manipulation tasks is the perception challenge posed by the ocean environment, including turbidity, backscatter, and lighting effects. Consequently, underwater perception often relies on sonar-based measurements to estimate the vehicle’s state and surroundings, either standalone or in concert with other sensing modalities, to support the perception needed to plan and control manipulation tasks. Simulation of the multibeam echosounder, while not a substitute for in-water testing, is a critical capability for developing manipulation strategies in the complex and variable ocean environment. Although several approaches exist in the literature for simulating synthetic sonar images, the methods used in the robotics community typically rely on image processing and video rendering software to meet real-time execution requirements. Beyond lacking a physics-based model of the interaction between sound and the scene of interest, these rendered sonar images omit several basic properties of sonar imagery, notably the effects of the coherent imaging process and the coherent speckle that distort object geometry in the sonar image. To address this deficiency, we present a physics-based multibeam echosounder simulation method that captures these fundamental aspects of sonar perception. A point-based scattering model is implemented to calculate the acoustic interaction between the target and the environment. This is a simplified representation of target scattering, but it can produce realistic coherent image speckle and the correct point spread function. The results demonstrate that the multibeam echosounder simulator generates qualitatively realistic images with high efficiency, providing both the sonar image and the physical time-series signal data. This synthetic sonar data is a key enabler for developing, testing, and evaluating autonomous underwater manipulation strategies that use sonar as a component of perception.
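
As a rough illustration of the point-based scattering idea described in the abstract above, the sketch below sums the complex echoes of randomly placed point scatterers into a single receive time series and matched-filters it; the coherent superposition is what produces speckle in the envelope. All sonar parameters (frequency, sampling rate, scatterer count, ranges) are illustrative assumptions, not values from the paper, and Python/NumPy is assumed.

```python
# Minimal sketch of a point-based scattering model: a monostatic sonar
# receives the coherent sum of echoes from randomly placed point scatterers.
# Assumptions (not from the paper): free-field spherical spreading, isotropic
# scatterers, a CW pulse, and the illustrative parameter values below.
import numpy as np

rng = np.random.default_rng(0)

c = 1500.0          # sound speed in water (m/s)
f0 = 900e3          # center frequency (Hz)
fs = 4e6            # sampling rate (Hz)
pulse_len = 50e-6   # CW pulse length (s)

# Point scatterers: random ranges on a seafloor patch and complex reflection
# coefficients whose random phases drive the coherent interference.
n_scat = 2000
ranges = rng.uniform(4.5, 5.5, n_scat)                    # m
amps = rng.normal(size=n_scat) + 1j * rng.normal(size=n_scat)

# Transmit replica (complex CW burst).
t_pulse = np.arange(0, pulse_len, 1.0 / fs)
pulse = np.exp(2j * np.pi * f0 * t_pulse)

# Receive time series: delayed, range-attenuated copies of the pulse summed
# coherently; this superposition is what produces image speckle.
n_samp = int(2 * 6.0 / c * fs)                            # listen out to 6 m
rx = np.zeros(n_samp, dtype=complex)
for r, a in zip(ranges, amps):
    idx = int(round(2 * r / c * fs))                      # two-way delay
    seg = slice(idx, idx + len(pulse))
    rx[seg] += a / r**2 * pulse[: len(rx[seg])]           # 1/r^2 two-way loss

# Matched filter and envelope detection.
envelope = np.abs(np.correlate(rx, pulse, mode="same"))

# Fully developed speckle has a Rayleigh envelope, i.e. mean/std close to 1.91.
i0 = int(round(2 * 4.5 / c * fs)) + len(pulse)
i1 = int(round(2 * 5.5 / c * fs))
win = envelope[i0:i1]
print("envelope mean/std over the ensonified window:", win.mean() / win.std())
```

Extending this to an imaging geometry amounts to repeating the same coherent summation per receive element and beam, which is where the point spread function of the coherent imaging system appears.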


2021, Vol 11 (11), pp. 5182
Author(s): Shao Zhang, Guoqing Yang, Tao Sun, Kunyang Du, Jin Guo

With the development of our society, unmanned aerial vehicles (UAVs) appear more frequently in people's daily lives and could become a threat to public security and privacy, especially at night. At the same time, laser active imaging is an important detection method for night vision. In this paper, we implement a UAV detection model for our laser active imaging system based on deep learning and a simulated dataset that we constructed. First, the model is pre-trained on the largest available dataset. Then, it is transferred to the simulated dataset to learn UAV features. Finally, the trained model is tested on real laser active imaging data. The experimental results show that the performance of the proposed method is greatly improved compared to a model not trained on the simulated dataset, which verifies the transferability of features learned from the simulated data, the effectiveness of the proposed simulation method, and the feasibility of our solution for UAV detection in the laser active imaging domain. Furthermore, a comparative experiment with the previous method was carried out. The results show that our model achieves high-precision, real-time detection at 104.1 frames per second (FPS).
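
The three-stage training pipeline described in this abstract (pre-train on a large public dataset, fine-tune on simulated UAV imagery, test on real laser active imaging frames) can be sketched as follows. The paper's actual detector and hyperparameters are not given here, so torchvision's COCO-pre-trained Faster R-CNN, the data loaders, and all settings are stand-in assumptions for illustration only.

```python
# Hedged sketch of the three-stage pipeline: (1) start from weights pre-trained
# on a large public dataset, (2) fine-tune on a simulated UAV dataset,
# (3) run on real laser active imaging frames. The detector (torchvision's
# COCO-pre-trained Faster R-CNN), data loaders, and hyperparameters are
# stand-ins, not the paper's actual architecture or settings.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


def build_uav_detector(num_classes: int = 2):
    # Stage 1: COCO-pre-trained backbone and detection head.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box classifier so it predicts background + UAV only.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


def finetune_on_simulated(model, sim_loader, epochs=10, lr=1e-3, device="cuda"):
    # Stage 2: transfer to the simulated laser-active-imaging UAV dataset.
    model.to(device).train()
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, targets in sim_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)     # detection losses in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model


@torch.no_grad()
def detect_real_frames(model, real_loader, device="cuda", score_thresh=0.5):
    # Stage 3: evaluate the fine-tuned detector on real laser active imaging data.
    model.to(device).eval()
    detections = []
    for images, _ in real_loader:
        images = [img.to(device) for img in images]
        for out in model(images):                  # list of dicts in eval mode
            keep = out["scores"] > score_thresh
            detections.append({"boxes": out["boxes"][keep],
                               "scores": out["scores"][keep]})
    return detections
```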

