acoustic images
Recently Published Documents

TOTAL DOCUMENTS: 187 (FIVE YEARS: 39)
H-INDEX: 14 (FIVE YEARS: 4)

Author(s): Jiaqi Wang, Haisen Li, Weidong Du, Tianyao Xing, Tian Zhou


2021, Vol 2021, pp. 1-12
Author(s): Jue Gao, Peiyi Zhu

In this paper, we propose an underwater target perception architecture that adopts three-stage processing: underwater scene acoustic imaging, local high-order statistics (HOS) space conversion, and region-of-interest (ROI) detection. After analysing how underwater targets are represented in acoustic images, we observe that targets form a distinctive cube structure in the local skewness space, which serves as a cue for ROI detection in underwater scenes. To restore the actual appearance of the ROI as faithfully as possible, focus processing is explored to achieve target reconstruction; when the target size and number are unknown, an uncertain theoretical template yields a better reconstruction. The performance of the proposed method in terms of SNR, detection rate, and false alarm rate is verified by experiments on several acoustic image sequences. Moreover, the target perception architecture is general and can be extended to a wider range of underwater applications.
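The abstract does not give implementation details, but the local-skewness (third-order HOS) conversion step can be illustrated with a minimal sketch. The window size, the z-score threshold, and the synthetic Rayleigh-like scene below are assumptions for illustration only, not the authors' method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_skewness_map(img, win=9):
    """Map an acoustic image into local-skewness (third-order HOS) space.

    Skewness is computed over a sliding win x win neighbourhood using
    moving-average filters, so the whole map needs only a few passes.
    """
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img ** 2, win)
    mean_cu = uniform_filter(img ** 3, win)
    var = np.maximum(mean_sq - mean ** 2, 1e-12)
    # Third central moment: E[x^3] - 3*mu*E[x^2] + 2*mu^3
    third = mean_cu - 3.0 * mean * mean_sq + 2.0 * mean ** 3
    return third / var ** 1.5

def detect_roi(skew_map, k=3.0):
    """Very rough ROI mask: pixels whose local skewness deviates strongly
    from the scene average (the threshold k is an assumed parameter)."""
    z = (skew_map - skew_map.mean()) / (skew_map.std() + 1e-12)
    return np.abs(z) > k

if __name__ == "__main__":
    # Synthetic acoustic image: Rayleigh-like background plus a bright blob.
    rng = np.random.default_rng(0)
    scene = rng.rayleigh(scale=1.0, size=(128, 128))
    scene[60:70, 60:70] += 4.0
    mask = detect_roi(local_skewness_map(scene))
    print("ROI pixels detected:", int(mask.sum()))
```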


Author(s):  
Priyadharsini Ravisankar

Underwater acoustic images are captured by sonar technology, which uses sound as its source. Noise in acoustic images arises during acquisition; it is typically multiplicative in nature and seriously degrades the visual quality of the images. Image denoising techniques that remove this noise generally use linear and non-linear filters. In this paper, a wavelet-based denoising method is used to reduce the noise in the images. The image is decomposed using the Stationary Wavelet Transform (SWT) into low- and high-frequency components. Shrinkage functions such as VisuShrink and SureShrink are used to select the threshold that removes the undesirable signal in the low-frequency component, while high-frequency components such as edges and corners are retained. The inverse SWT then reconstructs the denoised image by combining the modified low-frequency components with the high-frequency components. The Peak Signal-to-Noise Ratio (PSNR) is reported for various wavelets, such as Haar, Daubechies, and Coiflet, and for the different thresholding methods.
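A rough PyWavelets-based sketch of an SWT/VisuShrink denoiser of this kind is shown below. Note that it follows the common convention of soft-thresholding the detail coefficients, so the exact coefficient handling of the paper is not reproduced; the wavelet, decomposition level, and multiplicative noise model are assumptions.

```python
import numpy as np
import pywt

def swt_denoise(img, wavelet="db4", level=2):
    """SWT denoising with a VisuShrink threshold.

    Detail coefficients are soft-thresholded, approximation coefficients
    are kept, and the image is rebuilt with the inverse SWT.
    """
    coeffs = pywt.swt2(img, wavelet, level=level)
    # Noise estimate from the finest diagonal detail band (MAD / 0.6745).
    cD_fine = coeffs[-1][1][2]
    sigma = np.median(np.abs(cD_fine)) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))   # VisuShrink threshold
    denoised = [
        (cA, tuple(pywt.threshold(d, thr, mode="soft") for d in details))
        for cA, details in coeffs
    ]
    return pywt.iswt2(denoised, wavelet)

def psnr(ref, est):
    mse = np.mean((ref.astype(np.float64) - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.full((128, 128), 60.0)
    clean[40:90, 40:90] = 200.0
    # Multiplicative (log-normal) noise as a stand-in for sonar speckle.
    noisy = clean * rng.lognormal(sigma=0.2, size=clean.shape)
    print("PSNR after SWT denoising: %.2f dB" % psnr(clean, swt_denoise(noisy)))
```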


2021
Author(s): José Enrique Almanza-Medina, Benjamin Henson, Yuriy Zakharov

Many underwater applications that involve autonomous underwater vehicles require accurate navigation systems. Image registration from acoustic images is a technique that can achieve this by comparing two consecutive sonar images and estimating the motion of the vehicle. The use of deep learning (DL) techniques for motion estimation can significantly reduce the processing complexity and achieve high-accuracy position estimates. In this paper, we investigate the performance improvement obtained when using two sonar sensors rather than a single sensor. The DL network is trained on images generated by a sonar simulator. The results show an improvement in estimation accuracy when using two sensors.
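The network architecture is not specified in the abstract; a minimal PyTorch sketch of the general idea (stacking consecutive sonar frames from one or two sensors and regressing planar motion) might look like the following. The channel counts, layer sizes, and the three-parameter pose output (dx, dy, dtheta) are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class SonarMotionNet(nn.Module):
    """Regress planar motion (dx, dy, dtheta) from stacked sonar frames.

    With one sensor the input is 2 channels (previous + current frame);
    with two sensors it is 4 channels. Sizes are illustrative only.
    """
    def __init__(self, n_sensors=2):
        super().__init__()
        in_ch = 2 * n_sensors
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)   # dx, dy, dtheta

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    net = SonarMotionNet(n_sensors=2)
    frames = torch.randn(8, 4, 128, 128)   # batch of stacked image pairs
    motion = net(frames)                   # (8, 3) pose increments
    loss = nn.functional.mse_loss(motion, torch.zeros_like(motion))
    print(motion.shape, float(loss))
```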


2021
Author(s): Bo Gong, Ela Manuel, Youfang Liu, David Forand, ...

Logging-while-drilling (LWD) acoustic imaging technology has emerged in the past few years as a low-cost solution for detecting and characterizing fractures in high-angle and horizontal wells. This type of imaging tool works in either water-based or oil-based drilling fluids, making it a competitive choice for logging unconventional shale wells, which are often drilled with oil-based mud. With high-resolution acoustic amplitude and travel-time images, fractures, bedding planes, and drilling-related features can be identified, providing new insights for reservoir characterization and wellbore geomechanics. The quality of LWD acoustic images, however, is directly affected by drilling parameters and borehole conditions, as the received signal is sensitive to both formation properties and wellbore changes. As a result, interpretation can be quite challenging, and care must be taken to differentiate actual formation property changes from drilling-related features or image artifacts. This paper demonstrates the complexity of interpreting LWD acoustic images through multiple case studies. The examples were collected from vertical and horizontal wells in multiple shale plays in North America, with the images logged and processed by different service companies. Depending on the geology and borehole conditions, various features and artifacts were observed in the images, which can serve as a reference for geologists and petrophysicists. Images acquired with different drilling parameters were compared to show the effect of drilling conditions on image quality. Recommendations and best practices for using this new type of image log are also shared.


Author(s): Francesco Asdrubali, Giorgio Baldinelli, Francesco Bianchi, Danilo Costarelli, Francesco D'Alessandro, ...

2021, Vol 9 (4), pp. 361
Author(s): António José Oliveira, Bruno Miguel Ferreira, Nuno Alexandre Cruz

In underwater navigation, sonars are useful sensing devices for operation in confined or structured environments, enabling the detection and identification of underwater environmental features through the acquisition of acoustic images. Nonetheless, in these environments, several problems affect their performance, such as background noise and multiple secondary echoes. In recent years, research has been conducted on applying feature extraction algorithms to underwater acoustic images, with the aim of achieving a robust solution for detecting and matching environmental features. However, since these algorithms were originally developed for optical image analysis, conclusions in the literature diverge regarding their suitability for acoustic imaging. This article presents a detailed comparison between the SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), BRISK (Binary Robust Invariant Scalable Keypoints), and SURF-Harris algorithms, based on the performance of their feature detection and description procedures when applied to acoustic data collected by an autonomous underwater vehicle. Several characteristics of the studied algorithms were taken into account, such as feature point distribution, feature detection accuracy, and feature description robustness. A possible adaptation of feature extraction procedures to acoustic imaging is further explored through the implementation of a feature selection module. The comparison also provides evidence that further development of current feature description methodologies may be required for underwater acoustic image analysis.
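For readers unfamiliar with these detectors, a minimal OpenCV sketch of detecting and matching binary features between two consecutive sonar frames is given below. ORB and BRISK are shown because they ship with the standard OpenCV build (SURF requires the non-free contrib modules); the file names and matcher settings are illustrative, not the article's setup.

```python
import cv2

def match_binary_features(img1, img2, detector="ORB", max_matches=50):
    """Detect and match binary descriptors (ORB or BRISK) between two
    grayscale sonar frames, keeping the best matches by Hamming distance."""
    det = cv2.ORB_create(nfeatures=1000) if detector == "ORB" else cv2.BRISK_create()
    kp1, des1 = det.detectAndCompute(img1, None)
    kp2, des2 = det.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return [], kp1, kp2
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return matches[:max_matches], kp1, kp2

if __name__ == "__main__":
    # 'frame_a.png' / 'frame_b.png' are placeholder names for two
    # consecutive acoustic images.
    a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
    if a is None or b is None:
        raise SystemExit("provide two sonar frames as frame_a.png / frame_b.png")
    matches, kp1, kp2 = match_binary_features(a, b, detector="BRISK")
    print(f"{len(matches)} matches retained")
```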


2021, Vol 4 (1), pp. 41-60
Author(s): Tea Prsir, Jean-Philippe Goldman, Antoine Auchlin

This paper presents results from an ongoing study of prosodic and phonostylistic variation across speaking styles, i.e., acoustic images associated with types of language production, also called phonogenres. It extends previous work in (1, 2) by enlarging the corpus (C-PhonoGenre, 8 hours) and by exploring a more comprehensive collection of genres. The situational parameters in (3, 4) are reduced to four situational features, each admitting three values, whose combination differentiates sub-phonogenres. The main goal of this study is to establish correlations between the situational and prosodic features of discourse. Corpus processing, annotation, and measure calculation are performed semi-automatically, through a set of tools implemented in Praat combined with manual steps. Rhythmical measurements by DurationAnalyser (5), combined with the output of ProsoReport (6), produce an acoustic analysis of the differences between phonogenres. A large number of micro- and macro-prosodic measures provide a fine-grained 'prosometric' description. This article presents the methodology for collecting the corpus and results for the phonogenres.
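The Praat tooling mentioned above (DurationAnalyser, ProsoReport) is not reproduced here, but a rough Python sketch of the kind of micro-prosodic measures involved (pitch level, pitch span, voiced proportion) can be obtained with the parselmouth bindings to Praat. The file name and the particular measures are assumptions for illustration, not the study's measure set.

```python
import numpy as np
import parselmouth  # Python bindings to Praat

def basic_prosodic_profile(wav_path):
    """Return a few coarse prosodic measures for one recording:
    mean F0, F0 span in semitones, and the proportion of voiced frames."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    voiced = f0[f0 > 0]                 # Praat reports 0 Hz for unvoiced frames
    span_st = 12.0 * np.log2(voiced.max() / voiced.min()) if voiced.size else 0.0
    return {
        "duration_s": snd.duration,
        "mean_f0_hz": float(voiced.mean()) if voiced.size else 0.0,
        "f0_span_semitones": float(span_st),
        "voiced_ratio": float(voiced.size) / max(len(f0), 1),
    }

if __name__ == "__main__":
    # 'sample.wav' is a placeholder for one recording from the corpus.
    print(basic_prosodic_profile("sample.wav"))
```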


Electronics, 2021, Vol 10 (3), pp. 317
Author(s): Jurgen Vandendriessche, Bruno da Silva, Lancelot Lhoest, An Braeken, Abdellah Touhafi

Acoustic cameras allow the visualization of sound sources using microphone arrays and beamforming techniques. The required computational power increases with the number of microphones in the array and with the resolution of the acoustic images, in particular when targeting real-time operation. This constraint limits the use of acoustic cameras in many wireless sensor network applications (surveillance, industrial monitoring, etc.). In this paper, we propose a multi-mode System-on-Chip (SoC) Field-Programmable Gate Array (FPGA) architecture capable of satisfying the high computational demand while providing wireless communication for remote control and monitoring. This architecture produces real-time acoustic images at 240 × 180 resolution, scalable to 640 × 480 by exploiting the multithreading capabilities of the hard-core processor. Furthermore, the timing cost for different operational modes and resolutions is investigated to maintain a real-time system under wireless sensor network constraints.
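The FPGA implementation itself is beyond a short example, but the narrowband delay-and-sum beamforming that turns microphone-array snapshots into an acoustic image can be sketched in a few lines of NumPy. The array geometry, frequency, and image grid below are assumptions for illustration and are much smaller than the 240 × 180 images of the paper.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def delay_and_sum_image(snapshots, mic_xy, freq, n_az=60, n_el=45):
    """Narrowband delay-and-sum beamforming over a grid of steering directions.

    snapshots: (n_mics, n_snapshots) complex baseband samples at `freq`
    mic_xy:    (n_mics, 2) microphone positions in metres (planar array)
    Returns an (n_el, n_az) acoustic power image.
    """
    az = np.linspace(-np.pi / 3, np.pi / 3, n_az)   # azimuth grid
    el = np.linspace(-np.pi / 4, np.pi / 4, n_el)   # elevation grid
    k = 2.0 * np.pi * freq / C
    image = np.empty((n_el, n_az))
    for i, e in enumerate(el):
        for j, a in enumerate(az):
            # Unit look direction projected onto the array plane.
            d = np.array([np.cos(e) * np.sin(a), np.sin(e)])
            steer = np.exp(1j * k * (mic_xy @ d))          # steering vector
            beam = steer.conj() @ snapshots / len(steer)   # align and sum
            image[i, j] = np.mean(np.abs(beam) ** 2)
    return image

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mics = rng.uniform(-0.1, 0.1, size=(32, 2))    # 32-mic planar array
    snaps = rng.standard_normal((32, 64)) + 1j * rng.standard_normal((32, 64))
    img = delay_and_sum_image(snaps, mics, freq=4000.0)
    print("acoustic image shape:", img.shape)
```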

