underwater vision
Recently Published Documents

TOTAL DOCUMENTS: 75 (five years: 27)

H-INDEX: 13 (five years: 2)

Author(s):  
Marc Picheral ◽  
Camille Catalano ◽  
Denis Brousseau ◽  
Hervé Claustre ◽  
Laurent Coppola ◽  
...  

2021 ◽  
Author(s):  
Shudi Yang ◽  
Zhehan Chen ◽  
Jiaxiong Wu ◽  
Zhipeng Feng

Abstract Underwater vision research is the foundation of marine-related disciplines. Target contour extraction is of great significance to target tracking and visual information mining. To address the problem that conventional active contour models cannot effectively extract the contours of salient targets in underwater images, we propose a dual-fusion active contour model with semantic information. First, saliency images are introduced as semantic information, and salient target contours are extracted by fusing the Chan–Vese and local binary fitting models. Then, the original underwater images are used to supplement the missing contour information through local image fitting. Compared with state-of-the-art contour extraction methods, our dual-fusion active contour model can effectively filter out background information and accurately extract salient target contours.
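The Chan–Vese model that this dual-fusion approach builds on minimizes a region-fitting energy. A minimal sketch of that data term, with a toy one-dimensional "image"; all names and values here are illustrative, not taken from the paper:

```python
# Sketch of the Chan-Vese data term: for a candidate segmentation
# (here a binary mask), the energy sums squared deviations from the
# mean intensity inside and outside the region. A mask that isolates
# the salient target yields a lower energy.

def chan_vese_energy(image, mask):
    """image and mask are equal-length flat lists; mask entries are 0/1."""
    inside = [p for p, m in zip(image, mask) if m == 1]
    outside = [p for p, m in zip(image, mask) if m == 0]
    c1 = sum(inside) / len(inside) if inside else 0.0
    c2 = sum(outside) / len(outside) if outside else 0.0
    return (sum((p - c1) ** 2 for p in inside)
            + sum((p - c2) ** 2 for p in outside))

# A mask matching the bright "target" scores lower than a scrambled one.
image = [0.9, 0.8, 0.9, 0.1, 0.2, 0.1]
good_mask = [1, 1, 1, 0, 0, 0]
bad_mask = [1, 0, 1, 0, 1, 0]
assert chan_vese_energy(image, good_mask) < chan_vese_energy(image, bad_mask)
```

The full model also includes a contour-length regularizer and, in the paper's variant, local binary fitting terms; those are omitted here for brevity.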


2021 ◽  
pp. 1-13
Author(s):  
Long Hou ◽  
Long Yu ◽  
Shengwei Tian ◽  
Yanhan Zhang

Underwater image enhancement has long been a hot spot in underwater vision research. However, owing to the complicated underwater environment, raw underwater images often suffer from problems such as color distortion and low brightness. In response, we propose a generative adversarial network that integrates multiple attention mechanisms to enhance underwater images. In the generator, we introduce multi-layer dense connections and CSAM modules: the former captures more detailed features and reuses earlier features, while the latter improves the utilization of the feature map. Meanwhile, we improve the quality of the generated image by combining a VGG19 content loss and a SmoothL1 loss. Finally, we verify the effectiveness of the proposed model through qualitative and quantitative experiments and compare the results with several recent models. The results show that the proposed method is superior to existing methods.
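The SmoothL1 term mentioned in the objective is the standard Huber-like loss; a minimal sketch, with an illustrative weighted combination against a content loss (the weighting scheme is an assumption, not the paper's):

```python
# SmoothL1 (Huber-like) loss: quadratic for small errors, linear for
# large ones, which makes training less sensitive to outlier pixels.

def smooth_l1(pred, target, beta=1.0):
    """Mean SmoothL1 loss over two equal-length lists of values."""
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total / len(pred)

def combined_loss(pixel_loss, content_loss, lam=0.5):
    """Illustrative weighted sum of pixel loss and VGG-style content loss."""
    return pixel_loss + lam * content_loss

# Small error (0.5) is penalized quadratically, large error (2.0) linearly.
assert abs(smooth_l1([0.5, 2.0], [0.0, 0.0]) - 0.8125) < 1e-9
```

In practice the content loss would be computed on VGG19 feature maps of the enhanced and reference images rather than raw pixels.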


2021 ◽  
Vol 925 (1) ◽  
pp. 012054
Author(s):  
F Muhammad ◽  
Poerbandono ◽  
H Sternberg

Abstract Underwater vision-based mapping (VbM) constructs a three-dimensional (3D) map and the robot position simultaneously using a quasi-continuous structure-from-motion (SfM) method. This is the so-called simultaneous localization and mapping (SLAM), which can be beneficial for mapping shallow seabed features as it is free from the parasitic returns found in sonar surveys. This paper presents a discussion of a small-scale test of a 3D underwater positioning task. We analyse the setup and performance of a standard web camera used for such a task while fully submerged. SLAM estimates the robot (i.e. camera) position from the constructed 3D map by reprojecting the detected features (points) into the camera scene. A marker-based camera calibration is used to eliminate refraction effects due to light propagation in the water column. To analyse positioning accuracy, a fiducial marker-based system, with millimetre-level reprojection error, is used as the trajectory's ground truth. A controlled experiment with a standard web camera running at 30 fps (frames per second) shows that such a system is capable of robustly performing an underwater navigation task. Sub-metre accuracy is achieved using at least one pose per second (1 Hz).
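The reprojection step at the heart of this pose estimation can be sketched with a pinhole camera model; the intrinsics and the sample point below are assumed values, not the paper's calibration:

```python
# Pinhole reprojection: a mapped 3D point is projected into the image,
# and the pixel distance to the detected feature is the reprojection
# error that SLAM minimizes when estimating the camera pose.

def project(point, fx, fy, cx, cy):
    """Project a 3D point (camera frame, Z forward) to pixel coordinates."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_error(observed, projected):
    """Euclidean pixel distance between observed and reprojected features."""
    du, dv = observed[0] - projected[0], observed[1] - projected[1]
    return (du * du + dv * dv) ** 0.5

# A point 2 m ahead and 0.2 m to the right, with assumed intrinsics.
proj = project((0.2, 0.0, 2.0), fx=500, fy=500, cx=320, cy=240)
assert proj == (370.0, 240.0)
assert reprojection_error((372.0, 240.0), proj) == 2.0
```

Underwater, the calibration absorbs the refraction at the housing port, which is why the paper calibrates with markers while fully submerged.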


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7205
Author(s):  
Xueting Zhang ◽  
Xiaohai Fang ◽  
Mian Pan ◽  
Luhua Yuan ◽  
Yaxin Zhang ◽  
...  

Underwater vision-based detection plays an increasingly important role in underwater security, ocean exploration and other fields. Due to the absorption and scattering of light by water, as well as the movement of the carrier, underwater images generally suffer from problems such as noise pollution, color cast and motion blur, which seriously degrade the performance of underwater vision-based detection. To address these problems, this study proposes an end-to-end marine organism detection framework that jointly optimizes image enhancement and object detection. The framework uses a two-stage detection network with a dynamic intersection-over-union (IoU) threshold as the backbone and adds an underwater image enhancement module (UIEM) composed of denoising, color correction and deblurring sub-modules, greatly improving the framework's ability to handle severely degraded underwater images. Meanwhile, a self-built dataset is introduced to pre-train the UIEM, so that the entire framework can be trained end-to-end. The experimental results show that, compared with existing end-to-end models applied to marine organism detection, the detection precision of the proposed framework improves by at least 6% while the detection speed is not significantly reduced, enabling high-precision real-time detection of marine organisms.
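The IoU measure behind the dynamic threshold is standard; a minimal sketch, with an illustrative cascade of rising per-stage thresholds (the specific threshold values are assumptions, not the paper's):

```python
# IoU of axis-aligned boxes, plus a toy per-stage label assignment of
# the kind used in cascaded two-stage detectors: later stages demand
# a higher overlap for a proposal to count as positive.

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def assign_labels(overlap, stage_thresholds=(0.5, 0.6, 0.7)):
    """Positive/negative decision for each stage's rising IoU threshold."""
    return [overlap >= t for t in stage_thresholds]

# Two half-overlapping 10x10 boxes have IoU 1/3.
assert abs(iou((0, 0, 10, 10), (5, 0, 15, 10)) - 1 / 3) < 1e-9
assert assign_labels(0.65) == [True, True, False]
```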


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7043
Author(s):  
Xiaoteng Zhou ◽  
Changli Yu ◽  
Xin Yuan ◽  
Citong Luo

In the field of underwater vision, image matching between the two main sensors (sonar and optical camera) has always been a challenging problem. The independent imaging mechanisms of the two sensors determine the modality of each image, and local features differ significantly across modalities, which invalidates general matching methods designed for optical images. To make full use of underwater acoustic and optical images and to promote the development of multisensor information fusion (MSIF) technology, this letter proposes applying an image attribute transfer algorithm and an advanced local feature descriptor to the underwater acousto-optic image matching problem. We test on real and simulated underwater images; experimental results show that the proposed method can effectively preprocess these multimodal images to obtain accurate matching results, providing a new solution for the underwater multisensor image matching task.
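Once attribute transfer has brought the two modalities closer, descriptor matching proceeds as in the optical case. A minimal sketch of nearest-neighbour matching with Lowe's ratio test, on tiny hand-made descriptors (the paper's actual descriptor is not reproduced here):

```python
import math

# Nearest-neighbour descriptor matching with the ratio test: a match is
# accepted only if the best candidate is clearly closer than the
# second-best, which suppresses ambiguous cross-modal matches.

def match_descriptors(descs_a, descs_b, ratio=0.8):
    """Return (i, j) index pairs from descs_a to descs_b passing the test."""
    matches = []
    for i, da in enumerate(descs_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(descs_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Each query descriptor has one clearly closest counterpart.
descs_a = [(0.0, 0.0), (5.0, 5.0)]
descs_b = [(0.1, 0.0), (5.0, 5.1), (9.0, 9.0)]
assert match_descriptors(descs_a, descs_b) == [(0, 0), (1, 1)]
```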


2021 ◽  
Vol 18 (183) ◽  
Author(s):  
Xingwen Zheng ◽  
Amar M. Kamat ◽  
Ming Cao ◽  
Ajay Giri Prakash Kottapalli

Seals are known to use their highly sensitive whiskers to precisely follow the hydrodynamic trail left behind by prey. Studies estimate that a seal can track a herring that is swimming as far as 180 m away, indicating an incredible detection apparatus on a par with the echolocation system of dolphins and porpoises. This remarkable sensing capability is enabled by the unique undulating structural morphology of the whisker that suppresses vortex-induced vibrations (VIVs) and thus increases the signal-to-noise ratio of the flow-sensing whiskers. In other words, the whiskers vibrate minimally owing to the seal's swimming motion, eliminating most of the self-induced noise and making them ultrasensitive to the vortices in the wake of escaping prey. Because of this impressive ability, the seal whisker has attracted much attention in the scientific community, encompassing multiple fields of sensory biology, fluid mechanics, biomimetic flow sensing and soft robotics. This article presents a comprehensive review of the seal whisker literature, covering the behavioural experiments on real seals, VIV suppression capabilities enabled by the undulating geometry, wake vortex-sensing mechanisms, morphology and material properties and finally engineering applications inspired by the shape and functionality of seal whiskers. Promising directions for future research are proposed.


2021 ◽  
Vol 8 ◽  
Author(s):  
Qi Zhao ◽  
Ziqiang Zheng ◽  
Huimin Zeng ◽  
Zhibin Yu ◽  
Haiyong Zheng ◽  
...  

Underwater depth prediction plays an important role in underwater vision research. Because of the complex underwater environment, it is extremely difficult and expensive to obtain underwater datasets with reliable depth annotation. Thus, underwater depth map estimation in a data-driven manner remains a challenging task. To tackle this problem, we propose an end-to-end system with two modules, for underwater image synthesis and underwater depth map estimation, respectively. The former translates hazy in-air RGB-D images into multi-style realistic synthetic underwater images while retaining the objects and structural information of the input images. We then construct a semi-real RGB-D underwater dataset from the synthesized underwater images and the original corresponding depth maps, and perform supervised depth estimation on these pseudo-paired underwater RGB-D images. Comprehensive experiments demonstrate that the proposed method generates multiple realistic underwater images with high fidelity, which can be used to enhance the performance of monocular underwater depth estimation. Furthermore, the trained depth estimation model can be applied to real underwater images. We will release our code and experimental settings at https://github.com/ZHAOQIII/UW_depth.
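A common physical basis for this kind of underwater synthesis is the attenuation-plus-backscatter image formation model; whether the paper's learned translation follows exactly this form is an assumption, and the coefficients below are illustrative:

```python
import math

# Per-pixel underwater image formation: scene radiance j is attenuated
# with depth, while backscattered veiling light fills in as transmission
# drops. This is how an in-air RGB-D pair can seed a synthetic
# underwater image whose depth map is known by construction.

def synthesize_underwater_pixel(j, depth, beta, backlight):
    """Attenuate radiance j over 'depth' metres and add backscatter."""
    t = math.exp(-beta * depth)          # transmission along the ray
    return j * t + backlight * (1.0 - t)

# At zero depth the pixel is unchanged; far away it tends to the
# veiling light, which is exactly the haze cue depth networks exploit.
assert synthesize_underwater_pixel(0.8, 0.0, 0.2, 0.1) == 0.8
assert abs(synthesize_underwater_pixel(0.8, 10.0, 0.5, 0.1) - 0.1) < 0.01
```

Applying this per colour channel with channel-specific `beta` values reproduces the characteristic blue-green cast, since red attenuates fastest in water.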


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3268
Author(s):  
Qi Zhao ◽  
Zhichao Xin ◽  
Zhibin Yu ◽  
Bing Zheng

As one of the key requirements for underwater exploration, underwater depth map estimation is of great importance in underwater vision research. Although significant progress has been achieved in image-to-image translation and depth map estimation, a gap between normal and underwater depth map estimation remains. Additionally, it is a great challenge to build a mapping function that converts a single underwater image into an underwater depth map, owing to the lack of paired data. Moreover, the ever-changing underwater environment further intensifies the difficulty of finding an optimal mapping solution. To eliminate these bottlenecks, we developed a novel image-to-image framework for underwater image synthesis and depth map estimation in underwater conditions. To address the lack of paired data, we translate hazy in-air images (with depth maps) into underwater images, yielding a paired dataset of underwater images and corresponding depth maps. To enrich the synthesized dataset, we further translate hazy in-air images into a series of continuously changing underwater images of a specified style. For depth map estimation, we include a coarse-to-fine network that provides precise depth map estimation results. We evaluated our framework on a real underwater RGB-D dataset. The experimental results show that our method provides a diversity of underwater images and the best depth map estimation precision.
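The coarse-to-fine idea can be sketched in one dimension: a low-resolution depth prediction is upsampled and corrected by a fine-scale residual. This is a generic illustration of the strategy, not the paper's network (which learns both stages):

```python
# Coarse-to-fine refinement: the coarse stage fixes the overall depth
# layout; the fine stage only has to predict small residual corrections
# at full resolution, which is an easier learning problem.

def coarse_to_fine(coarse_depth, residual):
    """Nearest-neighbour upsample coarse depths x2, then add residuals."""
    upsampled = [v for v in coarse_depth for _ in (0, 1)]
    return [u + r for u, r in zip(upsampled, residual)]

refined = coarse_to_fine([2.0, 4.0], [0.1, -0.1, 0.2, 0.0])
expected = [2.1, 1.9, 4.2, 4.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(refined, expected))
```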

