A Simulation Method of Specific Fish-Eye Imaging System Based on Image Postprocessing

Author(s):  
Wenhui Li ◽  
You Qu ◽  
Ying Wang ◽  
Jialun Liu
2021 ◽  
Vol 8 ◽  
Author(s):  
Woen-Sug Choi ◽  
Derek R. Olson ◽  
Duane Davis ◽  
Mabel Zhang ◽  
Andy Racson ◽  
...  

One of the key distinguishing aspects of underwater manipulation tasks is the perception challenge posed by the ocean environment, including turbidity, backscatter, and lighting effects. Consequently, underwater perception often relies on sonar-based measurements to estimate the vehicle’s state and surroundings, either standalone or in concert with other sensing modalities, to support the perception necessary to plan and control manipulation tasks. Simulation of the multibeam echosounder, while not a substitute for in-water testing, is a critical capability for developing manipulation strategies in the complex and variable ocean environment. Although several approaches exist in the literature to simulate synthetic sonar images, the methods in the robotics community typically use image processing and video rendering software to comply with real-time execution requirements. In addition to lacking a physics-based model of the interaction between sound and the scene of interest, these rendered sonar images omit several basic properties, notably the coherent imaging system and the coherent speckle that distort object geometry in the sonar image. To address this deficiency, we present a physics-based multibeam echosounder simulation method that captures these fundamental aspects of sonar perception. A point-based scattering model is implemented to calculate the acoustic interaction between the target and the environment. This is a simplified representation of target scattering, but it can produce realistic coherent image speckle and the correct point spread function. The results demonstrate that this multibeam echosounder simulator generates qualitatively realistic images with high efficiency, providing both the sonar image and the physical time-series signal data. This synthetic sonar data is a key enabler for developing, testing, and evaluating autonomous underwater manipulation strategies that use sonar as a component of perception.
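The point-based scattering idea described above can be illustrated with a minimal sketch: the scene is reduced to a cloud of point scatterers, each returning an echo with a random complex amplitude, and the coherent sum per beam and range bin yields both the speckle and the beam-pattern point spread. The function name, Gaussian beam model, and parameter values below are illustrative assumptions, not the simulator's actual implementation.

```python
import numpy as np

def simulate_multibeam_ping(scatterers, beam_angles, freq=900e3, c=1500.0,
                            range_res=0.01, max_range=10.0, beamwidth_deg=1.0):
    """Sketch of a point-based scattering model for a multibeam echosounder.

    scatterers  : (N, 3) array of point positions (x, y, z) in the sonar frame.
    beam_angles : array of beam steering angles in radians.
    Returns a (num_beams, num_bins) intensity image with coherent speckle.
    """
    k = 2 * np.pi * freq / c                      # acoustic wavenumber
    num_bins = int(max_range / range_res)
    image = np.zeros((len(beam_angles), num_bins), dtype=complex)

    r = np.linalg.norm(scatterers, axis=1)        # slant range of each point
    az = np.arctan2(scatterers[:, 1], scatterers[:, 0])  # azimuth of each point
    # Random Rayleigh amplitude and uniform phase per scatterer -> speckle
    amp = (np.random.rayleigh(1.0, len(r))
           * np.exp(1j * np.random.uniform(0, 2 * np.pi, len(r))))

    for b, theta in enumerate(beam_angles):
        # Simple Gaussian directivity pattern around the beam axis
        gain = np.exp(-0.5 * ((az - theta) / np.radians(beamwidth_deg)) ** 2)
        # Two-way phase and spherical spreading loss
        echo = gain * amp * np.exp(-2j * k * r) / np.maximum(r, 1e-6) ** 2
        bins = np.clip((r / range_res).astype(int), 0, num_bins - 1)
        np.add.at(image[b], bins, echo)           # coherent sum into range bins

    return np.abs(image) ** 2                     # speckled intensity image
```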


2021 ◽  
Vol 480 ◽  
pp. 126458
Author(s):  
Wenwen Lu ◽  
Shanyong Chen ◽  
Yupeng Xiong ◽  
Junfeng Liu

2013 ◽  
Vol 846-847 ◽  
pp. 574-577
Author(s):  
Jian Hui Wu ◽  
Guo Yun Zhang ◽  
Long Yuan Guo ◽  
Shuai Yuan

A staring, no-blind-zone video surveillance system was designed based on the TMS320DM6467 DSP chip and a fish-eye lens with an ultra-wide field of view. The fisheye lens has a 185-degree field of view, and by using two fish-eye imaging systems the surveillance system achieves 360-degree panoramic monitoring with no blind zone. This paper first describes the design of the panoramic imaging system and then studies its calibration method. The hardware is built around the TMS320DM6467 multimedia image-processing chip, which has a dual-core processor and can process the fish-eye images in real time. Experiments show that the fisheye imagery covers a 360-degree sphere with no blind zone over the surveillance area and that the processing hardware operates stably. The system can perform intelligent surveillance once the corresponding algorithm is uploaded.
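A common way to realize the panoramic output described above is to unwrap each circular fisheye frame into a rectangular strip under an assumed equidistant projection. The sketch below is a hedged illustration using OpenCV's remap; the centre, radius, and output sizes are placeholder values, not the calibration results of the DM6467 system.

```python
import cv2
import numpy as np

def fisheye_to_panorama(img, cx, cy, radius, out_w=1440, out_h=360):
    """Unwrap a circular fisheye frame (assumed equidistant projection)
    into a rectangular panorama strip.

    cx, cy : pixel centre of the fisheye circle
    radius : pixel radius corresponding to the edge of the ~185-deg field
    """
    # Panorama columns sweep azimuth 0..2*pi; rows sweep radial distance 0..radius
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    rho = np.linspace(0, radius, out_h)
    theta_g, rho_g = np.meshgrid(theta, rho)

    # Map each panorama pixel back to its source location in the fisheye image
    map_x = (cx + rho_g * np.cos(theta_g)).astype(np.float32)
    map_y = (cy + rho_g * np.sin(theta_g)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Hypothetical usage: two back-to-back fisheye cameras give full spherical
# coverage; each hemisphere is unwrapped and the strips placed side by side.
# pano = np.hstack([fisheye_to_panorama(front, 640, 480, 470),
#                   fisheye_to_panorama(rear, 640, 480, 470)])
```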


2021 ◽  
Vol 11 (11) ◽  
pp. 5182
Author(s):  
Shao Zhang ◽  
Guoqing Yang ◽  
Tao Sun ◽  
Kunyang Du ◽  
Jin Guo

With the development of our society, unmanned aerial vehicles (UAVs) appear more and more frequently in people’s daily lives, where they can become a threat to public security and privacy, especially at night. At the same time, laser active imaging is an important detection method for night vision. In this paper, we implement a UAV detection model for our laser active imaging system based on deep learning and a simulated dataset that we constructed. Firstly, the model is pre-trained on the largest available dataset. Then, it is transferred to the simulated dataset to learn UAV features. Finally, the trained model is tested on real laser active imaging data. The experimental results show that the performance of the proposed method is greatly improved compared to the model not trained on the simulated dataset, which verifies the transferability of features learned from the simulated data, the effectiveness of the proposed simulation method, and the feasibility of our solution for UAV detection in the laser active imaging domain. Furthermore, a comparative experiment with the previous method was carried out. The results show that our model can achieve high-precision, real-time detection at 104.1 frames per second (FPS).
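The transfer-learning pipeline outlined above (pre-train on a large generic dataset, fine-tune on simulated UAV data, then test on real laser imagery) can be sketched as follows. The torchvision Faster R-CNN used here is a stand-in detector chosen for illustration; the paper's actual network, dataset loaders, and hyperparameters are not specified in this abstract.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_uav_detector(num_classes=2):            # background + UAV
    # 1) Start from weights pre-trained on a large generic dataset (COCO)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # 2) Replace the classification head so it predicts only the UAV class
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def fine_tune(model, simulated_loader, epochs=10, lr=5e-3, device="cuda"):
    """Fine-tune on a simulated laser-active-imaging UAV dataset
    (simulated_loader is a hypothetical torch DataLoader of images/targets)."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in simulated_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)       # dict of detection losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

# 3) The fine-tuned model is then evaluated on real laser active imaging frames.
```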


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6649
Author(s):  
Ying Li ◽  
Ombeline de La Rochefoucauld ◽  
Philippe Zeitoun

In recent years, integral imaging, a promising three-dimensional imaging technology, has attracted more and more attention for its broad applications in robotics, computational vision, and medical diagnostics. In the visible spectrum, an integral imaging system can be easily implemented by inserting a micro-lens array between an image-forming optic and a pixelated detector. By using a micro-Fresnel Zone Plate (FZP) array instead of the refractive lens array, the integral imaging system can be extended to the X-ray regime. Due to the micro-scale dimensions of the FZPs in the array and the limits of current manufacturing techniques, the number of zones per FZP is limited. This may have an important impact on the FZP imaging performance. The paper introduces a simulation method based on scalar diffraction theory. With the aid of this method, the effect of the number of zones on the FZP imaging performance is numerically investigated, especially for very small numbers of zones. Results of several FZP imaging simulations are presented and show that an image can be formed by an FZP with as few as five zones. The paper aims to offer a numerical approach that facilitates the design of FZPs for integral imaging.
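A scalar-diffraction simulation of this kind can be sketched by building the binary FZP transmission function from the paraxial zone radii r_n = sqrt(n·λ·f) and propagating a plane wave to the focal plane with the angular-spectrum method. The parameter values below (wavelength, focal length, sampling) are illustrative assumptions, not those used in the paper.

```python
import numpy as np

def simulate_fzp_focus(n_zones=5, wavelength=1e-10, f=5e-3, grid=2048, dx=20e-9):
    """Scalar-diffraction sketch of a binary Fresnel zone plate with few zones.

    Returns the intensity distribution in the nominal focal plane.
    """
    # Zone radii in the paraxial approximation: r_n = sqrt(n * wavelength * f)
    r_edges = np.sqrt(np.arange(1, n_zones + 1) * wavelength * f)

    x = (np.arange(grid) - grid / 2) * dx
    X, Y = np.meshgrid(x, x)
    R = np.sqrt(X**2 + Y**2)

    # Binary transmission: open central zone and alternate zones, opaque outside
    zone_index = np.searchsorted(r_edges, R)
    t = ((zone_index % 2 == 0) & (R <= r_edges[-1])).astype(complex)

    # Angular-spectrum propagation of a unit plane wave over distance f
    fx = np.fft.fftfreq(grid, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * f * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)                              # drop evanescent components
    field = np.fft.ifft2(np.fft.fft2(t) * H)
    return np.abs(field)**2                        # intensity at the focus
```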

