A Real-time Underwater Object Detection Algorithm for Multi-beam Forward Looking Sonar

2012 ◽  
Vol 45 (5) ◽  
pp. 306-311 ◽  
Author(s):  
Enric Galceran ◽  
Vladimir Djapic ◽  
Marc Carreras ◽  
David P Williams

2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zuopeng Zhao ◽  
Zhongxin Zhang ◽  
Xinzheng Xu ◽  
Yi Xu ◽  
Hualin Yan ◽  
...  

It is necessary to improve the performance of object detection algorithms on resource-constrained embedded devices through lightweight design. To further improve recognition accuracy for small target objects, this paper integrates 5 × 5 depthwise separable convolution kernels into the MobileNetV2-SSDLite model, extracts features from two additional convolutional layers for detection, and designs a new lightweight object detection network, the Lightweight Microscopic Detection Network (LMS-DN). The network can be deployed on embedded devices such as the NVIDIA Jetson TX2. Experimental results show that LMS-DN achieves higher recognition accuracy and stronger robustness to interference than other popular object detection models while requiring fewer parameters and lower computational cost.
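As a minimal sketch of the building block this abstract describes, the following PyTorch snippet (an assumed framework; the abstract does not name one) implements a 5 × 5 depthwise separable convolution of the kind added to MobileNetV2-SSDLite. The channel sizes and feature-map resolution are illustrative and not taken from LMS-DN.

```python
import torch
import torch.nn as nn

class DepthwiseSeparable5x5(nn.Module):
    """A 5x5 depthwise convolution followed by a 1x1 pointwise convolution,
    the kind of block the abstract describes adding to MobileNetV2-SSDLite."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=5, stride=stride,
                                   padding=2, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

# Example: a 38x38 feature map with 96 channels, as might come from an
# intermediate MobileNetV2 stage (sizes are illustrative, not from the paper).
feat = torch.randn(1, 96, 38, 38)
block = DepthwiseSeparable5x5(96, 160)
print(block(feat).shape)  # torch.Size([1, 160, 38, 38])
```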


2016 ◽  
Vol 54 (11) ◽  
pp. 6833-6842 ◽  
Author(s):  
Sung-Ho Cho ◽  
Hyun-Key Jung ◽  
Hyosun Lee ◽  
Hyoungrea Rim ◽  
Seong Kon Lee

2019 ◽  
Vol 77 ◽  
pp. 398-408 ◽  
Author(s):  
Shengyu Lu ◽  
Beizhan Wang ◽  
Hongji Wang ◽  
Lihao Chen ◽  
Ma Linjian ◽  
...  

Author(s):  
Garv Modwel ◽  
Anu Mehra ◽  
Nitin Rakesh ◽  
K K Mishra

Background: An object detection algorithm scans every frame of a video to detect the objects present, which is time-consuming. This becomes undesirable in a real-time system, which must act within a predefined time constraint. A quick response requires reliable detection and recognition of objects. Methods: To address this problem, a hybrid method is implemented. The hybrid method combines three algorithms to reduce the scanning effort per frame. A Recursive Density Estimation (RDE) algorithm decides which frames need to be scanned. The You Only Look Once (YOLO) algorithm performs detection and recognition in the selected frames. Detected objects are then tracked in subsequent frames using the Speeded-Up Robust Features (SURF) algorithm. Results: Through the experimental study, we demonstrate that the hybrid algorithm is more efficient than comparable individual algorithms. It achieves high accuracy and low latency, which is necessary for real-time processing. Conclusion: The hybrid algorithm detects with a minimum accuracy of 97 percent across all conducted experiments, and the observed time lag is negligible, which makes it well suited for real-time applications.
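A rough, self-contained sketch of the frame-gating idea follows: a recursive density estimate over a simple per-frame feature flags frames whose density drops (a scene change) for full YOLO detection, leaving the remaining frames to a feature tracker such as SURF. The histogram feature, threshold, and overall structure are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def rde_gate(frames, threshold=0.75):
    """RDE-style gating: only frames whose recursive density relative to the
    running data cloud drops below a threshold are flagged for full detection
    (e.g. YOLO); other frames can be left to a tracker such as SURF matching.
    The per-frame feature (a 16-bin intensity histogram) and the threshold are
    illustrative choices, not taken from the paper."""
    mean = None      # recursive mean of frame feature vectors
    scalar = 0.0     # recursive mean of squared feature norms
    selected = []
    for k, frame in enumerate(frames, start=1):
        hist, _ = np.histogram(frame, bins=16, range=(0, 256))
        x = hist / max(hist.sum(), 1)          # normalized intensity histogram
        if mean is None:
            mean, scalar = x.copy(), float(x @ x)
            selected.append(k)                 # always scan the first frame
            continue
        mean = ((k - 1) / k) * mean + x / k
        scalar = ((k - 1) / k) * scalar + float(x @ x) / k
        density = 1.0 / (1.0 + float(x @ x) - 2.0 * float(x @ mean) + scalar)
        if density < threshold:                # scene changed enough
            selected.append(k)                 # hand this frame to the detector
    return selected

# Random noise frames have nearly identical histograms, so typically only the
# first frame is selected; a real scene change in a video would also be flagged.
frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
print(rde_gate(frames))
```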


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Xuewei Wang ◽  
Jun Liu ◽  
Xiaoning Zhu

Abstract

Background: Research on early object detection methods for crop diseases and pests in the natural environment has been an important direction in computer vision, complex image processing and machine learning. Because of the complexity of early images of tomato diseases and pests in the natural environment, traditional methods cannot achieve real-time and accurate detection.

Results: To handle the complex backgrounds of early tomato disease and pest image objects in the natural environment, an improved object detection algorithm based on YOLOv3 for early real-time detection of tomato diseases and pests is proposed. Firstly, to cope with the complex backgrounds of tomato disease and pest images under natural conditions, dilated convolution layers replace convolution layers in the backbone network to maintain high resolution and a large receptive field and to improve small-object detection. Secondly, in the detection network, occluded tomato disease and pest objects are retained according to the intersection over union (IoU) of candidate boxes and a linearly attenuated confidence score predicted by multiple grids, which solves the problem of detecting mutually occluded objects. Thirdly, to reduce model size and the number of parameters, the network is made lightweight using convolution factorization. Finally, a balance factor is introduced to optimize the weight of small objects in the loss function. Test results for nine common tomato diseases and pests under six different background conditions are statistically analyzed. The proposed method achieves an F1 score of 94.77%, an AP of 91.81%, a false detection rate of only 2.1%, and a detection time of only 55 ms. The test results show that the method is suitable for early detection of tomato diseases and pests using large-scale video images collected by the agricultural Internet of Things.

Conclusions: At present, most computer-vision object detection of diseases and pests must be carried out in a controlled environment (for example, picking diseased leaves and placing them under supplemental lighting to obtain optimal imaging conditions). For images taken by Internet of Things monitoring cameras in the field, factors such as light intensity and weather changes make the images vary widely, and existing methods cannot work reliably. The proposed method has been applied to actual tomato production scenarios and shows good detection performance. The experimental results show that the method improves the detection of small objects and occluded leaves, and its recognition under different background conditions is better than that of existing object detection algorithms. The results show that the method is feasible for detecting tomato diseases and pests in the natural environment.
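The occlusion-handling step described above, in which overlapping candidate boxes keep a linearly attenuated confidence instead of being discarded, can be sketched as a generic linear soft-NMS. The code below is an illustrative stand-in, not the paper's exact detection head; the thresholds and example boxes are assumptions.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def linear_soft_nms(boxes, scores, iou_thresh=0.5, score_thresh=0.05):
    """Soft-NMS with linear confidence attenuation: a candidate box that
    overlaps a higher-scoring box is not discarded; its confidence is scaled
    by (1 - IoU), so heavily occluded but distinct objects can survive."""
    boxes = boxes.astype(np.float64)
    scores = scores.astype(np.float64)
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            iou = box_iou(boxes[best], boxes[i])
            if iou > iou_thresh:
                scores[i] *= (1.0 - iou)   # attenuate instead of removing
    return keep

# Two heavily overlapping detections (e.g. occluding leaves): hard NMS would
# drop the second box; linear attenuation keeps it with a reduced score.
boxes = np.array([[10, 10, 60, 60], [20, 15, 70, 65], [200, 200, 240, 240]])
scores = np.array([0.9, 0.8, 0.7])
print(linear_soft_nms(boxes, scores))
```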


The primary issue faced by visually challenged people around the globe is a lack of independence. They feel dependent for every task they want to perform in their daily lives, and this acts as an obstacle to the things they would otherwise want to do. This paper proposes a solution in the form of wearable smart spectacles built on the Raspberry Pi platform to make visually challenged people self-sufficient and able to move freely in both known and unknown surroundings. The smart spectacles use a USB (Universal Serial Bus) camera to detect objects in the vicinity in real time with the SSD-MobileNet object detection algorithm and provide vision in the form of audio through headphones. The spectacles also combine an OCR algorithm for text detection and propose a module for quick and accurate currency recognition by the visually challenged.
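A minimal sketch of such a pipeline follows, assuming an OpenCV-readable SSD-MobileNet model, a USB camera at index 0, and the pyttsx3 text-to-speech engine for the headphone output; the abstract does not specify the exact software stack, so the model files and class map below are placeholders.

```python
import cv2
import pyttsx3

# Hypothetical model files for an SSD-MobileNet trained on COCO; the exact
# model the paper uses is not specified, so these paths are placeholders.
MODEL_PB = "ssd_mobilenet_frozen_inference_graph.pb"
MODEL_PBTXT = "ssd_mobilenet.pbtxt"
CLASS_NAMES = {1: "person", 3: "car", 44: "bottle"}  # partial map, illustrative

net = cv2.dnn.readNetFromTensorflow(MODEL_PB, MODEL_PBTXT)
tts = pyttsx3.init()               # text-to-speech for the headphone output
cap = cv2.VideoCapture(0)          # USB camera on the Raspberry Pi

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
    net.setInput(blob)
    detections = net.forward()     # shape: [1, 1, N, 7]
    spoken = set()
    for det in detections[0, 0]:
        confidence, class_id = float(det[2]), int(det[1])
        label = CLASS_NAMES.get(class_id)
        if confidence > 0.5 and label and label not in spoken:
            tts.say(f"{label} ahead")   # announce each class once per frame
            spoken.add(label)
    tts.runAndWait()

cap.release()
```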

