Dimension Measurement and Key Point Detection of Boxes through Laser-Triangulation and Deep Learning-Based Techniques

2019 ◽  
Vol 10 (1) ◽  
pp. 26 ◽  
Author(s):  
Tao Peng ◽  
Zhijiang Zhang ◽  
Fansheng Chen ◽  
Dan Zeng

Dimension measurement is of utmost importance in the logistics industry. This work studies a hand-held structured-light vision system for boxes. The system measures dimensions through laser triangulation and deep learning, using only two laser-box images from a camera and a cross-line laser projector. The structured edge maps of the boxes are detected by a novel end-to-end deep learning model based on a trimmed holistically-nested edge detection network. The precise geometry of the box is calculated from the 3D coordinates of the key points in the laser-box image through laser triangulation. An optimization method for effectively calibrating the system through maximum likelihood estimation is then proposed. Results show that the proposed key point detection algorithm and the designed laser-vision system can locate boxes and measure their dimensions with high accuracy and reliability. The experimental outcomes show that the system is suitable for portable, automated online measurement of box dimensions.
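The core laser-triangulation step described above amounts to intersecting the camera ray through a detected key point with the calibrated laser plane. A minimal sketch, assuming pinhole intrinsics and a plane equation n·X = d (all numeric values illustrative, not from the paper):

```python
# Hedged sketch of laser triangulation: back-project a pixel through a
# pinhole camera and intersect the ray with the calibrated laser plane.

def triangulate(u, v, fx, fy, cx, cy, n, d):
    """Intersect the back-projected ray of pixel (u, v) with plane n.X = d.
    fx, fy, cx, cy are pinhole intrinsics; n is the plane normal."""
    # Ray direction in camera coordinates (unnormalized)
    ray = ((u - cx) / fx, (v - cy) / fy, 1.0)
    denom = sum(ni * ri for ni, ri in zip(n, ray))
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser plane")
    t = d / denom
    return tuple(t * ri for ri in ray)

# Example: a laser plane at X = 0.2 m, seen by a camera with f = 1000 px
point = triangulate(u=960, v=540, fx=1000.0, fy=1000.0, cx=640.0, cy=540.0,
                    n=(1.0, 0.0, 0.0), d=0.2)
```

Repeating this for each key point on the laser stripe yields the 3D coordinates from which the box geometry is computed.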

Author(s):  
Shubham Kakirde ◽  
Shubham Jain ◽  
Swaraj Kaondal ◽  
Reena Kumbhare ◽  
Rita Das

In this fast-paced world, it is inevitable that the manual labor employed in industries will be replaced by automated counterparts. A number of existing solutions deal with object dimension estimation, but only a few are suitable for industrial deployment, owing to the trade-off between cost, processing time, accuracy, and system complexity. The proposed system aims to automate these tasks with a single camera and a line laser module per conveyor belt setup, using the laser triangulation method to measure height and an edge detection algorithm to measure the length and breadth of the object. The minimal equipment makes the system simple as well as power- and time-efficient. The proposed system has an average error of around 3% in dimension estimation.
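The conveyor-belt geometry above can be sketched with an overhead camera at height H and a line laser tilted at angle θ from vertical: a box of height h shifts the stripe by Δu pixels, giving h = Δu·H / (f·tanθ + Δu), and the top-face edges scale with the reduced depth (H − h). A minimal sketch under these assumed geometry conventions (not the authors' exact calibration):

```python
import math

def box_height(delta_u, H, f, theta_deg):
    """Height from laser stripe shift delta_u (pixels), camera height H (m),
    focal length f (pixels), and laser angle theta from vertical (degrees)."""
    t = math.tan(math.radians(theta_deg))
    return delta_u * H / (f * t + delta_u)

def edge_length(pixels, H, h, f):
    """Top-face edge length from its pixel extent, scaled by depth (H - h)."""
    return pixels * (H - h) / f

# Example: camera 2 m above the belt, f = 1000 px, laser at 45 degrees
h = box_height(delta_u=250, H=2.0, f=1000.0, theta_deg=45.0)   # height (m)
length = edge_length(pixels=500, H=2.0, h=h, f=1000.0)         # edge (m)
```

In practice H, f, and θ come from an offline calibration step rather than being known a priori.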


2020 ◽  
Vol 10 (10) ◽  
pp. 3544 ◽  
Author(s):  
Mahdi Bahaghighat ◽  
Qin Xin ◽  
Seyed Ahmad Motamedi ◽  
Morteza Mohammadi Zanjireh ◽  
Antoine Vacavant

Today, energy issues are more important than ever. Because of environmental concerns, clean and renewable energies such as wind power have been widely welcomed globally, especially in developing countries. Worldwide development of these technologies leads to the use of intelligent systems for monitoring and maintenance purposes. Meanwhile, deep learning, as a new area of machine learning, is developing rapidly. Its strong performance on computer vision problems has led us to build a high-accuracy intelligent machine vision system based on deep learning to estimate wind turbine angular velocity remotely. This velocity, along with other information such as pitch angle and yaw angle, can be used to estimate wind farm energy production. For this purpose, we used the SSD (Single Shot MultiBox Detector) object detection algorithm and classification methods based on the DenseNet, SqueezeNet, ResNet50, and InceptionV3 models. The results indicate that the proposed system can estimate rotational speed with about 99.05% accuracy.
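The abstract does not detail how per-frame detections become an angular velocity; one plausible post-processing sketch (purely illustrative, not the authors' code) unwraps per-frame blade-angle estimates across the 0°/360° boundary and converts the total rotation into RPM:

```python
def estimate_rpm(angles_deg, times_s):
    """Estimate rotational speed (RPM) from per-frame blade angles.
    Angles are unwrapped so a 350 -> 10 degree step counts as +20 degrees
    (assumes forward rotation of less than one turn per frame)."""
    unwrapped = [angles_deg[0]]
    for a in angles_deg[1:]:
        prev = unwrapped[-1]
        step = (a - prev) % 360.0   # smallest forward step
        unwrapped.append(prev + step)
    total_deg = unwrapped[-1] - unwrapped[0]
    total_s = times_s[-1] - times_s[0]
    return (total_deg / 360.0) / (total_s / 60.0)

# One frame per second, blade advancing 20 degrees per frame
rpm = estimate_rpm([350.0, 10.0, 30.0, 50.0], [0.0, 1.0, 2.0, 3.0])
```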


2021 ◽  
Vol 2078 (1) ◽  
pp. 012016
Author(s):  
Jiabin Wang ◽  
Faqin Gao

Abstract Traditional visual inertial odometry extracts key points according to manually designed rules. However, manually designed extraction rules are easily affected by illumination and viewpoint changes and have poor robustness in such scenes, resulting in a decline in positioning accuracy. Deep learning methods show strong robustness in key point extraction. To improve the positioning accuracy of visual inertial odometry under illumination and viewpoint changes, deep learning is introduced into the visual inertial odometry system for key point detection. The encoder of the MagicPoint network is improved with depthwise separable convolutions, and the network is then trained by a self-supervised method. A deep learning-based visual inertial odometry system is composed by using the trained network to replace the traditional key point detection algorithm on the basis of VINS. The key point detection network is tested on the HPatches dataset, and the odometry positioning accuracy is evaluated on the EuRoC dataset. The results show that the improved deep learning-based visual inertial odometry reduces the positioning error by more than 5% without affecting real-time performance.
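The reason a depthwise separable convolution lightens the encoder is that it factors a k×k convolution into a k×k per-channel (depthwise) filter plus a 1×1 (pointwise) mixing step. A small parameter-count comparison makes the saving concrete (layer sizes here are illustrative, not the MagicPoint dimensions):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dws_conv_params(k, c_in, c_out):
    """Depthwise separable: k x k depthwise + 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Example 3x3 layer with 64 input and 64 output channels
standard = conv_params(3, 64, 64)       # 36864 weights
separable = dws_conv_params(3, 64, 64)  # 576 + 4096 = 4672 weights
```

For a 3×3 kernel the separable form uses roughly an eighth of the parameters (and multiply-adds), which is what makes the modified encoder viable in a real-time odometry loop.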


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
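The MCC tracking step is not specified in detail in the abstract; a minimal nearest-neighbour centroid tracker (illustrative only, not the authors' algorithm) shows how per-frame detections can be linked into tracks so each insect is counted once:

```python
def track(frames, max_dist=50.0):
    """Greedy nearest-neighbour tracking of detection centroids across
    consecutive frames. Returns the number of distinct tracks started.
    Simplified: a track ends as soon as it misses one frame."""
    next_id = 0
    active = {}                      # track id -> last centroid
    for detections in frames:
        assigned = {}
        for (x, y) in detections:
            best, best_d = None, max_dist
            for tid, (px, py) in active.items():
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d < best_d and tid not in assigned:
                    best, best_d = tid, d
            if best is None:         # no nearby track: start a new one
                best = next_id
                next_id += 1
            assigned[best] = (x, y)
        active = assigned
    return next_id

# One moth drifting slowly, a second appearing in frame 2
count = track([[(10, 10)], [(14, 12), (200, 50)], [(205, 55)]])
```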


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather capability. Ship monitoring in national territorial waters supports maritime law enforcement, traffic control, and national maritime security, so ship detection has been a research hot spot and focus. Since the field moved from traditional detection methods to deep learning-based methods, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanting optical image detection approaches that ignore the low signal-to-noise ratio, low resolution, single-channel nature, and other characteristics imposed by the SAR imaging principle. Detection accuracy has been pursued at the expense of detection speed and practical deployment: almost all such algorithms rely on powerful clustered desktop GPUs and cannot be deployed on the front line of marine monitoring to cope with changing conditions. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of image information and the network's ability to extract features; the architecture and models are built on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time deployment, significantly reducing the model size, detection time, number of parameters, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy loss due to light-weighting.
The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value in maritime safety monitoring and emergency rescue.
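Since SAR imagery is single-channel, feeding a three-channel detector requires some fusion scheme. The paper's exact channels are not given in this abstract; one illustrative choice stacks the original image with a smoothed and a gradient channel so the network sees complementary views of the same scene:

```python
def fuse_channels(img):
    """Build a 3-channel input from a single-channel SAR image (list of
    rows): original, 3x3 mean-filtered, and horizontal-gradient channels.
    Illustrative fusion only; the paper's actual channels may differ."""
    h, w = len(img), len(img[0])

    def mean3(y, x):
        # 3x3 neighbourhood mean, clipped at the image border
        vals = [img[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                          for i in range(max(0, x - 1), min(w, x + 2))]
        return sum(vals) / len(vals)

    smooth = [[mean3(y, x) for x in range(w)] for y in range(h)]
    grad = [[abs(img[y][min(x + 1, w - 1)] - img[y][x]) for x in range(w)]
            for y in range(h)]
    return [img, smooth, grad]

# Tiny 3x3 example with a bright right edge
channels = fuse_channels([[0, 0, 8], [0, 0, 8], [0, 0, 8]])
```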


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2052
Author(s):  
Xinghai Yang ◽  
Fengjiao Wang ◽  
Zhiquan Bai ◽  
Feifei Xun ◽  
Yulin Zhang ◽  
...  

In this paper, a deep learning-based traffic state discrimination method is proposed to detect traffic congestion at urban intersections. The detection algorithm includes two parts: global speed detection and a traffic state discrimination algorithm. First, the region of interest (ROI) is selected as the road intersection from the input image of the You Only Look Once (YOLO) v3 object detector for vehicle detection. The Lucas-Kanade (LK) optical flow method is employed to calculate vehicle speed. Then, the corresponding intersection state is obtained from the vehicle speeds and the discrimination algorithm. The vehicle positions obtained by YOLOv3 serve as input to the LK optical flow algorithm, which forms optical flow vectors to complete the vehicle speed detection. Experimental results show that the algorithm detects vehicle speed and judges the traffic state accurately, has strong anti-interference ability, and meets practical application requirements.
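The last step, turning flow vectors into a congestion verdict, can be sketched in a few lines. The ground-plane scale, frame interval, and congestion threshold below are illustrative assumptions, not values from the paper:

```python
def vehicle_speed_kmh(dx_px, dy_px, m_per_px, dt_s):
    """Speed from an LK flow vector (pixels per frame), a calibrated
    ground-plane scale (metres per pixel), and the frame interval (s)."""
    dist_m = (dx_px ** 2 + dy_px ** 2) ** 0.5 * m_per_px
    return dist_m / dt_s * 3.6          # m/s -> km/h

def traffic_state(speeds_kmh, congested_below=10.0):
    """Simple discrimination rule: congested if the mean speed is low."""
    mean = sum(speeds_kmh) / len(speeds_kmh)
    return "congested" if mean < congested_below else "free-flow"

# Five vehicles, each moving a 5-pixel flow step per 25 fps frame
speeds = [vehicle_speed_kmh(3.0, 4.0, m_per_px=0.05, dt_s=0.04)
          for _ in range(5)]
state = traffic_state(speeds)
```

A production system would aggregate speeds over a sliding time window and per lane rather than a single frame.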

