Real-time Factory Smoke Detection Based on Two-stage Relation-guided Algorithm

Author(s):  
Zhenyu Wang ◽  
Senrong Ji ◽  
Duokun Yin

Abstract: Recently, using image sensing devices to analyze air quality has attracted much attention from researchers. To keep real-time factory smoke emissions under universal social supervision, this paper proposes an efficient smoke detection algorithm, based on image analysis techniques, that runs on mobile platforms. Since most smoke images captured in real scenes exhibit challenging variances, they are difficult for existing object detection methods to handle. To this end, we introduce the two-stage smoke detection (TSSD) algorithm built on a lightweight framework, in which prior knowledge and contextual information are modeled in a relation-guided module to reduce the smoke search space, thereby significantly mitigating the shortcomings of single-stage methods. Experimental results show that the TSSD algorithm robustly improves the detection accuracy of the single-stage method and is compatible with different input image resolutions. Compared with various state-of-the-art detection methods, the mean AP of the TSSD model reaches 59.24%, even surpassing the Faster R-CNN detection model. In addition, the inference time of the proposed model is 50 ms per frame (20 FPS), which meets real-time requirements and allows deployment on mobile terminals. The model can be widely used in scenes with smoke detection requirements, offering great potential for practical environmental applications.
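A minimal sketch of a two-stage, prior-guided detection pipeline in the spirit described above (not the authors' code): stage one uses prior knowledge to restrict the search space to a few candidate windows, and stage two classifies only those candidates. The names `chimney_priors` and `smoke_classifier` are assumptions introduced for illustration.

```python
# Illustrative sketch only; assumes known chimney locations as the prior and
# a placeholder smoke classifier. Not the paper's TSSD implementation.
import numpy as np

def stage_one_candidates(image, chimney_priors, margin=32):
    """Stage 1: restrict the search space to windows around prior locations."""
    h, w = image.shape[:2]
    candidates = []
    for (cx, cy) in chimney_priors:
        x0 = max(0, cx - margin)
        y0 = max(0, cy - 2 * margin)   # smoke rises above the stack (smaller y)
        x1 = min(w, cx + margin)
        y1 = min(h, cy + margin)
        candidates.append((x0, y0, x1, y1))
    return candidates

def stage_two_classify(image, candidates, smoke_classifier, threshold=0.5):
    """Stage 2: run a lightweight classifier only on the candidate crops."""
    detections = []
    for (x0, y0, x1, y1) in candidates:
        crop = image[y0:y1, x0:x1]
        score = smoke_classifier(crop)          # returns a smoke probability
        if score >= threshold:
            detections.append(((x0, y0, x1, y1), float(score)))
    return detections

if __name__ == "__main__":
    img = np.random.rand(480, 640, 3)                 # stand-in frame
    priors = [(120, 300), (500, 280)]                 # assumed chimney pixels
    dummy_clf = lambda crop: float(crop.mean())       # placeholder classifier
    print(stage_two_classify(img, stage_one_candidates(img, priors), dummy_clf))
```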

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information provided by lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision with lidar point cloud information. Firstly, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped into the image to obtain several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm achieves high detection accuracy and good real-time performance. Compared with detection based only on the YOLO deep learning model, the mean average precision (mAP) is increased by 17%.
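A rough sketch of the lidar-to-image ROI generation step described above (grid projection, obstacle cells, image ROIs, merged ROI), under simplifying assumptions: the camera projection matrix `P`, the coarse 3D box size, and the final `detect_vehicles` call are placeholders, not the paper's exact procedure.

```python
# Sketch of ROI generation from a lidar point cloud; assumes a 3x4 lidar-to-image
# projection matrix P is available. Not the authors' implementation.
import numpy as np

def occupied_cells(points, cell=0.2, min_pts=5):
    """Project lidar points onto a ground grid and keep cells with enough hits."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells, counts = np.unique(ij, axis=0, return_counts=True)
    return cells[counts >= min_pts] * cell   # cell corners in metres

def cells_to_image_rois(cells, P, box=(0.5, 0.5, 2.0), pad=20):
    """Project each occupied cell (as a coarse 3D box) into the image plane."""
    rois = []
    for x, y in cells:
        corners = np.array([[x, y, 0.0, 1.0],
                            [x + box[0], y + box[1], box[2], 1.0]]).T
        uvw = P @ corners                     # homogeneous image coordinates
        uv = (uvw[:2] / uvw[2]).T
        (u0, v0), (u1, v1) = uv.min(0), uv.max(0)
        rois.append([u0 - pad, v0 - pad, u1 + pad, v1 + pad])
    return rois

def merge_rois(rois):
    """Merge all expanded ROIs into one final ROI for the image detector."""
    r = np.array(rois)
    return [r[:, 0].min(), r[:, 1].min(), r[:, 2].max(), r[:, 3].max()]

# usage sketch (P, lidar_xyz, image cropping, and the YOLO model are assumed):
# final_roi = merge_rois(cells_to_image_rois(occupied_cells(lidar_xyz), P))
# vehicles = detect_vehicles(crop_image(image, final_roi))
```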


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the most important technical means of marine monitoring in the field of remote sensing, owing to its all-day, all-weather capability. Monitoring ships in national territorial waters supports maritime law enforcement, traffic control, and national maritime security, so ship detection has long been a research hot spot. As research has moved from traditional detection methods to deep-learning-based ones, most work has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while porting optical-image detectors to SAR has ignored the low signal-to-noise ratio, low resolution, single channel, and other characteristics imposed by the SAR imaging principle. Detection accuracy has been pursued at the expense of detection speed and practical deployment: almost all algorithms rely on powerful desktop GPU clusters, which cannot be deployed on the front line of marine monitoring to cope with changing conditions. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and the network's ability to extract features; the architecture and models are built on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network is tailored for real-time deployment, significantly reducing the model size, detection time, number of parameters, and memory consumption, and is refined for three-channel inputs to compensate for the accuracy loss caused by the light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper therefore has great practical value for maritime safety monitoring and emergency rescue.
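To illustrate the multi-channel fusion idea, the following sketch turns a single-channel SAR amplitude image into a three-channel input for a YOLO-style detector. The specific channels (raw amplitude, median-despeckled amplitude, Sobel edge map) are assumptions chosen for illustration, not the channels used in the paper.

```python
# Hedged sketch: build a 3-channel tensor from a single-channel SAR chip.
import numpy as np
from scipy.ndimage import median_filter, sobel

def sar_to_three_channels(amplitude):
    a = amplitude.astype(np.float32)
    a = (a - a.min()) / (np.ptp(a) + 1e-6)           # normalise to [0, 1]
    despeckled = median_filter(a, size=3)            # crude speckle suppression
    edges = np.hypot(sobel(despeckled, 0), sobel(despeckled, 1))
    edges /= edges.max() + 1e-6
    return np.stack([a, despeckled, edges], axis=-1)  # HxWx3 for the detector

# x = sar_to_three_channels(sar_chip)   # then feed x to the YOLO-V4-light model
```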


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Zhaoli Wu ◽  
Xin Wang ◽  
Chao Chen

Due to constraints on energy and power consumption, embedded platforms cannot meet the real-time requirements of far-infrared pedestrian detection algorithms. To solve this problem, this paper proposes a new real-time infrared pedestrian detection algorithm (RepVGG-YOLOv4, Rep-YOLO). RepVGG is used to reconstruct the YOLOv4 backbone network, reducing the number of model parameters and computations and improving detection speed; spatial pyramid pooling (SPP) captures information from different receptive fields to improve detection accuracy; and channel-pruning compression removes redundant parameters, shrinking the model size and computational complexity. The experimental results show that, compared with the YOLOv4 detection algorithm, Rep-YOLO reduces the model size by 90% and floating-point operations by 93.4%, increases inference speed by a factor of 4, and reaches a detection accuracy of 93.25% after compression.
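For reference, the SPP block mentioned above is commonly implemented as parallel max-pooling branches with different kernel sizes whose outputs are concatenated with the input. A minimal PyTorch sketch of that standard block follows; it is not the authors' exact implementation, and the kernel sizes are the usual YOLOv4 defaults assumed here.

```python
# Minimal SPP block sketch (YOLO-style): same spatial size, concatenated channels.
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # Each pooled map keeps the spatial size, so features with different
        # receptive fields can be concatenated along the channel dimension.
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

# x = torch.randn(1, 512, 13, 13); SPP()(x).shape -> torch.Size([1, 2048, 13, 13])
```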


Author(s):  
Guoqing Zhou ◽  
Xiang Zhou ◽  
Tao Yue ◽  
Yilong Liu

This paper presents a method that combines a traditional threshold method with an SVM classifier to detect clouds in Landsat-8 images. The proposed method is implemented on a DSP for real-time cloud detection; the DSP platform connects to an emulator and a personal computer. The threshold method is first used to obtain a coarse cloud detection result, and the SVM classifier is then applied to achieve high cloud detection accuracy. More than 200 cloudy Landsat-8 images were used to test the proposed method. Comparing the proposed method with the SVM method shows that, for every image, the cloud detection accuracy of the proposed algorithm is higher than that of the SVM algorithm. The experimental results demonstrate that the DSP implementation of the proposed method can realize accurate cloud detection in real time.
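A rough sketch of the two-step idea above: a cheap brightness threshold produces a coarse cloud mask, and a trained SVM then refines only the candidate pixels. The per-pixel feature choice (raw band values) and the threshold value are assumptions for illustration, not the paper's settings.

```python
# Coarse-threshold + SVM refinement sketch over an HxWxC reflectance array.
import numpy as np
from sklearn.svm import SVC

def detect_clouds(bands, svm, threshold=0.35):
    """bands: HxWxC reflectance array; svm: trained SVC over C-dim pixel features."""
    brightness = bands.mean(axis=-1)
    coarse = brightness > threshold                      # step 1: cheap threshold
    feats = bands[coarse].reshape(-1, bands.shape[-1])   # only candidate pixels
    refined = np.zeros(coarse.shape, dtype=bool)
    if feats.size:
        refined[coarse] = svm.predict(feats) == 1        # step 2: SVM refinement
    return refined

# training sketch: svm = SVC(kernel="rbf").fit(pixel_features, pixel_labels)
```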


2013 ◽  
Vol 129 ◽  
pp. 561-567 ◽  
Author(s):  
Jaime Massanet-Nicolau ◽  
Richard Dinsdale ◽  
Alan Guwy ◽  
Gary Shipley

Agriculture ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 1238
Author(s):  
Xiaoyu Li ◽  
Yuefeng Du ◽  
Lin Yao ◽  
Jun Wu ◽  
Lei Liu

At present, the wide application of convolutional neural network (CNN) algorithms has greatly improved the intelligence of agricultural machinery. Accurate, real-time detection under outdoor conditions is necessary for realizing intelligent and automated corn harvesting. In view of the problems of existing methods for judging the integrity of corn kernels, such as low accuracy, poor reliability, and difficulty in adapting to the complicated and changeable harvesting environment, this paper develops a broken corn kernel detection device for combine harvesters using the yolov4-tiny model. The hardware is first designed to acquire continuous, non-overlapping images of corn kernels. Based on the collected images, the yolov4-tiny model is then trained to recognize intact and broken corn kernel samples. Next, a broken corn kernel detection algorithm is developed. Finally, experiments are carried out to verify the effectiveness of the detection device. Laboratory results show that the accuracy of the yolov4-tiny model is 93.5% for intact kernels and 93.0% for broken kernels, with precision, recall, and F1 score of 92.8%, 93.5%, and 93.11%, respectively. Field experiments show that the broken kernel rate obtained by the detection device agrees well with the manually computed statistic, with a difference of only 0.8%. This study provides a technical reference for real-time detection of the broken corn kernel rate.
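As a small illustration of the final step, the broken kernel rate can be computed from per-image detector output. The detection format (a list of class/confidence pairs) and the class names are assumptions, not the paper's data structures.

```python
# Illustrative broken-kernel-rate computation from detector output.
def broken_kernel_rate(detections, score_thresh=0.5):
    """detections: iterable of (class_name, confidence), e.g. from yolov4-tiny."""
    intact = broken = 0
    for cls, score in detections:
        if score < score_thresh:
            continue
        if cls == "broken":
            broken += 1
        elif cls == "intact":
            intact += 1
    total = intact + broken
    return broken / total if total else 0.0

# print(broken_kernel_rate([("intact", 0.9), ("broken", 0.8), ("intact", 0.7)]))
```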


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 2038
Author(s):  
Zhen Tao ◽  
Shiwei Ren ◽  
Yueting Shi ◽  
Xiaohua Wang ◽  
Weijiang Wang

Railway transportation has always occupied an important position in daily life and social progress. In recent years, computer vision has made promising breakthroughs in intelligent transportation, providing new ideas for detecting rail lines. Yet the majority of rail line detection algorithms use traditional image processing to extract features, and their detection accuracy and real-time performance remain to be improved. This paper goes beyond these limitations and proposes a rail line detection algorithm based on deep learning. First, an accurate and lightweight network, RailNet, is designed, which takes full advantage of the powerful semantic feature extraction capabilities of deep convolutional neural networks to obtain high-level features of rail lines. A Segmentation Soul (SS) module is added to the RailNet structure, improving segmentation performance without any additional inference time, and Depthwise Convolution (DWConv) is introduced to reduce the number of network parameters and ensure real-time detection. Afterward, based on the binary segmentation maps output by RailNet, we propose a rail line fitting algorithm using sliding-window detection and apply an inverse perspective transformation; the polynomial functions and curvature of the rail lines are then calculated, and the rail lines are identified in the original images. Furthermore, we collect a real-world rail line dataset named RAWRail. The proposed algorithm has been fully validated on the RAWRail dataset, running at 74 FPS with an accuracy of 98.6%, which is superior to current rail line detection algorithms and shows strong potential for real applications.
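The following sketch illustrates the sliding-window fitting step on a bird's-eye-view binary rail mask: seed from a column histogram peak near the bottom, track the rail upward window by window, then fit a second-order polynomial x = f(y). Window counts, margins, and the polynomial order are assumptions for illustration, not the paper's parameters.

```python
# Sliding-window rail fitting sketch over a binary (bird's-eye-view) mask.
import numpy as np

def fit_rail(mask, n_windows=9, margin=40, min_pix=30):
    ys, xs = np.nonzero(mask)
    h = mask.shape[0]
    x_cur = np.argmax(mask[h // 2:].sum(axis=0))      # seed from column histogram
    win_h = h // n_windows
    keep = []
    for i in range(n_windows):
        y_lo, y_hi = h - (i + 1) * win_h, h - i * win_h
        in_win = (ys >= y_lo) & (ys < y_hi) & (np.abs(xs - x_cur) < margin)
        idx = np.nonzero(in_win)[0]
        keep.append(idx)
        if idx.size > min_pix:
            x_cur = int(xs[idx].mean())               # re-centre the next window
    idx = np.concatenate(keep)
    return np.polyfit(ys[idx], xs[idx], 2)            # coefficients of x = f(y)

# coeffs = fit_rail(birdseye_mask); x_at_y = np.polyval(coeffs, y_values)
```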


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Xi Cheng

Most existing smoke detection methods rely on manual operation, which makes it difficult to meet the needs of fire monitoring. To further improve the accuracy of smoke detection, an automatic feature extraction and classification method based on the Fast Region-based Convolutional Neural Network (Fast R-CNN) is introduced in this study. The method uses a selective search algorithm to obtain candidate regions of the sample images; the coordinates of the preselected regions and the sample image for the visual task are used as inputs for network learning. During training, a feature transfer (transfer learning) strategy is used to compensate for the scarcity of smoke data and limited data sources. Finally, a target detection model strongly tied to the specified visual task, with well-trained weight parameters, is obtained. Experimental results show that this method not only improves detection accuracy but also effectively reduces the false alarm rate, meeting the real-time and accuracy requirements of fire detection. Compared with similar fire detection algorithms, the improved algorithm proposed in this paper is more robust and performs better in both accuracy and speed.
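A hedged PyTorch sketch of the feature-transfer idea: start from a detector pretrained on a large generic dataset, freeze the backbone, and fine-tune only the heads on the small smoke dataset. torchvision's Faster R-CNN is used here as a readily available stand-in for the paper's Fast R-CNN pipeline; the class count and optimizer settings are assumptions.

```python
# Transfer-learning sketch: frozen pretrained backbone, fine-tuned detection head.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")       # pretrained weights
for p in model.backbone.parameters():                    # keep generic features
    p.requires_grad = False

num_classes = 2                                          # background + smoke
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=5e-3, momentum=0.9
)
# ...a standard detection training loop over the smoke dataset follows.
```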


Author(s):  
Yuxia Wang ◽  
Wenzhu Yang ◽  
Tongtong Yuan ◽  
Qian Li

Lower detection accuracy and insufficient ability to detect small objects are the main problems of region-free (region-proposal-free) object detection algorithms. To address these problems, an improved object detection method using feature map refinement and anchor optimization is proposed. Firstly, a reverse fusion operation is performed on each object detection layer, providing the lower layers with more semantic information by fusing detection features from different levels. Secondly, a self-attention module refines each detection feature map, calibrating features across channels and enhancing the expressiveness of local features. In addition, an anchor optimization model is introduced on each anchor-associated feature layer to obtain anchors that are more likely to contain an object and that match its location and size more closely. In this model, semantic features are used to identify and remove negative anchors, reducing the object search space, and preliminary adjustments are made to anchor locations and sizes. Comprehensive experimental results on the PASCAL VOC detection dataset demonstrate the effectiveness of the proposed method. In particular, with VGG-16 and a low-resolution 300×300 input, the proposed method achieves a mAP of 79.1% on the VOC 2007 test set with an inference time of 24.7 milliseconds per image.
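A minimal PyTorch sketch of the reverse-fusion idea: higher-level (more semantic) feature maps are upsampled and fused into the lower-level detection layers, FPN-style. The channel counts and the 1×1 lateral convolutions are assumptions for illustration, not the paper's exact design.

```python
# FPN-style top-down ("reverse") fusion sketch over multi-scale detection features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseFusion(nn.Module):
    def __init__(self, channels=(512, 1024, 512), out_ch=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in channels)

    def forward(self, feats):
        """feats: list of maps ordered from shallow (large) to deep (small)."""
        lat = [l(f) for l, f in zip(self.lateral, feats)]
        fused = [lat[-1]]
        for f in reversed(lat[:-1]):          # deep -> shallow
            up = F.interpolate(fused[-1], size=f.shape[-2:], mode="nearest")
            fused.append(f + up)              # inject semantics into lower layers
        return fused[::-1]                    # back to shallow -> deep order

# maps = [torch.randn(1, c, s, s) for c, s in [(512, 38), (1024, 19), (512, 10)]]
# refined = ReverseFusion()(maps)
```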

