Lightweight Fruit-Detection Algorithm for Edge Computing Applications

2021 ◽  
Vol 12 ◽  
Author(s):  
Wenli Zhang ◽  
Yuxin Liu ◽  
Kaizhen Chen ◽  
Huibin Li ◽  
Yulin Duan ◽  
...  

In recent years, deep-learning-based fruit-detection technology has exhibited excellent performance in modern horticulture research. However, deploying deep learning algorithms in real-time field applications is still challenging, owing to the relatively low image-processing capability of edge devices. Such limitations are becoming a new bottleneck that hinders the utilization of AI algorithms in modern horticulture. In this paper, we propose a lightweight fruit-detection algorithm specifically designed for edge devices. The algorithm is based on Light-CSPNet as the backbone network, together with an improved feature-extraction module, a down-sampling method, and a feature-fusion module, and it ensures real-time detection on edge devices while maintaining fruit-detection accuracy. The proposed algorithm was tested on three edge devices: the NVIDIA Jetson Xavier NX, NVIDIA Jetson TX2, and NVIDIA Jetson NANO. The experimental results show that the average detection precisions of the proposed algorithm on the orange, tomato, and apple datasets are 0.93, 0.847, and 0.850, respectively. With the algorithm deployed, the NVIDIA Jetson Xavier NX reaches detection speeds of 21.3, 24.8, and 22.2 FPS on the three datasets, while the NVIDIA Jetson TX2 reaches 13.9, 14.1, and 14.5 FPS and the NVIDIA Jetson NANO reaches 6.3, 5.0, and 8.5 FPS. Additionally, the proposed algorithm provides a component add/remove function to flexibly adjust the model structure, considering the trade-off between detection accuracy and speed in practical usage.
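To make the component add/remove idea concrete, the following is a minimal sketch of a backbone whose optional stages are toggled by a configuration tuple, trading accuracy for speed. The block design and stage layout are illustrative assumptions, not the paper's actual Light-CSPNet modules.

```python
# Hypothetical sketch: optional backbone stages are added or removed by a
# config flag, so one codebase yields speed-first or accuracy-first variants.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(),
        )
    def forward(self, x):
        return self.body(x)

class ConfigurableBackbone(nn.Module):
    """Stack of stages; each optional stage can be removed for speed."""
    def __init__(self, use_extra_stages=(True, True, False)):
        super().__init__()
        stages = [ConvBlock(3, 32)]
        channels = 32
        for enabled in use_extra_stages:
            if enabled:                      # "add/remove" a component
                stages.append(ConvBlock(channels, channels * 2))
                channels *= 2
        self.stages = nn.Sequential(*stages)
    def forward(self, x):
        return self.stages(x)

fast = ConfigurableBackbone(use_extra_stages=(True, False, False))   # speed-first
accurate = ConfigurableBackbone(use_extra_stages=(True, True, True)) # accuracy-first
print(fast(torch.randn(1, 3, 64, 64)).shape)
print(accurate(torch.randn(1, 3, 64, 64)).shape)
```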

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment-perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods are good at detecting obstacles but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle-classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. First, obstacles are detected by a grid projection method using the lidar point cloud. Then, the obstacles are mapped into the image to obtain several separated regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with a detection method based only on YOLO, the mean average precision (mAP) is increased by 17%.
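The ROI-generation stage can be sketched as follows, assuming a simplified top-down occupancy grid and a union-of-boxes merge in place of the paper's dynamic-threshold expansion; the final ROI would then be mapped into the camera image via the lidar-to-camera extrinsics, cropped, and handed to a YOLO detector.

```python
import numpy as np
from scipy import ndimage

def occupancy_grid(points, cell=0.2, min_pts=5):
    """Project lidar points (N, 3) onto a 2-D ground grid; a cell with
    enough hits is treated as an obstacle."""
    xy = points[:, :2] - points[:, :2].min(axis=0)
    idx = (xy / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)
    return grid >= min_pts

def final_roi(occupied, margin=2):
    """Label connected obstacle cells, expand each bounding box by a margin,
    and merge everything into one final ROI (the union taken here is a
    simplification of the paper's overlap-based merging)."""
    labels, n = ndimage.label(occupied)
    slices = ndimage.find_objects(labels)
    boxes = [(max(s[0].start - margin, 0), max(s[1].start - margin, 0),
              s[0].stop + margin, s[1].stop + margin) for s in slices]
    if not boxes:
        return None
    b = np.array(boxes)
    return b[:, 0].min(), b[:, 1].min(), b[:, 2].max(), b[:, 3].max()

points = np.random.randn(2000, 3) + 10.0   # synthetic cluster of lidar returns
print(final_roi(occupancy_grid(points)))
```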


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3166 ◽  
Author(s):  
Cao ◽  
Song ◽  
Song ◽  
Xiao ◽  
Peng

Lane detection is an important foundation in the development of intelligent vehicles. To address problems such as the low detection accuracy of traditional methods and the poor real-time performance of deep learning-based methodologies, a lane detection algorithm for intelligent vehicles in complex road conditions and dynamic environments was proposed. First, the distorted image was corrected and edges were detected with a superposition threshold algorithm; an aerial view of the lane was then obtained via region-of-interest extraction and inverse perspective transformation. Second, the random sample consensus (RANSAC) algorithm was adopted to fit the lane-line curves based on a third-order B-spline curve model, and fitting evaluation and curvature-radius calculation were then carried out on the curve. Lastly, simulation experiments for the lane detection algorithm were performed using road-driving video under complex road conditions and the TuSimple dataset. The experimental results show that the average detection accuracy on the road-driving video reached 98.49%, with an average processing time of 21.5 ms; on the TuSimple dataset, the average detection accuracy reached 98.42%, with an average processing time of 22.2 ms. Compared with traditional methods and deep learning-based methodologies, this lane detection algorithm has excellent accuracy and real-time performance, high detection efficiency, and strong anti-interference ability, significantly improving both recognition accuracy and average processing time. The proposed algorithm is important for advancing intelligent-vehicle driving assistance and further improving the driving safety of intelligent vehicles.
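Two of the core steps, the inverse perspective transform that produces the aerial view and a robust RANSAC curve fit, can be sketched as below. A cubic polynomial stands in for the paper's third-order B-spline model, and the source/destination points are illustrative values that would come from camera calibration in practice.

```python
import cv2
import numpy as np
from sklearn.linear_model import RANSACRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def birds_eye(img, src_pts, dst_pts):
    """Warp the forward-facing view into a top-down (aerial) view."""
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)
    return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))

def fit_lane(xs, ys):
    """Robustly fit x = f(y) through candidate lane-edge pixels."""
    model = make_pipeline(PolynomialFeatures(degree=3), RANSACRegressor())
    model.fit(ys.reshape(-1, 1), xs)
    return model

img = np.zeros((480, 640), np.uint8)
src = np.float32([[200, 300], [440, 300], [620, 470], [20, 470]])
dst = np.float32([[100, 0], [540, 0], [540, 480], [100, 480]])
top = birds_eye(img, src, dst)

ys = np.arange(0, 480, 10).astype(float)
xs = 300 + 0.001 * (ys - 240) ** 2 + np.random.randn(ys.size) * 2  # synthetic lane
lane = fit_lane(xs, ys)
print(lane.predict(np.array([[240.0]])))  # lane x-position at y = 240
```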


2021 ◽  
Vol 13 (21) ◽  
pp. 4377
Author(s):  
Long Sun ◽  
Jie Chen ◽  
Dazheng Feng ◽  
Mengdao Xing

The unmanned aerial vehicle (UAV) is one of the main instruments of information warfare, used in battlefield cruises, reconnaissance, and military strikes. Rapid detection and accurate recognition of key targets in UAV images are the basis of subsequent military tasks. UAV images are characterized by high resolution and small target size, and practical applications often require fast detection, yet existing algorithms cannot achieve an effective trade-off between detection accuracy and speed. Therefore, this paper proposes a parallel ensemble deep learning framework for UAV video multi-target detection based on a global and local joint detection strategy. It combines a deep learning target detection algorithm with template matching to make full use of image information, and it integrates multi-process and multi-threading mechanisms to speed up processing. Experiments show that the system achieves high detection accuracy for targets at focal lengths varying from 1× to 10×, while displaying detection results stably and in real time on moving UAV video.
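The global/local joint strategy can be illustrated with a stub detector for the global pass and normalized cross-correlation template matching for the local pass, run in parallel processes. The detector stub and queue layout are assumptions, not the paper's implementation.

```python
import multiprocessing as mp
import cv2
import numpy as np

def global_detect(frame):
    """Stand-in for a full-frame deep learning detector."""
    return [(50, 50, 90, 90)]  # dummy box list

def local_match(frame, template):
    """Local search: normalized cross-correlation template matching."""
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    h, w = template.shape[:2]
    return (*top_left, top_left[0] + w, top_left[1] + h), score

def worker(fn, args, q):
    q.put(fn(*args))

if __name__ == "__main__":
    frame = np.random.randint(0, 255, (240, 320), np.uint8)
    template = frame[60:80, 60:80].copy()
    q1, q2 = mp.Queue(), mp.Queue()
    p1 = mp.Process(target=worker, args=(global_detect, (frame,), q1))
    p2 = mp.Process(target=worker, args=(local_match, (frame, template), q2))
    p1.start(); p2.start()                 # global and local passes in parallel
    print("global boxes:", q1.get())
    print("local box, score:", q2.get())
    p1.join(); p2.join()
```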


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Rui Wang ◽  
Ziyue Wang ◽  
Zhengwei Xu ◽  
Chi Wang ◽  
Qiang Li ◽  
...  

Object detection is an important part of autonomous driving technology. To ensure the safe running of vehicles at high speed, real-time and accurate detection of all objects on the road is required, and how to balance detection speed and accuracy has been a hot research topic in recent years. This paper puts forward a one-stage object detection algorithm based on YOLOv4 that improves detection accuracy while supporting real-time operation. The backbone doubles the number of stacked residual units in the last residual block of CSPDarkNet53. The neck replaces SPP with an RFB structure, improves the PAN feature-fusion structure, and adds the CBAM and CA attention mechanisms to the backbone and neck; finally, the overall network width is reduced to 3/4 of the original, so as to reduce the model parameters and improve inference speed. Compared with YOLOv4, the proposed algorithm improves average precision by 2.06% on the KITTI dataset and by 2.95% on the BDD dataset. With detection accuracy almost unchanged, inference speed is increased by 9.14%, enabling real-time detection at more than 58.47 FPS.
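Of the components named above, CBAM is a published attention module with a standard form: channel attention followed by spatial attention, each re-weighting the feature map without changing its shape. The sketch below is a common PyTorch rendering of that module, not the paper's exact configuration or placement.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
    def forward(self, x):
        return self.sa(self.ca(x))

x = torch.randn(1, 64, 32, 32)
print(CBAM(64)(x).shape)  # shape unchanged: attention only re-weights features
```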


2021 ◽  
Vol 2083 (2) ◽  
pp. 022041
Author(s):  
Caiyu Liu ◽  
Zuofeng Zhou ◽  
Qingquan Wu

As an important part of road maintenance, the detection of road sprinkles (objects spilled or scattered on the road) has attracted extensive attention from scholars. However, after years of research, some problems remain. First, the detection accuracy of traditional algorithms is deficient. Second, deep learning approaches face serious limitations, since sprinkles come in many varieties, which makes it difficult to build a dataset. In view of these problems, this paper proposes a road-sprinkle detection method based on multi-feature fusion that considers color, gradient, luminance, and neighborhood information. Compared with other traditional methods, our method has higher detection accuracy. In addition, compared with deep learning-based methods, our approach does not require building a complex dataset, which reduces costs. The main contributions of this paper are as follows: I. For the first time, a density clustering algorithm is combined with sprinkle detection, providing a new idea for this field. II. Multi-feature fusion improves the accuracy and robustness of the traditional method, making the algorithm usable in many real-world scenarios.
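A rough sketch of the idea, assuming illustrative thresholds and a DBSCAN clusterer: per-pixel color, gradient, and luminance features are fused into a candidate mask, and density clustering groups candidate pixels into object regions while discarding isolated noise.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def sprinkle_candidates(bgr):
    """Fuse simple per-pixel features into a candidate mask."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    grad = cv2.magnitude(cv2.Sobel(gray, cv2.CV_32F, 1, 0),
                         cv2.Sobel(gray, cv2.CV_32F, 0, 1))
    lum = gray.astype(np.float32)
    # A pixel is a candidate if it is both strongly edged and bright;
    # the thresholds here are illustrative, not the paper's tuned values.
    mask = (grad > grad.mean() + 2 * grad.std()) & (lum > lum.mean())
    return np.argwhere(mask)

def cluster_regions(pixels, eps=5, min_samples=20):
    """Density clustering: dense pixel groups become detected objects;
    isolated responses are discarded as noise (label -1)."""
    if len(pixels) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pixels)
    return [pixels[labels == k] for k in set(labels) if k != -1]

img = np.random.randint(0, 255, (120, 160, 3), np.uint8)
regions = cluster_regions(sprinkle_candidates(img))
print(len(regions), "candidate regions")
```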


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guoyuan Shi ◽  
Yingjie Zhang ◽  
Manni Zeng

Purpose: Workpiece sorting is a key link in industrial production lines. Vision-based workpiece sorting systems are non-contact and widely applicable, and the detection and recognition of workpieces are their key technologies. To introduce deep learning algorithms into workpiece detection and improve detection accuracy, this paper proposes a workpiece detection algorithm based on the single-shot multi-box detector (SSD).
Design/methodology/approach: This study proposes a multi-feature fused SSD network for fast workpiece detection. First, multi-view CAD rendering images of the workpiece are used as the deep learning dataset. Second, a Visual Geometry Group (VGG) network is trained to identify the category of the workpiece. Third, a multi-level feature-fusion method is designed to improve the detection accuracy of SSD (especially for small objects); specifically, a feature-fusion module is added that uses an element-wise sum and a concatenation operation to combine the information of shallow and deep features.
Findings: Experimental results show that the workpiece detection accuracy of the method reaches 96% at a speed of 41 frames per second. Compared with the original SSD, the method improves accuracy by 7% and improves detection performance for small objects.
Originality/value: This paper innovatively introduces the SSD detection algorithm into workpiece detection in industrial scenarios and improves it with a feature-fusion module that combines the information of shallow and deep features. The multi-feature fused SSD network demonstrates the feasibility and practicality of introducing deep learning algorithms into workpiece sorting.
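The described feature-fusion module can be sketched as follows: the deep feature map is upsampled to the shallow map's resolution, combined by element-wise sum, and concatenated with the shallow features. The channel sizes and the smoothing convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    def __init__(self, shallow_c, deep_c):
        super().__init__()
        self.lateral = nn.Conv2d(deep_c, shallow_c, 1)    # match channel count
        self.smooth = nn.Conv2d(2 * shallow_c, shallow_c, 3, padding=1)
    def forward(self, shallow, deep):
        up = F.interpolate(self.lateral(deep), size=shallow.shape[2:],
                           mode="nearest")
        summed = shallow + up                      # element-wise sum
        fused = torch.cat([shallow, summed], 1)    # concatenation
        return self.smooth(fused)

shallow = torch.randn(1, 256, 38, 38)   # e.g., an SSD conv4_3-scale feature
deep = torch.randn(1, 512, 19, 19)
print(FeatureFusion(256, 512)(shallow, deep).shape)  # (1, 256, 38, 38)
```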


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4736
Author(s):  
Sk. Tanzir Mehedi ◽  
Adnan Anwar ◽  
Ziaur Rahman ◽  
Kawsar Ahmed

The Controller Area Network (CAN) bus is an important protocol in real-time In-Vehicle Network (IVN) systems because of its simple, suitable, and robust architecture. IVN devices nevertheless remain insecure and vulnerable, because complex, data-intensive architectures greatly increase accessibility to unauthorized networks and the possibility of various types of cyberattacks. The detection of cyberattacks in IVN devices has therefore become a topic of growing interest. With the rapid development of IVNs and evolving threat types, traditional machine learning-based intrusion detection systems (IDSs) must be updated to cope with the security requirements of the current environment. The progress of deep learning and deep transfer learning, and their impactful outcomes in several areas, point to them as an effective solution for network intrusion detection. This manuscript proposes a deep transfer learning-based IDS model for IVNs with improved performance in comparison to several other existing models. The unique contributions include effective attribute selection, best suited to identifying malicious CAN messages and accurately detecting normal and abnormal activities; the design of a deep transfer learning-based LeNet model; and evaluation on real-world data. To this end, an extensive experimental performance evaluation has been conducted. The architecture, together with empirical analyses, shows that the proposed IDS greatly improves detection accuracy over mainstream machine learning, deep learning, and benchmark deep transfer learning models and demonstrates better performance for real-time IVN security.
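A minimal sketch of the transfer-learning recipe with a LeNet-style network: pretrain on a source task, freeze the convolutional feature extractor, and fine-tune only the classifier head on target CAN data. The input encoding and class count are assumptions; in practice, CAN messages would first be encoded as small images or feature grids.

```python
import torch
import torch.nn as nn

class LeNetIDS(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes),
        )
    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetIDS()
# ... pretrain on source data, then transfer to the target task:
for p in model.features.parameters():
    p.requires_grad = False                # keep learned low-level filters
optim = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
x = torch.randn(8, 1, 28, 28)              # batch of encoded CAN frames
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 2, (8,)))
loss.backward(); optim.step()
print(float(loss))
```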


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the most important technical means of marine monitoring in remote sensing, owing to its all-day, all-weather capability. Ship monitoring in national territorial waters supports maritime law enforcement, maritime traffic control, and national maritime security, so ship detection has long been a research hotspot. As the field has moved from traditional detection methods to methods combined with deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanted optical-image detectors ignore the low signal-to-noise ratio, low resolution, single-channel imagery, and other characteristics that follow from the SAR imaging principle. By constantly pursuing detection accuracy while neglecting detection speed and the ultimate application of the algorithm, almost all of these methods depend on powerful clustered desktop GPUs, which cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image-processing method that makes full use of the image information and of the network's ability to extract features, built on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework for the model architecture and training. The YOLO-V4-light network was tailored for real-time implementation, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy lost to light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value in maritime safety monitoring and emergency rescue.
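The multi-channel fusion idea can be illustrated by expanding a single-channel SAR image into three complementary channels so that a standard three-channel detector backbone can consume it. The specific channels below (raw, despeckled, gradient) are assumptions; the paper's fusion may combine different views of the data.

```python
import cv2
import numpy as np

def multichannel_sar(sar):
    """Build a 3-channel composite from one single-channel SAR image."""
    raw = sar.astype(np.float32)
    despeckled = cv2.medianBlur(sar, 5).astype(np.float32)  # suppress speckle
    grad = cv2.magnitude(cv2.Sobel(raw, cv2.CV_32F, 1, 0),
                         cv2.Sobel(raw, cv2.CV_32F, 0, 1))
    chans = [c / (c.max() + 1e-6) for c in (raw, despeckled, grad)]
    return np.stack(chans, axis=-1)  # H x W x 3, ready for a YOLO-style net

sar = np.random.randint(0, 255, (256, 256), np.uint8)
print(multichannel_sar(sar).shape)  # (256, 256, 3)
```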


2021 ◽  
Vol 11 (2) ◽  
pp. 851
Author(s):  
Wei-Liang Ou ◽  
Tzu-Ling Kuo ◽  
Chin-Chieh Chang ◽  
Chih-Peng Fan

In this study, a pupil-tracking methodology based on deep learning is developed for visible-light wearable eye trackers. By applying deep-learning object detection based on the You Only Look Once (YOLO) model, the proposed method effectively estimates and predicts the center of the pupil in the visible-light mode. Testing the developed YOLOv3-tiny-based model for pupil-tracking performance, the detection accuracy is as high as 80%, and the recall rate is close to 83%. In addition, the average visible-light pupil-tracking errors of the proposed YOLO-based design are smaller than 2 pixels in the training mode and 5 pixels in the cross-person test, much smaller than those of a previous ellipse-fitting design without deep learning under the same visible-light conditions. After combination with a calibration process, the average gaze-tracking errors of the proposed YOLOv3-tiny-based pupil-tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze-tracking system runs at up to 20 frames per second (FPS) on a GPU-based embedded platform.
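Downstream of the YOLO-style detector, the pipeline reduces each detection to a pupil center and maps it to screen coordinates through calibration. The sketch below uses the box center and a least-squares affine map, a simplification of a real gaze-calibration model.

```python
import numpy as np

def pupil_center(box):
    """box = (x1, y1, x2, y2) from the detector; center approximates the pupil."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def fit_calibration(pupil_pts, screen_pts):
    """Least-squares affine map from pupil space to screen space."""
    P = np.hstack([pupil_pts, np.ones((len(pupil_pts), 1))])
    A, *_ = np.linalg.lstsq(P, screen_pts, rcond=None)
    return A  # apply with: np.array([px, py, 1.0]) @ A

# Calibration: user fixates known screen points while pupil centers are logged.
pupils = np.array([pupil_center(b) for b in
                   [(98, 60, 110, 72), (150, 58, 162, 70), (120, 90, 132, 102)]])
screens = np.array([[320.0, 240.0], [960.0, 240.0], [640.0, 720.0]])
A = fit_calibration(pupils, screens)
print(np.array([126.0, 96.0, 1.0]) @ A)  # estimated on-screen gaze point
```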


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3521 ◽  
Author(s):  
Funa Zhou ◽  
Po Hu ◽  
Shuai Yang ◽  
Chenglin Wen

Rotating machinery often suffers from a type of fault whose feature is significant when extracted in the frequency domain but insignificant when extracted in the time domain. For this type of fault, a deep learning-based fault diagnosis method developed in the frequency domain can reach high accuracy but not real-time performance, whereas one developed in the time domain achieves real-time diagnosis with lower accuracy. In this paper, a multimodal feature fusion-based deep learning method for accurate, real-time online diagnosis of rotating machinery is proposed. The proposed method can directly extract the potential frequency characteristics of abnormal features contained in the time-domain data. First, multimodal features corresponding to the original data, the slope data, and the curvature data are extracted by three separate deep neural networks. Then, a multimodal feature fusion is developed to obtain a new fused feature that characterizes the potential frequency feature contained in the time-domain data. Lastly, the fused feature is used as the input of a Softmax classifier to produce a real-time online diagnosis for frequency-type fault data. A simulation experiment and a case study of bearing fault diagnosis confirm the high efficiency of the proposed method.
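A minimal sketch of the multimodal pipeline, with slope and curvature taken as the first and second differences of the raw signal, one small network per modality, and concatenation-based fusion feeding a Softmax classifier; the layer sizes and fusion choice are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def modalities(x):
    """x: (batch, length) raw vibration signal -> raw, slope, curvature."""
    slope = x[:, 1:] - x[:, :-1]
    curvature = slope[:, 1:] - slope[:, :-1]
    return x, F.pad(slope, (0, 1)), F.pad(curvature, (0, 2))  # pad to length

class Branch(nn.Module):
    """One small network per modality."""
    def __init__(self, length, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(length, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class FusionDiagnoser(nn.Module):
    def __init__(self, length, n_faults=4):
        super().__init__()
        self.branches = nn.ModuleList([Branch(length) for _ in range(3)])
        self.head = nn.Linear(3 * 64, n_faults)
    def forward(self, x):
        feats = [b(m) for b, m in zip(self.branches, modalities(x))]
        return F.softmax(self.head(torch.cat(feats, dim=1)), dim=1)

model = FusionDiagnoser(length=1024)
print(model(torch.randn(8, 1024)).shape)  # (8, 4) fault-class probabilities
```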

