Lightweight Feature Enhancement Network for Single-Shot Object Detection

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1066
Author(s):  
Peng Jia ◽  
Fuxiang Liu

At present, one-stage detectors based on lightweight models can achieve real-time speed, but their detection performance remains limited. To enhance the discriminability and robustness of the features extracted by the model and to improve detection performance on small objects, we propose two modules in this work. First, we propose a receptive field enhancement method, referred to as adaptive receptive field fusion (ARFF). It enhances the model’s feature representation ability by adaptively learning the fusion weights of the different receptive field branches in the receptive field module. Then, we propose an enhanced up-sampling (EU) module to reduce the information loss caused by up-sampling the feature map. Finally, we assemble the ARFF and EU modules on top of YOLO v3 to build a real-time, high-precision and lightweight object detection system, referred to as the ARFF-EU network. We achieve a state-of-the-art speed and accuracy trade-off on both the Pascal VOC and MS COCO data sets, reporting 83.6% AP at 37.5 FPS and 42.5% AP at 33.7 FPS, respectively. The experimental results show that the proposed ARFF and EU modules improve the detection performance of the ARFF-EU network, advancing very deep detectors while maintaining real-time speed.
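The paper does not spell out the exact ARFF formulation in this abstract; a minimal sketch of the idea, adaptively weighting the outputs of several receptive-field branches with softmax-normalized learnable weights (all names and shapes here are illustrative assumptions), might look like:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the branch weights
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def adaptive_fusion(branches, logits):
    # fuse receptive-field branch outputs with learned (here: given) weights
    weights = softmax(np.asarray(logits, dtype=float))
    return sum(w * b for w, b in zip(weights, branches))

# Three hypothetical branch outputs (e.g. from convolutions with
# dilation rates 1, 2 and 3), each of shape (H, W, C).
branches = [np.full((4, 4, 8), v) for v in (1.0, 2.0, 3.0)]
fused = adaptive_fusion(branches, logits=[0.0, 0.0, 0.0])
# with equal logits the weights are uniform, so fused is the branch mean
```

In a real network the `logits` would be trainable parameters (or predicted per-location), so the detector learns which receptive field to emphasize.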

2020 ◽  
Vol 10 (2) ◽  
pp. 612
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

To construct a safe and sound autonomous driving system, object detection is essential, and research on sensor fusion is being actively conducted to increase the detection rate of objects in dynamic environments where safety must be secured. Recently, considerable performance improvements in object detection have been achieved with the advent of the convolutional neural network (CNN). In particular, the YOLO (You Only Look Once) architecture, which is suitable for real-time object detection because it simultaneously predicts and classifies the bounding boxes of objects, is receiving great attention. However, securing the robustness of object detection systems in various environments remains a challenge. In this paper, we propose a weighted mean-based adaptive object detection strategy that enhances detection performance by fusing the individual detection results of an RGB camera and a LiDAR (Light Detection and Ranging) sensor for autonomous driving. The proposed system uses the YOLO framework to perform object detection independently on image data and point cloud data (PCD). The individual detection results are then combined at the decision level by a weighted-mean scheme to reduce the number of missed objects. To evaluate the performance of the proposed object detection system, tests on vehicles and pedestrians were carried out using the KITTI Benchmark Suite. The results demonstrate that the proposed strategy achieves a higher mean average precision (mAP) for the targeted objects than an RGB camera alone and is also robust against external environmental changes.
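The abstract does not give the exact decision-level rule; one plausible sketch, assuming the camera and LiDAR detectors produce matched `(x1, y1, x2, y2)` boxes for the same object and the weights reflect per-sensor confidence (function name and weights are my own), is:

```python
def weighted_mean_box(box_cam, box_lidar, w_cam=0.6, w_lidar=0.4):
    # combine two matched (x1, y1, x2, y2) boxes by a weighted mean
    s = w_cam + w_lidar
    return tuple((w_cam * a + w_lidar * b) / s
                 for a, b in zip(box_cam, box_lidar))

# camera and LiDAR see the same vehicle with slightly different boxes
merged = weighted_mean_box((10, 10, 50, 50), (12, 10, 54, 50))
```

An object detected by only one sensor would simply be kept as-is, which is how decision-level fusion reduces missed detections.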


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Xiaoguo Zhang ◽  
Ye Gao ◽  
Fei Ye ◽  
Qihan Liu ◽  
Kaixin Zhang

SSD (Single Shot MultiBox Detector) is one of the best object detection algorithms and is able to provide highly accurate object detection in real time. However, SSD shows relatively poor performance on small object detection because its shallow prediction layer, which is responsible for detecting small objects, lacks sufficient semantic information. To overcome this problem, SKIPSSD, an improved SSD with a novel skip connection of multiscale feature maps, is proposed in this paper to enrich the semantic information and the details of the prediction layers by fusing high-level and low-level feature maps through skip connections. For the fusion itself, we design two feature fusion modules and multiple fusion strategies to improve the SSD detector’s sensitivity and perception ability. Experimental results on the PASCAL VOC2007 test set demonstrate that SKIPSSD significantly improves detection performance and outperforms many state-of-the-art object detectors. With an input size of 300 × 300, SKIPSSD achieves 79.0% mAP (mean average precision) at 38.7 FPS (frames per second) on a single 1080 GPU, 1.8% higher than the mAP of SSD, while still keeping a real-time detection speed.
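The abstract names two fusion modules without detailing them; a minimal sketch of one common strategy for this kind of skip fusion, upsampling a deep, semantically rich map and summing it element-wise with a shallow, detail-rich map (shapes and names are illustrative assumptions, not the paper's modules), could be:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour 2x upsampling of an (H, W, C) feature map
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def skip_fuse(shallow, deep):
    # element-wise sum of a shallow map with an upsampled deeper map
    return shallow + upsample2x(deep)

shallow = np.ones((8, 8, 4))      # high-resolution prediction layer
deep = np.full((4, 4, 4), 2.0)    # low-resolution, semantically rich layer
fused = skip_fuse(shallow, deep)
```

Concatenation along the channel axis followed by a 1×1 convolution is the usual alternative to summation; the paper's "multiple fusion strategies" presumably compare choices like these.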


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5279
Author(s):  
Dong-Hoon Kwak ◽  
Guk-Jin Son ◽  
Mi-Kyung Park ◽  
Young-Duk Kim

The consumption of seaweed is increasing year by year worldwide, so foreign object inspection of seaweed is becoming increasingly important. Seaweed is a mixture of various materials, such as laver and Sargassum fusiforme, so even the same seaweed product shows various colors. In addition, its surface is uneven and greasy, frequently causing diffuse reflections. For these reasons, it is difficult to detect foreign objects in seaweed, and the accuracy of conventional foreign object detectors used at real manufacturing sites is less than 80%. Support for real-time inspection must also be considered: since seaweed is mass-produced, rapid inspection is essential, yet hyperspectral imaging techniques are generally not suitable for high-speed inspection. In this study, we overcome this limitation by using dimensionality reduction and simplified operations. To improve accuracy, the proposed algorithm is carried out in two stages. First, a subtraction method is used to clearly distinguish seaweed from the conveyor belt and to detect some relatively easy-to-detect foreign objects. Second, a standardization inspection is performed based on the result of the subtraction method. Throughout this process, the proposed scheme adopts simplified, low-cost calculations such as subtraction, division, and one-by-one matching, achieving both high accuracy and low latency. In an experiment to evaluate performance, 60 normal seaweeds and 60 seaweeds containing foreign objects were used, and the accuracy of the proposed algorithm was 95%. Finally, by implementing the proposed algorithm as a foreign object detection platform, we confirmed that real-time operation under rapid inspection is possible and that deployment at real manufacturing sites is feasible.
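The first stage, the subtraction method, can be sketched in its simplest form: subtract a reference image of the empty conveyor belt from the current frame and threshold the absolute difference. The threshold value and array shapes below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def subtraction_mask(frame, reference, threshold=30):
    # flag pixels that differ from an empty-belt reference image
    diff = np.abs(frame.astype(np.int32) - reference.astype(np.int32))
    return diff > threshold

reference = np.full((4, 4), 200, dtype=np.uint8)  # bright conveyor belt
frame = reference.copy()
frame[1:3, 1:3] = 40                              # dark seaweed region
mask = subtraction_mask(frame, reference)
```

Casting to a signed type before subtracting avoids the uint8 wrap-around that would otherwise corrupt the difference image; the second-stage standardization inspection would then run only on the masked region.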


2019 ◽  
Author(s):  
Jimut Bahan Pal

It has been a real challenge for computers with low computing power and memory to detect objects in real time. Since the invention of convolutional neural networks (CNNs), it has become much easier for computers to detect and recognize images. Several technologies and models can detect objects in real time, but most of them require high-end hardware such as GPUs and TPUs. Recently, however, many new algorithms and models have been proposed that run on low resources. In this paper we studied MobileNets to detect objects using a webcam and successfully built a real-time object detection system. We used a model pre-trained on the well-known MS COCO dataset to achieve our purpose, with Google’s open-source TensorFlow as the back end. This real-time object detection system may help in future to solve various complex vision problems.
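SSD-MobileNet models of the kind described typically return, per frame, parallel lists of boxes, scores, and class labels; a minimal post-processing step before drawing results on the webcam feed is to threshold by confidence. The data and function names below are a hypothetical sketch, not the paper's code:

```python
def filter_detections(boxes, scores, classes, min_score=0.5):
    # keep only detections whose confidence meets the threshold
    keep = [i for i, s in enumerate(scores) if s >= min_score]
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [classes[i] for i in keep])

# hypothetical raw outputs for one frame (normalized box coordinates)
boxes = [(0.1, 0.1, 0.4, 0.4), (0.5, 0.5, 0.9, 0.9)]
scores = [0.92, 0.31]
classes = ["person", "dog"]
kept_boxes, kept_scores, kept_classes = filter_detections(boxes, scores, classes)
```

In a real pipeline this filter would sit between the model's inference call and the box-drawing code in the webcam loop.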


2020 ◽  
Vol 12 (13) ◽  
pp. 2136 ◽  
Author(s):  
Arun Narenthiran Veeranampalayam Sivakumar ◽  
Jiating Li ◽  
Stephen Scott ◽  
Eric Psota ◽  
Amit J. Jhala ◽  
...  

Mid- to late-season weeds that escape routine early-season weed management threaten agricultural production by creating a large number of seeds for several future growing seasons. Rapid and accurate detection of weed patches in the field is the first step of site-specific weed management. In this study, object detection-based convolutional neural network models were trained and evaluated on low-altitude unmanned aerial vehicle (UAV) imagery for mid- to late-season weed detection in soybean fields. Two object detection models, Faster RCNN and the Single Shot Detector (SSD), were evaluated and compared in terms of weed detection performance, using mean Intersection over Union (IoU), and inference speed. The Faster RCNN model with 200 box proposals had weed detection performance comparable to the SSD model in terms of precision, recall, F1 score, and IoU, as well as a similar inference time. The precision, recall, F1 score and IoU were 0.65, 0.68, 0.66 and 0.85 for Faster RCNN with 200 proposals, and 0.66, 0.68, 0.67 and 0.84 for SSD, respectively. However, the optimal confidence threshold of the SSD model was found to be much lower than that of the Faster RCNN model, indicating that SSD may generalize less well than Faster RCNN for mid- to late-season weed detection in soybean fields using UAV imagery. The object detection models were also compared with a patch-based CNN model. Faster RCNN yielded better weed detection performance than the patch-based CNN both with and without overlap. The inference time of Faster RCNN was similar to that of the patch-based CNN without overlap, but significantly less than that of the patch-based CNN with overlap. Hence, Faster RCNN was found to be the best model in terms of weed detection performance and inference time among the models compared in this study. This work is important in understanding the potential of, and identifying the algorithms for, on-farm, near real-time weed detection and management.
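The IoU metric used to compare the detectors above is standard and easy to state concretely: the area of the intersection of a predicted and a ground-truth box divided by the area of their union.

```python
def iou(box_a, box_b):
    # Intersection over Union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # overlap area 1, union 7
```

The mean IoU reported in the study averages this score over matched prediction/ground-truth pairs.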


2019 ◽  
Vol 107 (1) ◽  
pp. 651-661 ◽  
Author(s):  
Adwitiya Arora ◽  
Atul Grover ◽  
Raksha Chugh ◽  
S. Sofana Reka

2020 ◽  
Vol 17 (2) ◽  
pp. 803-813
Author(s):  
Emy Haryatmi ◽  
Aris Tito Sugihharto ◽  
Maulana Mujahidin

An autonomous monitoring robot for humans and animals can be used to secure a house or flat. The robot can move freely from one side of the house or flat to the other and sends its camera view to an Android smartphone in real time, so the owners can see inside the house while they are away. The objective of this research is to develop an autonomous robot that can monitor an area for the presence of humans and animals and send its view to an Android smartphone in real time. This research used a four-legged robot as the autonomous monitoring platform. The robot is equipped with a camera connected to a router over a wireless connection, PIR and ultrasonic sensors, and a GSM module that sends a short message service (SMS) notification when the PIR sensor detects movement. The experiment used a cat as the animal object. Experiments showed that the robot can follow an object, either a human or an animal, within a range of 10–45 cm in 12 seconds and send the view to an Android smartphone in real time. The SMS is received within 5 seconds after the camera captures the object. The robot cannot identify who or what is on camera because it is not equipped with an object detection system.
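The 10–45 cm following range reported above implies a simple band-keeping control rule driven by the ultrasonic distance reading. The paper does not give its control logic; a hypothetical sketch of such a rule is:

```python
def follow_action(distance_cm):
    # hypothetical rule keeping the target within the 10-45 cm band
    if distance_cm < 10:
        return "back"      # too close: retreat
    if distance_cm > 45:
        return "forward"   # too far: approach
    return "hold"          # within the tracking band

actions = [follow_action(d) for d in (5, 30, 60)]
```

A real controller would also rate-limit movements and debounce noisy ultrasonic readings before acting.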

