Self-Recurrent Learning and Gap Sample Feature Synthesis-Based Object Detection Method

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Lvjiyuan Jiang ◽  
Haifeng Wang ◽  
Kai Yan ◽  
Chengjiang Zhou ◽  
Songlin Li ◽  
...  

Deep learning-based object detection using the "looking and thinking twice" mechanism plays an important role in electrical construction work. Nevertheless, using this mechanism for object detection introduces problems such as the computational load caused by multilayer convolution and redundant features that confuse the network. In this paper, we propose a self-recurrent learning and gap sample feature fusion-based object detection method to solve these problems. The network consists of three modules: self-recurrent learning-based feature fusion (SLFF), residual enhancement architecture-based multichannel (REAML), and gap sample-based feature fusion (GSFF). SLFF detects objects in the background through an iterative convolutional network. REAML serves as an information-filtering module that reduces the interference of redundant background features. GSFF adds feature augmentation to the network. Moreover, our model can effectively improve the operating and production efficiency of electric power companies' personnel and help guarantee the safety of lives and property.
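The abstract does not give implementation details for SLFF; as a rough, hypothetical illustration of a self-recurrent ("look twice") loop, the sketch below repeatedly re-applies the same convolution to its own output and fuses the result back residually. The kernel, step count, and fusion-by-addition are assumptions, not the authors' design.

```python
import numpy as np

def conv3x3(x, kernel):
    """Valid-mode 3x3 convolution on a 2D feature map (toy implementation)."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * kernel)
    return out

def self_recurrent_fuse(feat, kernel, steps=2):
    """Iteratively refine a feature map by re-applying the same convolution
    and adding the result back — a toy self-recurrent fusion loop."""
    fused = feat
    for _ in range(steps):
        refined = conv3x3(np.pad(fused, 1), kernel)  # zero-pad to keep size
        fused = fused + refined                      # residual-style fusion
    return fused
```

Because the weights are shared across iterations, each extra "look" refines the same feature map without adding new parameters, which is the usual appeal of recurrent refinement.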

Author(s):  
Siyu Chen ◽  
Li Wang ◽  
Zheng Fang ◽  
Zhensheng Shi ◽  
Anxue Zhang

2021 ◽  
Vol 11 (17) ◽  
pp. 7984
Author(s):  
Prabu Subramani ◽  
Khalid Nazim Abdul Sattar ◽  
Rocío Pérez de Prado ◽  
Balasubramanian Girirajan ◽  
Marcin Wozniak

Connected autonomous vehicles (CAVs) currently promise cooperation between vehicles, providing abundant, real-time information through wireless communication technologies. In this paper, a two-level fusion of classifiers (TLFC) approach is proposed, using deep learning classifiers to perform accurate road detection (RD). The proposed TLFC-RD approach improves classification through four key strategies: cross-fold operation at the input and pre-processing using superpixel generation, adequate features, multi-classifier feature fusion, and a deep learning classifier. Specifically, the road is classified into drivable and non-drivable areas by designing the TLFC with deep learning classifiers, and the information detected by TLFC-RD is exchanged between the autonomous vehicles for ease of driving on the road. TLFC-RD is analyzed in terms of its accuracy, sensitivity (recall), specificity, precision, F1-measure, and max F-measure. The TLFC-RD method is also evaluated against three existing methods: U-Net with the Domain Adaptation Model (DAM), Two-Scale Fully Convolutional Network (TFCN), and a cooperative machine learning approach (i.e., TAAUWN). Experimental results show that the accuracy of the TLFC-RD method on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset is 99.12%, which is higher than that of its competitors.
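The abstract does not specify how the second fusion level combines classifier outputs; a minimal sketch, assuming the simplest case of averaging per-classifier drivable-road probability maps and thresholding into a binary drivable / non-drivable mask, is shown below. The averaging rule and the 0.5 threshold are illustrative assumptions, not the TLFC design.

```python
import numpy as np

def two_level_fusion(prob_maps, threshold=0.5):
    """Toy two-level fusion: level 1 supplies per-classifier drivable-road
    probability maps; level 2 averages them and thresholds the result into
    a binary mask (1 = drivable, 0 = non-drivable)."""
    fused = np.mean(np.stack(prob_maps), axis=0)   # level-2 fusion
    return (fused >= threshold).astype(np.uint8)

# two hypothetical classifier outputs over a 2x2 image region
a = np.array([[0.9, 0.2], [0.8, 0.4]])
b = np.array([[0.7, 0.1], [0.6, 0.3]])
mask = two_level_fusion([a, b])
```

Averaging probabilities before thresholding lets classifiers that disagree on a pixel cancel out, which is the usual motivation for fusing multiple classifiers rather than trusting any single one.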


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection. It is important that object detection be accurate overall, robust to weather and environmental conditions, and run in real time. Consequently, such systems require image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). In this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed based on parameters such as accuracy (with/without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful in understanding the pros and cons of standard deep learning-based algorithms operating under real-time deployment restrictions. We conclude that YOLOv4 is the most accurate at detecting difficult road target objects under complex road scenarios and weather conditions in an identical testing environment.
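Comparisons like the one above typically score detections by intersection-over-union (IoU) matching: a predicted box counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and the precision-recall curve is built from those matches. A minimal IoU sketch (not taken from the article) for boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)      # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

For example, `iou((0, 0, 2, 2), (1, 1, 3, 3))` overlaps in a unit square out of a union of 7, giving 1/7.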

