A Vehicle Detection Model Based on 5G-V2X for Smart City Security Perception

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Teng Liu ◽  
Cheng Xu ◽  
Hongzhe Liu ◽  
Xuewei Li ◽  
Pengfei Wang

Security perception systems based on 5G-V2X have become an indispensable part of smart city construction. However, the detection speed of traditional deep learning models is slow, so the low-latency characteristics of 5G networks cannot be fully exploited. To improve 5G-V2X-based safety perception and increase vehicle detection speed, a vehicle perception model is proposed. First, an adaptive feature extraction method is adopted to enhance the expression of small-scale features and improve feature extraction for small-scale targets. Then, the feature fusion method is improved so that shallow information is fused layer by layer, addressing the problem of feature loss. Finally, an attention enhancement method is introduced to strengthen center-point prediction and mitigate target occlusion. Experimental results on the UA-DETRAC dataset show a good detection effect: compared with the model before the improvements, both detection accuracy and speed are greatly improved, which effectively strengthens the security perception capability of the 5G-V2X system and thereby promotes smart city construction.
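The abstract does not spell out the fusion module, but fusing shallow information layer by layer is in the spirit of a top-down feature pyramid. A minimal numpy sketch under assumptions of my own (hypothetical shapes, nearest-neighbour upsampling, element-wise addition):

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fuse_top_down(features):
    """Fuse a feature pyramid top-down: each deeper (coarser) map is
    upsampled and added to the next shallower one, so shallow maps keep
    small-object detail while gaining deep semantics."""
    fused = [features[-1]]                # start from the deepest map
    for shallow in reversed(features[:-1]):
        fused.append(shallow + upsample2x(fused[-1]))
    return fused[::-1]                    # shallow-to-deep order

# toy pyramid: shapes (16, 32, 32), (16, 16, 16), (16, 8, 8)
rng = np.random.default_rng(0)
pyramid = [rng.normal(size=(16, 32 // 2**i, 32 // 2**i)) for i in range(3)]
out = fuse_top_down(pyramid)
```

The deepest map passes through unchanged; every shallower level accumulates the upsampled semantics of all levels below it.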

2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Feng Wang ◽  
Zhiming Xu ◽  
Zemin Qiu ◽  
Weichuan Ni ◽  
Jiaqi Li ◽  
...  

Existing target detection algorithms in smart cities suffer from low detection accuracy and susceptibility to occlusion. In response, this paper presents a target detection algorithm for smart cities that combines deep learning with feature extraction. Building on the traditional SSD algorithm, an adaptive strategy is introduced to optimize the search windows according to changes in the target's operating conditions, and the objective function is strengthened with a weighted correlation feature fusion method that combines deep appearance features with depth features. Experimental results show that this algorithm has better anti-occlusion ability and detection accuracy than conventional SSD algorithms, as well as better stability in a changing environment.


Author(s):  
Zhenying Xu ◽  
Ziqian Wu ◽  
Wei Fan

Defect detection of electroluminescence (EL) cells is the core step in the production of solar cell modules, ensuring the conversion efficiency and long service life of the cells. However, owing to its weak feature extraction capability for small defects, the traditional single shot multibox detector (SSD) algorithm does not perform well in high-accuracy EL defect detection. Consequently, an improved SSD algorithm with modified feature fusion, in the framework of deep learning, is proposed to improve the recognition rate of multi-class EL defects. A dataset containing images with four defect types is established for EL through rotation, denoising, and binarization. Drawing on the idea of feature pyramid networks, the proposed algorithm greatly improves detection accuracy for small-scale defects. An experimental study on EL defect detection shows the effectiveness of the proposed algorithm, and a comparison study shows that it outperforms other detection methods, such as SIFT, Faster R-CNN, and YOLOv3.
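The dataset is built through rotation, denoising, and binarization, but the exact parameters are not given; the toy numpy sketch below illustrates the three operations under assumed settings (a 90° rotation, a 3×3 mean filter, a fixed threshold):

```python
import numpy as np

def augment_el_image(img, threshold=0.5):
    """Toy versions of the three augmentations named in the abstract:
    rotation, denoising (3x3 mean filter), and binarization.
    `img` is a float grayscale array in [0, 1]."""
    rotated = np.rot90(img)                      # 90-degree rotation
    # 3x3 mean filter via padded neighbourhood average
    p = np.pad(img, 1, mode="edge")
    denoised = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    binarized = (img > threshold).astype(np.uint8)
    return rotated, denoised, binarized

rng = np.random.default_rng(1)
cell = rng.random((8, 8))                        # stand-in for an EL patch
rot, den, binz = augment_el_image(cell)
```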


Author(s):  
Tu Renwei ◽  
Zhu Zhongjie ◽  
Bai Yongqiang ◽  
Gao Ming ◽  
Ge Zhifeng

Unmanned aerial vehicle (UAV) inspection has become one of the main methods for transmission line inspection, but it still has shortcomings such as slow detection speed, low efficiency, and poor performance in low-light environments. To address these issues, this paper proposes a deep learning detection model based on You Only Look Once (YOLO) v3. On the one hand, the network structure is simplified: the three feature maps of YOLO v3 are pruned to two to meet the specific detection requirements, and the K-means++ clustering method is used to compute anchor values for the dataset, improving detection accuracy. On the other hand, 1000 sets of power tower and insulator images are collected and expanded by flipping and scaling, with different illumination levels and viewing angles added for full optimization. Experimental results show that the improved YOLO v3 model improves detection accuracy by 6.0%, reduces FLOPs by 8.4%, and increases detection speed by about 6.0%.
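K-means++ anchor clustering is a standard recipe for YOLO-family models. The paper's exact distance metric is not stated; the sketch below uses the common 1−IoU distance over assumed (width, height) pairs:

```python
import numpy as np

def iou_wh(box, boxes):
    """IoU between one (w, h) box and an array of (w, h) boxes,
    assuming all boxes share the same centre (YOLO anchor convention)."""
    inter = np.minimum(box[0], boxes[:, 0]) * np.minimum(box[1], boxes[:, 1])
    union = box[0] * box[1] + boxes[:, 0] * boxes[:, 1] - inter
    return inter / union

def kmeanspp_anchors(wh, k, iters=20, seed=0):
    """K-means++ seeding plus Lloyd iterations with a 1-IoU distance,
    the usual recipe for estimating YOLO anchors from a dataset."""
    rng = np.random.default_rng(seed)
    centers = [wh[rng.integers(len(wh))]]
    for _ in range(k - 1):                       # k-means++ seeding
        d = np.min([1 - iou_wh(c, wh) for c in centers], axis=0)
        centers.append(wh[rng.choice(len(wh), p=d / d.sum())])
    centers = np.array(centers)
    for _ in range(iters):                       # Lloyd updates
        assign = np.argmin([1 - iou_wh(c, wh) for c in centers], axis=0)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = wh[assign == j].mean(axis=0)
    return centers

rng = np.random.default_rng(2)
wh = rng.uniform(5, 100, size=(200, 2))          # hypothetical box sizes
anchors = kmeanspp_anchors(wh, k=6)
```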


Smart Cities ◽  
2020 ◽  
Vol 3 (1) ◽  
pp. 48-73
Author(s):  
Maroula N. Alverti ◽  
Kyriakos Themistocleous ◽  
Phaedon C. Kyriakidis ◽  
Diofantos G. Hadjimitsis

The smart city notion provides an integrated and systematic answer to challenges facing cities today. Smart city policy makers and technology vendors are increasingly stating their interest in human-centered smart cities. In many studies, however, smart city policies bring forward one-size-fits-all recommendations for all areas in question instead of location-specific ones. Based on the above considerations, this paper illustrates that smart citizen characteristics, alongside local urban challenges, are paving the way towards more effective smart city policy decision making. Our main presumption is that the development level of human-centered indicators of smart cities varies locally. The scientific objective of this paper is to find a simple, understandable link between human smart characteristics and local determinants in Limassol city, Cyprus. The dataset consists of seven indicators defined as human smart characteristics and seven that determine local urban challenges, comprising demographic dynamics and housing-based built infrastructure attributes. Correlations among the 14 indicators are examined both in their entirety and separately, as the study area was divided into three spatial sub-groups (high, moderate, and low coverage areas) according to dispersed urbanization, the main challenge of the study area. The data were obtained mainly from the most recent population census (2011) and categorized into sub-groups using CLC 2012. Analyzing the statistics with principal component analysis (PCA), we identify significant relationships between human smart city characteristics, demographic dynamics, and built infrastructure attributes, which can be used in local policy decision making. Spatial variations in these relationships, driven by dispersed urbanization, are also observed.
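PCA on the 14 standardized indicators can be sketched in a few lines of numpy; the indicator values below are synthetic stand-ins, not the census data:

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the correlation matrix
    (indicators standardized first, as is usual for mixed units)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    order = np.argsort(vals)[::-1]               # sort descending
    vals, vecs = vals[order], vecs[:, order]
    scores = Z @ vecs[:, :n_components]          # component scores per area
    explained = vals / vals.sum()                # explained variance ratio
    return scores, explained

# hypothetical 100 areas x 14 indicators (7 "smart" + 7 local-challenge)
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 14))
scores, explained = pca(X, n_components=3)
```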


Author(s):  
Shang Jiang ◽  
Haoran Qin ◽  
Bingli Zhang ◽  
Jieyu Zheng

The loss function is a crucial factor affecting detection precision in object detection. In this paper, we optimize the two loss functions for classification and localization simultaneously. First, we reconstruct the classification loss function by incorporating the prediction results of localization, aiming to establish a correlation between the localization and classification subnetworks. In existing studies, this correlation is established only among positive samples and used to improve the localization accuracy of predicted boxes; here, we use the correlation to define hard negative samples and then emphasize their classification, so that the overall misclassification rate for negative samples is reduced. In addition, a novel localization loss named MIoU is proposed by incorporating a Mahalanobis distance between the predicted box and the target box, eliminating the gradient inconsistency problem of the DIoU loss and further improving localization accuracy. Finally, the proposed methods are applied to train networks for nighttime vehicle detection. Experimental results show that detection accuracy is markedly improved with our proposed loss functions without hurting detection speed.
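The MIoU loss itself is the paper's contribution and its exact form is not reproduced here; the sketch below implements the standard DIoU loss and marks, via the matrix `M`, where a Mahalanobis centre distance would be substituted:

```python
import numpy as np

def diou_loss(box_p, box_t):
    """DIoU loss for axis-aligned boxes given as (x1, y1, x2, y2):
    1 - IoU + d^2 / c^2, where d is the centre distance and c the
    diagonal of the smallest enclosing box. With M = identity, d is
    Euclidean; a covariance-based M would give a Mahalanobis distance
    (the direction the paper's MIoU takes)."""
    ix1, iy1 = max(box_p[0], box_t[0]), max(box_p[1], box_t[1])
    ix2, iy2 = min(box_p[2], box_t[2]), min(box_p[3], box_t[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(box_p) + area(box_t) - inter)
    cp = np.array([(box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2])
    ct = np.array([(box_t[0] + box_t[2]) / 2, (box_t[1] + box_t[3]) / 2])
    ex1, ey1 = min(box_p[0], box_t[0]), min(box_p[1], box_t[1])
    ex2, ey2 = max(box_p[2], box_t[2]), max(box_p[3], box_t[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2    # enclosing-box diagonal^2
    M = np.eye(2)                                # swap in a covariance here
    d2 = (cp - ct) @ M @ (cp - ct)
    return 1 - iou + d2 / c2

perfect = diou_loss((0, 0, 10, 10), (0, 0, 10, 10))   # exact match
shifted = diou_loss((2, 2, 12, 12), (0, 0, 10, 10))   # offset prediction
```

A perfectly matched box gives zero loss; any offset adds both an IoU penalty and a centre-distance penalty.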


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zhenbo Lu ◽  
Wei Zhou ◽  
Shixiang Zhang ◽  
Chen Wang

Quick and accurate crash detection is important for saving lives and improving traffic incident management. In this paper, a feature-fusion-based deep learning framework was developed for video-based urban traffic crash detection, aiming to balance detection speed and accuracy with limited computing resources. In this framework, a residual neural network (ResNet) combined with attention modules was proposed to extract crash-related appearance features from urban traffic videos (i.e., a crash appearance feature extractor), which were further fed to a spatiotemporal feature fusion model, Conv-LSTM (Convolutional Long Short-Term Memory), to simultaneously capture appearance (static) and motion (dynamic) crash features. The proposed model was trained on a set of video clips covering 330 crash and 342 noncrash events. Overall, the model achieved an accuracy of 87.78% on the testing dataset and an acceptable detection speed (FPS > 30 on a GTX 1060). Thanks to the attention module, the proposed model captures localized appearance features of crashes (e.g., vehicle damage and fallen pedestrians) better than conventional convolutional neural networks. The Conv-LSTM module outperformed a conventional LSTM in capturing motion features of crashes, such as roadway congestion and pedestrians gathering after a crash. Compared to a traditional motion-based crash detection model, the proposed model achieved higher detection accuracy; moreover, it detected crashes much faster than other feature-fusion-based models (e.g., C3D). The results show that the proposed model is a promising video-based urban traffic crash detection algorithm that could be used in practice.
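A Conv-LSTM cell replaces the LSTM's fully connected gate products with convolutions, which is why it can track where in the frame motion cues occur. A single-channel numpy sketch of one update step, with toy 3×3 kernels (the real model is multi-channel and learned end to end):

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2D convolution (single channel, 3x3 kernel, zero padding)."""
    H, W = x.shape
    p = np.pad(x, 1)
    return sum(k[i, j] * p[i:i + H, j:j + W] for i in range(3) for j in range(3))

def convlstm_step(x, h, c, W):
    """One Conv-LSTM update: input, forget, and output gates plus the
    candidate state are all computed with convolutions, preserving the
    spatial layout of the frame features."""
    sig = lambda z: 1 / (1 + np.exp(-z))
    i = sig(conv2d(x, W["xi"]) + conv2d(h, W["hi"]))
    f = sig(conv2d(x, W["xf"]) + conv2d(h, W["hf"]))
    o = sig(conv2d(x, W["xo"]) + conv2d(h, W["ho"]))
    g = np.tanh(conv2d(x, W["xg"]) + conv2d(h, W["hg"]))
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

rng = np.random.default_rng(4)
W = {k: rng.normal(scale=0.1, size=(3, 3))
     for k in ("xi", "hi", "xf", "hf", "xo", "ho", "xg", "hg")}
h = c = np.zeros((8, 8))
for frame in rng.normal(size=(5, 8, 8)):         # 5 toy video frames
    h, c = convlstm_step(frame, h, c, W)
```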


Author(s):  
Zhangu Wang ◽  
Jun Zhan ◽  
Chunguang Duan ◽  
Xin Guan ◽  
Kai Yang

Vehicle detection in severe weather has always been a difficult task in the environmental perception of intelligent vehicles. This paper proposes a vehicle detection method based on pseudo-visual search and histogram of oriented gradients (HOG)–local binary pattern (LBP) feature fusion. Using radar detection information, this method directly extracts the region of interest (ROI) of vehicles from infrared images by imitating human vision. Unlike traditional methods, the pseudo-visual search mechanism is independent of complex image processing and environmental interference, thereby significantly improving the speed and accuracy of ROI extraction. Notably, the ROI extraction process based on pseudo-visual search reduces image processing by 40%–80%, with an ROI extraction time of only 4 ms, far lower than that of traditional algorithms. In addition, we used the HOG–LBP fusion feature to train the vehicle classifier, which improves the extraction of local and global vehicle features; the fusion feature improves vehicle detection accuracy by 6%–9% compared to a single feature. Experimental results show a vehicle detection accuracy of 92.7% and a detection speed of 31 fps, which validates the feasibility of the proposed method and its ability to effectively improve vehicle detection performance in severe weather.
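The full HOG–LBP pipeline involves block-wise descriptors and a trained classifier; as a bare-bones illustration of the fusion idea only, the numpy sketch below concatenates a basic 8-neighbour LBP histogram with a whole-patch gradient-orientation histogram (a much-simplified stand-in for HOG):

```python
import numpy as np

def lbp_hist(img):
    """Basic 8-neighbour LBP codes plus a 256-bin histogram (texture)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return np.bincount(code.ravel(), minlength=256).astype(float)

def grad_hist(img, bins=9):
    """Whole-patch orientation histogram, a bare-bones stand-in for HOG."""
    gy, gx = np.gradient(img)
    mag, ang = np.hypot(gx, gy), np.mod(np.arctan2(gy, gx), np.pi)
    return np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)[0]

def hog_lbp_feature(img):
    """Concatenate the two descriptors and L2-normalize: the fusion idea
    behind the HOG-LBP classifier input."""
    f = np.concatenate([grad_hist(img), lbp_hist(img)])
    return f / (np.linalg.norm(f) + 1e-12)

rng = np.random.default_rng(5)
feat = hog_lbp_feature(rng.random((32, 32)))     # stand-in infrared ROI
```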


2021 ◽  
Vol 1 (1) ◽  
pp. 9-13
Author(s):  
Zhongqiang Huang ◽  
Ping Zhang ◽  
Ruigang Liu ◽  
Dongxu Li

The identification of immature apples is a key technical link in realizing automatic real-time orchard monitoring, expert decision-making, and orchard yield prediction. In the orchard scene, reflections caused by light and the close color similarity between immature apples and leaves, and especially the occlusion and overlap of fruits by leaves and branches, pose great challenges for detecting immature apples. This paper proposes an improved YOLOv3 detection method for immature apples in the orchard scene. CSPDarknet53 is used as the backbone network, the CIoU bounding-box regression mechanism is introduced, and Mosaic augmentation is applied to improve detection accuracy. On a dataset with severely occluded fruits, the F1 and mAP of the proposed immature apple recognition model are 0.652 and 0.675, respectively. The inference time for a single 416×416 image is 12 ms, the detection speed reaches 83 frames/s on a 1080 Ti, and the model inference time is 8.6 ms. Therefore, for the severely occluded immature apple dataset, the proposed method achieves a significant detection effect and provides a feasible solution for the automation and mechanization of the apple industry.
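Mosaic augmentation, named above alongside CIoU, tiles four training images into one canvas and remaps their boxes into canvas coordinates. A minimal sketch with assumed equal-sized inputs (real implementations also randomize the tile split point and crop):

```python
import numpy as np

def mosaic(images, boxes_per_image, size=64):
    """Mosaic augmentation sketch: tile four images into one 2x2 canvas
    and offset their boxes (x1, y1, x2, y2) into canvas coordinates.
    Assumes all four inputs are already `size` x `size`."""
    canvas = np.zeros((2 * size, 2 * size, 3), dtype=images[0].dtype)
    offsets = [(0, 0), (0, size), (size, 0), (size, size)]  # (y, x)
    out_boxes = []
    for img, boxes, (oy, ox) in zip(images, boxes_per_image, offsets):
        canvas[oy:oy + size, ox:ox + size] = img
        for x1, y1, x2, y2 in boxes:
            out_boxes.append((x1 + ox, y1 + oy, x2 + ox, y2 + oy))
    return canvas, out_boxes

rng = np.random.default_rng(6)
imgs = [rng.random((64, 64, 3)) for _ in range(4)]
boxes = [[(10, 10, 30, 30)]] * 4                 # one toy box per image
canvas, out = mosaic(imgs, boxes)
```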


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Yao Chen ◽  
Tao Duan ◽  
Changyuan Wang ◽  
Yuanyuan Zhang ◽  
Mo Huang

Ship detection on synthetic aperture radar (SAR) imagery has many valuable applications in both civil and military fields and has received extraordinary attention in recent years. Traditional detection methods are insensitive to multiscale ships and usually time-consuming, resulting in low detection accuracy and limitations for real-time processing. To balance accuracy and speed, an end-to-end ship detection method for complex inshore and offshore scenes based on deep convolutional neural networks (CNNs) is proposed in this paper. First, the SAR images are divided into grids, and anchor boxes are predefined on the responsible grids for dense ship prediction. Then, Darknet-53 with residual units is adopted as a backbone to extract features, and a top-down pyramid structure with concatenation is added for multiscale feature fusion. By this means, abundant hierarchical features containing both spatial and semantic information are extracted. Meanwhile, strategies such as soft non-maximum suppression (Soft-NMS), mix-up and mosaic data augmentation, multiscale training, and hybrid optimization are used for performance enhancement. Besides, the model is trained from scratch to avoid the learning-objective bias of pretraining. The proposed one-stage method performs end-to-end inference with a single network, so detection speed is guaranteed by the concise paradigm. Extensive experiments on the public SAR ship detection dataset (SSDD) show that the method detects both inshore and offshore ships with higher accuracy than other mainstream methods, yielding an average accuracy of 95.52% at a fast detection speed of about 72 frames per second (FPS). Actual Sentinel-1 and Gaofen-3 data are used for verification, and the detection results further demonstrate the effectiveness and robustness of the method.
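Soft-NMS, one of the strategies listed, decays the scores of detections that overlap the current best box instead of discarding them outright, which helps with densely packed ships. A numpy sketch of the Gaussian variant:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.001):
    """Gaussian Soft-NMS: instead of deleting boxes that overlap the
    current best, decay their scores by exp(-IoU^2 / sigma)."""
    boxes, scores = boxes.astype(float), scores.astype(float).copy()
    keep = []
    idx = np.arange(len(scores))
    while len(idx):
        best = idx[np.argmax(scores[idx])]
        keep.append(best)
        idx = idx[idx != best]
        if not len(idx):
            break
        x1 = np.maximum(boxes[best, 0], boxes[idx, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idx, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idx, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idx, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        areas = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (areas(boxes[[best]])[0] + areas(boxes[idx]) - inter)
        scores[idx] *= np.exp(-iou ** 2 / sigma)   # decay, don't delete
        idx = idx[scores[idx] > score_thr]         # drop only near-zero scores
    return keep, scores

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]])
scores = np.array([0.9, 0.8, 0.7])
keep, new_scores = soft_nms(boxes, scores)
```

The heavily overlapping second box survives with a decayed score, while the distant third box is untouched.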


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xu Han ◽  
Lining Zhao ◽  
Yue Ning ◽  
Jingfeng Hu

The application of ship detection to intelligent-assisted ship navigation places stringent requirements on a model's detection speed and accuracy. In response, this study uses an improved YOLO-V4 detection model (ShipYOLO) to detect ships. Compared to YOLO-V4, the model has three main improvements. First, the backbone network (CSPDarknet) of YOLO-V4 is optimized: during training, parallel 3×3 convolution, 1×1 convolution, and identity branches replace the original feature extraction component (ResUnit) so that more features are extracted; at inference, the branch parameters are merged to form a new backbone network named RCSPDarknet, which improves inference speed while also improving accuracy. Second, to address missed detections of small-scale ships, we designed a new receptive-field amplification module named DSPP with dilated convolution and max-pooling, which improves the model's acquisition of small-scale ship spatial information and its robustness to spatial displacement of ship targets. Finally, we use an attention mechanism and ResNet's shortcut idea to improve the feature pyramid structure (PAFPN) of YOLO-V4, obtaining a new feature pyramid structure named AtFPN. The structure effectively improves feature extraction for ships of different scales and reduces the number of model parameters, further improving inference speed and detection accuracy. In addition, we created a single-category ship dataset with a total of 2238 images. Experimental results show that ShipYOLO is faster and more accurate across different input sizes. With an input size of 320×320 on a PC equipped with an NVIDIA 1080 Ti GPU, the FPS and mAP@0.5:0.95 (mAP@0.9) of ShipYOLO are increased by 23.7% and 13.6% (10.6%), respectively, compared to YOLO-V4.
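The training-to-inference branch merging described for RCSPDarknet follows the structural re-parameterization idea (as in RepVGG): the 3×3 conv, 1×1 conv, and identity branches collapse into a single 3×3 kernel, since the 1×1 kernel can be zero-padded to 3×3 and the identity is a centred 1. A single-channel numpy sketch (BatchNorm folding omitted):

```python
import numpy as np

def conv2d(x, k):
    """'Same' single-channel convolution with zero padding (3x3 kernel)."""
    H, W = x.shape
    p = np.pad(x, 1)
    return sum(k[i, j] * p[i:i + H, j:j + W] for i in range(3) for j in range(3))

def merge_branches(k3, k1):
    """Collapse 3x3 conv + 1x1 conv + identity into one 3x3 kernel:
    the 1x1 weight and the identity's centred 1 both land on the
    kernel's centre tap."""
    merged = k3.copy()
    merged[1, 1] += k1 + 1.0          # k1 is the scalar 1x1 weight
    return merged

rng = np.random.default_rng(7)
x = rng.normal(size=(6, 6))
k3, k1 = rng.normal(size=(3, 3)), rng.normal()
branched = conv2d(x, k3) + k1 * x + x            # training-time three branches
merged = conv2d(x, merge_branches(k3, k1))       # inference-time single conv
```

The two outputs are identical, which is why the merged network is faster without losing the accuracy gained by the multi-branch training.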

