Cascaded Cross-Layer Fusion Network for Pedestrian Detection

Mathematics, 2022, Vol 10 (1), pp. 139
Author(s): Zhifeng Ding, Zichen Gu, Yanpeng Sun, Xinguang Xiang

Anchor-free detection methods not only reduce the training cost of object detection but also avoid the imbalance problems caused by an excessive number of anchors. However, these methods focus only on the impact of the detection head on detection performance and ignore the impact of feature fusion. In this article, we take pedestrian detection as an example and propose a one-stage anchor-free network, the Cascaded Cross-layer Fusion Network (CCFNet), which consists of a Cascaded Cross-layer Fusion module (CCF) and a novel detection head. CCF fully considers how high-level and low-level information is distributed across the feature maps of the different stages of the network: the deep layers are first used to remove much of the noise in the shallow features, and the high-level features are then reused to obtain a more complete feature representation. For the pedestrian detection task, a novel detection head is designed that uses a global smooth map (GSMap) to provide global information for the center map, yielding a more accurate center map. Finally, we verify the feasibility of CCFNet on the Caltech and CityPersons datasets.
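
The cross-layer fusion idea described above (deep, semantic features suppressing noise in shallow features before being reused) can be illustrated with a small PyTorch sketch. This is a minimal illustration under our own assumptions; the module name, channel widths, and the sigmoid gating are illustrative and are not taken from the CCFNet paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerFusion(nn.Module):
    """Illustrative only: fuse a deep (semantic) feature map into a shallow (detailed) one."""
    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        self.reduce_shallow = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.reduce_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, shallow, deep):
        # Upsample the deep features to the shallow resolution and use them
        # to suppress noise in the shallow map before reusing them additively.
        deep_up = F.interpolate(self.reduce_deep(deep), size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        gate = torch.sigmoid(deep_up)                      # semantic gating of shallow noise
        fused = self.reduce_shallow(shallow) * gate + deep_up
        return self.refine(fused)

# toy usage with random stand-ins for backbone features
c3 = torch.randn(1, 256, 64, 64)   # shallow feature map
c5 = torch.randn(1, 1024, 16, 16)  # deep feature map
print(CrossLayerFusion(256, 1024, 256)(c3, c5).shape)  # torch.Size([1, 256, 64, 64])
```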

PLoS ONE, 2021, Vol 16 (5), pp. e0250782
Author(s): Bin Wang, Bin Xu

With the rapid development of Unmanned Aerial Vehicles, vehicle detection in aerial images plays an important role in many applications. Compared with general object detection problems, vehicle detection in aerial images remains a challenging research topic because it is affected by several unique factors, e.g., varying camera angles, small vehicle sizes, and complex backgrounds. In this paper, a Feature Fusion Deep-Projection Convolution Neural Network is proposed to enhance the ability to detect small vehicles in aerial images. The backbone of the proposed framework uses a novel residual block, named the stepwise res-block, to extract high-level semantic features while conserving low-level detail features. A specially designed feature fusion module is adopted to further balance the features obtained from different levels of the backbone. A deep-projection deconvolution module is used to minimize the information contamination introduced by down-sampling/up-sampling processes. The proposed framework has been evaluated on the UCAS-AOD, VEDAI, and DOTA datasets. According to the evaluation results, it outperforms other state-of-the-art vehicle detection algorithms for aerial images.
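
As a rough illustration of balancing features from different backbone levels, the sketch below resizes several hypothetical backbone outputs to a common resolution and mixes them with 1 × 1 convolutions. It is a generic stand-in, not the paper's feature fusion module; all names and channel widths are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BalancedFeatureFusion(nn.Module):
    """Illustrative only: resize multi-level backbone features to one scale and re-balance them."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # one 1x1 conv per level to project to a common channel width
        self.projs = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.mix = nn.Conv2d(out_channels * len(in_channels), out_channels, 1)

    def forward(self, feats):
        target = feats[0].shape[-2:]  # fuse at the finest resolution
        resized = [F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
                   for p, f in zip(self.projs, feats)]
        return self.mix(torch.cat(resized, dim=1))

# toy usage with three hypothetical backbone outputs
feats = [torch.randn(1, 128, 80, 80), torch.randn(1, 256, 40, 40), torch.randn(1, 512, 20, 20)]
print(BalancedFeatureFusion([128, 256, 512], 256)(feats).shape)  # torch.Size([1, 256, 80, 80])
```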


2021, Vol 13 (18), pp. 3650
Author(s): Ru Luo, Jin Xing, Lifu Chen, Zhouhao Pan, Xingmin Cai, ...

Although deep learning has achieved great success in aircraft detection from SAR imagery, its black-box behavior has been criticized for low comprehensibility and interpretability. Such challenges have impeded the trustworthiness and wide application of deep learning techniques in SAR image analytics. In this paper, we propose an innovative eXplainable Artificial Intelligence (XAI) framework to glass-box deep neural networks (DNNs), using aircraft detection as a case study. The framework is composed of three parts: hybrid global attribution mapping (HGAM) for backbone network selection, a path aggregation network (PANet), and class-specific confidence score mapping (CCSM) for visualization of the detector. HGAM integrates local and global XAI techniques to evaluate the effectiveness of DNN feature extraction; PANet provides advanced feature fusion to generate multi-scale prediction feature maps; and CCSM relies on visualization methods to examine the detection performance of a given DNN on input SAR images. This framework can select the optimal backbone DNN for aircraft detection and map the detection performance for a better understanding of the DNN. We verify its effectiveness with experiments using Gaofen-3 imagery. Our XAI framework offers an explainable approach to designing, developing, and deploying DNNs for SAR image analytics.
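
A minimal sketch of the kind of class-specific confidence visualization that CCSM relies on, assuming a single-channel SAR chip and a per-pixel class confidence map from the detector; the function name, resampling, and color choices are ours and not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_class_confidence_map(sar_image, conf_map, title="aircraft confidence"):
    """Illustrative only: overlay a class-specific confidence map on a single-channel SAR image.

    sar_image : (H, W) float array, e.g. normalized amplitude
    conf_map  : (h, w) float array of per-pixel class confidence in [0, 1]
    """
    h, w = sar_image.shape
    # resize the confidence map to the image size with simple block replication
    conf_up = np.kron(conf_map, np.ones((h // conf_map.shape[0], w // conf_map.shape[1])))
    plt.imshow(sar_image, cmap="gray")
    plt.imshow(conf_up, cmap="jet", alpha=0.4)   # translucent heat map
    plt.title(title)
    plt.axis("off")
    plt.show()

# toy example with random data standing in for a SAR chip and detector scores
show_class_confidence_map(np.random.rand(256, 256), np.random.rand(32, 32))
```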


Author(s): Xinge Zhu, Liang Li, Weigang Zhang, Tianrong Rao, Min Xu, ...

Visual emotion recognition aims to associate images with appropriate emotions. Different visual stimuli, from low level to high level, can affect human emotion, such as color, texture, parts, and objects. However, most existing methods treat the different levels of features as independent entities without an effective method for feature fusion. In this paper, we propose a unified CNN-RNN model that predicts emotion from features fused across levels by exploiting the dependencies among them. Our architecture leverages a convolutional neural network (CNN) with multiple layers to extract different levels of features within a multi-task learning framework, in which two related loss functions are introduced to learn the feature representation. Considering the dependencies between the low-level and high-level features, a new bidirectional recurrent neural network (RNN) is proposed to integrate the learned features from the different layers of the CNN model. Extensive experiments on both Internet image and art photo datasets demonstrate that our method outperforms the state-of-the-art methods with a performance improvement of at least 7%.
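
To make the bidirectional fusion step concrete, the following sketch treats globally pooled CNN features from several layers as a short sequence and integrates them with a bidirectional GRU. It is a simplified stand-in for the paper's bidirectional RNN; the dimensions, pooling, and classifier head are assumptions.

```python
import torch
import torch.nn as nn

class MultiLevelBiRNN(nn.Module):
    """Illustrative only: fuse per-level CNN features with a bidirectional GRU."""
    def __init__(self, feat_dim, hidden_dim, num_emotions):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, level_feats):
        # level_feats: list of (B, feat_dim) vectors, ordered low-level -> high-level
        seq = torch.stack(level_feats, dim=1)       # (B, num_levels, feat_dim)
        out, _ = self.rnn(seq)                      # (B, num_levels, 2*hidden_dim)
        pooled = out.mean(dim=1)                    # aggregate over levels
        return self.classifier(pooled)

# toy usage: four levels of globally pooled CNN features for a batch of 2 images
levels = [torch.randn(2, 512) for _ in range(4)]
print(MultiLevelBiRNN(512, 256, 8)(levels).shape)  # torch.Size([2, 8])
```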


2021, Vol ahead-of-print (ahead-of-print)
Author(s): Zishuo Han, Chunping Wang, Qiang Fu

Purpose The purpose of this paper is to use the most popular deep learning algorithm to perform vehicle detection in urban areas of MiniSAR images and to provide a reliable means of ground monitoring. Design/methodology/approach An accurate detector, the rotation region-based convolutional neural network (CNN) with multilayer fusion and multidimensional attention (M2R-Net), is proposed in this paper. Specifically, M2R-Net adopts a multilayer feature fusion strategy to extract feature maps with more extensive information. Next, the authors implement a multidimensional attention network to highlight target areas. Furthermore, a novel balanced sampling strategy for hard and easy positive-negative samples and a global balanced loss function are applied to deal with spatial imbalance and objective imbalance. Finally, rotation anchors are used to predict and calibrate the minimum circumscribed rectangle of each vehicle. Findings By analyzing many groups of experiments, the validity and universality of the proposed model are verified. More importantly, comparisons with SSD, LRTDet, RFCN, DFPN, CMF-RCNN, R3Det, and SCRDet demonstrate that M2R-Net achieves state-of-the-art detection performance. Research limitations/implications Progress in the field of MiniSAR applications has been slow due to strong speckle noise, phase errors, complex environments, and a low signal-to-noise ratio. In addition, four kinds of imbalance, i.e. spatial imbalance, scale imbalance, class imbalance and objective imbalance, in CNN-based object detection greatly inhibit the optimization of detection performance. Originality/value This research can not only enrich the means of daily traffic monitoring but also be used for enemy intelligence reconnaissance in wartime.
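
As a small illustration of the rotation-anchor representation, the sketch below converts an anchor parameterized by center, size, and angle into its four corner points. The parameterization and angle convention are our assumptions and not necessarily those used by M2R-Net.

```python
import numpy as np

def rotated_anchor_corners(cx, cy, w, h, angle_deg):
    """Illustrative only: return the 4 corners of a rotated anchor
    (a minimum circumscribed rectangle candidate), assuming a counter-clockwise angle."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # axis-aligned half-extents before rotation, as row vectors
    half = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                     [w / 2,  h / 2], [-w / 2,  h / 2]])
    return half @ rot.T + np.array([cx, cy])

# toy usage: a 40 x 20 anchor centered at (100, 50), rotated by 30 degrees
print(rotated_anchor_corners(100.0, 50.0, 40.0, 20.0, 30.0))
```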


2021, Vol 2021, pp. 1-9
Author(s): Fariba Rezaei, Mehran Yazdi

Recently, attention toward autonomous surveillance has intensified, and anomaly detection in crowded scenes is one of its significant tasks. Traditional approaches extract handcrafted features and require a subsequent model-learning step; they mostly capture low-level spatiotemporal features of videos and neglect semantic information. Recently, deep learning (DL) methods have emerged in various domains, especially CNNs for visual problems, with the ability to extract high-level information at the higher layers of their architectures. On the other hand, topic-modeling-based approaches such as NMF can extract more semantic representations. Here, we investigate a new hybrid visual embedding method based on deep features and a topic model for anomaly detection. Features per frame are computed hierarchically through a pretrained deep model and, in parallel, topic distributions are learned through multilayer nonnegative matrix factorization, entangling information from the extracted deep features. Training is performed on normal samples only. Thereafter, K-means is applied to find typical normal clusters. At test time, after obtaining the feature representation through the deep model and the topic distribution for each test frame, the earth mover's distance (EMD) is evaluated between the normal cluster centroids and the test topic distributions. A difference exceeding a threshold is detected as an anomaly. Experimental results on the benchmark UCSD Ped1 and Ped2 datasets demonstrate the effectiveness of our proposed method for anomaly detection.
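
A compact sketch of the scoring pipeline described above, using a single-layer NMF as a stand-in for multilayer NMF and SciPy's one-dimensional Wasserstein distance as a simple stand-in for the paper's EMD metric; feature dimensions, cluster counts, and the threshold are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Stand-ins for per-frame deep features (non-negative, e.g. post-ReLU activations)
train_feats = rng.random((500, 128))   # normal frames only
test_feats = rng.random((50, 128))

# Learn topic distributions with NMF (single layer as a stand-in for multilayer NMF)
nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
train_topics = nmf.fit_transform(train_feats)          # (500, 10)
test_topics = nmf.transform(test_feats)                # (50, 10)

# Cluster the normal topic distributions to get typical normal centroids
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(train_topics)
centroids = kmeans.cluster_centers_

# Score each test frame by its smallest distance to any normal centroid
threshold = 0.05  # would be chosen on a validation set in practice
for i, t in enumerate(test_topics):
    score = min(wasserstein_distance(t, c) for c in centroids)
    if score > threshold:
        print(f"frame {i}: anomaly (distance={score:.4f})")
```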


2020, Vol 15, pp. 155892502090826
Author(s): Junfeng Jing, Dong Zhuo, Huanhuan Zhang, Yong Liang, Min Zheng

To improve the defect detection rate and fabric product quality, a fabric defect detection method with higher real-time performance, based on an improved YOLOv3 model, is proposed. There are two key steps. First, on the basis of YOLOv3, dimension clustering of the target boxes is carried out by combining the fabric defect sizes with the k-means algorithm to determine the number and sizes of the prior boxes. Second, low-level features are combined with high-level information, and YOLO detection layers are added to feature maps of different sizes, so that the method can be better applied to defect detection on gray cloth and lattice fabric. The error detection rate of the improved network model is less than 5% for both gray cloth and checked cloth. Experimental results show that the proposed method detects and marks fabric defects more effectively than YOLOv3 and effectively reduces the error detection rate.
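
The prior-box dimension clustering step can be sketched with the usual YOLO-style k-means over box widths and heights, using 1 − IoU as the distance. The implementation below is a generic version under our own assumptions, not the authors' code; the box sizes are random placeholders for labelled defect boxes.

```python
import numpy as np

def kmeans_anchors(box_wh, k, iters=100, seed=0):
    """Illustrative only: cluster (width, height) pairs of labelled defect boxes into
    k prior-box sizes, assigning each box to the anchor with the highest IoU."""
    rng = np.random.default_rng(seed)
    anchors = box_wh[rng.choice(len(box_wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every anchor, both treated as centered rectangles
        inter = np.minimum(box_wh[:, None, 0], anchors[None, :, 0]) * \
                np.minimum(box_wh[:, None, 1], anchors[None, :, 1])
        union = box_wh[:, 0:1] * box_wh[:, 1:2] + anchors[:, 0] * anchors[:, 1] - inter
        assign = np.argmax(inter / union, axis=1)
        new = np.array([box_wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# toy usage with random defect box sizes (in pixels)
boxes = np.random.default_rng(1).uniform(5, 120, size=(300, 2))
print(kmeans_anchors(boxes, k=6))
```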


Electronics, 2020, Vol 9 (9), pp. 1536
Author(s): Deng Jiang, Bei Sun, Shaojing Su, Zhen Zuo, Peng Wu, ...

Deep learning methods have significantly improved object detection performance, but small object detection remains an extremely difficult and challenging task in computer vision. We propose a feature fusion and spatial attention-based single shot detector (FASSD) for small object detection. We fuse high-level semantic information into shallow layers to generate discriminative feature representations for small objects. To adaptively enhance the expression of small-object areas and suppress the feature response of background regions, a spatial attention block learns a self-attention mask to enhance the original feature maps. We also establish a small-object dataset (LAKE-BOAT) of a scene with a boat on a lake and test our algorithm on it to evaluate its performance. The results show that FASSD achieves 79.3% mAP (mean average precision) on the PASCAL VOC2007 test set with a 300 × 300 input, outperforming the original single shot multibox detector (SSD) by 1.6 points as well as most improved algorithms based on SSD. The corresponding detection speed is 45.3 FPS (frames per second) on the VOC2007 test set using a single NVIDIA TITAN RTX GPU. Test results of a simplified FASSD on the LAKE-BOAT dataset indicate that our model achieves an improvement of 3.5% mAP over the baseline network while maintaining a real-time detection speed (64.4 FPS).
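
A minimal PyTorch sketch of a spatial attention block that learns a per-pixel mask to enhance small-object regions and suppress background, in the spirit described above; the exact layer configuration and the residual connection are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Illustrative only: learn a spatial mask that reweights a feature map per pixel."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),          # per-pixel attention weight in (0, 1)
        )

    def forward(self, x):
        return x * self.mask(x) + x   # residual keeps the original response

# toy usage on a shallow feature map already fused with high-level semantics
feat = torch.randn(1, 256, 38, 38)
print(SpatialAttention(256)(feat).shape)  # torch.Size([1, 256, 38, 38])
```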


2021, Vol 13 (10), pp. 1912
Author(s): Zhili Zhang, Meng Lu, Shunping Ji, Huafen Yu, Chenhui Nie

Accurately extracting water bodies from very high resolution (VHR) remote sensing imagery is a great challenge. The boundaries of a water body are commonly hard to identify due to complex spectral mixtures caused by aquatic vegetation, distinct lake/river colors, silt near the bank, shadows from surrounding tall plants, and so on. The diversity and semantic richness of features need to be increased for better extraction of water bodies from VHR remote sensing images. In this paper, we address these problems by designing a novel multi-feature extraction and combination module. This module consists of three feature extraction sub-modules based on spatial and channel correlations in the feature maps at each scale, which capture complete target information from the local space, a larger space, and the between-channel relationships to achieve a rich feature representation. Simultaneously, to better predict the fine contours of water bodies, we adopt a multi-scale prediction fusion module. In addition, to resolve the semantic inconsistency of feature fusion between the encoding and decoding stages, we apply an encoder-decoder semantic feature fusion module to promote fusion effects. We carry out extensive experiments on VHR aerial and satellite imagery, respectively. The results show that our method achieves state-of-the-art segmentation performance, surpassing classic and recent methods. Moreover, the proposed method is robust in challenging water-body extraction scenarios.
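
To illustrate multi-scale prediction fusion, the sketch below upsamples water-body logits predicted at several decoder scales to a common output size and fuses them with a 1 × 1 convolution. Module names and the number of scales are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScalePredictionFusion(nn.Module):
    """Illustrative only: fuse logits predicted at several decoder scales into one mask."""
    def __init__(self, num_scales, num_classes=1):
        super().__init__()
        self.fuse = nn.Conv2d(num_scales * num_classes, num_classes, kernel_size=1)

    def forward(self, logits_per_scale, out_size):
        ups = [F.interpolate(l, size=out_size, mode="bilinear", align_corners=False)
               for l in logits_per_scale]
        return self.fuse(torch.cat(ups, dim=1))

# toy usage: three decoder scales predicting a binary water mask
logits = [torch.randn(1, 1, 64, 64), torch.randn(1, 1, 128, 128), torch.randn(1, 1, 256, 256)]
mask_logits = MultiScalePredictionFusion(num_scales=3)(logits, out_size=(512, 512))
print(mask_logits.shape)  # torch.Size([1, 1, 512, 512])
```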


2021, Vol 13 (2), pp. 38
Author(s): Yao Xu, Qin Yu

Great achievements have been made in pedestrian detection through deep learning. For detectors based on deep learning, making better use of features has become the key to detection performance. While current pedestrian detectors have made efforts to improve feature utilization, it remains inadequate. To solve this problem, we propose the Multi-Level Feature Fusion Module (MFFM) and its Multi-Scale Feature Fusion Unit (MFFU) sub-module, which connect feature maps of the same scale and of different scales using horizontal and vertical connections and shortcut structures. All of these connections carry learnable weights, so they act as adaptive multi-level and multi-scale feature fusion modules that fuse the best features. On this basis, we build a complete pedestrian detector, the Adaptive Feature Fusion Detector (AFFDet), an anchor-free one-stage pedestrian detector that can make full use of features for detection. As a result, compared with other methods, our method achieves better performance on the challenging Caltech Pedestrian Detection Benchmark (Caltech) with quite competitive speed. It is the current state-of-the-art one-stage pedestrian detection method.
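
The learnable-weight fusion idea behind MFFU can be sketched as a softmax-normalized weighted sum of same-resolution feature maps followed by a refining convolution. The snippet below is a generic illustration with assumed shapes, not the AFFDet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFeatureFusion(nn.Module):
    """Illustrative only: fuse same-resolution feature maps with learnable, normalized weights."""
    def __init__(self, num_inputs, channels):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of tensors with identical shape (B, C, H, W)
        w = F.softmax(self.weights, dim=0)
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.conv(fused)

# toy usage: lateral, top-down, and shortcut features at one pyramid level
level_feats = [torch.randn(1, 256, 32, 32) for _ in range(3)]
print(WeightedFeatureFusion(3, 256)(level_feats).shape)  # torch.Size([1, 256, 32, 32])
```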


2021, Vol 2021, pp. 1-10
Author(s): Yu-Fei Bai, Hong-Bo Zhang, Qing Lei, Ji-Xiang Du

Multiperson pose estimation is an important and complex problem in computer vision. In recent years it has been cast as the detection of human skeleton joints and solved with joint heat-map regression networks. The key to accurate pose estimation is learning robust and discriminative feature maps. Although current methods have made significant progress through interlayer fusion and intralevel fusion of feature maps, few works pay attention to combining the two. In this paper, we propose a multistage polymerization network (MPN) for multiperson pose estimation. The MPN continuously learns rich underlying spatial information by fusing features within each layer. It also adds hierarchical connections between feature maps at the same resolution for interlayer fusion, so as to reuse low-level spatial information, refine high-level semantic information, and thereby obtain accurate keypoint representations. In addition, we observe a lack of connection between the output low-level information and the high-level information. To solve this problem, an effective shuffled attention mechanism (SAM) is proposed. The shuffle promotes cross-channel information exchange between pyramid feature maps, while the attention makes a trade-off between the low-level and high-level representations of the output features. As a result, the relationship between the spatial and channel dimensions of the feature maps is further enhanced. Evaluation on public datasets shows that our method performs better than current methods.
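
A small sketch of a shuffled attention block, combining a ShuffleNet-style channel shuffle (to promote cross-channel information exchange) with a squeeze-and-excitation-style channel attention. The grouping, reduction ratio, and attention form are our assumptions rather than the paper's SAM.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels across groups to promote cross-channel information exchange."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

class ShuffledAttention(nn.Module):
    """Illustrative only: channel-shuffle fused pyramid features, then reweight channels."""
    def __init__(self, channels, groups=4, reduction=16):
        super().__init__()
        self.groups = groups
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = channel_shuffle(x, self.groups)
        return x * self.attn(x)

# toy usage on a fused pyramid feature map
feat = torch.randn(2, 256, 64, 64)
print(ShuffledAttention(256)(feat).shape)  # torch.Size([2, 256, 64, 64])
```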

