MFFusion: A Multi-level Features Fusion Model for Malicious Traffic Detection based on Deep Learning

2022 ◽  
Vol 202 ◽  
pp. 108658
Author(s):  
Kunda Lin ◽  
Xiaolong Xu ◽  
Fu Xiao
Algorithms ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 111
Author(s):  
Shaojun Wu ◽  
Ling Gao

In person re-identification, extracting image features is an important step when retrieving pedestrian images. Most current methods extract only global features or only local features of pedestrian images, so inconspicuous details are easily ignored during feature learning, which is neither efficient nor robust for scenarios with large appearance differences. In this paper, we propose a Multi-level Feature Fusion model that combines global and local image features through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with the Part-based Multi-level Net to fuse low-to-high-level local features of pedestrian images, while Global-Local Branches extract the local and global features at the highest level. Experiments show that our deep learning model based on multi-level feature fusion works well in person re-identification, outperforming the state of the art by considerable margins on three widely used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, outperforming existing works by a large margin (more than 6%).
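As a rough illustration of the fused pedestrian descriptor, here is a minimal NumPy sketch; the feature vectors and their dimensions are toy placeholders, not the paper's actual network outputs. It concatenates an L2-normalized global feature with several L2-normalized part-level local features:

```python
import numpy as np

def fuse_descriptor(global_feat, local_feats):
    """Concatenate a global feature vector with part-level local feature
    vectors (each L2-normalized) into one pedestrian descriptor."""
    parts = [global_feat] + list(local_feats)
    normed = [p / (np.linalg.norm(p) + 1e-12) for p in parts]
    return np.concatenate(normed)

# Toy example: one 256-d global feature and three 128-d part features.
g = np.random.rand(256)
part_feats = [np.random.rand(128) for _ in range(3)]
desc = fuse_descriptor(g, part_feats)
print(desc.shape)  # (640,)
```

In a real model the vectors would come from pooled CNN activations at different depths; normalizing each part before concatenation keeps any one branch from dominating the distance metric.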


Author(s):  
T. Jiang ◽  
X. J. Wang

Abstract. In recent years, deep learning technology has developed continuously and gradually spread to various fields. Among them, the Convolutional Neural Network (CNN), which can extract deep image features thanks to its unique network structure, plays an increasingly important role in hyperspectral image classification. This paper constructs a feature fusion model that combines the deep features derived from a 1D-CNN and a 2D-CNN, and explores the potential of such a model for hyperspectral image classification. The experiments are based on the open-source deep learning framework TensorFlow with Python 3 as the programming environment. First, multi-layer perceptron (MLP), 1D-CNN, and 2D-CNN models are constructed; then the pre-trained 1D-CNN and 2D-CNN models are used as feature extractors; finally, features are extracted via the fusion model built on top of them. The widely used open hyperspectral dataset Pavia University was selected as a test to compare classification accuracy and classification confidence among the models. The experimental results show that the feature fusion model obtains higher overall accuracy (99.65%) and Kappa coefficient (0.9953), and lower uncertainty for the boundary and unknown regions (3.43%) of the dataset. Since the fusion model inherits the structural characteristics of the 1D-CNN and 2D-CNN, the complementary advantages of the two models are achieved: the spectral and spatial features of hyperspectral images are fully exploited, yielding state-of-the-art classification accuracy and generalization performance.
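A minimal sketch of the fusion idea follows, with simple hand-crafted statistics standing in for the pre-trained 1D-CNN (spectral) and 2D-CNN (spatial) extractors; the patch size and band count are illustrative assumptions:

```python
import numpy as np

def spectral_features(pixel_spectrum):
    # Stand-in for a pre-trained 1D-CNN: simple per-pixel band statistics.
    return np.array([pixel_spectrum.mean(), pixel_spectrum.std(),
                     pixel_spectrum.max(), pixel_spectrum.min()])

def spatial_features(patch):
    # Stand-in for a pre-trained 2D-CNN: mean spatial gradients of the
    # band-averaged patch.
    gy, gx = np.gradient(patch.mean(axis=-1))
    return np.array([np.abs(gx).mean(), np.abs(gy).mean()])

def fused_features(patch):
    # Concatenate spectral features of the center pixel with spatial
    # features of its neighborhood, mirroring the two-branch fusion.
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return np.concatenate([spectral_features(center), spatial_features(patch)])

patch = np.random.rand(5, 5, 103)  # 5x5 neighborhood, 103 Pavia bands
print(fused_features(patch).shape)  # (6,)
```

In the actual model, both branches would output learned feature vectors and a classifier head would be trained on the concatenation.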


2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong feature-learning abilities and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples is labor-intensive for HSI. In addition, single-level features from a single layer are usually considered, which may lose some important information. Using multiple networks to obtain multi-level features is one solution, but at the cost of longer training time and higher computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked from fully 3D convolutional layers and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. Besides, the 3D-CAE can be trained in an unsupervised way without labeled samples. Moreover, the multi-level features are obtained directly from the encoded layers at different scales and resolutions, which is more efficient than using multiple networks to get them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method shows great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
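The multi-scale encoder readout can be sketched as follows; average pooling stands in for the strided 3D convolutions of the actual encoder, and all shapes are toy values:

```python
import numpy as np

def avg_pool2(x):
    """2x spatial downsampling by average pooling, a stand-in for one
    strided convolutional stage of the encoder."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def multi_level_features(cube, levels=3):
    """Collect feature maps at several scales from successive encoder
    stages of one network, instead of training one network per level."""
    feats, x = [], cube.mean(axis=-1)  # collapse the spectral axis in the toy
    for _ in range(levels):
        x = avg_pool2(x)
        feats.append(x.copy())
    return feats

cube = np.random.rand(32, 32, 10)  # toy spatial-spectral cube
for f in multi_level_features(cube):
    print(f.shape)  # (16, 16) then (8, 8) then (4, 4)
```

The key point the sketch mirrors is that all levels fall out of a single forward pass through one encoder, so no extra networks or labels are needed.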


2018 ◽  
Vol 14 (10) ◽  
pp. 155014771880671 ◽  
Author(s):  
Tao Li ◽  
Hai Wang ◽  
Yuan Shao ◽  
Qiang Niu

With the rapid growth of device-free indoor positioning requirements and the convenience of channel state information acquisition, research on indoor fingerprint positioning based on channel state information is increasingly valued. In this article, a multi-level fingerprinting approach is proposed, composed of two levels: the first layer uses deep learning and the second layer applies an optimal-subcarrier filtering method. This method, which uses channel state information, is termed multi-level fingerprinting with deep learning. Deep neural networks are applied in the first layer, which comprises two phases: an offline training phase and an online localization phase. In the offline training phase, deep neural networks are used to train the optimal weights. In the online localization phase, the five positions closest to the true position are obtained through forward propagation. The second layer then refines the first layer's results with the optimal-subcarrier filtering method. At an accuracy threshold of 0.6 m, the positioning accuracy in two common environments reaches 96% and 93.9%, respectively. The evaluation results show that the positioning accuracy of this method is better than received-signal-strength-based methods and the support vector machine method, and is also slightly improved over a plain deep learning method.
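The two-level pipeline can be sketched in NumPy; nearest-neighbor ranking stands in for the trained DNN's forward pass, and the "optimal" subcarrier subset is an assumed, hand-picked index set rather than the paper's selection procedure:

```python
import numpy as np

def top5_candidates(query, fingerprints):
    """First layer: rank reference positions by distance to the query
    CSI fingerprint (stand-in for the trained DNN's forward pass)."""
    d = np.linalg.norm(fingerprints - query, axis=1)
    return np.argsort(d)[:5]

def refine(query, fingerprints, candidates, subcarriers):
    """Second layer: re-rank the five candidates using only a subset of
    'optimal' subcarriers (indices assumed chosen offline)."""
    d = np.linalg.norm(fingerprints[candidates][:, subcarriers]
                       - query[subcarriers], axis=1)
    return candidates[np.argmin(d)]

rng = np.random.default_rng(0)
fp = rng.random((20, 30))          # 20 reference positions, 30 subcarriers
q = fp[7] + 0.01 * rng.random(30)  # query measured near position 7
cand = top5_candidates(q, fp)
print(refine(q, fp, cand, subcarriers=np.arange(10)))  # 7
```

The second stage only re-scores the five survivors of the first stage, which is what keeps the filtering step cheap at online localization time.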


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8080
Author(s):  
Ahmed Shaheen ◽  
Umair bin Waheed ◽  
Michael Fehler ◽  
Lubos Sokol ◽  
Sherif Hanafy

Automatic detection of low-magnitude earthquakes has become an increasingly important research topic in recent years due to a sharp increase in induced seismicity around the globe. The detection of low-magnitude seismic events is essential for microseismic monitoring of hydraulic fracturing, carbon capture and storage, and geothermal operations for hazard detection and mitigation. Moreover, the detection of micro-earthquakes is crucial to understanding the underlying mechanisms of larger earthquakes. Various algorithms, including deep learning methods, have been proposed over the years to detect such low-magnitude events. However, there is still a need for improving the robustness of these methods in discriminating between local sources of noise and weak seismic events. In this study, we propose a convolutional neural network (CNN) to detect seismic events from shallow borehole stations in Groningen, the Netherlands. We train a CNN model to detect low-magnitude earthquakes, harnessing the multi-level sensor configuration of the G-network in Groningen. Each G-network station consists of four geophones at depths of 50, 100, 150, and 200 m. Unlike prior deep learning approaches that use 3-component seismic records only at a single sensor level, we use records from the entire borehole as one training example. This allows us to train the CNN model using moveout patterns of the energy traveling across the borehole sensors to discriminate between events originating in the subsurface and local noise arriving from the surface. We compare the prediction accuracy of our trained CNN model to that of the STA/LTA and template matching algorithms on a two-month continuous record. We demonstrate that the CNN model shows significantly better performance than STA/LTA and template matching in detecting new events missing from the catalog and minimizing false detections. Moreover, we find that using the moveout feature allows us to effectively train our CNN model using only a fraction of the data that would be needed otherwise, saving plenty of manual labor in preparing training labels. The proposed approach can be easily applied to other microseismic monitoring networks with multi-level sensors.
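The multi-level input construction described above can be sketched as follows; the trace length and the dictionary layout are illustrative assumptions, not the paper's actual data format:

```python
import numpy as np

def borehole_example(records):
    """Stack the 3-component records from all four geophone levels
    (50/100/150/200 m) into a single 12-channel training example, so a
    CNN can see the moveout of energy across the borehole.

    records: dict mapping depth in meters -> array of shape (3, n_samples)
    """
    depths = sorted(records)  # shallow to deep, fixed channel ordering
    return np.concatenate([records[d] for d in depths], axis=0)

n = 400  # samples per trace in this toy example
recs = {d: np.random.rand(3, n) for d in (50, 100, 150, 200)}
x = borehole_example(recs)
print(x.shape)  # (12, 400)
```

Keeping the channel ordering fixed by depth is what lets the network learn the downhole-to-surface (or surface-to-downhole) moveout pattern that separates subsurface events from surface noise.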


Author(s):  
Arjun Benagatte Channegowda ◽  
H N Prakash

Providing security in biometrics is a major challenge at present, and a lot of research is going on in this area. Security can be tightened by using more complex systems, for example by using more than one biometric trait for recognition. In this paper, multimodal biometric models are developed to improve the recognition rate of a person. A combination of physiological and behavioral biometric characteristics is used: fingerprint and signature traits are combined to build a multimodal recognition system. Histogram of oriented gradients (HOG) features are extracted from the biometric traits, and feature fusion is applied at two levels. Features of fingerprints and signatures are fused using concatenation, sum, max, min, and product rules at multiple stages, and the fused features are used to train a deep learning neural network model. In the proposed work, multi-level feature fusion for multimodal biometrics with a deep learning classifier is used, and results are analyzed by varying the number of hidden neurons and hidden layers. Experiments are carried out on the SDUMLA-HMT fingerprint datasets (Machine Learning and Data Mining Lab, Shandong University) and the MCYT signature datasets (Biometric Recognition Group), and encouraging results were obtained.
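The five fusion rules named above are straightforward to sketch in NumPy; the three-dimensional vectors are toy stand-ins for real HOG descriptors, which would be much longer and extracted from actual fingerprint and signature images:

```python
import numpy as np

def fuse(a, b, rule):
    """Combine two equal-length HOG feature vectors with one of the
    element-wise fusion rules (concatenation doubles the length)."""
    rules = {
        "concat":  lambda: np.concatenate([a, b]),
        "sum":     lambda: a + b,
        "max":     lambda: np.maximum(a, b),
        "min":     lambda: np.minimum(a, b),
        "product": lambda: a * b,
    }
    return rules[rule]()

finger = np.array([0.2, 0.8, 0.5])  # toy HOG features from a fingerprint
sign = np.array([0.4, 0.1, 0.5])    # toy HOG features from a signature
print(fuse(finger, sign, "max"))           # [0.4 0.8 0.5]
print(fuse(finger, sign, "concat").shape)  # (6,)
```

In the proposed pipeline, the fused vector (whichever rule is chosen) becomes the input to the deep learning classifier, so the rule directly controls the classifier's input dimensionality only in the concatenation case.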

