Wi-Fi Fingerprint-Based Indoor Mobile User Localization Using Deep Learning

2021 · Vol 2021 · pp. 1–12
Author(s): Junhang Bai, Yongliang Sun, Weixiao Meng, Cheng Li

In recent years, deep learning has been applied to Wi-Fi fingerprint-based localization with remarkable performance, and it is expected to satisfy the increasing requirements of indoor location-based services (LBS). In this paper, we propose a Wi-Fi fingerprint-based indoor mobile user localization method that integrates a stacked improved sparse autoencoder (SISAE) and a recurrent neural network (RNN). We improve the sparse autoencoder by adding an activity penalty term to its loss function to control the neuron outputs in the hidden layer. The encoders of three improved sparse autoencoders are stacked to obtain high-level feature representations of received signal strength (RSS) vectors, and the SISAE is constructed for localization by adding a logistic regression layer as the output layer on top of the stacked encoders. Meanwhile, an RNN that takes the previous location coordinates computed by the trained SISAE as extra inputs is employed to compute more accurate current location coordinates for mobile users. The experimental results demonstrate that the proposed SISAE-RNN reduces the mean localization error for mobile users to 1.60 m.
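A minimal PyTorch sketch may make the two-stage architecture concrete. The layer sizes, sigmoid activations, and the squared-mean form of the activity penalty are illustrative assumptions, not the paper's exact choices:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One improved sparse autoencoder layer; the activity penalty
    below acts on the hidden output h."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def sae_loss(x, x_rec, h, beta=0.1):
    # Reconstruction error plus an activity penalty that pushes the mean
    # hidden activations toward zero (illustrative form of the penalty).
    return nn.functional.mse_loss(x_rec, x) + beta * h.mean(dim=0).pow(2).sum()

class SISAE(nn.Module):
    """Three pretrained encoders stacked, topped with a
    logistic-regression-style output layer for coordinates."""
    def __init__(self, n_rss, hidden=(256, 128, 64)):
        super().__init__()
        dims = (n_rss,) + hidden
        self.encoders = nn.Sequential(*(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.Sigmoid())
            for i in range(3)))
        self.out = nn.Linear(hidden[-1], 2)  # (x, y) coordinates

    def forward(self, rss):
        return self.out(self.encoders(rss))

class LocRNN(nn.Module):
    """RNN stage: the input at each step is the RSS vector concatenated
    with the previous coordinates estimated by the trained SISAE."""
    def __init__(self, n_rss, n_hidden=64):
        super().__init__()
        self.rnn = nn.RNN(n_rss + 2, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 2)

    def forward(self, rss_seq, prev_coords):
        out, _ = self.rnn(torch.cat([rss_seq, prev_coords], dim=-1))
        return self.head(out)
```

In this sketch, each autoencoder would first be pretrained with `sae_loss`, the three encoders stacked into the SISAE, and the RNN then trained on trajectories using the SISAE outputs as its previous-coordinate inputs.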

2016 · Vol 2016 · pp. 1–10
Author(s): Shan Pang, Xinyi Yang

In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, namely, the deep convolutional extreme learning machine (DC-ELM), which combines the representational power of CNNs with the fast training of ELMs. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, saving considerable training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets, MNIST and USPS. Experimental results show that our method achieves better testing accuracy with significantly shorter training time than deep learning methods and other ELM methods.
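The ELM stage is what makes training fast: the hidden weights are fixed at random and only the output weights are solved, in closed form. A minimal NumPy sketch of a plain ELM classifier, with the convolution/pooling feature extraction assumed to have happened upstream:

```python
import numpy as np

class ELMClassifier:
    """Plain extreme learning machine: a random hidden layer followed by
    output weights solved in closed form (no gradient descent)."""
    def __init__(self, n_hidden=1000, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Random input weights stay fixed; only beta is learned.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        T = np.eye(int(y.max()) + 1)[y]              # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ T
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```

In DC-ELM, `X` would be the stochastically pooled feature vectors from the last hidden layer rather than raw pixels.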


2016 · Vol 2016 · pp. 1–10
Author(s): Chen Xing, Li Ma, Xiaoquan Yang

Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction from hyperspectral data, and the extracted features provide good discriminability for the classification task. Training a deep network for feature extraction and classification involves unsupervised pretraining and supervised fine-tuning. We utilize the stacked denoising autoencoder (SDAE) method, which is robust to noise, to pretrain the network. In the top layer of the network, a logistic regression (LR) approach is used to perform supervised fine-tuning and classification. Since sparsity of features may improve their separation capability, we use the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level, sparse features. Experimental results on Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrate that SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) achieves higher accuracies than the popular support vector machine (SVM) classifier.
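As a sketch, one SDAE layer with a ReLU encoder might look like the following in PyTorch; the masking-noise corruption scheme and layer sizes are assumptions for illustration, and in the full network each pretrained encoder is stacked before the LR layer is fine-tuned:

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One SDAE layer: reconstruct the clean input from a corrupted copy.
    The ReLU encoder produces the sparse, high-level features."""
    def __init__(self, n_bands, n_hidden, corruption=0.2):
        super().__init__()
        self.corruption = corruption
        self.encode = nn.Sequential(nn.Linear(n_bands, n_hidden), nn.ReLU())
        self.decode = nn.Linear(n_hidden, n_bands)

    def forward(self, x):
        # Masking noise: randomly zero a fraction of the spectral bands.
        mask = (torch.rand_like(x) > self.corruption).float()
        return self.decode(self.encode(x * mask))

def pretrain(dae, loader, epochs=10, lr=1e-3):
    """Unsupervised pretraining: minimize reconstruction of clean spectra."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:                      # x: (batch, n_bands) pixels
            loss = nn.functional.mse_loss(dae(x), x)
            opt.zero_grad(); loss.backward(); opt.step()
```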


Author(s): Ding Liu

The latest developments in computer vision have made exciting progress and had a tremendous impact on our daily lives. In this era of rapid technological advances, deep learning has gained huge popularity as a powerful tool for solving many computer vision problems and has given a great boost to this already fast-developing field. Conventionally, the connection between different vision tasks is fragile; for example, low-level image processing and high-level vision tasks are usually handled separately. However, the inherent relations among the feature representations of these tasks should be exploited rather than ignored. My research focuses on connecting low-level image processing and high-level vision via deep learning. Specifically, my goal is to design deep learning mechanisms that can efficiently and effectively learn features from low-level image processing and use them to improve the performance of high-level vision tasks.


2021 · Vol 13 (3) · pp. 364
Author(s): Han Gao, Jinhui Guo, Peng Guo, Xiuwan Chen

Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised settings where dense pixel-level ground-truth labels are required. In this work, we propose a new object-oriented deep learning framework that leverages residual networks of different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to trade off weak semantics against strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and to optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets covering both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods, with an excellent inference time (11.3 s/ha).
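A compact sketch of the multibranch idea, using off-the-shelf torchvision residual networks of different depths; the specific depths, crop sizes, and fusion by concatenation are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, resnet34, resnet50

class MultiBranchNet(nn.Module):
    """Branches of different depths see the same object at neighboring
    scales; their pooled features are fused for the land-cover label."""
    def __init__(self, n_classes):
        super().__init__()
        self.branches = nn.ModuleList([resnet18(), resnet34(), resnet50()])
        feats = 0
        for b in self.branches:
            feats += b.fc.in_features
            b.fc = nn.Identity()          # keep the pooled features only
        self.classifier = nn.Linear(feats, n_classes)

    def forward(self, patches):
        # `patches`: a list of three tensors, one crop size per branch,
        # e.g. the same object at 64x64, 96x96, and 128x128 pixels.
        f = [b(p) for b, p in zip(self.branches, patches)]
        return self.classifier(torch.cat(f, dim=1))
```

Each branch sees the same GEOBIA object at a different neighboring scale, so the shallow branches retain strong local features while the deeper branches contribute more semantics.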


Sensors · 2021 · Vol 21 (12) · pp. 4045
Author(s): Alessandro Sassu, Jose Francisco Saenz-Cogollo, Maurizio Agelli

Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow new models to be deployed and used on scalable, flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented, real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, service orchestration, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, as well as high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data in clients running in browsers or on any other web-based platform.
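For illustration, a client consuming analytics results over HTTP could be as simple as the following sketch; the endpoint path and JSON fields are hypothetical placeholders, not Deep-Framework's actual API, which is documented in the project repository:

```python
import requests

# Hypothetical endpoint and payload shape for illustration only; consult
# the Deep-Framework documentation for its real REST interface.
BASE = "http://edge-node:8000"

def poll_detections(stream_id):
    """Fetch the latest analytics results for one video stream."""
    resp = requests.get(f"{BASE}/streams/{stream_id}/results", timeout=5)
    resp.raise_for_status()
    for det in resp.json().get("detections", []):
        print(det["label"], det["confidence"], det["bbox"])

poll_detections("camera-01")
```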

