Augmented CWT Features for Deep Learning-Based Indoor Localization Using WiFi RSSI Data

2021 ◽  
Vol 11 (4) ◽  
pp. 1806 ◽  
Author(s):  
Paul Ssekidde ◽  
Odongo Steven Eyobu ◽  
Dong Seog Han ◽  
Tonny J. Oyana

Localization is one of the current challenges in indoor navigation research. The conventional global positioning system (GPS) is affected by weak signal strengths due to high levels of signal interference and fading in indoor environments. Therefore, new positioning solutions tailored for indoor environments need to be developed. In this paper, we propose a deep learning approach for indoor localization. However, the performance of a deep learning system depends on the quality of the feature representation. This paper introduces two novel feature sets based on the continuous wavelet transform (CWT) of received signal strength indicator (RSSI) data. The two novel CWT feature sets were augmented with additive white Gaussian noise. The first feature set is CWT image-based, and the second is composed of CWT power spectral density (PSD) numerical data that were dimensionally equalized using principal component analysis (PCA). Both the image and numerical feature sets were evaluated using CNN and ANN models with the goal of identifying the room that the human subject was in and estimating the subject's precise location within that room. Extensive experiments were conducted to generate the proposed augmented CWT image feature set and numerical CWT PSD feature set using two analyzing wavelets, namely Morlet and Morse. For validation purposes, the performance of the two proposed feature sets was compared with each other and with existing feature set formulations. The accuracy, precision and recall results show that the proposed feature sets performed better than the conventional feature sets used to validate the study. Similarly, the mean localization error of the proposed feature set predictions was lower than that of the conventional feature sets used in indoor localization. In particular, the proposed augmented CWT-image feature set outperformed the augmented CWT-PSD numerical feature set.
The results also show that the Morse-based feature sets trained with CNN produced the best indoor positioning results compared to all Morlet and ANN-based feature set formulations.

Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 875 ◽  
Author(s):  
Xiaochao Dang ◽  
Xiong Si ◽  
Zhanjun Hao ◽  
Yaning Huang

With the rapid development of wireless network technology, wireless passive indoor localization has become an increasingly important technique that is widely used in indoor location-based services. Channel state information (CSI) provides more detailed, subcarrier-level information, which has gained the attention of researchers and become a focus of indoor localization technology. However, existing research has generally adopted amplitude information for eigenvalue calculations; few studies have used the phase information of CSI signals for localization. To overcome the signal interference present in indoor environments, we present a passive human indoor localization method named FapFi, which fuses CSI amplitude and phase information to exploit richer signal characteristics for localization. In the offline stage, we filter out redundant values and outliers in the CSI amplitude information and then process the CSI phase information. A fusion method stores the processed amplitude and phase information as a fingerprint database. Experimental data from two typical environments, a laboratory and a conference room, were gathered and analyzed. The extensive experimental results demonstrate that the proposed algorithm is more efficient than comparable algorithms in data processing and achieves decimeter-level localization accuracy.
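The abstract does not detail its phase processing, but a common CSI phase sanitization step (assumed here for illustration, not necessarily FapFi's exact method) removes the linear trend that carrier-frequency and sampling-time offsets add across subcarriers:

```python
import numpy as np

def sanitize_phase(raw_phase, subcarrier_idx):
    """Unwrap the measured phase, then subtract the linear trend across
    subcarriers (slope from the band endpoints, offset from the mean)."""
    phi = np.unwrap(raw_phase)
    k = np.asarray(subcarrier_idx, dtype=float)
    a = (phi[-1] - phi[0]) / (k[-1] - k[0])   # slope across the band
    b = phi.mean()                            # common offset
    return phi - a * k - b

# Synthetic example: a location-dependent sinusoid plus a hardware-induced
# linear trend, wrapped into (-pi, pi] as a receiver would report it.
k = np.arange(30)                                       # 30 subcarriers (assumed)
true = 0.3 * np.sin(2 * np.pi * k / 30)
raw = np.angle(np.exp(1j * (true + 0.4 * k + 1.2)))     # wrapped measurement
clean = sanitize_phase(raw, k)                          # trend removed
```

After sanitization the remaining phase variation is the location-dependent component, which can then be fused with the filtered amplitudes in the fingerprint database.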


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5495
Author(s):  
Brahim El Boudani ◽  
Loizos Kanaris ◽  
Akis Kokkinis ◽  
Michalis Kyriacou ◽  
Christos Chrysoulas ◽  
...  

In the near future, fifth-generation (5G) wireless technology is expected to be rolled out, offering low latency, high bandwidth and multiple antennas deployed in a single access point. This ecosystem will help further enhance various location-based scenarios such as asset tracking in smart factories, precise smart management of hydroponic indoor vertical farms and indoor wayfinding in smart hospitals. Such a system will also integrate existing technologies like the Internet of Things (IoT), WiFi and other network infrastructures. In this respect, 5G precise indoor localization using heterogeneous IoT technologies (Zigbee, Raspberry Pi, Arduino, BLE, etc.) is a challenging research area. In this work, an experimental 5G testbed integrating C-RAN and IoT networks has been designed. This testbed is used to improve both vertical and horizontal localization (3D localization) in a 5G IoT environment. To achieve this, we propose the DEep Learning-based co-operaTive Architecture (DELTA) machine learning model implemented on a 3D multi-layered fingerprint radiomap. DELTA first estimates the 2D location; the output is then used recursively to predict the 3D location of a mobile station. This approach will benefit use cases such as 3D indoor navigation in multi-floor smart factories or in large complex buildings. Finally, we observed that the proposed model outperformed traditional algorithms such as Support Vector Machine (SVM) and K-Nearest Neighbor (KNN).
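DELTA's recursive 2D-then-3D idea can be sketched with a toy nearest-neighbour stand-in for the deep model. Everything here is hypothetical (synthetic radiomap, AP count, path-loss constants); the point is only the cascade, where the stage-1 2D estimate is fed back as an extra feature for the vertical (floor) prediction:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 3D multi-layered radiomap: 60 reference points with
# (x, y) coordinates, a floor index, and RSS from 5 access points.
aps = rng.uniform(0, 20, size=(5, 2))
xy = rng.uniform(0, 20, size=(60, 2))
floor = rng.integers(0, 3, size=60)
d = np.linalg.norm(xy[:, None, :] - aps[None, :, :], axis=2)
rss = -40.0 - 2.0 * d - 8.0 * floor[:, None] + rng.normal(0, 1, size=(60, 5))

def stage1_xy(sample):
    """Stage 1: nearest neighbour in RSS space gives a 2D position estimate."""
    i = np.argmin(np.linalg.norm(rss - sample, axis=1))
    return xy[i]

def stage2_floor(sample, xy_est):
    """Stage 2: the 2D estimate is appended as an extra feature before
    matching again, mimicking DELTA's recursive 2D -> 3D step."""
    feats = np.hstack([rss, xy])
    q = np.hstack([sample, xy_est])
    i = np.argmin(np.linalg.norm(feats - q, axis=1))
    return int(floor[i])

query = rss[7]                       # query taken from the radiomap itself
est_xy = stage1_xy(query)
est_floor = stage2_floor(query, est_xy)
```

In the paper the two stages are deep networks rather than 1-NN matchers, but the data flow (2D output reused as 3D input) is the same.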


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1995
Author(s):  
Danshi Sun ◽  
Erhu Wei ◽  
Zhuoxi Ma ◽  
Chenxi Wu ◽  
Shiyi Xu

Indoor navigation has attracted commercial developers and researchers over the last few decades. The development of localization tools, methods and frameworks enables current communication services and applications to be optimized by incorporating location data. For clinical applications such as workflow analysis, Bluetooth Low Energy (BLE) beacons have been employed to map the positions of individuals in indoor environments. To map locations, certain existing methods use the received signal strength indicator (RSSI). Devices need to be configured to allow for dynamic interference patterns when using RSSI sensors to monitor indoor positions. In this paper, our objective is to explore an alternative method for monitoring a moving user's indoor position using BLE sensors in complex indoor building environments. We developed a Convolutional Neural Network (CNN)-based positioning model that operates on a 2D image composed of received signal indicator values along the x- and y-axes. Each 10 × 10 matrix is treated like an image of pixels holding the spatial coordinate information, which allows the model to account for the possible shift, addition or removal of a sensor. To develop the CNN, we adopted a neuro-evolution approach that dynamically creates and optimizes the network layers through enhanced Particle Swarm Optimization (PSO). For the optimization of the CNN, the global best solution obtained by PSO is assigned directly to the weights of each layer. In addition, we employed dynamic inertia weights in the PSO, instead of a constant inertia weight, to maintain the CNN layers' length corresponding to the RSSI signals from the BLE sensors. Experiments were conducted in a building environment where thirteen beacon devices had been installed in different locations to record coordinates. For comparison, we further adopted machine learning and deep learning algorithms for predicting a user's location in an indoor environment.
The experimental results indicate that the proposed optimized CNN-based method shows high accuracy (97.92% with 2.8% error) for tracking a moving user’s locations in a complex building without complex calibration as compared to other recent methods.
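The dynamic-inertia PSO at the heart of this method can be illustrated on a toy objective. This is a standard global-best PSO with a linearly decreasing inertia weight (the mapping from the global best onto CNN layer weights is paper-specific and omitted; all parameter values below are conventional assumptions, not the authors'):

```python
import numpy as np

def pso(f, dim, n_particles=20, iters=60, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    """Global-best PSO; the inertia weight decays linearly from w_max to
    w_min, favouring exploration early and exploitation late."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # dynamic inertia
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

# Toy objective: a shifted sphere with minimum at (1, 1, 1).
best, fbest = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
```

In the paper's setting, each particle would encode candidate layer weights, so the global best found this way is written directly into the CNN layers.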


Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 318 ◽  
Author(s):  
Daniel Alshamaa ◽  
Farah Mourad-Chehade ◽  
Paul Honeine ◽  
Aly Chkeir

Indoor localization has several applications, ranging from people tracking and indoor navigation to autonomous robot navigation and asset tracking. We tackle the problem as zoning localization, where the objective is to determine the zone in which the mobile sensor resides at any instant. The decision-making process in localization systems relies on data coming from multiple sensors. The data retrieved from these sensors require robust fusion approaches to be processed. One of these approaches is the belief functions theory (BFT), also called the Dempster–Shafer theory, which deals with uncertainty and imprecision within a theoretically attractive evidential reasoning framework. This paper investigates the use of the BFT to define an evidential framework for estimating the sensor's most probable zone. Real experiments demonstrate the effectiveness of this approach and its competitiveness compared to state-of-the-art methods.
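Dempster's rule of combination, the core of the BFT fusion described above, can be sketched for two sensors assigning mass to two zones (the sensor names and mass values below are hypothetical):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements are
    frozensets of zones; conflicting mass is renormalised away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb            # disjoint focal elements
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

Z1, Z2 = frozenset({"zone1"}), frozenset({"zone2"})
both = Z1 | Z2                                  # ignorance: either zone
# Two sensors: one leans towards zone1, the other is less certain.
m_wifi = {Z1: 0.6, Z2: 0.1, both: 0.3}
m_ble = {Z1: 0.5, Z2: 0.2, both: 0.3}
fused = dempster_combine(m_wifi, m_ble)         # zone1 mass sharpens
```

The fused mass on zone1 exceeds either sensor's individual belief, which is exactly the evidential sharpening the zoning approach relies on.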


Author(s):  
C. Guney

Satellite navigation systems with GNSS-enabled devices, such as smartphones and car navigation systems, have changed the way users travel in outdoor environments. GNSS is generally not well suited to indoor location and navigation for two reasons: first, GNSS does not provide the level of accuracy that indoor applications require; second, poor satellite signal coverage in indoor environments further decreases its accuracy. So rather than using GNSS satellites within closed environments, existing indoor navigation solutions rely heavily on installed sensor networks. There is a high demand for accurate positioning in wireless networks in GNSS-denied environments. However, current wireless indoor positioning systems cannot satisfy the challenging needs of indoor location-aware applications. Nevertheless, access to a user's location indoors is increasingly important in the development of context-aware applications that increase business efficiency. This study examines how current wireless location-sensing systems can be tailored and integrated for specific applications, such as smart cities/grids/buildings/cars and IoT applications, in GNSS-deprived areas.


2020 ◽  
pp. bjophthalmol-2020-317825
Author(s):  
Yonghao Li ◽  
Weibo Feng ◽  
Xiujuan Zhao ◽  
Bingqian Liu ◽  
Yan Zhang ◽  
...  

Background/aims: To apply deep learning technology to develop an artificial intelligence (AI) system that can identify vision-threatening conditions in high myopia patients based on optical coherence tomography (OCT) macular images.
Methods: In this cross-sectional, prospective study, a total of 5505 qualified OCT macular images obtained from 1048 high myopia patients admitted to Zhongshan Ophthalmic Centre (ZOC) from 2012 to 2017 were selected for the development of the AI system. The independent test dataset included 412 images obtained from 91 high myopia patients recruited at ZOC from January 2019 to May 2019. We adopted the InceptionResnetV2 architecture to train four independent convolutional neural network (CNN) models to identify the following four vision-threatening conditions in high myopia: retinoschisis, macular hole, retinal detachment and pathological myopic choroidal neovascularisation. Focal Loss was used to address class imbalance, and optimal operating thresholds were determined according to the Youden index.
Results: In the independent test dataset, the areas under the receiver operating characteristic curves were high for all conditions (0.961 to 0.999). Our AI system achieved sensitivities equal to or even better than those of retina specialists as well as high specificities (greater than 90%). Moreover, our AI system provided a transparent and interpretable diagnosis with heatmaps.
Conclusions: We used OCT macular images to develop CNN models that identify vision-threatening conditions in high myopia patients. Our models achieved reliable sensitivities and high specificities comparable to those of retina specialists and may be applied for large-scale high myopia screening and patient follow-up.
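Threshold selection by the Youden index, as used above, amounts to maximising sensitivity + specificity − 1 over candidate thresholds. A minimal sketch with toy scores and labels (not the paper's data):

```python
import numpy as np

def youden_threshold(scores, labels):
    """Return the score threshold maximising sensitivity + specificity - 1,
    sweeping every distinct score value as a candidate."""
    pos, neg = labels == 1, labels == 0
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & pos).sum() / pos.sum()
        spec = (~pred & neg).sum() / neg.sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy model outputs: higher score = more likely diseased.
scores = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9])
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1])
t, j = youden_threshold(scores, labels)
```

Each of the four condition-specific CNNs would get its own operating threshold selected this way on a validation set.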


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. What is more, exploiting detailed image features in a deep learning framework always requires a large number of parameters, which inevitably makes the network suffer from a high computational burden. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder–decoder architecture. In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. Then, a feature distillation normalization block is designed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information of the feature maps. In addition, an attention mechanism carries out information fusion between the distillation modules and feature channels. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with a smaller number of parameters and outperforms existing methods in model complexity.
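The channel-screening idea behind the distillation block can be illustrated with a squeeze-and-excitation-style gate: pool each channel to a scalar, pass the pooled vector through a small bottleneck, and rescale the channels. This is a generic NumPy sketch with random placeholder weights, not the LFDN block itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """Gate a (C, H, W) feature map channel-wise: global-average-pool each
    channel, run the pooled vector through a bottleneck MLP, then rescale."""
    s = fmap.mean(axis=(1, 2))                  # squeeze: (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ s, 0))  # excite: (C,) in (0, 1)
    return fmap * gate[:, None, None]

C, H, W = 8, 4, 4
fmap = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // 2, C)) * 0.5         # bottleneck weights (placeholders)
w2 = rng.normal(size=(C, C // 2)) * 0.5
out = channel_attention(fmap, w1, w2)           # low-gate channels are damped
```

Because each gate lies in (0, 1), uninformative channels are attenuated at negligible parameter cost, which is the lightweight screening effect the abstract describes.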


2020 ◽  
Vol 101 ◽  
pp. 209
Author(s):  
R. Baskaran ◽  
B. Ajay Rajasekaran ◽  
V. Rajinikanth

Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett's cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett's cancer. Results: The sensitivity, specificity, F1 score and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, whose sensitivity, specificity, F1 score and accuracy were 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system for predicting submucosal invasion in endoscopic images of Barrett's cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.
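The reported sensitivity, specificity, F1 and accuracy all follow from the confusion matrix. A minimal sketch with toy predictions (1 = T1b, i.e. submucosal invasion; 0 = T1a; the labels below are illustrative, not the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, F1 and accuracy from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn)                      # recall on T1b cases
    spec = tn / (tn + fp)                      # correct T1a calls
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    acc = (tp + tn) / len(y_true)
    return sens, spec, f1, acc

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec, f1, acc = binary_metrics(y_true, y_pred)
```

Note that, as in the study, a system and a group of experts can trade sensitivity against specificity (0.77/0.64 vs 0.63/0.78) while landing at nearly the same accuracy.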


Bone ◽  
2021 ◽  
pp. 115972
Author(s):  
Abhinav Suri ◽  
Brandon C. Jones ◽  
Grace Ng ◽  
Nancy Anabaraonye ◽  
Patrick Beyrer ◽  
...  
