Reducing the Computational Cost of Ratio-Based Indoor Localization

Author(s):  
John Keller ◽  
Xiaoyan Li


Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 103 ◽  
Author(s):  
David Sánchez-Rodríguez ◽  
Itziar Alonso-González ◽  
Carlos Ley-Bosch ◽  
Miguel Quintana-Suárez

Indoor localization has received tremendous attention over the last two decades because location-aware services are in high demand. Many research works have proposed wireless networks to solve this problem, and efficient algorithms have been developed that provide precise, highly accurate location estimates. Nevertheless, those approaches often incur high computational cost and high energy consumption, which makes them unsuitable for temporary environments, such as emergency situations, where an indoor localization system must be deployed quickly. In this manuscript, a methodology for rapidly building an indoor localization system is proposed. For that purpose, the data dimensionality is reduced by applying data fusion and feature transformation, which lowers the computational cost of the classifier training phase. To validate the methodology, three different datasets were used: two are public datasets based mainly on Received Signal Strength (RSS) from different Wi-Fi access points, and the third is a set of RSS values gathered from the LED lamps of a Visible Light Communication (VLC) network. The simulation results show that the proposed methodology considerably improves the overall computational performance while providing an acceptable location estimation error.
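As a rough illustration of the idea of shrinking fingerprint dimensionality before classifier training, the minimal sketch below projects synthetic RSS vectors onto fewer components with PCA and then trains a simple classifier on the reduced features. The dataset, the choice of PCA, and the number of components are assumptions for illustration, not the paper's exact fusion and transformation pipeline.

```python
# Minimal sketch: reduce RSS fingerprint dimensionality before training a
# classifier, so the training phase operates on fewer features.
# Synthetic data and PCA are illustrative assumptions only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # 500 fingerprints, 20 RSS readings each
y = rng.integers(0, 10, size=500)       # 10 candidate locations (labels)

pca = PCA(n_components=5)               # project 20 RSS features down to 5
X_reduced = pca.fit_transform(X)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_reduced, y)                   # training now runs on 5-D vectors
print(clf.predict(pca.transform(X[:3])))
```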


2020 ◽  
Vol 5 (2) ◽  
pp. 40
Author(s):  
Shi Chen

With the rapid development of the Internet and artificial intelligence, the demand for location-based services in indoor environments has grown rapidly. At present, researchers from many fields have proposed indoor localization solutions based on different technologies. Fingerprint localization, a commonly used indoor localization technique, has been the subject of continuous research and improvement because of its limited accuracy and complex calculations. This paper proposes an indoor localization system based on fingerprint clustering. The system comprises an offline phase and an online phase. In the offline phase, we collect the RSS signal, preprocess it with a Gaussian model to build a fingerprint database, and then use the K-Means++ algorithm to cluster the fingerprints, grouping fingerprints with similar signal strengths into clustering subsets. In the online phase, we classify the measured received signal strength (RSS) and then use the weighted K-Nearest Neighbor (WKNN) algorithm to estimate the location. The experimental results show that the proposed system reduces the localization error, effectively lowers the computational cost of the localization algorithm in the online phase, and improves the efficiency of real-time localization.
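The offline/online split described above can be sketched as follows: cluster a fingerprint database with K-Means++ offline, then match a measured RSS vector to its nearest cluster online and estimate the position with WKNN. The synthetic fingerprints, cluster count, and k value are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of fingerprint clustering (offline) plus WKNN positioning (online).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
fingerprints = rng.normal(-60, 10, size=(300, 6))   # RSS from 6 access points
positions = rng.uniform(0, 20, size=(300, 2))       # known (x, y) of each fingerprint

# Offline phase: group fingerprints with similar signal strengths.
kmeans = KMeans(n_clusters=8, init="k-means++", n_init=10).fit(fingerprints)

def wknn_locate(rss, k=3):
    """Online phase: pick the closest cluster, then weight its k nearest fingerprints."""
    cluster = kmeans.predict(rss.reshape(1, -1))[0]
    idx = np.where(kmeans.labels_ == cluster)[0]
    dists = np.linalg.norm(fingerprints[idx] - rss, axis=1)
    nearest = idx[np.argsort(dists)[:k]]
    weights = 1.0 / (np.sort(dists)[:k] + 1e-6)      # closer fingerprints weigh more
    return np.average(positions[nearest], axis=0, weights=weights)

print(wknn_locate(rng.normal(-60, 10, size=6)))
```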


Author(s):  
D. Sánchez-Rodríguez ◽  
I. Alonso-González ◽  
J. Sánchez-Medina ◽  
C. Ley-Bosch ◽  
L. Díaz-Vilariño

Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Many research works have proposed solving this problem with wireless networks. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this technology is immune to electromagnetic interference and exhibits a smaller variance of received signal power than RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis covers accuracy, average distance error, computational cost, training size, precision, and recall. Results show that most of the classifiers achieve an accuracy above 90%. The best tested classifier yielded 99.0% accuracy, with an average distance error of 0.3 centimetres.
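The kind of multi-classifier comparison described above can be sketched by training several off-the-shelf classifiers on RSS-style features and recording accuracy and training time. The three classifiers and synthetic data below are illustrative assumptions, not the seventeen models or the VLC dataset evaluated in the paper.

```python
# Illustrative sketch: compare a few classifiers on synthetic RSS features,
# reporting accuracy and training time for each.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(-55, 8, size=(1000, 4))        # synthetic RSS from 4 LED lamps
y = rng.integers(0, 25, size=1000)            # 25 candidate positions
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier()),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC())]:
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name:>12}: accuracy={acc:.3f}, training time={elapsed:.3f}s")
```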


2012 ◽  
Author(s):  
Todd Wareham ◽  
Robert Robere ◽  
Iris van Rooij

Author(s):  
Nadia Ghariani ◽  
Mohamed Salah Karoui ◽  
Mondher Chaoui ◽  
Mongi Lahiani ◽  
Hamadi Ghariani

2020 ◽  
Vol 2020 (14) ◽  
pp. 378-1-378-7
Author(s):  
Tyler Nuanes ◽  
Matt Elsey ◽  
Radek Grzeszczuk ◽  
John Paul Shen

We present a high-quality sky segmentation model for depth refinement and investigate residual architecture performance to inform optimal shrinking of the network. We describe a model that runs in near real-time on a mobile device, present a new, high-quality dataset, and detail a unique weighting to trade off false positives and false negatives in binary classifiers. We show how the optimizations improve bokeh rendering by correcting stereo depth mispredictions in sky regions. We detail techniques used to preserve edges, reject false positives, and ensure generalization to the diversity of sky scenes. Finally, we present a compact model and compare the performance of four popular residual architectures (ShuffleNet, MobileNetV2, ResNet-101, and a ResNet-34-like network) at constant computational cost.
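One common way to trade off false positives against false negatives in a binary (sky / not-sky) classifier is a per-class weighted cross-entropy, sketched below. The weights and toy data are assumptions for illustration, not the paper's specific weighting scheme.

```python
# Illustrative sketch: binary cross-entropy with separate costs for missed sky
# pixels (false negatives) and wrongly labelled sky pixels (false positives).
import numpy as np

def weighted_bce(p_pred, y_true, w_fn=4.0, w_fp=1.0, eps=1e-7):
    """Weighted binary cross-entropy: false negatives cost w_fn, false positives cost w_fp."""
    p = np.clip(p_pred, eps, 1.0 - eps)
    return -np.mean(w_fn * y_true * np.log(p) + w_fp * (1.0 - y_true) * np.log(1.0 - p))

y_true = np.array([1, 1, 0, 0, 1], dtype=float)   # 1 = sky pixel
p_pred = np.array([0.9, 0.4, 0.2, 0.7, 0.8])      # predicted sky probability
print(weighted_bce(p_pred, y_true))               # the missed sky pixel dominates the loss
```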


Author(s):  
Mohammad Salimibeni ◽  
Zohreh Hajiakhondi-Meybodi ◽  
Parvin Malekzadeh ◽  
Mohammadamin Atashi ◽  
Konstantinos N. Plataniotis ◽  
...  

2012 ◽  
Vol 2 (1) ◽  
pp. 7-9 ◽  
Author(s):  
Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main problem of the median filter is its high computational cost: sorting N pixels has O(N·log N) time complexity even with the most efficient sorting algorithms. When the median filter must be carried out in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3×3 filter kernel that needs only about nine comparisons per pixel, exploiting spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertically neighboring pixels. An extension of this algorithm to 2D spatial coherence is also examined, which calculates four medians per step.
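The spatial-coherence idea can be illustrated by sorting each vertical triple of pixels once and reusing it across the three horizontally adjacent 3×3 windows that share it. The sketch below is an illustrative re-implementation of that reuse, not the paper's exact comparison-counting algorithm.

```python
# Sketch: 3x3 median filter that pre-sorts every vertical slice of three pixels
# and reuses those sorted slices across neighboring windows.
import numpy as np

def median3x3(img):
    h, w = img.shape
    out = img.copy()                     # border pixels are left unchanged
    # Sort every vertical triple once; each sorted slice is shared by 3 windows.
    cols = np.sort(np.stack([img[:-2], img[1:-1], img[2:]], axis=0), axis=0)
    for y in range(h - 2):
        for x in range(w - 2):
            # Combine three pre-sorted slices and take the 5th smallest of 9 (the median).
            window = np.concatenate([cols[:, y, x], cols[:, y, x + 1], cols[:, y, x + 2]])
            out[y + 1, x + 1] = np.partition(window, 4)[4]
    return out

img = np.random.default_rng(3).integers(0, 256, size=(6, 6)).astype(np.uint8)
print(median3x3(img))
```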


2020 ◽  
Author(s):  
Florencia Klein ◽  
Daniela Cáceres-Rojas ◽  
Monica Carrasco ◽  
Juan Carlos Tapia ◽  
Julio Caballero ◽  
...  

Although molecular dynamics simulations allow for the study of interactions among virtually all biomolecular entities, metal ions still pose significant challenges to achieving an accurate structural and dynamical description of many biological assemblies. This is particularly the case for coarse-grained (CG) models. Although the reduced computational cost of CG methods often makes them the technique of choice for studying large biomolecular systems, the parameterization of metal ions is still very crude or simply not available for the vast majority of CG force fields. Here, we show that incorporating statistical data retrieved from the Protein Data Bank (PDB) to set specific Lennard-Jones interactions can produce structurally accurate CG molecular dynamics simulations. Using this simple approach, we provide a set of interaction parameters for calcium, magnesium, and zinc ions, which cover more than 80% of the metal-bound structures reported in the PDB. Simulations performed using the SIRAH force field on several protein and DNA systems show that with the present approach it is possible to obtain non-bonded interaction parameters that obviate the use of topological constraints.
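For reference, the 12-6 Lennard-Jones form whose sigma/epsilon pairs are tuned in this kind of work can be evaluated as below. The parameter values are hypothetical placeholders, not the published SIRAH ion parameters.

```python
# Minimal worked example of the 12-6 Lennard-Jones potential.
import numpy as np

def lennard_jones(r, sigma, epsilon):
    """12-6 Lennard-Jones potential: 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r = np.linspace(0.2, 0.8, 7)                       # sampled ion-site distances (nm)
print(lennard_jones(r, sigma=0.28, epsilon=0.5))   # hypothetical parameter values
```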

