localization model
Recently Published Documents


TOTAL DOCUMENTS

154
(FIVE YEARS 53)

H-INDEX

14
(FIVE YEARS 4)

2021 ◽  
Vol 127 (27) ◽  
Author(s):  
Thomas Q. McKenzie-Smith ◽  
Jack F. Douglas ◽  
Francis W. Starr

Author(s):  
Panimalar Kathiroli ◽  
Kanmani S.

Wireless sensor networks (WSNs) have recently seen widespread use in applications that must cover large areas. In any wireless application, the positioning accuracy of a node is a core concern. Node localization aims to calculate the geographical coordinates of unknown nodes with the assistance of known (anchor) nodes. In a multidimensional space, node localization can be treated as an optimization problem, solvable by metaheuristic algorithms for near-optimal outputs. This paper presents a new localization model using a Salp Swarm optimization Algorithm with the Doppler Effect (LOSSADE) that exploits the strengths of both methods. The Doppler effect iteratively accounts for the distances between nodes to determine their positions, and the locations of the salp leader and the prey are updated using the Doppler shift. The performance of the presented approach was validated in MATLAB simulations of a network environment with random node deployment. A detailed experimental analysis investigates the results under varying numbers of anchor nodes and transmission ranges within the given search area. Comparisons of the simulation results against a traditional algorithm and other state-of-the-art methods show that the proposed LOSSADE model delivers better localization performance in terms of robustness, accuracy in locating the target node position, and computation time.
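The abstract does not spell out LOSSADE's internals, but the range-based localization step it builds on (turning anchor-to-node distance estimates into coordinates) can be sketched with plain linearized trilateration; the anchor positions and distances below are made-up toy values, not from the paper:

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2-D position from three anchor nodes and measured distances.
    Subtracting the first circle equation from the other two linearizes the
    system, which is then solved exactly (Cramer's rule)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Toy deployment: three anchors and one unknown node at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]
est = trilaterate(anchors, dists)
```

In a metaheuristic formulation like the one the paper describes, the same residuals (measured vs. predicted anchor distances) would instead serve as the fitness function that the salp swarm minimizes.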


Author(s):  
Taekyeong Jeong ◽  
Janggon Yoo ◽  
Daegyoum Kim

Abstract: Inspired by the lateral line systems of various aquatic organisms that are capable of hydrodynamic imaging using ambient flow information, this study develops a deep learning-based object localization model that can detect the location of objects using flow information measured from a moving sensor array. In numerical simulations with the assumption of a potential flow, a two-dimensional hydrofoil navigates around four stationary cylinders in a uniform flow and obtains two types of sensory data during a simulation, namely flow velocity and pressure, from an array of sensors located on the surface of the hydrofoil. Several neural network models are constructed using the flow velocity and pressure data, and these are used to detect the positions of the hydrofoil and surrounding objects. The model based on a long short-term memory network, which is capable of learning order dependence in sequence prediction problems, outperforms the other models. The number of sensors is then optimized using feature selection techniques. This sensor optimization leads to a new object localization model that achieves impressive accuracy in predicting the locations of the hydrofoil and objects with only 40% of the sensors used in the original model.
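The key ingredient here, an LSTM processing a sensor time series, can be illustrated with a single-unit forward pass in pure Python; the weights below are arbitrary toy values, not anything learned from the paper's flow data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One forward step of a single-unit LSTM cell: the gates decide what to
    forget from the cell state, what to write, and what to expose as output."""
    f = sigmoid(W["wf"] * x + W["uf"] * h + W["bf"])   # forget gate
    i = sigmoid(W["wi"] * x + W["ui"] * h + W["bi"])   # input gate
    o = sigmoid(W["wo"] * x + W["uo"] * h + W["bo"])   # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h + W["bg"]) # candidate state
    c_new = f * c + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new

# Arbitrary toy weights; a trained model would learn these from the
# pressure/velocity sequences described above.
W = dict(wf=0.5, uf=0.1, bf=0.0, wi=0.5, ui=0.1, bi=0.0,
         wo=0.5, uo=0.1, bo=0.0, wg=0.5, ug=0.1, bg=0.0)
h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.4]:  # a short sensor time series
    h, c = lstm_step(x, h, c, W)
```

The recurrent state (h, c) is what lets the model exploit the order of sensor readings along the hydrofoil's trajectory, which is why it outperforms order-agnostic models on this task.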


2021 ◽  
Author(s):  
Obinna Ifediora

This study explains why the African Union (AU) claims primacy and opposes the UN Security Council’s and the ICC’s efforts to enforce R2P and nonimpunity norms in peacemaking. It draws on the concepts of norm subsidiarity and African agency in global politics and analyses African norm-setting and policy instruments. The central argument is that the AU is a subsidiary actor in the international system and has created subsidiary norms on immunity, the right to protect, and continental sovereignty to defend Africa’s vital security interests. The significance is that existing studies have applied the norm localization model and assumed that the AU is or should be a localizing actor and subordinated African subsidiary norms to international principles. Thus, the current approach has missed the collision of African and international norms we are witnessing. This study contributes to knowledge by enriching the understanding of African subsidiary norms and agency in international relations.


2021 ◽  
Author(s):  
Diego L. Guarin ◽  
Andrea Bandini ◽  
Aidan Dempster ◽  
Henry Wang ◽  
Siavash Rezaei ◽  
...  

Background: Automatic facial landmark localization is an essential component in many computer vision applications, including video-based detection of neurological diseases. Machine learning models for facial landmark localization are typically trained on faces of healthy individuals, and we found that model performance is inferior when applied to faces of people with neurological diseases. Fine-tuning pre-trained models with representative images significantly improves performance on clinical populations. However, questions remain about the characteristics of the database used to fine-tune the model and the clinical impact of the improved model. Methods: We employed the Toronto NeuroFace dataset, which consists of videos of Healthy Controls (HC), individuals post-stroke, and individuals with Amyotrophic Lateral Sclerosis performing speech and non-speech tasks, with thousands of manually annotated frames, to fine-tune a well-known deep learning-based facial landmark localization model. The pre-trained and fine-tuned models were used to extract landmark-based facial features from videos, and the facial features were used to discriminate clinical groups from HC. Results: Fine-tuning a facial landmark localization model with a diverse database that includes HC and individuals with neurological disorders resulted in significantly improved performance for all groups. Our results also showed that fine-tuning the model with representative data greatly improved the ability of the subsequent classifier to distinguish clinical groups from HC in videos. Conclusions: Using a diverse database for model fine-tuning might result in better model performance for HC and clinical groups. We demonstrated that fine-tuning a model for landmark localization with representative data results in improved detection of neurological diseases.
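The downstream step, turning localized landmarks into clinical features, can be sketched simply; the landmark names, indices, and coordinates below are illustrative placeholders, not the Toronto NeuroFace annotation scheme:

```python
import math

def mouth_metrics(landmarks):
    """Toy landmark-based features of the kind a clinical classifier might
    use: mouth width, mouth opening, and their ratio from 4 landmark points.
    Landmark keys here are hypothetical, not from any specific model."""
    left, right = landmarks["mouth_left"], landmarks["mouth_right"]
    top, bottom = landmarks["lip_top"], landmarks["lip_bottom"]
    width = math.dist(left, right)
    opening = math.dist(top, bottom)
    return {"width": width, "opening": opening, "ratio": opening / width}

# One video frame's (x, y) landmark coordinates in pixels (made-up values).
frame = {"mouth_left": (100.0, 200.0), "mouth_right": (160.0, 200.0),
         "lip_top": (130.0, 190.0), "lip_bottom": (130.0, 215.0)}
feats = mouth_metrics(frame)
```

This illustrates why landmark accuracy matters clinically: any systematic localization error on atypical faces propagates directly into these distance-based features and hence into the classifier's decisions.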


2021 ◽  
Vol 5 (4) ◽  
pp. 1-30
Author(s):  
Saideep Tiku ◽  
Prathmesh Kale ◽  
Sudeep Pasricha

Indoor localization services are a crucial aspect for the realization of smart cyber-physical systems within cities of the future. Such services are poised to reinvent the process of navigation and tracking of people and assets in a variety of indoor and subterranean environments. The growing ownership of computationally capable smartphones has laid the foundations of portable fingerprinting-based indoor localization through deep learning. However, as the demand for accurate localization increases, the computational complexity of the associated deep learning models increases as well. We present an approach for reducing the computational requirements of a deep learning-based indoor localization framework while maintaining localization accuracy targets. Our proposed methodology is deployed and validated across multiple smartphones and is shown to deliver up to 42% reduction in prediction latency and 45% reduction in prediction energy as compared to the best-known baseline deep learning-based indoor localization model.
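The abstract does not name the compression technique behind the latency/energy savings, but one standard way to shrink an on-device deep learning model is post-training weight quantization, sketched here with made-up weights (the paper's actual method may differ):

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of a float weight list to int8.
    Storing 1 byte per weight instead of 4 cuts model size ~4x and enables
    cheaper integer arithmetic on a smartphone."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.04]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The trade-off the paper targets, accuracy vs. compute, shows up here as the rounding error `max_err`, which is bounded by half a quantization step.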


2021 ◽  
Author(s):  
Filippo Moro ◽  
Emmanuel Hardy ◽  
Bruno Fain ◽  
Thomas Dalgaty ◽  
Paul Clemencon ◽  
...  

Abstract: Real-world sensory-processing applications require compact, low-latency, and low-power computing systems. Enabled by their in-memory event-driven computing abilities, hybrid memristive-CMOS neuromorphic architectures provide an ideal hardware substrate for such tasks. To demonstrate the full potential of such systems, we propose and experimentally demonstrate an end-to-end sensory processing solution for a real-world object localization application. Drawing inspiration from the barn owl’s neuroanatomy, we developed a bio-mimetic, event-driven object localization system that couples state-of-the-art piezoelectric micromachined ultrasound transducer (pMUT) sensors to a neuromorphic computational map based on resistive memories. We present measurement results from the fabricated system comprising resistive-memory-based coincidence detectors, delay line circuits, and a full-custom pMUT sensor. We use these experimental results to calibrate our system-level simulations, which are then used to estimate the angular resolution and energy efficiency of the object localization model. The results reveal the potential of our approach, which achieves orders-of-magnitude greater energy efficiency than a microcontroller performing the same task.
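The delay-line-plus-coincidence-detector scheme borrowed from the barn owl (the Jeffress model) can be sketched in software; the spike trains below are toy binary sequences, not measurements from the fabricated system:

```python
def estimate_itd(left, right, max_delay):
    """Jeffress-style coincidence detection: slide one spike train against
    the other and pick the delay with the most coincident spikes. In the
    hardware version, delay lines implement the shifts and resistive-memory
    coincidence detectors do the counting."""
    best_delay, best_score = 0, -1
    n = len(left)
    for d in range(-max_delay, max_delay + 1):
        score = sum(1 for t in range(n)
                    if 0 <= t + d < n and left[t] and right[t + d])
        if score > best_score:
            best_delay, best_score = d, score
    return best_delay

# Toy data: the right train is the left train delayed by 2 time steps,
# mimicking a sound source closer to the left sensor.
left  = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]
right = [0, 0, 0, 1, 0, 0, 1, 0, 1, 0]
itd = estimate_itd(left, right, max_delay=3)
```

The recovered inter-sensor time difference maps directly to an angle of arrival, which is what sets the angular resolution the paper evaluates.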


2021 ◽  
Author(s):  
A. S. Jameel Hassan ◽  
Suren Sritharan ◽  
Gihan Jayatilaka ◽  
Vijitha Herath ◽  
Parakrama B. Ekanayake ◽  
...  

2021 ◽  
Author(s):  
Thi Mai Anh Bui ◽  
Nhat Hai Nguyen

Precisely locating buggy files for a given bug report is a cumbersome and time-consuming task, particularly in a large-scale project with thousands of source files and bug reports. An efficient bug localization module is desirable to improve the productivity of the software maintenance phase. Many previous approaches rank source files according to their relevance to a given bug report based on simple lexical matching scores. However, the lexical mismatches between the natural language expressions used in bug reports and the technical terms of source code can reduce a bug localization system’s accuracy. Incorporating domain knowledge through features such as semantic similarity, the fixing frequency of a source file, the code change history, and similar bug reports is crucial to efficiently locating buggy files. In this paper, we propose a bug localization model, BugLocGA, that leverages both lexical and semantic information and explores the relation between a bug report and a source file through several domain features. Given a bug report, we calculate a ranking score for every source file as a weighted sum of all features, where the weights are trained by a genetic algorithm to maximize the performance of the bug localization model on two evaluation metrics: mean reciprocal rank (MRR) and mean average precision (MAP). Empirical results on several widely used open-source software projects show that our model outperforms some state-of-the-art approaches by effectively recommending the relevant files where the bug should be fixed.
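The weighted-sum ranking and its MRR evaluation can be sketched directly; the feature names, weights, and file scores below are illustrative stand-ins, not BugLocGA's trained values:

```python
def rank_files(features, weights):
    """Score each candidate source file as a weighted sum of its feature
    values (lexical similarity, semantic similarity, fixing frequency, ...)
    and rank in descending score order. In the paper's setup, a genetic
    algorithm would search for the weight vector maximizing MRR/MAP."""
    scores = {f: sum(weights[k] * v for k, v in feats.items())
              for f, feats in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

def mean_reciprocal_rank(rankings, buggy):
    """MRR over queries: average reciprocal rank of the first truly buggy
    file in each ranking."""
    total = 0.0
    for ranking, truth in zip(rankings, buggy):
        for i, f in enumerate(ranking, start=1):
            if f in truth:
                total += 1.0 / i
                break
    return total / len(rankings)

# Hypothetical feature values for one bug report's candidate files.
features = {
    "Parser.java": {"lexical": 0.9, "semantic": 0.7, "fix_freq": 0.2},
    "Lexer.java":  {"lexical": 0.4, "semantic": 0.3, "fix_freq": 0.8},
    "Utils.java":  {"lexical": 0.1, "semantic": 0.2, "fix_freq": 0.1},
}
weights = {"lexical": 0.5, "semantic": 0.3, "fix_freq": 0.2}
ranking = rank_files(features, weights)
mrr = mean_reciprocal_rank([ranking], [{"Lexer.java"}])
```

A genetic algorithm would treat the weight vector as a chromosome and the resulting MRR/MAP over a training set of bug reports as its fitness, evolving the weights rather than computing them in closed form.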

