Real-time location estimation for indoor navigation using a visual-inertial sensor

Sensor Review ◽  
2020 ◽  
Vol 40 (4) ◽  
pp. 455-464
Author(s):  
Zhe Wang ◽  
Xisheng Li ◽  
Xiaojuan Zhang ◽  
Yanru Bai ◽  
Chengcai Zheng

Purpose The purpose of this study is to use visual and inertial sensors to achieve real-time location estimation. How to provide an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely applied in indoor navigation, many problems remain, such as inertial sensor bias calibration, unsynchronized visual and inertial data acquisition and the large amount of stored data. Design/methodology/approach First, this study demonstrates that a vanishing point (VP) evaluation function improves the precision of extraction, and the nearest ground corner point (NGCP) of the adjacent frame is estimated by pre-integrating the inertial sensor. The Sequential Similarity Detection Algorithm (SSDA) and Random Sample Consensus (RANSAC) algorithms are adopted to accurately match the adjacent NGCPs in the estimated region of interest. Second, the visual pose model is established using the camera's intrinsic parameters, the VP and the NGCP, and the inertial pose model is established by pre-integration. Third, the location is calculated by fusing the visual and inertial models. Findings A novel method is proposed that fuses visual and inertial sensors to localize in indoor environments. The authors build an embedded hardware platform and compare the results with a mature method and with the POSAV310. Originality/value This paper proposes a VP evaluation function that selects the optimal vanishing point from the intersections of multiple sets of parallel lines. To speed up extraction in adjacent frames, the authors propose fusing the NGCP of the current frame with the calibrated pre-integration to estimate the NGCP of the next frame. The visual pose model is established using the VP and NGCP together with the calibrated inertial sensor, and yields linear processing equations for the gyroscope and accelerometer from the visual and inertial pose models.
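The abstract leans on inertial pre-integration to predict the next frame's NGCP. As a rough illustration of what pre-integration accumulates between two camera frames, here is a minimal NumPy sketch; the first-order rotation update and all variable names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def preintegrate(gyro, accel, dt):
    """Pre-integrate gyro/accelerometer samples over an inter-frame window.

    Returns the accumulated rotation, velocity delta and position delta in
    the frame of the first sample (small-angle per-step approximation)."""
    R = np.eye(3)          # accumulated rotation
    dv = np.zeros(3)       # accumulated velocity change
    dp = np.zeros(3)       # accumulated position change
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (R @ a) * dt**2
        dv = dv + (R @ a) * dt
        # first-order rotation update from the angular rate (skew matrix)
        wx = np.array([[0.0, -w[2], w[1]],
                       [w[2], 0.0, -w[0]],
                       [-w[1], w[0], 0.0]])
        R = R @ (np.eye(3) + wx * dt)
    return R, dv, dp
```

With zero rotation and constant acceleration this reduces to the familiar kinematic integrals, which makes the sketch easy to sanity-check.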

Kybernetes ◽  
2010 ◽  
Vol 39 (1) ◽  
pp. 127-139 ◽  
Author(s):  
Chingiz Hajiyev ◽  
Ali Okatan

Purpose The purpose of this paper is to design a fault detection algorithm for multidimensional dynamic systems using a new approach for checking the statistical characteristics of the Kalman filter innovation sequence. Design/methodology/approach The proposed approach is based on given statistics for the mathematical expectation of the spectral norm of the normalized innovation matrix of the Kalman filter. Findings The longitudinal dynamics of an aircraft are considered as an example, and detection of various sensor faults affecting the mean and variance of the innovation sequence is examined. Research limitations/implications Real-time detection of sensor faults affecting the mean and variance of the innovation sequence, applied to the linearized aircraft longitudinal dynamics, is examined. The non-linear longitudinal dynamics model of the aircraft is linearized. Faults affecting the covariances of the innovation sequence are not considered in the paper. Originality/value The proposed approach permits simultaneous real-time checking of the expected value and the variance of the innovation sequence and does not need a priori information about the statistical characteristics of this sequence in the failure case.
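The abstract does not give the authors' exact statistic, but the idea of thresholding the spectral norm of a normalized innovation matrix can be sketched as follows; the whitening step, window size and threshold are all stand-in assumptions:

```python
import numpy as np

def innovation_fault_check(innovations, S, window, threshold):
    """Flag a fault when the spectral norm of the windowed second-moment
    matrix of whitened innovations exceeds a threshold.

    innovations: (N, m) innovation sequence; S: (m, m) theoretical
    innovation covariance.  Under no fault the whitened innovations are
    roughly zero-mean unit-variance, so the norm stays near 1; a bias or
    variance fault inflates it."""
    L = np.linalg.cholesky(S)
    white = np.linalg.solve(L, innovations.T).T   # whitened innovations
    flags = []
    for k in range(window, len(white) + 1):
        w = white[k - window:k]
        M = (w.T @ w) / window                    # normalized innovation matrix
        flags.append(np.linalg.norm(M, 2) > threshold)  # spectral norm
    return flags
```

A sensor bias shifts the innovation mean, which shows up immediately in the second-moment matrix, matching the paper's claim of simultaneous mean/variance checking.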


2019 ◽  
Vol 8 (4) ◽  
pp. 338-350
Author(s):  
Mauricio Loyola

Purpose The purpose of this paper is to propose a simple, fast, and effective method for detecting measurement errors in data collected with low-cost environmental sensors typically used in building monitoring, evaluation, and automation applications. Design/methodology/approach The method combines two unsupervised learning techniques: a distance-based anomaly detection algorithm analyzing temporal patterns in data, and a density-based algorithm comparing data across different spatially related sensors. Findings Results of tests using 60,000 observations of temperature and humidity collected from 20 sensors during three weeks show that the method effectively identified measurement errors and was not affected by valid unusual events. Precision, recall, and accuracy were 0.999 or higher for all cases tested. Originality/value The method is simple to implement, computationally inexpensive, and fast enough to be used in real-time with modest open-source microprocessors and a wide variety of environmental sensors. It is a robust and convenient approach for overcoming the hardware constraints of low-cost sensors, allowing users to improve the quality of collected data at almost no additional cost and effort.
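The abstract names the two ingredients (a distance-based temporal check and a density-based cross-sensor check) but not their exact algorithms; the sketch below substitutes simple robust-statistics stand-ins for both steps, so every threshold and helper name here is an assumption:

```python
import numpy as np

def temporal_anomalies(series, window, k):
    """Distance-based stand-in: flag readings far from the local window
    median, scaled by the median absolute deviation (MAD)."""
    flags = np.zeros(len(series), dtype=bool)
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        local = np.concatenate([series[lo:i], series[i + 1:hi]])
        med = np.median(local)
        mad = np.median(np.abs(local - med)) + 1e-9
        flags[i] = abs(series[i] - med) / mad > k
    return flags

def spatial_anomalies(readings, k):
    """Density-based stand-in: flag a sensor whose reading deviates from
    the consensus of spatially related sensors."""
    med = np.median(readings)
    mad = np.median(np.abs(readings - med)) + 1e-9
    return np.abs(readings - med) / mad > k
```

Requiring both checks to agree is one plausible way to keep valid but unusual events (seen by all sensors at once) from being flagged as errors.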


2017 ◽  
Vol 15 (4) ◽  
pp. 505-527 ◽  
Author(s):  
Wilson E. Sakpere ◽  
Nhlanhla Boyfriend Wilton Mlitwa ◽  
Michael Adeyeye Oshin

Purpose This research aims to provide interventions that alleviate usability challenges and strengthen the overall accuracy and navigation effectiveness in indoor and stringent environments, through experiential manipulation of the technical attributes of the positioning and navigation system. Design/methodology/approach The study followed quantitative and experimental methods of empirical enquiry, together with software engineering and synthesis research methods. It further entails three implementation processes, namely map generation, a positioning framework and a navigation service, using a prototype mobile navigation application based on near field communication (NFC) technology. Findings The findings revealed that NFC, by leveraging its low-cost infrastructure of passive tags, its availability in mobile devices and the ubiquity of the mobile device, provided a cost-effective solution with impressive accuracy and usability. The positioning accuracy achieved was less than 9 cm. Usability improved from 44 to 96 per cent based on feedback from respondents who tested the application in an indoor environment. These results showed that NFC is a viable alternative to resolve the challenges identified in previous solutions and technologies. Research limitations/implications The major limitation of the navigation application was the lack of real-time updates of the user position. This can be investigated and extended further by using NFC in a hybrid make-up with WLAN, radio-frequency identification (RFID) or Bluetooth as a cost-effective solution for real-time indoor positioning, because of their coverage and existing infrastructures. The hybrid positioning model, which merges two or more techniques or technologies, is becoming more popular and will improve accuracy, robustness and usability.
In addition, it will balance complexity, compensate for the limitations in the technologies and achieve real-time mobile indoor navigation. Although the presence of WLAN, RFID and Bluetooth technologies is likely to result in system complexity and high cost, NFC will reduce the system's complexity and balance the trade-off. Practical implications Whilst limitations in existing indoor navigation technologies meant putting up with poor signal and poor communication capabilities, outcomes of the NFC framework offer valuable insight. It presents new possibilities for overcoming signal-quality limitations at improved turn-around time in constrained indoor spaces. Social implications The innovations have a direct positive social impact in that they offer new solutions to mobile communication in previously impossible terrains such as underground platforms and densely covered spaces. With the ability to operate mobile applications without signal inhibitions, the quality of communication – and ultimately, life opportunities – is enhanced. Originality/value While navigating, users face several challenges, such as infrastructure complexity, high cost, inaccuracy and poor usability. Hence, as a contribution, this paper presents a symbolic map and path architecture of a floor of the test-bed building that was uploaded to OpenStreetMap. Furthermore, the implementation of the RFID and NFC architectures produced new insight into how to redress the limitations in challenged spaces. In addition, a prototype mobile indoor navigation application was developed and implemented, offering a novel solution to the practical problems inhibiting navigation in indoor challenged spaces – a practical contribution to the community of practice.


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1121 ◽  
Author(s):  
Nassr Alsaeedi ◽  
Dieter Wloka

The aim of the study is to develop a real-time eyeblink detection algorithm that can detect eyeblinks during the closing phase for a virtual reality headset (VR headset) and accordingly classify the eye's current state (open or closed). The proposed method utilises analysis of a motion vector for detecting eyelid closure, and a Haar cascade classifier (HCC) for localising the eye in the captured frame. When a downward motion vector (DMV) is detected, a cross-correlation between the current region of interest (the eye in the current frame) and a template image of an open eye is used to verify eyelid closure. A finite state machine is used for decision making regarding eyeblink occurrence and for tracking the eye state in a real-time video stream. The main contributions of this study are, first, the ability of the proposed algorithm to detect eyeblinks during the closing or pause phases, before the reopening phase of the eyeblink occurs; and second, the realisation of the proposed approach as a working real-time eyeblink detection sensor for a VR headset, based on a real case scenario. The sensor is used in the ongoing study that we are conducting. The performance of the proposed method was 83.9% for accuracy, 91.8% for precision and 90.4% for recall. Processing each frame took approximately 11 milliseconds. Additionally, we present a new dataset for a non-frontal eye monitoring configuration for eyeblink tracking inside a VR headset. The data annotations are also included, so that the dataset can be used for method validation and performance evaluation in future studies.
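The pipeline described (downward motion triggers a template cross-correlation check, with a state machine deciding blink occurrence) can be sketched compactly; the state names, threshold and toy patch data below are assumptions, not the paper's implementation:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between an eye patch and an open-eye template."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

class BlinkFSM:
    """Toy state machine: OPEN -> CLOSED when downward motion is seen and the
    patch stops matching the open-eye template; CLOSED -> OPEN on a new match."""
    def __init__(self, template, thresh=0.8):
        self.template, self.thresh = template, thresh
        self.state, self.blinks = "OPEN", 0

    def step(self, patch, downward_motion):
        match = ncc(patch, self.template) > self.thresh
        if self.state == "OPEN" and downward_motion and not match:
            self.state, self.blinks = "CLOSED", self.blinks + 1
        elif self.state == "CLOSED" and match:
            self.state = "OPEN"
        return self.state
```

Gating the correlation check on the downward motion vector is what lets the blink be counted during the closing phase, before reopening.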


2014 ◽  
Vol 68 (3) ◽  
pp. 434-452 ◽  
Author(s):  
Zhiwen Xian ◽  
Xiaoping Hu ◽  
Junxiang Lian

Exact motion estimation is a major task in autonomous navigation. The integration of Inertial Navigation Systems (INS) and the Global Positioning System (GPS) can provide accurate location estimation, but cannot be used in a GPS-denied environment. In this paper, we present a tightly coupled approach to integrating a stereo camera and a low-cost inertial sensor. This approach takes advantage of the inertial sensor's fast response and the visual sensor's slow drift. In contrast to previous approaches, features both near and far from the camera are simultaneously taken into consideration in the visual-inertial approach. The near features are parameterised as three-dimensional (3D) Cartesian points, which provide range and heading information, whereas the far features are initialised as Inverse Depth (ID) points, which provide bearing information. In addition, the inertial sensor biases and a stationary alignment are taken into account. The algorithm employs an Iterative Extended Kalman Filter (IEKF) to estimate the motion of the system, the biases of the inertial sensors and the tracked features over time. An outdoor experiment is presented to validate the proposed algorithm and its accuracy.
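The key trick for far features is the inverse-depth parameterisation, which stays numerically well-behaved as depth grows. A minimal sketch of the standard conversion from an ID point to a Cartesian point (the angle conventions here are one common choice, an assumption rather than the paper's exact definition):

```python
import numpy as np

def inverse_depth_to_xyz(anchor, theta, phi, rho):
    """Convert an inverse-depth feature (anchor camera position, azimuth
    theta, elevation phi, inverse depth rho) to a Cartesian 3D point.

    As rho -> 0 the point recedes to infinity while the bearing vector
    stays well-conditioned, which is why far features are initialised
    this way instead of as raw 3D points."""
    m = np.array([np.cos(phi) * np.sin(theta),   # bearing unit vector
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return anchor + m / rho
```

Near features, by contrast, can be kept directly as 3D Cartesian points because their depth is observable from the stereo baseline.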


Author(s):  
Bin Li ◽  
Yu Yang ◽  
Chengshuai Qin ◽  
Xiao Bai ◽  
Lihui Wang

Purpose Because visual detection of the navigation path line in an intelligent harvester robot is susceptible to interference and suffers from low accuracy, a navigation path detection algorithm based on improved random sampling consensus is proposed. Design/methodology/approach First, inverse perspective mapping is applied to the original images of rice or wheat to restore the three-dimensional spatial geometric relationship between rice or wheat rows. Second, the target region is set and the image is enhanced to highlight the difference between harvested and unharvested rice or wheat regions; a median filter is used to remove intercrop gap interference and improve the anti-interference ability of rice or wheat image segmentation. Third, the maximum-variance method is applied to threshold the rice or wheat images in the operation area; the image is further segmented with single-point region growing, and the harvesting boundary corners are detected to improve the accuracy of harvesting boundary recognition. Finally, the harvesting boundary corner points are fitted into the navigation path line, which improves the real-time performance of crop image processing. Findings The experimental results demonstrate that the improved random sampling consensus, with an average success rate of 94.6%, has higher reliability than the least squares method, probabilistic Hough and traditional random sampling consensus detection. It can extract the navigation line of the intelligent combine robot in real time at an average speed of 57.1 ms/frame. Originality/value In precision agriculture technology, accurate identification of the navigation path of the intelligent combine robot is the key to accurate positioning. In the vision navigation system of a harvester, the extraction of the navigation line is the core and determines the speed and precision of navigation.
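The final step, fitting boundary corner points into a path line by random sample consensus, follows a well-known pattern. A minimal sketch of plain RANSAC line fitting (the abstract's *improved* variant is not specified, so this shows only the baseline; iteration count and tolerance are assumptions):

```python
import numpy as np

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """Fit a 2D line to noisy corner points by random sample consensus:
    repeatedly pick two points, count inliers within tol of the implied
    line, keep the best model, then refine it by least squares."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue                      # degenerate sample, skip
        # perpendicular distance of every point to the line through p, q
        dist = np.abs(d[0] * (points[:, 1] - p[1]) -
                      d[1] * (points[:, 0] - p[0])) / n
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    x, y = points[best_inliers].T
    slope, intercept = np.polyfit(x, y, 1)   # least-squares refinement
    return slope, intercept, best_inliers
```

The consensus step is what gives the reported robustness over plain least squares, which is pulled off course by every outlying corner.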


Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1773 ◽  
Author(s):  
Mingjing Gao ◽  
Min Yu ◽  
Hang Guo ◽  
Yuan Xu

Multi-sensor integrated navigation technology has been applied to the indoor navigation and positioning of robots. To address the low navigation accuracy and error accumulation of mobile robots that rely on a single sensor, an indoor mobile robot positioning method based on a combination of visual and inertial sensors is presented in this paper. First, the visual sensor (Kinect) is used to obtain the color image and the depth image, and feature matching is performed by the improved scale-invariant feature transform (SIFT) algorithm. Then, the absolute orientation algorithm is used to calculate the rotation matrix and translation vector of the robot between two consecutive frames. An inertial measurement unit (IMU) has the advantages of a high update rate and rapid, accurate positioning, and can compensate for the Kinect's low speed and lack of precision. Three-dimensional data, such as acceleration, angular velocity, magnetic field strength and temperature, can be obtained in real time with an IMU. The data obtained by the visual sensor are loosely coupled with those obtained by the IMU; that is, the differences between the positions and attitudes of the two sensor outputs are optimally combined by an adaptive fade-out extended Kalman filter to estimate the errors. Finally, several experiments show that this method can significantly improve the accuracy of indoor positioning of mobile robots based on visual and inertial sensors.
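The loose coupling with a fading (fade-out) Kalman filter can be illustrated in one dimension; this is a toy sketch under strong simplifying assumptions (scalar position, a crude fading factor driven by the innovation), not the paper's filter:

```python
class FadingKalman1D:
    """Loosely-coupled 1D fusion: the IMU increment is the process model,
    the vision (Kinect) position is the measurement.  A fading factor
    inflates the predicted covariance when innovations grow, discounting
    stale (possibly drifted) information."""
    def __init__(self, x0, p0, q, r):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def step(self, imu_delta, vision_pos):
        x_pred = self.x + imu_delta            # predict with inertial increment
        innov = vision_pos - x_pred
        # adaptive fade-out: inflate covariance if the innovation is
        # inconsistent with the predicted uncertainty
        lam = max(1.0, innov**2 / (self.p + self.q + self.r))
        p_pred = lam * (self.p + self.q)
        k = p_pred / (p_pred + self.r)         # Kalman gain
        self.x = x_pred + k * innov            # update with visual measurement
        self.p = (1 - k) * p_pred
        return self.x
```

With a biased IMU increment, pure dead reckoning drifts without bound, while the fused estimate stays pinned near the visual fixes.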


Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 120
Author(s):  
Ningbo Li ◽  
Lianwu Guan ◽  
Yanbin Gao ◽  
Zhejun Liu ◽  
Ye Wang ◽  
...  

Vehicles rely on satellite navigation in an open environment. However, satellite navigation cannot obtain accurate positioning information for vehicles inside underground parking lots, which form a semi-enclosed navigation space. Therefore, vehicular navigation needs to take both outdoor and indoor environments into consideration. Outdoor and indoor navigation require different positioning methods, so it is of great importance to choose a reasonable navigation and positioning algorithm for vehicles. Fortunately, the integrated navigation of the Global Positioning System (GPS) and a Micro-Electro-Mechanical System (MEMS) inertial navigation system can solve the problem of switching navigation algorithms at the entrance and exit of underground parking lots. This paper proposes a low-cost vehicular seamless navigation technology based on a reduced inertial sensor system (RISS)/GPS for transitions between the outdoors and an underground garage. Specifically, the enhanced RISS is a positioning algorithm based on three inertial sensors and one odometer, which achieves a positioning performance similar to that of full-model integrated navigation, greatly reduces costs and improves the efficiency of each sensor.
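The core of a reduced inertial sensor system is 2D dead reckoning from odometer speed plus heading from a single vertical gyro, instead of a full six-axis IMU. A minimal sketch (flat-ground assumption; function and variable names are illustrative, not from the paper):

```python
import math

def riss_dead_reckon(speed, yaw_rate, dt, x=0.0, y=0.0, heading=0.0):
    """2D reduced-inertial dead reckoning: integrate odometer speed along
    a heading driven by one vertical gyro's yaw rate."""
    path = []
    for v, w in zip(speed, yaw_rate):
        heading += w * dt                  # gyro gives heading change
        x += v * math.cos(heading) * dt    # odometer gives distance
        y += v * math.sin(heading) * dt
        path.append((x, y))
    return path
```

Dropping the other gyros and accelerometers is what cuts the cost; GPS then corrects the accumulated drift whenever the vehicle is back outdoors.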


2015 ◽  
Vol 27 (6) ◽  
pp. 793-802 ◽  
Author(s):  
Hengliang Shi ◽  
Xiaolei Bai ◽  
Jianhui Duan

Purpose – In the cloth animation field, collision detection of fabric under external force is very complex, and it is difficult to achieve both realism and real-time performance. The purpose of this paper is to improve realism and meet the real-time requirement. Design/methodology/approach – This paper puts forward a mass-spring model that builds a bounding box centred on each particle, and designs a collision detection algorithm based on MapReduce. At the same time, a method is proposed to detect collisions based on geometric units. Findings – The method can quickly detect intersections between particles and triangles, and then handle the collision response according to the physical characteristics of the fabric. Experiments show that the algorithm improves real-time performance and authenticity. Research limitations/implications – Experiments show that 3D fabric simulation can be made more efficient through the MapReduce parallel computation model. Practical implications – This method can improve realism and reduce the amount of computation. Social implications – This collision detection can be used in more fields such as 3D games, aero simulation training and garment automation. Originality/value – This model and method are original, and can be applied to 3D animation, digital entertainment and the garment industry.
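The per-particle bounding boxes serve as a broad-phase filter before the exact particle-triangle test. A minimal sketch of the axis-aligned box overlap check that such a broad phase rests on (a generic illustration, not the paper's MapReduce pipeline):

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box overlap test: two boxes intersect iff
    their extents overlap on every axis.  Used as a cheap broad phase
    before the exact particle-triangle intersection test."""
    return all(amin <= bmax and bmin <= amax
               for amin, amax, bmin, bmax in zip(a_min, a_max, b_min, b_max))
```

Because each particle's box test is independent, the pairwise checks shard naturally across map tasks, which is what makes a MapReduce formulation plausible here.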


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4787 ◽  
Author(s):  
Nati Daniel ◽  
Itzik Klein

Human activity recognition aims to classify the user's activity in various applications like healthcare, gesture recognition and indoor navigation. In the latter, smartphone location recognition is gaining more attention as it enhances indoor positioning accuracy. Commonly the smartphone's inertial sensor readings are used as input to a machine learning algorithm which performs the classification. There are several approaches to tackle such a task: feature-based approaches, one-dimensional deep learning algorithms and two-dimensional deep learning architectures. Deep learning approaches make feature engineering redundant; in addition, two-dimensional approaches make it possible to apply methods from the well-established computer vision domain. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone's inertial sensors, is proposed. The contributions of this work are a novel time-series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to show the benefits of the proposed approach. In addition, as the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data. It can also be adapted to handle other types of sensory data collected for a classification task.
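The abstract does not name the signal-to-image encoding, so the sketch below uses a generic recurrence-style distance matrix purely to illustrate the idea of turning a 1D inertial window into a fixed-size 2D image a CNN can consume; the resampling length and encoding choice are assumptions:

```python
import numpy as np

def signal_to_image(signal, size):
    """Encode a 1D inertial signal as a 2D image via a pairwise-distance
    (recurrence-style) matrix, normalized to [0, 1], so 2D networks from
    computer vision can consume it."""
    s = np.asarray(signal, dtype=float)
    # resample to a fixed length so every window yields the same image size
    idx = np.linspace(0, len(s) - 1, size)
    s = np.interp(idx, np.arange(len(s)), s)
    img = np.abs(s[:, None] - s[None, :])        # pairwise distances
    rng = img.max() - img.min()
    return (img - img.min()) / (rng + 1e-12)
```

Once windows are images, a network pretrained on natural images can be fine-tuned on them, which is the transfer-learning step the paper highlights.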

