Indoor Position and Orientation for the Blind

Author(s): Mauricio Sáenz, Jaime Sánchez
2021, Vol 11 (9), pp. 4269
Author(s): Kamil Židek, Ján Piteľ, Michal Balog, Alexander Hošovský, Vratislav Hladký, ...

The assisted assembly of customized products supported by collaborative robots combined with mixed reality devices is a current trend in the Industry 4.0 concept. This article introduces an experimental work cell implementing the assisted assembly process for customized cam switches as a case study. The research aims to design a methodology for this complex task, with full digitalization and transformation of data from all vision systems into digital twin models. The position and orientation of assembled parts during manual assembly are marked and checked by a convolutional neural network (CNN) model. Training of the CNN was based on a new approach using virtual training samples with single-shot detection and instance segmentation. The trained CNN model was transferred to an embedded artificial processing unit with a high-resolution camera sensor. The embedded device redistributes the detected part positions and orientations to the mixed reality devices and the collaborative robot. This approach to assisted assembly using mixed reality, a collaborative robot, vision systems, and CNN models can significantly decrease assembly and training time in real production.
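As an illustration of one step described above (recovering a part's position and in-plane orientation from an instance-segmentation mask), here is a minimal sketch that applies PCA to the mask's pixel cloud. This is a hypothetical stand-in rather than the authors' CNN pipeline; the function name and the toy mask are assumptions made for the example.

```python
import numpy as np

def part_pose_from_mask(mask: np.ndarray):
    """Estimate a part's 2D position (centroid, px) and in-plane orientation
    (deg) from a binary instance-segmentation mask via PCA of the pixel cloud."""
    ys, xs = np.nonzero(mask)                       # pixel coordinates of the part
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)                     # estimated part position
    cov = np.cov((pts - centroid).T)                # 2x2 covariance of the pixel cloud
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]          # principal (long) axis of the part
    if major[0] < 0:                                # resolve the eigenvector sign ambiguity
        major = -major
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return centroid, angle

# Toy example: an axis-aligned rectangular "part" drawn into a mask.
mask = np.zeros((200, 200), dtype=np.uint8)
mask[80:120, 40:160] = 1                            # long axis along x -> ~0 deg
pos, ang = part_pose_from_mask(mask)
print(f"position (px): {pos}, orientation (deg): {ang:.1f}")
```

In the work cell described above, a pose estimate of this kind would then be forwarded to the mixed reality devices and the collaborative robot.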


Electronics, 2021, Vol 10 (9), pp. 1069
Author(s): Deyby Huamanchahua, Adriana Vargas-Martinez, Ricardo Ramirez-Mendoza

Exoskeletons are external structural mechanisms with joints and links that work in tandem with the user to increase, reinforce, or restore human performance. Virtual reality can be used to produce environments in which the intensity of practice and feedback on performance can be manipulated to provide tailored motor training. Will it be possible to combine both technologies and have them synchronized to reach better performance? This paper presents the kinematic analysis for the position and orientation synchronization between an n-DoF upper-limb exoskeleton pose and a projected object in an immersive virtual reality environment using a VR headset. To achieve this goal, the exoskeletal mechanism is analyzed using Euler angles and the Pieper technique to obtain the equations that lead to its orientation, forward, and inverse kinematic models. This paper extends the author's previous work by using an early-stage upper-limb exoskeleton prototype for the synchronization process.
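To make the forward-kinematics step concrete, the sketch below chains homogeneous transforms for a simplified two-joint planar stand-in (revolute joints about z, links along x). It is a toy model under stated assumptions, not the n-DoF exoskeleton model or the Euler-angle and Pieper derivation from the paper; the function names and link lengths are invented for the example.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous 4x4 rotation about the z axis by theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def trans_x(a):
    """Homogeneous 4x4 translation along the x axis by link length a (m)."""
    T = np.eye(4)
    T[0, 3] = a
    return T

def forward_kinematics(joint_angles, link_lengths):
    """End-effector pose of a planar serial chain: alternate a joint rotation
    about z with a translation along the rotated x axis (the link)."""
    T = np.eye(4)
    for theta, a in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans_x(a)
    return T  # 4x4 homogeneous transform (orientation + position)

# Example: two-joint "shoulder/elbow" stand-in with 0.3 m links.
pose = forward_kinematics([np.radians(30), np.radians(45)], [0.3, 0.3])
print("end-effector position (m):", pose[:3, 3])
```

The resulting 4x4 pose is the kind of quantity that would be handed to the VR renderer to keep the projected object aligned with the exoskeleton.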


2021, Vol 79 (7), pp. 683-695
Author(s): Yu-Xia Li, Jian-Li Wang, Peng-Fei Guo, Hong-Wen Li, Yu-Yan Cao

2021, Vol 42 (11), pp. 761-770
Author(s): Christopher Robertson, Scott Habershon

Sensors, 2020, Vol 21 (1), pp. 79
Author(s): Chenlei Han, Michael Frey, Frank Gauterin

Localization and navigation not only serve to provide positioning and route guidance information for users, but are also important inputs for vehicle control. This paper investigates the possibility of using odometry to estimate the position and orientation of a vehicle with a wheel-individual steering system in omnidirectional parking maneuvers. Vehicle models and sensors have been identified for this application. Several odometry versions are designed using a modular approach, which was developed in this paper to help users design state estimators. The different odometry versions have been implemented and validated both in a simulation environment and in real driving tests. The evaluation results show that the versions that use more models, and that use state variables within those models, provide both more accurate and more robust estimation.
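For intuition about odometry-based pose estimation, the following is a minimal dead-reckoning sketch with a single-track (bicycle) model driven by wheel speed and steering angle. It is far simpler than the wheel-individual-steering vehicle models and the modular estimator versions evaluated in the paper; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def odometry_step(x, y, yaw, v, delta, wheelbase, dt):
    """One dead-reckoning update with a single-track (bicycle) model:
    v is the rear-axle speed (m/s), delta the front steering angle (rad)."""
    x   += v * np.cos(yaw) * dt
    y   += v * np.sin(yaw) * dt
    yaw += v / wheelbase * np.tan(delta) * dt
    return x, y, yaw

# Example: 2 s of a low-speed turning maneuver, integrated at 10 ms steps.
x = y = yaw = 0.0
for _ in range(200):
    x, y, yaw = odometry_step(x, y, yaw, v=1.0, delta=np.radians(20),
                              wheelbase=2.5, dt=0.01)
print(f"pose estimate: x = {x:.2f} m, y = {y:.2f} m, yaw = {np.degrees(yaw):.1f} deg")
```

More elaborate versions would add further vehicle models and feed the result into a state estimator, in the spirit of the modular approach described above.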


2021, Vol 32 (4)
Author(s): Luigi D’Alfonso, Emanuele Garone, Pietro Muraca, Paolo Pugliese

In this work, we face the problem of estimating the relative position and orientation of a camera and an object when they are both equipped with inertial measurement units (IMUs) and the object exhibits a set of n landmark points with known coordinates (the so-called pose estimation or PnP problem). We present two algorithms that, by fusing the information provided by the camera and the IMUs, solve the PnP problem with good accuracy. These algorithms use only the measurements given by the IMUs' inclinometers, as the magnetometers usually give inaccurate estimates of the Earth's magnetic field vector. The effectiveness of the proposed methods is assessed by numerical simulations and experimental tests. The results of the tests are compared with the most recent methods proposed in the literature.
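As a point of reference for the camera-only part of the problem, the sketch below solves a synthetic PnP instance with OpenCV's solvePnP, given n landmark points with known object-frame coordinates. The fusion with IMU inclinometer measurements that the work above proposes is not reproduced here; the camera matrix, landmark layout, and poses are made-up values.

```python
import numpy as np
import cv2

# Synthetic setup: n = 6 landmark points with known object-frame coordinates (m).
obj_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.1, 0.0],
                    [0.0, 0.1, 0.0], [0.1, 0.05, 0.05], [0.05, 0.02, 0.08]])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])                 # assumed pinhole camera matrix
rvec_true = np.array([0.1, -0.2, 0.05])             # ground-truth rotation (Rodrigues)
tvec_true = np.array([0.05, -0.02, 0.6])            # ground-truth translation (m)

# Project the landmarks into the image with the ground-truth pose.
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)

# Recover the relative camera-object pose from the 2D-3D correspondences alone.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print("recovered rotation (Rodrigues):", rvec.ravel())
print("recovered translation (m):    ", tvec.ravel())
```

In fused approaches of the kind described above, inclinometer measurements can constrain the two gravity-related rotational degrees of freedom, leaving fewer parameters to estimate from the image.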

