Combining Ultrasonic Signals And Multi-Colored Images To Perform Object Tracking And Recognition In Low-Cost Robotic Platforms

2016
Author(s): Danilo H. F. Menezes, Thiago D. Mendonca, Wolney M. Neto, Hendrik T. Macedo, Leonardo N. Matos
2016, Vol 94 (2), pp. 267-282
Author(s): Youngseop Kim, Woori Han, Yong-Hwan Lee, Cheong Ghil Kim, Kuinam J. Kim

Sensors, 2019, Vol 19 (14), pp. 3220
Author(s): Carlos Veiga Almagro, Mario Di Castro, Giacomo Lunghi, Raúl Marín Prades, Pedro José Sanz Valero, ...

Robotic interventions in hazardous scenarios need to pay special attention to safety, as in most cases it is necessary to have an expert operator in the loop. Moreover, the use of a multi-modal Human-Robot Interface allows the user to interact with the robot using manual control in critical steps, as well as semi-autonomous behaviours in more secure scenarios, by using, for example, object tracking and recognition techniques. This paper describes a novel vision system to track and estimate the depth of metallic targets for robotic interventions. The system has been designed for on-hand monocular cameras, focusing on solving the lack of visibility and partial occlusions. The solution has been validated during real interventions at the European Organization for Nuclear Research (CERN) accelerator facilities, achieving 95% success in autonomous mode and 100% in supervised mode. The system increases the safety and efficiency of the robotic operations, reducing the cognitive fatigue of the operator during non-critical mission phases. The integration of such an assistance system is especially important when facing complex (or repetitive) tasks, in order to reduce the workload and accumulated stress of the operator, enhancing the performance and safety of the mission.
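The abstract does not detail the estimation procedure. As a rough sketch of how depth can be recovered with a single on-hand camera when the target's physical size is known, the snippet below combines multi-scale template matching with the pinhole camera model; it is not the authors' published pipeline, and all names and numeric values (FX_PIXELS, TARGET_WIDTH_M, the 0.6 score threshold) are hypothetical.

```python
# Hedged sketch, not the paper's method: locate a known-size metallic target
# with multi-scale template matching and estimate its depth from the pinhole
# camera model. Focal length, target width, and threshold are assumed values.
import cv2
import numpy as np

FX_PIXELS = 615.0        # assumed focal length (pixels) from camera calibration
TARGET_WIDTH_M = 0.05    # assumed real width of the metallic target (5 cm)

def track_and_estimate_depth(frame_gray, template_gray,
                             scales=np.linspace(0.5, 1.5, 11)):
    """Find the target in the frame; return (bbox, depth_m, score) or None."""
    best = None
    for s in scales:
        t = cv2.resize(template_gray, None, fx=s, fy=s)
        if t.shape[0] >= frame_gray.shape[0] or t.shape[1] >= frame_gray.shape[1]:
            continue
        res = cv2.matchTemplate(frame_gray, t, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(res)
        if best is None or score > best[0]:
            best = (score, top_left, t.shape[1], t.shape[0])
    if best is None:
        return None
    score, (x, y), w, h = best
    if score < 0.6:                # crude lost-target / occlusion check:
        return None                # hand control back to the operator
    depth_m = FX_PIXELS * TARGET_WIDTH_M / w   # pinhole model: Z = fx * W / w_px
    return (x, y, w, h), depth_m, score
```

In such a scheme, returning None when the match score drops would be the trigger for switching from semi-autonomous behaviour back to manual control by the expert operator.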


2004
Author(s): Wei Su, Laurence G. Hassebrook, Veera G. Yalla

Author(s): Timothy Garrett, Saverio Debernardis, Rafael Radkowski, Carl K. Chang, Michele Fiorentino, ...

Augmented reality (AR) applications rely on robust and efficient methods for tracking. Tracking methods use a computer-internal representation of the object to track, which can be either sparse or dense. Sparse representations use only a limited set of feature points to represent an object, whereas dense representations closely mimic the shape of the object. While algorithms operating on sparse representations are faster, dense representations can distinguish multiple objects. The research presented in this paper investigates the feasibility of a dense tracking method for rigid object tracking that incorporates both the object identification and object tracking steps. We adopted a tracking method originally developed for the Microsoft Kinect to support single object tracking. The paper describes this method and presents the results. We also compared two different methods for mesh reconstruction in this algorithm. Since meshes are more informative when identifying a rigid object, this comparison indicates which algorithm performs best for this task and guides our future research efforts.
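Dense, Kinect-style trackers ultimately reduce each frame to a rigid-alignment problem between the dense object representation and the current depth data. The sketch below shows only that core step, a Kabsch/Procrustes fit given point correspondences; the iterative correspondence search of a full ICP tracker, and the specific method evaluated by the authors, are not reproduced here, and the inputs are assumed to be hypothetical Nx3 point arrays.

```python
# Hedged sketch of the rigid-alignment step a dense tracker relies on: given
# corresponding 3D points from the object model and the current depth frame,
# recover the least-squares rotation R and translation t (Kabsch algorithm).
import numpy as np

def rigid_transform(model_pts, frame_pts):
    """Return R, t such that frame_pts ≈ R @ model_pts + t (least squares)."""
    mu_m, mu_f = model_pts.mean(axis=0), frame_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (frame_pts - mu_f)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_f - R @ mu_m
    return R, t
```

A full dense tracker would re-establish correspondences (e.g., by projective data association) and repeat this fit until the pose converges, which is where the speed difference with sparse, feature-point methods mentioned above arises.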


Author(s): Rafael Radkowski, Jarid Ingebrand

This paper examines the fidelity of a commodity range camera for assembly inspection in use cases such as augmented reality-based assembly assistance. The objective of inspection is to determine whether a part is present and correctly aligned. In our scenario, inspection happens shortly after the mechanic has assembled the part, which we denote as on-the-fly inspection. Our approach is based on object tracking and a subsequent discrepancy analysis. Object tracking determines the presence, position, and orientation of parts. The discrepancy analysis then determines whether the parts are correctly aligned. In comparison to a naive position and orientation difference approach, the discrepancy analysis incorporates the dimensions of the parts, which increases the alignment fidelity. To assess this, an experiment was conducted to determine the accuracy range. The results indicate sufficient accuracy for larger parts and a noticeable improvement in comparison to the naive approach.
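As a hedged illustration of the idea rather than the paper's exact metric, the following sketch compares a tracked pose with the nominal pose and converts the rotational error into a worst-case surface offset using the part's dimensions, so a given angular error counts for more on a long part than on a small one. The poses, extents, and the 2 mm threshold in the usage comment are hypothetical.

```python
# Simplified discrepancy check (illustration only): compare nominal vs tracked
# 4x4 homogeneous poses and scale the rotational error by the part's extent.
import numpy as np

def pose_discrepancy(nominal_T, tracked_T, part_extent_m):
    """Return translation error [m] and a worst-case surface offset [m]."""
    delta = np.linalg.inv(nominal_T) @ tracked_T        # relative transform
    t_err = np.linalg.norm(delta[:3, 3])
    angle = np.arccos(np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    # A rotation by `angle` displaces a point at half the part's longest
    # dimension by roughly angle * r (small-angle approximation).
    rot_offset = angle * 0.5 * max(part_extent_m)
    return t_err, t_err + rot_offset

# Hypothetical usage: accept the assembly if the worst-case offset is < 2 mm.
# t_err, offset = pose_discrepancy(T_nominal, T_tracked, (0.30, 0.05, 0.02))
# aligned = offset < 0.002
```

This is the sense in which a dimension-aware discrepancy analysis differs from a naive position-and-orientation difference: the same pose error can pass for a small part and fail for a large one.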


2019, Vol 27 (3), pp. 1737-1751
Author(s): M. Imran Shehzad, Fazal Wahab Karam, Shoaib Azmat
