remote perception
Recently Published Documents


TOTAL DOCUMENTS: 12 (five years: 3)

H-INDEX: 3 (five years: 0)

2021 ◽  
Vol 2 ◽  
Author(s):  
Marco Laghi ◽  
Manuel G. Catalano ◽  
Giorgio Grioli ◽  
Antonio Bicchi

Abstract Force feedback is often beneficial for robotic teleoperation, as it enhances the user’s remote perception. Over the years, many kinesthetic haptic displays (KHDs) have been proposed for this purpose, with different types of interaction and feedback depending on their kinematics and their interface with the operator, including, for example, grounded and wearable devices acting either at the joint or operational-space (OS) level. Most KHDs in the literature target the upper limb, with a majority acting at the shoulder/elbow level and others focusing on hand movements; only a minority addresses wrist motions. In this paper, we present the Wearable Delta (WΔ), a proof-of-concept wearable wrist interface with hybrid parallel–serial kinematics acting in the OS, able to render a desired force directly to the hand while involving just the forearm–hand subsystem. It has six degrees of freedom (DoFs), three of which are actuated, and is designed to minimize obstruction of the user’s wrist range of motion. Integrated with position/inertial sensors at the elbow and upper arm, the WΔ allows the remote control of a fully articulated robotic arm. The paper covers the whole design process, from concept to validation, as well as a multisubject experimental campaign investigating its usability. Finally, starting from the experimental results, it discusses and summarizes the WΔ’s advantages and limitations and outlines possible future improvements and research directions.
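As a rough illustration of operational-space force rendering on a device with three actuated DoFs, a desired Cartesian force at the hand is commonly mapped to actuator torques through the Jacobian transpose. The sketch below assumes this standard τ = Jᵀf relation; the Jacobian values are placeholders, not the WΔ’s actual kinematics.

```python
import numpy as np

def joint_torques(J, f_desired):
    """tau = J^T f: joint torques that render force f_desired at the end effector."""
    return J.T @ f_desired

# Illustrative 3x3 Jacobian for three actuated joints (placeholder values).
J = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])

# Example: a 5 N force along the z-axis, pushed toward the operator's palm.
f = np.array([0.0, 0.0, 5.0])
tau = joint_torques(J, f)
```

With the placeholder Jacobian above, only the third joint must supply torque to realize the purely vertical force, since the first two columns of J have no z-component.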


2020 ◽  
pp. 1534-1554
Author(s):  
Richard T. Stone ◽  
Thomas Michael Schnieders ◽  
Peihan Zhong

The focus of this article is perceptual effects and enhancements during ground-robot teleoperation. Three independent factors were studied: scale perception, distance perception, and orientation awareness. Enhancements for each factor were proposed, implemented, and evaluated. The results show that under remote perception conditions, where the operator was separated from the environment in which navigation took place, both distance perception and scale perception were significantly impaired compared with direct perception conditions. In addition, each of the proposed enhancements significantly improved its corresponding factor. The broader impacts of this work apply to various human–robot collaboration applications, such as urban search and rescue. Applying the proposed enhancements will allow operators to have fewer failures when passing through hallways and doorways or maneuvering around obstacles, as well as a more accurate understanding of an area's layout when a map is not available.


Medical images can be acquired through different techniques (modalities), each with its own application areas; some provide information on functional activity, while others contain only anatomic information. Usually, images of the first kind have low spatial resolution, while those of the second kind have higher resolution. However, the analysis of medical images often requires the evaluation of more than one modality, in order to provide the specialist with more information for decision making as well as for the analysis and treatment of diseases. Image fusion aims to combine information from the same sensor or from different sensors, so that the fused image retains the information content of each individual image. In remote perception, when multispectral images are analyzed, it is very important to preserve the spectral information content of each of the bands. The challenge is to obtain good-quality images that allow us to extract as much information as possible, for which it is sometimes necessary to enhance or modify the image to improve its appearance, or to combine images or portions thereof to merge their information. An ideal fusion of the multispectral images and the panchromatic band results in a new series of bands with greater spatial resolution and equal spectral content. This paper proposes PCA, DWT, and cultural-optimized entropy-based DWT fusion, evaluated with the parameters arithmetic mean (SM), maximum value ( ), and minimum value ( ).
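The DWT fusion idea described above can be sketched compactly. The following is a minimal single-level 2D Haar example in plain NumPy; the fusion rule used here (average the approximation coefficients, keep the maximum-absolute detail coefficients) is a common textbook choice, not necessarily the exact scheme this paper optimizes, and the entropy/mean/max/min metrics mirror the kind of evaluation parameters mentioned.

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar decomposition of an even-sized grayscale image."""
    a = (x[0::2] + x[1::2]) / 2.0          # row low-pass
    d = (x[0::2] - x[1::2]) / 2.0          # row high-pass
    cA = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation
    cH = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    cV = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    cD = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return cA, cH, cV, cD

def ihaar2d(cA, cH, cV, cD):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((cA.shape[0], cA.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = cA + cH, cA - cH
    d[:, 0::2], d[:, 1::2] = cV + cD, cV - cD
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def dwt_fuse(img1, img2):
    """Fuse two images: average approximations, max-abs select details."""
    c1, c2 = haar2d(img1), haar2d(img2)
    fused = [(c1[0] + c2[0]) / 2.0]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2d(*fused)

def evaluate(img, bins=256):
    """Evaluation parameters: mean, max, min, and Shannon entropy (bits)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return img.mean(), img.max(), img.min(), -np.sum(p * np.log2(p))
```

Fusing an image with itself reproduces it exactly, which is a quick sanity check that the transform pair is lossless and the fusion rule is consistent.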



Author(s):  
Stephen J. Cauffman ◽  
Douglas J. Gillan

Unmanned Aerial Systems (UASs) are becoming more prevalent in civilian use, such as emergency response and public safety. As a result, UASs pose issues of remote perception for human users (Eyerman, 2013). The purpose of this experiment was to test the effects of combining aerial and ground perspectives on spatial judgments of object positions in an urban environment. Participants were shown randomly ordered image pairs of aerial and ground views of objects in a virtual city and were asked to judge where a missing object was located in the second image of each pair. Response times and error were collected, with error calculated as the Euclidean distance between the true and judged positions. The results were consistent with previous research: congruent trials (aerial–aerial and ground–ground) yielded less error and shorter response times. There was also a significant four-way interaction between stimulus image, response image, object density, and stimulus duration. The results of this study are intended to provide the basis for future work on the underlying causes of spatial errors during UAS use and to inform interface designs that reduce these errors.


EXPLORE ◽  
2007 ◽  
Vol 3 (3) ◽  
pp. 254-269 ◽  
Author(s):  
B.J. Dunne ◽  
R.G. Jahn

Author(s):  
James S. Tittle ◽  
Axel Roesler ◽  
David D. Woods

Previous research (e.g., Casper, 2002; Darken, Kempster, & Peterson, 2001) has shown that observers demonstrate poor spatial awareness when relying on video from remote environments. Such a result is understandable, given that remote vision systems provide impoverished representations that omit the higher-order cues essential for building coherent percepts and models of the world being explored. If tele-presence or remote vision is to be useful in the future, the raw video needs to be augmented to recover what was lost by decoupling the human perceptual processor from the natural environment.

