Virtual Reality Based Multi-Modal Teleoperation Using Mixed Autonomy

Author(s):  
Muthukkumar S. Kadavasal ◽  
Abhishek Seth ◽  
James H. Oliver

A multi-modal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image-processing capabilities on board the remotely operated vehicle. The proposed VR interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multi-modal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle’s teleoperated state. Because the vehicle and the operator each hold full autonomy in stages, the operation is referred to as mixed autonomous. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle, effectively balancing autonomy between the human operator and on-board vehicle intelligence. The stereo vision based obstacle-avoidance system is initially implemented on a video-based teleoperation architecture, and experimental results are presented. The VR-based multi-modal teleoperation interface is expected to be more adaptable and intuitive than other interfaces.
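The mixed-autonomy handoff the abstract describes — the vehicle temporarily taking over near obstacles and returning control afterwards — can be pictured as a small state machine. This is a minimal sketch under stated assumptions: the class name, distance thresholds, and hysteresis band are illustrative, not the authors' implementation.

```python
from enum import Enum


class Mode(Enum):
    TELEOP = "teleoperated"
    AUTONOMOUS = "autonomous"


class MixedAutonomyArbiter:
    """Toy arbiter: hand control to onboard autonomy when an obstacle
    comes within risk_radius, return it to the operator once the path
    is clear beyond clear_radius (hysteresis avoids rapid flip-flop)."""

    def __init__(self, risk_radius=2.0, clear_radius=3.0):
        self.risk_radius = risk_radius
        self.clear_radius = clear_radius
        self.mode = Mode.TELEOP

    def update(self, nearest_obstacle_m):
        # Switch to autonomous local navigation when risk is high
        if self.mode is Mode.TELEOP and nearest_obstacle_m < self.risk_radius:
            self.mode = Mode.AUTONOMOUS
        # Re-establish teleoperation only once comfortably clear
        elif self.mode is Mode.AUTONOMOUS and nearest_obstacle_m > self.clear_radius:
            self.mode = Mode.TELEOP
        return self.mode
```

The hysteresis gap between the two radii is a common design choice so that sensor noise near a single threshold does not cause the vehicle to oscillate between modes.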

Author(s):  
Muthukkumar S. Kadavasal ◽  
James H. Oliver

A multimodal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image-processing capabilities onboard the remotely operated vehicle. The proposed VR interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multimodal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an onboard computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle’s teleoperated state. The system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The VR-based multimodal teleoperation interface is expected to be more adaptable and intuitive when compared with other interfaces.


Author(s):  
Muthukkumar S. Kadavasal ◽  
James H. Oliver

A multi-modal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image-processing capabilities on board the remotely operated vehicle. The proposed VR interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multi-modal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view. The 3D environment in VR, along with visual cues generated from real-time sensor data, allows the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle’s teleoperated state. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The VR interface architecture is discussed and implementation results are presented. The VR-based multi-modal teleoperation interface is expected to be more adaptable and intuitive than other interfaces.
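One common way to realize the "prediction states" that mask signal lag is to dead-reckon the vehicle pose forward over the measured link latency, so the VR view leads the delayed video feed. The constant-turn-rate motion model below is an illustrative assumption, not necessarily the authors' predictor.

```python
import math


def predict_pose(x, y, heading_rad, speed_mps, yaw_rate_rps, lag_s):
    """Dead-reckon a planar vehicle pose forward by lag_s seconds,
    assuming constant speed and yaw rate over the latency window."""
    if abs(yaw_rate_rps) < 1e-9:
        # Straight-line motion: no turning during the lag interval
        nx = x + speed_mps * lag_s * math.cos(heading_rad)
        ny = y + speed_mps * lag_s * math.sin(heading_rad)
        return nx, ny, heading_rad
    # Exact integration along a circular arc of radius v / omega
    new_heading = heading_rad + yaw_rate_rps * lag_s
    r = speed_mps / yaw_rate_rps
    nx = x + r * (math.sin(new_heading) - math.sin(heading_rad))
    ny = y - r * (math.cos(new_heading) - math.cos(heading_rad))
    return nx, ny, new_heading
```

In a fused interface of this kind, the predicted pose would drive the VR camera while the delayed video is rendered as a secondary, lagged layer.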


Author(s):  
Muthukkumar S. Kadavasal ◽  
James H. Oliver

A teleoperation interface is introduced featuring an integrated virtual reality based simulation augmented by sensors and image-processing capabilities on board the remotely operated vehicle. The virtual reality system addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. The system also enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle’s teleoperated state. Finally, the system provides real-time updates to the virtual environment based on anomalies encountered by the vehicle. The system architecture and preliminary implementation results are discussed, and future work focused on incorporating dynamic moving objects in the environment is described.


2015 ◽  
Vol 2015 ◽  
pp. 1-5
Author(s):  
Chia-Sui Wang ◽  
Ko-Chun Chen ◽  
Tsung Han Lee ◽  
Kuei-Shu Hsu

A virtual reality (VR) driver-tracking verification system is created, and its application to stereo image tracking and positioning accuracy is investigated in depth. The research exploits the image depth available from a stereo vision system to reduce the error rates of image tracking and image measurement. The system's ability to collect driver behavioral data was tested in a VR scenario: a racing operation is simulated, and environmental variables (special weather conditions such as rain and snow) and artificial variables (pedestrians suddenly crossing the road, vehicles appearing from blind spots, roadblocks) are added as the basis for the system implementation. In addition, the implementation applies human factors engineering to sudden conditions that can easily arise while driving. Experimental results show that the stereo vision system achieves an image depth recognition error rate within 0.011%, and the image tracking error rate can be smaller than 2.5%. The image recognition function of stereo vision is used to collect driver-tracking detection data, and VR is used to create environmental conditions that simulate different real scenarios.
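The depth error rates reported above can be related to disparity resolution through the standard stereo triangulation relation Z = fB/d, whose first-order error propagation gives dZ = Z²·Δd/(fB). The focal length, baseline, and disparity values used below are illustrative assumptions, not the paper's calibration.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Triangulated depth from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


def depth_error(focal_px, baseline_m, disparity_px, disparity_err_px=1.0):
    """First-order depth uncertainty: dZ = Z^2 * dd / (f * B).
    Error grows quadratically with depth, which is why stereo
    measurement accuracy degrades for distant targets."""
    z = stereo_depth(focal_px, baseline_m, disparity_px)
    return z * z * disparity_err_px / (focal_px * baseline_m)
```

For example, with a hypothetical 700 px focal length and 12 cm baseline, a target at 42 px disparity sits at 2.0 m, and a one-pixel disparity error shifts the estimate by roughly 4.8 cm.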


2012 ◽  
Author(s):  
Ta-Te Lin ◽  
An-Chih Tsai ◽  
Kai-Chiang Chuang ◽  
Yu-Chou Chen ◽  
Yu-Sung Chen

2009 ◽  
Vol 419-420 ◽  
pp. 565-568 ◽  
Author(s):  
Chao Ching Ho

Designing a visual tracking system to track an object is a complex task because a large amount of video data must be transmitted and processed in real time. In this study, a stereo vision system is used to acquire the 3D position of the target; tracking is achieved by applying the CAMSHIFT algorithm, and fuzzy reasoning control then steers the mobile robot to follow the selected target while avoiding in-path obstacles. The obstacle-avoidance component is based on Harris corner detection and binocular stereo imaging, which performs the correspondence calculation. A depth map is thereby created, showing the relative 3D distances of the detected salient features to the robot and providing information about in-path obstacles in front of the wheeled mobile robot. The designed visual tracking and servo system is less sensitive to lighting variation and thus performs more efficiently. Experimental results showed that the mobile robot vision system successfully completed the target-following task while avoiding obstacles.
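CAMSHIFT builds on mean shift: the search window is repeatedly moved to the centroid of a backprojection image (a per-pixel likelihood of belonging to the target's color model) until it stops moving, with CAMSHIFT additionally adapting the window size each frame. A minimal pure-Python sketch of the mean-shift core, on an assumed toy weight map rather than a real backprojection, might look like:

```python
def mean_shift(weights, window, max_iter=20):
    """Shift a search window (x, y, w, h) to the centroid of the
    weights (2D list of non-negative floats) under it, repeating
    until the window stops moving or max_iter is reached."""
    x, y, w, h = window
    rows, cols = len(weights), len(weights[0])
    for _ in range(max_iter):
        # Zeroth and first moments over the current window
        m00 = m10 = m01 = 0.0
        for j in range(max(0, y), min(rows, y + h)):
            for i in range(max(0, x), min(cols, x + w)):
                wt = weights[j][i]
                m00 += wt
                m10 += i * wt
                m01 += j * wt
        if m00 == 0:
            break  # no target mass under the window
        cx, cy = m10 / m00, m01 / m00
        nx, ny = int(round(cx - w / 2.0)), int(round(cy - h / 2.0))
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return x, y, w, h
```

In a full CAMSHIFT tracker the weights would come from histogram backprojection of the target's hue model, and the window size would be re-estimated from the zeroth moment after convergence.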


2011 ◽  
Vol 48-49 ◽  
pp. 749-752 ◽  
Author(s):  
Xing Zhe Xie ◽  
Heng Wang ◽  
Qian You Luo

This paper employs the Bumblebee2 stereo vision system to detect obstacles for a patrol robot in a substation environment. First, using selected points in the disparity image, the ground plane is estimated with the RANSAC (Random Sample Consensus) algorithm. A local occupancy grid map is then built for the patrol robot, and obstacles are detected through connected-component analysis. Field tests in a substation environment verified the reliability of the system.
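The RANSAC ground-plane step can be sketched in a few lines: repeatedly fit a plane to three random points and keep the hypothesis with the most inliers; points far from the winning plane are obstacle candidates. The point format, iteration count, and tolerance below are illustrative assumptions (the paper works on points selected from Bumblebee2 disparity images).

```python
import random


def fit_plane(p1, p2, p3):
    """Plane through three 3D points as (a, b, c, d) with ax+by+cz+d = 0."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a = uy * vz - uz * vy  # normal = u x v
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d


def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """RANSAC plane estimate: sample 3 points, fit, count inliers
    within tol of the plane, and keep the best-supported model."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        a, b, c, d = fit_plane(*rng.sample(points, 3))
        norm = (a * a + b * b + c * c) ** 0.5
        if norm == 0:
            continue  # degenerate (collinear) sample
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) / norm < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b, c, d), inliers
    return best, best_inliers
```

Points rejected as outliers of the ground plane would then be projected into the local occupancy grid and grouped by connected-component analysis to form obstacle detections.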

