Autonomous Operation and Human-Robot Interaction on an Indoor Mobile Robot

2021
Author(s): Callum Robinson

MARVIN (Mobile Autonomous Robotic Vehicle for Indoor Navigation) was once the flagship of Victoria University’s mobile robotic fleet. However, over the years MARVIN has become obsolete. This thesis continues the redevelopment of MARVIN, transforming it into a fully autonomous research platform for human-robot interaction (HRI). MARVIN utilises a Segway RMP, a self-balancing mobility platform. This provides agile locomotion but increases sensor-processing complexity due to its dynamic pitch. MARVIN’s existing sensing systems (including a laser rangefinder and ultrasonic sensors) are augmented with tactile sensors and a Microsoft Kinect v2 RGB-D camera for 3D sensing. This allows the detection of the obstacles often found in MARVIN’s unmodified, office-like operating environment. These sensors are processed using novel techniques that account for the Segway’s dynamic pitch. A newly developed navigation stack takes the processed sensor data to facilitate localisation, obstacle detection and motion planning. MARVIN’s inherited humanoid robotic torso is augmented with a touch screen and voice interface, enabling HRI. MARVIN’s HRI capabilities are demonstrated by implementing it as a robotic guide, and this implementation is evaluated through a usability study and found to be successful. Through evaluations of MARVIN’s locomotion, sensing, localisation and motion planning systems, in addition to the usability study, MARVIN is found to be capable of both autonomous navigation and engaging HRI. These developed features open a diverse range of research directions and HRI tasks that MARVIN can be used to explore.
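The thesis does not reproduce its pitch-compensation code; the following is a minimal sketch of the kind of correction it describes, assuming the Segway's pitch angle is available from an onboard IMU and that the laser rangefinder is mounted rigidly on the tilting base (function and parameter names are illustrative):

```python
import numpy as np

def level_scan(ranges, angles, pitch, sensor_height=0.5, floor_margin=0.05):
    """Project a planar laser scan into a gravity-levelled frame.

    ranges/angles: 1-D arrays from the rangefinder (m, rad).
    pitch: current platform pitch from the IMU (rad, positive = tilted forward).
    sensor_height: height of the scanner above the floor when level (m).
    """
    # Scan points in the tilted sensor frame (x forward, y left, z up).
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    z = np.zeros_like(ranges)

    # Rotate about the y-axis by -pitch to undo the platform tilt.
    cp, sp = np.cos(-pitch), np.sin(-pitch)
    x_level = cp * x + sp * z
    z_level = -sp * x + cp * z + sensor_height

    # Returns that project at or below the floor are treated as floor hits,
    # not obstacles, and are discarded.
    keep = z_level > floor_margin
    return np.stack([x_level[keep], y[keep]], axis=1)
```

Filtering out returns that fall below the floor margin keeps a forward-tilted scanner from reporting the floor itself as an obstacle.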


2020 · Vol 10 (24) · pp. 8991
Author(s): Jiadong Zhang, Wei Wang, Xianyu Qi, Ziwei Liao

For the indoor navigation of service robots, human–robot interaction and adaptation to the environment still need to be strengthened, including determining the navigation goal socially, improving the success rate of passing through doors, and optimizing path-planning efficiency. This paper proposes an indoor navigation system based on an object semantic grid and a topological map to address these problems. First, natural language is used as the human–robot interaction modality, from which the target room, object, and spatial relationship are extracted using speech recognition and word segmentation. The robot then selects the goal point within the target space using object affordance theory. To improve navigation success rate and safety, auxiliary navigation points are generated on both sides of each door to correct the robot's trajectory. Furthermore, based on the topological map and the auxiliary navigation points, the global path is segmented by topological area, and the path-planning algorithm is run separately in each room, which significantly improves navigation efficiency. The system is shown to support autonomous navigation based on language interaction and to significantly improve the safety, efficiency, and robustness of indoor robot navigation, and it has been successfully tested in real domestic environments.
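The paper does not list its waypoint-generation code; as a rough illustration of the auxiliary-navigation-point idea, the sketch below places one point on each side of a doorway along its normal, so the planner approaches and exits the door roughly perpendicular to it (the names and the 0.6 m offset are assumptions, not values from the paper):

```python
import math

def door_waypoints(door_center, door_yaw, offset=0.6):
    """Generate auxiliary navigation points on both sides of a door.

    door_center: (x, y) of the doorway midpoint in the map frame.
    door_yaw: orientation of the door plane; the passing direction is its normal.
    offset: distance (m) of each auxiliary point from the doorway.
    The planner visits one point, the doorway centre, then the other point,
    which keeps the trajectory roughly perpendicular to the door.
    """
    nx = math.cos(door_yaw + math.pi / 2)
    ny = math.sin(door_yaw + math.pi / 2)
    x, y = door_center
    return (x + offset * nx, y + offset * ny), (x - offset * nx, y - offset * ny)
```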


2020 · Vol 20 (14) · pp. 7918-7928
Author(s): Yingzhong Tian, Guopeng Wang, Long Li, Tao Jin, Fengfeng Xi, ...

Author(s): James Ballantyne, Edward Johns, Salman Valibeik, Charence Wong, Guang-Zhong Yang

Sensors · 2020 · Vol 20 (8) · pp. 2180
Author(s): Prasanna Kolar, Patrick Benavidez, Mo Jamshidi

This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for the disabled and senior citizens, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need to fuse that data to produce the best data for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need optimal technology to read the sensor data, process the data, eliminate or at least reduce the noise, and then use the data for the required tasks. We present a survey of current data-processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology, and stereo/depth, monocular red-green-blue (RGB), and time-of-flight (TOF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey provides sensor information to researchers who intend to accomplish the task of motion control of a robot and details the use of LiDAR and cameras to accomplish robot navigation.
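As a generic illustration of the low-level fusion the survey discusses (not a method taken from the paper), the snippet below fuses a LiDAR range and a depth-camera range for the same obstacle by inverse-variance weighting, so the less noisy sensor dominates the estimate:

```python
def fuse_ranges(z_lidar, var_lidar, z_depth, var_depth):
    """Inverse-variance (Kalman-style) fusion of two range measurements.

    Each sensor reports a distance to the same obstacle plus its noise
    variance; the fused estimate weights each measurement by its confidence
    and has a lower variance than either input.
    """
    w_l = 1.0 / var_lidar
    w_d = 1.0 / var_depth
    fused = (w_l * z_lidar + w_d * z_depth) / (w_l + w_d)
    fused_var = 1.0 / (w_l + w_d)
    return fused, fused_var

# Example: LiDAR reads 2.00 m (var 0.01), depth camera reads 2.10 m (var 0.04)
print(fuse_ranges(2.00, 0.01, 2.10, 0.04))  # -> (2.02, 0.008)
```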


2019 · Vol 1 (1) · pp. 37-53
Author(s): Kerstin Thurow, Lei Zhang, Hui Liu, Steffen Junginger, Norbert Stoll, ...

Transportation technologies for mobile robots include indoor navigation, intelligent collision avoidance and target manipulation. This paper discusses the research process and development of these interrelated technologies. An efficient multi-floor laboratory transportation system for mobile robots, developed by the group at the Center for Life Science Automation (CELISCA), is then introduced. This system integrates multi-floor navigation and intelligent collision avoidance, as well as a labware manipulation system. A multi-floor navigation technology is proposed, comprising sub-systems for mapping and localization, path planning, door control and elevator operation. Based on human–robot interaction technology, a collision avoidance system is proposed that improves the navigation of the robots and ensures the safety of the transportation process. Grasping and placing operations using the robots' dual arms are investigated and integrated into the multi-floor transportation system. The proposed transportation system is installed on H20 mobile robots and tested at the CELISCA laboratory. The results show that the proposed system enables the mobile robots to successfully perform multi-floor laboratory transportation tasks.
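The paper does not include its planner's code; the sketch below illustrates the general multi-floor idea under the assumption that the building is modelled as a topological graph in which elevator nodes connect the per-floor sub-graphs, so a single shortest-path search yields a route that spans floors (node names and costs are invented for the example):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a topological map whose nodes span several floors.

    graph: {node: [(neighbour, cost), ...]}; elevator nodes such as
    "elevator_F1" / "elevator_F2" link the per-floor sub-graphs.
    """
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Illustrative two-floor layout: rooms connect to corridors, elevators connect floors.
lab = {
    "lab_F1": [("corridor_F1", 5)],
    "corridor_F1": [("lab_F1", 5), ("elevator_F1", 8)],
    "elevator_F1": [("corridor_F1", 8), ("elevator_F2", 20)],
    "elevator_F2": [("elevator_F1", 20), ("corridor_F2", 8)],
    "corridor_F2": [("elevator_F2", 8), ("lab_F2", 6)],
    "lab_F2": [("corridor_F2", 6)],
}
print(shortest_route(lab, "lab_F1", "lab_F2"))
```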


Author(s): Robin R. Murphy, Jennifer L. Burke

The Center for Robot-Assisted Search and Rescue has collected data at three responses (World Trade Center, Hurricane Charley, and the La Conchita mudslide) and nine high-fidelity field exercises. Our results can be distilled into four lessons. First, building situation awareness, not autonomous navigation, is the major bottleneck in robot autonomy. Most of the robotics literature assumes a single operator and a single robot (SOSR), while our work shows that two operators working together are nine times more likely to find a victim. Second, human-robot interaction should be thought of not as how to control the robot, but as how a team of experts can exploit the robot as an active information source. The third lesson is that team members use shared visual information to build shared mental models and facilitate team coordination, which suggests that high-bandwidth, reliable communications will be necessary for effective teamwork. Fourth, victims and rescuers in close proximity to the robots respond to the robots socially. We conclude with observations about the general challenges in human-robot interaction.


2019 · Vol 16 (02) · pp. 1950006
Author(s): Yan Wei, Wei Jiang, Ahmed Rahmani, Qiang Zhan

A highly redundant non-holonomic humanoid mobile dual-arm manipulator system (MDAMS) is presented in this paper, and motion planning to realize “human-like” autonomous navigation and manipulation tasks is studied. First, an improved MaxiMin NSGA-II algorithm, which optimizes five objective functions, is proposed to design the optimal pose for manipulating the target object while simultaneously addressing singularity, redundancy and the coupling between the mobile base and the manipulator. Then, to link the initial pose with that optimal pose, an off-line motion planning algorithm is designed: an efficient direct-connect bidirectional RRT with gradient descent is proposed to greatly reduce the number of sampled nodes, and a geometric optimization method is proposed for path pruning. In addition, head-forward behaviors are realized by calculating reasonable orientations and assigning them to the mobile base, improving the quality of human-robot interaction. Third, the extension to online planning is achieved by introducing real-time sensing, collision-test and control cycles to update the robot's motion in dynamic environments. Fourth, an end-effectors' (EEs') via-point-based multi-objective genetic algorithm (MOGA) is proposed to design the “human-like” via-poses by optimizing four objective functions. Finally, numerous simulations are presented to validate the effectiveness of the proposed algorithms.
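The geometric path-pruning step can be illustrated with a simple shortcutting routine (a common post-processing technique, shown here as a sketch of the general idea rather than the paper's exact method): starting from the first waypoint, repeatedly jump to the farthest waypoint reachable by a collision-free straight segment.

```python
def prune_path(path, collision_free):
    """Greedy geometric path pruning (shortcutting) of a sampled path.

    path: list of configurations from the planner.
    collision_free(a, b) -> bool: True if the straight segment between two
    configurations is obstacle-free (supplied by the caller).
    """
    pruned, i = [path[0]], 0
    while i < len(path) - 1:
        # Find the farthest waypoint directly reachable from path[i].
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```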


Author(s): Carlos Morato, Krishnanand Kaipa, Boxuan Zhao, Satyandra K. Gupta

In this paper, we propose an exteroceptive-sensing-based framework to achieve safe human-robot interaction during shared tasks. Our approach allows a human to operate in close proximity to the robot while pausing the robot's motion whenever a collision between the human and the robot is imminent. The human's presence is sensed by an N-range-sensor system, which consists of multiple range sensors mounted at various points on the periphery of the work cell. Each range sensor is based on a Microsoft Kinect sensor, observes the human, and outputs a 20-DOF human model. Positional data from these models are fused together to generate a refined human model. Next, the robot and the human model are approximated by dynamic bounding spheres, and the robot's motion is controlled by tracking collisions between these spheres. Whereas most previous exteroceptive methods relied on depth data from camera images, our approach is one of the first successful attempts to build an explicit human model online and use it to evaluate human-robot interference. Real-time behavior observed during experiments with a 5-DOF robot and a human safely performing shared assembly tasks validates our approach.
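A minimal sketch of the bounding-sphere monitoring idea follows, assuming the robot and the fused human model each expose a list of spheres in a common frame (the names, the safety margin, and the pause call are illustrative, not the authors' API):

```python
import math

def imminent_collision(robot_spheres, human_spheres, margin=0.10):
    """Check dynamic bounding spheres for an imminent human-robot collision.

    Each sphere is ((x, y, z), radius) in a shared world frame. Returns True
    if any robot sphere comes within `margin` metres of any human sphere, in
    which case the controller would pause the robot's motion.
    """
    for centre_r, radius_r in robot_spheres:
        for centre_h, radius_h in human_spheres:
            if math.dist(centre_r, centre_h) < radius_r + radius_h + margin:
                return True
    return False

# Illustrative use inside a control loop:
# if imminent_collision(robot_model.spheres(), human_model.spheres()):
#     robot.pause_motion()
```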

