The PoundCloud framework for ROS-based cloud robotics: Case studies on autonomous navigation and human-robot interaction

2021 ◽  
pp. 103981
Author(s):  
Ricardo C. Mello ◽  
Sergio D. Sierra M. ◽  
Wandercleyson M. Scheidegger ◽  
Marcela C. Múnera ◽  
Carlos A. Cifuentes ◽  
...

Author(s):
James Ballantyne ◽  
Edward Johns ◽  
Salman Valibeik ◽  
Charence Wong ◽  
Guang-Zhong Yang

2021 ◽  
Author(s):  
Callum Robinson

MARVIN (Mobile Autonomous Robotic Vehicle for Indoor Navigation) was once the flagship of Victoria University’s mobile robotic fleet. However, over the years MARVIN has become obsolete. This thesis continues the redevelopment of MARVIN, transforming it into a fully autonomous research platform for human-robot interaction (HRI). MARVIN utilises a Segway RMP, a self-balancing mobility platform. This provides agile locomotion, but increases sensor processing complexity due to its dynamic pitch. MARVIN’s existing sensing systems (including a laser rangefinder and ultrasonic sensors) are augmented with tactile sensors and a Microsoft Kinect v2 RGB-D camera for 3D sensing. This allows the detection of the obstacles often found in MARVIN’s unmodified, office-like operating environment. Data from these sensors are processed using novel techniques to account for the Segway’s dynamic pitch. A newly developed navigation stack uses the processed sensor data for localisation, obstacle detection and motion planning. MARVIN’s inherited humanoid robotic torso is augmented with a touch screen and voice interface, enabling HRI. MARVIN’s HRI capabilities are demonstrated by implementing it as a robotic guide. This implementation is evaluated through a usability study and found to be successful. Through evaluations of MARVIN’s locomotion, sensing, localisation and motion planning systems, in addition to the usability study, MARVIN is found to be capable of both autonomous navigation and engaging HRI. These developed features open a diverse range of research directions and HRI tasks that MARVIN can be used to explore.
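The thesis abstract above notes that sensor data must be corrected for the Segway's dynamic pitch. As a rough illustration of the underlying geometry only (not the thesis's actual pipeline; the function name, frame conventions, and parameters here are invented), a 2D laser return can be rotated by the measured pitch angle to project it into a gravity-aligned frame:

```python
import math

def compensate_pitch(range_m, bearing_rad, pitch_rad, sensor_height_m):
    """Project a 2D laser return into a level (gravity-aligned) frame.

    A forward pitch tilts the scan plane down, so a return at distance
    range_m maps to a shorter horizontal distance and a lower height.
    """
    # Point in the sensor frame (x forward, y left)
    x_s = range_m * math.cos(bearing_rad)
    y_s = range_m * math.sin(bearing_rad)
    # Rotate about the y-axis by the pitch angle to level the frame
    x = x_s * math.cos(pitch_rad)
    z = sensor_height_m - x_s * math.sin(pitch_rad)
    return x, y_s, z

# With zero pitch the point is unchanged; with forward pitch a
# straight-ahead return lands slightly closer and below the sensor.
x, y, z = compensate_pitch(2.0, 0.0, 0.1, 1.0)
```

A full treatment would also account for roll and for the sensor's lever arm from the pitch axis; this sketch shows only the dominant pitch term.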


AI Magazine ◽  
2011 ◽  
Vol 32 (4) ◽  
pp. 85-99 ◽  
Author(s):  
Julia Peltason ◽  
Britta Wrede

Modeling interaction with robots raises challenges for dialog modeling that differ from those posed by traditional, less embodied machines. We present four case studies implementing a typical human-robot interaction scenario with different state-of-the-art dialog frameworks, in order to identify challenges and pitfalls specific to HRI along with potential solutions. The results are discussed with a special focus on the interplay between dialog and task modeling on robots.


Author(s):  
Robin R. Murphy ◽  
Jennifer L. Burke

The Center for Robot-Assisted Search and Rescue has collected data at three responses (World Trade Center, Hurricane Charley, and the La Conchita mudslide) and nine high-fidelity field exercises. Our results can be distilled into four lessons. First, building situation awareness, not autonomous navigation, is the major bottleneck in robot autonomy. Most of the robotics literature assumes a single operator and a single robot (SOSR), while our work shows that two operators working together are nine times more likely to find a victim. Second, human-robot interaction should be thought of not as how to control the robot, but rather as how a team of experts can exploit the robot as an active information source. The third lesson is that team members use shared visual information to build shared mental models and facilitate team coordination. This suggests that high-bandwidth, reliable communications will be necessary for effective teamwork. Fourth, victims and rescuers in close proximity to the robots respond to them socially. We conclude with observations about the general challenges in human-robot interaction.


2020 ◽  
Vol 10 (24) ◽  
pp. 8991
Author(s):  
Jiadong Zhang ◽  
Wei Wang ◽  
Xianyu Qi ◽  
Ziwei Liao

For the indoor navigation of service robots, human–robot interaction and adaptation to the environment still need to be strengthened, including determining the navigation goal socially, improving the success rate of passing through doors, and optimizing path planning efficiency. This paper proposes an indoor navigation system based on an object semantic grid and a topological map to address these problems. First, natural language is used as the human–robot interaction form, from which the target room, object, and spatial relationship are extracted using speech recognition and word segmentation. Then, the robot selects the goal point within the target space using object affordance theory. To improve the navigation success rate and safety, we generate auxiliary navigation points on both sides of each door to correct the robot's trajectory. Furthermore, based on the topological map and the auxiliary navigation points, the global path is segmented into topological areas, and path planning is carried out separately within each room, which significantly improves navigation efficiency. The system has been demonstrated to support autonomous navigation based on language interaction and to significantly improve the safety, efficiency, and robustness of indoor robot navigation. It has been successfully tested in real domestic environments.
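The auxiliary navigation points described above can be sketched geometrically: given a doorway's centre and orientation, place one waypoint on each side of the door along its normal, so the robot crosses the threshold perpendicular to the door plane. This is only an illustrative reconstruction; the paper's actual point-generation method is not given, and the 0.5 m offset here is an assumption:

```python
import math

def auxiliary_door_points(door_xy, door_yaw, offset_m=0.5):
    """Generate one waypoint on each side of a doorway.

    door_xy is the (x, y) centre of the doorway; door_yaw is the
    orientation of the door plane, so its normal points along
    door_yaw + pi/2. Routing the global path through these two
    points forces a straight, centred crossing of the threshold.
    """
    nx = math.cos(door_yaw + math.pi / 2)
    ny = math.sin(door_yaw + math.pi / 2)
    x, y = door_xy
    return [(x + offset_m * nx, y + offset_m * ny),
            (x - offset_m * nx, y - offset_m * ny)]

# A door at (3, 1) whose plane is aligned with the x-axis gets
# waypoints roughly 0.5 m in front of and behind the opening.
pts = auxiliary_door_points((3.0, 1.0), 0.0)
```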


2019 ◽  
Vol 10 (1) ◽  
pp. 219-227 ◽  
Author(s):  
Cathrine Hasse

This article argues that a multi-variation approach can be a useful supplement to existing ethnographic studies in the field of Human-Robot Interaction (HRI). The multi-variation approach builds on classical ethnographic case studies, where a researcher studies a delimited field in a microstudy of a particular robot, its makers, users, and affected stakeholders. The approach is also inspired by multi-sited studies, where researchers move across fields, adding to the complexity of the ethnographic findings. Whereas both approaches build on analysis of microstudies, the multi-variation approach is further inspired by postphenomenology, where the main aim is to deliberately seek variation – thus again adding to the complexity of the detailed findings. Here, the multi-variation approach includes several researchers studying several types of robots across sites. The analytical approach seeks patterns across this complexity – and the claim is that a multi-variation approach has a strength in findings that are systematic and consistent across cases, sites, and variations. The article gives an example of such cross-variation findings in the robot field – namely the tendency for roboticists across cases and robot types to publicly present their robots as more finished and well-functioning than they actually are.


2021 ◽  
Vol 8 ◽  
Author(s):  
Tetsunari Inamura ◽  
Yoshiaki Mizuchi

Research on Human-Robot Interaction (HRI) requires substantial consideration of experimental design, as well as a significant amount of time to conduct subject experiments. Recent technology in virtual reality (VR) can potentially address these time and effort challenges. The significant advantages of VR systems for HRI are: 1) cost reduction, as experimental facilities are not required in a real environment; 2) provision of the same environmental and embodied interaction conditions to test subjects; 3) visualization of arbitrary information and situations that cannot occur in reality, such as playback of past experiences; and 4) ease of access to an immersive and natural interface for robot/avatar teleoperation. Although such VR tools have been applied and developed in previous HRI research, all-encompassing tools or frameworks remain unavailable. In particular, the benefits of integration with cloud computing have not been comprehensively considered. Hence, the purpose of this study is to propose a research platform that can comprehensively provide the elements required for HRI research by integrating VR and cloud technologies. To realize a flexible and reusable system, we developed a real-time bridging mechanism between the robot operating system (ROS) and Unity. To confirm the feasibility of the system in a practical HRI scenario, we applied the proposed system to three case studies, including a robot competition named RoboCup@Home. Via these case studies, we validated the system’s usefulness and its potential for the development and evaluation of social intelligence via multimodal HRI.
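The abstract describes a real-time bridge between ROS and Unity. One common way such bridges move data is by wrapping ROS messages in JSON frames, as in the rosbridge protocol; the sketch below shows only that generic wire format and is not the authors' actual bridging mechanism:

```python
import json

def ros_publish_frame(topic, linear_x, angular_z):
    """Encode a velocity command as a rosbridge-style JSON publish frame.

    The rosbridge protocol wraps each ROS message in an envelope with an
    "op" field; a Unity client can parse the same structure and apply the
    command to a simulated robot. Message layout mirrors
    geometry_msgs/Twist (linear and angular vectors).
    """
    msg = {
        "linear": {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Drive forward at 0.2 m/s while turning at 0.5 rad/s.
frame = ros_publish_frame("/cmd_vel", 0.2, 0.5)
```

In practice the frame would travel over a WebSocket between the ROS side and the Unity client; latency and message-rate handling are where a purpose-built bridge like the one described would differ from this minimal sketch.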


Micromachines ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 193
Author(s):  
Marcos Daza ◽  
Dennis Barrios-Aranibar ◽  
José Diaz-Amado ◽  
Yudith Cardinale ◽  
João Vilasboas

Nowadays, mobile robots are playing an important role in different areas of science, industry, academia and even in everyday life. In this sense, their abilities and behaviours become increasingly complex. In particular, in indoor environments, such as hospitals, schools, banks and museums, where the robot coincides with people and other robots, its movement and navigation must be programmed and adapted to robot–robot and human–robot interactions. However, existing approaches focus either on multi-robot navigation (robot–robot interaction) or on social navigation in the presence of humans (human–robot interaction), neglecting the integration of both. Proxemic interaction has recently been used in this domain of research to improve Human–Robot Interaction (HRI). In this context, we propose an autonomous navigation approach for mobile robots in indoor environments, based on the principles of proxemic theory, integrated with classical navigation algorithms, such as ORCA, Social Momentum, and A*. With this novel approach, the mobile robot adapts its behaviour by analysing the proximity of people to each other, to itself, and to other robots, in order to decide and plan its navigation while showing acceptable social behaviours in the presence of humans. We describe our proposed approach and show how proxemics and the classical navigation algorithms are combined to provide effective navigation while respecting social human distances. To show the suitability of our approach, we simulate several situations of coexistence of robots and humans, demonstrating effective social navigation.
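The proxemic adaptation described above can be illustrated with Hall's classic interpersonal distance zones, using the distance to the nearest person to cap the robot's speed. The zone thresholds below follow Hall's commonly cited values, but the speed factors are illustrative assumptions, not the paper's tuned behaviour:

```python
def proxemic_speed_limit(dist_to_person_m, v_max=1.0):
    """Cap a robot's speed according to Hall's proxemic zones.

    Zone thresholds (intimate < 0.45 m, personal < 1.2 m,
    social < 3.6 m) are Hall's classic values; the scaling
    factors are placeholders for a planner's tuned parameters.
    """
    if dist_to_person_m < 0.45:   # intimate zone: stop entirely
        return 0.0
    if dist_to_person_m < 1.2:    # personal zone: creep
        return 0.25 * v_max
    if dist_to_person_m < 3.6:    # social zone: slow down
        return 0.5 * v_max
    return v_max                  # public zone: full speed

# A person 2 m away puts the robot in the social zone, halving its
# speed limit; beyond 3.6 m it may travel at full speed.
limit = proxemic_speed_limit(2.0)
```

In a full system such a limit would feed into the local planner (e.g. as a velocity constraint on ORCA or A*-generated paths) rather than being applied directly.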



