NAO-Teach: helping kids to learn societal and theoretical knowledge with friendly human-robot interaction

Author(s):  
Ahmad Hoirul Basori

Robotic technology has affected the education field, and even early education now involves robots to attract kids. Technical education is the notion of giving students knowledge of robots and technology. The main contribution of our research is to provide an interactive way of learning for kids through a play-and-fun method. Two approaches are proposed here. First, we provide an interactive game of touching the robot's body parts, which teaches kids to practise their motor skills and to listen to instructions. In this game, kids are asked to find robot parts, such as the right hand or left hand, each equipped with a tactile sensor. The game's difficulty can be increased by setting a time limit for the answer, requiring kids to touch the robot's body parts quickly. The second learning method is practising number counting and pronunciation with the NAO robot. The robot uses computer vision to analyse and pronounce the kids' handwriting with an artificial neural network. The implementation obtained a success rate of more than 75% on the recognition part, with a loss of less than 0.6. The system received strong appreciation from kids and their parents, and we believe this research can attract kids to study in interactive and fun ways.
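The abstract does not specify the network, but the recognition step it describes (a neural network classifying handwritten digits) can be illustrated with a minimal sketch. The architecture, the MNIST dataset, and all hyperparameters below are illustrative assumptions, not the authors' published model.

```python
# Minimal sketch of a handwritten-digit recogniser of the kind the abstract
# describes; the small CNN and the MNIST dataset are assumptions.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # normalise to [0, 1], add channel dim
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one class per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```

Reported accuracy and loss would then come from evaluating such a model on held-out digit images, as in the >75% recognition rate the abstract cites.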

2021 ◽  
Vol 6 (51) ◽  
pp. eabc8801
Author(s):  
Youcan Yan ◽  
Zhe Hu ◽  
Zhengbao Yang ◽  
Wenzhen Yuan ◽  
Chaoyang Song ◽  
...  

Human skin can sense subtle changes of both normal and shear forces (i.e., self-decoupled) and perceive stimuli with finer resolution than the average spacing between mechanoreceptors (i.e., super-resolved). By contrast, existing tactile sensors for robotic applications are inferior, lacking accurate force decoupling and proper spatial resolution at the same time. Here, we present a soft tactile sensor with self-decoupling and super-resolution abilities by designing a sinusoidally magnetized flexible film (with a thickness of ~0.5 millimeters), whose deformation can be detected by a Hall sensor according to the change of magnetic flux densities under external forces. The sensor can accurately measure the normal force and the shear force (demonstrated in one dimension) with a single unit and achieve a 60-fold super-resolved accuracy enhanced by deep learning. By mounting our sensor at the fingertip of a robotic gripper, we show that robots can accomplish challenging tasks such as stably grasping fragile objects under external disturbance and threading a needle via teleoperation. This research provides new insight into tactile sensor design and could be beneficial to various applications in the robotics field, such as adaptive grasping, dexterous manipulation, and human-robot interaction.
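The self-decoupling idea can be sketched from the field geometry the abstract describes: above a sinusoidally magnetised film, the ratio of the tangential to the normal flux component tracks shear displacement alone, while the flux magnitude tracks normal displacement alone. The field model, constants, and function names below are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch of self-decoupling: assume the flux components above the
# film vary roughly as
#   Bx ≈ A·exp(-k·z)·sin(k·x),   Bz ≈ A·exp(-k·z)·cos(k·x),
# so Bx/Bz depends only on shear displacement x, and sqrt(Bx²+Bz²) only on
# normal displacement z. A and k are made-up calibration values.
import math

A = 50.0                 # flux amplitude at the film surface (arbitrary units)
k = 2 * math.pi / 4.0    # spatial wavenumber for an assumed 4 mm period

def decouple(bx: float, bz: float) -> tuple[float, float]:
    """Recover (shear, normal) displacements in mm from one Hall reading."""
    shear = math.atan2(bx, bz) / k                    # from the ratio alone
    normal = -math.log(math.hypot(bx, bz) / A) / k    # from the magnitude alone
    return shear, normal

# Example: a synthetic reading at 0.5 mm shear and 1.0 mm normal displacement
bx = A * math.exp(-k * 1.0) * math.sin(k * 0.5)
bz = A * math.exp(-k * 1.0) * math.cos(k * 0.5)
print(decouple(bx, bz))   # ≈ (0.5, 1.0)
```

In the paper, such displacements would map to forces through the film's stiffness, and deep learning refines localisation beyond this closed-form picture to reach the reported super-resolution.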


Author(s):  
J. Lindblom ◽  
B. Alenljung

A fundamental challenge of human interaction with socially interactive robots, compared to other interactive products, comes from them being embodied. The embodied nature of social robots questions to what degree humans can interact 'naturally' with robots, and what impact the interaction quality has on the user experience (UX). UX is fundamentally about emotions that arise and form in humans through the use of technology in a particular situation. This chapter aims to contribute to the field of human-robot interaction (HRI) by addressing, in further detail, the role and relevance of embodied cognition for human social interaction, and consequently what role embodiment can play in HRI, especially for socially interactive robots. Furthermore, some challenges for socially embodied interaction between humans and socially interactive robots are outlined and possible directions for future research are presented. It is concluded that the body is of crucial importance in understanding emotion and cognition in general, and, in particular, for a positive user experience to emerge when interacting with socially interactive robots.


2021 ◽  
Vol 18 (6) ◽  
pp. 172988142110620
Author(s):  
Jiyuan Song ◽  
Aibin Zhu ◽  
Yao Tu ◽  
Jiajun Zou

Carrying heavy objects can easily cause back injuries and other musculoskeletal diseases. Although wearable robots are designed to reduce this danger, most existing exoskeletons use high-stiffness mechanisms, which are beneficial to load-bearing conduction but restrict the natural movement of the human body, thereby causing ergonomic risks. This article proposes a back exoskeleton composed of multiple elastic spherical hinges inspired by the biological spine. This spine exoskeleton can assist in the process of bending the body while ensuring flexibility. We deduced the kinematics model of this mechanism and established an analytical biomechanical model of human–robot interaction. The mechanism of joint assistance of the spine exoskeleton was discussed, and experiments were conducted to verify the flexibility of the spine exoskeleton and the effectiveness of the assistance during bending.
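As a rough picture of the kinematics of such a chain of spherical hinges, each hinge can be modelled as a free rotation followed by a fixed segment offset, and composing the hinges gives the pose of the spine tip. The angle parameterisation, segment length, and values below are illustrative assumptions, not the article's model.

```python
# Toy forward kinematics for a serial chain of spherical hinges; all
# dimensions and angles are invented for illustration.
import numpy as np

def rot_rpy(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation matrix of one spherical hinge from roll-pitch-yaw angles."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def chain_endpoint(joint_angles, segment_length=0.05):
    """Tip position after composing each hinge rotation with its segment."""
    R = np.eye(3)
    p = np.zeros(3)
    link = np.array([0.0, 0.0, segment_length])  # each segment points 'up'
    for roll, pitch, yaw in joint_angles:
        R = R @ rot_rpy(roll, pitch, yaw)
        p = p + R @ link
    return p

# Six hinges, each flexed 10 degrees forward: an even bend like a spine
angles = [(0.0, np.radians(10), 0.0)] * 6
print(chain_endpoint(angles))
```

Distributing the bend over many low-stiffness hinges, rather than one rigid joint, is what lets such a mechanism follow the trunk's natural curvature.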


2019 ◽  
Vol 4 (29) ◽  
pp. eaav6079
Author(s):  
Kathleen Fitzsimons ◽  
Ana Maria Acosta ◽  
Julius P. A. Dewald ◽  
Todd D. Murphey

This paper applies information-theoretic principles to the investigation of physical human-robot interaction. Drawing from the study of human perception and neural encoding, information-theoretic approaches offer a perspective that enables quantitatively interpreting the body as an information channel and bodily motion as an information-carrying signal. We show that ergodicity, which can be interpreted as the degree to which a trajectory encodes information about a task, correctly predicts changes due to reduction of a person’s existing deficit or the addition of algorithmic assistance. The measure also captures changes from training with robotic assistance. Other common measures for assessment failed to capture at least one of these effects. This information-based interpretation of motion can be applied broadly, in the evaluation and design of human-machine interactions, in learning by demonstration paradigms, or in human motion analysis.
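One common spectral definition of ergodicity in this line of work compares the Fourier coefficients of a trajectory's time-averaged statistics against those of a target spatial distribution, weighting low spatial frequencies most heavily. A minimal one-dimensional sketch follows; the basis, weights, and toy task are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a spectral ergodic metric in 1D: a trajectory that spends its
# time where the target density has mass scores lower (more ergodic).
import numpy as np

L = 1.0     # domain [0, L]
K = 20      # number of cosine modes

def traj_coeffs(x: np.ndarray) -> np.ndarray:
    """Time-averaged cosine coefficients c_k of a sampled trajectory."""
    return np.array([np.mean(np.cos(k * np.pi * x / L)) for k in range(K)])

def dist_coeffs(phi: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Cosine coefficients of a target density phi sampled on a grid."""
    dx = grid[1] - grid[0]
    return np.array([np.sum(phi * np.cos(k * np.pi * grid / L)) * dx
                     for k in range(K)])

def ergodic_metric(x: np.ndarray, phi: np.ndarray, grid: np.ndarray) -> float:
    ks = np.arange(K)
    lam = (1.0 + ks**2) ** -1.0   # weight low spatial frequencies more
    return float(np.sum(lam * (traj_coeffs(x) - dist_coeffs(phi, grid))**2))

# Target: spend time near x = 0.5; compare two candidate motions
grid = np.linspace(0, L, 500)
dx = grid[1] - grid[0]
phi = np.exp(-((grid - 0.5) ** 2) / 0.01)
phi /= phi.sum() * dx                                   # normalise to a density
good = 0.5 + 0.05 * np.sin(np.linspace(0, 20, 2000))    # hovers near 0.5
poor = np.linspace(0, L, 2000)                          # sweeps everywhere
print(ergodic_metric(good, phi, grid), ergodic_metric(poor, phi, grid))
```

A measure of this shape is what lets bodily motion be scored as an information-carrying signal relative to a task, rather than by error to a single reference trajectory.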


2012 ◽  
Vol 24 (12) ◽  
pp. 2306-2320 ◽  
Author(s):  
Luigi Tamè ◽  
Christoph Braun ◽  
Angelika Lingnau ◽  
Jens Schwarzbach ◽  
Gianpaolo Demarchi ◽  
...  

Although the somatosensory homunculus is a classically used description of the way somatosensory inputs are processed in the brain, the actual contributions of primary (SI) and secondary (SII) somatosensory cortices to the spatial coding of touch remain poorly understood. We studied adaptation of the fMRI BOLD response in the somatosensory cortex by delivering pairs of vibrotactile stimuli to the fingertips of the index and middle fingers. The first stimulus (adaptor) was delivered either to the index or to the middle finger of the right or left hand, and the second stimulus (test) was always administered to the left index finger. The overall BOLD response evoked by the stimulation was primarily contralateral in SI and was more bilateral in SII. However, our fMRI adaptation approach also revealed that both somatosensory cortices were sensitive to ipsilateral as well as to contralateral inputs. SI and SII adapted more after subsequent stimulation of homologous as compared with nonhomologous fingers, showing a distinction between different fingers. Most importantly, for both somatosensory cortices, this finger-specific adaptation occurred irrespective of whether the tactile stimulus was delivered to the same or to different hands. This result implies integration of contralateral and ipsilateral somatosensory inputs in SI as well as in SII. Our findings suggest that SI is more than a simple relay for sensory information and that both SI and SII contribute to the spatial coding of touch by discriminating between body parts (fingers) and by integrating the somatosensory input from the two sides of the body (hands).


2019 ◽  
Vol 6 (3) ◽  
pp. 180866 ◽  
Author(s):  
Matthew R. Longo ◽  
Anamaria Lulciuc ◽  
Lenka Sotakova

The perceived distance between two touches has been found to be larger for pairs of stimuli oriented across the width of the body than along the length of the body, for several body parts. Nevertheless, the magnitude of such biases varies from place to place, suggesting systematically different distortions of tactile space across the body. Several recent studies have investigated perceived tactile distance on the belly as an implicit measure of body perception in clinical conditions including anorexia nervosa and obesity. In this study, we investigated whether there is an anisotropy of perceived tactile distance on the belly in a sample of adult women. Participants made verbal estimates of the perceived distance between pairs of touches oriented either across body width or along body length on the belly and the dorsum of the left hand. Consistent with previous results, a large anisotropy was apparent on the hand, with across stimuli perceived as larger than along stimuli. In contrast, no such bias was apparent on the belly. These results provide further evidence that anisotropies of perceived tactile distance vary systematically across the body and suggest that there is no anisotropy at all on the belly in healthy women.
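The anisotropy such studies report reduces to a simple ratio: mean perceived distance for across-oriented pairs divided by mean perceived distance for along-oriented pairs, with a value near 1 indicating no bias. A toy computation, with invented numbers rather than the study's data:

```python
# Toy anisotropy index for perceived tactile distance; values are invented.
import numpy as np

across = np.array([6.2, 5.8, 6.5, 6.1])   # cm, judged across body width
along = np.array([4.9, 5.1, 4.7, 5.0])    # cm, judged along body length

anisotropy = across.mean() / along.mean()
print(f"anisotropy index: {anisotropy:.2f}")   # > 1 → across overestimated
```

On the hand dorsum this index is reliably above 1; the study's finding is that on the belly it sits near 1, i.e. no anisotropy.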


2021 ◽  
pp. 147807712110251
Author(s):  
Isla Xi Han ◽  
Forrest Meggers ◽  
Stefana Parascho

Advancements in multi-agent, autonomous, and intelligent robotic systems over the past decades point toward new design and fabrication possibilities. Exploring how humans and robots can create and construct collectively is essential in leveraging robotic technology in the building sector. However, only by making existing knowledge from relevant technological disciplines accessible to designers can we fully exploit current construction methods and further develop them to address the challenges in architecture. To do this, we present a review paper that bridges the gap between Collective Robotic Construction (CRC) and Human–Robot Interaction (HRI) and defines a new research domain in Collective Human–Robot Construction (CHRC) in the architectural design and fabrication context.


Author(s):  
Abdelouahab Zaatri ◽  
Hamama Aboud

In this paper, we discuss some image processing methods that can be used for motion recognition of human body parts, such as hands or arms, in order to interact with robots. This interaction is usually associated with gesture-based control. The considered image processing methods have been tested for feature recognition in applications involving human-robot interaction. They are: the Sequential Similarity Detection Algorithm (SSDA), an appearance-based approach that uses image databases to model objects, and the Kanade-Lucas-Tomasi (KLT) algorithm, which is usually used for feature tracking. We illustrate gesture-based interaction using the KLT algorithm. We discuss the adaptation of each of these methods to the context of gesture-based robot interaction and some of their related issues.
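As an illustration of the KLT-style tracking the paper draws on, the sketch below seeds Shi-Tomasi corner features and tracks them frame to frame with OpenCV's pyramidal Lucas-Kanade implementation, reading the mean feature displacement as a crude gesture cue. The webcam source, parameter values, and the gesture heuristic are assumptions, not the authors' pipeline.

```python
# KLT feature tracking with OpenCV; mean optical flow of the tracked
# features serves as a rough gesture signal (e.g. a hand swipe).
import cv2

cap = cv2.VideoCapture(0)            # default webcam (assumed source)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners on the first frame seed the KLT tracker
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.3, minDistance=7)

for _ in range(100):                 # track over a short burst of frames
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: estimate where each seeded feature moved
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    mask = status.flatten() == 1
    good_new, good_old = new_pts[mask], pts[mask]
    if len(good_new) == 0:
        break
    # Mean feature displacement is a crude gesture cue
    flow = (good_new - good_old).reshape(-1, 2).mean(axis=0)
    print("mean flow (dx, dy):", flow)
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
```

Thresholding such a mean-flow vector on direction and magnitude is one simple way to turn tracked motion into discrete robot commands.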


Author(s):  
Kate Darling

People have a tendency to project lifelike qualities onto robots. As we increasingly create spaces where robotic technology interacts with humans, this inclination raises ethical questions about use and policy. An experiment conducted in our lab on human–robot interaction indicates that framing robots through anthropomorphic language (like a personified name or story) can impact how people perceive and treat a robot. This chapter explores the effects of encouraging or discouraging people to anthropomorphize robots through framing. I discuss concerns about anthropomorphizing robotic technology in certain contexts, but I argue that there are also cases where encouraging anthropomorphism is desirable. Because people respond to framing, framing could help to separate these cases.

