Robot-assisted feeding: A technical application that combines learning from demonstration and visual interaction

2020 ◽  
pp. 1-6
Author(s):  
Fei Liu ◽  
Peng Xu ◽  
Hongliu Yu

BACKGROUND: Traditional meal-assistance robots use human-computer interaction channels such as buttons, voice, and EEG. However, most depend on substantial programming expertise for development, while also exhibiting inconvenient interaction or unsatisfactory recognition rates. OBJECTIVE: To develop a convenient human-computer interaction mode with a high recognition rate, which allows users without programming skills to make the robot adapt well to new environments. METHODS: A visual interaction method based on deep learning was used to develop the feeding robot: when the camera detects that the user’s mouth has been open for 2 seconds, the feeding command is issued, and feeding is paused when the eyes are closed for 2 seconds. Learning from demonstration, a programming method that is simple and adapts well to different environments, was employed to generate the feeding trajectory. RESULTS: The user is able to eat independently through convenient visual interaction, and the caregiver only needs to drag-teach the robotic arm once when facing a new eating environment.
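The visual trigger described in the abstract (mouth held open for 2 seconds starts feeding; eyes held closed for 2 seconds pauses it) amounts to a small timed state machine. The sketch below is an illustrative reconstruction of that logic, not the paper's actual implementation; the boolean inputs stand in for the outputs of the deep-learning face detector:

```python
class FeedingTrigger:
    """Timed state machine for the visual feeding interface: holding a
    facial state for HOLD_SECONDS fires the corresponding command."""

    HOLD_SECONDS = 2.0

    def __init__(self):
        self.feeding = False
        self._mouth_open_since = None   # timestamp when the mouth opened
        self._eyes_closed_since = None  # timestamp when the eyes closed

    def update(self, mouth_open: bool, eyes_closed: bool, now: float) -> bool:
        # Mouth held open for 2 s switches feeding on.
        if mouth_open:
            if self._mouth_open_since is None:
                self._mouth_open_since = now
            if now - self._mouth_open_since >= self.HOLD_SECONDS:
                self.feeding = True
        else:
            self._mouth_open_since = None

        # Eyes held closed for 2 s pauses feeding.
        if eyes_closed:
            if self._eyes_closed_since is None:
                self._eyes_closed_since = now
            if now - self._eyes_closed_since >= self.HOLD_SECONDS:
                self.feeding = False
        else:
            self._eyes_closed_since = None

        return self.feeding
```

Tracking the onset timestamp per state (rather than counting frames) keeps the trigger robust to variable camera frame rates.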


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
FenTian Peng ◽  
Hongkai Zhang

Human-computer interaction technology simplifies complicated procedures; this study applies it to the problems of inadequate description and low recognition rates of dance actions, proposing an action recognition method for dance video images based on human-computer interaction. The method builds the recognition process on human-computer interaction technology, constructs a human skeleton model from the spatial positions, motion characteristics, and change angles of the skeleton, describes dance posture features by generating a skeleton node graph, and extracts key frames of the dance video using a clustering algorithm to recognize the dance action. Experimental results show that the recognition rate of this method under different entropy values is no less than 88%. Under test conditions of complex background, dark lighting, bright lighting, and multiuser interference, the model describes the dance posture accurately, with average recognition rates of 93.43%, 91.27%, 97.15%, and 89.99%, respectively. The method is suitable for action recognition in most dance video images.
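The key-frame step described above — cluster the per-frame skeleton features, then keep one representative frame per cluster — can be sketched as follows. This is a generic k-means reconstruction under assumed inputs (a feature vector per frame, a chosen cluster count k), not the paper's exact pipeline:

```python
import numpy as np

def extract_key_frames(features: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Cluster per-frame skeleton feature vectors with k-means, then return
    the index of the frame closest to each cluster centre as a key frame."""
    rng = np.random.default_rng(seed)
    # Initialise centres from k distinct frames.
    centres = features[rng.choice(len(features), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every frame to its nearest cluster centre.
        dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned frames.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = features[labels == j].mean(axis=0)
    # The frame nearest each final centre serves as that cluster's key frame.
    dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
    return sorted(int(dists[:, j].argmin()) for j in range(k))
```

Choosing the frame nearest each centre (rather than the centre itself) guarantees every key frame is a real frame from the video.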


2020 ◽  
Vol 7 (2) ◽  
pp. 13-21
Author(s):  
Lyu Jianan ◽  
Ashardi Abas

The wide application of information technology and network technology in automobiles has greatly changed automotive human-computer interaction. This paper studies the influence of human-computer interaction modes on driving safety, comfort, and efficiency, covering physical controls, touch-screen interaction, augmented reality, speech interaction, and somatosensory interaction. Future modes, such as multi-channel human-computer interaction and interaction based on biometrics and perception technology, are also discussed. Finally, a method for automotive human-computer interaction design based on existing technology is proposed, which offers guidance for current automotive human-computer interaction interface design.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Changjun Zhao

With the continuous development of virtual reality technology in recent years, its applications are no longer limited to the military, medical, or film-production fields; it has gradually entered public view and the lives of ordinary people. The human-computer interaction method and the presentation quality of the virtual scene are the two most important aspects of a virtual reality experience, so providing a good interaction method for virtual reality applications and improving the final presentation of the scene have become important research directions. Taking a virtual fitness-club experience system as the application background, this paper analyzes the functional and performance requirements of a virtual reality experience system, and proposes using Kinect as the video-acquisition device, extracting the user’s somatosensory actions from depth information to achieve somatosensory control. The paper adopts a natural human-computer interaction solution, uses the Unity 3D game engine to build the virtual reality scene, defines custom shaders to improve scene rendering, and uses the Oculus Rift DK2 to deliver an immersive 3D scene demonstration. This approach not only gives users an unprecedented sense of immersion but also helps create scenes and experiences that would be impossible in reality. The virtual fitness-club experience system reduces resource consumption by nearly 70%.
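A minimal sketch of the depth-based somatosensory idea described above: a gesture such as a "push" toward the screen can be recognised by comparing depth-derived joint coordinates. The joint pair, units (metres), and threshold below are illustrative assumptions, not the paper's actual Kinect pipeline:

```python
def is_push_gesture(hand_z: float, shoulder_z: float, threshold: float = 0.4) -> bool:
    """Report a push when the hand is at least `threshold` metres closer to
    the depth sensor than the shoulder (smaller z means closer to the sensor)."""
    return (shoulder_z - hand_z) >= threshold
```

Thresholding relative joint depth, rather than absolute depth, keeps the gesture invariant to where the user stands in front of the sensor.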


2017 ◽  
pp. 67-96
Author(s):  
Chamin Morikawa ◽  
Michael J. Lyons

Interaction methods based on computer vision hold the potential to become the next powerful technology supporting breakthroughs in human-computer interaction. Non-invasive vision-based techniques permit unconventional interaction methods, including the use of face and head movements for intentional gestural control of computer systems. Facial gesture interfaces open new possibilities for assistive input technologies. This chapter gives an overview of research aimed at developing vision-based head- and face-tracking interfaces, work with important implications for future assistive input devices. To illustrate this concretely, the authors describe their own research developing two vision-based facial-feature-tracking algorithms for human-computer interaction and assistive input. Evaluation is a critical component of this research, and the authors provide examples of new quantitative evaluation tasks as well as the use of model real-world applications for the qualitative evaluation of new interaction styles.


2015 ◽  
Vol 30 (3) ◽  
pp. 258-265 ◽  
Author(s):  
Hossein Mousavi Hondori ◽  
Maryam Khademi ◽  
Lucy Dodakian ◽  
Alison McKenzie ◽  
Cristina V. Lopes ◽  
...  
