Human machine interaction via brain activity monitoring

Author(s):  
D. Wijayasekara ◽  
M. Manic


2020 ◽
Vol 7 ◽  
Author(s):  
Matteo Spezialetti ◽  
Giuseppe Placidi ◽  
Silvia Rossi

A fascinating challenge in the field of human–robot interaction is the possibility of endowing robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the robot's capability to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies, and to offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.
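One classification strategy commonly applied across such modalities is late fusion: each modality gets its own classifier, and their class probabilities are averaged. The sketch below illustrates this with invented per-modality scores and an invented four-class emotion set; it is a minimal illustration, not any specific system from the reviewed literature.

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical per-modality classifier scores for one sample, over four
# emotion classes: [neutral, happy, sad, angry] (values are illustrative).
face_logits  = np.array([0.2, 2.1, -0.5, 0.1])
voice_logits = np.array([0.0, 1.3,  0.4, -0.2])
eeg_logits   = np.array([0.5, 0.9,  0.1,  0.3])

# Late fusion: average the per-modality probability distributions.
probs = np.mean(
    [softmax(l) for l in (face_logits, voice_logits, eeg_logits)], axis=0
)
predicted = int(np.argmax(probs))  # index of the fused emotion prediction
```

Averaging probabilities rather than raw logits keeps each modality's vote on a common scale, so a single over-confident classifier cannot dominate the decision.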


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Niels-Ole Rohweder ◽  
Jan Gertheiss ◽  
Christian Rembe

Abstract Recent research indicates that a direct correlation exists between brain activity and oscillations of the pupil. A publication by Park and Whang shows measurements of excitations in the frequency range below 1 Hz. A similar correlation for frequencies between 1 Hz and 40 Hz has not yet been clarified. In order to evaluate small oscillations, a pupillometer with a spatial resolution of 1 µm is required, exceeding the specifications of existing systems. In this paper, we present a setup able to measure with such a resolution. We consider noise sources and identify the quantisation noise due to finite pixel sizes as the fundamental noise source. We present a model to describe the quantisation noise, and show that our algorithm to measure the pupil diameter achieves a sub-pixel resolution of about half a pixel of the image, or 12 µm. We further consider the processing gains from transforming the diameter time series into frequency space, and subsequently show that we can achieve a sub-micron resolution when measuring pupil oscillations, surpassing established pupillometry systems. This setup could allow for the development of a functional optical, fully remote electroencephalograph (EEG). Such a device could be a valuable sensor in many areas of AI-based human-machine interaction.
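The processing gain mentioned above can be demonstrated with a toy simulation: a sub-micron pupil oscillation, buried under sensor noise and a 12 µm quantisation step (matching the sub-pixel diameter resolution reported), is recovered from a single coherent DFT bin. The 0.5 µm amplitude, noise level, and sample count are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 65536                # samples in the diameter time series
q = 12.0                 # quantisation step in micrometres (half a pixel)
A = 0.5                  # true sub-micron oscillation amplitude (assumed)
k = 1000                 # oscillation placed exactly on DFT bin k

n = np.arange(N)
true_d = 4000.0 + A * np.sin(2 * np.pi * k * n / N)  # ~4 mm pupil diameter
noisy = true_d + rng.normal(0.0, q, N)               # sensor noise acts as dither
quantised = np.round(noisy / q) * q                  # finite pixel size

# Per-sample quantisation noise is about q / sqrt(12) ~ 3.5 um, but the
# coherent DFT bin averages white noise down by roughly sqrt(N), so the
# 0.5 um amplitude becomes measurable in frequency space.
spectrum = np.fft.rfft(quantised - quantised.mean())
amp_est = 2.0 * np.abs(spectrum[k]) / N
```

Note that the added noise is essential here: without dither, an oscillation far smaller than the quantisation step would vanish entirely after rounding.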


2021 ◽  
Author(s):  
Nicolas Nieto ◽  
Victoria Peterson ◽  
Hugo Leonardo Rufiner ◽  
Juan Kamienkowski ◽  
Ruben Spies

Surface electroencephalography is a standard and noninvasive way to measure electrical brain activity. Recent advances in artificial intelligence have led to significant improvements in the automatic detection of brain patterns, allowing increasingly fast, reliable, and accessible Brain-Computer Interfaces. Different paradigms have been used to enable human-machine interaction, and the last few years have brought a marked increase in interest in interpreting and characterizing the "inner voice" phenomenon. This paradigm, called inner speech, raises the possibility of executing an order just by thinking about it, allowing a "natural" way of controlling external devices. Unfortunately, the lack of publicly available electroencephalography datasets restricts the development of new techniques for inner speech recognition. A ten-subject dataset acquired under this and two other related paradigms, obtained with a 136-channel acquisition system, is presented. The main purpose of this work is to provide the scientific community with an open-access multiclass electroencephalography database of inner speech commands that could be used for a better understanding of the related brain mechanisms.
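A typical use of such a multiclass EEG dataset is to extract per-channel features from each epoch and train a classifier on the command labels. The sketch below does this on synthetic data with a deliberately simple generative model (the class modulates power on one channel) and a nearest-class-mean classifier; the channel count, epoch length, and class structure are invented stand-ins, not properties of the published dataset.

```python
import numpy as np

rng = np.random.default_rng(42)
n_channels, n_samples = 8, 256   # toy stand-in for the 136-channel recordings
n_classes = 4                    # e.g. four directional inner-speech commands

def make_epoch(c):
    """Synthetic epoch: class c boosts signal power on channel c (toy model)."""
    x = rng.normal(0.0, 1.0, (n_channels, n_samples))
    x[c] *= 3.0
    return x

def features(epoch):
    """Log-variance per channel, a common band-power-style EEG feature."""
    return np.log(epoch.var(axis=1))

# Train a nearest-class-mean classifier on 20 epochs per class.
centroids = np.array([
    np.mean([features(make_epoch(c)) for _ in range(20)], axis=0)
    for c in range(n_classes)
])

# Evaluate on 10 fresh epochs per class.
correct = 0
for c in range(n_classes):
    for _ in range(10):
        f = features(make_epoch(c))
        pred = int(np.argmin(np.linalg.norm(centroids - f, axis=1)))
        correct += int(pred == c)
accuracy = correct / (n_classes * 10)
```

Real inner-speech decoding is far harder than this separable toy problem, but the pipeline shape (epochs → features → multiclass classifier) is the same.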


2021 ◽  
pp. 1-9
Author(s):  
Harshadkumar B. Prajapati ◽  
Ankit S. Vyas ◽  
Vipul K. Dabhi

Facial expression recognition (FER) has attracted much attention from researchers in the field of computer vision because of its usefulness in security, robotics, and HMI (Human-Machine Interaction) systems. We propose a CNN (Convolutional Neural Network) architecture to address FER. To show the effectiveness of the proposed model, we evaluate its performance on the JAFFE dataset. We derive a concise CNN architecture to address the issue of expression classification. The objective of the various experiments is to achieve convincing performance while reducing computational overhead. The proposed CNN model is very compact compared to other state-of-the-art models. We achieved a highest accuracy of 97.10% and an average accuracy of 90.43% over the top 10 best runs without applying any pre-processing methods, which demonstrates the effectiveness of our model. Furthermore, we also include visualizations of the CNN layers to observe what the CNN learns.
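Compactness in a CNN comes down to its parameter count. The sketch below tallies parameters for a hypothetical small stack (48×48 grayscale input, two conv/pool stages, two dense layers, seven expression classes); the layer sizes are assumptions for illustration and not the authors' actual architecture.

```python
def conv_params(c_in, c_out, k):
    """Weights plus biases of a k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def dense_params(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

# Hypothetical compact stack for 48x48 grayscale faces, 7 expression classes;
# each 2x2 max-pool halves the spatial size (48 -> 24 -> 12), 'same' padding.
total = (
    conv_params(1, 32, 3)              # 320
    + conv_params(32, 64, 3)           # 18,496
    + dense_params(12 * 12 * 64, 128)  # 1,179,776
    + dense_params(128, 7)             # 903
)
print(total)  # roughly 1.2 M parameters
```

Note that almost all of the budget sits in the first dense layer, which is why compact FER models typically shrink the feature map aggressively (or use global pooling) before flattening.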


Author(s):  
Xiaochen Zhang ◽  
Lanxin Hui ◽  
Linchao Wei ◽  
Fuchuan Song ◽  
Fei Hu

Electric power wheelchairs (EPWs) enhance the mobility of the elderly and the disabled, while the human-machine interaction (HMI) determines how precisely human intention is delivered and how efficiently human-machine cooperation is conducted. A bibliometric quantitative analysis of 1154 publications related to this research field, published between 1998 and 2020, was conducted. We identified the development status, contributors, hot topics, and potential future research directions of this field. We believe that the combination of intelligence and humanization in EPW HMI systems, based on human-machine collaboration, is an emerging trend in EPW HMI methodology research. Particular attention should be paid to evaluating the applicability and benefits of an EPW HMI methodology for its users, as well as how much it contributes to society. This study offers researchers a comprehensive understanding of EPW HMI studies over the past 22 years, along with the latest trends drawn from their evolutionary footprints and forward-looking insights regarding future research.
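The core of such a bibliometric analysis is counting records along fields like year and keyword. A minimal sketch, using an invented miniature record set (real analyses pull these fields from bibliographic databases, and the years and keywords below are illustrative only):

```python
from collections import Counter

# Hypothetical miniature record set standing in for the 1154 publications.
records = [
    {"year": 1999, "keywords": ["joystick", "EPW"]},
    {"year": 2012, "keywords": ["EEG", "EPW", "HMI"]},
    {"year": 2018, "keywords": ["voice control", "HMI"]},
    {"year": 2019, "keywords": ["EEG", "shared control", "HMI"]},
    {"year": 2020, "keywords": ["shared control", "EPW"]},
]

# Publication volume per year and keyword frequency (hot-topic signal).
per_year = Counter(r["year"] for r in records)
per_keyword = Counter(k for r in records for k in r["keywords"])

# A crude trend signal: output in the last decade vs. before.
recent = sum(c for y, c in per_year.items() if y >= 2011)
early = sum(c for y, c in per_year.items() if y < 2011)
```

On real data these counters would feed the growth curves, contributor rankings, and keyword co-occurrence maps typical of bibliometric studies.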


ATZ worldwide ◽  
2021 ◽  
Vol 123 (3) ◽  
pp. 46-49
Author(s):  
Tobias Hesse ◽  
Michael Oehl ◽  
Uwe Drewitz ◽  
Meike Jipp

Healthcare ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 834
Author(s):  
Magbool Alelyani ◽  
Sultan Alamri ◽  
Mohammed S. Alqahtani ◽  
Alamin Musa ◽  
Hajar Almater ◽  
...  

Artificial intelligence (AI) is a broad, umbrella term that encompasses the theory and development of computer systems able to perform tasks normally requiring human intelligence. The aim of this study is to assess the attitude of the radiology community in Saudi Arabia toward applications of AI. Methods: Data for this study were collected using electronic questionnaires in 2019 and 2020. The study included a total of 714 participants. Data analysis was performed using SPSS Statistics (version 25). Results: The majority of the participants (61.2%) had read or heard about the role of AI in radiology. We also found that radiologists' responses differed statistically from those of all other specialists, and that radiologists tended to read more about AI. In addition, 82% of the participants thought that AI should be included in the curriculum of medical and allied health colleges, and 86% of the participants agreed that AI would be essential in the future. Even though human–machine interaction was considered to be one of the most important skills of the future, 89% of the participants thought that AI would never replace radiologists. Conclusion: Because AI plays a vital role in radiology, it is important to ensure that radiologists and radiographers have at least a minimal understanding of the technology. Our findings show an acceptable level of knowledge regarding AI technology and support including AI applications in the curricula of medical and health sciences colleges.
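A statistical difference between groups in a survey like this is typically tested with a chi-square test on a contingency table. The sketch below runs one on an invented 2×2 table; only the overall 61.2% "read or heard about AI" figure and n = 714 come from the abstract, while the radiologist/other-specialist split is a made-up example.

```python
import numpy as np

# Hypothetical 2x2 contingency table (counts invented, consistent with
# 437 of 714 participants, i.e. 61.2%, having read about AI):
# rows: radiologists, other specialists; columns: read about AI yes / no
observed = np.array([[60, 20],
                     [377, 257]])

# Pearson chi-square statistic: compare observed counts to the counts
# expected under independence of group and response.
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()
chi2 = ((observed - expected) ** 2 / expected).sum()

# For a 2x2 table (1 degree of freedom) the 5% critical value is 3.841.
significant = chi2 > 3.841
```

With these illustrative counts the statistic lands around 7.2, above the 5% critical value, which is the shape of evidence behind a claim like "radiologists' responses differed statistically from other specialists".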

