Opportunistic Detection Methods for Emotion-Aware Smartphone Applications

2016 ◽  
pp. 670-704
Author(s):  
Igor Bisio ◽  
Alessandro Delfino ◽  
Fabio Lavagetto ◽  
Mario Marchese

Human-machine interaction is performed through tools such as the keyboard, the touch screen, or speech-to-text applications. For example, a speech-to-text application is software that allows the device to translate spoken words into text. These tools convey explicit messages but ignore implicit ones, such as the emotional status of the speaker, filtering out a portion of the information available in the interaction process. This chapter focuses on emotion detection. An emotion-aware device can interact more personally with its owner and react appropriately to the user's mood, making the user-machine interaction less stressful. The chapter gives guidelines for building emotion-aware smartphone applications in an opportunistic way (i.e., without the user's collaboration). In general, smartphone applications might be employed in different contexts; therefore, the emotions to be detected might differ.
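
A minimal sketch of what such opportunistic detection could look like in practice (an illustration, not code from the chapter): short speech frames picked up during ordinary phone use are summarised by prosodic and spectral statistics and passed to a classifier trained offline on labelled emotional speech. The librosa feature calls are real; the classifier `clf` and the frame source are assumptions.

```python
# Illustrative sketch only: opportunistic speech-emotion features for a single
# audio frame captured in the background (no explicit user collaboration).
import numpy as np
import librosa

def acoustic_features(frame, sr=16000):
    """Summarise one speech frame with spectral and prosodic statistics."""
    mfcc = librosa.feature.mfcc(y=frame, sr=sr, n_mfcc=13)   # spectral envelope
    rms = librosa.feature.rms(y=frame)                       # energy (arousal cue)
    zcr = librosa.feature.zero_crossing_rate(y=frame)        # voicing/noisiness cue
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [rms.mean()], [zcr.mean()]])

sr = 16000
frame = np.random.randn(2 * sr).astype(np.float32)  # stand-in for 2 s of microphone audio
x = acoustic_features(frame, sr).reshape(1, -1)
# `clf` is assumed to be a classifier trained offline on labelled emotional speech,
# e.g. clf = sklearn.svm.SVC().fit(X_train, y_train); then:
# predicted_emotion = clf.predict(x)[0]
```

Which emotion classes the classifier is trained on would depend on the application context, as the abstract notes.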



2011 ◽  
Vol 230-232 ◽  
pp. 136-139
Author(s):  
Ou Xie ◽  
Hua Li ◽  
Zhen Yin

A touch-screen-based human-machine interaction system for an embedded precision CNC internal grinder is proposed. The system uses a master-slave two-stage control mode. By developing the interface software, the system achieves integrated interactive control of the internal grinding process. The performance of the human-machine interaction is improved, and the processing efficiency and communication capabilities are increased.


2018 ◽  
Vol 2018 ◽  
pp. 1-16
Author(s):  
Alexandre Alapetite ◽  
Emilie Møllenbach ◽  
Anders Stockmarr ◽  
Katsumi Minakata

We contribute to a project introducing the use of a large single touch-screen as a concept for future airplane cockpits. Human-machine interaction in this new type of cockpit must be optimised to cope with the different types of normal use as well as during moments of turbulence (which can occur during flights with varying degrees of severity). We propose an original experimental setup for reproducing turbulence (not limited to aviation) based on a touch-screen mounted on a rollercoaster. Participants had to repeatedly perform three basic touch interactions: a single click, a one-finger drag-and-drop, and a zoom operation involving a two-finger pinching gesture. The completion times of the different tasks as well as the number of unnecessary interactions with the screen constitute the collected user data. We also propose a data analysis and statistical method to combine user performance with observed turbulence, including acceleration and jerk along the different axes. We then report some of the implications of severe turbulence on touch interaction and make recommendations as to how this can be accommodated in future design solutions.
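
As a rough illustration of combining touch performance with observed turbulence (an assumption about the kind of computation involved, not the authors' analysis code), jerk can be obtained as the finite-difference time derivative of the logged acceleration, and per-axis summaries can then be paired with task completion times:

```python
# Illustrative sketch: per-axis acceleration and jerk summaries for one task window.
import numpy as np

def turbulence_summary(acc, dt):
    """acc: (n_samples, 3) accelerometer readings in m/s^2; dt: sample period in s."""
    jerk = np.diff(acc, axis=0) / dt  # jerk = time derivative of acceleration (m/s^3)
    return {
        "acc_rms":  np.sqrt((acc ** 2).mean(axis=0)),   # RMS acceleration per axis
        "jerk_rms": np.sqrt((jerk ** 2).mean(axis=0)),  # RMS jerk per axis
        "acc_peak": np.abs(acc).max(axis=0),            # peak acceleration per axis
    }

# Synthetic example: 100 Hz samples over a 3-second drag-and-drop attempt.
acc = np.random.normal(scale=2.0, size=(300, 3))
print(turbulence_summary(acc, dt=0.01))
```

Such per-task summaries could then be related statistically to completion times and the number of unnecessary screen interactions.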


2021 ◽  
Author(s):  
Chen Fang

Artificial intelligence (AI)-based solutions are slowly making their way into our daily lives, integrating with our processes to enhance our lifestyles. This is a major technological component in the development of autonomous vehicles (AVs). However, as of today, no existing, consumer-ready AV design has reached SAE Level 5 automation or fully integrates with the driver. Unsettled Issues in Vehicle Autonomy, AI and Human-Machine Interaction discusses vital issues related to AV interface design, diving into speech interaction, emotion detection and regulation, and driver trust. For each of these aspects, the report presents the current state of research and development, the challenges, and solutions worth exploring.


2021 ◽  
Vol 06 (03) ◽  
Author(s):  
Shital S. Yadav

Automatic emotion detection is a key task in human-machine interaction, where emotion detection makes the system more natural. In this paper, we propose emotion detection using a deep learning algorithm. The proposed algorithm uses an end-to-end CNN. To increase the computational efficiency of the deep network, we use the trained weight parameters of MobileNet to initialize the weight parameters of our system. To make our system independent of the input image size, we place a global average pooling layer on top of its last convolution layer. The proposed system is validated for emotion detection using two benchmark datasets, viz. Cohn–Kanade+ (CK+) and Japanese Female Facial Expression (JAFFE). The experimental results show that the proposed method outperforms other existing methods for emotion detection.
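
A minimal sketch of the kind of architecture described, assuming Keras/TensorFlow and ImageNet-pretrained MobileNet weights (not the authors' code; the seven-class output and other details are assumptions):

```python
# Illustrative sketch: MobileNet backbone initialised from pre-trained weights,
# with global average pooling on top of the last convolution stage so the dense
# head does not depend on the input image size.
import tensorflow as tf

NUM_EMOTIONS = 7  # assumed number of expression classes (e.g. CK+/JAFFE setups)

backbone = tf.keras.applications.MobileNet(
    include_top=False,   # drop the original ImageNet classifier
    weights="imagenet",  # reuse trained weight parameters for initialisation
)

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),  # collapses any feature-map size
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```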


2021 ◽  
pp. 1-9
Author(s):  
Harshadkumar B. Prajapati ◽  
Ankit S. Vyas ◽  
Vipul K. Dabhi

Facial expression recognition (FER) has attracted much attention from researchers in the field of computer vision because of its usefulness in security, robotics, and HMI (Human-Machine Interaction) systems. We propose a CNN (Convolutional Neural Network) architecture to address FER. To show the effectiveness of the proposed model, we evaluate its performance on the JAFFE dataset. We derive a concise CNN architecture to address the issue of expression classification. The objective of the various experiments is to achieve convincing performance while reducing computational overhead. The proposed CNN model is very compact compared to other state-of-the-art models. We achieved a highest accuracy of 97.10% and an average accuracy of 90.43% over the top 10 best runs without applying any pre-processing methods, which justifies the effectiveness of our model. Furthermore, we also include a visualization of the CNN layers to observe what the network learns.
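
For comparison with the transfer-learning approach above, a compact from-scratch CNN in this spirit might look like the following sketch (an assumption about a concise architecture, not the paper's exact model; the 48x48 grayscale input and layer widths are illustrative):

```python
# Illustrative sketch: a small CNN for expression classification on face crops.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 7  # JAFFE labels: six basic emotions plus neutral

model = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),                    # assumed grayscale crops
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),                    # keeps the parameter count low
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```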

