Facial and Body Feature Extraction for Emotionally-Rich HCI

Author(s):  
Kostas Karpouzis ◽  
Athanasios Drosopoulos ◽  
Spiros Ioannou ◽  
Amaryllis Raouzaiou ◽  
Nicolas Tsapatsoulis ◽  
...  

Emotionally-aware Man-Machine Interaction (MMI) systems are presently at the forefront of interest of the computer vision and artificial intelligence communities, since they enable less technology-aware people to use computers more efficiently, overcoming fears and preconceptions. Most emotion-related facial and body gestures are considered to be universal, in the sense that they are recognized across different cultures; therefore, the introduction of an “emotional dictionary” that includes descriptions and perceived meanings of facial expressions and body gestures, so as to help infer the likely emotional state of a specific user, can enhance the affective nature of MMI applications (Picard, 2000).
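
As a minimal sketch of what such an “emotional dictionary” could look like as a data structure, the snippet below maps observed facial/gesture cue labels to perceived emotional states. All cue names and entries here are illustrative assumptions, not taken from the chapter:

```python
# Hypothetical "emotional dictionary": sets of observed facial/gesture
# cues are mapped to perceived emotional states. Entries are illustrative.
EMOTIONAL_DICTIONARY = {
    ("raised_brows", "open_mouth"): "surprise",
    ("brow_lowerer", "lip_tightener"): "anger",
    ("lip_corner_puller", "cheek_raiser"): "joy",
    ("hands_over_face",): "fear",
}

def infer_emotion(observed_cues):
    """Return the first entry whose cues are all present in the observation."""
    cues = set(observed_cues)
    for pattern, emotion in EMOTIONAL_DICTIONARY.items():
        if set(pattern) <= cues:
            return emotion
    return "neutral"

print(infer_emotion(["cheek_raiser", "lip_corner_puller"]))  # joy
```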


Author(s):  
Kamal Naina Soni

Abstract: Human expressions play an important role in determining an individual's emotional state. They help reveal a person's current state and mood, and the underlying emotion can be extracted from various features of the face, such as the eyes, cheeks, forehead, or even the curve of the smile. A survey confirmed that people use music as a form of expression and often relate to a particular piece of music according to their emotions. Considering how music affects the human brain and body, our project extracts the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help lift the mood or simply calm the individual, and it also finds a fitting song more quickly, saving the time spent looking up different songs, while providing software that can be used anywhere to play music according to the detected emotion.
Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
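
A minimal sketch of the mood-to-playlist step described above, assuming an upstream facial-expression classifier already returns an emotion label; the label set, playlist names, and the classify_frame stub are all hypothetical:

```python
# Hypothetical mapping from a detected emotion label to a playlist.
# Emotion labels and playlist contents are illustrative assumptions.
import random

PLAYLISTS = {
    "happy": ["upbeat_pop.mp3", "summer_hits.mp3"],
    "sad": ["soft_piano.mp3", "rainy_day.mp3"],
    "angry": ["calm_ambient.mp3"],
    "neutral": ["daily_mix.mp3"],
}

def classify_frame(frame) -> str:
    """Stub for the facial-expression classifier; a real system would
    run a trained model over the camera frame here."""
    return "happy"

def recommend(frame):
    emotion = classify_frame(frame)
    playlist = PLAYLISTS.get(emotion, PLAYLISTS["neutral"])
    return emotion, random.choice(playlist)

print(recommend(frame=None))  # e.g. ('happy', 'upbeat_pop.mp3')
```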


2020 ◽  
Vol 12 (2) ◽  
pp. 37-45
Author(s):  
João Marcos Garcia Fagundes ◽  
Allan Rodrigues Rebelo ◽  
Luciano Antonio Digiampietri ◽  
Helton Hideraldo Bíscaro

Bee preservation is important because approximately 70% of all pollination of food crops is performed by bees, a service worth more than $65 billion annually. Preservation efforts require identifying bee species, and since this is a costly and time-consuming process, techniques that automate and facilitate this identification become relevant. Images of bees' wings, in conjunction with computer vision and artificial intelligence techniques, can be used to automate this process. This paper presents an approach for segmenting images of bees' wings and extracting features from them. Our approach was evaluated using the modified Hausdorff distance and the F-measure. The results were at least 24% more precise than those of related approaches, and the proposed approach was able to deal with noisy images.
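
For reference, the modified Hausdorff distance used in the evaluation takes the larger of the two mean nearest-neighbour distances between two point sets. The NumPy sketch below is a generic implementation of that measure, not the authors' code:

```python
# Modified Hausdorff distance between two 2-D point sets A and B:
# MHD(A, B) = max(mean_a min_b ||a-b||, mean_b min_a ||a-b||).
import numpy as np

def modified_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    # Pairwise Euclidean distances, shape (len(A), len(B)).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Toy example: two wing contours sampled as small point sets.
A = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
B = np.array([[0.1, 0.0], [1.0, 0.1], [0.9, 1.0]])
print(modified_hausdorff(A, B))
```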


2018 ◽  
Vol 9 (3) ◽  
pp. 12-22
Author(s):  
Mayur Rahul ◽  
Pushpa Mamoria ◽  
Narendra Kohli ◽  
Rashi Agrawal

Partition-based feature extraction is widely used in pattern recognition and computer vision. This method is robust to changes such as occlusion and background variation. In this article, a partition-based technique is used for feature extraction, and an extension of the HMM is used as a classifier. The newly introduced multi-stage HMM consists of two layers: the bottom layer represents the atomic expressions made by the eyes, nose and lips, while the upper layer represents combinations of these atomic expressions, such as smile and fear. Six basic facial expressions are recognized, i.e. anger, disgust, fear, joy, sadness and surprise. Experimental results show that the proposed system performs better than a standard HMM and achieves an overall accuracy of 85% on the JAFFE database.
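
As a rough illustration of HMM-based expression classification (a plain single-stage baseline, not the paper's two-layer model), one Gaussian HMM can be trained per expression and a feature sequence assigned to the class with the highest log-likelihood. The feature dimensions and sequence counts below are assumptions:

```python
# Baseline HMM expression classifier: one GaussianHMM per expression,
# classification by maximum log-likelihood. This sketches the general
# technique only; the paper's multi-stage HMM stacks a second layer on top.
import numpy as np
from hmmlearn.hmm import GaussianHMM

EXPRESSIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
rng = np.random.default_rng(0)

models = {}
for i, label in enumerate(EXPRESSIONS):
    # Stand-in training data: 20 sequences of 10 frames of 8-D features
    # per class (real features would come from the eye/nose/lip partitions).
    X = rng.normal(loc=i, size=(20 * 10, 8))
    lengths = [10] * 20
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models[label] = m

def classify(sequence: np.ndarray) -> str:
    return max(models, key=lambda lbl: models[lbl].score(sequence))

print(classify(rng.normal(loc=3, size=(10, 8))))  # likely "joy"
```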


2011 ◽  
pp. 175-200 ◽  
Author(s):  
Kostas Karpouzis ◽  
Amaryllis Raouzaiou ◽  
Athanasios Drosopoulos ◽  
Spiros Ioannou ◽  
Themis Balomenos ◽  
...  

This chapter presents a holistic approach to emotion modeling and analysis and their applications in Man-Machine Interaction. Beginning from a symbolic representation of human emotions in this context, based on their expression via facial expressions and hand gestures, we show that it is possible to transform quantitative feature information from video sequences into an estimation of a user's emotional state. While these features can be used for simple representation purposes, in our approach they are utilized to provide feedback on the user's emotional state, with the aim of providing next-generation interfaces that are able to recognize the emotional states of their users.


Author(s):  
E. Nocerino ◽  
F. Menna ◽  
R. Hänsch

Abstract. In recent years, vision-based systems have flourished at an unprecedented pace, fuelled by developments in hardware components (higher-resolution and higher-sensitivity imaging sensors, and smaller and smarter microcontrollers, to name a few) as well as in software and processing techniques, with AI (Artificial Intelligence) leading to a landmark revolution. Several disciplines have fostered and benefited from these advances, but, unfortunately, not always in a coordinated and cooperative way. When it comes to image-based sensing techniques, photogrammetry, computer vision and robotic vision have many contact points and overlapping areas. Yet, as for people of different cultures and languages, communicating across the three different communities can be very hard and disorienting, especially for beginners and non-specialists. Driven by a strong educational and inclusive ambition, the LightCam project is funded by the ISPRS Education and Capacity Building Initiatives 2020 (ECB). The project's ambition is to act as an interpreter and ease the dialog among the three actors, i.e. photogrammetry, computer vision and robotics. Two intermediation tools will be developed to serve this aim: (i) a dictionary of concepts, terminology and algorithms, in the form of a knowledge-base website, and (ii) a code repository, where pieces of code for the conversion between different formulations implemented in available software solutions will be shared.


2020 ◽  
Vol 10 (8) ◽  
pp. 2956 ◽  
Author(s):  
Chang-Min Kim ◽  
Ellen J. Hong ◽  
Kyungyong Chung ◽  
Roy C. Park

As people communicate with each other, they use gestures and facial expressions as a means to convey and understand emotional state. Non-verbal means of communication are essential to understanding, providing external clues to a person's emotional state. Recently, active studies have been conducted on lifecare services that analyze users' facial expressions. Yet, rather than being a service for everyday life, such analysis is currently provided only in health care centers or certain medical institutions. Studies are needed to prevent accidents that occur suddenly in everyday life and to cope with emergencies. Thus, we propose facial expression analysis using line-segment feature analysis-convolutional recurrent neural network (LFA-CRNN) feature extraction for health-risk assessments of drivers. The purpose of such an analysis is to manage and monitor patients with chronic diseases, whose number is rapidly increasing. To prevent automobile accidents and to respond to emergency situations caused by acute diseases, we propose a service that monitors a driver's facial expressions to assess health risks and alert the driver to risk-related matters while driving. To identify health risks, deep learning technology is used to recognize expressions of pain and to determine whether a person is in pain while driving. Since the amount of input-image data is large, analyzing facial expressions accurately is difficult for a process with limited resources while providing the service in real time. Accordingly, a line-segment feature analysis algorithm is proposed to reduce the amount of data, and the LFA-CRNN model was designed for this purpose. Through this model, the severity of a driver's pain is classified into one of nine levels. The LFA-CRNN model consists of one convolution layer whose output is reshaped and delivered into two bidirectional gated recurrent unit layers. Finally, biometric data are classified through softmax. In addition, to evaluate the performance of the LFA-CRNN, it was compared with the CRNN and AlexNet models on the University of Northern British Columbia and McMaster University (UNBC-McMaster) database.
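
A minimal PyTorch sketch of the topology described in the abstract (one convolution layer, reshaped and fed to two bidirectional GRU layers, with a nine-class output trained via softmax). Layer widths and the 112x112 grayscale input are illustrative assumptions, not values from the paper:

```python
# Sketch of a CRNN in the spirit of LFA-CRNN: conv -> reshape ->
# two bidirectional GRU layers -> linear head over 9 pain-severity classes.
import torch
import torch.nn as nn

class CRNNSketch(nn.Module):
    def __init__(self, num_classes: int = 9):
        super().__init__()
        # Single convolution layer over the input feature map.
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        # Two stacked bidirectional GRU layers; the feature map is
        # reshaped so each row becomes one time step.
        self.gru = nn.GRU(input_size=32 * 56, hidden_size=128,
                          num_layers=2, batch_first=True,
                          bidirectional=True)
        self.fc = nn.Linear(2 * 128, num_classes)

    def forward(self, x):                     # x: (B, 1, 112, 112)
        f = torch.relu(self.conv(x))          # (B, 32, 112, 112)
        f = self.pool(f)                      # (B, 32, 56, 56)
        f = f.permute(0, 2, 1, 3)             # (B, 56, 32, 56)
        f = f.reshape(f.size(0), 56, -1)      # (B, 56, 32*56)
        out, _ = self.gru(f)                  # (B, 56, 256)
        return self.fc(out[:, -1])            # logits; softmax at training time

logits = CRNNSketch()(torch.randn(2, 1, 112, 112))
print(logits.shape)  # torch.Size([2, 9])
```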


Author(s):  
Federico D’Antoni ◽  
Fabrizio Russo ◽  
Luca Ambrosio ◽  
Luca Vollero ◽  
Gianluca Vadalà ◽  
...  

Chronic Low Back Pain (LBP) is a symptom that may be caused by several diseases, and it is currently the leading cause of disability worldwide. The increased amount of digital images in orthopaedics has led to the development of methods based on artificial intelligence, and on computer vision in particular, which aim to improve the diagnosis and treatment of LBP. In this manuscript, we systematically reviewed the available literature on the use of computer vision in the diagnosis and treatment of LBP. A systematic search of the PubMed electronic database was performed. The search strategy was set as combinations of the following keywords: “Artificial Intelligence”, “Feature Extraction”, “Segmentation”, “Computer Vision”, “Machine Learning”, “Deep Learning”, “Neural Network”, “Low Back Pain”, “Lumbar”. The search returned a total of 558 articles. After careful evaluation of the abstracts, 358 were excluded, and a further 124 papers were excluded after full-text examination, leaving 76 eligible articles. The main applications of computer vision in LBP include feature extraction and segmentation, which are usually followed by further tasks. Most recent methods use deep learning models rather than digital image processing techniques. The best-performing methods for segmentation of vertebrae, intervertebral discs, spinal canal and lumbar muscles achieve Sørensen–Dice scores greater than 90%, whereas studies focusing on localization and identification of structures collectively showed an accuracy greater than 80%. Future advances in artificial intelligence are expected to increase systems' autonomy and reliability, thus providing even more effective tools for the diagnosis and treatment of LBP.
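
For context, the Sørensen–Dice score used to compare segmentation masks is twice the intersection size over the sum of the two mask sizes. The NumPy sketch below is a generic implementation, not code from any of the reviewed papers:

```python
# Sørensen–Dice score between a predicted and a ground-truth binary mask:
# Dice = 2|P ∩ G| / (|P| + |G|); 1.0 means perfect overlap.
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

# Toy 4x4 vertebra masks differing by one pixel.
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1
pred = gt.copy(); pred[0, 0] = 1
print(round(dice_score(pred, gt), 3))  # 0.889
```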


Author(s):  
Dr. Suma V.

This paper is a review of computer vision as an aid to interaction between humans and machines. Computer vision, a subfield of artificial intelligence and machine learning, is capable of training a computer to visualize, interpret and respond to the visual world in a way similar to human vision. Nowadays, computer vision has found application in broad areas such as health care, safety, security and surveillance, owing to the progress, developments and latest innovations in artificial intelligence, deep learning and neural networks. The paper presents the enhanced capabilities of computer vision in various applications involving interaction between humans and machines through artificial intelligence, deep learning and neural networks.

