Web Application for Emotion-based Music Player

Author(s):  
Kamal Naina Soni

Abstract: Human expressions play an important role in determining an individual's emotional state. They help identify the current state and mood of an individual by extracting and interpreting emotion from facial features such as the eyes, cheeks, forehead, or even the curve of the smile. A survey confirmed that people use music as a form of expression and often relate a particular piece of music to their emotions. Considering how music affects the human brain and body, our project extracts the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help alleviate the mood or simply calm the individual, and it also delivers a suitable song more quickly, saving the time spent looking up different songs, while providing software that can be used anywhere to play music according to the detected emotion.
Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
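A minimal sketch of the emotion-to-playlist flow described above is given below. It assumes OpenCV for camera capture and face detection; the emotion classifier and the playlist file names are placeholders, since the abstract does not specify them.

```python
# Hedged sketch of the pipeline: grab a frame, find a face, map emotion -> playlist.
# predict_emotion() and the playlist entries are hypothetical placeholders.
import random
import cv2  # pip install opencv-python

# Hypothetical mapping from detected emotion to a predefined playlist.
PLAYLISTS = {
    "happy":   ["upbeat_song_1.mp3", "upbeat_song_2.mp3"],
    "sad":     ["calm_song_1.mp3", "calm_song_2.mp3"],
    "angry":   ["soothing_song_1.mp3"],
    "neutral": ["ambient_song_1.mp3"],
}

def predict_emotion(face_gray) -> str:
    """Placeholder: a trained emotion classifier would go here."""
    return "neutral"

def recommend_song_from_webcam() -> str:
    cap = cv2.VideoCapture(0)                 # open the default camera
    ok, frame = cap.read()                    # grab a single frame
    cap.release()
    if not ok:
        raise RuntimeError("Could not read from camera")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return random.choice(PLAYLISTS["neutral"])

    x, y, w, h = faces[0]
    emotion = predict_emotion(gray[y:y + h, x:x + w])
    return random.choice(PLAYLISTS.get(emotion, PLAYLISTS["neutral"]))

if __name__ == "__main__":
    print("Suggested track:", recommend_song_from_webcam())
```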

Author(s):  
Pushkal Kumar Shukla

Human emotion plays an essential role in social relationships. Emotions are reflected in verbalization, hand gestures, outward appearance, and facial expressions. Music is an art form that soothes and calms the human brain and body. To analyze the mood of an individual, we first need to examine their emotions; if we can detect an individual's emotions, we can also infer their mood. Blending these two aspects, our system detects a person's emotion through facial expression and plays music according to the detected emotion, which can alleviate the mood or calm the individual and also delivers a suitable song more quickly, saving the time spent looking up different songs. The facial expressions considered are angry, happy, sad, and neutral. Facial emotions can be captured and detected through an inbuilt camera or a webcam. In our project, the Fisherface algorithm is used for the detection of human emotions. After detecting an individual's emotion, our system plays music automatically based on that emotion.
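The abstract names the Fisherface algorithm; a hedged sketch of how that stage could look with OpenCV's Fisherface recognizer follows. The dataset, image size, and emotion labels are illustrative assumptions, and the contrib build of OpenCV is required.

```python
# Illustrative sketch of emotion recognition with OpenCV's Fisherface recognizer.
# Dataset preparation (equally sized grayscale face crops) is assumed upstream.
import cv2          # pip install opencv-contrib-python (cv2.face lives in contrib)
import numpy as np

EMOTIONS = ["angry", "happy", "sad", "neutral"]

def train_fisherface(images, labels):
    """images: list of equally sized grayscale face crops; labels: one int per image."""
    model = cv2.face.FisherFaceRecognizer_create()
    model.train(images, np.array(labels))
    return model

def predict_emotion(model, face_gray):
    """Return the predicted emotion name and the recognizer's confidence score."""
    label, confidence = model.predict(face_gray)
    return EMOTIONS[label], confidence

# Usage (hypothetical 48x48 grayscale dataset):
#   model = train_fisherface(train_images, train_labels)
#   emotion, conf = predict_emotion(model, webcam_face_crop)
#   play_playlist_for(emotion)   # hand off to the music player
```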


Author(s):  
Kostas Karpouzis
Athanasios Drosopoulos
Spiros Ioannou
Amaryllis Raouzaiou
Nicolas Tsapatsoulis
...  

Emotionally-aware Man-Machine Interaction (MMI) systems are presently at the forefront of interest in the computer vision and artificial intelligence communities, since they give less technology-aware people the opportunity to use computers more efficiently, overcoming fears and preconceptions. Most emotion-related facial and body gestures are considered universal, in the sense that they are recognized across different cultures; therefore, the introduction of an "emotional dictionary" that includes descriptions and perceived meanings of facial expressions and body gestures, so as to help infer the likely emotional state of a specific user, can enhance the affective nature of MMI applications (Picard, 2000).


2019
Vol 9 (21)
pp. 4542
Author(s):  
Marco Leo
Pierluigi Carcagnì
Cosimo Distante
Pier Luigi Mazzeo
Paolo Spagnolo
...  

The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick and objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature aims at the easier task of recognizing whether a facial expression is present or not. Some attempts to address this challenging task exist, but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children's ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a homogeneous (in terms of cognitive capabilities) group of individuals. To fill this gap, in this paper some advanced computer vision and machine learning strategies are integrated into a framework aimed at computationally analyzing how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual ability to produce facial expressions. The gathered computational outcomes were correlated with the evaluations provided by psychologists, and evidence shows that the proposed framework could be effectively exploited to analyze in depth the emotional competence of ASD children in producing facial expressions.
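A rough illustration of the "virtual sensor" idea (not the authors' actual framework) is to track facial landmarks across frames and use per-landmark displacement as a crude proxy for facial muscle movement. The sketch below assumes MediaPipe Face Mesh for landmark extraction.

```python
# Hedged sketch: frame-to-frame landmark displacement as a proxy for muscle movement.
import cv2
import numpy as np
import mediapipe as mp   # pip install mediapipe

def landmark_displacements(video_path: str) -> np.ndarray:
    """Return an array of shape (frames-1, landmarks) of frame-to-frame motion."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    prev, motions = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not res.multi_face_landmarks:
            continue
        pts = np.array([(p.x, p.y) for p in res.multi_face_landmarks[0].landmark])
        if prev is not None:
            motions.append(np.linalg.norm(pts - prev, axis=1))  # movement per landmark
        prev = pts
    cap.release()
    return np.array(motions)

# Summary statistics of these motion traces could then be compared with
# psychologists' ratings of how well a child produced a requested expression.
```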


2021
Vol 3 (2)
pp. 414-434
Author(s):  
Liangfei Zhang
Ognjen Arandjelović

Facial expressions provide important information concerning one's emotional state. Unlike regular facial expressions, microexpressions are particular kinds of small, quick facial movements, which generally last only 0.05 to 0.2 s. They reflect individuals' subjective emotions and real psychological states more accurately than regular expressions, which can be acted. However, the small range and short duration of facial movements when microexpressions happen make them challenging for humans and machines alike to recognize. In the past decade, automatic microexpression recognition has attracted the attention of researchers in psychology, computer science, and security, amongst others. In addition, a number of specialized microexpression databases have been collected and made publicly available. The purpose of this article is to provide a comprehensive overview of the current state-of-the-art work in automatic facial microexpression recognition. To be specific, the features and learning methods used in automatic microexpression recognition, the existing microexpression data sets, the major outstanding challenges, and possible future development directions are all discussed.
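One hand-crafted feature family often used in this line of work is dense optical flow between an onset frame and an apex frame of a clip. The sketch below is only illustrative of that feature type; frame selection and any downstream classifier are left out, and the histogram design is an assumption.

```python
# Hedged sketch: a simple flow-orientation histogram (HOOF-style descriptor)
# between onset and apex frames of a microexpression clip.
import cv2
import numpy as np

def onset_apex_flow_features(onset_bgr, apex_bgr, bins: int = 8) -> np.ndarray:
    """Return a normalized histogram of optical-flow orientations, weighted by magnitude."""
    onset = cv2.cvtColor(onset_bgr, cv2.COLOR_BGR2GRAY)
    apex = cv2.cvtColor(apex_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(onset, apex, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)   # normalized orientation histogram
```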


Author(s):  
Apurva Varade

Humans tend to connect the music they hear to the emotion they are feeling. Song playlists, though, are often too large to sort through manually. It would be helpful if the music player were "smart enough" to sort the music based on the current emotional state of the individual. The main idea of this project is to automatically play songs based on the emotions of the user. Based on the detected emotion, music is played from a predefined playlist, aiming to deliver user-preferred music with emotional attentiveness. In the existing system, the user has to select songs manually, randomly played songs may not match the user's mood, and the user has to classify the songs into various emotions and then manually choose a particular emotion to play them. These difficulties are avoided by our project, a novel approach that automatically plays songs based on the user's emotions: it recognizes the user's facial emotion and plays songs matching that emotion. The emotions are recognized using a machine learning method, the Support Vector Machine (SVM) algorithm. The human face is an important organ of the body and plays an especially important role in conveying an individual's behaviour and emotional state.
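The abstract names an SVM as the classifier; a minimal sketch of that stage using scikit-learn follows. The feature extraction step (e.g., facial landmark coordinates) and the playlist hand-off are assumptions, not details taken from the paper.

```python
# Hedged sketch of the SVM classification stage for face-derived feature vectors.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["angry", "happy", "sad", "neutral"]

def train_emotion_svm(X: np.ndarray, y: np.ndarray):
    """X: one feature vector per face image; y: integer emotion labels."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X, y)
    return clf

def emotion_for_face(clf, features: np.ndarray) -> str:
    return EMOTIONS[int(clf.predict(features.reshape(1, -1))[0])]

# The predicted emotion name would then index the predefined playlist,
# e.g. player.play(PLAYLISTS[emotion_for_face(clf, features)]).
```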


2020
Author(s):  
Navin Ipe

Emotion recognition by the human brain normally incorporates context, body language, facial expressions, verbal cues, non-verbal cues, gestures, and tone of voice. When considering only the face, piecing together various aspects of each facial feature is critical in identifying the emotion. Since viewing a single facial feature in isolation may result in inaccuracies, this paper attempts to train neural networks to first identify specific facial features in isolation, and then use the general pattern of expressions on the face to identify the overall emotion. The reasons for classification inaccuracies are also examined.
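The per-feature-then-fuse idea could be sketched as separate small networks for individual facial regions whose embeddings are concatenated for the final prediction. The region choice (eyes and mouth crops), crop size, and layer sizes below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: one tiny CNN per facial region, fused for the overall emotion.
import torch
import torch.nn as nn

class RegionNet(nn.Module):
    """Tiny CNN for one facial region (a 32x32 grayscale crop is assumed)."""
    def __init__(self, out_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class FusionEmotionNet(nn.Module):
    """Fuses per-region embeddings into a single emotion prediction."""
    def __init__(self, num_emotions: int = 4):
        super().__init__()
        self.eyes, self.mouth = RegionNet(), RegionNet()
        self.head = nn.Linear(32, num_emotions)

    def forward(self, eyes_crop, mouth_crop):
        fused = torch.cat([self.eyes(eyes_crop), self.mouth(mouth_crop)], dim=1)
        return self.head(fused)

# model = FusionEmotionNet()
# logits = model(eyes_batch, mouth_batch)   # crops of shape (N, 1, 32, 32)
```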


Author(s):  
Vivek Kumar Pandey

With the advent of the COVID-19 pandemic, the use of masks is mandatory as per WHO/ICMR guidelines to avert the spread of the coronavirus. The post-lockdown period has seen cases increase day by day as people step out of their homes to resume work and recreational activities. Wearing a mask at all times has still not found an enduring place in our day-to-day routine. It is a natural human tendency to be complacent and to remove the mask while talking, working, or after prolonged use simply to relax and breathe properly, thus risking not only one's own life but also the lives of others who may come in contact with the person during the period when he/she was not wearing a mask. Presently, the inspection of people with or without masks is done manually and visually by sentries/guards present at entry/exit points; guards cannot be stationed at every place to keep a check on people who remove their mask and roam around without restraint once they have been scrutinized at the entry gate. In the proposed system, efforts have been made to inspect people with or without masks automatically with the help of computer vision and artificial intelligence. This module detects the face of the individual, identifies whether he/she is wearing a mask, and raises an alarm if the person is detected without a mask.
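A hedged sketch of the detect-face / classify-mask / raise-alarm loop described above follows. The mask classifier is a placeholder (in practice a trained CNN or similar), and the Haar-cascade face detector is an assumption rather than the authors' detector of choice.

```python
# Illustrative sketch (not the authors' exact module) of the monitoring loop.
import cv2

FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_wearing_mask(face_bgr) -> bool:
    """Placeholder for a trained with-mask / without-mask classifier."""
    return False

def monitor(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in FACE_DETECTOR.detectMultiScale(gray, 1.3, 5):
            if not is_wearing_mask(frame[y:y + h, x:x + w]):
                print("ALARM: person detected without a mask")  # hook a real alarm here
        if cv2.waitKey(1) == 27:   # press Esc to stop
            break
    cap.release()
```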


Sensors
2019
Vol 19 (3)
pp. 578
Author(s):  
Moisés Márquez-Olivera
Antonio-Gustavo Juárez-Gracia
Viridiana Hernández-Herrera
Amadeo-José Argüelles-Cruz
Itzamá López-Yáñez

Face recognition is a natural skill that a child performs from the first days of life; unfortunately, there are people with visual or neurological problems that prevent them from performing this process visually. This work describes a system that integrates Artificial Intelligence to learn the faces of the people with whom the user interacts daily. In this study, we propose a new hybrid model of Alpha-Beta Associative memories (Amαβ) with Correlation Matrix (CM) and K-Nearest Neighbors (KNN), where the Amαβ-CMKNN was trained with characteristic biometric vectors generated from images of faces of people presenting different facial expressions such as happiness, surprise, anger, and sadness. To test the performance of the hybrid model, two experiments that differ in the selection of the parameters characterizing the face were conducted. The performance of the proposed model was tested on the CK+, CAS-PEAL-R1, and Face-MECS (own) databases, which test the Amαβ-CMKNN with faces of subjects of both sexes, different races, facial expressions, poses, and environmental conditions. The hybrid model was able to remember 100% of the faces learned during training, while in tests presenting faces with variations with respect to those learned, the proposed integrated system achieved 95.05% in controlled environments and 86.48% in real environments.
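The Amαβ-CMKNN hybrid is specific to this paper and is not reproduced here; as a point of reference, only its k-nearest-neighbour stage is sketched below with scikit-learn, operating on the "characteristic biometric vectors" the abstract mentions. Feature extraction and the associative-memory component are assumed to happen upstream.

```python
# Hedged sketch of a KNN identification stage over face feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_face_knn(vectors: np.ndarray, person_ids: np.ndarray, k: int = 3):
    """vectors: one biometric feature vector per face image; person_ids: identity labels."""
    knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
    knn.fit(vectors, person_ids)
    return knn

def identify(knn, vector: np.ndarray) -> int:
    """Return the predicted identity label for a single feature vector."""
    return int(knn.predict(vector.reshape(1, -1))[0])
```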


2001
Vol 13 (7)
pp. 937-951
Author(s):  
Noam Sagiv
Shlomo Bentin

The range of specificity and the response properties of the extrastriate face area were investigated by comparing the N170 event-related potential (ERP) component elicited by photographs of natural faces, realistically painted portraits, sketches of faces, schematic faces, and by nonface meaningful and meaningless visual stimuli. Results showed that the N170 distinguished between faces and nonface stimuli when the concept of a face was clearly rendered by the visual stimulus, but it did not distinguish among different face types: Even a schematic face made from simple line fragments triggered the N170. However, in a second experiment, inversion seemed to have a different effect on natural faces in which face components were available and on the pure gestalt-based schematic faces: The N170 amplitude was enhanced when natural faces were presented upside down but reduced when schematic faces were inverted. Inversion delayed the N170 peak latency for both natural and schematic faces. Together, these results suggest that early face processing in the human brain is subserved by a multiple-component neural system in which both whole-face configurations and face parts are processed. The relative involvement of the two perceptual processes is probably determined by whether the physiognomic value of the stimuli depends upon holistic configuration, or whether the individual components can be associated with faces even when presented outside the face context.
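For readers unfamiliar with ERP measures, a hedged illustration of how N170 amplitude and peak latency are typically read out from epoched EEG is shown below. The sampling rate, measurement window, and single-channel layout are assumptions for illustration, not the study's acquisition parameters.

```python
# Hedged sketch: average stimulus-locked epochs and find the N170 negative peak.
import numpy as np

def n170_peak(epochs_uv: np.ndarray, sfreq: float = 500.0,
              window_ms: tuple = (130, 200)):
    """epochs_uv: array (n_trials, n_samples) of one channel, time-locked to stimulus onset."""
    erp = epochs_uv.mean(axis=0)                        # average over trials
    t_ms = np.arange(erp.size) / sfreq * 1000.0         # time axis in ms
    mask = (t_ms >= window_ms[0]) & (t_ms <= window_ms[1])
    idx = np.argmin(erp[mask])                          # N170 is a negative deflection
    amplitude = erp[mask][idx]
    latency_ms = t_ms[mask][idx]
    return amplitude, latency_ms
```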

