PERSUASIVE TECHNOLOGY TO GENERATE SONGS PLAYLIST USING EMOTION BASED MUSIC PLAYER

Author(s):  
Uma Yadav ◽  
Shweta Kharat

This work presents an approach that offers the user an automatically generated playlist of songs based on the user's mood. In today's world, almost everyone uses music to relax. Many algorithms have been developed and proposed to automate the playlist-generation process. The Emotion Based Music Player aims at reading and inferring data from facial expressions and creating a playlist based on the extracted parameters. Human emotions serve as a common medium for understanding and sharing feelings and intentions. Depending on the current mood of the user, the player automatically selects a song and plays it. The proposed system focuses on developing the Emotion Based Music Player by detecting human emotions through a facial-expression extraction technique, and it comprises two stages: playlist generation and emotion classification. The system is designed so that facial expressions are captured through an inbuilt camera; the extracted image features are then analysed to determine the user's mood, and the playlist is arranged accordingly.
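The final mood-to-playlist step described above can be sketched as a simple lookup, assuming the emotion label has already been produced by the facial-expression classifier. The emotion names and track filenames here are illustrative placeholders, not taken from the paper:

```python
# Minimal sketch of playlist generation from a detected emotion label.
# Emotion names and filenames are invented for illustration.
PLAYLISTS = {
    "happy":   ["upbeat_track_1.mp3", "upbeat_track_2.mp3"],
    "sad":     ["soothing_track_1.mp3", "soothing_track_2.mp3"],
    "angry":   ["calming_track_1.mp3"],
    "neutral": ["ambient_track_1.mp3"],
}

def generate_playlist(detected_emotion: str) -> list[str]:
    """Return the playlist for the detected mood, falling back to neutral."""
    return PLAYLISTS.get(detected_emotion, PLAYLISTS["neutral"])
```

A real system would replace the static lists with queries against the user's music library, but the control flow stays the same.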

Author(s):  
Kamal Naina Soni

Abstract: Human expressions play an important role in extracting an individual's emotional state. They help in determining the current state and mood of an individual, by extracting and understanding the emotion conveyed through various features of the face such as the eyes, cheeks, forehead, or even the curve of the smile. A survey confirmed that people use music as a form of expression: they often relate to a particular piece of music according to their emotions. Considering how music impacts the human brain and body, our project deals with extracting the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suitable to that mood is presented to the user. This can help alleviate the mood or simply calm the individual, and it also retrieves suitable songs more quickly, saving the time otherwise spent looking up different songs. In parallel, we develop software that can be used anywhere and provides the functionality of playing music according to the detected emotion.
Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera


Author(s):  
Pushkal Kumar Shukla

Human emotion plays an essential role in social relationships. Emotions are reflected in verbalization, hand and body gestures, outward appearance, and facial expressions. Music is an art form that soothes and calms the human brain and body. To analyse the mood of an individual, we first need to examine their emotions: if we can detect an individual's emotions, we can also infer their mood. Blending these two aspects, our system detects a person's emotion from facial expression and plays music according to the detected emotion, which can alleviate the mood or calm the individual, and also retrieves suitable songs more quickly, saving the time spent looking up different songs. The facial expressions considered are angry, happy, sad, and neutral. Facial emotions can be captured and detected through an inbuilt camera or a webcam. In our project, the Fisherface algorithm is used for the detection of human emotions. After detecting an individual's emotion, our system plays music automatically based on that emotion.
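OpenCV's contrib module exposes the Fisherface algorithm used here as `cv2.face.FisherFaceRecognizer_create()`. As a dependency-light illustration of the underlying idea (project faces into a discriminant space, then assign the nearest class), the following is a toy nearest-class-mean classifier in NumPy; it omits the PCA/LDA projection that real Fisherfaces performs, so treat it as a sketch, not the paper's implementation:

```python
import numpy as np

def train_class_means(faces: np.ndarray, labels: np.ndarray) -> dict:
    """faces: (n, d) flattened grayscale images; labels: (n,) emotion ids.
    Returns the mean face per emotion class."""
    return {int(c): faces[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_emotion(means: dict, face: np.ndarray) -> int:
    """Assign the face to the class whose mean image is nearest (Euclidean)."""
    return min(means, key=lambda c: np.linalg.norm(face - means[c]))
```

In the full Fisherface pipeline the distance would be computed in the LDA-projected subspace rather than in raw pixel space.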


Author(s):  
Yanqiu Liang

To solve the problem of emotional loss in teaching and improve the teaching effect, an intelligent teaching method based on facial expression recognition was studied. The traditional active shape model (ASM) was improved to extract facial feature points. Facial expressions were identified using the geometric features of these facial landmarks together with a support vector machine (SVM); in the recognition process, the geometric features and the SVM were used to build expression classifiers. Results showed that the SVM method based on the geometric characteristics of facial feature points effectively realized the automatic recognition of facial expressions. The automatic classification of facial expressions is thus achieved, and the problem of emotional deficiency in intelligent teaching is effectively addressed.
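A minimal sketch of the geometric-features-plus-SVM stage, using scikit-learn. The feature vector here (eyebrow height, mouth width, mouth-corner angle) and all values are invented for illustration; the paper derives its features from ASM landmark points:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical geometric features per face:
# [eyebrow_height, mouth_width, mouth_corner_angle]
X = np.array([
    [0.30, 0.55, 0.8],   # happy: raised mouth corners
    [0.28, 0.52, 0.7],   # happy
    [0.10, 0.40, -0.6],  # sad: drooping mouth corners
    [0.12, 0.38, -0.5],  # sad
])
y = np.array([0, 0, 1, 1])  # 0 = happy, 1 = sad

# Train a linear-kernel SVM expression classifier on the geometric features.
clf = SVC(kernel="linear").fit(X, y)
label = int(clf.predict([[0.29, 0.54, 0.75]])[0])  # query near the happy cluster
```

With real landmark-derived features the same `fit`/`predict` interface applies; only the feature extraction changes.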


Author(s):  
Arnaja Sen ◽  
Dhaval Popat ◽  
Hardik Shah ◽  
Priyanka Kuwor ◽  
Era Johri

<p>In the day-to-day stressful environment of the IT industry, working professionals lack appropriate relaxation time. To keep a person stress-free, various technical and non-technical stress-relieving methods are now being adopted. People working on computers can be categorized as administrators, programmers, and so on, each of whom requires different ways to unwind. A person's work pressure and vexation of any kind are depicted by their emotions, and facial expressions are the key to analysing the person's current psychology. In this paper, we discuss a user-intuitive smart music player. This player captures the facial expressions of a person working on the computer and identifies the current emotion; music is then played to help the user relax. The music player also takes into account the foreground processes the person is executing on the computer. Since various sorts of music are available to boost one's enthusiasm, an ideal playlist of songs is created and played for the person, taking into consideration both the tasks executed on the system and the person's current emotions. The person can browse the playlist and modify it, making the system more flexible. This music player thus allows working professionals to stay relaxed in spite of their workloads.</p>
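The selection logic described above, which combines the foreground task with the detected emotion, could look roughly like the following. The process categories, emotion names, and playlists are illustrative assumptions, not from the paper:

```python
# Map (foreground task category, detected emotion) to a playlist,
# falling back to an emotion-only choice when the task is unrecognized.
TASK_EMOTION_PLAYLISTS = {
    ("coding", "stressed"): ["lofi_focus_1.mp3", "lofi_focus_2.mp3"],
    ("coding", "happy"):    ["upbeat_instrumental_1.mp3"],
    ("email", "stressed"):  ["calm_piano_1.mp3"],
}
EMOTION_PLAYLISTS = {
    "stressed": ["ambient_calm_1.mp3"],
    "happy":    ["pop_hit_1.mp3"],
}

def pick_playlist(task: str, emotion: str) -> list[str]:
    """Prefer a task-specific playlist; otherwise use the emotion alone."""
    return TASK_EMOTION_PLAYLISTS.get(
        (task, emotion), EMOTION_PLAYLISTS.get(emotion, [])
    )
```

A deployed version would detect the task by inspecting the foreground process name (e.g. via a platform process API) before calling the lookup.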


Author(s):  
Akash Kumar ◽  
Athira B. Nair ◽  
Swarnaprabha Jena ◽  
Debaraj Rana ◽  
Subrat Kumar Pradhan

Facial expressions are a vital part of human life. Each day holds numerous instances of communication, and every communication expressed with emotion tells us about the state of the person. Facial expressions serve interpersonal as well as security purposes; the mischievous intentions of a person can be revealed by their expressions. The human mind captures visual information quickly, so having a machine recognize it is a challenge. As the saying goes, "a picture is worth a thousand words", but only when it is represented well; a machine that can detect the atmosphere by means of expressions reduces manual work. This paper detects faces, extracts their features, and classifies them into different categories, ultimately leading to expression recognition. We evaluate our proposed method on our dataset: the recall for angry, fear, happy, neutral, sad, and surprise is 60%, 31%, 84%, 22%, 57%, and 58% respectively, and the F1-score is 51%, 35%, 82%, 25%, 51%, and 64% respectively. Experimental results demonstrate the competitive classification performance of our proposed system.
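The per-class recall and F1 figures quoted above follow the standard definitions, sketched here in plain Python (scikit-learn's `classification_report` computes the same quantities):

```python
def recall_and_f1(y_true, y_pred, cls):
    """Per-class recall and F1 from parallel lists of true and predicted labels."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, f1
```

Because F1 is the harmonic mean of precision and recall, a class such as "neutral" can show both low recall (22%) and a similarly low F1 (25%) when many of its instances are misclassified.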


2020 ◽  
Vol 3 (2) ◽  
pp. 210-215
Author(s):  
Juliansyah Putra Tanjung ◽  
Muhathir Muhathir

The face is one of the human biometrics often used as important information about a person. One unique kind of facial information is the facial expression, which indirectly conveys a person's feelings. Because each facial expression has a unique pattern, these patterns can be tested computationally: the Histogram of Oriented Gradients (HOG) descriptor is used to extract the features present in each facial expression, and the information obtained from HOG is classified using the Support Vector Machine (SVM) method. The classification of facial expressions using the extracted HOG features reached 76.57% at a value of K = 500, with an average accuracy of 72.57%.
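`skimage.feature.hog` is the usual library route for HOG descriptors. As a dependency-light sketch of the core idea, a histogram of gradient orientations weighted by gradient magnitude, here is a simplified global version that omits the cell/block normalization of real HOG:

```python
import numpy as np

def hog_like(img: np.ndarray, bins: int = 9) -> np.ndarray:
    """Simplified HOG-style descriptor: a single magnitude-weighted
    histogram of gradient orientations over the whole image.
    (Real HOG computes this per cell and normalizes over blocks.)"""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    total = hist.sum()
    return hist / total if total else hist
```

The resulting fixed-length vector is what gets fed to the SVM classifier; swapping in `skimage.feature.hog` changes only the descriptor's fidelity, not the pipeline.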


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruently) or responding opposite (incongruently) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the underlying decision-making processes of our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which there is a need to gradually adapt facial expression selection to both social and non-social reinforcements.
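The reinforcement-learning component can be illustrated with a toy Rescorla-Wagner/Q-learning update over the two response policies. This is only an illustrative stand-in for the paper's fitted models; the parameter values and reward structure are invented:

```python
import random

def simulate_learning(n_trials: int = 200, alpha: float = 0.3,
                      epsilon: float = 0.1, seed: int = 0) -> dict:
    """Toy learner for a two-action avoidance task: responding
    'congruent' avoids the aversive outcome (reward 1), 'incongruent'
    does not (reward 0). Returns the learned action values."""
    rng = random.Random(seed)
    q = {"congruent": 0.0, "incongruent": 0.0}
    for _ in range(n_trials):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = rng.choice(list(q))
        else:
            action = max(q, key=q.get)
        reward = 1.0 if action == "congruent" else 0.0
        # Rescorla-Wagner / delta-rule update
        q[action] += alpha * (reward - q[action])
    return q
```

Over trials the value of the avoidance-effective action climbs toward the reward while the other stays low, mirroring how participants gradually optimized their facial responses.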


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify whether participants’ facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.

