Music Playlist Generation using Facial Expression Analysis and Task Extraction

Author(s):  
Arnaja Sen ◽  
Dhaval Popat ◽  
Hardik Shah ◽  
Priyanka Kuwor ◽  
Era Johri

In the day-to-day stressful environment of the IT industry, working professionals rarely get appropriate time to relax. To keep a person stress free, various technical and non-technical stress-relieving methods are now being adopted. People working on computers can be categorized as administrators, programmers, and so on, each of whom requires different ways to unwind. A person's work pressure and vexation of any kind are reflected in their emotions, and facial expressions are the key to analyzing the person's current psychology. In this paper, we discuss a user-intuitive smart music player. This player captures the facial expressions of a person working on the computer, identifies the current emotion, and intuitively plays music to help the user relax. The music player also takes into account the foreground processes the person is executing on the computer. Since various sorts of music are available to boost one's enthusiasm, an ideal playlist of songs is created and played for the person, based on the tasks executed on the system and the emotions they currently carry. The person can browse the playlist and modify it, making the system more flexible. This music player thus allows working professionals to stay relaxed in spite of their workloads.
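The abstract's core idea, that song selection depends on both the detected emotion and the user's foreground task, can be sketched as a simple lookup. This is a minimal illustration, not the paper's implementation; all labels, task categories, and song titles are invented placeholders.

```python
# Minimal sketch: choose a playlist from the detected emotion and the
# category of the user's foreground task. Labels and titles are
# illustrative placeholders, not taken from the paper.

PLAYLISTS = {
    ("stressed", "programming"): ["Ambient Focus", "Lo-fi Beats"],
    ("stressed", "administration"): ["Soft Piano", "Nature Sounds"],
    ("happy", "programming"): ["Upbeat Pop", "Electro Mix"],
}

DEFAULT = ["Relaxing Instrumentals"]

def build_playlist(emotion: str, task: str) -> list[str]:
    """Return a playlist for the (emotion, task) pair, falling back
    to a generic relaxation list when no specific match exists."""
    return PLAYLISTS.get((emotion, task), DEFAULT)

print(build_playlist("stressed", "programming"))  # specific match
print(build_playlist("surprised", "browsing"))    # fallback list
```

A real system would replace the static table with learned preferences and let the user edit the result, matching the paper's point that the playlist stays modifiable.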

Author(s):  
Uma Yadav ◽  
Shweta Kharat

This paper presents an approach that offers the user an automatically generated playlist of songs based on the user's mood. In today's world, everyone uses music to relax. Many algorithms have been developed and proposed to automate the playlist generation process. The Emotion Based Music Player aims at reading and interpreting data from facial expressions and creating a playlist based on the extracted parameters. Human moods enable common understanding and the sharing of feelings and aims. Depending on the user's current mood, the player automatically selects a song and plays it. The proposed system focuses on developing the Emotion Based Music Player by detecting human emotions through a facial expression extraction technique, and it covers both playlist generation and classification of emotions. The system is designed so that facial expressions are captured through an inbuilt camera; it analyzes the extracted features of the image, determines the user's mood, and arranges the playlist accordingly.


2012 ◽  
Vol 43 (4) ◽  
pp. 738-746 ◽  
Author(s):  
Kristie L. Young ◽  
Eve Mitsopoulos-Rubens ◽  
Christina M. Rudin-Brown ◽  
Michael G. Lenné

2021 ◽  
Vol 4 (1) ◽  
pp. 59-64
Author(s):  
Rahmatul Husna Arsyah ◽  
Astri Indah Juwita

Abstract: Nagari Pariangan, recognized as the most beautiful tourist village in the world, has the potential to support the community's economy and increase local revenue (PAD). Local industrial products made by the community can become souvenirs for visiting tourists. In practice, however, Nagari Pariangan has no media to promote them. This study aims to analyze media convergence in marketing the community's local handicraft products. The research approach is descriptive qualitative, with data collected through observation, interviews, and a literature review. The results reveal that Nagari Pariangan is an area whose tourism potential has attracted worldwide attention, and that a medium is needed to help the community introduce local Nagari products and thereby increase local income. The main key to convergence is digitization, and Nagari Pariangan does not yet have digital media to support community industrial output. Based on the 3C technology dimensions (Communication, Compute and Contents), which comprise the IT industry, telecom infrastructure providers, and the content industry, Nagari Pariangan is considered capable of building a digital medium that could make the economy of communities in areas with tourism potential much better.

Keywords: convergence; media; craft products


Author(s):  
Kamal Naina Soni

Abstract: Human expressions play an important role in determining an individual's emotional state. They help identify a person's current state and mood by extracting and interpreting emotion from various facial features such as the eyes, cheeks, forehead, or even the curve of a smile. A survey confirmed that people use music as a form of expression, and they often relate a particular piece of music to their emotions. Considering how music affects the human brain and body, our project deals with extracting the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to the user's mood is presented. This can help lift the user's mood or simply calm the individual, and it retrieves suitable songs more quickly, saving the time spent looking up different songs. In parallel, we develop software that can be used anywhere by providing the functionality of playing music according to the detected emotion. Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
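The detect-then-recommend flow described above can be sketched with a stubbed classifier behind a fixed interface; a real system would plug a camera-based facial expression model into the same slot. All names, labels, and the confidence threshold below are illustrative assumptions, not from the paper.

```python
# Sketch of the detect-then-recommend flow. detect_emotion() is a
# stand-in for a real facial expression classifier; it returns a
# mood label plus a confidence score in [0, 1].

MOOD_PLAYLISTS = {
    "happy": ["Feel Good Hits"],
    "sad": ["Comfort Songs"],
    "neutral": ["Everyday Mix"],
}

def detect_emotion(frame: bytes) -> tuple[str, float]:
    """Placeholder classifier: a real model would analyze the frame."""
    return ("happy", 0.92)

def recommend(frame: bytes, threshold: float = 0.5) -> list[str]:
    """Map the detected mood to a playlist, falling back to a
    neutral mix when the classifier is not confident enough."""
    mood, confidence = detect_emotion(frame)
    if confidence < threshold:
        mood = "neutral"
    return MOOD_PLAYLISTS[mood]

print(recommend(b""))  # ['Feel Good Hits']
```

The confidence fallback is one simple way to avoid jarring playlist switches on uncertain detections; the abstract itself does not specify how misdetections are handled.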


Author(s):  
Yongmian Zhang ◽  
Jixu Chen ◽  
Yan Tong ◽  
Qiang Ji

This chapter describes a probabilistic framework for faithful reproduction of spontaneous facial expressions on a synthetic face model in a real time interactive application. The framework consists of a coupled Bayesian network (BN) to unify the facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to capture relationships between facial expressions and the facial motions as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented by a DBN, the AU model captures the rigid head movements and nonrigid facial muscular movements of a spontaneous facial expression. At the synthesizer, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensity through the top-down inference according to the current state of facial expression and pose information output by the analysis end. The two BNs are connected statically through a data stream link. The novelty of using the coupled BN brings about several benefits. First, a facial expression is inferred through both spatial and temporal inference so that the perceptual quality of animation is less affected by the misdetection of facial features. Second, more realistic looking facial expressions can be reproduced by modeling the dynamics of human expressions in facial expression analysis. Third, very low bitrate (9 bytes per frame) in data transmission can be achieved.
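The chapter's synthesis end performs top-down inference from an expression state to FAP intensities. As a loose, toy analogue (not the chapter's actual Bayesian model), that mapping can be pictured as scaling a per-expression FAP profile by the inferred expression intensity; the expression names and numeric values below are invented.

```python
# Toy stand-in for the top-down step at the synthesis end: map an
# inferred expression state and intensity to Facial Animation
# Parameter (FAP) values. A real static BN would infer these
# jointly; here a fixed profile is simply scaled.

FAP_TABLE = {
    "smile": {"lip_corner_raise": 0.8, "cheek_raise": 0.6},
    "frown": {"brow_lower": 0.7, "lip_corner_depress": 0.5},
}

def synthesize(expression: str, intensity: float) -> dict[str, float]:
    """Scale the base FAP profile of an expression by its inferred
    intensity, mimicking top-down reconstruction of FAPs."""
    base = FAP_TABLE[expression]
    return {fap: round(value * intensity, 3) for fap, value in base.items()}

print(synthesize("smile", 0.5))  # {'lip_corner_raise': 0.4, 'cheek_raise': 0.3}
```

Note how little data crosses the analysis-synthesis link in this scheme: an expression label plus an intensity, which is consistent with the chapter's point about very low transmission bitrates.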


Author(s):  
Abdolhossein Sarrafzadeh ◽  
Samuel T.V. Alexander ◽  
Jamshid Shanbehzadeh

Intelligent tutoring systems (ITS) are still not as effective as one-on-one human tutoring. The next generation of intelligent tutors is expected to take into account the emotional state of students. This paper presents research on the development of an Affective Tutoring System (ATS). The system, called "Easy with Eve", adapts to students via a lifelike animated agent who can detect student emotion through facial expression analysis and can display emotion herself. Eve's adaptations are guided by a case-based method for adapting to student states; this method uses data generated by an observational study of human tutors. This paper presents an analysis of the facial expressions of students engaged in learning with human tutors, and shows how a facial expression recognition system, a lifelike agent, and a case-based system built on this analysis have been integrated to develop an ATS for mathematics.
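The case-based adaptation described above can be illustrated as nearest-case retrieval: the current student state is compared against stored cases and the closest case's tutoring action is reused. The feature names, stored cases, and actions below are invented for illustration; they are not Eve's actual case base.

```python
# Toy sketch of case-based adaptation (1-nearest-neighbor retrieval):
# pick the tutoring action of the stored case whose student state is
# closest to the current one. Cases and features are made up.

CASES = [
    ({"frustration": 0.8, "engagement": 0.2}, "offer_hint"),
    ({"frustration": 0.1, "engagement": 0.9}, "harder_problem"),
    ({"frustration": 0.4, "engagement": 0.5}, "encourage"),
]

def distance(a: dict, b: dict) -> float:
    """Euclidean distance over shared state features."""
    return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

def select_action(state: dict) -> str:
    """Retrieve the nearest stored case and reuse its action."""
    _, action = min(CASES, key=lambda case: distance(case[0], state))
    return action

print(select_action({"frustration": 0.7, "engagement": 0.3}))  # offer_hint
```

In a full ATS the retrieved case would also be adapted and the outcome stored back, closing the standard retrieve-reuse-revise-retain loop of case-based reasoning.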


Human feelings are mental states of sentiment that arise spontaneously rather than through cognitive effort. Some of the basic feelings are happiness, anger, neutrality, sadness, and surprise. These internal feelings are reflected on the face as facial expressions. This paper presents a novel methodology for facial expression analysis that aids the development of a facial expression recognition system, which can be used in real time to classify five basic emotions. Recognizing facial expressions is important because of its applications in many domains such as artificial intelligence, security, and robotics. Many different approaches can address the problems of Facial Expression Recognition (FER), but the technique best suited for automated FER is the Convolutional Neural Network (CNN). Thus, a novel CNN architecture is proposed, and a combination of multiple datasets, such as FER2013, FER+, JAFFE, and CK+, is used for training and testing. This helps to improve accuracy and to develop a robust real-time system. The proposed methodology gives quite good results, and the obtained accuracy may encourage and support researchers in building better models for automated facial expression recognition systems.
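Whatever the internal architecture, a five-emotion CNN classifier like the one described typically ends by converting raw output scores (logits) into probabilities with a softmax and taking the argmax as the predicted emotion. This sketch shows only that final step; the logit values are invented and the label order is an assumption.

```python
import math

# Final classification step of a 5-emotion CNN: softmax over logits,
# then argmax. The label order and logits here are illustrative.

EMOTIONS = ["happy", "angry", "neutral", "sad", "surprise"]

def softmax(logits: list[float]) -> list[float]:
    """Numerically stable softmax: subtract the max logit first."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits: list[float]) -> str:
    """Return the emotion label with the highest probability."""
    probs = softmax(logits)
    return EMOTIONS[probs.index(max(probs))]

print(classify([2.0, 0.1, 0.5, -1.0, 0.3]))  # happy
```

Subtracting the maximum logit before exponentiating does not change the result but prevents overflow for large scores, a standard trick in classifier output layers.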


Sarcasm, or a joke, is a way of expressing a feeling in an opposite manner. In audio or video content, sarcasm can be identified easily through tonal stress, gestures, and facial expressions. The main challenge is detecting sarcasm in textual information, where expressions and tonal stress are absent. Nowadays, the use of social networks has increased enormously, and many people express their feelings sarcastically in their posts and comments. Sarcasm therefore attracts curiosity and attention in sentiment analysis, which is the process of examining and analyzing sentiments. In this project, I have chosen Twitter comments for this sarcasm-oriented sentiment analysis, which is essentially opinion mining. The aim of the project is to increase the accuracy rate by feeding a huge dataset for training. A further purpose of finding sarcasm in social networks is to block users who specifically target or attack a victim, which is not considered sarcasm.
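The abstract does not name a feature representation, but text classifiers for tasks like this commonly start from a bag-of-words baseline: each tweet becomes a vector of token counts over a fixed vocabulary, which is then fed to a trained model. The vocabulary and tweet below are invented for illustration.

```python
import re
from collections import Counter

# Illustrative bag-of-words featurizer, a common baseline input
# representation for text classifiers such as sarcasm detectors.
# The vocabulary and example tweet are made up.

VOCAB = ["great", "just", "love", "monday", "really"]

def featurize(text: str) -> list[int]:
    """Map a tweet to token counts over a fixed vocabulary."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return [counts[word] for word in VOCAB]

print(featurize("Oh great, just great, I love Monday"))  # [2, 1, 1, 1, 0]
```

Real sarcasm detectors usually go beyond raw counts, adding signals such as punctuation emphasis, sentiment contrast between clauses, or learned embeddings, since sarcasm often hinges on a mismatch between positive words and a negative context.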


This project presents a system to automatically detect emotional dichotomy and mixed emotional experiences using a Linux-based system. Facial expressions, head movements, and facial gestures are captured from pictorial input to create attributes such as the distance, coordinates, and movement of tracked points. A web camera is used to extract spectral attributes, and features are calculated using the Fisherface algorithm. Emotion is detected by a cascade classifier, and feature-level fusion is used to create a combined feature vector. Live actions of the user are recorded to capture emotions. Based on the calculated result, the system plays songs and displays a list of books.
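"Feature-level fusion" as used above simply means concatenating the feature vectors from the separate attribute streams (distances, coordinates, movement) into one combined vector before classification. A minimal sketch, with placeholder numbers:

```python
# Sketch of feature-level fusion: per-modality feature vectors are
# concatenated into a single combined vector that a classifier can
# consume. All numeric values are placeholders.

def fuse(*feature_vectors: list[float]) -> list[float]:
    """Concatenate per-modality features into one combined vector."""
    combined: list[float] = []
    for vec in feature_vectors:
        combined.extend(vec)
    return combined

distances = [12.5, 7.1]                    # e.g. eye-to-brow gaps
coordinates = [0.31, 0.62, 0.30, 0.58]     # tracked point positions
movement = [0.05]                          # frame-to-frame motion
print(fuse(distances, coordinates, movement))
```

Because modalities can have very different scales, fused vectors are usually normalized per modality before concatenation; the abstract does not say whether this system does so.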

