Happy Cow or Thinking Pig? WUR Wolf Facial Coding Platform for Measuring Emotions in Farm Animals

AI ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 342-354
Author(s):  
Suresh Neethirajan

Emotions play an indicative and informative role in the investigation of farm animal behaviors. Systems that respond to and can measure emotions provide a natural user interface for digitalizing animal welfare platforms. The faces of farm animals are among the richest channels for expressing emotions. This study presents WUR Wolf (Wageningen University & Research: Wolf Mascot), a real-time facial expression recognition platform that automatically codes the emotions of farm animals. The developed Python-based algorithms detect and track the facial features of cows and pigs; analyze their appearance, ear postures, and eye-white regions; and correlate these with the animals' mental/emotional states. The system was trained on a dataset of facial images of farm animals collected across more than six farms and has been optimized to operate with an average accuracy of 85%. From these features, the emotional states of the animals are determined in real time. The software detects 13 facial actions and infers nine emotional states, including whether the animal is aggressive, calm, or neutral. The real-time emotion recognition system is built on a YoloV3- and Faster YoloV4-based facial detection platform and an ensemble of Convolutional Neural Networks (RCNN). Detecting the facial features of farm animals simultaneously in real time enables many new automated decision-making tools for livestock farmers. Emotion sensing offers vast potential for improving animal welfare and animal–human interactions.
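
The article names YoloV3/Faster YoloV4 detection feeding CNN classifiers but does not reproduce its code. Below is a minimal sketch of that two-stage pattern, assuming generic stand-ins (an Ultralytics YOLO detector and a torchvision ResNet head); the weight files, confidence threshold, and nine-way output layer are placeholders, not the authors' released models.

```python
# Minimal two-stage sketch in the spirit of the WUR Wolf pipeline:
# a YOLO face detector feeding a CNN emotion classifier. The weight
# files and the 0.5 confidence threshold are hypothetical placeholders.
import cv2
import torch
import torchvision.transforms as T
from torchvision.models import resnet18
from ultralytics import YOLO

detector = YOLO("animal_face_yolo.pt")             # hypothetical detector weights
classifier = resnet18(weights=None)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 9)  # nine emotional states
classifier.load_state_dict(torch.load("emotion_cnn.pt"))       # hypothetical weights
classifier.eval()

prep = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

cap = cv2.VideoCapture("barn_camera.mp4")          # any OpenCV-readable stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for box in detector(frame, verbose=False)[0].boxes:
        if float(box.conf) < 0.5:
            continue
        x1, y1, x2, y2 = map(int, box.xyxy[0])     # detected face region
        face = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = classifier(prep(face).unsqueeze(0))
        state = int(logits.argmax())               # index into the nine states
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"state {state}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("demo", frame)
    if cv2.waitKey(1) == 27:                       # Esc quits
        break
cap.release()
```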


Author(s):  
Hady Pranoto ◽  
Oktaria Kusumawardani

Many studies have identified lecture attendance as one of the success factors in the learning process. We propose a framework for a student attendance system that uses face recognition as authentication. Triplet loss embedding in FaceNet is well suited to face recognition systems because the architecture is highly accurate, fairly lightweight, and easy to implement in a real-time face recognition system. In our research, triplet loss embedding showed good face recognition performance and can also be used for real-time authentication in an RFID-based attendance recording system. In our study, face recognition with k-NN and SVM classifiers achieved accuracies of 96.2 +/- 0.1% and 95.2 +/- 0.1%, respectively. An attendance recording system that uses face recognition for authentication will increase student attendance in lectures, since it is difficult to fake: the system validates the holder of an RFID card against facial biometric marks. This encourages students to be present in lectures, which in turn improves the quality of the existing education process. Results could be improved in the future by using a higher-resolution camera, and a facial expression recognition stage could be added to strengthen the authentication process: users would be required to perform an expression prompted by the system, verified against a database using the YOLO process.
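
The paper does not include its implementation; the sketch below shows the same idea with the facenet-pytorch port of FaceNet (a triplet-loss-trained embedder) and scikit-learn's k-NN and SVM classifiers. The enrollment images, student IDs, and probe frame are invented placeholders.

```python
# Sketch: FaceNet (triplet-loss) embeddings classified with k-NN and SVM,
# mirroring the paper's comparison. All file names and labels are placeholders.
import numpy as np
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

mtcnn = MTCNN(image_size=160)                          # detector + aligner
embedder = InceptionResnetV1(pretrained="vggface2").eval()

def embed(path):
    """Detect, align, and embed one face image into a 512-d vector."""
    face = mtcnn(Image.open(path).convert("RGB"))      # None if no face found
    with torch.no_grad():
        return embedder(face.unsqueeze(0)).squeeze(0).numpy()

# Enrollment set: (image path, student ID) pairs -- hypothetical data.
enroll = [("alice_1.jpg", "alice"), ("alice_2.jpg", "alice"),
          ("bob_1.jpg", "bob"), ("bob_2.jpg", "bob")]
X = np.stack([embed(p) for p, _ in enroll])
y = [sid for _, sid in enroll]

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
svm = SVC(kernel="linear").fit(X, y)

# Probe: a face captured when an RFID card is presented at the reader.
probe = embed("gate_frame.jpg")
print("k-NN:", knn.predict([probe])[0], "| SVM:", svm.predict([probe])[0])
```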


Author(s):  
Siu-Yeung Cho ◽  
Teik-Toe Teoh ◽  
Yok-Yen Nguwi

Facial expression recognition is a challenging task. A facial expression is formed by contracting or relaxing different facial muscles, which temporarily deforms facial features, such as a wide-open mouth or raised eyebrows. Such systems must address several issues. For instance, lighting conditions are very difficult to constrain and regulate. Real-time processing is also a challenging problem, since many facial features must be extracted and processed, and conventional classifiers are not always effective at handling those features and producing good classification performance. This chapter discusses how advanced feature selection techniques, together with good classifiers, can play a vital role in real-time facial expression recognition. Several feature selection methods and classifiers are discussed, and their evaluations for real-time facial expression recognition are presented. The chapter opens up a discussion about building a real-time system that reads and responds to people's emotions from facial expressions.
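
The chapter surveys feature selection methods and classifiers rather than prescribing one; purely as an illustration of the pattern it discusses, the sketch below pairs a filter-style selector (mutual information) with an SVM on synthetic high-dimensional "facial feature" vectors. Both components and the data are stand-ins, not the chapter's specific methods.

```python
# Illustration of the pattern the chapter discusses: select a small subset of
# informative features, then classify. Mutual information and an RBF SVM are
# stand-ins; the synthetic data mimics high-dimensional facial feature vectors.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# 1000 samples x 500 features, 7 expression classes, most features irrelevant.
X, y = make_classification(n_samples=1000, n_features=500, n_informative=40,
                           n_classes=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=60),  # keep the 60 most informative features
    SVC(kernel="rbf"),
)
pipe.fit(X_tr, y_tr)
print(f"held-out accuracy: {pipe.score(X_te, y_te):.2f}")
```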


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4047-4051

The automatic detection of facial expressions is an active research topic, given its wide field of applications in human-computer interaction, games, security, and education. However, most studies to date have been conducted in controlled laboratory environments, which do not reflect real-world scenarios. For that reason, a real-time Facial Expression Recognition System (FERS) is proposed in this paper, in which a deep learning approach is applied to detect six basic emotions (happiness, sadness, anger, disgust, fear, and surprise) in a real-time video stream. The system is composed of three main components: face detection, face preparation, and face expression classification. The proposed FERS, trained on 35,558 face images, achieves an accuracy of 65%.
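
The paper's code is not given; a minimal sketch of its three-component structure follows, assuming commodity pieces: an OpenCV Haar cascade for detection, a grayscale crop-and-resize preparation step, and a hypothetical pre-trained Keras CNN ("fer_cnn.h5") over the six emotions.

```python
# Sketch of the three-stage FERS structure: detect -> prepare -> classify.
# The Haar cascade and the 48x48 grayscale CNN are commodity stand-ins;
# "fer_cnn.h5" is a hypothetical pre-trained six-emotion model.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "fear", "surprise"]
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("fer_cnn.h5")                     # hypothetical weights

cap = cv2.VideoCapture(0)                            # live video stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, 1.3, 5):  # 1) detection
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))       # 2) preparation
        inp = face.astype("float32")[None, :, :, None] / 255.0
        probs = model.predict(inp, verbose=0)[0]                  # 3) classification
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("FERS demo", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```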


Author(s):  
Yi Ji ◽  
Khalid Idrissi

This paper proposes an automatic facial expression recognition system that uses new methods in both face detection and feature extraction. Because facial expressions involve a small set of muscles and limited ranges of motion, the system recognizes expressions from these changes in video sequences. First, the differences between neutral and emotional states are detected, and faces are automatically located from the changing facial organs. Then, LBP features are extracted, and AdaBoost is used to find the most important features for each expression on the essential facial parts. Finally, an SVM with a polynomial kernel classifies the expressions. The method is evaluated on the JAFFE and MMI databases, and its performance exceeds that of other automatic or manually annotated systems.
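
The paper's implementation is not published; the sketch below reconstructs the described chain under stated assumptions: block-wise uniform-LBP histograms, AdaBoost feature importances standing in for the per-expression selection step, and a degree-3 polynomial SVM. The random faces and labels exist only so the sketch runs end to end.

```python
# Sketch of the described chain: block-wise LBP histograms -> AdaBoost-ranked
# features -> polynomial-kernel SVM. The random faces/labels are synthetic
# placeholders so the example is self-contained.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def lbp_features(gray, P=8, R=1, grid=4):
    """Concatenated uniform-LBP histograms over a grid x grid block layout."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    h, w = codes.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = codes[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)                 # 4*4 blocks x 10 bins = 160-d

rng = np.random.default_rng(0)
faces = rng.integers(0, 256, (60, 64, 64)).astype(np.uint8)  # stand-in crops
y = rng.integers(0, 6, 60)                                   # stand-in labels

X = np.stack([lbp_features(f) for f in faces])

# AdaBoost on decision stumps; feature_importances_ ranks the LBP bins,
# standing in for the paper's per-expression feature selection.
ada = AdaBoostClassifier(n_estimators=100).fit(X, y)
keep = np.argsort(ada.feature_importances_)[::-1][:40]       # top-40 dimensions

svm = SVC(kernel="poly", degree=3).fit(X[:, keep], y)
print("training accuracy:", svm.score(X[:, keep], y))
```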


2008 ◽  
Vol 381-382 ◽  
pp. 375-378
Author(s):  
K.T. Song ◽  
M.J. Han ◽  
F.Y. Chang ◽  
S.H. Chang

The capability to recognize human facial expressions plays an important role in developing advanced human-robot interaction. By recognizing facial expressions, a robot can interact with a user in a more natural and friendly manner. In this paper, we propose a facial expression recognition system based on an embedded image processing platform that classifies different facial expressions online in real time. A low-cost embedded vision system has been designed and realized for robotic applications using a CMOS image sensor and a digital signal processor (DSP). The current design acquires thirty 640x480 image frames per second (30 fps). The proposed emotion recognition algorithm has been successfully implemented on this real-time vision system. Experimental results on a pet robot show that the robot can interact with a person responsively. The developed image processing platform accelerates recognition to 25 recognitions per second, with an average online recognition rate of 74.4% across five facial expressions.
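
As a rough illustration of how such throughput figures are obtained (the DSP firmware itself is not shown in the paper), the sketch below times a capture-and-classify loop on a 640x480 stream; `classify` is a stub standing in for the embedded five-expression recognizer.

```python
# Timing sketch for on-line recognition throughput, the metric behind the
# paper's 25 recognitions/s figure. `classify` is a placeholder stub.
import time
import cv2

def classify(gray):
    """Stand-in for the embedded five-expression recognizer."""
    return "neutral"

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)    # match the paper's 640x480 frames
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

n, t0 = 0, time.perf_counter()
while n < 200:                             # time 200 recognitions
    ok, frame = cap.read()
    if not ok:
        break
    classify(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    n += 1
print(f"{n / (time.perf_counter() - t0):.1f} recognitions/s")
cap.release()
```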


2020 ◽  
Author(s):  
Avelyne S. Villain ◽  
Mathilde Lanthony ◽  
Carole Guérin ◽  
Camille Noûs ◽  
Céline Tallet

Enriching the lives of farm animals is an obligation under intensive farming conditions. For pigs, manipulable materials are mandatory when no bedding is available. Like manipulable objects, positive human interactions might be considered enrichment, as they give the animals occasions to interact, increase their activity, and lead to positive emotional states. In this study, we investigated how weaned piglets perceived a manipulable object and a familiar human. After a similar familiarization with both stimuli, twenty-four weaned piglets were tested for a potential preference for one of the stimuli and submitted to isolation/reunion tests to evaluate the stimuli's emotional value. We hypothesized that being reunited with a stimulus would attenuate the stress of social isolation and promote positive behaviors, all the more so if the stimulus has a positive emotional value for piglets. Although our behavioural data did not reveal a preference for either stimulus, piglets approached the human more often and were observed lying down only near the human. Using behavioural and bioacoustic data, we showed that reunion with the human decreased the time piglets spent in an attentive state, and their mobility, more than reunion with the object or isolation did. Vocalizations differed between reunions with the object and with the human, and both differed from vocalizations during isolation: the human's presence led to shorter, noisier grunts with a higher frequency range. Finally, both stimuli decreased the piglets' isolation stress, and piglets appeared to be in a more positive emotional state with the human than with the object. This confirms the potential for positive human interactions to serve as pseudo-social enrichment for pigs.
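
The study's acoustic analysis pipeline is not reproduced here; the sketch below shows one common way to extract the kinds of grunt parameters it compares (duration, frequency range, noisiness) using librosa. The file name is a placeholder and the feature set is an assumption, not the authors' exact method.

```python
# Sketch: extracting duration, frequency-range, and noisiness measures from a
# pre-segmented grunt recording. "grunt.wav" is a placeholder file.
import librosa
import numpy as np

y, sr = librosa.load("grunt.wav", sr=None)   # one pre-segmented call
duration = len(y) / sr                       # call length in seconds

S = np.abs(librosa.stft(y))
centroid = librosa.feature.spectral_centroid(S=S, sr=sr).mean()
rolloff = librosa.feature.spectral_rolloff(S=S, sr=sr, roll_percent=0.95).mean()
flatness = librosa.feature.spectral_flatness(S=S).mean()   # near 1 = noise-like

print(f"duration {duration:.3f} s, centroid {centroid:.0f} Hz, "
      f"95% rolloff {rolloff:.0f} Hz, flatness {flatness:.2f}")
```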


Due to highly variable face geometry and appearance, Facial Expression Recognition (FER) remains a challenging problem. CNNs can characterize 2-D signals, so for emotion recognition in video the authors propose a feature selection model within the AlexNet architecture to extract and filter facial features automatically. Similarly, for emotion recognition in audio, the authors use a deep LSTM-RNN. Finally, they propose a probabilistic model for fusing the audio and visual models using a subject's facial features and speech. The model combines all the extracted features and uses them to train linear SVM (Support Vector Machine) classifiers. The proposed model outperforms existing models and achieves state-of-the-art performance for the audio, visual, and fusion models. It classifies the seven known facial expressions, namely anger, happiness, surprise, fear, disgust, sadness, and neutral, on the eNTERFACE'05 dataset with an overall accuracy of 76.61%.
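
The extractors and eNTERFACE'05 data are not reproduced here; the sketch below shows only the final fusion step the abstract describes, concatenating per-clip visual (CNN) and audio (LSTM) feature vectors and training a linear SVM over the seven classes, with random stand-in features.

```python
# Sketch of the fusion step: concatenate per-clip visual (CNN) and audio
# (LSTM) features, then train a linear SVM over seven expression classes.
# The feature arrays are random stand-ins for the paper's extractors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_clips = 700
visual = rng.normal(size=(n_clips, 4096))   # e.g. AlexNet fc7-style features
audio = rng.normal(size=(n_clips, 256))     # e.g. final LSTM hidden state
labels = rng.integers(0, 7, n_clips)        # anger .. neutral

fused = np.hstack([visual, audio])          # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)

clf = LinearSVC().fit(X_tr, y_tr)
print(f"accuracy on random stand-in data: {clf.score(X_te, y_te):.2f}")
```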


