Leveraging the Deep Learning Paradigm for Continuous Affect Estimation from Facial Expressions

Author(s):
Meshia Cedric Oveneke
Yong Zhao
Ercheng Pei
Abel Diaz Berenguer
Dongmei Jiang
...

Author(s):
Fallon Branch
Allison JoAnna Lewis
Isabella Noel Santana
Jay Hegdé

Abstract:
Camouflage-breaking is a special case of visual search where an object of interest, or target, can be hard to distinguish from the background even when in plain view. We have previously shown that naive, non-professional subjects can be trained using a deep learning paradigm to accurately perform a camouflage-breaking task in which they report whether or not a given camouflage scene contains a target. But it remains unclear whether such expert subjects can actually detect the target in this task, or whether they merely sense that the two classes of images are somehow different without being able to find the target per se. Here, we show that when subjects break camouflage, they can also localize the camouflaged target accurately, even though they had received no specific training in localizing the target. Localization was significantly accurate even when subjects viewed the scene for as little as 50 ms, and more so when they were able to view the scenes freely. The accuracy and precision of target localization by expert subjects in the camouflage-breaking task were statistically indistinguishable from those of naive subjects during a conventional visual search in which the target ‘pops out’, i.e., is readily visible to the untrained eye. Together, these results indicate that when expert camouflage-breakers detect a camouflaged target, they can also localize it accurately.
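The abstract reports the accuracy and precision of target localization and a statistical comparison between expert camouflage-breakers and naive subjects in a pop-out search, but does not spell out how those quantities are computed. The sketch below shows one plausible way to quantify and compare them from per-trial reported versus true target positions; the Euclidean error metric, the Welch t-test, and all array names are illustrative assumptions, not the paper's reported analysis.

```python
# Illustrative sketch (not the paper's actual analysis): quantify target-localization
# accuracy and precision from per-trial reported vs. true target positions, and compare
# two subject groups (e.g., expert camouflage-breakers vs. naive pop-out searchers).
# Array names, the error metric, and the statistical test are assumptions.
import numpy as np
from scipy import stats

def localization_errors(reported_xy: np.ndarray, true_xy: np.ndarray) -> np.ndarray:
    """Euclidean distance (in pixels) between reported and true target positions, per trial."""
    return np.linalg.norm(reported_xy - true_xy, axis=1)

def summarize(errors: np.ndarray) -> tuple[float, float]:
    """Accuracy as mean error, precision as the spread (standard deviation) of errors."""
    return float(errors.mean()), float(errors.std(ddof=1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated data standing in for real trials: (x, y) positions on the display.
    true_xy = rng.uniform(0, 512, size=(100, 2))
    expert_reports = true_xy + rng.normal(0, 12, size=true_xy.shape)  # camouflage-breaking group
    naive_reports = true_xy + rng.normal(0, 12, size=true_xy.shape)   # pop-out search group

    expert_err = localization_errors(expert_reports, true_xy)
    naive_err = localization_errors(naive_reports, true_xy)
    t, p = stats.ttest_ind(expert_err, naive_err, equal_var=False)    # Welch's two-sample test
    print("expert mean/sd error:", summarize(expert_err))
    print("naive  mean/sd error:", summarize(naive_err))
    print(f"group difference: t = {t:.2f}, p = {p:.3f}")
```

With real trial data in place of the simulated positions, "statistically indistinguishable" would correspond to a non-significant group difference under whatever test the authors actually used.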


Author(s):
Toshiyuki Hata
Takahito Miyake
Aya Koyanagi
Saori Bouno
Yasunari Miyagi

2020, Vol 9 (3), pp. 1208-1219
Author(s):
Hendra Kusuma
Muhammad Attamimi
Hasby Fahrudin

In general, good interaction, including communication, is achieved when verbal and non-verbal information such as body movements, gestures, and facial expressions can be processed in both directions between the speaker and the listener. Facial expression in particular is an indicator of the inner state of the speaker and/or the listener during communication, so recognizing facial expressions is an important ability in communication. This ability is a challenge for visually impaired persons, which motivated us to develop a facial expression recognition system. Our system is based on a deep learning algorithm, and we implemented it on a wearable device that enables visually impaired persons to recognize facial expressions during communication. We conducted several experiments involving visually impaired persons to validate the proposed system and achieved promising results.
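The abstract describes the system only at a high level (deep-learning-based facial expression recognition on a wearable device). As a rough illustration, the sketch below shows a minimal CNN classifier over detected face crops; the 48x48 grayscale input, the seven expression classes (a common FER-2013-style convention), and the architecture are assumptions for illustration, not the authors' implementation.

```python
# Minimal, illustrative sketch of a CNN-based facial expression classifier.
# Input size, class names, and architecture are assumptions for illustration;
# the paper's actual model and training data are not specified here.
import torch
import torch.nn as nn

EXPRESSIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

def predict_expression(model: nn.Module, face_crop: torch.Tensor) -> str:
    """Classify a single 1x48x48 grayscale face crop (values in [0, 1])."""
    model.eval()
    with torch.no_grad():
        logits = model(face_crop.unsqueeze(0))       # add batch dimension
        return EXPRESSIONS[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    model = ExpressionCNN()
    dummy_face = torch.rand(1, 48, 48)               # stand-in for a detected face crop
    print(predict_expression(model, dummy_face))     # untrained weights -> arbitrary label
```

In a wearable setting, the predicted label would presumably be relayed to the user through a non-visual channel such as synthesized speech or haptic feedback.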


Author(s):  
Rahul Kumar Gupta
Shreeja Lakhlani
Zahabiya Khedawala
Vishal Chudasama
Kishor P. Upla

IEEE Access, 2020, Vol 8, pp. 58683-58699
Author(s):
Khalil Khan
Rehan Ullah Khan
Kashif Ahmad
Farman Ali
Kyung-Sup Kwak

2018, Vol 155, pp. 165-177
Author(s):
Mainak Biswas
Venkatanareshbabu Kuppili
Damodar Reddy Edla
Harman S. Suri
Luca Saba
...  
