SonicFace

Author(s):  
Yang Gao ◽  
Yincheng Jin ◽  
Seokmin Choi ◽  
Jiyang Li ◽  
Junjie Pan ◽  
...  

Accurate recognition of facial expressions and emotional gestures promises to reveal an audience's feedback on, and engagement with, entertainment content. Existing methods rely primarily on cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this end, we propose SonicFace, a novel ubiquitous sensing system based on a commodity microphone array that provides an accessible, unobtrusive, contact-free, and privacy-preserving way to monitor a user's emotional expressions continuously without playing any audible sound. SonicFace uses a speaker paired with a microphone array to recognize fine-grained facial expressions and emotional hand gestures from emitted ultrasound and the received echoes. In our experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures reaches around 80%. Moreover, extensive system evaluations under distinct configurations and an extended real-life case study demonstrate the robustness and generalizability of the proposed SonicFace system.
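
The abstract does not detail SonicFace's signal processing, but the core acoustic-sensing idea it builds on can be sketched briefly: emit an inaudible probe signal from the speaker, then matched-filter the microphone recording against the transmitted template so that facial motion shows up as changes in the echo profile. The Python sketch below is a minimal illustration under assumed parameters (sampling rate, chirp band, delays), not the actual SonicFace pipeline.

    # Minimal sketch of the ultrasound-echo idea behind systems like SonicFace.
    # All parameters are illustrative assumptions, not the paper's settings.
    import numpy as np
    from scipy.signal import chirp, correlate

    FS = 48_000               # assumed sampling rate (Hz)
    F0, F1 = 18_000, 22_000   # near-ultrasonic sweep, inaudible to most adults
    DUR = 0.01                # 10 ms probe chirp

    t = np.arange(int(FS * DUR)) / FS
    probe = chirp(t, f0=F0, t1=DUR, f1=F1)   # transmitted template

    def echo_profile(recording: np.ndarray) -> np.ndarray:
        """Matched-filter the recording against the probe chirp.

        Peaks correspond to reflections: their position encodes round-trip
        delay (distance), and their shape changes as the reflecting surface
        (e.g., the face) deforms.
        """
        return np.abs(correlate(recording, probe, mode="valid"))

    # Toy usage: a synthetic echo delayed by 2 ms (~0.7 m round-trip at 343 m/s)
    rx = np.zeros(int(FS * 0.05))
    delay = int(FS * 0.002)
    rx[delay:delay + probe.size] += 0.3 * probe
    profile = echo_profile(rx + 0.01 * np.random.randn(rx.size))
    print("strongest echo at sample", int(np.argmax(profile)))  # ≈ delay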

Author(s):  
Dhruv Verma ◽  
Sejal Bhalla ◽  
Dhruv Sahnan ◽  
Jainendra Shukla ◽  
Aman Parnami

Continuous and unobtrusive monitoring of facial expressions holds tremendous potential to enable compelling applications in a multitude of domains ranging from healthcare and education to interactive systems. Traditional vision-based facial expression recognition (FER) methods, however, are vulnerable to external factors like occlusion and lighting, while also raising privacy concerns coupled with the impractical requirement of positioning the camera in front of the user at all times. To bridge this gap, we propose ExpressEar, a novel FER system that repurposes commercial earables augmented with inertial sensors to capture fine-grained facial muscle movements. Following the Facial Action Coding System (FACS), which encodes every possible expression in terms of constituent facial movements called Action Units (AUs), ExpressEar identifies facial expressions at the atomic level. We conducted a user study (N=12) to evaluate the performance of our approach and found that ExpressEar can detect and distinguish between 32 facial AUs (including 2 variants of asymmetric AUs), with an average accuracy of 89.9% for any given user. We further quantify the performance across different mobile scenarios in the presence of additional face-related activities. Our results demonstrate ExpressEar's applicability in the real world and open up research opportunities to advance its practical adoption.
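
The abstract specifies the sensing modality (earable-mounted inertial sensors) but not the learning pipeline. As a rough illustration of how such pipelines are commonly structured, the sketch below windows a 6-axis IMU stream, extracts simple statistical features, and trains a per-user classifier over AU labels; the window length, features, and classifier are assumptions, not ExpressEar's published design.

    # Generic earable-IMU sketch: window the stream, featurize, classify AUs.
    # Everything here is an illustrative assumption, not ExpressEar's model.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    WIN = 100  # assumed window length in samples (e.g., 1 s at 100 Hz)

    def window_features(imu: np.ndarray) -> np.ndarray:
        """imu: (WIN, 6) array of [ax, ay, az, gx, gy, gz] samples."""
        return np.concatenate([
            imu.mean(axis=0),                           # static posture component
            imu.std(axis=0),                            # movement intensity
            np.abs(np.diff(imu, axis=0)).mean(axis=0),  # jerkiness
        ])

    # Toy training data: feature windows with integer AU labels (0..31)
    rng = np.random.default_rng(0)
    X = np.stack([window_features(rng.normal(size=(WIN, 6))) for _ in range(200)])
    y = rng.integers(0, 32, size=200)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("predicted AU index:", clf.predict(X[:1])[0])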


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Tuba Aktürk ◽  
Tom A. de Graaf ◽  
Yasemin Abra ◽  
Sevilay Şahoğlu-Göktaş ◽  
Dilek Özkan ◽  
...  

Background: Recognition of facial expressions (FEs) plays a crucial role in social interactions. Most studies on FE recognition use static (image) stimuli, even though real-life FEs are dynamic. FE processing is complex and multifaceted, and its neural correlates remain unclear. Transitioning from static to dynamic FE stimuli might help disentangle the neural oscillatory mechanisms underlying face processing and the recognition of emotion expression. To our knowledge, we present here the first time–frequency exploration of the oscillatory brain mechanisms underlying the processing of dynamic FEs.
Results: Videos of joyful, fearful, and neutral dynamic facial expressions were presented to 18 healthy young adults. We analyzed event-related activity in electroencephalography (EEG) data, focusing on delta-, theta-, and alpha-band oscillations. Since the videos involved a transition from a neutral to an emotional expression (onset around 500 ms), we identified time windows that might correspond to initial face perception (first time window; TW) and to subsequent emotion expression recognition (around 1000 ms; second TW). The first TW showed increased power and phase-locking values across all frequency bands. In the second TW, power and phase-locking values in the delta and theta bands were higher for emotional than for neutral FEs, thus potentially serving as a marker for emotion recognition in dynamic face processing.
Conclusions: Our time–frequency exploration revealed consistent oscillatory responses to complex, dynamic, ecologically meaningful FE stimuli. We conclude that while dynamic FE processing involves complex network dynamics, dynamic FEs were successfully used to reveal temporally separate oscillatory responses related to face processing and, subsequently, emotion expression recognition.
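
The two per-band quantities reported here, event-related power and the inter-trial phase-locking value (PLV), can be illustrated compactly: for N trials with instantaneous phase φ_n(t), PLV(t) = |(1/N) Σ_n exp(iφ_n(t))|. The sketch below computes both via band-pass filtering and the Hilbert transform; the authors' exact time–frequency method is not given in the abstract, so the filter design, band edges, and sampling rate are illustrative assumptions.

    # Band power and inter-trial PLV via band-pass + Hilbert transform.
    # Filter design and parameters are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 250  # assumed EEG sampling rate (Hz)

    def band_power_and_plv(trials: np.ndarray, lo: float, hi: float):
        """trials: (n_trials, n_samples) EEG for one channel and condition."""
        b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
        analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)
        power = np.mean(np.abs(analytic) ** 2, axis=0)  # mean power over trials
        plv = np.abs(np.mean(np.exp(1j * np.angle(analytic)), axis=0))  # PLV(t)
        return power, plv

    # Toy usage: 18 "trials" of noise sharing a weak 5 Hz (theta) component
    rng = np.random.default_rng(1)
    t = np.arange(2 * FS) / FS
    trials = rng.normal(size=(18, t.size)) + 0.5 * np.sin(2 * np.pi * 5 * t)
    theta_power, theta_plv = band_power_and_plv(trials, 4, 8)
    print("peak theta PLV:", theta_plv.max())  # near 1 where trials phase-align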


2021 ◽  
Vol 13 (03) ◽  
pp. 01-21
Author(s):  
Madhuri N. Gedam ◽  
B. B. Meshram

Oracle is one of the largest DBMS vendors and offers a leading object-relational DBMS solution in the IT world. Oracle Database is one of the three market-leading database technologies, along with Microsoft SQL Server and IBM DB2. In this paper we have therefore tried to answer the million-dollar question: “What is the user's responsibility in hardening the Oracle database for security?” The paper gives practical guidelines for hardening an Oracle database so that an attacker is prevented from gaining access to it. Practical measures for protecting the TNS listener, controlling and preventing remote server access, accessing files on a remote server, fetching environment variables, managing privileges and authorizations, access control, writing a security policy, database encryption, Oracle Data Masking, standard built-in auditing, and Fine-Grained Auditing (FGA) are illustrated with SQL syntax, executed on suitable real-life examples, and their output is tested and verified. This structured method acts as a “Data Invictus” wall against the attacker and protects the user's database.
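
The paper's examples are in SQL; to keep all code in this digest in one language, the sketch below drives two representative hardening steps from Python via the python-oracledb driver: flagging accounts that still use well-known default passwords (the DBA_USERS_WITH_DEFPWD view) and registering a Fine-Grained Auditing policy with DBMS_FGA.ADD_POLICY. The connection details and the audited schema/column are placeholders, and a DBA-privileged account is assumed.

    # Two representative hardening steps from the paper's checklist, run from
    # Python via python-oracledb. DSN, credentials, schema, and policy names
    # are placeholders; a DBA-privileged session is assumed.
    import oracledb

    conn = oracledb.connect(user="sec_admin", password="***", dsn="dbhost/orclpdb1")
    cur = conn.cursor()

    # 1. Find accounts still using Oracle's well-known default passwords.
    cur.execute("SELECT username FROM dba_users_with_defpwd")
    for (username,) in cur:
        print("default password still set for:", username)

    # 2. Register a Fine-Grained Auditing (FGA) policy that logs any SELECT
    #    touching the salary column of HR.EMPLOYEES.
    cur.execute("""
        BEGIN
            DBMS_FGA.ADD_POLICY(
                object_schema   => 'HR',
                object_name     => 'EMPLOYEES',
                policy_name     => 'AUDIT_SALARY_READS',
                audit_column    => 'SALARY',
                statement_types => 'SELECT');
        END;""")
    conn.commit()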


2020 ◽  
Vol 8 (6) ◽  
pp. 4519-4523

In real-life situations, facial expressions and emotions are simply responses to external and internal human events. In human-computer interaction, recognizing the end user's expressions and emotions from streaming video plays a significant role. Such systems must track the dynamic changes in human facial movements quickly in order to deliver the required response. One relevant application is physical fatigue detection based on face detection and expressions, for example driver fatigue detection to prevent road accidents. Facial-expression-based fatigue analysis and detection is beyond the scope of this paper; instead, the paper surveys methods recently presented for facial expression and emotion recognition from video. It presents these techniques in terms of the feature extraction and classification methods they use, together with a comparative analysis based on accuracy, implementation tools, advantages, and disadvantages. The outcome of this paper is the set of current research gaps and challenges that remain open for video-based facial detection and recognition systems. The survey of recent methods is presented throughout the paper with consideration of future research work.
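
As a concrete anchor for the surveyed feature-extraction and classification stages, the sketch below shows one classical combination among those such surveys compare: Haar-cascade face detection, HOG features, and a linear SVM. The detector, feature, and classifier choices are illustrative, and the training data (train_X, train_y) is assumed to come from a labeled expression dataset.

    # One classical FER pipeline: detect face, extract HOG features, classify.
    # Detector/feature/classifier choices are illustrative assumptions.
    import cv2
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_features(frame_gray: np.ndarray):
        """Return HOG features of the first detected face, or None."""
        faces = detector.detectMultiScale(frame_gray, scaleFactor=1.1,
                                          minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        face = cv2.resize(frame_gray[y:y + h, x:x + w], (64, 64))
        return hog(face, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    # Training (train_X: stacked feature vectors, train_y: expression labels):
    # clf = LinearSVC().fit(train_X, train_y)
    # Per-frame inference on a video stream:
    # feats = face_features(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # if feats is not None:
    #     print("expression:", clf.predict(feats[None, :])[0])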


Author(s):  
Eleonora FIORE ◽  
Giuliano SANSONE ◽  
Chiara Lorenza REMONDINO ◽  
Paolo Marco TAMBORRINI

Interest in offering Entrepreneurship Education (EE) to all kinds of university students is increasing. Universities are therefore expanding the number of entrepreneurship courses intended for students from different fields of study and with different education levels. Through a single case study of the Contamination Lab of Turin (CLabTo), we suggest how EE may be taught to all kinds of university students. We have combined design methods with EE to create a practice-oriented entrepreneurship course which allows students to work in transdisciplinary teams, through a learning-by-doing approach, on real-life projects. Professors from different departments were included to create a multidisciplinary environment. We have drawn on programme assessment data, including pre- and post-surveys. Overall, we found a positive effect of the programme on the students' entrepreneurial skills. However, when the data were broken down according to the students' fields of study and education levels, mixed results emerged.


2018 ◽  
Vol 60 (1) ◽  
pp. 55-65
Author(s):  
Krystyna Ilmurzyńska

This article investigates the suitability of traditional and participatory planning approaches in managing the spatial development of existing housing estates, based on a case study of Warsaw's Ursynów Północny district. The basic assumption of the article is that, in the absence of government schemes targeted at the restructuring of large housing estates, it is the business environment that drives spatial transformations and thereby shapes the development of participation. Consequently, the article focuses on the reciprocal relationship between spatial transformations and participatory practices. Analysis of Ursynów Północny against the background of other estates indicates that it presents more qualities under threat than problems to be tackled. The article therefore concentrates on the potential of the housing estate and on good practices that can be traced throughout its lifetime. It further examines real-life processes, addressing privatisation, development pressure, formal planning procedures and participatory budgeting. In conclusion, it attempts to interpret the existing spatial structure of the estate as a potential framework for a participatory approach.


2014 ◽  
Vol 30 (2) ◽  
pp. 113-126 ◽  
Author(s):  
Dominic Detzen ◽  
Tobias Stork genannt Wersborg ◽  
Henning Zülch

This case originates from a real-life business situation and illustrates the application of impairment tests in accordance with IFRS and U.S. GAAP. In the first part of the case study, students examine conceptual questions of impairment tests under IFRS and U.S. GAAP with respect to applicable accounting standards, definitions, value concepts, and frequency of application. In addition, the case encourages students to discuss the impairment regime from an economic point of view. The second part of the instructional resource continues to provide instructors with the flexibility of applying U.S. GAAP and/or IFRS when students are asked to test a long-lived asset for impairment and, if necessary, allocate any potential impairment. This latter part demonstrates that impairment tests require professional judgment that students are to exercise in the case.


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 52
Author(s):  
Tianyi Zhang ◽  
Abdallah El Ali ◽  
Chen Wang ◽  
Alan Hanjalic ◽  
Pablo Cesar

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) that recognizes the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies of laboratory-grade and wearable sensors are comparable, even at low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, lead to varied recognition performance.
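
The abstract names CorrNet's two feature families: intra-modality features computed inside each instance, and correlation-based features computed between instances of the same video stimulus. The sketch below is a simplified stand-in that pairs per-instance statistics with instance-to-instance correlations and a binary high/low classifier; the segment length, features, and classifier are assumptions, not the published CorrNet architecture.

    # Simplified stand-in for CorrNet's two feature families; parameters and
    # classifier are illustrative assumptions, not the published model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def instance_features(instances: np.ndarray) -> np.ndarray:
        """instances: (n_instances, n_samples) physiological signal segments
        (e.g., EDA) from one video stimulus."""
        intra = np.stack([instances.mean(axis=1),
                          instances.std(axis=1)], axis=1)  # per-instance stats
        corr = np.corrcoef(instances)                      # instance-to-instance
        corr_feats = corr.mean(axis=1, keepdims=True)      # mean correlation with
        return np.hstack([intra, corr_feats])              # the other instances

    # Toy usage: 30 four-second instances at 64 Hz, binary high/low V labels
    rng = np.random.default_rng(2)
    segs = rng.normal(size=(30, 4 * 64))
    X, y = instance_features(segs), rng.integers(0, 2, size=30)
    clf = LogisticRegression().fit(X, y)
    print("high/low valence prediction:", clf.predict(X[:1])[0])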

