nonverbal behaviors
Recently Published Documents


TOTAL DOCUMENTS

222
(FIVE YEARS 49)

H-INDEX

25
(FIVE YEARS 3)

2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 480-481
Author(s):  
Carissa Coleman ◽  
Kristine Williams ◽  
Kacie Inderhees ◽  
Michaela Richardson

Abstract: Communication is fundamental to dementia care, and identifying communication behaviors is key to identifying strategies that facilitate or impede communication. To measure caregiver nonverbal communication, we adapted the Verbal and Nonverbal Interaction Scale for Caregivers (VNVIS-CG) for second-by-second behavioral coding of video observations. The VNVIS-CG was adapted for computer-assisted Noldus Observer coding of video interactions captured at home by family caregivers from the FamTechCare clinical trial. Operational definitions for nonverbal communication behaviors were developed, and inter-rater reliability between two independent coders was excellent (Kappa = .88). A total of N = 232 videos featuring 51 dyads were coded; caregivers were primarily female (80%) spouses (69%), and care recipients were mostly men (55%) diagnosed with moderate to severe dementia (64.7%). Mean caregiver age was 65 years. The emotional tone conveyed by caregivers was primarily respectful (68.1% of the time), followed by overly nurturing (9%) and bossy, harsh, or antagonistic (6.2%); silence occurred 16.7% of the time. Caregiver gestures and positive postures (i.e., animated facial expressions, head nodding, or caregiver body movements) were the most commonly occurring overt behaviors (46.5%), followed by changing the environment to help the person with dementia (PWD) (19.9%) and expressing laughter/joy (18.9%). The least common nonverbal behaviors were negative posture, aggression, compassion, and rejecting. The adapted behavioral coding scheme provides a reliable measure that characterizes dementia caregiver nonverbal communication behaviors for analysis of video observations. Ongoing research will identify strategies that facilitate communication and determine how strategies vary by dementia stage, diagnosis, and dyad characteristics.
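The inter-rater reliability reported above (Kappa = .88) refers to Cohen's kappa, which corrects raw agreement between two coders for agreement expected by chance. A minimal sketch of the computation, using invented behavior codes rather than the study's actual data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters coding the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal label frequencies."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each rater's marginal proportions per label
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical second-by-second emotional-tone codes from two coders
a = ["respectful", "respectful", "silence", "nurturing", "respectful"]
b = ["respectful", "respectful", "silence", "respectful", "respectful"]
kappa = cohens_kappa(a, b)
```

A kappa of 0.88, as reported in the abstract, is conventionally interpreted as almost perfect agreement.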


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mitchel Kappen ◽  
Marnix Naber

Abstract: Society suffers from biases and discrimination, a longstanding dilemma that stems from ungrounded, subjective judgments. Unequal opportunities in labor in particular remain a persistent challenge, despite the recent introduction of top-down diplomatic measures. Here we propose a solution: an objective approach to measuring the nonverbal behaviors of job candidates who trained for a job assessment. We implemented and developed artificial intelligence, computer vision, and unbiased machine learning software to automatically detect facial muscle activity and emotional expressions and to predict the candidates' self-reported motivation levels. The motivation judgments by our model outperformed recruiters' unreliable, invalid, and sometimes biased judgments. These findings underscore the necessity and usefulness of novel, bias-free, and scientific approaches to candidate and employee screening and selection procedures in recruitment and human resources.
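The core idea of mapping detected facial activity to self-reported motivation can be illustrated with a minimal least-squares sketch. The feature (mean AU12 "smile" intensity per candidate), the data values, and the 1–7 motivation scale below are all invented for illustration; the authors' actual pipeline and features are not specified here.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ w*x + b with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Hypothetical data: mean facial action unit (AU12) intensity per
# candidate vs. self-reported motivation on a 1-7 scale
au12 = [0.2, 0.5, 0.8, 1.1, 1.4]
motivation = [2.0, 3.1, 4.2, 5.0, 6.1]
w, b = fit_linear(au12, motivation)
predicted = w * 1.0 + b  # predicted motivation for a candidate at AU12 = 1.0
```

A real system would use many facial features and a regularized or nonlinear model, but the fitting-and-predicting structure is the same.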


2021 ◽  
Vol 33 (6) ◽  
pp. 1-14
Author(s):  
Yanqun Huang ◽  
Gaofeng Pan ◽  
Xu Li ◽  
Zhe Sun ◽  
Shinichi Koyama ◽  
...  

This study proposes a method for mining potential user requirements from users' nonverbal behaviors by analyzing their operational problems, since human behaviors reflect emotions and operational bottlenecks in human-machine interactions. Taking a single daily operation task as an example, the method comprises three key steps: first, modeling the user's operation and constructing the operation chain; second, finding emotional or physical problems in the operation chain, where each problem is defined mathematically as an emotional or physical load at a sub-operation; and third, defining and obtaining potential user requirements by resolving the operational problems encountered when performing the task. A daily operation task was then introduced to demonstrate and validate the method. The results indicate that the method is effective in discovering potential needs for a specific product and providing satisfactory solutions by calculating and optimizing operational problems.
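The operation-chain idea can be sketched as a list of sub-operations, each carrying an emotional and a physical load, with the bottleneck being the sub-operation of highest combined load. The sub-operation names, load values, and equal weighting below are illustrative assumptions, not the paper's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class SubOperation:
    name: str
    emotional_load: float  # e.g., a 0-1 score from coded expressions
    physical_load: float   # e.g., a 0-1 score from observed effort

def find_bottleneck(chain, w_e=0.5, w_p=0.5):
    """Return the sub-operation with the highest weighted total load.
    The weights w_e and w_p are illustrative, not from the paper."""
    return max(chain, key=lambda s: w_e * s.emotional_load + w_p * s.physical_load)

# Hypothetical operation chain for a daily appliance task
chain = [
    SubOperation("open lid", 0.2, 0.7),
    SubOperation("fill container", 0.1, 0.3),
    SubOperation("set controls", 0.8, 0.2),
]
bottleneck = find_bottleneck(chain)
```

In this framing, the potential user requirement is whatever design change would reduce the bottleneck sub-operation's load.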


Author(s):  
Roberta Bevilacqua ◽  
Elisa Felici ◽  
Filippo Cavallo ◽  
Giulio Amabili ◽  
Elvira Maranesi

The aim of this paper was to explore the psychosocial determinants of acceptability and willingness to interact with a service robot, starting from an analysis of older users' behaviors toward the Robot-Era platform, in order to provide strategies for the promotion of socially assistive robotics. A mixed-method approach was used to collect information on acceptability, usability, and human–robot interaction by analyzing nonverbal behaviors, emotional expressions, and verbal communication. The study involved 35 older adults (22 women and 13 men), with a mean age of 73.8 (±6) years. Video interaction analysis was conducted to capture the users' gestures, statements, and expressions, using a coding scheme designed on the basis of the literature in the field. Percentages of time and frequencies of the selected events are reported, and the users' statements were collected and analyzed. The results of the behavioral analysis reveal a largely positive attitude, inferred from nonverbal cues and nonverbal emotional expressions. The results highlight the need to provide robotic solutions that are well matched to the tasks they offer to users. It is also necessary to give older consumers dedicated training in technological literacy to guarantee proper, long-lasting, and successful use.


2021 ◽  
Author(s):  
Davide Cannata ◽  
Simon Mats Breil ◽  
Mitja Back ◽  
Bruno Lepri ◽  
Denis O'Hora

Our first impressions of the people we meet are the subject of considerable interest, academic and non-academic. Such initial estimates of another's personality (e.g., their sociability or agreeableness) are vital, since they enable us to predict the outcomes of interactions (e.g., can we trust them?). Nonverbal behaviors are a key medium through which personality is expressed and detected. The character and reliability of these expression and detection processes have been investigated within two major fields: psychological research on personality judgment accuracy and artificial intelligence research on personality computing. Communication between these fields has, however, been infrequent. In the present perspective, we summarize the contributions and open questions of both fields and propose an integrative approach that combines their strengths and overcomes their limitations. The integrated framework will enable novel research programs, such as (i) identifying which detection tasks better suit humans or computers, (ii) harmonizing the nonverbal features extracted by humans and computers, and (iii) integrating human and artificial agents in hybrid systems.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nada Kojovic ◽  
Shreyasvi Natraj ◽  
Sharada Prasanna Mohanty ◽  
Thomas Maillart ◽  
Marie Schaer

Abstract: Clinical research in autism has recently witnessed promising digital phenotyping results, mainly focused on the extraction of single features such as gaze, head turns in response to name calls, or visual tracking of a moving object. The main drawback of these studies is their focus on relatively isolated behaviors elicited by largely controlled prompts. We recognize that while the diagnostic process involves indexing specific behaviors, ASD also comes with broad impairments that often transcend single behavioral acts. For instance, atypical nonverbal behaviors manifest through global patterns of atypical postures and movements and fewer gestures, often decoupled from visual contact, facial affect, and speech. Here, we tested the hypothesis that a deep neural network trained on the nonverbal aspects of social interaction can effectively differentiate between children with ASD and their typically developing peers. Our model achieves an accuracy of 80.9% (F1 score: 0.818; precision: 0.784; recall: 0.854), with the prediction probability positively correlated with the overall level of autism symptoms in the social affect and restricted and repetitive behaviors domains. Given the non-invasive and affordable nature of computer vision, our approach holds reasonable promise that reliable machine-learning-based ASD screening may become a reality in the not-too-distant future.
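The reported metrics are internally consistent: the F1 score is the harmonic mean of precision and recall, which can be verified directly from the values in the abstract.

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision and recall as reported in the abstract
precision, recall = 0.784, 0.854
f1 = f1_score(precision, recall)  # rounds to 0.818, matching the reported F1
```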


2021 ◽  
Vol 2 ◽  
Author(s):  
Fatemeh Tavassoli ◽  
Diane M. Howell ◽  
Erik W. Black ◽  
Benjamin Lok ◽  
Juan E. Gilbert

This initial exploratory study's primary focus is to investigate the effectiveness of a virtual patient training platform in presenting a health condition with a range of symptoms and severity levels. The secondary goal is to examine visualization's role in better demonstrating variations in symptoms and severity levels to improve learning outcomes. We designed and developed a training platform with a four-year-old pediatric virtual patient named JAYLA to teach medical learners the spectrum of symptoms and severity levels of Autism Spectrum Disorder in young children. JAYLA presents three sets of verbal and nonverbal behaviors, associated with age-appropriate behavior, mild autism, and severe autism. To better distinguish the severity levels, we designed an innovative interface called the spectrum-view, displaying all three simulated severity levels side-by-side and within the eye span. We compared its effectiveness with a traditional single-view interface, displaying only one severity level at a time. We performed a user study with thirty-four pediatric trainees to evaluate JAYLA's effectiveness. Results suggest that training with JAYLA improved the trainees' performance in careful observation and accurate classification of real children's behaviors in video vignettes. However, we did not find any significant difference between the two interface conditions. The findings demonstrate the applicability of the JAYLA platform to enhance professional training for early detection of autism in young children, which is essential to improve the quality of life for affected individuals, their families, and society.


Author(s):  
Thomas I. Vaughan-Johnston ◽  
Joshua J. Guyer ◽  
Leandre R. Fabrigar ◽  
Charlie Shen

Abstract: Past research has largely focused on how emotional expressions provide information about the speaker's emotional state but has generally neglected vocal affect's influence on communication effectiveness. This is surprising, given that other nonverbal behaviors often influence communication between individuals. In the present theory paper, we develop a novel perspective, the Contextual Influences of Vocal Affect (CIVA) model, to predict and explain the psychological processes by which vocal affect may influence communication through three broad categories of process: emotion origin/construal, changing emotions, and communication source inferences. We describe research that explores potential moderators (e.g., affective/cognitive message types, message intensity) and mechanisms (e.g., emotional assimilation, attributions, surprise) shaping the effects of vocally expressed emotions on communication. We discuss when and why emotions expressed through the voice can influence the effectiveness of communication. CIVA advances theoretical and applied psychology by providing a clear theoretical account of vocal affect's diverse impacts on communication.

