Facial responses during listening predict authenticity judgments of vocal emotions

2020
Author(s): Cesar Lima, Patricia Arriaga, Andrey Anikin, Ana Rita Pires, Sofia Frade, ...

The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear, however, whether and how similar mechanisms extend to audition. To address this issue, we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect a genuine or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Genuine laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to performance in a subsequent authenticity detection task. Stronger responses in the orbicularis predicted improved recognition of genuine laughs. Stronger responses in the corrugator, a muscle associated with negative affect, predicted improved recognition of posed laughs. Moreover, genuine laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, though. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
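As an illustration of the kind of Bayesian mixed-model analysis described above, a minimal sketch is given below; the data file, column names (zygomaticus, authenticity, emotion, participant), and sampler settings are hypothetical and are not the authors' actual data or code.

```python
# Hypothetical sketch of a Bayesian mixed model relating facial EMG activity
# to vocalization authenticity; all column names are illustrative.
import bambi as bmb
import arviz as az
import pandas as pd

# Assumed long-format data: one row per trial, with standardized EMG amplitude.
df = pd.read_csv("emg_trials.csv")

# Fixed effects of authenticity (genuine vs. posed) and emotion (laugh vs. cry),
# plus their interaction, with a random intercept for each participant.
model = bmb.Model("zygomaticus ~ authenticity * emotion + (1|participant)", data=df)
idata = model.fit(draws=2000, chains=4)

# Posterior summaries for the model parameters.
print(az.summary(idata))
```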

Author(s): Светлана Игоревна Буркова

Using Russian Sign Language (RSL) as an example, the paper argues that the tools developed to assess the vitality and maintenance of spoken languages are not entirely suitable for sign languages. If, for instance, the vitality of RSL is rated on the six-point scale of the "nine factors" framework proposed by UNESCO (Language vitality…, 2003) and used in the Atlas of the World's Languages in Danger, RSL would score no more than 3 points, i.e. it would be classified as an endangered language. It is an unwritten language, used mainly in everyday communication; it exists in the environment of the functionally far more powerful spoken Russian; the overwhelming majority of RSL signers are bilinguals who command spoken Russian, at least in its written form, to some degree; most deaf signers acquire RSL not in the family from birth but later in life, in kindergartens or schools; the conditions of acquisition affect signers' proficiency; spoken Russian influences RSL's lexicon and grammar; and RSL remains insufficiently studied and poorly documented. In reality, however, RSL is stably maintained under these conditions and has recently even expanded its vocabulary and spheres of use. The key factor that supports the maintenance of a sign language, and that existing vitality-assessment methods do not take into account, is the modality in which it exists. Because the auditory modality is inaccessible or only partly accessible to deaf people, they cannot shift entirely to a spoken language; the visual modality remains the most natural channel for their communication, and modern communication technologies and the internet provide additional opportunities for maintaining and developing the language in that modality.


2021, Vol 11 (1)
Author(s): Candice Frances, Eugenia Navarra-Barindelli, Clara D. Martin

Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality, as well as the interplay between type of similarity and modality, remains largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. The results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into our current processing models.
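The "improved signal detection" result suggests a sensitivity (d′) analysis of the lexical decision responses. Below is a minimal sketch of how d′ can be computed from hit and false-alarm counts; the numbers and the log-linear correction are illustrative choices, not taken from the study.

```python
# Minimal d-prime (sensitivity) computation for a yes/no lexical decision task.
# The counts in the example call are illustrative, not the study's data.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 78 hits, 22 misses, 15 false alarms, 85 correct rejections.
print(round(d_prime(78, 22, 15, 85), 3))
```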


2019, Vol 28 (4), pp. 1016-1036
Author(s): Mengyu Miranda Gao, Aryanne D. de Silva, E. Mark Cummings, Patrick T. Davies

Author(s): Aaron Crowson, Zachary H. Pugh, Michael Wilkinson, Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has created a growing need to represent the physical world to users while they are immersed in the virtual one. Existing research has focused on representing static objects in the physical room, but little work has examined how to notify VR users of changes in their environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, it investigates how an orientation-type notification aids perception of alerts that appear outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; in practice, however, the visual modality was more effective for orientation notifications.


2017, Vol 30 (4), pp. 379-395
Author(s): Alia J. Crum, Modupe Akinola, Ashley Martin, Sean Fath

2017, Vol 28 (03), pp. 222-231
Author(s): Riki Taitelbaum-Swead, Michal Icht, Yaniv Mama

In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and in their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying them aloud) or by no production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice, once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users implanted between ages 1.7 and 4.5 yr who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent) as a within-subject variable. Paired-sample t tests were then used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each learning condition. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed memory performance, and a PE, comparable to their NH peers. With auditory presentation, however, young adults with CIs showed poorer memory for non-produced words (and hence a larger PE) than their NH peers. The results support the view that young adults with CIs benefit more from learning via the visual modality (reading) than via the auditory modality (listening). Importantly, vocal production can substantially improve auditory word memory, especially for the CI group.
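A minimal sketch of the production-effect comparison described above is given below, assuming a long-format table of recall proportions per participant and condition; the file name and column names are hypothetical, not the authors' materials.

```python
# Hypothetical sketch of the production-effect (PE) analysis: recall for words
# produced aloud vs. studied silently, compared within participants.
import pandas as pd
from scipy.stats import ttest_rel

# Assumed columns: participant, condition ("aloud" / "silent"), prop_recalled.
recall = pd.read_csv("recall_proportions.csv")

# Align the two conditions by participant so the comparison is paired.
wide = recall.pivot(index="participant", columns="condition", values="prop_recalled")

pe_size = (wide["aloud"] - wide["silent"]).mean()           # mean PE size
t_stat, p_value = ttest_rel(wide["aloud"], wide["silent"])  # paired-sample t test
print(f"PE = {pe_size:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```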


2014, Vol 18 (3), pp. 490-501
Author(s): Roberto Filippi, John Morris, Fiona M. Richardson, Peter Bright, Michael S. C. Thomas, ...

Studies measuring inhibitory control in the visual modality have shown a bilingual advantage in both children and adults. However, there is a lack of developmental research on inhibitory control in the auditory modality. This study compared the comprehension of active and passive English sentences in 7- to 10-year-old bilingual and monolingual children. The task was to identify the agent of a sentence in the presence of verbal interference. The target sentence was cued by the gender of the speaker; children were instructed to focus on the sentence in the target voice and ignore the distractor sentence. Results indicate that bilinguals are more accurate than monolinguals in comprehending syntactically complex sentences in the presence of linguistic noise. This supports previous findings with adult participants (Filippi, Leech, Thomas, Green & Dick, 2012). We therefore conclude that the bilingual advantage in interference control emerges early in life and is maintained throughout development.


2020, Vol 16 (3), pp. 419-438
Author(s): Ting Wu

The development of new media enlarges the repertoire of semantic resources available for creating a discourse. Apart from language, visual and sound symbols can all serve as semantic sources, and a synergy of different modalities and symbols can be used to accomplish argumentative reasoning and evaluation. Within the framework of multimodal argumentation and appraisal theory, this study conducted a quantitative and multimodal discourse analysis of the new media discourse "Building a community of shared future for humankind" and found that visual symbols can independently fulfill both reasoning and evaluation in argumentative discourse. An interplay of multiple modalities constructs a multi-layered semantic source, with verbal subtitles serving as a frame and a sound system designed to reinforce the theme and mood. In addition, the visual modality is implicit in constructing the stance and evaluation of the discourse, with the verbal mode playing the role of "anchoring," i.e. providing explicit explanation. A synergy of visual, acoustic, and verbal modalities can effectively transmit conceptual, interpersonal, and discursive meanings, but its persuasive effect on audiences from different cultural backgrounds may be mixed.


1981, Vol 24 (3), pp. 351-357
Author(s): Paula Tallal, Rachel Stark, Clayton Kallman, David Mellits

A battery of nonverbal perceptual and memory tests was given to 35 language-impaired (LI) and 38 control subjects. Tests were given in three modalities: auditory, visual, and cross-modal (auditory and visual). The purpose was to reexamine some nonverbal perceptual and memory abilities of LI children as a function of age and modality of stimulation. Results failed to replicate previous findings of a temporal processing deficit specific to the auditory modality in LI children. The LI group made significantly more errors than controls, regardless of modality of stimulation, when two-item sequences were presented rapidly or when more than two stimuli were presented in series. However, further analyses resolved this apparent conflict between the present and earlier studies by demonstrating that age is an important variable underlying the modality specificity of perceptual performance in LI children. Whereas younger LI children were equally impaired when responding to rapidly presented auditory and visual stimuli, older LI subjects made nearly twice as many errors responding to rapidly presented auditory stimuli as to visual stimuli. This developmental difference did not occur for the control group.

