auditory modality
Recently Published Documents


TOTAL DOCUMENTS: 222 (FIVE YEARS 74)
H-INDEX: 29 (FIVE YEARS 2)

Cognition ◽  
2022 ◽  
Vol 222 ◽  
pp. 105009
Author(s):  
Jacques Pesnot Lerousseau ◽  
Céline Hidalgo ◽  
Stéphane Roman ◽  
Daniele Schön

Languages ◽  
2022 ◽  
Vol 7 (1) ◽  
pp. 12
Author(s):  
Peiyao Chen ◽  
Ashley Chung-Fat-Yim ◽  
Viorica Marian

Emotion perception frequently involves the integration of visual and auditory information. During multisensory emotion perception, the attention devoted to each modality can be measured by calculating the difference between trials in which the facial expression and speech input exhibit the same emotion (congruent) and trials in which the facial expression and speech input exhibit different emotions (incongruent) to determine the modality that has the strongest influence. Previous cross-cultural studies have found that individuals from Western cultures are more distracted by information in the visual modality (i.e., visual interference), whereas individuals from Eastern cultures are more distracted by information in the auditory modality (i.e., auditory interference). These results suggest that culture shapes modality interference in multisensory emotion perception. It is unclear, however, how emotion perception is influenced by cultural immersion and exposure due to migration to a new country with distinct social norms. In the present study, we investigated how the amount of daily exposure to a new culture and the length of immersion impact multisensory emotion perception in Chinese-English bilinguals who moved from China to the United States. In an emotion recognition task, participants viewed facial expressions and heard emotional but meaningless speech either from their previous Eastern culture (i.e., Asian face-Mandarin speech) or from their new Western culture (i.e., Caucasian face-English speech) and were asked to identify the emotion from either the face or voice, while ignoring the other modality. Analyses of daily cultural exposure revealed that bilinguals with low daily exposure to the U.S. culture experienced greater interference from the auditory modality, whereas bilinguals with high daily exposure to the U.S. culture experienced greater interference from the visual modality. These results demonstrate that everyday exposure to new cultural norms increases the likelihood of showing a modality interference pattern that is more common in the new culture. Analyses of immersion duration revealed that bilinguals who spent more time in the United States were equally distracted by faces and voices, whereas bilinguals who spent less time in the United States experienced greater visual interference when evaluating emotional information from the West, possibly due to over-compensation when evaluating emotional information from the less familiar culture. These findings suggest that the amount of daily exposure to a new culture and length of cultural immersion influence multisensory emotion perception in bilingual immigrants. While increased daily exposure to the new culture aids with the adaptation to new cultural norms, increased length of cultural immersion leads to similar patterns in modality interference between the old and new cultures. We conclude that cultural experience shapes the way we perceive and evaluate the emotions of others.
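
As a reading aid, the congruency-difference logic described in this abstract can be sketched roughly as follows; the column names, data values, and accuracy-based scoring are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch of a congruency-difference interference score.
import pandas as pd

trials = pd.DataFrame({
    "attended":  ["face", "face", "voice", "voice"],   # modality participants judged
    "congruent": [True, False, True, False],           # do face and voice emotions match?
    "accuracy":  [0.95, 0.80, 0.93, 0.74],             # illustrative mean proportion correct
})

def interference(df, attended):
    """Congruent minus incongruent accuracy while judging one modality."""
    sub = df[df["attended"] == attended]
    return (sub.loc[sub["congruent"], "accuracy"].mean()
            - sub.loc[~sub["congruent"], "accuracy"].mean())

# When judging the face, the accuracy drop reflects auditory interference;
# when judging the voice, it reflects visual interference.
auditory_interference = interference(trials, "face")
visual_interference = interference(trials, "voice")
```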


2021 ◽  
Author(s):  
Christos Halkiopoulos

This is my BSc dissertation, completed at and submitted to UCL's Psychology Department in 1981. It reports on the attentional probe paradigm, which I first used in the auditory modality to demonstrate attentional biases in the processing of threatening information by participants with identifiable personality characteristics. A group of researchers at St. George's (University of London), introduced to the paradigm by M. W. Eysenck, later applied it in the visual modality (the dot-probe paradigm). The dissertation is hand-written and was rather hurriedly put together, but it is still easy to read. The experimental work that introduced the attentional probe paradigm appears towards the end of the dissertation.


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Bianca Maria Serena Inguscio ◽  
Giulia Cartocci ◽  
Nicolina Sciaraffa ◽  
Claudia Nasta ◽  
Andrea Giorgi ◽  
...  

Exploration of specific brain areas involved in verbal working memory (VWM) is a powerful but not widely used tool for the study of different sensory modalities, especially in children. In this study, for the first time, we used electroencephalography (EEG) to investigate neurophysiological similarities and differences in response to the same verbal stimuli, presented in the auditory and visual modalities during an n-back task with varying memory load in children. Since VWM plays an important role in learning ability, we wanted to investigate whether children process verbal input from auditory and visual stimuli through the same neural patterns and whether performance varies depending on the sensory modality. Performance in terms of reaction times was better in the visual than in the auditory modality (p = 0.008) and worse as memory load increased, regardless of modality (p < 0.001). EEG activation was proportionally influenced by task level and was evident in the theta band over the prefrontal cortex (p = 0.021), along the midline (p = 0.003), and over the left hemisphere (p = 0.003). Differences between the two modalities were seen only in the gamma band over the parietal cortices (p = 0.009). The values of a brainwave-based engagement index, innovatively used here to test children in a dual-modality VWM paradigm, varied with n-back task level (p = 0.001) and correlated negatively (p = 0.002) with performance, suggesting its computational effectiveness in detecting changes in mental state during memory tasks involving children. Overall, our findings suggest that auditory and visual VWM involve the same cortical areas (frontal, parietal, occipital, and midline) and that the significant differences in cortical activation in the theta band were related more to memory load than to sensory modality, suggesting that VWM function in the child's brain involves a cross-modal processing pattern.
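
The abstract does not spell out how the brainwave-based engagement index is computed. One widely used form in the EEG literature, shown here purely as an illustration and not necessarily the index used in this study, is the ratio of beta power to the sum of alpha and theta power (Pope et al., 1995).

```python
# Illustration only: a common EEG engagement index is beta / (alpha + theta).
# The index used in the study above may be defined differently.
def engagement_index(theta_power: float, alpha_power: float, beta_power: float) -> float:
    """Band powers (e.g., averaged over channels within an epoch) -> engagement score."""
    return beta_power / (alpha_power + theta_power)

# Hypothetical band powers (arbitrary units) for a single epoch
ei = engagement_index(theta_power=4.2, alpha_power=3.1, beta_power=2.6)
```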


Author(s):  
Светлана Игоревна Буркова

Using Russian Sign Language (RSL) as an example, the paper attempts to show that the tools developed to assess the vitality and endangerment of spoken languages are not entirely suitable for assessing the vitality of sign languages. For instance, if RSL were rated on the six-point scale of the "nine factors" system proposed by UNESCO (Language vitality…, 2003) and used in the Atlas of the World's Languages in Danger, it would score no more than 3 points; in other words, RSL would be classified as an endangered language. It is an unwritten language, used mainly in everyday communication, and it exists in the environment of the functionally far more powerful spoken Russian. The overwhelming majority of RSL signers are bilingual, with some command of spoken Russian in its oral or written form; most deaf children acquire RSL not in the family from birth but later in life, at kindergarten or school; the conditions of acquisition affect signers' language proficiency; the surrounding spoken Russian influences the lexicon and grammar of RSL; and the language remains insufficiently studied and poorly documented. In reality, however, RSL is stably maintained under these conditions and has recently even expanded its vocabulary and its spheres of use. The main factor that supports the maintenance of a sign language, and that is not taken into account by existing methods for assessing language vitality, is the modality in which the language exists. Because the auditory modality is inaccessible or only poorly accessible to deaf people, they cannot shift completely to a spoken language; the visual modality remains the most natural channel for their communication, and modern means of communication and the internet provide additional opportunities for maintaining and developing the language in this modality.


2021 ◽  
Vol 15 ◽  
Author(s):  
Justyna O. Ekert ◽  
Matthew A. Kirkman ◽  
Mohamed L. Seghier ◽  
David W. Green ◽  
Cathy J. Price

Background: Pre- and intra-operative language mapping in neurosurgery patients frequently involves an object naming task. The choice of the optimal object naming paradigm remains challenging due to a lack of normative data and standardization in mapping practices. The aim of this study was to identify object naming paradigms that robustly and consistently activate classical language regions and could therefore be used to improve the sensitivity of language mapping in brain tumor and epilepsy patients.
Methods: Functional magnetic resonance imaging (fMRI) data from two independent groups of healthy controls (total = 79) were used to generate threshold-weighted voxel-based consistency maps. This novel approach allowed us to compare inter-subject consistency of activation for naming single objects in the visual and auditory modality and naming two objects in a phrase or a sentence.
Results: We found that the consistency of activation in language regions was greater for naming two objects per picture than one object per picture, even when controlling for the number of names produced in 5 s.
Conclusion: More consistent activation in language areas for naming two objects compared to one object suggests that two-object naming tasks may be more suitable for delimiting language-eloquent regions with pre- and intra-operative language testing. More broadly, we propose that the functional specificity of brain mapping paradigms for a whole range of different linguistic and non-linguistic functions could be enhanced by referring to data-based models of inter-subject consistency and variability in typical and atypical brain responses.


2021 ◽  
Author(s):  
Polina Iamshchinina ◽  
Agnessa Karapetian ◽  
Daniel Kaiser ◽  
Radoslaw Martin Cichy

Humans can effortlessly categorize objects, whether they are conveyed through visual images or spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study we used EEG (N=47) and time-resolved multivariate pattern analysis to investigate (1) the time course with which object category information emerges in the auditory modality and (2) how the representational transition from individual object identification to category representation compares between the auditory and visual modalities. Our results show (1) that auditory object category representations can be reliably extracted from EEG signals and (2) that a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, we did not find evidence for a shared supra-modal code, suggesting that the contents of the different sensory hierarchies are ultimately modality-unique.
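
The time-resolved multivariate pattern analysis mentioned above amounts to training and testing a classifier separately at each time point of the EEG epoch. A minimal sketch, with made-up array shapes and an arbitrary classifier rather than the authors' actual pipeline, might look like this:

```python
# Minimal sketch of time-resolved decoding; shapes, labels, and classifier
# are illustrative assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64, 120))   # trials x channels x time points
y = rng.integers(0, 2, size=200)          # object or category label per trial

# Decode separately at every time point; above-chance accuracy at time t means
# the EEG carries identity/category information at that latency.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(X.shape[2])
])
```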


2021 ◽  
Vol 15 ◽  
Author(s):  
Fabian Kiepe ◽  
Nils Kraus ◽  
Guido Hesselmann

Self-generated auditory input is perceived as less loud than the same sounds generated externally. This phenomenon, called sensory attenuation (SA), has been studied for decades and is often explained by motor-based forward models. Recent developments in SA research, however, challenge these models. We review the current state of knowledge regarding the theoretical significance of SA and its role in human behavior and functioning. Focusing on behavioral and electrophysiological results in the auditory domain, we provide an overview of the characteristics and limitations of existing SA paradigms and highlight the problem of isolating SA from other predictive mechanisms. Finally, we explore different hypotheses that attempt to explain the heterogeneous empirical findings, and the impact of the predictive coding framework on this research area.


Author(s):  
Yuan Feng ◽  
Giulia Perugia ◽  
Suihuai Yu ◽  
Emilia I. Barakova ◽  
Jun Hu ◽  
...  

Abstract Engaging people with dementia (PWD) in meaningful activities is key to promoting their quality of life. Designing for a higher level of user engagement has been studied extensively within the human-computer interaction community; however, few of these efforts extend to PWD. It is generally considered that increased richness of experience can lead to enhanced engagement. This paper therefore explores the effects of rich interaction, in terms of system interactivity and multimodal stimuli, by engaging participants in context-enhanced human-robot interaction activities. The interaction with a social robot was considered context-enhanced because of the additional responsive sensory feedback from an augmented reality display. A field study was conducted in a Dutch nursing home with 16 residents. The study followed a two-by-two mixed factorial design with one within-subject variable (multimodal stimuli) and one between-subject variable (system interactivity). A mixed method of video coding analysis and observational rating scales was adopted to assess user engagement comprehensively. Results show that when an additional auditory modality was included alongside the visual-tactile stimuli, participants scored significantly higher on attitude, showed more positive behavioral engagement during the activity, and displayed a higher percentage of communication. The multimodal stimuli also promoted social interaction between participants and the facilitator. The findings provide sufficient evidence for the significant role of multimodal stimuli in promoting PWD's engagement, which could potentially be used as a motivation strategy in future research to improve emotional aspects of activity-related engagement and social interaction with the human partner.


2021 ◽  
Vol 16 (1) ◽  
pp. 23-48
Author(s):  
Filip Nenadić ◽  
Petar Milin ◽  
Benjamin V. Tucker

Abstract A multitude of studies show the relevance of both inflectional paradigms (word form frequency distributions, i.e., inflectional entropy) and inflectional classes (whole class frequency distributions) for visual lexical processing. Their interplay has also been proven significant, measured as the difference between paradigm and class frequency distributions (relative entropy). Relative entropy effects have now been recorded in nouns, verbs, adjectives, and prepositional phrases. However, all of these studies used visual stimuli – either written words or picture-naming tasks. The goal of our study is to test whether the effects of relative entropy can also be captured in the auditory modality. Forty young native speakers of Romanian (60% female) living in Serbia as part of the Romanian ethnic minority participated in an auditory lexical decision task. Stimuli were 168 Romanian verbs from two inflectional classes. Verbs were presented in four forms: present and imperfect 1st person singular, present 3rd person plural, and imperfect 2nd person plural. The results show that relative entropy influences both response accuracy and response latency. We discuss alternative operationalizations of relative entropy and how they can help us test hypotheses about the structure of the mental lexicon.
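
For readers unfamiliar with the measure, relative entropy here is the Kullback-Leibler divergence between a lexeme's paradigm frequency distribution and the frequency distribution of its inflectional class. A toy computation is sketched below; the counts are made up, and the study's own estimation details may differ.

```python
# Illustrative computation of relative entropy (Kullback-Leibler divergence)
# between one verb's paradigm distribution and its class distribution.
import numpy as np

paradigm_counts = np.array([120.0, 45.0, 30.0, 15.0])    # one verb's inflected forms
class_counts    = np.array([900.0, 500.0, 400.0, 200.0])  # summed over the whole class

p = paradigm_counts / paradigm_counts.sum()
q = class_counts / class_counts.sum()

relative_entropy = np.sum(p * np.log2(p / q))  # in bits; larger = more atypical paradigm
```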

