visual modality
Recently Published Documents

TOTAL DOCUMENTS: 285 (five years: 107)
H-INDEX: 29 (five years: 3)

2022 · Vol 15
Author(s): Franck Di Rienzo, Pierric Joassy, Thiago Ferreira Dias Kanthack, François Moncel, Quentin Mercier, ...

Motor Imagery (MI) reproduces cognitive operations associated with actual motor preparation and execution. Postural recordings during MI reflect somatic motor commands targeting peripheral effectors involved in balance control. However, how these relate to actual motor expertise and how they vary with the MI modality remain debated. In the present experiment, two groups of expert and non-expert gymnasts underwent stabilometric assessments while physically and mentally performing a balance skill. We implemented psychometric measures of MI ability, while stabilometric variables were calculated from the center of pressure (COP) oscillations. Psychometric evaluations revealed greater MI ability in experts, specifically for the visual modality. Experts exhibited a reduced surface of COP oscillations along the antero-posterior axis compared to non-experts during the balance skill (14.90%, 95% CI 4.68–34.48, p < 0.05). Experts further exhibited a reduced length of COP displacement along the antero-posterior axis, and as a function of the displacement area, during visual and kinesthetic MI compared to the control condition (20.51%, 95% CI 0.99–40.03 and 21.85%, 95% CI 2.33–41.37, respectively, both p < 0.05). Predictive relationships were found between the stabilometric correlates of visual MI and physical practice of the balance skill, as well as between the stabilometric correlates of kinesthetic MI and training experience in experts. The present results provide original stabilometric insights into the relationships between MI and expertise level. While the data support an incomplete inhibition of postural commands during MI, whether postural responses during MI of various modalities mirror the level of motor expertise remains unclear.
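
A rough illustration of the kind of stabilometric variables mentioned above, computed from simulated COP data with standard formulas (sway path length, antero-posterior path length, 95% confidence-ellipse area, and their ratio); this is a generic sketch, not the authors' analysis pipeline, and the sampling rate and noise model below are made-up assumptions.

import numpy as np

def stabilometric_variables(ml, ap):
    """Sway path length, antero-posterior path length, 95% ellipse area, and LFS ratio."""
    ml, ap = np.asarray(ml, float), np.asarray(ap, float)
    # Total length of the COP trajectory (sum of sample-to-sample displacements).
    path_length = np.sum(np.hypot(np.diff(ml), np.diff(ap)))
    # Length of displacement along the antero-posterior axis only.
    ap_length = np.sum(np.abs(np.diff(ap)))
    # Sway surface approximated by the 95% confidence ellipse of the COP point cloud.
    eigvals = np.linalg.eigvalsh(np.cov(ml, ap))
    area_95 = np.pi * 5.991 * np.sqrt(np.prod(eigvals))  # chi2(df=2, p=.95) ~ 5.991
    # Length as a function of surface (LFS): displacement length per unit sway area.
    lfs = path_length / area_95
    return path_length, ap_length, area_95, lfs

# Example: 30 s of simulated quiet stance sampled at 100 Hz, coordinates in mm.
rng = np.random.default_rng(0)
ml = np.cumsum(rng.normal(0.0, 0.05, 3000))
ap = np.cumsum(rng.normal(0.0, 0.08, 3000))
print(stabilometric_variables(ml, ap))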


Languages · 2022 · Vol 7 (1) · pp. 12
Author(s): Peiyao Chen, Ashley Chung-Fat-Yim, Viorica Marian

Emotion perception frequently involves the integration of visual and auditory information. During multisensory emotion perception, the attention devoted to each modality can be measured by calculating the difference between trials in which the facial expression and speech input exhibit the same emotion (congruent) and trials in which the facial expression and speech input exhibit different emotions (incongruent) to determine the modality that has the strongest influence. Previous cross-cultural studies have found that individuals from Western cultures are more distracted by information in the visual modality (i.e., visual interference), whereas individuals from Eastern cultures are more distracted by information in the auditory modality (i.e., auditory interference). These results suggest that culture shapes modality interference in multisensory emotion perception. It is unclear, however, how emotion perception is influenced by cultural immersion and exposure due to migration to a new country with distinct social norms. In the present study, we investigated how the amount of daily exposure to a new culture and the length of immersion impact multisensory emotion perception in Chinese-English bilinguals who moved from China to the United States. In an emotion recognition task, participants viewed facial expressions and heard emotional but meaningless speech either from their previous Eastern culture (i.e., Asian face-Mandarin speech) or from their new Western culture (i.e., Caucasian face-English speech) and were asked to identify the emotion from either the face or voice, while ignoring the other modality. Analyses of daily cultural exposure revealed that bilinguals with low daily exposure to the U.S. culture experienced greater interference from the auditory modality, whereas bilinguals with high daily exposure to the U.S. culture experienced greater interference from the visual modality. These results demonstrate that everyday exposure to new cultural norms increases the likelihood of showing a modality interference pattern that is more common in the new culture. Analyses of immersion duration revealed that bilinguals who spent more time in the United States were equally distracted by faces and voices, whereas bilinguals who spent less time in the United States experienced greater visual interference when evaluating emotional information from the West, possibly due to over-compensation when evaluating emotional information from the less familiar culture. These findings suggest that the amount of daily exposure to a new culture and length of cultural immersion influence multisensory emotion perception in bilingual immigrants. While increased daily exposure to the new culture aids with the adaptation to new cultural norms, increased length of cultural immersion leads to similar patterns in modality interference between the old and new cultures. We conclude that cultural experience shapes the way we perceive and evaluate the emotions of others.
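
As a concrete illustration of the congruency-based interference measure described above, the following sketch computes visual and auditory interference as the accuracy cost of incongruent relative to congruent trials, separately for attend-voice and attend-face trials. The tiny trial table is hypothetical, and the study's exact scoring may differ.

import pandas as pd

# Each row is one (hypothetical) trial: which modality was attended, whether the
# face and the voice expressed the same emotion, and whether the response was correct.
trials = pd.DataFrame({
    "attended":  ["face", "face", "voice", "voice", "face", "voice"],
    "congruent": [True,   False,  True,    False,   False,  True],
    "correct":   [1,      0,      1,       0,       1,      1],
})

acc = trials.groupby(["attended", "congruent"])["correct"].mean()

# Visual interference: accuracy cost of a conflicting face when judging the voice.
visual_interference = acc.loc[("voice", True)] - acc.loc[("voice", False)]
# Auditory interference: accuracy cost of a conflicting voice when judging the face.
auditory_interference = acc.loc[("face", True)] - acc.loc[("face", False)]
print(visual_interference, auditory_interference)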


2022 · Vol 12 (1) · pp. 527
Author(s): Fei Ma, Yang Li, Shiguang Ni, Shaolun Huang, Lin Zhang

Audio–visual emotion recognition is the task of identifying human emotional states by combining the audio modality and the visual modality simultaneously, and it plays an important role in intelligent human–machine interaction. With the help of deep learning, previous works have made great progress on audio–visual emotion recognition. However, these deep learning methods often require a large amount of training data. In reality, data acquisition is difficult and expensive, especially for multimodal data with different modalities. As a result, the available training data may fall in the low-data regime and cannot be used effectively for deep learning. In addition, class imbalance may occur in emotional data, which can further degrade the performance of audio–visual emotion recognition. To address these problems, we propose an efficient data augmentation framework by designing a multimodal conditional generative adversarial network (GAN) for audio–visual emotion recognition. Specifically, we design generators and discriminators for the audio and visual modalities. The category information is used as their shared input to ensure that our GAN can generate fake data of different categories. In addition, the high dependence between the audio modality and the visual modality in the generated multimodal data is modeled with the Hirschfeld–Gebelein–Rényi (HGR) maximal correlation. In this way, we relate the different modalities in the generated data so that they approximate the real data. The generated data are then used to augment our data manifold. We further apply our approach to the problem of class imbalance. To the best of our knowledge, this is the first work to propose a data augmentation strategy with a multimodal conditional GAN for audio–visual emotion recognition. We conduct a series of experiments on three public multimodal datasets, including eNTERFACE’05, RAVDESS, and CMEW. The results indicate that our multimodal conditional GAN is highly effective for data augmentation in audio–visual emotion recognition.
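
A minimal PyTorch sketch of the core ingredients described above, not the authors' implementation: two generators conditioned on a shared emotion label produce paired audio and visual feature vectors, and a soft-HGR term (one common differentiable surrogate for the Hirschfeld–Gebelein–Rényi maximal correlation) scores the dependence between the two generated modalities. Network sizes, the label-embedding dimension, and the use of fixed-length feature vectors instead of raw waveforms/frames are illustrative assumptions.

import torch
import torch.nn as nn

NOISE_DIM, N_CLASSES, FEAT_DIM = 64, 6, 128

class ConditionalGenerator(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, 16)   # emotion category as shared input
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + 16, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

def soft_hgr(f, g):
    """Soft-HGR surrogate for HGR maximal correlation (higher = more dependent)."""
    f = f - f.mean(dim=0, keepdim=True)
    g = g - g.mean(dim=0, keepdim=True)
    n = f.shape[0]
    inner = (f * g).sum(dim=1).mean()              # empirical E[f(X)^T g(Y)]
    cov_f = f.T @ f / (n - 1)
    cov_g = g.T @ g / (n - 1)
    return inner - 0.5 * torch.trace(cov_f @ cov_g)

gen_audio = ConditionalGenerator(FEAT_DIM)         # generates audio feature vectors
gen_visual = ConditionalGenerator(FEAT_DIM)        # generates visual feature vectors

labels = torch.randint(0, N_CLASSES, (32,))        # same emotion label for both modalities
fake_audio = gen_audio(torch.randn(32, NOISE_DIM), labels)
fake_visual = gen_visual(torch.randn(32, NOISE_DIM), labels)

# During training, the negative of this dependence term would be added to the usual
# conditional-GAN adversarial losses from per-modality, label-conditioned discriminators.
dependence = soft_hgr(fake_audio, fake_visual)
print(dependence.item())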


Author(s): Leona Polyanskaya

Two classes of cognitive mechanisms have been proposed to explain segmentation of continuous sensory input into discrete recurrent constituents: clustering and boundary-finding mechanisms. Clustering mechanisms are based on identifying frequently co-occurring elements and merging them together as parts that form a single constituent. Bracketing (or boundary-finding) mechanisms work by identifying rarely co-occurring elements that correspond to the boundaries between discrete constituents. In a series of behavioral experiments, I tested which mechanisms are at play in the visual modality both during segmentation of a continuous syllabic sequence into discrete word-like constituents and during recognition of segmented constituents. Additionally, I explored conscious awareness of the products of statistical learning: whole constituents versus merged clusters of smaller subunits. My results suggest that both online segmentation and offline recognition of extracted constituents rely on detecting frequently co-occurring elements, a process likely based on associative memory. However, people are more aware of having learnt whole tokens than of recurrent composite clusters.
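
The contrast between the two mechanisms can be made concrete with transitional probabilities over a toy syllable stream (the nonsense words and stream construction below are hypothetical, not the study's materials): clustering latches onto frequently co-occurring pairs (high transitional probability within a word), whereas bracketing posits a boundary where co-occurrence is rare (the dips between words).

from collections import Counter
import random

# Hypothetical familiarization stream built from three nonsense "words" in random order.
words = ["tupiro", "golabu", "bidaku"]
random.seed(0)
stream = "".join(random.choice(words) for _ in range(90))
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]  # 2-letter syllables

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def transitional_probability(a, b):
    """P(b | a): how often syllable a is immediately followed by syllable b."""
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("tu", "pi"))  # within-word pair: high (1.0)
print(transitional_probability("ro", "go"))  # pair spanning a word boundary: low (~0.33)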


2021
Author(s): Christos Halkiopoulos

This is my BSc dissertation, completed at, and submitted to, UCL's Psychology Department in 1981. It reports on my attentional probe paradigm, which I initially used in the auditory modality to demonstrate attentional biases in the processing of threatening information by participants with identifiable personality characteristics. A group of researchers at St. George's (University of London), introduced to this paradigm by M. W. Eysenck, later applied it in the visual modality (the dot probe paradigm). The dissertation is hand-written and was rather hurriedly put together, but it is still easy to read. The experimental work that introduced the attentional probe paradigm appears towards the end of the dissertation.


2021 · Vol 11 (11) · pp. 182-187
Author(s): A. Kondratenko

Today, type 2 diabetes mellitus (T2DM) is considered the most important nosological cause of decreased cognitive function. A number of studies have found that hyperglycemia and the duration of diabetes are associated with cognitive deficits, with the prevalence of cognitive impairment in T2DM being 20% in men and 18% in women over 60 years of age. A comprehensive clinical-psychopathological and psychodiagnostic examination of 82 patients with T2DM (46 women and 36 men) aged 35.9±10.1 years was conducted in accordance with the principles of bioethics and deontology. The mean duration of diabetes was 7.9±5.2 years. The severity of diabetes was defined as moderate in most cases (84.1%) and as severe in 15.9% of cases. As basic hypoglycemic therapy, 30.2% of patients used insulin and 69.8% used oral agents. Analysis of their emotional state showed that patients with T2DM were characterized by complaints of low, depressed mood (69.5% of examined patients), uncontrolled emotional reactions (46.2%), feelings of anxiety and constant internal tension (44.7%), paresthesias (29.1%), sleep-wake cycle disorders (56.2%), general weakness, lethargy and fatigue (58.2%), fatigue (90.0%), frequent mood swings with a predominance of lowered mood (23.3%), emotional lability with excessive vulnerability and sensitivity (16.6%), and irritability (16.6%). The clinical and psychopathological structure of the emotional disorders was represented by anxious (43.4%), depressive (26.6%), astheno-hypochondriac (19.8%), and hysteroform (10.2%) syndromes. Clinical examination showed that patients with T2DM most often (in 95.0% of cases) exhibited decreased memory in the auditory and visual modalities, impaired intellectual abilities, slowed thinking, and deficits in attention and information processing.


Author(s): Светлана Игоревна Буркова

The paper discusses sociolinguistic aspects of Russian Sign Language (RSL) and attempts to show that the tools used to assess language vitality and maintenance, which were developed for spoken languages, are not entirely suitable for assessing the vitality of sign languages. For example, if the vitality of RSL is assessed on the six-point scale of the “nine factors” system proposed by UNESCO (Language vitality…, 2003) and used in the Atlas of Endangered Languages, RSL scores no more than 3 points; in other words, it would be characterized as an endangered language. It is an unwritten language, mainly used in everyday communication; it exists in the environment of the functionally far more powerful spoken Russian; the overwhelming majority of RSL signers are bilinguals who command spoken Russian to some degree, at least in its written form; most deaf children acquire RSL not in the family, from birth, but later in life, at kindergartens or schools; the conditions of RSL acquisition affect signers' language proficiency; the surrounding spoken Russian affects RSL's lexicon and grammar; and RSL still remains insufficiently studied and poorly documented. In reality, however, RSL is not only stably maintained under these conditions but has recently even been expanding its vocabulary and spheres of use. The main factor that supports the maintenance of a sign language, and that is not taken into account in existing methods for assessing language vitality, is the modality in which the language exists. Because the auditory modality is inaccessible or poorly accessible to deaf people, they cannot completely shift to a spoken language. The visual modality remains the most natural for their communication, and modern means of communication and the internet provide additional opportunities for maintaining and developing the language in the visual modality.


2021
Author(s): Polina Iamshchinina, Agnessa Karapetian, Daniel Kaiser, Radoslaw Martin Cichy

Humans can effortlessly categorize objects, whether they are conveyed through visual images or through spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study we used EEG (N=47) and time-resolved multivariate pattern analysis to investigate (1) the time course with which object category information emerges in the auditory modality and (2) how the representational transition from individual object identification to category representation compares between the auditory and visual modalities. Our results show that (1) auditory object category representations can be reliably extracted from EEG signals and (2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the object's category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, we did not find evidence for a shared supra-modal code, suggesting that the contents of the different sensory hierarchies are ultimately modality-unique.
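
Schematically, time-resolved multivariate pattern analysis of the kind mentioned above amounts to training a classifier on the channel pattern at each time point and tracking when decoding rises above chance; the sketch below uses simulated data, and the channel count, time base, classifier, and cross-validation scheme are assumptions rather than the study's pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_trials, n_channels, n_times = 200, 64, 120
rng = np.random.default_rng(1)
eeg = rng.normal(size=(n_trials, n_channels, n_times))  # trials x channels x time points
category = rng.integers(0, 2, n_trials)                 # e.g. two object categories

# Inject a weak category-specific signal into a subset of channels from time point 60 on,
# to mimic category information emerging at some latency after stimulus onset.
eeg[category == 1, :10, 60:] += 0.3

# Decode the category separately at every time point; above-chance cross-validated
# accuracy marks when category information becomes available in the signal.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), eeg[:, :, t], category, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max(), "at time point", accuracy.argmax())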


Autism · 2021 · pp. 136236132110567
Author(s): Mirko Uljarević, Gail A Alvares, Morgan Steele, Jaelyn Edwards, Thomas W Frazier, ...

Despite their high prevalence and clinical importance in autism, unusual and restricted interests remain under-researched and poorly understood. This study aimed to characterize the frequency and type of interests in autism by coding caregivers' open-ended responses in a sample of 237 autistic children and adolescents (mean age = 8.27 years, SD = 4.07; range: 2.08–18.25 years). It further aimed to explore the effects of age, sex, cognitive functioning, and social and communication deficits on the number and type of interests. We found that 75% of autistic youth had at least one interest and that 50% of those children showed two or more different interests. The most frequent interests were sensory-based (43%), with a majority of these relating to the visual modality. Interests in vehicles/transportation, fictional characters, television/DVDs/movies, computers and video games, constructive interests, mechanical objects, and animals and plants, as well as attachment to specific objects, were also prevalent. Logistic regression showed that being male, having a co-occurring intellectual disability, and having more severe social and communication impairments were associated with a higher probability of having one or more restricted interests. Sex was significantly associated with the type of restricted interests (χ² = 37.52, φ = 0.37, p = 0.021), with females showing a significantly higher percentage of creative interests and males a significantly higher percentage of interests in characters, vehicles/transportation, computers/video games, mechanical objects, and constructive interests. Theoretical and measurement implications are discussed. Lay abstract: Despite being highly prevalent among people with autism, restricted and unusual interests remain under-researched and poorly understood. This article confirms that restricted interests are very frequent and varied among children and adolescents with autism. It also extends current knowledge in this area by characterizing the relationship between the presence, number, and type of restricted interests and chronological age, sex, cognitive functioning, and social and communication symptoms.
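
For readers unfamiliar with the analysis, the following sketch shows the general form of such a logistic regression on fabricated example data (the predictors, effect sizes, and simulated sample are made up for illustration and are not the study's dataset): the probability of having at least one restricted interest is modelled from sex, co-occurring intellectual disability, and a social/communication severity score, and the fitted coefficients are reported as odds ratios.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 237
male = rng.integers(0, 2, n)
intellectual_disability = rng.integers(0, 2, n)
social_comm_severity = rng.normal(0.0, 1.0, n)

# Simulate the outcome from assumed (made-up) effects so the model has something to fit.
logit = -0.5 + 0.8 * male + 0.7 * intellectual_disability + 0.6 * social_comm_severity
has_interest = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([male, intellectual_disability, social_comm_severity]))
model = sm.Logit(has_interest, X).fit(disp=False)
print(np.exp(model.params))  # odds ratios: intercept, male, intellectual disability, severity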


2021
Author(s): Paige Badart

Failures of attention can be hazardous, especially within the workplace, where sustaining attention has become an increasingly important skill. This has created a need for methods to improve attention. One such method is the practice of meditation. Previous research has shown that meditation can produce beneficial changes in attention and associated brain regions. In particular, sustained attention has been shown to improve significantly with meditation. While this effect has been demonstrated in the visual modality, there is less research on the effects of meditation on auditory sustained attention, and there is currently no research examining meditation and crossmodal sustained attention. This is relevant not only because visual and auditory information are perceived simultaneously in everyday life, but also because it may inform the debate as to whether sustained attention is managed by modality-specific systems or by a single overarching supramodal system. The current research examined the effects of meditation on visual, auditory, and audiovisual crossmodal sustained attention using variants of the Sustained Attention to Response Task. In these tasks, subjects were presented with visual, auditory, or combined visual and auditory stimuli and were required to respond to infrequent targets over an extended period of time. For all of the tasks, meditators differed significantly in accuracy from non-meditating control groups: they made fewer errors without sacrificing response speed, with the exception of the Auditory-target crossmodal task. This demonstrates the benefit of meditation for improving sustained attention across sensory modalities and also lends support to the argument that sustained attention is governed by a supramodal system rather than by modality-specific systems.

