semantic task
Recently Published Documents


TOTAL DOCUMENTS: 61 (FIVE YEARS: 13)
H-INDEX: 15 (FIVE YEARS: 1)

2021 · pp. 1-21 · Author(s): Daniel Gurman, Colin R. McCormick, Raymond M. Klein

Abstract Crossmodal correspondences are defined as associations between stimuli in different sensory modalities based on seemingly irrelevant stimulus features (e.g., bright shapes being associated with high-pitched sounds). There is a large body of research describing auditory crossmodal correspondences involving pitch and volume, but far less involving auditory timbre, the character or quality of a sound. Adeli and colleagues (2014, Front. Hum. Neurosci. 8, 352) found evidence of correspondences between timbre and visual shape. The present study aimed to replicate Adeli et al.’s findings and to identify novel timbre–shape correspondences. Participants completed two computerized tasks: an association task, which involved matching shapes to presented sounds based on best perceived fit, and a semantic task, which involved rating shapes and sounds on a number of scales. The analysis of association matches revealed nonrandom selection, with certain stimulus pairs being selected at a much higher frequency than others. The harsh/jagged and smooth/soft correspondences observed by Adeli et al. were replicated with a high level of consistency. Additionally, the high matching frequency of sounds with unstudied timbre characteristics suggests the existence of novel correspondences. Finally, the semantic task was shown to usefully supplement existing crossmodal correspondence assessments: convergent analysis of the semantic and association data showed that the two datasets were significantly correlated (r = −0.36), meaning that stimulus pairs matched with a high level of consensus were more likely to hold similar perceived meaning. The results of this study are discussed in both theoretical and applied contexts.
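
As a rough illustration of the convergent analysis, the sketch below correlates how often each shape–sound pair was matched in the association task with how far apart the pair's semantic ratings lie. All array shapes, stimulus counts, and the random placeholder data are assumptions; only the correlation logic follows the abstract.

```python
# A minimal sketch of the convergent analysis, under assumed data shapes.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sounds, n_shapes, n_scales = 8, 8, 6  # hypothetical stimulus counts

# Association task: proportion of participants matching each shape to
# each sound (rows: sounds, cols: shapes).
match_freq = rng.dirichlet(np.ones(n_shapes), size=n_sounds)

# Semantic task: mean rating of each sound/shape on several scales.
sound_ratings = rng.normal(size=(n_sounds, n_scales))
shape_ratings = rng.normal(size=(n_shapes, n_scales))

# Semantic distance per sound-shape pair: pairs with similar rating
# profiles get small distances.
dist = np.linalg.norm(
    sound_ratings[:, None, :] - shape_ratings[None, :, :], axis=-1
)

# In the study this correlation was negative (-0.36): frequently matched
# pairs carried more similar meaning. Here the inputs are random, so
# expect rho near zero.
rho, p = spearmanr(match_freq.ravel(), dist.ravel())
print(f"rho = {rho:.2f}, p = {p:.3f}")
```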


2021 · Author(s): Setareh Rahimi, Seyedeh-Rezvan Farahibozorg, Rebecca L. Jackson, Olaf Hauk

How does brain activity in distributed semantic brain networks evolve over time, and how do these regions interact to retrieve the meaning of words? We compared spatiotemporal brain dynamics between visual lexical and semantic decision tasks (LD and SD), analysing whole-cortex evoked responses and spectral functional connectivity (coherence) in source-estimated electroencephalography and magnetoencephalography (EEG and MEG) recordings. Our evoked analysis revealed generally larger activation for SD than for LD, starting in the primary visual area (PVA) and angular gyrus (AG), followed by the left posterior temporal cortex (PTC) and left anterior temporal lobe (ATL). The earliest activation effects in ATL were significantly left-lateralised. Our functional connectivity results showed significant connectivity between the left and right ATLs, and between PTC and the right ATL, in an early time window, as well as between the left ATL and the inferior frontal gyrus (IFG) in a later time window. The connectivity of AG was comparatively sparse. We quantified the limited spatial resolution of our source estimates via a leakage index for careful interpretation of our results. Our findings suggest that semantic task demands modulate visual and attentional processes early on, followed by modulation of multimodal semantic information retrieval in the ATLs and then in control regions (PTC and IFG), in order to extract task-relevant semantic features for response selection. Whilst our evoked analysis suggests a dominance of the left ATL for semantic processing, our functional connectivity analysis also revealed significant involvement of the right ATL in the more demanding semantic task. Our findings demonstrate the complementarity of evoked and functional connectivity analyses, as well as the importance of dynamic information for both types of analysis.
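
For readers unfamiliar with spectral coherence, the sketch below computes magnitude-squared coherence between two simulated ROI time courses with scipy. The sampling rate, signals, and frequency band are assumptions for illustration; the study's actual pipeline involves source estimation, trial-level data, and statistical testing across participants.

```python
# A minimal coherence sketch on two hypothetical ROI time courses.
import numpy as np
from scipy.signal import coherence

fs = 1000.0  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)

# Simulated time courses for left ATL and PTC with a shared component,
# to show what coherence picks up.
shared = rng.normal(size=2000)
left_atl = shared + rng.normal(scale=0.5, size=2000)
ptc = shared + rng.normal(scale=0.5, size=2000)

# Magnitude-squared coherence as a function of frequency.
freqs, coh = coherence(left_atl, ptc, fs=fs, nperseg=256)

# Average coherence in an example band of interest (alpha, 8-12 Hz).
band = (freqs >= 8) & (freqs <= 12)
print(f"mean alpha coherence: {coh[band].mean():.2f}")
```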


2021 · Vol 14 · Author(s): Lihuan Zhang, Jiali Hu, Xin Liu, Emily S. Nichols, Chunming Lu, ...

Reading disability has been considered a disconnection syndrome. Recently, an increasing number of studies have emphasized the role of subcortical regions in reading; however, the majority of research on reading disability has focused on connections amongst brain regions within the classic cortical reading network. Here, we used graph theoretical analysis to investigate whether subcortical regions serve as hubs (regions highly connected with other brain regions) during reading, both in Chinese children with reading disability (N = 15, ages 11.03 to 13.08 years) and in age-matched typically developing children (N = 16, ages 11.17 to 12.75 years), using a visual rhyming judgment task and a visual meaning judgment task. We found that the bilateral thalami were hubs unique to typically developing children across both tasks. Additional subcortical regions (right putamen, left pallidum) were also hubs unique to typically developing children, but only in the rhyming task. Among these subcortical hub regions, the left pallidum showed reduced connectivity with inferior frontal regions in the rhyming judgment task, but not in the semantic task, in children with reading disability compared with typically developing children. These results suggest that subcortical–cortical disconnection, which may be particularly relevant to phonological and phonology-related learning processes, may be associated with Chinese reading disability.
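
The abstract's definition of a hub (a region highly connected with other regions) translates directly into a degree-based criterion on a thresholded connectivity graph. The sketch below illustrates that idea with networkx; the connectivity matrix, edge density, and hub threshold are illustrative assumptions, not the paper's exact choices.

```python
# A minimal sketch of degree-based hub identification on a functional
# connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_rois = 20  # hypothetical number of regions (cortical + subcortical)

# Symmetric connectivity matrix, e.g. correlations between ROI signals.
conn = np.abs(rng.normal(size=(n_rois, n_rois)))
conn = (conn + conn.T) / 2
np.fill_diagonal(conn, 0)

# Keep the strongest edges (assumed 20% density) and build a graph.
thresh = np.quantile(conn[np.triu_indices(n_rois, k=1)], 0.8)
G = nx.from_numpy_array(np.where(conn >= thresh, conn, 0))

# Call a node a hub if its degree is at least one SD above the mean.
degrees = np.array([d for _, d in G.degree()])
hubs = np.flatnonzero(degrees >= degrees.mean() + degrees.std())
print("hub ROIs:", hubs)
```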


2021 · pp. 1-26 · Author(s): Anna A. Ivanova, Zachary Mineroff, Vitor Zimmerer, Nancy Kanwisher, Rosemary Varley, ...

The ability to combine individual concepts of objects, properties, and actions into complex representations of the world is often associated with language. Yet combinatorial event-level representations can also be constructed from nonverbal input, such as visual scenes. Here, we test whether the language network in the human brain is involved in and necessary for semantic processing of events presented nonverbally. In Experiment 1, we scanned participants with fMRI while they performed a semantic plausibility judgment task versus a difficult perceptual control task on sentences and line drawings that describe/depict simple agent–patient interactions. We found that the language network responded robustly during the semantic task performed on both sentences and pictures (although its response to sentences was stronger). Thus, language regions in healthy adults are engaged during a semantic task performed on pictorial depictions of events. But is this engagement necessary? In Experiment 2, we tested two individuals with global aphasia, who have sustained massive damage to perisylvian language areas and display severe language difficulties, against a group of age-matched control participants. Individuals with aphasia were severely impaired on the task of matching sentences to pictures. However, they performed close to controls in assessing the plausibility of pictorial depictions of agent–patient interactions. Overall, our results indicate that the left frontotemporal language network is recruited but not necessary for semantic processing of nonverbally presented events.


2021 · Vol 33 (1) · pp. 8-27 · Author(s): Mylène Barbaroux, Arnaud Norena, Maud Rasamimanana, Eric Castet, Mireille Besson

Musical expertise has been shown to positively influence high-level speech abilities such as novel word learning. This study addresses the question of whether enhanced low-level perceptual skills causally drive successful novel word learning. We used a longitudinal approach with psychoacoustic procedures to train two groups of nonmusicians on either pitch discrimination or intensity discrimination, using harmonic complex sounds. After a short period (approximately 3 hr) of psychoacoustic training, discrimination thresholds were lower for the specific feature (pitch or intensity) that was trained. Moreover, compared to the intensity group, participants trained on pitch were faster at categorizing words varying in pitch. Finally, although the N400 components in both the word-learning phase and the semantic task were larger in the pitch group than in the intensity group, no between-group differences were found at the behavioral level in the semantic task. Thus, these results provide mixed evidence that enhanced perception of relevant features, acquired through a few hours of acoustic training with harmonic sounds, causally impacts the categorization of speech sounds as well as novel word learning. These results are discussed within the framework of near and far transfer effects from music training to speech processing.
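
Psychoacoustic discrimination thresholds of this kind are commonly estimated with adaptive staircases. The sketch below shows a generic 2-down/1-up staircase run against a simulated listener; the step sizes, stopping rule, and listener model are illustrative assumptions, not the study's exact procedure.

```python
# A minimal 2-down/1-up staircase sketch for a pitch-difference threshold.
import numpy as np

rng = np.random.default_rng(3)
true_threshold = 4.0   # simulated listener's sensitivity midpoint, in Hz (assumed)
delta = 32.0           # current pitch difference in Hz
step = 2.0             # multiplicative step factor
correct_streak, reversals = 0, []
last_direction = None

while len(reversals) < 8:
    # Simulated listener: more likely correct when delta >> threshold.
    p_correct = 1 / (1 + np.exp(-(delta - true_threshold)))
    if rng.random() < p_correct:
        correct_streak += 1
        if correct_streak < 2:       # need 2 in a row before stepping down
            continue
        correct_streak = 0
        direction = "down"           # 2-down: make the task harder
        delta /= step
    else:
        correct_streak = 0
        direction = "up"             # 1-up: make the task easier
        delta *= step
    if last_direction and direction != last_direction:
        reversals.append(delta)      # record a reversal point
    last_direction = direction

# Threshold estimate: geometric mean of the last reversal points.
print(f"estimated threshold: {np.exp(np.mean(np.log(reversals[-6:]))):.1f} Hz")
```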


PsyCh Journal · 2020 · Vol 9 (5) · pp. 760-763 · Author(s): Zhanna Garakh, Ekaterina Larionova, Yuliya Zaytseva

Electronics · 2020 · Vol 9 (8) · pp. 1194 · Author(s): Sergio Muñoz, Enrique Sánchez, Carlos A. Iglesias

E-learning has become a critical factor in the academic environment due to the endless number of possibilities it opens up for the learning context. However, these platforms often make communication between teachers and students more difficult. Without real contact, teachers find it harder to adapt their methods and content to their students, while students find it harder to maintain their focus. This paper aims to address this challenge through the use of emotion and engagement recognition techniques. We propose an emotion-aware e-learning platform architecture that recognizes students’ emotions and attention in order to improve their academic performance. The system integrates a semantic task automation system that allows users to easily create and configure their own automation rules to adapt the study environment. The main contributions of this paper are: (1) the design of an emotion-aware learning analytics architecture; (2) the integration of this architecture into a semantic task automation platform; and (3) the validation of the use of emotion recognition in the e-learning platform using the partial least squares structural equation modeling (PLS-SEM) methodology.
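
To make the automation-rule idea concrete, here is a minimal trigger-action sketch in Python. The rule shape, emotion labels, and actions are hypothetical; the paper's platform uses its own semantic rule language, which this does not reproduce.

```python
# A minimal trigger-action rule sketch for an emotion-aware platform.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """When the trigger fires on a recognition event, run the action."""
    trigger: Callable[[dict], bool]
    action: Callable[[dict], None]

def notify_teacher(event: dict) -> None:
    print(f"notify teacher: {event['student']} seems {event['emotion']}")

def adapt_environment(event: dict) -> None:
    print(f"adapting study environment for {event['student']}")

rules = [
    # "If a student looks bored, alert the teacher."
    Rule(lambda e: e["emotion"] == "bored", notify_teacher),
    # "If a student looks stressed, adapt the study environment."
    Rule(lambda e: e["emotion"] == "stressed", adapt_environment),
]

def on_recognition(event: dict) -> None:
    """Dispatch a recognized emotion/attention event through the rules."""
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

on_recognition({"student": "alice", "emotion": "stressed", "attention": 0.3})
```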


2020 · Vol 2020 · pp. 1-10 · Author(s): Cheng Chen, Yizhen Wen, Shaoyang Cui, Xiangao Qi, Zhenhong Liu, ...

This paper presents a multichannel continuous-wave functional near-infrared spectroscopy (fNIRS) system that collects data in a dual-level light-intensity mode to optimize the signal-to-noise ratio (SNR) for channels with multiple source–detector separations. The system is applied to classify different cortical activation states of the prefrontal cortex (PFC). Four mental tasks were selected: mental arithmetic, digit span, a semantic task, and a rest state. A deep forest algorithm is employed to achieve high classification accuracy: by applying multi-grained scanning to the fNIRS data, the system extracts structural features and achieves higher performance. With proper optimization, the proposed system achieves 86.9% accuracy on the self-built dataset, the highest result among the systems compared.
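
The multi-grained scanning step of deep forest slides windows over the raw signal, trains a forest on the windowed instances, and uses the forest's class probabilities as new features. The sketch below illustrates that step with scikit-learn on random stand-in data; the window sizes, forest settings, and final classifier are assumptions, and the full algorithm adds a cascade of forests after this stage.

```python
# A minimal multi-grained scanning sketch on per-trial time series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n_trials, n_timepoints, n_classes = 120, 100, 4
X = rng.normal(size=(n_trials, n_timepoints))   # hypothetical fNIRS signals
y = rng.integers(0, n_classes, size=n_trials)   # task labels

def multi_grained_scan(X, y, window, step=1):
    """Slide a window over each trial, fit a forest on the windowed
    instances, and concatenate its class-probability outputs per trial."""
    starts = range(0, X.shape[1] - window + 1, step)
    Xw = np.stack([X[:, s:s + window] for s in starts], axis=1)
    flat = Xw.reshape(-1, window)          # every window of every trial
    labels = np.repeat(y, Xw.shape[1])     # each window inherits its trial's label
    forest = RandomForestClassifier(n_estimators=50, random_state=0)
    forest.fit(flat, labels)
    # One probability vector per window, concatenated per trial.
    return forest.predict_proba(flat).reshape(X.shape[0], -1)

# Two grains (assumed window sizes), concatenated, then a final classifier.
features = np.hstack([multi_grained_scan(X, y, w) for w in (10, 20)])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, y)
print("training accuracy:", clf.score(features, y))
```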


2020 · Vol 32 (1) · pp. 36-49 · Author(s): Jin Wang, Mabel L. Rice, James R. Booth

Previous studies have found specialized syntactic and semantic processes in the adult brain during language comprehension. Young children already command sophisticated semantic and syntactic aspects of language, yet many previous fMRI studies have failed to detect this specialization, possibly due to experimental design and analytical methods. In the current study, 5- to 6-year-old children completed a syntactic task and a semantic task designed to dissociate these two processes. Multivoxel pattern analysis was used to examine the correlation of activation patterns within a task (between runs) and across tasks. We found that the left middle temporal gyrus showed more similar patterns within the semantic task than across tasks, whereas there was no difference between within-syntactic-task and across-task correlations, suggesting its specialization in semantic processing. Moreover, the left superior temporal gyrus showed more similar patterns within both the semantic task and the syntactic task than across tasks, suggesting its role in the integration of semantic and syntactic information. In contrast to the temporal lobe, we did not find specialization or integration effects in either the opercular or the triangular part of the inferior frontal gyrus. Overall, our study shows that 5- to 6-year-old children have already developed specialization and integration in the temporal lobe, but not in the frontal lobe, consistent with developmental neurocognitive models of language comprehension in typically developing young children.
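
The core of this multivoxel pattern logic is simple to state in code: patterns from two runs of the same task should correlate more strongly than patterns from different tasks. The sketch below demonstrates the comparison on simulated voxel patterns; the data and single-ROI framing are illustrative assumptions, whereas the real analysis runs on beta maps with group statistics.

```python
# A minimal within- vs across-task pattern correlation sketch.
import numpy as np

rng = np.random.default_rng(5)
n_voxels = 200

# Hypothetical ROI activation patterns for two runs of each task; the
# semantic runs share a pattern, mimicking a semantically specialized ROI.
sem_run1 = rng.normal(size=n_voxels)
sem_run2 = sem_run1 + rng.normal(scale=0.8, size=n_voxels)
syn_run1 = rng.normal(size=n_voxels)
syn_run2 = rng.normal(size=n_voxels)

def r(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

within_sem = r(sem_run1, sem_run2)
within_syn = r(syn_run1, syn_run2)
across = np.mean([r(sem_run1, syn_run2), r(sem_run2, syn_run1)])

# Specialization for semantics shows up as within-semantic > across.
print(f"within semantic: {within_sem:.2f}, "
      f"within syntactic: {within_syn:.2f}, across: {across:.2f}")
```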

