HOW PURE IS PURE ALEXIA? A NEUROPSYCHOLOGICAL ANALYSIS OF A CASE SERIES OF PATIENTS WITH ALEXIA DUE TO LEFT HEMISPHERIC STROKES

2020 ◽  
Vol 18 (4) ◽  
pp. 493-506
Author(s):  
Marcin M. Leśniak ◽  
Aleksandra J. Zielińska ◽  
Wojciech Czepiel ◽  
Joanna Seniów

The aim of this study was to analyze a case series with acquired alexia after stroke within the posterior areas of the left hemisphere, in the context of the current criteria for pure alexia and their relevance to the set of symptoms observable in clinical practice. Seven patients with ischemic strokes and an initial diagnosis of pure alexia were enrolled for detailed analysis. The evaluation consisted of neuropsychological assessment in the form of standardized tests and non-standardized reading tasks; oculomotor activity during reading was also measured. Language functions, visual object and space perception, verbal and nonverbal memory, and visuospatial constructional ability were among the domains assessed. In five of the participants, pure alexia was recognized based on significant and specific discrepancies between test scores, indicating primary abnormalities in the visual processing of letter strings as the basic mechanism of the disorder. In most of the patients, coexisting cognitive deficits were revealed; however, these were disproportionately milder and less functionally significant than the reading disturbances. Pure alexia is a relatively rare disorder after stroke, but it considerably affects the quality of everyday independent functioning. Its clinical characteristics in practice rarely meet all the criteria proposed in the subject literature. The differential diagnosis of this form of alexia and other reading disorders requires detailed clinical analysis.

2019 ◽  
Vol 27 ◽  
pp. 59-65 ◽  
Author(s):  
Marjorie de Oliveira Gallinari ◽  
Luciano Tavares Angelo Cintra ◽  
Morganna Borges de Almeida Souza ◽  
Ana Carolina Souza Barboza ◽  
Lara Maria Bueno Esteves ◽  
...  

Neurology ◽  
2020 ◽  
Vol 95 (12) ◽  
pp. e1672-e1685 ◽  
Author(s):  
Colin Groot ◽  
B.T. Thomas Yeo ◽  
Jacob W. Vogel ◽  
Xiuming Zhang ◽  
Nanbo Sun ◽  
...  

Objective: To determine whether atrophy relates to the phenotypical variants of posterior cortical atrophy (PCA) recently proposed in clinical criteria (i.e., dorsal, ventral, dominant-parietal, and caudal), we assessed associations between latent atrophy factors and cognition.
Methods: We employed a data-driven Bayesian modeling framework based on latent Dirichlet allocation to identify latent atrophy factors in a multicenter cohort of 119 individuals with PCA (age 64 ± 7 years, 38% male, Mini-Mental State Examination 21 ± 5, 71% β-amyloid positive, 29% β-amyloid status unknown). The model uses standardized gray matter density images as input (adjusted for age, sex, intracranial volume, MRI scanner field strength, and whole-brain gray matter volume) and provides voxelwise probabilistic maps for a predetermined number of atrophy factors, allowing every individual to express each factor to a degree without a priori classification. Individual factor expressions were correlated to 4 PCA-specific cognitive domains (object perception, space perception, nonvisual/parietal functions, and primary visual processing) using general linear models.
Results: The model revealed 4 distinct yet partially overlapping atrophy factors: right-dorsal, right-ventral, left-ventral, and limbic. We found that object perception and primary visual processing were associated with atrophy that predominantly reflects the right-ventral factor. Furthermore, space perception was associated with atrophy that predominantly represents the right-dorsal and right-ventral factors. However, individual participant profiles revealed that the large majority expressed multiple atrophy factors and had mixed clinical profiles with impairments across multiple domains, rather than displaying a discrete clinical–radiologic phenotype.
Conclusion: Our results indicate that specific brain-behavior networks are vulnerable in PCA, but most individuals display a constellation of affected brain regions and symptoms, indicating that classification into 4 mutually exclusive variants is unlikely to be clinically useful.
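The factor-discovery step can be illustrated with scikit-learn's LatentDirichletAllocation, a simpler stand-in for the study's Bayesian framework; only the cohort size (119) and factor count (4) come from the abstract, while the data and every other setting below are synthetic:

```python
# Hedged sketch: latent Dirichlet allocation applied to gray matter maps,
# with synthetic subject-by-voxel pseudo-counts standing in for the
# standardized, covariate-adjusted density images used in the study.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_subjects, n_voxels, n_factors = 119, 200, 4

# Synthetic atrophy: each subject mixes a few latent voxelwise patterns.
true_topics = rng.dirichlet(np.ones(n_voxels) * 0.5, size=n_factors)
mixes = rng.dirichlet(np.ones(n_factors), size=n_subjects)
counts = rng.poisson(mixes @ true_topics * 500)

lda = LatentDirichletAllocation(n_components=n_factors, random_state=0)
# Each row of `expressions` sums to 1: every subject expresses every
# factor to a degree, with no hard assignment to a single variant.
expressions = lda.fit_transform(counts)
expressions /= expressions.sum(axis=1, keepdims=True)  # defensive renorm
print(expressions.shape)  # (119, 4)
```

The key property echoed here is the soft assignment: classification into mutually exclusive variants would instead force each row to a single factor.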


2009 ◽  
Vol 24 (4) ◽  
pp. 355-370 ◽  
Author(s):  
J. Pena-Casanova ◽  
M. Quintana-Aparicio ◽  
S. Quinones-Ubeda ◽  
M. Aguilar ◽  
J. L. Molinuevo ◽  
...  

2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.
NEW & NOTEWORTHY: Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
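The representational similarity logic can be sketched as follows; all data are synthetic, and the 8 categories are chosen only so that the 28 category pairs echo the study's 28 search conditions:

```python
# Hedged sketch of representational similarity analysis (RSA): correlate
# pairwise neural dissimilarity between object categories with behavioral
# search times for the same category pairs. Data and sizes are illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_categories, n_voxels = 8, 50  # e.g., faces, cars, bodies, hammers, ...

# Mean neural response pattern per category (measured in isolation).
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural representational dissimilarity: 1 - Pearson r per category pair,
# giving one value for each of the C(8, 2) = 28 pairs.
neural_rdm = pdist(patterns, metric="correlation")

# Behavioral search times per pair: simulated here as noisily tracking
# neural dissimilarity (more dissimilar target/distractor -> faster search).
search_rt = -neural_rdm + rng.normal(scale=0.1, size=neural_rdm.shape)

# Brain/behavior correlation across category pairs.
rho, p = spearmanr(neural_rdm, search_rt)
print(f"brain/behavior correlation: rho = {rho:.2f}")
```

Rank correlation (Spearman) is the conventional choice in RSA because it does not assume a linear relationship between neural distance and reaction time.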


2014 ◽  
Vol 31 (5-6) ◽  
pp. 413-436 ◽  
Author(s):  
Thomas Habekost ◽  
Anders Petersen ◽  
Marlene Behrmann ◽  
Randi Starrfelt

2019 ◽  
Author(s):  
Leslie Y. Lai ◽  
Romy Frömer ◽  
Elena K. Festa ◽  
William C. Heindel

Abstract: When recognizing objects in our environments, we rely on both what we see and what we know. While elderly adults have been found to display increased sensitivity to top-down influences of contextual information during object recognition, the locus of this increased sensitivity remains unresolved. To address this issue, we examined the effects of aging on the neural dynamics of bottom-up and top-down visual processing during rapid object recognition. Specific EEG ERP components indexing bottom-up and top-down processes along the visual processing stream were assessed while systematically manipulating the degree of object ambiguity and scene context congruity. An increase in early attentional feedback mechanisms (as indexed by N1) as well as a functional reallocation of executive attentional resources (as indexed by P200) prior to object identification were observed in elderly adults, while post-perceptual semantic integration (as indexed by N400) remained intact. These findings suggest that compromised bottom-up perceptual processing of visual input in healthy aging leads to an increased involvement of top-down processes to resolve greater perceptual ambiguity during object recognition.


2019 ◽  
Author(s):  
Talia Brandman ◽  
Chiara Avancini ◽  
Olga Leticevscaia ◽  
Marius V. Peelen

Abstract: Sounds (e.g., barking) help us to visually identify objects (e.g., a dog) that are distant or ambiguous. While neuroimaging studies have revealed neuroanatomical sites of audiovisual interactions, little is known about the time-course by which sounds facilitate visual object processing. Here we used magnetoencephalography (MEG) to reveal the time-course of the facilitatory influence of natural sounds (e.g., barking) on visual object processing, and compared this to the facilitatory influence of spoken words (e.g., "dog"). Participants viewed images of blurred objects preceded by a task-irrelevant natural sound, a spoken word, or uninformative noise. A classifier was trained to discriminate multivariate sensor patterns evoked by animate and inanimate intact objects with no sounds, presented in a separate experiment, and tested on sensor patterns evoked by the blurred objects in the three auditory conditions. Results revealed that both sounds and words, relative to uninformative noise, significantly facilitated visual object category decoding between 300 and 500 ms after visual onset. We found no evidence for earlier facilitation by sounds than by words. These findings provide evidence for a semantic route of facilitation by both natural sounds and spoken words, whereby the auditory input first activates semantic object representations, which then modulate the visual processing of objects.
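A minimal sketch of the time-resolved cross-decoding idea, with synthetic data standing in for MEG sensor patterns; trial counts, sensor counts, and the time at which category information appears are all invented for illustration:

```python
# Hedged sketch: train a classifier on sensor patterns evoked by intact
# animate vs. inanimate objects, then test it on patterns evoked by
# blurred objects, one time point at a time. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 100, 30, 20
labels = rng.integers(0, 2, size=n_trials)  # 0 = inanimate, 1 = animate

# Category signal that emerges only at later time points (here, t >= 10).
signal = np.zeros((n_trials, n_sensors, n_times))
signal[:, :5, 10:] = (labels * 2 - 1)[:, None, None]

train = signal + rng.normal(size=signal.shape)        # "intact" patterns
test = 0.5 * signal + rng.normal(size=signal.shape)   # weaker "blurred" patterns

# Time-resolved cross-decoding: fit at each time point on intact patterns,
# score on blurred patterns at the same time point.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(train[:, :, t], labels)
    accuracy[t] = clf.score(test[:, :, t], labels)

print(f"early: {accuracy[:10].mean():.2f}, late: {accuracy[10:].mean():.2f}")
```

Decoding hovers at chance before the signal onset and rises above it afterward; in the study, the analogous comparison is between auditory conditions rather than time halves.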


2020 ◽  
Vol 30 (9) ◽  
pp. 5067-5087
Author(s):  
Ali Almasi ◽  
Hamish Meffin ◽  
Shaun L Cloherty ◽  
Yan Wong ◽  
Molis Yunzab ◽  
...  

Abstract: Visual object identification requires both selectivity for specific visual features that are important to the object's identity and invariance to feature manipulations. For example, a hand can be shifted in position, rotated, or contracted but still be recognized as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex. They both show selectivity for edge-orientation but complex cells develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we quantitatively describe the balance between selectivity and invariance in complex cells. Phase invariance is frequently partial, while invariance to orientation and spatial frequency are more extensive than expected. The invariance arises due to two independent factors: (1) the structure and number of filters and (2) the form of nonlinearities that act upon the filter outputs. Both vary more than previously considered, so primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.
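The textbook idealization of this filters-plus-nonlinearity scheme is the energy model of a complex cell, the fully phase-invariant limiting case against which the partial invariances described above can be contrasted; a minimal sketch with illustrative parameters:

```python
# Hedged sketch of the classic energy model: two quadrature-phase Gabor
# filters whose squared outputs are summed, yielding orientation
# selectivity combined with (complete) spatial-phase invariance.
import numpy as np

def gabor(size, theta, freq, phase):
    """2-D Gabor filter: oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 6) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

def grating(size, theta, freq, phase):
    """Full-field sinusoidal grating stimulus."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.cos(2 * np.pi * freq * xr + phase)

def complex_cell(img, theta=0.0, freq=0.1, size=31):
    # Energy nonlinearity: squared outputs of a quadrature (0 and 90
    # degree phase) filter pair, summed.
    even = np.sum(img * gabor(size, theta, freq, 0.0))
    odd = np.sum(img * gabor(size, theta, freq, np.pi / 2))
    return even**2 + odd**2

# Responses to the preferred orientation at different grating phases are
# nearly equal (phase invariance) and far exceed the response to an
# orthogonal grating (orientation selectivity).
r_phase0 = complex_cell(grating(31, 0.0, 0.1, 0.0))
r_phase90 = complex_cell(grating(31, 0.0, 0.1, np.pi / 2))
r_ortho = complex_cell(grating(31, np.pi / 2, 0.1, 0.0))
print(r_phase0, r_phase90, r_ortho)
```

The abstract's point is that real complex cells often fall between this idealization and a simple cell: partial phase invariance (retaining brightness-polarity sensitivity) with broader-than-expected orientation and spatial-frequency invariance.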

