object representations
Recently Published Documents

TOTAL DOCUMENTS: 484 (five years: 104)
H-INDEX: 44 (five years: 6)

2021 · Author(s): Vladislav Ayzenberg, Samoni Nag, Amy Krivoshik, Stella F. Lourenco

To accurately represent an object, the visual system must individuate it from surrounding objects and then classify it with the appropriate category or identity. To this end, adults flexibly weight different visual cues when perceiving objects. However, less is known about whether, and how, the weighting of visual object information changes over development. The current study examined how children use different types of information—spatial (e.g., left/right location) and featural (e.g., color)—in different object tasks. In Experiment 1, we tested whether infants and preschoolers extract both the spatial and featural properties of objects and, importantly, how these cues are weighted when pitted against each other. We found that infants relied primarily on spatial cues and neglected featural cues. By contrast, preschoolers showed the opposite pattern of weighting, placing greater weight on featural information. In Experiment 2, we tested the hypothesis that the developmental shift from spatial to featural weighting reflects a shift in priority from object individuation (how many objects there are) in infancy to object classification (what the objects are) at preschool age. Here, we found that preschoolers weighted spatial information more than features when the task required individuating objects without identifying them, consistent with a specific role for spatial information in object individuation. We discuss the relevance of spatial-featural weighting in relation to developmental changes in children’s object representations.


2021 · Author(s): Sophia Shatek, Amanda K. Robinson, Tijl Grootswagers, Thomas A. Carlson

The ability to perceive moving objects is crucial for survival and threat identification. The association between the ability to move and being alive is learned early in childhood, yet not all moving objects are alive. Natural, non-agentive movement (e.g., clouds, fire) causes confusion in children, and in adults under time pressure. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. Most prior work has used only moving stimuli that are also animate, so it is difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistakenly classified as alive, and often classified as non-moving. Movement explained significant variance in the neural data, during both a classification task and passive viewing. These results show that capacity for movement is an important dimension in the structure of human visual object representations.


2021 · Vol 21 (9) · pp. 2624 · Author(s): Wenyan Bi, Aalap D. Shah, Kimberly W. Wong, Brian Scholl, Ilker Yildirim

2021 · Author(s): Marek A. Pedziwiatr, Elisabeth von dem Hagen, Christoph Teufel

Humans constantly move their eyes to explore the environment and obtain information. Competing theories of gaze guidance consider the factors driving eye movements within a dichotomy between low-level visual features and high-level object representations. However, recent developments in object perception indicate a complex and intricate relationship between features and objects. Specifically, image-independent object-knowledge can generate objecthood by dynamically reconfiguring how feature space is carved up by the visual system. Here, we adopt this emerging perspective of object perception, moving away from the simplifying dichotomy between features and objects in explanations of gaze guidance. We recorded eye movements in response to stimuli that appear as meaningless patches on initial viewing but are experienced as coherent objects once relevant object-knowledge has been acquired. We demonstrate that gaze guidance differs substantially depending on whether observers experienced the same stimuli as meaningless patches or organised them into object representations. In particular, fixations on identical images became object-centred, less dispersed, and more consistent across observers once observers had been exposed to relevant prior object-knowledge. Observers' gaze behaviour also indicated a shift from exploratory information-sampling to a strategy of extracting information mainly from selected, object-related image areas. These effects were evident from the first fixations on the image. Importantly, however, eye movements were not fully determined by object representations but were best explained by a simple model that integrates image-computable features and high-level, knowledge-dependent object representations. Overall, the results show how information sampling via eye movements in humans is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organisation.
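A simple model of the kind described above, integrating image-computable features with knowledge-dependent object representations, can be sketched as a weighted combination of two spatial maps. This is only an illustration of the general idea: the map sizes, weights, object region, and the `predict_fixations` helper are hypothetical, not the authors' actual model.

```python
# Illustrative sketch: predict a fixation-density map as a weighted sum of a
# low-level feature (salience) map and a knowledge-dependent object map.
# All values and shapes are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(1)
h, w = 48, 64
feature_map = rng.random((h, w))      # stand-in for an image-computable salience map
object_map = np.zeros((h, w))
object_map[10:30, 20:45] = 1.0        # region observers organise into an object

def predict_fixations(w_feat, w_obj):
    """Combine the two maps and normalize into a probability map over pixels."""
    combined = w_feat * feature_map + w_obj * object_map
    return combined / combined.sum()

# With nonzero object weight, predicted fixations concentrate on the object region
p = predict_fixations(0.4, 0.6)
```

Fitting the two weights to observed fixation data would then quantify how much each source of information contributes to gaze guidance.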


2021 · Author(s): Lixiang Chen, Radoslaw Martin Cichy, Daniel Kaiser

During natural vision, objects rarely appear in isolation, but often within a semantically related scene context. Previous studies reported that semantic consistency between objects and scenes facilitates object perception, and that scene-object consistency is reflected in changes in the N300 and N400 components in EEG recordings. Here, we investigate whether these N300/N400 differences are indicative of changes in the cortical representation of objects. In two experiments, we recorded EEG signals while participants viewed semantically consistent or inconsistent objects within a scene; in Experiment 1, these objects were task-irrelevant, while in Experiment 2, they were directly relevant for behavior. In both experiments, we found reliable and comparable N300/N400 differences between consistent and inconsistent scene-object combinations. To probe the quality of object representations, we performed multivariate classification analyses, in which we decoded the category of the objects contained in the scene. In Experiment 1, in which the objects were not task-relevant, object category could be decoded from around 100 ms after the object presentation, but no difference in decoding performance was found between consistent and inconsistent objects. By contrast, when the objects were task-relevant in Experiment 2, we found enhanced decoding of semantically consistent, compared to semantically inconsistent, objects. These results show that differences in N300/N400 components related to scene-object consistency do not index changes in cortical object representations, but rather reflect a generic marker of semantic violations. Further, our findings suggest that facilitatory effects between objects and scenes are task-dependent rather than automatic.


eLife · 2021 · Vol 10 · Author(s): Miles Wischnewski, Marius V. Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.


2021 · Author(s): Jeanne L. Shinskey

Experience with an object’s photo changes 9-month-olds’ preference for the referent object, confirming they can form picture-based object representations (Shinskey & Jachens, 2014). However, infants’ picture-based representations often appear weaker than object-based ones. The current study’s first objective was to investigate age differences in infants’ recognition memory for a real object after familiarization with its picture. The second objective was to test whether age differences in object permanence sensitivity with picture-based representations conceptually replicate those found with object-based representations, whereby 7-month-olds search more for familiar hidden objects but 11-month-olds search more for novel ones (Shinskey & Munakata, 2005, 2010). Twenty 6-month-olds and twenty 11-month-olds were familiarized with an object’s photo and tested on their representation of the real referent object by comparing their preferential reaching for it versus a novel distractor object. The objects were visible in one condition testing recognition memory and hidden in another condition testing object permanence. Like 9-month-olds, 6- and 11-month-olds had a novelty preference with visible objects. This finding shows robust early recognition memory for an object after familiarization with its photo, as well as developmental continuity. Unlike 9-month-olds, who switched to a familiarity preference with hidden objects, 6- and 11-month-olds switched to a null preference. This U-shaped pattern fails to conceptually replicate 7- and 11-month-olds’ preferences with hidden objects after familiarization with a real object. It reveals discontinuity in sensitivity to an object’s permanence after familiarization with its picture, and suggests that such picture-based representations are weaker than object-based ones.

