Spatial and featural cue weighting in children’s developing object representations

2021 ◽  
Author(s):  
Vladislav Ayzenberg ◽  
Samoni Nag ◽  
Amy Krivoshik ◽  
Stella F. Lourenco

To represent an object accurately, the visual system must individuate it from surrounding objects and then classify it under the appropriate category or identity. To this end, adults flexibly weight different visual cues when perceiving objects. However, less is known about whether, and how, the weighting of visual object information changes over development. The current study examined how children use two types of information, spatial (e.g., left/right location) and featural (e.g., color), in different object tasks. In Experiment 1, we tested whether infants and preschoolers extract both the spatial and featural properties of objects, and, importantly, how these cues are weighted when pitted against each other. We found that infants relied primarily on spatial cues and neglected featural cues. By contrast, preschoolers showed the opposite pattern of weighting, placing greater weight on featural information. In Experiment 2, we tested the hypothesis that the developmental shift from spatial to featural weighting reflects a shift from a priority on object individuation (how many objects) in infancy to object classification (what the objects are) at preschool age. Here, we found that preschoolers weighted spatial information more than features when the task required individuating objects without identifying them, consistent with a specific role for spatial information in object individuation. We discuss the relevance of spatial-featural weighting in relation to developmental changes in children’s object representations.
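The cue-weighting logic in this abstract can be illustrated with a minimal weighted-evidence sketch. The weights, cue scores, and decision rule below are assumptions for illustration, not the authors' model; the sketch simply shows how shifting weight from spatial to featural cues reverses the judgment when the two cues conflict.

```python
# Illustrative sketch of spatial vs. featural cue weighting (not the authors' model).
# A test object is compared to a remembered object on two cues; the weighted sum
# of the match scores decides whether it is treated as the "same" object.

def weighted_match(spatial_match: float, featural_match: float,
                   w_spatial: float, w_featural: float,
                   threshold: float = 0.5) -> bool:
    """Return True if the weighted evidence says 'same object'."""
    evidence = w_spatial * spatial_match + w_featural * featural_match
    return evidence >= threshold

# Cue conflict: the object reappears in the same location but with a new color.
spatial_match, featural_match = 1.0, 0.0

# Hypothetical infant-like weighting: spatial cues dominate -> treated as the same object.
print(weighted_match(spatial_match, featural_match, w_spatial=0.8, w_featural=0.2))  # True

# Hypothetical preschooler-like weighting: featural cues dominate -> treated as different.
print(weighted_match(spatial_match, featural_match, w_spatial=0.2, w_featural=0.8))  # False
```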

2020 ◽  
Vol 10 (11) ◽  
pp. 854 ◽  
Author(s):  
Rafał Czajkowski ◽  
Bartosz Zglinicki ◽  
Emilia Rejmak ◽  
Witold Konopka

The retrosplenial cortex (RSC) belongs to the spatial memory circuit, but the precise timeline of its involvement and its relation to hippocampal activation have not been sufficiently described. We trained rats in a modified version of the T-maze with transparent walls and distant visual cues to induce the formation of allocentric spatial memory. We used two distinct salient contexts associated with opposite sequences of turns. Switching between contexts allowed us to test the ability of the animals to utilize spatial information. We then applied a catFISH approach with a probe directed against the Arc immediate early gene in order to visualize the associated memory engrams in the RSC and the hippocampus. After training, rats displayed two strategies to solve the maze, with half of the animals relying on distant spatial cues (allocentric strategy) and the other half using an egocentric strategy. Rats that did not utilize the spatial cues showed higher Arc levels in the RSC compared to the allocentric group. The overlap between the two context engrams in the RSC was similar in both groups. These results show differential involvement of the RSC and hippocampus during spatial memory acquisition and point toward their distinct roles in forming cognitive maps.
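The engram-overlap comparison mentioned above can be illustrated with a simple overlap index over Arc-positive cell populations. The cell counts and the particular measures below are assumptions for illustration, not necessarily the metric used in the study.

```python
# Illustrative overlap index for two context engrams (not necessarily the study's metric).
# Each set holds IDs of cells counted as Arc-positive for one context
# (e.g., nuclear foci for the recent context, cytoplasmic signal for the earlier one).

def engram_overlap(context_a: set, context_b: set, total_cells: int) -> dict:
    both = context_a & context_b
    return {
        "pct_active_a": 100 * len(context_a) / total_cells,
        "pct_active_b": 100 * len(context_b) / total_cells,
        "pct_reactivated": 100 * len(both) / total_cells,
        # Overlap expected if the two populations were statistically independent.
        "pct_expected_by_chance": 100 * len(context_a) * len(context_b) / total_cells**2,
    }

# Hypothetical counts for one imaged RSC section.
cells_context_1 = set(range(0, 120))    # 120 Arc+ cells for context 1
cells_context_2 = set(range(80, 190))   # 110 Arc+ cells for context 2, 40 shared
print(engram_overlap(cells_context_1, cells_context_2, total_cells=1000))
```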


Author(s):  
Mauricio Carlos Henrich ◽  
Ken Steffen Frahm ◽  
Ole K. Andersen

Spatial information from nociceptive stimuli applied to the skin of healthy humans is integrated in the spinal cord to determine the appropriate withdrawal reflex response. Two simultaneous stimuli applied at different skin sites are integrated, eliciting a larger reflex response. The temporal characteristics of the stimuli also modulate the reflex, e.g., by temporal summation. The primary aim of this study was to investigate how the combined tempo-spatial aspects of two stimuli are integrated in the nociceptive system. This was investigated by delivering single and double simultaneous stimulation, as well as sequential stimulation with different inter-stimulus intervals (ISIs ranging from 30 to 500 ms), to the sole of the foot of fifteen healthy subjects. The primary outcome measure was the size of the nociceptive withdrawal reflex (NWR) recorded from the tibialis anterior (TA) and biceps femoris (BF) muscles. Pain intensity was measured using a numerical rating scale (NRS). Results showed spatial summation in both TA and BF when delivering simultaneous stimulation. Simultaneous stimulation provoked larger reflexes than sequential stimulation in TA, but not in BF. Larger ISIs elicited significantly larger reflexes in TA, while the opposite pattern occurred in BF. This differential modulation between proximal and distal muscles suggests the presence of spinal circuits eliciting a functional reflex response based on the specific tempo-spatial characteristics of a noxious stimulus. No modulation was observed in pain intensity ratings across ISIs. The absence of modulation in the pain intensity ratings argues for an integrative mechanism located within the spinal cord, governed by the need for efficient withdrawal from a potentially harmful stimulus.
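As a rough illustration of the primary outcome measure, NWR size is commonly quantified as the root-mean-square EMG amplitude in a post-stimulus window. The window, sampling rate, and simulated signal below are assumptions for the sketch, not the parameters of this study.

```python
# Illustrative quantification of reflex size as RMS EMG in a post-stimulus window.
# The 60-180 ms window and the sampling rate are assumptions for the sketch.
import numpy as np

def reflex_size(emg: np.ndarray, fs: float, window=(0.060, 0.180)) -> float:
    """Root-mean-square EMG amplitude in a post-stimulus analysis window (s)."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    segment = emg[start:stop]
    return float(np.sqrt(np.mean(segment ** 2)))

fs = 2000.0  # Hz, assumed sampling rate
rng = np.random.default_rng(0)
emg_trial = rng.normal(0.0, 5e-6, size=int(0.5 * fs))  # 500 ms of simulated EMG (V)
emg_trial[int(0.07 * fs):int(0.15 * fs)] += 30e-6      # simulated reflex burst

print(f"NWR size: {reflex_size(emg_trial, fs) * 1e6:.1f} uV RMS")
```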


2005 ◽  
Vol 93 (2) ◽  
pp. 1104-1110 ◽  
Author(s):  
Jonathan J. Marotta ◽  
Gerald P. Keith ◽  
J. Douglas Crawford

We tested between three levels of visuospatial adaptation (global map, parallel feature modules, and parallel sensorimotor transformations) by training subjects to reach and grasp virtual objects viewed through a left-right reversing prism, with either visual location or orientation feedback. Even though spatial information about the global left-right reversal was present in every training session, subjects trained with location feedback reached to the correct location but with the wrong (reversed) grasp orientation. Subjects trained with orientation feedback showed the opposite pattern. These errors were task-specific rather than feature-specific: subjects trained to correctly grasp bars whose orientation was visually reversed failed to show knowledge of the reversal when asked to point to the end locations of those bars. These results show that adaptation to visuospatial distortion, even to global reversals, is implemented through learning rules that operate on parallel sensorimotor transformations (e.g., reach vs. grasp).
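The geometry behind this dissociation can be illustrated with a minimal sketch of what a left-right reversal does to target location versus bar orientation. The coordinate conventions below are assumptions for illustration; the point is that the location (reach) and orientation (grasp) mappings are separate transformations that can be adapted independently.

```python
# Illustrative geometry of a left-right (mirror) reversal, not the authors' model.
# A mirror about the vertical axis negates the horizontal coordinate of a target
# and mirrors the angle of an oriented bar; reach and grasp use these two
# mappings separately, so each can be recalibrated on its own.

def seen_location(true_x: float, true_y: float):
    """Target location (x, y) as seen through a left-right reversing prism."""
    return -true_x, true_y

def seen_orientation(true_angle_deg: float) -> float:
    """Bar orientation (degrees from vertical) as seen through the prism."""
    return (-true_angle_deg) % 180.0

# A bar 10 cm to the right of midline, tilted 30 degrees clockwise from vertical:
print(seen_location(10.0, 0.0))   # (-10.0, 0.0): appears 10 cm to the left
print(seen_orientation(30.0))     # 150.0: appears tilted 30 degrees the other way
```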


2021 ◽  
Author(s):  
Sophia Shatek ◽  
Amanda K Robinson ◽  
Tijl Grootswagers ◽  
Thomas A. Carlson

The ability to perceive moving objects is crucial for survival and threat identification. The association between the ability to move and being alive is learned early in childhood, yet not all moving objects are alive. Natural, non-agentive movement (e.g., clouds, fire) causes confusion in children and in adults under time pressure. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. Most prior work has used only moving stimuli that are also animate, making it difficult to disentangle the effect of movement from that of aliveness or animacy in object categorisation. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistakenly classified as alive, and were often classified as non-moving. Movement explained significant variance in the neural data during both a classification task and passive viewing. These results show that capacity for movement is an important dimension in the structure of human visual object representations.
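A minimal sketch of the kind of time-resolved decoding analysis described above, assuming simulated EEG epochs and a linear classifier; the data shapes, channel count, and classifier choice are assumptions, not the authors' pipeline.

```python
# Illustrative time-resolved decoding of "capacity for movement" from EEG epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 64, 100
epochs = rng.normal(size=(n_trials, n_channels, n_times))  # simulated EEG epochs
moving = rng.integers(0, 2, size=n_trials)                 # 1 = moving, 0 = non-moving
epochs[moving == 1, :, 40:60] += 0.3                       # injected class difference

# Decode the movement dimension separately at each time point with 5-fold CV.
accuracy = np.empty(n_times)
for t in range(n_times):
    scores = cross_val_score(LinearDiscriminantAnalysis(), epochs[:, :, t], moving, cv=5)
    accuracy[t] = scores.mean()

print(f"Peak decoding accuracy: {accuracy.max():.2f} (chance = 0.50)")
```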


2016 ◽  
Vol 78 (4) ◽  
pp. 1145-1162 ◽  
Author(s):  
Katharine B. Porter ◽  
Veronica Mazza ◽  
Annie Garofalo ◽  
Alfonso Caramazza

Author(s):  
Elizabeth Thorpe Davis ◽  
Larry F. Hodges

Two fundamental purposes of human spatial perception, in either a real or virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and of three-dimensionality for a visual scene (Yeh and Silverstein, 1992). (Stereopsis, or stereoscopic vision, results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer using 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than do simpler monoscopic displays (Wichanski, 1991). Thus, in creating 3D virtual environments or 3D simulated displays, much attention has recently been devoted to visual 3D stereoscopic displays.

Yet, given the costs and technical requirements of such displays, we should consider several issues. First, we should consider in what conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and, thus, constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans.

In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others. In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays. Also in that section we point out some spatial factors that limit stereopsis and fusion in human vision, as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that, we discuss some software and hardware issues, such as modeling geometry and display geometry, as well as geometric distortions and other artifacts that can affect human perception.
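A worked example of the binocular display geometry discussed here: the common textbook relation between viewing distance, object distance, and horizontal screen parallax. The eye separation and distances below are assumed values, and the formula is a standard simplification for a point on the viewer's midline, not taken from this chapter.

```python
# Illustrative screen-parallax calculation for a stereoscopic display.

def screen_parallax(eye_sep: float, view_dist: float, obj_dist: float) -> float:
    """Horizontal parallax (same units as the inputs) for a point on the midline.

    Positive = uncrossed parallax (object appears behind the screen),
    negative = crossed parallax (object appears in front of the screen).
    """
    return eye_sep * (obj_dist - view_dist) / obj_dist

IPD = 0.065  # assumed interpupillary distance, metres
D = 0.60     # assumed viewing distance to the screen, metres

for d in (0.40, 0.60, 1.20):  # simulated object distances, metres
    print(f"object at {d:.2f} m -> parallax {screen_parallax(IPD, D, d) * 1000:+.1f} mm")
```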


2020 ◽  
Vol 39 (3) ◽  
pp. 3769-3781
Author(s):  
Zhisong Han ◽  
Yaling Liang ◽  
Zengqun Chen ◽  
Zhiheng Zhou

Video-based person re-identification aims to match videos of pedestrians captured by non-overlapping cameras. Video provides both spatial information and temporal information. However, most existing methods do not combine these two types of information well and ignore that they are of different importance in most cases. To address these issues, we propose a two-stream network with a joint distance metric for measuring the similarity of two videos. The proposed two-stream network has several appealing properties. First, the spatial stream focuses on multiple parts of a person and outputs robust local spatial features. Second, a lightweight and effective temporal-information extraction block is introduced into video-based person re-identification. In the inference stage, the distance between two videos is measured as the weighted sum of the spatial distance and the temporal distance. We conduct extensive experiments on four public datasets, i.e., MARS, PRID2011, iLIDS-VID, and DukeMTMC-VideoReID, to show that our proposed approach outperforms existing methods in video-based person re-ID.
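A minimal sketch of the joint distance metric described in the abstract: the distance between two videos as a weighted sum of a spatial-stream distance and a temporal-stream distance. The embedding sizes, cosine distance, and weight value below are assumptions for illustration.

```python
# Illustrative joint distance for video re-ID: weighted sum of spatial and temporal
# distances between per-video embeddings from the two streams.
import numpy as np

def joint_distance(spatial_a, spatial_b, temporal_a, temporal_b, w_spatial=0.7):
    """Weighted sum of cosine distances between spatial and temporal embeddings."""
    def cosine_distance(x, y):
        return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    d_spatial = cosine_distance(spatial_a, spatial_b)
    d_temporal = cosine_distance(temporal_a, temporal_b)
    return w_spatial * d_spatial + (1.0 - w_spatial) * d_temporal

rng = np.random.default_rng(2)
# Hypothetical per-video embeddings from the spatial and temporal streams.
spat_q, temp_q = rng.normal(size=2048), rng.normal(size=512)  # query video
spat_g, temp_g = rng.normal(size=2048), rng.normal(size=512)  # gallery video

print(f"joint distance: {joint_distance(spat_q, spat_g, temp_q, temp_g):.3f}")
```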


1979 ◽  
Vol 23 (1) ◽  
pp. 449-451
Author(s):  
T. J. Triggs ◽  
W. G. Harris ◽  
B. N. Fildes

Formal delineation schemes on rural roads need to supply several types of information to the driver under night conditions. The driver needs longer-term delineation information, or reasonable preview, in order to plan ahead when approaching curves. The reported experiment explored the effects of various delineation schemes, road contour, distance to the curve, and direction of turn, under two reaction-time instructional conditions. The results demonstrated that roadside post delineation provides effective information of this type, while no benefit was found from edgelining. Right-hand curves were responded to faster and detected more easily than left-hand curves. Several interesting interactions were found between the factors studied.

