Children are sensitive to mutual information in intermediate-complexity face and nonface features

Author(s):  
Benjamin Balas ◽  
Amanda Auen ◽  
Alyson Saville ◽  
Jamie Schmidt ◽  
Assaf Harel

Uncovering when children learn to use specific visual information for recognizing object categories is essential for understanding how experience shapes recognition. Research on the development of face recognition has focused on children's use of low-level information (e.g., orientation sub-bands) or on their use of high-level information, namely configural or holistic information. Do children also use intermediate-complexity features for categorizing faces and objects, and if so, at what age? Intermediate-complexity features bridge the gap between low- and high-level processing: they have computational benefits for object detection and segmentation, and are known to drive neural responses in the ventral visual system. Here, we asked when children develop sensitivity to diagnostic category information in intermediate-complexity features. We presented children (5-10 years old) and adults with image fragments of faces (Experiment 1) and cars (Experiment 2) varying in their mutual information, which quantifies a fragment's diagnosticity of a specific category. Our goal was to determine whether children were sensitive to the amount of mutual information in these fragments, and whether their information usage differs from adults'. We found that despite better overall categorization performance in adults, all children were sensitive to fragment diagnosticity in both categories, suggesting that intermediate representations of appearance are established early in childhood. Moreover, children's usage of mutual information was not limited to face fragments, suggesting that extracting intermediate-complexity features is a process not specific to faces. We discuss the implications of our findings for developmental theories of face and object recognition.
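
The fragment-diagnosticity measure invoked above is, in the fragment-based object recognition literature, the mutual information between a binary "fragment detected" variable and the category label. A minimal sketch of that quantity follows, with toy detection counts and a function name that are illustrative rather than taken from the study:

```python
# Minimal sketch of fragment diagnosticity as mutual information between
# a binary fragment detector F and a binary category label C, estimated
# from detection counts. Toy counts below are illustrative only.
import numpy as np

def fragment_mutual_information(n_detect_in_class, n_class,
                                n_detect_total, n_total):
    p_c = n_class / n_total                # P(C = 1), e.g. P(face)
    p_f = n_detect_total / n_total         # P(F = 1), fragment detected
    p_fc = n_detect_in_class / n_total     # P(F = 1, C = 1)

    # Joint probabilities of the four (detected, category) cells
    joint = {(1, 1): p_fc,
             (1, 0): p_f - p_fc,
             (0, 1): p_c - p_fc,
             (0, 0): 1 - p_f - p_c + p_fc}

    mi = 0.0
    for (f, c), p in joint.items():
        p_marg = (p_f if f else 1 - p_f) * (p_c if c else 1 - p_c)
        if p > 0:
            mi += p * np.log2(p / p_marg)  # bits
    return mi

# A fragment found in 90 of 100 face images but only 10 of 100
# non-face images is highly diagnostic of the face category.
print(fragment_mutual_information(90, 100, 100, 200))  # ~0.53 bits
```

A fragment detected often within a category but rarely outside it carries high mutual information, which is what makes it diagnostic.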

2019 ◽  
Vol 9 (7) ◽  
pp. 154
Author(s):  
Benjamin Balas ◽  
Assaf Harel ◽  
Amanda Auen ◽  
Alyson Saville

One way in which face recognition develops during infancy and childhood concerns the visual information that contributes most to recognition judgments. Adult face recognition depends on critical features spanning a hierarchy of complexity, including low-level, intermediate, and high-level visual information. To date, research on the development of adult-like information biases for face recognition has focused on low-level features, which are computationally well defined but low in complexity, and on high-level features, which are high in complexity but not precisely defined. To complement this existing literature, we examined the development of children's neural responses to intermediate-level face features characterized using mutual information. Specifically, we examined children's and adults' sensitivity to varying levels of category diagnosticity at the P100 and N170 components. We found that during middle childhood, sensitivity to mutual information shifts from early components to later ones, which may indicate a critical restructuring of face recognition mechanisms that takes place over several years. This approach provides a useful bridge between the study of low- and high-level visual features for face recognition and suggests many intriguing questions for further investigation.
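
As a hypothetical illustration of how such component-level sensitivity might be quantified, the sketch below regresses a component's mean amplitude in a fixed time window against the mutual information of the fragment shown on each trial; the window bounds, array shapes, and simulated data are assumptions, not the study's pipeline:

```python
# Hypothetical sketch (not the study's pipeline): quantify a component's
# sensitivity to fragment diagnosticity as the slope of its mean window
# amplitude against the mutual information of the fragment on each trial.
import numpy as np

def component_sensitivity(epochs, times, mi_levels, window):
    """epochs: (n_trials, n_times) single-electrode ERP, in microvolts.
    times: (n_times,) seconds. mi_levels: (n_trials,) fragment MI.
    Returns the slope of amplitude vs. mutual information."""
    mask = (times >= window[0]) & (times <= window[1])
    amplitude = epochs[:, mask].mean(axis=1)        # per-trial mean amplitude
    slope, _intercept = np.polyfit(mi_levels, amplitude, deg=1)
    return slope

# Simulated trials with an N170-like effect: amplitude in 140-200 ms
# becomes more negative as fragment MI increases.
rng = np.random.default_rng(0)
times = np.linspace(-0.1, 0.5, 601)
mi = rng.uniform(0, 1, 200)
epochs = rng.normal(0, 1, (200, 601))
in_window = (times >= 0.14) & (times <= 0.20)
epochs[:, in_window] -= 3.0 * mi[:, None]           # MI-dependent deflection

print(component_sensitivity(epochs, times, mi, (0.14, 0.20)))  # ~ -3.0
```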


2017 ◽  
Author(s):  
Noam Roth ◽  
Nicole C. Rust

Finding a sought visual target object requires combining visual information about a scene with a remembered representation of the target to create a “target match” signal that indicates when a target is in view. Target match signals have been reported to exist within high-level visual brain areas including inferotemporal cortex (IT), where they are mixed with representations of image and object identity. However, these signals are not well understood, particularly in the context of the real-world challenge that the objects we search for typically appear at different positions, sizes, and within different background contexts. To investigate these signals, we recorded neural responses in IT as two rhesus monkeys performed a delayed-match-to-sample object search task in which target objects could appear at a variety of identity-preserving transformations. Consistent with the existence of behaviorally-relevant target match signals in IT, we found that IT contained a linearly separable target match representation that reflected behavioral confusions on trials in which the monkeys made errors. Additionally, target match signals were highly distributed across the IT population, and while a small fraction of units reflected target match signals as target match suppression, most units reflected target match signals as target match enhancement. Finally, we found that the potentially detrimental impact of target match signals on visual representations was mitigated by target match modulation that was approximately (albeit imperfectly) multiplicative. Together, these results support the existence of a robust, behaviorally-relevant target match representation in IT that is configured to minimally interfere with IT visual representations.
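
To see why approximately multiplicative modulation is well suited to minimally interfere with visual representations, note that scaling a population response vector changes its overall magnitude (a linearly decodable match signal) without changing its direction (the object identity code). A small illustrative simulation, not the authors' analysis code:

```python
# Illustrative simulation (not the authors' analysis code): a roughly
# multiplicative target-match gain rescales a population response vector,
# which changes its overall drive (a decodable match signal) but not its
# direction (the object identity code).
import numpy as np

rng = np.random.default_rng(1)
n_units, n_objects = 50, 8
identity_patterns = rng.normal(0, 1, (n_objects, n_units))  # visual code

match_gain, distractor_gain = 1.5, 1.0    # target match enhancement

def population_response(obj, is_match):
    gain = match_gain if is_match else distractor_gain
    return gain * identity_patterns[obj]

# Identity preserved: match and distractor responses to the same object
# point in the same direction (perfectly correlated here).
r_match = population_response(3, is_match=True)
r_distractor = population_response(3, is_match=False)
print(np.corrcoef(r_match, r_distractor)[0, 1])   # 1.0

# Match separable: total population drive differs by the gain factor.
print(r_match.sum() / r_distractor.sum())          # 1.5
```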


Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robotic interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation (Low-Level Automation (LLA) and High-Level Automation (HLA)). Results indicated a significant difference in hit rate between the low and high levels of control when a permanent error occurred. In the LLA group, the type of error had a significant effect on the hit rate. In general, the high level of automation performed better than the low level, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automatic implementation to perform the task more effectively and more accurately.


2021 ◽  
Vol 7 (22) ◽  
pp. eabe7547
Author(s):  
Meenakshi Khosla ◽  
Gia H. Ngo ◽  
Keith Jamison ◽  
Amy Kuceyeski ◽  
Mert R. Sabuncu

Naturalistic stimuli, such as movies, activate a substantial portion of the human brain, evoking a response shared across individuals. Encoding models that predict neural responses to arbitrary stimuli can be very useful for studying brain function. However, existing models focus on limited aspects of naturalistic stimuli, ignoring the dynamic interactions of modalities in this inherently context-rich paradigm. Using movie-watching data from the Human Connectome Project, we build group-level models of neural activity that incorporate several inductive biases about neural information processing, including hierarchical processing, temporal assimilation, and auditory-visual interactions. We demonstrate how incorporating these biases leads to remarkable prediction performance across large areas of the cortex, beyond the sensory-specific cortices into multisensory sites and frontal cortex. Furthermore, we illustrate that encoding models learn high-level concepts that generalize to task-bound paradigms. Together, our findings underscore the potential of encoding models as powerful tools for studying brain function in ecologically valid conditions.
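
As a minimal sketch of the encoding-model approach in general, assume precomputed stimulus features (for example, activations from a pretrained hierarchical network) and voxel time series; a regularized linear map from features to voxels is then fit and evaluated on held-out data. The shapes, names, and random-split evaluation below are simplifications, not the paper's architecture:

```python
# Minimal encoding-model sketch: ridge regression from precomputed
# stimulus features (e.g., activations of a pretrained hierarchical
# network) to voxel responses. Shapes, names, and the random split are
# simplifications, not the paper's model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_timepoints, n_features, n_voxels = 1000, 128, 10
X = rng.normal(0, 1, (n_timepoints, n_features))    # stimulus features
true_map = rng.normal(0, 1, (n_features, n_voxels))
Y = X @ true_map + rng.normal(0, 5, (n_timepoints, n_voxels))  # voxels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2,
                                          random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)

# Standard evaluation: correlate predicted and held-out responses per voxel
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(np.round(r, 2))
```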


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Chih-Hua Tai ◽  
Kuo-Hsuan Chung ◽  
Ya-Wen Teng ◽  
Feng-Ming Shu ◽  
Yue-Shan Chang

2014 ◽  
Vol 112 (6) ◽  
pp. 1584-1598 ◽  
Author(s):  
Marino Pagan ◽  
Nicole C. Rust

The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′).
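
A minimal sketch of the projection-and-correction idea, under simplifying assumptions (an orthonormal basis and independent, identically distributed trial noise): the squared weight of each component is inflated by the variance of the noise in the condition means, and that bias can be subtracted analytically. Variable names and the toy data are illustrative:

```python
# Minimal sketch, assuming an orthonormal basis B and i.i.d. trial noise:
# project a neuron's condition-mean responses onto B, then subtract the
# analytic noise bias from the squared weights.
import numpy as np

def signal_modulation(trials, basis):
    """trials: (n_trials, n_conditions) responses of one neuron.
    basis: (n_components, n_conditions), orthonormal rows.
    Returns bias-corrected squared weights, one per component."""
    n_trials = trials.shape[0]
    weights = basis @ trials.mean(axis=0)          # raw signal weights

    # Noise in the condition means inflates each squared weight by
    # sigma^2 / n_trials; estimate sigma^2 from across-trial variance.
    noise_var = trials.var(axis=0, ddof=1).mean()
    return weights**2 - noise_var / n_trials

rng = np.random.default_rng(3)
basis = np.linalg.qr(rng.normal(0, 1, (4, 4)))[0]  # orthonormal 4x4
true_weights = np.array([2.0, 0.5, 0.0, 0.0])
clean_means = true_weights @ basis                 # condition means
trials = clean_means + rng.normal(0, 1, (20, 4))   # 20 noisy repeats
print(np.round(signal_modulation(trials, basis), 2))  # ~ [4, 0.25, 0, 0]
```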


2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
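
The core brain/behavior test here can be summarized as correlating two matrices over the same category pairs: a neural representational dissimilarity matrix and a behavioral search-time matrix. A schematic version with simulated data (the 28 pairs match the study's 28 conditions, but everything else is illustrative):

```python
# Schematic RSA linking neural dissimilarity to search performance; the
# 28 pairs match the study's 28 category pairings, but the patterns and
# search times are simulated placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_categories, n_voxels = 8, 100
patterns = rng.normal(0, 1, (n_categories, n_voxels))  # one pattern per category

# Neural RDM: 1 - correlation between category patterns (condensed form)
neural_rdm = pdist(patterns, metric="correlation")     # 28 pairwise values

# Simulated behavior: targets that are more dissimilar from their
# distractors are found faster (search time inversely related, plus noise)
search_time = 1.0 / (neural_rdm + 0.1) + rng.normal(0, 0.2, neural_rdm.shape)

rho, p = spearmanr(neural_rdm, search_time)
print(f"brain/behavior correlation: rho = {rho:.2f}, p = {p:.4f}")
```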


2021 ◽  
Author(s):  
Ning Mei ◽  
Roberto Santana ◽  
David Soto

Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner, with low trial numbers and signal detection theoretic constraints not allowing conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly-sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involving prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing/representation in the absence of perceptual sensitivity by using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
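
The generalization claim can be illustrated by cross-condition decoding: train a classifier on multivoxel patterns from conscious trials and test it on unconscious trials, so that above-chance accuracy indicates a shared content code. A schematic sketch with simulated data, not the study's decoding pipeline:

```python
# Schematic cross-condition generalization test: train on conscious
# trials, test on unconscious trials. All names and data are simulated
# placeholders, not the study's stimuli or fMRI patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_trials, n_voxels = 200, 300
content_code = rng.normal(0, 1, n_voxels)            # shared content axis
y_conscious = rng.integers(0, 2, n_trials)
y_unconscious = rng.integers(0, 2, n_trials)
X_conscious = np.outer(y_conscious * 2 - 1, content_code) \
    + rng.normal(0, 8, (n_trials, n_voxels))
X_unconscious = np.outer(y_unconscious * 2 - 1, 0.3 * content_code) \
    + rng.normal(0, 8, (n_trials, n_voxels))         # weaker unconscious signal

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_conscious, y_conscious)

# Above-chance accuracy indicates a content representation that
# generalizes across conscious and unconscious processing states.
print("cross-decoding accuracy:", clf.score(X_unconscious, y_unconscious))
```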


2018 ◽  
Author(s):  
Janna M. Gottwald

This thesis assesses the link between action and cognition early in development. The notion of embodied cognition is thus investigated by tying together two levels of action control in the context of reaching in infancy: prospective motor control and executive functions. The ability to plan our actions is the indispensable foundation of reaching our goals, and actions can thus be stratified on different levels of control: the relatively low level of prospective motor control and the comparatively high level of cognitive control. Prospective motor control is concerned with goal-directed actions at the level of single movements and movement combinations of our body and ensures purposeful, coordinated movements, such as reaching for a cup of coffee. Cognitive control, in the context of this thesis more precisely referred to as executive functions, deals with goal-directed actions at the level of whole actions and action combinations and facilitates directedness towards mid- and long-term goals, such as finishing a doctoral thesis. Whereas prospective motor control and executive functions are well studied in adulthood, the early development of both is not sufficiently understood.

This thesis comprises three empirical motion-tracking studies that shed light on prospective motor control and executive functions in infancy. Study I investigated the prospective motor control of current actions by having 14-month-olds lift objects of varying weights. In doing so, multi-cue integration was addressed by comparing the use of visual and non-visual information to non-visual information only. Study II examined the prospective motor control of future actions in action sequences by investigating reach-to-place actions in 14-month-olds, addressing the extent to which Fitts' law can explain movement duration in infancy. Study III lifted prospective motor control to a higher, that is, cognitive level by investigating it relative to executive functions in 18-month-olds.

The main results were that 14-month-olds are able to prospectively control their manual actions based on object weight. In this action-planning process, infants use different sources of information. Beyond this ability to prospectively control their current action, 14-month-olds also take future actions into account and plan their actions based on the difficulty of the subsequent action in action sequences. In 18-month-olds, prospective motor control in manual actions, such as reaching, is related to early executive functions, as demonstrated for behavioral prohibition and working memory. These findings are consistent with the idea that executive functions derive from prospective motor control. I suggest that executive functions could be grounded in the development of motor control; in other words, early executive functions should be seen as embodied.
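
Fitts' law, invoked in Study II, predicts movement time from an index of difficulty that grows with target distance and shrinks with target width: MT = a + b log2(2D/W). A small sketch with arbitrary illustrative coefficients, not estimates from the thesis:

```python
# Fitts' law as invoked in Study II: movement time grows with the index
# of difficulty of the upcoming placement. The coefficients a and b are
# arbitrary illustrative values, not estimates from the thesis.
import numpy as np

def fitts_movement_time(distance, width, a=0.2, b=0.15):
    """MT = a + b * log2(2D / W), with D the distance to the target
    and W the target width; log2(2D/W) is the index of difficulty."""
    index_of_difficulty = np.log2(2 * distance / width)
    return a + b * index_of_difficulty

# A farther or smaller target predicts a longer placement movement.
print(fitts_movement_time(distance=0.30, width=0.05))  # harder: ID ~3.6 bits
print(fitts_movement_time(distance=0.10, width=0.10))  # easier: ID = 1 bit
```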

