Shared Spatiotemporal Category Representations in Biological and Artificial Deep Neural Networks
Abstract
Understanding the computational transformations that enable invariant visual categorization is a fundamental challenge in both systems and cognitive neuroscience. Recently developed deep convolutional neural networks (CNNs) perform visual categorization at accuracies that rival humans, providing neuroscientists with the opportunity to interrogate the series of representational transformations that enable categorization in silico. The goal of the current study is to assess the extent to which the sequential visual representations built by a CNN map onto those built in the human brain, as assessed by high-density, time-resolved event-related potentials (ERPs). We found correspondence both over time and across the scalp: earlier ERP activity was best explained by early CNN layers at all electrodes, whereas later neural activity was best explained by the later, conceptual layers of the CNN. This effect was especially pronounced at frontal and right occipital sites. Together, these results indicate that deep artificial neural networks trained to perform scene categorization traverse similar representational stages as the human brain. Thus, examining these networks will allow neuroscientists to better understand the transformations that enable invariant visual categorization.
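The abstract does not specify the exact comparison procedure, but a common way to relate CNN layers to time-resolved ERP data is representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each CNN layer and for each ERP timepoint, then ask which layer's RDM best matches the neural RDM at each point in time. The sketch below is a minimal, hypothetical illustration of that idea using synthetic data; the array shapes, layer sizes, and choice of correlation distance and Spearman comparison are assumptions for illustration, not the authors' reported pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical dimensions: n_images stimuli, a few CNN layers of different
# widths, and ERP data of shape (n_images, n_electrodes, n_timepoints).
rng = np.random.default_rng(0)
n_images, n_electrodes, n_timepoints = 50, 64, 100
layer_activations = [rng.normal(size=(n_images, d)) for d in (512, 1024, 2048)]
erp = rng.normal(size=(n_images, n_electrodes, n_timepoints))

# One RDM per CNN layer: pairwise correlation distances between the
# layer's responses to each pair of images (condensed vector form).
layer_rdms = [pdist(a, metric="correlation") for a in layer_activations]

# For each timepoint, build an ERP RDM from the scalp pattern across
# electrodes and record which CNN layer's RDM correlates best with it.
best_layer = np.empty(n_timepoints, dtype=int)
for t in range(n_timepoints):
    erp_rdm = pdist(erp[:, :, t], metric="correlation")
    rhos = [spearmanr(erp_rdm, lr)[0] for lr in layer_rdms]  # Spearman rho
    best_layer[t] = int(np.argmax(rhos))

# A layer index per timepoint; an early-to-late progression over time would
# mirror the correspondence described in the abstract.
print(best_layer)
```

On real data the same loop could be run per electrode (or electrode cluster) rather than across the whole scalp, which is one way the spatial pattern reported in the abstract (frontal and right occipital sites) could be examined.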