Getting the gist faster: Blurry images enhance the early temporal similarity between neural signals and convolutional neural networks

2021
Author(s):
David A. Tovar
Tijl Grootswagers
James Jun
Oakyoon Cha
Randolph Blake
...  

Humans are able to recognize objects under a variety of noisy conditions, so models of the human visual system must account for how this feat is accomplished. In this study, we investigated how image perturbations, specifically reducing images to their low spatial frequency (LSF) components, affected the correspondence between convolutional neural networks (CNNs) and brain signals recorded using magnetoencephalography (MEG). Using the high temporal resolution of MEG, we found that CNN-brain correspondence for the deeper and more complex layers across CNN architectures emerged earlier for LSF images than for their unfiltered broadband counterparts. This early emergence is consistent with the coarse-to-fine theoretical framework for visual image processing, but it also shows, surprisingly, that the LSF content of an image becomes more prominent in neural signals when the high spatial frequencies are removed. In addition, we decomposed MEG signals into oscillatory components and found that correspondence varied across frequency bands, yielding a fuller picture of how CNN-brain correspondence varies with time, frequency, and MEG sensor location. Finally, we varied the image properties of CNN training sets and found marked changes in CNN processing dynamics and in correspondence to brain activity. In sum, we show that image perturbations affect CNN-brain correspondence in unexpected ways, and we provide a rich methodological framework for assessing CNN-brain correspondence across space, time, and frequency.
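
As a concrete illustration of the two ingredients described above, here is a minimal Python sketch: a Gaussian low-pass filter in the Fourier domain to produce an LSF-only image, and a time-resolved comparison of representational dissimilarity matrices (RDMs) between a CNN layer and MEG sensor patterns. The cutoff frequency, the Gaussian filter profile, and the RDM-based comparison are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

def lsf_filter(image: np.ndarray, cutoff_cpi: float = 8.0) -> np.ndarray:
    """Reduce a 2D image to its low-spatial-frequency component via a
    Gaussian low-pass filter in the Fourier domain (cutoff in cycles/image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                      # vertical freq, cycles/image
    fx = np.fft.fftfreq(w) * w                      # horizontal freq, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = np.exp(-(radius ** 2) / (2 * cutoff_cpi ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(patterns)

def correspondence_timecourse(meg: np.ndarray, layer_feats: np.ndarray) -> np.ndarray:
    """Spearman rho between a CNN layer's RDM and the MEG RDM at each timepoint.
    meg: (n_images, n_sensors, n_times); layer_feats: (n_images, n_units)."""
    layer_rdm = rdm(layer_feats)
    iu = np.triu_indices(layer_rdm.shape[0], k=1)   # unique off-diagonal pairs
    return np.array([
        spearmanr(layer_rdm[iu], rdm(meg[:, :, t])[iu]).correlation
        for t in range(meg.shape[2])
    ])

# Toy example: 20 images, 64 sensors, 50 timepoints, one CNN layer.
rng = np.random.default_rng(0)
lsf_image = lsf_filter(rng.standard_normal((256, 256)))
rho = correspondence_timecourse(rng.standard_normal((20, 64, 50)),
                                rng.standard_normal((20, 512)))
print(lsf_image.shape, rho.shape)                   # (256, 256) (50,)
```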

Author(s):  
Rohit Bhat
Akshay Deshpande
Rahul Rai
Ehsan Tarkesh Esfahani

The aim of this paper is to explore a new multimodal computer-aided design (CAD) platform based on brain-computer interfaces and touch-based systems. The paper describes experiments and algorithms for manipulating geometric objects in CAD systems using touch-based gestures and movement imagery detected through brain waves. Gestures on touch-based systems are subject to ambiguity because they are two-dimensional in nature; brain signals are used here as the main source for resolving these ambiguities. The brainwaves are recorded as electroencephalogram (EEG) signals. Users wear a neuroheadset and attempt to move and rotate a target object on a touch screen. As they perform these actions, the EEG headset collects brain activity from 14 locations on the scalp. The data are analyzed in the time-frequency domain to detect desynchronization in certain frequency bands (3–7 Hz, 8–13 Hz, 14–20 Hz, 21–29 Hz, and 30–50 Hz) in the temporal cortex as an indication of motor imagery.
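
The band-specific desynchronization analysis can be sketched roughly as follows: band-pass each EEG channel into the listed bands, take the Hilbert power envelope, and express task-period power as a percent change from baseline (negative values indicate desynchronization). The sampling rate, filter order, and window choices below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 128  # Hz; a typical rate for a 14-channel consumer EEG headset (assumption)
BANDS = {"theta": (3, 7), "alpha": (8, 13), "low-beta": (14, 20),
         "high-beta": (21, 29), "gamma": (30, 50)}

def band_power_envelope(eeg: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Band-pass one EEG channel and return its instantaneous power envelope."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    return np.abs(hilbert(filtered)) ** 2

def erd_percent(eeg: np.ndarray, band: tuple, baseline: slice, task: slice) -> float:
    """ERD as percent power change from baseline; negative = desynchronization."""
    power = band_power_envelope(eeg, *band)
    ref = power[baseline].mean()
    return 100.0 * (power[task].mean() - ref) / ref

# Example: theta-band ERD on one channel, 1 s of baseline then 2 s of task.
rng = np.random.default_rng(1)
channel = rng.standard_normal(3 * FS)
print(erd_percent(channel, BANDS["theta"], slice(0, FS), slice(FS, 3 * FS)))
```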


2021
Vol 15
Author(s):
Chi Zhang
Xiao-Han Duan
Lin-Yuan Wang
Yong-Li Li
Bin Yan
...  

Despite the remarkable similarities between convolutional neural networks (CNNs) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that considerable differences remain between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans can successfully recognize AI images as the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs recognize AN images as the same categories as their corresponding regular images but classify AI images into the wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain and compare it to the activity of artificial neurons in a prototypical CNN, AlexNet. In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with the network outputs in all intermediate processing layers, providing no neural foundation for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images successfully generalize to the neural responses to AI images but not to AN images. These marked differences between the human brain and AlexNet in the representation-perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
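
A minimal sketch of the layer-wise comparison described above: extract AlexNet's intermediate activations with forward hooks and correlate the activation patterns evoked by a regular image and its adversarial counterpart. The hook-based extraction and the Pearson similarity measure are generic stand-ins, not the authors' code, and random tensors stand in for the actual image pairs.

```python
import torch
import torchvision.models as models

# Load a pretrained AlexNet (downloads weights on first use) in eval mode.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach().flatten(1)
    return hook

# Register hooks on every convolutional stage of the feature extractor.
for idx, layer in enumerate(alexnet.features):
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(make_hook(f"conv{idx}"))

def layer_similarity(img_a: torch.Tensor, img_b: torch.Tensor) -> dict:
    """Pearson r between activation patterns for two (1, 3, 224, 224) inputs."""
    with torch.no_grad():
        alexnet(img_a)
        acts_a = dict(activations)   # copy references before the next pass
        alexnet(img_b)
        acts_b = dict(activations)
    sims = {}
    for name in acts_a:
        a = acts_a[name].squeeze()
        b = acts_b[name].squeeze()
        a, b = a - a.mean(), b - b.mean()
        sims[name] = float((a @ b) / (a.norm() * b.norm() + 1e-8))
    return sims

# Random tensors stand in for a regular / adversarial image pair here;
# real inputs would be normalized 224x224 images.
regular = torch.randn(1, 3, 224, 224)
adversarial = torch.randn(1, 3, 224, 224)
print(layer_similarity(regular, adversarial))
```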


2021
Vol 17 (9)
pp. e1009456
Author(s):  
Bruce C. Hansen
Michelle R. Greene
David J. Field

A number of neuroimaging techniques have been employed to understand how visual information is transformed along the visual pathway. Although each technique has spatial and temporal limitations, each can provide important insights into the visual code. While the BOLD signal of fMRI can be quite informative, the visual code is not static, and fMRI's poor temporal resolution can obscure these dynamics. In this study, we leveraged the high temporal resolution of EEG to develop an encoding technique based on the distribution of responses generated by a population of real-world scenes. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. Our analyses of the mapping results revealed that scenes undergo a series of nonuniform transformations that prioritize different spatial frequencies at different regions of scenes over time. This mapping technique offers a potential avenue for future studies to explore how dynamic feedforward and recurrent processes inform and refine high-level representations of our visual world.
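
In the spirit of the encoding approach described above (though not the authors' exact method), the following sketch regresses each electrode's response at each timepoint onto image pixel intensities with ridge regression, so the fitted weights form a per-pixel map for every electrode and timepoint. The toy data dimensions and the regularization strength are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_pixel_maps(images: np.ndarray, eeg: np.ndarray, alpha: float = 1.0):
    """images: (n_scenes, n_pixels); eeg: (n_scenes, n_electrodes, n_times).
    Returns regression weights of shape (n_electrodes, n_times, n_pixels)."""
    n_scenes, n_elec, n_times = eeg.shape
    maps = np.zeros((n_elec, n_times, images.shape[1]))
    for e in range(n_elec):
        for t in range(n_times):
            # One encoding model per electrode and timepoint; its weight
            # vector is a spatial map over the image's pixels.
            model = Ridge(alpha=alpha).fit(images, eeg[:, e, t])
            maps[e, t] = model.coef_
    return maps

# Toy example: 200 scenes, 16x16 "images", 4 electrodes, 10 timepoints.
rng = np.random.default_rng(2)
imgs = rng.standard_normal((200, 16 * 16))
resp = rng.standard_normal((200, 4, 10))
pixel_maps = fit_pixel_maps(imgs, resp)
print(pixel_maps.shape)  # (4, 10, 256)
```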

