Time-Frequency Image Resizing Using Interpolation for Acoustic Event Recognition with Convolutional Neural Networks

Author(s): Roneel V. Sharan, Tom J. Moir
2016, Vol. 45 (4), pp. 734-759
Author(s): Qicong Wang, Jinhao Zhao, Dingxi Gong, Yehu Shen, Maozhen Li, ...
2021
Author(s): David A. Tovar, Tijl Grootswagers, James Jun, Oakyoon Cha, Randolph Blake, ...

Humans are able to recognize objects under a variety of noisy conditions, so models of the human visual system must account for how this feat is accomplished. In this study, we investigated how image perturbations, specifically reducing images to their low spatial frequency (LSF) components, affected correspondence between convolutional neural networks (CNNs) and brain signals recorded using magnetoencephalography (MEG). Using the high temporal resolution of MEG, we found that CNN-Brain correspondence for deeper and more complex layers across CNN architectures emerged earlier for LSF images than for their unfiltered broadband counterparts. The early emergence of LSF components is consistent with the coarse-to-fine theoretical framework for visual image processing, but surprisingly shows that LSF signals from images are more prominent when high spatial frequencies are removed. In addition, we decomposed MEG signals into oscillatory components and found correspondence varied based on frequency bands, painting a full picture of how CNN-Brain correspondence varies with time, frequency, and MEG sensor locations. Finally, we varied image properties of CNN training sets, and found marked changes in CNN processing dynamics and correspondence to brain activity. In sum, we show that image perturbations affect CNN-Brain correspondence in unexpected ways, as well as provide a rich methodological framework for assessing CNN-Brain correspondence across space, time, and frequency.
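The LSF reduction described in the abstract amounts to low-pass filtering an image in the spatial-frequency domain. A minimal NumPy sketch of the idea is below; the `cutoff_frac` parameter, the circular frequency mask, and the synthetic test image are illustrative assumptions, not details taken from the study:

```python
import numpy as np

def lowpass_filter(image, cutoff_frac=0.2):
    """Keep only the low spatial frequencies of a 2-D image.

    cutoff_frac (an assumed, illustrative parameter) is the fraction
    of the smaller image dimension used as the frequency-domain radius.
    """
    f = np.fft.fftshift(np.fft.fft2(image))  # center DC component
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    # circular low-pass mask around the center of the shifted spectrum
    radius = cutoff_frac * min(h, w) / 2
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Synthetic image: a smooth gradient (low frequency) plus a
# pixel-level checkerboard (high frequency).
h = w = 64
grad = np.linspace(0, 1, w)[None, :].repeat(h, axis=0)
checker = (np.indices((h, w)).sum(axis=0) % 2).astype(float)
img = grad + 0.5 * checker

lsf = lowpass_filter(img)
# The gradient survives; the checkerboard is strongly attenuated,
# which can be seen in the reduced pixel-to-pixel variation of lsf.
```

In practice, smoother masks (e.g. Gaussian weighting of the spectrum) are often preferred over a hard circular cutoff to avoid ringing artifacts, but the hard mask keeps the sketch short.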

