Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification

Author(s): ZongYuan Ge, Chris McCool, Conrad Sanderson, Peng Wang, Lingqiao Liu, et al.
2020 · Vol 22 (7) · pp. 1785-1795
Author(s): Chuanbin Liu, Hongtao Xie, Zhengjun Zha, Lingyun Yu, Zhineng Chen, et al.

2020
Author(s): Mingli Liang, Jingyi Zheng, Eve Isham, Arne Ekstrom

Abstract: Judging how far away something is and how long it takes to get there are critical to memory and navigation. Yet the neural codes for spatial and temporal information remain unclear, particularly how and whether neural oscillations might be important for such codes. To address these issues, we recorded scalp EEG while participants traveled through teleporters in a virtual town and then judged the distance and time spent inside the teleporter. Our findings suggest that alpha power relates to distance judgments while frontal theta power relates to temporal judgments. In contrast, changes in alpha frequency and beta power indexed both spatial and temporal judgments. We also found evidence for fine-grained temporal coding and an effect of past trials on temporal but not spatial judgments. Together, these findings support partially independent coding schemes for spatial and temporal information, and suggest that low-frequency oscillations play important roles in coding both space and time.
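The core analysis implied by this abstract can be sketched in a few lines: estimate band-limited EEG power per trial and relate it to the behavioral judgments. The sketch below is not the authors' pipeline; the sampling rate, the Welch power spectra averaged over electrodes, the synthetic `epochs` array, and the judgment vectors are all stand-in assumptions for illustration only.

```python
# Minimal sketch: relate band-limited EEG power to per-trial judgments.
# All data here are synthetic stand-ins, NOT the study's recordings.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fs = 250  # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 120, 32, 2 * fs
epochs = rng.standard_normal((n_trials, n_channels, n_samples))  # stand-in EEG
distance_judgments = rng.uniform(1, 10, n_trials)  # stand-in behavior
time_judgments = rng.uniform(1, 6, n_trials)

def band_power(epochs, fs, band):
    """Mean Welch PSD within a frequency band, per trial (channel-averaged)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=(-1, -2))  # average over band and channels

alpha = band_power(epochs, fs, (8, 12))  # alpha: hypothesized distance code
theta = band_power(epochs, fs, (4, 8))   # theta: hypothesized time code

r_dist, p_dist = pearsonr(alpha, distance_judgments)
r_time, p_time = pearsonr(theta, time_judgments)
print(f"alpha power vs distance: r={r_dist:.2f} (p={p_dist:.3f})")
print(f"theta power vs time:     r={r_time:.2f} (p={p_time:.3f})")
```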


2017 · Vol 19 (6) · pp. 1245-1256
Author(s): Bo Zhao, Xiao Wu, Jiashi Feng, Qiang Peng, Shuicheng Yan

2015 · Vol 15 (12) · pp. 1167
Author(s): Clara Fannjiang, Marius Catalin Iordan, Diane Beck, Li Fei-Fei

2017 · Vol 26 (8) · pp. 3965-3980
Author(s): Sezer Karaoglu, Ran Tao, Jan C. van Gemert, Theo Gevers

2019
Author(s): Aria Y. Wang, Leila Wehbe, Michael J. Tarr

Abstract: Convolutional neural networks (CNNs) trained for object recognition have been widely used to account for visually driven neural responses in both the human and primate brains. However, because of the generality and complexity of the object classification task, it is often difficult to make precise inferences about neural information processing from CNN representations, even though these representations are effective for predicting brain activity. To better understand the nature of the visual features encoded in different regions of the human brain, we predicted brain responses to images using fine-grained representations drawn from 19 specific computer vision tasks. Individual encoding models for each task were constructed and then applied to BOLD5000, a large-scale dataset of fMRI scans collected while observers viewed over 5000 naturalistic scene and object images. Because different encoding models predict activity in different brain regions, we were able to associate specific vision tasks with each region. For example, within scene-selective brain regions, features from 3D tasks such as 3D keypoints and 3D edges explain more variance than features from 2D tasks, a pattern that replicates across the whole brain. Using results across all 19 task representations, we constructed a "task graph" based on the spatial layout of well-predicted brain areas for each task. We then compared this brain-derived task structure with the task structure derived from transfer-learning accuracy in order to assess the degree of shared information between the two task spaces. These computationally driven results, arising out of state-of-the-art computer vision methods, begin to reveal the task-specific architecture of the human visual system.
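The voxelwise encoding-model idea in this abstract can be illustrated with a short sketch: fit a regularized regression from each task-specific feature space to every voxel's BOLD response, score held-out prediction per voxel, and assign each voxel to its best-predicting task. The feature matrices, voxel data, task names, and ridge penalty below are synthetic stand-ins, not the BOLD5000 data or the paper's actual 19 task representations; it shows the core step only under those assumptions.

```python
# Minimal sketch of a per-task voxelwise encoding model.
# Features, responses, and task names are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_voxels = 1000, 500

# Hypothetical task-specific image features (placeholders for the paper's tasks).
task_features = {
    "3d_keypoints": rng.standard_normal((n_images, 64)),
    "2d_edges": rng.standard_normal((n_images, 64)),
}
bold = rng.standard_normal((n_images, n_voxels))  # stand-in voxel responses

def encoding_score(X, y):
    """Per-voxel correlation between predicted and observed held-out BOLD."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    pred = Ridge(alpha=10.0).fit(X_tr, y_tr).predict(X_te)
    # z-score both, then average products: Pearson r per voxel
    pred_z = (pred - pred.mean(0)) / pred.std(0)
    true_z = (y_te - y_te.mean(0)) / y_te.std(0)
    return (pred_z * true_z).mean(0)

# One encoding model per task; each voxel is then associated with the task
# whose model predicts it best, giving a task-to-region mapping.
tasks = list(task_features)
scores = np.stack([encoding_score(X, bold) for X in task_features.values()])
best_task_per_voxel = [tasks[i] for i in scores.argmax(axis=0)]
```

In the paper's actual pipeline the feature spaces come from pretrained task-specific networks and the mapping is evaluated over anatomically defined regions; the sketch collapses that to random features and a single train/test split to keep the mechanism visible.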

