Qualitative Analysis of Dynamic Activity Patterns in Neural Networks

2011 ◽  
Vol 2011 ◽  
pp. 1-2 ◽  
Author(s):  
Ivanka Stamova ◽  
Haydar Akca ◽  
Gani Stamov

Author(s):  
Mihaela-Hanako Matcovschi ◽  
Octavian Pastravanu


2017 ◽  
Author(s):  
Stefania Bracci ◽  
Ioannis Kalfas ◽  
Hans Op de Beeck

Abstract: Recent studies showed agreement between how the human brain and neural networks represent objects, suggesting that we might start to understand the underlying computations. However, we know that the human brain is prone to biases at many perceptual and cognitive levels, often shaped by learning history and evolutionary constraints. Here we explored one such bias, namely the bias to perceive animacy, and used the performance of neural networks as a benchmark. We performed an fMRI study that dissociated object appearance (what an object looks like) from object category (animate or inanimate) by constructing a stimulus set that includes animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the animate objects (e.g., a cow-mug). Behavioral judgments and deep neural networks categorized images mainly by animacy, setting all objects (lookalike and inanimate) apart from the animate ones. In contrast, activity patterns in ventral occipitotemporal cortex (VTC) were strongly biased towards object appearance: animals and lookalikes were similarly represented and separated from the inanimate objects. Furthermore, this bias interfered with proper object identification, such as failing to signal that a cow-mug is a mug. The bias in VTC to represent a lookalike as animate was present even when participants performed a task requiring them to report the lookalikes as inanimate. In conclusion, VTC representations, in contrast to neural networks, fail to veridically represent objects when visual appearance is dissociated from animacy, probably due to biased processing of visual features typical of animate objects.
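The appearance-versus-animacy dissociation described in this abstract can be illustrated with a minimal pattern-correlation sketch. The voxel patterns below are synthetic stand-ins (not the study's data): a lookalike pattern is constructed to share features with the animate pattern, mimicking an appearance-biased code such as the one reported for VTC.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical voxel response patterns. In an appearance-biased code,
# a lookalike object (e.g., a cow-mug) shares visual features with the
# animal it resembles rather than with other inanimate objects.
animate = rng.normal(size=n_voxels)
inanimate = rng.normal(size=n_voxels)
lookalike = animate + 0.5 * rng.normal(size=n_voxels)  # appearance-driven mix

def corr(a, b):
    """Pearson correlation between two pattern vectors."""
    return float(np.corrcoef(a, b)[0, 1])

r_look_animate = corr(lookalike, animate)
r_look_inanimate = corr(lookalike, inanimate)

# An appearance-biased representation groups the lookalike with animates.
print(r_look_animate > r_look_inanimate)
```

A category-based (animacy) code, as in the behavioral judgments and deep networks, would instead show the opposite ordering: the lookalike would correlate more with the inanimate patterns.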



1989 ◽  
Vol 36 (2) ◽  
pp. 229-243 ◽  
Author(s):  
A.N. Michel ◽  
J.A. Farrell ◽  
W. Porod


2019 ◽  
Vol 8 (1) ◽  
pp. 45 ◽  
Author(s):  
Caglar Koylu ◽  
Chang Zhao ◽  
Wei Shao

Thanks to recent advances in high-performance computing and deep learning, computer vision algorithms coupled with spatial analysis methods provide a unique opportunity for extracting human activity patterns from geo-tagged social media images. However, only a handful of studies evaluate the utility of computer vision algorithms for studying large-scale human activity patterns. In this article, we introduce an analytical framework that integrates a computer vision algorithm based on convolutional neural networks (CNN) with kernel density estimation to identify objects and infer human activity patterns from geo-tagged photographs. To demonstrate our framework, we identify bird images to infer birdwatching activity from approximately 20 million publicly shared images on Flickr, across a three-year period from December 2013 to December 2016. To assess the accuracy of object detection, we compared results from the computer vision algorithm to concept-based image retrieval, which is based on keyword search over image metadata such as textual descriptions, tags, and titles. We then compared patterns in birding activity generated using Flickr bird photographs with patterns identified using eBird data, an online citizen science bird observation application. The results of our eBird comparison highlight the potential differences and biases between casual and serious birdwatching, and the similarities and differences in behavior between social media and citizen science users. Our analysis provides valuable insights into assessing the credibility and utility of geo-tagged photographs for studying human activity patterns through object detection and spatial analysis.
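The kernel-density step of a framework like this can be sketched in a few lines. The coordinates below are made up, standing in for the locations of photographs a CNN has classified as containing birds; the study's actual detection pipeline and Flickr data are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Hypothetical lon/lat of "bird" photos: a dense cluster (e.g., a wetland
# birding hotspot) plus sparse background points across the region.
hotspot = rng.normal(loc=[-93.6, 41.6], scale=0.2, size=(300, 2))
background = rng.uniform(low=[-96.0, 40.0], high=[-90.0, 43.0], size=(100, 2))

# gaussian_kde expects shape (n_dims, n_points)
points = np.vstack([hotspot, background]).T
kde = gaussian_kde(points)

# Density surface values at a hotspot location vs. a remote location
density_at_hotspot = kde([[-93.6], [41.6]])[0]
density_elsewhere = kde([[-91.0], [40.5]])[0]
print(density_at_hotspot > density_elsewhere)
```

Evaluating the fitted density on a regular lon/lat grid would yield the kind of continuous activity surface the article uses to compare Flickr-derived birding patterns against eBird observations.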



2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Thinh Nguyen ◽  
Thomas Potter ◽  
Trac Nguyen ◽  
Christof Karmonik ◽  
Robert Grossman ◽  
...  

Understanding the mechanisms of neuroplasticity is the first step in treating neuromuscular system impairments with cognitive rehabilitation approaches. To characterize the dynamics of the neural networks and the underlying neuroplasticity of the central motor system, neuroimaging tools with high spatial and temporal accuracy are desirable. EEG and fMRI stand among the most popular noninvasive neuroimaging modalities with complementary features, yet achieving both high spatial and temporal accuracy remains a challenge. A novel multimodal EEG/fMRI integration method was developed in this study to achieve high spatiotemporal accuracy by employing the most probable fMRI spatial subsets to guide EEG source localization in a time-variant fashion. In comparison with the traditional fMRI-constrained EEG source imaging method in a visual/motor activation task study, the proposed method demonstrated superior localization accuracy with lower variation and identified neural activity patterns that agreed well with previous studies. This spatiotemporal fMRI-constrained source imaging method was then implemented in a "sequential multievent-related potential" paradigm in which motor activation is evoked by emotion-related visual stimuli. Results demonstrate that the proposed method can be used as a powerful neuroimaging tool to unveil the dynamics and neural networks associated with the central motor system, providing insights into the mechanisms of neuroplasticity modulation.


