spatiotemporal representation
Recently Published Documents


TOTAL DOCUMENTS: 64 (FIVE YEARS: 35)
H-INDEX: 10 (FIVE YEARS: 2)

Science, 2021, Vol. 373 (6551), pp. 242-247
Author(s): Nicholas M. Dotson, Michael M. Yartsev

Navigation occurs through a continuum of space and time. The hippocampus is known to encode the immediate position of moving animals. However, active navigation, especially at high speeds, may require representing navigational information beyond the present moment. Using wireless electrophysiological recordings in freely flying bats, we demonstrate that neural activity in area CA1 predominantly encodes nonlocal spatial information up to meters away from the bat’s present position. This spatiotemporal representation extends both forward and backward in time, with an emphasis on future locations, and is found during both random exploration and goal-directed navigation. The representation of position thus extends along a continuum, with each moment containing information about past, present, and future, and may provide a key mechanism for navigating along self-selected and remembered paths.
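As a rough illustration of how forward- and backward-shifted spatial coding of this kind can be quantified, the sketch below computes a unit's spatial tuning against time-shifted position and asks which lag carries the most information. This is a hedged sketch, not the authors' analysis pipeline; the binning, variable names, and use of the Skaggs information measure are assumptions.

```python
# Hypothetical sketch: measure spatial information of a unit's spikes against
# time-shifted position. A peak at positive lags suggests coding of future
# locations; at negative lags, past locations. Variable names and the Skaggs
# information measure are illustrative assumptions, not the paper's method.
import numpy as np

def spatial_information(spike_counts, positions, n_bins=50):
    """Skaggs information (bits/spike) of binned spike counts vs. 1-D position."""
    occupancy, edges = np.histogram(positions, bins=n_bins)
    spikes_per_bin, _ = np.histogram(positions, bins=edges, weights=spike_counts)
    p_occ = occupancy / occupancy.sum()
    mean_rate = spike_counts.mean()
    rate_map = np.divide(spikes_per_bin, occupancy,
                         out=np.zeros(n_bins), where=occupancy > 0)
    valid = (rate_map > 0) & (p_occ > 0)
    ratio = rate_map[valid] / mean_rate
    return np.sum(p_occ[valid] * ratio * np.log2(ratio))

def best_time_shift(spike_counts, positions, dt=0.05, max_shift_s=2.0):
    """Return the time shift (in seconds) at which spatial information peaks."""
    max_shift = int(max_shift_s / dt)
    shifts = np.arange(-max_shift, max_shift + 1)
    info = []
    for s in shifts:
        if s > 0:            # spikes vs. future positions
            sc, pos = spike_counts[:-s], positions[s:]
        elif s < 0:          # spikes vs. past positions
            sc, pos = spike_counts[-s:], positions[:s]
        else:
            sc, pos = spike_counts, positions
        info.append(spatial_information(sc, pos))
    return shifts[int(np.argmax(info))] * dt
```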


2021, Vol. 8 (1)
Author(s): Eslam Mounier, Bassem Abdullah, Hani Mahdi, Seif Eldawlatly

Abstract: The Lateral Geniculate Nucleus (LGN) represents one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as a target for recently developed visual prostheses, it is much less studied than the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates a spatiotemporal representation of the visual stimulus, in addition to the LGN neuronal firing history, to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN of 12 anesthetized rats, for a total neuronal population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard, and geometrical-shape visual stimulation patterns. Extracted firing rates and the corresponding stimulation patterns were used to train the model. The performance of the model was assessed using different testing data sets and different firing rate windows. Overall mean correlation coefficients between the actual and the predicted firing rates of 0.57 and 0.7 were achieved for the 10 ms and 50 ms firing rate windows, respectively. The results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models including the state-of-the-art Generalized Linear Model (GLM), and indicate the potential of deep convolutional neural networks as viable models of LGN firing.
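A minimal sketch, assuming PyTorch, of an encoder in this spirit: a 2-D CNN over a short stack of stimulus frames combined with a small branch over recent spike history, evaluated by Pearson correlation as in the abstract. The layer sizes, input shapes, and history length are assumptions, not the authors' published architecture.

```python
# Sketch (not the authors' code) of a CNN encoder mapping a spatiotemporal
# stimulus clip plus recent firing history to a predicted firing rate.
import torch
import torch.nn as nn

class LGNEncoder(nn.Module):
    def __init__(self, n_frames=10, history_len=10):
        super().__init__()
        # Treat the stacked stimulus frames as input channels of a 2-D CNN.
        self.conv = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.history = nn.Sequential(nn.Linear(history_len, 16), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 16, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),  # keep predicted rates non-negative
        )

    def forward(self, stimulus, spike_history):
        # stimulus: (batch, n_frames, H, W); spike_history: (batch, history_len)
        x = self.conv(stimulus).flatten(1)
        h = self.history(spike_history)
        return self.head(torch.cat([x, h], dim=1)).squeeze(1)

def pearson_r(pred, target):
    """Correlation between predicted and actual firing rates, as in the evaluation."""
    pred, target = pred - pred.mean(), target - target.mean()
    return (pred * target).sum() / (pred.norm() * target.norm() + 1e-8)
```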


2021
Author(s): Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross Girshick, Kaiming He

2021, Vol. 38 (1), pp. 89-95
Author(s): Yunfang Xie, Su Zhang, Yingdi Liu

Artificial intelligence and fifth-generation (5G) technology are widely adopted to evaluate the classroom poses of college students with the help of campus video surveillance equipment. To ensure effective learning in class, it is important to detect abnormal behaviors such as sleeping and using cellphones in time to intervene. Based on spatiotemporal representation learning, this paper presents a deep learning algorithm to evaluate the classroom poses of college students. First, feature engineering was adopted to mine the moving trajectories of college students, which were used to determine the student distribution and establish a classroom prewarning system. Then, k-means clustering (KMC) was employed to perform cluster analysis on different student groups and identify the features of each group. For a specific student group, the classroom surveillance video was decomposed into frames; the edges of each frame were extracted by an edge detection algorithm and fed into the proposed convolutional neural network (CNN). Experimental results show that our algorithm is 5% more accurate than the benchmark three-dimensional CNN (C3D), making it an effective tool for recognizing abnormal in-class behaviors of college students.
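An illustrative sketch of this pipeline, assuming OpenCV, scikit-learn, and PyTorch. The trajectory features, number of clusters, Canny thresholds, frame-sampling step, and CNN layout are assumptions rather than the paper's exact configuration.

```python
# Sketch of the described pipeline: cluster student trajectories, extract edge
# maps from surveillance frames, and classify behavior with a small CNN.
import cv2
import numpy as np
from sklearn.cluster import KMeans
import torch.nn as nn

def cluster_trajectories(trajectories, n_groups=4):
    """Group students by simple trajectory statistics (mean position, total displacement)."""
    feats = np.array([
        [t[:, 0].mean(), t[:, 1].mean(),
         np.linalg.norm(np.diff(t, axis=0), axis=1).sum()]
        for t in trajectories  # each t: (n_steps, 2) array of (x, y) positions
    ])
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(feats)

def edge_frames(video_path, step=30):
    """Decompose the surveillance video into frames and extract Canny edge maps."""
    cap, edges, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges.append(cv2.Canny(gray, 100, 200))
        idx += 1
    cap.release()
    return edges

# Small CNN over single-channel edge maps for behavior classification
# (e.g., normal / sleeping / using a phone) -- the class set is an assumption.
pose_cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 3),
)
```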


Author(s): Sayyedjavad Ziaratnia, Peeraya Sripian, Tipporn Laohakangvalvit, Kazuo Ohzeki, Midori Sugaya
