Low-Dimensional Subspace Estimation of Continuous-Doppler-Spread Channel in OTFS Systems

Author(s):  
Huiyang Qu ◽  
Guanghui Liu ◽  
Lei Zhang ◽  
Muhammad Ali Imran ◽  
Shan Wen
2021 ◽  
Author(s):  
Corson N Areshenkoff ◽  
Daniel J Gale ◽  
Joe Y Nashed ◽  
Dominic Standage ◽  
John Randall Flanagan ◽  
...  

Humans vary greatly in their motor learning abilities, yet little is known about the neural mechanisms that underlie this variability. Recent neuroimaging and electrophysiological studies demonstrate that large-scale neural dynamics inhabit a low-dimensional subspace or manifold, and that learning is constrained by this intrinsic manifold architecture. Here we asked, using functional MRI, whether subject-level differences in neural excursion from manifold structure can explain differences in learning across participants. We had subjects perform a sensorimotor adaptation task in the MRI scanner on two consecutive days, allowing us to assess their learning performance across days as well as continuously measure brain activity. We find that the overall excursion of neural activity from the manifold in both cognitive and sensorimotor brain networks is associated with differences in subjects' patterns of learning and relearning across days. These findings suggest that off-manifold activity provides an index of the relative engagement of different neural systems during learning, and that intersubject differences in patterns of learning and relearning across days are related to reconfiguration processes in cognitive and sensorimotor networks during learning.
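
As a rough illustration of how such an excursion measure could be computed (the abstract does not specify the exact procedure), the sketch below estimates a low-dimensional manifold from baseline activity with PCA and scores each time point by its residual norm outside that subspace. The function name, the use of PCA, and the data shapes are assumptions for illustration only.

```python
# Hypothetical sketch: quantify "excursion" of neural activity from a
# low-dimensional manifold estimated with PCA. The study's actual method
# may differ; this only illustrates the general idea.
import numpy as np
from sklearn.decomposition import PCA

def manifold_excursion(baseline, activity, n_components=10):
    """baseline, activity: arrays of shape (time, regions)."""
    pca = PCA(n_components=n_components).fit(baseline)
    # Project activity onto the manifold and reconstruct it.
    reconstructed = pca.inverse_transform(pca.transform(activity))
    # Excursion = distance of each time point from its on-manifold projection.
    return np.linalg.norm(activity - reconstructed, axis=1)

rng = np.random.default_rng(0)
baseline = rng.standard_normal((200, 50))          # e.g. baseline scans
learning = rng.standard_normal((300, 50)) * 1.2    # e.g. task scans
print(manifold_excursion(baseline, learning).mean())
```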


Author(s):  
Samuel Melton ◽  
Sharad Ramanathan

Abstract
Motivation: Recent technological advances produce a wealth of high-dimensional descriptions of biological processes, yet extracting meaningful insight and mechanistic understanding from these data remains challenging. For example, in developmental biology, the dynamics of differentiation can now be mapped quantitatively using single-cell RNA sequencing, yet it is difficult to infer molecular regulators of developmental transitions. Here, we show that discovering informative features in the data is crucial for statistical analysis as well as making experimental predictions.
Results: We identify features based on their ability to discriminate between clusters of the data points. We define a class of problems in which linear separability of clusters is hidden in a low-dimensional space. We propose an unsupervised method to identify the subset of features that define a low-dimensional subspace in which clustering can be conducted. This is achieved by averaging over discriminators trained on an ensemble of proposed cluster configurations. We then apply our method to single-cell RNA-seq data from mouse gastrulation, and identify 27 key transcription factors (out of 409 total), 18 of which are known to define cell states through their expression levels. In this inferred subspace, we find clear signatures of known cell types that eluded classification prior to discovery of the correct low-dimensional subspace.
Availability and implementation: https://github.com/smelton/SMD.
Supplementary information: Supplementary data are available at Bioinformatics online.
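
A rough sketch of the ensemble-of-discriminators idea described above (this is not the authors' SMD implementation linked in the abstract): propose cluster configurations from random feature subsets, train a sparse linear discriminator for each proposal, and average the absolute weights to score features. The use of KMeans for proposals and L1-penalized logistic regression as the discriminator are assumptions for illustration.

```python
# Illustrative sketch only: score features by averaging sparse-discriminator
# weights over an ensemble of proposed cluster configurations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def feature_scores(X, n_proposals=50, n_clusters=4, subset_size=20, seed=0):
    """X: (cells, genes) expression matrix; returns an importance score per gene."""
    rng = np.random.default_rng(seed)
    n_cells, n_genes = X.shape
    scores = np.zeros(n_genes)
    for _ in range(n_proposals):
        # Propose a cluster configuration from a random subset of features.
        subset = rng.choice(n_genes, size=subset_size, replace=False)
        labels = KMeans(n_clusters=n_clusters, n_init=5,
                        random_state=int(rng.integers(1 << 31))).fit_predict(X[:, subset])
        # Train a sparse linear discriminator on all features for this proposal.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        clf.fit(X, labels)
        # Accumulate the magnitude of each feature's discriminative weight.
        scores += np.abs(clf.coef_).sum(axis=0)
    return scores / n_proposals

X = np.random.default_rng(1).standard_normal((300, 100))
print(np.argsort(feature_scores(X))[::-1][:10])   # indices of top-scoring features
```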


2020 ◽  
Vol 12 (18) ◽  
pp. 2979
Author(s):  
Le Sun ◽  
Chengxun He ◽  
Yuhui Zheng ◽  
Songze Tang

During the process of signal sampling and digital imaging, hyperspectral images (HSI) inevitably suffer from contamination by mixed noises. This degradation considerably reduces the fidelity and efficiency of subsequent applications. Recently, as a powerful tool for image processing, low-rank regularization has been widely extended to the restoration of HSI. Meanwhile, further exploration of the non-local self-similarity of low-rank images has proven useful in exploiting the spatial redundancy of HSI, and better preservation of spatial-spectral features is achieved under both low-rank and non-local regularizations. However, existing methods generally regularize the original space of the HSI; exploration of the intrinsic properties of a subspace, which leads to better denoising performance, is relatively rare. To address these challenges, a joint method of subspace low-rank learning and non-local 4-D transform filtering, named SLRL4D, is put forward for HSI restoration. Technically, the original HSI is projected into a low-dimensional subspace. Then, both spectral and spatial correlations are explored simultaneously by imposing low-rank learning and non-local 4-D transform filtering on the subspace. An alternating direction method of multipliers (ADMM)-based algorithm is designed to solve the formulated convex signal-noise isolation problem. Finally, experiments on multiple datasets are conducted to illustrate the accuracy and efficiency of SLRL4D.
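
A minimal sketch of the subspace idea only, under assumed parameters: project the noisy cube onto a low-dimensional spectral subspace estimated by SVD and apply singular-value soft-thresholding to the subspace coefficients. The non-local 4-D transform filtering and the ADMM solver of SLRL4D are omitted, and the subspace dimension k and threshold tau are illustrative choices.

```python
# Minimal sketch of the subspace + low-rank step; not the full SLRL4D method.
import numpy as np

def subspace_lowrank_denoise(hsi, k=8, tau=5.0):
    """hsi: (rows, cols, bands) noisy cube; k: assumed subspace dimension."""
    rows, cols, bands = hsi.shape
    Y = hsi.reshape(-1, bands)                      # pixels x bands
    # Estimate a k-dimensional spectral subspace from the data.
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    E = Vt[:k].T                                    # bands x k basis
    Z = Y @ E                                       # subspace coefficients
    # Low-rank regularization of the coefficients (soft-threshold singular values).
    U, s, Wt = np.linalg.svd(Z, full_matrices=False)
    Z_lr = U @ np.diag(np.maximum(s - tau, 0)) @ Wt
    return (Z_lr @ E.T).reshape(rows, cols, bands)  # back-project to full space

cube = np.random.default_rng(0).random((32, 32, 50))
print(subspace_lowrank_denoise(cube).shape)
```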


2006 ◽  
Vol 03 (01) ◽  
pp. 45-51
Author(s):  
YANWEI PANG ◽  
ZHENGKAI LIU ◽  
YUEFANG SUN

Subspace-based face recognition methods aim to find a low-dimensional subspace of face appearance embedded in a high-dimensional image space. Different methods are distinguished by their motivations and objective functions. The objective function of the proposed method is formed by combining the ideas of linear Laplacian eigenmaps and linear discriminant analysis. The actual computation of the subspace reduces to a maximum eigenvalue problem. A major advantage of the proposed method over traditional methods is that it utilizes both the local manifold structure and the discriminant information of the training data. Experimental results on the AR face database demonstrate the effectiveness of the proposed method.
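
A hedged sketch of how such a combined objective might look: a graph-Laplacian locality term is folded into LDA-style scatter matrices and the projection is obtained from a generalized eigenvalue problem. The weighting parameter alpha, the k-NN graph construction, and the exact form of the matrices are illustrative assumptions, not the paper's precise objective.

```python
# Illustrative sketch: combine a graph-Laplacian locality term with LDA
# scatter matrices and solve a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_lda_subspace(X, y, dim=10, alpha=0.5, n_neighbors=5):
    """X: (samples, features); y: integer class labels."""
    n, d = X.shape
    # Graph Laplacian of the k-NN graph (local manifold structure).
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W
    # LDA between-class and within-class scatter (discriminant information).
    mean = X.mean(axis=0)
    Sb = np.zeros((d, d)); Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Generalized eigenproblem: Sb v = lambda (Sw + alpha * X^T L X) v.
    B = Sw + alpha * X.T @ L @ X + 1e-6 * np.eye(d)
    vals, vecs = eigh(Sb, B)
    return vecs[:, ::-1][:, :dim]   # eigenvectors of the largest eigenvalues

X = np.random.default_rng(0).standard_normal((120, 30))
y = np.repeat(np.arange(4), 30)
P = laplacian_lda_subspace(X, y, dim=3)
print((X @ P).shape)
```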


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Aishwarya Parthasarathy ◽  
Cheng Tang ◽  
Roger Herikstad ◽  
Loong Fah Cheong ◽  
Shih-Cheng Yen ◽  
...  

Abstract
Maintenance of working memory is thought to involve the activity of prefrontal neuronal populations with strong recurrent connections. However, it was recently shown that distractors evoke a morphing of the prefrontal population code, even when memories are maintained throughout the delay. How can a morphing code maintain time-invariant memory information? We hypothesized that dynamic prefrontal activity contains time-invariant memory information within a subspace of neural activity. Using an optimization algorithm, we found a low-dimensional subspace that contains time-invariant memory information. This information was reduced in trials where the animals made errors in the task, and was also found in periods of the trial not used to find the subspace. A bump attractor model replicated these properties, and provided predictions that were confirmed in the neural data. Our results suggest that the high-dimensional responses of prefrontal cortex contain subspaces where different types of information can be simultaneously encoded with minimal interference.
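
As a hypothetical stand-in for the optimization described above (the abstract does not state the exact algorithm), one way to find such a subspace is to maximize variance across memory conditions relative to variance across time within each condition via a generalized eigenproblem. The data shapes and the eigenvalue formulation below are assumptions for illustration.

```python
# Hypothetical sketch of finding a subspace carrying time-invariant memory
# information: keep directions that separate memory conditions while varying
# little over time. Not the authors' optimization algorithm.
import numpy as np
from scipy.linalg import eigh

def memory_subspace(R, dim=2):
    """R: (conditions, time, neurons) trial-averaged responses."""
    cond_means = R.mean(axis=1)                    # (conditions, neurons)
    grand_mean = cond_means.mean(axis=0)
    # Between-condition covariance (memory information to keep).
    Sb = (cond_means - grand_mean).T @ (cond_means - grand_mean)
    # Within-condition temporal covariance (dynamics to suppress).
    St = sum((R[c] - cond_means[c]).T @ (R[c] - cond_means[c])
             for c in range(R.shape[0]))
    vals, vecs = eigh(Sb, St + 1e-6 * np.eye(R.shape[2]))
    return vecs[:, ::-1][:, :dim]                  # (neurons, dim) projection

rates = np.random.default_rng(0).standard_normal((7, 40, 100))  # 7 memoranda
print(memory_subspace(rates).shape)
```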


Author(s):  
Akira Imakura ◽  
Momo Matsuda ◽  
Xiucai Ye ◽  
Tetsuya Sakurai

Dimensionality reduction methods that project high-dimensional data onto a low-dimensional space by matrix trace optimization are widely used for clustering and classification. The matrix trace optimization problem leads to an eigenvalue problem for constructing a low-dimensional subspace that preserves certain properties of the original data. However, most existing methods use only a few eigenvectors to construct the low-dimensional space, which may lose information useful for successful classification. Herein, to overcome this information loss, we propose a novel complex moment-based supervised eigenmap that includes multiple eigenvectors for dimensionality reduction. Furthermore, the proposed method provides a general formulation that allows matrix trace optimization methods to be combined with ridge regression, which models the linear dependency between covariate variables and univariate labels. To reduce the computational complexity, we also propose an efficient and parallel implementation of the proposed method. Numerical experiments indicate that the proposed method is competitive with existing dimensionality reduction methods in recognition performance, and it exhibits high parallel efficiency.
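
A simplified illustration of the underlying setup only: a supervised eigenmap posed as matrix trace optimization, retaining many eigenvectors rather than the leading few, followed by ridge regression on the projected features. This does not reproduce the complex moment-based (contour-integral) eigensolver proposed in the paper; the supervised affinity matrix, regularization constants, and function name are assumptions.

```python
# Simplified illustration: trace-optimization eigenmap with many eigenvectors,
# combined with ridge regression. Not the paper's complex moment-based solver.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Ridge

def supervised_eigenmap_ridge(X, y, n_vectors=20, alpha=1.0):
    """X: (samples, features); y: labels, also used as regression targets."""
    n, d = X.shape
    # Supervised affinity: connect samples that share a label.
    W = (y[:, None] == y[None, :]).astype(float)
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # graph Laplacian
    # Trace optimization: minimize tr(V^T X^T L X V) s.t. V^T X^T D X V = I,
    # solved as a generalized eigenproblem; keep n_vectors eigenvectors.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(d)
    vals, vecs = eigh(A, B)
    V = vecs[:, :n_vectors]                       # smallest eigenvalues first
    model = Ridge(alpha=alpha).fit(X @ V, y)      # ridge on projected data
    return V, model

X = np.random.default_rng(0).standard_normal((200, 60))
y = np.repeat(np.arange(5), 40).astype(float)
V, model = supervised_eigenmap_ridge(X, y)
print(V.shape, model.score(X @ V, y))
```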


2011 ◽  
Vol 341-342 ◽  
pp. 790-797 ◽  
Author(s):  
Zhi Yan Xiang ◽  
Tie Yong Cao ◽  
Peng Zhang ◽  
Tao Zhu ◽  
Jing Feng Pan

In this paper, an object tracking approach is introduced for color video sequences. The approach integrates color distributions and probabilistic principal component analysis (PPCA) into a particle filtering framework. Color distributions are robust to partial occlusion, are rotation and scale invariant, and can be computed efficiently. Principal component analysis (PCA) is used to update the eigenbasis and the mean so that the model reflects appearance changes of the tracked object, and the low-dimensional subspace representation of PPCA efficiently adapts to these appearance changes. At the same time, a forgetting factor is incorporated into the updating process, which reduces processing time and improves the efficiency of object tracking. Computer simulation experiments demonstrate the effectiveness and robustness of the proposed tracking algorithm when the target object undergoes pose and scale changes or occlusion, and against complex backgrounds.
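
A small sketch of the appearance-model update with a forgetting factor (the particle filter and the color-distribution likelihood are omitted): the old mean and eigenbasis are discounted before new observations are folded in. The update rule below is a simplified stand-in for the incremental PPCA update used in the paper, and all names and shapes are illustrative.

```python
# Simplified stand-in for the incremental subspace update with a forgetting
# factor used in appearance-based tracking.
import numpy as np

def update_appearance_model(mean, basis, new_patches, forgetting=0.95, k=8):
    """mean: (d,); basis: (d, k) current eigenbasis;
    new_patches: (n, d) newly observed object appearances."""
    new_mean = new_patches.mean(axis=0)
    # Forgetting factor discounts the old model before merging in new data.
    mean = forgetting * mean + (1 - forgetting) * new_mean
    # Re-estimate the eigenbasis from the discounted old basis plus new data.
    stacked = np.vstack([forgetting * basis.T, new_patches - mean])
    _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
    return mean, Vt[:k].T

d = 32 * 32                                        # flattened template size
mean = np.zeros(d)
basis = np.linalg.qr(np.random.default_rng(0).standard_normal((d, 8)))[0]
patches = np.random.default_rng(1).standard_normal((5, d))
mean, basis = update_appearance_model(mean, basis, patches)
print(mean.shape, basis.shape)
```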

