Toeplitz channel matrix decomposition with vectorization for very large-scale MIMO

Author(s):  
Mohammad Abu Hanif ◽  
Moon Ho Lee
Author(s):  
Yanzhe (Murray) Lei ◽  
Stefanus Jasin ◽  
Joline Uichanco ◽  
Andrew Vakhutinsky

Problem definition: We study a joint product framing and order fulfillment problem with both inventory and cardinality constraints faced by an e-commerce retailer. There is a finite selling horizon and no replenishment opportunity. In each period, the retailer needs to decide how to “frame” (i.e., display, rank, price) each product on his or her website as well as how to fulfill a new demand. Academic/practical relevance: E-commerce retail is known to suffer from thin profit margins. Using the data from a major U.S. retailer, we show that jointly planning product framing and order fulfillment can have a significant impact on online retailers’ profitability. This is a technically challenging problem as it involves both inventory and cardinality constraints. In this paper, we make progress toward resolving this challenge. Methodology: We use techniques such as randomized algorithms and graph-based algorithms to provide a tractable solution heuristic that we analyze through asymptotic analysis. Results: Our proposed randomized heuristic policy is based on the solution of a deterministic approximation to the stochastic control problem. The key challenge is in constructing a randomization scheme that is easy to implement and that guarantees the resulting policy is asymptotically optimal. We propose a novel two-step randomization scheme based on the idea of matrix decomposition and a rescaling argument. Managerial implications: Our numerical tests show that the proposed policy is very close to optimal, can be applied to large-scale problems in practice, and highlights the value of jointly optimizing product framing and order fulfillment decisions. When inventory across the network is imbalanced, the widespread practice of planning product framing without considering its impact on fulfillment can result in high shipping costs, regardless of the fulfillment policy used. 
Our proposed policy significantly reduces shipping costs by using product framing to manage demand so that it occurs close to the location of the inventory.
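The two-step randomization scheme rests on a classical fact: any doubly stochastic matrix can be written as a convex combination of permutation matrices (the Birkhoff–von Neumann theorem), which turns a fractional deterministic plan into an implementable randomized policy. A minimal illustrative sketch of such a decomposition — not the paper's actual algorithm; the matrix and the greedy strategy here are hypothetical:

```python
import itertools

def birkhoff_decompose(M, tol=1e-9):
    """Greedily decompose a doubly stochastic matrix M (list of lists)
    into a convex combination of permutation matrices."""
    n = len(M)
    M = [row[:] for row in M]  # work on a copy
    terms = []
    while True:
        # Find the heaviest permutation supported on the positive entries of M.
        best = None
        for perm in itertools.permutations(range(n)):
            weight = min(M[i][perm[i]] for i in range(n))
            if weight > tol and (best is None or weight > best[1]):
                best = (perm, weight)
        if best is None:
            break
        perm, weight = best
        terms.append((weight, perm))
        for i in range(n):
            M[i][perm[i]] -= weight
    return terms  # list of (probability weight, permutation)

# Example: a 3x3 doubly stochastic "plan"; sample a permutation with
# probability equal to its weight to realize the plan in expectation.
M = [[0.5, 0.3, 0.2],
     [0.3, 0.5, 0.2],
     [0.2, 0.2, 0.6]]
terms = birkhoff_decompose(M)
assert abs(sum(w for w, _ in terms) - 1.0) < 1e-6
```

The brute-force permutation search is exponential and only workable at toy scale; practical implementations find each permutation with a bipartite matching.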


2013 ◽  
Vol 5 (04) ◽  
pp. 477-493 ◽  
Author(s):  
Wen Chen ◽  
Ji Lin ◽  
C.S. Chen

In this paper, we investigate the method of fundamental solutions (MFS) for solving exterior Helmholtz problems with high wave-numbers in axisymmetric domains. Since the coefficient matrix in the linear system resulting from the MFS approximation has a block circulant structure, the system can be solved by a matrix decomposition algorithm and the fast Fourier transform, enabling fast computation of large-scale problems while saving computer memory. Several numerical examples demonstrate the applicability and efficacy of the method in two- and three-dimensional domains.
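The FFT-based solve exploits the fact that a circulant matrix is diagonalized by the discrete Fourier transform. A minimal single-block sketch in NumPy (the actual MFS system is block circulant and is solved block-by-block in the same way):

```python
import numpy as np

# Solve C x = b where C is circulant with first column c.
# Since C = F^{-1} diag(fft(c)) F, we have x = ifft( fft(b) / fft(c) ).
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)  # first column of the circulant matrix
b = rng.standard_normal(n)  # right-hand side

# Build the dense circulant matrix only to check the fast solve.
C = np.stack([np.roll(c, k) for k in range(n)], axis=1)

x_fft = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real  # O(n log n), O(n) memory
x_dense = np.linalg.solve(C, b)                           # O(n^3), O(n^2) memory
assert np.allclose(x_fft, x_dense)
```

This is where both the speed-up and the memory saving come from: the dense matrix is never formed, only its first column.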


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Jengnan Tzeng

The singular value decomposition (SVD) is a fundamental matrix decomposition in linear algebra. It is widely applied in many modern techniques, for example, high-dimensional data visualization, dimension reduction, data mining, and latent semantic analysis. Although the SVD plays an essential role in these fields, its apparent weakness is its cubic computational cost, which makes many modern applications infeasible, especially when the scale of the data is huge and growing. It is therefore imperative to develop a fast SVD method. When the rank of the matrix is much smaller than the matrix size, some fast SVD approaches already exist. In this paper, we focus on this case, with the additional condition that the data is too large to be stored as a single matrix. We demonstrate that the resulting fast SVD is sufficiently accurate and, most importantly, can be computed immediately. Using this fast method, many previously infeasible SVD-based techniques become viable.
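The abstract does not spell out the algorithm, but the standard way to exploit low rank is a randomized range-finder followed by a small dense SVD (Halko–Martinsson–Tropp style). A generic sketch of that idea — not necessarily this paper's exact method:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD of A via random projection.
    Cost is O(m n k) instead of the O(m n min(m, n)) of a full SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for (approx.) range of A
    # Project A onto that basis and take the SVD of the small matrix.
    B = Q.T @ A                      # (k + oversample) x n, cheap to decompose
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]

# Exactly rank-5 test matrix, 500 x 300: recovery is essentially exact.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, k=5)
assert np.allclose((U * s) @ Vt, A, atol=1e-6)
```

Because only the products `A @ Omega` and `Q.T @ A` touch the data, the same scheme works when A is streamed in chunks rather than stored whole.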


Author(s):  
Ju Yong Park ◽  
Mohammad Abu Hanif ◽  
Jeong Su Kim ◽  
Sang Seob Song ◽  
Moon Ho Lee

2021 ◽  
Vol 12 ◽  
Author(s):  
Nicholas Panchy ◽  
Kazuhide Watanabe ◽  
Tian Hong

Large-scale transcriptome data, such as single-cell RNA-sequencing data, have provided unprecedented resources for studying biological processes at the systems level. Numerous dimensionality reduction methods have been developed to visualize and analyze these transcriptome data. In addition, several existing methods allow inference of functional variations among samples using gene sets with known biological functions. However, it remains challenging to analyze transcriptomes with reduced dimensions that are interpretable in terms of dimensions’ directionalities, transferrable to new data, and directly expose the contribution or association of individual genes. In this study, we used gene set non-negative principal component analysis (gsPCA) and non-negative matrix factorization (gsNMF) to analyze large-scale transcriptome datasets. We found that these methods provide low-dimensional information about the progression of biological processes in a quantitative manner, and their performances are comparable to existing functional variation analysis methods in terms of distinguishing multiple cell states and samples from multiple conditions. Remarkably, upon training with a subset of data, these methods allow predictions of locations in the functional space using data from experimental conditions that are not exposed to the models. Specifically, our models predicted the extent of progression and reversion for cells in the epithelial-mesenchymal transition (EMT) continuum. These methods revealed conserved EMT program among multiple types of single cells and tumor samples. Finally, we demonstrate this approach is broadly applicable to data and gene sets beyond EMT and provide several recommendations on the choice between the two linear methods and the optimal algorithmic parameters. 
Our methods show that simple constrained matrix decomposition can produce low-dimensional representations in a functionally interpretable and transferrable space, and can be widely useful for analyzing large-scale transcriptome data.
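For readers unfamiliar with the underlying computation, the core of gsNMF-style methods is a non-negative factorization X ≈ WH. A minimal, generic sketch using Lee–Seung multiplicative updates (the gene-set constraints of gsNMF are omitted; the data here are synthetic):

```python
import numpy as np

def nmf(X, k, n_iter=500, seed=0, eps=1e-9):
    """Factor a non-negative matrix X (samples x genes) as X ~= W @ H
    with W, H >= 0, using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic "expression" data generated by 3 latent programs.
rng = np.random.default_rng(2)
X = rng.random((100, 3)) @ rng.random((3, 40))
W, H = nmf(X, k=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
assert rel_err < 0.1
```

Rows of W then serve as the low-dimensional, non-negative coordinates of each sample, which is what makes the axes directly interpretable.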


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 290
Author(s):  
Qunsheng Ruan ◽  
Yiru Zhang ◽  
Yuhui Zheng ◽  
Yingdong Wang ◽  
Qingfeng Wu ◽  
...  

The traditional heterogeneous embedding method based on a random walk strategy does not fundamentally account for the higher-order Markov chain underlying the walk. One of the important properties of Markov chains is the stationary distribution (SD). However, in large-scale network computation, computing SDs directly is infeasible and consumes a great deal of memory. We therefore use a non-Markovian strategy, the heterogeneous personalized spacey random walk, to efficiently obtain SDs between nodes and to skip unimportant intermediate nodes, which allows more accurate vector representations and saves memory. This strategy, combined with vector learning, extends to heterogeneous spacey embedding methods that outperform traditional heterogeneous embedding methods on node classification tasks. Since a good embedding method yields more accurate vector representations, it is important for improving recommendation models. In this article, we study recommendation algorithms based on the heterogeneous personalized spacey embedding method. To address the problem that the standard random walk strategy used to compute the stationary distribution consumes a large amount of memory, which may lead to inefficient node vector representations, we propose a meta-path-based heterogeneous personalized spacey random walk for recommendation (MPHSRec). The meta-path-based heterogeneous personalized spacey random walk strategy generates meaningful node sequences for network representation learning; the embedded vectors learned from different meta-paths are transformed by a nonlinear fusion function and integrated into a matrix decomposition model for rating prediction. Experimental results demonstrate that MPHSRec not only improves accuracy but also reduces memory cost compared with other state-of-the-art algorithms.
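The final rating-prediction step is a matrix decomposition (latent-factor) model. A generic SGD-trained sketch of that component alone — the meta-path embedding fusion of MPHSRec is omitted, and all data below are toy values:

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=8, lr=0.02, reg=0.02, epochs=500, seed=0):
    """Plain latent-factor model for rating prediction: r_ui ~= p_u . q_i,
    fit by SGD with L2 regularization. Illustrative baseline only."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))  # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            # Gradient steps on the squared error plus L2 penalty.
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy (user, item, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0)]
P, Q = mf_sgd(ratings, n_users=3, n_items=3)
rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
```

In MPHSRec the fused meta-path embeddings would inform these latent factors rather than leaving them freely learned from ratings alone.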


2014 ◽  
Vol 2014 ◽  
pp. 1-10
Author(s):  
Gao Xijun ◽  
Chen Zili ◽  
Hu Yongjiang

Based on the three-dimensional GBSBCM (geometrically based double-bounce cylinder model) MIMO channel model for unmanned aerial vehicles (UAVs), a simple form of the UAV space-time-frequency channel correlation function comprising the LOS, SPE, and DIF components is presented. Using channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is derived. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. Simulation results show that this channel correlation matrix comprehensively describes how UAV-MIMO channel characteristics change under different parameter settings. The analysis provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel and shows the practical value of MIMO technology for UAV communication.
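Characteristic parameters such as capacity and condition number follow directly from the channel matrix. A generic NumPy sketch for an i.i.d. Rayleigh-fading H with equal power allocation — the GBSBCM-correlated channel of the paper would simply replace this H:

```python
import numpy as np

# Capacity of a MIMO channel H with equal power allocation:
#   C = log2 det( I + (SNR / Nt) * H @ H^H )   [bits/s/Hz]
rng = np.random.default_rng(0)
Nt, Nr, snr = 4, 4, 10.0  # 4x4 link, linear SNR = 10 (i.e., 10 dB)
H = (rng.standard_normal((Nr, Nt))
     + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

G = np.eye(Nr) + (snr / Nt) * H @ H.conj().T
_, logdet = np.linalg.slogdet(G)      # numerically stable log-determinant
capacity = logdet / np.log(2)         # convert from nats to bits

# Condition number of H: large values indicate an ill-conditioned
# (highly correlated) channel with reduced spatial multiplexing gain.
cond = np.linalg.cond(H)
assert capacity > 0 and cond >= 1
```

Averaging `capacity` over many channel draws gives the ergodic capacity used in such simulation studies.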

