singular vectors
Recently Published Documents

TOTAL DOCUMENTS: 282 (FIVE YEARS: 30)
H-INDEX: 30 (FIVE YEARS: 1)

Author(s):  
Dmitry Kleinbock ◽  
Nikolay Moshchevitin ◽  
Barak Weiss
2021 ◽  
pp. 000370282110447
Author(s):  
Joseph Dubrovkin

Storage, processing, and transfer of huge matrices are becoming challenging tasks in process analytical technology and scientific research. Matrix compression can solve these problems successfully. We developed a novel method for compressing a spectral data matrix based on its low-rank approximation and the fast Fourier transform of its singular vectors. This method differs from known ones in that it does not require restoring the low-rank approximated matrix before further Fourier processing; therefore, the compression ratio increases. A compromise between the loss of accuracy in restoring the data matrix and the compression ratio was achieved by selecting the processing parameters. The method was applied to multivariate chemometric analysis of cow milk for determining fat and protein content, using two data matrices (file sizes of 5.7 and 12.0 MB) restored from their compressed form. The corresponding compression ratios were about 52 and 114, while the loss of accuracy of the analysis was less than 1% compared with processing the non-compressed matrix. A huge simulated matrix, compressed from 400 MB to 1.9 MB, was successfully used for multivariate calibration and segment cross-validation. The data set simulated a large matrix of 10 000 low-noise infrared spectra measured in the range 4000–400 cm−1 with a resolution of 0.5 cm−1; the corresponding file was compressed from 262.8 MB to 19.8 MB. The discrepancies between the original and restored spectra were less than the standard deviation of the noise. The method developed in the article clearly demonstrated its potential for future applications to chemometrics-enhanced spectrometric analysis under limited memory size and data transfer rate. The algorithm used standard Matlab routines.
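
As a rough illustration of the compression scheme described above, the sketch below (written in Python rather than the authors' Matlab, with a hypothetical rank k and coefficient count m) performs a truncated SVD and then keeps only the largest Fourier coefficients of each singular vector, so no intermediate low-rank matrix needs to be restored before the Fourier step.

```python
# Illustrative sketch (not the authors' Matlab code): compress a spectral data
# matrix by truncated SVD followed by an FFT of the singular vectors, keeping
# only the largest Fourier coefficients. Rank k and the number of retained
# coefficients m are hypothetical tuning parameters.
import numpy as np

def compress(X, k=10, m=200):
    # Truncated SVD: X ~= U_k diag(s_k) V_k^T
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U_k, s_k, V_k = U[:, :k], s[:k], Vt[:k, :].T

    def sparsify(M):
        # Fourier-transform each singular vector and keep its m largest coefficients
        F = np.fft.rfft(M, axis=0)
        keep = np.argsort(np.abs(F), axis=0)[-m:, :]
        mask = np.zeros_like(F, dtype=bool)
        np.put_along_axis(mask, keep, True, axis=0)
        return np.where(mask, F, 0)

    return sparsify(U_k), s_k, sparsify(V_k), X.shape

def restore(FU, s, FV, shape):
    # Invert the FFT of the (sparsified) singular vectors and rebuild the matrix
    U = np.fft.irfft(FU, n=shape[0], axis=0)
    V = np.fft.irfft(FV, n=shape[1], axis=0)
    return (U * s) @ V.T

X = np.random.rand(500, 800)          # stand-in for a spectral data matrix
FU, s, FV, shape = compress(X)
rel_err = np.linalg.norm(X - restore(FU, s, FV, shape)) / np.linalg.norm(X)
```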


2021 ◽  
Vol 69 (5) ◽  
pp. 451-459
Author(s):  
Yongjie Zhuang ◽  
Xuchen Wang ◽  
Yangfan Liu

In the design of multichannel active noise control filters, a disturbance enhancement phenomenon can sometimes occur: the resulting sound is enhanced rather than reduced in some frequency bands when the control filter is designed to minimize the power of the error signals in other frequency bands or across all frequencies. In previous work, a truncated singular value decomposition method was applied to the system autocorrelation matrix to mitigate disturbance enhancement: small singular values and their associated singular vectors are removed if they are responsible for unwanted enhancement in some frequency bands. However, some of the removed singular vectors may still contribute to the noise control performance in other frequency bands, so a direct truncation degrades noise control performance. In the present work, through an additional filtering process, the set of singular vectors that causes disturbance enhancement is replaced by a set of new singular vectors whose frequency responses are attenuated in the band where enhancement occurs while remaining unchanged in the other bands. Compared with the truncation approach, the proposed method maintains performance in the noise reduction bands while mitigating the influence in the disturbance enhancement bands.
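
A minimal sketch of the contrast described above, assuming a toy autocorrelation matrix, a hypothetical criterion for "problematic" singular vectors, and hypothetical band edges and attenuation (the paper's actual filter-design procedure is not reproduced here):

```python
# Hedged illustration (not the authors' implementation): compare plain truncation
# of problematic singular vectors of an autocorrelation matrix with keeping them
# but attenuating their frequency content inside an enhancement band.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                      # stand-in reference signal
L = 64                                             # control filter length
X = np.lib.stride_tricks.sliding_window_view(x, L)
R = X.T @ X / X.shape[0]                           # autocorrelation matrix

U, s, Vt = np.linalg.svd(R)
bad = s < 0.05 * s.max()                           # hypothetical criterion

# Option 1: truncation, as in the earlier approach
R_trunc = (U[:, ~bad] * s[~bad]) @ Vt[~bad, :]

# Option 2: keep the vectors but attenuate them inside the enhancement band
U_f = np.fft.rfft(U, axis=0)
band = slice(10, 20)                               # hypothetical band (FFT bins)
U_f[band, :] *= np.where(bad, 0.1, 1.0)            # attenuate only the "bad" columns
U_new = np.fft.irfft(U_f, n=L, axis=0)
R_filt = (U_new * s) @ Vt                          # modified matrix used for redesign
```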


2021 ◽  
Author(s):  
Courtney Quinn ◽  
Terence O'Kane ◽  
Dylan Harries

Singular vectors (SVs) have long been employed in the initialization of ensemble numerical weather prediction (NWP) in order to capture the structural organization and growth rates of the perturbations, or “errors”, associated with initial condition errors and instability processes of the large-scale flow. Due to their (super) exponential growth rates and spatial scales, initial SVs are typically combined empirically with evolved SVs in order to generate forecast perturbations whose structures and growth rates are tuned for specified lead times. Here we present a systematic approach to generating finite-time or “mixed” SVs (MSVs) based on a method for the calculation of covariant Lyapunov vectors (CLVs) and appropriate choices of the matrix cocycle. We first derive a data-driven reduced-order model to characterize persistent geopotential height anomalies over Europe and Western Asia (Eurasia) over the period 1979 to the present from the NCEPv1 reanalysis. We then characterize and compare the MSVs and SVs of each persistent state over Eurasia for lead times ranging from a day to over a week. Finally, we compare the spatio-temporal properties of SVs and MSVs in an examination of the dynamics of the 2010 Russian heatwave. We show that MSVs provide a systematic approach to generating initial forecast perturbations projected onto the relevant expanding directions in phase space for typical NWP forecast lead times.
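
For readers unfamiliar with singular vectors in this setting, the toy sketch below (a hypothetical three-variable linear model, not the authors' reduced-order model or their MSV/CLV construction) shows how finite-time SVs and their growth factors come out of the SVD of a propagator over a chosen lead time.

```python
# Minimal sketch of finite-time singular vectors: for a linear propagator over a
# lead time T, the initial SVs are the right singular vectors of the propagator
# and the evolved SVs are the corresponding left singular vectors. The system
# matrix and lead time are hypothetical.
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0.0,  1.0,  0.0],
              [-1.0, -0.1,  0.5],
              [ 0.0, -0.5, -0.2]])    # hypothetical linearized dynamics dx/dt = A x
T = 5.0                               # forecast lead time
M = expm(A * T)                       # propagator over the lead time

U, s, Vt = np.linalg.svd(M)
initial_svs = Vt.T                    # columns: directions of fastest growth at t0
evolved_svs = U                       # columns: their structures at t0 + T
growth_factors = s                    # finite-time amplification factors
```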


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 483
Author(s):  
Xin Wang ◽  
Zhixin Song ◽  
Youle Wang

Singular value decomposition is central to many problems in engineering and scientific fields. Several quantum algorithms have been proposed to determine the singular values and their associated singular vectors of a given matrix. Although these algorithms are promising, the required quantum subroutines and resources are too costly on near-term quantum devices. In this work, we propose a variational quantum algorithm for singular value decomposition (VQSVD). By exploiting the variational principles for singular values and the Ky Fan theorem, we design a novel loss function such that two quantum neural networks (or parameterized quantum circuits) can be trained to learn the singular vectors and output the corresponding singular values. Furthermore, we conduct numerical simulations of VQSVD for random matrices as well as its application to image compression of handwritten digits. Finally, we discuss applications of our algorithm in recommendation systems and polar decomposition. Our work explores new avenues for quantum information processing beyond the conventional protocols that only work for Hermitian data, and reveals the capability of matrix decomposition on near-term quantum devices.
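
The loss function rests on the Ky Fan variational characterization of singular values. The classical numpy sketch below (an illustration only, not the quantum algorithm) checks numerically that the sum of the k largest singular values is the maximum of u_1^T M v_1 + ... + u_k^T M v_k over orthonormal sets, which is the quantity the two parameterized circuits are trained to maximize in VQSVD.

```python
# Classical toy check of the Ky Fan variational principle underlying the VQSVD
# loss: sum of the k largest singular values of M equals the maximum of
# trace(U_k^T M V_k) over matrices with orthonormal columns.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
k = 3

U, s, Vt = np.linalg.svd(M)
ky_fan = s[:k].sum()                              # optimum of the variational problem

def objective(Uk, Vk):                            # sum_j u_j^T M v_j
    return np.trace(Uk.T @ M @ Vk)

# The exact singular vectors attain the optimum
assert np.isclose(objective(U[:, :k], Vt[:k, :].T), ky_fan)

# Random orthonormal frames never do better (checked on random samples)
for _ in range(100):
    Q1, _ = np.linalg.qr(rng.standard_normal((8, k)))
    Q2, _ = np.linalg.qr(rng.standard_normal((8, k)))
    assert objective(Q1, Q2) <= ky_fan + 1e-9
```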


2021 ◽  
Vol 11 (11) ◽  
pp. 4874
Author(s):  
Milan Brankovic ◽  
Eduardo Gildin ◽  
Richard L. Gibson ◽  
Mark E. Everett

Seismic data provides integral information in geophysical exploration, both for locating hydrocarbon-rich areas and for fracture monitoring during well stimulation. Because of its high-frequency acquisition rate and dense spatial sampling, distributed acoustic sensing (DAS) has seen increasing application in microseismic monitoring. Given the large volumes of data to be analyzed in real time and the impractical memory and storage requirements this entails, fast compression and accurate interpretation methods are necessary for real-time monitoring campaigns using DAS. In response to these developments in data acquisition, we have created shifted-matrix decomposition (SMD) to compress seismic data by storing it as pairs of singular vectors coupled with shift vectors. This is achieved by shifting the columns of a matrix of seismic data before applying singular value decomposition (SVD) to extract a pair of singular vectors. The purpose of SMD is denoising as well as compression, since reconstructing seismic data from its compressed form yields a denoised version of the original data. By analyzing the data in its compressed form, we can also run signal detection and velocity estimation. The developed algorithm can therefore simultaneously compress and denoise seismic data while also analyzing the compressed data to estimate signal presence and wave velocities. To show its efficiency, we compare SMD to local SVD and structure-oriented SVD, similar SVD-based methods that are used only for denoising seismic data. While the development of SMD is motivated by the increasing use of DAS, SMD can be applied to any seismic data obtained from a large number of receivers. For example, here we present initial applications of SMD to readily available marine seismic data.
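
Since the abstract does not spell out the full SMD algorithm, the sketch below is only a hedged approximation of the idea: estimate a shift per trace (here by cross-correlation, an assumption), align the columns, and store a single singular-vector pair together with the shift vector.

```python
# Hedged sketch of the shifted-matrix idea (the published SMD algorithm may
# differ): align the columns of a seismic data matrix by per-trace shifts, take
# a rank-1 SVD of the aligned matrix, and keep only the singular vector pair,
# the singular value, and the shift vector.
import numpy as np

def smd_like_compress(D):
    ref = D[:, 0]
    # shift of each trace relative to the first, via cross-correlation (assumption)
    shifts = np.array([
        int(np.argmax(np.correlate(D[:, j], ref, mode="full")) - (len(ref) - 1))
        for j in range(D.shape[1])
    ])
    aligned = np.column_stack([np.roll(D[:, j], -shifts[j]) for j in range(D.shape[1])])
    U, s, Vt = np.linalg.svd(aligned, full_matrices=False)
    return U[:, 0], s[0], Vt[0, :], shifts          # compressed representation

def smd_like_restore(u, s0, v, shifts):
    aligned = s0 * np.outer(u, v)                   # rank-1 reconstruction
    return np.column_stack([np.roll(aligned[:, j], shifts[j]) for j in range(len(shifts))])

# toy data: a single wavefront arriving at different times on each trace
t = np.linspace(0, 1, 500)
D = np.column_stack([np.exp(-((t - 0.3 - 0.001 * j) ** 2) / 1e-4) for j in range(64)])
u, s0, v, shifts = smd_like_compress(D)
residual = np.linalg.norm(D - smd_like_restore(u, s0, v, shifts)) / np.linalg.norm(D)
```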


Author(s):  
Nicoletta Cantarini ◽  
Fabrizio Caselli ◽  
Victor Kac

Given a Lie superalgebra $\mathfrak{g}$ with a subalgebra $\mathfrak{g}_{\ge 0}$, and a finite-dimensional irreducible $\mathfrak{g}_{\ge 0}$-module $F$, the induced $\mathfrak{g}$-module $M(F)=\mathcal{U}(\mathfrak{g})\otimes_{\mathcal{U}(\mathfrak{g}_{\ge 0})}F$ is called a finite Verma module. In the present paper we classify the non-irreducible finite Verma modules over the largest exceptional linearly compact Lie superalgebra $\mathfrak{g}=E(5,10)$ with the subalgebra $\mathfrak{g}_{\ge 0}$ of minimal codimension. This is done via classification of all singular vectors in the modules $M(F)$. Besides the known singular vectors of degrees 1, 2, 3, 4, and 5, we discover two new singular vectors, of degrees 7 and 11. We show that the corresponding morphisms of finite Verma modules of degrees 1, 4, 7, and 11 can be arranged in an infinite number of bilateral infinite complexes, which may be viewed as “exceptional” de Rham complexes for $E(5,10)$.


Geophysics ◽  
2021 ◽  
pp. 1-51
Author(s):  
Chao Wang ◽  
Yun Wang

Reduced-rank filtering is a common method for attenuating noise in seismic data. Because conventional reduced-rank filtering distinguishes signal from noise only according to singular values, it performs poorly when the signal-to-noise ratio is very low or when the data contain high levels of isolated or coherent noise. We therefore developed a novel and robust reduced-rank filtering method based on singular value decomposition in the time-space domain, in which noise is recognized and attenuated according to the characteristics of both the singular values and the singular vectors. The left and right singular vectors corresponding to large singular values are selected first. The right singular vectors are then classified into different categories according to their curve characteristics, such as jump, pulse, and smooth. Each kind of right singular vector is related to a type of noise or seismic event and is corrected with a different filtering technique, such as mean filtering, edge-preserving smoothing, or edge-preserving median filtering. The left singular vectors are also corrected using filtering methods based on frequency attributes such as the main frequency and the frequency bandwidth. To process seismic data containing a variety of events, local data are extracted along the local dip of each event. The optimal local dip is identified according to the singular values and singular vectors of the data matrices extracted along different trial directions. The new filtering method has been applied to synthetic and field seismic data, and its performance is compared with that of several conventional filtering methods. The results indicate that the new method is more robust for data with a low signal-to-noise ratio, strong isolated noise, or coherent noise. The new method also overcomes the difficulty of selecting an optimal rank.
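
A hedged sketch of this style of reduced-rank filtering follows; the rank, the smoothness measure, the threshold, and the choice of a median filter are illustrative assumptions, not the authors' exact classification rules.

```python
# Hedged sketch: keep the leading singular triplets, classify each right singular
# vector by how rough it is, and median-filter the jumpy/pulse-like ones before
# reconstructing the data.
import numpy as np
from scipy.signal import medfilt

def filter_reduced_rank(D, rank=5, jump_threshold=0.5, kernel=5):
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]
    for i in range(rank):
        v = Vt[i, :]
        # crude smoothness measure: energy of first differences relative to the vector
        roughness = np.linalg.norm(np.diff(v)) / np.linalg.norm(v)
        if roughness > jump_threshold:                  # "jump"/"pulse"-like vector
            Vt[i, :] = medfilt(v, kernel_size=kernel)   # edge-preserving median filter
    return (U * s) @ Vt

# toy example: a smooth event plus isolated spikes
rng = np.random.default_rng(2)
D = np.sin(np.linspace(0, 20, 400))[:, None] * np.ones((1, 60))
D += 5.0 * (rng.random(D.shape) < 0.01)                 # isolated spike noise
D_filtered = filter_reduced_rank(D)
```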


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Majid Afshar ◽  
Hamid Usefi

A common problem in machine learning and pattern recognition is identifying the most relevant features, particularly when dealing with high-dimensional datasets in bioinformatics. In this paper, we propose a new feature selection method, called Singular-Vectors Feature Selection (SVFS). Let $D = [A \mid \mathbf{b}]$ be a labeled dataset, where $\mathbf{b}$ is the class label and the features (attributes) are the columns of the matrix $A$. We show that the signature matrix $S_A = I - A^{\dagger}A$ can be used to partition the columns of $A$ into clusters so that columns in a cluster correlate only with columns in the same cluster. In the first step, SVFS uses the signature matrix $S_D$ of $D$ to find the cluster that contains $\mathbf{b}$; we reduce the size of $A$ by discarding the features in the other clusters as irrelevant. In the next step, SVFS uses the signature matrix $S_A$ of the reduced $A$ to partition the remaining features into clusters and chooses the most important features from each cluster. SVFS works perfectly on synthetic datasets, and comprehensive experiments on real-world benchmark and genomic datasets show that it exhibits overall superior performance compared to state-of-the-art feature selection methods in terms of accuracy, running time, and memory usage. A Python implementation of SVFS, along with the datasets used in this paper, is available at https://github.com/Majid1292/SVFS.
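
The signature-matrix construction can be sketched directly from the abstract. The toy example below is only an illustration with a hypothetical numerical tolerance and a connected-components reading of the clusters; the authors' full implementation is available at the GitHub link above.

```python
# Illustrative sketch of the signature-matrix idea: S_A = I - pinv(A) A vanishes
# at (i, j) when columns i and j are not linearly related, so clusters of
# mutually related columns can be read off as connected components of its
# non-zero pattern.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def column_clusters(A, tol=1e-8):
    S = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A      # signature matrix S_A
    adj = csr_matrix(np.abs(S) > tol)                   # columns linked by non-zeros
    _, labels = connected_components(adj, directed=False)
    return labels

# toy dataset: columns 0-2 are linearly dependent, columns 3-4 are independent noise
rng = np.random.default_rng(3)
base = rng.standard_normal((50, 1))
A = np.hstack([base, 2 * base, -base, rng.standard_normal((50, 2))])
print(column_clusters(A))   # columns 0, 1, 2 share a label; 3 and 4 get their own
```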

