factorization algorithms
Recently Published Documents


TOTAL DOCUMENTS

134
(FIVE YEARS 24)

H-INDEX

18
(FIVE YEARS 2)

2021 ◽  
Vol 26 (1) ◽  
pp. 1-47
Author(s):  
Diego Arroyuelo ◽  
Rodrigo Cánovas ◽  
Johannes Fischer ◽  
Dominik Köppl ◽  
Marvin Löbel ◽  
...  

The Lempel-Ziv 78 (LZ78) and Lempel-Ziv-Welch (LZW) text factorizations are popular, not only for bare compression but also for building compressed data structures on top of them. Their regular factor structure makes them computable within space bounded by the compressed output size. In this article, we carry out the first thorough study of low-memory LZ78 and LZW text factorization algorithms, introducing more efficient alternatives to the classical methods, as well as new techniques that can run in less memory than is needed to hold the compressed file. Our results build on hash-based representations of tries that may be of independent interest.
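As a concrete illustration of the factor structure the abstract refers to, here is a minimal textbook LZ78 factorizer using a plain dictionary-backed trie. This is the classical method, not the low-memory hash-based variant the article introduces:

```python
def lz78_factorize(text):
    """Plain LZ78 factorization: each factor is the longest previously
    seen phrase extended by one character.  Output is a list of
    (phrase_index, char) pairs; index 0 denotes the empty phrase."""
    trie = {}          # maps (parent_index, char) -> phrase index
    factors = []
    node, next_index = 0, 1
    for ch in text:
        if (node, ch) in trie:          # current phrase continues
            node = trie[(node, ch)]
        else:                           # new phrase: emit a factor
            trie[(node, ch)] = next_index
            factors.append((node, ch))
            node, next_index = 0, next_index + 1
    if node != 0:                       # flush a pending incomplete phrase
        factors.append((node, ""))
    return factors

def lz78_decode(factors):
    """Rebuild the text from LZ78 factors by replaying the phrase list."""
    phrases = [""]
    out = []
    for idx, ch in factors:
        phrase = phrases[idx] + ch
        phrases.append(phrase)
        out.append(phrase)
    return "".join(out)
```

The trie dictionary is exactly the structure whose memory footprint the article's hash-based representations aim to shrink.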


2021 ◽  
Vol 5 (4 (113)) ◽  
pp. 45-54
Author(s):  
Alexander Nechaev ◽  
Vasily Meltsov ◽  
Dmitry Strabykin

Many advanced recommender models are implemented using matrix factorization algorithms. Experiments show that the quality of their performance depends significantly on the selected hyperparameters. An analysis of the effectiveness of various methods for optimizing these hyperparameters shows that classical Bayesian optimization, which treats the model as a "black box", remains the standard solution. However, models based on matrix factorization have a number of characteristic features, and exploiting them makes it possible to modify the optimization process so as to reduce the time required to find good points without losing quality. A modification of the Gaussian process kernel used as a surrogate model for the loss function in Bayesian optimization is proposed. During the first iterations, the described modification increases the variance of the values predicted by the Gaussian process over a given region of the hyperparameter space. In some cases, this makes it possible to obtain more information about the true shape of the investigated loss function in less time. Experiments were carried out on well-known data sets for recommender systems. The total optimization time with the modification was reduced by 16 % (263 seconds) at best and remained the same at worst (less than one second of difference). The expected error of the recommender model did not change (the absolute difference in values is two orders of magnitude smaller than the error reduction achieved during optimization). Thus, the proposed modification helps find a better set of hyperparameters in less time without loss of quality.
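The paper's exact kernel is not reproduced in this listing; the toy sketch below only illustrates the general idea of inflating a Gaussian-process prior variance over a chosen region during early iterations. The names `make_modified_kernel`, `boost`, and `warmup`, and the linear decay schedule, are illustrative assumptions, not the authors' formulation:

```python
import math

def rbf(x, y, length=1.0):
    """Standard RBF (squared-exponential) kernel on a 1-D space."""
    return math.exp(-(x - y) ** 2 / (2 * length ** 2))

def make_modified_kernel(region, boost=2.0, warmup=10):
    """Sketch of a kernel that inflates the GP prior variance inside
    `region` (an (lo, hi) interval of one hyperparameter axis) during
    the first `warmup` optimization iterations, then decays to plain
    RBF.  k(x, y) = s(x) s(y) rbf(x, y) stays positive semi-definite."""
    lo, hi = region
    def kernel(x, y, iteration):
        def scale(z):
            if lo <= z <= hi and iteration < warmup:
                # inflation factor decays linearly over the warmup phase
                return 1.0 + (boost - 1.0) * (1.0 - iteration / warmup)
            return 1.0
        return scale(x) * scale(y) * rbf(x, y)
    return kernel
```

Since the GP's prior predictive variance at a point x is k(x, x), boosting the scale function inside the region raises the acquisition function's uncertainty there, encouraging early exploration of that part of the hyperparameter space.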


Author(s):  
Pu Tian

Factorization reduces computational complexity and is therefore an important tool in statistical machine learning of high dimensional systems. Conventional molecular modeling, including molecular dynamics and Monte Carlo simulations of molecular systems, is a large research field based on approximate factorization of molecular interactions. Recently, the local distribution theory was proposed to factorize the global joint distribution of a given molecular system into trainable local distributions. Belief propagation algorithms are a family of exact factorization algorithms for trees and are extended to approximate loopy belief propagation algorithms for graphs with loops. Despite the fact that factorization of probability distributions is their common foundation, computational research in molecular systems and machine learning studies utilizing belief propagation algorithms have been carried out independently, each with its own track of algorithm development. The connections and differences among these factorization algorithms are briefly presented in this perspective, in the hope of stimulating further development of factorization algorithms for physical modeling of complex molecular systems.
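A minimal example of the exact sum-product (belief propagation) factorization on a tree, here a three-node chain, checked against brute-force marginalization. The potentials are arbitrary illustrative numbers:

```python
from itertools import product

# Sum-product belief propagation on a 3-node chain A - B - C.
# States are {0, 1}; phi_* are unary potentials, psi_* pairwise ones.
phi = {"A": [1.0, 2.0], "B": [1.0, 1.0], "C": [3.0, 1.0]}
psi_AB = [[2.0, 1.0], [1.0, 2.0]]   # psi_AB[a][b]
psi_BC = [[1.0, 3.0], [3.0, 1.0]]   # psi_BC[b][c]

def marginal_B_bp():
    """Marginal of B via messages from the leaves A and C (exact on a tree)."""
    m_A = [sum(phi["A"][a] * psi_AB[a][b] for a in (0, 1)) for b in (0, 1)]
    m_C = [sum(phi["C"][c] * psi_BC[b][c] for c in (0, 1)) for b in (0, 1)]
    belief = [phi["B"][b] * m_A[b] * m_C[b] for b in (0, 1)]
    z = sum(belief)
    return [x / z for x in belief]

def marginal_B_brute():
    """Same marginal by summing the full joint over all 2^3 states."""
    joint = {(a, b, c): phi["A"][a] * phi["B"][b] * phi["C"][c]
             * psi_AB[a][b] * psi_BC[b][c]
             for a, b, c in product((0, 1), repeat=3)}
    z = sum(joint.values())
    return [sum(v for (a, b, c), v in joint.items() if b == s) / z
            for s in (0, 1)]
```

The brute-force sum costs time exponential in the number of variables, while message passing exploits the factorized structure; this gap is precisely why factorization matters for high-dimensional molecular systems.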


2021 ◽  
Vol 11 (12) ◽  
pp. 5724
Author(s):  
Jialu Sui ◽  
Jian Yin

Nowadays, as the number of items grows while the number of items any user can access remains limited, user-item preference matrices in recommendation systems are always sparse, which leads to a data sparsity problem. The latent factor analysis (LFA) model has been proposed as a solution to the data sparsity problem. As the basis of the LFA model, the singular value decomposition (SVD) model, especially the biased SVD model, achieves good recommendation results on high-dimensional sparse (HiDS) matrices. However, it has the disadvantage of requiring many iterations before convergence. The PID-incorporated SGD-based LFA (PSL) model introduces the principle of the discrete PID controller into stochastic gradient descent (SGD), the learning algorithm of the SVD model. It solves the problem of slow convergence, but its recommendation accuracy needs to be improved. To obtain a better solution, this paper fuses the PSL model with the biased SVD model, aiming to combine their advantages and offset their disadvantages. Experiments show that this biased PSL model performs better than traditional matrix factorization algorithms on datasets of different sizes.
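The published PSL update rule and gains are not given in this listing; the sketch below merely illustrates how a discrete PID adjustment (proportional, integral, and derivative terms on the per-rating error) can be grafted onto biased-SVD SGD. All gains, schedules, and the update structure are illustrative assumptions:

```python
import random

def train_biased_mf_pid(ratings, n_users, n_items, k=2, lr=0.01,
                        kp=1.0, ki=0.001, kd=0.1, epochs=80, seed=0):
    """Sketch: biased matrix factorization (mu + b_u + b_i + p_u.q_i)
    trained by SGD, where each instantaneous error is replaced by a
    discrete-PID-adjusted error.  Returns the training RMSE."""
    rng = random.Random(seed)
    mu = sum(r for _, _, r in ratings) / len(ratings)
    bu, bi = [0.0] * n_users, [0.0] * n_items
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    integral, prev_err = {}, {}
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = mu + bu[u] + bi[i] + sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            integral[(u, i)] = integral.get((u, i), 0.0) + err
            # PID-adjusted error: proportional + integral + derivative
            adj = (kp * err + ki * integral[(u, i)]
                   + kd * (err - prev_err.get((u, i), 0.0)))
            prev_err[(u, i)] = err
            bu[u] += lr * adj
            bi[i] += lr * adj
            for f in range(k):
                puf = P[u][f]
                P[u][f] += lr * adj * Q[i][f]
                Q[i][f] += lr * adj * puf
    se = sum((r - (mu + bu[u] + bi[i]
             + sum(P[u][f] * Q[i][f] for f in range(k)))) ** 2
             for u, i, r in ratings)
    return (se / len(ratings)) ** 0.5
```

The integral term accumulates past errors and the derivative term damps oscillation, which is the mechanism PSL uses to accelerate convergence relative to plain SGD.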


2021 ◽  
Author(s):  
Shalin Shah

Recommender systems aim to personalize the user's experience by suggesting items based on preferences learned from the user's interaction history or from explicit ratings the user has given to items. The system could be part of a retail website, an online bookstore, a movie rental service, an online education portal, and so on. In this paper, I focus on matrix factorization algorithms as applied to recommender systems and discuss the singular value decomposition, gradient descent-based matrix factorization, and parallelizing matrix factorization for large-scale applications.
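A minimal sketch of gradient descent-based matrix factorization of a partially observed rating matrix (regularized SGD over the observed entries only; the hyperparameters are illustrative):

```python
import random

def mf_sgd(R, k=2, lr=0.02, reg=0.01, epochs=200, seed=0):
    """Approximate R (n_users x n_items, None = missing rating) as
    P @ Q^T by stochastic gradient descent on the observed entries.
    Returns the factors and the final training RMSE."""
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(m)]
    obs = [(u, i, R[u][i]) for u in range(n) for i in range(m)
           if R[u][i] is not None]
    for _ in range(epochs):
        for u, i, r in obs:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                puf = P[u][f]
                # gradient step with L2 regularization on both factors
                P[u][f] += lr * (err * Q[i][f] - reg * puf)
                Q[i][f] += lr * (err * puf - reg * Q[i][f])
    rmse = (sum((r - sum(P[u][f] * Q[i][f] for f in range(k))) ** 2
                for u, i, r in obs) / len(obs)) ** 0.5
    return P, Q, rmse
```

Because each SGD step touches only one user row and one item column, the updates for ratings that share no row or column are independent, which is the property large-scale parallel schemes exploit.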


Author(s):  
Vincent C. K. Cheung ◽  
Kazuhiko Seki

The central nervous system (CNS) may produce coordinated motor outputs via the combination of motor modules representable as muscle synergies. Identification of muscle synergies has hitherto relied on applying factorization algorithms to multi-muscle electromyographic data (EMGs) recorded during motor behaviors. Recent studies have attempted to validate the neural basis of the muscle synergies identified by independently retrieving the muscle synergies through CNS manipulations and analytic techniques such as spike-triggered averaging of EMGs. Experimental data have demonstrated the pivotal role of the spinal premotor interneurons in the synergies' organization and the presence of motor cortical loci whose stimulations offer access to the synergies, but whether the motor cortex is also involved in organizing the synergies has remained unsettled. We argue that one difficulty inherent in current approaches to probing the synergies' neural basis is that the EMG generative model based on linear combination of synergies and the decomposition algorithms used for synergy identification are not grounded on enough prior knowledge from neurophysiology. Progress may be facilitated by constraining or updating the model and algorithms with knowledge derived directly from CNS manipulations or recordings. An investigative framework based on evaluating the relevance of neurophysiologically constrained models of muscle synergies to natural motor behaviors will allow a more sophisticated understanding of motor modularity, which will help the community move forward from the current debate on the neural versus non-neural origin of muscle synergies.
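Non-negative matrix factorization is the decomposition most commonly applied to multi-muscle EMG data in this literature; the following is a generic Lee-Seung multiplicative-update sketch on toy numbers, not any specific study's pipeline:

```python
import random

def matmul(A, B):
    """Dense matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank=2, iters=1000, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates: V (muscles x time samples,
    nonnegative EMG-like data) ~ W @ H, with the columns of W playing
    the role of muscle synergies and the rows of H their activations."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[r][j] * num[r][j] / (den[r][j] + eps) for j in range(m)]
             for r in range(rank)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][r] * num[i][r] / (den[i][r] + eps) for r in range(rank)]
             for i in range(n)]
    WH = matmul(W, H)
    err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(n) for j in range(m))
    return W, H, err
```

The multiplicative form keeps every entry nonnegative by construction, which matches the physiological constraint that muscle activations cannot be negative; this is the generative model whose neurophysiological grounding the authors argue needs strengthening.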


2021 ◽  
Vol 31 (2) ◽  
Author(s):  
Johannes Siegele ◽  
Martin Pfurner ◽  
Hans-Peter Schröcker

Abstract: In this paper we investigate factorizations of polynomials over the ring of dual quaternions into linear factors. While earlier results assume that the norm polynomial is real (“motion polynomials”), we only require the absence of real polynomial factors in the primal part and factorizability of the norm polynomial over the dual numbers into monic quadratic factors. This obviously necessary condition is also sufficient for the existence of factorizations. We present an algorithm to compute factorizations of these polynomials and use it for new constructions of mechanisms which cannot be obtained by existing factorization algorithms for motion polynomials. While those produce mechanisms with rotational or translational joints, our approach yields mechanisms consisting of “vertical Darboux joints”. These exhibit mechanical deficiencies, so we explore ways to replace them by cylindrical joints while keeping the overall mechanism sufficiently constrained.
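In the motion-polynomial setting that this paper generalizes, a factorization into linear factors has a direct kinematic reading; schematically (the notation $\mathbb{DH}$ for the dual quaternions is ours):

```latex
C(t) = (t - h_1)(t - h_2)\cdots(t - h_n), \qquad h_j \in \mathbb{DH},
```

where each linear factor $t - h_j$ parametrizes the one-parameter motion of a single joint, so the rational motion $C(t)$ becomes the composition of $n$ joints of an open kinematic chain. The classical results require the norm polynomial $C(t)\,\overline{C(t)}$ to be real; the condition here is only that it factor over the dual numbers into monic quadratic factors, while the primal part of $C(t)$ has no real polynomial factor.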


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0244026
Author(s):  
John Golden ◽  
Daniel O’Malley

It was recently shown that quantum annealing can be used as an effective, fast subroutine in certain types of matrix factorization algorithms. The quantum annealing algorithm performed best for quick, approximate answers, but performance rapidly plateaued. In this paper, we utilize reverse annealing instead of forward annealing in the quantum annealing subroutine for nonnegative/binary matrix factorization problems. After an initial global search with forward annealing, reverse annealing performs a series of local searches that refine existing solutions. The combination of forward and reverse annealing significantly improves performance compared to forward annealing alone for all but the shortest run times.
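Quantum annealing hardware cannot be reproduced here; the following classical caricature only illustrates the division of labor the abstract describes, with forward annealing approximated by independent global samples and reverse annealing by local bit-flip refinement of the incumbent, applied to the per-column least-squares subproblem that nonnegative/binary matrix factorization poses as a QUBO. All function names and parameters are illustrative:

```python
import random

def qubo_energy(W, v, h):
    """Squared error ||v - W h||^2 for a binary column h -- the
    per-column subproblem NBMF hands to the annealer as a QUBO."""
    return sum((v[i] - sum(W[i][j] * h[j] for j in range(len(h)))) ** 2
               for i in range(len(v)))

def forward_anneal_sim(W, v, samples=50, rng=None):
    """Caricature of forward annealing: keep the best of many
    independent global random samples."""
    rng = rng or random.Random(0)
    k = len(W[0])
    return min(([rng.randint(0, 1) for _ in range(k)]
                for _ in range(samples)),
               key=lambda h: qubo_energy(W, v, h))

def reverse_anneal_sim(W, v, h0, rounds=50, flips=1, rng=None):
    """Caricature of reverse annealing: perturb the incumbent by a few
    bit flips and keep the candidate whenever it does not get worse."""
    rng = rng or random.Random(1)
    h, e = list(h0), qubo_energy(W, v, h0)
    for _ in range(rounds):
        cand = list(h)
        for _ in range(flips):
            j = rng.randrange(len(cand))
            cand[j] ^= 1
        ce = qubo_energy(W, v, cand)
        if ce <= e:
            h, e = cand, ce
    return h
```

The two-phase structure mirrors the paper's pipeline: a global forward pass supplies a starting point, after which local refinement around that point can only maintain or improve the solution quality.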


2020 ◽  
Vol 25 (4) ◽  
pp. 63
Author(s):  
Anthony Overmars ◽  
Sitalakshmi Venkatraman

The security of RSA relies on the computationally challenging factorization of the RSA modulus N = p1 p2, where N is a large semi-prime consisting of two primes p1 and p2, used for the generation of RSA keys in commonly adopted cryptosystems. Euler's factorization method theoretically factorizes semi-primes whose primes p1 and p2 are both congruent to 1 mod 4. While this caters to only a quarter of the possible combinations of primes, the remaining combinations, congruent to 3 mod 4, can be handled by extending the method using Gaussian primes. However, for the Pythagorean primes applied in RSA, the semi-prime has only two representations as a sum of two squares in the range of possible squares (N−1, N/2). As N becomes large, finding these two sums of two squares becomes computationally intractable in practice. In this paper, we apply Pythagorean primes to explore how the number of sums of two squares in the search field can be increased, thereby increasing the likelihood that a sum of two squares can be found. Even though many such representations may exist, we show that it is sufficient to find only two of them to factorize the original semi-prime. We present the algorithm, showing the simplicity of steps that use rudimentary arithmetic operations and minimal memory, with search cycle time being a factor only for very large semi-primes, and one that can be contained. We demonstrate the correctness of our approach with practical illustrations of breaking RSA keys. Our enhanced factorization method improves on our previous work, with results compared to other factorization algorithms, and continues to be an ongoing area of our research.
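Classical Euler factorization, the starting point this abstract extends, can be sketched as follows (this is the textbook method with a brute-force search for the two representations, not the authors' enhanced algorithm; it assumes an odd semi-prime N ≡ 1 mod 4):

```python
from math import gcd, isqrt

def two_square_representations(N):
    """All ways to write N = a*a + b*b with 0 < a <= b."""
    reps = []
    a = 1
    while a * a <= N - a * a:
        b2 = N - a * a
        b = isqrt(b2)
        if b * b == b2:
            reps.append((a, b))
        a += 1
    return reps

def euler_factor(N):
    """Euler's factorization method: two distinct representations
    N = a^2 + b^2 = c^2 + d^2 yield a nontrivial factorization of N."""
    reps = two_square_representations(N)
    if len(reps) < 2:
        return None                 # method not applicable
    (a, b), (c, d) = reps[0], reps[1]
    if a % 2 != c % 2:              # pair odd with odd (N assumed odd)
        c, d = d, c
    k = gcd(abs(a - c), abs(d - b))
    n = gcd(a + c, d + b)
    f = (k // 2) ** 2 + (n // 2) ** 2
    return f, N // f
```

Euler's own example 1000009 = 1000² + 3² = 972² + 235² factors this way into 293 × 3413; the brute-force search for the two representations is exactly the step whose cost the paper's enlarged search field is meant to tame.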


Author(s):  
Kongari Mounika ◽  
B. V. N. Krishna Suresh

Matrix factorization algorithms such as basic matrix factorization (MF), singular value decomposition (SVD), and probabilistic matrix factorization (PMF) are summarized and compared. Building on this work, an improved probabilistic matrix factorization algorithm called MPMF is proposed in this paper. MPMF determines the optimal dimension D of both the user feature vector and the item feature vector through experiments. The complexity of the algorithm scales linearly with the number of observations, so it can be applied to massive data and has very good scalability. Experimental results show that MPMF not only achieves higher recommendation accuracy but also improves the efficiency of the algorithm on sparse and unbalanced data sets compared with other related algorithms.
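A minimal sketch of choosing the latent dimension D experimentally, as the abstract describes, by validating a plain SGD factorization on held-out ratings. The inner trainer, the split, and the candidate set are illustrative assumptions, not the MPMF procedure itself:

```python
import random

def train(obs, n, m, k, lr=0.02, reg=0.05, epochs=150, seed=0):
    """Plain SGD matrix factorization used as the inner model."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(m)]
    for _ in range(epochs):
        for u, i, r in obs:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                p = P[u][f]
                P[u][f] += lr * (err * Q[i][f] - reg * p)
                Q[i][f] += lr * (err * p - reg * Q[i][f])
    return P, Q

def rmse(obs, P, Q):
    k = len(P[0])
    return (sum((r - sum(P[u][f] * Q[i][f] for f in range(k))) ** 2
                for u, i, r in obs) / len(obs)) ** 0.5

def choose_dimension(train_obs, val_obs, n, m, candidates=(1, 2, 3)):
    """Pick the latent dimension D with the lowest validation RMSE."""
    scores = {d: rmse(val_obs, *train(train_obs, n, m, d))
              for d in candidates}
    return min(scores, key=scores.get)
```

Note that one pass of the inner loop visits each observed rating once, which is the linear-in-observations complexity the abstract highlights.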

