binary matrix
Recently Published Documents

TOTAL DOCUMENTS: 202 (five years: 78)
H-INDEX: 15 (five years: 2)

2022 ◽  
Vol 122 ◽  
pp. 103350
Author(s):  
Divyanshu Talwar ◽  
Aanchal Mongia ◽  
Emilie Chouzenoux ◽  
Angshul Majumdar

Author(s):  
Ahmad Al-Jarrah ◽  
Amer Albsharat ◽  
Mohammad Al-Jarrah

This paper proposes a new algorithm for text encryption that uses English words as the unit of encoding. The algorithm removes any feature that could be used to reveal the encrypted text by adopting variable code lengths for the English words, utilizing a variable-length encryption key, applying two-dimensional binary shuffling techniques at the bit level, and using four binary logical operations with randomized shuffling inputs. The English words, sorted alphabetically, are divided into four lookup tables in which each word is assigned an index. The strength of the proposed algorithm derives from three major components. First, each lookup table uses a different index size, and none of the index sizes is a multiple of a byte. Second, the shuffling operations are conducted on a two-dimensional binary matrix of variable length. Third, the parameters of the shuffling operations are randomized based on a randomly selected encryption key of varying size, so the shuffling operations move adjacent bits apart in a randomized fashion. As a result, the proposed algorithm eliminates any signature or statistical feature of the original message. Moreover, it reduces the size of the encrypted message, an additional advantage achieved by using the smallest possible index size for each lookup table.
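The bit-level shuffling step might be sketched as follows (a minimal illustration with hypothetical names; the paper's actual key schedule, lookup tables, and logical operations are not reproduced here):

```python
import random

def shuffle_bits(bits, rows, cols, key):
    """Arrange a flat bit list into a rows x cols binary matrix and
    permute its rows and columns with orderings derived from the key,
    so adjacent bits are moved apart in a key-dependent fashion."""
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    rng = random.Random(key)  # stand-in for the key-driven shuffle parameters
    row_perm = rng.sample(range(rows), rows)
    col_perm = rng.sample(range(cols), cols)
    return [[matrix[r][c] for c in col_perm] for r in row_perm]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = shuffle_bits(bits, rows=2, cols=4, key=1234)
```

Decryption would invert both permutations by regenerating them from the same key.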


Author(s):  
I. L. Kuznetsova ◽  
A. S. Poljakov

The problem of ensuring the integrity of information transmitted in modern information and communication systems is considered in this paper. An optimized algorithm for detecting and correcting errors in information transmitted over communication lines is proposed. It was developed on the basis of previous studies of an error correction method based on the parity values of the coordinates of a binary matrix. The proposed error detection algorithm is easy to implement, fast, and efficient, and is focused on the use of small binary matrices, for example, 4 × 8 or 7 × 8 bits. In such matrices, the number of errors appearing during the transfer of information is relatively small and easily detected.
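A sketch of the underlying idea, assuming a classic two-dimensional parity scheme on a small binary matrix (the paper's exact coding is not reproduced; the names are illustrative):

```python
def parities(matrix):
    """Row and column parity values of a small binary matrix."""
    row_par = [sum(row) % 2 for row in matrix]
    col_par = [sum(col) % 2 for col in zip(*matrix)]
    return row_par, col_par

def locate_error(matrix, row_par, col_par):
    """Locate a single flipped bit: the failing row parity and the
    failing column parity intersect at the erroneous position."""
    bad_r = [r for r, row in enumerate(matrix) if sum(row) % 2 != row_par[r]]
    bad_c = [c for c, col in enumerate(zip(*matrix)) if sum(col) % 2 != col_par[c]]
    if bad_r and bad_c:
        return bad_r[0], bad_c[0]   # flip this bit to correct the error
    return None                     # no data-bit error detected
```

For a single bit error in a 4 × 8 matrix, exactly one row parity and one column parity fail, pinpointing the bit to correct.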


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Chen Wenbai ◽  
Liu Chang ◽  
Chen Weizhao ◽  
Liu Huixiang ◽  
Chen Qili ◽  
...  

We present a prediction framework to estimate the remaining useful life (RUL) of equipment based on the generative adversarial imputation net (GAIN) and a multiscale deep convolutional neural network with long short-term memory (MSDCNN-LSTM). The proposed method addresses the problem of missing data caused by sensor failures in engineering applications. First, a binary matrix is used to adjust the proportion of zeros, simulating the amount of missing data in an engineering environment. Then, the GAIN model is used to impute the missing data and approximate the true sample distribution. Finally, the MSDCNN-LSTM model is used for RUL prediction. Experiments are carried out on the commercial modular aero-propulsion system simulation (C-MAPSS) dataset to validate the proposed method. The prediction results show that the proposed method outperforms other methods when packet loss occurs, with significant improvements in the root mean square error (RMSE) and the score function value.
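The binary-mask step the abstract describes can be sketched like this (illustrative names; the GAIN imputer itself is not shown):

```python
import random

def missing_mask(rows, cols, miss_rate, seed=0):
    """Binary matrix with 1 = observed, 0 = missing; the proportion of
    zeros simulates sensor dropout in the engineering environment."""
    rng = random.Random(seed)
    return [[0 if rng.random() < miss_rate else 1 for _ in range(cols)]
            for _ in range(rows)]

def apply_mask(data, mask):
    """Blank out the entries marked missing, ready for an imputer."""
    return [[x if m else None for x, m in zip(data_row, mask_row)]
            for data_row, mask_row in zip(data, mask)]
```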


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261250
Author(s):  
Osman Asif Malik ◽  
Hayato Ushijima-Mwesigwa ◽  
Arnab Roy ◽  
Avradip Mandal ◽  
Indradeep Ghosh

Many fundamental problems in data mining can be reduced to one or more NP-hard combinatorial optimization problems. Recent advances in novel technologies such as quantum and quantum-inspired hardware promise a substantial speedup for solving these problems compared to general-purpose computers, but often require the problem to be modeled in a special form, such as an Ising or quadratic unconstrained binary optimization (QUBO) model, in order to take advantage of these devices. In this work, we focus on the important binary matrix factorization (BMF) problem, which has many applications in data mining. We propose two QUBO formulations for BMF. We show how clustering constraints can easily be incorporated into these formulations. The special-purpose hardware we consider is limited in the number of variables it can handle, which presents a challenge when factorizing large matrices. We propose a sampling-based approach to overcome this challenge, allowing us to factorize large rectangular matrices. In addition to these methods, we also propose a simple baseline algorithm which outperforms our more sophisticated methods in a few situations. We run experiments on the Fujitsu Digital Annealer, a quantum-inspired complementary metal-oxide-semiconductor (CMOS) annealer, on both synthetic and real data, including gene expression data. These experiments show that our approach is able to produce more accurate BMFs than competing methods.
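For reference, the BMF objective: given a binary matrix X, find binary factors U (m × k) and V (k × n) whose Boolean product best reconstructs X. A brute-force rank-1 sketch, feasible only for tiny matrices (the paper instead encodes the problem as a QUBO for the annealer; these names are illustrative):

```python
import itertools

def boolean_product(U, V):
    """(U ∘ V)[i][j] = OR over t of (U[i][t] AND V[t][j])."""
    k = len(V)
    return [[int(any(U[i][t] and V[t][j] for t in range(k)))
             for j in range(len(V[0]))] for i in range(len(U))]

def bmf_error(X, U, V):
    """Number of entries where the Boolean reconstruction disagrees with X."""
    R = boolean_product(U, V)
    return sum(X[i][j] != R[i][j]
               for i in range(len(X)) for j in range(len(X[0])))

def best_rank1(X):
    """Exhaustive rank-1 BMF: try every binary column u and row v."""
    m, n = len(X), len(X[0])
    best = None
    for u in itertools.product([0, 1], repeat=m):
        for v in itertools.product([0, 1], repeat=n):
            U, V = [[ui] for ui in u], [list(v)]
            e = bmf_error(X, U, V)
            if best is None or e < best[0]:
                best = (e, U, V)
    return best
```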


2021 ◽  
Author(s):  
Mark Bustoros ◽  
Shankara Anand ◽  
Romanos Sklavenitis-Pistofidis ◽  
Robert Redd ◽  
Eileen M. Boyle ◽  
...  

Smoldering multiple myeloma (SMM) is a precursor condition of multiple myeloma (MM) with significant heterogeneity in disease progression. Existing clinical models of progression risk do not fully capture this heterogeneity. Here we integrated 42 genetic alterations from 214 SMM patients using unsupervised binary matrix factorization (BMF) clustering and identified six distinct genetic subtypes. These subtypes were differentially associated with established MM-related RNA signatures, oncogenic and immune transcriptional profiles, and evolving clinical biomarkers. Three subtypes were associated with increased risk of progression to active MM in both the primary and validation cohorts, indicating they can be used to better distinguish high- and low-risk patients within the currently used clinical risk stratification model.


2021 ◽  
Author(s):  
Amol Tagad ◽  
Reman Kumar Singh ◽  
G Naresh Patwari

Protein aggregation is a common and complex phenomenon in biological processes, yet a robust analysis of the aggregation process remains elusive. Commonly used measures such as the centre-of-mass to centre-of-mass (COM-COM) distance, the radius of gyration (Rg), hydrogen bonding (HB), and the solvent-accessible surface area (SASA) do not quantify aggregation accurately. Herein, a new and robust method that uses an aggregation matrix (AM) approach to investigate peptide aggregation in an MD simulation trajectory is presented. An n × n two-dimensional AM is created from the inter-peptide CA-CA cut-off distances, which are binarily encoded (0 or 1). These aggregation matrices are analysed to enumerate, hierarchically order, and structurally classify the aggregates. Moreover, comparison of the present AM method with the conventional Rg, COM-COM, and HB methods shows that the conventional methods grossly underestimate the aggregation propensity. Additionally, the conventional methods do not address the hierarchy and structural ordering of the aggregates, which the present AM method does. Finally, the present AM method utilises only n × n two-dimensional matrices to analyse aggregates consisting of several peptide units. To the best of our knowledge, this is the first approach to enumerate, hierarchically order, and structurally classify peptide aggregation.
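A simplified sketch of the aggregation-matrix idea, using one coordinate per peptide rather than the per-residue CA-CA distances used in the paper (names are illustrative): build the binary contact matrix from a distance cutoff, then read aggregates off as connected components.

```python
from itertools import combinations

def aggregation_matrix(coords, cutoff):
    """n x n binary matrix: AM[i][j] = 1 if units i and j lie within cutoff."""
    n = len(coords)
    am = [[0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        d = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5
        if d <= cutoff:
            am[i][j] = am[j][i] = 1
    return am

def aggregate_sizes(am):
    """Connected components of the contact graph = aggregates (largest first)."""
    n = len(am)
    seen = [False] * n
    sizes = []
    for start in range(n):
        if seen[start]:
            continue
        stack, size = [start], 0
        seen[start] = True
        while stack:                    # depth-first traversal of one aggregate
            v = stack.pop()
            size += 1
            for w in range(n):
                if am[v][w] and not seen[w]:
                    seen[w] = True
                    stack.append(w)
        sizes.append(size)
    return sorted(sizes, reverse=True)
```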


2021 ◽  
Vol 4 (4) ◽  
Author(s):  
Ajinkya Borle ◽  
Vincent Elfving ◽  
Samuel J. Lomonaco

The quantum approximate optimization algorithm (QAOA) by Farhi et al. is a quantum computational framework for solving quantum or classical optimization tasks. Here, we explore using QAOA for binary linear least squares (BLLS); a problem that can serve as a building block of several other hard problems in linear algebra, such as the non-negative binary matrix factorization (NBMF) and other variants of the non-negative matrix factorization (NMF) problem. Most of the previous efforts in quantum computing for solving these problems were done using the quantum annealing paradigm. For the scope of this work, our experiments were done on noiseless quantum simulators, a simulator including a device-realistic noise-model, and two IBM Q 5-qubit machines. We highlight the possibilities of using QAOA and QAOA-like variational algorithms for solving such problems, where trial solutions can be obtained directly as samples, rather than being amplitude-encoded in the quantum wavefunction. Our numerics show that even for a small number of steps, simulated annealing can outperform QAOA for BLLS at a QAOA depth of p ≤ 3 for the probability of sampling the ground state. Finally, we point out some of the challenges involved in current-day experimental implementations of this technique on cloud-based quantum computers.
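The reduction from BLLS to a quadratic binary model can be sketched as follows: expanding ||Ax − b||² = xᵀAᵀAx − 2bᵀAx + bᵀb and using x_i² = x_i for binary variables lets the linear term be absorbed into the diagonal. This is the standard QUBO construction, not necessarily the paper's exact encoding; names are illustrative.

```python
def blls_qubo(A, b):
    """QUBO matrix Q for min_x ||Ax - b||^2 over binary x:
    off-diagonal Q[i][j] = (A^T A)[i][j]; the diagonal absorbs the
    linear term -2 (A^T b)_i because x_i^2 = x_i for binary x."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
    Q = [row[:] for row in AtA]
    for i in range(n):
        Q[i][i] = AtA[i][i] - 2 * Atb[i]
    return Q

def energy(Q, x):
    """QUBO energy x^T Q x; equals ||Ax - b||^2 minus the constant b^T b."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
```

Any sampler (QAOA, simulated annealing, exhaustive search for small n) that minimizes this energy also minimizes the least-squares residual.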


2021 ◽  
Vol 2021 (11) ◽  
pp. 113106
Author(s):  
Giuseppe Mussardo ◽  
André LeClair

The validity of the Riemann hypothesis (RH) on the location of the non-trivial zeros of the Riemann ζ-function is directly related to the growth of the Mertens function M(x) = Σ_{k=1}^{x} μ(k), where μ(k) is the Möbius coefficient of the integer k; the RH is indeed true if the Mertens function goes asymptotically as M(x) ∼ x^{1/2+ϵ}, where ϵ is an arbitrary strictly positive quantity. We argue that this behavior can be established on the basis of a new probabilistic approach based on the global properties of the Mertens function, namely, based on reorganizing globally in distinct blocks the terms of its series. With this aim, we focus attention on the square-free numbers and we derive a series of probabilistic results concerning the prime number distribution along the series of square-free numbers, the average number of prime divisors, the Erdős–Kac theorem for square-free numbers, etc. These results point to the conclusion that the Mertens function is subject to a normal distribution as much as any other random walk. We also present an argument in favor of the thesis that the validity of the RH also implies the validity of the generalized RH for the Dirichlet L-functions. Next we study the local properties of the Mertens function, i.e. its variation induced by each Möbius coefficient restricted to the square-free numbers.
Motivated by the natural curiosity to see how closely any sub-sequence extracted from the sequence of the Möbius coefficients for the square-free numbers resembles a purely random walk, we perform a massive statistical analysis on these coefficients, applying to them a series of randomness tests of increasing precision and complexity; together with several frequency tests within a block, the list of our tests includes those for the longest run of ones in a block, the binary matrix rank test, the discrete Fourier transform test, the non-overlapping template matching test, the entropy test, the cumulative sum test, the random excursion tests, etc., for a total of 18 different tests. The successful outputs of all these tests (each of them with a level of confidence of 99% that all the sub-sequences analyzed are indeed random) can be seen as an impressive 'experimental' confirmation of the Brownian nature of the restricted Möbius coefficients and of the probabilistic normal-law distribution of the Mertens function established analytically earlier. In view of the theoretical probabilistic argument and the large battery of statistical tests, we can conclude that while a violation of the RH is, strictly speaking, not impossible, it is extremely improbable.
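For concreteness, the Möbius coefficients and the Mertens function under study can be computed directly (a straightforward trial-division sketch, not the paper's statistical machinery):

```python
def mobius(n):
    """μ(n): 0 if n has a squared prime factor, otherwise
    (-1) raised to the number of distinct prime factors of n."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor: not square-free
            result = -result
        p += 1
    if n > 1:                      # leftover prime factor
        result = -result
    return result

def mertens(x):
    """M(x) = sum of μ(k) for k = 1..x."""
    return sum(mobius(k) for k in range(1, x + 1))
```

The RH-equivalent bound discussed above concerns how slowly |M(x)| grows relative to x^{1/2+ϵ} as x → ∞.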


Author(s):  
Sivaranjan Goswami ◽  
Kumaresh Sarmah ◽  
Kandarpa Kumar Sarma ◽  
Nikos E. Mastorakis

Computer-aided synthesis of sparse arrays is a popular area of research worldwide for applications in radar and wireless communication. The trend is reaching new heights with the launch of 5G millimeter-wave wireless communication. A sparse array has fewer elements than a conventional antenna array. In this work, a sparse array is synthesized from a 16×16 uniform rectangular array (URA). The synthesis includes an artificial neural network (ANN) model for estimating the excitation weights of the URA for a given scan-angle. The weights of the sparse array are computed by the Hadamard product of the weight matrix of the URA with a binary matrix obtained using particle swarm optimization (PSO). The objective function of the optimization problem is formulated to ensure that the peak side-lobe level (PSLL) is minimized for multiple scan-angles. Experimental analysis shows that apart from minimizing the PSLL, the proposed approach yields a narrower beam-width than the original URA.
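The element-wise masking step can be sketched as follows (illustrative names; the ANN weight estimation and the PSO search for the binary matrix are not shown):

```python
def sparse_weights(W, B):
    """Hadamard (element-wise) product of the URA excitation weights W
    with a binary selection matrix B; zeroed entries correspond to
    elements removed from the array."""
    return [[w * b for w, b in zip(w_row, b_row)]
            for w_row, b_row in zip(W, B)]

def active_elements(B):
    """Number of elements retained in the sparse array."""
    return sum(sum(row) for row in B)
```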

