Deciding associativity for partial multiplication tables of order $3$

1978 ◽  
Vol 32 (142) ◽  
pp. 593
Author(s):  
Paul W. Bunting ◽  
Jan van Leeuwen ◽  
Dov Tamari
2020 ◽  
Author(s):  
Ahmad Hazaymeh

The sieve method is used to separate prime numbers from non-prime numbers. Since the set of odd primes cannot be written as multiplication tables, while the set of odd composite numbers can, every odd number that does not appear in these multiplication tables is certainly an odd prime. Based on these tables, it was argued by proof by contradiction that the sequence of prime numbers is a random sequence; although random, it can easily be tracked using the same method. A counter-example is used to prove that it is not possible to write whole multiplication tables of odd primes in the form [(a×b)+c] or [(a×b)−c]; instead, partial multiplication tables can be used. It was also claimed that the number 1 is an odd prime.
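The core idea of the abstract can be sketched as follows (this is a minimal illustration of the sieve-by-tables reading, not the paper's own algorithm): odd composites are exactly the products of two odd numbers ≥ 3, so any odd number up to a limit that never appears in those multiplication tables must be prime.

```python
def odd_primes_by_table(limit):
    # Build the "multiplication table" of odd numbers >= 3:
    # every odd composite <= limit is a product (2i+1)*(2j+1).
    composites = set()
    i = 3
    while i * i <= limit:
        for j in range(i, limit // i + 1, 2):
            composites.add(i * j)
        i += 2
    # Odd numbers absent from the table are prime.  (The abstract also
    # counts 1 as prime; the conventional definition, used here, does not.)
    return [n for n in range(3, limit + 1, 2) if n not in composites]

print(odd_primes_by_table(50))
# → [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```

Note that 2, the only even prime, falls outside these odd-by-odd tables and has to be handled separately.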



Author(s):  
Michał Dębski ◽  
Jarosław Grytczuk

2001 ◽  
Vol 63 (2) ◽  
pp. 186-200 ◽  
Author(s):  
David Mix Barrington ◽  
Peter Kadau ◽  
Klaus-Jörn Lange ◽  
Pierre McKenzie

10.14311/1029 ◽  
2008 ◽  
Vol 48 (4) ◽  
Author(s):  
I. Šimeček

Sparse matrix-vector multiplication (SpM×V for short) is one of the most common subroutines in numerical linear algebra. The problem is that the memory access patterns during SpM×V are irregular, so cache utilization can suffer from low spatial or temporal locality. Approaches to improving the performance of SpM×V are based on matrix reordering and register blocking. These matrix transformations are designed to handle randomly occurring dense blocks in a sparse matrix, and their efficiency depends strongly on the presence of suitable blocks. The overhead of reorganizing a matrix from one format to another is often on the order of tens of executions of SpM×V; for this reason, such a reorganization pays off only if the same matrix A is multiplied by multiple different vectors, e.g., in iterative linear solvers. This paper introduces an unusual approach to accelerating SpM×V. The approach can be combined with other acceleration techniques and consists of three steps: 1) dividing matrix A into non-empty regions, 2) choosing an efficient way to traverse these regions (in other words, choosing an efficient ordering of the partial multiplications), and 3) choosing the optimal type of storage for each region. All three steps are tightly coupled. The first step divides the whole matrix into smaller parts (regions) that can fit in the cache. The second step improves locality during multiplication through better utilization of distant references. The last step maximizes the machine computation performance of the partial multiplication for each region. In this paper, we describe aspects of these three steps in more detail (including fast and inexpensive algorithms for all steps). Our measurements show that our approach gives a significant speedup for almost all matrices arising from various technical areas.


Author(s):  
Trevor Davis Lipscombe

This chapter presents advice on how to avoid simple mistakes when performing mental calculations at high speed. It includes a method to speed up the rate at which you recite your multiplication tables. This can save fractions of a second, which, in an exam with many such multiplications, can be crucial. It urges neat handwriting, and shows that trailing zeros, or decimal points in the middle of a number, are superfluous provided you make an estimate before calculating the answer. It presents a quick look at factors, which can slash seconds from the time it takes to multiply and divide, and introduces the art of shunting.
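The factoring trick can be shown with a worked example (my own illustration, not one taken from the chapter): splitting one factor lets you multiply by "easy" round numbers instead.

```python
# 36 * 25: rewrite 36 as 9 * 4, then pair 4 with 25 to get 100.
#   36 * 25 = (9 * 4) * 25 = 9 * (4 * 25) = 9 * 100 = 900
result = 9 * (4 * 25)
print(result)  # → 900
assert result == 36 * 25
```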

