Storage Complexity
Recently Published Documents

TOTAL DOCUMENTS: 23 (five years: 4)
H-INDEX: 6 (five years: 1)

CALCOLO ◽  
2021 ◽  
Vol 58 (3) ◽  
Author(s):  
Niklas Angleitner ◽  
Markus Faustmann ◽  
Jens Markus Melenk

Abstract: We consider the approximation of the inverse of the finite element stiffness matrix in the data-sparse $\mathcal{H}$-matrix format. For a large class of shape-regular but possibly non-uniform meshes, including algebraically graded meshes, we prove that the inverse of the stiffness matrix can be approximated in the $\mathcal{H}$-matrix format at an exponential rate in the block rank. Since the storage complexity of the hierarchical matrix is logarithmic-linear and grows only linearly in the block rank, we obtain an efficient approximation that can be used, e.g., as an approximate direct solver or as a preconditioner for iterative solvers.
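To make the block-rank idea concrete, here is a minimal sketch (not the authors' construction) of the basic ingredient of an $\mathcal{H}$-matrix: compressing a smooth off-diagonal block to low rank with a truncated SVD. The kernel, cluster geometry, and sizes are illustrative assumptions; the error decays rapidly in the rank, while storing the two factors costs O(k(m+n)) instead of O(mn).

```python
import numpy as np

def low_rank_block(block, rank):
    """Compress a matrix block to the given rank via truncated SVD."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]  # factors A, B with block ~ A @ B

# Illustrative 1D example: a smooth off-diagonal interaction block,
# here with entries 1/(x_i - y_j) for two well-separated point clusters.
x = np.linspace(0.0, 1.0, 200)   # source cluster
y = np.linspace(2.0, 3.0, 200)   # well-separated target cluster
block = 1.0 / (x[:, None] - y[None, :])

A, B = low_rank_block(block, rank=8)
err = np.linalg.norm(block - A @ B) / np.linalg.norm(block)
print(f"relative error at rank 8: {err:.2e}")  # decays exponentially in the rank
```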


Author(s):  
Wildan Suharso ◽  
Abims Fardiansa ◽  
Yuda Munarko ◽  
Hardianto Wibowo

Libraries are service units with high storage complexity, as evidenced by the growing volume of data stored each year. Data that is not integrated creates a complex problem, because the amount of processing carried out grows every year, especially for loan circulation: as the number of books increases, borrowing circulation also increases. At the same time, the library must know exactly which book collections it holds and which transactions it has made. Much of the data owned by the library cannot be utilized optimally, so management is unable to make full use of it. In university-scale libraries, this problem grows when the data is not fully integrated. In this study, a star schema was implemented to solve problems related to data integration, using the nine-step methodology: choosing the process, choosing the grain, identifying the dimensions, choosing the facts, storing pre-calculations in the fact table, rounding out the dimension tables, choosing the duration of the database, tracking slowly changing dimensions, and deciding the query priorities and modes. The results of this study indicate that the star schema can be implemented in the library case, with a data warehouse and OLAP supporting decision making for adding books, and produced 3 dimensions from the 4 grains identified.
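As a concrete illustration of a star schema of this kind, here is a minimal sketch in Python/pandas; all table names, columns, and the loan-circulation grain are hypothetical stand-ins, not the schema from the paper.

```python
import pandas as pd

# Hypothetical dimension tables for a library loan-circulation star schema.
dim_book = pd.DataFrame({"book_id": [1, 2],
                         "title": ["Databases", "Networks"],
                         "category": ["CS", "CS"]})
dim_member = pd.DataFrame({"member_id": [10, 11],
                           "faculty": ["Engineering", "Science"]})
dim_date = pd.DataFrame({"date_id": [20240101, 20240102],
                         "year": [2024, 2024], "month": [1, 1]})

# Fact table: one row per loan event, keyed by the dimension tables.
fact_loan = pd.DataFrame({
    "date_id": [20240101, 20240101, 20240102],
    "book_id": [1, 2, 1],
    "member_id": [10, 11, 10],
    "loan_count": [1, 1, 1],
})

# A typical OLAP-style rollup: loans per category per month.
report = (fact_loan.merge(dim_book, on="book_id")
                   .merge(dim_date, on="date_id")
                   .groupby(["year", "month", "category"])["loan_count"].sum())
print(report)
```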


Author(s):  
D. E. Keyes ◽  
H. Ltaief ◽  
G. Turkiyyah

A traditional goal of algorithmic optimality, squeezing out flops, has been superseded by evolution in architecture. Flops no longer serve as a reasonable proxy for all aspects of complexity. Instead, algorithms must now squeeze memory, data transfers, and synchronizations, while extra flops on locally cached data represent only small costs in time and energy. Hierarchically low-rank matrices realize a rarely achieved combination of optimal storage complexity and high computational intensity for a wide class of formally dense linear operators that arise in applications for which exascale computers are being constructed. They may be regarded as algebraic generalizations of the fast multipole method. Methods based on these hierarchical data structures and their simpler cousins, tile low-rank matrices, are well proportioned for early exascale computer architectures, which are provisioned for high processing power relative to memory capacity and memory bandwidth. They are ushering in a renaissance of computational linear algebra. A challenge is that emerging hardware architecture possesses hierarchies of its own that do not generally align with those of the algorithm. We describe modules of a software toolkit, hierarchical computations on manycore architectures, that illustrate these features and are intended as building blocks of applications, such as matrix-free higher-order methods in optimization and large-scale spatial statistics. Some modules of this open-source project have been adopted in the software libraries of major vendors. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
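As a toy illustration of the tile low-rank (TLR) idea mentioned above, the sketch below partitions a matrix into tiles, stores each tile either densely or as truncated-SVD factors, and applies the compressed operator to a vector. The tile size, tolerance, and compression criterion are illustrative assumptions, not the toolkit's implementation.

```python
import numpy as np

def compress_tiles(M, tile=64, tol=1e-6):
    """Toy TLR compression: store a tile as truncated SVD factors when its
    singular values decay enough to save storage, else keep it dense."""
    n = M.shape[0]
    tiles = {}
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            T = M[i:i+tile, j:j+tile]
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            k = int(np.sum(s > tol * s[0]))   # numerical rank of the tile
            if k < tile // 2:                 # compression pays off
                tiles[(i, j)] = ("lr", U[:, :k] * s[:k], Vt[:k, :])
            else:
                tiles[(i, j)] = ("dense", T)
    return tiles

def tlr_matvec(tiles, x, n, tile=64):
    """Matrix-vector product using the compressed tiles."""
    y = np.zeros(n)
    for (i, j), data in tiles.items():
        xj = x[j:j+tile]
        if data[0] == "lr":
            _, A, B = data
            y[i:i+tile] += A @ (B @ xj)       # O(k*tile) instead of O(tile^2)
        else:
            y[i:i+tile] += data[1] @ xj
    return y
```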


Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 78 ◽  
Author(s):  
Zidi Qin ◽  
Di Zhu ◽  
Xingwei Zhu ◽  
Xuan Chen ◽  
Yinghuan Shi ◽  
...  

As a key ingredient of deep neural networks (DNNs), fully-connected (FC) layers are widely used in various artificial intelligence applications. However, FC layers contain many parameters, so their efficient processing is restricted by memory bandwidth. In this paper, we propose a compression approach combining block-circulant matrix-based weight representation and power-of-two quantization. Applying block-circulant matrices in FC layers reduces the storage complexity from $O(k^2)$ to $O(k)$. By quantizing the weights into integer powers of two, the multiplications in the inference can be replaced by shift and add operations. The memory usage of the models for MNIST, CIFAR-10 and ImageNet can be compressed by 171×, 2731× and 128×, respectively, with minimal accuracy loss. A configurable parallel hardware architecture is then proposed for processing the compressed FC layers efficiently. Without multipliers, a block matrix-vector multiplication module (B-MV) is used as the computing kernel. The architecture is flexible enough to support FC layers of various compression ratios with a small footprint. At the same time, memory accesses can be significantly reduced by using the configurable architecture. Measurement results show that the accelerator has a processing power of 409.6 GOPS and achieves 5.3 TOPS/W energy efficiency at 800 MHz.
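A brief sketch of the two ingredients named above, under illustrative sizes: a k x k circulant block is represented by a single length-k vector (hence the $O(k)$ storage), and weights are quantized to signed powers of two. This shows the arithmetic only; the paper's B-MV hardware module realizes the products as shift-and-add operations rather than the FFT used here for compactness.

```python
import numpy as np

def pow2_quantize(w):
    """Quantize weights to signed integer powers of two (illustrative)."""
    sign = np.sign(w)
    exp = np.round(np.log2(np.abs(w) + 1e-12)).astype(int)
    return sign * (2.0 ** exp)

def circulant_matvec(c, x):
    """y = C @ x for the circulant block whose first column is c,
    computed in O(k log k) via the FFT instead of O(k^2)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# A block-circulant FC layer stores one length-k vector per k x k block.
k = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(k)    # defining vector of one circulant block
x = rng.standard_normal(k)

c_q = pow2_quantize(c)        # multiplications become shifts in hardware
y = circulant_matvec(c_q, x)
```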


2018 ◽  
Vol 7 (4.24) ◽  
pp. 33 ◽  
Author(s):  
Devendra Reddy Rachapalli ◽  
Hemantha Kumar Kalluri

This article presents hierarchical fusion models for multi-biometric systems with an improved recognition rate. Isolated texture regions are used to encode spatial variations from the composite biometric image, which is generated by a signal-level fusion scheme. In this paper, the prominent issues of existing multi-biometric systems, namely fusion methodology, storage complexity, reliability, and template security, are discussed. A wavelet-decomposition-driven multi-resolution approach is used to generate the composite images. Texture feature metrics are extracted from multi-level texture regions, and principal component analysis is applied to select potentially useful texture values during template creation. Through consistency- and correlation-driven hierarchical feature selection, both the inter-class similarity and intra-class variance problems can be addressed. Finally, t-normalized feature-level fusion is incorporated as a last stage to create the most reliable template for the identification process. To ensure security and add robustness against spoofing attacks, random-key-driven permutations are used to encrypt the generated multi-biometric templates before storing them in a database. Our experimental results show that the proposed hierarchical fusion and feature selection approach can embed fine-grained information about the input multimodal biometric images with the least complex identification process.
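One common way such a composite image can be formed at the signal level is wavelet-domain fusion; the sketch below (using PyWavelets) averages the approximation coefficients and keeps the stronger detail coefficients before reconstructing. The wavelet choice and fusion rules are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_images(img_a, img_b, wavelet="haar"):
    """Signal-level fusion of two equally sized biometric images:
    average the approximation bands, keep the larger-magnitude detail
    coefficients, then reconstruct the composite image."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    cA = 0.5 * (cA_a + cA_b)
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    details = (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))
    return pywt.idwt2((cA, details), wavelet)
```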


Author(s):  
Zhi Liu ◽  
Jing Zhang ◽  
Mengmeng Zhang ◽  
Ang Zhu ◽  
Changzhi An

With the rapid development of electronic and multimedia technologies, screen content is widely used in video-related applications. However, hash-based block matching, one of the important coding tools designed to improve the coding efficiency of screen content video, is limited by the constrained set of block shapes it supports. In this paper, based on an analysis of the time and storage complexity of adding nonsquare blocks to the latest block matching scheme, an improved block matching scheme is proposed that introduces two kinds of nonsquare blocks to the coding tool, improving coding efficiency while trading off efficiency against complexity. Compared with the latest HM-16.9+SCM-8.0, the proposed scheme achieves 1.47% and 0.94% bitrate savings in the low delay and random access configurations, respectively, at the cost of a negligible increase in encoding time for all text test sequences.
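For intuition, here is a minimal sketch of the exact-match hash-table idea behind hash-based block matching, extended to nonsquare shapes. The hash function, the particular block shapes, and the non-overlapping placement are illustrative assumptions, not the SCM coder's actual design.

```python
import numpy as np
from hashlib import blake2b

def block_hash(frame, y, x, h, w):
    """Hash the h x w block at (y, x); exact-match lookups then cost O(1)."""
    return blake2b(np.ascontiguousarray(frame[y:y+h, x:x+w]).tobytes(),
                   digest_size=8).hexdigest()

def build_hash_table(frame, shapes=((8, 8), (16, 16), (16, 8), (8, 16))):
    """Index every aligned block of each shape. Adding nonsquare shapes
    (16x8 and 8x16 here, chosen for illustration) enlarges both the
    table (storage) and the hashing work (time)."""
    table = {}
    H, W = frame.shape
    for h, w in shapes:
        for y in range(0, H - h + 1, h):
            for x in range(0, W - w + 1, w):
                key = block_hash(frame, y, x, h, w)
                table.setdefault(key, []).append((y, x, h, w))
    return table
```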


2016 ◽  
Vol 26 (1) ◽  
pp. 175-189 ◽  
Author(s):  
Pawel Trajdos ◽  
Marek Kurzynski

Abstract: Nowadays, multiclassifier systems (MCSs) are widely applied in various machine learning problems and in many different domains. Over the last two decades, a variety of ensemble systems have been developed, but there is still room for improvement. This paper focuses on developing competence and interclass cross-competence measures which can be applied as a method for classifier combination. The cross-competence measure allows an ensemble to harness pieces of information obtained from incompetent classifiers instead of removing them from the ensemble. The cross-competence measure, originally determined on the basis of a validation set (static mode), can easily be updated using additional feedback information on correct/incorrect classification during the recognition process (dynamic mode). An analysis of the computational and storage complexity of the proposed method is presented. The performance of the MCS with the proposed cross-competence function was experimentally compared against five reference MCSs in the static mode and one reference MCS in the dynamic mode. Results for the static mode show that the proposed technique is comparable with the reference methods in terms of classification accuracy. For the dynamic mode, the developed system achieves the highest classification accuracy, demonstrating the potential of the MCS for practical applications when feedback information is available.
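As a simplified illustration of competence-based combination (not the paper's cross-competence function), the sketch below scores each base classifier on a validation set and weights its probability outputs accordingly.

```python
import numpy as np

def competence(probs_val, y_val):
    """Per-classifier competence: the mean probability each classifier
    assigns to the true class on a validation set (an illustrative
    measure, not the paper's cross-competence function)."""
    return np.array([p[np.arange(len(y_val)), y_val].mean() for p in probs_val])

def combine(probs_test, comp):
    """Competence-weighted soft voting over the base classifiers' outputs."""
    weighted = sum(c * p for c, p in zip(comp, probs_test))
    return np.argmax(weighted, axis=1)

# probs_val / probs_test: lists of (n_samples, n_classes) probability arrays,
# one per base classifier; y_val: integer validation labels.
```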


2016 ◽  
Vol 49 (12) ◽  
pp. 237-242 ◽  
Author(s):  
Joey Fung ◽  
Yakov Zinder ◽  
Gaurav Singh
