Categorification of the Müller-Wichards System Performance Estimation Model: Model Symmetries, Invariants, and Closed Forms

Systems, 2019, Vol 7 (1), pp. 6
Author(s): Allen D. Parks, David J. Marchette

The Müller-Wichards model (MW) is an algebraic method that quantitatively estimates the performance of sequential and/or parallel computer applications. Because of category theory’s expressive power and mathematical precision, a category theoretic reformulation of MW, i.e., CMW, is presented in this paper. The CMW is effectively numerically equivalent to MW and can be used to estimate the performance of any system that can be represented as numerical sequences of arithmetic, data movement, and delay processes. The CMW fundamental symmetry group is introduced and CMW’s category theoretic formalism is used to facilitate the identification of associated model invariants. The formalism also yields a natural approach to dividing systems into subsystems in a manner that preserves performance. Closed form models are developed and studied statistically, and special case closed form models are used to abstractly quantify the effect of parallelization upon processing time vs. loading, as well as to establish a system performance stationary action principle.
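As a rough illustration of the kind of structure such a categorification can exploit (not the paper's actual construction), one may view processes as morphisms and performance as a composition-preserving assignment; the symbols below are assumptions introduced only for this sketch.

```latex
% Illustrative sketch only: processes as morphisms f, g in a category P,
% with a performance assignment T that respects sequential composition,
% so that splitting a system into subsystems preserves estimated performance.
\[
  T(g \circ f) \;=\; T(g) \oplus T(f),
  \qquad
  T(\mathrm{id}_A) \;=\; e,
\]
% where (\oplus, e) is the monoid in which performance values are combined
% (e.g., addition of processing times for purely sequential composition).
```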

2016, Vol 9 (3), pp. 123-136
Author(s): Bo-Qian Wang, Qi Yu, Xin Liu, Li Shen, Zhi-ying Wang

Author(s): Rachna Singh, Arvind Rajawat

FPGAs have been used as a target platform because they are increasingly attractive in system design, and rapid technological progress has made ever larger devices commercially affordable. These trends make FPGAs an alternative in application areas where extensive data processing plays an important role. Consequently, the need emerges for early performance estimation in order to quantify the FPGA approach. A mathematical model is presented that estimates the maximum number of LUTs consumed by the hardware synthesized for different FPGAs using LLVM. The motivation behind this research work is to design an area modeling approach for FPGA-based implementation at an early stage of design. The equation-based area estimation model permits immediate and accurate estimation of resources. Two criteria were used to judge the quality of the results: estimation accuracy and runtime. Experimental results show that the estimation error is in the range of 1.33% to 7.26% for Spartan-3E, 1.6% to 5.63% for Virtex-2 Pro, and 2.3% to 6.02% for Virtex-5.
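As a rough sketch of how an equation-based LUT estimate of this kind might be computed from LLVM-level operation counts (the operation categories and per-operation costs below are hypothetical placeholders, not the paper's fitted model):

```python
# Hypothetical sketch: linear, equation-based LUT estimate from LLVM IR
# operation counts. The per-operation LUT costs are illustrative placeholders,
# not the coefficients fitted in the paper.

# Assumed per-operation LUT cost for one FPGA family (placeholder values).
LUT_COST = {
    "add": 32,    # 32-bit adder
    "mul": 600,   # 32-bit multiplier mapped to LUTs
    "cmp": 16,    # comparator
    "load": 8,    # memory-interface glue logic
    "store": 8,
}

def estimate_luts(op_counts: dict[str, int], overhead: int = 200) -> int:
    """Estimate total LUTs as a weighted sum of operation counts plus a
    fixed control/FSM overhead term."""
    return overhead + sum(LUT_COST.get(op, 0) * n for op, n in op_counts.items())

if __name__ == "__main__":
    # Operation counts as they might be extracted from the LLVM IR of a kernel.
    counts = {"add": 12, "mul": 4, "cmp": 6, "load": 8, "store": 4}
    print(f"Estimated LUTs: {estimate_luts(counts)}")
```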


Author(s): Ahmed Kovacevic, Nikola Stosic, Elvedin Mujic, Ian K. Smith

When expansion and compression are performed together in a single oil-free machine of the twin-screw type, changes caused by differential expansion between the rotors and the casing occur in the high-pressure clearances of both the compressor and expander sections. These are more difficult to control than when the two functions are carried out in separate machines. The clearance changes affect both the performance and reliability of the machine but can be controlled by using different materials of construction for each section. Clearances, predicted by the assumption of linear expansion of the components, were included in a well-proven software package for performance estimation of both screw compressors and expanders, and the results were compared with experimental data. It was found that the clearance in the machine, being dependent on the temperature, could be estimated fairly accurately by matching the measured discharge temperature with the temperature obtained from the estimation model. A simple expansion analysis of the main machine clearances therefore appears to be an adequate design tool for these machines, allowing performance to be optimised while preventing the machine from seizing as a result of differential thermal expansion during operation.
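The assumption of linear expansion corresponds to the standard relation below; the clearance expression is an illustrative simplification, not the paper's full clearance model.

```latex
% Standard linear thermal expansion of a component of length L with
% expansion coefficient \alpha over a temperature rise \Delta T:
\[
  \Delta L = \alpha \, L \, \Delta T .
\]
% Illustrative simplification: the change in a rotor-casing clearance c is
% driven by the difference in expansion of the two parts,
\[
  \Delta c \;\approx\; \bigl(\alpha_{\mathrm{casing}} - \alpha_{\mathrm{rotor}}\bigr)\, L \, \Delta T ,
\]
% which is why choosing different materials for each section can keep the
% hot-end clearances within safe limits.
```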


2019
Author(s): Priyanka Ghosh, Sriram Krishnamoorthy, Ananth Kalyanaraman

Abstract: De novo genome assembly is a fundamental problem in bioinformatics that aims to assemble the DNA sequence of an unknown genome from numerous short DNA fragments (reads) obtained from it. With the advent of high-throughput sequencing technologies, billions of reads can be generated in a matter of hours, necessitating efficient parallelization of the assembly process. While multiple parallel solutions have been proposed in the past, conducting assembly at large scale remains challenging because of the inherent complexities associated with data movement and the irregular access footprints of memory and I/O operations. In this paper, we present a novel algorithm, called PaKman, to address the problem of performing large-scale genome assemblies on a distributed-memory parallel computer. Our approach focuses on improving performance through a combination of novel data structures and algorithmic strategies that reduce the communication and I/O footprint during the assembly process. PaKman presents a solution for the two most time-consuming phases in the full genome assembly pipeline, namely, k-mer counting and contig generation. A key aspect of our algorithm is its graph data structure, which comprises fat nodes (or what we call "macro-nodes") that reduce the communication burden during contig generation. We present an extensive performance and qualitative evaluation of our algorithm, including comparisons to other state-of-the-art parallel assemblers. Our results demonstrate the ability to achieve near-linear speedups on up to 8K cores (tested); outperform state-of-the-art distributed-memory and shared-memory tools in performance while delivering comparable (if not better) quality; and reduce time to solution significantly. For instance, PaKman is able to generate a high-quality set of assembled contigs for complex genomes such as the human and wheat genomes in a matter of minutes on 8K cores.
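As a minimal, single-node illustration of the k-mer counting phase mentioned above (not PaKman's distributed, macro-node-based implementation):

```python
# Minimal single-process k-mer counting sketch; PaKman's actual implementation
# distributes this work across ranks and uses specialized data structures.
from collections import Counter

def count_kmers(reads: list[str], k: int) -> Counter:
    """Count all length-k substrings (k-mers) across a collection of reads."""
    counts: Counter = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if "N" not in kmer:        # skip ambiguous bases
                counts[kmer] += 1
    return counts

if __name__ == "__main__":
    reads = ["ACGTACGTGACG", "CGTACGTGACGT"]
    for kmer, n in count_kmers(reads, k=4).most_common(5):
        print(kmer, n)
```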


2021
Author(s): Kehinde Lydia Ajayi, Victor Azeta, Isaac Odun-Ayo, Ambrose Azeta, Ajayi Peter Taiwo, ...

Abstract: Speech recognition, which aids the recognition of speech signals through computer applications, is an active research area. In this paper, an Acoustic Nudging (AN) model is used to reformulate persistent automatic speech recognition (ASR) errors that involve the user's acoustic irrational behavior, which alters speech recognition accuracy. A Gaussian Mixture Model (GMM) helped in addressing the low-resourced attribute of the Yorùbá language to achieve better accuracy and system performance. From the simulated results, it is observed that the proposed Acoustic Nudging-based Gaussian Mixture Model (ANGM) improves accuracy and system performance, evaluated by Word Recognition Rate (WRR) and Word Error Rate (WER) over validation, testing, and training accuracy. The mean WRR achieved by the ANGM model is 95.277% and the mean WER is 4.723%. Compared with other models, this approach reduces the error rate by 1.1%, 0.5%, 0.8%, 0.3%, and 1.4%, respectively. This work therefore establishes a foundation for advancing the current understanding of under-resourced languages while developing an accurate and precise model for speech recognition.
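For reference, WER as cited above is conventionally computed from the word-level edit distance between reference and hypothesis transcripts; the sketch below shows that standard definition, not the authors' evaluation code, and the sample strings are placeholders.

```python
# Standard word error rate (WER): Levenshtein distance over words between a
# reference transcript and an ASR hypothesis, divided by the reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    wer = word_error_rate("mo fe jeun", "mo fe je")
    print(f"WER = {wer:.3f}, WRR = {1 - wer:.3f}")  # WRR taken as 1 - WER
```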


2019, Vol 48 (3), pp. 454-463
Author(s): Zoran Peric, Milan Tancic, Nikola Simic, Vladimir Despotovic

In this paper, we propose a speech coding scheme based on simple transform coding and forward adaptive quantization for discrete input signal processing. A quasi-logarithmic quantizer is applied to discretize the continuous input signal, i.e., to prepare the discrete input. Forward adaptation based on the input signal variance provides more efficient bandwidth usage, whereas transform coding provides sub-sequences with more predictable signal characteristics that ensure higher quality of signal reconstruction at the receiving end. In order to provide additional compression, transform coding precedes adaptive quantization. As an objective measure of system performance we use the signal-to-quantization-noise ratio (SQNR). System performance is discussed for two typical cases. In the first case, we assume that information about the continuous signal variance is available, whereas in the second case system performance is estimated knowing only the variance of the discretized signal, which implies a loss of information about the input signal. The main goal of this performance comparison for the proposed speech signal coding model is to explore how objective the performance estimate is when information about the continuous source is unavailable, which is a common situation in digital systems.
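As a small numerical illustration of the SQNR figure of merit and a quasi-logarithmic (μ-law) quantizer of the general kind mentioned above (the μ value, bit depth, and test signal are illustrative assumptions, not the paper's setup):

```python
# Illustrative SQNR measurement for a mu-law (quasi-logarithmic) quantizer.
# Parameter choices and the synthetic test signal are placeholders.
import numpy as np

def mu_law_quantize(x: np.ndarray, mu: float = 255.0, bits: int = 8) -> np.ndarray:
    """Compress with mu-law, quantize uniformly to 2**bits levels, expand back."""
    xmax = np.max(np.abs(x))
    xn = x / xmax                                            # normalize to [-1, 1]
    compressed = np.sign(xn) * np.log1p(mu * np.abs(xn)) / np.log1p(mu)
    levels = 2 ** bits
    q = np.round((compressed + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    expanded = np.sign(q) * ((1 + mu) ** np.abs(q) - 1) / mu  # inverse mu-law
    return expanded * xmax

def sqnr_db(x: np.ndarray, xq: np.ndarray) -> float:
    """Signal-to-quantization-noise ratio in decibels."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech_like = rng.laplace(scale=0.1, size=10_000)  # speech-like amplitude distribution
    print(f"SQNR = {sqnr_db(speech_like, mu_law_quantize(speech_like)):.2f} dB")
```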

