Statistical analysis of Parallel Matrix Multiplication in SIMD model using ‘p’, ‘p2’, ‘p3’ processors with different interconnection networks

Author(s):  
Sunil Kumar Panigrahi ◽  
Soubhik Chakraborty ◽  
Jibitesh Mishra


Author(s):  
E. A. Ashcroft ◽  
A. A. Faustini ◽  
R. Jaggannathan ◽  
W. W. Wadge

In Chapter 1, we saw how Lucid could be used to express solutions to standard problems such as sorting and matrix multiplication. One of the unique characteristics of Lucid is that it can be used not only as a programming language but also as a “composition” language. That is, instead of using Lucid to specify computations, it can be used to express how computation components (expressed in some other language) can be “glued” together to form a coherent application. By doing so, the resulting application can enjoy some of the practical benefits attributable to Lucid, such as high performance through exploitation of implicit parallelism and robustness through software fault tolerance. In this chapter, we discuss one such use of Lucid: as part of a hybrid language to construct parallel applications to be executed on conventional parallel computers. A conventional parallel computer consists either of a number of processors, each with local memory, interconnected by a network (as in distributed memory architectures), or of a number of processors that share memory, possibly through an interconnection network (as in shared memory architectures). The past decade has seen the advent of conventional parallel computers, starting with the Denelcor HEP, evolving to the CM-2 and Intel Hypercube, and further evolving to the CM-5, Intel Paragon, Cray T3D, and IBM SP-2. Even networks of workstations (or workstation clusters) are seen as low-cost (“poor man’s”) parallel computers. Programming conventional parallel computers has proven to be far more challenging than had been expected. Part of the reason is the continued use of low-level, explicitly parallel programming models such as PVM [42] and Linda [10]. Two factors have fueled the continuing use of such languages despite their limited success:
1. The need to reuse existing sequential code, because the cost of rewriting legacy applications from scratch is considered prohibitive in both economic and technical terms.
2. The need to run on conventional parallel computers that view a “parallel program” at a low level, as consisting of sequential processes that frequently synchronize and communicate with each other using some form of message passing.
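The following is a minimal, hypothetical Python sketch of the “composition” idea described above: pre-existing sequential components are glued together by a thin coordination layer that exposes the parallelism between independent calls. It is not Lucid or the hybrid language discussed in the chapter, and the component functions are invented placeholders.

```python
# Illustrative sketch only: a Python analogue of the "composition" idea, in which
# pre-existing sequential components are coordinated by a glue layer that runs
# independent calls in parallel. The component functions are hypothetical.
from concurrent.futures import ProcessPoolExecutor


def preprocess(chunk):
    # Hypothetical sequential component written "in some other language" in the text.
    return sorted(chunk)


def combine(parts):
    # Hypothetical sequential component that merges the partial results.
    merged = []
    for part in parts:
        merged.extend(part)
    return sorted(merged)


def composed_application(data, workers=4):
    """Glue layer: splits the work and runs the sequential components in parallel."""
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # The independent calls to preprocess() run concurrently.
        parts = list(pool.map(preprocess, chunks))
    return combine(parts)


if __name__ == "__main__":
    print(composed_application([5, 3, 8, 1, 9, 2, 7, 4]))
```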


2005 ◽  
Vol 06 (04) ◽  
pp. 417-433
Author(s):  
Srabani Mukhopadhyaya ◽  
Bhabani P. Sinha

Generalized Hypercube-Connected Cycles (GHCC) is a challenging interconnection network proposed earlier in the literature. In this paper, we discuss how some important and useful algorithms, such as matrix transpose, matrix multiplication and sorting, can be implemented efficiently on GHCC. Matrix transpose and matrix-by-matrix multiplication of matrices of order n × n, [Formula: see text], take O(l) and [Formula: see text] time, respectively, on GHCC(l,m), with lm^l processors. Using the same number of processors, a list of m^l numbers can be sorted in O(l^2 log_3 m) time.
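As an illustration of the kind of data-parallel matrix computation whose cost is analysed above, the following hedged Python sketch distributes an n × n matrix multiplication over a pool of worker processes by rows. It does not model the GHCC(l,m) topology or its routing; the row-block chunking and worker count are assumptions made only for this example.

```python
# Minimal sketch of distributing an n x n matrix multiplication over worker
# processes by rows. Purely illustrative; it does not reproduce the GHCC
# algorithm or its interconnection structure.
from multiprocessing import Pool

import numpy as np


def multiply_rows(args):
    """Each worker multiplies its block of rows of A by the full matrix B."""
    a_block, b = args
    return a_block @ b


def parallel_matmul(a, b, workers=4):
    row_blocks = np.array_split(a, workers, axis=0)
    with Pool(workers) as pool:
        blocks = pool.map(multiply_rows, [(block, b) for block in row_blocks])
    return np.vstack(blocks)


if __name__ == "__main__":
    n = 64
    a, b = np.random.rand(n, n), np.random.rand(n, n)
    # Sanity check against the sequential product.
    assert np.allclose(parallel_matmul(a, b), a @ b)
    print("parallel result matches sequential result")
```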


1966 ◽  
Vol 24 ◽  
pp. 188-189
Author(s):  
T. J. Deeming

If we make a set of measurements, such as narrow-band or multicolour photo-electric measurements, which are designed to improve a scheme of classification, and in particular if they are designed to extend the number of dimensions of classification, i.e. the number of classification parameters, then some important problems of analytical procedure arise. First, it is important not to reproduce the errors of the classification scheme which we are trying to improve. Second, when trying to extend the number of dimensions of classification we have little or nothing with which to test the validity of the new parameters.

Problems similar to these have occurred in other areas of scientific research (notably psychology and education) and the branch of Statistics called Multivariate Analysis has been developed to deal with them. The techniques of this subject are largely unknown to astronomers, but, if carefully applied, they should at the very least ensure that the astronomer gets the maximum amount of information out of his data and does not waste his time looking for information which is not there. More optimistically, these techniques are potentially capable of indicating the number of classification parameters necessary and giving specific formulas for computing them, as well as pinpointing those particular measurements which are most crucial for determining the classification parameters.
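As a concrete illustration of one multivariate technique of the kind described above, the sketch below applies principal component analysis to a simulated table of multicolour measurements to suggest how many independent classification parameters the data support. The simulated measurement matrix and the choice of PCA are assumptions for illustration only, not the specific method of the paper.

```python
# Hedged sketch: principal component analysis of a simulated photometric table,
# used to indicate how many independent classification parameters the data
# support and the linear formulas for computing them. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)

# 200 stars measured in 6 bands; the simulation has only 2 underlying parameters.
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 6))
measurements = latent @ loadings + 0.05 * rng.normal(size=(200, 6))

# Principal components of the standardized measurements.
standardized = (measurements - measurements.mean(axis=0)) / measurements.std(axis=0)
_, singular_values, components = np.linalg.svd(standardized, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

print("fraction of variance per component:", np.round(explained, 3))
# Components with negligible variance carry no extra classification information;
# the rows below give linear combinations (formulas) for the retained parameters.
print(np.round(components[:2], 2))
```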


Author(s):  
Gianluigi Botton ◽  
Gilles L'espérance

As interest in parallel EELS spectrum imaging grows in laboratories equipped with commercial spectrometers, different approaches have been used in recent years by a few research groups in the development of the technique of spectrum imaging, as reported in the literature. Spectrum images can now be obtained either by controlling both the microscope and the spectrometer with a personal computer, or by using more powerful workstations interfaced to conventional multichannel analysers, with commercially available programs to control the microscope and the spectrometer. Work on the limits of the technique in terms of quantitative performance was, however, reported by the present author, where a systematic study of artifacts, detection limits, and statistical errors as a function of the desired spatial resolution and the range of chemical elements to be studied in a map was carried out. The aim of the present paper is to show an application of quantitative parallel EELS spectrum imaging in which statistical analysis is performed at each pixel, interpretation is carried out using criteria established from the statistical analysis, and variations in composition are analyzed with the help of information retrieved from t/λ maps so that artifacts are avoided.
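The sketch below is a hedged illustration of per-pixel statistical analysis on a spectrum image: for every pixel a (simulated) edge signal is integrated, a background estimate is subtracted, and the pixel is flagged against a simple 3-sigma detection criterion. The data cube, energy windows, and criterion are hypothetical stand-ins, not the quantification procedure of the paper.

```python
# Hedged illustration of per-pixel statistics on a simulated spectrum image:
# integrate a post-edge window, subtract a scaled pre-edge background, and
# flag pixels passing a simple 3-sigma criterion. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

# Simulated spectrum image: 32 x 32 pixels, 512 energy channels.
cube = rng.poisson(lam=50.0, size=(32, 32, 512)).astype(float)
cube[8:24, 8:24, 300:340] += 40.0  # embed a weak "edge" in one region

# Pre-edge window (50 channels) scaled to the width of the post-edge window (40 channels).
background = cube[:, :, 250:300].sum(axis=2) * (40 / 50)
signal = cube[:, :, 300:340].sum(axis=2) - background
noise = np.sqrt(cube[:, :, 300:340].sum(axis=2) + background)

detected = signal > 3.0 * noise  # per-pixel 3-sigma detection map
print("pixels above detection limit:", int(detected.sum()))
```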


2001 ◽  
Vol 6 (3) ◽  
pp. 187-193 ◽  
Author(s):  
John R. Nesselroade

A focus on the study of development and other kinds of change in the whole individual has been one of the hallmarks of research by Magnusson and his colleagues. A number of different approaches emphasize this individual focus in their respective ways. This presentation focuses on intraindividual variability, building on Cattell's P-technique factor analytic proposals with several refinements that make the approach more tractable from a research design standpoint and more appropriate from a statistical analysis perspective. The associated methods make it possible to study intraindividual variability both within and between individuals. An empirical example is used to illustrate the procedure.
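The following hedged sketch illustrates the basic P-technique idea on simulated data: a factor analysis of one individual's repeated multivariate measurements across many occasions, so that the extracted factors describe intraindividual variability. The data, the two-factor choice, and the use of scikit-learn's FactorAnalysis are assumptions for illustration, not the refined procedure proposed in the article.

```python
# Hedged sketch of P-technique factor analysis: one person's occasion-by-variable
# data matrix is factor analysed, so the factors describe within-person variation.
# The data are simulated and the modelling choices are illustrative only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# One individual measured on 6 variables across 100 occasions, driven by 2 latent states.
states = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 6))
occasions = states @ loadings + 0.3 * rng.normal(size=(100, 6))

fa = FactorAnalysis(n_components=2, random_state=0).fit(occasions)
print("estimated loadings (variables x factors):")
print(np.round(fa.components_.T, 2))
```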

