Computing Schemes for Longitudinal Train Dynamics: Sequential, Parallel and Hybrid

Author(s):  
Qing Wu ◽  
Colin Cole

Conventionally, force elements in longitudinal train dynamics (LTD) are determined sequentially. In fact, all of these force elements are independent of each other, i.e., the determination of each one does not require inputs from the others. This independence makes LTD well suited to parallel computing. A parallel scheme was proposed and compared with the conventional sequential scheme with regard to computational efficiency. Testing showed that the parallel scheme is not suitable for LTD; its computing time is about 165% of that of the sequential scheme on a four-CPU personal computer (PC). A modified parallel scheme, named the hybrid scheme, was then proposed. The computing time of the hybrid scheme is only 70% of that of the sequential scheme. A further advantage of the hybrid scheme is that only two processors are required, which means it can be implemented on PCs.
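As an illustration only (not the authors' implementation, and with all names hypothetical), the sketch below contrasts a sequential loop over independent force elements with a naive process-pool evaluation of the same elements. For cheap per-element force laws, the inter-process communication overhead can easily make the parallel variant slower than the sequential loop, which is consistent with the 165% figure reported above.

    # Illustrative sketch: sequential vs. naive parallel evaluation of the
    # independent force elements in one LTD time step (hypothetical names).
    from multiprocessing import Pool

    def coupler_force(state):
        """Placeholder force element; it depends only on its own local state."""
        rel_disp, rel_vel, stiffness, damping = state
        return stiffness * rel_disp + damping * rel_vel

    def sequential_step(states):
        # Conventional scheme: evaluate every force element one after another.
        return [coupler_force(s) for s in states]

    def parallel_step(pool, states):
        # Parallel scheme: farm the independent evaluations out to worker
        # processes. For inexpensive force laws the pickling/IPC overhead
        # dominates, so this can be slower than the sequential loop.
        return pool.map(coupler_force, states)

    if __name__ == "__main__":
        states = [(0.01 * i, 0.001 * i, 2.5e7, 5.0e4) for i in range(200)]
        with Pool(processes=4) as pool:
            assert sequential_step(states) == parallel_step(pool, states)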

Author(s):  
Qing Wu ◽  
Colin Cole ◽  
Maksym Spiryagin

Due to the high computing demand of whole-trip train dynamics simulations and the iterative nature of optimization, whole-trip train dynamics optimizations using sequential computing schemes are practically impossible. This paper reports advancements in whole-trip train dynamics optimization enabled by parallel computing. A parallel computing scheme for whole-trip train dynamics optimizations is presented and discussed. Two case studies, using parallel multiobjective particle swarm optimization (pMOPSO) and a parallel multiobjective genetic algorithm (pMOGA) respectively, were performed to optimize a friction draft gear design. Linear speed-up was achieved by using parallel computing, cutting the computing time from 18 months to just 11 days. The optimized results from pMOPSO and pMOGA were in agreement with each other; Pareto fronts were identified to provide technical evidence for railway manufacturers and operators.
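A minimal sketch, assuming Python and a hypothetical simulate_whole_trip() stand-in for the whole-trip simulation, of the step that makes such optimizations tractable: the candidate designs of each generation are evaluated concurrently, so the wall-clock time scales down with the number of workers. This is illustrative only, not the authors' pMOPSO/pMOGA code.

    # Sketch of parallel objective evaluation for a population-based
    # multi-objective optimizer; the simulation and design variables are
    # hypothetical placeholders.
    from concurrent.futures import ProcessPoolExecutor

    def simulate_whole_trip(design):
        """Stand-in for one whole-trip LTD simulation of a draft gear design.
        Returns two objectives to be minimized."""
        friction_coeff, spring_rate = design
        max_intrain_force = 1.0 / (0.1 + friction_coeff) + 1e-8 * spring_rate
        negative_energy_dissipated = -(0.5 * friction_coeff + 1e-9 * spring_rate)
        return (max_intrain_force, negative_energy_dissipated)

    def evaluate_population(population, workers=8):
        # Each candidate is simulated independently, so a whole generation can
        # be spread across processes (or cluster nodes) in one map call.
        with ProcessPoolExecutor(max_workers=workers) as ex:
            return list(ex.map(simulate_whole_trip, population))

    if __name__ == "__main__":
        population = [(0.2 + 0.01 * i, 2.0e6 + 1.0e4 * i) for i in range(32)]
        print(evaluate_population(population)[:3])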


Author(s):  
D.R. Rasmussen ◽  
N.-H. Cho ◽  
C.B. Carter

Domains can exist in GaAs that are related to one another by inversion symmetry, i.e., the gallium and arsenic sites in one domain are interchanged in the other. The boundary between two such domains is known as an antiphase boundary (APB) [1]. In the terminology used to describe grain boundaries, the grains on either side of this boundary can be regarded as being Σ=1-related. For the {110} interface plane in particular, there are equal numbers of Ga-Ga and As-As anti-site bonds across the interface. The equilibrium distance between two atoms of the same kind across the boundary is expected to differ from the length of normal Ga-As bonds in the bulk. Therefore, the relative positions of the grains on either side of an APB may be translated so that the boundary reaches a lower-energy configuration. Such a translation does not affect the perfect Σ=1 coincidence-site relationship. A lattice translation of this kind is expected for all high-angle grain boundaries as a means of relaxing the boundary structure.


Author(s):  
Y. Ishida ◽  
H. Ishida ◽  
K. Kohra ◽  
H. Ichinose

Introduction: A simple and accurate technique for determining the Burgers vector of a dislocation has become feasible with the advent of HVEM. The conventional image-vanishing technique (1), which uses Bragg conditions with the diffraction vector perpendicular to the Burgers vector, suffers from several drawbacks: the dislocation image appears even when the g.b = 0 criterion is satisfied if the edge component of the dislocation is large, while the image disappears for certain high-order diffractions even when g.b ≠ 0. Furthermore, the magnitude of the Burgers vector is not easily determined with this criterion. Recent image-simulation techniques are free from these ambiguities but require too many parameters for the computation. The weak-beam “fringe counting” technique investigated in the present study is immune to these problems. Even the magnitude of the Burgers vector can be determined from the number of thickness fringes terminating where the dislocation exits at the surfaces of a wedge-shaped foil.
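For orientation, the criteria referred to above can be summarized in their standard textbook form (not quoted from this paper): a dislocation with Burgers vector b and line direction u is completely invisible in a two-beam image with diffraction vector g only if

    \[
    \mathbf{g}\cdot\mathbf{b} = 0
    \quad\text{and}\quad
    \mathbf{g}\cdot(\mathbf{b}\times\mathbf{u}) = 0 ,
    \]

which is why a dislocation with a large edge component can still show residual contrast when g.b = 0 alone is satisfied. In the fringe-counting approach, the integer

    \[
    n = \mathbf{g}\cdot\mathbf{b}
    \]

is read off as the number of thickness fringes terminating where the dislocation exits the wedge-shaped foil, giving the magnitude of the Burgers vector directly.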


Author(s):  
Stuart McKernan

For many years, quantitative diffraction-contrast experiments might have consisted largely of determining dislocation Burgers vectors using a g.b = 0 criterion from several different two-beam images. Since the advent of the personal computer revolution, the computing power available for image-processing and image-simulation calculations has become enormous and ubiquitous. Several programs now exist to simulate diffraction-contrast images using various approximations. The most common approximations are the use of only two beams or a single systematic row to calculate the image contrast, or the use of a column approximation. The growing body of literature comparing experimental and simulated images shows that very close agreement between the two can be obtained, provided the choice of parameters and the assumptions made in performing the calculation are properly dealt with. The simulation of images of defects in materials has therefore, in many cases, become a tractable problem.
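For context, the two-beam, column-approximation calculation mentioned above is commonly based on the Howie-Whelan equations; the form below is the standard textbook version (neglecting absorption), quoted here for orientation rather than taken from this paper. The defect enters only through its displacement field R within each column:

    \[
    \frac{d\phi_0}{dz} = \frac{i\pi}{\xi_0}\,\phi_0
      + \frac{i\pi}{\xi_g}\,\phi_g\,
        \exp\!\bigl[\,2\pi i\,(s z + \mathbf{g}\cdot\mathbf{R})\bigr],
    \]
    \[
    \frac{d\phi_g}{dz} = \frac{i\pi}{\xi_g}\,\phi_0\,
        \exp\!\bigl[-2\pi i\,(s z + \mathbf{g}\cdot\mathbf{R})\bigr]
      + \frac{i\pi}{\xi_0}\,\phi_g,
    \]

where φ0 and φg are the transmitted and diffracted amplitudes, ξ0 and ξg the extinction distances, s the deviation parameter, and the equations are integrated down each column of the foil independently (the column approximation).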


1962 ◽  
Vol 08 (03) ◽  
pp. 434-441 ◽  
Author(s):  
Edmond R Cole ◽  
Ewa Marciniak ◽  
Walter H Seegers

Summary: Two quantitative procedures for autoprothrombin C are described. In one, purified prothrombin is used as a substrate, and the activity of autoprothrombin C can be measured even if thrombin is present in the preparation. In this procedure a reaction mixture is used in which the thrombin titer that develops in 20 minutes is proportional to the autoprothrombin C in the mixture. A unit is defined as the amount that will generate 70 units of thrombin in the standardized reaction mixture. In the other method, thrombin interferes with the result; a standard bovine plasma sample is recalcified and the clotting time is noted. Autoprothrombin C shortens the clotting time, and the extent of this shortening is a quantitative measure of autoprothrombin C activity.
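As a purely arithmetic illustration of the unit definition above (hypothetical numbers, not taken from the paper), the autoprothrombin C content follows directly from the thrombin titer developed in the standardized 20-minute reaction mixture:

    # 1 unit of autoprothrombin C generates 70 units of thrombin in the
    # standardized 20-minute reaction mixture (definition stated above).
    THROMBIN_UNITS_PER_AC_UNIT = 70.0

    def autoprothrombin_c_units(thrombin_titer_20min, dilution_factor=1.0):
        """Convert a measured 20-minute thrombin titer into autoprothrombin C
        units; dilution_factor is hypothetical bookkeeping for diluted samples."""
        return dilution_factor * thrombin_titer_20min / THROMBIN_UNITS_PER_AC_UNIT

    # Example: a mixture developing 35 thrombin units in 20 minutes corresponds
    # to 0.5 units of autoprothrombin C.
    print(autoprothrombin_c_units(35.0))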


1983 ◽  
Vol 50 (02) ◽  
pp. 563-566 ◽  
Author(s):  
P Hellstern ◽  
K Schilz ◽  
G von Blohn ◽  
E Wenzel

Summary: An assay for rapid measurement of factor XIII activity has been developed, based on the determination of the ammonium released during fibrin stabilization. Factor XIII was activated by thrombin and calcium, and the ammonium was measured with an ammonium-sensitive electrode. It was demonstrated that the assay procedure yields accurate and precise results and that factor XIII-catalyzed fibrin stabilization can be measured kinetically. The amount of ammonium released during the first 90 min of fibrin stabilization was found to be 7.8 ± 0.5 moles per mole of fibrinogen, which is in agreement with the findings of other authors. In 15 normal subjects and in 15 patients suffering from diseases with suspected factor XIII deficiency, there was a satisfactory correlation between the results obtained by the “ammonium-release” method, Bohn’s method, and the immunological assay (r1 = 0.65; r2 = 0.70; p < 0.01). In 3 of 5 patients with paraproteinemias, the factor XIII activity determined by the ammonium-release method was markedly lower than that estimated by the other methods. It could be shown that inhibitor mechanisms were responsible for these discrepancies.


Author(s):  
Amos Golan

In this chapter I provide additional rationalization for using the info-metrics framework. This time the justifications are in terms of the statistical, mathematical, and information-theoretic properties of the formalism. Specifically, in this chapter I discuss optimality, statistical and computational efficiency, sufficiency, the concentration theorem, the conditional limit theorem, and the concept of information compression. These properties, together with the other properties and measures developed in earlier chapters, provide logical, mathematical, and statistical justifications for employing the info-metrics framework.


Genetics ◽  
2001 ◽  
Vol 157 (3) ◽  
pp. 1387-1395 ◽  
Author(s):  
Sudhir Kumar ◽  
Sudhindra R Gadagkar ◽  
Alan Filipski ◽  
Xun Gu

Abstract: Genomic divergence between species can be quantified in terms of the number of chromosomal rearrangements that have occurred in the respective genomes following their divergence from a common ancestor. These rearrangements disrupt the structural similarity between genomes, with each rearrangement producing additional, albeit shorter, conserved segments. Here we propose a simple statistical approach, based on the distribution of the number of markers in contiguous sets of autosomal markers (CSAMs), to estimate the number of conserved segments. CSAM identification requires information on the relative locations of orthologous markers in one genome and only the chromosome number on which each marker resides in the other genome. We propose a simple mathematical model that can account for the effect of the nonuniformity of breakpoints and markers on the observed distribution of the number of markers in different conserved segments. Computer simulations show that the number of CSAMs increases linearly with the number of chromosomal rearrangements under a variety of conditions. Using the CSAM approach, the estimated number of conserved segments between the human and mouse genomes is 529 ± 84, with a mean conserved segment length of 2.8 cM. This length is <40% of that currently accepted for the human and mouse genomes. This means that the mouse and human genomes have diverged at a rate of ∼1.15 rearrangements per million years. By contrast, mouse and rat are diverging at a rate of only ∼0.74 rearrangements per million years.
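A minimal sketch of one illustrative reading of the CSAM definition above, assuming a CSAM is a maximal run of markers that are consecutive along a genome-A chromosome and whose orthologs all lie on the same chromosome of genome B; the names are hypothetical and this is not the authors' code.

    # Sketch of CSAM counting: markers are given in map order along one
    # genome-A chromosome, each annotated with the genome-B chromosome that
    # carries its ortholog. Apply the function separately to each genome-A
    # chromosome and sum the counts.
    def count_csams(ortholog_chromosomes):
        """Count contiguous sets of autosomal markers (CSAMs): a new CSAM
        starts whenever the genome-B chromosome changes along the map."""
        csams = 0
        previous = None
        for chrom in ortholog_chromosomes:
            if chrom != previous:
                csams += 1
                previous = chrom
        return csams

    # Example: orthologs on mouse chromosomes 2,2,2,11,11,2,2,17 give 4 CSAMs
    # (the return to chromosome 2 opens a new segment).
    print(count_csams([2, 2, 2, 11, 11, 2, 2, 17]))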


1975 ◽  
Vol 68 ◽  
pp. 239-241 ◽  
Author(s):  
John C. Brown ◽  
H. F. Van Beek

Summary: The importance and difficulties of determining the height of hard X-ray sources in the solar atmosphere, in order to distinguish between source models, have been discussed by Brown and McClymont (1974) and also in this Symposium (Brown, 1975; Datlowe, 1975). Theoretical predictions of this height, h, range up to 10⁵ km above the photosphere for different models (Brown and McClymont, 1974; McClymont and Brown, 1974). Equally diverse values have been inferred from observations of synchronous chromospheric EUV bursts (Kane and Donnelly, 1971) on the one hand and from apparently behind-the-limb events (e.g. Datlowe, 1975) on the other.


1973 ◽  
Vol 56 (6) ◽  
pp. 1475-1479 ◽  
Author(s):  
Ugo R Cieri

Abstract: Sulfaquinoxaline is determined by its UV absorbance at about 358 nm, where the other three sulfonamides do not absorb. Sulfathiazole, sulfamerazine, and sulfamethazine are determined by a quantitative TLC procedure based on the separation of the compounds on silica gel plates; the spots are extracted and the centrifuged extracts are analyzed spectrophotometrically. A method of calculating the total sulfonamide content, independent of the individual components, is also introduced.

