CELLULAR AUTOMATON FOR THE FRACTURE OF ELASTIC MEDIA

1993 ◽  
Vol 04 (01) ◽  
pp. 127-136 ◽  
Author(s):  
PETER OSSADNIK

We study numerically the growth of a crack in an elastic medium under the influence of a travelling shock wave. We describe the implementation of a fast algorithm that is well suited to a data-parallel computer. Using large-scale simulations on the Connection Machine we generate cracks with more than 10,000 sites on a 1024 × 1024 lattice. We show that the resulting patterns are fractal, with a fractal dimension that depends on the chosen breaking criterion and varies between 1 and 2.
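
The abstract does not say how the fractal dimension is measured; box counting is one standard estimator for lattice patterns like these. A minimal sketch under that assumption, with all names our own:

```python
import numpy as np

def box_counting_dimension(lattice, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a binary crack pattern.

    lattice: 2-D boolean array, True where a site belongs to the crack.
    Returns the slope of log N(s) vs. log(1/s), the box-counting dimension.
    """
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, lattice.shape[0], s):
            for j in range(0, lattice.shape[1], s):
                if lattice[i:i + s, j:j + s].any():
                    n += 1  # count boxes of side s that touch the crack
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a straight crack across a 1024 x 1024 lattice has dimension ~1.
lattice = np.zeros((1024, 1024), dtype=bool)
lattice[512, :] = True
print(box_counting_dimension(lattice))  # close to 1.0
```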

1991 ◽  
Vol 02 (03) ◽  
pp. 719-733 ◽  
Author(s):  
R. BOURBONNAIS ◽  
H.J. HERRMANN ◽  
T. VICSEK

We present results of large-scale simulations on the Connection Machine (CM) on the scaling behavior of the Zhang model and its variants for the kinetics of self-affine interfaces with power-law noise. Details on implementing this problem on a massively parallel computer such as the CM are given. Our calculations for the case when the amplitude η of the noise has a distribution P(η) ~ η^(−1−µ) are in good agreement with earlier findings of non-universality for µ < 7. We present data which suggest that for µ ≥ 7 the model is in the universality class of Gaussian noise.
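
Noise with P(η) ~ η^(−1−µ) can be drawn by inverse-transform sampling: for the normalized density µ η^(−1−µ) on η ≥ 1, the CDF is F(η) = 1 − η^(−µ), so η = (1 − u)^(−1/µ) for u uniform on [0, 1). A minimal sketch of that step plus a schematic deposition update in the spirit of the Zhang model (not the authors' CM code; the update rule is a common textbook variant):

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_noise(mu, size):
    """Draw amplitudes eta with P(eta) ~ eta**(-1 - mu), eta >= 1,
    via inverse-transform sampling of the Pareto CDF."""
    u = rng.random(size)
    return (1.0 - u) ** (-1.0 / mu)

def zhang_step(h, mu):
    """One schematic growth step: each column grows by a power-law
    distributed amount, then takes the local maximum of its neighbours."""
    h = h + power_law_noise(mu, h.size)
    return np.maximum(h, np.maximum(np.roll(h, 1), np.roll(h, -1)))

h = np.zeros(1024)
for _ in range(100):
    h = zhang_step(h, mu=3.0)
print(h.std())  # interface width after 100 steps
```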


1991 ◽  
Vol 02 (01) ◽  
pp. 430-436 ◽  
Author(s):  
ELAINE S. ORAN ◽  
JAY P. BORIS

This paper describes model development and computations of multidimensional, highly compressible, time-dependent reacting flows on a Connection Machine (CM). We briefly discuss computational timings compared to a Cray Y-MP, optimal use of the available hardware and software, treatment of boundary conditions, and parallel solution of the terms representing chemical reactions. In addition, we show the practical use of the system for large-scale reacting and nonreacting flows.
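
Why the chemistry terms parallelize well: under operator splitting, the chemical source terms couple no grid cells, so each cell's rate equations can be integrated independently of its neighbours. A one-species toy sketch of such a split step (illustrative only; the paper's models are far more elaborate):

```python
import numpy as np

def advect(rho, u, dx, dt):
    """First-order upwind advection step (assumes u > 0)."""
    return rho - u * dt / dx * (rho - np.roll(rho, 1))

def react(rho, k, dt):
    """Chemistry step: d(rho)/dt = -k * rho in every cell, integrated
    exactly. Cells are independent, so this part is trivially data-parallel."""
    return rho * np.exp(-k * dt)

rho = np.ones(1024)        # density of a single reacting species
for _ in range(1000):      # split step: transport, then local chemistry
    rho = advect(rho, u=1.0, dx=0.01, dt=0.005)   # CFL number 0.5
    rho = react(rho, k=0.5, dt=0.005)
```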


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Lorenzo L. Pesce ◽  
Hyong C. Lee ◽  
Mark Hereld ◽  
Sid Visser ◽  
Rick L. Stevens ◽  
...  

Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat the disease. Addressing this problem directly via experiments is prohibitively complex; thus, we have been developing and studying medium-to-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling for a distributed-memory implementation was very good over the range studied, both in terms of network size (2,000 to 400,000 neurons) and processor pool size (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
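
Taking the reported figures at face value and assuming memory continues to scale roughly linearly with network size, as observed over the range studied, a back-of-the-envelope extrapolation supports the feasibility claim:

```python
# Reported: ~150 GB of RAM for a 400,000-neuron network.
gb_per_neuron = 150.0 / 400_000          # ~0.000375 GB, i.e. ~0.4 MB/neuron

for n in (1_000_000, 2_000_000, 5_000_000):
    # Assumes memory keeps scaling linearly with network size.
    print(f"{n:>9,} neurons -> ~{n * gb_per_neuron:,.0f} GB")
# 1,000,000 neurons -> ~375 GB: large, but within reach of current clusters.
```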


1992 ◽  
Vol 1 (2) ◽  
pp. 153-161 ◽  
Author(s):  
L.H. Yang ◽  
E.D. Brooks III ◽  
J. Belak

A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned to a geometric domain containing many subcells, and the storage for that domain is private to the processor. Within this scheme, interdomain (i.e., interprocessor) communication is minimized.
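
A minimal serial sketch of the linked-cell idea in 2-D (names are ours; the paper's PCP version additionally keeps each geometric domain's storage private to one processor):

```python
import numpy as np

def build_cells(pos, box, rcut):
    """Bin atoms into square cells of side >= rcut, so every neighbour of
    an atom within rcut lies in its own cell or one of the 8 adjacent ones."""
    ncell = max(1, int(box // rcut))
    size = box / ncell
    cells = {}
    for i, p in enumerate(pos):
        key = tuple((p // size).astype(int) % ncell)
        cells.setdefault(key, []).append(i)
    return cells, ncell

def neighbours(i, pos, cells, ncell, box, rcut):
    """All atoms within rcut of atom i, searching only adjacent cells."""
    size = box / ncell
    ci = tuple((pos[i] // size).astype(int) % ncell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            key = ((ci[0] + dx) % ncell, (ci[1] + dy) % ncell)
            for j in cells.get(key, []):
                if j == i:
                    continue
                d = pos[j] - pos[i]
                d -= box * np.round(d / box)   # minimum-image convention
                if (d * d).sum() < rcut * rcut:
                    out.append(j)
    return out

pos = np.random.default_rng(1).random((1000, 2)) * 10.0   # 2-D box of side 10
cells, ncell = build_cells(pos, box=10.0, rcut=1.0)
print(neighbours(0, pos, cells, ncell, box=10.0, rcut=1.0))
```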


1992 ◽  
Vol 291 ◽  
Author(s):  
Norman J. Wagner ◽  
Brad Lee Holian

Large-scale molecular dynamics simulations on a massively parallel computer are performed to investigate the mechanical behavior of two-dimensional materials. A model embedded-atom many-body potential is examined, corresponding to “ductile” materials. A parallel MD algorithm is developed to exploit the architecture of the Connection Machine, enabling simulations of more than 10^6 atoms. A model spallation experiment is performed on a 2-D triangular crystal with a well-defined nanocrystalline defect on the spall plane. The process of spallation is modelled as a uniform adiabatic expansion. The spall strength is shown to be proportional to the logarithm of the applied strain rate, and a dislocation dynamics model is used to explain the results. The simple model gives good predictions for the onset of spallation in the computer experiments. The nanocrystal defect affects the propagation of the shock front, and failure is enhanced along the grain boundary.
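
The reported scaling, spall strength proportional to the logarithm of the applied strain rate, amounts to a fit of the form sigma_spall = a + b * ln(strain rate). A minimal fit sketch with made-up numbers (not the paper's data):

```python
import numpy as np

# Hypothetical (strain rate, spall strength) pairs -- illustrative only.
rate = np.array([1e8, 1e9, 1e10, 1e11])   # 1/s
sigma = np.array([2.1, 2.8, 3.4, 4.1])    # arbitrary stress units

# sigma = a + b * ln(rate) is linear in ln(rate), so an ordinary
# least-squares fit on log-transformed rates recovers a and b.
b, a = np.polyfit(np.log(rate), sigma, 1)
print(f"sigma ~ {a:.2f} + {b:.2f} * ln(strain rate)")
```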


Author(s):  
Jian Tao ◽  
Werner Benger ◽  
Kelin Hu ◽  
Edwin Mathews ◽  
Marcel Ritter ◽  
...  

SLEEP ◽  
2021 ◽  
Author(s):  
Dorothee Fischer ◽  
Elizabeth B Klerman ◽  
Andrew J K Phillips

Study Objectives: Sleep regularity predicts many health-related outcomes. Currently, however, there is no systematic approach to measuring sleep regularity. Traditionally, metrics have assessed deviations in sleep patterns from an individual’s average. Traditional metrics include intra-individual standard deviation (StDev), Interdaily Stability (IS), and Social Jet Lag (SJL). Two metrics were recently proposed that instead measure variability between consecutive days: Composite Phase Deviation (CPD) and Sleep Regularity Index (SRI). Using large-scale simulations, we investigated the theoretical properties of these five metrics.

Methods: Multiple sleep-wake patterns were systematically simulated, including variability in daily sleep timing and/or duration. Average estimates and 95% confidence intervals were calculated for six scenarios that affect measurement of sleep regularity: ‘scrambling’ the order of days; daily vs. weekly variation; naps; awakenings; ‘all-nighters’; and length of study.

Results: SJL measured weekly but not daily changes. Scrambling did not affect StDev or IS, but did affect CPD and SRI; these metrics, therefore, measure sleep regularity on multi-day and day-to-day timescales, respectively. StDev and CPD did not capture sleep fragmentation. IS and SRI behaved similarly in response to naps and awakenings but differed markedly for all-nighters. StDev and IS required over a week of sleep-wake data for unbiased estimates, whereas CPD and SRI required larger sample sizes to detect group differences.

Conclusions: Deciding which sleep regularity metric is most appropriate for a given study depends on a combination of the type of data gathered, the study length and sample size, and which aspects of sleep regularity are most pertinent to the research question.
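
Of the five metrics, the SRI has perhaps the most direct computational definition: it rescales the probability of being in the same sleep/wake state at two time points 24 hours apart, so that 100 indicates a perfectly repeating schedule and 0 matches chance agreement. A hedged sketch from a binary sleep/wake record (epoch length and names are illustrative):

```python
import numpy as np

def sleep_regularity_index(state, epochs_per_day):
    """Sleep Regularity Index from a binary vector (1 = asleep, 0 = awake)
    sampled at a fixed epoch length over several days.

    SRI = 200 * P(same state 24 h apart) - 100, so a perfectly repeating
    schedule scores 100 and chance-level agreement scores 0.
    """
    a = np.asarray(state)
    same = a[:-epochs_per_day] == a[epochs_per_day:]   # pairs 24 h apart
    return 200.0 * same.mean() - 100.0

# Two identical days -> SRI = 100 (30-min epochs: 8 h sleep, 16 h wake).
day = np.array([1] * 16 + [0] * 32)
print(sleep_regularity_index(np.tile(day, 2), epochs_per_day=48))
```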


Author(s):  
Krzysztof Jurczuk ◽  
Marcin Czajkowski ◽  
Marek Kretowski

This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers: it searches for the tree structure and the tests simultaneously, and thus in many situations improves the prediction quality and size of the resulting classifiers. However, being population-based and iterative, it can be too computationally demanding to apply to big data mining directly. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems on large-scale data. For this purpose, we propose a novel multi-GPU approach. It combines the knowledge of global DT induction and evolutionary algorithm parallelization with efficient utilization of GPU memory and compute resources. The search for the tree structure and tests is performed on a CPU, while the fitness calculations are delegated to GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets; in both cases the obtained acceleration is very satisfactory. The solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of data characteristics (size and dimension) on the convergence and speedup of the evolutionary search is also shown. As the number of GPUs grows, nearly linear scalability is observed, which suggests that data-size boundaries for evolutionary DT mining are fading.
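
A minimal single-GPU sketch of the division of labour described above: the evolutionary loop and the tree live on the CPU side, while fitness (here simplified to classification accuracy) is evaluated over the whole dataset on the device. CuPy stands in for the paper's multi-GPU CUDA implementation, and all names are ours:

```python
import cupy as cp  # NumPy-like arrays resident on the GPU

def tree_predict(X, node):
    """Evaluate a small decision tree on all rows of X at once.
    node is (feature, threshold, left, right) or a leaf class label."""
    if not isinstance(node, tuple):
        return cp.full(X.shape[0], node)
    f, t, left, right = node
    go_left = X[:, f] < t
    return cp.where(go_left, tree_predict(X, left), tree_predict(X, right))

def fitness(X, y, tree):
    """Accuracy over the whole dataset, computed on the GPU; only this
    scalar travels back to the CPU, where the evolutionary search runs."""
    return float((tree_predict(X, tree) == y).mean())

X = cp.random.random((1_000_000, 10))
y = (X[:, 0] < 0.5).astype(cp.int32)
tree = (0, 0.5, 1, 0)   # split feature 0 at 0.5; leaves are class labels
print(fitness(X, y, tree))  # 1.0 for this toy labelling
```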

