An algorithm to generate high dense packing of particles with various shapes

2018, Vol. 219, pp. 05004
Author(s): Konrad Miśkiewicz, Robert Banasiak, Maciej Niedostatkiewicz, Krzysztof Grudzień, Laurent Babout

The Discrete Element Method (DEM) is one of the available numerical methods for computing the movement of particles in large-scale simulations. The method has frequently been applied to simulate grain and bulk materials as the main research subject. This paper describes a new method of generating a highly dense packing of a mixed material with two different particle shapes for use in DEM simulations. The initial packing is an important parameter to control because it influences the first few seconds after the simulation starts. When the material in a silo is loosely packed before the start, the particles settle downward under gravity. These changes between the start and the first few seconds of the simulation strongly affect the results at the end of a silo discharge process. It is therefore important to prepare a proper packing of the mixed material at the initial simulation time, so that the particles will not move under gravity. This step must be integrated into the simulation procedure so that the computer simulation can later be compared with experimental measurements of material discharge from a silo.
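As a rough illustration of the general idea (not the algorithm proposed in the paper), one common way to obtain a gravity-stable initial packing in DEM is to let loosely placed particles settle under gravity with damping and to accept the packing once the total kinetic energy drops below a tolerance. All names, parameters, and the simplified floor-only contact below are illustrative assumptions.

```python
import numpy as np

def total_kinetic_energy(masses, velocities):
    """Total kinetic energy of the particle system."""
    return 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))

def settle_packing(positions, velocities, masses, radii,
                   g=9.81, dt=1e-4, damping=0.98,
                   ke_tol=1e-8, max_steps=200_000):
    """Let particles fall onto a rigid floor (z = 0) until the packing is static.
    Particle-particle contacts are omitted here; a real DEM code resolves them too."""
    for step in range(max_steps):
        velocities[:, 2] -= g * dt               # gravity
        velocities *= damping                    # numerical damping
        positions += velocities * dt
        below = positions[:, 2] < radii          # crude floor contact
        positions[below, 2] = radii[below]
        velocities[below, 2] = np.maximum(velocities[below, 2], 0.0)
        if total_kinetic_energy(masses, velocities) < ke_tol:
            return positions, step               # particles are at rest
    return positions, max_steps
```

In this sketch the kinetic-energy check plays the role of the "particles will not move due to gravity" condition described in the abstract.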

SIMULATION, 2020, Vol. 96 (7), pp. 567-581
Author(s): John P Morrissey, Prabhat Totoo, Kevin J Hanley, Stefanos-Aldo Papanicolopulos, Jin Y Ooi, et al.

Regardless of its origin, in the near future the challenge will not be how to generate data, but rather how to manage big and highly distributed data to make it more easily handled and more accessible by users on their personal devices. VELaSSCo (Visualization for Extremely Large-Scale Scientific Computing) is a platform developed to provide new visual analysis methods for large-scale simulations serving the petabyte era. The platform adopts Big Data tools/architectures to enable in-situ processing for analytics of engineering and scientific data and hardware-accelerated interactive visualization. In large-scale simulations, the domain is partitioned across several thousand nodes, and the data (mesh and results) are stored on those nodes in a distributed manner. The VELaSSCo platform accesses this distributed information, processes the raw data, and returns the results to the users for local visualization by their specific visualization clients and tools. The global goal of VELaSSCo is to provide Big Data tools for the engineering and scientific community, in order to better manipulate simulations with billions of distributed records. The ability to easily handle large amounts of data will also enable larger, higher resolution simulations, which will allow the scientific and engineering communities to garner new knowledge from simulations previously considered too large to handle. This paper shows, by means of selected Discrete Element Method (DEM) simulation use cases, that the VELaSSCo platform facilitates distributed post-processing and visualization of large engineering datasets.


Author(s): Jian Tao, Werner Benger, Kelin Hu, Edwin Mathews, Marcel Ritter, et al.

SLEEP, 2021
Author(s): Dorothee Fischer, Elizabeth B Klerman, Andrew J K Phillips

Abstract
Study Objectives: Sleep regularity predicts many health-related outcomes. Currently, however, there is no systematic approach to measuring sleep regularity. Traditionally, metrics have assessed deviations in sleep patterns from an individual’s average. Traditional metrics include intra-individual standard deviation (StDev), Interdaily Stability (IS), and Social Jet Lag (SJL). Two metrics were recently proposed that instead measure variability between consecutive days: Composite Phase Deviation (CPD) and Sleep Regularity Index (SRI). Using large-scale simulations, we investigated the theoretical properties of these five metrics.
Methods: Multiple sleep-wake patterns were systematically simulated, including variability in daily sleep timing and/or duration. Average estimates and 95% confidence intervals were calculated for six scenarios that affect measurement of sleep regularity: ‘scrambling’ the order of days; daily vs. weekly variation; naps; awakenings; ‘all-nighters’; and length of study.
Results: SJL measured weekly but not daily changes. Scrambling did not affect StDev or IS, but did affect CPD and SRI; these metrics, therefore, measure sleep regularity on multi-day and day-to-day timescales, respectively. StDev and CPD did not capture sleep fragmentation. IS and SRI behaved similarly in response to naps and awakenings but differed markedly for all-nighters. StDev and IS required over a week of sleep-wake data for unbiased estimates, whereas CPD and SRI required larger sample sizes to detect group differences.
Conclusions: Deciding which sleep regularity metric is most appropriate for a given study depends on a combination of the type of data gathered, the study length and sample size, and which aspects of sleep regularity are most pertinent to the research question.
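For illustration, here is a minimal sketch of how two of these metrics can be computed. The SRI formula follows its published definition (percentage agreement of sleep/wake state at time points 24 hours apart, rescaled so 100 is perfectly regular and 0 is chance agreement); the array layout and function names are assumptions, not the authors' code.

```python
import numpy as np

def sleep_regularity_index(sleep):
    """SRI from a boolean (days x epochs-per-day) sleep/wake matrix.
    100 = identical pattern on consecutive days, 0 = chance-level agreement."""
    agreement = (sleep[:-1] == sleep[1:]).mean()  # same state 24 h apart
    return 200.0 * agreement - 100.0

def onset_stdev(onset_hours):
    """Intra-individual standard deviation (StDev) of daily sleep onset times, in hours."""
    return float(np.std(onset_hours, ddof=1))
```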


Algorithms, 2021, Vol. 14 (5), pp. 154
Author(s): Marcus Walldén, Masao Okita, Fumihiko Ino, Dimitris Drikakis, Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario. The data decompression time was sped up by 2× compared to using a single compression method uniformly.
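As a hedged sketch of the selection step only (the importance metrics, compressors, and thresholds used in the paper are not reproduced here), one can score each region with a cheap importance measure and spend more compression effort on low-importance regions, keeping regions of interest fast to decompress for interactive analysis:

```python
import zlib
import numpy as np

def importance(region):
    """Toy importance metric: local variance of the field values."""
    return float(np.var(region))

def compress_regions(regions, threshold):
    """Compress each region of a partitioned field.
    Low-importance regions get a slow, high-ratio setting; regions of
    interest get a fast setting so they remain cheap to decompress."""
    payloads = []
    for region in regions:
        level = 1 if importance(region) >= threshold else 9
        payloads.append(zlib.compress(region.astype(np.float32).tobytes(), level))
    return payloads
```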


2019, Vol. 16 (1)
Author(s): Włodzisław Duch, Dariusz Mikołajewski

Abstract Despite great progress in understanding the functions and structures of the central nervous system (CNS), the brain stem remains one of the least understood systems. We know that the brain stem acts as a decision station preparing the organism to act in a specific way, but such functions are rather difficult to model with sufficient precision to replicate experimental data, due to the scarcity of data and the complexity of large-scale simulations of brain stem structures. The approach proposed in this article retains some ideas of previous models and provides a more precise computational realization that enables qualitative interpretation of the functions played by different network states. Simulations are aimed primarily at investigating the general switching mechanisms which may be executed in brain stem neural networks, as well as at studying how these mechanisms depend on basic neural network features: ionic channels, accommodation, and the influence of noise.


Author(s): Eric Y. Hu, Jean-Marie C. Bouteiller, Dong Song, Michel Baudry, Theodore W. Berger
