A Task-Resource Mapping Algorithm for Large-Scale Batch-Mode Computational Marine Hydrodynamics Codes on Containerized Private Cloud

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 127943-127955 ◽  
Author(s):  
Yiyi Xu ◽  
Pengfei Liu ◽  
Irene Penesis ◽  
Guanghua He

2021 ◽  
Vol 1055 (1) ◽  
pp. 012096
Author(s):  
G K Kamalam ◽  
T Kalaiyarasi ◽  
S V Monaa ◽  
B Gurudharshini

1983 ◽  
Vol 38 ◽  
pp. 1-9
Author(s):  
Herbert F. Weisberg

We are now entering a new era of computing in political science. The first era was marked by punched-card technology. Initially, the most sophisticated analyses possible were frequency counts and tables produced on a counter-sorter, a machine that specialized in chewing up data cards. By the early 1960s, batch processing on large mainframe computers became the predominant mode of data analysis, with turnaround times of up to a week. By the late 1960s, turnaround time was cut to a matter of minutes, and OSIRIS and then SPSS (and more recently SAS) were developed as general-purpose data analysis packages for the social sciences. Even today, use of these packages in batch mode remains one of the most efficient means of performing large-scale data analysis.


1997 ◽  
Vol 52 (1) ◽  
pp. 110-116
Author(s):  
Michael Gerster ◽  
Martin Maier ◽  
Nils Clausen ◽  
Jens Schewitz ◽  
Ernst Bayer

Sulphurization is a crucial step during the synthesis of phosphorothioate oligonucleotides. Insufficient reaction leads to inhomogeneous products with phosphodiester defects and subsequently to destabilization of the oligomers in biological media. To achieve the maximum extent of sulphur incorporation, various sulphurizing agents were investigated; only the Beaucage reagent provided satisfactory results on PS-PEG supports. Based on our investigations in small-scale synthesis (1 μmol) using a continuous-flow technique, scale-up to the 0.1-0.25 mmol range was achieved using a peptide synthesizer. The syntheses were performed in batch mode with standard phosphoramidite chemistry. Additionally, large-scale synthesis of a phosphodiester oligonucleotide was carried out on PS-PEG with optimized protocols and compared to small-scale synthesis on different supports. Products were analysed by 31P NMR, capillary gel electrophoresis and electrospray mass spectrometry. An extent of sulphurization of 99% and coupling efficiencies of more than 99% were obtained, and the products proved to have purity similar to small-scale syntheses on CPG.


2012 ◽  
Author(s):  
Piotr J. Bandyk ◽  
Justin Freimuth ◽  
George Hazen

Object-oriented programming offers a natural approach to solving complex problems by focusing on individual aspects, or objects, and describing the ways in which they interact using interfaces. Modularity, extensibility, and code re-use often make OOP more appealing than its procedural counterpart. Code can be implemented in a more intuitive way and often mirrors the theory it derives from. Two examples are given in the form of real programs: a 3D panel code solver and a system-of-systems model for seabasing and environment sensing. Both are examples of large-scale frameworks and leverage the benefits offered by the object-oriented paradigm.
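
As a rough illustration of the object-oriented decomposition described above, the following Python sketch separates panel geometry, the influence model, and the solver behind small interfaces. It is not the authors' framework; the class names, the point-source influence formula, and the self-influence constant are simplified placeholders.

```python
# Hypothetical sketch of an object-oriented panel-code layout.
# Names (Panel, InfluenceModel, PanelSolver) are illustrative, not the paper's API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
import numpy as np


@dataclass
class Panel:
    """A single panel: corner points, collocation point, and outward normal."""
    corners: np.ndarray        # shape (4, 3)
    collocation: np.ndarray    # shape (3,)
    normal: np.ndarray         # shape (3,)


class InfluenceModel(ABC):
    """Interface: the potential induced at a point by a unit-strength panel."""
    @abstractmethod
    def influence(self, panel: Panel, point: np.ndarray) -> float: ...


class SourceInfluence(InfluenceModel):
    """Crude stand-in: treat each panel as a point source at its collocation point."""
    def influence(self, panel: Panel, point: np.ndarray) -> float:
        r = np.linalg.norm(point - panel.collocation) + 1e-12
        return -1.0 / (4.0 * np.pi * r)


class PanelSolver:
    """Assembles the influence-coefficient system A @ sigma = rhs and solves it."""
    def __init__(self, panels, model: InfluenceModel):
        self.panels, self.model = panels, model

    def solve(self, freestream: np.ndarray) -> np.ndarray:
        n = len(self.panels)
        A = np.empty((n, n))
        for i, pi in enumerate(self.panels):
            for j, pj in enumerate(self.panels):
                A[i, j] = self.model.influence(pj, pi.collocation)
        np.fill_diagonal(A, 0.5)                       # schematic self-influence term
        rhs = np.array([-freestream @ p.normal for p in self.panels])
        return np.linalg.solve(A, rhs)                 # panel source strengths


if __name__ == "__main__":
    # Tiny two-panel demo with made-up geometry.
    up = np.array([0.0, 0.0, 1.0])
    panels = [
        Panel(np.zeros((4, 3)), np.array([0.0, 0.0, 0.0]), up),
        Panel(np.zeros((4, 3)), np.array([1.0, 0.0, 0.0]), up),
    ]
    print(PanelSolver(panels, SourceInfluence()).solve(np.array([1.0, 0.0, 0.1])))
```

Swapping in a different InfluenceModel subclass changes the physics without touching the assembly or solution code, which is the sort of modularity and code re-use the abstract highlights.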


2019 ◽  
Vol 7 (2) ◽  
pp. 147-161 ◽  
Author(s):  
Maria L.A.D. Lestari ◽  
Rainer H. Müller ◽  
Jan P. Möschwitzer

Background: Miniaturization of nanosuspension preparation is a necessity in order to enable proper formulation screening before nanosizing is performed on a large scale. Ideally, the information generated at small scale is predictive for large-scale production. Objective: This study aimed to investigate scalability when producing nanosuspensions, starting from a 10 g scale prepared by low-energy wet ball milling and scaling up to 120 g and 2 kg of nanosuspension prepared by standard high-energy wet ball milling operated in batch mode or recirculation mode, respectively. Methods: Two different active pharmaceutical ingredients, curcumin and hesperetin, were used in this study. The investigated factors included milling time, milling speed, and the type of mill. Results: Comparable particle sizes of about 151 nm to 190 nm were obtained for both active pharmaceutical ingredients at the same milling time and milling speed when the drugs were processed at 10 g using low-energy wet ball milling or at 120 g using high-energy wet ball milling in batch mode, respectively. However, an adjustment of the milling speed was needed for the 2 kg scale produced using high-energy wet ball milling in recirculation mode to obtain particle sizes comparable to the small-scale process. Conclusion: These results confirm, in general, the scalability of wet ball milling as well as the suitability of small-scale processing for correctly identifying the most suitable formulations for large-scale production using high-energy milling.


Author(s):  
Chunyi Wu ◽  
Gaochao Xu ◽  
Yan Ding ◽  
Jia Zhao

Large-scale task processing based on cloud computing has become crucial to big data analysis and processing in recent years. Most previous work applies conventional methods and architectures designed for general-scale workloads to the handling of massive numbers of tasks, and is therefore limited by issues such as computing capability and data transmission. Motivated by this, a fat-tree structure-based approach called LTDR (Large-scale Tasks processing using Deep network model and Reinforcement learning) is proposed in this work. To explore the optimal task allocation scheme, a virtual network mapping algorithm based on a deep convolutional neural network and Q-learning is presented. After feature extraction, we design and implement a policy network to make node mapping decisions. The link mapping scheme is then obtained by the designed distributed value-function-based reinforcement learning model. Finally, tasks are allocated onto suitable physical nodes and processed efficiently. Experimental results show that LTDR can significantly improve the utilization of physical resources and long-term revenue while satisfying task requirements in big data scenarios.
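
The node-mapping step pairs a learned policy with reinforcement learning. The toy Python sketch below uses plain tabular Q-learning to assign tasks to physical nodes under a capacity constraint; the paper's actual LTDR approach uses a deep convolutional policy network and a distributed value-function model, and the state, reward, and capacity numbers here are invented purely for illustration.

```python
# Simplified tabular Q-learning sketch of mapping tasks onto physical nodes.
# LTDR itself uses deep networks; this toy version only shows the map-and-reward loop.
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_nodes = 6, 4
task_demand = rng.uniform(1.0, 3.0, size=n_tasks)   # CPU units each task needs (made up)
node_capacity = np.full(n_nodes, 6.0)                # CPU units per physical node (made up)

Q = np.zeros((n_tasks, n_nodes))                     # Q[task, node]
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(2000):
    remaining = node_capacity.copy()
    for t in range(n_tasks):
        # epsilon-greedy choice of a physical node for task t
        if rng.random() < eps:
            a = int(rng.integers(n_nodes))
        else:
            a = int(np.argmax(Q[t]))
        # reward: prefer nodes that still fit the task, penalize overload
        fits = remaining[a] >= task_demand[t]
        reward = float(remaining[a]) if fits else -10.0
        remaining[a] -= task_demand[t]
        next_best = Q[t + 1].max() if t + 1 < n_tasks else 0.0
        Q[t, a] += alpha * (reward + gamma * next_best - Q[t, a])

mapping = Q.argmax(axis=1)
print("task -> node:", dict(enumerate(mapping.tolist())))
```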


2011 ◽  
Vol 21 (03) ◽  
pp. 279-299 ◽  
Author(s):  
I-HSIN CHUNG ◽  
CHE-RUNG LEE ◽  
JIAZHENG ZHOU ◽  
YEH-CHING CHUNG

As high-performance computing systems scale up, mapping the tasks of a parallel application onto physical processors so as to allow efficient communication becomes one of the critical performance issues. Existing algorithms were usually designed to map applications with regular communication patterns, and their mapping criterion usually overlooks the size of communicated messages, which is the primary factor in communication time. In addition, most of their time complexities are too high to process large-scale problems. In this paper, we present a hierarchical mapping algorithm (HMA) that is capable of mapping applications with irregular communication patterns. It first partitions tasks according to their run-time communication information: tasks that communicate with each other more frequently are regarded as strongly connected, and based on their connectivity strength the tasks are grouped into supernodes using algorithms from spectral graph theory. This hierarchical partitioning reduces the complexity of the mapping algorithm and achieves scalability. Finally, the run-time communication information is used again in a fine-tuning step to explore better mappings. Our experiments show that the mapping algorithm reduces point-to-point communication time by up to 20% for PDGEMM, a ScaLAPACK matrix multiplication kernel, and by up to 7% for AMG2006, a tier 1 application of the Sequoia benchmark.
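
To make the supernode-partitioning step concrete, the following Python sketch performs a single spectral bisection of a small synthetic communication graph: tasks that exchange larger message volumes end up in the same group. HMA itself is hierarchical and uses measured run-time communication data; the weight matrix and the two-way split here are illustrative only.

```python
# Toy spectral bisection of a task communication graph into two "supernodes".
# W[i, j] is a synthetic communication volume between tasks i and j.
import numpy as np

W = np.array([
    [0, 8, 7, 1, 0, 0],
    [8, 0, 6, 0, 1, 0],
    [7, 6, 0, 0, 0, 1],
    [1, 0, 0, 0, 9, 8],
    [0, 1, 0, 9, 0, 7],
    [0, 0, 1, 8, 7, 0],
], dtype=float)

D = np.diag(W.sum(axis=1))          # degree matrix
L = D - W                           # graph Laplacian

# Fiedler vector: eigenvector for the second-smallest eigenvalue of L.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# Strongly connected (heavily communicating) tasks land on the same side of the split.
supernode = (fiedler >= 0).astype(int)
print("task -> supernode:", dict(enumerate(supernode.tolist())))
```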


Author(s):  
Stefan Lemvig Glimberg ◽  
Allan Peter Engsig-Karup ◽  
Luke N Olson

The focus of this article is on the parallel scalability of a distributed multigrid framework, known as the DTU Compute GPUlab Library, for execution on graphics processing unit (GPU)-accelerated supercomputers. We demonstrate near-ideal weak scalability for a high-order fully nonlinear potential flow (FNPF) time domain model on the Oak Ridge Titan supercomputer, which is equipped with a large number of many-core CPU-GPU nodes. The high-order finite difference scheme for the solver is implemented to expose data locality and scalability, and the linear Laplace solver is based on an iterative multilevel preconditioned defect correction method designed for high-throughput processing and massive parallelism. In this work, the FNPF discretization is based on a multi-block discretization that allows for large-scale simulations. In this setup, each grid block is based on a logically structured mesh with support for curvilinear representation of horizontal block boundaries, allowing an accurate representation of geometric features such as surface-piercing bottom-mounted structures, for example the mono-pile foundations demonstrated here. Unprecedented performance and scalability results are presented for a system of equations that is historically known as being too expensive to solve in practical applications. A novel feature of the potential flow model is demonstrated: a modest number of multigrid restrictions is sufficient for fast convergence, which improves overall parallel scalability as the coarse-grid problem diminishes. In the numerical benchmarks presented, we demonstrate the use of 8192 modern Nvidia GPUs, enabling large-scale and high-resolution nonlinear marine hydrodynamics applications.
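
As a schematic of the solver structure mentioned above (an iterative multilevel preconditioned defect correction method), the Python sketch below applies a two-grid V-cycle as the preconditioner inside a defect-correction loop for a 1D Poisson problem. The actual GPUlab solver is a high-order, multi-block, GPU-resident 3D implementation; the grid size, smoother, and transfer operators here are simplified stand-ins.

```python
# Two-grid preconditioned defect-correction sketch for a 1D Poisson problem.
import numpy as np

def poisson_1d(n, h):
    """Second-order finite-difference Laplacian on n interior points."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A / h**2

n_f = 63                                   # fine-grid interior points
h_f = 1.0 / (n_f + 1)
A_f = poisson_1d(n_f, h_f)
b = np.ones(n_f)                           # right-hand side f = 1

n_c = (n_f - 1) // 2                       # coarse grid: every other fine point
A_c = poisson_1d(n_c, 2.0 * h_f)

def smooth(x, rhs, A, sweeps=3, omega=2.0 / 3.0):
    """Damped Jacobi smoothing sweeps."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (rhs - A @ x) / d
    return x

def restrict(r_f):
    """Full-weighting restriction of a fine-grid residual to the coarse grid."""
    return 0.25 * r_f[0:-2:2] + 0.5 * r_f[1::2] + 0.25 * r_f[2::2]

def prolong(e_c):
    """Linear-interpolation prolongation of a coarse-grid correction."""
    e_f = np.zeros(n_f)
    e_f[1::2] = e_c
    padded = np.concatenate(([0.0], e_c, [0.0]))
    e_f[0::2] = 0.5 * (padded[:-1] + padded[1:])
    return e_f

def two_grid_preconditioner(r):
    """Approximate A_f^{-1} r: pre-smooth, coarse-grid correct, post-smooth."""
    e = smooth(np.zeros(n_f), r, A_f)
    e_c = np.linalg.solve(A_c, restrict(r - A_f @ e))
    return smooth(e + prolong(e_c), r, A_f)

# Outer defect-correction iteration: x <- x + M^{-1} (b - A x).
x = np.zeros(n_f)
for k in range(50):
    defect = b - A_f @ x
    if np.linalg.norm(defect) < 1e-10 * np.linalg.norm(b):
        break
    x = x + two_grid_preconditioner(defect)

print(f"converged in {k} iterations, residual {np.linalg.norm(b - A_f @ x):.2e}")
```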


2012 ◽  
Vol 706-709 ◽  
pp. 1781-1786 ◽  
Author(s):  
You Liang He ◽  
Fei Gao ◽  
Bao Yun Song ◽  
Rong Fu ◽  
Gui Ming Wu ◽  
...  

Effective grain refinement through equal channel angular pressing (ECAP) for magnesium (Mg) alloys has been demonstrated by many researchers. Although ECAP can produce material capable of superplasticity, the batch-mode nature of the method and the repetitive processing required to attain an ultrafine-grained structure have prevented its wide use in large-scale industrial production. In this study, a well-established metal forming method, the continuous extrusion forming (CONFORM) process, was employed as a severe plastic deformation route to refine the microstructure of Mg alloys. Cast Mg-3%Al-1%Zn (AZ31) rods were used as the feedstock, and the cast structure (grain size of ~150 microns) was refined to ~1 micron after one-pass CONFORM extrusion. Uniaxial tensile tests of the as-extruded samples were conducted at a temperature of 473 K, and an elongation of ~200% was achieved at a strain rate of 1×10⁻⁴ s⁻¹. The significant grain refinement effect was attributed to the severe shear deformation that occurs during the CONFORM process, which is very similar to ECAP but imposes even higher effective strains. The most important advantage of CONFORM over ECAP is that the former is a continuous route, so it is able to produce long products. It was also shown that CONFORM could serve as a forming method for Mg alloys complementary to conventional rolling, forging and extrusion.

