Tuning Parallel Data Compression and I/O for Large-scale Earthquake Simulation

Author(s):  
Houjun Tang ◽  
Suren Byna ◽  
N Anders Petersson ◽  
David McCallen
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Gong-Xu Luo ◽  
Ya-Ting Yang ◽  
Rui Dong ◽  
Yan-Hong Chen ◽  
Wen-Bo Zhang

Neural machine translation (NMT) for low-resource languages has drawn great attention in recent years. In this paper, we propose a joint back-translation and transfer learning method for low-resource languages. It is widely recognized that data augmentation and transfer learning are both straightforward and effective approaches to low-resource problems. However, existing methods, which use one of these techniques alone, limit the capacity of NMT models for low-resource problems. To make full use of the advantages of existing methods and further improve the translation performance for low-resource languages, we propose a new method that integrates back-translation with mainstream transfer learning architectures: it not only initializes the NMT model by transferring parameters from pretrained models, but also generates synthetic parallel data by translating large-scale monolingual data on the target side to boost the fluency of translations. We conduct experiments to explore the effectiveness of the joint method by incorporating back-translation into the parent-child and hierarchical transfer learning architectures. In addition, different preprocessing and training methods are explored to obtain better performance. Experimental results on Uyghur-Chinese and Turkish-English translation demonstrate the superiority of the proposed method over baselines that use either method alone.
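The joint method described in the abstract can be sketched schematically. The callables `pretrain`, `translate`, and `train` below are hypothetical stand-ins for a real NMT toolkit's routines, not the authors' actual implementation; the sketch only shows how parameter transfer and back-translation compose:

```python
# Hypothetical sketch of joint back-translation + transfer learning.
# `pretrain`, `translate`, and `train` stand in for a real NMT toolkit
# (e.g. fairseq or OpenNMT); names and signatures are illustrative only.

def joint_backtranslation_transfer(parent_parallel, child_parallel,
                                   target_monolingual,
                                   pretrain, translate, train):
    # 1. Transfer learning: pretrain on the high-resource parent pair
    #    and use its parameters to initialise the child (low-resource) model.
    parent_model = pretrain(parent_parallel)
    child_model = parent_model.copy()          # parameter transfer

    # 2. Back-translation: a reverse (target -> source) model turns
    #    target-side monolingual text into synthetic parallel data.
    reverse_model = train(child_model.copy(),
                          [(tgt, src) for src, tgt in child_parallel])
    synthetic = [(translate(reverse_model, tgt), tgt)
                 for tgt in target_monolingual]

    # 3. Fine-tune the transferred model on real + synthetic data.
    return train(child_model, child_parallel + synthetic)
```

The key point is that the two techniques are complementary: transfer supplies the initial parameters, back-translation supplies the extra training data.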


Author(s):  
Zhou Fang ◽  
Zhiping Chen ◽  
Guodong Jia ◽  
Hui Wang ◽  
Xiang Li

A large-scale earthquake simulation experiment on unanchored cylindrical steel liquid storage model tanks has been completed. The self-vibration characteristics of the liquid-filled model tanks were investigated based on the experimental acceleration dynamic response data. The shaking table test and the analysis methods were designed and conducted, and the experimental results of the model tanks were carefully measured. Furthermore, ANSYS finite element software was used to simulate and calculate the low-order natural frequencies and the fundamental frequency of the model tank systems according to the national design standard. The reasons for the consistencies and differences among the results obtained from the experiments, the numerical simulation, and the national design standard are discussed.


Author(s):  
Hammad Mazhar

This paper describes an open source parallel simulation framework capable of simulating large-scale granular and multi-body dynamics problems. This framework, called Chrono::Parallel, builds upon the modeling capabilities of Chrono::Engine, another open source simulation package, and leverages parallel data structures to enable scalable simulation of large problems. Chrono::Parallel is distinctive in that it was designed from the ground up to leverage parallel data structures and algorithms so that it scales across a wide range of computer architectures, yet retains a rich modeling capability for simulating many different types of problems. The modeling capabilities of Chrono::Parallel are demonstrated in the context of additive manufacturing and 3D printing by modeling the Selective Laser Sintering layering process and simulating large complex interlocking structures which require compression and folding to fit into a 3D printer’s build volume.


2020 ◽  
Vol 10 (21) ◽  
pp. 7636
Author(s):  
Dandan Jiang ◽  
Zhaofa Zeng ◽  
Shuai Zhou ◽  
Yanwu Guan ◽  
Tao Lin ◽  
...  

Three-dimensional magnetic inversion allows the distribution of magnetic parameters to be obtained, and it is an important tool for geological exploration and interpretation. However, because of the redundancy of the data obtained from large-scale investigations or high-density sampling, it is very computationally intensive to use these data for iterative inversion calculations. In this paper, we propose a method for compressing magnetic data using adaptive quadtree decomposition, which divides the two-dimensional data region into four quadrants and progressively subdivides them by recursion until the data in each quadrant meet a regional consistency criterion. The method allows for dense sampling at anomaly boundaries with large amplitude changes and sparse sampling in regions with small amplitude changes, and achieves the best approximation to the original data with the least amount of data, thus retaining more anomalous information while achieving the purpose of data compression. In addition, assigning values to the data in each quadrant using the averaging method is essentially equivalent to average filtering, which reduces the noise in the magnetic data. Tests on a synthetic model and an application to mineral exploration data prove that the method can effectively compress the magnetic data and greatly improve the computational efficiency.
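The decomposition described above can be sketched in a few lines. This is a minimal illustration, assuming a simple max-deviation-from-mean threshold as the regional consistency criterion (the paper's exact criterion may differ); each accepted block is replaced by its mean, which is the average-filtering effect the abstract mentions:

```python
import numpy as np

def quadtree_compress(data, tol, r0=0, c0=0, h=None, w=None, out=None):
    """Recursively split a 2D grid into quadrants until every value in a
    block lies within `tol` of the block mean (regional consistency),
    then represent the whole block by that mean (average filtering)."""
    if out is None:
        out = np.empty(data.shape, dtype=float)
        h, w = data.shape
    block = data[r0:r0 + h, c0:c0 + w]
    mean = block.mean()
    # Stop when the block is consistent or can no longer be split.
    if np.all(np.abs(block - mean) <= tol) or (h <= 1 and w <= 1):
        out[r0:r0 + h, c0:c0 + w] = mean
        return out
    h2, w2 = max(h // 2, 1), max(w // 2, 1)
    # Recurse into the four quadrants (degenerate ones are skipped).
    for dr, dc, hh, ww in [(0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)]:
        if hh > 0 and ww > 0:
            quadtree_compress(data, tol, r0 + dr, c0 + dc, hh, ww, out)
    return out
```

Flat regions collapse to a single averaged value while sharp anomalies are resolved down to individual cells, which is exactly the dense-at-boundaries, sparse-elsewhere sampling the method aims for.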


2016 ◽  
Vol 12 (S325) ◽  
pp. 311-315 ◽  
Author(s):  
Dany Vohl ◽  
Christopher J. Fluke ◽  
Amr H. Hassan ◽  
David G. Barnes ◽  
Virginia A. Kilborn

Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, and parallel data query, along with data management. Its modularity makes it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform, allowing the research process to continue wherever you are.


Algorithms ◽  
2019 ◽  
Vol 12 (9) ◽  
pp. 197 ◽  
Author(s):  
Sebastian Götschel ◽  
Martin Weiser

Solvers for partial differential equations (PDEs) are one of the cornerstones of computational science. For large problems, they involve huge amounts of data that need to be stored and transmitted on all levels of the memory hierarchy. Often, bandwidth is the limiting factor due to the relatively small arithmetic intensity, and increasingly due to the growing disparity between computing power and bandwidth. Consequently, data compression techniques have been investigated and tailored towards the specific requirements of PDE solvers over recent decades. This paper surveys data compression challenges and discusses examples of corresponding solution approaches for PDE problems, covering all levels of the memory hierarchy from mass storage up to main memory. We illustrate concepts for particular methods, with examples, and give references to alternatives.
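As a toy illustration of the in-memory lossy compression techniques such a survey covers (this example is not drawn from the paper itself), fixed-rate scalar quantization stores a float64 solution field as 16-bit codes, cutting memory traffic by 4x in exchange for a bounded pointwise error:

```python
import numpy as np

def quantize(field, bits=16):
    """Map a float64 field onto uint16 codes over its own value range.
    Absolute reconstruction error is bounded by half the step size."""
    lo, hi = field.min(), field.max()
    scale = (hi - lo) / (2**bits - 1) if hi > lo else 1.0
    codes = np.round((field - lo) / scale).astype(np.uint16)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    # Reconstruct an approximation of the original field.
    return lo + codes.astype(np.float64) * scale

# A smooth 1D "solution" field, as a PDE solver might hold in memory.
field = np.sin(np.linspace(0.0, np.pi, 1000))
codes, lo, scale = quantize(field)
err = np.abs(dequantize(codes, lo, scale) - field).max()
```

Production schemes (e.g. zfp or transform-based coders) exploit the smoothness of PDE solutions to do far better than this uniform scheme, but the bandwidth-for-accuracy trade-off is the same.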

