A Data-Intensive Numerical Modeling Method for Large-Scale Rock Strata and Its Application in Mining Subsidence Prediction

Author(s):  
Ya-Qiang Gong ◽  
Guang-Li Guo ◽  
Li-Ping Wang ◽  
Huai-Zhan Li ◽  
Guang-Xue Zhang ◽  
...  
2021 ◽  
Vol 2108 (1) ◽  
pp. 012030


Author(s):  
Lei Wang ◽  
Hongjun Zhang ◽  
Hui Hu ◽  
Liping Hao ◽  
Wei Xu

Abstract A modular multilevel converter (MMC) contains a large number of power electronic switching devices. Modeling based on the switching-circuit model requires substantial resources and simulates slowly, so large-scale real-time electromagnetic transient simulation is difficult to realize. An MMC electromagnetic transient numerical modeling method based on the ideal transformer model (ITM) is presented. First, the MMC system is divided by the ITM method into the main circuit network and the submodule group network, and the error caused by the decoupling delay in serial and parallel real-time simulation is compensated by interpolation prediction and advanced interpolation prediction, respectively. Second, the capacitor in each submodule is discretized by the trapezoidal integration method, the backward Euler method, and the Gear-2 method, respectively. Based on these numerical integration rules, the difference equations for the capacitor voltage, capacitor current, and output voltage of the half-bridge and full-bridge submodules are derived. Then, to improve the calculation speed, a simplified numerical model of the half-bridge and full-bridge submodules based on switching functions is proposed. Finally, the MMC based on the switching-circuit model is run as an off-line simulation in simulation software, and the proposed MMC numerical modeling method is run as a real-time simulation on a Speedgoat real-time simulator. The off-line and real-time simulation results of the MMC numerical modeling method and the switching-circuit model are compared, and the results verify the feasibility and effectiveness of the proposed method in real-time simulation.
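The discretization step named in the abstract can be illustrated with a minimal sketch, assuming a submodule capacitor governed by i = C dv/dt; the capacitance, step size, and driving current below are illustrative values, not the paper's parameters:

```python
# Hypothetical sketch: discretizing the submodule capacitor i = C * dv/dt
# with two of the integration rules mentioned in the abstract.
C = 1e-3   # capacitance [F] (assumed value)
dt = 1e-5  # simulation step [s] (assumed value)

def trapezoidal_step(v_prev, i_prev, i_now):
    """Trapezoidal rule: v[n] = v[n-1] + dt/(2C) * (i[n] + i[n-1])."""
    return v_prev + dt / (2 * C) * (i_now + i_prev)

def backward_euler_step(v_prev, i_now):
    """Backward Euler: v[n] = v[n-1] + (dt/C) * i[n]."""
    return v_prev + dt / C * i_now

# Drive both difference equations with the same constant charging current.
i = 10.0  # [A]
v_tr, v_be = 0.0, 0.0
for _ in range(100):
    v_tr = trapezoidal_step(v_tr, i, i)
    v_be = backward_euler_step(v_be, i)

# For a constant current both rules agree: v = I*t/C = 10 * 1e-3 / 1e-3 = 10 V
print(v_tr, v_be)
```

The two rules differ only for time-varying currents, where the trapezoidal rule is second-order accurate and backward Euler introduces numerical damping; the Gear-2 method named in the abstract would add a second history term, v[n-2].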


2020 ◽  
Vol 2 (1) ◽  
pp. 92
Author(s):  
Rahim Rahmani ◽  
Ramin Firouzi ◽  
Sachiko Lim ◽  
Mahbub Alam

The major challenges of operating data-intensive Distributed Ledger Technology (DLT) are (1) reaching consensus on the main chain, as a set of validators cast public votes to decide which blocks to finalize, and (2) scalability, i.e., how to increase the number of chains running in parallel. In this paper, we introduce a new proximal algorithm that scales DLT in a large-scale Internet of Things (IoT) device network. We discuss how the algorithm benefits the integration of DLT into the IoT through edge computing technology, taking the scalability and heterogeneous capabilities of IoT devices into consideration. IoT devices are clustered dynamically into groups based on proximity context information. A cluster head is used to bridge the IoT devices with the DLT network, where a smart contract is deployed. In this way, the security of the IoT is improved and the scalability and latency issues are addressed. We elaborate on our mechanism, discuss the issues that should be considered when implementing the proposed algorithm, and show how it behaves under varying parameters such as latency and clustering.
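The proximity-based grouping described above can be sketched minimally as follows; the grid-cell heuristic, field names, and lowest-id head election are illustrative assumptions, not the paper's algorithm:

```python
# Hypothetical sketch: bucket IoT devices by a coarse location grid and
# elect one device per bucket as the cluster head that bridges to the
# DLT network. The clustering rule and data layout are assumptions.
from collections import defaultdict

def cluster_by_proximity(devices, cell_size=10.0):
    """Group devices whose (x, y) positions fall in the same grid cell."""
    clusters = defaultdict(list)
    for dev in devices:
        cell = (int(dev["x"] // cell_size), int(dev["y"] // cell_size))
        clusters[cell].append(dev)
    # Elect the device with the smallest id in each cell as cluster head.
    return {cell: min(members, key=lambda d: d["id"])["id"]
            for cell, members in clusters.items()}

devices = [
    {"id": 1, "x": 2.0, "y": 3.0},
    {"id": 2, "x": 4.0, "y": 8.0},
    {"id": 3, "x": 25.0, "y": 1.0},
]
heads = cluster_by_proximity(devices)
print(heads)  # devices 1 and 2 share a cell; device 3 forms its own
```

In the architecture described, only the elected heads would hold accounts on the DLT network and invoke the deployed smart contract on behalf of their cluster members, which is what reduces per-device latency and load.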


2002 ◽  
Vol 28 (11) ◽  
pp. 1763-1785 ◽  
Author(s):  
Gustavo C. Buscaglia ◽  
Fabián A. Bombardelli ◽  
Marcelo H. Garcı́a

2018 ◽  
Vol 61 ◽  
pp. 1-37 ◽  
Author(s):  
Paola F. Antonietti ◽  
Alberto Ferroni ◽  
Ilario Mazzieri ◽  
Roberto Paolucci ◽  
Alfio Quarteroni ◽  
...  

We present a comprehensive review of Discontinuous Galerkin Spectral Element (DGSE) methods on hybrid hexahedral/tetrahedral grids for the numerical modeling of the ground motion induced by large earthquakes. DGSE methods combine the flexibility of discontinuous Galerkin methods to patch together, through a domain decomposition paradigm, Spectral Element blocks in which high-order polynomials are used for the space discretization. This approach allows local adaptivity of the discretization parameters, thus improving the quality of the solution without affecting the computational costs. The theoretical properties of the semidiscrete formulation are also reviewed, including well-posedness, stability, and error estimates. A discussion of the dissipation, dispersion, and stability properties of the fully discrete (in space and time) formulation is also presented; here the time discretization is obtained by employing the leap-frog time-marching scheme. The capabilities of the present approach are demonstrated through a set of computations of realistic earthquake scenarios obtained using the code SPEED (http://speed.mox.polimi.it), an open-source code specifically designed for the numerical modeling of large-scale seismic events, jointly developed at Politecnico di Milano by the Laboratory for Modeling and Scientific Computing MOX and the Department of Civil and Environmental Engineering.
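The leap-frog time marching mentioned above can be illustrated on a scalar test problem; this is a sketch of the scheme in general, applied to a single oscillator u'' = -ω²u standing in for one mode of the semidiscrete elastodynamics system, with illustrative parameters:

```python
# Minimal sketch of the leap-frog (central difference) time-marching
# scheme on u'' = -omega^2 u. Frequency and step size are assumptions.
import math

omega = 2.0 * math.pi   # natural frequency [rad/s], period = 1 s
dt = 1e-3               # time step, well inside the stability limit dt < 2/omega

# Exact values at t = 0 and t = dt seed the two-level recurrence.
u_prev, u = 1.0, math.cos(omega * dt)

for _ in range(999):
    # Leap-frog update: u[n+1] = 2 u[n] - u[n-1] - (omega*dt)^2 * u[n]
    u_prev, u = u, 2.0 * u - u_prev - (dt * omega) ** 2 * u

# After 1000 steps, t = 1 s = one full period, so u should be close to 1.
print(u)
```

The scheme is explicit and second-order accurate, which is why it pairs naturally with the block-diagonal mass matrices of spectral element discretizations, at the cost of a CFL-type restriction on the time step.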


Author(s):  
Valentin Tablan ◽  
Ian Roberts ◽  
Hamish Cunningham ◽  
Kalina Bontcheva

Cloud computing is increasingly being regarded as a key enabler of the ‘democratization of science’, because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost–benefit analysis and usage evaluation.


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Sol Ji Kang ◽  
Sang Yeon Lee ◽  
Keon Myung Lee

With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to handle such problems efficiently. This paper briefly reviews parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared-memory systems; MPI is the de facto industry standard for distributed-memory systems; and the MapReduce framework has become the de facto standard for large-scale data-intensive applications. The qualitative pros and cons of each framework are known, but quantitative performance indexes help give a clear picture of which framework to use for a given application. As benchmark problems for comparing the frameworks, two problems are chosen: the all-pairs-shortest-path problem and a data join problem. This paper presents parallel programs for these problems implemented on each of the three frameworks, reports experimental results on a cluster of computers, and discusses which framework is the right tool for the job by analyzing the characteristics and performance of the paradigms.
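For reference, the all-pairs-shortest-path benchmark named above is commonly solved with the Floyd-Warshall algorithm; the serial sketch below (with an illustrative three-vertex graph) shows the triple loop whose inner iterations are what an OpenMP, MPI, or MapReduce version would distribute. The parallel decomposition is not shown and would follow each framework's own idiom:

```python
# Serial Floyd-Warshall sketch for the all-pairs-shortest-path benchmark.
# The graph and weights are illustrative assumptions.
INF = float("inf")

def floyd_warshall(dist):
    """In-place APSP on an adjacency matrix; dist[i][j] = edge weight or INF."""
    n = len(dist)
    for k in range(n):          # intermediate vertex (sequential dependency)
        for i in range(n):      # the i/j loops are the parallelizable part
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

g = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(g))  # e.g. shortest path 0->2 becomes 3 + 1 = 4
```

The outer k-loop carries a data dependency between rounds, so parallel implementations typically synchronize after each k, which is exactly the kind of communication pattern the paper's cross-framework comparison exercises.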

