high performance
Recently Published Documents

TOTAL DOCUMENTS: 170865 (FIVE YEARS: 44297)
H-INDEX: 386 (FIVE YEARS: 71)

2022 ◽ Vol 15 (2) ◽ pp. 1-35
Author(s): Tom Hogervorst, Răzvan Nane, Giacomo Marchiori, Tong Dong Qiu, Markus Blatt, ...

Scientific computing is at the core of many High-Performance Computing applications, including computational flow dynamics. Because simulating increasingly large computational models is of the utmost importance, hardware acceleration is receiving growing attention for its potential to maximize the performance of scientific computing. Field-Programmable Gate Arrays (FPGAs) could accelerate scientific computing because they allow full customization of the memory hierarchy, which matters in irregular applications such as iterative linear solvers. In this article, we study the potential of FPGAs in High-Performance Computing in light of rapid advances in reconfigurable hardware, such as larger on-chip memories, growing numbers of logic cells, and the integration of High-Bandwidth Memory on board. To perform this study, we propose a novel Sparse Matrix-Vector multiplication (SpMV) unit and an ILU0 preconditioner tightly integrated with a BiCGStab solver kernel. We integrate the developed preconditioned iterative solver into Flow from the Open Porous Media project, a state-of-the-art open-source reservoir simulator. Finally, we perform a thorough evaluation of the FPGA solver kernel, both stand-alone and integrated into the reservoir simulator, using the NORNE field, a real-world reservoir model with a grid of more than 10^5 cells and three unknowns per cell.
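The numerical pipeline the abstract describes (an SpMV kernel driving an ILU0-preconditioned BiCGStab iteration) can be prototyped in software before mapping it to hardware. Below is a minimal Python/SciPy sketch of that general method on a stand-in sparse system; it is not the authors' FPGA kernels or the OPM Flow integration, and SciPy's spilu is threshold-based, so it only approximates a strict pattern-based ILU(0).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse test system standing in for a reservoir linear system.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Incomplete LU with no extra fill, mimicking the ILU0 preconditioner
# from the abstract (spilu needs CSC input; drop_tol=0, fill_factor=1
# approximates the zero-fill pattern).
ilu = spla.spilu(A.tocsc(), drop_tol=0.0, fill_factor=1)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# BiCGStab solve with the ILU preconditioner; SpMV with A happens
# inside each iteration of the Krylov loop.
x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```

On an FPGA, the SpMV and the triangular solves of the ILU0 factors are exactly the pieces that get custom memory hierarchies, which is why the abstract treats them as the units to accelerate.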


2022 ◽ Vol 142 ◽ pp. 106453
Author(s): Charan Kuchi, Nunna Guru Prakash, Kumcham Prasad, Yenugu Veera Manohara Reddy, Bathinapatla Sravani, ...

2022 ◽ Vol 284 ◽ pp. 116995
Author(s): Amélie Schultheiss, Abderrahime Sekkatz, Viet Huong Nguyen, Alexandre Carella, Anass Benayad, ...

2022 ◽ Vol 45 ◽ pp. 786-795
Author(s): Qiao Xu, Xianglei Liu, Qingyang Luo, Yang Tian, Chunzhuo Dang, ...

Author(s): Shu Jiang, Zuchao Li, Hai Zhao, Bao-Liang Lu, Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test datasets and has made remarkable progress. However, the real world contains innumerable scenarios not covered by such datasets, i.e., out-of-domain data. As a result, parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performing in-domain parsers to a new domain, cross-domain transfer learning methods are essential for solving the domain problem in parsing. This paper examines two scenarios for cross-domain transfer learning: semi-supervised and unsupervised cross-domain transfer learning. Specifically, we adopt the pre-trained language model BERT for training on the source-domain (in-domain) data at the subword level and introduce self-training methods adapted from tri-training for these two scenarios. Evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task indicate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
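The self-training recipe the abstract builds on (train on gold in-domain trees, parse unlabeled target-domain text, keep confident parses, retrain) fits in a few lines. The following Python sketch is illustrative only: train and predict_with_confidence are hypothetical placeholders, not functions from the paper or from any parser library.

```python
# Hypothetical sketch of self-training for cross-domain dependency parsing.
# train() and predict_with_confidence() are illustrative placeholders for
# training and running a parser (e.g., a BERT-based one, per the abstract).

def self_train(source_treebank, target_sentences, rounds=3, threshold=0.95):
    """Iteratively augment gold source-domain training data with
    confident automatic parses of unlabeled target-domain sentences."""
    train_data = list(source_treebank)          # gold in-domain trees
    parser = train(train_data)                  # initial in-domain parser
    for _ in range(rounds):
        pseudo = []
        for sent in target_sentences:
            tree, conf = predict_with_confidence(parser, sent)
            if conf >= threshold:               # keep only confident parses
                pseudo.append(tree)
        parser = train(train_data + pseudo)     # retrain on gold + pseudo-labels
    return parser
```

Tri-training, which the paper's methods adapt, replaces the single confidence threshold with agreement: an automatic parse is added to one parser's training data when the other two independently trained parsers agree on it.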


2022 ◽ Vol 23 ◽ pp. 100614
Author(s): H. Tonnoir, D. Huo, R.L.S. Canevesi, V. Fierro, A. Celzard, ...

2022 ◽ Vol 204 ◽ pp. 111181
Author(s): Wei Yong, Hongtao Zhang, Huadong Fu, Yaliang Zhu, Jie He, ...
