A High-Performance Framework for Instruction-Set Simulator

Author(s):  
Zhu Hao ◽  
Peng Chu ◽  
Tiejun Zhang ◽  
Donghui Wang ◽  
Chaohuan Hou

2021 ◽  
Vol 336 ◽  
pp. 04018
Author(s):  
Ping Deng ◽  
Xiaolong Zhu ◽  
Haiyan Sun ◽  
Yi Ren

The FT_MX processor is a high-performance chip independently developed by the National University of Defense Technology, with an innovative architecture and instruction set. LLVM is a widely used, efficient open-source compiler framework initiated at the University of Illinois. This paper introduces the basic architecture and functions of LLVM, analyzes its back-end porting mechanism in detail, presents the specific process of implementing an FT_MX back end, and thereby realizes LLVM support for targeting the FT_MX processor.
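As a rough illustration of what an LLVM back-end port involves (this is not code from the paper), the sketch below shows the target-registration hook that every new back end provides. The FTMX identifiers are hypothetical, the header paths follow recent LLVM releases (older ones use llvm/Support/TargetRegistry.h and llvm/ADT/Triple.h), and a real port additionally supplies TableGen register and instruction descriptions, instruction selection, and a TargetMachine subclass.

```cpp
// Hypothetical sketch of an LLVM target-info registration for FT_MX.
// Real back ends also add an ArchType enumerator to llvm::Triple and
// register a TargetMachine; both are omitted here for brevity.
#include "llvm/MC/TargetRegistry.h"
#include "llvm/TargetParser/Triple.h"

using namespace llvm;

static Target TheFTMXTarget;  // singleton describing the target to LLVM

Target &getTheFTMXTarget() { return TheFTMXTarget; }

// Hook called when the target library is loaded: tells the registry the
// short name, description, and backend name of this target.
extern "C" void LLVMInitializeFTMXTargetInfo() {
  RegisterTarget<Triple::UnknownArch> X(getTheFTMXTarget(), "ftmx",
                                        "FT_MX (hypothetical)", "FTMX");
}
```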


2021 ◽  
Author(s):  
Megan Grodowitz ◽  
Luis E. Pena ◽  
Curtis Dunham ◽  
Dong Zhong ◽  
Pavel Shamis ◽  
...  

2001 ◽  
Vol 356 (1412) ◽  
pp. 1209-1228 ◽  
Author(s):  
Nigel H. Goddard ◽  
Michael Hucka ◽  
Fred Howell ◽  
Hugo Cornelis ◽  
Kavita Shankar ◽  
...  

Biological nervous systems and the mechanisms underlying their operation exhibit astonishing complexity. Computational models of these systems have been correspondingly complex. As these models become ever more sophisticated, they become increasingly difficult to define, comprehend, manage and communicate. Consequently, for scientific understanding of biological nervous systems to progress, it is crucial for modellers to have software tools that support discussion, development and exchange of computational models. We describe methodologies that focus on these tasks, improving the ability of neuroscientists to engage in the modelling process. We report our findings on the requirements for these tools and discuss the use of declarative forms of model description (equivalent to object-oriented classes and database schema), which we call templates. We introduce NeuroML, a mark-up language for the neurosciences which is defined syntactically using templates, and its specific component intended as a common format for communication between modelling-related tools. Finally, we propose a template hierarchy for this modelling component of NeuroML, sufficient for describing models ranging in structural levels from neuron cell membranes to neural networks. These templates support both a framework for user-level interaction with models, and a high-performance framework for efficient simulation of the models.
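To make the idea of declarative templates concrete, here is a small conceptual sketch (not NeuroML's actual schema or element names): each level of the proposed hierarchy behaves like an object-oriented class whose fields are the parameters a particular model description fills in.

```cpp
// Conceptual illustration only: three levels of a template hierarchy,
// from membrane channels up to networks, mirrored as plain classes.
#include <string>
#include <vector>

// Membrane-level template: a channel described by its reversal potential
// and maximum conductance.
struct ChannelTemplate {
  std::string name;
  double reversal_potential_mV;
  double max_conductance_mS_per_cm2;
};

// Cell-level template: a neuron described declaratively as passive
// membrane properties plus a set of channel instances.
struct CellTemplate {
  std::string name;
  double capacitance_uF_per_cm2;
  std::vector<ChannelTemplate> channels;
};

// Network-level template: the cell types that populate a network.
struct NetworkTemplate {
  std::string name;
  std::vector<CellTemplate> cell_types;
};
```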


Author(s):  
Dong Cao ◽  
Shanshan Wang ◽  
Qun Li ◽  
Zhenxiang Cheny ◽  
Qiben Yan ◽  
...  

2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Dau-Chyrh Chang ◽  
Lihong Zhang ◽  
Xiaoling Yang ◽  
Shao-Hsiang Yen ◽  
Wenhua Yu

We introduce a hardware acceleration technique for the parallel finite-difference time-domain (FDTD) method based on the SSE (Streaming SIMD Extensions) instruction set, a single-instruction multiple-data (SIMD) extension of the x86 architecture. Applying SSE instructions to the parallel FDTD method yields a significant improvement in simulation performance. Benchmarks of the SSE acceleration on both a multi-CPU workstation and a computer cluster demonstrate the advantages of VALU (vector arithmetic logic unit) acceleration over GPU acceleration. Several engineering applications are employed to demonstrate the performance of the parallel FDTD method enhanced by the SSE instruction set.
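As a concrete, much simplified illustration of the technique described above, the sketch below vectorizes a one-dimensional FDTD field update with SSE intrinsics so that four single-precision cells are processed per instruction. The function and array names are illustrative; a production 3-D kernel would also address data alignment, cache blocking, and the full set of field components.

```cpp
// Sketch: 1-D FDTD electric-field update E[i] += c * (H[i] - H[i-1]),
// vectorized with SSE so four float cells are updated per iteration.
#include <xmmintrin.h>  // SSE intrinsics

void update_e_field_sse(float *E, const float *H, float c, int n) {
  const __m128 coeff = _mm_set1_ps(c);    // broadcast coefficient to 4 lanes
  int i = 1;
  for (; i + 4 <= n; i += 4) {
    __m128 h    = _mm_loadu_ps(&H[i]);      // H[i .. i+3]
    __m128 hm   = _mm_loadu_ps(&H[i - 1]);  // H[i-1 .. i+2]
    __m128 e    = _mm_loadu_ps(&E[i]);
    __m128 curl = _mm_sub_ps(h, hm);        // spatial difference of H
    e = _mm_add_ps(e, _mm_mul_ps(coeff, curl));
    _mm_storeu_ps(&E[i], e);
  }
  for (; i < n; ++i)                        // scalar tail for remaining cells
    E[i] += c * (H[i] - H[i - 1]);
}
```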

