Accelerated Discovery and Design of Nano-Material Applications in Nuclear Power by Using High Performance Scientific Computing

Author(s):  
Liviu Popa-Simil

The accelerated development of nano-sciences and nano-material systems and technologies is made possible through the use of High Performance Scientific Computing (HPSC). HPSC exploration ranges from nano-clusters to nano-material behavior at the meso-scale and on to specific macro-scale products. These novel nano-materials and nano-technologies developed using HPSC can be applied to improve the safety and performance of nuclear devices. This chapter explores these uses of HPSC.

Author(s):  
Liviu Popa-Simil

Present High Performance Scientific Computing (HPSC) systems face strong limitations when full integration from nano-materials to operational systems is desired. HPSC must be upgraded from the exa-scale machines now being designed, probably available after 2015, to far greater computing power and storage capacity at the yotta-scale in order to simulate systems from the nano-scale up to the macro-scale, as a way to greatly improve the safety and performance of future advanced nuclear power structures. The road from today's peta-scale systems to yotta-scale computers, which would barely be sufficient for current calculation needs, is difficult and requires revolutionary new ideas in HPSC, and probably the large-scale use of Quantum Supercomputers (QSC), which are now in the development stage.


Author(s):  
Issaku Fujita ◽  
Kotaro Machii ◽  
Teruaki Sakata

Moisture separator reheaters (MSRs) in nuclear power plants, especially first-generation units (which entered commercial operation between 1970 and 1982), have suffered from various problems such as severe erosion, deterioration of moisture separation performance, and drain subcooling. To solve these problems and improve performance, an improved MSR was developed. In the new MSR, a tube bundle of high-performance SS439 stainless steel round tubes was applied, and the heating steam distribution is optimized by an orifice plate in order to minimize drain subcooling. Based on a CFD approach, the cycle steam distribution was optimized and the application of FAC-resistant (flow-accelerated corrosion) material for the internal parts of the MSRs was determined. As a result, the pressure drop was reduced by 0.6% relative to the HP turbine exhaust pressure. Moisture separation performance was improved by the latest chevron-type separator. However, reverse pressure occurs locally at the drainage area of the separator, because the high-speed steam flow in the manifold creates a pronounced longitudinal pressure distribution. A new moisture separation structure was therefore developed in consideration of the influence of this reverse pressure on separator performance.


2003 ◽  
Vol 11 (4) ◽  
pp. 321-327 ◽  
Author(s):  
Martin J. Cole ◽  
Steven G. Parker

Generic programming using the C++ template facility has been a successful method for creating high-performance, yet general algorithms for scientific computing and visualization. However, adding template code tends to require more template code in surrounding structures and algorithms to maintain generality. Compiling all possible expansions of these templates can lead to massive template bloat. Furthermore, compile-time binding of templates requires that all possible permutations be known at compile time, limiting the runtime extensibility of the generic code. We present a method for deferring the compilation of these templates until an exact type is needed. This dynamic compilation mechanism will produce the minimum amount of compiled code needed for a particular application, while maintaining the generality and performance that templates innately provide. Through a small amount of supporting code within each templated class, the proper templated code can be generated at runtime without modifying the compiler. We describe the implementation of this goal within the SCIRun dataflow system. SCIRun is freely available online for research purposes.
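
To illustrate the idea, the sketch below mimics deferred, on-demand instantiation with a runtime type registry: a generic algorithm is available only for the types that have been registered, and new types can be added without recompiling the dispatch code. This is a minimal stand-in, not SCIRun's actual mechanism (which generates and compiles the needed template expansion at runtime); the FieldAlgo interface and registry names are hypothetical.

```cpp
// Sketch: runtime dispatch over templated algorithms. Real deferred
// compilation (as in SCIRun) would emit and compile the missing
// template expansion on demand instead of requiring registration.
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <typeinfo>

struct FieldAlgo {                        // type-erased interface
    virtual ~FieldAlgo() = default;
    virtual void execute() const = 0;     // stand-in for real work
};

template <typename T>
struct InterpolateAlgo : FieldAlgo {      // one template expansion
    void execute() const override {
        std::cout << "interpolating field of " << typeid(T).name() << "\n";
    }
};

// Registry: only registered types are ever instantiated and compiled.
std::map<std::string, std::function<std::unique_ptr<FieldAlgo>()>>& registry() {
    static std::map<std::string, std::function<std::unique_ptr<FieldAlgo>()>> r;
    return r;
}

template <typename T>
void registerAlgo(const std::string& typeName) {
    registry()[typeName] = [] { return std::make_unique<InterpolateAlgo<T>>(); };
}

int main() {
    registerAlgo<float>("float");         // expansions chosen at startup,
    registerAlgo<double>("double");       // not enumerated at compile time
    registry()["double"]()->execute();    // dispatch by runtime type name
}
```

The registry keeps the compiled code proportional to the set of types actually used, which is the same economy the deferred-compilation scheme achieves without requiring up-front registration.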


Author(s):  
D. E. Newbury ◽  
R. D. Leapman

Trace constituents, which can be very loosely defined as those present at concentration levels below 1 percent, often exert influence on structure, properties, and performance far greater than what might be estimated from their proportion alone. Defining the role of trace constituents in the microstructure, or indeed even determining their location, makes great demands on the available array of microanalytical tools. These demands become increasingly more challenging as the dimensions of the volume element to be probed become smaller. For example, a cubic volume element of silicon with an edge dimension of 1 micrometer contains approximately 5×10^10 atoms. High performance secondary ion mass spectrometry (SIMS) can be used to measure trace constituents to levels of hundreds of parts per billion from such a volume element (e.g., detection of at least 100 atoms to give 10% reproducibility with an overall detection efficiency of 1%, considering ionization, transmission, and counting).
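
The quoted detection limit can be checked with a short back-of-the-envelope calculation: detecting at least 100 atoms at 1% overall efficiency requires about 10^4 trace atoms in the probed volume, and 10^4 out of 5×10^10 matrix atoms is roughly 200 parts per billion. A minimal sketch of that arithmetic, using only the numbers quoted above:

```cpp
#include <cstdio>

int main() {
    const double matrix_atoms   = 5.0e10;  // Si atoms in a 1-um cube (quoted above)
    const double detected_atoms = 100.0;   // counts needed for ~10% reproducibility
    const double efficiency     = 0.01;    // overall detection efficiency (1%)

    const double atoms_present = detected_atoms / efficiency;       // ~1e4 atoms
    const double fraction_ppb  = atoms_present / matrix_atoms * 1e9;

    std::printf("detection limit ~ %.0f ppb\n", fraction_ppb);      // ~200 ppb
}
```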


2020 ◽  
Vol 12 (2) ◽  
pp. 19-50 ◽  
Author(s):  
Muhammad Siddique ◽  
Shandana Shoaib ◽  
Zahoor Jan

A key aspect of work processes in service sector firms is the interconnection between tasks and performance. Relational coordination can play an important role in addressing the issues of coordinating organizational activities, given the highly complex interdependence in service sector firms. Research has largely supported the view that well-devised high performance work systems (HPWS) can intensify organizational performance. There is a growing debate, however, about the mechanism linking HPWS and performance outcomes. Using relational coordination theory, this study tests a model of the effects of subsets of HPWS, such as motivation-, skill-, and opportunity-enhancing HR practices, on relational coordination among employees working in reciprocally interdependent job settings. Data were gathered from multiple sources, including managers and employees at the individual, functional, and unit levels, to capture their understanding of HPWS and relational coordination (RC) in 218 bank branches in Pakistan. Data were analyzed via structural equation modelling; the results suggest that HPWS predicted RC among officers at the unit level. The findings of the study contribute to both theory and practice.


2019 ◽  
Vol 14 ◽  
pp. 155892501989525
Author(s):  
Yu Yang ◽  
Yanyan Jia

Ultrafine crystallization of industrial pure titanium yields higher tensile strength, corrosion resistance, and thermal stability, and the material is therefore widely used in medical instrumentation, aerospace, and passenger vehicle manufacturing. However, batch preparation of ultrafine-crystallized tubular industrial pure titanium is limited by the development of the spinning process and has remained at the theoretical research stage. In this article, tubular TA2 industrial pure titanium was taken as the research object, and an ultrafine crystal forming process based on "5-pass strong spinning + heat treatment + 3-pass spreading + heat treatment" was proposed. Based on spinning process tests, the ultimate thinning rate of the method was explored, and the evolution of the surface microstructure was analyzed with a metallographic microscope. The research suggests that multi-pass spinning with small-to-medium thinning amounts causes the grain structure to be elongated in the axial and tangential directions and then refined, and that axial fiber uniformity is improved. The results have scientific significance for reducing the consumption of high-performance metals and improving material utilization and performance, and they also promote the development of preparation technology for ultrafine-grain metals.


Author(s):  
Kersten Schuster ◽  
Philip Trettner ◽  
Leif Kobbelt

We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
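
As a rough illustration of the filter structure described above (not the authors' optimizer or their GPU implementation), a single pass is simply a weighted sum over a sparse set of sample offsets. The CPU sketch below uses nearest-pixel sampling and clamped borders, whereas the GPU version would fetch samples with hardware bilinear interpolation; the Tap layout and function name are assumptions.

```cpp
#include <algorithm>
#include <vector>

struct Tap { float dx, dy, w; };  // sparse sample: offset + weight

// One pass of a sparse filter on a grayscale image (nearest-pixel
// sampling with clamped borders; a GPU version would instead fetch
// with hardware bilinear filtering at fractional offsets).
std::vector<float> sparsePass(const std::vector<float>& img, int w, int h,
                              const std::vector<Tap>& taps) {
    std::vector<float> out(img.size(), 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float acc = 0.0f;
            for (const Tap& t : taps) {
                int sx = std::min(std::max(x + static_cast<int>(t.dx), 0), w - 1);
                int sy = std::min(std::max(y + static_cast<int>(t.dy), 0), h - 1);
                acc += t.w * img[sy * w + sx];  // weighted sum of sparse samples
            }
            out[y * w + x] = acc;
        }
    return out;
}
```

A multi-pass filter chains several such calls, feeding each pass's output into the next, which is how the decomposition into a series of sparse convolutions is evaluated.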


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1587
Author(s):  
Duo Sheng ◽  
Hsueh-Ru Lin ◽  
Li Tai

High-performance and complex system-on-chip (SoC) designs require a high-throughput, stable timing monitor to reduce the impact of timing uncertainty and to implement the dynamic voltage and frequency scaling (DVFS) scheme for overall power reduction. This paper presents a multi-stage timing monitor that combines three timing-monitoring stages to achieve high timing-monitoring resolution and a wide timing-monitoring range simultaneously. Additionally, because the proposed timing monitor has high immunity to process–voltage–temperature (PVT) variation, it provides more stable timing-monitoring results. The timing-monitoring resolution and range of the proposed monitor are 47 ps and 2.2 µs, respectively, and the maximum measurement error is 0.06%. The proposed multi-stage timing monitor therefore not only provides timing information for the specified signals to maintain the functionality and performance of the SoC, but also makes the operation of the DVFS scheme more efficient and accurate in SoC design.
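
As a sanity check on the quoted figures (and an illustration of why covering both ends with a single monitoring stage would be difficult), a 47 ps resolution across a 2.2 µs range corresponds to tens of thousands of distinguishable steps, i.e., roughly 15 to 16 bits of dynamic range. A minimal sketch of that arithmetic:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double resolution_s = 47e-12;  // 47 ps finest step (quoted above)
    const double range_s      = 2.2e-6;  // 2.2 us total range (quoted above)

    const double steps = range_s / resolution_s;           // ~4.7e4 steps
    std::printf("steps: %.0f (~%.1f bits)\n", steps, std::log2(steps));
}
```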


2021 ◽  
Vol 2 (1) ◽  
pp. 46-62
Author(s):  
Santiago Iglesias-Baniela ◽  
Juan Vinagre-Ríos ◽  
José M. Pérez-Canosa

It is a well-known fact that the 1989 Exxon Valdez disaster caused escort towing of laden tankers to become compulsory in many coastal areas of the world. In order to implement a new type of escort towing, specially designed for very adverse weather conditions, considerable changes in the hull form of escort tugs had to be made to improve their stability and performance. Since traditional winch and rope technologies were only effective in calm waters, tugs had to be fitted with new devices. These improvements allowed the remodeled tugs to counterbalance the strong forces generated by maneuvers in open waters. The aim of this paper is to provide a comprehensive literature review of the new high-performance automatic dynamic winches. Furthermore, a thorough analysis of the best available towline technologies, essential to properly exploit the new winches, is carried out. This review shows how the escort towing industry has faced this technological challenge.


2021 ◽  
Vol 11 (3) ◽  
pp. 923
Author(s):  
Guohua Li ◽  
Joon Woo ◽  
Sang Boem Lim

The complexity of high-performance computing (HPC) workflows is an important issue in the provision of HPC cloud services in most national supercomputing centers. This complexity problem is especially critical because it affects HPC resource scalability, management efficiency, and convenience of use. To solve this problem, while exploiting the advantage of bare-metal-level high performance, container-based cloud solutions have been developed. However, various problems still exist, such as an isolated environment between HPC and the cloud, security issues, and workload management issues. We propose an architecture that reduces this complexity by using Docker and Singularity, which are the container platforms most often used in the HPC cloud field. This HPC cloud architecture integrates both image management and job management, which are the two main elements of HPC cloud workflows. To evaluate the serviceability and performance of the proposed architecture, we developed and implemented a platform in an HPC cluster experiment. Experimental results indicated that the proposed HPC cloud architecture can reduce complexity to provide supercomputing resource scalability, high performance, user convenience, various HPC applications, and management efficiency.

