The Role of Large Scale Demonstration Experiments in Supporting the Implementation of a High Level Waste Programme

Author(s):  
K. Yoshimura ◽  
I. Gaus ◽  
K. Kaku ◽  
T. Sakaki ◽  
A. Deguchi ◽  
...  

Large-scale demonstration experiments in underground research laboratories (both on-site and off-site) are currently undertaken by most high-level radioactive waste management organisations. The decision to plan and implement prototype experiments, which may run for several decades, has important strategic and budgetary consequences for the organisation. Careful definition of experimental objectives based on the design and safety requirements is critical. Implementation requires the involvement of many parties and needs flexible but consistent management, since additional goals identified in the course of the experiments might, for example, jeopardise the initial primary goals. An international workshop in which European and Japanese implementers (SKB, Posiva, Andra, ONDRAF, NUMO and Nagra) as well as certain research organisations (JAEA, RWMC) participated identified which experiments are likely to be needed depending on the progress of a disposal programme. Early in a programme, large-scale demonstrations are generally performed to reduce uncertainties identified during safety case development, such as validation of thermo-hydro-mechanical processes in the engineered barrier system and target host rock. Feasibility testing of underground construction in a potential host rock at relevant depth might also be required. Later in a programme, i.e., closer to the license application, large-scale experiments aim largely at demonstrating engineering feasibility and confirming the performance of complete repository components. Ultimately, before licensing repository operation, 1:1-scale commissioning testing will be required. Factors contributing to the successful completion of large-scale demonstration experiments, in terms of planning, defining objectives and optimising results, together with the main lessons learned over the last 30 years, are discussed. 
The need for international coordination in defining the objectives of new large-scale demonstration experiments is addressed. The paper is expected to provide guidance to implementing organisations (especially those in the early stages of their programmes) considering participating in, and/or conducting on their own, large-scale experiments in the near future.

2000 ◽  
Vol 663 ◽  
Author(s):  
I.G. McKinley ◽  
H. Kawamura ◽  
H. Tsuchi

Most national high-level waste (HLW) disposal programs actually reflect, or are based on, concepts which were developed during the '70s or early '80s. Although suitable for demonstrating concept feasibility, such designs of the engineered barrier system (EBS) do not take into account the tremendous developments in system understanding and materials technology over the last two decades, the practicality (and cost) of their quality assurance and implementation on an industrial scale, or the transparency of the demonstration of the safety case. In many ways, given the increased significance of public acceptance over the last decade, the last point may be of particular relevance. This paper reviews the work already carried out on “2nd generation” concepts and extends this to identify the key attributes of an ideal design for the specific case of disposal of vitrified HLW from reprocessing in a “wet” host rock (either crystalline or sedimentary). Based on the concept developed, key R&D requirements are identified.


Author(s):  
Robert E. Prince ◽  
Bradley W. Bowan

This paper describes actual experience applying a technology to achieve volume reduction while producing a stable waste form for low and intermediate level liquid (L/ILW) wastes, and for the L/ILW fraction produced from pre-processing of high level wastes. The chief process addressed is vitrification. The joule-heated ceramic melter vitrification process has been used successfully on a number of waste streams produced by the U.S. Department of Energy (DOE). This paper addresses lessons learned in achieving dramatic improvements in process throughput, based on actual pilot and full-scale waste processing experience. Since 1991, Duratek, Inc., and its long-term research partner, the Vitreous State Laboratory of The Catholic University of America, have worked to continuously improve joule-heated ceramic melter vitrification technology in support of waste stabilization and disposition in the United States. From 1993 to 1998, under contract to the DOE, the team designed, built, and operated a joule-heated melter (the DuraMelter™) to process liquid mixed (hazardous/low activity) waste material at the Savannah River Site (SRS) in South Carolina. This melter produced 1,000,000 kilograms of vitrified waste, achieving a volume reduction of approximately 70 percent and ultimately producing a waste form that the U.S. Environmental Protection Agency (EPA) delisted from hazardous classification. The team built upon its SRS M Area experience to produce state-of-the-art melter technology that will be used at the DOE’s Hanford site in Richland, Washington. Since 1998, the DuraMelter™ has been the reference vitrification technology for processing both the high level waste (HLW) and low activity waste (LAW) fractions of liquid HLW from the U.S. DOE’s Hanford site. Process innovations have doubled the throughput and enhanced the ability to handle problem constituents in LAW. 
This paper provides lessons learned from the operation and testing of two facilities that provide the technology for a vitrification system that will be used in the stabilization of the low level fraction of Hanford’s high level tank wastes.


2006 ◽  
Vol 15 (03) ◽  
pp. 391-413 ◽  
Author(s):  
ASIT DAN ◽  
KAVITHA RANGANATHAN ◽  
CATALIN L. DUMITRESCU ◽  
MATEI RIPEANU

In large-scale, distributed systems such as Grids, an agreement between a client and a service provider specifies service level objectives both as expressions of client requirements and as provider assurances. From an application perspective, these objectives should be expressed in a high-level, service or application-specific manner rather than requiring clients to detail the necessary resources. Resource providers on the other hand, expect low-level, resource-specific performance criteria that are uniform across applications and can be easily interpreted and provisioned. This paper presents a framework for service management that addresses this gap between high-level specification of client performance objectives and existing resource management infrastructures. The paper identifies three levels of abstraction for resource requirements a service provider needs to manage, namely: detailed specification of raw resources, virtualization of heterogeneous resources as abstract resources, and performance objectives at an application level. The paper also identifies three key functions for managing service-level agreements, namely: translation of resource requirements across abstraction layers, arbitration in allocating resources to client requests, and aggregation and allocation of resources from multiple lower-level resource managers. One or more of these key functions may be present at each abstraction layer of a service-level manager. Thus, layering and the composition of these functions across abstraction layers enables modeling of a wide array of management scenarios. The framework we present uses service metadata and/or service performance models to map client requirements to resource capabilities, uses business value associated with objectives to arbitrate between competing requests, and allocates resources based on previously negotiated agreements. 
We instantiate this framework for three different scenarios and explain how the architectural principles we introduce are used in the real world.
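The layering described above can be illustrated with a minimal sketch. This is not the paper's implementation; all names and figures (the performance model, CPU units, business values) are hypothetical, and it shows only two of the three key functions: translation of an application-level objective into abstract resource requirements, and value-based arbitration between competing requests.

```python
# Hypothetical sketch of two service-level management functions:
# translation (application objective -> abstract resources) and
# arbitration (allocate scarce capacity by business value).

def translate(objective_tps, perf_model):
    """Map an application-level throughput objective (transactions/s)
    to an abstract resource requirement via a performance model."""
    # Ceiling division: enough CPU units to sustain the objective.
    return {"cpu_units": -(-objective_tps // perf_model["tps_per_cpu_unit"])}

def arbitrate(requests, capacity):
    """Grant limited capacity to requests, highest business value first."""
    granted = []
    for req in sorted(requests, key=lambda r: r["value"], reverse=True):
        need = req["resources"]["cpu_units"]
        if need <= capacity:
            capacity -= need
            granted.append(req["client"])
    return granted

model = {"tps_per_cpu_unit": 50}     # assumed performance model
requests = [
    {"client": "A", "value": 10, "resources": translate(200, model)},
    {"client": "B", "value": 30, "resources": translate(400, model)},
]
print(arbitrate(requests, capacity=10))
```

With only 10 CPU units available, client B (higher business value, 8 units) is granted and client A (4 units) is deferred, mirroring the framework's use of business value to arbitrate between competing requests.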


Author(s):  
Len LeBlanc ◽  
Walter Kresic ◽  
Sean Keane ◽  
John Munro

This paper describes the integrity management framework utilized within the Enbridge Liquids Pipelines Integrity Management Program. The role of the framework is to provide the high-level structure used by the company to prepare and demonstrate integrity safety decisions relative to mainline pipelines, and facility piping segments where applicable. The scope is directed to corrosion, cracking, and deformation threats and all variants within those broad categories. The basis for the framework centers on the use of a safety case to provide evidence that the risks affecting the system have been effectively mitigated. A ‘safety case’, for the purposes of this methodology, is defined as a structured argument demonstrating that the evidence is sufficient to show that the system is safe.[1] The decision model brings together the aspects of data integration and determination of maintenance timing; execution of prevention, monitoring, and mitigation; confirmation that the execution has met reliability targets; application of additional steps if targets are not met; and then the collation of the results into an engineering assessment of the program effectiveness (safety case). Once the program is complete, continuous improvement is built into the next program through the incorporation of research and development solutions, lessons learned, and improvements to processes. On the basis of a wide range of experiences, investigations and research, it was concluded that combinations of monitoring and mitigation methods are required in an integrity program to effectively manage integrity threats. A safety case approach ultimately provides the structure for measuring the effectiveness of integrity monitoring and mitigation efforts, and the methodology to assess whether a pipeline is sufficiently safe, with targets for continuous improvement. 
Hence, the safety case serves to provide transparent, quantitative integrity program performance results that are continually improved through ongoing revalidations and refinement of the methods utilized. This enables risk reduction, better stakeholder awareness, focused innovation, and opportunities for industry information sharing, along with other benefits.


2014 ◽  
Vol 5 (1) ◽  
pp. 52
Author(s):  
Waldir Vilalva Dezan

The benefits gained in design mediated by Building Information Modelling (BIM) technology are manifold; among them stand out early visualization, the generation of accurate 2D drawings, collaboration, verification of design intent, the extraction of cost estimates, and performance evaluations. By adopting this modelling technology and using it to produce, communicate and analyze architectural or engineering solutions, practice is transformed. The implementation of this new method of working in architectural design and engineering firms therefore meets resistance and entails adoption stages in which incremental adjustments must occur to overcome difficulties and to ensure learning and gains from the new process. The Architectural and Engineering Office COORDENADORIA DE PROJETOS (CPROJ), belonging to the School of Civil Engineering, Architecture and Urban Planning of the University of Campinas, continually seeks innovation and has therefore incorporated BIM into its design method. This paper presents a practical case: the first large-scale project developed with BIM, considered a BIM pilot study at CPROJ. The pilot study was the research laboratory of the Center of Molecular and Cellular Engineering of the Boldrini Children’s Hospital. Training efforts and the appropriation of BIM prior to the pilot study, as well as the pilot study itself, are presented. The highlights and lessons learned in this process are summarized, together with an account of how BIM changed the office’s production and of the qualitative benefits achieved.


Author(s):  
I. CAÑAMÓN ◽  
F. J. ELORZA ◽  
A. MANGIN ◽  
P. L. MARTÍN ◽  
R. RODRÍGUEZ

This work analyzes the physical processes occurring in the Mock-up test of the FEBEX I and II projects. FEBEX I and II is a European research project (1996–2004) led by ENRESA, with financial support from the European Commission. The experiment is based on two large-scale heating tests (the "in-situ" test and the "Mock-up" test) simulating a radioactive waste repository, and aims to analyze the thermo-hydro-mechanical (THM) processes that could eventually occur in this kind of repository. The main objectives of this study have been the following: to identify the physical processes occurring in the Mock-up experiment and to characterize them quantitatively; to understand the nature and consequences of several incidents that happened in the Mock-up during the heating phase; and, finally, to analyze the reliability of the sensor measurements and to predict possible failures. The analysis techniques used in this work are both statistical (time-series correlation and spectral analysis, wavelet analysis, matching pursuit analysis) and non-statistical (spatial distribution analysis of the data). These methods aim to establish the relationships existing between the various data series registered in the experiment, corresponding to the measured parameters, and to characterize in time and frequency the non-stationary response of the series. The results obtained from these analyses provide a better understanding of the main THM processes affecting the engineered barriers used for the isolation of radioactive waste.
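The statistical tools named above (time-series correlation and spectral analysis) can be sketched on synthetic data. This is not the FEBEX analysis code; the sampling rate, signals, and lag are invented for illustration, showing how cross-correlation recovers a lag between two sensor series and how a power spectrum identifies the dominant period.

```python
# Illustrative sketch on synthetic "sensor" series (not FEBEX data):
# cross-correlation to find a lag, and an FFT periodogram for the
# dominant period.
import numpy as np

fs = 24.0                     # samples per day (assumed hourly readings)
t = np.arange(0, 30, 1 / fs)  # 30 days of data
rng = np.random.default_rng(0)
temp = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)  # daily cycle + noise
humid = np.sin(2 * np.pi * (t - 0.25))                        # same cycle, 6 h later

# Lag of maximum cross-correlation (negative lag: temp leads humid).
xcorr = np.correlate(temp - temp.mean(), humid - humid.mean(), mode="full")
lag_samples = xcorr.argmax() - (t.size - 1)
print(f"temperature leads humidity by {-lag_samples / fs:.2f} days")

# Dominant frequency via the power spectrum (skip the DC bin).
power = np.abs(np.fft.rfft(temp)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print(f"dominant period: {1 / freqs[power[1:].argmax() + 1]:.2f} days")
```

The same machinery, applied to the Mock-up's temperature, humidity and pressure records, is what lets such a study relate measured parameters to one another; the non-stationary behaviour mentioned in the abstract is what motivates the additional wavelet and matching-pursuit methods.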


Author(s):  
Brian D. Preussner ◽  
Joseph A. Nenni ◽  
Vondell J. Balls

The Calcine Disposition Project (CDP) of the Idaho Cleanup Project (ICP) has the responsibility to retrieve, treat, and dispose of the calcine stored at the Idaho Nuclear Technology and Engineering Center (INTEC) located at the Idaho National Laboratory. Calcine is the granular product of thermally treating, or calcining, liquid high-level waste (HLW) that was produced at INTEC during the reprocessing of spent nuclear fuel (SNF) to recover uranium. The CDP is currently designing the Hot Isostatic Pressing (HIP) treatment for the calcine to provide a monolithic glass-ceramic waste form suitable for transport and disposition outside of Idaho by 2035, in compliance with the Idaho Settlement Agreement. The HIP process has been used by industry since its invention by the Battelle Institute in 1955. Hot isostatic pressing can be used for upgrading castings, densifying pre-sintered components, and consolidating powders. It involves the simultaneous application of high pressure and temperature in a specially constructed vessel. The pressure is applied on all sides by a gas (usually inert) and so is isostatic. The CDP will use this treatment process (10,000 psi at 1,150 °C) to physically and chemically combine a mixture of calcine and granular additives into a non-leachable waste form. The HIP process for calcine involves filling a metal can with calcine and additives, heating and evacuating the can to remove volatiles, sealing the can under vacuum, and placing the can within the HIP machine for treatment. Although the HIP process has been in use for over 50 years, it has not been applied in large-scale radioactive service. Challenges with retrofitting such a system for calcine treatment include 1) filling and sealing the HIP can cleanly and remotely, 2) remotely loading and unloading the HIP machine, and 3) performing maintenance and repair on a 300-ton, hydraulically actuated machine in a highly radioactive hot cell environment. 
In this article, a systems engineering approach, including the use of industry-proven design-for-quality tools and quantitative assessment techniques, is summarized. Discussions are provided on how these techniques were used to improve high-consequence risk management and to apply failure mode, RAMI, and time-and-motion analyses more effectively at the earliest possible stages of design.
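The basic quantity a RAMI (reliability, availability, maintainability, inspectability) analysis estimates can be sketched briefly. The station names and all MTBF/MTTR figures below are invented for illustration (they are not CDP data); the sketch only shows how per-station availabilities combine for a serial process line, where slow remote repair in a hot cell drives availability down.

```python
# Illustrative RAMI sketch with hypothetical figures (not CDP data):
# steady-state availability of a serial process line.

def availability(mtbf_h, mttr_h):
    """Steady-state availability from mean time between failures
    and mean time to repair, both in hours: A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Stations in series: the line is up only if every station is up,
# so station availabilities multiply.
stations = {
    "can fill/seal": (500.0, 8.0),
    "bake-out":      (800.0, 4.0),
    "HIP press":     (300.0, 48.0),  # remote repair in a hot cell is slow
}

line_avail = 1.0
for name, (mtbf, mttr) in stations.items():
    a = availability(mtbf, mttr)
    line_avail *= a
    print(f"{name:14s} A = {a:.3f}")
print(f"line availability = {line_avail:.3f}")
```

Applied early in design, such estimates are what lets a failure-mode or RAMI study flag the hot-cell maintenance concept, rather than the process chemistry, as the dominant availability risk.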

