A Model of the Relationship between the Parameters of Oxidation of Zirconium Alloys in Water Steam at 1000°C and the Alloy Composition: II. Model Selection, Comparison with Experiment

2021, Vol 12 (3), pp. 700-706
Author(s): V. G. Kritsky, E. A. Motkova, A. S. Yashin

Within the previously developed model for the oxidation of zirconium alloys in water steam during a LOCA (loss-of-coolant accident), the role of each element of the alloy composition in the oxidation kinetics is estimated. The contribution of each component to oxidation resistance was determined by solving the "inverse problem". The algorithm, a flowchart, tables of the step-by-step solution of the "inverse" and "direct" problems, and the calculated values of the parabolic oxidation constant K and the time t of transition to the linear oxidation stage are presented for each alloy. The contribution of each alloying and impurity element to the oxidation resistance of the alloy is determined by the stoichiometric coefficients and corresponds to the thermodynamic sequence of formation of the oxides of these elements in water vapor at 1000°C. The thermodynamic meaning of the parameters in the kinetic equations is established: they are proportional to the standard formation energy ∆Gi of the oxide of the corresponding alloy component.
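
The parabolic-then-linear kinetics described above can be sketched as follows. This is an illustrative toy, not the paper's model: the rate constant K, the transition time t_star, and the assumption that the linear stage continues with the slope matched at the transition point are all placeholders for the alloy-specific values the authors compute.

```python
import math

def mass_gain(t, K, t_star):
    """Oxide mass gain vs. time: parabolic stage w = sqrt(K*t) up to the
    transition time t_star, then a linear stage whose slope is matched to
    the parabolic curve at t_star. Units and values are illustrative only."""
    if t <= t_star:
        return math.sqrt(K * t)
    slope = 0.5 * math.sqrt(K / t_star)   # d/dt sqrt(K*t) evaluated at t_star
    return math.sqrt(K * t_star) + slope * (t - t_star)
```

Fitting measured mass-gain curves against such a form is one way K and the transition time could be extracted per alloy; the paper's inverse-problem procedure additionally ties these parameters to the composition.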


2021, Vol 871, pp. 20-26
Author(s): Yu Gao, Hong Yu, Yu Zhou, Xin Jie Zhu, Qun Bo Fan

Traditional high-throughput experiments increase test efficiency by designing composition-gradient specimens and similar methods. This article improves on the traditional high-throughput experiment by proposing a scheme that combines nanoindentation with electron probe microanalysis (EPMA). Based on a new Ti-Mo-Al-Zr-Cr-Sn alloy, the micro-region composition and the corresponding properties at multiple indentations are characterized directly, yielding a series of different alloy compositions spanning eight elements (including Mo and Al) together with the corresponding hardness (H) and elastic modulus (E). Principal component analysis, together with the theory of molybdenum and aluminum equivalents, is then used to process the data, and a series of maps such as "E-H-composition characteristic parameters" and "E-H-alloy equivalents" is constructed, achieving high-throughput characterization of the composition-property relationship of the titanium alloy. This approach can not only quickly determine the alloy composition range corresponding to high E and high H values, but also guide further optimization of titanium alloy composition design.
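
The molybdenum and aluminum equivalents mentioned above collapse a multi-element titanium-alloy composition into two scalar stabilizer strengths. A minimal sketch is below; the coefficients are common literature values for wt% compositions and are not taken from this paper, which may use a different convention.

```python
def mo_equivalent(wt):
    """Molybdenum equivalent (beta-stabilizer strength) from a wt% composition
    dict. Coefficients are common literature values, assumed for illustration."""
    coeff = {"Mo": 1.0, "V": 0.67, "W": 0.44, "Nb": 0.28,
             "Ta": 0.22, "Fe": 2.9, "Cr": 1.6}
    return sum(c * wt.get(elem, 0.0) for elem, c in coeff.items())

def al_equivalent(wt):
    """Aluminum equivalent (alpha-stabilizer strength), common literature form."""
    return (wt.get("Al", 0.0) + wt.get("Sn", 0.0) / 3
            + wt.get("Zr", 0.0) / 6 + 10.0 * wt.get("O", 0.0))
```

Each EPMA-measured micro-region composition maps to one (Mo-eq, Al-eq) point, which can then be plotted against the co-located nanoindentation E and H values to build the "E-H-alloy equivalents" maps.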


1992, Vol 114 (4), pp. 459-464
Author(s): W. E. Henderer

Tool-life tests are reported which show the relationship between the alloy composition of high-speed steel twist drills and performance. Tool life is shown to depend primarily on the composition of the matrix, consisting of tempered martensite and precipitated secondary carbides. The longest tool life was obtained from alloys with high vanadium content and low tungsten or molybdenum content. This observation is consistent with the dispersion characteristics of vanadium carbides, which precipitate during tempering.


1986, Vol 23 (A), pp. 127-141
Author(s): Ritei Shibata

The relationship between consistency of model selection and that of parameter estimation is investigated. It is shown that the consistency of model selection is achieved at the cost of a lower order of consistency of the resulting estimate of parameters in some domain. The situation is different when selecting autoregressive moving average models, since the information matrix becomes singular when overfitted. Some detailed analyses of the consistency are given in this case.
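
The consistency trade-off discussed above can be illustrated with a deterministic toy comparison of AIC and BIC, two standard selection criteria (the numbers below are illustrative, not from the paper). Suppose one spurious extra parameter buys a fixed O(1) reduction in residual sum of squares, as noise fitting typically does: AIC's constant penalty retains the parameter at every sample size, while BIC's log(n) penalty eventually rejects it, which is the consistency property.

```python
import math

def aic(n, rss, k):
    """Akaike information criterion for Gaussian errors (up to constants)."""
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    """Bayesian information criterion; the penalty grows with log(n)."""
    return n * math.log(rss / n) + k * math.log(n)

def prefers_big(criterion, n):
    # One spurious extra parameter reduces RSS by a fixed amount (noise fit).
    rss_small, rss_big = float(n), float(n) - 3.0
    return criterion(n, rss_big, 2) < criterion(n, rss_small, 1)
```

At n = 1000, `prefers_big(aic, 1000)` is True while `prefers_big(bic, 1000)` is False: the consistent criterion pays for this by being less efficient when extra parameters carry real signal, mirroring the cost of consistency the paper analyzes.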


2020, Vol 376 (1815), pp. 20190632
Author(s): Bradley C. Love

Notions of mechanism, emergence, reduction and explanation are all tied to levels of analysis. I cover the relationship between lower and higher levels, suggest a level of mechanism approach for neuroscience in which the components of a mechanism can themselves be further decomposed and argue that scientists' goals are best realized by focusing on pragmatic concerns rather than on metaphysical claims about what is 'real'. Inexplicably, neuroscientists are enchanted by both reduction and emergence. A fascination with reduction is misplaced given that theory is neither sufficiently developed nor formal to allow it, whereas metaphysical claims of emergence bring physicalism into question. Moreover, neuroscience's existence as a discipline is owed to higher-level concepts that prove useful in practice. Claims of biological plausibility are shown to be incoherent from a level of mechanism view and more generally are vacuous. Instead, the relevant findings to address should be specified so that model selection procedures can adjudicate between competing accounts. Model selection can help reduce theoretical confusions and direct empirical investigations. Although measures themselves, such as behaviour, blood-oxygen-level-dependent (BOLD) and single-unit recordings, are not levels of analysis, like levels, no measure is fundamental and understanding how measures relate can hasten scientific progress. This article is part of the theme issue 'Key relationships between non-invasive functional neuroimaging and the underlying neuronal activity'.


2018, Vol 48 (1), pp. 52-87
Author(s): Michael Schultz

Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference. To address this problem, the author proposes a new paradigm for inference-oriented model selection that evaluates models on the basis of a trade-off between model fit and model plausibility. By comparing the fits of sequentially nested models, it is possible to derive an empirical lower bound for the subjective plausibility of assumptions. To demonstrate the effectiveness of this approach, the method is applied to models of the relationship between cultural tastes and network composition.


Coatings, 2018, Vol 8 (12), pp. 421
Author(s): Na Ta, Lijun Zhang, Yong Du

Phase-field modeling coupled with a CALPHAD (calculation of phase diagrams) database was used to perform a series of two-dimensional phase-field simulations of microstructure evolution in γ + γ′/γ + γ′ Ni–Al–Cr model bond coat/substrate systems. With the aid of the simulated microstructure evolution, the relationship between the interdiffusion microstructure and the cohesiveness/aluminum-protective properties for different alloy compositions and bond coat thicknesses is fully discussed. A semi-quantitative tie-line selection criterion for the alloy composition of bond coat/substrate systems with identical elements was then proposed: the equilibrium Al concentrations of the γ′ and γ phases in the bond coat should be similar to those in the substrate, while the phase fraction of γ′ in the bond coat should be higher than in the substrate, thereby reducing the formation of a polycrystalline structure and thermal shock from the temperature gradient.


Author(s): Jiři Závorka, Radek Škoda

The problem with higher nuclear fuel enrichment is its high initial reactivity, which has a negative effect on the peaking factor, one of the license conditions. The second major problem is controlling the reactivity of the reactor and thereby keeping the multiplication factor in the core equal to 1. Long-term reactivity control in PWR reactors is typically accomplished with boric acid (H3BO3) dissolved in the coolant; its highest permissible concentration is determined by the requirement to maintain a negative reactivity coefficient. Another option is burnable absorbers. This work deals with the use of hafnium as an advanced type of burnable absorber. Based on modeling with the UWB1 computer code for the study of burnable absorbers, a new nuclear fuel cladding is designed with a thin protective layer made of hafnium. This layer acts as a burnable absorber that helps reduce the excess fuel reactivity and prolongs the life of the fuel assemblies, improving the economics of nuclear power plants. It would also serve as a protective layer, increasing the endurance and safety of nuclear power plants. Today, zirconium alloys are used exclusively for this purpose. Their main disadvantage is rapid high-temperature oxidation: a highly exothermic reaction between zirconium and water steam at temperatures above 800°C, during which hydrogen and a considerable amount of heat are released. The excess hydrogen, released heat, and damaged fuel cladding may deepen the severity and consequences of possible accidents. Another disadvantage of zirconium alloys is their gradual oxidation under normal operating conditions and the formation of ZrH, which leads to cladding embrittlement.
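
The scale of the hydrogen and heat release from the zirconium-steam reaction can be estimated from stoichiometry alone. A minimal sketch follows; the reaction enthalpy of roughly 586 kJ per mole of Zr is a commonly quoted literature value, not a figure from this paper.

```python
# Stoichiometry of the high-temperature zirconium-steam reaction:
#   Zr + 2 H2O -> ZrO2 + 2 H2   (highly exothermic above ~800 °C)
M_ZR = 91.224   # molar mass of Zr, g/mol
M_H2 = 2.016    # molar mass of H2, g/mol
DH_KJ_PER_MOL = 586.0  # approx. heat released per mole of Zr (literature value)

def steam_oxidation_per_kg_zr():
    """Hydrogen mass (g) and heat (MJ) released when 1 kg of Zr fully oxidizes."""
    mol_zr = 1000.0 / M_ZR                      # moles of Zr in 1 kg
    h2_g = mol_zr * 2 * M_H2                    # 2 mol H2 per mol Zr
    heat_mj = mol_zr * DH_KJ_PER_MOL / 1000.0   # kJ -> MJ
    return h2_g, heat_mj
```

Each kilogram of fully oxidized zirconium thus yields on the order of 44 g of hydrogen and several megajoules of heat, which is why slowing this reaction, for example with a protective hafnium layer as proposed here, matters for accident severity.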

