Models of hydrogen influence on the mechanical properties of metals and alloys

2020, pp. 136-160
Author(s): Yu. A. Yakovlev, V. A. Polyanskiy, Yu. S. Sedova, A. K. Belyaev

The article presents a survey of the key mechanical models used to describe hydrogen embrittlement, hydrogen cracking, and hydrogen-induced fracture. The main attention is paid to models that are used to calculate the stress-strain state of metal samples, parts and machine components and that have the potential for specific engineering applications. From a mechanical perspective, the effect of hydrogen on material properties is a classic small-parameter problem, since the hydrogen concentrations critical for the strength and ductility of metals are usually small. In the vast majority of models this effect is reduced to the redistribution of hydrogen within the material volume and the localization of concentrations in the critical fracture zones. The authors identify four main approaches that account for the influence of this small parameter: (i) hydrogen-enhanced decohesion (HEDE), (ii) hydrogen-enhanced localized plasticity (HELP), (iii) accounting for the additional internal pressure due to hydrogen dissolved in the metal, and (iv) a bi-continuum approach that captures the internal hydrogen pressure and the weakening of the material within a special model of a solid. The links between the main approaches are established. The publications are systematized, and the similarities and differences in the description of internal transport and accumulation of hydrogen in metals are highlighted. Although the majority of publications are devoted to the HEDE model, no data have yet been published on its application to real problems of engineering practice; only simulations of mechanical tests on cylindrical and prismatic samples have been considered. In fact, other less popular approaches have more practical applications. The main unresolved issue in the verification of all models is the local concentration of hydrogen, which is the source of premature failure of metals under load. All methods for measuring local concentrations are indirect. Even when sophisticated physical methods are applied, mechanical surface preparation is required, which destroys the initial natural distribution of hydrogen. The lack of reliable data on the distribution of hydrogen concentration excludes the possibility of unambiguous determination of all model parameters. On the one hand, this allows the models to be fitted to almost any experimental data; on the other hand, it reduces their predictive engineering value, since a qualitative fit is not sufficient for engineering strength analysis.
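Most of the surveyed approaches share a common transport step: hydrogen diffuses and drifts toward regions of high hydrostatic stress, where it accumulates. The sketch below illustrates that step in one dimension; the equation form is the standard stress-assisted diffusion law, but all parameter values and the stress profile are assumptions for illustration, not taken from the article.

```python
# Minimal 1D sketch of stress-assisted hydrogen redistribution, the transport
# step shared by most HEDE/HELP implementations (all values illustrative):
#   dC/dt = -d/dx [ -D dC/dx + (D*VH/(R*T)) * C * d(sigma_h)/dx ]
import numpy as np

D, VH, R, T = 1e-10, 2e-6, 8.314, 300.0    # m^2/s, m^3/mol, J/(mol K), K (assumed)
L, n = 1e-3, 201                           # 1 mm bar, grid points
dt, steps = 0.05, 20000                    # explicit stepping (stable: D*dt/dx**2 < 0.5)
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
sigma_h = 200e6 * np.exp(-((x - L / 2) / (L / 10))**2)  # hydrostatic stress peak (Pa), e.g. a notch
C = np.ones(n)                                          # normalized uniform initial concentration
grad_s = np.gradient(sigma_h, dx)

for _ in range(steps):
    flux = -D * np.gradient(C, dx) + (D * VH / (R * T)) * C * grad_s
    C -= dt * np.gradient(flux, dx)        # dC/dt = -dJ/dx
    C[0], C[-1] = C[1], C[-2]              # zero-flux boundaries

print(f"peak / far-field concentration ratio: {C.max() / C[0]:.2f}")
```

In HEDE-type models the elevated local concentration then degrades the cohesive strength, while in internal-pressure and bi-continuum models it enters as an added pressure or weakening term.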

2007, Vol. 73 (8), pp. 2468-2478
Author(s): Bernadette Klotz, D. Leo Pyle, Bernard M. Mackey

ABSTRACT A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe the non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases in which the shape of the curve followed the typical convex-upward form. In the remainder of the published examples, the curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time-delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and to z (the temperature increase needed to change the D value by a factor of 10) in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates change as a function of the square root of time would be consistent with a diffusion-limited process.
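With a specific rate proportional to 1/√t, the first-order kinetics integrate to log10(N/N0) = -b√t, i.e. log survival is linear in the square root of time. A minimal fitting sketch follows; the survival data are synthetic placeholders, not values from the paper.

```python
# Sketch of the primary model above: first-order inactivation whose specific
# rate decays as 1/sqrt(t), giving log10(N/N0) = -b*sqrt(t).
import numpy as np
from scipy.optimize import curve_fit

def log_survival(t, b):
    return -b * np.sqrt(t)

t = np.array([1, 2, 4, 8, 12, 16, 20], dtype=float)          # minutes under pressure
logS = np.array([-0.9, -1.4, -2.1, -3.0, -3.6, -4.2, -4.7])  # log10(N/N0), synthetic

(b,), _ = curve_fit(log_survival, t, logS)
print(f"fitted rate parameter b = {b:.3f} per sqrt(min)")
# A shoulder is handled by a delay parameter: -b * sqrt(max(t - t0, 0)).
```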


2022, Vol. 12 (1), pp. 1-24
Author(s): D. Reid, R. Fanni, A. Fourie

The cross-anisotropic nature of soil strength has been studied and documented for decades, including the increased propensity for cross-anisotropy in layered materials. However, current engineering practice for tailings storage facilities (TSFs) does not appear to generally include cross-anisotropy considerations in the development of shear strengths, despite the very common layering profile seen in subaerially deposited tailings. To provide additional data highlighting the strength cross-anisotropy of tailings, high-quality block samples from three TSFs were obtained and trimmed so that hollow cylinder torsional shear specimens could be sheared at principal stress angles of 0 and 45 degrees during undrained shearing. Consolidation procedures were carried out such that the drained rotation of the principal stress angle that would precede potential undrained shear events for below-slope tailings was reasonably simulated. The results indicated significant effects of cross-anisotropy on the undrained strength, instability stress ratio, contractive tendency and brittleness of each of the three tailings types. The magnitude of the cross-anisotropy effects seen was generally consistent with previously published data on sands.


Author(s): Shunki Nishii, Yudai Yamasaki

Abstract To achieve high thermal efficiency and low emissions in automobile engines, advanced combustion technologies using compression autoignition of premixtures have been studied, and model-based control has attracted attention for their practical application. Although simplified physical models have been developed for model-based control, appropriate values of their model parameters vary with the operating conditions, the engine driving environment, and engine aging. Herein, we studied an onboard method for adapting the model parameters of a heat release rate (HRR) model. The method adapts the model parameters using neural networks (NNs) that take the operating conditions into account, and it can respond to the driving environment and engine aging by training the NNs onboard. The training methods were studied in detail. Furthermore, the effectiveness of this adaptation method was confirmed by evaluating the prediction accuracy of the HRR model and through model-based control experiments.
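A minimal sketch of the adaptation idea follows. All names, network sizes, and parameter choices are assumptions for illustration, not the authors' implementation: a small NN maps operating conditions to HRR model parameters and is updated onboard against parameter values identified from measured combustion data.

```python
# Hypothetical sketch: NN-based adaptation of HRR model parameters to
# operating conditions, trainable onboard as new data arrives.
import torch
import torch.nn as nn

class ParamAdapter(nn.Module):
    def __init__(self, n_cond=3, n_params=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_cond, 16), nn.Tanh(), nn.Linear(16, n_params))

    def forward(self, cond):      # cond: e.g. engine speed, load, intake temp
        return self.net(cond)     # e.g. Wiebe-type shape/duration parameters

adapter = ParamAdapter()
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
cond = torch.randn(32, 3)         # stand-in for logged operating points
target = torch.randn(32, 2)       # stand-in for parameters identified from data
for _ in range(200):              # simplified onboard update loop
    loss = nn.functional.mse_loss(adapter(cond), target)
    opt.zero_grad(); loss.backward(); opt.step()
```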


2019, Vol. 36 (4), pp. 1364-1383
Author(s): Wilma Polini, Andrea Corrado

Purpose The purpose of this paper is to model how the geometric errors of a machined surface (manufacturing errors) are related to locator errors, workpiece form errors and machine tool volumetric error. A kinematic model is presented that relates the locator errors, the workpiece form deviations and the machine tool volumetric error. Design/methodology/approach The paper presents a general and systematic approach to geometric error modelling in drilling that accounts for the geometric errors of locator positioning, of the workpiece datum surface and of the machine tool. The model is implemented in four steps: (1) calculation of the deviation in the workpiece reference frame due to deviations of the locator positions; (2) evaluation of the deviation in the workpiece reference frame due to form deviations in the datum surfaces of the workpiece; (3) formulation of the volumetric error of the machine tool; and (4) combination of those three models. Findings The advantage of this approach is that it explicitly separates the source errors affecting drilling accuracy, thereby providing designers and/or field engineers with an informative guideline for accuracy improvement through suitable measures, i.e. component tolerancing in design, machining and so on. Two typical drilling operations are taken as examples to illustrate the generality and effectiveness of this approach. Research limitations/implications Some source errors, such as the dynamic behaviour of the machine tool, are not taken into consideration and remain to be modelled for practical applications. Practical implications The proposed kinematic model may be calibrated by means of experimental tests for a specific industrial application, to identify the values of the model parameters such as the standard deviations of the machine tool axes' positioning and rotational errors. It may then easily be used to predict the location deviation of a single hole or a pattern of holes. Originality/value The approaches in the literature model only one or at most two sources of machining error, such as fixturing, the machine tool or the workpiece datum. This paper goes beyond the state of the art because it considers the locator errors together with the form deviation of the datum surface in contact with the locators and the volumetric error of the machine tool.
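As an illustration of steps (1)-(4), each error source can be written as a small-displacement homogeneous transform and the three transforms composed to map a nominal hole position to its deviated position. The sketch below is a generic composition of that kind; all numeric values are placeholders, not the paper's.

```python
# Illustrative composition of the three error sources as small-displacement
# transforms (small rotations + translations); all numbers are placeholders.
import numpy as np

def hom(rot_small, trans):
    """Homogeneous transform for small rotations (rx, ry, rz) and a translation."""
    rx, ry, rz = rot_small
    T = np.eye(4)
    T[:3, :3] += np.array([[0, -rz, ry], [rz, 0, -rx], [-ry, rx, 0]])  # I + skew
    T[:3, 3] = trans
    return T

E_locator = hom((1e-4, 0, 5e-5), (0.01, -0.02, 0.005))  # step 1: locator deviations (rad, mm)
E_datum   = hom((0, 2e-5, 0),    (0.0, 0.0, 0.008))     # step 2: datum form deviation
E_machine = hom((0, 0, 1e-5),    (0.003, 0.001, 0.0))   # step 3: volumetric error

p_nominal = np.array([50.0, 30.0, 0.0, 1.0])            # nominal hole position (mm)
p_actual = E_machine @ E_datum @ E_locator @ p_nominal  # step 4: combined model
print("hole location deviation (mm):", (p_actual - p_nominal)[:3])
```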


2020, Vol. 2020, pp. 1-11
Author(s): Qinghu Liao, Zubair Ahmad, Eisa Mahmoudi, G. G. Hamedani

Many studies have suggested modifications and generalizations of the Weibull distribution to model nonmonotone hazards. In this paper, we combine the logarithms of two cumulative hazard rate functions and propose a new modified form of the Weibull distribution, which may be called the new flexible extended Weibull distribution. The hazard rate function of the proposed distribution exhibits flexible (monotone and nonmonotone) shapes. Three different characterizations along with some mathematical properties are provided. We also consider the maximum likelihood estimation procedure for the model parameters. For illustrative purposes, two real applications from reliability engineering with bathtub-shaped hazard functions are analyzed. These practical applications show that the proposed model provides better fits than the other nonnested models.
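The paper's exact distribution is not reproduced here, so the sketch below illustrates the maximum likelihood procedure on a stand-in with the same flavour: an additive-Weibull cumulative hazard, which likewise produces bathtub-shaped hazard rates. The failure times are placeholders.

```python
# MLE sketch for a Weibull extension. Stand-in model (assumed, not the paper's):
# cumulative hazard H(t) = (t/a)**b + (t/c)**d, density f(t) = h(t)*exp(-H(t)).
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, t):
    a, b, c, d = np.exp(theta)                        # enforce positivity
    H = (t / a)**b + (t / c)**d                       # cumulative hazard
    h = (b / a) * (t / a)**(b - 1) + (d / c) * (t / c)**(d - 1)  # hazard rate
    return -(np.log(h) - H).sum()                     # -sum log f(t)

t = np.sort(np.random.weibull(1.5, 200))              # placeholder failure times
res = minimize(neg_log_lik, x0=np.zeros(4), args=(t,), method="Nelder-Mead")
print("fitted (a, b, c, d):", np.exp(res.x))
```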


2020, Vol. 16 (1), pp. 65-78
Author(s): Gabriel J. Bowen, Brenden Fischer-Femal, Gert-Jan Reichart, Appy Sluijs, Caroline H. Lear

Abstract. Paleoclimatic and paleoenvironmental reconstructions are fundamentally uncertain because no proxy is a direct record of a single environmental variable of interest; all proxies are indirect and sensitive to multiple forcing factors. One productive approach to reducing proxy uncertainty is the integration of information from multiple proxy systems with complementary, overlapping sensitivity. Mostly, such analyses are conducted in an ad hoc fashion, either through qualitative comparison to assess the similarity of single-proxy reconstructions or through step-wise quantitative interpretations where one proxy is used to constrain a variable relevant to the interpretation of a second proxy. Here we propose the integration of multiple proxies via the joint inversion of proxy system and paleoenvironmental time series models in a Bayesian hierarchical framework. The “Joint Proxy Inversion” (JPI) method provides a statistically robust approach to producing self-consistent interpretations of multi-proxy datasets, allowing full and simultaneous assessment of all proxy and model uncertainties to obtain quantitative estimates of past environmental conditions. Other benefits of the method include the ability to use independent information on climate and environmental systems to inform the interpretation of proxy data, to fully leverage information from unevenly and differently sampled proxy records, and to obtain refined estimates of proxy model parameters that are conditioned on paleo-archive data. Application of JPI to the marine Mg/Ca and δ18O proxy systems at two distinct timescales demonstrates many of the key properties, benefits, and sensitivities of the method, and it produces new, statistically grounded reconstructions of Neogene ocean temperature and chemistry from previously published data. We suggest that JPI is a universally applicable method that can be implemented using proxy models of wide-ranging complexity to generate more robust, quantitative understanding of past climatic and environmental change.
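A toy version of the joint-inversion idea for the Mg/Ca and δ18O pair is sketched below: a single observation of each proxy, simple forward models with illustrative calibration constants, weak priors, and a Metropolis sampler recovering the joint posterior of temperature and seawater δ18O. None of the numbers are from the paper.

```python
# Toy joint inversion: infer temperature T and seawater d18O from paired
# Mg/Ca and calcite-d18O observations. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mgca_obs, d18oc_obs = 3.2, -1.0          # hypothetical measurements
s_mgca, s_d18o = 0.15, 0.08              # assumed proxy noise

def log_post(T, d18ow):
    mgca = 0.38 * np.exp(0.09 * T)            # Mg/Ca thermometer (illustrative)
    d18oc = d18ow + (16.9 - T) / 4.38         # calcite d18O model (illustrative)
    ll = -0.5 * ((mgca - mgca_obs) / s_mgca)**2 \
         - 0.5 * ((d18oc - d18oc_obs) / s_d18o)**2
    lp = -0.5 * ((T - 20) / 10)**2 - 0.5 * (d18ow / 1.0)**2   # weak priors
    return ll + lp

T, w, chain = 20.0, 0.0, []
for _ in range(20000):                   # Metropolis sampler
    Tp, wp = T + rng.normal(0, 0.5), w + rng.normal(0, 0.1)
    if np.log(rng.random()) < log_post(Tp, wp) - log_post(T, w):
        T, w = Tp, wp
    chain.append((T, w))
post = np.array(chain[5000:])            # drop burn-in
print(f"T: {post[:, 0].mean():.1f} ± {post[:, 0].std():.1f} °C")
```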


2020, Vol. 34 (01), pp. 19-26
Author(s): Chong Chen, Min Zhang, Yongfeng Zhang, Weizhi Ma, Yiqun Liu, et al.

Recent studies on recommendation have largely focused on exploring state-of-the-art neural networks to improve the expressiveness of models, while typically applying the Negative Sampling (NS) strategy for efficient learning. Despite their effectiveness, two important issues have not been well considered in existing methods: 1) NS suffers from dramatic fluctuation, making it difficult for sampling-based methods to achieve optimal ranking performance in practical applications; 2) although heterogeneous feedback (e.g., view, click, and purchase) is widespread in many online systems, most existing methods leverage only one primary type of user feedback, such as purchase. In this work, we propose a novel non-sampling transfer learning solution, named Efficient Heterogeneous Collaborative Filtering (EHCF), for Top-N recommendation. It can not only model fine-grained user-item relations, but also efficiently learn model parameters from the whole heterogeneous data (including all unlabeled data) with rather low time complexity. Extensive experiments on three real-world datasets show that EHCF significantly outperforms state-of-the-art recommendation methods in both traditional (single-behavior) and heterogeneous scenarios. Moreover, EHCF shows significant improvements in training efficiency, making it more applicable to real-world large-scale systems. Our implementation has been released to facilitate further developments on efficient whole-data-based neural methods.
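The efficiency of non-sampling (whole-data) learning rests on an algebraic shortcut: for a dot-product prediction model with a constant weight on unobserved pairs, the sum of squared scores over all user-item pairs collapses into a trace of small Gram matrices, so every unlabeled pair contributes without being enumerated or sampled. A minimal sketch of that trick, with weights and sizes assumed, follows.

```python
# Whole-data weighted squared loss for y_ui = p_u . q_i, without sampling:
# sum over ALL pairs of y_ui**2 equals trace((P^T P) @ (Q^T Q)).
import numpy as np

n_users, n_items, k = 1000, 5000, 64
P = np.random.randn(n_users, k) * 0.01   # user embeddings
Q = np.random.randn(n_items, k) * 0.01   # item embeddings
pos = [(0, 10), (0, 42), (3, 7)]         # observed interactions (toy)
c_pos, c_neg = 1.0, 0.01                 # confidence weights (assumed)

u_idx = np.array([u for u, _ in pos])
i_idx = np.array([i for _, i in pos])
y_pos = np.einsum("ij,ij->i", P[u_idx], Q[i_idx])   # scores of observed pairs

all_pairs_sq = np.trace(P.T @ P @ (Q.T @ Q))        # O(|U|k^2 + |I|k^2), not O(|U||I|)
loss = ((c_pos - c_neg) * y_pos**2 - 2.0 * c_pos * y_pos).sum() + c_neg * all_pairs_sq
print("whole-data loss (up to an additive constant):", loss)
```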


Transport
2017, Vol. 33 (2), pp. 489-501
Author(s): Oussama Derbel, Tamás Péter, Benjamin Mourllion, Michel Basset

In the case of the Intelligent Driver Model (IDM), the velocity–density law V(D) that this dynamic system actually follows is not defined explicitly; only the dynamic behaviour of the vehicles/drivers is specified. The logical question is therefore whether the related investigations reproduce an existing, known law or reveal a new connection; specifically, which class or type of function is realized by the IDM? The publication presents a model analysis whose goal was the exploration of a feature of the IDM that had, as yet, remained hidden. The theoretical results are useful, and the analysis is also important in practice in the field of hybrid control. The transfer of IDM groups through large-scale networks has special practical significance, for example for convoys, groups of special vehicles, or safety escorts for delegations. In such cases the large-scale network traffic characteristics and the IDM traffic characteristics must be taken into account simultaneously, and the speed–density laws are among the important characteristics. Large networks are modelled effectively with macroscopic models, whereas the IDM is microscopic; careful modelling must therefore not contradict the speed–density law applied on the network sections through which an IDM convoy passes. In terms of practical applications, it is thus important to recognize what kind of speed–density law is followed by IDM convoys in traffic. Accordingly, our goal was not the validation of the model but the exploration of a further feature of an already validated model; separate validation was unnecessary, since many validated applications of this model have been demonstrated in practice, and the model parameter values applied in our calculations remained within the ranges used in the literature. This paper presents a new approach to Velocity–Density Model (VDM) synthesis, which consists in modelling the density and the velocity (macroscopic parameters) separately. From this study, the safety time headway (a microscopic parameter) can be identified from macroscopic data by means of an interpolation method in the developed velocity–density map. By combining the density and velocity models, a generalized new VDM is developed, and it is shown that several VDMs from the literature, as well as their properties, can be derived from it by fixing some of its parameters.
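For reference, the stationary velocity–density relation implied by the standard IDM can be computed directly: in steady homogeneous flow the acceleration and the velocity difference both vanish, leaving 1 - (v/v0)^4 = ((s0 + vT)/s)^2 with net gap s = 1/ρ - l. The sketch below solves this numerically using typical literature parameter values, not the paper's.

```python
# Equilibrium velocity-density relation of the standard IDM (typical parameters).
import numpy as np
from scipy.optimize import brentq

v0, T, s0, l = 33.3, 1.6, 2.0, 5.0        # desired speed (m/s), headway (s), jam gap (m), length (m)

def equilibrium_v(rho):
    s = 1.0 / rho - l                     # net gap at density rho (veh/m)
    if s <= s0:
        return 0.0                        # jammed
    f = lambda v: 1 - (v / v0)**4 - ((s0 + v * T) / s)**2
    return brentq(f, 0.0, v0)             # root of the steady-state condition

for rho in (0.01, 0.03, 0.06, 0.10):
    print(f"rho = {rho:4.2f} veh/m -> v_eq = {equilibrium_v(rho):5.2f} m/s")
```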


1983, Vol. 100 (1), pp. 211-220
Author(s): G. E. L. Morris, I. E. Currah

SUMMARY For many horticultural crops the distribution of weight over size grades is of more importance than the total weight. This paper shows how simply determined features of the interrelationships of the weight, size and shape of an individual in the crop can be combined to provide estimates of various aspects of the distribution of crop weight over size grades. The two relationships required are (i) the probability density function of the grading variable for the crop; (ii) a function relating the weight of an individual to the corresponding value of the grading variable. The paper shows how each of these can be determined either from published data or by simple experiment. Examples using data on onions and carrots are given to illustrate this and also to show some of the more important practical applications of the methods. For example, they allow the results of grading with one set of size grades to be extrapolated to a different set of grades without recourse to further measurement or experimentation; this is illustrated using published data on carrots. Other possible uses are also discussed and outlined.
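A short sketch of the two-relationship method: assume (i) a lognormal density for the grading variable and (ii) a power-law weight-size relation, then integrate their product over each grade interval to obtain the weight fractions. The distribution and coefficients below are illustrative assumptions, not the paper's fitted values.

```python
# Combine a grading-variable density f(x) with a weight-size function w(x)
# and integrate over grade intervals to estimate weight fractions per grade.
import numpy as np
from scipy import stats
from scipy.integrate import quad

f = stats.lognorm(s=0.3, scale=55.0).pdf      # grading variable, e.g. diameter (mm), assumed
w = lambda x: 4.0e-4 * x**2.9                 # weight of one individual (g), illustrative

grades = [(0, 45), (45, 60), (60, 75), (75, np.inf)]
total, _ = quad(lambda x: w(x) * f(x), 0, np.inf)   # expected weight per individual
for lo, hi in grades:
    part, _ = quad(lambda x: w(x) * f(x), lo, hi)
    print(f"grade {lo}-{hi} mm: {100 * part / total:5.1f}% of crop weight")
```

Regrading with a different set of size limits then only requires re-integrating over the new intervals, which is exactly the extrapolation the paper describes.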


Symmetry
2020, Vol. 12 (1), p. 166
Author(s): Eric Affum, Xiasong Zhang, Xiaofen Wang, John Bosco Ansuura

In line with the proposed 5th generation network, content centric networking/named data networking (CCN/NDN) has been offered as one of the promising paradigms to cope with the communication needs of future realistic network communications. CCN/NDN allows network communication based on content names and also allows users to obtain information from any of the nearest intermediary caches on the network. Consequently, the ability of cached content to protect itself is essential, since content can be cached on any node anywhere and publishers may not have total control over their own published data. Attribute based encryption (ABE) is a preferable approach for making cached content self-securing, since it has the special property of encrypting data under access policies. However, most of the ABE schemes proposed for CCN/NDN suffer from some loopholes: they are not flexible in the expression of access policies, they are inefficient, they are based on bilinear maps with pairings, and they are vulnerable to attacks by quantum algorithms. Hence, we propose a ciphertext policy attribute based encryption access control (CP-ABE AC) scheme built on a lightweight ideal lattice and based on the ring learning with errors (R-LWE) problem, and demonstrate its use in practical applications. The proposed scheme is proved secure and efficient under the decision ring-LWE problem in the selective set model. To achieve an efficient scheme, we used an efficient trapdoor technique, and the access tree representation of the access structure describing the access policies was modified into a new structure based on a reduced ordered binary decision diagram (reduce-OBDD). This access structure can support Boolean operations such as AND, NOT, OR, and threshold gates. The final results showed that the proposed scheme is secure and efficient for applications, thereby supporting CCN/NDN as a promising paradigm.
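To illustrate the access-structure side only: a Boolean policy such as (doctor AND cardiology) OR admin can be stored as a small reduced ordered binary decision diagram and evaluated by a single root-to-terminal walk. The node layout below is hand-built for illustration and is independent of the lattice-based cryptography in the scheme.

```python
# Toy reduce-OBDD for the policy (doctor AND cardiology) OR admin.
# Each node: (attribute, child_if_absent, child_if_present); terminals: True/False.
policy = {
    "n0": ("admin",      "n1",  True),   # admin present -> grant immediately
    "n1": ("doctor",     False, "n2"),   # otherwise require doctor ...
    "n2": ("cardiology", False, True),   # ... AND cardiology
}

def evaluate(node, attrs):
    """Walk the OBDD with a user's attribute set; returns the access decision."""
    while node not in (True, False):
        var, lo, hi = policy[node]
        node = hi if var in attrs else lo
    return node

print(evaluate("n0", {"doctor", "cardiology"}))  # True
print(evaluate("n0", {"nurse"}))                 # False
```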

