Damage and Failure in a Statistical Crack Model

2020 ◽  
Vol 10 (23) ◽  
pp. 8700
Author(s):  
L.G. Margolin

In this paper I will take a close look at a statistical crack model (SCM) as used in engineering computer codes to simulate fracture at high strain rates. My general goal is to understand the macroscopic behavior effected by the microphysical processes incorporated into an SCM. More specifically, I will assess the importance of including local interactions between cracks in the growth laws of an SCM. My strategy will be to construct a numerical laboratory that represents a single computational cell containing a realization of a statistical distribution of cracks. The cracks will evolve by the microphysical models of the SCM, leading to quantifiable damage and failure of the computational cell. I will use the numerical data from randomly generated ensembles of the fracture process to establish scaling laws that will modify and simplify the implementation of the SCM in large-scale engineering codes.
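As an illustration of what such a numerical laboratory might look like, here is a minimal Python sketch: one computational cell holds an exponentially distributed crack population, cracks above a stress-dependent threshold grow exponentially (with no crack-crack interactions), and damage is an area-based measure over the population. All names, growth-law constants, and the failure criterion here are illustrative assumptions, not the paper's actual SCM.

```python
import numpy as np

def evolve_cell(rng, n_cracks=200, stress=1.0, a_crit=0.02,
                growth_rate=2.0, dt=1e-3, t_max=5.0, damage_fail=0.05):
    """Grow a random crack population inside one computational cell and
    return the time at which damage reaches the failure threshold."""
    a = rng.exponential(scale=0.01, size=n_cracks)  # initial crack half-lengths
    t = 0.0
    while t < t_max:
        active = a * stress > a_crit          # only large-enough cracks grow
        a[active] *= 1.0 + growth_rate * dt   # exponential growth law
        if np.mean(a**2) >= damage_fail:      # area-based damage measure
            return t
        t += dt
    return t_max

# ensemble of independent realizations of the fracture process
failure_times = [evolve_cell(np.random.default_rng(s)) for s in range(20)]
print(f"mean failure time: {np.mean(failure_times):.2f}")
```

Scaling laws of the kind the paper describes would then be fitted to statistics (such as the failure-time distribution) collected over many such ensembles.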



2013 ◽  
pp. 1697-1723
Author(s):  
Sébastien Dartevelle

Large-scale volcanic eruptions are inherently hazardous events and hence cannot be described by detailed and accurate in situ measurements. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics and inadequately constrained in terms of initial, boundary, and inflow conditions. Consequently, little to no real-time data exist to validate computer codes developed to model these geophysical events as a whole. However, code validation remains a necessary step, particularly as volcanologists increasingly use numerical data for the assessment and mitigation of volcanic hazards. We suggest performing the validation task in volcanology in two steps, as follows. First, numerical geo-modelers should perform the validation task against simple and well-constrained analog (small-scale) experiments targeting the key physics controlling volcanic cloud phenomenology. This first step would be a validation analysis as classically performed in engineering and in CFD sciences. In this case, geo-modelers emphasize validating against analog experiments that unambiguously represent the key driving physics. The second “geo-validation” step is to compare numerical results against geophysical-geological (large-scale) events which are described “as thoroughly as possible” in terms of boundary, initial, or flow conditions. Although this last step can only be a qualitative comparison against a not fully closed system event (hence it is not per se a validation analysis), it nevertheless attempts to rationally use numerical geo-models for large-scale volcanic phenomenology. This last step, named “field validation” or “geo-validation”, is just as important in order to convince policy makers of the adequacy of numerical tools for modeling large-scale explosive volcanism phenomenology.


2012 ◽  
Vol 9 (6) ◽  
pp. 7317-7378 ◽  
Author(s):  
A. Kleidon ◽  
E. Zehe ◽  
U. Ehret ◽  
U. Scherer

Abstract. The organization of drainage basins shows some reproducible phenomena, as exemplified by self-similar fractal river network structures and typical scaling laws, and these have been related to energetic optimization principles, such as minimization of stream power, minimum energy expenditure or maximum "access". Here we describe the organization and dynamics of drainage systems using thermodynamics, focusing on the generation, dissipation and transfer of free energy associated with river flow and sediment transport. We argue that the organization of drainage basins reflects the fundamental tendency of natural systems to deplete driving gradients as fast as possible through the maximization of free energy generation, thereby accelerating the dynamics of the system. This effectively results in the maximization of sediment export to deplete topographic gradients as fast as possible and potentially involves large-scale feedbacks to continental uplift. We illustrate this thermodynamic description with a set of three highly simplified models related to water and sediment flow and describe the mechanisms and feedbacks involved in the evolution and dynamics of the associated structures. We close by discussing how this thermodynamic perspective is consistent with previous approaches and the implications that such a thermodynamic description has for the understanding and prediction of sub-grid scale organization of drainage systems and preferential flow structures in general.
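The gradient-depletion argument can be made concrete with a toy relaxation model (purely illustrative, and not one of the three simplified models in the paper): sediment export grows with relief, so the driving topographic gradient is depleted fastest when it is steepest, and uplift sets the balance point.

```python
import numpy as np

def relief_evolution(h0=1000.0, uplift=0.0, k=0.002, dt=1.0, steps=2000):
    """Toy model: sediment export scales with relief h, depleting the
    topographic gradient; returns the relief time series."""
    h = h0
    series = []
    for _ in range(steps):
        export = k * h          # export flux grows with the driving gradient
        h += (uplift - export) * dt
        series.append(h)
    return np.array(series)

h = relief_evolution()
# with no uplift, relief decays toward zero as the gradient is depleted
print(h[0], h[-1])
```

With a nonzero `uplift`, the same loop relaxes toward the steady relief `uplift / k`, which is the kind of large-scale feedback between export and uplift the abstract alludes to.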


2019 ◽  
Author(s):  
Xiaoqi Xu ◽  
Chunsong Lu ◽  
Yangang Liu ◽  
Wenhua Gao ◽  
Yuan Wang ◽  
...  

Abstract. Overprediction of precipitation over the Tibetan Plateau is often found in numerical simulations and is thought to be related to coarse grid sizes or inaccurate large-scale forcing. In addition to confirming the important role of model grid size, this study shows that liquid-phase precipitation parameterization is another key culprit, and the underlying physical mechanisms are revealed. A typical summer plateau precipitation event is simulated with the Weather Research and Forecasting (WRF) model by introducing different parameterizations of liquid-phase microphysical processes into the commonly used Morrison scheme, including autoconversion, accretion, and entrainment-mixing mechanisms. All simulations reproduce the general spatial distribution and temporal variation of precipitation. Precipitation in the high-resolution domain is less overpredicted than in the low-resolution domain. The accretion process plays a more important role than the other liquid-phase processes in simulating precipitation. Employing the accretion parameterization that accounts for raindrop size brings the total surface precipitation closest to the observations, a result supported by the Heidke skill scores. The physical reason is that this accretion parameterization suppresses spurious accretion and liquid-phase precipitation when cloud droplets are too small to initiate precipitation.
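For context, the two liquid-phase process rates at issue can be sketched in their classic Kessler (1969)-type forms (a textbook illustration, not the specific Morrison-scheme variants compared in the study); the rate constants below are standard textbook values, used here only for illustration.

```python
def kessler_autoconversion(qc, qc0=1e-3, k1=1e-3):
    """Kessler-type autoconversion: cloud water above a threshold
    qc0 (kg/kg) converts to rain at rate k1 (1/s)."""
    return max(0.0, k1 * (qc - qc0))

def kessler_accretion(qc, qr, k2=2.2):
    """Kessler-type accretion: rain collecting cloud water as it falls."""
    return k2 * qc * qr**0.875

qc, qr = 2e-3, 1e-4   # cloud and rain water mixing ratios (kg/kg)
print(kessler_autoconversion(qc), kessler_accretion(qc, qr))
```

A size-aware accretion scheme of the kind the study favors would additionally gate the accretion rate on the mean droplet size, which is what suppresses spurious accretion when droplets are too small to precipitate.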


2020 ◽  
Vol 17 (163) ◽  
pp. 20190655 ◽  
Author(s):  
Fatma Ayancik ◽  
Frank E. Fish ◽  
Keith W. Moored

Cetaceans convert dorsoventral body oscillations into forward velocity through a complex interplay between their morphological and kinematic features and the fluid environment. However, it is unknown to what extent the morpho-kinematic features of cetaceans are intertwined to maximize their efficiency. By interchanging the shape and kinematic variables of five cetacean species, the interplay of their flukes' morpho-kinematic features is examined by characterizing their thrust, power and propulsive efficiency. It is determined that the shape and kinematics of the flukes have considerable influence on force production and power consumption. Three-dimensional heaving and pitching scaling laws are developed by considering both added-mass and circulatory-based forces, and are shown to closely model the numerical data. Using the scaling relations as a guide, it is determined that the added-mass forces are important in predicting the trend between efficiency and aspect ratio; however, the thrust and power are driven predominantly by the circulatory forces. The scaling laws also reveal that there is an optimal dimensionless heave-to-pitch ratio h* that maximizes the efficiency. Moreover, the optimal h* varies with the aspect ratio, the amplitude-to-chord ratio and the Lighthill number. This indicates that the shape and kinematics of propulsors are intertwined; that is, there are specific kinematics tailored to the shape of a propulsor in order to maximize its propulsive efficiency.
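Scaling laws of this kind are typically calibrated by least-squares fitting of the free coefficients multiplying the added-mass and circulatory basis terms to the numerical data. The generic sketch below uses synthetic data and placeholder basis terms (not the paper's actual scaling relations) to show the fitting step.

```python
import numpy as np

# synthetic "numerical data": thrust coefficient vs. reduced frequency k
k = np.linspace(0.5, 2.0, 20)
ct_data = 0.8 * k**2 + 0.3 * k + np.random.default_rng(0).normal(0, 0.01, k.size)

# scaling-law ansatz: an added-mass-like term ~ k^2 plus a
# circulatory-like term ~ k, with unknown coefficients
basis = np.column_stack([k**2, k])
coeffs, *_ = np.linalg.lstsq(basis, ct_data, rcond=None)
print(coeffs)  # approximately recovers the generating values [0.8, 0.3]
```

Once calibrated, sweeping such a law over h* for a fixed shape is how an optimum of the kind reported (efficiency-maximizing heave-to-pitch ratio) would be located.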


In sample surveys the final estimate is prepared from information collected for sample units of definite size (area) located at random. Large-scale work involves journeys from one sample unit to another, so that both the cost and the precision of the result depend on the size (area) as well as the number (density per sq. mile) of sample units. The object of planning is to settle these two quantities in such a way that (a) the precision is a maximum for any assigned cost, or (b) the cost is a minimum for any assigned precision. The present paper discusses the solution for (1) uni-stage sampling (with randomization in one single stage), both in the abstract and in the concrete, and for (2) multi-stage sampling (with randomization in more than one stage), mostly in the abstract. The whole area is considered here as a statistical field consisting of a large number of basic cells, each having a definite value of the variate under study. These values (with suitable grouping) form an abstract frequency distribution, corresponding to which there exists a set of associated space distributions (of which the observed field is but one) generated by allocating the variate values to different cells in different ways. This raises novel problems which are space generalizations of the classical theory of sampling distribution and estimation. On the applied side it also enables classification of the technique into two types: (a) ‘individual’ or (b) ‘grid’ sampling, depending on whether each sample unit consists of only one or more than one basic cell. For most space distributions the precision of the result is nearly equal for both types of sampling; these are called fields of random type. For certain fields (including those usually observed in nature) precision depends on the sampling type; these are fields of non-random type. Application to estimating the acreage under jute covering 60,000 sq. miles in Bengal in 1941-2 is described with numerical data. The margin of error of the sample estimate was about 2%, while the cost was only a fifteenth of that of a complete census made in the same year by an official agency.
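The cost-precision trade-off described above can be sketched as a small constrained search over unit size and unit number; the variance and cost models below are illustrative placeholders (not the paper's), but they capture the essential structure: per-unit variance falls with grid area, while travel cost per unit falls as units become denser.

```python
import math

def variance(n, a, s0=1.0, g=0.3):
    """Illustrative variance model: overall variance falls as 1/n,
    and the per-unit variance falls with grid area a."""
    return s0 / (n * a**g)

def cost(n, a, c_enum=1.0, c_travel=4.0, region=1000.0):
    """Enumeration cost grows with total sampled area; travel cost per
    unit shrinks as units get denser (~ sqrt(region / n) per journey)."""
    return n * (c_enum * a + c_travel * math.sqrt(region / n))

# minimize cost subject to a precision requirement (variance cap):
# problem (b) in the abstract; (a) is the dual of the same search
best = min(
    ((n, a) for n in range(50, 2001, 50) for a in (0.5, 1, 2, 4, 8)
     if variance(n, a) <= 2e-3),
    key=lambda na: cost(*na),
)
print(best)
```

The search trades off larger grids (cheaper per unit of variance reduction, dearer to enumerate) against more numerous small units (dearer in travel), which is exactly the planning problem the abstract poses.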


Author(s):  
Zhongheng Guo ◽  
Lingyu Sun ◽  
Taikun Wang ◽  
Junmin Du ◽  
Han Li ◽  
...  

At the conceptual design phase of a large-scale underwater structure, a small-scale model in a water tank is often used for experimental verification of kinematic principles and structural safety. However, a general scaling law for fluid-structure interaction (FSI) problems has not been established. In the present paper, scaling laws are investigated for three typical underwater FSI problems: a rigid body moving according to a prescribed kinematic equation; a rigid body driven by time-dependent fluid loads from a given initial condition; and an elastic-plastic body that moves and then deforms under underwater impact loads. First, the power laws for these three types of FSI problems were derived by the dimensional analysis method. Then, the laws for the first two types were verified by numerical simulation. In addition, a multipurpose small-scale water-tank test device was developed for numerical model updating. For the third type of problem, dimensional analysis is no longer suitable because of its limitations in identifying the fluid pressure and structural stress, so a simulation-based procedure for dynamics evaluation of the large-scale structure is provided. The results show that, for some complex FSI problems, a safely tested small-scale prototype does not guarantee that the full-scale product is also safe when both pressure and stress are the main concerns; further demonstration is needed, at least by numerical simulation.
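One standard outcome of such a dimensional analysis for gravity-dominated free-surface problems is Froude similarity. The sketch below lists the textbook Froude scale factors for a geometrically scaled model tested in the same fluid; the specific power laws derived in the paper may differ, so this is illustrative only.

```python
def froude_scale_factors(lam):
    """Froude-similarity scale factors for a 1:1/lam geometric model
    in the same fluid (gravity-driven free-surface FSI). lam is the
    model-to-full-scale length ratio, e.g. 0.1 for a 1:10 model."""
    return {
        "length": lam,
        "velocity": lam**0.5,   # equal Froude number: v ~ sqrt(g L)
        "time": lam**0.5,       # time follows length / velocity
        "force": lam**3,        # force ~ rho g L^3, same fluid
        "pressure": lam,        # pressure ~ rho g L
    }

# a 1:10 tank model of a full-scale underwater structure
f = froude_scale_factors(1 / 10)
print(f["velocity"], f["force"])
```

Note that stresses in the structure scale like pressure under these rules, which is precisely why a small-scale test surviving safely does not by itself certify the full-scale structure.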


Author(s):  
C. L. Ford ◽  
J. F. Carrotte ◽  
A. D. Walker

This paper examines the effect of compressor-generated inlet conditions on the airflow uniformity through lean-burn fuel injectors. Any resulting nonuniformity in the injector flow field can affect local fuel-air ratios and hence emissions performance. The geometry considered is typical of the lean-burn systems currently being proposed for future, low-emission aero engines. Initially, Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) predictions were used to examine the flow-field development between compressor exit and the inlet to the fuel injector. This enabled the main flow-field features in this region to be characterized, along with identification of the various stream-tubes captured by the fuel injector passages. The predictions indicate that the resulting flow fields entering the injector passages are not uniform. This is particularly evident in the annular passages furthest from the injector centerline, which pass the majority of the flow that subsequently forms the main reaction zone within the flame tube. Detailed experimental measurements were also undertaken on a fully annular facility incorporating an axial compressor and a lean-burn combustion system. The measurements were obtained at near-atmospheric pressure and temperature under nonreacting conditions. Time-resolved and time-averaged data were obtained at various locations, including measurements of the flow field issuing from the various fuel injector passages. In this way any nonuniformity in these flow fields could be quantified. In conjunction with the numerical data, the sources of nonuniformity in the injector exit plane were identified. For example, a large-scale bulk variation (±10%) of the injector flow field was attributed to the development of the flow field upstream of the injector, compared with localized variations (±5%) generated by the injector swirl-vane wakes.
Using these data, the potential effects on fuel injector emissions performance can be assessed.
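Separating the bulk (±10%) and vane-wake (±5%) contributions of this kind is naturally done with a circumferential mode decomposition: low-order Fourier modes capture the upstream bulk variation, while a mode at the vane-passing count captures the wakes. The profile below is synthetic, and the assumption of eight vane wakes per revolution is purely illustrative.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
# synthetic injector-exit velocity profile: a once-per-revolution bulk
# variation (±10%) plus eight swirl-vane wakes (±5%)
v = 1.0 + 0.10 * np.sin(theta) + 0.05 * np.sin(8 * theta)

c = np.fft.rfft(v) / v.size     # circumferential Fourier decomposition
bulk_amp = 2 * abs(c[1])        # low-order mode: upstream bulk variation
wake_amp = 2 * abs(c[8])        # mode at the vane count: wake variation
print(bulk_amp, wake_amp)       # recovers the 0.10 and 0.05 amplitudes
```

Applied to the measured exit-plane traverses, the same decomposition attributes each amplitude to its source (upstream development versus swirl-vane wakes).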

