Comment on “Numerical models of flow patterns around a rigid inclusion in a viscous matrix undergoing simple shear: implications of model parameters and boundary conditions” by N. Mandal, S.K. Samanta and C. Chakraborty [Journal of Structural Geology 27 (2005) 1599–1609]

2006 ◽  
Vol 28 (7) ◽  
pp. 1371-1374 ◽  
Author(s):  
Fernando O. Marques ◽  
Rui Taborda ◽  
Santanu Bose

1987 ◽  
Vol 52 (8) ◽  
pp. 1888-1904
Author(s):  
Miloslav Hošťálek ◽  
Ivan Fořt

A theoretical model is described for the mean two-dimensional flow of a homogeneous charge in a flat-bottomed cylindrical tank with radial baffles and a six-blade turbine disc impeller. The model is based on vorticity transport in the bulk of the swirling liquid through eddy diffusion, characterized by a constant turbulent (eddy) viscosity. Solving an equation analogous to the Stokes simplification of the equations of motion for creeping flow yields the stream-function field and the axial and radial velocity components of the mean flow throughout the charge. The modelling results are compared with experimental and theoretical data published by different authors, and good qualitative and quantitative agreement is found. Advantages of the proposed model are the very simple schematization of the system volume needed to impose the boundary conditions (only the regions above and below the impeller plane of symmetry are distinguished), the explicit dependence of the model on its parameters (transparency and low computational demands), and the possibility of adapting the model, by changing the boundary conditions, to other agitating set-ups with a radial-axial flow pattern.
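For readers unfamiliar with the stream-function formulation, the following minimal Python/NumPy sketch shows the kinematic step behind such models: recovering the radial and axial velocity components of an axisymmetric mean flow from a stream function on an (r, z) grid. The grid and the stream-function field below are illustrative placeholders, not the solution of the eddy-viscosity model described above.

import numpy as np

# Hypothetical cylindrical grid (radius r, height z); values are illustrative only.
nr, nz = 50, 80
r = np.linspace(0.01, 0.15, nr)          # avoid r = 0 so 1/r stays finite
z = np.linspace(0.0, 0.30, nz)
R, Z = np.meshgrid(r, z, indexing="ij")

# Placeholder stream function psi(r, z); in the model it would come from solving
# the creeping-flow (Stokes-like) equation with constant eddy viscosity.
psi = np.sin(np.pi * Z / z[-1]) * R**2 * (r[-1] - R)

# Axisymmetric Stokes stream-function relations:
#   u_z =  (1/r) dpsi/dr,   u_r = -(1/r) dpsi/dz
dpsi_dr = np.gradient(psi, r, axis=0)
dpsi_dz = np.gradient(psi, z, axis=1)
u_z = dpsi_dr / R
u_r = -dpsi_dz / R

print(u_z.shape, u_r.shape)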


2010 ◽  
Vol 37 (4) ◽  
pp. 600-610 ◽  
Author(s):  
Vladan Kuzmanovic ◽  
Ljubodrag Savic ◽  
John Stefanakos

This paper presents two-dimensional (2D) and three-dimensional (3D) numerical models for the unsteady phased thermal analysis of roller-compacted-concrete (RCC) dams. The time evolution of the thermal field is modeled using the actual dam shape, the RCC construction technology, and an adequate description of the material properties. Model calibration and verification were based on field investigations of the Platanovryssi dam, the highest RCC dam in Europe. The results of a long-term thermal analysis with actual initial and boundary conditions show good agreement with the observed temperatures. The influence of the relevant parameters on the thermal field of RCC dams is analyzed. It is concluded that the 2D model is appropriate for the phased thermal analysis, and that the boundary conditions and the mixture properties have the greatest influence on the thermal behavior of RCC dams.
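To make the kind of computation concrete, here is a minimal Python/NumPy sketch of a 2D unsteady heat-conduction solve of the sort that underlies a phased thermal analysis; the geometry, material properties, and boundary temperatures are assumed values, not the calibrated Platanovryssi model.

import numpy as np

# Assumed properties and geometry, for illustration only.
alpha = 1.0e-6            # thermal diffusivity of RCC, m^2/s (assumed)
dx = dy = 0.5             # grid spacing, m
nx, ny = 60, 40           # cross-section grid (e.g. dam width x height)
dt = 0.2 * dx**2 / alpha  # satisfies the explicit 2D stability limit dt <= dx^2/(4*alpha)

T = np.full((nx, ny), 25.0)   # placement temperature plus a crude hydration-heat rise, deg C
T_air = 15.0                  # ambient temperature imposed on the boundary, deg C

for step in range(2000):
    Tn = T.copy()
    # explicit FTCS update of the heat equation: dT/dt = alpha * (T_xx + T_yy)
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + alpha * dt * (
        (Tn[2:, 1:-1] - 2.0 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1]) / dx**2
        + (Tn[1:-1, 2:] - 2.0 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2]) / dy**2
    )
    # simple Dirichlet boundary conditions (all faces held at ambient temperature)
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_air

print("peak temperature after the simulated period:", round(T.max(), 2), "deg C")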


2000 ◽  
Vol 663 ◽  
Author(s):  
J. Samper ◽  
R. Juncosa ◽  
V. Navarro ◽  
J. Delgado ◽  
L. Montenegro ◽  
...  

ABSTRACT
FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project dealing with the bentonite engineered barrier designed for sealing and containment of waste in a high-level radioactive waste repository (HLWR). It includes two main experiments: an in situ full-scale test performed at Grimsel (GTS) and a mock-up test operating since February 1997 at CIEMAT facilities in Madrid (Spain) [1,2,3]. One of the objectives of FEBEX is the development and testing of conceptual and numerical models for the thermal, hydrodynamic, and geochemical (THG) processes expected to take place in engineered clay barriers. A significant improvement in coupled THG modeling of the clay barrier has been achieved, both in terms of a better understanding of THG processes and of more sophisticated THG computer codes. The ability of these models to reproduce the observed THG patterns under a wide range of THG conditions enhances confidence in their predictive capabilities. Numerical THG models of heating and hydration experiments performed on small-scale laboratory cells provide excellent results for temperatures, water inflow, and final water content in the cells [3]. Calculated concentrations at the end of the experiments reproduce most of the patterns of the measured data. In general, the fit of the concentrations of dissolved species is better than that of the exchanged cations. These models were later used to simulate the evolution of the large-scale experiments (in situ and mock-up). Some thermo-hydrodynamic hypotheses and bentonite parameters were slightly revised during TH calibration of the mock-up test. The results of the reference model simultaneously reproduce the observed water inflows and the bentonite temperatures and relative humidities. Although the model is highly sensitive to one-at-a-time variations in model parameters, the possibility of parameter combinations leading to similar fits cannot be precluded. The TH model of the in situ test is based on the same bentonite TH parameters and assumptions as the mock-up test. Granite parameters were slightly modified during the calibration process in order to reproduce the observed thermal and hydrodynamic evolution. The reference model properly captures the relative humidities and temperatures in the bentonite [3]. It also reproduces the observed spatial distribution of water pressures and temperatures in the granite. Once the TH aspects of the model had been calibrated, predictions of the THG evolution of both tests were performed. Data from the dismantling of the in situ test, which is planned for the summer of 2001, will provide a unique opportunity to test and validate the current THG models of the EBS.
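As an illustration of the one-at-a-time sensitivity checks mentioned above, a short generic Python sketch follows; the response function and the parameter names and values are hypothetical stand-ins, not the FEBEX THG codes.

import math

# Generic one-at-a-time (OAT) sensitivity sketch with a toy scalar response,
# e.g. a temperature or relative-humidity misfit (hypothetical).
def model_output(params):
    return (2.0 * math.log10(params["permeability"])
            + 5.0 * params["porosity"]
            + 0.8 * params["thermal_conductivity"])

base = {"permeability": 2.0e-21, "porosity": 0.40, "thermal_conductivity": 1.2}
base_out = model_output(base)

# Perturb each parameter by +10% while holding the others at their base values.
for name, value in base.items():
    perturbed = dict(base, **{name: 1.1 * value})
    delta = model_output(perturbed) - base_out
    print(f"{name:22s} +10% -> change in response: {delta:+.4f}")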


2021 ◽  
pp. 1-18
Author(s):  
Gisela Vanegas ◽  
John Nejedlik ◽  
Pascale Neff ◽  
Torsten Clemens

Summary
Forecasting production from hydrocarbon fields is challenging because of the large number of uncertain model parameters and the multitude of observed data that are measured. The large number of model parameters leads to uncertainty in the production forecast from hydrocarbon fields. Changing operating conditions [e.g., implementation of improved oil recovery or enhanced oil recovery (EOR)] results in model parameters becoming sensitive in the forecast that were not sensitive during the production history. Hence, simulation approaches need to be able to address uncertainty in the model parameters as well as to condition the numerical models to a multitude of different observed data. Sampling from distributions of various geological and dynamic parameters allows for the generation of an ensemble of numerical models that can be falsified using principal-component analysis (PCA) for the different observed data. If the numerical models are not falsified, machine-learning (ML) approaches can be used to generate a large set of parameter combinations that can be conditioned to the different observed data. The data conditioning is followed by a final step ensuring that parameter interactions are covered. The methodology was applied to a sandstone oil reservoir with more than 70 years of production history and dozens of wells. The resulting ensemble of numerical models is conditioned to all observed data. Furthermore, the resulting posterior-model parameter distributions are modified from the prior-model parameter distributions only if the observed data are informative for the model parameters. Hence, changes in operating conditions can be forecast under uncertainty, which is essential when parameters that are not sensitive in the history become sensitive in the forecast.
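The PCA-based falsification step can be illustrated with a short Python sketch (NumPy and scikit-learn): ensemble-simulated data vectors and the observed data are projected onto a few principal components, and the ensemble is flagged if the observation falls outside the ensemble spread. The data, the number of components, and the acceptance margin are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Each row: one ensemble member's simulated data vector (e.g. rates and pressures over time).
ensemble = rng.normal(size=(200, 50))
observed = rng.normal(size=(1, 50))          # stand-in for the measured history

pca = PCA(n_components=5).fit(ensemble)
ens_pc = pca.transform(ensemble)
obs_pc = pca.transform(observed)

# Falsify if the observation lies outside the ensemble range (with a 10% margin)
# along any retained principal component.
lo, hi = ens_pc.min(axis=0), ens_pc.max(axis=0)
margin = 0.1 * (hi - lo)
falsified = np.any((obs_pc < lo - margin) | (obs_pc > hi + margin))
print("ensemble falsified by observed data:", falsified)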


2021 ◽  
Vol 130 (4) ◽  
Author(s):  
Prasoon Anand ◽  
Snehashish Chakraverty ◽  
Soumyajit Mukherjee

2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational applications in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about the Earth's atmosphere. Many of the changes in today's models relate to seemingly innocuous modifications associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet their verification is challenged by the chaotic nature of the atmosphere: any small change, even a rounding error, can have a large impact on individual simulations. Overall, this represents a serious challenge for a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impact of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effect of model changes on almost any output variable over time and can be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology is also highly sensitive to more substantial model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
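A minimal Python/SciPy sketch of the core idea follows: a reference ensemble is compared with an ensemble from a modified model or system using a two-sample test on one output variable. The synthetic data and the choice of the Kolmogorov-Smirnov test are assumptions for illustration, not necessarily the test used for COSMO.

import numpy as np
from scipy import stats

# Two ensembles of one output variable (e.g. domain-mean temperature at a given
# lead time), here drawn from the same distribution to mimic a benign change
# such as a compiler or hardware upgrade.
rng = np.random.default_rng(42)
n_members = 50
reference = rng.normal(loc=285.0, scale=0.5, size=n_members)   # K
modified = rng.normal(loc=285.0, scale=0.5, size=n_members)    # K

# Two-sample Kolmogorov-Smirnov test of the null hypothesis "same distribution".
statistic, p_value = stats.ks_2samp(reference, modified)
alpha = 0.05
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
print("reject 'same distribution':", p_value < alpha)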


Author(s):  
Michael Link ◽  
Zheng Qian

Abstract
In recent years, procedures for updating analytical model parameters have been developed that minimize the differences between analytical and, preferably, experimental modal analysis results. Provided that the initial analysis model contains parameters capable of describing possible damage, these techniques can also be used for damage detection; in this case the parameters are updated using test data acquired before and after the damage. For complex structures with hundreds of parameters, one generally has to measure the modal data at many locations and reduce the number of unknown parameters by some kind of localization technique, because the measurement information is generally not sufficient to identify parameters distributed all over the structure. Another way of reducing the number of parameters is presented here. The method is based on the idea of measuring only a part of the structure and replacing the residual structure by dynamic boundary conditions that describe the dynamic stiffness at the interfaces between the measured main structure and the remaining unmeasured residual structure. This approach has the advantage that testing can be concentrated on critical areas where structural modifications are expected, either due to damage or due to intended design changes. The dynamic boundary conditions are expressed in Craig-Bampton (CB) format by transforming the mass and stiffness matrices of the unmeasured residual structure to the interface degrees of freedom (DOFs) and to the modal DOFs of the residual structure fixed at the interface. The dynamic boundary stiffness concentrates all physical parameters of the residual structure in only a few parameters that are open to updating. In this approach, damage or modelling errors within the unmeasured residual structure are taken into account only in a global sense, whereas the measured main structure is parametrized locally, as usual, by factoring mass and stiffness submatrices that define the type and location of the physical parameters to be identified. The procedure was applied to identify the design parameters of a beam-type frame structure with bolted joints using experimental modal data.
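A minimal Python/NumPy/SciPy sketch of the Craig-Bampton reduction referred to above: the mass and stiffness matrices of the residual structure are transformed to the interface DOFs plus a few fixed-interface modal DOFs. The matrices here are small random stand-ins, not the frame structure of the paper.

import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, interface_dofs, n_modes):
    """Reduce (M, K) of the residual structure to interface DOFs plus a few
    fixed-interface modal DOFs (Craig-Bampton format)."""
    all_dofs = np.arange(M.shape[0])
    b = np.asarray(interface_dofs)
    i = np.setdiff1d(all_dofs, b)

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]

    # Static (constraint) modes and fixed-interface normal modes
    Psi = -np.linalg.solve(Kii, Kib)
    eigvals, Phi = eigh(Kii, Mii)          # generalized eigenproblem, ascending order
    Phi = Phi[:, :n_modes]

    # Transformation from [interface DOFs; modal DOFs] to full DOFs (ordered b, then i)
    nb = len(b)
    T = np.block([[np.eye(nb), np.zeros((nb, n_modes))],
                  [Psi,        Phi]])

    # Reorder so interface DOFs come first, then project
    order = np.concatenate([b, i])
    Mr = M[np.ix_(order, order)]
    Kr = K[np.ix_(order, order)]
    return T.T @ Mr @ T, T.T @ Kr @ T

# Tiny illustrative example with random symmetric positive-definite matrices
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)); K = A @ A.T + 8 * np.eye(8)
B = rng.normal(size=(8, 8)); M = B @ B.T + 8 * np.eye(8)
M_cb, K_cb = craig_bampton(M, K, interface_dofs=[0, 1], n_modes=3)
print(M_cb.shape, K_cb.shape)   # (5, 5) reduced matrices: 2 interface + 3 modal DOFs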


Author(s):  
X. Li ◽  
J. L. Gaddis ◽  
T. Wang

The flow field of a 2-D laminar confined impinging slot jet is investigated. Numerical results indicate that two different solutions exist over a certain range of geometric and flow parameters. The two steady flow patterns are obtained under identical boundary conditions, differing only in the initial flow fields. Three different exit boundary conditions are investigated to rule out artificial effects. The different flow patterns are observed to affect the heat transfer significantly. A flow-visualization experiment was carried out to verify the computational results, and both flow patterns were observed. The bifurcation mechanism is interpreted and discussed.
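The dependence of the converged steady state on the initial field can be illustrated, in a far simpler setting than the impinging-jet computations, with a toy Python/SciPy root-finding example in which the same steady-state equation admits several solutions and the initial guess selects among them.

import numpy as np
from scipy.optimize import fsolve

# Toy illustration of solution multiplicity (not the impinging-jet CFD problem):
# a cubic "steady-state" residual with three roots, u = -1, 0, +1.
def residual(u):
    return u**3 - u

for u0 in (-0.8, 0.05, 0.9):
    u_steady = fsolve(residual, x0=u0)[0]
    print(f"initial guess {u0:+.2f} -> steady solution {u_steady:+.4f}")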

