Measurement and identification of the nonlinear dynamics of a jointed structure using full-field data, Part I: Measurement of nonlinear dynamics

2022 · Vol 166 · pp. 108401
Author(s): Wei Chen, Debasish Jana, Aryan Singh, Mengshi Jin, Mattia Cenedese, ...

2021 · Vol 255 · pp. 106620
Author(s): A. Elouneg, D. Sutula, J. Chambert, A. Lejeune, S.P.A. Bordas, ...

2015
Author(s): B. Al-Wehaibi, S. BinAkresh, M. Issaka, S. Al-Shamrani

2020
Author(s): Swinda Falkena, Jana de Wiljes, Antje Weisheimer, Theodore G. Shepherd

A number of methods exist for the identification of atmospheric circulation regimes. The most commonly used method is k-means clustering. Often the clustering algorithm is applied to the first several principal components rather than to the full-field data. In addition, many studies use a time filter to remove high-frequency oscillations before the clustering is executed. We discuss the consequences of these filtering techniques on the identified circulation regimes for the Euro-Atlantic sector in winter. Most studies identify four regimes: the Atlantic Ridge, the Scandinavian Blocking, and the two phases of the North Atlantic Oscillation. However, when k-means clustering is applied to the full-field data of a reanalysis dataset, the optimal number of regimes is found to be six rather than four. This optimal number is based on the use of an information criterion, together with consistency arguments. The two additional regimes can be identified as the opposite phases of the Atlantic Ridge and Scandinavian Blocking, since they have a low-pressure area where the original regimes have a high-pressure area. Furthermore, incorporating a persistence constraint within the clustering algorithm is found to preserve the occurrence rates of the regimes, and thus maintains the consistency of the results. In contrast, applying a time filter to enforce persistence of the regimes changes the occurrence rates. We conclude that care must be taken when filtering the data before the clustering algorithm is applied, since this can lead to biases in the identified circulation regimes and their occurrence rates.
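The workflow discussed above can be illustrated with a minimal sketch: k-means clustering applied either to the leading principal components or directly to the full anomaly field. The choice of 500 hPa geopotential height anomalies, the variable names, and the scikit-learn setup below are illustrative assumptions, not the exact configuration used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def identify_regimes(anoms, n_regimes=6, n_pcs=None):
    """Cluster daily anomaly maps into circulation regimes.

    anoms : array of shape (n_days, n_gridpoints), e.g. 500 hPa geopotential
            height anomalies over the Euro-Atlantic sector (assumed input).
    n_pcs : if given, cluster the first n_pcs principal components;
            if None, cluster the full field directly.
    """
    X = anoms
    if n_pcs is not None:
        # Common practice: truncate to the leading PCs before clustering.
        X = PCA(n_components=n_pcs).fit_transform(X)

    km = KMeans(n_clusters=n_regimes, n_init=50, random_state=0).fit(X)
    # Cluster centres give the regime patterns; the label sequence gives
    # occurrence rates and persistence statistics.
    return km.labels_, km.cluster_centers_

# Example: compare occurrence rates with and without PC truncation.
# labels_full, _ = identify_regimes(anoms, n_regimes=6, n_pcs=None)
# labels_pca,  _ = identify_regimes(anoms, n_regimes=4, n_pcs=20)
```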


2009 · Vol 21 (1) · pp. 015703
Author(s): Stéphane Avril, Pierre Feissel, Fabrice Pierron, Pierre Villon

2002 · Vol 5 (02) · pp. 126-134
Author(s): R.O. Baker, F. Kuppe, S. Chugh, R. Bora, S. Stojanovic, ...

Summary
Modern streamline-based reservoir simulators are able to account for actual field conditions such as 3D multiphase flow effects, reservoir heterogeneity, gravity, and changing well conditions. A streamline simulator was used to model four field cases, with approximately 400 wells and 150,000 gridblocks. History-match run times were approximately 1 CPU hour per run, with the final history matches completed in approximately 1 month per field. In all field cases, a high percentage of wells were history matched within the first two to three runs. Streamline simulation not only enables a rapid turnaround time for studies, but it also serves as a different tool in resolving each of the studied fields' unique characteristics. The primary reasons for faster history matching of permeability fields using 3D streamline technology as compared to conventional finite-difference (FD) techniques are as follows:
- Streamlines clearly identify which producer-injector pairs communicate strongly (flow visualization).
- Streamlines allow the use of a very large number of wells, thereby substantially reducing the uncertainty associated with outer-boundary conditions.
- Streamline flow paths indicate that idealized drainage patterns do not exist in real fields. It is therefore unrealistic to extract symmetric elements out of a full field.
- The speed and efficiency of the method allow the solution of fine-scale and/or full-field models with hundreds of wells.
- The streamline simulator honors the historical total fluid injection and production volumes exactly because there are no drawdown constraints for incompressible problems.
- The technology allows for easy identification of regions that require modifications to achieve a history match.
- Streamlines provide new flow information (i.e., well connectivity, drainage volumes, and well allocation factors) that cannot be derived from conventional simulation methods.

Introduction
In the past, streamline-based flow simulation was quite limited in its application to field data. Emanuel and Milliken [1] showed how hybrid streamtube models were used to history match field data rapidly to arrive at both an updated geologic model and a current oil-saturation distribution for input to FD simulations. FD simulators were then used in forecast mode. Recent advances in streamline-based flow simulators have overcome many of the limitations of previous streamline and streamtube methods [2-6]. Streamline-based simulators are now fully 3D and account for multiphase gravity and fluid mobility effects as well as compressibility effects. Another key improvement is that the simulator can now account for changing well conditions due to rate changes, infill drilling, producer-injector conversions, and well abandonments. With advances in streamline methods, the technique is rapidly becoming a common tool to assist in the modeling and forecasting of field cases. As this technology has matured, it is becoming available to a larger group of engineers and is no longer confined to research centers. Published case studies using streamline simulators are now appearing from a broad distribution of sources [7-12]. Because of the increasing interest in this technology, our first intent in this paper is to outline a methodology for where and how streamline-based simulation fits in the reservoir engineering toolbox. Our second objective is to provide insight into why we think the method works so well in some cases.
Finally, we will demonstrate the application of the technology to everyday field situations useful to mainstream exploitation or reservoir engineers, as opposed to specialized or research applications.

The Streamline Simulation Method
For a more detailed mathematical description of the streamline method, please refer to the Appendix and subsequent references. In brief, the streamline simulation method solves a 3D problem by decoupling it into a series of 1D problems, each one solved along a streamline. Unlike FD simulation, streamline simulation relies on transporting fluids along a dynamically changing streamline-based flow grid, as opposed to the underlying Cartesian grid. The result is that large timestep sizes can be taken without numerical instabilities, giving the streamline method a near-linear scaling of CPU time with model size [6]. For very large models, streamline-based simulators can be one to two orders of magnitude faster than FD methods. The timestep size in streamline methods is not limited by a classic grid-throughput (CFL) condition but by how far fluids can be transported along the current streamline grid before the streamlines need to be updated. Factors that influence this limit include nonlinear effects such as mobility, gravity, and well rate changes [5]. In real field displacements, historical well effects have a far greater impact on streamline-pattern changes than do mobility and gravity. Thus, the key is determining how much historical data can be upscaled without significantly impacting simulation results. For all cases considered here, 1-year timestep sizes were more than adequate to capture changes in historical data, gravity, and mobility effects. It is worth noting that upscaling historical data would also benefit run times for FD simulations. Where possible, both streamline and FD methods would then require similar simulation times. However, only for very coarse grids and specific problems is it possible to take 1-year timestep sizes with FD methods. As the grid becomes finer, CFL limitations begin to dictate the timestep size, which is much smaller than is necessary to honor nonlinearities. This is why streamline methods exhibit larger speed-up factors over FD methods as the number of grid cells increases.
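To make the decoupling concrete, the following is a minimal, hedged sketch of the 1D transport solve along a single streamline, written in time-of-flight coordinates. It assumes incompressible two-phase flow, a quadratic fractional-flow curve, and unit inflow saturation at the injector; the grid sizes, mobility ratio, and function names are illustrative and are not taken from the field cases or the commercial simulator discussed in the paper.

```python
import numpy as np

def fractional_flow(s, mobility_ratio=2.0):
    """Water fractional flow f_w(S) for quadratic (Corey-type) relative permeabilities."""
    return mobility_ratio * s**2 / (mobility_ratio * s**2 + (1.0 - s)**2)

def transport_along_streamline(sat, dtau, dt):
    """Advance water saturation along one streamline over a global timestep dt.

    sat  : saturations on the streamline's time-of-flight (tau) grid
    dtau : cell widths in time-of-flight coordinates (units of time)
    dt   : global timestep; sub-stepped only against the local 1D CFL limit
           of this streamline, not against the underlying 3D grid.
    """
    # Maximum wave speed max|f'(S)| sets the local 1D CFL limit.
    s_fine = np.linspace(0.0, 1.0, 201)
    max_speed = np.max(np.gradient(fractional_flow(s_fine), s_fine))

    t = 0.0
    while t < dt:
        dt_loc = min(dtau.min() / max_speed, dt - t)
        f = fractional_flow(sat)
        # First-order upwind update of dS/dt + df(S)/dtau = 0;
        # water is injected at tau = 0, so the inflow saturation is 1.
        flux_in = np.concatenate(([fractional_flow(1.0)], f[:-1]))
        sat = sat + dt_loc / dtau * (flux_in - f)
        t += dt_loc
    return np.clip(sat, 0.0, 1.0)

# Example: 100 cells along one streamline, advanced over one large global step.
# sat = transport_along_streamline(np.zeros(100), dtau=np.full(100, 5.0), dt=365.0)
```

Because the sub-stepping is governed only by the 1D problem on each streamline, the global timestep can be chosen to honor nonlinearities (well events, mobility, gravity) rather than the grid resolution, which is the scaling advantage described above.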


2009
Author(s): Steven Michael, Ronald R. Parenti, John D. Moores, William Wilcox, Jr., Timothy M. Yarnall, ...

2021 · Vol 347 · pp. 00029
Author(s): John D. Van Tonder, Martin P. Venter, Gerhard Venter

A theoretical testing method is proposed for fully characterising the Mooney-Rivlin hyper-elastic material model from full-field data, namely the displacement field and the indentation force. A finite element model with known parameters acts as the experimental model against which all data are referenced. The paper proposes an inverse finite element analysis that operates under the assumption that optimal parameter combinations lie on planes of equal objective-function value, or "hyper-planes". The paper concludes that, under the hyper-plane assumption, the Mooney-Rivlin material model can theoretically be fully characterised from a single indentation test by applying the methods discussed in the paper to full-field data.
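A hedged sketch of the inverse-identification step described above is given below: the two Mooney-Rivlin parameters (C10, C01) are recovered by minimising the mismatch between the reference ("experimental") indentation solution and repeated forward finite element runs. The wrapper run_indentation_fe(), the weights, and the starting point are hypothetical placeholders, not part of the paper's method.

```python
import numpy as np
from scipy.optimize import minimize

def run_indentation_fe(c10, c01):
    """Placeholder forward model: returns (displacement_field, force_curve)
    for one indentation analysis with the given Mooney-Rivlin parameters."""
    raise NotImplementedError("wrap a scripted FE solver run here")

def objective(params, u_ref, f_ref, w_disp=1.0, w_force=1.0):
    """Weighted least-squares mismatch in full-field displacements and
    indentation force, the two data sources used for characterisation."""
    c10, c01 = params
    u_sim, f_sim = run_indentation_fe(c10, c01)
    r_u = np.linalg.norm(u_sim - u_ref) / np.linalg.norm(u_ref)
    r_f = np.linalg.norm(f_sim - f_ref) / np.linalg.norm(f_ref)
    return w_disp * r_u**2 + w_force * r_f**2

# Example driver: u_ref and f_ref come from the synthetic "experimental"
# model with known parameters, so the recovered (C10, C01) can be verified.
# result = minimize(objective, x0=[0.2, 0.05], args=(u_ref, f_ref),
#                   method="Nelder-Mead")
```

Combining the displacement-field and force residuals in one objective is what breaks the non-uniqueness of a force-only fit, since different parameter pairs on the same "hyper-plane" produce distinct full-field responses.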

