previous time step
Recently Published Documents


TOTAL DOCUMENTS

11
(FIVE YEARS 5)

H-INDEX

3
(FIVE YEARS 2)

2021 ◽  
Vol 2057 (1) ◽  
pp. 012079
Author(s):  
A V Valov

Abstract The primary focus of this paper is to investigate the interaction between simultaneously propagating multiple fractures, initiated from an inclined well. In particular, the aim is to better understand the influence of the well inclination angle on the stress shadow between the fractures and on the overall resulting geometry of individual cracks. To simplify the analysis, the paper assumes the limit of large perforation friction, which leads to a uniform flux distribution between the fractures. The mathematical model for multiple hydraulic fractures is constructed by coupling together the respective models for individual fractures, each representing a single planar fracture model. In this approach, the fracture-induced stress or stress shadow from a previous time step is used as an input for a given single hydraulic fracture to propagate independently. Further, to reduce the computational burden, the effects associated with tangential stresses and displacements are neglected, whereby the stress interaction between the fractures is described solely by the normal opening and the normal stress component. Numerical results are presented for the storage viscosity dominated regime, in which the effects of toughness and leak-off are negligible. An interesting behaviour is observed, demonstrating that the well inclination angle plays a significant role in the overall fracture symmetry. For zero inclination, all the fractures are nearly symmetrical and identical. However, once well inclination is introduced, the symmetry is broken, with a profound effect on the final result.
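The sequential coupling described above can be sketched as a loop in which each fracture advances one step using the normal stress induced by the others at the previous step. This is an illustrative sketch only, not the authors' code: the `shadow_stress` kernel and the growth law are toy placeholders standing in for the true elastic interaction and propagation model.

```python
# Toy sketch of sequential stress-shadow coupling: each fracture grows
# against the shadow computed from the OTHER fractures' previous-step widths.
# Kernel and growth law are illustrative placeholders, not the paper's model.

def shadow_stress(widths, spacing, k=1.0):
    """Normal stress induced on fracture i by the openings of the others,
    decaying with distance (placeholder for the true elastic kernel)."""
    n = len(widths)
    return [sum(k * widths[j] / (abs(i - j) * spacing)
                for j in range(n) if j != i)
            for i in range(n)]

def step(widths, pressure, spacing, dt=0.1):
    """Advance all fractures one step; shadows are frozen at the previous step."""
    shadows = shadow_stress(widths, spacing)
    return [w + dt * max(pressure - s, 0.0)   # net pressure drives growth
            for w, s in zip(widths, shadows)]

widths = [1.0, 1.0, 1.0]   # three identical initial openings
for _ in range(20):
    widths = step(widths, pressure=1.0, spacing=2.0)
# the middle fracture is shadowed from both sides, so it grows least
```

Even this toy version reproduces the qualitative effect discussed in the abstract: interior fractures are suppressed by the stress shadow of their neighbours while the outer ones keep growing.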


Author(s):  
Абдикерим Ырысбаевич Курбаналиев ◽  
Бурулгул Рахманбердиевна Ойчуева ◽  
Анипа Ташбаевна Калмурзаева ◽  
Аманбек Жайнакович Жайнаков ◽  
Топчубай Чокоевич Култаев

Preliminary results of a numerical simulation of the two-phase flow of two incompressible and immiscible liquids through a trapezoidal spillway are presented. To simulate the free boundary, we used the volume-of-fluid method. The aim of the work was to demonstrate the capabilities of various versions of the interFoam solver of the open-source OpenFOAM package for modelling the considered class of flows. Numerical calculations were performed using the OpenFOAM weirOverFlow tutorial. In order to improve the consistency, usability, flexibility and ease of modifying the interFoam solver, the existing interDyMFoam solver with a local dynamic mesh adaptation function was combined with the interFoam solver with a static computational mesh. In addition, in the OpenFOAM6 package, the fvcDdtPhiCoeff coefficient, used for calculating the time derivative and accounting for the Rhie-Chow correction on the collocated grid when computing mass fluxes on the cell faces, was changed in order to improve stability/accuracy and eliminate pressure oscillations at high Courant numbers.
The calculation of the fvcDdtPhiCoeff coefficient in OpenFOAM5 requires the density value from the current time step along with the mass flux value from the previous time step, while in OpenFOAM6 both the density and mass flux values are taken from the previous time step. The results of numerical calculations with the OpenFOAM6 package show that these changes lead to an excessively fast transition from transient to stationary flow in comparison with other versions of the OpenFOAM package.
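The version difference described above concerns only which time level supplies each quantity. The sketch below is not OpenFOAM source code; the formula is a placeholder, and only the choice of time levels follows the abstract.

```python
# Toy illustration of the OpenFOAM5 vs OpenFOAM6 difference described above:
# which time level supplies density (rho) and mass flux (phi) when forming a
# ddtPhiCoeff-style coefficient. The formula itself is a placeholder.

def coeff_of5(rho_now, rho_prev, phi_prev):
    # OpenFOAM5: density from the CURRENT step, mass flux from the previous one
    return phi_prev / rho_now

def coeff_of6(rho_now, rho_prev, phi_prev):
    # OpenFOAM6: BOTH density and mass flux from the previous step
    return phi_prev / rho_prev
```

When the density changes rapidly between steps (as across the free surface here), the two choices can differ noticeably, which is consistent with the differing transient behaviour reported in the abstract.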


2020 ◽  
Vol 24 (1) ◽  
pp. 169-188 ◽  
Author(s):  
Hannes Müller-Thomy

Abstract. In urban hydrology, rainfall time series with high temporal resolution are crucial. Such time series with sufficient length can be generated through the disaggregation of daily data with a micro-canonical cascade model. A well-known problem of time series generated in this way is the inadequate representation of the autocorrelation. In this paper two cascade model modifications are analysed regarding their ability to improve the autocorrelation in disaggregated time series with 5 min resolution. Both modifications are based on a state-of-the-art reference cascade model (method A). In the first modification, a position dependency is introduced in the first disaggregation step (method B). In the second modification, the position of a wet time step is additionally redefined by taking into account the disaggregated finer time steps of the previous time step instead of the previous time step itself (method C). Both modifications led to an improvement of the autocorrelation, especially the position redefinition (e.g. for lag-1 autocorrelation, relative errors of −3 % (method B) and 1 % (method C) instead of −4 % for method A). To ensure the conservation of a minimum rainfall amount in the wet time steps, a measurement device is mimicked after the disaggregation process. Simulated annealing as a post-processing strategy was tested as an alternative as well as an addition to the modifications in methods B and C. For the resampling, a special focus was given to the conservation of the extreme rainfall values. Therefore, a universal extreme event definition was introduced to define extreme events a priori without knowing their occurrence in time or magnitude. The resampling algorithm is capable of improving the autocorrelation, independent of the previously applied cascade model variant (e.g. for lag-1 autocorrelation the relative error of −4 % for method A is reduced to 0.9 %).
Also, the improvement of the autocorrelation achieved by the resampling was greater than that achieved by the choice of cascade model modification. The best overall representation of the autocorrelation was achieved by method C in combination with the resampling algorithm. The study was carried out for 24 rain gauges in Lower Saxony, Germany.
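The core micro-canonical step referred to above can be sketched as follows. This is an illustrative sketch only: the branching probabilities `p_split` are placeholders, not the calibrated, position-dependent values of the paper, and the split is simplified to two halves.

```python
import random

# Minimal sketch of one micro-canonical disaggregation step: each time step
# is split into two finer ones whose sum equals the original (mass-conserving).
# The branching probabilities are illustrative placeholders.

def disaggregate(series, p_split=(0.4, 0.2, 0.4), rng=random):
    """Split each value in two: with the given probabilities the whole amount
    goes left, goes right, or is divided (here 50/50 for simplicity)."""
    p_left, p_right, _ = p_split
    out = []
    for v in series:
        if v == 0:
            out += [0.0, 0.0]       # dry steps stay dry
            continue
        r = rng.random()
        if r < p_left:
            out += [v, 0.0]
        elif r < p_left + p_right:
            out += [0.0, v]
        else:
            out += [v / 2, v / 2]
    return out

daily = [10.0, 0.0, 4.0]
fine = disaggregate(daily)   # twice the length, same total rainfall
```

Repeating the step halves the resolution each time; the modifications of methods B and C change how the branching probabilities depend on position and on the previous step's fine structure.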


2019 ◽  
Vol 23 (2) ◽  
pp. 1015-1034 ◽  
Author(s):  
Stephanie Thiesen ◽  
Paul Darscheid ◽  
Uwe Ehret

Abstract. In this study, we propose a data-driven approach for automatically identifying rainfall-runoff events in discharge time series. The core of the concept is to construct and apply discrete multivariate probability distributions to obtain probabilistic predictions of whether each time step is part of an event. The approach permits any data to serve as predictors, and it is non-parametric in the sense that it can handle any kind of relation between the predictor(s) and the target. Each choice of a particular predictor data set is equivalent to formulating a model hypothesis. Among competing models, the best is found by comparing their predictive power in a training data set with user-classified events. For evaluation, we use measures from information theory such as Shannon entropy and conditional entropy to select the best predictors and models and, additionally, measure the risk of overfitting via cross entropy and Kullback–Leibler divergence. As all these measures are expressed in “bit”, we can combine them to identify models with the best tradeoff between predictive power and robustness given the available data. We applied the method to data from the Dornbirner Ach catchment in Austria, distinguishing three different model types: models relying on discharge data, models using both discharge and precipitation data, and recursive models, i.e., models using their own predictions of a previous time step as an additional predictor. In the case study, the additional use of precipitation reduced predictive uncertainty only by a small amount, likely because the information provided by precipitation is already contained in the discharge data.
More generally, we found that the robustness of a model quickly dropped with the increase in the number of predictors used (an effect well known as the curse of dimensionality) such that, in the end, the best model was a recursive one applying four predictors (three standard and one recursive): discharge from two distinct time steps, the relative magnitude of discharge compared with all discharge values in a surrounding 65 h time window and event predictions from the previous time step. Applying the model reduced the uncertainty in event classification by 77.8 %, decreasing conditional entropy from 0.516 to 0.114 bits. To assess the quality of the proposed method, its results were binarized and validated through a holdout method and then compared to a physically based approach. The comparison showed similar behavior of both models (both with accuracy near 90 %), and the cross-validation reinforced the quality of the proposed model. Given enough data to build data-driven models, their potential lies in the way they learn and exploit relations between data unconstrained by functional or parametric assumptions and choices. And, beyond that, the use of these models to reproduce a hydrologist's way of identifying rainfall-runoff events is just one of many potential applications.
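The central evaluation measure above, conditional entropy of the event flag given discretized predictors, is easy to estimate from samples. The sketch below uses toy data, not the paper's; only the formula H(Y|X) in bits follows the text.

```python
from collections import Counter
from math import log2

# Sketch of the information-theoretic model comparison described above:
# conditional entropy H(event | predictor), in bits, estimated from
# discrete (binned) samples. The sample data are illustrative only.

def conditional_entropy(pairs):
    """H(Y|X) in bits from (x, y) samples: sum over the joint distribution
    of p(x, y) * log2( p(x) / p(x, y) )."""
    n = len(pairs)
    joint = Counter(pairs)
    marg_x = Counter(x for x, _ in pairs)
    return sum(c / n * log2(marg_x[x] / c) for (x, _), c in joint.items())

# x: binned discharge predictor, y: user-classified event flag
samples = [(0, 0), (0, 0), (1, 1), (1, 1), (1, 0), (0, 0)]
h = conditional_entropy(samples)
# a perfect predictor gives 0 bits of remaining uncertainty; lower is better
```

Comparing this quantity across predictor sets, and against cross entropy on held-out data, is what lets competing model hypotheses be ranked in a single unit (bits), as the abstract describes.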


2019 ◽  
Vol 281 ◽  
pp. 01008 ◽  
Author(s):  
Omar Kammouh ◽  
Paolo Gardoni ◽  
Gian Paolo Cimellaro

Resilience indicators are a convenient tool to assess the resilience of engineering systems. They are often used in preliminary designs or in the assessment of complex systems. This paper introduces a novel approach to assess the time-dependent resilience of engineering systems using resilience indicators. The temporal dimension is tackled in this work using the Dynamic Bayesian Network (DBN). The DBN extends the classical Bayesian network (BN) by adding the time dimension, permitting interaction among variables at different time steps. It can be used to track the evolution of a system’s performance given evidence recorded at a previous time step. This allows predicting the resilience state of a system given its initial condition. A mathematical probabilistic framework based on the DBN is developed to model the resilience of dynamic engineering systems. A case study is presented in the paper to demonstrate the applicability of the introduced framework.
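The time-slice mechanism described above can be sketched with a minimal two-state example. This is not the paper's framework: the states and transition probabilities below are illustrative placeholders; only the idea of propagating a belief forward from evidence at a previous step follows the text.

```python
# Minimal sketch of the DBN idea described above: the system state at step t
# depends on the state at t-1 through a transition matrix. Values are
# illustrative placeholders, not the paper's model.

# states: 0 = degraded, 1 = functional
T = [[0.7, 0.3],   # P(next state | current = degraded)
     [0.1, 0.9]]   # P(next state | current = functional)

def propagate(belief, steps=1):
    """Push a belief over states forward through the time slices."""
    for _ in range(steps):
        belief = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
    return belief

# evidence at a previous time step: system observed functional
b = propagate([0.0, 1.0], steps=3)   # predicted state distribution 3 steps on
```

In the full framework each slice also carries the resilience-indicator variables; the propagation step is what lets evidence recorded earlier inform the predicted resilience state later on.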


2018 ◽  
Author(s):  
Stephanie Thiesen ◽  
Paul Darscheid ◽  
Uwe Ehret

Abstract. In this study, we propose a data-driven approach to automatically identify rainfall-runoff events in discharge time series. The core of the concept is to construct and apply discrete multivariate probability distributions to obtain probabilistic predictions of each time step being part of an event. The approach permits any data to serve as predictors, and it is non-parametric in the sense that it can handle any kind of relation between the predictor(s) and the target. Each choice of a particular predictor data set is equivalent to formulating a model hypothesis. Among competing models, the best is found by comparing their predictive power in a training data set with user-classified events. For evaluation, we use measures from Information Theory such as Shannon Entropy and Conditional Entropy to select the best predictors and models and, additionally, measure the risk of overfitting via Cross Entropy and Kullback–Leibler Divergence. As all these measures are expressed in bit, we can combine them to identify models with the best tradeoff between predictive power and robustness given the available data. We applied the method to data from the Dornbirner Ach catchment in Austria, distinguishing three different model types: Models relying on discharge data, models using both discharge and precipitation data, and recursive models, i.e., models using their own predictions of a previous time step as an additional predictor. In the case study, the additional use of precipitation reduced predictive uncertainty only by a small amount, likely because the information provided by precipitation is already contained in the discharge data.
More generally, we found that the robustness of a model quickly dropped with the increase in the number of predictors used (an effect well known as the Curse of Dimensionality), such that in the end, the best model was a recursive one applying four predictors (three standard and one recursive): discharge from two distinct time steps, the relative magnitude of discharge in a 65-hour time window and event predictions from the previous time step. Applying the model reduced the uncertainty about event classification by 77.8 %, decreasing Conditional Entropy from 0.516 to 0.114 bits. Given enough data to build data-driven models, their potential lies in the way they learn and exploit relations between data unconstrained by functional or parametric assumptions and choices. And, beyond that, the use of these models to reproduce a hydrologist's way to identify rainfall-runoff events is just one of many potential applications.


2017 ◽  
Vol 20 (1) ◽  
pp. 134-148 ◽  
Author(s):  
Mohamad Javad Alizadeh ◽  
Vahid Nourani ◽  
Mojtaba Mousavimehr ◽  
Mohamad Reza Kavianpour

Abstract In this study, an integrated artificial neural network (IANN) model incorporating both observed and predicted time series as input variables is conjoined with the wavelet transform for flow forecasting with different lead times. The daily model employs forecasts of the tributaries in its input structure in order to predict the daily flow in the main river at the next time steps. The predictive models for the tributaries are conventional wavelet-ANN models that comprise only observed time series as input variables. The monthly model updates its input structure with further forecasts of the tributaries and also the predicted time series of the main river at the previous time step. The model is utilized for flow forecasting in the Snoqualmie River basin, Washington State, USA. In the integrated model, the outputs of the tributaries (sub-basins) and also the previous flow time series of the main river are used as input variables. The results show that daily flow discharge can be successfully estimated up to several days (4 d) ahead in the main river and its tributaries. Moreover, an acceptable prediction of the flow within the next two months can be achieved by applying the proposed model.
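The recursive element above, feeding the model's own previous prediction back as an input, can be sketched with a linear toy model. This is not the paper's wavelet-ANN: the function and weights below are illustrative stand-ins; only the feedback structure follows the text.

```python
# Sketch of the recursive multi-step forecasting described above: each step
# reuses the model's previous prediction as an input. A toy linear model
# stands in for the wavelet-ANN; the weights are illustrative.

def forecast(history, steps, w=(0.6, 0.4)):
    """Predict several steps ahead, feeding each prediction back as input."""
    h = list(history)
    preds = []
    for _ in range(steps):
        y = w[0] * h[-1] + w[1] * h[-2]   # toy stand-in for the trained ANN
        preds.append(y)
        h.append(y)                        # the feedback that enables lead > 1
    return preds

p = forecast([10.0, 12.0], steps=4)   # 4-step-ahead forecast from 2 observations
```

The same feedback is what limits the usable lead time in practice: prediction errors accumulate through the recursion, which is why the abstract reports reliable estimates only up to a few days ahead.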


Author(s):  
Xavier Pialat ◽  
Olivier Simonin ◽  
Philippe Villedieu

The purpose of this paper is both to present and validate the methodology of a hybrid method coupling Eulerian and Lagrangian approaches in turbulent gas-particle flows. The knowledge of the dispersed phase is expressed in terms of a joint fluid-particle probability density function (pdf) which obeys a Boltzmann-like equation. We chose two different ways of solving this equation, depending on the required level of description. The first is a stochastic Lagrangian approach which embeds a Langevin equation for the fluid velocity seen along the particle path. The second is a Eulerian second-order momentum approach derived in the same frame as the preceding one. These two approaches are then coupled through half-fluxes. This procedure allows well-posed boundary conditions stemming from previous time-step statistics for the two approaches. The aim is to provide a methodology able to take into account physical phenomena such as particle bouncing on rough walls or deposition in inhomogeneous flows at a reasonable numerical cost. The paper presents the methodology and validations for inert monodispersed particles in a turbulent shear flow without two-way coupling. Comparisons of the results of the hybrid method with each individual approach and with LES/DPS results indicate that the hybrid method could become a powerful simulation tool for gas-particle flows.
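The Langevin equation mentioned above for the fluid velocity seen along the particle path can be sketched as an Ornstein-Uhlenbeck step. This is an illustrative sketch only: the time scale and variance below are placeholders, not the closure of the paper.

```python
import math
import random

# Sketch of a Langevin step for the fluid velocity "seen" by a particle:
# relaxation toward the mean flow plus a diffusion term. The Lagrangian
# time scale t_lag and variance sigma2 are illustrative placeholders.

def langevin_step(u_seen, u_mean, t_lag, sigma2, dt, rng=random):
    drift = -(u_seen - u_mean) / t_lag * dt          # relax toward mean flow
    diffusion = math.sqrt(2.0 * sigma2 / t_lag * dt) * rng.gauss(0.0, 1.0)
    return u_seen + drift + diffusion

u = 0.0
for _ in range(100):
    u = langevin_step(u, u_mean=1.0, t_lag=0.5, sigma2=0.04, dt=0.01)
```

In the hybrid method, moments of an ensemble of such stochastic paths supply the statistics that are exchanged with the Eulerian solver through the half-fluxes at the domain interface.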


Author(s):  
G Mavros ◽  
H Rahnejat ◽  
P D King

An analysis of the mechanism of tyre contact force generation under transient conditions is presented. For this purpose, two different versions of a brush model are used, both with inertial and viscoelastic properties. The first model consists of independent bristles, while the second, representing a more realistic scenario, introduces viscoelastic circumferential connections between the sequential bristles, which affect the lateral degrees of freedom. Friction between the tyre and the ground follows an experimentally verified stick-slip law. For the model with independent bristles, the state of each bristle at any instant of time depends only on the state of the same bristle at the previous time step. In the second model, the instantaneous state depends on the state of the same bristle at the preceding time step, as well as on the states of the two adjacent bristles at the same time. Simulation results reveal the differences between the two models and, most importantly, show how transient friction force generation may differ substantially from steady-state predictions. The findings suggest that transient tyre behaviour should not be attributed solely to the contributions of the flexible belt and carcass. On the contrary, the observed transience in the neighbourhood of the contact patch should also be taken into account.
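The two update rules contrasted above can be sketched side by side. This is not the authors' model: the relaxation coefficients are illustrative placeholders; only the dependency structure (own previous state vs. own state plus two neighbours) follows the text.

```python
# Sketch contrasting the two bristle update rules described above.
# Coefficients are illustrative placeholders.

def update_independent(x, a=0.9):
    """Each bristle depends only on its own previous state."""
    return [a * xi for xi in x]

def update_coupled(x, a=0.8, c=0.1):
    """Each bristle also feels its two neighbours at the previous step
    (circumferential viscoelastic connections)."""
    n = len(x)
    return [a * x[i]
            + c * (x[i - 1] if i > 0 else 0.0)
            + c * (x[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

state = [0.0, 1.0, 0.0]          # deflect only the middle bristle
state = update_coupled(state)    # the deflection spreads to the neighbours
```

In the independent model a disturbance stays confined to one bristle, while in the coupled model it propagates along the contact patch, which is what produces the differing transient force responses.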


1998 ◽  
Vol 26 ◽  
pp. 77-82 ◽  
Author(s):  
Yuji Kominami ◽  
Yasoichi Endo ◽  
Shoji Niwano ◽  
Syuichi Ushioda

This paper describes a method for estimating the depth of new snow, using hourly data of total snow depth and precipitation. As the snow cover is compacted continuously due to its own weight, the depth of new snow deposited between the previous time step and the present time is given by the difference between the height of the present snow surface and the present compacted height of the previous snow surface. Thus, based on viscous compression theory and an empirical relation between compressive viscosity and the density of snow, an equation has been derived to compute the time variation of the thickness of a snow layer due to viscous compression. Using this equation, the present height of the previous snow surface, which cannot be measured by simple means, was computed, and the depth of daily new snow was estimated as its difference from the present measured total snow depth. The estimated results were found to be in good agreement with data measured in Tohkamachi during the three winters from 1992–93 to 1994–95. The standard deviation was 1.71 cm and the maximum difference between estimated values and observed values was ±8 cm.
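The estimation scheme above can be sketched in a few lines. This is an illustrative sketch only: the settlement law and the `load` and `viscosity` values are placeholders, not the calibrated viscosity-density relation of the paper.

```python
# Sketch of the estimation described above: the previous snow surface is
# lowered by viscous compression, and new snow depth is the present total
# depth minus that compacted height. Parameter values are illustrative.

def compacted_height(h_prev, load, viscosity, dt_hours=1.0):
    """One-layer viscous settlement over one time step: dh/dt = -load*h/viscosity."""
    return h_prev * (1.0 - load * dt_hours / viscosity)

def new_snow_depth(total_now, h_prev, load=0.5, viscosity=50.0):
    """New snow = measured total depth minus compacted previous surface height."""
    return max(total_now - compacted_height(h_prev, load, viscosity), 0.0)

# previous surface at 100 cm settles slightly; total depth now measured at 104 cm
hn = new_snow_depth(104.0, 100.0)
```

The `max(..., 0)` clamp reflects that pure settlement without snowfall should report zero new snow rather than a negative depth.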

