Residence Time Distribution in a Swirling Flow at Non-Reacting, Reacting, and Steam-Diluted Conditions

Author(s):  
Katharina Göckeler ◽  
Steffen Terhaar ◽  
Christian Oliver Paschereit

Residence time distributions in a swirling, premixed combustor flow are determined by means of tracer experiments and a reactor network model. The measurements were conducted at non-reacting, reacting, and steam-diluted reacting conditions for steam contents of up to 30% of the air mass flow. The tracer distribution was obtained from the light scattering of seeding particles employing the quantitative light sheet technique (QLS). At steady operating conditions, a positive step of particle feed was applied, yielding cumulative distribution functions (CDF) for the tracer response. The shape of the curve is characteristic of the local degree of mixedness. Fresh and recirculating gases were found to mix rapidly at non-reacting and highly steam-diluted conditions, whereas mixing was more gradual at dry reacting conditions. The instantaneous mixing near the burner outlet is related to the presence of a large-scale helical structure, which was suppressed at dry reacting conditions. Zones of similar mixing time scales, such as the recirculation zones, are identified. The CDF curves in these zones are reproduced by a network model of plug flow and perfectly mixed flow reactors. Reactor residence times and inlet volume flow fractions obtained in this way provide data for kinetic network models.
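The step response described above has a simple closed form for the reactor network the abstract mentions: a plug flow reactor (a pure time delay) in series with a perfectly mixed reactor gives a CDF that stays at zero until the delay elapses and then relaxes exponentially. A minimal sketch, with illustrative time constants rather than the paper's fitted values:

```python
import numpy as np

def cdf_pfr_cstr(t, tau_p, tau_m):
    """Step-response CDF F(t) of a plug-flow reactor (pure delay tau_p)
    in series with a perfectly mixed reactor (time constant tau_m)."""
    t = np.asarray(t, dtype=float)
    return np.where(t < tau_p, 0.0, 1.0 - np.exp(-(t - tau_p) / tau_m))

# Illustrative values: 0.1 s convective delay, 0.5 s mixed-zone time constant
t = np.linspace(0.0, 3.0, 301)
F = cdf_pfr_cstr(t, tau_p=0.1, tau_m=0.5)
```

Fitting tau_p and tau_m to the measured CDF of each zone is what yields the reactor residence times and inlet volume flow fractions referred to in the abstract.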


2011 ◽  
Vol 18 (2) ◽  
pp. 223-234 ◽  
Author(s):  
R. Haas ◽  
K. Born

Abstract. In this study, a two-step probabilistic downscaling approach is introduced and evaluated. The method is applied, by way of example, to precipitation observations in the subtropical mountain environment of the High Atlas in Morocco. The challenge is to deal with complex terrain, heavily skewed precipitation distributions, and data that are sparse both spatially and temporally. In the first step of the approach, a transfer function between distributions of large-scale predictors and of local observations is derived. The aim is to forecast cumulative distribution functions with parameters from known data. In order to interpolate between sites, the second step applies multiple linear regression on distribution parameters of observed data using local topographic information. By combining both steps, a prediction at every point of the investigation area is achieved. Both steps and their combination are assessed by cross-validation and by splitting the available dataset into a training subset and a validation subset. Because it estimates quantiles and the probability of zero daily precipitation, this approach is found to be adequate for application even in areas with difficult topographic circumstances and low data availability.
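The second step (regressing distribution parameters on topography to interpolate between sites) can be sketched as below. The station elevations, the gamma-distribution choice, and the synthetic wet-day amounts are all hypothetical stand-ins for the observational data used in the study:

```python
import numpy as np
from scipy import stats

# Hypothetical stations: wet-day precipitation amounts per site
rng = np.random.default_rng(0)
elevation = np.array([400.0, 900.0, 1500.0, 2100.0, 2700.0])  # m, illustrative
params = []
for h in elevation:
    # synthetic wet-day amounts whose scale grows with elevation
    sample = rng.gamma(shape=0.8, scale=2.0 + h / 1000.0, size=500)
    a, _, scale = stats.gamma.fit(sample, floc=0.0)
    params.append((a, scale))
params = np.array(params)  # one (shape, scale) row per station

# Multiple linear regression of each distribution parameter on topography
X = np.column_stack([np.ones_like(elevation), elevation])
coef, *_ = np.linalg.lstsq(X, params, rcond=None)

# Predict gamma parameters, hence quantiles, at an unobserved site
h_new = 1800.0
shape_new, scale_new = np.array([1.0, h_new]) @ coef
q90 = stats.gamma.ppf(0.9, a=shape_new, scale=scale_new)
```

Inverting the predicted CDF (here via `gamma.ppf`) then gives quantile forecasts at unobserved sites, which is the combined product of the two steps.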


2017 ◽  
Vol 32 (3) ◽  
pp. 1161-1183 ◽  
Author(s):  
Bryan M. Burlingame ◽  
Clark Evans ◽  
Paul J. Roebber

Abstract This study evaluates the influence of planetary boundary layer (PBL) parameterization on short-range (0–15 h) convection initiation (CI) forecasts within convection-allowing ensembles that utilize subsynoptic-scale observations collected during the Mesoscale Predictability Experiment. Three cases, 19–20 May, 31 May–1 June, and 8–9 June 2013, are considered, each characterized by a different large-scale flow pattern. An object-based method is used to verify and analyze CI forecasts. Local mixing parameterizations have, relative to nonlocal mixing parameterizations, higher probabilities of detection but also higher false alarm ratios, such that the ensemble mean forecast skill varied only subtly between the parameterizations considered. Temporal error distributions associated with matched events are approximately normal around a zero mean, suggesting little systematic timing bias. Spatial error distributions are skewed, with average mean (median) distance errors of approximately 44 km (28 km). Matched event cumulative distribution functions suggest limited forecast skill increases beyond temporal and spatial thresholds of 1 h and 100 km, respectively. Forecast skill variation is greatest between cases, with smaller variation between PBL parameterizations or between individual ensemble members for a given case, implying greater control on CI forecast skill by larger-scale features than by PBL parameterization. In agreement with previous studies, local mixing parameterizations tend to produce simulated boundary layers that are too shallow, cool, and moist, while nonlocal mixing parameterizations tend to be deeper, warmer, and drier. Forecasts poorly resolve strong capping inversions across all parameterizations, which is hypothesized to result primarily from implicit numerical diffusion associated with the default finite-differencing formulation for vertical advection used herein.
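The object-based matching with the 1 h / 100 km thresholds noted above can be sketched as a greedy nearest-neighbour pairing followed by the standard contingency scores; the event tuples below are invented for illustration:

```python
import math

def match_events(forecasts, observations, dt_max=1.0, dx_max=100.0):
    """Greedily match forecast CI objects to observed ones within temporal
    (hours) and spatial (km) thresholds; return POD and FAR."""
    unmatched_obs = list(observations)
    hits = 0
    for ft, fx, fy in forecasts:
        best = None
        for i, (ot, ox, oy) in enumerate(unmatched_obs):
            d = math.hypot(fx - ox, fy - oy)
            if abs(ft - ot) <= dt_max and d <= dx_max:
                if best is None or d < best[1]:
                    best = (i, d)
        if best is not None:
            unmatched_obs.pop(best[0])  # each observation matches at most once
            hits += 1
    misses = len(unmatched_obs)
    false_alarms = len(forecasts) - hits
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# (time h, x km, y km) tuples for forecast and observed CI events
fcst = [(2.0, 10.0, 20.0), (3.5, 300.0, 0.0)]
obs = [(2.4, 40.0, 50.0), (6.0, 0.0, 0.0)]
pod, far = match_events(fcst, obs)
```

Sweeping `dt_max` and `dx_max` and watching the scores saturate is one way to recover the abstract's observation that skill gains are limited beyond 1 h and 100 km.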


2016 ◽  
Vol 55 (9) ◽  
pp. 2091-2108 ◽  
Author(s):  
Michael Weniger ◽  
Petra Friederichs

Abstract The feature-based spatial verification method named for its three score components: structure, amplitude, and location (SAL) is applied to cloud data, that is, two-dimensional spatial fields of total cloud cover and spectral radiance. Model output is obtained from the German-focused Consortium for Small-Scale Modeling (COSMO-DE) forward operator Synthetic Satellite Simulator (SynSat) and compared with SEVIRI satellite data. The aim of this study is twofold: first, to assess the applicability of SAL to this kind of data and, second, to analyze the role of external object identification algorithms (OIA) and the effects of observational uncertainties on the resulting scores. A comparison of three different OIA shows that the threshold level, which is a fundamental part of all studied algorithms, induces high sensitivity and unstable behavior of object-dependent SAL scores (i.e., even very small changes in parameter values lead to large changes in the resulting scores). An in-depth statistical analysis reveals significant effects on distributional quantities commonly used in the interpretation of SAL, for example, median and interquartile distance. Two sensitivity indicators that are based on the univariate cumulative distribution functions are derived. They make it possible to assess the sensitivity of the SAL scores to threshold-level changes without computationally expensive iterative calculations of SAL for various thresholds. The mathematical structure of these indicators connects the sensitivity of the SAL scores to parameter changes with the effect of observational uncertainties. Last, the discriminating power of SAL is studied. It is shown that—for large-scale cloud data—changes in the parameters may have larger effects on the object-dependent SAL scores (i.e., the S and L2 scores) than does a complete loss of temporal collocation.
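The threshold sensitivity of object identification can be illustrated with a toy OIA: label connected regions above a threshold and watch the object count change as the threshold moves. The smoothed random field below is merely a stand-in for a cloud-cover scene:

```python
import numpy as np
from scipy import ndimage

def object_count(field, threshold):
    """Toy object identification: count connected regions >= threshold."""
    _, n = ndimage.label(field >= threshold)
    return n

rng = np.random.default_rng(1)
field = ndimage.gaussian_filter(rng.random((100, 100)), sigma=3)

# Sweep the threshold over a narrow band around the field mean
thresholds = np.linspace(0.45, 0.55, 21)
counts = [object_count(field, th) for th in thresholds]
```

Even small threshold shifts can change which objects exist at all, which is the instability the abstract reports for the object-dependent S and L2 scores.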


Author(s):  
L. K. Doraiswamy

Three important (complicating) possibilities were not considered in the treatment of reactors presented in earlier chapters: (1) the residence time of the reactant molecules need not always be fully defined in terms of plug flow or fully mixed flow; (2) the equations describing certain situations can have more than one solution, leading to multiple steady states; and (3) there could be periods of unsteady-state operation with detrimental effects on performance, that is, transients could develop in a reactor. Actually, reactors can operate under conditions where there is an arbitrary distribution of residence times, leading to different degrees of mixing with consequent effects on reactor performance. Also, multiple solutions do exist for equations describing certain situations, and they can have an important bearing on the choice of operating conditions. And, finally, unsteady-state operation is a known feature of the start-up and shutdown periods of continuous reactor operation; it can also be introduced by intentional cycling of reactants. We briefly review these three important aspects of reactors in this chapter. However, because the subjects are highly mathematical, the treatment will be restricted to simple formulations and qualitative discussions that can act as guidelines in predicting reactor performance. All aspects of mixing in chemical reactors are based on the theory of residence time distribution first enunciated by Danckwerts (1953). Therefore, we begin our discussion of mixing with a brief description of this theory. When a steady stream of fluid flows through a vessel, different elements of the fluid spend different amounts of time within it. This distribution of residence times is described by a curve that represents the fraction of fluid in the exit stream with ages between t and t + dt.
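For an ideal perfectly mixed vessel, Danckwerts's exit-age distribution is E(t) = e^(-t/τ)/τ. A quick numerical check, with an illustrative mean residence time, that E(t) integrates to one and has mean age τ:

```python
import numpy as np

def e_cstr(t, tau):
    """Exit-age distribution E(t) of an ideal perfectly mixed vessel."""
    return np.exp(-t / tau) / tau

def integrate(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

tau = 2.0                          # illustrative mean residence time
t = np.linspace(0.0, 40.0, 4001)   # 20 mean residence times, fine grid
E = e_cstr(t, tau)

area = integrate(E, t)          # total fraction of exiting fluid: ~1
mean_age = integrate(t * E, t)  # mean residence time: ~tau
```

Plug flow, by contrast, has all its exit-stream area concentrated at t = τ; real vessels fall between these two limiting E(t) curves.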


Stats ◽  
2020 ◽  
Vol 3 (3) ◽  
pp. 412-426
Author(s):  
Edmund Marth ◽  
Gerd Bramerdorfer

In the field of electrical machine design, excellent performance for multiple objectives, like efficiency or torque density, can be reached by using contemporary optimization techniques. Unfortunately, highly optimized designs are prone to being rather sensitive to uncertainties in the design parameters. This paper introduces an approach to rate the sensitivity of designs with a large number of tolerance-affected parameters using cumulative distribution functions (CDFs) based on finite element analysis results. The accuracy of the CDFs is estimated using the Dvoretzky–Kiefer–Wolfowitz inequality, as well as the bootstrapping method. The advantage of the presented technique is that computational time can be kept low, even for complex problems. As a demanding test case, the effect of imperfect permanent magnets on the cogging torque of a Vernier machine with 192 tolerance-affected parameters is investigated. Results reveal that for this problem, a reliable statement about the robustness can already be made with 1000 finite element calculations.
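The Dvoretzky–Kiefer–Wolfowitz inequality gives a distribution-free confidence band of half-width eps = sqrt(ln(2/α)/(2n)) around the empirical CDF. A minimal sketch, where the normal sample merely stands in for finite element results:

```python
import numpy as np

def dkw_band(samples, alpha=0.05):
    """Empirical CDF with a (1 - alpha) Dvoretzky-Kiefer-Wolfowitz band:
    eps = sqrt(ln(2/alpha) / (2n)) bounds sup|F_n - F| with prob >= 1 - alpha."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    F_n = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    return x, F_n, np.clip(F_n - eps, 0.0, 1.0), np.clip(F_n + eps, 0.0, 1.0)

rng = np.random.default_rng(0)
x, F, lo, hi = dkw_band(rng.normal(size=1000))
```

For n = 1000 and alpha = 0.05 the band half-width is about 0.043, which illustrates why on the order of 1000 finite element calculations can already support a reliable robustness statement.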


2003 ◽  
Vol 767 ◽  
Author(s):  
Ara Philipossian ◽  
Erin Mitchell

Abstract This study explores aspects of the fluid dynamics of CMP processes. The residence time distribution of slurry under the wafer is experimentally determined and used to calculate the Dispersion Number (Δ) of the fluid in the wafer-pad region based on a dispersion model for non-ideal reactors. Furthermore, lubrication theory is used to explain flow behaviors at various operating conditions. Results indicate that at low wafer pressure and high relative pad-wafer velocity, the slurry exhibits nearly ideal plug flow behavior. As pressure increases and velocity decreases, flow begins to deviate from ideality and the slurry becomes increasingly mixed beneath the wafer. These phenomena are confirmed to be the result of variable slurry film thicknesses between the pad and the wafer, as measured by changes in the coefficient of friction (COF) in the pad-wafer interface.
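The Dispersion Number can be estimated from a measured RTD curve via the small-dispersion moment relation σ_θ² ≈ 2Δ; a sketch on a synthetic near-plug-flow RTD (the Gaussian pulse is illustrative, not measured slurry data):

```python
import numpy as np

def dispersion_number(t, E):
    """Estimate the vessel Dispersion Number Delta = D/uL from an RTD curve
    using the small-dispersion relation sigma_theta^2 ~= 2 * Delta."""
    def integrate(y):
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
    area = integrate(E)
    t_mean = integrate(t * E) / area
    var = integrate((t - t_mean) ** 2 * E) / area
    return 0.5 * var / t_mean**2  # dimensionless variance / 2

# Synthetic near-plug-flow RTD: narrow Gaussian pulse centered at t = 1
t = np.linspace(0.0, 2.0, 2001)
E = np.exp(-((t - 1.0) ** 2) / (4 * 0.01)) / np.sqrt(4 * np.pi * 0.01)
delta = dispersion_number(t, E)
```

Small Δ corresponds to the nearly ideal plug flow seen at low pressure and high velocity; increasing Δ signals the more mixed behavior reported at higher pressures.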


2019 ◽  
Vol 491 (3) ◽  
pp. 4247-4253 ◽  
Author(s):  
David Harvey ◽  
Wessel Valkenburg ◽  
Amelie Tamone ◽  
Alexey Boyarsky ◽  
Frederic Courbin ◽  
...  

ABSTRACT Flux ratio anomalies in strong gravitationally lensed quasars constitute a unique way to probe the abundance of non-luminous dark matter haloes, and hence the nature of dark matter. In this paper, we identify double-imaged quasars as a statistically efficient probe of dark matter, since they are 20 times more abundant than quadruply imaged quasars. Using N-body simulations that include realistic baryonic feedback, we measure the full distribution of flux ratios in doubly imaged quasars for cold (CDM) and warm dark matter (WDM) cosmologies. Through this method, we fold in two key systematics – quasar variability and line-of-sight structures. We find that WDM cosmologies predict a ∼6 per cent difference in the cumulative distribution functions of flux ratios relative to CDM, with CDM predicting many more small ratios. Finally, we estimate that ∼600 doubly imaged quasars will need to be observed in order to be able to unambiguously discern between CDM and the two WDM models studied here. Such sample sizes will be easily within reach of future large-scale surveys such as Euclid. In preparation for these survey data, we must determine the scale of the uncertainties in modelling lens galaxies and their substructure in simulations, and develop a strong understanding of the selection function of observed lensed quasars.


1980 ◽  
Vol 15 (1) ◽  
pp. 1-16
Author(s):  
W. Akhtar ◽  
G.P. Mathur ◽  
D.S. Dickey

Abstract Design equations have been developed to estimate liquid velocities and mixing times in air-agitated tanks. Determination of the gas rate necessary for adequate agitation in a given geometry is possible with this information. Air agitation offers benefits of increased dissolved oxygen and cost-effective mixing for some wastewater treatment applications. Empirical expressions for surface and bottom velocities, as a function of gas flow rate and tank geometry, have been developed from laboratory measurements. Since neither statistical nor dimensional analysis of the laboratory results could prove conclusively the correct form of the velocity correlations, the different correlation forms were used to verify large-scale velocity measurements. Only one of the three trial correlations correctly predicted large-scale velocities. The importance of these velocity correlations is evident from experience with mechanical agitator design, which shows that liquid velocity is the appropriate design criterion for most similar applications. Mixing times were measured experimentally in the laboratory and studied with a mathematical model. The model was an unsteady-state mass balance containing convective flow terms with turbulent dispersion superimposed on the flow. The velocities for the convective flow terms were calculated from the empirical velocity correlations. Estimates of the turbulent dispersion coefficients were investigated experimentally. Because multiple velocity correlations and a computer model for mixing time are difficult to use when performing design calculations, empirical correlations for bulk velocity and mixing time were derived. Combined with a relationship for power input, the design correlations provide information necessary to determine operating conditions in large-scale, air-agitated tanks. The effects of tank geometry on air-agitated design have been explored within a range of typical construction dimensions. Thus, the principal elements of a complete design approach to air-agitated rectangular tanks are presented.


2019 ◽  
Author(s):  
Ryther Anderson ◽  
Achay Biong ◽  
Diego Gómez-Gualdrón

Tailoring the structure and chemistry of metal-organic frameworks (MOFs) enables the manipulation of their adsorption properties to suit specific energy and environmental applications. As there are millions of possible MOFs (with tens of thousands already synthesized), molecular simulation, such as grand canonical Monte Carlo (GCMC), has frequently been used to rapidly evaluate the adsorption performance of a large set of MOFs. This allows subsequent experiments to focus only on a small subset of the most promising MOFs. In many instances, however, even molecular simulation becomes prohibitively time consuming, underscoring the need for alternative screening methods, such as machine learning, to precede molecular simulation efforts. In this study, as a proof of concept, we trained a neural network as the first example of a machine learning model capable of predicting full adsorption isotherms of different molecules not included in the training of the model. To achieve this, we trained our neural network only on alchemical species, represented only by their geometry and force field parameters, and used this neural network to predict the loadings of real adsorbates. We focused on predicting room-temperature adsorption of small (one- and two-atom) molecules relevant to chemical separations, namely argon, krypton, xenon, methane, ethane, and nitrogen. However, we also observed surprisingly promising predictions for more complex molecules, whose properties are outside the range spanned by the alchemical adsorbates. Prediction accuracies suitable for large-scale screening were achieved using simple MOF descriptors (e.g., geometric properties and chemical moieties) and adsorbate descriptors (e.g., force field parameters and geometry). Our results illustrate a new philosophy of training that opens the path towards the development of machine learning models that can predict the adsorption loading of any new adsorbate at any new operating conditions in any new MOF.

