Time-Volume Estimation of Velocity Fields From Non-Synchronous Planar Measurements Using Linear Stochastic Estimation

Author(s):  
Daniel Butcher ◽  
Adrian Spencer

Abstract With increasing complexity of aerodynamic devices such as gas turbine fuel swirl nozzles (FSN) and combustors, the need for time-resolved full volume flow characterisation is becoming greater. Even with modern advancements in both numerical and experimental methods, this remains a challenging area. The work presented in this paper combines multiple non-synchronous planar measurements to reconstruct an estimate of a synchronous, instantaneous flow field of the whole measurement set. Temporal information is retained through the linear stochastic estimation (LSE) technique. The technique is described, applied and validated with a simplified combustor and FSN geometry flow for which 3-component, 3-dimensional (3C3D) flow information is known from published tomographic PIV [1]. Using the tomographic PIV data set, multiple virtual ‘planes’ may be extracted to emulate single planar PIV measurements and produce the correlations required for LSE. In this example, multiple parallel planes are synchronised with a single perpendicular plane that intersects each of them. As the underlying data set is volumetric, the measured velocity is known a priori and can therefore be compared directly with the estimated velocity field for validation purposes. The work shows that when the input time-resolved planar velocity measurements are first POD (proper orthogonal decomposition) filtered, high correlation between the estimations and the validation velocity volumes is possible. This results in estimated full volume velocity distributions which are available at the same time instance as the input field — i.e. a time-resolved velocity estimation at the frequency of the single input plane. While 3C3D information is used in the presented work, it is necessary only for validation; in a true application, planar techniques would be used. The study concludes that provided the number of sensors used as LSE input exceeds the number of POD modes used for pre-filtering, it is possible to achieve correlation greater than 99%.
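A minimal sketch of the LSE step described above, assuming hypothetical arrays `sensors` (time-resolved signals on the shared intersecting plane) and `field` (non-synchronous volume snapshots flattened to vectors); names, shapes, and data are illustrative, not the authors' implementation. The coefficients are obtained by least squares so that the estimated field is a linear function of the sensor signals.

```python
import numpy as np

def fit_lse(sensors, field):
    """Least-squares LSE coefficients mapping sensor signals to the field."""
    # Remove means so the estimate acts on fluctuations only.
    s = sensors - sensors.mean(axis=0)
    u = field - field.mean(axis=0)
    # Coefficients A solve <s s^T> A = <s u>, i.e. ordinary least squares.
    A, *_ = np.linalg.lstsq(s, u, rcond=None)
    return A, sensors.mean(axis=0), field.mean(axis=0)

def estimate_field(A, s_mean, u_mean, sensors_t):
    """Estimate the instantaneous field from one sensor snapshot."""
    return u_mean + (sensors_t - s_mean) @ A

# Usage with synthetic shapes: 500 training snapshots, 40 sensors, 10_000 grid points.
rng = np.random.default_rng(0)
sensors = rng.standard_normal((500, 40))
field = sensors @ rng.standard_normal((40, 10_000))   # toy linear relationship
A, s_mean, u_mean = fit_lse(sensors, field)
u_hat = estimate_field(A, s_mean, u_mean, sensors[0])  # estimated volume snapshot
```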

2019 ◽  
Vol 141 (10) ◽  
Author(s):  
Daniel Butcher ◽  
Adrian Spencer

The work presented in this paper combines multiple nonsynchronous planar measurements to reconstruct an estimate of a synchronous, instantaneous flow field of the whole measurement set. Temporal information is retained through the linear stochastic estimation (LSE) technique. The technique is described, applied, and validated with a simplified combustor and fuel swirl nozzle (FSN) geometry flow for which three-component, three-dimensional (3C3D) flow information is available. Using the 3C3D dataset, multiple virtual “planes” may be extracted to emulate single planar particle image velocimetry (PIV) measurements and produce the correlations required for LSE. In this example, multiple parallel planes are synchronized with a single perpendicular plane that intersects each of them. As the underlying dataset is known, it can therefore be compared directly with the estimated velocity field for validation purposes. The work shows that when the input time-resolved planar velocity measurements are first proper orthogonal decomposition (POD) filtered, high correlation between the estimations and the validation velocity volumes is possible. This results in estimated full volume velocity distributions, which are available at the same time instance as the input field—i.e., a time-resolved velocity estimation at the frequency of the single input plane. While 3C3D information is used in the presented work, it is necessary only for validation; in a true application, planar techniques would be used. The study concludes that provided the number of sensors used as LSE input exceeds the number of POD modes used for prefiltering, it is possible to achieve correlation greater than 99%.
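A minimal sketch of the POD prefiltering applied to the input planes before the LSE step, assuming a hypothetical snapshot matrix `planes` of shape (n_snapshots, n_points); per the conclusion above, the number of retained modes `r` should stay below the number of LSE sensors. Names and shapes are illustrative only.

```python
import numpy as np

def pod_filter(planes, r):
    """Project snapshots onto the r leading POD modes and reconstruct."""
    mean = planes.mean(axis=0)
    fluct = planes - mean
    # Thin SVD: the rows of Vt[:r] are the leading spatial POD modes.
    U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
    modes = Vt[:r]
    coeffs = fluct @ modes.T          # temporal coefficients
    return mean + coeffs @ modes      # low-order (filtered) reconstruction

rng = np.random.default_rng(1)
planes = rng.standard_normal((300, 2000))   # synthetic planar snapshots
filtered = pod_filter(planes, r=20)          # POD-filtered input for LSE
```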


Author(s):  
Joseph van Batenburg-Sherwood ◽  
Stavroula Balabani

Abstract Modelling blood flow in microvascular networks is challenging due to the complex nature of haemorheology. Zero- and one-dimensional approaches cannot reproduce local haemodynamics, and models that consider individual red blood cells (RBCs) are prohibitively computationally expensive. Continuum approaches could provide an efficient solution, but dependence on a large parameter space and the scarcity of experimental data for validation have limited their application. We describe a method to assimilate experimental RBC velocity and concentration data into a continuum numerical modelling framework. Imaging data of RBCs were acquired in a sequentially bifurcating microchannel for various flow conditions. RBC concentration distributions were evaluated and mapped into computational fluid dynamics simulations with rheology prescribed by the Quemada model. Predicted velocities were compared to particle image velocimetry data. A subset of cases was used for parameter optimisation, and the resulting model was applied to a wider data set to evaluate model efficacy. The pre-optimised model reduced errors in predicted velocity by 60% compared to assuming a Newtonian fluid, and optimisation further reduced errors by 40%. Asymmetry of RBC velocity and concentration profiles was demonstrated to play a critical role. Excluding asymmetry in the RBC concentration doubled the error, but excluding spatial distributions of shear rate had little effect. This study demonstrates that a continuum model with optimised rheological parameters can reproduce measured velocity if RBC concentration distributions are known a priori. Developing this approach for RBC transport with more network configurations has the potential to provide an efficient approach for modelling network-scale haemodynamics.
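A minimal sketch of the Quemada effective-viscosity law used to prescribe the shear- and haematocrit-dependent rheology in the continuum CFD model; the parameter values shown (plasma viscosity, intrinsic viscosity limits, critical shear rate) are illustrative placeholders rather than the optimised values reported in the study.

```python
import numpy as np

def quemada_viscosity(shear_rate, haematocrit,
                      mu_plasma=1.2e-3,    # Pa.s, assumed plasma viscosity
                      k0=4.0, k_inf=1.8,   # assumed intrinsic viscosity limits
                      gamma_c=5.0):        # assumed critical shear rate, 1/s
    """Quemada model: mu = mu_p * (1 - 0.5*k*H)**-2 with shear-dependent k."""
    ratio = np.sqrt(np.maximum(shear_rate, 1e-9) / gamma_c)
    k = (k0 + k_inf * ratio) / (1.0 + ratio)
    return mu_plasma * (1.0 - 0.5 * k * haematocrit) ** -2

# Example: local viscosity over a range of shear rates at 20% haematocrit.
mu = quemada_viscosity(np.logspace(-1, 3, 50), haematocrit=0.2)
```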


Geophysics ◽  
2008 ◽  
Vol 73 (3) ◽  
pp. S99-S114 ◽  
Author(s):  
Einar Iversen ◽  
Martin Tygel

Seismic time migration is known for its ability to generate well-focused and interpretable images, based on a velocity field specified in the time domain. A fundamental requirement of this time-migration velocity field is that lateral variations are small. In the case of 3D time migration for symmetric elementary waves (e.g., primary PP reflections/diffractions, for which the incident and departing elementary waves at the reflection/diffraction point are pressure [P] waves), the time-migration velocity is a function depending on four variables: three coordinates specifying a trace point location in the time-migration domain and one angle, the so-called migration azimuth. Based on a time-migration velocity field available for a single azimuth, we have developed a method providing an image-ray transformation between the time-migration domain and the depth domain. The transformation is obtained by a process in which image rays and isotropic depth-domain velocity parameters for their propagation are estimated simultaneously. The depth-domain velocity field and image-ray transformation generated by the process have useful applications. The estimated velocity field can be used, for example, as an initial macrovelocity model for depth migration and tomographic inversion. The image-ray transformation provides a basis for time-to-depth conversion of a complete time-migrated seismic data set or horizons interpreted in the time-migration domain. This time-to-depth conversion can be performed without the need of an a priori known velocity model in the depth domain. Our approach has similarities as well as differences compared with a recently published method based on knowledge of time-migration velocity fields for at least three migration azimuths. We show that it is sufficient, as a minimum, to give as input a time-migration velocity field for one azimuth only. A practical consequence of this simplified input is that the image-ray transformation and its corresponding depth-domain velocity field can be generated more easily.
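As a greatly simplified, hedged illustration of the time-to-depth conversion concept only (not the paper's image-ray method, which traces 3D rays and estimates the depth-domain velocity field simultaneously), the sketch below converts two-way migrated times to depth along a vertical ray for a layered model with assumed interval velocities.

```python
import numpy as np

def vertical_time_to_depth(t_two_way, v_interval):
    """Convert two-way migrated times to depth for a layered 1D model."""
    dt = np.diff(t_two_way, prepend=0.0)   # two-way time increments (s)
    dz = 0.5 * v_interval * dt             # one-way depth increments (m)
    return np.cumsum(dz)

t = np.linspace(0.0, 2.0, 11)              # two-way times, s
v = np.full_like(t, 2500.0)                # assumed 2500 m/s interval velocities
z = vertical_time_to_depth(t, v)           # depth at each time sample
```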


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592095492
Author(s):  
Marco Del Giudice ◽  
Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
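A minimal sketch of a specification-curve style multiverse analysis: a small grid of analysis choices is enumerated and the effect of interest is estimated under each specification. The data, decision options, and effect estimator are synthetic placeholders, not the simulated example from the article.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(200)
y = 0.3 * x + rng.standard_normal(200)

def estimate_effect(x, y, outlier_rule, transform):
    """Estimate the x->y slope under one combination of analysis choices."""
    if outlier_rule == "trim_3sd":
        keep = np.abs(y - y.mean()) < 3 * y.std()
        x, y = x[keep], y[keep]
    if transform == "rank":
        x = x.argsort().argsort().astype(float)
        y = y.argsort().argsort().astype(float)
    return np.polyfit(x, y, 1)[0]   # slope as the effect of interest

specs = list(itertools.product(["none", "trim_3sd"], ["raw", "rank"]))
effects = [estimate_effect(x, y, o, t) for o, t in specs]
for (o, t), b in zip(specs, effects):
    print(f"outliers={o:8s} transform={t:4s} effect={b:+.3f}")
```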


2015 ◽  
Vol 8 (2) ◽  
pp. 941-963 ◽  
Author(s):  
T. Vlemmix ◽  
F. Hendrick ◽  
G. Pinardi ◽  
I. De Smedt ◽  
C. Fayt ◽  
...  

Abstract. A 4-year data set of MAX-DOAS observations in the Beijing area (2008–2012) is analysed with a focus on NO2, HCHO and aerosols. Two very different retrieval methods are applied. Method A describes the tropospheric profile with 13 layers and makes use of the optimal estimation method. Method B uses 2–4 parameters to describe the tropospheric profile and an inversion based on a least-squares fit. For each constituent (NO2, HCHO and aerosols) the retrieval outcomes are compared in terms of tropospheric column densities, surface concentrations and "characteristic profile heights" (i.e. the height below which 75% of the vertically integrated tropospheric column density resides). We find the best agreement between the two methods for tropospheric NO2 column densities, with a standard deviation of relative differences below 10%, a correlation of 0.99 and a linear regression slope of 1.03. For tropospheric HCHO column densities we find a similar slope, but also a systematic bias of almost 10%, which is likely related to differences in profile height. Aerosol optical depths (AODs) retrieved with method B are about 20% higher than those retrieved with method A. They agree better with AERONET measurements, which are on average only 5% lower, albeit with considerable relative differences (standard deviation ~25%). For near-surface volume mixing ratios and aerosol extinction we find considerably larger relative differences: 10 ± 30%, −23 ± 28% and −8 ± 33% for aerosols, HCHO and NO2, respectively. The frequency distributions of these near-surface concentrations nevertheless agree quite well, indicating that near-surface concentrations derived from MAX-DOAS are certainly useful in a climatological sense. A major difference between the two methods is the dynamic range of retrieved characteristic profile heights, which is larger for method B than for method A. This effect is most pronounced for HCHO, where profile shapes retrieved with method A are very close to the a priori, and moderate for NO2 and aerosol extinction, which on average show quite good agreement for characteristic profile heights below 1.5 km. One of the main advantages of method A is its stability, even under suboptimal conditions (e.g. in the presence of clouds). Method B is generally less stable, and this probably explains a substantial part of the quite large relative differences between the two methods. However, despite the relatively low precision of individual profile retrievals, seasonally averaged profile heights retrieved with method B appear to be less biased towards a priori assumptions than those retrieved with method A. This gives confidence in the result obtained with method B, namely that aerosol extinction profiles tend on average to be higher than NO2 profiles in spring and summer, whereas they seem on average to be of the same height in winter, a result which is especially relevant to the validation of satellite retrievals.
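A minimal sketch of the "characteristic profile height" diagnostic used above, i.e. the height below which 75% of the vertically integrated tropospheric column resides, computed for a hypothetical layered profile; the grid and profile values are placeholders, not retrieved MAX-DOAS data.

```python
import numpy as np

def characteristic_height(z_edges, concentration, fraction=0.75):
    """Height below which `fraction` of the partial column resides."""
    dz = np.diff(z_edges)
    partial = concentration * dz                     # per-layer partial columns
    cumulative = np.cumsum(partial) / partial.sum()  # normalised cumulative column
    # Interpolate the cumulative column onto the layer upper edges.
    return np.interp(fraction, cumulative, z_edges[1:])

z_edges = np.linspace(0.0, 4.0, 14)                       # km, 13-layer grid
profile = np.exp(-0.5 * (z_edges[:-1] + z_edges[1:]))     # decaying test profile
print(characteristic_height(z_edges, profile))            # km
```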


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. F25-F34 ◽  
Author(s):  
Benoit Tournerie ◽  
Michel Chouteau ◽  
Denis Marcotte

We present and test a new method to correct for the static shift affecting magnetotelluric (MT) apparent resistivity sounding curves. We use geostatistical analysis of apparent resistivity and phase data for selected periods. For each period, we first estimate and model the experimental variograms and cross variogram between phase and apparent resistivity. We then use the geostatistical model to estimate, by cokriging, the corrected apparent resistivities using the measured phases and apparent resistivities. The static shift factor is obtained as the difference between the logarithm of the corrected and measured apparent resistivities. We retain as final static shift estimates the ones for the period displaying the best correlation with the estimates at all periods. We present a 3D synthetic case study showing that the static shift is retrieved quite precisely when the static shift factors are uniformly distributed around zero. If the static shift distribution has a nonzero mean, we obtain the best results when an apparent resistivity data subset can be identified a priori as unaffected by static shift and cokriging is done using only this subset. The method has been successfully tested on the synthetic COPROD-2S2 2D MT data set and on a 3D-survey data set from Las Cañadas Caldera (Tenerife, Canary Islands) that is severely affected by static shift.
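A minimal sketch of the static-shift bookkeeping described above: the cokriging step itself is replaced by a placeholder for the corrected apparent resistivities, the per-period shift factor is the difference between the logarithms of the corrected and measured values, and the retained estimate is the one for the period whose shifts correlate best with those of all periods. Array shapes and values are illustrative only.

```python
import numpy as np

def select_static_shift(rho_measured, rho_corrected):
    """rho_* arrays: (n_periods, n_sites) apparent resistivities."""
    # Static shift factor per period and site.
    shifts = np.log10(rho_corrected) - np.log10(rho_measured)
    # Correlation of each period's shift estimates with every other period's.
    corr = np.corrcoef(shifts)
    mean_corr = (corr.sum(axis=1) - 1.0) / (corr.shape[0] - 1)
    best_period = int(np.argmax(mean_corr))
    return shifts[best_period], best_period

rng = np.random.default_rng(3)
true_shift = rng.normal(0.0, 0.3, size=50)                        # one factor per site
rho_measured = 100.0 * 10 ** (true_shift + rng.normal(0, 0.05, (8, 50)))
rho_corrected = np.full((8, 50), 100.0)                           # stand-in for cokriged values
shift_hat, period = select_static_shift(rho_measured, rho_corrected)
```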


Paleobiology ◽  
2016 ◽  
Vol 43 (1) ◽  
pp. 68-84 ◽  
Author(s):  
Bradley Deline ◽  
William I. Ausich

Abstract A priori choices in the detail and breadth of a study are important in addressing scientific hypotheses. In particular, choices in the number and type of characters can greatly influence the results in studies of morphological diversity. A new character suite was constructed to examine trends in the disparity of early Paleozoic crinoids. Character-based rarefaction analysis indicated that a small subset of these characters (~20% of the complete data set) could be used to capture most of the properties of the entire data set in analyses of crinoids as a whole, noncamerate crinoids, and to a lesser extent camerate crinoids. This pattern may be the result of the covariance between characters and the characterization of rare morphologies that are not represented in the primary axes in morphospace. Shifting emphasis on different body regions (oral system, calyx, periproct system, and pelma) also influenced estimates of relative disparity between subclasses of crinoids. Given these results, morphological studies should include a pilot analysis to better examine the amount and type of data needed to address specific scientific hypotheses.
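A minimal sketch of character-based rarefaction as described above: disparity (here a simple mean pairwise distance among taxa) is recomputed from random character subsets of increasing size to see how small a subset recovers the full-matrix signal. The binary character matrix is a synthetic placeholder, not the crinoid data set.

```python
import numpy as np

def mean_pairwise_distance(matrix):
    """Mean Hamming-type distance among all taxon pairs (rows)."""
    n = matrix.shape[0]
    dists = [np.mean(matrix[i] != matrix[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

rng = np.random.default_rng(4)
characters = rng.integers(0, 2, size=(40, 100))   # 40 taxa, 100 binary characters

for k in (10, 20, 40, 80, 100):
    reps = [mean_pairwise_distance(characters[:, rng.choice(100, k, replace=False)])
            for _ in range(50)]
    print(f"{k:3d} characters: disparity = {np.mean(reps):.3f} ± {np.std(reps):.3f}")
```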

