Observation of turbulent dispersion of artificially released SO2 puffs with UV cameras

2018
Vol 11 (11)
pp. 6169-6188
Author(s):
Anna Solvejg Dinger
Kerstin Stebel
Massimo Cassiani
Hamidreza Ardeshiri
Cirilo Bernardo
...  

Abstract. In atmospheric tracer experiments, a substance is released into the turbulent atmospheric flow to study the dispersion parameters of the atmosphere. This can be done by observing the substance's concentration distribution downwind of the source. Past experiments have suffered from the fact that observations were made only at a few discrete locations and/or at low time resolution. The Comtessa project (Camera Observation and Modelling of 4-D Tracer Dispersion in the Atmosphere) is the first attempt at using ultraviolet (UV) camera observations to sample the three-dimensional (3-D) concentration distribution in the atmospheric boundary layer at high spatial and temporal resolution. For this, during a three-week campaign in Norway in July 2017, sulfur dioxide (SO2), a nearly passive tracer, was artificially released in continuous plumes and nearly instantaneous puffs from a 9 m high tower. Column-integrated SO2 concentrations were observed with six UV SO2 cameras with sampling rates of several hertz and a spatial resolution of a few centimetres. The atmospheric flow was characterised by eddy covariance measurements of heat and momentum fluxes at the release mast and two additional towers. By measuring simultaneously with six UV cameras positioned in a half circle around the release point, we could collect a data set of spatially and temporally resolved tracer column densities from six different directions, allowing a tomographic reconstruction of the 3-D concentration field. However, due to unfavourable cloudy conditions on all measurement days and their restrictive effect on the SO2 camera technique, the presented data set is limited to case studies. In this paper, we present a feasibility study demonstrating that turbulent dispersion parameters can be retrieved from images of artificially released puffs, although the presented data set does not allow for an in-depth analysis of the obtained parameters. The 3-D trajectories of the centre of mass of the puffs were reconstructed, enabling both a direct determination of the centre-of-mass meandering and a scaling of the image pixel dimension to the position of the puff. The latter made it possible to retrieve the temporal evolution of the puff spread projected onto the image plane. The puff spread is a direct measure of the relative dispersion process. Combining meandering and relative dispersion, the absolute dispersion could be retrieved. The turbulent dispersion in the vertical was then used to estimate the effective source size, source timescale and Lagrangian integral time. In principle, the Richardson–Obukhov constant of relative dispersion in the inertial subrange could also be obtained, but the observation time was not sufficiently long in comparison to the source timescale to allow an observation of this dispersion range. While the feasibility of the methodology to measure turbulent dispersion could be demonstrated, a larger data set with more cloud-free puff releases and longer observation times of each puff will be recorded in future studies to give a solid estimate of the turbulent dispersion under a variety of stability conditions.
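A rough sketch of the image analysis this entails (hypothetical code, not the Comtessa processing chain; the array names, pixel scale and background handling are assumptions) computes the centre of mass and spread of a puff from a single calibrated column-density image:

```python
import numpy as np

def puff_moments(column_density, pixel_size_m, background=0.0):
    """Centre of mass and spread of a puff in one column-density image.

    column_density : 2-D array of SO2 column densities
    pixel_size_m   : physical pixel size at the puff distance (obtained from
                     the reconstructed centre-of-mass position), in metres
    background     : background column density subtracted before analysis
    """
    cd = np.clip(column_density - background, 0.0, None)
    total = cd.sum()
    rows, cols = np.indices(cd.shape)
    # First moments: centre of mass in image coordinates (meandering signal).
    yc = (rows * cd).sum() / total
    xc = (cols * cd).sum() / total
    # Second central moments: puff spread projected onto the image plane,
    # the direct measure of the relative dispersion process.
    sigma_x = np.sqrt((((cols - xc) ** 2) * cd).sum() / total) * pixel_size_m
    sigma_y = np.sqrt((((rows - yc) ** 2) * cd).sum() / total) * pixel_size_m
    return (xc * pixel_size_m, yc * pixel_size_m), (sigma_x, sigma_y)
```

Repeated per frame and per camera, this yields the meandering (centre-of-mass) and relative-dispersion (spread) time series that are combined into the absolute dispersion.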


2010
Vol 661
pp. 412-445
Author(s):
Q. LIAO
E. A. COWEN

The relative dispersion of a scalar plume is examined experimentally. A passive fluorescent tracer is continuously released from a flush-bed mounted source into the turbulent boundary layer of a laboratory-generated open channel flow. A two-dimensional particle image velocimetry–laser-induced fluorescence (PIV–LIF) technique is applied to measure the instantaneous horizontal velocity and concentration fields. Measured results are used to investigate the relationship between the boundary-layer turbulence and the evolution of the distance-neighbour function, namely the probability density distribution of the separation distance between two marked fluid particles within a cloud of particles. Special attention is paid to the hypothesis that a diffusion equation can describe the evolution of the distance-neighbour function. The diffusion coefficient in such an equation, termed the ‘relative diffusivity’, is directly calculated based on the concentration distribution. The results indicate that the relative diffusivity statistically depends on particle separation lengths instead of the overall size of the plume. Measurements at all stages of the dispersing plume collapse onto a single curve and follow a 4/3 power law in the inertial subrange. The Richardson–Obukhov constant is estimated from the presented data set. The relationship between the one-dimensional (1D) representation of the distance-neighbour function and its three-dimensional (3D) representation is discussed. An extended model for relative diffusivity beyond the inertial subrange is proposed based on the structure of the turbulent velocity field, and it agrees well with measurements. The experimental evidence implies that, while the diffusion of the distance-neighbour function is completely determined by the underlying turbulence, the overall growth rate of the plume is affected by both the turbulent flow and its actual concentration distribution.
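As a concrete illustration, here is a minimal numerical sketch (my naming and discretisation, not the authors' code) of the 1-D distance-neighbour function, computed as the normalised self-correlation of a concentration profile:

```python
import numpy as np

def distance_neighbour_1d(c, dx):
    """1-D distance-neighbour function q(s) of a concentration profile c(x).

    q(s) is the pdf of the separation s between two marked fluid particles
    drawn from the cloud, i.e. the self-correlation of c normalised so that
    q integrates to one over all separations s.
    """
    c = np.asarray(c, dtype=float)
    overlap = np.correlate(c, c, mode="full") * dx   # integral of c(x)c(x+s) dx
    q = overlap / (c.sum() * dx) ** 2
    s = np.arange(-(len(c) - 1), len(c)) * dx
    return s, q
```

The relative diffusivity can then be estimated from how q(s, t) evolves between successive measured concentration fields.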


Author(s):  
Todd D. Jack
Carl N. Ford
Shari-Beth Nadell
Vicki Crisp

A causal analysis of aviation accidents by engine type is presented. The analysis employs a top-down methodology that performs a detailed analysis of the causes and factors cited in accident reports to develop a “fingerprint” profile for each engine type. This is followed by an in-depth analysis of each fingerprint that produces a sequential breakdown. Analysis results of National Transportation Safety Board (NTSB) accidents, both fatal and non-fatal, that occurred during the time period of 1990–1998 are presented. Each data set comprises all accidents that involved aircraft with the following engine types: turbofan, turbojet, turboprop, and turboshaft (includes turbine helicopters). During this time frame there were 1461 accidents involving turbine powered aircraft; 306 of these involved propulsion malfunctions and/or failures. Analyses are performed to investigate the sequential relationships between propulsion system malfunctions or failures and other causes and factors for each engine type. Other malfunctions or events prominent within each data set are also analyzed. Significant trends are identified. The results from this study can be used to identify areas for future research into intervention, prevention, and mitigation strategies.
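A toy sketch of the kind of “fingerprint” aggregation described above (the table and column names are invented; the NTSB data are not reproduced here):

```python
import pandas as pd

# Hypothetical accident records; the real study used NTSB reports 1990-1998.
accidents = pd.DataFrame({
    "engine_type": ["turbofan", "turboprop", "turbofan", "turboshaft"],
    "cited_cause": ["propulsion", "weather", "crew", "propulsion"],
})

# Fingerprint profile: relative citation frequency of each cause,
# computed separately within each engine type.
fingerprint = (accidents
               .groupby(["engine_type", "cited_cause"])
               .size()
               .groupby(level="engine_type")
               .transform(lambda s: s / s.sum()))
print(fingerprint)
```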


2010
Vol 2 (2)
pp. 38-51
Author(s):  
Marc Halbrügge

Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measure based on rank orders and sequence matching techniques is proposed instead. This measure, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
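A small sketch of this style of fit statistic (a stand-in combining rank orders, sequence matching and a permutation test; not the paper's exact definition):

```python
import random
from difflib import SequenceMatcher

def rank_sequence_score(model_trace, human_trace):
    """Similarity of two behaviour traces: convert each to rank orders,
    then compare the rank sequences with a sequence-matching ratio."""
    to_ranks = lambda xs: [sorted(xs).index(x) for x in xs]
    return SequenceMatcher(None, to_ranks(model_trace),
                           to_ranks(human_trace)).ratio()

def permutation_p_value(model_trace, human_trace, n=10_000, seed=0):
    """One-sided permutation test: how often does a shuffled human trace
    fit the model at least as well as the observed one?"""
    rng = random.Random(seed)
    observed = rank_sequence_score(model_trace, human_trace)
    hits = 0
    for _ in range(n):
        shuffled = list(human_trace)
        rng.shuffle(shuffled)
        hits += rank_sequence_score(model_trace, shuffled) >= observed
    return (hits + 1) / (n + 1)
```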


Author(s):  
Hsien-Chung Lin
Eugen Solowjow
Masayoshi Tomizuka
Edwin Kreuzer

This contribution presents a method to estimate environmental boundaries with mobile agents. The agents sample a concentration field of interest at their respective positions and infer a level curve of the unknown field. The presented method is based on support vector machines (SVMs), whereby the concentration level of interest serves as the decision boundary. The field itself does not have to be estimated in order to obtain the level curve, which makes the method computationally very appealing. A myopic strategy is developed to pick locations that yield the most informative concentration measurements. Cooperative operation of multiple agents is demonstrated by dividing the domain into Voronoi tessellations. Numerical studies demonstrate the feasibility of the method on a real data set of the California coastal area. The exploration strategy is benchmarked against a random walk, which it clearly outperforms.
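A minimal sketch of the core idea (the synthetic field, threshold and SVM parameters are assumptions; the study used real oceanographic data): label each sample as above or below the concentration level of interest and read the level curve off the SVM decision boundary, without ever reconstructing the field.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the unknown concentration field.
field = lambda x, y: np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)
level = 0.5  # concentration level of interest

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(200, 2))          # agent sample sites
labels = field(positions[:, 0], positions[:, 1]) > level  # above/below level

# The SVM decision boundary approximates the level curve.
svm = SVC(kernel="rbf", gamma=20.0).fit(positions, labels)

# Points where the decision function is near zero trace the estimated curve.
xx, yy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
grid = np.column_stack([xx.ravel(), yy.ravel()])
level_curve = grid[np.abs(svm.decision_function(grid)) < 0.05]
```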


Author(s):  
A. Andreini
A. Bonini
G. Caciolli
B. Facchini
S. Taddei

Due to the stringent cooling requirements of novel aero-engine combustor liners, a comprehensive understanding of the phenomena concerning the interaction of hot gases with typical coolant jets plays a major role in the design of efficient cooling systems. In this work, an aerodynamic analysis of the effusion cooling system of an aero-engine combustor liner was performed; the aim was the definition of a correlation for the discharge coefficient (CD) of the single effusion hole. The data were taken from a set of CFD RANS (Reynolds-averaged Navier-Stokes) simulations, in which the behavior of the effusion cooling system was investigated over a wide range of thermo/fluid-dynamics conditions. In some of these tests, the influence on the effusion flow of an additional air bleeding port was taken into account, making it possible to analyze its effects on the CD of the effusion holes. An in-depth analysis of the numerical data set pointed out that the data can be efficiently reduced through the ratio of the annulus and hole Reynolds numbers: the dependence of the discharge coefficient on this parameter is roughly linear. The correlation was included in an in-house one-dimensional thermo/fluid network solver, and its results were compared with CFD data. An overall good agreement of pressure and mass flow rate distributions was observed. The main source of inaccuracy was observed in the case of relevant air bleed mass flow rates, due to the inherently three-dimensional behavior of the flow close to the bleed opening. An additional comparison with experimental data was performed in order to improve confidence in the accuracy of the correlation: within the validity range of pressure ratios in which the correlation is defined (>1.02), this comparison pointed out a good reliability in the prediction of discharge coefficients. An approach to model air bleeding was then proposed, with an assessment of its impact on liner wall temperature prediction.
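The reported data reduction suggests a one-parameter fit of the kind sketched below (the sample values and coefficients are invented; only the linear form in the Reynolds-number ratio follows the text):

```python
import numpy as np

# Hypothetical CFD samples: ratio of annulus to hole Reynolds number
# against the computed discharge coefficient.
re_ratio = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
cd       = np.array([0.72, 0.69, 0.66, 0.62, 0.59])

slope, intercept = np.polyfit(re_ratio, cd, deg=1)

def cd_correlation(re_annulus, re_hole):
    """Linear CD correlation in Re_annulus / Re_hole (form only; not the
    paper's fitted coefficients)."""
    return intercept + slope * (re_annulus / re_hole)
```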


1998
Vol 357
pp. 167-198
Author(s):  
BURKHARD M. O. HEPPE

The relative velocity of two fluid particles in homogeneous and stationary turbulence is considered. Looking for reduced dynamics of turbulent dispersion, we apply the nonlinear Mori–Zwanzig projector method to the Navier–Stokes equations. The projector method decomposes the Lagrangian acceleration into a conditionally averaged part and a random force. The result is an exact generalized Langevin equation for the Lagrangian velocity differences accounting for the exact equation of the Eulerian probability density. From the generalized Langevin equation, we obtain a stochastic model of relative dispersion by stochastic estimation of conditional averages and by assuming the random force to be Gaussian white noise. This new approach to dispersion modelling generalizes and unifies stochastic models based on the well-mixed condition and the moments approximation. Furthermore, we incorporate viscous effects in a systematic way. At a moderate Reynolds number, the model agrees qualitatively with direct numerical simulations showing highly non-Gaussian separation and velocity statistics for particle pairs initially close together. At very large Reynolds numbers, the mean-square separation obeys a Richardson law with coefficient of the order of 0.1.
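To make the final scaling concrete, here is a toy Monte Carlo sketch (not Heppe's generalized Langevin model; it integrates the simpler Richardson diffusion limit with scale-dependent diffusivity K(r) = k * eps**(1/3) * r**(4/3)) reproducing the t^3 growth of the mean-square separation:

```python
import numpy as np

k, eps, dt, steps, n_pairs = 0.5, 1.0, 1e-4, 20_000, 5_000
rng = np.random.default_rng(1)

r = np.full(n_pairs, 1e-3)  # pairs initially close together
for _ in range(steps):
    K = k * eps ** (1 / 3) * r ** (4 / 3)
    drift = (4 / 3) * k * eps ** (1 / 3) * r ** (1 / 3)  # Ito drift dK/dr
    r += drift * dt + np.sqrt(2.0 * K * dt) * rng.standard_normal(n_pairs)
    r = np.abs(r)  # reflect separations at r = 0

t = steps * dt
# Richardson law: <r^2> = g * eps * t^3 once the initial separation is forgotten.
print("<r^2>/(eps*t^3) =", (r ** 2).mean() / (eps * t ** 3))
```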


2015
Vol 39 (3)
pp. 326-345
Author(s):
David Martín-Moncunill
Miguel-Ángel Sicilia-Urban
Elena García-Barriocanal
Salvador Sánchez-Alonso

Purpose – Large terminologies usually contain a mix of terms that are either generic or domain specific, which makes the use of the terminology itself a difficult task and may limit the positive effects of these systems. The purpose of this paper is to systematically evaluate the degree of domain specificity of the AGROVOC controlled vocabulary terms, as a representative of a large terminology in the agricultural domain, and to discuss the generic/specific boundaries across its hierarchy. Design/methodology/approach – A user-oriented study with domain experts in conjunction with quantitative and systematic analysis. First, an in-depth analysis of AGROVOC was carried out to make a proper selection of terms for the experiment. Then domain experts were asked to classify the terms according to their domain specificity. An evaluation was conducted to analyse the domain experts' results. Finally, the resulting data set was automatically compared with the terms in SUMO, an upper ontology, and MILO, a mid-level ontology, to analyse the overlap. Findings – Results show the existence of a high number of generic terms. The motivation for several of the unclear cases is also discussed. The automatic evaluation showed that there is no direct way to assess the specificity degree of a term by using the SUMO and MILO ontologies; however, it provided additional validation of the results gathered from the domain experts. Research limitations/implications – The “domain analysis” concept has long been discussed and can be addressed from different perspectives. A summary of these perspectives and an explanation of the approach followed in this experiment are included in the background section. Originality/value – The authors propose an approach to identify the domain specificity of terms in large domain-specific terminologies and a criterion to measure the overall domain specificity of a knowledge organisation system, based on domain-expert analysis. The authors also provide a first insight into using automated measures to determine the degree to which a given term can be considered domain specific. The resulting data set from the domain experts' evaluation can be reused as a gold standard for further research on these automatic measures.
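The ontology comparison lends itself to a simple set-based sketch (the term lists below are invented; the study matched AGROVOC terms against SUMO and MILO):

```python
# Invented toy term sets; membership in an upper/mid-level ontology is a
# hint, not proof, that a term is generic rather than domain specific.
agrovoc_terms = {"soil", "wheat", "irrigation", "price", "water", "harvester"}
upper_level_terms = {"water", "price", "process", "object"}  # SUMO/MILO stand-in

candidate_generic  = agrovoc_terms & upper_level_terms
candidate_specific = agrovoc_terms - upper_level_terms
print(sorted(candidate_generic), sorted(candidate_specific))
```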


2009
Vol 9 (14)
pp. 4827-4840
Author(s):
M. L. Melamed
R. Basaldud
R. Steinbrecher
S. Emeis
L. G. Ruíz-Suárez
...  

Abstract. This work presents ground-based differential optical absorption spectroscopy (DOAS) measurements of nitrogen dioxide (NO2) during the MILAGRO field campaign in March 2006 at the Tenango del Aire research site located to the southeast of Mexico City. The DOAS NO2 column density measurements are used in conjunction with ceilometer, meteorological and surface nitric oxide (NO), nitrogen oxides (NOx) and total reactive nitrogen (NOy) measurements to analyze pollution transport events to the southeast of Mexico City during the MILAGRO field campaign. The study divides the data set into three case study pollution transport events that occurred at the Tenango del Aire research site. The unique data set is then used to provide an in-depth analysis of example days of each of the pollution transport events. An in-depth analysis of 13 March 2006, a Case One day, shows the transport of several air pollution plumes during the morning through the Tenango del Aire research site when southerly winds are present, and demonstrates how DOAS tropospheric NO2 vertical column densities (VCD), surface NO2 mixing ratios and ceilometer data are used to determine the vertical homogeneity of the pollution layer. The analysis of 18 March 2006, a Case Two day, shows that when northerly winds are present for the entire day, the air at the Tenango del Aire research site is relatively clean and no major pollution plumes are detected. Case Three days are characterized by relatively clean air throughout the morning with large DOAS NO2 enhancements detected in the afternoon. The analysis of 28 March 2006 shows that these DOAS NO2 enhancements are likely due to lightning activity, and demonstrates how suitable ground-based DOAS measurements are for monitoring anthropogenic and natural pollution sources that reside above the mixing layer.
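The vertical-homogeneity check can be sketched as follows (a back-of-the-envelope version of the reasoning, not the authors' code; the constant air density is an assumption that ignores its decrease with height):

```python
N_AIR = 2.5e19  # molecules cm^-3, approximate surface air number density

def expected_surface_no2_ppb(vcd_no2, mixing_height_m):
    """Surface NO2 mixing ratio (ppb) implied by a tropospheric vertical
    column density (molecules cm^-2) if the layer up to the ceilometer
    mixing height is well mixed; compare with the in-situ NO2 reading."""
    height_cm = mixing_height_m * 100.0
    return vcd_no2 / (N_AIR * height_cm) * 1e9

# Example: a VCD of 1e16 molecules cm^-2 over an 800 m mixed layer
print(expected_surface_no2_ppb(1e16, 800.0))  # ~5 ppb
```

If the measured surface mixing ratio is close to this value, the column is consistent with a well-mixed layer; a much lower surface value points to pollution residing above the mixing layer.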


2011
Vol 38 (4)
pp. 433-443
Author(s):
Hamid Zaman
Khandker M. Nurul Habib

Travel demand management (TDM) for achieving sustainability is now considered one of the most important aspects of transportation planning and operation. It is a well-known fact that excessive use of private cars results in inefficient travel behaviour. From the TDM perspective, it is therefore of great importance to analyze travel behaviour to improve our understanding of how to influence people to reduce car use and choose more sustainable modes such as carpooling, public transit, park-and-ride, walking and biking. This study attempts an in-depth analysis of commuting mode choice behaviour using a week-long commuter survey data set collected in the City of Edmonton. Using an error-correlated nested logit model for panel data, this study investigates the sensitivities of various factors, including specific TDM policies such as flexible office hours and compressed work weeks. Results of the investigation provide a deeper understanding of commuting behaviour and guidelines for designing effective TDM policies.
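For readers unfamiliar with the model family, here is a minimal nested logit probability calculation (the utilities, nests and scale are invented; the study's error-correlated panel specification is considerably richer):

```python
import math

nests = {  # invented systematic utilities per alternative
    "car":     {"drive_alone": -1.0, "carpool": -1.4},
    "transit": {"bus": -1.2, "park_and_ride": -1.5},
    "active":  {"walk": -2.0, "bike": -1.8},
}
mu = 0.7  # within-nest scale, 0 < mu <= 1; mu = 1 collapses to plain logit

def nested_logit_probs(nests, mu):
    # Inclusive value (logsum) of each nest.
    logsums = {name: math.log(sum(math.exp(v / mu) for v in alts.values()))
               for name, alts in nests.items()}
    denom = sum(math.exp(mu * ls) for ls in logsums.values())
    probs = {}
    for name, alts in nests.items():
        p_nest = math.exp(mu * logsums[name]) / denom
        within = sum(math.exp(v / mu) for v in alts.values())
        for alt, v in alts.items():
            probs[alt] = p_nest * math.exp(v / mu) / within
    return probs

print(nested_logit_probs(nests, mu))  # probabilities sum to 1
```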

