A scaling analysis of ozone photochemistry: I. Model development

2005, Vol 5 (6), pp. 12957-12983
Author(s): B. Ainslie, D. G. Steyn

Abstract. A scaling analysis has been used to capture the integrated behaviour of several photochemical mechanisms for a wide range of precursor concentrations and a variety of environmental conditions. The Buckingham Pi method of dimensional analysis was used to express the relevant variables in terms of dimensionless groups. These groupings show that maximum ozone, initial NOx and initial VOC concentrations are made non-dimensional by the average NO2 photolysis rate (jav) and the rate constant for the NO–O3 titration reaction (kNO); temperature by the NO–O3 activation energy (ENO) and the Boltzmann constant (k); and total irradiation time by the cumulative javΔt photolysis rate (π3). The analysis shows that the dimensionless maximum ozone concentration can be described by a product of powers of dimensionless initial NOx concentration, dimensionless temperature, and a similarity curve that depends directly on the ratio of initial VOC to NOx concentrations and implicitly on the cumulative NO2 photolysis rate. When Weibull transformed, the similarity relationship shows a scaling break, with dimensionless model output clustering onto two straight-line segments parameterized using four variables: two describing the slopes of the segments and two giving the location of their intersection. A fifth parameter is used to normalize the model output. The scaling analysis, similarity curve and parameterization appear to be independent of the details of the chemical mechanism, and hold for a variety of VOC species and mixtures and a wide range of temperatures and actinic fluxes.
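The dimensionless groups described above are straightforward to form programmatically. The following minimal Python sketch applies the stated scalings (concentrations by jav/kNO, temperature by k·T/ENO); all numerical values are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of the Buckingham-Pi scalings described above.
# All numerical values are illustrative placeholders, not data from the paper.
k_B = 1.380649e-23        # Boltzmann constant (J/K)

j_av = 5.0e-3             # average NO2 photolysis rate (1/s), assumed
k_NO = 4.4e-4             # NO-O3 titration rate constant (1/(ppb s)), assumed
E_NO = 2.1e-20            # NO-O3 activation energy (J), assumed
T = 298.0                 # temperature (K)
nox0, voc0 = 50.0, 400.0  # initial NOx and VOC concentrations (ppb), assumed
o3_max = 120.0            # modelled maximum ozone concentration (ppb), assumed

scale = j_av / k_NO       # concentration scale (ppb)
pi_o3 = o3_max / scale    # dimensionless maximum ozone
pi_nox = nox0 / scale     # dimensionless initial NOx
pi_T = k_B * T / E_NO     # dimensionless temperature
ratio = voc0 / nox0       # VOC/NOx ratio driving the similarity curve

print(pi_o3, pi_nox, pi_T, ratio)
```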


Author(s): Vassilios Papapostolou, Charles Turquand d'Auzay, Nilanjan Chakraborty

Abstract. The minimum ignition energy (MIE) requirements for ensuring successful thermal runaway and self-sustained flame propagation have been analysed for forced ignition of homogeneous stoichiometric biogas-air mixtures, for a wide range of initial turbulence intensities and CO2 dilutions, using three-dimensional Direct Numerical Simulations under decaying turbulence. The biogas is represented by a CH4 + CO2 mixture, and a two-step chemical mechanism, involving incomplete oxidation of CH4 to CO and H2O and an equilibrium between CO oxidation and CO2 dissociation, has been used to simulate biogas-air combustion. The MIE increases with increasing CO2 content in the biogas due to the detrimental effect of CO2 dilution on the burning and heat release rates. For large root-mean-square (rms) values of turbulent velocity fluctuation, the MIE for ensuring self-sustained flame propagation is greater than the MIE for ensuring thermal runaway alone (irrespective of its eventual outcome), and the MIE values increase with increasing rms turbulent velocity in both cases. Beyond a critical turbulence intensity, the MIE values increase more steeply with increasing rms turbulent velocity than at smaller turbulence intensities. The variations of the normalised MIE (MIE normalised by its value for the quiescent laminar condition) with normalised turbulence intensity for biogas-air mixtures are qualitatively similar to those obtained for the undiluted mixture. However, the critical turbulence intensity has been found to decrease with increasing CO2 dilution. The normalised MIE for self-sustained flame propagation increases with increasing rms turbulent velocity following a power law, and the power-law exponent has been found not to vary much with the level of CO2 dilution. This behaviour has been explained using a scaling analysis and flame wrinkling statistics. The stochasticity of the ignition event has been analysed using different realisations of statistically similar turbulent flow fields for energy inputs corresponding to the MIE; successful outcomes are obtained in most instances, supporting the accuracy of the MIE values identified by this analysis.
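The power-law relation reported for the normalised MIE lends itself to a simple log-log fit. The sketch below illustrates the idea with synthetic placeholder data; neither the points nor the fitted exponent come from the study:

```python
import numpy as np

# Sketch: fit MIE_norm = a * (u'/S_L)**b on a log-log scale.
# The data points are synthetic placeholders, not DNS results.
u_prime = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # normalised rms turbulent velocity (assumed)
mie_norm = np.array([1.8, 3.5, 5.9, 8.6, 11.7])  # normalised MIE (assumed)

b, log_a = np.polyfit(np.log(u_prime), np.log(mie_norm), 1)
a = np.exp(log_a)
print(f"MIE_norm ~ {a:.2f} * (u'/S_L)^{b:.2f}")
```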


2021, Vol 13 (2), pp. 723
Author(s): Antti Kurvinen, Arto Saari, Juhani Heljo, Eero Nippala

It is widely agreed that the dynamics of building stocks are relatively poorly understood, even though the topic is recognized as important. A better understanding of building stock dynamics and future development is crucial, e.g., for sustainable management of the built environment, as various analyses require long-term projections of building stock development. Recognizing the uncertainty inherent in long-term modeling, we propose a transparent, calculation-based QuantiSTOCK model for modeling building stock development. Our approach not only provides a tangible tool for understanding development when the selected assumptions are valid but also, most importantly, allows the sensitivity of results to alternative developments of the key variables to be studied. This relatively simple modeling approach therefore provides fruitful grounds for understanding the impact of different key variables, which is needed to facilitate meaningful debate on housing, land use, and environment-related policies. The QuantiSTOCK model may be extended in numerous ways and lays the groundwork for modeling future developments of building stocks. The presented model may be used in a wide range of analyses, from assessing housing demand at the regional level to providing input for defining sustainable pathways towards climate targets. Due to the availability of high-quality data, the Finnish building stock provided a great test arena for the model development.
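To make the idea of a transparent, calculation-based stock model concrete, the sketch below projects a building stock forward as an annual balance of construction and demolition. The structure and all rates are hypothetical illustrations, not QuantiSTOCK's actual variables:

```python
# Hypothetical sketch of a calculation-based building-stock projection:
# stock(t+1) = stock(t) + construction(t) - demolition(t).
# All rates are illustrative assumptions, not QuantiSTOCK parameters.
def project_stock(initial_stock_m2, years, construction_rate, demolition_rate):
    """Project total floor area forward with constant annual rates."""
    stock = initial_stock_m2
    trajectory = [stock]
    for _ in range(years):
        stock += (construction_rate - demolition_rate) * stock
        trajectory.append(stock)
    return trajectory

# Sensitivity of the result to one key variable: the demolition rate.
for demo in (0.002, 0.005, 0.010):
    final = project_stock(100e6, 30, construction_rate=0.012, demolition_rate=demo)[-1]
    print(f"demolition {demo:.1%}/a -> stock after 30 a: {final / 1e6:.1f} million m2")
```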


2021, Vol ahead-of-print (ahead-of-print)
Author(s): Veepan Kumar, Ravi Shankar, Prem Vrat

Purpose: In today's uncertain business environment, Industry 4.0 is regarded as a viable strategic plan for addressing a wide range of manufacturing-related challenges. However, its level of adoption appears to vary across countries, and in a developing economy like India practitioners are still in the early stages of implementation. The implementation of Industry 4.0 appears to be complex, and it must be investigated holistically in order to be properly understood. Therefore, an attempt has been made to examine Industry 4.0 implementation for Indian manufacturing organizations in detail by analyzing the complexities of the relevant variables.
Design/methodology/approach: SAP-LAP (situation-actor-process and learning-action-performance) and an efficient interpretive ranking process (e-IRP) were used to analyze the variables influencing Industry 4.0 implementation. The variables were identified, as per SAP-LAP, through a thorough review of the literature and the perspectives of various experts. The e-IRP was used to prioritize the selected elements (i.e. actors with respect to processes, and actions with respect to performance) of SAP-LAP.
Findings: This study ranked five stakeholders according to their priority in Industry 4.0 implementation: government policymakers, industry associations, research and academic institutions, manufacturers and customers. The study also prioritized the important actions to be taken by these stakeholders.
Practical implications: The results of this study would be useful in identifying and managing the various actors and actions related to Industry 4.0 implementation. Their prioritized sequence would help practitioners prepare a well-defined and comprehensive strategic roadmap for Industry 4.0.
Originality/value: This study has adopted qualitative and quantitative approaches for identifying and prioritizing the different variables of Industry 4.0 implementation. This, in turn, helps stakeholders comprehend the concept of Industry 4.0 in a much simpler way.
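The flavour of the ranking step can be conveyed with a toy dominance-count scheme. This is a simplified sketch only: the actual e-IRP relies on expert-interpreted paired comparisons, and the matrix below is invented for illustration:

```python
# Toy sketch of ranking actors by dominance counts, loosely in the spirit of
# an interpretive ranking process (e-IRP). The matrix entries are invented:
# dominance[i][j] counts the processes with respect to which actor i is
# judged to dominate actor j.
actors = ["government", "industry associations", "academia",
          "manufacturers", "customers"]
dominance = [
    [0, 3, 4, 4, 4],
    [1, 0, 2, 3, 3],
    [0, 2, 0, 3, 2],
    [0, 1, 1, 0, 2],
    [0, 1, 1, 1, 0],
]

# Net dominance = (times an actor dominates) - (times it is dominated).
scores = {actor: sum(row) - sum(col[i] for col in dominance)
          for i, (actor, row) in enumerate(zip(actors, dominance))}
for rank, (actor, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), 1):
    print(rank, actor, score)
```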


2021
Author(s): Astrid Ramirez Hernandez, Trupti Kathrotia, Torsten Methling, Marina Braun-Unkhoff, Uwe Riedel

Abstract. The development of advanced reaction models to predict pollutant emissions in aero-engine combustors usually relies on surrogate formulations of a specific jet fuel that mimic its chemical composition. 1,3,5-Trimethylbenzene is one of the suitable components to represent aromatic species in those surrogates. However, a comprehensive reaction model for 1,3,5-trimethylbenzene combustion requires a mechanism describing m-xylene oxidation. In this work, the development of a chemical kinetic mechanism describing m-xylene combustion over a wide parameter range (i.e. temperature, pressure and fuel equivalence ratio) is presented. The m-xylene reaction sub-model was developed based on existing reaction mechanisms of similar species, such as toluene, and reaction pathways adapted from the literature. The sub-model was integrated into an existing detailed mechanism that contains the kinetics of a wide range of n-paraffins, iso-paraffins, cyclo-paraffins and aromatics. Simulation results for m-xylene were validated against experimental data available in the literature. Results show that the presented m-xylene mechanism correctly predicts ignition delay times at different pressures and temperatures, as well as laminar burning velocities at atmospheric pressure and various fuel equivalence ratios. At high pressure, some deviations between the calculated and measured laminar burning velocities are obtained at stoichiometric to rich equivalence ratios. Additionally, the model predicts reasonably well the concentration profiles of major and intermediate species at different temperatures and atmospheric pressure.
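Ignition delay times of the kind used here for validation are commonly computed with a constant-volume reactor simulation. The sketch below uses Cantera; the mechanism file name and the m-xylene species label are placeholders to be replaced with the actual ones:

```python
import cantera as ct

# Sketch: constant-volume ignition-delay calculation for an m-xylene/air
# mixture. "mxylene_mech.yaml" and the species label "C8H10" are placeholders.
gas = ct.Solution("mxylene_mech.yaml")
gas.TP = 1100.0, 20.0 * ct.one_atm
gas.set_equivalence_ratio(1.0, "C8H10", "O2:1.0, N2:3.76")

reactor = ct.IdealGasReactor(gas)
net = ct.ReactorNet([reactor])

# Define the ignition delay as the time of maximum temperature-rise rate.
times, temps = [], []
while net.time < 0.05:                # integrate for 50 ms
    net.step()
    times.append(net.time)
    temps.append(reactor.T)

dTdt = [(T2 - T1) / (t2 - t1)
        for T1, T2, t1, t2 in zip(temps, temps[1:], times, times[1:])]
tau_ign = times[dTdt.index(max(dTdt)) + 1]
print(f"ignition delay = {tau_ign * 1e3:.2f} ms")
```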


1970, Vol 185 (1), pp. 407-424
Author(s): H. R. M. Craig, H. J. A. Cox

A comprehensive method of estimating the performance of axial-flow steam and gas turbines is presented, based on analysis of linear cascade tests on blading, on a number of turbine test results, and on air tests of model casings. The validity of the use of such data is briefly considered. Data are presented to allow performance estimation of actual machines over a wide range of Reynolds number, Mach number, aspect ratio and other relevant variables. The use of the method in connection with three-dimensional methods of flow estimation is considered, and data are presented showing encouraging agreement between estimates and available test results. Finally, 'carpets' are presented showing the trends in efficiencies attainable in turbines designed over a wide range of loading, axial velocity/blade speed ratio, Reynolds number and aspect ratio.
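The kind of estimate such a method produces can be illustrated with a toy mean-line efficiency calculation from kinetic-energy loss coefficients. All numbers below are invented placeholders; a real estimate would read the loss coefficients from correlation charts as functions of Reynolds number, Mach number, aspect ratio and loading:

```python
# Toy sketch of a mean-line stage-efficiency estimate from loss coefficients,
# in the spirit of correlation-based performance methods. All values are
# invented placeholders, not chart data.
def stage_efficiency(dh_ideal, c_nozzle, w_rotor, zeta_n, zeta_r):
    """eta = (ideal specific work - kinetic-energy losses) / ideal work."""
    loss_n = zeta_n * 0.5 * c_nozzle**2  # nozzle loss (J/kg)
    loss_r = zeta_r * 0.5 * w_rotor**2   # rotor loss (J/kg)
    return (dh_ideal - loss_n - loss_r) / dh_ideal

# 250 kJ/kg ideal stage work, 550 m/s nozzle exit velocity,
# 300 m/s rotor-relative exit velocity, assumed loss coefficients.
print(f"eta = {stage_efficiency(250e3, 550.0, 300.0, 0.05, 0.08):.3f}")
```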


2021
Author(s): Stefanie Holzwarth, Martin Bachmann, Bringfried Pflug, Aimé Meygret, Caroline Bès, ...

The objective of the H2020 project "Copernicus Cal/Val Solution (CCVS)" is to define a holistic Cal/Val strategy for all ongoing and upcoming Copernicus Sentinel missions. This includes an improved calibration of currently operational or planned Copernicus Sentinel sensors and the validation of Copernicus core products generated by the payload ground segments. CCVS will identify gaps and propose long-term solutions to address existing constraints in the Cal/Val domain and exploit synergies between the missions. An overview of existing calibration and validation sources and means is needed before starting the gap analysis. In this context, this survey is concerned with measurement capabilities for aerial campaigns.

For decades, airborne observations have been an essential contribution to Earth-system model development and space-based observing programmes, both in Earth observation (radar and optical) and in atmospheric research. Airborne reference data can be related directly to satellite observations, since they are collected under ideal validation conditions using well-calibrated reference sensors. Many of these sensors are also used to validate and characterize post-launch instrument performance. The variety of available aircraft, equipped with different instrumentation, ranges from motorized gliders to jets acquiring data from different altitudes up to the upper troposphere. In addition, balloons are used as platforms, either small weather balloons with light payloads (around 3 kg) or open stratospheric balloons with large payloads (more than a ton). For some time now, UAVs/drones have also been used to acquire data for Cal/Val purposes. They offer greater flexibility than airplanes and cover a larger area than in-situ measurements on the ground, but they are limited in instrument weight and maximum altitude above ground. This reflects the wide range of possible aerial measurements supporting Cal/Val activities.

The survey will identify the different airborne campaigns. The report will describe the campaigns, their spatial distribution and extent, ownership and funding, data policy and availability, and measurement frequency. A list of common instrumentation, metrological traceability, availability of uncertainty evaluation and quality management will also be discussed. The report additionally addresses future possibilities, e.g. planned developments and emerging technologies in instrumentation for airborne and balloon-based campaigns.

This presentation gives an overview of the preliminary survey results and puts them in context with the Cal/Val requirements of the different Copernicus Sentinel missions.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101004242.


1982, Vol 54 (3), pp. 683-692
Author(s): J. Timothy Petersik

Ginsburg's filter theory successfully accounts for the perceptual distortions perceived in a wide range of illusions and bistable phenomena. Essentially, the theory proposes that illusory distortions are the natural consequence of low-pass spatial filtering (based upon the human modulation transfer function) of the physical stimulus. With regard to the Müller-Lyer illusion, predictions based upon filter theory and human scan-path data are in accord. However, data linking filter theory's predictions regarding perceptual experiences associated with the illusion to the eye-scan results have been missing. In the present experiment subjects provided subjective estimations of their own eye scans while viewing each of the following stimuli: the fins-out member of the Müller-Lyer illusion, the fins-in member of the Müller-Lyer illusion, and a finless horizontal line (variations of each stimulus consisted of one, two, and three line segments). The analysis of these data supported three predictions that were derived from filter theory. Potential problems facing filter theory are also addressed.
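The filtering operation at the heart of the theory can be sketched as a low-pass (blurring) filter applied to the stimulus. In the sketch below a Gaussian stands in for the human modulation transfer function; this is an assumption of the illustration rather than Ginsburg's exact filter:

```python
import numpy as np

# Sketch: low-pass spatial filtering of a stimulus image, as in filter-theory
# accounts of illusions. A Gaussian kernel stands in for the human modulation
# transfer function (an assumption of this sketch).
def low_pass(image, sigma):
    """Blur by multiplying with a Gaussian in the frequency domain."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    mtf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mtf))

stimulus = np.zeros((64, 64))
stimulus[32, 8:56] = 1.0              # a bare horizontal line segment
blurred = low_pass(stimulus, sigma=2.0)
print(blurred.shape, round(float(blurred.max()), 3))
```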


2019, Vol 35 (9), pp. 1527-1538
Author(s): Chava L Ramspek, Ype de Jong, Friedo W Dekker, Merel van Diepen

Abstract.
Background: Prediction tools that identify chronic kidney disease (CKD) patients at high risk of developing kidney failure have the potential for great clinical value but have seen limited uptake. The aim of the current study is to systematically review all available models predicting kidney failure in CKD patients, organize the empirical evidence on their validity and ultimately provide guidance on the interpretation and uptake of these tools.
Methods: PubMed and EMBASE were searched for relevant articles. Titles, abstracts and full-text articles were sequentially screened for inclusion by two independent researchers. Data on study design, model development and performance were extracted. Risk of bias and clinical usefulness were assessed and combined in order to provide recommendations on which models to use.
Results: Of 2183 screened studies, a total of 42 were included in the current review. Most studies showed high discriminatory capacity, and the included predictors overlapped substantially. Overall, the risk of bias was high. Slightly less than half of the studies (48%) presented enough detail for their prediction tool to be used in practice, and few models were externally validated.
Conclusions: The current systematic review may be used as a tool to select the most appropriate and robust prognostic model for various settings. Although some models showed great potential, many lacked clinical relevance because they were developed in a prevalent patient population with a wide range of disease severity. Future research efforts should focus on external validation and impact assessment in clinically relevant patient populations.
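For readers unfamiliar with the terminology, the "discriminatory capacity" reported for these models is usually summarized by the c-statistic (the area under the ROC curve). A minimal sketch with made-up outcomes and predicted risks:

```python
from sklearn.metrics import roc_auc_score

# Sketch: the c-statistic (ROC AUC) used to summarise a prediction model's
# discrimination. Outcomes and predicted risks below are made up.
kidney_failure = [0, 0, 1, 0, 1, 1, 0, 1]                   # observed outcomes
predicted_risk = [0.1, 0.3, 0.25, 0.2, 0.8, 0.7, 0.2, 0.6]  # model predictions

print(f"c-statistic = {roc_auc_score(kidney_failure, predicted_risk):.2f}")
```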


2011, Vol 366 (1567), pp. 1129-1138
Author(s): Mark Collard, Briggs Buchanan, Jesse Morin, Andre Costopoulos

Recent studies have suggested that the decisions that hunter–gatherers make about the diversity and complexity of their subsistence toolkits are strongly affected by risk of resource failure. However, the risk proxies and samples employed in these studies are potentially problematic. With this in mind, we retested the risk hypothesis with data from hunter–gatherer populations who lived in the northwest coast and plateau regions of the Pacific Northwest during the early contact period. We focused on these populations partly because the northwest coast and plateau differ in ways that can be expected to lead to differences in risk, and partly because of the availability of data for a wide range of risk-relevant variables. Our analyses suggest that the plateau was a more risky environment than the northwest coast. However, the predicted differences in the number and complexity of the populations' subsistence tools were not observed. The discrepancy between our results and those of previous tests of the risk hypothesis is not due to methodological differences. Rather, it seems to reflect an important but hitherto unappreciated feature of the relationship between risk and toolkit structure, namely that the impact of risk is dependent on the scale of the risk differences among populations.

