A scaling analysis of ozone photochemistry

2006 ◽  
Vol 6 (12) ◽  
pp. 4067-4077 ◽  
Author(s):  
B. Ainslie ◽  
D. G. Steyn

Abstract. A scaling analysis has been used to capture the integrated behaviour of several photochemical mechanisms for a wide range of precursor concentrations and a variety of environmental conditions. The Buckingham Pi method of dimensional analysis was used to express the relevant variables in terms of dimensionless groups. These groupings show that maximum ozone, initial NOx and initial VOC concentrations are made non-dimensional by the average NO2 photolysis rate (jav) and the rate constant for the NO–O3 titration reaction (kNO); temperature by the NO–O3 activation energy (ENO) and the Boltzmann constant (k); and total irradiation time by the cumulative javΔt photolysis rate. The analysis shows that the dimensionless maximum ozone concentration can be described by a product of powers of dimensionless initial NOx concentration, dimensionless temperature, and a similarity curve directly dependent on the ratio of initial VOC to NOx concentration and implicitly dependent on the cumulative NO2 photolysis rate. When Weibull transformed, the similarity relationship shows a scaling break, with dimensionless model output clustering onto two straight line segments, parameterized using four variables: two describing the slopes of the line segments and two giving the location of their intersection. A fifth parameter is used to normalize the model output. The scaling analysis, similarity curve and parameterization appear to be independent of the details of the chemical mechanism, and hold for a variety of VOC species and mixtures and a wide range of temperatures and actinic fluxes.
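The two-segment similarity relationship described in the abstract can be sketched numerically. All function and parameter names below are illustrative assumptions, not taken from the paper; the Weibull-transform form shown is the generic one, and the piecewise-linear fit simply mirrors the four parameters described (two slopes plus the break-point location):

```python
import math

def weibull_transform(y):
    # Generic Weibull coordinates, assumed here as ln(-ln(1 - y)) for 0 < y < 1
    return math.log(-math.log(1.0 - y))

def similarity_curve(x, x_break, y_break, slope_low, slope_high):
    """Piecewise-linear similarity relationship in Weibull-transformed
    coordinates: two straight segments with slopes slope_low and slope_high
    meeting at the break point (x_break, y_break)."""
    if x <= x_break:
        return y_break + slope_low * (x - x_break)
    return y_break + slope_high * (x - x_break)
```

By construction the two segments meet at the scaling break, so the fitted curve is continuous there.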

2005 ◽  
Vol 5 (6) ◽  
pp. 12957-12983
Author(s):  
B. Ainslie ◽  
D. G. Steyn

Abstract. A scaling analysis has been used to capture the integrated behaviour of several photochemical mechanisms for a wide range of precursor concentrations and a variety of environmental conditions. The Buckingham Pi method of dimensional analysis was used to express the relevant variables in terms of dimensionless groups. These groupings show that maximum ozone, initial NOx and initial VOC concentrations are made non-dimensional by the average NO2 photolysis rate (jav) and the rate constant for the NO-O3 titration reaction (kNO); temperature by the NO-O3 activation energy (ENO) and the Boltzmann constant (k); and total irradiation time by the cumulative javΔt photolysis rate (π3). The analysis shows that the dimensionless maximum ozone concentration can be described by a product of powers of dimensionless initial NOx concentration, dimensionless temperature, and a similarity curve directly dependent on the ratio of initial VOC to NOx concentration and implicitly dependent on the cumulative NO2 photolysis rate. When Weibull transformed, the similarity relationship shows a scaling break, with dimensionless model output clustering onto two straight line segments, parameterized using four variables: two describing the slopes of the line segments and two giving the location of their intersection. A fifth parameter is used to normalize the model output. The scaling analysis, similarity curve and parameterization appear to be independent of the details of the chemical mechanism, and hold for a variety of VOC species and mixtures and a wide range of temperatures and actinic fluxes.


Author(s):  
Vassilios Papapostolou ◽  
Charles Turquand d’Auzay ◽  
Nilanjan Chakraborty

Abstract. The minimum ignition energy (MIE) requirements for ensuring successful thermal runaway and self-sustained flame propagation have been analysed for forced ignition of homogeneous stoichiometric biogas-air mixtures for a wide range of initial turbulence intensities and CO2 dilutions using three-dimensional Direct Numerical Simulations under decaying turbulence. The biogas is represented by a CH4 + CO2 mixture, and a two-step chemical mechanism involving incomplete oxidation of CH4 to CO and H2O and an equilibrium between CO oxidation and CO2 dissociation has been used for simulating biogas-air combustion. It has been found that the MIE increases with increasing CO2 content in the biogas due to the detrimental effect of the CO2 dilution on the burning and heat release rates. The MIE for ensuring self-sustained flame propagation has been found to be greater than the MIE for ensuring only thermal runaway, irrespective of its eventual outcome, for large root-mean-square (rms) values of turbulent velocity fluctuation, and the MIE values increase with increasing rms turbulent velocity for both cases. It has been found that the MIE values increase more steeply with increasing rms turbulent velocity beyond a critical turbulence intensity than in the case of smaller turbulence intensities. The variations of the normalised MIE (MIE normalised by the value for the quiescent laminar condition) with normalised turbulence intensity for biogas-air mixtures are found to be qualitatively similar to those obtained for the undiluted mixture. However, the critical turbulence intensity has been found to decrease with increasing CO2 dilution. It has been found that the normalised MIE for self-sustained flame propagation increases with increasing rms turbulent velocity following a power law, and the power-law exponent has been found not to vary much with the level of CO2 dilution. This behaviour has been explained using a scaling analysis and flame wrinkling statistics.
The stochasticity of the ignition event has been analysed by using different realisations of statistically similar turbulent flow fields for the energy inputs corresponding to the MIE and it has been demonstrated that successful outcomes are obtained in most of the instances, justifying the accuracy of the MIE values identified by this analysis.
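The two-regime behaviour described above, with a steeper rise of the normalised MIE beyond a critical turbulence intensity, can be sketched as a piecewise power law. The exponents, break point and prefactor below are illustrative placeholders, not values reported in the paper:

```python
def normalised_mie(u, u_crit, n_low, n_high, prefactor=1.0):
    """Normalised MIE as a piecewise power law of normalised rms turbulent
    velocity u, with a steeper exponent n_high beyond the critical
    turbulence intensity u_crit; continuity is enforced at the break."""
    if u <= u_crit:
        return prefactor * u ** n_low
    # choose the high-regime prefactor so the two branches meet at u_crit
    prefactor_high = prefactor * u_crit ** (n_low - n_high)
    return prefactor_high * u ** n_high
```

Matching the prefactors at u_crit keeps the curve continuous while letting the slope (on log-log axes) jump at the break, as the abstract describes.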


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Veepan Kumar ◽  
Ravi Shankar ◽  
Prem Vrat

Purpose. In today’s uncertain business environment, Industry 4.0 is regarded as a viable strategic plan for addressing a wide range of manufacturing-related challenges. However, its level of adoption varies across countries, and in a developing economy like India practitioners are still in the early stages of implementation. The implementation of Industry 4.0 is complex and must be investigated holistically in order to gain a better understanding of it. Therefore, an attempt has been made to examine Industry 4.0 implementation for Indian manufacturing organizations in detail by analyzing the complexities of the relevant variables.
Design/methodology/approach. SAP-LAP (situation-actor-process and learning-action-performance) and an efficient interpretive ranking process (e-IRP) were used to analyze the variables influencing Industry 4.0 implementation. The variables were identified, as per SAP-LAP, through a thorough review of the literature and the perspectives of various experts. The e-IRP was used to prioritize the selected elements (i.e. actors with respect to processes and actions with respect to performance) of SAP-LAP.
Findings. This study ranked five stakeholders according to their priority in Industry 4.0 implementation: government policymakers, industry associations, research and academic institutions, manufacturers and customers. In addition, the study prioritized important actions that need to be taken by these stakeholders.
Practical implications. The results of this study would be useful in identifying and managing the various actors and actions related to Industry 4.0 implementation. Accordingly, their prioritized sequence would help practitioners prepare a well-defined and comprehensive strategic roadmap for Industry 4.0.
Originality/value. This study has adopted qualitative and quantitative approaches for identifying and prioritizing different variables of Industry 4.0 implementation. This, in turn, helps stakeholders comprehend the concept of Industry 4.0 in a much simpler way.


1970 ◽  
Vol 185 (1) ◽  
pp. 407-424 ◽  
Author(s):  
H. R. M. Craig ◽  
H. J. A. Cox

A comprehensive method of estimating the performance of axial flow steam and gas turbines is presented, based on analysis of linear cascade tests on blading, on a number of turbine test results, and on air tests of model casings. The validity of the use of such data is briefly considered. Data are presented to allow performance estimation of actual machines over a wide range of Reynolds number, Mach number, aspect ratio and other relevant variables. The use of the method in connection with three-dimensional methods of flow estimation is considered, and data are presented showing encouraging agreement between estimates and available test results. Finally, ‘carpets’ are presented showing the trends in efficiencies that are attainable in turbines designed over a wide range of loading, axial velocity/blade speed ratio, Reynolds number and aspect ratio.


1982 ◽  
Vol 54 (3) ◽  
pp. 683-692 ◽  
Author(s):  
J. Timothy Petersik

Ginsburg's filter theory successfully accounts for the perceptual distortions perceived in a wide range of illusions and bistable phenomena. Essentially, the theory proposes that illusory distortions are the natural consequence of low-pass spatial filtering (based upon the human modulation transfer function) of the physical stimulus. With regard to the Müller-Lyer illusion, predictions based upon filter theory and human scan-path data are in accord. However, data linking filter theory's predictions regarding perceptual experiences associated with the illusion to the eye-scan results have been missing. In the present experiment subjects provided subjective estimations of their own eye scans while viewing each of the following stimuli: the fins-out member of the Müller-Lyer illusion, the fins-in member of the Müller-Lyer illusion, and a finless horizontal line (variations of each stimulus consisted of one, two, and three line segments). The analysis of these data supported three predictions that were derived from filter theory. Potential problems facing filter theory are also addressed.
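As a generic illustration of the low-pass filtering idea invoked above (not Ginsburg's actual filter, which is based on the human modulation transfer function): a simple moving-average blur of a one-dimensional stimulus profile spreads a sharp feature into its neighbourhood, the kind of smoothing from which illusory distortions are argued to arise:

```python
def low_pass(signal, window):
    """Moving-average low-pass filter of a 1-D stimulus profile.
    A crude stand-in for MTF-based spatial filtering: each output sample
    is the mean of its neighbourhood, shrunk at the edges."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```

A single spike in the input, for example, is smeared across the window after filtering, while a uniform field passes through unchanged.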


2011 ◽  
Vol 366 (1567) ◽  
pp. 1129-1138 ◽  
Author(s):  
Mark Collard ◽  
Briggs Buchanan ◽  
Jesse Morin ◽  
Andre Costopoulos

Recent studies have suggested that the decisions that hunter–gatherers make about the diversity and complexity of their subsistence toolkits are strongly affected by risk of resource failure. However, the risk proxies and samples employed in these studies are potentially problematic. With this in mind, we retested the risk hypothesis with data from hunter–gatherer populations who lived in the northwest coast and plateau regions of the Pacific Northwest during the early contact period. We focused on these populations partly because the northwest coast and plateau differ in ways that can be expected to lead to differences in risk, and partly because of the availability of data for a wide range of risk-relevant variables. Our analyses suggest that the plateau was a more risky environment than the northwest coast. However, the predicted differences in the number and complexity of the populations' subsistence tools were not observed. The discrepancy between our results and those of previous tests of the risk hypothesis is not due to methodological differences. Rather, it seems to reflect an important but hitherto unappreciated feature of the relationship between risk and toolkit structure, namely that the impact of risk is dependent on the scale of the risk differences among populations.


1989 ◽  
Vol 32 (4) ◽  
pp. 887-911 ◽  
Author(s):  
Richard S. Tyler ◽  
Brian C. J. Moore ◽  
Francis K. Kuk

The main purpose of this study was to provide an independent corroboration of open-set word recognition in some of the better cochlear-implant patients. These included the Chorimac, Nucleus (one group from the U.S.A. and one group from Hannover, Germany), Symbion, Duren/Cologne and 3M/Vienna implants. Three experiments are reported: (1) word recognition in word lists and in sentences; (2) environmental sound perception; and (3) gap detection. On word recognition, the scores of 6 Chorimac patients averaged 2.5% words and 0.7% words in sentences correct in the French tests. In the German tests, the scores averaged 17% words and 10% words in sentences for 10 Duren/Cologne patients, 15% words and 16% words in sentences for 9 3M/Vienna patients, and 10% words and 16% words in sentences (3% to 26%) for 10 Nucleus/Hannover patients. In the English tests, the scores averaged 11% words and 29.6% words in sentences for 10 Nucleus-U.S.A. patients, and 13.7% words and 35.7% words in sentences for the 9 Symbion patients. The ability to recognize recorded environmental sounds was measured with a closed set of 18 sounds. Performance averaged 23% correct for Chorimac patients, 41% correct for 3M/Vienna patients, 44% correct for Nucleus/Hannover patients, 21% correct for Duren/Cologne patients, 58% correct for Nucleus/U.S.A. patients, and 83% correct for Symbion patients. A multidimensional scaling analysis suggested that patients were, in part, utilizing information about the envelope and about the periodic/aperiodic nature of some of the sounds. Gap detection thresholds with a one-octave wide noise centered at 500 Hz varied widely among patients. Typically, patients with gap thresholds less than 40 ms showed a wide range of performance on speech perception tasks, whereas patients with gap-detection thresholds greater than 40 ms showed poor word recognition skills.


1967 ◽  
Vol 89 (2) ◽  
pp. 356-360 ◽  
Author(s):  
S. Eshghy

The heat transfer aspects of the abrasive cutoff operation are investigated analytically. The analytical results lead to integral expressions for a dimensionless temperature of the cutting interface as a function of dimensionless time and two dimensionless parameters. A rather simple expression is given which approximates the integral equations over a wide range of parameters. These results depend upon an important cutting parameter, namely the “energy per unit volume”, or the work necessary for removing a unit volume of chips. It is concluded that, if the energy per unit volume decreases with downfeed faster than the latter to about the −0.3 power, there exists a critical downfeed at which the temperatures will be maximum. For slower rates of decrease, lower temperatures are obtained at lower downfeeds. The partition functions for energy per unit volume, thermal penetration of the wheel, and experimental temperature measurements will be given in subsequent papers.


2020 ◽  
Vol 86 (4) ◽  
pp. 118-125
Author(s):  
Oksana Kharchenko ◽  
Vitaliy Smokal ◽  
Oksana Krupka

As an important class of organic heterocyclic dyes, aurones exhibit unique photochemical and photophysical properties, which render them useful in a variety of applications, such as fluorescent labels and probes in biology and medicine. Despite this wide range of applications, the photochemical properties of the aurone class remain relatively poorly known. The backbone of the aurone molecule has excellent planarity, and from the viewpoint of molecular engineering, molecular planarity plays an important role in tuning the nonlinear optical properties of materials. This work is therefore aimed at the synthesis of new derivatives based on 6-hydroxyaurone and the study of their photochemical properties. Novel monomers based on (2Z)-2-benzylidene-6-hydroxy-1-benzofuran-3(2H)-one with different electron-withdrawing substituents in the benzylidene moiety were synthesized by acylation of the hydroxy group with methacryloyl chloride. The polymerization was carried out in 10% solutions of the monomers in dimethylformamide, with 2,2′-azobisisobutyronitrile as the initiator. The structure of the synthesized compounds was confirmed by 1H NMR spectroscopy. The photochemical properties of the synthesized polymers were studied by UV-VIS spectroscopy. The new polymers with an aurone moiety have been shown to undergo photoinduced Z–E isomerization. The rate constants of Z–E photoisomerization were determined from the slope of ln(D/D0) versus irradiation time. The half-reaction periods for the E-isomers of the aurone-containing polymers were calculated.
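The rate-constant determination described above follows standard first-order kinetics: for ln(D/D0) linear in irradiation time t, the slope gives -k, and the half-reaction period is ln(2)/k. A minimal sketch with assumed function names (not code from the paper):

```python
import math

def rate_constant(times, absorbances, d0):
    """First-order photoisomerization: ln(D/D0) = -k t.
    Returns k as minus the least-squares slope of ln(D/D0) versus t."""
    ys = [math.log(d / d0) for d in absorbances]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) \
        / sum((t - tbar) ** 2 for t in times)
    return -slope

def half_reaction_period(k):
    # time for the transformed fraction to reach one half
    return math.log(2) / k
```

Fitting the whole ln(D/D0) series, rather than a single point pair, averages out measurement noise in the absorbance readings.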


2004 ◽  
Vol 4 (4) ◽  
pp. 3721-3783 ◽  
Author(s):  
L. E. Whitehouse ◽  
A. S. Tomlin ◽  
M. J. Pilling

Abstract. Explicit mechanisms describing the complex degradation pathways of atmospheric volatile organic compounds (VOCs) are important, since they allow the study of the contribution of individual VOCs to secondary pollutant formation. However, they are computationally expensive to solve, since they contain large numbers of species and a wide range of time-scales, causing stiffness in the resulting equation systems. This paper and the following companion paper describe the application of systematic and automated methods for reducing such complex mechanisms, whilst maintaining the accuracy of the model with respect to important species and features. The methods are demonstrated via application to version 2 of the Leeds Master Chemical Mechanism. The methods of local concentration sensitivity analysis and overall rate sensitivity analysis proved to be efficient and capable of removing the majority of redundant reactions and species in the scheme across a wide range of conditions relevant to the polluted troposphere. The application of principal component analysis of the rate sensitivity matrix was computationally expensive due to its use of the decomposition of very large matrices, and did not produce significant reduction over and above the other sensitivity methods. The use of the quasi-steady state approximation (QSSA) proved to be an extremely successful method of removing the fast time-scales within the system, as demonstrated by a local perturbation analysis at each stage of reduction. QSSA species were automatically selected via the calculation of instantaneous QSSA errors based on user-selected tolerances. The application of the QSSA led to the removal of a large number of alkoxy radicals and excited Criegee bi-radicals via reaction lumping.
The resulting reduced mechanism was shown to reproduce the concentration profiles of the important species selected from the full mechanism over a wide range of conditions, including conditions outside those for which the reduced mechanism was generated. As a result of a factor-of-2 reduction in the number of species in the scheme, and a reduction in stiffness, the computational time required for simulations was reduced by a factor of 4 compared to the full scheme.
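As a minimal illustration of the QSSA step described above (not code from the paper): for a fast species R produced at rate P and lost with first-order coefficient L, setting d[R]/dt = P - L[R] ≈ 0 gives the steady-state concentration [R]ss = P/L, and short-lived species (small lifetime 1/L) are the natural candidates. The tolerance-based acceptance test below is an assumed form, loosely analogous to the automated error-based selection the abstract mentions:

```python
def qssa_concentration(production_rate, loss_coefficient):
    # steady-state concentration from d[R]/dt = P - L*[R] ≈ 0
    return production_rate / loss_coefficient

def qssa_lifetime(loss_coefficient):
    # chemical lifetime of the species; fast species have small 1/L
    return 1.0 / loss_coefficient

def accept_qssa(loss_coefficient, lifetime_tolerance):
    # accept the approximation for species whose lifetime is shorter
    # than a user-selected tolerance (criterion form assumed here)
    return qssa_lifetime(loss_coefficient) < lifetime_tolerance
```

Replacing each accepted species' differential equation with its algebraic steady-state expression is what removes the fast time-scales, and hence the stiffness, from the reduced system.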

