Change in ozone depletion rates beginning in the mid-1990s: trend analyses of the TOMS/SBUV merged total ozone data, 1978–2003

2006 ◽  
Vol 24 (2) ◽  
pp. 493-502 ◽  
Author(s):  
J. W. Krzyscin

Abstract. Statistical analyses have been applied to the gridded monthly means of total ozone from combined TOMS and SBUV measurements (version 8 of the data) for the period 1978–2003. We focus on the detection of a change in the trend pattern by searching for a turnaround of the previous downward trend. The ozone time series have been examined separately for each grid point and season, taking into account various descriptions of the trend term: double-linear, proportional to an index of the overall chlorine content in the stratosphere, and a smooth curve without an a priori defined shape (the output of the regression model). Standard explanatory variables representing physical and chemical processes known to influence the ozone distribution have been considered: the Mg II index, QBO winds at 10 and 30 hPa, zonal wind anomalies at 50 hPa along the 60° north or 60° south circle, an index of stratospheric aerosol loading in the NH or SH, and the tropopause pressure. The multivariate adaptive regression splines methodology is used to find an optimal set of explanatory variables and the shape of the trend curve. The statistical errors of the models' estimates have been calculated by block bootstrapping of the models' residuals. The results are consistent among models using different formulations of the trend pattern. The 2003 level of total ozone, after removal of the variations due to the parameterized dynamical/chemical forcing on ozone, is still below the long-term (1978–2003) mean level over the extratropical regions. The deficit is ~2–5% in the NH and much larger in the SH, where it exhibits clear seasonal variability: ~15% in autumn, ~10% in winter, and ~5% in spring and summer.
The present total ozone level outside the tropics is higher than that in the mid-1990s, but it is too early to announce the beginning of ozone recovery there because of trend uncertainties arising from the errors of the regression estimates at individual grid points and from the longitudinal variability of the trend pattern. A rigorous statistical test shows a statistically significant turnaround for some grid points over the extratropical regions, and a deepening of the negative ozone trend is not found at any grid point.
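The uncertainty estimates above rest on block bootstrapping of regression residuals: whole blocks are resampled so that the autocorrelation of the ozone series is preserved. A minimal sketch of the idea, assuming a simple linear trend fit, an illustrative 12-month block length, and a hypothetical helper name (the paper's actual regression models are far more elaborate):

```python
import numpy as np

def block_bootstrap_trend_ci(residuals, fitted_trend, block_len=12,
                             n_boot=500, seed=0):
    """Confidence interval for a linear trend via block bootstrapping of
    regression residuals (hypothetical helper; block length and sample
    counts are illustrative, not taken from the paper)."""
    rng = np.random.default_rng(seed)
    n = len(residuals)
    t = np.arange(n)
    slopes = []
    for _ in range(n_boot):
        # Resample whole blocks of residuals to preserve autocorrelation.
        starts = rng.integers(0, n - block_len + 1, size=n // block_len + 1)
        resampled = np.concatenate(
            [residuals[s:s + block_len] for s in starts])[:n]
        synthetic = fitted_trend + resampled
        slopes.append(np.polyfit(t, synthetic, 1)[0])
    # 95% percentile interval of the bootstrapped slopes.
    return np.percentile(slopes, [2.5, 97.5])
```

A trend whose confidence interval excludes zero at a grid point would then count as statistically significant in the sense used above.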

2008 ◽  
Vol 8 (11) ◽  
pp. 2847-2857 ◽  
Author(s):  
J. W. Krzyścin ◽  
J. L. Borkowski

Abstract. In the pre-satellite era, total ozone data over Europe are available for only a few ground-based stations, precluding examination of the spatial variability of trends over the whole continent. The need for gridded ozone data for trend analyses and as input to radiative transfer models motivated a reconstruction of daily ozone values back to January 1950. The reconstruction model and its validation were the subject of our previous paper. The database used was built within the objectives of COST action 726, "Long-term changes and climatology of UV radiation over Europe". Here we focus on trend analyses. The long-term variability of total ozone is discussed using results of a flexible trend model applied to the reconstructed total ozone data for the period 1950–2004. The trend pattern, which comprises both anthropogenic and "natural" components, is not assumed a priori but comes from a smooth curve fit to the zonal monthly means and monthly grid values. The long-term ozone changes are calculated separately for the cold (October–April of the following year) and warm (May–September) seasons. Confidence intervals for the estimated ozone changes are derived by block bootstrapping. Statistically significant negative trends are found over almost the whole of Europe only in the period 1985–1994. Negative trends of up to −3% per decade appeared over small areas in earlier periods, when the anthropogenic forcing on the ozone layer was weak. Statistically significant positive trends are found only during the warm seasons of 1995–2004, over the Svalbard archipelago. The reduction of the ozone level in 2004 relative to that before the satellite era is not dramatic, i.e., up to ~−5% and ~−3.5% in the cold and warm subperiods, respectively. The present ozone level is still depleted over many popular resorts in southern Europe and northern Africa.
For high-latitude regions, a trend overturning can be inferred in the last decade (1995–2004), as ozone-depleted areas are no longer found there in 2004 despite the substantial ozone depletion of 1985–1994.


2008 ◽  
Vol 8 (1) ◽  
pp. 47-69
Author(s):  
J. W. Krzyścin ◽  
J. L. Borkowski

Abstract. The long-term variability of total ozone over Europe is discussed using results of a flexible trend model applied to reconstructed total ozone data for the period 1950–2004. The database used was built within the objectives of COST action 726, "Long-term changes and climatology of UV radiation over Europe". The trend pattern, which comprises both anthropogenic and "natural" components, is not assumed a priori but results from a smooth curve fit to the zonal monthly means and monthly grid values. The trend values over 5-year and 10-year intervals in the cold (October–April of the following year) and warm (May–September) seasons are calculated as the differences between the smooth-curve values at the end and the beginning of each interval, divided by the interval length. Confidence intervals for the trend values are calculated by block bootstrapping. Statistically significant negative trends are found over almost the whole of Europe only in the period 1985–1994. Negative trends of up to −3% per decade appeared over small areas in earlier periods, when the anthropogenic forcing on the ozone layer was weak. Statistically significant positive trends are found only during the warm seasons of 1995–2004, over the Svalbard archipelago. The reduction of the ozone level in 2004 relative to that before the satellite era is not dramatic, i.e., up to ~−5% and ~−3.5% in the cold and warm subperiods, respectively. The present ozone level is still depleted over many popular resorts in southern Europe and northern Africa. For high-latitude regions, a trend overturning can be inferred in the last decade (1995–2004), as ozone-depleted areas are no longer found there in 2004 despite the substantial ozone depletion of 1985–1994.
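The trend computation described above — the difference of smooth-curve values at the ends of an interval, divided by the interval length — reduces to a few lines. A sketch, assuming monthly curve values and a hypothetical helper name (the smooth curve itself would come from the authors' flexible trend model):

```python
import numpy as np

def interval_trend(smooth_curve, t0, t1, per_decade=True):
    """Trend over [t0, t1] as the difference of smooth-curve values at the
    interval ends divided by the interval length. Indices t0, t1 are in
    months; the helper name and units are illustrative."""
    years = (t1 - t0) / 12.0
    rate = (smooth_curve[t1] - smooth_curve[t0]) / years  # units per year
    return rate * 10.0 if per_decade else rate
```

For example, a curve declining linearly by 0.3 units per year yields an interval trend of −3.0 units per decade over any 10-year window.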


2020 ◽  
pp. 1-14
Author(s):  
Siqiang Chen ◽  
Masahiro Toyoura ◽  
Takamasa Terada ◽  
Xiaoyang Mao ◽  
Gang Xu

A textile fabric consists of countless parallel vertical yarns (warps) and horizontal yarns (wefts). While common looms can weave repetitive patterns, Jacquard looms can weave patterns without repetition restrictions. A pattern in which the warps and wefts cross on a grid is defined in a binary matrix, which specifies which of the warp and weft is on top at each grid point of the Jacquard fabric. The process can be regarded as encoding from pattern to textile. In this work, we propose a decoding method that generates a binary pattern from a textile fabric that has already been woven. We could not use a deep neural network to learn the process based solely on a training set of patterns and observed fabric images: the crossing points in the observed images are not located exactly on the grid points, so it is difficult to establish a direct correspondence between the fabric images and the pattern represented by the matrix within a deep-learning framework. Therefore, we propose a method that applies the deep-learning framework via an intermediate representation of patterns and images. We show how to convert a pattern into the intermediate representation and how to reconvert the output into a pattern, and we confirm the method's effectiveness. In our experiment, 93% of the pattern was correctly recovered when decoding patterns from images of actual fabrics and weaving them again.
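The encoding described above — a binary matrix stating which yarn is on top at each grid point — can be illustrated directly. The 2/2 twill repeat and the tiling helper below are assumptions for illustration, not the authors' data; a Jacquard pattern would simply be an arbitrary (non-tiled) binary matrix:

```python
import numpy as np

# A weave pattern as a binary matrix: 1 means the warp (vertical yarn)
# passes over the weft at that grid point, 0 means the weft is on top.
# A 2/2 twill repeat is used here purely as an illustration.
twill = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 0, 0, 1]], dtype=np.uint8)

def tile_pattern(repeat, n_wefts, n_warps):
    """Tile a repeat unit across the full grid, as a common loom would;
    a Jacquard loom could instead specify every entry independently."""
    reps = (-(-n_wefts // repeat.shape[0]),   # ceil division
            -(-n_warps // repeat.shape[1]))
    return np.tile(repeat, reps)[:n_wefts, :n_warps]

fabric = tile_pattern(twill, 6, 6)
```

Decoding, the subject of the paper, is the inverse problem: recovering such a matrix from a photograph of the woven result.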


2009 ◽  
Vol 27 (4) ◽  
pp. 1377-1386 ◽  
Author(s):  
M. Antón ◽  
D. Loyola ◽  
M. López ◽  
J. M. Vilaplana ◽  
M. Bañón ◽  
...  

Abstract. The main objective of this article is to compare total ozone data from the new Global Ozone Monitoring Experiment instrument (GOME-2/MetOp) with reliable ground-based measurements recorded by five Brewer spectroradiometers in the Iberian Peninsula. In addition, a similar comparison for the predecessor instrument, GOME/ERS-2, is described. The period of study is a whole year, from May 2007 to April 2008. The results show that GOME-2/MetOp ozone data are already of very good quality; total ozone columns are on average 3.05% lower than Brewer measurements. This underestimation is higher than that obtained for GOME/ERS-2 (1.46%). However, the relative differences between GOME-2/MetOp and Brewer measurements show significantly lower variability than the differences between GOME/ERS-2 and Brewer data. Dependences of these relative differences on the satellite solar zenith angle (SZA), the satellite scan angle, the satellite cloud cover fraction (CF), and the ground-based total ozone measurements are analyzed. For both GOME instruments, the differences show no significant dependence on SZA. However, GOME-2/MetOp data show a significant dependence on the satellite scan angle (+1.5%). In addition, GOME/ERS-2 differences present a clear dependence on the CF and on ground-based total ozone; these dependences are minimized for GOME-2/MetOp. The comparison between the daily total ozone values provided by the two GOME instruments shows that GOME-2/MetOp ozone data are on average 1.46% lower than GOME/ERS-2 data, without any seasonal dependence. Finally, deviations of the a priori climatological ozone profile used by the satellite retrieval algorithm from the true ozone profile are analyzed. Although excellent agreement between a priori climatological and measured partial ozone values is found for the middle and upper stratosphere, relative differences greater than 15% are common for the troposphere and lower stratosphere.
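The percentages quoted above are plain relative differences of the satellite columns with respect to the ground-based reference. A minimal sketch with illustrative values (not the paper's data); a negative result means the satellite underestimates:

```python
import numpy as np

def relative_difference(satellite, ground):
    """Relative difference (%) of satellite total ozone columns with
    respect to ground-based (e.g. Brewer) measurements. Negative values
    indicate satellite underestimation. Inputs are column amounts in the
    same units (e.g. Dobson units)."""
    satellite = np.asarray(satellite, dtype=float)
    ground = np.asarray(ground, dtype=float)
    return 100.0 * (satellite - ground) / ground

# Illustrative example: a satellite column of 291.0 DU against a Brewer
# column of 300.0 DU gives a relative difference of -3.0%.
diffs = relative_difference([291.0], [300.0])
```

Averaging such differences over coincident daily pairs yields figures like the −3.05% reported for GOME-2/MetOp.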


2016 ◽  
Vol 7 (4) ◽  
pp. 810-822 ◽  
Author(s):  
P. Sonali ◽  
D. Nagesh Kumar

Worldwide, major changes in climate are expected due to global warming, which leads to temperature variations. To assess the impact of climate change on the hydrological cycle, a spatio-temporal change detection study of potential evapotranspiration (PET), along with maximum and minimum temperatures (Tmax and Tmin), over India has been performed for the second half of the 20th century (1950–2005), at both monthly and seasonal scales. The observed monthly climatology of PET over India shows high values during March, April, May and June. Temperature is one of the most significant factors in explaining changes in PET. Hence, seasonal correlations of PET with Tmax and Tmin were analyzed using the Spearman rank correlation. The correlation of PET with Tmax was found to be higher than that with Tmin. The seasonal variability of the trend at each grid point over India was studied separately for Tmax, Tmin and PET. Trend-Free Pre-Whitening and Modified Mann-Kendall approaches, which account for the effect of serial correlation, were employed for the trend detection analysis. A more pronounced trend was observed in Tmin than in Tmax or PET. Significant upward trends in Tmax, Tmin and PET were observed over most grid points in the interior peninsular region.
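The Spearman rank correlation used above is simply the Pearson correlation computed on ranks, which makes it robust to monotone nonlinearities between PET and temperature. A minimal sketch assuming tie-free data (tied observations would need average ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Ranks are obtained via a double argsort, which assumes no ties in
    the data; this is a sketch, not the study's implementation."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```

Because only ranks matter, any monotone increasing relationship between two series yields a coefficient of exactly 1.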


2007 ◽  
Vol 19 (1) ◽  
pp. 47-79 ◽  
Author(s):  
Abigail Morrison ◽  
Sirko Straube ◽  
Hans Ekkehard Plesser ◽  
Markus Diesmann

Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
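The exact subthreshold integration the authors build on exploits the fact that, for leaky integrate-and-fire dynamics with input held constant over a step, the membrane equation tau dV/dt = -V + R*I has a closed-form solution, so the state can be propagated from one grid point to the next without accumulating integration error. A sketch with illustrative parameter values (not the paper's implementation, which handles general linear subthreshold dynamics and synaptic currents):

```python
import numpy as np

def lif_step(v, i_syn, dt=0.1, tau=10.0, r_m=1.0):
    """Advance a leaky integrate-and-fire membrane potential exactly over
    one grid step of length dt (ms), using the closed-form solution of
    tau dV/dt = -V + r_m * i_syn with the input held constant over the
    step. Parameter values are illustrative."""
    decay = np.exp(-dt / tau)
    return v * decay + r_m * i_syn * (1.0 - decay)
```

Because the propagator is exact, taking one step of length dt or many smaller steps summing to dt gives the same result, which is what makes restricting state updates to the time grid lossless for the subthreshold dynamics (the loss discussed in the paper arises only from constraining the spike times themselves).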


2008 ◽  
Vol 26 (9) ◽  
pp. 2645-2648 ◽  
Author(s):  
T. L. Gulyaeva ◽  
I. Stanislawska

Abstract. The planetary ionospheric storm index, Wp, is deduced from the numerical global ionospheric GPS-IONEX maps of the vertical total electron content, TEC, for more than half a solar cycle, 1999–2008. The TEC values are extracted from the 600 grid points of the map at latitudes 60° N to 60° S with a step of 5° and longitudes 0° to 345° E with a step of 15°, providing data for 00:00 to 23:00 h of local time. The local effects of solar radiant energy are filtered out by normalizing the TEC in terms of the solar zenith angle χ at a particular time and its local-noon value χ0. The degree of perturbation, DTEC, is computed as the logarithm of TEC relative to the quiet reference median for the 27 days prior to the day of observation. The W-index map is generated by segmentation of DTEC with the relevant thresholds specified earlier for foF2, so that 1 or −1 stands for the quiet state, 2 or −2 for a moderate disturbance, 3 or −3 for a moderate ionospheric storm, and 4 or −4 for an intense ionospheric storm at each grid point of the map. The planetary ionospheric storm index Wp is obtained from the W-index map as a latitudinal average of the distance between the maximum positive and minimum negative W-index, weighted by the latitude/longitude extent of the extreme values on the map. The threshold Wp exceeding 4.0 index units and the peak value Wpmax ≥ 6.0 specify the duration and the power of a planetary ionosphere-plasmasphere storm. It is shown that the occurrence of Wp storms grows with the phase of the solar cycle, being about twice the number of magnetospheric storms with Dst ≤ −100 nT and Ap ≥ 100.
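The segmentation step — mapping DTEC to a signed W index via fixed thresholds — can be sketched as below. The threshold values here are placeholders for illustration only; the actual foF2-derived thresholds are specified in the authors' earlier work:

```python
import numpy as np

def w_index(dtec):
    """Map the perturbation DTEC (log of TEC relative to the quiet
    reference median) to a signed W index in {-4..-1, 1..4}.
    The magnitude thresholds below are hypothetical placeholders,
    NOT the foF2-based thresholds used in the paper."""
    thresholds = [0.046, 0.155, 0.301]          # hypothetical log10 boundaries
    mag = int(np.searchsorted(thresholds, abs(dtec))) + 1   # 1..4
    sign = -1 if dtec < 0 else 1
    return sign * mag
```

With such a mapping, a quiet grid point gets ±1, and progressively larger |DTEC| values escalate to ±4 (intense ionospheric storm), exactly the four-level scheme described above.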


1979 ◽  
Vol 49 ◽  
pp. 237-239
Author(s):  
J.A. Högbom

A common problem in radio synthesis work is that of determining the brightness at N grid points in the map domain when there are only n < N independent interferometer measurements available. The missing (N−n) equations can in principle be replaced by an equivalent amount of information in the form of a priori knowledge about the brightness distribution. One way of doing this is to add an equation in which some function of the brightness distribution is maximized subject to the condition that the map be compatible with the n measurements. This automatically gives (N−n) new equations, leaving us in the pleasant situation of having as many equations as there are unknowns.


Geophysics ◽  
2002 ◽  
Vol 67 (4) ◽  
pp. 1225-1231 ◽  
Author(s):  
Seongjai Kim

The article is concerned with the development and comparison of three different algorithms for the computation of first-arrival traveltimes: the fast marching method (FMM), the group marching method (GMM), and a second-order finite-difference eikonal solver. GMM is introduced as a variant of FMM: it advances the solution by updating a selected group of grid points at a time, rather than sorting the narrow band to march forward a single grid point. The second-order eikonal solver studied in the article is an expanding-box, essentially nonoscillatory scheme whose stability is enforced by the introduction of down 'n' out marching and a post-sweeping iteration. Techniques such as the maximum-angle condition, the average normal velocity, and cache-based implementation are introduced to improve the numerical accuracy and efficiency of the algorithms. The algorithms are implemented for solving the eikonal equation in 3-D isotropic media, and their performances are compared. GMM is numerically verified to be faster than FMM. However, the second-order algorithm turns out to be superior to these first-order level-set methods in both accuracy and efficiency; incorporating the average normal velocity improves accuracy dramatically for the second-order scheme.
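The FMM baseline the article compares against can be sketched compactly: grid points are accepted in increasing traveltime order from a heap, and each neighbour of a newly accepted point is updated with a first-order upwind stencil. This is a generic textbook FMM in 2-D with unit grid spacing, not the article's 3-D implementation:

```python
import heapq
import numpy as np

def fast_marching(slowness, src):
    """First-order fast marching solver for the eikonal equation
    |grad T| = slowness on a unit-spaced 2-D grid. A minimal generic
    sketch of FMM, not the article's code."""
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    t[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        tv, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True          # smallest tentative time is final
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                # Upwind (smallest) neighbour values along each axis.
                tx = min(t[ni, nj - 1] if nj > 0 else np.inf,
                         t[ni, nj + 1] if nj < nx - 1 else np.inf)
                ty = min(t[ni - 1, nj] if ni > 0 else np.inf,
                         t[ni + 1, nj] if ni < ny - 1 else np.inf)
                s = slowness[ni, nj]
                a, b = sorted((tx, ty))
                if b - a >= s:          # one-sided update
                    cand = a + s
                else:                   # two-sided quadratic update
                    cand = 0.5 * (a + b + np.sqrt(2 * s * s - (a - b) ** 2))
                if cand < t[ni, nj]:
                    t[ni, nj] = cand
                    heapq.heappush(heap, (cand, (ni, nj)))
    return t
```

GMM, as described above, replaces the strict one-at-a-time heap ordering with the acceptance of a whole group of points per iteration, trading exact ordering for fewer sorting operations.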


2020 ◽  
Author(s):  
Robin Stoffer ◽  
Caspar van Leeuwen ◽  
Damian Podareanu ◽  
Valeriu Codreanu ◽  
Menno Veerman ◽  
...  

Large-eddy simulation (LES) is an often-used technique in the geosciences to simulate turbulent oceanic and atmospheric flows. In LES, the effects of the unresolved turbulence scales on the resolved scales (via the Reynolds stress tensor) have to be parameterized with subgrid models. These subgrid models usually require strong assumptions about the relationship between the resolved flow fields and the Reynolds stress tensor, which are often violated in reality and potentially hamper their accuracy.

In this study, using the finite-difference computational fluid dynamics code MicroHH (v2.0) and turbulent channel flow as a test case (friction Reynolds number Re_τ = 590), we incorporated and tested a newly emerging subgrid modelling approach that does not require those assumptions. Instead, it relies on neural networks that are highly non-linear and flexible. Similar to currently used subgrid models, we designed our neural networks such that they can be applied locally in the grid domain: at each grid point the neural networks receive as input the locally resolved flow fields (u, v, w), rather than the full flow fields. As output, the neural networks give the Reynolds stress tensor at the considered grid point. This local application integrates well with our simulation code, and is necessary to run our code in parallel within distributed memory systems.

To allow our neural networks to learn the relationship between the specified input and output, we created a training dataset that contains ~10,000,000 samples of corresponding inputs and outputs. We derived those samples directly from high-resolution 3-D direct numerical simulation (DNS) snapshots of turbulent flow fields. Since the DNS explicitly resolves all the relevant turbulence scales, by downsampling the DNS we were able to derive both the Reynolds stress tensor and the corresponding lower-resolution flow fields typical of LES. In this calculation, we took into account both the discretization and interpolation errors introduced by the finite staggered LES grid. Subsequently, using these samples we optimized the parameters of the neural networks to minimize the difference between the predicted output and the 'true' output derived from the DNS.

After that, we tested the performance of our neural networks in two different ways:

1. A priori or offline testing, where we used a withheld part of the training dataset (10%) to test the capability of the neural networks to correctly predict the Reynolds stress tensor for data not used to optimize their parameters. We found that the neural networks were, in general, well able to predict the correct values.

2. A posteriori or online testing, where we incorporated our neural networks directly into our LES. To keep the total computational effort feasible, we strongly enhanced the prediction speed of the neural networks by relying on highly optimized matrix-vector libraries. The full integration of the neural networks within the LES remains challenging, though, mainly because the neural networks tend to introduce numerical instability into the LES. We are currently investigating ways to minimize this instability while maintaining the high accuracy seen in the a priori test and the high prediction speed.
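The local mapping described above — from resolved velocities around a grid point to the Reynolds stress tensor at that point — can be sketched as a small multilayer perceptron. Everything below (stencil size, layer width, random weights, helper names) is a hypothetical illustration; the study's actual trained networks and architecture are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_mlp(n_in, n_hidden, n_out):
    """Random weights for a one-hidden-layer MLP. In the study, the
    weights would instead be optimized on DNS-derived samples."""
    return {"w1": rng.normal(0, 0.1, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "w2": rng.normal(0, 0.1, (n_hidden, n_out)),
            "b2": np.zeros(n_out)}

def predict_stress(params, local_uvw):
    """Map the locally resolved velocities around one grid point to the
    six independent components of the (symmetric) Reynolds stress tensor.
    Stencil size and layer width are illustrative assumptions."""
    h = np.tanh(local_uvw @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

# Hypothetical example: a 5x5x5 stencil of u, v, w flattened into one
# input vector, mapped to the 6 stress components at the centre point.
params = init_mlp(3 * 5 ** 3, 64, 6)
tau = predict_stress(params, rng.normal(size=3 * 5 ** 3))
```

Because the network sees only a local stencil, it can be evaluated independently at every grid point, which is what makes the approach compatible with distributed-memory parallelization as noted above.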

