Technical Note: 30 years of HIRS data of upper tropospheric humidity

2014 ◽  
Vol 14 (14) ◽  
pp. 7533-7541 ◽  
Author(s):  
K. Gierens ◽  
K. Eleftheratos ◽  
L. Shi

Abstract. We use 30 years of intercalibrated HIRS (High-Resolution Infrared Radiation Sounder) data to produce a 30-year data set of upper tropospheric humidity with respect to ice (UTHi). Since the required brightness temperatures (channels 12 and 6; T12 and T6) are intercalibrated to different versions of the HIRS sensor (HIRS/2 and HIRS/4), it is necessary to convert the channel 6 brightness temperatures, which are intercalibrated to HIRS/4, into equivalent brightness temperatures intercalibrated to HIRS/2; this is achieved using a linear regression. Using the new regression coefficients we produce daily files of UTHi, T12 and T6 for each NOAA satellite and for METOP-A (Meteorological Operational Satellite Programme), all of which carry the HIRS instrument. From these we calculate daily and monthly means at 2.5° × 2.5° resolution for the northern midlatitude zone 30–60° N. As a first application we calculate decadal means of UTHi and of the brightness temperatures for the two decades 1980–1989 and 2000–2009. We find that the humidity mainly increased from the 1980s to the 2000s and that this increase is highly statistically significant in large regions of the considered midlatitude belt. The main reason for this result and its statistical significance is the corresponding increase of the T12 variance. Changes in the mean brightness temperatures are less significant.
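The channel-6 conversion described above can be sketched as an ordinary least-squares fit on an overlap sample; everything below (coefficients, sample values) is illustrative, not the regression actually derived in the paper.

```python
import numpy as np

def fit_conversion(t6_hirs4, t6_hirs2):
    """Fit T6(HIRS/2) ~ a * T6(HIRS/4) + b by least squares."""
    a, b = np.polyfit(t6_hirs4, t6_hirs2, deg=1)
    return a, b

def convert(t6_hirs4, a, b):
    """Apply the fitted linear conversion to HIRS/4 channel-6 values."""
    return a * np.asarray(t6_hirs4) + b

# Synthetic overlap data (illustrative only, not real calibration values)
rng = np.random.default_rng(0)
t4 = rng.uniform(230.0, 260.0, 500)                  # HIRS/4 T6 (K)
t2 = 0.98 * t4 + 4.0 + rng.normal(0.0, 0.05, 500)    # HIRS/2-equivalent T6 (K)
a, b = fit_conversion(t4, t2)
t6_equiv = convert(250.0, a, b)   # HIRS/2-equivalent value for a new reading
```

The fitted coefficients would then be applied to all subsequent HIRS/4 channel-6 readings before the UTHi computation.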


2021 ◽  
Author(s):  
Solange Suli ◽  
Matilde Rusticucci ◽  
Soledad Collazo

<p>Small variations in the mean state of the atmosphere can cause large changes in the frequency of extreme events. In order to deepen and extend previous results in time, in this work we analyzed the linear relationship between extreme and mean temperature (T) on a climate-change scale in Argentina. Two monthly extreme indices, cold nights (TN10) and warm days (TX90), were calculated from the quality-controlled daily minimum and maximum temperature data provided by the Argentine National Meteorological Service for 58 conventional weather stations located over Argentina in the 1977–2017 period. Subsequently, we evaluated the relationship between the linear trends of the extremes and of the mean temperature on a seasonal basis (JFM, AMJ, JAS, and OND). Student's t-test was performed to assess their statistical significance at the 5% level. Firstly, positive (negative) and significant linear regressions were found between TX90 (TN10) trends and mean temperature trends for the four seasons studied. Therefore, an increase in the T trend maintains a linear relationship with a significant increase (decrease) in warm days (cold nights). Moreover, we found that JFM had the highest coefficient of determination (0.602 for hot extremes and 0.511 for cold extremes), implying that 60.2% (51.1%) of the TX90 (TN10) trend could be explained as a function of the T trend by a linear regression. In addition, in the JFM (OND) quarter, the TX90 index increased by 7.02 (6.02) % of days for each 1 °C increase in the mean temperature. Likewise, the TN10 index decreased by 4.94 (4.99) % of days per 1 °C increase in the mean temperature for the JFM (AMJ) quarter. Finally, it is worthwhile to highlight the uneven behavior between hot and cold extremes relative to the mean temperature: the slopes of the linear regression calculated for the TX90 index and T had a higher absolute value than those registered for the TN10 index and T. Therefore, a change in the mean temperature affects hot extremes to a greater extent than cold ones in Argentina.</p>
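The across-station trend regression described above can be sketched as follows; the station trends are synthetic stand-ins, and the slope of 7 %/°C is only loosely modelled on the JFM value reported.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n_stations = 58
# Synthetic per-station trends (illustrative, not the Argentine data)
t_trend = rng.normal(0.3, 0.15, n_stations)                     # mean-T trend
tx90_trend = 7.0 * t_trend + rng.normal(0.0, 0.5, n_stations)   # TX90 trend

res = linregress(t_trend, tx90_trend)   # includes a two-sided t-test on the slope
r_squared = res.rvalue ** 2             # coefficient of determination
significant = res.pvalue < 0.05         # significance at the 5% level
```

`res.slope` is then read as the change in the extreme-index trend per unit change in the mean-temperature trend, as in the abstract.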


Author(s):  
Emilia Mendes

Although numerous studies on Web effort estimation have been carried out to date, there is no consensus on what constitutes the best effort estimation technique for Web companies to use. It seems that not only the effort estimation technique itself can influence the accuracy of predictions, but also the characteristics of the data set used (e.g., skewness, collinearity; Shepperd & Kadoda, 2001). Therefore, it is often necessary to compare different effort estimation techniques, looking for those that provide the best estimation accuracy for the data set being employed. With this in mind, the use of graphical aids such as boxplots is not always enough to assess the existence of significant differences between effort prediction models. The same applies to measures of prediction accuracy such as the mean magnitude of relative error (MMRE), median magnitude of relative error (MdMRE), and prediction at level l (e.g., Pred(25)). Other techniques, belonging to the group of statistical significance tests, need to be employed to check whether the residuals obtained for each of the compared effort estimation techniques come from the same population. This chapter details how to use such techniques and how their results should be interpreted.
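As a hedged illustration of such a significance test, the sketch below applies the paired, non-parametric Wilcoxon signed-rank test to the absolute residuals of two hypothetical estimation techniques; the data and effect size are invented, and this is one common choice of test rather than the chapter's prescribed one.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_projects = 40
# Synthetic skewed effort data (effort data are typically log-normal-ish)
actual = rng.lognormal(5.0, 0.8, n_projects)            # true project efforts
pred_a = actual * rng.lognormal(0.0, 0.2, n_projects)   # technique A (unbiased)
pred_b = actual * rng.lognormal(0.5, 0.2, n_projects)   # technique B (biased high)

res_a = np.abs(actual - pred_a)   # absolute residuals per technique
res_b = np.abs(actual - pred_b)

# H0: the paired residuals come from the same population
stat, p = wilcoxon(res_a, res_b)
techniques_differ = p < 0.05
```

A non-parametric paired test is a natural fit here because the residuals come from the same projects and their distribution is skewed.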


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Shubei Li ◽  
Dong Zhang ◽  
Lan Yang ◽  
Yujie Li ◽  
Xiaoxin Zhu ◽  
...  

A simple and accurate HPLC-UV method was developed for the simultaneous quantitative analysis of the main stilbenes and flavones in different parts (fronds, rhizomes, and frond bases) of M. struthiopteris. The chromatographic separation was performed on a Kromasil C18 column (4.6 mm × 250 mm, 5 μm) with a mobile phase of MeOH–H2O (containing 0.1% phosphoric acid) using gradient elution at a flow rate of 1.0 mL min−1 and UV detection at 295 nm. The method was validated for specificity, linearity, accuracy (recovery), and precision (repeatability, intra- and interday). For all six compounds, the linear regression coefficients ranged from 0.9958 to 0.9998 within the test ranges; intra- and interday precisions were <2%, and the mean recoveries ranged from 98.09% to 103.56%. The amount of these compounds in the frond bases was almost the same as in the rhizomes but much higher than that in the fronds. The results indicate that the HPLC method developed is appropriate for the analysis of the six compounds in different parts (fronds, rhizomes, and frond bases) of M. struthiopteris.


1992 ◽  
Vol 14 (1) ◽  
pp. 25-27 ◽  
Author(s):  
J. S. O. Odonde

Experimental data always contain measurement errors (or noise, in signal processing). This paper is concerned with the removal of outliers from a data set consisting of only a handful of points. The data set has a unimodal probability distribution function, so the mode is a reliable estimate of the central tendency. The approach is nonparametric; for the data set (xi, yi) only the ordinates (yi) are used, and the abscissae (xi) are reparametrized to the index i = 1, …, N. The data are bounded using a calculated mode and a new measure, the mean absolute deviation from the mode, which does not seem to have been reported before. The mean is removed and low-frequency filtering is performed in the frequency domain, after which the mean is reintroduced.
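The bounding step might be sketched as below; the histogram-based mode estimate and the factor k are our assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def bound_by_mode(y, k=3.0, bins=10):
    """Flag ordinates lying more than k times the mean absolute
    deviation from the (histogram-estimated) mode as outliers."""
    y = np.asarray(y, dtype=float)
    counts, edges = np.histogram(y, bins=bins)
    i = np.argmax(counts)
    mode = 0.5 * (edges[i] + edges[i + 1])   # midpoint of the densest bin
    mad_mode = np.mean(np.abs(y - mode))     # mean abs. deviation from the mode
    keep = np.abs(y - mode) <= k * mad_mode
    return y[keep], mode, mad_mode

# Toy data set: a handful of points with one gross outlier
data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 9.0])
clean, mode, mad = bound_by_mode(data)
```

The gross outlier inflates the deviation measure far less than it would inflate a variance-based one, which is what makes a mode-centred bound workable for very small samples.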


2017 ◽  
Vol 10 (2) ◽  
pp. 681-693 ◽  
Author(s):  
Klaus Gierens ◽  
Kostas Eleftheratos

Abstract. In the present study we explore the capability of the intercalibrated HIRS brightness temperature data at channel 12 (the HIRS water vapour channel; T12) to reproduce ice supersaturation in the upper troposphere during the period 1979–2014. Focus is given to the transition from the HIRS 2 to the HIRS 3 instrument in the year 1999, which involved a shift of the central wavelength in channel 12 from 6.7 to 6.5 µm. It is shown that this shift produced a discontinuity in the time series of low T12 values (< 235 K) and associated cases of high upper-tropospheric humidity with respect to ice (UTHi > 70 %) in the year 1999, which prevented us from maintaining a continuous, long-term time series of ice saturation throughout the whole record (1979–2014). We show that additional corrections to the low T12 values are required in order to bring HIRS 3 levels down to HIRS 2 levels. The new corrections are based on the cumulative distribution functions of T12 from the NOAA 14 and NOAA 15 satellites (i.e., the satellites operating when the transition from HIRS 2 to HIRS 3 occurred). By applying these corrections to the low T12 values we show that the discontinuity in the time series caused by the transition from HIRS 2 to HIRS 3 is no longer apparent when it comes to calculating extreme UTHi cases. We arrive at a new time series for values found at the low tail of the T12 distribution, which can be further exploited for analyses of ice saturation and supersaturation cases. The validity of the new method with respect to typical intercalibration methods, such as regression-based methods, is presented and discussed.
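A generic CDF-matching (quantile-mapping) correction, the family of methods these corrections belong to, can be sketched as follows; the Gaussian samples are synthetic stand-ins, not HIRS data, and the paper's actual correction is restricted to the low-T12 tail.

```python
import numpy as np

def cdf_match(x, sample_new, sample_ref):
    """Map values measured by the new instrument onto the reference
    instrument's distribution by matching empirical quantiles."""
    sorted_new = np.sort(sample_new)
    q = np.searchsorted(sorted_new, x) / len(sorted_new)  # empirical CDF of x
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(sample_ref, q)   # corresponding reference quantiles

rng = np.random.default_rng(3)
ref = rng.normal(240.0, 5.0, 10000)   # stand-in for HIRS/2 T12 (K)
new = rng.normal(233.0, 5.0, 10000)   # stand-in for HIRS/3 T12, biased low
corrected = cdf_match(new, new, ref)  # pull the new sample onto ref levels
```

After the mapping, the corrected values follow the reference distribution, which is the property that removes the step in the time series.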


2013 ◽  
Vol 31 (4_suppl) ◽  
pp. 77-77
Author(s):  
Joshua E. Meyer ◽  
Alan Thomay ◽  
Karen J. Ruth ◽  
Talha Shaikh ◽  
Andre A. Konski ◽  
...  

Background: EC is often treated with CRT followed by E. E is typically performed 6 weeks after completion of CRT, but the optimal timing is unknown. Previous work has shown that a longer time interval (TI) between CRT and E resulted in a higher percentage of patients (pts) with pathologic complete response. This study was undertaken to determine whether this improved response comes at the expense of increased surgical Cx. Methods: Complete records were available for 85 pts who underwent CRT and subsequent E at a single academic center from 2001-2011. Surgical Cx were collected. Univariable and multivariable analyses were performed to investigate the association between the length of the TI from CRT to E and Cx, adjusting for age, gender, and surgery type. Multiple linear regression was performed to examine the association of length of stay (LOS) and estimated blood loss (EBL) with TI, adjusting for covariates. Results: Of 85 patients, 72 were male, and the histology was adenocarcinoma in 72. The median age was 61 (range: 36-80), the most common clinical stage was T3N1, and 60% of pts had an ECOG performance status of 1 (range 0-2). The median length of CRT (most commonly cisplatin, 5FU, and 50.4 Gy) was 37 days, and the median TI from initiation of CRT to E was 89 days (range: 64-242). Fifty-nine pts (69%) experienced at least 1 complication. The mean TIs for pts with and without Cx were 97 and 87 days (P=0.019). When specific Cx were examined, the mean TI for pulmonary Cx was greater (107 v. 89 days; P=0.018). Patients experiencing anastomotic leaks had shorter mean TIs (83 v. 96 days; P=0.022). Multiple linear regression showed a positive association between LOS and TI (P=0.0027) but none with EBL. On multivariable analysis, increased TI predicted pulmonary complications (OR 1.05, P=0.0061) and longer LOS (OR 1.03, P=0.033). Increased TI predicted a decreased risk of anastomotic leak (OR 0.94, P=0.063).
Conclusions: In this retrospective data set, we demonstrated an association between a longer TI from CRT to E and pulmonary toxicity in EC pts. A longer TI was also associated with increased LOS. In contrast, anastomotic leaks were associated with shorter TIs. These data suggest that the TI from CRT to E may impact the risk of Cx.


2019 ◽  
Vol 19 (6) ◽  
pp. 3733-3746
Author(s):  
Klaus Gierens ◽  
Kostas Eleftheratos

Abstract. We present a novel retrieval for upper-tropospheric humidity (UTH) from High-resolution Infrared Radiation Sounder (HIRS) channel 12 radiances that successfully bridges the wavelength change from 6.7 to 6.5 µm that occurred from HIRS/2 on National Oceanic and Atmospheric Administration satellite NOAA-14 to HIRS/3 on satellite NOAA-15. The jump in average brightness temperature (in the water vapour channel; T12) that this change caused (about −7 K) could be fixed with a statistical inter-calibration method (Shi and Bates, 2011). Unfortunately, the retrieval of UTHi (upper-tropospheric humidity with respect to ice) based on the inter-calibrated data was not satisfying at the high tail of the distribution of UTHi. Attempts to construct a better inter-calibration in the low T12 range (equivalent to the high UTHi range) were either not successful (Gierens et al., 2018) or required additional statistically determined corrections to the measured brightness temperatures (Gierens and Eleftheratos, 2017). The new method presented here is based on the original one (Soden and Bretherton, 1993; Stephens et al., 1996; Jackson and Bates, 2001), but it extends the linearisations in the formulation of the water vapour saturation pressure and in the temperature dependence of the Planck function to second order. To achieve the second-order formulation we derive the retrieval from the beginning, and we find that the most influential ingredient is the use of different optical constants for the two involved channel wavelengths (6.7 and 6.5 µm). The result of adapting the optical constant is an almost perfect match between UTH data measured by HIRS/2 on NOAA-14 and HIRS/3 on NOAA-15 on the 1004 common days of operation. The method is applied to both UTH and UTHi, and for each case retrieval coefficients are derived. We present a number of test applications, e.g. on brightness temperatures computed from high-resolution radiosonde profiles and on the brightness temperatures measured by the satellites on the mentioned 1004 common days of operation. Further, we present time series of the occurrence frequency of high UTHi cases, and we show the overall probability distribution of UTHi. The two latter applications expose indications of a moistening of the upper troposphere over the last 35 years. Finally, we discuss the significance of UTH. We state that UTH algorithms cannot be judged for their correctness or incorrectness, since there is no true UTH. Instead, UTH algorithms should fulfill a number of usefulness postulates, which we suggest and discuss.
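For orientation, the baseline first-order retrieval cited above is commonly quoted in a log-linear form; the version below, together with the quadratic extension, is only a hedged sketch of the idea (the symbols p0, a normalized reference pressure, and θ, the viewing zenith angle, as well as the exact placement of the quadratic term, are our assumptions, not the paper's formulation):

```latex
% Classic first-order form (after Soden and Bretherton, 1993):
\ln\!\left(\frac{\mathrm{UTH}\; p_0}{\cos\theta}\right) \approx a + b\,T_{12}
% A second-order formulation would extend this with a quadratic term:
\ln\!\left(\frac{\mathrm{UTH}\; p_0}{\cos\theta}\right) \approx a + b\,T_{12} + c\,T_{12}^{2}
```

The coefficients a, b (and c) are what the abstract refers to as the retrieval coefficients derived for UTH and UTHi.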


2006 ◽  
Vol 6 (3) ◽  
pp. 831-846 ◽  
Author(s):  
X. Calbet ◽  
P. Schlüssel

Abstract. The Empirical Orthogonal Function (EOF) retrieval technique consists of calculating the eigenvectors of the spectra and then performing a linear regression between these and the atmospheric states; this first step is known as training. At a later stage, known as performing the retrievals, atmospheric profiles are derived from measured atmospheric radiances. When EOF retrievals are trained with a statistically different data set than the one used for the retrievals, two basic problems arise: significant biases appear in the retrievals, and differences between the covariances of the training data set and the measured data set degrade them. The retrieved profiles show a bias with respect to the real profiles that comes from the combined effect of the mean difference between the training and the real spectra projected into the atmospheric state space and the mean difference between the training and the real atmospheric profiles. The standard deviations of the difference between the retrieved profiles and the real ones behave differently depending on whether the covariance of the training spectra is larger than, equal to, or smaller than the covariance of the measured spectra with which the retrievals are performed. The procedure to correct for these effects is shown both analytically and with a measured example. It consists of first calculating the average and standard deviation of the difference between the real observed spectra and the spectra calculated from the real atmospheric state with the radiative transfer model used to create the training spectra. In a later step, the measured spectra must be bias-corrected with this average before performing the retrievals, and the linear regression of the training must be performed after adding noise to the spectra corresponding to the aforementioned calculated standard deviation. This procedure is optimal in the sense that, to improve the retrievals further, one must resort to a different training data set or a different algorithm.
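A minimal sketch of the training and retrieval steps, assuming synthetic low-rank spectra; the dimensions, the SVD-based EOF computation, and the simple scalar bias argument are our assumptions, not the operational implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_chan, n_lev, k = 500, 80, 20, 5

# Synthetic low-rank "atmosphere": k latent modes drive both the spectra
# and the atmospheric state profiles (illustrative only).
modes = rng.normal(size=(n, k))
spectra = modes @ rng.normal(size=(k, n_chan)) + 0.01 * rng.normal(size=(n, n_chan))
states = modes @ rng.normal(size=(k, n_lev))

# Training: EOFs (leading right singular vectors) of the mean-removed
# spectra, then a linear regression from EOF scores to atmospheric states.
mean_spec = spectra.mean(axis=0)
mean_state = states.mean(axis=0)
_, _, vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
eofs = vt[:k]                                  # leading k eigenvectors
scores = (spectra - mean_spec) @ eofs.T        # projections onto the EOFs
coef, *_ = np.linalg.lstsq(scores, states - mean_state, rcond=None)

# Retrieval: bias-correct measured spectra first (cf. the procedure in the
# abstract), then project onto the EOFs and apply the regression.
def retrieve(measured, bias=0.0):
    s = (measured - bias - mean_spec) @ eofs.T
    return s @ coef + mean_state
```

On the training spectra themselves, `retrieve(spectra)` reproduces the states up to the small noise level, which is the consistency check one would expect before applying the bias and covariance corrections to a statistically different measured data set.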


2016 ◽  
Author(s):  
Andre Peters ◽  
Thomas Nehls ◽  
Gerd Wessolek

Abstract. Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information on precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters, A., Nehls, T., Schonsky, H., and Wessolek, G.: Separating precipitation and evapotranspiration from noise – a new filter routine for high-resolution lysimeter data, Hydrol. Earth Syst. Sci., 18, 1189–1198, doi:10.5194/hess-18-1189-2014, 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic predictions of P and ET, especially if they are required at high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitation events to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with one-minute resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for the P and ET predictions are one day, one hour and 10 minutes. As expected, the step scheme yields reasonable flux rates only at a resolution of one day, whereas the other two schemes are well able to yield reasonable results at any resolution. The spline scheme returns slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of the filtered data, so that any output resolution for the fluxes is sound. Since the computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
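The three interpolation schemes can be sketched on a toy series of significant mass changes (values invented); fluxes at the output resolution then follow by differencing the interpolated cumulative mass.

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# Cumulative mass at the significant change points found by the filter
# (illustrative values, not lysimeter data)
t_sig = np.array([0.0, 60.0, 120.0, 180.0])   # time (minutes)
m_sig = np.array([0.0, 0.5, 0.7, 1.5])        # cumulative mass (mm)

t_out = np.arange(0.0, 181.0, 10.0)           # 10-minute output resolution
step = interp1d(t_sig, m_sig, kind="previous")(t_out)   # original step scheme
linear = np.interp(t_out, t_sig, m_sig)                 # linear scheme
spline = CubicSpline(t_sig, m_sig)(t_out)               # spline scheme

# Flux rates at the output resolution follow by differencing
flux_spline = np.diff(spline) / 10.0          # mm per minute
```

The step scheme concentrates each mass change in a single output interval, which is why it only yields reasonable rates at coarse (daily) resolution, while the linear and spline schemes spread it over the interval between significant changes.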

