A new exponentially-decaying error correlation model for assimilating OCO-2 column-average CO<sub>2</sub> data, using a length scale computed from airborne lidar measurements

2021 ◽  
Author(s):  
David F. Baker ◽  
Emily Bell ◽  
Kenneth J. Davis ◽  
Joel F. Campbell ◽  
Bing Lin ◽  
...  

Abstract. To check the accuracy of column-average dry air CO2 mole fractions (XCO2) retrieved from Orbiting Carbon Observatory (OCO-2) data, a similar quantity has been measured from the Multi-functional Fiber Laser Lidar (MFLL) aboard aircraft flying underneath OCO-2 as part of the Atmospheric Carbon and Transport (ACT)-America flight campaigns. Here we perform a lagged correlation analysis of these MFLL-OCO-2 column CO2 differences and find that their correlation spectrum falls off rapidly at along-track separation distances under 10 km, with a correlation length scale of about 10 km, and less rapidly at longer separation distances, with a correlation length scale of about 20 km. The OCO-2 satellite takes many CO2 measurements with small (~3 km2) fields of view (FOVs) in a thin (<10 km wide) swath running parallel to its orbit: up to 24 separate FOVs may be obtained per second (across a ~6.75 km distance on the ground), though clouds, aerosols, and other factors cause considerable data dropout. Errors in the CO2 retrieval method have long been thought to be correlated at these fine scales, and methods to account for these correlations when assimilating the data into top-down atmospheric CO2 flux inversions have been developed. A common approach has been to average the data at coarser scales (e.g., in 10-second-long bins) along-track, then assign an uncertainty to the averaged value that accounts for the error correlations. Here we outline the methods used up to now for computing these 10-second averages and their uncertainties, including the constant-correlation-with-distance error model currently being used to summarize the OCO-2 version 9 XCO2 retrievals as part of the OCO-2 flux inversion model intercomparison project.
We then derive a new one-dimensional error model with correlations that decay exponentially with separation distance, apply this model to the OCO-2 data using the correlation length scales derived from the MFLL-OCO-2 differences, and compare the results (for both the average and its uncertainty) to those given by the current constant-correlation error model. To implement the new model, the data are first averaged across 2-second spans, to collapse the cross-track distribution of the real data onto the 1-D path the model assumes. A small percentage of the data that cause nonphysical negative averaging weights in the model are thrown out. The correlation lengths over the ocean, which the land-based MFLL data cannot constrain, are assumed to be twice those over land. The new correlation model gives 10-second XCO2 averages that differ by only a few tenths of a ppm from those of the constant-correlation model. Over land, the uncertainties in the mean are also similar, suggesting that the constant correlation coefficient of +0.3 currently used in the model there is accurate. Over the oceans, the twice-the-land correlation lengths assumed here result in a significantly lower uncertainty on the mean than the +0.6 constant correlation currently gives; measurements similar to the MFLL ones are needed over the oceans to do better. Finally, we show how our 1-D exponential error correlation model may be used to account for correlations in those inversion methods that choose to assimilate each XCO2 retrieval individually, and to account for correlations between separate 10-second averages when these are assimilated instead.
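The effect of such an exponential error model on the averaging weights can be sketched directly: for soundings at 1-D along-track positions with a common error standard deviation and error correlations exp(-d/L), the generalized-least-squares weights for the average, and the uncertainty of that average, follow from the correlation matrix. This is an illustrative sketch, not the paper's implementation; the sounding spacing, the 0.5 ppm error level, and the 10 km land length scale are assumed values.

```python
import numpy as np

def exp_corr_weights(x_km, sigma_ppm, L_km):
    """Averaging weights and uncertainty of the mean for soundings at
    along-track positions x_km (1-D), assuming a common error standard
    deviation sigma_ppm and error correlations exp(-|dx| / L_km)."""
    d = np.abs(x_km[:, None] - x_km[None, :])
    R = np.exp(-d / L_km)                        # error correlation matrix
    Rinv_1 = np.linalg.solve(R, np.ones(len(x_km)))
    w = Rinv_1 / Rinv_1.sum()                    # GLS weights (sum to 1)
    sigma_mean = sigma_ppm / np.sqrt(Rinv_1.sum())
    return w, sigma_mean

# ten soundings ~0.675 km apart, a 0.5 ppm single-sounding error, and the
# ~10 km land correlation length suggested by the MFLL comparison
x = np.arange(10) * 0.675
w, sigma_mean = exp_corr_weights(x, 0.5, 10.0)
```

Because neighbouring errors are strongly correlated at this spacing, sigma_mean stays much closer to the single-sounding error than the uncorrelated value 0.5/sqrt(10), which is exactly why the averaged value must carry an inflated uncertainty.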


1998 ◽  
Vol 371 ◽  
pp. 269-299 ◽  
Author(s):  
CHING-YAO CHEN ◽  
ECKART MEIBURG

Direct numerical simulations are employed to investigate the coupling between the viscous fingering instability and permeability heterogeneities for miscible displacements in quarter five-spot flows. Even moderate inhomogeneities are seen to have a strong effect on the flow, which can result in a complete bypass of the linear growth phase of the viscous fingering instability. In contrast to their homogeneous counterparts (cf. Part 1, Chen & Meiburg 1998), heterogeneous quarter five-spot flows are seen to exhibit a more uniform dominant length scale throughout the entire flow domain. In line with earlier findings for unidirectional displacements, an optimal interaction of the mobility- and permeability-related vorticity modes can occur when the viscous length scale is of the same order as the correlation length of the heterogeneities. This resonance mechanism results in a minimal breakthrough recovery for intermediate correlation lengths, at a fixed dimensionless flow rate in the form of a Péclet number Pe. However, for a constant correlation length, the recovery does not show a minimum as Pe is varied. Confirming earlier observations, the simulations show a more rapid breakthrough as the variance of the permeability variations increases. However, this tendency is far more noticeable in some parameter regimes than in others. It is furthermore observed that relatively low variances usually cannot change the tendency for a dominant finger to evolve along the inherently preferred diagonal direction, especially for relatively small correlation lengths. Only for higher variances, and for larger correlation lengths, are situations observed in which an off-diagonal finger can become dominant. Due to the nonlinear nature of the selection mechanisms at work, a change in the variance of the heterogeneities can result in the formation of dominant fingers along entirely different channels.


2018 ◽  
Vol 146 (4) ◽  
pp. 1181-1195 ◽  
Author(s):  
Ross N. Hoffman

A one-dimensional (1D) analysis problem is defined and analyzed to explore the interaction of observation thinning or superobservation with observation errors that are correlated or systematic. The general formulation might be applied to a 1D analysis of radiance or radio occultation observations in order to develop a strategy for the use of such data in a full data assimilation system, but is applied here to a simple analysis problem with parameterized error covariances. Findings for the simple problem include the following. For a variational analysis method that includes an estimate of the full observation error covariances, the analysis is more sensitive to variations in the estimated background and observation error standard deviations than to variations in the corresponding correlation length scales. Furthermore, if everything else is fixed, the analysis error increases with decreasing true background error correlation length scale and with increasing true observation error correlation length scale. For a weighted least squares analysis method that assumes the observation errors are uncorrelated, best results are obtained for some degree of thinning and/or tuning of the weights. Without tuning, the best strategy is superobservation with a spacing approximately equal to the observation error correlation length scale.
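The cost of ignoring observation error correlations can be illustrated with a toy scalar version of such an analysis: a background is combined with the plain average of n observations, once using a gain built from the true error statistics and once using a gain that wrongly assumes uncorrelated errors. This is a simplified sketch (scalar state, unweighted averaging), not the paper's analysis system; all numbers are assumed.

```python
import numpy as np

def analysis_error_var(n, dx, L_true, sig_o, sig_b, assume_uncorrelated):
    """Error variance of a toy scalar analysis x_a = (1-k)*x_b + k*ybar,
    where ybar is the plain average of n observations with spacing dx
    whose errors have std sig_o and true correlation exp(-|d| / L_true)."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * dx
    R = np.exp(-d / L_true)
    var_ybar_true = sig_o**2 * R.mean()        # true variance of the obs mean
    var_ybar_used = sig_o**2 / n if assume_uncorrelated else var_ybar_true
    k = sig_b**2 / (sig_b**2 + var_ybar_used)  # gain built from assumed stats
    return (1.0 - k)**2 * sig_b**2 + k**2 * var_ybar_true

# ignoring a 50 km observation error correlation inflates the analysis error
a_ok  = analysis_error_var(20, 10.0, 50.0, 1.0, 1.0, assume_uncorrelated=False)
a_bad = analysis_error_var(20, 10.0, 50.0, 1.0, 1.0, assume_uncorrelated=True)
```

The mistuned gain over-trusts the correlated observations, consistent with the finding that thinning or weight tuning helps when correlations are ignored.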


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 955
Author(s):  
Alamir Elsayed ◽  
Mohamed El-Beltagy ◽  
Amnah Al-Juhani ◽  
Shorooq Al-Qahtani

The point kinetic model is a system of differential equations that enables analysis of reactor dynamics without the need to solve a coupled space-time system of partial differential equations (PDEs). The random variations, especially during startup and shutdown, may become severe and hence should be accounted for in the reactor model. There are two well-known stochastic models for the point reactor that can be used to estimate the mean and variance of the neutron and precursor populations. In this paper, we introduce a new stochastic model for the point reactor, which we name the Langevin point kinetic (LPK) model. The new LPK model combines the advantages of the available models in both accuracy and efficiency. The derivation of the LPK model is outlined in detail, and many test cases are analyzed to compare the new model against results in the literature.
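As a rough illustration of a Langevin-type point kinetic model (not the paper's LPK derivation), one can integrate the one-precursor-group point kinetics equations with an additive noise term on the neutron population using the Euler-Maruyama scheme; the parameter values and the sqrt(n)-scaled noise amplitude below are assumptions for illustration only.

```python
import numpy as np

def point_kinetics_em(rho, beta, lam, Lambda, q, n0, c0, dt, steps, rng):
    """Euler-Maruyama integration of one-group point kinetics with an
    additive Langevin noise term q*sqrt(n)*dW on the neutron population
    (an illustrative stand-in, not the paper's LPK formulation)."""
    n, c = n0, c0
    out = np.empty(steps + 1)
    out[0] = n0
    for k in range(steps):
        dn = ((rho - beta) / Lambda * n + lam * c) * dt
        dc = (beta / Lambda * n - lam * c) * dt
        noise = q * np.sqrt(max(n, 0.0)) * rng.normal(0.0, np.sqrt(dt))
        n = max(n + dn + noise, 0.0)   # keep the population non-negative
        c += dc
        out[k + 1] = n
    return out

# critical reactor (rho = 0) started at equilibrium; parameters are
# assumed illustrative values, not taken from the paper
rng = np.random.default_rng(0)
beta, lam, Lambda, n0 = 0.0065, 0.08, 1e-4, 100.0
c0 = beta * n0 / (Lambda * lam)       # precursor equilibrium level
traj = point_kinetics_em(0.0, beta, lam, Lambda, 0.05, n0, c0, 1e-3, 1000, rng)
```

Repeating such runs over an ensemble gives Monte Carlo estimates of the mean and variance of the neutron population, the quantities the stochastic point reactor models are built to predict.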


2009 ◽  
Vol 54 (2) ◽  
pp. 778-782 ◽  
Author(s):  
Akihiro Tanaka ◽  
Tetsuya Aiba ◽  
Takashi Otsuka ◽  
Katsuya Suemaru ◽  
Tatsuya Nishimiya ◽  
...  

ABSTRACT We determined the population pharmacokinetics of vancomycin (VAN) using the glomerular filtration rate (GFR) estimated from the serum cystatin C concentration. We examined the predictive performance of the trough serum VAN concentration for determination of the initial dose by using a new model for the analysis of the population pharmacokinetic parameters. Data for 86 patients were used to estimate the values of the population pharmacokinetic parameters. Analysis with a nonlinear mixed-effects modeling program was done by using a one-compartment model. Data for 78 patients were used to evaluate the predictive performance of the new model for the analysis of population pharmacokinetic parameters. The estimated GFR values determined by using Hoek's formula correlated linearly with VAN clearance (VAN clearance [ml/min] = 0.825 × GFR). The mean volume of distribution was 0.864 (liters/kg). The interindividual variability of VAN clearance was 19.8%. The accuracy of the prediction determined by use of the new model was statistically better than that determined by use of the Japanese nomogram-based model because the 95% confidence interval (−3.45 to −1.38) of the difference in each value of the mean absolute error (−2.41) did not include 0. Use of the serum cystatin C concentration as a marker of renal function for prediction of serum VAN concentrations may be useful.
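The reported population values can be combined with the standard one-compartment intermittent-infusion equations to predict a steady-state trough concentration. The following is a textbook sketch using the abstract's parameters (CL = 0.825 × GFR, V = 0.864 liters/kg), with a hypothetical dosing scenario; it is not the authors' mixed-effects model.

```python
import math

def van_trough(dose_mg, tau_h, tinf_h, gfr_ml_min, weight_kg):
    """Steady-state trough (mg/liter) from a one-compartment model with
    intermittent infusion, using the population values reported above:
    CL (ml/min) = 0.825 * GFR and V = 0.864 liters/kg."""
    cl_l_h = 0.825 * gfr_ml_min * 60.0 / 1000.0   # ml/min -> liters/h
    v_l = 0.864 * weight_kg                       # volume of distribution
    k = cl_l_h / v_l                              # elimination rate constant (1/h)
    # standard intermittent-infusion steady-state peak, then decay to trough
    cmax = (dose_mg / (cl_l_h * tinf_h)) * (1.0 - math.exp(-k * tinf_h)) \
           / (1.0 - math.exp(-k * tau_h))
    return cmax * math.exp(-k * (tau_h - tinf_h))

# hypothetical scenario: 1000 mg infused over 1 h every 12 h,
# 60 kg patient with a cystatin C-estimated GFR of 80 ml/min
trough = van_trough(1000.0, 12.0, 1.0, 80.0, 60.0)
```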


Author(s):  
A.-L. Montreuil ◽  
M. Chen ◽  
A. Esquerré ◽  
R. Houthuys ◽  
R. Moelans ◽  
...  

<p><strong>Abstract.</strong> Sustainable management of coastal resources requires a better understanding of the processes that drive coastline change. The coastline is a highly dynamic sea-terrestrial interface. It is affected by forcing factors such as water levels, waves and winds, and the largest and most severe changes occur during storm surges. Extreme storms are drivers of rapid and sometimes dramatic changes of the coastline. The impacts of these events entail a broad range of social, economic and natural-resource considerations, from threats to humans and infrastructure to habitat loss. This study investigates the impact of a severe storm on coastline response on a sandy multi-barred beach at the Belgian coast. Airborne LiDAR surveys acquired pre- and post-storm, covering an area larger than 1 km<sup>2</sup>, were analyzed, and reproducible monitoring solutions adapted to assessing beach morphological changes were applied. Results indicated that the coast retreated by a maximum of 14.7 m where the embryo dunes in front of the fixed dunes vanished and the foredune was undercut. Storm surge and wave attack were probably most energetic there. However, the response of the coastline proxies associated with the mean high water line (MHW) and dune toe (DuneT) was spatially variable. Based on the extracted beach features, good correlations (r>0.73) were found between coastline, berm and inner intertidal bar morphology, while correlations were weak with the most seaward bars covered by the surveys. This highlights the role of the upper beach features in protecting the coastline from storm erosion by reducing wave energy. The findings are of critical importance in improving our knowledge and forecasting of coastline response to storms, and in translating this knowledge into management practices.</p>


2019 ◽  
Vol 104 (2-3) ◽  
pp. 331-354 ◽  
Author(s):  
Angela Busse ◽  
Thomas O. Jelly

Abstract. The influence of surface anisotropy upon the near-wall region of a rough-wall turbulent channel flow is investigated using direct numerical simulation (DNS). A set of nine irregular rough surfaces with fixed mean peak-to-valley height, near-Gaussian height distributions and specified streamwise and spanwise correlation lengths were synthesised using a surface generation algorithm. By defining the surface anisotropy ratio (SAR) as the ratio of the streamwise and spanwise correlation lengths of the surface, we demonstrate that surfaces with a strong spanwise anisotropy (SAR < 1) can induce an over 200% increase in the roughness function ΔU+, compared to their streamwise anisotropic (SAR > 1) equivalent. Furthermore, we find that the relationship between the roughness function ΔU+ and the SAR parameter approximately follows an exponentially decaying function. The statistical response of the near-wall flow is studied using a “double-averaging” methodology in order to distinguish form-induced “dispersive” stresses from their turbulent counterparts. Outer-layer similarity is recovered for the mean velocity defect profile as well as the Reynolds stresses. The dispersive stresses all attain their maxima within the roughness canopy. Only the streamwise dispersive stress reaches levels that are comparable to the equivalent Reynolds stress, with surfaces of high SAR attaining the highest levels of streamwise dispersive stress. The Reynolds stress anisotropy also shows distinct differences between cases with strong streamwise anisotropy, which stay close to an axisymmetric, rod-like state for all wall-normal locations, and cases with spanwise anisotropy, where an axisymmetric, disk-like state of the Reynolds stress anisotropy tensor is observed around the roughness mean plane.
Overall, the results from this study underline that the drag penalty incurred by a rough surface is strongly influenced by the surface topography and highlight its impact upon the mean momentum deficit in the outer flow as well as the Reynolds and dispersive stresses within the roughness layer.
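The reported approximately exponential decay of the roughness function with SAR can be illustrated by fitting a three-parameter decaying exponential. The data points below are invented for illustration only and are not the paper's DNS results.

```python
import numpy as np
from scipy.optimize import curve_fit

def du_plus(sar, c0, c1, c2):
    # roughness function vs. surface anisotropy ratio, modelled as an
    # exponentially decaying function: dU+ = c0 + c1 * exp(-c2 * SAR)
    return c0 + c1 * np.exp(-c2 * sar)

# invented illustrative values: strong spanwise anisotropy (SAR < 1)
# produces a much larger roughness function than SAR > 1
sar = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
du  = np.array([9.0, 7.0, 5.0, 4.2, 4.0])
p, _ = curve_fit(du_plus, sar, du, p0=[4.0, 8.0, 1.0])
c0, c1, c2 = p   # asymptotic level, decay amplitude, decay rate
```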


2019 ◽  
Vol 622 ◽  
pp. A131 ◽  
Author(s):  
U. Simola ◽  
X. Dumusque ◽  
J. Cisewski-Kehe

Context. Stellar activity is one of the primary limitations to the detection of low-mass exoplanets using the radial-velocity (RV) technique. Stellar activity can be probed by measuring time-dependent variations in the shape of the cross-correlation function (CCF). It is therefore critical to measure these shape variations with high precision in order to decorrelate the signal of an exoplanet from spurious RV signals caused by stellar activity. Aims. We propose to estimate the variations in shape of the CCF by fitting a Skew Normal (SN) density which, unlike the commonly employed Normal density, includes a skewness parameter to capture the asymmetry of the CCF induced by stellar activity and the convective blueshift. Methods. We compared the performance of the proposed method to that of the commonly employed Normal density using both simulations and real observations with different levels of activity and signal-to-noise ratios. Results. When considering real observations, the correlations between the RV and the asymmetry of the CCF and between the RV and the width of the CCF are stronger when using the parameters estimated with the SN density rather than those obtained with the commonly employed Normal density. In particular, the strongest correlations have been obtained when using the mean of the SN as an estimate for the RV. This suggests that the CCF parameters estimated using a SN density are more sensitive to stellar activity, which can be helpful when estimating stellar rotational periods and when characterizing stellar activity signals. Using the proposed SN approach, the uncertainties estimated on the RV defined as the median of the SN are on average 10% smaller than the uncertainties calculated on the mean of the Normal. The uncertainties estimated on the asymmetry parameter of the SN are on average 15% smaller than the uncertainties measured on the Bisector Inverse Slope Span (BIS SPAN), which is the commonly used parameter to evaluate the asymmetry of the CCF.
We also propose a new model to account for stellar activity when fitting a planetary signal to RV data. Based on simple simulations, we were able to demonstrate that this new model improves the planetary detection limits by 12% compared to the model commonly used to account for stellar activity. Conclusions. The SN density is a better model than the Normal density for characterizing the CCF since the correlations used to probe stellar activity are stronger and the uncertainties of the RV estimate and the asymmetry of the CCF are both smaller.
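A minimal sketch of the SN-fitting idea, assuming a simple parametrization of the CCF dip (the profile below is synthetic, not the paper's pipeline): the skewness parameter captures the asymmetry, and the mean of the fitted Skew Normal serves as the RV estimate.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def sn_profile(v, depth, a, loc, scale):
    # CCF dip modelled as 1 minus a scaled Skew Normal density;
    # `depth` sets the dip amplitude, `a` the skewness
    return 1.0 - depth * scale * skewnorm.pdf(v, a, loc, scale)

# synthetic, slightly asymmetric CCF on a velocity grid (km/s); illustrative only
v = np.linspace(-20.0, 20.0, 201)
ccf = sn_profile(v, 1.2, 1.5, 0.3, 5.0)

# fit starting from a symmetric (a = 0, i.e. Normal) initial guess
p, _ = curve_fit(sn_profile, v, ccf, p0=[1.0, 0.0, 0.0, 4.0])
depth, a, loc, scale = p
delta = a / np.sqrt(1.0 + a**2)
rv_mean = loc + scale * delta * np.sqrt(2.0 / np.pi)  # SN mean as RV estimate
```

Fixing a = 0 reduces this to the usual Normal (Gaussian) CCF fit, which is the baseline the SN approach is compared against.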

