Carbon source/sink information provided by column CO₂ measurements from the Orbiting Carbon Observatory

2010 ◽  
Vol 10 (9) ◽  
pp. 4145-4165 ◽  
Author(s):  
D. F. Baker ◽  
H. Bösch ◽  
S. C. Doney ◽  
D. O'Brien ◽  
D. S. Schimel

Abstract. We quantify how well column-integrated CO2 measurements from the Orbiting Carbon Observatory (OCO) should be able to constrain surface CO2 fluxes, given the presence of various error sources. We use variational data assimilation to optimize weekly fluxes at a 2°×5° resolution (lat/lon) using simulated data averaged across each model grid box overflight (a crossing typically takes ~33 s). Grid-scale simulations of this sort have been carried out before for OCO using simplified assumptions for the measurement error. Here, we more accurately describe the OCO measurements in two ways. First, we use new estimates of the single-sounding retrieval uncertainty and averaging kernel, both computed as a function of surface type, solar zenith angle, aerosol optical depth, and pointing mode (nadir vs. glint). Second, we collapse the information content of all valid retrievals from each grid box crossing into an equivalent multi-sounding measurement uncertainty, factoring in both time/space error correlations and data rejection due to clouds and thick aerosols. Finally, we examine the impact of three types of systematic errors: measurement biases due to aerosols, transport errors, and mistuning errors caused by assuming incorrect statistics. When only random measurement errors are considered, both nadir- and glint-mode data give error reductions over the land of ~45% for the weekly fluxes, and ~65% for seasonal fluxes. Systematic errors reduce both the magnitude and spatial extent of these improvements by about a factor of two, however. Improvements nearly as large are achieved over the ocean using glint-mode data, but are degraded even more by the systematic errors. Our ability to identify and remove systematic errors in both the column retrievals and atmospheric assimilations will thus be critical for maximizing the usefulness of the OCO data.
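As a rough illustration of the multi-sounding collapse described above, the sketch below shows how a uniform error correlation among the soundings in a grid-box crossing inflates the effective uncertainty of their average; the σ, n, and ρ values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def equivalent_uncertainty(sigma_single, n_soundings, rho):
    """Standard error of the mean of n equally weighted soundings whose
    errors share a uniform pairwise correlation rho:

        Var(mean) = sigma^2 * (1 + (n - 1) * rho) / n

    With rho = 0 this recovers the familiar sigma / sqrt(n); with rho = 1
    averaging brings no benefit at all.
    """
    var_mean = sigma_single**2 * (1.0 + (n_soundings - 1) * rho) / n_soundings
    return np.sqrt(var_mean)

# Illustrative numbers only: 2 ppm single-sounding error, 80 valid
# retrievals in a ~33 s grid-box crossing, modest error correlation.
print(equivalent_uncertainty(2.0, 80, 0.0))   # ~0.22 ppm (uncorrelated)
print(equivalent_uncertainty(2.0, 80, 0.3))   # ~1.11 ppm (correlated)
```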

2008 ◽  
Vol 8 (6) ◽  
pp. 20051-20112 ◽  
Author(s):  
D. F. Baker ◽  
H. Bösch ◽  
S. C. Doney ◽  
D. S. Schimel

Abstract. We perform a series of observing system simulation experiments (OSSEs) to quantify how well surface CO2 fluxes may be estimated using column-integrated CO2 data from the Orbiting Carbon Observatory (OCO), given the presence of various error sources. We use variational data assimilation to optimize weekly fluxes at 2°×5° (lat/lon) using simulated data averaged only across the ~33 s that OCO takes to cross a typical 2°×5° model grid box. Grid-scale OSSEs of this sort have been carried out before for OCO using simplified assumptions for the measurement error. Here, we more accurately describe the OCO measurements in two ways. First, we use new estimates of the single-sounding retrieval uncertainty and averaging kernel, both computed as a function of surface type, solar zenith angle, aerosol optical depth, and pointing mode (nadir vs. glint). Second, we collapse the information content of all valid retrievals from each grid box crossing into an equivalent multi-sounding measurement uncertainty, factoring in both time/space error correlations and data availability due to clouds and thick aerosols (calculated from MODIS data). Finally, we examine the impact of three types of systematic errors: measurement biases due to aerosols, transport errors, and errors caused by assuming incorrect error statistics. When only random measurement errors are considered, both nadir- and glint-mode data give error reductions of ~50% over the land for the weekly fluxes, and ~65% for seasonal fluxes. Systematic errors reduce both the magnitude and extent of these improvements by up to a factor of two, however. Flux improvements over the ocean are significant only when using glint-mode data and are smaller than those over land; when the assimilation is mistuned, slow convergence makes even these improvements difficult to achieve. The OCO data may prove most useful over the tropical land areas, where our current flux knowledge is weak and where the measurements remain fairly accurate even in the face of systematic errors.
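The variational optimization at the core of these OSSEs minimizes a cost function that balances prior fluxes against the simulated column data. The sketch below sets up a generic cost of that form with tiny stand-in matrices; all dimensions and values are invented, and the paper's actual state is weekly 2°×5° fluxes with an atmospheric transport model as the observation operator.

```python
import numpy as np
from scipy.optimize import minimize

# Generic variational cost of the kind used in such assimilations:
#   J(x) = 0.5*(x-xb)' B^-1 (x-xb) + 0.5*(Hx-y)' R^-1 (Hx-y)
# where x are the fluxes, xb the prior, H the observation operator,
# and y the (simulated) column CO2 data.
rng = np.random.default_rng(3)
nx, ny = 6, 10
H = rng.normal(size=(ny, nx))          # toy linear observation operator
x_true = rng.normal(size=nx)
y = H @ x_true + rng.normal(0, 0.1, ny)
xb = np.zeros(nx)
B_inv = np.eye(nx)                     # prior (background) precision
R_inv = np.eye(ny) / 0.1**2            # measurement precision

def cost_and_grad(x):
    dxb, dy = x - xb, H @ x - y
    J = 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy
    g = B_inv @ dxb + H.T @ R_inv @ dy
    return J, g

x_opt = minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B").x
```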


2012 ◽  
Vol 44 (3) ◽  
pp. 454-466 ◽  
Author(s):  
Sander P. M. van den Tillaart ◽  
Martijn J. Booij ◽  
Maarten S. Krol

Uncertainties in discharge determination may have serious consequences for hydrological modelling and resulting discharge predictions used for flood forecasting, climate change impact assessment and reservoir operation. The aim of this study is to quantify the effect of discharge errors on parameters and performance of a conceptual hydrological model for discharge prediction applied to two catchments. Six error sources in discharge determination are considered: random measurement errors without autocorrelation; random measurement errors with autocorrelation; systematic relative measurement errors; systematic absolute measurement errors; hysteresis in the discharge–water level relation; and effects of an outdated discharge–water level relation. Assuming realistic magnitudes for each error source, results show that systematic errors and an outdated discharge–water level relation have a considerable influence on model performance, while other error sources have a small to negligible effect. The effects of errors on parameters are large if the effects on model performance are large as well and vice versa. Parameters controlling the water balance are influenced by systematic errors and parameters related to the shape of the hydrograph are influenced by random errors. Large effects of discharge errors on model performance and parameters should be taken into account when using discharge predictions for flood forecasting and impact assessment.
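The sketch below illustrates, under assumed magnitudes, how several of these error sources can be superimposed on a "true" discharge series: uncorrelated and AR(1)-autocorrelated random relative errors plus systematic relative and absolute offsets. Hysteresis and an outdated rating curve would require a stage-discharge model and are omitted; the function name and all magnitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_discharge(q, rel_noise=0.05, rho=0.0, rel_bias=0.0, abs_bias=0.0):
    """Apply simplified versions of four of the six error sources to a
    'true' discharge series q (m^3/s): random relative errors, optionally
    AR(1)-autocorrelated with coefficient rho, plus a systematic relative
    error and a systematic absolute error.
    """
    n = len(q)
    eps = rng.normal(0.0, rel_noise, n)
    if rho > 0.0:                       # AR(1): e_t = rho*e_{t-1} + w_t
        for t in range(1, n):
            eps[t] = rho * eps[t - 1] + np.sqrt(1 - rho**2) * eps[t]
    return q * (1.0 + rel_bias) * (1.0 + eps) + abs_bias

q_true = 50.0 + 30.0 * np.sin(np.linspace(0, 4 * np.pi, 365))  # toy series
q_obs = perturb_discharge(q_true, rel_noise=0.05, rho=0.8, rel_bias=0.03)
```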


Author(s):  
R. Shults

The problem of determining the accuracy of UAV positioning with an INS during aerial photography can be resolved in two different ways: modelling of the measurement errors or in-field calibration of the INS. The paper presents the results of INS error research by mathematical modelling. The following steps were considered: development of an INS computer model; execution of the INS simulation; and, using error-free reference data, estimation of the errors and their influence on the accuracy of map creation from UAV data. It must be remembered that the values of the orientation angles and the coordinates of the projection centre may change abruptly due to the influence of the atmosphere (varying air density, wind, etc.). The mathematical model of the INS was therefore constructed to accommodate different models of wind gusts. Typical characteristics of micro-electromechanical (MEMS) INS and the parameters of the standard atmosphere were used for the simulation. The simulation established the dominance of INS systematic errors, which accumulate during photography and require a compensation mechanism, especially for the orientation angles. MEMS INS have a high level of noise at the system input. Thanks to the developed model, the impact of noise can be investigated separately, in the absence of systematic errors. The research found that over a 5-second observation interval the impacts of the random and systematic components are almost the same. The developed INS error model was implemented in the Matlab environment and can readily be improved and extended with new blocks.
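A minimal sketch of this kind of experiment: integrating white gyro noise alongside an uncompensated constant bias shows the two error components at comparable size after roughly 5 s, with the systematic drift dominating thereafter. The MEMS noise figures below are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative MEMS-grade gyro figures (assumed): an angle-random-walk
# coefficient and a constant bias, integrated over a 5 s interval.
dt = 0.01                      # s, 100 Hz sampling
t = np.arange(0.0, 5.0, dt)
arw = 0.1                      # deg/sqrt(s), angle random walk coefficient
bias = 0.05                    # deg/s, uncompensated systematic bias

rng = np.random.default_rng(0)
rate_noise = rng.normal(0.0, arw / np.sqrt(dt), t.size)  # white rate noise

angle_random = np.cumsum(rate_noise) * dt   # grows roughly as arw * sqrt(t)
angle_system = bias * t                     # grows linearly with t

# After ~5 s the two contributions are of comparable size (0.22 deg rms
# vs. 0.25 deg), matching the qualitative conclusion above; beyond that
# the systematic drift dominates and needs a compensation mechanism.
print(angle_random[-1], angle_system[-1])
```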


Author(s):  
W.J. de Ruijter ◽  
Sharma Renu

Established methods for measurement of lattice spacings and angles of crystalline materials include x-ray diffraction, microdiffraction and HREM imaging. Structural information from HREM images is normally obtained off-line with the traveling table microscope or by the optical diffractogram technique. We present a new method for precise measurement of lattice vectors from HREM images using an on-line computer connected to the electron microscope. It has already been established that an image of crystalline material can be represented by a finite number of sinusoids. The amplitude and the phase of these sinusoids are affected by the microscope transfer characteristics, which are strongly influenced by the settings of defocus, astigmatism and beam alignment. However, the frequency of each sinusoid is solely a function of overall magnification and periodicities present in the specimen. After proper calibration of the overall magnification, lattice vectors can be measured unambiguously from HREM images.

Measurement of lattice vectors is a statistical parameter estimation problem similar to amplitude, phase and frequency estimation of sinusoids in 1-dimensional signals as encountered, for example, in radar, sonar and telecommunications. It is important to properly model the observations, the systematic errors and the non-systematic errors. The observations are modelled as a sum of (2-dimensional) sinusoids. In the present study the components of the frequency vector of the sinusoids are the only parameters of interest. Non-systematic errors in recorded electron images are described as white Gaussian noise. The most important systematic error is geometric distortion. Lattice vectors are measured using a two-step procedure. First, a coarse search is performed using a fast Fourier transform on an image section of interest. Prior to Fourier transformation the image section is multiplied by a window, which gradually falls off to zero at the edges. The user interactively indicates the periodicities of interest by selecting spots in the digital diffractogram. A fine search for each selected frequency is then implemented using a bilinear interpolation, which depends on the window function. It is possible to refine the estimate even further using non-linear least-squares estimation; the first two steps provide the proper starting values for the numerical minimization (e.g. Gauss-Newton). This third step increases the precision by 30%, to the highest theoretically attainable (the Cramér-Rao lower bound).

In the present studies we use a Gatan 622 TV camera attached to the JEM 4000EX electron microscope. Image analysis is implemented on a MicroVAX II computer equipped with a powerful array processor and real-time image processing hardware. The typical precision, defined as the standard deviation of the distribution of measurement errors, is found to be <0.003 Å measured on single-crystal silicon and <0.02 Å measured on small (10-30 Å) specimen areas. These values are about ten times larger than predicted by theory. Furthermore, the measured precision is observed to be independent of signal-to-noise ratio (determined by the number of averaged TV frames). Evidently, the precision is limited mainly by the geometric distortion introduced by the TV camera. For this reason, we are replacing the Gatan 622 TV camera with a modern high-grade CCD-based camera system. Such a system not only has negligible geometric distortion, but also high dynamic range (>10,000) and high resolution (1024×1024 pixels). The geometric distortion of the projector lenses can be measured and corrected through re-sampling of the digitized image.
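A minimal sketch of the two-step estimation on a synthetic lattice image, assuming a Hann window and using an intensity-weighted centroid as a simple stand-in for the window-dependent bilinear interpolation and Gauss-Newton refinement described above; function names are illustrative.

```python
import numpy as np

def estimate_lattice_frequency(img):
    """Coarse-then-fine estimate of the dominant spatial frequency in a
    2-D lattice image: windowed FFT peak search, then a local centroid
    refinement (a simplification of the authors' scheme).
    """
    n = img.shape[0]
    w = np.hanning(n)
    win = np.outer(w, w)                      # separable 2-D Hann window
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img * win)))
    spec[n // 2, n // 2] = 0.0                # suppress the DC term
    ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
    # Refine with an intensity-weighted centroid over a 3x3 neighbourhood.
    patch = spec[ky - 1:ky + 2, kx - 1:kx + 2]
    dy, dx = np.mgrid[-1:2, -1:2]
    ky_ref = ky + (patch * dy).sum() / patch.sum()
    kx_ref = kx + (patch * dx).sum() / patch.sum()
    return (ky_ref - n // 2) / n, (kx_ref - n // 2) / n  # cycles/pixel

# Synthetic test: a single 2-D sinusoid at a known frequency.
n = 256
y, x = np.mgrid[0:n, 0:n]
img = np.cos(2 * np.pi * (0.123 * x + 0.057 * y))
print(estimate_lattice_frequency(img))  # ~ ±(0.057, 0.123); the two
                                        # conjugate peaks are equivalent
```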


2000 ◽  
Vol 151 (12) ◽  
pp. 502-507
Author(s):  
Christian Küchli

Are there any common patterns in the transition from traditional, more or less sustainable forest management to exploitative use, patterns that can regularly be observed both in central Europe and in the countries of the South (e.g. India or Indonesia)? A time-space model is used to typify the force fields in which traditional sustainable forest management is undermined and then transformed into a modern type of sustainable forest management. Although it is unlikely that the history of the North will become the future of the South, the glimpse into the northern past offers a useful starting point for understanding the current situation in the South, which in turn could stimulate the debate on development. For instance, the patterns behind the conflicts over forest use in the Himalayas are very similar to those behind the conflicts in the Alps. In the same way, the impact of socio-economic changes on the environment – key word ‹globalisation› – is often much the same. Recognizing comparable patterns can be very valuable because it can act as a stimulus in the search for political, legal and technical solutions adapted to a specific situation. For the global community, recognizing how political-economic alliances work at the head of the ‹globalisation wave› can only mean continuing to seek a common language and understanding at the negotiating tables. On the lee side of the destructive breaker it is necessary to conserve and care for what has survived. As was the case in Switzerland, these forest islands could one day become the germination points for the genesis of a cultural landscape in which close-to-nature managed forests constitute an essential element.


1993 ◽  
Vol 27 (3-4) ◽  
pp. 1-13 ◽  
Author(s):  
Arie H. Havelaar ◽  
Siem H. Heisterkamp ◽  
Janneke A. Hoekstra ◽  
Kirsten A. Mooijman

The general concept of measurement errors is applied to quantitative bacteriological counts on membrane filters or agar plates. The systematic errors of these methods are related to the growth characteristics of the medium (recovery of target organisms and inhibition of non-target organisms) and to its differential characteristics (sensitivity and specificity). Factors that influence the precision of microbiological counts are the variation between replicates, within samples, between operators and between laboratories. It is also affected by the linearity of the method, the verification rate and, where applicable, the number of colonies subcultured for verification. Repeatability (r) and reproducibility (R) values can be calculated on the logarithmic scale.
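The sketch below shows a standard one-way ANOVA route to r and R on the log10 scale, using the usual 2.8·s convention (2.8 ≈ 1.96·√2); a textbook illustration with invented duplicate counts, not the authors' data or exact computation.

```python
import numpy as np

def repeatability_reproducibility(log_counts):
    """Estimate repeatability (r) and reproducibility (R) from log10
    colony counts arranged as (laboratories x replicates), via the
    one-way ANOVA decomposition and the 2.8 * sd convention.
    """
    labs, reps = log_counts.shape
    lab_means = log_counts.mean(axis=1)
    s_r2 = ((log_counts - lab_means[:, None])**2).sum() / (labs * (reps - 1))
    s_L2 = max(lab_means.var(ddof=1) - s_r2 / reps, 0.0)  # between-lab part
    s_R2 = s_r2 + s_L2
    return 2.8 * np.sqrt(s_r2), 2.8 * np.sqrt(s_R2)

# Toy example: 5 laboratories, duplicate membrane-filter counts (log10).
counts = np.log10([[52, 61], [48, 55], [70, 66], [43, 40], [58, 64]])
r, R = repeatability_reproducibility(counts)
print(f"r = {r:.2f}, R = {R:.2f} log10 units")
```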


2020 ◽  
Author(s):  
Ayan Chatterjee ◽  
Ram Bajpai ◽  
Pankaj Khatiwada

BACKGROUND Lifestyle diseases are the primary cause of death worldwide. The gradual growth of negative behavior in humans due to physical inactivity, unhealthy habits, and improper nutrition accelerates lifestyle diseases. In this study, we develop a mathematical model to analyze the impact of regular physical activity, healthy habits, and a proper diet on weight change, targeting obesity as a case study. We then design an algorithm to verify the proposed mathematical model with simulated data from artificial participants. OBJECTIVE This study analyzes the effect of healthy behavior (physical activity, healthy habits, and a proper dietary pattern) on weight change with a proposed mathematical model, verified with an algorithm in which personalized habits change dynamically based on rules. METHODS We developed a weight-change mathematical model as a function of activity, habit, and nutrition, using the first law of thermodynamics, basal metabolic rate (BMR), total daily energy expenditure (TDEE), and body mass index (BMI) to establish a relationship between health behavior and weight change. We then verified the model with simulated data. RESULTS The proposed provable mathematical model showed a strong relationship between health behavior and weight change. We verified the mathematical model with the proposed algorithm using simulated data under the necessary constraints. Adopting the Harris-Benedict equations for the BMR and TDEE calculations increased the model's accuracy under the defined settings. CONCLUSIONS This study helped us understand the numeric impact of healthy behavior on obesity and overweight, and the importance of adopting a healthy lifestyle while abstaining from negative behavior change.
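A minimal sketch of such an energy-balance model, combining the revised Harris-Benedict BMR, an activity-factor TDEE, and the common ~7700 kcal/kg conversion; the parameter values are illustrative and this is not the authors' exact formulation.

```python
def bmr_harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Revised Harris-Benedict basal metabolic rate (kcal/day)."""
    if sex == "male":
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_yr
    return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_yr

def simulate_weight(weight_kg, height_cm, age_yr, sex,
                    activity_factor, intake_kcal, days):
    """Energy-balance weight trajectory: TDEE = BMR * activity factor,
    with the daily surplus or deficit converted to mass via the common
    ~7700 kcal/kg rule of thumb.
    """
    KCAL_PER_KG = 7700.0
    trajectory = [weight_kg]
    for _ in range(days):
        tdee = (bmr_harris_benedict(weight_kg, height_cm, age_yr, sex)
                * activity_factor)
        weight_kg += (intake_kcal - tdee) / KCAL_PER_KG
        trajectory.append(weight_kg)
    return trajectory

# Example: sedentary (1.2) vs. moderately active (1.55) at equal intake.
print(simulate_weight(90, 175, 35, "male", 1.2, 2400, 90)[-1])
print(simulate_weight(90, 175, 35, "male", 1.55, 2400, 90)[-1])
```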


Author(s):  
Grant Duwe

As the use of risk assessments for correctional populations has grown, so has concern that these instruments exacerbate existing racial and ethnic disparities. While much of the attention arising from this concern has focused on how algorithms are designed, relatively little consideration has been given to how risk assessments are used. To this end, the present study tests whether applying the risk principle would help preserve predictive accuracy while, at the same time, mitigating disparities. Using a sample of 9,529 inmates released from Minnesota prisons who had been assessed multiple times during their confinement on a fully automated risk assessment, this study relies on both actual and simulated data to examine the impact of program assignment decisions on changes in risk level from intake to release. The findings showed that while the risk principle was applied in practice to some extent, the simulations indicated that greater adherence to the risk principle would increase reductions in risk levels and minimize the disparities observed at intake. The simulated data further revealed that the most favorable outcomes would be achieved not only by applying the risk principle, but also by expanding program capacity for higher-risk inmates in order to adequately reduce their risk.
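A toy illustration (not the study's data or model) of why adherence to the risk principle matters under limited program capacity, assuming treatment reduces risk roughly in proportion to intake risk; all effect sizes and names are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_assignment(risk_intake, capacity, by_risk=True,
                        effect=0.3, noise=0.1):
    """With limited program capacity, assigning the highest-risk
    individuals first yields a larger aggregate risk reduction than
    random assignment, because the assumed treatment effect scales
    with intake risk.
    """
    n = risk_intake.size
    order = np.argsort(-risk_intake) if by_risk else rng.permutation(n)
    treated = np.zeros(n, dtype=bool)
    treated[order[:capacity]] = True
    release = risk_intake * (1 - effect * treated) + rng.normal(0, noise, n)
    return np.clip(release, 0, None).mean()

risk = np.clip(rng.normal(2.0, 1.0, 9529), 0, None)  # toy intake scores
print(simulate_assignment(risk, capacity=3000, by_risk=True))   # lower mean
print(simulate_assignment(risk, capacity=3000, by_risk=False))  # higher mean
```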


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592095492
Author(s):  
Marco Del Giudice ◽  
Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
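A minimal specification-curve sketch on simulated data: the same effect is estimated under every combination of two analytic choices (outlier handling and outcome transformation), exposing how much the estimate moves across specifications. Variable names and choices are illustrative, not drawn from the article's examples.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Simulated data set: the outcome depends weakly on x, plus a few outliers.
n = 500
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)
y[:5] += 8.0                                  # contaminating outliers

def effect(x, y, drop_outliers, log_transform):
    """One 'specification': an OLS slope under two analytic choices."""
    if drop_outliers:
        keep = np.abs(y - np.median(y)) < 3 * np.std(y)
        x, y = x[keep], y[keep]
    if log_transform:
        y = np.log(y - y.min() + 1.0)
    return np.polyfit(x, y, 1)[0]             # slope of y on x

# Multiverse: evaluate the effect across every combination of choices.
specs = list(itertools.product([False, True], repeat=2))
estimates = [effect(x, y, *s) for s in specs]
for s, b in sorted(zip(specs, estimates), key=lambda t: t[1]):
    print(f"drop_outliers={s[0]!s:5} log={s[1]!s:5}  slope={b:+.3f}")
```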


2021 ◽  
Vol 18 (1) ◽  
pp. 163-176
Author(s):  
Penghua Han ◽  
Cun Zhang ◽  
Zhaopeng Ren ◽  
Xiang He ◽  
Sheng Jia

Abstract. The advance speed of a longwall face is an essential factor affecting the mining pressure and overburden movement, and an effective approach for choosing a reasonable advance speed is needed to realise safe and efficient coal mine production. To clarify the influence of advance speed on the overburden movement of a fully mechanised longwall face, a time-space subsidence model of overburden movement is established by the continuous-medium analysis method. The movement of the overburden as a function of advance speed is obtained, and the mining stress characteristics at different advance speeds are reasonably explained. The theoretical results of this model are further verified by a physical simulation experiment. The results support the following conclusions. (i) With increasing advance speed of the longwall face, the first (periodic) rupture intervals of the main roof and the key stratum increase, while the subsidence of the roof, the fracture angle and the rotation angle of the roof decrease. (ii) With increasing advance speed, the roof displacement range decreases gradually, and the influence range of the advance speed on roof subsidence extends 75 m behind the longwall face. (iii) An increase in the advance speed of the longwall face from 4.89 to 15.23 m/d (daily advance of the longwall face) results in a 3.28% increase in the impact load caused by the sliding instability of the fractured rock of the main roof and a 5.79% decrease in the additional load caused by the rotation of the main roof, ultimately resulting in a 9.63% increase in the average dynamic load coefficient of the support. The roof subsidence model based on advance speed is proposed to provide theoretical support for rational mining design and mining-pressure-control early warning for a fully mechanised longwall face.
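The paper's continuous-medium model is not reproduced here, but the qualitative advance-speed effect can be illustrated with Knothe's classical time function, in which a point farther behind a faster-moving face has had less time to subside; all parameter values below are illustrative only.

```python
import numpy as np

def roof_subsidence(x_behind_face, advance_speed, s_max=2.0, c=0.05):
    """Stand-in for an advance-speed-dependent subsidence curve using
    Knothe's time function s(t) = s_max * (1 - exp(-c t)): a point
    x metres behind the face has been undermined for t = x / v days,
    so faster advance leaves less time for the roof to subside at a
    given distance.
    """
    t = x_behind_face / advance_speed          # days since undermining
    return s_max * (1.0 - np.exp(-c * t))

x = np.linspace(0.0, 75.0, 4)                  # m behind the longwall face
print(roof_subsidence(x, 4.89))                # slow advance: more subsidence
print(roof_subsidence(x, 15.23))               # fast advance: less subsidence
```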

