Towards standardized processing of eddy covariance flux measurements of carbonyl sulfide

2020
Vol 13 (7)
pp. 3957-3975
Author(s):  
Kukka-Maaria Kohonen ◽  
Pasi Kolari ◽  
Linda M. J. Kooijmans ◽  
Huilin Chen ◽  
Ulli Seibt ◽  
...  

Abstract. Carbonyl sulfide (COS) flux measurements with the eddy covariance (EC) technique are becoming popular for estimating gross primary productivity. To compare COS flux measurements across sites, we need standardized protocols for data processing. In this study, we analyze how various data processing steps affect the calculated COS flux and how they differ from carbon dioxide (CO2) flux processing steps, and we provide a method for gap-filling COS fluxes. Different methods for determining the time lag between the COS mixing ratio and the vertical wind velocity (w) resulted in a maximum of 15.9 % difference in the median COS flux over the whole measurement period. Due to limited COS measurement precision, small COS fluxes (below approximately 3 pmol m−2 s−1) could not be detected when the time lag was determined from maximizing the covariance between COS and w. The difference between two high-frequency spectral corrections was 2.7 % in COS flux calculations, whereas omitting the high-frequency spectral correction resulted in a 14.2 % lower median flux, and different detrending methods caused a spread of 6.2 %. Relative total uncertainty was more than 5 times higher for low COS fluxes (lower than ±3 pmol m−2 s−1) than for low CO2 fluxes (lower than ±1.5 µmol m−2 s−1), indicating a low signal-to-noise ratio of COS fluxes. Because ecosystem COS and CO2 exchange behave similarly, we recommend applying the storage change flux correction and friction velocity filtering as in standard EC flux processing; however, because CO2 measurements have a higher signal-to-noise ratio than COS measurements, we recommend using CO2 data for the time lag and high-frequency corrections of COS fluxes.
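To make the time-lag step above concrete, here is a minimal sketch (not the authors' processing code) of determining the lag by maximizing the cross-covariance between the vertical wind velocity w and a scalar time series; the 10 Hz sampling rate, the ±5 s search window, and the function names are assumptions for illustration.

```python
import numpy as np

def covariance_lag(w, c, lag):
    """Cross-covariance of vertical wind w and scalar c at a given sample lag."""
    if lag > 0:
        w_seg, c_seg = w[:-lag], c[lag:]
    elif lag < 0:
        w_seg, c_seg = w[-lag:], c[:lag]
    else:
        w_seg, c_seg = w, c
    return np.mean((w_seg - w_seg.mean()) * (c_seg - c_seg.mean()))

def find_time_lag(w, c, fs=10.0, max_lag_s=5.0):
    """Return the lag (s) that maximizes |cov(w, c)| within +/- max_lag_s."""
    lags = np.arange(-int(max_lag_s * fs), int(max_lag_s * fs) + 1)
    covs = np.array([covariance_lag(w, c, int(lag)) for lag in lags])
    best = np.argmax(np.abs(covs))
    return lags[best] / fs, covs[best]

# For weak COS fluxes the covariance peak may be lost in noise; the abstract
# therefore recommends reusing the lag found for the higher signal-to-noise
# CO2 channel, e.g. lag_s, _ = find_time_lag(w, co2)  (hypothetical 10 Hz arrays).
```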

2019
Author(s):  
Kukka-Maaria Kohonen ◽  
Pasi Kolari ◽  
Linda M. J. Kooijmans ◽  
Huilin Chen ◽  
Ulli Seibt ◽  
...  

Abstract. Carbonyl sulfide (COS) flux measurements with the eddy covariance (EC) technique are growing in popularity following recent developments in using COS to estimate gross photosynthesis at the ecosystem scale. Flux data intercomparison would benefit from standardized protocols for COS flux data processing. In this study, we analyze how various data processing steps affect the final flux and provide a method for gap-filling COS fluxes. Different methods for determining the lag time between the COS mixing ratio and the vertical wind velocity (w) resulted in a maximum of 12 % difference in the cumulative COS flux. Due to limited COS measurement precision, small COS fluxes (below approximately 3 pmol m−2 s−1) could not be detected when the lag time was determined from maximizing the covariance between COS and w. We recommend using a combination of COS and carbon dioxide (CO2) lag times in determining the COS flux, depending on the flux magnitude compared to the detection limit of each averaging period. Different high-frequency spectral corrections had a maximum effect of 10 % on COS flux calculations, and different detrending methods only 1.2 %. Relative total uncertainty was more than five times higher for low COS fluxes (absolute flux lower than 3 pmol m−2 s−1) than for low CO2 fluxes (lower than 1.5 μmol m−2 s−1), indicating a low signal-to-noise ratio of COS fluxes. Due to similarities in ecosystem COS and CO2 exchange, and because the low signal-to-noise ratio of COS fluxes resembles that of methane, we recommend a combination of CO2 and methane flux processing protocols for COS EC fluxes.
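The recommendation to combine COS and CO2 lag times depending on flux magnitude could be sketched as follows, reusing covariance_lag and find_time_lag from the block above; the far-lag window and the 3σ detection-limit criterion are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def flux_detection_limit(w, c, fs=10.0, far_lag_s=(150.0, 180.0)):
    """Rough per-period detection limit: 3x the RMS of the cross-covariance far
    from the true lag, where only noise remains (common EC practice, assumed here)."""
    lags = np.arange(int(far_lag_s[0] * fs), int(far_lag_s[1] * fs))
    covs = [covariance_lag(w, c, int(l)) for l in np.concatenate([-lags, lags])]
    return 3.0 * np.std(covs)

def choose_cos_lag(w, cos, co2, fs=10.0):
    """Use the COS covariance maximum only when the COS flux is clearly above
    its detection limit; otherwise fall back to the CO2 lag (sketch)."""
    lag_cos, cov_cos = find_time_lag(w, cos, fs)
    lag_co2, _ = find_time_lag(w, co2, fs)
    return lag_cos if abs(cov_cos) > flux_detection_limit(w, cos, fs) else lag_co2
```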


2015
Vol 8 (10)
pp. 4197-4213
Author(s):  
B. Langford ◽  
W. Acton ◽  
C. Ammann ◽  
A. Valach ◽  
E. Nemitz

Abstract. All eddy-covariance flux measurements are associated with random uncertainties, which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we apply a consistent approach based on auto- and cross-covariance functions to quantify separately the total random flux error and the random error due to instrument noise. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low-frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
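A minimal sketch of the autocovariance-based separation described above: white instrument noise contributes only to the lag-0 autocovariance of the scalar, so extrapolating the autocovariance from small non-zero lags back to lag 0 isolates the noise variance. The fit window and the final error formula (assuming white noise uncorrelated with w) are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def autocovariance(x, max_lag):
    """Autocovariance of a detrended series x for lags 0..max_lag."""
    xp = x - x.mean()
    n = len(xp)
    return np.array([np.mean(xp[: n - k] * xp[k:]) for k in range(max_lag + 1)])

def noise_variance(c, fs=10.0, fit_lags_s=(0.1, 0.5)):
    """Estimate the white-noise variance of scalar c by extrapolating the
    autocovariance from small non-zero lags back to lag 0; white noise only
    contributes at lag 0, so the offset from the extrapolation is the noise."""
    k1, k2 = int(fit_lags_s[0] * fs), int(fit_lags_s[1] * fs)
    acov = autocovariance(c, k2)
    lags = np.arange(k1, k2 + 1)
    slope, intercept = np.polyfit(lags, acov[k1:], 1)   # linear extrapolation
    return max(acov[0] - intercept, 0.0)

def flux_error_from_noise(w, c, fs=10.0):
    """Random flux error contributed by instrument noise over one averaging
    period, assuming the noise is white and uncorrelated with w."""
    return np.sqrt(noise_variance(c, fs) * np.var(w) / len(c))
```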


2021
Vol 21 (10)
pp. 249
Author(s):  
Zhong-Rui Bai ◽  
Hao-Tong Zhang ◽  
Hai-Long Yuan ◽  
Dong-Wei Fan ◽  
Bo-Liang He ◽  
...  

Abstract LAMOST Data Release 5, covering ∼17 000 deg2 from –10° to 80° in declination, contains 9 million co-added low-resolution spectra of celestial objects, each spectrum combined from two to tens of repeat exposures taken between Oct 2011 and Jun 2017. In this paper, we present the spectra of the individual exposures for all the objects in LAMOST Data Release 5. For each spectrum, the equivalent widths of 60 lines from 11 different elements are calculated with a new method combining the actual line core and fitted line wings. For stars earlier than F type, the Balmer lines are fitted with both emission and absorption profiles when two components are detected. The radial velocity of each individual exposure is measured by minimizing χ² between the spectrum and its best-fitting template. The database of equivalent widths of spectral lines and radial velocities of individual spectra is available online. Radial velocity uncertainties for different stellar types and signal-to-noise ratios are quantified by comparing different exposures of the same objects. We notice that the radial velocity uncertainty depends on the time lag between observations. For stars observed on the same day with a signal-to-noise ratio higher than 20, the radial velocity uncertainty is below 5 km s−1, increasing to 10 km s−1 for stars observed on different nights.
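As an illustration of the χ² radial-velocity step (not the LAMOST pipeline itself), a simple grid search over Doppler shifts might look like this; the velocity grid, the linear interpolation, and the free amplitude scaling are assumptions.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def chi2_radial_velocity(wave, flux, err, t_wave, t_flux,
                         rv_grid=np.arange(-500.0, 500.0, 1.0)):
    """Find the radial velocity (km/s) minimizing chi^2 between an observed
    spectrum (wave, flux, err) and a Doppler-shifted template (t_wave, t_flux)."""
    chi2 = np.empty_like(rv_grid)
    for i, rv in enumerate(rv_grid):
        # Shift the template to velocity rv and resample onto the observed grid.
        shifted = np.interp(wave, t_wave * (1.0 + rv / C_KMS), t_flux)
        # Best-fitting amplitude of the template for this velocity.
        scale = np.sum(flux * shifted / err**2) / np.sum(shifted**2 / err**2)
        chi2[i] = np.sum(((flux - scale * shifted) / err) ** 2)
    return rv_grid[np.argmin(chi2)], chi2.min()
```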


In recent communication technologies, very high sampling rates are required for RF signals, particularly for signals in the ultra high frequency (UHF), super high frequency (SHF), and extremely high frequency (EHF) ranges. Applications include the global positioning system (GPS), satellite communication, radar, radio astronomy, and 5G mobile phones. Such high sampling rates can be achieved with time-interleaved analog-to-digital converters (TIADCs). However, sampling time offsets in TIADCs produce non-uniform samples, which hampers reconstruction of the signal. This paper addresses this drawback and offers a solution for improved signal reconstruction by estimating and correcting the offsets. A modified differential evolution (MDE) optimization algorithm is used to estimate the sampling time offsets, and the estimated offsets are then used for correction. The estimation algorithm is implemented on an FPGA board, and the correction is implemented in MATLAB. The power consumption of the FPGA implementation is 57 mW, and I/O utilization is 27% for 4-channel TIADCs and 13% for 2-channel TIADCs. For estimation, the algorithm uses a sinusoidal test signal, and it estimates the sampling time offsets precisely. Correction is performed with sinusoidal and speech signals as inputs to the TIADCs. The performance metrics used to evaluate the algorithm are SNR (signal-to-noise ratio), SNDR (signal-to-noise-and-distortion ratio), SFDR (spurious-free dynamic range), and PSNR (peak signal-to-noise ratio). A noteworthy improvement is observed in all of these metrics. Results are compared with existing state-of-the-art algorithms, confirming the advantage of the proposed algorithm.
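The offset-estimation idea can be illustrated with a toy two-channel TIADC model and SciPy's stock differential evolution as a stand-in for the paper's modified DE (which is not reproduced here); the tone frequency, sample rate, and 35 ps offset are made-up values, and the correction/reconstruction stage is omitted.

```python
import numpy as np
from scipy.optimize import differential_evolution

FS, F0, M, N = 1.0e9, 123.4e6, 2, 4096       # sample rate, test tone, channels, samples
TRUE_DT = np.array([0.0, 35e-12])            # per-channel timing offsets (illustrative)

def tiadc_samples(dt):
    """Samples of a sinusoidal test signal taken by an M-channel TIADC whose
    channels have the per-channel sampling-time offsets dt (seconds)."""
    n = np.arange(N)
    t = n / FS + dt[n % M]
    return np.sin(2 * np.pi * F0 * t)

measured = tiadc_samples(TRUE_DT)            # "captured" non-uniform samples

def cost(dt):
    """Mismatch between the measured samples and a TIADC model with offsets dt
    (channel 0 is taken as the timing reference)."""
    return np.sum((measured - tiadc_samples(np.concatenate(([0.0], dt)))) ** 2)

# Estimate the offsets of channels 1..M-1 with standard differential evolution.
result = differential_evolution(cost, bounds=[(-100e-12, 100e-12)] * (M - 1),
                                tol=1e-12, seed=0)
print("estimated offset(s):", result.x)      # should approach 35 ps for noiseless data
```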


2003
Vol 131 (8)
pp. 1715-1732
Author(s):  
Matthew Newman ◽  
Prashant D. Sardeshmukh ◽  
Christopher R. Winkler ◽  
Jeffrey S. Whitaker

Abstract The predictability of weekly averaged circulation anomalies in the Northern Hemisphere, and diabatic heating anomalies in the Tropics, is investigated in a linear inverse model (LIM) derived from their observed simultaneous and time-lag correlation statistics. In both winter and summer, the model's forecast skill at week 2 (days 8–14) and week 3 (days 15–21) is comparable to that of a comprehensive global medium-range forecast (MRF) model developed at the National Centers for Environmental Prediction (NCEP). Its skill at week 3 is actually higher on average, partly due to its better ability to forecast tropical heating variations and their influence on the extratropical circulation. The geographical and temporal variations of forecast skill are also similar in the two models. This makes the much simpler LIM an attractive tool for assessing and diagnosing atmospheric predictability at these forecast ranges. The LIM assumes that the dynamics of weekly averages are linear, asymptotically stable, and stochastically forced. In a forecasting context, the predictable signal is associated with the deterministic linear dynamics, and the forecast error with the unpredictable stochastic noise. In a low-order linear model of a high-order chaotic system, this stochastic noise represents the effects of both chaotic nonlinear interactions and unresolved initial components on the evolution of the resolved components. Its statistics are assumed here to be state independent. An average signal-to-noise ratio is estimated at each grid point on the hemisphere and is then used to estimate the potential predictability of weekly variations at the point. In general, this predictability is about 50% higher in winter than summer over the Pacific and North America sectors; the situation is reversed over Eurasia and North Africa. Skill in predicting tropical heating variations is important for realizing this potential skill. The actual LIM forecast skill has a similar geographical structure but weaker magnitude than the potential skill. In this framework, the predictable variations of forecast skill from case to case are associated with predictable variations of signal rather than of noise. This contrasts with the traditional emphasis in studies of shorter-term predictability on flow-dependent instabilities, that is, on the predictable variations of noise. In the LIM, the predictable variations of signal are associated with variations of the initial state projection on the growing singular vectors of the LIM's propagator, which have relatively large amplitude in the Tropics. At times of strong projection on such structures, the signal-to-noise ratio is relatively high, and the Northern Hemispheric circulation is not only potentially but also actually more predictable than at other times.
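The core LIM construction described above reduces to estimating a propagator from simultaneous and lag-covariance statistics; the following minimal sketch shows that step only (the state vector and training lag are placeholders, and the stochastic forcing and skill diagnostics are omitted).

```python
import numpy as np

def fit_lim_propagator(X, lag):
    """Estimate the LIM propagator G(lag) = C(lag) C(0)^(-1) from a data matrix
    X of shape (time, state), following the standard LIM construction."""
    X = X - X.mean(axis=0)
    C0 = X.T @ X / len(X)                          # simultaneous covariance C(0)
    Ctau = X[lag:].T @ X[:-lag] / (len(X) - lag)   # lag covariance C(lag)
    return Ctau @ np.linalg.inv(C0)

def lim_forecast(G, x0, n_steps=1):
    """Deterministic-signal forecast: x(t + n*lag) = G**n @ x(t)."""
    return np.linalg.matrix_power(G, n_steps) @ x0
```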


2021
Author(s):  
Xuegang Su

We are investigating the feasibility of binary coded excitation methods using Golay code pairs for high-frequency ultrasound imaging as a way to increase the signal-to-noise ratio. I present some theoretical models used to simulate the coded excitation method and results generated from the models. A new coded excitation high-frequency ultrasound prototype system was built to verify the simulation results. Both the simulation and the experimental results show that binary coded excitation can improve the signal-to-noise ratio in high-frequency ultrasound backscatter signals. These results are confirmed in phantoms and excised bovine liver. If only white noise is considered, the encoding gain is 15 dB for a Golay pair of length 4. We find the system to be very sensitive to motion (i.e., phase shift) and to frequency-dependent (FD) attenuation, which create sidelobes and degrade axial resolution and encoding gain. Methods to address these issues are discussed.
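For reference, the pulse-compression property that motivates Golay coded excitation can be demonstrated in a few lines for an ideal point target (a hedged illustration, not the thesis's simulation model or prototype): the summed correlations of the two complementary codes yield a single peak of amplitude 2N with zero range sidelobes.

```python
import numpy as np

def golay_pair(n_bits):
    """Recursively build a Golay complementary pair of length 2**n_bits."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_bits):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def golay_compress(echo_a, echo_b, a, b):
    """Pulse compression: sum the cross-correlations of the two received echoes
    with their transmit codes; the range sidelobes of the pair cancel exactly."""
    return (np.correlate(echo_a, a, mode="full") +
            np.correlate(echo_b, b, mode="full"))

# Ideal point target: the received echoes are just the codes themselves.
a, b = golay_pair(2)                     # length-4 pair, as in the abstract
print(golay_compress(a, b, a, b))        # single peak of 2*4 = 8, zero sidelobes
```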

