Surface Pressure Tide Climatologies Deduced from a Quality-Controlled Network of Barometric Observations*

2014, Vol 142 (12), pp. 4872–4889
Author(s): Michael Schindelegger, Richard D. Ray

Abstract Global “ground truth” knowledge of solar diurnal S1 and semidiurnal S2 surface pressure tides as furnished by barometric in situ observations represents a valuable standard for wide-ranging geophysical and meteorological applications. This study attempts to aid validations of the air pressure tide signature in current climate or atmospheric analysis models by developing a new global assembly of nearly 6900 mean annual S1 and S2 estimates on the basis of station and marine barometric reports from the International Surface Pressure Databank, version 2 (ISPDv2), for a principal time span of 1990–2010. Previously published tidal compilations have been limited by inadequate spatial coverage or by internal inconsistencies and outliers from suspect tidal analyses; here, these problems are mostly overcome through 1) automated data filtering under ISPDv2’s quality-control framework and 2) a meticulously conducted visual inspection of station harmonic decompositions. The quality of the resulting compilation is sufficient to support global interpolation onto a reasonably fine mesh of 1° horizontal spacing. A multiquadric interpolation algorithm, with parameters fine-tuned by frequency and for land or ocean regions, is employed. Global charts of the gridded surface pressure climatologies are presented, and these are mapped to a wavenumber versus latitude spectrum for comparison with long-term means of S1 and S2 from four present-day atmospheric analysis systems. This cross verification, shown to be feasible even for the minor stationary modes of the tides, reveals a small but probably significant overestimation of up to 18% for peak semidiurnal amplitudes as predicted by global analysis models.
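The multiquadric interpolation named above can be sketched generically as follows. This is a plain Hardy-multiquadric interpolator, not the authors' tuned implementation; the function name, the toy station layout, and the shape parameter `c` (which the paper tunes per frequency and per land/ocean region) are illustrative assumptions.

```python
import numpy as np

def multiquadric_interp(xy_obs, values, xy_grid, c=1.0):
    """Interpolate scattered data with Hardy multiquadric radial basis
    functions phi(r) = sqrt(r**2 + c**2); `c` is the shape parameter."""
    # Basis matrix between observation points, then solve for weights
    d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = np.linalg.solve(np.sqrt(d_oo**2 + c**2), values)
    # Evaluate the weighted basis functions at the target points
    d_go = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
    return np.sqrt(d_go**2 + c**2) @ w

# Toy usage: four "stations" at the corners of a unit square
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
amp = np.array([1.0, 2.0, 2.0, 3.0])
grid = np.array([[0.5, 0.5]])
est = multiquadric_interp(pts, amp, grid)  # interpolated value at the node
```

Evaluated back at the station locations, the interpolant reproduces the input values exactly, which is the defining property exploited when fitting a gridded climatology through quality-controlled point estimates.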

2014, Vol 11 (13), pp. 3547–3602
Author(s): P. Ciais, A. J. Dolman, A. Bombelli, R. Duren, A. Peregon, ...

Abstract. A globally integrated carbon observation and analysis system is needed to improve the fundamental understanding of the global carbon cycle, to improve our ability to project future changes, and to verify the effectiveness of policies aiming to reduce greenhouse gas emissions and increase carbon sequestration. Building an integrated carbon observation system requires transformational advances from the existing sparse, exploratory framework towards a dense, robust, and sustained system in all components: anthropogenic emissions, the atmosphere, the ocean, and the terrestrial biosphere. The paper is addressed to scientists, policymakers, and funding agencies who need a global picture of the current state of the (diverse) carbon observations. We identify the current state of carbon observations, and the needs and notional requirements for a global integrated carbon observation system that can be built in the next decade. A key conclusion is that a substantial expansion of the ground-based observation networks is required to reach the high spatial resolution needed for CO2 and CH4 fluxes and for carbon stocks, in order to address policy-relevant objectives and attribute flux changes to underlying processes in each region. To establish flux and stock diagnostics over areas such as the southern oceans, tropical forests, and the Arctic, in situ observations will have to be complemented with remote-sensing measurements. Remote sensing offers the advantage of dense spatial coverage and frequent revisits. A key challenge is to bring remote-sensing measurements to a level of long-term consistency and accuracy so that they can be efficiently combined in models to reduce uncertainties, in synergy with ground-based data. Placing tight observational constraints on fossil fuel and land-use-change emissions will be the biggest challenge for deployment of a policy-relevant integrated carbon observation system. This will require in situ and remotely sensed data at much higher resolution and density than currently achieved for natural fluxes, albeit over small land areas (cities, industrial sites, power plants), as well as the inclusion of fossil fuel CO2 proxy measurements such as radiocarbon in CO2 and carbon-fuel combustion tracers. A policy-relevant carbon monitoring system should also provide mechanisms for reconciling regional top-down (atmosphere-based) and bottom-up (surface-based) flux estimates across the range of spatial and temporal scales relevant to mitigation policies, and the uncertainties of each observation data stream should be assessed. The success of the system will rely on long-term commitments to monitoring, on improved international collaboration to fill gaps in the current observations, on sustained efforts to improve access to the different data streams and make databases interoperable, and on the calibration of each component of the system to agreed-upon international scales.


2020, Vol 12 (16), pp. 2642
Author(s): Stelios Mertikas, Achilleas Tripolitsiotis, Craig Donlon, Constantin Mavrocordatos, Pierre Féménias, ...

This work presents the latest calibration results for the Copernicus Sentinel-3A and -3B and the Jason-3 radar altimeters as determined by the Permanent Facility for Altimetry Calibration (PFAC) in west Crete, Greece. Radar altimeters provide operational measurements of sea surface height, significant wave height, and wind speed over the oceans. To maintain Fiducial Reference Measurement (FRM) status, the stability and quality of altimetry products need to be continuously monitored throughout the operational phase of each altimeter. External and independent calibration and validation facilities provide an objective assessment of an altimeter’s performance by comparing satellite observations against ground-truth and in-situ measurements and infrastructures. Three independent methods are employed at the PFAC: range calibration using a transponder, sea-surface calibration relying upon sea-surface Cal/Val sites, and crossover analysis. Procedures to determine FRM uncertainties for Cal/Val results have been demonstrated for each calibration method. Biases for Sentinel-3A Passes No. 14, 278 and 335, Sentinel-3B Passes No. 14, 71 and 335, as well as for Jason-3 Passes No. 18 and No. 109 are given. Calibration results obtained with the various techniques, infrastructures, and settings are presented. Finally, upgrades to the PFAC in support of the Copernicus Sentinel-6 ‘Michael Freilich’ mission, due to launch in November 2020, are summarized.


2009, Vol 66 (7), pp. 1467–1479
Author(s): Sarah L. Hughes, N. Penny Holliday, Eugene Colbourne, Vladimir Ozhigin, Hedinn Valdimarsson, ...

Abstract Hughes, S. L., Holliday, N. P., Colbourne, E., Ozhigin, V., Valdimarsson, H., Østerhus, S., and Wiltshire, K. 2009. Comparison of in situ time-series of temperature with gridded sea surface temperature datasets in the North Atlantic. – ICES Journal of Marine Science, 66: 1467–1479. Analysis of the effects of climate variability and climate change on the marine ecosystem is difficult in regions where long-term observations of ocean temperature are sparse or unavailable. Gridded sea surface temperature (SST) products, based on a combination of satellite and in situ observations, can be used to examine variability and long-term trends because they provide better spatial coverage than the limited sets of long in situ time-series. SST data from three gridded products (Reynolds/NCEP OISST.v2., Reynolds ERSST.v3, and the Hadley Centre HadISST1) are compared with long time-series of in situ measurements from ICES standard sections in the North Atlantic and Nordic Seas. The variability and trends derived from the two data sources are examined, and the usefulness of the products as a proxy for subsurface conditions is discussed.
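At its simplest, the trend comparison described above reduces to fitting least-squares linear trends to collocated time series from each source. A minimal sketch (the helper name and synthetic data are assumptions for illustration, not from the paper, and a full ICES-style comparison would first collocate each standard section with the nearest grid cell of the SST product):

```python
import numpy as np

def trend_per_decade(t_years, series):
    """Least-squares linear trend of a time series, expressed per decade."""
    slope = np.polyfit(t_years, series, 1)[0]
    return 10.0 * slope

# Synthetic annual SSTs warming at 0.02 degC/yr, i.e. 0.2 degC/decade
t = np.arange(1980.0, 2009.0)
sst = 8.0 + 0.02 * (t - 1980.0)
trend = trend_per_decade(t, sst)
```

Comparing `trend` computed from an in situ section with the same quantity from the nearest grid cell of each gridded product is one way to quantify how well the products reproduce observed long-term change.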


2020, Vol 21 (4), pp. 791–805
Author(s): Yuanyuan Wang, Zhaojun Zheng

Abstract Triple collocation (TC) is a popular technique for determining the data quality of three products that estimate the same geophysical variable using mutually independent methods. When TC is applied to a triplet of one point-scale in situ dataset and two coarse-scale datasets of similar spatial resolution, the TC-derived performance metric for the point-scale dataset can be used to assess its spatial representativeness. In this study, the spatial representativeness of in situ snow depth measurements from meteorological stations in northeast China was assessed using an unbiased squared correlation metric, here denoted ρ², estimated with TC. Stations are considered representative if ρ² ≥ 0.5; that is, in situ measurements explain no less than 50% of the variations in the “ground truth” of the snow depth averaged at the coarse scale (0.25°). The results confirmed that TC can be used to reliably exploit existing sparse snow depth networks. The main findings are as follows. 1) Among all 98 stations in the study region, 86 stations have valid ρ² values, of which 57 stations are representative for the entire snow season (October–December, January–April). 2) Seasonal variations in ρ² are large: 63 stations are representative during the snow accumulation period (December–February), whereas only 25 stations are representative during the snow ablation period (October–November, March–April). 3) ρ² is positively correlated with mean snow depth, which largely determines the global decreasing trend in ρ² from north to south. After removing this trend, residuals in ρ² can be explained by heterogeneity features concerning elevation and the conditional probability of snow presence near the stations.
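The TC-derived correlation metric can be estimated from the sample covariances of the triplet. The sketch below follows the extended triple collocation formulation of McColl et al. (2014) rather than the authors' exact code, and the synthetic triplet is purely illustrative; the key assumptions are mutually independent, zero-mean errors and a common linear scaling against the unknown truth.

```python
import numpy as np

def tc_squared_correlation(x, y, z):
    """Extended triple collocation: squared correlation of each product
    with the unknown truth, from the 3x3 sample covariance matrix Q."""
    Q = np.cov(np.vstack([x, y, z]))
    r2_x = (Q[0, 1] * Q[0, 2]) / (Q[0, 0] * Q[1, 2])
    r2_y = (Q[0, 1] * Q[1, 2]) / (Q[1, 1] * Q[0, 2])
    r2_z = (Q[0, 2] * Q[1, 2]) / (Q[2, 2] * Q[0, 1])
    return r2_x, r2_y, r2_z

# Synthetic triplet: a common signal plus independent noise of
# increasing magnitude (so product x should score highest)
rng = np.random.default_rng(0)
truth = rng.standard_normal(10000)
x = truth + 0.3 * rng.standard_normal(10000)
y = truth + 0.6 * rng.standard_normal(10000)
z = truth + 0.9 * rng.standard_normal(10000)
r2 = tc_squared_correlation(x, y, z)
```

For the station assessment above, `x` would play the role of the point-scale in situ record and the threshold test is simply `r2[0] >= 0.5`.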


1997, Vol 16 (7), pp. 631–660
Author(s): B. Varughese, A. Mukherjee

A global-local approach for the analysis of tapered laminated composites is presented. The method is considerably more economical than existing techniques. A new drop-off element, degenerated from a 2-D drop-off element, is introduced to carry out the global analysis. The drop-off element accommodates the termination of plies within the element, thereby reducing the size of the stiffness matrix. In the vicinity of the drop-off, where the stress concentration is high, a local analysis with a refined mesh is performed, driven by the results of the global analysis. Since a fine mesh is needed only near the drop-off, there is substantial economy in computational effort and time compared with conventional analysis models. The present method has been validated extensively against published results, and stress/strain distributions at different locations are presented and discussed.


2014, Vol 7 (10), pp. 10513–10558
Author(s): S. Barthlott, M. Schneider, F. Hase, A. Wiegele, E. Christner, ...

Abstract. Within the NDACC (Network for the Detection of Atmospheric Composition Change), more than 20 FTIR (Fourier-Transform InfraRed) spectrometers, spread worldwide, provide long-term data records of many atmospheric trace gases. We present a method that uses measured and modelled XCO2 for assessing the consistency of these data records. Our NDACC XCO2 retrieval setup is kept simple so that it can easily be adopted for any NDACC/FTIR-like measurement made since the late 1950s. By comparison to coincident TCCON (Total Carbon Column Observing Network) measurements, we empirically demonstrate the useful quality of this NDACC XCO2 product (the empirically obtained scatter between TCCON and NDACC is about 4‰ for daily-mean as well as monthly-mean comparisons, and the bias is 25‰). As the XCO2 model, we developed and used a simple regression model fitted to CarbonTracker results and the Mauna Loa CO2 in-situ records. A comparison to TCCON data suggests an uncertainty of the model for monthly mean data of below 3‰. We apply the method to the NDACC/FTIR spectra that are used within the project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water) and demonstrate good consistency for this globally representative set of spectra measured since 1996: the scatter between the modelled and measured XCO2 on a yearly time scale is only 3‰.
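A regression model of the kind described, secular growth plus seasonal harmonics fitted by least squares, can be sketched as below. The choice of two harmonics and the synthetic record are assumptions for illustration; the paper's actual model is fitted to CarbonTracker output and the Mauna Loa record.

```python
import numpy as np

def fit_xco2_model(t_years, xco2):
    """Least-squares fit of a simple XCO2 regression model: offset,
    linear growth, and annual plus semi-annual harmonics."""
    w = 2.0 * np.pi * t_years
    A = np.column_stack([
        np.ones_like(t_years), t_years,   # offset and secular growth
        np.cos(w), np.sin(w),             # annual cycle
        np.cos(2 * w), np.sin(2 * w),     # semi-annual cycle
    ])
    coeffs, *_ = np.linalg.lstsq(A, xco2, rcond=None)
    return A, coeffs

# Synthetic monthly record: 2 ppm/yr growth plus a 3 ppm seasonal cycle
t = np.arange(0.0, 10.0, 1.0 / 12.0)
obs = 380.0 + 2.0 * t + 3.0 * np.sin(2.0 * np.pi * t)
A, c = fit_xco2_model(t, obs)
residual = obs - A @ c
```

The scatter of `residual` against an independent measurement series is the kind of consistency diagnostic the method uses.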


2020, Vol 10 (3), pp. 760
Author(s): Dongqi Zhang, Jie Yu, Hui Li, Xin Zhou, Changhui Song, ...

Selective laser melting (SLM) is a layer-by-layer process of melting and solidifying metal powders, in which the surface quality of the previous layer directly affects the uniformity of the next layer. If the surface roughness of the previous layer is too high, the layering process may fail to complete and the entire build has to be abandoned; at the very least, high roughness causes long-term durability problems and an inhomogeneity that can make the behavior of the processed structure unpredictable. In the present study, the ability of a fiber laser to polish, in situ, the rough surfaces of four typical additive-manufactured alloys, namely Ti6Al4V, AlSi10Mg, 316L, and IN718, was demonstrated. The results revealed that the surface roughness of the as-received alloys, initially 8.80–16.64 μm, could be reduced to about 3 μm through the laser-polishing process. Meanwhile, for a given energy density, a higher laser power generally produced a more pronounced polishing effect, with the surface roughness decreasing as the laser power increased. The polishing strategy will be further optimized by simulation in a follow-up study.
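The "given energy density" comparison above is often expressed through an areal energy density E = P / (v · h), laser power over scan speed times hatch spacing. This definition and the numbers below are illustrative assumptions; the paper may use a different (e.g. volumetric) variant.

```python
def area_energy_density(power_w, scan_speed_mm_s, hatch_mm):
    """Areal laser energy density E = P / (v * h) in J/mm^2
    (a commonly used definition, assumed here for illustration)."""
    return power_w / (scan_speed_mm_s * hatch_mm)

# Two hypothetical parameter sets with equal energy density but
# different laser power, the comparison made in the study
e1 = area_energy_density(200.0, 500.0, 0.08)   # lower power, slower scan
e2 = area_energy_density(400.0, 1000.0, 0.08)  # higher power, faster scan
```

Holding `e1 == e2` while raising the power isolates the effect of laser power on the polishing result.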


Author(s): Ana Roza Llera, Amalia Jimenez, Lurdes Fernández-Díaz

Anthropogenic lead pollution is an environmental problem that threatens the quality of soils and waters and endangers living organisms in numerous surface and subsurface habitats. Lead coprecipitation on mineral surfaces through dissolution-recrystallization processes has long-term effects on lead bioavailability. Gypsum and calcite are among the most abundant and reactive rock-forming minerals present in numerous geological settings. In this work, we study the interaction of slightly acidic (pHi = 5.5) Pb-bearing aqueous solutions ([Pb]i = 1 mM and 10 mM) with crystals of gypsum and/or calcite under atmospheric conditions. This interaction reduces the concentration of lead in the liquid phase through the precipitation of newly formed Pb-bearing solid phases. The extent of this Pb removal mainly depends on the nature of the primary mineral phase involved in the interaction. Thus, when gypsum is the only solid phase initially present in the system, the interaction of the Pb-bearing liquid with gypsum removes 98–99.8% of the Pb, regardless of [Pb]i. In contrast, when the interaction takes place with calcite, Pb removal strongly depends on [Pb]i: it reaches 99% when [Pb]i = 1 mM but is much more modest (~13%) when [Pb]i = 10 mM. Interestingly, Pb removal is maximized for both [Pb]i (99.9% for solutions with [Pb]i = 10 mM and 99.7% for solutions with [Pb]i = 1 mM) when Pb-polluted solutions interact simultaneously with gypsum and calcite crystals. Despite the large Pb removals found in most of the cases studied, the final Pb concentration ([Pb]f) in the liquid phase is always well above the maximum permitted in drinking water (0.1 ppm), with the minimum ([Pb]f = 0.7 ppm) obtained for solutions with [Pb]i = 1 mM after interaction with mixtures of gypsum and calcite crystals. This result suggests that using mixtures of gypsum and calcite crystals might help to develop more efficient strategies for in-situ decontamination of Pb-polluted waters through mineral coprecipitation processes.

