Mesoscale ocean variability signal recovered from altimeter data in the SW Atlantic Ocean: a comparison of orbit error correction in three Geosat data sets

1995 ◽  
Vol 43 (2) ◽  
pp. 101-110
Author(s):  
Gustavo Goni ◽  
Guillermo Podesta ◽  
Otis Brown ◽  
James Brown

Orbit error is one of the largest sources of uncertainty in studies of ocean dynamics using satellite altimeters. The sensitivity of GEOSAT mesoscale ocean variability estimates to altimeter orbit precision in the SW Atlantic is analyzed using three GEOSAT data sets derived from different orbit estimation methods: (a) the original GDR data set, which has the lowest orbit precision, (b) the GEM-T2 set, constructed from a much more precise orbital model, and (c) the Sirkes-Wunsch data set, derived from additional spectral analysis of the GEM-T2 data set. Differences among the data sets are investigated for two tracks in dynamically dissimilar regimes of the Southwestern Atlantic Ocean, by comparing: (a) distinctive features of the average power density spectra of the sea height residuals and (b) space-time diagrams of sea height residuals. The variability estimates produced by the three data sets are extremely similar in both regimes after removal of the time-dependent component of the orbit error using a quadratic fit. Our results indicate that altimeter orbit precision with appropriate processing plays only a minor role in studies of mesoscale ocean variability.
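
A minimal sketch of the quadratic detrending step described above, assuming along-track arrays of time and sea height residuals (the variable names and the synthetic track are illustrative, not the authors' processing code):

```python
import numpy as np

def remove_orbit_error(t, ssh, deg=2):
    """Remove the slowly varying, time-dependent orbit-error component
    from sea height residuals by subtracting a least-squares quadratic
    fit along the track (deg=2, as in the abstract)."""
    coeffs = np.polyfit(t, ssh, deg)   # fit ssh ~ a*t^2 + b*t + c
    return ssh - np.polyval(coeffs, t)

# Synthetic example: a long-wavelength quadratic "orbit error" on top of
# a 0.1 m mesoscale signal; the fit removes the long-wavelength part.
t = np.linspace(0.0, 600.0, 500)                 # along-track time (s)
orbit_error = 0.5e-6 * (t - 300.0) ** 2          # ~0.05 m peak error
mesoscale = 0.1 * np.sin(2 * np.pi * t / 120.0)  # mesoscale signal (m)
residuals = remove_orbit_error(t, orbit_error + mesoscale)
```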

2021 ◽  
Author(s):  
David Cotton ◽  

Introduction

HYDROCOASTAL is a two-year project funded by ESA, with the objective to maximise exploitation of SAR and SARin altimeter measurements in the coastal zone and inland waters, by evaluating and implementing new approaches to process SAR and SARin data from CryoSat-2, and SAR altimeter data from Sentinel-3A and Sentinel-3B. Optical data from the Sentinel-2 MSI and Sentinel-3 OLCI instruments will also be used in generating River Discharge products.

New SAR and SARin processing algorithms for the coastal zone and inland waters will be developed, implemented, and evaluated through an initial Test Data Set for selected regions. From the results of this evaluation, a processing scheme will be implemented to generate global coastal zone and river discharge data sets.

A series of case studies will assess these products in terms of their scientific impacts.

All the produced data sets will be available on request to external researchers, and full descriptions of the processing algorithms will be provided.

Objectives

The scientific objectives of HYDROCOASTAL are to enhance our understanding of interactions between inland waters and the coastal zone, between the coastal zone and the open ocean, and of the small-scale processes that govern these interactions. The project also aims to improve our capability to characterise the variation, at different time scales, of inland water storage, exchanges with the ocean, and the impact on regional sea-level changes.

The technical objectives are to develop and evaluate new SAR and SARin altimetry processing techniques in support of the scientific objectives, including stack processing, filtering, and retracking. An improved Wet Troposphere Correction will also be developed and evaluated.

Project Outline

The project comprises four tasks:
- Scientific Review and Requirements Consolidation: review the current state of the art in SAR and SARin altimeter data processing as applied to the coastal zone and to inland waters.
- Implementation and Validation: new processing algorithms will be implemented to generate a Test Data Set, which will be validated against models, in situ data, and other satellite data sets. Selected algorithms will then be used to generate global coastal zone and river discharge data sets.
- Impacts Assessment: the impact of these global products will be assessed in a series of case studies.
- Outreach and Roadmap: outreach material will be prepared and distributed to engage the wider scientific community and to provide recommendations for the development of future missions and future research.

Presentation

The presentation will provide an overview of the project, present the different SAR altimeter processing algorithms being evaluated in the first phase of the project, and show early results from the evaluation of the initial test data set.
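
The abstract names retracking among the techniques under evaluation. As a concrete illustration only, here is a minimal sketch of the classic OCOG (Offset Centre Of Gravity) retracker, one of the simplest empirical retrackers used over coastal and inland waters; the algorithms actually developed in HYDROCOASTAL are not specified here and may differ substantially:

```python
import numpy as np

def ocog_retrack(power):
    """OCOG retracker: estimate the leading-edge gate of an altimeter
    waveform from moments of the squared samples (Wingham et al., 1986).

    power : 1-D numpy array of waveform samples (linear power units).
    """
    p2 = power.astype(float) ** 2
    gates = np.arange(power.size)
    cog = np.sum(gates * p2) / np.sum(p2)        # centre of gravity
    width = np.sum(p2) ** 2 / np.sum(p2 ** 2)    # effective pulse width
    return cog - width / 2.0                     # leading-edge gate

# Synthetic waveform: noise floor, sharp leading edge at gate 40,
# exponentially decaying trailing edge.
g = np.arange(128)
wf = 0.05 + (g >= 40) * np.exp(-(g - 40) / 60.0)
print(ocog_retrack(wf))   # close to the leading edge near gate 40
```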


2017 ◽  
Vol 9 (1) ◽  
pp. 211-220 ◽  
Author(s):  
Amelie Driemel ◽  
Eberhard Fahrbach ◽  
Gerd Rohardt ◽  
Agnieszka Beszczynska-Möller ◽  
Antje Boetius ◽  
...  

Abstract. Measuring temperature and salinity profiles in the world's oceans is crucial to understanding ocean dynamics and its influence on the heat budget, the water cycle, the marine environment and on our climate. Since 1983 the German research vessel and icebreaker Polarstern has been the platform of numerous CTD (conductivity, temperature, depth instrument) deployments in the Arctic and the Antarctic. We report on a unique data collection spanning 33 years of polar CTD data. In total 131 data sets (1 data set per cruise leg) containing data from 10 063 CTD casts are now freely available at doi:10.1594/PANGAEA.860066. During this long period five CTD types with different characteristics and accuracies have been used. Therefore, the instruments and processing procedures (sensor calibration, data validation, etc.) are described in detail. This compilation is special not only with regard to the quantity but also the quality of the data, the latter indicated for each data set using defined quality codes. The complete data collection includes a number of repeated sections for which the quality code can be used to investigate and evaluate long-term changes. Beginning in 2010, the salinity measurements presented here are of the highest quality possible in this field owing to the introduction of the OPTIMARE Precision Salinometer.
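
A hedged sketch of how the per-data-set quality codes might be used when working with such a compilation; the file name, column names, and code values below are placeholders, not the published PANGAEA scheme:

```python
import pandas as pd

# Hypothetical layout: a PANGAEA-style tab-delimited CTD file with
# pressure, temperature, salinity and a quality-code column. The real
# files behind doi:10.1594/PANGAEA.860066 have per-cruise headers;
# adjust the names below to the actual ones.
casts = pd.read_csv("polarstern_ctd_leg.tab", sep="\t", comment="#")

# Keep only data whose (placeholder) quality code marks validated values.
good = casts[casts["quality_code"] <= 2]

# Mean salinity profile on 10 dbar bins, e.g. for a repeated section.
bins = (good["press_dbar"] // 10) * 10
profile = good.groupby(bins)["sal_psu"].mean()
```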


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. M41-M48 ◽  
Author(s):  
Hongwei Liu ◽  
Mustafa Naser Al-Ali

The ideal approach for continuous reservoir monitoring allows generation of fast and accurate images to cope with the massive data sets acquired for such a task. Conventionally, rigorous depth-oriented velocity-estimation methods are performed to produce sufficiently accurate velocity models. Unlike the traditional way, the target-oriented imaging technology based on the common-focus point (CFP) theory can be an alternative for continuous reservoir monitoring. The solution is based on a robust data-driven iterative operator updating strategy without deriving a detailed velocity model. The same focusing operator is applied to successive 3D seismic data sets for the first time to generate efficient and accurate 4D target-oriented seismic stacked images from time-lapse field seismic data sets acquired in a CO₂ injection project in Saudi Arabia. Using the focusing operator, target-oriented prestack angle-domain common-image gathers (ADCIGs) could be derived to perform amplitude-versus-angle analysis. To preserve the amplitude information in the ADCIGs, an amplitude-balancing factor is applied by embedding a synthetic data set using the real acquisition geometry to remove the geometry imprint artifact. Applying the CFP-based target-oriented imaging to time-lapse data sets revealed changes at the reservoir level in the poststack and prestack time-lapse signals, consistent with the CO₂ injection history and rock physics.
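
One plausible reading of the amplitude-balancing step, sketched in numpy: a synthetic data set is imaged with the real acquisition geometry over a uniform target, so its amplitude envelope encodes only the geometry imprint, and dividing the field gathers by it balances the amplitudes. Function and variable names are mine, not the authors':

```python
import numpy as np

def geometry_balance(field_adcig, synth_adcig, eps=1e-6):
    """Remove the acquisition-geometry imprint from angle gathers.

    field_adcig, synth_adcig : arrays of shape (depth, angle); the
    synthetic gather is imaged from data modeled with the real
    acquisition geometry and a uniform reflectivity, so its amplitude
    variation is (assumed to be) pure geometry imprint.
    """
    envelope = np.abs(synth_adcig)
    factor = envelope.max() / np.maximum(envelope, eps)  # balancing factor
    return field_adcig * factor
```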


2013 ◽  
Vol 3 (4) ◽  
pp. 61-83 ◽  
Author(s):  
Eleftherios Tiakas ◽  
Apostolos N. Papadopoulos ◽  
Yannis Manolopoulos

In recent years there has been increasing interest in query processing techniques that take into consideration the dominance relationship between items to select the most promising ones, based on user preferences. Skyline and top-k dominating queries are examples of such techniques. A skyline query computes the items that are not dominated, whereas a top-k dominating query returns the k items with the highest domination score. To enable query optimization, it is important to estimate the expected number of skyline items as well as the maximum domination value of an item. In this article, the authors provide an estimation for the maximum domination value under the distinct-values and attribute-independence assumptions. The authors provide three different methodologies for estimating and calculating the maximum domination value, and they test their performance and accuracy. Among the proposed estimation methods, their Estimation with Roots method outperforms all others and returns the most accurate results. They also introduce the eliminating dimension, i.e., the dimension beyond which all domination values become zero, and provide an efficient estimation of that dimension. Moreover, the authors provide an accurate estimation of the skyline cardinality of a data set.
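
Both query types are easy to state precisely; a brute-force Python sketch (using a min-is-better dominance convention; the paper's estimation formulas are not reproduced here):

```python
def dominates(a, b):
    """a dominates b: no worse in every dimension (smaller is better)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(items):
    """Items not dominated by any other item (O(n^2) reference version)."""
    return [p for p in items if not any(dominates(q, p) for q in items if q != p)]

def top_k_dominating(items, k):
    """The k items with the highest domination score, i.e. the number
    of other items they dominate."""
    scored = [(sum(dominates(p, q) for q in items if q != p), p) for p in items]
    return sorted(scored, reverse=True)[:k]

pts = [(1, 4), (2, 2), (3, 1), (4, 4), (5, 5)]
print(skyline(pts))              # [(1, 4), (2, 2), (3, 1)]
print(top_k_dominating(pts, 2))  # [(2, (3, 1)), (2, (2, 2))]
```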


Endocrinology ◽  
2019 ◽  
Vol 160 (10) ◽  
pp. 2395-2400 ◽  
Author(s):  
David J Handelsman ◽  
Lam P Ly

Abstract Hormone assay results below the assay detection limit (DL) can introduce bias into quantitative analysis. Although complex maximum likelihood estimation methods exist, they are not widely used, whereas simple substitution methods are often used ad hoc to replace the undetectable (UD) results with numeric values to facilitate data analysis with the full data set. However, the bias of substitution methods for steroid measurements is not reported. Using a large data set (n = 2896) of serum testosterone (T), DHT, and estradiol (E2) concentrations from healthy men, we created modified data sets with increasing proportions of UD samples (≤40%) to which we applied five different substitution methods (deleting UD samples as missing and substituting UD samples with DL, DL/√2, DL/2, or 0) to calculate univariate descriptive statistics (mean, SD) or bivariate correlations. For all three steroids and for univariate as well as bivariate statistics, bias increased progressively with increasing proportion of UD samples. Bias was worst when UD samples were deleted or substituted with 0 and least when UD samples were substituted with DL/√2, whereas the other methods (DL or DL/2) displayed intermediate bias. Similar findings were replicated in randomly drawn small subsets of n = 25, 50, and 100. Hence, we propose that in steroid hormone data with ≤40% UD samples, substituting UD with DL/√2 is a simple, versatile, and reasonably accurate method to minimize left censoring bias, allowing for data analysis with the full data set.
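
A minimal sketch of the five substitution methods compared above, applied to synthetic left-censored data (the distribution parameters are illustrative, not the study's):

```python
import numpy as np

def substitute_ud(values, dl, method):
    """Handle undetectable (UD) results, marked as np.nan, before analysis.

    method : 'delete', 'dl', 'dl_sqrt2', 'dl_2', or 'zero'
    """
    if method == "delete":
        return values[~np.isnan(values)]
    subs = {"dl": dl, "dl_sqrt2": dl / np.sqrt(2), "dl_2": dl / 2, "zero": 0.0}
    return np.where(np.isnan(values), subs[method], values)

rng = np.random.default_rng(0)
true = rng.lognormal(mean=2.7, sigma=0.4, size=1000)  # synthetic hormone data
dl = np.percentile(true, 20)                          # censor the lowest 20%
censored = np.where(true < dl, np.nan, true)

print(f"true      mean={true.mean():6.2f} sd={true.std(ddof=1):5.2f}")
for m in ["delete", "dl", "dl_sqrt2", "dl_2", "zero"]:
    x = substitute_ud(censored, dl, m)
    print(f"{m:9s} mean={x.mean():6.2f} sd={x.std(ddof=1):5.2f}")
```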


2010 ◽  
Vol 2 (1) ◽  
pp. 133-155 ◽  
Author(s):  
A. Velo ◽  
F. F. Pérez ◽  
X. Lin ◽  
R. M. Key ◽  
T. Tanhua ◽  
...  

Abstract. Data on carbon and carbon-relevant hydrographic and hydrochemical parameters from 188 previously non-publicly available cruise data sets in the Arctic Mediterranean Seas (AMS), Atlantic Ocean and Southern Ocean have been retrieved and merged into a new database: CARINA (CARbon IN the Atlantic Ocean). These data have gone through rigorous quality control (QC) procedures to assure the highest possible quality and consistency. The data for most of the measured parameters in the CARINA database were objectively examined in order to quantify systematic differences in the reported values. Systematic biases found in the data have been corrected in the data products: three merged data files with measured, calculated and interpolated data for each of the three CARINA regions, i.e. AMS, Atlantic Ocean and Southern Ocean. Out of a total of 188 cruise entries in the CARINA database, 59 reported measured pH values. All reported pH data have been unified to the Sea-Water Scale (SWS) at 25 °C. Here we present details of the secondary QC of pH in the CARINA database and the scale unification to SWS at 25 °C. The pH scale has been converted for 36 cruises. Procedures of quality control, including crossover analysis between cruises and inversion analysis, are described. Adjustments were applied to the pH values for 21 of the cruises in the CARINA data set. With these adjustments the CARINA database is consistent both internally and with the GLODAP data, an oceanographic data set based on the World Hydrographic Program in the 1990s. Based on our analysis we estimate the internal consistency of the CARINA pH data to be 0.005 pH units. The CARINA data are now suitable for accurate assessments of, for example, oceanic carbon inventories and uptake rates, for ocean acidification assessment and for model validation.
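
A hedged sketch of the crossover idea behind the secondary QC: deep water properties are assumed stable between cruises, so the mean deep difference between two cruises' profiles at a crossover location estimates their relative bias. The depth cutoff and names are illustrative, not the exact CARINA configuration:

```python
import numpy as np

def crossover_offset(depth_a, ph_a, depth_b, ph_b, zmin=1500.0):
    """Mean deep offset (and its standard error) between two cruises'
    pH profiles at a crossover point. Profiles must be sorted by depth;
    only depths below zmin (m) are compared."""
    zmax = min(depth_a.max(), depth_b.max())
    z = np.arange(zmin, zmax, 50.0)            # common comparison levels
    d = np.interp(z, depth_a, ph_a) - np.interp(z, depth_b, ph_b)
    return d.mean(), d.std(ddof=1) / np.sqrt(d.size)
```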


2016 ◽  
Vol 29 (20) ◽  
pp. 7295-7311 ◽  
Author(s):  
Hyacinth C. Nnamchi ◽  
Jianping Li ◽  
Fred Kucharski ◽  
In-Sik Kang ◽  
Noel S. Keenlyside ◽  
...  

Abstract Equatorial Atlantic variability is dominated by the Atlantic Niño peaking during the boreal summer. Studies have shown robust links of the Atlantic Niño to fluctuations of the St. Helena subtropical anticyclone and Benguela Niño events. Furthermore, the occurrence of opposite sea surface temperature (SST) anomalies in the eastern equatorial and southwestern extratropical South Atlantic Ocean (SAO), also peaking in boreal summer, has recently been identified and termed the SAO dipole (SAOD). However, the extent to which and how the Atlantic Niño and SAOD are related remain unclear. Here, an analysis of historical observations reveals the Atlantic Niño as a possible intrinsic equatorial arm of the SAOD. Specifically, the observed sporadic equatorial warming characteristic of the Atlantic Niño (~0.4 K) is consistently linked to southwestern cooling (~−0.4 K) of the Atlantic Ocean during the boreal summer. Heat budget calculations show that the SAOD is largely driven by the surface net heat flux anomalies while ocean dynamics may be of secondary importance. Perturbations of the St. Helena anticyclone appear to be the dominant mechanism triggering the surface heat flux anomalies. A weakening of the anticyclone will tend to weaken the prevailing northeasterlies and enhance evaporative cooling over the southwestern Atlantic Ocean. In the equatorial region, the southeast trade winds weaken, thereby suppressing evaporation and leading to net surface warming. Thus, it is hypothesized that the wind–evaporation–SST feedback may be responsible for the growth of the SAOD events linking southern extratropics and equatorial Atlantic variability via surface net heat flux anomalies.
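
The heat-budget attribution rests on a mixed-layer temperature budget; a sketch of its standard form (the notation is mine, not necessarily the paper's):

```latex
\frac{\partial T'}{\partial t} =
    \underbrace{\frac{Q'_{\mathrm{net}}}{\rho_0\, c_p\, h}}_{\text{surface net heat flux}}
  \;-\; \underbrace{\left( \mathbf{u}'\cdot\nabla\bar{T} + \bar{\mathbf{u}}\cdot\nabla T' \right)}_{\text{ocean dynamics (advection)}}
  \;+\; R'
```

Here T' is the SST anomaly, h the mixed-layer depth, and R' the residual processes (entrainment, mixing). The abstract's finding is that the first term, the surface net heat flux, dominates the growth of SAOD events, with the advective (ocean dynamics) terms of secondary importance.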


2009 ◽  
Vol 22 (5) ◽  
pp. 1255-1276 ◽  
Author(s):  
Kettyah C. Chhak ◽  
Emanuele Di Lorenzo ◽  
Niklas Schneider ◽  
Patrick F. Cummins

Abstract An ocean model is used to examine and compare the forcing mechanisms and underlying ocean dynamics of two dominant modes of ocean variability in the northeast Pacific (NEP). The first mode is identified with the Pacific decadal oscillation (PDO) and accounts for the most variance in model sea surface temperatures (SSTs) and sea surface heights (SSHs). It is characterized by a monopole structure with a strong coherent signature along the coast. The second mode of variability is termed the North Pacific Gyre Oscillation (NPGO). This mode accounts for the most variance in sea surface salinities (SSSs) in the model and in long-term observations. While the NPGO is related to the second EOF of the North Pacific SST anomalies (the Victoria mode), it is defined here in terms of SSH anomalies. The NPGO is characterized by a pronounced dipole structure corresponding to variations in the strengths of the eastern and central branches of the subpolar and subtropical gyres in the North Pacific. It is found that the PDO and NPGO modes are each tied to a specific atmospheric forcing pattern. The PDO is related to the overlying Aleutian low, while the NPGO is forced by the North Pacific Oscillation. The above-mentioned climate modes captured in the model hindcast are reflected in satellite altimeter data. A budget reconstruction is used to study how the atmospheric forcing drives the SST and SSH anomalies. Results show that the basinwide SST and SSS anomaly patterns associated with each mode are shaped primarily by anomalous horizontal advection of mean surface temperature and salinity gradients (∇T and ∇S) via anomalous surface Ekman currents. This suggests a direct link of these modes with atmospheric forcing and the mean ocean circulation. Smaller-scale patterns in various locations along the coast and in the Gulf of Alaska are, however, not resolved with the budget reconstructions. Vertical profiles of the PDO and NPGO indicate that the modes are strongest mainly in the upper ocean down to 250 m. The shallowness of the modes, the depth of the mean mixed layer, and wintertime temperature profile inversions contribute to the sensitivity of the budget analysis in the regions of reduced reconstruction skill.
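
A hedged numpy sketch of the mechanism described above: anomalous Ekman currents, derived from wind-stress anomalies, advect the mean surface temperature gradient. Constants and names are illustrative; this is not the paper's budget code:

```python
import numpy as np

RHO0 = 1025.0   # reference seawater density (kg/m^3)
H_ML = 50.0     # assumed mixed-layer depth (m)

def ekman_advection_tendency(taux, tauy, Tbar, f, dx, dy):
    """SST tendency -u_ek' . grad(Tbar) from anomalous Ekman advection
    of the mean SST field, with the Ekman transport spread over the
    mixed layer.

    taux, tauy : wind-stress anomalies (N/m^2) on a 2-D (y, x) grid
    Tbar       : mean SST (K) on the same grid
    f          : Coriolis parameter (1/s), scalar or 2-D
    dx, dy     : grid spacing (m)
    """
    u_ek = tauy / (RHO0 * f * H_ML)      # anomalous Ekman current, eastward
    v_ek = -taux / (RHO0 * f * H_ML)     # anomalous Ekman current, northward
    dTdy, dTdx = np.gradient(Tbar, dy, dx)
    return -(u_ek * dTdx + v_ek * dTdy)  # K/s
```

The same form with the mean salinity gradient in place of ∇T gives the corresponding SSS tendency.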


2010 ◽  
Vol 2 (1) ◽  
pp. 17-34 ◽  
Author(s):  
T. Tanhua ◽  
R. Steinfeldt ◽  
R. M. Key ◽  
P. Brown ◽  
N. Gruber ◽  
...  

Abstract. Water column data of carbon and carbon-relevant hydrographic and hydrochemical parameters from 188 previously non-publicly available cruise data sets in the Arctic Mediterranean Seas, Atlantic and Southern Ocean have been retrieved and merged into a new database: CARINA (CARbon dioxide IN the Atlantic Ocean). The data have gone through rigorous quality control procedures to assure the highest possible quality and consistency. The data for the pertinent parameters in the CARINA database were objectively examined in order to quantify systematic differences in the reported values, i.e. secondary quality control. Systematic biases found in the data have been corrected in the three data products: merged data files with measured, calculated and interpolated data for each of the three CARINA regions, i.e. the Arctic Mediterranean Seas, the Atlantic and the Southern Ocean. These products have been corrected to be internally consistent. Ninety-eight of the cruises in the CARINA database were conducted in the Atlantic Ocean, defined here as the region south of the Greenland-Iceland-Scotland Ridge and north of about 30° S. Here we present an overview of the Atlantic Ocean synthesis of the CARINA data and the adjustments that were applied to the data product. We also report the details of the secondary QC (Quality Control) for salinity for this data set. Procedures of quality control – including crossover analysis between stations and inversion analysis of all crossover data – are briefly described. Adjustments to salinity measurements were applied to the data from 10 cruises in the Atlantic Ocean region. Based on our analysis we estimate the internal consistency of the CARINA-ATL salinity data to be 4.1 ppm. With these adjustments the CARINA data products are consistent both internally as well as with GLODAP data, an oceanographic data set based on the World Hydrographic Program in the 1990s, and are now suitable for accurate assessments of, for example, oceanic carbon inventories and uptake rates and for model validation.
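
The inversion analysis can be sketched as a least-squares problem: given pairwise crossover offsets D_ij ≈ c_i − c_j, solve for one additive correction per cruise. This toy version is unweighted; syntheses like this one use more elaborate weighted schemes:

```python
import numpy as np

def invert_crossovers(n_cruises, pairs, offsets):
    """Least-squares per-cruise corrections c from crossover offsets
    D_ij ~ c_i - c_j. A zero-mean row pins the undetermined reference."""
    A = np.zeros((len(pairs) + 1, n_cruises))
    b = np.append(np.asarray(offsets, dtype=float), 0.0)
    for row, (i, j) in enumerate(pairs):
        A[row, i], A[row, j] = 1.0, -1.0
    A[-1, :] = 1.0                       # constraint: corrections sum to 0
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c                             # subtract c[i] from cruise i

# Toy example: cruise 0 reads ~0.004 high relative to cruise 1, etc.
print(invert_crossovers(3, [(0, 1), (1, 2), (0, 2)], [0.004, -0.001, 0.003]))
```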


2018 ◽  
Vol 7 (6) ◽  
pp. 33
Author(s):  
Morteza Marzjarani

Selecting a proper model for a data set is a challenging task. In this article, an attempt was made to find a suitable model for a given data set. A general linear model (GLM) was introduced along with three different methods for estimating the parameters of the model: ordinary least squares (OLS), generalized least squares (GLS), and feasible generalized least squares (FGLS). In the case of GLS, two different weights were selected for reducing the severity of heteroscedasticity, and the proper weight(s) deployed. The third weight was selected through the application of FGLS. Analyses showed that only two of the three weights, including the FGLS weight, were effective in reducing the severity of heteroscedasticity. In addition, each data set was divided into Training, Validation, and Testing partitions, producing a more reliable set of estimates for the parameters in the model. Partitioning data is a relatively new approach in statistics, borrowed from the field of machine learning. Stepwise and forward selection methods, along with a number of statistics including the testing average square error (ASE), adjusted R-squared, AIC, AICC, and validation ASE, together with proper hierarchies, were deployed to select a more appropriate model(s) for a given data set. Furthermore, the response variable in both data files was transformed using the Box-Cox method to meet the assumption of normality; analysis showed that the logarithmic transformation solved this issue in a satisfactory manner. Since the issues of heteroscedasticity, model selection, and partitioning of data have not been addressed in fisheries, for introduction and demonstration purposes the 2015 and 2016 shrimp data from the Gulf of Mexico (GOM) were selected and the above methods applied to these data sets. In conclusion, some variations of the GLM were identified as possible leading candidates for the above data sets.
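
A minimal statsmodels sketch of the OLS/WLS(GLS)/FGLS comparison on synthetic heteroscedastic data; the weight functions are illustrative candidates, not the paper's choices:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1.0, 10.0, n)
X = sm.add_constant(x)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3 * x)   # error variance grows with x

ols = sm.OLS(y, X).fit()

# GLS via weighted least squares with an assumed weight (one candidate).
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()

# FGLS: estimate the variance function from the OLS residuals
# (log squared residuals regressed on log x), then reweight.
aux = sm.OLS(np.log(ols.resid**2), sm.add_constant(np.log(x))).fit()
fgls = sm.WLS(y, X, weights=1.0 / np.exp(aux.fittedvalues)).fit()

for name, res in [("OLS", ols), ("WLS 1/x^2", wls), ("FGLS", fgls)]:
    print(f"{name:10s} slope={res.params[1]:.3f} se={res.bse[1]:.4f}")
```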

