Thermocline Model for Estimating Argo Sea Surface Temperature

2022 ◽  
Vol 4 (1) ◽  
pp. 1
Author(s):  
Zhang ChunLing ◽  
Zhang Meng-Li ◽  
Wang Zhen-Feng ◽  
Hu Song ◽  
Wang Dan-Yang ◽  
...  

Argo has become an important constituent of the global ocean observing system. However, because most Argo profiles lack sea surface measurements, the application of Argo data is still limited. In this study, a thermocline model was constructed from three key thermocline parameters, i.e., the thermocline upper depth, the thermocline bottom depth, and the thermocline temperature gradient. Using this model, we estimated the sea surface temperature of Argo profiles from the relationship between sea surface and subsurface temperature. We tested the effectiveness of the proposed model using statistical analysis and by comparing the estimated sea surface temperature with results from traditional methods and with in situ observations in the Pacific Ocean. The root mean square errors of the thermocline-model results were significantly smaller than those of the extrapolation results and the satellite-retrieved temperature results. The correlation coefficient between the estimates and in situ observations was 0.967. Argo surface temperature estimated by the thermocline model is therefore theoretically reliable, and the model generates data that present mesoscale phenomena in more detail. Overall, this study compensates for the lack of surface observations in Argo profiles and provides a new tool for establishing complete Argo data sets.
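As an illustration only, a minimal sketch of how such a parameterized thermocline could back out SST from one subsurface measurement. The function, parameter names, and the assumption of a vertically uniform mixed layer above the thermocline are ours, not the paper's:

```python
def estimate_sst(t_sub, z_sub, d_upper, d_bottom, gradient):
    """Estimate SST from one subsurface temperature, assuming an
    idealized profile: a vertically uniform mixed layer above the
    thermocline upper depth d_upper, and a constant temperature
    decrease of `gradient` (degC per metre) between d_upper and the
    thermocline bottom depth d_bottom. (Illustrative assumption, not
    the paper's exact formulation.)"""
    if not d_upper <= z_sub <= d_bottom:
        raise ValueError("subsurface sample must lie inside the thermocline")
    # Project the sample up to the top of the thermocline ...
    t_top = t_sub + gradient * (z_sub - d_upper)
    # ... and carry it through the (assumed well-mixed) surface layer.
    return t_top
```

For example, a 20 °C reading at 100 m depth, with a thermocline between 50 m and 200 m and a 0.05 °C/m gradient, yields an estimated SST of 22.5 °C.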

2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Chang Liu ◽  
Yuning Lei ◽  
Feng Gao ◽  
Meizhen Zhao

In situ observation is one of the most direct and efficient ways to understand the ocean, but it is usually limited in spatial and temporal coverage. Determining optimal sampling strategies that effectively use available resources to maximize the information content of the collected ocean data remains an open problem. Historical sea surface temperature (SST) datasets contain the spatial variability information of SST, and this prior knowledge can be used to optimize the configuration of sampling points. Here, a configuration method for sampling points based on the variability of SST is studied. First, to obtain the spatial variability of SST in the ocean field to be sampled, the historical SST data of the field are analyzed. Then, the K-means algorithm is used to cluster the subsampled fields so that the configuration of sampling points better matches that variability. Finally, to evaluate the sampling performance of the new configuration method, the SST field is reconstructed by a method based on a compressed sensing algorithm. Results show that the proposed optimal configuration method significantly outperforms the traditional random distribution of sampling points in terms of reconstruction accuracy. These results provide a new method for configuring sampling points for ocean in situ observation with limited resources.
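A toy version of the clustering step, in pure Python. The feature choice of (latitude, longitude, local SST standard deviation) is our illustrative assumption, not necessarily the features used in the study:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means. Each point is a tuple, e.g. (lat, lon, sst_std);
    the returned cluster centres suggest where to place sampling points."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        # Assign each grid cell to its nearest centre (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centres[c])))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster (keep it if empty).
        centres = [tuple(sum(vals) / len(cl) for vals in zip(*cl))
                   if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres
```

A production version would weight features appropriately and use a library implementation; this sketch only shows the mechanics.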


Ocean Science ◽  
2012 ◽  
Vol 8 (5) ◽  
pp. 845-857 ◽  
Author(s):  
S. Guinehut ◽  
A.-L. Dhomps ◽  
G. Larnicol ◽  
P.-Y. Le Traon

Abstract. This paper describes an observation-based approach that efficiently combines the main components of the global ocean observing system using statistical methods. Accurate but sparse in situ temperature and salinity profiles (mainly from Argo for the last 10 yr) are merged with the lower accuracy but high-resolution synthetic data derived from satellite altimeter and sea surface temperature observations to provide global 3-D temperature and salinity fields at high temporal and spatial resolution. The first step of the method consists in deriving synthetic temperature fields from altimeter and sea surface temperature observations, and salinity fields from altimeter observations, through multiple/simple linear regression methods. The second step of the method consists in combining the synthetic fields with in situ temperature and salinity profiles using an optimal interpolation method. Results show the revolutionary nature of the Argo observing system. Argo observations now allow a global description of the statistical relationships that exist between surface and subsurface fields needed for step 1 of the method, and can constrain the large-scale temperature and mainly salinity fields during step 2 of the method. Compared to the use of climatological estimates, results indicate that up to 50% of the variance of the temperature fields can be reconstructed from altimeter and sea surface temperature observations and a statistical method. For salinity, only about 20 to 30% of the signal can be reconstructed from altimeter observations, making the in situ observing system essential for salinity estimates. The in situ observations (step 2 of the method) further reduce the differences between the gridded products and the observations by up to 20% for the temperature field in the mixed layer, and the main contribution is for salinity and the near surface layer with an improvement up to 30%. 
Compared to estimates derived from in situ observations only, the merged fields provide a better reconstruction of the high-resolution temperature and salinity fields. This also holds for the large-scale and low-frequency fields, thanks to a better reduction of the aliasing caused by mesoscale variability. The contribution of the merged fields is then illustrated by qualitatively describing the temperature variability patterns over the period 1993 to 2009.
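Step 1 above rests on linear regression. A minimal single-predictor version, e.g. regressing a subsurface temperature anomaly on sea level anomaly (variable names and the single-predictor simplification are ours; the paper uses multiple as well as simple regression):

```python
def fit_synthetic(y, x):
    """Least-squares fit y ~ a*x + b, returning (a, b, frac), where
    frac is the fraction of the variance of y explained by x (the
    kind of quantity quoted as 'up to 50%' for temperature)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot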


Ocean Science ◽  
2016 ◽  
Vol 12 (1) ◽  
pp. 257-274 ◽  
Author(s):  
V. Turpin ◽  
E. Remy ◽  
P. Y. Le Traon

Abstract. Observing system experiments (OSEs) are carried out over a 1-year period to quantify the impact of Argo observations on the Mercator Ocean 0.25° global ocean analysis and forecasting system. The reference simulation assimilates sea surface temperature (SST), SSALTO/DUACS (Segment Sol multi-missions dALTimetrie, d'orbitographie et de localisation précise/Data unification and Altimeter combination system) altimeter data and Argo and other in situ observations from the Coriolis data center. Two other simulations are carried out in which all Argo data and half of the Argo data, respectively, are withheld. Assimilating Argo observations has a significant impact on analyzed and forecast temperature and salinity fields at different depths. Without Argo data assimilation, large errors occur in the analyzed fields, as estimated from the differences with respect to in situ observations. For example, in the 0–300 m layer, RMS (root mean square) differences between analyzed fields and observations reach 0.25 psu and 1.25 °C in the western boundary currents and 0.1 psu and 0.75 °C in the open ocean. The impact of the Argo data in reducing observation–model forecast differences is also significant from the surface down to a depth of 2000 m. Differences between in situ observations and forecast fields are reduced by 20 % in the upper layers and by up to 40 % at a depth of 2000 m when Argo data are assimilated. At depth, the most affected regions in the global ocean are the Mediterranean outflow, the Gulf Stream region and the Labrador Sea. A significant degradation is observed when only half of the data are assimilated. Argo observations therefore matter for constraining the model solution, even in an eddy-permitting model configuration. The impact of Argo float data assimilation on other model variables is briefly assessed: the improved fit to Argo profiles does not lead globally to unphysical corrections of the sea surface temperature and sea surface height.
The main conclusion is that the performance of the Mercator Ocean 0.25° global data assimilation system is heavily dependent on the availability of Argo data.
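The skill metric behind the percentages quoted above is straightforward. A sketch (helper names are ours) of the RMS observation-forecast difference and its percentage reduction when Argo is assimilated:

```python
import math

def rms_diff(forecast, obs):
    """Root mean square of forecast-minus-observation differences."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, obs))
                     / len(obs))

def reduction_pct(rms_without_argo, rms_with_argo):
    """Percentage reduction in RMS difference attributable to Argo."""
    return 100.0 * (rms_without_argo - rms_with_argo) / rms_without_argo
```

For example, RMS differences of 1.0 °C without Argo and 0.8 °C with Argo correspond to the 20 % reduction reported for the upper layers.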


2020 ◽  
Vol 12 (5) ◽  
pp. 759
Author(s):  
Kyungman Kwon ◽  
Byoung-Ju Choi ◽  
Sung-Dae Kim ◽  
Sang-Ho Lee ◽  
Kyung-Ae Park

Sea surface temperature (SST) is essential input for ocean and atmospheric prediction systems and for climate change studies. Five global gridded SST products were evaluated against independent in situ SST data from the Yellow Sea (YS) for 2010 to 2013, and the sources of SST error were identified. On average, SST from the gridded, optimally interpolated level 4 (L4) datasets had a root mean square difference (RMSD) of less than 1 °C relative to the in situ observations of the YS. However, the RMSD was relatively high (2.3 °C) in the shallow coastal region in June and July, and this RMSD was mostly attributable to a large warm bias (>2 °C). The level 3 (L3) SST data were frequently missing in early summer because of frequent sea fog formation and a strong (>1.2 °C/12 km) spatial temperature gradient across the tidal mixing front in the eastern YS. The missing data were optimally interpolated from SST observations in offshore warm water and from a warm-biased SST climatology in the region. To fundamentally improve the accuracy of the L4 gridded SST data, it is necessary to increase the number of SST observations in the tidally well-mixed region. As an interim solution to the warm bias in the gridded SST datasets in the eastern YS, the SST climatology used for the optimal interpolation can be improved based on long-term in situ observation data. To reduce the warm bias in the gridded SST products, two bias correction methods were suggested and compared. Bias correction using a simple analytical function and using climatological observation data reduced the RMSD by 19–29% and 37–49%, respectively, in June.
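The climatology-based correction amounts to subtracting a long-term mean bias field from the gridded product before use. A minimal sketch (function and variable names are ours; the study's actual correction is more elaborate):

```python
import math

def rmsd(a, b):
    """Root mean square difference between two matched SST series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def correct_bias(sst_l4, clim_bias):
    """Subtract a climatological bias (gridded minus in situ, averaged
    over many years for the same month and location) from the L4 SST."""
    return [s - b for s, b in zip(sst_l4, clim_bias)]
```

In this toy case a uniform 2 °C warm bias is removed entirely; in practice the bias varies in space and season, so the correction reduces rather than eliminates the RMSD.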


2021 ◽  
Vol 53 (1) ◽  
Author(s):  
Bambang Sukresno ◽  
Dinarika Jatisworo ◽  
Rizki Hanintyo

Sea surface temperature (SST) is an important variable in oceanography. One source of SST data is the Global Change Observation Mission–Climate (GCOM-C) satellite. These data need to be validated before being applied in various fields. This study aimed to validate SST data from the GCOM-C satellite in the Indonesian seas. Validation was performed against the Multi-scale Ultra-high Resolution SST (MUR-SST) and the in situ SST Quality Monitor (iQuam) datasets. The data used were the daily GCOM-C SST dataset from January to December 2018, together with the daily MUR-SST and iQuam datasets for the same period. The validation was carried out using the three-way error analysis method. The results showed that the accuracy of the GCOM-C SST was 0.37 °C.
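Three-way error analysis (triple collocation) estimates each dataset's error standard deviation from cross-products of the pairwise differences, under the assumption of mutually independent, zero-mean errors. A sketch of the standard estimator (our implementation, not the paper's code):

```python
import math

def three_way_errors(x, y, z):
    """Return estimated error standard deviations (sx, sy, sz) for three
    collocated datasets of the same variable, assuming independent
    zero-mean errors: var(ex) = <(x'-y')(x'-z')> etc., where primes
    denote anomalies about each dataset's mean."""
    n = len(x)
    def anomalies(s):
        m = sum(s) / n
        return [v - m for v in s]
    xa, ya, za = anomalies(x), anomalies(y), anomalies(z)
    def cross(a, b, c, d):
        # mean of (a - b) * (c - d), elementwise
        return sum((p - q) * (r - t)
                   for p, q, r, t in zip(a, b, c, d)) / n
    ex2 = cross(xa, ya, xa, za)
    ey2 = cross(ya, xa, ya, za)
    ez2 = cross(za, xa, za, ya)
    # Sampling noise can make an estimate slightly negative; clamp at 0.
    return tuple(math.sqrt(max(v, 0.0)) for v in (ex2, ey2, ez2))
```

Because the common signal cancels in each difference, the estimator isolates the error variance of one dataset per cross-product.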


Author(s):  
M. A. Syariz ◽  
L. M. Jaelani ◽  
L. Subehi ◽  
A. Pamungkas ◽  
E. S. Koenhardono ◽  
...  

Sea surface temperature (SST) can be retrieved from satellite data, which can thus provide SST over long time spans. Since algorithms that estimate SST from the Landsat 8 thermal bands are site-dependent, an algorithm applicable to Indonesian waters needs to be developed. The aim of this research was to develop SST algorithms for the waters north of Java Island. The data used were in situ data measured on April 22, 2015, and brightness temperature data estimated from the Landsat 8 thermal band image (band 10 and band 11). The algorithm was established using 45 data points by relating the measured in situ data to the estimated brightness temperature, and was then validated using another 40 points. The results showed good performance of the sea surface temperature algorithm, with a coefficient of determination (R²) of 0.912 and a root mean square error (RMSE) of 0.028.
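The two validation figures quoted above are standard metrics. A sketch of computing them on held-out match-ups (function and variable names are ours):

```python
import math

def validation_skill(pred, obs):
    """Return (r2, rmse) of algorithm predictions against in situ SST:
    r2 is the coefficient of determination and rmse the root mean
    square error, both over the held-out validation points."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((p - o) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)
```

In the study's workflow, `pred` would be the SST estimated from band 10/11 brightness temperatures at the 40 validation points and `obs` the matched in situ measurements.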


Ocean Science ◽  
2009 ◽  
Vol 5 (4) ◽  
pp. 403-419 ◽  
Author(s):  
C. Skandrani ◽  
J.-M. Brankart ◽  
N. Ferry ◽  
J. Verron ◽  
P. Brasseur ◽  
...  

Abstract. In stand-alone ocean models, the atmospheric forcing is generally computed from atmospheric parameters derived from atmospheric reanalysis data and/or satellite products. With such forcing, the sea surface temperature simulated by the ocean model is usually significantly less accurate than the synoptic maps that can be obtained from satellite observations. This penalizes not only the realism of long-term ocean simulations, but also the accuracy of the reanalyses and the usefulness of short-term operational forecasts (which are key GODAE and MERSEA objectives). To improve this situation, which partly results from inaccuracies in the atmospheric forcing parameters, this paper investigates a way of further adjusting the state of the atmosphere (within appropriate error bars), so that an explicit ocean model can produce a sea surface temperature that better fits the available observations. This is done by performing idealized assimilation experiments in which Mercator Ocean reanalysis data are taken as a reference simulation describing the true state of the ocean. Synthetic observation datasets for sea surface temperature and salinity are extracted from the reanalysis and assimilated into a low-resolution global ocean model. The results of these experiments show that it is possible to compute piecewise-constant parameter corrections, with predefined amplitude limitations, such that long-term free model simulations become much closer to the reanalysis data, with misfit variance typically divided by a factor of 3. These results are obtained by applying a Monte Carlo method to simulate the joint parameter/state prior probability distribution. A truncated Gaussian assumption is used to avoid the most extreme and non-physical parameter corrections.
The general lesson of our experiments is indeed that a careful specification of the prior information on the parameters and on their associated uncertainties is a key element in the computation of realistic parameter estimates, especially if the system is affected by other potential sources of model errors.
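The truncated Gaussian step can be realized by simple rejection sampling. A sketch (the ±2σ bound, function name, and parameters are our illustrative choices, not the paper's settings):

```python
import random

def truncated_gauss_draws(rng, mu, sigma, n_sigma=2.0, size=1000):
    """Draw parameter corrections from N(mu, sigma^2) truncated at
    mu +/- n_sigma*sigma, rejecting the extreme (potentially
    non-physical) tails."""
    draws = []
    while len(draws) < size:
        v = rng.gauss(mu, sigma)
        if abs(v - mu) <= n_sigma * sigma:
            draws.append(v)
    return draws
```

Each accepted draw would perturb one forcing parameter in a Monte Carlo ensemble member; the prior mean and spread encode the "careful specification of the prior information" the authors emphasize.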


2014 ◽  
Vol 142 (5) ◽  
pp. 1771-1791 ◽  
Author(s):  
Mohamed Helmy Elsanabary ◽  
Thian Yew Gan

Abstract Rainfall is the primary driver of basin hydrologic processes. This article examines a recently developed rainfall predictive tool that combines wavelet principal component analysis (WPCA), an artificial neural network-genetic algorithm (ANN-GA), and statistical disaggregation into an integrated framework useful for the management of water resources around the upper Blue Nile River basin (UBNB) in Ethiopia. From the correlation field between scale-average wavelet powers (SAWPs) of the February–May (FMAM) global sea surface temperature (SST) and the first wavelet principal component (WPC1) of June–September (JJAS) seasonal rainfall over the UBNB, sectors of the Indian, Atlantic, and Pacific Oceans where SSTs show a strong teleconnection with JJAS rainfall in the UBNB (r ≥ 0.4) were identified. An ANN-GA model was developed to forecast the UBNB seasonal rainfall using the selected SST sectors. Results show that ANN-GA forecasted seasonal rainfall amounts that agree well with the observed data for the UBNB [root-mean-square errors (RMSEs) between 0.72 and 0.82, correlations between 0.68 and 0.77, and Hanssen–Kuipers (HK) scores between 0.5 and 0.77], but the results in the foothills region of the Great Rift Valley (GRV) were poor, which is expected since the variability of WPC1 mainly comes from the highlands of Ethiopia. The Valencia and Schaake model was used to disaggregate the forecasted seasonal rainfall to weekly rainfall, which was found to reasonably capture the characteristics of the observed weekly rainfall over the UBNB. The ability to forecast the UBNB rainfall at a season-long lead time will be useful for an optimal allocation of water usage among various competing users in the river basin.
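The Hanssen–Kuipers score used above is the hit rate minus the false-alarm rate from a 2×2 contingency table of forecast versus observed events; a sketch:

```python
def hanssen_kuipers(hits, misses, false_alarms, correct_negatives):
    """HK (Peirce) skill score from a 2x2 contingency table:
    1 is a perfect forecast, 0 indicates no skill."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate
```

For example, 8 hits, 2 misses, 1 false alarm, and 9 correct negatives give HK = 0.8 − 0.1 = 0.7, within the 0.5–0.77 range reported for the UBNB forecasts.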

