Reconstruction of global surface ocean pCO<sub>2</sub> using region-specific predictors based on a stepwise FFNN regression algorithm

2021 ◽  
Author(s):  
Guorong Zhong ◽  
Xuegang Li ◽  
Jinming Song ◽  
Baoxiao Qu ◽  
Fan Wang ◽  
...  

Abstract. Various machine learning methods have been applied to the global mapping of surface ocean partial pressure of CO2 (pCO2) to reduce the uncertainty in the global ocean CO2 sink estimate caused by undersampling of pCO2. In previous studies, the predictors of pCO2 were usually selected empirically from the theoretical drivers of surface ocean pCO2, and the same combination of predictors was applied in all areas unless data coverage was lacking. However, differences between the drivers of surface ocean pCO2 in different regions were not considered. In this work, we combined a stepwise regression algorithm with a feed-forward neural network (FFNN) to select predictors of pCO2, based on mean absolute error, in each of 11 biogeochemical provinces defined by the self-organizing map (SOM) method. From the selected predictors, a monthly global 1° × 1° surface ocean pCO2 product covering January 1992 to August 2019 was constructed. Different combinations of predictors were validated against the SOCAT dataset version 2020 and independent observations from time-series stations. Predictions of pCO2 based on the region-specific predictors selected by the stepwise FFNN algorithm were more precise than those based on predictors from previous studies. Applying an FFNN size-improving algorithm in each province decreased the mean absolute error (MAE) of the global estimate to 11.32 μatm and the root mean square error (RMSE) to 17.99 μatm. The script file of the stepwise FFNN algorithm and the pCO2 product are distributed through the Institute of Oceanology of the Chinese Academy of Sciences Marine Science Data Center (IOCAS; http://dx.doi.org/10.12157/iocas.2021.0022, Zhong et al., 2021).
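The stepwise selection idea can be sketched as greedy forward selection driven by validation MAE: at each round, add the candidate predictor that most lowers the validation error, and stop when no candidate improves it. The sketch below is a minimal illustration, not the paper's code; it uses a least-squares linear model as a lightweight stand-in for the FFNN, and the predictor names (SST, SSS, CHL, MLD) are hypothetical placeholders rather than the study's actual predictor pool.

```python
import numpy as np

def fit_predict(X_tr, y_tr, X_va):
    # Least-squares linear model as a lightweight stand-in for the FFNN
    A = np.column_stack([X_tr, np.ones(len(X_tr))])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([X_va, np.ones(len(X_va))]) @ coef

def stepwise_select(X_tr, y_tr, X_va, y_va, names):
    """Greedy forward selection: repeatedly add the predictor that most
    lowers validation MAE; stop when no candidate improves it."""
    chosen, best_mae = [], np.inf
    while True:
        pick = None
        for j in range(X_tr.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            pred = fit_predict(X_tr[:, cols], y_tr, X_va[:, cols])
            cand_mae = np.mean(np.abs(pred - y_va))
            if cand_mae < best_mae:
                best_mae, pick = cand_mae, j
        if pick is None:
            return [names[j] for j in chosen], best_mae
        chosen.append(pick)

# Synthetic data: the target depends only on the 1st and 3rd predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=400)
names = ["SST", "SSS", "CHL", "MLD"]
sel, mae = stepwise_select(X[:200], y[:200], X[200:], y[200:], names)
print(sel, round(mae, 2))
```

On this synthetic example the procedure recovers the two informative predictors, mirroring how a region-specific subset would be chosen per province.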

2014 ◽  
Vol 31 (8) ◽  
pp. 1838-1849 ◽  
Author(s):  
J. Zeng ◽  
Y. Nojiri ◽  
P. Landschützer ◽  
M. Telszewski ◽  
S. Nakaoka

Abstract A feed-forward neural network is used to create a monthly climatology of the sea surface fugacity of CO2 (fCO2) on a 1° × 1° spatial resolution. Using 127 880 data points from 1990 to 2011 in the track-gridded database of the Surface Ocean CO2 Atlas version 2.0 (Bakker et al.), the model yields a global mean fCO2 increase rate of 1.50 μatm yr−1. The rate was used to normalize multiple years’ fCO2 observations to the reference year of 2000. A total of 73 265 data points from the normalized data were used to model the global fCO2 climatology. The model simulates monthly fCO2 distributions that agree well with observations and yields an anthropogenic CO2 uptake of −1.9 to −2.3 PgC yr−1. The range reflects the uncertainty related to using different wind products for the flux calculation. This estimate is in good agreement with the recently derived best estimate by Wanninkhof et al. The model product benefits from a finer spatial resolution compared to the product of Lamont–Doherty Earth Observatory (Takahashi et al.), which is currently the most frequently used product. It therefore has the potential to improve estimates of the global ocean CO2 uptake. The method’s benefits include but are not limited to the following: (i) a fixed structure is not required to model fCO2 as a nonlinear function of biogeochemical variables, (ii) only one neural network configuration is sufficient to model global fCO2 in all seasons, and (iii) the model can be extended to produce global fCO2 maps at a higher resolution in time and space as long as the required data for input variables are available.
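The normalization step described above is a simple linear shift along the global trend; a minimal sketch, taking the paper's 1.50 μatm yr−1 global mean increase rate:

```python
def normalize_fco2(fco2_obs, year_obs, rate=1.50, ref_year=2000):
    """Shift an fCO2 observation to the reference year using the
    global mean increase rate (1.50 uatm/yr in the paper)."""
    return fco2_obs - rate * (year_obs - ref_year)

# A 2010 observation of 380 uatm maps to 365 uatm in year-2000 terms;
# a 1990 observation of 360 uatm maps forward to 375 uatm.
print(normalize_fco2(380.0, 2010))
print(normalize_fco2(360.0, 1990))
```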


2020 ◽  
Vol 15 ◽  
Author(s):  
Fahad Layth Malallah ◽  
Baraa T. Shareef ◽  
Mustafah Ghanem Saeed ◽  
Khaled N. Yasen

Aims: An elevated body temperature can indicate a disease that may be risky to other people, as with coronavirus. Traditional techniques for tracking core temperature require body contact (oral, rectal, axillary, or tympanic), which is intrusive in nature and can cause contagion. Sensing human core temperature non-intrusively and remotely is therefore the objective of this research. Background: Improving the medical sector is a necessary target of research, especially with the development of integrated circuits, sensors, and cameras that have made everyday life easier. Methods: The proposed solution is an embedded system consisting of an Arduino microcontroller, trained with a model based on mean absolute error (MAE) analysis, for predicting the Contactless Core-Temperature (CCT), i.e., the real body temperature. Results: The Arduino is connected to an MLX90614 infrared thermal sensor as the input signal and to an LCD that displays the CCT. To evaluate the proposed system, experiments were conducted with 31 subjects, sensing contactless temperature from three facial sub-regions: forehead, nose, and cheek. Conclusion: The experimental results confirmed that the CCT can be measured remotely from the human face; the forehead region should be relied on rather than the nose and cheek regions for CCT measurement, due to yielding the smallest error.
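Since the system is evaluated via MAE, one simple calibration choice is a constant offset between the infrared skin reading and a reference core temperature; the median of the per-subject differences is the constant offset that minimizes MAE. The readings below are made-up illustration values, not data from the study.

```python
def mae_optimal_offset(sensor, reference):
    """The constant offset minimizing mean absolute error between
    (sensor + offset) and reference is the median of the differences."""
    diffs = sorted(r - s for s, r in zip(sensor, reference))
    n = len(diffs)
    mid = n // 2
    return diffs[mid] if n % 2 else (diffs[mid - 1] + diffs[mid]) / 2

# Hypothetical forehead IR readings (deg C) vs oral reference values
sensor = [34.1, 34.5, 33.9, 34.8, 34.2]
reference = [36.7, 37.0, 36.5, 37.3, 36.8]
offset = mae_optimal_offset(sensor, reference)
print(round(offset, 2))
```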


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2670
Author(s):  
Thomas Quirin ◽  
Corentin Féry ◽  
Dorian Vogel ◽  
Céline Vergne ◽  
Mathieu Sarracanie ◽  
...  

This paper presents a tracking system using magnetometers, potentially integrable in a deep brain stimulation (DBS) electrode. DBS is a treatment for movement disorders in which the position of the implant is of prime importance. Positioning challenges during surgery could be addressed by magnetic tracking. The system proposed in this paper, complementary to existing procedures, has been designed to bridge preoperative clinical imaging with DBS surgery, allowing the surgeon to increase his/her control over the implantation trajectory. Here the magnetic source required for tracking consists of three coils and is experimentally mapped. This mapping was performed with an in-house three-dimensional magnetic camera. The system demonstrates how magnetometers integrated directly at the tip of a DBS electrode might improve treatment by monitoring the position during and after surgery. Three-dimensional operation without line of sight has been demonstrated using a reference obtained by magnetic resonance imaging (MRI) of a simplified brain model. We observed experimentally a mean absolute error of 1.35 mm and a Euclidean error of 3.07 mm. Several avenues of improvement to target errors below 1 mm are also discussed.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the utilization of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, a comparison of the deep learning methods whose codes are publicly available is conducted in this paper. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results obtained show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error value of 2.57 beats per minute and a mean square error value of 7.56 beats per minute.


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects, and one of the most popular is machine learning. This study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects. The combined defects in the study are settlement and dipped joint. The features used to detect and evaluate the severity of combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For simplified data, features are extracted from the raw data: the weight of the rolling stock, the speed of the rolling stock, and three peak and three bottom accelerations from each of the two wheels of the rolling stock. In total, 14 features are used as simplified data for developing the DNN model. For raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning, performed via grid search, ensures that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches: the first uses one model to detect settlement and dipped joint together, and the second uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate severity by categorizing defects into light, medium, and severe classes, while regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity, with an accuracy of 84% and mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity, with an accuracy of 99% and MAE of 1.58 mm.
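The simplified 14-value feature vector described above (weight, speed, and three peak and three bottom accelerations from each of two wheels) can be sketched as follows. The exact peak-picking procedure used in the study is not specified in the abstract, so simple sorting of the acceleration samples is used here as a placeholder.

```python
import numpy as np

def simplified_features(weight, speed, wheel_a, wheel_b, n=3):
    """Assemble a 14-value feature vector: weight, speed, plus the n
    largest (peak) and n smallest (bottom) accelerations per wheel."""
    feats = [weight, speed]
    for signal in (wheel_a, wheel_b):
        s = np.sort(signal)
        feats.extend(s[-n:][::-1])  # n peak accelerations, largest first
        feats.extend(s[:n])         # n bottom accelerations
    return np.array(feats)

# Hypothetical rolling stock weight (t), speed (km/h), and two
# simulated axle box acceleration traces
rng = np.random.default_rng(1)
x = simplified_features(80.0, 100.0,
                        rng.normal(size=500), rng.normal(size=500))
print(x.shape)  # the 14-feature input for the DNN model
```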


2021 ◽  
pp. 1-13
Author(s):  
Richa ◽  
Punam Bedi

Recommender System (RS) is an information filtering approach that helps the overburdened user with information in his decision making process and suggests items which might be interesting to him. While presenting recommendation to the user, accuracy of the presented list is always a concern for the researchers. However, in recent years, the focus has now shifted to include the unexpectedness and novel items in the list along with accuracy of the recommended items. To increase the user acceptance, it is important to provide potentially interesting items which are not so obvious and different from the items that the end user has rated. In this work, we have proposed a model that generates serendipitous item recommendation and also takes care of accuracy as well as the sparsity issues. Literature suggests that there are various components that help to achieve the objective of serendipitous recommendations. In this paper, fuzzy inference based approach is used for the serendipity computation because the definitions of the components overlap. Moreover, to improve the accuracy and sparsity issues in the recommendation process, cross domain and trust based approaches are incorporated. A prototype of the system is developed for the tourism domain and the performance is measured using mean absolute error (MAE), root mean square error (RMSE), unexpectedness, precision, recall and F-measure.


2021 ◽  
pp. 875697282199994
Author(s):  
Joseph F. Hair ◽  
Marko Sarstedt

Most project management research focuses almost exclusively on explanatory analyses. Evaluation of the explanatory power of statistical models is generally based on F-type statistics and the R² metric, followed by an assessment of the model parameters (e.g., beta coefficients) in terms of their significance, size, and direction. However, these measures are not indicative of a model’s predictive power, which is central for deriving managerial recommendations. We recommend that project management researchers routinely use additional metrics, such as the mean absolute error or the root mean square error, to accurately quantify their statistical models’ predictive power.
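The distinction the authors draw can be made concrete: compute explanatory fit (R²) in-sample, but judge predictive power with MAE and RMSE on held-out data. A minimal numpy sketch with synthetic data:

```python
import numpy as np

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=200)

# Fit on the first half; judge predictive power on the second half
A = np.column_stack([X[:100], np.ones(100)])
coef, *_ = np.linalg.lstsq(A, y[:100], rcond=None)
in_sample = A @ coef
pred = np.column_stack([X[100:], np.ones(100)]) @ coef

print(round(r2(y[:100], in_sample), 2))  # in-sample explanatory fit
print(round(mae(y[100:], pred), 2))      # out-of-sample predictive error
print(round(rmse(y[100:], pred), 2))
```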


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 830
Author(s):  
William E. Lewis ◽  
Timothy L. Olander ◽  
Christopher S. Velden ◽  
Christopher Rozoff ◽  
Stefano Alessandrini

Accurate, reliable estimates of tropical cyclone (TC) intensity are a crucial element in the warning and forecast process worldwide, and for the better part of 50 years, estimates made from geostationary satellite observations have been indispensable to forecasters for this purpose. One such method, the Advanced Dvorak Technique (ADT), was used to develop analog ensemble (AnEn) techniques that provide more precise estimates of TC intensity with instant access to information on the reliability of the estimate. The resulting methods, ADT-AnEn and ADT-based Error Analog Ensemble (ADTE-AnEn), were trained and tested using seventeen years of historical ADT intensity estimates using k-fold cross-validation with 10 folds. Using only two predictors, ADT-estimated current intensity (maximum wind speed) and TC center latitude, both AnEn techniques produced significant reductions in mean absolute error and bias for all TC intensity classes in the North Atlantic and for most intensity classes in the Eastern Pacific. The ADTE-AnEn performed better for extreme intensities in both basins (significantly so in the Eastern Pacific) and will be incorporated in the University of Wisconsin’s Cooperative Institute for Meteorological Satellite Studies (UW-CIMSS) workflow for further testing during operations in 2021.
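An analog ensemble in this two-predictor setting amounts to a nearest-neighbour search in (estimated intensity, latitude) space over the historical training set, with the ensemble formed from the verifying observations of the closest analogs. The sketch below uses synthetic data and a plain scaled Euclidean distance; it does not reproduce the actual ADT-AnEn configuration.

```python
import numpy as np

def anen_estimate(query, hist_pred, hist_obs, k=5):
    """Return the mean and spread of the observed intensities of the k
    historical cases whose predictors are closest to the query."""
    scale = hist_pred.std(axis=0)  # normalise the unlike units
    d = np.linalg.norm((hist_pred - query) / scale, axis=1)
    nearest = np.argsort(d)[:k]
    return hist_obs[nearest].mean(), hist_obs[nearest].std()

# Synthetic history: ADT intensity estimate (kt), TC centre latitude
rng = np.random.default_rng(2)
hist_pred = np.column_stack([rng.uniform(20, 140, 1000),
                             rng.uniform(5, 45, 1000)])
hist_obs = hist_pred[:, 0] + rng.normal(scale=5.0, size=1000)

mean, spread = anen_estimate(np.array([100.0, 20.0]), hist_pred, hist_obs)
print(round(mean, 1), round(spread, 1))
```

The spread of the analog members is what gives the instant reliability information mentioned in the abstract.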


2020 ◽  
Vol 11 (1) ◽  
pp. 39
Author(s):  
Eric Järpe ◽  
Mattias Weckstén

A new method for musical steganography for the MIDI format is presented. The MIDI standard is a user-friendly music technology protocol that is frequently deployed by composers of different levels of ambition. There is, to the authors’ knowledge, no fully implemented and rigorously specified, publicly available method for MIDI steganography. The goal of this study is to investigate how a novel MIDI steganography algorithm can be implemented by manipulating the velocity attribute, subject to restrictions of capacity and security. Many of today’s MIDI steganography methods, less rigorously described in the literature, fail to be resilient to steganalysis. Traces (such as artefacts in the MIDI code which would not occur by the mere generation of MIDI music: MIDI file size inflation, radical changes in mean absolute error or peak signal-to-noise ratio of certain kinds of MIDI events, or even audible effects in the stego MIDI file) that could catch the eye of a scrutinizing steganalyst are side effects of many current methods described in the literature. Resilience to steganalysis is an imperative property of a steganography method. By restricting the carrier MIDI files to classical organ and harpsichord pieces, the problem of velocities following the mood of the music can be avoided. The proposed method, called Velody 2, is found to be on par with or better than cutting-edge alternative methods regarding capacity and inflation, while still possessing better resilience against steganalysis. An audibility test was conducted to check that there are no signs of audible traces in the stego MIDI files.
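As a contrast to Velody 2 (whose algorithm is not specified in this abstract), the naive velocity-LSB scheme that such methods improve upon can be sketched in a few lines; artefacts of exactly this kind are what a scrutinizing steganalyst looks for:

```python
def embed_bits(velocities, bits):
    """Hide one message bit in the least significant bit of each
    note-on velocity (MIDI velocities are 7-bit values, 1-127)."""
    return [(v & ~1) | b for v, b in zip(velocities, bits)]

def extract_bits(velocities, n):
    """Recover the first n hidden bits from the stego velocities."""
    return [v & 1 for v in velocities[:n]]

cover = [64, 72, 80, 77, 65, 90]   # velocities from a cover file
message = [1, 0, 1, 1, 0, 1]
stego = embed_bits(cover, message)
print(extract_bits(stego, len(message)))
```

Each velocity changes by at most 1, which is inaudible in isolation but statistically detectable, hence the paper's emphasis on steganalysis resilience.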

