Version 2 of the IASI NH3 neural network retrieval algorithm: near-real-time and reanalysed datasets

Author(s):  
Martin Van Damme ◽  
Simon Whitburn ◽  
Lieven Clarisse ◽  
Cathy Clerbaux ◽  
Daniel Hurtmans ◽  
...  

Abstract. Recently, Whitburn et al. (2016) presented a neural-network-based algorithm for retrieving atmospheric ammonia (NH3) columns from IASI satellite observations. In the past year, several improvements have been introduced, and the resulting new baseline version, ANNI-NH3-v2, is documented here. One of the main changes to the algorithm is that separate neural networks were trained for land and sea observations, resulting in a better training performance for both groups. By reducing and transforming the input parameter space, performance is now also better for observations associated with favourable sounding conditions (i.e. enhanced thermal contrasts). Other changes relate to the introduction of a bias correction over sea and the treatment of the satellite zenith angle. In addition to these algorithmic changes, new recommendations for post-filtering the data and for averaging data in time or space are formulated. We also introduce a second dataset (ANNI-NH3-v2R-I) which relies on ERA-Interim ECMWF meteorological input data, along with built-in surface temperature, rather than the operationally provided Eumetsat IASI L2 data used for the standard near-real-time version. The need for such a dataset emerged after a series of sharp discontinuities were identified in the NH3 time series, which could be traced back to incremental changes in the IASI L2 algorithms for temperature and clouds. The reanalysed dataset is coherent in time and can therefore be used to study trends. Furthermore, both datasets agree reasonably well in the mean on recent data, after the date when the IASI meteorological L2 version 6 became operational (30 September 2014).

2017 ◽  
Vol 10 (12) ◽  
pp. 4905-4914 ◽  
Author(s):  
Martin Van Damme ◽  
Simon Whitburn ◽  
Lieven Clarisse ◽  
Cathy Clerbaux ◽  
Daniel Hurtmans ◽  
...  

Abstract. Recently, Whitburn et al. (2016) presented a neural-network-based algorithm for retrieving atmospheric ammonia (NH3) columns from Infrared Atmospheric Sounding Interferometer (IASI) satellite observations. In the past year, several improvements have been introduced, and the resulting new baseline version, Artificial Neural Network for IASI (ANNI)-NH3-v2.1, is documented here. One of the main changes to the algorithm is that separate neural networks were trained for land and sea observations, resulting in a better training performance for both groups. By reducing and transforming the input parameter space, performance is now also better for observations associated with favourable sounding conditions (i.e. enhanced thermal contrasts). Other changes relate to the introduction of a bias correction over land and sea and the treatment of the satellite zenith angle. In addition to these algorithmic changes, new recommendations for post-filtering the data and for averaging data in time or space are formulated. We also introduce a second dataset (ANNI-NH3-v2.1R-I) which relies on ERA-Interim ECMWF meteorological input data, along with surface temperature retrieved from a dedicated network, rather than the operationally provided Eumetsat IASI Level 2 (L2) data used for the standard near-real-time version. The need for such a dataset emerged after a series of sharp discontinuities were identified in the NH3 time series, which could be traced back to incremental changes in the IASI L2 algorithms for temperature and clouds. The reanalysed dataset is coherent in time and can therefore be used to study trends. Furthermore, both datasets agree reasonably well in the mean on recent data, after the date when the IASI meteorological L2 version 6 became operational (30 September 2014).
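The core algorithmic change described above is the land/sea split of the retrieval networks plus a bias correction. The sketch below illustrates that idea only; it is not the ANNI-NH3-v2.1 code. The feature dimensions, network sizes, and the `sea_bias` term are hypothetical, and scikit-learn's MLPRegressor stands in for the paper's neural networks.

```python
# Minimal sketch of a land/sea split retrieval, under assumed feature names and sizes.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: spectral/meteorological inputs, an NH3 column target,
# and a land/sea flag per observation (all placeholders).
n_obs, n_features = 5000, 12
X = rng.normal(size=(n_obs, n_features))
y = rng.lognormal(mean=0.0, sigma=1.0, size=n_obs)   # placeholder NH3 columns
is_land = rng.random(n_obs) < 0.7

# Train one network per surface type, mirroring the land/sea separation above.
nets = {}
for surface, mask in [("land", is_land), ("sea", ~is_land)]:
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    net.fit(X[mask], y[mask])
    nets[surface] = net

def retrieve(x_obs, land_flag, sea_bias=0.0):
    """Apply the appropriate network; subtract a (hypothetical) bias correction over sea."""
    net = nets["land"] if land_flag else nets["sea"]
    col = net.predict(x_obs.reshape(1, -1))[0]
    return col if land_flag else col - sea_bias

print(retrieve(X[0], land_flag=True))
```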


2012 ◽  
Vol 157-158 ◽  
pp. 11-15 ◽  
Author(s):  
Shao Xiong Wu

A real-time WPNN-based model is presented for the simultaneous recognition of both mean and variance control chart patterns (CCPs). For the pattern recognition structure, a combination of the wavelet transform with a probabilistic neural network (WPNN) is proposed. The input data are decomposed by the wavelet transform into several levels of detail coefficients and an approximation. The approximation, together with the energy of the detail coefficients at every level, forms the input of the PNN. Simulation results show that the model can accurately recognize each pattern of the mean and variance CCPs, so it can be used for simultaneous monitoring of the process mean and variance.
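A minimal sketch of the WPNN idea described above: wavelet-decompose a control-chart window, keep the coarsest approximation plus the energy of each level's detail coefficients, and classify with a simple probabilistic neural network (Gaussian Parzen kernels). This is illustrative only; it uses the PyWavelets package, the pattern labels are placeholders, and the wavelet, decomposition level, and smoothing parameter are assumptions.

```python
import numpy as np
import pywt

def wpnn_features(window, wavelet="db4", level=3):
    coeffs = pywt.wavedec(window, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    approx, details = coeffs[0], coeffs[1:]
    energies = [float(np.sum(d ** 2)) for d in details]   # energy per detail level
    return np.concatenate([approx, energies])

class PNN:
    """Minimal probabilistic neural network: one Gaussian kernel per training sample."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma
    def fit(self, X, y):
        self.X, self.y, self.classes = X, y, np.unique(y)
        return self
    def predict(self, X):
        out = []
        for x in X:
            d2 = np.sum((self.X - x) ** 2, axis=1)
            k = np.exp(-d2 / (2 * self.sigma ** 2))
            out.append(self.classes[np.argmax(
                [k[self.y == c].mean() for c in self.classes])])
        return np.array(out)

# Placeholder usage: 64-sample windows; real labels would come from simulated CCPs.
rng = np.random.default_rng(1)
X_train = np.vstack([wpnn_features(rng.normal(size=64)) for _ in range(100)])
y_train = rng.choice(["normal", "mean_shift", "var_shift"], size=100)
model = PNN(sigma=2.0).fit(X_train, y_train)
print(model.predict(X_train[:3]))
```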


Healthcare ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 234 ◽  
Author(s):  
Hyun Yoo ◽  
Soyoung Han ◽  
Kyungyong Chung

Recently, massive amounts of bio-information big data have been collected by sensor-based IoT devices. The collected data are classified into different types of health big data using various techniques. A personalized analysis technique is the basis for judging the risk factors of personal cardiovascular disorders in real time. The objective of this paper is to provide a model for personalized heart condition classification that combines a fast and effective preprocessing technique with a deep neural network in order to process the biosensor input data accumulated in real time. The model can learn the input data and develop an approximation function, and it can help users recognize risk situations. For the analysis of the pulse frequency, a fast Fourier transform is applied in the preprocessing step. Data reduction is performed using the frequency-by-frequency ratio data of the extracted power spectrum. To analyze the meaning of the preprocessed data, a neural network algorithm is applied. In particular, a deep neural network is used to analyze and evaluate linear data. A deep neural network can stack multiple layers and establish an operational model of nodes using gradient descent. The completed model was trained by classifying ECG signals collected in advance into normal, control, and noise groups. Thereafter, ECG signals input in real time through the trained deep neural network system were classified into normal, control, and noise. To evaluate the performance of the proposed model, this study used the ratio of data operation cost reduction and the F-measure. As a result, with the use of the fast Fourier transform and cumulative frequency percentage, the size of the ECG data was reduced to 1/32 of the original. According to the F-measure analysis, the deep neural network model had 83.83% accuracy. Given the results, the modified deep neural network technique can reduce the computational size of big data, and it is an effective system for reducing operation time.
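A rough sketch of the preprocessing pipeline outlined above: take the FFT of an ECG window, form the power spectrum, reduce it to a small number of per-band power ratios (e.g. a 256-sample window compressed to 8 values, roughly the 1:32 reduction reported), then classify the normal/control/noise groups with a deep neural network. The band layout, window length, and network sizes are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectral_features(ecg_window, n_bands=8):
    spectrum = np.abs(np.fft.rfft(ecg_window)) ** 2          # power spectrum
    total = spectrum.sum() + 1e-12
    bands = np.array_split(spectrum, n_bands)                 # frequency bands
    return np.array([b.sum() / total for b in bands])         # per-band power ratio

rng = np.random.default_rng(2)
windows = rng.normal(size=(300, 256))                          # placeholder ECG windows
labels = rng.choice(["normal", "control", "noise"], size=300)  # placeholder labels

X = np.vstack([spectral_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```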


Water ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1547
Author(s):  
Jian Sha ◽  
Xue Li ◽  
Man Zhang ◽  
Zhong-Liang Wang

Accurate real-time water quality prediction is of great significance for local environmental managers dealing with upcoming events and emergencies and developing best management practices. In this study, the performance of real-time water quality forecasting based on different deep learning (DL) models with different input data pre-processing methods was compared. Three popular DL models were considered: the convolutional neural network (CNN), the long short-term memory neural network (LSTM), and a hybrid CNN–LSTM. Two types of input data were applied: the original one-dimensional time series and a two-dimensional grey image based on decomposition by the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm. Each type of input data was used in each DL model to forecast the real-time monitored water quality parameters dissolved oxygen (DO) and total nitrogen (TN). The results showed that (1) the performance of CNN–LSTM was superior to that of the standalone CNN and LSTM models; (2) the models using CEEMDAN-based input data performed much better than those using the original input data, and the improvement for the non-periodic parameter TN was much greater than that for the periodic parameter DO; and (3) model accuracy gradually decreased as the number of prediction steps increased, with the original input data decaying faster than the CEEMDAN-based input data and the non-periodic parameter TN decaying faster than the periodic parameter DO. Overall, input data preprocessed by the CEEMDAN method effectively improved the forecasting performance of the deep learning models, and this improvement was especially significant for the non-periodic parameter TN.
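A minimal sketch of a hybrid CNN–LSTM forecaster of the kind compared above: a 1-D convolution extracts local features from the input window (for example, CEEMDAN intrinsic mode functions stacked as channels), and an LSTM models the temporal dependence before a dense layer outputs the next-step DO or TN value. Layer sizes, the window length, and the number of channels are assumptions, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

window_len, n_channels = 24, 6        # 24 past steps, 6 decomposed components (assumed)

model = models.Sequential([
    layers.Conv1D(32, kernel_size=3, activation="relu",
                  input_shape=(window_len, n_channels)),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(32),
    layers.Dense(1),                  # one-step-ahead water-quality value
])
model.compile(optimizer="adam", loss="mse")

# Placeholder data just to show the expected shapes.
X = np.random.rand(100, window_len, n_channels)
y = np.random.rand(100, 1)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1]).shape)
```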


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are widely utilized for various missions in both the civilian and military sectors. Many of these missions require UAVs to possess artificial intelligence about the environments they navigate. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a large cost in time and computational resources. Collecting large amounts of input data, pre-training processes such as labeling the training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
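A sketch of the two ingredients discussed above: simple image augmentation and a lightweight CNN for ten-class real-time classification, trained with a stochastic gradient descent optimizer. The particular augmentations, image size, and layer sizes are assumptions chosen for illustration, not the network designed in the paper.

```python
import numpy as np
from tensorflow.keras import layers, models

def augment(img, rng):
    """Cheap augmentations: horizontal flip and brightness jitter (assumed choices)."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                                   # horizontal flip
    return np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)       # brightness jitter

# Lightweight CNN: few filters and layers to keep inference fast on embedded hardware.
model = models.Sequential([
    layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),                     # ten object classes
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

rng = np.random.default_rng(3)
X = np.stack([augment(rng.random((64, 64, 3)), rng) for _ in range(32)])
y = rng.integers(0, 10, size=32)                                # placeholder labels
model.fit(X, y, epochs=1, verbose=0)
```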


2019 ◽  
Vol 32 (11) ◽  
pp. 6735-6744
Author(s):  
Nicoló Savioli ◽  
Enrico Grisan ◽  
Silvia Visentin ◽  
Erich Cosmi ◽  
Giovanni Montana ◽  
...  

Abstract. The automatic analysis of ultrasound sequences can substantially improve the efficiency of clinical diagnosis. This article presents an attempt to automate the challenging task of measuring the vascular diameter of the fetal abdominal aorta from ultrasound images. We propose a neural network architecture consisting of three blocks: a convolutional neural network (CNN) for the extraction of imaging features, a convolution gated recurrent unit (C-GRU) for exploiting the temporal redundancy of the signal, and a regularized loss function, called CyclicLoss, to impose our prior knowledge about the periodicity of the observed signal. The solution is investigated with a cohort of 25 ultrasound sequences acquired during the third-trimester pregnancy check, and with 1000 synthetic sequences. In the extraction of features, it is shown that a shallow CNN outperforms two other deep CNNs with both the real and synthetic cohorts, suggesting that echocardiographic features are optimally captured by a reduced number of CNN layers. The proposed architecture, working with the shallow CNN, reaches an accuracy substantially superior to previously reported methods, providing an average reduction of the mean squared error from 0.31 (state of the art) to 0.09 mm², and a relative error reduction from 8.1% to 5.3%. The mean execution speed of the proposed approach of 289 frames per second makes it suitable for real-time clinical use.
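One plausible reading of the CyclicLoss idea described above is the usual MSE against the ground truth plus a penalty that asks the predicted diameter sequence to repeat itself after one (known or estimated) cardiac period. The sketch below is an interpretation for illustration, not the authors' exact formulation; the period and the weight are assumptions.

```python
import tensorflow as tf

def cyclic_loss(period, weight=0.1):
    """MSE plus a penalty on differences between the prediction and itself shifted by one period."""
    def loss(y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        cyc = tf.reduce_mean(tf.square(y_pred[:, period:] - y_pred[:, :-period]))
        return mse + weight * cyc
    return loss

# Usage with any Keras sequence model predicting a diameter per frame (period assumed):
# model.compile(optimizer="adam", loss=cyclic_loss(period=20))
```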


2009 ◽  
Vol 13 (8) ◽  
pp. 1413-1425 ◽  
Author(s):  
N. Q. Hung ◽  
M. S. Babel ◽  
S. Weesakul ◽  
N. K. Tripathi

Abstract. This paper presents a new approach using an artificial neural network (ANN) technique to improve rainfall forecast performance. A real-world case study was set up in Bangkok; 4 years of hourly data from 75 rain gauge stations in the area were used to develop the ANN model. The developed ANN model is being applied for real-time rainfall forecasting and flood management in Bangkok, Thailand. Aimed at providing forecasts on a near-real-time schedule, different network types were tested with different kinds of input information. Preliminary tests showed that a generalized feedforward ANN model using a hyperbolic tangent transfer function achieved the best generalization of rainfall. In particular, using a combination of meteorological parameters (relative humidity, air pressure, wet-bulb temperature and cloudiness), rainfall at the point of forecasting and rainfall at the surrounding stations as input data allowed the ANN model to work with continuous data containing both rainy and non-rainy periods and to issue a forecast at any moment. Additionally, the ANN forecasts were compared with a conventional approach, namely the simple persistence method. The results show that the ANN forecasts are superior to those obtained by the persistence model. Rainfall forecasts for Bangkok from 1 to 3 h ahead were highly satisfactory. A sensitivity analysis indicated that the most important input parameter for forecasting rainfall, besides rainfall itself, is the wet-bulb temperature.
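A minimal sketch of the kind of model described above: a tanh-activated feedforward network fed with meteorological variables plus rainfall at the forecast point and surrounding stations, compared against a simple persistence baseline. The synthetic data, feature layout, and network size are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 2000
# Hypothetical hourly inputs: humidity, pressure, wet-bulb temp, cloudiness,
# rainfall at the target gauge, and rainfall at four neighbouring gauges.
X = rng.random((n, 9))
y = np.clip(0.5 * X[:, 4] + 0.2 * X[:, 5:9].sum(axis=1) + rng.normal(0, 0.1, n), 0, None)

split = int(0.8 * n)
ann = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   max_iter=1000, random_state=0).fit(X[:split], y[:split])

ann_rmse = np.sqrt(np.mean((ann.predict(X[split:]) - y[split:]) ** 2))
persist_rmse = np.sqrt(np.mean((X[split:, 4] - y[split:]) ** 2))  # persistence: last observed rainfall
print(f"ANN RMSE {ann_rmse:.3f} vs persistence RMSE {persist_rmse:.3f}")
```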


2007 ◽  
Vol 2007 ◽  
pp. 1-6 ◽  
Author(s):  
Bekir Karlık ◽  
Kemal Yüksek

The aim of this study is to develop a novel fuzzy clustering neural network (FCNN) algorithm as a pattern classifier for a real-time odor recognition system. In this type of FCNN, the activations of the input neurons are derived through fuzzy c-means clustering of the input data, so that the neural system can deal with the statistics of the measurement error directly. The performance of the FCNN is then compared with that of a well-known algorithm, the multilayer perceptron (MLP), for the same odor recognition system. Experimental results show that both the FCNN and the MLP provide a high recognition probability in determining the various learned categories of odors; however, the FCNN has a better ability to recognize odors than the MLP network.
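A sketch of the FCNN idea described above: derive the input-layer activations from fuzzy c-means membership degrees of each sensor reading, then classify with a small network. The tiny membership computation below uses fixed cluster centers for brevity (a full fuzzy c-means would iterate them), and all sizes and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def fuzzy_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership of each sample in each cluster, given the centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)   # rows sum to 1

rng = np.random.default_rng(5)
X = rng.random((200, 6))                             # placeholder e-nose sensor vectors
y = rng.choice(["odor_A", "odor_B", "odor_C"], 200)  # placeholder odor labels

centers = X[rng.choice(len(X), size=4, replace=False)]   # fixed "cluster" centers (assumed)
U = fuzzy_memberships(X, centers)                         # membership matrix -> input activations

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(U, y)
print(clf.predict(U[:3]))
```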


2018 ◽  
Vol 8 (1) ◽  
pp. 107
Author(s):  
Rizki Amalia Nurdini ◽  
Yudi Priyadi ◽  
Norita .

Indonesia’s coal mining industry has declined over the last five years, causing the financial performance of companies in the industry to deteriorate. The aim of this paper is to analyze bankruptcy prediction for coal mining sector companies listed on the Indonesia Stock Exchange (IDX) in 2012–2016 using a data mining prediction method, namely an artificial neural network (ANN) model, with three financial ratios as input parameters. The financial ratios used are the shareholder’s equity ratio, the current ratio and return on assets. The results indicate that these ratios are very suitable as input parameters because they show a quite significant difference in calculation results between bankrupt and non-bankrupt companies. The ANN training model used in the prediction process in this study achieved the best training performance with an architecture of 15 neurons in the input layer and one hidden layer with 30 neurons. The training model produced a training performance with the lowest MSE of 0.000000313 and the highest R of 99.9%. The bankruptcy prediction results using the ANN showed that 7 (seven) coal mining sector companies are predicted to go bankrupt.
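A minimal sketch of the setup described above: a 15-input network (assumed here to be the three financial ratios over five years, 2012–2016) with one 30-neuron hidden layer predicting a bankrupt / non-bankrupt label. The data below are random placeholders, not the IDX companies' ratios, and scikit-learn's MLPClassifier stands in for the ANN used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 15))     # 3 ratios x 5 years per company (assumed layout)
y = rng.integers(0, 2, size=40)   # 1 = bankrupt, 0 = not (placeholder labels)

model = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:5]))
```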

