Towards a climatology of fog frequency in the Atacama Desert via multi-spectral satellite data and machine learning techniques

Author(s):  
Christoph Böhm ◽  
Jan H. Schween ◽  
Mark Reyers ◽  
Benedikt Maier ◽  
Ulrich Löhnert ◽  
...  

Abstract: In many hyper-arid ecosystems, such as the Atacama Desert, fog is the most important fresh water source. To study biological and geological processes in such water-limited regions, knowledge about the spatio-temporal distribution and variability of fog presence is necessary. In this study, in-situ measurements provided by a network of climate stations equipped, inter alia, with leaf wetness sensors are utilized to create a reference fog data set which enables the validation of satellite-based fog retrieval methods. Further, a new satellite-based fog detection approach is introduced which uses brightness temperatures measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) as input for a neural network. Such a machine learning technique can exploit all spectral information of the satellite data and represent potential non-linear relationships. Compared to a second fog detection approach based on MODIS cloud top height retrievals, the neural network reaches a higher detection skill (Heidke skill score of 0.56 compared to 0.49). Temporal variability on subseasonal time scales is represented well, with correlations mostly greater than 0.7 between the fog occurrence time series derived from the neural network and the reference data at individual climate stations. Furthermore, the spatial representativity of the neural network approach appears sufficient to extend its application to the whole region. Three-year averages of fog frequency reveal similar spatial patterns for the austral winter season for both approaches. However, differences are found for the summer season, and potential reasons are discussed.
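The Heidke skill scores quoted above measure detection skill relative to random chance; for a binary fog/no-fog detector they can be computed from a 2×2 contingency table of detections versus station observations. A minimal sketch (the counts below are illustrative, not the study's data):

```python
def heidke_skill_score(hits, misses, false_alarms, correct_negatives):
    """Heidke skill score from a 2x2 contingency table of detected vs.
    observed fog events (1 = perfect skill, 0 = no skill beyond chance)."""
    n = hits + misses + false_alarms + correct_negatives
    # Number of correct classifications expected by chance alone
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    correct = hits + correct_negatives
    return (correct - expected) / (n - expected)

# A perfect detector scores 1.0; a purely random one scores 0.0
print(heidke_skill_score(50, 0, 0, 50))   # → 1.0
```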

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA41-WA52 ◽  
Author(s):  
Dario Grana ◽  
Leonardo Azevedo ◽  
Mingliang Liu

Among the large variety of mathematical and computational methods for estimating reservoir properties such as facies and petrophysical variables from geophysical data, deep machine-learning algorithms have gained significant popularity for their ability to obtain accurate solutions for geophysical inverse problems in which the physical models are partially unknown. Solutions of classification and inversion problems are generally not unique, and uncertainty quantification studies are required to quantify the uncertainty in the model predictions and determine the precision of the results. Probabilistic methods, such as Monte Carlo approaches, provide a reliable approach for capturing the variability of the set of possible models that match the measured data. Here, we focused on the classification of facies from seismic data and benchmarked the performance of three different algorithms: recurrent neural network, Monte Carlo acceptance/rejection sampling, and Markov chain Monte Carlo. We tested and validated these approaches at the well locations by comparing classification predictions to the reference facies profile. The accuracy of the classification results is measured by the mismatch between the predictions and the log facies profile. Our study found that when the training data set of the neural network is large enough and the prior information about the transition probabilities of the facies in the Monte Carlo approach is not informative, machine-learning methods lead to more accurate solutions; however, the uncertainty of the solution might be underestimated. When some prior knowledge of the facies model is available, for example, from nearby wells, Monte Carlo methods provide solutions with similar accuracy to the neural network and allow a more robust quantification of the uncertainty of the solution.
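The Monte Carlo acceptance/rejection step mentioned above can be sketched on a toy problem: candidate facies profiles are drawn from an uninformative prior and accepted with probability proportional to how well they match noisy observations. Everything below (profile length, noise level, the Gaussian mismatch model) is an illustrative assumption, not the paper's setup:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

true_facies = np.array([0, 1, 1, 0, 1, 0])               # reference profile (0 = shale, 1 = sand)
observed = true_facies + rng.normal(0.0, 0.3, size=6)    # noisy synthetic "seismic" data

def log_likelihood(facies):
    # Gaussian mismatch between a candidate profile and the observed data
    return -0.5 * np.sum((observed - np.asarray(facies)) ** 2) / 0.3**2

# Upper bound on the likelihood over all 2^6 profiles, needed for
# exact acceptance/rejection sampling
log_max = max(log_likelihood(p) for p in product([0, 1], repeat=6))

# Draw candidates from a uniform prior; accept each with probability
# likelihood / max-likelihood
accepted = [c for c in rng.integers(0, 2, size=(5000, 6))
            if np.log(rng.random()) < log_likelihood(c) - log_max]

posterior = np.mean(accepted, axis=0)       # per-sample probability of facies 1
prediction = (posterior > 0.5).astype(int)  # most probable facies per sample
```

The set of accepted models is exactly the "set of possible models that match the measured data" from which the variability (and hence the uncertainty) of the classification is read off.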


2021 ◽  
Vol 10 (3) ◽  
Author(s):  
Shreya Nag ◽  
Nimitha Jammula

The diagnosis of a disease to determine a specific condition is crucial in caring for patients and furthering medical research. A timely and accurate diagnosis can have important implications for both patients and healthcare providers. An earlier diagnosis allows doctors to consider more methods of treatment, giving them greater flexibility in tailoring their decisions and ultimately improving the patient's health. Additionally, timely detection gives patients greater control over their health and their decisions, allowing them to plan ahead. As advancements in computer science and technology continue, these two factors can play a major role in aiding healthcare providers with medical issues. The emergence of artificial intelligence and machine learning can help address the challenge of completing timely and accurate diagnoses. The goal of this research work is to design a system that utilizes machine learning and neural network techniques to diagnose chronic kidney disease with more than 90% accuracy based on a clinical data set, and to conduct a comparative study of the performance of the neural network versus supervised machine learning approaches. Based on the results, all the algorithms performed well in the prediction of chronic kidney disease (CKD), with more than 90% accuracy. The neural network provided the best performance (accuracy = 100%) in comparison with the supervised Random Forest algorithm (accuracy = 99%) and the supervised Decision Tree algorithm (accuracy = 97%).
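The comparison above (neural network vs. Random Forest vs. Decision Tree) can be reproduced in outline with scikit-learn. The data below is a synthetic stand-in since the clinical CKD data set is not reproduced here; model names mirror the study, but the resulting scores will of course differ:

```python
# Sketch of the classifier comparison, assuming scikit-learn and
# synthetic data in place of the clinical CKD data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=24, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
# Train each model and score its accuracy on the held-out split
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```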


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract: We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE, for which we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm that converges to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.


Author(s):  
Komsan Wongkalasin ◽  
Teerapon Upachaban ◽  
Wacharawish Daosawang ◽  
Nattadon Pannucharoenwong ◽  
Phadungsak Ratanadecho

This research aims to enhance the watermelon quality selection process, which was traditionally conducted by knocking on the watermelon fruit and sorting by the character of the sound. The method proposed in this research passes a sound spectrum through the watermelon and then analyzes the frequency and amplitude of the response signal by Fast Fourier Transform (FFT). The obtained data were used to train and verify a neural network processor. The results show that the frequencies of 129 and 172 Hz were suitable for the comparison. Thirty watermelons, randomly selected from the orchard, were used to create a data set and were then cut open to manually check and match the fruits' quality. The 129 Hz frequency gave a response of 13.57 and above across three quality groups of watermelons: not fully ripened, fully ripened, and close to rotten. The 172 Hz frequency gave a response between 11.11 and 12.72 for not fully ripened watermelons and of 13.00 or more for the group of close-to-rotten and hollow watermelons. The responses were then used as training conditions for the artificial neural network processor of the sorting machine prototype. The verification results provided a reasonable prediction of the ripeness level of the watermelons, and the system can serve as a pilot prototype for a modern watermelon quality selection tool, which could enhance the competitiveness of local farmers through product quality control.
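The FFT step described above can be sketched with NumPy: transform the recorded response and read off the single-sided amplitude at the frequencies of interest. The sampling rate and the synthetic signal below are illustrative assumptions; in the real setup, `signal` would be the digitized knock response:

```python
import numpy as np

fs = 2000                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # 1 second of signal → 1 Hz frequency resolution
# Synthetic response containing the two frequencies of interest
signal = 0.8 * np.sin(2 * np.pi * 129 * t) + 0.3 * np.sin(2 * np.pi * 172 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
amplitude = 2 * np.abs(spectrum) / signal.size   # single-sided amplitude

def amplitude_at(f):
    # Amplitude at the FFT bin closest to frequency f
    return amplitude[np.argmin(np.abs(freqs - f))]

print(amplitude_at(129), amplitude_at(172))   # → ~0.8 and ~0.3
```

These per-frequency amplitudes are the kind of response values (e.g. the 13.57-and-above thresholds quoted above) that feed the neural network classifier.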


2005 ◽  
Vol 488-489 ◽  
pp. 793-796 ◽  
Author(s):  
Hai Ding Liu ◽  
Ai Tao Tang ◽  
Fu Sheng Pan ◽  
Ru Lin Zuo ◽  
Ling Yun Wang

A model was developed for the analysis and prediction of the correlation between composition and mechanical properties of Mg-Al-Zn (AZ) magnesium alloys by applying an artificial neural network (ANN). The input parameters of the neural network (NN) are the alloy composition. The outputs of the NN model are important mechanical properties, including ultimate tensile strength, tensile yield strength, and elongation. The model is based on a multilayer feedforward neural network. The NN was trained with a comprehensive data set collected from domestic and foreign literature. Very good performance of the neural network was achieved. The model can be used for the simulation and prediction of mechanical properties of AZ-system magnesium alloys as functions of composition.
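The composition-to-property mapping can be illustrated with a small feedforward network. scikit-learn stands in for the original ANN here, and the composition/strength pairs below are made up for the sketch, not the literature data set used in the study:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform([2.0, 0.0], [10.0, 2.0], size=(200, 2))        # Al, Zn content in wt% (synthetic)
y = 150 + 15 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 5, 200)  # synthetic "strength" in MPa

# Multilayer feedforward network; inputs and target are standardized
# so that training is numerically stable
model = TransformedTargetRegressor(
    regressor=make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
    ),
    transformer=StandardScaler(),
)
model.fit(X, y)
prediction = model.predict([[9.0, 1.0]])   # e.g. an AZ91-like composition
```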


2019 ◽  
Author(s):  
Longxiang Su ◽  
Chun Liu ◽  
Dongkai Li ◽  
Jie He ◽  
Fanglan Zheng ◽  
...  

BACKGROUND Heparin is one of the most commonly used medications in intensive care units. In clinical practice, the use of a weight-based heparin dosing nomogram is standard practice for the treatment of thrombosis. Recently, machine learning techniques have dramatically improved the ability of computers to provide clinical decision support and have allowed for the possibility of computer-generated, algorithm-based heparin dosing recommendations. OBJECTIVE The objective of this study was to predict the effects of heparin treatment using machine learning methods to optimize heparin dosing in intensive care units based on the predictions. Patient state predictions were based upon activated partial thromboplastin time in 3 different ranges: subtherapeutic, normal therapeutic, and supratherapeutic. METHODS Retrospective data from 2 intensive care unit research databases (Multiparameter Intelligent Monitoring in Intensive Care III, MIMIC-III; e–Intensive Care Unit Collaborative Research Database, eICU) were used for the analysis. Candidate machine learning models (random forest, support vector machine, adaptive boosting, extreme gradient boosting, and shallow neural network) were compared in 3 patient groups to evaluate the classification performance for predicting the subtherapeutic, normal therapeutic, and supratherapeutic patient states. The model results were evaluated using precision, recall, F1 score, and accuracy. RESULTS Data from the MIMIC-III database (n=2789 patients) and from the eICU database (n=575 patients) were used. In 3-class classification, the shallow neural network algorithm performed the best (F1 scores of 87.26%, 85.98%, and 87.55% for data sets 1, 2, and 3, respectively). The shallow neural network algorithm also achieved the highest F1 scores within the individual patient state groups: subtherapeutic (data set 1: 79.35%; data set 2: 83.67%; data set 3: 83.33%), normal therapeutic (data set 1: 93.15%; data set 2: 87.76%; data set 3: 84.62%), and supratherapeutic (data set 1: 88.00%; data set 2: 86.54%; data set 3: 95.45%). CONCLUSIONS The most appropriate model for predicting the effects of heparin treatment was found by comparing multiple machine learning models and can be used to further guide optimal heparin dosing. Using multicenter intensive care unit data, our study demonstrates the feasibility of predicting the outcomes of heparin treatment using data-driven methods, and thus, how machine learning–based models can be used to optimize and personalize heparin dosing to improve patient safety. Manual analysis and validation suggested that the model outperformed standard practice heparin treatment dosing.
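The per-class F1 scores reported above are the harmonic mean of precision and recall, computable from prediction counts for each therapeutic state. A minimal sketch (the counts are illustrative):

```python
def f1_score(tp, fp, fn):
    """F1 score for a single class from true positives, false positives,
    and false negatives: the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 80 correct, 10 false alarms, 10 misses for one class
print(f1_score(80, 10, 10))   # → 0.888...
```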


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Jeffrey Micher

We present a method for building a morphological generator from the output of an existing analyzer for Inuktitut, in the absence of a two-way finite-state transducer which would normally provide this functionality. We make use of a sequence-to-sequence neural network which "translates" underlying Inuktitut morpheme sequences into surface character sequences. The neural network uses only the previous and the following morphemes as context. We report a morpheme accuracy of approximately 86%. We are able to increase this accuracy slightly by passing deep morphemes directly to the output for unknown morphemes. We do not see significant improvement when increasing the training data set size, and postulate possible causes for this.
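The morpheme accuracy reported above can be computed by comparing generated forms against the gold sequence position by position. A minimal sketch, assuming the two sequences are already aligned and of equal length (the paper's exact scoring procedure may differ):

```python
def morpheme_accuracy(predicted, gold):
    """Fraction of aligned positions where the predicted and gold
    morphemes agree (assumes equal-length, pre-aligned sequences)."""
    assert len(predicted) == len(gold)
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# Example with invented morpheme strings: 3 of 4 positions agree
print(morpheme_accuracy(["a", "b", "c", "d"], ["a", "b", "x", "d"]))  # → 0.75
```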


2019 ◽  
Vol 2019 (02) ◽  
pp. 89-98
Author(s):  
Vijayakumar T

Predicting the category of a tumor and the type of cancer at an early stage remains an essential process for identifying the severity of the disease and the treatments available for it. The neural network, which functions similarly to the human nervous system, is widely utilized in tumor investigation and cancer prediction. This paper presents an analysis of the performance of neural networks such as the FNN (Feed-Forward Neural Network), RNN (Recurrent Neural Network), and CNN (Convolutional Neural Network) in investigating tumors and predicting cancer. The results obtained by evaluating the neural networks on the Wisconsin original breast cancer data set show that the CNN provides 43% better prediction than the FNN and 25% better prediction than the RNN.


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7709
Author(s):  
Serena Cerfoglio ◽  
Manuela Galli ◽  
Marco Tarabini ◽  
Filippo Bertozzi ◽  
Chiarella Sforza ◽  
...  

Nowadays, the use of wearable inertial-based systems together with machine learning methods opens new pathways to assess athletes’ performance. In this paper, we developed a neural network-based approach for the estimation of the Ground Reaction Forces (GRFs) and the three-dimensional knee joint moments during the first landing phase of the Vertical Drop Jump. Data were simultaneously recorded from three commercial inertial units and an optoelectronic system during the execution of 112 jumps performed by 11 healthy participants. Data were processed and sorted to obtain a time-matched dataset, and a non-linear autoregressive with external input neural network was implemented in Matlab. The network was trained through a train-test split technique, and performance was evaluated in terms of Root Mean Square Error (RMSE). The network was able to estimate the time course of GRFs and joint moments with a mean RMSE of 0.02 N/kg and 0.04 N·m/kg, respectively. Despite the comparatively restricted data set and slight boundary errors, the results supported the use of the developed method to estimate joint kinetics, opening a new perspective for the development of an in-field analysis method.
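The RMSE used above to evaluate the estimated GRF and joint-moment time courses is straightforward to compute; a minimal sketch (the arrays stand in for the network output and the optoelectronic reference):

```python
import numpy as np

def rmse(estimated, reference):
    """Root Mean Square Error between an estimated and a reference
    time course (e.g. in N/kg for GRFs, N·m/kg for joint moments)."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.sqrt(np.mean((estimated - reference) ** 2))
```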


Terminology ◽  
2022 ◽  
Author(s):  
Ayla Rigouts Terryn ◽  
Véronique Hoste ◽  
Els Lefever

Abstract: As with many tasks in natural language processing, automatic term extraction (ATE) is increasingly approached as a machine learning problem. So far, most machine learning approaches to ATE broadly follow the traditional hybrid methodology, by first extracting a list of unique candidate terms, and classifying these candidates based on the predicted probability that they are valid terms. However, with the rise of neural networks and word embeddings, the next development in ATE might be towards sequential approaches, i.e., classifying each occurrence of each token within its original context. To test the validity of such approaches for ATE, two sequential methodologies were developed, evaluated, and compared: one feature-based conditional random fields classifier and one embedding-based recurrent neural network. An additional comparison was added with a machine learning interpretation of the traditional approach. All systems were trained and evaluated on identical data in multiple languages and domains to identify their respective strengths and weaknesses. The sequential methodologies were proven to be valid approaches to ATE, and the neural network even outperformed the more traditional approach. Interestingly, a combination of multiple approaches can outperform all of them separately, showing new ways to push the state-of-the-art in ATE.
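The sequential formulation described above labels every token occurrence in its sentence context, typically with an IOB-style scheme. A minimal illustration (sentence and term spans invented) of turning gold term spans into the token-level labels such a classifier is trained on:

```python
# Turn gold term spans into token-level IOB labels for sequential ATE.
sentence = ["automatic", "term", "extraction", "finds", "terms"]
term_spans = [(0, 3), (4, 5)]   # gold terms as (start, end) token indices

labels = ["O"] * len(sentence)  # "O" = outside any term
for start, end in term_spans:
    labels[start] = "B"                # "B" = first token of a term
    for i in range(start + 1, end):
        labels[i] = "I"                # "I" = inside a term

print(labels)   # → ['B', 'I', 'I', 'O', 'B']
```

Both the CRF and the recurrent network then predict one such label per token in context, rather than classifying a deduplicated candidate list.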

