A deep learning model for gas storage optimization

Author(s):  
Nicolas Curin ◽  
Michael Kettler ◽  
Xi Kleisinger-Yu ◽  
Vlatka Komaric ◽  
Thomas Krabichler ◽  
...  

Abstract To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon. In this article, we utilize techniques inspired by reinforcement learning in order to optimize the operation plans of underground natural gas storage facilities. We provide a theoretical framework and assess the performance of the proposed method numerically in comparison to a state-of-the-art least-squares Monte Carlo approach. Due to the inherent intricacy originating from the high-dimensional forward market as well as the numerous constraints and frictions, the optimization exercise can hardly be tackled by means of traditional techniques.
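The abstract gives no implementation details, but the underlying control problem can be caricatured with a toy backward dynamic program over a discretized inventory and a known price path. Everything here is an illustrative assumption (deterministic prices, unit injection/withdrawal rate, integer inventory, no frictions); the paper's deep learning approach is precisely what handles the stochastic, high-dimensional, frictional case that this sketch cannot.

```python
# Toy gas-storage valuation by backward dynamic programming.
# Assumptions (not from the paper): deterministic daily prices, unit
# injection/withdrawal rate, integer inventory levels, no frictions.

def storage_value(prices, capacity=1):
    """Max profit from buy/hold/sell decisions, starting with empty storage."""
    # value[level] = best achievable profit from now on, given inventory `level`
    n_levels = capacity + 1
    value = [0.0] * n_levels  # terminal condition: leftover gas is worthless
    for p in reversed(prices):
        new_value = []
        for level in range(n_levels):
            best = value[level]                      # hold
            if level < capacity:                     # inject (buy 1 unit at p)
                best = max(best, -p + value[level + 1])
            if level > 0:                            # withdraw (sell 1 unit at p)
                best = max(best, p + value[level - 1])
            new_value.append(best)
        value = new_value
    return value[0]  # start with empty storage

print(storage_value([2.0, 1.0, 3.0]))  # buy at 1, sell at 3 -> 2.0
```

Replacing the deterministic price list with simulated forward curves and the table with a neural network gives the flavor of the reinforcement-learning formulation.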

2021 ◽  
Vol 15 (8) ◽  
pp. 898-911
Author(s):  
Yongqing Zhang ◽  
Jianrong Yan ◽  
Siyu Chen ◽  
Meiqin Gong ◽  
Dongrui Gao ◽  
...  

Rapid advances in biological research over recent years have significantly enriched biological and medical data resources. Deep learning-based techniques have been successfully utilized to process data in this field, and they have exhibited state-of-the-art performance even on high-dimensional, unstructured, and black-box biological data. The aim of the current study is to provide an overview of deep learning-based techniques used in biology and medicine and their state-of-the-art applications. In particular, we introduce the fundamentals of deep learning and then review the success of applying such methods to bioinformatics, biomedical imaging, biomedicine, and drug discovery. We also discuss the challenges and limitations of this field, and outline possible directions for further research.


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Rao ◽  
Y Li ◽  
R Ramakrishnan ◽  
A Hassaine ◽  
D Canoy ◽  
...  

Abstract Background/Introduction Predicting incident heart failure has been challenging. Deep learning models, when applied to rich electronic health records (EHR), offer some theoretical advantages. However, empirical evidence for their superior performance is limited, and they commonly remain uninterpretable, hampering their wider use in medical practice. Purpose We developed a deep learning framework for more accurate yet interpretable prediction of incident heart failure. Methods We used longitudinally linked EHR from practices across England, involving 100,071 patients, 13% of whom had been diagnosed with incident heart failure during follow-up. We investigated the predictive performance of a novel transformer deep learning model, “Transformer for Heart Failure” (BEHRT-HF), and validated it using both an external held-out dataset and an internal five-fold cross-validation mechanism, using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Predictor groups included all outpatient and inpatient diagnoses within their temporal context, medications, age, and calendar year for each encounter. Treating diagnoses as anchors, we alternately removed the other modalities (ablation study) to understand the importance of individual modalities to the performance of incident heart failure prediction. Using perturbation-based techniques, we investigated the importance of associations between selected predictors and heart failure to improve model interpretability. Results BEHRT-HF achieved high accuracy, with AUROC 0.932 and AUPRC 0.695 for external validation, and AUROC 0.933 (95% CI: 0.928, 0.938) and AUPRC 0.700 (95% CI: 0.682, 0.718) for internal validation. BEHRT-HF outperformed the state-of-the-art recurrent deep learning model RETAIN-EX by 0.079 in AUPRC and 0.030 in AUROC. The ablation study showed that medications were strong predictors, and calendar year was more important than age.
Utilising perturbation, we identified and ranked the intensity of associations between diagnoses and heart failure. For instance, the method showed that established risk factors, including myocardial infarction, atrial fibrillation and flutter, and hypertension, were all strongly associated with the heart failure prediction. Additionally, when the population was stratified into different age groups, the incident occurrence of a given disease generally contributed more to heart failure prediction when diagnosed at younger ages than later in life. Conclusions Our state-of-the-art deep learning framework outperforms the predictive performance of existing models whilst enabling a data-driven way of exploring the relative contribution of a range of risk factors in the context of other temporal information. Funding Acknowledgement Type of funding source: Private grant(s) and/or Sponsorship. Main funding source(s): National Institute for Health Research, Oxford Martin School, Oxford Biomedical Research Centre
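The abstract describes the perturbation analysis only at a high level. As a stand-in for BEHRT-HF's exact procedure (which is not specified here), the general idea can be illustrated with permutation importance: shuffle one predictor at a time and measure how much accuracy drops. The synthetic two-feature data and threshold model below are invented for illustration.

```python
import random

random.seed(0)

# Synthetic records: the label depends only on feature 0 (feature 1 is noise)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(rows):
    """A toy 'fitted' model that thresholds feature 0."""
    return [1 if r[0] > 0.5 else 0 for r in rows]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

baseline = accuracy(model(X), y)  # 1.0 by construction
importances = []
for j in range(2):
    col = [row[j] for row in X]
    random.shuffle(col)                     # break the feature-label link
    X_perm = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
    importances.append(baseline - accuracy(model(X_perm), y))

print(importances)  # feature 0: large drop; feature 1: no drop
```

Ranking predictors by this drop mirrors how the paper ranks the intensity of diagnosis-heart-failure associations.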


2018 ◽  
Vol 24 (4) ◽  
pp. 225-247 ◽  
Author(s):  
Xavier Warin

Abstract A new method based on nesting Monte Carlo is developed to solve high-dimensional semi-linear PDEs. Depending on the type of non-linearity, different schemes are proposed and theoretically studied: variance errors are given, and it is shown that the bias of the schemes can be controlled. The limitation of the method is that the maturity or the Lipschitz constants of the non-linearity should not be too high, in order to avoid an explosion of the computational time. Many numerical results are given in high dimension for cases where analytical solutions are available or where some solutions can be computed by deep learning methods.
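Warin's schemes are considerably more refined, but the basic nesting idea can be sketched (as an assumption, not the paper's exact estimator) for a semi-linear PDE u_t + ½u_xx + f(u) = 0 with terminal condition u(T, x) = g(x): by Feynman-Kac, u(t, x) equals the expectation of g at a Brownian endpoint plus a time integral of f(u), which can be randomized with a uniform time and estimated by a recursive inner Monte Carlo. Sample sizes and the sanity-check example are illustrative.

```python
import math
import random

random.seed(1)

def nested_mc(t, x, T, f, g, depth, n_outer=2000, n_inner=40):
    """Nested Monte Carlo sketch for u_t + 0.5*u_xx + f(u) = 0, u(T,.) = g.
    Uses u(t,x) = E[g(x + W_{T-t})] + (T-t) * E[f(u(S, x + W_{S-t}))],
    with S uniform on (t, T); the inner u is estimated by a recursive call.
    The recursion depth controls the bias for genuinely non-linear f."""
    if depth == 0:  # innermost level: drop the non-linear part
        return sum(g(x + math.sqrt(T - t) * random.gauss(0, 1))
                   for _ in range(n_outer)) / n_outer
    total = 0.0
    for _ in range(n_outer):
        total += g(x + math.sqrt(T - t) * random.gauss(0, 1))  # terminal part
        s = random.uniform(t, T)                 # randomized time integral
        xs = x + math.sqrt(s - t) * random.gauss(0, 1)
        u_inner = nested_mc(s, xs, T, f, g, depth - 1, n_inner, n_inner)
        total += (T - t) * f(u_inner)            # non-linear part
    return total / n_outer

# Sanity check with a constant f, where the exact solution is u(0,x) = x + T:
u0 = nested_mc(0.0, 0.5, 1.0, f=lambda u: 1.0, g=lambda x: x, depth=1)
print(u0)  # close to 1.5
```

The computational cost is multiplicative in the per-level sample sizes, which is the explosion the abstract warns about for large maturities or Lipschitz constants.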


Author(s):  
Zhaoliang He ◽  
Hongshan Li ◽  
Zhi Wang ◽  
Shutao Xia ◽  
Wenwu Zhu

With the growth of computer vision-based applications, an explosive number of images has been uploaded to cloud servers that host such online computer vision algorithms, usually in the form of deep learning models. JPEG has become the de facto compression and encapsulation method for images. However, the standard JPEG configuration does not always perform well for compressing images that are to be processed by a deep learning model: for example, the standard quality level of JPEG leads to 50% size overhead (compared with the best quality level selection) on ImageNet under the same inference accuracy in popular computer vision models (e.g., InceptionNet and ResNet). Even knowing this, designing a better JPEG configuration for online computer vision-based services remains extremely challenging. First, cloud-based computer vision models are usually a black box to end-users; thus, it is challenging to design a JPEG configuration without knowing their model structures. Second, the “optimal” JPEG configuration is not fixed; instead, it is determined by confounding factors, including the characteristics of the input images and the model, the expected accuracy and image size, and so forth. In this article, we propose a reinforcement learning (RL)-based adaptive JPEG configuration framework, AdaCompress. In particular, we design an edge (i.e., user-side) RL agent that learns the optimal compression quality level for achieving an expected inference accuracy and upload image size, only from the online inference results, without knowing the details of the model structures. Furthermore, we design an explore-exploit mechanism that lets the framework quickly switch agents when it detects a performance degradation, mainly due to input changes (e.g., images captured during daytime versus at night).
Our evaluation experiments using real-world online computer vision-based APIs from Amazon Rekognition, Face++, and Baidu Vision show that our approach outperforms existing baselines by reducing the size of images by one-half to one-third while overall classification accuracy decreases only slightly. Meanwhile, AdaCompress adaptively re-trains or re-loads the RL agent promptly to maintain performance.
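AdaCompress trains a deep RL agent against live cloud APIs; the core explore-exploit loop can be caricatured with a tiny epsilon-greedy bandit over JPEG quality levels. The accuracy and size responses below are invented stand-ins for the real feedback (inference results from the cloud model, bytes from the JPEG encoder), and the reward trade-off weight is arbitrary.

```python
import random

random.seed(7)
qualities = [95, 75, 50, 25]

# Invented stand-ins for the real feedback signals:
def accuracy(q):
    return {95: 0.76, 75: 0.75, 50: 0.72, 25: 0.60}[q]

def rel_size(q):
    return {95: 1.00, 75: 0.45, 50: 0.30, 25: 0.18}[q]

def reward(q):
    return accuracy(q) - 0.3 * rel_size(q)  # trade accuracy against upload size

# epsilon-greedy agent: keep a running mean reward per quality level
counts, estimates = {}, {}
for q in qualities:          # initialize by trying each level once
    counts[q] = 1
    estimates[q] = reward(q)

for _ in range(200):
    if random.random() < 0.2:                               # explore
        q = random.choice(qualities)
    else:                                                   # exploit
        q = max(qualities, key=lambda lvl: estimates[lvl])
    counts[q] += 1
    estimates[q] += (reward(q) - estimates[q]) / counts[q]  # running mean

best = max(qualities, key=lambda lvl: estimates[lvl])
print(best)  # the level with the best simulated trade-off
```

The paper's agent replaces the lookup table with a deep network conditioned on image features, and its switching mechanism re-enters the explore phase when the reward distribution drifts.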


2021 ◽  
Vol 14 (11) ◽  
pp. 1950-1963
Author(s):  
Jie Liu ◽  
Wenqian Dong ◽  
Qingqing Zhou ◽  
Dong Li

Cardinality estimation is a fundamental and critical problem in databases. Recently, many estimators based on deep learning have been proposed to solve this problem, and they have achieved promising results. However, these estimators struggle to provide accurate results for complex queries, because they do not capture real inter-column and inter-table correlations. Furthermore, none of these estimators contain uncertainty information about their estimations. In this paper, we present a join cardinality estimator called Fauce. Fauce learns the correlations across all columns and all tables in the database. It also provides uncertainty information for each estimation. Among all studied learned estimators, our results are promising: (1) Fauce is a lightweight estimator with 10× faster inference speed than the state-of-the-art estimator; (2) Fauce is robust to complex queries, providing 1.3×–6.7× smaller estimation errors for complex queries compared with the state-of-the-art estimator; (3) to the best of our knowledge, Fauce is the first estimator that incorporates uncertainty information for cardinality estimation into a deep learning model.
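The abstract does not say how Fauce quantifies uncertainty; one common way to attach uncertainty to a learned estimator, shown here purely as an illustrative stand-in, is an ensemble: train several models on bootstrap resamples and report the spread of their predictions, which widens away from the training distribution.

```python
import random

random.seed(3)

# Synthetic "training data": y = 2x + noise on x in [0, 2]
data = [(x, 2.0 * x + random.gauss(0, 0.3))
        for x in (random.uniform(0, 2) for _ in range(80))]

def fit_line(pts):
    """Ordinary least squares for a 1-D linear model y = a + b*x."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    return my - b * mx, b

# Bootstrap ensemble: each member is fit on a resampled copy of the data
ensemble = [fit_line([random.choice(data) for _ in data]) for _ in range(30)]

def predict(x):
    preds = [a + b * x for a, b in ensemble]
    mean = sum(preds) / len(preds)
    std = (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5
    return mean, std  # std is the uncertainty signal

print(predict(1.0))   # near the training data: small std
print(predict(10.0))  # far outside the training data: larger std
```

A query optimizer can use such an uncertainty signal to fall back to conservative plans when the estimator is extrapolating.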


Author(s):  
Yang Liu ◽  
Yachao Yuan ◽  
Jing Liu

Abstract Automatic defect classification is vital to ensure product quality, especially in steel production. In the real world, the number of collected samples with labels is limited due to high labor costs, and the gathered dataset is usually imbalanced, making accurate steel defect classification very challenging. In this paper, a novel deep learning model for imbalanced multi-label surface defect classification, named ImDeep, is proposed. It can be deployed easily in steel production lines to identify different defect types on the steel's surface. ImDeep incorporates three key techniques, i.e., Imbalanced Sampler, Fussy-FusionNet, and Transfer Learning. It improves the model's multi-label classification performance and reduces the model's complexity on small datasets while keeping latency low. The performance of different fusion strategies and of the three key techniques of ImDeep is verified. Simulation results prove that ImDeep achieves better performance than the state of the art on public datasets of varied sizes. Specifically, ImDeep achieves about 97% accuracy in steel surface defect classification over a small imbalanced dataset with low latency, about 10% higher than the state of the art.
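The abstract does not detail the Imbalanced Sampler; a common realization, sketched here as an assumption (the defect labels and counts are invented), weights each training example inversely to its class frequency so that rare defect types are drawn as often as common ones:

```python
import random
from collections import Counter

random.seed(5)

# Toy imbalanced dataset of (sample_id, defect_label) pairs
dataset = ([(i, "scratch") for i in range(100)]
           + [(i, "crack") for i in range(100, 110)]
           + [(i, "pit") for i in range(110, 115)])

# Weight each sample inversely to its class frequency
freq = Counter(label for _, label in dataset)
weights = [1.0 / freq[label] for _, label in dataset]

# Draw a "balanced" stream: each class now appears with roughly equal probability
batch = random.choices(dataset, weights=weights, k=3000)
drawn = Counter(label for _, label in batch)
for label in freq:
    print(label, drawn[label] / len(batch))  # each close to 1/3
```

Feeding batches drawn this way keeps the rare classes from being drowned out during training, which is the problem the sampler addresses.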


2020 ◽  
Vol 12 (2) ◽  
pp. 21-34
Author(s):  
Mostefai Abdelkader

In recent years, increasing attention has been paid to sentiment analysis on microblogging platforms such as Twitter. Sentiment analysis refers to the task of detecting whether a textual item (e.g., a tweet) contains an opinion about a topic. This paper proposes a probabilistic deep learning approach for sentiment analysis. The deep learning model used is a convolutional neural network (CNN). The main contribution of this approach is a new probabilistic representation of the text to be fed as input to the CNN. This representation is a matrix that stores, for each word composing the message, the probability that it belongs to a positive class and the probability that it belongs to a negative class. The proposed approach is evaluated on four well-known datasets: HCR, OMD, STS-Gold, and a dataset provided by the SemEval-2017 workshop. The results of the experiments show that the proposed approach competes with state-of-the-art sentiment analyzers and has the potential to detect sentiments from textual data in an effective manner.
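The input representation can be sketched directly from that description: estimate, for each word, the probability of the positive and negative classes from a labeled corpus, then stack one row per word of the message. The tiny corpus and the Laplace-smoothed estimate below are illustrative assumptions, not the paper's exact estimator.

```python
from collections import Counter

# Tiny illustrative labeled corpus (not the paper's data): (text, label)
corpus = [("good great movie", 1), ("great fun", 1),
          ("bad boring movie", 0), ("bad plot", 0)]

pos_counts, neg_counts = Counter(), Counter()
for text, label in corpus:
    (pos_counts if label == 1 else neg_counts).update(text.split())

def word_probs(word, alpha=1.0):
    """Laplace-smoothed probability that `word` belongs to the positive
    class, plus the complementary negative-class probability."""
    p, n = pos_counts[word], neg_counts[word]
    p_pos = (p + alpha) / (p + n + 2.0 * alpha)
    return p_pos, 1.0 - p_pos

def to_matrix(message):
    """One row per word: [P(positive), P(negative)] -- the CNN's input."""
    return [list(word_probs(w)) for w in message.split()]

print(to_matrix("good movie"))  # "good" skews positive; "movie" is neutral
```

The resulting fixed-width matrix plays the role that pretrained word embeddings play in other CNN sentiment models.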


2020 ◽  
Author(s):  
Sebastian Bomberg ◽  
Neha Goel

The presented work focuses on disaster risk management of cities that are prone to natural hazards. Based on aerial imagery of regions in the Caribbean islands captured by drones, we show how to process the imagery and automatically identify the roof material of individual structures using a deep learning model. Deep learning refers to a machine learning technique using deep artificial neural networks. Unlike other techniques, deep learning does not necessarily require feature engineering but may process raw data directly. The outcome of this assessment can be used for steering risk mitigation measures, creating hazard maps, or advising municipal bodies or aid organizations on investing their resources in rebuilding reinforcements. The data at hand consist of images in BigTIFF format and GeoJSON files including the building footprint, a unique building ID, and roof material labels. We demonstrate how to use MATLAB and its toolboxes for processing large image files that do not fit in computer memory. Based on this, we train a deep learning model to classify the roof material present in the images. We achieve this by subjecting a pretrained ResNet-18 neural network to transfer learning. Training is further accelerated by means of GPU computing. The accuracy achieved by this baseline model, computed on a validation data set, is 74%. Further tuning of hyperparameters is expected to improve accuracy significantly.
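The transfer-learning workflow (pretrained ResNet-18 with a retrained head) can be mirrored conceptually: freeze a feature extractor and train only a small classification head. The sketch below substitutes random "pretrained" features and a logistic head in plain NumPy, so it shows the mechanics only, not the MATLAB pipeline or real roof-material data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen pretrained features: in the real workflow these come
# from the truncated ResNet-18; here they are random but linearly separable.
n, d = 200, 16
features = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
labels = (features @ true_w > 0).astype(float)  # binary roof-material stand-in

# Trainable classification head (the only part updated in transfer learning)
w = np.zeros(d)
b = 0.0

def loss_and_grad(w, b):
    z = features @ w + b
    p = 1.0 / (1.0 + np.exp(-z))  # sigmoid
    loss = -np.mean(labels * np.log(p + 1e-9)
                    + (1 - labels) * np.log(1 - p + 1e-9))
    gw = features.T @ (p - labels) / n
    gb = np.mean(p - labels)
    return loss, gw, gb

initial_loss, _, _ = loss_and_grad(w, b)
for _ in range(300):              # gradient descent on the head only
    _, gw, gb = loss_and_grad(w, b)
    w -= 0.5 * gw
    b -= 0.5 * gb

final_loss, _, _ = loss_and_grad(w, b)
print(initial_loss, final_loss)   # the loss drops markedly
```

Because only the head is trained, the parameter count and training cost stay small even when the frozen backbone is large, which is why transfer learning works with modest labeled datasets like the one described.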


2021 ◽  
Author(s):  
Xuhan Liu ◽  
Kai Ye ◽  
Herman Van Vlijmen ◽  
Michael T. M. Emmerich ◽  
Adriaan P. IJzerman ◽  
...  

In polypharmacology, ideal drugs are required to bind to multiple specific targets to enhance efficacy or to reduce resistance formation. Although deep learning has achieved breakthroughs in drug discovery, most of its applications focus on a single drug target when generating drug-like active molecules, even though drug molecules often interact with more than one target, which can have desired (polypharmacology) or undesired (toxicity) effects. In a previous study we proposed a method named DrugEx that integrates an exploration strategy into RNN-based reinforcement learning to improve the diversity of the generated molecules. Here, we extended our DrugEx algorithm with multi-objective optimization to generate drug molecules towards more than one specific target (in this study, two adenosine receptors, A1AR and A2AAR, and the potassium ion channel hERG). In our model, we applied an RNN as the agent and machine learning predictors as the environment, both of which were pre-trained in advance and then interplayed under the reinforcement learning framework. The concept of evolutionary algorithms was merged into our method such that crossover and mutation operations were implemented by the same deep learning model as the agent. During the training loop, the agent generates a batch of SMILES-based molecules. Subsequently, scores for all objectives provided by the environment are used to construct Pareto ranks of the generated molecules with non-dominated sorting and Tanimoto-based crowding distance algorithms. Here, we adopted GPU acceleration to speed up the process of Pareto optimization. The final reward of each molecule is calculated based on the Pareto ranking with the ranking selection algorithm. The agent is trained under the guidance of the reward to make sure it can generate more desired molecules after convergence of the training process.
All in all, we demonstrate the generation of compounds with diverse predicted selectivity profiles toward multiple targets, offering the potential of higher efficacy and lower toxicity.
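The Pareto machinery the abstract names is standard. A minimal non-dominated sorting routine over objective vectors (to be maximized) looks like the sketch below; the two-objective score tuples are invented examples, and DrugEx's Tanimoto-based crowding distances, which break ties within a front, are omitted.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Split objective vectors (to maximize) into successive Pareto fronts."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Invented molecule scores on two objectives, e.g. (affinity_1, affinity_2):
scores = [(1, 1), (2, 2), (3, 1), (1, 3), (0, 0)]
print(non_dominated_sort(scores))
# front 0: (2,2),(3,1),(1,3); front 1: (1,1); front 2: (0,0)
```

Mapping front membership to a reward (earlier fronts, higher reward) is what lets a scalar-reward RL agent optimize several objectives at once.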

