Architecture, Generative Model, and Deep Reinforcement Learning for IoT Applications: Deep Learning Perspective

Author(s):  
Shaveta Malik ◽  
Amit Kumar Tyagi ◽  
Sameer Mahajan

Author(s):  
Sangseok Yun ◽  
Jae-Mo Kang ◽  
Jeongseok Ha ◽  
Sangho Lee ◽  
Dong-Woo Ryu ◽  
...  

2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Tiago Pereira ◽  
Maryam Abbasi ◽  
Bernardete Ribeiro ◽  
Joel P. Arrais

Abstract In this work, we explore the potential of deep learning to streamline the process of identifying new potential drugs through the computational generation of molecules with interesting biological properties. Two deep neural networks compose our targeted generation framework: the Generator, which is trained to learn the building rules of valid molecules using the SMILES string notation, and the Predictor, which evaluates the newly generated compounds by predicting their affinity for the desired target. The Generator is then optimized through Reinforcement Learning to produce molecules with bespoke properties. The innovation of this approach is the exploratory strategy applied during the reinforcement training process, which seeks to add novelty to the generated compounds. This training strategy employs two Generators interchangeably to sample new SMILES: the initially trained model, which remains fixed, and a copy of it, which is updated during training to uncover the most promising molecules. The evolution of the reward assigned by the Predictor determines how often each one is employed to select the next token of the molecule. This strategy balances the need to acquire more information about the chemical space against the need to sample new molecules using the experience gained so far. To demonstrate the effectiveness of the method, the Generator is trained to design molecules with an optimized partition coefficient and high inhibitory power against the adenosine $$A_{2A}$$ and $$\kappa$$ opioid receptors. The results reveal that the model can effectively steer the newly generated molecules in the desired direction. More importantly, it was possible to find promising sets of unique and diverse molecules, which was the main purpose of the newly implemented strategy.
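The token-level switching between the two Generators can be illustrated with a short sketch. The snippet below is a minimal, assumption-laden illustration rather than the authors' code: the frozen pretrained Generator, the RL-updated copy, the Predictor reward, and the schedule that shifts sampling toward the frozen model when the reward stagnates are all placeholder stand-ins.

```python
# Illustrative sketch (not the paper's implementation): alternate between a
# frozen pretrained Generator and a trainable copy when sampling SMILES
# tokens, with the mixing probability driven by the recent evolution of the
# Predictor's reward. All function names and the reward model are assumptions.
import random

VOCAB = ["C", "c", "N", "O", "(", ")", "=", "1", "<EOS>"]

def sample_token(weights):
    """Sample one SMILES token from a categorical distribution."""
    return random.choices(VOCAB, weights=weights, k=1)[0]

def frozen_generator(prefix):
    """Stand-in for the fixed, pretrained Generator's next-token distribution."""
    return [1.0] * len(VOCAB)          # uniform placeholder

def updated_generator(prefix):
    """Stand-in for the copy being fine-tuned with Reinforcement Learning."""
    return [1.0] * len(VOCAB)          # uniform placeholder

def predictor_reward(smiles):
    """Stand-in for the Predictor's affinity-based reward."""
    return random.random()

def generate_smiles(p_frozen, max_len=60):
    """Build a SMILES string token by token, choosing at each step which
    Generator proposes the next token (frozen with probability p_frozen)."""
    tokens = []
    for _ in range(max_len):
        gen = frozen_generator if random.random() < p_frozen else updated_generator
        tok = sample_token(gen(tokens))
        if tok == "<EOS>":
            break
        tokens.append(tok)
    return "".join(tokens)

# Exploration schedule: if the average reward stagnates, lean more on the
# frozen Generator to re-explore the chemical space; otherwise exploit the
# updated one. The thresholds and step sizes are arbitrary illustrative values.
p_frozen, prev_avg = 0.5, 0.0
for step in range(100):
    batch = [generate_smiles(p_frozen) for _ in range(16)]
    avg_reward = sum(predictor_reward(s) for s in batch) / len(batch)
    p_frozen = min(0.9, p_frozen + 0.05) if avg_reward <= prev_avg else max(0.1, p_frozen - 0.05)
    prev_avg = avg_reward
    # ... a policy-gradient update of updated_generator would go here ...
```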


2021 ◽  
pp. 102685
Author(s):  
Parjanay Sharma ◽  
Siddhant Jain ◽  
Shashank Gupta ◽  
Vinay Chamola

Author(s):  
Eduardo F. Morales ◽  
Rafael Murrieta-Cid ◽  
Israel Becerra ◽  
Marco A. Esquivel-Basaldua

2021 ◽  
Author(s):  
Hanxiao Xu ◽  
Jie Liang ◽  
Wenchaun Zang

Abstract This paper combines a deep Q network (DQN) with long short-term memory (LSTM) and proposes a novel hybrid deep learning method called the DQN-LSTM framework. The proposed method addresses the prediction of five Chinese agricultural commodity futures prices over different time horizons. DQN-LSTM applies the policy-improvement strategy of deep reinforcement learning to the structural parameter optimization of a deep recurrent network, achieving an organic integration of the two types of deep learning algorithms. The new framework can optimize and learn its parameters through its own iterations, improving prediction performance, and shows strong potential for financial prediction and other applications. The performance of the proposed method is evaluated by comparing DQN-LSTM with traditional prediction methods such as the auto-regressive integrated moving average (ARIMA), support vector regression (SVR), and LSTM. The results show that DQN-LSTM can effectively optimize the structural parameters of a traditional LSTM through the policy iteration of the deep reinforcement learning algorithm, which contributes to better long- and short-term prediction accuracy. In particular, the longer the prediction period, the more pronounced the accuracy advantage of DQN-LSTM.
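To make the idea concrete, the sketch below shows one plausible (assumed, not the paper's) way a reinforcement learning loop can search LSTM structural parameters: each action picks a (hidden size, number of layers) configuration, the reward is the negative validation error on a synthetic price series, and a simple Q-table stands in for the full deep Q network described in the abstract.

```python
# Minimal sketch (assumptions, not the paper's implementation): RL-driven
# search over LSTM structural parameters, rewarded by validation accuracy.
import itertools
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "futures price" series standing in for the real data.
series = torch.sin(torch.linspace(0, 20, 400)) + 0.1 * torch.randn(400)

def make_windows(data, lookback=10):
    """Turn the series into (lookback, 1) input windows and next-step targets."""
    xs = torch.stack([data[i:i + lookback] for i in range(len(data) - lookback)])
    ys = data[lookback:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)

X, y = make_windows(series)
X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]

class PriceLSTM(nn.Module):
    def __init__(self, hidden, layers):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

def evaluate(hidden, layers, epochs=20):
    """Train briefly and return negative validation MSE as the RL reward."""
    model = PriceLSTM(hidden, layers)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X_tr), y_tr).backward()
        opt.step()
    with torch.no_grad():
        return -loss_fn(model(X_va), y_va).item()

# Action space: candidate structural parameters of the LSTM (illustrative grid).
actions = list(itertools.product([8, 16, 32], [1, 2]))  # (hidden, layers)
q = {a: 0.0 for a in actions}

# Epsilon-greedy, bandit-style Q-learning over configurations.
eps, alpha = 0.3, 0.5
for step in range(12):
    a = (max(q, key=q.get) if torch.rand(1).item() > eps
         else actions[torch.randint(len(actions), (1,)).item()])
    reward = evaluate(*a)
    q[a] += alpha * (reward - q[a])

best = max(q, key=q.get)
print("selected LSTM configuration (hidden, layers):", best)
```

The single-state Q-table keeps the sketch short; the framework in the abstract instead learns a deep Q network, but the loop structure (propose a structure, train, score, update the value estimate) is the same.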

