Novel Potential Inhibitors Against SARS-CoV-2 Using Artificial Intelligence

2020 ◽
Author(s):
Madhusudan Verma

Based on a recently solved structure (PDB ID: 6LU7), we developed a novel advanced deep Q-learning network with fragment-based drug design (ADQN-FBDD), combined with a variational autoencoder trained with KL annealing and cyclical annealing, to generate potential lead compounds targeting SARS-CoV-2 3CLpro. A structure-based optimization policy (SBOP) is used in the reinforcement learning. We chose a variational autoencoder because the compounds it generates do not deviate far from known inhibitors; however, since VAEs suffer from KL vanishing, we apply KL annealing and cyclical annealing to address this issue.
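
For readers unfamiliar with KL annealing, below is a minimal sketch of a cyclical annealing schedule of the kind the abstract refers to; the schedule shape, cycle count, and ramp ratio are illustrative assumptions, not the paper's exact settings.

```python
def kl_weight(step, total_steps, n_cycles=4, ratio=0.5):
    """Cyclical KL-annealing weight for a VAE loss.

    The weight ramps linearly from 0 to 1 over the first `ratio`
    fraction of each cycle, then stays at 1, so the decoder cannot
    permanently ignore the latent code (the "KL vanishing" problem).
    Monotonic annealing is the special case n_cycles=1.
    """
    cycle_len = total_steps / n_cycles
    pos = (step % cycle_len) / cycle_len   # position within the cycle, in [0, 1)
    return min(pos / ratio, 1.0)

beta = kl_weight(step=1200, total_steps=10000)   # -> 0.96 here
# total VAE loss: reconstruction_loss + beta * KL_divergence
```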


2020 ◽  
Author(s):  
Madhusudan Verma ◽  
Deepanshu Bansal

Based on a recently solved structure (PDB ID: 6LU7), we developed a novel advanced deep Q-learning network with fragment-based drug design (ADQN-FBDD), combined with a variational autoencoder trained with KL annealing and cyclical annealing, to generate potential lead compounds targeting SARS-CoV-2 3CLpro. A structure-based optimization policy (SBOP) is used in the reinforcement learning. We chose a variational autoencoder because the compounds it generates do not deviate far from known inhibitors; however, since VAEs suffer from KL vanishing, we apply KL annealing and cyclical annealing to address this issue. Researchers can use these compounds as potential drug candidates against SARS-CoV-2.
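
To make the reinforcement-learning side concrete, here is a toy sketch of a fragment-based Q-learning loop in the spirit of ADQN-FBDD: the state is a partial molecule, each action attaches a fragment, and a terminal score rewards the finished compound. The fragment vocabulary, the `score` function, and the tabular Q-table are stand-ins for illustration; the paper's actual agent is a deep network guided by its structure-based optimization policy (SBOP).

```python
import random
from collections import defaultdict

FRAGMENTS = list(range(8))    # hypothetical fragment vocabulary
MAX_FRAGS = 5                 # grow molecules up to 5 fragments

def score(mol):
    # Placeholder for a structure-based score against 3CLpro (6LU7);
    # toy objective: fragment ids should sum to 12.
    return -abs(sum(mol) - 12)

Q = defaultdict(float)        # Q[(state, action)], tabular for the sketch
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    mol = ()                  # start from the empty scaffold
    while len(mol) < MAX_FRAGS:
        if random.random() < eps:                       # explore
            a = random.choice(FRAGMENTS)
        else:                                           # exploit
            a = max(FRAGMENTS, key=lambda f: Q[(mol, f)])
        nxt = mol + (a,)
        done = len(nxt) == MAX_FRAGS
        r = score(nxt) if done else 0.0                 # reward only at the end
        best_next = 0.0 if done else max(Q[(nxt, f)] for f in FRAGMENTS)
        Q[(mol, a)] += alpha * (r + gamma * best_next - Q[(mol, a)])
        mol = nxt
```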


2021 ◽  
Author(s):  
Madhusudan Verma

Since known approved drugs such as lopinavir and ritonavir have failed to cure SARS-CoV-2-infected patients, there is an urgent need to generate new chemical entities against this virus. The 3CL main protease is a key enzyme in viral growth: it acts as a biocatalyst that cleaves viral polyproteins and is therefore essential for coronavirus replication. Based on a recently solved structure (PDB ID: 6LU7), we developed a novel advanced deep Q-learning network with fragment-based drug design (ADQN-FBDD), combined with a variational autoencoder trained with KL annealing and cyclical annealing, to generate potential lead compounds targeting SARS-CoV-2 3CLpro. A structure-based optimization policy (SBOP) is used in the reinforcement learning. We chose a variational autoencoder because the compounds it generates do not deviate far from known inhibitors; however, since VAEs suffer from KL vanishing, we apply KL annealing and cyclical annealing to address this issue. Researchers can use these compounds as potential drug candidates against SARS-CoV-2.
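
As a companion to the schedule sketch above, here is a minimal PyTorch-style sketch of how the annealed KL weight enters the VAE objective. The tiny architecture, latent size, and MSE reconstruction loss are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE; layer sizes are illustrative only."""
    def __init__(self, d_in=32, d_z=8):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs mu and logvar
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(model, x, beta):
    recon, mu, logvar = model(x)
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl   # beta follows the annealing schedule

model = TinyVAE()
loss = vae_loss(model, torch.randn(16, 32), beta=0.3)
loss.backward()
```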


Author(s):  
Abdelghafour Harraz ◽  
Mostapha Zbakh

Artificial Intelligence makes it possible to create engines that explore and learn their environments, and thus to derive policies that control those environments in real time with no human intervention. Through its Reinforcement Learning component, using frameworks such as temporal differences, State-Action-Reward-State-Action (SARSA), and Q-learning, to name a few, it can be applied to any system that can be modeled as a Markov Decision Process. This opens the door to applying Reinforcement Learning to Cloud load balancing, so that load can be dispatched dynamically to a given Cloud system. The authors describe different techniques that can be used to implement a Reinforcement Learning based engine in a cloud system.
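
As an illustration of one of the frameworks mentioned, here is a toy SARSA dispatcher for load balancing: the state is a tuple of per-server load levels, the action picks the server for the next request, and the reward penalizes imbalance. The state encoding, drain model, and reward are invented for this sketch, not the chapter's design.

```python
import random
from collections import defaultdict

N_SERVERS = 3
ACTIONS = range(N_SERVERS)
Q = defaultdict(float)                 # Q[(state, action)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def policy(state):
    """Epsilon-greedy action selection over the servers."""
    if random.random() < eps:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(state, a)])

loads = [0, 0, 0]
state = tuple(loads)
action = policy(state)
for t in range(10000):
    loads[action] += 1                          # dispatch one request
    loads = [max(l - 1, 0) for l in loads]      # each server drains one unit
    reward = -(max(loads) - min(loads))         # penalize load imbalance
    next_state = tuple(loads)
    next_action = policy(next_state)
    # SARSA is on-policy: the update uses the action actually taken next
    Q[(state, action)] += alpha * (reward + gamma * Q[(next_state, next_action)]
                                   - Q[(state, action)])
    state, action = next_state, next_action
```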


2021 ◽  
Vol 2131 (3) ◽  
pp. 032103
Author(s):  
A P Badetskii ◽  
O A Medved

Abstract The article discusses the choice of route and cargo-flow option in multimodal connections under modern conditions. Taking into account the active development of artificial intelligence and digital technologies in all types of production activity, it proposes using reinforcement learning algorithms to solve this problem. An analysis of existing algorithms showed that, when choosing a route option for cargo in a multimodal connection, a qualitative assessment of terminal states is useful. To obtain such an estimate, the article applies the Q-learning algorithm, which showed sufficient convergence and efficiency.
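
A minimal sketch of how Q-learning can score route options on a small multimodal graph: nodes are terminals, edges are (mode, cost) legs, and the reward is the negative leg cost. The graph and costs are invented for illustration and are not the article's model.

```python
import random

GRAPH = {
    "origin": {"rail->hub": ("hub", 4), "road->hub": ("hub", 6)},
    "hub":    {"sea->dest": ("dest", 10), "rail->dest": ("dest", 7)},
    "dest":   {},                      # terminal state
}
Q = {(s, a): 0.0 for s, acts in GRAPH.items() for a in acts}
alpha, gamma, eps = 0.2, 1.0, 0.2

for episode in range(500):
    state = "origin"
    while GRAPH[state]:                # stop at the terminal node
        actions = list(GRAPH[state])
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(state, x)])
        nxt, cost = GRAPH[state][a]
        future = max((Q[(nxt, b)] for b in GRAPH[nxt]), default=0.0)
        Q[(state, a)] += alpha * (-cost + gamma * future - Q[(state, a)])
        state = nxt

# Greedy route after training: origin -> rail->hub -> rail->dest (total cost 11)
```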


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Feng Ding ◽  
Guanfeng Ma ◽  
Zhikui Chen ◽  
Jing Gao ◽  
Peng Li

With the advent of the era of artificial intelligence, deep reinforcement learning (DRL) has achieved unprecedented success in high-dimensional and large-scale artificial intelligence tasks. However, the instability and unreliability of DRL algorithms have an important impact on their performance. The Soft Actor-Critic (SAC) algorithm updates the policy and value networks in ways that alleviate some of these problems, but SAC still has shortcomings. To reduce the error caused by overestimation in SAC, we propose a new variant called Averaged-SAC. By averaging previously learned state-action value estimates, it reduces the overestimation problem of soft Q-learning, thereby contributing to a more stable training process and improved performance. We evaluate Averaged-SAC on several games in the MuJoCo environment. The experimental results show that Averaged-SAC effectively improves both the performance of the SAC algorithm and the stability of its training process.
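
The core averaging idea can be shown in isolation: keep the last k learned critics and average their predictions when forming the soft target, which damps overestimation. The snapshot count k, the critic architecture, and the wrapper interface below are illustrative assumptions; the full SAC machinery (actor, entropy temperature, replay buffer) is omitted.

```python
import copy
import torch
import torch.nn as nn

class AveragedCritic:
    """Average the last k learned critics when forming soft targets."""
    def __init__(self, critic, k=5):
        self.snapshots = [copy.deepcopy(critic) for _ in range(k)]

    def push(self, critic):
        self.snapshots.pop(0)                 # drop the oldest snapshot
        self.snapshots.append(copy.deepcopy(critic))

    @torch.no_grad()
    def target_q(self, state, action):
        qs = torch.stack([c(torch.cat([state, action], dim=-1))
                          for c in self.snapshots])
        return qs.mean(dim=0)                 # averaged soft Q estimate

# Usage with a toy critic network (architecture is illustrative):
critic = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 1))
avg = AveragedCritic(critic, k=5)
target = avg.target_q(torch.randn(8, 4), torch.randn(8, 2))
```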

