Improved Docking of Protein Models by a Combination of Alphafold2 and ClusPro

2021 ◽  
Author(s):  
Usman Ghani ◽  
Israel Desta ◽  
Akhil Jindal ◽  
Omeir Khan ◽  
George Jones ◽  
...  

Abstract It has been demonstrated earlier that the neural-network-based program AlphaFold2 can be used to dock proteins, given the two sequences separated by a gap as the input. The protocol presented here combines AlphaFold2 with the physics-based docking program ClusPro. The monomers of the model generated by AlphaFold2 are separated, re-docked using ClusPro, and the resulting 10 models are refined by AlphaFold2. Finally, the five original AlphaFold2 models are added to the 10 AlphaFold2-refined ClusPro models, and the 15 models are ranked by their predicted aligned error (PAE) values obtained from AlphaFold2. The protocol is applied to two benchmark sets of complexes: the first based on the established protein-protein docking benchmark, and the second consisting only of structures released after May 2018, the cut-off date for training AlphaFold2. It is shown that the quality of the initial AlphaFold2 models improves with each additional step of the protocol. In particular, adding the AlphaFold2-refined ClusPro models to the AlphaFold2 models increases the success rate by 23% in the top 5 predictions, whereas considering the 10 models obtained by the combined protocol increases the success rate to close to 40%. The improvement is similar for the second benchmark, which includes only complexes distinct from the proteins used for training the neural network.
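The final ranking step described above can be sketched as follows. The model names and PAE values are purely illustrative, and the helper function is a hypothetical simplification, not the authors' code.

```python
# Hypothetical sketch: pool the 5 AlphaFold2 models with the 10
# AlphaFold2-refined ClusPro models and rank all 15 by PAE.
# Model names and PAE values below are illustrative only.

def rank_models_by_pae(models):
    """Sort (name, pae) pairs ascending by PAE; lower PAE is better."""
    return sorted(models, key=lambda m: m[1])

af2_models = [("af2_1", 7.2), ("af2_2", 9.8), ("af2_3", 12.1),
              ("af2_4", 6.5), ("af2_5", 11.0)]
cluspro_refined = [(f"cluspro_{i}", pae) for i, pae in
                   enumerate([5.9, 8.4, 13.2, 7.7, 10.3,
                              6.1, 9.0, 12.8, 8.9, 11.5], start=1)]

ranked = rank_models_by_pae(af2_models + cluspro_refined)
top5 = [name for name, _ in ranked[:5]]
```

Because the pool mixes both sources, the top-5 selection can draw from either set, which is exactly what lets the refined ClusPro models raise the success rate.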

2021 ◽  
Author(s):  
Ian Kotthoff ◽  
Petras J. Kundrotas ◽  
Ilya A. Vakser

Abstract Protein docking protocols typically involve a global docking scan, followed by re-ranking of the scan predictions by more accurate scoring functions that are either computationally too expensive or algorithmically impossible to include in the global scan. Development and validation of scoring methodologies are often performed on scoring benchmark sets (docking decoys), which offer a concise and nonredundant representation of the global docking scan output for a large and diverse set of protein-protein complexes. Two such protein-protein scoring benchmarks were built for the Dockground resource, which contains various datasets for the development and testing of protein docking methodologies. One set was generated from the Dockground unbound docking benchmark 4, and the other from protein models in the Dockground model-model benchmark 2. The docking decoys were designed to reflect the reality of real-case docking applications (e.g., correct docking predictions defined as near-native rather than native structures) and to minimize the applicability of approaches not directly related to the development of scoring functions (reducing clustering of predictions in the binding funnel and disparity in structural quality between the near-native and non-native matches). The sets were further characterized by the source organism and the function of the protein-protein complexes. The sets, freely available to the research community on the Dockground webpage, present a unique, user-friendly resource for the development and testing of protein-protein scoring approaches.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Xiaochao Fan ◽  
Hongfei Lin ◽  
Liang Yang ◽  
Yufeng Diao ◽  
Chen Shen ◽  
...  

Humor refers to the quality of being amusing. With the development of artificial intelligence, humor recognition is attracting a lot of research attention. Although phonetics and ambiguity have been introduced in previous studies, existing recognition methods still lack suitable feature design for neural networks. In this paper, we illustrate that the phonetic structure and the ambiguity associated with confusing words need their own representations learned via the neural network. We then propose the Phonetics and Ambiguity Comprehension Gated Attention network (PACGA) to learn phonetic structures and semantic representations for humor recognition. The PACGA model can effectively represent phonetic information and the semantic information of ambiguous words, which is of great benefit to humor recognition. Experimental results on two public datasets demonstrate the effectiveness of our model.
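A minimal sketch of the gating idea behind a gated attention layer, assuming a sigmoid gate that mixes a phonetic vector with a semantic vector per dimension. The weights, dimensions, and function names are illustrative assumptions, not the actual PACGA architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(phonetic, semantic, W, b):
    # The gate g lies in (0, 1) per dimension, so the output is a
    # convex combination of the phonetic and semantic vectors.
    g = sigmoid(W @ np.concatenate([phonetic, semantic]) + b)
    return g * phonetic + (1.0 - g) * semantic

rng = np.random.default_rng(0)
d = 8
p = rng.normal(size=d)            # stand-in phonetic representation
s = rng.normal(size=d)            # stand-in semantic representation
W = 0.1 * rng.normal(size=(d, 2 * d))
b = np.zeros(d)
fused = gated_fusion(p, s, W, b)
```

In a trained network, `W` and `b` would be learned so that the gate opens toward whichever representation is more informative for the current input.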


2019 ◽  
Vol 20 (S25) ◽  
Author(s):  
Yumeng Yan ◽  
Sheng-You Huang

Abstract Background Protein-protein docking is a valuable computational approach for investigating protein-protein interactions. Shape complementarity is the most basic component of a scoring function and plays an important role in protein-protein docking. Despite significant progress, shape representation remains an open question in the development of protein-protein docking algorithms, especially for grid-based docking approaches. Results We have proposed a new pairwise shape-based scoring function (LSC) for protein-protein docking which adopts an exponential form to take into account long-range interactions between protein atoms. The LSC scoring function was incorporated into our FFT-based docking program and evaluated for both bound and unbound docking on the protein docking benchmark 4.0. Our LSC achieved significantly better performance than four similar docking methods, ZDOCK 2.1, MolFit/G, GRAMM, and FTDock/G, in both success rate and number of hits. When considering the top 10 predictions, LSC obtained success rates of 51.71% and 6.82% for bound and unbound docking, respectively, compared to 42.61% and 4.55% for the second-best program, ZDOCK 2.1. LSC also yielded an average of 8.38 and 3.94 hits per complex in the top 1000 predictions for bound and unbound docking, respectively, followed by 6.38 and 2.96 hits for the second-best, ZDOCK 2.1. Conclusions The present LSC method will not only provide an initial-stage docking approach for post-docking processes but also offers a general implementation for accurate representation of other energy terms on grids in protein-protein docking. The software has been implemented in our HDOCK web server at http://hdock.phys.hust.edu.cn/.
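The exponential long-range form described in the Results can be illustrated with a naive pairwise sketch (the real LSC is evaluated on grids via FFT and is considerably more elaborate; `alpha` and the function name here are assumptions).

```python
import numpy as np

def exp_shape_score(coords_a, coords_b, alpha=1.0):
    """Naive pairwise score with exponential distance decay:
    sum over atom pairs of exp(-alpha * d_ij). Nearby pairs dominate,
    but distant pairs still contribute a small long-range term."""
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    return float(np.exp(-alpha * d).sum())

# Single-atom toy coordinates to show the distance decay.
a = np.array([[0.0, 0.0, 0.0]])
near = np.array([[1.0, 0.0, 0.0]])
far = np.array([[10.0, 0.0, 0.0]])
```

The exponential decay is what distinguishes this form from hard-cutoff shape terms: a pair at 10 Å still contributes, just exponentially less than a pair at 1 Å.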


2015 ◽  
Vol 764-765 ◽  
pp. 863-867
Author(s):  
Yih Chuan Lin ◽  
Pu Jian Hsu

In this paper, an error concealment scheme for neural-network-based compression of depth images in 3D videos is proposed. In neural-network-based compression, each depth image is represented by one or more neural networks. The advantage of this approach lies in the parallel processing ability of multiple neurons, which can handle the massive data volume of 3D videos. The similarity of neuron weights at neighboring nodes is exploited to recover lost neuron weights when transmitting over an error-prone communication channel. With a simulated noisy channel, the quality of the compressed 3D video reconstructed after transmission can be recovered well by the proposed error concealment scheme.
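The neighbor-similarity idea can be sketched as follows, assuming a 1-D neuron layout where a lost weight is replaced by the mean of its intact neighbors; this is an illustrative simplification, not the paper's scheme.

```python
import numpy as np

def conceal_lost_weights(weights, lost_mask):
    """Replace lost neuron weights with the mean of their intact
    left/right neighbours (1-D neuron layout assumed)."""
    restored = weights.copy()
    n = len(weights)
    for i in np.flatnonzero(lost_mask):
        neighbours = [weights[j] for j in (i - 1, i + 1)
                      if 0 <= j < n and not lost_mask[j]]
        if neighbours:
            restored[i] = np.mean(neighbours)
    return restored

weights = np.array([1.0, 0.0, 3.0])          # middle weight was lost
lost = np.array([False, True, False])
restored = conceal_lost_weights(weights, lost)
```

The scheme relies on the assumption stated in the abstract: weights of neighboring neurons are similar, so interpolation is a reasonable estimate of the lost value.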


2020 ◽  
Vol 5 (2) ◽  
pp. 221-224
Author(s):  
Joy Oyinye Orukwo ◽  
Ledisi Giok Kabari

Diabetes has always been a silent killer, and the number of people suffering from it has increased tremendously in the last few decades. More often than not, people continue with their normal lifestyle, unaware that their health is at severe risk, and with each passing day the diabetes goes undetected. Artificial Neural Networks have become extensively useful in medical diagnosis, as they provide a powerful tool to help analyze, model, and make sense of complex clinical data. This study developed a diabetes diagnosis system using a feed-forward neural network with a supervised learning algorithm. The neural network was systematically trained and tested, and a success rate of 90% was achieved.
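A minimal sketch of supervised training of a feed-forward classifier, using synthetic two-cluster data in place of real clinical features; none of the data, architecture, or parameters come from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic, illustrative data: two clusters standing in for
# "diabetic" / "non-diabetic" feature vectors (not real clinical data).
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-layer feed-forward classifier trained by gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

A real diagnosis system would use a hidden layer and clinical features (glucose, BMI, etc.); the training loop above shows only the supervised-learning skeleton.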


2021 ◽  
Vol 8 (3) ◽  
pp. 15-27
Author(s):  
Mohamed N. Sweilam ◽  
Nikolay Tolstokulakov

Depth estimation has made great progress in the last few years due to its applications in robotics and computer vision. Various methods have been implemented and enhanced to estimate depth without flickers and missing holes. Despite this progress, it is still one of the main challenges for researchers, especially for video applications, where the greater complexity of the neural network affects the run time. Moreover, using input such as monocular video for depth estimation is an attractive idea, particularly for hand-held devices such as mobile phones, which are very popular for capturing pictures and videos yet have a limited amount of RAM. In this work, we focus on enhancing the existing consistent depth estimation approach for monocular videos so that it uses less RAM and fewer parameters, without a significant reduction in the quality of the depth estimation.


Author(s):  
S O Stepanenko ◽  
P Y Yakimov

Object classification using neural networks is highly relevant today. YOLO is one of the most frequently used frameworks for object classification. It produces high accuracy, but its processing speed is not high enough, especially on computers with limited performance. This article investigates the use of the NVIDIA TensorRT framework to optimize YOLO with the aim of increasing image processing speed. While preserving the efficiency and output quality of the neural network, TensorRT allows us to increase the processing speed through optimization of the architecture and of the calculations performed on the GPU.
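A speedup claim like this is typically verified with a timing harness. The sketch below shows a generic frames-per-second measurement with a trivial stand-in for the model call; in a real comparison, `infer_fn` would be the original YOLO forward pass and then the TensorRT-optimized engine.

```python
import time

def measure_fps(infer_fn, frames, warmup=3):
    """Return average frames per second for an inference callable.
    `infer_fn` stands in for a model forward pass; the same harness
    would time the model before and after TensorRT optimization."""
    for frame in frames[:warmup]:      # warm-up runs excluded from timing
        infer_fn(frame)
    start = time.perf_counter()
    for frame in frames:
        infer_fn(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Trivial stand-in workload: summing a list instead of running YOLO.
fps = measure_fps(lambda f: sum(f), [[1, 2, 3]] * 100)
```

Warm-up iterations matter particularly for GPU inference, where the first calls include kernel compilation and memory allocation that would otherwise skew the average.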


Author(s):  
Wahyu Srimulyani ◽  
Aina Musdholifah

Indonesia has many food varieties, one of which is rice. Each rice variety has physical characteristics that can be recognized through color, texture, and shape. Based on these physical characteristics, rice can be identified using a neural network. Previous research using 12 features did not achieve optimal results. This study proposes the addition of geometry features, with the Learning Vector Quantization (LVQ) and Backpropagation algorithms used separately. The trial uses data from 9 rice varieties taken from several regions in Yogyakarta. The rice images were acquired using a Canon D700 camera with a kit lens at its maximum magnification, 55 mm. The data were divided into training and testing sets, with the training data partitioned according to rice quality. Data preprocessing was carried out before feature extraction, using trial-and-error thresholding for segmentation. Evaluation compares the results after adding the 6 geometry features with those obtained before adding them. The test results show that adding the 6 geometry features increases accuracy: the Backpropagation algorithm reached an accuracy of 100%, and the LVQ algorithm improved by 5.2%.
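A minimal LVQ1 sketch of the prototype-update rule underlying Learning Vector Quantization, with synthetic 2-D data standing in for the rice feature vectors; whether the study uses this exact LVQ variant is an assumption, and all values are illustrative.

```python
import numpy as np

def lvq_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: move the nearest prototype toward samples of the same
    class and away from samples of a different class."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = np.argmin(np.linalg.norm(P - x, axis=1))
            step = lr * (x - P[j])
            P[j] += step if proto_labels[j] == label else -step
    return P

def lvq_predict(X, P, proto_labels):
    return np.array([proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]
                     for x in X])

# Two well-separated synthetic classes (stand-ins for rice features).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(5.0, 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)
P = lvq_train(X, y, np.array([[1.0, 1.0], [4.0, 4.0]]), np.array([0, 1]))
pred = lvq_predict(X, P, np.array([0, 1]))
```

Adding geometry features (e.g., area, perimeter, aspect ratio of a grain) simply extends each feature vector, leaving this update rule unchanged.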


2019 ◽  
Vol 22 (6) ◽  
pp. 189-197
Author(s):  
E. S. Sirota ◽  
M. I. Truphanov

This work considers an algorithm for restoring images damaged by noise of various kinds. The advantages and disadvantages of existing approaches are noted, along with the prospects of using artificial neural networks. A two-layer neural network is used as the image restoration tool, and the locations of the damaged pixels are assumed to be known. A neuron is represented as a 3x3 array, where each element of the array holds a pixel color value corresponding to that color in the palette. The neural network is trained on intact images, with the color difference between pixels serving as the learning criterion. For more accurate restoration, it is recommended at the training stage to select images similar in color to the damaged ones. At the recovery stage, 3x3 neurons are formed around the damaged pixels, so that each damaged pixel lies in the middle of the neuron's data array. The damaged pixel is assigned a neuron value derived from the average of the weight matrix. An algorithm for pixel restoration is presented, together with its software implementation. The simulation was carried out in the RGB palette separately for each channel. To assess the quality of the recovery, groups of images with varying degrees of damage were selected. Unlike existing solutions, the algorithm is simple to implement. The research results show that, regardless of the degree of damage (within 50%), about 70% of damaged pixels are restored. Further studies will modify the algorithm to restore images with larger damaged areas and adapt it to restore three-dimensional images.
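The recovery step, assigning a damaged pixel a value from its 3x3 neighborhood, can be sketched per channel as follows; using the plain mean of the undamaged neighbors is a simplification of the trained weight-matrix averaging described above.

```python
import numpy as np

def restore_damaged(channel, damaged):
    """Assign each damaged pixel the mean of the undamaged pixels in
    its 3x3 window (applied to a single RGB channel)."""
    out = channel.astype(float).copy()
    h, w = channel.shape
    for i, j in zip(*np.nonzero(damaged)):
        window = [channel[a, b]
                  for a in range(max(i - 1, 0), min(i + 2, h))
                  for b in range(max(j - 1, 0), min(j + 2, w))
                  if not damaged[a, b]]
        if window:
            out[i, j] = np.mean(window)
    return out

channel = np.full((3, 3), 10.0)
channel[1, 1] = 0.0                     # damaged pixel, value unknown
damaged = np.zeros((3, 3), dtype=bool)
damaged[1, 1] = True
restored = restore_damaged(channel, damaged)
```

As in the paper, the procedure would be run independently on the R, G, and B channels, and it degrades gracefully only while enough undamaged neighbors remain in each window.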

