A Review of Deep Learning Methods for Antibodies

Antibodies ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 12 ◽  
Author(s):  
Jordan Graves ◽  
Jacob Byerly ◽  
Eduardo Priego ◽  
Naren Makkapati ◽  
S. Vince Parish ◽  
...  

Driven by its successes across domains such as computer vision and natural language processing, deep learning has recently entered the field of biology by aiding in cellular image classification, finding genomic connections, and advancing drug discovery. In drug discovery and protein engineering, a major goal is to design a molecule that will perform a useful function as a therapeutic drug. Typically, the focus has been on small molecules, but new approaches have been developed to apply these same principles of deep learning to biologics, such as antibodies. Here we give a brief background of deep learning as it applies to antibody drug development, and an in-depth explanation of several deep learning algorithms that have been proposed to solve aspects of both protein design in general, and antibody design in particular.
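
To make the setting concrete, the following is a minimal sketch (not taken from the review) of how an antibody sequence might be encoded and scored by a small neural network; the toy CDR-H3 sequence, the layer sizes, and the single "binding score" output are illustrative assumptions.

```python
# Minimal sketch (not from the review): encoding an antibody CDR sequence
# and scoring it with a small neural network. All layer sizes, the toy
# sequence, and the "binding score" target are illustrative assumptions.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(seq: str) -> torch.Tensor:
    """One-hot encode an amino-acid sequence as (length, 20)."""
    x = torch.zeros(len(seq), len(AMINO_ACIDS))
    for pos, aa in enumerate(seq):
        x[pos, AA_INDEX[aa]] = 1.0
    return x

class CDRScorer(nn.Module):
    """Toy 1D-CNN that maps a CDR loop to a single property score."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(20, 32, kernel_size=3, padding=1)
        self.head = nn.Linear(32, 1)

    def forward(self, x):                              # x: (batch, length, 20)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, 32, length)
        return self.head(h.mean(dim=2))                # pool over positions

model = CDRScorer()
cdr_h3 = encode("ARDRGYYFDY").unsqueeze(0)   # example CDR-H3 loop
print(model(cdr_h3).item())                  # untrained, illustrative score
```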

Molecules ◽  
2019 ◽  
Vol 24 (12) ◽  
pp. 2233 ◽  
Author(s):  
Michele Montaruli ◽  
Domenico Alberga ◽  
Fulvio Ciriaco ◽  
Daniela Trisciuzzi ◽  
Anna Rita Tondo ◽  
...  

In this continuing work, we have updated our recently proposed Multi-fingerprint Similarity Search algorithm (MuSSel) by enabling the generation of dominant ionized species at physiological pH and the exploration of a larger data domain, which includes more than half a million high-quality small molecules extracted from the latest release of ChEMBL (version 24.1, at the time of writing). Provided with a high biological assay confidence score, these selected compounds cover up to 2822 protein drug targets. To improve data accuracy, samples marked as prodrugs or carrying equivocal biological annotations were not considered. Notably, MuSSel's performance was improved overall by using an object-relational database management system based on PostgreSQL. To challenge the real effectiveness of MuSSel in predicting relevant therapeutic drug targets, we analyzed a pool of 36 external bioactive compounds published in the Journal of Medicinal Chemistry from October to December 2018. This study demonstrates that the use of highly curated chemical and biological experimental data on one side, and a powerful multi-fingerprint search algorithm on the other, can be of the utmost importance in addressing the fate of newly conceived small molecules by strongly reducing the attrition of the early phases of drug discovery programs.
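
As a rough illustration of the underlying idea (and not the authors' implementation), the sketch below ranks reference compounds by their average Tanimoto similarity over two fingerprint types and returns the targets annotated to the nearest neighbours; the tiny reference library and target labels are made-up placeholders.

```python
# Hedged sketch of a multi-fingerprint similarity search in the spirit of
# MuSSel (not the authors' code): rank reference compounds by the average
# Tanimoto similarity over two fingerprint types and return their targets.
# The tiny reference set and target labels are made-up placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys

def fingerprints(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    return (AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048),
            MACCSkeys.GenMACCSKeys(mol))

# Placeholder reference library: SMILES -> annotated protein target.
reference = {
    "CC(=O)Oc1ccccc1C(=O)O": "PTGS1",            # aspirin-like
    "CN1C=NC2=C1C(=O)N(C)C(=O)N2C": "ADORA2A",   # caffeine-like
}
ref_fps = {smi: fingerprints(smi) for smi in reference}

def predict_targets(query_smiles: str, top_k: int = 2):
    q_morgan, q_maccs = fingerprints(query_smiles)
    scored = []
    for smi, (r_morgan, r_maccs) in ref_fps.items():
        sim = 0.5 * (DataStructs.TanimotoSimilarity(q_morgan, r_morgan) +
                     DataStructs.TanimotoSimilarity(q_maccs, r_maccs))
        scored.append((sim, reference[smi]))
    scored.sort(reverse=True)
    return scored[:top_k]

print(predict_targets("CC(=O)Oc1ccccc1C(=O)OC"))  # aspirin methyl ester
```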


2019 ◽  
Vol 3 (2) ◽  
pp. 31-40 ◽  
Author(s):  
Ahmed Shamsaldin ◽  
Polla Fattah ◽  
Tarik Rashid ◽  
Nawzad Al-Salihi

At present, deep learning is widely used across a broad range of fields. The convolutional neural network (CNN) has become the star of deep learning, as it gives the best and most precise results on many real-world problems. In this work, a brief description of the applications of CNNs in two areas is presented: first, in computer vision generally, that is, scene labeling, face recognition, action recognition, and image classification; second, in natural language processing, specifically speech recognition and text classification.
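
As one concrete instance of the NLP applications mentioned above, here is a minimal sketch of a 1-D convolutional text classifier; the vocabulary size, embedding width, and two-class setup are arbitrary assumptions, not details from the paper.

```python
# Minimal sketch, not from the paper: a 1D convolutional text classifier of
# the kind surveyed under "text classification". Vocabulary size, embedding
# width, and class count are arbitrary assumptions.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, embed_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, 128, seq_len)
        x = x.max(dim=2).values                        # global max pooling
        return self.fc(x)                              # class logits

logits = TextCNN()(torch.randint(0, 10_000, (4, 20)))  # 4 dummy sentences
print(logits.shape)                                    # torch.Size([4, 2])
```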


2020 ◽  
Vol 70 (2) ◽  
pp. 234-238
Author(s):  
K.S. Imanbaev ◽  

Currently, deep learning of neural networks is one of the most popular methods for speech recognition, natural language processing, and computer vision. The article reviews the history of deep learning of neural networks and its current state in general. We consider algorithms used for deep training of neural networks, followed by fine-tuning using the method of backpropagation of errors. Neural networks with large numbers of hidden layers are very difficult to train because of frequently occurring vanishing gradients. In this paper, we consider methods that successfully train neural networks with large numbers of layers (more than one hundred) despite vanishing gradients. A review of well-known libraries used for successful deep learning of neural networks is also conducted.
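
One widely used way to train networks with more than a hundred layers despite vanishing gradients is the residual (skip) connection; the sketch below illustrates the idea under assumed layer widths and is not code from the article.

```python
# Hedged sketch of the skip-connection idea that makes networks with more
# than a hundred layers trainable despite vanishing gradients (residual
# blocks in the style of ResNet; the dimensions here are illustrative).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        # The identity shortcut gives gradients a direct path backwards,
        # so stacking many blocks does not shrink the gradient to zero.
        return torch.relu(x + self.body(x))

deep_net = nn.Sequential(*[ResidualBlock() for _ in range(120)])
out = deep_net(torch.randn(8, 64))
out.sum().backward()   # gradients still flow through 120 blocks
print(out.shape)
```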


Author(s):  
Lakshaga Jyothi M, Et. al.

Smart classrooms are becoming very popular nowadays, driven by the boom of recent technologies such as the Internet of Things (IoT), which are equipping every corner of a diverse set of fields. Every educational institution has set benchmarks for adopting these technologies in daily practice, but due to various constraints and setbacks, IoT adoption in the educational sector is still at a premature stage. The success of any technological evolution rests on its full-fledged implementation to fit the broader concerns of society. In recent years, deep learning has achieved breakthroughs, outperforming traditional machine learning models on many tasks, especially computer vision and natural language processing problems. The fusion of computer vision and natural language processing is an astonishing new field that has emerged in recent years, yet using such combinations on IoT platforms is a challenging task that has not yet drawn the attention of many researchers across the globe. Many past researchers have shown interest in designing intelligent classrooms in different contexts. To fill this gap, we propose an approach, or conceptual model, through which deep learning architectures fused into IoT systems yield an intelligent classroom via such hybrid systems. We also discuss the major challenges, limitations, and opportunities that arise with deep learning-based IoT solutions, and summarize the available applications of these technologies that suit our solution. This paper can thus be taken as a kickstart for our research, offering a glimpse of the available literature relevant to the success of our proposed approach.


2021 ◽  
Vol 7 ◽  
pp. e773
Author(s):  
Jan Egger ◽  
Antonio Pepe ◽  
Christina Gsaxner ◽  
Yuan Jin ◽  
Jianning Li ◽  
...  

Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform state-of-the-art methods in many tasks and, because of this, the whole field has seen exponential growth in recent years, resulting in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already provides over 11,000 results in Q3 2020 for the search term ‘deep learning’, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of even a subfield. However, there are several review articles about deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact they have already had within a short period of time. The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
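
As a minimal illustration of the "neurons, connections, and data" picture sketched above, the following toy network is fed a handful of examples and adjusts its weights from them; the XOR task and all sizes are assumptions made purely for illustration.

```python
# Minimal sketch of the idea described above: a small network of artificial
# "neurons and connections" is fed data and adjusts its weights from examples.
# The toy task (learning XOR) and all sizes are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])

for _ in range(500):                     # repeated exposure to the data
    optimizer.zero_grad()
    loss = loss_fn(net(inputs), targets)
    loss.backward()                      # adjust connection strengths
    optimizer.step()

print(torch.sigmoid(net(inputs)).round().squeeze())  # ~ [0., 1., 1., 0.]
```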


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1751
Author(s):  
Xiang Hu ◽  
Wenjing Yang ◽  
Hao Wen ◽  
Yu Liu ◽  
Yuanxi Peng

Hyperspectral image (HSI) classification is the subject of intense research in remote sensing. The tremendous success of deep learning in computer vision has recently sparked interest in applying deep learning to hyperspectral image classification. However, most deep learning methods for hyperspectral image classification are based on convolutional neural networks (CNNs), which require heavy GPU memory resources and long run times. Recently, another deep learning model, the transformer, has been applied to image recognition, and the results demonstrate the great potential of the transformer network for computer vision tasks. In this paper, we propose a model for hyperspectral image classification based on the transformer, which is widely used in natural language processing. In addition, we believe we are the first to combine metric learning and the transformer model in hyperspectral image classification. Moreover, to improve classification performance when the available training samples are limited, we use 1-D convolution and the Mish activation function. The experimental results on three widely used hyperspectral image data sets demonstrate the proposed model’s advantages in accuracy, GPU memory cost, and running time.
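
A rough sketch of this kind of architecture (not the authors' code) is given below: a spectral pixel vector is embedded with a 1-D convolution, passed through a small transformer encoder, and classified, with Mish as the activation; the band count, model width, and class count are assumptions.

```python
# Hedged sketch of the kind of model described (not the authors' code): a
# spectral pixel vector is embedded with a 1-D convolution, passed through a
# transformer encoder, and classified. Mish is used as the activation; band
# count, dimensions, and class count are assumptions.
import torch
import torch.nn as nn

class SpectralTransformer(nn.Module):
    def __init__(self, num_bands=200, d_model=64, num_classes=16):
        super().__init__()
        self.embed = nn.Conv1d(1, d_model, kernel_size=7, padding=3)
        self.act = nn.Mish()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, spectra):                         # (batch, num_bands)
        x = self.act(self.embed(spectra.unsqueeze(1)))  # (batch, d_model, bands)
        x = self.encoder(x.transpose(1, 2))             # (batch, bands, d_model)
        return self.classifier(x.mean(dim=1))           # logits per class

logits = SpectralTransformer()(torch.randn(4, 200))     # 4 pixels, 200 bands
print(logits.shape)                                     # torch.Size([4, 16])
```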


It is always beneficial to reassess previous work in order to create interest in and develop understanding of an important subject. In computer vision, to perform tasks such as feature extraction, classification, or segmentation, the measurement and assessment of image structures (medical images, natural images, etc.) must be done very efficiently. Numerous techniques are available in the field of image processing, but noise and other variable artifacts make these tasks difficult. Various deep machine learning algorithms are used to perform complex recognition and computer vision tasks. Recently, convolutional neural networks (CNNs, the backbone of numerous deep learning algorithms) have shown state-of-the-art performance in high-level computer vision tasks such as object detection, object recognition, classification, machine translation, semantic segmentation, speech recognition, scene labelling, medical imaging, robotics and control, natural language processing (NLP), bioinformatics, cybersecurity, and many others. A convolutional neural network is an attempt to combine mathematics with computer science, with an icing of biology on top. CNNs work in two parts: the first part is the mathematics that supports feature extraction, and the second part performs classification and prediction at the pixel level. This review is intended for those who want complete knowledge about CNNs and their development from early work to the modern state-of-the-art deep learning systems. The paper is organized in three steps: in the first step, the concept is introduced along with the necessary background information; in the second step, highlights and related work proposed by various authors are explained; the third step presents the complete layer-wise architecture of convolutional networks. The last section provides a detailed discussion of improvements to, and challenges of, these deep learning techniques. Most papers considered for this review are from 2012 onwards, when the modern history of convolutional neural networks and deep learning begins.
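
The two-part structure described above can be made concrete with a minimal sketch: convolutional and pooling layers extract features, and a fully connected layer classifies them; the 32x32 RGB input and ten classes are illustrative assumptions.

```python
# Minimal sketch of the "two parts" described above: convolutional layers
# extract features, fully connected layers classify. Input size (32x32 RGB)
# and the ten classes are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Part 1: feature extraction via convolution and pooling.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        # Part 2: classification from the extracted features.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, images):                     # (batch, 3, 32, 32)
        x = self.features(images)
        return self.classifier(x.flatten(start_dim=1))

print(SimpleCNN()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```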


Author(s):  
Polyana B. Costa ◽  
Guilherme Marques ◽  
Arthur C. Serra ◽  
Daniel de S. Moraes ◽  
Antonio J. G. Busson ◽  
...  

Methods based on machine learning have become state-of-the-art in various segments of computing, especially in the fields of computer vision, speech recognition, and natural language processing. Such methods, however, generally work best when applied to specific tasks in specific domains where large training datasets are available. This paper presents an overview of the state-of-the-art in the area of deep learning for multimedia content analysis (image, audio, and video), and describes recent works that propose the integration of deep learning with symbolic AI reasoning. We draw a picture of the future by discussing envisaged use cases that address media understanding gaps which can be solved by the integration of machine learning and symbolic AI, the so-called Neuro-Symbolic integration.
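
As a toy illustration of the neuro-symbolic pattern discussed (not an example from the paper), the sketch below lets hand-written symbolic rules reason over mocked neural detector outputs to answer a higher-level query about a scene.

```python
# Toy illustration (not from the paper) of the neuro-symbolic pattern: a
# neural detector produces labels with confidences, and hand-written symbolic
# rules reason over them to answer a higher-level query. The detections here
# are mocked stand-ins for real model output.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

# Stand-in for the neural part: in practice these would come from an
# object detector run on a video frame.
detections = [Detection("person", 0.93), Detection("bicycle", 0.88),
              Detection("helmet", 0.35)]

def holds(label: str, threshold: float = 0.5) -> bool:
    """Symbolic predicate grounded in the neural confidences."""
    return any(d.label == label and d.confidence >= threshold
               for d in detections)

# Symbolic rule: cycling_scene <- person AND bicycle.
cycling_scene = holds("person") and holds("bicycle")
# Symbolic rule: safety_warning <- cycling_scene AND NOT helmet.
safety_warning = cycling_scene and not holds("helmet")

print({"cycling_scene": cycling_scene, "safety_warning": safety_warning})
```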


2022 ◽  
Author(s):  
Ms. Aayushi Bansal ◽  
Dr. Rewa Sharma ◽  
Dr. Mamta Kathuria

Recent advancements in deep learning architectures have increased their utility in real-life applications. Deep learning models require a large amount of data for training. In many application domains, such as marketing, computer vision, and medical science, only a limited set of data is available for training neural networks, as collecting new data is either not feasible or requires additional resources; yet these models need large amounts of data to avoid overfitting. One of the data-space solutions to the problem of limited data is data augmentation. This study focuses on various data augmentation techniques that can be used to further improve the accuracy of a neural network. Augmenting available data saves the cost and time required to collect new data for training deep neural networks, and it also regularizes the model and improves its capability to generalize. The need for large datasets in fields such as computer vision, natural language processing, security, and healthcare is also covered in this survey. The goal of this paper is to provide a comprehensive survey of recent advancements in data augmentation techniques and their application in various domains.
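
A small hedged example of the image-augmentation side of this survey is sketched below using standard torchvision transforms; the particular pipeline and parameters are illustrative assumptions rather than ones prescribed by the paper.

```python
# Hedged sketch of image data augmentation of the kind surveyed: random
# geometric and photometric transforms expand a small labelled set. The
# torchvision transforms used here are standard; the pipeline itself is an
# illustrative assumption, not one prescribed by the paper.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=32, scale=(0.8, 1.0)),
])

image = torch.rand(3, 32, 32)             # stand-in for a real training image
augmented_batch = torch.stack([augment(image) for _ in range(8)])
print(augmented_batch.shape)              # torch.Size([8, 3, 32, 32])
```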


2018 ◽  
Author(s):  
Mohammed AlQuraishi

Predicting protein structure from sequence is a central challenge of biochemistry. Co‐evolution methods show promise, but an explicit sequence‐to‐structure map remains elusive. Advances in deep learning that replace complex, human‐designed pipelines with differentiable models optimized end‐to‐end suggest the potential benefits of similarly reformulating structure prediction. Here we report the first end‐to‐end differentiable model of protein structure. The model couples local and global protein structure via geometric units that optimize global geometry without violating local covalent chemistry. We test our model using two challenging tasks: predicting novel folds without co‐evolutionary data and predicting known folds without structural templates. In the first task the model achieves state‐of‐the‐art accuracy and in the second it comes within 1–2 Å; competing methods using co‐evolution and experimental templates have been refined over many years and it is likely that the differentiable approach has substantial room for further improvement, with applications ranging from drug discovery to protein design.
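
A heavily simplified sketch in the spirit of such an end-to-end model (not the author's implementation) is shown below: a bidirectional LSTM maps a one-hot protein sequence to per-residue backbone torsion angles predicted as sine/cosine pairs, keeping the pipeline differentiable; the geometric units that place 3D coordinates from these angles are omitted, and all sizes are assumptions.

```python
# Simplified sketch in the spirit of the end-to-end model (not the author's
# implementation): a bidirectional LSTM maps a one-hot protein sequence to
# per-residue backbone torsion angles (phi, psi, omega), predicted as sine/
# cosine pairs so the output stays differentiable. The geometric units that
# turn these angles into 3D coordinates are omitted; all sizes are assumptions.
import torch
import torch.nn as nn

class TorsionPredictor(nn.Module):
    def __init__(self, num_amino_acids=20, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(num_amino_acids, hidden, batch_first=True,
                           bidirectional=True)
        self.head = nn.Linear(2 * hidden, 6)   # (sin, cos) for phi, psi, omega

    def forward(self, one_hot_seq):            # (batch, length, 20)
        h, _ = self.rnn(one_hot_seq)
        sincos = self.head(h).view(*h.shape[:2], 3, 2)
        angles = torch.atan2(sincos[..., 0], sincos[..., 1])
        return angles                          # (batch, length, 3) in radians

seq = torch.zeros(1, 50, 20)                   # toy 50-residue sequence
seq[0, torch.arange(50), torch.randint(0, 20, (50,))] = 1.0
print(TorsionPredictor()(seq).shape)           # torch.Size([1, 50, 3])
```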

