BioPhi: A platform for antibody design, humanization and humanness evaluation based on natural antibody repertoires and deep learning

2021
Author(s): David Prihoda, Jad Maamary, Andrew Waight, Veronica Juan, Laurence Fayadat-Dilman, ...

Despite recent advances in transgenic animal models and display technologies, humanization of mouse sequences remains the primary route for therapeutic antibody development. Traditionally, humanization is manual, laborious, and requires expert knowledge. Although automation efforts are advancing, existing methods are either demonstrated on a small scale or are entirely proprietary. To predict immunogenicity risk, the human-likeness of sequences can be evaluated using existing humanness scores, but these lack diversity, granularity or interpretability. Meanwhile, immune repertoire sequencing has generated rich antibody libraries such as the Observed Antibody Space (OAS) that offer augmented diversity not yet exploited for antibody engineering. Here we present BioPhi, an open-source platform featuring novel methods for humanization (Sapiens) and humanness evaluation (OASis). Sapiens is a deep learning humanization method trained on the OAS database using language modeling. Based on an in silico humanization benchmark of 177 antibodies, Sapiens produced sequences at scale while achieving results comparable to those of human experts. OASis is a granular, interpretable and diverse humanness score based on a 9-mer peptide search in the OAS. OASis separated human and non-human sequences with high accuracy, and correlated with clinical immunogenicity. Together, BioPhi offers an antibody design interface with automated methods that capture the richness of natural antibody repertoires to produce therapeutics with desired properties and accelerate antibody discovery campaigns. BioPhi is accessible at https://biophi.dichlab.org and https://github.com/Merck/BioPhi.
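The OASis idea of scoring humanness via overlapping 9-mer peptide lookups can be illustrated in a few lines. The sketch below is a simplified stand-in, assuming a precomputed set `oas_peptides` of 9-mers observed in OAS; the real OASis additionally weights peptides by their prevalence across OAS subjects at a user-chosen threshold.

```python
def oasis_like_score(sequence: str, oas_peptides: set, k: int = 9) -> float:
    """Fraction of the sequence's overlapping k-mers found in the reference set."""
    kmers = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    if not kmers:
        return 0.0
    hits = sum(1 for p in kmers if p in oas_peptides)
    return hits / len(kmers)

# Usage (hypothetical data): oasis_like_score(heavy_chain_seq, oas_peptides)
# returns a value in [0, 1]; higher means more human-like at the peptide level.
```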

Sensors, 2020, Vol 20 (6), pp. 1579
Author(s): Dongqi Wang, Qinghua Meng, Dongming Chen, Hupo Zhang, Lisheng Xu

Automatic detection of arrhythmia is of great significance for the early prevention and diagnosis of cardiovascular disease. Traditional feature engineering methods based on expert knowledge lack the ability to abstract and represent multidimensional, multi-view information, so traditional pattern recognition research on arrhythmia detection has not achieved satisfactory results. Recently, with the rise of deep learning technology, automatic feature extraction from ECG data with deep neural networks has been widely explored. In order to exploit the complementary strengths of different schemes, in this paper we propose an arrhythmia detection method based on a multi-resolution representation (MRR) of ECG signals. The method uses four different state-of-the-art deep neural networks as channel models to learn ECG vector representations. These deep learning-based representations, together with hand-crafted ECG features, form the MRR, which is the input to the downstream classification strategy. Experimental results on a large ECG dataset for multi-label classification show that the F1 score of the proposed method is 0.9238, which is 1.31%, 0.62%, 1.18% and 0.6% higher than that of each individual channel model. Architecturally, the proposed method is highly scalable and can serve as a template for arrhythmia recognition.
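As a sketch of how such a multi-resolution representation can be assembled, the PyTorch module below concatenates the vector representations of several channel models with hand-crafted ECG features ahead of a shared classification head. The channel models, dimensions, and head are illustrative assumptions, not the paper's exact architectures.

```python
import torch
import torch.nn as nn

class MRRClassifier(nn.Module):
    """Fuse channel-model embeddings with hand-crafted features, then classify."""

    def __init__(self, channel_models, emb_dims, handcrafted_dim, n_labels):
        super().__init__()
        self.channel_models = nn.ModuleList(channel_models)
        self.head = nn.Sequential(
            nn.Linear(sum(emb_dims) + handcrafted_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_labels),
        )

    def forward(self, ecg, handcrafted):
        # Each channel model maps the raw ECG to its own vector representation.
        embs = [m(ecg) for m in self.channel_models]
        mrr = torch.cat(embs + [handcrafted], dim=-1)  # the multi-resolution representation
        return self.head(mrr)  # multi-label logits, trained with BCEWithLogitsLoss
```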


Sensors, 2021, Vol 21 (16), pp. 5312
Author(s): Yanni Zhang, Yiming Liu, Qiang Li, Jianzhong Wang, Miao Qi, ...

Recently, deep learning-based image deblurring and deraining have developed rapidly. However, most existing methods fail to distill the most useful features, and exploiting detailed image features in a deep learning framework typically requires a large number of parameters, which inevitably burdens the network computationally. To address these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, image features are reduced to multiple small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then placed at the beginning of the decoding stage, enabling the network to continuously distill and screen valuable channel information from the feature maps. In addition, an information fusion strategy between distillation modules and feature channels is implemented via an attention mechanism. By fusing these different sources of information, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
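The distillation-plus-attention idea can be sketched as follows: split the feature channels, keep part of them untouched, refine the rest, and reweight the fused result with channel attention. This is a toy block under assumed dimensions and split ratio, not the paper's exact feature distillation normalization design.

```python
import torch
import torch.nn as nn

class DistillationBlock(nn.Module):
    """Keep ("distill") some channels, refine the rest, reweight by attention."""

    def __init__(self, channels: int, distill_ratio: float = 0.5):
        super().__init__()
        self.d = int(channels * distill_ratio)  # distilled (kept) channels
        self.refine = nn.Conv2d(channels - self.d, channels - self.d, 3, padding=1)
        self.attn = nn.Sequential(              # squeeze-and-excitation-style weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        kept, rest = x[:, : self.d], x[:, self.d :]
        fused = torch.cat([kept, self.refine(rest)], dim=1)
        return fused * self.attn(fused)          # screen valuable channel information
```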


Processes, 2021, Vol 9 (4), pp. 575
Author(s): Jelena Ochs, Ferdinand Biermann, Tobias Piotrowski, Frederik Erkens, Bastian Nießing, ...

Laboratory automation is a key driver in biotechnology and an enabler for powerful new technologies and applications. In particular, in the field of personalized therapies, automation in research and production is a prerequisite for achieving cost efficiency and broad availability of tailored treatments. For this reason, we present the StemCellDiscovery, a fully automated robotic laboratory for the cultivation of human mesenchymal stem cells (hMSCs) at small scale and in parallel. While the system can handle different kinds of adherent cells, here we focus on the cultivation of adipose-derived hMSCs. The StemCellDiscovery provides in-line visual quality control for automated confluence estimation, realized by combining high-speed microscopy with deep learning-based image processing. We demonstrate that the algorithm can detect hMSCs in culture at different densities and calculate confluence from the resulting images. Furthermore, we show that the StemCellDiscovery is capable of expanding adipose-derived hMSCs in a fully automated manner using the confluence estimation algorithm. In order to estimate the system capacity under high-throughput conditions, we modeled the production environment in simulation software. The simulations of the production process indicate that the robotic laboratory is capable of handling more than 95 cell culture plates per day.
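Once a segmentation network has marked the cell pixels, confluence estimation itself reduces to an area fraction. A minimal sketch, assuming a hypothetical `model.predict` that returns per-pixel cell probabilities:

```python
import numpy as np

def confluence(cell_mask: np.ndarray) -> float:
    """Confluence as the fraction of the imaged area covered by detected cells."""
    return float(cell_mask.sum()) / cell_mask.size

# e.g. confluence(model.predict(image) > 0.5) -> 0.73 means ~73% coverage,
# which the automated scheduler can compare against a passaging threshold.
```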


Algorithms, 2021, Vol 14 (7), pp. 212
Author(s): Youssef Skandarani, Pierre-Marc Jodoin, Alain Lalande

Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, having large numbers of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as ventricular ejection fraction and myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth is, for all practical purposes, as good as that of a network trained on expert ground truth, particularly when the non-expert receives a decent level of training. This highlights an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
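The two classic segmentation metrics used here are straightforward to compute from binary masks; the sketch below uses SciPy's directed Hausdorff distance and treats all foreground pixels as the point set, a simplification of the usual boundary-only definition.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice index between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance (in pixels) between foreground point sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```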


Cancers, 2021, Vol 13 (10), pp. 2419
Author(s): Georg Steinbuss, Mark Kriegsmann, Christiane Zgorzelski, Alexander Brobeil, Benjamin Goeppert, ...

The diagnosis and subtyping of non-Hodgkin lymphoma (NHL) are challenging and require expert knowledge, great experience, thorough morphological analysis, and often additional expensive immunohistological and molecular methods. As these requirements are not always available, supplemental methods supporting morphology-based decision making and potentially entity subtyping are needed. Deep learning methods have been shown to classify histopathological images with high accuracy, but data on NHL subtyping are limited. After annotating histopathological whole-slide images and extracting image patches, we trained and optimized an EfficientNet convolutional neural network on 84,139 image patches from 629 patients and evaluated its potential to classify tumor-free reference lymph nodes, nodal small lymphocytic lymphoma/chronic lymphocytic leukemia, and nodal diffuse large B-cell lymphoma. After the application of quality controls, the optimized algorithm achieved an accuracy of 95.56% on an independent test set comprising 16,960 image patches from 125 patients. Automatic classification of NHL is thus possible with high accuracy using deep learning on histopathological images, and routine diagnostic applications should be pursued.
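For a sense of scale, adapting an off-the-shelf EfficientNet to the three classes here (tumor-free lymph node, SLL/CLL, DLBCL) takes only a head swap; the torchvision variant and pretrained weights below are assumptions, since the abstract does not tie the work to this framework.

```python
import torch.nn as nn
from torchvision.models import efficientnet_b0

model = efficientnet_b0(weights="IMAGENET1K_V1")                     # pretrained backbone
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)  # 3 NHL classes
# Image patches are then trained and evaluated with a standard
# cross-entropy loop; patch-level predictions can be aggregated per slide.
```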


2021, Vol 9 (7), pp. 755
Author(s): Kangkang Jin, Jian Xu, Zichen Wang, Can Lu, Long Fan, ...

Warm currents have a strong impact on the melting of sea ice, so clarifying their features plays a very important role in Arctic sea ice coverage forecasting. Currently, Arctic acoustic tomography is the only feasible method for large-range current measurement under the Arctic sea ice. However, influenced by the high-latitude Coriolis force, small-scale variability strongly affects the accuracy of Arctic acoustic tomography, and this variability can be captured neither by empirical parameters nor by Regularized Least Squares (RLS) in the inverse problem of Arctic acoustic tomography. In this paper, a convolutional neural network (CNN) is proposed to enhance prediction accuracy in the Arctic, and Gaussian noise is added to reflect the disturbance of the Arctic environment. First, we use the finite element method to build the background ocean model. Then, the deep learning CNN method constructs the non-linear mapping between the acoustic data and the corresponding flow velocity. Finally, simulation results show that applying the CNN to Arctic acoustic tomography achieves a 45.87% improvement in accuracy over the common RLS method for current inversion.
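The non-linear mapping from acoustic data to flow velocity can be sketched as a small 1-D CNN with Gaussian noise injected during training to mimic the Arctic environmental disturbance. Shapes and layer sizes are assumptions; the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class CurrentInversionCNN(nn.Module):
    """Map acoustic arrival data (batch, 1, length) to velocity estimates."""

    def __init__(self, n_velocities: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_velocities),  # velocities along the measured section
        )

    def forward(self, x, noise_std: float = 0.0):
        if self.training and noise_std > 0:
            x = x + noise_std * torch.randn_like(x)  # Gaussian environmental disturbance
        return self.net(x)
```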


Sensors, 2018, Vol 18 (11), pp. 3910
Author(s): Taeho Hur, Jaehun Bang, Thien Huynh-The, Jongwon Lee, Jee-In Kim, ...

The most significant barrier to success in human activity recognition is extracting and selecting the right features. In traditional methods, the features are chosen by humans, which requires the user to have expert knowledge or to do a large amount of empirical study. Newly developed deep learning technology can automatically extract and select features. Among the various deep learning methods, convolutional neural networks (CNNs) have the advantages of local dependency and scale invariance and are suitable for temporal data such as accelerometer (ACC) signals. In this paper, we propose an efficient human activity recognition method, namely Iss2Image (Inertial sensor signal to Image), a novel encoding technique for transforming an inertial sensor signal into an image with minimum distortion and a CNN model for image-based activity classification. Iss2Image converts real number values from the X, Y, and Z axes into three color channels to precisely infer correlations among successive sensor signal values in three different dimensions. We experimentally evaluated our method using several well-known datasets and our own dataset collected from a smartphone and smartwatch. The proposed method shows higher accuracy than other state-of-the-art approaches on the tested datasets.
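The core encoding step can be approximated in a few lines: a fixed-length window of tri-axial samples is normalized per axis and reshaped so that X, Y, and Z become the R, G, and B channels of one image. This is a hedged simplification; the paper's exact Iss2Image quantization (e.g., how integer and fractional parts of the sensor values are handled) is not reproduced here.

```python
import numpy as np

def iss2image_like(acc: np.ndarray, side: int = 16) -> np.ndarray:
    """Encode a (side*side, 3) accelerometer window as an RGB uint8 image."""
    lo, hi = acc.min(axis=0), acc.max(axis=0)
    norm = (acc - lo) / np.maximum(hi - lo, 1e-8)   # per-axis min-max to [0, 1]
    return (norm * 255).astype(np.uint8).reshape(side, side, 3)

# Usage (synthetic data): iss2image_like(np.random.randn(256, 3)) yields a
# 16x16 RGB image ready for a standard image CNN classifier.
```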


2015, Vol 112 (19), pp. 6236-6241
Author(s): Thomas M. Neeson, Michael C. Ferris, Matthew W. Diebel, Patrick J. Doran, Jesse R. O’Hanley, ...

In many large ecosystems, conservation projects are selected by a diverse set of actors operating independently at spatial scales ranging from local to international. Although small-scale decision making can leverage local expert knowledge, it also may be an inefficient means of achieving large-scale objectives if piecemeal efforts are poorly coordinated. Here, we assess the value of coordinating efforts in both space and time to maximize the restoration of aquatic ecosystem connectivity. Habitat fragmentation is a leading driver of declining biodiversity and ecosystem services in rivers worldwide, and we simultaneously evaluate optimal barrier removal strategies for 661 tributary rivers of the Laurentian Great Lakes, which are fragmented by at least 6,692 dams and 232,068 road crossings. We find that coordinating barrier removals across the entire basin is nine times more efficient at reconnecting fish to headwater breeding grounds than optimizing independently for each watershed. Similarly, a one-time pulse of restoration investment is up to 10 times more efficient than annual allocations totaling the same amount. Despite widespread emphasis on dams as key barriers in river networks, improving road culvert passability is also essential for efficiently restoring connectivity to the Great Lakes. Our results highlight the dramatic economic and ecological advantages of coordinating efforts in both space and time during restoration of large ecosystems.
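To make the coordination argument concrete, budget-constrained selection of barrier removals can be sketched as a greedy knapsack over cost and habitat gain. This toy version, on hypothetical data, ignores the river-network interactions the paper's optimization actually exploits (removing a dam only helps if downstream barriers are passable), which is precisely why coordinated, basin-wide planning outperforms piecemeal choices.

```python
def select_removals(barriers, budget):
    """barriers: iterable of (name, cost, habitat_gain) tuples (hypothetical data).
    Greedily pick removals by habitat gain per unit cost until the budget is spent."""
    chosen, spent = [], 0.0
    for name, cost, gain in sorted(barriers, key=lambda b: b[2] / b[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent
```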


2019, Vol 27 (1), pp. 17-35
Author(s): Jiaxing Tan, Yumei Huo, Zhengrong Liang, Lihong Li
