RootNav 2.0: Deep Learning for Automatic Navigation of Complex Plant Root Architectures

2019 ◽  
Author(s):  
Robail Yasrab ◽  
Jonathan A Atkinson ◽  
Darren M Wells ◽  
Andrew P French ◽  
Tony P Pridmore ◽  
...  

Abstract
We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging setups. Driven by modern deep learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network has been designed to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. In addition, the network simultaneously locates seeds and first- and second-order root tips to drive a search algorithm that seeks optimal paths throughout the image, extracting accurate architectures without user interaction. The proposed method is evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. The results are compared with semi-automatic analysis via the original RootNav tool, demonstrating comparable accuracy with a 10-fold increase in speed. We then demonstrate the ability of the network to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. We transfer a final time to images of Brassica napus from a hydroponic assay, and still demonstrate good accuracy despite many fewer training images. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools.
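
The abstract describes a two-stage pipeline: the CNN produces a segmentation confidence map plus seed and tip locations, and a search algorithm then extracts optimal root paths. As an illustration of the second stage only, the sketch below runs Dijkstra's algorithm over a cost map derived from a segmentation probability map. This is a generic reconstruction, not RootNav 2.0's actual search code; the probability map, seed, and tip coordinates are assumed inputs.

```python
import heapq
import numpy as np

def trace_root(prob_map, seed, tip):
    """Trace a root path from seed to tip through a segmentation
    probability map via Dijkstra over a per-pixel cost map. High
    root probability means low traversal cost, so the optimal path
    follows the segmented root."""
    cost = -np.log(np.clip(prob_map, 1e-6, 1.0))  # cheap where root is likely
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    queue = [(0.0, seed)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == tip:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(queue, (nd, (nr, nc)))
    path, node = [], tip           # walk predecessors back to the seed
    while node != seed:
        path.append(node)
        node = prev[node]
    path.append(seed)
    return path[::-1]

prob = np.random.rand(64, 64)      # stand-in for a network confidence map
path = trace_root(prob, seed=(0, 32), tip=(63, 32))
```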

GigaScience ◽  
2019 ◽  
Vol 8 (11) ◽  
Author(s):  
Robail Yasrab ◽  
Jonathan A Atkinson ◽  
Darren M Wells ◽  
Andrew P French ◽  
Tony P Pridmore ◽  
...  

Abstract
Background: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images presents a significant computer vision challenge: root images contain complicated structures and exhibit variation in size, background, occlusion, clutter, and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips to drive a search algorithm that seeks optimal paths throughout the image, extracting accurate architectures without user interaction.
Results: We develop and train a novel deep network architecture that explicitly combines local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images.
Conclusions: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of training images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
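
The Results section emphasises adaptation to new species via transfer learning from few labelled images. A minimal PyTorch sketch of that general recipe — freeze a pretrained encoder and fine-tune only the segmentation head on a small target-species set — is shown below. The network, class count, and synthetic data are stand-ins, not the authors' actual training setup.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# A generic segmentation network stands in for the trained source-species
# model; in practice you would load its trained weights first, e.g.
# model.load_state_dict(torch.load("wheat_weights.pt"))  (hypothetical file).
model = fcn_resnet50(num_classes=3)          # e.g. background / root / seed

for p in model.backbone.parameters():        # freeze source-domain features
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# Stand-ins for a small hand-labelled target-species dataset.
images = torch.rand(2, 3, 256, 256)
masks = torch.randint(0, 3, (2, 256, 256))

model.train()
for step in range(5):                        # brief fine-tuning loop
    logits = model(images)["out"]            # (N, 3, H, W)
    loss = loss_fn(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```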


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Daniel Griffith ◽  
Alex S Holehouse

The rise of high-throughput experiments has transformed how scientists approach biological questions. The ubiquity of large-scale assays that can test thousands of samples in a day has necessitated the development of new computational approaches to interpret this data. Among these tools, machine learning approaches are increasingly being utilized due to their ability to infer complex nonlinear patterns from high-dimensional data. Despite their effectiveness, machine learning (and in particular deep learning) approaches are not always accessible or easy to implement for those with limited computational expertise. Here we present PARROT, a general framework for training and applying deep learning-based predictors on large protein datasets. Using an internal recurrent neural network architecture, PARROT is capable of tackling both classification and regression tasks while only requiring raw protein sequences as input. We showcase the potential uses of PARROT on three diverse machine learning tasks: predicting phosphorylation sites, predicting transcriptional activation function of peptides generated by high-throughput reporter assays, and predicting the fibrillization propensity of amyloid beta with data generated by deep mutational scanning. Through these examples, we demonstrate that PARROT is easy to use, performs comparably to state-of-the-art computational tools, and is applicable for a wide array of biological problems.
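
PARROT is described as a recurrent network that maps raw protein sequences to classification or regression targets. The sketch below shows one plausible shape for such a model — one-hot amino-acid input feeding a bidirectional LSTM with a single regression head. The encoding, layer sizes, and head are illustrative assumptions, not PARROT's actual architecture or API.

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq):
    """Encode a raw protein sequence as a (length, 20) one-hot tensor."""
    x = torch.zeros(len(seq), len(AMINO_ACIDS))
    for i, aa in enumerate(seq):
        x[i, AA_INDEX[aa]] = 1.0
    return x

class SequenceRegressor(nn.Module):
    """Bidirectional LSTM mapping a whole sequence to one value,
    in the spirit of PARROT's sequence-level regression mode."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(20, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                 # x: (batch, length, 20)
        _, (h, _) = self.lstm(x)          # h: (2, batch, hidden)
        summary = torch.cat([h[0], h[1]], dim=1)  # both directions
        return self.head(summary).squeeze(-1)

model = SequenceRegressor()
score = model(one_hot("MKVLAAGIT").unsqueeze(0))  # toy sequence
```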


Author(s):  
D. Griffiths ◽  
J. Boehm

With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapid generation of Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m² segments of railway track. These include two models based on the Faster R-CNN object detection algorithm (ResNet and Inception-ResNet) as well as the novel one-stage focal loss network architecture (RetinaNet). Model performance was assessed with respect to three accuracy metrics. The first two consist of Intersection over Union (IoU) with thresholds of 0.5 and 0.1. The third assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling), the models detected 91.3%, 83.1%, and 75.6% of track in the 500 test images acquired from the UAS survey for RetinaNet, ResNet, and Inception-ResNet, respectively. We then discuss the potential applications of such systems within the engineering field for a range of scenarios.
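
For readers unfamiliar with the IoU-based accuracy criteria, the sketch below scores a ground-truth track segment against a set of proposal boxes at a chosen IoU threshold (0.5 or 0.1, as in the paper). It is a sketch of the standard metric, not the authors' evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detected(pred_boxes, gt_box, threshold=0.5):
    """A ground-truth track segment counts as detected if any
    proposal overlaps it at or above the IoU threshold."""
    return any(iou(p, gt_box) >= threshold for p in pred_boxes)

# Example: one proposal overlaps the ground truth enough at IoU 0.1
# but not at the stricter 0.5 threshold.
proposals = [(0, 0, 50, 50), (200, 200, 260, 260)]
print(detected(proposals, (40, 40, 100, 100), threshold=0.1))  # True
print(detected(proposals, (40, 40, 100, 100), threshold=0.5))  # False
```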


Author(s):  
Daniel Ray ◽  
Tim Collins ◽  
Prasad Ponnapalli

Extracting accurate heart rate estimations from wrist-worn photoplethysmography (PPG) devices is challenging because the signal contains artifacts from several sources. Deep learning approaches have shown very promising results, outperforming classical methods with improvements of 21% and 31% on two state-of-the-art datasets. This paper provides an analysis of several data-driven methods for creating deep neural network architectures, with the aim of further improving performance.
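
The abstract does not name the architectures analysed; as a point of reference, a minimal PyTorch sketch of the usual setup — a 1D convolutional network regressing beats per minute from a fixed-length PPG window — might look as follows. The window length, sampling rate, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PPGHeartRateNet(nn.Module):
    """Toy 1D CNN regressing beats per minute from an 8 s PPG
    window sampled at 125 Hz (1000 samples, single channel)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse time axis
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):              # x: (batch, 1, 1000)
        z = self.features(x).squeeze(-1)
        return self.head(z).squeeze(-1)

model = PPGHeartRateNet()
bpm = model(torch.randn(4, 1, 1000))   # four synthetic windows
```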


2019 ◽  
Author(s):  
Benoit Playe ◽  
Véronique Stoven

Abstract
Among the virtual screening methods developed to facilitate the drug discovery process, chemogenomics is distinctive in tackling the question of predicting ligands for proteins at large scales in both the protein and chemical spaces. Therefore, in addition to predicting drug candidates for a given therapeutic protein target, as more classical ligand-based or receptor-based methods do, chemogenomics can also predict off-targets at the proteome level and thereby identify potential side effects or drug repositioning opportunities. In this study, we compare machine learning and deep learning approaches for chemogenomics that are applicable to screening large sets of compounds against large sets of druggable proteins. State-of-the-art chemogenomics methods rely on expert-based chemical and protein descriptors or similarity measures. The recent development of deep learning has enabled the design of algorithms that learn numerical abstract representations of molecular graphs and protein sequences in an end-to-end fashion, i.e., so that the learnt features optimise the objective function of the drug-target interaction prediction task. In this paper, we address drug-target interaction prediction at the druggable proteome level with what we define as the chemogenomic neural network. This network consists of a feed-forward neural network taking as input the combination of molecular and protein representations learnt by molecular graph and protein sequence encoders. We first propose a standard formulation of this chemogenomic network. Then, we compare its performance to reference deep learning or shallow (machine learning without deep learning) methods. In particular, we show that such a representation learning approach is competitive with state-of-the-art shallow chemogenomics methods, but not ultimately superior. We evaluate the most promising neural network architectures and data augmentation techniques, such as multi-view and transfer learning, to improve the prediction performance of the chemogenomic network. Our results provide new insights into the design of chemogenomics approaches based on representation learning algorithms. Most importantly, we conclude from our observations that a promising research direction is to integrate heterogeneous sources of data, such as various bioactivity datasets or, independently, multiple molecule and protein attribute views, instead of focusing on sophisticated, yet intuitively relevant, encoder network architectures.
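
The standard formulation described above — a feed-forward network over the concatenation of learned molecule and protein representations — can be sketched as follows. The two encoders are stubbed with linear layers purely for illustration; in the paper they are a molecular graph encoder and a protein sequence encoder trained end-to-end, and all dimensions here are assumptions.

```python
import torch
import torch.nn as nn

class ChemogenomicNet(nn.Module):
    """Feed-forward interaction predictor over concatenated molecule
    and protein embeddings. The two encoders below are placeholder
    linear maps; the paper learns them end-to-end from molecular
    graphs and protein sequences."""
    def __init__(self, mol_dim=1024, prot_dim=400, hidden=256):
        super().__init__()
        self.mol_encoder = nn.Linear(mol_dim, hidden)    # stands in for a graph encoder
        self.prot_encoder = nn.Linear(prot_dim, hidden)  # stands in for a sequence encoder
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, mol, prot):
        z = torch.cat([self.mol_encoder(mol), self.prot_encoder(prot)], dim=1)
        return torch.sigmoid(self.classifier(z)).squeeze(-1)  # interaction probability

model = ChemogenomicNet()
p = model(torch.randn(8, 1024), torch.randn(8, 400))  # batch of 8 drug-target pairs
```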


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1396
Author(s):  
Thanh-Hung Vo ◽  
Guee-Sang Lee ◽  
Hyung-Jeong Yang ◽  
In-Jae Oh ◽  
Soo-Hyung Kim ◽  
...  

Due to the increase of lung cancer globally, and particularly in Korea, survival analysis for this type of cancer has gained prominence in recent years. For this task, mathematical and traditional machine learning approaches are commonly used by medical doctors. Deep learning has had proven success in computer vision, natural language processing, and other AI tasks, and is increasingly adopted for survival analysis as well. However, due to privacy issues and the management process, data in medicine are difficult to collect, which leads to a paucity of samples; this small number of samples makes deep learning difficult to apply effectively. In this investigation, we propose MVAESA, a network architecture that combines a variational autoencoder (VAE) with a typical DNN architecture to solve the survival analysis task. With a training size of n = 4107, MVAESA achieves a C-index of 0.722, while CoxCC, CoxPH, and CoxTime achieved scores of 0.713, 0.703, and 0.710, respectively. With a small training size of n = 379, MVAESA achieves a C-index of 0.707, compared with 0.689, 0.688, and 0.690 for CoxCC, CoxPH, and CoxTime, respectively. The results show that the combination of a VAE with a target task makes the network more stable and that the network can be trained using a small-sized sample.
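
The C-index reported above measures, over all comparable patient pairs, how often the model's risk ranking agrees with observed survival. Below is a small NumPy-style sketch of Harrell's C-index in its standard formulation; it is not necessarily the authors' exact evaluation code.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable pairs in which
    the subject with the higher predicted risk experienced the event
    earlier. A pair (i, j) is comparable when i's event was observed
    and j was still at risk beyond i's event time."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                     # censored subjects cannot anchor a pair
        for j in range(n):
            if times[j] > times[i]:      # j outlived i's observed event
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5    # ties count half, by convention
    return concordant / comparable if comparable else float("nan")

# Toy example: three patients, one censored (event = 0).
c = concordance_index(times=[5, 8, 12], events=[1, 1, 0],
                      risk_scores=[0.9, 0.4, 0.1])
```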


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image dataset and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
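
The two adopted architectures are not named in this abstract; as a generic illustration of the task, the sketch below is a toy convolutional encoder-decoder that maps a degraded whiteboard photo to an enhanced image of the same size. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class WhiteboardEnhancer(nn.Module):
    """Toy convolutional encoder-decoder mapping a degraded RGB
    whiteboard photo to an enhanced RGB image of the same size."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):             # x: (batch, 3, H, W), H and W divisible by 4
        return self.decoder(self.encoder(x))

model = WhiteboardEnhancer()
enhanced = model(torch.rand(1, 3, 256, 256))  # synthetic degraded photo
```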


