Deep-learning-based flexible pipeline for segmenting and tracking cells in 3D image time series for whole brain imaging

2018 ◽  
Author(s):  
Chentao Wen ◽  
Takuya Miura ◽  
Yukako Fujie ◽  
Takayuki Teramoto ◽  
Takeshi Ishihara ◽  
...  

Abstract
The brain is a complex system that operates based on coordinated neuronal activities. Brain-wide cellular calcium imaging techniques have advanced quickly in recent years and become powerful tools for understanding the neuronal activities of small animal models. Whole-brain imaging generally requires extracting neuronal activities from three-dimensional (3D) image series. Unfortunately, these 3D image series are obtained under imaging conditions that differ among laboratories, and extracting neuronal activities from the data requires multiple processing steps. Researchers therefore need to develop their own software, which has hindered the adoption of whole-brain imaging experiments in more laboratories. Here, we combined traditional image processing techniques with a powerful deep-learning method that can be flexibly modified to fit 3D image data of the nematode Caenorhabditis elegans obtained under different conditions. We first trained a 3D U-Net deep network to classify each pixel into cell and non-cell categories. Cells merged into a single region were further separated into individual cells by watershed segmentation. The cells were then tracked in 3D space over time using a combination of a feedforward network and a point-set registration method, which exploit the local and global relative positions of the cells, respectively. Remarkably, one manually annotated 3D image combined with data augmentation was sufficient to train the deep networks and obtain satisfactory tracking results. Our method correctly tracked more than 98% of neurons in three different image datasets and successfully extracted brain-wide neuronal activities. The method worked well even when the sampling rate was reduced (86% correct when 4/5 of the frames were removed) and when artificial noise was added to the raw images (91% correct when noise 35 times the background level was added).
Our results show that deep learning is widely applicable to different datasets and can help establish a flexible pipeline for extracting whole-brain activities.
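The abstract notes that a single annotated 3D volume plus data augmentation sufficed for training. A minimal sketch of such 3D augmentation (random flips, in-plane rotations, additive noise) might look like the following; the specific transforms and noise scale are illustrative assumptions, not the authors' exact recipe:

```python
import numpy as np

def augment_volume(volume, rng):
    """Produce one augmented variant of an annotated 3D volume
    (axes assumed z, y, x). Hypothetical parameters, not the
    paper's exact augmentation pipeline."""
    v = volume.copy()
    # Random flips along each spatial axis.
    for axis in range(3):
        if rng.random() < 0.5:
            v = np.flip(v, axis=axis)
    # Random 90-degree rotation in the y-x plane.
    v = np.rot90(v, k=int(rng.integers(0, 4)), axes=(1, 2))
    # Additive Gaussian noise scaled to the image intensity spread.
    noise = rng.normal(0.0, 0.05 * v.std(), size=v.shape)
    return v + noise

rng = np.random.default_rng(0)
vol = rng.random((8, 16, 16))
aug = augment_volume(vol, rng)
print(aug.shape)  # augmented volume keeps the original shape
```

Applying many such random variants of the one labeled volume (with the same transforms applied to the annotation mask) is the standard way to stretch a single ground-truth example into a training set.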

2020 ◽  
Vol 11 (7) ◽  
pp. 3567
Author(s):  
Kefu Ning ◽  
Xiaoyu Zhang ◽  
Xuefei Gao ◽  
Tao Jiang ◽  
He Wang ◽  
...  


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated increased performance over both non-augmented data and data augmented with conventional SMILES randomization when used for training baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as "attentional gain": an enhancement in the pattern recognition capabilities of the underlying network for molecular motifs.
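The augmentation is built on the Levenshtein (edit) distance between reactant and product SMILES strings. A sketch of the underlying metric is below; the paper's full method additionally exploits local sub-sequence similarity, so this is only the base distance, not the complete augmentation scheme:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings:
    minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Reactant vs. product SMILES often differ by a small local edit:
print(levenshtein("CCO", "CC=O"))  # 1 (one inserted character)
```

Intuitively, reactant/product pairs with small, localized edit distances give the model aligned training pairs in which the reaction center is easy to attend to.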


2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. The paper makes three main contributions: (1) pre-processing, (2) a deep-learning-based approach, and (3) data augmentation. The pre-processing step includes pruning extra white space and de-skewing skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflections. Data augmentation combined with the deep learning approach yields a clear and promising improvement, raising the Character Recognition (CR) rate from the 75.08% baseline to 80.02%.
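The white-space pruning step of the pre-processing can be sketched as a simple bounding-box crop of the ink region in a binarized text-line image; the threshold and ink polarity below are assumptions, not KHATT-specific values:

```python
import numpy as np

def prune_whitespace(img, threshold=0):
    """Crop extra white margins around a binarized text-line image.
    Assumes ink pixels have values > threshold; a minimal sketch of
    the pruning step, not the paper's exact implementation."""
    ink_rows = np.where(img.max(axis=1) > threshold)[0]
    ink_cols = np.where(img.max(axis=0) > threshold)[0]
    return img[ink_rows[0]:ink_rows[-1] + 1,
               ink_cols[0]:ink_cols[-1] + 1]

page = np.zeros((10, 20), dtype=np.uint8)
page[3:6, 5:15] = 255                # the actual stroke region
print(prune_whitespace(page).shape)  # (3, 10)
```

De-skewing would follow as a separate step (e.g. estimating the dominant line angle and rotating), which this sketch does not cover.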


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Yong He ◽  
Hong Zeng ◽  
Yangyang Fan ◽  
Shuaisheng Ji ◽  
Jianjian Wu

In this paper, we propose an approach to detect oilseed rape pests based on deep learning, which improves the mean average precision (mAP) to 77.14%, a 9.7% increase over the original model. We deployed this model on a mobile platform so that every farmer can use the program, which diagnoses pests in real time and provides suggestions for pest control. We designed an oilseed rape pest imaging database with 12 typical oilseed rape pests and compared the performance of five models; SSD with Inception was chosen as the optimal model. To further raise the mAP, we used data augmentation (DA) and added a dropout layer. The experiments were performed on the Android application we developed, and the results show that our approach clearly surpasses the original model and is helpful for integrated pest management. Compared with past work, this application improves environmental adaptability, response speed, and accuracy, and has the advantages of low cost and simple operation, making it suitable for pest monitoring missions using drones and the Internet of Things (IoT).
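The mAP figure reported above is the mean over per-class average precision (AP). A generic sketch of the AP computation for one class is shown below; the paper presumably follows a standard detection protocol (e.g. IoU-matched true positives), whose details are assumed here:

```python
import numpy as np

def average_precision(scores, is_true_positive, n_gt):
    """Area under the precision-recall curve for one class:
    sort detections by confidence, accumulate TP/FP counts,
    then integrate precision over recall. Generic sketch, not
    the paper's exact evaluation code."""
    order = np.argsort(scores)[::-1]
    flags = np.asarray(is_true_positive)[order]
    tp = np.cumsum(flags)
    fp = np.cumsum(~flags)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Riemann sum of precision over recall increments.
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

# Three detections, two ground-truth boxes; middle one is a false positive.
ap = average_precision([0.9, 0.8, 0.7], [True, False, True], n_gt=2)
print(round(ap, 3))  # 0.833
```

mAP is then simply the mean of this AP across the 12 pest classes.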


Cell Reports ◽  
2021 ◽  
Vol 34 (5) ◽  
pp. 108709
Author(s):  
Xiaojun Wang ◽  
Hanqing Xiong ◽  
Yurong Liu ◽  
Tao Yang ◽  
Anan Li ◽  
...  

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Abstract
Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of the artery lumen in computed tomography angiography (CTA) data. However, to perform well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of the synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in terms of Dice coefficient and 20% for Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep-learning-based segmentation of the artery lumen in CTA images.
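The Dice coefficient reported above measures overlap between predicted and reference masks. A minimal sketch, with a toy disk standing in for a numerically generated lumen cross-section (the actual synthesis and domain adaptation steps are far more involved):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy "lumen": a filled disk on a grid; a slightly shifted disk
# plays the role of an imperfect segmentation prediction.
yy, xx = np.mgrid[:64, :64]
lumen = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
shifted = (yy - 32) ** 2 + (xx - 34) ** 2 < 10 ** 2
print(round(dice(shifted, lumen), 3))
```

A Dice of 1.0 means perfect overlap; the 5% improvement quoted in the abstract is an absolute gain on this scale.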

