DeepNeuron: An Open Deep Learning Toolbox for Neuron Tracing

2018
Author(s): Zhi Zhou, Hsien-Chi Kuo, Hanchuan Peng, Fuhui Long

Abstract: Reconstructing the three-dimensional (3D) morphology of neurons is essential to understanding brain structure and function. Over the past decades, a number of neuron tracing tools, including manual, semi-automatic, and fully automatic approaches, have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them rely on hand-coded rules to extract and connect the structural components of a neuron, and they show limited performance on complicated neuron morphologies. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new open-source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules that address basic yet challenging problems in neuron tracing, including but not limited to: (1) detecting neuron signal under different image conditions, (2) connecting neuronal signals into trees, (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron on light microscopy images, including bright-field and confocal images of human and mouse brain, on which it demonstrates robustness and accuracy in neuron tracing.
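The abstract gives no implementation details, so the following is only an illustrative sketch of the idea behind the first module: a small 3D patch classifier, applied in a sliding window, that flags locations containing neuron signal. The architecture, patch size, and the `detect_signal` helper are hypothetical and do not reproduce DeepNeuron's actual networks.

```python
# Illustrative sketch only: a minimal 3D patch classifier for "neuron signal vs. background".
# NOT the actual DeepNeuron architecture; layer sizes and helper names are hypothetical.
import torch
import torch.nn as nn

class PatchClassifier3D(nn.Module):
    def __init__(self, patch_size=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        flat = 16 * (patch_size // 4) ** 3
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, x):  # x: (N, 1, D, H, W) image patches
        return self.classifier(self.features(x))

def detect_signal(volume, model, patch=16, stride=8, threshold=0.5):
    """Slide a window over a 3D volume and return centres of patches
    classified as containing neuron signal (hypothetical helper)."""
    model.eval()
    hits = []
    D, H, W = volume.shape
    with torch.no_grad():
        for z in range(0, D - patch + 1, stride):
            for y in range(0, H - patch + 1, stride):
                for x in range(0, W - patch + 1, stride):
                    p = volume[z:z + patch, y:y + patch, x:x + patch]
                    p = torch.as_tensor(p, dtype=torch.float32)[None, None]
                    prob = torch.softmax(model(p), dim=1)[0, 1].item()
                    if prob > threshold:
                        hits.append((z + patch // 2, y + patch // 2, x + patch // 2))
    return hits
```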

Sensors, 2021, Vol 21 (4), pp. 1031
Author(s): Joseba Gorospe, Rubén Mulero, Olatz Arbelaitz, Javier Muguerza, Miguel Ángel Antón

Deep learning techniques are being used increasingly in the scientific community as a consequence of the high computational capacity of current systems and the growth in available data resulting from the digitalisation of society in general and the industrial world in particular. In addition, the emergence of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without transferring all of the data to centralised servers. The combination of these two concepts can lead to systems capable of making correct decisions and acting on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so being able to deploy deep learning on a wide range of micro-controllers would be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein show that the proposed system is competitive with other commercial systems.
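The abstract does not describe the deployment toolchain, so the sketch below only illustrates one common workflow for this kind of setup: converting a small Keras model into a TensorFlow Lite flatbuffer and emitting it as a C array that an Mbed OS firmware project could compile in. The toy model, file names, and quantisation choice are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: export a small Keras model as a TensorFlow Lite flatbuffer, then as a
# C byte array for embedding into microcontroller firmware. Not the paper's exact pipeline.
import tensorflow as tf

# A toy model standing in for whatever network is to be deployed on the microcontroller.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantisation
tflite_model = converter.convert()

# Emit the flatbuffer as a C array so it can be linked into the embedded binary.
with open("model_data.h", "w") as f:
    f.write("const unsigned char g_model_data[] = {\n")
    f.write(",".join(str(b) for b in tflite_model))
    f.write("\n};\n")
    f.write(f"const unsigned int g_model_data_len = {len(tflite_model)};\n")
```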


Sensors, 2021, Vol 21 (13), pp. 4582
Author(s): Changjie Cai, Tomoki Nishimura, Jooyeon Hwang, Xiao-Ming Hu, Akio Kuroda

Fluorescent probes can be used to detect various types of asbestos (serpentine and amphibole groups); however, fiber counting with our previously developed software was not accurate for samples with low fiber concentrations. Machine learning-based techniques for image analysis, particularly Convolutional Neural Networks (CNNs), have been widely applied in many areas. The objectives of this study were to (1) create a laboratory database of fluorescence microscopy (FM) images covering a wide range of asbestos concentrations (0–50 fibers/liter); and (2) determine the applicability of a state-of-the-art object detection CNN model, YOLOv4, to accurately detect asbestos. We captured fluorescence microscopy images containing asbestos and labeled the individual fibers in the images. We trained the YOLOv4 model with the labeled images using one GTX 1660 Ti Graphics Processing Unit (GPU). Our results demonstrated the exceptional capacity of the YOLOv4 model to learn fluorescent asbestos morphologies. The mean average precision at a threshold of 0.5 (mAP@0.5) was 96.1% ± 0.4%, using the National Institute for Occupational Safety and Health (NIOSH) fiber counting Method 7400 as the reference method. Compared to our previous counting software (Intec/HU), YOLOv4 achieved higher accuracy (0.997 vs. 0.979) and, in particular, much higher precision (0.898 vs. 0.418), recall (0.898 vs. 0.780), and F1 score (0.898 vs. 0.544). In addition, YOLOv4 performed much better than Intec/HU on low fiber concentration samples (<15 fibers/liter). Therefore, the FM method coupled with YOLOv4 is effective at detecting asbestos fibers and differentiating them from non-asbestos particles.
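For reference, the detection metrics quoted above (precision, recall, F1 score, accuracy) follow the standard definitions; the snippet below computes them from true/false positive and negative counts. The counts in the usage example are placeholders, not the study's data.

```python
# Standard detection metrics from confusion-matrix counts; the example counts are placeholders.
def detection_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

# Placeholder counts of detected fibers vs. a reference count (e.g., NIOSH Method 7400):
print(detection_metrics(tp=88, fp=10, fn=10, tn=892))
```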


Author(s): Jun-Li Xu, Cecilia Riccioli, Ana Herrero-Langreo, Aoife Gowen

Deep learning (DL) has recently achieved considerable success in a wide range of applications, such as speech recognition, machine translation and visual recognition. This tutorial provides guidelines and useful strategies for applying DL techniques to pixel-wise classification of spectral images. A one-dimensional convolutional neural network (1-D CNN) is used to extract features from the spectral domain, which are subsequently used for classification. In contrast to conventional classification methods for spectral images, which examine primarily the spectral context, a three-dimensional (3-D) CNN is applied to extract spatial and spectral features simultaneously and thereby enhance classification accuracy. This tutorial paper explains, in a stepwise manner, how to develop 1-D CNN and 3-D CNN models to discriminate spectral imaging data in a food authenticity context. The example image data consist of three varieties of puffed cereals imaged in the NIR range (943–1643 nm). The tutorial is presented in the MATLAB environment, and the scripts and dataset used are provided. Starting from spectral image pre-processing (background removal and spectral pre-treatment), the typical steps in the development of CNN models are presented. The example dataset demonstrates that deep learning approaches can increase classification accuracy compared to conventional approaches, raising the pixel-level accuracy of the model tested on an independent image from 92.33% using partial least squares-discriminant analysis to 99.4% using the 3-D CNN model. The paper concludes with a discussion of challenges and suggestions in the application of DL techniques to spectral image classification.
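The tutorial's scripts are in MATLAB; as a rough Python analogue only, the sketch below shows the general shape of a 1-D CNN that classifies each pixel from its spectrum. The number of wavelengths, layer sizes and class count are illustrative assumptions, not the tutorial's actual settings.

```python
# Minimal sketch of a 1-D CNN for pixel-wise classification of spectra (Python/Keras analogue
# of the MATLAB tutorial; all sizes are illustrative assumptions).
import tensorflow as tf

n_bands, n_classes = 128, 3  # e.g., one NIR spectrum per pixel, three cereal varieties

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_bands, 1)),           # one spectrum per pixel
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (n_pixels, n_bands, 1) pre-processed spectra; y: (n_pixels,) class labels
# model.fit(X, y, epochs=20, batch_size=256, validation_split=0.2)
```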


2019, Vol 16 (12), pp. 1323-1331
Author(s): Yichen Wu, Yair Rivenson, Hongda Wang, Yilin Luo, Eyal Ben-David, ...

eLife, 2018, Vol 7
Author(s): Romain Franconville, Celia Beron, Vivek Jayaraman

The central complex is a highly conserved insect brain region composed of morphologically stereotyped neurons that arborize in distinctively shaped substructures. The region is implicated in a wide range of behaviors and several modeling studies have explored its circuit computations. Most studies have relied on assumptions about connectivity between neurons based on their overlap in light microscopy images. Here, we present an extensive functional connectome of Drosophila melanogaster’s central complex at cell-type resolution. Using simultaneous optogenetic stimulation, calcium imaging and pharmacology, we tested the connectivity between 70 presynaptic-to-postsynaptic cell-type pairs. We identified numerous inputs to the central complex, but only a small number of output channels. Additionally, the connectivity of this highly recurrent circuit appears to be sparser than anticipated from light microscopy images. Finally, the connectivity matrix highlights the potentially critical role of a class of bottleneck interneurons. All data are provided for interactive exploration on a website.


Materials, 2020, Vol 13 (23), pp. 5419
Author(s): Anna Machrowska, Jakub Szabelski, Robert Karpiński, Przemysław Krakowski, Józef Jonak, ...

The purpose of the study was to test the usefulness of deep learning artificial neural networks and statistical modeling in predicting the strength of bone cements with defects. The defects are related to the introduction of admixtures, such as blood or saline, as contaminants into the cement at the preparation stage. Given the wide range of applications of deep learning, among others in speech recognition, bioinformation processing, and medication design, the extent to which the compressive strength of bone cements can be predicted was examined. Developing and improving deep learning network (DLN) algorithms and statistical models for analysing changes in the mechanical parameters of the tested materials will make it possible to determine an acceptable margin of error, during surgery or cement preparation, relative to the expected strength of the material used to fill bone cavities. The use of these computational methods may therefore play a significant role in the initial qualitative assessment of procedure outcomes and, thus, in mitigating errors that result in failure to maintain the required mechanical parameters and in patient dissatisfaction.


2020, Vol 10 (19), pp. 6735
Author(s): Zishu Liu, Wei Song, Yifei Tian, Sumi Ji, Yunsick Sung, ...

Point clouds have been widely used in three-dimensional (3D) object classification tasks, e.g., people recognition in unmanned ground vehicles. However, the irregular data format of point clouds and the large number of parameters in deep learning networks affect the performance of object classification. This paper develops a 3D object classification system using a broad learning system (BLS) with a feature extractor called VB-Net. First, raw point clouds are voxelized; through this step, irregular point clouds are converted into regular voxel grids that are easily processed by the feature extractor. Then, a pre-trained VoxNet is employed as the feature extractor to extract features from the voxels. Finally, those features are used for object classification by the BLS. The proposed system was tested on the ModelNet40 and ModelNet10 datasets, achieving average recognition accuracies of 83.99% and 90.08%, respectively. Compared to deep learning networks, the time consumption of the proposed system is significantly lower.
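As an illustration of the voxelisation step described above, the sketch below converts an irregular point cloud into a regular occupancy grid of the kind a VoxNet-style feature extractor consumes. The grid resolution and normalisation scheme are assumptions, not the paper's exact settings.

```python
# Sketch: voxelise an (N, 3) point cloud into a binary occupancy grid. Resolution and
# normalisation are illustrative assumptions, not the paper's settings.
import numpy as np

def voxelize(points, grid_size=32):
    """points: (N, 3) xyz coordinates -> (grid_size, grid_size, grid_size) occupancy grid."""
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins
    extent[extent == 0] = 1e-9                    # avoid division by zero on flat axes
    # Scale points into [0, grid_size - 1] and use them as voxel indices.
    idx = ((points - mins) / extent * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # mark occupied voxels
    return grid

# Example with a random point cloud standing in for a ModelNet object:
cloud = np.random.rand(2048, 3)
occupancy = voxelize(cloud)
print(occupancy.shape, occupancy.sum())           # (32, 32, 32), number of occupied voxels
```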

