Deep learning-based real-time detection of neurons in brain slices for in vitro physiology

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mighten C. Yip ◽  
Mercedes M. Gonzalez ◽  
Christopher R. Valenta ◽  
Matthew J. M. Rowan ◽  
Craig R. Forest

Abstract
A common electrophysiology technique used in neuroscience is patch clamp: a method in which a glass pipette electrode facilitates single-cell electrical recordings from neurons. Typically, patch clamp is done manually: an electrophysiologist views a brain slice under a microscope, visually selects a neuron to patch, and moves the pipette into close proximity to the cell to break through and seal its membrane. While recent advances in the field of patch clamping have enabled partial automation, the task of detecting a healthy neuronal soma in acute brain tissue slices is still a critical step that is commonly done manually, often presenting challenges for novices in electrophysiology. To overcome this obstacle and progress towards full automation of patch clamp, we combined the differential interference contrast microscopy optical technique with an object detection-based convolutional neural network (CNN) to detect healthy neurons in acute slices. Utilizing the YOLOv3 architecture, we achieved a 98% reduction in training time, to 18 min, compared to previously published attempts. We also compared networks trained on unaltered and enhanced images, achieving up to 77% and 72% mean average precision, respectively. This deep learning-based method accomplishes automated neuronal detection in brain slices at 18 frames per second with a small data set of 1138 annotated neurons, rapid training time, and high precision. Lastly, we verified the health of the identified neurons with a patch clamp experiment in which the average access resistance was 29.25 MΩ (n = 9). The addition of this technology during live-cell imaging for patch clamp experiments can not only improve manual patch clamping by reducing the neuroscience expertise required to select healthy cells, but also help achieve full automation of patch clamping by nominating cells without human assistance.
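Object-detection precision figures like the mean average precision above are computed from the overlap (intersection over union, IoU) between predicted and annotated bounding boxes. A minimal sketch of the IoU computation (the function name and box format are our own, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted soma box partially overlapping a ground-truth annotation:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A detection typically counts as a true positive when its IoU with an annotation exceeds a fixed threshold (0.5 is a common choice).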

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Guangpeng Fan ◽  
Feixiang Chen ◽  
Danyu Chen ◽  
Yan Li ◽  
Yanqi Dong

In geological surveys, the recognition and classification of rock lithology is an important task. Recognition based on rock thin sections involves a long identification period and high cost, its accuracy cannot be guaranteed, and it offers no effective solution in the field. Smartphones, communication devices equipped with multiple sensors, are carried by most geological survey workers. In this paper, a smartphone application based on a convolutional neural network is developed: the phone's camera is used to photograph rocks, and the type and lithology of the rock are identified quickly and accurately. This paper proposes a method for rapidly and accurately recognizing rock lithology in the field. Based on ShuffleNet, a lightweight convolutional neural network used in deep learning, combined with transfer learning, a recognition model for rock images was established. The trained model was then deployed to the smartphone, and an application for identifying rock lithology was designed and developed to verify its usability and accuracy. The results showed that the accuracy of the recognition model was 97.65% on the PC validation data set. The accuracy on the smartphone test data set was 95.30%; the average recognition time for a single image was 786 milliseconds, with a maximum of 1,045 milliseconds and a minimum of 452 milliseconds; and images recognized with accuracy above 96% accounted for 95% of the test data set. This paper presents a new solution for the rapid and accurate recognition of rock lithology in field geological surveys, meeting the needs of geological survey personnel in field operations.
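The transfer-learning step described above keeps the pretrained ShuffleNet backbone fixed and retrains only the classifier on the new rock images. A toy numpy sketch of that idea, in which a fixed random projection stands in for the frozen backbone and a nearest-centroid classifier stands in for the retrained softmax head (all names, shapes and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen ShuffleNet backbone: a fixed random projection
# followed by ReLU (an assumption -- not the real network).
W_backbone = rng.normal(size=(256, 64))
def extract_features(x):
    return np.maximum(x @ W_backbone, 0.0)   # frozen, never updated

# Toy 3-class "rock photo" vectors with class-dependent mean shifts
x = rng.normal(size=(90, 256))
y = np.repeat(np.arange(3), 30)
x[y == 1] += 0.8
x[y == 2] -= 0.8

feats = extract_features(x)

# Transfer-learning step: fit only the classifier head on the frozen
# features (a nearest-centroid head stands in for the softmax layer).
centroids = np.stack([feats[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(
    ((feats[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

Because only the small head is fitted, training is cheap even with few labeled samples, which is the practical appeal of transfer learning on mobile-class models.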


2019 ◽  
Author(s):  
Dan MacLean

Abstract
Gene regulatory networks that control gene expression are widely studied, yet the interactions that make them up are difficult to predict from high-throughput data. Deep learning methods such as convolutional neural networks can perform surprisingly good classifications on a variety of data types, and the matrix-like gene expression profiles would seem to be ideal input data for deep learning approaches. In this short study I compiled training sets of expression data using the Arabidopsis AtGenExpress global stress expression data set and known transcription factor-target interactions from the Arabidopsis PLACE database. I built and optimised convolutional neural networks, with the best model providing 95% accuracy of classification on a held-out validation set. Investigation of the activations within this model revealed that classification was based on positive correlation of expression profiles in short sections. This result shows that a convolutional neural network can be used to make classifications and reveal the basis of those classifications for gene expression data sets, indicating that it is a useful and interpretable tool for exploratory classification of biological data. The final model is available for download and as a web application.
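The reported finding, that classification rests on positive correlation of expression profiles over short sections, can be probed directly by computing Pearson correlation in sliding windows along two profiles. A small illustrative sketch (the window size and the toy profiles are our own choices, not the study's data):

```python
import numpy as np

def windowed_corr(tf_profile, target_profile, window=5):
    """Pearson correlation of two expression profiles in sliding windows."""
    corrs = []
    for i in range(len(tf_profile) - window + 1):
        a = tf_profile[i:i + window]
        b = target_profile[i:i + window]
        corrs.append(np.corrcoef(a, b)[0, 1])
    return np.array(corrs)

t = np.linspace(0, 4 * np.pi, 40)
tf = np.sin(t)                              # toy TF expression profile
target = np.sin(t) + 0.1 * np.cos(7 * t)    # co-regulated target + noise
print(windowed_corr(tf, target).mean())     # strongly positive on average
```

A convolutional filter spanning a few conditions responds to exactly this kind of short-range co-variation, which is one way such a model can become interpretable.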


2021 ◽  
Vol 87 (8) ◽  
pp. 577-591
Author(s):  
Fengpeng Li ◽  
Jiabao Li ◽  
Wei Han ◽  
Ruyi Feng ◽  
Lizhe Wang

Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. To address this issue, an unsupervised deep learning representation method for high-resolution remote sensing image scene classification is proposed in this work. The proposed method, based on contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gaps between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to partition the space of categories, and then makes predictions on the test data using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
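The narrowing and widening of distances between view pairs described above is usually implemented as a contrastive (InfoNCE-style) loss over embedding similarities. A simplified numpy sketch, where rows of two matrices play the role of embeddings of two color channels of the same images (a hypothetical simplification of the paper's setup, not its exact loss):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE-style loss: rows of z1 and z2 are embeddings of two
    color channels of the same images (positives on the diagonal)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature           # cosine similarity matrix
    sim -= sim.max(axis=1, keepdims=True)   # numerical stability
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))         # pull positive pairs together

rng = np.random.default_rng(1)
anchor = rng.normal(size=(8, 16))
aligned = anchor + 0.05 * rng.normal(size=(8, 16))   # matching channels
shuffled = rng.normal(size=(8, 16))                  # mismatched channels
print(info_nce(anchor, aligned), info_nce(anchor, shuffled))
```

The loss is small when each embedding is most similar to its own positive pair and large when positives are indistinguishable from negatives, which is what drives the encoder toward class-specific features without labels.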


2021 ◽  
Vol 11 ◽  
Author(s):  
Yinxiang Guo ◽  
Jianing Xu ◽  
Xiangzhi Li ◽  
Lin Zheng ◽  
Wei Pan ◽  
...  

Patients with thyroid cancer take a small dose of 131I after undergoing a total thyroidectomy, and single-photon emission computed tomography (SPECT) is used to diagnose whether thyroid tissue remains in the body. However, it is difficult for human eyes to observe the category-specific features of SPECT images, and hence difficult for doctors to accurately diagnose residual thyroid tissue from them. At present, research on the classification of thyroid tissue residues after thyroidectomy is still in a blank state. This paper proposes a fine-tuning method based on the ResNet-18 convolutional neural network. First, the SPECT images are preprocessed to improve image quality and remove background interference. Second, the preprocessed image samples are used to fine-tune the pretrained ResNet-18 model to obtain better features. Finally, a Softmax classifier diagnoses the residual thyroid tissue. The method was tested on SPECT images of 446 patients collected by a local hospital and compared with the widely used lightweight SqueezeNet and ShuffleNetV2 models. Because of the small data set, 10 random grouping experiments were conducted, each dividing the data set into a training set and a test set at a ratio of 3:1. The accuracy and sensitivity of the proposed model are 96.69% and 94.75%, respectively, significantly higher than those of the other models (p < 0.05). The specificity and precision are 99.6% and 99.96%, respectively, with no significant difference from the other models (p > 0.05). The areas under the curve of the proposed model, SqueezeNet, and ShuffleNetV2 are 0.988 (95% CI, 0.941–1.000), 0.898 (95% CI, 0.819–0.951; p = 0.0257), and 0.885 (95% CI, 0.803–0.941; p = 0.0057), respectively.
We show that this thyroid tissue residue classification system can serve as a computer-aided diagnosis method to effectively improve diagnostic accuracy. By diagnosing patients with residual thyroid tissue more accurately, it also helps avoid overtreatment, reflecting its potential clinical application value.
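The accuracy, sensitivity, specificity and precision quoted above all derive from the classifier's confusion matrix. A minimal sketch for the binary residue/no-residue case (the counts below are a toy example, not the paper's data):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity and precision for a binary
    residual-tissue classifier (1 = residue present)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on positives
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

# Toy example: 6 positives, 4 negatives, one missed positive, no false alarms
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
m = binary_metrics(y_true, y_pred)
print(m)  # accuracy 0.9, sensitivity ≈ 0.833, specificity 1.0, precision 1.0
```

High specificity and precision with a slightly lower sensitivity, as reported here, corresponds to a classifier that rarely raises false alarms but occasionally misses a positive.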


2020 ◽  
Vol 222 (1) ◽  
pp. 247-259 ◽  
Author(s):  
Davood Moghadas

SUMMARY Conventional geophysical inversion techniques suffer from several limitations, including computational cost, nonlinearity, non-uniqueness and the dimensionality of the inverse problem. Successful inversion of geophysical data has been a major challenge for decades. Here, a novel approach based on deep learning (DL) inversion via convolutional neural network (CNN) is proposed to instantaneously estimate subsurface electrical conductivity (σ) layering from electromagnetic induction (EMI) data. In this respect, a fully convolutional network was trained on a large synthetic data set generated with a 1-D EMI forward model. The accuracy of the proposed approach was examined using several synthetic scenarios. Moreover, the trained network was used to obtain subsurface electromagnetic conductivity images (EMCIs) from EMI data measured along two transects from the Chicken Creek catchment (Brandenburg, Germany). Dipole–dipole electrical resistivity tomography data were measured as well to obtain reference subsurface σ distributions down to a 6 m depth. The inversely estimated models were juxtaposed and compared with their counterparts obtained from a spatially constrained deterministic algorithm as a standard code. Theoretical simulations demonstrated good performance of the algorithm even in the presence of noise in the data. Moreover, application of the DL inversion for subsurface imaging of the Chicken Creek catchment demonstrated the accuracy and robustness of the proposed approach for EMI inversion. This approach returns the subsurface σ distribution directly from EMI data in a single step, without any iterations, which considerably simplifies EMI inversion and allows rapid and accurate estimation of subsurface EMCIs from multiconfiguration EMI data.
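The key idea, training on synthetic forward-model data so that inversion becomes a single feed-forward step, can be illustrated with a drastically simplified stand-in: a toy linear forward model in place of the 1-D EMI model, and a linear least-squares map in place of the CNN (the kernel, sizes and units are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model standing in for the 1-D EMI forward model:
# each coil configuration senses the layers through an exponential kernel.
n_layers, n_meas = 6, 12
depth = np.linspace(0, 6, n_layers)      # layer depths, m
coil = np.linspace(0, 6, n_meas)         # coil configurations
F = np.exp(-np.abs(coil[:, None] - depth[None, :]))

# Large synthetic training set of layered conductivity profiles
# (noise-free here for clarity; the paper trains a CNN on such data)
sigma_train = rng.uniform(5, 50, size=(5000, n_layers))   # mS/m
d_train = sigma_train @ F.T

# "Training": fit the data -> sigma map (a linear least-squares inverse
# stands in for the convolutional network); rcond trims tiny singular values
G, *_ = np.linalg.lstsq(d_train, sigma_train, rcond=1e-8)

# Single-step inversion of new measurements, no iterations
sigma_true = rng.uniform(5, 50, size=n_layers)
sigma_est = (F @ sigma_true) @ G
print(np.round(sigma_est, 2))
```

Once the inverse map is learned offline, inverting a new sounding is a single matrix product; the CNN version generalizes this to nonlinear forward physics.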


Author(s):  
Anan Zhang ◽  
Jiahui He ◽  
Yu Lin ◽  
Qian Li ◽  
Wei Yang ◽  
...  

Purpose Considering that the high recognition rates of deep learning require the support of massive data, this study aims to propose an insulation fault identification method based on a convolutional neural network (CNN) for small data sets. Design/methodology/approach Because of the chaotic characteristics of partial discharge (PD) signals, the PD signal of a unit power-frequency period is equivalently transformed by phase space reconstruction to derive chaotic features. At the same time, geometric, fractal, entropy and time-domain features are extracted to increase the volume of feature data. Finally, the combined features are constructed and imported into the CNN to complete PD recognition. Findings The results of the case study show that the proposed method can realize PD recognition on a small data set and makes up for the shortcomings of existing CNN-based methods. The 1-CNN built in this paper also has better recognition performance for four typical insulation faults of cable accessories: its recognition performance is improved by 4.37% and 1.25%, respectively, compared with similar methods based on support vector machines and BPNN. Originality/value This paper proposes an insulation fault recognition method based on a CNN with a small data set, addressing the difficulty of realizing insulation fault recognition for cable accessories and deep data mining when measured data are insufficient.
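The phase space reconstruction mentioned in the approach is typically a Takens time-delay embedding of the one-period PD signal. A minimal sketch (the embedding dimension, delay and toy signal are illustrative choices, not the paper's parameters):

```python
import numpy as np

def phase_space_reconstruct(signal, dim=3, delay=5):
    """Takens time-delay embedding of a 1-D PD signal: each row is one
    point of the reconstructed trajectory."""
    n = len(signal) - (dim - 1) * delay
    return np.stack(
        [signal[i * delay:i * delay + n] for i in range(dim)], axis=1)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pd_signal = np.sin(t)            # toy stand-in for one power-frequency period
traj = phase_space_reconstruct(pd_signal, dim=3, delay=5)
print(traj.shape)  # (190, 3)
```

Chaotic descriptors (correlation dimension, Lyapunov-type statistics) are then computed on the reconstructed trajectory rather than on the raw waveform, which is what lets a small data set yield richer feature vectors.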


2021 ◽  
Author(s):  
SHOGO ARAI ◽  
ZHUANG FENG ◽  
Fuyuki Tokuda ◽  
Adam Purnomo ◽  
Kazuhiro Kosuge

This paper proposes a deep learning-based fast grasp detection method with a small dataset for robotic bin-picking. We consider the problem of grasping stacked mechanical parts on a planar workspace using a parallel gripper. In this paper, we use a deep neural network to solve the problem with a single depth image. To reduce the computation time, we propose an edge-based algorithm to generate potential grasps. Then, a convolutional neural network (CNN) is applied to evaluate the robustness of all potential grasps for bin-picking. Finally, the proposed method ranks them, and the object is grasped using the grasp with the highest score. In bin-picking experiments, we evaluate the proposed method with a 7-DOF manipulator on textureless mechanical parts with complex shapes. The success rate of grasping is 97%, and the average computation time of CNN inference is less than 0.23 s on a laptop PC without a GPU. In addition, we confirm that the proposed method can be applied to unseen objects that are not included in the training dataset.
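The pipeline above, edge-based candidate generation followed by CNN scoring and ranking, can be sketched on a toy depth image. Here a simple depth-gradient threshold plays the role of the edge detector, and a hypothetical centre-preferring scorer stands in for the trained CNN (everything below is an illustrative assumption, not the paper's algorithm):

```python
import numpy as np

def edge_grasp_candidates(depth, threshold=0.04):
    """Candidate grasp pixels where the depth gradient is large
    (object boundaries), standing in for the edge-based generator."""
    gy, gx = np.gradient(depth)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

# Toy depth image: flat table at 1.0 m with a 0.1 m-high part in the middle
depth = np.full((20, 20), 1.0)
depth[8:12, 8:12] = 0.9
candidates = edge_grasp_candidates(depth)

# Score and rank candidates; a trained CNN would assign these scores,
# here a hypothetical scorer simply prefers points near the image centre.
scores = [-((y - 10) ** 2 + (x - 10) ** 2) for y, x in candidates]
best = candidates[int(np.argmax(scores))]
print(best)
```

Restricting the expensive scoring step to edge pixels is what keeps total inference time low enough for CPU-only execution.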


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Bin Zheng ◽  
Tao Huang

In order to achieve accurate mango grading, a grading system was designed using deep learning. The system mainly includes CCD-camera image acquisition, image preprocessing, model training, and model evaluation. Whereas traditional deep neural network training needs large sample data sets, a convolutional neural network is proposed here to grade mangoes efficiently through continuous adjustment and optimization of hyperparameters and batch size. The ultra-lightweight SqueezeNet architecture is introduced: compared with AlexNet and other algorithms at the same accuracy level, it has the advantages of small model size and fast operation. The experimental results show that the convolutional neural network model, after hyperparameter optimization and adjustment, performs excellent deep learning image processing on a small sample data set. Two hundred thirty-four Jinhuang mangoes from Panzhihua were picked in the natural environment and tested. The analysis results meet the requirements of the agricultural industry standard of the People's Republic of China for mango and mango grade specification. The average accuracy rate was 97.37%, the average error rate was 2.63%, and the average loss value of the model was 0.44. The processing time for an original image with a resolution of 500 × 374 was only 2.57 milliseconds. This method has important theoretical and application value and can provide a powerful means for automatic mango grading.
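SqueezeNet's small model size comes largely from its "fire" modules, which squeeze the channel count with 1×1 convolutions before expanding with parallel 1×1 and 3×3 layers. A parameter-count comparison against a plain 3×3 convolution of the same input/output width (the layer sizes are illustrative, not those of the mango-grading network):

```python
def conv_params(in_ch, out_ch, k):
    """Weights in a k x k convolution layer (biases ignored)."""
    return in_ch * out_ch * k * k

def fire_params(in_ch, squeeze_ch, expand_ch):
    """SqueezeNet fire module: 1x1 squeeze, then parallel 1x1 and 3x3
    expand layers (each producing expand_ch channels)."""
    return (conv_params(in_ch, squeeze_ch, 1)
            + conv_params(squeeze_ch, expand_ch, 1)
            + conv_params(squeeze_ch, expand_ch, 3))

plain = conv_params(128, 128, 3)    # ordinary 3x3 conv, 128 -> 128 channels
fire = fire_params(128, 16, 64)     # fire module, 128 -> 64 + 64 channels
print(plain, fire, round(plain / fire, 1))  # 147456 12288 12.0
```

A roughly 12× reduction per layer at the same channel width is what makes the model small enough for fast grading on modest hardware.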


2020 ◽  
pp. bjophthalmol-2020-316274
Author(s):  
Sukkyu Sun ◽  
Ahnul Ha ◽  
Young Kook Kim ◽  
Byeong Wook Yoo ◽  
Hee Chan Kim ◽  
...  

Background/Aims To evaluate, with spectral-domain optical coherence tomography (SD-OCT), the glaucoma-diagnostic ability of a deep-learning classifier. Methods A total of 777 Cirrus high-definition SD-OCT image sets of the retinal nerve fibre layer (RNFL) and ganglion cell-inner plexiform layer (GCIPL) of 315 normal subjects, 219 patients with early-stage primary open-angle glaucoma (POAG) and 243 patients with moderate-to-severe-stage POAG were aggregated. The image sets were divided into a training data set (252 normal, 174 early POAG and 195 moderate-to-severe POAG) and a test data set (63 normal, 45 early POAG and 48 moderate-to-severe POAG). The visual geometry group (VGG16)-based dual-input convolutional neural network (DICNN) was adopted for the glaucoma diagnoses. Unlike other networks, the DICNN structure takes two images (both RNFL and GCIPL) as inputs. The glaucoma-diagnostic ability was computed according to both accuracy and area under the receiver operating characteristic curve (AUC). Results For the test data set, the DICNN could distinguish between patients with glaucoma and normal subjects accurately (accuracy=92.793%, AUC=0.957 (95% CI 0.943 to 0.966), sensitivity=0.896 (95% CI 0.896 to 0.917), specificity=0.952 (95% CI 0.921 to 0.952)). For distinguishing between patients with early-stage glaucoma and normal subjects, the DICNN's diagnostic ability (accuracy=85.185%, AUC=0.869 (95% CI 0.825 to 0.879), sensitivity=0.921 (95% CI 0.813 to 0.905), specificity=0.756 (95% CI 0.610 to 0.790)) was higher than that of convolutional neural networks trained with RNFL or GCIPL separately. Conclusion The deep-learning algorithm using SD-OCT can distinguish normal subjects not only from established patients with glaucoma but also from patients with early-stage glaucoma. The deep-learning model with DICNN, trained on both RNFL and GCIPL thickness map data, showed a high diagnostic ability for discriminating patients with early-stage glaucoma from normal subjects.
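The distinguishing feature of the DICNN is its two input branches, one per thickness map, fused before classification. A minimal numpy forward-pass sketch, with a flatten-linear-ReLU branch standing in for each VGG16-based extractor (all shapes, weights and data are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(image, weights):
    """One DICNN input branch, reduced to flatten -> linear -> ReLU
    (a stand-in for the paper's VGG16-based feature extractor)."""
    return np.maximum(image.reshape(-1) @ weights, 0.0)

rnfl = rng.normal(size=(8, 8))      # toy RNFL thickness map
gcipl = rng.normal(size=(8, 8))     # toy GCIPL thickness map
w_rnfl = rng.normal(size=(64, 16))
w_gcipl = rng.normal(size=(64, 16))

# The two branches are fused by concatenation, then classified jointly
fused = np.concatenate([branch(rnfl, w_rnfl), branch(gcipl, w_gcipl)])
w_head = rng.normal(size=(32, 2))
logits = fused @ w_head                  # [normal, glaucoma]
prob = np.exp(logits - logits.max())
prob /= prob.sum()
print(fused.shape, np.round(prob, 3))
```

Fusing the two maps lets the classifier weigh RNFL and GCIPL evidence jointly, which is why the dual-input model outperforms networks trained on either map alone.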


Author(s):  
T. Jiang ◽  
X. J. Wang

Abstract. In recent years, deep learning technology has developed continuously and has gradually been transferred to various fields. Among them, the Convolutional Neural Network (CNN), which can extract deep image features thanks to its unique network structure, plays an increasingly important role in hyperspectral image classification. This paper constructs a feature fusion model that combines the deep features derived from a 1D-CNN and a 2D-CNN, and explores the potential of such a model in the field of hyperspectral image classification. The experiment is based on the open-source deep learning framework TensorFlow, with Python 3 as the programming environment. First, multi-layer perceptron (MLP), 1D-CNN and 2D-CNN models are constructed; then the pre-trained 1D-CNN and 2D-CNN models are used as feature extractors; finally, features are extracted via the feature fusion model. The openly available Pavia University hyperspectral data set was selected as a test to compare classification accuracy and classification confidence among the different models. The experimental results show that the feature fusion model obtains higher overall accuracy (99.65%), a higher Kappa coefficient (0.9953) and lower uncertainty for the boundary and unknown regions (3.43%) of the data set. Since the feature fusion model inherits the structural characteristics of the 1D-CNN and 2D-CNN, the complementary advantages of the two models are achieved: the spectral and spatial features of hyperspectral images are fully exploited, yielding state-of-the-art classification accuracy and generalization performance.
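The fusion idea, concatenating spectral features from the 1D-CNN with spatial features from the 2D-CNN, can be sketched for a single pixel. Here a 1-D convolution along the band axis and an average-pooled neighbourhood patch stand in for the two pre-trained extractors (the kernel and sizes are illustrative; 103 bands matches the Pavia University scene):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_features(spectrum, kernel):
    """1D-CNN stand-in: valid 1-D convolution along the band axis + ReLU."""
    out = np.convolve(spectrum, kernel, mode="valid")
    return np.maximum(out, 0.0)

def spatial_features(patch):
    """2D-CNN stand-in: 2x2 average pooling of the neighbourhood patch."""
    h, w = patch.shape
    return patch.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)).ravel()

# One hyperspectral pixel: 103 bands and its 4 x 4 spatial neighbourhood
spectrum = rng.normal(size=(103,))
patch = rng.normal(size=(4, 4))

fused = np.concatenate([
    spectral_features(spectrum, np.array([0.25, 0.5, 0.25])),
    spatial_features(patch),
])
print(fused.shape)  # (105,)
```

The concatenated vector carries both spectral and spatial evidence, so a classifier trained on it can exploit the complementary strengths of the two extractors.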

