AxonDeep: Automated Optic Nerve Axon Segmentation in Mice with Deep Learning.

2021 ◽  
Author(s):  
Wenxiang Deng ◽  
Adam Hedberg-Buenz ◽  
Dana A Soukup ◽  
Sima Taghizadeh ◽  
Michael G Anderson ◽  
...  

Purpose: Optic nerve damage is the principal feature of glaucoma and contributes to vision loss in many diseases. In animal models, nerve health has traditionally been assessed by human experts who grade damage qualitatively or manually quantify axons by sampling limited areas of histologic cross-sections of nerve. Both approaches are prone to variability and are time consuming. Automated approaches have begun to emerge, but shortcomings have limited widespread application. Here, we seek improvements through the use of deep-learning approaches for segmenting and quantifying axons from cross-sections of mouse optic nerve. Methods: Two deep-learning approaches were developed and evaluated: (1) a traditional supervised approach using a fully convolutional network trained with only labeled data and (2) a semi-supervised approach trained with both labeled and unlabeled data using a generative-adversarial-network framework. Results: In comparisons with an independent test set of images with manually marked axon centers and boundaries, both deep-learning approaches outperformed an existing baseline automated approach and performed similarly to two independent experts. The semi-supervised approach performed best and was implemented into AxonDeep. Conclusions: AxonDeep performs automated quantification and segmentation of axons similar to that of experts, without the time and labor constraints associated with manual performance. The quantitative and objective nature of AxonDeep reduces the variability arising from differences in model, methodology, and user that often compromises manual performance of these tasks. Translational Relevance: Use of deep learning for axon quantification provides rapid, objective, and higher-throughput analysis of optic nerve that would otherwise not be possible.
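
The semi-supervised generative-adversarial framework is described only at a high level above. As a rough illustration, one common formulation pairs a fully convolutional segmenter with a discriminator that scores image-mask pairs, so that unlabeled nerve images can still contribute an adversarial term. The sketch below (PyTorch) follows that generic recipe; all module names, layer sizes, and the loss weighting are illustrative assumptions, not AxonDeep's actual implementation.

```python
# Hypothetical sketch of semi-supervised adversarial training for axon segmentation.
# Names (SegNet, Discriminator, adv_weight, etc.) are illustrative only.
import torch
import torch.nn as nn

class SegNet(nn.Module):
    """Tiny fully convolutional segmenter: image -> per-pixel axon logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an (image, mask) pair looks like a manual annotation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

seg, disc = SegNet(), Discriminator()
opt_s = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(labeled_img, labeled_mask, unlabeled_img, adv_weight=0.01):
    # 1) Discriminator: expert masks are "real", predicted masks are "fake".
    with torch.no_grad():
        fake_lab = torch.sigmoid(seg(labeled_img))
        fake_unl = torch.sigmoid(seg(unlabeled_img))
    d_loss = (bce(disc(labeled_img, labeled_mask), torch.ones(labeled_img.size(0), 1))
              + bce(disc(labeled_img, fake_lab), torch.zeros(labeled_img.size(0), 1))
              + bce(disc(unlabeled_img, fake_unl), torch.zeros(unlabeled_img.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Segmenter: supervised loss on labeled data + adversarial loss on all data.
    pred_lab, pred_unl = seg(labeled_img), seg(unlabeled_img)
    sup = bce(pred_lab, labeled_mask)
    adv = (bce(disc(labeled_img, torch.sigmoid(pred_lab)), torch.ones(labeled_img.size(0), 1))
           + bce(disc(unlabeled_img, torch.sigmoid(pred_unl)), torch.ones(unlabeled_img.size(0), 1)))
    s_loss = sup + adv_weight * adv
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()
```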

2019 ◽  
Vol 8 (6) ◽  
pp. 258 ◽  
Author(s):  
Yu Feng ◽  
Frank Thiemann ◽  
Monika Sester

Cartographic generalization is a problem that poses interesting challenges to automation. Whereas plenty of algorithms have been developed for the different sub-problems of generalization (e.g., simplification, displacement, aggregation), there are still cases that are not generalized adequately or in a satisfactory way. The main problem is the interplay between different operators. In those cases the human operator remains the benchmark, able to design an aesthetic and correct representation of the physical reality. Deep learning methods have shown tremendous success for interpretation problems for which algorithmic methods have deficits. A prominent example is the classification and interpretation of images, where deep learning approaches outperform traditional computer vision methods. In both domains, computer vision and cartography, humans are able to produce good solutions. A prerequisite for the application of deep learning is the availability of many representative training examples for the situation to be learned. As this is the case in cartography (there are many existing map series), the idea in this paper is to employ deep convolutional neural networks (DCNNs) for cartographic generalization tasks, especially building generalization. Three network architectures, namely U-net, residual U-net, and generative adversarial network (GAN), are evaluated both quantitatively and qualitatively. They are compared on this task at the target map scales 1:10,000, 1:15,000, and 1:25,000. The results indicate that deep learning models can successfully learn cartographic generalization operations implicitly within a single model. The residual U-net outperformed the others and achieved the best generalization performance.
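
Of the three architectures compared, the residual U-net differs from the plain U-net mainly in its building blocks. A minimal sketch of such a residual block follows; layer sizes and the tile shape are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of the residual block that distinguishes a residual U-net
# from a plain U-net; channel counts are illustrative only.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip (identity or 1x1) connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # Match channel counts for the residual addition when they differ.
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# In a residual U-net these blocks replace the plain double-conv blocks at every
# encoder and decoder level, e.g. for the first downsampling stage:
down1 = nn.Sequential(ResidualBlock(1, 32), nn.MaxPool2d(2))
x = torch.randn(1, 1, 256, 256)   # a rasterized building-map tile (assumed size)
print(down1(x).shape)             # torch.Size([1, 32, 128, 128])
```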


Author(s):  
Haoliang Jiang ◽  
Zhenguo Nie ◽  
Roselyn Yeo ◽  
Amir Barati Farimani ◽  
Levent Burak Kara

Abstract Using deep learning to analyze mechanical stress distributions has been gaining interest with the demand for fast stress analysis methods. Deep learning approaches have achieved excellent outcomes when utilized to speed up stress computation and to learn the physics without prior knowledge of the underlying equations. However, most studies restrict the variation of geometry or boundary conditions, making it difficult to generalize these methods to unseen configurations. We propose a conditional generative adversarial network (cGAN) model for predicting 2D von Mises stress distributions in solid structures. The cGAN learns to generate stress distributions conditioned on geometry, load, and boundary conditions through a two-player minimax game between two neural networks with no prior knowledge. By evaluating the generative network on two stress distribution datasets under multiple metrics, we demonstrate that our model can predict more accurate high-resolution stress distributions than a baseline convolutional neural network model, given varied and complex cases of geometry, load, and boundary conditions.
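
The two-player minimax game mentioned above follows the standard conditional-GAN objective; in our notation (not the paper's), with condition $c$ encoding geometry, load, and boundary conditions and $\sigma$ the ground-truth von Mises stress field, it can be written as

$$\min_G \max_D \; \mathbb{E}_{(c,\sigma)}\bigl[\log D(c,\sigma)\bigr] \;+\; \mathbb{E}_{c}\bigl[\log\bigl(1 - D\bigl(c, G(c)\bigr)\bigr)\bigr],$$

where $G$ is the generative network mapping the condition to a predicted stress field (written here without an explicit noise input) and $D$ is the discriminator scoring condition-stress pairs.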


2021 ◽  
Author(s):  
Taki Hasan Rafi ◽  
Young Woong-Ko

Cardiovascular disease is now one of the leading causes of morbidity and mortality in humans. The electrocardiogram (ECG) is a reliable tool for monitoring the health of the cardiovascular system, and much recent work has focused on accurately categorizing heartbeats. There is high demand for automatic ECG classification systems to assist medical professionals. In this paper we propose a new deep learning method called HeartNet for developing an automatic ECG classifier. The proposed method is composed of a multi-head attention mechanism on top of a CNN model. The main challenge of insufficient labeled data is addressed by adversarial data synthesis, adopting a generative adversarial network (GAN) to generate additional training samples; this improves the overall performance of the proposed method by 5-10% on each under-represented label category. We evaluated our proposed method on the MIT-BIH dataset, where it achieved 99.67 ± 0.11 accuracy and 89.24 ± 1.71 MCC when trained with the adversarially synthesized dataset. We also used two further datasets, the Atrial Fibrillation Detection Database and the PTB Diagnostic Database, to examine the performance of our model on ECG classification. The effectiveness and robustness of the proposed method are validated by extensive experiments, comparison, and analysis. We also highlight some limitations of this work.
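
The abstract describes the architecture only as a multi-head attention mechanism on top of a CNN. A minimal sketch of that pattern for beat classification might look as follows; the layer sizes, the five-class output, and the 360-sample beat length are assumptions, not HeartNet's published configuration.

```python
# Hypothetical sketch of a 1D-CNN feature extractor topped with multi-head
# self-attention for heartbeat classification; all sizes are illustrative.
import torch
import torch.nn as nn

class AttnECGNet(nn.Module):
    def __init__(self, n_classes=5, d_model=64, n_heads=4):
        super().__init__()
        self.cnn = nn.Sequential(                 # (B, 1, L) -> (B, d_model, L/4)
            nn.Conv1d(1, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, d_model, 7, stride=2, padding=3), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                         # x: (B, 1, L) raw beat segment
        feats = self.cnn(x).transpose(1, 2)       # (B, T, d_model) token sequence
        attended, _ = self.attn(feats, feats, feats)  # self-attention over time
        pooled = attended.mean(dim=1)             # average over time steps
        return self.head(pooled)                  # class logits

model = AttnECGNet()
beats = torch.randn(8, 1, 360)                    # 8 beat segments of 360 samples (assumed)
print(model(beats).shape)                         # torch.Size([8, 5])
```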


Author(s):  
S. Rakesh Kumar ◽  
S. Muthuramalingam ◽  
Fadi Al-Turjman

Multilingual and multimodal data analysis underlies the emerging generation of news feed evaluation systems. News feed analysis and evaluation are interrelated processes that are useful for understanding the factors behind news items. A news feed evaluation system can be implemented for single-language or multilingual models. Classification of multilingual news requires deep, layered learning techniques rather than conventional approaches. In this work, a hierarchical structure of deep learning algorithms is implemented to build an effective evaluation system for complex news feeds. Deep learning techniques, namely the Deep Cooperative Multilingual Reinforcement Learning Model, the Multidimensional Genetic Algorithm, and the Multilingual Generative Adversarial Network, are developed to evaluate a vast number of news feeds. The proposed techniques collaborate in a pipeline order to build a deep news feed evaluation system. The implementation results show that the proposed system performs 5% to 12% better than other news evaluation systems.


2017 ◽  
Author(s):  
Mario Valerio Giuffrida ◽  
Hanno Scharr ◽  
Sotirios A Tsaftaris

Abstract In recent years, there has been increasing interest in image-based plant phenotyping, applying state-of-the-art machine learning approaches to challenging problems such as leaf segmentation (a multi-instance problem) and leaf counting. Most of these algorithms need labelled data to learn a model for the task at hand. Despite the recent release of a few plant phenotyping datasets, large annotated plant image datasets for training deep learning algorithms are lacking. One common approach to alleviate the lack of training data is dataset augmentation. Herein, we propose an alternative to dataset augmentation for plant phenotyping: creating artificial images of plants using generative neural networks. We propose the Arabidopsis Rosette Image Generator (through) Adversarial Network (ARIGAN): a deep convolutional network able to generate synthetic rosette-shaped plants, inspired by DC-GAN (a recent adversarial network model using convolutional layers). Specifically, we trained the network using the A1, A2, and A4 subsets of the CVPPP 2017 LCC dataset, containing Arabidopsis thaliana plants. We show that our model is able to generate realistic 128 × 128 colour images of plants. We train our network conditioned on leaf count, so that it is possible to generate plants with a given number of leaves, suitable, among other uses, for training regression-based models. We propose a new Ax dataset of artificial plant images generated by ARIGAN. We evaluate this new dataset using a state-of-the-art leaf counting algorithm, showing that the testing error is reduced when Ax is used as part of the training data.
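
As a rough illustration of a DC-GAN-style generator conditioned on leaf count and emitting 128 × 128 colour images, consider the following sketch; the embedding of the leaf count and all layer sizes are assumptions, not the exact ARIGAN architecture.

```python
# Hypothetical sketch of a DCGAN-style generator conditioned on leaf count,
# producing 128x128 RGB rosette images; sizes are illustrative only.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=100, max_leaves=20, emb_dim=16):
        super().__init__()
        self.count_emb = nn.Embedding(max_leaves + 1, emb_dim)  # leaf count -> vector
        self.net = nn.Sequential(
            # project (noise + condition) to a 4x4 map, then upsample five times to 128
            nn.ConvTranspose2d(z_dim + emb_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(),   # 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),     # 32x32
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),      # 64x64
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                           # 128x128
        )

    def forward(self, z, leaf_count):
        cond = self.count_emb(leaf_count)                    # (B, emb_dim)
        x = torch.cat([z, cond], dim=1)[:, :, None, None]    # (B, z_dim+emb_dim, 1, 1)
        return self.net(x)

gen = CondGenerator()
z = torch.randn(4, 100)
counts = torch.tensor([5, 8, 10, 12])        # desired number of leaves per image
print(gen(z, counts).shape)                  # torch.Size([4, 3, 128, 128])
```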


2021 ◽  
Vol 88 (5) ◽  
Author(s):  
Haoliang Jiang ◽  
Zhenguo Nie ◽  
Roselyn Yeo ◽  
Amir Barati Farimani ◽  
Levent Burak Kara

Abstract Using deep learning to analyze mechanical stress distributions is gaining interest with the demand for fast stress analysis. Deep learning approaches have achieved excellent outcomes when used to speed up stress computation and to learn the physics without prior knowledge of the underlying equations. However, most studies restrict the variation of geometry or boundary conditions, making it difficult to generalize the methods to unseen configurations. We propose a conditional generative adversarial network (cGAN) model called StressGAN for predicting 2D von Mises stress distributions in solid structures. The StressGAN model learns to generate stress distributions conditioned on geometries, loads, and boundary conditions through a two-player minimax game between two neural networks with no prior knowledge. By evaluating the generative network on two stress distribution datasets under multiple metrics, we demonstrate that our model can predict more accurate stress distributions than a baseline convolutional neural network model, given varied and complex cases of geometries, loads, and boundary conditions.


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, image processing methods have been developed to determine the cell cycle stage of individual cells. However, in most of these methods, cells have to be segmented and their features extracted; during feature extraction, some important information may be lost, resulting in lower classification accuracy. Thus, we used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. With our method, the classification accuracy on cell cycle images reached 83.88%, an increase of 4.48 percentage points over the 79.40% obtained in previous experiments. On another dataset used to verify the model, accuracy increased by 12.52% over previous results. The results show that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could help address the low classification accuracy in biomedical images caused by insufficient numbers and imbalanced distributions of original images.
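
WGAN-GP differs from a standard GAN mainly in its critic loss, which adds a gradient penalty computed on interpolated samples. A minimal sketch of that penalty term is shown below; it is the generic WGAN-GP formulation, not the authors' specific training code, and the critic and image shapes are placeholders.

```python
# Hypothetical sketch of the WGAN-GP gradient penalty used for augmenting an
# imbalanced image set; `critic` is any network mapping images to a scalar score.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize the critic so its gradient norm stays near 1 on interpolated images.

    `fake` is assumed to be detached from the generator's computation graph.
    """
    b = real.size(0)
    eps = torch.rand(b, 1, 1, 1, device=real.device)      # per-sample mixing weight
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)                               # (B, 1) critic outputs
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]                                                  # (B, C, H, W)
    grad_norm = grads.view(b, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

# Used inside the critic's training step, e.g.:
#   d_loss = fake_scores.mean() - real_scores.mean() + gradient_penalty(critic, real, fake)
```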


2021 ◽  
Author(s):  
Tham Vo

Abstract In abstractive summarization, most proposed models adopt a deep recurrent neural network (RNN)-based encoder-decoder architecture to learn to generate a meaningful summary for a given input document. However, most recent RNN-based models struggle with the many high-frequency, repetitive phrases in long documents during training, which leads to trivial and generic summaries. Moreover, the lack of thorough analysis of the sequential and long-range dependency relationships between words in different contexts while learning the textual representation also makes the generated summaries unnatural and incoherent. To deal with these challenges, in this paper we propose a novel semantic-enhanced generative adversarial network (GAN)-based approach for abstractive text summarization, called SGAN4AbSum. We use an adversarial training strategy in which the generator and discriminator are trained simultaneously to handle summary generation and to distinguish the generated summary from the ground-truth one. The input to the generator is a joint rich-semantic and global-structural latent representation of the training documents, obtained by applying a combined BERT and graph convolutional network (GCN) textual embedding mechanism. Extensive experiments on benchmark datasets demonstrate the effectiveness of our proposed SGAN4AbSum, which achieves competitive ROUGE scores compared with state-of-the-art abstractive text summarization baselines.
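
The combined BERT and GCN embedding is described only at a high level. A minimal sketch of one way to join a precomputed semantic (BERT) view with a structural (graph convolution) view of a document is shown below; the sentence graph, dimensions, pooling, and single GCN layer are all illustrative assumptions, not the paper's exact mechanism.

```python
# Hypothetical sketch: fuse precomputed BERT sentence embeddings with a one-layer
# graph convolution over a sentence graph to form a joint document representation.
import torch
import torch.nn as nn

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: symmetrically normalized adjacency times features."""
    a_hat = adj + torch.eye(adj.size(0))                       # add self-loops
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5).diag()
    return torch.relu(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight)

# Illustrative inputs for one document: 12 sentence nodes with 768-d BERT vectors
# and an adjacency matrix from, e.g., sentence-similarity edges.
bert_sent = torch.randn(12, 768)          # precomputed semantic embeddings (assumed)
adj = (torch.rand(12, 12) > 0.7).float()  # placeholder graph structure
w_gcn = nn.Parameter(torch.randn(768, 128) * 0.01)

structural = gcn_layer(adj, bert_sent, w_gcn).mean(dim=0)   # (128,) pooled graph view
semantic = bert_sent.mean(dim=0)                            # (768,) pooled BERT view
doc_repr = torch.cat([semantic, structural])                # joint latent fed to the generator
print(doc_repr.shape)                                       # torch.Size([896])
```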


2021 ◽  
Author(s):  
James Howard ◽  
Joe Tracey ◽  
Mike Shen ◽  
Shawn Zhang ◽  
...  

Borehole image logs are used to identify the presence and orientation of fractures, both natural and induced, found in reservoir intervals. The contrast in electrical or acoustic properties between the rock matrix and fluid-filled fractures is sufficiently large that sub-resolution features can be detected by these image logging tools. The resolution of these image logs depends on the design and operation of the tools and is generally in the millimeter-per-pixel range, so quantitative measurement of actual fracture width remains problematic. An artificial intelligence (AI)-based workflow combines the statistical information obtained from a machine-learning (ML) segmentation process with a multiple-layer neural network that defines a deep learning process to enhance fractures in a borehole image. These new images allow a more robust analysis of fracture widths, especially those that are sub-resolution. The images from a BHTV log were first segmented into rock and fluid-filled fractures using an ML segmentation tool that applied multiple image processing filters to capture information describing patterns in the fracture-rock distribution based on nearest-neighbor behavior. The ML analysis was trained by users to identify these two components over a short interval in the well, and the regression-model coefficients were then applied to the remaining log. Based on the training, each pixel was assigned a probability value between 1.0 (fracture) and 0.0 (pure rock), with most pixels assigned one of these two values. Intermediate probabilities represented pixels at the edge of a rock-fracture interface or the presence of one or more sub-resolution fractures within the rock. The probability matrix produced a map, or image, of the distribution of probabilities that determined whether a given pixel was a fracture or partially filled with a fracture. The deep learning neural network was based on a conditional generative adversarial network (cGAN) approach in which the probability map was first encoded and combined with a noise vector that acted as a seed for diverse feature generation; this combination was used to generate new images representing the BHTV response. The second part of the network, the adversarial or discriminator portion, determined whether the generated images were representative of the actual BHTV response by comparing them with actual images from the log and producing an output probability of real versus fake. This probability was then used to train the generator and discriminator models, which were then applied to the entire log. Several scenarios were run with different probability maps. The enhanced BHTV images brought out fractures observed in the core photos that were less obvious in the original BHTV log, through enhanced continuity and improved resolution of fracture widths.


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3913 ◽  
Author(s):  
Mingxuan Li ◽  
Ou Li ◽  
Guangyi Liu ◽  
Ce Zhang

With the recent explosive growth of deep learning, automatic modulation recognition has undergone rapid development. Most newly proposed methods depend on large numbers of labeled samples. We are committed to using fewer labeled samples to perform automatic modulation recognition in the cognitive radio domain. Here, a semi-supervised learning method based on adversarial training is proposed, called the signal classifier generative adversarial network. Most prior methods based on this technique involve computer vision applications. We improve the existing generative adversarial network structure by adding an encoder network and a signal spatial transform module, allowing our framework to address radio signal processing tasks more efficiently. These two technical improvements effectively avoid the non-convergence and mode collapse problems caused by the complexity of radio signals. Simulation results show that, compared with well-known deep learning methods, our method improves classification accuracy on a synthetic radio frequency dataset by 0.1% to 12%. In addition, we verify the advantages of our method in a semi-supervised scenario and obtain a significant increase in accuracy compared with traditional semi-supervised learning methods.
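
In a semi-supervised classifier GAN, the discriminator typically outputs the K real classes plus one "generated" class, so unlabeled signals still provide a real-versus-fake training signal. The sketch below shows that generic objective; it illustrates the general technique and is an assumption rather than the authors' exact signal classifier GAN (which also includes an encoder network and a signal spatial transform module).

```python
# Hypothetical sketch of the semi-supervised discriminator objective in a
# classifier GAN for modulation recognition; K and the loss form are generic.
import torch
import torch.nn.functional as F

K = 11  # e.g. number of modulation types in a RadioML-style dataset (assumption)

def discriminator_loss(logits_labeled, labels, logits_unlabeled, logits_fake):
    """All logits_* have shape (B, K+1); index K is the 'generated signal' class."""
    # Labeled real signals: predict the correct modulation class.
    loss_sup = F.cross_entropy(logits_labeled, labels)
    # Unlabeled real signals: any of the K real classes is acceptable,
    # i.e. minimize the probability assigned to the fake class.
    p_fake_unl = F.softmax(logits_unlabeled, dim=1)[:, K]
    loss_unl = -torch.log(1 - p_fake_unl + 1e-8).mean()
    # Generated signals: should be assigned to the fake class.
    fake_target = torch.full((logits_fake.size(0),), K, dtype=torch.long)
    loss_fake = F.cross_entropy(logits_fake, fake_target)
    return loss_sup + loss_unl + loss_fake
```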

