Application of Deep Learning in Petrographic Coal Images Segmentation

Minerals ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1265
Author(s):  
Sebastian Iwaszenko ◽  
Leokadia Róg

The study of the petrographic structure of medium- and high-rank coals is important from both a cognitive and a utilitarian point of view. The petrographic constituents and their individual characteristics and features are responsible for the properties of coal and the way it behaves in various technological processes. This paper considers the application of convolutional neural networks to coal petrographic image segmentation. A U-Net-based model for segmentation was proposed. The network was trained to segment inertinite, liptinite, and vitrinite. Segmentations prepared manually by a domain expert were used as the ground truth. The results show that inertinite and vitrinite can be successfully segmented with minimal difference from the ground truth. Liptinite turned out to be much more difficult to segment; after applying transfer learning, moderate results were obtained. Nevertheless, the application of the U-Net-based network to petrographic image segmentation was successful, and the results are good enough to consider the method a supporting tool for domain experts in their everyday work.
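
As a rough illustration of the kind of model described above, the sketch below builds a small U-Net-style encoder-decoder in Keras for a four-class problem (background plus the three macerals). The layer counts, filter sizes, and input resolution are assumptions for the example, not the authors' architecture.

```python
# Minimal U-Net-style encoder-decoder sketch (illustrative; not the authors' exact
# architecture). Assumes RGB microscope images resized to 256 x 256 and four output
# classes: background, inertinite, liptinite, vitrinite.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the classic U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3), n_classes=4):
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                                    # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)          # skip connection
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)          # skip connection
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```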

Author(s):  
Hoseok Choi ◽  
Seokbeen Lim ◽  
Kyeongran Min ◽  
Kyoung-ha Ahn ◽  
Kyoung-Min Lee ◽  
...  

Abstract Objective: With developments in the field of neural networks, Explainable AI (XAI) is being studied to ensure that artificial intelligence models can be explained. There have been some attempts to apply neural networks to neuroscientific studies to explain neurophysiological information with high machine learning performance. However, most of those studies have simply visualized features extracted with XAI and seem to lack an active neuroscientific interpretation of those features. In this study, we tried to actively explain the high-dimensional learning features contained in the neurophysiological information extracted with XAI, comparing them with previously reported neuroscientific results. Approach: We designed a deep neural network classifier using 3D information (3D DNN) and a 3D class activation map (3D CAM) to visualize high-dimensional classification features. We used those tools to classify monkey electrocorticogram (ECoG) data obtained from a unimanual and bimanual movement experiment. Main results: The 3D DNN showed better classification accuracy than other machine learning techniques, such as a 2D DNN. Unexpectedly, the activation weight in the 3D CAM analysis was high in the ipsilateral motor and somatosensory cortex regions, whereas the gamma-band power was activated in the contralateral areas during unimanual movement, which suggests that the brain signal acquired from the motor cortex contains information about both contralateral and ipsilateral movement. Moreover, the hand-movement classification system used critical temporal information at movement onset and offset when classifying bimanual movements. Significance: As far as we know, this is the first study to use high-dimensional neurophysiological information (spatial, spectral, and temporal) with a deep learning method, reconstruct those features, and explain how the neural network works. We expect that our methods can be widely applied and used in neuroscience and electrophysiology research, from the point of view of the explainability of XAI as well as its performance.
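
A 3D class activation map of the sort described above can be approximated with a Grad-CAM-style weighting of the last 3D convolutional feature maps. The sketch below is a generic implementation for any Keras 3D CNN; the layer name and class index are placeholders, and this is not necessarily the exact CAM formulation used by the authors.

```python
# Grad-CAM-style 3D class activation map sketch (assumed approach; the layer name
# "last_conv3d" is a placeholder for the final Conv3D layer of the classifier).
import tensorflow as tf

def cam_3d(model, volume, class_index, conv_layer_name="last_conv3d"):
    """volume: (1, D, H, W, C) input tensor; returns a (D, H, W) activation map."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        feature_maps, predictions = grad_model(volume)
        class_score = predictions[:, class_index]
    grads = tape.gradient(class_score, feature_maps)      # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2, 3))       # global-average-pool the gradients
    cam = tf.reduce_sum(feature_maps * weights[:, None, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)                                 # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8))[0]         # normalized (D, H, W) map
```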


Author(s):  
Jamilu Adamu

Activation Functions are crucial parts of Deep Learning Artificial Neural Networks. From a biological point of view, a neuron is just a node with many inputs and one output; a neural network consists of many interconnected neurons. It is a "simple" device that receives data at the input and provides a response. The function of neurons is to process and transmit information; the neuron is the basic unit of the nervous system. Carly Vandergriendt (2018) stated that the human brain at birth consists of an estimated 100 billion neurons. The ability of a machine to mimic human intelligence is called Machine Learning. Deep Learning Artificial Neural Networks were designed to work like a human brain with the aid of an arbitrary choice of non-linear Activation Functions. Currently, there is no rule of thumb for the choice of Activation Functions beyond "try out different things and see what combinations lead to the best performance"; however, the choice of Activation Functions should not be trial and error. Jamilu (2019) proposed that Activation Functions should emanate from an AI-ML-Purified Data Set and that their choice should satisfy Jameel's ANNAF Stochastic and/or Deterministic Criterion. The objective of this paper is to propose instances where Deep Learning Artificial Neural Networks are superintelligent. Using Jameel's ANNAF Stochastic and/or Deterministic Criterion, the paper proposes four classes where Deep Learning Artificial Neural Networks are superintelligent, namely: Stochastic Superintelligent, Deterministic Superintelligent, and Stochastic-Deterministic 1st- and 2nd-Level Superintelligence. A Normal Probabilistic-Deterministic case is also proposed.
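
For context, the "try out different things" practice that the paper argues against typically amounts to a small grid search over candidate activation functions; a minimal Keras illustration with placeholder data is sketched below. Jameel's ANNAF Criterion itself is not implemented here.

```python
# Illustration of the trial-and-error activation choice the abstract criticizes
# (placeholder data and layer sizes; the ANNAF criterion is not implemented here).
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 20).astype("float32")        # placeholder features
y = (X.sum(axis=1) > 10).astype("float32")           # placeholder binary labels

for act in ["relu", "tanh", "elu", "sigmoid"]:
    model = tf.keras.Sequential([
        tf.keras.layers.Input((20,)),
        tf.keras.layers.Dense(32, activation=act),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, epochs=5, verbose=0, validation_split=0.2)
    print(act, "val_acc:", hist.history["val_accuracy"][-1])
```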


2021 ◽  
Vol 1 ◽  
Author(s):  
Andreas Berberich ◽  
Andreas Kurz ◽  
Sebastian Reinhard ◽  
Torsten Johann Paul ◽  
Paul Ray Burd ◽  
...  

Single-molecule super-resolution microscopy (SMLM) techniques like dSTORM can reveal biological structures down to the nanometer scale. The achievable resolution is not only defined by the localization precision of individual fluorescent molecules, but also by their density, which becomes a limiting factor, e.g., in expansion microscopy. Artificial deep neural networks can learn to reconstruct dense super-resolved structures such as microtubules from a sparse, noisy set of data points. This approach requires a robust method to assess the quality of a predicted density image and to quantitatively compare it to a ground truth image. Such a quality measure needs to be differentiable to be applied as a loss function in deep learning. We developed a new trainable quality measure based on Fourier Ring Correlation (FRC) and used it to train deep neural networks to map a small number of sampling points to an underlying density. Smooth ground truth images of microtubules were generated from localization coordinates using an anisotropic Gaussian kernel density estimator. We show that the FRC criterion ideally complements the existing state-of-the-art multiscale structural similarity index, since both are interpretable and there is no trade-off between them during optimization. The TensorFlow implementation of our FRC metric can easily be integrated into existing deep learning workflows.
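
Fourier Ring Correlation compares two images ring by ring in frequency space: their Fourier transforms are correlated over annuli of increasing spatial frequency. The NumPy sketch below illustrates the plain, non-differentiable metric only; it is not the authors' trainable TensorFlow implementation.

```python
# Simplified Fourier Ring Correlation sketch in NumPy (illustrative only; the paper's
# metric is a differentiable TensorFlow implementation usable as a loss function).
import numpy as np

def frc(img1, img2, n_rings=32):
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    h, w = img1.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)              # radial frequency of each pixel
    edges = np.linspace(0, r.max(), n_rings + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        num = np.real(np.sum(f1[mask] * np.conj(f2[mask])))
        den = np.sqrt(np.sum(np.abs(f1[mask]) ** 2) * np.sum(np.abs(f2[mask]) ** 2))
        curve.append(num / (den + 1e-12))
    return np.array(curve)                            # correlation per frequency ring
```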


2020 ◽  
Author(s):  
Sophie Giffard-Roisin ◽  
Saumya Sinha ◽  
Fatima Karbou ◽  
Michael Deschatres ◽  
Anna Karas ◽  
...  

Achieving reliable observations of avalanche debris is crucial for many applications, including avalanche forecasting. The ability to continuously monitor avalanche activity, in space and time, would provide indicators of the potential instability of the snowpack and would allow a better characterization of avalanche risk periods and zones. In this work, we use Sentinel-1 SAR (synthetic aperture radar) data and an independent in-situ avalanche inventory (as ground truth labels) to automatically detect avalanche debris in the French Alps during the remarkable 2017-18 winter season.

Two main challenges are specific to these data: (i) the imbalance of the data, with a small number of positive (avalanche) samples, and (ii) the uncertainty of the labels, which come from a separate in-situ inventory. We propose to compare two different deep learning methods on SAR image patches in order to tackle these issues: a fully supervised convolutional neural network model and an unsupervised approach that detects anomalies based on a variational autoencoder. Our preliminary results show that we are able to successfully locate new avalanche deposits with as much as 77% confidence in the most susceptible mountain zone (compared to 53% with a baseline method) on a balanced dataset.

In order to make efficient use of remote sensing measurements in complex terrain, we explore the following question: to what extent can deep learning methods improve the detection of avalanche deposits and help us derive relevant avalanche activity statistics at different scales (in time and space) that could be useful for a large number of users (researchers, forecasters, government operators)?
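
One common way to use a variational autoencoder for this kind of anomaly detection is to train it on avalanche-free SAR patches and flag patches with a high reconstruction error. The sketch below assumes small single-channel patches and a toy architecture; it is not the authors' model.

```python
# Sketch of VAE-style anomaly scoring on SAR patches (assumed 32x32 single-channel
# patches and a toy architecture; not the authors' model). A patch with a high
# reconstruction error differs from the avalanche-free training distribution.
import tensorflow as tf
from tensorflow.keras import layers

latent_dim, patch = 16, 32                             # assumed sizes

class Sampling(layers.Layer):
    # Reparameterization trick; the KL divergence is added as a layer loss.
    def call(self, inputs):
        mean, log_var = inputs
        kl = -0.5 * tf.reduce_mean(1 + log_var - tf.square(mean) - tf.exp(log_var))
        self.add_loss(kl)
        eps = tf.random.normal(tf.shape(mean))
        return mean + tf.exp(0.5 * log_var) * eps

inputs = layers.Input((patch, patch, 1))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)
z = Sampling()([z_mean, z_log_var])
x = layers.Dense(8 * 8 * 32, activation="relu")(z)
x = layers.Reshape((8, 8, 32))(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
decoded = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)

vae = tf.keras.Model(inputs, decoded)
vae.compile(optimizer="adam", loss="mse")              # reconstruction loss + KL term

def anomaly_score(patches):
    # Per-patch mean squared reconstruction error, computed after training on
    # avalanche-free patches with vae.fit(patches, patches).
    recon = vae.predict(patches, verbose=0)
    return tf.reduce_mean(tf.square(patches - recon), axis=[1, 2, 3])
```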


2020 ◽  
Author(s):  
Pierandrea Cancian ◽  
Nina Cortese ◽  
Matteo Donadon ◽  
Marco Di Maio ◽  
Cristiana Soldani ◽  
...  

Abstract Quantitative analysis of the tumor microenvironment (TME) provides prognostic and predictive information in several human cancers but, with few exceptions, it is not performed in daily clinical practice because it is time-consuming. We recently showed that the morphology of tumor-associated macrophages (TAMs) correlates with outcome in patients with colo-rectal liver metastases (CLM). However, as for other TME components, recognizing and characterizing hundreds of TAMs in a single histopathological slide is unfeasible. To speed up this process, we explored a deep-learning-based solution. We tested three convolutional neural networks (CNNs), UNet, SegNet, and DeepLab-v3, and compared their results according to IoU (intersection over union), a metric describing the similarity between what the CNN predicts as TAM and the ground truth, and SBD (symmetric best dice), which indicates the ability of the CNN to separate different TAMs. UNet and SegNet showed intrinsic limitations in discriminating single TAMs (highest SBD 61.34 ± 2.21), whereas DeepLab-v3 accurately recognized TAMs from the background (IoU 89.13 ± 3.85) and separated different TAMs (SBD 79.00 ± 3.72). This deep-learning pipeline to recognize TAMs in digital slides will allow the characterization of TAM-related metrics in daily clinical practice, allowing the implementation of prognostic tools.


2021 ◽  
Author(s):  
Filippo Portera

We consider some supervised binary classification tasks and a regression task, where SVM and Deep Learning, at present, exhibit the best generalization performances. We extend the work [3] on a generalized quadratic loss for learning problems that examines pattern correlations in order to concentrate the learning problem in input space regions where patterns are more densely distributed. From a shallow-methods point of view (e.g., SVM), since the mathematical derivation of problem (9) in [3] is incorrect, we restart from problem (8) in [3] and try to solve it with a procedure that iterates over the dual variables until the primal and dual objective functions converge. In addition, we propose another algorithm that tries to solve the classification problem directly from the primal problem formulation. We also make use of Multiple Kernel Learning to improve generalization performances. Moreover, we introduce for the first time a custom loss that takes into consideration pattern correlation for a shallow and a Deep Learning task. We propose some pattern selection criteria and report the results on 4 UCI data sets for the SVM method. We also report the results on a larger binary classification data set based on Twitter, again drawn from UCI, combined with shallow Learning Neural Networks, with and without the generalized quadratic loss. Finally, we test our loss with a Deep Neural Network on a larger regression task taken from UCI. We compare the results of our optimizers with the well-known solver SVMlight and with Keras multi-layer Neural Networks with standard losses and with a parameterized generalized quadratic loss, and we obtain comparable results.
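
The abstract does not give the exact form of the correlation-aware loss. Purely as an illustration of the idea, one plausible reading is a quadratic loss whose batch residuals are weighted by a pattern-similarity (kernel) matrix, as in the hypothetical sketch below, intended for use inside a custom training step.

```python
# Hypothetical correlation-weighted quadratic loss (one plausible reading of the
# abstract, not the paper's formula). S is an RBF similarity matrix over the batch.
import tensorflow as tf

def generalized_quadratic_loss(x, y_true, y_pred, gamma=0.1):
    """Returns e^T S e / n, where e is the batch residual vector and S encodes
    pattern similarity between the input patterns of the batch."""
    sq_dists = tf.reduce_sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    S = tf.exp(-gamma * sq_dists)                      # pattern-correlation weights
    e = tf.reshape(y_true - y_pred, [-1, 1])           # batch residuals (scalar outputs)
    n = tf.cast(tf.shape(e)[0], tf.float32)
    return tf.squeeze(tf.transpose(e) @ S @ e) / n

# Used in place of MSE inside a custom training step, e.g.:
# with tf.GradientTape() as tape:
#     loss = generalized_quadratic_loss(x_batch, y_batch, model(x_batch))
```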


Author(s):  
T. S. Akiyama ◽  
J. Marcato Junior ◽  
W. N. Gonçalves ◽  
P. O. Bressan ◽  
A. Eltner ◽  
...  

Abstract. The use of deep learning (DL) with convolutional neural networks (CNNs) to monitor surface water can be a valuable supplement to costly and labour-intensive standard gauging stations. This paper presents the application of a recent CNN semantic segmentation method (SegNet) to automatically segment river water in imagery acquired by RGB sensors. This approach can be used as a new supporting tool, because there are only a few studies using DL techniques to monitor water resources. The study area is a medium-scale river (Wesenitz) located in the east of Germany. The captured images reflect different times of day over a period of approximately 50 days, allowing for the analysis of the river in different environmental conditions and situations. In the experiments, we evaluated input image resolutions of 256 × 256 and 512 × 512 pixels to assess their influence on the performance of river segmentation. The performance of the CNN was measured with the pixel accuracy and IoU metrics, revealing a pixel accuracy of 98% and an IoU of 97% for both resolutions, indicating that our approach is effective for segmenting water in RGB imagery.
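
The two reported metrics reduce to simple confusion-matrix counts on the binary water mask; a short NumPy sketch with illustrative masks is given below.

```python
# NumPy sketch of the two reported metrics for a binary water mask
# (pred and gt are boolean arrays of equal shape; the masks here are illustrative).
import numpy as np

def pixel_accuracy(pred, gt):
    return np.mean(pred == gt)                 # fraction of correctly labelled pixels

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0     # intersection over union

pred = np.zeros((256, 256), bool); pred[50:200, 40:220] = True
gt = np.zeros((256, 256), bool);   gt[55:205, 45:225] = True
print(pixel_accuracy(pred, gt), iou(pred, gt))
```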


Cancers ◽  
2021 ◽  
Vol 13 (13) ◽  
pp. 3313
Author(s):  
Pierandrea Cancian ◽  
Nina Cortese ◽  
Matteo Donadon ◽  
Marco Di Maio ◽  
Cristiana Soldani ◽  
...  

Quantitative analysis of the Tumor Microenvironment (TME) provides prognostic and predictive information in several human cancers but, with few exceptions, it is not performed in daily clinical practice since it is extremely time-consuming. We recently showed that the morphology of Tumor-Associated Macrophages (TAMs) correlates with outcome in patients with Colo-Rectal Liver Metastases (CLM). However, as for other TME components, recognizing and characterizing hundreds of TAMs in a single histopathological slide is unfeasible. To speed up this process, we explored a deep-learning-based solution. We tested three Convolutional Neural Networks (CNNs), namely UNet, SegNet, and DeepLab-v3, with three different segmentation strategies: semantic segmentation, pixel penalties, and instance segmentation. The different experiments are compared according to the Intersection over Union (IoU), a metric describing the similarity between what the CNN predicts as TAM and the ground truth, and the Symmetric Best Dice (SBD), which indicates the ability of the CNN to separate different TAMs. UNet and SegNet showed intrinsic limitations in discriminating single TAMs (highest SBD 61.34±2.21), whereas DeepLab-v3 accurately recognized TAMs from the background (IoU 89.13±3.85) and separated different TAMs (SBD 79.00±3.72). This deep-learning pipeline to recognize TAMs in digital slides will allow the characterization of TAM-related metrics in daily clinical practice, allowing the implementation of prognostic tools.
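
Symmetric Best Dice, the instance-separation metric used above, matches every predicted instance to its best-overlapping ground-truth instance via the Dice coefficient and takes the minimum of the two directions. The NumPy sketch below assumes instance label maps with 0 as background; it is a generic implementation, not the authors' code.

```python
# NumPy sketch of Symmetric Best Dice over two instance label maps
# (0 = background, 1..N = individual TAM instances; an assumed encoding).
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + 1e-8)

def best_dice(labels_a, labels_b):
    # For each instance in A, take its best Dice overlap with any instance in B.
    scores = []
    for i in np.unique(labels_a[labels_a > 0]):
        a = labels_a == i
        best = max((dice(a, labels_b == j) for j in np.unique(labels_b[labels_b > 0])),
                   default=0.0)
        scores.append(best)
    return np.mean(scores) if scores else 0.0

def symmetric_best_dice(pred_labels, gt_labels):
    return min(best_dice(pred_labels, gt_labels), best_dice(gt_labels, pred_labels))
```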


2021 ◽  
Author(s):  
Adrian Krenzer ◽  
Kevin Makowski ◽  
Amar Hekalo ◽  
Daniel Fitting ◽  
Joel Troya ◽  
...  

Abstract Background: Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing a successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate. Domain experts are needed to interpret and annotate the videos. To support those domain experts, we developed a framework. With this framework, instead of annotating every frame in the video sequence, experts only perform key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps. Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in between. Results: Using this framework, we were able to reduce the workload of domain experts on average by a factor of 20. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing this framework with a state-of-the-art semi-automated pre-annotation model enhances the annotation speed further. Through a study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. Conclusion: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining a very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open-source.
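
The key-annotation idea can be pictured as follows: the expert marks only the first and last frame of a pathological sequence, and the frames in between receive provisional annotations (here, linearly interpolated bounding boxes) that non-experts then correct. The data layout in the sketch below is an assumption, not the tool's actual format.

```python
# Sketch of propagating expert key annotations to in-between frames by linear
# interpolation of bounding boxes (data layout is assumed, not the tool's format).
def interpolate_boxes(start_frame, end_frame, start_box, end_box):
    """start_box/end_box: (x, y, w, h) drawn by the expert on the two key frames.
    Returns a dict frame_index -> interpolated box for non-experts to refine."""
    boxes = {}
    span = end_frame - start_frame
    for f in range(start_frame, end_frame + 1):
        t = (f - start_frame) / span if span else 0.0
        boxes[f] = tuple(round((1 - t) * s + t * e, 1)
                         for s, e in zip(start_box, end_box))
    return boxes

# Example: a polyp box marked at frames 100 and 120 yields 21 candidate boxes.
print(interpolate_boxes(100, 120, (40, 60, 80, 80), (55, 70, 90, 85))[110])
```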


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: transformer and sequence-to-sequence based recurrent neural networks with attention. Levenshtein augmentation demonstrated increased performance over non-augmented and conventionally SMILES-randomization-augmented data when used for training of baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as attentional gain: an enhancement in the pattern recognition capabilities of the underlying network for molecular motifs.
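
At the core of Levenshtein augmentation is an edit-distance measure of similarity between reactant and product SMILES strings. The sketch below shows only that distance kernel; how the paper uses it to build training pairs is not spelled out in the abstract.

```python
# Classic Levenshtein (edit) distance between two SMILES strings; the augmentation
# itself (how sub-sequences are paired using this similarity) is not detailed in
# the abstract, so only the distance kernel is sketched here.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# Example: distance between a reactant and a product SMILES fragment.
print(levenshtein("CCO", "CC(=O)O"))  # 4 edits
```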

