Development of Deep Learning Framework for Mathematical Morphology

Author(s): Frank Y. Shih, Yucong Shen, Xin Zhong

Mathematical morphology has been applied as a collection of nonlinear operations related to object features in images. In this paper, we present morphological layers in a deep learning framework, named MorphNet, to perform atomic morphological operations such as dilation and erosion. To propagate losses through the proposed framework, we approximate the dilation and erosion operations by differentiable, smooth multivariable functions based on the softmax function, thereby enabling the neural network to be optimized. The proposed operations are analyzed through the derivatives of the approximation functions in the deep learning framework. Experimental results show that the structuring element output by a morphological neuron matches the target structuring element, confirming the efficiency and correctness of the proposed framework.
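
As a rough illustration of how such smooth morphological neurons can be built, the sketch below implements dilation and erosion layers whose max/min over "image patch plus structuring element" is relaxed with a log-sum-exp (softmax-style) function so that gradients can reach a learnable structuring element. This is not the authors' code; the layer names, the unfold-based windowing, and the beta temperature are assumptions for illustration.

```python
# Minimal sketch of smooth (differentiable) grayscale dilation/erosion layers.
# The hard max/min is replaced by a log-sum-exp relaxation controlled by beta.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftDilation2d(nn.Module):
    def __init__(self, kernel_size=3, beta=20.0):
        super().__init__()
        self.k = kernel_size
        self.beta = beta  # larger beta -> closer to the exact max
        # learnable grayscale structuring element (flattened k*k window)
        self.se = nn.Parameter(torch.zeros(kernel_size * kernel_size))

    def forward(self, x):  # x: (N, 1, H, W)
        pad = self.k // 2
        patches = F.unfold(x, self.k, padding=pad)      # (N, k*k, H*W)
        scores = patches + self.se.view(1, -1, 1)       # f + s over each window
        out = torch.logsumexp(self.beta * scores, dim=1) / self.beta
        return out.view(x.shape)                        # smooth max ~ dilation


class SoftErosion2d(SoftDilation2d):
    def forward(self, x):
        pad = self.k // 2
        patches = F.unfold(x, self.k, padding=pad)
        scores = patches - self.se.view(1, -1, 1)       # f - s over each window
        out = -torch.logsumexp(-self.beta * scores, dim=1) / self.beta
        return out.view(x.shape)                        # smooth min ~ erosion
```

With a large beta the layers approach exact grayscale dilation and erosion, and training them to reproduce a target-dilated image lets the learned structuring element converge toward the target one, mirroring the experiment described above.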

2021, Vol. 7, pp. e436
Author(s): Zhiwu Xu, Cheng Wen, Shengchao Qin, Mengda He

Deep learning is one of the most advanced forms of machine learning. Most modern deep learning models are based on artificial neural networks, and benchmarking studies reveal that neural networks have produced results comparable to, and in some cases superior to, human experts. However, the resulting neural networks are typically regarded as incomprehensible black-box models, which not only limits their applications but also hinders testing and verification. In this paper, we present an active learning framework to extract automata from neural network classifiers, which can help users to understand the classifiers. In more detail, we use Angluin’s L* algorithm as the learner and the neural network under learning as the oracle, employing an abstraction of the neural network to answer membership and equivalence queries. Our abstraction consists of value, symbol, and word abstractions. The factors that may affect the abstraction are also discussed in the paper. We have implemented our approach in a prototype. To evaluate it, we applied the prototype to an MNIST classifier and identified that the abstraction with interval number 2 and block size 1 × 28 offers the best performance in terms of F1 score. We also compared our extracted DFA against the DFAs learned via the passive learning algorithms provided in LearnLib, and the experimental results show that our DFA achieves better performance on the MNIST dataset.
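
The abstraction step can be pictured with a short sketch: under the reported best configuration (2 intervals, blocks of size 1 × 28), each row of an MNIST image is binned into a symbol, the image becomes a 28-symbol word, and the classifier under learning answers membership queries on concretizations of such words. This is a hedged sketch of the idea, not the paper's implementation; the classifier interface and the concretization function are assumptions.

```python
# Sketch of the value/symbol/word abstraction and a membership oracle.
import numpy as np


def abstract_word(image, intervals=2, block_rows=1):
    """Map a 28x28 image (values in [0, 1]) to a tuple of abstract symbols."""
    bins = np.linspace(0.0, 1.0, intervals + 1)[1:-1]    # interval boundaries
    word = []
    for r in range(0, 28, block_rows):
        block = image[r:r + block_rows].ravel()
        symbol = tuple(np.digitize(block, bins))          # one symbol per block
        word.append(symbol)
    return tuple(word)


def membership_query(word, classifier, target_class, concretize):
    """Answer an L* membership query: is a concretization of `word`
    classified as `target_class` by the network under learning?"""
    image = concretize(word)          # pick a representative image for the word
    return int(classifier(image)) == target_class
```

The word abstraction and the oracle above would then be plugged into an off-the-shelf L* learner.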


2019, Vol. 19 (2), pp. 424-442
Author(s): Tian Guo, Lianping Wu, Cunjun Wang, Zili Xu

Extracting damage features precisely while overcoming the adverse effects of measurement noise and incomplete data is a pressing problem in structural health monitoring (SHM). In this article, we present a deep-learning-based method that can extract damage features from mode shapes without utilizing any hand-engineered features or prior knowledge. To meet the varied requirements of damage scenarios, we use a convolutional neural network (CNN) and design a new network architecture comprising a multi-scale module, which extracts features at various scales and reduces the interference of contaminated data; stacked residual learning modules, which accelerate network convergence; and a global average pooling layer, which reduces the consumption of computing resources and yields a regression output. An extensive evaluation of the proposed method is conducted using datasets based on numerical simulations, along with two datasets based on laboratory measurements. A parameter-transfer methodology is introduced to reduce the retraining requirement without any loss of precision. Furthermore, we plot the feature vectors of each layer to discuss the damage features learned at these layers and to provide a basis for explaining the working principle of the neural network. The results show that our proposed method improves accuracy by at least 10% over other network architectures.
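
A minimal PyTorch sketch of the three architectural ingredients named above (a multi-scale convolution module, stacked residual blocks, and a global-average-pooling regression head) is given below. It is an illustration under assumed channel sizes and kernel scales, not the authors' exact network.

```python
# Sketch: multi-scale 1D convolutions + residual blocks + global average pooling
# mapping a mode-shape sequence to a damage estimate (regression).
import torch
import torch.nn as nn


class MultiScaleModule(nn.Module):
    """Parallel 1D convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch, scales=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in scales
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)


class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection aids convergence


class DamageNet(nn.Module):
    def __init__(self, n_outputs=1):
        super().__init__()
        self.multiscale = MultiScaleModule(1, 16)           # 3 branches -> 48 ch
        self.residuals = nn.Sequential(*[ResidualBlock(48) for _ in range(4)])
        self.head = nn.Linear(48, n_outputs)                # regression output

    def forward(self, mode_shape):                          # (N, 1, L)
        h = self.residuals(self.multiscale(mode_shape))
        h = h.mean(dim=-1)                                  # global average pooling
        return self.head(h)
```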


2021, Vol. 11 (11), pp. 4758
Author(s): Ana Malta, Mateus Mendes, Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s is able to detect the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
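
For illustration, the snippet below shows how a YOLOv5s model fine-tuned on such a dataset could be run on a live video stream through the public ultralytics/yolov5 hub entry point. The weights file name (engine_parts.pt) and the webcam source are assumptions, not artifacts of the paper.

```python
# Sketch: real-time detection of engine parts with custom YOLOv5s weights.
import cv2
import torch

# load custom YOLOv5s weights through the ultralytics/yolov5 hub entry point
model = torch.hub.load("ultralytics/yolov5", "custom", path="engine_parts.pt")
model.conf = 0.5                          # confidence threshold for detections

cap = cv2.VideoCapture(0)                 # real-time video stream (webcam)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = model(rgb)                            # run detection on the frame
    annotated = results.render()[0]                 # draw boxes and part labels
    cv2.imshow("engine parts", cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```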


Sensors, 2021, Vol. 21 (1), pp. 268
Author(s): Yeganeh Jalali, Mansoor Fateh, Mohsen Rezvani, Vahid Abolghasemi, Mohammad Hossein Anisi

Lung CT image segmentation is a key process in many applications such as lung cancer detection. It is considered a challenging problem due to the similar image densities of pulmonary structures and the variety of scanners and scanning protocols. Most current semi-automatic segmentation methods rely on human input and may therefore suffer from a lack of accuracy. Another shortcoming of these methods is their high false-positive rate. In recent years, several approaches based on deep learning frameworks have been effectively applied to medical image segmentation. Among existing deep neural networks, the U-Net has achieved great success in this field. In this paper, we propose a deep neural network architecture that performs automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Then, ground truths corresponding to these images are extracted via morphological operations and manual corrections. Finally, all prepared images with their corresponding ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integrator module instead of simple traditional concatenation, merging the feature maps extracted in the corresponding contracting path with the output of the previous up-convolutional layer in the expansion path. Finally, a densely connected convolutional layer is utilized in the contracting path. The results of our extensive experiments on lung CT images (LIDC-IDRI database) confirm the effectiveness of the proposed method, where a Dice coefficient of 97.31% is achieved.
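
Since the headline result is a Dice index, a small sketch of the metric used to score predicted lung masks against ground-truth masks may be helpful; the smoothing constant below is a common implementation detail and an assumption, not taken from the paper.

```python
# Sketch: batch-averaged Dice coefficient for binary segmentation masks.
import torch


def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """pred_mask, true_mask: binary tensors of shape (N, H, W)."""
    pred = pred_mask.float().flatten(1)
    true = true_mask.float().flatten(1)
    intersection = (pred * true).sum(dim=1)
    union = pred.sum(dim=1) + true.sum(dim=1)
    dice = (2 * intersection + eps) / (union + eps)
    return dice.mean()                      # average Dice over the batch
```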


2021, Vol. 9 (Suppl 3), pp. A874-A874
Author(s): David Soong, Anantharaman Muthuswamy, Clifton Drew, ...

Background: Recent advances in machine learning and digital pathology have enabled a variety of applications including predicting tumor grade and genetic subtypes, quantifying the tumor microenvironment (TME), and identifying prognostic morphological features from H&E whole slide images (WSI). These supervised deep learning models require large quantities of images manually annotated with cellular- and tissue-level details by pathologists, which limits scale and generalizability across cancer types and imaging platforms. Here we propose a semi-supervised deep learning framework that automatically annotates biologically relevant image content from hundreds of solid tumor WSI with minimal pathologist intervention, thus improving the quality and speed of analytical workflows aimed at deriving clinically relevant features.

Methods: The dataset consisted of >200 H&E images across >10 solid tumor types (e.g. breast, lung, colorectal, cervical, and urothelial cancers) from advanced disease patients. WSI were first partitioned into small tiles of 128 μm for feature extraction using a 50-layer convolutional neural network pre-trained on the ImageNet database. Dimensionality reduction and unsupervised clustering were applied to the resulting embeddings, and image clusters with enriched histological and morphological characteristics were identified. A random subset of representative tiles (<0.5% of whole slide tissue areas) from these distinct image clusters was manually reviewed by pathologists and assigned to eight histological and morphological categories: tumor, stroma/connective tissue, necrotic cells, lymphocytes, red blood cells, white blood cells, normal tissue, and glass/background. This dataset enabled the development of a multi-label deep neural network to segment morphologically distinct regions and detect/quantify histopathological features in WSI.

Results: As representative image tiles within each image cluster were morphologically similar, expert pathologists were able to assign annotations to multiple images in parallel, effectively at 150 images/hour. Five-fold cross-validation showed an average prediction accuracy of 0.93 [0.8–1.0] and an area under the curve of 0.90 [0.8–1.0] over the eight image categories. As an extension of this classifier framework, all whole slide H&E images were segmented, and composite lymphocyte, stromal, and necrotic content per patient tumor was derived and correlated with estimates by pathologists (p<0.05).

Conclusions: A novel and scalable deep learning framework for annotating and learning H&E features from a large unlabeled WSI dataset across tumor types was developed. This automated approach accurately identified distinct histomorphological features, with significantly reduced labeling time and effort required from pathologists. Further, this classifier framework was extended to annotate regions enriched in lymphocytes, stromal, and necrotic cells – important TME contexture with clinical relevance for patient prognosis and treatment decisions.
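
A hedged sketch of the unsupervised stage described in the Methods is shown below: tiles are embedded with an ImageNet-pretrained 50-layer CNN (ResNet-50 here), reduced with PCA, and clustered so that pathologists only need to review a few representative tiles per cluster. The tile preprocessing, number of PCA components, and cluster count are illustrative assumptions rather than the authors' pipeline.

```python
# Sketch: embed WSI tiles with a pretrained ResNet-50, reduce, and cluster.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# ResNet-50 feature extractor (classification head removed)
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def embed_tiles(tiles):
    """tiles: list of HxWx3 uint8 arrays cut from a whole slide image."""
    batch = torch.stack([preprocess(t) for t in tiles])
    return backbone(batch).numpy()                     # (n_tiles, 2048)


def cluster_tiles(embeddings, n_clusters=30):
    reduced = PCA(n_components=50).fit_transform(embeddings)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
    return labels    # pathologists then annotate a few tiles per cluster
```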


2020, pp. 74-80
Author(s): Philippe Schweizer

We would like to show the small distance between neutrosophy applications in the sciences and in the humanities, since both ultimately have a human as the end user. The pace of data production continues to grow, leading to an increased need for efficient storage and transmission. Indeed, this information is preferably consumed on mobile terminals using connections billed to the user and having only limited storage capacity. Deep learning neural networks have recently exceeded the compression rates of algorithmic techniques for text. We believe that they can also significantly challenge classical methods for audio and visual data (images and videos). To obtain the best physiological compression, i.e., the highest compression ratio that comes closest to the specificity of human perception, we propose using a neutrosophic representation of the information for the entire compression-decompression cycle. Such a representation attaches to each elementary piece of information a simple neutrosophic number that informs the neural network about its characteristics relative to compression during this processing. Such a neutrosophic number is in fact a triplet (t,i,f) representing the membership of the element in the three constituent components of information in compression: 1° t = the true, significant part to be preserved; 2° i = the indeterminate, redundant part or noise to be eliminated in compression; and 3° f = the false part, the artifacts produced by the compression process (to be compensated). The complexity of human perception and the subtle niches of its defects that one seeks to exploit require a detailed and complex mapping that a neural network can produce better than any other algorithmic solution, and deep learning networks have proven their ability to produce a detailed boundary surface in classifiers.
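
A minimal sketch of the proposed representation, under the assumption that each elementary piece of information is simply paired with its (t,i,f) triplet, might look as follows; the normalization rule is an illustrative addition, not part of the paper.

```python
# Sketch: an elementary piece of information carrying a neutrosophic triplet.
from dataclasses import dataclass


@dataclass
class NeutrosophicElement:
    value: float   # the elementary information (e.g. a pixel or coefficient)
    t: float       # true, significant part to preserve
    i: float       # indeterminate, redundant part (noise) to eliminate
    f: float       # false part: compression artifacts to be compensated

    def normalized(self):
        s = (self.t + self.i + self.f) or 1.0
        return NeutrosophicElement(self.value, self.t / s, self.i / s, self.f / s)
```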


Author(s): Lifu Wang, Bo Shen, Ning Zhao, Zhiyuan Zhang

The residual network is now one of the most effective structures in deep learning, using skip connections to “guarantee” that performance will not get worse. However, the non-convexity of the neural network makes it unclear whether skip connections provably improve the learning ability, since the nonlinearity may create many local minima. Previous work [Freeman and Bruna, 2016] showed that, despite the non-convexity, the loss landscape of the two-layer ReLU network has good properties when the number m of hidden nodes is very large. In this paper, we follow this line to study the topology (sub-level sets) of the loss landscape of deep ReLU neural networks with a skip connection and theoretically prove that the skip connection network inherits the good properties of the two-layer network, and that skip connections help control the connectedness of the sub-level sets, such that any local minimum worse than the global minimum of some two-layer ReLU network will be very “shallow”. The “depth” of these local minima is at most O(m^(η-1)/n), where n is the input dimension and η<1. This provides a theoretical explanation for the effectiveness of skip connections in deep learning.
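
For concreteness, the sketch below shows the kind of network under discussion: a deep ReLU block that a skip connection allows the signal to bypass. The exact architecture analyzed in the paper may differ; the widths and depth here are illustrative only.

```python
# Sketch: a deep ReLU network with a skip connection around the deep block.
import torch
import torch.nn as nn


class SkipReLUNet(nn.Module):
    def __init__(self, n_in, m_hidden, depth=4):
        super().__init__()
        self.inp = nn.Linear(n_in, m_hidden)
        # deep ReLU block that the skip connection bypasses
        self.deep = nn.Sequential(
            *[layer for _ in range(depth)
              for layer in (nn.Linear(m_hidden, m_hidden), nn.ReLU())]
        )
        self.out = nn.Linear(m_hidden, 1)

    def forward(self, x):
        h = torch.relu(self.inp(x))
        return self.out(h + self.deep(h))   # skip connection around the deep block
```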


2021, Vol. 14 (6), pp. 3421-3435
Author(s): Zhenjiao Jiang, Dirk Mallants, Lei Gao, Tim Munday, Gregoire Mariethoz, ...

This study introduces an efficient deep-learning model based on convolutional neural networks with joint autoencoder and adversarial structures for 3D subsurface mapping from 2D surface observations. The method was applied to delineate paleovalleys in an Australian desert landscape. The neural network was trained on a 6400 km² domain using land surface topography as the 2D input and an airborne electromagnetic (AEM)-derived probability map of paleovalley presence as the 3D output. The trained neural network has a squared error <0.10 across 99% of the training domain and produces a squared error <0.10 across 93% of the validation domain, demonstrating that it is reliable in reconstructing 3D paleovalley patterns beyond the training area. Due to its generic structure, the neural network designed in this study and the training algorithm have broad application potential for constructing 3D geological features (e.g., ore bodies, aquifers) from 2D land surface observations.
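
A minimal sketch of the 2D-to-3D idea (not the published model) is given below: a convolutional encoder compresses the 2D surface topography and a decoder expands it into a 3D grid, with the depth dimension carried by the output channels of the last layer. The adversarial branch is omitted for brevity, and the layer sizes and depth-layer count are assumptions.

```python
# Sketch: convolutional encoder-decoder mapping a 2D surface grid to a 3D
# probability volume (depth carried by the output channels).
import torch
import torch.nn as nn


class Surface2Volume(nn.Module):
    def __init__(self, depth_layers=30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, depth_layers, 4, stride=2, padding=1),
            nn.Sigmoid(),                 # paleovalley presence probability
        )

    def forward(self, topography):        # (N, 1, H, W) surface elevation
        volume = self.decoder(self.encoder(topography))
        return volume                     # (N, depth_layers, H, W) ~ 3D map
```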


Author(s): Xi Li, Ting Wang, Shexiong Wang

How to make effective use of log data without paying much to store them has drawn researchers’ attention. In this paper, we propose a pattern-based deep learning method to extract features from log datasets and to facilitate their further use at a reasonable cost in storage performance. By taking advantage of neural networks and combining statistical features with experts’ knowledge, we obtain satisfactory results in experiments on several specified datasets and on the routine systems that our group maintains. On the test datasets, the model outperforms its competitors by at least 5% in accuracy. More importantly, its schema unveils a new way to mingle experts’ experience with a statistical log parser.
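
The paper's exact features and model are not spelled out above, but the general idea can be sketched as follows: per-line statistical features are combined with expert-written patterns and fed to a small neural classifier. The regex patterns, feature set, and network sizes below are illustrative assumptions.

```python
# Sketch: statistical log features + expert pattern flags -> small classifier.
import re
import torch
import torch.nn as nn

EXPERT_PATTERNS = [r"ERROR", r"timeout", r"connection refused"]  # expert knowledge


def log_features(line):
    tokens = line.split()
    stats = [len(line), len(tokens), sum(t.isdigit() for t in tokens)]
    flags = [1.0 if re.search(p, line) else 0.0 for p in EXPERT_PATTERNS]
    return torch.tensor(stats + flags, dtype=torch.float32)


model = nn.Sequential(                    # small classifier over the features
    nn.Linear(3 + len(EXPERT_PATTERNS), 16), nn.ReLU(), nn.Linear(16, 2)
)
scores = model(log_features("2020-06-01 ERROR db connection refused after 30s"))
```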


2020, Vol. 8
Author(s): Adil Khadidos, Alaa O. Khadidos, Srihari Kannan, Yuvaraj Natarajan, Sachi Nandan Mohanty, ...

In this paper, a data mining model on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with coronavirus disease 2019 (COVID-19). The hybrid deep learning model, named DeepSense, is designed as a combination of a convolutional neural network (CNN) and a recurrent neural network (RNN). It is structured as a series of layers that extract and classify features of COVID-19 infection from the lungs. Computed tomography images are used as input data, and the classifier is designed to ease the classification process by learning the multidimensional input data using expert hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier offers improved accuracy over conventional deep learning and machine learning classifiers. The proposed method is validated against three different datasets, using 70%, 80%, and 90% training splits. It specifically demonstrates the quality of the diagnostic method adopted for the prediction of COVID-19 infection in a patient.
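
A hedged sketch of such a CNN + RNN hybrid is given below: a small CNN extracts features from each CT slice and an LSTM aggregates the slice sequence before classification. The layer sizes, the slice-sequence input format, and the two-class output are illustrative assumptions, not the DeepSense specification.

```python
# Sketch: per-slice CNN features aggregated by an LSTM for scan classification.
import torch
import torch.nn as nn


class CnnRnnClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # (N*S, 32) per slice
        )
        self.rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, scans):                 # (N, S, 1, H, W): S slices per scan
        n, s = scans.shape[:2]
        feats = self.cnn(scans.flatten(0, 1)).view(n, s, -1)
        _, (hidden, _) = self.rnn(feats)
        return self.fc(hidden[-1])            # COVID-19 vs. non-COVID-19 logits
```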

