A Dual Autoencoder and Singular Value Decomposition Based Feature Optimization for the Detection of Brain Tumor from MRI Images

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
K. Aswani ◽  
D. Menaka

Abstract Background: A brain tumor is a growth of abnormal cells inside the brain; these cells can develop into malignant or benign tumors. Segmentation of tumors from MRI images using image processing techniques began decades ago. Image processing based brain tumor segmentation can be divided into three categories: conventional image processing methods, machine learning methods, and deep learning methods. Conventional methods lack segmentation accuracy due to the complex spatial variation of tumors. Machine learning methods are a good alternative to conventional methods; methods such as SVM, KNN, fuzzy clustering, and combinations of these provide good accuracy with reasonable processing speed. However, the difficulty of handling the various feature extraction methods while maintaining accuracy to medical standards remains a limitation of machine learning. In deep learning, features are extracted automatically in the various stages of the network and accuracy meets medical standards, but the need for a huge database and the high computational time still pose problems. To overcome these limitations, we propose an unsupervised dual autoencoder with latent space optimization. The model requires only normal MRI images for training, which removes the need for a large tumor database. Trained on normal-class data, an autoencoder can reproduce the feature vector at its output layer; it reconstructs normal data well but fails to reproduce an anomaly. A classical autoencoder, however, suffers from poor latent space optimization. The latent space loss of the classical autoencoder is reduced using an auxiliary encoder together with feature optimization based on singular value decomposition (SVD). The training patches are not traditional square patches; instead, both horizontal and vertical patches are used to keep both local and global appearance features in the training set, and a separate autoencoder is trained for each orientation. During training, a logistic sigmoid transfer function is used in both the encoder and the decoder. The SGD optimizer is used with an initial learning rate of 0.001 and a maximum of 4000 epochs. The network is trained in MATLAB 2018a on a 3.7 GHz processor with an NVIDIA GPU and 16 GB of RAM. Results: Results are obtained using patch sizes of 16 × 64 and 64 × 16 for horizontal and vertical patches, respectively. In glioma images the tumor does not grow from a single point but spreads irregularly, so region filling and connectivity operations are performed to obtain the final tumor segmentation. Overall, the method segments meningiomas better than gliomas. Three evaluation metrics are used to measure the performance of the proposed system: Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity. Conclusion: An unsupervised method for the segmentation of brain tumors from MRI images is proposed. The proposed dual autoencoder with SVD-based feature optimization reduces the latent space loss of the classical autoencoder. The method offers computational efficiency, requires no large database, and achieves better accuracy than machine learning methods. It is compared with machine learning methods such as SVM and KNN and with supervised deep learning methods such as CNN, and commendable results are obtained.
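
The following is a minimal sketch, not the authors' code, of the reconstruction-error idea the abstract describes: an autoencoder with sigmoid transfer functions is trained (SGD, learning rate 0.001, up to 4000 epochs) only on normal-tissue patches, and tumor regions are flagged where reconstruction error is high. One such model would be trained per patch orientation (16 × 64 and 64 × 16). The hidden size, loss, and threshold-free scoring are illustrative assumptions; the paper's auxiliary encoder and SVD-based latent-space optimization are omitted here.

```python
# Sketch of one patch-orientation autoencoder used as an anomaly detector.
import torch
import torch.nn as nn


class PatchAutoencoder(nn.Module):
    def __init__(self, patch_dim=16 * 64, hidden_dim=256):
        super().__init__()
        # Logistic sigmoid transfer functions in encoder and decoder, per the abstract.
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, patch_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_on_normal_patches(patches, epochs=4000, lr=1e-3):
    """patches: (N, patch_dim) tensor of flattened normal-tissue patches scaled to [0, 1]."""
    model = PatchAutoencoder(patch_dim=patches.shape[1])
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # SGD, lr = 0.001, as stated
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(patches), patches)
        loss.backward()
        opt.step()
    return model


def anomaly_score(model, patches):
    """Per-patch reconstruction error; high values suggest tumor (anomalous) patches."""
    with torch.no_grad():
        return ((model(patches) - patches) ** 2).mean(dim=1)
```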


Author(s):  
Padmapriya Thiyagarajan ◽  
Sriramakrishnan Padmanaban ◽  
Kalaiselvi Thiruvenkadam ◽  
Somasundaram Karuppanagounder

Background: Among brain-related diseases, brain tumor segmentation on magnetic resonance imaging (MRI) scans is one of the most actively studied research domains in the medical community. Brain tumor segmentation is a very challenging task due to the tumor's asymmetric form and uncertain boundaries. This process segregates the tumor region into active tumor, necrosis and edema, separating it from normal brain tissues such as white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). Introduction: This paper analyzes the advancement of brain tumor segmentation from conventional image processing techniques, through machine learning, to deep learning on MRI of human head scans. Method: State-of-the-art methods of these three categories are investigated, and their merits and demerits are discussed. Results: The prime motivation of the paper is to guide young researchers towards the development of efficient brain tumor segmentation techniques using conventional and recent technologies. Conclusion: The analysis concluded that conventional and machine learning methods were mostly applied for brain tumor detection, whereas deep learning methods performed well at segmenting tumor substructures.


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as the modified U-Net model, were applied to the extracted input features. The models’ performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions yields promising results, and all classification algorithms reach high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with the linear combination of focal and dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively. In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
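
As an illustration of the feature-based, pixel-level workflow described above, the sketch below stacks a few well-known filter responses per pixel and classifies them with a Random Forest. The specific filters, their parameters, and the function names are assumptions for the example; the paper's VGG16-derived features and the U-Net branch are omitted.

```python
# Hedged sketch: per-pixel filter features + Random Forest phase segmentation.
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier


def pixel_features(img):
    """img: 2D grayscale CT slice, float in [0, 1]. Returns (H*W, n_features)."""
    feats = np.stack([
        img,                              # raw attenuation value
        filters.gaussian(img, sigma=1),   # fine-scale smoothed intensity
        filters.gaussian(img, sigma=4),   # coarse-scale smoothed intensity
        filters.sobel(img),               # edge magnitude
    ], axis=-1)
    return feats.reshape(-1, feats.shape[-1])


def train_phase_classifier(images, label_maps):
    """images / label_maps: lists of 2D arrays with per-pixel phase labels."""
    X = np.vstack([pixel_features(im) for im in images])
    y = np.concatenate([lm.ravel() for lm in label_maps])
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
    clf.fit(X, y)
    return clf
```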


2021 ◽  
Author(s):  
Shidong Li ◽  
Jianwei Liu ◽  
Zhanjie Song

Abstract Since magnetic resonance imaging (MRI) has superior soft tissue contrast, accurately contouring brain tumors from MRI images is essential in medical image processing. Segmenting tumors accurately is immensely challenging, since tumor and normal tissues are often inextricably intertwined in the brain, and manual contouring is extremely time-consuming. Recent deep learning techniques have begun to show reasonable success in automatic brain tumor segmentation. The purpose of this study is to develop a new region-of-interest-aided (ROI-aided) deep learning technique for automatic brain tumor MRI segmentation. The method consists of two major steps. Step one uses a 2D network with U-Net architecture to localize the tumor ROI, which reduces the disturbance from normal tissue. A 3D U-Net is then applied in step two for tumor segmentation within the identified ROI. The proposed method is validated on the MICCAI BraTS 2015 Challenge with data from 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) patients. The Dice similarity coefficient and the Hausdorff distance between the manual tumor contour and that segmented by the proposed method are 0.876 ± 0.068 and 3.594 ± 1.347 mm, respectively. These results indicate that the proposed method is an effective ROI-aided deep learning strategy for brain MRI tumor segmentation and a valid and useful tool in medical image processing.
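
For clarity, here is a small sketch of the two evaluation metrics reported above, the Dice similarity coefficient and the Hausdorff distance, comparing a predicted tumor mask against the manual contour. Binary masks and an isotropic millimetre spacing are assumptions for the example; this is not the authors' evaluation code.

```python
# Hedged sketch of Dice and symmetric Hausdorff distance for binary segmentation masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred, gt):
    """pred, gt: boolean masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)


def hausdorff_distance(pred, gt, spacing_mm=1.0):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks, in mm."""
    p = np.argwhere(pred) * spacing_mm
    g = np.argwhere(gt) * spacing_mm
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```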


2021 ◽  
Author(s):  
Timo Kumpula ◽  
Janne Mäyrä ◽  
Anton Kuzmin ◽  
Arto Viinikka ◽  
Sonja Kivinen ◽  
...  

<p>Sustainable forest management increasingly highlights the maintenance of biological diversity and requires up-to-date information on the occurrence and distribution of key ecological features in forest environments. Different proxy variables indicating species richness and quality of the sites are essential for efficient detecting and monitoring forest biodiversity. European aspen (Populus tremula L.) is a minor deciduous tree species with a high importance in maintaining biodiversity in boreal forests. Large aspen trees host hundreds of species, many of them classified as threatened. However, accurate fine-scale spatial data on aspen occurrence remains scarce and incomprehensive.</p><p> </p><p>We studied detection of aspen using different remote sensing techniques in Evo, southern Finland. Our study area of 83 km<sup>2</sup> contains both managed and protected southern boreal forests characterized by Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst), and birch (Betula pendula and pubescens L.), whereas European aspen has a relatively sparse and scattered occurrence in the area. We collected high-resolution airborne hyperspectral and airborne laser scanning data covering the whole study area and ultra-high resolution unmanned aerial vehicle (UAV) data with RGB and multispectral sensors from selected parts of the area. We tested the discrimination of aspen from other species at tree level using different machine learning methods (Support Vector Machines, Random Forest, Gradient Boosting Machine) and deep learning methods (3D convolutional neural networks).</p><p> </p><p>Airborne hyperspectral and lidar data gave excellent results with machine learning and deep learning classification methods The highest classification accuracies for aspen varied between 91-92% (F1-score). The most important wavelengths for discriminating aspen from other species included reflectance bands of red edge range (724–727 nm) and shortwave infrared (1520–1564 nm and 1684–1706 nm) (Viinikka et al. 2020; Mäyrä et al 2021). Aspen detection using RGB and multispectral data also gave good results (highest F1-score of aspen = 87%) (Kuzmin et al 2021). Different remote sensing data enabled production of a spatially explicit map of aspen occurrence in the study area. Information on aspen occurrence and abundance can significantly contribute to biodiversity management and conservation efforts in boreal forests. Our results can be further utilized in upscaling efforts aiming at aspen detection over larger geographical areas using satellite images.</p>


2019 ◽  
Vol 11 (2) ◽  
pp. 196 ◽  
Author(s):  
Omid Ghorbanzadeh ◽  
Thomas Blaschke ◽  
Khalil Gholamnia ◽  
Sansar Meena ◽  
Dirk Tiede ◽  
...  

There is a growing demand for detailed and accurate landslide maps and inventories around the globe, particularly in hazard-prone regions such as the Himalayas. Most standard mapping methods require expert knowledge, supervision and fieldwork. In this study, we use optical data from the RapidEye satellite and topographic factors to analyze the potential of machine learning methods, i.e., artificial neural network (ANN), support vector machines (SVM) and random forest (RF), and of different deep-learning convolutional neural networks (CNNs) for landslide detection. We use two training zones and one test zone to independently evaluate the performance of the different methods in the highly landslide-prone Rasuwa district in Nepal. Twenty different maps are created using ANN, SVM, RF and different CNN instantiations and are compared against the results of extensive fieldwork through a mean intersection-over-union (mIOU) and other common metrics. This accuracy assessment yields a best result of 78.26% mIOU for a small-window-size CNN that uses spectral information only. The additional information from a 5 m digital elevation model helps to discriminate between human settlements and landslides but does not improve the overall classification accuracy. CNNs do not automatically outperform ANN, SVM and RF, although this is sometimes claimed. Rather, the performance of CNNs strongly depends on their design, i.e., layer depth, input window sizes and training strategies. We conclude that the CNN method is still in its infancy, as most researchers either use predefined parameters in solutions like Google TensorFlow or apply different settings in a trial-and-error manner. Nevertheless, deep learning can improve landslide mapping in the future if the effects of different network designs are better understood, enough training samples exist, and augmentation strategies for artificially increasing the number of existing samples are better understood.
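
To make the accuracy assessment concrete, the sketch below computes the mean intersection-over-union (mIOU) used above to compare predicted landslide maps against field-mapped inventories. Binary (landslide / non-landslide) raster maps are assumed for illustration.

```python
# Hedged sketch of the mIOU metric over per-class IoU values.
import numpy as np


def mean_iou(pred, gt, classes=(0, 1)):
    """pred, gt: integer class maps of the same shape; returns mean per-class IoU."""
    ious = []
    for c in classes:
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```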


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Yan Wang ◽  
Hao Zhang ◽  
Zhanliang Sang ◽  
Lingwei Xu ◽  
Conghui Cao ◽  
...  

Automatic modulation recognition has successfully used various machine learning methods and achieved solid results. As a subarea of machine learning, deep learning has made great progress in recent years, with remarkable advances in image and language processing. Deep learning requires a large amount of data, and communications, as a field with abundant data, has an inherent advantage in applying it. However, the extensive application of deep learning in the field of communication has not yet been fully developed, especially in underwater acoustic communication. In this paper, we discuss the modulation recognition process, an important part of the communication process, using deep learning methods. Unlike common machine learning methods that require hand-crafted feature extraction, the deep learning method does not require feature extraction and achieves better results than common machine learning.
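
To illustrate the "no hand-crafted feature extraction" point above, here is a small sketch, not the paper's model, of a 1-D CNN that classifies the modulation type directly from raw I/Q sample windows. The layer sizes, window length, and number of modulation classes are assumptions for the example.

```python
# Hedged sketch: a compact 1-D CNN for modulation recognition from raw I/Q data.
import torch
import torch.nn as nn


class ModulationCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),  # 2 input channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # global average over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, 2, seq_len) raw I/Q windows; returns (batch, n_classes) logits.
        return self.classifier(self.features(x).squeeze(-1))
```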

