State-of-the-Art CNN Optimizer for Brain Tumor Segmentation in Magnetic Resonance Images

2020 ◽  
Vol 10 (7) ◽  
pp. 427
Author(s):  
Muhammad Yaqub ◽  
Jinchao Feng ◽  
M. Sultan Zia ◽  
Kaleem Arshid ◽  
Kebin Jia ◽  
...  

Brain tumors have become a leading cause of death around the globe. The main reason for this epidemic is the difficulty of conducting a timely diagnosis of the tumor. Fortunately, magnetic resonance imaging (MRI) is utilized to diagnose tumors in most cases. The performance of a Convolutional Neural Network (CNN) depends on many factors (i.e., weight initialization, optimization, batches and epochs, learning rate, activation function, loss function, and network topology), on data quality, and on specific combinations of these model attributes. In a segmentation or classification problem, relying on a single optimizer amounts to weak validation unless the choice of optimizer is backed by a strong argument. A systematic optimizer-selection process is therefore important to justify the use of any single optimizer for such decision problems. In this paper, we provide a comprehensive comparative analysis of popular CNN optimizers to benchmark segmentation performance. In detail, we perform a comparative analysis of 10 different state-of-the-art gradient descent-based optimizers for CNNs, namely Adaptive Gradient (Adagrad), Adaptive Delta (AdaDelta), Stochastic Gradient Descent (SGD), Adaptive Moment Estimation (Adam), Cyclic Learning Rate (CLR), Adamax (a variant of Adam based on the infinity norm), Root Mean Square Propagation (RMSProp), Nesterov-accelerated Adaptive Moment Estimation (Nadam), and Nesterov Accelerated Gradient (NAG). The experiments were performed on the BraTS2015 data set. The Adam optimizer achieved the best accuracy, 99.2%, in enhancing the CNN's classification and segmentation ability.
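A minimal sketch of the kind of optimizer benchmark loop the paper describes, assuming Keras on a toy binary-segmentation task. The tiny CNN, the synthetic data, and the optimizer shortlist below are illustrative assumptions, not the authors' BraTS2015 pipeline (CLR is omitted because it is not a built-in Keras optimizer).

```python
import numpy as np
import tensorflow as tf

def build_cnn():
    # Small fully convolutional net producing a per-pixel tumor/background mask.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
    ])

# Synthetic stand-in for MRI slices and masks (the real work uses BraTS2015).
x = np.random.rand(32, 64, 64, 1).astype("float32")
y = (x > 0.7).astype("float32")

optimizers = {
    "SGD": tf.keras.optimizers.SGD(learning_rate=0.01),
    "NAG": tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
    "Adagrad": tf.keras.optimizers.Adagrad(),
    "Adadelta": tf.keras.optimizers.Adadelta(),
    "RMSprop": tf.keras.optimizers.RMSprop(),
    "Adam": tf.keras.optimizers.Adam(),
    "Adamax": tf.keras.optimizers.Adamax(),
    "Nadam": tf.keras.optimizers.Nadam(),
}

# Train an identically initialized architecture under each optimizer and compare.
for name, opt in optimizers.items():
    model = build_cnn()
    model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(x, y, epochs=2, batch_size=8, verbose=0)
    print(f"{name}: final training accuracy = {hist.history['accuracy'][-1]:.3f}")
```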

Author(s):  
Ahmad AL Smadi ◽  
Atif Mehmood ◽  
Ahed Abugabah ◽  
Eiad Almekhlafi ◽  
Ahmad Mohammad Al-smadi

In computer vision, image classification is one of the fundamental image processing tasks. Nowadays, fish classification is a widely studied problem in machine learning and image segmentation, and it has been extended to a variety of domains, such as marketing strategies. This paper presents an effective fish classification method based on convolutional neural networks (CNNs). The experiments were conducted on a new dataset of Bangladesh's indigenous fish species with three kinds of splitting: 80-20%, 75-25%, and 70-30%. We provide a comprehensive comparison of several popular CNN optimizers. In total, we perform a comparative analysis of five state-of-the-art gradient descent-based optimizers: Adaptive Delta (AdaDelta), Stochastic Gradient Descent (SGD), Adaptive Moment Estimation (Adam), Adamax, and Root Mean Square Propagation (RMSprop). Overall, the experimental results show that RMSprop, Adam, and Adamax performed well compared with the other optimization techniques used, while AdaDelta and SGD performed the worst. Furthermore, the experimental results demonstrated that the Adam optimizer attained the best performance measures in the 70-30% and 80-20% splitting experiments, while the RMSprop optimizer attained the best performance measures in the 75-25% splitting experiments. Finally, the proposed model is compared with state-of-the-art deep CNN models and attained the best accuracy among them, 98.46%, in enhancing the CNN's classification ability.
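A minimal sketch of the split-ratio protocol described above, assuming scikit-learn for splitting; the random feature matrix and label vector are placeholders for the fish-image dataset and the CNN features it would produce.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))      # stand-in for CNN image features
labels = rng.integers(0, 6, size=500)       # stand-in for fish species labels

# The paper's three protocols: 80-20%, 75-25%, and 70-30% train-test splits.
for test_frac in (0.20, 0.25, 0.30):
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=test_frac, stratify=labels, random_state=0)
    print(f"{int((1 - test_frac) * 100)}-{int(test_frac * 100)} split: "
          f"{len(x_tr)} train / {len(x_te)} test samples")
```

Each split would then be paired with each candidate optimizer, and the per-split test accuracies compared, as in the abstract's Adam-vs-RMSprop findings.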


Author(s):  
Mallikarjunaswamy Shivagangadharaiah Matada ◽  
Mallikarjun Sayabanna Holi ◽  
Rajesh Raman ◽  
Sujana Theja Jayaramu Suvarna

Background: Osteoarthritis (OA) is a degenerative disease of joint cartilage affecting elderly people around the world. Visualization and quantification of cartilage are essential for the assessment of OA and the rehabilitation of affected people. Magnetic Resonance Imaging (MRI) is the most widely used imaging modality in the treatment of knee joint diseases, but proper visualization and quantification of articular cartilage using MRI remain challenging. Volume rendering and 3D visualization can provide an overview of the anatomy and disease condition of the knee joint. In this work, cartilage is segmented from knee joint MRI and visualized in 3D using a Volume of Interest (VOI) approach. Methods: Visualization of cartilage helps in assessing cartilage degradation in diseased knee joints. Cartilage thickness and volume were quantified using image processing techniques in OA-affected knee joints. Statistical analysis was carried out on a processed data set of 110 knee joints, comprising male (56) and female (54) subjects, both normal (22) and at different stages of OA (88). Differences in cartilage thickness and volume were observed across groups based on age, gender, and BMI in normal and progressive OA knee joints. Results: The results show that cartilage size and volume are significantly lower in OA than in normal knee joints. Cartilage thickness and volume are significantly lower for people aged 50 years and above and with a Body Mass Index (BMI) of 25 or greater. Cartilage volume correlates with the progression of the disease and can be used to evaluate the response to therapies. Conclusion: The developed methods can be used as a helping tool in assessing cartilage degradation in OA-affected knee joints and in treatment planning.
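A minimal sketch of mask-based cartilage quantification, assuming a binary cartilage segmentation and known voxel spacing; NumPy/SciPy and the distance-transform thickness proxy below are illustrative stand-ins for the paper's VOI-based MRI pipeline, not its exact method.

```python
import numpy as np
from scipy import ndimage

def cartilage_metrics(mask, spacing_mm=(0.5, 0.5, 1.0)):
    """Volume (mm^3) and a thickness estimate (mm) from a 3D binary mask."""
    voxel_vol = np.prod(spacing_mm)
    volume = mask.sum() * voxel_vol
    # Thickness proxy: twice the maximum distance from a cartilage voxel
    # to the background, via an anisotropic Euclidean distance transform.
    dist = ndimage.distance_transform_edt(mask, sampling=spacing_mm)
    thickness = 2.0 * dist.max()
    return volume, thickness

# Toy example: a 4-voxel-thick slab standing in for a segmented cartilage plate.
mask = np.zeros((64, 64, 16), dtype=bool)
mask[16:48, 16:48, 6:10] = True
vol, thick = cartilage_metrics(mask)
print(f"volume = {vol:.0f} mm^3, thickness ~ {thick:.1f} mm")
```

Per-joint volume and thickness values computed this way could then be grouped by age, gender, and BMI for the statistical comparisons the abstract reports.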


2019 ◽  
Vol 12 (2) ◽  
pp. 120-127 ◽  
Author(s):  
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering manoeuvring is proposed as an empowerment of autonomous driving technologies. The training data are collected from a front-facing camera and the steering commands issued by an experienced driver driving in traffic as well as on urban roads. Methods: These data are then used to train the proposed CNN to facilitate what is called "Behavioral Cloning". The proposed behavioral cloning CNN is named "BCNet", and its deep seventeen-layer architecture was selected after extensive trials. BCNet was trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper goes through the development and training process in detail and shows the image processing pipeline harnessed in the development. Conclusion: The proposed approach proved successful in cloning the driving behavior embedded in the training data set after extensive simulations.
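A minimal sketch of the behavioral cloning setup described above: regress a steering command from camera frames and train with Adam. The shallow toy network, input size, and random data are assumptions for illustration; the paper's BCNet is a deeper seventeen-layer architecture trained on real driving footage.

```python
import numpy as np
import tensorflow as tf

# Toy steering regressor: frame in, steering command out.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(66, 200, 3)),            # front-camera frame
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(1),                              # steering command
])

# Adam, a variant of SGD, matching the training setup described above.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")

frames = np.random.rand(16, 66, 200, 3).astype("float32")         # stand-in frames
steering = np.random.uniform(-1, 1, size=(16, 1)).astype("float32")  # stand-in labels
model.fit(frames, steering, epochs=1, batch_size=4, verbose=1)
```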


NeuroImage ◽  
2018 ◽  
Vol 183 ◽  
pp. 150-172 ◽  
Author(s):  
Aaron Carass ◽  
Jennifer L. Cuzzocreo ◽  
Shuo Han ◽  
Carlos R. Hernandez-Castillo ◽  
Paul E. Rasser ◽  
...  

2021 ◽  
pp. 1-27
Author(s):  
Tim Sainburg ◽  
Leland McInnes ◽  
Timothy Q. Gentner

Abstract: UMAP is a nonparametric graph-based dimensionality reduction algorithm that uses applied Riemannian geometry and algebraic topology to find low-dimensional embeddings of structured data. The UMAP algorithm consists of two steps: (1) computing a graphical representation of a data set (a fuzzy simplicial complex) and (2) optimizing a low-dimensional embedding of the graph through stochastic gradient descent. Here, we extend the second step of UMAP to a parametric optimization over neural network weights, learning a parametric relationship between data and embedding. We first demonstrate that parametric UMAP performs comparably to its nonparametric counterpart while conferring the benefit of a learned parametric mapping (e.g., fast online embeddings for new data). We then explore UMAP as a regularization: constraining the latent distribution of autoencoders, parametrically varying global structure preservation, and improving classifier accuracy for semisupervised learning by capturing structure in unlabeled data.
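A minimal sketch of the parametric-UMAP idea, assuming the umap-learn package's ParametricUMAP class (which replaces the free embedding coordinates of step 2 with a neural-network encoder trained by stochastic gradient descent). The random data below is a placeholder.

```python
import numpy as np
from umap.parametric_umap import ParametricUMAP  # requires umap-learn + TensorFlow

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 50))            # stand-in for structured data

embedder = ParametricUMAP(n_components=2)     # encoder weights are optimized
embedding = embedder.fit_transform(data)      # step 1: graph; step 2: SGD fit

# Key benefit over nonparametric UMAP: the learned mapping embeds new points
# quickly, without re-optimizing the whole embedding.
new_points = rng.normal(size=(10, 50))
new_embedding = embedder.transform(new_points)
print(embedding.shape, new_embedding.shape)
```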


2021 ◽  
Author(s):  
Kun-Cheng Ke ◽  
Ming-Shyan Huang

Abstract: Injection molding is broadly used in the mass production of plastic parts and must meet requirements of efficiency and quality consistency. Machine learning can effectively predict the quality of injection molded parts. However, the performance of machine learning models largely depends on the accuracy of training. Hyperparameters such as activation functions, momentum, and learning rate are crucial to the accuracy and efficiency of model training. This research analyzed the influence of these hyperparameters on testing accuracy, explored the corresponding optimal learning rates, and provided an optimal training model for predicting the quality of injection molded parts. In this study, stochastic gradient descent (SGD) and stochastic gradient descent with momentum were used to optimize an artificial neural network model. Through optimization of these training hyperparameters, the testing accuracy for the width of the injection molded product improved. The experimental results indicated that, in the absence of momentum effects, all five activation functions can achieve more than 90% training accuracy with a learning rate of 0.1. Moreover, when optimized with SGD, the Sigmoid activation function with a learning rate of 0.1 reached a testing accuracy of 95.8%. Although momentum had the least influence on accuracy, it affected the convergence speed of the Sigmoid function, reducing the number of required learning iterations (an 82.4% reduction rate). Optimizing hyperparameter settings can thus improve the accuracy of model testing and markedly reduce training time.
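A minimal NumPy sketch contrasting the two update rules studied above, plain SGD and SGD with momentum, on a 1-D quadratic loss. The learning rate and momentum coefficient are illustrative values, not the paper's tuned settings.

```python
import numpy as np

def loss_grad(w):
    return 2.0 * w          # gradient of loss(w) = w^2

lr, mu = 0.1, 0.9           # learning rate, momentum coefficient
w_sgd, w_mom, v = 5.0, 5.0, 0.0

for step in range(50):
    w_sgd -= lr * loss_grad(w_sgd)        # plain SGD: step along the gradient
    v = mu * v - lr * loss_grad(w_mom)    # momentum: accumulate a velocity term
    w_mom += v

print(f"after 50 steps: SGD w = {w_sgd:.4f}, SGD+momentum w = {w_mom:.4f}")
```

The velocity term is what changes the convergence behavior the abstract measures: it can accelerate progress along consistent gradient directions, at the cost of possible oscillation, which is why momentum mainly affected convergence speed rather than final accuracy.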


2019 ◽  
Vol 10 (1) ◽  
pp. 64
Author(s):  
Yi Lin ◽  
Honggang Zhang

In the era of Big Data, multi-instance learning, as a weakly supervised learning framework, has various applications, since it helps reduce the cost of the data-labeling process. Under this weakly supervised setting, learning effective instance representations/embeddings is challenging. To address this issue, we propose an instance-embedding regularizer that can boost the performance of both instance- and bag-embedding learning in a unified fashion. Specifically, the crux of the instance-embedding regularizer is to maximize the correlation between instance-embedding and underlying instance-label similarities. The embedding-learning framework was implemented using a neural network and optimized in an end-to-end manner using stochastic gradient descent. In experiments, various applications were studied, and the results show that the proposed instance-embedding regularization method is highly effective, achieving state-of-the-art performance.
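A minimal PyTorch sketch of one reading of the regularizer's core idea: encourage pairwise instance-embedding similarities to correlate with pairwise instance-label similarities. The encoder output, the stand-in labels, and the exact correlation form are assumptions; the paper integrates its regularizer into end-to-end bag-level training.

```python
import torch

def embedding_label_correlation(emb, labels):
    """Pearson correlation between pairwise embedding and label similarities."""
    emb = torch.nn.functional.normalize(emb, dim=1)
    s_emb = emb @ emb.t()                                  # embedding similarity
    s_lab = (labels[:, None] == labels[None, :]).float()   # label similarity
    se, sl = s_emb.flatten(), s_lab.flatten()
    se, sl = se - se.mean(), sl - sl.mean()
    return (se * sl).sum() / (se.norm() * sl.norm() + 1e-8)

emb = torch.randn(20, 8, requires_grad=True)     # stand-in instance embeddings
labels = torch.randint(0, 2, (20,))              # stand-in instance labels
reg = -embedding_label_correlation(emb, labels)  # negate: maximize correlation
reg.backward()                                   # differentiable, so SGD-trainable
print(f"regularizer value: {reg.item():.3f}")
```

Added to a bag-level loss, this term pushes embeddings of same-label instances together and different-label instances apart, which is the unified instance/bag benefit the abstract claims.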


Geophysics ◽  
2011 ◽  
Vol 76 (1) ◽  
pp. T13-T25 ◽  
Author(s):  
Jun Matsushima ◽  
Makoto Suzuki ◽  
Yoshibumi Kato ◽  
Shuichi Rokugawa

Seismic attenuation is not due entirely to intrinsic properties; it also includes a component due to scattering effects. Although different techniques have been used to investigate the attenuation of seismic waves experimentally, few laboratory measurements of attenuation have taken the effect of scattering attenuation into account. Herein, partially frozen brine, as a solid-liquid coexistence system, is used to investigate attenuation phenomena. We obtained a series of 2D apparent diffusion coefficient (ADC) maps of the ice-brine coexisting system using a diffusion-weighted magnetic resonance imaging (DW-MRI) technique at [Formula: see text], and found a strongly heterogeneous spatial distribution of unfrozen brine. From these maps, we constructed a synthetic seismic data set propagating through 2D media, generating synthetic data with a second-order finite-difference scheme for the 2D acoustic wave equation. We estimated ultrasonic scattering attenuation in such systems by the centroid frequency shift method, assuming that the quality factor (Q-value) is independent of frequency. The estimated scattering attenuation ranges from 0.015 to 0.05, corresponding to 10% to 30% of the total attenuation measured in laboratory experiments.
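A minimal sketch of the centroid frequency shift method mentioned above, following the Quan and Harris (1997) result that, for a Gaussian source spectrum with variance sigma_s^2, a frequency-independent Q satisfies Q = pi * t * sigma_s^2 / (f_s - f_r), where f_s and f_r are the centroid frequencies of the source and received spectra and t is the traveltime. The synthetic spectra and parameter values are illustrative assumptions, not the paper's data.

```python
import numpy as np

def centroid(freqs, spectrum):
    return np.sum(freqs * spectrum) / np.sum(spectrum)

freqs = np.linspace(0.0, 2e6, 2001)                   # Hz (ultrasonic band)
f0, sigma = 5e5, 1e5                                  # Gaussian source spectrum
source = np.exp(-((freqs - f0) ** 2) / (2 * sigma**2))

t = 2e-5                                              # traveltime, s
Q_true = 20.0
received = source * np.exp(-np.pi * freqs * t / Q_true)  # attenuated spectrum

# Centroid shift between source and received spectra yields the Q estimate.
f_s, f_r = centroid(freqs, source), centroid(freqs, received)
sigma_s2 = np.sum((freqs - f_s) ** 2 * source) / np.sum(source)
Q_est = np.pi * t * sigma_s2 / (f_s - f_r)
print(f"estimated Q = {Q_est:.1f} (true {Q_true})")
```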


2018 ◽  
Author(s):  
Kazunori D Yamada

Abstract: In the deep learning era, stochastic gradient descent is the most common method used for optimizing neural network parameters. Among the various mathematical optimization methods, the gradient descent method is the most naive. Adjustment of the learning rate is necessary for quick convergence, and with plain gradient descent this is normally done manually. Many optimizers have been developed to control the learning rate and increase convergence speed. Generally, these optimizers adjust the learning rate automatically in response to learning status, and they have been gradually improved by incorporating the effective aspects of earlier methods. In this study, we developed a new optimizer: YamAdam. Our optimizer is based on Adam, which utilizes the first and second moments of previous gradients. In addition to the moment estimation system, we incorporated an advantageous part of AdaDelta, namely its unit correction system, into YamAdam. According to benchmark tests on some common datasets, our optimizer showed convergence similar to or faster than that of existing methods. YamAdam is an alternative optimizer option for deep learning.
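A minimal NumPy sketch of the Adam update that YamAdam builds on: exponentially decayed first and second moment estimates of the gradient with bias correction. The AdaDelta-style unit correction that YamAdam adds on top is omitted here; this only illustrates the base mechanism described above, with illustrative hyperparameter values.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad**2       # second moment (uncentered variance)
    m_hat = m / (1 - b1**t)               # bias correction for zero init
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy run on loss(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.05)
print(f"w after 500 Adam steps: {w}")
```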

