Batch Normalized Convolution Neural Network for Liver Segmentation

2020 ◽  
Vol 11 (5) ◽  
pp. 21-35
Author(s):  
Fatima Abdalbagi ◽  
Serestina Viriri ◽  
Mohammed Tajalsir Mohammed

With rapid technological progress touching every aspect of life, it has become essential to advance the clinical fields, including diagnosis and the treatment that follows from it; successful treatment depends on preoperative planning, for example understanding the complex internal structure of the liver and precisely localizing the liver surface and its tumors. Various algorithms have been proposed for automatic liver segmentation. In this paper, we propose a Batch Normalization After All - Convolutional Neural Network (BATA-Convnet) model to segment liver CT images using a deep learning technique. The proposed liver segmentation model consists of four main steps: pre-processing, training the BATA-Convnet, liver segmentation, and a post-processing step to maximize the result efficiency. The Medical Image Computing and Computer Assisted Intervention (MICCAI) dataset and the 3D Image Reconstruction for Comparison of Algorithm Database (3D-IRCAD) were used in the experiments. The average results on MICCAI are a Dice of 0.91, VOE of 13.44%, RVD of 0.23%, ASD of 0.29 mm, RMSSD of 1.35 mm and MaxASD of 0.36 mm. The average results on 3D-IRCAD are a Dice of 0.84, VOE of 13.24%, RVD of 0.16%, ASD of 0.32 mm, RMSSD of 1.17 mm and MaxASD of 0.33 mm.
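The batch-normalization step the model is named for can be sketched in a few lines. This is a minimal NumPy illustration of the operation (with the scale γ and shift β fixed to 1 and 0), not the authors' implementation:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch axis, then scale and
    shift -- the core operation of a BatchNorm layer."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy batch of 4 feature maps of shape 3x3
rng = np.random.default_rng(0)
batch = rng.normal(loc=5.0, scale=2.0, size=(4, 3, 3))
out = batch_norm(batch)
```

After normalization each spatial position has (approximately) zero mean and unit variance across the batch, which is what stabilizes training in deep segmentation networks.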

2019 ◽  
Vol 90 (9-10) ◽  
pp. 971-980 ◽  
Author(s):  
Pandia Rajan Jeyaraj ◽  
Edward Rajan Samuel Nadar

This research paper focuses on the detection of defects in fabric through the design and development of a computer-assisted system using a deep learning technique. The classification network is modeled with a ResNet512-based Convolutional Neural Network to learn deep features in the presented fabric, which enables accurate localization of even minute defects. Our classification proceeds in three major steps: first, an image is acquired by the NI Vision model and pre-processed into a standard pattern for Kullback-Leibler divergence calculation. Second, standard textile fabrics are presented to train the Convolutional Neural Network to distinguish defective regions from defect-free regions. Finally, the test fabrics are examined by the trained deep Convolutional Neural Network. To verify the performance, multiple fabrics are presented and the classification accuracy is evaluated. For standard defects on defective fabrics, an average accuracy of 96.5% with 98.5% precision is obtained. Experimental results on the standard Textile Texture Database confirm that our method provides better results than similar recent classification methods, such as the Support Vector Machine and the Bayesian classifier.
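The Kullback-Leibler divergence used in the pre-processing step compares two discrete distributions, for example grey-level histograms of fabric patches. A minimal sketch, with hypothetical histograms:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions given as equal-length lists of probabilities."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Grey-level histograms of a defect-free and a defective patch (hypothetical)
reference = [0.1, 0.4, 0.4, 0.1]
defective = [0.3, 0.2, 0.2, 0.3]
score = kl_divergence(defective, reference)
```

A larger divergence from the defect-free reference signals a patch whose texture statistics deviate from the standard pattern.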


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Jin-Woong Lee ◽  
Woon Bae Park ◽  
Jin Hee Lee ◽  
Satendra Pal Singh ◽  
Kee-Sun Sohn

Here we report a facile, prompt protocol based on deep-learning techniques to sort out intricate phase identification and quantification problems in complex multiphase inorganic compounds. We simulate plausible powder X-ray diffraction (XRD) patterns for 170 inorganic compounds in the Sr-Li-Al-O quaternary compositional pool, wherein promising LED phosphors have recently been discovered. In total, 1,785,405 synthetic XRD patterns are prepared by combinatorially mixing the simulated powder XRD patterns of the 170 inorganic compounds. Convolutional neural network (CNN) models are built and trained using this large prepared dataset. The fully trained CNN model promptly and accurately identifies the constituent phases in complex multiphase inorganic compounds. Although the CNN is trained using the simulated XRD data, a test with real experimental XRD data returns an accuracy of nearly 100% for phase identification and 86% for three-step phase-fraction quantification.
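The combinatorial mixing of single-phase patterns into synthetic multiphase patterns amounts to weighted sums, with the weights playing the role of phase fractions. A toy sketch (two hypothetical patterns on a five-point grid, not the paper's simulation pipeline):

```python
import numpy as np

def mix_patterns(patterns, weights):
    """Combine single-phase diffraction patterns into one synthetic
    multiphase pattern as a weighted sum (weights = phase fractions)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize fractions to sum to 1
    return np.tensordot(w, np.asarray(patterns, dtype=float), axes=1)

# Two toy single-phase "patterns" over a 5-point 2-theta grid
phase_a = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
phase_b = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
mixed = mix_patterns([phase_a, phase_b], weights=[3, 1])  # a 75%/25% mix
```

Sweeping the weights over many combinations of many phases is what blows 170 base patterns up into millions of labelled training examples.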


2021 ◽  
Author(s):  
Ghassan Mohammed Halawani

The main purpose of this project is to modify a convolutional neural network for image classification, based on a deep-learning framework. A transfer learning technique is used through the MATLAB interface to Alex-Net to train and modify the parameters in the last two fully connected layers of Alex-Net with a new dataset, in order to classify thousands of images. First, the general architecture common to most neural networks and its benefits are presented. The mathematical models and the role of each part of the neural network are explained in detail. Second, different neural networks are studied in terms of architecture, application, and working method, to highlight the strengths and weaknesses of each network. The final part conducts a detailed study of one of the most powerful deep-learning networks for image classification, the convolutional neural network, and how it can be modified to suit different classification tasks by using the transfer learning technique in MATLAB.
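The transfer-learning recipe described here, keeping the pretrained layers frozen and retraining only the final layers, can be illustrated without MATLAB or Alex-Net. In this NumPy sketch a random frozen projection stands in for the pretrained feature extractor, and only a final logistic layer is trained; all data and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Pretrained" feature extractor: its weights are frozen, just as the
# early Alex-Net layers are kept fixed during transfer learning.
W_frozen = rng.normal(size=(2, 8))

def features(x):
    return np.tanh(x @ W_frozen)          # frozen layer, never updated

# New task: linearly separable 2-D points
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the last layer (w, b) is trained, with plain logistic-regression steps.
w = np.zeros(8)
b = 0.0
F = features(X)                           # frozen features, computed once
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= 0.1 * F.T @ grad / len(X)
    b -= 0.1 * grad.mean()

acc = ((F @ w + b > 0) == (y == 1)).mean()
```

Because the frozen features are computed once and only a small last layer is fitted, training is fast even when the feature extractor is large, which is the point of the transfer-learning approach.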


Author(s):  
Giovanni Da Silva ◽  
Aristófanes Silva ◽  
Anselmo De Paiva ◽  
Marcelo Gattass

Lung cancer has the highest mortality rate among cancers, as well as one of the lowest survival rates after diagnosis. Early detection is therefore extremely important for diagnosis and treatment. This paper proposes three different architectures of Convolutional Neural Network (CNN), a deep learning technique, for classifying the malignancy of lung nodules without computing morphology and texture features. The methodology was tested on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), achieving a best accuracy of 82.3%, sensitivity of 79.4% and specificity of 83.8%.
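The reported figures are standard confusion-matrix metrics. As a reference, here is how accuracy, sensitivity and specificity are computed; the counts below are hypothetical, chosen only to exercise the formulas:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on malignant cases) and
    specificity (recall on benign cases) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for a nodule classifier
acc, sens, spec = classification_metrics(tp=27, fp=13, tn=67, fn=7)
```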


10.2196/24762 ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. e24762
Author(s):  
Hyun-Lim Yang ◽  
Chul-Woo Jung ◽  
Seong Mi Yang ◽  
Min-Soo Kim ◽  
Sungho Shim ◽  
...  

Background: Arterial pressure-based cardiac output (APCO) is a less invasive method for estimating cardiac output without concerns about complications from the pulmonary artery catheter (PAC). However, inaccuracies of currently available APCO devices have been reported. Improvements to the algorithm by researchers are impossible, as only a subset of the algorithm has been released.

Objective: In this study, an open-source algorithm was developed and validated using a convolutional neural network and a transfer learning technique.

Methods: A retrospective study was performed using data from a prospective cohort registry of intraoperative bio-signal data from a university hospital. The convolutional neural network model was trained using the arterial pressure waveform as input and the stroke volume (SV) value as the output. The model parameters were pretrained using the SV values from a commercial APCO device (Vigileo or EV1000 with the FloTrac algorithm) and adjusted with a transfer learning technique using SV values from the PAC. The performance of the model was evaluated using absolute error against the PAC on a testing dataset from separate periods. Finally, we compared the performance of the deep learning model and the FloTrac with the SV values from the PAC.

Results: A total of 2057 surgical cases (1958 training and 99 testing cases) were used from the registry. In the deep learning model, the absolute errors of SV were 14.5 (SD 13.4) mL (10.2 [SD 8.4] mL in cardiac surgery and 17.4 [SD 15.3] mL in liver transplantation). Compared with FloTrac, the absolute errors of the deep learning model were significantly smaller (16.5 [SD 15.4] and 18.3 [SD 15.1], P<.001).

Conclusions: The deep learning-based APCO algorithm showed better performance than the commercial APCO device. Further improvement of the algorithm developed in this study may be helpful for estimating cardiac output accurately in clinical practice and optimizing high-risk patient care.
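The evaluation metric, absolute error of stroke volume against the PAC reference reported as mean (SD), can be reproduced with a few lines; the SV readings below are hypothetical:

```python
def absolute_errors(estimates, reference):
    """Per-measurement absolute error of stroke-volume estimates
    against the pulmonary-artery-catheter reference."""
    return [abs(e - r) for e, r in zip(estimates, reference)]

def mean_sd(values):
    """Mean and sample standard deviation, the 'mean (SD)' format."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return mean, sd

# Hypothetical SV readings (mL): model output vs. PAC reference
model_sv = [62.0, 71.0, 55.0, 80.0]
pac_sv = [60.0, 75.0, 58.0, 70.0]
errs = absolute_errors(model_sv, pac_sv)
mae, sd = mean_sd(errs)
```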


Author(s):  
Senthil Pandi Sankareswaran ◽  
Mahadevan Krishnan

Background: Image registration is the process of aligning two or more images in a single coordinate system. Nowadays, medical image registration plays a significant role in computer-assisted disease diagnosis, treatment, and surgery. The different modalities available in medical imaging make image registration an essential step in Computer-Assisted Diagnosis (CAD), Computer-Aided Therapy (CAT) and Computer-Assisted Surgery (CAS). Problem definition: Recently, many learning-based methods have been employed for disease detection and classification, but those methods are not suitable for real-time use due to their delayed response and their need for pre-alignment and labeling. Method: The proposed research constructs a deep learning model with a rigid transform and a B-spline transform for medical image registration, aimed at automatic brain tumour detection. It consists of two steps: the first step uses a rigid-transformation-based Convolutional Neural Network, and the second step uses a B-spline-transform-based Convolutional Neural Network. The model is trained and tested with 3624 MR (Magnetic Resonance) images to assess its performance. The researchers believe that MR imaging helps improve treatment success for brain tumour patients. Result: The results of the proposed method are compared with the Rigid Convolutional Neural Network (CNN), Rigid CNN + Thin-Plate Spline (TPS), Affine CNN, VoxelMorph, ADMIR (Affine and Deformable Medical Image Registration) and ANTs (Advanced Normalization Tools) using the Dice score, average symmetric surface distance (ASD), and Hausdorff distance. Conclusion: The RBCNN model will help the physician to automatically detect and classify brain tumors quickly (18 s) and efficiently, without any pre-alignment or labeling.
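Two of the evaluation metrics used above, the Dice score and the Hausdorff distance, have compact definitions that are easy to implement directly; the toy masks and point sets below are illustrative only:

```python
def dice_score(mask_a, mask_b):
    """Dice overlap between two binary masks given as sets of voxel
    coordinates: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (Euclidean):
    the worst-case distance from any point to the other set."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(src, dst):
        return max(min(dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

# Toy 2-D "segmentations": predicted vs. ground-truth voxels
seg = [(0, 0), (0, 1), (1, 0)]
true = [(0, 0), (0, 1), (1, 1)]
dsc = dice_score(seg, true)
hd = hausdorff(seg, true)
```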


Author(s):  
Xiangbo Lin ◽  
Xiaoxi Li

Background: This review aims to trace the development of algorithms for brain tissue and structure segmentation in MRI images. Discussion: Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) conference, this review analyses the development of the algorithms and discusses the trend from multi-atlas label fusion to deep learning. The intrinsic characteristics of the winning algorithms on the Grand Challenges from 2012 to 2018 are analyzed and the results are compared carefully. Conclusion: Although deep learning has achieved higher rankings in the challenges, it has not yet met expectations in terms of accuracy. More effective and specialized work should be done in the future.
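Multi-atlas label fusion, the pre-deep-learning approach the review starts from, reduces in its simplest form to per-voxel majority voting over labels propagated from each registered atlas. A minimal sketch (the label arrays are hypothetical):

```python
from collections import Counter

def majority_vote(atlas_labels):
    """Fuse per-voxel labels propagated from several atlases by
    taking the most frequent label at each voxel."""
    fused = []
    for votes in zip(*atlas_labels):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three atlases labelling the same 5 voxels (0 = background, 1 = tissue)
atlas1 = [0, 1, 1, 0, 1]
atlas2 = [0, 1, 0, 0, 1]
atlas3 = [1, 1, 1, 0, 0]
fused = majority_vote([atlas1, atlas2, atlas3])
```

The methods compared in the challenges refine this idea with weighted or locally adaptive voting, but the per-voxel fusion structure is the same.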


2021 ◽  
Vol 1 (1) ◽  
pp. 33-44
Author(s):  
Zahraa Z. Edie ◽  
Ammar D. Jasim

In this paper, we propose a malware classification and detection framework using transfer learning based on existing deep learning models that have been pre-trained on massive image datasets. We applied a deep Convolutional Neural Network (CNN) based on the Xception model to perform malware image classification. Xception is a recently developed CNN architecture that is more powerful, with fewer overfitting problems, than currently popular CNN models such as VGG16. Experiments were conducted on the Malimg dataset, which comprises 9,821 samples from 26 different families. Malware samples are represented as byteplot grayscale images, and a deep neural network is trained by freezing the convolutional layers of the Xception model and adapting the last layer to malware family classification. The performance of our approach was compared with other methods, including KNN, SVM and VGG16. The Xception model can effectively classify and detect malware families, achieving higher validation accuracy than all other image-based approaches, including the VGG16 model. Our approach requires no feature engineering, making it easier to adapt to any future evolution in malware, and it is far less time-consuming than the champion's solution.
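The byteplot representation mentioned above maps each byte of a binary to one grey-level pixel and wraps the byte stream into rows of fixed width. A minimal sketch (the width and sample bytes are arbitrary):

```python
def byteplot(data, width):
    """Render a byte string as rows of grey-level pixels (0-255),
    zero-padding the last row -- the 'byteplot' representation used
    for image-based malware classification."""
    rows = []
    for i in range(0, len(data), width):
        row = list(data[i:i + width])
        row += [0] * (width - len(row))   # zero-pad the final row
        rows.append(row)
    return rows

image = byteplot(b"\x00\xff\x10\x20\x30", width=2)
```

The resulting 2-D array can be saved as a grayscale image and fed to any image classifier, which is what lets an ImageNet-pretrained CNN transfer to malware.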


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Feng-Ping An ◽  
Jun-e Liu

Medical image segmentation is a key technology for image guidance; the quality of the segmentation therefore plays an important role in image-guided surgery. Traditional machine learning methods have achieved certain beneficial effects in medical image segmentation, but they suffer from low classification accuracy and poor robustness. Deep learning theory has good generalizability and feature extraction ability, which provides a new approach to medical image segmentation. However, applying deep learning to medical image segmentation raises two problems: the network structure is not constructed according to medical image characteristics, and the generalizability of the model is weak. To address these issues, this paper first adapts a neural network to medical image features by adding cross-layer connections to a traditional convolutional neural network, establishing an optimized convolutional neural network model that can segment medical images using features at two scales simultaneously. At the same time, to solve the generalizability problem, an adaptive distribution function is designed according to the position of the hidden layer, which sets the activation probability of each layer of neurons; this enhances the generalizability of the dropout model, and an adaptive dropout model is proposed. Based on these ideas, this paper proposes a medical image segmentation algorithm based on an optimized convolutional neural network with adaptive dropout depth calculation. An ultrasonic tomographic image and a lumbar CT image were separately segmented with the proposed method.
The experimental results show that the segmentation quality improves over that of traditional machine learning and other deep learning methods, and that the method adapts well to a variety of medical images. This work provides a new perspective for research on medical image segmentation.
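The adaptive dropout idea, a keep probability that depends on the hidden layer's position, can be sketched as follows; the linear schedule used here is an assumption standing in for the paper's adaptive distribution function:

```python
import numpy as np

def adaptive_dropout(x, layer_index, num_layers, p_min=0.5, p_max=0.9,
                     rng=None, training=True):
    """Dropout whose keep probability depends on the hidden layer's
    position: deeper layers keep more units. A linear schedule from
    p_min to p_max is assumed as a simple stand-in for the paper's
    adaptive distribution function."""
    keep = p_min + (p_max - p_min) * layer_index / max(num_layers - 1, 1)
    if not training:
        return x                          # dropout is disabled at inference
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) < keep
    return x * mask / keep                # inverted-dropout scaling

rng = np.random.default_rng(7)
h = np.ones((4, 1000))                    # toy activations of the first layer
out = adaptive_dropout(h, layer_index=0, num_layers=3, rng=rng)
```

Dividing by the keep probability preserves the expected activation, so no rescaling is needed at inference time.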

