Combined Atlas and Convolutional Neural Network-Based Segmentation of the Hippocampus from MRI According to the ADNI Harmonized Protocol

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2427
Author(s):  
Samaneh Nobakht ◽  
Morgan Schaeffer ◽  
Nils Forkert ◽  
Sean Nestor ◽  
Sandra E. Black ◽  
...  

Hippocampus atrophy is an early structural feature that can be measured from magnetic resonance imaging (MRI) to improve the diagnosis of neurological diseases. An accurate and robust standardized hippocampus segmentation method is required for reliable atrophy assessment. The aim of this work was to develop and evaluate an automatic segmentation tool (DeepHarp) for hippocampus delineation according to the ADNI harmonized hippocampal protocol (HarP). DeepHarp utilizes a two-step process. First, the approximate location of the hippocampus is identified in T1-weighted MRI datasets using an atlas-based approach, which is used to crop the images to a region-of-interest (ROI) containing the hippocampus. In the second step, a convolutional neural network trained using datasets with corresponding manual hippocampus annotations is used to segment the hippocampus from the cropped ROI. The proposed method was developed and validated using 107 datasets with manually segmented hippocampi according to the ADNI-HarP standard as well as 114 multi-center datasets of patients with Alzheimer’s disease, mild cognitive impairment, cerebrovascular disease, and healthy controls. Twenty-three independent datasets manually segmented according to the ADNI-HarP protocol were used for testing to assess the accuracy, while an independent test-retest dataset was used to assess precision. The proposed DeepHarp method achieved a mean Dice similarity score of 0.88, which was significantly better than four other established hippocampus segmentation methods used for comparison. At the same time, the proposed method also achieved a high test-retest precision (mean Dice score: 0.95). In conclusion, DeepHarp can automatically segment the hippocampus from T1-weighted MRI datasets according to the ADNI-HarP protocol with high accuracy and robustness, which can aid atrophy measurements in a variety of pathologies.
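A minimal sketch of the two-step idea described above, not the authors' released code: an atlas-propagated mask locates the hippocampus and defines a cropped ROI in the T1-weighted volume, and a trained CNN then segments the cropped block. The `crop_roi` helper, the `roi_margin` value, and the Keras-style `cnn_model.predict` call are illustrative assumptions.

```python
import numpy as np

def crop_roi(t1_volume: np.ndarray, atlas_mask: np.ndarray, roi_margin: int = 8):
    """Crop a bounding box around the atlas-propagated hippocampus mask."""
    coords = np.argwhere(atlas_mask > 0)
    lo = np.maximum(coords.min(axis=0) - roi_margin, 0)
    hi = np.minimum(coords.max(axis=0) + roi_margin + 1, t1_volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return t1_volume[slices], slices

def segment_hippocampus(t1_volume, atlas_mask, cnn_model):
    roi, slices = crop_roi(t1_volume, atlas_mask)
    # Assumed Keras-style API: add batch and channel axes, get a probability map back.
    prob = cnn_model.predict(roi[np.newaxis, ..., np.newaxis])[0, ..., 0]
    full = np.zeros_like(t1_volume, dtype=np.uint8)
    full[slices] = (prob > 0.5).astype(np.uint8)  # paste binary mask back into the full volume
    return full
```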

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yanyan Pan ◽  
Huiping Zhang ◽  
Jinsuo Yang ◽  
Jing Guo ◽  
Zhiguo Yang ◽  
...  

This study aimed to explore the application value of multimodal magnetic resonance imaging (MRI) images based on the deep convolutional neural network (Conv.Net) in the diagnosis of strokes. Specifically, four automatic segmentation algorithms were proposed to segment multimodal MRI images of stroke patients. The segmentation results were evaluated in terms of Dice coefficient, accuracy, sensitivity, and segmentation distance coefficient. It was found that although the two-dimensional (2D) fully convolutional neural network-based segmentation algorithm could locate and segment the lesion, its accuracy was low; the three-dimensional (3D) version exhibited higher accuracy, with various objective indicators improved, and the segmentation accuracy on the training set and the test set was 0.93 and 0.79, respectively, meeting the needs of automatic diagnosis. The asymmetric 3D residual U-Net showed good convergence and high segmentation accuracy, and the 3D deep residual network built on it achieved good segmentation coefficients, ensuring segmentation accuracy while avoiding network degradation problems. In conclusion, the Conv.Net model can accurately segment the lesions of patients with ischemic stroke and is recommended for clinical use.
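For reference, the Dice similarity coefficient used above to score the stroke segmentations can be written as follows; this is the standard formulation and is not tied to the study's exact implementation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```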


2020 ◽  
Vol 44 (1) ◽  
pp. 74-81 ◽  
Author(s):  
T.A. Pashina ◽  
A.V. Gaidel ◽  
P.M. Zelter ◽  
A.V. Kapishnikov ◽  
A.V. Nikonorov

This article discusses the creation of masks for highlighting the lungs in computed tomography (CT) images using three methods: the Otsu method, a simple convolutional neural network consisting of 10 identical layers, and the U-Net convolutional neural network. We study and compare methods for automatically highlighting the region of interest (ROI) in lung CT images, provided courtesy of the Clinics of Samara State Medical University. Solving this problem is relevant because medical workers currently have to select the ROI manually as the first step of automated lung CT image processing. A contour-search-based post-processing algorithm that improves segmentation quality is proposed. It is concluded that the U-Net delineates the lung ROI better than the other two methods, while the simple convolutional neural network achieves the highest pixel accuracy of 97.5%, compared with 96.7% for the Otsu method and 96.4% for the U-Net.
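A rough sketch of Otsu-based lung masking with a simple connected-component clean-up, standing in for the contour-search post-processing described above; the authors' exact algorithm may differ, and keeping the two largest dark regions as the lungs is an assumption for illustration.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.segmentation import clear_border

def lung_mask_otsu(ct_slice: np.ndarray, keep_regions: int = 2) -> np.ndarray:
    """Threshold a CT slice with Otsu's method and keep the largest lung-like regions."""
    t = threshold_otsu(ct_slice)
    mask = clear_border(ct_slice < t)   # lungs are dark; drop air touching the image border
    labeled = label(mask)
    regions = sorted(regionprops(labeled), key=lambda r: r.area, reverse=True)
    cleaned = np.zeros_like(mask, dtype=bool)
    for r in regions[:keep_regions]:    # heuristic: the two largest components are the lungs
        cleaned[labeled == r.label] = True
    return cleaned
```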


Author(s):  
Liang Kim Meng ◽  
Azira Khalil ◽  
Muhamad Hanif Ahmad Nizar ◽  
Maryam Kamarun Nisham ◽  
Belinda Pingguan-Murphy ◽  
...  

Background: Bone Age Assessment (BAA) refers to a clinical procedure that aims to identify a discrepancy between the biological and chronological age of an individual by assessing bone age growth. Currently, there are two main methods of performing BAA, known as the Greulich-Pyle and Tanner-Whitehouse techniques. Both involve a manual, qualitative assessment of hand and wrist radiographs, resulting in intra- and inter-operator variability and a time-consuming workflow. Automatic segmentation can be applied to the radiographs, providing the physician with a more accurate delineation of the carpal bones and accurate quantitative analysis. Methods: In this study, we propose an image feature extraction technique based on image segmentation with a fully convolutional neural network with an eight-pixel stride (FCN-8). A total of 290 radiographic images of female and male subjects aged 0 to 18 years were manually segmented and used to train the FCN-8. Results and Conclusion: The results show a high training accuracy of 99.68% and a loss of 0.008619 over 50 epochs of training. The experiments compared 58 images against gold-standard ground-truth images. The accuracy of our fully automated segmentation technique is 0.78 ± 0.06, 1.56 ± 0.30 mm and 98.02% in terms of Dice coefficient, Hausdorff distance, and overall qualitative carpal recognition accuracy, respectively.
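The Hausdorff distance reported above (the Dice coefficient is sketched earlier in this section) can be computed from the two binary masks roughly as follows; the pixel spacing used to convert to millimetres is an assumed placeholder, not a value from the paper.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(pred: np.ndarray, truth: np.ndarray, pixel_spacing_mm: float = 0.1) -> float:
    """Symmetric Hausdorff distance between two binary masks, scaled to millimetres."""
    p = np.argwhere(pred.astype(bool))
    t = np.argwhere(truth.astype(bool))
    d = max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
    return d * pixel_spacing_mm
```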


2021 ◽  
Vol 7 (10) ◽  
pp. 850
Author(s):  
Veena Mayya ◽  
Sowmya Kamath Shevgoor ◽  
Uma Kulkarni ◽  
Manali Hazarika ◽  
Prabal Datta Barua ◽  
...  

Microbial keratitis is an infection of the cornea of the eye that is commonly caused by prolonged contact lens wear, corneal trauma, pre-existing systemic disorders and other ocular surface disorders. It can result in severe visual impairment if improperly managed. According to the latest World Vision Report, at least 4.2 million people worldwide suffer from corneal opacities caused by infectious agents such as fungi, bacteria, protozoa and viruses. In patients with fungal keratitis (FK), overt symptoms are often not evident until an advanced stage. Furthermore, it has been reported that clearly discriminating between bacterial keratitis and FK is challenging even for trained corneal experts, and FK is misdiagnosed in more than 30% of cases. However, if diagnosed early, vision impairment can be prevented through early cost-effective interventions. In this work, we propose a multi-scale convolutional neural network (MS-CNN) for accurate segmentation of the corneal region to enable early FK diagnosis. The proposed approach consists of a deep neural pipeline for corneal region segmentation followed by a ResNeXt model to differentiate between FK and non-FK classes. The model, trained on the segmented region-of-interest images, achieved a diagnostic accuracy of 88.96%. The features learnt by the model show that it can correctly identify dominant corneal lesions for detecting FK.
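A conceptual sketch of the two-stage pipeline described above: a segmentation network isolates the corneal region and a ResNeXt classifier labels the masked image as FK or non-FK. The `seg_model` placeholder, the 0.5 mask threshold, and torchvision's `resnext50_32x4d` are stand-ins for the paper's actual models.

```python
import torch
import torchvision

# Generic ResNeXt standing in for the paper's classifier, with two output classes.
classifier = torchvision.models.resnext50_32x4d(num_classes=2)
classifier.eval()

def diagnose(image: torch.Tensor, seg_model, classifier) -> int:
    """image: (1, 3, H, W) float tensor; returns 0 = non-FK, 1 = FK."""
    with torch.no_grad():
        mask = (seg_model(image) > 0.5).float()   # corneal-region mask from stage one
        cornea_only = image * mask                # suppress everything outside the cornea
        logits = classifier(cornea_only)          # stage two: FK vs. non-FK
    return int(logits.argmax(dim=1).item())
```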


Author(s):  
Truong Quang Vinh ◽  
Dinh Viet Hai

Convolutional neural networks (CNNs) are among the most promising algorithms, outperforming traditional methods in terms of classification accuracy. However, several CNNs, such as VGG, demand huge computation in their convolutional layers. Many accelerators implemented on powerful FPGAs have been introduced to address this problem. In this paper, we present a VGG-based accelerator optimized for a low-cost FPGA. To optimize the FPGA's logic-element and memory resources, we propose a dedicated input buffer that maximizes data reuse. In addition, we design a low-resource processing engine with an optimal number of Multiply-Accumulate (MAC) units. In the experiments, we use the VGG16 model for inference to evaluate the performance of our accelerator and achieve a throughput of 38.8 GOPS at a clock speed of 150 MHz on an Intel Cyclone V SX SoC. The experimental results show that our design is better than previous works in terms of resource efficiency.
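As a back-of-the-envelope check of the reported figures (not a number from the paper), each MAC performs two operations per cycle, so throughput relates to the MAC count roughly as GOPS ≈ 2 × MACs × f_clk × utilization:

```python
def effective_macs(gops: float, f_clk_mhz: float) -> float:
    """Estimate how many fully utilized MAC units the reported throughput implies."""
    ops_per_cycle = gops * 1e9 / (f_clk_mhz * 1e6)
    return ops_per_cycle / 2.0  # a MAC contributes two operations (multiply + add) per cycle

print(effective_macs(38.8, 150.0))  # ≈ 129 fully utilized MAC units
```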


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Valli Bhasha A. ◽  
Venkatramana Reddy B.D.

Purpose: The problems of super-resolution are broadly discussed in diverse fields. While super-resolution models for real-time images have progressed, super-resolving hyperspectral images remains a challenging problem. Design/methodology/approach: This paper aims to develop an enhanced image super-resolution model using optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT) and an optimized deep convolutional neural network. After converting the HR images into LR images, the NSSR images are generated by the optimized NSSR. The ADWT is then used to generate the subbands of both the NSSR and HRSB images. The residual image carrying this information is obtained by the optimized deep CNN. All algorithmic improvements are made with the Opposition-based Barnacles Mating Optimization (O-BMO), with the objective of attaining a multi-objective function concerning the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) index. Extensive analysis on benchmark hyperspectral image datasets shows that the proposed model achieves superior performance over other typical existing super-resolution models. Findings: The comparison of the proposed and conventional super-resolution models shows that the PSNR of the improved O-BMO-(NSSR+DWT+CNN) was 38.8% better than bicubic, 11% better than NSSR, 16.7% better than DWT+CNN, 1.3% better than NSSR+DWT+CNN, and 0.5% better than NSSR+FF-SHO-(DWT+CNN). Hence, it is confirmed that the developed O-BMO-(NSSR+DWT+CNN) performs well in converting LR images to HR images. Originality/value: This paper adopts a recent optimization algorithm, O-BMO, together with optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT) and an optimized deep convolutional neural network to develop the enhanced image super-resolution model. This is the first work that uses an O-BMO-based deep CNN to enhance an image super-resolution model.
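A small sketch of the two image-quality measures driving the O-BMO objective above, PSNR and SSIM, combined into a single fitness value; the equal weighting is an illustrative assumption and the paper's exact multi-objective formulation may differ.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

def fitness(reference, reconstructed, w_psnr: float = 0.5, w_ssim: float = 0.5) -> float:
    """Weighted sum of PSNR and SSIM, as a candidate optimization objective."""
    return w_psnr * psnr(reference, reconstructed) + w_ssim * ssim(
        reference, reconstructed, data_range=reference.max() - reference.min()
    )
```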

