Deep neural networks for segmentation of basal ganglia sub-structures in brain MR images

Author(s):  
Akshay Sethi ◽  
Akshat Sinha ◽  
Ayush Agarwal ◽  
Chetan Arora ◽  
Anubha Gupta
2021 ◽  
Vol 67 ◽  
pp. 101817
Author(s):  
Yunzhi Huang ◽  
Sahar Ahmad ◽  
Jingfan Fan ◽  
Dinggang Shen ◽  
Pew-Thian Yap

2018 ◽  
Author(s):  
Gary H. Chang ◽  
David T. Felson ◽  
Shangran Qiu ◽  
Terence D. Capellini ◽  
Vijaya B. Kolachalama

ABSTRACT
Background and objective: It remains difficult to characterize pain in knee joints with osteoarthritis solely by radiographic findings. We sought to understand how advanced machine learning methods, such as deep neural networks, can be used to analyze raw MRI scans and predict bilateral knee pain, independent of other risk factors.
Methods: We developed a deep learning framework to associate information from MRI slices taken from the left and right knees of subjects from the Osteoarthritis Initiative with bilateral knee pain. Model training was performed by first extracting features from two-dimensional (2D) sagittal intermediate-weighted turbo spin-echo slices. The features extracted from all 2D slices were then combined in a fused deep neural network that directly associated them with the output of interest, framed as a binary classification problem.
Results: The deep learning model predicted bilateral knee pain on test data with 70.1% mean accuracy, 51.3% mean sensitivity, and 81.6% mean specificity. Systematic analysis of the predictions on the test data revealed that model performance was consistent across subjects of different Kellgren-Lawrence grades.
Conclusion: The study demonstrates a proof of principle that a machine learning approach can be applied to associate MR images with bilateral knee pain.
Significance and innovation: Knee pain is typically considered an early indicator of osteoarthritis (OA) risk. Emerging evidence suggests that MRI changes are linked to pre-clinical OA, underscoring the need for image-based models to predict knee pain. We leveraged a state-of-the-art machine learning approach to associate raw MR images with bilateral knee pain, independent of other risk factors.
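The fusion step described in the methods (per-slice feature vectors combined into one joint representation for a binary output) can be sketched in plain Python. This is a minimal illustration under assumed details, not the authors' implementation: the function names, the averaging fusion, and the logistic output are all assumptions standing in for the learned fused network.

```python
import math

def fuse_slice_features(slice_features):
    """Average per-slice feature vectors into one joint representation.

    slice_features: list of equal-length feature vectors, one per 2D slice
    (e.g. features a CNN might extract from sagittal MR slices of both knees).
    Averaging is an illustrative fusion choice, not the paper's method.
    """
    n = len(slice_features)
    dim = len(slice_features[0])
    return [sum(f[i] for f in slice_features) / n for i in range(dim)]

def predict_pain_probability(slice_features, weights, bias):
    """Logistic output over the fused representation (binary classification)."""
    fused = fuse_slice_features(slice_features)
    logit = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-logit))
```

For example, `predict_pain_probability([[0.2, 0.8], [0.4, 0.6]], [1.0, -1.0], 0.0)` fuses the two slice vectors to `[0.3, 0.7]` and returns a probability below 0.5, which would map to the "no pain" class under a 0.5 threshold.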


2021 ◽  
Vol 11 (5) ◽  
pp. 1364-1371
Author(s):  
Ching Wai Yong ◽  
Kareen Teo ◽  
Belinda Pingguan Murphy ◽  
Yan Chai Hum ◽  
Khin Wee Lai

In recent decades, convolutional neural networks (CNNs) have delivered promising results in vision-related tasks across different domains. Previous studies have introduced deeper network architectures to further improve performance in object classification, localization, and segmentation. However, deeper architectures complicate the mapping between network layers and the processing stages of the ventral visual pathway. Although CORnet models are not precisely biomimetic, they approximate the anatomy of the ventral visual pathway more closely than other deep neural networks. The uniqueness of this architecture inspired us to extend it into a core object segmentation network, CORSegnet-Z, which uses CORnet-Z building blocks as its encoding elements. We train and evaluate the proposed model on two large datasets. The proposed model shows significant improvements in segmentation metrics when delineating cartilage tissue in knee magnetic resonance (MR) images and when segmenting lesion boundaries in dermoscopic images.
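Each CORnet-Z building block is, in essence, a convolution followed by a ReLU nonlinearity and max-pooling. A toy plain-Python rendering of one such encoding stage is sketched below; it is purely illustrative (1D signals instead of 2D images, fixed rather than learned kernels), not the CORSegnet-Z code.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (correlation form, as used in deep learning)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Element-wise rectified linear unit."""
    return [max(0.0, x) for x in xs]

def maxpool(xs, size=2):
    """Non-overlapping max-pooling with the given window size."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

def cornet_z_block(signal, kernel):
    """One CORnet-Z-style encoding stage: conv -> ReLU -> max-pool."""
    return maxpool(relu(conv1d(signal, kernel)))
```

Stacking several such blocks gives the encoder; a segmentation network like CORSegnet-Z would pair the encoder with a decoder that upsamples back to the input resolution.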


2020 ◽  
Vol 37 (4) ◽  
pp. 593-601
Author(s):  
Premamayudu Bulla ◽  
Lakshmipathi Anantha ◽  
Subbarao Peram

To investigate the effect of deep neural networks with transfer learning on MR images for tumor classification, and to improve classification metrics, we built image-level, stratified image-level, and patient-level models. A total of 3064 T1-weighted magnetic resonance (MR) images from 233 patient cases covering three brain tumor types (meningioma, glioma, and pituitary) were collected, including coronal, sagittal, and axial views; on average, the dataset contains fourteen images per patient across the three views. Classification is performed with a model cross-trained from a pre-trained InceptionV3 network. Three image-level models and one patient-level model are built on the MR imaging dataset and evaluated with classification metrics such as accuracy, loss, precision, recall, kappa, and AUC. The proposed models are validated using four approaches: holdout validation, 10-fold cross-validation, stratified 10-fold cross-validation, and group 10-fold cross-validation. The generalization capability and improvement of the network are tested using cropped and uncropped versions of the dataset images. The best results are obtained with group 10-fold cross-validation (patient-level) on this dataset (ACC = 99.82). A deep neural network with transfer learning can thus be used to classify brain tumors from MR images; our patient-level model achieved the best classification accuracy.
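The key property of the group 10-fold (patient-level) validation above is that folds are split by patient, so no patient's images appear in both the training and test sets. A minimal plain-Python sketch of such a grouped split follows; the round-robin assignment is an illustrative assumption (scikit-learn's `GroupKFold` implements the same idea with balanced fold sizes).

```python
def group_kfold(patient_ids, n_splits):
    """Split sample indices into folds such that all samples from one
    patient land in the same fold (patient-level validation).

    patient_ids: one patient identifier per sample (e.g. per MR image).
    Returns a list of n_splits index lists.
    """
    # Map each distinct patient to a fold in round-robin order.
    unique = sorted(set(patient_ids))
    fold_of = {pid: i % n_splits for i, pid in enumerate(unique)}
    folds = [[] for _ in range(n_splits)]
    for idx, pid in enumerate(patient_ids):
        folds[fold_of[pid]].append(idx)
    return folds
```

For example, with samples from patients `["a", "a", "b", "c", "c", "d"]` and two splits, all of patient "a"'s and "c"'s images land in one fold and "b"'s and "d"'s in the other, so a model tested on either fold never saw those patients during training. This avoids the leakage that a plain image-level split would allow.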


2020 ◽  
Vol 27 ◽  
pp. 141-145 ◽  
Author(s):  
Kevin J. Chung ◽  
Roberto Souza ◽  
Richard Frayne

Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2018 ◽  
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong
