Computational Analysis of Vertebral Body for Compression Fracture using Texture and Shape Features

The primary goal of this paper is to automate radiological measurements of the Vertebral Body (VB) in Magnetic Resonance Imaging (MRI) spinal scans. The method starts by preprocessing the images, then detects and localizes the VB regions, next segments and labels the VBs, and finally classifies each VB under three cases: normal or fractured (case 1), benign or malignant (case 2), and normal, benign, or malignant (case 3). The task is accomplished by extracting and combining distinct VB features such as boundary, gray level, shape, and texture features using various machine learning techniques. The dataset, imbalanced towards the normal and fractured classes, is balanced by data augmentation, which provides an enriched dataset for the learning system to differentiate precisely between classes. The method is tested and validated on a clinical spine dataset, attaining an average accuracy of 94.59% for segmentation on 535 VBs and, for classification on 315 VBs, average accuracies of 96.07% for case 1, 93.23% for case 2, and 92.3% for case 3.
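As one concrete illustration of the texture features the abstract mentions, the sketch below computes a gray-level co-occurrence matrix (GLCM) and its Haralick contrast on a toy quantized image with plain NumPy. This is a minimal stand-in for illustration only, not the authors' actual feature pipeline; the function names and the toy image are invented here.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Toy 4-level "image"; a real pipeline would quantize MRI intensities first.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(round(contrast(p), 3))  # → 0.583
```

Features such as this, computed per segmented VB, are what a downstream classifier would consume alongside shape descriptors.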

Author(s):  
Adela Arpitha
Lalitha Rangarajan


2001
Vol 44 (2)
pp. 145
Author(s):
Hyuk Jung Kim
Seon Kyu Lee
Hee Young Hwang
Hyung Sik Kim
Joon Seok Ko
...  

2021
Vol 11 (10)
pp. 4554
Author(s):
João F. Teixeira
Mariana Dias
Eva Batista
Joana Costa
Luís F. Teixeira
...  

The scarcity of balanced and annotated datasets has been a recurring problem in medical image analysis. Several researchers have tried to fill this gap by employing dataset synthesis with generative adversarial networks (GANs). Breast magnetic resonance imaging (MRI) provides complex, texture-rich medical images with the same annotation shortage, for which, to the best of our knowledge, no previous work has tried synthesizing data. Within this context, our work addresses the problem of synthesizing breast MRI images from corresponding annotations and evaluates the impact of this data augmentation strategy on a semantic segmentation task. We explored variations of image-to-image translation using conditional GANs, namely fitting the generator’s architecture with residual blocks and experimenting with cycle consistency approaches. We studied the impact of these changes on visual verisimilitude and on how a U-Net segmentation model is affected by the use of synthetic data. We achieved sufficiently realistic-looking breast MRI images and maintained a stable segmentation score even when completely replacing the dataset with the synthetic set. Our results were promising, especially for the Pix2PixHD and Residual CycleGAN architectures.
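The cycle consistency approach mentioned above can be summarized by its loss term: translating a sample to the other domain and back should reconstruct the original. Below is a minimal NumPy sketch of the CycleGAN-style cycle-consistency loss; the identity "generators", the array shapes, and the weight `lam` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def cycle_loss(x_a, x_b, g_ab, g_ba, lam=10.0):
    """L_cyc = lam * (||G_BA(G_AB(a)) - a||_1 + ||G_AB(G_BA(b)) - b||_1)."""
    return lam * (l1(g_ba(g_ab(x_a)), x_a) + l1(g_ab(g_ba(x_b)), x_b))

# Identity "generators" reconstruct their input exactly, so the loss is zero.
rng = np.random.default_rng(0)
a = rng.random((8, 8))   # stand-in for an annotation map
b = rng.random((8, 8))   # stand-in for an MRI slice
ident = lambda x: x
print(cycle_loss(a, b, ident, ident))  # → 0.0
```

In training, this term is added to the adversarial loss so that the generators preserve the anatomy encoded in the annotations.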


Information
2021
Vol 12 (6)
pp. 226
Author(s):
Lisa-Marie Vortmann
Leonid Schwenke
Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
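The multimodal late fusion step described above can be sketched as a weighted average of per-modality class probabilities followed by an argmax. The weights and probability values below are illustrative assumptions, not numbers from the study.

```python
import numpy as np

def late_fusion(p_eeg, p_gaze, w_eeg=0.5):
    """Fuse per-modality class probabilities by weighted averaging, then pick argmax."""
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_gaze
    return fused, int(np.argmax(fused))

# Two classes: index 0 = "real" target, index 1 = "virtual" target.
p_eeg = np.array([0.55, 0.45])    # EEG model slightly favors "real"
p_gaze = np.array([0.30, 0.70])   # eye-tracking model favors "virtual"
fused, label = late_fusion(p_eeg, p_gaze)
print(fused, label)  # fused probabilities [0.425, 0.575], predicted class 1
```

Because fusion happens after each model produces its own probabilities, either modality can be dropped at inference time without retraining the other.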


2020
Vol 22 (Supplement_2)
pp. ii93-ii93
Author(s):
Kate Connor
Emer Conroy
Kieron White
Liam Shiels
William Gallagher
...  

Abstract Despite magnetic resonance imaging (MRI) being the gold-standard imaging modality in the glioblastoma (GBM) setting, the availability of rodent MRI scanners is relatively limited. CT is a clinically relevant alternative that is more widely available in preclinical settings. To study the utility of contrast-enhanced (CE)-CT in GBM xenograft modelling, we optimized CT protocols on two instruments (IVIS-SPECTRUM-CT; TRIUMPH-PET/CT) with and without delivery of contrast. As radiomics analysis may facilitate earlier detection of tumors by CT alone, allowing for deeper analyses of tumor characteristics, we established a radiomic pipeline for the extraction and selection of tumor-specific CT-derived radiomic features (including first-order statistics and texture features). U87R-Luc2 GBM cells were implanted orthotopically into NOD/SCID mice (n=25) and tumor growth was monitored via weekly BLI. Concurrently, mice underwent four rounds of CE-CT (IV iomeprol/iopamidol; 50 kV scan). N=45 CE-CT images were semi-automatically delineated and radiomic features were extracted (PyRadiomics 2.2.0) at each imaging timepoint. Differences between normal and tumor tissue were analyzed using recursive selection. Using either CT instrument and contrast agent, tumors > 0.4 cm3 were not detectable until week 9 post-implantation. Radiomic analysis identified three features (waveletHHH_firstorder_Median, original_glcm_Correlation and waveletLHL_firstorder_Median) at weeks 3 and 6 which may be early indicators of tumor presence. These features are now being assessed in CE-CT scans collected pre- and post-temozolomide treatment in a syngeneic model of mesenchymal GBM. Nevertheless, BLI is significantly more sensitive than CE-CT (either visually or using radiomic-enhanced CT feature extraction), with luciferase-positive tumors detectable at week 1. In conclusion, U87R-Luc2 tumors > 0.4 cm3 are only detectable by week 8 using CE-CT on either CT instrument studied. Nevertheless, radiomic analysis has identified features which may allow for earlier tumor detection at week 3, thus expanding the utility of CT in the preclinical setting. Overall, this work supports the discovery of putative prognostic preclinical CT-derived radiomic signatures which may ultimately be assessed as early disease markers in patient datasets.
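The first-order radiomic features named above (e.g. a first-order median) are simple statistics computed over the voxels inside the delineated region. The sketch below shows what such a feature extractor computes; it is a hedged illustration, not the PyRadiomics implementation, and the toy image, mask, and feature set are invented here.

```python
import numpy as np

def first_order_features(image, mask):
    """First-order statistics over the voxels inside a segmentation mask."""
    v = image[mask > 0].astype(float)
    return {
        "median": float(np.median(v)),
        "mean": float(np.mean(v)),
        "energy": float(np.sum(v ** 2)),   # sum of squared intensities
        "range": float(v.max() - v.min()),
    }

# Toy 4x4 "scan" with a 2x2 tumor ROI.
img = np.arange(16).reshape(4, 4)
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1                 # ROI covers intensities 5, 6, 9, 10
print(first_order_features(img, mask))
```

Wavelet-filtered variants (such as waveletHHH_firstorder_Median) apply the same statistics after a wavelet decomposition of the image.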


2021
Vol 11 (1)
Author(s):
Fahime Khozeimeh
Danial Sharifrazi
Navid Hoseini Izadi
Javad Hassannataj Joloudari
Afshin Shoeibi
...  

Abstract COVID-19 has caused many deaths worldwide. The automation of the diagnosis of this virus is highly desired. Convolutional neural networks (CNNs) have shown outstanding classification performance on image datasets. To date, it appears that COVID computer-aided diagnosis systems based on CNNs and clinical information have not yet been analysed or explored. We propose a novel method, named the CNN-AE, to predict the survival chance of COVID-19 patients using a CNN trained with clinical information. Notably, the resources required to prepare CT images are expensive and limited compared to those required to collect clinical data such as blood pressure, liver disease, etc. We evaluated our method using a clinical dataset that we collected and made publicly available. The dataset properties were carefully analysed to extract important features and compute the correlations between features. A data augmentation procedure based on autoencoders (AEs) was proposed to balance the dataset. The experimental results revealed that the average accuracy of the CNN-AE (96.05%) was higher than that of the CNN (92.49%). To demonstrate the generality of our augmentation method, we trained some existing mortality risk prediction methods on our dataset (with and without data augmentation) and compared their performances. We also evaluated our method using another dataset for further verification of its generality. To show that clinical data can be used for COVID-19 survival chance prediction, the CNN-AE was compared with multiple pre-trained deep models that were tuned based on CT images.
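The autoencoder-based balancing step can be sketched as: encode minority-class samples, jitter the latent codes with noise, and decode the result as synthetic samples until the classes are balanced. In the sketch below, identity maps stand in for a trained encoder/decoder pair; all names, sizes, and the noise scale are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_minority(X_min, encode, decode, n_new, noise=0.05):
    """Oversample the minority class: encode, jitter the latent codes, decode."""
    idx = rng.integers(0, len(X_min), size=n_new)   # resample with replacement
    z = encode(X_min[idx])
    z = z + noise * rng.standard_normal(z.shape)    # perturb in latent space
    return decode(z)

# Identity maps stand in for a trained autoencoder.
encode = decode = lambda x: x

X_major = rng.random((90, 4))    # e.g. clinical records of survivors
X_minor = rng.random((10, 4))    # e.g. clinical records of non-survivors
X_new = augment_minority(X_minor, encode, decode, n_new=len(X_major) - len(X_minor))
balanced_minor = np.vstack([X_minor, X_new])
print(balanced_minor.shape)  # → (90, 4)
```

With a real autoencoder, decoding the jittered codes yields plausible new records rather than simple noisy copies, which is the advantage over naive oversampling.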


PEDIATRICS
1986
Vol 78 (2)
pp. 251-256
Author(s):
Roger J. Packer
Robert A. Zimmerman
Leslie N. Sutton
Larissa T. Bilaniuk
Derek A. Bruce
...  

Correct diagnosis of spinal cord disease in childhood is often delayed, resulting in irreversible neurologic deficits. A major reason for this delay is the lack of a reliable means to noninvasively visualize the spinal cord. Magnetic resonance imaging (MRI) should be useful in the evaluation of diseases of the spinal cord. A 1.5 Tesla MRI unit with a surface coil was used to study 41 children, including eight patients with intrinsic spinal cord lesions, eight patients with masses compressing the cord, 12 patients with congenital anomalies of the cord or surrounding bony structures, three patients with syrinxes, and three patients with vertebral body abnormalities. Intrinsic lesions of the cord were well seen in all cases as intrinsic irregularly widened, abnormally intense cord regions. MRI was helpful in following the course of disease in patients with primary spinal cord tumors. Areas of tumor were separable from syrinx cavities. Extrinsic lesions compressing the cord and vertebral body disease were also well visualized. Congenital anomalies of the spinal cord, including tethering and lipomatous tissue, were better seen on MRI than by any other radiographic technique. MRI is an excellent noninvasive "screening" technique for children with suspected spinal cord disease and may be the only study needed in many patients with congenital spinal cord anomalies. It is also an excellent means to diagnose and follow patients with other forms of intra- and extraspinal pathology.


Author(s):  
Jinfang Zeng
Youming Li
Yu Zhang
Da Chen

Environmental sound classification (ESC) is a challenging problem due to the complexity of sounds. To date, a variety of signal processing and machine learning techniques have been applied to the ESC task, including matrix factorization, dictionary learning, wavelet filterbanks and deep neural networks. It has been observed that features extracted from deeper networks tend to achieve higher performance than those extracted from shallow networks. However, in the ESC task, only deep convolutional neural networks (CNNs) containing several layers have been used, and residual networks have been ignored, which leads to degraded performance. Meanwhile, a possible explanation for the limited exploration of CNNs and the difficulty of improving on simpler models is the relative scarcity of labeled data for ESC. In this paper, a residual network called EnvResNet is proposed for the ESC task. In addition, we propose audio data augmentation to overcome the problem of data scarcity. The experiments are performed on the ESC-50 database. Combined with data augmentation, the proposed model outperforms baseline implementations relying on mel-frequency cepstral coefficients and achieves results comparable to other state-of-the-art approaches in terms of classification accuracy.
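Audio data augmentation of the kind the abstract proposes is commonly realized with simple waveform transforms. Below is a minimal NumPy sketch of two such transforms, noise injection at a target SNR and circular time shifting; the specific transforms and parameters are assumptions for illustration, not necessarily those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(x, snr_db=20.0):
    """Inject white noise at a target signal-to-noise ratio (in dB)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + np.sqrt(p_noise) * rng.standard_normal(x.shape)

def time_shift(x, shift):
    """Circularly shift the waveform by `shift` samples."""
    return np.roll(x, shift)

# 1 s of a 440 Hz tone at 16 kHz stands in for an ESC-50 clip.
sr = 16000
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
augmented = [add_noise(x), time_shift(x, 800), add_noise(time_shift(x, -400))]
print(len(augmented), augmented[0].shape)  # → 3 (16000,)
```

Each transform preserves the class label, so the augmented clips multiply the effective size of the labeled training set.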


2020
Vol 99 (9)
pp. 239s-245s
Author(s):
CHAO LI
QIYUE WANG
WENHUA JIAO
MICHAEL JOHNSON
...  

An innovative method was proposed to determine weld joint penetration using machine learning techniques. In our approach, the dot-structured laser images reflected from an oscillating weld pool surface were captured. Experienced welders typically evaluate the weld penetration status based on this reflected laser pattern. To overcome the challenges of identifying features and accurately processing the images using conventional machine vision algorithms, we proposed using the raw images, without any processing, as the input to a convolutional neural network (CNN). The labels needed to train the CNN were the measured weld penetration states, obtained from images of the backside of the workpiece as a set of discrete weld penetration categories. The raw data, images, and penetration states were generated from extensive experiments using an automated robotic gas tungsten arc welding process. Data augmentation was performed to enhance the robustness of the trained network, which led to 270,000 training examples, 45,000 validation examples, and 45,000 test examples. A six-layer convolutional neural network trained with a modified mini-batch gradient descent method led to a final testing accuracy of 90.7%. A voting mechanism based on three continuous images increased the classification accuracy to 97.6%.
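The voting mechanism over three continuous images can be sketched as a simple majority vote over per-frame predictions, which is how a single misclassified frame gets outvoted. The penetration-state labels below are invented for illustration; the paper's exact category names are not given in the abstract.

```python
from collections import Counter

def vote(frame_predictions):
    """Majority vote over per-frame penetration-state predictions."""
    return Counter(frame_predictions).most_common(1)[0][0]

# A single misclassified middle frame is outvoted by its two neighbors.
print(vote(["full", "partial", "full"]))  # → full
```

Smoothing over consecutive frames is what lifts the reported accuracy from 90.7% per frame to 97.6% per decision.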

