Effect of Patient Clinical Variables in Osteoporosis Classification Using Hip X-rays in Deep Learning Analysis

Medicina ◽  
2021 ◽  
Vol 57 (8) ◽  
pp. 846
Author(s):  
Norio Yamamoto ◽  
Shintaro Sukegawa ◽  
Kazutaka Yamashita ◽  
Masaki Manabe ◽  
Keisuke Nakano ◽  
...  

Background and Objectives: A few deep learning studies have reported that combining image features with patient variables enhances identification accuracy compared with image-only models. However, previous studies have not statistically evaluated the additional effect of patient variables over image-only models. This study aimed to statistically evaluate the osteoporosis identification ability of deep learning models that combine hip radiographs with patient variables. Materials and Methods: We collected a dataset containing 1699 images from patients who underwent skeletal-bone-mineral density measurements and hip radiography at a general hospital from 2014 to 2021. Osteoporosis was assessed from hip radiographs using convolutional neural network (CNN) models (ResNet18, 34, 50, 101, and 152). We also investigated ensemble models in which patient clinical variables were added to each CNN. Accuracy, precision, recall, specificity, F1 score, and area under the curve (AUC) were calculated as performance metrics. Furthermore, we statistically compared the accuracy of the image-only model with that of an ensemble model that included images plus patient factors, including the effect size for each performance metric. Results: All metrics were improved in the ResNet34 ensemble model compared with the image-only model. The AUC score of the ensemble model was significantly improved compared with that of the image-only model (difference 0.004; 95% CI 0.002–0.0007; p = 0.0004; effect size: 0.871). Conclusions: This study revealed the additional effect of patient variables in the identification of osteoporosis using deep CNNs with hip radiographs. Our results provide evidence that patient variables have an additive effect on image-based osteoporosis identification.
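The statistical comparison described above (a confidence interval for the AUC difference between the image-only and ensemble models) can be approximated in several ways; the sketch below uses a generic paired bootstrap over test cases with placeholder labels and scores, not the authors' analysis or data.

```python
# Paired bootstrap CI for the AUC difference between two models scored on the
# same test set. All data here are placeholders for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=300)                         # placeholder labels
score_image = rng.random(300)                                  # image-only model scores
score_ensemble = np.clip(score_image + 0.05 * y_true, 0, 1)    # image + variables scores

diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))       # resample cases with replacement
    if len(np.unique(y_true[idx])) < 2:
        continue                                               # AUC needs both classes present
    diffs.append(roc_auc_score(y_true[idx], score_ensemble[idx])
                 - roc_auc_score(y_true[idx], score_image[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.4f}, {hi:.4f}]")
```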

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Jonathan Stubblefield ◽  
Mitchell Hervert ◽  
Jason L. Causey ◽  
Jake A. Qualls ◽  
Wei Dong ◽  
...  

Abstract One of the challenges in the urgent evaluation of patients with acute respiratory distress syndrome (ARDS) in the emergency room (ER) is distinguishing between cardiac and infectious etiologies for their pulmonary findings. We conducted a retrospective study with data collected from 171 ER patients. ER patient classification into cardiac and infection causes was evaluated with clinical data and chest X-ray image data. We show that a deep-learning model trained on an external image data set can be used to extract image features and improve the classification accuracy of a data set that does not contain enough image data to train a deep-learning model. An analysis of clinical feature importance was performed to identify the most important clinical features for ER patient classification. The current model is publicly available with an interface at the web link: http://nbttranslationalresearch.org/.
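A minimal sketch of the general strategy reported above: an externally pretrained CNN serves only as a fixed image feature extractor for a small cohort, and a classical model trained on the concatenated image and clinical features exposes clinical feature importance. The backbone, cohort size, and feature names are illustrative assumptions, not the study's implementation.

```python
# Pretrained CNN as a frozen feature extractor + random forest on
# [image embedding | clinical variables], with clinical feature importance.
import numpy as np
import torch
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

extractor = models.densenet121(weights="IMAGENET1K_V1")
extractor.classifier = torch.nn.Identity()     # keep the 1024-d pooled embedding
extractor.eval()

def embed(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224) preprocessed chest X-rays."""
    with torch.no_grad():
        return extractor(images).numpy()

# Hypothetical small cohort: image embeddings plus a clinical table
img_emb = embed(torch.randn(171, 3, 224, 224))     # placeholder images
clinical = np.random.rand(171, 5)                  # e.g. age, vitals (assumed)
X = np.hstack([img_emb, clinical])
y = np.random.randint(0, 2, size=171)              # 0 = cardiac, 1 = infection

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
clinical_importance = clf.feature_importances_[-clinical.shape[1]:]
print(clinical_importance)                         # importance of each clinical feature
```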


Biomolecules ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 1534
Author(s):  
Norio Yamamoto ◽  
Shintaro Sukegawa ◽  
Akira Kitamura ◽  
Ryosuke Goto ◽  
Tomoyuki Noda ◽  
...  

This study considers the use of deep learning to diagnose osteoporosis from hip radiographs and whether adding clinical data improves diagnostic performance over images alone. For objective labeling, we collected a dataset containing 1131 images from patients who underwent both skeletal bone mineral density measurement and hip radiography at a single general hospital between 2014 and 2019. Osteoporosis was assessed from the hip radiographs using five convolutional neural network (CNN) models. We also investigated ensemble models with clinical covariates added to each CNN. The accuracy, precision, recall, specificity, negative predictive value (NPV), F1 score, and area under the curve (AUC) score were calculated for each network. In the evaluation of the five CNN models using only hip radiographs, GoogleNet and EfficientNet b3 exhibited the best accuracy, precision, and specificity. Among the five ensemble models, EfficientNet b3 exhibited the best accuracy, recall, NPV, F1 score, and AUC score when patient variables were included. The CNN models diagnosed osteoporosis from hip radiographs with high accuracy, and their performance improved further with the addition of clinical covariates from patient records.
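For reference, the metrics listed above can be computed from a confusion matrix and predicted scores roughly as follows; the labels and scores here are placeholders, not study data.

```python
# Accuracy, precision, recall, specificity, NPV, F1, and AUC from predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # placeholder labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])  # placeholder model scores
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    "precision":   tp / (tp + fp),
    "recall":      tp / (tp + fn),          # sensitivity
    "specificity": tn / (tn + fp),
    "npv":         tn / (tn + fn),          # negative predictive value
    "f1":          f1_score(y_true, y_pred),
    "auc":         roc_auc_score(y_true, y_score),
}
print(metrics)
```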


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Fan Yang ◽  
Zhi-Ri Tang ◽  
Jing Chen ◽  
Min Tang ◽  
Shengchun Wang ◽  
...  

Abstract Purpose The objective of this study was to construct a computer-aided diagnosis system that distinguishes pneumoconiosis patients from normal subjects using X-rays and deep learning algorithms. Materials and methods 1760 anonymized digital X-ray images of real patients, acquired between January 2017 and June 2020, were collected for this experiment. To focus the model's feature extraction on the lung region and suppress the influence of external background factors, a two-stage coarse-to-fine pipeline was established. First, a U-Net model was used to segment the lung regions on each side of the collected images. Second, a ResNet-34 model with a transfer learning strategy was trained on the image features extracted from the lung region to achieve accurate classification of pneumoconiosis patients and normal people. Results Among the 1760 cases collected, the accuracy and the area under the curve of the classification model were 92.46% and 89%, respectively. Conclusion The successful application of deep learning to the diagnosis of pneumoconiosis further demonstrates the potential of medical artificial intelligence and proves the effectiveness of our proposed algorithm. However, when we further classified pneumoconiosis patients and normal subjects into four categories, the overall accuracy decreased to 70.1%. We will use the CT modality in future studies to provide more details of the lung regions.
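A rough sketch, under assumed model interfaces, of the two-stage coarse-to-fine pipeline described above: a U-Net lung mask is used to crop the radiograph before a ResNet-34 classifier with ImageNet transfer learning is applied. `unet` is a placeholder for a trained segmentation model, not the authors' code.

```python
# Stage 1: segment lungs with a (assumed, pre-trained) U-Net and crop to them.
# Stage 2: classify the cropped region with a fine-tuned ResNet-34.
import torch
import torch.nn as nn
from torchvision import models

def crop_to_lungs(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Crop a (C, H, W) image to the bounding box of a binary lung mask."""
    ys, xs = torch.where(mask > 0.5)
    return image[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# ResNet-34 with transfer learning and a 2-class head (pneumoconiosis vs normal)
classifier = models.resnet34(weights="IMAGENET1K_V1")
classifier.fc = nn.Linear(classifier.fc.in_features, 2)
classifier.eval()

def predict(image: torch.Tensor, unet: nn.Module) -> torch.Tensor:
    """image: (3, H, W) radiograph; unet: placeholder lung segmentation model."""
    with torch.no_grad():
        mask = torch.sigmoid(unet(image.unsqueeze(0)))[0, 0]     # (H, W) lung probability map
        lungs = crop_to_lungs(image, mask).unsqueeze(0)
        lungs = nn.functional.interpolate(lungs, size=(224, 224))  # resize for ResNet input
        return classifier(lungs)                                  # class logits
```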


2021 ◽  
Author(s):  
Shintaro Sukegawa ◽  
Ai Fujimura ◽  
Akira Taguchi ◽  
Norio Yamamoto ◽  
Akira Kitamura ◽  
...  

Abstract Osteoporosis is becoming a global health issue due to increased life expectancy. However, it is difficult to detect in its early stages owing to a lack of discernible symptoms. Hence, screening for osteoporosis with widely used dental panoramic radiographs would be very cost-effective and useful. In this study, we investigate the use of deep learning to classify osteoporosis from dental panoramic radiographs. In addition, the effect of adding clinical covariate data to the radiographic images on the identification performance was assessed. For objective labeling, a dataset containing 778 images was collected from patients who underwent both skeletal-bone-mineral density measurement and dental panoramic radiography at a single general hospital between 2014 and 2020. Osteoporosis was assessed from the dental panoramic radiographs using convolutional neural network (CNN) models, including EfficientNet-b0, -b3, and -b7 and ResNet-18, -50, and -152. An ensemble model was also constructed with clinical covariates added to each CNN. The ensemble model exhibited improved performance on all metrics for all CNNs, especially accuracy and AUC. The results show that deep learning by CNN can accurately classify osteoporosis from dental panoramic radiographs. Furthermore, it was shown that the accuracy can be improved using an ensemble model with patient covariates.
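A minimal sketch, not the authors' implementation, of the kind of ensemble model described above: one CNN backbone whose pooled image features are concatenated with clinical covariates before the final osteoporosis classifier. The backbone choice, covariate count, and head size are assumptions.

```python
# CNN backbone + clinical covariates fused by concatenation (late fusion).
import torch
import torch.nn as nn
from torchvision import models

class CovariateFusionNet(nn.Module):
    def __init__(self, n_cov: int = 3, n_classes: int = 2):
        super().__init__()
        backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")  # stand-in backbone
        n_img = backbone.classifier[1].in_features                   # 1280 for EfficientNet-b0
        backbone.classifier = nn.Identity()                          # keep pooled image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(n_img + n_cov, 128), nn.ReLU(), nn.Linear(128, n_classes),
        )

    def forward(self, image, covariates):
        feats = self.backbone(image)                            # (B, 1280)
        return self.head(torch.cat([feats, covariates], dim=1))  # fuse image + covariates

model = CovariateFusionNet(n_cov=3)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3))   # dummy forward pass
```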


EBioMedicine ◽  
2021 ◽  
Vol 70 ◽  
pp. 103517
Author(s):  
Vineet K. Raghu ◽  
Michael T. Lu
Keyword(s):  
X Rays ◽  

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework typically requires a large number of parameters, which inevitably imposes a high computational burden on the network. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder–decoder architecture. In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. Then, a feature distillation normalization block is placed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. In addition, an information fusion strategy between distillation modules and feature channels is carried out by an attention mechanism. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with a smaller number of parameters and outperforms existing methods in model complexity.
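A loose, simplified sketch of the two ideas named above, channel-wise feature distillation and attention-based fusion; this is not the LFDN authors' implementation, and the split ratio and layer sizes are assumptions.

```python
# A block that keeps ("distills") part of the channels at each step, refines the
# rest, then fuses everything and re-weights channels with a simple attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                       # per-channel re-weighting

class DistillFusionBlock(nn.Module):
    def __init__(self, channels: int, distill_ratio: float = 0.25):
        super().__init__()
        self.d = int(channels * distill_ratio)      # channels kept at the distillation step
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels - self.d, channels - self.d, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.attn = ChannelAttention(channels)

    def forward(self, x):
        y1 = torch.relu(self.conv1(x))
        kept, rest = y1[:, :self.d], y1[:, self.d:]  # distill / refine split
        y2 = torch.relu(self.conv2(rest))
        fused = self.fuse(torch.cat([kept, y2, x], dim=1))
        return self.attn(fused) + x                  # residual connection
```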


2021 ◽  
pp. 1-11
Author(s):  
Yaning Liu ◽  
Lin Han ◽  
Hexiang Wang ◽  
Bo Yin

Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Many benign thyroid nodules have a papillary structure that is easily confused with PTC morphologically, so pathologists must spend considerable time on the differential diagnosis of PTC and rely on personal diagnostic experience; the process is subjective, and it is difficult to obtain consistency among observers. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method for PTC based on an Inception Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and solve the problem of histological image color inconsistency, a pre-processing module was constructed that included color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining the Inception Network and Residual Network to extract image features. Finally, the SVM was trained on image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in the classification of PTC histological images.
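A minimal sketch of the "CNN features, then SVM" stage described above, with a generic ImageNet-pretrained CNN standing in for the paper's IRCNN; the patch data and shapes are placeholders.

```python
# Deep features extracted by a frozen CNN, then an SVM for the final classification.
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

cnn = models.resnet50(weights="IMAGENET1K_V1")   # stand-in for the IRCNN feature extractor
cnn.fc = torch.nn.Identity()                     # 2048-d feature vector per histology patch
cnn.eval()

def features(batch: torch.Tensor) -> np.ndarray:
    with torch.no_grad():
        return cnn(batch).numpy()

X_train = features(torch.randn(64, 3, 224, 224))  # placeholder histology patches
y_train = np.random.randint(0, 2, size=64)        # 0 = benign, 1 = PTC (placeholder labels)

svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
pred = svm.predict(features(torch.randn(8, 3, 224, 224)))
```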


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Makoto Nishimori ◽  
Kunihiko Kiuchi ◽  
Kunihiro Nishimura ◽  
Kengo Kusano ◽  
Akihiro Yoshida ◽  
...  

Abstract Cardiac accessory pathways (APs) in Wolff–Parkinson–White (WPW) syndrome are conventionally diagnosed with decision tree algorithms; however, these algorithms have problems in clinical use. We assessed the efficacy of an artificial intelligence model that uses electrocardiography (ECG) and chest X-rays to identify the location of APs. We retrospectively used ECG and chest X-rays to analyse 206 patients with WPW syndrome. Each AP location was defined by an electrophysiological study and divided into four classifications. We developed a deep learning model to classify AP locations and compared its accuracy with that of conventional algorithms. Moreover, 1519 chest X-ray samples from other datasets were used for pretraining, and the combined chest X-ray image and ECG data were fed into the model to evaluate whether the accuracy improved. The convolutional neural network (CNN) model using ECG data was significantly more accurate than the conventional tree algorithm. In the multimodal model, which took the combined ECG and chest X-ray data as input, the accuracy was significantly improved. Deep learning with a combination of ECG and chest X-ray data could effectively identify the AP location and may represent a novel multimodal deep learning approach.
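A hedged sketch of a two-branch multimodal classifier in the spirit of the model described above: a CNN branch for the chest X-ray and a small 1-D CNN branch for the ECG, fused before a four-class AP-location head. All architectural details (branch sizes, ECG length, fusion by concatenation) are assumptions, not the authors' architecture.

```python
# Two-branch multimodal network: chest X-ray (2-D CNN) + ECG (1-D CNN), late fusion.
import torch
import torch.nn as nn
from torchvision import models

class APLocationNet(nn.Module):
    def __init__(self, ecg_leads: int = 12, n_classes: int = 4):
        super().__init__()
        cxr = models.resnet18(weights="IMAGENET1K_V1")
        n_img = cxr.fc.in_features                   # 512 for ResNet-18
        cxr.fc = nn.Identity()
        self.cxr_branch = cxr
        self.ecg_branch = nn.Sequential(             # simple 1-D CNN over the ECG samples
            nn.Conv1d(ecg_leads, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(n_img + 64, n_classes)

    def forward(self, cxr_image, ecg_signal):
        img = self.cxr_branch(cxr_image)                      # (B, 512)
        ecg = self.ecg_branch(ecg_signal)                     # (B, 64)
        return self.head(torch.cat([img, ecg], dim=1))        # four AP-location classes

model = APLocationNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 12, 5000))  # dummy inputs
```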

