Internet of Medical Things: An Effective and Fully Automatic IoT Approach Using Deep Learning and Fine-Tuning to Lung CT Segmentation

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6711
Author(s):  
Luís Fabrício de Freitas Souza ◽  
Iágson Carlos Lima Silva ◽  
Adriell Gomes Marques ◽  
Francisco Hércules dos S. Silva ◽  
Virgínia Xavier Nunes ◽  
...  

Several pathologies have a direct impact on society, causing public health problems. Pulmonary diseases are a leading burden: chronic obstructive pulmonary disease (COPD) is already the third leading cause of death in the world, while tuberculosis ranks ninth, with 1.7 million deaths and over 10.4 million new occurrences. The detection of lung regions in images is a classic medical challenge. Studies show that computational methods contribute significantly to the medical diagnosis of lung pathologies by computerized tomography (CT), as well as through Internet of Things (IoT) methods based on the health-of-things context. The present work proposes a new IoT-based model for classification and segmentation of pulmonary CT images, applying the transfer learning technique in deep learning methods combined with Parzen's probability density. The proposed model uses an Application Programming Interface (API) based on the Internet of Medical Things to classify lung images. The approach was very effective, with results above 98% accuracy for classification of pulmonary images. The model then proceeds to the lung segmentation stage, using the Mask R-CNN network to create a pulmonary map and fine-tuning to find the pulmonary borders on the CT image. The experiment was a success: the proposed method performed better than other works in the literature, reaching high segmentation metric values such as an accuracy of 98.34%. Besides reaching 5.43 s in segmentation time and outperforming other transfer learning models, our methodology stands out because it is fully automatic. The proposed approach simplifies the segmentation process using transfer learning and introduces a faster, more effective method for better-performing lung segmentation, making our model fully automatic and robust.
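The abstract pairs deep features with Parzen's probability density. A minimal sketch of how a Parzen-window (kernel density) estimate can score a feature vector against per-class samples is shown below; the function names, Gaussian kernel, and bandwidth are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def parzen_class_score(x, samples, h=1.0):
    """Parzen-window density estimate of x under one class's samples,
    using a Gaussian kernel with bandwidth h."""
    samples = np.asarray(samples, dtype=float)
    d = samples.shape[1]
    diff = samples - x                                    # (n, d) offsets
    norm = (2.0 * np.pi * h ** 2) ** (-d / 2.0)           # Gaussian normalizer
    k = norm * np.exp(-np.sum(diff ** 2, axis=1) / (2.0 * h ** 2))
    return k.mean()                                       # average kernel response

def classify(x, class_samples, h=1.0):
    """Assign x to the class whose Parzen density at x is highest."""
    x = np.asarray(x, dtype=float)
    scores = {c: parzen_class_score(x, s, h) for c, s in class_samples.items()}
    return max(scores, key=scores.get)
```

In a pipeline like the one described, `x` would be a deep feature vector extracted from a CT image and each class's samples would come from labeled training images.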

2019 ◽  
Vol 19 (S8) ◽  
Author(s):  
Chunlei Tang ◽  
Joseph M. Plasek ◽  
Haohan Zhang ◽  
Min-Jeoung Kang ◽  
Haokai Sheng ◽  
...  

Background: Chronic obstructive pulmonary disease (COPD) is a progressive lung disease that is classified into stages based on disease severity. We aimed to characterize the time to progression prior to death in patients with COPD and to generate a temporal visualization that describes signs and symptoms during different stages of COPD progression.
Methods: We present a two-step approach for visualizing COPD progression at the level of unstructured clinical notes. We included 15,500 COPD patients who both received care within Partners Healthcare's network and died between 2011 and 2017. We first propose a four-layer deep learning model that utilizes a specially configured recurrent neural network to capture irregular time lapse segments. Using those irregular time lapse segments, we created a temporal visualization (the COPD atlas) to demonstrate COPD progression, which consisted of representative sentences at each time window prior to death based on a fraction of theme words produced by a latent Dirichlet allocation model. We evaluated our approach on an annotated corpus of COPD patients' unstructured pulmonary, radiology, and cardiology notes.
Results: Experiments compared to the baselines showed that our proposed approach improved interpretability as well as the accuracy of estimating COPD progression.
Conclusions: Our experiments demonstrated that the proposed deep-learning approach to handling temporal variation in COPD progression is feasible and can be used to generate a graphical representation of disease progression using information extracted from clinical notes.
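The "irregular time lapse segments" the COPD atlas is built on can be illustrated with a plain-Python sketch that buckets timestamped notes into uneven windows before death; the window edges and data shapes here are illustrative assumptions, not the paper's configuration:

```python
from datetime import date

def windows_before_death(notes, death_date, edges_days=(30, 180, 365)):
    """Bucket (note_date, text) pairs into irregular time windows prior
    to death: 0-30, 30-180, 180-365, and >365 days before death."""
    edges = list(edges_days)
    buckets = {i: [] for i in range(len(edges) + 1)}
    for note_date, text in notes:
        days = (death_date - note_date).days   # days before death
        for i, edge in enumerate(edges):
            if days <= edge:
                buckets[i].append(text)
                break
        else:                                  # older than the last edge
            buckets[len(edges)].append(text)
    return buckets
```

In the paper's pipeline, the representative sentences for each such window are then chosen with the help of LDA theme words; here only the windowing step is sketched.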


Genes ◽  
2019 ◽  
Vol 10 (10) ◽  
pp. 783 ◽  
Author(s):  
Ozretić ◽  
da Silva Filho ◽  
Catalano ◽  
Sokolović ◽  
Vukić-Dugac ◽  
...  

Chronic obstructive pulmonary disease (COPD) is a chronic disease characterized by a progressive decline in lung function due to airflow limitation, mainly related to IL-1β-induced inflammation. We hypothesized that single nucleotide polymorphisms (SNPs) in NLRP genes, coding for key regulators of IL-1β, are associated with the pathogenesis and clinical phenotypes of COPD. We recruited 704 COPD individuals and 1238 healthy controls for this study. Twenty non-synonymous SNPs in 10 different NLRP genes were genotyped. Genetic associations were estimated using logistic regression, adjusting for age, gender, and smoking history. The impact of genotypes on patients' overall survival was analyzed with the Kaplan–Meier method with the log-rank test. Serum IL-1β concentration was determined by a high-sensitivity assay, and expression analysis was performed by RT-PCR. Decreased lung function, measured by forced expiratory volume in 1 s (FEV1% predicted), was significantly associated with the minor allele genotypes (AT + TT) of NLRP1 rs12150220 (p = 0.0002). The same rs12150220 genotypes exhibited a higher level of serum IL-1β compared to the AA genotype (p = 0.027) in COPD patients. NLRP8 rs306481 minor allele genotypes (AG + AA) were more common in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) definition of group A (p = 0.0083). Polymorphisms in NLRP1 (rs12150220; OR = 0.55, p = 0.03) and NLRP4 (rs12462372; OR = 0.36, p = 0.03) were only nominally associated with COPD risk. In conclusion, coding polymorphisms in NLRP1 rs12150220 show an association with COPD disease severity, indicating that the fine-tuning of the NLRP1 inflammasome could be important in maintaining lung tissue integrity and treating the chronic inflammation of airways.
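The odds ratios quoted above come from logistic regression adjusted for age, gender, and smoking history; as a simpler, unadjusted illustration of the same quantity, an OR and Woolf 95% confidence interval can be computed directly from carrier counts in cases and controls (the counts below are made up, not the study's data):

```python
import math

def odds_ratio(case_carriers, case_noncarriers, ctrl_carriers, ctrl_noncarriers):
    """Unadjusted odds ratio for carrying a minor-allele genotype,
    with a 95% confidence interval via Woolf's (log) method."""
    a, b, c, d = case_carriers, case_noncarriers, ctrl_carriers, ctrl_noncarriers
    or_ = (a * d) / (b * c)                     # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

A confidence interval that spans 1.0, as in the example tested below, is consistent with the "only nominally associated" findings the abstract reports.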


2020 ◽  
Vol 10 (4) ◽  
pp. 213 ◽  
Author(s):  
Ki-Sun Lee ◽  
Jae Young Kim ◽  
Eun-tae Jeon ◽  
Won Suk Choi ◽  
Nan Hee Kim ◽  
...  

According to recent studies, patients with COVID-19 have different feature characteristics on chest X-ray (CXR) than those with other lung diseases. This study aimed to evaluate layer depths and degrees of fine-tuning in transfer learning with a deep convolutional neural network (CNN)-based COVID-19 screening in CXR, in order to identify efficient transfer learning strategies. The CXR images used in this study were collected from publicly available repositories, and the collected images were classified into three classes: COVID-19, pneumonia, and normal. To evaluate the effect of layer depth within the same CNN architecture, the VGG-16 and VGG-19 CNNs were used as backbone networks. Each backbone network was then trained with different degrees of fine-tuning and comparatively evaluated. The experimental results showed the highest AUC value, 0.950, for COVID-19 classification in the experimental group in which only 2 of the 5 blocks of the VGG-16 backbone network were fine-tuned. In conclusion, in the classification of medical images with a limited amount of data, a deeper layer depth may not guarantee better results. In addition, even if the same pre-trained CNN architecture is used, an appropriate degree of fine-tuning can help to build an efficient deep learning model.
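The "degree of fine-tuning" amounts to choosing how many trailing convolutional blocks of the backbone stay trainable while earlier blocks remain frozen. A minimal, framework-agnostic sketch of that bookkeeping (the function name is ours, not from the paper):

```python
def trainable_blocks(n_blocks=5, fine_tune_last=2):
    """Per-block trainable flags for a backbone with n_blocks
    convolutional blocks when only the last `fine_tune_last` blocks
    are fine-tuned; earlier blocks stay frozen."""
    frozen = n_blocks - fine_tune_last
    return [i >= frozen for i in range(n_blocks)]
```

In practice these flags would be applied by setting `layer.trainable = False` on the frozen layers in Keras, or `param.requires_grad = False` in PyTorch, before compiling and training.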


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Guoxin Zhang ◽  
Zengcai Wang ◽  
Lei Zhao ◽  
Yazhou Qi ◽  
Jinshan Wang

This study employs the mechanical vibration and acoustic waves of a hydraulic support tail beam for accurate and fast coal-rock recognition, proposing a diagnosis method based on bimodal deep learning and the Hilbert-Huang transform. The bimodal deep neural network (DNN) adopts bimodal learning and transfer learning. The bimodal learning method attempts to learn a joint representation from the acceleration and sound-pressure modalities, both of which contribute to coal-rock recognition. The transfer learning method addresses a problem with DNNs: a large number of labeled training samples is needed to optimize the parameters, while labeled training samples are limited. A suitable installation location for the sensors is determined for coal-rock recognition. The extracted features of the acceleration and sound-pressure signals are combined, and effective combination features are selected. The bimodal DNN consists of two deep belief networks (DBNs); each DBN model is trained with related samples, and the parameters of the pretrained DBNs are transferred to the final recognition model. The parameters of the proposed model are then further optimized by pretraining and fine-tuning. Finally, a comparison of experimental results demonstrates the superiority of the proposed method in terms of recognition accuracy.
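The step of combining acceleration and sound-pressure features into one representation can be sketched as per-modality standardization followed by concatenation; this is a generic fusion sketch under our own assumptions, not the paper's DBN-based joint representation:

```python
import numpy as np

def fuse_bimodal(accel_feats, sound_feats):
    """Z-score each modality's feature matrix separately (per column),
    then concatenate the two into one joint feature vector per sample."""
    def zscore(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)  # avoid /0
    return np.hstack([zscore(accel_feats), zscore(sound_feats)])
```

Standardizing each modality before fusion keeps one sensor's scale (e.g. acceleration in m/s²) from dominating the other (sound pressure in Pa) in the downstream classifier.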


2018 ◽  
Vol 06 (01) ◽  
pp. 21-31 ◽  
Author(s):  
Toru Kimura ◽  
Takashi Kawakami ◽  
Akihiro Kikuchi ◽  
Ryosuke Ooev ◽  
Masaki Akiyama ◽  
...  

Healthcare ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1579
Author(s):  
Wansuk Choi ◽  
Seoyoon Heo

The purpose of this study was to classify ULTT videos through transfer learning with pre-trained deep learning models and to compare the performance of the models. We conducted transfer learning by combining a pre-trained convolutional neural network (CNN) model into a deep learning process produced in Python. Videos were collected from YouTube, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, the steps of importing the required modules, performing the necessary data preprocessing for training, defining the model, compiling it, creating the model, and fitting it were applied in sequence. The comparative models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and validation loss were measured as comparative indicators of performance. Relatively low validation loss and high validation accuracy were obtained for the Xception, InceptionV3, and DenseNet201 models, which were therefore evaluated as excellent compared with the other models. On the other hand, relatively high validation loss and low validation accuracy were obtained for VGG16, VGG19, and ResNet101 compared with the other models. There was a narrow range of difference between the validation accuracy and the validation loss of the Xception, InceptionV3, and DenseNet201 models. This study suggests that training applied with transfer learning can classify ULTT videos, and that there is a difference in performance between models.
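The model comparison described above reduces to ranking backbones by their validation metrics. A small sketch of that final step (the metric values in the test are hypothetical, not the study's results):

```python
def rank_models(results):
    """Rank comparative models: lower validation loss first,
    with higher validation accuracy breaking ties."""
    return sorted(results, key=lambda r: (r["val_loss"], -r["val_acc"]))
```

Applied to per-model `{"name", "val_loss", "val_acc"}` records collected after training, the best-performing backbone ends up first in the returned list.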

