Deep Learning-Based Available and Common Clinical-Related Feature Variables Robustly Predict Survival in Community-Acquired Pneumonia

2021 ◽  
Vol 14 ◽  
pp. 3701-3709
Author(s):  
Ding-Yun Feng ◽  
Yong Ren ◽  
Mi Zhou ◽  
Xiao-Ling Zou ◽  
Wen-Bin Wu ◽  
...  
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Xiaoguo Zhang ◽  
Dawei Wang ◽  
Jiang Shao ◽  
Song Tian ◽  
Weixiong Tan ◽  
...  

Abstract
Since its first outbreak, Coronavirus Disease 2019 (COVID-19) has spread rapidly worldwide and caused a global pandemic. Rapid and early detection is essential to contain COVID-19. Here, we first developed a deep learning (DL) integrated radiomics model for end-to-end identification of COVID-19 from CT scans and then validated its clinical feasibility. We retrospectively collected CT images of 386 patients (129 with COVID-19 and 257 with other community-acquired pneumonia) from three medical centers to train and externally validate the developed models. A pre-trained DL algorithm was used to automatically segment infected lesions (ROIs) on the CT images, from which features were extracted. Five feature selection methods and four machine learning algorithms were used to develop the radiomics models. Trained with features selected by L1-regularized logistic regression, the multi-layer perceptron (MLP) classifier demonstrated the best performance, with AUCs of 0.922 (95% CI 0.856–0.988) and 0.959 (95% CI 0.910–1.000), the same sensitivity of 0.879, and specificities of 0.900 and 0.887 on the internal and external testing datasets, respectively, which was equivalent to the senior radiologist in a reader study. In addition, the diagnostic time of DL-MLP was shorter than that of the radiologists (38 s vs. 5.15 min). With adequate performance for identifying COVID-19, DL-MLP may help in the screening of suspected cases.
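The selection-plus-classifier design described above can be sketched with scikit-learn: L1-regularized logistic regression zeroes out uninformative features, and an MLP is trained on the survivors. The synthetic data, hyper-parameters, and layer sizes below are illustrative assumptions, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for radiomics features extracted from segmented lesion ROIs.
X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    # L1 penalty drives uninformative feature weights to exactly zero;
    # SelectFromModel keeps only the features with non-zero weights.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```

In a real radiomics study the input matrix would hold texture, shape, and intensity features computed from the segmented ROIs rather than synthetic data.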


Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3452 ◽  
Author(s):  
Xiaoquan Lu ◽  
Yu Zhou ◽  
Zhongdong Wang ◽  
Yongxian Yi ◽  
Longji Feng ◽  
...  

Non-technical losses (NTL) caused by faults or electricity theft are greatly harmful to the power grid. Industrial customers consume most of the electrical energy, so it is important to reduce this part of NTL. Currently, most work concentrates on analyzing characteristics of electricity consumption to detect NTL among residential customers. However, the related feature models cannot be adapted to industrial customers because they do not have a fixed electricity consumption pattern. Therefore, this paper starts from the principle of electricity measurement and proposes a deep learning-based method that extracts advanced features from massive smart meter data rather than hand-crafted features. First, we organize electricity magnitudes as one-dimensional sample data and embed the knowledge of electricity measurement in the channels. Then, we propose a semi-supervised deep learning model that uses a large amount of unlabeled data and an adversarial module to avoid overfitting. The experimental results show that our approach achieves satisfactory performance even when trained on very small samples. Compared with state-of-the-art methods, our method achieves clear improvements on all metrics.
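The first step the abstract describes, organizing electricity magnitudes as one-dimensional samples with measurement knowledge embedded in channels, might look like the sketch below. The channel choices (voltage, current, reported power, plus a derived apparent-power channel) and the sampling interval are assumptions for illustration, not the paper's exact layout.

```python
import numpy as np

def build_sample(voltage, current, power):
    """Stack per-reading time series into a (channels, timesteps) array."""
    sample = np.stack([voltage, current, power])
    # Embed a measurement-principle channel: V*I computed from the raw
    # magnitudes, so a model can compare reported vs. computed power,
    # a mismatch being a classic cue of meter tampering.
    apparent = voltage * current
    return np.concatenate([sample, apparent[None, :]], axis=0)

T = 96  # e.g. one day of 15-minute smart-meter readings
rng = np.random.default_rng(0)
v = 230 + rng.normal(0, 2, T)      # line voltage
i = rng.uniform(5, 50, T)          # load current
p = v * i * 0.95                   # reported active power (pf ~0.95)
x = build_sample(v, i, p)
print(x.shape)  # (4, 96)
```

Each such array is one training sample; a 1-D convolutional network can then learn cross-channel features instead of relying on hand-crafted consumption statistics.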


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Tahereh Javaheri ◽  
Morteza Homayounfar ◽  
Zohreh Amoozgar ◽  
Reza Reiazi ◽  
Fatemeh Homayounieh ◽  
...  

Abstract
Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its accompanied mortality. Currently, detection by reverse transcriptase-polymerase chain reaction (RT-PCR) is the gold standard of outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its accuracy in detection is only ~70–75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80–98%, but similar accuracy of 70%. To enhance the accuracy of CT imaging detection, we developed an open-source framework, CovidCTNet, composed of a set of deep learning algorithms that accurately differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 95% compared to radiologists (70%). CovidCTNet is designed to work with heterogeneous and small sample sizes independent of the CT imaging hardware. To facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and model parameter details as open-source. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership.


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5878 ◽  
Author(s):  
Fares Bougourzi ◽  
Riccardo Contino ◽  
Cosimo Distante ◽  
Abdelmalik Taleb-Ahmed

Since the appearance of the COVID-19 pandemic (at the end of 2019 in Wuhan, China), the recognition of COVID-19 from medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained in the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans into normal, COVID-19, or community-acquired pneumonia (CAP) classes. To this end, we proposed a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we trained four deep learning architectures with a multi-task strategy for slice-level classification. In the second stage, we used the previously trained models with an XGBoost classifier to classify the whole CT scan as normal, COVID-19, or CAP. Our approach achieved good results on the validation set, with an overall accuracy of 87.75% and sensitivities of 96.36%, 52.63%, and 95.83% for COVID-19, CAP, and normal, respectively. On the three SPGC test datasets, our approach placed fifth overall while achieving the best COVID-19 sensitivity, and placed second on two of the three testing sets.
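The two-stage idea above, pooling per-slice predictions into a fixed-length scan descriptor and feeding it to a boosted-tree classifier, can be sketched as follows. scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the pooling statistics, slice counts, and synthetic probabilities are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def scan_features(slice_probs):
    """slice_probs: (n_slices, 3) softmax outputs from the stage-one CNNs,
    pooled into one fixed-length vector per CT scan."""
    return np.concatenate([slice_probs.mean(axis=0),
                           slice_probs.max(axis=0),
                           slice_probs.std(axis=0)])

rng = np.random.default_rng(0)
X, y = [], []
for label in range(3):  # 0 = normal, 1 = COVID-19, 2 = CAP
    for _ in range(20):
        # Synthetic stand-in for 40 slice-level probability vectors,
        # biased toward the scan's true class.
        probs = rng.dirichlet(np.eye(3)[label] * 5 + 1, size=40)
        X.append(scan_features(probs))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
train_acc = clf.score(X, y)
print(f"training accuracy: {train_acc:.2f}")
```

Pooling over slices makes the scan-level feature vector independent of how many slices each CT volume contains, which is what lets a single tree ensemble classify whole scans.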


2020 ◽  
Vol 396 ◽  
pp. 375-382 ◽  
Author(s):  
Xiaofeng Yuan ◽  
Chen Ou ◽  
Yalin Wang ◽  
Chunhua Yang ◽  
Weihua Gui

Author(s):  
Zongsheng Zheng ◽  
Chenyu Hu ◽  
Zhaorong Liu ◽  
Jianbo Hao ◽  
Qian Hou ◽  
...  

Abstract
A tropical cyclone, also known as a typhoon, is one of the most destructive weather phenomena. Its intense cyclonic eddy circulations often cause serious damage to coastal areas. Accurate classification or prediction of typhoon intensity is crucial for disaster warning and mitigation management. However, typhoon intensity-related feature extraction is a challenging task, as it requires significant pre-processing and human intervention for analysis, and its recognition rate is poor due to various physical factors such as tropical disturbance. In this study, we built Typhoon-CNNs, an automatic classification framework for typhoon intensity based on a convolutional neural network (CNN). The Typhoon-CNNs framework uses a cyclical convolution strategy supplemented with dropout zero-setting, which extracts sensitive features of the existing spiral cloud band (SCB) more effectively and reduces overfitting. To further optimize performance, we also proposed an improved activation function (T-ReLU) and an improved loss function (CE-FMCE). The improved Typhoon-CNNs was trained and validated on more than 10,000 multi-sensor satellite cloud images from the National Institute of Informatics. The classification accuracy reached 88.74%, which is 7.43% higher than ResNet50, 10.27% higher than InceptionV3, and 14.71% higher than VGG16. Finally, by visualizing the hierarchic feature maps derived from Typhoon-CNNs, we can easily identify sensitive characteristics such as typhoon eyes, dense-shadowing cloud areas, and SCBs, which facilitates classifying and forecasting typhoon intensity.
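The abstract names an improved activation function, T-ReLU, without giving its definition, so the sketch below is NOT the paper's formula. It only illustrates, in plain NumPy, how a ReLU variant (here a generic thresholded ReLU with a hypothetical cutoff `theta`) is defined and compared against the standard ReLU it replaces.

```python
import numpy as np

def relu(x):
    """Standard rectified linear unit: max(x, 0)."""
    return np.maximum(x, 0.0)

def thresholded_relu(x, theta=1.0):
    """Hypothetical ReLU variant: pass values only above a cutoff theta.
    Illustrative only; the paper's T-ReLU definition is not given here."""
    return np.where(x > theta, x, 0.0)

x = np.array([-2.0, 0.5, 1.5, 3.0])
out_relu = relu(x)               # [0.  0.5 1.5 3. ]
out_trelu = thresholded_relu(x)  # [0.  0.  1.5 3. ]
print(out_relu, out_trelu)
```

In a CNN framework, swapping the activation is a one-line change per layer; the interesting work is choosing a variant whose nonlinearity suppresses background cloud noise while preserving the intensity-relevant responses.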


2020 ◽  
Vol 30 (12) ◽  
pp. 6828-6837 ◽  
Author(s):  
Zhang Li ◽  
Zheng Zhong ◽  
Yang Li ◽  
Tianyu Zhang ◽  
Liangxin Gao ◽  
...  
