Differentiating COVID-19 from other types of pneumonia with convolutional neural networks

Author(s):  
Ilker Ozsahin ◽  
Confidence Onyebuchi ◽  
Boran Sekeroglu

Abstract
INTRODUCTION: A widely used method for diagnosing COVID-19 is the nucleic acid test based on real-time reverse transcriptase-polymerase chain reaction (RT-PCR). However, the sensitivity of real-time RT-PCR tests is low, and it can take up to 8 hours to receive the test results. Radiologic methods can provide higher sensitivity. The aim of this study is to investigate the use of X-ray images and convolutional neural networks for the diagnosis of COVID-19 and to differentiate it from viral and/or bacterial pneumonia, in 2-class (bacterial pneumonia vs COVID-19 and viral pneumonia vs COVID-19) and 3-class (bacterial pneumonia, COVID-19, and healthy group (BCH); viral pneumonia, COVID-19, and healthy group (VCH)) experiments.
METHODS: 225 COVID-19, 1,583 healthy control, 2,780 bacterial pneumonia, and 1,493 viral pneumonia chest X-ray images were used. 2-class and 3-class experiments were performed with different convolutional neural network (ConvNet) architectures, with different variations of convolutional and fully connected layers.
RESULTS: Bacterial pneumonia vs COVID-19 and viral pneumonia vs COVID-19 reached mean ROC AUCs of 97.32% and 96.80%, respectively. In the 3-class experiments, macro-average F1 scores of 95.79% and 94.59% were obtained for detecting COVID-19 in the BCH and VCH groups, respectively.
CONCLUSIONS: The ConvNet was able to distinguish COVID-19 images from non-COVID-19 images, namely bacterial and viral pneumonia as well as normal X-ray images.
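As a sketch of how the reported metrics can be computed, the snippet below evaluates a 2-class experiment with ROC AUC and a 3-class experiment with a macro-average F1 score using scikit-learn; the toy label and score arrays are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# 2-class case: COVID-19 (1) vs bacterial pneumonia (0); scores are
# hypothetical ConvNet outputs, not results from the study.
y_true_2c = np.array([0, 0, 1, 1, 0, 1])
y_score_2c = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.7])
auc = roc_auc_score(y_true_2c, y_score_2c)

# 3-class case: bacterial pneumonia (0), COVID-19 (1), healthy (2).
# The macro average gives each class equal weight regardless of size.
y_true_3c = [0, 1, 2, 0, 1, 2, 1]
y_pred_3c = [0, 1, 2, 0, 1, 1, 1]
macro_f1 = f1_score(y_true_3c, y_pred_3c, average="macro")

print(f"ROC AUC: {auc:.2f}, macro-F1: {macro_f1:.2f}")
```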

2020 ◽  
Vol 12 (3) ◽  
pp. 132-141
Author(s):  
Nator Junior Carvalho da Costa ◽  
Jose Vigno Moura Sousa ◽  
Domingos Bruno Sousa Santos ◽  
Francisco das Chagas Fontenele Marques Junior ◽  
Rodrigo Teixeira de Melo

This paper describes a comparison between three pre-trained neural networks for the classification of chest X-ray images: Xception, Inception V3, and NasNetLarge. The networks were implemented using transfer learning. The database used was the chest X-ray dataset, which contains a total of 5,856 chest X-ray images of pediatric patients aged one to five years in three classes: normal, viral pneumonia, and bacterial pneumonia. The data were divided into three groups: validation, testing, and training. A comparison was made with the work of Kermany, who implemented the Inception V3 network in two settings: (pneumonia vs normal) and (bacterial pneumonia vs viral pneumonia). The networks achieved good accuracy, with NasNetLarge obtaining the best precision: 95.35% (pneumonia vs normal) and 91.79% (viral pneumonia vs bacterial pneumonia), against 92.80% (pneumonia vs normal) and 90.70% (viral pneumonia vs bacterial pneumonia) in Kermany's work. The Xception network also improved in accuracy over Kermany's work, with 93.59% (normal vs pneumonia) and 91.03% (viral pneumonia vs bacterial pneumonia).
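A minimal transfer-learning sketch along the lines described above, assuming a Keras/TensorFlow setup: a pre-trained backbone (Xception here) is frozen as a feature extractor and a new classification head is added for a two-class task. `weights=None` keeps the sketch self-contained with no weight download; a real transfer-learning run would use `weights="imagenet"`. The head layers are illustrative assumptions, not the paper's exact architecture.

```python
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights=None,            # "imagenet" in a real transfer-learning run
    include_top=False,       # drop the original 1000-class ImageNet head
    input_shape=(299, 299, 3),
)
base.trainable = False       # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. viral vs bacterial
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Only the pooling and dense head are updated during training; the frozen backbone supplies the transferred features.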


2020 ◽  
Author(s):  
Antonios Makris ◽  
Ioannis Kontopoulos ◽  
Konstantinos Tserpes

Abstract
The COVID-19 pandemic in 2020 has highlighted the need to pull all available resources towards the mitigation of the devastating effects of such "Black Swan" events. Towards that end, we investigated the option of employing technology to assist the diagnosis of patients infected by the virus. Several state-of-the-art pre-trained convolutional neural networks were evaluated for their ability to detect infected patients from chest X-ray images. A dataset was created as a mix of publicly available X-ray images from patients with confirmed COVID-19, common bacterial pneumonia, and healthy individuals. To mitigate the small number of samples, we employed transfer learning, which transfers knowledge extracted by pre-trained models to the model to be trained. The experimental results demonstrate that classification accuracy can reach 95% for the best two models.


Author(s):  
Muhammad Hanif Ahmad Nizar ◽  
Chow Khuen Chan ◽  
Azira Khalil ◽  
Ahmad Khairuddin Mohamed Yusof ◽  
Khin Wee Lai

Background: Valvular heart disease is a serious disease leading to mortality and increased medical care costs. The aortic valve is the valve most commonly affected by this disease. Doctors rely on echocardiography for diagnosing and evaluating valvular heart disease. However, echocardiographic images are of poorer quality compared with Computerized Tomography and Magnetic Resonance Imaging scans. This study proposes the development of Convolutional Neural Networks (CNN) that can function optimally during a live echocardiographic examination for detection of the aortic valve. An automated detection system in an echocardiogram will improve the accuracy of medical diagnosis and can provide further medical analysis from the resulting detection. Methods: Two detection architectures, Single Shot Multibox Detector (SSD) and Faster Region-based Convolutional Neural Network (R-CNN), with various feature extractors were trained on echocardiography images from 33 patients. Thereafter, the models were tested on 10 echocardiography videos. Results: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD MobileNet v2. In terms of speed, SSD MobileNet v2 incurred a loss of 46.81% in frames per second (fps) during real-time detection but still performed better than the other neural network models. Additionally, SSD MobileNet v2 used the least Graphics Processing Unit (GPU) resources, while Central Processing Unit (CPU) usage was relatively similar across all models. Conclusion: Our findings provide a foundation for implementing a convolutional detection system in echocardiography for medical purposes.
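The fps comparison above can be obtained with a simple timing loop over frames; the `detect` stub below is a hypothetical placeholder for a real detector's per-frame inference call, not part of the study's code.

```python
import time

def detect(frame):
    # Placeholder for a real model's inference on one frame
    # (e.g. SSD MobileNet v2 or Faster R-CNN Inception v2).
    time.sleep(0.001)
    return []

def measure_fps(frames):
    """Return frames processed per second over the given sequence."""
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

fps = measure_fps(range(50))
print(f"{fps:.1f} fps")
```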


2021 ◽  
Vol 11 (1) ◽  
pp. 28
Author(s):  
Ivan Lorencin ◽  
Sandi Baressi Šegota ◽  
Nikola Anđelić ◽  
Anđela Blagojević ◽  
Tijana Šušteršić ◽  
...  

COVID-19 represents one of the greatest challenges in modern history. Its impact is most noticeable in the health care system, mostly due to the accelerated and increased influx of patients with a more severe clinical picture. These facts increase the pressure on health systems. For this reason, the aim is to automate the process of diagnosis and treatment. The research presented in this article examined the possibility of classifying the clinical picture of a patient using X-ray images and convolutional neural networks. The research was conducted on a dataset of 185 images divided into four classes. Due to the small number of images, a data augmentation procedure was performed. In order to define the CNN architecture with the highest classification performance, multiple CNNs were designed. Results show that the best classification performance is achieved with ResNet152. This CNN achieved macro- and micro-averaged AUC values (AUC_macro and AUC_micro) of up to 0.94, suggesting the possibility of applying CNNs to the classification of the clinical picture of COVID-19 patients using X-ray images of the lungs. When higher layers are frozen during the training procedure, higher AUC_macro and AUC_micro values are achieved. If ResNet152 is utilized, AUC_macro and AUC_micro values of up to 0.96 are achieved when all layers except the last 12 are frozen during training.
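The layer-freezing scheme described above can be sketched as follows, assuming a Keras/TensorFlow setup: all ResNet152 layers except the last 12 are frozen before training. `weights=None` avoids a weight download and keeps the sketch self-contained; the study's setting corresponds to starting from pre-trained weights. The 4-class head matches the dataset's four classes, but its exact layers are an assumption.

```python
import tensorflow as tf

base = tf.keras.applications.ResNet152(
    weights=None,            # pre-trained weights in a real fine-tuning run
    include_top=False,
    input_shape=(224, 224, 3),
)

# Freeze everything except the last 12 layers, which remain trainable.
for layer in base.layers[:-12]:
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # four clinical classes
])
```

Freezing the lower layers preserves generic features learned during pre-training, while the unfrozen top layers adapt to the X-ray domain.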


Author(s):  
Biluo Shen ◽  
Zhe Zhang ◽  
Xiaojing Shi ◽  
Caiguang Cao ◽  
Zeyu Zhang ◽  
...  

Abstract
Purpose: Surgery is the predominant treatment modality for human glioma, but clearly identifying tumor boundaries in the clinic remains difficult. Conventional practice involves the neurosurgeon's visual evaluation and intraoperative histological examination of dissected tissues using frozen sections, which is time-consuming and complex. The aim of this study was to develop fluorescence imaging coupled with an artificial intelligence technique to quickly and accurately determine glioma in real time during surgery.
Methods: Glioma patients (N = 23) were enrolled and injected with indocyanine green for fluorescence image-guided surgery. Tissue samples (N = 1874) were harvested during surgery of these patients, and fluorescence images in the second near-infrared window (NIR-II, 1000–1700 nm) were obtained. Deep convolutional neural networks (CNNs) combined with NIR-II fluorescence imaging (named FL-CNN) were explored to automatically provide a pathological diagnosis of glioma in situ in real time during patient surgery. The pathological examination results were used as the gold standard.
Results: The developed FL-CNN achieved an area under the curve (AUC) of 0.945. Compared to neurosurgeons' judgment, at the same level of specificity (>80%), FL-CNN achieved a much higher sensitivity (93.8% versus 82.0%, P < 0.001) with zero time overhead. Further experiments demonstrated that FL-CNN corrected >70% of the errors made by neurosurgeons. FL-CNN was also able to rapidly predict the grade and Ki-67 level (AUC 0.810 and 0.625) of tumor specimens intraoperatively.
Conclusion: Our study demonstrates that deep CNNs are better at capturing important information from fluorescence images than surgeons' evaluation during patient surgery. FL-CNN is highly promising for providing pathological diagnosis intraoperatively and assisting neurosurgeons to achieve maximum safe resection.
Trial registration: ChiCTR ChiCTR2000029402. Registered 29 January 2020, retrospectively registered.


Author(s):  
Naoki Matsumura ◽  
Yasuaki Ito ◽  
Koji Nakano ◽  
Akihiko Kasagi ◽  
Tsuguchika Tabaru

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2005
Author(s):  
Veronika Scholz ◽  
Peter Winkler ◽  
Andreas Hornig ◽  
Maik Gude ◽  
Angelos Filippatos

Damage identification of composite structures is a major ongoing challenge for a secure operational life-cycle due to the complex, gradual damage behaviour of composite materials. Especially for composite rotors in aero-engines and wind turbines, cost-intensive maintenance has to be performed in order to avoid critical failure. A major advantage of composite structures is that they are able to operate safely after damage initiation and under ongoing damage propagation. Therefore, a robust, efficient diagnostic damage identification method would allow monitoring of the damage process, with intervention occurring only when necessary. This study investigates the structural vibration response of composite rotors and the ability of machine learning methods to identify, localise and quantify the present damage. To this end, multiple fully connected neural networks and convolutional neural networks were trained on vibration response spectra from damaged composite rotors with barely visible damage, mostly matrix cracks and local delaminations, using dimensionality reduction and data augmentation. A databank containing 720 simulated test cases with different damage states is used as the basis for the generation of multiple data sets. The trained models are tested using k-fold cross-validation and evaluated based on sensitivity, specificity and accuracy. Convolutional neural networks perform slightly better, providing an accuracy of up to 99.3% for damage localisation and quantification.
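As a sketch of the evaluation protocol above, the snippet below runs 5-fold cross-validation and reports mean sensitivity, specificity and accuracy. The synthetic two-class data and the logistic-regression stand-in are illustrative assumptions; the study trained fully connected and convolutional networks on vibration spectra.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Synthetic stand-in data: 200 samples, 10 features, binary label
# ("damaged" vs "intact") driven mainly by the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

sens, spec, acc = [], [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    tn, fp, fn, tp = confusion_matrix(
        y[test_idx], clf.predict(X[test_idx])).ravel()
    sens.append(tp / (tp + fn))          # true positive rate
    spec.append(tn / (tn + fp))          # true negative rate
    acc.append((tp + tn) / (tp + tn + fp + fn))

print(f"sensitivity={np.mean(sens):.2f} "
      f"specificity={np.mean(spec):.2f} accuracy={np.mean(acc):.2f}")
```

Each fold holds out 20% of the samples for testing, so every sample is used for evaluation exactly once across the five folds.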

