Deep Learning for COVID-19 Diagnosis from CT Images

2021 ◽  
Vol 11 (17) ◽  
pp. 8227 ◽  
Author(s):  
Andrea Loddo ◽  
Fabio Pili ◽  
Cecilia Di Ruberto

COVID-19, an infectious coronavirus disease, caused a pandemic with countless deaths. From the outset, clinical institutes have explored computed tomography as an effective and complementary screening tool alongside the reverse transcriptase-polymerase chain reaction. Deep learning techniques have shown promising results in similar medical tasks and, hence, may provide solutions to COVID-19 based on medical images of patients. We aim to contribute to the research in this field by: (i) comparing different architectures on a public and extended reference dataset to find the most suitable; (ii) proposing a patient-oriented investigation of the best performing networks; and (iii) evaluating their robustness in a real-world scenario, represented by cross-dataset experiments. We exploited ten well-known convolutional neural networks on two public datasets. The results show that, on the reference dataset, the most suitable architecture is VGG19, which (i) achieved 98.87% accuracy in the network comparison; (ii) obtained 95.91% accuracy on the patient status classification, even though it misclassifies some patients that other networks classify correctly; and (iii) reached only 70.15% accuracy in the cross-dataset experiments, exposing the limitations of deep learning approaches in a real-world scenario and the need for further work on robustness. Thus, the VGG19 architecture showed promising performance in the classification of COVID-19 cases. Nonetheless, it leaves room for substantial improvement, for example through architectural modifications or an additional preprocessing step. Finally, the cross-dataset experiments exposed a critical weakness in classifying images from heterogeneous data sources, a setting representative of real-world use.
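
To make the network-comparison setup concrete, the following is a minimal sketch (not the authors' code) of fine-tuning an ImageNet-pretrained VGG19 for binary COVID/non-COVID CT classification with PyTorch; the dataset path, class layout, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained VGG19 for binary
# COVID / non-COVID CT classification. Paths and hyperparameters are
# placeholders, not the configuration used in the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed layout: ct_dataset/train/<class_name>/<image>.png
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("ct_dataset/train", transform=tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.vgg19(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)   # two classes: COVID / non-COVID
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):                     # short fine-tuning run
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same loop can be repeated over the other nine backbones to reproduce the kind of architecture comparison described above.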

2019 ◽  
Vol 28 (01) ◽  
pp. 102-114 ◽  
Author(s):  
Christoph Hoog Antink ◽  
Simon Lyra ◽  
Michael Paul ◽  
Xinchi Yu ◽  
Steffen Leonhardt

Objectives: Camera-based vital sign estimation allows the contactless assessment of important physiological parameters. Seminal contributions were made in the 1930s, 1980s, and 2000s, and the speed of development seems ever increasing. In this survey, we aim to review the most recent works in this area, describe their common features as well as shortcomings, and highlight interesting “outliers”. Methods: We performed a comprehensive literature search and quantitative analysis of papers published between 2016 and 2018. Quantitative information about the number of subjects, studies with healthy volunteers vs. pathological conditions, public datasets, laboratory vs. real-world works, types of camera, usage of machine learning, and spectral properties of data was extracted. Moreover, a qualitative analysis of the illumination used and of recent advances in algorithmic development was also performed. Results: Since 2016, 116 papers were published on camera-based vital sign estimation and 59% of papers presented results on 20 or fewer subjects. While the average number of participants increased from 15.7 in 2016 to 22.9 in 2018, the vast majority of papers (n=100) were on healthy subjects. Four public datasets were used in 10 publications. We found 27 papers whose application scenario could be considered a real-world use case, such as monitoring during exercise or driving. These include 16 papers that dealt with non-healthy subjects. The majority of papers (n=61) presented results based on visual, red-green-blue (RGB) information, followed by RGB combined with other parts of the electromagnetic spectrum (n=18), and thermography only (n=12), while other works (n=25) used other mono- or polychromatic non-RGB data. Surprisingly, only a minority of publications (n=39) made use of consumer-grade equipment. Lighting conditions were primarily uncontrolled or ambient. While some works focused on specialized aspects such as the removal of vital sign information from video streams to protect privacy or the influence of video compression, most algorithmic developments were related to three areas: region of interest selection, tracking, or extraction of a one-dimensional signal. Seven papers used deep learning techniques, 17 papers used other machine learning approaches, and 92 made no explicit use of machine learning. Conclusion: Although some general trends and frequent shortcomings are obvious, the spectrum of publications related to camera-based vital sign estimation is broad. While many creative solutions and unique approaches exist, the lack of standardization hinders comparability of these techniques and of their performance. We believe that sharing algorithms and/or datasets will alleviate this and would allow the application of newer techniques such as deep learning.
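
To illustrate the "extraction of a one-dimensional signal" step that the survey identifies as a common algorithmic building block, here is a minimal sketch under simplifying assumptions (a pre-selected, pre-tracked ROI and a synthetic video array); it is not taken from any of the surveyed papers.

```python
# Hedged sketch: average the green channel over an ROI to obtain a 1D signal,
# then estimate pulse rate from the dominant spectral peak.
import numpy as np

def estimate_pulse_rate(frames, roi, fps):
    """frames: (T, H, W, 3) RGB array; roi: (y0, y1, x0, x1); fps: frame rate."""
    y0, y1, x0, x1 = roi
    # One-dimensional signal: mean green intensity inside the ROI per frame.
    signal = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()                 # remove DC component

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # ~42-240 bpm plausibility band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                              # beats per minute

# Synthetic example: 10 s of 30 fps "video" carrying a 1.2 Hz (72 bpm) pulse.
t = np.arange(300) / 30.0
fake = 128 + 2 * np.sin(2 * np.pi * 1.2 * t)
frames = np.tile(fake[:, None, None, None], (1, 20, 20, 3))
print(estimate_pulse_rate(frames, (0, 20, 0, 20), fps=30))   # ≈ 72
```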


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Robbie Sadre ◽  
Baskaran Sundaram ◽  
Sharmila Majumdar ◽  
Daniela Ushizima

The new coronavirus unleashed a worldwide pandemic in early 2020, with a fatality rate several times that of the flu. As the number of infections soared and testing capabilities lagged behind, chest X-ray (CXR) imaging became more relevant in the early diagnosis and treatment planning of patients with suspected or confirmed COVID-19 infection. Within a few weeks, new methods for lung screening using deep learning were proposed in rapid succession, while quality assurance discussions lagged behind. This paper proposes a set of protocols to validate deep learning algorithms, including our ROI Hide-and-Seek protocol, which emphasizes or hides key regions of interest from CXR data. Our protocol allows assessing the classification performance for anomaly detection and its correlation to radiological signatures, an important issue overlooked in several deep learning approaches proposed so far. By running a set of systematic tests over CXR representations using public image datasets, we demonstrate the weaknesses of current techniques and offer perspectives on the advantages and limitations of automated radiography analysis when using heterogeneous data sources.
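
As a rough illustration of the general idea behind an ROI "hide-and-seek" style test (not the authors' exact protocol), the sketch below masks out, or keeps only, a rectangular region of a CXR before scoring; the classifier interface mentioned in the comments is a hypothetical assumption.

```python
# Hedged sketch: emphasize or hide a region of interest in a CXR array and
# compare classifier scores on the two variants.
import numpy as np

def hide_roi(image, roi, fill=0.0):
    """Zero out a rectangular ROI (y0, y1, x0, x1) of a 2D CXR array."""
    out = image.copy()
    y0, y1, x0, x1 = roi
    out[y0:y1, x0:x1] = fill
    return out

def keep_only_roi(image, roi, fill=0.0):
    """Keep only the ROI, masking everything outside it."""
    out = np.full_like(image, fill)
    y0, y1, x0, x1 = roi
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1]
    return out

# With a trained classifier exposing predict_proba(image) -> anomaly score
# (an assumed interface), a large score drop when the lung fields are hidden,
# and little change when only the lung fields are kept, suggests the model
# relies on radiologically meaningful regions rather than on artefacts
# outside the anatomy.
```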


2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence currently proposes that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent research studies to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we describe various investigations that use deep learning algorithms to establish AD prediction from genomics or neuroimaging data. In particular, we delineate relevant integrative neuroimaging genomics investigations that leverage deep learning methods to forecast AD on the basis of both neuroimaging and genomics data. Moreover, we outline the limitations of recent AD investigations that apply deep learning to neuroimaging and genomics. Finally, we discuss challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.
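
For readers unfamiliar with the integrative models the review covers, the following is a minimal sketch of one common pattern, a late-fusion network that combines neuroimaging-derived features with genomic features for AD vs. control classification; the feature dimensions and layer sizes are illustrative assumptions, not drawn from any specific study.

```python
# Hedged sketch: late fusion of neuroimaging and genomic feature vectors
# for binary AD vs. normal-control classification.
import torch
import torch.nn as nn

class FusionAD(nn.Module):
    def __init__(self, n_imaging=256, n_genomic=128):
        super().__init__()
        self.imaging_branch = nn.Sequential(nn.Linear(n_imaging, 64), nn.ReLU())
        self.genomic_branch = nn.Sequential(nn.Linear(n_genomic, 64), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, imaging, genomic):
        fused = torch.cat([self.imaging_branch(imaging),
                           self.genomic_branch(genomic)], dim=1)
        return self.head(fused)                  # logits: AD vs. normal control

model = FusionAD()
logits = model(torch.randn(8, 256), torch.randn(8, 128))   # batch of 8 subjects
```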


2021 ◽  
Vol 11 (11) ◽  
pp. 4753
Author(s):  
Gen Ye ◽  
Chen Du ◽  
Tong Lin ◽  
Yan Yan ◽  
Jack Jiang

(1) Background: Deep learning has become ubiquitous due to its impressive performance in domains as varied as computer vision, natural language and speech processing, and game playing. In this work, we investigated the performance of recent deep learning approaches on the laryngopharyngeal reflux (LPR) diagnosis task. (2) Methods: Our dataset is composed of 114 subjects, with 37 pH-positive cases and 77 control cases. In contrast to prior work based on either the reflux finding score (RFS) or pH monitoring, we directly take laryngoscope images as inputs to neural networks, as laryngoscopy is the most common and simple diagnostic method. The diagnosis task is formulated as a binary classification problem. We first tested a powerful backbone network that incorporates residual modules, an attention mechanism, and data augmentation. Furthermore, recent methods in transfer learning and few-shot learning were investigated. (3) Results: On our dataset, the best test classification accuracy is 73.4%, while the best AUC value is 76.2%. (4) Conclusions: This study demonstrates that deep learning techniques can be applied to classify LPR images automatically. Although the number of pH-positive images available for training is limited, deep networks are still capable of learning discriminative features with the help of these techniques.
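
The transfer-learning setup described here can be sketched roughly as follows (not the authors' code): an ImageNet-pretrained residual backbone with its final layer replaced for binary pH-positive vs. control classification, plus simple augmentation. Paths, weights, and hyperparameters are placeholders.

```python
# Hedged sketch: transfer learning for binary LPR classification from
# laryngoscope images, with data augmentation and a frozen backbone.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
# Assumed layout: lpr_images/train/<ph_positive|control>/<image>.png
train_ds = datasets.ImageFolder("lpr_images/train", transform=augment)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet50(weights="DEFAULT")
for p in model.parameters():                 # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # pH-positive vs. control

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Freezing the backbone and training only the head is one common way to cope with the small number of pH-positive images mentioned in the conclusions.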


Author(s):  
Bosede Iyiade Edwards ◽  
Nosiba Hisham Osman Khougali ◽  
Adrian David Cheok

With the recent focus on deep neural network architectures for the development of computer-aided diagnosis (CAD) algorithms, we provide a review of studies within the last 3 years (2015-2017) reported in selected top journals and conferences. Twenty-nine studies that met our inclusion criteria were reviewed to identify trends in this field and to inform future development. Studies have focused mostly on cancer-related diseases within internal medicine, while diseases within gender-/age-focused fields such as gynaecology and pediatrics have not received much attention. All reviewed studies employed image datasets, mostly sourced from publicly available databases (55.2%), with fewer based on data from human subjects (31%) and non-medical datasets (13.8%), while CNN architectures were employed in most (70%) of the studies. Confirmation of the effect of data manipulation on output quality and the adoption of multi-class rather than binary classification also require more focus. Future studies should leverage collaborations with medical experts to support actual clinical testing, with reporting based on a generally applicable index to enable comparison. We also highlight our next steps toward CAD development for osteoarthritis (OA), which include multi-class classification and comparisons across deep learning approaches and unsupervised architectures.


Author(s):  
Wolfram Höpken ◽  
Matthias Fuchs ◽  
Maria Lexhagen

The objective of this chapter is to address the above deficiencies in tourism by presenting the concept of the tourism knowledge destination – a specific knowledge management architecture that supports value creation through enhanced supplier interaction and decision making. Information from heterogeneous data sources, categorized into explicit feedback (e.g. tourist surveys, user ratings) and implicit information traces (navigation, transaction and tracking data), is extracted by applying semantic mapping, wrappers or text mining (Lau et al., 2005). Extracted data are stored in a central data warehouse, enabling a destination-wide and all-stakeholder-encompassing data analysis approach. By using machine learning techniques, interesting patterns are detected and knowledge is generated in the form of validated models (e.g. decision trees, neural networks, association rules, clustering models). These models, together with the underlying data (in the case of exploratory data analysis), are interactively visualized and made accessible to destination stakeholders.
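
A minimal sketch of the pattern-detection step described above, under the assumption of a small warehouse extract with hypothetical column names, might look as follows; it illustrates clustering guest profiles into segments rather than any model actually built for the knowledge destination.

```python
# Hedged sketch: clustering guest profiles from a (hypothetical) destination
# data warehouse extract into segments with scikit-learn.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Explicit feedback (survey scores, ratings) joined with implicit traces
# (web navigation, booking transactions) per guest; values are synthetic.
guests = pd.DataFrame({
    "satisfaction_score": [4.5, 2.0, 4.8, 3.1, 4.9, 1.8],
    "avg_rating_given":   [4.2, 2.5, 4.6, 3.0, 4.8, 2.1],
    "pages_viewed":       [30, 5, 42, 12, 38, 4],
    "total_spend_eur":    [820, 150, 990, 310, 1050, 120],
})

X = StandardScaler().fit_transform(guests)
guests["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(guests.groupby("segment").mean())      # interpretable segment profiles
```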


2022 ◽  
pp. 27-50
Author(s):  
Rajalaxmi Prabhu B. ◽  
Seema S.

A lot of user-generated data is available these days from large platforms, blogs, websites, and other review sites. These data are usually unstructured, and automatically analyzing the sentiments they express is considered an important challenge. Several machine learning algorithms have been implemented to extract opinions from large data sets, and a great deal of research has gone into understanding machine learning approaches to sentiment analysis. Machine learning mainly depends on the data required for model building, and hence suitable feature extraction techniques also need to be applied. In this chapter, several deep learning approaches, their challenges, and future issues will be addressed. Deep learning techniques are considered important in predicting the sentiments of users. This chapter aims to analyze deep learning techniques for predicting sentiments and to show the importance of several approaches for mining opinions and determining sentiment polarity.
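
As a concrete example of the kind of model surveyed in this chapter, here is a minimal sketch of a deep learning sentiment classifier, token embeddings pooled by an LSTM followed by a linear polarity head; the vocabulary size, dimensions, and tokenization are illustrative assumptions, not the chapter's setup.

```python
# Hedged sketch: an embedding + LSTM sentiment-polarity classifier.
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)      # negative / positive

    def forward(self, token_ids):                # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        return self.out(hidden[-1])              # polarity logits per review

model = SentimentLSTM()
batch = torch.randint(1, 10_000, (4, 20))        # 4 tokenised reviews, length 20
print(model(batch).shape)                        # torch.Size([4, 2])
```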


IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 551-604
Author(s):  
Damien Warren Fernando ◽  
Nikos Komninos ◽  
Thomas Chen

This survey investigates the contributions of research into the detection of ransomware malware using machine learning and deep learning algorithms. The main motivations for this study are the destructive nature of ransomware, the difficulty of reversing a ransomware infection, and the importance of detecting it before it infects a system. Machine learning is coming to the forefront of combatting ransomware, so we attempted to identify weaknesses in machine learning approaches and how they can be strengthened. The threat posed by ransomware is exceptionally high, with new variants and families continually being found on the internet and dark web. Recovering from ransomware infections is difficult, given the nature of the encryption schemes they use. The increase in the use of artificial intelligence also coincides with this boom in ransomware. Machine learning and deep learning approaches to ransomware detection are of high interest because they can detect zero-day threats: these techniques can generate predictive models that learn the behaviour of ransomware and use this knowledge to detect variants and families which have not yet been seen. In this survey, we review prominent research studies which all showcase a machine learning or deep learning approach to detecting ransomware malware. These studies were chosen based on the number of citations they received from other research. We carried out experiments to investigate how the discussed studies are impacted by malware evolution. We also explored the new directions of ransomware and how we expect it to evolve in the coming years, such as expansion into the IoT (Internet of Things), as IoT devices are integrated more into infrastructures and homes.
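
The behaviour-based detection idea the survey discusses can be sketched roughly as follows: a classifier trained on runtime behavioural features so that unseen variants with similar behaviour can still be flagged. The feature names and data below are illustrative assumptions, not drawn from any of the surveyed studies.

```python
# Hedged sketch: behaviour-based ransomware detection with a random forest
# trained on synthetic stand-in features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Features: [files_renamed_per_s, write_entropy, crypto_API_calls, shadow_copy_deletions]
benign = rng.normal([2, 4.0, 5, 0], [1, 0.5, 3, 0.1], size=(200, 4))
ransom = rng.normal([40, 7.5, 60, 1], [10, 0.3, 20, 0.2], size=(200, 4))
X = np.vstack([benign, ransom])
y = np.array([0] * 200 + [1] * 200)              # 0 = benign, 1 = ransomware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

Because the model learns behavioural regularities rather than file signatures, it can in principle generalise to variants it has never seen, which is the zero-day argument made above.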


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2984
Author(s):  
Yue Mu ◽  
Tai-Shen Chen ◽  
Seishi Ninomiya ◽  
Wei Guo

Automatic detection of intact tomatoes on plants is highly desirable for low-cost and optimal management in tomato farming. Mature tomato detection has been widely studied, while immature tomato detection, which is more important for long-term yield prediction, is difficult to perform with traditional image analysis, especially when the fruit is occluded by leaves. Therefore, tomato detection that generalizes well to real tomato cultivation scenes and is robust to issues such as fruit occlusion and variable lighting conditions is highly desired. In this study, we build a tomato detection model to automatically detect intact green tomatoes regardless of occlusion or fruit growth stage using deep learning approaches. The model uses a Faster Region-based Convolutional Neural Network (Faster R-CNN) with a ResNet-101 backbone, transfer learned from the Common Objects in Context (COCO) dataset. Detection on the test dataset achieved a high average precision of 87.83% (intersection over union ≥ 0.5) and showed a high accuracy of tomato counting (R2 = 0.87). In addition, all the detected boxes were merged into one image to compile a tomato location map and estimate fruit size along one row in the greenhouse. Through tomato detection, counting, location and size estimation, this method shows great potential for ripeness and yield prediction.
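
A rough sketch of this detection setup is shown below using torchvision's COCO-pretrained Faster R-CNN. Note that torchvision's stock model uses a ResNet-50 FPN backbone, used here only as a stand-in for the paper's ResNet-101; the input image, confidence threshold, and counting step are illustrative placeholders.

```python
# Hedged sketch: COCO-pretrained Faster R-CNN adapted to a single "tomato"
# class, followed by thresholded counting of detections.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")       # transfer from COCO
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # background + tomato

# After fine-tuning on annotated greenhouse images, inference and counting:
model.eval()
with torch.no_grad():
    image = torch.rand(3, 600, 800)                      # placeholder RGB tensor
    pred = model([image])[0]
    keep = pred["scores"] > 0.5                          # confidence threshold
    tomato_count = int(keep.sum())
    boxes = pred["boxes"][keep]                          # (x1, y1, x2, y2) per fruit
```

Merging the kept boxes from consecutive images along a row would then yield the kind of location map and size estimates described above.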


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5213 ◽  
Author(s):  
Donato Impedovo ◽  
Fabrizio Balducci ◽  
Vincenzo Dentamaro ◽  
Giuseppe Pirlo

Automatic traffic flow classification is useful for revealing road congestion and accidents. Nowadays, roads and highways are equipped with a huge number of surveillance cameras, which can be used for real-time vehicle identification and thus for traffic flow estimation. This research provides a comparative analysis of state-of-the-art object detectors, visual features, and classification models useful for implementing traffic state estimation. More specifically, three different object detectors are compared to identify vehicles. Four machine learning techniques are then employed to explore five visual features for classification purposes. These classic machine learning approaches are compared with deep learning techniques. This research demonstrates that, when methods and resources are properly implemented and tested, results are very encouraging for both approaches, with the deep learning method performing most accurately, reaching an accuracy of 99.9% for binary traffic state classification and 98.6% for multiclass classification.
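
To make the classic-ML side of this comparison concrete, here is a minimal sketch in which per-segment visual features (the feature names are illustrative assumptions, e.g. a vehicle count from a detector plus a tracking-derived displacement) are fed to a standard classifier for binary traffic state classification; it is not the authors' pipeline.

```python
# Hedged sketch: SVM on detector-derived visual features for binary
# free-flow vs. congested classification, using synthetic stand-in data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Features: [vehicles_per_frame, mean_pixel_displacement, lane_occupancy_ratio]
free_flow = rng.normal([8, 12.0, 0.2], [3, 2.0, 0.05], size=(150, 3))
congested = rng.normal([30, 2.0, 0.7], [5, 1.0, 0.10], size=(150, 3))
X = np.vstack([free_flow, congested])
y = np.array([0] * 150 + [1] * 150)          # 0 = free flow, 1 = congested

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict([[28, 2.5, 0.65]]))        # expected: congested (1)
```

A deep learning counterpart would typically operate on the raw frames end-to-end rather than on hand-crafted features, which is the trade-off the comparison above explores.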

