Automated Chest Radiographs Triage Reading by a Deep Learning Referee Network

Author(s):  
Rafael Lopez-Gonzalez ◽  
Jose Sanchez-Garcia ◽  
Belen Fos-Guarinos ◽  
Fabio Garcia-Castro ◽  
Angel Alberich-Bayarri ◽  
...  

Chest radiographs are often obtained as a screening tool for early diagnosis, mainly to rule out abnormalities related to cardiovascular and respiratory diseases. Reading and reporting numerous chest radiographs is a complex and time-consuming task. This research proposes and evaluates a deep learning (DL) approach based on convolutional neural networks (CNNs) combined with a referee fully connected neural network as a computer-aided diagnosis tool for chest X-ray triage and worklist prioritization. The CNN models were trained on a combination of three large-scale databases: ChestX-ray14, CheXpert, and PadChest. The final database contained 327,176 images labeled with findings obtained by natural language processing (NLP) techniques applied to the radiology reports. The dataset was split into 16 balanced binary partitions, which were used to train 16 finding-specific classification CNNs. Afterwards, a normal vs. abnormal partition of the dataset was created, where abnormal denotes the presence of at least one pathologic change. This final partition was used to train a fully connected neural network as a referee, fed with the outputs of the 16 previously trained classifiers. Area Under the Curve (AUC) analysis was used to evaluate and compare the performance of the models. The system was successfully implemented and evaluated on a test set of 3,400 images. The AUC of the normal vs. abnormal classification was 0.94. The highest AUC among the finding-specific classifiers was 0.99, for hernia. The proposed system can assist radiologists in identifying abnormal exams, enabling a time-efficient triage approach.
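A minimal sketch of how such a referee network might be wired, assuming the 16 finding-specific CNN probabilities are concatenated into a 16-dimensional input vector; the layer sizes and training details are illustrative, not the authors' exact configuration:

```python
# Sketch (Keras, assumed architecture): a "referee" fully connected network
# that takes the 16 finding-specific CNN probabilities as input and outputs
# a normal-vs-abnormal score. Layer widths are placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_findings = 16  # one probability per finding-specific CNN

referee = keras.Sequential([
    layers.Input(shape=(n_findings,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(abnormal)
])
referee.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])

# cnn_scores: per-image vector of the 16 CNN outputs; labels: 0 = normal, 1 = abnormal
cnn_scores = np.random.rand(128, n_findings)     # placeholder features
labels = np.random.randint(0, 2, size=(128, 1))  # placeholder labels
referee.fit(cnn_scores, labels, epochs=5, batch_size=32)
```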

2021 ◽  
Vol 4 (2) ◽  
pp. 147-153
Author(s):  
Vina Ayumi ◽  
Ida Nurhaida

Early detection of patients with indications of COVID-19 symptoms is needed to reduce the spread of the virus. One way to detect the COVID-19 virus is to study chest X-ray images of patients with COVID-19 symptoms. Chest X-ray images are considered capable of depicting the lung condition of COVID-19 patients and can serve as an aid to clinical diagnosis. This study proposes a deep learning approach based on a convolutional neural network (CNN) for classifying COVID-19 symptoms from chest X-ray images. The performance of the proposed method is evaluated using accuracy, precision, recall, F1-score, and Cohen's kappa. The study uses a CNN model with two convolution-and-max-pooling layers and a fully connected output layer. The parameters used include batch_size = 32, epochs = 50, and learning_rate = 0.001, with the Adam optimizer. The best validation accuracy (val_acc) of 0.9606 was obtained at epoch 49, with a validation loss (val_loss) of 0.1471, a training accuracy (acc) of 0.9405, and a training loss (loss) of 0.2558.
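The stated architecture and hyperparameters map directly onto a short Keras sketch; the filter counts and input size below are assumptions, since the abstract does not specify them:

```python
# Sketch (Keras) of the described model: two convolution + max-pooling blocks
# followed by a fully connected output layer, trained with Adam at the stated
# hyperparameters. Filter counts and the input size are assumed.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(150, 150, 1)),      # assumed grayscale input size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs non-COVID-19
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_labels, batch_size=32, epochs=50,
#           validation_data=(val_images, val_labels))
```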


2021 ◽  
Vol 45 (10) ◽  
Author(s):  
A. W. Olthof ◽  
P. M. A. van Ooijen ◽  
L. J. Cornelissen

In radiology, natural language processing (NLP) allows the extraction of valuable information from radiology reports. It can be used for various downstream tasks such as quality improvement, epidemiological research, and monitoring guideline adherence. Class imbalance, variation in dataset size, variation in report complexity, and algorithm type all influence NLP performance but have not yet been systematically and interrelatedly evaluated. In this study, we investigate the influence of these factors on the performance of four types of deep learning-based NLP models: a fully connected neural network (Dense), a long short-term memory recurrent neural network (LSTM), a convolutional neural network (CNN), and Bidirectional Encoder Representations from Transformers (BERT). Two datasets consisting of radiologist-annotated reports of trauma radiographs (n = 2469) and of chest radiographs and computed tomography (CT) studies (n = 2255) were split into training sets (80%) and testing sets (20%). The training data were used to train all four model types in 84 experiments (Fracture-data) and 45 experiments (Chest-data) with variation in size and prevalence. Performance was evaluated on sensitivity, specificity, positive predictive value, negative predictive value, area under the curve, and F score. All four model architectures demonstrated high performance on the radiology reports, with metrics exceeding 0.90. CNN, LSTM, and Dense were outperformed by the BERT algorithm because of its stable results despite variation in training size and prevalence. Awareness of variation in prevalence is warranted because it impacts sensitivity and specificity in opposite directions.
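As a rough illustration of the strongest of the four model types, the sketch below fine-tunes a BERT classifier on radiology report text using the Hugging Face Transformers API; the model name, label set, and single gradient step are placeholders rather than the study's configuration:

```python
# Sketch (Hugging Face Transformers, assumed setup): fine-tuning BERT for
# binary report classification. A real run wraps this in an optimizer loop
# or uses the Trainer API.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # e.g. fracture vs no fracture

reports = ["No acute fracture.", "Displaced fracture of the distal radius."]
labels = torch.tensor([0, 1])

inputs = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # one illustrative gradient step
```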


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 292 ◽  
Author(s):  
Md Zahangir Alom ◽  
Tarek M. Taha ◽  
Chris Yakopcic ◽  
Stefan Westberg ◽  
Paheding Sidike ◽  
...  

In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning when compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many others. This paper presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variant DL techniques based on these DL approaches. This work considers most of the papers published after 2012, when the current era of deep learning began. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey. We also include recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep learning approaches. Some surveys have been published on DL using neural networks, as well as a survey on Reinforcement Learning (RL). However, those papers have not discussed individual advanced techniques for training large-scale deep learning models or recently developed methods for generative models.


2020 ◽  
pp. 119-130
Author(s):  
Shadman Q. Salih ◽  
Hawre Kh. Abdulla ◽  
Zanear Sh. Ahmed ◽  
Nigar M. Shafiq Surameery ◽  
Rasper Dh. Rashid

The first outbreak of COVID-19 occurred in the city of Wuhan, China, in December 2019, and the disease subsequently became a worldwide pandemic. The World Health Organization (WHO) had confirmed more than 5.5 million cases and 341,155 deaths from the disease at the time of writing this paper. This new worldwide disease forced researchers to seek more precise ways to diagnose COVID-19. In the last decade, medical imaging techniques have shown their efficiency in helping radiologists detect and diagnose diseases. Deep learning and transfer learning algorithms are effective techniques for detecting disease from different image sources such as X-ray and CT scan images. In this work we used a deep learning technique based on the Convolutional Neural Network (CNN) to detect and diagnose COVID-19 disease using chest X-ray images. Moreover, a modified AlexNet architecture is proposed in different scenarios that differ from each other in the type of pooling layers and/or the number of neurons used in the second fully connected layer. The chest X-ray images were gathered from two COVID-19 X-ray image datasets and one dataset that includes a large number of normal and pneumonia X-ray images. With the proposed models we obtained the same or even better results than the original AlexNet while using a smaller number of neurons in the second fully connected layer.
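A minimal sketch of the kind of modification described, with the pooling type and the width of the second fully connected layer exposed as parameters; the layer layout follows the classic AlexNet, and the exact values in the paper may differ:

```python
# Sketch (Keras, assumed details) of a parameterized AlexNet-style model:
# pooling type and second-FC-layer width are configurable, as in the
# scenarios the abstract describes.
from tensorflow import keras
from tensorflow.keras import layers

def modified_alexnet(pooling="max", fc2_units=1024, n_classes=3):
    Pool = layers.MaxPooling2D if pooling == "max" else layers.AveragePooling2D
    return keras.Sequential([
        layers.Input(shape=(227, 227, 3)),
        layers.Conv2D(96, 11, strides=4, activation="relu"),
        Pool(pool_size=3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        Pool(pool_size=3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        Pool(pool_size=3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dense(fc2_units, activation="relu"),    # reduced second FC layer
        layers.Dense(n_classes, activation="softmax"), # COVID-19 / normal / pneumonia
    ])

model = modified_alexnet(pooling="avg", fc2_units=512)
```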


2021 ◽  
pp. 1-25
Author(s):  
Kwabena Adu ◽  
Yongbin Yu ◽  
Jingye Cai ◽  
Victor Dela Tattrah ◽  
James Adu Ansere ◽  
...  

The squash function in capsule network (CapsNet) dynamic routing is less capable of discriminating non-informative capsules, which leads to an abnormal distribution of capsule activation values. In this paper, we propose the vertical squash (VSquash), which improves the original squash by constraining the activation values of capsules in the primary capsule layer so as to shrink non-informative capsules, promote discriminative capsules, and avoid high information sensitivity. Furthermore, three new CapsNet-based neural networks in which VSquash is applied in the dynamic routing are presented: (i) the skip-connected convolutional capsule (S-CCCapsule), (ii) integrated skip-connected convolutional capsules (ISCC), and (iii) ensemble skip-connected convolutional capsules (ESCC). In order to achieve a uniform distribution of the coupling-coefficient probabilities between capsules, we use the sigmoid function rather than the softmax function. Experiments on the Guangzhou Women and Children's Medical Center (GWCMC), Radiological Society of North America (RSNA), and Mendeley CXR pneumonia datasets were performed to validate the effectiveness of our proposed methods. We found that our proposed methods produce better accuracy than other methods based on model evaluation metrics such as the confusion matrix, sensitivity, specificity, and area under the curve (AUC). Our method for pneumonia detection performs better than practicing radiologists; it minimizes human error and reduces diagnosis time.
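For orientation, the sketch below shows one dynamic-routing step using the standard squash and a sigmoid (rather than softmax) coupling step, as the abstract describes; the VSquash formulation itself is the authors' contribution and is not reproduced here, and all shapes are illustrative:

```python
# Sketch (NumPy): one routing iteration with the standard CapsNet squash and
# sigmoid coupling coefficients in place of softmax.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Original squash: scales capsule vectors to length in [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

u_hat = np.random.randn(32, 10, 16)  # predictions: 32 primary -> 10 output capsules
b = np.zeros((32, 10))               # routing logits
c = sigmoid(b)                       # sigmoid coupling instead of softmax
s = np.sum(c[..., None] * u_hat, axis=0)
v = squash(s)                        # output capsules, shape (10, 16)
print(v.shape)
```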


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved to be efficient, with better accuracy, and can work on lightweight computational devices. The proposed model is efficient in maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. The HAM10000 dataset is used, and the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
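One plausible reading of the MobileNet V2 + LSTM pairing is to treat the rows of the MobileNet feature map as a sequence for the LSTM; the sketch below assumes this wiring, which the abstract does not specify, and uses the seven HAM10000 classes as the output:

```python
# Sketch (Keras, assumed wiring): frozen MobileNetV2 feature extractor
# feeding an LSTM over the feature-map rows, then a 7-way classifier.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse ImageNet features

inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs)                      # (7, 7, 1280) feature map
x = layers.Reshape((7, 7 * 1280))(x)  # 7-step sequence for the LSTM
x = layers.LSTM(64)(x)
outputs = layers.Dense(7, activation="softmax")(x)  # 7 HAM10000 classes

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```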


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Background: We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) in a cohort of subjects from two hospitals in Lombardy, Italy.
Methods: For training and validation we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested the system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard.
Results: At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). On the independent dataset, deep learning showed 0.80 sensitivity (95% CI 0.72–0.86) (59/74), 0.81 specificity (29/36) (95% CI 0.73–0.87), and 0.81 AUC (95% CI 0.73–0.87). Radiologists' reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in Centre 1 and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in Centre 2.
Conclusions: This preliminary experience based on ten CNNs trained on a limited dataset shows the potential of deep learning for COVID-19 diagnosis. The tool is being trained with new CXRs to further increase its performance.
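The general pattern behind such a ten-CNN system is probability averaging across independently trained members; a hedged sketch with a placeholder member architecture, not the study's actual networks:

```python
# Sketch (Keras, assumed details): ensemble of ten CNN classifiers combined
# by averaging predicted probabilities.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_member():
    return keras.Sequential([
        layers.Input(shape=(224, 224, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # P(COVID-19)
    ])

members = [make_member() for _ in range(10)]
# ... each member is trained, e.g. on a different cross-validation fold ...

def ensemble_predict(images):
    preds = np.stack([m.predict(images, verbose=0) for m in members])
    return preds.mean(axis=0)  # average probability across the ten CNNs
```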


Author(s):  
Yuheng Hu ◽  
Yili Hong

Residents often rely on newspapers and television to gather hyperlocal news for community awareness and engagement. More recently, social media have emerged as an increasingly important source of hyperlocal news. Thus far, the literature on using social media to create desirable societal benefits, such as civic awareness and engagement, is still in its infancy. One key challenge in this research stream is to distill information from noisy social media data streams and deliver it to community members in a timely and accurate manner. In this work, we develop SHEDR (social media–based hyperlocal event detection and recommendation), an end-to-end neural event detection and recommendation framework with a particular use case for Twitter to facilitate residents' information seeking about hyperlocal events. The key model innovation in SHEDR lies in the design of the hyperlocal event detector and the event recommender. First, we harness the power of two popular deep neural network models, the convolutional neural network (CNN) and long short-term memory (LSTM), in a novel joint CNN-LSTM model to characterize spatiotemporal dependencies for capturing unusualness in a region of interest, which is classified as a hyperlocal event. Next, we develop a neural pairwise ranking algorithm for recommending detected hyperlocal events to residents based on their interests. To alleviate the sparsity issue and improve personalization, our algorithm incorporates several types of contextual information covering topic, social, and geographical proximities. We perform comprehensive evaluations based on two large-scale data sets comprising geotagged tweets covering Seattle and Chicago. We demonstrate the effectiveness of our framework in comparison with several state-of-the-art approaches. We show that our hyperlocal event detection and recommendation models consistently and significantly outperform other approaches in terms of precision, recall, and F-1 scores. Summary of Contribution: In this paper, we focus on a novel and important, yet largely underexplored application of computing: how to improve civic engagement in local neighborhoods via local news sharing and consumption based on social media feeds. To address this question, we propose two new computational and data-driven methods: (1) a deep learning–based hyperlocal event detection algorithm that scans spatially and temporally to detect hyperlocal events from geotagged Twitter feeds; and (2) a personalized deep learning–based hyperlocal event recommender system that systematically integrates several contextual cues such as topical, geographical, and social proximity to recommend the detected hyperlocal events to potential users. We conduct a series of experiments to examine our proposed models. The outcomes demonstrate that our algorithms are significantly better than the state-of-the-art models and can provide users with more relevant information about the local neighborhoods that they live in, which in turn may boost their community engagement.
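A minimal sketch of the joint CNN-LSTM idea over a spatiotemporal grid of tweet activity, with a CNN encoding each time step's spatial map and an LSTM modelling the temporal sequence; all shapes and the event-scoring head are illustrative assumptions, not the SHEDR implementation:

```python
# Sketch (Keras, assumed shapes): a joint CNN-LSTM that scores whether the
# latest window of geotagged activity in a region is unusual (a candidate
# hyperlocal event).
from tensorflow import keras
from tensorflow.keras import layers

timesteps, grid = 24, 32  # 24 hourly snapshots of a 32x32 spatial grid

cnn = keras.Sequential([
    layers.Input(shape=(grid, grid, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
])

inputs = keras.Input(shape=(timesteps, grid, grid, 1))
x = layers.TimeDistributed(cnn)(inputs)             # per-step spatial embedding
x = layers.LSTM(64)(x)                              # temporal dependencies
outputs = layers.Dense(1, activation="sigmoid")(x)  # P(hyperlocal event)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
```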


2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands large computational resources and adds considerable cost to learning. Transfer learning reduces the cost of learning and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network model is built on top of it for image classification based on a fully connected network. This classifier uses features extracted from the convolutional base model.
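The described setup corresponds to the standard Keras transfer-learning pattern: freeze the VGG16 convolutional base and train a new fully connected classifier on top of it. A minimal sketch, with the target class count assumed:

```python
# Sketch (Keras): frozen ImageNet-pre-trained VGG16 base plus a new fully
# connected classifier head. The two-class output is an assumption.
from tensorflow import keras
from tensorflow.keras import layers

conv_base = keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
conv_base.trainable = False  # keep the ImageNet features fixed

model = keras.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(2, activation="softmax"),  # assumed target task
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```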


2021 ◽  
Vol 9 (Suppl 3) ◽  
pp. A874-A874
Author(s):  
David Soong ◽  
Anantharaman Muthuswamy ◽  
Clifton Drew ◽  
...  

Background: Recent advances in machine learning and digital pathology have enabled a variety of applications including predicting tumor grade and genetic subtypes, quantifying the tumor microenvironment (TME), and identifying prognostic morphological features from H&E whole slide images (WSI). These supervised deep learning models require large quantities of images manually annotated with cellular- and tissue-level details by pathologists, which limits scale and generalizability across cancer types and imaging platforms. Here we propose a semi-supervised deep learning framework that automatically annotates biologically relevant image content from hundreds of solid tumor WSI with minimal pathologist intervention, thus improving the quality and speed of analytical workflows aimed at deriving clinically relevant features.
Methods: The dataset consisted of >200 H&E images across >10 solid tumor types (e.g. breast, lung, colorectal, cervical, and urothelial cancers) from advanced disease patients. WSI were first partitioned into small tiles of 128 μm for feature extraction using a 50-layer convolutional neural network pre-trained on the ImageNet database. Dimensionality reduction and unsupervised clustering were applied to the resultant embeddings, and image clusters with enriched histological and morphological characteristics were identified. A random subset of representative tiles (<0.5% of whole slide tissue areas) from these distinct image clusters was manually reviewed by pathologists and assigned to eight histological and morphological categories: tumor, stroma/connective tissue, necrotic cells, lymphocytes, red blood cells, white blood cells, normal tissue, and glass/background. This dataset allowed the development of a multi-label deep neural network to segment morphologically distinct regions and detect/quantify histopathological features in WSI.
Results: As representative image tiles within each image cluster were morphologically similar, expert pathologists were able to assign annotations to multiple images in parallel, effectively at 150 images/hour. Five-fold cross-validation showed an average prediction accuracy of 0.93 [0.8–1.0] and an area under the curve of 0.90 [0.8–1.0] over the eight image categories. As an extension of this classifier framework, all whole slide H&E images were segmented, and composite lymphocyte, stromal, and necrotic content per patient tumor was derived and correlated with estimates by pathologists (p<0.05).
Conclusions: A novel and scalable deep learning framework for annotating and learning H&E features from a large unlabeled WSI dataset across tumor types was developed. This automated approach accurately identified distinct histomorphological features, with significantly reduced labeling time and effort required from pathologists. Further, this classifier framework was extended to annotate regions enriched in lymphocytes, stromal, and necrotic cells, important TME contexture with clinical relevance for patient prognosis and treatment decisions.
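A hedged sketch of the first stage of such a pipeline: tile-level feature extraction with a 50-layer ImageNet-pre-trained CNN (ResNet50 is assumed here), followed by dimensionality reduction and unsupervised clustering of the tile embeddings; component and cluster counts are illustrative:

```python
# Sketch (Keras + scikit-learn, assumed details): extract embeddings for H&E
# tiles with ResNet50, reduce with PCA, and cluster with k-means. Cluster
# representatives would then go to pathologists for category assignment.
import numpy as np
from tensorflow import keras
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

extractor = keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")

tiles = np.random.rand(500, 224, 224, 3) * 255  # placeholder H&E tiles
features = extractor.predict(
    keras.applications.resnet50.preprocess_input(tiles), verbose=0)

embeddings = PCA(n_components=50).fit_transform(features)  # 2048 -> 50 dims
clusters = KMeans(n_clusters=8, n_init=10).fit_predict(embeddings)
print(np.bincount(clusters))  # tiles per image cluster
```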

