RadImageNet: A Large-scale Radiologic Dataset for Enhancing Deep Learning Transfer Learning Research

Author(s):  
Yang Yang ◽  
Xueyan Mei ◽  
Philip Robson ◽  
Brett Marinelli ◽  
Mingqian Huang ◽  
...  

Most current medical imaging Artificial Intelligence (AI) relies upon transfer learning using convolutional neural networks (CNNs) created using ImageNet, a large database of natural world images, including cats, dogs, and vehicles. The size and diversity of the source data, and its similarity to the target data, determine the success of transfer learning on the target data. ImageNet is large and diverse, but there is a significant dissimilarity between its natural world images and medical images, leading Cheplygina to pose the question, “Why do we still use images of cats to help Artificial Intelligence interpret CAT scans?”. We present an equally large and diversified database, RadImageNet, consisting of 5 million annotated CT, MRI, and ultrasound images of musculoskeletal, neurologic, oncologic, gastrointestinal, endocrine, and pulmonary pathologies from over 450,000 patients. The database is unprecedented in scale and breadth in the medical imaging field, constituting a more appropriate basis for medical imaging transfer learning applications. We found that RadImageNet transfer learning outperformed ImageNet in multiple independent applications, including improvements for bone age prediction from hand and wrist X-rays by 1.75 months (p<0.0001), pneumonia detection in ICU chest X-rays by 0.85% (p<0.0001), ACL tear detection on MRI by 10.72% (p<0.0001), SARS-CoV-2 detection on chest CT by 0.25% (p<0.0001), and hemorrhage detection on head CT by 0.13% (p<0.0001). The results indicate that our publicly released pre-trained models will be a better starting point for transfer learning in radiologic imaging AI applications, including applications involving medical imaging modalities or anatomies not included in the RadImageNet database.
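In practice, such a database serves as the initialization for fine-tuning on a downstream task. Below is a minimal PyTorch sketch of that pattern; the checkpoint filename radimagenet_resnet50.pt is a hypothetical placeholder, not an official RadImageNet release artifact.

```python
# Minimal transfer-learning sketch (PyTorch): initialize a ResNet-50 backbone
# from a domain-specific checkpoint and fine-tune it for a new downstream task.
# "radimagenet_resnet50.pt" is a hypothetical placeholder path.
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes, checkpoint=None):
    backbone = models.resnet50(weights=None)               # architecture only
    if checkpoint is not None:
        state = torch.load(checkpoint, map_location="cpu")
        backbone.load_state_dict(state, strict=False)       # domain-specific weights
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new task head
    return backbone

model = build_finetune_model(num_classes=2, checkpoint=None)  # e.g., ACL tear vs. normal
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# A standard fine-tuning loop (forward pass, loss, backward pass, step) follows.
```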

2019 ◽  
Vol 8 (4) ◽  
pp. 462 ◽  
Author(s):  
Muhammad Owais ◽  
Muhammad Arsalan ◽  
Jiho Choi ◽  
Kang Ryoung Park

Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. In practice, a medical doctor often refers to several types of imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound, across various organs for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which perform poorly on massive collections of multimodal data. Although a few previous studies use deep features for classification, the number of classes they handle is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities, built on an artificial intelligence technique, an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
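As a rough illustration of the classification-followed-by-retrieval idea (not the authors' exact enhanced ResNet), the sketch below restricts the search to database images that share the query's predicted class and ranks them by cosine similarity of deep features; all array names are placeholders.

```python
# Sketch of classification-based retrieval: the predicted class narrows the
# candidate set, and cosine similarity of deep features ranks the candidates.
import numpy as np

def retrieve(query_feat, query_class, db_feats, db_classes, db_ids, top_k=5):
    """query_feat: (D,) L2-normalized vector; db_feats: (N, D) L2-normalized rows."""
    mask = db_classes == query_class           # keep only same-class database images
    sims = db_feats[mask] @ query_feat         # cosine similarity on unit vectors
    order = np.argsort(-sims)[:top_k]          # highest similarity first
    return [(db_ids[mask][i], float(sims[i])) for i in order]
```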


2021 ◽  
Vol 8 (7) ◽  
pp. 418-422
Author(s):  
Nandu Kumar Thalange ◽  
Elham Elgabaly Moustafa Ahmed ◽  
Ajay Prasanth D'Souza ◽  
Mireille El Bejjani

Objective: Artificial intelligence (AI) is playing an increasing role in patient assessment. AI bone age analysis is one such tool, but its value in Arabic children presenting to an endocrine clinic has not been explored. We compared results from an experienced pediatric radiologist and the AI bone age system BoneXpert (BX) (Visiana, Denmark) to assess its utility in a cohort of children presenting to the Al Jalila Children’s Specialty Hospital endocrine service. Materials and Methods: We conducted a retrospective chart review of 47 children with growth disorders, initially assessed by a single experienced radiologist and subsequently by BX, to confirm the usefulness of the BX system in our population. The radiologist’s interpretations and the BX results were compared using a Bland-Altman plot constructed across the available range of bone age. Results: Forty-four of the patient X-ray images were analysed by BX. Three X-ray images were rejected by BX due to post-processing artifacts, which prevented computer interpretation. For the remaining 44 X-rays, there was a close correlation between the radiologist and BX results (r=0.93; p<0.00001). Two radiographs were identified with a large discrepancy in the reported bone ages. Blinded, independent re-evaluation of the radiographs showed the original manually interpreted bone age to have been erroneous, with the BX results corresponding closely to the amended bone age. A small positive bias in bone age (+0.39 years) was noted in the BX analyses relative to manual interpretation. Conclusions: AI bone age analysis was of high utility in Arabic children from the UAE presenting to an endocrine clinic, with results highly comparable to those of an experienced radiologist. In the two cases where a large discrepancy was found, independent re-evaluation showed the AI analysis was correct.
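For reference, the Bland-Altman comparison used above reduces to a bias (mean difference) and 95% limits of agreement; a minimal sketch with toy numbers is shown below.

```python
# Bland-Altman sketch: bias and 95% limits of agreement between manual and
# automated bone age readings. The input values here are toy data, not the
# study's measurements.
import numpy as np

def bland_altman(manual, automated):
    manual, automated = np.asarray(manual, float), np.asarray(automated, float)
    diff = automated - manual              # positive values: automated reads older
    bias = diff.mean()                     # systematic offset (the study found +0.39 y)
    half_width = 1.96 * diff.std(ddof=1)   # 95% limits of agreement half-width
    return bias, bias - half_width, bias + half_width

bias, lower, upper = bland_altman([7.5, 10.2, 12.0], [7.9, 10.4, 12.5])
```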


Author(s):  
Fouzia Altaf ◽  
Syed M. S. Islam ◽  
Naeem Khalid Janjua

Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks. However, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning. Moreover, the scarcity of annotated data for medical imaging tasks causes further problems for effective transfer learning. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between images of the source and target data domains. Moreover, each model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models. The dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography data set. Our experimental results show more than a 50% reduction in error rate with our method compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 data set for binary and multi-class classification tasks. Our technique achieves 99.49% accuracy for binary classification and 99.24% for multi-class classification.
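A minimal sketch of the general idea follows, under assumptions (the exact extra layers, backbones, and dictionary stage are not reproduced here): each ensemble member gets a small adapter so single-channel radiographs fit an ImageNet-pretrained network, and member predictions are averaged.

```python
# Hedged sketch: an adapter layer maps 1-channel radiographs to the 3-channel
# input expected by an ImageNet-pretrained backbone; an ensemble of such
# adapted models averages softmax outputs. Details are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class AdaptedNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.adapter = nn.Conv2d(1, 3, kernel_size=1)   # dimensionality adapter
        self.backbone = models.densenet121(weights="IMAGENET1K_V1")
        self.backbone.classifier = nn.Linear(
            self.backbone.classifier.in_features, num_classes)

    def forward(self, x):
        return self.backbone(self.adapter(x))

def ensemble_predict(members, x):
    with torch.no_grad():
        probs = [torch.softmax(net(x), dim=1) for net in members]
    return torch.stack(probs).mean(dim=0)   # average the members' class probabilities
```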


Cancers ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1590
Author(s):  
Laith Alzubaidi ◽  
Muthana Al-Amidie ◽  
Ahmed Al-Asadi ◽  
Amjad J. Humaidi ◽  
Omran Al-Shamma ◽  
...  

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by reusing deep learning models that carry knowledge from a previous task and fine-tuning them on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has been shown to be ineffective due to the mismatch in learned features between natural images and medical images; it also results in the use of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring that knowledge to train the model on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios dealing with skin and breast cancer classification tasks. According to the reported results, the proposed approach significantly improves performance in both classification scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved an accuracy of 85.29% when trained from scratch and 97.51% with the proposed approach. Finally, we conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited. Moreover, it can be utilized to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train a classifier for foot skin images with two classes, normal or abnormal (diabetic foot ulcer, DFU). It achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double-transfer learning.
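The two-stage pipeline can be sketched as two sequential training runs on the same backbone; the rotation-prediction proxy used below for the unlabeled stage is an assumption for illustration, not the paper's exact pretext objective.

```python
# Hedged sketch of double transfer learning: stage 1 trains the backbone on a
# proxy task over large unlabeled in-domain images (rotation prediction is used
# here purely as an illustrative stand-in); stage 2 reuses those weights and
# fine-tunes on the small labeled target dataset.
import torch.nn as nn
from torchvision import models

def make_backbone(num_outputs):
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_outputs)
    return net

# Stage 1: proxy task on unlabeled medical images (4 rotation classes).
stage1 = make_backbone(num_outputs=4)
# ... train stage1 on rotated, unlabeled in-domain images ...

# Stage 2: transfer everything except the task head, then fine-tune on labels.
stage2 = make_backbone(num_outputs=2)
weights = {k: v for k, v in stage1.state_dict().items() if not k.startswith("fc.")}
stage2.load_state_dict(weights, strict=False)
# ... fine-tune stage2 on the small labeled dataset (e.g., skin or breast images) ...
```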


2021 ◽  
Vol 1 (2) ◽  
pp. 71-80
Author(s):  
Revella E. A. Armya Armya ◽  
Adnan Mohsin Abdulazeez

Medical image segmentation plays an essential role in computer-aided diagnostic systems in various applications. Researchers are therefore attracted to applying new algorithms for medical image processing, given the massive investment in developing medical imaging methods such as dermatoscopy, X-ray, microscopy, ultrasound, computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI). Segmentation of medical images is considered one of the most important medical imaging processes because it extracts the region of interest (ROI) through an automatic or semi-automatic process. The medical image is divided into regions based on specific descriptions, such as tissue/organ division in medical applications for border detection, tumor detection and segmentation, and comprehensive and accurate detection. Several segmentation methods have been proposed in the literature, but their efficacy is difficult to compare. To better address this issue, a variety of evaluation measures have been suggested to assess the consistency of the segmentation outcome; unsupervised ranking criteria use statistics computed from the segmentation result and the original image. The key aim of this paper is to review the literature on unsupervised algorithms (K-means, K-medoids) and to compare the working efficiency of these unsupervised algorithms on different types of medical images.
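As a concrete reference point for the algorithms compared here, a minimal K-means intensity segmentation sketch (using scikit-learn, with a synthetic image as stand-in data) is shown below.

```python
# Minimal unsupervised segmentation sketch: cluster pixel intensities with
# K-means and reshape the cluster labels back into an image-sized label map.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_regions=3):
    """image: 2-D grayscale array; returns an integer label map of the same shape."""
    pixels = image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(image.shape)

toy = np.random.rand(64, 64)                 # stand-in for a grayscale medical image
segmentation = kmeans_segment(toy, n_regions=3)
```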


2021 ◽  
Vol 8 (1) ◽  
pp. 9
Author(s):  
Buyut Khoirul Umri ◽  
Ema Utami ◽  
Mei P Kurniawan

COVID-19 attacks the epithelial cells lining the respiratory tract, so chest X-ray images can be used to analyze the health of a patient's lungs. Using X-rays in the medical field is a faster, easier, and harmless method that can be applied in many ways. One of the most frequently used methods in image classification is the convolutional neural network (CNN). A CNN is a type of neural network commonly applied to image data and often used to detect and recognize objects in an image. The architectural model in the CNN method can also be developed with transfer learning, the process of reusing a pre-trained model that was trained on a large dataset, usually for a large-scale image classification task. This literature review analyzes the use of transfer learning with CNNs as a method for detecting COVID-19 in chest X-ray images. The systematic review shows that CNN algorithms can detect COVID-19 in chest X-ray images with good accuracy, and that developing transfer learning models makes it possible to obtain maximum performance with both large and small datasets. Keywords: CNN, transfer learning, detection, COVID-19


Author(s):  
Md. Milon Islam ◽  
Md. Zabirul Islam ◽  
Amanullah Asraf ◽  
Weiping Ding

The confrontation of the COVID-19 pandemic has become one of the most pressing challenges for world healthcare. Accurate and fast diagnosis of COVID-19 cases is essential for correct medical treatment to control this pandemic. Compared with the reverse-transcription polymerase chain reaction (RT-PCR) method, chest radiography imaging techniques have been shown to be more effective for detecting coronavirus. Given the limited availability of medical images, transfer learning is better suited to classifying patterns in medical images. This paper presents a combined architecture of a convolutional neural network (CNN) and a recurrent neural network (RNN) to diagnose COVID-19 from chest X-rays. The deep transfer techniques used in this experiment are VGG19, DenseNet121, InceptionV3, and Inception-ResNetV2. The CNN is used to extract complex features from samples, which are then classified using the RNN. The VGG19-RNN architecture achieved the best performance among all the networks in terms of accuracy and computational time in our experiments. Finally, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to visualize the class-specific regions of images that are responsible for the model's decision. The system achieved promising results compared to other existing systems and may be validated in the future when more samples become available. The experiment demonstrates a good alternative method for medical staff to diagnose COVID-19.
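One way to read the CNN-RNN combination is that the VGG19 feature map is unrolled into a sequence of spatial feature vectors that an LSTM then summarizes for classification; the sketch below illustrates that wiring under assumptions, with layer sizes that are not the paper's exact architecture.

```python
# Hedged sketch of a CNN-RNN hybrid: VGG19 convolutional features are flattened
# into a sequence of spatial vectors, an LSTM summarizes the sequence, and a
# linear layer produces the class scores. Sizes are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class CnnRnn(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.cnn = models.vgg19(weights="IMAGENET1K_V1").features   # conv backbone
        self.rnn = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        f = self.cnn(x)                        # (B, 512, H', W') feature maps
        seq = f.flatten(2).transpose(1, 2)     # (B, H'*W', 512) spatial sequence
        _, (h, _) = self.rnn(seq)              # final hidden state summarizes the image
        return self.fc(h[-1])
```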


2020 ◽  
Vol 245 ◽  
pp. 09011
Author(s):  
Michael Hildreth ◽  
Kenyi Paolo Hurtado Anampa ◽  
Cody Kankel ◽  
Scott Hampton ◽  
Paul Brenner ◽  
...  

The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a cloud-based data analysis platform deployed on top of Kubernetes clusters that was originally designed to enable analysis reusability and reproducibility. REANA is capable of orchestrating extremely complicated multi-step workflows, and uses Kubernetes clusters both for scheduling and distributing container-based workloads across a cluster of available machines and for instantiating and monitoring the concrete workloads themselves. This work describes the challenges and development efforts involved in extending REANA, and the components that were developed, in order to enable large-scale deployment on High Performance Computing (HPC) resources. Using the Virtual Clusters for Community Computation (VC3) infrastructure as a starting point, we adapted REANA to work with a number of differing workload managers, covering both high-performance and high-throughput systems, while simultaneously removing REANA’s dependence on Kubernetes support at the worker level.


2016 ◽  
Vol 25 (09) ◽  
pp. 1650110 ◽  
Author(s):  
S. P. Valan Arasu ◽  
S. Baulkani

Medical image fusion is the process of deriving vital information from multimodality medical images. Important applications of image fusion include medical imaging, remote sensing, computer vision, and robotics. For medical diagnosis, computed tomography (CT) gives the best information about denser tissue with less distortion, and magnetic resonance imaging (MRI) gives better information about soft tissue with slightly higher distortion. The main scheme is to combine CT and MRI images to obtain the most significant information. The need is to achieve lower power consumption and smaller area in implementations of image fusion using the discrete wavelet transform (DWT). To design a DWT processor with low power and area, a low-power multiplier and shifter are incorporated in the hardware. This low-power DWT improves the spatial resolution of the fused image and also preserves the color appearance. In addition, adopting the lifting scheme in the 2D DWT process further reduces power consumption. To implement this 2D DWT processor in a field-programmable gate array (FPGA) architecture as a very large scale integration (VLSI)-based design, the process is simulated with Xilinx 14.1 tools and with MATLAB. Compared with other available methods, this high-performance processor shows improvements of 24%, 54%, and 53% in standard deviation (SD), root mean square error (RMSE), and entropy, respectively. Thus, we obtain a low-power, low-area, high-performance FPGA architecture suited for VLSI, for extracting the needed information from multimodality medical images with image fusion.
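To make the fusion rule concrete in software terms (the hardware design above accelerates the transform itself), here is a hedged PyWavelets sketch: average the approximation bands and keep the larger-magnitude detail coefficients; the exact rule used by the authors may differ.

```python
# DWT-based CT/MRI fusion sketch (PyWavelets): decompose both images, average
# the approximation coefficients, keep the maximum-magnitude detail
# coefficients, and reconstruct the fused image.
import numpy as np
import pywt

def dwt_fuse(ct, mri, wavelet="haar"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(ct, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(mri, wavelet)
    fused_A = (cA1 + cA2) / 2.0                                  # average low band
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # max-abs detail rule
    fused = (fused_A, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```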


Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 135
Author(s):  
Gelan Ayana ◽  
Jinhyung Park ◽  
Jin-Woo Jeong ◽  
Se-woon Choe

Breast cancer diagnosis is one of the many areas that have taken advantage of artificial intelligence to achieve better performance, despite the fact that the availability of large medical image datasets remains a challenge. Transfer learning (TL) enables deep learning algorithms to overcome the shortage of training data when constructing an efficient model by transferring knowledge from a given source task to a target task. However, in most cases, ImageNet (natural-image) pre-trained models that do not include medical images are used for transfer learning to medical images. Considering that microscopic cancer cell line images can be acquired in large amounts, we argue that learning from both natural and medical datasets improves performance in ultrasound breast cancer image classification. The proposed multistage transfer learning (MSTL) algorithm was implemented using three pre-trained models, EfficientNetB2, InceptionV3, and ResNet50, with three optimizers: Adam, Adagrad, and stochastic gradient descent (SGD). Datasets of 20,400 cancer cell images, 200 ultrasound images from Mendeley, and 400 ultrasound images from the MT-Small-Dataset were used. ResNet50-Adagrad-based MSTL achieved a test accuracy of 99 ± 0.612% on the Mendeley dataset and 98.7 ± 1.1% on the MT-Small-Dataset, averaged over 5-fold cross-validation. A p-value of 0.01191 was achieved when comparing MSTL against ImageNet-based TL for the Mendeley dataset. The result is a significant improvement in the performance of artificial intelligence methods for ultrasound breast cancer classification compared to state-of-the-art methods and could remarkably improve the early diagnosis of breast cancer in young women.
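The multistage idea can be sketched as two consecutive fine-tuning runs of the same Keras backbone (ImageNet weights, then microscopy, then ultrasound); the data loaders, class counts, and hyperparameters below are placeholders rather than the paper's settings.

```python
# Hedged MSTL sketch: stage 1 fine-tunes an ImageNet-initialized ResNet50 on
# cancer cell line microscopy images; stage 2 copies those weights (except the
# final classification head) and fine-tunes on the small ultrasound dataset.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def stage_model(num_classes):
    base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          pooling="avg", input_shape=(224, 224, 3))
    head = layers.Dense(num_classes, activation="softmax")(base.output)
    return models.Model(base.input, head)

# Stage 1: ImageNet -> microscopy cell images (placeholder class count and data).
m1 = stage_model(num_classes=4)
m1.compile(optimizer=optimizers.Adagrad(1e-3), loss="categorical_crossentropy")
# m1.fit(cell_images, cell_labels, ...)

# Stage 2: microscopy -> ultrasound; reuse all stage-1 weights except the head.
m2 = stage_model(num_classes=2)
m2.set_weights(m1.get_weights()[:-2] + m2.get_weights()[-2:])
m2.compile(optimizer=optimizers.Adagrad(1e-3), loss="categorical_crossentropy")
# m2.fit(ultrasound_images, ultrasound_labels, ...)
```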

