Rheumatoid Arthritis Diagnosis: Deep Learning vs. Humane

2021 ◽  
Vol 12 (1) ◽  
pp. 10
Author(s):  
George P. Avramidis ◽  
Maria P. Avramidou ◽  
George A. Papakostas

Rheumatoid arthritis (RA) is a systemic autoimmune disease that preferentially affects small joints. As timely diagnosis is essential for treatment of the patient, several works in the field of deep learning have sought to develop fast and accurate automatic methods for RA diagnosis. These works focus mainly on medical images, using X-ray and ultrasound images as input for their models. In this study, we review the conducted works and compare the deep learning methods with the procedure commonly followed by a medical doctor for RA diagnosis. The results show that 93% of the works use only image modalities as model input, in contrast to the medical procedure, where additional patient medical data are taken into account. Moreover, only 15% of the works use direct explainability methods, meaning that efforts to address the trustworthiness issue of deep learning models have been limited. In this context, this work reveals the gap between the deep learning approaches and the practices traditionally applied by medical doctors, and brings to light the weaknesses that current deep learning technology must overcome to be integrated in a trustworthy way into existing medical infrastructures.

Author(s):  
Victoria Wu

Introduction: Scoliosis, an excessive curvature of the spine, affects approximately 1 in 1,000 individuals. As a result, mandatory scoliosis screening procedures were formerly implemented. Such screening programs are no longer widely used, as the harms often outweigh the benefits: they cause many adolescents to undergo frequent diagnostic X-ray procedures and the associated radiation exposure. This makes spinal ultrasound an ideal substitute for scoliosis screening, as it does not expose patients to those levels of radiation. Spinal curvatures can be accurately computed from the locations of the spinal transverse processes, by measuring the vertebral angle from a reference line [1]. However, ultrasound images are less clear than X-ray images, making it difficult to identify the transverse processes. To overcome this, we employ deep learning using a convolutional neural network, a powerful tool for computer vision and image classification [2]. Method: A total of 2,752 ultrasound images were recorded from a spine phantom to train a convolutional neural network. Subsequently, we took another recording of 747 images to be used for testing. All the ultrasound images from the scans were then segmented manually, using the 3D Slicer (www.slicer.org) software. Next, the dataset was fed through a convolutional neural network. The network used was a modified version of GoogLeNet (Inception v1), with 2 linearly stacked inception modules. This network was chosen because it provided a balance between accurate performance and time-efficient computation. Results: Deep learning classification using the Inception model achieved an accuracy of 84% for the phantom scan. Conclusion: The classification model performs with considerable accuracy. Better accuracy needs to be achieved, possibly with more available data and improvements in the classification model. Acknowledgements: G. Fichtinger is supported as a Canada Research Chair in Computer-Integrated Surgery.
This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1 - Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE’s Research Software Program.
Figure 1: Ultrasound scan containing a transverse process (left), and ultrasound scan containing no transverse process (right).
Figure 2: Accuracy of classification for training (red) and validation (blue).
References:
[1] Ungi T, King F, Kempston M, Keri Z, Lasso A, Mousavi P, Rudan J, Borschneck DP, Fichtinger G. Spinal Curvature Measurement by Tracked Ultrasound Snapshots. Ultrasound in Medicine and Biology, 40(2):447-54, Feb 2014.
[2] Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25:1097-1105, 2012.
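The curvature measurement referenced in [1], the vertebral angle between a line through transverse-process locations and a horizontal reference line, can be sketched in a few lines. This is a minimal illustration, not the cited authors' code; the function name and the example coordinates are assumptions.

```python
import math

def vertebral_angle(left, right):
    """Angle (degrees) between the line joining the left and right
    transverse-process locations and a horizontal reference line."""
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    return math.degrees(math.atan2(dy, dx))

# Example: the right process sits 10 mm lateral and 2 mm superior
# to the left one, giving a small vertebral tilt.
angle = vertebral_angle((0.0, 0.0), (10.0, 2.0))
print(round(angle, 2))  # 11.31
```

Summing or differencing such per-vertebra angles along the spine is what yields the overall curvature estimate.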


2019 ◽  
Vol 54 (S1) ◽  
pp. 86-87
Author(s):  
X.P. Burgos‐Artizzu ◽  
E. Eixarch ◽  
D. Coronado‐Gutierrez ◽  
B. Valenzuela ◽  
E. Bonet‐Carne ◽  
...  

2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Okeke Stephen ◽  
Mangal Sain ◽  
Uchenna Joseph Maduh ◽  
Do-Un Jeong

This study proposes a convolutional neural network model trained from scratch to classify and detect the presence of pneumonia from a collection of chest X-ray image samples. Unlike other methods that rely solely on transfer learning approaches or traditional handcrafted techniques to achieve a remarkable classification performance, we constructed a convolutional neural network model from scratch to extract features from a given chest X-ray image and classify it to determine if a person is infected with pneumonia. This model could help mitigate the reliability and interpretability challenges often faced when dealing with medical imagery. Unlike other deep learning classification tasks backed by sufficient image repositories, it is difficult to obtain a large pneumonia dataset for this classification task; therefore, we deployed several data augmentation algorithms to improve the validation and classification accuracy of the CNN model and achieved remarkable validation accuracy.
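The kind of data augmentation the study relies on can be illustrated with a few array-level transforms. This is a minimal NumPy sketch; the specific transforms and their parameters are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly augmented copy of a 2-D grayscale X-ray array."""
    out = image.copy()
    if rng.random() < 0.5:             # random horizontal flip
        out = np.fliplr(out)
    shift = int(rng.integers(-10, 11)) # small random horizontal translation
    out = np.roll(out, shift, axis=1)
    gain = rng.uniform(0.9, 1.1)       # mild brightness jitter
    return np.clip(out * gain, 0.0, 1.0)

rng = np.random.default_rng(0)
xray = rng.random((224, 224))          # stand-in for a normalized chest X-ray
batch = np.stack([augment(xray, rng) for _ in range(8)])
print(batch.shape)  # (8, 224, 224)
```

Each pass over the training set then sees slightly different versions of the same scarce images, which is what lets a small dataset train a from-scratch CNN without severe overfitting.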


2021 ◽  
Vol 3 (3) ◽  
pp. 190-207
Author(s):  
S. K. B. Sangeetha

In recent years, deep-learning systems have made great progress, particularly in the disciplines of computer vision and pattern recognition. Deep-learning technology can be used to enable inference models to perform real-time object detection and recognition. Using deep-learning-based designs, eye tracking systems can determine the position of the eyes or pupils, regardless of whether visible-light or near-infrared image sensors are utilized. For growing electronic vehicle systems, such as driver monitoring systems and new touch screens, accurate and successful eye gaze estimation is critical. In demanding, unregulated, low-power situations, such systems must operate efficiently and at a reasonable cost. A thorough examination of the different deep learning approaches is required to take into consideration all of the limitations and opportunities of eye gaze tracking. The goal of this research is to learn more about the history of eye gaze tracking, as well as how deep learning has contributed to computer-vision-based tracking. Finally, this research presents a generalized system model for deep-learning-driven eye gaze direction diagnostics, as well as a comparison of several approaches.
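As a toy illustration of the pupil-localization step such systems perform, the center of a dark pupil in a grayscale eye image can be estimated as the centroid of its darkest pixels. This is a deliberately simplified stand-in for the CNN-based detectors the survey covers; the synthetic image and the intensity threshold are assumptions.

```python
import numpy as np

def pupil_center(gray, threshold=0.3):
    """Estimate the pupil center as the centroid of dark pixels."""
    mask = gray < threshold                      # pupil is the darkest region
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Synthetic 64x64 eye image: bright background, dark pupil disc at (40, 24).
yy, xx = np.mgrid[0:64, 0:64]
eye = np.ones((64, 64))
eye[(xx - 40) ** 2 + (yy - 24) ** 2 < 36] = 0.1  # pupil of radius 6
cx, cy = pupil_center(eye)
print(round(cx), round(cy))  # 40 24
```

Deep-learning detectors replace this brittle thresholding with learned features, which is precisely what makes them robust across visible-light and near-infrared sensors.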


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 20235-20254
Author(s):  
Hanan S. Alghamdi ◽  
Ghada Amoudi ◽  
Salma Elhag ◽  
Kawther Saeedi ◽  
Jomanah Nasser

Sci ◽  
2022 ◽  
Vol 4 (1) ◽  
pp. 3
Author(s):  
Steinar Valsson ◽  
Ognjen Arandjelović

With the increase in the availability of annotated X-ray image data, there has been an accompanying and consequent increase in research on machine-learning-based, and in particular deep-learning-based, X-ray image analysis. A major problem with this body of work lies in how newly proposed algorithms are evaluated. Usually, comparative analysis is reduced to the presentation of a single metric, often the area under the receiver operating characteristic curve (AUROC), which does not provide much clinical value or insight and thus fails to communicate the applicability of proposed models. In the present paper, we address this limitation of previous work by presenting a thorough analysis of a state-of-the-art learning approach and hence illuminate various weaknesses of similar algorithms in the literature, which have not yet been fully acknowledged and appreciated. Our analysis was performed on the ChestX-ray14 dataset, which has 14 lung disease labels and metainfo such as patient age, gender, and the relative X-ray direction. We examined the diagnostic significance of different metrics used in the literature, including those proposed by the International Medical Device Regulators Forum, and present a qualitative assessment of the spatial information learned by the model. We show that models with very similar AUROCs can exhibit widely differing clinical applicability. As a result, our work demonstrates the importance of detailed reporting and analysis of the performance of machine-learning approaches in this field, which is crucial both for progress in the field and for the adoption of such models in practice.
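The paper's central point, that similar AUROCs can hide very different clinical behavior, can be shown numerically with a hand-rolled rank-based AUROC. The two score sets below are invented for illustration: both models rank every diseased case above every healthy one (AUROC = 1.0), yet at a fixed decision threshold their sensitivities differ.

```python
def auroc(scores, labels):
    """Probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_at(scores, labels, threshold):
    """True-positive rate when flagging scores >= threshold as diseased."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s >= threshold for s in pos) / len(pos)

labels  = [1, 1, 1, 1, 0, 0, 0, 0]
model_a = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]            # well separated
model_b = [0.52, 0.51, 0.50, 0.49, 0.48, 0.47, 0.46, 0.45]    # same ranking, compressed

print(auroc(model_a, labels), auroc(model_b, labels))   # 1.0 1.0
print(sensitivity_at(model_a, labels, 0.5),
      sensitivity_at(model_b, labels, 0.5))             # 1.0 0.75
```

AUROC only sees the ranking, so the compressed model scores identically while missing a quarter of diseased cases at the deployed threshold, which is exactly the kind of gap detailed operating-point reporting exposes.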


Author(s):  
Tanishka Dodiya

Abstract: COVID-19, also commonly known as Coronavirus, is one of the deadliest viruses found in the world, with high rates of both death and spread. This has caused a severe pandemic in the world. The virus was first reported in Wuhan, China, where it presented as cases resembling pneumonia. The first case was encountered on December 31, 2019. As of 20th October 2021, more than 242 million cases have been reported in more than 188 countries, with around 5 million deaths. COVID-19-infected persons have pneumonia-like symptoms, and the infection damages the body's respiratory organs, making breathing difficult. The primary clinical test currently employed for the diagnosis of COVID-19 is RT-PCR, which is costly, time-consuming, and requires specialized clinical personnel. According to recent studies, chest X-ray scans include important information about the onset of the infection, and this information may be examined so that diagnosis and treatment can begin sooner. This is where artificial intelligence can complement the diagnostic capabilities of experienced clinicians. X-ray imaging is a readily available tool that can be an excellent option in COVID-19 diagnosis. The architectures usually used are VGG16, ResNet50, DenseNet121, Xception, ResNet18, etc. This deep-learning-based COVID detection system can be installed in hospitals for early diagnosis, or it can be used as a second opinion. Keywords: COVID-19, Deep Learning, CNN, CT-Image, Transfer Learning, VGG, ResNet, DenseNet


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Peng Bian ◽  
Xiyu Zhang ◽  
Ruihong Liu ◽  
Huijie Li ◽  
Qingqing Zhang ◽  
...  

A deep learning neural network algorithm was applied to optimize and improve color Doppler ultrasound images for research on elderly patients with chronic heart failure (CHF) complicated by sarcopenia, so as to analyze the effect of deep-learning-based color Doppler ultrasound imaging on the diagnosis of CHF. In this study, 259 patients diagnosed with sarcopenia, admitted to hospital from October 2017 to March 2020, were selected randomly. All of them underwent cardiac ultrasound examination and were divided into two groups according to whether deep learning technology was used for image processing. A group of routine unprocessed images was set as the control group, and the images processed by deep learning were set as the experimental group. The results of color Doppler images before and after processing were analyzed and compared: the processed images of the experimental group were clearer and had higher resolution than the unprocessed images of the control group, with a peak signal-to-noise ratio (PSNR) of 20 and a structural similarity index measure (SSIM) of 0.09; the agreement between the final diagnosis results and the examination results of the experimental group (93.5%) was higher than that of the control group (87.0%), and the comparison was statistically significant ( P < 0.05 ); among all the patients diagnosed with sarcopenia, 88.9% were also eventually diagnosed with CHF and only a small proportion were diagnosed with other diseases, with statistical significance ( P < 0.05 ). In conclusion, deep learning technology had certain application value in processing color Doppler ultrasound images. Although there was no obvious difference between the color Doppler ultrasound images before and after processing, both could support a good diagnosis. Moreover, the research results showed the correlation between CHF and sarcopenia.
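The PSNR figure quoted above is a standard image-quality metric computed from the mean squared error between a processed image and a reference. The following is a generic NumPy sketch, not the study's code; the synthetic "ultrasound" arrays and noise level are assumptions chosen so the value lands near the reported 20 dB.

```python
import numpy as np

def psnr(reference, processed, max_value=1.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference - processed) ** 2)
    return 10 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((128, 128))                       # stand-in ultrasound frame
noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)
print(round(psnr(clean, noisy), 1))                  # roughly 20 dB at sigma = 0.1
```

Higher PSNR means the processed image deviates less from the reference; SSIM complements it by scoring perceived structural similarity rather than raw pixel error.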


Author(s):  
Shaymaa Taha Ahmed ◽  
Suhad Malallah Kadhem

Chest imaging diagnostics is crucial in the medical area due to many serious lung diseases, such as cancers and nodules, and particularly with the current pandemic of COVID-19. Machine learning approaches yield prominent results in the task of diagnosis. Recently, deep learning methods have been utilized and recommended by many studies in this domain. This research aims to critically examine the newest lung disease detection procedures using deep learning algorithms on X-ray and CT scan datasets. The most recent studies in this area (2015-2021) have been reviewed and summarized to provide an overview of the most appropriate methods that should be used or developed in future works, the limitations that should be considered, and the level at which these techniques help physicians identify disease with better accuracy. Based on the literature, the main limitations have been the lack of varied standard datasets, the huge training sets required, the high dimensionality of data, and the independence of features. Many researchers use different deep learning architectures, but Convolutional Neural Networks (CNNs) remain the state-of-the-art technique for dealing with image datasets.

