Artificial Intelligence-Based Diagnosis of Cardiac and Related Diseases

2020 ◽  
Vol 9 (3) ◽  
pp. 871 ◽  
Author(s):  
Muhammad Arsalan ◽  
Muhammad Owais ◽  
Tahir Mahmood ◽  
Jiho Choi ◽  
Kang Ryoung Park

Automatic chest anatomy segmentation plays a key role in computer-aided disease diagnosis, such as for cardiomegaly, pleural effusion, emphysema, and pneumothorax. Among these diseases, cardiomegaly is considered a perilous disease, involving a high risk of sudden cardiac death. It can be diagnosed early by an expert medical practitioner using chest X-ray (CXR) analysis. The cardiothoracic ratio (CTR) and transverse cardiac diameter (TCD) are the clinical criteria used to estimate heart size for diagnosing cardiomegaly. Manual estimation of the CTR and assessment of other diseases is a time-consuming process that requires significant work by the medical expert. Cardiomegaly and related diseases can be assessed automatically through accurate anatomical semantic segmentation of CXRs using artificial intelligence. Automatic segmentation of the lungs and heart from CXRs is considered an intensive task owing to inferior-quality images and intensity variations arising from nonideal imaging conditions. Although there are a few deep learning-based techniques for chest anatomy segmentation, most of them consider only single-class lung segmentation and use deep, complex architectures that require a large number of trainable parameters. To address these issues, this study presents two multiclass residual mesh-based CXR segmentation networks, X-RayNet-1 and X-RayNet-2, which are specifically designed to provide fine segmentation performance with few trainable parameters compared to conventional deep learning schemes. The proposed methods utilize semantic segmentation to support the diagnostic procedure for related diseases. To evaluate X-RayNet-1 and X-RayNet-2, experiments were performed with the publicly available Japanese Society of Radiological Technology (JSRT) dataset for multiclass segmentation of the lungs, heart, and clavicle bones; two other publicly available datasets, the Montgomery County (MC) and Shenzhen X-ray (SC) sets, were used to evaluate lung segmentation. The experimental results showed that X-RayNet-1 achieved fine performance on all datasets and X-RayNet-2 achieved competitive performance with a 75% parameter reduction.
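
Once the heart and lung fields have been segmented, the CTR can be derived directly from the binary masks. Below is a minimal sketch assuming NumPy masks produced by a network such as X-RayNet; the function names are illustrative, not from the paper, and the thoracic width is approximated by the horizontal extent of the combined lung fields.

```python
import numpy as np

def max_horizontal_extent(mask: np.ndarray) -> int:
    """Widest horizontal span (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """CTR = maximum transverse cardiac diameter / maximum thoracic diameter."""
    tcd = max_horizontal_extent(heart_mask)        # transverse cardiac diameter (TCD)
    thoracic = max_horizontal_extent(lung_mask)    # thoracic diameter (approximation)
    return tcd / thoracic if thoracic else float("nan")

# A CTR above roughly 0.5 is the usual clinical threshold suggesting cardiomegaly.
```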

2021 ◽  
Author(s):  
Soumava Dey ◽  
Gunther Correia Bacellar ◽  
Mallikarjuna Basappa Chandrappa ◽  
Raj Kulkarni

The rise of the coronavirus disease 2019 (COVID-19) pandemic has made it necessary to improve existing medical screening and clinical management of this disease. While COVID-19 patients are known to exhibit a variety of symptoms, the major symptoms include fever, cough, and fatigue. Since these symptoms also appear in pneumonia patients, this creates complications in COVID-19 detection, especially during the flu season. Early studies identified abnormalities in chest X-ray images of COVID-19 infected patients that could be beneficial for disease diagnosis. Therefore, chest X-ray image-based disease classification has emerged as an alternative to aid medical diagnosis. However, manual detection of COVID-19 from a set of chest X-ray images comprising both COVID-19 and pneumonia cases is cumbersome and prone to human error. Thus, artificial intelligence techniques powered by deep learning algorithms, which learn from radiography images and predict the presence of COVID-19, have the potential to enhance the current diagnostic process. Toward this purpose, we implemented a set of pre-trained deep learning models, such as ResNet, VGG, Inception, and EfficientNet, in conjunction with developing a computer vision AI system based on our own convolutional neural network (CNN) model: Deep Learning in Healthcare (DLH)-COVID. All of these CNN models address the image classification task. We used publicly available resources comprising 6,432 images and further strengthened our model by tuning hyperparameters to provide better generalization during the model validation phase. Our final DLH-COVID model yielded the highest accuracy of 96% in detecting COVID-19 from chest X-ray images when compared to images of both pneumonia-affected and healthy individuals. Given the practicality of patients acquiring chest X-ray images, we also developed a web application (link: https://toad.li/xray) based on our model to directly enable users to upload chest X-ray images and detect the presence of COVID-19 within a few seconds. Taken together, we introduce here a state-of-the-art artificial intelligence-based system for efficient COVID-19 detection and a user-friendly application that has the capacity to become a rapid COVID-19 diagnosis method in the near future.
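
The general setup of a multi-class CXR classifier of this kind can be sketched as follows. This is a hypothetical stand-in architecture, not the actual DLH-COVID layer layout, which is not described in the abstract; layer sizes and hyperparameters are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cxr_classifier(input_shape=(224, 224, 3), num_classes=3):
    """Simple CNN for COVID-19 / pneumonia / normal chest X-ray classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                       # tunable hyperparameter
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```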


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Lars Banko ◽  
Phillip M. Maffettone ◽  
Dennis Naujoks ◽  
Daniel Olds ◽  
Alfred Ludwig

We apply variational autoencoders (VAE) to X-ray diffraction (XRD) data analysis on both simulated and experimental thin-film data. We show that crystal structure representations learned by a VAE reveal latent information, such as the structural similarity of textured diffraction patterns. While other artificial intelligence (AI) agents are effective at classifying XRD data into known phases, a similarly conditioned VAE is uniquely effective at knowing what it doesn’t know: it can rapidly identify data outside the distribution it was trained on, such as novel phases and mixtures. These capabilities demonstrate that a VAE is a valuable AI agent for aiding materials discovery and understanding XRD measurements both ‘on-the-fly’ and during post hoc analysis.
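
One simple way a VAE can "know what it doesn't know" is by thresholding the reconstruction error of an incoming pattern against the errors seen on training data. The minimal sketch below assumes a trained `vae` object with `encode`/`decode` methods (hypothetical names, not the authors' API) and flattened 1D diffraction patterns.

```python
import numpy as np

def reconstruction_errors(vae, patterns: np.ndarray) -> np.ndarray:
    """Mean squared reconstruction error per diffraction pattern."""
    z_mean, _ = vae.encode(patterns)          # hypothetical encoder call
    recon = vae.decode(z_mean)                # hypothetical decoder call
    return np.mean((patterns - recon) ** 2, axis=1)

def flag_out_of_distribution(vae, train_patterns, new_patterns, quantile=0.99):
    """Flag new XRD patterns whose error exceeds a training-set quantile."""
    threshold = np.quantile(reconstruction_errors(vae, train_patterns), quantile)
    return reconstruction_errors(vae, new_patterns) > threshold
```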


Author(s):  
Jinyuan Dang ◽  
Hu Li ◽  
Kai Niu ◽  
Zhiyuan Xu ◽  
Jianhao Lin ◽  
...  

2020 ◽  
Vol 10 (20) ◽  
pp. 7347
Author(s):  
Jihyo Seo ◽  
Hyejin Park ◽  
Seungyeon Choo

Artificial intelligence presents an optimized alternative by applying problem-solving knowledge and problem-solving processes under specific conditions. This makes it possible to creatively examine various design alternatives under conditions that satisfy the functional requirements of the building. In this study, in order to develop architectural design automation technology using artificial intelligence, the characteristics of architectural drawings, that is, the architectural elements and the composition of spaces expressed in the drawings, were learned, recognized, and inferred through deep learning. The biggest problem in applying deep learning in the field of architectural design is that the amount of publicly disclosed data is absolutely insufficient and that the publicly disclosed data take a wide variety of forms. Using the technology proposed in this study, it is possible to quickly and easily create labeled images of drawings, so it is expected that a large number of datasets usable for deep learning for the automatic recommendation of architectural designs or automatic 3D modeling can be obtained. This will be the basis for architectural design technology using artificial intelligence in the future, as it can propose an architectural plan that meets specific circumstances or requirements.


2019 ◽  
Vol 8 (9) ◽  
pp. 1446 ◽  
Author(s):  
Arsalan ◽  
Owais ◽  
Mahmood ◽  
Cho ◽  
Park

Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these diseases, diabetic retinopathy, which is a leading cause of vision loss, can be diagnosed early through the detection of retinal vessels. The manual detection of these retinal vessels is a time-consuming process that can be automated with the help of artificial intelligence and deep learning. The detection of vessels is difficult due to intensity variation and noise from non-ideal imaging. Although there are deep learning approaches for vessel segmentation, these methods require many trainable parameters, which increase the network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks, but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net method, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart Health Study in England (CHASE-DB1), and structured analysis of the retina (STARE). Experimental results show that Vess-Net achieved superior performance on all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for the STARE dataset.
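
For reference, metrics of this kind are computed pixel-wise against ground-truth vessel masks. A minimal sketch using NumPy and scikit-learn (variable names are illustrative, not from the paper):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def vessel_metrics(y_true: np.ndarray, y_prob: np.ndarray, threshold=0.5):
    """Sensitivity, specificity, accuracy, and AUC for binary vessel masks.

    y_true: flattened ground-truth labels (0 = background, 1 = vessel)
    y_prob: flattened predicted vessel probabilities
    """
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    se = tp / (tp + fn)                     # sensitivity (recall)
    sp = tn / (tn + fp)                     # specificity
    acc = (tp + tn) / y_true.size           # accuracy
    auc = roc_auc_score(y_true, y_prob)     # area under the ROC curve
    return se, sp, acc, auc
```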


2021 ◽  
Author(s):  
Jeniffer Luz ◽  
Scenio De Araujo ◽  
Caio Abreu ◽  
Juvenal Silva Neto ◽  
Carlos Gulo

Since the beginning of the COVID-19 outbreak, the scientific community has been making efforts in several areas, either by seeking vaccines or by improving early diagnosis of the disease, to contribute to the fight against the SARS-CoV-2 virus. The use of X-ray imaging exams has become an ally in early diagnosis and has been the subject of research by the medical image processing and analysis community. Although diagnosing diseases from images is a consolidated research theme, the proposed approach aims to: a) apply state-of-the-art machine learning techniques to X-ray images for COVID-19 diagnosis; b) identify COVID-19 features in imaging examinations; and c) develop an artificial intelligence model to reduce the disease diagnosis time; in addition to demonstrating the potential of the artificial intelligence area as an incentive for the formation of critical mass and for encouraging research in machine learning and in the processing and analysis of medical images in the State of Mato Grosso, Brazil. Initial results were obtained from experiments carried out with an SVM (Support Vector Machine) classifier, induced on a publicly available image dataset from the Kaggle repository. Six attributes suggested by Haralick, calculated on the gray-level co-occurrence matrix, were used to represent the images. The prediction model was able to achieve 82.5% accuracy in recognizing the disease. The next stage of the work includes the study of deep learning models.
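
A texture-plus-SVM pipeline of this kind can be sketched with scikit-image and scikit-learn. The six gray-level co-occurrence matrix (GLCM) properties below are an assumed choice of Haralick-style features, since the abstract does not list which six were used; distances, angles, and kernel are likewise assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

# Six GLCM-derived texture properties (assumed selection of Haralick-style features).
PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]

def glcm_features(image: np.ndarray) -> np.ndarray:
    """Compute texture features from an 8-bit grayscale X-ray image."""
    glcm = graycomatrix(image, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0] for p in PROPS])

def train_classifier(images, labels):
    """Fit an SVM on GLCM features; `images` is a list of grayscale arrays."""
    X = np.stack([glcm_features(img) for img in images])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```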


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7116
Author(s):  
Lucas O. Teixeira ◽  
Rodolfo M. Pereira ◽  
Diego Bertolini ◽  
Luiz S. Oliveira ◽  
Loris Nanni ◽  
...  

COVID-19 frequently provokes pneumonia, which can be diagnosed using imaging exams. Chest X-ray (CXR) is often useful because it is cheap, fast, widespread, and uses less radiation. Here, we demonstrate the impact of lung segmentation on COVID-19 identification using CXR images and evaluate which contents of the image influenced the classification the most. Semantic segmentation was performed using a U-Net CNN architecture, and classification using three CNN architectures (VGG, ResNet, and Inception). Explainable artificial intelligence techniques were employed to estimate the impact of segmentation. A three-class database was composed: lung opacity (pneumonia), COVID-19, and normal. We assessed the impact of creating a CXR image database from different sources, and the generalization of COVID-19 identification from one source to another. The segmentation achieved a Jaccard distance of 0.034 and a Dice coefficient of 0.982. The classification using segmented images achieved an F1-score of 0.88 for the multi-class setup and 0.83 for COVID-19 identification. In the cross-dataset scenario, we obtained an F1-score of 0.74 and an area under the ROC curve of 0.9 for COVID-19 identification using segmented images. The experiments support the conclusion that, even after segmentation, there is a strong bias introduced by underlying factors from the different sources.
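
The reported segmentation scores can be computed directly from the predicted and ground-truth lung masks; note that a Jaccard distance of 0.034 corresponds to an intersection-over-union of about 0.966. A minimal sketch (function name is illustrative):

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, target: np.ndarray, eps=1e-7):
    """Dice coefficient and Jaccard distance for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * intersection + eps) / (pred.sum() + target.sum() + eps)
    jaccard_distance = 1.0 - (intersection + eps) / (union + eps)
    return dice, jaccard_distance
```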


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 669
Author(s):  
Irfan Ullah Khan ◽  
Nida Aslam ◽  
Talha Anwar ◽  
Hind S. Alsaif ◽  
Sara Mhd. Bachar Chrouf ◽  
...  

The coronavirus pandemic (COVID-19) is disrupting the entire world; its rapid global spread threatens to affect millions of people. Accurate and timely diagnosis of COVID-19 is essential to control the spread and alleviate risk. Machine learning (ML), and deep learning (DL) in particular, has achieved promising results in automating the diagnosis of multiple diseases. In the current study, a deep learning-based model is proposed for the automated diagnosis of COVID-19 using chest X-ray (CXR) images and clinical data of the patient. The aim of this study is to investigate the effects of integrating clinical patient data with the CXR for automated COVID-19 diagnosis. The proposed model used data collected from King Fahad University Hospital, Dammam, KSA, consisting of 270 patient records. The experiments were carried out first with clinical data, second with the CXR, and finally with clinical data and CXR combined. A fusion technique was used to combine the clinical features and the features extracted from images. The study found that integrating clinical data with the CXR improves diagnostic accuracy. Using the clinical data and the CXR, the model achieved an accuracy of 0.970, a recall of 0.986, a precision of 0.978, and an F-score of 0.982. Further validation was performed by comparing the performance of the proposed system with the diagnosis of an expert. The results show that the proposed system can be used as a tool to help doctors in COVID-19 diagnosis.
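
One common way to fuse the two modalities is to concatenate a CNN image embedding with the clinical feature vector before a shared classification head. The Keras sketch below illustrates this late-fusion idea under stated assumptions (ResNet50 backbone, 10 clinical features, preprocessed inputs); it is not the exact fusion architecture used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(img_shape=(224, 224, 3), n_clinical=10):
    """Combine CXR image features with clinical data for binary COVID-19 diagnosis."""
    # Image branch: pre-trained backbone used as a feature extractor.
    img_in = layers.Input(shape=img_shape)
    backbone = tf.keras.applications.ResNet50(include_top=False,
                                               weights="imagenet",
                                               pooling="avg")
    img_feat = backbone(img_in)

    # Clinical branch: tabular patient features (n_clinical is an assumption).
    clin_in = layers.Input(shape=(n_clinical,))
    clin_feat = layers.Dense(32, activation="relu")(clin_in)

    # Fusion: concatenate both feature vectors, then classify.
    fused = layers.concatenate([img_feat, clin_feat])
    x = layers.Dense(64, activation="relu")(fused)
    out = layers.Dense(1, activation="sigmoid")(x)

    model = Model(inputs=[img_in, clin_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Recall(),
                           tf.keras.metrics.Precision()])
    return model
```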


Author(s):  
Mohammed Y. Kamil

COVID-19 disease spread rapidly all over the world at the beginning of this year. Hospital reports have indicated the low sensitivity of RT-PCR tests in the early stage of infection, at which point a rapid and accurate diagnostic technique is needed to detect COVID-19. CT has been demonstrated to be a successful tool in the diagnosis of the disease. A deep learning framework can be developed to aid in evaluating CT exams and providing a diagnosis, thus saving time for disease control. In this work, a deep learning model was adapted for COVID-19 detection via feature extraction from chest X-ray and CT images. Initially, several transfer-learning models were applied and compared; then, a VGG-19 model was tuned to obtain the best results, which can be adopted for disease diagnosis. Diagnostic performance was assessed for all models using a dataset of 1000 images. The VGG-19 model achieved the highest accuracy of 99%, sensitivity of 97.4%, and specificity of 99.4%. Deep learning and image processing demonstrated high performance in early COVID-19 detection and show promise as an auxiliary detection tool for clinicians, thus contributing to the control of the pandemic.
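
Tuning a pre-trained VGG-19 in this way typically means freezing the convolutional base and training a new classification head. The Keras sketch below shows that general pattern; the head sizes and learning rate are assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_vgg19_covid_detector(input_shape=(224, 224, 3)):
    """VGG-19 feature extractor with a new binary classification head."""
    base = tf.keras.applications.VGG19(include_top=False,
                                       weights="imagenet",
                                       input_shape=input_shape)
    base.trainable = False                       # freeze pre-trained features
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)  # assumed head size
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)

    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```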


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 331 ◽  
Author(s):  
Yifeng Xu ◽  
Huigang Wang ◽  
Xing Liu ◽  
Henry He ◽  
Qingyue Gu ◽  
...  

Recent advances in deep learning have shown exciting promise in low-level artificial intelligence tasks such as image classification, speech recognition, object detection, and semantic segmentation. Artificial intelligence has also made an important contribution to autopilot, which is a complex, high-level intelligence task. However, real autopilot scenes are quite complicated. The first autopilot accident, a fatal crash, occurred in 2016 when the white side of a vehicle appeared similar to a brightly lit sky. The root of the problem is that the autopilot vision system cannot identify a part of a vehicle when that part is similar to the background. A method called DIDA is proposed for the first time, based on deep learning networks, to see the hidden part. DIDA cascades the following steps: object detection, scaling, image inpainting assuming a hidden part beside the car, object re-detection from the inpainted image, zooming back to the original size, and setting an alarm region by comparing the two detected regions. DIDA was tested in a similar scene and achieved exciting results. This method solves the aforementioned problem using optical signals alone. Additionally, the vehicle dataset captured in Xi’an, China can be used in subsequent research.
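
The cascade described above can be outlined as follows. This is only a loose sketch of the idea: `detect_objects` and `inpaint_region` are hypothetical placeholders (no specific detector or inpainting model is named here), and the comparison rule is a simplified assumption.

```python
def dida_alarm_region(image, detect_objects, inpaint_region, scale=1.5):
    """Sketch of the DIDA cascade: detect, scale, inpaint, re-detect, compare.

    detect_objects(img) -> list of (x, y, w, h) boxes   (hypothetical detector)
    inpaint_region(img, box) -> image with an assumed hidden part filled in
    """
    first_boxes = detect_objects(image)                  # initial detection
    alarm_regions = []
    for (x, y, w, h) in first_boxes:
        # Enlarge the box to cover a possible hidden part beside the vehicle.
        grown = (x, y, int(w * scale), h)
        inpainted = inpaint_region(image, grown)         # fill the assumed hidden part
        second_boxes = detect_objects(inpainted)         # re-detect on inpainted image
        for new_box in second_boxes:
            # If re-detection extends beyond the original box, flag the region.
            if new_box[2] > w or new_box[3] > h:
                alarm_regions.append(new_box)
    return alarm_regions
```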

