Application of Artificial Intelligence in Cardiovascular Imaging

2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Panjiang Ma ◽  
Qiang Li ◽  
Jianbin Li

During the last two decades, as computer technology has matured and business scenarios have diversified, the scale of application of computer systems across industries has continued to expand, resulting in a huge increase in industry data. The medical industry has accumulated huge amounts of unstructured data, so exploring how to use medical image data more effectively to complete diagnoses efficiently has important practical impact. For a long time, China has been striving to promote medical informatization, and the combination of big data, artificial intelligence, and other advanced technologies in the medical field has become a hot industry and a new development trend. This paper focuses on cardiovascular diseases and uses deep learning methods to realize automatic analysis and diagnosis of medical images and to verify the feasibility of AI-assisted medical treatment. We aim to achieve a complete diagnosis from cardiovascular medical imaging and to localize vulnerable lesion areas. (1) We tested classical object detection based on convolutional neural networks, explored region segmentation algorithms, and demonstrated their application scenarios in medical imaging. (2) According to the data and task characteristics, we built a network model containing classification and regression nodes; after multitask joint training, both diagnosis and detection performance were enhanced. A weighted loss function mechanism is used to mitigate the class imbalance common in medical image analysis, further improving the model. (3) In actual medical practice, many medical images carry high-level category labels but lack low-level lesion labels. The proposed system demonstrates the possibility of lesion localization under weakly supervised conditions on cardiovascular imaging data. Experimental results verify that the proposed deep learning model can resolve the aforementioned issues with minimal changes to the underlying infrastructure.
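The weighted-loss idea in point (2) can be sketched as follows; the class counts, inverse-frequency weighting scheme, and per-sample loss function here are hypothetical illustrations of the general technique, not the authors' implementation:

```python
import math

def weighted_cross_entropy(probs, label, class_weights):
    """Cross-entropy for one sample, scaled by the weight of its true class.

    Rare classes get larger weights, so mistakes on them cost more and the
    model is pushed to attend to under-represented lesion types.
    """
    return -class_weights[label] * math.log(probs[label])

# Hypothetical class frequencies: class 1 appears in 10% of images.
counts = {0: 900, 1: 100}
total = sum(counts.values())
# Inverse-frequency weights: weights[1] = 1000 / (2 * 100) = 5.0
weights = {c: total / (len(counts) * n) for c, n in counts.items()}

# The same predicted confidence (0.7) is penalized 9x more on the rare class.
loss_common = weighted_cross_entropy([0.7, 0.3], 0, weights)
loss_rare = weighted_cross_entropy([0.3, 0.7], 1, weights)
```

In training frameworks this same effect is usually obtained by passing a per-class weight vector to the built-in cross-entropy loss rather than hand-rolling it.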

2019 ◽  
Vol 8 (4) ◽  
pp. 462 ◽  
Author(s):  
Muhammad Owais ◽  
Muhammad Arsalan ◽  
Jiho Choi ◽  
Kang Ryoung Park

Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. This problem can be alleviated by exploring similar cases in previous medical databases through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. A medical doctor now typically refers to several imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive multimodal databases. Although a few previous studies use deep features for classification, the number of classes they handle is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various imaging modalities, based on an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
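The classification-then-retrieval idea described above can be illustrated with a toy sketch. The image identifiers, feature vectors, and plain cosine ranking are assumptions for illustration only; the paper's enhanced ResNet supplies both the class prediction and the deep features in practice:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_feat, predicted_class, database, top_k=3):
    """Classification-based retrieval: rank only database entries whose
    class matches the classifier's prediction, sorted by feature similarity.
    Restricting the search to one predicted class keeps retrieval fast and
    avoids cross-modality false matches."""
    candidates = [(cosine(query_feat, feat), name)
                  for name, cls, feat in database if cls == predicted_class]
    return [name for _, name in sorted(candidates, reverse=True)[:top_k]]

# Toy database: (image id, modality class, feature vector from the CNN).
db = [
    ("ct_01", "CT", [0.9, 0.1, 0.0]),
    ("ct_02", "CT", [0.8, 0.2, 0.1]),
    ("mri_01", "MRI", [0.1, 0.9, 0.2]),
]
# Query classified as CT: only CT entries are ranked; MRI is never considered.
results = retrieve([0.85, 0.15, 0.05], "CT", db)
```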


2020 ◽  
Vol 237 (12) ◽  
pp. 1438-1441
Author(s):  
Soenke Langner ◽  
Ebba Beller ◽  
Felix Streckenbach

Abstract: Medical images play an important role in ophthalmology and radiology. Medical image analysis has greatly benefited from the application of "deep learning" techniques in clinical and experimental radiology. Clinical applications and their relevance for radiological imaging in ophthalmology are presented.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nicholas J. Tustison ◽  
Philip A. Cook ◽  
Andrew J. Holbrook ◽  
Hans J. Johnson ◽  
John Muschelli ◽  
...  

Abstract: The Advanced Normalization Tools ecosystem, known as ANTsX, consists of multiple open-source software libraries which house top-performing algorithms used worldwide by scientific and research communities for processing and analyzing biological and medical imaging data. The base software library, ANTs, is built upon, and contributes to, the NIH-sponsored Insight Toolkit. Founded in 2008 with the highly regarded Symmetric Normalization image registration framework, the ANTs library has since grown to include additional functionality. Recent enhancements include statistical, visualization, and deep learning capabilities through interfaces with both the R statistical project (ANTsR) and Python (ANTsPy). Additionally, the corresponding deep learning extensions ANTsRNet and ANTsPyNet (built on the popular TensorFlow/Keras libraries) contain several popular network architectures and trained models for specific applications. One such comprehensive application is a deep learning analog for generating cortical thickness data from structural T1-weighted brain MRI, both cross-sectionally and longitudinally. These pipelines significantly improve computational efficiency, provide comparable-to-superior accuracy over multiple criteria relative to the existing ANTs workflows, and simultaneously illustrate the importance of the comprehensive ANTsX approach as a framework for medical image analysis.


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5097 ◽  
Author(s):  
Satya P. Singh ◽  
Lipo Wang ◽  
Sukrit Gupta ◽  
Haveesh Goli ◽  
Parasuraman Padmanabhan ◽  
...  

The rapid advancements in machine learning and graphics processing technologies, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This trend was accelerated by rapid advancements in convolutional neural network (CNN) architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, provide a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review significant research in 3D medical image analysis using 3D CNNs (and their variants) across tasks such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and with deep learning models in general) and possible future trends in the field.
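One standard preprocessing step for volumes entering a 3D CNN is intensity normalization. This minimal sketch (flattened voxel list, toy values, and an illustrative clipping threshold, all assumptions rather than any specific paper's pipeline) shows z-score normalization with outlier clipping:

```python
import math

def zscore_normalize(volume, clip=3.0):
    """Zero-mean, unit-variance voxel intensities, with outliers clipped to
    +/- `clip` standard deviations. Normalized inputs keep activations in a
    stable range and make scans from different scanners comparable."""
    n = len(volume)
    mean = sum(volume) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in volume) / n)
    return [max(-clip, min(clip, (v - mean) / std)) for v in volume]

# One bright artifact voxel dominates the raw range; after normalization
# with a tight clip it is bounded instead of skewing the input scale.
voxels = [10.0, 12.0, 11.0, 13.0, 500.0]
normed = zscore_normalize(voxels, clip=1.5)
```

Real pipelines apply the same idea per volume (or per dataset) alongside resampling to a common voxel spacing and cropping to a fixed input shape.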


2020 ◽  
Vol 64 (2) ◽  
pp. 20508-1-20508-12 ◽  
Author(s):  
Getao Du ◽  
Xu Cao ◽  
Jimin Liang ◽  
Xueli Chen ◽  
Yonghua Zhan

Abstract: Medical image analysis solves clinical problems by analyzing images obtained from medical imaging systems; the purpose is to extract effective information and improve the level of clinical diagnosis. In recent years, automatic segmentation based on deep learning (DL) methods has been widely used: a neural network automatically learns image features, in sharp contrast to traditional manual feature engineering. U-net is one of the most important semantic segmentation frameworks built on convolutional neural networks (CNNs). It is widely used in medical image analysis for lesion segmentation, anatomical segmentation, and classification. The advantage of this network framework is that it can not only accurately segment the desired target and effectively process and objectively evaluate medical images, but also help to improve the accuracy of diagnosis from medical images. This article therefore presents a literature review of medical image segmentation based on U-net, focusing on successful segmentation of different lesion regions across six medical imaging systems. Along with the latest advances in DL, this article introduces methods that combine the original U-net architecture with deep learning and methods for improving the U-net network.
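Objective evaluation of a segmentation such as a U-net's output is commonly scored with the Dice coefficient. This sketch, with toy binary masks, illustrates the metric in general; it is not taken from the reviewed paper:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks (flattened voxel lists):
    2|A intersect B| / (|A| + |B|). 1.0 is perfect overlap, 0.0 is none.
    Two empty masks are treated as a perfect match by convention."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]   # predicted lesion mask
truth = [0, 1, 1, 0, 0, 0]   # ground-truth annotation
score = dice_coefficient(pred, truth)   # 2*2 / (3+2) = 0.8
```

Dice is also widely used directly as a (soft) training loss for segmentation networks, since it is insensitive to the large background class.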


Author(s):  
Nicholas J. Tustison ◽  
Philip A. Cook ◽  
Andrew J. Holbrook ◽  
Hans J. Johnson ◽  
John Muschelli ◽  
...  

Abstract: The Advanced Normalization Tools ecosystem, known as ANTsX, consists of multiple open-source software libraries which house top-performing algorithms used worldwide by scientific and research communities for processing and analyzing biological and medical imaging data. The base software library, ANTs, is built upon, and contributes to, the NIH-sponsored Insight Toolkit. Founded in 2008 with the highly regarded Symmetric Normalization image registration framework, the ANTs library has since grown to include additional functionality. Recent enhancements include statistical, visualization, and deep learning capabilities through interfaces with both the R statistical project (ANTsR) and Python (ANTsPy). Additionally, the corresponding deep learning extensions ANTsRNet and ANTsPyNet (built on the popular TensorFlow/Keras libraries) contain several popular network architectures and trained models for specific applications. One such comprehensive application is a deep learning analog for generating cortical thickness data from structural T1-weighted brain MRI. Not only does this significantly improve computational efficiency and provide comparable-to-superior accuracy over the existing ANTs pipelines, but it also illustrates the importance of the comprehensive ANTsX approach as a framework for medical image analysis.


2018 ◽  
Vol 7 (3.33) ◽  
pp. 115 ◽  
Author(s):  
Myung Jae Lim ◽  
Da Eun Kim ◽  
Dong Kun Chung ◽  
Hoon Lim ◽  
Young Man Kwon

Breast cancer is a highly dangerous disease that has killed many people all over the world, yet it can often be fully cured if detected early. To enable early detection, it is very important to classify accurately whether a sample is breast cancer or not. Recently, deep learning approaches applied to medical images, such as histopathologic images of breast cancer, have shown higher accuracy and efficiency than conventional methods. In this paper, breast cancer histopathological images that are difficult to distinguish were analyzed visually, and convolutional neural networks (CNNs), which specialize in image data, were used to perform a comparative analysis of whether a sample is breast cancer or not. Among CNN architectures, VGG16 and InceptionV3 were used, and transfer learning was applied for their effective use. The data used in this paper is the BreakHis breast cancer histopathological image dataset, which classifies images as benign or malignant. In the 2-class classification task, InceptionV3 achieved 98% accuracy. It is expected that this deep learning approach will support the development of disease diagnosis through medical images.
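The transfer-learning pattern used above (a pretrained backbone such as VGG16 or InceptionV3 kept frozen, with only a small classifier head trained on the target data) can be sketched without any deep learning framework. The logistic-regression head, toy "backbone features", and hyperparameters below are stand-ins for illustration, not the paper's setup:

```python
import math

def train_head(features, labels, lr=0.5, epochs=200):
    """Train only a logistic-regression 'head' on frozen backbone features:
    the pretrained feature extractor is fixed, so only this small final
    layer is fit to the small labeled dataset."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy "backbone features" for benign (0) vs malignant (1) patches.
feats = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.7, 0.8]]
labs = [0, 0, 1, 1]
w, b = train_head(feats, labs)
```

In a Keras-style workflow the same structure appears as a pretrained model loaded without its top layer, `trainable` set to false, and a new dense layer trained on top.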


2021 ◽  
Vol 1 ◽  
Author(s):  
Shanshan Wang ◽  
Guohua Cao ◽  
Yan Wang ◽  
Shu Liao ◽  
Qian Wang ◽  
...  

Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and their potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based reconstruction methods are emphasized, organized by their methodological designs and their performance in handling volumetric imaging data. It is expected that this review can help relevant researchers understand how to adapt AI for medical imaging and which advantages can be achieved with its assistance.
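A recurring building block in the deep learning reconstruction methods surveyed above is the data-consistency step that alternates with a learned denoiser in unrolled networks. The sketch below is deliberately simplified: it operates directly on sample values with toy numbers, whereas real MRI pipelines apply this in k-space after a Fourier transform:

```python
def data_consistency(estimate, measured, mask):
    """Enforce agreement with the acquired data: wherever a measurement
    exists (mask == 1), overwrite the network's estimate with the measured
    value; elsewhere keep the estimate. Unrolled reconstruction networks
    interleave this step with learned denoising so the output can never
    drift away from what was actually acquired."""
    return [m if keep else xi
            for xi, m, keep in zip(estimate, measured, mask)]

estimate = [0.5, 0.5, 0.5, 0.5]   # hypothetical denoiser output
measured = [1.0, 0.0, 0.2, 0.0]   # acquired samples (0.0 = not sampled)
mask     = [1,   0,   1,   0]     # 1 where a sample was acquired
recon = data_consistency(estimate, measured, mask)
# recon == [1.0, 0.5, 0.2, 0.5]: measured positions restored, gaps filled
# by the network's estimate.
```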


2019 ◽  
Vol 14 (4) ◽  
pp. 450-469 ◽  
Author(s):  
Jiechao Ma ◽  
Yang Song ◽  
Xi Tian ◽  
Yiting Hua ◽  
Rongguo Zhang ◽  
...  

Abstract: As a promising method in artificial intelligence, deep learning has been proven successful in several domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical imaging. Feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. Deep learning has been widely applied in medical imaging for improved image analysis. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection, and segmentation tasks in medical image analysis with respect to pulmonary medical images, datasets, and benchmarks. A comprehensive overview of these methods as applied to various lung diseases, including pulmonary nodules, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images and an analysis of their future challenges and potential directions are discussed.


Cancers ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1590
Author(s):  
Laith Alzubaidi ◽  
Muthana Al-Amidie ◽  
Ahmed Al-Asadi ◽  
Amjad J. Humaidi ◽  
Omran Al-Shamma ◽  
...  

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds, and the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring deep learning models with knowledge from a previous task and then fine-tuning them on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has been shown to be ineffective due to the mismatch between the features learned from natural images and those needed for medical images; it also leads to the use of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train on the small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin and breast cancer classification. According to the reported results, the proposed approach significantly improves performance in both scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved accuracies of 85.29% when trained from scratch and 97.51% with the proposed approach. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited, and that it can improve performance on further tasks in the same domain: using the pretrained skin cancer model to classify foot-skin images as either normal or abnormal (diabetic foot ulcer, DFU) achieved an F1-score of 86.0% when trained from scratch, 96.25% with transfer learning, and 99.25% with double-transfer learning.

