image modality
Recently Published Documents


TOTAL DOCUMENTS: 65 (FIVE YEARS: 26)

H-INDEX: 9 (FIVE YEARS: 3)

2021 ◽  
Vol 7 (9) ◽  
pp. 180
Author(s):  
Ana C. Morgado ◽  
Catarina Andrade ◽  
Luís F. Teixeira ◽  
Maria João M. Vasconcelos

With the increasing adoption of teledermatology, there is a need to improve the automatic organization of medical records, with dermatological image modality being a key filter in this process. Although there has been considerable effort in the classification of medical imaging modalities, little of this work has addressed the field of dermatology. Moreover, as various devices are used in teledermatological consultations, image acquisition conditions may differ. In this work, two models (VGG-16 and MobileNetV2) were used to classify dermatological images from the Portuguese National Health System according to their modality. Afterwards, four incremental learning strategies were applied to these models, namely naive, elastic weight consolidation, averaged gradient episodic memory, and experience replay, enabling their adaptation to new conditions while preserving previously acquired knowledge. The evaluation considered catastrophic forgetting, accuracy, and computational cost. The MobileNetV2 trained with the experience replay strategy, with 500 images in memory, achieved a global accuracy of 86.04% with a forgetting measure of only 0.0344, which is 6.98% less than the second-best strategy. Regarding efficiency, this strategy took 56 s per epoch longer than the baseline and required, on average, 4554 megabytes of RAM during training. These promising results demonstrate the effectiveness of the proposed approach.
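
A minimal sketch of the experience-replay idea described above, assuming a PyTorch/torchvision setup. Only the MobileNetV2 backbone and the 500-image memory budget come from the abstract; the class count, batch mixing, and hyperparameters are illustrative and not the authors' implementation.

```python
# Experience-replay sketch: train on new acquisition conditions while replaying
# a small buffer of previously seen images to limit catastrophic forgetting.
import random
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.mobilenet_v2(num_classes=4).to(device)   # hypothetical number of modality classes
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

MEMORY_SIZE = 500          # replay buffer capped at 500 images, as in the abstract
memory, seen = [], 0       # stores (image_tensor, int_label) pairs

def train_task(loader, epochs=5):
    """Train on a new task (acquisition condition) while replaying stored samples."""
    for _ in range(epochs):
        for images, labels in loader:
            if memory:  # mix the current batch with a small batch drawn from memory
                mem_imgs, mem_lbls = zip(*random.sample(memory, min(32, len(memory))))
                images = torch.cat([images, torch.stack(mem_imgs)])
                labels = torch.cat([labels, torch.tensor(mem_lbls)])
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

def update_memory(dataset):
    """Reservoir sampling keeps the buffer representative of all tasks seen so far."""
    global seen
    for img, lbl in dataset:           # dataset is assumed to yield (tensor, int) pairs
        seen += 1
        if len(memory) < MEMORY_SIZE:
            memory.append((img, lbl))
        elif random.random() < MEMORY_SIZE / seen:
            memory[random.randrange(MEMORY_SIZE)] = (img, lbl)
```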


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253205
Author(s):  
Chen-Hua Chiang ◽  
Chi-Lun Weng ◽  
Hung-Wen Chiu

Modern radiologic images comply with the DICOM (digital imaging and communications in medicine) standard; upon conversion to other image formats, they lose the image detail and information, such as patient demographics or the type of image modality, that the DICOM format carries. As there is growing interest in using large amounts of image data for research, and the acquisition of large volumes of medical images is now standard practice in the clinical setting, efficient handling and storage of image data is important in both the clinical and research settings. In this study, four classes of images were created, namely CT (computed tomography) of the abdomen, CT of the brain, MRI (magnetic resonance imaging) of the brain, and MRI of the spine. After converting these images into JPEG (Joint Photographic Experts Group) format, our proposed CNN architecture could automatically classify these four groups of medical images by both their image modality and anatomic location. We achieved excellent overall classification accuracy in both the validation and test sets (> 99.5%), as well as specificity and F1 score (> 99%) in each category of this dataset, which contained both diseased and normal images. Our study shows that using a CNN for medical image classification is a promising methodology that works on non-DICOM images, potentially saving image processing time and storage space.
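
The paper's exact CNN architecture is not reproduced here; the sketch below only illustrates the overall setup with a small PyTorch classifier over the four JPEG classes (CT abdomen, CT brain, MRI brain, MRI spine). Folder paths and hyperparameters are hypothetical.

```python
# Small four-class CNN over JPEG images organized in class folders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Grayscale(),                 # modality/anatomy cues survive grayscale conversion
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("jpeg_dataset/train", transform=tfm)  # hypothetical folder layout
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 4),         # 4 classes: modality x anatomic location
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for x, y in train_dl:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
```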


Author(s):  
Lars Christian Ebert ◽  
Sabine Franckenberg ◽  
Till Sieberth ◽  
Wolf Schweitzer ◽  
Michael Thali ◽  
...  

Postmortem computed tomography (PMCT) is a standard image modality used in forensic death investigations. Case- and audience-specific visualizations are vital for identifying relevant findings and communicating them appropriately. Different data types and visualization methods exist in 2D and 3D, and all of these types have specific applications. 2D visualizations are more suited for the radiological assessment of PMCT data because they allow the depiction of subtle details. 3D visualizations are better suited for creating visualizations for medical laypersons, such as state attorneys, because they maintain the anatomical context. Visualizations can be refined by using additional techniques, such as annotation or layering. Specialized methods such as 3D printing and virtual and augmented reality often require data conversion. The resulting data can also be used to combine PMCT data with other 3D data such as crime scene laser scans to create crime scene reconstructions. Knowledge of these techniques is essential for the successful handling of PMCT data in a forensic setting. In this review, we present an overview of current visualization techniques for PMCT.
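
A toy illustration of the 2D-versus-3D point above: an axial slice shows local radiological detail, while a simple maximum-intensity projection preserves more anatomical context. The PMCT volume is assumed to be available as a NumPy array (e.g. previously loaded from DICOM); the file name is a placeholder.

```python
import numpy as np
import matplotlib.pyplot as plt

volume = np.load("pmct_volume.npy")          # hypothetical (slices, rows, cols) CT volume

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(volume[volume.shape[0] // 2], cmap="gray")   # single axial slice: subtle detail
ax1.set_title("Axial slice (2D)")
ax2.imshow(volume.max(axis=0), cmap="gray")             # projection over all slices: anatomical context
ax2.set_title("Maximum-intensity projection")
plt.show()
```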


2021 ◽  
Vol 7 ◽  
pp. e491
Author(s):  
Nikita Bhatt ◽  
Amit Ganatra

Cross-modal retrieval (CMR) has attracted much attention in the research community due to its flexible and comprehensive retrieval. The core challenge in CMR is the heterogeneity gap, which arises from the different statistical properties of multi-modal data. The most common solution for bridging the heterogeneity gap is representation learning, which generates a common sub-space. In this work, we propose a framework called "Improvement of Deep Cross-Modal Retrieval (IDCMR)", which generates real-valued representations. IDCMR preserves both intra-modal and inter-modal similarity. Intra-modal similarity is preserved by selecting an appropriate training model for the text and image modalities. Inter-modal similarity is preserved by reducing a modality-invariance loss. Mean average precision (mAP) is used as the performance measure for the CMR system. Extensive experiments show that IDCMR outperforms state-of-the-art methods in mAP by margins of 4% and 2% on the text-to-image and image-to-text retrieval tasks on the MSCOCO and Xmedia datasets, respectively.
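
A hedged sketch of the common-subspace idea: two projection heads map image and text features into a shared space, a classifier keeps the embeddings semantically meaningful, and a "modality-invariance" term pulls paired embeddings together. This illustrates the concept only; it is not the IDCMR architecture or its exact loss, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonSubspace(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, common_dim=256, n_classes=20):
        super().__init__()
        self.img_proj = nn.Sequential(nn.Linear(img_dim, common_dim), nn.ReLU(),
                                      nn.Linear(common_dim, common_dim))
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, common_dim), nn.ReLU(),
                                      nn.Linear(common_dim, common_dim))
        self.classifier = nn.Linear(common_dim, n_classes)   # semantic (intra-modal) signal

    def forward(self, img_feat, txt_feat):
        return self.img_proj(img_feat), self.txt_proj(txt_feat)

def loss_fn(model, img_feat, txt_feat, labels, alpha=0.5):
    img_emb, txt_emb = model(img_feat, txt_feat)
    # semantic loss: both modalities should predict the same label
    sem = F.cross_entropy(model.classifier(img_emb), labels) + \
          F.cross_entropy(model.classifier(txt_emb), labels)
    # modality-invariance loss: paired image/text embeddings should coincide
    inv = F.mse_loss(img_emb, txt_emb)
    return sem + alpha * inv
```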


2021 ◽  
Vol 9 (1) ◽  
pp. 1406-1412
Author(s):  
K. Santhi ◽  
A. Rama Mohan Reddy

Cardiovascular disease (CVD) is one of the most critical diseases and the most common cause of morbidity and mortality worldwide. Therefore, early detection and prediction of such a disease is extremely important for a healthy life. Cardiac imaging plays an important role in the diagnosis of cardiovascular disease, but its role has been limited to the visual assessment of heart structure and function. However, with advanced big data and machine learning techniques and tools, it becomes easier for clinicians to diagnose CVD. Stenoses within the coronary arteries (CA) are often determined using the Coronary Cine Angiogram (CCA), an invasive image modality. CCA is an effective method for detecting and predicting stenosis. In this paper, an automated coronary analysis method is proposed for disease diagnosis. The proposed method includes pre-processing, segmentation, vessel-path identification, and statistical analysis.
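
One possible realization of the pipeline stages named above (pre-processing, vessel segmentation, vessel-path extraction, simple statistics), sketched with scikit-image and SciPy. The paper's actual method is not reproduced; the file name and thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage import io, exposure, filters, morphology

frame = io.imread("cca_frame.png", as_gray=True)        # hypothetical angiogram frame

# 1. pre-processing: local contrast enhancement (CLAHE)
enhanced = exposure.equalize_adapthist(frame)

# 2. segmentation: Frangi vesselness filter followed by Otsu thresholding
vesselness = filters.frangi(enhanced)
vessel_mask = vesselness > filters.threshold_otsu(vesselness)

# 3. vessel path: skeletonize the mask to a one-pixel-wide centerline
centerline = morphology.skeletonize(vessel_mask)

# 4. statistical analysis: local vessel width along the centerline; abrupt
#    narrowing along the path would flag a stenosis candidate
distance = ndimage.distance_transform_edt(vessel_mask)   # half-width at each vessel pixel
widths = 2 * distance[centerline]
print(f"mean width: {widths.mean():.1f} px, min width: {widths.min():.1f} px")
```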


2021 ◽  
Vol 5 (1) ◽  
pp. 1-36
Author(s):  
Ghulam Murtaza ◽  
Ainuddin Wahid Abdul Wahab ◽  
Ghulam Raza ◽  
Liyana Shuib

Globally, breast cancer (BC) is a prevailing cause of untimely deaths in women. A breast tumor (BT) is a primary symptom and may lead to BC. The digital histology (DH) image modality is the gold-standard medical test for a definitive diagnosis of BC. Traditionally, DH images are visually examined by two or more pathologists to reach a consensus for authentic BC detection, which may cause a high error rate. Therefore, researchers have developed automated BC detection models using machine learning (ML) based approaches. This study aims to develop a BC detection model using ten feature extraction methods that extract both local and global features from a publicly available breast histology dataset. The extracted features are sorted by their weights, which are computed with the neighborhood component analysis method. A feature selection algorithm is developed to find the minimum number of discriminating features, evaluated through seven heterogeneous traditional ML classifiers. The proposed ML-based BC detection model achieved 90% accuracy on the initial testing set using 51 Harris features, whereas on the extended testing set only three Harris features yielded 93% accuracy. The proposed BC detection model can assist doctors by providing a second opinion.
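
A rough sketch of the weighting-and-selection loop described above: features are weighted with neighborhood component analysis, ranked, and the best-performing prefix under cross-validation is kept. Data loading is a placeholder, the classifier is a stand-in for the paper's seven, and deriving feature weights from the column norms of the learned NCA transform is one interpretation, not necessarily the paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = np.load("features.npy"), np.load("labels.npy")    # hypothetical extracted features

# weight each original feature by its contribution to the learned NCA transform
nca = NeighborhoodComponentsAnalysis(random_state=0).fit(X, y)
weights = np.linalg.norm(nca.components_, axis=0)        # one weight per feature
ranked = np.argsort(weights)[::-1]                       # most discriminating first

# grow the feature set one ranked feature at a time and keep the best CV score
best_k, best_score = 1, 0.0
for k in range(1, len(ranked) + 1):
    score = cross_val_score(SVC(), X[:, ranked[:k]], y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score
print(f"selected {best_k} features, CV accuracy {best_score:.3f}")
```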


Author(s):  
Huangpeng Dai ◽  
Qing Xie ◽  
Yanchun Ma ◽  
Yongjian Liu ◽  
Shengwu Xiong
