Transfer-Deep Learning Application for Ultrasonic Computed Tomographic Image Classification

Author(s):  
Marwa Fradi ◽  
Mouna Afif ◽  
El-Hadi Zahzeh ◽  
Kais Bouallegue ◽  
Mohsen Machhout


Author(s):  
Sumit Kaur
Sumit Kaur

Abstract- Deep learning is an emerging research area in the machine learning and pattern recognition field, introduced with the goal of moving machine learning closer to one of its original objectives: artificial intelligence. It attempts to mimic the human brain, which can process and learn from complex input data and solve many kinds of complicated tasks well. Deep learning (DL) is fundamentally based on a set of supervised and unsupervised algorithms that attempt to model high-level abstractions in data and to learn hierarchical representations for classification. In recent years, it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of deep learning techniques for remote sensing image classification.


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required to solve image classification problems with deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine how the networks perform on real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two different experimental setups, and emphasizes the significant improvements in the performance of deep learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training commonly used deep learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer together with commonly used deep learning-based networks on synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the best AUC value was measured as 88.9%. A total of 432 different training combinations were investigated across the experimental setups: the networks were trained with various DL architectures using four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. Test accuracy AUC values for the networks in experiment 1 ranged from 55% to 74%, whereas those for the experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer thus has a considerable effect on the image classification accuracy of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
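The "nearest three-point selection" step of the CDNTS layer can be illustrated with a short sketch. The abstract does not specify the layer's implementation, so the function name, the reference point, and the sample coordinates below are hypothetical; the sketch only shows selecting the three corners closest to a reference point, with the corner coordinates assumed to come from a standard detector (e.g., Harris).

```python
import numpy as np

def nearest_three_points(corners, ref):
    """Select the three corner coordinates closest to a reference point.

    corners: (N, 2) array of detected corner positions (from any detector).
    ref:     (2,) reference point, e.g. the image center.
    """
    # Euclidean distance from each corner to the reference point.
    d = np.linalg.norm(corners - ref, axis=1)
    # Indices of the three smallest distances, in increasing order.
    idx = np.argsort(d)[:3]
    return corners[idx]

# Hypothetical corner detections, selected relative to the origin:
corners = np.array([[0.0, 0.0], [10.0, 10.0], [1.0, 1.0], [2.0, 2.0], [9.0, 9.0]])
nearest = nearest_three_points(corners, np.array([0.0, 0.0]))
```
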


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, several image processing methods have been developed to determine the cell cycle stage of individual cells. However, most of these methods require cells to be segmented and their features to be extracted; during feature extraction, important information may be lost, resulting in lower classification accuracy. We therefore used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of the original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation, together with a residual network (ResNet), one of the most widely used deep learning classification networks, for image classification. With our method, the classification accuracy on cell cycle images reached 83.88%, an increase of 4.48 percentage points over the 79.40% obtained in previous experiments. On another dataset used to verify the model, accuracy increased by 12.52 percentage points over previous results. These results show that our cell cycle image classification system based on WGAN-GP and ResNet is useful for classifying imbalanced images, and that it could help address the low classification accuracy in biomedical images caused by insufficient and imbalanced original image sets.
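The gradient penalty that distinguishes WGAN-GP from the original WGAN can be sketched numerically. This is a minimal illustration, not the authors' implementation: it assumes a linear critic D(x) = x @ w, so the critic's input gradient is simply w in closed form (a real network would obtain it via automatic differentiation), and the function name and the λ = 10 default are choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(w, real, fake, lam=10.0):
    """WGAN-GP penalty for a linear critic D(x) = x @ w (illustrative only).

    The penalty pushes the norm of the critic's input gradient toward 1
    on points interpolated between real and generated samples.
    """
    # Random interpolation between real and fake batches (per WGAN-GP).
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    # For a linear critic, the gradient w.r.t. the input is w at every x_hat;
    # a deep critic would compute this with autograd instead.
    grads = np.broadcast_to(w, x_hat.shape)
    norms = np.linalg.norm(grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)
```

With a unit-norm w the penalty vanishes; scaling w away from norm 1 makes the penalty grow quadratically, which is the regularization that stabilizes WGAN training.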


2021 ◽  
pp. 028418512098397
Author(s):  
Yang Li ◽  
Hong Qiu ◽  
Zhihui Hou ◽  
Jianfeng Zheng ◽  
Jianan Li ◽  
...  

Background: Deep learning (DL) has achieved great success in medical imaging and could be utilized for the non-invasive calculation of fractional flow reserve (FFR) from coronary computed tomographic angiography (CCTA) (CT-FFR).

Purpose: To examine the ability of a DL-based CT-FFR in detecting hemodynamic changes of stenosis.

Material and Methods: This study included 73 patients (85 vessels) who were suspected of coronary artery disease (CAD) and received CCTA followed by invasive FFR measurements within 90 days. The diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristics curve (AUC) were compared between CT-FFR and CCTA. Thirty-nine patients who received drug therapy instead of revascularization were followed for up to 31 months. Major adverse cardiac events (MACE), unstable angina, and rehospitalization were evaluated and compared between the study groups.

Results: At the patient level, CT-FFR achieved 90.4%, 93.6%, 88.1%, 85.3%, and 94.9% in accuracy, sensitivity, specificity, PPV, and NPV, respectively. At the vessel level, CT-FFR achieved 91.8%, 93.9%, 90.4%, 86.1%, and 95.9%, respectively. CT-FFR exceeded CCTA in these measurements at both levels. The vessel-level AUC for CT-FFR also outperformed that for CCTA (0.957 vs. 0.599, P < 0.0001). Patients with CT-FFR ≤0.8 had higher rates of rehospitalization (hazard ratio [HR] 4.51, 95% confidence interval [CI] 1.08–18.9) and MACE (HR 7.26, 95% CI 0.88–59.8), as well as a lower rate of unstable angina (HR 0.46, 95% CI 0.07–2.91).

Conclusion: CT-FFR is superior to conventional CCTA in differentiating functional myocardial ischemia. In addition, it has the potential to differentiate prognoses of patients with CAD.
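The patient- and vessel-level measures reported above all derive from a 2×2 confusion matrix, which can be sketched as follows. The counts in the usage example are hypothetical illustrations, not the study's data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,   # fraction of all cases classified correctly
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration (not the study's data):
metrics = diagnostic_metrics(tp=8, fp=2, tn=9, fn=1)
```

Note that accuracy alone can be misleading when positives and negatives are imbalanced, which is why structured abstracts like this one report sensitivity, specificity, PPV, and NPV alongside it.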

