Breast Infrared Thermography Segmentation Based on Adaptive Tuning of a Fully Convolutional Network

Author(s):  
Mazhar Basyouni Tayel ◽  
Azza Mahmoud Elbagoury

Background: Accurate segmentation of breast infrared thermography is an important step for early detection of breast pathological changes. Automatic segmentation of breast infrared thermography is a very challenging task, as it is difficult to find an accurate breast contour and extract regions of interest from it. Although several semi-automatic methods have been proposed for segmentation, their performance often depends on hand-crafted image features as well as preprocessing operations. Objective: In this work, an approach to automatic semantic segmentation of breast infrared thermography is proposed, based on end-to-end fully convolutional neural networks and without any pre- or post-processing. Methods: The lack of labeled breast infrared thermography data limits the full utilization of fully convolutional neural networks. The proposed model overcomes this challenge by applying data augmentation and two-tier transfer learning from larger datasets, combined with adaptive multi-tier fine-tuning, before training the fully convolutional network model. Results: Experimental results show that the proposed approach achieves better segmentation results than hand-crafted segmentation methods: 97.986% accuracy, 98.36% sensitivity, and 97.61% specificity. Conclusion: This work provides end-to-end automatic semantic segmentation of breast infrared thermography that combines fully convolutional networks, adaptive multi-tier fine-tuning, and transfer learning. It also addresses the challenges of applying convolutional neural networks to such data and achieves state-of-the-art accuracy.
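
One common reading of multi-tier fine-tuning is discriminative per-tier learning rates: layers are grouped into tiers, early (generic) tiers are updated slowly, and late (task-specific) tiers adapt at the full rate. The function below is an illustrative sketch under that assumption; the tier count, decay factor, and name `tier_learning_rates` are ours, not the paper's.

```python
# Hypothetical sketch of multi-tier fine-tuning via per-tier learning rates.
# Tier boundaries and the decay factor are illustrative assumptions.

def tier_learning_rates(num_layers, num_tiers, base_lr, decay=0.1):
    """Return a per-layer learning rate: the last tier trains at base_lr,
    and each earlier tier is scaled down by `decay`."""
    layers_per_tier = -(-num_layers // num_tiers)  # ceiling division
    lrs = []
    for layer in range(num_layers):
        tier = layer // layers_per_tier            # 0 = earliest tier
        # earliest tiers receive the strongest scaling-down
        lrs.append(base_lr * decay ** (num_tiers - 1 - tier))
    return lrs

# e.g. a 12-layer backbone split into 3 tiers
rates = tier_learning_rates(num_layers=12, num_tiers=3, base_lr=1e-3)
```

An adaptive variant could adjust the per-tier rates during training, e.g. unfreezing one tier at a time as validation loss plateaus.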

2021 ◽  
Vol 13 (3) ◽  
pp. 809-820
Author(s):  
V. Sowmya ◽  
R. Radha

Vehicle detection and recognition demand advanced computational intelligence and resources in a real-time traffic surveillance system for effective traffic management of all possible contingencies. One of the focus areas of deep intelligent systems is to facilitate vehicle detection and recognition techniques for robust traffic management of heavy vehicles. Sophisticated mechanisms for this task include the Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Region-based Convolutional Neural Networks (R-CNN), the You Only Look Once (YOLO) model, and others. Accordingly, it is pivotal to choose a precise algorithm for vehicle detection and recognition that also suits the real-time environment. In this study, deep learning algorithms, namely Faster R-CNN, YOLOv2, YOLOv3, and YOLOv4, are compared across diverse aspects of their features. Two classes of heavy transport vehicles, buses and trucks, constitute the detection and recognition elements in this proposed work. Data augmentation and transfer learning are implemented in the model to build, execute, train, and test detection and recognition, avoiding over-fitting and improving speed and accuracy. Extensive empirical evaluation is conducted on two standard datasets, COCO and PASCAL VOC 2007. Finally, comparative results and analyses are presented based on real-time performance.
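
Detector comparisons on COCO and PASCAL VOC are conventionally scored by intersection-over-union (IoU) between predicted and ground-truth boxes; a prediction counts as correct when IoU exceeds a threshold (0.5 for PASCAL VOC). The sketch below shows that standard metric for axis-aligned `(x1, y1, x2, y2)` boxes; it is generic background, not code from the study.

```python
# Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# e.g. a predicted truck box against its ground-truth box
score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # half-overlapping boxes
```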


Author(s):  
Michael Majurski ◽  
Peter Bajcsy

We address the problem of performing exact (tiling-error free) out-of-core semantic segmentation inference of arbitrarily large images using fully convolutional neural networks (FCN). FCN models have the property that once a model is trained, it can be applied on arbitrarily sized images, although it is still constrained by the available GPU memory. This work is motivated by overcoming the GPU memory size constraint without numerically impacting the final result. Our approach is to select a tile size that will fit into GPU memory with a halo border of half the network receptive field. Next, stride across the image by that tile size without the halo. The input tile halos will overlap, while the output tiles join exactly at the seams. Such an approach enables inference to be performed on whole slide microscopy images, such as those generated by a slide scanner. The novelty of this work is in documenting the formulas for determining tile size and stride and then validating them on U-Net and FC-DenseNet architectures. In addition, we quantify the errors due to tiling configurations which do not satisfy the constraints, and we explore the use of architecture effective receptive fields to estimate the tiling parameters.
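
The tiling arithmetic described above can be sketched along one image dimension: the halo is half the receptive field per side, the output tile is the interior of the input tile after the halo is removed, and the stride equals the output tile size so output tiles join exactly at the seams. Names and the exact function shape are ours, not the paper's formulas verbatim.

```python
# One-dimensional sketch of halo-based tiling for exact FCN inference.

def tiling_params(image_size, max_input_tile, receptive_field):
    """Return (halo, output_tile, stride, num_tiles) along one dimension."""
    halo = receptive_field // 2               # half the receptive field per side
    output_tile = max_input_tile - 2 * halo   # usable interior of each tile
    assert output_tile > 0, "tile too small for this receptive field"
    stride = output_tile                      # output tiles abut with no overlap
    num_tiles = -(-image_size // stride)      # ceiling: tiles covering the image
    return halo, output_tile, stride, num_tiles

# e.g. a 20000-px-wide slide, 1024-px tiles fit in GPU memory,
# network receptive field of 180 px
halo, out_tile, stride, n = tiling_params(
    image_size=20000, max_input_tile=1024, receptive_field=180)
```

Input tiles of `out_tile + 2 * halo` pixels are read with overlapping halos, while only the central `out_tile` pixels of each output are kept, so the stitched result is identical to whole-image inference.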


2020 ◽  
Vol 6 (12) ◽  
pp. 129
Author(s):  
Mario Manzo ◽  
Simone Pellino

Malignant melanoma is the deadliest form of skin cancer, and in recent years its worldwide incidence rate has been growing rapidly. The most effective approach to targeted treatment is early diagnosis. Deep learning algorithms, specifically convolutional neural networks, provide a methodology for image analysis and representation. They optimize the feature design task, which is essential for an automatic approach on different types of images, including medical ones. In this paper, we adopted pretrained deep convolutional neural network architectures for image representation with the purpose of predicting skin lesion melanoma. First, we applied a transfer learning approach to extract image features. Second, we used the transferred features inside an ensemble classification context. Specifically, the framework trains individual classifiers on balanced subspaces and combines the provided predictions through statistical measures. Experiments were performed on datasets of skin lesion images, and the results obtained show the effectiveness of the proposed approach with respect to state-of-the-art competitors.
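
The combination step can be sketched as follows: each classifier, trained on its own balanced subspace, emits a melanoma probability, and the predictions are merged through a statistical measure. The mean used here is one plausible choice; the paper's exact combination rules may differ.

```python
# Illustrative ensemble combination: average the per-classifier melanoma
# probabilities and threshold the result. The 0.5 threshold is an assumption.

def ensemble_predict(probabilities, threshold=0.5):
    """Combine per-classifier melanoma probabilities by averaging."""
    mean_p = sum(probabilities) / len(probabilities)
    label = "melanoma" if mean_p >= threshold else "benign"
    return label, mean_p

# e.g. three classifiers, each trained on a different balanced subspace
label, p = ensemble_predict([0.9, 0.7, 0.4])
```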


2020 ◽  
Author(s):  
Leonardo Rodrigues ◽  
Larissa Rodrigues ◽  
Danilo Da Silva ◽  
João Fernando Mari

The Coronavirus Disease 2019 (COVID-19) pandemic spread rapidly around the globe, impacting the lives of billions of people. Effective screening of infected patients is a critical step in the fight against COVID-19, allowing patients to be treated while limiting the rapid spread of the disease. The need for automated and scalable methods has increased due to the unavailability of accurate automated toolkits. Recent research using chest X-ray images suggests that they contain relevant information about the COVID-19 virus. Hence, applying machine learning techniques combined with radiological imaging promises to identify this disease accurately. These images are also straightforward to collect, as they are widely shared and analyzed around the world. This paper presents a method for automatic COVID-19 detection using chest X-ray images through four convolutional neural networks, namely AlexNet, VGG-11, SqueezeNet, and DenseNet-121. The method provides accurate diagnostics for positive or negative COVID-19 classification. We validate our experiments using a ten-fold cross-validation procedure over the training and test sets. Our findings include shallow fine-tuning and data augmentation strategies that can help deal with the low number of positive COVID-19 images publicly available. The accuracy of all CNNs is higher than 97.00%, and the SqueezeNet model achieved the best result with 99.20%.
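
The ten-fold cross-validation procedure mentioned above follows a standard pattern: the sample indices are split into ten folds, and each fold serves once as the test set while the remaining nine form the training set. The index-handling sketch below is generic, not the authors' code (in practice one would also shuffle and stratify by class).

```python
# Generic k-fold cross-validation index splitting (k = 10 in the paper).

def k_fold_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs for k-fold CV."""
    indices = list(range(n_samples))
    fold_size = -(-n_samples // k)  # ceiling division
    for i in range(k):
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test

# e.g. 100 images -> 10 folds of 10 test images each
splits = list(k_fold_splits(100, k=10))
```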


2021 ◽  
Author(s):  
Mengyu Yang ◽  
Wensi Wang ◽  
Qiang Gao ◽  
Chen Zhao ◽  
Caole Li ◽  
...  

The monitoring of harmful algae is very important for the maintenance of the aquatic ecological environment. Traditional algae monitoring methods require professionals with substantial experience in algae species and are time-consuming, expensive, and limited in practice. In this work, the automatic classification of algae cell images and the identification of harmful algae images were realized by combining multiple Convolutional Neural Networks (CNNs) with deep learning techniques based on transfer learning. Eleven common harmful and 31 harmless algae genera were collected as input samples; five CNN classification models (AlexNet, VGG16, GoogLeNet, ResNet50, and MobileNetV2) were fine-tuned to automatically classify algae images, and the average accuracy improved by 11.9% compared to models without fine-tuning. To monitor harmful algae that can cause red tides or produce toxins that severely pollute drinking water, a new identification method for harmful algae that combines the recognition results of the five CNN models was proposed, and the recall rate reached 98.0%. The experimental results validate that the recognition performance on harmful algae can be significantly improved by transfer learning, and that the proposed identification method is effective in the preliminary screening of harmful algae and greatly reduces the workload of professional personnel.
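
One way to combine the five CNNs' recognition results, as described above, is a majority vote followed by a check against the harmful-genus set. The voting rule, the helper name `flag_harmful`, and the example genera in `HARMFUL` are illustrative assumptions, not necessarily the paper's exact rule.

```python
# Hypothetical sketch of combining five CNN predictions for harmful-algae
# screening: majority-vote the predicted genus, then check the harmful set.
from collections import Counter

HARMFUL = {"Microcystis", "Anabaena", "Oscillatoria"}  # illustrative genera

def flag_harmful(model_predictions):
    """Majority-vote the model outputs, then test against the harmful set."""
    genus, votes = Counter(model_predictions).most_common(1)[0]
    majority = votes >= len(model_predictions) // 2 + 1
    return genus, majority and genus in HARMFUL

# e.g. three of five models agree on a harmful genus
genus, harmful = flag_harmful(
    ["Microcystis", "Microcystis", "Chlorella", "Microcystis", "Anabaena"])
```

Voting across models trades a little precision for recall, which matches the screening goal: a false alarm costs a manual check, while a missed harmful genus is far more expensive.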


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Emre Kiyak ◽  
Gulay Unal

Purpose The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models are developed and compared with each other to prevent collisions and to achieve target tracking in autonomous aircraft. Design/methodology/approach First, detection methods were used to locate the visual target, and then the tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural networks (TLDCNN) and fine-tuning deep convolutional neural networks with transfer learning (FNDCNNTL). Findings The training of DCNN took 9 min 33 s, with an accuracy of 84%. For DCNNFN, training took 4 min 26 s and the accuracy was 91%. The training of TLDCNN took 34 min 49 s, with an accuracy of 95%. With FNDCNNTL, training took 34 min 33 s and the accuracy was nearly 100%. Originality/value Compared to results in the literature ranging from 89.4% to 95.6%, FNDCNNTL achieved better results in this paper.

