Reducing the Impact of Confounding Factors on Skin Cancer Classification via Image Segmentation: Technical Model Study

10.2196/21695 ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. e21695
Author(s):  
Roman C Maron ◽  
Achim Hekler ◽  
Eva Krieghoff-Henning ◽  
Max Schmitt ◽  
Justin G Schlager ◽  
...  

Background Studies have shown that artificial intelligence achieves similar or better performance than dermatologists in specific dermoscopic image classification tasks. However, artificial intelligence is susceptible to the influence of confounding factors within images (eg, skin markings), which can lead to false diagnoses of cancerous skin lesions. Image segmentation can remove lesion-adjacent confounding factors but greatly changes the image representation. Objective The aim of this study was to compare the performance of 2 image classification workflows in which images were either segmented or left unprocessed before the subsequent training and evaluation of a binary skin lesion classifier. Methods Separate binary skin lesion classifiers (nevus vs melanoma) were trained and evaluated on segmented and unsegmented dermoscopic images. For a more informative result, separate classifiers were trained on 2 distinct training data sets (human against machine [HAM] and International Skin Imaging Collaboration [ISIC]). Each training run was repeated 5 times. The mean performance of the 5 runs was evaluated on a multi-source test set (n=688) consisting of a holdout and an external component. Results When trained on HAM, the segmented classifiers showed a higher overall balanced accuracy (75.6% [SD 1.1%]) than the unsegmented classifiers (66.7% [SD 3.2%]); the difference was significant in 4 out of 5 runs (P<.001). The overall balanced accuracy was numerically higher for the unsegmented ISIC classifiers (78.3% [SD 1.8%]) than for the segmented ISIC classifiers (77.4% [SD 1.5%]); the difference was significant in 1 out of 5 runs (P=.004). Conclusions Image segmentation does not decrease overall performance, and it beneficially removes lesion-adjacent confounding factors. It is thus a viable option for addressing the negative impact that confounding factors have on deep learning models in dermatology. However, the segmentation step might introduce new pitfalls, which require further investigation.
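The segmented workflow above can be sketched minimally: given a binary lesion mask, lesion-adjacent pixels (skin markings, rulers, hair) are blanked out before the image reaches the classifier. The mask source, array shapes, and fill value here are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def mask_background(image, lesion_mask, fill_value=0):
    """Blank out lesion-adjacent pixels so the classifier
    only sees the segmented lesion region."""
    masked = image.copy()
    masked[~lesion_mask.astype(bool)] = fill_value
    return masked

# Toy 4x4 RGB image with a 2x2 "lesion" in the centre.
image = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

segmented = mask_background(image, mask)
```

The same function would be applied identically to training and test images, so both workflows differ only in this one preprocessing step.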


Filling a vacancy takes a lot of (costly) time. Automated preprocessing of applications using artificial intelligence technology can help to save time, e.g., by analyzing applications with machine learning algorithms. We investigate whether such systems are potentially biased in terms of gender, origin, and nobility. Using a corpus of common German reference letter sentences, we investigate two research questions. First, we test sentiment analysis services offered by Amazon, Google, IBM, and Microsoft. All tested services rate the sentiment of the same template sentences very inconsistently and exhibit bias, at least with regard to gender. Second, we examine the impact of (im-)balanced training data sets on classifiers trained to estimate the sentiment of sentences from our corpus. This experiment shows that imbalanced data can lead to biased results but, under certain conditions, can also produce fair results.
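A minimal sketch of the template-probing idea: the same sentence is scored with only the name swapped between groups, so any score gap is attributable to the name. `toy_sentiment`, the template, and the names are hypothetical stand-ins; a real test would call the vendors' sentiment APIs, which require credentials.

```python
# Probe a sentiment scorer for gender bias by swapping only the name
# in otherwise identical template sentences.
TEMPLATE = "{name} completed all assigned tasks reliably."
GROUPS = {"female": ["Anna", "Maria"], "male": ["Peter", "Thomas"]}

def toy_sentiment(sentence):
    # Placeholder: a real probe would call e.g. a cloud sentiment API here.
    return 0.8

def mean_scores(score_fn):
    """Average sentiment per group over all filled-in templates."""
    return {group: sum(score_fn(TEMPLATE.format(name=n)) for n in names) / len(names)
            for group, names in GROUPS.items()}

scores = mean_scores(toy_sentiment)
bias_gap = abs(scores["female"] - scores["male"])
```

A nonzero `bias_gap` on a large battery of templates would indicate the kind of inconsistency the study reports.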


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1573
Author(s):  
Loris Nanni ◽  
Giovanni Minchio ◽  
Sheryl Brahnam ◽  
Gianluca Maguolo ◽  
Alessandra Lumini

Traditionally, classifiers are trained to predict patterns within a feature space. The image classification system presented here trains classifiers to predict patterns within a vector space by combining the dissimilarity spaces generated by a large set of Siamese Neural Networks (SNNs). A set of centroids is calculated from the patterns in the training data sets with supervised k-means clustering. The centroids are used to generate the dissimilarity space via the Siamese networks. The vector space descriptors are extracted by projecting patterns onto the dissimilarity spaces, and SVMs classify an image by its dissimilarity vector. The versatility of the proposed approach is demonstrated by evaluating the system on different types of images across two domains: two medical data sets and two animal audio data sets with vocalizations represented as images (spectrograms). Results show that the proposed system competes with the best-performing methods in the literature, obtaining state-of-the-art performance on one of the medical data sets, and does so without ad hoc optimization of the clustering methods on the tested data sets.
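A rough sketch of the dissimilarity-space idea under stated simplifications: Euclidean distance stands in for the learned Siamese dissimilarity, one centroid per class replaces k-means with several clusters, and nearest-centroid assignment replaces the SVM. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy embeddings standing in for SNN outputs, two well-separated classes.
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# "Supervised" clustering: centroids computed per class
# (k = 1 per class here for brevity; the paper uses k-means with more clusters).
centroids = np.vstack([X[y == c].mean(axis=0) for c in (0, 1)])

# Dissimilarity descriptor: distance from each pattern to every centroid
# (Euclidean here; the paper learns this measure with Siamese networks).
D = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)

# The paper trains SVMs on these vectors; nearest-centroid suffices to
# illustrate that the dissimilarity vector is itself a usable feature.
pred = D.argmin(axis=1)
train_acc = (pred == y).mean()
```

The key point the sketch shows is the change of representation: the classifier operates on distances to prototypes, not on the raw feature space.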


2021 ◽  
Vol 10 (4) ◽  
pp. 58-75
Author(s):  
Vivek Sen Saxena ◽  
Prashant Johri ◽  
Avneesh Kumar

Melanoma is the deadliest type of skin cancer. Artificial intelligence provides the power to classify skin lesions as melanoma or non-melanoma. The proposed system for melanoma detection and classification involves four steps: pre-processing (resizing all images and removing noise and hair from the dermoscopic images); image segmentation (identifying the lesion area); feature extraction (extracting features from the segmented lesion); and classification (categorizing the lesion as malignant [melanoma] or benign [non-melanoma]). A modified GrabCut algorithm is employed to segment the skin lesion. Segmented lesions are classified using machine learning algorithms such as SVM, k-NN, ANN, and logistic regression, and are evaluated on performance metrics such as accuracy, sensitivity, and specificity. Results are compared with existing systems, achieving a higher similarity index and accuracy.
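The evaluation metrics named above follow directly from the confusion matrix; a minimal sketch with illustrative labels and predictions (not the paper's data):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity for melanoma (1) vs benign (0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # melanoma correctly flagged
    tn = np.sum((y_true == 0) & (y_pred == 0))  # benign correctly cleared
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth and classifier output for 8 lesions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]

accuracy, sensitivity, specificity = binary_metrics(y_true, y_pred)
```

Sensitivity is the clinically critical number here: a missed melanoma (false negative) is far costlier than a false alarm.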


2021 ◽  
Author(s):  
Ying Hou ◽  
Yi-Hong Zhang ◽  
Jie Bao ◽  
Mei-Ling Bao ◽  
Guang Yang ◽  
...  

Abstract Purpose: A balance between preserving urinary continence and achieving negative margins is clinically relevant but difficult to achieve in practice. Preoperatively accurate detection of extracapsular extension (ECE) of prostate cancer (PCa) is thus crucial for determining appropriate treatment options. We aimed to develop and clinically validate an artificial intelligence (AI)-assisted tool for the detection of ECE in patients with PCa using multiparametric MRI. Methods: 849 patients with localized PCa who underwent multiparametric MRI before radical prostatectomy were retrospectively included from two medical centers. The AI tool was built on a ResNeXt network embedded with a spatial attention map of experts' prior knowledge (PAGNet) from 596 training data sets. The tool was validated in 150 internal and 103 external data sets, respectively, and its clinical applicability was compared with expert-based interpretation and AI-expert interaction. Results: An index PAGNet model using a single-slice image yielded the highest areas under the receiver operating characteristic curve (AUC) of 0.857 (95% confidence interval [CI], 0.827-0.884), 0.807 (95% CI, 0.735-0.867), and 0.728 (95% CI, 0.631-0.811) in the training, internal test, and external test cohorts, compared to the conventional ResNeXt networks. For the experts, inter-reader agreement was observed in only 437/849 (51.5%) patients, with a Kappa value of 0.343. The performance of the two experts (AUC, 0.632 to 0.741 vs 0.715 to 0.857) was lower (paired comparison, all P values < .05) than that of the AI assessment. When the experts' interpretations were adjusted by the AI assessments, the performance of both experts improved. Conclusion: Our AI tool, showing improved accuracy, offers a promising alternative to human experts for imaging staging of PCa ECE using multiparametric MRI.
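The "spatial attention map of experts' prior knowledge" can be read as weighting CNN feature maps by a fixed expert-derived spatial prior. This numpy sketch is a hypothetical reading, not PAGNet itself (which embeds the attention inside a ResNeXt); the shapes and the prior region are invented for illustration.

```python
import numpy as np

def apply_prior_attention(features, prior_map):
    """Weight a feature map of shape (C, H, W) by an expert-drawn
    spatial prior of shape (H, W), e.g. emphasising the capsule boundary."""
    attn = prior_map / (prior_map.sum() + 1e-8)  # normalise to a soft mask
    return features * attn[None, :, :]           # broadcast over channels

features = np.ones((4, 8, 8))
prior = np.zeros((8, 8))
prior[2:6, 2:6] = 1.0  # hypothetical region experts mark as ECE-relevant

weighted = apply_prior_attention(features, prior)
```

The effect is that activations outside the expert-marked region are suppressed before later layers see them, injecting domain knowledge without extra labels.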


Author(s):  
O. Majgaonkar ◽  
K. Panchal ◽  
D. Laefer ◽  
M. Stanley ◽  
Y. Zaki

Abstract. Classifying objects within aerial Light Detection and Ranging (LiDAR) data is an essential task to which machine learning (ML) is applied increasingly. ML has been shown to be more effective on LiDAR than imagery for classification, but most efforts have focused on imagery because of the challenges presented by LiDAR data. LiDAR datasets are of higher dimensionality, discontinuous, heterogeneous, spatially incomplete, and often scarce. As such, there has been little examination into the fundamental properties of the training data required for acceptable performance of classification models tailored for LiDAR data. The quantity of training data is one such crucial property, because training on different sizes of data provides insight into a model’s performance with differing data sets. This paper assesses the impact of training data size on the accuracy of PointNet, a widely used ML approach for point cloud classification. Models trained on subsets of ModelNet ranging from 40 to 9,843 objects were validated on a test set of 400 objects. Accuracy improved logarithmically, decelerating from 45 objects onwards and slowing significantly at a training size of 2,000 objects, corresponding to 20,000,000 points. This work contributes to the theoretical foundation for development of LiDAR-focused models by establishing a learning curve, suggesting the minimum quantity of manually labelled data necessary for satisfactory classification performance and providing a path for further analysis of the effects of modifying training data characteristics.
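The logarithmic learning curve described can be sketched by fitting accuracy against log(training size); the accuracy values below are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical (training size, test accuracy) pairs shaped like the
# reported curve: fast early gains, saturation past ~2,000 objects.
sizes = np.array([40, 100, 500, 2000, 9843])
acc = np.array([0.55, 0.65, 0.78, 0.86, 0.88])

# Fit accuracy ≈ a + b * log(n); np.polyfit returns [slope, intercept].
b, a = np.polyfit(np.log(sizes), acc, 1)
pred = a + b * np.log(sizes)
```

A positive slope `b` with a good fit is what "accuracy improved logarithmically" means operationally; extrapolating the fit suggests the diminishing return of labelling beyond the knee of the curve.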


2021 ◽  
Author(s):  
J. Uttley ◽  
S. Fotios ◽  
C.J. Robbins ◽  
C. Moscoso

Cycling has a range of benefits and should be encouraged, but darkness may put people off cycling due to reductions in visibility, road safety, and personal security. We summarise analyses of observational data to confirm the negative impact darkness has on cycling rates. Using a Case / Control method that accounts for confounding factors such as time of day and seasonal variations in weather, we demonstrate a consistent effect of darkness across different locations and countries. The size of this effect varies, however, suggesting that certain unknown factors may be important in mediating the impact of darkness on cycling rates. One factor that is known to mediate the effect is road lighting. We show that increased illuminance can offset the reduction in cyclist numbers caused by darkness, and that there may be an optimal illuminance beyond which no further benefits are achieved.
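The Case / Control logic can be illustrated with a ratio-of-ratios: cyclist counts at a case hour (dark in winter, light in summer) are normalised by counts at a control hour that is daylight year-round, removing seasonal confounds. The counts below are invented for illustration, not the study's data.

```python
# Case hour: same clock time, but dark in winter and light in summer.
case_dark, case_light = 120, 300
# Control hour: daylight in both seasons, capturing seasonal/weather trends.
control_winter, control_summer = 400, 500

# Ratio of ratios: a value < 1 means darkness suppresses cycling
# beyond what seasonality alone would predict.
darkness_effect = (case_dark / control_winter) / (case_light / control_summer)
```

Because both case counts are divided by their same-season control, weather and seasonal demand cancel out, isolating the contribution of ambient light.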


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Cheng-Hong Yang ◽  
Jai-Hong Ren ◽  
Hsiu-Chen Huang ◽  
Li-Yeh Chuang ◽  
Po-Yin Chang

Melanoma is a type of skin cancer that often leads to poor prognostic responses and survival rates. Melanoma usually develops in the limbs, including the fingers, palms, and the margins of the nails. When melanoma is detected early, surgical treatment may achieve a higher cure rate. The early diagnosis of melanoma depends on the manual segmentation of suspected lesions. However, manual segmentation can lead to problems, including misclassification and low efficiency. Therefore, it is essential to devise a method for automatic image segmentation that overcomes these issues. In this study, an improved algorithm is proposed, termed EfficientUNet++, which is developed from the U-Net model. In EfficientUNet++, the pretrained EfficientNet model is added to the UNet++ model to accelerate the segmentation process, leading to more reliable and precise results in skin cancer image segmentation. Two skin lesion datasets were used to compare the performance of the proposed EfficientUNet++ algorithm with other common models. On the PH2 dataset, EfficientUNet++ achieved a better Dice coefficient (93% vs. 76%–91%), Intersection over Union (IoU, 96% vs. 74%–95%), and loss value (30% vs. 32%–44%) compared with other models. On the International Skin Imaging Collaboration dataset, EfficientUNet++ obtained a similar Dice coefficient (96% vs. 94%–96%) but a better IoU (94% vs. 89%–93%) and loss value (11% vs. 11%–13%) than other models. In conclusion, the EfficientUNet++ model efficiently detects skin lesions by improving composite coefficients and structurally expanding the size of the convolution network. Moreover, the use of residual units deepens the network to further improve performance.
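The Dice coefficient and IoU reported above compare a predicted mask against the ground-truth mask; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Overlap metrics for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())     # 2|A∩B| / (|A|+|B|)
    iou = inter / np.logical_or(pred, target).sum()    # |A∩B| / |A∪B|
    return dice, iou

# Toy 4x4 masks: prediction undersegments the ground-truth lesion.
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1
target = np.zeros((4, 4)); target[1:4, 1:4] = 1

dice, iou = dice_and_iou(pred, target)
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same pair of masks, which is why the paper's Dice numbers run higher than its IoU numbers on PH2.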


2021 ◽  
Vol 21 (6) ◽  
pp. 257-264
Author(s):  
Hoseon Kang ◽  
Jaewoong Cho ◽  
Hanseung Lee ◽  
Jeonggeun Hwang ◽  
Hyejin Moon

Urban flooding occurs during heavy rains of short duration, so quick and accurate warnings of the danger of inundation are required. Previous research proposed methods to estimate statistics-based urban flood alert criteria from flood damage records and rainfall data, and developed a Neuro-Fuzzy model for predicting appropriate flood alert criteria. A variety of artificial intelligence algorithms have been applied to the prediction of urban flood alert criteria, and their usability and predictive precision have been enhanced with the recent development of artificial intelligence. Therefore, this study predicted flood alert criteria and analyzed the effect of augmenting the training data using an Artificial Neural Network (ANN) algorithm. The predictive performance of the baseline ANN model was an RMSE of 3.39-9.80 mm, while the model trained on the extended training data achieved an RMSE of 1.08-6.88 mm, a performance improvement of 29.8-82.6%.
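The RMSE comparison and the percentage improvement can be computed as follows; the observed and predicted alert criteria below are illustrative values, not the study's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted alert criteria (mm)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical alert criteria (mm) and two models' predictions.
observed = [50.0, 60.0, 70.0]
pred_base = [55.0, 52.0, 78.0]   # model trained on original data
pred_aug = [51.0, 58.0, 72.0]    # model trained on augmented data

e_base, e_aug = rmse(observed, pred_base), rmse(observed, pred_aug)
improvement_pct = (e_base - e_aug) / e_base * 100
```

The study's 29.8-82.6% figure is this same relative-improvement calculation applied across its range of test cases.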


2019 ◽  
Author(s):  
Bastian Greshake Tzovaras ◽  
Mad Price Ball

The not-so-secret ingredient that underlies all successful Artificial Intelligence / Machine Learning (AI/ML) methods is training data. There would be no facial recognition, no targeted advertisements, and no self-driving cars were it not for data sets large enough to train those algorithms to perform their tasks. Given how central these data sets are, important ethics questions arise: How is data collection performed? And how do we govern its use? This chapter – part of a forthcoming book – looks at why new data governance strategies are needed; investigates the relation of different data governance models to historic consent approaches; and compares different implementations of personal data exchange models.

