Realistic Hair Simulator for Skin Lesion Images Using Conditional Generative Adversarial Network
Automated skin lesion analysis is one of the trending fields that have gained attention among dermatologists and healthcare practitioners. Skin lesion restoration is an essential preprocessing step for lesion enhancement and, in turn, accurate automated analysis and diagnosis. Digital hair removal is a non-invasive method for image enhancement that resolves the hair-occlusion artefact in previously captured images. Several hair removal methods have been proposed for hair delineation and removal. However, the need for manual annotation is one of the main challenges that hinders validating these methods on large numbers of images or on benchmark datasets for comparison purposes. In the presented work, we propose a realistic hair simulator based on context-aware image synthesis using image-to-image translation via conditional generative adversarial networks, generating diverse hair occlusions in skin images along with the ground-truth masks for hair locations. In addition, we explored three loss functions, namely the L1-norm, the L2-norm, and the structural similarity index (SSIM), to maximise the synthesis quality. To quantitatively evaluate the realism of the synthesised images, t-SNE feature mapping and the Bland-Altman test are employed as objective metrics. Experimental results show the superior performance of our proposed method compared to previous hair synthesis methods, producing hair with plausible colours while preserving the integrity of the lesion texture.
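The abstract describes a conditional GAN trained for image-to-image translation with an adversarial term combined with one of three reconstruction losses (L1, L2, or SSIM). The snippet below is a minimal, illustrative sketch of such a composite generator loss in a pix2pix-style setup, assuming PyTorch; all function names, the loss weight, and the simplified global SSIM are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def ssim_simple(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM computed over whole images (no sliding window),
    used here only to illustrate an SSIM-based reconstruction term."""
    mu_x = x.mean(dim=(1, 2, 3))
    mu_y = y.mean(dim=(1, 2, 3))
    var_x = x.var(dim=(1, 2, 3))
    var_y = y.var(dim=(1, 2, 3))
    cov = ((x - mu_x[:, None, None, None]) * (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )


def generator_loss(disc_fake_logits, fake, real, recon="l1", lam=100.0):
    """Adversarial loss plus a weighted reconstruction term.

    disc_fake_logits: discriminator output on generated (hair-synthesised) images
    fake, real:       generated and target images, shape (N, C, H, W)
    recon:            "l1", "l2", or "ssim" reconstruction loss
    lam:              illustrative weight on the reconstruction term
    """
    # Adversarial term: push the discriminator to label generated images as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits)
    )
    # Reconstruction term: one of the three losses explored in the paper.
    if recon == "l1":
        rec = F.l1_loss(fake, real)
    elif recon == "l2":
        rec = F.mse_loss(fake, real)
    else:  # "ssim": maximise structural similarity, so minimise 1 - SSIM
        rec = 1.0 - ssim_simple(fake, real).mean()
    return adv + lam * rec
```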