BACKGROUND
Automatic segmentation of skin lesions has been reported using dermoscopic image data. These approaches, however, are not applicable to real-time detection on a smartphone.
OBJECTIVE
This study aims to develop a deep learning model that detects and localizes moles in captured images, so that cropped mole images can be extracted precisely, free of any other objects.
METHODS
The data were collected through public health events in Taiwan between December 2017 and February 2019. Participants who were concerned about the risk of their moles were asked to take mole images. The images were then examined, and their risks determined, by three dermatologists. We labeled mole positions with bounding boxes using the ‘LabelImg’ tool. Two architectures, SSD (Single Shot MultiBox Detector) and Faster R-CNN, were used to build eight mole-detection models. The confidence score, intersection over union (IoU), and mean average precision (mAP) under the COCO metrics were used to measure the accuracy of the models.
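As a point of reference for the IoU metric mentioned above, the following minimal sketch computes IoU for two axis-aligned bounding boxes in the (x_min, y_min, x_max, y_max) convention that annotation tools such as LabelImg commonly export; the function name and box format are illustrative assumptions, not taken from the study's code.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    # Coordinates of the intersection rectangle.
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    # Intersection area is zero when the boxes do not overlap.
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two 2×2 boxes offset by one pixel in each direction overlap in a 1×1 region, giving IoU = 1 / (4 + 4 - 1) = 1/7.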
RESULTS
A total of 2790 mole images were used for the development and validation of the models. The Faster R-CNN Inception ResNet model had the highest overall mAP of 0.245, followed by the Faster R-CNN ResNet 101 model at 0.234 and the Faster R-CNN ResNet 50 model at 0.227. The SSD MobileNet v1 model had the lowest mAP of 0.142. The Faster R-CNN Inception ResNet model had a dominant AP of 0.377, 0.236, and 0.129 for large, medium, and small moles, respectively. We observed that the Faster R-CNN Inception ResNet model showed the best performance, with high confidence scores (over 97%) for all mole sizes.
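To illustrate how the reported per-class AP values are typically obtained once each detection has been matched (or not) to a ground-truth box at a fixed IoU threshold, the sketch below computes all-point interpolated average precision from score-ranked detections. The function and data layout are assumptions for illustration; COCO mAP additionally averages such AP values over IoU thresholds from 0.50 to 0.95.

```python
def average_precision(detections, n_gt):
    """AP for one class from (score, is_true_positive) pairs and the ground-truth count."""
    # Rank detections by confidence score, highest first.
    dets = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    precisions, recalls = [], []
    for _score, is_tp in dets:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # Make the precision curve monotonically non-increasing (precision envelope).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Area under the precision-recall curve.
    ap, prev_recall = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_recall) * p
        prev_recall = r
    return ap
```

With two ground-truth moles and three detections of which the first and third are true positives, this yields AP = 0.5 × 1 + 0.5 × 2/3 = 5/6.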
CONCLUSIONS
We successfully developed detection models based on the SSD and Faster R-CNN techniques. These models may help researchers accurately localize moles and their associated risks in a feasible smartphone detection app. We provide the pre-trained models for further studies on GitHub: https://github.com/vietdaica/Mole_Detection.