Face Recognition with Convolutional Neural Network and Transfer Learning

Author(s): R. Meena Prakash, N. Thenmoezhi, M. Gayathri

Author(s): Zhongkui Fan, Ye-Peng Guan

Deep learning has achieved great success in face recognition (FR); however, little work has been done to apply deep learning to face photo-sketch recognition. This paper proposes an adaptive-scale local binary pattern extraction method for optical face features, and the extracted features are classified by a Gaussian process. The model is trained and tested on LFW, the most widely used optical face benchmark, reaching a test accuracy of 98.7%. The face features extracted by this method, together with features extracted by a convolutional neural network, are then adapted to sketch faces through transfer learning, and the adaptation results are compared and analyzed. Finally, the method is evaluated on the open-source CUHK Face Sketch database (CUFS) from the Multimedia Laboratory of the Chinese University of Hong Kong, achieving an accuracy of 97.4%. Compared with traditional sketch face recognition methods, the proposed method recognizes faces with high efficiency and is worth promoting.
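The pipeline above pairs hand-crafted texture features with a probabilistic classifier. A minimal Python sketch of that idea follows; it uses a plain fixed-scale uniform LBP from scikit-image as a stand-in for the paper's adaptive-scale extraction, and scikit-learn's LFW pairs loader and Gaussian process classifier, so the feature scale, kernel, and pair representation are assumptions rather than the authors' settings.

# Hedged sketch of the optical-face stage described above: local binary
# pattern (LBP) texture features classified by a Gaussian process. A plain
# fixed-scale uniform LBP stands in for the paper's adaptive-scale variant,
# and LFW is used in its pair-verification form; this is an illustration,
# not the authors' implementation.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.datasets import fetch_lfw_pairs
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

P, R = 8, 1           # LBP sampling points and radius (fixed scale here)
N_BINS = P + 2        # "uniform" LBP produces P + 2 distinct codes

def lbp_hist(gray_face):
    """Normalized histogram of uniform LBP codes for one grayscale face."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def pair_features(pairs):
    """Represent each face pair by the absolute difference of its LBP histograms."""
    return np.stack([np.abs(lbp_hist(a) - lbp_hist(b)) for a, b in pairs])

train = fetch_lfw_pairs(subset="train")   # matched/mismatched LFW face pairs
test = fetch_lfw_pairs(subset="test")

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
clf.fit(pair_features(train.pairs), train.target)
print("LFW pair-verification accuracy:", clf.score(pair_features(test.pairs), test.target))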


2020, Vol 10 (1)
Author(s): Young-Gon Kim, Sungchul Kim, Cristina Eunbee Cho, In Hye Song, Hee Jin Lee, ...

Abstract
Fast and accurate confirmation of metastasis on the frozen tissue section of an intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models at different training dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to evaluate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used for external validation. Three sets of initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based, were used to validate their effectiveness in the external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. In the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated in the case of frozen section datasets with limited numbers.
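The comparison above hinges on how the CNN's weights are initialized before fine-tuning on frozen-section patches. The Python sketch below illustrates that setup under stated assumptions: the ResNet-50 backbone, the checkpoint filename camelyon16_pretrained.pth, and the frozen_patch_loader data loader are hypothetical stand-ins, since the abstract does not specify the architecture or training details.

# Hedged sketch of the weight-initialization comparison described above:
# the same CNN backbone fine-tuned for tumor/normal patch classification
# from three starting points (random, ImageNet, CAMELYON16-pretrained).
# The backbone choice and checkpoint path are assumptions, not details
# taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # metastasis vs. non-metastasis patches

def build_model(init: str) -> nn.Module:
    if init == "scratch":
        model = models.resnet50(weights=None)  # random initialization
    elif init == "imagenet":
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    elif init == "camelyon16":
        model = models.resnet50(weights=None)
        # hypothetical checkpoint produced by pretraining the same backbone on CAMELYON16
        state = torch.load("camelyon16_pretrained.pth", map_location="cpu")
        backbone_state = {k: v for k, v in state.items() if not k.startswith("fc.")}
        model.load_state_dict(backbone_state, strict=False)  # backbone weights only
    else:
        raise ValueError(init)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new patch-level head
    return model

# Fine-tuning loop (one epoch shown); frozen_patch_loader is a placeholder
# DataLoader yielding image tensors and labels for patches cut from the WSIs.
model = build_model("camelyon16")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for patches, labels in frozen_patch_loader:
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()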

