RepoMedUNM: A New Dataset for Feature Extraction and Training of Deep Learning Network for Classification of Pap Smear Images

2021 ◽  
pp. 317-325
Author(s):  
Dwiza Riana ◽  
Sri Hadianti ◽  
Sri Rahayu ◽  
Frieyadie ◽  
Muhamad Hasan ◽  
...  
2020 ◽  
Vol 40 (1) ◽  
pp. 324-336 ◽  
Author(s):  
Soo-Yeon Han ◽  
No-Sang Kwak ◽  
Taegeun Oh ◽  
Seong-Whan Lee

Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 649
Author(s):  
Long Hoang ◽  
Suk-Hwan Lee ◽  
Ki-Ryong Kwon

3D shape recognition has become necessary due to the growing popularity of 3D data resources. This paper introduces a new method, a hybrid deep learning network combining a convolutional neural network and a support vector machine (CNN–SVM), for 3D recognition. The vertices of the 3D mesh are interpolated and converted into point clouds; these point clouds are rotated for 3D data augmentation. The 2D projections of the augmented 3D data are stored in a 32 × 32 × 12 matrix, the input to the CNN–SVM. An eight-layer CNN performs feature extraction, and an SVM then classifies the extracted features. Two large 3D model datasets, ModelNet40 and ModelNet10, are used for model validation. Based on our numerical experimental results, CNN–SVM is more accurate and efficient than other methods: the proposed method is 13.48% more accurate than PointNet on ModelNet10 and 8.5% more precise than 3D ShapeNets on ModelNet40. The proposed method works both with 3D models in augmented/virtual reality systems and with 3D point clouds, the output of the LIDAR sensors in autonomous cars.
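The feature-extraction-then-classification pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the eight-layer CNN is replaced by a stub feature extractor, the data are synthetic, and only the 32 × 32 × 12 input shape is taken from the abstract.

```python
import numpy as np
from sklearn.svm import LinearSVC

def extract_features(volumes):
    # Stand-in for the eight-layer CNN described in the abstract:
    # here we simply flatten each 32 x 32 x 12 projection volume
    # into a feature vector.
    return volumes.reshape(len(volumes), -1)

rng = np.random.default_rng(0)
# Synthetic stand-in for the 2D projections of augmented 3D point clouds,
# two well-separated classes of 50 samples each.
n_per_class = 50
class0 = rng.normal(0.0, 1.0, size=(n_per_class, 32, 32, 12))
class1 = rng.normal(3.0, 1.0, size=(n_per_class, 32, 32, 12))
X = extract_features(np.concatenate([class0, class1]))
y = np.array([0] * n_per_class + [1] * n_per_class)

# The SVM stage: a linear SVM classifies the extracted features.
clf = LinearSVC().fit(X, y)
accuracy = clf.score(X, y)
print(f"training accuracy: {accuracy:.2f}")
```

In the paper's actual pipeline, the flattening stub would be replaced by the trained CNN's penultimate-layer activations, and training/test accuracy would be measured on ModelNet10/ModelNet40 rather than synthetic data.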


Author(s):  
Chinedu Godswill Olebu ◽  
Jide Julius Popoola ◽  
Michael Rotimi Adu ◽  
Yekeen Olajide Olasoji ◽  
Samson Adenle Oyetunji

In a face recognition system, recognition accuracy is greatly affected by the varying degrees of illumination on both the probe and test faces. In particular, changes in the direction and intensity of illumination are two major contributors to illumination variation. Different approaches have been proposed to overcome these challenges. The study presented in this paper proposes a novel approach that uses deep learning, in a MATLAB environment, to classify face images under varying illumination conditions. One thousand one hundred (1100) face images were obtained from the extended Yale B database. The face images were divided into ten (10) folders, and each folder was further divided into seven (7) subsets based on the azimuthal angle of illumination. The images were filtered using a combination of linear filters and an anisotropic diffusion filter. The filtered images were then segmented into light and dark zones with respect to the azimuthal and elevation angles of illumination. Eighty percent (80%) of the images in each subset, forming the training set, were used to train the deep learning network, while the remaining twenty percent (20%), forming the testing set, were used to test the classification accuracy of the generated network. Over three successive iterations, the performance evaluation results showed that the classification accuracy varied from 81.82% to 100.00%.
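The per-subset 80%/20% train/test split described above can be sketched as follows. This is an illustrative sketch only: the image array and label assignment are synthetic placeholders, and the subset size of 110 images is assumed for arithmetic convenience, not taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for one subset of filtered face images:
# in the study, each subset holds images taken at one azimuthal
# illumination angle, labeled by identity folder.
rng = np.random.default_rng(1)
subset_images = rng.random((110, 64, 64))      # 110 images (illustrative)
subset_labels = rng.integers(0, 10, size=110)  # 10 identity folders

# 80% of each subset trains the network, 20% tests it, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    subset_images, subset_labels, test_size=0.2, random_state=0)

print(len(X_train), len(X_test))
```

Splitting within each illumination subset, rather than across the whole dataset, ensures every azimuthal angle is represented in both the training and testing sets.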

