attribute classification
Recently Published Documents


TOTAL DOCUMENTS

118
(FIVE YEARS 43)

H-INDEX

14
(FIVE YEARS 4)

Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 25
Author(s):  
Jaehun Park ◽  
Kwangsu Kim

Face recognition, including emotion classification and face attribute classification, has made tremendous progress over the last decade owing to deep learning. Large-scale data collected from numerous users have been the driving force behind this growth. However, face images, which carry their owners' identities, can cause severe privacy leakage if linked to other sensitive biometric information. The novel discrete cosine transform (DCT) coefficient cutting method (DCC) proposed in this study combines DCT and pixelization to protect image privacy. However, privacy is subjective, and there is no guarantee that the transformed image actually preserves it. To address this, a user study was conducted on whether DCC really preserves privacy. Convolutional neural networks were then trained for face recognition and face attribute classification tasks. Our survey and experiments demonstrate that a face recognition deep learning model can be trained on images that most people consider privacy-preserving, at a manageable cost in classification accuracy.
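The coefficient-cutting idea can be illustrated as follows: transform an image block with the 2D DCT, zero out all coefficients outside a low-frequency corner, and invert. This is a minimal numpy sketch of that reading; the exact cutting rule and the pixelization step of the paper's DCC may differ, and the `keep` parameter is an assumption.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows = frequencies, cols = positions).
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def dcc(block, keep=4):
    # Forward 2D DCT, zero every coefficient outside the top-left
    # keep x keep square (the low frequencies), then inverse-transform.
    n = block.shape[0]
    D = dct_matrix(n)
    coeffs = D @ block @ D.T
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return D.T @ (coeffs * mask) @ D
```

With `keep` equal to the block size the transform is lossless (the basis is orthonormal); with `keep=1` only the DC term survives and the block collapses to its mean, i.e. one flat "pixel" per block.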


Author(s):  
Joongnam Jeon ◽  
Bo-Seok Seo ◽  
Youngkwan Ju ◽  
Hong-Suk Shim ◽  
Hee-Seog Kang

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Shaoqi Hou ◽  
Chunhui Liu ◽  
Kangning Yin ◽  
Yiyin Ding ◽  
Zhiguo Wang ◽  
...  

Person re-identification (Re-ID) aims to match the same pedestrian across different times and places. Under cross-camera conditions, different pedestrians can look highly similar, so matching on global pedestrian features alone often performs poorly. To address this, we designed a Spatial Attention Network Guided by Attribute Label (SAN-GAL), a dual-trace network containing both attribute classification and Re-ID. Unlike the previous approach of simply adding a binary attribute classification branch, SAN-GAL proceeds in two connected steps. First, with attribute labels as guidance, we generate an Attribute Attention Heat map (AAH) through the Grad-CAM algorithm to accurately locate fine-grained attribute areas of pedestrians. Then, the Attribute Spatial Attention Module (ASAM) is constructed from the AAH, which serves as prior knowledge introduced into the Re-ID network to assist discrimination in the Re-ID task. Notably, SAN-GAL integrates the local attribute information and global ID information of pedestrians without introducing additional attribute region annotation, giving it good flexibility and adaptability. Test results on Market1501 and DukeMTMC-reID show that SAN-GAL achieves good results, reaching 85.8% Rank-1 accuracy on the DukeMTMC-reID dataset, which is competitive with most Re-ID algorithms.
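The two steps can be sketched in plain numpy: Grad-CAM weights each feature channel by its spatially averaged gradient to form the attribute heat map, which then re-weights the feature maps as spatial attention. The residual `1 + cam` combination is an assumption for illustration; the paper's ASAM may fuse the map differently.

```python
import numpy as np

def grad_cam(features, grads):
    # features, grads: (C, H, W) arrays from a conv layer.
    # Grad-CAM: channel weights are the spatially averaged gradients
    # of the attribute score w.r.t. each feature map.
    weights = grads.mean(axis=(1, 2))          # (C,)
    cam = np.tensordot(weights, features, 1)   # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                   # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                  # normalize to [0, 1]
    return cam

def apply_spatial_attention(features, cam):
    # Residual spatial attention: amplify attribute regions in every
    # channel while keeping the original signal intact.
    return features * (1.0 + cam[None, :, :])
```

In a real pipeline `grads` would come from backpropagating an attribute logit; here they are supplied directly so the sketch stays framework-free.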


2021 ◽  
pp. 103872
Author(s):  
Nicholas Altieri ◽  
Briton Park ◽  
Mara Olson ◽  
John DeNero ◽  
Anobel Y. Odisho ◽  
...  

Author(s):  
Mohammed Berrahal ◽  
Mostafa Azizi

Both human face recognition and face generation by machines are currently active areas of computer vision, drawing the curiosity of researchers, enabling powerful image analysis, and producing applications in multiple domains. In this paper, we propose a new approach to face attribute classification (FAC) that takes advantage of both binary classification and data augmentation. Binary classification lets us reach high prediction scores, while augmented data prevent overfitting and overcome the lack of data for sketched photos. Our approach, named Augmented Binary Multilabel CNN (ABM-CNN), consists of three steps: i) splitting the data; ii) converting images to sketches (a simplification step); and iii) training each attribute separately with two convolutional neural networks, the first predicting attributes from real images and the second from sketches. Through experimentation, we find that some attributes are predicted more accurately from sketches than from real images. In addition, we build a new, more consistent and complete face dataset by generating images with the Style-GAN model, to which we apply our method for extracting face attributes. Our proposal outperforms related work.
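The abstract does not specify how images are simplified to sketches; as an assumption for illustration, a crude proxy is a thresholded Sobel edge map, sketched below in plain numpy. The kernel and `thresh` value are hypothetical choices, not the paper's.

```python
import numpy as np

def to_sketch(img, thresh=0.2):
    # Crude pencil-sketch proxy: Sobel gradient magnitude, normalized
    # and thresholded to a binary edge image.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()  # horizontal gradient
            gy[i, j] = (win * ky).sum()  # vertical gradient
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()
    return (mag > thresh).astype(float)
```

A flat image yields an empty sketch, while any intensity step produces edge pixels, which is the behaviour the two-network setup relies on: the sketch branch sees only contour information.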


2021 ◽  
Author(s):  
Vikram V. Ramaswamy ◽  
Sunnie S. Y. Kim ◽  
Olga Russakovsky

2021 ◽  
Vol 12 ◽  
Author(s):  
Junjie Li ◽  
Lihua Ma ◽  
Pingfei Zeng ◽  
Chunhua Kang

The maximum deviation global discrimination index (MDGDI) is a new item selection method for cognitive diagnostic computerized adaptive testing that allows for attribute coverage balance. We developed the maximum limitation global discrimination index (MLGDI) from MDGDI, which allows for both attribute coverage balance and item exposure control. Our simulation study evaluated the performance of the new method against the maximum global discrimination index (GDI), modified maximum GDI (MMGDI), standardized weighted deviation GDI (SWDGDI), and constraint progressive with SWDGDI (CP_SWDGDI). The results indicated that (1a) when attribute coverage balance is required, MDGDI had the highest attribute classification accuracy; (1b) when the selection strategy must satisfy the practical constraints of both attribute coverage balance and item exposure control, MLGDI had the highest attribute classification accuracy; (2) adding an item exposure control mechanism to an item selection method reduces its attribute classification accuracy; and (3) compared with GDI, MMGDI, SWDGDI, CP_SWDGDI, and MDGDI, MLGDI better balances the attribute coverage requirement, item exposure control, and attribute classification accuracy.
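As background on the GDI family, a common formulation scores an item by the posterior-weighted Kullback-Leibler divergence of each latent class's success probability from the posterior-mixed success probability, and selects the unadministered item with the largest score. The deviation (MDGDI) and limitation (MLGDI) modifications are not reproduced here; this is a hedged sketch of the base quantity only.

```python
import numpy as np

def bern_kl(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q), elementwise.
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def gdi(posterior, p_correct):
    # Global discrimination index of one item: posterior-weighted KL
    # divergence of each class's success probability from the
    # posterior-mixed success probability.
    p_bar = posterior @ p_correct
    return float(np.sum(posterior * bern_kl(p_correct, p_bar)))

def select_item(posterior, item_probs, administered):
    # Pick the unadministered item with the largest GDI.
    scores = [(-np.inf if j in administered else gdi(posterior, pj))
              for j, pj in enumerate(item_probs)]
    return int(np.argmax(scores))
```

An item whose success probability is identical across all attribute patterns carries no diagnostic information and gets a GDI of zero, so a discriminating item is always preferred.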

