Pig Face Recognition Model Based on a Cascaded Network

2021 ◽  
Vol 37 (5) ◽  
pp. 879-890
Author(s):  
Rong Wang ◽  
ZaiFeng Shi ◽  
Qifeng Li ◽  
Ronghua Gao ◽  
Chunjiang Zhao ◽  
...  

Highlights:
- A pig face recognition model that cascades a pig face detection network and a pig face recognition network is proposed.
- The pig face detection network automatically extracts pig face images to reduce the influence of the background.
- The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets.
- An application is developed to automatically recognize individual pigs.

Abstract. The identification and tracking of livestock using artificial intelligence technology have been a research hotspot in recent years. Automatic individual recognition is the key to realizing intelligent feeding. Although RFID can achieve identification tasks, it is expensive and fails easily. In this article, a pig face recognition model that cascades a pig face detection network and a pig face recognition network is proposed. First, the pig face detection network is used to crop pig face images from videos and eliminate the complex background of the pig shed. Second, batch normalization, dropout, skip connections, and residual modules are exploited to design a pig face recognition network for individual identification. Finally, the cascaded network model based on the pig face detection and recognition networks is deployed on a GPU server, and an application is developed to automatically recognize individual pigs. Additionally, class activation maps generated by Grad-CAM are used to analyze the pig face features learned by the model. Under free and unconstrained conditions, 46 pigs are selected to build a positive pig face dataset, an original multiangle pig face dataset, and an enhanced multiangle pig face dataset to verify the cascaded model. The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets, higher than those of other pig face recognition models.
The results of this study improve the recognition performance of pig faces under multiangle and multi-environment conditions. Keywords: CNN, Deep learning, Pig face detection, Pig face recognition.
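The Grad-CAM analysis mentioned in the abstract can be sketched in a few lines: a class activation heatmap is the ReLU of the convolutional feature maps, each weighted by the spatial mean of the class-score gradients for that channel. The feature maps and gradients below are random stand-ins, not outputs of the paper's network:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Grad-CAM heatmap: ReLU of the feature maps summed with
    per-channel weights given by the spatial mean of the gradients."""
    # feature_maps, grads: (K, H, W)
    weights = grads.mean(axis=(1, 2))                  # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))   # stand-in conv activations
grads = rng.random((8, 7, 7))   # stand-in gradients of the class score
heat = grad_cam(fmaps, grads)
print(heat.shape)               # (7, 7) heatmap over the feature grid
```

Upsampling `heat` to the input resolution and overlaying it on the pig face image is what produces the visualizations used to inspect which facial regions the model relies on.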


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 65091-65100
Author(s):  
Ayyad Maafiri ◽  
Omar Elharrouss ◽  
Saad Rfifi ◽  
Somaya Ali Al-Maadeed ◽  
Khalid Chougdali


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrities' photos with facial makeup have been growing at exponential rates, making recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is further confounded because makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and images with synthetic makeup variations, allows the dCNN to learn face features across a variety of facial makeup styles. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach competes with the state of the art.
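The augmentation idea above, expanding a face dataset with synthetic makeup-like variations so the network sees many tone and shading conditions, can be illustrated with a toy color-jitter transform. The `makeup_jitter` function and its parameters are illustrative assumptions, not the authors' augmentation pipeline:

```python
import numpy as np

def makeup_jitter(img, rng, strength=0.15):
    """Toy 'synthetic makeup' augmentation: random per-channel color
    shifts plus a mild contrast change, mimicking tone and shading
    variation. img is an (H, W, 3) float array in [0, 1]."""
    shift = rng.uniform(-strength, strength, size=(1, 1, 3))  # color cast
    contrast = rng.uniform(1 - strength, 1 + strength)        # shading
    out = (img + shift) * contrast
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
face = rng.random((32, 32, 3))            # stand-in face image in [0, 1]
# One original plus several jittered variants would all share one identity label
augmented = [makeup_jitter(face, rng) for _ in range(4)]
print(len(augmented), augmented[0].shape)
```

Each original image plus its variants is labeled with the same identity, so the dCNN learns features invariant to the artificial color changes rather than memorizing one makeup condition.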



2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In face recognition, acquired face data are often seriously distorted; many collected face images are blurred or even partially missing. Traditional image inpainting is structure-based, whereas currently popular inpainting methods are based on deep convolutional neural networks and generative adversarial nets (GANs). In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected, and edge-aware fuzzy inpainting achieves a better visual match. Our method dramatically boosts face recognition performance.
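The structure-based inpainting that the authors contrast against can be sketched as simple diffusion: masked pixels are iteratively replaced by the average of their four neighbours until the hole is smoothly filled from its boundary. This is a baseline illustration on a synthetic grayscale image, not the paper's GAN-based method:

```python
import numpy as np

def diffusion_inpaint(img, mask, n_iter=200):
    """Structure-based baseline: iteratively replace masked pixels with
    the average of their 4-neighbours (discrete heat diffusion)."""
    out = img.copy()
    out[mask] = out[~mask].mean()          # crude initial fill
    for _ in range(n_iter):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]              # only masked pixels are updated
    return out

img = np.linspace(0, 1, 16 * 16).reshape(16, 16)  # smooth synthetic image
mask = np.zeros_like(img, dtype=bool)
mask[6:10, 6:10] = True                   # square hole in the middle
damaged = img.copy()
damaged[mask] = 0.0
restored = diffusion_inpaint(damaged, mask)
err = float(np.abs(restored[mask] - img[mask]).mean())
print(err)                                # small residual on smooth content
```

Diffusion works well on smooth regions but blurs across edges, which is exactly the failure mode that motivates edge detection and learned (GAN-based) inpainting for face images.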



2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Zhifei Wang ◽  
Zhenjiang Miao ◽  
Yanli Wan ◽  
Zhen Tang

Low resolution (LR) in face recognition (FR) surveillance applications causes a dimensional mismatch between an LR image and its high-resolution (HR) template. In this paper, a novel method called kernel coupled cross-regression (KCCR) is proposed to deal with this problem. Instead of processing directly in the original observation space, KCCR projects LR and HR face images into a unified nonlinear embedding feature space using kernel coupled mappings and graph embedding. Spectral regression is further employed to improve generalization performance and reduce time complexity. Meanwhile, cross-regression is developed to fully exploit the HR embedding to enrich the LR space, thereby improving recognition performance. Experiments on the FERET and CMU PIE face databases show that KCCR outperforms existing structure-based methods in terms of both recognition rate and time complexity.
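The coupled-mapping idea, lifting LR features into a space shared with HR features before matching, can be sketched with plain kernel ridge regression: learn a nonlinear map from LR features to HR embeddings, then match lifted probes against HR templates by nearest neighbour. This simplification stands in for KCCR's coupled mappings and spectral regression; all data here are synthetic:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
hr = rng.random((40, 16))                               # HR face features
lr = hr[:, ::4] + 0.05 * rng.standard_normal((40, 4))   # degraded LR view

# Kernel ridge regression from LR features to the HR feature space:
# a simplified stand-in for the coupled nonlinear embedding.
K = rbf_kernel(lr, lr)
alpha = np.linalg.solve(K + 1e-3 * np.eye(40), hr)

def lift(lr_query):
    """Project LR features into the (approximate) HR space."""
    return rbf_kernel(lr_query, lr) @ alpha

pred = lift(lr[:5])
# Nearest-neighbour matching in the unified space
match = ((pred[:, None, :] - hr[None, :, :]) ** 2).sum(-1).argmin(axis=1)
print(pred.shape, match)
```

Because probe and template now live in one space, the dimensional mismatch disappears and ordinary nearest-neighbour matching applies; KCCR additionally regresses in both directions and uses graph embedding to shape that shared space.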



Author(s):  
Shifeng Shang ◽  
Haiyan Liu ◽  
Qiang Qu ◽  
Guannan Li ◽  
Jie Cao


2019 ◽  
Vol 892 ◽  
pp. 200-209
Author(s):  
Rayner Pailus ◽  
Rayner Alfred

The Viola-Jones AdaBoost method is a landmark discovery in face detection, mainly because it is fast, lightweight, and one of the simplest face detection techniques. Viola-Jones uses Haar wavelet filters to detect faces and achieves almost 80% detection accuracy. This paper discusses a proposed methodology and algorithms that involve a larger library of filters, creating more discriminative features among images by processing 15 proposed Haar rectangular features (an extension of Viola-Jones' 4 Haar wavelet filters) and using them in a multiple adaptive ensemble process of face detection. After face detection, the process continues with normalization, applying feature extraction such as PCA combined with LDA or LPP to extract weak learners' wavelets for more classification features. After feature extraction, feature selection is proposed to index the extracted data. The extracted vectors are used for training MADBoost (Multiple Adaptive Diversified Boost), an improvement of AdaBoost that combines multiple feature extraction methods with multiple classifiers and is able to capture, recognize, and distinguish face images faster. MADBoost applies the ensemble approach with better classification weights to produce better face recognition results. Three experiments were conducted to compare the proposed MADBoost with three other classifiers, Neural Network (NN), Support Vector Machines (SVM), and AdaBoost, using Principal Component Analysis (PCA) as the feature extraction method. These experiments were tested against the obstacles of POIES (Pose, Obstruction, Illumination, Expression, Sizes).
Based on the results obtained, MADBoost improves recognition performance in terms of matching failures, incorrect matches, matching success percentages, and acceptable classification time.



2019 ◽  
Author(s):  
André C. Ferreira ◽  
Liliana R. Silva ◽  
Francesco Renna ◽  
Hanja B. Brandl ◽  
Julien P. Renoult ◽  
...  

Abstract. Individual identification is a crucial step in answering many questions in evolutionary biology and is mostly performed by marking animals with tags. Such methods are well established but often make data collection and analysis time consuming, and consequently are not suited to collecting very large datasets. Recent technological and analytical advances, such as deep learning, can help overcome these limitations by automating data collection and analysis. Currently, one of the bottlenecks preventing the application of deep learning to individual identification is the hundreds to thousands of labelled pictures required for training convolutional neural networks (CNNs). Here, we describe procedures that improve data collection and allow individual identification in captive and wild birds, and we apply them to three small bird species: the sociable weaver Philetairus socius, the great tit Parus major, and the zebra finch Taeniopygia guttata. First, we present an automated method for collecting large samples of individually labelled images. Second, we describe how to train a CNN to identify individuals. Third, we illustrate the general applicability of CNNs to individual identification in animal studies by showing that a trained CNN can predict the identity of birds from images collected in contexts that differ from the ones originally used for training. Fourth, we present a potential solution to the issue of new incoming individuals. Overall, our work demonstrates the feasibility of applying state-of-the-art deep learning tools to individual identification of birds, both in the lab and in the wild, made possible by approaches that allow efficient collection of training data. The ability to identify individual birds without external markers visually identifiable by human observers represents a major advance over current methods.
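The "new incoming individuals" problem raised in the abstract is commonly handled with open-set recognition: match an embedding to the nearest known individual's centroid, but reject it as new when that distance exceeds a threshold. A hypothetical sketch (the embeddings, labels, and threshold are invented for illustration, not the authors' pipeline):

```python
import numpy as np

def identify(embedding, centroids, labels, threshold=1.0):
    """Assign a bird embedding to the nearest known individual's centroid,
    or reject it as 'new_individual' when the distance exceeds threshold."""
    d = np.linalg.norm(centroids - embedding, axis=1)
    i = int(d.argmin())
    return labels[i] if d[i] <= threshold else "new_individual"

rng = np.random.default_rng(3)
centroids = np.eye(3) * 5.0                   # 3 known birds, well separated
labels = ["bird_A", "bird_B", "bird_C"]

known = centroids[1] + 0.1 * rng.standard_normal(3)   # noisy view of bird_B
unknown = np.array([10.0, 10.0, 10.0])                # far from every centroid

print(identify(known, centroids, labels))     # → bird_B
print(identify(unknown, centroids, labels))   # → new_individual
```

Images flagged as `new_individual` can then be accumulated and used to add a centroid (or retrain the classifier head) for the newcomer, without retraining the feature extractor from scratch.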



Author(s):  
Yue Zhao ◽  
Jianbo Su

Some regions (or blocks) of face images, and their associated features, are normally more important for face recognition. However, the variety of feature contributions, which exert different degrees of saliency on recognition, is usually ignored. This paper proposes a new sparse facial feature description model based on salience evaluation of regions and features, which not only considers the contributions of different face regions but also distinguishes those of different features within the same region. Specifically, a structured sparse learning scheme is employed as the salience evaluation method to encourage sparsity at both the group and individual levels, balancing regions and features. The new facial feature description model is then obtained by combining the salience evaluation method with region-based features. Experimental results show that the proposed model achieves better performance with much lower feature dimensionality.
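Structured sparse learning at the group level typically relies on a group-lasso-style proximal step: each region's coefficient block is shrunk toward zero as a unit, so weakly salient regions vanish entirely while salient ones survive (shrunk). A minimal sketch with toy values, not the paper's actual model:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group-lasso penalty: shrink each group's
    coefficient vector by lam in norm; groups with small norm become zero."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * w[g]   # shrink magnitude, keep direction
    return out

# Two toy 'face regions' of three features each:
# a salient region (large weights) and a weak one (tiny weights)
w = np.array([3.0, 4.0, 0.0,   0.1, 0.1, 0.1])
groups = [slice(0, 3), slice(3, 6)]
shrunk = group_soft_threshold(w, groups, lam=1.0)
print(shrunk)   # salient region kept (shrunk), weak region zeroed out
```

Combining this group-level penalty with an elementwise L1 term gives sparsity at both levels, which matches the abstract's goal of selecting salient regions while also weighting individual features within a region.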



2013 ◽  
Vol 22 (01) ◽  
pp. 1250029 ◽  
Author(s):  
SHICAI YANG ◽  
GEORGE BEBIS ◽  
MUHAMMAD HUSSAIN ◽  
GHULAM MUHAMMAD ◽  
ANWAR M. MIRZA

Human faces can be arranged into different face categories using information from common visual cues such as gender, ethnicity, and age. It has been demonstrated that using face categorization as a precursor step to face recognition improves recognition rates and leads to more graceful errors. Although face categorization using common visual cues yields meaningful face categories, developing accurate and robust gender, ethnicity, and age categorizers is a challenging issue. Moreover, it limits the overall number of possible face categories and, in practice, yields unbalanced face categories, which can compromise recognition performance. This paper investigates ways to automatically discover a categorization of human faces from a collection of unlabeled face images without relying on predefined visual cues. Specifically, given a set of face images from a group of known individuals (i.e., a gallery set), our goal is to robustly partition the gallery set into face categories. The objective is to assign novel images of the same individuals (i.e., a query set) to the correct face category with high accuracy and robustness. To address the issue of face category discovery, we represent faces using local features and apply unsupervised learning (i.e., clustering). To categorize faces in novel images, we employ nearest-neighbor algorithms or learn the separating boundaries between face categories using supervised learning (i.e., classification). To improve face categorization robustness, we allow face categories to share local features as well as to overlap. We demonstrate the performance of the proposed approach through extensive experiments and comparisons using the FERET database.
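The discovery-then-assignment pipeline described above, clustering unlabeled gallery features into face categories and then assigning query images to the nearest discovered category, can be sketched with plain k-means and nearest-centroid assignment. The features below are synthetic stand-ins, and the initialization is deterministic for reproducibility; the paper's approach additionally allows overlapping categories and shared local features:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain k-means: discover face categories without visual-cue labels.
    This toy version seeds centers with evenly spaced gallery points
    instead of random initialization, for reproducibility."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)                       # nearest center per point
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)  # recompute centroid
    return centers, assign

rng = np.random.default_rng(4)
# Stand-in gallery features: two well-separated blobs (two face categories)
gallery = np.vstack([rng.normal(0.0, 0.1, (20, 5)),
                     rng.normal(3.0, 0.1, (20, 5))])
centers, assign = kmeans(gallery, 2)

# Assign a novel (query) image to the nearest discovered category
query = rng.normal(3.0, 0.1, 5)
cat = int(((centers - query) ** 2).sum(1).argmin())
print(cat)   # query lands in the category discovered from the second blob
```

In the paper's setting, the cluster assignment acts as a precursor filter: recognition then only searches within the predicted face category, which is what yields the speed and graceful-error benefits.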


