Model-Based Illumination Correction for Face Images in Uncontrolled Scenarios

Author(s):  
Bas Boom ◽  
Luuk Spreeuwers ◽  
Raymond Veldhuis

2021 ◽  
Vol 37 (5) ◽  
pp. 879-890
Author(s):  
Rong Wang ◽  
ZaiFeng Shi ◽  
Qifeng Li ◽  
Ronghua Gao ◽  
Chunjiang Zhao ◽  
...  

Highlights:
- A pig face recognition model that cascades the pig face detection network and the pig face recognition network is proposed.
- The pig face detection network can automatically extract pig face images to reduce the influence of the background.
- The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets.
- An application is developed to automatically recognize individual pigs.

Abstract. The identification and tracking of livestock using artificial intelligence technology have been a research hotspot in recent years. Automatic individual recognition is the key to realizing intelligent feeding. Although RFID can achieve identification tasks, it is expensive and fails easily. In this article, a pig face recognition model that cascades a pig face detection network and a pig face recognition network is proposed. First, the pig face detection network is used to crop pig face images from videos and eliminate the complex background of the pig shed. Second, batch normalization, dropout, skip connections, and residual modules are exploited to design a pig face recognition network for individual identification. Finally, the cascaded model based on the pig face detection and recognition networks is deployed on a GPU server, and an application is developed to automatically recognize individual pigs. Additionally, class activation maps generated by Grad-CAM are used to analyze the pig face features learned by the model. Under free and unconstrained conditions, 46 pigs are selected to build a positive pig face dataset, an original multiangle pig face dataset, and an enhanced multiangle pig face dataset to verify the cascaded pig face recognition model. The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets, higher than those of other pig face recognition models. The results of this study improve the recognition performance for pig faces under multiangle and multi-environment conditions.

Keywords: CNN, Deep learning, Pig face detection, Pig face recognition.
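As a rough illustration of the detect-then-recognize cascade described above, the following is a minimal sketch assuming a PyTorch setting; the detector interface, layer sizes, crop resolution, and the 46-class output head are illustrative assumptions, not the authors' published architecture. The residual block combines batch normalization, dropout, and a skip connection, mirroring the building blocks listed in the abstract.

```python
# Minimal sketch of a cascaded detect-then-recognize pipeline (illustrative only).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Conv -> BN -> ReLU -> Dropout -> Conv -> BN, with a skip connection."""
    def __init__(self, channels: int, p_drop: float = 0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # skip connection


class PigFaceRecognizer(nn.Module):
    """Illustrative recognition head for 46 individual pigs."""
    def __init__(self, num_pigs: int = 46):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_pigs)
        )

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))


def cascade(frame, detector, recognizer):
    """Run the cascade: detect a face box, crop it, then classify the crop."""
    x1, y1, x2, y2 = detector(frame)            # hypothetical detector API
    crop = frame[:, :, y1:y2, x1:x2]            # remove the shed background
    crop = nn.functional.interpolate(crop, size=(112, 112))
    return recognizer(crop).argmax(dim=1)       # predicted pig identity
```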


Author(s):  
Yue Zhao ◽  
Jianbo Su

Some regions (or blocks) of face images, together with their affiliated features, are normally more important for face recognition than others. However, the variation in feature contributions, which exerts different salience on recognition, is usually ignored. This paper proposes a new sparse facial feature description model based on salience evaluation of regions and features, which not only considers the contributions of different face regions but also distinguishes those of different features within the same region. Specifically, a structured sparse learning scheme is employed as the salience evaluation method to encourage sparsity at both the group and individual levels, thereby balancing regions and features. The new facial feature description model is then obtained by combining the salience evaluation method with region-based features. Experimental results show that the proposed model achieves better performance with much lower feature dimensionality.
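The group-plus-individual sparsity scheme mentioned above is in the spirit of a sparse group lasso; the sketch below, assuming region-grouped feature vectors and a least-squares objective, shows how a proximal step can zero out whole regions as well as single features. The group layout, penalty weights, and optimizer are assumptions, not the paper's exact formulation.

```python
# Minimal sketch of sparse-group-lasso-style salience weights (illustrative only).
import numpy as np

def prox_sparse_group(w, groups, lam1, lam2, step):
    """One proximal step: soft-threshold individual features (L1), then
    shrink whole region groups (L2,1), so both levels can become exactly zero."""
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)   # feature level
    for g in groups:                                            # region level
        norm = np.linalg.norm(w[g])
        if norm > 0:
            w[g] *= max(0.0, 1.0 - step * lam2 / norm)
    return w

def fit_salience(X, y, groups, lam1=0.01, lam2=0.05, lr=0.01, iters=500):
    """Proximal gradient descent on a least-squares loss; the surviving
    non-zero weights indicate salient regions and features."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)
        w = prox_sparse_group(w - lr * grad, groups, lam1, lam2, lr)
    return w
```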


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jialing Feng ◽  
Zhexiao Guo ◽  
Jun Wang ◽  
Guo Dan

A rapid and objective assessment of the severity of facial paralysis allows rehabilitation physicians to choose the optimal rehabilitation treatment regimen for their patients. In this study, patients with facial paralysis were enrolled as study subjects, and the eye aspect ratio (EAR) index was proposed for the eye region. The correlation between EAR and the facial nerve grading system 2.0 (FNGS 2.0) score was analyzed to verify the ability of EAR to enhance FNGS 2.0 for rapid and objective assessment of the severity of facial paralysis. First, in order to accurately calculate the EAR, we constructed a landmark detection model based on the face images of facial paralysis patients (FP-FLDM). Evaluation results showed that the facial landmark detection error rate of FP-FLDM on patients with facial paralysis was 17.1%, significantly better than that of the landmark detection model based on normal face images (NF-FLDM). Second, the Fréchet distance was used to calculate the difference in bilateral EAR of facial paralysis patients and to verify the correlation between this difference and the corresponding FNGS 2.0 score. The results showed that the higher the FNGS 2.0 score, the greater the difference in bilateral EAR; the correlation coefficient between the bilateral EAR difference and the corresponding FNGS 2.0 score was 0.9673, indicating a high correlation. Finally, 10-fold cross-validation showed that the accuracy of scoring the eyes of facial paralysis patients using EAR was 85.7%, indicating that EAR can support an objective and rapid assessment of the severity of facial paralysis with FNGS 2.0.
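The eye aspect ratio is commonly defined from six eye landmarks as the ratio of vertical to horizontal eye opening; the sketch below uses that common definition together with a discrete Fréchet distance over the two eyes' EAR sequences. The exact formula and landmark indexing used in this study may differ.

```python
# Minimal sketch of EAR and a bilateral asymmetry measure (illustrative only).
import numpy as np

def ear(eye):
    """eye: (6, 2) array of landmarks p1..p6 around one eye,
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return vertical / horizontal

def discrete_frechet(a, b):
    """Discrete Fréchet distance between two 1-D EAR sequences a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    d = np.full((n, m), np.inf)
    for i in range(n):
        for j in range(m):
            cost = abs(a[i] - b[j])
            if i == 0 and j == 0:
                d[i, j] = cost
            elif i == 0:
                d[i, j] = max(d[i, j - 1], cost)
            elif j == 0:
                d[i, j] = max(d[i - 1, j], cost)
            else:
                d[i, j] = max(min(d[i - 1, j], d[i - 1, j - 1], d[i, j - 1]), cost)
    return d[-1, -1]

# Usage idea: track EAR per eye over a video, then compare the two sides:
#   left_ears = [ear(f_left) for f_left in left_eye_landmarks]
#   right_ears = [ear(f_right) for f_right in right_eye_landmarks]
#   asymmetry = discrete_frechet(left_ears, right_ears)
```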


2020 ◽  
Vol 43 ◽  
Author(s):  
Peter Dayan

Abstract. Bayesian decision theory provides a simple formal elucidation of some of the ways that representation and representational abstraction are involved with, and exploit, both prediction and its rather distant cousin, predictive coding. Both model-free and model-based methods are involved.


2001 ◽  
Vol 7 (S2) ◽  
pp. 578-579
Author(s):  
David W. Knowles ◽  
Sophie A. Lelièvre ◽  
Carlos Ortiz de Solórzano ◽  
Stephen J. Lockett ◽  
Mina J. Bissell ◽  
...  

The extracellular matrix (ECM) plays a critical role in directing cell behaviour and morphogenesis by regulating gene expression and nuclear organization. Using non-malignant (S1) human mammary epithelial cells (HMECs), it was previously shown that ECM-induced morphogenesis is accompanied by the redistribution of nuclear mitotic apparatus (NuMA) protein from a diffuse pattern in proliferating cells to a multi-focal pattern as HMECs growth-arrested and completed morphogenesis, a process taking 10 to 14 days. To further investigate the link between NuMA distribution and the growth stage of HMECs, we have investigated the distribution of NuMA in non-malignant S1 cells and their malignant counterpart, T4 cells, using a novel model-based image analysis technique. This technique, based on a multi-scale Gaussian blur analysis (Figure 1), quantifies the size of punctate features in an image. Cells were cultured in the presence and absence of a reconstituted basement membrane (rBM) and imaged in 3D using confocal microscopy with fluorescently labeled monoclonal antibodies to NuMA (fαNuMA) and fluorescently labeled total DNA.
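One way to realize a multi-scale Gaussian analysis of punctate feature size is a scale-space search: filter the image at increasing scales and record the scale at which the spot-like response peaks. The sketch below, using a scale-normalized Laplacian-of-Gaussian response, is an assumption-laden illustration of this idea, not the authors' exact algorithm.

```python
# Minimal sketch of a multi-scale estimate of punctate feature size (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_laplace

def punctate_feature_size(image, sigmas=np.linspace(1, 10, 19)):
    """Return the blur scale (sigma) at which the scale-normalized LoG
    response is strongest; for a bright spot of radius r this peaks near
    sigma ~ r / sqrt(2), so the peak scale tracks the feature size."""
    responses = []
    for s in sigmas:
        log = -gaussian_laplace(image.astype(float), sigma=s) * s**2
        responses.append(log.max())          # strongest spot response at this scale
    best = sigmas[int(np.argmax(responses))]
    return best, np.sqrt(2.0) * best         # peak sigma and implied spot radius
```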

