Human Facial Feature Matching based on Motion-Smoothness Constraint

Author(s):  
Liguo Dong ◽  
Wenchao Xu ◽  
Xianxian Zeng
Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Jin Yang ◽  
Yuxuan Zhao ◽  
Shihao Yang ◽  
Xinxin Kang ◽  
Xinyan Cao ◽  
...  

In face recognition systems, a highly robust facial feature representation and a well-performing classification algorithm determine how well faces are recognized under unconstrained conditions. To explore the anti-interference performance of a convolutional neural network (CNN) reconstructed within a deep learning (DL) framework for face image feature extraction (FE) and recognition, this paper first combines the inception structure of the GoogleNet network with the residual connections of the ResNet architecture to construct a new deep reconstruction network, with stochastic gradient descent (SGD) as the optimizer and the triplet loss function as the classifier, and applies it to face recognition on the Labeled Faces in the Wild (LFW) database. Then, portrait pyramid segmentation and local feature point segmentation are applied to extract the features of face images, and face feature points are matched using the Euclidean distance and the joint Bayesian method. Finally, Matlab is used to simulate the proposed algorithm and compare it with other algorithms. The results show that the proposed algorithm achieves its best face recognition performance when the learning rate is 0.0004, the attenuation coefficient is 0.0001, the training method is SGD, and the dropout rate is 0.1 (accuracy: 99.03%, loss: 0.0047, training time: 352 s, and overfitting rate: 1.006), and it attains the largest mean average precision among the compared CNN algorithms. The face feature matching accuracy of the proposed algorithm is 84.72%, which is 6.94%, 2.5%, and 1.11% higher than the LeNet-5, VGG-16, and VGG-19 algorithms, respectively, but lower than the GoogleNet, AlexNet, and ResNet algorithms. At the same time, the proposed algorithm has a faster matching time (206.44 s) and a higher correct matching rate (88.75%) than the joint Bayesian method, indicating that the proposed deep reconstruction network can be used for face image recognition, FE, and matching, and that it has strong anti-interference capability.
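The triplet-loss objective and the Euclidean-distance matching step described above can be sketched as follows (a minimal NumPy illustration, not the paper's Matlab implementation; the margin and rejection threshold values are illustrative assumptions):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: pull the anchor embedding toward the positive sample
    and push it away from the negative sample by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def match_by_euclidean(query, gallery, threshold=1.0):
    """Return the index of the closest gallery embedding, or -1 if no
    embedding is closer than `threshold` (open-set rejection)."""
    dists = np.linalg.norm(gallery - query, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else -1
```

The threshold turns nearest-neighbour matching into open-set verification: a query face that is far from every enrolled embedding is rejected rather than force-matched.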


2015 ◽  
Vol 24 (1) ◽  
pp. 26-39 ◽  
Author(s):  
Yvonne Gillette

Mobile technology provides a solution for individuals who require augmentative and alternative communication (AAC) intervention. Principles of AAC assessment and intervention developed with dedicated speech-generating devices, such as feature matching and the participation model, can be applied to these generic mobile technologies with success. This article presents a clinical review of an adult with aphasia who reached her goals for greater communicative participation through mobile technology. Details presented include device selection, the sequence of intervention, and funding issues related to device purchase and intervention costs. Issues related to graduate student clinical education are also addressed. The purpose of the article is to encourage clinicians to consider mobile technology when intervening with an individual diagnosed with mild receptive and moderate expressive aphasia featuring word-finding difficulties.


Author(s):  
Suresha .M ◽  
. Sandeep

Local features are of great importance in computer vision, where feature detection and feature matching are two central tasks. This paper concentrates on the problem of recognizing birds using local features. The investigation evaluates the SURF, FAST, and Harris local features on blurred and illumination-varied images. The FAST and Harris corner algorithms give lower accuracy on blurred images, whereas the SURF algorithm gives the best results for blurred images because it identifies the strongest local features with low time complexity. The experimental demonstration shows that the SURF algorithm is robust to blurred images, while the FAST algorithm is suitable for images with illumination changes.
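As a rough illustration of the kind of corner detection compared here, the Harris corner response can be sketched in NumPy (a simplified didactic version; the window size and the constant k = 0.04 are conventional choices, not values from the paper):

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response map for a grayscale float image:
    R = det(M) - k * trace(M)^2, where M is the windowed covariance
    of the image gradients. High R marks corners; negative R marks edges."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # naive box filter: sum gradient products over a win x win window
        out = np.zeros_like(a)
        h, w = a.shape
        r = win // 2
        for y in range(h):
            for x in range(w):
                out[y, x] = a[max(0, y - r):y + r + 1,
                              max(0, x - r):x + r + 1].sum()
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

On a synthetic image containing one bright square, the response peaks at the square's corner, which is the behaviour the Harris detector is chosen for.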


2019 ◽  
Vol 141 (5) ◽  
Author(s):  
Wei Xiong ◽  
Qingbo He ◽  
Zhike Peng

The wayside acoustic defective bearing detector (ADBD) system is a promising technique for ensuring the safety of traveling vehicles. However, Doppler distortion and the aliasing of multiple moving sources in the acquired acoustic signals decrease the accuracy of defective bearing fault diagnosis. Currently, the method of constructing time-frequency (TF) masks for source separation is limited by an empirical threshold setting. To overcome this limitation, this study proposed a dynamic Doppler multisource separation model and constructed a time domain-separating matrix (TDSM) to realize the separation of multiple moving sources in the time domain. The TDSM was designed in two steps: (1) constructing the separating curves and the time domain remapping matrix (TDRM), and (2) remapping each element of the separating curves to its corresponding time according to the TDRM. Both the TDSM and the TDRM were driven by geometrical and motion parameters, which were estimated by the Doppler feature matching pursuit (DFMP) algorithm. After obtaining the source components from the observed signals, a correlation operation was carried out to estimate the source signals, and fault diagnosis could then be performed by envelope spectrum analysis. Compared with the method of constructing TF masks, the proposed strategy avoids setting thresholds empirically. Finally, the effectiveness of the proposed technique was validated on simulated and experimental cases, and the results indicated the potential of this method for improving the performance of the ADBD system.
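The core idea behind time-domain remapping, resampling the received signal back onto its emission timeline so the Doppler distortion vanishes, can be illustrated for a single source (a simplified NumPy sketch with assumed geometry and motion parameters; the paper's TDSM/TDRM construction and DFMP estimation are not reproduced here):

```python
import numpy as np

# assumed geometry (illustrative): sensor at the origin, source moving
# past it in a straight line at constant speed v, closest distance R0
c, v, R0, t0, f = 340.0, 30.0, 5.0, 1.0, 50.0

t_e = np.linspace(0.0, 2.0, 4001)            # emission-time grid
d = np.sqrt(R0**2 + (v * (t_e - t0))**2)     # source-sensor distance
t_a = t_e + d / c                            # arrival time of each sample
s = np.sin(2 * np.pi * f * t_e)              # emitted tone

# the microphone samples uniformly in *arrival* time -> Doppler-distorted
t_u = np.linspace(0.2, 1.9, 3001)
received = np.interp(t_u, t_a, s)

# time-domain remapping: invert the emission->arrival time map and
# resample the received signal onto a uniform emission-time grid
t_e_est = np.interp(t_u, t_a, t_e)
t_grid = np.linspace(0.2, 1.7, 2001)
restored = np.interp(t_grid, t_e_est, received)
reference = np.sin(2 * np.pi * f * t_grid)
corr = np.corrcoef(restored, reference)[0, 1]
```

Because the source speed is well below the sound speed, the arrival-time map is strictly monotonic and therefore invertible, and the restored signal correlates almost perfectly with the undistorted emitted tone.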


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 553
Author(s):  
Suresh Neethirajan ◽  
Inonge Reimert ◽  
Bas Kemp

Understanding animal emotions is key to unlocking methods for improving animal welfare. Currently, there are no benchmarks or scientific assessments available for measuring and quantifying the emotional responses of farm animals. Using sensors to collect biometric data as a means of measuring animal emotions is a topic of growing interest in agricultural technology. Here we review several aspects of the use of sensor-based approaches in monitoring animal emotions, beginning with an introduction to animal emotions. We then review some of the available technological systems for analyzing animal emotions. These systems include a variety of sensors, the algorithms used to process the biometric data taken from these sensors, facial expression analysis, and sound analysis. We conclude that a single emotional expression measurement, based on either the facial features of animals or their physiological functions, cannot accurately capture a farm animal's emotional changes, and hence compound expression recognition measurement is required. We propose some novel ways to combine sensor technologies, through sensor fusion, into efficient systems for monitoring and measuring the animals' compound expressions of emotion. Finally, we explore future perspectives in the field, including challenges and opportunities.
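A late sensor-fusion step of the kind proposed here could, in its simplest form, weight and combine per-sensor emotion scores (a hypothetical sketch: the score format, weights, and renormalization scheme are illustrative assumptions, not details from the review):

```python
import numpy as np

def fuse_emotion_scores(scores, weights):
    """Late fusion: take a weighted average of per-sensor emotion
    probability vectors (one row per sensor, rows sum to 1) and
    renormalize the result to a single probability distribution."""
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    fused = (w[:, None] * scores).sum(axis=0) / w.sum()
    return fused / fused.sum()
```

The weights would let a more reliable modality (say, facial analysis under good lighting) dominate a noisier one, which is the practical appeal of fusing modalities instead of trusting a single expression measurement.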


Author(s):  
Hung Phuoc Truong ◽  
Thanh Phuong Nguyen ◽  
Yong-Guk Kim

We present a novel framework for efficient and robust facial feature representation based upon the Local Binary Pattern (LBP), called the Weighted Statistical Binary Pattern, in which the descriptors utilize a straight-line topology along different directions. The input image is initially divided into mean and variance moments. A new variance moment, which contains distinctive facial features, is prepared by extracting the k-th root. Then, Sign and Magnitude components along four different directions are constructed from the mean moment, and a weighting approach based on the new variance is applied to each component. Finally, the weighted histograms of the Sign and Magnitude components are concatenated to build a novel histogram of Complementary LBP along different directions. A comprehensive evaluation on six public face datasets suggests that the present framework outperforms state-of-the-art methods, achieving accuracies of 98.51% on ORL, 98.72% on YALE, 98.83% on Caltech, 99.52% on AR, 94.78% on FERET, and 99.07% on KDEF. The influence of color spaces and the issue of degraded images are also analyzed with our descriptors. These results, together with the theoretical underpinning, confirm that our descriptors are robust against noise, illumination variation, diverse facial expressions, and head poses.
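For reference, the classic 3x3 LBP code on which such descriptors build can be computed as follows (a standard textbook formulation, not the weighted statistical variant proposed in the paper):

```python
import numpy as np

def lbp_code(patch):
    """Classic 3x3 Local Binary Pattern: threshold the 8 neighbours
    against the centre pixel and read them off as an 8-bit code,
    clockwise starting from the top-left neighbour."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= c)
```

Sliding this over an image and histogramming the resulting codes yields the basic LBP texture descriptor; sign/magnitude decompositions and directional weightings, as in the framework above, refine this idea.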


2021 ◽  
Author(s):  
Gabrielle E. Reimann ◽  
Catherine Walsh ◽  
Kelsey D. Csumitta ◽  
Patrick McClure ◽  
Francisco Pereira ◽  
...  
