face region
Recently Published Documents


TOTAL DOCUMENTS: 130 (FIVE YEARS: 45)

H-INDEX: 11 (FIVE YEARS: 2)

2021 ◽  
pp. 095679762110306
Author(s):  
Xiaomei Zhou ◽  
Shruti Vyas ◽  
Jinbiao Ning ◽  
Margaret C. Moulson

Everyday face recognition presents a difficult challenge because faces vary naturally in appearance as a result of changes in lighting, expression, viewing angle, and hairstyle. We know little about how humans develop the ability to learn faces despite natural facial variability. In the current study, we provide the first examination of attentional mechanisms underlying adults’ and infants’ learning of naturally varying faces. Adults (n = 48) and 6- to 12-month-old infants (n = 48) viewed videos of models reading a storybook; the facial appearance of these models was either high or low in variability. Participants then viewed the learned face paired with a novel face. Infants showed adultlike prioritization of face over nonface regions; both age groups fixated the face region more in the high- than low-variability condition. Overall, however, infants showed less ability to resist contextual distractions during learning, which potentially contributed to their lack of discrimination between the learned and novel faces. Mechanisms underlying face learning across natural variability are discussed.


Author(s):  
Saeed A. Awan ◽  
Syed Asif Ali ◽  
Imtiaz Hussain ◽  
Basit Hassan ◽  
Syed Muhammad Ashfaq Ashraf

The COVID-19 pandemic is an unparalleled disaster that has triggered massive fatalities and security problems. Under its pressure, the public frequently wear masks to safeguard their lives. Facial recognition becomes a challenge because a significant portion of the human face is hidden behind the mask. Researchers have therefore focused on prompt and effective solutions to this problem during the COVID-19 pandemic. This paper presents a trustworthy method for the recognition of masked faces based on the un-occluded face region and deep-learning-based features. The first stage is to capture the non-obstructed face region. We then extract the most significant features from the obtained regions (forehead and eyes) through a pre-trained deep convolutional neural network (CNN). A bag-of-words paradigm is applied to the feature maps to quantize them and obtain a more compact representation than the CNN's fully connected layer. Finally, a multilayer perceptron is used for classification. Experimental results show high recognition performance with significant accuracy.
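A minimal sketch of the pipeline this abstract describes, under several assumptions not stated in the paper: OpenCV's Haar eye cascade stands in for the un-occluded-region detector, a pretrained torchvision VGG-16 supplies the feature maps, a KMeans codebook provides the bag-of-visual-words quantization, and scikit-learn's MLPClassifier plays the role of the multilayer perceptron. Region heuristics and hyperparameters are illustrative.

```python
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

# Pretrained CNN truncated to its convolutional feature maps.
backbone = models.vgg16(weights="IMAGENET1K_V1").features.eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def upper_face_descriptors(bgr_image):
    """Crop the un-occluded eye/forehead region and return local CNN descriptors."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)
    if len(eyes) == 0:
        return None
    # Bounding box around the detected eyes, expanded upward toward the forehead.
    x0 = min(x for x, y, w, h in eyes); x1 = max(x + w for x, y, w, h in eyes)
    y0 = max(0, min(y for x, y, w, h in eyes) - 60); y1 = max(y + h for x, y, w, h in eyes)
    crop = cv2.cvtColor(bgr_image[y0:y1, x0:x1], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        fmap = backbone(preprocess(crop).unsqueeze(0))   # (1, C, H, W)
    # Each spatial position of the feature map becomes one local descriptor.
    return fmap.squeeze(0).flatten(1).T.numpy()          # (H*W, C)

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a KMeans codebook (bag of visual words)."""
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Training on labeled identities (data loading omitted):
# codebook = KMeans(n_clusters=64).fit(np.vstack(all_descriptors))
# clf = MLPClassifier(hidden_layer_sizes=(256,)).fit(histograms, identity_labels)
```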


Author(s):  
Rafia Hassani ◽  
Mohamed Boumehraz ◽  
Maroua Hamzi

In this paper, a simple human-machine interface that allows people with severe disabilities to control a motorized wheelchair using mouth and tongue gestures is presented. The development of the proposed system consists of three principal phases. The first phase is mouth detection, which is performed using a Haar cascade to detect the face area and template matching to detect mouth and tongue gestures in the lower face region. The second phase is command extraction; it is carried out by determining the mouth and tongue gesture commands according to the detected gesture, the time taken to execute the gesture, and the previous command, which is stored in each frame. Finally, the gesture commands are sent to the wheelchair as instructions over the Bluetooth serial port. The hardware used for this project consisted of a laptop with a universal serial bus (USB) webcam as the vision-based control unit, a Bluetooth module to receive instructions from the vision-based control unit, a standard joystick for use in case of emergency, a joystick emulator that delivers to the control board signals similar to those normally generated by the standard joystick, and ultrasonic sensors to provide safe navigation. The experimental results showed the success of the proposed control system based on mouth and tongue gestures.
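A hedged sketch of the gesture-to-command loop described above, assuming OpenCV Haar face detection, template matching on the lower face region, and a Bluetooth link exposed as a serial port via pyserial; the template images, matching threshold, serial device, and command letters are placeholders, not the authors' values.

```python
import cv2
import serial
import time

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Gesture templates (grayscale crops prepared beforehand) and the commands they map to.
templates = {
    "F": cv2.imread("mouth_open.png", cv2.IMREAD_GRAYSCALE),    # forward
    "L": cv2.imread("tongue_left.png", cv2.IMREAD_GRAYSCALE),   # turn left
    "R": cv2.imread("tongue_right.png", cv2.IMREAD_GRAYSCALE),  # turn right
    "S": cv2.imread("mouth_closed.png", cv2.IMREAD_GRAYSCALE),  # stop
}

def classify_gesture(frame_gray):
    """Detect the face, crop its lower third, and match it against the templates."""
    faces = face_cascade.detectMultiScale(frame_gray, 1.2, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])          # largest face
    mouth_roi = frame_gray[y + 2 * h // 3 : y + h, x : x + w]   # lower face region
    scores = {}
    for cmd, tmpl in templates.items():
        tmpl = cv2.resize(tmpl, (mouth_roi.shape[1], mouth_roi.shape[0]))
        scores[cmd] = cv2.matchTemplate(mouth_roi, tmpl, cv2.TM_CCOEFF_NORMED).max()
    best = max(scores, key=scores.get)
    return best if scores[best] > 0.6 else None

wheelchair = serial.Serial("/dev/rfcomm0", 9600)   # assumed Bluetooth serial port
cap = cv2.VideoCapture(0)
last_cmd, last_time = None, 0.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cmd = classify_gesture(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # Debounce: switch to a different command at most once per second.
    if cmd and cmd != last_cmd and time.time() - last_time > 1.0:
        wheelchair.write(cmd.encode())
        last_cmd, last_time = cmd, time.time()
```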


2021 ◽  
Author(s):  
Debajyoty Banik ◽  
Saksham Rawat ◽  
Aayush Thakur ◽  
Pritee Parwekar ◽  
Suresh Chandra Satapathy

Abstract The outbreak of Coronavirus Disease 2019 (COVID-19) occurred at the end of 2019, and it has continued to be a source of misery for millions of people and companies well into 2020. There is a surge of concern among all people, especially those who wish to resume in-person activities, as the globe recovers from the epidemic and intends to return to a level of normalcy. According to studies, wearing a face mask greatly decreases the likelihood of viral transmission and gives a sense of security. However, manually tracking compliance with this regulation is not possible; technology is the key. We present a deep-learning-based system that can detect instances of improper use of face masks. A dual-stage convolutional neural network (CNN) architecture is used in our system to recognize masked and unmasked faces. This will aid in the tracking of safety breaches, the promotion of face mask use, and the maintenance of a safe working environment. When incorporated with CCTV cameras, the system automates mask detection in public places and alerts the system manager when a person without a mask, or wearing a mask incorrectly, tries to enter. This paper includes a multi-face detection model that can identify whether or not each person in a group is wearing a mask. We collected various facial pictures, identified the face Region of Interest (ROI), and separated it. Facial landmarks were then applied to localize the eyes, nose, mouth, and so on, after which we detected the presence of a mask. Preparing a custom face mask detector required breaking the project into two distinct phases, each with its own sub-steps. 1. Training: load the face mask detection dataset from disk, train a model on this dataset, and then serialize the face mask detector back to disk. 2. Deployment: once the face mask detector is trained, load the detector, perform face detection, and then classify each face as with mask or without mask.
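A minimal sketch of the dual-stage idea, not the authors' exact architecture: stage one detects faces in a frame, stage two classifies each face crop as masked or unmasked with a fine-tuned CNN. The classifier backbone, weights file name, and alerting hook are assumptions.

```python
import cv2
import torch
from torchvision import models, transforms

# Stage 1: face detector (Haar cascade used here for simplicity).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Stage 2: binary mask classifier; a MobileNetV2 re-headed for two classes,
# assumed to have been fine-tuned and saved beforehand.
classifier = models.mobilenet_v2(num_classes=2)
classifier.load_state_dict(torch.load("mask_classifier.pt", map_location="cpu"))
classifier.eval()
to_input = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
])
LABELS = ["mask", "no_mask"]

def check_frame(frame_bgr):
    """Return one (bounding box, label) pair per detected face in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face_rgb = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = classifier(to_input(face_rgb).unsqueeze(0))
        label = LABELS[int(logits.argmax(dim=1))]
        if label == "no_mask":
            pass  # hook for alerting the system manager would go here
        results.append(((x, y, w, h), label))
    return results
```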


2021 ◽  
Author(s):  
Radosław Wróbel ◽  
Gustaw Sierzputowski ◽  
Piotr Haller ◽  
Veselin Mihaylov ◽  
Radostin Dimitrov

The article presents an analysis of road crash accidents. It describes the evolution of safety systems, starting from the currently used vehicle-based systems, with particular emphasis on predicting the driver falling asleep. The article also proposes a proprietary sleep-prediction system based on detection of the driver's face. The detection of facial landmarks is presented as a two-step process: an algorithm first finds faces in general, and then localizes key facial structures within the face region of interest. The article presents the operation of the algorithm for detecting the driver falling asleep, together with the detection and analysis method.
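An illustrative sketch of the two-step landmark process, using dlib's face detector and 68-point shape predictor; the eye-aspect-ratio (EAR) drowsiness criterion shown here is a common stand-in and is not necessarily the rule used in the article.

```python
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
# Standard publicly available dlib 68-landmark model file.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
LEFT_EYE, RIGHT_EYE = range(42, 48), range(36, 42)   # 68-point indexing

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops toward 0 as the eye closes.
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def drowsiness_score(gray_frame):
    """Step 1: find faces; step 2: localize landmarks inside the face region."""
    faces = detector(gray_frame, 0)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    coords = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    left = eye_aspect_ratio([coords[i] for i in LEFT_EYE])
    right = eye_aspect_ratio([coords[i] for i in RIGHT_EYE])
    return (left + right) / 2.0   # alert if this stays below ~0.2 for many frames
```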


2021 ◽  
Vol 11 (19) ◽  
pp. 9174
Author(s):  
Sanoar Hossain ◽  
Saiyed Umer ◽  
Vijayan Asari ◽  
Ranjeet Kumar Rout

This work proposes a facial expression recognition system for a diversified field of applications. The purpose of the proposed system is to predict the type of expression in a human face region. The implementation of the proposed method is divided into three components. In the first component, a tree-structured part model is applied to the given input image to predict landmark points and detect the facial region. The detected face region is normalized to a fixed size and then down-sampled to varying sizes so that the advantages of multi-resolution images can be exploited. In the second component, several convolutional neural network (CNN) architectures are proposed to analyze the texture patterns in the facial regions. To enhance the proposed CNN models' performance, advanced techniques such as data augmentation, progressive image resizing, transfer learning, and fine-tuning of the parameters are employed in the third component to extract more distinctive and discriminant features for the proposed facial expression recognition system. The outputs of the different CNN models are fused to achieve better performance than existing state-of-the-art methods; for this reason, extensive experimentation has been carried out using the Karolinska Directed Emotional Faces (KDEF), GENKI-4k, Cohn-Kanade (CK+), and Static Facial Expressions in the Wild (SFEW) benchmark databases. The performance has been compared with existing methods on these databases, showing that the proposed facial expression recognition system outperforms other competing methods.
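A rough sketch of the transfer-learning and fusion ideas mentioned above, not the authors' models: a pretrained backbone is re-headed for expression classes, one model is kept per input resolution, and softmax scores are averaged at test time. The backbone choice, resolutions, and class count are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_EXPRESSIONS = 7   # e.g., seven basic expression categories

def make_expression_model():
    """ResNet-18 pretrained on ImageNet, re-headed for expression classes (transfer learning)."""
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, NUM_EXPRESSIONS)
    return model

# One model per input resolution (multi-resolution / progressive resizing idea).
resolutions = [112, 160, 224]
ensemble = {r: make_expression_model() for r in resolutions}

def fused_prediction(face_batches):
    """face_batches maps resolution r -> tensor of shape (N, 3, r, r)."""
    probs = []
    with torch.no_grad():
        for r, model in ensemble.items():
            model.eval()
            probs.append(torch.softmax(model(face_batches[r]), dim=1))
    # Score-level fusion: average the per-resolution softmax outputs.
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```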


Author(s):  
Yallamandaiah S. ◽  
Purnachand N.

In the area of computer vision, face recognition is a challenging task because of pose, facial expression, and illumination variations. The performance of face recognition systems drops in an unconstrained environment. In this work, a new face recognition approach is proposed using a guided image filter and a convolutional neural network (CNN). The guided image filter is a smoothing operator that performs well near edges. Initially, the Viola-Jones algorithm is used to detect the face region, which is then smoothed by a guided image filter. The proposed CNN is then used to extract the features and recognize the faces. The experiments were performed on face databases such as ORL, JAFFE, and YALE and attained recognition rates of 98.33%, 99.53%, and 98.65%, respectively. The experimental results show that the suggested face recognition method attains better results than some of the state-of-the-art techniques.
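A minimal sketch of the preprocessing chain, assuming an opencv-contrib build that provides cv2.ximgproc.guidedFilter; the recognition CNN itself is left as a placeholder since the paper's architecture is custom, and the filter parameters and output size are illustrative.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(bgr_image, radius=8, eps=100.0, size=(64, 64)):
    """Viola-Jones face detection followed by edge-preserving guided filtering."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    face = bgr_image[y:y + h, x:x + w]
    # Guided filter with the face itself as the guide: smooths flat regions
    # while preserving edges; eps is scaled for 0-255 pixel values.
    smoothed = cv2.ximgproc.guidedFilter(face, face, radius, eps)
    return cv2.resize(smoothed, size)   # this crop is what the recognition CNN would consume
```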


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yujiang Lu ◽  
Yaju Liu ◽  
Jianwei Fei ◽  
Zhihua Xia

Recent progress in deep learning, in particular in generative models, makes it easier to synthesize sophisticated forged faces in videos, leading to severe threats on social media to personal privacy and reputation. It is therefore highly necessary to develop forensic approaches to distinguish forged videos from authentic ones. Existing works concentrate on frame-level cues but make insufficient use of the rich temporal information. Although some approaches identify forgeries from the perspective of motion inconsistency, there is so far no promising spatiotemporal feature fusion strategy. Towards this end, we propose the Channel-Wise Spatiotemporal Aggregation (CWSA) module to fuse deep features of continuous video frames without any recurrent units. Our approach starts by cropping the face region with some background retained, which transforms the learning objective from the manipulations themselves to the difference between pristine and manipulated pixels. A deep convolutional neural network (CNN) with skip connections, which are conducive to preserving detection-helpful low-level features, is then used to extract frame-level features. The CWSA module finally makes the real-or-fake decision by aggregating the deep features of the frame sequence. Evaluation against a list of large facial video manipulation benchmarks illustrates its effectiveness. On all three datasets, FaceForensics++, Celeb-DF, and DeepFake Detection Challenge Preview, the proposed approach outperforms state-of-the-art methods by significant margins.
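The CWSA module itself is defined in the paper; the sketch below only illustrates the general idea of fusing frame-level CNN features across time with learned per-channel weights instead of a recurrent unit. The backbone, clip length, and weighting scheme are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
from torchvision import models

class FrameSequenceClassifier(nn.Module):
    """Frame-level CNN features aggregated channel-wise over time (no recurrence)."""

    def __init__(self, num_frames=8):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:-1])   # -> (N, 512, 1, 1)
        # One learnable logit per (channel, frame); softmax over frames gives
        # a channel-wise temporal weighting.
        self.temporal_logits = nn.Parameter(torch.zeros(512, num_frames))
        self.head = nn.Linear(512, 2)   # real vs. fake

    def forward(self, clips):                                    # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.features(clips.flatten(0, 1)).flatten(1)    # (B*T, 512)
        feats = feats.view(b, t, -1).permute(0, 2, 1)             # (B, 512, T)
        weights = torch.softmax(self.temporal_logits, dim=1)      # (512, T)
        fused = (feats * weights.unsqueeze(0)).sum(dim=2)         # (B, 512)
        return self.head(fused)
```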


2021 ◽  
Vol 11 (8) ◽  
pp. 127-130
Author(s):  
Sunil V Jagtap ◽  
Atul Hulwan ◽  
Snigdha Vartak

Coronavirus disease 2019 (COVID-19) is an infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). COVID-19 infection may be associated with a wide range of bacterial and fungal co-infections. Herewith we present the case of a 46-year-old male patient who developed a co-infection after COVID-19. He had received steroid treatment and had improved in the previous month. He has been a known case of type II diabetes for the last year and was on treatment. He presented to our hospital with fever, facial pain, and swelling of the mid-face region. His RT-PCR test was positive. The CT scan showed involvement of the nasal septum and the medial walls of the bilateral maxillary, ethmoid, sphenoid, and frontal sinuses, extending into both nasal cavities, with features suggestive of an infective pathology, invasive fungal rhinosinusitis. On clinical, radio-imaging, and histopathological findings, the case was diagnosed as maxillary mucormycosis with actinomycosis. Conclusion: We present this rare case of COVID-19 associated with co-infection of mucormycosis and actinomycosis for its clinical, radio-imaging, and histopathological findings. Key words: Coronavirus Disease 2019 (COVID-19), Mucormycosis, Actinomycosis, Co-infections.

