Implementation of wheelchair controller using mouth and tongue gesture

Author(s):  
Rafia Hassani ◽  
Mohamed Boumehraz ◽  
Maroua Hamzi

In this paper, a simple human-machine interface that allows people with severe disabilities to control a motorized wheelchair using mouth and tongue gestures is presented. The development of the proposed system consists of three principal phases. The first phase is mouth detection, which is performed by using a Haar cascade to detect the face area and template matching to detect mouth and tongue gestures from the lower face region. The second phase is command extraction; it is carried out by determining the mouth and tongue gesture commands according to the detected gesture, the time taken to execute the gestures, and the previous command, which is stored in each frame. Finally, the gesture commands are sent to the wheelchair as instructions over the Bluetooth serial port. The hardware used for this project was: a laptop with a universal serial bus (USB) webcam as the vision-based control unit, a Bluetooth module to receive instructions coming from the vision control unit, a standard joystick used in case of emergency, a joystick emulator that delivers to the control board signals similar to those usually generated by the standard joystick, and ultrasonic sensors to provide safe navigation. The experimental results showed the success of the proposed control system based on mouth and tongue gestures.
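
As an illustration of the pipeline described above, the following minimal Python/OpenCV sketch combines a Haar cascade for the face area with template matching on the lower face region, and forwards the resulting command over a serial port. The template files, the matching threshold, and the serial device name are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: Haar cascade face detection + template matching on the
# lower face region, with the winning gesture sent over a Bluetooth serial port.
import cv2
import serial  # pyserial, assumed for the Bluetooth serial link

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
templates = {  # hypothetical grayscale gesture templates
    "FORWARD": cv2.imread("mouth_open.png", cv2.IMREAD_GRAYSCALE),
    "LEFT": cv2.imread("tongue_left.png", cv2.IMREAD_GRAYSCALE),
    "RIGHT": cv2.imread("tongue_right.png", cv2.IMREAD_GRAYSCALE),
    "STOP": cv2.imread("mouth_closed.png", cv2.IMREAD_GRAYSCALE),
}
port = serial.Serial("/dev/rfcomm0", 9600)  # assumed Bluetooth serial device

def classify_gesture(gray_frame):
    """Return the best-matching gesture label for the lower face region, or None."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    lower_face = gray_frame[y + h // 2:y + h, x:x + w]
    best_label, best_score = None, 0.0
    for label, tmpl in templates.items():
        tmpl = cv2.resize(tmpl, (lower_face.shape[1], lower_face.shape[0]))
        score = cv2.matchTemplate(lower_face, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score > 0.6 else None  # assumed threshold

cap = cv2.VideoCapture(0)
previous = "STOP"
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gesture = classify_gesture(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if gesture and gesture != previous:
        port.write((gesture + "\n").encode())  # send command to the wheelchair
        previous = gesture
```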

2011 ◽  
Vol 55-57 ◽  
pp. 77-81
Author(s):  
Hui Ming Huang ◽  
He Sheng Liu ◽  
Guo Ping Liu

In this paper, we propose an efficient method to address the problem of color face image segmentation based on color information and a saliency map. The method consists of three stages. First, skin-colored regions are detected using a Bayesian model of human skin color, yielding a chroma chart that gives the likelihood of skin color at each pixel. This chroma chart is then segmented into skin regions that satisfy the homogeneity property of human skin. In the third stage, a visual attention model is employed to localize the face region according to the saliency map, where the bottom-up approach uses both the intensity and color feature maps from the test image. Experimental evaluation shows that the proposed method segments the face area quite effectively; at the same time, it performs well for subjects in both simple and complex backgrounds, as well as under varying illumination conditions and skin color variances.
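
A minimal sketch of the first two stages is given below, assuming a single 2-D Gaussian as the Bayesian skin-colour model in Cr/Cb chroma space; the mean, covariance, and threshold values are illustrative rather than the paper's, and the saliency-map stage is only indicated in a comment.

```python
# Gaussian (Bayesian) skin-colour model producing a per-pixel likelihood map.
import cv2
import numpy as np

SKIN_MEAN = np.array([150.0, 115.0])                  # assumed (Cr, Cb) mean
SKIN_COV_INV = np.linalg.inv(np.array([[90.0, 15.0],
                                        [15.0, 70.0]]))  # assumed covariance

def skin_likelihood(bgr_image):
    """Return a per-pixel skin likelihood map ("chroma chart") in [0, 1]."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    chroma = ycrcb[:, :, 1:3] - SKIN_MEAN             # (Cr, Cb) deviation per pixel
    # Mahalanobis-style exponent of the 2-D Gaussian skin model
    d = np.einsum("hwi,ij,hwj->hw", chroma, SKIN_COV_INV, chroma)
    return np.exp(-0.5 * d)

def skin_mask(bgr_image, threshold=0.3):
    """Binary mask of pixels whose skin likelihood exceeds the threshold."""
    return (skin_likelihood(bgr_image) > threshold).astype(np.uint8) * 255

image = cv2.imread("test.jpg")
mask = skin_mask(image)
# The saliency-map stage would then rank the connected skin regions
# and keep the most salient one as the face region.
```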


2019 ◽  
Vol 8 (4S2) ◽  
pp. 1031-1036

Machine analysis of face detection is an interesting topic of study in human-computer interaction. Existing studies show that discovering the position and scale of the face region is difficult due to significant illumination variation, noise, and appearance variation in unconstrained scenarios. This paper suggests a method to detect the location of the face area using the recently developed YouTube video face database. In this work, each frame is normalized and separated into overlapping blocks. A Gabor filter is tuned to extract Gabor features from the individual blocks. The averaged Gabor features are then processed and local binary pattern histogram features are extracted. The extracted patterns are passed, together with training images, to the classifier for face region identification. Our experimental results on the YouTube video face database are promising and demonstrate a significant performance improvement compared to existing techniques. Furthermore, the proposed method is insensitive to head pose and robust to variations in illumination, appearance, and noise.
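
The feature-extraction step could look roughly like the following Python sketch, using OpenCV Gabor kernels and scikit-image's local binary patterns; the block size, overlap, and filter parameters are assumptions, and the classifier itself is omitted.

```python
# Per-block features: averaged Gabor responses followed by LBP histograms.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=4):
    """Build a small bank of Gabor kernels at evenly spaced orientations."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, np.pi * k / n_orient,
                               lambd, gamma) for k in range(n_orient)]

def block_features(gray, block=32, step=16):
    """LBP histograms of the averaged Gabor response, one per overlapping block.

    `gray` is a uint8 grayscale frame; the result is the feature vector that
    would be passed to the classifier together with the training images.
    """
    gray = cv2.equalizeHist(gray)                      # simple normalization
    responses = [cv2.filter2D(gray.astype(np.float32), -1, k)
                 for k in gabor_bank()]
    avg = np.mean(responses, axis=0)                   # averaged Gabor features
    lbp = local_binary_pattern(avg, P=8, R=1, method="uniform")
    feats = []
    for y in range(0, gray.shape[0] - block + 1, step):
        for x in range(0, gray.shape[1] - block + 1, step):
            hist, _ = np.histogram(lbp[y:y + block, x:x + block],
                                   bins=10, range=(0, 10), density=True)
            feats.append(hist)
    return np.concatenate(feats)
```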


Author(s):  
Lhoussaine Bouhou ◽  
Rachid El Ayachi ◽  
Mohamed Baslam ◽  
Mohamed Oukessou

Before recognizing anyone, it is essential to identify the characteristics that vary from one person to another; among these characteristics are those relating to the face. Nowadays, the detection of skin regions in an image has become an important research topic for locating a face in the image. In this research study, unlike previous studies on this topic, which have focused on images of faces as input data, we are more interested in face detection in mixed documents (text + images). The face detection system developed is based on a hybrid method that distinguishes two categories of objects in the mixed document: the first category is anything that is text or an image containing figures with no skin color, and the second category is any figure with the same color as skin. In the second phase, the detection system is based on the template matching method to distinguish, among the figures of the second category, only those that contain faces and to detect them. To validate this study, the developed system is tested on various documents that include both text and images.
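
A hypothetical sketch of this two-phase idea follows, with a simple YCrCb range standing in for the skin-colour test and OpenCV template matching for the second phase; the colour bounds, ratio threshold, and template file are illustrative assumptions.

```python
# Phase 1: keep figures with enough skin-coloured pixels.
# Phase 2: template matching decides which candidates actually contain a face.
import cv2
import numpy as np

def skin_ratio(bgr_figure):
    """Fraction of pixels falling inside a simple YCrCb skin-colour range."""
    ycrcb = cv2.cvtColor(bgr_figure, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed bounds
    return cv2.countNonZero(mask) / mask.size

def contains_face(bgr_figure, face_template, score_threshold=0.55):
    """Second phase: template matching on grayscale candidate figures."""
    gray = cv2.cvtColor(bgr_figure, cv2.COLOR_BGR2GRAY)
    tmpl = cv2.resize(face_template, (gray.shape[1] // 2, gray.shape[0] // 2))
    score = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED).max()
    return score >= score_threshold

template = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)  # assumed file
figures = [cv2.imread(p) for p in ["fig1.png", "fig2.png"]]       # figures from the document
candidates = [f for f in figures if skin_ratio(f) > 0.2]          # skin-coloured figures only
faces = [f for f in candidates if contains_face(f, template)]
```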


2014 ◽  
Vol 543-547 ◽  
pp. 2531-2534
Author(s):  
Hong Wei Di ◽  
Cai Yun Wang

To solve the problem that traditional automatic image clipping methods rely on simple principles, such as a fixed size and a fixed location, an improved algorithm based on face detection is proposed. First, the face region is located by face detection. Then, according to the proportion of the face area within the selected region of the template image, the size of the clipping region of the image to be cut is matched. Finally, the cutting position is obtained from the relative position of the face center in the template image. The experimental results show that this algorithm achieves a better clipping effect.
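
A minimal sketch of this proportional cropping idea is shown below, assuming OpenCV's Haar cascade for the face detection step; the scaling and positioning rules are a plausible reading of the abstract, not the paper's exact formulas, and a face is assumed to be found in both images.

```python
# Crop an image so its face matches a template's face proportion and position.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(bgr):
    """Return the first detected face box (x, y, w, h) or None."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    return faces[0] if len(faces) else None

def crop_like_template(image, template):
    """Crop `image` to mimic the template's face-to-crop ratio and face position."""
    fx, fy, fw, fh = detect_face(image)
    tx, ty, tw, th = detect_face(template)
    th_img, tw_img = template.shape[:2]
    # Scale the crop so the face occupies the same proportion as in the template.
    scale = ((fw * fh) / (tw * th)) ** 0.5
    cw, ch = int(tw_img * scale), int(th_img * scale)
    # Place the crop so the face centre sits at the template's relative position.
    rel_cx = (tx + tw / 2) / tw_img
    rel_cy = (ty + th / 2) / th_img
    x0 = max(int(fx + fw / 2 - rel_cx * cw), 0)
    y0 = max(int(fy + fh / 2 - rel_cy * ch), 0)
    return image[y0:y0 + ch, x0:x0 + cw]
```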


2020 ◽  
Vol 9 (1) ◽  
pp. 2348-2352

In today’s competitive world, with very little classroom time and increasing working hours, lecturers may need tools that help them manage precious class hours efficiently. Instead of focusing on teaching, lecturers are stuck completing formal duties, such as taking attendance and maintaining the attendance record of each student. Manual attendance marking unnecessarily consumes classroom time, whereas smart attendance through face recognition techniques helps save the lecturer's classroom time. Attendance marking through face recognition can be implemented in the classroom by capturing an image of the students via an installed camera. The Haar cascade algorithm and the MTCNN model then take the face region as the region of interest, the face of each student is enclosed in a bounding box, and finally attendance is marked in the database based on their presence using a decision tree algorithm.
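
The detection step might be sketched as follows, assuming the `mtcnn` Python package and OpenCV; the Haar-cascade pre-filter, the face recognizer, and the decision-tree attendance step described above are reduced to a placeholder comment.

```python
# MTCNN face detection on a classroom image, drawing a bounding box per face.
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def detect_student_faces(classroom_bgr):
    """Return bounding boxes and cropped face regions from a classroom image."""
    rgb = cv2.cvtColor(classroom_bgr, cv2.COLOR_BGR2RGB)
    faces = []
    for result in detector.detect_faces(rgb):
        x, y, w, h = result["box"]
        x, y = max(x, 0), max(y, 0)
        faces.append(((x, y, w, h), classroom_bgr[y:y + h, x:x + w]))
        cv2.rectangle(classroom_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return faces

image = cv2.imread("classroom.jpg")
for box, face_crop in detect_student_faces(image):
    # A recognizer plus decision-tree step (not shown) would map each face crop
    # to a student ID and mark that student present in the database.
    pass
```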


2018 ◽  
Vol 6 (3) ◽  
pp. 29-38
Author(s):  
Mays Kareem Jabbar ◽  
Maab Alaa Hussain ◽  
Thaar A. Kareem

Face recognition is the process of finding the face of one or more people in an image or even in a video. A variety of techniques for face recognition are used in the literature. In this paper, various algorithms for face recognition on mobile phones or other electronic devices are applied. First, face detection must be implemented in any face recognition system; to obtain the face detection, many algorithms such as color segmentation and template matching are applied. The second phase of the proposed algorithm is then implemented using a Gabor neural network combined with a fuzzy system. The algorithm was implemented in MATLAB and then deployed on the device. During implementation, a trade-off between accuracy and computational complexity is made, because the face recognition system runs on a device with limited hardware capabilities.
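
As a rough illustration of such a detection front end on a resource-limited device, the sketch below uses a rule-of-thumb RGB skin segmentation and a downscale factor that trades accuracy against computational cost; the skin rule is a common heuristic, not the paper's, and the Gabor/fuzzy recognition back end is left as a comment.

```python
# Lightweight skin-colour candidate detection with a tunable downscale factor.
import cv2
import numpy as np

def detect_face_candidates(bgr, downscale=2, min_area=400):
    """Return candidate face bounding boxes, scaled back to full resolution."""
    small = cv2.resize(bgr, None, fx=1.0 / downscale, fy=1.0 / downscale)
    b = small[:, :, 0].astype(np.int32)
    g = small[:, :, 1].astype(np.int32)
    r = small[:, :, 2].astype(np.int32)
    # Classic rule-of-thumb RGB skin condition (illustrative only)
    skin = ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (np.abs(r - g) > 15)).astype(np.uint8) * 255
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h * downscale ** 2 >= min_area:
            boxes.append((x * downscale, y * downscale,
                          w * downscale, h * downscale))
    return boxes
# Each candidate box would then be passed to the Gabor/fuzzy recognizer.
```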


2005 ◽  
Vol 17 (3) ◽  
pp. 367-376 ◽  
Author(s):  
Katharina von Kriegstein ◽  
Andreas Kleinschmidt ◽  
Philipp Sterzer ◽  
Anne-Lise Giraud

Face and voice processing contribute to person recognition, but it remains unclear how the segregated specialized cortical modules interact. Using functional neuroimaging, we observed cross-modal responses to voices of familiar persons in the fusiform face area, as localized separately using visual stimuli. Voices of familiar persons only activated the face area during a task that emphasized speaker recognition over recognition of verbal content. Analyses of functional connectivity between cortical territories show that the fusiform face region is coupled with the superior temporal sulcus voice region during familiar speaker recognition, but not with any of the other cortical regions normally active in person recognition or in other tasks involving voices. These findings are relevant for models of the cognitive processes and neural circuitry involved in speaker recognition. They reveal that in the context of speaker recognition, the assessment of person familiarity does not necessarily engage supra-modal cortical substrates but can result from the direct sharing of information between auditory voice and visual face regions.


2020 ◽  
Vol 38 (3B) ◽  
pp. 98-103
Author(s):  
Atyaf S. Hamad ◽  
Alaa K. Farhan

This research presents a method of image encryption designed based on a complete shuffling algorithm, a substitution-box transformation, and a predicated image cryptosystem. The proposed algorithm introduces extra confusion in the first phase because it includes an S-box that applies the AES substitution in encryption and its inverse in decryption. In the second phase, shifting and rotation based on a secret key are applied in each channel, depending on the output of the chaotic 2D logistic map, and the result is processed and used by the encryption algorithm. It is known from earlier studies that simple image encryption based on a shuffling scheme alone is insecure against chosen-ciphertext attacks; therefore, an extended algorithm has been proposed that performs well against chosen-ciphertext attacks. In addition, the proposed approach was analyzed using NPCR (Number of Pixels Change Rate), UACI (Unified Average Changing Intensity), and entropy analysis to determine its strength.
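
Below is a simplified illustration of two ingredients mentioned above, chaotic-map-driven shifting/rotation and the NPCR metric; a 1-D logistic map and a keyed random permutation stand in for the paper's 2-D logistic map and AES S-box, so this is not the proposed algorithm itself.

```python
# Chaotic-map-driven row/column rotation of an image channel, plus NPCR.
import numpy as np

def logistic_sequence(x0, r, n):
    """Generate n values of the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return np.array(xs)

def chaotic_rotate(channel, x0=0.3571, r=3.99):
    """Rotate each row and column by an amount derived from the chaotic sequence."""
    h, w = channel.shape
    seq = logistic_sequence(x0, r, h + w)
    out = channel.copy()
    for i in range(h):                                  # row shifts
        out[i] = np.roll(out[i], int(seq[i] * w))
    for j in range(w):                                  # column shifts
        out[:, j] = np.roll(out[:, j], int(seq[h + j] * h))
    return out

def npcr(cipher1, cipher2):
    """Number of Pixels Change Rate between two cipher images, in percent."""
    return 100.0 * np.mean(cipher1 != cipher2)

# Stand-in keyed substitution (NOT the AES S-box): a seeded permutation of 0..255.
sbox = np.random.default_rng(seed=42).permutation(256).astype(np.uint8)
plain = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
cipher = chaotic_rotate(sbox[plain])                    # substitution then rotation
```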


Author(s):  
Manpreet Kaur ◽  
Jasdev Bhatti ◽  
Mohit Kumar Kakkar ◽  
Arun Upmanyu

Introduction: Face detection is used in many different streams, such as video conferencing, human-computer interfaces, and image database management. The aim of our paper is to apply the Red Green Blue (RGB) and related colour models (HSV, YCbCr, and TSL) to detect face regions in single and multiple images.
Methods: Morphological operations are performed in the face region, with the number of pixels as the proposed parameter to check whether an input image contains a face region or not. Canny edge detection is also used to show the boundaries of a candidate face region, and in the end the detected face is shown using a bounding box around the face.
Results: A reliability model has also been proposed for detecting faces in single and multiple images. The experimental results reflect that the proposed algorithm performs very well in each model for detecting faces in single and multiple images, and the reliability model provides the best fit when analyzing precision and accuracy.
Discussion: The calculated results show that the HSV model works best for single-face images, whereas the YCbCr and TSL models work best for multiple-face images. The results evaluated in this paper also provide better testing strategies that help to develop new techniques, leading to an increase in research effectiveness.
Conclusion: The calculated values of all parameters show that the proposed algorithm performs very well in each model for detecting the face with a bounding box in single as well as multiple images. The precision and accuracy of all three models are analyzed through the reliability model. The comparison presented in this paper reflects that the HSV model works best for single-face images, whereas the YCbCr and TSL models work best for multiple-face images.
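
A brief sketch of the Methods steps follows, assuming an HSV skin-colour threshold: morphological operations clean the candidate mask, Canny edges outline the candidate region, and a bounding box is drawn around the largest region. The threshold values are illustrative, not the paper's.

```python
# HSV skin mask -> morphological clean-up -> Canny edges -> bounding box.
import cv2
import numpy as np

def detect_face_hsv(bgr):
    """Return the image with a bounding box drawn and the candidate edge map."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))      # assumed skin range
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
    edges = cv2.Canny(mask, 50, 150)                          # candidate boundary
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return bgr, edges
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)  # bounding box
    return bgr, edges
```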

