An expressional simplified mechanism in anthropomorphic face robot design

Robotica ◽  
2014 ◽  
Vol 34 (3) ◽  
pp. 652-670 ◽  
Author(s):  
Chyi-Yeu Lin ◽  
Chun-Chia Huang ◽  
Li-Chieh Cheng

SUMMARY
The goal of this research is to develop a low-cost face robot with a lower-degree-of-freedom facial expression mechanism. Many facial robot designs have been announced and published in the past. Face robots can be classified into two major types based on their degrees of freedom: the first type produces varied facial expressions with many degrees of freedom, while the second produces a finite set of facial expressions with fewer degrees of freedom. Because higher-degree-of-freedom face robots are expensive, most commercial face robot products adopt the lower-degree-of-freedom form with a finite set of facial expressions. Therefore, a face robot with a simplified facial expression mechanism is proposed in this research. The main purpose is to develop a lower-degree-of-freedom mechanism able to generate many facial expressions while keeping one basic mouth-shape variation. Our research provides a new face robot example and a development direction for reducing cost and conserving energy.

2012 ◽  
Vol 619 ◽  
pp. 325-328
Author(s):  
You Jun Huang ◽  
Ze Lun Li ◽  
Zhi Cheng Huang

A teaching robot with three degrees of freedom is designed. The three degrees of freedom are waist rotation, lifting and stretching of the arm, and opening and closing of the gripper. The main components designed are a mobile chassis, parallel rails, horizontal rails, and a manipulator. The teaching robot features low cost, easy adjustment, and good repeatability, and it has good promotion and application prospects in the field of teaching.


Author(s):  
Teruaki Ando ◽  
Atsushi Araki ◽  
Masayoshi Kanoh ◽  
Yutaro Tomoto ◽  
...  

In this paper, we created random facial expressions for the Mechadroid Type C3, a robot equipped with a high-degree-of-freedom facial expression mechanism and intended to serve a receptionist function. Investigating the morphological and physiognomic features of these facial expressions, we evaluated which personality characteristics the face of the C3 could express and what impressions those facial expressions made on people. As a result, we found that a baby-schema cute face, a modest face, and a smiley face are the most suitable physiognomies for a reception robot.


Author(s):  
Manuel Rodrigues Quintas ◽  
Maria Teresa Restivo ◽  
José Rodrigues ◽  
Pedro Ubaldo

The concept and use of haptic devices need to be disseminated, and they should become familiar among young people. At present, haptics are used in many everyday tasks in different fields. Additionally, their use in virtual reality applications that simulate the sense of touch of real systems increases users' realism and immersion and, consequently, contributes to the knowledge the simulations aim to convey. However, haptics are associated with expensive equipment that usually offers several degrees of freedom. The objective of this work is to keep the cost not much higher than that of a "special" mouse by offering a low-cost solution with just one degree of freedom (1 DOF), useful in many simple cases. A further objective of this work is the development of simple virtual reality systems whose interactions require only one degree of freedom. A low-cost, single-axis force-feedback haptic device with 1 DOF has been developed. To evaluate the interest of this prototype, a "Spring Constant" application was built and used as a demonstrator. The complete system - the haptic device interacting with the "Spring Constant" application - is described in the present work.
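As a rough sketch of what a 1-DOF "Spring Constant" demonstrator renders, Hooke's law gives the restoring force the single-axis motor should output; the rest position and stiffness values below are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch of the force law a 1-DOF "Spring Constant" demo would
# render: Hooke's law, F = -k * (x - x0). Values are illustrative only.

def spring_force(position_m: float, rest_m: float = 0.0,
                 k_n_per_m: float = 50.0) -> float:
    """Restoring force (N) the haptic motor should command."""
    return -k_n_per_m * (position_m - rest_m)

# Displacing the knob 2 cm from rest yields a 1 N restoring force.
print(spring_force(0.02))  # -1.0
```

In a real device this function would sit inside a high-rate control loop that reads the encoder position and writes the motor torque each cycle.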


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2250
Author(s):  
Leyuan Liu ◽  
Rubin Jiang ◽  
Jiao Huo ◽  
Jingying Chen

Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject in the testing image. Second, six compact and lightweight difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and compares the difference between the deep feature pair. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized “Self”—an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance, with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB × 6), which enables the SD-CNN to run on low-cost hardware.
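The self-difference idea in this abstract can be illustrated with a toy sketch: compare the test image's feature vector against the features of the six expressions synthesized for the same subject, and pick the best match. The real DiffNets are learned CNN classifiers over deep-feature differences; here a plain Euclidean distance over invented low-dimensional features stands in for them.

```python
import math

# The six typical expressions the cGAN synthesizes for the test subject.
EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_by_self_difference(test_feat, synth_feats):
    """Toy stand-in for the DiffNets: choose the synthesized expression
    whose feature vector differs least from the test image's feature.
    The actual SD-CNN learns a classifier over the difference instead."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    diffs = [dist(test_feat, f) for f in synth_feats]
    return EXPRESSIONS[diffs.index(min(diffs))]

# Hypothetical 4-D features; the "happiness" synthesis matches best.
synth = [[i, i + 1, i + 2, i + 3] for i in range(6)]
test = [3.1, 4.0, 5.0, 6.0]
print(classify_by_self_difference(test, synth))  # happiness
```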


2012 ◽  
Vol 433-440 ◽  
pp. 7413-7419 ◽  
Author(s):  
Yu Zhang ◽  
Kuo Yang ◽  
Xue Ying Deng ◽  
Ying Shi

By analysing the formation mechanism of human facial expressions and surveying existing research on facial expression robots, this paper summarizes three technical difficulties in developing such robots and proposes improvements for each. On this basis, the paper presents a facial robot with eight facial expressions of basic emotions. In the mechanical part, a structure with 20 degrees of freedom is designed; in the control part, an SSC-32 V2 board is used to coordinate the movements of the 20 servos; in the simulation modelling part, a special silicone rubber material is developed, and its soft part is used as the skin of the facial expression robot. The robot's facial expressions substantially increase the realism of this kind of robot simulation.
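The coordinated servo movement described above can be sketched with the SSC-32's ASCII group-move protocol, in which each servo gets a `#<channel>P<pulse-µs>` clause and a single trailing `T<ms>` makes all listed servos finish together. The channel-to-expression mapping below is a hypothetical example, not the paper's actual assignment.

```python
def ssc32_move(servo_pulses, time_ms):
    """Build an SSC-32 group-move command: one '#<ch>P<us>' clause per
    servo plus a shared 'T<ms>' so all servos arrive simultaneously."""
    parts = ["#{}P{}".format(ch, us) for ch, us in sorted(servo_pulses.items())]
    return "".join(parts) + "T{}\r".format(time_ms)

# Hypothetical channel map for a smile: brow, eyelid and mouth-corner servos.
smile = {0: 1600, 1: 1450, 5: 1700}
cmd = ssc32_move(smile, 800)
# cmd == "#0P1600#1P1450#5P1700T800\r"
# In practice this string is written to the board over a serial port,
# e.g. with pyserial: serial.Serial("COM3", 115200).write(cmd.encode())
```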


Author(s):  
Samule Lee ◽  
Seong-Yoon Shin

<p>Contemporary people have very little time and few means of relieving their stress. A program that can relieve such stress in daily life would make one’s life substantially more enjoyable. In this thesis, Face Song Player, a system that recognizes an individual’s facial expression and plays music appropriate for that person, is presented. The system studies information on the facial contour lines, extracts an average, and acquires the facial shape information. The MUCT DB was used as the database for learning. For facial expression recognition, an algorithm was designed using the differences in the characteristics of each expression relative to expressionless images. Facial expression is extracted by acquiring information on the eyes, eyebrows, eyelids, mouth, lips and nasal cheeks for expressions of happiness, surprise and sorrow, as well as absence of expression. An advantage of this system is that a substantial effect can be obtained at very low cost.</p>
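The recognize-then-play pipeline this abstract describes can be sketched roughly as below: measure a few facial features, compare them against an expressionless baseline, and map the detected expression to a track. The feature names, thresholds, and song files are invented for illustration and are not from the paper.

```python
# Hedged sketch: classify an expression by its difference from a neutral
# baseline, then pick music. All names and thresholds are illustrative.

NEUTRAL = {"mouth_corner_lift": 0.0, "eye_opening": 1.0, "brow_height": 1.0}
PLAYLIST = {"happiness": "upbeat.mp3", "surprise": "energetic.mp3",
            "sorrow": "soothing.mp3", "neutral": "ambient.mp3"}

def detect_expression(features):
    """Compare measured features to the expressionless baseline."""
    d = {k: features[k] - NEUTRAL[k] for k in NEUTRAL}
    if d["mouth_corner_lift"] > 0.2:
        return "happiness"
    if d["eye_opening"] > 0.3 and d["brow_height"] > 0.2:
        return "surprise"
    if d["mouth_corner_lift"] < -0.2:
        return "sorrow"
    return "neutral"

def pick_song(features):
    return PLAYLIST[detect_expression(features)]

print(pick_song({"mouth_corner_lift": 0.4, "eye_opening": 1.0,
                 "brow_height": 1.0}))  # upbeat.mp3
```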


Author(s):  
Bharoto Yekti

The growth of 3D printing has been rapid over recent decades. Laika, a United States-based animation production company, is the pioneer of 3D printing technology in stop-motion animation. Laika uses this technology in its production pipeline for making stop-motion puppets in most of its films, including its latest, Kubo and the Two Strings (2016). Due to limited access to detailed information about Laika’s facial expression work, communities and fans of animation have tried to conduct experiments with their own 3D prints, using behind-the-scenes footage from the Laika studio. This paper explores facial expressions for creating a stop-motion puppet using an affordable home-scale 3D printer. Using limited technical information collected from Laika’s documentation videos as well as articles written by stop-motion enthusiasts, this fan-based research ignites creativity to overcome the barriers of technology and access through strategies for producing affordable 3D-printed stop-motion animation. Keywords: stop-motion animation, 3D printing, facial expressions.


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent) or responding opposite (incongruent) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
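The trial-and-error learning this study models can be illustrated with a minimal Rescorla-Wagner-style value update, one common reinforcement-learning formalization. The learning rate, reward coding, and action names below are illustrative assumptions; the authors' actual drift diffusion and reinforcement learning models are richer than this sketch.

```python
# Minimal sketch: the value of emitting each facial response to a target
# expression is updated from whether aversive stimulation was avoided.

ALPHA = 0.3  # learning rate (assumed, not the paper's fitted value)

def update(q, action, reward):
    """One delta-rule update: Q <- Q + alpha * (reward - Q)."""
    q[action] += ALPHA * (reward - q[action])
    return q

# Suppose smiling back at a happy face avoids the shock (reward 1) while
# frowning does not (reward 0). Action values separate over trials.
q = {"smile": 0.0, "frown": 0.0}
for _ in range(20):
    update(q, "smile", 1.0)
    update(q, "frown", 0.0)
print(q["smile"] > q["frown"])  # True
```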


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.

