Exerting control: the grammatical meaning of facial displays in signed languages

2021 ◽  
Vol 32 (4) ◽  
pp. 609-639
Author(s):  
Sara Siyavoshi ◽  
Sherman Wilcox

Abstract Signed languages employ finely articulated facial and head displays to express grammatical meanings such as mood and modality, complex propositions (conditionals, causal relations, complementation), information structure (topic, focus), assertions, content and yes/no questions, imperatives, and miratives. In this paper we examine two facial displays: an upper face display in which the eyebrows are pulled together called brow furrow, and a lower face display in which the corners of the mouth are turned down into a distinctive configuration that resembles a frown or upside-down U-shape. Our analysis employs Cognitive Grammar, specifically the control cycle and its manifestation in effective control and epistemic control. Our claim is that effective and epistemic control are associated with embodied actions. Prototypical physical effective control requires effortful activity and the forceful exertion of energy and is commonly correlated with upper face activity, often called the “face of effort.” The lower face display has been shown to be associated with epistemic indetermination, uncertainty, doubt, obviousness, and skepticism. We demonstrate that the control cycle unifies the diverse grammatical functions expressed by each facial display within a language, and that they express similar functions across a wide range of signed languages.

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1199
Author(s):  
Seho Park ◽  
Kunyoung Lee ◽  
Jae-A Lim ◽  
Hyunwoong Ko ◽  
Taehoon Kim ◽  
...  

Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further confirm discrete movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles with three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions for happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities for the upper face than the lower face. On the other hand, posed smiles showed higher intensities in the lower part of the face. Furthermore, the 3D facial landmark technique revealed that the left eyebrow displayed stronger intensity during spontaneous smiles than the right eyebrow. These findings suggest a potential application of landmark-based emotion recognition: spontaneous smiles can be distinguished from posed smiles by measuring relative intensities between the upper and lower face, with a focus on left-sided asymmetry in the upper region.
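The upper-face versus lower-face comparison described above can be sketched as follows. This is an illustrative assumption, not the paper's exact method: it uses the common 68-point landmark convention (points 17-47 for eyebrows and eyes, 48-67 for the mouth) and a simple mean-displacement intensity between a neutral frame and the smile apex.

```python
import numpy as np

# Hypothetical index groupings under the widely used 68-point convention;
# the displacement metric is a simplifying assumption, not the study's own.
UPPER_IDX = np.arange(17, 48)   # eyebrows + eyes
LOWER_IDX = np.arange(48, 68)   # mouth region

def movement_intensity(neutral, apex, idx):
    """Mean Euclidean displacement of the selected 3D landmarks."""
    return np.linalg.norm(apex[idx] - neutral[idx], axis=1).mean()

def classify_smile(neutral, apex):
    """Spontaneous smiles showed relatively stronger upper-face movement."""
    upper = movement_intensity(neutral, apex, UPPER_IDX)
    lower = movement_intensity(neutral, apex, LOWER_IDX)
    return "spontaneous" if upper > lower else "posed"
```

The inputs are (68, 3) arrays of landmark coordinates; a real pipeline would also normalize for head pose and face size before comparing displacements.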


2020 ◽  
Vol 3 (1) ◽  
pp. 34-43
Author(s):  
Anna Kroma ◽  
Daria Sobkowska ◽  
Ewa Pelant ◽  
Iwona Micek ◽  
Maria Urbańska ◽  
...  

One of the most common aesthetic defects in the facial area is the so‑called "double chin". The problem affects both males and females, and its occurrence is associated with several different factors. It is commonly believed to result from excessive adipose tissue in the lower face; however, the problem is more complex. It is also facilitated by the loss of firmness and elasticity of ageing skin, and in many people a double chin results from their anatomical structure and is therefore genetically determined. In addition, incorrect posture accentuates the defect. Although such a minor change may seem not to affect appearance greatly, in most cases it blurs the border between the jawline and the neck, which in turn distorts the facial contour. All this makes a person with a double chin look much older than he or she in fact is. This leads to negative self‑esteem, makes people feel ashamed, and is quite often the main reason for avoiding interaction with other people. It is therefore no wonder that people with this defect seek methods to mitigate it. Surgical methods give quick and satisfactory effects; however, they carry a high risk of complications, which causes anxiety and ultimately discourages patients from deciding on them. Fortunately, significant advances in cosmetology and aesthetic surgery now offer a wide range of possibilities for reducing a double chin in a manner that is completely non‑invasive, pain‑free, and does not require long convalescence. Very good results are achieved with innovative technological solutions such as HIFU and cryolipolysis. Aim. The main aim of this paper is to assess the efficacy of cryolipolysis in double chin reduction and to describe innovative instrumental methods allowing for a non‑invasive reduction of this defect.


2021 ◽  
pp. 1379-1398
Author(s):  
Norman Waterhouse ◽  
Naresh Noshi ◽  
Niall Kirkpatrick ◽  
Lisa Brendling

Facial ageing occurs as a consequence of multifactorial changes in both the external skin and underlying tissues. The ageing process may vary dramatically between individual patients and is thus influenced by genetic factors. When assessing the ageing face it is important to consider the skeletal architecture, the soft tissue layers including the anterior fat pads, the osseocutaneous ligament anchors, and finally the overlying skin. Assessment of the external skin incorporates factors such as dermal thinning, solar damage, lifestyle effects such as smoking, and Fitzpatrick skin type. Surgical correction of facial ageing attempts both to reverse gravitational change of soft tissues and to restore volume loss. There are a variety of methods used to divide the face into regions, but for the purpose of this chapter, the surgical management of facial ageing will be separated into three anatomical areas: (1) upper face, including the upper eyelids, eyebrows, and forehead; (2) midface, including the lower eyelid/anterior cheek continuum; and (3) lower and lateral cheek, neck, and perioral region.


2019 ◽  
Vol 30 (4) ◽  
pp. 655-686 ◽  
Author(s):  
Sara Siyavoshi

Abstract This paper presents a study of modality in Iranian Sign Language (ZEI) from a cognitive perspective, aimed at analyzing two linguistic channels: facial and manual. While facial markers and their grammatical functions have been studied in some sign languages, we have few detailed analyses of the facial channel in comparison with the manual channel in conveying modal concepts. This study focuses on the interaction between manual and facial markers. A description of manual modal signs is offered. Three facial markers and their modality values are also examined: squinted eyes, brow furrow, and downward movement of lip corners (horseshoe mouth). In addition to offering this first descriptive analysis of modality in ZEI, this paper also applies the Cognitive Grammar model of modality, the Control Cycle, and the Reality Model, classifying modals into two kinds, effective and epistemic. It is suggested that effective control, including effective modality, tends to be expressed on the hands, while facial markers play an important role in marking epistemic assessment, one manifestation of which is epistemic modality. ZEI, like some other sign languages, exhibits an asymmetry between the number of manual signs and facial markers expressing epistemic modality: while the face can be active in the expression of effective modality, it is commonly the only means of expressing epistemic modality. By positing an epistemic core in effective modality, Cognitive Grammar provides a theoretical basis for these findings.


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Martin Schiavenato ◽  
Carl L. von Baeyer

Many pain assessment tools for preschool and school-aged children are based on facial expressions of pain. Despite broad use, their metrics are not rooted in the anatomic display of the facial pain expression. We aim to describe quantitatively the patterns of initiation and maintenance of the infant pain expression across an expressive cycle. We evaluated the trajectory of the pain expression of three newborns with the most intense facial display among 63 infants receiving a painful stimulus. A modified "point-pair" system was used to measure movement in key areas across the face by analyzing still pictures from video recordings of the procedure. Point-pairs were combined into "upper face" and "lower face" variables; duration and intensity of expression were standardized. Intensity and duration of expression varied among infants. Upper and lower face movement rose and overlapped in intensity about 30% into the expression. The expression reached a plateau without major change for the duration of the expressive cycle. We conclude that there appears to be a shared pattern in the dynamic trajectory of the pain display among infants expressing extreme intensity. We speculate that these patterns are important in the communication of pain, and their incorporation in facial pain scales may improve current metrics.
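The standardization of duration and intensity mentioned above can be illustrated with a minimal sketch. This is an assumption about the general approach, not the study's actual procedure: each infant's movement trace is resampled to a fixed-length expressive cycle and scaled to its own peak, so trajectories of different absolute lengths and magnitudes become comparable.

```python
import numpy as np

def standardize_trajectory(displacement, n_points=100):
    """Resample a per-frame movement trace onto a fixed 0-100% expressive
    cycle and scale intensity to its peak value, so that infants with
    different absolute durations and intensities share one time base."""
    displacement = np.asarray(displacement, dtype=float)
    t_orig = np.linspace(0.0, 1.0, len(displacement))
    t_std = np.linspace(0.0, 1.0, n_points)
    resampled = np.interp(t_std, t_orig, displacement)
    peak = resampled.max()
    return resampled / peak if peak > 0 else resampled
```

With traces on a common cycle, statements like "upper and lower face movement overlap about 30% into the expression" correspond to comparing the curves around index 30 of the standardized arrays.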


2019 ◽  
Vol 9 (8) ◽  
pp. 188 ◽  
Author(s):  
Dong-Ho Lee ◽  
Sherryse L Corrow ◽  
Raika Pancaroglu ◽  
Jason J S Barton

The scanpaths of healthy subjects show biases towards the upper face, the eyes and the center of the face, which suggests that their fixations are guided by a feature hierarchy towards the regions most informative for face identification. However, subjects with developmental prosopagnosia have a lifelong impairment in face processing. Whether this is reflected in the loss of normal face-scanning strategies is not known. The goal of this study was to determine if subjects with developmental prosopagnosia showed anomalous scanning biases as they processed the identity of faces. We recorded the fixations of 10 subjects with developmental prosopagnosia as they performed a face memorization and recognition task, for comparison with 8 subjects with acquired prosopagnosia (four with anterior temporal lesions and four with occipitotemporal lesions) and 20 control subjects. The scanning of healthy subjects confirmed a bias to fixate the upper over the lower face, the eyes over the mouth, and the central over the peripheral face. Subjects with acquired prosopagnosia from occipitotemporal lesions had more dispersed fixations and a trend to fixate less informative facial regions. Subjects with developmental prosopagnosia did not differ from the controls. At a single-subject level, some developmental subjects performed abnormally, but none consistently across all metrics. Scanning distributions were not related to scores on perceptual or memory tests for faces. We conclude that despite lifelong difficulty with faces, subjects with developmental prosopagnosia still have an internal facial schema that guides their scanning behavior.
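The scanning biases measured in the study above (upper-versus-lower-face preference, fixation dispersion) can be sketched with simple summary statistics. The region split and metrics here are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np

def scanning_biases(fixations, face_top, face_bottom):
    """Summarize face-scanning behavior from fixation coordinates.

    fixations: (N, 2) array of (x, y) gaze points, y increasing downward.
    Returns the fraction of fixations on the upper half of the face and
    the mean distance of fixations from their centroid (dispersion).
    """
    fixations = np.asarray(fixations, dtype=float)
    midline = (face_top + face_bottom) / 2.0
    upper_bias = float((fixations[:, 1] < midline).mean())
    centroid = fixations.mean(axis=0)
    dispersion = float(np.linalg.norm(fixations - centroid, axis=1).mean())
    return upper_bias, dispersion
```

On metrics like these, the occipitotemporal acquired-prosopagnosia group would show higher dispersion, while developmental subjects would look control-like.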


Author(s):  
Wendy Sandler ◽  
Diane Lillo-Martin ◽  
Svetlana Dachkovsky ◽  
Ronice Müller de Quadros

Sign languages are unlike spoken languages because they are produced by a wide range of visibly perceivable articulators: the hands, the face, the head, and the body. There is as yet no consensus on the division of labour between these articulators and the linguistic elements or subsystems that they subserve. For example, certain systematic facial expressions in sign languages have been argued to be the realization of syntactic structure by some researchers and of information structure, and thus prosodic in nature, by others. This chapter brings evidence from three unrelated sign languages for the latter claim. It shows that certain non-manual markers are best understood as representing pragmatic notions related to information structure, such as accessibility, contingency, and focus, and are thus part of the prosodic system in sign languages generally. The data and argumentation serve to sharpen the distinction between prosody and syntax in language generally.


2021 ◽  
Vol 14 ◽  
pp. 117954762199457
Author(s):  
Daniele Emedoli ◽  
Maddalena Arosio ◽  
Andrea Tettamanti ◽  
Sandro Iannaccone

Background: Buccofacial Apraxia is defined as the inability to perform voluntary movements of the larynx, pharynx, mandible, tongue, lips and cheeks, while automatic or reflexive control of these structures is preserved. Buccofacial Apraxia frequently co-occurs with aphasia and apraxia of speech, and it has been reported as almost exclusively resulting from a lesion of the left hemisphere. Recent studies have demonstrated the benefit of treating apraxia using motor training principles such as Augmented Feedback or Action Observation Therapy. In light of this, the study describes a treatment based on immersive Action Observation Therapy and Virtual Reality Augmented Feedback in a case of Buccofacial Apraxia. Participant and Methods: The participant is a right-handed 58-year-old male. He underwent neurosurgical craniotomy and exeresis of an infra-axial expansive lesion in the frontoparietal convexity compatible with an atypical meningioma. Buccofacial Apraxia was diagnosed by a neurologist and evaluated by the Upper and Lower Face Apraxia Test. Buccofacial Apraxia was also quantified by a dedicated camera, with appropriately developed software, able to detect the range of motion of automatic face movements and the range of the same movements on voluntary request. In order to improve voluntary movements, the participant completed fifteen 1-hour rehabilitation sessions, each composed of 20 minutes of immersive Action Observation Therapy followed by 40 minutes of Virtual Reality Augmented Feedback, 5 days a week, for 3 consecutive weeks. Results: After treatment, the participant achieved great improvements in the quality and range of facial movements, performing most facial expressions (e.g., kiss, smile, lateral displacement of the mouth corner) without unsolicited movements. Furthermore, the Upper and Lower Face Apraxia Test showed an improvement of 118% for the Upper Face movements and of 200% for the Lower Face movements.
Conclusion: Performing voluntary movements in a Virtual Reality environment with Augmented Feedback, in addition to Action Observation Therapy, improved performance of facial gestures and consolidated central nervous system activations, consistent with principles of experience-dependent neural plasticity.


Plants ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 369
Author(s):  
Pasqua Veronico ◽  
Maria Teresa Melillo

Plant-parasitic nematodes are annually responsible for the loss of 10%–25% of worldwide crop production, most of which is attributable to root-knot nematodes (RKNs) that infest a wide range of agricultural crops throughout the world. Current nematode control tools are not enough to ensure the effective management of these parasites, mainly due to the severe restrictions imposed on the use of chemical pesticides. Therefore, it is important to discover new potential nematicidal sources that are suitable for the development of additional safe and effective control strategies. In the last few decades, there has been an explosion of information about the use of seaweeds as plant growth stimulants and potential nematicides. Novel bioactive compounds have been isolated from marine cyanobacteria and sponges in an effort to find applications for them outside marine ecosystems and in the discovery of new drugs. Their potential as anthelmintics could also be exploited to find applicability against plant-parasitic nematodes. The present review focuses on the activity of marine organisms on RKNs and their potential application as safe nematicidal agents.


2019 ◽  
Vol 2019 ◽  
pp. 1-21 ◽  
Author(s):  
Naeem Ratyal ◽  
Imtiaz Ahmad Taj ◽  
Muhammad Sajid ◽  
Anzar Mahmood ◽  
Sohail Razzaq ◽  
...  

Face recognition aims to establish the identity of a person based on facial characteristics and is a challenging problem due to the complex nature of the facial manifold. A wide range of face recognition applications are based on classification techniques, in which a class label is assigned to a test image belonging to an unknown class. In this paper, a pose-invariant, deeply learned multiview 3D face recognition approach is proposed that aims to address two problems: face alignment and face recognition through identification and verification setups. The proposed alignment algorithm is capable of handling frontal as well as profile face images. It employs a nose tip heuristic-based pose learning approach to estimate the acquisition pose of the face, followed by coarse-to-fine nose tip alignment using L2-norm minimization. The whole face is then aligned through transformation using knowledge learned from the nose tip alignment. Inspired by the intrinsic facial symmetry of the Left Half Face (LHF) and Right Half Face (RHF), Deeply learned (d) Multi-View Average Half Face (d-MVAHF) features are employed for face identification using a deep convolutional neural network (dCNN). For face verification, a d-MVAHF-Support Vector Machine (d-MVAHF-SVM) approach is employed. The performance of the proposed methodology is demonstrated through extensive experiments performed on four databases: GavabDB, Bosphorus, UMB-DB, and FRGC v2.0. The results show that the proposed approach yields superior performance as compared to existing state-of-the-art methods.
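The "average half face" idea that motivates the d-MVAHF features can be illustrated in miniature. This is only a sketch under simplifying assumptions (a 2D depth map already pose-aligned so the symmetry axis is the image's vertical center); the paper's actual pipeline involves pose learning, coarse-to-fine nose tip alignment, and dCNN feature learning, none of which is reproduced here.

```python
import numpy as np

def average_half_face(depth_map):
    """Exploit left/right facial symmetry: average the left half of an
    aligned (H, W) depth image with the horizontally mirrored right half,
    halving the input size while reducing asymmetric noise."""
    h, w = depth_map.shape
    left = depth_map[:, : w // 2]
    right_mirrored = depth_map[:, w // 2 :][:, ::-1]  # flip to match left
    return (left + right_mirrored) / 2.0
```

For a perfectly symmetric face the result equals either half; for real faces it acts as a symmetry-based denoiser before feature extraction.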

