10. Introduction to the Simplified Sign System

2020, pp. 1-32
Author(s): John D. Bonvillian, Nicole Kissane Lee, Tracy T. Dooley, Filip T. Loncke

Chapter 10 provides an introduction to the organization of the Simplified Sign System lexicon and its supporting materials. The chapter explains the conventions used in the sign illustrations so that learners can accurately interpret the drawings, including the numbering of initial, intermediate, and final positions; the size, shape, and repetition of the arrows, quotes, and other marks that depict each sign’s movement; and the facial expressions drawn on signs that convey emotional information. Drawings and expanded written descriptions of the handshapes used in the Simplified Sign System are provided, along with information on how prevalent each handshape is in the system and a sampling of the particular meanings that a handshape can convey within the system. Drawings and written descriptions of the various palm orientations and finger/knuckle orientations used in the system are provided as well, so that family members, educators, and other professionals will be able to accurately interpret each sign’s written description. Also discussed in this chapter are the memory aids provided with each sign, the natural variations in sign formation and production that are to be expected, and what to do if a sign learner has functional use of only one hand and arm.

2020, pp. 33-1034
Author(s): John D. Bonvillian, Nicole Kissane Lee, Tracy T. Dooley, Filip T. Loncke

Chapter 11 contains the first one thousand signs of the Simplified Sign System lexicon, alphabetized by each sign’s main gloss. Each entry in the lexicon includes a hand-drawn illustration of the sign, a listing of any synonyms or antonyms related to that sign, and a written description of how the sign is formed (i.e., the handshape(s), palm orientation(s), finger/knuckle orientation(s), location, and movement parameters of the sign). Also provided are a short memory aid to help learners remember the sign’s formation and a longer memory aid that describes the visual and iconic link between how the sign is physically formed and the meaning it conveys. Many of the longer memory aids also include a definition of the main gloss and some of that sign’s synonyms. If users of the system wish to look up a particular vocabulary item, term, or idiomatic phrase, an alphabetized Sign Index at the end of the volume integrates all of the main sign glosses with all of their listed synonyms and antonyms. This Sign Index directs readers to the page that contains the main sign entry, its written description, and its memory aids.


2018
Author(s): Sanchita Gargya

An extensive literature on the influence of emotion on memory asserts that emotional information is remembered better than information lacking emotional content (Kensinger, 2009; Talmi et al., 2007; for review see Hamann, 2001). While decades of research agree on a memory advantage for emotional over neutral information, studies of the impact of emotion on memory for associated details have shown differential effects of emotion on associated neutral details (Erk et al., 2003; Righi et al., 2015; Steinmetz et al., 2015). Using emotional-neutral stimulus pairs, the current set of experiments presents novel findings from an aging perspective, systematically exploring the impact of embedded emotional information on the associative memory representation of associated neutral episodic details. To accomplish this, three experiments were conducted. In all three experiments, younger and older participants were shown three types of emotional faces (happy, sad, and neutral) along with names. The first experiment investigated whether associative instructions and repetition of face-name pairs promote the formation of implicit emotional face-name associations. Experiments 2 and 3, using intentional and incidental instructions to encode face-name associations respectively, assessed whether names that had been shown with different facial expressions could trigger the emotional content of the study episode when the original emotional context was absent at test. Results indicate that while both younger and older adults integrated names better with happy facial expressions than with sad expressions, older adults failed to show a benefit for associating a name with a happy emotional expression in the absence of associative encoding instructions. Overall, these results suggest that happy facial expressions can be implicitly bound to, or spill over onto, associated neutral episodic details such as names; older adults, however, accomplish this integration only under explicit instructions to form face-name associations.


2020, pp. 281-310
Author(s): John D. Bonvillian, Nicole Kissane Lee, Tracy T. Dooley, Filip T. Loncke

Chapter 8 provides background information on the development of the Simplified Sign System. The steps in this process are described so that investigators may replicate the research findings and/or develop additional signs for their own sign-intervention programs. The authors first discuss efforts to find highly iconic or representative gestures in the dictionaries of various sign languages and sign systems from around the world. Where necessary, signs were modified to make them easier to produce, based on the results of prior studies of signing errors made by students with autism, the sign-learning children of Deaf parents, and undergraduate students unfamiliar with any sign language. These potential signs were then tested with different undergraduate students to determine whether they were sufficiently memorable and accurately formed. Signs that did not meet criterion were either dropped from the system or modified and re-tested. Initial results from comparison studies between Simplified Signs and ASL signs, and between Simplified Signs and Amer-Ind signs, are presented as well. Finally, the chapter describes how feedback from users influenced the course of the project: memory aids were developed, especially for persons less familiar with sign languages, to explain the tie between each sign and its referent in case that relationship is not readily or immediately apparent to a potential learner.


2021, Vol. 12
Author(s): Xiaoxiao Li

In the natural environment, facial and bodily expressions influence each other. Previous research has shown that bodily expressions significantly influence the perception of facial expressions. However, little is known about the cognitive processing of facial and bodily emotional expressions and its temporal characteristics. This study therefore presented facial and bodily expressions, both separately and together, to examine the electrophysiological mechanism of emotion recognition using event-related potentials (ERPs). Participants assessed the emotions of facial and bodily expressions that varied by valence (positive/negative) and consistency (matching/non-matching emotions). The results showed that bodily expressions induced a more positive P1 component with a shortened latency, whereas facial expressions triggered a more negative N170 with a prolonged latency. Of the later components, N2 was more sensitive to inconsistent emotional information and P3 was more sensitive to consistent emotional information. The cognitive processing of facial and bodily expressions showed distinctive integration features, with the interaction occurring at an early stage (N170). These results highlight the importance of both facial and bodily expressions in the cognitive processing of emotion recognition.


Author(s):  
Michela Balconi

Neuropsychological studies have identified distinct brain correlates dedicated to analyzing facial expressions of emotion. Some cerebral circuits appear to be specific to emotional face comprehension, as a function of conscious versus unconscious processing of emotional information. Moreover, the emotional content of faces (i.e., positive vs. negative; more or less arousing) may activate specific cortical networks. Among other findings, recent studies have clarified the contribution of the two hemispheres to face comprehension, as a function of the type of emotion (mainly the positive vs. negative distinction) and of the specific task (comprehending vs. producing facial expressions). An overview of ERP (event-related potential) analyses is proposed in order to show how an observer processes a face and renders it a meaningful construct even in the absence of awareness. Finally, brain oscillations are considered in order to explain the synchronization of neural populations in response to emotional faces when conscious versus unconscious processing is engaged.


Author(s): Izabela Krejtz, Krzysztof Krejtz, Katarzyna Wisiecka, Marta Abramczyk, Michał Olszanowski, ...

The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.


2009, Vol. 32(5), pp. 405-406
Author(s): Nicolas Vermeulen

Vigil suggests that expressed emotions are inherently learned and triggered in social contexts. A strict reading of this account is not consistent with the findings that individuals, even those who are congenitally blind, do express emotions in the absence of an audience. Rather, grounded cognition suggests that facial expressions might also be an embodied support used to represent emotional information.


2021, Vol. 12
Author(s): Shu Zhang, Xinge Liu, Xuan Yang, Yezhi Shu, Niqi Liu, ...

Cartoon faces are widely used in social media, animation production, and social robots because of their ability to convey different kinds of emotional information in an appealing way. Despite these popular applications, the mechanisms of recognizing emotional expressions in cartoon faces are still unclear. Three experiments were therefore conducted in this study to systematically explore the recognition process for emotional cartoon expressions (happy, sad, and neutral) and to examine the influence of key facial features (mouth, eyes, and eyebrows) on emotion recognition. Across the experiments, three presentation conditions were employed: (1) a full face; (2) a single feature only (with the two other features concealed); and (3) one feature concealed with the two other features presented. The cartoon face images used in this study were converted from a set of real faces acted by Chinese posers, and the observers were Chinese. The results show that happy cartoon expressions were recognized more accurately than neutral and sad expressions, consistent with the happiness recognition advantage revealed in studies of real faces. Compared with real facial expressions, sad cartoon expressions were perceived as sadder, and happy cartoon expressions as less happy, regardless of whether the full face or single facial features were viewed. For cartoon faces, the mouth proved both sufficient and necessary for the recognition of happiness, and the eyebrows both sufficient and necessary for the recognition of sadness. This study helps to clarify the perceptual mechanism underlying emotion recognition in cartoon faces and sheds some light on directions for future research on intelligent human-computer interaction.


2022, Vol. 12
Author(s): Marta F. Nudelman, Liana C. L. Portugal, Izabela Mocaiber, Isabel A. David, Beatriz S. Rodolpho, ...

Background: Evidence indicates that the processing of facial stimuli may be influenced by incidental factors, and these influences are particularly powerful when facial expressions are ambiguous, as with neutral faces. However, little research has investigated whether emotional contextual information presented in a preceding, unrelated experiment can carry over to another experiment and modulate neutral face processing.
Objective: The present study investigated whether an emotional text presented in a first experiment could generate negative emotion toward neutral faces in a second experiment unrelated to the first.
Methods: Ninety-nine students (all women) were randomly assigned to read and evaluate either a negative text (negative context) or a neutral text (neutral context) in the first experiment. In the subsequent second experiment, the participants performed two tasks: (1) an attentional task in which neutral faces were presented as distractors and (2) a task involving the emotional judgment of neutral faces.
Results: Compared to the neutral context, participants in the negative context rated more faces as negative. No significant effect was found in the attentional task.
Conclusion: Our study demonstrates that incidental emotional information from a previous experiment can increase participants’ propensity to interpret neutral faces as more negative when emotional information is directly evaluated. The present study thus adds important evidence to the literature suggesting that our behavior, judgments, and emotions are modulated by prior information in an incidental, barely noticed way, much as occurs in everyday life.


2018
Author(s): Damien Dupré, Nicole Andelic, Anna Zajac, Gawain Morrison, Gary John McKeown

Sharing personal information is an important way of communicating on social media. Among the information that may be shared, new sensors and tools allow people to share emotion information via facial emotion recognition. This paper asks whether people are prepared to share personal information such as their own emotions on social media. In the current study we examined how factors such as felt emotion, motivation for sharing on social media, and personality affected participants’ willingness to share self-reported emotion or facial expression online. A GLMM analysis found that participants’ willingness to share self-reported emotion and facial expressions was influenced by their personality traits and by the motivation they were given for sharing their emotion information. From these results we conclude that the estimated level of privacy for certain emotional information, such as facial expression, is influenced by the motivation for sharing that information online.

