In the face of pain: The choice of visual cues in pain conditioning matters

2017 ◽  
Vol 21 (7) ◽  
pp. 1243-1251 ◽  
Author(s):  
N. Egorova ◽  
J. Park ◽  
J. Kong

2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper presents the results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper also addresses the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar, and 2 familiar clinicians), who were asked to rate each vocalization as either “yes” or “no.” The ratings were compared to the original identifications made by the child's mother during the face-to-face interaction from which the samples were drawn. Findings suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure the child's intended choice is accurately understood.
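The abstract does not state how listener–mother agreement was quantified; below is a minimal sketch of one conventional approach, using percent agreement and Cohen's kappa on hypothetical “yes”/“no” ratings (the data and the choice of statistic are assumptions, not the study's reported method).

```python
# Minimal sketch: agreement between a listener's "yes"/"no" ratings and the
# mother's original identifications. All data here are hypothetical; the
# study's actual analysis method is not specified in the abstract.
from sklearn.metrics import cohen_kappa_score

# Mother's original identification for each of 20 vocalization samples.
reference = ["yes", "no", "yes", "yes", "no"] * 4

# One listener's ratings of the same pre-recorded, isolated vocalizations.
listener = ["yes", "no", "yes", "no", "no"] * 4

percent_agreement = sum(r == l for r, l in zip(reference, listener)) / len(reference)
kappa = cohen_kappa_score(reference, listener)  # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```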



Author(s):  
Elke B. Lange ◽  
Jens Fünderich ◽  
Hartmut Grimm

Abstract We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness was applied as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers’ orofacial movements are dominated by sound production, their facial expressions can communicate the emotions composed into the music, and observers do not rely on the audio information alone. Studies such as ours are important for understanding multisensory integration in applied settings.



2021 ◽  
Author(s):  
K. Hamoodh ◽  
S. Fotios

On subsidiary roads, lighting is installed to meet the needs of pedestrians after dark, both for their safety and for their feeling of safety. One such need is the ability to evaluate other people to inform the approach-or-avoid decision. To investigate how changes in lighting matter for this task, we first need to know where people tend to look. Much past work assumes the face is the critical target, but this assumption had yet to be tested. A pilot study suggested that the ability to see the hands and the face provided significant cues, but did not enable their separate contributions to be identified. This paper describes a second experiment conducted to compare the effect of changes in face and hand concealment on evaluations of safety. The results suggest significant differences between levels of face concealment but smaller differences for changes in hand concealment. The findings from both experiments support the importance of the face for evaluating other pedestrians.



2015 ◽  
Vol 21 (2) ◽  
pp. 146-155 ◽  
Author(s):  
Mirella Díaz-Santos ◽  
Bo Cao ◽  
Samantha A. Mauro ◽  
Arash Yazdanbakhsh ◽  
Sandy Neargarder ◽  
...  

Abstract Parkinson’s disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented with a Necker cube in which one face was highlighted by thickening the lines defining that face. The hypothesis was that the visual cues would help PD and NC exert better control over bistable perception. There were three conditions: passive viewing and two volitional-control conditions (hold: keep one percept in front; switch: speed up the alternation between the two percepts). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that differ from those in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity.
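Mean dominance duration is conventionally computed from timestamped perceptual reports; here is a minimal sketch under that assumption (the keypress times below are hypothetical, not the study's data).

```python
import numpy as np

# Minimal sketch: mean dominance duration from timestamped percept reports.
# Assumes observers press a key at each perceptual switch; the timestamps
# below are hypothetical placeholders.
switch_times = np.array([0.0, 2.1, 5.4, 7.0, 10.8, 13.2])  # seconds
dominance_durations = np.diff(switch_times)  # time spent on each percept
print(f"Mean dominance duration: {dominance_durations.mean():.2f} s")
```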



2013 ◽  
Vol 56 (2) ◽  
pp. 471-480 ◽  
Author(s):  
Astrid Yi ◽  
Willy Wong ◽  
Moshe Eizenman

Purpose In this study, the authors sought to quantify the relationships between speech intelligibility (perception) and gaze patterns under different auditory–visual conditions. Method Eleven subjects listened to low-context sentences spoken by a single talker while viewing the face of one or more talkers on a computer display. Subjects either maintained their gaze at a specific distance (0°, 2.5°, 5°, 10°, or 15°) from the center of the talker's mouth (CTM) or moved their eyes freely on the computer display. Eye movements were monitored with an eye-tracking system, and speech intelligibility was evaluated as the mean percentage of correctly perceived words. Results With a single talker and a fixed point of gaze, speech intelligibility was similar for all fixations within 10° of the CTM. With visual cues from two talker faces and a speech signal from one of the talkers, speech intelligibility was similar to that of a single talker for fixations within 2.5° of the CTM. With natural viewing of a single talker, gaze strategy changed with the speech signal-to-noise ratio (speech-SNR). At low speech-SNR, a strategy that brought the point of gaze to within 2.5° of the CTM was used in approximately 80% of trials, whereas at high speech-SNR it was used in only approximately 50% of trials. Conclusions With natural viewing of a single talker and high speech-SNR, subjects can shift their gaze between points on the talker's face without compromising speech intelligibility. At low speech-SNR, subjects change their gaze patterns to fixate primarily on points in close proximity to the talker's mouth. The latter strategy is essential for optimizing speech intelligibility in situations where there are simultaneous visual cues from multiple talkers (i.e., when some of the visual cues are distracters).
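Classifying fixations by angular distance from the CTM requires converting on-screen gaze coordinates to visual angle; the following is a minimal sketch of that geometry (the viewing distance, display scale, and CTM position are placeholder values, not the study's setup).

```python
import numpy as np

# Minimal sketch: angular distance of a gaze sample from the center of the
# talker's mouth (CTM). Geometry values are hypothetical placeholders.
VIEWING_DISTANCE_CM = 60.0          # eye-to-display distance (assumed)
PIXELS_PER_CM = 38.0                # display resolution (assumed)
ctm_px = np.array([512.0, 600.0])   # CTM position on screen, in pixels

def angle_from_ctm(gaze_px: np.ndarray) -> float:
    """Visual angle (degrees) between a gaze point and the CTM."""
    offset_cm = np.linalg.norm(gaze_px - ctm_px) / PIXELS_PER_CM
    return np.degrees(np.arctan2(offset_cm, VIEWING_DISTANCE_CM))

gaze = np.array([650.0, 580.0])     # one gaze sample (pixels)
print(f"Gaze is {angle_from_ctm(gaze):.1f} deg from the CTM")
```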



2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Rochelle S. Newman ◽  
Laura A. Kirby ◽  
Katie Von Holzen ◽  
Elizabeth Redcay

Abstract Background Adults and adolescents with autism spectrum disorder show greater difficulty comprehending speech in the presence of noise. Moreover, while neurotypical adults use visual cues on the mouth to help them understand speech in background noise, differences in attention to human faces in autism may affect the use of these visual cues. No work has yet examined these skills in toddlers with ASD, despite the fact that they are frequently faced with noisy, multitalker environments. Methods Children aged 2–5 years, both with and without autism spectrum disorder (ASD), saw pairs of images in a preferential looking study and were instructed to look at one of the two objects. Sentences were presented either in quiet or in the presence of another background talker (noise). On half of the trials, the face of the target talker was shown; the other half had no face present. Growth-curve modeling was used to examine the time course of children’s looking to the appropriate vs. opposite image. Results Noise impaired performance for both children with ASD and their age- and language-matched peers. When no face was present on the screen, the effect of noise was generally similar across groups with and without ASD. But when the face was present, noise had a more detrimental effect on children with ASD than on their language-matched peers, suggesting that neurotypical children were better able to use visual cues on the speaker’s face to aid performance. Moreover, those children with ASD who attended more to the speaker’s face showed better listening performance in the presence of noise. Conclusions Young children both with and without ASD show poorer performance comprehending speech in the presence of another talker than in quiet. However, the results suggest that neurotypical children may be better able to make use of face cues to partially counteract the effects of noise. Children with ASD varied in their use of face cues, but those who spent more time attending to the face of the target speaker appeared less disadvantaged by the presence of background noise, indicating a potential path for future interventions.
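Growth-curve modeling of looking data is commonly implemented as a mixed-effects regression with polynomial time terms; here is a minimal sketch of that general approach on simulated data (the column names, model form, and all values are illustrative assumptions, not the authors' exact specification).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of a growth-curve analysis: proportion of looks to the
# target modeled with linear and quadratic time terms and random intercepts
# by child. All data are simulated for illustration.
rng = np.random.default_rng(0)
n_children, n_bins = 30, 20
time = np.tile(np.linspace(0, 1, n_bins), n_children)   # normalized window
child = np.repeat(np.arange(n_children), n_bins)
group = np.where(child < 15, "ASD", "TD")
prop = 0.5 + 0.3 * time - 0.1 * time**2 + rng.normal(0, 0.05, n_children * n_bins)

df = pd.DataFrame({"prop_target": prop, "t": time, "t2": time**2,
                   "child": child, "group": group})
model = smf.mixedlm("prop_target ~ (t + t2) * group", df, groups=df["child"])
print(model.fit().summary())
```

A fuller specification would add random slopes for the time terms, orthogonal polynomials, and the noise/face factors from the design.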



Author(s):  
Shlomit Beker ◽  
John J Foxe ◽  
Sophie Molholm

Anticipating near-future events is fundamental to adaptive behavior, whereby neural processing of predictable stimuli is significantly facilitated relative to non-predictable events. Neural oscillations appear to be a key anticipatory mechanism by which processing of upcoming stimuli is modified, and they often entrain to rhythmic environmental sequences. Clinical and anecdotal observations have led to the hypothesis that people with Autism Spectrum Disorder (ASD) may have deficits in generating predictions; as such, a candidate neural mechanism may be a failure to adequately entrain neural activity to repetitive environmental patterns to facilitate temporal predictions. We tested this hypothesis by interrogating temporal predictions and rhythmic entrainment using behavioral and electrophysiological approaches. We recorded high-density electroencephalography in children with ASD and typically developing (TD) age- and IQ-matched controls while they reacted to an auditory target as quickly as possible. This auditory event was either preceded by predictive rhythmic visual cues or not. Both the ASD and control groups showed comparable behavioral facilitation in the Cue relative to the No-Cue condition, challenging the hypothesis that children with ASD have deficits in generating temporal predictions. Analyses of the electrophysiological data, in contrast, revealed significantly reduced neural entrainment to the visual cues and altered anticipatory processes in the ASD group, despite intact stimulus-evoked visual responses. These results support intact temporal prediction in response to a cue in ASD, in the face of altered entrainment and anticipatory processes.
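Neural entrainment to rhythmic cues is often quantified as inter-trial phase coherence (ITC) at the cue presentation rate; below is a minimal sketch of that measure on simulated trials (the sampling rate, cue rate, and data are placeholders; the paper's actual analysis pipeline is not described in the abstract).

```python
import numpy as np

# Minimal sketch: inter-trial phase coherence (ITC) at the cue rate, a common
# index of neural entrainment. All values are simulated placeholders.
FS = 500           # sampling rate (Hz), assumed
CUE_HZ = 1.5       # assumed rhythmic cue rate
N_TRIALS, N_SAMP = 100, 1000

rng = np.random.default_rng(1)
t = np.arange(N_SAMP) / FS
# Trials with a weak phase-locked component at the cue rate plus noise.
trials = 0.3 * np.sin(2 * np.pi * CUE_HZ * t) + rng.normal(0, 1, (N_TRIALS, N_SAMP))

spectra = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(N_SAMP, d=1 / FS)
bin_idx = np.argmin(np.abs(freqs - CUE_HZ))
phases = np.angle(spectra[:, bin_idx])
itc = np.abs(np.mean(np.exp(1j * phases)))  # 0 = no phase locking, 1 = perfect
print(f"ITC at {CUE_HZ} Hz: {itc:.2f}")
```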



1975 ◽  
Vol 40 (4) ◽  
pp. 481-492 ◽  
Author(s):  
Norman P. Erber

Hearing-impaired persons usually perceive speech by watching the face of the talker while listening through a hearing aid. Normal-hearing persons also tend to rely on visual cues, especially when they communicate in noisy or reverberant environments. Numerous clinical and laboratory studies on the auditory-visual performance of normal-hearing and hearing-impaired children and adults demonstrate that combined auditory-visual perception is superior to perception through either audition or vision alone. This paper reviews these studies and provides a rationale for routine evaluation of auditory-visual speech perception in audiology clinics.



2012 ◽  
Vol 25 (0) ◽  
pp. 112 ◽  
Author(s):  
Lukasz Piwek ◽  
Karin Petrini ◽  
Frank E. Pollick

Multimodal perception of emotions has typically been examined using displays of a solitary character (e.g., the face–voice and/or body–sound of one actor). We extend this investigation to more complex, dyadic point-light displays combined with speech. A motion and voice capture system was used to record twenty actors interacting in couples with happy, angry, and neutral emotional expressions. The obtained stimuli were validated in a pilot study and used in the present study to investigate multimodal perception of emotional social interactions. Participants were required to categorize happy and angry expressions displayed visually, auditorily, or in emotionally congruent and incongruent bimodal displays. In a series of cross-validation experiments, we found that sound dominated the visual signal in the perception of emotional social interaction. Although participants’ judgments were faster in the bimodal condition, the accuracy of judgments was similar for the bimodal and auditory-only conditions. When participants watched emotionally mismatched bimodal displays, they predominantly oriented their judgments towards the auditory rather than the visual signal. This auditory dominance persisted even when the reliability of the auditory signal was decreased with noise, although visual information had some effect on judgments of emotions when it was combined with a noisy auditory signal. Our results suggest that when judging emotions from an observed social interaction, we rely primarily on vocal cues from the conversation rather than visual cues from the actors’ body movements.
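One simple way to quantify the reported auditory dominance is the proportion of incongruent-trial responses that match the auditory rather than the visual emotion; here is a minimal sketch on hypothetical trials (the index and data are illustrative, not the authors' analysis).

```python
import numpy as np

# Minimal sketch: an "auditory dominance" index for emotionally incongruent
# bimodal trials -- the proportion of categorizations matching the auditory
# rather than the visual emotion. Trial data below are hypothetical.
auditory_emotion = np.array(["happy", "angry", "happy", "angry", "angry"])
visual_emotion   = np.array(["angry", "happy", "angry", "happy", "happy"])
response         = np.array(["happy", "angry", "angry", "angry", "angry"])

follows_audio = response == auditory_emotion
print(f"Responses matching the auditory signal: {follows_audio.mean():.0%}")
```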



2010 ◽  
Vol 7 (2) ◽  
pp. 161-162 ◽  
Author(s):  
Frederick R. Adler

The notion of chemical communication between plants and other organisms has gone from being viewed as a fringe idea to an accepted ecological phenomenon only recently. An Organized Oral Session at the August 2010 Ecological Society of America meeting in Pittsburgh examined the role of plant signalling both within and between plants, with speakers addressing the remarkably wide array of effects that plant signals have on plant physiology, species interactions and entire communities. In addition to the familiar way that plants communicate with mutualists like pollinators and fruit dispersers through both chemical and visual cues, speakers at this session described how plants communicate with themselves, with each other, with herbivores and with predators of those herbivores. These plant signals create a complex odour web superimposed upon the more classical food web itself, with its own dynamics in the face of exotic species and rapid community assembly and disassembly.


