Seeing the Way: the Role of Vision in Conversation Turn Exchange Perception

2017 ◽  
Vol 30 (7-8) ◽  
pp. 653-679 ◽  
Author(s):  
Nida Latif ◽  
Agnès Alsius ◽  
K. G. Munhall

During conversations, we engage in turn-taking behaviour that proceeds back and forth effortlessly as we communicate. On any given day, we participate in numerous face-to-face interactions that contain social cues from our partner, and we interpret these cues to rapidly identify whether it is appropriate to speak. Although the benefit provided by visual cues has been well established in several areas of communication, the use of visual information to make turn-taking decisions during conversation is unclear. Here we conducted two experiments to investigate the role of visual information in identifying conversational turn exchanges. We presented clips containing single utterances spoken by single individuals engaged in a natural conversation with another. These utterances occurred either right before a turn exchange (i.e., when the current talker would finish and the other would begin) or where the same talker would continue speaking. In Experiment 1, participants were presented with audiovisual, auditory-only, and visual-only versions of our stimuli and identified whether or not a turn exchange would occur. We demonstrated that although participants could identify turn exchanges with unimodal information alone, they performed best in the audiovisual modality. In Experiment 2, we presented participants with audiovisual turn exchanges where the talker, the listener, or both were visible. We showed that participants incurred a cost in identifying turn exchanges when visual cues from the listener were not available. Overall, we demonstrate that although auditory information is sufficient for successful conversation, visual information plays an important role in the overall efficiency of communication.

Author(s):  
Karolina Hansen ◽  
Tamara Rakić ◽  
Melanie C. Steffens

Abstract. Most research on ethnicity has focused on visual cues. However, accents are strong social cues that can match or contradict visual cues. We examined understudied reactions to people for whom one cue suggests one ethnicity while the other cue contradicts it. In an experiment conducted in Germany, job candidates spoke with an accent either congruent or incongruent with their (German or Turkish) appearance. Based on ethnolinguistic identity theory, we predicted that accents would be strong cues for categorization and evaluation. Based on expectancy violations theory, we expected that incongruent targets would be evaluated more extremely than congruent targets. Both predictions were confirmed: accents strongly influenced perceptions, and Turkish-looking German-accented targets were perceived as the most competent of all targets (and additionally the most warm). The findings show that bringing together visual and auditory information yields a more complete picture of the processes underlying impression formation.


2018 ◽  
Vol 40 (1) ◽  
pp. 93-109
Author(s):  
YI ZHENG ◽  
ARTHUR G. SAMUEL

Abstract. It has been documented that lipreading facilitates the understanding of difficult speech, such as noisy speech and time-compressed speech. However, relatively little work has addressed the role of visual information in perceiving accented speech, another type of difficult speech. In this study, we specifically focus on accented word recognition. One hundred forty-two native English speakers made lexical decision judgments on English words or nonwords produced by speakers with Mandarin Chinese accents. The stimuli were presented either as videos of a relatively distant speaker or as videos in which we zoomed in on the speaker’s head. Consistent with studies of degraded speech, listeners were more accurate at recognizing accented words when they saw lip movements from the closer apparent distance. The effect of apparent distance tended to be larger under nonoptimal conditions: when stimuli were nonwords rather than words, and when stimuli were produced by a speaker with a relatively strong accent. However, we did not find any influence of listeners’ prior experience with Chinese-accented speech, suggesting that cross-talker generalization is limited. The current study provides practical suggestions for effective communication between native and nonnative speakers: visual information is useful, and it is more useful in some circumstances than others.


Neurology ◽  
2018 ◽  
Vol 90 (11) ◽  
pp. e977-e984 ◽  
Author(s):  
Motoyasu Honma ◽  
Yuri Masaoka ◽  
Takeshi Kuroda ◽  
Akinori Futamura ◽  
Azusa Shiromaru ◽  
...  

Objective: To determine whether Parkinson disease (PD) affects cross-modal function of vision and olfaction, given that PD is known to impair various cognitive functions, including olfaction. Methods: We conducted behavioral experiments to identify the influence of PD on cross-modal function by contrasting patient performance with that of age-matched normal controls (NCs). We assessed visual effects on odor strength and preference by manipulating semantic connections between picture/odorant pairs. In addition, we used brain imaging to identify the role of striatal presynaptic dopamine transporter (DaT) deficits. Results: We found that odor evaluation in participants with PD was unaffected by visual information, whereas NCs overestimated smell when sniffing an odorless liquid while viewing pleasant/unpleasant visual cues. Furthermore, DaT deficit in the striatum, in the posterior putamen in particular, correlated with the reduced visual effects in participants with PD. Conclusions: These findings suggest that PD impairs cross-modal function of vision/olfaction as a result of a posterior putamen deficit. This cross-modal dysfunction may serve as the basis of a novel precursor assessment of PD.


2020 ◽  
pp. 002383091989888
Author(s):  
Luma Miranda ◽  
Marc Swerts ◽  
João Moraes ◽  
Albert Rilliard

This paper presents the results of three perceptual experiments investigating the role of the auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while they produced a sentence (“Como você sabe”), either as a statement (meaning “As you know.”) or as an echo question (meaning “As you know?”). Experiments were set up using these two intonation contours. Stimuli were presented in conditions with clear and degraded audio as well as with congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements from questions both prosodically and visually, with auditory cues being dominant over visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, but degraded it when the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, even when applied to prosodic patterns.


2020 ◽  
Vol 31 (01) ◽  
pp. 030-039 ◽  
Author(s):  
Aaron C. Moberly ◽  
Kara J. Vasil ◽  
Christin Ray

Abstract. Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual’s auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs). Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant’s “visual enhancement” (VE) and “auditory enhancement” (AE) were computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance of VE versus AE was also computed as a VE/AE ratio. The VE/AE ratio was predicted inversely by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio. A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance.
Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
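The enhancement scores described above are commonly normalized by the room left for improvement. A minimal sketch of that computation, using hypothetical scores (percent words correct) rather than data from the study, and assuming the standard normalization (the study's exact formula may differ):

```python
def enhancement(unimodal, av, ceiling=100.0):
    """Benefit of adding the other modality, normalized by
    the potential gain remaining above the unimodal score."""
    return (av - unimodal) / (ceiling - unimodal)

# Hypothetical scores for one listener (percent words correct)
a_only, v_only, av = 40.0, 20.0, 70.0

ve = enhancement(a_only, av)  # visual enhancement: (70-40)/(100-40) = 0.5
ae = enhancement(v_only, av)  # auditory enhancement: (70-20)/(100-20) = 0.625
ratio = ve / ae               # relative reliance on visual vs. auditory cues
```

Under this scheme, a larger VE/AE ratio indicates relatively greater reliance on visual information, consistent with the abstract's finding that the ratio rises as A-only performance falls.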


2009 ◽  
Vol 26 (5) ◽  
pp. 427-438 ◽  
Author(s):  
Werner Goebl ◽  
Caroline Palmer

We investigated influences of auditory feedback, musical role, and note ratio on synchronization in ensemble performance. Pianists performed duets on a piano keyboard; the pianist playing the upper part was designated the leader and the other pianist the follower. They received full auditory feedback, one-way feedback (leaders heard themselves while followers heard both parts), or self-feedback only. The upper part contained more, fewer, or equal numbers of notes relative to the lower part. Temporal asynchronies increased as auditory feedback decreased: the pianist playing more notes preceded the other pianist, and this tendency increased with reduced feedback. Interonset timing suggested bidirectional adjustments during full feedback despite the leader/follower instruction, and unidirectional adjustment only during reduced feedback. Motion analyses indicated that leaders raised fingers higher and pianists' head movements became more synchronized as auditory feedback was reduced. These findings suggest that visual cues became more important when auditory information was absent.
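The asynchrony measure discussed above can be sketched as signed differences between corresponding note onsets in the two parts. The onset times below are hypothetical, and the study's actual analysis of performance timing is more involved:

```python
# Hypothetical onset times (seconds) for notes played together
# by the two pianists; matched by score position.
upper = [0.00, 0.50, 1.00, 1.50]  # leader's part
lower = [0.02, 0.53, 1.01, 1.54]  # follower's part

# Signed asynchrony per note: negative means the upper part leads
asynchronies = [u - l for u, l in zip(upper, lower)]
mean_async = sum(asynchronies) / len(asynchronies)
```

A consistently negative mean asynchrony, as in this toy example, would correspond to the leader anticipating the follower.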


2015 ◽  
Vol 3 (1-2) ◽  
pp. 88-101 ◽  
Author(s):  
Kathleen M. Einarson ◽  
Laurel J. Trainor

Recent work examined five-year-old children’s perceptual sensitivity to musical beat alignment. In this work, children watched pairs of videos of puppets drumming to music with simple or complex metre, where one puppet’s drumming sounds (and movements) were synchronized with the beat of the music and the other drummed with incorrect tempo or phase. The videos were used to maintain children’s interest in the task. Five-year-olds were better able to detect beat misalignments in simple than complex metre music. However, adults can perform poorly when attempting to detect misalignment of sound and movement in audiovisual tasks, so it is possible that the moving stimuli actually hindered children’s performance. Here we compared children’s sensitivity to beat misalignment in conditions with dynamic visual movement versus still (static) visual images. Eighty-four five-year-old children performed either the same task as described above or a task that employed identical auditory stimuli accompanied by a motionless picture of the puppet with the drum. There was a significant main effect of metre type, replicating the finding that five-year-olds are better able to detect beat misalignment in simple metre music. There was no main effect of visual condition. These results suggest that, given identical auditory information, children’s ability to judge beat misalignment in this task is not affected by the presence or absence of dynamic visual stimuli. We conclude that at five years of age, children can tell if drumming is aligned to the musical beat when the music has simple metric structure.


2021 ◽  
Vol 11 (6) ◽  
pp. 668-675
Author(s):  
Jamal Poursamimi ◽  
Malihe Khubroo ◽  
Seyyed Hossein Sanaeifar

The striking role of Conversation Analysis (CA) research in real contexts, together with the substantial part doctors play in doctor-patient conversation during the successive stages of intensive medical care, motivated the researchers to conduct this study. To this end, the present study focuses on conversational features of doctor-patient talk in unconfirmed cases of COVID-19 in Golpayegan, Esfahan, Iran, and asks which conversational features are used most frequently by Iranian interlocutors in the context of the doctor's office. For this purpose, three doctor-patient meetings were audio-recorded and then transcribed, and both the verbal and nonverbal aspects of the conversations were analyzed. The conversation analysis showed that turn-taking was the most frequently used conversational feature. This investigation is notable as one of the first conversation-analytic studies conducted in the Iranian doctor-patient context in a COVID-19 setting. It is also pedagogically relevant, since teaching conversation analysis to students alongside other important skills, sub-skills, and language components is of great value.


2022 ◽  
Author(s):  
Nicole E Wynne ◽  
Karthikeyan Chandrasegaran ◽  
Lauren Fryzlewicz ◽  
Clément Vinauger

The diurnal mosquito Aedes aegypti is a vector of several arboviruses, including the dengue, yellow fever, and Zika viruses. To find a host to feed on, these mosquitoes rely on the sophisticated integration of olfactory, visual, thermal, and gustatory cues reluctantly emitted by the hosts. If a mosquito is detected by its target, the latter may display defensive behaviors that the mosquito needs to be able to detect and escape. In humans, a typical response is a swat of the hand, which generates both mechanical and visual perturbations aimed at the mosquito. While the neuro-sensory mechanisms underlying the approach to the host have been the focus of numerous studies, the cues used by mosquitoes to detect and identify a potential threat remain largely understudied. In particular, the role of vision in mediating mosquitoes' ability to escape defensive hosts has yet to be analyzed. Here, we used programmable visual displays to generate expanding objects sharing characteristics with the visual component of an approaching hand and quantified the behavioral response of female mosquitoes. Results show that Ae. aegypti is capable of using visual information to decide whether to feed on an artificial host mimic. Stimulations delivered in an LED flight arena further reveal that landed female Ae. aegypti display a stereotypical escape strategy, taking off at an angle that is a function of the distance and direction of stimulus introduction. Altogether, this study demonstrates that mosquitoes can use isolated visual cues to detect and avoid a potential threat.


2019 ◽  
Author(s):  
Meike Scheller ◽  
Francine Matorres ◽  
Lucy Tompkins ◽  
Anthony C. Little ◽  
Alexandra A. de Sousa

Cross-cultural research has repeatedly demonstrated sex differences in the importance of different partner characteristics when choosing a mate. Men typically report higher preferences for younger, more physically attractive women, while women prefer men who are wealthier and of higher status. As the assessment of such partner characteristics often relies on visual cues, this raises the question of whether visual experience is necessary for sex-specific mate preferences to develop. To shed more light on the emergence of sex differences in mate choice, the current study assessed how preferences for attractiveness, resources, and personality factors differ between sighted and blind individuals using an online questionnaire. We further investigated the role of social factors and sensory cue selection in these sex differences. Our sample consisted of 94 sighted and blind participants with different ages of blindness onset: 19 blind/28 sighted males and 19 blind/28 sighted females. Results replicated well-documented findings in the sighted, with men placing more importance on physical attractiveness and women placing more importance on status and resources. However, while physical attractiveness was less important to blind men, blind women considered physical attractiveness as important as sighted women did. The importance of a high status and likeable personality was not influenced by sightedness. Blind individuals considered auditory cues more important than visual cues, while sighted males showed the opposite pattern. Further, relationship status and indirect, social influences were related to preferences. Overall, our findings shed light on the role that the availability of visual information plays in the emergence of sex differences in mate preferences.

