Visual threats reduce blood-feeding and trigger escape responses in Aedes aegypti mosquitoes.

2022
Author(s): Nicole E Wynne, Karthikeyan Chandrasegaran, Lauren Fryzlewicz, Clément Vinauger

The diurnal mosquito Aedes aegypti is a vector of several arboviruses, including dengue, yellow fever, and Zika viruses. To find a host to feed on, mosquitoes rely on the sophisticated integration of olfactory, visual, thermal, and gustatory cues inadvertently emitted by hosts. If a mosquito is detected, its target may display defensive behaviors that the mosquito needs to detect and escape. In humans, a typical response is a swat of the hand, which generates both mechanical and visual perturbations aimed at the mosquito. While the neurosensory mechanisms underlying the approach to the host have been the focus of numerous studies, the cues mosquitoes use to detect and identify a potential threat remain largely understudied. In particular, the role of vision in mediating mosquitoes' ability to escape defensive hosts has yet to be analyzed. Here, we used programmable visual displays to generate expanding objects sharing characteristics with the visual component of an approaching hand, and we quantified the behavioral responses of female mosquitoes. Results show that Ae. aegypti is capable of using visual information to decide whether to feed on an artificial host mimic. Stimulations delivered in an LED flight arena further revealed that landed female Ae. aegypti display a stereotypical escape strategy, taking off at an angle that is a function of the distance and direction of stimulus introduction. Altogether, this study demonstrates that mosquitoes can use isolated visual cues to detect and avoid a potential threat.

2018
Vol 40 (1)
pp. 93-109
Author(s): Yi Zheng, Arthur G. Samuel

Abstract It has been documented that lipreading facilitates the understanding of difficult speech, such as noisy speech and time-compressed speech. However, relatively little work has addressed the role of visual information in perceiving accented speech, another type of difficult speech. In this study, we focus specifically on accented word recognition. One hundred forty-two native English speakers made lexical decision judgments on English words or nonwords produced by speakers with Mandarin Chinese accents. The stimuli were presented either as videos showing the speaker from relatively far away or as videos zoomed in on the speaker's head. Consistent with studies of degraded speech, listeners were more accurate at recognizing accented words when they saw lip movements from the closer apparent distance. The effect of apparent distance tended to be larger under nonoptimal conditions: when stimuli were nonwords rather than words, and when they were produced by a speaker with a relatively strong accent. However, we did not find any influence of listeners' prior experience with Chinese-accented speech, suggesting that cross-talker generalization is limited. The current study provides practical suggestions for effective communication between native and nonnative speakers: visual information is useful, and it is more useful in some circumstances than others.


Neurology
2018
Vol 90 (11)
pp. e977-e984
Author(s): Motoyasu Honma, Yuri Masaoka, Takeshi Kuroda, Akinori Futamura, Azusa Shiromaru, et al.

Objective: To determine whether Parkinson disease (PD) affects cross-modal function of vision and olfaction, given that PD is known to impair various cognitive functions, including olfaction.
Methods: We conducted behavioral experiments to identify the influence of PD on cross-modal function by contrasting patient performance with that of age-matched normal controls (NCs). We measured visual effects on the strength and preference of odors by manipulating semantic connections between picture/odorant pairs. In addition, we used brain imaging to identify the role of striatal presynaptic dopamine transporter (DaT) deficits.
Results: We found that odor evaluation in participants with PD was unaffected by visual information, whereas NCs overestimated smell when sniffing an odorless liquid while viewing pleasant/unpleasant visual cues. Furthermore, DaT deficits in the striatum, in the posterior putamen in particular, correlated with reduced visual effects in participants with PD.
Conclusions: These findings suggest that PD impairs cross-modal function of vision/olfaction as a result of posterior putamen deficit. This cross-modal dysfunction may serve as the basis of a novel precursor assessment of PD.


2017
Vol 30 (7-8)
pp. 653-679
Author(s): Nida Latif, Agnès Alsius, K. G. Munhall

During conversations, we engage in turn-taking behaviour that proceeds back and forth effortlessly as we communicate. On any given day, we participate in numerous face-to-face interactions that contain social cues from our partner, and we interpret these cues to rapidly identify whether it is appropriate to speak. Although the benefit provided by visual cues has been well established in several areas of communication, the use of visual information to make turn-taking decisions during conversation is unclear. Here we conducted two experiments to investigate the role of visual information in identifying conversational turn exchanges. We presented clips containing single utterances spoken by individuals engaged in a natural conversation with another person. These utterances came either from right before a turn exchange (i.e., when the current talker would finish and the other would begin) or from points where the same talker would continue speaking. In Experiment 1, participants were presented with audiovisual, auditory-only, and visual-only versions of our stimuli and identified whether or not a turn exchange would occur. We demonstrated that although participants could identify turn exchanges with unimodal information alone, they performed best in the audiovisual modality. In Experiment 2, we presented participants with audiovisual turn exchanges in which the talker, the listener, or both were visible. We showed that participants suffered a cost in identifying turn exchanges when visual cues from the listener were not available. Overall, we demonstrate that although auditory information is sufficient for successful conversation, visual information plays an important role in the overall efficiency of communication.


2019
Vol 9 (1)
Author(s): Harry Luiz Pilz-Junior, Alessandra Bittencourt de Lemos, Kauana Nunes de Almeida, Gertrudes Corção, Henri Stephan Schrekker, et al.

Abstract Mosquitoes are important vectors of pathogens due to their blood-feeding behavior. Aedes aegypti (Diptera: Culicidae) transmits arboviruses such as dengue, Zika, and Chikungunya. This species carries several bacteria that may be beneficial for its biological and physiological development; studying the response of its microbiota to chemical products could therefore inform vector control. Recently, imidazolium salts (IS) were identified as effective Ae. aegypti larvicides. Considering the importance of the mosquito microbiota, this study addressed the influence of IS on the bacteria of Ae. aegypti larvae. After exposing larvae to different IS concentrations, the cultivable microbiota was identified through culturomics and mass spectrometry, and the non-cultivable microbiota was characterized with molecular markers. In addition, the influence of the IS on axenic larvae was studied for comparison. There were alterations in both the cultivable species and their diversity, including modifications in bacterial communities. The axenic larvae were less susceptible to the IS, and their susceptibility increased after exposure to bacteria from laboratory breeding water. This highlights the importance of understanding the role of the larval microbiota of Ae. aegypti in the development of imidazolium salt-based larvicides. The effect of IS on the microbiota of Ae. aegypti larvae, through their antimicrobial action, increases their larvicidal potential.


2019
Author(s): Meike Scheller, Francine Matorres, Lucy Tompkins, Anthony C. Little, Alexandra A. de Sousa

Cross-cultural research has repeatedly demonstrated sex differences in the importance of different partner characteristics when choosing a mate. Men typically report higher preferences for younger, more physically attractive women, while women prefer men who are wealthier and of higher status. As the assessment of such partner characteristics often relies on visual cues, this raises the question of whether visual experience is necessary for sex-specific mate preferences to develop. To shed more light on the emergence of sex differences in mate choice, the current study assessed how preferences for attractiveness, resources, and personality factors differ between sighted and blind individuals, using an online questionnaire. We further investigated the role of social factors and sensory cue selection in these sex differences. Our sample consisted of 94 sighted and blind participants with different ages of blindness onset: 19 blind and 28 sighted males, and 19 blind and 28 sighted females. Results replicated well-documented findings in the sighted, with men placing more importance on physical attractiveness and women placing more importance on status and resources. However, while physical attractiveness was less important to blind men, blind women considered physical attractiveness as important as sighted women did. The importance of a high status and likeable personality was not influenced by sightedness. Blind individuals considered auditory cues more important than visual cues, while sighted males showed the opposite pattern. Further, relationship status and indirect social influences were related to preferences. Overall, our findings shed light on the role of visual information in the emergence of sex differences in mate preference.


Perception
10.1068/p7153
2012
Vol 41 (2)
pp. 175-192
Author(s): Esteban R Calcagno, Ezequiel L Abregú, Manuel C Eguía, Ramiro Vergara

In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing the variability of responses. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but few studies have examined the influence of vision on auditory distance perception. In general, the data from these studies are contradictory and do not fully define how visual cues affect the apparent distance of a sound source. Here, we performed psychophysical experiments on auditory distance perception in humans, both including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information, and that subjects can store in memory a representation of the environment that later improves their perception of distance.


2021
pp. 1-23
Author(s): Hye-Jung Cho, Jieun Kiaer, Naya Choi, Jieun Song

Abstract In Korean, questions containing ambiguous wh-words may be interpreted as either wh-questions or yes-no questions. This study investigated 43 Korean three-year-olds' ability to disambiguate eight indeterminate questions using prosodic and visual cues. The intonation of each question provided a cue as to whether it should be interpreted as a wh-question or a yes-no question. The questions were presented alongside picture stimuli, which acted as either a matched contextual cue (corresponding auditory-visual stimuli) or a mismatched one (conflicting auditory-visual stimuli). Like adults, the children preferred to interpret questions involving ambiguous wh-words as wh-questions rather than yes-no questions. In addition, children were as effective as adults at disambiguating indeterminate questions using prosodic cues, regardless of the visual cue. However, when confronted with conflicting auditory-visual stimuli (mismatched), the children's responses were less accurate than the adults'.


2021
pp. 1-21
Author(s): Xinyue Wang, Clemens Wöllner, Zhuanghua Shi

Abstract Compared to vision, audition has been considered to be the dominant sensory modality for temporal processing. Nevertheless, recent research suggests the opposite, such that the apparent inferiority of visual information in tempo judgements might be due to the lack of ecological validity of experimental stimuli, and reliable visual movements may have the potential to alter the temporal location of perceived auditory inputs. To explore the role of audition and vision in overall time perception, audiovisual stimuli with various degrees of temporal congruence were developed in the current study. We investigated which sensory modality weighs more in holistic tempo judgements with conflicting audiovisual information, and whether biological motion (point-light displays of dancers) rather than auditory cues (rhythmic beats) dominate judgements of tempo. A bisection experiment found that participants relied more on visual tempo compared to auditory tempo in overall tempo judgements. For fast tempi (150 to 180 BPM), participants judged ‘fast’ significantly more often with visual cues regardless of the auditory tempo, whereas for slow tempi (60 to 90 BPM), they did so significantly less often. Our results support the notion that visual stimuli with higher ecological validity have the potential to drive up or down the holistic perception of tempo.


Author(s): Grant McGuire, Molly Babel

Abstract While the role of auditory saliency is well accepted as providing insight into the shaping of phonological systems, the influence of visual saliency on such systems has been neglected. This paper provides evidence for the importance of visual information in historical phonological change and synchronic variation through a series of audio-visual experiments on the /f/∼/θ/ contrast. /θ/ is typologically rare, an atypical target in sound change, acquired comparatively late, and synchronically variable in language inventories. Previous explanations for these patterns have focused on either the articulatory difficulty of an interdental tongue gesture or the perceptual similarity /θ/ shares with labiodental fricatives. We hypothesize that the bias is due to an asymmetry in audio-visual phonetic cues and in cue variability within and across talkers. Support for this hypothesis comes from a speech perception study that explored the weighting of audio and visual cues for /f/ and /θ/ identification in CV, VC, and VCV syllabic environments in /i/, /a/, or /u/ vowel contexts, in Audio, Visual, and Audio-Visual experimental conditions, using stimuli from ten different talkers. The results indicate that /θ/ is more variable than /f/ in both the Audio and Visual conditions. We propose that it is this variability that contributes to the unstable nature of /θ/ across time, and that it offers an improved explanation for the observed synchronic and diachronic asymmetries in its patterning.
