Chemical blindness in Liolaemus lizards is counterbalanced by visual signals: the case of two species with different communication modalities

2020 · Vol 41 (3) · pp. 323-336
Author(s): Mario R. Ruiz-Monachesi, Soledad Valdecantos, Félix B. Cruz

Abstract: Animals employ a wide variety of communication tactics that rely on distinct sensory modalities. Lizards are characterized by a heightened dependence on chemical and visual communication. Some authors have proposed that a reduced number of chemical secretory pores may be associated with an increased visual dependence in some species. Here, we study two species of Liolaemus lizards with different chemical features to compare their visual and chemical communication. The first species, L. coeruleus, lacks precloacal pores in both sexes, while L. albiceps has precloacal pores in both sexes. We expected L. coeruleus to rely principally on the visual modality and L. albiceps to show greater chemical responses. We filmed the lizards' responses to different chemical and visual stimuli. In the trials, both species demonstrated chemical self-recognition; L. albiceps also exhibited less total time in motion but more behavioural displays in the presence of conspecific scents, suggesting conspecific chemical recognition as well. The visual results, on the other hand, showed that L. coeruleus reacted more to the presence of conspecifics than L. albiceps did. These observations suggest that L. coeruleus relies more on visual signalling, while L. albiceps depends more on chemical communication. Our results may indicate a correspondence between the presence of precloacal secretions and conspecifics' responses to them in both species studied.

Nanophotonics · 2020 · Vol 9 (10) · pp. 3271-3278
Author(s): Qian Ma, Qiao Ru Hong, Xin Xin Gao, Hong Bo Jing, Che Liu, ...

Abstract: For the intelligence of metamaterials, the self-sensing mechanism and programmable reaction units are two important components for self-recognition and self-determination. However, their realization still faces great challenges. Here, we propose a smart sensing metasurface that achieves self-defined functions in the framework of digital coding metamaterials. A sensing unit that can simultaneously process the sensing channel and realize phase-programmable capability is designed by integrating a radio-frequency (RF) power detector and PIN diodes. Four sensing units distributed on the metasurface aperture can detect microwave incidences in the x- and y-polarizations, while the other elements modulate the reflected phase patterns under the control of a field-programmable gate array (FPGA). To validate the performance, three schemes containing six coding patterns are presented and simulated, after which two of them are measured, showing good agreement with the designs. We envision that this work may motivate studies on smart metamaterials with high-level recognition and manipulation capabilities.
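
The digital-coding idea above is easy to sketch numerically. Below is a minimal illustration, not the authors' implementation: a 1-bit coding matrix whose elements sit at 0 or π reflection phase (the two PIN-diode states) shapes the far-field array factor of the aperture. The frequency, element spacing, and 8 × 8 aperture size are assumed illustrative values.

    import numpy as np

    # Minimal sketch of 1-bit digital phase coding (illustrative values only;
    # the paper's element design, spacing, and frequency are not reproduced).
    wavelength = 0.03              # metres, i.e. 10 GHz (assumed)
    d = wavelength / 2             # element spacing (assumed)
    k = 2 * np.pi / wavelength     # free-space wavenumber

    # 8x8 coding matrix: bit 0 -> 0 rad reflection phase, bit 1 -> pi rad,
    # as would be set by the PIN-diode state of each element.
    coding = np.zeros((8, 8), dtype=int)
    coding[:, ::2] = 1             # "0101..." stripes, one example pattern

    def array_factor(coding, theta, phi):
        """|Array factor| of the coded aperture toward direction (theta, phi)."""
        m, n = np.meshgrid(np.arange(coding.shape[0]),
                           np.arange(coding.shape[1]), indexing="ij")
        element_phase = np.pi * coding
        path_phase = k * d * np.sin(theta) * (m * np.cos(phi) + n * np.sin(phi))
        return np.abs(np.exp(1j * (element_phase + path_phase)).sum())

    # The striped 0/1 pattern cancels the specular (broadside) reflection:
    print(array_factor(coding, theta=0.0, phi=0.0))  # ~0

Reprogramming the surface then amounts to writing a new coding matrix, which is the role the FPGA plays in hardware.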


Author(s): Aaron Crowson, Zachary H. Pugh, Michael Wilkinson, Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has created an increasing need to represent the physical world while immersed in the virtual one. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, it investigates how an orientation-type notification aids perception of alerts that manifest outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2017 · Vol 30 (7-8) · pp. 763-781
Author(s): Jenni Heikkilä, Kimmo Alho, Kaisa Tiippana

Audiovisual semantic congruency during memory encoding has been shown to facilitate later recognition memory performance. However, it is still unclear whether this improvement is due to multisensory semantic congruency or to semantic congruency per se. We investigated whether dual visual encoding facilitates recognition memory in the same way as audiovisual encoding. During encoding, participants memorized auditory or visual stimuli paired with a semantically congruent, incongruent, or non-semantic stimulus in the same modality or in the other modality. Subsequent recognition memory performance was better when the stimulus had initially been paired with a semantically congruent stimulus than with a non-semantic one. This congruency effect was observed with both audiovisual and dual visual stimuli. The present results indicate that not only multisensory but also unisensory semantically congruent stimuli can improve memory performance. Thus, the semantic congruency effect is not solely a multisensory phenomenon, as has previously been suggested.


2017 · Vol 35 (1) · pp. 77-93
Author(s): Marilyn G. Boltz

Although the visual modality often dominates the auditory one, one exception occurs in the presence of tempo discrepancies between the two perceptual systems: variations in auditory rate typically have a greater influence on perceived visual rate than vice versa. This phenomenon, termed "auditory driving," is investigated here through techniques used in cinematic art. Experiments 1 and 2 relied on montages (slideshows) of still photos accompanied by musical selections, in which the perceived rate of one modality was assessed through a recognition task while the rate of the other modality was systematically varied. A similar methodological strategy was used in Experiments 3 and 4, in which film excerpts of various moving objects were accompanied by the sounds those objects typically produce. In both cases, auditory dominance was observed, which has implications at both the theoretical and applied levels.


2020
Author(s): Alper Kumcu

Linguistic synaesthesia (i.e., synaesthetic metaphor, intrafield metaphor, or cross-modal metaphor) refers to instances in which expressions from different sensory modalities are combined, as in sweet (taste) melody (sound). Ullmann (1957) and, later, Williams (1976) were the first to show that synaesthetic transfers seem to follow a potentially universal pattern running from the lower senses (i.e., touch, taste, and smell) to the higher senses (i.e., hearing and sight) but not the other way around (e.g., melodious sweetness). Studies across languages, cultures, domains, and text types have presented mixed results as to the universality of cross-modal mappings in linguistic synaesthesia (e.g., Jo, 2019; Strik Lievers, 2015; Zhao et al., 2019). To extend the evidence to an underrepresented language, and thus to test the universality of the directionality principle, 5,699 cases of linguistic synaesthesia in written and spoken Turkish were investigated using a general-purpose, large corpus. Results show that, except for transfers from smell to hearing, which are unidirectional, synaesthetic transfers in Turkish do not comply with the directionality principle in the strictest sense. Although most transfers following the canonical direction were also significantly more frequent, there were instances of "backward" transfers. Further, two of the backward transfers (from smell to touch and from taste to touch) were significantly more frequent than their canonical counterparts (from touch to smell and from touch to taste). Results are compared against synaesthesia in other languages and discussed in the framework of linguistic universals and embodied cognition. Supplemental materials: https://osf.io/2unvy
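
The directionality test described above boils down to tallying source→target modality pairs against the Ullmann/Williams hierarchy. A minimal sketch, assuming each corpus hit has already been annotated with its source and target modality; the example pairs below are invented placeholders, not the study's data:

    from collections import Counter

    # Ullmann/Williams hierarchy: canonical transfers run from lower to higher senses.
    RANK = {"touch": 0, "taste": 1, "smell": 2, "hearing": 3, "sight": 4}

    # Placeholder (source, target) annotations, one per synaesthetic expression.
    pairs = [("taste", "hearing"),   # e.g. "sweet melody" -> canonical
             ("smell", "touch"),     # backward transfer
             ("touch", "sight")]     # canonical

    direction = Counter("canonical" if RANK[src] < RANK[tgt] else "backward"
                        for src, tgt in pairs)
    print(direction)  # Counter({'canonical': 2, 'backward': 1})

Per-pair frequencies computed this way are what allow a canonical transfer (e.g., touch→taste) to be compared directly against its backward counterpart (taste→touch).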


2018
Author(s): Tobias Heycke, Christoph Stahl

Evaluative conditioning (EC) changes the preference for a formerly neutral stimulus (conditioned stimulus; CS) in the direction of the valence of a valent stimulus (unconditioned stimulus; US) with which it is paired. When the CS is presented subliminally (i.e., too briefly to be consciously perceived), contingency awareness between CS and US can be ruled out. Hence, EC effects with subliminal CSs would support theories claiming that contingency awareness is not necessary for EC effects to occur. Recent studies reported the absence of EC with briefly presented CSs when both CS and US were presented in the visual modality, even though the CSs were identified at above-chance levels. Challenging this finding, Heycke and colleagues (2017) found some evidence for an EC effect with briefly presented visual stimuli in a cross-modal paradigm with auditory USs, but that study did not assess CS visibility. The present study attempted to replicate this EC effect with different stimuli and a CS visibility check. Overall, EC for briefly presented stimuli was absent, and results from the visibility check show that an EC effect with briefly presented CSs was found only when the CSs were identified at above-chance levels.


1975 · Vol 40 (1) · pp. 3-7
Author(s): Gerda Smets

Ss take more time to perceive interesting/displeasing stimuli than uninteresting/pleasing ones. This is consistent with the results of earlier experiments; however, we used a different operationalization of looking time, based on binocular rivalry. Each of six stimulus pairs was presented in a stereoscope, one member of each pair being interesting but displeasing in comparison to the other. Stimulus complexity was controlled. Owing to binocular rivalry, Ss perceived only one pattern at a time. Twenty Ss were asked to indicate which pattern they actually saw by pushing one of two buttons, and for each stimulus pair we registered how long each button was pushed during each of six successive minutes. Unlike other operationalizations, this one is less dependent on the S's decision about which stimulus to look at and for how long. It has the advantage of being bound up more exclusively with relations of similarity and dissimilarity between stimulus elements, and it allows exposure time to be manipulated in a systematic and continuous way. There was no significant interaction between looking time and exposure time.


1974 · Vol 38 (2) · pp. 417-418
Author(s): Robert Zenhausern, Claude Pompo, Michael Ciaiola

Simple and complex reaction times to visual stimuli were tested under 7 levels of accessory stimulation (white noise). Only the highest level of stimulation (70 dB above threshold) lowered reaction time; the other levels had no effect.


2011 · Vol 105 (2) · pp. 674-686
Author(s): Tetsuo Kida, Koji Inui, Emi Tanaka, Ryusuke Kakigi

Numerous studies have demonstrated effects of spatial attention within single sensory modalities (within-modal spatial attention) and effects of directing attention to one sense compared with the others (intermodal attention) on cortical neuronal activity. Furthermore, recent studies have revealed that the effects of spatial attention directed to a certain location in one sense spread to the other senses at the same location in space (cross-modal spatial attention). The present study used magnetoencephalography to examine the temporal dynamics of the effects of within-modal spatial, cross-modal spatial, and intermodal attention on cortical processes responsive to visual stimuli. Visual or tactile stimuli were randomly presented on the left or right side at random interstimulus intervals, and subjects directed attention to the left or right when vision or touch was the task-relevant modality. Sensor-space analysis showed that a response around the occipitotemporal region at around 150 ms after visual stimulation was significantly enhanced by within-modal spatial, cross-modal spatial, and intermodal attention. A later response over the right frontal region at around 200 ms was enhanced by within-modal spatial and intermodal attention, but not by cross-modal spatial attention. These effects were estimated to originate from the occipitotemporal and lateral frontal areas, respectively. Thus, the results suggest different spatiotemporal dynamics for the neural representations of cross-modal attention and of intermodal or within-modal attention.

