visual signals
Recently Published Documents


TOTAL DOCUMENTS

694
(FIVE YEARS 189)

H-INDEX

54
(FIVE YEARS 3)

eLife ◽  
2022 ◽  
Vol 11 ◽  
Author(s):  
Baohua Zhou ◽  
Zifan Li ◽  
Sunnie Kim ◽  
John Lafferty ◽  
Damon A Clark

Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically-constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli, and reproduces canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically-constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
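The looming geometry behind such training stimuli is simple to sketch: for an object of fixed radius approaching at constant speed, the angular size it subtends on the retina grows slowly at first and then diverges near collision. A minimal sketch with illustrative parameters (not the authors' stimulus code):

```python
import numpy as np

# Illustrative parameters (not from the study): a disc of radius R
# approaching the eye at constant speed v, colliding at t = 1.0 s.
R = 0.05                          # object radius (m)
v = 2.0                           # approach speed (m/s)
t = np.linspace(0.0, 0.95, 20)    # time samples before collision
d = v * (1.0 - t)                 # remaining distance to the eye (m)

# Angular size subtended by the object (radians); this is the looming
# signal that collision detectors like LPLC2 are sensitive to.
theta = 2.0 * np.arctan(R / d)

# Hallmark of looming: angular size grows, and grows ever faster.
assert np.all(np.diff(theta) > 0)
assert np.all(np.diff(theta, 2) > 0)
```

The accelerating expansion distinguishes a true collision course from, say, an object passing nearby, whose angular size peaks and then shrinks.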


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Emel Adamış ◽  
Fatih Pınarbaşı

Purpose: This study aims to explore visual social media (SM) (Instagram) communication and the visual characteristics of smart tourism destination (STD) communication from the destination marketing/management organization (DMO) and user-generated content (UGC) perspectives, which correspond to the projected image and the perceived image, respectively.
Design/methodology/approach: Three DMO official accounts of STDs (Helsinki, Gothenburg and Lyon) and the corresponding official hashtags were selected for the sample, and a total of 6,000 posts (1,000 × 6) were retrieved from Instagram. Visual communication content was examined with a netnographic design over a proposed four-level visual content framework, using a corresponding methodological approach (thematic analysis, visual analysis, object detection and text mining) for each level.
Findings: Among the eight themes that emerged as dominating the images, communication of smart elements carried far fewer textual and visual signals than expected from DMOs, despite their smart status, and in turn from UGC as well. UGC revealed three extra image themes regardless of smartness perception. DMOs tend to project and give voice to their standard metropolitan areas and neighborhoods, while UGC focuses on food-related and emotional elements. The findings show a partial overlap between DMOs and UGC, revealing discrepancies in the objects contained in visuals, hashtags and emojis. Additionally, as a rare attempt, the proposed framework for visual content analysis showed the importance of integrated methods for investigating visual content effectively.
Research limitations/implications: The number of attributes in the visual analysis and the focus on observed elements in text content (text, hashtags and emojis) are the methodological limitations of the study.
Originality/value: Apart from the multiple integrated methods used over a netnographic design, this study differs from the existing literature at the intersection of SM and smart destinations by focusing on and exploring visual SM communication, which is scarce in the tourism context, for content generated by DMOs and users.
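The text-mining level of a pipeline like this often starts with nothing more than pulling hashtags out of captions. A minimal sketch (the pattern is deliberately simplified; real Instagram hashtags admit a wider range of characters):

```python
import re

def extract_hashtags(caption: str) -> list[str]:
    # '#' followed by word characters; a simplified pattern for
    # illustration, not a full Instagram hashtag grammar.
    return re.findall(r"#(\w+)", caption)

caption = "Sunset over the harbour #Helsinki #visitfinland #travel"
print(extract_hashtags(caption))  # → ['Helsinki', 'visitfinland', 'travel']
```

Frequency counts over the extracted tags (e.g. with `collections.Counter`) then feed directly into the DMO-versus-UGC comparison.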


2022 ◽  
Vol 76 (1) ◽  
Author(s):  
Alessandro Gallo ◽  
Anna Zanoli ◽  
Marta Caselli ◽  
Ivan Norscia ◽  
Elisabetta Palagi

Play fighting, the most common form of social play in mammals, is a fertile field for investigating the use of visual signals in animal communication systems. Visual signals can be emitted exclusively during play (e.g. play faces, PF; context-dependent signals), or they can be released across several behavioural domains (e.g. lip-smacking, LS; context-independent signals). Rapid facial mimicry (RFM) is the involuntary, rapid, congruent facial response produced after perceiving others' facial expressions. RFM leads to behavioural and emotional synchronisation that often translates into the most balanced and longest playful interactions. Here, we investigate the role of playful communicative signals in geladas (Theropithecus gelada), analysing the PF and LS produced by wild immature geladas during play fighting. We found that PFs, but not LS, were particularly frequent during the riskiest interactions, such as those including individuals from different groups. Furthermore, we found that RFM (PF→PF) was highest when playful offensive patterns were not biased towards one of the players and when the session was punctuated by LS. From this perspective, the presence of context-independent signals such as LS may be useful in creating an affiliative mood that enhances communication and facilitates the most cooperative interactions. Indeed, we found that the sessions punctuated by the highest frequency of RFM and LS were also the longest ones. Whether the complementary use of PF and LS is strategically guided by the audience or is the result of the emotional arousal experienced by the players remains to be investigated.
Significance Statement: Facial expressions and their rapid replication by an observer are fundamental communicative tools during social contacts in human and non-human animals. Play fighting is one of the most complex forms of social interaction and can easily lead to misunderstanding if not modulated through an accurate use of social signals.
Wild immature geladas are able to manage their play sessions so as to limit the risk of aggressive escalation. While playing with unfamiliar subjects belonging to other groups, they make use of a high number of play faces. Moreover, geladas frequently replicate others' play faces and emit facial expressions of positive intent (i.e. lip-smacking) when engaging in well-balanced, long play sessions. From this perspective, this "playful facial chattering" creates an affiliative mood that enhances communication and facilitates the most cooperative interactions.
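Operationally, rapid facial mimicry can be scored from time-stamped behavioural logs as a congruent play face produced shortly after the partner's play face. A sketch (the 1 s window and the exact criterion here are assumptions for illustration, not necessarily the study's coding scheme):

```python
def count_rfm_events(emitter_pf_times, observer_pf_times, window_s=1.0):
    # A rapid-facial-mimicry event: the observer produces a play face
    # within `window_s` seconds after one from the emitter.
    events = 0
    for t_e in emitter_pf_times:
        if any(0.0 < t_o - t_e <= window_s for t_o in observer_pf_times):
            events += 1
    return events

# Play-face onsets (seconds) for two players in one session:
print(count_rfm_events([2.0, 10.0, 15.0], [2.4, 11.5, 15.8]))  # → 2
```

Dividing the event count by the number of emitter play faces gives a per-session RFM rate that can be compared across balanced and unbalanced sessions.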


2021 ◽  
Author(s):  
Mate Aller ◽  
Heidi Solberg Okland ◽  
Lucy J MacGregor ◽  
Helen Blank ◽  
Matthew H. Davis

Speech perception in noisy environments is enhanced by seeing the facial movements of communication partners. However, the neural mechanisms by which auditory and visual speech are combined are not fully understood. We explored phase locking to auditory and visual signals in MEG recordings from 14 human participants (6 female) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information was present in the auditory, visual or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2–6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only (AO) speech matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus (STG) did not show above-chance partial coherence with visual speech signals during AV conditions, but did show partial coherence in visual-only (VO) conditions. Hence, visual speech enabled stronger phase locking to auditory signals in visual areas, whereas phase locking to visual speech in auditory regions occurred only during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.
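The core measurement here, coherence between a slow speech envelope and a neural signal, can be illustrated with synthetic data (a sketch of the principle only, not the study's pipeline, which used partial coherence on source-localised MEG):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 200.0                        # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)      # 60 s of simulated data

# A shared 4 Hz "syllable-rate" component plus independent noise in
# each channel stands in for the speech envelope and the MEG response.
shared = np.sin(2 * np.pi * 4 * t)
envelope = shared + 0.5 * rng.standard_normal(t.size)
neural = shared + 0.5 * rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence across frequencies.
f, Cxy = coherence(envelope, neural, fs=fs, nperseg=512)
peak_freq = f[np.argmax(Cxy)]
assert 2.0 < peak_freq < 6.0      # peak falls in the syllable-rate band
```

Partial coherence extends this by regressing out a third signal (e.g. the auditory envelope) before computing coherence, isolating the unique visual contribution.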


2021 ◽  
Author(s):  
Jonathan D Blount ◽  
Hannah M Rowland ◽  
Christopher Mitchell ◽  
Michael P Speed ◽  
Graeme D Ruxton ◽  
...  

In a variety of aposematic species, the conspicuousness of an individual's warning signal and the quantity of its chemical defence are positively correlated. This apparently honest signalling in aposematism is predicted by resource competition models, which assume that the production and maintenance of aposematic defences compete for access to antioxidant molecules that have dual functions: as pigments directly responsible for colouration and in protecting against oxidative lipid damage. Here we study a model aposematic system, the monarch butterfly (Danaus plexippus), and make use of the variable phytochemistry of its larval host plants, milkweeds (Asclepiadaceae), to manipulate the concentration of sequestered cardenolides. We test two fundamental assumptions of resource competition models: (1) that the possession of secondary defences is associated with costs in the form of oxidative lipid damage and reduced antioxidant defences; and (2) that oxidative damage or decreases in antioxidant defences can reduce the capacity of individuals to produce aposematic displays. Monarch caterpillars that sequestered the highest concentrations of cardenolides exhibited higher levels of oxidative lipid damage as adults. The relationship between warning signals, cardenolide concentrations and oxidative damage differed between the sexes. In male monarchs, conspicuousness was explained by an interaction between oxidative damage and sequestration: as males sequester more cardenolides, those with high levels of oxidative damage become less conspicuous, while those that sequester lower levels of cardenolides invest equally in conspicuousness with increasing oxidative damage. There was no significant effect of oxidative damage or of the concentration of sequestered cardenolides on female conspicuousness.
Our results demonstrate a physiological linkage between the production of coloration and protection from autotoxicity, show that warning signals can be honest indicators of defensive capability, and reveal that these relationships differ between the sexes.


2021 ◽  
pp. 1-21
Author(s):  
Louise Tosetto ◽  
Jane E. Williamson ◽  
Thomas E. White ◽  
Nathan S. Hart 

Bluelined goatfish (<i>Upeneichthys lineatus</i>) exhibit dynamic body colour changes and transform rapidly from a pale, buff/white, horizontally banded pattern to a conspicuous, vertically striped, red pattern when foraging. This red pattern is potentially an important foraging signal for communication with conspecifics, provided that <i>U. lineatus</i> can detect and discriminate the pattern. Using both physiological and behavioural experiments, we first examined whether <i>U. lineatus</i> possess visual pigments with sensitivity to long (“red”) wavelengths of light, and whether they can discriminate the colour red. Microspectrophotometric measurements of retinal photoreceptors showed that while <i>U. lineatus</i> lack visual pigments dedicated to the red part of the spectrum, their pigments likely confer some sensitivity in this spectral band. Behavioural colour discrimination experiments suggested that <i>U. lineatus</i> can distinguish a red reward stimulus from a grey distractor stimulus of variable brightness. Furthermore, when presented with red stimuli of varying brightness they could mostly discriminate the darker and lighter reds from the grey distractor. We also obtained anatomical estimates of visual acuity, which suggest that <i>U. lineatus</i> can resolve the contrasting bands of conspecifics approximately 7 m away in clear waters. Finally, we measured the spectral reflectance of the red and white colouration on the goatfish body. Visual models suggest that <i>U. lineatus</i> can discriminate both chromatic and achromatic differences in body colouration where longer wavelength light is available. This study demonstrates that <i>U. lineatus</i> have the capacity for colour vision and can likely discriminate colours in the long-wavelength region of the spectrum where the red body pattern reflects light strongly. The ability to see red may therefore provide an advantage in recognising visual signals from conspecifics.
This research furthers our understanding of how visual signals have co-evolved with visual abilities, and the role of visual communication in the marine environment.
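A viewing-distance figure like the ~7 m above follows from simple geometry: a pattern is resolvable out to the distance at which one full cycle subtends the smallest resolvable angle. A sketch (the ~4 cm band-pair width is an assumption chosen for illustration, not a measurement from the study):

```python
import math

def max_resolving_distance(acuity_cpd: float, cycle_width_m: float) -> float:
    # Smallest resolvable angle for one full grating cycle, in radians.
    min_angle = math.radians(1.0 / acuity_cpd)
    # Distance at which one cycle of the pattern subtends that angle.
    return cycle_width_m / math.tan(min_angle)

# With a ~3 cpd acuity estimate and a hypothetical ~4 cm band pair,
# the pattern stops being resolvable at roughly the reported scale:
d = max_resolving_distance(3.0, 0.04)
print(round(d, 1))  # → 6.9 (metres)
```

This also makes clear why acuity and pattern scale trade off: doubling acuity, or doubling band width, doubles the signalling range.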


2021 ◽  
Vol 15 ◽  
Author(s):  
Sergio Delle Monache ◽  
Iole Indovina ◽  
Myrka Zago ◽  
Elena Daprati ◽  
Francesco Lacquaniti ◽  
...  

Gravity is a physical constraint all terrestrial species have adapted to through evolution. Indeed, gravity effects are taken into account in many forms of interaction with the environment, from the seemingly simple task of maintaining balance to the complex motor skills performed by athletes and dancers. Graviceptors, primarily located in the vestibular otolith organs, feed the Central Nervous System with information related to the gravity acceleration vector. This information is integrated with signals from semicircular canals, vision, and proprioception in an ensemble of interconnected brain areas, including the vestibular nuclei, cerebellum, thalamus, insula, retroinsula, parietal operculum, and temporo-parietal junction, in the so-called vestibular network. Classical views consider this stage of multisensory integration as instrumental to sort out conflicting and/or ambiguous information from the incoming sensory signals. However, there is compelling evidence that it also contributes to an internal representation of gravity effects based on prior experience with the environment. This a priori knowledge could be engaged by various types of information, including sensory signals like the visual ones, which lack a direct correspondence with physical gravity. Indeed, the retinal accelerations elicited by gravitational motion in a visual scene are not invariant, but scale with viewing distance. Moreover, the “visual” gravity vector may not be aligned with physical gravity, as when we watch a scene on a tilted monitor or in weightlessness. This review will discuss experimental evidence from behavioural, neuroimaging (connectomics, fMRI, TMS), and patient studies, supporting the idea that the internal model estimating the effects of gravity on visual objects is constructed by transforming the vestibular estimates of physical gravity, which are computed in the brainstem and cerebellum, into internalized estimates of virtual gravity, stored in the vestibular cortex.
The integration of the internal model of gravity with visual and non-visual signals would take place at multiple levels in the cortex and might involve recurrent connections between early visual areas engaged in the analysis of spatio-temporal features of the visual stimuli and higher visual areas in temporo-parietal-insular regions.
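The distance-scaling point can be made concrete with the small-angle approximation: a fronto-parallel fall viewed at distance D produces an angular acceleration on the retina of about g/D, so the same physical event looks ten times "slower" when viewed from ten times farther away. An illustrative sketch:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def retinal_acceleration(distance_m: float) -> float:
    # Small-angle approximation: angular position ~ x / D, so a
    # constant physical acceleration g maps to ~ g / D in rad/s^2.
    return G / distance_m

near = retinal_acceleration(2.0)    # watching a fall from 2 m
far = retinal_acceleration(20.0)    # the same fall from 20 m
assert math.isclose(near, 10 * far)
```

This is why retinal input alone cannot specify gravitational motion: the brain must combine it with an internalized gravity estimate and some measure of viewing distance.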


Author(s):  
Satish Kumar Gupta ◽  
Ranjay Chakraborty ◽  
Pavan Kumar Verkicharla

The stretching of a myopic eye is associated with several structural and functional changes in the retina and the posterior segment of the eye. Recent research highlights the role of retinal signaling in ocular growth. Evidence from studies conducted on animal models and humans suggests that the visual mechanisms regulating refractive development are primarily localized in the retina and that visual signals from the retinal periphery are also critical for visually guided eye growth. It is therefore important to study the structural and functional changes of the retina in relation to refractive errors. This review focuses on electroretinogram (ERG) changes in myopia and their implications for understanding the nature of retinal functioning in myopic eyes. Based on the available literature, we discuss the fundamentals of retinal neurophysiology in the regulation of vision-dependent ocular growth and the findings of studies that investigated global and localized retinal function in myopia using various types of ERGs.


Author(s):  
Susan Nittrouer ◽  
Joanna H. Lowenstein

Purpose: It is well recognized that adding the visual to the acoustic speech signal improves recognition when the acoustic signal is degraded, but how that visual signal affects postrecognition processes is not so well understood. This study was designed to further elucidate the relationships among auditory and visual codes in working memory, a postrecognition process. Design: In a main experiment, 80 young adults with normal hearing were tested using an immediate serial recall paradigm. Three types of signals were presented (unprocessed speech, vocoded speech, and environmental sounds) in three conditions (audio-only, audio–video with dynamic visual signals, and audio–picture with static visual signals). Three dependent measures were analyzed: (a) magnitude of the recency effect, (b) overall recall accuracy, and (c) response times, to assess cognitive effort. In a follow-up experiment, 30 young adults with normal hearing were tested largely using the same procedures, but with a slight change in order of stimulus presentation. Results: The main experiment produced three major findings: (a) unprocessed speech evoked a recency effect of consistent magnitude across conditions; vocoded speech evoked a recency effect of similar magnitude to unprocessed speech only with dynamic visual (lipread) signals; environmental sounds never showed a recency effect. (b) Dynamic and static visual signals enhanced overall recall accuracy to a similar extent, and this enhancement was greater for vocoded speech and environmental sounds than for unprocessed speech. (c) All visual signals reduced cognitive load, except for dynamic visual signals with environmental sounds. The follow-up experiment revealed that dynamic visual (lipread) signals exerted their effect on the vocoded stimuli by enhancing phonological quality. 
Conclusions: Acoustic and visual signals can combine to enhance working memory operations, but the source of these effects differs for phonological and nonphonological signals. Nonetheless, visual information can support better postrecognition processes for patients with hearing loss.
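The first dependent measure, the magnitude of the recency effect, is a contrast on the serial-position curve. One common operationalisation (an assumption here; the authors' exact computation may differ) is final-position accuracy minus mean mid-list accuracy:

```python
def recency_effect(accuracy_by_position: list[float]) -> float:
    # Recency effect: how much better the final list item is recalled
    # than the middle items (first item excluded to avoid primacy).
    middle = accuracy_by_position[1:-1]
    return accuracy_by_position[-1] - sum(middle) / len(middle)

# Illustrative serial-position curve for a 6-item list:
curve = [0.90, 0.60, 0.55, 0.50, 0.55, 0.80]
print(round(recency_effect(curve), 2))  # → 0.25
```

A recency effect of consistent size across audio-only and audio-visual conditions, as found for unprocessed speech, indicates that the phonological code supporting recall was unaffected by the visual signal.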


Author(s):  
Eleanor M. Caves ◽  
Fanny de Busserolles ◽  
Laura A. Kelley

Among fishes in the family Poeciliidae, signals such as colour patterns, ornaments, and courtship displays play important roles in mate choice and male-male competition. Despite this, visual capabilities in poeciliids are understudied, in particular visual acuity, the ability to resolve detail. We used three methods to quantify visual acuity in male and female green swordtails (Xiphophorus helleri), a species in which body size and the length of the male's extended caudal fin (‘sword’) serve as assessment signals during mate choice and agonistic encounters. The topographic distribution of retinal ganglion cells (RGCs) was similar in all individuals and characterized by areas of high cell density located centro-temporally and nasally, as well as a weak horizontal streak. Based on the peak density of RGCs in the centro-temporal area, anatomical acuity was estimated to be approximately 3 cycles/degree (cpd) in both sexes. However, a behavioural optomotor assay found significantly lower mean acuity in males (0.8 cpd) than in females (3.0 cpd), which was not explained by differences in eye size between the sexes. An additional behavioural assay, in which we trained individuals to discriminate striped gratings from grey stimuli of the same mean luminance, also showed lower acuity in males (1–2 cpd) than in females (2–3 cpd). Thus, although retinal anatomy predicts identical acuity in males and females, two behavioural assays found higher acuity in females than in males, a sexual dimorphism that is rare outside of invertebrates. Overall, our results have implications for understanding how poeciliids perceive visual signals during mate choice and agonistic encounters.
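Anatomical acuity estimates of this kind typically combine peak RGC density with the eye's focal length (via Matthiessen's ratio for fish lenses) and the Nyquist sampling limit. A sketch with hypothetical numbers (not the study's measurements):

```python
import math

def anatomical_acuity_cpd(peak_rgc_density_mm2: float,
                          lens_radius_mm: float) -> float:
    # Posterior nodal distance via Matthiessen's ratio (~2.55 x lens radius).
    focal_length = 2.55 * lens_radius_mm
    # Retinal distance subtended by one degree of visual angle (mm/deg).
    mm_per_degree = 2 * math.pi * focal_length / 360.0
    # Linear RGC density along the retina, assuming a square lattice.
    cells_per_mm = math.sqrt(peak_rgc_density_mm2)
    # Nyquist limit: two samples (cells) are needed per grating cycle.
    return cells_per_mm * mm_per_degree / 2.0

# Hypothetical values: ~36,000 cells/mm^2 peak density, 1 mm lens radius.
print(round(anatomical_acuity_cpd(36_000, 1.0), 1))  # → 4.2 (cpd)
```

Because this calculation depends only on retinal sampling, it sets an upper bound; behavioural acuity can fall below it, as the male swordtails here illustrate.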

