Panoramic spatial vision in the bay scallop Argopecten irradians

2021
Vol 288 (1962)
Author(s):  
Daniel R. Chappell ◽  
Tyler M. Horan ◽  
Daniel I. Speiser

We have a growing understanding of the light-sensing organs and light-influenced behaviours of animals with distributed visual systems, but we have yet to learn how these animals convert visual input into behavioural output. It has been suggested that they consolidate visual information early in their sensory-motor pathways, leaving them able to detect visual cues (spatial resolution) without being able to locate them (spatial vision). To explore how an animal with dozens of eyes processes visual information, we analysed the responses of the bay scallop Argopecten irradians to both static and rotating visual stimuli. We found that A. irradians distinguish between static visual stimuli in different locations by directing their sensory tentacles towards them, and were more likely to point their extended tentacles towards larger visual stimuli. We also found that scallops track rotating stimuli with individual tentacles and with rotating waves of tentacle extension. Our results show, to our knowledge for the first time, that scallops have both spatial resolution and spatial vision, indicating that their sensory-motor circuits include neural representations of their visual surroundings. Exploring a wide range of animals with distributed visual systems will help us learn the different ways non-cephalized animals convert sensory input into behavioural output.
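
As a purely illustrative aside, directional pointing of this kind is naturally quantified with circular statistics. The sketch below is a minimal Python example, with all names, values, and the 15° "pointing" criterion invented for illustration rather than taken from the paper:

```python
# Hypothetical sketch: scoring whether extended tentacles point toward a
# stimulus via the circular difference between bearings. The tolerance and
# example bearings are illustrative assumptions, not the paper's analysis.
import numpy as np

def angular_error(tentacle_deg: np.ndarray, stimulus_deg: float) -> np.ndarray:
    """Smallest signed angle (degrees) from each tentacle bearing to the stimulus."""
    return (tentacle_deg - stimulus_deg + 180.0) % 360.0 - 180.0

def fraction_pointing(tentacle_deg: np.ndarray, stimulus_deg: float,
                      tolerance_deg: float = 15.0) -> float:
    """Fraction of tentacles whose bearing is within +/- tolerance of the stimulus."""
    return float(np.mean(np.abs(angular_error(tentacle_deg, stimulus_deg)) <= tolerance_deg))

# Example: five extended tentacles, stimulus centred at 90 degrees around the mantle.
tentacles = np.array([80.0, 95.0, 100.0, 200.0, 350.0])
print(fraction_pointing(tentacles, 90.0))  # 0.6
```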

2021
pp. 1-23
Author(s):  
Hye-Jung CHO ◽  
Jieun KIAER ◽  
Naya CHOI ◽  
Jieun SONG

Abstract In the Korean language, questions containing ambiguous wh-words may be interpreted as either wh-questions or yes-no questions. This study investigated 43 Korean three-year-olds' ability to disambiguate eight indeterminate questions using prosodic and visual cues. The intonation of each question provided a cue as to whether it should be interpreted as a wh-question or a yes-no question. The questions were presented alongside picture stimuli, which acted either as a matched contextual cue (presentation of corresponding auditory-visual stimuli) or as a mismatched one (presentation of conflicting auditory-visual stimuli). Like adults, the children preferred to interpret questions involving ambiguous wh-words as wh-questions rather than as yes-no questions. In addition, the children were as effective as adults at disambiguating indeterminate questions using prosodic cues, regardless of the visual cue. However, when confronted with conflicting auditory-visual stimuli (mismatched), the children's responses were less accurate than the adults'.


2021
Author(s):  
Alison R Irwin ◽  
Suzanne T Williams ◽  
Daniel I Speiser ◽  
Nicholas W Roberts

All species within the conch snail family Strombidae possess large camera-type eyes that are surprisingly well developed compared to those found in most other gastropods. Although these eyes are known to be structurally complex, very little research on their visual function has been conducted. Here, we use isoluminant expanding visual stimuli to measure the spatial resolution and contrast sensitivity of a strombid, Conomurex luhuanus. Using these stimuli, we show that this species responds to objects as small as 1.06° in its visual field. We also show that C. luhuanus responds to Michelson contrasts of 0.07, a low luminance contrast between object and background. The defensive withdrawal response elicited by visual stimuli of such small angular size and low contrast suggests that conch snails may use spatial vision for the early detection of potential predators. We support these findings with morphological estimates of spatial resolution of 1.04 ± 0.14°. These anatomical data agree with the behavioural measures and highlight the benefits of integrating morphological and behavioural approaches in animal vision studies. Furthermore, using contemporary imaging techniques, including serial block-face scanning electron microscopy (SBF-SEM) in conjunction with transmission electron microscopy (TEM), we found that C. luhuanus has a more complex retina, in terms of cell-type diversity, than previous studies of the group had found using TEM alone. The C. luhuanus retina comprises six cell types, including a newly identified ganglion cell and an accessory photoreceptor, rather than the previously described four cell types.
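
For reference, Michelson contrast is defined from the maximum and minimum luminances in the stimulus, so the reported value of 0.07 corresponds to an object and background differing in luminance by only about 15%:

$$C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}, \qquad C = 0.07 \;\Rightarrow\; \frac{L_{\max}}{L_{\min}} = \frac{1 + C}{1 - C} \approx 1.15$$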


2019
Author(s):  
Clément Vinauger ◽  
Floris Van Breugel ◽  
Lauren T. Locke ◽  
Kennedy K.S. Tobin ◽  
Michael H. Dickinson ◽  
...  

Summary Mosquitoes rely on the integration of multiple sensory cues, including olfactory, visual, and thermal stimuli, to detect, identify and locate their hosts [1–4]. Although the role of chemosensory behaviours in mediating mosquito-host interactions is increasingly well understood [1], the role of visual cues remains comparatively understudied [3], and how olfactory and visual information is integrated in the mosquito brain remains unknown. In the present study, we used a tethered-flight LED arena, which allowed quantitative control over the stimuli, to show that CO2 exposure affects target-tracking responses but not responses to large-field visual stimuli. In addition, we show that CO2 modulates behavioural responses to visual objects in a time-dependent manner. To gain insight into the neural basis of this olfactory-visual coupling, we conducted two-photon microscopy experiments in a new GCaMP6s-expressing mosquito line. Imaging revealed that the majority of regions of interest (ROIs) in the lobula region of the optic lobe exhibited strong responses to small-field stimuli but little response to a large-field stimulus. Approximately 20% of the neurons we imaged were modulated when an attractive odour preceded the visual stimulus; these same neurons also responded weakly when the odour was presented alone. By contrast, imaging in the antennal lobe revealed no modulation when visual stimuli were presented before or after the olfactory stimulus. Together, our results are the first to reveal the dynamics of olfactory modulation of visually evoked behaviours in mosquitoes, and suggest that the coupling between these sensory systems is asymmetrical and time-dependent.
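
As context for this kind of calcium-imaging analysis, ROI responses from GCaMP indicators are conventionally scored as ΔF/F relative to a pre-stimulus baseline. Below is a minimal sketch of that convention; the array shapes, windows, and threshold are made-up assumptions, not the paper's actual pipeline:

```python
# Minimal sketch of the standard dF/F normalisation for imaged ROI traces.
# Baseline window, response window, and threshold are illustrative only.
import numpy as np

def delta_f_over_f(traces: np.ndarray, baseline_frames: int = 30) -> np.ndarray:
    """traces: (n_rois, n_frames) raw fluorescence.
    Returns (F - F0) / F0, with F0 the mean of the pre-stimulus window."""
    f0 = traces[:, :baseline_frames].mean(axis=1, keepdims=True)
    return (traces - f0) / f0

# Example: flag ROIs whose mean response in a post-stimulus window exceeds a threshold.
rng = np.random.default_rng(0)
traces = rng.normal(100.0, 2.0, size=(50, 120))
traces[:10, 40:80] += 20.0                       # ten artificially "responsive" ROIs
dff = delta_f_over_f(traces)
responsive = dff[:, 40:80].mean(axis=1) > 0.05
print(responsive.sum())                          # recovers the 10 responsive ROIs
```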


2019
Author(s):  
Le Wang ◽  
Devon Jakob ◽  
Haomin Wang ◽  
Alexis Apostolos ◽  
Marcos M. Pires ◽  
...  

Infrared chemical microscopy through mechanical probing of light-matter interactions by atomic force microscopy (AFM) bypasses the diffraction limit. One increasingly popular technique is photo-induced force microscopy (PiFM), which relies on mechanical heterodyne detection between the cantilever's mechanical resonant oscillations and the photo-induced force arising from light-matter interactions. So far, PiFM has been operated in only one heterodyne configuration. In this article, we generalize the heterodyne configurations of PiFM by introducing two new schemes: harmonic heterodyne detection and sequential heterodyne detection. In harmonic heterodyne detection, the laser repetition rate matches an integer fraction of the difference between the two mechanical resonant modes of the AFM cantilever; the corresponding high harmonic of the photothermal-expansion beating mixes with the cantilever oscillation to produce the PiFM signal. In sequential heterodyne detection, the combination of the laser pulse repetition rate and the polarization modulation frequency matches the difference between the two AFM mechanical modes, again yielding detectable PiFM signals. These two generalized heterodyne configurations open new avenues for chemical imaging and broadband spectroscopy at ~10 nm spatial resolution. They are suitable for a wide range of heterogeneous materials across various disciplines, from structured polymer films and polaritonic boron nitride to isolated bacterial peptidoglycan cell walls. The generalized heterodyne configurations add flexibility to the implementation of PiFM and the related tapping-mode AFM-IR, and provide an additional modulation channel in PiFM for targeted signal extraction with nanoscale spatial resolution.
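
To make the two frequency-matching conditions concrete, the sketch below computes candidate laser repetition rates from a pair of cantilever mode frequencies. All numerical values are invented for illustration, and reading the "combination" in the sequential scheme as a sum is an assumption on our part:

```python
# Illustrative frequency bookkeeping for the two heterodyne schemes above.
# Cantilever mode frequencies are made-up example values; a real instrument
# would use the measured resonances.
f0, f1 = 65_000.0, 410_000.0          # first and second cantilever modes (Hz), assumed
delta = f1 - f0                        # difference frequency targeted by heterodyning

# Harmonic heterodyne: repetition rate at an integer fraction of delta, so the
# n-th harmonic of the photothermal beating lands on the difference frequency.
n = 3
f_rep_harmonic = delta / n             # n * f_rep == f1 - f0

# Sequential heterodyne: repetition rate plus polarization-modulation frequency
# together make up the difference frequency (sum is an assumed reading).
f_pol = 20_000.0                       # polarization modulation (Hz), assumed
f_rep_sequential = delta - f_pol       # f_rep + f_pol == f1 - f0

print(f_rep_harmonic, f_rep_sequential)   # 115000.0 325000.0
```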


BMC Biology
2021
Vol 19 (1)
Author(s):  
Junko Yaguchi ◽  
Shunsuke Yaguchi

Abstract Background Light is essential for various biological activities. In particular, visual information acquired through eyes or eyespots is very important for most animals, and the functions and developmental mechanisms of visual systems have therefore been well studied. In addition, light-dependent non-visual systems expressing photoreceptor Opsins have been used to study the effects of light on diverse animal behaviors. However, it remains unclear how light-dependent systems were acquired and diversified during deuterostome evolution, owing to an almost complete lack of knowledge of the light-response signaling pathway in Ambulacraria, one of the major groups of deuterostomes and the sister group of chordates. Results Here, we show that sea urchin larvae utilize light for digestive tract activity. We found that photoirradiation of larvae induces pyloric opening even without the addition of food stimuli. Micro-surgical and knockdown experiments revealed that this stimulating light is received and mediated by Go(/RGR)-Opsin (Opsin3.2 in sea urchin genomes) cells around the anterior neuroectoderm. Furthermore, we found that the anterior neuroectodermal serotoninergic neurons near the Go-Opsin-expressing cells are essential for mediating the light-induced nitric oxide (NO) release at the pylorus. Our results demonstrate that a light>Go-Opsin>serotonin>NO pathway drives pyloric opening during larval stages. Conclusions These results will help us understand how light-dependent systems of pyloric opening, functioning via neurotransmitters, were acquired and established during animal evolution. Based on the similarity of nervous system patterns and gut proportions among Ambulacraria, we suggest that the light>pyloric opening pathway may be conserved in the clade, although this light signaling pathway has so far not been reported in other members of the group. In light of the brain-gut interactions previously found in vertebrates, we speculate that one primitive function of anterior neuroectodermal neurons (brain neurons) may have been to regulate the digestive tract in the common ancestor of deuterostomes. Given that food consumption and nutrient absorption are essential for animals, the acquisition and development of a sophisticated brain-based gut regulatory system might have been important for deuterostome evolution.


2021
Vol 11 (8)
pp. 3397
Author(s):  
Gustavo Assunção ◽  
Nuno Gonçalves ◽  
Paulo Menezes

Human beings have developed fantastic abilities to integrate information from various sensory sources, exploiting their inherent complementarity. Perceptual capabilities are thereby heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key in refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus region of the brain as the one responsible for this modality fusion, and a handful of biological models have been proposed to approximate its underlying neurophysiological process. Deriving inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability can have a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of unimodal perceptual fields. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
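
As a speculative illustration of this kind of architecture (not the paper's actual model), the PyTorch sketch below fuses unimodal embeddings with a bilinear layer, loosely standing in for the cross-mapping of unimodal perceptual fields, and scores each clip for an active speaker. All dimensions and layer choices are assumptions:

```python
# Hypothetical fusion of audio and visual embeddings for active speaker
# detection. The bilinear cross-mapping, sizes, and head are illustrative.
import torch
import torch.nn as nn

class CrossMappedFusion(nn.Module):
    def __init__(self, audio_dim: int = 128, visual_dim: int = 128, fused_dim: int = 64):
        super().__init__()
        # A bilinear layer pairs every audio unit with every visual unit,
        # loosely mimicking cross-mapping of unimodal perceptual fields.
        self.fuse = nn.Bilinear(audio_dim, visual_dim, fused_dim)
        self.head = nn.Linear(fused_dim, 1)   # active-speaker score per clip

    def forward(self, audio_emb: torch.Tensor, visual_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.relu(self.fuse(audio_emb, visual_emb))
        return torch.sigmoid(self.head(fused))

model = CrossMappedFusion()
score = model(torch.randn(4, 128), torch.randn(4, 128))  # batch of 4 clips
print(score.shape)  # torch.Size([4, 1])
```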


2021
Vol 11 (1)
Author(s):  
Stefano Rozzi ◽  
Marco Bimbi ◽  
Alfonso Gravante ◽  
Luciano Simone ◽  
Leonardo Fogassi

Abstract The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. It has been shown that VLPF neurons can show visual responses during paradigms requiring arbitrary visual cues to be associated with behavioral reactions. Further studies showed that there are also VLPF neurons responding to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to and differentiate between stimuli belonging to different categories, even in the absence of a specific requirement to actively categorize these stimuli or to exploit them for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons across a large sector of the VLPF to a wide set of visual stimuli when monkeys simply observe them. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when the objects are the target of the action. Thus, the second aim of the present study is to compare the visual responses of VLPF neurons when the same objects are simply observed and when they become the target of a grasping action. Our results indicate that: (1) some visually responsive VLPF neurons respond specifically to one stimulus or to a small set of stimuli, but there is no indication of a "passive" categorical coding; (2) VLPF neuronal visual responses to objects are often modulated by the task conditions in which the object is observed, with the strongest response occurring when the object is the target of an action. These data indicate that the VLPF performs an early, passive description of several types of visual stimuli, which can then be used for organizing and planning behavior. This could explain the modulation of the visual response in both associative learning and natural behavior.


2021
Author(s):  
Judith M. Varkevisser ◽  
Ralph Simon ◽  
Ezequiel Mendoza ◽  
Martin How ◽  
Idse van Hijlkema ◽  
...  

Abstract Bird song and human speech are learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee relations are normally not uni- but multimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback only or to audio playback combined with pixelated and time-reversed videos. However, higher engagement with the realistic audio–visual stimuli was not predictive of better song learning. Thus, although multimodality increased stimulus engagement and biologically relevant video content was more salient than colour- and movement-equivalent videos, the higher engagement with the realistic audio–visual stimuli did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction makes video tutors less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.

