Visual Signal: Recently Published Documents

Total documents: 392 (last five years: 74)
H-index: 44 (last five years: 4)

eLife, 2022, Vol 11
Author(s): Baohua Zhou, Zifan Li, Sunnie Kim, John Lafferty, Damon A Clark

Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli and reproduces the canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
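Looming stimuli of the kind such models are trained on are conventionally parameterized by an object's size-to-speed ratio: an object of radius R approaching at constant speed v subtends an angle θ(t) = 2·atan(R / (v·(t_c − t))) that diverges as collision time t_c approaches. A minimal sketch of this standard parameterization (function and variable names are mine, not from the paper):

```python
import math

def loom_angle(R, v, t, t_collision):
    """Full angular size (radians) of an object of radius R approaching
    at constant speed v, observed at time t before collision at t_collision."""
    d = v * (t_collision - t)          # remaining distance to the observer
    return 2.0 * math.atan(R / d)

# Angular size grows nonlinearly, accelerating as collision approaches:
angles = [loom_angle(R=0.1, v=2.0, t=t, t_collision=1.0)
          for t in (0.0, 0.5, 0.9)]
assert angles[0] < angles[1] < angles[2]
```

This accelerating angular expansion is the signature that loom-sensitive circuits such as LPLC2 are thought to exploit.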


2021, Vol 6 (2), pp. 85-93
Author(s): Andi Danang Krismawan, Lekso Budi Handoko

Various types of video player applications are widely used by the community. With new versions and features appearing constantly, users must choose a video player that delivers good visual quality. The video type most often played is a file with an MP4 extension; this file type is not heavy but is typically used for long durations such as movies. In this paper, we use a dataset in the form of a movie file with an MP4 extension. The video player applications tested are VLC, QuickTime, PotPlayer, KMPlayer, Media Player Classic (MPC), DivX Player, ACG Player, Kodi, and MediaMonkey. Using several empirical measures, namely Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Measurement (SSIM), threshold F-ratio, Visual Signal to Noise Ratio (VSNR), Visual Quality Metric (VQM), and Multiscale Structural Similarity Index Measurement (MS-SSIM), we analyzed the visual capabilities of each video player application. Experimental results show that KMPlayer achieves the best visual results among the selected applications.
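Two of the listed metrics are directly related: PSNR is a logarithmic rescaling of MSE against the maximum pixel value. A minimal sketch, assuming 8-bit frames (peak = 255); frame extraction and the remaining metrics are out of scope here:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test frame."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical frames."""
    m = mse(ref, test)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref  = np.array([[50, 100], [150, 200]])
test = np.array([[52, 100], [150, 198]])
# identical frames -> infinite PSNR; larger error -> lower PSNR
assert psnr(ref, ref) == float('inf')
assert psnr(ref, test) > psnr(ref, test + 10)
```

Higher PSNR indicates a rendered frame closer to the reference, which is how the player comparison above ranks visual quality.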


2021, Vol 15
Author(s): Thorben Hülsdünker, David Riedel, Hannes Käsbauer, Diemo Ruhnow, Andreas Mierau

Although vision is the dominant sensory system in sports, many situations require multisensory integration. Faster processing of auditory information in the brain may facilitate time-critical abilities such as reaction speed; however, previous research was limited by generic auditory and visual stimuli that did not consider audio-visual characteristics in ecologically valid environments. This study investigated reaction speed in response to sport-specific monosensory (visual and auditory) and multisensory (audio-visual) stimulation. Neurophysiological analyses identified the neural processes contributing to differences in reaction speed. Nineteen elite badminton players participated in this study. In a first recording phase, the sound profile and shuttle speed of smash and drop strokes were captured on a badminton court using high-speed video cameras and binaural recordings. The speed and sound characteristics were transferred into auditory and visual stimuli and presented in a lab-based experiment, where participants reacted to sport-specific monosensory or multisensory stimulation. Auditory signal presentation was delayed by 26 ms to account for realistic audio-visual signal interaction on the court. N1 and N2 event-related potentials, as indicators of auditory and visual information perception/processing, respectively, were identified using a 64-channel EEG. Despite the 26 ms delay, auditory reactions were significantly faster than visual reactions (236.6 ms vs. 287.7 ms, p < 0.001) but still slower than reactions to multisensory stimulation (224.4 ms, p = 0.002). Across conditions, response times to smashes were faster than to drops (233.2 ms vs. 265.9 ms, p < 0.001). Faster reactions were paralleled by a lower latency and higher amplitude of the auditory N1 and visual N2 potentials. The results emphasize the potential of auditory information to accelerate reaction time in sport-specific multisensory situations.
This highlights auditory processes as a promising target for training interventions in racquet sports.
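The 26 ms audio delay corresponds to the acoustic travel time of the stroke sound over a court-scale distance (sound travels at roughly 343 m/s in air, while light travel time over the same distance is negligible). The ~8.9 m distance below is my back-calculation from the stated delay, not a figure given in the abstract:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C

def audio_delay_ms(distance_m, c=SPEED_OF_SOUND):
    """Milliseconds for a sound to travel distance_m; the visual signal
    over the same distance arrives essentially instantaneously."""
    return 1000.0 * distance_m / c

# A ~9 m opponent-to-player distance yields a delay close to the 26 ms
# used in the study:
delay = audio_delay_ms(8.9)
assert abs(delay - 26.0) < 1.0
```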


2021, Vol 15
Author(s): Hristofor Lukanov, Peter König, Gordon Pipa

While abundant in biology, foveated vision is nearly absent from computational models and especially deep learning architectures. Despite considerable hardware improvements, training deep neural networks still presents a challenge and constrains model complexity. Here we propose an end-to-end neural model for foveal-peripheral vision, inspired by retino-cortical mapping in primates and humans. Our model uses an efficient sampling technique for compressing the visual signal such that a small portion of the scene is perceived in high resolution while a large field of view is maintained in low resolution. An attention mechanism for performing “eye movements” assists the agent in collecting detailed information incrementally from the observed scene. Our model achieves results comparable to a similar neural architecture trained on full-resolution data for image classification and outperforms it on video classification tasks. At the same time, because of its smaller input size, it can reduce computational effort tenfold and uses several times less memory. Moreover, we present an easy-to-implement bottom-up and top-down attention mechanism that relies on task-relevant features and is therefore a convenient byproduct of the main architecture. Apart from its computational efficiency, the presented work provides a means for exploring active vision for agent training in simulated environments and anthropomorphic robotics.
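One common way to realize foveal-peripheral compression is log-polar sampling, in which sampling rings grow exponentially with eccentricity. The abstract does not state that the paper uses exactly this scheme, so the following is only an illustrative toy sampler:

```python
import numpy as np

def logpolar_sample(img, n_rings=16, n_wedges=32, r_min=1.0):
    """Toy retino-cortical sampling: dense near the center (fovea),
    exponentially sparser toward the periphery."""
    h, w = img.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx) - 1
    # ring radii grow exponentially -> log-polar mapping
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    ys = (cy + radii[:, None] * np.sin(thetas)).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)).round().astype(int)
    return img[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = logpolar_sample(img)
assert out.shape == (16, 32)   # 512 samples instead of 4096 pixels
```

For a 64x64 input, 16x32 samples keep full-resolution detail near the fixation point while still covering the whole field, an eightfold reduction that is consistent in spirit with the reported tenfold compute savings.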


2021
Author(s): Yukari Takeda, Kazuma Sato, Yukari Hosoki, Shuji Tachibanaki, Chieko Koike, ...

Retinal photoreceptor cells, rods and cones, convert photons of light into chemical and electrical signals as the first step of the visual transduction cascade. Although the chemical processes in the phototransduction system are very similar in the two photoreceptor types, the light sensitivity and time resolution of the photoresponse in rods differ functionally from those in cones. To systematically investigate how photoresponses are divergently regulated in rods and cones, we have developed a detailed mathematical model based on the Hamer model. The current model successfully reconstructed light intensity-, ATP- and GTP-dependent changes in concentrations of phosphorylated visual pigments (VPs), activated transducins (Tr*s) and phosphodiesterases (PDEs), as well as cyclic nucleotide-gated currents (ICNG) in rods and cones. In comparison to rods, the lower light sensitivity of cones was attributed not only to the lower affinity of activated VPs for Trs but also to the faster desensitization of the VPs. The assumption of an intermediate inactive state, MIIi, in the thermal decay of activated VPs was pivotal for inducing faster inactivation of VPs. In addition to the faster inactivation of VPs, assuming a faster rate of RGS9 intervention in PDE-induced Tr* inactivation in cones was indispensable for simulating the electrical waveforms of the light intensity-dependent ICNG at the higher temporal resolution observed in experimental systems in vivo.
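The qualitative effect of faster pigment desensitization can be illustrated with a deliberately simplified two-stage linear cascade; the actual model, based on Hamer's, has many more states and calibrated rate constants, so all rates below are arbitrary:

```python
import numpy as np

def cascade(flash, k_act=20.0, k_vp=5.0, k_tr=2.0, dt=1e-3, T=1.0):
    """Toy activation cascade: light -> activated visual pigment (VP*)
    -> activated transducin (Tr*), integrated with forward Euler.
    A larger k_vp (faster VP* shutoff) yields a smaller, briefer response."""
    steps = int(T / dt)
    vp, tr = 0.0, 0.0
    tr_trace = np.empty(steps)
    for i in range(steps):
        light = flash if i * dt < 0.01 else 0.0   # 10 ms flash
        vp += dt * (light - k_vp * vp)            # pigment activation/decay
        tr += dt * (k_act * vp - k_tr * tr)       # transducin activation/decay
        tr_trace[i] = tr
    return tr_trace

rod_like  = cascade(flash=1.0, k_vp=5.0)    # slow pigment shutoff
cone_like = cascade(flash=1.0, k_vp=50.0)   # fast pigment shutoff
assert cone_like.max() < rod_like.max()     # faster shutoff -> lower sensitivity
```

The same flash produces a smaller downstream response when pigment inactivation is fast, mirroring the rod/cone sensitivity difference the model attributes in part to faster VP desensitization in cones.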


2021, Vol 9
Author(s): Casper J. van der Kooi

Floral pigments are a core component of flower colors, but how much pigment a flower should have to yield a strong visual signal to pollinators is unknown. Using an optical model and taking white, blue, yellow and red flowers as case studies, I investigate how the amount of pigment determines a flower’s color contrast. Modeled reflectance spectra are interpreted using established insect color vision models. Contrast as a function of the amount of pigment shows a pattern of diminishing return. Low pigment amounts yield pale colors, intermediate amounts yield high contrast, and extreme amounts of pigment do not further increase, and sometimes even decrease, a flower’s color contrast. An intermediate amount of floral pigment thus yields the highest visibility, a finding that is corroborated by previous behavioral experiments on bees. The implications for studies on plant-pollinator signaling, intraspecific flower color variation and the costs of flower color are discussed.
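The diminishing-return shape can be reproduced with a toy Beer-Lambert treatment: light crosses the pigment layer, reflects off backing tissue, and crosses the layer again, so each extra unit of pigment absorbs a share of an ever-smaller remainder. This is only a caricature of the paper's optical and insect vision models; all numbers below are arbitrary:

```python
def flower_reflectance(pigment, backing_refl=0.6):
    """Reflectance of a petal: transmittance through the pigment layer
    (Beer-Lambert, 'pigment' = single-pass absorbance), squared for the
    down-and-back light path, times the backing reflectance."""
    transmit = 10.0 ** (-pigment)
    return backing_refl * transmit ** 2

def contrast(pigment, leaf_refl=0.3):
    """Michelson-style contrast of the flower against a leaf background."""
    fr = flower_reflectance(pigment)
    return abs(fr - leaf_refl) / (fr + leaf_refl)

# Each additional half-unit of pigment buys less extra contrast:
gains = [contrast(p + 0.5) - contrast(p) for p in (0.0, 0.5, 1.0, 1.5)]
```

In this toy the contrast saturates rather than declines at extreme pigment amounts; reproducing the decline reported in the paper would require the full optical and color vision models.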


2021
Author(s): Corentin Dupont, Claire Villemant, Tom Hattermann, Jeremie Pratviel, Laurence Gaume, ...

Sarracenia insectivorous plants show a diversity of visual features in their pitchers, but the perception of these features by insects and their role in attraction have received little attention. The species also vary in prey composition, with some trapping more flying Hymenoptera, such as bees. To test the hypothesis of a link between visual signal variability and prey segregation ability, and to identify which signals could attract flying Hymenoptera, we characterised the colour patterns of 32 pitchers belonging to four taxa, modelled their perception by flying Hymenoptera, and examined the prey they trapped. The pitchers of the four taxa differed in colour patterns; notably, two long-leaved taxa displayed clear areoles, which contrasted strongly in colour and brightness with the vegetative background and with other pitcher areas in the eyes of flying Hymenoptera. These taxa trapped a high proportion of flying Hymenoptera. This suggests that contrasting areoles may act as a visual lure for flying Hymenoptera, making the plants particularly visible to these insects. Prey capture also differed according to pitcher stage, morphology, season and visual characteristics. Further studies on prey visitation are needed to better understand the link between prey capture and attractive features.


2021, Vol 11 (1)
Author(s): Genting Liu, Qike Wang, Xianhui Liu, Xinyu Li, Xiunan Pang, ...

Antennae and maxillary palps are the most important chemical reception organs of flies. So far, the morphology of the antennae and maxillary palps of flies of most feeding habits has been well described, except for that of the relatively rare aquatic predatory species. This study describes the sensilla on the antennae and maxillary palps of three aquatic predatory Lispe species: Lispe longicollis, L. orientalis and L. pygmaea. Types, distribution and density of sensilla are characterised via light and scanning electron microscopy. One type of mechanoreceptor is found on the antennal scape. Mechanoreceptors (two subtypes) and a single pedicellar button (in L. pygmaea) are located on the antennal pedicel. Four types of sensilla are found on the antennal postpedicel: trichoid sensilla, basiconic sensilla (three subtypes), coeloconic sensilla and clavate sensilla. A unique character of these Lispe species is that the coeloconic sensilla are distributed sparsely on the antennal postpedicel. Mechanoreceptors and basiconic sensilla are observed on the surface of the maxillary palps in all three species. We demonstrate clear sexual dimorphism of the maxillary palps in some of the Lispe species: unlike in most other Muscidae species, they are larger in males than in females. This, along with their courtship dance behaviour, suggests that the palps function as both chemical signal receivers and visual signal conveyers, which is among the few records of a chemical reception organ acting as a signal conveyer in insects.


2021
Author(s): Nicole Lopez, Theodore Stankowich

Most sexual weapons used in combat and in visual displays of dominance (e.g., antlers, horns) show positive allometry with body size, both in growth during development and in evolution across species, but allometry in species with more than one sexual weapon is unstudied. We examined the allometric relationships between body size and tusks (pure combat weapons) and/or antlers (both a visual signal and a combat weapon) in forty-three artiodactyl species, including the muntjacs (Muntiacinae), which uniquely have both antlers and tusks. We found that in Muntiacinae antler length scales with positive allometry against skull length, whereas tusk size scales isometrically, suggesting greater energy investment in antlers as signals over tusks as combat weapons when both are present. Interspecifically, we found that in species possessing only one weapon (either solely tusked or solely antlered) weapon size scales with positive allometry against body mass, and the latter relationship levels off at larger body sizes. In our tusk analysis, including the Muntiacinae species eliminated the positive allometric trend, resulting in an isometric relationship and suggesting that the possession of antlers negatively affects energy investment in tusks as weapons. Overall, our findings show that species possessing dual weapons invest energy disproportionately in the development and maintenance of their multiple weapons.
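Allometry of this kind is conventionally quantified as the slope of a log-log regression of trait size on body size: a slope above 1 indicates positive allometry, a slope near 1 isometry, below 1 negative allometry. A minimal sketch with made-up data (the paper's actual analysis is interspecific and phylogenetically structured):

```python
import numpy as np

def allometric_slope(body_size, trait_size):
    """OLS slope of log(trait) on log(body): >1 positive allometry,
    ~1 isometry, <1 negative allometry."""
    slope, _ = np.polyfit(np.log(body_size), np.log(trait_size), 1)
    return slope

skull  = np.array([10.0, 15.0, 20.0, 30.0])
antler = skull ** 1.8          # grows disproportionately with skull length
tusk   = 0.5 * skull           # grows proportionately (isometric)

assert allometric_slope(skull, antler) > 1.0               # positive allometry
assert abs(allometric_slope(skull, tusk) - 1.0) < 1e-6     # isometry
```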


2021, Vol 38 (4), pp. 1131-1139
Author(s): Shyamal S. Virnodkar, Vinod K. Pachghare, Virupakshagouda C. Patil, Sunil Kumar Jha

Water stress is the single most severe abiotic stress affecting the productivity of all crops globally. Hence, timely and accurate detection of water-stressed crops is necessary for high productivity. Agricultural crop production can be managed and enhanced by spatial and temporal evaluation of water-stressed crops through remotely sensed data. However, detecting water-stressed crops from remote sensing images is challenging: various factors affect spectral bands and vegetation indices (VIs) at the canopy and landscape scales, the water-stress detection threshold is crop-specific, and there is still no substantial agreement on their use as a pre-visual signal of water stress. This research exploits freely available remote sensing data and convolutional neural networks to perform semantic segmentation of water-stressed sugarcane crops. An architecture, 'DenseResUNet', is proposed for segmenting water-stressed sugarcane crops using an encoder-decoder approach. The novelty of the proposed approach lies in replacing the classical convolution operation in UNet with a dense block, whose layers are residual modules with dense connections. The proposed model achieved 61.91% mIoU and 80.53% accuracy in segmenting water-stressed sugarcane fields. This study compares the proposed architecture with UNet, ResUNet, and DenseUNet models, which achieve mIoU of 32.20%, 58.34%, and 53.15%, respectively. The results reveal that the model has the potential to identify water-stressed crops from remotely sensed data using deep learning techniques.
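The reported mIoU averages, over classes, the ratio of the intersection to the union of the predicted and ground-truth masks. A minimal sketch for binary (stressed / not-stressed) segmentation masks:

```python
import numpy as np

def mean_iou(pred, target, n_classes=2):
    """Mean intersection-over-union across classes, the segmentation
    metric used to compare DenseResUNet against the baselines."""
    ious = []
    for c in range(n_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                      # class absent in both masks
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
assert mean_iou(target, target) == 1.0
assert 0.0 < mean_iou(pred, target) < 1.0
```

A perfect prediction scores 1.0; the single mislabeled pixel above lowers both class IoUs, which is why mIoU is a stricter measure than raw pixel accuracy for imbalanced field masks.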

