Does colour impact attention towards 2D images in geckos?

2021 ◽  
Author(s):  
Nathan Katlein ◽  
Miranda Ray ◽  
Anna Wilkinson ◽  
Julien Claude ◽  
Maria Kiskowski ◽  
...  

Animals are exposed to different visual stimuli that influence how they perceive and interact with their environment. Visual information such as shape and colour can help the animal detect, discriminate, and make appropriate behavioural decisions for mate selection, communication, camouflage, and foraging. In all major vertebrate groups, it has been shown that certain species can discriminate and prefer certain colours and that colours may increase the response to a stimulus. However, because colour is often studied together with other potentially confounding factors, it is still unclear to what extent colour discrimination plays a crucial role in the perception of and attention towards biologically relevant and irrelevant stimuli. To address these questions in reptiles, we assessed the response of three gecko species (Correlophus ciliatus, Eublepharis macularius, and Phelsuma laticauda) to familiar and novel 2D images in colour or grayscale. We found that while all species responded more often to the novel than to the familiar images, colour information did not influence object discrimination. We also found that the duration of interaction with images was significantly longer for the diurnal species, P. laticauda, than for the two nocturnal species, but this was independent of colouration. Finally, no differences among sexes were observed within or across species. Our results indicate that geckos discriminate between 2D images of different content independent of colouration, suggesting that colouration does not increase detectability or intensity of the response. These results are essential for uncovering which visual stimuli produce a response in animals and furthering our understanding of how animals use colouration and colour vision.

2021 ◽  
Vol 11 (2) ◽  
pp. 674
Author(s):  
Marianna Koctúrová ◽  
Jozef Juhár

With the ever-progressing development in the field of computational and analytical science, the last decade has seen a substantial improvement in the accuracy of electroencephalography (EEG) technology. Studies have examined the possibility of using high-dimensional EEG data as a source for brain–computer interfaces. Applications of EEG brain–computer interfaces range from emotion recognition, simple computer/device control, and speech recognition up to intelligent prostheses. The research presented in this paper focused on the problem of speech activity detection using EEG data. The novel approach used in this research involved the use of visual stimuli, such as reading and colour naming, and signals of speech activity detectable by EEG technology. Our proposed solution is based on a shallow feed-forward artificial neural network with only 100 hidden neurons. Standard features such as signal energy, standard deviation, RMS, skewness, and kurtosis were calculated from the original signal from 16 EEG electrodes. A novel approach in the field of brain–computer interface applications was utilised to calculate an additional set of features from the minimum-phase signal. Our experimental results demonstrated speech detection F1 scores of 86.80% and 83.69% based on the analysis of EEG signals from single-subject and cross-subject models, respectively. The importance of these results lies in the novel utilisation of a mobile device to record the nerve signals, which can serve as a stepping stone for the transfer of brain–computer interface technology from a controlled environment to real-life conditions.
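The per-electrode feature set named in this abstract (energy, standard deviation, RMS, skewness, kurtosis over 16 electrodes) can be sketched as below; this is a minimal illustration assuming NumPy/SciPy, with the window length and synthetic data invented for the example, not taken from the study:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def extract_features(window):
    """Compute the five statistics named in the abstract for one EEG window.

    window: array of shape (n_electrodes, n_samples).
    Returns a flat vector with 5 features per electrode."""
    energy = np.sum(window ** 2, axis=1)          # signal energy
    std = np.std(window, axis=1)                  # standard deviation
    rms = np.sqrt(np.mean(window ** 2, axis=1))   # root mean square
    sk = skew(window, axis=1)                     # skewness
    ku = kurtosis(window, axis=1)                 # (excess) kurtosis
    return np.concatenate([energy, std, rms, sk, ku])

# Toy example: 16 electrodes, 256 samples of synthetic signal.
rng = np.random.default_rng(0)
features = extract_features(rng.standard_normal((16, 256)))
print(features.shape)  # (80,) = 5 features x 16 electrodes
```

A vector like this could then feed a shallow classifier such as scikit-learn's `MLPClassifier(hidden_layer_sizes=(100,))`, mirroring the 100-hidden-neuron network described, though the paper's exact training setup is not specified here.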


2021 ◽  
Author(s):  
Judith M. Varkevisser ◽  
Ralph Simon ◽  
Ezequiel Mendoza ◽  
Martin How ◽  
Idse van Hijlkema ◽  
...  

Bird song and human speech are learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee relations are normally not uni- but multimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback only or audio playback combined with pixelated and time-reversed videos. However, higher engagement with the realistic audio–visual stimuli was not predictive of better song learning. Thus, although multimodality increased stimulus engagement and biologically relevant video content was more salient than colour and movement equivalent videos, the higher engagement with the realistic audio–visual stimuli did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction make them less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.


Author(s):  
Mark Edwards ◽  
Stephanie C. Goodhew ◽  
David R. Badcock

The visual system uses parallel pathways to process information. However, an ongoing debate centers on the extent to which the pathways from the retina, via the lateral geniculate nucleus, to the visual cortex process distinct aspects of the visual scene and, if they do, whether laboratory stimuli can be used to drive them selectively. These questions are important for a number of reasons, including that some pathologies are thought to be associated with impaired functioning of one of these pathways and that certain cognitive functions have been preferentially linked to specific pathways. Here we examine the two main pathways that have been the focus of this debate: the magnocellular and parvocellular pathways. Specifically, we review the results of electrophysiological and lesion studies that have investigated their properties and conclude that while there is substantial overlap in the type of information that they process, it is possible to identify aspects of visual information that are predominantly processed by either the magnocellular or parvocellular pathway. We then discuss the types of visual stimuli that can be used to preferentially drive these pathways.


AAESPH Review ◽  
1979 ◽  
Vol 4 (2) ◽  
pp. 136-147 ◽  
Author(s):  
Harvey N. Switzky ◽  
Janet Woolsey-Hill ◽  
Therese Quoss

Twelve profoundly retarded, nonverbal, nonambulatory children were repeatedly exposed to one of two visual stimuli (a 2 × 2 or a 12 × 12 black-and-white checkerboard target) until a set criterion of habituation was demonstrated, as measured by a decrement in visual fixation time. When the habituation criterion was reached, the children were shown alternating presentations of the same and a novel target. Results showed an increase in visual fixation to the novel target. A control condition was also instituted, in which children who reached the habituation criterion were shown only presentations of the same target. Results showed no increase in visual fixation to the same targets. Together these results suggest that profoundly retarded children do show habituation and dishabituation to visual stimuli and are actively storing and processing information about their perceptual world. The educational implications of the habituation paradigm for the special education teacher in the classroom are discussed.


2020 ◽  
pp. 095679762095485
Author(s):  
Mathieu Landry ◽  
Jason Da Silva Castanheira ◽  
Jérôme Sackur ◽  
Amir Raz

Suggestions can cause some individuals to miss or disregard existing visual stimuli, but can they infuse sensory input with nonexistent information? Although several prominent theories of hypnotic suggestion propose that mental imagery can change our perceptual experience, data to support this stance remain sparse. The present study addressed this lacuna, showing how suggesting the presence of physically absent, yet critical, visual information transforms an otherwise difficult task into an easy one. Here, we show how adult participants who are highly susceptible to hypnotic suggestion successfully hallucinated visual occluders on top of moving objects. Our findings support the idea that, at least in some people, suggestions can add perceptual information to sensory input. This observation adds meaningful weight to theoretical, clinical, and applied aspects of the brain and psychological sciences.


i-Perception ◽  
2018 ◽  
Vol 9 (6) ◽  
pp. 204166951881570
Author(s):  
Sachiyo Ueda ◽  
Ayane Mizuguchi ◽  
Reiko Yakushijin ◽  
Akira Ishiguchi

To overcome limitations in perceptual bandwidth, humans condense various features of the environment into summary statistics. Variance is an index of the diversity within a category and of the reliability of the information about that diversity. Studies have shown that humans can efficiently perceive variance for visual stimuli; however, to enhance perception of environments, information about the external world can be obtained from multiple sensory modalities and integrated. Consequently, this study investigates, through two experiments, whether the precision of variance perception improves when visual information (size) and corresponding auditory information (pitch) are integrated. In Experiment 1, we measured the correspondence between visual size and auditory pitch for each participant by using adjustment measurements. The results showed a linear relationship between size and pitch—that is, the higher the pitch, the smaller the corresponding circle. In Experiment 2, sequences of visual stimuli were presented both with and without linked auditory tones, and the precision of perceived variance in size was measured. We consequently found that synchronized presentation of audio and visual stimuli that have the same variance improves the precision of perceived variance in size when compared with visual-only presentation. This suggests that audiovisual information may be automatically integrated in variance perception.
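The matched-variance construction described for Experiment 2 can be sketched numerically. In the sketch below, the size distribution, mapping slope, and intercept are hypothetical stand-ins (the study derived each participant's size–pitch mapping empirically); the point is only that a linear size-to-pitch mapping carries the visual sequence's variance structure into the auditory sequence, scaled by the square of the slope:

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = rng.normal(loc=100.0, scale=15.0, size=12)  # circle diameters (px), illustrative

# Hypothetical negative linear size->pitch mapping (higher pitch <-> smaller
# circle), echoing the relation reported in Experiment 1.
slope, intercept = -4.0, 840.0  # Hz per px and Hz, illustrative values
pitches = slope * sizes + intercept

# Under a linear mapping, variance scales by slope**2, so the tone sequence
# preserves the relative variance structure of the visual sequence.
print(np.isclose(np.var(pitches), slope ** 2 * np.var(sizes)))  # True
```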


1999 ◽  
Vol 81 (5) ◽  
pp. 2558-2569 ◽  
Author(s):  
Pamela Reinagel ◽  
Dwayne Godwin ◽  
S. Murray Sherman ◽  
Christof Koch

Encoding of visual information by LGN bursts. Thalamic relay cells respond to visual stimuli either in burst mode, as a result of activation of a low-threshold Ca2+ conductance, or in tonic mode, when this conductance is inactive. We investigated the role of these two response modes for the encoding of the time course of dynamic visual stimuli, based on extracellular recordings of 35 relay cells from the lateral geniculate nucleus of anesthetized cats. We presented a spatially optimized visual stimulus whose contrast fluctuated randomly in time with frequencies of up to 32 Hz. We estimated the visual information in the neural responses using a linear stimulus reconstruction method. Both burst and tonic spikes carried information about stimulus contrast, exceeding one bit per action potential for the highest variance stimuli. The “meaning” of an action potential, i.e., the optimal estimate of the stimulus at times preceding a spike, was similar for burst and tonic spikes. In within-trial comparisons, tonic spikes carried about twice as much information per action potential as bursts, but bursts as unitary events encoded about three times more information per event than tonic spikes. The coding efficiency of a neuron for a particular stimulus is defined as the fraction of the neural coding capacity that carries stimulus information. Based on a lower bound estimate of coding efficiency, bursts had ∼1.5-fold higher efficiency than tonic spikes, or 3-fold if bursts were considered unitary events. Our main conclusion is that both bursts and tonic spikes encode stimulus information efficiently, which rules out the hypothesis that bursts are nonvisual responses.
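The "meaning" of an action potential described above, the optimal estimate of the stimulus at times preceding a spike, is, in its simplest linear form, a spike-triggered average. The toy sketch below illustrates that idea only; the white-noise stimulus, exponential filter, and spiking threshold are all invented for the example and are not the recorded LGN data or the paper's full reconstruction method:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lags = 10_000, 20
stim = rng.standard_normal(n)  # white-noise "contrast" sequence

# Toy neuron: drive is the stimulus filtered by a decaying exponential,
# so spikes tend to follow recent high-contrast samples.
kernel = np.exp(-np.arange(lags) / 5.0)
drive = np.convolve(stim, kernel, mode="full")[:n]
spikes = (drive + 0.5 * rng.standard_normal(n)) > 2.0

# Spike-triggered average of the `lags` stimulus samples preceding each spike:
sta = np.zeros(lags)
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= lags]
for t in spike_times:
    sta += stim[t - lags:t]
sta /= len(spike_times)

# sta[-1] (the most recent sample) should be elevated, recovering the
# decaying shape of the filter that actually drove the spikes.
print(sta[-1] > sta[0])  # True
```

In this linear-reconstruction picture, comparing such stimulus estimates (and their reconstruction error) for burst versus tonic spikes is what yields per-spike and per-event information figures like those reported in the abstract.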


2012 ◽  
Vol 25 (0) ◽  
pp. 169
Author(s):  
Tomoaki Nakamura ◽  
Yukio P. Gunji

The majority of research on audio–visual interaction has focused on spatio-temporal factors and synesthesia-like phenomena. In particular, research on synesthesia-like phenomena has been advanced by Marks et al., who found a synesthesia-like correlation between the brightness and size of visual stimuli and the pitch of auditory stimuli (Marks, 1987). The main interest of research on synesthesia-like phenomena appears to be the perceptual similarities and differences between synesthetes and non-synesthetes. We hypothesized that cross-modal phenomena in non-synesthetes emerge, at the perceptual level, as a function that complements the absence or ambiguity of a certain stimulus. To verify this hypothesis, we investigated audio–visual interaction using the movement (speed) of an object as the visual stimulus and sine waves as the auditory stimuli. In this experiment, objects (circles) moved at a fixed speed in each trial and were masked at arbitrary positions, with auditory stimuli (high, middle, or low pitch) presented simultaneously with the disappearance of the objects. Subjects reported the expected position of the objects when the auditory stimuli stopped. Results showed a correlation between the reported position, i.e., the inferred movement speed of the object, and the pitch of the sound. We conjecture that cross-modal phenomena in non-synesthetes tend to occur when one of the sensory stimuli is absent or ambiguous.


1972 ◽  
Vol 50 (6) ◽  
pp. 777-780 ◽  
Author(s):  
Roger M. Evans ◽  
Mark E. Mattson

One-day-old domestic chicks responded selectively to individual maternal clucks that had previously been presented in association with a familiar visual stimulus. These results are interpreted as evidence that familiar visual stimuli can mediate the development of auditory discriminations between biologically relevant adult vocalizations. Further, such discriminations occur at a time when individual recognition of parental vocalizations is thought to be important for maintaining family units which are threatened with potential disruption after the development of locomotor ability in the precocial young.

