object discrimination
Recently Published Documents

TOTAL DOCUMENTS: 196 (five years: 16)
H-INDEX: 32 (five years: 2)

Author(s): Kathryne M Allen, Angeles Salles, Sanwook Park, Mounya Elhilali, Cynthia F. Moss

The discrimination of complex sounds is a fundamental function of the auditory system. This operation must be robust in the presence of noise and acoustic clutter. Echolocating bats are auditory specialists that discriminate sonar objects in acoustically complex environments. Bats produce brief signals, interrupted by periods of silence, rendering echo snapshots of sonar objects. Sonar object discrimination requires that bats process spatially and temporally overlapping echoes to make split-second decisions. The mechanisms that enable this discrimination are not well understood, particularly in complex environments. We explored the neural underpinnings of sonar object discrimination in the presence of acoustic scattering caused by physical clutter. We performed electrophysiological recordings in the inferior colliculus (IC) of awake big brown bats in response to broadcasts of pre-recorded echoes from physical objects. We acquired single-unit responses to echoes and discovered a sub-population of IC neurons that encode acoustic features that can be used to discriminate between sonar objects. We further investigated the effects of environmental clutter on this population's encoding of acoustic features. We found that the effect of background clutter on sonar object discrimination is highly variable and depends on object properties and on the spatio-temporal separation between target and clutter. In many conditions, clutter impaired discrimination of sonar objects. In some instances, however, clutter enhanced acoustic features of echo returns, enabling higher levels of discrimination. This finding suggests that environmental clutter may augment acoustic cues used for sonar target discrimination, and it adds to a growing body of literature showing that noise is not universally detrimental to sensory encoding.


Vision, 2021, Vol 5 (1), pp. 14
Author(s): Hilary C. Pearson, Jonathan M. P. Wilbiks

Previous studies have focused on topics such as multimodal integration and object discrimination, but there is limited research on the effect of multimodal learning on memory. Perceptual studies have shown facilitative effects of multimodal stimuli on learning; the current study aims to determine whether this effect persists with memory cues. The purpose of this study was to investigate the effect that audiovisual memory cues have on memory recall, as well as whether the use of multiple memory cues leads to higher recall. The goal was to orthogonally evaluate the effects of the number of self-generated memory cues (one or three) and of their modality (visual: written words; auditory: spoken words; or audiovisual). A recall task was administered in which participants were presented with their self-generated memory cues and asked to determine the target word. There was a significant main effect of number of cues, but no main effect of modality. A secondary goal of this study was to determine which types of memory cues result in the highest recall; self-reference cues produced the highest accuracy scores. This study has applications for improving academic performance through the use of the most efficient learning techniques.


2021
Author(s): Nathan Katlein, Miranda Ray, Anna Wilkinson, Julien Claude, Maria Kiskowski, ...

Animals are exposed to different visual stimuli that influence how they perceive and interact with their environment. Visual information such as shape and colour can help an animal detect and discriminate objects and make appropriate behavioural decisions for mate selection, communication, camouflage, and foraging. In all major vertebrate groups, it has been shown that certain species can discriminate and prefer certain colours, and that colour may increase the response to a stimulus. However, because colour is often studied together with other potentially confounding factors, it is still unclear to what extent colour discrimination plays a crucial role in the perception of, and attention towards, biologically relevant and irrelevant stimuli. To address these questions in reptiles, we assessed the responses of three gecko species (Correlophus ciliatus, Eublepharis macularius, and Phelsuma laticauda) to familiar and novel 2D images in colour or grayscale. We found that while all species responded more often to the novel than to the familiar images, colour information did not influence object discrimination. We also found that the duration of interaction with the images was significantly longer for the diurnal species, P. laticauda, than for the two nocturnal species, but this was independent of colouration. Finally, no differences between the sexes were observed within or across species. Our results indicate that geckos discriminate between 2D images of different content independent of colouration, suggesting that colouration increases neither detectability nor the intensity of the response. These results are essential for uncovering which visual stimuli produce a response in animals and further our understanding of how animals use colouration and colour vision.


2021, Vol 15
Author(s): Matías Quiñones, David Gómez, Rodrigo Montefusco-Siegmund, María de la Luz Aylwin

A brief image presentation is sufficient to discriminate and individuate objects of expertise. Although perceptual expertise is acquired through extensive practice that increases the resolution of representations and reduces the latency of image decoding and of coarse and fine information extraction, it is not known how the stages of visual processing affect object discrimination learning (ODL). Here, we compared object discrimination with brief (100 ms) and long (1,000 ms) perceptual encoding times to test whether the early and late stages of visual processing are required for ODL. Moreover, we evaluated whether encoding time and discrimination practice shape perception and recognition memory processes during ODL. During practice of a sequential matching task with initially unfamiliar complex stimuli, we found greater discrimination with longer encoding times regardless of the extent of practice, suggesting that the fine information extraction of late visual processing is necessary for discrimination. Interestingly, overall discrimination learning was similar for brief and long stimuli, suggesting that the early stages of visual processing are sufficient for ODL. In addition, discrimination practice enhanced "perceive" and "know" judgments for both brief and long stimuli, and both processes were associated with performance, suggesting that early-stage information extraction is sufficient to modulate perceptual processes, likely reflecting an increase in the resolution of representations and an early availability of information. Conversely, practice elicited an increase in familiarity that was not associated with discrimination sensitivity, revealing the acquisition of a general recognition memory. Finally, recall was likely enhanced by practice and was associated with discrimination sensitivity for long encoding times, suggesting an engagement of recognition memory in a practice-independent manner.
These findings help unveil the function of the early stages of visual processing in ODL, and they provide evidence on how perception and recognition memory processes are modulated during discrimination practice and relate to ODL and the acquisition of perceptual expertise.
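The "discrimination sensitivity" measured in sequential matching tasks of this kind is conventionally quantified with the signal-detection index d′. A minimal sketch of that computation, assuming a same/different design with hits and false alarms (the authors' exact analysis is not specified here):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# e.g. 80% hits on "different" pairs, 20% false alarms on "same" pairs
print(round(d_prime(0.80, 0.20), 2))  # → 1.68
```

Unlike raw percent correct, d′ separates sensitivity from response bias, which is why it is the standard measure when practice may shift participants' decision criterion.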


2020, Vol 13 (4), pp. 706-717
Author(s): Kaavya Kanagaraj, Lakshmi Priya G.G.

Background: The proposed work builds on two approaches: (i) the Local Binary Pattern (LBP) and (ii) the Kirsch compass mask. Texture classification plays a vital role in discriminating objects in an image. LBP is the most widely used texture descriptor; filtering-based methods, the co-occurrence matrix method, and others have also been used, but LBP is generally preferred for its computational efficiency and its invariance to monotonic grey-level changes. Because edges also play a vital role in discriminating objects visually, the Kirsch compass mask is applied to obtain the maximum edge strength across 8 compass directions; unlike other compass masks, it can be adapted to the user's own requirements.

Objective: The objective of this work is to extract better features and model a classifier for the Multimedia Event Detection task.

Methods: Feature extraction proceeds in two steps: an LBP-based approach is used for object discrimination, and convolution with the Kirsch compass mask is then used to determine object edge magnitude. Eigenvalue decomposition is adopted for feature representation. Finally, a classifier with a chi-square kernel is modelled for the event classification task.

Results: The proposed event detection work is evaluated on the Columbia Consumer Video (CCV) dataset, which contains videos of 20 event classes, and is compared with existing works using mean Average Precision (mAP). Several experiments were carried out: LBP vs. non-LBP approaches, Kirsch vs. Robinson compass masks, and an angle-wise analysis of the Kirsch masks, all within the modelled classifier. Two protocols were used to compare the proposed work with existing works: (i) Non-Clustered Events (events considered individually, with a one-vs-one strategy) and (ii) Clustered Events (some events clustered with a one-vs-all strategy, the remaining events non-clustered).

Conclusion: A method for event detection is described. Features are extracted using an LBP-based approach together with Kirsch compass mask convolution, and a classifier model with a chi-square kernel performs event detection. The clustered-events approach further increases classification accuracy. Comparison with various state-of-the-art methods shows that the proposed work achieves outstanding performance.
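The three building blocks named in this abstract — LBP codes, Kirsch compass-mask edge strength, and a chi-square kernel — can be sketched as follows. This is an illustration following the standard textbook definitions of each operator, not the authors' implementation; all function names and parameter choices here are the sketch's own.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours of
    each pixel against the centre and pack the bits into one byte."""
    img = np.asarray(img, dtype=np.float32)
    codes = np.zeros(img.shape, dtype=np.uint8)
    # (dy, dx) neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        codes |= (neighbour >= img).astype(np.uint8) << bit
    return codes

def _rotate45(mask):
    """Rotate the outer ring of a 3x3 compass mask by 45 degrees."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2),
            (2, 2), (2, 1), (2, 0), (1, 0)]
    out = mask.copy()
    vals = [mask[i] for i in ring]
    for idx, val in zip(ring, vals[-1:] + vals[:-1]):
        out[idx] = val
    return out

def kirsch_edge_strength(img):
    """Maximum response over the 8 Kirsch compass directions
    (valid-mode convolution: output shrinks by 2 pixels per axis)."""
    img = np.asarray(img, dtype=np.float32)
    mask = np.array([[5, 5, 5],
                     [-3, 0, -3],
                     [-3, -3, -3]], dtype=np.float32)
    h, w = img.shape[0] - 2, img.shape[1] - 2
    best = np.full((h, w), -np.inf, dtype=np.float32)
    for _ in range(8):
        resp = np.zeros((h, w), dtype=np.float32)
        for i in range(3):
            for j in range(3):
                resp += mask[i, j] * img[i:i + h, j:j + w]
        best = np.maximum(best, resp)
        mask = _rotate45(mask)
    return best

def chi_square_kernel(x, y, gamma=1.0):
    """Exponential chi-square kernel between two feature histograms,
    as commonly used with SVM classifiers on histogram features."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.exp(-gamma * np.sum((x - y) ** 2 / (x + y + 1e-12))))
```

A typical pipeline under these assumptions would histogram the LBP codes of each frame, weight or concatenate them with the Kirsch edge-strength map, and feed the resulting feature vectors to a kernel classifier via `chi_square_kernel`.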


Author(s): Jun-ya Okamura, Jin Oshima, Reona Yamaguchi, Wakayo Yamashita, Gang Wang
