Pattern Discrimination
Recently Published Documents

TOTAL DOCUMENTS: 485 (last five years: 25)
H-INDEX: 40 (last five years: 1)

2021
Author(s): Robin Fraser Patchett

To test the hypothesis that prior patterned or varied auditory experience is necessary for the development of auditory frequency discrimination and auditory pattern discrimination, groups of Sprague-Dawley albino rats were deprived of patterned sound from birth by the novel technique of rearing them in 'white' noise. The sound-deprived rats learned a frequency discrimination as easily as controls reared in varied sound conditions, but performed worse on an auditory pattern discrimination task. Supporting experiments showed that this inferiority was unlikely to be due to the deprived animals' emotional state at the time of testing, or to a general deficit in learning to respond in a discrimination task relative to non-deprived controls. Open-field testing showed that the sound-deprived subjects did not differ from non-deprived controls in 'emotionality', and the sound-deprived rats were also not inferior to controls on a complex visual discrimination task. Further experiments explored how the duration of patterned-sound deprivation, and its timing within the rat's life cycle, affected auditory pattern discrimination. These results favoured the explanation that the development of patterned auditory discrimination depends simply on prior experience with varied sound, rather than on varied sound experience during a particular sensitive period in the rat's life. The research comprised seven experiments in all, whose findings parallel those of investigators studying the effects of patterned-light deprivation on visual discrimination. By supplying evidence from another sensory modality, the present experiments support generalizations about the role of prior experience in later behaviour that were previously based largely on experiments in the visual mode.


2021
Vol 11 (1)
Author(s): Muhammad Islam, Zahid Shafiq, Fazal Mabood, Hakikulla H. Shah, Vandita Singh, ...

Abstract
New-generation chemosensors call for small organic molecules that are easy to synthesise and cost-effective. As a new interdisciplinary area of research, the integration of such chemosensors into keypad locks and other advanced communication protocols is becoming increasingly popular. Our lab has developed new chemosensor probes containing 2-nitro- (1–3) and 4-fluoro-cinnamaldehyde (4–6) moieties and applied them to anion recognition and sensing. Probes 1–6 are colorimetric sensors for naked-eye detection of AcO−/CN−/F−, and probes 4–6 can additionally differentiate F− from AcO−/CN− in acetonitrile. Density functional theory (DFT) calculations confirmed that probes 1–6 act as effective chemosensors. Using probe 5 as a chemosensor, we explored the colorimetric recognition of multiple anions in more detail: probe 5 was tested with a combinatorial approach to demonstrate its pattern-generation capability and its ability to distinguish among chemical inputs on the basis of concentration. After pattern discrimination using principal component analysis (PCA), we examined anion selectivity using DFT computations. In our study, probe 5 demonstrates excellent performance as a chemosensor and shows promise as a future molecular-level keypad lock system.
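To make the PCA step concrete, here is a minimal sketch of discriminating anion response patterns with principal component analysis. The absorbance values, wavelength count, and concentration labels are hypothetical placeholders, not the paper's data; the point is only how well-separated clusters in PC space indicate discrimination.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical absorbance responses of a probe to three anions at two
# concentrations: rows = samples, columns = readings at three wavelengths.
# Real data would come from the probe's UV-Vis spectra.
responses = np.array([
    [0.12, 0.85, 0.33],   # F-   (low conc.)
    [0.15, 0.91, 0.36],   # F-   (high conc.)
    [0.55, 0.20, 0.71],   # AcO- (low conc.)
    [0.58, 0.24, 0.77],   # AcO- (high conc.)
    [0.80, 0.40, 0.10],   # CN-  (low conc.)
    [0.84, 0.45, 0.12],   # CN-  (high conc.)
])
labels = ["F- lo", "F- hi", "AcO- lo", "AcO- hi", "CN- lo", "CN- hi"]

# Project onto the first two principal components; distinct clusters per
# anion indicate that the optical response discriminates the inputs.
pca = PCA(n_components=2)
scores = pca.fit_transform(responses)
for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label:8s} PC1={pc1:+.3f}  PC2={pc2:+.3f}")
print("explained variance:", pca.explained_variance_ratio_)
```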


Author(s): Valéria M. M. Gimenez, Patrícia M. Pauletti, Ana Carolina Sousa Silva, Ernane José Xavier Costa

2021
Vol 11 (7), pp. 3066
Author(s): Zhikang Fu, Jun Li, Guoqing Chen, Tianbao Yu, Tiansheng Deng

In the era of big data, the mass of harmful multimedia resources publicly available on the Internet poses a serious threat to children and adolescents. In particular, recognizing pornographic videos is of great importance for protecting the mental and physical health of minors. In contrast to conventional methods, which rely only on an image classifier and ignore the audio cues in a video, we propose a unified deep architecture termed PornNet that integrates two sub-networks for pornographic video recognition. More specifically, image frames and audio cues extracted from the videos are delivered to two separate deep networks for pattern discrimination. For discriminating pornographic frames, we propose a local-context-aware network that takes the image context into account when capturing the key contents, while we leverage an attention network that captures temporal information for recognizing pornographic audio. The recognition scores generated by the two sub-networks are then combined within the unified architecture by a pre-defined aggregation function that produces the whole-video recognition result. Experiments on our newly collected large dataset demonstrate that the proposed method performs promisingly, achieving an accuracy of 93.4% on a dataset comprising 1 k pornographic samples along with 1 k normal videos and 1 k sexy videos.
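The abstract does not spell out the aggregation function, so the sketch below shows one plausible pre-defined choice rather than PornNet's actual design: average the top-k most suspicious frame scores, then take a convex combination with the audio sub-network's score. The function name, weights, and score values are all hypothetical.

```python
import numpy as np

def aggregate_video_score(frame_scores, audio_score, alpha=0.7, top_k=5):
    """Fuse per-frame scores with an audio-level score (hypothetical
    aggregation, not necessarily the paper's): mean of the top-k frame
    scores, blended with the audio score by weight alpha."""
    top_frames = np.sort(np.asarray(frame_scores))[-top_k:]
    visual_score = top_frames.mean()
    return alpha * visual_score + (1.0 - alpha) * audio_score

# Hypothetical sub-network outputs for one video clip.
frame_scores = [0.05, 0.10, 0.92, 0.88, 0.15, 0.95, 0.30]
audio_score = 0.40
score = aggregate_video_score(frame_scores, audio_score)
verdict = "pornographic" if score > 0.5 else "normal/sexy"
print(f"video score = {score:.3f} -> {verdict}")
```

A top-k pooling step like this is a common design choice for video-level decisions because a single clearly pornographic segment should dominate over many innocuous frames.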


2021
Vol 15
Author(s): Hiromichi Tsukada, Minoru Tsukada

The spatiotemporal learning rule (STLR), proposed on the basis of hippocampal neurophysiological experiments, differs essentially from the Hebbian learning rule (HEBLR) in its self-organization mechanism: information from the external world is self-organized by the firing (HEBLR) or non-firing (STLR) of output neurons. Here, we describe the differences between the two learning rules by simulating neural network models trained on relatively similar spatiotemporal context information. Comparing the weight distributions after training, the HEBLR shows a unimodal distribution near the training vector, whereas the STLR shows a multimodal distribution. We analyzed the shape of the weight distribution in response to temporal changes in contextual information and found that the HEBLR does not change the shape of its weight distribution for time-varying spatiotemporal contextual information, whereas the STLR is sensitive to slight differences in spatiotemporal contexts and produces a multimodal distribution. These results suggest a critical difference between the HEBLR and STLR in the dynamic change of synaptic weight distributions during contextual learning. They also capture the characteristics of pattern completion in the HEBLR and pattern discrimination in the STLR, which adequately explains the self-organization mechanism of contextual information learning.
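As a deliberately toy caricature of the contrast described above (not the authors' actual model), the sketch below gates the weight update on postsynaptic firing for the HEBLR and on the coincidence of successive presynaptic inputs for the STLR; the network size, thresholds, and learning rates are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                     # number of synapses (hypothetical)
w = rng.random(n)          # initial weights

def heblr_step(w, x, lr=0.05, theta=0.25):
    """HEBLR caricature: the update is gated by whether the OUTPUT
    neuron fires, so weights move toward inputs that drive the output."""
    fires = (w @ x) / len(x) > theta
    return w + lr * (x - w) if fires else w

def stlr_step(w, x_prev, x, lr=0.05, theta=0.25):
    """STLR caricature: the update is gated by the coincidence of
    successive PRESYNAPTIC input patterns (the spatiotemporal context),
    independent of whether the output neuron fires."""
    coincident = (x_prev @ x) / len(x) > theta
    return w + lr * (x - w) if coincident else w - 0.5 * lr * (x - w)

# One step on two similar context inputs.
x_prev = rng.random(n)
x = np.clip(x_prev + 0.05 * rng.standard_normal(n), 0.0, 1.0)
print("HEBLR:", heblr_step(w, x)[:4])
print("STLR: ", stlr_step(w, x_prev, x)[:4])
```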


2021
Author(s): HaDi MaBouDi, Mark Roper, Marie Guiraud, James A.R. Marshall, Lars Chittka

Abstract
Active vision, the ability of the visual system to actively sample and select relevant information from a visual scene through eye and head movements, has been explored in a variety of animal species. Small-brained animals such as insects may rely even more on sequential acquisition of pattern features, since their brains offer less parallel processing capacity than those of vertebrates. To investigate how active vision strategies enable bees to solve visual tasks, we employed a simple visual discrimination task in which individual bees were presented with a multiplication symbol and a 45° rotated version of the same pattern (a "plus sign"). High-speed videography of unrewarded tests and analysis of the bees' flight paths show that only a small region of the pattern is inspected before a target is successfully accepted or a distractor rejected. The bees' scanning behaviour differed between plus signs and multiplication signs, but for each pattern the flight behaviour was consistent irrespective of whether the pattern was rewarding or unrewarding. Bees typically oriented themselves at roughly ±30° to the patterns, such that only one eye had an unobscured view of the stimuli, and showed a significant preference for initially scanning the left side of the stimuli. Our results suggest that the bees' movement may be an integral part of a strategy to efficiently analyse and encode their environment.

Summary statement
Automated video tracking and flight analysis is proposed as the next milestone in understanding the mechanisms underpinning active vision and the cognitive visual abilities of bees.
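The kind of flight-path analysis described here reduces, per video frame, to computing the angle between the bee's body axis and its line of sight to the stimulus. The sketch below is a minimal, hypothetical version of that computation; the tracked coordinates, the thorax-to-head definition of the body axis, and the frame count are assumptions for illustration, not the study's pipeline.

```python
import numpy as np

def viewing_angles(head_xy, thorax_xy, stimulus_xy):
    """Signed angle (degrees) between the body axis (thorax -> head) and
    the line of sight (head -> stimulus), one value per tracked frame.
    Values clustering near +/-30 deg would match the lateralised
    scanning reported above."""
    body = head_xy - thorax_xy
    sight = stimulus_xy - head_xy
    ang = np.degrees(
        np.arctan2(sight[:, 1], sight[:, 0])
        - np.arctan2(body[:, 1], body[:, 0])
    )
    return (ang + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)

# Hypothetical three-frame track approaching a stimulus at the origin.
head = np.array([[-5.0, 2.0], [-4.0, 1.5], [-3.0, 1.0]])
thorax = np.array([[-6.0, 2.2], [-5.0, 1.9], [-4.0, 1.5]])
stim = np.zeros_like(head)
print(viewing_angles(head, thorax, stim))
```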


PLoS ONE
2021
Vol 16 (2), pp. e0247495
Author(s): Nina Coy, Maria Bader, Erich Schröger, Sabine Grimm

The human auditory system often relies on relative pitch information to extract and identify auditory objects, as when the same melody is played in different keys. The current study investigated the mental chronometry underlying the active discrimination of unfamiliar melodic six-tone patterns by measuring behavioural performance and event-related potentials (ERPs). In a roving standard paradigm, such patterns were either repeated identically within a stimulus train, carrying absolute frequency information about the pattern, or shifted in pitch (transposed) between repetitions, so that only relative pitch information was available for extracting the pattern identity. Results showed that participants were able to use relative pitch to detect when a new melodic pattern occurred, though in the absence of absolute pitch cues sensitivity decreased significantly and behavioural reaction times to pattern changes increased. Mismatch negativity (MMN), an ERP indicator of auditory deviance detection, was elicited approximately 206 ms after stimulus onset at frontocentral electrodes even when only relative pitch was available to inform pattern discrimination. A P3a, comparable in amplitude and latency, was elicited in both conditions. Increased latencies, but no amplitude differences, of the N2b and P3b suggest that higher-level processing is affected when, in the absence of absolute pitch cues, relative pitch has to be extracted to inform pattern discrimination. Interestingly, the behavioural response delay of approximately 70 ms is already fully manifest at the level of the N2b. This accords with recent findings on implicit auditory learning and suggests that, in the absence of absolute pitch cues, a slowing of target selection rather than of the auditory pattern-change detection process causes the deterioration in behavioural performance.
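The transposition manipulation rests on a simple property: coding a melody by its successive pitch intervals rather than by absolute frequencies makes it invariant under transposition. A short sketch of that relative-pitch code, with hypothetical tone frequencies standing in for the study's stimuli:

```python
import math

def semitone_intervals(freqs_hz):
    """Code a melodic pattern by its successive pitch intervals in
    semitones; transposition changes the absolute frequencies but
    leaves this relative-pitch code untouched."""
    return [round(12 * math.log2(b / a), 2)
            for a, b in zip(freqs_hz, freqs_hz[1:])]

# Hypothetical six-tone pattern and a repetition transposed up 3 semitones.
pattern = [440.0, 494.0, 440.0, 523.0, 494.0, 440.0]
transposed = [f * 2 ** (3 / 12) for f in pattern]

print(semitone_intervals(pattern))      # [2.0, -2.0, 2.99, -0.99, -2.0]
print(semitone_intervals(transposed))   # identical: same pattern identity
```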


2021
Vol 4 (1)
Author(s): Gregory Gauvain, Himanshu Akolkar, Antoine Chaffiol, Fabrice Arcizet, Mina A. Khoei, ...

Abstract
Vision restoration is an ideal medical application for optogenetics, because the eye provides direct optical access to the retina for stimulation. Optogenetic therapy could be used for diseases involving photoreceptor degeneration, such as retinitis pigmentosa or age-related macular degeneration. We describe here the selection, in non-human primates, of a specific optogenetic construct currently being tested in a clinical trial. We used the microbial opsin ChrimsonR and showed that the AAV2.7m8 vector had a higher transfection efficiency than AAV2 in retinal ganglion cells (RGCs), and that ChrimsonR fused to tdTomato (ChR-tdT) was expressed more efficiently than ChrimsonR alone. Light at 600 nm activated RGCs transfected with AAV2.7m8 ChR-tdT from an irradiance of 10¹⁵ photons·cm⁻²·s⁻¹ upwards. Vector doses of 5 × 10¹⁰ and 5 × 10¹¹ vg/eye transfected up to 7,000 RGCs/mm² in the perifovea, with no significant immune reaction. We recorded RGC responses from stimulus durations of 1 ms upwards. When using the recorded activity to decode stimulus information, we obtained an estimated visual acuity of 20/249, above the level of legal blindness (20/400). These results lay the groundwork for the ongoing clinical trial with the AAV2.7m8-ChR-tdT vector for vision restoration in patients with retinitis pigmentosa.
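The activation threshold quoted above is in photon-flux units; converting it to radiant power is a standard back-of-envelope step (e.g. when comparing against ocular light-safety limits). The conversion below uses only textbook physics (photon energy E = hc/λ) and the threshold from the abstract; the comparison use case is an assumption.

```python
# Convert the threshold irradiance (photons per cm^2 per s at 600 nm)
# into radiant power per unit area.
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 600e-9    # stimulation wavelength, m

photon_energy = h * c / wavelength        # ~3.3e-19 J per 600 nm photon
photon_flux = 1e15                        # photons / (cm^2 * s), from abstract
irradiance_w_cm2 = photon_flux * photon_energy

print(f"photon energy : {photon_energy:.3e} J")
print(f"irradiance    : {irradiance_w_cm2:.3e} W/cm^2 "
      f"({irradiance_w_cm2 * 1e3:.2f} mW/cm^2)")
```

This works out to roughly 0.3 mW/cm², which makes the photon-flux figure easier to relate to everyday light levels.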

