Feature detectors

Cortex ◽  
2011 ◽  
Vol 47 (5) ◽  
pp. 519-520
Author(s):  
Nicholas J. Wade
Author(s):  
Joel Z. Leibo ◽  
Tomaso Poggio

This chapter provides an overview of biological perceptual systems and their underlying computational principles, focusing on the sensory sheets of the retina and cochlea and exploring how complex feature detection emerges from combining simple feature detectors in a hierarchical fashion. We also explore how the microcircuits of the neocortex implement such schemes, pointing out similarities to progress in the field of machine vision driven by deep learning algorithms. We see signs that engineered systems are catching up with the brain: vision-based pedestrian detection systems are now accurate enough to be installed as safety devices in (for now) human-driven vehicles, and the speech recognition systems embedded in smartphones have become increasingly impressive. While such systems are not entirely biologically based, we note that computational neuroscience, as described in this chapter, makes up a considerable portion of their intellectual pedigree.
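The hierarchical scheme the chapter describes, in which rectified outputs of simple oriented detectors are pooled into position-tolerant "complex" responses, can be sketched in a few lines of NumPy. The filters, toy image, and max-pooling choice below are illustrative assumptions, not the chapter's own model:

```python
import numpy as np

def convolve2d_valid(img, k):
    """Minimal 'valid' 2-D cross-correlation (no SciPy dependency)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# "Simple cells": two oriented edge detectors.
vertical_edge = np.array([[-1.0, 0.0, 1.0]] * 3)
horizontal_edge = vertical_edge.T

# Toy image containing a vertical bar.
img = np.zeros((8, 8))
img[:, 4] = 1.0

# S-stage: rectified simple-cell responses at every position.
# C-stage: max over position gives a "complex cell" response that
# signals the preferred orientation regardless of where the bar is.
c_vert = np.abs(convolve2d_valid(img, vertical_edge)).max()
c_horz = np.abs(convolve2d_valid(img, horizontal_edge)).max()

# Shifting the bar leaves the pooled "complex" response unchanged.
img2 = np.zeros((8, 8))
img2[:, 2] = 1.0
c_vert_shifted = np.abs(convolve2d_valid(img2, vertical_edge)).max()
```

The same pattern, selective filtering followed by invariance-building pooling, stacked layer upon layer, is the hierarchical motif the chapter relates to both cortical microcircuits and deep learning.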


2019 ◽  
Vol 116 (16) ◽  
pp. 7723-7731 ◽  
Author(s):  
Dmitry Krotov ◽  
John J. Hopfield

It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can be used to train higher-layer weights in a usual supervised way so that the performance of the full network is comparable to the performance of standard feedforward networks trained end-to-end with a backpropagation algorithm on simple tasks.
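As an illustration of the two ingredients the abstract names, local Hebbian-style updates and global inhibition in the hidden layer, here is a deliberately simplified winner-take-all sketch in NumPy. It is not the rule proposed in the paper (which uses a more elaborate plasticity rule with anti-Hebbian competition between hidden units); each weight change here depends only on the presynaptic activity and the winning unit's own weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_local(X, n_hidden=2, eta=0.1, epochs=50):
    # Initialize each hidden unit from a sample input (avoids dead units).
    W = X[:n_hidden].copy()
    for _ in range(epochs):
        for x in X:
            k = np.argmax(W @ x)      # global inhibition: only the winner stays active
            W[k] += eta * (x - W[k])  # local, Hebbian-style update for the winner
    return W

# Two well-separated input clusters; unsupervised training should turn
# the two hidden units into detectors for them (weights near the centroids).
X = np.vstack([rng.normal([5.0, 0.0], 0.1, size=(20, 2)),
               rng.normal([0.0, 5.0], 0.1, size=(20, 2))])
W = train_local(X)
```

The learned weight vectors can then be frozen and used as a first layer whose outputs feed a conventionally supervised classifier, which is the training split the abstract describes.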


2008 ◽  
Vol 20 (5) ◽  
pp. 1261-1284 ◽  
Author(s):  
Cornelius Weber ◽  
Jochen Triesch

Current models for learning feature detectors work on two timescales: on a fast timescale, the internal neurons' activations adapt to the current stimulus; on a slow timescale, the weights adapt to the statistics of the set of stimuli. Here we explore the adaptation of a neuron's intrinsic excitability, termed intrinsic plasticity, which occurs on a third, separate timescale. Under this mechanism, a neuron maintains homeostasis of an exponentially distributed firing rate in a dynamic environment. We exploit this in the context of a generative model to impose sparse coding. With natural image input, localized edge detectors emerge as models of V1 simple cells. An intermediate timescale for the intrinsic plasticity parameters allows modeling aftereffects. In the tilt aftereffect, after a viewer adapts to a grid of a certain orientation, grids of a nearby orientation are perceived as tilted away from the adapted orientation. Our results show that adapting the neurons' gain parameter, but not the threshold parameter, accounts for this effect. It occurs because neurons coding for the adapting stimulus attenuate their gain, while others increase it. Despite its simplicity and low maintenance, the intrinsic plasticity model accounts for more experimental details than previous models without this mechanism.
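A minimal sketch of gain-based intrinsic plasticity, assuming a logistic rate function and a fixed target mean rate (the model above targets a full exponential rate distribution and also adapts a threshold, neither of which this sketch reproduces). The key asymmetry is the one the abstract uses to explain the tilt aftereffect: strongly driven neurons attenuate their gain, weakly driven ones increase it:

```python
import numpy as np

def adapt_gain(inputs, a=1.0, target=0.6, eta=0.05):
    """Multiplicative gain homeostasis: a neuron driven above its target
    mean rate attenuates its gain; one driven below raises it."""
    for x in inputs:
        y = 1.0 / (1.0 + np.exp(-a * x))  # logistic firing rate
        a *= np.exp(eta * (target - y))   # intrinsic plasticity acting on the gain
    return a

# A neuron repeatedly exposed to a strong "adapting" stimulus ends up with
# a lower gain than one that saw only weak drive over the same period.
strong = adapt_gain(np.full(200, 3.0))
weak = adapt_gain(np.full(200, 0.1))
```

On the intermediate timescale, this gain difference skews the population response to a subsequently presented nearby orientation away from the adapted one, producing the perceived tilt.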


Perception ◽  
1984 ◽  
Vol 13 (6) ◽  
pp. 675-686 ◽  
Author(s):  
E. G. J. Eijkman

Experiments are reported in which line pictures were perturbed by omission or displacement of a combination of single pixels, fragments of lines, contours, and whole figures. Different effects of perturbation were expected by selectively violating visual syntactic rules or by impeding the contribution of certain feature detectors. The deterioration of the perturbed picture was measured according to standard psychophysical methods by rating on a 5-point scale. Multivariate methods were used to single out the relative effects of perturbation by, respectively, a set of single pixels, line fragments, contours, and whole figures. Lines, as opposed to loose pixels, are clearly powerful descriptors of the pictures; contours and whole figures do not add significantly to what lines already describe. Different effects were observed when the perturbations were dislocations rather than removals: contours and whole figures then showed a distinctly disruptive effect compared to line fragments. These results have consequences for the development of a syntax of visual form perception. The perturbation method seems appropriate for identifying features or syntactic rules, although the results depend on a number of environmental and contextual factors.


2021 ◽  
pp. 51-64
Author(s):  
Ahmed A. Elngar ◽  
...  

Feature detection, description, and matching are essential components of various computer vision applications; thus, they have received considerable attention in the last decades. Several feature detectors and descriptors have been proposed in the literature, with a variety of definitions of what kinds of points in an image are potentially interesting (i.e., distinctive attributes). This chapter introduces basic notation and mathematical concepts for detecting and describing image features. It then discusses the properties of ideal features and gives an overview of various existing detection and description methods. Furthermore, it explains some approaches to feature matching. Finally, the chapter discusses the most widely used techniques for evaluating the performance of detection algorithms.
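As a concrete instance of a detector with a mathematical definition of an "interesting" point, the sketch below computes the classic Harris corner measure R = det(M) − k·trace(M)², where M is the windowed structure tensor of the image gradients; the toy image, window size, and k are illustrative choices:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner measure from the windowed structure tensor."""
    iy, ix = np.gradient(img.astype(float))     # central-difference gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box3(a):
        """3x3 box sum of gradient products (exact at interior pixels)."""
        out = np.zeros_like(a)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                out += np.roll(np.roll(a, di, 0), dj, 1)
        return out

    sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2

# Toy image: a bright square. Its corners should score positive,
# while straight edges score negative and flat regions score zero.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)   # R[4, 4] is a corner, R[4, 8] an edge midpoint
```

Two large eigenvalues of M (gradient energy in two directions) give a large positive R, which is exactly the "distinctive in all directions" notion of interest point the chapter formalizes; edges have one large eigenvalue and score negatively.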

