The whole is equal to the sum of its parts: pigeons (Columba livia) and crows (Corvus macrorhynchos) do not perceive emergent configurations

2020 ◽  
Author(s):  
Kazuhiro Goto

We previously demonstrated that chimpanzees, like humans, showed better accuracy and faster response times in discriminating visual patterns when the patterns were presented in redundant and uninformative contexts than when they were presented alone. In the present study, we examined the effect of redundant context on pattern discrimination in pigeons (Columba livia) and large-billed crows (Corvus macrorhynchos) using the same task and stimuli as those used in our previous study on chimpanzees. Birds were trained to search for an odd target among homogeneous distractors. Each stimulus was presented in one of three ways: (1) alone, (2) with an identical context that resulted in an emergent configuration for chimpanzees (congruent context), or (3) with an identical context that did not result in an emergent configuration for chimpanzees (incongruent context). In contrast to the facilitative effect of congruent contexts we previously reported in chimpanzees, the same contexts disrupted target localization performance in both pigeons and crows. These results imply that birds, unlike chimpanzees, do not perceive emergent configurations.

Science ◽  
1986 ◽  
Vol 232 (4746) ◽  
pp. 83-85 ◽  
Author(s):  
K. MIMURA

Pattern discrimination by dewinged walking flies (Boettcherisca peregrina) was tested in behavioral experiments. After emergence, the flies were deprived of light or visual patterns. Deprivation impaired the normal development of visual pattern discrimination without impairing phototaxis. Flies kept in a lighted, white, unpatterned environment could not discriminate visual patterns, nor could flies kept in continuous darkness. These results indicate that there is considerable plasticity in the structure of the visual system of these flies.


2012 ◽  
Vol 367 (1598) ◽  
pp. 1995-2006 ◽  
Author(s):  
Nina Stobbe ◽  
Gesche Westphal-Fitch ◽  
Ulrike Aust ◽  
W. Tecumseh Fitch

Artificial grammar learning (AGL) provides a useful tool for exploring rule-learning strategies linked to general-purpose pattern perception. To directly compare the performance of humans with that of other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of working memory required. This approach allowed us to evaluate the performance of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training.
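The abstract does not name the grammars used; as a hedged illustration of the computational distinction it invokes, the sketch below contrasts a rule that a finite-state recognizer can handle, (AB)^n, with a counting rule above the finite-state level, A^n B^n, the kind of rule that number-violating test stimuli would probe. The specific grammars and function names are assumptions for illustration only, not the study's stimuli.

```python
# Hedged illustration only: the abstract does not name the grammars used.
# (AB)^n can be recognized by a finite-state machine, whereas A^nB^n requires
# comparing counts and therefore lies above the finite-state level.

def gen_finite_state(n):
    """A sequence legal under the finite-state rule (AB)^n, e.g. ABABAB."""
    return "AB" * n

def gen_supra_finite_state(n):
    """A sequence legal under the counting rule A^nB^n, e.g. AAABBB."""
    return "A" * n + "B" * n

def accepts_anbn(seq):
    """Membership test for A^nB^n; the explicit count comparison is what a
    finite-state recognizer cannot perform for unbounded n."""
    n_a = len(seq) - len(seq.lstrip("A"))
    n_b = len(seq) - len(seq.rstrip("B"))
    return n_a + n_b == len(seq) and n_a == n_b and n_a > 0

if __name__ == "__main__":
    print(gen_finite_state(3))        # ABABAB
    print(gen_supra_finite_state(3))  # AAABBB
    print(accepts_anbn("AAABBB"))     # True
    print(accepts_anbn("AABBB"))      # False: violates the element count
```

Test items that break only the element count, as in the final testing phase described above, isolate exactly this counting requirement from superficial visual features.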


2002 ◽  
Vol 205 (4) ◽  
pp. 549-557 ◽  
Author(s):  
Stefan Schuster ◽  
Silke Amtsfeld

SUMMARY
Several insects use template-matching systems to recognize objects or environmental landmarks by comparing actual and stored retinal images. Such systems are not viewpoint-invariant and are useful only when the locations in which the images were stored and where they are later retrieved coincide. Here, we report that a vertebrate, the weakly electric fish Gnathonemus petersii, appears to use template matching to recognize visual patterns that it had previously viewed from a fixed vantage point. This fish is nocturnal and uses its electrical sense to find its way in the dark, yet it has functional vision that appears to be well adapted to dim-light conditions. We were able to train three fish in a two-alternative forced-choice procedure to discriminate a rewarded from an unrewarded visual pattern. From its daytime shelter, each fish viewed two visual patterns placed at a set distance behind a transparent Plexiglas screen that closed the shelter. When the screen was lifted, the fish swam towards one of the patterns to receive a food reward or to be directed back into its shelter. Successful pattern discrimination was limited to low ambient light intensities of approximately 10 lx and to pattern sizes subtending a visual angle greater than 3°. To analyze the characteristics used by the fish to discriminate the visual training patterns, we performed transfer tests in which the training patterns were replaced by other patterns. The results of all such transfer tests can best be explained by a template-matching mechanism in which the fish stores the view of the rewarded training pattern and chooses, from two other patterns, the one whose retinal appearance best matches the stored view.
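No algorithm is given in the abstract; the sketch below is a minimal, hypothetical rendering of the viewpoint-dependent template matching it describes: store the retinal image of the rewarded pattern as seen from the fixed vantage point, then choose whichever candidate image agrees best with that stored template. The function names and the similarity measure (zero-mean normalized correlation) are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

# Minimal sketch of template matching as described: store the view of the
# rewarded pattern from a fixed vantage point, then pick the candidate whose
# retinal image best matches the stored view. Illustrative only.

def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def choose_by_template(template, candidates):
    """Return the index of the candidate image most similar to the stored template."""
    scores = [normalized_correlation(template, c) for c in candidates]
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stored_view = rng.random((32, 32))                        # "retinal image" of the rewarded pattern
    novel = rng.random((32, 32))                              # unrelated transfer pattern
    noisy_match = stored_view + 0.1 * rng.random((32, 32))    # slightly degraded view of the same pattern
    print(choose_by_template(stored_view, [novel, noisy_match]))  # -> 1
```

Because the comparison operates on raw retinal images rather than on viewpoint-invariant features, any change of vantage point degrades the match, which is the signature property the transfer tests exploit.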


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Cheng Wang ◽  
Ding Wang ◽  
Lu Gao ◽  
Bin Yang

Due to practical limitations on size and cost, aerial vehicles generally cannot carry the complicated sensor arrays needed for target localization. In this paper, we investigate direct position determination (DPD) of a stationary source using a single moving sensor. First, we analyze the structure of the artificial signal and construct a DPD model based on its frame periodicity. The model incorporates Doppler information extracted from both transformation frames and adjacent samples into target localization. Second, we consider the effect of oscillator instability and present an iterative solution for the joint estimation of the target location and the phase noise caused by oscillator imperfection. The proposed technique fully exploits the periodic structure of the artificial wireless signal, which leads to a significant enhancement in localization performance. Both theoretical analysis and simulations are presented to confirm its effectiveness.
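The abstract describes the method only at a high level; the following is a minimal sketch of the core DPD idea under simplifying assumptions (2-D geometry, a known carrier frequency, per-frame Doppler estimates already in hand, no phase noise): grid-search candidate source positions and score each by how well the Doppler profile it predicts along the sensor trajectory matches the measurements. This is not the authors' algorithm, which additionally exploits frame periodicity and jointly estimates phase noise; all names and values below are assumptions.

```python
import numpy as np

# Minimal sketch of direct position determination (DPD) with a single moving
# sensor, under simplifying assumptions: 2-D geometry, known carrier frequency,
# Doppler shifts already extracted per frame, and no oscillator phase noise.

C = 3e8      # propagation speed (m/s)
FC = 2.4e9   # assumed carrier frequency (Hz)

def predicted_doppler(source, positions, velocities):
    """Doppler shift at each sensor waypoint for a hypothesised source position."""
    los = source - positions                                        # line-of-sight vectors
    dist = np.maximum(np.linalg.norm(los, axis=1, keepdims=True), 1e-9)
    radial_v = np.sum(velocities * los / dist, axis=1)              # closing speed towards source
    return FC * radial_v / C

def dpd_grid_search(measured, positions, velocities, grid):
    """Pick the grid point whose predicted Doppler profile best fits the measurements."""
    costs = [np.sum((predicted_doppler(g, positions, velocities) - measured) ** 2)
             for g in grid]
    return grid[int(np.argmin(costs))]

if __name__ == "__main__":
    # Sensor flies east at 50 m/s, one Doppler measurement per second.
    t = np.arange(20.0)
    positions = np.stack([50.0 * t, np.zeros_like(t)], axis=1)
    velocities = np.tile([50.0, 0.0], (len(t), 1))
    true_source = np.array([600.0, 300.0])
    measured = predicted_doppler(true_source, positions, velocities)
    measured = measured + np.random.default_rng(1).normal(0.0, 0.5, size=measured.shape)

    xs, ys = np.meshgrid(np.arange(0.0, 1000.0, 25.0), np.arange(25.0, 1000.0, 25.0))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    print(dpd_grid_search(measured, positions, velocities, grid))  # close to [600. 300.]
```

The key feature of DPD, preserved in this sketch, is that the position is estimated directly from the signal-derived measurements in one step, rather than first fixing intermediate bearing or frequency estimates and then intersecting them.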


1970 ◽  
Vol 140 (1) ◽  
pp. 81-100 ◽  
Author(s):  
Michael B. Pritz ◽  
William R. Mead ◽  
R. Glenn Northcutt


2012 ◽  
Vol 8 (10) ◽  
pp. 438090
Author(s):  
Xuefei Zhang ◽  
Qimei Cui ◽  
Yulong Shi ◽  
Xiaofeng Tao

In ill-conditioned communication environments, multiple-target localization is of great practical significance. The cooperative group localization (CGL) model was first proposed and shown to provide a localization performance gain and simultaneous multiple-target localization under ill conditions. However, it faces two inherent difficulties: a strict demand on the CGL topology and high complexity. By making rational use of available information to relax the topology restrictions, and by dividing the complex problem into simpler local ones, the factor graph (FG) combined with the sum-product algorithm is a natural candidate for addressing both problems. To solve them, we propose the weighted FG-based CGL (WFG-CGL) algorithm, which incorporates optimal weights based on information reliability. To further reduce the complexity, we propose the low-complexity FG-based CGL (LCFG-CGL) algorithm. We also derive, for the first time, the Cramer-Rao lower bound (CRLB) of the localization error in CGL. Theoretical analysis and numerical results indicate that, compared with the existing CGL algorithm, the proposed algorithms not only relax the CGL topology requirement but also achieve high localization accuracy at low complexity.
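The WFG-CGL factor-graph and sum-product machinery is not reproduced here; the sketch below only illustrates the reliability-weighting principle the abstract invokes, using a weighted Gauss-Newton solver for range-based localization in which each measurement is weighted by its inverse noise variance. The anchor layout, noise levels, and function names are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of weighting by information reliability: each anchor's range
# measurement contributes in proportion to the inverse of its noise variance.
# This is plain weighted Gauss-Newton localization, not the WFG-CGL algorithm.

def weighted_localize(anchors, ranges, variances, x0, iters=20):
    """Estimate a 2-D position from noisy ranges to known anchor positions."""
    x = np.asarray(x0, dtype=float)
    w = 1.0 / np.asarray(variances)            # reliability weights
    for _ in range(iters):
        diff = x - anchors                     # (N, 2) offsets to each anchor
        dist = np.linalg.norm(diff, axis=1)    # predicted ranges
        J = diff / dist[:, None]               # Jacobian of ranges w.r.t. position
        r = ranges - dist                      # residuals
        W = np.diag(w)
        # Weighted normal equations: (J^T W J) dx = J^T W r
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        x = x + dx
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    true_pos = np.array([30.0, 70.0])
    variances = np.array([1.0, 1.0, 25.0, 25.0])   # two anchors are far less reliable
    noisy = (np.linalg.norm(anchors - true_pos, axis=1)
             + rng.normal(0.0, np.sqrt(variances)))
    print(weighted_localize(anchors, noisy, variances, x0=[50.0, 50.0]))  # near [30, 70]
```

Down-weighting unreliable measurements is what keeps the estimate close to the truth here; in the paper the analogous weights enter the factor-graph messages rather than a centralized least-squares solve.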


1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones, played in random order from eight sound sources in the horizontal plane. Subjects either did or did not have access to information supplied by their pinnae (external ears) and their head movements. We found that pinnae, as well as head movements, had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive; removing either factor produced the same loss of localization accuracy and even much the same error pattern. Head-movement analysis showed that subjects turned their faces towards the emitting sound source, except for sources exactly in front or exactly behind, which were identified by turning the head to both sides. The head-movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.

