Object recognition during foveating eye movements

2009 · Vol 49 (18) · pp. 2241-2253
Author(s): Alexander C. Schütz, Doris I. Braun, Karl R. Gegenfurtner

2012 · Vol 12 (9) · pp. 1006-1006
Author(s): C. Leek, C. Patterson, R. Rafal, F. Cristino

2012 · Vol 50 (9) · pp. 2142-2153
Author(s): E. Charles Leek, Candy Patterson, Matthew A. Paul, Robert Rafal, Filipe Cristino

Perception · 10.1068/p7257 · 2012 · Vol 41 (11) · pp. 1289-1298
Author(s): Yoshiyuki Ueda, Jun Saiki

Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality at test. However, the nature of the differences between within-modal learning (e.g., visual learning followed by visual recognition) and cross-modal learning (e.g., visual learning followed by haptic recognition) remains unknown. To address this issue, we used eye movement data to investigate object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers, informed of the test modality in advance, studied an unfamiliar, visually presented 3-D object. Quantitative analyses showed that recognition performance was unaffected by object rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning differed significantly between within-modal and cross-modal learning. Fixations were more widely dispersed in cross-modal learning than in within-modal learning. Moreover, over the course of a trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to differences in recognition performance.
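The abstract does not specify how fixation dispersion or duration were quantified. As a rough illustration only, the sketch below computes two simple per-trial fixation summaries: mean distance of each fixation from the trial's fixation centroid as a dispersion measure, and mean fixation duration. The choice of dispersion measure, the function names, and the example numbers are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

def fixation_dispersion(x, y):
    """Spread of fixation positions: mean Euclidean distance of each
    fixation from the centroid of all fixations in the trial
    (one of several plausible dispersion measures)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cx, cy = x.mean(), y.mean()
    return float(np.hypot(x - cx, y - cy).mean())

def mean_fixation_duration(durations):
    """Average fixation duration (ms) within a trial."""
    return float(np.mean(durations))

# Hypothetical fixation data for one trial per learning condition
# (positions in degrees of visual angle, durations in ms); the numbers
# are invented purely to show the computation.
within_modal = {"x": [0.1, 0.3, -0.2, 0.0], "y": [0.0, 0.2, 0.1, -0.1],
                "dur": [220, 240, 210, 230]}
cross_modal = {"x": [1.2, -0.9, 0.8, -1.1], "y": [0.7, -0.8, -1.0, 0.9],
               "dur": [260, 310, 330, 350]}

for name, trial in [("within-modal", within_modal), ("cross-modal", cross_modal)]:
    disp = fixation_dispersion(trial["x"], trial["y"])
    dur = mean_fixation_duration(trial["dur"])
    print(f"{name}: dispersion = {disp:.2f} deg, mean duration = {dur:.0f} ms")
```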

