Perceptual Inspection Time: An Exploration of Tactics to Eliminate the Apparent-Movement Strategy
1990 ◽ Author(s): Robert K. Young

Intelligence ◽ 2001 ◽ Vol 29 (3) ◽ pp. 219-230
Author(s): C. Stough, T. C. Bates, G. L. Mangan, I. Colrain

2020 ◽ Vol 36 (2) ◽ pp. 296-302
Author(s): Luke J. Hearne, Damian P. Birney, Luca Cocchi, Jason B. Mattingley

Abstract. The Latin Square Task (LST) is a relational reasoning paradigm developed by Birney, Halford, and Andrews (2006). Previous work has shown that the LST elicits typical reasoning complexity effects, such that increases in complexity are associated with decrements in task accuracy and increases in response times. Here we modified the LST for use in functional brain imaging experiments, in which presentation durations must be strictly controlled, and assessed its validity and reliability. Modifications included presenting the components within each trial serially, such that the reasoning and response periods were separated. In addition, the inspection time for each LST problem was constrained to five seconds. We replicated previous findings of higher error rates and slower response times with increasing relational complexity and observed relatively large effect sizes (ηp² > 0.70, r > .50). Moreover, measures of internal consistency and test-retest reliability confirmed the stability of the LST within and across separate testing sessions. Interestingly, we found that limiting the inspection time for individual problems in the LST had little effect on accuracy relative to the unconstrained times used in previous work, a finding that is important for future brain imaging experiments aimed at investigating the neural correlates of relational reasoning.


2019 ◽ Vol 34 (12) ◽ pp. 2807-2822
Author(s): Florian Schwarzmueller, Nancy A. Schellhorn, Hazel Parry

Sensors ◽ 2021 ◽ Vol 21 (11) ◽ pp. 3722
Author(s): Byeongkeun Kang, Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges over image sequences, caused by relative movement between a camera and a scene. Motion and scene appearance are both essential features for estimating a driver's visual attention allocation in computer vision. However, although driver attention prediction models based on scene appearance have been well studied, the role of motion as a crucial factor in attention estimation has not been thoroughly examined in the literature. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver's visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content across video frames. We validate the proposed motion-based prediction model by comparing its performance to that of current state-of-the-art prediction models that use RGB frames. The experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that there is room for further accuracy improvement by using motion features.
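A dense optical flow map stores, for each pixel, a 2-D displacement vector between consecutive frames. A common way to feed such a map to a neural network, sketched below with NumPy only, is to convert it into magnitude and angle channels; this encoding is an illustrative assumption, not the specific input representation used by the paper's framework.

```python
import numpy as np

def flow_to_features(flow):
    """Convert a dense optical flow map of shape (H, W, 2) into a
    magnitude/angle representation of shape (H, W, 2).

    flow[..., 0] holds horizontal displacement (pixels per frame),
    flow[..., 1] holds vertical displacement.
    """
    dx, dy = flow[..., 0], flow[..., 1]
    magnitude = np.sqrt(dx ** 2 + dy ** 2)   # speed of apparent movement
    angle = np.arctan2(dy, dx)               # direction in radians
    return np.stack([magnitude, angle], axis=-1)

# Toy flow field: every pixel appears to move 3 px right and 4 px down,
# as if the camera panned diagonally relative to a static scene.
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 3.0
flow[..., 1] = 4.0

features = flow_to_features(flow)
print(features[0, 0, 0])  # magnitude = 5.0
```

In practice the flow itself would come from an optical flow estimator (e.g. a classical dense method or a learned one), and the resulting channels would be stacked with or substituted for the RGB input of the attention model.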

