Visual cues and spatial pattern learning

2001 ◽ Author(s): Michael F. Brown, Sue Yang, Kelly Digian

2002 ◽ Vol 30 (4) ◽ pp. 363-375 ◽ Author(s): Michael F. Brown, Sue Y. Yang, Kelly A. Digian

2006 ◽ Vol 34 (1) ◽ pp. 102-108 ◽ Author(s): Michael F. Brown, Gary W. Giumetti

2020 ◽ pp. 026765831989682 ◽ Author(s): Dato Abashidze, Kim McDonough, Yang Gao

Recent research exploring how input exposure and learner characteristics influence novel L2 morphosyntactic pattern learning has exposed participants to either text or static images rather than dynamic visual events. Furthermore, it is not known whether incorporating eye gaze cues into dynamic visual events enhances dual pattern learning. This exploratory eye-tracking study therefore examined whether eye gaze cues during dynamic visual events facilitate novel L2 pattern learning. University students (n = 72) were exposed to 36 training videos containing two novel morphosyntactic patterns in pseudo-Georgian: completed events (bich-ma kocn-ul gogoit, 'boy kissed girl') and ongoing actions (bich-su kocn-ar gogoit, 'boy is kissing girl'). They then completed an immediate test of 24 items using the same vocabulary words, followed by a generalization test of 24 items created from new vocabulary words. Results indicated that learners who received the eye gaze cues scored significantly higher on the immediate test and relied more on the verb cues than on the noun cues. A post-hoc analysis of the eye-movement data indicated that the gaze cues elicited longer looks at the correct images. Findings are discussed in relation to visual cues and novel morphosyntactic pattern learning.


2001 ◽ Vol 27 (4) ◽ pp. 407-416 ◽ Author(s): Michael F. Brown, Christine Zeiler, Anica John

2000 ◽ Vol 28 (3) ◽ pp. 278-287 ◽ Author(s): Michael F. Brown, Elizabeth Digello, Michelle Milewski, Meredith Wilson, Michael Kozak

1985 ◽ Vol 60 (3) ◽ pp. 891-902 ◽ Author(s): Ronald M. Ruff

A method for examining audiospatial integration was developed through studies of normal and brain-injured patients. The method used a sound source that sequentially outlined a spatial pattern within an array of 100 loudspeakers. For the 48 subjects tested with this method, the presence or absence of visual cues had no effect on audiospatial processing, and eye movements did not match the perceived sound patterns. Significantly higher hit rates were obtained when the panel of loudspeakers was placed in front of, behind, or above the subject rather than to the left or right. These differences were observed with involuntary but not with voluntary head-fixation. Theoretical concepts of audiospatial processing are discussed.

