Facilitation of Spatial Pattern Learning with Visual Cues in Real and Virtual Environments

2008 ◽  
Author(s):  
Bradley R. Sturz ◽  
Michael F. Brown ◽  
Debbie M. Kelly



Author(s):  
Elizabeth Thorpe Davis ◽  
Larry F. Hodges

Two fundamental purposes of human spatial perception, in either a real or virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and three-dimensionality in a visual scene (Yeh and Silverstein, 1992). (Stereopsis, or stereoscopic vision, results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than simpler monoscopic displays (Wichanski, 1991).

Thus, in creating 3D virtual environments or 3D simulated displays, much recent attention has been devoted to visual 3D stereoscopic displays. Yet, given the costs and technical requirements of such displays, several issues should be considered. First, we should consider in what conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and, thus, constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans. In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others.

In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays. There we also point out some spatial factors that limit stereopsis and fusion in human vision, as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that, we discuss software and hardware issues, such as modeling geometry and display geometry, as well as geometric distortions and other artifacts that can affect human perception.
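The binocular geometry mentioned above can be illustrated with the standard similar-triangles relation between viewing distance, simulated object depth, and the horizontal offset (screen parallax) between the left- and right-eye images. The sketch below is illustrative only; the function name, parameter names, and the sample interocular distance are assumptions, not values from the text.

```python
# A minimal sketch of the binocular geometry behind stereoscopic displays:
# the horizontal screen parallax needed to place a virtual object at a given
# depth for a viewer at a fixed distance from the screen plane.
# (Illustrative names and values; not from the original article.)

def screen_parallax(ipd_cm, screen_dist_cm, object_dist_cm):
    """Horizontal parallax (cm) between left- and right-eye image points.

    Positive parallax -> object appears behind the screen (uncrossed);
    negative parallax -> object appears in front of it (crossed);
    zero parallax     -> object lies in the screen plane itself.
    """
    return ipd_cm * (object_dist_cm - screen_dist_cm) / object_dist_cm

# An object placed in the screen plane needs no parallax:
print(screen_parallax(6.3, 60.0, 60.0))   # 0.0
# An object twice as far away as the screen needs half the IPD of parallax:
print(screen_parallax(6.3, 60.0, 120.0))
```

Note that as the simulated object recedes toward infinity, the required parallax approaches the full interocular distance, which is one reason display geometry and viewer position jointly constrain how much depth a stereoscopic display can comfortably portray.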



2020 ◽  
pp. 026765831989682
Author(s):  
Dato Abashidze ◽  
Kim McDonough ◽  
Yang Gao

Recent research that explored how input exposure and learner characteristics influence novel L2 morphosyntactic pattern learning has exposed participants to either text or static images rather than dynamic visual events. Furthermore, it is not known whether incorporating eye gaze cues into dynamic visual events enhances dual pattern learning. Therefore, this exploratory eye-tracking study examined whether eye gaze cues during dynamic visual events facilitate novel L2 pattern learning. University students (n = 72) were exposed to 36 training videos with two dual novel morphosyntactic patterns in pseudo-Georgian: completed events (bich-ma kocn-ul gogoit, ‘boy kissed girl’) and ongoing actions (bich-su kocn-ar gogoit, ‘boy is kissing girl’). They then carried out an immediate test with 24 items using the same vocabulary words, followed by a generalization test with 24 items created from new vocabulary words. Results indicated that learners who received the eye gaze cues scored significantly higher on the immediate test and relied on the verb cues more than on the noun cues. A post-hoc analysis of eye-movement data indicated that the gaze cues elicited longer looks to the correct images. Findings are discussed in relation to visual cues and novel morphosyntactic pattern learning.
