Creating Multimedia Image, Motion and Audio Applications using Macromedia Flash 8.0

2021 ◽  
Vol 5 (1) ◽  
pp. 197-204
Author(s):  
Apostolos Klonis


1963 ◽  
Vol 19 ◽  
pp. 126-131
Author(s):  
C. R. Lynds

The concern has been expressed many times by Dr. Bowen and others that a significant portion of the seeing deterioration may occur in levels of the atmosphere very near the ground, within a few tens of meters of the ground. When I refer to the quality of seeing I am referring to the image size one observes in a telescope of very large aperture, and I will assume that this is equivalent to image motion as observed in telescopes of very small aperture. I will not attempt a further justification for this concern; however, it is the basis for the studies we are just beginning at Kitt Peak, where we will attempt to show quantitatively whether or not there is need for concern about the very low levels of the atmosphere. So we begin with the thesis that much of the poor seeing observed at a site, the enlargement of photographic or visual images as observed through a large telescope, is due to refractive inhomogeneities in the lower levels of the atmosphere, within less than 100 m above the telescope. We presume that these inhomogeneities are of local origin and that their distribution and motion are determined primarily by site topography, wind direction, and wind velocity. The few experiments we have made thus far at Kitt Peak have been designed to ascertain quantitatively the importance of these factors. Our approach has been to make observations of the large-aperture seeing with simultaneous observations of the thermal structure of the air accessible to us immediately above the telescope.
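
As a minimal sketch of the equivalence assumed above (large-aperture image size taken as interchangeable with small-aperture image motion), the following Python fragment estimates seeing as the RMS of an image-centroid track; the centroid data and their scale are invented for illustration, not Kitt Peak measurements.

```python
import numpy as np

def rms_image_motion(centroids_arcsec):
    """RMS deviation of image-centroid positions (x, y), in arcseconds.

    Under the equivalence assumed in the abstract, this small-aperture
    image-motion statistic stands in for large-aperture image size.
    """
    c = np.asarray(centroids_arcsec, dtype=float)
    return float(np.sqrt(np.mean(np.sum((c - c.mean(axis=0)) ** 2, axis=1))))

# Hypothetical centroid track: 1000 samples of (x, y) jitter in arcsec.
rng = np.random.default_rng(0)
track = rng.normal(scale=0.5, size=(1000, 2))
print(f"RMS image motion: {rms_image_motion(track):.2f} arcsec")
```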


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 41-41
Author(s):  
J T Enright

Perception of visual direction was investigated by requiring subjects repeatedly to adjust a single small light, in an otherwise darkened room, to perceived ‘straight ahead’. This task presumably requires comparing concurrent extra-retinal information (either proprioception or an efference copy) with an internally stored ‘standard’ of comparison. Moment-to-moment precision in that performance is remarkably good, with a median threshold (standard deviation) of 47 arc min. Nevertheless, the responses often involved a monotonic shift of direction over a few minutes during a test session in this reduced visual environment. These trends led to final settings that were immediately recognised as grossly erroneous when the room was relit, implying that the presumptive internal standard of comparison, while unstable, can be rapidly updated in a full visual environment. There are clear similarities between this phenomenon and the sudden ‘visual capture’ that occurs in a re-illuminated room, following distortions of visual direction that arose in a similarly reduced setting for subjects whose extraocular muscles were partially paralysed (Matin et al, 1982 Science 216 198–201). In both cases, the visual stimuli that underlie rapid recalibration are unknown. Among the several possibilities that can be imagined, the strongest candidate hypothesis for this calibration of the straight-ahead direction is that, during fixation in a lit room, one utilises the directional distribution of image motion that arises because of microscale drift of the eye, as it moves toward its equilibrium orientation, much as a moving observer can use optic flow to evaluate ‘heading’ (the dynamic analogue of ‘straight ahead’).
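
The precision statistic above can be read as the median, over test sessions, of the standard deviation of repeated straight-ahead settings; a small sketch of that computation follows, with simulated session data standing in for the actual settings.

```python
import numpy as np

def median_setting_sd(sessions_arcmin):
    """Median, across test sessions, of the standard deviation of repeated
    'straight ahead' settings (arc min) -- the precision statistic quoted
    in the abstract."""
    sds = [np.std(np.asarray(s, dtype=float), ddof=1) for s in sessions_arcmin]
    return float(np.median(sds))

# Hypothetical sessions: settings are precise moment to moment but may
# drift monotonically within a session (simulated data, not Enright's).
rng = np.random.default_rng(1)
sessions = [rng.normal(loc=drift * np.arange(30), scale=5.0)
            for drift in rng.uniform(-2.0, 2.0, size=8)]
print(f"median setting SD: {median_setting_sd(sessions):.1f} arc min")
```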


Perception ◽  
1996 ◽  
Vol 25 (7) ◽  
pp. 797-814 ◽  
Author(s):  
Michiteru Kitazaki ◽  
Shinsuke Shimojo

The generic-view principle (GVP) states that given a 2-D image the visual system interprets it as a generic view of a 3-D scene when possible. The GVP was applied to 3-D-motion perception to show how the visual system decomposes retinal image motion into three components of 3-D motion: stretch/shrinkage, rotation, and translation. First, the optical process of retinal image motion was analyzed, and predictions were made based on the GVP in the inverse-optical process. Then experiments were conducted in which the subject judged perception of stretch/shrinkage, rotation in depth, and translation in depth for a moving bar stimulus. Retinal-image parameters—2-D stretch/shrinkage, 2-D rotation, and 2-D translation—were manipulated categorically and exhaustively. The results were highly consistent with the predictions. The GVP seems to offer a broad and general framework for understanding the ambiguity-solving process in motion perception. Its relationship to other constraints such as that of rigidity is discussed.
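
The three retinal-image parameters manipulated in the experiments map onto the standard split of a 2-D motion field into translation, curl (2-D rotation), and divergence (2-D stretch/shrinkage). The sketch below recovers those components with a least-squares affine fit; it is an illustrative reading of the stimulus manipulation, not code from the paper.

```python
import numpy as np

def decompose_image_motion(points, displacements):
    """Fit an affine motion field v = A p + t by least squares, then split
    A into divergence (2-D stretch/shrinkage) and curl (2-D rotation),
    and report the translation t."""
    p = np.asarray(points, dtype=float)
    v = np.asarray(displacements, dtype=float)
    X = np.hstack([p, np.ones((len(p), 1))])        # columns: x, y, 1
    coef, *_ = np.linalg.lstsq(X, v, rcond=None)    # coef maps (x, y, 1) -> v
    A, t = coef[:2].T, coef[2]
    divergence = A[0, 0] + A[1, 1]                  # 2-D stretch/shrinkage
    curl = A[1, 0] - A[0, 1]                        # 2-D rotation
    return divergence, curl, t

# Hypothetical field: slight expansion plus rotation plus rightward drift.
pts = np.random.default_rng(2).uniform(-1, 1, (50, 2))
A_true = np.array([[0.05, -0.02], [0.02, 0.05]])
disp = pts @ A_true.T + np.array([0.10, 0.0])
print(decompose_image_motion(pts, disp))
```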


Perception ◽  
1998 ◽  
Vol 27 (8) ◽  
pp. 937-949 ◽  
Author(s):  
Takanao Yajima ◽  
Hiroyasu Ujike ◽  
Keiji Uchikawa

The two main questions addressed in this study were (a) what effect does yoking the relative expansion and contraction (EC) of retinal images to forward and backward head movements have on the resultant magnitude and stability of perceived depth, and (b) how does this relative EC image motion interact with the depth cues of motion parallax? Relative EC image motion was produced by moving a small CCD camera toward and away from the stimulus, two random-dot surfaces separated in depth, in synchrony with the observers' forward and backward head movements. Observers viewed the stimuli monocularly, on a helmet-mounted display, while moving their heads at various velocities, including zero velocity. The results showed that (a) the magnitude of perceived depth was smaller with smaller head velocities (<10 cm s^-1), including the zero-head-velocity condition, than with a larger velocity (10 cm s^-1), and (b) perceived depth, when the motion-parallax and EC image-motion cues were presented simultaneously, was equal to the greater of the two perceived depths produced by either cue alone. The results suggest a role for nonvisual information about self-motion in perceiving depth.
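
Under small-angle projection, the relative EC image motion described above is roughly the change in image scale as the camera moves toward the surface; the sketch below illustrates that relation. The viewing distance, head velocity, and sampling rate are assumptions chosen for illustration, not the stimulus parameters of the study.

```python
import numpy as np

def ec_scale_factor(d0_cm, head_displacement_cm):
    """Relative expansion/contraction of the camera image when the camera
    starts d0_cm from a surface and moves forward by head_displacement_cm
    (positive = toward the stimulus): image scale ~ d0 / (d0 - displacement)
    under small-angle projection."""
    d = d0_cm - np.asarray(head_displacement_cm, dtype=float)
    return d0_cm / d

# Assumed values: 100 cm viewing distance, 10 cm s^-1 forward head movement
# sampled at 60 Hz for one second.
t = np.linspace(0.0, 1.0, 61)
scale = ec_scale_factor(100.0, 10.0 * t)
print(scale[0], scale[-1])   # 1.0 -> ~1.11, i.e. ~11% image expansion
```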


Perception ◽  
1993 ◽  
Vol 22 (12) ◽  
pp. 1441-1465 ◽  
Author(s):  
Jeffrey C Liter ◽  
Myron L Braunstein ◽  
Donald D Hoffman

Five experiments were conducted to examine constraints used to interpret structure-from-motion displays. Theoretically, two orthographic views of four or more points in rigid motion yield a one-parameter family of rigid three-dimensional (3-D) interpretations; additional views yield a unique rigid interpretation. Subjects viewed two-view and thirty-view displays of five-point objects in apparent motion. The subjects selected the best 3-D interpretation from a set of 89 compatible alternatives (experiments 1–3) or judged depth directly (experiment 4). In both cases the judged depth increased when relative image motion increased, even when the increased motion was due to increased simulated rotation. Subjects also judged rotation to be greater when either simulated depth or simulated rotation increased (experiment 4). The results are consistent with a heuristic analysis in which perceived depth is determined by relative motion.
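
The heuristic reported above turns on a simple image statistic. The sketch below computes one such measure, the mean image displacement between two orthographic views of a five-point object rotated about a vertical axis, and shows that it grows with either simulated depth or simulated rotation; the object and angles are hypothetical.

```python
import numpy as np

def relative_image_motion(points_3d, rotation_deg):
    """Mean image displacement between two orthographic views of a point
    set rotated about the vertical (y) axis."""
    p = np.asarray(points_3d, dtype=float)
    a = np.radians(rotation_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    view1 = p[:, :2]                 # orthographic projection: drop z
    view2 = (p @ rot.T)[:, :2]
    return float(np.mean(np.linalg.norm(view2 - view1, axis=1)))

rng = np.random.default_rng(3)
base = rng.uniform(-1, 1, (5, 3))     # hypothetical five-point object
deep = base.copy()
deep[:, 2] *= 3.0                     # increase simulated depth
print(relative_image_motion(base, 10), relative_image_motion(deep, 10))   # more depth -> more motion
print(relative_image_motion(base, 10), relative_image_motion(base, 30))   # more rotation -> more motion
```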

