Difference in Recognition of Optical Illusion Using Visual and Tactual Sense

1992 ◽  
Vol 4 (1) ◽  
pp. 58-62 ◽  
Author(s):  
Yukio Fukui ◽  
Makoto Shimojo

In addition to visual displays, force feedback will play an important role in interactive interface design for computers. Although the human visual system is generally more sensitive than the haptic system, it is subject to optical illusions. In this study, we conducted experiments to measure visual and/or haptic sensitivity in recognizing features of figures, such as the deformation of a circle or the bend in a line, under conditions of optical illusion. We found that the figure a subject judges to be a true circle, or a line without a bend, is closer to the true figure when explored haptically than when chosen by visual recognition. However, the probable error for haptics is larger than that for vision. Recognition using both haptics and vision creates a conflict because of the optical illusion, resulting in almost the same errors as vision alone.
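The comparison above rests on two summary statistics: the constant error (how far the figure accepted as undistorted deviates from the true figure) and the probable error (the scatter of the settings). Below is a minimal sketch of how such statistics could be computed from method-of-adjustment data; the function name, the zero-deformation convention, and all numbers are hypothetical, not taken from the paper.

```python
import numpy as np

def constant_and_probable_error(settings, true_value=0.0):
    """Summarize method-of-adjustment settings for one condition.

    settings   : the deformation values the subject accepted as a "true"
                 circle (or an unbent line) on each trial
    true_value : the physically undistorted value (0 deformation)

    Returns the constant error (bias of the accepted figure relative to the
    true one) and the probable error (0.6745 * SD, the classical scatter
    measure used in older psychophysics).
    """
    s = np.asarray(settings, dtype=float)
    constant_error = s.mean() - true_value
    probable_error = 0.6745 * s.std(ddof=1)
    return constant_error, probable_error

# Hypothetical settings in arbitrary deformation units, not data from the paper:
vis_ce, vis_pe = constant_and_probable_error([0.12, 0.15, 0.10, 0.14])   # vision
hap_ce, hap_pe = constant_and_probable_error([0.03, 0.09, -0.02, 0.07])  # haptics
```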

2001 ◽  
Vol 13 (6) ◽  
pp. 569-574
Author(s):  
Masanori Idesawa

Human beings obtain a vast amount of information about the external world through the visual system. Automated systems such as robots must be provided with visual functions if they are to operate flexibly in 3-D environments. To realize such visual functions artificially, we would do well to learn from the human visual mechanism. Optical illusions are a pure reflection of the human visual mechanism and can therefore be used to investigate it. New types of optical illusion that arise under binocular viewing are introduced and investigated.


Author(s):  
Geska Helena Brečević ◽  
Robert Brečević

Discovered during a media-archeological investigation into optical illusions, trick photography, and discarded memorabilia, the photo-multigraph technique opened the door to an enchanted world of cloned appearances orbiting in a self-reflective solar system. Shapeshifting into our preferred artistic medium, this turn-of-the-century photographic technique becomes the video-multigraph. It is bizarrely noteworthy that self-isolation would become not only the subject of the piece, but also – due to the unforeseen spread of a recently mutated virus – the prevailing circumstances under which the work was to be completed. In Verfünfungseffekt, we use the medium of video to create a kaleidoscopic portrait-in-motion where the perspective-shifting shards of ego are recorded in a synchronized performance of solipsist intersubjectivity. The video-multigraph allows for the compositing of tiny offsets in time-shifting delays applied to one, or several, of the mirrored selves – shattering the cloned perfection, as well as the conformity, of the multiple presences. This optical illusion necessitates reflection on how media alters our perceptions of time and space; it thereby arouses wonder about our place in existence. Keywords: Photo-multigraph, fivefold-portrait, mirror photography, video-multigraph, crisis of presence
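As an aside for technically minded readers, the time-shifting described above can be illustrated with a minimal sketch, assuming OpenCV is available: several mirrored copies of one video feed are composited side by side, each reaching back into a frame buffer by a different delay. This is only an illustration of the principle, not the artists' actual pipeline; the delay values and window handling are hypothetical.

```python
from collections import deque
import cv2  # assumes OpenCV is installed

DELAYS = [0, 5, 10, 15, 20]          # per-copy frame delays (hypothetical)
buffer = deque(maxlen=max(DELAYS) + 1)

cap = cv2.VideoCapture(0)            # any camera or video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.appendleft(frame)
    # For each copy, reach back into the buffer by its delay; mirror every
    # second copy horizontally to echo the mirrored multigraph views.
    tiles = []
    for i, d in enumerate(DELAYS):
        past = buffer[min(d, len(buffer) - 1)]
        tiles.append(cv2.flip(past, 1) if i % 2 else past)
    cv2.imshow("video-multigraph sketch", cv2.hconcat(tiles))
    if cv2.waitKey(1) == 27:         # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```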


Perception ◽  
10.1068/p3393 ◽  
2003 ◽  
Vol 32 (4) ◽  
pp. 395-414 ◽  
Author(s):  
Marina V Danilova ◽  
John D Mollon

The visual system is known to contain hard-wired mechanisms that compare the values of a given stimulus attribute at adjacent positions in the visual field; but how are comparisons performed when the stimuli are not adjacent? We ask empirically how well a human observer can compare two stimuli that are separated in the visual field. For the stimulus attributes of spatial frequency, contrast, and orientation, we have measured discrimination thresholds as a function of the spatial separation of the discriminanda. The three attributes were studied in separate experiments, but in all cases the target stimuli were briefly presented Gabor patches. The Gabor patches lay on an imaginary circle, which was centred on the fixation point and had a radius of 5 deg of visual angle. Our psychophysical procedures were designed to ensure that the subject actively compared the two stimuli on each presentation, rather than referring just one stimulus to a stored template or criterion. For the cases of spatial frequency and contrast, there was no systematic effect of spatial separation up to 10 deg. We conclude that the subject's judgment does not depend on discontinuity detectors in the early visual system but on more central codes that represent the two stimuli individually. In the case of orientation discrimination, two naïve subjects performed as in the cases of spatial frequency and contrast; but two highly trained subjects showed a systematic increase of threshold with spatial separation, suggesting that they were exploiting a distal mechanism designed to detect the parallelism or non-parallelism of contours.
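To make the stimulus geometry concrete, the sketch below shows one way the Gabor patches could be parameterized and placed on an imaginary circle of 5 deg radius around fixation. The function names are illustrative, and treating the separation as the straight-line distance between patch centres (so that its maximum is the 10 deg diameter) is an assumption, not a detail stated in the abstract.

```python
import numpy as np

def gabor(size_px, wavelength_px, orientation_deg, sigma_px, contrast=1.0):
    """Luminance profile of a Gabor patch, values in [-contrast, contrast]."""
    half = size_px // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_px**2))
    carrier = np.cos(2 * np.pi * xr / wavelength_px)
    return contrast * envelope * carrier

def positions_on_circle(radius_deg, separation_deg):
    """Centres of two patches on an imaginary circle around fixation.

    radius_deg     : circle radius (5 deg in the study)
    separation_deg : straight-line distance between the two patch centres
                     (an assumption); its maximum is the diameter, 10 deg here.
    """
    half_angle = np.arcsin(separation_deg / (2 * radius_deg))
    p1 = (radius_deg * np.cos(half_angle),  radius_deg * np.sin(half_angle))
    p2 = (radius_deg * np.cos(half_angle), -radius_deg * np.sin(half_angle))
    return p1, p2

patch = gabor(size_px=128, wavelength_px=16, orientation_deg=45, sigma_px=20)
near, far = positions_on_circle(5.0, 2.0), positions_on_circle(5.0, 10.0)
```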


1979 ◽  
Vol 23 (1) ◽  
pp. 362-366 ◽  
Author(s):  
Wanda J. Smith

The design of workstations with visual displays has become the subject of considerable interest and concern during the past few years. One area of concern relates to the assumption that long-term viewing of such displays at close focal distances may contribute to visual fatigue. A second is the effect on the human visual system of the frequent changes in surface illumination that occur when display units are used in combination with hard-copy documents. As a consequence of these and other concerns, the popular press has published articles that have aroused the interest of various scientific organizations in these effects. This paper reviews some of the literature on a limited aspect of this issue, namely the accommodation and pupillary systems as they relate to long-term viewing of visual display units.


1995 ◽  
Vol 73 (3) ◽  
pp. 1201-1222 ◽  
Author(s):  
J. McIntyre ◽  
E. V. Gurfinkel ◽  
M. I. Lipshits ◽  
J. Droulez ◽  
V. S. Gurfinkel

1. When interacting with the environment, human arm movements may be prevented in certain directions (e.g., when sliding the hand along a surface), resulting in what is called a "constrained motion." In the directions in which movement is restricted, the subject is instead free to control the forces exerted against the constraint. 2. Control strategies for constrained motion may be characterized by two extreme models. Under the active compliance model, an essentially feedback-based approach, measurements of contact force may be used in real time to modify the motor command and precisely control the forces generated against the constraint. Under the passive compliance model, the motion would be executed in a feedforward manner, using an internal model of the constraint geometry. The feedforward model relies on the compliant behavior of the passive mechanical system to maintain contact while avoiding excessive contact forces. 3. Subjects performed a task in which they were required to slide the hand along a rigid surface. This task was performed in a virtual force environment in which contact forces were simulated by a two-dimensional force-actuated joystick. Unknown to the subject, the orientation of the surface constraint was varied from trial to trial, and the contact force changes induced by these perturbations were measured. 4. Subjects showed variations in contact force correlated with the direction of the orientation perturbation. "Upward" tilts resulted in higher contact forces, whereas "downward" tilts resulted in lower contact forces. This result is consistent with feedforward-based control of a passively compliant system. 5. Subject responses did not, however, correspond exactly to the predictions of a static analysis of a passive, feedforward-controlled system. A dynamic analysis reveals a much closer resemblance between a passive, feedforward model and the observed data. Numerical simulations demonstrate that a passive, dynamic system model of the movement captures many more of the salient features observed in the measured human data. 6. We conclude that human subjects execute surface-following motions in a largely feedforward manner, using an a priori model of the surface geometry. The evidence does not suggest that active, real-time use of force feedback is used to guide the movement or to control limb impedance. We do not exclude, however, the possibility that the internal model of the constraint is updated at somewhat longer latencies on the basis of proprioceptive information.
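To make the passive-compliance prediction concrete, here is a minimal toy sketch, not the authors' simulation: the hand is driven along a feedforward path planned for a horizontal surface, while a simple endpoint spring produces whatever contact force the actually tilted surface imposes. The stiffness, press depth, and tilt values are hypothetical.

```python
import numpy as np

# Toy static model of feedforward control with passive compliance.
K = 500.0                        # endpoint stiffness, N/m (hypothetical)
press_depth = 0.005              # planned penetration below the assumed surface, m
x = np.linspace(0.0, 0.2, 200)   # hand displacement along the surface, m

def contact_force(tilt_deg):
    """Contact force along the movement for a given surface tilt."""
    tilt = np.deg2rad(tilt_deg)
    surface_height = x * np.tan(tilt)    # actual (tilted) surface
    planned_height = -press_depth        # feedforward path assumes a flat surface
    penetration = np.maximum(surface_height - planned_height, 0.0)
    return K * penetration               # spring force ~ stiffness x interpenetration

# "Upward" tilts yield a rising contact force, "downward" tilts a falling one,
# matching the direction of the effect reported for the human subjects.
upward, downward = contact_force(+3.0), contact_force(-3.0)
```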


1988 ◽  
Vol 32 (5) ◽  
pp. 278-278 ◽  
Author(s):  
Robert W. Root ◽  
Annette Canby

The term “direct manipulation” (or DM) often evokes images of interfaces which are intuitive, obvious, and easy to learn. We conducted an experiment to determine whether subjects could learn to use a DM interface without instruction, i.e., whether they could learn the interface syntax on their own merely by inspection and exploration of the interface. The research vehicle was a prototype DM application designed to allow end users to customize a telecommunications application. Three variations of the interface were created by manipulating elements of the DM syntax, specifically, moded operations and rules about selecting objects before acting on them. Subjects carried out a set of five tasks in the presence of an experimenter, who was allowed to provide structured help when the subject could not make further progress. Results indicated that the syntax manipulations affected both the number and type of user errors and the amount of help needed to complete the tasks: the use of modes and selection rules significantly interfered with learning, and only four subjects out of thirty were able to perform the complete set of tasks without experimenter assistance. We also found, however, that more than half of the errors made by subjects were not directly related to the syntax manipulations. These errors appear to stem more from conceptual problems, i.e., mismatches between the user's developing model of the interface and the model instantiated by the interface designer in the rules of interaction. These conceptual problems were observed across syntax manipulations and represent a significant portion of users' difficulties in learning the interface. Thus, our results shed light on the relationship between interface syntax, learning, and usability in the DM paradigm, but they also point out the need for a cognitive account of the processes by which users acquire knowledge of interface characteristics and of how that knowledge is related to interface design elements.
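To illustrate the two syntax elements that were manipulated, the hypothetical sketch below (not the prototype studied) contrasts a select-then-act grammar, in which a command requires a prior selection, with a moded grammar, in which a command arms a mode that is then applied to whatever the user clicks.

```python
class ModelessUI:
    """Noun-verb syntax: select an object, then invoke the command."""
    def __init__(self):
        self.selected = None
    def click(self, obj):
        self.selected = obj
    def command(self, verb):
        if self.selected is None:
            return "error: nothing selected"
        return f"{verb} applied to {self.selected}"

class ModedUI:
    """Verb-noun syntax: arm a command (mode), then click a target."""
    def __init__(self):
        self.mode = None
    def command(self, verb):
        self.mode = verb             # the interface is now 'in' that mode
    def click(self, obj):
        if self.mode is None:
            return "click ignored: no mode armed"
        return f"{self.mode} applied to {obj}"

# Usage: the same user intention requires a different action order in each grammar.
ui1 = ModelessUI(); ui1.click("line 3"); print(ui1.command("delete"))
ui2 = ModedUI(); ui2.command("delete"); print(ui2.click("line 3"))
```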


1992 ◽  
Vol 4 (1) ◽  
pp. 35-57 ◽  
Author(s):  
Isabelle Otto ◽  
Philippe Grandguillaume ◽  
Latifa Boutkhil ◽  
Yves Burnod ◽  
Emmanuel Guigon

A new type of biologically inspired multilayered network is proposed to model the properties of the primate visual system with respect to invariant visual recognition (IVR). This model is based on 10 major neurobiological and psychological constraints. The first five constraints shape the architecture and properties of the network. 1. The network model has a Y-like double-branched multilayered architecture, with one input (the retina) and two parallel outputs, the “What” and the “Where,” which model, respectively, the temporal pathway, specialized for “object” identification, and the parietal pathway, specialized for “spatial” localization. 2. Four processing layers are sufficient to model the main functional steps of the primate visual system that transform the retinal information into prototypes (object-centered reference frame) in the “What” branch and into an oculomotor command in the “Where” branch. 3. The distribution of receptive field sizes within and between the two functional pathways provides an appropriate tradeoff between discrimination and invariant recognition capabilities. 4. The two outputs are represented by population coding: the ocular command is computed as a population vector in the “Where” branch, and the prototypes are coded in a “semidistributed” way in the “What” branch. In the intermediate associative steps, processing units learn to associate prototypes (through feedback connections) with component features (through feedforward ones). 5. The basic processing units of the network do not model single cells but rather the local neuronal circuits that combine different information flows organized in separate cortical layers. Such a biologically constrained model shows shift-invariant and size-invariant capabilities that resemble those of humans (psychological constraints): 6. During the Learning session, a set of patterns (26 capital letters and 2 geometric figures) is presented to the network: a single presentation of each pattern in one position (at the center) and with one size is sufficient to learn the corresponding prototypes (internal representations). These patterns are then presented in widely varying new sizes and positions during the Recognition session: 7. The “What” branch of the network succeeds in immediate recognition of patterns presented in the central zone of the retina at the learned size. 8. Recognition by the “What” branch is resistant to changes in size within a limited range of variation related to the distribution of receptive field (RF) sizes in the successive processing steps of this pathway. 9. Even when ocular movements are not allowed, the recognition capabilities of the “What” branch are unaffected by changing positions around the learned one. This significant shift-invariance of the “What” branch is also related to the distribution of RF sizes. 10. When both sizes and locations vary, the “What” and the “Where” branches cooperate for recognition: the location coding in the “Where” branch can command, under the control of the “What” branch, an ocular movement that resets peripheral patterns toward the central zone of the retina until recognition succeeds. This model results in predictions about anatomical connections and physiological interactions between temporal and parietal cortices.
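The “Where” output is described as a population vector. A minimal sketch of that readout follows, with variable names and the example tuning chosen for illustration rather than taken from the authors' implementation.

```python
import numpy as np

def population_vector(activities, preferred_dirs_deg):
    """Decode a 2-D ocular command from a population of directionally tuned units.

    activities         : firing rates (arbitrary units), one per unit
    preferred_dirs_deg : each unit's preferred movement direction, in degrees

    The command is the activity-weighted vector sum of the units' preferred
    directions.
    """
    a = np.asarray(activities, dtype=float)
    theta = np.deg2rad(np.asarray(preferred_dirs_deg, dtype=float))
    unit_vectors = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return (a[:, None] * unit_vectors).sum(axis=0)

# Example: 8 units with evenly spaced preferred directions and activity
# peaked near 90 degrees, yielding a command pointing roughly "upward".
dirs = np.arange(0, 360, 45)
rates = np.array([1.0, 3.0, 6.0, 3.0, 1.0, 0.5, 0.2, 0.5])
command = population_vector(rates, dirs)
```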

