Hybrid Image Illusion

Author(s): Aude Oliva, Philippe G. Schyns

Artists, designers, photographers, and visual scientists routinely look for ways to create, out of a single image, the feeling that there is more to see than meets the eye. Many well-known visual illusions are dual in nature, causing the viewer to experience two different interpretations of the same image. Hybrid images illustrate a double-image illusion, in which different images are perceived depending on viewing distance, viewing duration, or image size: one that appears when the image is viewed up close (carried by the high spatial frequencies) and another that appears from afar (carried by the low spatial frequencies). This method can be used to create compelling dual images in which the observer experiences different percepts when interacting with the image.
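The construction behind the illusion is straightforward to sketch: low-pass filter the image meant to be seen from afar, high-pass filter the one meant to be seen up close, and sum the two bands. Below is a minimal Python sketch using OpenCV; the file names `far.jpg` and `near.jpg` and the Gaussian cutoffs are illustrative assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

# Hypothetical inputs; the two images must have identical dimensions.
far = cv2.imread("far.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
near = cv2.imread("near.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Low-pass band: a heavy Gaussian blur keeps only the coarse structure
# that survives distant viewing.
low = cv2.GaussianBlur(far, (0, 0), sigmaX=8)

# High-pass band: subtracting a blurred copy keeps only the fine detail
# that dominates up close.
high = near - cv2.GaussianBlur(near, (0, 0), sigmaX=4)

# The hybrid is simply the sum of the two bands.
hybrid = np.clip(low + high, 0, 255).astype(np.uint8)
cv2.imwrite("hybrid.jpg", hybrid)
```

Viewed up close, the high-frequency image dominates; stepping back (or shrinking the image) attenuates the fine detail until only the blurred low-frequency image remains visible.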

1984, Vol 24 (10), pp. 1407-1413
Author(s): C.R. Carlson, J.R. Moeller, C.H. Anderson

2020, Vol 31 (7), pp. 074010
Author(s): George Dimas, Federico Bianchi, Dimitris K Iakovidis, Alexandros Karargyris, Gastone Ciuti, ...

1994, Vol 375
Author(s): Fuping Liu, Ian Baker, Michael Dudley

White-beam synchrotron X-ray topography has been used to study the circular, prismatic [0001] dislocation loops commonly observed on the (0001) plane in polycrystalline freshwater ice. A new method, involving detailed analyses of the effects of beam divergence on the loop images, has been developed to determine whether a loop is of vacancy or interstitial type. In a 0002 image, one half of a loop (projected as an ellipse) appears as a single image and the other half as a double image. Experimentally, it was found that the 0002 vector drawn from the center of a loop passes through the single image if the loop is of vacancy type and through the double image if it is of interstitial type. This method of loop characterization was confirmed by theoretical analyses of both the dislocation image widths and their strain fields.
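The classification rule itself is simple image-plane geometry and can be encoded directly. The sketch below is an illustrative 2-D reduction, not the authors' code; the function name and the centroid inputs are assumptions about how the two image halves would be located.

```python
import numpy as np

def loop_type(center, g, single_half_centroid, double_half_centroid):
    """Classify a dislocation loop from a 0002 topograph (illustrative sketch).

    center: (x, y) of the projected loop (ellipse) center
    g: (x, y) direction of the 0002 diffraction vector in the image
    *_centroid: (x, y) centroids of the single- and double-image halves
    """
    g = np.asarray(g, dtype=float)
    c = np.asarray(center, dtype=float)
    to_single = np.asarray(single_half_centroid, dtype=float) - c
    to_double = np.asarray(double_half_centroid, dtype=float) - c
    # The 0002 vector drawn from the center points into exactly one half;
    # pick the half with the larger projection onto g.
    if np.dot(g, to_single) > np.dot(g, to_double):
        return "vacancy"       # g passes through the single image
    return "interstitial"      # g passes through the double image
```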


2018
Author(s): Juan Chen, Irene Sperandio, Molly J. Henry, Melvyn A Goodale

Our visual system affords a distance-invariant percept of object size by integrating retinal image size with viewing distance (size constancy). Single-unit studies in animals have shown that real changes in distance can modulate the firing rate of neurons in primary visual cortex and even in subcortical structures, raising the intriguing possibility that the integration required for size constancy may occur during initial visual processing in V1 or even earlier. In humans, however, EEG and brain imaging studies have typically manipulated the apparent (not real) distance of stimuli using pictorial illusions, in which the cues to distance are sparse and incongruent. Here, we physically moved the monitor to different distances from the observer, a more ecologically valid paradigm that emulates what happens in everyday life. Using this paradigm in combination with electroencephalography (EEG), we were able for the first time to examine how the computation of size constancy unfolds in real time under real-world viewing conditions. We showed that even when all distance cues were available and congruent, size constancy took about 150 ms to emerge in the activity of visual cortex. This 150-ms interval exceeds the time required for visual signals to reach V1, but is consistent with the time typically associated with later processing within V1 or recurrent processing from higher-level visual areas. This finding therefore provides unequivocal evidence that size constancy does not occur during initial signal processing in V1 or earlier, but requires subsequent processing, much like other feature-binding mechanisms.
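The integration the abstract describes reduces to a simple trigonometric relation: a distance-invariant size estimate is recovered by scaling retinal (angular) size by viewing distance. A minimal sketch with illustrative numbers:

```python
import math

def physical_size_cm(angular_size_deg, viewing_distance_cm):
    """Recover object size from retinal angle and viewing distance
    (the computation underlying size constancy)."""
    return 2 * viewing_distance_cm * math.tan(math.radians(angular_size_deg) / 2)

# A 10 cm object subtends ~5.7 deg at 100 cm but only ~2.9 deg at 200 cm;
# combining angle with distance maps both back to the same physical size.
print(physical_size_cm(5.72, 100))  # ~10.0 cm
print(physical_size_cm(2.86, 200))  # ~10.0 cm
```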


2017, Vol 284 (1858), pp. 20170128
Author(s): James B. Barnett, Innes C. Cuthill, Nicholas E. Scott-Samuel

The effect of viewing distance on the perception of visual texture is well known: spatial frequencies higher than the resolution limit of an observer's visual system will be summed and perceived as a single combined colour. In animal defensive colour patterns, distance-dependent pattern blending may allow aposematic patterns, salient at close range, to match the background to distant observers. Indeed, recent research has indicated that reducing the distance from which a salient signal can be detected can increase survival over camouflage or conspicuous aposematism alone. We investigated whether the spatial frequency of conspicuous and cryptically coloured stripes affects the rate of avian predation. Our results are consistent with pattern blending acting to camouflage salient aposematic signals effectively at a distance. Experiments into the relative rate of avian predation on edible model caterpillars found that increasing spatial frequency (thinner stripes) increased survival. Similarly, visual modelling of avian predators showed that pattern blending increased the similarity between caterpillar and background. These results show how a colour pattern can be tuned to reveal or conceal different information at different distances, and produce tangible survival benefits.
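The distance at which stripes blend follows directly from the observer's acuity limit: blending begins once a full stripe cycle subtends less than the finest resolvable angle. A minimal sketch, with the acuity value chosen purely for illustration:

```python
import math

def blending_distance_cm(stripe_period_cm, acuity_cpd):
    """Distance beyond which a striped pattern blends into a single colour.

    stripe_period_cm: width of one full light+dark cycle
    acuity_cpd: observer's resolution limit in cycles per degree (assumed)
    """
    # One cycle must subtend less than 1/acuity degrees to blend.
    critical_angle = math.radians(1.0 / acuity_cpd)
    return stripe_period_cm / (2 * math.tan(critical_angle / 2))

# Thinner stripes blend sooner: halving the period halves the blending distance.
print(blending_distance_cm(0.4, 5))  # ~115 cm for 4 mm cycles at 5 cpd
print(blending_distance_cm(0.2, 5))  # ~57 cm
```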


2020, Vol 8 (1), pp. 97-107
Author(s): Sergey A. Shoydin, Artem L. Pazoev

The problems of digital hologram synthesis associated with the discrete representation of the signal forming a holographic image are analyzed. One significant limitation is the technological difficulty of forming holographic structures point by point, owing to the diffraction-limited size of the focused spot of the optical-mechanical writer. This narrows the spectrum of spatial frequencies available to a point-by-point synthesized hologram compared with a classical hologram recorded in analog fashion, which in turn makes it difficult to record holograms with a large 3D image depth. We discuss a way to overcome this problem by using an optical projection system capable of scaling the image both transversely and longitudinally. Some possibilities for constructing such systems are shown and experimentally confirmed, and some of the deformation distortions that arise in the resulting 3D images are analyzed.
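The diffraction limitation can be made concrete with a back-of-the-envelope bound: a focused writing spot of diameter d cannot lay down fringes finer than roughly one cycle per two spot diameters, which caps the hologram's spatial frequency and hence its maximum diffraction angle. The sketch below uses that crude bound with illustrative numbers; it is not the authors' model.

```python
import math

def max_spatial_frequency_lpmm(spot_diameter_um):
    """Rough upper bound on hologram spatial frequency (line pairs per mm)
    for a point-by-point writer: one fringe cycle needs ~two spot diameters."""
    return 1000.0 / (2 * spot_diameter_um)

def max_diffraction_angle_deg(spot_diameter_um, wavelength_um=0.633):
    """Largest first-order diffraction angle the written structure can
    produce, from the grating equation sin(theta) = wavelength * frequency."""
    nu_per_um = 1.0 / (2 * spot_diameter_um)
    return math.degrees(math.asin(min(1.0, wavelength_um * nu_per_um)))

# A 1 um spot supports ~500 lp/mm and ~18 deg of diffraction at 633 nm,
# well below what a classical analog hologram can record.
print(max_spatial_frequency_lpmm(1.0), max_diffraction_angle_deg(1.0))
```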


Sensors, 2020, Vol 20 (14), pp. 3860
Author(s): Namhoon Kim, Junsu Bae, Cheolhwan Kim, Soyeon Park, Hong-Gyoo Sohn

This paper proposes a technique for estimating the distance between an object and a rolling-shutter camera from a single image. The technique exploits the rolling shutter effect (RSE), a distortion inherent to rolling-shutter cameras. Unlike other single-photo distance estimation methods, which do not consider the geometric arrangement, the proposed technique rests on an explicit mathematical model. The relationship between distance and RSE angle was derived from the camera parameters (focal length, shutter speed, image size, etc.), and equations were derived for three different scenarios. The model was verified experimentally using a Nikon D750 with a Nikkor 50 mm lens mounted on a car, with varying speeds, object distances, and camera parameters. The results show that the model provides accurate distance estimates: the estimation error due to changes in speed remained stable at approximately 10 cm. However, when the object was more than 10 m from the camera, the estimated distance became sensitive to the RSE and the error increased dramatically.
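The geometry behind the method can be sketched in a simplified single-scenario form: a vertical edge on an object moving laterally at known speed is skewed by the row-by-row readout, and the skew grows as the object gets closer. The model below is a first-order reconstruction from the abstract, not the authors' published equations; the function name and the example numbers (readout time, focal length in pixels) are assumptions.

```python
import math

def distance_from_rse_m(skew_angle_deg, speed_m_s, focal_px, readout_s, rows):
    """Estimate object distance (m) from the rolling shutter skew angle.

    A vertical edge moving laterally at v (m/s) at distance d sweeps across
    the sensor at focal_px * v / d pixels per second. Each row is exposed
    readout_s / rows later than the one above it, so the edge shifts by
    focal_px * v * readout_s / (d * rows) pixels per row -- which is
    tan(skew angle). Solving for d:
    """
    tan_theta = math.tan(math.radians(skew_angle_deg))
    return focal_px * speed_m_s * readout_s / (rows * tan_theta)

# Illustrative numbers loosely matching the setup: 50 mm lens on a full-frame
# sensor (~8400 px focal length), 30 ms readout over 4016 rows, car at 10 m/s:
print(distance_from_rse_m(1.0, 10.0, 8400, 0.030, 4016))  # ~36 m
```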


Perception, 1991, Vol 20 (6), pp. 733-754
Author(s): Thomas S Collett, Urs Schwarz, Erik C Sobel

In the natural world, observers perceive an object to have a relatively fixed size and depth over a wide range of distances. Retinal image size and binocular disparity are to some extent scaled with distance to give observers a measure of size constancy. The angle of convergence of the two eyes and their accommodative states are one source of scaling information, but even at close range this must be supplemented by other cues. We have investigated how angular size and oculomotor state interact in the perception of size and depth at different distances. Computer-generated images of planar and stereoscopically simulated 3-D surfaces covered with an irregular blobby texture were viewed on a computer monitor. The monitor rested on a movable sled running on rails within a darkened tunnel. An observer looking into the tunnel could see nothing but the simulated surface so that oculomotor signals provided the major potential cues to the distance of the image. Observers estimated the height of the surface, their distance from it, or the stereoscopically simulated depth within it over viewing distances which ranged from 45 cm to 130 cm. The angular width of the images lay between 2 deg and 10 deg. Estimates of the magnitude of a constant simulated depth dropped with increasing viewing distance when surfaces were of constant angular size. But with surfaces of constant physical size, estimates were more nearly independent of viewing distance. At any one distance, depths appeared to be greater, the smaller the angular size of the image. With most observers, the influence of angular size on perceived depth grew with increasing viewing distance. These findings suggest that there are two components to scaling. One is independent of angular size and related to viewing distance. The second component is related to angular size, and the weighting accorded to it grows with viewing distance. Control experiments indicate that in the tunnel, oculomotor state provides the principal cue to viewing distance. Thus, the contribution of oculomotor signals to depth scaling is gradually supplanted by other cues as viewing distance grows. Binocular estimates of the heights and distances of planar surfaces of different sizes revealed that angular size and viewing distance interact in a similar way to determine perceived size and perceived distance.
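The oculomotor cue at work here is easy to quantify: the convergence angle of the two eyes is fixed by the interocular separation and the fixation distance, so distance, and with it the scaling factor for size and depth, can in principle be read off the vergence state. A minimal sketch assuming a 6.4 cm interocular distance:

```python
import math

IPD_CM = 6.4  # assumed interocular distance

def vergence_angle_deg(distance_cm):
    """Convergence angle of the two eyes when fixating at a given distance."""
    return math.degrees(2 * math.atan(IPD_CM / (2 * distance_cm)))

def distance_from_vergence_cm(angle_deg):
    """Invert the relation: recover viewing distance from the vergence angle."""
    return IPD_CM / (2 * math.tan(math.radians(angle_deg) / 2))

# Over the 45-130 cm range used in the experiment, vergence varies almost 3x,
# making it a usable distance signal at near range:
print(vergence_angle_deg(45))   # ~8.1 deg
print(vergence_angle_deg(130))  # ~2.8 deg
```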

