Viewing-patterns and perspectival painting: An eye-tracking study on the effect of the vanishing point

2021 · Vol 13 (2)
Author(s): Arthur Crucq
Linear perspective has long been used to create the illusion of three-dimensional space on the picture plane. One of its central axioms comes from Euclidean geometry and holds that all parallel lines converge in a single vanishing point. Although linear perspective provided the painter with a means to organize the painting, the question is whether the gaze of the beholder is also affected by the underlying structure of linear perspective: for instance, in such a way that the orthogonals leading to the vanishing point also automatically guide the beholder’s gaze. This was investigated in a pilot eye-tracking experiment at the Lab for Cognitive Research in Art History (CReA) of the University of Vienna. It appears that in some compositions the vanishing point attracts the gaze of the participants. This effect is more significant when the vanishing point coincides with the central vertical axis of the painting, but is even stronger when the vanishing point also coincides with a major visual feature such as an object or figure. The latter calls into question what exactly attracts the gaze of the viewer, i.e., what comes first: the geometrical construct of the vanishing point or the visual feature?
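The convergence axiom can be made concrete with a short numerical sketch. It assumes a simple pinhole camera with focal length f (an illustrative model, not the experimental setup of the study): two parallel 3D lines project to image lines that meet at a single vanishing point determined only by their shared direction.

```python
import numpy as np

def project(point, f=1.0):
    """Pinhole projection of a 3D point (camera at the origin, looking down +z)."""
    x, y, z = point
    return np.array([f * x / z, f * y / z])

def vanishing_point(direction, f=1.0):
    """Vanishing point of all 3D lines sharing `direction` (needs a z-component)."""
    dx, dy, dz = direction
    return np.array([f * dx / dz, f * dy / dz])

# Two parallel lines with different offsets but the same direction.
d = np.array([1.0, 0.0, 2.0])
line_a = [np.array([-1.0, 0.5, 4.0]) + t * d for t in (0.0, 5.0, 50.0, 500.0)]
line_b = [np.array([2.0, -1.0, 6.0]) + t * d for t in (0.0, 5.0, 50.0, 500.0)]

# As t grows, both projected lines approach the same image point.
print([project(p).round(3) for p in line_a[-2:]])
print([project(p).round(3) for p in line_b[-2:]])
print("vanishing point:", vanishing_point(d))
```

Running the sketch shows both projected lines approaching the image point (0.5, 0), the vanishing point of the direction (1, 0, 2).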

1995 · Vol 25 (3) · pp. 639-648
Author(s): L. Mottron, S. Belleville

SYNOPSIS: This study examines perspective construction in an autistic patient (E.C.) with quasi-normal intelligence who exhibits exceptional ability when performing three-dimensional drawings of inanimate objects. Examination of E.C.'s spontaneous graphic productions showed that although his drawings approximate the ‘linear perspective’ system, the subject does not use vanishing points in his productions. Nevertheless, a formal computational analysis of E.C.'s accuracy in an experimental task showed that he was able to draw objects rotated in three-dimensional space more accurately than over-trained controls. This accuracy was not modified by suppressing graphic cues that permitted the construction of a vanishing point. E.C. was also able to detect a perspective incongruency between an object and a landscape at a level superior to that of control subjects. Since E.C. does not construct vanishing points in his drawings, it is proposed that his production of a precise realistic perspective is reached without the use of explicit or implicit perspective rules. ‘Special abilities’ in perspective are examined in relation to existing theoretical models of the cognitive deficit in autism and are compared to other special abilities in autism.


2013 · Vol 36 (5) · pp. 546-547
Author(s): Theresa Burt de Perera, Robert Holbrook, Victoria Davis, Alex Kacelnik, Tim Guilford

Abstract: Animals navigate through three-dimensional environments, but we argue that the way they encode three-dimensional spatial information is shaped by how they use the vertical component of space. We agree with Jeffery et al. that the representation of three-dimensional space in vertebrates is probably bicoded (with separation of the plane of locomotion and its orthogonal axis), but we believe that their suggestion that the vertical axis is stored “contextually” (that is, not containing distance or direction metrics usable for novel computations) is unlikely, and as yet unsupported. We describe potential experimental protocols that could clarify these differences in opinion empirically.
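As a toy illustration of the distinction being debated (not taken from the commentary), the two data structures below contrast a fully metric volumetric code with a bicoded scheme whose vertical axis is stored only as context.

```python
from dataclasses import dataclass

# Fully metric 3D code: distance and direction computations are possible on all axes.
@dataclass
class Volumetric:
    x: float
    y: float
    z: float  # the vertical axis carries usable metric information

# Bicoded with a "contextual" vertical: metric in the plane of locomotion,
# but the vertical axis is only a label, so no novel vertical computations.
@dataclass
class BicodedContextual:
    x: float
    y: float
    vertical_context: str  # e.g. "upper branch", "third floor" (hypothetical labels)

# The disagreement summarised above is whether vertical information is stored
# like Volumetric.z or like BicodedContextual.vertical_context.
print(Volumetric(1.0, 2.0, 3.0), BicodedContextual(1.0, 2.0, "upper branch"))
```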


1995 · Vol 73 (2) · pp. 766-779
Author(s): D. Tweed, B. Glenn, T. Vilis

1. Three-dimensional (3D) eye and head rotations were measured with the use of the magnetic search coil technique in six healthy human subjects as they made large gaze shifts. The aims of this study were 1) to see whether the kinematic rules that constrain eye and head orientations to two degrees of freedom between saccades also hold during movements; 2) to chart the curvature and looping in eye and head trajectories; and 3) to assess whether the timing and paths of eye and head movements are more compatible with a single gaze error command driving both movements, or with two different feedback loops.

2. Static orientations of the eye and head relative to space are known to resemble the distribution that would be generated by a Fick gimbal (a horizontal axis moving on a fixed vertical axis). We show that gaze point trajectories during eye-head gaze shifts fit the Fick gimbal pattern, with horizontal movements following straight "line of latitude" paths and vertical movements curving like lines of longitude. However, horizontal (and to a lesser extent vertical) movements showed direction-dependent looping, with rightward and leftward (and up and down) saccades tracing slightly different paths. Plots of facing direction (the analogue of gaze direction for the head) also showed the latitude/longitude pattern, without looping. In radial saccades, the gaze point initially moved more vertically than the target direction and then curved; head trajectories were straight.

3. The eye and head components of randomly sequenced gaze shifts were not time locked to one another. The head could start moving at any time from slightly before the eye until 200 ms after, and the standard deviation of this interval could be as large as 80 ms. The head continued moving for a long (up to 400 ms) and highly variable time after the gaze error had fallen to zero. For repeated saccades between the same targets, peak eye and head velocities were directly, but very weakly, correlated; fast eye movements could accompany slow head movements and vice versa. Peak head acceleration and deceleration were also very weakly correlated with eye velocity. Further, the head rotated about an essentially fixed axis, with a smooth bell-shaped velocity profile, whereas the axis of eye rotation relative to the head varied throughout the movement and the velocity profiles were more ragged.

4. Plots of 3D eye orientation revealed strong and consistent looping in eye trajectories relative to space. (ABSTRACT TRUNCATED AT 400 WORDS)
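The Fick-gimbal pattern described in point 2 can be illustrated with a small sketch. The conventions below (z as the fixed vertical axis, y as the carried horizontal axis, +x straight ahead) are assumptions chosen for illustration, not taken from the paper.

```python
import numpy as np

def rot(axis, angle_deg):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    a = np.radians(angle_deg)
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

def fick_gaze(azimuth_deg, elevation_deg):
    """Fick gimbal: yaw about the fixed vertical (z) axis, then pitch about
    the carried horizontal (y) axis; straight ahead is +x."""
    R = rot([0, 0, 1], azimuth_deg) @ rot([0, 1, 0], elevation_deg)
    return R @ np.array([1.0, 0.0, 0.0])

# Sweeping azimuth at a fixed elevation keeps the vertical component of the
# gaze direction constant: the gaze point traces a "line of latitude".
lat = [fick_gaze(az, 30.0) for az in range(-40, 41, 20)]
print([round(float(g[2]), 3) for g in lat])

# Sweeping elevation at a fixed azimuth keeps the azimuth constant:
# the gaze point traces a "line of longitude".
lon = [fick_gaze(20.0, el) for el in range(-40, 41, 20)]
print([round(float(np.degrees(np.arctan2(g[1], g[0]))), 1) for g in lon])
```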


2017 · Vol 33 (S1) · pp. 245-245
Author(s): Luciano Recalde, José Núñez, César Yegros, Carolina Villegas

INTRODUCTION: A wide range of devices, systems, and technologies exist for people with disabilities. It is necessary to provide information on the effectiveness of products on the market and their price-quality competitiveness, and to offer guidance when acquiring technologies that improve quality of life. The use of eye-tracking devices is growing, and their implementation in different areas has attracted the attention of several developers. A tool that evaluates the functionality of such devices is therefore needed in order to avoid unnecessary expenses when acquiring or repairing one.

METHODS: An interface was created with different functionalities, such as locating the coordinates of the pointer, a standardized graphic interface design that provides statistical data for objective analysis, and a wide range of design possibilities. The tests performed measured accuracy and precision: the subject was asked to follow the instructions given and observe a sequence of points, especially points located at the edges of the monitor, as these are the critical points where there is the least coincidence between the cursor and the gaze.

RESULTS: The results provided information on the performance of the tracking device. The accuracy of the eye tracker was ±12.83 pixels on the horizontal axis and ±10.66 pixels on the vertical axis. The precision was ±9.8 pixels on the horizontal axis and ±14.23 pixels on the vertical axis. This reflects the more limited mobility of the eyes along the vertical axis compared with the horizontal axis. The precision data indicate that, because movement along the vertical axis is smaller, there is a less continuous spectrum of positions on that axis, which translates into lower precision.

CONCLUSIONS: The data obtained can be compared with test results from other eye-tracking devices, and thus the tool could serve to select an eye-tracking device according to the user's needs and financial means.
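The abstract does not give the exact formulas behind the reported per-axis pixel values, but one common way to compute per-axis accuracy (mean offset from the target) and precision (dispersion of the gaze offsets) looks roughly like the sketch below; the function, array names, and simulated samples are illustrative only.

```python
import numpy as np

def accuracy_precision(gaze, targets):
    """Per-axis accuracy and precision of an eye tracker, in pixels.

    gaze    : (n, 2) array of recorded gaze positions (x, y)
    targets : (n, 2) array of the on-screen target shown for each sample

    Accuracy is taken here as the mean absolute offset from the target;
    precision as the standard deviation of the offsets (other definitions,
    e.g. sample-to-sample RMS, are also common).
    """
    gaze = np.asarray(gaze, dtype=float)
    targets = np.asarray(targets, dtype=float)
    offsets = gaze - targets
    accuracy = np.mean(np.abs(offsets), axis=0)
    precision = np.std(offsets, axis=0)
    return accuracy, precision

# Hypothetical samples for a single target at (960, 540).
rng = np.random.default_rng(0)
target = np.array([960.0, 540.0])
samples = target + rng.normal([12.0, -10.0], [10.0, 14.0], size=(200, 2))
acc, prec = accuracy_precision(samples, np.tile(target, (200, 1)))
print("accuracy (x, y):", acc.round(2), "precision (x, y):", prec.round(2))
```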


Author(s): Michael Burch, Andrei Jalba, Carl van Dueren den Hollander

Face alignment and eye tracking for interactive applications should be performed with very low latency or users will notice the delay. In this chapter, a face alignment method for real-time applications is introduced featuring a convolutional neural network architecture for face and pose alignment. The performance of the novel method is compared to a face alignment algorithm included in the freely available OpenFace toolkit, which also focuses on real-time applications. The approach exceeds OpenFace's performance on both our own and the 300W test sets in terms of accuracy and robustness but requires significant parallel processing power, currently provided by the GPU. For the eye tracking application, stereo cameras are used as input to determine the position of a user's eyes in three-dimensional space. It does not require synchronized recordings, which may contain redundant information, and instead prefers staggered recordings, which maximize the number of possible model updates.
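The chapter's stereo pipeline is not spelled out in this abstract, so the following is only a generic sketch of rectified-stereo triangulation, a standard way to recover a 3D eye position from a pair of matched image points; the function name and calibration values are made up for illustration.

```python
import numpy as np

def triangulate_eye(left_px, right_px, f, baseline, cx, cy):
    """Triangulate a 3D point from a rectified stereo pair.

    left_px, right_px : (x, y) pixel coordinates of the same eye landmark
                        in the left and right images
    f        : focal length in pixels
    baseline : horizontal distance between the two cameras (metres)
    cx, cy   : principal point (pixels)

    Returns the 3D position in the left camera's frame (metres).
    """
    xl, yl = left_px
    xr, _ = right_px
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity; check the correspondence")
    Z = f * baseline / disparity
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return np.array([X, Y, Z])

# Example with made-up calibration values.
print(triangulate_eye((700.0, 380.0), (640.0, 380.0),
                      f=800.0, baseline=0.12, cx=640.0, cy=360.0))
```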


1999 · Vol 81 (1) · pp. 267-276
Author(s): Douglas R. W. Wylie, Barrie J. Frost

Wylie, Douglas R. W. and Barrie J. Frost. Responses of Neurons in the nucleus of the basal optic root to translational and rotational flowfields. J. Neurophysiol. 81: 267–276, 1999. The nucleus of the basal optic root (nBOR) receives direct input from the contralateral retina and is the first step in a pathway dedicated to the analysis of optic flowfields resulting from self-motion. Previous studies have shown that most nBOR neurons exhibit direction selectivity in response to large-field stimuli moving in the contralateral hemifield, but a subpopulation of nBOR neurons has binocular receptive fields. In this study, the activity of binocular nBOR neurons was recorded in anesthetized pigeons in response to panoramic translational and rotational optic flow. Translational optic flow was produced by the “translator” projector described in the companion paper, and rotational optic flow was produced by a “planetarium projector” described by Wylie and Frost. The axis of rotation or translation could be positioned to any orientation in three-dimensional space. We recorded from 37 cells, most of which exhibited a strong contralateral dominance. Most of these cells were located in the caudal and dorsal aspects of the nBOR complex and many were localized to the subnucleus nBOR dorsalis. Other units were located outside the boundaries of the nBOR complex in the adjacent area ventralis of Tsai or mesencephalic reticular formation. Six cells responded best to rotational flowfields, whereas 31 responded best to translational flowfields. Of the rotation cells, three preferred rotation about the vertical axis and three preferred horizontal axes. Of the translation cells, 3 responded best to a flowfield simulating downward translation of the bird along a vertical axis, whereas the remaining 28 responded best to flowfields resulting from translation along axes in the horizontal plane. Seventeen of these cells preferred a flowfield resulting from the animal translating backward along an axis oriented ∼45° to the midline, but the best axes of the remaining eleven cells were distributed throughout the horizontal plane with no definitive clustering. These data are compared with the responses of vestibulocerebellar Purkinje cells.
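The two stimulus classes can be summarised with idealized flow vectors on the unit sphere. The sketch below is a textbook formulation of rotational and translational optic flow for an arbitrary axis, not code from the authors' planetarium or translator projectors.

```python
import numpy as np

def rotational_flow(d, omega):
    """Optic flow at unit viewing direction d for self-rotation omega
    (rad/s about an arbitrary axis): the scene appears to counter-rotate,
    so the flow is -omega x d."""
    return -np.cross(omega, d)

def translational_flow(d, velocity, distance=1.0):
    """Optic flow at unit viewing direction d for self-translation:
    the component of -velocity perpendicular to d, scaled by 1/distance."""
    v = -np.asarray(velocity, dtype=float)
    return (v - np.dot(v, d) * d) / distance

d = np.array([np.sqrt(0.5), np.sqrt(0.5), 0.0])   # one viewing direction
print(rotational_flow(d, omega=np.array([0.0, 0.0, 1.0])))        # vertical-axis rotation
print(translational_flow(d, velocity=np.array([0.0, 0.0, -1.0]))) # downward translation
```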


2003 · Vol 3 · pp. 1286-1293
Author(s): Soren Ventegodt, Niels Jorgen Andersen, Joav Merrick

When we acknowledge our purpose as the essence of our self, when we take all our power into use in an effortless way, and when we fully accept our own nature (including sex and sexuality), our purpose of life takes the form of a unique talent. Using this talent gives the experience of happiness. A person in his natural state of being uses his core talent in a conscious, joyful, and effortless way, contributing to the world the best he or she has to offer. Full expression of self happens when a person, in full acceptance of body and life, with whole-hearted intention, uses all his personal powers to realize his core talent and all associated talents, to contribute to his beloved and to the world. Thus, self-actualisation is a result of a person fully expressing and realizing his core talent. The theory of talent states that a core talent can be expressed optimally when a human being takes possession of a three-dimensional space with the axes of purpose, power, and gender, as we have a threefold need: (1) acknowledging our core talent (our purpose of life) and intending it; (2) understanding our potential powers and manifesting them; and (3) accepting our human form, including our sex, and expressing it. The first dimension is spiritual; the next dimension is mental, emotional, and physical; and the third dimension is bodily and sexual. We manifest our talents in a giving movement from the bottom of our soul through our biological nature onto the subject and object of the outer world. These three dimensions can be drawn as three axes: one sagittal axis called purpose or love or me-you, one vertical axis called power or consciousness (light) or heaven-earth, and one horizontal axis called gender or joy or male-female. The three core dimensions of human existence are considered of equal importance for the expression of our life purpose, life mission, or core talent. Each of the dimensions is connected to special needs. When these needs are not fulfilled, we suffer, and if this suffering becomes unbearable we deny the dimension or a part of it. This is why the dimensions of purpose, power, and gender become suppressed from our consciousness.


Perception · 1977 · Vol 6 (3) · pp. 327-332
Author(s): Raymond Klein

Four stereoblind and four normal subjects were tested on a mental rotation task. It was hypothesized that, if stereopsis is an important input for building up the perceptual system that represents three-dimensional space, then subjects lacking it ought to be deficient at mental rotations in depth. Stereoblind subjects were equally efficient at picture-plane and depth rotations, and were nonsignificantly better than normal subjects at rotations in depth. It was concluded that in the absence of stereopsis other cues are sufficient for the development of the ‘three-dimensional’ perceptual system. A puzzling paradox was raised, however, by the finding that the introspections of the two groups differed markedly.


1993 · Vol 70 (6) · pp. 2647-2659
Author(s): D. R. Wylie, B. J. Frost

1. The complex spike activity of Purkinje cells in the flocculus in response to rotational flowfields was recorded extracellularly in anesthetized pigeons.

2. The optokinetic stimulus was produced by a rotating “planetarium projector.” A light source was placed in the center of a tin cylinder, which was pierced with numerous small holes. A pen motor oscillated the cylinder about its long axis. This apparatus was placed above the bird's head and the resultant rotational flowfield was projected onto screens that surrounded the bird on all four sides. The axis of rotation of the planetarium could be oriented to any position in three-dimensional space.

3. Two types of responses were found: vertical axis (VA; n = 43) neurons responded best to visual rotation about the vertical axis, and H-135i neurons (n = 34) responded best to rotation about a horizontal axis. The preferred orientation of the horizontal axis was at approximately 135 degrees ipsilateral azimuth. VA neurons were excited by rotation about the vertical axis producing forward (temporal to nasal) and backward motion in the ipsilateral and contralateral eyes, respectively, and were inhibited by rotation in the opposite direction. H-135i neurons in the left flocculus were excited by counterclockwise rotation about the 135 degrees ipsilateral horizontal axis and were inhibited by clockwise motion. Thus, the VA and H-135i neurons, respectively, encode visual flowfields resulting from head rotations stimulating the ipsilateral horizontal and ipsilateral anterior semicircular canals.

4. Sixty-seven percent of VA and 80% of H-135i neurons had binocular receptive fields, although for most binocular cells the ipsilateral eye was dominant. Binocular stimulation resulted in a greater depth of modulation than did monocular stimulation of the dominant eye for 69% of the cells.

5. Monocular stimulation of the VA neurons revealed that the best axis for the contralateral eye was tilted back 11 degrees, on average, relative to the best axis for ipsilateral stimulation. For the H-135i neurons, the best axes for monocular stimulation of the two eyes were approximately the same.

6. By stimulating circumscribed portions of the monocular receptive fields of the H-135i neurons with alternating upward and downward large-field motion, it was revealed that the contralateral receptive fields were bipartite. Upward motion was preferred in the anterior 45 degrees of the contralateral field, and downward motion was preferred in the central 90 degrees of the contralateral visual field. (ABSTRACT TRUNCATED AT 400 WORDS)
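One generic way to estimate a neuron's preferred rotation axis from responses to a set of tested axes is a least-squares cosine-tuning fit. The sketch below is illustrative only and not necessarily the analysis used in the paper; the test axes and rates are hypothetical.

```python
import numpy as np

def preferred_axis(axes, rates):
    """Estimate a neuron's preferred rotation axis from cosine-like tuning.

    axes  : (n, 3) unit vectors of the tested rotation axes
    rates : (n,) firing-rate modulations for each axis

    Fits rate ~ b + w . axis by least squares; the preferred axis is the
    direction of w (response maximal when the stimulus axis aligns with it).
    """
    axes = np.asarray(axes, dtype=float)
    A = np.column_stack([np.ones(len(axes)), axes])
    coef, *_ = np.linalg.lstsq(A, np.asarray(rates, dtype=float), rcond=None)
    w = coef[1:]
    return w / np.linalg.norm(w)

# Hypothetical VA-like neuron: strongest modulation for vertical-axis (z) rotation.
test_axes = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [-1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=float)
rates = np.array([10.0, 10.0, 25.0, 10.0, 10.0, -5.0])  # spikes/s modulation
print(preferred_axis(test_axes, rates).round(3))         # approximately [0, 0, 1]
```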


2021
Author(s): Laura Coates

Contemporary architectural practice has come to depend upon digital representation as a means of design and for the production of architectural drawings. The computer is commonplace in architectural offices, relegating the drawing board to a machine of the past. Today, the architect is more likely to draw with a mouse than a mechanical pencil. This research proposes that such a dramatic shift in representational technology will affect not only how architects design, but also what they design. Digital modes of architectural representation rely on mathematical code designed to artificially simulate visual experience. Such software enforces strict adherence to a geometrically correct perspective code, making the construction of perspective as simple as taking a ‘snapshot’. The compliance of the digital drawing with codes prescribed by a programmer distances the architect from the perspectival representation, consequently removing the architect’s control of the drawing convention. The universality of perspectival views is enforced by computer programmes such as Google SketchUp, which use perspective as a default view. This research explores the bias of linear perspective, revealing what architects have forgotten through their dependence on digital software. Special attention is drawn to the lack of control the architect exerts over their limits of representation. Through manual drawing, the perspective convention can be unpacked and critiqued against the limitations of the system first prescribed by Brunelleschi. The manual drawing is positioned as a powerful mode of representation because it overtly expresses projection and the architect’s control of the line; the hand drawing also allows the convention to be interpreted erroneously. The research is methodology-driven, treating representation not as a rudimentary tool but as a component of the design process. Representational tools are thus used to provide a new spatial representation of a site. Computer-aided design entered widespread architectural practice at the end of the 1980s, a decade that provided an ideal setting for speculative drawn projects. Such projects proved fruitful for architects critically approaching issues of representation and drawing convention, treating the drawing as more than utilitarian in the production of architecture. Whilst the move into digital imaging is not a paradigm shift for the act of drawing, it fundamentally shifted the way architects draw, separating drawing conventions onto visually separate ‘sheets’. The architectural drawing as known today was developed in the Renaissance; Renaissance architects, the first to conceive of architecture through representation, endeavoured to produce a true three-dimensional image. The Renaissance architect exercised absolute control of perspective, a control which has since defined the modern architect. Positioned within research by design, the ‘drawing-out’ process is a critical interpretation of perspective. In particular, the drawing of instrumental perspective is unpacked within the realm of scientific research. The picture plane, horizon line, and ground plane remain constant, as their positions are well documented. The stationary point, the vanishing point (possibly the most speculative components of the drawing), or the relationship between the two behave as independent variables.
In breaking the assumptions that underlie linear perspective as a fixed geometric system, we may ask ourselves whether we are in control of representational methods, or whether they control us. Since architects are controlled by their means of representation, this question is paramount to the discipline, particularly today, when digital drawing has shifted the relationship between architect and representation. The implications of this new relationship may result in monotony across the architectural discipline, where the production of critical architecture becomes secondary to computer technology.
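The constant and variable elements named above (picture plane, horizon line, ground plane, station point, vanishing point) can be illustrated with a minimal central-projection sketch; the coordinates and dimensions are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

def project_to_picture_plane(point, station, plane_point, plane_normal):
    """Central (perspective) projection of a 3D point onto the picture plane.

    The image of `point` is where the ray from the station point through
    `point` pierces the plane defined by `plane_point` and `plane_normal`.
    """
    point = np.asarray(point, dtype=float)
    station = np.asarray(station, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    ray = point - station
    denom = np.dot(n, ray)
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the picture plane")
    t = np.dot(n, np.asarray(plane_point, dtype=float) - station) / denom
    return station + t * ray

# Station point 1.6 m above the ground plane, picture plane 2 m ahead (normal +y).
station = np.array([0.0, 0.0, 1.6])
plane_point = np.array([0.0, 2.0, 0.0])
normal = np.array([0.0, 1.0, 0.0])

# A receding edge parallel to +y: its images approach a single vanishing point
# on the horizon line (z = 1.6, the height of the station point).
for depth in (4.0, 8.0, 64.0, 1024.0):
    print(project_to_picture_plane([1.0, depth, 0.0], station, plane_point, normal).round(3))
```

As the edge recedes, its projected images converge on (0, 2, 1.6): the vanishing point sits on the horizon line at the height of the station point, which is why moving the station point shifts the vanishing point.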

