Differences in the Perceptual Processes Behind Shading and Achromatic Color Responses on the Rorschach

Author(s):  
Masaru Yasuda

Abstract. Differences in the perceptual processes behind shading responses and achromatic-color responses were examined by comparing eye movements. The following hypotheses were tested. Hypothesis 1: Shading responses, compared to non-shading responses, would show increased fixation time directed at the inside of the shaded area of the stimuli and decreased fixation time directed at the outline. Hypothesis 2: The differences in fixation times proposed in Hypothesis 1 would not be observed between achromatic-color responses and non-achromatic-color responses. Eye movement data from 60 responses produced for the W in Card IV and D1 in Card VI were analyzed. The results indicated that shading responses had significantly longer fixation times directed at the inner area and significantly shorter fixation times directed at the outline compared to non-shading responses. Achromatic-color responses, on the other hand, showed no significant main effect or interaction. These results supported Hypotheses 1 and 2.
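The region-based fixation comparison described in this abstract can be illustrated with a small sketch. The function names and the pixel threshold for the "outline band" are hypothetical assumptions for illustration, not the study's actual analysis code; the sketch simply splits total fixation time between an outline band and the inner area of a blot region:

```python
import math

def dist_to_segment(p, a, b):
    # Euclidean distance from point p to the line segment a-b
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def split_fixation_time(fixations, outline, band=20.0):
    """Sum fixation durations into 'outline' vs 'inner' bins.

    fixations: list of (x, y, duration_ms); outline: polygon vertices.
    A fixation within `band` pixels of any outline edge counts as
    outline-directed; everything else counts as inner-area-directed.
    """
    edges = list(zip(outline, outline[1:] + outline[:1]))
    totals = {"outline": 0.0, "inner": 0.0}
    for x, y, dur in fixations:
        near_edge = min(dist_to_segment((x, y), a, b) for a, b in edges) <= band
        totals["outline" if near_edge else "inner"] += dur
    return totals
```

With per-response totals of this form, the hypotheses above reduce to comparing the two bins between response groups.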

Perception ◽  
10.1068/p6080 ◽  
2009 ◽  
Vol 38 (4) ◽  
pp. 479-491 ◽  
Author(s):  
Zoi Kapoula ◽  
Gintautas Daunys ◽  
Olivier Herbez ◽  
Qing Yang

Franklin et al. (1993, Leonardo 26, 103–108) reported that title information influenced the interpretation of paintings but not the way observers explore and look at them; in their study, subjects used a hand pointer to indicate where they looked. We used eye-movement recording to examine the effect of title on eye-movement exploration of nonrealistic cubist paintings that give rise to free interpretation. Three paintings by Fernand Léger were used: The Wedding contained a high density of small fragments of real human faces, limbs, or arbitrary fragments mixed with large plane surfaces; The Alarm Clock consisted of arbitrary fragments creating the perception of a person; Contrast of Forms contained forms and cylinders. Different groups of naive subjects explored the paintings without knowing the title (spontaneous condition), with the instruction to invent a title (active condition), and after announcement of the authentic title (driven condition). Exploration time was unrestricted, and eye movements were recorded with Chronos video-oculography. Fixation duration increased in the driven condition relative to the active condition; this increase occurred for all paintings. In contrast, fixation-duration variability remained stable over all title conditions. Saccade amplitude increased in the driven condition for Contrast of Forms. The increases in fixation duration and saccade size are attributed to additional cognitive analysis, i.e., a search for fit between the title and the painting. When comparing paintings within each title condition, The Wedding produced different results from the other paintings: longer exploration time (in the spontaneous condition), higher fixation-duration variability (in the spontaneous and driven conditions), but smaller saccade sizes (in the active and driven conditions).
The differences are attributed to visual aspects (the high density of small fragments) but also to the complex semantic analysis of the multiple segments of faces and limbs contained in this painting. The spatial distribution of fixation time was highly selective: the central area was the most fixated for all paintings and all title conditions. In the driven condition, however, the loci of the most frequent fixations differed from those in the other conditions within the first 5 s; for The Alarm Clock in particular, the title drove the eyes rapidly to the inconspicuous fragment of the clock. Our findings go against Franklin's conclusions. We conclude that title information influences both the physiological parameters of eye movements and the distribution of fixation time over different selected areas of the painting.


2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing eye movement data from young (Mage = 21 years) and old (Mage = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates were obtained of the typified mean difference, d, between the age groups on all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 across measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis results statistically confirm the most common patterns observed in previous research; eye movements therefore seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses proposed to explain the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
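The effect size used here, the typified (standardized) mean difference d, and its fixed-effect combination across experiments can be sketched as follows. This is a minimal illustration with hypothetical function names; the approximate sampling variance of d is a standard textbook formula, not a detail taken from this particular meta-analysis:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Standardized mean difference using the pooled standard deviation
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / sp

def combined_effect(ds, ns_pairs):
    # Fixed-effect combination: weight each d by the inverse of its
    # (approximate) sampling variance, then take the weighted mean.
    weights = []
    for d, (n1, n2) in zip(ds, ns_pairs):
        var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
        weights.append(1.0 / var)
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)
```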


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to analysts as separate or merged views. However, analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps analysts understand the visual attention of an observer.


1972 ◽  
Vol 35 (1) ◽  
pp. 103-110
Author(s):  
Phillip Kleespies ◽  
Morton Wiener

This study explored (1) whether there is evidence of visual input at so-called “subliminal” exposure durations, and (2) whether the response, if any, is a function of the thematic content of the stimulus. Thematic content (threatening versus non-threatening) and stimulus structure (angular versus curved) were varied independently under “subliminal,” “part-cue,” and “identification” exposure conditions. With Ss' reports and the frequency and latency of first eye movements (“orienting reflex”) as input indicators, there was no evidence of input differences as a function of thematic content at any exposure duration, and the “report” data were consistent with the eye-movement data.


2020 ◽  
Author(s):  
Šimon Kucharský ◽  
Daan Roelof van Renswoude ◽  
Maartje Eusebia Josefa Raijmakers ◽  
Ingmar Visser

Describing, analyzing, and explaining patterns in eye movement behavior is crucial for understanding visual perception, and eye movements are increasingly used to inform cognitive process models. In this article, we start by reviewing basic characteristics of and desiderata for models of eye movements. Specifically, we argue that there is a need for models combining the spatial and temporal aspects of eye-tracking data (i.e., fixation durations and fixation locations), that formal models derived from concrete theoretical assumptions are needed to inform our empirical research, and that custom statistical models are useful for detecting the specific empirical phenomena that are to be explained by said theory. We then develop a conceptual model of eye movements, or specifically of fixation durations and fixation locations, and from it derive a formal statistical model, meeting our goal of crafting a model useful in both the theoretical and empirical research cycle. We demonstrate the use of the model on an example of infant natural scene viewing, showing that the model can explain different features of the eye movement data and showcasing how to identify when the model needs to be adapted because it does not agree with the data. We conclude with a discussion of potential future avenues for formal eye movement models.
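A toy generative sketch of the kind of model discussed here, jointly producing fixation durations and fixation locations, might look as follows. The distributional choices (gamma-distributed durations, a Gaussian random walk over locations) and all parameter names are purely illustrative assumptions, not the authors' model:

```python
import random

def simulate_scanpath(n_fixations, mean_dur=0.3, shape=2.0,
                      sigma=50.0, start=(512, 384)):
    """Generate (x, y, duration) fixations from a toy generative model:
    gamma-distributed durations and Gaussian-random-walk locations."""
    x, y = start
    path = []
    for _ in range(n_fixations):
        dur = random.gammavariate(shape, mean_dur / shape)  # mean = mean_dur
        path.append((x, y, dur))
        # The next fixation location is drawn around the current one
        x += random.gauss(0, sigma)
        y += random.gauss(0, sigma)
    return path
```

Fitting such a model to recorded scanpaths, and checking where it fails to reproduce the data, is the empirical-cycle use the abstract describes.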


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs for extracting eye movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, and object instance segmentation models capture 30% of eye movements.
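Filtering gaze samples by the polygonal boundaries of detected AOIs, as described above, can be sketched with a standard ray-casting point-in-polygon test. The function names are illustrative assumptions, not the authors' implementation:

```python
def point_in_polygon(x, y, poly):
    # Ray-casting test: count crossings of a horizontal ray from (x, y)
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_gaze_to_aois(gaze_points, aoi_polygons):
    """Keep gaze samples that fall inside any detected AOI polygon."""
    return [
        (x, y) for x, y in gaze_points
        if any(point_in_polygon(x, y, poly) for poly in aoi_polygons)
    ]
```

In the pipeline described here, the polygons would come from an object instance segmentation model per video frame (or be reduced to bounding boxes when only a detector is available).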


1999 ◽  
Vol 81 (5) ◽  
pp. 2538-2557 ◽  
Author(s):  
Chiju Chen-Huang ◽  
Robert A. McCrea

Effects of viewing distance on the responses of vestibular neurons to combined angular and linear vestibular stimulation. The firing behavior of 59 horizontal canal–related secondary vestibular neurons was studied in alert squirrel monkeys during the combined angular and linear vestibuloocular reflex (CVOR). The CVOR was evoked by positioning the animal’s head 20 cm in front of, or behind, the axis of rotation during whole body rotation (0.7, 1.9, and 4.0 Hz). The effect of viewing distance was studied by having the monkeys fixate small targets that were either near (10 cm) or far (1.3–1.7 m) from the eyes. Most units (50/59) were sensitive to eye movements and were monosynaptically activated after electrical stimulation of the vestibular nerve (51/56 tested). The responses of eye movement–related units were significantly affected by viewing distance. The viewing distance–related change in response gain of many eye-head-velocity and burst-position units was comparable with the change in eye movement gain. On the other hand, position-vestibular-pause units were approximately half as sensitive to changes in viewing distance as were eye movements. The sensitivity of units to the linear vestibuloocular reflex (LVOR) was estimated by subtraction of angular vestibuloocular reflex (AVOR)–related responses recorded with the head in the center of the axis of rotation from CVOR responses. During far target viewing, unit sensitivity to linear translation was small, but during near target viewing the firing rate of many units was strongly modulated. The LVOR responses and viewing distance–related LVOR responses of most units were nearly in phase with linear head velocity. The signals generated by secondary vestibular units during voluntary cancellation of the AVOR and CVOR were comparable. However, unit sensitivity to linear translation and angular rotation were not well correlated either during far or near target viewing. 
Unit LVOR responses were also not well correlated with their sensitivity to smooth pursuit eye movements or their sensitivity to viewing distance during the AVOR. On the other hand, there was a significant correlation between static eye position sensitivity and sensitivity to viewing distance. We conclude that secondary horizontal canal–related vestibuloocular pathways are an important part of the premotor neural substrate that produces the LVOR. The otolith sensory signals that appear on these pathways have been spatially and temporally transformed to match the angular eye movement commands required to stabilize images at different distances. We suggest that this transformation may be performed by the circuits related to temporal integration of the LVOR.
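The estimation step described above, removing the AVOR-related response (head centered on the axis) from the CVOR response (head eccentric) at a given stimulus frequency, amounts to vector subtraction of sinusoidal modulations. A minimal sketch in terms of gain-and-phase phasors, with illustrative function names and interface:

```python
import cmath
import math

def response_phasor(gain, phase_deg):
    # Represent a sinusoidal modulation as a complex phasor
    return gain * cmath.exp(1j * math.radians(phase_deg))

def lvor_component(cvor_gain, cvor_phase, avor_gain, avor_phase):
    """Estimate the linear (LVOR) component of a unit's response by
    subtracting its AVOR phasor from its CVOR phasor at one frequency.
    Returns (gain, phase in degrees) of the difference."""
    diff = (response_phasor(cvor_gain, cvor_phase)
            - response_phasor(avor_gain, avor_phase))
    return abs(diff), math.degrees(cmath.phase(diff))
```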


2012 ◽  
Vol 25 (0) ◽  
pp. 171-172
Author(s):  
Fumio Mizuno ◽  
Tomoaki Hayasaka ◽  
Takami Yamaguchi

Humans have the capability to flexibly adapt to visual stimulation, such as spatial inversion, in which a person wears glasses that display images upside down for long periods of time (Ewert, 1930; Snyder and Pronko, 1952; Stratton, 1887). To investigate the feasibility of extending vision and the flexible adaptation of the human visual system under binocular rivalry, we developed a system that provides a human user with the artificial oculomotor ability to control each eye independently in arbitrary directions; we named the system Virtual Chameleon, after the independently moving eyes of chameleons (Mizuno et al., 2010, 2011). Users of the system were able to actively control their visual axes by manipulating 3D sensors held in both hands, to watch independent fields of view presented to the left and right eyes, and to look around as chameleons do. Although the independent fields of view provided to the user were thought to be formed by eye-movement control corresponding to pursuit movements in humans, the system had no control mechanisms for the saccadic and compensatory movements that numerous animals, including humans, perform. Fluctuations in dominance and suppression under binocular rivalry are irregular, but it is possible to bias these fluctuations by boosting the strength of one rival image over the other (Blake and Logothetis, 2002). We assumed that the visual stimuli induced by various eye movements affect predominance. Therefore, in this research, we focused on the influence of eye-movement patterns on visual perception under binocular rivalry, and implemented functions to produce saccadic movements in Virtual Chameleon.


2008 ◽  
Vol 3 (2) ◽  
pp. 149-175 ◽  
Author(s):  
Ian Cunnings ◽  
Harald Clahsen

The avoidance of regular but not irregular plurals inside compounds (e.g., *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during the reading of derived words containing regular and irregular plurals and uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties but is instead morphological in nature. The eye-movement data provide detailed information on the time course of processing derived word forms, indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes in at a later stage of processing. We argue that these results are consistent with stage-based models of language processing.


1987 ◽  
Vol 57 (4) ◽  
pp. 1033-1049 ◽  
Author(s):  
P. H. Schiller ◽  
J. H. Sandell ◽  
J. H. Maunsell

Rhesus monkeys were trained to make saccadic eye movements to visual targets using detection and discrimination paradigms in which they were required to make a saccade either to a solitary stimulus (detection) or to that same stimulus when it appeared simultaneously with several other stimuli (discrimination). The detection paradigm yielded a bimodal distribution of saccadic latencies with the faster mode peaking around 100 ms (express saccades); the introduction of a pause between the termination of the fixation spot and the onset of the target (gap) increased the frequency of express saccades. The discrimination paradigm, on the other hand, yielded only a unimodal distribution of latencies even when a gap was introduced, and there was no evidence for short-latency "express" saccades. In three monkeys either the frontal eye field or the superior colliculus was ablated unilaterally. Frontal eye field ablation had no discernible long-term effects on the distribution of saccadic latencies in either the detection or discrimination tasks. After unilateral collicular ablation, on the other hand, express saccades obtained in the detection paradigm were eliminated for eye movements contralateral to the lesion, leaving only a unimodal distribution of latencies. This deficit persisted throughout testing, which in one monkey continued for 9 mo. Express saccades were not observed again for saccades contralateral to the lesion, and the mean latency of the contralateral saccades was longer than the mean latency of the second peak for the ipsiversive saccades. The latency distribution of saccades ipsiversive to the collicular lesion was unaffected except for a few days after surgery, during which time an increase in the proportion of express saccades was evident. Saccades obtained with the discrimination paradigm yielded a small but reliable increase in saccadic latencies following collicular lesions, without altering the shape of the distribution. 
Unilateral muscimol injections into the superior colliculus produced results similar to those obtained immediately after collicular lesions: saccades contralateral to the injection site were strongly inhibited and showed increased saccadic latencies. This was accompanied by a decrease of ipsilateral saccadic latencies and an increase in the number of saccades falling into the express range. The results suggest that the superior colliculus is essential for the generation of short-latency (express) saccades and that the frontal eye fields do not play a significant role in shaping the distribution of saccadic latencies in the paradigms used in this study.

