Tracking Eye Gaze during Interpretation of Endoluminal Three-dimensional CT Colonography: Visual Perception of Experienced and Inexperienced Readers

Radiology ◽  
2014 ◽  
Vol 273 (3) ◽  
pp. 783-792 ◽  
Author(s):  
Susan Mallett ◽  
Peter Phillips ◽  
Thomas R. Fanshawe ◽  
Emma Helbren ◽  
Darren Boone ◽  
...  
2013 ◽  
Vol 13 (4) ◽  
pp. 12-12 ◽  
Author(s):  
S. A. Cholewiak ◽  
R. W. Fleming ◽  
M. Singh

Author(s):  
Nico Orlandi

Why do things look to us as they do? This question, formulated by psychologist Kurt Koffka, identifies the central problem of vision science. Consider looking at a black cat. We tend to see both the cat and its colour as remaining the same over time. Despite the ease with which this perception occurs, the process by which we perceive is fairly complex. The initial stimulation that gives rise to seeing consists in a pattern of light that projects on the retina – a light-sensitive layer of the eye. The so-called ‘retinal image’ is a two-dimensional projection that does not correspond in any obvious manner to the way things look. It is not three-dimensional, coloured and shaped in a similar fashion to the objects of our experience. Indeed, the light projected from objects is not just different from what we see; it is also both continuously changing and ambiguous. Because the cat moves around, the light it reflects changes from moment to moment. The cat’s projection on the retina correspondingly changes in size. We do not, however, see the cat as changing in size. We tend to see it as size-constant and uniformly coloured through time. How do we explain this constancy? Along similar lines, the cat’s white paws produce on the retina a patch of light that differs in intensity from the rest. This patch could also be caused by a change in illumination. A black surface illuminated very brightly can look like a white surface illuminated very dimly. This means that the light hitting the retina from the paws is underdetermined – it does not uniquely specify what is present. But, again, we tend to see the paws as consistently white. We do not see them as shifting from being white to being black but brightly illuminated. How do we explain this stability? A central aim of theories of vision is to answer these questions. The science that attempts to address these queries is interdisciplinary.
Traditionally, philosophical theories of vision have influenced psychological theories and vice versa. The collaboration between these disciplines eventually developed into what is now known as cognitive science. Cognitive science includes – in addition to philosophy and psychology – computer science, linguistics and neuroscience. Cognitive scientists aim primarily to understand the process by which we see. Philosophers are interested in this topic particularly as it connects to understanding the nature of our acquaintance with reality. Theories of vision differ along many dimensions, and giving a full survey is not possible in this entry. One useful distinction is whether a theory presumes that visual perception involves a psychological process. Psychological theories of vision hold that in achieving perception – which is itself a psychological state – the organism uses other psychological material. Opponents of psychological theories prefer instead to appeal to physiological, mechanical and neurophysiological explanations.
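The size-constancy puzzle described above can be made concrete with simple geometry (an illustration, not from the entry itself): the visual angle an object subtends at the eye shrinks roughly in proportion to its distance, even though perceived size stays stable. A minimal sketch under assumed pinhole geometry:

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended at the eye by an object of a given
    size at a given distance (simple pinhole geometry)."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# A 0.3 m cat viewed at 2 m vs. 4 m: the retinal projection roughly
# halves when the distance doubles, yet the cat looks the same size.
near = visual_angle_deg(0.3, 2.0)  # ~8.58 degrees
far = visual_angle_deg(0.3, 4.0)   # ~4.30 degrees
print(round(near, 2), round(far, 2))
```

The constancy question is then: given that the proximal stimulus varies this much, why does perceived size not vary with it?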


2020 ◽  
Vol 10 (5) ◽  
pp. 1668 ◽  
Author(s):  
Pavan Kumar B. N. ◽  
Adithya Balasubramanyam ◽  
Ashok Kumar Patil ◽  
Chethana B. ◽  
Young Ho Chai

Over the years, the gaze input modality has been an easy-to-use and in-demand human–computer interaction (HCI) method for various applications. Research on gaze-based interactive applications has advanced considerably, as HCIs are no longer constrained to traditional input devices. In this paper, we propose a novel immersive eye-gaze-guided camera (called GazeGuide) that can seamlessly control the movements of a camera mounted on an unmanned aerial vehicle (UAV) from the eye-gaze of a remote user. The video stream captured by the camera is fed into a head-mounted display (HMD) with a binocular eye tracker. The user’s eye-gaze is the sole input modality to maneuver the camera. A user study was conducted considering static and moving targets of interest in three-dimensional (3D) space to evaluate the proposed framework. GazeGuide was compared with a state-of-the-art input modality, a remote controller. The qualitative and quantitative results showed that the proposed GazeGuide performed significantly better than the remote controller.
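The abstract does not include the control logic, but a common way to turn gaze into camera motion is to map normalized gaze coordinates from the HMD eye tracker to incremental pan/tilt commands, with a dead zone around the screen centre to suppress fixation jitter. The following is a hypothetical sketch of that idea; the function name, field-of-view values, and dead-zone size are all illustrative assumptions, not the authors' implementation:

```python
def gaze_to_pan_tilt(gx, gy, fov_h_deg=90.0, fov_v_deg=60.0, dead_zone=0.1):
    """Map normalized gaze coordinates (gx, gy in [-1, 1], screen
    centre at 0, 0) to pan/tilt angle offsets for a camera gimbal.
    Gaze inside the central dead zone produces no motion."""
    def axis(g, fov):
        if abs(g) < dead_zone:
            return 0.0
        sign = 1.0 if g > 0 else -1.0
        # Rescale the remaining range so motion ramps up smoothly
        # from the dead-zone edge to the screen border.
        magnitude = (abs(g) - dead_zone) / (1.0 - dead_zone)
        return sign * magnitude * (fov / 2.0)
    return axis(gx, fov_h_deg), axis(gy, fov_v_deg)

# Looking at the right edge of the display yields a full-rate pan:
print(gaze_to_pan_tilt(1.0, 0.0))  # (45.0, 0.0)
```

In a real system these offsets would be sent to the UAV's gimbal controller at the eye tracker's sampling rate, typically after temporal smoothing of the raw gaze signal.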


i-Perception ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 204166952092703
Author(s):  
Kristof Meding ◽  
Sebastian A. Bruijns ◽  
Bernhard Schölkopf ◽  
Philipp Berens ◽  
Felix A. Wichmann

One of the most important tasks for humans is the attribution of causes and effects in all walks of life. The first systematic study of visual perception of causality—often referred to as phenomenal causality—was done by Albert Michotte using his now well-known launching events paradigm. Launching events are the seeming collision and seeming transfer of movement between two objects—abstract, featureless stimuli (“objects”) in Michotte’s original experiments. Here, we study the relation between causal ratings for launching events in Michotte’s setting and launching collisions in a photorealistically computer-rendered setting. We presented launching events with differing temporal gaps, the same launching processes with photorealistic billiard balls, as well as photorealistic billiard balls with realistic motion dynamics, that is, an initial rebound of the first ball after collision and a short sliding phase of the second ball due to momentum and friction. We found that providing the normal launching stimulus with realistic visuals led to lower causal ratings, but realistic visuals together with realistic motion dynamics evoked higher ratings. Two-dimensional versus three-dimensional presentation, on the other hand, did not affect phenomenal causality. We discuss our results in terms of intuitive physics as well as cue conflict.
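A Michotte-style launching stimulus is simple to describe procedurally: object A moves until it reaches object B, and after an optional temporal gap, B starts moving at the same speed. The sketch below is an assumed reconstruction of such a stimulus generator for illustration (not the authors' code); the parameter names and values are hypothetical:

```python
def launching_event(duration_s=2.0, dt=0.01, speed=1.0,
                    contact_time=1.0, gap_s=0.0):
    """Positions of two discs over time in a Michotte-style launching
    event. Disc A moves at constant speed and stops at the contact
    point; after a temporal gap of gap_s seconds, disc B (initially
    resting at the contact point) starts moving at the same speed.
    Larger gaps typically weaken the causal impression."""
    b_start = contact_time + gap_s
    steps = int(round(duration_s / dt))
    a_pos, b_pos = [], []
    for i in range(steps):
        t = i * dt
        a_pos.append(speed * min(t, contact_time))           # A halts at contact
        b_pos.append(speed * contact_time                     # B's rest position
                     + speed * max(0.0, t - b_start))         # B moves after gap
    return a_pos, b_pos

# With a 200 ms gap, B lags behind a seamless launch:
a, b = launching_event(gap_s=0.2)
```

Varying `gap_s` across trials and collecting causal ratings per gap is the basic manipulation in the temporal-gap condition the abstract describes.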


2005 ◽  
Vol 46 (3) ◽  
pp. 222-226 ◽  
Author(s):  
R. Röttgen ◽  
F. Fischbach ◽  
M. Plotkin ◽  
H. Herzog ◽  
T. Freund ◽  
...  

Purpose: To improve the sensitivity of computed tomography (CT) colonography in the detection of polyps by comparing the 3D reconstruction tool “colon dissection” and the endoluminal view (virtual colonoscopy) with axial 2D reconstructions. Material and Methods: Forty‐eight patients (22 M, 26 F, mean age 57±21) were studied after intra‐anal air insufflation in the supine and prone positions using a 16‐slice helical CT (16×0.625 mm, pitch 1.7; detector rotation time 0.5 s; 160 mAs and 120 kV) and conventional colonoscopy. Two radiologists blinded to the results of the conventional colonoscopy analyzed the 3D reconstruction in virtual‐endoscopy mode, in colon‐dissection mode, and the axial 2D slices. Results: Conventional colonoscopy revealed a total of 35 polyps in 15 patients; 33 polyps were disclosed by CT methods. Sensitivity and specificity for detecting colon polyps were 94% and 94%, respectively, when using the “colon dissection”, 89% and 94% when using “virtual endoscopy”, and 62% and 100% when using axial 2D reconstruction. Sensitivity in relation to the diameter of colon polyps with “colon dissection”, “virtual colonoscopy”, and axial 2D‐slices was: polyps with a diameter >5.0 mm, 100%, 100%, and 71%, respectively; polyps with a diameter of between 3 and 4.9 mm, 92%, 85%, and 46%; and polyps with a diameter <3 mm, 89%, 78%, and 56%. The difference between “virtual endoscopy” and “colon dissection” in diagnosing polyps up to 4.9 mm in diameter was statistically significant. Conclusion: The 3D reconstruction software “colon dissection” improves the sensitivity of CT colonography compared with the endoluminal view.
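The reported percentages follow from the standard confusion-matrix definitions of sensitivity and specificity. A short sketch, using the per-polyp counts given in the abstract (33 of 35 polyps detected overall) as a worked example; the specificity counts in the comment are illustrative, since the abstract reports only the percentage:

```python
def sensitivity(tp, fn):
    """Fraction of true positives detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives correctly ruled out: TN / (TN + FP)."""
    return tn / (tn + fp)

# 33 of 35 colonoscopy-confirmed polyps detected by CT:
# 33 true positives, 2 false negatives -> sensitivity ~0.94.
print(round(sensitivity(33, 2), 2))  # 0.94

# Specificity is computed the same way from negatives, e.g. 94 true
# negatives and 6 false positives (illustrative counts) give 0.94.
```

Per-mode and per-diameter sensitivities in the abstract are obtained by restricting these counts to the corresponding subset of polyps.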

