The Spatial-Perceptual Design Space: A New Comprehension for Data Visualization

2007 ◽  
Vol 6 (4) ◽  
pp. 261-279 ◽  
Author(s):  
José Fernando Rodrigues ◽  
Agma JM Traina ◽  
Maria Cristina F. de Oliveira ◽  
Caetano Traina

We revisit the design space of visualizations with the aim of identifying and relating its components. To this end, we establish a model to examine the process through which visualizations become expressive for users. This model has led us to a taxonomy oriented to human visual perception. The essence of this taxonomy provides natural criteria for delineating a novel understanding of the design space of visualizations. From this understanding, we elaborate a model for generalized design. The model offers an intuitive comprehension of the visualization design space, departing from fundamental pre-attentive stimuli and from perceptual phenomena. The paper is presented as a survey; its structure introduces an alternative conceptual organization for the space of techniques concerning visual analysis.
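In the spirit of the perception-oriented taxonomy the abstract describes, a design model grounded in pre-attentive stimuli can be sketched as a mapping from data-attribute types to visual channels. The channel lists and their ordering below are illustrative assumptions, not the authors' actual taxonomy:

```python
# Hypothetical sketch: ranking pre-attentive visual channels per data
# attribute type. The rankings are illustrative, not from the paper.
PREATTENTIVE_CHANNELS = {
    "quantitative": ["position", "length", "angle", "area"],
    "ordinal":      ["position", "size", "color_lightness"],
    "categorical":  ["color_hue", "shape", "texture"],
}

def best_channel(attribute_type: str, taken: set = frozenset()) -> str:
    """Pick the most effective still-unused channel for an attribute type."""
    for channel in PREATTENTIVE_CHANNELS[attribute_type]:
        if channel not in taken:
            return channel
    raise ValueError(f"no free channel for {attribute_type!r}")

# Example: assign channels to a two-attribute dataset.
first = best_channel("quantitative")           # "position"
second = best_channel("categorical", {first})  # "color_hue"
```

The point of such a model is that each attribute lands on a channel the visual system processes pre-attentively, and no two attributes compete for the same channel.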

2018 ◽  
Author(s):  
Anamaria Crisan ◽  
Jennifer L. Gardy ◽  
Tamara Munzner

Abstract
Motivation: Data visualization is an important tool for exploring and communicating findings from genomic and healthcare datasets. Yet, without a systematic way of organizing and describing the design space of data visualizations, researchers may not be aware of the breadth of possible visualization design choices or how to distinguish between good and bad options.
Results: We have developed a method that systematically surveys data visualizations using the analysis of both text and images. Our method supports the construction of a visualization design space that is explorable along two axes: why the visualization was created and how it was constructed. We applied our method to a corpus of scientific research articles from infectious disease genomic epidemiology and derived a Genomic Epidemiology Visualization Typology (GEViT) that describes how visualizations were created from a series of chart types, combinations, and enhancements. We have also implemented an online gallery that allows others to explore our resulting design space of visualizations. Our results have important implications for visualization design and for researchers intending to develop or use data visualization tools. Finally, the method that we introduce is extensible to constructing visualization design spaces across other research areas.
Availability: Our browsable gallery is available at http://gevit.net and all project code can be found at https://github.com/amcrisan/gevitAnalysisRelease
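The typology the abstract describes characterizes each visualization by its chart types plus combinations and enhancements, explorable along "why" and "how" axes. A minimal sketch of that record structure and a query over a tiny corpus (field names and example records are hypothetical, not taken from the GEViT code):

```python
# Illustrative sketch of a GEViT-style record: a figure tagged with the
# task it supports ("why") and how it was built ("how").
from dataclasses import dataclass, field

@dataclass
class VisRecord:
    why: str                      # analysis task the figure supports
    chart_types: list             # base chart type(s)
    combination: str = "simple"   # e.g. simple, composite, small multiples
    enhancements: list = field(default_factory=list)

corpus = [
    VisRecord("compare lineages", ["phylogenetic tree"], "composite",
              ["color annotation"]),
    VisRecord("show case counts", ["bar chart"]),
    VisRecord("map transmission", ["geographic map"], "composite",
              ["node-link overlay"]),
]

def by_chart_type(records, chart_type):
    """Filter the corpus along the 'how' axis by base chart type."""
    return [r for r in records if chart_type in r.chart_types]

trees = by_chart_type(corpus, "phylogenetic tree")
```

A browsable gallery like the one at http://gevit.net is, conceptually, this kind of faceted query exposed over the full annotated corpus.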


1993 ◽  
Vol 26 (6) ◽  
pp. 825-842 ◽  
Author(s):  
Yung-Sheng Chen ◽  
Shih-Liang Chang ◽  
Wen-Hsing Hsu

Nanophotonics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 41-74
Author(s):  
Bernard C. Kress ◽  
Ishan Chatterjee

Abstract
This paper is a review and analysis of the various implementation architectures of diffractive waveguide combiners for augmented reality (AR), mixed reality (MR) headsets, and smart glasses. Extended reality (XR) is another acronym frequently used to refer to all variants across the MR spectrum. Such devices have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and are entertained. Already, market analysts show very optimistic expectations on return on investment in MR, for both enterprise and consumer applications. Hardware architectures and technologies for AR and MR have made tremendous progress over the past five years, fueled by recent investment hype in start-ups and accelerated mergers and acquisitions by larger corporations. In order to meet such high market expectations, several challenges must be addressed: first, cementing primary use cases for each specific market segment and, second, achieving greater MR performance out of increasingly size-, weight-, cost- and power-constrained hardware. One such crucial component is the optical combiner. Combiners are often considered as critical optical elements in MR headsets, as they are the direct window to both the digital content and the real world for the user's eyes.
Two main pillars defining the MR experience are comfort and immersion. Comfort comes in various forms:
- wearable comfort: reducing weight and size, pushing back the center of gravity, addressing thermal issues, and so on
- visual comfort: providing accurate and natural 3-dimensional cues over a large field of view and a high angular resolution
- vestibular comfort: providing stable and realistic virtual overlays that spatially agree with the user's motion
- social comfort: allowing for true eye contact, in a socially acceptable form factor
Immersion can be defined as the multisensory perceptual experience (including audio, display, gestures, haptics) that conveys to the user a sense of realism and envelopment.
In order to effectively address both comfort and immersion challenges through improved hardware architectures and software developments, a deep understanding of the specific features and limitations of the human visual perception system is required. We emphasize the need for a human-centric optical design process, which would allow for the most comfortable headset design (wearable, visual, vestibular, and social comfort) without compromising the user's sense of immersion (display, sensing, and interaction). Matching the specifics of the display architecture to the human visual perception system is key to bounding the hardware constraints, allowing for headset development and mass production at reasonable costs, while providing a delightful experience to the end user.
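The trade-off between field of view and angular resolution that this matching implies can be made concrete with a back-of-the-envelope pixel budget. The sketch below assumes the commonly cited ~1 arcminute foveal acuity (about 60 pixels per degree); this figure and the example FOV are assumptions, not numbers from this review:

```python
# Back-of-the-envelope sketch: pixels per eye needed to match foveal
# acuity over a given field of view. Assumes ~60 pixels/degree
# (roughly 20/20 vision), a commonly cited figure.
ACUITY_PPD = 60  # pixels per degree

def pixels_required(fov_h_deg: float, fov_v_deg: float) -> tuple:
    """Horizontal and vertical pixel counts per eye for the given FOV."""
    return (round(fov_h_deg * ACUITY_PPD), round(fov_v_deg * ACUITY_PPD))

# Example: a hypothetical 100 x 70 degree MR headset field of view.
px = pixels_required(100, 70)  # (6000, 4200) pixels per eye
```

Numbers like these illustrate why uniform retinal-resolution displays over wide fields of view are so demanding, and why perception-aware designs (e.g. concentrating resolution where the eye actually resolves detail) help bound the hardware constraints.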


Author(s):  
Denis Hilton

Attribution processes appear to be an integral part of human visual perception, as low-level inferences of causality and intentionality appear to be automatic and are supported by specific brain systems. However, higher-order attribution processes use information held in memory or made present at the time of judgment. While attribution processes about social objects are sometimes biased, there is scope for partial correction. This chapter reviews work on the generation, communication, and interpretation of complex explanations, with reference to explanation-based models of text understanding that result in situation models of narratives. It distinguishes between causal connection and causal selection, and suggests that a factor will be discounted if it is not perceived to be connected to the event and backgrounded if it is perceived to be causally connected to that event, but is not selected as relevant to an explanation. The final section focuses on how interpersonal explanation processes constrain causal selection.


2017 ◽  
Author(s):  
Jeremy Cole ◽  
David Reitter ◽  
Yanxi Liu

Most literature on symmetry perception has focused on bilateral reflection symmetry, with some suggesting that it is the only type of symmetry humans can perceive (Wilson & Wilkinson, 2002). Using image stimuli generated from the mathematically well-defined seventeen wallpaper groups, this study demonstrates that humans can discriminate various symmetries found in 2D wallpaper patterns (Liu, Hel-Or, Kaplan, Van Gool, et al., 2010). Furthermore, the results demonstrate the features which contribute to wallpaper pattern perception. All wallpaper groups but one were found to be reliably distinguishable (p < 0.05). Additionally, as wallpaper patterns can be arranged in a hierarchy, we propose a metric to quantify the similarity of their perception using the shortest path in this hierarchy. This subgroup distance was found to be a factor in a likely model of pattern perception.
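The proposed subgroup distance is the shortest-path length between two wallpaper groups in their subgroup hierarchy. A minimal sketch, using breadth-first search over an illustrative fragment of the 17-group subgroup lattice (the edge list below is only a partial, hand-picked set of well-known subgroup relations, not the full lattice used in the study):

```python
# Sketch of a "subgroup distance": shortest path between two wallpaper
# groups in an (undirected) subgroup lattice. Edge list is a fragment.
from collections import deque

EDGES = [  # (group, subgroup) pairs
    ("p2", "p1"), ("pm", "p1"), ("pmm", "p2"), ("pmm", "pm"),
    ("p4", "p2"), ("p4m", "p4"), ("p4m", "pmm"),
    ("p3", "p1"), ("p6", "p3"), ("p6", "p2"), ("p6m", "p6"),
]

def subgroup_distance(a: str, b: str) -> int:
    """BFS shortest-path length between groups a and b in the lattice."""
    adj = {}
    for u, v in EDGES:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    raise ValueError("groups not connected in this fragment")
```

Under this metric, closely related groups such as p4m and p4 sit one step apart, while unrelated groups are separated by longer paths, matching the intuition that perceptual confusability tracks shared symmetries.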
