Integration of CityGML and Collada for High-Quality Geographic Data Visualization on the PC and Xbox 360

Author(s):  
Marc Herrlich ◽  
Henrik Holle ◽  
Rainer Malaka
2019 ◽  
Vol 16 ◽  
Author(s):  
Marta M. Jankowska ◽  
Jiue-An Yang ◽  
Jessica Block ◽  
Rebecca J. Baer ◽  
Laura L. Jelliffe-Pawlowski ◽  
...  

2016 ◽  
Author(s):  
Jens Ingensand ◽  
Sarah Composto ◽  
Olivier Ertz ◽  
Daniel Rappo ◽  
Marion Nappez ◽  
...  

Scientific projects increasingly use volunteered geographic information (VGI) to collect and validate geographic data. This approach faces three challenges: users must first be found, then convinced to collaborate and contribute, and scientists must finally be able to gather high-quality data for their projects. In this paper these three challenges are discussed using the experience gained in three different research projects: Urbangene, Signalez-nous and BioSentiers.


2003 ◽  
Vol 36 (3) ◽  
pp. 944-947 ◽  
Author(s):  
T. D. Fenn ◽  
D. Ringe ◽  
G. A. Petsko

Macromolecular visualization is hampered by the fragmented set of available programs and the lack of cooperativity among them. The amount of visual information required for robust structural analysis is relatively difficult to generate and rarely allows further high-quality three-dimensional graphic rendering. Here, a modification of MolScript [Kraulis (1991). J. Appl. Cryst. 24, 946–950] is presented which contains the capability of the original MolScript, the ability to carry out a majority of the options available in most other crystallographic visualization packages, as well as several new features of its own. POVScript+ (currently version 1.62) allows anisotropic displacement ellipsoid rendering (read in as a second-rank tensor from a PDB file), electron-density polygonization (in several formats derived from a 'marching tetrahedra' approach), volumetric rendering of electron density and GRASP/MSMS surface-map input/output. Finally, POVRay output is supported (via a modified version of PovScript) to generate high-quality renderings that are easily modified for any of a number of purposes (e.g. animations or altered textures). POVScript+ provides a marked increase in the amount of structural and atomic detail possible, while still allowing a straightforward means of generating this information.
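The anisotropic displacement tensor that POVScript+ reads from a PDB file is stored in ANISOU records, which hold the six unique elements of the symmetric second-rank tensor U as integers in Å² × 10⁴ at fixed columns (per PDB format v3.3). A minimal sketch of recovering the full 3 × 3 tensor from such a record — the helper name is illustrative, not part of POVScript+:

```python
def parse_anisou(line):
    """Recover the symmetric 3x3 anisotropic displacement tensor U (in A^2)
    from a PDB ANISOU record. The six integers at columns 29-70 are
    U11, U22, U33, U12, U13, U23 scaled by 10^4 (PDB format v3.3)."""
    if not line.startswith("ANISOU"):
        raise ValueError("not an ANISOU record")
    # Fixed-width fields: each value occupies 7 columns starting at column 29.
    u11, u22, u33, u12, u13, u23 = (
        int(line[c:c + 7]) / 1.0e4 for c in range(28, 70, 7)
    )
    # Assemble the full symmetric second-rank tensor.
    return [[u11, u12, u13],
            [u12, u22, u23],
            [u13, u23, u33]]
```

The eigenvalues and eigenvectors of this matrix give the semi-axes and orientation of the displacement ellipsoid, which is ultimately what a renderer draws.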


Author(s):  
Tatyana S. Molokina ◽  
Aleksey A. Kolesnikov

The growth of information technology has significantly expanded the possibilities for storing, processing and presenting spatial data, spurring a new wave of development in geovisualization, a branch of cartography and geoinformatics. Interactivity and dynamics have become the main distinguishing features of modern maps, especially in cartographic design, which now extends to problems of human-computer interaction in order to support more effective analysis of geodata and the development of spatial solutions. The article examines existing definitions of geovisualization and proposes its own version. The scheme and features of the individual stages of creating a geovisualization are considered. Typical tasks that must be solved to create high-quality visualization of spatial data are formulated and systematized. On the basis of these tasks and their specifics, the most promising areas of research in geovisualization are identified.


2020 ◽  
Author(s):  
Andrew Lensen ◽  
Bing Xue ◽  
Mengjie Zhang

Data visualization is a key tool in data mining for understanding big datasets. Many visualization methods have been proposed, including the well-regarded state-of-the-art method t-distributed stochastic neighbor embedding. However, the most powerful visualization methods have a significant limitation: the manner in which they create their visualization from the original features of the dataset is completely opaque. Many domains require an understanding of the data in terms of the original features; there is hence a need for powerful visualization methods which use understandable models. In this article, we propose a genetic programming (GP) approach called GP-tSNE for evolving interpretable mappings from the dataset to high-quality visualizations. A multiobjective approach is designed that produces a variety of visualizations in a single run which gives different tradeoffs between visual quality and model complexity. Testing against baseline methods on a variety of datasets shows the clear potential of GP-tSNE to allow deeper insight into data than that provided by existing visualization methods. We further highlight the benefits of a multiobjective approach through an in-depth analysis of a candidate front, which shows how multiple models can be analyzed jointly to give increased insight into the dataset.
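The multiobjective tradeoff described above amounts to presenting the nondominated (Pareto) set of evolved models, each scored on visual quality and model complexity. A minimal, generic sketch of extracting such a front — an illustration of the general idea, not the authors' implementation; both objectives are treated as minimized:

```python
def pareto_front(candidates):
    """Return the nondominated subset of (quality_loss, complexity) pairs.
    A candidate is dominated if another candidate is no worse on both
    objectives and strictly better on at least one (both minimized)."""
    def dominated(p):
        return any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in candidates
        )
    return [p for p in candidates if not dominated(p)]
```

Each point on the returned front is a model worth inspecting jointly with the others: low-complexity mappings trade some visual quality for interpretability, which is the analysis the abstract describes.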


Author(s):  
Evan F. Sinar

Data visualization—a set of approaches for applying graphical principles to represent quantitative information—is extremely well matched to the nature of survey data but often underleveraged for this purpose. Surveys produce data sets that are highly structured and comparative across groups and geographies, that often blend numerical and open-text information, and that are designed for repeated administration and analysis. Each of these characteristics aligns well with specific visualization types, use of which has the potential to—when paired with foundational, evidence-based tenets of high-quality graphical representations—substantially increase the impact and influence of data presentations given by survey researchers. This chapter recommends and provides guidance on data visualization techniques fit to purpose for survey researchers, while also describing key risks and missteps associated with these approaches.

