Performance Testing on Marker Clustering and Heatmap Visualization Techniques: A Comparative Study on JavaScript Mapping Libraries

2019 ◽  
Vol 8 (8) ◽  
pp. 348 ◽  
Author(s):  
Netek ◽  
Brus ◽  
Tomecka

We are now generating exponentially more data, from more sources, than just a few years ago. Big data, an already familiar term, is generally defined as a massive volume of structured, semi-structured, and/or unstructured data that may not be effectively managed and processed using traditional databases and software techniques. Visualizing a large amount of data quickly and easily on a web platform can be problematic. From this perspective, the main aim of the paper is to test the point-data visualization capabilities of selected JavaScript mapping libraries and to measure their performance and ability to cope with large amounts of data. Nine datasets containing 10,000 to 3,000,000 points were generated from the Nature Conservation Database. Five libraries for marker clustering and two libraries for heatmap visualization were analyzed. Loading time and the ability to visualize large data sets were compared for each dataset and each library. The best-evaluated library was Mapbox GL JS (Graphics Library JavaScript), which showed the highest overall performance. Some of the tested libraries were not able to handle the desired amount of data. In general, fewer than 100,000 points was identified as the threshold for implementation without a noticeable slowdown in performance. Library choice can thus be a limiting factor for point-data visualization in an environment as dynamic as today's.
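The abstract does not reproduce the authors' benchmarking harness. As a rough illustration of the kind of clustered point rendering being tested, here is a minimal Python sketch using folium (a wrapper around Leaflet, one of the library families in this space) and its MarkerCluster plugin on synthetic points; the point count, coordinates, and timing approach are illustrative assumptions, not the authors' setup.

```python
# A minimal sketch (not the authors' harness): generate a synthetic point
# dataset and render it with marker clustering via folium, a Python wrapper
# around Leaflet. Point count and coordinates are illustrative.
import random
import time

import folium
from folium.plugins import MarkerCluster

N_POINTS = 100_000  # roughly the threshold the paper reports for smooth rendering

# Synthetic points scattered around an arbitrary center (illustrative).
points = [
    (49.59 + random.uniform(-1.0, 1.0), 17.25 + random.uniform(-1.0, 1.0))
    for _ in range(N_POINTS)
]

start = time.perf_counter()
m = folium.Map(location=[49.59, 17.25], zoom_start=8)
MarkerCluster(locations=points).add_to(m)
m.save("cluster_test.html")  # loading time is then measured in the browser
print(f"Map generated in {time.perf_counter() - start:.2f} s")
```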

Author(s):  
Anna Ursyn ◽  
Edoardo L'Astorina

This chapter discusses some possible ways in which professionals, researchers, and users representing various knowledge domains collect and visualize big data sets. First, it describes communication through the senses as a basis for visualization techniques, computational solutions for enhancing the senses, and ways of enhancing the senses with technology. The next part discusses the ideas behind visualization of data sets and considers what visualization is and what it is not. Further discussion relates to data visualization through art: as visual solutions to problems in science and mathematics, as documentation of objects and events, and as a testimony to thoughts, knowledge, and meaning. Learning and teaching through data visualization is the concluding theme of the chapter. Edoardo L'Astorina provides a visual analysis of best practices in visualization: a Google Maps overlay that showed, in real time, the arrival times of all the buses in your area based on your location, and a visual representation of all the Tweets in the world about TfL (Transport for London) tube lines, used to predict disruptions.


2002 ◽  
Vol 1 (1) ◽  
pp. 20-34 ◽  
Author(s):  
Daniel A. Keim ◽  
Ming C. Hao ◽  
Umesh Dayal ◽  
Meichun Hsu

Simple presentation graphics are intuitive and easy to use, but they show only highly aggregated data, presenting a very small number of data values (as in the case of bar charts), or they may have a high degree of overlap that occludes a significant portion of the data values (as in the case of x-y plots). In this article, the authors therefore propose a generalization of traditional bar charts and x-y plots that allows the visualization of large amounts of data. The basic idea is to use the pixels within the bars to present detailed information about the data records. The so-called pixel bar charts retain the intuitiveness of traditional bar charts while allowing very large data sets to be visualized effectively. It is shown that effective pixel placement requires solving a complex optimization problem; the authors then present an algorithm that solves it efficiently. Application to a number of real-world e-commerce data sets shows the wide applicability and usefulness of this new idea, and a comparison with other well-known visualization techniques (parallel coordinates and spiral techniques) shows a number of clear advantages.
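The abstract describes the pixel bar chart idea but not the placement algorithm itself. The sketch below illustrates only the core concept, one pixel per record, grouped into bars by category and colored by an attribute, using a naive column-major fill rather than the authors' optimized placement; all data and layout parameters are invented for illustration.

```python
# A minimal pixel-bar-chart sketch: one pixel per record, bars per category,
# records sorted by value within each bar. This uses a naive column-major
# placement, not the optimized pixel-placement algorithm the article presents.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
categories = ["A", "B", "C", "D"]
records = {c: np.sort(rng.gamma(2.0, 50.0, size=rng.integers(2000, 6000)))
           for c in categories}

BAR_WIDTH = 40                      # pixels per bar
GAP = 10                            # blank pixels between bars
height = max(int(np.ceil(len(v) / BAR_WIDTH)) for v in records.values())
width = len(categories) * (BAR_WIDTH + GAP)
img = np.full((height, width), np.nan)  # NaN renders as empty background

for i, c in enumerate(categories):
    x0 = i * (BAR_WIDTH + GAP)
    for j, v in enumerate(records[c]):  # column-major fill: bottom-up, left-right
        col, row = divmod(j, height)
        img[row, x0 + col] = v

plt.imshow(img, origin="lower", aspect="auto", cmap="viridis")
plt.xticks([i * (BAR_WIDTH + GAP) + BAR_WIDTH / 2 for i in range(len(categories))],
           categories)
plt.yticks([])
plt.colorbar(label="record value")
plt.title("Pixel bar chart (naive placement)")
plt.show()
```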


Author(s):  
R. Daniel Bergeron ◽  
Daniel A. Keim ◽  
Ronald M. Pickett

2016 ◽  
pp. 1677-1692
Author(s):  
William H. Hsu

This chapter presents challenges and recommended practices for visualizing data about phenomena that are observed or simulated across space and time. Some data may be collected for the express purpose of answering questions through quantitative analysis and simulation, especially questions about future occurrences or continuations of the phenomena – that is, prediction. In this case, analytical computations may serve two purposes: to prepare the data for presentation, and to answer questions by producing information, especially an informative model, that can itself be visualized. These purposes may overlap significantly. The focus of the chapter is therefore analytical techniques for the visual display of quantitative data and information that scale up to large data sets. It begins by surveying trends in educational and scientific use of visualization and reviewing taxonomies of data to be visualized. Next, it reviews aspects of spatiotemporal data that pose challenges, such as heterogeneity and scale, along with techniques for dealing specifically with geospatial data and text. An exploration of concrete applications then follows. Finally, tenets of information visualization design, put forward by Tufte and other experts on data representation and presentation, are considered in the context of analytical applications for heterogeneous data in spatiotemporal domains.


Author(s):  
Evan F. Sinar

Data visualization—a set of approaches for applying graphical principles to represent quantitative information—is extremely well matched to the nature of survey data but often underleveraged for this purpose. Surveys produce data sets that are highly structured and comparative across groups and geographies, that often blend numerical and open-text information, and that are designed for repeated administration and analysis. Each of these characteristics aligns well with specific visualization types, use of which has the potential to—when paired with foundational, evidence-based tenets of high-quality graphical representations—substantially increase the impact and influence of data presentations given by survey researchers. This chapter recommends and provides guidance on data visualization techniques fit to purpose for survey researchers, while also describing key risks and missteps associated with these approaches.
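As one concrete example of a fit-to-purpose technique of the kind the chapter recommends (though not drawn from the chapter itself), the sketch below builds a diverging stacked bar chart, a common choice for Likert-scale survey items; the items, percentages, and colors are invented for illustration.

```python
# Illustrative sketch (not from the chapter): a diverging stacked bar chart,
# a visualization type commonly recommended for Likert-scale survey items.
import numpy as np
import matplotlib.pyplot as plt

items = ["Pay", "Workload", "Management"]
# Percentages per item: [strongly disagree, disagree, agree, strongly agree]
pct = np.array([[10, 25, 45, 20],
                [20, 30, 35, 15],
                [ 5, 15, 50, 30]])

colors = ["#ca0020", "#f4a582", "#92c5de", "#0571b0"]
# Shift each row left so the disagree/agree boundary sits at zero.
left = -(pct[:, 0] + pct[:, 1]).astype(float)
fig, ax = plt.subplots()
for k in range(4):
    ax.barh(items, pct[:, k], left=left, color=colors[k])
    left = left + pct[:, k]
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("% of respondents (disagree ← 0 → agree)")
plt.show()
```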


2019 ◽  
Vol 8 (2) ◽  
pp. 63 ◽  
Author(s):  
Jing He ◽  
Haonan Chen ◽  
Yijin Chen ◽  
Xinming Tang ◽  
Yebin Zou

Trajectory big data have significant applications in many areas, such as traffic management, urban planning, and military reconnaissance. Traditional visualization methods, represented by contour maps, shading maps, and hypsometric maps, are based mainly on the spatiotemporal information of trajectories; they support macroscopic study of the spatiotemporal conditions of an entire trajectory set and microscopic analysis of the movement of each individual trajectory, and they are widely used in on-screen display and flat mapping. As trajectory data quality improves, these data describe not only the spatial and temporal dimensions but also many other attributes (e.g., speed, orientation, and elevation), with large data volumes and high dimensionality. They also exhibit relatively complicated internal relationships and regularities that are difficult to analyze, so traditional approaches can no longer fully meet the requirements of visualizing trajectory data and mining hidden information. Diverse visualization methods that reveal the value of massive trajectory information are therefore currently a hot research topic. This paper summarizes the research status of trajectory data-visualization techniques in recent years and distills the common contemporary trajectory data-visualization methods, providing a comprehensive understanding of the fundamental characteristics and diverse achievements of trajectory-data visualization.
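As a minimal illustration of one family of methods such surveys cover, mapping a trajectory attribute such as speed onto the geometry, the following sketch colors a synthetic trajectory's segments by a per-segment speed proxy; the spiral path and speed values are assumptions for demonstration, not data from the paper.

```python
# A minimal sketch with synthetic data: draw a trajectory as line segments
# colored by speed, one attribute-mapping technique for trajectory data.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

t = np.linspace(0, 4 * np.pi, 400)
x, y = t * np.cos(t), t * np.sin(t)          # a spiral as a stand-in trajectory
speed = np.hypot(np.diff(x), np.diff(y))     # per-segment speed proxy

# Build one segment per consecutive point pair: shape (n-1, 2, 2).
pts = np.column_stack([x, y]).reshape(-1, 1, 2)
segments = np.concatenate([pts[:-1], pts[1:]], axis=1)

lc = LineCollection(segments, cmap="plasma")
lc.set_array(speed)                          # color each segment by its speed
fig, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
fig.colorbar(lc, ax=ax, label="speed")
ax.set_title("Trajectory colored by speed")
plt.show()
```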


2014 ◽  
Vol 989-994 ◽  
pp. 2457-2461 ◽  
Author(s):  
Ting Ting Jiang ◽  
Qing Gang Wang ◽  
Hai Kuo Zhang ◽  
Wei Dong Xiao ◽  
Chong Zhang ◽  
...  

With the advent of the era of big data, data visualization faces great challenges. In industries such as telecommunications and finance, big data has almost reached the point where "data is the business itself". To help more people understand, use, and analyze these data, we propose a method for displaying big data in the field of finance and the notion of Zoom Financial Data Visualization (ZFDV). By providing a consistent preliminary design for ZFDV and a set of ZFDV interaction techniques, ZFDV makes it possible for users to browse through very large data sets. These techniques use the structure of the displayed data to guide human interaction and provide a way to improve interactive navigation of financial data.
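The abstract does not detail ZFDV's techniques, so the sketch below illustrates only the general level-of-detail idea behind zoomable navigation of large financial series: aggregate more coarsely as the visible time window widens, so a view never draws more than a bounded number of points. The series, window bounds, and point budget are illustrative assumptions.

```python
# Generic level-of-detail sketch (ZFDV's actual techniques are not given in
# the abstract): resample a price series more coarsely as the visible time
# window widens, so zooming out never draws more than a few hundred points.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2014-01-01", periods=500_000, freq="min")
prices = pd.Series(100 + rng.normal(0, 0.05, len(idx)).cumsum(), index=idx)

def view(series: pd.Series, start, end, max_points: int = 500) -> pd.Series:
    """Return the slice [start, end], downsampled to at most ~max_points."""
    window = series.loc[start:end]
    if len(window) <= max_points:
        return window                            # zoomed in far enough: raw data
    step = int(np.ceil(len(window) / max_points))
    return window.resample(f"{step}min").mean()  # zoomed out: aggregate

print(len(view(prices, "2014-01-01", "2014-12-01")))  # coarse overview
print(len(view(prices, "2014-03-03", "2014-03-04")))  # fine detail
```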


Author(s):  
Süreyya Özöğür Akyüz ◽  
Gürkan Üstünkar ◽  
Gerhard Wilhelm Weber

The interplay of machine learning (ML) and optimization methods is an emerging field of artificial intelligence. Both ML and optimization are concerned with modeling systems related to real-world problems. Parameter selection for classification models is an important task for ML algorithms. In statistical learning theory, cross-validation (CV), the best-known model selection method, can be very time-consuming for large data sets. One of the recent model selection techniques developed for support vector machines (SVMs) is based on observed test-point margins. In this study, the observed-margin strategy is integrated into our novel infinite kernel learning (IKL) algorithm together with the multi-local procedure (MLP), an optimization technique for finding a global solution. The experimental results show improvements in accuracy and speed when compared with multiple kernel learning (MKL) and semi-infinite linear programming (SILP) with CV.
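The IKL and MLP procedures are not reproduced in this abstract; the sketch below shows only the CV baseline the study compares against, selecting SVM kernel parameters by grid search with k-fold cross-validation in scikit-learn. The synthetic data and parameter grid are illustrative, and the fit count indicates why CV becomes expensive on large data sets.

```python
# Sketch of the cross-validation baseline the study compares against: grid
# search over SVM kernel parameters with k-fold CV (scikit-learn). The cost
# is one fit per fold per candidate, which is what makes CV slow on big data.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # 9 candidates x 5 folds = 45 fits
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```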


2016 ◽  
Vol 39 (1) ◽  
pp. 127-146 ◽  
Author(s):  
Karen A. Monsen ◽  
Jessica J. Peterson ◽  
Michelle A. Mathiason ◽  
Era Kim ◽  
Brian Votava ◽  
...  

Visualization is a Big Data method for detecting and validating previously unknown and hidden patterns within large data sets. This study used visualization techniques to discover and test novel patterns in public health nurse (PHN)–client–risk–intervention–outcome relationships. To understand the mechanism underlying risk reduction among high-risk mothers, data representing complex social interventions were visualized in a series of three steps and analyzed, together with other important contextual factors, using standard descriptive and inferential statistics. Overall, client risk decreased after clients received personally tailored PHN services. Clinically important and unique PHN–client–risk–intervention–outcome patterns were discovered through pattern detection using streamgraph, heat map, and parallel coordinates techniques. Statistical evaluation validated that PHN intervention tailoring leads to improved client outcomes. The study demonstrates the importance of exploring data to discover ways to improve care quality and client outcomes. Further research is needed to examine additional factors that may influence PHN–client–risk–intervention–outcome patterns and to test these methods with other data sets.
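As a minimal illustration of one of the three techniques named above, the following sketch draws a parallel coordinates plot with pandas; the client-level variables and outcome groups are synthetic stand-ins, not the study's data.

```python
# Minimal parallel-coordinates sketch (synthetic data, not the study's): each
# line is one client, axes are risk measures, color marks the outcome group.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(2)
n = 60
df = pd.DataFrame({
    "risk_before": rng.uniform(2, 5, n),
    "risk_after": rng.uniform(1, 4, n),
    "visits": rng.integers(1, 12, n).astype(float),
    "outcome": rng.choice(["improved", "unchanged"], n),
})

parallel_coordinates(df, class_column="outcome", colormap="coolwarm", alpha=0.6)
plt.title("Client risk and intervention dose by outcome")
plt.show()
```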

