Modeling and evaluating user behavior in exploratory visual analysis

2016 ◽  
Vol 15 (4) ◽  
pp. 325-339 ◽  
Author(s):  
Khairi Reda ◽  
Andrew E. Johnson ◽  
Michael E. Papka ◽  
Jason Leigh

Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This article presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
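The Markov chain model described above can be sketched in a few lines: given a sequence of coded states recovered from transcripts or log files, first-order transition probabilities are estimated from adjacent pairs. The state names and sequence below are illustrative assumptions, not the authors' actual coding scheme:

```python
from collections import defaultdict

def transition_matrix(sequence):
    # Estimate first-order Markov transition probabilities from
    # an observed sequence of coded analysis states.
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for state, nexts in counts.items():
        total = sum(nexts.values())
        probs[state] = {n: c / total for n, c in nexts.items()}
    return probs

# Hypothetical coded states from a think-aloud transcript
states = ["interact", "observe", "hypothesize", "interact",
          "observe", "hypothesize", "interact", "observe"]
matrix = transition_matrix(states)
print(matrix["interact"]["observe"])  # 1.0
```

Row-normalized counts like these let an evaluator compare, for instance, how often interaction states lead directly to hypothesis formation across study conditions.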

2009 ◽  
Vol 8 (1) ◽  
pp. 56-70 ◽  
Author(s):  
Chen Yu ◽  
Yiwen Zhong ◽  
Thomas Smith ◽  
Ikhyun Park ◽  
Weixia Huang

With advances in computing techniques, large amounts of high-resolution, high-quality multimedia data (video, audio, and so on) have been collected in research laboratories across scientific disciplines, particularly in cognitive and behavioral studies. How to automatically and effectively discover new knowledge from rich multimedia data poses a compelling challenge, because most state-of-the-art data mining techniques can only search for and extract pre-defined patterns or knowledge from complex heterogeneous data. In light of this challenge, we propose a hybrid approach that allows scientists to use data mining as a first pass, and then forms a closed loop: visual analysis of the current results is followed by further data mining inspired by the visualization, the results of which can in turn be visualized and lead to the next round of visual exploration and analysis. In this way, new insights and hypotheses gleaned from the raw data and the current level of analysis can contribute to further analysis. As a first step toward this goal, we implement a visualization system with three critical components: (1) a smooth interface between visualization and data mining; (2) a flexible tool to explore and query temporal data derived from raw multimedia data; and (3) a seamless interface between raw multimedia data and derived data. We have developed various ways to visualize both temporal correlations and statistics of multiple derived variables, as well as conditional and higher-order statistics. Our visualization tool allows users to explore, compare, and analyze multi-stream derived variables and simultaneously switch to access the raw multimedia data.
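As one illustration of the conditional statistics such a tool computes over derived variables, the rate at which one time-aligned binary event stream is active given another can be sketched as follows. The variable names and data are hypothetical, not from the authors' corpus:

```python
def conditional_rate(stream_a, stream_b):
    # P(B active | A active) for two time-aligned binary event
    # streams derived from raw multimedia data.
    both = sum(1 for a, b in zip(stream_a, stream_b) if a and b)
    a_on = sum(stream_a)
    return both / a_on if a_on else 0.0

gaze   = [1, 1, 0, 1, 0, 1]   # hypothetical derived variable: gaze on target
speech = [1, 0, 0, 1, 1, 1]   # hypothetical derived variable: naming event
print(conditional_rate(gaze, speech))  # 0.75
```

Visualizing such rates across many derived streams is what lets patterns found in one mining pass prompt the next round of exploration.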


2011 ◽  
pp. 188-204 ◽  
Author(s):  
Maria Golemati ◽  
Costas Vassilakis ◽  
Akrivi Katifori ◽  
George Lepouras ◽  
Constantin Halatsis

Novel and intelligent visualization methods are being developed to accommodate user searching and browsing tasks, including new and advanced functionalities. In parallel, research in the field of user modeling is progressing toward personalizing these visualization systems according to their users' individual profiles. However, a single visualization method may not suit every information-seeking activity. In this paper we present a visualization environment that is based on a visualization library, i.e., a set of visualization methods, from which the most appropriate one is selected for presenting information to the user. This selection is performed by combining information extracted from the user's context, the system configuration, and the data collection. A set of rules takes this information as input and assigns a score to each candidate visualization method. The presented environment additionally monitors user behavior and preferences to adapt the visualization method selection criteria.
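The rule-based selection can be sketched as follows. The methods, rules, and weights here are illustrative assumptions, not the paper's actual rule set; the point is only the mechanism of rules scoring candidates from context:

```python
def score_methods(context, rules, methods):
    # Each rule inspects the context (user, system configuration,
    # data collection) and awards points to the methods it favors.
    scores = {m: 0 for m in methods}
    for rule in rules:
        for method, pts in rule(context).items():
            scores[method] += pts
    best = max(methods, key=lambda m: scores[m])
    return best, scores

# Illustrative rules and weights (not the paper's actual rule set)
methods = ["treemap", "hyperbolic_tree", "list"]
rules = [
    lambda ctx: {"treemap": 2} if ctx["items"] > 1000 else {"list": 2},
    lambda ctx: {"hyperbolic_tree": 4} if ctx["hierarchy_depth"] > 4 else {},
    lambda ctx: {"list": 1} if ctx["screen"] == "small" else {"treemap": 1},
]

context = {"items": 5000, "hierarchy_depth": 6, "screen": "large"}
best, scores = score_methods(context, rules, methods)
print(best)  # hyperbolic_tree
```

Adapting the selection criteria to monitored user behavior then amounts to adjusting the weights the rules award.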


BMC Genomics ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ratanond Koonchanok ◽  
Swapna Vidhur Daulatabad ◽  
Quoseena Mir ◽  
Khairi Reda ◽  
Sarath Chandra Janga

Abstract Background: Direct-sequencing technologies, such as Oxford Nanopore's, are delivering long RNA reads with great efficacy and convenience. These technologies afford the ability to detect post-transcriptional modifications at single-molecule resolution, promising new insights into the functional roles of RNA. However, realizing this potential requires new tools to analyze and explore this type of data. Results: Here, we present Sequoia, a visual analytics tool that allows users to interactively explore nanopore sequences. Sequoia combines a Python-based backend with a multi-view visualization interface, enabling users to import raw nanopore sequencing data in the Fast5 format, cluster sequences based on electric-current similarities, and drill down into signals to identify properties of interest. We demonstrate the application of Sequoia by generating and analyzing ~500k reads from direct RNA sequencing data of the human HeLa cell line. We focus on comparing signal features from m6A and m5C RNA modifications as a first step toward building automated classifiers. We show how, through iterative visual exploration and tuning of dimensionality reduction parameters, we can separate modified RNA sequences from their unmodified counterparts. We also document new, qualitative signal signatures that distinguish these modifications from otherwise normal RNA bases, which we were able to discover through visualization. Conclusions: Sequoia's interactive features complement existing computational approaches in nanopore-based RNA workflows. The insights gleaned through visual analysis should help users develop rationales, hypotheses, and insights into the dynamic nature of RNA. Sequoia is available at https://github.com/dnonatar/Sequoia.
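Grouping reads by electric-current similarity can be sketched with simple summary features and a toy two-cluster pass. This is a stdlib-only illustration under assumed data; Sequoia's actual pipeline uses richer signal features and dimensionality reduction rather than this minimal k-means:

```python
import statistics

def features(signal):
    # Summarize a raw current trace by its mean level and spread.
    return (statistics.mean(signal), statistics.pstdev(signal))

def kmeans2(points, iters=10):
    # Toy 2-means over 2-D feature points, stdlib only.
    centers = [points[0], points[-1]]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [tuple(statistics.mean(x) for x in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

# Hypothetical current traces (pA): unmodified reads near 90, modified near 110
reads = [[90, 91, 89], [90, 90, 92], [110, 112, 111], [109, 110, 113]]
pts = [features(r) for r in reads]
centers, groups = kmeans2(pts)
print(len(groups[0]), len(groups[1]))  # 2 2
```

When modification-induced current shifts are this separable in feature space, interactive drill-down into each cluster's raw signals becomes tractable.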


2019 ◽  
Vol 19 (1) ◽  
pp. 3-23
Author(s):  
Aurea Soriano-Vargas ◽  
Bernd Hamann ◽  
Maria Cristina F de Oliveira

We present an integrated interactive framework for the visual analysis of time-varying multivariate data sets. As part of our research, we performed in-depth studies concerning the applicability of visualization techniques to obtain valuable insights. We consolidated the considered analysis and visualization methods in one framework, called TV-MV Analytics. TV-MV Analytics effectively combines visualization and data mining algorithms providing the following capabilities: (1) visual exploration of multivariate data at different temporal scales, and (2) a hierarchical small multiples visualization combined with interactive clustering and multidimensional projection to detect temporal relationships in the data. We demonstrate the value of our framework for specific scenarios, by studying three use cases that were validated and discussed with domain experts.


Author(s):  
Jozef Kapusta ◽  
Michal Munk ◽  
Dominik Halvoník ◽  
Martin Drlík

If we are talking about user behavior analytics, we have to understand what the main sources of valuable information are. One of these sources is definitely the web server. There are multiple places from which we can extract the necessary data. The most common are the access log, error log, and custom log files of the web server, proxy server log files, web browser logs, browser cookies, etc. A web server log in its default form is known as a Common Log File (W3C, 1995) and keeps information about the IP address; the date and time of the visit; and the accessed and referenced resource. There are standardized methodologies that comprise several steps leading to the extraction of new knowledge from the provided data. Usually, the first step in each of them is to identify users, users' sessions, page views, and clickstreams. This process is called pre-processing. The main goal of this stage is to take an unprocessed web server log file as input and, after processing, output meaningful representations that can be used in the next phase. In this paper, we describe in detail user session identification, which can be considered the most important part of data pre-processing. Our paper aims to compare user/session identification using the STT with user/session identification using cookies. This comparison was performed with respect to the quality of the generated sequential rules, i.e., the proportions of useful, trivial, and inexplicable rules produced.
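Time-threshold session identification of the kind compared above can be sketched as follows: entries for the same client are split into a new session whenever the gap between consecutive requests exceeds a threshold. The 30-minute threshold and the log entries are illustrative assumptions:

```python
from datetime import datetime, timedelta

def sessions_by_time_threshold(entries, threshold=timedelta(minutes=30)):
    # Group (ip, timestamp) log entries into sessions: a gap above
    # the threshold starts a new session for that client.
    last_seen, sessions = {}, {}
    for ip, ts in sorted(entries, key=lambda e: e[1]):
        if ip not in last_seen or ts - last_seen[ip] > threshold:
            sessions.setdefault(ip, []).append([])
        sessions[ip][-1].append(ts)
        last_seen[ip] = ts
    return sessions

t = datetime(2021, 5, 1, 10, 0)
entries = [
    ("10.0.0.1", t),
    ("10.0.0.1", t + timedelta(minutes=5)),
    ("10.0.0.1", t + timedelta(minutes=50)),  # 45-min gap: new session
    ("10.0.0.2", t + timedelta(minutes=1)),
]
s = sessions_by_time_threshold(entries)
print(len(s["10.0.0.1"]), len(s["10.0.0.2"]))  # 2 1
```

Cookie-based identification replaces the IP key with a persistent client identifier, which is exactly the difference whose effect on rule quality the paper measures.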


2017 ◽  
Vol 11 (01) ◽  
pp. 65-84 ◽  
Author(s):  
Denny Stohr ◽  
Iva Toteva ◽  
Stefan Wilk ◽  
Wolfgang Effelsberg ◽  
Ralf Steinmetz

Instant sharing of user-generated video recordings has become a widely used service on platforms such as YouNow, Facebook.Live, or uStream. Yet providing such services with a high QoE for viewers is still challenging, given that mobile upload speed and capacity are limited, and the recording quality on mobile devices greatly depends on the users' capabilities. One proposed solution to these issues is video composition, which switches between multiple recorded video streams, selecting the best source at any given time to compose a live video with a better overall quality for viewers. Previous approaches have required an in-depth visual analysis of the video streams, which usually limited the scalability of these systems. In contrast, our work allows the stream selection to be realized solely from context information, based on video- and service-quality aspects derived from sensor and network measurements. The implemented monitoring service for a context-aware upload of video streams is evaluated under different network conditions and with diverse user behavior, including camera shaking and user mobility. We evaluated the system's performance in two studies. First, in a user study, we show that our proposed system achieves both a more efficient video upload and a better QoE for viewers. Second, by examining the overall delay for switching between streams based on sensor readings, we show that a composition view change can be achieved efficiently, in approximately four seconds.
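Context-only stream selection of the kind described above can be sketched as a scoring function over sensor and network measurements, with no frame-level analysis. The fields and weights are illustrative assumptions, not the paper's tuned model:

```python
def select_stream(streams):
    # Pick the composition source purely from context information:
    # network capacity, accelerometer shakiness, and resolution.
    def score(s):
        return (2.0 * s["uplink_mbps"]      # measured upload bandwidth
                - 5.0 * s["shake"]          # accelerometer shakiness, 0..1
                + s["resolution_p"] / 360)  # vertical resolution bonus
    return max(streams, key=score)["id"]

streams = [
    {"id": "A", "uplink_mbps": 4.0, "shake": 0.8, "resolution_p": 1080},
    {"id": "B", "uplink_mbps": 3.0, "shake": 0.1, "resolution_p": 720},
]
print(select_stream(streams))  # B: steadier capture outweighs A's resolution
```

Because the score reads only cheap context signals, the selection scales to many concurrent sources where per-frame visual analysis would not.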


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Joerg Leukel ◽  
Vijayan Sugumaran

Purpose: Process models specific to the supply chain domain are an important tool for the analysis of interorganizational interfaces and requirements of information technology (IT) systems supporting supply chain decision-making. The purpose of this study is to examine the effectiveness of supply chain process models for novice analysts in conveying domain semantics compared to alternative textual representations. Design/methodology/approach: A laboratory experiment with graduate students as proxies for novice analysts was conducted. Participants were randomly assigned to either the diagram group, which worked with “thread diagrams” created from the modeling grammar “Supply Chain Operation Reference (SCOR) model”, or the text group, which worked with semantically equivalent textual representations. Domain understanding was measured using cognitively demanding information acquisition for two different domains. Findings: Diagram users were more accurate in identifying product-related information and organizing this information in a graph compared to those using the textual representation. The authors found considerable improvements in domain understanding, and using the diagrams was perceived as easy as using the texts. Originality/value: The study's findings are unique in providing empirical evidence for supply chain process models being an effective representation for novice analysts. Such evidence is lacking in prior research because of the evaluation methods used, which are limited to scenario, case study and informed argument. This study adds the diagram user's perspective to that literature and provides a rigorous empirical evaluation by contrasting diagrammatic and textual representations.


Author(s):  
Tomasz Muldner ◽  
Elhadi Shakshuki

This article presents a novel approach for explaining algorithms that aims to overcome various pedagogical limitations of the current visualization systems. The main idea is that at any given time, a learner is able to focus on a single problem. This problem can be explained, studied, understood, and tested, before the learner moves on to study another problem. Toward this end, a visualization system that explains algorithms at various levels of abstraction has been designed and implemented. In this system, each abstraction is focused on a single operation from the algorithm using various media, including text and an associated visualization. The explanations are designed to help the user to understand basic properties of the operation represented by this abstraction, for example its invariants. The explanation system allows the user to traverse the hierarchy graph, using either a top-down (from primitive operations to general operations) approach or a bottom-up approach. Since the system is implemented using a client-server architecture, it can be used both in the classroom setting and through distance education.


Author(s):  
Serra Çelik

This chapter focuses on predicting web user behavior. When users enter a website, every move they make on that website is stored in web log files. Unlike focus groups or questionnaires, log files reflect real user behavior, and records of actual behavior are of golden value to organizations. In this chapter, ways of extracting user patterns (user behavior) from log files are sought. In this context, the web usage mining process is explained and some web usage mining techniques are mentioned.


2016 ◽  
Vol 16 (2) ◽  
pp. 93-112 ◽  
Author(s):  
João Marcelo Borovina Josko ◽  
João Eduardo Ferreira

Data quality assessment outcomes are essential to ensure useful results from analytical processes. Relevant computational approaches provide assessment support, especially for data defects that can be captured by precise rules. However, data defects that depend more heavily on knowledge of the data context challenge data quality assessment, since the process then involves human supervision. Visualization systems belong to a class of supervised tools that can make data defect structures visible. Despite the considerable design knowledge they encode, there is little design support for the visual quality assessment of data defects. Therefore, this work reports a case study that explored which visualization properties facilitate visual detection of data defects, and how. Its outcomes offer a first set of implications for designing visualization systems that permit visual assessment of data quality.

