Using Flow Maps to Visualize Time-Series Data: Comparing the Effectiveness of a Paper Map Series, a Computer Map Series, and Animation

1998
pp. 47-64
Author(s):
Harry Johnson
Elisabeth S. Nelson

Motion and change through time are important aspects of thematic maps. Traditionally, such data have been visualized using a series of paper maps that represent multiple snapshots of a location over time. The map reader compares these maps visually when analyzing change over time for a location. This static view of change has served cartographers well in the past, but computer animation now allows cartographers to emphasize the dynamic nature of these data. By animating a map, change over time can be represented on a single map rather than in a traditional map series. This study compared a paper map series, a computer map series, and animated maps of the same data to assess the effectiveness of each technique for memorizing data symbolized by graduated flow lines. Subjects were asked to study the maps and to memorize two types of information: quantity data at specified locations on the maps and trend patterns that occurred across the maps. Memorization of the information was subsequently tested using a series of multiple-choice questions. Analysis of response times and accuracy rates for these questions suggests that animation does not improve learning for quantity evaluations. It does appear, however, to improve subjects' abilities to learn and remember trend patterns in the data. Results also indicate gender differences in the use of animated maps.

2021
Vol 3 (1)
Author(s):
Hitoshi Iuchi
Michiaki Hamada

Time-course experiments using parallel sequencers have the potential to uncover gradual changes in cells over time that cannot be observed in a two-point comparison. An essential step in time-series data analysis is the identification of temporal differentially expressed genes (TEGs) under two conditions (e.g. control versus case). Model-based approaches, which are typical TEG detection methods, often fix one parameter (e.g. polynomial degree or degrees of freedom) for an entire dataset. This approach risks modeling linearly increasing genes with higher-order functions, or fitting cyclic gene expression with linear functions, thereby leading to false positives/negatives. Here, we present a Jonckheere–Terpstra–Kendall (JTK)-based non-parametric algorithm for TEG detection. Benchmarks using simulation data show that the JTK-based approach outperforms existing methods, especially in long time-series experiments. Additionally, application of JTK to time-series RNA-seq data from seven tissue types across developmental stages in mouse and rat suggested that wave patterns, rather than differences in expression levels, drive JTK's identification of TEGs. This suggests that JTK is well suited when the focus is on expression patterns over time rather than expression levels, such as in comparisons between different species. These results show that JTK is an excellent candidate for TEG detection.
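As a rough illustration of the non-parametric idea (not the authors' JTK implementation, which combines the Jonckheere–Terpstra test with Kendall's statistic), the sketch below scores a monotonic temporal trend in a single gene's expression profile with Kendall's tau; the gene profiles and time points are toy values.

```python
import numpy as np
from scipy.stats import kendalltau

def trend_score(expression, timepoints):
    """Kendall's tau and p-value of expression against the time index."""
    tau, pval = kendalltau(timepoints, expression)
    return tau, pval

# Toy profiles over 8 time points: one rising gene, one flat gene.
time = np.arange(8)
rising = np.array([1.0, 1.2, 1.9, 2.4, 3.1, 3.3, 4.0, 4.6])
flat = np.array([2.0, 2.1, 1.9, 2.0, 2.2, 1.9, 2.1, 2.0])
print(trend_score(rising, time))  # strong positive tau, small p-value
print(trend_score(flat, time))    # tau near zero, large p-value
```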


2021
Author(s):
Sadnan Al Manir
Justin Niestroy
Maxwell Adam Levinson
Timothy Clark

Introduction: Transparency of computation is a requirement for assessing the validity of computed results and research claims based upon them, and it is essential for access to, assessment of, and reuse of computational components. These components may be subject to methodological or other challenges over time. While reference to archived software and/or data is increasingly common in publications, a single machine-interpretable, integrative representation of how results were derived, one that supports defeasible reasoning, has been absent. Methods: We developed the Evidence Graph Ontology, EVI, in OWL 2, with a set of inference rules, to provide deep representations of supporting and challenging evidence for computations, services, software, data, and results across arbitrarily deep networks of computations, in connected or fully distinct processes. EVI integrates FAIR practices on data and software with important concepts from provenance models and argumentation theory. It extends PROV for additional expressiveness, with support for defeasible reasoning. EVI treats any computational result or component of evidence as a defeasible assertion, supported by a directed acyclic graph (DAG) of the computations, software, data, and agents that produced it. Results: We have successfully deployed EVI for very-large-scale predictive analytics on clinical time-series data. Every result may reference its own evidence graph as metadata, which can be extended when subsequent computations are executed. Discussion: Evidence graphs support transparency and defeasible reasoning on results. They are first-class computational objects and reference the datasets and software from which they are derived. They support fully transparent computation, with challenge and support propagation. The EVI approach may be extended to include instruments, animal models, and critical experimental reagents.
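A minimal sketch of the underlying idea, not the EVI ontology or its OWL 2 axioms: each result node records the computation, software, data, and agent it depends on, and the supporting evidence for a result is the transitive closure over that DAG. All identifiers below are hypothetical.

```python
# Hypothetical evidence DAG: node -> provenance record (not EVI's actual schema).
evidence_graph = {
    "result:risk_scores_v2": {
        "generatedBy": "computation:run_2021_06_01",
        "supportedBy": ["data:vitals_timeseries", "software:predictor_v1.3"],
        "agent": "orcid:0000-0000-0000-0000",
    },
    "data:vitals_timeseries": {
        "generatedBy": "computation:etl_run_42",
        "supportedBy": ["data:raw_monitor_feed"],
        "agent": "org:example-hospital",
    },
}

def supporting_closure(graph, node, seen=None):
    """Collect every upstream component a result's validity rests on."""
    seen = set() if seen is None else seen
    for dep in graph.get(node, {}).get("supportedBy", []):
        if dep not in seen:
            seen.add(dep)
            supporting_closure(graph, dep, seen)
    return seen

print(supporting_closure(evidence_graph, "result:risk_scores_v2"))
# A challenge to any node in this closure is a defeasible challenge to the result.
```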


2019
Vol 14 (2)
pp. 182-207
Author(s):
Benoît Faye
Eric Le Fur

This article tests the stability of the main hedonic wine price coefficients over time. We draw on an extensive literature review to identify the most frequently used methodology and define a standard hedonic model. We estimate this model on monthly subsamples of a worldwide auction database of the most commonly exchanged fine wines. This provides, for each attribute, a monthly time series of hedonic coefficients from 2003 to 2014. Using a multivariate autoregressive model, we then study the stability of these coefficients over time and test for structural or cyclical changes related to fluctuations in general price levels. We find that most hedonic coefficients are variable and exhibit either structural or cyclical variations over time. These findings cast doubt on the relevance of both short- and long-run hedonic estimations. (JEL Classifications: C13, C22, D44, G11)
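A minimal sketch of the estimation step described above, under assumed column names ("vintage", "rating", "lot_size", "log_price", "month") rather than the paper's actual variables: an ordinary least squares hedonic regression is refit on each monthly subsample, yielding one coefficient series per attribute.

```python
import numpy as np
import pandas as pd

def monthly_hedonic_coefficients(df, attributes, price_col="log_price"):
    """Refit a hedonic OLS model per month; return a coefficient time series."""
    rows = []
    for month, grp in df.groupby("month"):
        X = np.column_stack([np.ones(len(grp))] + [grp[a].to_numpy() for a in attributes])
        y = grp[price_col].to_numpy()
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rows.append([month] + list(beta[1:]))  # drop the intercept
    return pd.DataFrame(rows, columns=["month"] + list(attributes))

# coeffs = monthly_hedonic_coefficients(auctions, ["vintage", "rating", "lot_size"])
# The resulting coefficient series can then be modeled with a multivariate
# autoregression to test for structural or cyclical instability.
```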


2022
Vol 18 (2)
pp. 198-223
Author(s):
Farin Cyntiya Garini
Warosatul Anbiya

PT. Kereta Api Indonesia and PT. KAI Commuter Jabodetabek record time series data on the number of train passengers (thousand people) in the Jabodetabek region for 2011-2020. One of the time series methods that can be used to predict the number of train passengers (thousand people) in the Jabodetabek area is the ARIMA method. ARIMA, also known as the Box-Jenkins time series analysis method, is used for short-term forecasting and does not accommodate seasonal factors. If the assumption of residual homoscedasticity is violated, the ARCH/GARCH method can be used, which explicitly models changes in residual variance over time. This study aims to model and forecast the number of train passengers (thousand people) in the Jabodetabek area in 2021. Based on data analysis and processing using the ARIMA method, the best model is ARIMA(1,1,1) with an AIC value of 2,159.87, and with the ARCH/GARCH method, the best model is GARCH(1,1) with an AIC value of 18.314. The forecasting results obtained from the best models can be used as a reference for related parties in managing and providing public transportation facilities, especially trains.
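A minimal sketch of the two-stage ARIMA(1,1,1) + GARCH(1,1) workflow the study describes, using a synthetic monthly series in place of the actual passenger data; the series name and values are placeholders, not the study's code.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

# Synthetic stand-in for 120 months (2011-2020) of passenger counts (thousand people).
rng = np.random.default_rng(0)
passengers = pd.Series(1000 + np.cumsum(rng.normal(5, 20, size=120)),
                       index=pd.date_range("2011-01", periods=120, freq="MS"))

# Mean model: ARIMA(1,1,1), then a 12-month-ahead point forecast for 2021.
arima_fit = ARIMA(passengers, order=(1, 1, 1)).fit()
point_forecast = arima_fit.forecast(steps=12)

# Variance model: GARCH(1,1) on the ARIMA residuals, capturing time-varying
# residual variance when the homoscedasticity assumption is violated.
garch_fit = arch_model(arima_fit.resid.dropna(), vol="GARCH", p=1, q=1).fit(disp="off")
variance_forecast = garch_fit.forecast(horizon=12)
```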


2021
Author(s):
Eberhard Voit
Jacob Davis
Daniel Olivenca

For close to a century, Lotka-Volterra (LV) models have been used to investigate interactions among populations of different species. For a few species, these investigations are straightforward. However, with the arrival of large and complex microbiomes, unprecedentedly rich data have become available and await analysis. In particular, these data require us to ask which microbial populations of a mixed community affect other populations, whether these influences are activating or inhibiting, and how the interactions change over time. Here we present two new inference strategies for interaction parameters that are based on a new algebraic LV inference (ALVI) method. One strategy uses different survivor profiles of communities grown under similar conditions, while the other pertains to time-series data. In addition, we address the question of whether observation data are compliant with the LV structure or require a richer modeling format.
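A minimal sketch of the standard regression route to LV parameters from time-series data (not the authors' ALVI algorithm): because per-capita growth rates (dx_i/dt)/x_i are linear in the abundances, the growth rates r_i and interaction matrix A follow from least squares; the abundance matrix is assumed to be sampled at equally spaced times.

```python
import numpy as np

def infer_lv_parameters(X, dt=1.0):
    """X: (T, n) abundances of n species at T equally spaced times.

    Returns r (n,) and A (n, n) for dx_i/dt = x_i * (r_i + sum_j A[i, j] * x_j).
    """
    growth = np.diff(X, axis=0) / dt             # finite-difference dx/dt
    midpoints = (X[:-1] + X[1:]) / 2.0           # abundances at interval midpoints
    per_capita = growth / midpoints              # (dx_i/dt) / x_i
    design = np.column_stack([np.ones(len(midpoints)), midpoints])
    coef, *_ = np.linalg.lstsq(design, per_capita, rcond=None)
    return coef[0], coef[1:].T                   # intercepts -> r, slopes -> A

# Large regression residuals here would hint that the observed community is not
# well described by the LV structure and needs a richer modeling format.
```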


2018
Vol 56 (1)
pp. 134-145
Author(s):
Christian Gineste
Burcu Savun

While scholars have for some time debated the role of refugee flows in the international spread of conflict, most evidence has been indirect due to the scarcity of systematic data on refugee-related violence. The Political and Societal Violence By And Against Refugees (POSVAR) dataset addresses this lacuna by providing cross-national, time-series data on refugees' involvement in acts of physical violence in their host state, either as the victims or the perpetrators of violence, individually or collectively, in all countries between 1996 and 2015. In this article, we provide an overview of the main features of the dataset, identify its limitations, and trace variation in reported levels of refugee-related violence over time and across different types of actors. We emphasize that the data may help both researchers and policymakers arrive at a more accurate understanding of the prevalence of refugee-related violence and design more effective policies to mitigate it.
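As a small illustration of tracing variation over time and across actor types, the sketch below aggregates an event-level table by year and by whether refugees were victims or perpetrators; the column names and toy rows are assumptions, not the released POSVAR schema.

```python
import pandas as pd

# Toy rows standing in for the event-level data (hypothetical columns).
posvar = pd.DataFrame({
    "year": [1996, 1996, 1997, 1997],
    "country": ["A", "B", "A", "B"],
    "actor_role": ["victim", "perpetrator", "victim", "victim"],
    "events": [3, 1, 5, 2],
})

# Reported refugee-related violence over time, split by actor type.
yearly = posvar.groupby(["year", "actor_role"])["events"].sum().unstack(fill_value=0)
print(yearly)
```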


1977
Vol 2 (2)
pp. 4-15
Author(s):
William Dipietro
Bansi Sawhney

Identifying the causes of business failures is crucial for effective policy making, and it is especially important for small businesses, which account for the largest share of firm failures in the U.S. For example, if we find that managerial skills are important in reducing firm failures, resources can be directed toward management training and education programs. Policies designed to reduce failures would not only reduce the hardships on the individuals affected directly by such failures but would also aid the smooth functioning of the economy. Two important factors jointly determine the failure rate of businesses in the economy: the first is internal, the effectiveness of management, and the second is external, the general economic environment. The overall purpose of this paper is to use the traditional economic model of the firm to discuss the importance of each of these factors in determining business failures and to use time-series data to assess the relative weight of each factor in determining failures over time. Specifically, the objectives of this paper are: first, to define managerial competency in economic terms; second, to test the hypothesis that managerial efficiency has increased over time; and third, to assess the effect of selected macroeconomic variables on firm failures.
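A minimal sketch of the two-factor decomposition described above, with hypothetical variable names rather than the paper's data: the annual failure rate is regressed on a linear time trend (a crude proxy for changing managerial efficiency) and on selected macroeconomic covariates.

```python
import numpy as np

def failure_rate_regression(failure_rate, gnp_growth, interest_rate):
    """OLS of the failure rate on a time trend and two macro covariates."""
    t = np.arange(len(failure_rate), dtype=float)
    X = np.column_stack([np.ones_like(t), t, gnp_growth, interest_rate])
    beta, *_ = np.linalg.lstsq(X, np.asarray(failure_rate, dtype=float), rcond=None)
    return beta  # [intercept, time trend, GNP growth, interest rate]

# A negative trend coefficient would be consistent with the hypothesis that
# managerial efficiency has improved over time, holding the macro variables fixed.
```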

