Visual analytics of sensor movement data for cheetah behaviour analysis

Author(s):  
Karsten Klein ◽  
Sabrina Jaeger ◽  
Jörg Melzheimer ◽  
Bettina Wachter ◽  
Heribert Hofer ◽  
...  

Abstract Current tracking technology such as GPS data loggers allows biologists to remotely collect large amounts of movement data for a wide variety of species. Extending, and often replacing, interpretation based on direct observation, the analysis of the collected data supports research on animal behaviour, on impact factors such as climate change and human intervention, and on conservation programs. However, this analysis is difficult due to the nature of the research questions and the complexity of the data sets. It requires both automated analysis, for example for the detection of behavioural patterns, and human inspection, for example for interpretation, the inclusion of previous knowledge, and conclusions on future actions and decision making. For this analysis and inspection, the movement data need to be put into the context of environmental data, which helps to interpret the behaviour. A major challenge is therefore to design and develop methods and intuitive interfaces that integrate the data for analysis by biologists. We present a concept and implementation for the visual analysis of cheetah movement data in a web-based fashion that allows usage both in the field and in office environments.
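As a hedged illustration of the kind of movement-data processing such a tool builds on, the Python sketch below derives step lengths and speeds from raw GPS fixes; the file name and column names (timestamp, lat, lon) are assumptions, not part of the published system.

```python
# Minimal sketch: deriving movement metrics from GPS logger fixes.
# Assumes a CSV with hypothetical columns: timestamp, lat, lon.
import math
import pandas as pd

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

fixes = pd.read_csv("cheetah_fixes.csv", parse_dates=["timestamp"]).sort_values("timestamp")
# Distance from each fix to the next one (last fix gets 0).
fixes["step_m"] = [
    haversine_m(a.lat, a.lon, b.lat, b.lon)
    for a, b in zip(fixes.itertuples(), fixes.iloc[1:].itertuples())
] + [0.0]
fixes["dt_s"] = fixes["timestamp"].diff().shift(-1).dt.total_seconds()
fixes["speed_ms"] = fixes["step_m"] / fixes["dt_s"]
# Segments of sustained low speed are candidates for resting or feeding
# sites, which an analyst would inspect against environmental context.
print(fixes.loc[fixes["speed_ms"] < 0.1, ["timestamp", "lat", "lon"]].head())
```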

2019 ◽  
Vol 19 (1) ◽  
pp. 3-23
Author(s):  
Aurea Soriano-Vargas ◽  
Bernd Hamann ◽  
Maria Cristina F de Oliveira

We present an integrated interactive framework for the visual analysis of time-varying multivariate data sets. As part of our research, we performed in-depth studies of the applicability of visualization techniques for obtaining insights from such data. We consolidated the considered analysis and visualization methods in one framework, called TV-MV Analytics. TV-MV Analytics effectively combines visualization and data mining algorithms, providing the following capabilities: (1) visual exploration of multivariate data at different temporal scales, and (2) a hierarchical small-multiples visualization combined with interactive clustering and multidimensional projection to detect temporal relationships in the data. We demonstrate the value of our framework in specific scenarios by studying three use cases that were validated and discussed with domain experts.
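The following minimal Python sketch, independent of the TV-MV Analytics implementation, shows the core combination the abstract describes: windowing a multivariate time series, projecting the windows to 2-D, and clustering them. The window size, feature choice, and data are invented for illustration.

```python
# Illustrative sketch (not the TV-MV Analytics code): pairing a 2-D
# projection with clustering to look for temporal structure in
# multivariate time series.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 8))          # 1000 time steps, 8 variables
window = 50                                 # temporal scale under study
# One feature vector per window: per-variable means and standard deviations.
feats = np.array([
    np.concatenate([data[i:i + window].mean(0), data[i:i + window].std(0)])
    for i in range(0, len(data) - window, window)
])
xy = PCA(n_components=2).fit_transform(feats)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
plt.scatter(xy[:, 0], xy[:, 1], c=labels)   # each point is one time window
plt.xlabel("PC 1"); plt.ylabel("PC 2")
plt.show()
```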


Author(s):  
A. Moreno ◽  
D. I. Hernandez ◽  
D. Moreno ◽  
M. Caglioni ◽  
J. T. Hernandez

Abstract. Solid waste management is an important urban issue to be addressed in every city. In the smart city context, waste collection allows the massive collection of data representing movements, provided by satellite tracking technologies and sensors on waste collection equipment. For decision makers to take advantage of this opportunity, an analytical tool is required that suits the waste management context, can visualize the complexity of the data, and can handle the different formats in which the data are stored. The aim of this paper is to evaluate the potential of an interactive data analysis tool, based on R and R-Shiny, to better understand the particularities of a waste collection service and how it relates to the local city context. The User-centered Analysis-Task driven model (AVIMEU) is presented. The model is organized into seven components: database load, classification panel, multivariate analysis, concurrency, origin-destination, points of interest, and itinerary. The model was implemented as a test case for the waste collection service of the city of Pasto in the southwest of Colombia. It is shown that the model based on visual analysis is a promising approach that should be further enhanced. The analyses are oriented toward providing practical information to the agents and experts of the service. The model is available at https://github.com/MerariFonseca/AVIMEU-visual-analytics-for-movement-data-in-R.
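AVIMEU itself is implemented in R and R-Shiny; the short Python sketch below only illustrates the idea behind one of its components, the origin-destination analysis, on an invented trip table.

```python
# Conceptual sketch of an origin-destination summary (an independent
# Python illustration, not AVIMEU code). Zones and columns are assumed.
import pandas as pd

# Hypothetical trip table: one row per collection trip.
trips = pd.DataFrame({
    "origin_zone":      ["A", "A", "B", "C", "B"],
    "destination_zone": ["B", "C", "C", "A", "A"],
    "tonnes":           [1.2, 0.8, 2.1, 0.5, 1.7],
})
# OD matrix: total tonnage moved between each pair of zones.
od = trips.pivot_table(index="origin_zone", columns="destination_zone",
                       values="tonnes", aggfunc="sum", fill_value=0.0)
print(od)
```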


2014 ◽  
Vol 24 (2) ◽  
pp. 122-141 ◽  
Author(s):  
Victoria Louise Lemieux ◽  
Brianna Gormly ◽  
Lyse Rowledge

Purpose – This paper aims to explore the role of records management in supporting the effective use of information visualisation and visual analytics (VA) to meet the challenges associated with the analysis of Big Data. Design/methodology/approach – This exploratory research entailed conducting and analysing interviews with a convenience sample of visual analysts and VA tool developers, affiliated with a major VA institute, to gain a deeper understanding of the data-related issues that constrain or prevent effective visual analysis of large data sets or the use of VA tools. Key emergent themes related to data challenges were then mapped to the records management controls that may be used to address them. Findings – The authors identify key data-related issues that constrain or prevent effective visual analysis of large data sets or the use of VA tools, and identify records management controls that may be used to address these issues. Originality/value – This paper discusses a relatively new field, VA, which has emerged in response to the challenge of analysing big, open data. It contributes a small exploratory study aimed at helping records professionals understand the data challenges faced by visual analysts and, by extension, data scientists working with large and heterogeneous data sets. It further aims to help records professionals identify how records management controls may be used to address data issues in the context of VA.


2016 ◽  
Vol 16 (3) ◽  
pp. 205-216 ◽  
Author(s):  
Lorne Leonard ◽  
Alan M MacEachren ◽  
Kamesh Madduri

This article reports on the development and application of a visual analytics approach to big data cleaning and integration, focused on the very large graphs constructed in support of national-scale hydrological modeling. We explain why large graphs are required for hydrology modeling and describe how we create two graphs from heterogeneous national data products for the continental United States. The first, smaller graph is constructed by assigning level-12 hydrological unit code watersheds as nodes. Creating and cleaning graphs at this scale highlights issues that cannot be addressed without high-resolution datasets and expert intervention. Expert intervention, aided by visual analytics tools, is necessary to resolve edge directions at the second graph scale: subdividing continental United States streams into edges (851,265,305) and nodes (683,298,991) for large-scale hydrological modeling. We demonstrate how large-graph workflows are created and used in automated analysis to prepare the user interface for visual analytics. We explain the design of the visual interface using a watershed case study and then discuss how the visual interface engages the expert user in resolving data and graph issues.
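A toy version of the graph representation, as a hedged sketch: watersheds as nodes and downstream flow as directed edges, with made-up HUC-12 identifiers, showing how a wrong edge direction surfaces as a cycle that needs expert review.

```python
# Minimal sketch of the graph idea: watersheds (invented HUC-12 codes)
# as nodes, downstream flow as directed edges. Cycles flag
# edge-direction errors of the kind experts must resolve.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("hucA", "hucB"),   # hucA drains into hucB
    ("hucB", "hucC"),
    ("hucD", "hucC"),
    ("hucC", "hucB"),   # deliberately wrong direction: creates a cycle
    ("hucC", "hucE"),   # hucE is the basin outlet
])
cycles = list(nx.simple_cycles(g))
outlets = [n for n in g if g.out_degree(n) == 0]
print("suspect cycles:", cycles)     # e.g. [['hucB', 'hucC']]
print("terminal outlets:", outlets)  # a clean basin drains to one outlet
```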


Obesity Facts ◽  
2021 ◽  
pp. 1-11
Author(s):  
Marijn Marthe Georgine van Berckel ◽  
Saskia L.M. van Loon ◽  
Arjen-Kars Boer ◽  
Volkher Scharnhorst ◽  
Simon W. Nienhuijs

Introduction: Bariatric surgery results in both intentional and unintentional metabolic changes. In a high-volume bariatric center, extensive laboratory panels are used to monitor these changes pre- and postoperatively. Consecutive measurements of relevant biochemical markers allow exploration of the health state of bariatric patients and comparison of different patient groups. Objective: The objective of this study is to compare biomarker distributions over time between 2 common bariatric procedures, i.e., sleeve gastrectomy (SG) and Roux-en-Y gastric bypass (RYGB), using visual analytics. Methods: Both pre- and postsurgical (6, 12, and 24 months) data of all patients who underwent primary bariatric surgery were collected retrospectively. The distribution and evolution of different biochemical markers were compared before and after surgery using asymmetric beanplots in order to evaluate the effect of primary SG and RYGB. A beanplot is an alternative to the boxplot that allows an easy and thorough visual comparison of univariate data. Results: In total, 1,237 patients (659 SG and 578 RYGB) were included. The sleeve and bypass groups were comparable in terms of age and the prevalence of comorbidities. The mean presurgical BMI and the percentage of males were higher in the sleeve group. The effect of surgery on lowering glycated hemoglobin was similar for both surgery types. After RYGB surgery, the decrease in the cholesterol concentration was larger than after SG. The enzymatic activity of aspartate aminotransferase, alanine aminotransferase, and alkaline phosphatase in sleeve patients was higher presurgically but lower postsurgically compared to bypass values. Conclusions: Beanplots allow intuitive visualization of population distributions. Analysis of this large population-based data set using beanplots suggests comparable efficacies of both types of surgery in reducing diabetes. RYGB surgery reduced dyslipidemia more effectively than SG. The trend toward a larger decrease in liver enzyme activities following SG is a subject for further investigation.
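For readers unfamiliar with beanplots: R's beanplot package produces them natively, and a split violin plot is a close stand-in. The sketch below compares a synthetic biomarker between the two procedures over time; the numbers are invented, not study data.

```python
# Sketch of an asymmetric beanplot-style comparison using a split
# violin plot (seaborn) as a stand-in for R's beanplot.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
df = pd.DataFrame({
    # Synthetic HbA1c values, pre-surgery vs. 12 months post-surgery.
    "hba1c": np.concatenate([rng.normal(42, 8, 300), rng.normal(36, 6, 300)]),
    "time": ["pre"] * 300 + ["12 months"] * 300,
    "procedure": (["SG"] * 150 + ["RYGB"] * 150) * 2,
})
# One half of each "bean" per procedure, side by side over time.
sns.violinplot(data=df, x="time", y="hba1c", hue="procedure",
               split=True, inner="quartile")
plt.ylabel("HbA1c (mmol/mol)")
plt.show()
```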


BMC Genomics ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ratanond Koonchanok ◽  
Swapna Vidhur Daulatabad ◽  
Quoseena Mir ◽  
Khairi Reda ◽  
Sarath Chandra Janga

Abstract Background Direct-sequencing technologies, such as Oxford Nanopore's, are delivering long RNA reads with great efficacy and convenience. These technologies afford the ability to detect post-transcriptional modifications at single-molecule resolution, promising new insights into the functional roles of RNA. However, realizing this potential requires new tools to analyze and explore this type of data. Results Here, we present Sequoia, a visual analytics tool that allows users to interactively explore nanopore sequences. Sequoia combines a Python-based backend with a multi-view visualization interface, enabling users to import raw nanopore sequencing data in the Fast5 format, cluster sequences based on electric-current similarities, and drill down into signals to identify properties of interest. We demonstrate the application of Sequoia by generating and analyzing ~500k reads from direct RNA sequencing data of the human HeLa cell line. We focus on comparing signal features from m6A and m5C RNA modifications as a first step towards building automated classifiers. We show how, through iterative visual exploration and tuning of dimensionality-reduction parameters, we can separate modified RNA sequences from their unmodified counterparts. We also document new, qualitative signal signatures that characterize these modifications relative to otherwise normal RNA bases, which we were able to discover through the visualization. Conclusions Sequoia's interactive features complement existing computational approaches in nanopore-based RNA workflows. The insights gleaned through visual analysis should help users develop rationales, hypotheses, and insights into the dynamic nature of RNA. Sequoia is available at https://github.com/dnonatar/Sequoia.
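A minimal sketch, independent of the Sequoia codebase, of the clustering step the abstract describes: assuming per-read raw current arrays have already been extracted from Fast5 files, summarize each read into a fixed-length feature vector, project, and cluster.

```python
# Illustrative sketch (not Sequoia code): cluster nanopore reads by
# electric-current similarity. Assumes per-read raw current arrays
# have already been extracted from Fast5 files; signals are synthetic.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Stand-in for extracted signals: 200 reads of varying length.
signals = [rng.normal(80, 10, rng.integers(500, 1500)) for _ in range(200)]

def summarize(sig, bins=20):
    """Fixed-length feature vector: mean current per signal segment."""
    chunks = np.array_split(sig, bins)
    return np.array([c.mean() for c in chunks])

feats = np.vstack([summarize(s) for s in signals])
emb = TSNE(n_components=2, perplexity=30).fit_transform(feats)  # 2-D view
labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
# Inspect cluster sizes, e.g. candidate modified vs. unmodified reads.
print(np.bincount(labels))
```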


Author(s):  
Julian Prell ◽  
Christian Scheller ◽  
Sebastian Simmermacher ◽  
Christian Strauss ◽  
Stefan Rampp

Abstract Objective The quantity of A-trains, a high-frequency pattern in free-running facial nerve electromyography, is correlated with the risk of postoperative high-grade facial nerve paresis. This correlation has been confirmed by automated analysis with dedicated algorithms and by visual offline analysis, but not by audiovisual real-time analysis. Methods An investigator was presented, in random order, with 29 complete data sets recorded during actual surgeries, played back in real time and without breaks. Data were presented either strictly via loudspeaker (audio) or simultaneously by loudspeaker and computer screen (audiovisual). Visible and/or audible A-train activity was then quantified by the investigator with the computerized equivalent of a stopwatch. The same data were also analyzed with quantification of A-trains by automated algorithms. Results Automated (auto) traintime (TT), known to be a small yet highly representative fraction of overall A-train activity, ranged from 0.01 to 10.86 s (median: 0.58 s). In contrast, audio-TT ranged from 0 to 1,357.44 s (median: 29.69 s), and audiovisual-TT ranged from 0 to 786.57 s (median: 46.19 s). All three modalities were correlated with each other in a highly significant way. Likewise, all three modalities correlated significantly with the extent of postoperative facial paresis. As a rule of thumb, patients with < 1 minute of visible/audible A-train activity presented with a more favorable clinical outcome than patients with > 1 minute of A-train activity. Conclusion Detection and even quantification of A-trains is technically possible not only with intraoperative automated real-time calculation or postoperative visual offline analysis, but also with very basic monitoring equipment and good-quality audiovisual real-time analysis. However, the investigator found audiovisual real-time analysis to be very demanding; thus, tools for automated quantification can be very helpful in this respect.
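The correlation analysis reported here can be reproduced in outline with Spearman's rank correlation; in the sketch below all values are invented placeholders, and the use of House-Brackmann grades for paresis is an assumption.

```python
# Sketch of the reported correlation analysis: relate traintime
# measures to postoperative paresis grade with Spearman's rho.
# All values are invented placeholders, not the study data.
import numpy as np
from scipy.stats import spearmanr

auto_tt  = np.array([0.01, 0.2, 0.58, 1.4, 10.86])       # seconds
audio_tt = np.array([0.0, 12.0, 29.69, 140.0, 1357.44])  # seconds
hb_grade = np.array([1, 1, 2, 3, 5])  # House-Brackmann grade (assumed scale)

for name, tt in [("auto", auto_tt), ("audio", audio_tt)]:
    rho, p = spearmanr(tt, hb_grade)
    print(f"{name} traintime vs. paresis grade: rho={rho:.2f}, p={p:.3f}")
```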


2021 ◽  
Vol 11 (11) ◽  
pp. 4751
Author(s):  
Jorge-Félix Rodríguez-Quintero ◽  
Alexander Sánchez-Díaz ◽  
Leonel Iriarte-Navarro ◽  
Alejandro Maté ◽  
Manuel Marco-Such ◽  
...  

Among the knowledge areas in which process mining has had an impact, the audit domain is particularly striking. Traditionally, audits seek evidence in a data sample that allows making inferences about a population. Mistakes are easily made when generalizing from such samples, and anomalies may remain hidden in the unexamined records; some efforts address these limitations using process-mining-based approaches to fraud detection. To the best of our knowledge, no fraud audit method exists that combines process mining techniques and visual analytics to identify relevant patterns. This paper presents a fraud audit approach based on the combination of process mining techniques and visual analytics. The main advantages are: (i) a method is included that guides the use of the visual capabilities of process mining to detect fraudulent data patterns during an audit; (ii) the approach can be generalized to any business domain; (iii) well-known process mining techniques are used (dotted chart, trace alignment, fuzzy miner…). The techniques were selected by a group of experts and were extended to enable filtering for contextual analysis, to handle levels of process abstraction, and to facilitate implementation in the area of fraud audits. Based on the proposed approach, we developed a software solution that is currently being used in the financial sector as well as in the telecommunications and hospitality sectors. Finally, for demonstration purposes, we present a real hotel-management use case in which we detected suspected fraudulent behavior, thus validating the effectiveness of the approach.
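Of the named techniques, the dotted chart is the simplest to illustrate: each dot is one event, with time on the x-axis, case on the y-axis, and activity as colour. The sketch below is a standalone matplotlib miniature on invented event data, not the paper's tooling.

```python
# A dotted chart in miniature: each dot is one event.
# Event data below are invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.DataFrame({
    "case":      ["c1", "c1", "c2", "c2", "c3", "c3"],
    "activity":  ["book", "pay", "book", "refund", "book", "pay"],
    "timestamp": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 17:00",
        "2024-01-02 10:00", "2024-01-02 10:05",  # suspiciously fast refund
        "2024-01-03 11:00", "2024-01-03 18:00",
    ]),
})
cases = {c: i for i, c in enumerate(sorted(log["case"].unique()))}
for act, grp in log.groupby("activity"):  # one colour per activity
    plt.scatter(grp["timestamp"], [cases[c] for c in grp["case"]], label=act)
plt.yticks(range(len(cases)), list(cases))
plt.legend(); plt.xlabel("event time"); plt.ylabel("case")
plt.show()
```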


2015 ◽  
Vol 06 (04) ◽  
pp. 757-768 ◽  
Author(s):  
V. David ◽  
M. Haller ◽  
S. Kotzian ◽  
M. Hofmann ◽  
S. Schlossarek ◽  
...  

Summary Background: Preservation of mobility in conjunction with an independent lifestyle is one of the major goals of rehabilitation after stroke. Objectives: The Rehab@Home framework shall support the continuation of rehabilitation at home. Methods: The framework consists of instrumented insoles connected wirelessly to a 3G-ready tablet PC, a server, and a web interface for medical experts. Rehabilitation progress is estimated via automated analysis of movement data from standardized assessment tests, which are designed according to the needs of stroke patients and executed via the tablet PC application. Results: The implementation of the Rehab@Home framework is finished and ready for a field trial in five patients' homes. Initial testing of the automated evaluation of the standardized mobility tests shows reproducible results. Conclusions: It is therefore assumed that the Rehab@Home framework is applicable as a monitoring tool for gait rehabilitation progress in stroke patients.
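As a hedged sketch of what automated analysis of insole data can look like, the following counts steps in a synthetic pressure trace with simple peak detection; the sampling rate and thresholds are assumptions, not Rehab@Home parameters.

```python
# Hedged sketch of one automated movement-data check: counting steps in
# an insole pressure trace via peak detection. Signal is synthetic.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                   # sample rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)               # 30 s walking trial
# Positive pressure humps at ~0.9 steps/s, plus sensor noise.
pressure = np.clip(np.sin(2 * np.pi * 0.9 * t), 0, None)
pressure += np.random.default_rng(3).normal(0, 0.05, t.size)

# Require a minimum height and at least 0.5 s between steps.
peaks, _ = find_peaks(pressure, height=0.5, distance=int(0.5 * fs))
print(f"steps detected: {len(peaks)}")
print(f"cadence: {len(peaks) / (t[-1] / 60):.1f} steps/min")
```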


2003 ◽  
Vol 9 (1) ◽  
pp. 1-17 ◽  
Author(s):  
Paul G. Kotula ◽  
Michael R. Keenan ◽  
Joseph R. Michael

Spectral imaging in the scanning electron microscope (SEM) equipped with an energy-dispersive X-ray (EDX) analyzer has the potential to be a powerful tool for chemical phase identification, but the resulting data sets have, in the past, proved too large to analyze efficiently. In the present work, we describe the application of a new automated, unbiased, multivariate statistical analysis technique to very large X-ray spectral image data sets. The method, based in part on principal components analysis, returns physically accurate (all-positive) component spectra and images in a few minutes on a standard personal computer. The efficacy of the technique for microanalysis is illustrated by the analysis of complex multi-phase materials, particulates, a diffusion couple, and a single-pixel-detection problem.
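The authors' method is PCA-based with constraints that yield all-positive components; as a plainly named stand-in, non-negative matrix factorization (NMF) also returns non-negative spectra and abundance images, sketched here on synthetic data.

```python
# Stand-in sketch (not the paper's PCA-based algorithm): NMF on a
# synthetic spectral image cube also yields non-negative, physically
# plausible component spectra and abundance images.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
n_pix, n_chan, n_phases = 64 * 64, 1024, 3
pure = rng.gamma(2.0, 1.0, (n_phases, n_chan))   # pure-phase spectra
mix = rng.dirichlet(np.ones(n_phases), n_pix)    # per-pixel phase fractions
cube = rng.poisson(mix @ pure * 5)               # counts with shot noise

model = NMF(n_components=n_phases, init="nndsvda", max_iter=300)
abundances = model.fit_transform(cube)   # component images (n_pix x 3)
spectra = model.components_              # component spectra (3 x 1024)
print(abundances.reshape(64, 64, n_phases).shape, spectra.shape)
```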

