RhythmicAlly: Your R and Shiny–Based Open-Source Ally for the Analysis of Biological Rhythms

2019 ◽  
Vol 34 (5) ◽  
pp. 551-561 ◽  
Author(s):  
Lakshman Abhilash ◽  
Vasu Sheeba

Research on circadian rhythms often requires researchers to estimate the period, robustness/power, and phase of a rhythm. These estimates are important because they act as readouts of different features of the underlying clock. The commonly used tools for this purpose are expensive, offer limited interactivity, are cumbersome to use, or suffer from a combination of these drawbacks. As a step toward making such analysis accessible to users who may not be able to afford commercial tools, and to ease the analysis of biological time-series data, we have written RhythmicAlly, an open-source program using R and Shiny that has the following advantages: (1) it is free; (2) it allows subjective marking of phases on actograms; (3) it provides high interactivity with graphs; (4) it allows visualization and storage of data for a batch of individuals simultaneously; and (5) it does what other free programs do but with fewer mouse clicks, making it more efficient and user-friendly. Moreover, our program can be used for a wide range of ultradian, circadian, and infradian rhythms from a variety of organisms, some examples of which are described here. The first version of RhythmicAlly is available on GitHub, and we aim to maintain the program with subsequent versions offering updated methods of visualizing and analyzing time-series data.
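The period estimation described above can be illustrated with a minimal periodogram sketch. RhythmicAlly itself is written in R and Shiny, and its internals are not shown here; the NumPy code below is only a generic illustration of the underlying idea, run on a made-up 25 h synthetic activity record:

```python
import numpy as np

def estimate_period(x, dt_h, periods):
    """Return the candidate period (in hours) whose sine/cosine fit
    captures the most variance -- a bare-bones periodogram."""
    t = np.arange(len(x)) * dt_h
    x = x - x.mean()
    power = []
    for p in periods:
        w = 2 * np.pi / p
        c, s = np.cos(w * t), np.sin(w * t)
        # squared amplitude of the least-squares cosine fit at period p
        power.append((x @ c) ** 2 + (x @ s) ** 2)
    return periods[int(np.argmax(power))]

# synthetic activity record: a 25 h rhythm sampled every 0.5 h for 10 days
dt = 0.5
t = np.arange(0, 240, dt)
activity = 1 + np.cos(2 * np.pi * t / 25.0)
candidates = np.arange(16.0, 32.0, 0.1)
print(estimate_period(activity, dt, candidates))  # close to 25.0
```

Tools like RhythmicAlly add the interactive layer on top of estimators of this kind: actogram marking, batch visualization, and export.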

2019 ◽  
Author(s):  
Birgit Möller ◽  
Hongmei Chen ◽  
Tino Schmidt ◽  
Axel Zieschank ◽  
Roman Patzak ◽  
...  

Abstract. Background and aims: Minirhizotrons are commonly used to study root turnover, which is essential for understanding ecosystem carbon and nutrient cycling. Yet extracting data from minirhizotron images requires intensive annotation effort. Existing annotation tools often lack flexibility and provide only a subset of the required functionality. To facilitate efficient root annotation in minirhizotrons, we present the user-friendly open-source tool rhizoTrak. Methods and results: rhizoTrak builds on TrakEM2 and is publicly available as a Fiji plugin. It uses treelines to represent branching structures in roots and assigns customizable status labels per root segment. rhizoTrak offers configuration options for visualization and various functions for root annotation, mostly accessible via keyboard shortcuts. rhizoTrak allows time-series data import and particularly supports easy handling and annotation of time-series images. This is facilitated via explicit temporal links (connectors) between roots, which are automatically generated when copying annotations from one image to the next. rhizoTrak includes automatic consistency checks and guided procedures for resolving conflicts. It facilitates easy data exchange with other software by supporting open data formats. Conclusions: rhizoTrak covers the full range of functions required for user-friendly and efficient annotation of time-series images. Its flexibility and open-source nature will foster efficient data acquisition procedures in root studies using minirhizotrons.
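The treeline-and-connector model described above can be sketched as a data structure. rhizoTrak is a Java-based Fiji plugin, so the Python below is purely a conceptual illustration; every class and function name here is hypothetical, not rhizoTrak's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: tuple                  # (x, y) in image coordinates
    end: tuple
    status: str = "LIVING"        # customizable per-segment status label
    children: list = field(default_factory=list)  # branching -> treeline

@dataclass
class Root:
    image_id: str                 # which time-point image the annotation lives in
    treeline: Segment             # root of the branching structure

# connectors: explicit temporal links between the same physical root
# observed in consecutive images of a time series
connectors = []

def copy_to_next_image(root, next_image_id):
    """Copy an annotation to the next image and auto-create a connector,
    mirroring the propagation behaviour described in the abstract
    (the treeline object is shared here for brevity)."""
    new_root = Root(next_image_id, root.treeline)
    connectors.append((root, new_root))
    return new_root

r1 = Root("t0.png", Segment((0, 0), (10, 40)))
r2 = copy_to_next_image(r1, "t1.png")
print(len(connectors))  # 1
```

The automatic connector creation is what makes time-series annotation cheap: the analyst adjusts a copied treeline rather than redrawing it, and the temporal identity of each root is preserved for turnover analysis.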


2020 ◽  
Vol 109 (11) ◽  
pp. 2029-2061 ◽
Author(s):  
Zahraa S. Abdallah ◽  
Mohamed Medhat Gaber

Abstract Time series classification (TSC) is a challenging task that has attracted many researchers in the last few years. One main challenge in TSC is the diversity of domains from which time series data come. Thus, there is no “one model that fits all” in TSC. Some algorithms are very accurate in classifying a specific type of time series when the whole series is considered, while others only target the existence/non-existence of specific patterns/shapelets. Yet other techniques focus on the frequency of occurrences of discriminating patterns/features. This paper presents a new classification technique that addresses the inherent diversity problem in TSC using a nature-inspired method. The technique is inspired by how flies look at the world through “compound eyes” that are made up of thousands of lenses, called ommatidia. Each ommatidium is an eye with its own lens, and thousands of them together create a broad field of vision. The developed technique similarly uses different lenses and representations to look at the time series, then combines them for broader visibility. These lenses are created through hyper-parameterisation of symbolic representations (Piecewise Aggregate and Fourier approximations). The algorithm builds a random forest for each lens, then performs soft dynamic voting to classify new instances using the most confident eyes, i.e., forests. We evaluate the new technique, coined Co-eye, using the recently released extended version of the UCR archive, containing more than 100 datasets across a wide range of domains. The results show the benefits of bringing together different perspectives, reflected in the accuracy and robustness of Co-eye in comparison with other state-of-the-art techniques.
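One of the symbolic representations Co-eye hyper-parameterises, Piecewise Aggregate Approximation (PAA), can be sketched in a few lines. This is a generic illustration of PAA itself, not the authors' implementation; varying the segment count is one way such "lenses" differ:

```python
def paa(series, n_segments):
    """Piecewise Aggregate Approximation: represent a series by the
    means of n_segments (near-)equal-width windows."""
    n = len(series)
    means = []
    for i in range(n_segments):
        lo, hi = i * n // n_segments, (i + 1) * n // n_segments
        means.append(sum(series[lo:hi]) / (hi - lo))
    return means

print(paa([1, 1, 3, 3, 5, 5, 7, 7], 4))  # [1.0, 3.0, 5.0, 7.0]
```

Each choice of segment count (and, for the Fourier-based representation, coefficient count) yields a different low-dimensional view of the same series; training one forest per view and soft-voting across the most confident forests is the "compound eye" idea.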


2007 ◽  
Vol 23 (4) ◽  
pp. 227-237 ◽  
Author(s):  
Thomas Kubiak ◽  
Cornelia Jonas

Abstract. Patterns of psychological variables over time have been of interest to research from the beginning. This is particularly true for ambulatory monitoring research, where large (cross-sectional) time-series datasets are often the matter of investigation. Common methods for identifying cyclic variations include spectral analyses of time-series data or time-domain strategies, which also allow for modeling cyclic components. Though the prerequisites of these sophisticated procedures, such as interval-scaled time-series variables, are seldom met, their usage is common. In contrast to the time-series approach, methods from a different field of statistics, directional or circular statistics, offer another opportunity for the detection of patterns in time, one where fewer prerequisites have to be met. These approaches are commonly used in biology or geostatistics. They offer a wide range of analytical strategies to examine “circular data,” i.e., data where the period of measurement is rotationally invariant (e.g., directions on the compass, or daily hours ranging from 0 to 24, 24 being the same as 0). In psychology, however, circular statistics are hardly known at all. In the present paper, we give a succinct introduction to the rationale of circular statistics and describe how this approach can be used for the detection of patterns in time, contrasting it with time-series analysis. We report data from a monitoring study in which mood and social interactions were assessed for 4 weeks to illustrate the use of circular statistics. Both the results of periodogram analyses and circular statistics-based results are reported. Advantages and possible pitfalls of the circular statistics approach are highlighted, concluding that ambulatory assessment research can benefit from strategies borrowed from circular statistics.
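The need for circular statistics is easiest to see with clock times that straddle midnight. The sketch below computes the circular mean direction on the 24 h circle; it uses the standard textbook formula (resultant vector of sines and cosines), not code from the paper:

```python
import math

def circular_mean_hours(hours):
    """Mean direction of a sample of clock times on the 24 h circle."""
    angles = [2 * math.pi * h / 24 for h in hours]
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    mean_angle = math.atan2(s, c)
    return round(mean_angle * 24 / (2 * math.pi), 6) % 24

# 23:00 and 01:00 straddle midnight: the arithmetic mean says noon,
# while the circular mean correctly says midnight
print(sum([23, 1]) / 2)              # 12.0
print(circular_mean_hours([23, 1]))  # 0.0
```

The length of the resultant vector (from the same sines and cosines) additionally measures how concentrated the times are, which is what circular-statistics tests such as the Rayleigh test build on.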


Author(s):  
Trung Duy Pham ◽  
Dat Tran ◽  
Wanli Ma

In the biomedical and healthcare fields, ownership protection of outsourced data is becoming a challenging issue when data are shared between data owners and data mining experts to extract hidden knowledge and patterns. Watermarking has proved to be a rights-protection mechanism that provides detectable evidence of the legal ownership of a shared dataset without compromising its usability, and it has been applied across a wide range of data mining settings to digital data in different formats such as audio, video, images, relational databases, text, and software. Time-series biomedical data such as electroencephalography (EEG) or electrocardiography (ECG) recordings are valuable and costly in healthcare and need ownership protection when shared or transmitted in data mining applications. However, owing to the particular characteristics and requirements of this kind of data, the issue has been investigated in only a little previous research. This paper proposes an optimized watermarking scheme to protect ownership in biomedical and healthcare data mining systems. To achieve the highest possible robustness without losing watermark transparency, a Particle Swarm Optimization (PSO) technique is used to find suitable quantization steps. Experimental results on EEG data show that the proposed scheme provides good imperceptibility and greater robustness against various signal processing techniques and common attacks such as noise addition, low-pass filtering, and re-sampling.
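The quantization-step embedding that the scheme optimizes can be illustrated with plain quantization index modulation (QIM). The sketch below omits the PSO tuning entirely and uses a fixed, made-up quantization step and sample values; it shows only the generic QIM idea, not the paper's scheme:

```python
def qim_embed(sample, bit, step):
    """Quantization Index Modulation: snap a sample onto one of two
    interleaved quantizer grids, chosen by the watermark bit."""
    offset = step / 2 if bit else 0.0
    return round((sample - offset) / step) * step + offset

def qim_extract(sample, step):
    """Recover the bit by checking which grid the sample lies nearer to."""
    d1 = abs(sample - qim_embed(sample, 1, step))
    d0 = abs(sample - qim_embed(sample, 0, step))
    return 1 if d1 < d0 else 0

signal = [0.31, -1.27, 2.05, 0.66]   # made-up EEG-like samples
bits = [1, 0, 1, 1]                  # watermark payload
step = 0.5                           # in the paper, PSO would tune this
marked = [qim_embed(s, b, step) for s, b in zip(signal, bits)]
print([qim_extract(s, step) for s in marked])  # [1, 0, 1, 1]
```

The trade-off PSO navigates is visible here: a larger step survives more distortion (robustness) but moves samples further from their original values (less transparency).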


2011 ◽  
Vol 12 (1) ◽  
pp. 119 ◽  
Author(s):  
Michael Lindner ◽  
Raul Vicente ◽  
Viola Priesemann ◽  
Michael Wibral

2015 ◽  
Author(s):  
Andrew MacDonald

PhilDB is an open-source time series database. It supports storage of time series datasets that are dynamic; that is, it records updates to existing values in a log as they occur. Recent open-source systems, such as InfluxDB and OpenTSDB, have been developed to indefinitely store long-period, high-resolution time series data. Unfortunately, they require a large initial installation investment before use because they are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they take a ‘big data’ approach to storage and access. Other open-source projects for handling time series data that don’t take the ‘big data’ approach are also relatively new, and are complex or incomplete. None of these systems gracefully handles revision of existing data while tracking the values that changed. Unlike ‘big data’ solutions, PhilDB has been designed for single-machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB eases loading of data for the user by utilising an intelligent data write method. It preserves existing values during updates and abstracts away the update complexity required to log data value changes. PhilDB improves access to datasets in two ways: first, it uses fast reads, which make it practical to select data for analysis; second, it uses simple read methods to minimise the effort required to extract data. PhilDB takes a unique approach to meta-data tracking: optional attribute attachment. This facilitates scaling with the complexities of storing a wide variety of data. That is, it allows time series data to be loaded as time series instances with minimal initial meta-data, yet additional attributes can be created and attached to differentiate the time series instances as a wider variety of data is needed. PhilDB was written in Python, leveraging existing libraries. This paper describes the general approach, architecture, and philosophy of the PhilDB software.
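The logged-update behaviour described above can be sketched in a few lines. This is a conceptual illustration only; the class and method names are hypothetical and do not reflect PhilDB's actual API or storage format:

```python
class LoggedSeries:
    """Minimal sketch of write-with-logging: new values overwrite the
    current series, but every superseded value is kept in a log."""

    def __init__(self):
        self.current = {}   # timestamp -> value
        self.log = []       # (timestamp, old_value, new_value)

    def write(self, records):
        for ts, value in records:
            if ts in self.current and self.current[ts] != value:
                # revision of an existing value: keep the old one
                self.log.append((ts, self.current[ts], value))
            self.current[ts] = value

db = LoggedSeries()
db.write([("2015-01-01", 1.2), ("2015-01-02", 3.4)])
db.write([("2015-01-02", 3.5)])          # a later correction
print(db.current["2015-01-02"], db.log)
```

The point of the log is reproducibility: an analysis run against an earlier state of the series can be explained after the data have been corrected, which the 'big data' append-only systems described above do not address.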


2017 ◽  
pp. 23-32 ◽
Author(s):  
Owen Stuckey

I compare two GIS programs that can be used to create cartographic animations: the commercial Esri ArcGIS and the free and open-source QGIS. ArcGIS implements animation through the “Time Slider,” while QGIS uses a plugin called “TimeManager.” There are some key similarities and differences, as well as functions unique to each tool. This analysis examines each program’s capabilities in mapping time series data. Criteria for evaluation include the number of steps, the number of output formats, input of data, processing, output of a finished animation, and cost. The comparison indicates that ArcGIS offers more control over input, processing, and output of animations than QGIS, but has a baseline cost of $100 per year for a personal license. In contrast, QGIS is free, uses fewer steps, and enables more output formats, although the QGIS interface can make data input, processing, and output of an animation slower.


2021 ◽  
Author(s):  
Shambo Bhattacharjee ◽  
Alvaro Santamaría-Gómez

Long GNSS position time series contain offsets, typically at rates between 1 and 3 offsets per decade. We may classify offsets according to whether their epoch is precisely known, from GNSS station log files or earthquake databases, or unknown. Very often, GNSS position time series contain offsets for which the epoch is not known a priori; therefore, an offset detection/removal operation needs to be performed to produce the continuous position time series needed for many applications in geodesy and geophysics. A further classification distinguishes offsets having a physical origin related to an instantaneous displacement of the GNSS antenna phase center (from earthquakes, antenna changes, or even changes in the environment of the antenna) from spurious offsets originating from the detection method being used (manual/supervised or automatic/unsupervised). Offsets due to changes of the antenna and its environment must be avoided by station operators as much as possible. Spurious offsets due to the detection method must be avoided by the time series analyst and are the focus of this work.

Even if manual offset detection by expert analysis is likely to perform better, automatic offset detection algorithms are extremely useful when processing massive sets of thousands of GNSS time series. Change-point detection and cluster analysis algorithms can be used for detecting offsets in GNSS time series data, and R offers a number of libraries for these two tasks. For example, the “Bayesian Analysis of Change Point Problems” package (“bcp”) helps detect change points in time-series data. Similarly, “dtwclust” (based on the Dynamic Time Warping algorithm) is used for time series cluster analysis. Our objective is to assess various open-source R libraries for automatic offset detection.
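A crude stand-in for such change-point detection is a single mean-shift search. The sketch below is a generic least-squares split, not the Bayesian method implemented by the R package bcp, and the position series is synthetic:

```python
def detect_offset(series):
    """Locate the most likely single mean-shift (offset) in a series by
    maximising the weighted separation between the two segment means."""
    n = len(series)
    best_k, best_score = None, 0.0
    for k in range(1, n):
        left, right = series[:k], series[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        # variance explained by splitting at k (up to a constant factor)
        score = len(left) * len(right) / n * (m1 - m2) ** 2
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# synthetic daily positions (mm) with a 5 mm offset starting at index 6
positions = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 5.1, 4.9, 5.2, 5.0]
print(detect_offset(positions))  # 6
```

Real GNSS series add the complications the abstract alludes to: colored noise, seasonal signals, and multiple offsets per series, which is why comparing dedicated libraries such as bcp and dtwclust is worthwhile rather than relying on a single-split heuristic.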

