A Dynamic Approach for Mining Generalised Sequential Patterns in Time Series Clinical Data Sets

Author(s):  
M. Rasheeda Shameem ◽  
M. Razia Naseem ◽  
N. K. Subanivedhi ◽  
R. Sethukkarasi


1984 ◽  
Vol 30 (104) ◽  
pp. 66-76 ◽  
Author(s):  
Paul A. Mayewski ◽  
W. Berry Lyons ◽  
N. Ahmad ◽  
Gordon Smith ◽  
M. Pourchet

Abstract
Spectral analysis of time series from a c. 17 ± 0.3 year core, calibrated using total β activity, recovered from Sentik Glacier (4908 m), Ladakh, Himalaya, yields several recognizable periodicities, including subannual, annual, and multi-annual. The time series include both chemical data (chloride, sodium, reactive iron, reactive silicate, reactive phosphate, ammonium, δD, δ18O, and pH) and physical data (density, debris and ice-band locations, and microparticles in size grades 0.50 to 12.70 μm). Source areas for the chemical species investigated, and the general air-mass circulation defined from the chemical and physical time series, are discussed to demonstrate the potential of such studies for developing paleometeorological data sets from remote high-alpine glacierized sites such as the Himalaya.
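As a concrete illustration of the kind of spectral analysis described above, the following sketch detects periodicities in a uniformly sampled series with a standard periodogram. The sampling rate, the synthetic chloride-like series, and its embedded cycles are illustrative assumptions, not the authors' data.

```python
# Minimal sketch: finding subannual, annual, and multi-annual periodicities
# in an ice-core-like time series with a periodogram. All values are
# illustrative assumptions, not the study's data.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)

fs = 20.0                      # samples per year (assumed)
t = np.arange(0, 17, 1 / fs)   # ~17 years of record, matching the core span
x = (np.sin(2 * np.pi * 1.0 * t)          # annual cycle
     + 0.5 * np.sin(2 * np.pi * 0.2 * t)  # multi-annual (~5 yr) cycle
     + rng.normal(scale=0.8, size=t.size))

freqs, power = periodogram(x, fs=fs, detrend="linear")

# Report the strongest spectral peaks as periods in years.
top = np.argsort(power)[::-1][:5]
for i in sorted(top, key=lambda i: freqs[i]):
    if freqs[i] > 0:
        print(f"period ≈ {1 / freqs[i]:.2f} yr, power = {power[i]:.2f}")
```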


Author(s):  
Cong Gao ◽  
Ping Yang ◽  
Yanping Chen ◽  
Zhongmin Wang ◽  
Yue Wang

Abstract
With the large-scale deployment of wireless sensor networks, anomaly detection for sensor data is becoming increasingly important in various fields. Time series, a vital form of sensor data, exhibit three main types of anomaly: point anomalies, pattern anomalies, and sequence anomalies. In production environments, the analysis of pattern anomalies is the most rewarding. However, the traditional cloud computing model struggles with large amounts of widely distributed data. This paper presents an edge-cloud collaboration architecture for pattern anomaly detection in time series. A task migration algorithm is developed to alleviate the backlog of detection tasks at edge nodes. In addition, detection tasks related to long-term and short-term correlations in the time series are allocated to the cloud and edge nodes, respectively. A multi-dimensional feature representation scheme is devised for efficient dimension reduction, and its two key components, trend identification and feature point extraction, are elaborated. Based on the resulting feature representation, pattern anomaly detection is performed with an improved kernel density estimation method. Finally, extensive experiments are conducted on synthetic and real-world data sets.
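The paper's pipeline is not reproduced here, but its final step, density-based scoring of per-window features, can be sketched as follows. The window feature scheme (slope and spread), the plain Gaussian KDE standing in for the paper's improved estimator, and the 1% threshold are all illustrative assumptions.

```python
# Minimal sketch of kernel-density-based pattern anomaly scoring: extract a
# simple feature vector per window, fit a KDE on reference windows, and flag
# low-density windows as pattern anomalies. Feature scheme, KDE variant, and
# threshold are assumptions, not the paper's exact method.
import numpy as np
from scipy.stats import gaussian_kde

def window_features(x, width):
    """Slope and standard deviation per non-overlapping window (assumed scheme)."""
    n = len(x) // width
    feats = []
    for i in range(n):
        w = x[i * width:(i + 1) * width]
        slope = np.polyfit(np.arange(width), w, 1)[0]
        feats.append((slope, w.std()))
    return np.array(feats)

rng = np.random.default_rng(1)
normal = np.sin(np.linspace(0, 40 * np.pi, 4000)) + rng.normal(0, 0.1, 4000)
test = normal.copy()
test[2000:2100] += np.linspace(0, 3, 100)   # injected pattern anomaly

ref = window_features(normal, 100)
obs = window_features(test, 100)

kde = gaussian_kde(ref.T)                   # density model of normal patterns
scores = kde(obs.T)                         # low density => anomalous pattern
threshold = np.quantile(kde(ref.T), 0.01)   # assumed 1% cut-off
print("anomalous windows:", np.where(scores < threshold)[0])
```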


2021 ◽  
Vol 5 (1) ◽  
pp. 10
Author(s):  
Mark Levene

A bootstrap-based hypothesis test of the goodness-of-fit for the marginal distribution of a time series is presented. Two metrics, the empirical survival Jensen–Shannon divergence (ESJS) and the Kolmogorov–Smirnov two-sample test statistic (KS2), are compared on four data sets—three stablecoin time series and a Bitcoin time series. We demonstrate that, after applying first-order differencing, all the data sets fit heavy-tailed α-stable distributions with 1 < α < 2 at the 95% confidence level. Moreover, ESJS is more powerful than KS2 on these data sets, since the widths of the derived confidence intervals for KS2 are, proportionately, much larger than those of ESJS.
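A discretised sketch of the two statistics being compared may help. The grid construction, base-2 logarithm, and bootstrap settings below are assumptions; Levene's paper defines ESJS precisely, and this is only one plausible implementation.

```python
# Minimal sketch, not the paper's exact procedure: a discretised empirical
# survival Jensen-Shannon divergence (ESJS), the two-sample Kolmogorov-
# Smirnov statistic (KS2), and percentile bootstrap confidence intervals.
import numpy as np
from scipy.stats import ks_2samp

def esf(sample, grid):
    """Empirical survival function evaluated on a grid."""
    return 1.0 - np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def esjs(x, y, n_grid=512):
    """Discretised ESJS between two samples (assumed form)."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), n_grid)
    sx, sy = esf(x, grid), esf(y, grid)
    m = 0.5 * (sx + sy)

    def kl(p, q):
        mask = p > 0          # m > 0 wherever p > 0, so the ratio is safe
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    return 0.5 * kl(sx, m) + 0.5 * kl(sy, m)

def bootstrap_ci(stat, x, y, n_boot=500, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a two-sample statistic."""
    rng = np.random.default_rng(seed)
    vals = [stat(rng.choice(x, x.size), rng.choice(y, y.size))
            for _ in range(n_boot)]
    return np.quantile(vals, [alpha / 2, 1 - alpha / 2])

# Heavy-tailed stand-ins for differenced price series (illustrative only).
x = np.random.default_rng(2).standard_t(df=1.5, size=1000)
y = np.random.default_rng(3).standard_t(df=1.5, size=1000)
print("ESJS 95% CI:", bootstrap_ci(esjs, x, y))
print("KS2  95% CI:", bootstrap_ci(lambda a, b: ks_2samp(a, b).statistic, x, y))
```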


2019 ◽  
Vol 93 (12) ◽  
pp. 2651-2660 ◽  
Author(s):  
Sergey Samsonov

Abstract
The previously presented Multidimensional Small Baseline Subset (MSBAS-2D) technique computes two-dimensional (2D), east and vertical, ground deformation time series from two or more ascending and descending Differential Interferometric Synthetic Aperture Radar (DInSAR) data sets by assuming that the contribution of the north deformation component is negligible. DInSAR data sets can be acquired with different temporal and spatial resolutions, viewing geometries, and wavelengths. The MSBAS-2D technique has previously been used for mapping deformation due to mining, urban development, carbon sequestration, permafrost aggradation, pingo growth, and volcanic activity. In the case of glacier ice flow, however, the north deformation component is often too large to be negligible. Historically, the surface-parallel flow (SPF) constraint was used to compute the static three-dimensional (3D) velocity field at various glaciers. A novel MSBAS-3D technique has been developed for computing 3D deformation time series by utilizing the SPF constraint. This technique is used to map 3D deformation at the Barnes Ice Cap, Baffin Island, Nunavut, Canada, during January–March 2015, and the MSBAS-2D and MSBAS-3D solutions are compared. The MSBAS-3D technique can be used to study glacier ice flow at other glaciers, as well as other surface deformation processes with a large north deformation component, such as landslides. The software implementation of the MSBAS-3D technique can be downloaded from http://insar.ca/.
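The geometric core of the 2D decomposition, each line-of-sight (LOS) measurement being a projection of the deformation vector, can be sketched as a per-pixel least-squares problem. The angle convention, viewing geometries, and deformation values below are illustrative assumptions; the actual MSBAS software additionally handles differing resolutions, time-series inversion, and regularisation.

```python
# Minimal sketch of the 2D (east, vertical) decomposition from ascending and
# descending DInSAR LOS rates, with the north component assumed negligible
# as in MSBAS-2D. All angles and rates are illustrative assumptions.
import numpy as np

def los_unit_vector(incidence_deg, look_azimuth_deg):
    """(east, vertical) components of the target-to-satellite LOS unit
    vector; the north component is dropped per the MSBAS-2D assumption.
    look_azimuth is the azimuth of the LOS ground projection, measured
    clockwise from north (assumed convention)."""
    inc = np.radians(incidence_deg)
    az = np.radians(look_azimuth_deg)
    east = np.sin(inc) * np.sin(az)
    vert = np.cos(inc)
    return np.array([east, vert])

# Ascending, right-looking: LOS points roughly east; descending: roughly
# west. Incidence and azimuth values are assumed, not from the paper.
A = np.vstack([
    los_unit_vector(34.0, 100.0),   # ascending geometry
    los_unit_vector(34.0, 260.0),   # descending geometry
])

true_def = np.array([0.010, -0.020])  # east, vertical (m/yr), illustrative
d_los = A @ true_def                   # noise-free synthetic LOS rates

# Per-pixel least-squares solution for (east, vertical) deformation.
m, *_ = np.linalg.lstsq(A, d_los, rcond=None)
print(f"recovered east = {m[0]:+.4f} m/yr, vertical = {m[1]:+.4f} m/yr")
```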


2018 ◽  
Vol 617 ◽  
pp. A108 ◽  
Author(s):  
T. Appourchaux ◽  
P. Boumier ◽  
J. W. Leibacher ◽  
T. Corbard

Context. The recent claims of g-mode detection have restarted the search for these potentially extremely important modes. These claims can be reassessed in view of the different data sets available from the SoHO instruments and ground-based instruments.
Aims. We produce a new calibration of the GOLF data with a more consistent p-mode amplitude and a more consistent time-shift correction compared to the time series used in the past.
Methods. The calibration of 22 yr of GOLF data is done with a simpler approach that uses only the predicted radial velocity of the SoHO spacecraft as a reference. Using p modes, we measure and correct the time shift between ground- and space-based instruments and the GOLF instrument.
Results. The p-mode velocity calibration is now consistent to within a few percent with other instruments. The remaining time shifts are within ±5 s for 99.8% of the time series.
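As a rough illustration of how a time shift between two instruments' velocity series might be measured, the following sketch cross-correlates two noisy series sharing p-mode-like oscillations. This is a generic alignment technique with assumed cadence, frequencies, and injected shift, not the GOLF calibration pipeline.

```python
# Minimal sketch: estimating the time shift between two velocity series by
# cross-correlation in a p-mode-like band. Cadence, frequencies, and the
# injected 120 s shift are illustrative assumptions.
import numpy as np
from scipy.signal import correlate, correlation_lags

dt = 60.0                                   # 60 s cadence (assumed)
t = np.arange(0, 8 * 3600, dt)              # 8 hours of data
rng = np.random.default_rng(4)

# A few p-mode-like frequencies; using several makes the correlation
# peak unambiguous compared to a single sinusoid.
freqs_mhz = np.array([2.9, 3.1, 3.3, 3.5])
signal = np.sin(2 * np.pi * freqs_mhz[:, None] * 1e-3 * t).sum(axis=0)

ground = signal + rng.normal(0, 0.3, t.size)
space = np.interp(t - 120.0, t, signal) + rng.normal(0, 0.3, t.size)  # 120 s late

xc = correlate(space - space.mean(), ground - ground.mean(), mode="full")
lags = correlation_lags(space.size, ground.size, mode="full")

mask = np.abs(lags * dt) <= 300             # search near zero lag (assumed prior)
shift = lags[mask][np.argmax(xc[mask])] * dt
print(f"estimated shift ≈ {shift:.0f} s (injected: 120 s)")
```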


2019 ◽  
Vol 24 (48) ◽  
pp. 194-204 ◽  
Author(s):  
Francisco Flores-Muñoz ◽  
Alberto Javier Báez-García ◽  
Josué Gutiérrez-Barroso

Purpose. This work aims to explore the behavior of stock market prices according to the autoregressive fractional differencing integrated moving average (ARFIMA) model. This behavior is compared with a measure of online presence: search engine results as measured by Google Trends.
Design/methodology/approach. The study sample comprises the companies listed in the STOXX® Global 3000 Travel and Leisure index. Google Finance and Yahoo Finance, along with Google Trends, were used to obtain stock price and search result data, respectively, for a period of five years (October 2012 to October 2017). To guarantee some comparability between the two data sets, weekly observations were collected, for a total of 118 firms with two time series each (price and search results), around 61,000 observations.
Findings. Relationships between the two data sets are explored, with theoretical implications for the fields of economics, finance, and management. Tourist corporations were analyzed owing to their growing economic impact. The estimations are initially consistent with long memory, suggesting that both stock market prices and online search trends deserve further exploration for modeling and forecasting. Significant country and sector effects are also shown.
Originality/value. This research contributes in two ways: it demonstrates the potential of a new tool for analyzing relevant time series to monitor the behavior of firms and markets, and it suggests several theoretical pathways for further research on the specific topics of information asymmetry and corporate transparency, proposing pertinent bridges between the two fields.
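One standard way to check for the long memory that ARFIMA models capture is the Geweke–Porter-Hudak (GPH) log-periodogram estimate of the fractional differencing parameter d. The sketch below applies this generic estimator to a simulated ARFIMA(0, d, 0) series; it is not the authors' methodology or data.

```python
# Minimal sketch: GPH log-periodogram estimation of the fractional
# differencing parameter d, a standard long-memory diagnostic. The
# simulated series and bandwidth choice are illustrative assumptions.
import numpy as np

def gph_estimate(x, bandwidth_power=0.5):
    """Regress log periodogram on -log(4 sin^2(freq/2)); the slope estimates d."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    m = int(n ** bandwidth_power)              # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    pgram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -np.log(4 * np.sin(freqs / 2) ** 2)
    slope, _ = np.polyfit(regressor, np.log(pgram), 1)
    return slope

# Simulate ARFIMA(0, d, 0) via its truncated moving-average representation:
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
rng = np.random.default_rng(5)
d_true = 0.3
k = np.arange(1, 200)
weights = np.r_[1.0, np.cumprod((k - 1 + d_true) / k)]
eps = rng.normal(size=4096)
x = np.convolve(eps, weights, mode="full")[:eps.size]

print(f"GPH estimate of d: {gph_estimate(x):.2f} (simulated d = {d_true})")
```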


Author(s):  
Christian Herff ◽  
Dean J. Krusienski

Abstract
Clinical data is often collected and processed as time series: sequences of data indexed by successive time points. Such time series range from signals sampled at short time intervals to represent continuous biophysical waveforms, such as the voltage measurements of an electrocardiogram, to measurements sampled daily, weekly, or yearly, such as patient weight or blood triglyceride levels. When analyzing clinical data or designing biomedical systems for measurements, interventions, or diagnostic aids, it is important to represent the information contained in such time series in a more compact or meaningful form (e.g., after noise filtering), amenable to interpretation by a human or computer. This process is known as feature extraction. This chapter discusses some fundamental techniques for extracting features from time series representing general forms of clinical data.
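A minimal sketch of the feature extraction process described above: band-limit a raw physiological series to suppress noise, then reduce it to a small, interpretable feature vector. The filter band, feature choices, sampling rate, and synthetic signal are illustrative assumptions, not the chapter's specific techniques.

```python
# Minimal sketch of time-series feature extraction for clinical data:
# noise filtering followed by a few time- and frequency-domain features.
# All settings below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, welch

def extract_features(x, fs):
    # Band-limit to suppress baseline drift and high-frequency noise
    # (assumed 0.5-40 Hz band, typical for ECG-like signals).
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    xf = filtfilt(b, a, x)

    # Time-domain features.
    feats = {
        "mean": np.mean(xf),
        "std": np.std(xf),
        "rms": np.sqrt(np.mean(xf ** 2)),
    }

    # Frequency-domain feature: location of peak spectral power.
    freqs, psd = welch(xf, fs=fs, nperseg=1024)
    feats["peak_freq_hz"] = freqs[np.argmax(psd)]
    return feats

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(6).normal(size=t.size)
print(extract_features(x, fs))
```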


Author(s):  
Fida Kamal Dankar ◽  
Khaled El Emam ◽  
Angelica Neisa ◽  
Tyson Roffey