A Computationally Efficient Measure for Word Semantic Relatedness Using Time Series

Author(s):  
Arash Joorabchi ◽  
Alaa Alahmadi ◽  
Michael English ◽  
Abdulhussain E. Mahdi
2013 ◽  
Vol 2013 ◽  
pp. 1-16 ◽  
Author(s):  
Sunghee Oh ◽  
Seongho Song ◽  
Gregory Grabowski ◽  
Hongyu Zhao ◽  
James P. Noonan

RNA-seq is becoming the de facto standard approach for transcriptome analysis at ever-decreasing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulation of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model time-course RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis.
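The autoregressive component above can be sketched as an ordinary least-squares fit of each expression value on its lagged predecessor; the simulated log-expression values, coefficients, and sample size below are illustrative stand-ins, not the SETI or HMM machinery from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate log-expression for one gene with a true autoregressive
# dependence: y_t = 0.5 + 0.8 * y_{t-1} + noise (values are illustrative).
n = 200
y = np.empty(n)
y[0] = 2.0
for t in range(1, n):
    y[t] = 0.5 + 0.8 * y[t - 1] + rng.normal(scale=0.1)

# AR(1) time-lagged regression: regress y_t on y_{t-1} with an intercept.
X = np.column_stack([np.ones(n - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
intercept, ar_coef = coef
print(intercept, ar_coef)  # estimates should land near (0.5, 0.8)
```

A large fitted `ar_coef` flags a gene whose expression at one time point carries information about the next, which is exactly the time-point dependency the abstract argues must be modeled explicitly.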


2012 ◽  
Vol 22 (10) ◽  
pp. 1250236 ◽  
Author(s):  
LIANG HUANG ◽  
YING-CHENG LAI ◽  
MARY ANN F. HARRISON

We propose a method to detect nodes of relative importance, e.g. hubs, in an unknown network based on a set of measured time series. The idea is to construct a matrix characterizing the synchronization probabilities between various pairs of time series and examine the components of the principal eigenvector. We provide a heuristic argument indicating the existence of an approximate one-to-one correspondence between the components and the degrees of the nodes from which measurements are obtained. Strikingly, this correspondence appears to be quite robust, holding regardless of the detailed node dynamics and the network topology. Our computationally efficient method thus provides a general means to address the important problem of network detection, with potential applications in a number of fields.
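The eigenvector step can be sketched as follows; the 5-node synchronization-probability matrix is a hand-made toy stand-in for one estimated from measured time series, with node 0 playing the hub:

```python
import numpy as np

# Toy stand-in for a measured synchronization-probability matrix:
# node 0 synchronizes strongly with every other node (a hub),
# while the remaining nodes synchronize weakly with each other.
S = np.array([
    [1.0, 0.8, 0.8, 0.8, 0.8],
    [0.8, 1.0, 0.2, 0.2, 0.2],
    [0.8, 0.2, 1.0, 0.2, 0.2],
    [0.8, 0.2, 0.2, 1.0, 0.2],
    [0.8, 0.2, 0.2, 0.2, 1.0],
])

# Principal eigenvector of the symmetric matrix (eigh sorts ascending,
# so the last column belongs to the largest eigenvalue).
eigvals, eigvecs = np.linalg.eigh(S)
principal = np.abs(eigvecs[:, -1])

hub = int(np.argmax(principal))
print(hub)  # node 0 carries the largest component
```

By the Perron-Frobenius theorem the principal eigenvector of a positive matrix is positive, and its largest components belong to the most strongly connected rows, which is the degree correspondence the abstract describes.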


2006 ◽  
Vol 16 (05) ◽  
pp. 371-382 ◽  
Author(s):  
EDMOND H. C. WU ◽  
PHILIP L. H. YU ◽  
W. K. LI

We suggest using independent component analysis (ICA) to decompose multivariate time series into statistically independent time series. We then propose computationally efficient ICA-GARCH models to estimate the multivariate volatilities. The experimental results show that the ICA-GARCH models are more effective than existing methods, including DCC, PCA-GARCH, and EWMA. We also apply the proposed models to compute value at risk (VaR) for risk management applications. The backtesting and out-of-sample tests validate the performance of ICA-GARCH models for value-at-risk estimation.
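As a minimal sketch of the value-at-risk step for a single series, here is an EWMA volatility estimate (the RiskMetrics-style baseline the abstract compares against) followed by a parametric VaR under a normal assumption; the decay λ = 0.94, the 5% level, and the simulated returns are illustrative choices, not the ICA-GARCH estimator itself:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=500)  # simulated daily returns

# EWMA variance recursion: var_t = lam * var_{t-1} + (1 - lam) * r_t^2.
lam = 0.94
var_t = returns[0] ** 2
for r in returns[1:]:
    var_t = lam * var_t + (1 - lam) * r ** 2
sigma = float(np.sqrt(var_t))

# One-day parametric VaR at the 5% level under a normal assumption:
# the loss threshold exceeded with probability alpha.
alpha = 0.05
z = NormalDist().inv_cdf(alpha)
value_at_risk = -(z * sigma)
print(sigma, value_at_risk)
```

An ICA-GARCH pipeline would replace the single EWMA recursion with per-component GARCH fits on the ICA-demixed series, then map the component variances back to the observed assets.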


2009 ◽  
Vol 2009 ◽  
pp. 1-11 ◽  
Author(s):  
Lourens Waldorp

As a consequence of misspecification of the hemodynamic response and noise variance models, tests on general linear model (GLM) coefficients are not valid. Robust estimation of the variance of the GLM coefficients in fMRI time series is therefore essential. In this paper an alternative method to estimate the variance of the GLM coefficients accurately is suggested and compared to other methods. The alternative, referred to as the sandwich, is based primarily on the fact that the time series are obtained from multiple exchangeable stimulus presentations. The analytic results show that the sandwich is unbiased. Using this result, it is possible to obtain an exact statistic which maintains the nominal 5% false-positive rate. Extensive Monte Carlo simulations show that the sandwich is robust against misspecification of the autocorrelations and of the hemodynamic response model. The sandwich is seen to be in many circumstances robust, computationally efficient, and flexible with respect to correlation structures across the brain. In contrast, the smoothing approach can be robust only to a certain extent, and only when the smoothing parameter is chosen with specific knowledge of the circumstances.
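The generic sandwich form can be sketched on a toy heteroskedastic GLM; this is the textbook HC0 ("bread-meat-bread") estimator, not the exchangeable-replication version developed in the paper, and all data below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# GLM with heteroskedastic noise: the naive OLS variance formula is
# misspecified here, while the sandwich estimator remains valid.
n = 500
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5 + x)  # noise variance grows with x

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# "Bread" (X'X)^{-1} around the "meat" X' diag(resid^2) X.
meat = X.T @ (resid[:, None] ** 2 * X)
sandwich_cov = XtX_inv @ meat @ XtX_inv
print(beta_hat, np.sqrt(np.diag(sandwich_cov)))
```

In the paper's fMRI setting, the meat is instead estimated from the variability across exchangeable stimulus presentations, which is what makes the resulting test statistic exact.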


2001 ◽  
Vol 04 (01) ◽  
pp. 147-177 ◽  
Author(s):  
GILLES ZUMBACH ◽  
ULRICH MÜLLER

We present a toolbox to compute and extract information from inhomogeneous (i.e. unequally spaced) time series. The toolbox contains a large set of operators, mapping from the space of inhomogeneous time series to itself. These operators are computationally efficient (in both time and memory) and suitable for stochastic processes. This makes them attractive for processing high-frequency data in finance and other fields. Using a basic set of operators, we easily construct more powerful combined operators which cover a wide set of typical applications. The operators are classified as either macroscopic operators (that have a limit value when the sampling frequency goes to infinity) or microscopic operators (that strongly depend on the actual sampling). For inhomogeneous data, macroscopic operators are more robust and more important. Examples of macroscopic operators are (exponential) moving averages, differentials, derivatives, moving volatilities, etc.
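A minimal sketch of one such macroscopic operator, an exponential moving average on unequally spaced samples, assuming the "previous point" interpolation scheme (the series is held constant at its last value between samples); the timestamps and decay time below are illustrative:

```python
import math

def ema_inhomogeneous(times, values, tau):
    """Exponential moving average for unequally spaced samples.

    Between samples the series is assumed constant at its last value,
    so each step decays the running average by exp(-dt / tau) and
    mixes in the new observation with the complementary weight.
    """
    ema = values[0]
    out = [ema]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        mu = math.exp(-dt / tau)
        ema = mu * ema + (1 - mu) * values[i]
        out.append(ema)
    return out

# Irregular timestamps: the long gap before t = 10 lets the average
# forget almost all of its history, so the EMA jumps nearly to 5.
times = [0.0, 0.5, 1.0, 10.0]
values = [1.0, 1.0, 1.0, 5.0]
print(ema_inhomogeneous(times, values, tau=1.0))
```

Because the decay weight depends only on the elapsed time `dt`, the operator converges to a well-defined limit as the sampling frequency grows, which is the defining property of a macroscopic operator in the abstract's classification.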


2021 ◽  
Vol 13 (6) ◽  
pp. 1130
Author(s):  
Jonathan León-Tavares ◽  
Jean-Louis Roujean ◽  
Bruno Smets ◽  
Erwin Wolters ◽  
Carolien Toté ◽  
...  

Land surface reflectance measurements from the VEGETATION program (SPOT-4, SPOT-5 and PROBA-V satellites) have led to the acquisition of consistent time series of the Normalized Difference Vegetation Index (NDVI) at a global scale. The wide imaging swath (>2000 km) of the family of VEGETATION space-borne sensors ensures a daily coverage of the Earth at the expense of varying observation and illumination geometries between successive orbit overpasses for a given target located on the ground. Such angular variations induce saw-like patterns in time series of reflectance and NDVI. The presence of directional effects is not a real issue provided that they can be properly removed, which presupposes an appropriate BRDF (Bidirectional Reflectance Distribution Function) sampling as offered by the VEGETATION program. An anisotropy correction supports a better analysis of the temporal shapes and spatial patterns of land surface reflectance values and vegetation indices such as NDVI. Herein we describe a BRDF correction methodology, for the purpose of the Copernicus Global Land Service framework, which notably includes an adaptive data accumulation window and provides uncertainties associated with the NDVI computed from normalized reflectance. Comparing time series of normalized and directional NDVI to assess the general performance of the methodology reveals a significant removal of the high-frequency noise due to directional effects. The proposed methodology is computationally efficient enough to operate at a global scale and deliver a BRDF-corrected NDVI product based on long-term time series from the VEGETATION sensor and its follow-on with the Copernicus Sentinel-3 satellite constellation.
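For reference, the NDVI itself is a simple normalized band ratio of near-infrared and red reflectance; the reflectance values below are illustrative of how the same target yields different NDVI under different viewing geometries before BRDF normalization:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Two overpasses of the same target under different observation and
# illumination geometries: the reflectance shift alone changes NDVI,
# producing the saw-like jitter that BRDF normalization removes.
print(ndvi([0.45, 0.50], [0.08, 0.10]))
```

Since NDVI is bounded in [-1, 1] and nonlinear in the band reflectances, normalizing the reflectances before computing the index (rather than smoothing the index afterwards) is what lets the methodology also propagate per-pixel uncertainties.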


2011 ◽  
Vol 84 (3) ◽  
Author(s):  
Réka Albert ◽  
Bhaskar DasGupta ◽  
Rashmi Hegde ◽  
Gowri Sangeetha Sivanathan ◽  
Anthony Gitter ◽  
...  
