A conversation with Kashyap Tumkur

Ubiquity ◽  
2021 ◽  
Vol 2021 (May) ◽  
pp. 1-5
Author(s):  
Bushra Anjum

Ubiquity's senior editor Dr. Bushra Anjum chats with Kashyap Tumkur, a software engineer at Verily Life Sciences, the healthcare and life sciences arm of Alphabet. They discuss how the notion of "precision medicine" has gained popularity in recent times. Next, the focus turns to Tumkur's work, where he, along with his team, is working on collecting and integrating continuous time-series data to create a map of human health.

2008 ◽  
Vol 5 (25) ◽  
pp. 885-897 ◽  
Author(s):  
Simon Cauchemez ◽  
Neil M Ferguson

We present a new statistical approach to analyse epidemic time-series data. A major difficulty for inference is that (i) the latent transmission process is partially observed and (ii) observed quantities are further aggregated temporally. We develop a data augmentation strategy to tackle these problems and introduce a diffusion process that mimics the susceptible–infectious–removed (SIR) epidemic process but is more tractable analytically. While methods based on discrete-time models require the epidemic and data collection processes to have similar time scales, our approach, based on a continuous-time model, is free of such constraints. Using simulated data, we found that all parameters of the SIR model, including the generation time, were estimated accurately if the observation interval was less than 2.5 times the generation time of the disease. Previous discrete-time TSIR models have been unable to estimate generation times, given that they assume the generation time is equal to the observation interval. However, we were unable to estimate the generation time of measles accurately from historical data. This indicates that simple models assuming homogeneous mixing (even with age structure), of the type standard in mathematical epidemiology, miss key features of epidemics in large populations.
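The continuous-time SIR process that this diffusion mimics can be sketched with a standard Gillespie simulation (a minimal illustration, not the authors' method; the population size, beta, and gamma below are arbitrary):

```python
import random

def gillespie_sir(n, i0, beta, gamma, t_max, rng):
    """Exact stochastic simulation of the continuous-time SIR process."""
    s, i, r = n - i0, i0, 0
    t = 0.0
    events = [(t, s, i, r)]
    while i > 0 and t < t_max:
        rate_inf = beta * s * i / n  # infection: S + I -> 2I
        rate_rec = gamma * i         # removal:   I -> R
        total = rate_inf + rate_rec
        t += rng.expovariate(total)  # exponential waiting time to next event
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
        events.append((t, s, i, r))
    return events

rng = random.Random(42)
traj = gillespie_sir(n=1000, i0=5, beta=0.6, gamma=0.2, t_max=120.0, rng=rng)
t_end, s, i, r = traj[-1]
print(s + i + r)  # 1000: each event conserves the population
```

Temporal aggregation of the kind the paper addresses corresponds to observing only, say, weekly increments of R rather than the full event list.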


Author(s):  
Frank-Michael Schleif ◽  
Bassam Mokbel ◽  
Andrej Gisbrecht ◽  
Leslie Theunissen ◽  
Volker Dürr ◽  
...  

Water ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 1098 ◽  
Author(s):  
Benjamin D. Bowes ◽  
Jeffrey M. Sadler ◽  
Mohamed M. Morsy ◽  
Madhur Behl ◽  
Jonathan L. Goodall

Many coastal cities are facing frequent flooding from storm events that are made worse by sea level rise and climate change. The groundwater table level in these low-relief coastal cities is an important, but often overlooked, factor in the recurrent flooding these locations face. Infiltration of stormwater and water intrusion due to tidal forcing can cause already shallow groundwater tables to quickly rise toward the land surface. This decreases available storage, which increases runoff, stormwater system loads, and flooding. Groundwater table forecasts, which could help inform the modeling and management of coastal flooding, are generally unavailable. This study explores two machine learning models, Long Short-term Memory (LSTM) networks and Recurrent Neural Networks (RNN), to model and forecast groundwater table response to storm events in the flood-prone coastal city of Norfolk, Virginia. To determine the effect of training data type on model accuracy, two types of datasets, (i) the continuous time series and (ii) a dataset of only storm events, both created from observed groundwater table, rainfall, and sea level data from 2010–2018, are used to train and test the models. Additionally, a real-time groundwater table forecasting scenario was carried out to compare the models’ abilities to predict groundwater table levels given forecast rainfall and sea level as input data. When modeling the groundwater table with observed data, LSTM networks were found to have more predictive skill than RNNs (root mean squared error (RMSE) of 0.09 m versus 0.14 m, respectively). The real-time forecast scenario showed that models trained only on storm event data outperformed models trained on the continuous time series data (RMSE of 0.07 m versus 0.66 m, respectively) and that LSTM outperformed RNN models.
Because models trained with the continuous time series data had much higher RMSE values, they were not suitable for predicting the groundwater table in the real-time scenario when using forecast input data. These results demonstrate the first use of LSTM networks to create hourly forecasts of groundwater table in a coastal city and show they are well suited for creating operational forecasts in real-time. As groundwater table levels increase due to sea level rise, forecasts of groundwater table will become an increasingly valuable part of coastal flood modeling and management.
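The dataset-construction step, turning a continuous series into (lagged-input, next-value) pairs and scoring forecasts by RMSE, can be sketched as follows (the variable names, toy values, and 3-step window are illustrative, not the paper's configuration):

```python
import math

def make_windows(series, lag):
    """Split a time series into (lagged inputs, next value) training pairs."""
    return [(series[t - lag:t], series[t]) for t in range(lag, len(series))]

def rmse(pred, obs):
    """Root mean squared error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

gwt = [1.00, 1.10, 1.12, 1.30, 1.24, 1.02, 1.01]  # toy groundwater-table levels (m)
pairs = make_windows(gwt, lag=3)
print(len(pairs))  # 4
print(pairs[0])    # ([1.0, 1.1, 1.12], 1.3)

# persistence baseline: predict that the next level equals the last observed level
pred = [window[-1] for window, _ in pairs]
obs = [target for _, target in pairs]
print(round(rmse(pred, obs), 3))
```

An LSTM or RNN would replace the persistence baseline, consuming the same window/target pairs; restricting `gwt` to storm periods before windowing mirrors the storm-event dataset of the study.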


Bernoulli ◽  
2008 ◽  
Vol 14 (2) ◽  
pp. 519-542 ◽  
Author(s):  
Ross A. Maller ◽  
Gernot Müller ◽  
Alex Szimayer

2014 ◽  
Vol 24 (05) ◽  
pp. 1450063 ◽  
Author(s):  
J. S. Armand Eyebe Fouda ◽  
Bertrand Bodo ◽  
Samrat L. Sabat ◽  
J. Yves Effa

The binary 0-1 test has limited ability to detect chaos in oversampled time series observations. In this paper we propose a modified 0-1 test in which the binary 0-1 test is applied to the discrete map of local maxima and minima of the observable rather than to the observable directly. The proposed approach successfully detects chaos in oversampled time series data. This is verified through numerical simulations of the Lorenz and Duffing systems. The simulation results show the efficiency and computational gain of the proposed test for chaos detection in continuous-time dynamical systems.
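The two ingredients, extracting the map of local extrema and the standard 0-1 test itself, can be sketched in plain Python (a minimal illustration; the test signals, the choice c = 1.7, and the lag range are assumptions, and this follows the Gottwald-Melbourne correlation variant rather than the paper's exact implementation):

```python
import math

def local_extrema(x):
    """Reduce an oversampled series to its sequence of local maxima and minima."""
    return [x[j] for j in range(1, len(x) - 1)
            if (x[j] - x[j - 1]) * (x[j + 1] - x[j]) < 0]

def zero_one_test(phi, c=1.7):
    """0-1 test for chaos: K near 1 suggests chaos, K near 0 regular motion."""
    n = len(phi)
    p = q = 0.0
    p_traj, q_traj = [], []
    for j, v in enumerate(phi, start=1):
        p += v * math.cos(j * c)  # translation variables driven by the observable
        q += v * math.sin(j * c)
        p_traj.append(p)
        q_traj.append(q)
    lags = list(range(1, n // 10))
    msd = [sum((p_traj[j + k] - p_traj[j]) ** 2 + (q_traj[j + k] - q_traj[j]) ** 2
               for j in range(n - k)) / (n - k) for k in lags]
    # K = correlation between the lag and the mean-square displacement
    mk, mm = sum(lags) / len(lags), sum(msd) / len(msd)
    cov = sum((a - mk) * (b - mm) for a, b in zip(lags, msd))
    var = math.sqrt(sum((a - mk) ** 2 for a in lags) * sum((b - mm) ** 2 for b in msd))
    return cov / var

# the extrema map strips oversampling: ~3 sine periods reduce to 6 points
oversampled = [math.sin(0.05 * t) for t in range(400)]
print(len(local_extrema(oversampled)))  # 6

# chaotic observable (logistic map) versus a regular one (sampled sine)
x, chaos = 0.3, []
for _ in range(1000):
    x = 4.0 * x * (1.0 - x)
    chaos.append(x)
regular = [math.sin(0.1 * j) for j in range(1000)]
print(round(zero_one_test(chaos), 2), round(zero_one_test(regular), 2))
```

The modified test of the paper then amounts to `zero_one_test(local_extrema(x))` instead of `zero_one_test(x)`.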


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
David Moriña ◽  
Amanda Fernández-Fontelo ◽  
Alejandra Cabaña ◽  
Pedro Puig

The main goal of this work is to present a new model able to deal with potentially misreported continuous time series. The proposed model handles the autocorrelation structure of continuous time-series data that might be partially or totally underreported or overreported. Its performance is illustrated through a comprehensive simulation study considering several autocorrelation structures and through three real data applications: human papillomavirus incidence in Girona (Catalonia, Spain) and Covid-19 incidence in two regions with very different circumstances, the early days of the epidemic in the Chinese region of Heilongjiang and more recent data from Catalonia.
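A toy generative sketch of the kind of misreporting such a model targets: an autocorrelated AR(1) latent series that is observed either faithfully or damped by an underreporting factor q with frequency omega (all parameter names and values here are illustrative assumptions, not the authors' specification):

```python
import random

def simulate_misreported(n, phi, q, omega, rng):
    """Latent AR(1) series x_t; with probability omega only q*x_t is reported."""
    x, latent, observed = 10.0, [], []
    for _ in range(n):
        x = phi * x + rng.gauss(5.0, 1.0)  # autocorrelated latent incidence
        latent.append(x)
        observed.append(q * x if rng.random() < omega else x)  # underreporting
    return latent, observed

rng = random.Random(0)
latent, observed = simulate_misreported(n=500, phi=0.6, q=0.4, omega=0.3, rng=rng)
# under underreporting, the observed totals fall short of the latent totals
print(sum(observed) < sum(latent))  # True
```

Inference for such a model works in the opposite direction: recovering phi, q, and omega from the observed series alone.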


2021 ◽  
Vol 3 (1) ◽  
pp. 11
Author(s):  
Christopher G. Albert ◽  
Ulrich Callies ◽  
Udo von Toussaint

We present an approach to enhance the performance and flexibility of the Bayesian inference of model parameters based on observations of the measured data. Going beyond the usual surrogate-enhanced Monte Carlo or optimization methods that focus on a scalar loss, we place emphasis on a function-valued output of formally infinite dimension. For this purpose, the surrogate models are built on a combination of linear dimensionality reduction in an adaptive basis of principal components and Gaussian process regression for the map between reduced feature spaces. Since the decoded surrogate provides the full model output rather than only the loss, it is re-usable for multiple calibration measurements as well as different loss metrics and, consequently, allows for flexible marginalization over such quantities and applications to Bayesian hierarchical models. We evaluate the method’s performance based on a case study of a toy model and a simple riverine diatom model for the Elbe river. As input data, this model uses six tunable scalar parameters as well as silica concentrations in the upper reach of the river, together with the continuous time series of temperature, radiation, and river discharge over a specific year. The output consists of continuous time-series data that are calibrated against corresponding measurements from the Geesthacht Weir station at the Elbe river. For this study, only two scalar inputs were considered together with a function-valued output, and the results were compared to an existing model calibration using direct simulation runs without a surrogate.
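The surrogate construction, linear dimensionality reduction of the function-valued output followed by Gaussian process regression in the reduced feature space, can be sketched with plain NumPy (a minimal sketch: the toy forward model, RBF kernel, length scale, and grid sizes are all assumptions, not the authors' setup):

```python
import numpy as np

def pca_fit(Y, k):
    """Principal components of function-valued outputs (rows = model runs)."""
    mean = Y.mean(axis=0)
    _, _, vt = np.linalg.svd(Y - mean, full_matrices=False)
    basis = vt[:k]                 # k principal components
    coeffs = (Y - mean) @ basis.T  # reduced-feature coordinates per run
    return mean, basis, coeffs

def gp_predict(X, y, Xs, length=0.2, noise=1e-6):
    """GP regression with an RBF kernel; one output per column of y."""
    def kern(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = kern(X, X) + noise * np.eye(len(X))
    return kern(Xs, X) @ np.linalg.solve(K, y)

# toy forward model: one scalar parameter -> a 50-point time series
theta = np.linspace(0.0, 1.0, 20)[:, None]
t = np.linspace(0.0, 1.0, 50)
Y = np.sin(2 * np.pi * (t[None, :] + theta))  # 20 runs x 50 time points

mean, basis, coeffs = pca_fit(Y, k=2)
theta_new = np.array([[0.33]])
coeffs_new = gp_predict(theta, coeffs, theta_new)  # GP maps parameter -> features
y_surrogate = mean + coeffs_new @ basis            # decode back to full output
y_true = np.sin(2 * np.pi * (t + 0.33))
print(float(np.max(np.abs(y_surrogate - y_true))))  # small surrogate error
```

Because the decoded surrogate returns the whole time series, any loss metric against measured data can be applied afterwards, which is the re-usability the abstract emphasizes.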


2014 ◽  
Vol 644-650 ◽  
pp. 2164-2168
Author(s):  
Yong Zhi Liu ◽  
Xue Ping Jia

Association rules have played a significant role in mining discrete, clearly delimited transactions, but their performance is poor on continuous time-series data. First, this paper characterizes the trends of a time series, including rising, declining, and steady trends, and proposes a time-series trend method. Second, it defines trend association rules, including the support degree and confidence of a trend association rule. Finally, it gives an application example showing the effectiveness of the method for classification and association analysis of time series.
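The symbolization step, mapping a continuous series onto rise/decline/steady trend symbols that ordinary association-rule mining can consume, can be sketched as follows (the threshold eps, symbol names, and support definition are illustrative assumptions):

```python
def trend_symbols(series, eps=0.05):
    """Map consecutive differences to rise ('U'), decline ('D'), or steady ('S')."""
    out = []
    for prev, curr in zip(series, series[1:]):
        d = curr - prev
        out.append('U' if d > eps else 'D' if d < -eps else 'S')
    return out

def trend_support(symbols, pattern):
    """Fraction of windows in which the trend pattern occurs (its support degree)."""
    n = len(symbols) - len(pattern) + 1
    hits = sum(symbols[i:i + len(pattern)] == list(pattern) for i in range(n))
    return hits / n

prices = [1.00, 1.10, 1.12, 1.30, 1.24, 1.02, 1.01]
symbols = trend_symbols(prices)
print(symbols)  # ['U', 'S', 'U', 'D', 'D', 'S']
print(trend_support(symbols, ('U', 'D')))  # 0.2
```

A trend rule's confidence would follow the same pattern-counting idea: the support of the full pattern divided by the support of its antecedent.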


2018 ◽  
Author(s):  
Jake P. Taylor-King ◽  
Asbjørn N. Riseth ◽  
Manfred Claassen

Recent high-dimensional single-cell technologies such as mass cytometry are enabling time series experiments to monitor the temporal evolution of cell state distributions and to identify dynamically important cell states, such as fate decision states in differentiation. However, these technologies are destructive and require analysis approaches that temporally map between cell state distributions across time points. Current approaches that approximate the single-cell time series as a dynamical system either make too restrictive assumptions about the type of kinetics or link together pairs of sequential measurements in a discontinuous fashion. We propose Dynamic Distribution Decomposition (DDD), an operator approximation approach to infer a continuous distribution map between time points. On the basis of single-cell snapshot time series data, DDD approximates the continuous-time Perron–Frobenius operator by means of a finite set of basis functions. This procedure can be interpreted as a continuous-time Markov chain over a continuum of states. By assuming only a memoryless Markov (autonomous) process, the types of dynamics represented are more general than those represented by other common models, e.g., chemical reaction networks or stochastic differential equations. Additionally, the continuity assumption ensures that the same dynamical system maps between all time points, rather than changing arbitrarily at each time point. We demonstrate the ability of DDD to reconstruct dynamically important cell states and their transitions both on synthetic data and on mass cytometry time series of iPSC reprogramming of a fibroblast system.
We use DDD to find previously identified subpopulations of cells and to visualize differentiation trajectories. Dynamic Distribution Decomposition allows interpreting high-dimensional snapshot time series data as a low-dimensional Markov process, thereby enabling an interpretable dynamics analysis for a variety of biological processes by means of identifying their dynamically important cell states. Author summary: High-dimensional single-cell snapshot measurements are now increasingly utilized to study dynamic processes. Such measurements enable us to evaluate cell population distributions and their evolution over time. However, it is not trivial to map these distributions across time and to identify dynamically important cell states, i.e. bottleneck regions of state space exhibiting a high degree of change. We present Dynamic Distribution Decomposition (DDD), which achieves this task by encoding single-cell measurements as a linear combination of basis function distributions and evolving these as a linear system. We demonstrate reconstruction of dynamically important states for synthetic data of a bifurcated diffusion process and mass cytometry data for iPSC reprogramming.
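The forward picture DDD inverts, a snapshot encoded as a linear combination of basis functions (here crude indicator bins) and evolved by a continuous-time Markov generator, can be sketched in plain Python (this is the forward model only, not the authors' operator-fitting procedure; the 3-state generator, bin edges, and samples are arbitrary):

```python
def histogram(samples, edges):
    """Encode a snapshot as a distribution over indicator basis functions."""
    h = [0.0] * (len(edges) - 1)
    for s in samples:
        for i in range(len(edges) - 1):
            if edges[i] <= s < edges[i + 1]:
                h[i] += 1.0
                break
    total = sum(h)
    return [v / total for v in h]

def evolve(h, Q, t, terms=40):
    """h(t) = h(0) expm(Q t) via the truncated matrix-exponential series."""
    n = len(h)
    out = list(h)
    term = list(h)  # running term: h Q^k t^k / k!
    for k in range(1, terms):
        term = [sum(term[i] * Q[i][j] for i in range(n)) * (t / k) for j in range(n)]
        out = [a + b for a, b in zip(out, term)]
    return out

# toy 3-state generator: rows sum to zero (continuous-time Markov chain)
Q = [[-1.0, 1.0, 0.0],
     [0.5, -1.0, 0.5],
     [0.0, 1.0, -1.0]]

h0 = histogram([0.1, 0.2, 0.15, 0.8], edges=[0.0, 1/3, 2/3, 1.0])
h_late = evolve(h0, Q, t=2.0)
# relaxing toward the stationary distribution (0.25, 0.5, 0.25)
print([round(v, 2) for v in h_late])  # [0.29, 0.49, 0.22]
```

DDD runs this logic in reverse: given histograms at several time points, it fits the generator (equivalently, the Perron–Frobenius operator) that best maps each snapshot onto the next.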

