Psychometric network models from time-series and panel data

2019 ◽  
Author(s):  
Sacha Epskamp

Researchers in the field of network psychometrics often focus on the estimation of Gaussian graphical models (GGM)---an undirected network model of partial correlations---between observed variables of cross-sectional data or single-subject time-series data. This assumes that all variables are measured without measurement error, which may be implausible. In addition, cross-sectional data cannot distinguish between within-subject and between-subject effects. This paper provides a general framework that extends GGM modeling with latent variables, including relationships over time. These relationships can be estimated from time-series data or from panel data featuring at least three waves of measurement. The model takes the form of a graphical vector-autoregression model between latent variables and is termed the ts-lvgvar when estimated from time-series data and the panel-lvgvar when estimated from panel data. These methods have been implemented in the software package psychonetrics, which is exemplified in two empirical examples, one using time-series data and one using panel data, and evaluated in two large-scale simulation studies. The paper concludes with a discussion on ergodicity and generalizability. Although within-subject effects may in principle be separated from between-subject effects, the interpretation of these results rests on the intensity and the time interval of measurement and on the plausibility of the assumption of stationarity.
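The core object here, the GGM, can be illustrated with a minimal sketch (not psychonetrics itself, which is an R package): partial correlations are read off the standardized, sign-flipped inverse covariance (precision) matrix. The data below are synthetic.

```python
import numpy as np

def partial_correlations(data):
    """Estimate a GGM edge-weight matrix: partial correlations from
    the standardized inverse of the sample covariance matrix."""
    K = np.linalg.inv(np.cov(data, rowvar=False))  # precision matrix
    d = np.sqrt(np.diag(K))
    pcor = -K / np.outer(d, d)   # standardize and flip sign
    np.fill_diagonal(pcor, 0.0)  # no self-loops in the network
    return pcor

# toy example: three noisy copies of one latent signal
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 1))
data = np.hstack([x + 0.1 * rng.standard_normal((500, 1)) for _ in range(3)])
pcor = partial_correlations(data)
```

The resulting matrix is symmetric with a zero diagonal; nonzero off-diagonal entries are the undirected network edges.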

2021 ◽  
Vol 10 (2) ◽  
pp. 68
Author(s):  
Juan Bacilio Guerrero Escamilla ◽  
Arquímedes Avilés Vargas

This paper presents the elements entailed in building a panel data model from both cross-sectional and time-series dimensions, as well as the assumptions required for its application, focusing on the main elements of panel data modelling: how the model is built, how parameters are estimated, and how they are validated. On the basis of operations-research methodology, a practical exercise estimates the number of kidnapping cases in Mexico from several economic indicators. Of the two types of panel data models analysed in this research, the best fit is obtained with the random-effects model, and the most significant variables are gross domestic product growth and the informal employment rate in each state over the period 2010 to 2019. This illustrates that panel data models fit the data better than alternatives such as linear regression or time-series analysis.
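As a hedged illustration of why panel structure matters (this is the fixed-effects within-estimator on simulated data, not the paper's random-effects model, which additionally requires a variance-components step): demeaning within units removes unit-specific effects that would bias pooled OLS.

```python
import numpy as np

def within_estimator(y, x, ids):
    """Fixed-effects (within) slope: demean y and x within each unit,
    then run OLS on the demeaned data."""
    yd, xd = y.astype(float), x.astype(float)
    for i in np.unique(ids):
        m = ids == i
        yd[m] = yd[m] - yd[m].mean()
        xd[m] = xd[m] - xd[m].mean()
    return (xd @ yd) / (xd @ xd)

# simulated panel: 50 units x 10 periods, unit effects correlated with x
rng = np.random.default_rng(1)
ids = np.repeat(np.arange(50), 10)
alpha = rng.standard_normal(50)            # unit-specific intercepts
x = alpha[ids] + rng.standard_normal(500)  # regressor correlated with effects
y = 2.0 * x + alpha[ids] + 0.1 * rng.standard_normal(500)
beta_fe = within_estimator(y, x, ids)      # close to the true slope of 2.0
```

Pooled OLS on the same data would be pulled away from 2.0 by the correlation between x and the unit effects; the within-transformation removes that channel.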


2021 ◽  
Vol 48 (3) ◽  
Author(s):  
Muhammet O. Yalçin ◽  
Nevin Güler Dincer ◽  
Serdar Demir ◽  
...  

In statistical and econometric research, three types of data are mostly used: cross-sectional, time-series, and panel data. Cross-sectional data are obtained by collecting observations on the same variables for many units at a fixed time. Time-series data consist of observations measured at successive time points for a single unit. Sometimes the number of observations in cross-sectional or time-series data is insufficient for carrying out statistical or econometric analysis. In such cases, panel data, obtained by combining cross-sectional and time-series data, are often used. Panel data analysis (PDA) has advantages such as increasing the number of observations and degrees of freedom, decreasing multicollinearity, and obtaining more efficient and consistent predictions from richer data. However, PDA requires statistical assumptions concerning heteroscedasticity, autocorrelation, correlation between units, and stationarity, which are difficult to satisfy in real applications. In this study, fuzzy panel data analysis (FPDA) is proposed to overcome these drawbacks of PDA. FPDA is based on predicting the parameters of the panel data regression as triangular fuzzy numbers. To validate the efficiency of FPDA, both FPDA and PDA are applied to panel data consisting of gross domestic product data from five country groups for the years 2005-2013, and their prediction performance is compared using three criteria: mean absolute percentage error, root mean square error, and variance accounted for. All analyses are performed in R 3.5.2. The results show that FPDA is an efficient and practical method, especially when the required statistical assumptions are not satisfied.
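The three comparison criteria named above are straightforward to compute; a minimal sketch with made-up numbers (not the paper's data or its R code):

```python
import numpy as np

def mape(actual, pred):
    # mean absolute percentage error, in percent
    return 100.0 * np.mean(np.abs((actual - pred) / actual))

def rmse(actual, pred):
    # root mean square error, in the units of the data
    return np.sqrt(np.mean((actual - pred) ** 2))

def vaf(actual, pred):
    # variance accounted for, in percent
    return 100.0 * (1.0 - np.var(actual - pred) / np.var(actual))

actual = np.array([100.0, 110.0, 121.0, 133.1])
pred = np.array([98.0, 112.0, 120.0, 134.0])
scores = (mape(actual, pred), rmse(actual, pred), vaf(actual, pred))
```

Lower MAPE and RMSE and higher VAF indicate better predictions, which is how the FPDA-versus-PDA comparison is scored.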


Author(s):  
Andrew Q. Philips

In cross-sectional time-series data with a dichotomous dependent variable, failing to account for duration dependence when it exists can lead to faulty inferences. A common solution is to include duration dummies, polynomials, or splines to proxy for duration dependence. Because creating these is not easy for the common practitioner, I introduce a new command, mkduration, that is a straightforward way to generate a duration variable for binary cross-sectional time-series data in Stata. mkduration can handle various forms of missing data and allows the duration variable to easily be turned into common parametric and nonparametric approximations.
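mkduration itself is a Stata command; as a hedged sketch of the underlying computation only (hypothetical column names, and one of several reasonable start-of-spell conventions), the following builds a per-unit counter of periods elapsed since the last event, resetting to zero in event periods:

```python
import pandas as pd

def add_duration(df, unit="unit", time="time", event="event"):
    """Within each unit (sorted by time), record the number of periods
    elapsed since the last event; an event period resets the counter."""
    df = df.sort_values([unit, time]).copy()
    durations = []
    for _, g in df.groupby(unit, sort=False):
        d, out = 0, []
        for e in g[event]:
            d = 0 if e == 1 else d + 1
            out.append(d)
        durations.extend(out)
    df["duration"] = durations
    return df

panel = pd.DataFrame({
    "unit":  [1, 1, 1, 1, 2, 2, 2],
    "time":  [1, 2, 3, 4, 1, 2, 3],
    "event": [0, 1, 0, 0, 0, 0, 1],
})
panel = add_duration(panel)
```

The resulting duration column (or dummies, polynomials, or splines derived from it) is what enters the binary-outcome model to absorb duration dependence.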


Water ◽  
2021 ◽  
Vol 13 (14) ◽  
pp. 1944
Author(s):  
Haitham H. Mahmoud ◽  
Wenyan Wu ◽  
Yonghao Wang

This work develops a MATLAB toolbox called WDSchain that can simulate blockchain on water distribution systems (WDS). WDSchain can import data from Excel and from the EPANET water-modelling software, and it extends EPANET to enable blockchain simulation of the hydraulic data at any intended nodes. Using WDSchain strengthens network automation and security in WDS. WDSchain processes time-series data in two simulation modes: (1) static blockchain, which takes a snapshot of one time interval of data across all nodes in the WDS as input and outputs chained blocks one interval at a time, and (2) dynamic blockchain, which takes all simulated time-series data of all nodes as input and establishes chained blocks at simulation time. Five consensus mechanisms are developed in WDSchain to provide data at different security levels: PoW, PoT, PoV, PoA, and PoAuth. Five different sizes of WDS are simulated in WDSchain for performance evaluation. The results show that a trade-off is needed between system complexity and security level for data validation. WDSchain provides a methodology for further exploring data validation in WDS using blockchain. As limitations, WDSchain does not consider the selection of blockchain nodes or broadcasting delay, in contrast to commercial blockchain platforms.
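WDSchain itself is MATLAB; as a language-neutral sketch of the "static blockchain" idea only (one hash-chained block per time interval of node readings, assuming nothing about WDSchain's actual block format or consensus mechanisms):

```python
import hashlib
import json

GENESIS = "0" * 64

def make_block(readings, prev_hash):
    """Chain one snapshot of node readings to the previous block by
    hashing its contents together with the previous block's hash."""
    payload = json.dumps(readings, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"readings": readings, "prev_hash": prev_hash, "hash": block_hash}

def verify(chain):
    """Recompute every hash; any tampered reading breaks the chain."""
    prev = GENESIS
    for b in chain:
        payload = json.dumps(b["readings"], sort_keys=True)
        ok = hashlib.sha256((prev + payload).encode()).hexdigest() == b["hash"]
        if b["prev_hash"] != prev or not ok:
            return False
        prev = b["hash"]
    return True

# one block per time step of (hypothetical) hydraulic data at two nodes
chain, prev = [], GENESIS
for snapshot in [{"n1": 2.1, "n2": 3.4}, {"n1": 2.0, "n2": 3.5}]:
    block = make_block(snapshot, prev)
    chain.append(block)
    prev = block["hash"]
```

Altering any stored reading after the fact invalidates every subsequent hash, which is the tamper-evidence property the toolbox exploits for data validation.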


2021 ◽  
Author(s):  
Sadnan Al Manir ◽  
Justin Niestroy ◽  
Maxwell Adam Levinson ◽  
Timothy Clark

Introduction: Transparency of computation is a requirement for assessing the validity of computed results and research claims based upon them; and it is essential for access to, assessment, and reuse of computational components. These components may be subject to methodological or other challenges over time. While reference to archived software and/or data is increasingly common in publications, a single machine-interpretable, integrative representation of how results were derived, that supports defeasible reasoning, has been absent. Methods: We developed the Evidence Graph Ontology, EVI, in OWL 2, with a set of inference rules, to provide deep representations of supporting and challenging evidence for computations, services, software, data, and results, across arbitrarily deep networks of computations, in connected or fully distinct processes. EVI integrates FAIR practices on data and software with important concepts from provenance models and argumentation theory. It extends PROV for additional expressiveness, with support for defeasible reasoning. EVI treats any computational result or component of evidence as a defeasible assertion, supported by a DAG of the computations, software, data, and agents that produced it. Results: We have successfully deployed EVI for very-large-scale predictive analytics on clinical time-series data. Every result may reference its own evidence graph as metadata, which can be extended when subsequent computations are executed. Discussion: Evidence graphs support transparency and defeasible reasoning on results. They are first-class computational objects, and reference the datasets and software from which they are derived. They support fully transparent computation, with challenge and support propagation. The EVI approach may be extended to include instruments, animal models, and critical experimental reagents.
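A toy sketch of the challenge-propagation idea (a drastic simplification of EVI's defeasible semantics; graph and names are hypothetical): a result remains supported only while every node in its evidence DAG is unchallenged.

```python
def supported(node, graph, challenged):
    """A node is supported iff it is not directly challenged and
    everything it depends on is (recursively) supported."""
    if node in challenged:
        return False
    return all(supported(dep, graph, challenged) for dep in graph.get(node, []))

# toy evidence DAG: result <- analysis <- {dataset, software}
graph = {"result": ["analysis"], "analysis": ["dataset", "software"]}
```

Challenging the dataset node, for example, propagates upward and defeats the result, without any edit to the result node itself.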


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jing Zhao ◽  
Shubo Liu ◽  
Xingxing Xiong ◽  
Zhaohui Cai

Privacy protection is one of the major obstacles to data sharing. Time-series data have the characteristics of autocorrelation, continuity, and large scale. Current research on time-series data publication largely ignores the correlation within time-series data and thus provides insufficient privacy protection. In this paper, we study the problem of correlated time-series data publication and propose a sliding window-based autocorrelation time-series data publication algorithm, called SW-ATS. Instead of using global sensitivity as in traditional differential privacy mechanisms, we propose periodic sensitivity to provide a stronger degree of privacy guarantee. SW-ATS introduces a sliding window mechanism, with the correlation between the noise-added sequence and the original time-series data guaranteed by sequence indistinguishability, to protect the privacy of the latest data. We prove that SW-ATS satisfies ε-differential privacy. Compared with the state-of-the-art algorithm, SW-ATS reduces the MAE error rate by about 25%, improving the utility of the data while providing stronger privacy protection.
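SW-ATS's periodic sensitivity and sliding window are specific to the paper; the generic ε-differential-privacy building block it refines, the Laplace mechanism, can be sketched as follows (synthetic series, illustrative parameter values):

```python
import numpy as np

def laplace_perturb(series, sensitivity, epsilon, rng):
    """epsilon-DP release of a numeric series via the Laplace mechanism:
    i.i.d. noise with scale = sensitivity / epsilon is added per point
    (smaller epsilon -> larger scale -> stronger privacy, lower utility)."""
    scale = sensitivity / epsilon
    return series + rng.laplace(0.0, scale, size=len(series))

rng = np.random.default_rng(42)
series = 10.0 * np.sin(np.linspace(0, 6, 200))   # original trajectory
released = laplace_perturb(series, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The paper's contribution is choosing the sensitivity (periodic rather than global) and restricting noise to a sliding window, so that less noise is needed for the same ε on correlated series.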


2020 ◽  
Vol 2020 (1) ◽  
pp. 98-117
Author(s):  
Jyoti U. Devkota

Abstract Nightfires illuminating the earth's surface are captured by satellite. They are emitted by various sources such as gas flares, biomass burning, volcanoes, and industrial sites such as steel mills. The amount of nightfire in an area is a proxy indicator of fuel consumption and CO2 emission. In this paper the behaviour of radiant heat (RH) data produced by nightfire is analysed in detail over a period of 75 hours; the geographical coordinates of the energy sources generating these values are not considered. Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS DNB) satellite earth-observation nightfire data were used. This time series of RH data (unit W), comprising 28252 observations over 75 hours, spans 2 September 2018 to 6 September 2018. The dynamics of change in the overall behaviour of these data with respect to time, irrespective of geographical occurrence, is studied and presented here. Different statistical methodologies are also used to identify hidden groups and patterns that are not obvious from remote sensing. Underlying groups and clusters are formed using cluster analysis and discriminant analysis. The behaviour of RH over three consecutive days is studied with analysis of variance. Cubic spline interpolation and merging have been used to create a time series sampled at equal one-minute intervals. The time series is decomposed to study the effect of its various components. Its behaviour is also analysed in the frequency domain through study of period, amplitude, and spectrum.
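The resampling step, putting irregular observations onto an equal-interval grid before decomposition, can be sketched generically (the paper uses cubic spline interpolation; plain linear interpolation and made-up observation times here):

```python
import numpy as np

def to_equal_intervals(t_minutes, values, step=1.0):
    """Interpolate an irregularly observed series onto an equally
    spaced time grid, one point per `step` minutes."""
    grid = np.arange(t_minutes[0], t_minutes[-1] + 1e-9, step)
    return grid, np.interp(grid, t_minutes, values)

t = np.array([0.0, 0.7, 2.2, 3.0, 5.5])  # irregular observation times (min)
v = np.array([1.0, 2.0, 1.5, 3.0, 2.5])  # radiant-heat-like readings
grid, resampled = to_equal_intervals(t, v)
```

Once the series is on a regular grid, standard decomposition and spectral tools (period, amplitude, spectrum) apply directly.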


Author(s):  
Josep Escrig Escrig ◽  
Buddhika Hewakandamby ◽  
Georgios Dimitrakis ◽  
Barry Azzopardi

Intermittent gas-liquid two-phase flow was generated in a 6 m long, 67 mm diameter pipe mounted on a rotatable frame (from vertical up to −20°). Air and a 5 mPa s silicone oil at atmospheric pressure were studied. Gas superficial velocities between 0.17 and 2.9 m/s and liquid superficial velocities between 0.023 and 0.47 m/s were employed. These runs were repeated at 7 angles, making a total of 420 runs. Cross-sectional void fraction time series were measured over 60 seconds for each run using a wire mesh sensor and a twin-plane electrical capacitance tomography system. The void fraction time-series data were analysed to extract the average void fraction, structure velocities, and structure frequencies. Results are presented to illustrate how the inclination angle as well as the phase superficial velocities affect the behaviour of intermittent flows. Existing correlations proposed to predict average void fraction and gas-structure velocity and frequency in slug flow have been compared with the new experimental results for all intermittent flows, including slug, cap-bubble, and churn flow. Good agreement is seen for the gas-structure velocity and mean void fraction. On the other hand, no correlation was found to predict the gas-structure frequency well, especially in vertical and inclined pipes.
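One common way to extract a structure frequency from a void-fraction trace (a generic spectral sketch on a synthetic signal, not the authors' analysis pipeline) is to locate the dominant peak of the FFT magnitude spectrum:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Estimate the structure frequency as the location of the largest
    peak in the FFT magnitude spectrum (mean removed to drop DC)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# synthetic void-fraction-like trace: a 2 Hz oscillation around a mean
# level of 0.5, sampled at 100 Hz for 60 s, plus a little noise
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(3)
trace = 0.5 + 0.2 * np.sin(2 * np.pi * 2.0 * t) \
        + 0.01 * rng.standard_normal(len(t))
f_structure = dominant_frequency(trace, fs)
```

The 60 s record length fixes the frequency resolution at 1/60 Hz, which is one reason record duration matters for frequency estimates.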


2000 ◽  
Vol 16 (6) ◽  
pp. 927-997 ◽  
Author(s):  
Hyungsik R. Moon ◽  
Peter C.B. Phillips

Time series data are often well modeled by using the device of an autoregressive root that is local to unity. Unfortunately, the localizing parameter (c) is not consistently estimable using existing time series econometric techniques and the lack of a consistent estimator complicates inference. This paper develops procedures for the estimation of a common localizing parameter using panel data. Pooling information across individuals in a panel aids the identification and estimation of the localizing parameter and leads to consistent estimation in simple panel models. However, in the important case of models with concomitant deterministic trends, it is shown that pooled panel estimators of the localizing parameter are asymptotically biased. Some techniques are developed to overcome this difficulty, and consistent estimators of c in the region c < 0 are developed for panel models with deterministic and stochastic trends. A limit distribution theory is also established, and test statistics are constructed for exploring interesting hypotheses, such as the equivalence of local to unity parameters across subgroups of the population. The methods are applied to the empirically important problem of the efficient extraction of deterministic trends. They are also shown to deliver consistent estimates of distancing parameters in nonstationary panel models where the initial conditions are in the distant past. In the development of the asymptotic theory this paper makes use of both sequential and joint limit approaches. An important limitation in the operation of the joint asymptotics that is sometimes needed in our development is the rate condition n/T → 0. So the results in the paper are likely to be most relevant in panels where T is large and n is moderately large.
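The pooled estimation idea can be sketched by simulation (an illustration under deliberately simple assumptions: zero initial conditions, no intercepts or deterministic trends, the case the abstract calls "simple panel models"): with rho = 1 + c/T, pooling the autoregression across units gives the estimate c_hat = T * (rho_hat - 1).

```python
import numpy as np

def pooled_c_hat(Y):
    """Pool across the n units of an (n, T+1) panel to estimate the
    common localizing parameter c in rho = 1 + c/T."""
    num = (Y[:, :-1] * Y[:, 1:]).sum()   # sum of y_{t-1} * y_t
    den = (Y[:, :-1] ** 2).sum()         # sum of y_{t-1}^2
    T = Y.shape[1] - 1
    return T * (num / den - 1.0)

# simulate a panel of near-unit-root AR(1) series with common c = -5
rng = np.random.default_rng(7)
n, T, c = 200, 100, -5.0
rho = 1.0 + c / T
Y = np.zeros((n, T + 1))
for t in range(1, T + 1):
    Y[:, t] = rho * Y[:, t - 1] + rng.standard_normal(n)
c_hat = pooled_c_hat(Y)
```

A single series cannot pin c down, which is the inconsistency the paper starts from; averaging the numerator and denominator over many independent units is what restores consistency in this trend-free case.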


2018 ◽  
Vol 373 (1758) ◽  
pp. 20170377 ◽  
Author(s):  
Hexuan Liu ◽  
Jimin Kim ◽  
Eli Shlizerman

We propose an approach to represent neuronal network dynamics as a probabilistic graphical model (PGM). To construct the PGM, we collect time series of neuronal responses produced by the neuronal network and use singular value decomposition to obtain a low-dimensional projection of the time-series data. We then extract dominant patterns from the projections to get pairwise dependency information and create a graphical model for the full network. The outcome model is a functional connectome that captures how stimuli propagate through the network and thus represents causal dependencies between neurons and stimuli. We apply our methodology to a model of the Caenorhabditis elegans somatic nervous system to validate and show an example of our approach. The structure and dynamics of the C. elegans nervous system are well studied and a model that generates neuronal responses is available. The resulting PGM enables us to obtain and verify underlying neuronal pathways for known behavioural scenarios and detect possible pathways for novel scenarios. This article is part of a discussion meeting issue ‘Connectome to behaviour: modelling C. elegans at cellular resolution’.
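The first step, a low-dimensional SVD projection of the response matrix, can be sketched as follows (synthetic channels driven by two latent rhythms, not the C. elegans model):

```python
import numpy as np

def low_dim_projection(X, k=2):
    """Project multichannel time series (channels x time) onto its
    top-k left singular vectors: a k x time summary of the dynamics."""
    Xc = X - X.mean(axis=1, keepdims=True)       # centre each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k].T @ Xc

# toy "responses": 5 channels mixing 2 latent rhythms, plus noise
rng = np.random.default_rng(11)
t = np.linspace(0, 10, 400)
latents = np.vstack([np.sin(t), np.cos(2 * t)])  # 2 x time
X = rng.standard_normal((5, 2)) @ latents + 0.05 * rng.standard_normal((5, 400))
proj = low_dim_projection(X, k=2)
```

Pattern extraction and the pairwise-dependency estimation that build the PGM then operate on these projections rather than on the raw high-dimensional responses.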

