New approach to the statistical analysis of cardiovascular data

2005 ◽  
Vol 98 (6) ◽  
pp. 2298-2303 ◽  
Author(s):  
Michele R. Norton ◽  
Richard P. Sloan ◽  
Emilia Bagiella

Fourier-based approaches to the analysis of variability of R-R intervals or blood pressure typically compute power in a given frequency band (e.g., 0.01–0.07 Hz) by aggregating the power at each constituent frequency within that band. This paper describes a new approach to the analysis of these data. We propose to partition the blood pressure variability spectrum into narrower components by computing power in 0.01-Hz-wide bands. Therefore, instead of a single measure of variability over a specific frequency interval, we obtain several measurements. This approach generates a more complex data structure that requires careful accounting of the nested repeated measures. We briefly describe a statistical methodology based on generalized estimating equations that suitably handles this more complex data structure. To illustrate the methods, we consider systolic blood pressure data collected during psychological and orthostatic challenge. We compare the results with those obtained using conventional methods for computing blood pressure variability, and we show that our approach yields more efficient estimates and more powerful statistical tests. We conclude that this approach may allow a more thorough analysis of cardiovascular parameters measured under different experimental conditions, such as blood pressure or heart rate variability.
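The band-partitioning step can be sketched as follows. This is a hypothetical helper in Python; the function name, sampling rate, and periodogram normalization are our assumptions, not the authors' code:

```python
import numpy as np

def band_powers(x, fs, band=(0.01, 0.07), width=0.01):
    """Sum periodogram power in `width`-Hz sub-bands of `band`.

    Hypothetical helper illustrating the paper's partitioning idea;
    not the authors' implementation.
    """
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2 / (fs * n)  # periodogram
    edges = np.arange(band[0], band[1] + width / 2, width)
    powers = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(psd[mask].sum() * fs / n)  # integrate over the sub-band
    return np.array(powers)

fs = 4.0                            # beat series resampled at 4 Hz (assumed)
t = np.arange(0, 300, 1 / fs)       # 5 minutes of data
x = np.sin(2 * np.pi * 0.045 * t)   # oscillation inside the 0.01-0.07 Hz band
p = band_powers(x, fs)
print(len(p))  # 6 sub-bands, one power value per 0.01-Hz slice
```

A conventional analysis would report the single aggregate `p.sum()` for the whole band; the proposed approach instead keeps the vector `p` and models its elements jointly (e.g., with generalized estimating equations).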

2007 ◽  
Vol 10 (1) ◽  
Author(s):  
Jorge Villalobos ◽  
Danilo Pérez ◽  
Juan Castro ◽  
Camilo Jiménez

In a computer science curriculum, the data structures course is considered fundamental. In that course, students must develop the ability to design the data structures best suited to a given problem and to write an efficient algorithm to solve it. They must also understand that there are different types of data structures, each with associated algorithms of differing complexity. A data structures laboratory is a set of computational tools that helps students experiment with the concepts introduced in the course. The main objective of this experimentation is to develop the abilities students need to manipulate complex data structures. This paper presents the main characteristics of the laboratory built to support the course, and we illustrate the broad possibilities of the tool with an example.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yerim Kim ◽  
Jae-Sung Lim ◽  
Mi Sun Oh ◽  
Kyung-Ho Yu ◽  
Ji Sung Lee ◽  
...  

Blood pressure variability (BPV) is associated with higher cardiovascular morbidity risk; however, its association with cognitive decline remains unclear. We investigated whether higher BPV is associated with faster declines in cognitive function in ischemic stroke (IS) patients. Cognitive function was evaluated between April 2010 and August 2015 using the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment in 1,240 Korean PICASSO participants. Patients for whom baseline and follow-up cognitive test results and at least five valid BP readings were available were included. A restricted maximum likelihood-based Mixed Model for Repeated Measures was used to compare changes in cognitive function over time. A total of 746 participants were included (64.6 ± 10.8 years; 35.9% female). The baseline mean MMSE score was 24.9 ± 4.7, and the median number of BP readings was 11. During a mean follow-up of 2.6 years, mean baseline and last follow-up MMSE scores were 25.4 ± 4.8 vs. 27.8 ± 4.4 in the lowest BPV group and 23.9 ± 5.2 vs. 23.2 ± 5.9 in the highest BPV group. After adjusting for multiple variables, higher BPV was independently associated with faster cognitive decline over time. However, no significant intergroup difference in cognitive changes associated with mean systolic BP was observed. Further research is needed to elucidate how BPV might affect cognitive function.
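The study derived BPV from each patient's repeated BP readings (at least five per patient). The abstract does not name the exact index, so the sketch below computes two common visit-to-visit summaries, the standard deviation and the coefficient of variation; the readings shown are invented for illustration:

```python
import numpy as np

def bp_variability(readings):
    """Common visit-to-visit BPV summaries; the exact index used in
    the study is not stated in the abstract."""
    r = np.asarray(readings, dtype=float)
    mean = r.mean()
    sd = r.std(ddof=1)       # sample standard deviation across visits
    cv = 100.0 * sd / mean   # coefficient of variation, in percent
    return mean, sd, cv

# Hypothetical systolic readings (mmHg) for one patient, >= 5 visits
mean, sd, cv = bp_variability([142, 128, 135, 150, 131, 126])
print(round(sd, 1), round(cv, 1))  # higher values indicate higher BPV
```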


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0253926
Author(s):  
Xiang Zhang ◽  
Taolin Yuan ◽  
Jaap Keijer ◽  
Vincent C. J. de Boer

Background: Mitochondrial dysfunction is involved in many complex diseases. Efficient and accurate evaluation of mitochondrial functionality is crucial for understanding pathology as well as for facilitating novel therapeutic development. The Seahorse extracellular flux (XF) analyzer is a popular platform for measuring the mitochondrial oxygen consumption rate (OCR) in living cells. A hidden feature of Seahorse XF OCR data is its complex structure, caused by nesting and crossing between measurement cycles, wells, and plates. Surprisingly, statistical analysis of Seahorse XF data has not received sufficient attention, and current methods completely ignore this complex structure, impairing the robustness of statistical inference. Results: To rigorously incorporate the complex structure into the analysis, we developed a Bayesian hierarchical modeling framework, OCRbayes, and demonstrated its applicability on published data sets. Conclusions: OCRbayes can analyze Seahorse XF OCR data derived from either single or multiple plates, and it has the potential to be used for diagnosing patients with mitochondrial diseases.
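The nesting the authors describe (measurement cycles within wells within plates) can be made concrete with a small simulation. The sketch below is a generic variance-components illustration in plain NumPy, not OCRbayes itself; all layout sizes and variance values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Seahorse-like layout: plates > wells > measurement cycles
n_plates, n_wells, n_cycles = 3, 8, 4
plate_eff = rng.normal(0, 5, n_plates)              # between-plate variation
well_eff = rng.normal(0, 3, (n_plates, n_wells))    # between-well variation
noise = rng.normal(0, 1, (n_plates, n_wells, n_cycles))  # cycle-level noise
ocr = 100 + plate_eff[:, None, None] + well_eff[:, :, None] + noise

# Naive analysis: treat all cycle-level values as independent replicates
naive_sem = ocr.std(ddof=1) / np.sqrt(ocr.size)

# Nested-aware analysis: average within wells and plates, then summarize
# over plate means, so plate-to-plate variation is not hidden
plate_means = ocr.mean(axis=(1, 2))
nested_sem = plate_means.std(ddof=1) / np.sqrt(n_plates)

print(round(naive_sem, 3), round(nested_sem, 3))
```

Treating every cycle as independent typically understates uncertainty when plate and well effects are large, which is the robustness problem the abstract attributes to current methods; OCRbayes addresses it with a full Bayesian hierarchical model rather than this crude averaging.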


2021 ◽  
pp. 25-30
Author(s):  
Khalid Alnajjar ◽  
Mika Hämäläinen

Every NLP researcher has to work with various XML- or JSON-encoded files. This often involves writing code that serves a very specific purpose. Corpona is meant to streamline any workflow that involves XML- and JSON-based corpora by offering easy, reusable functionalities. The current functionalities cover easy parsing of and access to XML files, easy access to sub-items in a nested JSON structure, and visualization of a complex data structure. Corpona is fully open source and is available on GitHub and Zenodo.
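The kind of nested-JSON access Corpona streamlines can be illustrated in plain Python. Note that this is not Corpona's actual API; the helper and the sample document are invented for illustration:

```python
import json

def get_nested(obj, path, default=None):
    """Fetch a value from nested dicts/lists via a dotted path.

    Plain-Python illustration of the task Corpona streamlines;
    this is NOT Corpona's API.
    """
    cur = obj
    for key in path.split("."):
        if isinstance(cur, dict) and key in cur:
            cur = cur[key]
        elif isinstance(cur, list) and key.isdigit() and int(key) < len(cur):
            cur = cur[int(key)]
        else:
            return default
    return cur

doc = json.loads('{"corpus": {"documents": [{"title": "A", "tokens": ["a", "b"]}]}}')
print(get_nested(doc, "corpus.documents.0.title"))  # A
```

Without such a helper, each lookup becomes a chain of indexing and `KeyError`/`IndexError` handling; a dotted-path accessor with a default value keeps corpus-traversal code short and tolerant of missing fields.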

