RADAR-Base: Open Source Mobile Health Platform for Collecting, Monitoring, and Analyzing Data Using Sensors, Wearables, and Mobile Devices

10.2196/11734
2019
Vol 7 (8)
pp. e11734
Author(s):
Yatharth Ranjan
Zulqarnain Rashid
Callum Stewart
Pauline Conde
Mark Begale
...  

Background With a wide range of use cases in both research and clinical domains, collecting continuous mobile health (mHealth) streaming data from multiple sources in a secure, highly scalable, and extensible platform is of high interest to the open source mHealth community. The European Union Innovative Medicines Initiative Remote Assessment of Disease and Relapse-Central Nervous System (RADAR-CNS) program is an exemplary project with the requirement to support the collection of high-resolution data at scale; as such, the Remote Assessment of Disease and Relapse (RADAR)-base platform is designed to meet these needs and additionally facilitate a new generation of mHealth projects in this nascent field. Objective Wide-bandwidth networks, smartphone penetration, and wearable sensors offer new possibilities for collecting near-real-time, high-resolution datasets from large numbers of participants. The aim of this study was to build a platform that would cater for large-scale data collection for remote monitoring initiatives. The key criteria are scalability, extensibility, security, and privacy. Methods RADAR-base is developed as a modular application; the backend is built on a backbone of the highly successful Confluent/Apache Kafka framework for streaming data. To facilitate scaling and ease of deployment, we use Docker containers to package the components of the platform. RADAR-base provides 2 main mobile apps for data collection, a Passive App and an Active App. Other third-party apps and sensors are easily integrated into the platform. Management user interfaces to support data collection and enrolment are also provided. Results General principles of the platform components and the design of RADAR-base are presented here, with examples of the types of data currently being collected from devices used in RADAR-CNS projects: the Multiple Sclerosis, Epilepsy, and Depression cohorts. Conclusions RADAR-base is a fully functional remote data collection platform built around Confluent/Apache Kafka that provides off-the-shelf components for projects interested in collecting mHealth datasets at scale.
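As an illustration of the streaming backbone described above, here is a minimal sketch of publishing one wearable sample to a Kafka topic with the confluent-kafka Python client. The topic name, identifiers, and JSON payload are hypothetical placeholders; RADAR-base itself registers Avro schemas for each device stream via the Confluent schema registry rather than sending raw JSON.

```python
# Minimal sketch: publish one wearable heart-rate sample to a Kafka topic
# using the confluent-kafka Python client. Topic name and payload fields
# are hypothetical, not RADAR-base's actual schema.
import json
import time

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

sample = {
    "projectId": "radar-cns-demo",   # hypothetical identifiers
    "userId": "participant-001",
    "time": time.time(),             # epoch seconds
    "heartRate": 72.0,
}

def on_delivery(err, msg):
    # Kafka acknowledges each record asynchronously.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}]")

producer.produce(
    "android_heart_rate",            # hypothetical topic name
    key=sample["userId"],
    value=json.dumps(sample),
    callback=on_delivery,
)
producer.flush()                     # block until the record is acknowledged
```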


Author(s):  
Sacha J. van Albada
Jari Pronold
Alexander van Meegen
Markus Diesmann

Abstract We are entering an age of ‘big’ computational neuroscience, in which neural network models are increasing in size and in the number of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other’s work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey to their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as its simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of an ICT infrastructure for neuroscience.
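The executable specification sits on top of PyNEST; the sketch below shows only the generic Create/Connect/Simulate pattern that such specifications build on, with arbitrary toy parameters rather than the published model's values.

```python
# Generic PyNEST workflow sketch (Create -> Connect -> Simulate).
# Population size, rates, and weights are arbitrary toy values.
import nest

nest.ResetKernel()

# A small population of leaky integrate-and-fire neurons.
neurons = nest.Create("iaf_psc_alpha", 100)

# External Poisson drive and a recorder for the output spikes.
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_recorder")   # named "spike_detector" in NEST 2.x

# Random recurrent connectivity: every neuron receives 10 inputs.
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 20.0, "delay": 1.5})
nest.Connect(noise, neurons, syn_spec={"weight": 30.0})
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)                      # one second of biological time
print(nest.GetStatus(recorder, "n_events")[0], "spikes recorded")
```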


2016
Vol 144 (4)
pp. 1407-1421
Author(s):  
Michael L. Waite

Abstract Many high-resolution atmospheric models can reproduce the qualitative shape of the atmospheric kinetic energy spectrum, which has a power-law slope of −3 at large horizontal scales that shallows to approximately −5/3 in the mesoscale. This paper investigates the possible dependence of model energy spectra on the vertical grid resolution. Idealized simulations forced by relaxation to a baroclinically unstable jet are performed for a wide range of vertical grid spacings Δz. Energy spectra are converged for Δz ≲ 200 m but are very sensitive to resolution for 500 m ≤ Δz ≤ 2 km. The nature of this sensitivity depends on the vertical mixing scheme. With no vertical mixing or with weak, stability-dependent mixing, the mesoscale spectra are artificially amplified by low resolution: they are shallower and extend to larger scales than in the converged simulations. By contrast, vertical hyperviscosity with a fixed grid-scale damping rate has the opposite effect: underresolved spectra are spuriously steepened. High-resolution spectra are converged except in the stability-dependent mixing case, where they are damped by excessive mixing due to enhanced shear over a wide range of horizontal scales. It is shown that converged spectra require resolution of all vertical scales associated with the resolved horizontal structures: these include quasigeostrophic scales for large-scale motions with small Rossby number and the buoyancy scale for small-scale motions at large Rossby number. It is speculated that some model energy spectra may be contaminated by low vertical resolution, and it is recommended that vertical-resolution sensitivity tests always be performed.
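For readers unfamiliar with the diagnostic, the sketch below synthesizes a 1D signal with a prescribed k^(−5/3) energy spectrum and recovers the slope with a log-log fit; this mirrors how spectral slopes such as the −3 and −5/3 ranges above are measured from gridded model winds. It is an illustration on synthetic data, not the author's code.

```python
# Synthesize a signal with a prescribed k^(-5/3) kinetic-energy spectrum,
# then diagnose the spectrum via FFT and fit its log-log slope.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
k = np.fft.rfftfreq(n, d=1.0)            # wavenumbers for unit grid spacing

# Build Fourier amplitudes so that E(k) = 0.5|u_k|^2 ~ k^(-5/3).
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)          # |u_k| ~ k^(-5/6)  =>  E ~ k^(-5/3)
phases = np.exp(2j * np.pi * rng.random(k.size))
u = np.fft.irfft(amp * phases, n=n)      # synthetic "wind" field

# Diagnose the spectrum from the field and fit over an interior band,
# away from the largest scales and the Nyquist wavenumber.
E = 0.5 * np.abs(np.fft.rfft(u)) ** 2
band = (k > 1e-3) & (k < 1e-1)
slope, _ = np.polyfit(np.log(k[band]), np.log(E[band]), 1)
print(f"fitted spectral slope ~ {slope:.2f}")   # recovers -5/3 by construction
```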


Author(s):  
Hammad Mazhar

This paper describes an open source parallel simulation framework capable of simulating large-scale granular and multi-body dynamics problems. This framework, called Chrono::Parallel, builds upon the modeling capabilities of Chrono::Engine, another open source simulation package, and leverages parallel data structures to enable scalable simulation of large problems. Chrono::Parallel is distinctive in that it was designed from the ground up around parallel data structures and algorithms, so that it scales across a wide range of computer architectures while retaining a rich modeling capability for many different types of problems. The modeling capabilities of Chrono::Parallel are demonstrated in the context of additive manufacturing and 3D printing by modeling the Selective Laser Sintering layering process and by simulating large, complex interlocking structures that require compression and folding to fit into a 3D printer’s build volume.
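To give a flavor of the problem class, here is a pure-NumPy penalty-contact discrete-element step for spheres settling under gravity. It deliberately does not use Chrono's own API (C++ with Python bindings); the stiffness, damping, time step, and particle count are arbitrary illustration values.

```python
# Toy penalty-contact discrete-element settling of spheres under gravity.
# Illustrative only; not Chrono::Parallel's API or algorithms.
import numpy as np

n, radius, dt, g = 50, 0.05, 1e-4, 9.81
k_contact = 1e4                            # penalty contact stiffness (unit mass)
rng = np.random.default_rng(1)
pos = rng.uniform(0.2, 1.0, size=(n, 3))   # particles start above a floor at z=0
vel = np.zeros((n, 3))

for step in range(5000):
    force = np.zeros((n, 3))
    force[:, 2] -= g                       # gravity

    # Sphere-sphere contacts: linear repulsive spring on overlap.
    diff = pos[:, None, :] - pos[None, :, :]        # vectors r_i - r_j
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                  # ignore self-contact
    overlap = np.maximum(2 * radius - dist, 0.0)
    normal = diff / dist[..., None]
    force += (k_contact * overlap[..., None] * normal).sum(axis=1)

    # Floor contact: spring pushing particles back above z = radius.
    force[:, 2] += k_contact * np.maximum(radius - pos[:, 2], 0.0)

    vel = (vel + dt * force) * 0.999       # semi-implicit Euler + crude damping
    pos += dt * vel

print("mean particle height after settling:", pos[:, 2].mean())
```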


2019
Author(s):
Franklin D. Wolfe
Timothy A. Stahl
Pilar Villamor
Biljana Lukovic

Abstract. Here, we introduce an open source, semi-automated, Python-based graphical user interface (GUI) called the Monte Carlo Slip Statistics Toolkit (MCSST) for estimating dip slip on individual faults or bulk fault datasets. Using this toolkit, profiles are defined across fault scarps in high-resolution digital elevation models (DEMs), and the relevant fault scarp components (e.g., footwall, hanging wall, and scarp) are then interactively identified. Displacement statistics are calculated automatically using Monte Carlo simulation and can be conveniently visualized in Geographic Information Systems (GIS) for spatial analysis. Fault slip rates can also be calculated when the ages of the footwall and hanging wall surfaces are known, allowing for temporal analysis. This method allows tens to hundreds of faults to be analyzed in rapid succession within GIS and a Python coding environment. It may contribute to a wide range of regional and local earthquake geology studies with adequate high-resolution DEM coverage, including regional fault source characterization for seismic hazard and the estimation of geologic slip and strain rates, such as in the creation of long-term deformation maps. ArcGIS versions of these functions are available, as well as versions that use the free, open source Quantum GIS (QGIS) and Jupyter Notebook Python software.
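The core Monte Carlo step is simple to sketch: sample the scarp's vertical separation and the fault dip from their uncertainty distributions and convert each draw to dip slip. The distributions and values below are hypothetical placeholders, not MCSST's own defaults.

```python
# Monte Carlo propagation of scarp-measurement uncertainty into dip slip.
# All values are hypothetical illustration inputs.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Vertical separation across the scarp from a DEM profile (metres, 1-sigma).
v_sep = rng.normal(loc=2.5, scale=0.3, size=n)

# Fault dip is often poorly constrained; draw it uniformly between bounds.
dip = np.radians(rng.uniform(45.0, 75.0, size=n))

dip_slip = v_sep / np.sin(dip)          # dip slip implied by each draw

lo, med, hi = np.percentile(dip_slip, [16, 50, 84])
print(f"dip slip = {med:.2f} m (+{hi - med:.2f} / -{med - lo:.2f})")

# With dated offset surfaces, a slip rate follows from the same samples.
age = rng.normal(12_000.0, 1_000.0, size=n)   # surface age in years
rate_mm_yr = dip_slip / age * 1_000.0
print(f"slip rate ~ {np.median(rate_mm_yr):.2f} mm/yr")
```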


DNA Research
2020
Vol 27 (3)
Author(s):
Rei Kajitani
Dai Yoshimura
Yoshitoshi Ogura
Yasuhiro Gotoh
Tetsuya Hayashi
...  

Abstract De novo assembly of short DNA reads remains an essential technology, especially for large-scale projects and high-resolution variant analyses in epidemiology. However, existing tools often lack the accuracy required to compare closely related strains. To facilitate such studies on bacterial genomes, we developed Platanus_B, a de novo assembler that employs iterations of multiple error-removal algorithms. The benchmarks demonstrated the superior accuracy and high contiguity of Platanus_B, in addition to its ability to enhance the hybrid assembly of both short and nanopore long reads. Although the hybrid strategies for short and long reads were effective in achieving near full-length genomes, we found that short-read-only assemblies generated with Platanus_B were sufficient to obtain ≥90% of exact coding sequences in most cases. In addition, while nanopore long-read-only assemblies lacked fine-scale accuracy, the inclusion of short reads was effective in improving it. Platanus_B can, therefore, be used for comprehensive genomic surveillance of bacterial pathogens and high-resolution phylogenomic analyses of a wide range of bacteria.


2008
Vol 5 (1)
pp. 73-94
Author(s):
A. Leip
G. Marchi
R. Koeble
M. Kempen
W. Britz
...  

Abstract. A comprehensive assessment of policy impact on greenhouse gas (GHG) emissions from agricultural soils requires careful consideration of both socio-economic aspects and the environmental heterogeneity of the landscape. We developed a modelling framework that links the large-scale economic model for agriculture CAPRI (Common Agricultural Policy Regional Impact assessment) with the biogeochemistry model DNDC (DeNitrification DeComposition) to simulate GHG fluxes, carbon stock changes and the nitrogen budget of agricultural soils in Europe. The framework allows the ex-ante simulation of agricultural or agri-environmental policy impacts on a wide range of environmental problems such as climate change (GHG emissions), air pollution and groundwater pollution. Those environmental impacts can be analyzed in the context of economic and social indicators as calculated by the economic model. The methodology consists of four steps: (i) definition of appropriate calculation units that can be considered as homogeneous in terms of economic behaviour and environmental response; (ii) downscaling of regional agricultural statistics and farm management information from a CAPRI simulation run into the spatial calculation units; (iii) designing environmental model scenarios and model runs; and finally (iv) aggregating results for interpretation. We show the first results of the nitrogen budget in croplands in fourteen countries of the European Union and discuss possibilities to improve the detailed assessment of nitrogen and carbon fluxes from European arable soils.
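Step (ii), the downscaling, can be illustrated with a toy proportional allocation of a regional total onto homogeneous spatial units; all numbers below are invented and the unit names are hypothetical.

```python
# Toy illustration of step (ii): downscale a regional total (e.g. nitrogen
# fertilizer use from a CAPRI region) onto homogeneous spatial calculation
# units in proportion to their cropland area. All numbers are invented.
regional_n_total = 1_200.0           # tonnes N applied in the region

# Hypothetical homogeneous spatial units with their cropland areas (ha).
units = {"HSU-01": 5_000.0, "HSU-02": 12_000.0, "HSU-03": 3_000.0}

total_area = sum(units.values())
downscaled = {u: regional_n_total * area / total_area
              for u, area in units.items()}

for u, n_tonnes in downscaled.items():
    print(f"{u}: {n_tonnes:7.1f} t N  "
          f"({n_tonnes * 1000 / units[u]:.1f} kg N/ha)")

# Consistency check: the allocation must preserve the regional total.
assert abs(sum(downscaled.values()) - regional_n_total) < 1e-9
```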


2021
Vol 9
Author(s):
Michael Marks
Sham Lal
Hannah Brindle
Pierre-Stéphane Gsell
Matthew MacGregor
...  

Background: ODK provides software and standards that are popular solutions for off-grid electronic data collection and has substantial code overlap and interoperability with a number of related software products, including CommCare, Enketo, Ona, SurveyCTO, and KoBoToolbox. These tools provide open-source options for off-grid public health data collection, management, analysis, and reporting. During the 2018–2020 Ebola epidemic in the North Kivu and Ituri regions of the Democratic Republic of the Congo, we used these tools to support the DRC Ministère de la Santé and the World Health Organization in their efforts to administer an experimental vaccine (VSV-Zebov-GP) as part of their strategy to control the transmission of infection. Method: New functions were developed to facilitate the use of ODK, Enketo, and R in large-scale data collection, aggregation, monitoring, and near-real-time analysis during clinical research in health emergencies. We present enhancements to ODK that include a built-in audit trail, a framework and companion app for biometric registration of ISO/IEC 19794-2 fingerprint templates, enhanced performance features, better scalability for studies featuring millions of form submissions, increased options for parallelization of research projects, and pipelines for automated management and analysis of data. We also developed novel encryption protocols for enhanced web-form security in Enketo. Results: Against the backdrop of a complex and challenging epidemic response, our enhanced platform of open tools was used to collect and manage data from more than 280,000 eligible study participants who received VSV-Zebov-GP under informed consent. These data were used to determine whether VSV-Zebov-GP was safe and effective and to guide daily field operations. Conclusions: We present open-source developments that make electronic data management during clinical research and health emergencies more viable and robust. These developments will also enhance and expand the functionality of a diverse range of data collection platforms that are based on ODK software and standards.
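A minimal sketch of the near-real-time monitoring pattern described above: pull form submissions over HTTPS and aggregate daily counts. The server URL, token, endpoint path, and JSON field names are hypothetical placeholders; the real endpoints are documented in the ODK Central API.

```python
# Sketch: pull form submissions and count them per day to spot gaps in
# field data collection. URL, token, path, and field names are
# hypothetical; see the ODK Central API docs for the real endpoints.
from collections import Counter

import requests

BASE = "https://central.example.org/v1"          # hypothetical server
TOKEN = "replace-with-session-token"             # hypothetical credential

resp = requests.get(
    f"{BASE}/projects/1/forms/vaccination/submissions",  # hypothetical path
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Assume a JSON array of submission records with a "createdAt" timestamp.
daily = Counter(row["createdAt"][:10] for row in resp.json())
for day in sorted(daily):
    print(day, daily[day])
```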


Author(s):  
Mary Kay Gugerty
Dean Karlan

Deworm the World serves millions of school children every year. Monitoring on such a large scale can amplify the difficulty of developing a right-fit system: How can an organization ensure credible data collection across a wide range of sites and prioritize actionable information that informs implementation? How can such a large-scale system rapidly respond to issues once identified? This case illustrates the challenge of finding credible and actionable activity tracking measures. How does Deworm the World apply the credible, actionable, and responsible principles to determine the right amount of data to collect and the right time and place at which to collect it?


1987
Vol 117
pp. 287-287
Author(s):  
Michael J. West ◽  
Avishai Dekel ◽  
Augustus Oemler

We have studied the properties of rich clusters of galaxies in various cosmological scenarios by comparing high resolution N-body simulations with observations of Abell clusters. The clusters have been simulated in two steps. First, protoclusters are identified in large-scale simulations which represent a wide range of cosmological scenarios (hierarchical clustering, pancake scenarios, and hybrids of the two, spanning a range of power spectra). Then the region around each protocluster is simulated with high resolution, the particles representing L* galaxies. The protoclusters have no spatial symmetry built into them initially. The final clusters are still dynamically young, and of moderate densities, which should be representative of Abell clusters of richness classes 1 and 2.
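The resimulation step rests on direct gravitational N-body integration; below is a minimal kick-drift-kick (leapfrog) sketch with softened gravity. Units, softening, and particle count are arbitrary, and real runs add a cosmological background expansion and far more particles.

```python
# Minimal leapfrog (kick-drift-kick) N-body integrator with softened,
# direct-sum gravity. Illustrative toy parameters throughout.
import numpy as np

def accelerations(pos, mass, eps=0.05):
    """Direct-sum softened gravity (G = 1), O(N^2)."""
    diff = pos[None, :, :] - pos[:, None, :]      # vectors r_j - r_i
    inv_d3 = ((diff ** 2).sum(-1) + eps ** 2) ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                 # no self-force
    return (diff * (mass[None, :, None] * inv_d3[..., None])).sum(axis=1)

rng = np.random.default_rng(7)
n, dt = 200, 1e-3
pos = rng.standard_normal((n, 3))                 # initial positions
vel = 0.1 * rng.standard_normal((n, 3))           # small initial velocities
mass = np.full(n, 1.0 / n)                        # equal-mass particles

acc = accelerations(pos, mass)
for step in range(1000):
    vel += 0.5 * dt * acc                         # half kick
    pos += dt * vel                               # drift
    acc = accelerations(pos, mass)
    vel += 0.5 * dt * acc                         # half kick

r = np.linalg.norm(pos - pos.mean(axis=0), axis=1)
print("median radius after evolution:", np.median(r))
```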

