Facilitating User Participation in Digital Health Research: The mHealth Impact Registry

Iproceedings ◽  
10.2196/15157 ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. e15157
Author(s):  
Kelsey Lynett Ford ◽  
Sheana Bull ◽  
Susan L Moore ◽  
Charlene Barrientos Ortiz

Background The proliferation of technology galvanizes providers, researchers, and entrepreneurs to revolutionize health care and care delivery for diverse audiences. Digital health holds promise for improving health outcomes; however, the pace of technological change requires rapid research to remain relevant in the marketplace. User experience (UX) research provides critical information about patient/client preferences, while rigorous research trials demonstrate digital health efficacy. Despite the need for such research, the recruitment and enrollment process for digital health research remains time consuming and expensive, particularly when engaging underrepresented populations. Developed at the Colorado School of Public Health, the mHealth Impact Registry is a newly launched platform designed for rapid and responsive recruitment of participants for digital health research studies. While the use of registries in research is well established, their application in digital health research remains limited. Objective This poster illustrates the development and testing of the mHealth Impact Registry’s Web-based platform, health status survey, mobile app, and participant database to reach underrepresented populations in digital health research. Methods Formative methods used a user-centered approach to document user preferences for Registry design, followed by iterative testing to ensure usability and navigability. Results End-user feedback was captured from multiple stakeholder groups (ie, the Patient and Family Research Advisory Panel and the mHealth Community Advisory Board) to refine the recruitment strategy (ie, letters and video development). A health status survey was developed in both English and Spanish using online survey software (ie, Qualtrics) that feeds the back-end database. A detailed requirements document outlined technical and functional requirements for the mobile app (ie, iOS and Android) and the Web-based platform (ie, WordPress and Amazon Web Services). Conclusions Given the need for rapid, rigorous, and inclusive research in digital health, a registry containing a pool of diverse participants would not only accelerate the recruitment and enrollment process but would also help to improve the reach and engagement of digital health solutions for underrepresented populations. The mHealth Impact Registry would house diverse participants, supporting quick enrollment and active participation in studies for which they are eligible. Improving the accessibility of participants and the speed of enrollment holds promise for ensuring digital health solutions are relevant upon dissemination and commercialization.
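The architecture described in the Methods (a bilingual Qualtrics health status survey feeding a back-end participant database, with a WordPress/Amazon Web Services Web platform and iOS/Android apps) implies a simple participant data model. The sketch below is a hypothetical illustration of such a registry record and an eligibility query; the field names and matching logic are assumptions, not the Registry's actual schema.

```python
# Hypothetical sketch of a registry participant record and an eligibility query.
# Field names and matching logic are assumptions, not the mHealth Impact Registry schema.
from dataclasses import dataclass, field

@dataclass
class Participant:
    participant_id: str
    preferred_language: str            # eg "en" or "es", matching the bilingual survey
    age: int
    survey_responses: dict = field(default_factory=dict)  # health status survey answers

def eligible_participants(registry, min_age, required_conditions):
    """Return participants whose survey responses match a study's criteria."""
    return [
        p for p in registry
        if p.age >= min_age
        and all(p.survey_responses.get(c) for c in required_conditions)
    ]

# Example: adults (18+) who report diabetes in the health status survey
# pool = eligible_participants(registry, min_age=18, required_conditions=["diabetes"])
```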

2017 ◽  
Vol 1 (4) ◽  
pp. 240-245
Author(s):  
Aalap Doshi ◽  
Lisa Connally ◽  
Meghan Spiroff ◽  
Anita Johnson ◽  
George A. Mashour

Introduction UMHealthResearch is the University of Michigan’s digital health research recruitment platform. It allows health researchers to connect efficiently with potentially eligible volunteers. Methods In 2013, the UMHealthResearch team strategically adapted a consumer behavior model, the buying funnel, to create the Digital Health Research Participation Funnel. The Digital Health Research Participation Funnel was then used to design a more active way for potential participants to volunteer for research studies through UMHealthResearch. Results In the 5 years before the redesign (2007–2012), an average of 1844 new accounts were created every year, whereas in the completed years after the redesign (2013–2016) the annual average improved to 3906, an increase of 111%. Conclusion Although a randomized design was not possible in this instance, these preintervention and postintervention data suggest that the focus on user experience is an effective strategy for improving web-based research recruitment platforms.
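As a quick check, the percentage increase implied by the rounded annual averages reported in the abstract can be computed directly; the small difference from the reported 111% presumably reflects rounding of the underlying averages.

```python
# Quick arithmetic check using the rounded annual averages reported in the abstract.
before = 1844   # mean new accounts per year, 2007-2012 (pre-redesign)
after = 3906    # mean new accounts per year, 2013-2016 (post-redesign)
increase = (after - before) / before * 100
print(f"{increase:.1f}% increase")  # ~111.8% with these rounded figures, consistent with the reported 111%
```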


10.2196/18781 ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. e18781 ◽  
Author(s):  
Anita Lawitschka ◽  
Stephanie Buehrer ◽  
Dorothea Bauer ◽  
Konrad Peters ◽  
Marisa Silbernagl ◽  
...  

Background A growing number of cancer and hematopoietic stem cell transplant (HSCT) survivors require long-term follow-up with optimal communication schemes, and patients' compliance is crucial. Adolescents have various unmet needs. Regarding self-report of symptoms and health status, users of mobile apps have shown enhanced compliance. Currently, HSCT aftercare at the HSCT outpatient clinic of the St. Anna Children’s Hospital in Vienna, Austria, is based on handwritten diaries, which carry various disadvantages. Recently, we developed the prototype of a web-based, self-monitoring gamified mobile app tailored for adolescents: the INTERACCT (Integrating Entertainment and Reaction Assessment into Child Cancer Therapy) app. Objective This observational, prospective study evaluated the usability of the INTERACCT app for tracking real-time self-reported symptoms and health status data in adolescent HSCT patients and a healthy matched control group. The primary outcome of the study was the quality of the self-reported medical information. We hypothesized that the mobile app would provide clinicians with better medical information than the handwritten diaries. Methods Health data were reported via paper diary and mobile app for 5 consecutive days each. The quality of medical information was rated independently on a 5-point scale by two blinded HSCT clinicians, and the duration of use was evaluated. A total of 52 participant questionnaires were assessed for gaming patterns and device preferences, self-efficacy, users’ satisfaction, acceptability, and suggestions for improvement of the mobile app. Interrater reliability was calculated with the intraclass correlation coefficient, based on a two-way mixed model; one-way repeated-measures analysis of variance and t tests were conducted post hoc. Descriptive methods were used for correlation with participants’ demographics. For users’ satisfaction and acceptability of the mobile app, the median and the IQR were calculated. Results Data from 42 participants—15 patients and 27 healthy students—with comparable demographics were evaluated. The results of our study indicated that the quality of self-reported medical data was higher in the INTERACCT app than in the traditional paper-and-pencil assessment (mobile app: 4.14 points vs paper-based diary: 3.77 points, P=.02). The mobile app outperformed paper-and-pencil assessment mainly among the patients, in particular among patients with treatment-associated complications (mobile app: 4.43 points vs paper-based diary: 3.73 points, P=.01). The mobile app was used significantly longer by older adolescents (≥14 years: 4.57 days vs ≤13 years: 3.14 days, P=.03) and by females (4.76 days for females vs 2.95 days for males, P=.004). This corresponds with a longer duration of use among impaired patients with comorbidities. User satisfaction and acceptability ratings for the mobile app were high across all groups, but adherence to entering a large amount of data decreased over time. Based on our results, we developed a case vignette of the target group. Conclusions Our study was the first to show that the quality of patient-reported medical information submitted via the INTERACCT app embedded in a serious game is superior to that submitted via a handwritten diary. In light of these results, a refinement of the mobile app supported by a machine learning approach is planned within an international research project.
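The analyses named above (a two-way mixed-model intraclass correlation for the two clinicians' ratings and post hoc paired comparisons of app vs diary quality scores) can be sketched as follows. This is a minimal illustration, not the authors' code: the file name, column names, long-format layout, and the choice of ICC3 (two-way mixed, single rater, consistency) are assumptions.

```python
# Sketch of the reported analyses: two-way mixed-model ICC for the two clinicians'
# quality ratings, plus a paired t test comparing app vs paper scores per participant.
# File name, column names, and data layout are hypothetical.
import pandas as pd
import pingouin as pg
from scipy import stats

ratings = pd.read_csv("quality_ratings.csv")  # columns: participant, rater, medium, score

# Each rated item is one participant-medium combination, scored by both clinicians.
ratings["item"] = ratings["participant"].astype(str) + "_" + ratings["medium"]

# Inter-rater reliability (two-way mixed model, single rater, consistency -> ICC3)
icc = pg.intraclass_corr(data=ratings, targets="item", raters="rater", ratings="score")
print(icc.loc[icc["Type"] == "ICC3", ["ICC", "CI95%"]])

# Post hoc paired comparison: mean quality score per participant, app vs paper diary
means = ratings.groupby(["participant", "medium"])["score"].mean().unstack()
t, p = stats.ttest_rel(means["app"], means["paper"])
print(f"paired t test: t={t:.2f}, p={p:.3f}")
```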


10.2196/16506 ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. e16506
Author(s):  
Mark Floryan ◽  
Philip I Chow ◽  
Stephen M Schueller ◽  
Lee M Ritterband

Background Although gamification continues to be a popular approach to increase engagement, motivation, and adherence to behavioral interventions, empirical studies have rarely focused on this topic. There is a need to empirically evaluate gamification models to increase the understanding of how to integrate gamification into interventions. Objective The model of gamification principles for digital health interventions proposes a set of five independent yet interrelated gamification principles. This study aimed to examine the validity and reliability of this model to inform its use in Web- and mobile-based apps. Methods A total of 17 digital health interventions were selected from a curated website of mobile- and Web-based apps (PsyberGuide), which provides independent and unbiased ratings on various metrics. A total of 133 independent raters trained in gamification evaluation techniques were instructed to evaluate the apps and rate the degree to which gamification principles are present. Multiple ratings (n≥20) were collected for each of the five gamification principles within each app. Existing measures, including the PsyberGuide credibility score, the Mobile App Rating Scale (MARS) score, and the app store rating of each app, were collected, and their relationship with the gamification principle scores was investigated. Results Apps varied widely in the degree of gamification implemented (ie, mean gamification ratings ranged from 0.17 to 4.65 out of 5). Inter-rater reliability of gamification scores for each app was acceptable (κ≥0.5). There was no significant correlation between any of the five gamification principles and the PsyberGuide credibility score (P≥.49 in all cases). Three gamification principles (supporting player archetypes, feedback, and visibility) were significantly correlated with the MARS score, whereas three principles (meaningful purpose, meaningful choice, and supporting player archetypes) were significantly correlated with the app store rating. One gamification principle (supporting player archetypes) was significantly correlated with both the MARS score and the app store rating. Conclusions Overall, the results support the validity and potential utility of the model of gamification principles for digital health interventions. As expected, there was some overlap between several gamification principles and existing app measures (eg, MARS). However, the results indicate that the gamification principles are not redundant with existing measures and highlight the potential utility of a 5-factor gamification model structure in digital behavioral health interventions. These gamification principles may be used to improve user experience and enhance engagement with digital health programs.
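The correlation analysis described above (relating per-app gamification principle scores to the MARS score and the app store rating) can be sketched as below. This is a hypothetical illustration, not the PsyberGuide data or the authors' code: the file name, column names, and the use of Pearson correlation are assumptions.

```python
# Hypothetical sketch: correlate mean per-app scores for each gamification
# principle with the MARS score and the app store rating.
import pandas as pd
from scipy import stats

apps = pd.read_csv("app_scores.csv")  # one row per app; columns as listed below
principles = ["meaningful_purpose", "meaningful_choice",
              "supporting_player_archetypes", "feedback", "visibility"]

for principle in principles:
    for outcome in ["mars_score", "app_store_rating"]:
        r, p = stats.pearsonr(apps[principle], apps[outcome])
        print(f"{principle} vs {outcome}: r={r:.2f}, p={p:.3f}")
```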


Author(s):  
Nicola Orio ◽  
Berardina De Carolis ◽  
Francesco Liotard

Abstract Although overshadowed by visual information, sound plays a central role in how people perceive an environment. The effect of a landscape is enriched by its soundscape, that is, the layering of all the acoustic sources that are heard, often unconsciously. This paper presents a framework for archiving, browsing, and accessing soundscapes, either remotely or on-site. The framework is based on two main components: a web-based interface to upload and search the recordings of an acoustic environment, enriched by information about the geolocation, timing, and context of the recording; and a mobile app to browse and listen to the recordings using an interactive map or GPS information. To populate the archive, we launched two crowdsourcing initiatives. An initial experiment examined the city of Padua’s soundscape through the participation of a group of undergraduate students. A broader experiment, open to everyone in Italy, aimed at tracking how the nationwide COVID-19 lockdown was dramatically changing the soundscape of the entire country.
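The two components described above imply a recording record carrying geolocation, timing, and context metadata, plus a GPS-based lookup for the mobile app's on-site browsing. The sketch below is a hypothetical illustration of how such a record and a "recordings near me" query might look; the field names and query logic are assumptions, not the framework's actual implementation.

```python
# Hypothetical sketch of the archive's recording metadata and a GPS-based lookup.
# Field names and query logic are assumptions, not the authors' implementation.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class SoundscapeRecording:
    recording_id: str
    audio_url: str
    latitude: float
    longitude: float
    recorded_at: str      # ISO 8601 timestamp
    context: str          # free-text description of the recording context

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def recordings_near(archive, lat, lon, radius_km=1.0):
    """Return recordings within radius_km of the user's position, nearest first."""
    nearby = [(haversine_km(lat, lon, r.latitude, r.longitude), r) for r in archive]
    return [r for d, r in sorted(nearby, key=lambda x: x[0]) if d <= radius_km]
```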


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
F Estupiñán-Romero ◽  
J Gonzalez-García ◽  
E Bernal-Delgado

Abstract Issue/problem Interoperability is paramount when reusing health data from multiple data sources and becomes vital when the scope is cross-national. We aimed to pilot interoperability solutions building on three case studies relevant to population health research. Interoperability rests on four pillars: a) the legal frame (i.e., compliance with the GDPR, privacy- and security-by-design, and ethical standards); b) the organizational structure (e.g., availability of and access to digital health data and governance of health information systems); c) semantic developments (e.g., existence of metadata, availability of standards, data quality issues, and coherence between data models and research purposes); and d) the technical environment (e.g., how well data processes are documented, the dependencies linked to software components, and alignment with standards). Results We have developed a federated research network architecture with 10 hubs, each from a different country. This architecture has implied: a) the design of a data model that addresses the research questions; b) developing, distributing, and deploying scripts for data extraction, transformation, and analysis; and c) retrieving the shared results for comparison or pooled meta-analysis. Lessons The development of a federated architecture for population health research is a technical solution that allows full compliance with the interoperability pillars. The deployment of this type of solution, in which data remain in house under the governance and legal requirements of the data owners and scripts for data extraction and analysis are shared across hubs, requires the implementation of capacity-building measures. Key messages Population health research will benefit from the development of federated architectures that provide solutions to interoperability challenges. Case studies conducted within InfAct are providing valuable lessons to advance the design of a future pan-European research infrastructure.
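The federated pattern described above, in which a shared analysis script runs inside each hub and only aggregate results leave the hub for pooling, can be sketched as follows. This is a minimal illustration under stated assumptions: the file layout, column names, and the inverse-variance fixed-effect pooling formula are chosen for the example and are not the project's actual scripts.

```python
# Hypothetical sketch of the federated pattern: each hub runs the shared script
# locally and exports only aggregates; the coordinating centre pools them.
import pandas as pd

def run_local_analysis(path_to_local_data: str) -> dict:
    """Executed inside a hub: compute aggregates only; row-level data never leave the hub."""
    df = pd.read_csv(path_to_local_data)          # local extraction per the common data model
    estimate = df["outcome"].mean()               # example aggregate of interest
    variance = df["outcome"].var(ddof=1) / len(df)
    return {"estimate": estimate, "variance": variance, "n": len(df)}

def pool_fixed_effect(hub_results: list) -> dict:
    """Coordinating centre: inverse-variance fixed-effect pooling of hub estimates."""
    weights = [1 / r["variance"] for r in hub_results]
    pooled = sum(w * r["estimate"] for w, r in zip(weights, hub_results)) / sum(weights)
    return {"pooled_estimate": pooled, "pooled_variance": 1 / sum(weights)}

# Each hub shares only the small summary dict returned by run_local_analysis(),
# which the coordinating centre then combines with pool_fixed_effect().
```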

