Community Engagement among the BioSense 2.0 User Group

2015 ◽  
Vol 7 (1) ◽  
Author(s):  
Stacey Hoferka ◽  
Marcus Rennick ◽  
Erin E. Austin ◽  
Anne Burke ◽  
Rosa Ergas ◽  
...  

This roundtable will provide a forum for the syndromic surveillance Community of Practice (CoP) to learn about activities of the BioSense 2.0 User Group (BUG) workgroups that address priority issues in syndromic surveillance. The goals of the workgroups are to coordinate efforts nationwide, to better inform the Governance Group and CDC on the development of BioSense 2.0, and to achieve high-quality outcomes for the practice of syndromic surveillance. Representatives from each workgroup will describe their efforts to date so participants can discuss key challenges and best practices in the areas of data quality, data sharing, onboarding, and developing syndrome definitions.

2021 ◽  
pp. 193896552110254
Author(s):  
Lu Lu ◽  
Nathan Neale ◽  
Nathaniel D. Line ◽  
Mark Bonn

As the use of Amazon’s Mechanical Turk (MTurk) has increased among social science researchers, so, too, has research into the merits and drawbacks of the platform. However, while many endeavors have sought to address issues such as generalizability, the attentiveness of workers, and the quality of the associated data, there has been relatively less effort concentrated on integrating the various strategies that can be used to generate high-quality data using MTurk samples. Accordingly, the purpose of this research is twofold. First, existing studies are integrated into a set of strategies/best practices that can be used to maximize MTurk data quality. Second, focusing on task setup, selected platform-level strategies that have received relatively less attention in previous research are empirically tested to further enhance the contribution of the proposed best practices for MTurk usage.
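
The abstract leaves the specific platform-level strategies unspecified, but post-hoc screening of responses is a common complement to careful task setup. Below is a minimal Python/pandas sketch of such screening, assuming a hypothetical response file with worker_id, attention_check, and duration_sec columns; the column names and thresholds are illustrative, not drawn from the paper.

```python
import pandas as pd

# Hypothetical export of MTurk responses (columns assumed for illustration).
df = pd.read_csv("mturk_responses.csv")

median_time = df["duration_sec"].median()

passed = df[
    (df["attention_check"] == "agree")           # drop failed attention checks
    & (df["duration_sec"] >= 0.3 * median_time)  # drop implausibly fast completions
    & (~df["worker_id"].duplicated())            # drop repeat submissions per worker
]

print(f"Retained {len(passed)} of {len(df)} responses ({len(passed) / len(df):.0%}).")
```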


2021 ◽  
Author(s):  
Victoria Leong ◽  
Kausar Raheel ◽  
Sim Jia Yi ◽  
Kriti Kacker ◽  
Vasilis M. Karlaftis ◽  
...  

Background. The global COVID-19 pandemic has triggered a fundamental reexamination of how human psychological research can be conducted both safely and robustly in a new era of digital working and physical distancing. Online web-based testing has risen to the fore as a promising solution for rapid mass collection of cognitive data without requiring human contact. However, a long-standing debate exists over the data quality and validity of web-based studies. Here, we examine the opportunities and challenges afforded by the societal shift toward web-based testing, highlight an urgent need to establish a standard data quality assurance framework for online studies, and develop and validate a new supervised online testing methodology, remote guided testing (RGT).

Methods. A total of 85 healthy young adults were tested on 10 cognitive tasks assessing executive functioning (flexibility, memory, and inhibition) and learning. Tasks were administered either face-to-face in the laboratory (N=41) or online using remote guided testing (N=44), delivered using identical web-based platforms (CANTAB, Inquisit, and i-ABC). Data quality was assessed using detailed trial-level measures (missed trials, outlying and excluded responses, response times) as well as overall task performance measures.

Results. Across all measures of data quality and performance, RGT data were statistically equivalent to data collected in person in the lab. Moreover, RGT participants outperformed the lab group on measured verbal intelligence, which could reflect test environment differences, including possible effects of mask-wearing on communication.

Conclusions. These data suggest that the RGT methodology could help to ameliorate concerns regarding online data quality and, particularly for studies involving high-risk or rare cohorts, offer an alternative for collecting high-quality human cognitive data without requiring in-person physical attendance.
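
The abstract does not name the statistical procedure behind "statistically equivalent," but a standard choice is the two one-sided tests (TOST) procedure. A hedged sketch with simulated stand-in data and an arbitrary equivalence margin of ±0.05, neither taken from the study:

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(0)
lab = rng.beta(2, 40, size=41)     # stand-in for the N=41 lab group
online = rng.beta(2, 40, size=44)  # stand-in for the N=44 RGT group

# TOST: rejecting both one-sided nulls supports the claim that the group
# difference lies within the pre-specified equivalence margin.
p_overall, lower, upper = ttost_ind(lab, online, low=-0.05, upp=0.05)
print(f"TOST p = {p_overall:.3f} (p < .05 supports equivalence within ±0.05)")
```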


2016 ◽  
Vol 8 (1) ◽  
Author(s):  
Peter Hicks ◽  
Julie A. Pavlin ◽  
Atar Baer ◽  
David J. Swenson ◽  
Rebecca Lampkins ◽  
...  

The "Preliminary Look into the Icd9/10 Transition Impact on Public Health Surveillance" roundtable will provide a forum for the syndromic surveillance Community of Practice (CoP) to discuss the public health impacts from the ICD-10-CM conversion, and to support jurisdictional public health practices with this transition. The discussion will be aimed at identifying conversion challenges, solutions, and best practices.


2018 ◽  
Vol 2 ◽  
pp. e25394 ◽  
Author(s):  
Steve Kelling

eBird is a global citizen science project that gathers observations of birds. The project has been making a considerable contribution to the collection and sharing of bird observations, even in the data-poorest countries, and is accelerating the accumulation of bird records globally. On 22 March 2018 eBird surpassed ½ billion bird observations. A primary component of ensuring the best quality data is the network of more than 1,300 volunteer reviewers who scour incoming data for accuracy. Reviewers provide active feedback to participants on everything from bird identification to best practices for data collection. Since eBird's inception in 2002, almost 23 million observations have been reviewed, requiring more than 190,000 hours of effort by reviewers. In this presentation we review how eBird recruits expert reviewers, describe their responsibilities, and offer some insight into new developments to improve the reviewing process.

How are reviewers recruited? There are three primary methods used to identify new reviewers. First, if we don't have any active participants in a region (e.g., Kamchatka, Russia), eBird staff search birding listservs to find an individual who is reporting a lot of high-quality observations from the area. We then contact those individuals and offer them the opportunity to review records for the region. This option has the lowest likelihood of success. Second, if an individual is submitting a lot of records to eBird from a region that needs a reviewer, we contact them and request their participation. Third, in much of the world eBird has partner groups. These partner organizations (e.g., Taiwan, Spain, India, Portugal, Australia, and all of the Western Hemisphere) recruit their own reviewers. This third method is the most effective way to gain expert participation.

What does a reviewer do? eBird reviewers work to improve eBird data in three primary areas. First, they develop and manage the eBird checklist filters for a region. These filters generate a checklist of birds for a particular time and location and determine which records get flagged for further review. Second, if an eBird participant tries to report a species that is not on the checklist, or if the number of individuals of a species exceeds the filter limit, the record is flagged for review; reviewers contact the observer and request further documentation. Currently, 57% of all records that are evaluated by reviewers are validated. Finally, eBird reviewers validate whether the participant is eBirding correctly, that is, whether they are correctly filling out the information on when, where, and how they went birding. It has been our experience that different types of reviewers are required to effectively review eBird submissions: those who are good at reviewing bird records and those who are good at educating observers on how to participate.

What are future plans? eBird will move toward more effective reviewer teams, where the volume of observations can be split among a number of individuals with different strengths, allowing identification experts to focus on observation-level ID issues and strong communicators to focus on working with contributors on checklist-level best practices. Currently, a single eBird review platform handles a broad array of different reviewing functions. It is our intent to split some of these functions into multiple platforms. For example, right now all review happens at the database level of the 'observation': a record of a taxon at a date and location. Plans are underway to develop tools that will allow reviewers to work at the entire checklist level (i.e., to more easily review the accuracy of how all the observations during a checklist event were submitted), which will enable much more effective review of checklist-level data quality concerns.
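
The checklist-filter logic described above lends itself to a simple data structure: per region and season, a maximum expected count for each species, with any unlisted species or over-limit count flagged for review. A toy Python model follows; the names and structure are illustrative, not eBird's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    region: str
    month: int
    species: str
    count: int

# filter_limits[(region, month)] -> {species: maximum expected count}
filter_limits = {
    ("NY-Tompkins", 3): {"American Robin": 200, "Snowy Owl": 1},
}

def needs_review(obs: Observation) -> bool:
    """Flag species not on the regional checklist, or counts above the limit."""
    limits = filter_limits.get((obs.region, obs.month), {})
    limit = limits.get(obs.species)
    return limit is None or obs.count > limit

print(needs_review(Observation("NY-Tompkins", 3, "Snowy Owl", 4)))        # True
print(needs_review(Observation("NY-Tompkins", 3, "American Robin", 50)))  # False
```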


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Krystal S. Collier ◽  
Sophia Crossen ◽  
Courtney Fitzgerald ◽  
Kaitlyn Ciampaglio ◽  
Lakshmi Radhakrishnan ◽  
...  

Objective. The National Syndromic Surveillance Program (NSSP) Community of Practice (CoP) works to support syndromic surveillance by providing guidance and assistance to help resolve data issues and by fostering relationships between jurisdictions, stakeholders, and vendors. During this presentation, we will highlight the value of collaboration through the International Society for Disease Surveillance (ISDS) Data Quality Committee (DQC) between jurisdictional sites conducting syndromic surveillance, the Centers for Disease Control and Prevention's (CDC) NSSP, and electronic health record (EHR) vendors when vendor-specific errors are identified, using a recent incident to illustrate how this collaboration can work to address suspected data anomalies.

Introduction. On November 20, 2017, several sites participating in the NSSP reported anomalies in their syndromic data. Upon review, it was found that between November 17 and 18, an EHR vendor's syndromic product experienced an outage and errors in processing data. The ISDS DQC, NSSP, a large EHR vendor, and many of the affected sites worked together to identify the core issues, evaluate ramifications, and formulate solutions to provide to the entire NSSP CoP.

How the Moderator Intends to Engage the Audience. Following presentation of this information, the presenters will lead a discussion on how to improve the response, provide resolution, communicate expectations, and decrease the time required to resolve issues should a similar event happen in the future. Participants from all three stakeholder groups (sites conducting syndromic surveillance, the NSSP, and vendor representatives) will be invited to share their experiences, successes, and concerns.
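
A sketch of how such feed anomalies can be surfaced automatically: compare each day's message volume with a trailing baseline and flag sharp drops, as would occur during a vendor outage. The simulated feed and the 50% threshold below are illustrative assumptions, not NSSP's actual monitoring logic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2017-11-01", "2017-11-25")
volume = pd.Series(rng.poisson(1000, len(days)), index=days)
volume["2017-11-17":"2017-11-18"] = [220, 180]  # simulated vendor outage

# Trailing 7-day median, shifted so each day is compared to its own past.
baseline = volume.rolling(7, min_periods=7).median().shift(1)
flagged = volume[volume < 0.5 * baseline]  # flag days below half the baseline
print(flagged)
```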


2019 ◽  
Vol 53 (1) ◽  
pp. 46-50
Author(s):  
Carolyn Logan ◽  
Pablo Parás ◽  
Michael Robbins ◽  
Elizabeth J. Zechmeister

Data quality in survey research remains a paramount concern for those studying mass political behavior. Because surveys are conducted in increasingly diverse contexts around the world, ensuring that best practices are followed becomes ever more important to the field of political science. Bringing together insights from surveys conducted in more than 80 countries worldwide, this article highlights common challenges faced in survey research and outlines steps that researchers can take to improve the quality of survey data. Importantly, the article demonstrates that with the investment of the necessary time and resources, it is possible to carry out high-quality survey research even in challenging environments in which survey research is not well established.
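
As one concrete instance of the kind of fieldwork check the article describes, interviews that are implausibly fast or that duplicate another respondent's answer pattern are common indicators of fabrication. A hedged Python sketch, assuming a hypothetical fieldwork export with a duration_min column and item columns named q1, q2, and so on; the 40% threshold is illustrative only.

```python
import pandas as pd

surveys = pd.read_csv("interviews.csv")  # hypothetical fieldwork export

# Interviews far faster than the median are candidates for back-checking.
too_fast = surveys["duration_min"] < 0.4 * surveys["duration_min"].median()

# Identical answer patterns across whole interviews suggest duplication.
answer_cols = [c for c in surveys.columns if c.startswith("q")]
duplicated = surveys.duplicated(subset=answer_cols, keep=False)

suspect = surveys[too_fast | duplicated]
print(f"{len(suspect)} of {len(surveys)} interviews flagged for back-checking.")
```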


Metabolomics ◽  
2014 ◽  
Vol 10 (4) ◽  
pp. 539-540 ◽  
Author(s):  
Daniel W. Bearden ◽  
Richard D. Beger ◽  
David Broadhurst ◽  
Warwick Dunn ◽  
Arthur Edison ◽  
...  


10.2196/18366 ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. e18366
Author(s):  
Maryam Zolnoori ◽  
Mark D Williams ◽  
William B Leasure ◽  
Kurt B Angstman ◽  
Che Ngufor

Background. Patient-centered registries are essential in population-based clinical care for patient identification and monitoring of outcomes. Although registry data may be used in real time for patient care, the same data may further be used for secondary analysis to assess disease burden, evaluate disease management and health care services, and support research. The design of a registry has major implications for the ability to effectively use these clinical data in research.

Objective. This study aims to develop a systematic framework to address the data and methodological issues involved in analyzing data in clinically designed patient-centered registries.

Methods. The systematic framework was composed of 3 major components: visualizing the multifaceted and heterogeneous patient-centered registries using a data flow diagram, assessing and managing data quality issues, and identifying patient cohorts for addressing specific research questions.

Results. Using a clinical registry designed as part of a collaborative care program for adults with depression at Mayo Clinic, we were able to demonstrate the impact of the proposed framework on data integrity. By following the data cleaning and refining procedures of the framework, we were able to generate high-quality data that were available for research questions about the coordination and management of depression in a primary care setting. We describe the steps involved in converting clinically collected data into a viable research data set, using registry cohorts of depressed adults to assess the impact on high-cost service use.

Conclusions. The systematic framework discussed in this study sheds light on the existing inconsistency and data quality issues in patient-centered registries. This study provides a step-by-step procedure for addressing these challenges and for generating high-quality data for both quality improvement and research that may enhance care and outcomes for patients.

International Registered Report Identifier (IRRID). DERR1-10.2196/18366
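
The last two framework components, data-quality handling and cohort identification, can be illustrated on a hypothetical registry extract. The column names below (patient_id, enroll_date, phq9, age) are assumptions for illustration, not the Mayo registry's actual schema.

```python
import pandas as pd

registry = pd.read_csv("depression_registry.csv", parse_dates=["enroll_date"])

clean = (
    registry
    .sort_values("enroll_date")
    .drop_duplicates(subset="patient_id", keep="first")  # one record per patient
    .dropna(subset=["phq9"])                             # require a baseline score
)

# Example research cohort: adults enrolled in 2018 with at least
# moderate depression (PHQ-9 >= 10).
cohort = clean[(clean["age"] >= 18)
               & (clean["enroll_date"].dt.year == 2018)
               & (clean["phq9"] >= 10)]
print(f"Cohort size: {len(cohort)} of {len(registry)} registry records.")
```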

