Reply to correspondence “Do big numbers assure high quality of data?”

2017 ◽  
Vol 4 (9) ◽  
pp. e410
Author(s):  
Claudia Allemani ◽  
Audrey Bonaventure ◽  
Rhea Harewood ◽  
Veronica Di Carlo ◽  
Michel P Coleman
2021 ◽  
Vol 9 (2) ◽  
pp. 229
Author(s):  
Georgy Mitrofanov ◽  
Nikita Goreyavchev ◽  
Roman Kushnarev

Emerging tasks in characterizing bottom sediments, including the evolution of the seabed, require a significant improvement in data quality and in the methods used to process the data. Marine seismic data have traditionally been regarded as high quality compared with land data. However, quality is always a relative characteristic, determined by the problem being solved. Detailed studies of complex processes, such as the interaction of waves with bottom sediments or seabed evolution over short time intervals (rather than millions of years), demand very high observational accuracy. When large survey volumes covering wide areas are also required, the questions of observation quality and of the processing methods used to improve it must be substantially revisited. The article provides an example of data obtained during high-precision marine surveys spanning a wide frequency range, from hundreds of hertz to kilohertz. It is shown that these data, although visually of very high quality, exhibit wavelet variations at all analyzed frequencies, reaching tens of percent. Applying factor decomposition in the spectral domain made it possible to improve data quality substantially, reducing the wavelet variability several-fold.
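The article names the method only as factor decomposition in the spectral domain; the sketch below is a minimal, hypothetical illustration of one common formulation of that idea, in which the log-amplitude spectrum of each trace is modeled as a common wavelet spectrum plus source- and receiver-side factors, and the estimated factors are subtracted to stabilize the wavelets. All variable names and the synthetic data are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for observed log-amplitude spectra logA[i, j, f] for
# n_src sources, n_rec receivers and n_f frequencies, generated from the
# additive model  logA = c[f] + s[i, f] + r[j, f] + noise.
n_src, n_rec, n_f = 20, 30, 64
c = rng.normal(0.0, 1.0, n_f)                   # common (average) wavelet spectrum
s = rng.normal(0.0, 0.2, (n_src, n_f))          # source-side wavelet variations
r = rng.normal(0.0, 0.2, (n_rec, n_f))          # receiver-side wavelet variations
s -= s.mean(axis=0)                             # fix the ambiguity in the split
r -= r.mean(axis=0)
logA = c + s[:, None, :] + r[None, :, :] + rng.normal(0.0, 0.05, (n_src, n_rec, n_f))

# Factor decomposition, frequency by frequency: for an additive two-factor
# model the least-squares estimates are simple grand/row/column means.
c_hat = logA.mean(axis=(0, 1))                  # estimated common spectrum
s_hat = logA.mean(axis=1) - c_hat               # estimated source factors
r_hat = logA.mean(axis=0) - c_hat               # estimated receiver factors

# Correction: subtract the estimated source/receiver factors so that every
# trace is referenced to the common wavelet spectrum.
logA_corrected = logA - s_hat[:, None, :] - r_hat[None, :, :]

print("wavelet variability before:", float(logA.std(axis=(0, 1)).mean()))
print("wavelet variability after: ", float(logA_corrected.std(axis=(0, 1)).mean()))
```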


2017 ◽  
Vol 35 (8_suppl) ◽  
pp. 217-217
Author(s):  
Shaheena Mukhi ◽  
John Srigley ◽  
Corinne Daly ◽  
Mary Agent-Katwala

217 Background: To address variability in diagnosing and treating cancer resection cases, six Canadian provinces implemented standardized pathology checklists to transition from narrative to synoptic reporting. In clinical practice, pathologists electronically capture data on resected cancer specimens synoptically for breast, colorectal, lung, prostate, and endometrial cases. Although data were collected in a standardized format, consensus-based indicators were unavailable to coordinate action across Canada. Objectives: We aimed to develop indicators that measure the consistency of high-quality cancer diagnosis, staging, prognosis, and treatment, and to coordinate action. Methods: A literature review was conducted, with input from clinical experts, to inform the development of indicators. 50 clinicians from x jurisdictions reviewed, selected, and ranked the 33 initially drafted indicators. Clinicians also provided input on the clinical validity of the indicators and set evidence-based targets. They then reviewed the baseline data, confirmed the clinical usefulness of the indicators, and assigned each indicator to one of three domains. Results: 47 indicators were developed and categorized into one of three domains: descriptive indicators, which provide data on intrinsic measures of a patient’s tumour, such as stage or tumour type; process indicators, which measure data completeness, timeliness, and compliance; and clinico-pathologic outcome indicators, which examine the surgeon’s or pathologist’s effect on the diagnostic pathway, such as margin positivity rates or the adequacy of lymph node removal. Examples of indicators include margin status; lymph nodes examined, involved, and retrieved; histologic type and grade distribution; lympho-vascular invasion; and pT3 margin positivity rate. Conclusions: The indicators have set a framework for measuring consistency and inconsistency in diagnosing and staging cancer, for organizing conversations and multidisciplinary group discussions, and for establishing a culture of quality improvement.
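As a minimal, hypothetical illustration of the clinico-pathologic outcome domain, the sketch below computes one of the listed indicators, the pT3 margin positivity rate, from simplified synoptic records. The record fields and example data are assumptions for illustration, not the project's actual data model.

```python
from dataclasses import dataclass

# Simplified, hypothetical view of a synoptic pathology record: only the
# fields needed for this one indicator are kept.
@dataclass
class SynopticRecord:
    disease_site: str      # e.g. "prostate"
    pt_category: str       # pathologic T category, e.g. "pT2", "pT3a"
    margin_positive: bool  # was the resection margin involved?

def pt3_margin_positivity_rate(records):
    """Clinico-pathologic outcome indicator: share of pT3 prostate
    resections with a positive resection margin."""
    pt3 = [r for r in records
           if r.disease_site == "prostate" and r.pt_category.startswith("pT3")]
    if not pt3:
        return None
    return sum(r.margin_positive for r in pt3) / len(pt3)

records = [
    SynopticRecord("prostate", "pT2", False),
    SynopticRecord("prostate", "pT3a", True),
    SynopticRecord("prostate", "pT3b", False),
]
print(pt3_margin_positivity_rate(records))  # 0.5
```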


2021 ◽  
Vol 10 (3) ◽  
pp. 44-53
Author(s):  
Modar Abdullatif ◽  
Aya Banna ◽  
Duha El-Sahsah ◽  
Taher Wafa

This study aims to explore the application of analytical procedures (AP) as a major external auditing procedure in the developing country context of Jordan, a context characterised by the prevalence of closely held businesses and limited demand for high-quality external audits (Abdullatif, 2016; Almarayeh, Aibar-Guzman, & Abdullatif, 2020). To do so, the researchers conducted semi-structured interviews with twelve experienced Jordanian external auditors. The main issues covered are the detailed use of AP as an audit procedure and the most significant issues that may limit the effectiveness and reliability of this procedure in the Jordanian context. The main findings are that AP are generally used and favoured by Jordanian auditors, who nevertheless recognise several problems that hinder the application of AP and potentially limit its reliability and effectiveness. These problems include weak internal controls at some clients, the low quality of data provided by some clients, the unavailability of specialised audit software to many auditors, and the lack of local Jordanian industry benchmarks that could be used to develop the expectations necessary for the proper application of AP. The study recommends establishing such industry benchmarks, better monitoring by the regulatory authorities of the quality of company data, and greater efforts by these authorities to promote auditors' use of specialised audit software in performing AP.
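A minimal, hypothetical sketch of the expectation-based logic behind a substantive analytical procedure of the kind the study discusses: an expected figure is derived from an industry benchmark and compared with the recorded figure, and differences beyond a tolerance are flagged for further investigation. The benchmark ratio, amounts, and tolerance below are invented for illustration; the study notes that such local benchmarks are largely unavailable in Jordan.

```python
# Expectation-based analytical procedure: compare a recorded amount with an
# expectation built from an industry benchmark, and flag large differences.

def analytical_procedure(recorded_value, expected_value, tolerance):
    """Return the difference and whether it exceeds the tolerance
    (i.e. warrants further audit investigation)."""
    difference = recorded_value - expected_value
    return difference, abs(difference) > tolerance

revenue = 1_200_000            # client's recorded revenue (hypothetical)
industry_gross_margin = 0.34   # hypothetical industry benchmark ratio
expected_gross_profit = revenue * industry_gross_margin
recorded_gross_profit = 350_000

diff, investigate = analytical_procedure(recorded_gross_profit,
                                         expected_gross_profit,
                                         tolerance=0.05 * expected_gross_profit)
print(f"difference: {diff:,.0f}; investigate further: {investigate}")
```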


2013 ◽  
Vol 2013 ◽  
pp. 1-13
Author(s):  
Li-Min Liu

Clinical trials are crucial to modern healthcare industries, and information technologies have been employed to improve the quality of data collected in trials and to reduce the overall cost of data processing. When developing software for clinical trials, one needs to take into account the patterns shared by all clinical trial software. Such patterns exist because of the unique properties of clinical trials and the rigorous regulations imposed by governments for reasons of subject safety. Unfortunately, none of the existing software development methodologies was built specifically around these properties and patterns, and none therefore works sufficiently well. In this paper, the process of clinical trials is reviewed, and the unique properties of clinical trial system development are explained thoroughly. Based on these properties, a new software development methodology is proposed specifically for developing electronic clinical trial systems. A case study shows that, by adopting the proposed methodology, high-quality software products can be delivered on schedule and within budget. With such high-quality software, data collection, management, and analysis can be more efficient, accurate, and inexpensive, which in turn improves the overall quality of clinical trials.
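The paper does not include code; as a minimal, hypothetical illustration of one pattern common to clinical trial systems (driven by regulatory audit-trail requirements), the sketch below records every change to a case report form field rather than overwriting it silently. The class and field names are assumptions, not the paper's methodology.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

# Hypothetical pattern: no CRF value is ever overwritten silently; every
# change is appended to an audit trail (who, when, old value, new value, reason).

@dataclass
class AuditEntry:
    field_name: str
    old_value: Any
    new_value: Any
    changed_by: str
    reason: str
    changed_at: datetime

@dataclass
class CaseReportForm:
    subject_id: str
    values: dict = field(default_factory=dict)
    audit_trail: list = field(default_factory=list)

    def set_value(self, field_name, new_value, user, reason):
        # Record the change before applying it, so the trail is complete.
        self.audit_trail.append(AuditEntry(
            field_name, self.values.get(field_name), new_value,
            user, reason, datetime.now(timezone.utc)))
        self.values[field_name] = new_value

crf = CaseReportForm("SUBJ-001")
crf.set_value("systolic_bp", 142, user="coordinator_01", reason="initial entry")
crf.set_value("systolic_bp", 124, user="coordinator_01", reason="transcription error")
print(len(crf.audit_trail), "audit entries recorded")
```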


2020 ◽  
Vol 4 (Supplement_1) ◽  
Author(s):  
Wen Wang ◽  
Yan Ren

Abstract Introduction There is limited evidence supporting the management of PA, primarily due to a lack of high-quality data. Developing a research database that integrates retrospectively and prospectively collected data on the clinical care and outcomes of patients with PA may provide valuable evidence on the management of PA. Methods The PA research database was established in two steps. First, patients with PA confirmed between 1 Jan 2009 and 31 Aug 2019 were identified and their data were extracted from the EMR. Second, patients who have positive confirmatory testing for PA and agree to participate in a prospective cohort will be enrolled, and data on their clinical care and long-term outcomes will be collected prospectively on case report forms from 1 Sep 2019 onwards. We evaluated the quality of the research database by assessing the quality of key variables. Results In total, 904 patients diagnosed with PA in WCH were identified, of whom 507 with positive confirmatory testing for PA were included in the retrospective database. Among included patients, the mean age was 49.2 years and the mean BMI was 24.72 kg/m2. There were 37 (7.3%) patients diagnosed with chronic kidney disease (CKD), 13 (2.6%) with coronary artery disease (CAD), 95 (18.7%) with diabetes mellitus (DM), and 77 (15.2%) with obstructive sleep apnea-hypopnea syndrome (OSA). The mean systolic blood pressure (SBP) was 155.8 mmHg and the mean diastolic blood pressure (DBP) was 96.2 mmHg. Among included patients, the lowest serum potassium during admission was 2.96 mmol/L, and the mean serum aldosterone was 26.4 ng/dL. Validation of data extraction and linkage showed 100% accuracy. Evaluation of missing data showed high completeness for key variables, with BMI 95.9% complete and only 1% of SBP and DBP values missing. Conclusion Through the retrospective and prospective cohorts of patients with PA, a research database with high-quality and comprehensive data will be established. We anticipate that this research database will provide valuable support for the management of PA in China.
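A minimal, hypothetical sketch of the kind of completeness check described for the key variables; the column names and the four synthetic rows are invented for illustration and are not the actual WCH data.

```python
import pandas as pd

# Synthetic stand-in for extracted key variables of the retrospective database.
records = pd.DataFrame({
    "bmi":             [24.1, None, 26.3, 23.8],
    "sbp":             [150, 162, 148, 158],
    "dbp":             [92, 98, None, 95],
    "serum_potassium": [2.9, 3.1, 2.7, None],
})

# Completeness per variable: percentage of non-missing values.
completeness = records.notna().mean().mul(100).round(1)
print(completeness.to_string())
```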


Author(s):  
Amber Chauncey Strain ◽  
Lucille M. Booker

One of the major challenges of ANLP research is the constant balancing act between the need for large samples and the excessive time and monetary resources required to acquire them. Amazon’s Mechanical Turk (MTurk) is a web-based data collection tool that has become a premier resource for researchers interested in optimizing their sample sizes and minimizing costs. Owing to its supportive infrastructure, diverse participant pool, quality of data, and time and cost efficiency, MTurk seems particularly suitable for ANLP researchers who want to gather large, high-quality corpora in relatively short time frames. In this chapter, the authors first provide a broad description of the MTurk interface. Next, they describe the steps for acquiring IRB approval of MTurk experiments, designing experiments using the MTurk dashboard, and managing data. Finally, the chapter concludes by discussing the potential benefits and limitations of using MTurk for ANLP experimentation.
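The chapter describes designing experiments through the MTurk web dashboard; as a complementary, hypothetical sketch, the snippet below posts a simple data-collection task (a HIT) programmatically with the boto3 MTurk client against the requester sandbox, so no real payments are made. The title, reward, and question form are invented for illustration, and AWS credentials are assumed to be configured.

```python
import boto3

# Sandbox endpoint: HITs created here are not visible to paid workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A single free-text question in MTurk's QuestionForm XML schema.
question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>paraphrase</QuestionIdentifier>
    <IsRequired>true</IsRequired>
    <QuestionContent><Text>Rewrite the sentence below in your own words.</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Paraphrase a short sentence",          # hypothetical task
    Description="Write a one-sentence paraphrase for a language corpus.",
    Keywords="text, paraphrase, language",
    Reward="0.10",                                # reward per assignment, in USD
    MaxAssignments=50,                            # number of workers
    LifetimeInSeconds=3 * 24 * 3600,              # how long the HIT stays available
    AssignmentDurationInSeconds=600,              # time allotted per worker
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```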


2018 ◽  
Vol 10 (1) ◽  
Author(s):  
Wilfred Bonney ◽  
Sandy F Price ◽  
Roque Miramontes

Objective: The objective of this presentation is to use a set of standardization protocols to ensure that the quality of the data elements and exchange formats within the NTSS is optimal for users of the system. Introduction: Standardized disease surveillance systems provide the best data for understanding disease occurrence and trends. The United States National Tuberculosis Surveillance System (NTSS) contains reported tuberculosis (TB) cases provided by all 50 states, the District of Columbia (DC), New York City, Puerto Rico, and other U.S.-affiliated jurisdictions in the Pacific Ocean and Caribbean Sea [1]. However, the NTSS currently captures phenotypic drug susceptibility testing (DST) data and does not have the ability to collect the rapid molecular DST data generated by platforms such as Cepheid GeneXpert MTB/RIF, Hain MTBDRplus and MTBDRsl, Pyrosequencing, and Whole Genome Sequencing [2-6]. Moreover, the information exchanges within the NTSS (represented in HL7 v2.5.1 [7]) are missing critical segments for appropriately representing laboratory test results and data on microbiological specimens. Methods: The application of the standardization protocols involves: (a) the revision of the current Report of Verified Case of Tuberculosis (RVCT) form to include the collection of molecular DST data; (b) the enhancement of the TB Case Notification Message Mapping Guide (MMG) v2.03 [8] to include segments for appropriately reporting laboratory test results (using Logical Observation Identifiers Names and Codes (LOINC) as the recommended vocabulary) and microbiology-related test results (using Systematized Nomenclature of Medicine -- Clinical Terms (SNOMED CT) as the recommended vocabulary); and (c) the standardization of the laboratory test results generated by the variety of molecular DST platforms and reported to TB health departments through electronic laboratory reporting (ELR), using those same standardized LOINC and SNOMED CT vocabularies in HL7 v2.5.1 [7]. Results: The application of the standardization protocols would optimize early detection and reporting of rifampin-resistant TB cases; support high-quality, data-driven decision making by public health administrators on TB cases; and generate high-quality datasets to enhance reporting and analyses of TB surveillance data and drug resistance. Conclusions: This study demonstrates that it is possible to apply standardized protocols to improve the quality of data, specifications, and exchange formats within the NTSS, thereby streamlining the exchange of TB incident cases in an integrated public health environment supporting TB surveillance, informatics, and translational research.
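As a minimal, hypothetical sketch of what standardized reporting of a molecular DST result could look like in an HL7 v2.5.1 message, the snippet below assembles a single OBX segment with a LOINC-coded observation identifier and a SNOMED CT-coded result. The codes shown are placeholders, not authoritative LOINC or SNOMED CT identifiers, and the segment layout is illustrative rather than taken from the TB Case Notification MMG.

```python
# Build one HL7 v2.5.1 OBX segment carrying a coded laboratory result.
def build_obx(set_id, loinc_code, loinc_text, snomed_code, snomed_text):
    fields = [
        "OBX",                                # segment name
        str(set_id),                          # OBX-1 set ID
        "CWE",                                # OBX-2 value type (coded with exceptions)
        f"{loinc_code}^{loinc_text}^LN",      # OBX-3 observation identifier (LOINC)
        "",                                   # OBX-4 observation sub-ID
        f"{snomed_code}^{snomed_text}^SCT",   # OBX-5 observation value (SNOMED CT)
        "", "", "", "", "",                   # OBX-6..10 not used in this sketch
        "F",                                  # OBX-11 result status: final
    ]
    return "|".join(fields)

# Placeholder codes only; real messages would use actual LOINC/SNOMED CT codes.
print(build_obx(1,
                "LOINC-PLACEHOLDER", "M. tuberculosis rifampin resistance by molecular method",
                "SCT-PLACEHOLDER", "Resistant"))
```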


2017 ◽  
Vol 4 (7) ◽  
pp. e309 ◽  
Author(s):  
Emanuele Crocetti ◽  
Carlotta Buzzoni

2013 ◽  
Vol 66 (2) ◽  
pp. 1033-1048 ◽  
Author(s):  
Chi-Yao Weng ◽  
Yu Hong Zhang ◽  
Li Chun Lin ◽  
Shiuh-Jeng Wang

2020 ◽  
Author(s):  

Good data management is essential for ensuring the validity and quality of data in all types of clinical research and is an essential precursor for data sharing. The Data Management Portal has been developed to support researchers in ensuring that high-quality data management is fully considered, and planned for, from the outset and throughout the life of a research project. The steps described in the portal help identify the areas that should be considered when developing a Data Management Plan, with a particular focus on data management systems and on how to organise and structure your data. Other elements include best practices for data capture, entry, processing, and monitoring; how to prepare data for analysis, sharing, and archiving; and an extensive collection of data management resources that can be searched and filtered by type.
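A minimal, hypothetical sketch of one practice the portal covers: validating entered values against a simple data dictionary before they reach the analysis dataset. The variable names, ranges, and example record are invented for illustration and are not taken from the portal.

```python
# Hypothetical data dictionary: expected type and allowed values/ranges per variable.
DATA_DICTIONARY = {
    "age_years":      {"type": int,   "min": 18,   "max": 100},
    "weight_kg":      {"type": float, "min": 30.0, "max": 250.0},
    "smoking_status": {"type": str,   "allowed": {"never", "former", "current"}},
}

def validate_record(record):
    """Return a list of data-quality issues for one entered record."""
    issues = []
    for name, rule in DATA_DICTIONARY.items():
        if name not in record or record[name] is None:
            issues.append(f"{name}: missing")
            continue
        value = record[name]
        if not isinstance(value, rule["type"]):
            issues.append(f"{name}: expected {rule['type'].__name__}")
        elif "allowed" in rule and value not in rule["allowed"]:
            issues.append(f"{name}: '{value}' not in {sorted(rule['allowed'])}")
        elif "min" in rule and not (rule["min"] <= value <= rule["max"]):
            issues.append(f"{name}: {value} outside [{rule['min']}, {rule['max']}]")
    return issues

print(validate_record({"age_years": 17, "weight_kg": 72.5, "smoking_status": "sometimes"}))
```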

