Improving Specimen Labelling and Data Collection in Bio-science Research using Mobile and Web Applications

2020 ◽  
Vol 10 (1) ◽  
pp. 1-16
Author(s):  
Isaac Nyabisa Oteyo ◽  
Mary Esther Muyoka Toili

Abstract: Researchers in bio-sciences are increasingly harnessing technology to improve processes that were traditionally pen-and-paper based and highly manual. The pen-and-paper approach is used mainly to record and capture data from experiment sites; it is typically slow and prone to errors. Moreover, bio-science research activities are often undertaken in remote and distributed locations, where the timeliness and quality of the data collected are essential. Capturing data manually and relaying it in real time is a daunting task, and the data collected must be associated with the respective specimens (objects or plants). In this paper, we seek to improve specimen labelling and data collection, guided by the following questions: (1) How can data collection in bio-science research be improved? (2) How can specimen labelling be improved in bio-science research activities? We present WebLog, a prototype application that helps researchers generate specimen labels and collect data from experiment sites. The application converts the object (specimen) identifiers into quick response (QR) codes and uses them to label the specimens. Once a specimen label is successfully scanned, the application automatically invokes the data entry form, and the collected data is immediately sent to the server in electronic form for analysis.
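The label-scan-submit flow described above can be sketched as follows. This is a minimal illustration, not WebLog's actual code: the function and field names are hypothetical, and a real deployment would render the identifier as a QR image (e.g. with a QR library) and POST the payload to a server rather than just serializing it.

```python
import json
import uuid

def make_specimen_label(plot, plant_no):
    """Generate a unique specimen identifier; in the app this string
    would be encoded as a QR code and printed as the physical label."""
    return f"{plot}-{plant_no}-{uuid.uuid4().hex[:8]}"

def on_scan(specimen_id, registry):
    """Simulate a successful scan: open a data-entry record bound to the specimen."""
    record = {"specimen": specimen_id, "measurements": {}}
    registry[specimen_id] = record
    return record

def submit(record, measurements):
    """Attach the field measurements and serialize the record for
    immediate electronic transfer to the server."""
    record["measurements"].update(measurements)
    return json.dumps(record)

registry = {}
label = make_specimen_label("plotA", 17)
rec = on_scan(label, registry)
payload = submit(rec, {"height_cm": 42.5, "leaf_count": 9})
```

Because the scan callback creates the record and invokes data entry in one step, every measurement is associated with its specimen at capture time, which is the property the paper highlights.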

10.28945/2584 ◽  
2002 ◽  
Author(s):  
Herna L. Viktor ◽  
Wayne Motha

Increasingly, large organizations are engaging in data warehousing projects in order to achieve a competitive advantage by exploring the information contained therein. It is therefore paramount to ensure that the data warehouse includes high-quality data. However, practitioners agree that improving the quality of data in an organization is a daunting task. This is especially evident in data warehousing projects, which are often initiated "after the fact". The slightest suspicion of poor-quality data often hinders managers from reaching decisions, as they waste hours in discussions to determine what portion of the data should be trusted. Augmenting data warehousing with data mining methods offers a mechanism to explore these vast repositories, enabling decision makers to assess the quality of their data and to unlock a wealth of new knowledge. These methods can be used effectively even with the inconsistent, noisy and incomplete data that are commonplace in data warehouses.


2018 ◽  
Vol 4 (Supplement 2) ◽  
pp. 156s-156s
Author(s):  
S. Rayne ◽  
J. Meyerowitz ◽  
G. Even-Tov ◽  
H. Rae ◽  
N. Tapela ◽  
...  

Background and context: Breast cancer is one of the most common cancers in resource-constrained environments worldwide. Although breast awareness has improved, a poor understanding of the diagnosis and management can cause patient anxiety and noncompliance, and may ultimately affect survival through compromised or delayed care. South African women attending government hospitals are diverse, with differing levels of income, education and support, and often lack access to appropriate information for their cancer care. Aim: A novel bioinformatics data management system was conceived through a close collaboration between Wits Biomedical Informatics and Translational Science (Wits-BITS) and academic breast cancer surgeons. The aim was to develop a platform that acquires epidemiologic data while synchronously converting it into a personalised cancer plan and "take-home" information sheet for the patient. Strategy/Tactics: The concept of a clinician "customer" was used, in which the "currency" with which clinicians paid for the database service was accurate data. For this payment they received the "product" of an immediate personalised information sheet for their patient. Program/Policy process: A custom software module was developed to generate individualized patient letters containing a mixture of template text and information from the patient's medical record. The letter is populated with the patient's name, the site where they were seen, and a personalised explanation of the patient's specific cancer stage according to the TNM system. Outcomes: Through continuous use with patient and clinician feedback, the quality of data in the system improved. Patients appreciated the personalised information sheet, which allowed them and their families to understand and be reassured by the management plan. Clinicians found that the quality of the information sheet gave instant feedback on the comprehensiveness of their data input, which in turn ensured the compliance and quality of data points. What was learned: Using a consumer model and cross-discipline collaboration, in a setting where access to appropriate patient information is normally poor and data entry by overburdened clinicians is often incomplete, a low-cost model of high-quality, real-time data collection was achieved by the clinicians best qualified to enter correct data points. Patients also benefitted immediately from participation in the database, through personalised information sheets that improved their understanding of their cancer care.
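The letter-generation step can be sketched with stdlib templating. This is purely illustrative: the Wits-BITS module, its template text, and its field names are not public, so everything below (template wording, the `TNM_EXPLANATIONS` lookup, the record fields) is a hypothetical stand-in for the "template text plus medical-record fields" design the abstract describes.

```python
from string import Template

# Hypothetical letter template mixing fixed text with patient-record fields.
LETTER = Template(
    "Dear $name,\n"
    "Thank you for attending the breast clinic at $site.\n"
    "Your cancer has been staged as $t$n$m (TNM system): $explanation\n"
)

# Hypothetical mapping from TNM stage to a lay explanation.
TNM_EXPLANATIONS = {
    ("T1", "N0", "M0"): "a small tumour with no spread to lymph nodes or other organs.",
}

def personalised_letter(record):
    """Fill the template from one patient's record; unknown stages fall
    back to a generic referral sentence rather than failing."""
    key = (record["t"], record["n"], record["m"])
    return LETTER.substitute(
        name=record["name"], site=record["site"],
        t=record["t"], n=record["n"], m=record["m"],
        explanation=TNM_EXPLANATIONS.get(
            key, "please discuss the details with your clinician."),
    )

letter = personalised_letter(
    {"name": "Jane", "site": "Breast Clinic A", "t": "T1", "n": "N0", "m": "M0"}
)
```

Note how the letter can only be generated once the stage fields are filled in; that coupling is what turns the information sheet into instant feedback on data completeness.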


2020 ◽  
Author(s):  
Cristina Costa-Santos ◽  
Ana Luísa Neves ◽  
Ricardo Correia ◽  
Paulo Santos ◽  
Matilde Monteiro-Soares ◽  
...  

Abstract: Background: High-quality data are crucial for guiding decision making and practising evidence-based healthcare, especially when previous knowledge is lacking. Nevertheless, data quality frailties have been exposed worldwide during the current COVID-19 pandemic. Focusing on a major Portuguese surveillance dataset, our study aims to assess data quality issues and suggest possible solutions. Methods: On April 27th 2020, the Portuguese Directorate-General of Health (DGS) made a dataset (DGSApril) available to researchers upon request. On August 4th, an updated dataset (DGSAugust) was also obtained. Data quality was assessed through analysis of data completeness and of the consistency between both datasets. Results: DGSAugust did not follow the same data format and variables as DGSApril, and a significant number of missing data and inconsistencies were found (e.g. 4,075 cases from DGSApril were apparently not included in DGSAugust). Several variables also showed a low degree of completeness and/or changed their values from one dataset to the other (e.g. the variable 'underlying conditions' had more than half of cases showing different information between datasets). There were also significant inconsistencies between the number of cases and deaths due to COVID-19 shown in DGSAugust and in the daily public DGS reports. Conclusions: The low quality of COVID-19 surveillance datasets limits their usability for informing good decisions and performing useful research. Major improvements in surveillance datasets are therefore urgently needed - e.g. simplification of data entry processes, constant monitoring of data, and increased training and awareness of health care providers - as low data quality may lead to deficient pandemic control.
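The two checks the study relies on - completeness of a variable and consistency between releases - reduce to simple set and value comparisons. The sketch below uses invented case IDs and a hypothetical `cond` field (standing in for 'underlying conditions'); it is not the authors' analysis code.

```python
def completeness(rows, field):
    """Fraction of rows where `field` is non-missing."""
    filled = sum(1 for r in rows if r.get(field) not in (None, ""))
    return filled / len(rows)

def cross_check(first, second, key, field):
    """Cases present in the first release but absent from the second,
    and cases whose value for `field` changed between releases."""
    second_by_id = {r[key]: r for r in second}
    dropped = [r[key] for r in first if r[key] not in second_by_id]
    changed = [r[key] for r in first
               if r[key] in second_by_id and r[field] != second_by_id[r[key]][field]]
    return dropped, changed

# Toy stand-ins for the April and August releases.
april = [{"id": 1, "cond": "diabetes"}, {"id": 2, "cond": ""}, {"id": 3, "cond": "asthma"}]
august = [{"id": 1, "cond": "none"}, {"id": 3, "cond": "asthma"}]

dropped, changed = cross_check(april, august, "id", "cond")
```

In this toy data, case 2 disappears between releases and case 1 changes its recorded condition - exactly the two failure modes the abstract reports at scale.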


2017 ◽  
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product approach (IP approach) is an information management approach that can be used to manage product information and to analyse data quality. The IP-Map can be used by organizations to facilitate the management of knowledge in collecting, storing, maintaining, and using data in an organized manner. The data management process for academic activities at X University has not yet used the IP approach: the university has paid little attention to managing the quality of its information, focusing so far only on the system applications used to support the automation of data management in its academic activities. The IP-Map developed in this paper can be used as a basis for analysing the quality of data and information. Through the IP-Map, X University is expected to identify which parts of the process need improvement in the quality of data and information management. Index terms: IP approach, IP-Map, information quality, data quality.


1996 ◽  
Vol 9 (6) ◽  
pp. 406-415
Author(s):  
Daniel Krichbaum ◽  
Alan Rosenthal

Drug development in the United States has undergone considerable change over the past decade. The outsourcing of clinical research activities to Contract Research Organizations (CROs) continues to escalate in an attempt to bring drugs to market faster. The increasing use of business strategies at the investigational-site level has fostered the emergence of specialty networks and Site Management Organizations (SMOs). SMOs offer pharmaceutical and biotechnology sponsors the ability to work with a tightly managed network of experienced, professional, multispecialty research centers that can enroll large numbers of patients and provide high-quality data. While these organizations have fundamentally changed the way drugs are developed, they have also contributed to an acceleration of the process and an improvement in the scientific integrity and quality of the data.


2000 ◽  
Vol 12 (1) ◽  
pp. 57-72 ◽  
Author(s):  
Hannie C. Comijs ◽  
Wil Dijkstra ◽  
Lex M. Bouter ◽  
Johannes H. Smit

2015 ◽  
Author(s):  
Paula Aristizabal ◽  
Foyinsola Ani ◽  
Erica Del Muro ◽  
Teresa Cassidy ◽  
William Roberts ◽  
...  

2019 ◽  
Vol 32 (1) ◽  
pp. 108-119 ◽  
Author(s):  
Mehrdad Farzandipour ◽  
Mahtab Karami ◽  
Mohsen Arbabi ◽  
Sakine Abbasi Moghadam

Purpose: Data comprise one of the key resources currently used in organizations. High-quality data are those that are appropriate for use by the customer. The quality of data is a key factor in determining the level of healthcare in hospitals, and its improvement leads to improved quality of health and treatment and ultimately increases patient satisfaction. The purpose of this paper is to assess the quality of emergency patients' information in a hospital information system. Design/methodology/approach: This cross-sectional study was conducted on 385 randomly selected records of patients admitted to the emergency department of Shahid Beheshti Hospital in Kashan, Iran, in 2016. Data on five dimensions of quality - accuracy, accessibility, timeliness, completeness and definition - were collected using a researcher-made checklist and were then analyzed in SPSS. The results are presented using descriptive statistics, such as frequency distribution and percentage. Findings: The overall quality of emergency patients' information in the hospital information system was 86 percent; the dimensions of quality scored 87.7 percent for accuracy, 86.8 percent for completeness, 83.9 percent for timeliness, 79 percent for definition and 62.1 percent for accessibility. Originality/value: Increasing the quality of patient information in emergency departments can lead to improvements in the timely diagnosis and management of diseases, higher patient and personnel satisfaction, and reduced hospital costs.
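Of the five dimensions above, completeness is the one that reduces to a generic formula: percent of required fields filled across the sampled records. The sketch below shows only that dimension, with invented field names; the study's other dimensions (accuracy, accessibility, timeliness, definition) were scored against a researcher-made checklist and need domain rules rather than a one-line formula.

```python
def completeness_score(records, fields):
    """Percent of required fields that are filled across all records."""
    total = len(records) * len(fields)
    filled = sum(1 for r in records for f in fields
                 if r.get(f) not in (None, ""))
    return 100 * filled / total

# Toy emergency-department records with hypothetical field names.
records = [
    {"id": "A1", "triage_time": "10:05", "diagnosis": "fracture"},
    {"id": "A2", "triage_time": "", "diagnosis": "asthma"},
]
score = completeness_score(records, ["id", "triage_time", "diagnosis"])
```

Here 5 of 6 required cells are filled, giving roughly 83 percent; applied to 385 records, this is the kind of per-dimension percentage the Findings section reports.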


2020 ◽  
pp. 089443932092824 ◽  
Author(s):  
Michael J. Stern ◽  
Erin Fordyce ◽  
Rachel Carpenter ◽  
Melissa Heim Viox ◽  
Stuart Michaels ◽  
...  

Social media recruitment is no longer an uncharted avenue for survey research, and the results thus far provide evidence of an engaging means of recruiting hard-to-reach populations. Questions remain, however, as to whether this method of recruitment produces quality data. This article assesses one aspect that may influence the quality of data gathered through nonprobability sampling using social media advertisements for a hard-to-reach sexual and gender minority youth population: recruitment design formats. The data come from the Survey of Today's Adolescent Relationships and Transitions, which used a variety of advertisement forms as survey recruitment tools on Facebook, Instagram, and Snapchat. Results demonstrate that design decisions such as the format of the advertisement (e.g., video or static) and the use of eligibility language in the advertisements affect the quality of the data as measured by break-off rates and the use of nonsubstantive responses. Additionally, the type of device used affected the measures of data quality.
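The two quality measures named above - break-off rate and rate of nonsubstantive responses - can be tabulated per advertisement format along these lines. The metric definitions here are illustrative simplifications (a respondent who did not complete counts as a break-off; any nonsubstantive answer flags the case), not the study's exact operationalisation.

```python
from collections import defaultdict

def quality_by_format(responses):
    """Break-off rate and nonsubstantive-answer rate per ad format."""
    stats = defaultdict(lambda: {"n": 0, "breakoff": 0, "nonsub": 0})
    for r in responses:
        s = stats[r["ad_format"]]
        s["n"] += 1
        s["breakoff"] += r["completed"] is False       # abandoned mid-survey
        s["nonsub"] += r["dont_know_count"] > 0        # gave "don't know"-type answers
    return {fmt: {"breakoff_rate": s["breakoff"] / s["n"],
                  "nonsub_rate": s["nonsub"] / s["n"]}
            for fmt, s in stats.items()}

# Toy respondents recruited via two hypothetical ad formats.
responses = [
    {"ad_format": "video",  "completed": True,  "dont_know_count": 0},
    {"ad_format": "video",  "completed": False, "dont_know_count": 2},
    {"ad_format": "static", "completed": True,  "dont_know_count": 1},
    {"ad_format": "static", "completed": True,  "dont_know_count": 0},
]
report = quality_by_format(responses)
```

Comparing these per-format rates is the design-effect analysis the abstract describes, with device type as a further grouping variable.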


2017 ◽  
Vol 7 (1.1) ◽  
pp. 426
Author(s):  
V Jayaraj ◽  
S Alonshia

Although data collection has received much attention, with schemes that effectively minimize delay and computational complexity and increase the total data transmitted, the transience of sensor nodes during multiple data collection in a wireless sensor network (WSN) makes quality of service a great challenge. To circumvent the transience of sensor nodes during multiple data collection, a Quality-based Drip-Drag-Match Data Collection (QDDM-DC) scheme has been proposed. In the Drip-Drag-Match data collection scheme, data is first dripped to the sink along an equidistant-based optimum communication path from the sensor nodes, which reduces data loss. Next, the drag operation pulls the required sensed data from multiple locations using a neighbourhood-based model, reducing the delay for storage. Finally, the matching operation compares the sensed data received by the drag operation with that of the corresponding sender sensor node (drip stage) and stores the sensed data accurately, which in turn improves the throughput and quality of data collection. Simulation is carried out for the QDDM-DC scheme under multiple scenarios (size of data, number of sinks, storage capacity) in WSNs with both random and deterministic models. Simulation results show that QDDM-DC provides better performance than other data collection schemes, especially high throughput with minimum delay and data loss, for effective multiple data collection of sensed data in WSNs.
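The match stage - the part of the scheme that guards stored data against loss or corruption - can be reduced to a digest comparison between the dripped copy at the sink and the dragged copy. This is a loose sketch of that one stage under assumed semantics; the paper's routing (equidistant optimum paths) and neighbourhood-based drag model are not reproduced here, and the digest mechanism is my stand-in for its comparison step.

```python
import hashlib

def drip(sink_store, node_id, reading):
    """Drip stage (assumed behaviour): the sender node pushes a digest
    of its reading to the sink over the communication path."""
    sink_store[node_id] = hashlib.sha256(repr(reading).encode()).hexdigest()

def drag_and_match(sink_store, storage, node_id, reading):
    """Drag pulls the sensed data; match stores it only if it agrees
    with the dripped digest, rejecting lost or corrupted data."""
    digest = hashlib.sha256(repr(reading).encode()).hexdigest()
    if sink_store.get(node_id) == digest:
        storage[node_id] = reading
        return True
    return False

sink_store, storage = {}, {}
drip(sink_store, "n7", (23.5, 61))                       # node n7 reports (temp, humidity)
ok = drag_and_match(sink_store, storage, "n7", (23.5, 61))   # matches: stored
bad = drag_and_match(sink_store, storage, "n8", (19.0, 55))  # no drip record: rejected
```

Only readings that survive the drip/drag comparison reach storage, which is how the scheme claims to keep stored data accurate despite transient nodes.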

