Can Third-party Help Improve Data Quality in Research Interviews? A Natural Experiment in a Hard-to-study Population

Field Methods ◽  
2015 ◽  
Vol 27 (4) ◽  
pp. 426-440 ◽  
Author(s):  
Melissa Quetulio-Navarra ◽  
Wander van der Vaart ◽  
Anke Niehof


2020 ◽  
Vol 19 ◽  
pp. 160940692096648
Author(s):  
Melissa Beresford ◽  
J. Leah Jones ◽  
Julia C. Bausch ◽  
Clinton F. Williams ◽  
Amber Wutich ◽  
...  

This paper examines the effect of having a third-party scientific expert present in stakeholder interviews. The study was conducted as part of a larger project on stakeholder engagement for natural resource management in the Verde Valley region of Arizona. We employed an experimental design, conducting stakeholder interviews both with and without an identified scientific expert present. Our sample consisted of 12 pairs of interviewees (24 total participants) whom we matched based on their occupation, sex, and spatial proximity. For each pair, the scientific expert was present as a third party in one interview and absent in the other. We used a word-based coding strategy to code all interview responses for three known areas of sensitivity among the study population (risk, gatekeeping, and competence). We then performed both quantitative and qualitative analyses to compare responses across the two interview groups. We found that the presence of a scientific expert did not have a statistically significant effect on the mention of sensitive topics among stakeholders. However, our qualitative results show that the presence of a scientific expert had subtle influences on the ways that stakeholders discussed sensitive topics, particularly in placing emphasis on their own credibility and knowledge. Our findings indicate that researchers may be able to pursue collaborative, interdisciplinary research designs with multiple researchers present during interviews without concerns of strongly influencing data elicitation on sensitive topics. However, researchers should be cognizant of the subtle ways in which a third-party expert's presence may influence the credibility claims and knowledge assertions that respondents make during stakeholder interviews.
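
As a hedged illustration of the quantitative side of this matched-pairs design (not the authors' reported analysis), the sketch below compares per-pair counts of coded sensitive-topic mentions across the two interview conditions with a Wilcoxon signed-rank test; the counts and the choice of test are invented for illustration.

```python
# Illustrative sketch only: paired comparison of sensitive-topic mentions
# across matched interviews (expert present vs. expert absent).
# The counts below are invented, and the Wilcoxon signed-rank test is an
# assumed analysis choice, not the one reported in the paper.
from scipy.stats import wilcoxon

# One entry per matched pair: number of coded "sensitive" segments
# (risk, gatekeeping, and competence combined) in each condition.
expert_present = [4, 2, 6, 3, 1, 7, 3, 5, 3, 5, 2, 4]   # invented values
expert_absent  = [3, 3, 4, 4, 2, 5, 2, 6, 4, 3, 3, 2]   # invented values

stat, p_value = wilcoxon(expert_present, expert_absent)
print(f"Wilcoxon signed-rank statistic = {stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value would be consistent with the paper's finding that
# the expert's presence did not measurably change mentions of sensitive topics.
```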


Author(s):  
David Lugo ◽  
Juan Ortega

Data collection is a key decision-making process in the oil industry, and analysis of those data is essential to improving productivity. For many organizations, automating data collection is not cost-effective, which affects how data are gathered: collection relies on manual processes whose results are unreliable and therefore introduce uncertainty into analysis. As a consequence, decisions do not produce the planned results. After many years of working in the oil industry, the following problems were identified: (1) staff record data manually, typically on paper forms that can be lost or damaged; (2) data recorded at the well are brought back to the office and entered into software by another worker, and may be modified, intentionally or not; (3) the accuracy of the data collection activity is hard to verify: how do we know whether staff actually went to the work area? (4) new staff require training and lack experience; (5) there are "risk zones" where vandalism occurs: facilities are damaged by people who steal devices, causing companies great financial losses. All of these factors affect decision making, which in turn has a large impact on the production process. The application presented here supports the whole process, from data collection until the data are registered in databases; an example of what such a record might look like is sketched below. It was designed around observations, suggestions, and comments from people involved in the oil industry, especially in the production area. The result is a tool that supports data collection, standardizes information in databases, improves data quality regardless of location, and records the time and photographic position on a mobile device. Information is generated digitally, taking advantage of easy handling. To summarize, the advantages of the whole system are: • reduced time for the data collection process • improved data quality • fewer staff needed for data registration • reliable data • support for decision making • less paper use, which benefits the environment • improved vehicle logistics • reduced gasoline consumption, which lowers costs • optimized routes for field vehicles • productivity, maintenance, and other reports can be generated • vandalism is no longer a problem.
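
As a rough sketch of the kind of record the abstract describes, a mobile reading could bundle the measurement with its timestamp, GPS position, and photo reference before being written to a database. The field names, schema, and values below are assumptions for illustration, not the application's actual design.

```python
# Hypothetical field-data record: measurement plus timestamp, GPS fix, and
# photo reference, stored in a local database. Schema and names are invented.
import sqlite3
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class WellReading:
    well_id: str
    value: float            # e.g., measured production volume
    recorded_at: str        # UTC timestamp captured by the device
    latitude: float         # GPS fix showing the technician was on site
    longitude: float
    photo_path: str         # reference to the photo taken at the well

conn = sqlite3.connect("field_data.db")
conn.execute("""CREATE TABLE IF NOT EXISTS readings
                (well_id TEXT, value REAL, recorded_at TEXT,
                 latitude REAL, longitude REAL, photo_path TEXT)""")

reading = WellReading("WELL-042", 118.5,
                      datetime.now(timezone.utc).isoformat(),
                      18.001, -94.552, "photos/well_042_001.jpg")
conn.execute("INSERT INTO readings VALUES (?, ?, ?, ?, ?, ?)",
             (reading.well_id, reading.value, reading.recorded_at,
              reading.latitude, reading.longitude, reading.photo_path))
conn.commit()
```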


2020 ◽  
Vol 26 (1) ◽  
pp. 107-126
Author(s):  
Anastasija Nikiforova ◽  
Janis Bicevskis ◽  
Zane Bicevska ◽  
Ivo Oditis

The paper proposes a new data object-driven approach to data quality evaluation. It consists of three main components: (1) a data object, (2) data quality requirements, and (3) a data quality evaluation process. As data quality is of a relative nature, the data object and quality requirements are (a) use-case dependent and (b) defined by the user in accordance with their needs. All three components of the presented data quality model are described using graphical Domain Specific Languages (DSLs). In accordance with Model-Driven Architecture (MDA), the data quality model is built in two steps: (1) creating a platform-independent model (PIM), and (2) converting the created PIM into a platform-specific model (PSM). The PIM comprises informal specifications of data quality. The PSM describes the implementation of a data quality model, thus making it executable and enabling the scanning of data objects to detect data quality defects and anomalies. The proposed approach was applied to open data sets, analysing their quality. At least three advantages were highlighted: (1) a graphical data quality model allows data quality to be defined by non-IT and non-data quality experts, as the presented diagrams are easy to read, create, and modify; (2) the data quality model allows an analysis of "third-party" data without deeper knowledge of how the data were accrued and processed; (3) the quality of the data can be described at least at two levels of abstraction: informally, using natural language, or formally, by including executable artefacts such as SQL statements.
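
As an illustration of the two levels of abstraction the authors mention, the sketch below states one invented quality requirement informally and then as an executable SQL check; the table, columns, and rule are hypothetical and are not taken from the paper or its DSLs.

```python
# Minimal sketch, not the authors' DSL.
# Informal requirement: "every company record must have a non-empty,
# 11-digit registration number."
# Formal, executable artefact: an SQL statement that scans the data object
# and reports the rows that violate the requirement (quality defects).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (name TEXT, reg_number TEXT)")
conn.executemany("INSERT INTO company VALUES (?, ?)",
                 [("Alfa", "40003012345"), ("Beta", ""), ("Gamma", "12AB")])

defects = conn.execute("""
    SELECT name, reg_number FROM company
    WHERE reg_number IS NULL
       OR length(reg_number) != 11
       OR reg_number GLOB '*[^0-9]*'
""").fetchall()

print("Defective records:", defects)   # -> [('Beta', ''), ('Gamma', '12AB')]
```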


2019 ◽  
Vol 6 (3) ◽  
pp. 69
Author(s):  
Jenny J. Ly ◽  
Rinah T. Yamamoto ◽  
Susan M. Dallabrida

Background: In migraine clinical trials, patients' understanding of the terminology used in patient-reported outcome (PRO) measures is important, as variability in completing PRO measures can reduce the power to detect treatment efficacy. This study examines patients' understanding of how to complete PRO measures in the absence of training, whether minimal training can improve the accuracy of answering PRO items, and patients' opinions on the necessity of training and their preferred method of training.

Methods: Participants reporting a diagnosis of migraine completed online surveys. Participants were given scenarios of how to report headache days and pain severity. Respondents were asked about their opinions on the necessity of training and their preference for the method of training. In a second study, participants were given a hypothetical scenario on how to report pain severity before and after a short training.

Results: The majority of participants had different criteria for interpreting PRO questions and provided incorrect answers to our scenarios. In the second study, with minimal training, errors were reduced by 7.5%. Over 90% of participants viewed educational materials and training as necessary and preferred electronic modes of training with the ability to review training materials as needed for the duration of the trial.

Conclusions: Patient training may improve data quality and inter-rater reliability in clinical trials. Electronic interactive training could be used as an approach to reduce inconsistencies in PRO measures and improve data quality.


2008 ◽  
Vol 13 (5) ◽  
pp. 378-389 ◽  
Author(s):  
Xiaohua Douglas Zhang ◽  
Amy S. Espeseth ◽  
Eric N. Johnson ◽  
Jayne Chin ◽  
Adam Gates ◽  
...  

RNA interference (RNAi) not only plays an important role in drug discovery but can also be developed directly into drugs. RNAi high-throughput screening (HTS) biotechnology allows us to conduct genome-wide RNAi research. A central challenge in genome-wide RNAi research is to integrate both experimental and computational approaches to obtain high quality RNAi HTS assays. Based on our daily practice in RNAi HTS experiments, we propose the implementation of 3 experimental and analytic processes to improve the quality of data from RNAi HTS biotechnology: (1) select effective biological controls; (2) adopt appropriate plate designs to display and/or adjust for systematic errors of measurement; and (3) use effective analytic metrics to assess data quality. The applications in 5 real RNAi HTS experiments demonstrate the effectiveness of integrating these processes to improve data quality. Due to the effectiveness in improving data quality in RNAi HTS experiments, the methods and guidelines contained in the 3 experimental and analytic processes are likely to have broad utility in genome-wide RNAi research. (Journal of Biomolecular Screening 2008:378-389)
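
The abstract does not enumerate the analytic metrics here, so the following hedged sketch uses two metrics commonly computed from positive and negative control wells on HTS plates, the Z'-factor and SSMD, purely as stand-in examples of assessing plate-level data quality; the control readings are invented.

```python
# Hedged illustration: plate-level quality metrics computed from control wells.
# Z'-factor and SSMD are common choices, used here only as examples; the
# abstract itself does not specify which metrics the authors recommend.
import numpy as np

def z_prime(pos: np.ndarray, neg: np.ndarray) -> float:
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

def ssmd(pos: np.ndarray, neg: np.ndarray) -> float:
    """SSMD: (mean_pos - mean_neg) / sqrt(var_pos + var_neg)."""
    return (pos.mean() - neg.mean()) / np.sqrt(pos.var(ddof=1) + neg.var(ddof=1))

# Invented control readings for one plate.
rng = np.random.default_rng(0)
positive_controls = rng.normal(loc=100.0, scale=8.0, size=16)
negative_controls = rng.normal(loc=20.0, scale=6.0, size=16)

print(f"Z'-factor: {z_prime(positive_controls, negative_controls):.2f}")
print(f"SSMD:      {ssmd(positive_controls, negative_controls):.2f}")
```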

