Data quality control and tools in passive seismic experiments exemplified on the Czech broadband seismic pool MOBNET in the AlpArray collaborative project

2017
Vol 6 (2)
pp. 505-521
Author(s):
Luděk Vecsey
Jaroslava Plomerová
Petr Jedlička
Helena Munzarová
Vladislav Babuška
...  

Abstract. This paper focuses on major issues related to the data reliability and network performance of 20 broadband (BB) stations of the Czech (CZ) MOBNET (MOBile NETwork) seismic pool within the AlpArray seismic experiments. Currently used high-resolution seismological applications require high-quality data recorded for a sufficiently long time interval at seismological observatories and during the entire time of operation of the temporary stations. In this paper we present new hardware and software tools we have been developing during the last two decades while analysing data from several international passive experiments. The new tools help to assure the high-quality standard of broadband seismic data and eliminate potential errors before supplying data to seismological centres. Special attention is paid to crucial issues like the detection of sensor misorientation, timing problems, interchange of record components and/or their polarity reversal, sensor mass centring, or anomalous channel amplitudes due to, for example, imperfect gain. Thorough data quality control should represent an integral constituent of seismic data recording, preprocessing, and archiving, especially for data from temporary stations in passive seismic experiments. Large international seismic experiments require enormous efforts from scientists from different countries and institutions to gather hundreds of stations to be deployed in the field during a limited time period. In this paper, we demonstrate the beneficial effects of the procedures we have developed for acquiring a reliable large set of high-quality data from each group participating in field experiments. The presented tools can be applied manually or automatically on data from any seismic network.
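As a rough illustration of the kind of automated checks the abstract describes (anomalous channel amplitudes, polarity reversals), the following minimal Python/ObsPy sketch flags a channel whose RMS amplitude deviates strongly from its companions and hints at a polarity reversal by correlating the vertical component with a reference station. The file names, thresholds, and correlation criterion are illustrative assumptions and are not taken from the MOBNET tools themselves.

```python
# Hedged sketch: simple amplitude-ratio and polarity checks on a three-component
# record, assuming miniSEED files readable by ObsPy. File names and thresholds
# are illustrative, not part of the MOBNET toolchain.
import numpy as np
from obspy import read

st = read("station_XYZ.mseed")          # hypothetical file with Z, N, E channels
st.detrend("demean")

# RMS amplitude per channel.
rms = {tr.stats.channel: np.sqrt(np.mean(tr.data.astype(float) ** 2)) for tr in st}

# Flag a channel whose RMS deviates strongly from the median of the three
# components -- a crude indicator of a wrong gain or a faulty channel.
median_rms = np.median(list(rms.values()))
for channel, value in rms.items():
    if value > 5 * median_rms or value < median_rms / 5:
        print(f"Suspicious amplitude on {channel}: RMS {value:.3g} vs median {median_rms:.3g}")

# Polarity / component-interchange hint: correlate the vertical trace with the
# same event recorded at a trusted reference station (hypothetical file).
ref = read("reference_station.mseed").select(channel="*Z")[0]
tst = st.select(channel="*Z")[0]
n = min(len(ref.data), len(tst.data))
corr = np.corrcoef(ref.data[:n], tst.data[:n])[0, 1]
if corr < 0:
    print(f"Possible polarity reversal on {tst.id} (correlation {corr:.2f})")
```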


2012
Vol 2012
pp. 1-8
Author(s):
Janet E. Squires
Alison M. Hutchinson
Anne-Marie Bostrom
Kelly Deis
Peter G. Norton
...  

Researchers strive to optimize data quality in order to ensure that study findings are valid and reliable. In this paper, we describe a data quality control program designed to maximize quality of survey data collected using computer-assisted personal interviews. The quality control program comprised three phases: (1) software development, (2) an interviewer quality control protocol, and (3) a data cleaning and processing protocol. To illustrate the value of the program, we assess its use in the Translating Research in Elder Care Study. We utilize data collected annually for two years from computer-assisted personal interviews with 3004 healthcare aides. Data quality was assessed using both survey and process data. Missing data and data errors were minimal. Mean and median values and standard deviations were within acceptable limits. Process data indicated that in only 3.4% and 4.0% of cases was the interviewer unable to conduct interviews in accordance with the details of the program. Interviewers’ perceptions of interview quality also significantly improved between Years 1 and 2. While this data quality control program was demanding in terms of time and resources, we found that the benefits clearly outweighed the effort required to achieve high-quality data.
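As a rough illustration of the data cleaning and processing checks such a program might automate, the sketch below computes per-item missing-data rates and out-of-range counts with pandas. The column names, item ranges, and the 5 % tolerance are hypothetical and are not drawn from the TREC survey instrument.

```python
# Hedged sketch of automated survey-data checks of the kind described above:
# missing-value rates and range checks per item. Column names, ranges, and the
# 5 % threshold are illustrative assumptions, not taken from the TREC study.
import pandas as pd

surveys = pd.read_csv("care_aide_interviews.csv")   # hypothetical export

# Expected ranges for Likert-style items (assumed 1-5 scales).
expected_ranges = {"job_satisfaction": (1, 5), "research_use": (1, 5)}

report = []
for column, (low, high) in expected_ranges.items():
    missing_rate = surveys[column].isna().mean()
    out_of_range = surveys[column].dropna().between(low, high).eq(False).sum()
    report.append({"item": column,
                   "missing_pct": round(100 * missing_rate, 1),
                   "out_of_range": int(out_of_range)})

summary = pd.DataFrame(report)
print(summary)

# Flag items whose missing-data rate exceeds an (assumed) 5 % tolerance.
print(summary[summary["missing_pct"] > 5.0])
```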


Metabolomics
2014
Vol 10 (4)
pp. 539-540
Author(s):
Daniel W. Bearden
Richard D. Beger
David Broadhurst
Warwick Dunn
Arthur Edison
...  

Author(s):  
H.H. Alwan
A.A. Mohammed Ali
Y.N. Mahmood

The purpose of this research is to evaluate the construction of an Assyrian library affiliated with the University of Mosul, based on quality control conditions and procedures covering both the administrative and engineering aspects and taking into consideration the stages of the project during construction. When the quality requirements were applied to the project in terms of achievement and integration, it was found that site implementation was delayed and that the actual data differed from the presumed (planned) values. The company also lacked high-quality data on the machines used for construction. In general, however, some criteria yielded satisfactory results in terms of on-site work.
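A minimal sketch of the planned-versus-actual comparison implied above; the stage names and durations (in days) are illustrative placeholders, not figures from the library project.

```python
# Hedged sketch: quantify schedule delay per construction stage by comparing
# planned and actual durations. All numbers are illustrative assumptions.
planned = {"foundations": 60, "structure": 120, "finishing": 90}
actual = {"foundations": 75, "structure": 150, "finishing": 95}

total_delay = 0
for stage, plan_days in planned.items():
    delay = actual[stage] - plan_days
    total_delay += delay
    print(f"{stage}: planned {plan_days} d, actual {actual[stage]} d, delay {delay:+d} d")

print(f"Total delay: {total_delay} days "
      f"({100 * total_delay / sum(planned.values()):.1f} % over plan)")
```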



10.2196/18366
2020
Vol 9 (10)
pp. e18366
Author(s):
Maryam Zolnoori
Mark D Williams
William B Leasure
Kurt B Angstman
Che Ngufor

Background: Patient-centered registries are essential in population-based clinical care for patient identification and monitoring of outcomes. Although registry data may be used in real time for patient care, the same data may further be used for secondary analyses that assess disease burden, evaluate disease management and health care services, and support research. The design of a registry has major implications for the ability to use these clinical data effectively in research.
Objective: This study aims to develop a systematic framework to address the data and methodological issues involved in analyzing data in clinically designed patient-centered registries.
Methods: The systematic framework was composed of 3 major components: visualizing the multifaceted and heterogeneous patient-centered registries using a data flow diagram, assessing and managing data quality issues, and identifying patient cohorts for addressing specific research questions.
Results: Using a clinical registry designed as part of a collaborative care program for adults with depression at Mayo Clinic, we were able to demonstrate the impact of the proposed framework on data integrity. By following the data cleaning and refining procedures of the framework, we were able to generate high-quality data that were available for research questions about the coordination and management of depression in a primary care setting. We describe the steps involved in converting clinically collected data into a viable research data set, using registry cohorts of depressed adults to assess the impact on high-cost service use.
Conclusions: The systematic framework discussed in this study sheds light on the inconsistency and data quality issues existing in patient-centered registries. This study provided a step-by-step procedure for addressing these challenges and for generating high-quality data, for both quality improvement and research, that may enhance care and outcomes for patients.
International Registered Report Identifier (IRRID): DERR1-10.2196/18366
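As an illustration of the cleaning and cohort-identification steps such a framework prescribes, the following pandas sketch removes duplicates, flags implausible values, and selects an adult cohort by baseline severity and enrollment window. The column names and inclusion criteria are hypothetical and do not reflect the Mayo Clinic registry schema.

```python
# Hedged sketch of registry data cleaning and cohort identification.
# Column names (patient_id, enrollment_date, phq9_baseline, age) and the
# inclusion rules are illustrative assumptions, not the actual registry design.
import pandas as pd

registry = pd.read_csv("depression_registry.csv", parse_dates=["enrollment_date"])

# Data-quality step: drop exact duplicate records and rows missing key fields.
clean = (registry
         .drop_duplicates()
         .dropna(subset=["patient_id", "enrollment_date", "phq9_baseline"]))

# Flag implausible values instead of silently discarding them (PHQ-9 ranges 0-27).
implausible = clean[(clean["phq9_baseline"] < 0) | (clean["phq9_baseline"] > 27)]
print(f"{len(implausible)} records with out-of-range PHQ-9 scores")

# Cohort-identification step: adults with at least moderate baseline depression
# enrolled within an (assumed) study window.
cohort = clean[(clean["age"] >= 18)
               & (clean["phq9_baseline"] >= 10)
               & (clean["enrollment_date"].between("2015-01-01", "2018-12-31"))]
print(f"Research cohort: {len(cohort)} of {len(clean)} cleaned records")
```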


2015
Vol 21 (3)
pp. 358-374
Author(s):
Mustafa Aljumaili
Karina Wandt
Ramin Karim
Phillip Tretten

Purpose – The purpose of this paper is to explore the main ontologies related to eMaintenance solutions, to study their application areas, and to investigate how these ontologies can be used to improve and control data quality.
Design/methodology/approach – A literature study was carried out to identify eMaintenance ontologies in the different areas; these ontologies mainly concern content structure and communication interfaces. The ontologies are then linked to each step of the maintenance data production process.
Findings – The findings suggest that eMaintenance ontologies can help to produce high-quality maintenance data. The suggested maintenance data production process may help to control data quality, and applying these ontologies at every step of the process may give management the tools to ensure high-quality data.
Research limitations/implications – Further research could broaden the investigation to identify more eMaintenance ontologies. Moreover, studying these ontologies in more technical detail may increase the understandability and uptake of these standards.
Practical implications – Applying eMaintenance ontologies requires additional cost and time from companies, and the lack or ineffective use of eMaintenance tools in many enterprises limits the adoption of these ontologies.
Originality/value – Investigating eMaintenance ontologies and connecting them to maintenance data production is important for controlling and managing data quality in maintenance.
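As a rough illustration of how an ontology can act as a data-quality gate in the maintenance data production process, the sketch below validates work-order records against a small controlled vocabulary standing in for an eMaintenance ontology. The field names and allowed terms are illustrative and not drawn from any specific standard.

```python
# Hedged sketch: validate maintenance work orders against a tiny controlled
# vocabulary standing in for an eMaintenance ontology. Fields and terms are
# illustrative assumptions, not part of any published standard.
from dataclasses import dataclass

# Minimal "ontology": required fields and the terms each field may take.
ONTOLOGY = {
    "failure_mode": {"wear", "fatigue", "corrosion", "overload"},
    "maintenance_action": {"inspect", "repair", "replace", "lubricate"},
}

@dataclass
class WorkOrder:
    asset_id: str
    failure_mode: str
    maintenance_action: str

def validate(order: WorkOrder) -> list[str]:
    """Return a list of data-quality problems found in one work order."""
    problems = []
    for field, allowed in ONTOLOGY.items():
        value = getattr(order, field)
        if value not in allowed:
            problems.append(f"{field}={value!r} is not an ontology term")
    return problems

orders = [
    WorkOrder("PUMP-01", "wear", "replace"),
    WorkOrder("PUMP-02", "worn out", "replace"),   # free-text entry, will be flagged
]
for order in orders:
    issues = validate(order)
    print(order.asset_id, "->", "OK" if not issues else "; ".join(issues))
```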

