Integrated Quality Control Process for Hydrological Database: A Case Study of Daecheong Dam Basin in South Korea

Water ◽  
2021 ◽  
Vol 13 (20) ◽  
pp. 2820
Author(s):  
Gimoon Jeong ◽  
Do-Guen Yoo ◽  
Tae-Woong Kim ◽  
Jin-Young Lee ◽  
Joon-Woo Noh ◽  
...  

In today's intelligent society, water resources are managed using vast amounts of hydrological data collected through telemetric devices. Recently, advanced data quality control technologies that refine data based on hydrological observation history, such as big data and artificial intelligence, have been studied. However, these remain impractical because of insufficient verification and implementation periods. In this study, a process to accurately identify missing and false-reading data was developed to efficiently validate hydrological data by combining various conventional validation methods. False-reading data were reclassified into suspected and confirmed groups by combining the results of the individual validation methods. Furthermore, an integrated quality control process that links data validation and reconstruction was developed. In particular, an iterative quality control feedback process was proposed to achieve highly reliable data quality; it was applied to precipitation and water level stations in the Daecheong Dam Basin, South Korea. The case study revealed that the proposed approach can improve the quality control procedure of hydrological databases and could be implemented in practice.
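The idea of combining several conventional validation methods and reclassifying flagged readings as "suspected" or "confirmed" can be sketched as below. This is a minimal illustration, not the authors' implementation: the check functions, thresholds, and sample series are assumptions made for the example.

```python
# Combine several validation checks over a series; a reading flagged by every
# check is a "confirmed" false reading, by only some checks a "suspected" one.
# Checks, thresholds, and data are illustrative assumptions.

def validate_series(values, checks):
    """Run each check over the series; return per-index flag counts."""
    counts = [0] * len(values)
    for check in checks:
        for i, bad in enumerate(check(values)):
            counts[i] += int(bad)
    return counts

def classify(counts, n_checks):
    """Reclassify each reading by how many validation methods flagged it."""
    labels = []
    for c in counts:
        if c == 0:
            labels.append("valid")
        elif c < n_checks:
            labels.append("suspected")
        else:
            labels.append("confirmed")
    return labels

# Two simple illustrative checks: a physical range test and a spike test.
def range_check(values, lo=0.0, hi=500.0):
    return [not (lo <= v <= hi) if v is not None else True for v in values]

def spike_check(values, max_jump=100.0):
    flags = [False] * len(values)
    for i in range(1, len(values)):
        if values[i] is not None and values[i - 1] is not None:
            if abs(values[i] - values[i - 1]) > max_jump:
                flags[i] = True
    return flags

series = [12.0, 14.0, 650.0, 15.0, None, 16.0]  # None marks a missing reading
checks = [range_check, spike_check]
labels = classify(validate_series(series, checks), len(checks))
# 650.0 fails both checks (confirmed); its neighbor and the gap fail one each
```

In an iterative feedback setting, confirmed readings would be reconstructed and the checks re-run until no new flags appear.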

Author(s):  
Ali, Hassana Oseiwu ◽  
Orumbe, Seth Obafemi

This research analyzes the quality control process for the soft roll production line of Bel Papyrus Ltd, a paper producer located in Ogba, Lagos State, Nigeria. The aim was to determine the conformity of the company's product to its quality standard and to identify and eliminate possible causes of variation in the production process, with reference to percentage elongation. The researchers used primary data in the form of periodic laboratory test results on soft rolls. Data were presented using simple statistical tools such as means, ranges, standard deviations, and tables reflecting the primary data obtained at equal production intervals, and variable control charts were used for the analysis.
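The variable control charts mentioned above can be sketched as follows: a minimal X-bar and R chart limit computation using the standard tabulated constants for subgroups of size 5. The subgroup data are illustrative, not the Bel Papyrus elongation measurements.

```python
# X-bar and R chart centre lines and control limits from subgrouped data.
# A2, D3, D4 are the standard tabulated constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbar_bar = sum(xbars) / len(xbars)  # grand mean (X-bar chart centre line)
    r_bar = sum(ranges) / len(ranges)   # mean range (R chart centre line)
    return {
        "xbar_cl": xbar_bar,
        "xbar_ucl": xbar_bar + A2 * r_bar,
        "xbar_lcl": xbar_bar - A2 * r_bar,
        "r_cl": r_bar,
        "r_ucl": D4 * r_bar,
        "r_lcl": D3 * r_bar,
    }

# Five subgroups of five percentage-elongation readings each (illustrative).
data = [
    [21.0, 22.5, 20.8, 21.7, 22.0],
    [21.5, 21.9, 22.2, 20.9, 21.4],
    [22.1, 21.3, 21.8, 22.4, 21.0],
    [20.7, 21.6, 22.0, 21.2, 21.9],
    [21.8, 22.3, 21.1, 21.5, 22.2],
]
limits = xbar_r_limits(data)
```

A subgroup mean outside the X-bar limits, or a subgroup range above the R chart upper limit, signals an assignable cause of variation to investigate.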


Author(s):  
Felipe Simoes ◽  
Donat Agosti ◽  
Marcus Guidoti

Automatic data mining is not an easy task, and its success in the biodiversity world is deeply tied to the standardization and consistency of scientific journals' layout structure. The various formatting styles found in the over 500 million pages of published biodiversity information (Kalfatovich 2010) pose a remarkable challenge to the goal of automating the liberation of data currently trapped on the printed page. Regular expressions and other pattern-recognition strategies invariably fail to cope with this diverse landscape of academic publishing. Challenges such as incomplete data and taxonomic uncertainty add several additional layers of complexity. However, in the era of big data, the liberation of all the different facts contained in biodiversity literature is of crucial importance. Plazi tackles this daunting task by providing workflows and technology to automatically process biodiversity publications and annotate the information therein, all within the principles of FAIR (findable, accessible, interoperable, and reusable) data usage (Agosti and Egloff 2009). It uses the concept of taxonomic treatments (Catapano 2019) as the most fundamental unit in biodiversity literature to provide a framework that reflects the reality of taxonomic data, linking the different pieces of information contained in these treatments. Treatment citations, composed of a taxonomic name and a bibliographic reference, and material citations, carrying all specimen-related information, are additional conceptual cornerstones of this framework. The resulting enhanced data are added to TreatmentBank. Figures and treatments are made FAIR by depositing them, with specific metadata, in the Biodiversity Literature Repository community (BLR) on Zenodo, the repository of the European Organization for Nuclear Research (CERN), and pushing them to GBIF. The automation, however, is error prone due to the constraints explained above.
To cope with this remarkable task without compromising data quality, Plazi has established a quality control process based on logical rules that check the components of each extracted document, raising errors at four different levels of severity. These errors also feed a data transit control mechanism, "the gatekeeper", which blocks certain data transits, such as creating deposits (e.g., BLR) or reusing data (e.g., GBIF), in the presence of specific errors. Finally, a set of automatic notifications was added to the plazi/community GitHub repository to provide a channel that empowers external users to report data issues directly to a dedicated team of data miners, who in turn fix these issues in a timely manner, improving data quality on demand. In this talk, we explain Plazi's internal quality control process and its phases, the data transits that are potentially affected, and statistics on the most common issues raised by this automated endeavor, as well as how we use the generated data to continuously improve this important step in Plazi's workflow.
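The gatekeeper pattern described above can be sketched in a few lines. This is a hedged illustration of the general mechanism, not Plazi's actual rules: the severity names, per-destination thresholds, and checks below are assumptions for the example.

```python
# Gatekeeper sketch: QC rules raise errors at one of four severity levels
# (1 = info .. 4 = fatal); a data transit is blocked when any error at or
# above that destination's blocking level is present. All rules and
# thresholds here are illustrative assumptions.

BLOCKING_LEVEL = {"BLR": 4, "GBIF": 3}  # hypothetical per-destination levels

def run_checks(treatment):
    """Return a list of (severity, message) errors for a treatment record."""
    errors = []
    if not treatment.get("taxonomic_name"):
        errors.append((4, "missing taxonomic name"))
    if not treatment.get("bibliographic_reference"):
        errors.append((3, "missing bibliographic reference"))
    if not treatment.get("material_citations"):
        errors.append((2, "no material citations extracted"))
    return errors

def gatekeeper(treatment, destination):
    errors = run_checks(treatment)
    threshold = BLOCKING_LEVEL[destination]
    blocked = any(sev >= threshold for sev, _ in errors)
    return blocked, errors

t = {"taxonomic_name": "Aus bus", "bibliographic_reference": None,
     "material_citations": []}
blocked_blr, _ = gatekeeper(t, "BLR")    # only a fatal error would block BLR
blocked_gbif, _ = gatekeeper(t, "GBIF")  # the severity-3 error blocks GBIF
```

The same error list can drive both the blocking decision and the notifications reported back to data miners.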


2004 ◽  
Vol 50 (4) ◽  
pp. 147-152 ◽  
Author(s):  
T. Higuchi ◽  
J. Masuda

In 2000, 2001, and 2002, interlaboratory comparisons of olfactometry were carried out to collect basic data for establishing a quality control procedure and determining quality criteria for the triangular odour bag method. In 2000, the interlaboratory comparison used the measurement method for samples taken at smoke stacks, whereas the 2001 comparison used the measurement method for samples taken at boundary lines. A total of seven olfactometry laboratories in Japan participated in each test, and mean values, repeatability standard deviations, reproducibility standard deviations, and standard deviations under intermediate conditions of the detection threshold of ethyl acetate were calculated from the results. These values can be used in a quality control process for olfactometry. In 2002, the interlaboratory comparison again used the measurement method for samples taken at smoke stacks; a total of 137 olfactometry laboratories in Japan participated in the test, and 69% of them lay within the permissible range of the odour index.
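The repeatability and reproducibility standard deviations named above come from the usual one-way interlaboratory decomposition (ISO 5725 style), which can be sketched as below. The replicate data are illustrative, not the Japanese olfactometry results.

```python
# Repeatability s_r (within-laboratory) and reproducibility s_R standard
# deviations from a balanced interlaboratory study: s_r^2 is the pooled
# within-lab variance, s_L^2 the between-lab component, s_R^2 = s_r^2 + s_L^2.
import math

def repeatability_reproducibility(lab_results):
    """lab_results: one list of replicate measurements per laboratory."""
    p = len(lab_results)         # number of laboratories
    n = len(lab_results[0])      # replicates per laboratory (balanced design)
    lab_means = [sum(r) / n for r in lab_results]
    grand_mean = sum(lab_means) / p
    # pooled within-laboratory (repeatability) variance
    s_r2 = sum(sum((x - m) ** 2 for x in r)
               for r, m in zip(lab_results, lab_means)) / (p * (n - 1))
    # between-laboratory variance component (truncated at zero)
    s_L2 = max(0.0, sum((m - grand_mean) ** 2 for m in lab_means) / (p - 1)
               - s_r2 / n)
    s_R2 = s_r2 + s_L2           # reproducibility variance
    return math.sqrt(s_r2), math.sqrt(s_R2)

# Seven labs, three replicate detection-threshold measurements each
# (illustrative values).
results = [
    [2.1, 2.3, 2.2], [2.5, 2.4, 2.6], [2.0, 2.1, 1.9],
    [2.8, 2.7, 2.9], [2.2, 2.3, 2.1], [2.4, 2.6, 2.5],
    [2.3, 2.2, 2.4],
]
s_r, s_R = repeatability_reproducibility(results)
```

By construction s_R is never smaller than s_r; the gap between them reflects systematic between-laboratory differences.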


2021 ◽  
pp. 1-11
Author(s):  
Song Gang ◽  
Wang Xiaoming ◽  
Wu Junfeng ◽  
Li Shufang ◽  
Liu Zhuowen ◽  
...  

Addressing the production quality management of filter rods in the manufacturing execution process of cigarette enterprises, this paper analyzes the necessity of implementing a manufacturing execution system (MES) in the filter rod production process. The MES-based filter rod quality system of a cigarette enterprise is studied in full, covering the requirements analysis for the information management system, the cigarette quality control process, the design of the system's function modules, and the implementation and test results. The paper uses the fuzzy analytic hierarchy process to select the optimal system for cigarette manufacturing. The implementation of the MES-based filter rod quality information management system for a cigarette enterprise ensures quality control in the cigarette production process. The information management of cigarette production is completed in a visual, real-time, and dynamic way, which greatly improves the quality of the enterprise's manufacturing process.
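The fuzzy analytic hierarchy process builds on the core AHP step of deriving criterion weights from a pairwise comparison matrix. As a hedged illustration of that crisp core (the fuzzy extension, which uses triangular fuzzy judgments, is omitted), the geometric-mean method below approximates the principal eigenvector; the criteria and judgments are made up for the example.

```python
# Crisp AHP weight derivation via the geometric-mean (row geometric mean)
# approximation of the principal eigenvector. Criteria and the pairwise
# judgments are illustrative assumptions, not the paper's data.

def ahp_weights(pairwise):
    """Normalized row geometric means of a pairwise comparison matrix."""
    n = len(pairwise)
    geo = []
    for row in pairwise:
        p = 1.0
        for v in row:
            p *= v
        geo.append(p ** (1.0 / n))
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical criteria: quality traceability, real-time monitoring, cost.
# matrix[i][j] = how much more important criterion i is than criterion j.
matrix = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
weights = ahp_weights(matrix)  # sum to 1; traceability weighted highest
```

In the fuzzy variant, each judgment would be a triangular fuzzy number and the weights would be defuzzified before ranking the candidate systems.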


Cell ◽  
2021 ◽  
Vol 184 (11) ◽  
pp. 2896-2910.e13
Author(s):  
Haifeng Jiao ◽  
Dong Jiang ◽  
Xiaoyu Hu ◽  
Wanqing Du ◽  
Liangliang Ji ◽  
...  

2013 ◽  
Vol 141 (2) ◽  
pp. 798-808 ◽  
Author(s):  
Zhifang Xu ◽  
Yi Wang ◽  
Guangzhou Fan

The relatively smooth terrain embedded in a numerical model creates an elevation difference against the actual terrain, which in turn makes the quality control of 2-m temperature difficult when forecast or analysis fields are used in the process. In this paper, a two-stage quality control method for 2-m temperature is proposed, using biweight means and a progressive EOF analysis. The study improves the quality control of observed 2-m temperatures collected over China and its neighboring areas, based on the 6-h T639 analysis from December 2009 to February 2010. Results show that the proposed two-stage method secures the needed quality control better than a regular EOF quality control process. In particular, the new method is able to remove data dotted with consecutive errors but showing only small fluctuations. Meanwhile, compared with the temperature lapse rate method, the biweight mean method removes the systematic bias generated by the model. These methods make the distributions of observation increments (the difference between observation and background) more Gaussian-like, which ensures the data quality after quality control.
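The biweight-mean stage can be sketched as below: compute a robust (biweight) location and scale, then flag observations too far from the biweight mean. The formulation is the standard Tukey biweight commonly used in meteorological QC; the tuning constant (c = 7.5), the Z = 3 threshold, and the sample data are assumptions for the illustration, not the paper's settings.

```python
# Biweight mean and standard deviation (Tukey biweight, one-step form):
# observations far from the median (|u| >= 1) get zero weight, so a single
# gross error cannot inflate the estimates the way it inflates a plain mean.
import statistics

def biweight(values, c=7.5):
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return med, 0.0
    u = [(v - med) / (c * mad) for v in values]
    num = sum((v - med) * (1 - ui**2) ** 2
              for v, ui in zip(values, u) if abs(ui) < 1)
    den = sum((1 - ui**2) ** 2 for ui in u if abs(ui) < 1)
    bw_mean = med + num / den
    n = len(values)
    s_num = sum((v - med) ** 2 * (1 - ui**2) ** 4
                for v, ui in zip(values, u) if abs(ui) < 1)
    s_den = sum((1 - ui**2) * (1 - 5 * ui**2) for ui in u if abs(ui) < 1)
    bw_std = (n * s_num) ** 0.5 / abs(s_den)
    return bw_mean, bw_std

def flag_outliers(values, z=3.0):
    """Flag values more than z biweight standard deviations from the mean."""
    m, s = biweight(values)
    return [abs(v - m) > z * s for v in values]

# Illustrative 2-m temperature increments with one gross error.
temps = [1.2, 0.8, 1.1, 0.9, 1.0, 14.5, 1.3, 0.7, 1.1, 1.0]
flags = flag_outliers(temps)  # only the 14.5 reading is flagged
```

A plain mean-and-standard-deviation check on the same series would be dragged toward the gross error, which is why the robust version is preferred here.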


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0247925
Author(s):  
Pooi-Mun Wong ◽  
Shreya R. K. Sinha ◽  
Chee-Kong Chui

Blockchain has been applied to quality control in manufacturing, but the problems of false defect detection and lack of data transparency remain. This paper proposes a framework, the Blockchain Quality Controller (BCQC), to overcome these limitations while strengthening data security. The BCQC uses blockchain and the Internet of Things to form a peer-to-peer supervision network. The paper also proposes a consensus algorithm, Quality Defect Tolerance (QDT), to adapt blockchain to during-production quality control. Simulation results show that the BCQC enhances data security and improves defect detection. Although the time taken for the quality control process increases with the number of nodes in the blockchain, QDT allows multiple inspections of a workpiece to be consolidated at a faster pace, effectively speeding up the entire quality control process. The BCQC and QDT can improve the quality of parts produced for mass-personalization manufacturing.
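The abstract does not spell out the QDT rule itself, so the sketch below only illustrates the general idea of consolidating multiple inspection verdicts while tolerating a bounded amount of disagreement; the tolerance rule and verdict format are assumptions for the example, not the published algorithm.

```python
# Consolidate pass/fail verdicts from several inspection nodes for one
# workpiece: accept the majority verdict if dissent stays within a tolerance,
# otherwise ask for re-inspection. Purely illustrative of the concept.

def consolidate(verdicts, tolerance=1):
    """Return 'pass', 'fail', or 're-inspect' for a set of node verdicts."""
    fails = sum(1 for v in verdicts if v == "fail")
    passes = len(verdicts) - fails
    majority = "fail" if fails > passes else "pass"
    dissent = min(fails, passes)  # size of the disagreeing minority
    if dissent > tolerance:
        return "re-inspect"       # too much disagreement to trust either side
    return majority

v1 = consolidate(["pass", "pass", "pass", "fail"])  # lone dissenter tolerated
v2 = consolidate(["pass", "fail", "fail", "pass"])  # split vote: re-inspect
v3 = consolidate(["fail", "fail", "fail", "pass"])  # defect confirmed
```

Consolidating verdicts this way, rather than committing each inspection separately, is what lets multiple inspections of one workpiece be settled in a single consensus round.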


AGROINTEK ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 72
Author(s):  
Andan Linggar Rucitra ◽  
S Fadiah

<p><em>Telon oil is a traditional medicine in the form of a liquid preparation that provides a sense of warmth to the wearer. PT X is one of the companies that produce telon oil. To maintain the quality of PT X's telon oil, overall quality control is required, starting from the quality control of raw materials, through the quality control of the process, to the quality control of the final product. The purpose of this research is to examine the application of Statistical Quality Control (SQC) in controlling the quality of telon oil at PT X. Final product quality is one measure of the success of a process, so good quality control is needed. The SQC methods used in this research are the Pareto diagram and the cause-and-effect diagram. A Pareto diagram is a bar graph that shows problems ordered by their number of occurrences, from the most frequent to the least. A cause-and-effect diagram, often called a fishbone diagram, is a tool for identifying the potential causes of an effect or problem. The results show that 80% of defects are caused by unsuitable volume and by incorrect expired date (ED) codes. The defects are caused by several factors, namely method, labor, and machine, and volume conformity is the factor with the greatest potential for reducing the number of defective products.</em></p>
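The Pareto step described above reduces to tallying defect causes, sorting them, and finding the "vital few" that account for the first 80% of defects. The sketch below illustrates this; the defect categories and counts are made up, not PT X's data.

```python
# Pareto analysis: sort defect causes by frequency and accumulate until a
# threshold share (default 80%) of all defects is covered. Counts are
# illustrative assumptions.

def pareto(defect_counts, threshold=0.8):
    total = sum(defect_counts.values())
    ordered = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    vital, cumulative = [], 0
    for cause, count in ordered:
        cumulative += count
        vital.append(cause)
        if cumulative / total >= threshold:
            break
    return ordered, vital

counts = {"volume out of spec": 52, "wrong ED code": 30,
          "label misprint": 10, "cap damage": 8}
ordered, vital_few = pareto(counts)
# the "vital few": volume and ED-code defects cover >= 80% of all defects
```

Plotting `ordered` as bars with a cumulative-percentage line gives the Pareto diagram itself; the causes in `vital_few` are the ones fed into the fishbone analysis.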

