Validation Methods and Data Quality Objectives Analysis for Retrospective Exposure Estimations — Examples, Analysis, and Pitfalls

2001
Author(s):  
J. Rasmuson ◽  
L. Birkner

Water ◽ 2021 ◽ Vol 13 (20) ◽ pp. 2820
Author(s):  
Gimoon Jeong ◽  
Do-Guen Yoo ◽  
Tae-Woong Kim ◽  
Jin-Young Lee ◽  
Joon-Woo Noh ◽  
...  

Water resources are now managed using vast amounts of hydrological data collected through telemetric devices. Advanced data quality control technologies that refine data based on hydrological observation history, such as big data and artificial intelligence techniques, have recently been studied; however, they remain impractical because their verification and implementation periods have been insufficient. In this study, a process to accurately identify missing and false-reading data was developed to efficiently validate hydrological data by combining various conventional validation methods. False-reading data were reclassified into suspected and confirmed groups by combining the results of the individual validation methods. Furthermore, an integrated quality control process that links data validation and reconstruction was developed. In particular, an iterative quality control feedback process was proposed to achieve highly reliable data quality, and it was applied to precipitation and water-level stations in the Daecheong Dam Basin, South Korea. The case study showed that the proposed approach can improve the quality control procedure of a hydrological database and could be implemented in practice.
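To illustrate the kind of rule combination the abstract describes, here is a minimal Python sketch, assuming hypothetical thresholds and check names rather than the authors' implementation: a reading is labelled a confirmed false reading only when several independent checks agree, and merely suspected when a single check fires.

```python
import numpy as np

def classify_readings(values, lower=0.0, upper=500.0, max_jump=50.0):
    """Combine simple validation checks on a series of telemetric readings.

    Labels each reading 'missing', 'confirmed', 'suspected', or 'ok'.
    The thresholds are illustrative placeholders, not values from the study.
    """
    values = np.asarray(values, dtype=float)
    labels = np.full(values.shape, "ok", dtype=object)

    missing = np.isnan(values)                           # telemetry gaps
    out_of_range = (values < lower) | (values > upper)   # physical-limit check
    jump = np.zeros(values.shape, dtype=bool)            # spike / rate-of-change check
    jump[1:] = np.abs(np.diff(values)) > max_jump

    flags = out_of_range.astype(int) + jump.astype(int)  # how many checks fired
    labels[missing] = "missing"
    labels[~missing & (flags >= 2)] = "confirmed"        # independent checks agree
    labels[~missing & (flags == 1)] = "suspected"        # needs further review
    return labels

print(classify_readings([1.2, 2.0, np.nan, 999.0, 3.1]))
# -> ['ok' 'ok' 'missing' 'suspected' 'suspected']
```

In the process the abstract outlines, the suspected and missing readings would then feed the iterative validation and reconstruction feedback loop rather than being discarded outright.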


2021 ◽ Vol 10 (11) ◽ pp. 735
Author(s):  
Lih Wei Yeow ◽  
Raymond Low ◽  
Yu Xiang Tan ◽  
Lynette Cheah

Point-of-interest (POI) data from map sources are increasingly used in a wide range of applications, including real estate, land use, and transport planning. However, uncertainties in data quality arise because some of these data are crowdsourced and proprietary validation workflows lack transparency. Comparing data quality between POI sources without standardized validation metrics is a challenge. This study reviews and implements available POI validation methods, working towards a set of metrics that is applicable across datasets. Twenty-three validation methods were found and categorized. Most methods evaluated positional accuracy, while logical consistency and usability were the least represented. A subset of nine methods was implemented to assess four real-world POI datasets extracted for a highly urbanized neighborhood in Singapore. The datasets were found to have poor completeness, with errors of commission and omission, although spatial errors were reasonably low (<60 m). Thematic accuracy in names and place types varied. The move towards standardized validation metrics depends on factors such as data availability for intrinsic or extrinsic methods, varying levels of detail across POI datasets, the influence of matching procedures, and the intended application of POI data.
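As a concrete sketch of two of the metric families mentioned above, completeness (errors of omission and commission) and positional accuracy, the following Python snippet compares a test POI dataset against a reference set. The greedy nearest-neighbour matching and the tuple layout are illustrative assumptions; the abstract itself notes that the choice of matching procedure influences the resulting metrics.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_and_score(test_pois, ref_pois, max_dist_m=60.0):
    """Greedy nearest-neighbour match; returns completeness and positional error.

    POIs are (name, lat, lon) tuples. The 60 m cut-off mirrors the spatial
    error bound reported in the abstract; the matching itself is illustrative.
    """
    matched, errors = set(), []
    for _, lat, lon in test_pois:
        best = min(
            ((i, haversine_m(lat, lon, rlat, rlon))
             for i, (_, rlat, rlon) in enumerate(ref_pois) if i not in matched),
            key=lambda t: t[1], default=(None, float("inf")))
        if best[0] is not None and best[1] <= max_dist_m:
            matched.add(best[0])          # reference POI accounted for
            errors.append(best[1])        # positional error of this match
    omission = 1 - len(matched) / len(ref_pois)    # reference POIs never found
    commission = 1 - len(errors) / len(test_pois)  # test POIs with no real match
    mean_err = sum(errors) / len(errors) if errors else float("nan")
    return omission, commission, mean_err

ref = [("Cafe A", 1.3000, 103.8000), ("Shop B", 1.3010, 103.8010)]
test = [("Cafe A", 1.3001, 103.8001), ("Mall C", 1.3100, 103.8100)]
print(match_and_score(test, ref))  # omission 0.5, commission 0.5, ~16 m error
```

The omission and commission rates here correspond to the completeness errors the study reports for the Singapore datasets, while the mean matched distance corresponds to its positional accuracy metric.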


2012
Author(s):  
Nurul A. Emran ◽  
Noraswaliza Abdullah ◽  
Nuzaimah Mustafa

2013 ◽ pp. 97-116
Author(s):  
A. Apokin

The author compares several quantitative and qualitative approaches to forecasting in order to find appropriate methods for incorporating technological change into long-range forecasts of the world economy. A number of long-run forecasts (with horizons over 10 years) for the world economy and national economies are reviewed to outline the advantages and drawbacks of different ways of accounting for technological change. The approaches are compared on their sensitivity to data quality and their robustness to model misspecification, and recommendations are offered on the choice of an appropriate technique for long-run forecasts of the world economy in the presence of technological change.


2019 ◽ Vol 10 (2) ◽ pp. 117-125
Author(s):  
Dana Kubíčková ◽
Vladimír Nulíček

The aim of the research project conducted at the University of Finance and Administration is to construct a new bankruptcy model. The intention is to use data on firms that had to cease their activities due to bankruptcy. The most common method for bankruptcy model construction is multivariate discriminant analysis (MDA), which makes it possible to derive the indicators most sensitive to future company failure as parts of the bankruptcy model. One of the assumptions for using the MDA method and ensuring reliable results is that the input data are normally distributed and independent. The results of verifying this assumption, the third stage of the project, are presented in this article. We found that this assumption is met for only a few of the selected indicators. Better results were achieved for the indicators in the set of prosperous companies and one year prior to failure. The selected indicators intended for the bankruptcy model construction thus cannot be considered suitable for the MDA method.
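To make the assumption check concrete, here is a minimal Python sketch, using synthetic data and placeholder indicator names rather than the project's actual financial ratios, of testing each candidate indicator for normality with the Shapiro-Wilk test and screening a pair for linear dependence with Pearson correlation, the two MDA preconditions the article verifies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
# Synthetic stand-ins for financial indicators; real inputs would be
# ratios computed from the failed and prosperous firms' statements.
indicators = {
    "ROA": rng.normal(0.05, 0.02, 200),
    "current_ratio": rng.lognormal(0.3, 0.4, 200),  # skewed, likely non-normal
}

# Normality check per indicator (null hypothesis: data are normal).
for name, x in indicators.items():
    stat, p = stats.shapiro(x)
    verdict = "normal" if p > 0.05 else "non-normal"
    print(f"{name}: Shapiro-Wilk W={stat:.3f}, p={p:.3f} -> {verdict}")

# Pairwise Pearson correlation as a simple screen for linear dependence.
r, p = stats.pearsonr(indicators["ROA"], indicators["current_ratio"])
print(f"ROA vs current_ratio: r={r:.3f}, p={p:.3f}")
```

An indicator that fails the normality test, as most did in the study's data, would argue against feeding it into an MDA-based bankruptcy model without transformation or a different classification method.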

