Reassessing Data Quality for Information Products

2010 ◽  
Vol 56 (12) ◽  
pp. 2316-2322 ◽  
Author(s):  
Debabrata Dey ◽  
Subodha Kumar


Author(s):  
F. Albrecht ◽  
T. Blaschke ◽  
S. Lang ◽  
H. M. Abdulmutalib ◽  
G. Szabó ◽  
...  

The availability and accessibility of remote sensing (RS) data, cloud processing platforms, and derived information products and services have increased the size and diversity of the RS user community. This development also generates a need for validation approaches that assess data quality. Validation approaches employ quality criteria in their assessment. Data quality (DQ) dimensions as the basis for quality criteria have been investigated in depth in the database field and in the remote sensing domain. Several standards exist within the RS domain, but a general classification, established for databases, has been adapted only recently. To make research opportunities easier to identify, a better understanding is required of how quality criteria are employed across the RS lifecycle. This research therefore investigates how quality criteria support the decisions that guide the RS lifecycle and how they relate to the measured DQ dimensions. An overview of the relevant standards in the RS domain follows, matched to the RS lifecycle. Finally, the research needs are identified that would enable a complete understanding of the interrelationships between the RS lifecycle, the data sources, and the DQ dimensions, an understanding that would be very valuable for designing validation approaches in RS.


Author(s):  
Tom Breur

Business Intelligence (BI) projects that involve substantial data integration have often proven failure-prone and difficult to plan. Data quality issues trigger rework, which makes it difficult to schedule deliverables accurately. Two things can bring improvement. Firstly, one should deliver information products in the smallest possible chunks, but without adding prohibitive overhead for breaking up the work into tiny increments. This will increase the frequency and improve the timeliness of feedback on the suitability of information products, making planning and progress more predictable. Secondly, BI teams need to provide better stewardship when they facilitate discussions between departments whose data cannot easily be integrated. Many so-called data quality errors do not stem from inaccurate source data, but rather from incorrect interpretation of data. This is mostly caused by differing interpretations of essentially the same underlying source-system facts across departments with misaligned performance objectives. Such problems require prudent stakeholder management and informed negotiations to resolve these differences. In this chapter, the authors suggest an innovation to data warehouse architecture to help accomplish these objectives.


2004 ◽  
Vol 50 (7) ◽  
pp. 967-982 ◽  
Author(s):  
Amir Parssian ◽  
Sumit Sarkar ◽  
Varghese S. Jacob

2012 ◽  
Author(s):  
Nurul A. Emran ◽  
Noraswaliza Abdullah ◽  
Nuzaimah Mustafa

2013 ◽  
pp. 97-116 ◽  
Author(s):  
A. Apokin

The author compares several quantitative and qualitative approaches to forecasting to find appropriate methods for incorporating technological change in long-range forecasts of the world economy. A number of long-run forecasts (with horizons over 10 years) for the world economy and national economies are reviewed to outline the advantages and drawbacks of different ways to account for technological change. The approaches are compared based on their sensitivity to data quality and robustness to model misspecification, and recommendations are offered on the choice of appropriate technique in long-run forecasts of the world economy in the presence of technological change.


2019 ◽  
Vol 10 (2) ◽  
pp. 117-125
Author(s):  
Dana Kubíčková ◽  
Vladimír Nulíček
The aim of the research project being solved at the University of Finance and Administration is to construct a new bankruptcy model. The intention is to use data on firms that had to cease their activities due to bankruptcy. The most common method for bankruptcy model construction is multivariate discriminant analysis (MDA). It makes it possible to derive the indicators most sensitive to future company failure as parts of the bankruptcy model. One of the assumptions for using the MDA method and ensuring reliable results is the normal distribution and independence of the input data. The results of verifying this assumption, as the third stage of the project, are presented in this article. We found that this assumption is met only for a few selected indicators. Better results were achieved for the indicators in the set of prosperous companies and one year prior to failure. The selected indicators intended for the bankruptcy model construction thus cannot be considered suitable for the MDA method.
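The kind of normality screening described above can be done per indicator with a standard test such as Shapiro-Wilk. The sketch below is illustrative only: the indicator names and the simulated samples are assumptions, not the study's dataset or its exact procedure.

```python
# Illustrative sketch (not the study's data): screening financial
# indicators for normality with the Shapiro-Wilk test before
# deciding whether MDA is appropriate for them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical indicator samples: one roughly normal, one skewed.
indicators = {
    "current_ratio": rng.normal(loc=1.5, scale=0.3, size=200),
    "debt_ratio": rng.exponential(scale=0.5, size=200),
}

for name, sample in indicators.items():
    w_stat, p_value = stats.shapiro(sample)
    verdict = "normal" if p_value > 0.05 else "not normal"
    print(f"{name}: W={w_stat:.3f}, p={p_value:.4f} -> {verdict}")
```

A skewed indicator such as the simulated debt ratio would be rejected by the test, which mirrors the article's finding that only a few indicators satisfy the MDA prerequisite.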

