Information Quality and Governance for Business Intelligence
Advances in Business Strategy and Competitive Advantage

Published by IGI Global
ISBN: 9781466648920, 9781466648937
Total documents: 21 · H-index: 1

Author(s):  
Qinghua Zhu ◽  
Linghe Huang ◽  
Jia Tina Du ◽  
Hua Liu

Wikis are a typical form of user-generated content. Their emergence has greatly promoted the creation, organization, management, and sharing of knowledge on the Internet. As wiki articles grow rapidly in number, their quality has become a widespread concern, and how to assess and control article quality has attracted many researchers. However, few studies have investigated the status of this research topic. This chapter explores the current research status and trends of wikis' quality and governance. The authors selected papers from ISI, EI, IEEE, and other widely used databases, and report the trends and research themes of wikis' quality and governance using bibliometric analysis and content analysis of a total of 99 relevant papers. The results show that although research in this field has developed very rapidly, it is still at an early stage and lacks supporting theory. The discipline of Library and Information Science was found to play a very active role in this new area. A future research agenda and directions are also discussed.


Author(s):  
Samuel Otero Schmidt ◽  
Edmir Parada Vasques Prado

Organizations are currently investing more in information technology to store and process vast amounts of information. Generally, this information does not comply with any standard, which hinders the decision-making process. These difficulties can be attributed to Information Quality (IQ), which has technical characteristics related to the architecture used in Data Warehouse (DW) and Business Intelligence (BI) environments. On the basis of the relevant literature on IQ, DW, and BI, a research model was created to identify the relations between components of DW/BI architecture and IQ dimensions. This research model was applied in a real case study (a large financial company in Brazil) involving semi-structured interviews with managers and analysts. This chapter attempts to provide a better understanding of the relations between IT architecture and IQ in DW and BI environments. The authors hope to motivate discussion around the development of IQ-oriented architectures for BI and the relationship between these concepts.


Author(s):  
Aleš Popovič ◽  
Jurij Jaklič

The IS literature has long highlighted the positive impact of information provided by Business Intelligence Systems (BIS) on decision-making, particularly when organizations operate in highly competitive environments. The primary purpose of implementing BIS is to employ diverse mechanisms that raise the levels of two Information Quality (IQ) dimensions: information access quality and information content quality. While researchers have traditionally focused on assessing IQ criteria, they have largely ignored the mechanisms that boost IQ dimensions. Drawing on the extant literature on BIS and IQ, this research sought to understand how BIS maturity, at its present level of development, affects IQ dimensions, as well as the role that business knowledge may play in shaping this link. The authors test the hypotheses across 181 medium-sized and large organizations. Interestingly, the data describe a more complex picture than might have been anticipated.


Author(s):  
Yinle Zhou ◽  
John R. Talburt

Inverted indexing is a commonly used technique for improving the performance of entity resolution algorithms by reducing the number of pair-wise comparisons needed to arrive at acceptable results. This chapter describes how inverted indexing can also be used as a data-partitioning strategy to perform entity resolution on large datasets in a distributed processing environment. It also discusses the importance of index-to-rule alignment, pre-resolution index closure, post-resolution link closure, and workflows for record-based and attribute-based identity capture and update in a distributed processing environment.
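The core idea can be illustrated with a minimal sketch (not from the chapter): records are bucketed under an inverted-index key, and pair-wise comparisons are generated only within buckets. The blocking key used here (first letter of the surname token) is a hypothetical example.

```python
from collections import defaultdict
from itertools import combinations

def block_by_key(records, key_fn):
    """Group records under an inverted-index key so that only
    records sharing a key are compared pair-wise."""
    index = defaultdict(list)
    for rec in records:
        index[key_fn(rec)].append(rec)
    return index

def candidate_pairs(index):
    """Yield the pair-wise comparisons implied by the index."""
    for bucket in index.values():
        yield from combinations(bucket, 2)

records = [
    {"id": 1, "name": "John Smith"},
    {"id": 2, "name": "Jon Smith"},
    {"id": 3, "name": "Mary Jones"},
]
# Hypothetical blocking key: first letter of the surname token.
index = block_by_key(records, lambda r: r["name"].split()[-1][0].lower())
pairs = list(candidate_pairs(index))
# Only 1 candidate pair survives, versus 3 exhaustive comparisons.
```

Because each bucket can be resolved independently, the same index keys can double as a partitioning scheme for distributing the work, which is the chapter's extension of the technique.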


Author(s):  
Ivan Milman ◽  
Martin Oberhofer ◽  
Sushain Pandit ◽  
Yinle Zhou

Most large enterprises run operational business processes (e.g., call centers, human resources, order fulfillment, billing) across anywhere from a few hundred to several thousand instances of legacy, upgraded, cloud-based, and/or acquired information management applications. Due to this vastly heterogeneous information landscape, Business Intelligence (BI) systems (e.g., enterprise data warehouses) receive unconsolidated data from a wide range of data sources with no overarching governance procedures to ensure quality, consistency, or appropriateness. Although different applications deal with their own flavor of data (e.g., master data, metadata, unstructured and structured data), reference data (residing in code tables) is found invariably in all of them. Given the critical role BI plays in business success, its heavy reliance on data quality to ensure that the intelligence it provides is trustworthy, and the prevalence of reference data across the information integration landscape, a principled approach to the management, stewardship, and governance of reference data is necessary to ensure quality and operational excellence across BI systems. In this chapter, the authors discuss this approach in terms of typical reference data management concepts and features, leading to a comprehensive solution architecture for BI integration.


Author(s):  
Sang Hyun Lee ◽  
Abrar Haider

Information quality is critical for any business. It is particularly important for mission-critical information systems that manage the lifecycle of engineering assets. Quality of information, or the lack thereof, in these systems can be traced to technical, organisational, and human sources. It is, therefore, extremely important to ascertain the causes that contribute to poor information quality in asset lifecycle management systems. Depending upon the business area, organisations take a proactive or reactive approach to establishing, maintaining, and enhancing their information quality. Among proactive approaches, the ability of the organisation to measure information quality dimensions forms the foundation of a solid information quality management initiative. Such measurement, however, is an intricate task, because these dimensions are subjective, may be context-dependent or context-independent, and are interdependent, since each dimension affects the others. This research takes a product perspective on information and applies Six Sigma methodology to assess information quality in the systems used to manage the engineering asset lifecycle. It uses the analytic hierarchy process and quality function deployment to convert subjective information quality dimensions into objective metrics, assesses the relationships between the various dimensions, and ascertains critical-to-quality factors. The results thus obtained form the basis for monitoring information quality with the aim of continuous improvement. This study contributes to literature and practice by providing a method for assessing the correlation of information quality dimensions, applying Six Sigma to information, and controlling information quality.
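The analytic hierarchy process step can be sketched in miniature (this is an illustrative example, not the chapter's actual data): a pairwise comparison matrix over three hypothetical IQ dimensions is turned into priority weights by the standard column-normalisation and row-averaging method.

```python
# Hypothetical IQ dimensions and a Saaty-scale pairwise comparison
# matrix: M[i][j] says how strongly dimension i is preferred over j.
dims = ["accuracy", "completeness", "timeliness"]
M = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
n = len(M)

# Normalise each column so it sums to 1, then average across each row
# to obtain the priority weight of each dimension.
col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
weights = [
    sum(M[i][j] / col_sums[j] for j in range(n)) / n
    for i in range(n)
]
# weights sum to 1, with accuracy weighted highest in this example.
```

In the chapter's approach, weights of this kind make the subjective dimensions comparable, feeding the quality function deployment step that links them to critical-to-quality factors.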


Author(s):  
Daragh O Brien

Data Protection (DP) and privacy are increasingly important quality characteristics of information, particularly in the context of Business Intelligence and Big Data. The relationship between Data Protection and Information Quality (IQ) is often poorly understood, and DP itself is often misconstrued as an issue of security controls rather than information governance. This chapter examines the relationship between DP, IQ, and Data Governance (DG). It provides an overview of how techniques and practices from IQ and DG can ground BI projects in appropriate privacy controls, so that the right information is used in the right way by the right people to answer the right questions.


Author(s):  
Tom Breur

Business Intelligence (BI) projects that involve substantial data integration have often proven failure-prone and difficult to plan. Data quality issues trigger rework, which makes it difficult to schedule deliverables accurately. Two things can bring improvement. First, one should deliver information products in the smallest possible chunks, without adding prohibitive overhead for breaking the work into tiny increments. This increases the frequency and timeliness of feedback on the suitability of information products, and hence makes planning and progress more predictable. Second, BI teams need to provide better stewardship when they facilitate discussions between departments whose data cannot easily be integrated. Many so-called data quality errors stem not from inaccurate source data but from incorrect interpretation of data, mostly caused by departments with misaligned performance objectives interpreting essentially the same underlying source system facts differently. Such problems require prudent stakeholder management and informed negotiation to resolve. In this chapter, the authors suggest an innovation to data warehouse architecture to help accomplish these objectives.


Author(s):  
Neal Gibson ◽  
Greg Holland

This chapter outlines a longitudinal database structure that allows data to be joined across disparate systems and government agencies. While the approach is specific to government agencies, many of the ideas implemented come from the commercial world and are relevant to data integration problems in all domains. The goal of the system is to allow data sharing between agencies while upholding the strictest interpretations of the rules and regulations protecting individual privacy and confidentiality. The ability to link records over time is central to such a system, so a knowledge-based approach to entity resolution is outlined, along with how a system that integrates longitudinal data from multiple sources can still protect individual privacy and confidentiality. Central to this protection is the principle that personally identifiable information should not proliferate across multiple systems. The system, TrustEd, is a hybrid model that combines the simplicity of a centralized model with the privacy protection of a federated model.
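One common way to link records over time without proliferating personally identifiable information is to derive a one-way linkage token from normalised identifiers. The sketch below illustrates that general idea; it is not TrustEd's actual mechanism, and the salt, field choices, and normalisation are hypothetical.

```python
import hashlib

def link_token(name: str, dob: str, salt: str) -> str:
    """Derive a one-way linkage token so records for the same person
    can be matched over time without the receiving system ever
    storing the underlying PII."""
    # Normalise so trivially different entries yield the same token.
    normalized = f"{name.strip().lower()}|{dob}"
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

SALT = "agency-shared-secret"  # hypothetical secret shared by agencies

a = link_token("Jane Doe", "2001-04-07", SALT)
b = link_token("  JANE DOE ", "2001-04-07", SALT)  # same person, messy entry
c = link_token("John Doe", "2001-04-07", SALT)     # different person
# a == b, while a != c: matching works on tokens, not raw names.
```

Because the hash is one-way and salted, an agency holding only the tokens can join longitudinal records without being able to recover the names and birth dates that produced them.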


Author(s):  
Scott Delaney

Business intelligence systems have reached business-critical status within many companies; it is not uncommon for such systems to be central to the decision-making effectiveness of these enterprises. However, the processes used to load data into these systems often lack a level of robustness in line with their criticality to the organisation. The ETL processes that load business intelligence systems are subject to compromised execution, delays, or failures as a result of changes in the source system data, because they are not designed to recognise or deal with such shifts in data shape. This chapter proposes the use of data profiling techniques as a means of early discovery of issues and changes within the source system data, and examines how this knowledge can be applied to guard against reductions in organisational decision-making capability caused by interruptions to business intelligence system availability or compromised data quality. It examines where profiling can best be applied to get appropriate benefit and value, the techniques for establishing profiling, and the types of actions that may be taken once the results of profiling are available. The chapter describes components that can be drawn together into a system of control around a business intelligence system, enhancing the quality of organisational decision making by monitoring the characteristics of arriving data and taking action when values are materially different from those expected.
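A minimal sketch of the kind of control described above (illustrative only; the metrics, thresholds, and field names are assumptions, not the chapter's specification): profile an arriving batch and flag it when a characteristic drifts materially from the expected profile, before the load proceeds.

```python
def profile(rows, column):
    """Compute simple profile statistics for one column of a batch."""
    values = [r.get(column) for r in rows]
    non_null = [v for v in values if v is not None]
    return {
        "null_rate": 1 - len(non_null) / len(values),
        "distinct": len(set(non_null)),
    }

def check_batch(batch_profile, expected, tolerance=0.05):
    """Flag a batch whose profile drifts materially from expectations,
    so the ETL load can be halted before bad data reaches the
    warehouse and compromises decision making."""
    issues = []
    if batch_profile["null_rate"] > expected["null_rate"] + tolerance:
        issues.append("null rate above expected threshold")
    return issues

# Hypothetical arriving batch: one of four customer IDs is missing.
rows = [
    {"customer_id": 1},
    {"customer_id": None},
    {"customer_id": 3},
    {"customer_id": 4},
]
p = profile(rows, "customer_id")
issues = check_batch(p, {"null_rate": 0.10})
# A 25% null rate against an expected 10% (+5% tolerance) raises a flag.
```

In practice such checks would cover more characteristics (value ranges, distinct counts, pattern conformance) and feed an alerting or quarantine step rather than a simple list of issues.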

