Designing Information Product (IP) Maps On the Process of Data Processing and Academic Information

2017 ◽  
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product (IP) approach is an information management approach that can be used to manage information products and to analyse data quality. Organizations can use an IP-Map to manage knowledge about how data are collected, stored, maintained, and used in an organized way. The data management process for academic activities at X University has not yet used the IP approach, and the university has paid little attention to the quality of its information: to date it has focused only on the application systems used to automate data management in its academic processes. The IP-Maps constructed in this paper can serve as a basis for analysing the quality of data and information. With the IP-Map, X University is expected to identify which parts of the process need improvement in data and information quality management. Index terms: IP approach, IP-Map, information quality, data quality.
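The abstract's central artifact is a map of how an information product is assembled. As a rough illustration of the idea, and not the paper's actual model, the sketch below represents a hypothetical academic IP-Map as a directed graph whose block types loosely follow standard IP-Map notation (source, quality check, storage, processing, consumer); every block name is invented. Tracing the blocks upstream of an information product shows where a quality problem could originate.

```python
# Minimal sketch (not from the paper): a hypothetical academic IP-Map as a
# directed graph, so upstream dependencies of an information product can be
# traced when its quality is questioned.
from collections import defaultdict

# IP-Map construct blocks; types loosely follow standard IP-Map notation.
blocks = {
    "registrar_form":   "source",   # raw data: student enrollment forms
    "entry_validation": "quality",  # completeness/format checks at entry
    "academic_db":      "storage",  # institutional database
    "grade_processing": "process",  # aggregation into transcripts
    "transcript":       "consumer", # the information product delivered
}

edges = [
    ("registrar_form", "entry_validation"),
    ("entry_validation", "academic_db"),
    ("academic_db", "grade_processing"),
    ("grade_processing", "transcript"),
]

def upstream(block: str) -> list[str]:
    """Return every block that feeds `block`, i.e. every place a quality
    problem in the information product could have been introduced."""
    parents = defaultdict(list)
    for src, dst in edges:
        parents[dst].append(src)
    seen, stack = [], [block]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.append(p)
                stack.append(p)
    return seen

print(upstream("transcript"))
# ['grade_processing', 'academic_db', 'entry_validation', 'registrar_form']
```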

2017 ◽  
Vol 7 (2) ◽  
pp. 88
Author(s):  
Ahmad Fahmi Karami

Organizational performance depends on the strategic decisions taken by stakeholders, and those decisions in turn depend on the quality of the data and information available to the organization. Data and information quality is considered good when it meets the criteria of its users; because user needs within an organization differ according to their aims and objectives, these criteria are not universal. This research aims to improve data and information quality management by using information systems to produce good-quality data and information and thereby help improve organizational performance at a palm oil processing factory in Indonesia. The study examined how data and information quality management produces data and information, and how it contributes to mill performance, using interviews with those responsible for implementing data and information quality management, observation, and review of documents related to factory performance. The findings show that some procedures in the implementation of data and information quality management are still not carried out, so the resulting data and information do not entirely match user needs. Although the procedures have not been fully implemented, the data and information produced have helped users in decision making and succeeded in lowering mill breakdown by 0.10%.


Author(s):  
Patrick Ohemeng Gyaase ◽  
Joseph Tei Boye-Doe ◽  
Christiana Okantey

Quality data from the Expanded Programme on Immunization (EPI), which is pivotal in reducing infant mortality globally, is critical for knowledge management on the EPI. This chapter assesses the quality of data on the six childhood killer diseases from the EPI tally books, monthly reports, and the District Health Information Management System (DHIMS II) using the Data Quality Self-Assessment (DQS) tool of the WHO. The study found high availability and completeness of data in the EPI tally books and the monthly EPI reports. The accuracy and currency of data on all antigens in the EPI tally books, compared to the reported numbers issued, were comparatively low. The composite quality index of the data from the EPI is thus low, an indication of poor supervision of the EPI programme in the health facilities. There is therefore a need for effective monitoring and data validation at the point of collection and entry to improve data quality for knowledge management on the EPI programme.
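The chapter's composite quality index aggregates component indicators such as completeness and accuracy. The sketch below illustrates that kind of arithmetic with invented numbers; the real WHO DQS tool defines its own indicators, verification factors, and scoring, so this is only a conceptual approximation.

```python
# Simplified sketch of the arithmetic behind a composite quality index.
# The indicators and figures are illustrative, not taken from the chapter
# or from the WHO DQS tool's actual scoring rules.

def accuracy_ratio(tally_count: int, reported_count: int) -> float:
    """Verification factor: doses recounted in the EPI tally books divided
    by the number reported upward; 1.0 means perfect agreement."""
    return tally_count / reported_count if reported_count else 0.0

def composite_index(indicators: dict[str, float]) -> float:
    """Unweighted mean of component indicators, each on a 0-1 scale."""
    return sum(indicators.values()) / len(indicators)

facility = {
    "completeness_tally":   0.95,  # tally books available and filled in
    "completeness_reports": 0.92,  # monthly reports submitted on time
    "accuracy_penta3": accuracy_ratio(tally_count=410, reported_count=468),
}

print(f"composite quality index: {composite_index(facility):.2f}")  # 0.92
```

A low accuracy ratio, as here, drags the composite down even when availability and completeness are high, which matches the pattern the chapter reports.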


2020 ◽  
pp. 089443932092824 ◽  
Author(s):  
Michael J. Stern ◽  
Erin Fordyce ◽  
Rachel Carpenter ◽  
Melissa Heim Viox ◽  
Stuart Michaels ◽  
...  

Social media recruitment is no longer an uncharted avenue for survey research. The results thus far provide evidence of an engaging means of recruiting hard-to-reach populations. Questions remain, however, regarding whether this method of recruitment produces quality data. This article assesses one aspect that may influence the quality of data gathered through nonprobability sampling using social media advertisements for a hard-to-reach sexual and gender minority youth population: recruitment design formats. The data come from the Survey of Today’s Adolescent Relationships and Transitions, which used a variety of forms of advertisements as survey recruitment tools on Facebook, Instagram, and Snapchat. Results demonstrate that design decisions such as the format of the advertisement (e.g., video or static) and the use of eligibility language on the advertisements affect the quality of the data as measured by break-off rates and the use of nonsubstantive responses. Additionally, the type of device used affected the measures of data quality.
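Both of the article's quality measures are directly computable from survey paradata. As a hedged illustration, with field names and figures invented rather than drawn from the SMART study, the sketch below computes break-off rates and nonsubstantive-response rates per advertisement format.

```python
# Illustrative computation of the two data-quality measures the article uses,
# grouped by recruitment advertisement format. All values are invented.
import pandas as pd

responses = pd.DataFrame({
    "ad_format":      ["video", "video", "static", "static", "static"],
    "completed":      [True, False, True, True, False],
    "items_shown":    [40, 12, 40, 40, 25],
    "nonsubstantive": [1, 4, 2, 0, 6],
})

grouped = responses.groupby("ad_format")
summary = pd.DataFrame({
    # break-off rate: share of respondents who abandoned before the end
    "breakoff_rate": 1 - grouped["completed"].mean(),
    # nonsubstantive rate: "don't know"/refused answers per item administered
    "nonsub_rate": grouped["nonsubstantive"].sum() / grouped["items_shown"].sum(),
})
print(summary)
```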


10.28945/2584 ◽  
2002 ◽  
Author(s):  
Herna L. Viktor ◽  
Wayne Motha

Increasingly, large organizations are engaging in data warehousing projects in order to achieve a competitive advantage through the exploration of the information as contained therein. It is therefore paramount to ensure that the data warehouse includes high quality data. However, practitioners agree that the improvement of the quality of data in an organization is a daunting task. This is especially evident in data warehousing projects, which are often initiated “after the fact”. The slightest suspicion of poor quality data often hinders managers from reaching decisions, when they waste hours in discussions to determine what portion of the data should be trusted. Augmenting data warehousing with data mining methods offers a mechanism to explore these vast repositories, enabling decision makers to assess the quality of their data and to unlock a wealth of new knowledge. These methods can be effectively used with inconsistent, noisy and incomplete data that are commonplace in data warehouses.
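As a loose illustration of what mining-style quality assessment can mean in practice, and not the authors' own technique, the sketch below screens a small invented warehouse extract for the three problems the abstract names: incomplete, inconsistent, and noisy data.

```python
# Illustrative screening of a warehouse extract (invented data) for
# incompleteness, rule violations, and statistical outliers.
import pandas as pd

orders = pd.DataFrame({
    "order_id":   [1, 2, 3, 4, 5, 6, 7],
    "quantity":   [10, 12, None, 11, 13, 9, 400],        # None: incomplete record
    "unit_price": [5.0, 5.0, 5.0, -5.0, 5.0, 5.0, 5.0],  # negative: rule violation
})

# 1. Completeness: rows with any missing field.
incomplete = orders[orders.isna().any(axis=1)]

# 2. Consistency: domain rules the data must satisfy.
violations = orders[orders["unit_price"] < 0]

# 3. Plausibility: flag statistical outliers via z-scores (NaNs are skipped).
q = orders["quantity"]
outliers = orders[((q - q.mean()) / q.std()).abs() > 2]

print(len(incomplete), len(violations), len(outliers))  # 1 1 1
```

Screens like these give decision makers a quantified view of how much of the warehouse can be trusted, rather than leaving that judgment to discussion.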


Author(s):  
Benjamin Ngugi ◽  
Jafar Mana ◽  
Lydia Segal

As the nation confronts a growing tide of security breaches, the importance of having quality data breach information systems becomes paramount. Yet too little attention is paid to evaluating these systems. This article draws on data quality scholarship to develop a yardstick that assesses the quality of data breach notification systems in the U.S. at both the state and national levels from the perspective of key stakeholders, who include law enforcement agencies, consumers, shareholders, investors, researchers, and businesses that sell security products. Findings reveal major shortcomings that reduce the value of data breach information to these stakeholders. The study concludes with detailed recommendations for reform.


2009 ◽  
Vol 11 (2) ◽  
Author(s):  
L. Marshall ◽  
R. De la Harpe

Making decisions in a business intelligence (BI) environment can become extremely challenging, and sometimes even impossible, if the data on which the decisions are based are of poor quality. Data can only be utilised effectively when they are accurate, up to date, complete, and available when needed. BI decision makers and users are in the best position to determine the quality of the data available to them, and it is important to ask them the right questions; the issues of information quality in the BI environment were therefore established through a literature study. Information-related problems may cause supplier relationships to deteriorate, reduce internal productivity, and undermine the business's confidence in IT. Ultimately they can affect an organisation's ability to perform and remain competitive. The purpose of this article is to identify the underlying factors that prevent information from being easily and effectively utilised, and to understand how these factors influence the decision-making process, particularly within a BI environment. An exploratory investigation was conducted at a large retail organisation in South Africa to collect empirical data from BI users through unstructured interviews. The main findings indicate specific causes that impact the decisions of BI users, including the accuracy, inconsistency, understandability and availability of information. Key performance measures directly affected by the quality of data used in decision-making include waste, availability, sales and supplier fulfilment. The time spent investigating and resolving data quality issues has a major impact on productivity. The importance of documentation was highlighted as an issue that requires further investigation. The initial results indicate the value of …


Author(s):  
Arun Thotapalli Sundararaman

DQ/IQ measurement in general and in the specific context of BI has always been a topic of high interest for researchers. The topic of Data Quality (DQ) in the field of Information Management has been well researched, published, and studied. Despite such research advances, there has been very little understanding either from a theoretical or from a practical perspective of DQ/IQ measurement for BI. Assessing the quality of data for a BI System has been one of the major challenges for researchers as well as practitioners, leading to the need for frameworks to measure DQ for BI. The objective of this chapter is to provide an overview of the existing frameworks for measurement of DQ for BI, analyze the gaps therein, review proposed solutions, and provide a direction for future research and practice in this area.


Author(s):  
Nishita Shewale

Abstract: Unified information systems give different establishments insight into how data-related activities take place and assurance about the quality of their results. Because data accumulation, duplication, missing entities, incorrect formatting, anomalies, and similar problems can arise when data are collected across different information systems, with an array of adverse effects on data quality, the subject of data quality deserves careful treatment. This paper examines data quality problems in information systems and introduces techniques that enable organizations to improve the quality of their data. Keywords: Information Systems (IS), Data Quality, Data Cleaning, Data Profiling, Standardization, Database, Organization
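A minimal sketch of the profiling-then-cleaning workflow the keywords point to (data profiling, data cleaning, standardization), using invented records and rules; the paper's own techniques may differ.

```python
# Profiling then cleaning a small invented customer table: measure
# duplication, missing entities, and incorrect formatting first, then fix them.
import pandas as pd

customers = pd.DataFrame({
    "id":    [101, 102, 102, 103],
    "name":  ["Ada Lovelace", "alan turing", "alan turing", None],
    "phone": ["021-555-0101", "5550102", "5550102", "021-555-0103"],
})

# Profiling: quantify each problem before touching the data.
report = {
    "duplicate_rows":   int(customers.duplicated().sum()),
    "missing_names":    int(customers["name"].isna().sum()),
    "bad_phone_format": int((~customers["phone"]
                             .str.match(r"^\d{3}-\d{3}-\d{4}$")).sum()),
}
print(report)  # {'duplicate_rows': 1, 'missing_names': 1, 'bad_phone_format': 2}

# Cleaning: deduplicate and standardize to one canonical format.
cleaned = customers.drop_duplicates().copy()
cleaned["name"] = cleaned["name"].str.title()
cleaned["phone"] = cleaned["phone"].str.replace(
    r"^(\d{3})(\d{4})$", r"021-\1-\2", regex=True)  # assumed local prefix 021
print(cleaned)
```

Profiling first makes the cleaning step auditable: the same report, rerun after cleaning, shows exactly which problems were resolved.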


2020 ◽  
Author(s):  
Cristina Costa-Santos ◽  
Ana Luísa Neves ◽  
Ricardo Correia ◽  
Paulo Santos ◽  
Matilde Monteiro-Soares ◽  
...  

Background: High-quality data are crucial for guiding decision making and practising evidence-based healthcare, especially if previous knowledge is lacking. Nevertheless, data quality frailties have been exposed worldwide during the current COVID-19 pandemic. Focusing on a major Portuguese surveillance dataset, our study aims to assess data quality issues and suggest possible solutions.
Methods: On April 27th 2020, the Portuguese Directorate-General of Health (DGS) made a dataset (DGSApril) available to researchers upon request. On August 4th, an updated dataset (DGSAugust) was also obtained. Data quality was assessed through analysis of data completeness and of consistency between the two datasets.
Results: DGSAugust did not follow the same data format and variables as DGSApril, and a significant number of missing data and inconsistencies were found (e.g. 4,075 cases from DGSApril were apparently not included in DGSAugust). Several variables also showed a low degree of completeness and/or changed their values from one dataset to the other (e.g. the variable 'underlying conditions' had more than half of cases showing different information between datasets). There were also significant inconsistencies between the numbers of COVID-19 cases and deaths shown in DGSAugust and those in the daily public DGS reports.
Conclusions: The low quality of COVID-19 surveillance datasets limits their usability for informing good decisions and performing useful research. Major improvements in surveillance datasets are therefore urgently needed - e.g. simplification of data entry processes, constant monitoring of data, and increased training and awareness of healthcare providers - as low data quality may lead to deficient pandemic control.
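The consistency analysis described, comparing two releases of the same surveillance dataset for dropped cases and changed values, reduces to a set difference and a join. The sketch below shows the idea with invented miniature data; the column names are assumptions, not the actual DGS schema.

```python
# Consistency checks between two releases of a surveillance dataset:
# which cases disappeared, and which values changed. Data are invented.
import pandas as pd

april = pd.DataFrame({
    "case_id": [1, 2, 3, 4],
    "underlying_conditions": ["yes", "no", "no", "yes"],
})
august = pd.DataFrame({
    "case_id": [1, 2, 4],                            # case 3 disappeared
    "underlying_conditions": ["yes", "yes", "yes"],  # case 2 changed
})

# Cases present in the first release but missing from the update.
dropped = set(april["case_id"]) - set(august["case_id"])

# Values that changed between releases for cases present in both.
merged = april.merge(august, on="case_id", suffixes=("_apr", "_aug"))
changed = merged[
    merged["underlying_conditions_apr"] != merged["underlying_conditions_aug"]
]

print(f"dropped cases: {dropped}")      # {3}
print(f"changed records:\n{changed}")   # case 2: 'no' -> 'yes'
```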


2020 ◽  
Author(s):  

Good data management is essential for ensuring the validity and quality of data in all types of clinical research and is an essential precursor for data sharing. The Data Management Portal has been developed to provide support to researchers to ensure that high-quality data management is fully considered, and planned for, from the outset and throughout the life of a research project. The steps described in the portal will help identify the areas which should be considered when developing a Data Management Plan, with a particular focus on data management systems and how to organise and structure your data. Other elements include best practices for data capture, entry, processing and monitoring, how to prepare data for analysis, sharing, and archiving, and an extensive collection of resources linked to data management which can be searched and filtered depending on their type.

