Quality of data model for supporting mobile decision making

2007 ◽  
Vol 43 (4) ◽  
pp. 1675-1683 ◽  
Author(s):  
Julie Cowie ◽  
Frada Burstein


Author(s):  
Francisco Javier Villar Martín ◽  
Jose Luis Castillo Sequera ◽  
Miguel Angel Navarro Huerga

The quality of a company's information system, and of its physical data model, is essential. In this article, the authors apply data mining techniques to generate knowledge from the information system's data model and to discover and understand hidden patterns in the data that govern the planning of flight hours for an airline's pilots and copilots. For this purpose they use the free software Weka, which offers a set of algorithms and visualization tools geared to data analysis and predictive modeling of information systems. Firstly, they apply clustering to study the information system and analyze the data model; secondly, they apply association rules to discover connection patterns in the data; and finally, they generate a decision tree to classify and extract more specific patterns. Based on these information system data, the authors draw conclusions to improve future decision making in the assignment of an airline's flight hours.
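
Not from the article itself: a minimal Python sketch (scikit-learn standing in for Weka, which is Java-based) of the same three-step workflow, on an invented flight-hours table; all column names and parameters are assumptions for illustration.

```python
# Hypothetical illustration of the cluster -> mine patterns -> decision tree
# workflow described above, using scikit-learn instead of Weka.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented sample of monthly flight-hour records for pilots/copilots.
rng = np.random.default_rng(0)
crew = pd.DataFrame({
    "flight_hours": rng.normal(70, 15, 200).round(1),
    "night_hours": rng.normal(15, 5, 200).round(1),
    "long_haul_ratio": rng.uniform(0, 1, 200).round(2),
    "is_captain": rng.integers(0, 2, 200),
})

# Step 1: clustering to explore structure in the planning data.
crew["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    crew[["flight_hours", "night_hours", "long_haul_ratio"]]
)

# Step 2 (association rules) would need a dedicated library such as mlxtend on
# discretised, one-hot data; omitted here for brevity.

# Step 3: a shallow decision tree to extract explicit, readable assignment patterns.
features = ["night_hours", "long_haul_ratio", "is_captain"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(crew[features], crew["cluster"])
print(export_text(tree, feature_names=features))
```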


2015 ◽  
Vol 809-810 ◽  
pp. 1528-1534
Author(s):  
Alexandre Sava ◽  
Kondo Adjallah ◽  
Valentin Zichil

The quality of data is recognized as a key issue for asset management in enterprises, as data is the foundation of any decision-making process. Recent research has established that the quality of data depends strongly on the knowledge one has of the socio-technical system being considered. Three modes of knowledge have been identified: knowing what, knowing how and knowing why. In this paper we focus on how to manage these modes of knowledge in durable socio-technical systems to sustain data quality in the face of technological progress and employee turnover. We believe that an organization based on the ISO 9001 international standard can provide a valuable framework for delivering the data quality needed for an efficient decision-making process. This framework has been applied to design the data quality management system within a higher-education socio-technical system. The most important benefits noticed were: 1) a shared vision of the system's external clients, with a positive impact on the definition of the system's strategy and objectives, and 2) a deep understanding of the data client-supplier relationship inside the socio-technical system. A direct consequence of these achievements was increased knowledge of the "know-what" (which data to collect), the "know-why" (why to collect that data) and the "know-how" (how to collect it).


2009 ◽  
Vol 11 (2) ◽  
Author(s):  
L. Marshall ◽  
R. De la Harpe

Making decisions in a business intelligence (BI) environment can become extremely challenging and sometimes even impossible if the data on which the decisions are based are of poor quality. Data can only be utilised effectively when they are accurate, up to date, complete and available when needed. BI decision makers and users are in the best position to judge the quality of the data available to them, so it is important to ask them the right questions; the issues of information quality in the BI environment were therefore established through a literature study. Information-related problems may cause supplier relationships to deteriorate and may reduce internal productivity and the business's confidence in IT; ultimately they can affect an organisation's ability to perform and remain competitive. This article aims to identify the underlying factors that prevent information from being easily and effectively utilised and to understand how these factors can influence the decision-making process, particularly within a BI environment. An exploratory investigation was conducted at a large retail organisation in South Africa to collect empirical data from BI users through unstructured interviews. The main findings point to specific causes that affect the decisions of BI users, including accuracy, inconsistency, understandability and availability of information. Key performance measures directly affected by data quality in decision-making include waste, availability, sales and supplier fulfilment. The time spent investigating and resolving data quality issues has a major impact on productivity. Documentation was highlighted as an important issue requiring further investigation. The initial results indicate the value of
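
As an aside, not drawn from the study: a minimal sketch of how the quality dimensions named in the findings (accuracy, consistency, availability/timeliness) might be profiled automatically before a BI report is trusted; the table, rules and thresholds are invented.

```python
# Invented retail extract profiled against three of the quality dimensions
# mentioned above; each metric is the fraction of rows passing the rule.
import pandas as pd

sales = pd.DataFrame({
    "store_id": [10, 10, 11, 11],
    "units_sold": [5, -2, 7, 7],                        # a negative quantity is suspect
    "unit_price": [3.50, 3.50, 4.00, 4.10],             # same store, inconsistent price
    "loaded_at": pd.to_datetime(["2009-01-05", "2009-01-05", "2009-01-05", "2009-01-09"]),
})

report_date = pd.Timestamp("2009-01-06")
profile = {
    # Accuracy: quantities should never be negative.
    "accuracy_units_nonnegative": float((sales["units_sold"] >= 0).mean()),
    # Consistency: each store should carry a single unit price in this extract.
    "consistency_one_price_per_store": float((sales.groupby("store_id")["unit_price"].nunique() == 1).mean()),
    # Availability/timeliness: rows must be loaded by the time the report runs.
    "availability_loaded_by_report_date": float((sales["loaded_at"] <= report_date).mean()),
}
print(profile)
```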


2021 ◽  
Vol 11 (24) ◽  
pp. 11920
Author(s):  
Clair Blacketer ◽  
Erica A. Voss ◽  
Frank DeFalco ◽  
Nigel Hughes ◽  
Martijn J. Schuemie ◽  
...  

Federated networks of observational health databases have the potential to be a rich resource to inform clinical practice and regulatory decision making. However, the lack of standard data quality processes makes it difficult to know if these data are research ready. The EHDEN COVID-19 Rapid Collaboration Call presented the opportunity to assess how the newly developed open-source tool Data Quality Dashboard (DQD) informs the quality of data in a federated network. Fifteen Data Partners (DPs) from 10 different countries worked with the EHDEN taskforce to map their data to the OMOP CDM. Throughout the process at least two DQD results were collected and compared for each DP. All DPs showed an improvement in their data quality between the first and last run of the DQD. The DQD excelled at helping DPs identify and fix conformance issues but showed less of an impact on completeness and plausibility checks. This is the first study to apply the DQD on multiple, disparate databases across a network. While study-specific checks should still be run, we recommend that all data holders converting their data to the OMOP CDM use the DQD as it ensures conformance to the model specifications and that a database meets a baseline level of completeness and plausibility for use in research.
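
For orientation only: the DQD itself is an open-source R package and its checks are far more extensive, but the sketch below illustrates the three check categories the abstract refers to (conformance, completeness, plausibility) against a toy OMOP-CDM-style PERSON table; the table, concept IDs and thresholds are invented for illustration.

```python
# Illustrative, simplified examples of the three check categories; not the DQD API.
import pandas as pd

# Toy extract of an OMOP-CDM-style PERSON table.
person = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "gender_concept_id": [8507, 8532, 0, 8507],   # 8507 = male, 8532 = female
    "year_of_birth": [1950, None, 1985, 1890],
})

results = {
    # Conformance: values must come from the allowed concept set.
    "gender_conformance": person["gender_concept_id"].isin([8507, 8532]).mean(),
    # Completeness: fraction of non-missing values.
    "year_of_birth_completeness": person["year_of_birth"].notna().mean(),
    # Plausibility: birth years within a believable range.
    "year_of_birth_plausibility": person["year_of_birth"].dropna().between(1900, 2021).mean(),
}
for check, passed_fraction in results.items():
    print(f"{check}: {passed_fraction:.0%} of rows pass")
```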


2022 ◽  
pp. 197-217
Author(s):  
Gregory Smith ◽  
Thilini Ariyachandra

Disaster recovery management requires agile decision making and action that can be supported through business intelligence (BI) and analytics. Yet fundamental data issues, such as challenges in data quality, have continued to plague disaster recovery efforts, leading to delays and high costs in disaster support. This chapter presents an example of these issues from the 2005 Atlantic hurricane season, when Hurricane Katrina wreaked havoc upon the city of New Orleans and forced the Federal Emergency Management Agency (FEMA) to begin an unprecedented cleanup effort. The chapter brings to light the failings in record keeping during this disaster and highlights how a simple BI application can improve the accuracy and quality of data and save costs. It also highlights the ongoing data-driven issues that FEMA continues to confront in disaster recovery management and the need for integrated, centralized BI and analytics solutions, extending to the supply chain, that would make FEMA more nimble and effective when dealing with disasters.


Author(s):  
Elizaveta Pavlovna Bashkova ◽  
Andrei Evgen'evich Dzengelevskii

This article addresses the problem of a credit organization's effective use of its own client data, which is hampered by their heterogeneity. The object of this research is the client data of a credit organization. The subject is the possibility of applying efficient client data management within a credit organization in order to fulfil regulatory requirements. The main goal of this work is to analyze and formulate recommendations for modifying the client data model and for improving client data quality through data management based on the DAMA-DMBOK body of knowledge, structured around its key areas of data management. The article reviews these key areas and the ways in which the rules and recommendations of each can be applied to the credit organization's data management process. The relevance of this article stems from legislative requirements concerning the monitoring of services provided to individuals and legal entities from sanctioned territories. The scientific novelty lies in applying data management knowledge to the pressing need to determine a client's territorial affiliation automatically in order to comply with normative legal acts. The research methods are content analysis, structural analysis, and modeling. The main conclusions are recommendations for modifying the bank's client data model, namely methods and principles for extending the data model with standardized data parameters, which significantly improves data quality. Implementing these recommendations would allow a financial organization to reduce the risks of fines and license revocation due to noncompliance with regulatory requirements.
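
A hypothetical sketch of the kind of model extension described: territory is stored as a standardized code (ISO 3166-2 style) rather than free text, so sanctioned-territory affiliation can be derived automatically. The code list, class and method names are invented for illustration and do not come from the article.

```python
# Invented example of a client data model extended with standardized territory codes.
from dataclasses import dataclass

SANCTIONED_TERRITORIES = {"UA-40", "UA-43"}   # invented example code list

@dataclass
class ClientRecord:
    client_id: int
    full_name: str
    registration_territory_code: str          # standardized code, not free text
    residence_territory_code: str

    def requires_sanctions_review(self) -> bool:
        """True if registration or residence falls in a sanctioned territory."""
        return bool({self.registration_territory_code,
                     self.residence_territory_code} & SANCTIONED_TERRITORIES)

client = ClientRecord(1, "Example Client", "UA-43", "RU-MOW")
print(client.requires_sanctions_review())     # True
```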


1992 ◽  
Vol 31 (02) ◽  
pp. 136-146 ◽  
Author(s):  
R. Haux ◽  
C. A. Müller ◽  
F. Gerneth

Abstract: In this study we investigated whether, and to what extent, semantic data models and their data modeling methods are useful for adequately representing and integrating immunological and clinical data. To that end, the special research program in leukemia research and immunogenetics (SFB 120) of the University of Tübingen was taken as an example. Based on the semantic data model RIWT we propose the design of a database system, report on its realization, and discuss this approach. Using a semantic data model, the quality of data increased considerably; for instance, the integration of molecular-biological knowledge allows better control of the person-related results. Hence, decisions based on these data may have greater validity and the treatment of leukemia patients can be improved. Furthermore, the elucidation of immune mechanisms in auto-immune diseases could also be improved.
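
Illustrative only (the paper's RIWT model is not reproduced here): a minimal sketch of the general idea of a semantic model in which person-level clinical results explicitly reference molecular-biological knowledge, so they can be checked and interpreted against it; the entity names and associations are invented examples.

```python
# Invented two-level sketch: a knowledge-level reference entity and a
# person-level result that points at it.
from dataclasses import dataclass

@dataclass(frozen=True)
class HlaAllele:                       # knowledge-level (reference) entity
    name: str
    associated_diseases: tuple

@dataclass
class TypingResult:                    # person-level entity linked to the knowledge level
    patient_id: int
    allele: HlaAllele

HLA_B27 = HlaAllele("HLA-B27", ("ankylosing spondylitis",))
result = TypingResult(patient_id=42, allele=HLA_B27)

# Because the result references a controlled knowledge-level entity, person-related
# findings can be validated and interpreted against molecular-biological knowledge.
print(result.allele.associated_diseases)
```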


2007 ◽  
Vol 7 ◽  
pp. 246-252
Author(s):  
T V Gopal

Back in the 1980s, corporations began collecting, combining, and crunching data from sources throughout the enterprise. This approach was widely accepted as a methodology that provides objectivity and transparency in decision-making. Good processing of the garnered data paved the way for improved analysis of trends and patterns, leading to better business and increased profit margins. Corporations began investing in collecting, storing, processing and maintaining enterprise-wide data. The focus was always on the quality of data and the process of converting it into knowledge that enables the right decisions. It was soon realized that a wide range of personal biases affects the way decisions are made, and the entire process is replete with ethical dilemmas. This paper provides a framework for understanding the interplay of data, information, personal biases, ethics and decision-making. The approach is suitable for any individual, team, organization or nation. Several years of turmoil in South Africa make it imperative for the country to take a fresh look at the way data is transformed into knowledge. The leadership within South Africa has to arrive at wise decisions that can withstand the scrutiny of generations to come.


Author(s):  
B. L. Armbruster ◽  
B. Kraus ◽  
M. Pan

One goal in electron microscopy of biological specimens is to improve the quality of data to match the resolution capabilities of modern transmission electron microscopes. Radiation damage and beam-induced movement caused by charging of the sample, low image contrast at high resolution, and sensitivity to external vibration and drift in side-entry specimen holders limit the effective resolution one can achieve. Several methods have been developed to address these limitations: cryomethods are widely employed to preserve and stabilize specimens against some of the adverse effects of the vacuum and electron beam irradiation, spot-scan imaging reduces charging and the associated beam-induced movement, and energy-filtered imaging removes the "fog" caused by inelastic scattering of electrons, which is particularly pronounced in thick specimens. Although most cryoholders can easily achieve a 3.4 Å resolution specification, information perpendicular to the goniometer axis may be degraded by vibration. Absolute drift after mechanical and thermal equilibration, as well as drift after movement of a holder, may cause loss of resolution in any direction.

