The advantages of an Ontology-Based Data Management approach: openness, interoperability and data quality

2016 ◽  
Vol 108 (1) ◽  
pp. 441-455 ◽  
Author(s):  
Cinzia Daraio ◽  
Maurizio Lenzerini ◽  
Claudio Leporelli ◽  
Paolo Naggar ◽  
Andrea Bonaccorsi ◽  
...


2017 ◽
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product Approach (IP Approach) is an information management approach that treats information as a product; it can be used to manage product information and to analyze data quality. Its modelling notation, the IP-Map, helps organizations manage the collection, storage, maintenance, and use of data in an organized manner. The data management process for academic activities at X University does not yet use the IP Approach: the university has paid little attention to the quality of its information, concentrating instead on the system applications that automate data management in its academic processes. The IP-Map constructed in this paper can serve as a basis for analyzing the quality of the university's data and information. Using the IP-Map, X University can identify which parts of the process need improvement in data and information quality management.

Index terms: IP Approach, IP-Map, information quality, data quality.
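
By way of illustration only (the paper builds its IP-Map graphically, not in code), an IP-Map can be modelled as a small directed graph of standard construct blocks and queried for process blocks whose output never passes a quality check. All block, field, and function names below are hypothetical, a minimal sketch rather than the paper's method:

```python
# Minimal sketch of an IP-Map as a directed graph. Real IP-Maps use
# standard construct blocks (source, process, storage, quality check,
# consumer); the names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    kind: str  # "source" | "process" | "storage" | "quality_check" | "consumer"

@dataclass
class IPMap:
    blocks: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (from_name, to_name)

    def add(self, name, kind):
        self.blocks[name] = Block(name, kind)

    def connect(self, src, dst):
        self.edges.append((src, dst))

    def unchecked_processes(self):
        """Processes whose output never flows into a quality-check block,
        i.e. candidate spots for data-quality improvement."""
        checked = {s for s, d in self.edges
                   if self.blocks[d].kind == "quality_check"}
        return [b.name for b in self.blocks.values()
                if b.kind == "process" and b.name not in checked]

# Toy academic-records flow (names are illustrative only)
m = IPMap()
m.add("enrolment_form", "source")
m.add("enter_grades", "process")
m.add("academic_db", "storage")
m.add("transcript", "consumer")
m.connect("enrolment_form", "enter_grades")
m.connect("enter_grades", "academic_db")
m.connect("academic_db", "transcript")
print(m.unchecked_processes())  # -> ['enter_grades']
```

Walking such a graph surfaces the same question the paper's IP-Map is meant to answer visually: which stages of the academic data flow lack any quality control.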


Trials ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Sophie Relph ◽  
Maria Elstad ◽  
Bolaji Coker ◽  
Matias C. Vieira ◽  
...  

Abstract

Background: The use of electronic patient records for assessing outcomes in clinical trials is a methodological strategy intended to drive faster and more cost-efficient acquisition of results. The aim of this manuscript was to outline the data collection and management considerations of a maternity and perinatal clinical trial using data from electronic patient records, exemplifying the DESiGN Trial as a case study.

Methods: The DESiGN Trial is a cluster randomised control trial assessing the effect of a complex intervention versus standard care for identifying small-for-gestational-age foetuses. Data on maternal/perinatal characteristics and outcomes, including infants admitted to neonatal care, parameters from foetal ultrasound, and details of hospital activity for health-economic evaluation, were collected at two time points from four types of electronic patient records held in 22 different electronic record systems at the 13 research clusters. Data were pseudonymised on site using a bespoke Microsoft Excel macro and securely transferred to the central data store. Data quality checks were undertaken. Rules for harmonisation of the raw data were developed and a data dictionary produced, along with rules and assumptions for linkage of the datasets. The dictionary included descriptions of the rationale and assumptions for data harmonisation and quality checks.

Results: Data were collected on 182,052 babies from 178,350 pregnancies in 165,397 unique women. Data availability and completeness varied across research sites; each of eight variables key to calculation of the primary outcome was completely missing in a median of 3 (range 1–4) clusters at the time of the first data download. This improved by the second data download, following clarification of instructions to the research sites (each of the eight key variables was completely missing in a median of 1 (range 0–1) cluster at the second time point). Common data management challenges were harmonising a single variable from multiple sources and categorising free-text data; solutions were developed for this trial.

Conclusions: Conduct of clinical trials which use electronic patient records for the assessment of outcomes can be time- and cost-effective but still requires appropriate time and resources to maximise data quality. A difficulty for pregnancy and perinatal research in the UK is the wide variety of systems used to collect patient data across maternity units. In this manuscript, we describe how we managed this and provide a detailed data dictionary covering the harmonisation of variable names and values that will be helpful for other researchers working with these data.

Trial registration: Primary registry and trial identifying number: ISRCTN 67698474. Registered on 02/11/16.
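
The trial pseudonymised data on site with a bespoke Excel macro; purely as a language-neutral illustration of the same two steps (keyed pseudonymisation, then harmonising one variable from several source codings), a sketch might look like the following. The field names, code mappings, and key handling are assumptions, not the trial's actual implementation:

```python
# Illustrative sketch only: the trial used a bespoke Excel macro, but
# the two core steps look roughly like this in Python. Field names,
# code mappings and key handling are hypothetical.
import hashlib
import hmac

SITE_KEY = b"per-site-secret"  # assumed: held on site, never transferred

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SITE_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# Harmonisation rule: map each record system's local coding of
# "admitted to neonatal care" onto one trial-wide variable.
NEONATAL_ADMISSION = {
    "system_a": {"Y": True, "N": False},
    "system_b": {"1": True, "0": False},
    "system_c": {"admitted": True, "not admitted": False},
}

def harmonise(system: str, raw_value: str):
    """Return True/False, or None when the value is unmapped, so that
    missingness is flagged by the quality checks rather than guessed."""
    return NEONATAL_ADMISSION.get(system, {}).get(raw_value.strip())

print(pseudonymise("1234567890")[:12])  # stable token usable for linkage
print(harmonise("system_b", "1"))       # True
print(harmonise("system_a", "?"))       # None -> flagged as missing
```

Keeping the pseudonymisation keyed and per-site means the same patient yields the same token across downloads (so records can be linked centrally) without the identifier itself ever leaving the site.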


2017 ◽  
Vol 64 (suppl_3) ◽  
pp. S238-S244 ◽  
Author(s):  
Nora L. Watson ◽  
Christine Prosperi ◽  
Amanda J. Driscoll ◽  
Melissa M. Higdon ◽  
Daniel E. Park ◽  
...  

Author(s):  
Yamini Gourishankar ◽  
Frank Weisgerber

Abstract It is observed that calculating the wind pressures on structures involves more data retrieval from the ASCE standard than subjective reasoning on the designer’s part. Once the initial design requirements are established, the computation procedure is straightforward. This paper discusses an approach to automating wind pressure computation for one-story and multi-story buildings using a data management strategy, implemented with the ORACLE database management system. In the prototype system developed herein, the designer supplies the design requirements in the form of the structure’s exposure type, its dimensions, and the nature of occupancy of the structure. Using these requirements, the program retrieves the necessary standards data from an independently maintained database and computes the wind pressures. The final output contains the wind pressures on the main wind force resisting system, and on the components and claddings, for wind blowing parallel and perpendicular to the ridge. The knowledge encoded in the system was gained from ASCE codes and design guidelines, and from interviews with various experts and practitioners. Several information modeling methodologies, such as the entity-relationship model and IDEF1X, were employed in the system analysis and design phase of this project. The prototype is implemented on an IBM PC using the ORACLE DBMS and the ‘C’ programming language. Appendix A illustrates a sample run.
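
As a rough sketch of the lookup-then-compute pattern the paper describes, the following uses sqlite3 as a stand-in for the ORACLE DBMS and the modern ASCE 7 velocity-pressure formula qz = 0.00256·Kz·Kzt·Kd·V² (psf). The paper does not state which code edition it encodes, and the tabulated coefficients below are illustrative, not values from the standard:

```python
# Sketch of the pattern only: retrieve standards data from a separately
# maintained table, then compute. sqlite3 stands in for ORACLE; the Kz
# values are placeholders, not ASCE tabulated data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exposure_coeff (exposure TEXT, height_ft REAL, kz REAL)")
conn.executemany(
    "INSERT INTO exposure_coeff VALUES (?, ?, ?)",
    [("B", 15, 0.57), ("B", 30, 0.70), ("C", 15, 0.85), ("C", 30, 0.98)],
)

def velocity_pressure(exposure: str, height_ft: float, v_mph: float,
                      kzt: float = 1.0, kd: float = 0.85) -> float:
    """Look up Kz for the exposure/height, then compute qz in psf."""
    row = conn.execute(
        "SELECT kz FROM exposure_coeff WHERE exposure=? AND height_ft>=? "
        "ORDER BY height_ft LIMIT 1", (exposure, height_ft)).fetchone()
    if row is None:
        raise ValueError("no tabulated Kz for this exposure/height")
    kz = row[0]
    return 0.00256 * kz * kzt * kd * v_mph ** 2

print(round(velocity_pressure("C", 20, 115), 1))  # qz at 20 ft, 115 mph wind
```

Separating the coefficient tables from the computation is the point of the paper's design: when the standard is revised, only the database is updated, not the program.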


2016 ◽  
Author(s):  
Alfred Enyekwe ◽  
Osahon Urubusi ◽  
Raufu Yekini ◽  
Iorkam Azoom ◽  
Oloruntoba Isehunwa

ABSTRACT Significant emphasis on data quality is placed on real-time drilling data for the optimization of drilling operations, and on logging data for quality lithological and petrophysical description of a field. This is evidenced by the huge sums spent on real-time MWD/LWD tools, broadband services, wireline logging tools, etc. However, much more needs to be done to harness quality data for future workover and/or abandonment operations, where the data relied on may have been entered decades earlier and where costs and time are critically linked to already known and certified information. In some cases, the data relied on have been migrated across different data management platforms, during which relevant data may have been lost, misinterpreted or misplaced. Another common cause of wrong data is poorly documented well intervention operations, completed in so short a time that there was no pressure to document the operation properly. This leads to confusion over simple issues such as the depth at which a plug was set, or what junk was left in the hole. The relative lack of emphasis on this type of data quality has led to high costs of workover and abandonment operations; in some cases, well control incidents and process safety incidents have arisen. This paper looks at over 20 workover operations carried out over a span of 10 years. An analysis is done on the wells’ original timeline of operation. The data management system is analyzed and the issues experienced during the workover operations are categorized. Bottlenecks in data management are defined, and the solutions currently being implemented to manage these problems are listed as recommended good practices.
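
A hedged sketch of the kind of completeness rule implied by these findings: flag historical intervention records missing the fields most often disputed during workover planning (plug-set depth, junk left in hole). The field names are hypothetical; the paper does not describe a specific implementation:

```python
# Illustrative completeness check over legacy intervention records.
# Field names are hypothetical, not from the paper's data model.
REQUIRED_FIELDS = ("well_id", "date", "plug_set_depth_ft", "junk_left_in_hole")

def flag_incomplete(records):
    """Return (well_id, missing_fields) pairs for records that would
    force re-verification before a workover or abandonment."""
    issues = []
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if rec.get(f) in (None, "", "N/A")]
        if missing:
            issues.append((rec.get("well_id", "<unknown>"), missing))
    return issues

history = [
    {"well_id": "W-07", "date": "1998-03-14",
     "plug_set_depth_ft": 8450, "junk_left_in_hole": "none"},
    {"well_id": "W-12", "date": "2001-11-02",
     "plug_set_depth_ft": None, "junk_left_in_hole": ""},
]
print(flag_incomplete(history))
# -> [('W-12', ['plug_set_depth_ft', 'junk_left_in_hole'])]
```

Running such a rule when a record is first entered, rather than decades later during workover planning, is exactly the shift in emphasis the paper argues for.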


2016 ◽  
Vol 83 ◽  
pp. 576-583 ◽  
Author(s):  
Patrícia Franková ◽  
Martina Drahošová ◽  
Peter Balco

2019 ◽  
Vol 8 (10) ◽  
pp. 24851-24854
Author(s):  
Hewa Majeed Zangana

Nowadays, more and more organizations are realizing the importance of their data, because data can be considered an important asset in nearly all business processes. The Information Technology Division (ITD) is the department in the International Islamic University Malaysia (IIUM) that consolidates efforts in providing IT services to the university. The university's data management started with decentralized units, where each centre or division had its own hardware and database system. It later became centralized, and ITD is now trying to apply one policy across the whole university, which should improve the performance of data management at the university. A visit was made to the ITD building, and a presentation was conducted discussing many issues concerning the maturity of data management quality in the IT division at IIUM. We noted issues such as the server room location, power supply and backup, and the existence of redundant data. These issues are discussed in detail in the next sections of this paper, and some recommendations are suggested for improving data quality at the university. Data quality is very important in decision making, especially for a university that is trying to improve its strategy towards becoming a research university and to raise its position in the World University Rankings.
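
As a minimal illustration of the redundancy problem the visit surfaced, the sketch below flags natural keys held in more than one unit-level database. The table layouts and the matric-number key are assumptions, not IIUM's actual schema:

```python
# Sketch: when each centre kept its own database, the same student
# could appear in several of them. Finding those duplicates is one
# basic step toward a single university-wide data policy.
from collections import defaultdict

def find_redundant(records_by_unit):
    """Map a natural key (here: matric number) to every unit holding a
    copy; keys held by more than one unit are redundant."""
    holders = defaultdict(set)
    for unit, records in records_by_unit.items():
        for rec in records:
            holders[rec["matric_no"]].add(unit)
    return {k: sorted(v) for k, v in holders.items() if len(v) > 1}

units = {
    "library_db": [{"matric_no": "G1234567", "name": "A. Rahman"}],
    "admissions_db": [{"matric_no": "G1234567", "name": "A Rahman"},
                      {"matric_no": "G7654321", "name": "N. Ismail"}],
}
print(find_redundant(units))  # -> {'G1234567': ['admissions_db', 'library_db']}
```

Note that the two copies of G1234567 also disagree on the name's punctuation, the sort of inconsistency that centralized stewardship under one policy is meant to eliminate.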

