Design and Establishment of Information Exchange Standard on Campus

2014 ◽  
Vol 513-517 ◽  
pp. 1294-1298 ◽  
Author(s):  
Si Si Shen ◽  
Ai Xia Ding

Exchanging and sharing information are basic requirements of the Digital Campus. To address the current problems of information sharing and integration, the content and framework of a universal data interchange platform are introduced in terms of process categories and information exchange layers. A data interchange model was developed to elaborate the data exchange between different departments on campus. Four key technologies are presented to define the implementation and exchange paths: the XML data model, hybrid XML data storage, data standard construction, and the data import and export module. Current practice in implementing the exchange standards and directions for future study are also discussed.
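As an illustration of the kind of interchange document such a platform might standardize, the following is a minimal sketch; the envelope structure, element names, and the student-record payload are hypothetical, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical campus exchange envelope: a header identifying source and
# target departments, plus an XML payload conforming to a shared standard.
envelope = ET.Element("ExchangeEnvelope")
header = ET.SubElement(envelope, "Header")
ET.SubElement(header, "Source").text = "RegistrarOffice"
ET.SubElement(header, "Target").text = "LibrarySystem"

payload = ET.SubElement(envelope, "Payload")
record = ET.SubElement(payload, "StudentRecord", id="2014001")
ET.SubElement(record, "Name").text = "Li Wei"
ET.SubElement(record, "Department").text = "Computer Science"

# Serialize for export; the receiving department's import module would
# validate this against the agreed data standard before loading it.
print(ET.tostring(envelope, encoding="unicode"))
```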

Author(s):  
Antonio Celesti ◽  
Maria Fazio ◽  
Antonio Puliafito ◽  
Massimo Villari

In this paper the authors focus on sensing systems supporting data exchange among several healthcare administrative domains. The challenge in this area is twofold: efficiently managing the huge amount of data produced by medical devices, bio-sensors, and information systems, and sharing sensed data for scientific and clinical purposes. The authors present a new information system that exploits Cloud computing capabilities to overcome these issues while also guaranteeing patients' privacy. Their proposal integrates different healthcare institutions into a federated environment, thus establishing a trust context among the institutions themselves. The storage service is designed according to a fully distributed approach and is based on the widely used open-source framework Hadoop, which is enriched to establish a compelling federated system. They adopt XRI technology to formalize an XML-based data model that simplifies the classification, searching, and retrieval of medical data.
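A minimal sketch of how XRI-style identifiers might be used to classify and look up XML medical records; the identifier scheme and record fields below are illustrative assumptions, not the authors' actual data model:

```python
import xml.etree.ElementTree as ET

# Hypothetical XRI-style identifiers mapping to XML medical records.
# XRI uses global context symbols such as '@'; the path segments and
# the record schema here are invented for the sketch.
records = {
    "@hospital-a*cardiology*ecg/2012-03-14":
        "<Record type='ECG'><Patient ref='p-001'/>"
        "<Result>normal sinus rhythm</Result></Record>",
}

def retrieve(xri: str) -> ET.Element:
    """Resolve an XRI-style key to its parsed XML record."""
    return ET.fromstring(records[xri])

rec = retrieve("@hospital-a*cardiology*ecg/2012-03-14")
print(rec.get("type"), "->", rec.find("Result").text)
```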


Author(s):  
Peter C. G. Veenstra

The Pipeline Open Data Standard (PODS) Association develops and advances global pipeline data standards and best practices supporting data management and reporting for the oil and gas industry. This presentation provides an overview of the PODS Association and a detailed overview of the transformed PODS Pipeline Data Model resulting from the PODS Next Generation initiative. The PODS Association’s Next Generation, or Next Gen, initiative is focused on a complete re-design and modernization of the PODS Pipeline Data Model. The re-design is driven by PODS Association Strategy objectives as defined in its 2016–2019 Strategic Plan and reflects nearly 20 years of PODS Pipeline Data Model implementation experience and lessons learned. The Next Gen Data Model is designed to be the system of record for pipeline centerlines and pressurized containment assets for the safe transport of product, allowing pipeline operators to:

• achieve greater agility to build and extend the data model,
• respond to new business requirements,
• interoperate through standard data models and a consistent application interface,
• share data within and between organizations using well-defined data exchange specifications,
• optimize performance for management of bulk loading, reroutes, inspection data, and history.

The presentation will introduce the Next Gen Data Model design principles and its conceptual, logical, and physical structures, with a focus on transformational changes from prior versions of the Model. Support for multiple platforms, including but not limited to Esri ArcGIS, open-source GIS, and relational database management systems, will be described. Alignment with Esri’s ArcGIS Platform and ArcGIS for Pipeline Referencing (APR) will be a main topic of discussion, along with how PODS Next Gen can be leveraged to benefit pipeline integrity, risk assessment, reporting, and data maintenance. The end goal of a PODS implementation is to realize efficient data management, transfer, and exchange, making the operation of a pipeline safer and more cost-effective.
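For readers unfamiliar with the core idea, pipeline data models of this kind typically anchor features to a centerline by measure (linear referencing). A minimal sketch of that concept, with invented class and field names rather than actual PODS Next Gen structures:

```python
from dataclasses import dataclass

# Illustrative linear referencing: events located along a centerline by
# measure. Names are invented for this sketch; PODS Next Gen defines its
# own tables and attributes.
@dataclass
class LineEvent:
    route_id: str
    from_measure: float  # metres along the centerline
    to_measure: float
    event_type: str      # e.g. "coating", "inspection", "pressure_test"

events = [
    LineEvent("PL-100", 0.0, 1520.5, "coating"),
    LineEvent("PL-100", 980.0, 1100.0, "inspection"),
]

def events_at(route_id: str, measure: float):
    """Return all events whose measure range covers the given station."""
    return [e for e in events
            if e.route_id == route_id and e.from_measure <= measure <= e.to_measure]

print(events_at("PL-100", 1000.0))
```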


2006 ◽  
Vol 22 (01) ◽  
pp. 9-14
Author(s):  
Wen-Yen Chien ◽  
Heiu-Jou Shaw ◽  
Shen-Ming Wang

In this article, Taiwan Ship Net is proposed as a portal Web site for the Taiwan yacht industry. Using Microsoft Distributed interNet Application (DNA) and Active Server Pages (ASP), the supply chain management system, consisting of three sections (shipyards, suppliers, and administrator), exchanges and transfers information among the enterprises' systems. By applying XML to conduct the information exchange, we integrate the order formats of several yacht manufacturers and suppliers, develop a system for placing orders online, and edit XML documents to exchange information within the system. E-catalogs, which are easy to preserve and update, will hereafter replace traditional paper catalogs. The varied demands for accessories would otherwise increase the costs of production and logistics. Through this system, yacht manufacturers can search the material in the e-catalogs and purchase online directly. Meanwhile, suppliers can update the e-catalogs regularly to remain competitive in the market. Taiwan Ship Net can provide the newest information to foreign customers and communicate with all yacht companies, equipment suppliers, and manufacturers in Taiwan.
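A minimal sketch of the kind of XML order document such a system might exchange; the element names and catalog reference are assumptions for illustration, not Taiwan Ship Net's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical purchase order exchanged between a shipyard and a supplier.
order = ET.Element("PurchaseOrder", id="PO-2006-042")
ET.SubElement(order, "Shipyard").text = "Example Yachts Co."
ET.SubElement(order, "Supplier").text = "Marine Fittings Ltd."

# catalogRef points at an e-catalog entry, so both sides resolve the
# item against the same maintained product data.
item = ET.SubElement(order, "Item", catalogRef="EC-0815")
ET.SubElement(item, "Description").text = "Stainless deck cleat, 200 mm"
ET.SubElement(item, "Quantity").text = "24"

print(ET.tostring(order, encoding="unicode"))
```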


2014 ◽  
Vol 13 (4) ◽  
pp. 53-57
Author(s):  
G. D. Kopanitsa

The possibility of uniting different healthcare providers in one network was investigated and demonstrated. We developed a solution based on the ISO 13606 archetype model and a service-oriented architecture to enable the exchange of semantically meaningful medical data within a network of healthcare providers. Applying the ISO 13606 data model for data modeling and XML data transformation made it possible to organize data exchange in the regional healthcare network.
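A minimal sketch of the archetype idea: clinical data rendered as XML whose meaning is pinned down by an archetype identifier rather than by any one provider's database schema. The element structure below is a simplification; ISO 13606 defines a far richer reference model:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an ISO 13606-style entry. The archetype_id
# tells the receiver how to interpret the elements; the structure here
# is illustrative, not the standard's actual reference model.
entry_xml = """
<Entry archetype_id="CEN-EN13606-ENTRY.blood_pressure.v1">
  <Element name="systolic" value="120" units="mmHg"/>
  <Element name="diastolic" value="80" units="mmHg"/>
</Entry>
"""

entry = ET.fromstring(entry_xml)

# A receiving provider interprets the data by archetype id instead of
# needing to know the sender's internal schema.
print(entry.get("archetype_id"))
for el in entry.findall("Element"):
    print(el.get("name"), el.get("value"), el.get("units"))
```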


2012 ◽  
Vol 33 (3) ◽  
pp. 84-88
Author(s):  
Leonardas Marozas ◽  
Eimuntas Paršeliūnas ◽  
Saulius Urbonas

XML (Extensible Markup Language) data schemas are presented as the format for information exchange in the graphical SVG (Scalable Vector Graphics) format. SVG is a language for two-dimensional graphics and is based on XML. It is an advanced version of the previous data exchange format TGX (Tide Gauge Independent Exchange Format), which contains plain ASCII data. SVG graphics are generated from sea level observation data at the ESEAS station KLPD. Compared with the similar CGM (Computer Graphics Metafile) format, SVG has the advantage of being based on XML. XML data schemas additionally add header information about the marine measurement site, the sensors, and any other necessary information. Correct header formation and the advantages of such a data exchange format are analysed. The header is important for exchange between sites in different countries, since it provides information standardisation. The visual appearance of the SVG file, its source, headers, and the scripted formation of the file are also described.
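A minimal sketch of generating such an SVG from sea-level observations, with station information carried inside the document; the metadata element names and the sample values are illustrative assumptions, not the paper's schema:

```python
# Illustrative generation of an SVG plot from sea-level observations,
# with station metadata embedded in the document. The <metadata> child
# element names and the observation values are invented for the sketch.
observations = [1.02, 1.05, 1.11, 1.08, 0.98, 0.95]  # metres, hourly

points = " ".join(f"{i * 50},{200 - h * 100:.0f}" for i, h in enumerate(observations))
svg = f"""<svg xmlns="http://www.w3.org/2000/svg" width="300" height="220">
  <metadata>
    <station>KLPD</station>
    <network>ESEAS</network>
    <sensor>pressure tide gauge</sensor>
  </metadata>
  <polyline points="{points}" fill="none" stroke="black"/>
</svg>"""

with open("klpd_sea_level.svg", "w") as f:
    f.write(svg)
```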


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Patrick Obilikwu ◽  
Emeka Ogbuju

Organizations may be related in terms of similar operational procedures, management, and the supervisory agencies coordinating their operations. Supervisory agencies may be governmental or non-governmental but, in all cases, they perform oversight functions over the activities of the organizations under their control. Multiple organizations that are related through the oversight functions of their supervisory agencies may differ significantly in their geographical locations, aims, and objectives. To harmonize these differences so that comparative analysis is meaningful, data about the operations of multiple organizations under one control or management can be cultivated using a uniform format. In this format, data is easily harvested, and its usability for cross-population analysis, referred to as data comparability, is enhanced. The current practice, whereby organizations under one control maintain their data in independent databases specific to an enterprise application, greatly reduces data comparability and makes cross-population analysis a herculean task. In this paper, the collocation data model is formulated as consisting of big data technologies beyond data mining techniques and is used to reduce the heterogeneity inherent in databases maintained independently across multiple organizations. The collocation data model is thus presented as capable of enhancing data comparability across multiple organizations. The model was used to cultivate the assessment scores of students in several schools over a period and to rank the schools. The model permits data comparability across several geographical scales, among which are the national, regional, and global scales, where harvested data form the basis for generating analytics for insight, hindsight, and foresight about organizational problems and strategies.
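A minimal sketch of the cross-population analysis the model enables: assessment scores from several schools held in one uniform format and ranked by mean score. The field names and figures are invented for illustration:

```python
from statistics import mean

# Invented uniform-format records: every school's scores are cultivated
# with the same fields, so cross-school comparison is direct.
scores = [
    {"school": "School A", "student": "s1", "score": 72},
    {"school": "School A", "student": "s2", "score": 81},
    {"school": "School B", "student": "s3", "score": 64},
    {"school": "School B", "student": "s4", "score": 90},
]

# Group scores by school, then rank schools by mean score.
by_school = {}
for rec in scores:
    by_school.setdefault(rec["school"], []).append(rec["score"])

ranking = sorted(by_school.items(), key=lambda kv: mean(kv[1]), reverse=True)
for rank, (school, s) in enumerate(ranking, start=1):
    print(rank, school, round(mean(s), 1))
```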

