xml document
Recently Published Documents


TOTAL DOCUMENTS: 461 (FIVE YEARS: 18)

H-INDEX: 16 (FIVE YEARS: 1)

2021
Author(s):
Aigul Mukhitova
Aigerim Yerimbetova
Nenad Mladenovic

Author(s):  
Evan Lenz

The issues with comparing versions of a transformed XML document have been discussed many times. A special challenge, however, arises when the transformations of a document are musical in nature, rather than the more usual editorial changes. An XSLT visualizer can be modified to render musical scores to SVG and enable visual comparisons of the transformation results.


2021
Vol 20 (2)
pp. 12-15
Author(s):
Alhadi A. Klaib

Extensible Markup Language (XML) has become a significant technology for transferring data across the Internet. XML labelling schemes are an essential technique for handling XML data effectively; labelling is performed by assigning a label to every node in the XML document. The CLS labelling scheme is a hybrid labelling scheme developed to address some limitations of indexing XML data. Datasets are used to test XML labelling schemes, and many XML datasets are available nowadays, some drawn from real life and others generated artificially. This paper discusses these datasets and benchmarks and their specifications in order to determine the most appropriate one for testing the CLS labelling scheme. The research found that the XMark benchmark is the most appropriate choice for testing the performance of the CLS labelling scheme.
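To make the idea of "assigning labels to all nodes" concrete, here is a minimal sketch of one classic approach, Dewey-style prefix labelling (shown for illustration only; the hybrid CLS scheme itself combines techniques beyond this). Each node's label encodes its path of child positions from the root, so ancestor tests reduce to label-prefix tests:

```python
# Illustrative Dewey-style prefix labelling (not the CLS scheme itself).
import xml.etree.ElementTree as ET

def label_nodes(root):
    """Assign a Dewey label (tuple of child positions) to every node."""
    labels = {root: (1,)}
    stack = [root]
    while stack:
        node = stack.pop()
        for i, child in enumerate(node, start=1):
            labels[child] = labels[node] + (i,)
            stack.append(child)
    return labels

def is_ancestor(a, b):
    """a is an ancestor of b iff a's label is a proper prefix of b's."""
    return len(a) < len(b) and b[:len(a)] == a

doc = ET.fromstring("<site><regions><africa/><asia/></regions><people/></site>")
labels = label_nodes(doc)
```

With such labels, structural relationships can be decided from the labels alone, without revisiting the document — which is exactly what benchmark datasets like XMark stress at scale.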


Author(s):
Imane Belahyane
Mouad Mammass
Hasna Abioui
Assmaa Moutaoukkil
Ali Idarrou

2020
Vol 2 (2)
Author(s):
Daniel Evans

Two designs, the Transaction Serial Format (TSF) and the Transaction Array Model (TAM), are presented. Together, they provide full, efficient transaction-serialization facilities for devices with limited onboard energy, such as those in an Internet of Things (IoT) network. TSF provides a compact, non-parsed format for transactions, which can be deserialized with minimal processing. TAM provides an internal data structure that can be constructed directly from the elements of TSF with minimal dynamic storage. TSF is built from simple lexical units that do not require parsing to be extracted from a serialized transaction; these lexical units contain enough information to allocate the internal TAM data structure efficiently. TSF's generality is shown by exhibiting its equivalence to XML and JSON: the TSF representation of any XML document or JSON object can be serialized and deserialized without loss of information, including whitespace. The XML equivalence provides a foundation for the performance comparisons. TSF's efficiency is shown by comparing the performance of reference implementations of TSF and TAM, written in C, to that of the popular Expat XML library, also written in C. TSF deserialization is shown to reduce processor time by more than 80%, demonstrating the efficiency of the design.
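The abstract does not spell out the TSF wire format, but the core idea of a "non-parsed" format built from self-describing lexical units can be sketched with a netstring-like, length-prefixed encoding (the layout below is invented for illustration; the paper's actual TSF encoding may differ). The deserializer slices by the announced lengths instead of scanning character by character, which is what makes it cheap:

```python
# Hypothetical length-prefixed encoding illustrating the "non-parsed" idea;
# not the paper's actual TSF layout.

def encode(units):
    """Serialize a list of strings as <length>:<bytes> units."""
    out = bytearray()
    for u in units:
        data = u.encode("utf-8")
        out += str(len(data)).encode("ascii") + b":" + data
    return bytes(out)

def decode(buf):
    """Recover the units by jumping over length prefixes; no character-level parse."""
    units, i = [], 0
    while i < len(buf):
        j = buf.index(b":", i)       # end of the length prefix
        n = int(buf[i:j])            # announced payload length
        units.append(buf[j + 1 : j + 1 + n].decode("utf-8"))
        i = j + 1 + n                # jump straight past the payload
    return units
```

Because each unit announces its own length up front, a deserializer can also pre-size the receiving structure (the role TAM plays) before copying any payload bytes.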


Author(s):
Rizki Fitriani
Abd. Qohar Agus Maulana
Lingga Wahyu Rochim
Muhammad Ainul Yaqin

Computer applications are growing in number and becoming more complex as business and science grow. In this context, data management and integration are very important, and Web Service development is increasing accordingly. Web Service developers need research data and references on how complex a Web Service is, since complexity relates to estimated development cost, time constraints, and the resources required for development, including computer specifications and human resources. Every such measurement requires quantitative measures, called metrics, which in turn yield the complexity of the Web Service as output. On that basis, software is needed that can compute scale and complexity metrics for a Web Service by importing an XML file containing a WSDL (Web Service Description Language) document; the complexity is described by the Data Weight (DW) measurement, computed from the weights of Arguments per Operation (APO) and Operations per Service (OPS). From our manual calculation on a WSDL document, we obtained results including the number of arguments, the OPS, and the APO. We then implemented this in software, which produced a faster calculation process: users can import an .xml document, and the resulting Web Service complexity is displayed in the application. The metric and complexity measures obtained can be used to estimate Web Service development management.
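The counting side of this can be sketched with the standard library: parse the WSDL and tally operations and message parts to obtain OPS and APO. The abstract does not give the exact Data Weight formula, so only the raw counts are computed here, and the tiny WSDL document below is invented for illustration:

```python
# Hedged sketch: OPS (Operations per Service) and APO (Arguments per
# Operation) counted from a WSDL document. The DW weighting formula is not
# given in the abstract and is therefore omitted.
import xml.etree.ElementTree as ET

WSDL_NS = "{http://schemas.xmlsoap.org/wsdl/}"

WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">
  <message name="AddRequest"><part name="a"/><part name="b"/></message>
  <message name="AddResponse"><part name="sum"/></message>
  <portType name="Calc">
    <operation name="Add"/>
    <operation name="Sub"/>
  </portType>
</definitions>"""

def wsdl_metrics(text):
    """Return (OPS, APO) for a WSDL document given as a string."""
    root = ET.fromstring(text)
    operations = root.findall(f".//{WSDL_NS}operation")
    parts = root.findall(f".//{WSDL_NS}part")   # message parts ~ arguments
    ops = len(operations)
    apo = len(parts) / ops if ops else 0
    return ops, apo
```

For the sample document this yields 2 operations and 1.5 message parts per operation; a real tool would additionally weight these counts into the DW score.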


Author(s):
Mohammed Ragheb Hakawati
Yasmin Yacob
Amiza Amir
Jabiry M. Mohammed
Khalid Jamal Jadaa

Extensible Markup Language (XML) is emerging as the primary standard for representing and exchanging data; accounting for more than 60% of the total, XML is considered the most dominant document type on the web. Nevertheless, the quality of XML documents is not as expected. XML integrity constraints, especially XML functional dependencies (XFDs), play an important role in keeping an XML dataset as consistent as possible, but their ability to solve data-quality issues is still intangible. The main reason is that old-fashioned data dependencies were introduced to maintain the consistency of the schema rather than that of the data. The purpose of this study is to introduce a method for discovering pattern tableaus for XML conditional dependencies, to be used for enhancing XML document consistency as part of the data-quality improvement phases. The notations of the conditional dependencies as new rules are designed mainly for improving data instances, and they extend traditional XML dependencies by enforcing pattern tableaus of semantically related constants. Subsequently, a set of minimal approximate conditional dependencies (XCFDs, XCINDs) is discovered and learned from the XML tree using a set of mining algorithms. The discovered patterns can be used as master data in order to detect inconsistencies that do not conform to the majority of the dataset.
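To illustrate what a conditional dependency buys over a plain functional dependency, here is a minimal sketch of checking records against one pattern: "when country is 'UK', zip determines city". The tableau condition and the sample rows are invented for illustration; the paper's contribution is mining such patterns from the XML tree rather than checking a hand-written one:

```python
# Minimal conditional-functional-dependency check: lhs -> rhs must hold,
# but only among rows matching the pattern condition.

def cfd_violations(rows, condition, lhs, rhs):
    """Return rows that break lhs -> rhs among rows matching the condition."""
    seen = {}   # lhs value -> first rhs value observed
    bad = []
    for row in rows:
        if all(row.get(k) == v for k, v in condition.items()):
            key = row[lhs]
            if key in seen and seen[key] != row[rhs]:
                bad.append(row)
            else:
                seen.setdefault(key, row[rhs])
    return bad

rows = [
    {"country": "UK", "zip": "EH1", "city": "Edinburgh"},
    {"country": "UK", "zip": "EH1", "city": "London"},    # inconsistent
    {"country": "US", "zip": "EH1", "city": "Anywhere"},  # condition not met
]
violations = cfd_violations(rows, {"country": "UK"}, "zip", "city")
```

Note that the third row carries the same zip but escapes the check entirely because the pattern's condition does not apply to it — exactly the selectivity that pattern tableaus of constants add.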


Author(s):
Mohammed Ragheb Hakawati
Yasmin Yacob
Rafikha Aliana A. Raof
Mustafa M.Khalifa Jabiry
Eiad Syaf Alhudiani

Data cleaning, an essential phase for enhancing overall data quality, has been used for decades with different data models, the majority of approaches handling relational datasets as the most dominant data model. However, the XML data model, alongside the relational model, is among the models most commonly used for storing, retrieving, and querying valuable data. In this paper, we introduce a model for detecting and repairing XML data inconsistencies using a set of conditional dependencies. Inconsistencies are detected by joining the existing data source with a set of pattern tableaus expressing the conditional dependencies, and the offending values are then updated to match the proper patterns using a set of SQL statements. This research constitutes the final phase of a cleaning model introduced for XML datasets: first the XML document is mapped to a set of related tables, then a set of conditional dependencies (functional and inclusion) is discovered, and finally the repair algorithms are applied as the closing step of quality enhancement.
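The detect-and-repair step described above can be sketched end to end with an in-memory SQLite database: the XML data is assumed to be already shredded into a relational table, a pattern tableau prescribes the expected value under a condition, and an SQL UPDATE rewrites mismatches. Table and column names here are invented for illustration and are not the paper's schema:

```python
# Hedged sketch: repair a table by joining it against a pattern tableau
# and overwriting values that contradict the prescribed pattern.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (country TEXT, zip TEXT, city TEXT);
    CREATE TABLE tableau  (country TEXT, zip TEXT, city TEXT);
    INSERT INTO customer VALUES ('UK', 'EH1', 'London'), ('UK', 'EH1', 'Edinburgh');
    INSERT INTO tableau  VALUES ('UK', 'EH1', 'Edinburgh');  -- expected pattern
""")

# Overwrite city wherever a tableau row with the same condition and
# antecedent prescribes a different value.
con.execute("""
    UPDATE customer SET city = (
        SELECT t.city FROM tableau t
        WHERE t.country = customer.country AND t.zip = customer.zip
    )
    WHERE EXISTS (
        SELECT 1 FROM tableau t
        WHERE t.country = customer.country AND t.zip = customer.zip
          AND t.city <> customer.city
    )
""")
cities = [r[0] for r in con.execute("SELECT city FROM customer")]
```

The EXISTS guard limits the UPDATE to genuinely inconsistent rows, so already-clean tuples are left untouched.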

