XML Technologies
Recently Published Documents


TOTAL DOCUMENTS: 89 (FIVE YEARS: 4)

H-INDEX: 7 (FIVE YEARS: 1)

2021 ◽  
Vol 2021 (1) ◽  
pp. 28-32
Author(s):  
Ryan Lieu

Conservation documentation plays a crucial role in preventing misrepresentations about cultural property. Yet conservation records often remain undigitized and unsearchable. As part of efforts to improve access to conservation documentation, members of the Linked Conservation Data Consortium recently embarked on a project to transform paper and born-digital conservation records spanning forty years into linked data. Project team members reviewed existing models for preservation data and found that only the CIDOC Conceptual Reference Model would accommodate documentation of materiality, object structure, and conservation treatment events as prescribed by professional guidelines. Project outcomes revealed meaningful patterns in conservation data that may be useful in future model development as well as shortcomings in the XML technologies employed for transforming the data.
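
As a rough illustration of the kind of linked-data output such a project aims for, the following sketch models a single conservation treatment as a CIDOC CRM event using rdflib. The identifiers, class, and property choices (E11 Modification, P31, P14) are assumptions for demonstration only, not the consortium's actual mapping.

```python
# Illustrative sketch: one conservation treatment expressed as a CIDOC CRM
# event with rdflib. URIs, classes and properties are assumptions, not the
# Linked Conservation Data Consortium's actual model.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/conservation/")    # hypothetical base URI

g = Graph()
g.bind("crm", CRM)
g.bind("ex", EX)

treatment = EX["treatment/1984-017"]                  # hypothetical record id
obj = EX["object/ms-42"]
conservator = EX["person/j-smith"]

g.add((treatment, RDF.type, CRM["E11_Modification"]))      # the treatment event
g.add((treatment, CRM["P31_has_modified"], obj))           # the object treated
g.add((treatment, CRM["P14_carried_out_by"], conservator)) # the conservator
g.add((treatment, RDFS.label, Literal("Rebinding and tear repair, 1984")))

print(g.serialize(format="turtle"))
```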


Author(s):  
C. M. Sperberg-McQueen

Spell checking has both practical and theoretical significance. The practical connections seem obvious: spell checking makes it easier to find some kinds of errors in documents. But spell checking is sometimes harder and less capable in XML than it could be. If a spell checker could exploit markup instead of just ignoring it, could spell checking be easier and more useful? The theoretical foundations of spell checking may be less obvious, but every spell checker operationalizes both a simple model of language and a model of errors and error correction. The SCX (spell checking for XML) framework is intended to support the author's experimentation with different models of language and errors: it uses XML technologies to tokenize documents, spell check them, provide a user interface for acting on the flags raised by the spell checker, and insert the corrections into the original text.
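
To make the idea of exploiting markup concrete, here is a minimal sketch of markup-aware tokenization: words are collected for checking while elements that should not be checked (and text in other languages, signalled by xml:lang) are treated differently. The "skip" element names are assumptions, not part of the SCX framework.

```python
# Minimal sketch of markup-aware tokenization for spell checking: gather
# (word, language) pairs from an XML document while skipping elements that
# should not be checked. SKIP element names are assumptions, not SCX's.
import re
import xml.etree.ElementTree as ET

SKIP = {"code", "ident", "foreign"}   # hypothetical "do not check" elements
WORD = re.compile(r"[A-Za-z']+")

def tokens(elem, lang="en"):
    """Yield (word, language) pairs, honouring xml:lang and skipped elements."""
    lang = elem.get("{http://www.w3.org/XML/1998/namespace}lang", lang)
    if elem.tag not in SKIP:
        if elem.text:
            for m in WORD.finditer(elem.text):
                yield m.group(), lang
        for child in elem:
            yield from tokens(child, lang)
            # a child's tail text belongs to this element, so check it here
            if child.tail:
                for m in WORD.finditer(child.tail):
                    yield m.group(), lang

doc = ET.fromstring('<p>Spell <code>xsl:template</code> chekcing in XML.</p>')
print(list(tokens(doc)))   # the <code> content is not offered for checking
```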


2020 ◽  
Vol 8 (6) ◽  
pp. 2907-2910

This paper presents an approach to creating an XML-based syllabus repository for Computer Science subjects. It enables educators and experts to create, store, update, and publish a syllabus for a particular subject. The objective of the research is to help automate the Computer Science syllabus creation process by allowing the learning objects in the repository to be used, reused, and repurposed. Syllabus learning objects such as topics and subtopics are stored in a hierarchical XML structure and are combined and aggregated to create a customized syllabus for a particular subject. The paper discusses structuring, navigating, and parsing the XML data using the XML technologies XML Schema Definition (XSD), XPath, and SimpleXML, respectively. The steps to process the XML data and transform it to produce the output are also discussed. We have tried to solve the issues associated with the traditional method of creating a syllabus, which uses MS Word or PDF formats.
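
A minimal sketch of the hierarchical structure the abstract describes, and of selecting learning objects with XPath-style queries, is shown below. The element names are assumptions, and Python's ElementTree stands in for the XSD/XPath/SimpleXML toolchain the paper itself uses.

```python
# Minimal sketch: a hierarchical syllabus stored as XML and queried with
# XPath-style expressions. Element names (subject/topic/subtopic) are
# assumptions; the paper itself uses XSD, XPath and PHP's SimpleXML.
import xml.etree.ElementTree as ET

SYLLABUS = """
<subject code="CS101" title="Introduction to Programming">
  <topic id="t1" title="Control Structures">
    <subtopic id="t1.1">Conditional statements</subtopic>
    <subtopic id="t1.2">Loops</subtopic>
  </topic>
  <topic id="t2" title="Data Structures">
    <subtopic id="t2.1">Arrays</subtopic>
    <subtopic id="t2.2">Linked lists</subtopic>
  </topic>
</subject>
"""

root = ET.fromstring(SYLLABUS)

# Reuse learning objects: pick topics by id to assemble a customized syllabus.
wanted = {"t1"}
custom = ET.Element("subject", {"code": "CS101-custom"})
for topic in root.findall("./topic"):
    if topic.get("id") in wanted:
        custom.append(topic)

# Navigate the result with ElementTree's limited XPath support.
for st in custom.findall(".//subtopic"):
    print(st.get("id"), st.text)
```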


2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Jian Zhang ◽  
Haowen Yan

In order to adapt to the information needs of the we-media era, Haowen Yan and other scholars proposed a "grassroots" map for the general population, the "We-Map". Its demands on mathematical foundations such as positional accuracy are modest, and producers do not need to undergo strict professional training. Map users can participate in map production at any time, and can easily and quickly communicate and work on personal electronic devices such as computers and mobile phones. The We-Map platform uses the Google Maps SDK and XML technologies to implement the basic functions of a mobile We-Map application, including map editing (copy, rotate, zoom, pan, line drawing, coloring, etc.), path navigation, real-time location, information query, and We-Map storage, transmission, and distribution. In this paper, we study and implement a landmark-based pedestrian path navigation algorithm, which is added to the We-Map platform for auxiliary navigation of pedestrian paths.
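
The core idea of landmark-based guidance can be sketched as follows: turn instructions are phrased relative to landmarks along a route described in XML. The route schema and the wording are assumptions for illustration, not the authors' actual algorithm or the We-Map storage format.

```python
# Hedged sketch of landmark-based pedestrian guidance: turn instructions are
# generated relative to landmarks in an assumed XML route description.
# The <route>/<step> schema and phrasing are illustrative, not We-Map's format.
import xml.etree.ElementTree as ET

ROUTE = """
<route>
  <step turn="left"  landmark="the bookshop" distance_m="120"/>
  <step turn="right" landmark="the fountain" distance_m="80"/>
  <step turn="none"  landmark="the library"  distance_m="60"/>
</route>
"""

def instructions(route_xml):
    for step in ET.fromstring(route_xml).findall("step"):
        turn = step.get("turn")
        landmark = step.get("landmark")
        dist = step.get("distance_m")
        if turn == "none":
            yield f"Walk {dist} m until you reach {landmark}."
        else:
            yield f"Walk {dist} m and turn {turn} at {landmark}."

for line in instructions(ROUTE):
    print(line)
```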


Author(s):  
Gard B. Jenset ◽  
Barbara McGillivray

Chapter 4 explains the concept and process of annotation for historical corpora, from a theoretical, practical, and technical point of view, and discusses the challenges presented by historical texts. We introduce basic terminology for XML technologies and corpus metadata, and we describe the different levels of linguistic annotation, from spelling normalization to morphological, syntactic, and semantic analysis, and briefly present the state of the art for historical corpora and treebanks. We cover annotation schemes and standards and illustrate the main concepts in corpus annotation with an example from LatinISE, a large annotated Latin corpus.
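
As a small illustration of the annotation levels mentioned (lemma, morphology, part of speech) carried in XML, consider the token-level markup below. The attribute names and the Latin example are invented for illustration and are not LatinISE's actual scheme.

```python
# Illustrative token-level linguistic annotation in XML: each word carries a
# lemma, part of speech and morphological analysis. Attribute names and values
# are assumptions, not the LatinISE corpus's actual annotation scheme.
import xml.etree.ElementTree as ET

SENTENCE = """
<s id="1">
  <w lemma="arma" pos="NOUN" msd="acc.pl.neut">arma</w>
  <w lemma="vir"  pos="NOUN" msd="acc.sg.masc">virumque</w>
  <w lemma="cano" pos="VERB" msd="1.sg.pres.ind.act">cano</w>
</s>
"""

for w in ET.fromstring(SENTENCE).findall("w"):
    print(f"{w.text:10} lemma={w.get('lemma'):6} pos={w.get('pos')} msd={w.get('msd')}")
```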


Author(s):  
Ari Nordström

A conversion of hundreds of Rich Text Format documents to highly structured XML is always going to be a challenge and a showcase of XML technologies, even if a number of them are unavailable to you. This paper is a case study of one such conversion, dealing with migrating huge volumes of legal commentary, more specifically the classic standard text Halsbury's Laws of England, from RTF to XML so new editions can be authored and published in XML to various paper and online publication targets. While describing the migration approach in any detail would probably require a book-length paper, this one attempts to highlight some of the challenges and their solutions.
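
One common ingredient in such migrations is mapping named paragraph styles onto target XML elements; the sketch below shows the idea. The style names and target vocabulary are assumptions, not the actual Halsbury's Laws pipeline.

```python
# Hedged sketch of one building block of an RTF-to-XML migration: mapping
# named paragraph styles onto target elements. Style names and the target
# vocabulary are assumptions, not the actual Halsbury's Laws pipeline.
import xml.etree.ElementTree as ET

STYLE_MAP = {              # hypothetical RTF paragraph styles -> XML elements
    "Heading 1": "title",
    "Para Text": "para",
    "Case Name": "case-ref",
}

def paragraphs_to_xml(paragraphs):
    """paragraphs: iterable of (style_name, text) pairs extracted from RTF."""
    root = ET.Element("commentary")
    for style, text in paragraphs:
        elem = ET.SubElement(root, STYLE_MAP.get(style, "para"))
        elem.text = text
    return root

sample = [("Heading 1", "Administrative Law"),
          ("Para Text", "The principles governing judicial review are...")]
print(ET.tostring(paragraphs_to_xml(sample), encoding="unicode"))
```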


Author(s):  
Anne Brüggemann-Klein

Web applications offer a golden opportunity for domain experts who work with XML documents to leverage their domain expertise, their knowledge of document engineering principles, and their skills in XML technology. Current XML technologies provide a full stack of modeling languages, implementation languages, and tools for Web applications that is stable, platform independent, and based on open standards. Combining principles and proven practices from document and software engineering, we identify architectures, modeling techniques, and implementation strategies that let end-user developers who are conversant with XML technologies create their own Web applications.
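
A minimal sketch of one layer of such a stack, rendering an XML document to HTML with XSLT (here via lxml), is shown below. The tiny vocabulary and stylesheet are assumptions; the architectures discussed in the paper are much broader than this single step.

```python
# Minimal sketch of one layer of an XML-based web application: an XML document
# rendered to HTML with XSLT (via lxml). Vocabulary and stylesheet are
# illustrative assumptions, not the author's architecture.
from lxml import etree

DOC = etree.XML("<note><to>Reader</to><body>Hello from XML.</body></note>")

XSLT = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/note">
    <html><body>
      <h1><xsl:value-of select="to"/></h1>
      <p><xsl:value-of select="body"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(XSLT)
print(str(transform(DOC)))
```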


Author(s):  
Andreas Tai

Although broadcast TV subtitles are well established, digital production workflows are changing. Increasingly, the internet is a primary channel for distribution. Audio and video standards have already adapted, but changes to broadcast subtitle workflows are only just beginning. Timed Text Markup Language (TTML) is the leading contender to replace legacy subtitle file and transmission formats for digital and hybrid broadcasting. TTML is a format for authoring, exchange, and presentation of subtitles. Used at different stages in the workflow, TTML addresses some, but not all, of the current problems in media distribution. We examine how TTML succeeds and where it falls short. We view each shortcoming as an opportunity for further advancement. Whether it’s a question of adapting TTML to non-XML environments or encouraging broader use of XML technologies in new areas, there is much to learn from these efforts.
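
For orientation, a minimal TTML document looks like the sample below, here parsed with ElementTree to list cue timings. The cue text is invented and real broadcast profiles (such as EBU-TT-D) add styling and metadata not shown; the namespace is the W3C TTML namespace.

```python
# Minimal TTML subtitle document, parsed to list cue timings. Cue text is
# invented; real broadcast profiles (e.g. EBU-TT-D) add styling and metadata.
import xml.etree.ElementTree as ET

TTML = """
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Good evening.</p>
      <p begin="00:00:04.000" end="00:00:06.000">Here is the news.</p>
    </div>
  </body>
</tt>
"""

NS = {"tt": "http://www.w3.org/ns/ttml"}
for p in ET.fromstring(TTML).findall(".//tt:p", NS):
    print(p.get("begin"), "-", p.get("end"), ":", "".join(p.itertext()))
```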


Author(s):  
Gerrit Imsieke

Deploying advanced XML technologies such as XProc, XSLT 2.0, and Schematron, an "ex-post" conversion of InDesign files may be a viable alternative to XML-first publishing production workflows.
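
What makes such an ex-post conversion feasible is that an InDesign IDML package is itself a ZIP archive of XML parts (spreads, stories, styles). The sketch below merely lists the story files as a first step; it is not the author's XProc/XSLT/Schematron pipeline, and the file name is a placeholder.

```python
# An IDML package is a ZIP archive of XML parts. This sketch only lists the
# story files as a first step of an ex-post conversion; the file name is a
# placeholder and this is not the author's actual pipeline.
import zipfile

with zipfile.ZipFile("layout.idml") as idml:        # hypothetical input file
    stories = [name for name in idml.namelist()
               if name.startswith("Stories/") and name.endswith(".xml")]
    for name in stories:
        print(name, len(idml.read(name)), "bytes")
```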

