Proceedings of Balisage: The Markup Conference 2020
Latest Publications


Total documents: 29 (five years: 29)
H-index: 1 (five years: 1)

Published by Mulberry Technologies, Inc.
ISBN: 9781935958215

Author(s):  
Mary Holstege

Complex algorithms need data structures that can be “updated” effectively. Here are some techniques and tips for using XQuery 3.1 maps for that purpose.
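
As a general illustration of the idea, and not the paper's own techniques: XQuery 3.1 maps are immutable, so an "update" means building a new map, for example with map:put, and a map can be threaded through fold-left to accumulate state. A minimal sketch:

    xquery version "3.1";
    (: Minimal sketch, not from the paper: count word frequencies by threading
       an immutable map through fold-left; map:put returns a new, "updated" map. :)
    let $words := ("map", "fold", "map", "put")
    return fold-left(
      $words,
      map {},
      function($counts, $w) {
        map:put($counts, $w, (map:get($counts, $w), 0)[1] + 1)
      }
    )
    (: result: map { "map": 2, "fold": 1, "put": 1 } :)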


Author(s):  
Steven Pemberton

Submission, the process of sending data to a server and dealing with the response, is probably the hardest part of XForms to implement, and certainly involves the XForms element with the most attributes. This is largely due to legacy: XForms was designed to work with existing standards, and HTTP submission was designed before XML existed: the data representations are several, and on occasion byzantine. Part of the process of producing a standard such as XForms is a test suite to check implementability of the specification. The original XForms test suite consisted of a large collection of XForms, one XForm per feature to be tested. These had to be run by hand, and the output inspected to determine if the test had passed. As a part of the XForms 2.0 effort, a new test suite is being designed and built. This tests features by introspection, without user intervention, so that the XForm itself can report if it has passed or not. Current work within the test suite is on submission. This paper gives an overview of how the test suite works, and discusses the issues involved with submission, the XForms approach to it, and how to go about introspecting something that has left the client before you can cast your eyes on it.


Author(s):  
Steven DeRose

Models for XML documents often focus on text documents, but XML is used for many other kinds of data as well: databases, math, music, vector graphic images, and more. This paper examines how basic document models in the “text” world do and do not fit a quite different kind of data: vector graphic images, and in particular their very common application for many kinds of diagrams.


Author(s):  
Patrick Andries ◽  
Lauren Wood

Before XML, the United States Government Publishing Office (GPO) created complex typography using non-hierarchical, line-based typesetting systems characterized by “locator” files which contain lines of typesetting instructions. Our mission is to convert years of locator files that describe U.S. government bills, laws, statutes, and similar documents into structural XML, valid to the United States Legislative Markup (USLM) XML Schema. This was and is complicated, as locator files, in addition to being completely presentation-focused, use stylistic differences to communicate semantic significance. Our iterative analysis grew the mapping specification in stages. The conversion itself is in two parts. First, Java converts the locator files into hierarchical XML (the Java lexical, syntactical, decomposition, and generational phases are the focus of this paper). Then XSLT improves the resulting XML. Quality control and testing required additional programming and the creation and maintenance of a large set of reference samples.


Author(s):  
C. M. Sperberg-McQueen

How to react when things are not as we expect them to be.


Author(s):  
Eliot Kimber

Many products make XML from Microsoft Word, but consider the reverse: making Word versions of your XML documents, thus using MS Word as a document composition engine. The Wordinator enables automatic creation of high-quality Word documents from XML source. It uses an extension of the Word2DITA project’s SimpleWP (Simple Word Processing markup language) as the input to an Apache POI-based Java application that generates Word documents. XSLT generates the SimpleWP XML, managing the mapping of source XML elements to Word constructs and styles. I consider, in particular, the separation of concerns between the XSLT that generates the SimpleWP XML and the Java code that generates the Word documents.
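
As a rough, hypothetical sketch of that separation of concerns (the paper uses XSLT and the real SimpleWP vocabulary; the element and style names below are invented stand-ins, and the sketch is written in XQuery rather than XSLT):

    xquery version "3.1";
    (: Illustrative only: map source elements to styled word-processing paragraphs
       in an invented intermediate vocabulary; a separate, POI-style program would
       then read such intermediate XML and write the .docx. :)
    declare function local:to-wp($n as node()) as node()* {
      typeswitch ($n)
        case element(title) return <para style="Heading1">{ string($n) }</para>
        case element(p)     return <para style="BodyText">{ string($n) }</para>
        default             return $n/node() ! local:to-wp(.)
    };
    local:to-wp(<doc><title>The Wordinator</title><p>Hello, Word.</p></doc>)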


Author(s):  
Elli Bleeker ◽  
Bram Buitendijk ◽  
Ronald Haentjens Dekker

The article discusses how micro-level textual variation can be expressed in an idiomatic manner using markup, and how the markup information is subsequently used by a digital collation tool for a more refined analysis of the textual variation. We take examples from the manuscript materials of Virginia Woolf's To the Lighthouse (1927), which bear the traces of the author's struggles in the form of deletions, additions, and rewrites. These in-text revisions typically constitute non-linear, discontinuous, or multi-hierarchical information structures. While digital technology has been instrumental in supporting manuscript research, the current data models for text provide only limited support for co-existing hierarchies or non-linear text features. The hypergraph data model of TAG is specifically designed to support and facilitate the study of complex manuscript text by way of its syntax TAGML and the collation tool HyperCollate. The article demonstrates how the study of textual variation can be augmented by designated markup to express the in-text, micro-level revisions, and by computer-assisted collation that takes into account that information.


Author(s):  
Vincenzo Rubano ◽  
Fabio Vitali

Producing accessible content for the Web is a rather complex task. Standards, rules, and principles that offer largely useful recommendations for accessible content do indeed exist, but they are not adequately enforced and supported by actual implementations. It is fairly frequent for content authors to produce material that ends up not being accessible without even noticing it, even when using additional tools and services. Yet most of the existing recommendations for accessible web resources center around the addition of reasonably simple markup with a clear declarative purpose in its design. How, then, is it possible that producing truly accessible content is such a rare occurrence? In this paper, we posit that an important reason for this, in addition to the well-known lack of interest and lack of awareness, is the difficulty non-disabled content authors and tool designers have in evaluating and perceiving the correctness or wrongness of the generated assistive markup. Designers have serious difficulties when evaluating the effectiveness and correctness of the accessibility of their works, and existing tools do little or nothing to reduce the "handicap". Under these assumptions, we describe an innovative approach based on declarative markup to improve the design and evaluation of the accessibility of web pages. In particular, our strategy encompasses the combined use of a declarative framework of accessible web components, capable of enforcing best practices and conformance to accessibility standards; automated tools to test the accessibility of web content; and a new approach to manual tools that lets developers and content creators visually examine accessibility issues so that they can make sense of their impact on people with disabilities.


Author(s):  
Liam Quin

This paper describes ongoing work to make the Web site containing the proceedings of the Balisage conference more accessible. Like many older Web sites, the site may have followed best practices when its framework was built, but times and the Web have moved on. The site is built from static XML documents using XSLT, which was modified as needed. Difficulties encountered included those common to updating any project’s production code that has been in use for a long time and has undergone periodic tweaks and adjustments; changes specific to accessibility and to the XML input are highlighted. The work took only a few days, albeit spread over several months, accompanied by considerably more research and testing, also described in the paper. Limitations of the work, especially dealing with author-submitted content, are also described.


Author(s):  
Joel Kalvesmaki

Regular expressions differ in their details from one programming language or environment to the next. The XPath flavor of regular expressions has unrivaled access to Unicode code blocks and character classes. But why stop there? In this paper I present a small XSLT function library that extends the XPath functions fn:matches(), fn:replace(), fn:tokenize(), and fn:analyze-string() to permit new ways to build classes of Unicode characters, by means of their names and decomposition relations.
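
For context, and not part of the paper's library: the standard XPath regular-expression functions already accept \p{...} escapes for Unicode blocks and general categories, which is the baseline this library extends. For example:

    xquery version "3.1";
    (: Standard XPath/XQuery regex behaviour only; nothing here comes from the
       paper's own function library. :)
    matches("λογος", "^\p{IsGreek}+$"),   (: true: every character is in the Greek block :)
    matches("Λογος", "\p{Lu}"),           (: true: contains an uppercase letter :)
    tokenize("ΑΒΓ-ΔΕΖ", "\p{P}")          (: splits on punctuation: ("ΑΒΓ", "ΔΕΖ") :)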

