If you search for books and other media on the Web, you find Amazon, Wikipedia, and many other resources long before you see any libraries. The problem is historical: librarians were working ahead of the state of the art in database technology, yet proved unable to keep pace with mainstream computing developments, including the Web. As a result, libraries are left with extraordinarily rich catalogs in formats ill-suited to the Web, formats which require a great deal of work to adapt. As a first step towards addressing this problem, BIBFRAME is a model developed for representing metadata from libraries and other cultural heritage institutions as linked data. Libhub is a project building on BIBFRAME to convert traditional library formats, especially MARC/XML, to Web resource pages using BIBFRAME and other vocabulary frameworks. The technology implementing Libhub transforms MARC/XML to a semi-structured, RDF-like metamodel called Versa, from which various outputs are possible, including data-rich Web pages. The authors developed a pipeline processing technology in Python to address the need for high performance and scalability, as well as a prodigious degree of customization to accommodate a half century of variations and nuances in library cataloging conventions. The heart of this pipelining system is the open-source project pybibframe, and the main way for non-technical librarians to customize the transform is a pattern microlanguage called marcpatterns.py.
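To give a flavor of the pattern-driven approach, the following is a minimal, hypothetical sketch of a MARC/XML transform: a lookup table maps (MARC tag, subfield code) pairs to output properties, in the loose spirit of marcpatterns.py recipes. The mapping table, sample record, and function names here are all invented for illustration; the real pybibframe machinery is far richer, handling indicators, linking logic, and Versa output.

```python
# Illustrative sketch only -- NOT the actual pybibframe/marcpatterns.py API.
import xml.etree.ElementTree as ET

MARC_NS = "http://www.loc.gov/MARC21/slim"

# Hypothetical pattern table: (MARC tag, subfield code) -> output property.
PATTERNS = {
    ("245", "a"): "title",
    ("100", "a"): "creator",
    ("260", "b"): "publisher",
}

# A toy MARC/XML record, invented for this example.
SAMPLE = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245"><subfield code="a">Sample Title</subfield></datafield>
  <datafield tag="100"><subfield code="a">Doe, Jane</subfield></datafield>
</record>"""

def transform(marcxml):
    """Apply the pattern table to one MARC/XML record, yielding
    (property, value) statements -- a crude stand-in for the
    semi-structured, Versa-style output described above."""
    root = ET.fromstring(marcxml)
    for df in root.iter(f"{{{MARC_NS}}}datafield"):
        tag = df.get("tag")
        for sf in df.iter(f"{{{MARC_NS}}}subfield"):
            prop = PATTERNS.get((tag, sf.get("code")))
            if prop:
                yield (prop, sf.text)

statements = list(transform(SAMPLE))
```

The point of the table-driven design is that a cataloger can adjust the mapping without touching the traversal code, which is what makes the approach customizable for decades of cataloging variation.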
Using marcpatterns.py recipes specialized for the first Libhub participant, Denver Public Library, and further specialized from patterns common among public libraries, the first prerelease of linked data Web pages has already demonstrated a dramatic improvement in the library's visibility and brought quality, curated content to the Web, made possible by the adaptive, semi-structured transform from notoriously abstruse library catalog formats. This paper discusses an unorthodox approach to structured and heuristics-based transformation of a large corpus of XML in a difficult format that does not well serve the richness of its content. It covers some of the pragmatic choices made by the system's developers, who happen to be pioneering advocates of the Web, markup, and the standards around these, but who had to subordinate purity to the urgent need for large-scale exposure of dark cultural heritage data, under difficult circumstances for a small development and maintenance team. This is a case study in how proper knowledge of XML and its related standards must combine with agile techniques and "worse-is-better" concessions to solve a stubborn problem: extracting value from cultural heritage markup.