The XML chip is now more than six years old. The diffusion of this technology has been very limited, owing, on the one hand, to the long period of evolutionary refinement needed to produce hardware capable of accelerating a significant portion of the XML computing workload and, on the other hand, to the fact that the chip was invented by the start-up Tarari in a commercial context which, for business reasons, required minimal public disclosure of its design features. It remains, nevertheless, a significant landmark that the XML chip has been sold and continuously improved over the last six years. From the perspective of general computing history, the XML chip is an uncommon example of a successful workload-specific symbolic computing device. With respect to the specific interests of the XML community, the XML chip is a remarkable validation of one of its core founding principles: that normalizing on a data format, whatever its imperfections, would eventually enable developers to create tools to process it efficiently.
This paper was prepared for the International Symposium on Processing XML Efficiently: Overcoming Limits on Space, Time, or Bandwidth, a day of discussion among, predominantly, software developers working in the area of efficient XML processing. The Symposium is being held as a workshop within Balisage, a conference of specialists in markup theory. Given the interests of the audience, this paper does not delve into the design features and principles of the chip itself; rather, it presents a dialectic on the motivation for developing an XML chip in view of related and potentially competing developments: scaling, as commonly characterized by Moore's Law; parallelization through increasing the number of computing cores on general-purpose processors (multicore von Neumann architecture); and optimization of software.