A data-flow visual approach to symbolic computing: implementing a production-rule-based programming system through a general-purpose data-flow VL

Author(s):  
M. Mosconi ◽  
M. Porta
1989 ◽  
Vol 21 (1) ◽  
pp. 17-21 ◽  
Author(s):  
Jacobo Carrasquel ◽  
Jim Roberts ◽  
John Pane
Keyword(s):  
Top Down
Author(s):  
H. Ashrafiuon ◽  
N. K. Mani

Abstract The symbolic computing system MACSYMA is used to automatically generate the explicit equations needed to represent the kinematic constraints and system dynamics, and to compute the design sensitivities for optimal design of any multibody system. The logic to construct the system matrices and vectors involved in the analysis and design equations is implemented as general-purpose MACSYMA programs. All necessary manipulations are performed by MACSYMA, and the equations are output as FORTRAN statements that can be compiled and executed. This approach yields a computational saving of up to 95% compared to using general-purpose programs. The approach is general in nature and is applicable to any multibody system. Examples are presented to demonstrate the effectiveness of the approach.
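The abstract's pipeline — derive dynamics symbolically, differentiate with respect to a design parameter, and emit the result as FORTRAN — can be sketched with SymPy as a stand-in for MACSYMA (the paper's actual MACSYMA programs are not shown here; the pendulum expression, the placeholder symbols `th`/`thdd`, and the output name `DRES` are illustrative assumptions):

```python
# A minimal SymPy analog of the workflow described in the abstract:
# build a dynamics expression symbolically, take its design
# sensitivity, and print the result as a Fortran statement.
import sympy as sp

# Design parameter L (length), mass m, gravity g; th and thdd stand in
# for the angle and angular acceleration of a simple pendulum.
L, m, g, th, thdd = sp.symbols('L m g th thdd', positive=True)

# Equation-of-motion residual: m*L**2*thdd + m*g*L*sin(th) = 0
residual = m * L**2 * thdd + m * g * L * sp.sin(th)

# Design sensitivity of the residual with respect to the length L
sensitivity = sp.diff(residual, L)

# Emit as a FORTRAN statement that could be compiled into numerical code
print(sp.fcode(sensitivity, assign_to='DRES', standard=95))
```

The printed statement assigns `2*L*m*thdd + g*m*sin(th)` to `DRES`, ready to be pasted into a Fortran analysis routine — the same symbolic-to-compiled handoff the abstract describes.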


Author(s):  
Theodore Patkos ◽  
Abdelghani Chibani ◽  
Dimitris Plexousakis ◽  
Yacine Amirat

1988 ◽  
Vol 54 (500) ◽  
pp. 847-853
Author(s):  
Juhachi ODA ◽  
Kouetsu YAMAZAKI ◽  
Jirou SAKAMOTO ◽  
Junpei ABE ◽  
Masahide MATSUMOTO

Author(s):  
Michael Leventhal ◽  
Eric Lemoine

The XML chip is now more than six years old. The diffusion of this technology has been very limited, due, on the one hand, to the long period of evolutionary development needed to produce hardware capable of accelerating a significant portion of the XML computing workload and, on the other hand, to the fact that the chip was invented by the start-up Tarari in a commercial context which required, for business reasons, minimal public disclosure of its design features. It remains, nevertheless, a significant landmark that the XML chip has been sold and continuously improved for the last six years. From the perspective of general computing history, the XML chip is an uncommon example of a successful workload-specific symbolic computing device. With respect to the specific interests of the XML community, the XML chip is a remarkable validation of one of its core founding principles: normalizing on a data format, whatever its imperfections, would eventually enable developers to create tools to process it efficiently. This paper was prepared for the International Symposium on Processing XML Efficiently: Overcoming Limits on Space, Time, or Bandwidth, a day of discussion among, predominantly, software developers working in the area of efficient XML processing. The Symposium is being held as a workshop within Balisage, a conference of specialists in markup theory. Given the interests of the audience, this paper does not delve into the design features and principles of the chip itself; rather, it presents a dialectic on the motivation for developing an XML chip in view of related and potentially competing developments: scaling as commonly characterized by Moore's Law, parallelization through increasing the number of computing cores on general-purpose processors (multicore von Neumann architecture), and optimization of software.

