Evaluation of a Multi-Goal Solver for Use in a Blackboard Architecture

Author(s):  
Jeremy Straub

This article presents a multi-goal solver for problems that can be modeled using a Blackboard Architecture. The Blackboard Architecture can be used for data fusion, robotic control and other applications. It combines the rule-based problem analysis of an expert system with a mechanism for interacting with its operating environment. In this context, numerous control or domain (system-subject) problems may exist which can be solved by reaching any one of multiple outcomes. For these problems, where any of several solutions constitutes an end-goal, a solving mechanism is required that is solution-choice-agnostic and finds the lowest-cost path to the lowest-cost solution. Such a solver mechanism is presented and characterized herein. The performance of the solver, including the computational time required both to ascertain a solution and to execute it, is compared to that of the naïve Blackboard approach. This performance characterization is performed across multiple levels of rule counts and rule connectivity. The naïve approach is shown to generate a solution faster, but the solutions it generates are, in most cases, inferior to those generated by the solver.
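
As a minimal illustration of the solution-choice-agnostic idea described above (not the article's actual solver), the following Python sketch runs a uniform-cost search over rule firings, stopping at whichever of several goal states is reached most cheaply; the rule format, costs, and toy problem are all assumptions.

```python
import heapq

def solve(initial_facts, rules, goals):
    """Uniform-cost search over rule firings: return the cheapest
    sequence of rule applications that reaches ANY of the goal
    states (solution-choice-agnostic)."""
    frontier = [(0, 0, frozenset(initial_facts), [])]
    seen = {}
    tiebreak = 1
    while frontier:
        cost, _, facts, path = heapq.heappop(frontier)
        if any(goal <= facts for goal in goals):   # any end-goal suffices
            return cost, path
        if seen.get(facts, float("inf")) <= cost:
            continue
        seen[facts] = cost
        for name, pre, post, rule_cost in rules:   # rule: (name, pre, post, cost)
            if pre <= facts and not post <= facts:
                heapq.heappush(frontier, (cost + rule_cost, tiebreak,
                                          facts | post, path + [name]))
                tiebreak += 1
    return None  # no end-goal is reachable

# Hypothetical toy problem with two alternative end-goals; the cheaper wins.
rules = [("r1", frozenset({"a"}), frozenset({"b"}),  2),
         ("r2", frozenset({"b"}), frozenset({"g1"}), 5),
         ("r3", frozenset({"a"}), frozenset({"g2"}), 4)]
print(solve({"a"}, rules, [frozenset({"g1"}), frozenset({"g2"})]))  # (4, ['r3'])
```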

2021 ◽  
Vol 11 (5) ◽  
pp. 2326
Author(s):  
Claudio Favi ◽  
Roberto Garziera ◽  
Federico Campi

Welding is a consolidated technology used to manufacture and assemble large products and structures. Currently, welding design issues are tackled downstream of 3D modeling, with no concurrent development of design and manufacturing engineering activities. This study aims to define a method to formalize welding knowledge that can be reused as the basis for an engineering design platform, applying the design for assembly method to assure product manufacturability with respect to welding operations (design for welding (DFW)). An ontology (rule-based system) is used to translate tacit knowledge into explicit knowledge, while geometrical feature recognition with parametric modeling is adopted to couple geometrical information with the identification of welding issues. Results show how manufacturing issues related to welding operations can be identified and fixed within the design phase. Two metal structures (a jack adapter of a heavy-duty prop and a lateral frame of a bracket structure) fabricated with arc welding processes were used as case studies, and the following benefits were highlighted: (i) anticipation of welding issues related to the product geometry and (ii) reduction of the effort and time required for design review. In conclusion, this research moves toward concurrent engineering, closing the gap between design and manufacturing.
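
The knowledge-formalization step can be pictured as explicit rules that fire on geometric feature data. The sketch below is a hypothetical, much-simplified stand-in for a DFW rule base: every field name, process, and threshold is invented for illustration, not taken from the authors' ontology.

```python
# Hypothetical rule base: a real DFW ontology would be far richer.
def check_welding_rules(feature):
    issues = []
    if feature["joint_type"] == "fillet" and feature["gap_mm"] > 2.0:
        issues.append("Gap too wide for a sound fillet weld; tighten the fit-up.")
    if feature["thickness_mm"] < 3.0 and feature["process"] == "SMAW":
        issues.append("Thin section: risk of burn-through with stick welding.")
    if not feature["torch_accessible"]:
        issues.append("Torch cannot reach the joint; revise the part orientation.")
    return issues

# Geometric feature data as it might come from feature recognition.
frame_joint = {"joint_type": "fillet", "gap_mm": 2.5, "thickness_mm": 8.0,
               "process": "GMAW", "torch_accessible": True}
for issue in check_welding_rules(frame_joint):
    print("DFW issue:", issue)
```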


Author(s):  
Siyao Luan ◽  
Deborah L. Thurston ◽  
Madhav Arora ◽  
James T. Allison

In some cases, the level of effort required to formulate and solve an engineering design problem as a mathematical optimization problem is significant, and the potential improvement in design performance may not be worth that effort. In this article we address the tradeoffs associated with formulation and modeling effort. We define three core elements (dimensions) of design formulations: design representation, comparison metrics, and predictive model. Each formulation dimension offers opportunities for the design engineer to balance the expected quality of the solution against the level of effort and time required to reach it. This paper demonstrates how guidelines can be used to create alternative formulations for the same underlying design problem, and how the resulting solutions can then be evaluated and compared. Using a vibration absorber design example, the guidelines are enumerated, explained, and used to compose six alternative optimization formulations featuring different objective functions, decision variables, and constraints. The six formulations are then solved, and their scores for complexity, computational time, and solution quality are quantified and compared. The results illustrate the unavoidable tradeoffs among these three attributes; the best formulation depends on which set of tradeoffs is most acceptable in a given situation.
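
To make the notion of alternative formulations concrete, here is a hedged sketch of one plausible formulation of a vibration absorber problem (not necessarily one of the paper's six): choose absorber stiffness and damping to minimize the worst-case steady-state amplitude of the primary mass over a frequency band. All parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

m1, k1 = 1.0, 1.0          # primary mass and stiffness (normalized)
m2 = 0.05                  # absorber mass: a 5% mass ratio
omegas = np.linspace(0.5, 1.5, 400)   # excitation band around resonance

def peak_amplitude(x):
    """Worst-case steady-state amplitude of the primary mass."""
    k2, c2 = x
    worst = 0.0
    for w in omegas:
        # 2-DOF dynamic stiffness matrix; unit harmonic force on mass 1.
        Z = np.array([[k1 + k2 - m1 * w**2 + 1j * c2 * w, -(k2 + 1j * c2 * w)],
                      [-(k2 + 1j * c2 * w), k2 - m2 * w**2 + 1j * c2 * w]])
        X = np.linalg.solve(Z, np.array([1.0, 0.0]))
        worst = max(worst, abs(X[0]))
    return worst

res = minimize(peak_amplitude, x0=[0.05, 0.01], method="Nelder-Mead")
print("k2, c2 =", res.x, "-> peak |X1| =", res.fun)
```

Swapping the objective (e.g., amplitude at a single frequency), the decision variables (adding the mass ratio), or the constraints changes the formulation without changing the underlying design problem.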


2018 ◽  
Vol 159 ◽  
pp. 01009 ◽  
Author(s):  
Mohammad Ghozi ◽  
Anik Budiati

The genetic algorithm (GA) and the harmony search (HS) method have many applications to problems in civil engineering design. The question remains, however, which method is better for geometry optimization of a steel structure. The purpose of this paper is to compare GA and HS performance for this task. The problem is addressed by optimizing a steel structure with both GA and HS and then comparing the resulting structural weights as well as the time required for the calculation. In this study, GA produced structural weights of 2308.00 kg to 2387.00 kg, while HS produced 2193.12 kg to 2239.48 kg. The average computational time required by GA was 607 seconds; HS needed 278 seconds. We conclude that HS is both faster and better than GA for geometry optimization of a steel structure.
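
For readers unfamiliar with HS, the sketch below implements its core loop (harmony memory consideration, pitch adjustment, random selection) on a toy objective; the actual steel-structure objective, with its member sizing and stress constraints, is not reproduced here, and the parameter values are illustrative.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=5000, seed=0):
    """Minimize `objective` over box `bounds` with basic Harmony Search."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # memory consideration
                value = rng.choice(memory)[d]
                if rng.random() < par:               # pitch adjustment
                    value += rng.uniform(-bw, bw) * (hi - lo)
            else:                                    # random selection
                value = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, value)))
        worst = max(range(hms), key=scores.__getitem__)
        s = objective(new)
        if s < scores[worst]:                        # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy stand-in objective (sphere function), just to show the interface.
print(harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3))
```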


Author(s):  
Reza Alizadeh ◽  
Liangyue Jia ◽  
Anand Balu Nellippallil ◽  
Guoxin Wang ◽  
Jia Hao ◽  
...  

In engineering design, surrogate models are often used instead of costly computer simulations. Typically, a single surrogate model is selected based on previous experience. We observe, based on an analysis of the published literature, that fitting an ensemble of surrogates (EoS) based on cross-validation errors is more accurate but requires more computational time. In this paper, we propose a method to build an EoS that is both accurate and less computationally expensive. In the proposed method, the EoS is a weighted-average surrogate of response surface models, kriging, and radial basis functions, with weights based on overall cross-validation error. We demonstrate that the created EoS is more accurate than individual surrogates even when fewer data points are used, and is therefore computationally efficient, with relatively insensitive predictions. We demonstrate the use of an EoS using hot rod rolling as an example. Finally, we include a rule-based template which can be used for other problems with similar requirements regarding, for example, computational time, required accuracy, and data size.
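
The weighting scheme can be sketched as follows: compute each member surrogate's overall cross-validation error, then take ensemble weights inversely proportional to those errors. The code below is a minimal illustration on synthetic data, using common library implementations of the three surrogate types rather than the authors' exact procedure.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (40, 2))
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2      # synthetic stand-in "simulation"

def fit_predict(name, Xtr, ytr, Xte):
    if name == "rsm":                        # quadratic response surface
        model = make_pipeline(PolynomialFeatures(2), LinearRegression())
        return model.fit(Xtr, ytr).predict(Xte)
    if name == "kriging":
        return GaussianProcessRegressor(normalize_y=True).fit(Xtr, ytr).predict(Xte)
    return RBFInterpolator(Xtr, ytr)(Xte)    # radial basis functions

# Overall cross-validation RMSE per surrogate, then inverse-error weights.
names, cv_err = ["rsm", "kriging", "rbf"], {}
for name in names:
    errs = [np.sqrt(np.mean((fit_predict(name, X[tr], y[tr], X[te]) - y[te]) ** 2))
            for tr, te in KFold(5, shuffle=True, random_state=0).split(X)]
    cv_err[name] = np.mean(errs)
inv = {n: 1.0 / cv_err[n] for n in names}
weights = {n: inv[n] / sum(inv.values()) for n in names}
print("cross-validation-based weights:", weights)

# The EoS prediction is the weighted average of the member surrogates.
X_new = rng.uniform(0, 1, (5, 2))
eos_pred = sum(weights[n] * fit_predict(n, X, y, X_new) for n in names)
print("EoS predictions:", eos_pred)
```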


2019 ◽  
Vol 141 (7) ◽  
Author(s):  
Jaya Narain ◽  
Amos G. Winter V

This paper details a hybrid computational and analytical model to predict the performance of inline pressure-compensating drip irrigation emitters. Pressure-compensating emitters deliver a constant flow rate over a range of applied pressures to accurately meter water to crops. Flow rate is controlled within the emitter via a fixed-resistance tortuous path and a variable flow resistance composed of a flexible membrane that deflects under changes in pressure, restricting the flow path. A pressure resistance parameter was derived using an experimentally validated computational fluid dynamics (CFD) model to describe the flow behavior in tortuous paths. The bending mechanics of the membrane were modeled analytically and refined by deriving a correction factor using finite element analysis (FEA). A matrix formulation that calculates the force applied by a line or a patch load of any shape on a rectangular membrane, along which there is a prescribed deflection, was derived and found to be accurate to within 1%. The combined hybrid computational–analytical model reduces the computational time of modeling emitters from hours to less than 30 min, dramatically lowering the time required to iterate and select optimal designs. The model was validated experimentally using three commercially available drip emitters and was accurate to within 12% of the experimental results.
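
A toy sketch of the fixed-plus-variable resistance coupling may help: the flow rate is set by a fixed tortuous-path resistance in series with a membrane-controlled restriction whose gap closes as the local pressure drop rises, solved here by fixed-point iteration. The resistance law, the compliance model, and all numbers are illustrative assumptions, not the paper's calibrated models.

```python
def emitter_flow(pressure_kpa, k_fixed=4.0, k_var0=1.0, gap0=1.0,
                 compliance=0.002, iterations=60):
    """Flow through a fixed resistance in series with a membrane-
    controlled restriction, solved by fixed-point iteration."""
    q = (pressure_kpa / (k_fixed + k_var0)) ** 0.5   # initial guess
    for _ in range(iterations):
        dp_var = pressure_kpa - k_fixed * q ** 2     # drop across the membrane section
        gap = max(gap0 - compliance * dp_var, 0.05)  # membrane closes the gap
        k_var = k_var0 * (gap0 / gap) ** 3           # narrower gap, higher resistance
        q = (pressure_kpa / (k_fixed + k_var)) ** 0.5
    return q

for p in (50, 100, 200, 300):   # flow rises more slowly than sqrt(p)
    print(p, "kPa ->", round(emitter_flow(p), 3), "(arbitrary flow units)")
```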


Processes ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1184
Author(s):  
Geraldine Cáceres Sepulveda ◽  
Silvia Ochoa ◽  
Jules Thibault

It is paramount to optimize the performance of a chemical process in order to maximize its yield and productivity and to minimize the production cost and the environmental impact. The various objectives in optimization are often in conflict, and one must determine the best compromise solution, usually using a representative model of the process. However, solving first-principles models can be computationally intensive, making model-based multi-objective optimization (MOO) a time-consuming task. In this work, a methodology is proposed to perform multi-objective optimization of a two-reactor system for the production of acrylic acid, using artificial neural networks (ANNs) as meta-models, in an effort to reduce the computational time required to circumscribe the Pareto domain. The performance of the meta-model confirmed good agreement between the experimental data and the model-predicted values of the relationships between the eight decision variables and the nine performance criteria of the process. Once the meta-model was built, the Pareto domain was circumscribed using a genetic algorithm (GA) and ranked with the net flow method (NFM). Using the ANN surrogate model, the optimization time decreased by a factor of 15.5.
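
The surrogate-based idea can be sketched in a few lines: fit an ANN to a modest number of expensive model evaluations, then use the cheap ANN to evaluate many candidates and keep the non-dominated ones. In this sketch a synthetic two-objective function stands in for the acrylic-acid process model, and simple random screening with a Pareto filter stands in for the paper's GA and net flow ranking.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def expensive_process(x):
    """Synthetic two-objective stand-in for the first-principles model:
    column 0 (yield-like) is maximized, column 1 (cost-like) minimized."""
    return np.column_stack([np.sin(3 * x[:, 0]) * x[:, 1],
                            x[:, 0] ** 2 + 0.5 * x[:, 1]])

# Train the ANN meta-model on a modest number of "expensive" evaluations.
X_train = rng.uniform(0, 1, (200, 2))
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X_train, expensive_process(X_train))

# Screen many candidates cheaply via the ANN; keep the non-dominated ones.
candidates = rng.uniform(0, 1, (3000, 2))
Y = ann.predict(candidates)
pareto = []
for i, (yi, ci) in enumerate(Y):
    dominated = np.any((Y[:, 0] >= yi) & (Y[:, 1] <= ci) &
                       ((Y[:, 0] > yi) | (Y[:, 1] < ci)))
    if not dominated:
        pareto.append(i)
print(len(pareto), "non-dominated points approximate the Pareto domain")
```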


2017 ◽  
Vol 2017 (1) ◽  
pp. 1612-1628
Author(s):  
Laura M. Fitzpatrick ◽  
A Zachary Trimble ◽  
Brian S. Bingham

A marine pollutant spill environmental model that can accurately predict fine-scale pollutant concentration variations on a free surface is needed in the early stages of testing robotic control systems for tracking pollutant spills. The model must reproduce, for use in a robotic control system simulation environment, the fine-scale surface concentration variations observed by a robot. Furthermore, to facilitate development of robotic control systems, the model must reproduce sample spill distributions in minimal computational time. A combined Eulerian-Lagrangian model, with two tuning parameters, was developed to produce, with minimal computational effort, the fine-scale concentrations that would be observed by a robot. Multiple model scenarios were run with different tuning parameters to determine the effects of those parameters on the model's ability to reproduce an experimentally measured pollutant plume's structure. A quantitative method for analyzing the concentration variations was established using amplitude and temporal statistical parameters. The differences in the statistical parameters between the model and the experiment range from 69% to 316%. After tuning, the model produces a sample spill which includes a high-frequency concentration component not observed in the experimental data, but which generally represents the real-time, fine-scale pollutant plume structure and can be used for testing control algorithms.
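
A minimal combined Eulerian-Lagrangian sketch of the general approach: particles are released continuously, advected by a mean current plus a diffusive random walk, and binned onto a grid to give the concentration field a robot would sample. The two tuning knobs shown (eddy diffusivity and release rate) are illustrative stand-ins for the paper's tuning parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = 0.05, 0.0       # mean surface current, m/s
D = 0.01               # tuning knob 1: eddy diffusivity, m^2/s
release = 50           # tuning knob 2: particles released per step
dt, steps = 1.0, 600

particles = np.zeros((0, 2))
for _ in range(steps):
    particles = np.vstack([particles, np.zeros((release, 2))])  # release at source
    particles += [u * dt, v * dt]                               # Lagrangian advection
    particles += rng.normal(0.0, np.sqrt(2 * D * dt), particles.shape)  # random walk
# Eulerian step: bin particle counts onto a grid as a proxy concentration
# field that a simulated robot could sample.
conc, x_edges, y_edges = np.histogram2d(particles[:, 0], particles[:, 1],
                                        bins=80, range=[[0, 40], [-10, 10]])
print("peak cell count:", int(conc.max()))
```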


Author(s):  
Luis F. Ayala ◽  
Eltohami S. Eltohami ◽  
Michael A. Adewumi

Multiphase flow is prevalent in many industrial processes. Accurate and efficient modeling of multiphase flow is therefore essential both to understanding these processes and to developing technologies to handle and manage them. In the petroleum industry, such hydrodynamic processes, and their consequences, are encountered in offshore facilities, surface facilities, and reservoir applications. In this paper, we consider the modeling of these processes with special attention to the transport of petroleum products through pipelines. Multiphase hydrodynamic modeling is usually a trade-off between maximizing the accuracy level and minimizing the computational time required. The most fundamental modeling effort developed to achieve this goal is based on applying simplifications to the basic physical laws, as defined by continuum mechanics, governing these processes. However, the modeling of multiphase flow processes requires the coupling of these basic laws with a thermodynamic phase behavior model. This paper highlights the impact of the techniques used to computationally couple the system's thermodynamics with its fluid mechanics, while paying close attention to the trade-off mentioned above. It considers the consequences of the simplifications applied, as well as the inherent deficiencies associated with them. Special consideration is given to the conservation of mass and to the terms that govern its transfer between the phases. Furthermore, the implications of the common simplification of isothermal conditions are studied, highlighting the loss of accuracy in the material balance associated with this time-saving assumption. The paper concludes by suggesting remedies to these problems, supported by results showing considerable improvement in satisfying both basic constraints: minimizing computational time and maximizing accuracy.
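
One way to picture the thermodynamic-hydrodynamic coupling is a pressure march along the line with a flash calculation at each step; because the phase split comes from solving the Rachford-Rice equation, the overall material balance is honored by construction. The constant-slope K-value model and the linear pressure profile below are illustrative assumptions only.

```python
import numpy as np

z = np.array([0.7, 0.3])                 # overall mole fractions, 2 components

def k_values(p_bar):                     # crude illustrative K-value model
    return np.array([8.0, 0.05]) * (50.0 / p_bar)

def flash(z, K):
    """Vapor fraction beta from the Rachford-Rice equation, by bisection."""
    f = lambda b: np.sum(z * (K - 1.0) / (1.0 + b * (K - 1.0)))
    if f(0.0) <= 0.0:
        return 0.0                       # all liquid
    if f(1.0) >= 0.0:
        return 1.0                       # all vapor
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for p in np.linspace(50.0, 10.0, 5):     # pressure march down the line, bar
    K = k_values(p)
    beta = flash(z, K)
    x = z / (1.0 + beta * (K - 1.0))     # liquid composition
    y = K * x                            # vapor composition
    recovered = beta * y + (1.0 - beta) * x   # exactly recovers z
    print(f"P = {p:5.1f} bar, vapor fraction = {beta:.3f}, mole balance = {recovered}")
```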


1971 ◽  
Vol 93 (4) ◽  
pp. 543-549
Author(s):  
B. A. Gastrock ◽  
J. A. Miller

The development of a numerical technique for the treatment of two-dimensional non-similar, unsteady, laminar boundary layers is presented. The method extends the integral matrix procedure of Kendall to nonsteady flows. Solutions of example problems are presented, demonstrating good agreement with known classical results. Core storage requirements of 130K bytes allow consideration of as many as 1250 field points and 50 time increments per oscillation cycle. Solution of oscillating Blasius flow with 8 nodal points and 16 time increments in 13.49 seconds demonstrates the practicality of the computational time required, while agreement with both the analysis and the experiment of Nickerson for this flow is excellent.
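
The unsteady integral matrix procedure itself is not reproduced here; as a hedged illustration of the classical steady baseline such methods are checked against, the sketch below solves the Blasius equation f''' + ½ f f'' = 0 (with f(0) = f'(0) = 0, f'(∞) = 1) by shooting on the wall value f''(0).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, s):
    f, fp, fpp = s                       # f, f', f''
    return [fp, fpp, -0.5 * f * fpp]     # f''' = -f f'' / 2

def edge_velocity_error(fpp0, eta_max=10.0):
    sol = solve_ivp(blasius_rhs, [0.0, eta_max], [0.0, 0.0, fpp0],
                    rtol=1e-9, atol=1e-9)
    return sol.y[1, -1] - 1.0            # want f'(eta_max) -> 1

fpp0 = brentq(edge_velocity_error, 0.1, 1.0)   # shoot on the wall shear
print(f"f''(0) = {fpp0:.6f} (classical value ~ 0.332057)")
```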

