shape constraints
Recently Published Documents

TOTAL DOCUMENTS: 127 (five years: 32)
H-INDEX: 18 (five years: 3)
Algorithms ◽ 2021 ◽ Vol 14 (12) ◽ pp. 345
Author(s): Martin von Kurnatowski, Jochen Schmid, Patrick Link, Rebekka Zache, Lukas Morand, ...

Systematic decision making in engineering requires appropriate models. In this article, we introduce a regression method for enhancing the predictive power of a model by exploiting expert knowledge in the form of shape constraints, more specifically, monotonicity constraints. Incorporating such information is particularly useful when the available datasets are small or do not cover the entire input space, as is often the case in manufacturing applications. We formulate the regression subject to these monotonicity constraints as a semi-infinite optimization problem and propose an adaptive solution algorithm. The method is applicable in multiple dimensions and can be extended to more general shape constraints. It was tested and validated on two real-world manufacturing processes, namely, laser glass bending and press hardening of sheet metal. The resulting models both complied well with the expert’s monotonicity knowledge and predicted the training data accurately. The suggested approach led to lower root-mean-squared errors than comparative methods from the literature for the sparse datasets considered in this work.
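The adaptive idea can be sketched in one dimension. Everything below is an illustrative toy, not the authors' implementation: the cubic model, the squared-hinge penalty standing in for hard constraints, and the synthetic √x dataset are all assumptions. The semi-infinite constraint f′(x) ≥ 0 on all of [0, 1] is replaced by finitely many reference points, and the point where the derivative is most negative is added after each solve:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 12))
y = np.sqrt(x) + 0.05 * rng.normal(size=x.size)   # sparse monotone data

# Cubic model f(x) = c0 + c1 x + c2 x^2 + c3 x^3; monotonicity means
# f'(x) = c1 + 2 c2 x + 3 c3 x^2 >= 0 for every x in [0, 1].
A = np.column_stack([np.ones_like(x), x, x**2, x**3])

def deriv_rows(t):
    # Rows g(t) such that g(t) @ c = f'(t).
    return np.column_stack([np.zeros_like(t), np.ones_like(t),
                            2.0 * t, 3.0 * t**2])

def fit_penalized(ref_pts, lam=1e4, iters=100):
    # Penalized least squares: derivative constraints that are violated
    # at the current reference points enter as quadratic penalty rows.
    G = deriv_rows(ref_pts)
    AtA, Aty = A.T @ A, A.T @ y
    c = np.linalg.solve(AtA, Aty)                 # unconstrained start
    for _ in range(iters):
        act = G[(G @ c) < 0.0]                    # currently violated rows
        c_new = np.linalg.solve(AtA + lam * act.T @ act, Aty)
        if np.allclose(c_new, c):
            break
        c = c_new
    return c

# Adaptive outer loop: solve with finitely many reference points, then
# add the point in [0, 1] where f' is most negative, and repeat.
ref = np.array([0.0, 0.5, 1.0])
fine = np.linspace(0.0, 1.0, 201)
for _ in range(10):
    c = fit_penalized(ref)
    deriv = deriv_rows(fine) @ c
    if deriv.min() >= -1e-3:                      # monotone (up to tolerance)
        break
    ref = np.append(ref, fine[np.argmin(deriv)])  # add worst violator
```

The loop typically terminates after a few additions, since each new reference point suppresses the deepest remaining dip of the derivative.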


Author(s):  
Kazuyuki Wakasugi

If domain knowledge can be integrated as an appropriate constraint, the generalization performance of a neural network model can likely be improved. We propose Sensitivity Direction Learning (SDL) for training a neural network model with user-specified relationships (e.g., monotonicity, convexity) between each input feature and the model output, imposed as soft shape constraints that represent domain knowledge. To impose these constraints, SDL uses a novel penalty function, the Sensitivity Direction Error (SDE) function, which returns the squared error between the coefficients of an approximation curve fitted to each Individual Conditional Expectation (ICE) plot and the coefficient constraints that represent the domain knowledge. The effectiveness of the concept was verified in simple experiments. Like L2 regularization and dropout, SDL and SDE can be used without changing the neural network architecture. We believe the algorithm is a strong candidate for neural network users who want to incorporate domain knowledge.
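The penalty computation can be sketched as follows. This is an assumption-laden illustration, not the paper's exact formulation: the ICE curve of each instance is approximated by a degree-1 polynomial, the "coefficient constraint" is a single user-specified target slope, and the stand-in model is a hand-written function rather than a trained network:

```python
import numpy as np

def model(X):
    # Stand-in for a neural network's prediction function (assumption).
    return X[:, 0] ** 2 - 0.5 * X[:, 1]

def sde_penalty(predict, X, feature, target_slope, grid):
    """Mean squared error between each ICE curve's fitted slope and a
    user-specified target slope (a simplified SDE-style penalty)."""
    errs = []
    for row in X:
        Xg = np.tile(row, (grid.size, 1))
        Xg[:, feature] = grid                     # vary one feature (ICE)
        slope = np.polyfit(grid, predict(Xg), 1)[0]
        errs.append((slope - target_slope) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(8, 2))
grid = np.linspace(-1.0, 1.0, 21)

# Feature 1 enters the model exactly linearly with slope -0.5, so the
# penalty for a matching target slope is (numerically) zero.
pen = sde_penalty(model, X, feature=1, target_slope=-0.5, grid=grid)
```

In actual training, such a penalty would be weighted and added to the task loss inside an automatic-differentiation framework; the NumPy version above only illustrates what is being measured.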


2021 ◽ Vol 12 (3) ◽ pp. SC-58-SC-69
Author(s): Marc Chataigner, Areski Cousin, Stéphane Crépey, Matthew Dixon, Djibril Gueye
