Robust Product Design Using Manufacturing Adjustments

Author(s):  
Kevin Otto

Abstract Tuning variables represent factory-floor manufacturing adjustments commonly used to correct variational noise errors in a product; examples include voltage-supply adjustments, adjustable links and screws, and shims. This paper presents methods to determine the increase in performance and robustness obtained from different possible manufacturing adjustments. All potential tuning adjustments that exist in a design are first identified. The tuning variable model can then be used to calculate the reduced product variation achieved with any of the potential tuning adjustments, including none at all. This process can help select which product variables should be adjusted, weighing the increased robustness to noise against the increased difficulty of manufacture. Doing so allows more robust product performance at reduced manufacturing expense, by permitting adjustment of many different possible variables rather than always adjusting the source of the manufacturing errors, which can be expensive.
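The variance-reduction logic of a tuning adjustment can be sketched numerically. The following is a minimal illustration, not the paper's model: a hypothetical performance y = k·x depends on a noisy gain k and a noisy dimension x, and a bounded per-unit adjustment t (e.g. a shim or adjustment screw) is set at final test to pull each unit toward its target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance model (assumed, not from the paper): y = k * x + t
k = rng.normal(2.0, 0.1, 10_000)   # noisy gain from manufacturing variation
x = rng.normal(5.0, 0.2, 10_000)   # noisy dimension from manufacturing variation
target = 10.0

y_untuned = k * x                   # no adjustment: full variation passes through
t = target - k * x                  # per-unit adjustment chosen at final test
y_tuned = k * x + np.clip(t, -0.5, 0.5)  # adjustment range is limited in practice

print(np.std(y_untuned), np.std(y_tuned))
```

Even with a limited adjustment range, the tuned population is far tighter around the target than the untuned one, which is the robustness gain the tuning variable model quantifies.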

Author(s):  
Li Chen

Abstract Robust product design is by nature a two-objective optimization problem: on the one hand, variation in product performance should be minimized (the first goal); on the other hand, product functionality should be maximized (the second goal). These design goals generally do not reach their individual optima at the same time, so coordination between the two performance goals is needed during the design iterations to achieve high robustness in product performance, in the sense that a best-compromise design is reached. In this work, a coordination-based robust design approach is developed to control the computational aspect of the robust design process in a coordinated fashion, so that high-quality engineered products are produced with performance precision and accuracy designed in through robust design. A fuzzy control algorithm, based on prescribed coordination rules, handles design coordination by tuning an adaptive design parameter β defined over the domain from 0 to 1.0. Three coordinated design models are discussed along with the study of a design example.
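The role of β can be illustrated with a scalarized two-goal trade-off. This is a hedged sketch only: the toy performance model and grid search below are assumptions, not the paper's fuzzy coordination algorithm, but they show how β blends the variation objective against the mean-functionality objective.

```python
import numpy as np

noise = np.linspace(-0.3, 0.3, 7)     # sampled noise conditions (assumed)
grid = np.linspace(0.0, 4.0, 401)     # candidate design variable values

def robust_objective(d, beta):
    # Toy performance: nominal loss (d-2)^2 plus a noise-sensitive term
    y = (d - 2.0) ** 2 + noise * d
    # beta = 1 -> pure variation minimization; beta = 0 -> pure mean performance
    return beta * y.std() + (1.0 - beta) * y.mean()

best = {b: grid[np.argmin([robust_objective(d, b) for d in grid])]
        for b in (0.0, 0.5, 1.0)}
print(best)
```

As β sweeps from 0 to 1, the compromise design moves from the performance optimum (d = 2) toward the minimum-variation design (d = 0), which is exactly the coordination space the adaptive β explores.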


Author(s):  
Reinald Kim Amplayo ◽  
Seung-won Hwang ◽  
Min Song

Word sense induction (WSI), the task of automatically discovering the multiple senses or meanings of a word, has three main challenges: domain adaptability, novel sense detection, and sense granularity flexibility. While current latent variable models are known to address the first two challenges, they are not flexible with respect to word sense granularity, which varies widely across words, from aardvark with one sense to play with over 50 senses. Current models require either hyperparameter tuning or nonparametric induction of the number of senses, both of which we find to be ineffective. We therefore aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring words. These observations alleviate the problem by (a) discarding garbage senses and (b) additionally inducing fine-grained word senses. Results show large improvements over state-of-the-art models on popular WSI datasets. We also show that AutoSense is able to learn the appropriate sense granularity of a word. Finally, we apply AutoSense to the unsupervised author name disambiguation task, where the sense granularity problem is more evident, and show that AutoSense clearly outperforms competing models. We share our data and code here: https://github.com/rktamplayo/AutoSense.


Technometrics ◽  
1996 ◽  
Vol 38 (3) ◽  
pp. 286-287
Author(s):  
Henry W. Altland

Author(s):  
Lin He ◽  
Christopher Hoyle ◽  
Wei Chen ◽  
Jiliang Wang ◽  
Bernard Yannou

Usage Context-Based Design (UCBD) is an area of growing interest within the design community. A framework and a step-by-step procedure for implementing consumer choice modeling in UCBD are presented in this work. To implement the proposed approach, methods for common usage identification, data collection, linking performance with usage context, and choice model estimation are developed. For data collection, a method of try-it-out choice experiments is presented. This method accounts for the different choices respondents make conditional on the given usage context, which allows us to examine the influence of product design, customer profile, usage context attributes, and their interactions on the choice process. Data analysis methods are used to understand the collected choice data, as well as to identify clusters of similar customers and similar usage contexts. The choice modeling framework, which considers the influence of usage context on the product performance, the choice set, and the consumer preferences, is presented as the key element of a quantitative usage context-based design process. In this framework, product performance is modeled as a function of both the product design and the usage context. Additionally, usage context enters an individual customer's utility function directly to capture its influence on product preferences. The entire process is illustrated with a case study of the design of a jigsaw.
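The two channels described above, context-dependent performance and context entering utility directly, can be sketched with a small multinomial logit. All names, attributes, and coefficients below are illustrative assumptions, not the paper's estimated model.

```python
import numpy as np

power = np.array([400.0, 700.0])       # attribute of two hypothetical jigsaw designs (W)

def utility(p, c, beta=(0.004, 1.0, 0.006)):
    b_perf, b_ctx, b_int = beta        # assumed taste coefficients
    perf = p * (1.0 - 0.2 * c)         # performance depends on design AND usage context
    # usage context c also enters utility directly and via an interaction term
    return b_perf * perf + b_ctx * c + b_int * perf * c

probs = {}
for c in (0.0, 1.0):                   # light-duty vs heavy-duty usage context
    u = utility(power, c)
    probs[c] = np.exp(u) / np.exp(u).sum()   # multinomial logit choice shares
    print(c, probs[c].round(3))
```

In this toy model, the heavy-duty context shifts choice probability toward the high-power design, the kind of context-conditional preference shift the try-it-out choice experiments are designed to measure.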


2012 ◽  
Vol 535-537 ◽  
pp. 1402-1407
Author(s):  
Li Li Yu ◽  
Zhen Hua Su ◽  
Jing Zhan Lin ◽  
Yu Sen Yuan ◽  
Chun Xiang Cui ◽  
...  

Automotive weight reduction is a challenging task because many performance targets must be satisfied simultaneously, in particular the static and dynamic properties directly related to the strength, stiffness, and NVH characteristics of a vehicle. In this paper, compared with an all-steel vehicle frame, multi-material substitutions are adopted in each structural component to achieve higher product performance and a lightweight electric vehicle frame. The SHELL63 element is selected to construct a finite element (FE) model of the vehicle frame in the FEA software ANSYS. Static analyses of the frame are performed under full bending loading and torsional loading, respectively, and the strength and stiffness are evaluated. The Block Lanczos method is adopted for the dynamic analysis of the vehicle frame; the first eight modal properties are obtained and lie far from the excitation frequency range of rough roads. The multi-material vehicle frame is designed to be made of mild steel, aluminum, and magnesium alloys. Its static and dynamic properties show that the strength, stiffness, and NVH characteristics are better than those of the all-steel vehicle frame, with a weight reduction of 31.7%. These procedures support lightweight design and thus provide technical support for reducing fuel consumption and greenhouse gas emissions.


1990 ◽  
Vol 66 (6) ◽  
pp. 600-605 ◽  
Author(s):  
R. T. Morton ◽  
T. I. Grabowski ◽  
S. J. Titus ◽  
G. M. Bonnor

In 1985, a survey of nine provinces and two territories was conducted to summarize operational tree volume estimation methods. Based on those results, six tree volume estimation functions were evaluated to answer the question: can a single model be used nation-wide for tree volume estimation? The six models were fitted to nation-wide data for 980 white spruce trees distributed nearly equally among the provinces and territories. Based on goodness of fit statistics and analysis of residuals, Schumacher's (1933) model and the Quebec combined variable model performed marginally better than the others. Further, the analyses did not reveal any significant differences between territories and provinces. It appears that any of these models could be applied to broad regions of Canada without suffering significant losses in accuracy.
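The two best-performing forms can be written down explicitly: the Quebec combined-variable model V = b0 + b1·D²H and Schumacher's (1933) log-linear model ln V = a0 + a1·ln D + a2·ln H. The sketch below fits both by least squares on synthetic data; the coefficients and data are illustrative assumptions, not the study's fitted white spruce values.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.uniform(10, 50, 200)          # diameter at breast height (cm), assumed range
H = rng.uniform(8, 30, 200)           # tree height (m), assumed range
# Synthetic volumes generated from an assumed combined-variable form
V = 3e-5 * D**2 * H * np.exp(rng.normal(0, 0.05, 200))

# Quebec combined-variable model: V = b0 + b1 * D^2 * H
X = np.column_stack([np.ones_like(D), D**2 * H])
b, *_ = np.linalg.lstsq(X, V, rcond=None)

# Schumacher (1933) log-linear model: ln V = a0 + a1 ln D + a2 ln H
Xl = np.column_stack([np.ones_like(D), np.log(D), np.log(H)])
a, *_ = np.linalg.lstsq(Xl, np.log(V), rcond=None)

print("combined-variable:", b, "Schumacher:", a)
```

Both forms reduce to a single linear regression, which is why either can be refitted cheaply to regional data, consistent with the finding that they transfer across provinces without significant accuracy loss.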


2020 ◽  
Author(s):  
Aditya Arie Nugraha ◽  
Kouhei Sekiguchi ◽  
Kazuyoshi Yoshii

This paper describes a deep latent variable model of speech power spectrograms and its application to semi-supervised speech enhancement with a deep speech prior. By integrating two major deep generative models, a variational autoencoder (VAE) and a normalizing flow (NF), in a mutually-beneficial manner, we formulate a flexible latent variable model called the NF-VAE that can extract low-dimensional latent representations from high-dimensional observations, akin to the VAE, and does not need to explicitly represent the distribution of the observations, akin to the NF. In this paper, we consider a variant of NF called the generative flow (GF a.k.a. Glow) and formulate a latent variable model called the GF-VAE. We experimentally show that the proposed GF-VAE is better than the standard VAE at capturing fine-structured harmonics of speech spectrograms, especially in the high-frequency range. A similar finding is also obtained when the GF-VAE and the VAE are used to generate speech spectrograms from latent variables randomly sampled from the standard Gaussian distribution. Lastly, when these models are used as speech priors for statistical multichannel speech enhancement, the GF-VAE outperforms the VAE and the GF.


2020 ◽  
Author(s):  
Peijia Liu ◽  
Dong Yang ◽  
Shaomin Li ◽  
Yutian Chong ◽  
Wentao Hu ◽  
...  

Abstract
Background: The utilization of GFR-estimating equations is critical for kidney disease in the clinic. However, the performance of the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation has not improved substantially in the past eight years. Here we hypothesized that a random forest regression (RF) method could outperform the revised linear regression used to build the CKD-EPI equation.
Methods: A total of 1732 participants were enrolled in this study (1333 in the development data set from Tianhe District and 399 in the external data set from Luogang District). Recursive feature elimination (RFE) was applied to the development data to select important variables and build random forest models. The same variables were then used to develop the estimated-GFR equation with linear regression as a comparison. The performance of these equations was measured by bias, 30% accuracy, precision, and root mean square error (RMSE).
Results: Of all the variables, creatinine, cystatin C, weight, body mass index (BMI), age, uric acid (UA), blood urea nitrogen (BUN), hematocrit (HCT), and apolipoprotein B (APOB) were selected by the RFE method. The results revealed that the overall performance of the random forest regression models surpassed that of the revised regression models based on the same variables. In the 9-variable model, the RF model was better than revised linear regression in terms of bias, precision, 30% accuracy, and RMSE (0.78 vs. 2.98, 16.90 vs. 23.62, 0.84 vs. 0.80, 16.88 vs. 18.70, all P < 0.01). In the 4-variable model, the random forest regression model showed an improvement in precision and RMSE compared with the revised regression model (20.82 vs. 25.25, P < 0.01; 19.08 vs. 20.60, P < 0.001). Bias and 30% accuracy were preferable, but the differences were not statistically significant (0.34 vs. 2.07, P = 0.10; 0.80 vs. 0.78, P = 0.19, respectively).
Conclusions: The performance of random forest regression models is better than that of revised linear regression models for GFR estimation.
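The RFE-then-compare pipeline can be sketched with scikit-learn. This is a minimal illustration on nonlinear synthetic data; the dataset, feature count, and hyperparameters are assumptions, not the study's clinical data or tuned models.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical data: only 5 of 12 features are informative
X, y = make_friedman1(n_samples=600, n_features=12, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Recursive feature elimination driven by forest importances
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=5).fit(X_tr, y_tr)
cols = selector.support_

# Fit both model families on the same selected variables, as in the study design
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr[:, cols], y_tr)
lr = LinearRegression().fit(X_tr[:, cols], y_tr)

def rmse(model):
    return float(np.sqrt(np.mean((model.predict(X_te[:, cols]) - y_te) ** 2)))

print("RF RMSE:", rmse(rf), "Linear RMSE:", rmse(lr))
```

Holding the selected variables fixed isolates the model-family comparison, mirroring how the study compares RF against revised linear regression on identical inputs.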

