An Approach to Determine Important Attributes for Engineering Change Evaluation

2013, Vol 135 (4)
Author(s):
Chandresh Mehta
Lalit Patil
Debasish Dutta

Enterprises plan detailed evaluation of only those engineering change (EC) effects that might have a significant impact. Using past EC knowledge can prove effective in determining whether a proposed EC effect has a significant impact. In order to utilize past EC knowledge, it is essential to identify the important attributes that should be compared to compute similarity between ECs. This paper presents a knowledge-based approach for determining the important EC attributes that should be compared to retrieve similar past ECs so that the impact of a proposed EC effect can be evaluated. The problem of determining important EC attributes is formulated as a multi-objective optimization problem. Measures are defined to quantify the importance of an attribute set, and the knowledge in the change database is combined with the domain rules among attribute values to compute these measures. An ant colony optimization (ACO)-based search is used to efficiently locate the set of important attributes. An example EC knowledge base is created and used to evaluate the measures and the overall approach. The evaluation results show that our measures perform better than state-of-the-art evaluation criteria. The overall approach is evaluated against manual observations; the results show that it correctly evaluates the impact of a proposed change with a success rate of 83.33%.
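The paper's importance measures and change database are not reproduced here; as a rough illustration of ACO-style subset search, the toy sketch below reinforces pheromone on the attributes of the best subset found so far. The attribute names and the scoring function are made up stand-ins for the paper's measures.

```python
import random

def aco_select(attributes, score, n_ants=20, n_iters=50, rho=0.1, seed=0):
    """Toy ACO-style search for a high-scoring attribute subset.
    `score` maps a frozenset of attribute names to a fitness value."""
    rng = random.Random(seed)
    tau = {a: 1.0 for a in attributes}          # pheromone per attribute
    best, best_score = frozenset(), score(frozenset())
    for _ in range(n_iters):
        for _ in range(n_ants):
            # each ant includes attribute a with probability tau[a] / (1 + tau[a])
            subset = frozenset(a for a in attributes
                               if rng.random() < tau[a] / (1.0 + tau[a]))
            s = score(subset)
            if s > best_score:
                best, best_score = subset, s
        # evaporate all trails, then reinforce the best subset found so far
        for a in attributes:
            tau[a] *= (1.0 - rho)
        for a in best:
            tau[a] += rho
    return best, best_score

# hypothetical importance score: two attributes truly matter, extras are penalized
TARGET = {"material", "supplier"}
best, best_score = aco_select(
    ["material", "supplier", "color", "weight"],
    lambda sub: len(sub & TARGET) - 0.5 * len(sub - TARGET))
```

With this score the search settles on the two rewarded attributes; a real instance would score candidate sets against the change database instead of a fixed target.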

Author(s):
Chandresh Mehta
Lalit Patil
Debasish Dutta

Detailed evaluation of a proposed engineering change (EC) or its effects is a time-consuming process requiring considerable user experience and expertise. Therefore, enterprises plan detailed evaluation of only those EC effects that might have a significant impact. Since similar ECs are likely to have similar effects and impacts, past EC knowledge can be utilized for determining whether a proposed EC effect has a significant impact. This paper presents an approach for predicting the impact of a proposed EC effect based on past ECs that are similar to it. Our approach accounts for the differences in the context of impact between attribute values in two changes. Bayes' rule is utilized to determine differences in impact value based on the differences in attribute values, and the probability values required in Bayes' rule are determined using the principle of minimum cross entropy. An example EC knowledge base is created and utilized to evaluate our approach against two state-of-the-art approaches, namely k-nearest neighbor (k-NN) and regularized local similarity discriminant analysis (SDA). The success rate in predicting impact is used as the evaluation metric. The results show a statistically significant improvement in success rate for our approach compared to the two state-of-the-art approaches. The results also show that for a very large number of proposed ECs, i.e., N > 100, the success rate in predicting impact using our approach will be greater than that obtained using the two state-of-the-art approaches.
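In the paper the likelihoods come from minimum cross entropy; the sketch below only shows the Bayes' rule step itself, with the conditional probabilities assumed to be given.

```python
def bayes_posterior(prior_high, p_diff_given_high, p_diff_given_low):
    """P(high impact | attribute value differs) via Bayes' rule.
    prior_high        : P(high impact) before seeing the attribute difference
    p_diff_given_high : P(attribute differs | high impact)
    p_diff_given_low  : P(attribute differs | low impact)"""
    prior_low = 1.0 - prior_high
    evidence = p_diff_given_high * prior_high + p_diff_given_low * prior_low
    return p_diff_given_high * prior_high / evidence

# illustrative numbers only: a difference that is four times likelier under
# high impact raises a 0.3 prior to roughly 0.63
posterior = bayes_posterior(0.3, 0.8, 0.2)
```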


Author(s):
Sri Hartini
Zuherman Rustam
Glori Stephani Saragih
María Jesús Segovia Vargas

Banks have a crucial role in the financial system. When many banks suffer from a crisis, it can lead to financial instability. According to their impact, banking crises can be divided into two categories, namely systemic and non-systemic crises. A systemic crisis can drive even stable banks into bankruptcy. Hence, this paper proposes a random forest for estimating the probability of a banking crisis as a preventive measure. Random forest is well known as a robust technique for both classification and regression and is largely insensitive to outliers and overfitting. The experiments were constructed using a financial crisis database containing a sample of 79 countries over the period 1981-1999 (annual data). This dataset has 521 samples, consisting of 164 crisis samples and 357 non-crisis cases. The experiments show that using 90 percent of the data for training delivers the highest scores of all the training splits tried: 0.98 accuracy, 0.92 sensitivity, 1.00 precision, and a 0.96 F1-score. These results are also better than those of state-of-the-art methods on the same dataset. The proposed method therefore shows promising results for predicting the probability of banking crises.
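The four reported scores are all derived from one confusion matrix. The sketch below computes them from hypothetical counts (12 true positives, 0 false positives, 1 false negative, 40 true negatives on an assumed 53-sample test split) that happen to reproduce scores close to those reported; the actual counts are not given in the abstract.

```python
def classification_scores(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall on the crisis class), precision, and
    F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1

# hypothetical confusion matrix, chosen only to mirror the reported scores
acc, sens, prec, f1 = classification_scores(tp=12, fp=0, fn=1, tn=40)
```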


Sensors, 2020, Vol 20 (11), pp. 3037
Author(s):
Xi Zhao
Yun Zhang
Shoulie Xie
Qianqing Qin
Shiqian Wu
...

Geometric model fitting is a fundamental issue in computer vision, and fitting accuracy is affected by outliers. In order to eliminate the impact of the outliers, an inlier threshold or scale estimator is usually adopted. However, a single inlier threshold cannot satisfy multiple models in the data, and scale estimators assuming a particular noise distribution work poorly in geometric model fitting. It can be observed that the residuals of outliers are large for all true models in the data, which constitutes a consensus among the outliers. Based on this observation, we propose a preference analysis method based on residual histograms that exploits this outlier consensus for outlier detection. We find that the outlier consensus makes the outliers gather away from the inliers in the designed residual-histogram preference space, which makes it convenient to separate outliers from inliers through linkage clustering. After the outliers are detected and removed, linkage clustering with permutation preference is introduced to segment the inliers. In addition, to make the linkage clustering process stable and robust, an alternative sampling and clustering framework is proposed for both the outlier detection and inlier segmentation processes. The experimental results show that the outlier detection scheme based on residual histogram preference detects most of the outliers in the data sets, and that the fitting results are better than those of most state-of-the-art methods in geometric multi-model fitting.
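The paper's preference space and linkage clustering are more elaborate than can be shown here; the toy sketch below only illustrates the core observation, with an assumed binning scheme: a point whose residual lands in the top histogram bin for every model hypothesis is flagged as an outlier.

```python
def split_outliers(residuals, n_bins=5, r_max=1.0):
    """residuals: dict mapping point id -> list of residuals, one per model
    hypothesis. Build a coarse histogram-bin preference vector per point;
    a point in the top bin for every hypothesis shows the outlier consensus."""
    def bin_of(r):
        return min(int(r / r_max * n_bins), n_bins - 1)
    inliers, outliers = [], []
    for point, rs in residuals.items():
        pref = [bin_of(r) for r in rs]          # preference vector
        if min(pref) == n_bins - 1:             # far from every hypothesis
            outliers.append(point)
        else:
            inliers.append(point)
    return inliers, outliers

# two model hypotheses; "a" and "b" each fit one model, "c" fits neither
inl, outl = split_outliers({"a": [0.02, 0.90],
                            "b": [0.95, 0.03],
                            "c": [0.90, 0.85]})
```

A real pipeline would replace the fixed `r_max` with the data-driven histograms the paper describes and cluster the preference vectors instead of thresholding them.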


2014, Vol 40 (4), pp. 837-881
Author(s):
Mohammad Taher Pilehvar
Roberto Navigli

The evaluation of several tasks in lexical semantics is often limited by the lack of large amounts of manual annotations, not only for training purposes, but also for testing purposes. Word Sense Disambiguation (WSD) is a case in point, as hand-labeled datasets are particularly hard and time-consuming to create. Consequently, evaluations tend to be performed on a small scale, which does not allow for in-depth analysis of the factors that determine a system's performance. In this paper we address this issue by means of a realistic simulation of large-scale evaluation for the WSD task. We do this by providing two main contributions: First, we put forward two novel approaches to the wide-coverage generation of semantically aware pseudowords (i.e., artificial words capable of modeling real polysemous words); second, we leverage the most suitable type of pseudoword to create large pseudosense-annotated corpora, which enable a large-scale experimental framework for the comparison of state-of-the-art supervised and knowledge-based algorithms. Using this framework, we study the impact of supervision and knowledge on the two major disambiguation paradigms and perform an in-depth analysis of the factors which affect their performance.
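The classic pseudoword construction conflates two real words into one artificial ambiguous token, with the source word serving as a free gold "sense" label; the paper's semantically aware generation is considerably richer, but the basic mechanism can be sketched as follows (all words and sentences here are invented examples).

```python
def make_pseudoword(corpus, w1, w2):
    """Replace every occurrence of w1 or w2 with the pseudoword 'w1_w2',
    recording the source word as the gold sense of that occurrence."""
    pseudo = f"{w1}_{w2}"
    labeled = []
    for sent in corpus:
        tokens, senses = [], []
        for tok in sent.split():
            if tok in (w1, w2):
                tokens.append(pseudo)
                senses.append(tok)   # the original word is the gold sense
            else:
                tokens.append(tok)
        labeled.append((" ".join(tokens), senses))
    return labeled

out = make_pseudoword(["the banana was ripe", "the door was open"],
                      "banana", "door")
```

A WSD system is then scored on how often it recovers the original word from context, with no manual annotation required.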


2021, Vol 2021, pp. 1-16
Author(s):
Hang Yu
Yu Zhang
Pengxing Cai
Junyan Yi
Sheng Li
...

In this study, a hybrid metaheuristic algorithm, the chaotic gradient-based optimizer (CGBO), is proposed. The gradient-based optimizer (GBO) is a novel metaheuristic inspired by Newton's method that has two search strategies to ensure excellent performance: the gradient search rule (GSR) and the local escaping operation (LEO). GSR utilizes the gradient method to enhance the exploitation ability and convergence rate, while LEO employs random operators to escape local optima. It has been verified that gradient-based metaheuristic algorithms have obvious shortcomings in exploration. Meanwhile, chaotic local search (CLS) is an efficient search strategy with randomicity and ergodicity that is often used to improve global optimization algorithms. Accordingly, we incorporate CLS into GBO to strengthen its exploration ability and maintain high population diversity. In this study, CGBO is tested on over 30 CEC2017 benchmark functions and a parameter optimization problem of the dendritic neuron model (DNM). Experimental results indicate that CGBO performs better than other state-of-the-art algorithms in terms of effectiveness and robustness.
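The exact CLS variant used in CGBO is not specified in the abstract; a common form drives the perturbation with a logistic map, whose chaotic sequence is ergodic over (0, 1). The sketch below is a minimal greedy version under that assumption; the shared perturbation across coordinates and the `radius` parameter are simplifications.

```python
def chaotic_local_search(x, fitness, lb, ub, n_steps=20, z0=0.7, radius=0.1):
    """Greedy chaotic local search around candidate x (a list of floats),
    minimizing `fitness`. The logistic map (r = 4) generates the chaotic
    sequence; z0 must avoid the map's fixed points (0, 0.25, 0.5, 0.75)."""
    z = z0
    best, best_f = list(x), fitness(x)
    for _ in range(n_steps):
        z = 4.0 * z * (1.0 - z)                      # logistic map step
        # map the chaotic value to a small, bound-respecting perturbation
        cand = [min(ub, max(lb, xi + (z - 0.5) * radius * (ub - lb)))
                for xi in best]
        f = fitness(cand)
        if f < best_f:                               # keep improvements only
            best, best_f = cand, f
    return best, best_f

# demo on the sphere function, starting away from the optimum at the origin
best, best_f = chaotic_local_search([0.5, -0.3],
                                    lambda v: sum(t * t for t in v),
                                    lb=-1.0, ub=1.0)
```

In CGBO this kind of step would refine candidates produced by GSR/LEO rather than run stand-alone.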


Author(s):
Sha Ma
Bin Song
Wen Feng Lu
Cheng Feng Zhu

Engineering changes are inevitable in a product development life cycle. Requests for engineering changes can be due to new customer requirements, the emergence of new technology, market feedback, or variations in components and raw materials. Each change generates a level of impact on costs, time to market, tasks and schedules of related processes, and product components. Change management tools available today focus on the management of document and process changes, and assessments of change impact are typically based on rules of thumb. Our research has developed a methodology and related techniques to quantify and analyze the impact of engineering changes to enable faster and more accurate decision-making in engineering change management. Reported in this paper are investigations of industrial requirements and fundamental issues of change impact analysis as well as related research and techniques. A framework for a knowledge-supported change impact analysis system is proposed, and three critical issues of system implementation, namely the integrated design information model, the change plan generator, and the impact estimation algorithms, are addressed. Finally, the benefits and future work are discussed.


2013, Vol 405-408, pp. 3459-3462
Author(s):
Jing Li
Zong Rong Xu

Engineering changes directly impact the project cost of construction projects. Starting from typical engineering cases based on bill-of-quantities valuation, the authors carry out a quantitative analysis of the impact of different changes on project cost and identify effective measures to prevent changes in order to control project cost.


Author(s):
Yantao Yu
Zhen Wang
Bo Yuan

Factorization machines (FMs) are a class of general predictors that work effectively with sparse data and represent features using factorized parameters and weights. However, the accuracy of FMs can be adversely affected by the fixed representation trained for each feature, as the same feature is usually not equally predictive and useful in different instances. In fact, the inaccurate representation of features may even introduce noise and degrade the overall performance. In this work, we improve FMs by explicitly considering the impact of each individual input upon the representation of features. We propose a novel model named Input-aware Factorization Machine (IFM), which learns a unique input-aware factor for the same feature in different instances via a neural network. Comprehensive experiments on three real-world recommendation datasets demonstrate the effectiveness and mechanism of IFM. Empirical results indicate that IFM is significantly better than the standard FM model and consistently outperforms four state-of-the-art deep-learning-based methods.
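A second-order FM scores an instance as a linear term plus all pairwise interactions of latent factors, computable in O(kn) via the standard sum-of-squares identity. The sketch below adds a per-instance factor `m` that rescales each feature's latent vector, a hand-set stand-in for the input-aware factors that IFM learns with a neural network; all numbers are illustrative.

```python
def fm_predict(x, w0, w, V, m=None):
    """Second-order FM prediction for one instance.
    x : feature values, w0/w : bias and linear weights,
    V : latent vectors (one row of length k per feature),
    m : optional per-instance, per-feature factors (the input-aware idea,
        simplified; IFM learns these from the instance itself)."""
    n = len(x)
    if m is None:
        m = [1.0] * n
    Vx = [[m[i] * v for v in V[i]] for i in range(n)]   # rescaled latents
    linear = w0 + sum(w[i] * x[i] for i in range(n))
    k = len(V[0])
    pair = 0.0
    for f in range(k):
        s = sum(Vx[i][f] * x[i] for i in range(n))
        s2 = sum((Vx[i][f] * x[i]) ** 2 for i in range(n))
        pair += 0.5 * (s * s - s2)                      # sum-of-squares identity
    return linear + pair

# toy instance: features 0 and 1 active, feature 2 absent
X, W0, W = [1.0, 1.0, 0.0], 0.0, [0.1, 0.2, 0.3]
V = [[1.0, 0.0], [0.5, 0.5], [2.0, 1.0]]
y_fm = fm_predict(X, W0, W, V)                 # plain FM
y_ifm = fm_predict(X, W0, W, V, m=[2.0, 1.0, 1.0])  # feature 0 weighted up
```

Boosting `m[0]` doubles feature 0's interaction strength for this instance only, which is the effect IFM aims for without retraining the shared latent vectors.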


Author(s):
Walid Ben Ahmed
Mounib Mekhilef
Michel Bigand
Yves Page

Due to the increasing complexity of the modern industrial context in an evolving environment, several changes (e.g., new technology, new systems, human errors) may affect road safety. Analyzing the impact of change on design requirements is a complex task, especially when it deals with complex systems such as Vehicle Safety Systems (VSS). To handle change impact analysis in the road safety field, VSS designers require specific knowledge drawn from accidentology. In this paper, we develop a multi-view model of the road accident, which is crucial for extracting the required knowledge. This multi-view model allows the impact of a given change on the Driver-Vehicle-Environment system to be analyzed from different viewpoints and at different levels of granularity, enabling an efficient approach to exhaustively detect the perturbations due to the change and thereby anticipate and handle their effects. We use a Knowledge Engineering approach to implement the multi-view model in a Knowledge-Based System, providing accidentologists and VSS designers with an efficient tool for analyzing the impact of change on design requirements.


2021, Vol 16 (2), pp. 185-198
Author(s):
W.M. Yang
C.D. Li
Y.H. Chen
Y.Y. Yu

Change impact evaluation of complex products plays an important role in controlling change cost and improving change efficiency in engineering enterprises. In order to improve the accuracy of engineering change impact evaluation, this paper introduces the three-parameter interval grey number to evaluate complex products according to the characteristics of the data. A linear combination of the BWM and the Gini coefficient method is used to improve the three-parameter interval grey number correlation model, which is then applied to the impact evaluation of complex product engineering changes. The paper first constructs a multi-stage complex network for complex product engineering change, then determines the engineering change impact evaluation index system, and finally carries out a case analysis on the permanent magnet synchronous centrifugal compressor in a large permanent magnet synchronous centrifugal unit to verify the effectiveness of the proposed method.
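The paper's three-parameter interval grey number model and its BWM/Gini weighting are beyond a short sketch, but the underlying idea builds on classical grey relational analysis. The sketch below shows Deng's grey relational degree for a single comparison sequence against a reference sequence (a simplification: classical GRA takes the min/max deltas over all comparison sequences).

```python
def grey_relational_degree(ref, cmp_seq, rho=0.5):
    """Deng's grey relational degree between a reference sequence and one
    comparison sequence; rho is the distinguishing coefficient (usually 0.5).
    Returns 1.0 for identical sequences, lower values as they diverge."""
    deltas = [abs(r - c) for r, c in zip(ref, cmp_seq)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:
        return 1.0                              # identical sequences
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

# illustrative index values only
g_same = grey_relational_degree([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
g_diff = grey_relational_degree([1.0, 2.0, 3.0], [2.0, 3.0, 5.0])
```

The paper's variant replaces crisp values with three-parameter interval grey numbers [a, b, c] and weights the indices via BWM and the Gini coefficient before aggregating.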

