Inverse Parametric Modeling From Independently Generated Product Data Sets

2003 ◽  
Vol 3 (3) ◽  
pp. 231-242 ◽  
Author(s):  
Zhengdong Huang ◽  
Derek Yip-Hoi

Parametric modeling has become a widely accepted mechanism for generating data set variants for product families. These data sets include geometric models and feature-based process plans. They are created by specifying values for parameters within feasible ranges that are specified as constraints in their definition. These ranges denote the extent, or envelope, of the product family. Increasingly, with globalization, the inverse problem is becoming important: given independently generated product data sets that on observation belong to the same product family, create a parametric model for that family. This problem is also relevant to large companies where independent design teams may work on product variants without much collaboration, only to later attempt consolidation to optimize the design of manufacturing processes and systems. In this paper we present a methodology for generating a parametric representation of the machining process plan for a part family by merging product data sets generated independently from members of the family. We assume that these data sets are feature-based machining process plans, with relationships such as precedences between the machining steps for each feature captured using graphs. Since there are numerous ways in which these data sets can be merged, we formulate merging as an optimization problem and solve it using the A* algorithm. The parameter ranges generated by this approach will be used in the design of tools, fixtures, material handling automation and machine tools for machining the given part family, including Reconfigurable Machine Tools (RMTs) and Reconfigurable Manufacturing Systems (RMS).
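The merge step above, finding a minimum-cost unification of independently generated feature-based plans with A* search, can be sketched on flattened feature sequences. This is a simplification of the precedence-graph merge described in the abstract, and the feature names and unit costs are illustrative assumptions:

```python
import heapq

def astar_merge(plan_a, plan_b):
    """Merge two feature sequences into one generalized plan with minimal
    insertions/deletions, using A* search. States are index pairs (i, j);
    the heuristic is the remaining length difference (admissible)."""
    goal = (len(plan_a), len(plan_b))

    def h(i, j):
        return abs((len(plan_a) - i) - (len(plan_b) - j))

    # priority queue of (f = g + h, g, state, merged plan so far)
    frontier = [(h(0, 0), 0, (0, 0), [])]
    best_g = {}
    while frontier:
        f, g, (i, j), merged = heapq.heappop(frontier)
        if (i, j) == goal:
            return merged, g
        if best_g.get((i, j), float("inf")) <= g:
            continue
        best_g[(i, j)] = g
        if i < len(plan_a) and j < len(plan_b) and plan_a[i] == plan_b[j]:
            # features unify: no cost, one shared step in the family plan
            heapq.heappush(frontier, (g + h(i + 1, j + 1), g, (i + 1, j + 1),
                                      merged + [plan_a[i]]))
        if i < len(plan_a):  # feature present only in variant A (cost 1)
            heapq.heappush(frontier, (g + 1 + h(i + 1, j), g + 1, (i + 1, j),
                                      merged + [plan_a[i]]))
        if j < len(plan_b):  # feature present only in variant B (cost 1)
            heapq.heappush(frontier, (g + 1 + h(i, j + 1), g + 1, (i, j + 1),
                                      merged + [plan_b[j]]))
    return None, float("inf")
```

The unification cost here counts features that appear in only one variant; a full implementation would also respect precedence edges and unify parameter ranges at each matched feature.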


Big Data ◽  
2016 ◽  
pp. 261-287
Author(s):  
Keqin Wu ◽  
Song Zhang

While uncertainty in scientific data attracts increasing research interest in the visualization community, two critical issues remain insufficiently studied: (1) visualizing the impact of the uncertainty of a data set on its features and (2) interactively exploring 3D or large 2D data sets with uncertainties. In this chapter, a suite of feature-based techniques is developed to address these issues. First, an interactive visualization tool for exploring scalar data with data-level, contour-level, and topology-level uncertainties is developed. Second, a framework for visualizing feature-level uncertainty is proposed to study uncertain feature deviations in both scalar and vector data sets. With quantified representation and interactive capability, the proposed feature-based visualizations provide new insights into the uncertainties of both the data and their features that would otherwise remain hidden if only data uncertainties were visualized.
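As one hedged illustration of contour-level uncertainty (a minimal sketch, not the chapter's actual algorithm), an ensemble of scalar fields can be reduced to a mask of grid points where an isocontour crossing is itself uncertain:

```python
import numpy as np

def contour_uncertainty_mask(ensemble, iso, k=1.0):
    """Given an ensemble of scalar fields (n_members, H, W), flag grid
    points where the isovalue crossing is uncertain: the iso level lies
    within k standard deviations of the pointwise ensemble mean."""
    mean = ensemble.mean(axis=0)
    std = ensemble.std(axis=0)
    return np.abs(mean - iso) <= k * std

# toy ensemble: a horizontal ramp with member-to-member noise
rng = np.random.default_rng(0)
base = np.linspace(0, 1, 64).reshape(1, 1, 64).repeat(64, axis=1)
ensemble = base + 0.05 * rng.standard_normal((20, 64, 64))
mask = contour_uncertainty_mask(ensemble, iso=0.5)
```

On this toy ramp the uncertain band concentrates around the column where the field crosses 0.5, which is where an extracted isocontour would wander between ensemble members.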


Author(s):  
Quanming Yao ◽  
Xiawei Guo ◽  
James Kwok ◽  
Weiwei Tu ◽  
Yuqiang Chen ◽  
...  

To meet the standard of differential privacy, noise is usually added to the original data, which inevitably degrades the prediction performance of subsequent learning algorithms. In this paper, motivated by the success of ensemble learning in improving prediction performance, we propose to enhance privacy-preserving logistic regression by stacking. We show that this can be done with either sample-based or feature-based partitioning. However, we prove that, for the same privacy budget, feature-based partitioning requires fewer samples than sample-based partitioning, and is thus likely to have better empirical performance. As transfer learning is difficult to integrate with a differential privacy guarantee, we further combine the proposed method with hypothesis transfer learning to address the problem of learning across different organizations. Finally, we not only demonstrate the effectiveness of our method on two benchmark data sets, i.e., MNIST and NEWS20, but also apply it to a real application of cross-organizational diabetes prediction on the RUIJIN data set, where privacy is of significant concern.
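A minimal sketch of stacking with feature-based partitioning, assuming a simplified output-perturbation mechanism (plain Laplace noise on the weights). A real differentially private method would calibrate the noise to the sensitivity and privacy budget, and the averaging meta-learner stands in for a trained combiner:

```python
import numpy as np

def fit_logreg(X, y, epochs=300, lr=0.5):
    """Plain logistic regression via gradient descent (bias folded in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def private_stacked_logreg(X, y, n_blocks=2, noise_scale=0.1, rng=None):
    """Feature-based partitioning: split feature columns into blocks,
    fit one logistic regression per block, and perturb each weight
    vector with Laplace noise (a stand-in for a calibrated
    output-perturbation mechanism). The stacked prediction averages
    the blocks' probabilities."""
    rng = rng or np.random.default_rng(0)
    blocks = np.array_split(np.arange(X.shape[1]), n_blocks)
    models = []
    for cols in blocks:
        w = fit_logreg(X[:, cols], y)
        models.append((cols, w + rng.laplace(0.0, noise_scale, w.shape)))

    def predict(Xnew):
        probs = []
        for cols, w in models:
            Xb = np.hstack([Xnew[:, cols], np.ones((len(Xnew), 1))])
            probs.append(1.0 / (1.0 + np.exp(-Xb @ w)))
        return np.mean(probs, axis=0)

    return predict
```

Because each block sees only a disjoint subset of features, noise is added per block rather than to the whole model, which is the intuition behind the sample-efficiency claim in the abstract.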


Author(s):  
Timo Laakko ◽  
Martti Mäntylä

A feature-based product modeling system is introduced in which the user can incrementally create and modify product families. Product family and feature descriptions are coded in a special definition language and can easily be added and modified by the user. The descriptions include dynamically maintained constraints. The definition-language description of a new family can be created automatically on the basis of a recognized prototypical instance. A stored design history can be used to generate the geometry definition of the family.
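A minimal sketch, with hypothetical parameter names, of what a family description with feasible ranges and dynamically checked constraints might look like in plain code rather than a dedicated definition language:

```python
from dataclasses import dataclass, field

@dataclass
class ProductFamily:
    """A minimal stand-in for a family description: named parameters
    with feasible ranges plus inter-parameter constraints that are
    checked dynamically when an instance is tested for membership."""
    ranges: dict                                     # parameter -> (low, high)
    constraints: list = field(default_factory=list)  # callables on params

    def is_member(self, params):
        in_range = all(lo <= params[p] <= hi
                       for p, (lo, hi) in self.ranges.items())
        return in_range and all(c(params) for c in self.constraints)

# hypothetical bracket family: overall length and a hole diameter,
# with the hole required to stay well inside the bracket
bracket = ProductFamily(
    ranges={"length": (40, 120), "hole_d": (4, 12)},
    constraints=[lambda p: p["hole_d"] < p["length"] / 4],
)
```

A real definition language would also carry feature descriptions and geometry, but the membership test above captures the range-plus-constraint envelope idea.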


1982 ◽  
Vol 45 (3) ◽  
pp. 279-280
Author(s):  
J. J. RYAN ◽  
R. H. GOUGH

Coliform and total bacteria counts of soft-serve mixes and frozen soft-serve products were collected over a 21 month period. The mix data set consisted of 252 samples of which 10.71% contained >50,000 total bacteria/g and 7.54% contained >10 coliforms/g. The product data set consisted of 817 samples of which 38.51% contained >50,000 total bacteria/g and 51.22% contained >10 coliforms/g. Since mix and product data sets were from sample surveys, it was not possible to determine the specific mix used to produce a specific product.


2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773566 ◽  
Author(s):  
Lifeng An ◽  
Xinyu Zhang ◽  
Hongbo Gao ◽  
Yuchao Liu

Visual odometry plays an important role in urban autonomous driving. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based methods take all pixels into account. Both rest on the assumption that the quantitative majority of candidate visual cues represents the true motion. In real urban traffic scenes, however, this assumption can be broken by the many dynamic traffic participants: big trucks or buses may occupy most of the image of a front-view monocular camera and cause wrong visual odometry estimates. Finding visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments. Semantic attributes of pixels can serve as a more reasonable factor for candidate selection in that case. This article analyzes the availability of all visual cues with the help of pixel-level semantic information and proposes a new visual odometry method that combines feature-based and alignment-based visual odometry in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and on our own data set. Experimental results confirm that the new approach provides effective improvements in both accuracy and robustness in complex dynamic scenes.
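The semantic candidate-selection step can be illustrated with a small sketch. The class names and the static/dynamic split are assumptions; a real pipeline would take labels from a segmentation network and feed the surviving points to pose estimation:

```python
import numpy as np

# Hypothetical semantic classes; a real system would read these from a
# per-pixel segmentation mask (road/building static, car/bus dynamic).
STATIC_CLASSES = {"road", "building", "pole", "vegetation"}

def filter_static_features(keypoints, labels):
    """Keep only feature points that fall on static semantic classes, so
    that dynamic traffic participants (trucks, buses) cannot dominate
    the motion estimate. keypoints: (N, 2) pixel coordinates; labels:
    length-N class names sampled from the segmentation mask."""
    keep = np.array([lab in STATIC_CLASSES for lab in labels])
    return keypoints[keep], keep
```

The same mask can down-weight pixels in an alignment-based residual, which is one way the two families of methods can share a single optimization pipeline.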


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with damped waveform became apparent varied between 17 and 26, and in some data sets the cycle was not found. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
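The ACF analysis in points 2 and 3 can be reproduced on a synthetic series; the damped 2-year cycle below is an illustrative stand-in for the trap counts, not the study's data:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a yearly abundance series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[:len(x) - k] * x[k:]) / denom
                     for k in range(max_lag + 1)])

# synthetic damped 2-year cycle: cos(pi * t) alternates sign each year
years = np.arange(30)
counts = 100 + 40 * np.exp(-0.03 * years) * np.cos(np.pi * years)
r = acf(counts, max_lag=2)
```

As in the study, the lag-1 autocorrelation is strongly negative and the lag-2 value positive and smaller, the signature of a 2-year cycle with a damped waveform.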


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for each of the three data sets, a CoMFA model with all CoMFA descriptors was created; then, by applying each variable selection method, a new CoMFA model was developed, so that 9 CoMFA models were built per data set. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, significantly increases the predictive power and stability of CoMFA models. Results & Conclusion: Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs take only a few seconds. In addition, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields.
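A hedged sketch of the coefficient-stability idea behind approaches such as SPA-jackknife: this is a generic leave-one-out screen on a least-squares model, not the actual CoMFA/PLS procedure:

```python
import numpy as np

def jackknife_stable_variables(X, y, t=2.0):
    """Leave-one-out ('jackknife') stability screen: refit a
    least-squares model n times, each time dropping one sample, and
    keep variables whose mean coefficient exceeds t times its
    jackknife standard error."""
    n = len(y)
    coefs = []
    for i in range(n):
        keep = np.arange(n) != i
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        coefs.append(w)
    coefs = np.array(coefs)
    mean = coefs.mean(axis=0)
    # jackknife standard error: sqrt((n-1)/n * sum((w_i - mean)^2))
    se = np.sqrt((n - 1) / n * ((coefs - mean) ** 2).sum(axis=0))
    return np.abs(mean) > t * (se + 1e-12)

# toy example: one informative column plus two noise columns
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
y = 3 * X[:, 0] + 0.1 * rng.standard_normal(60)
selected = jackknife_stable_variables(X, y)
```

Variables that survive such a screen are the ones whose contribution is stable under resampling, which is the property the abstract links to model stability.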


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is aided by Long Short-Term Memory (LSTM), which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. On the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous works. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of these parameters on performance.
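The 1D-to-2D decomposition can be illustrated in a few lines; the LSTM encoding step is deliberately omitted here, so the rows are raw windows rather than encoded sequences:

```python
import numpy as np

def signal_to_fingerprint(signal, width):
    """Arrange a 1D sensor stream into a 2D 'fingerprint' by stacking
    consecutive windows as rows; in the paper each row would be an
    LSTM-encoded sequence before the array is fed to a CNN image
    classifier. Trailing samples that do not fill a row are dropped."""
    n = len(signal) // width
    return np.asarray(signal[: n * width], dtype=float).reshape(n, width)

# toy periodic accelerometer-like signal with period 16
t = np.arange(256)
sig = np.sin(2 * np.pi * t / 16)
img = signal_to_fingerprint(sig, width=16)
```

Because the toy signal's period matches the window width, every row of the fingerprint is identical, which shows how periodic activity becomes a repeating image pattern that a CNN can pick up.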

