Continuous Preference Trend Mining for Optimal Product Design With Multiple Profit Cycles

Author(s):  
Jungmok Ma ◽  
Harrison M. Kim

Product and design analytics is emerging as a promising area for the analysis of large-scale data and the application of the extracted knowledge to the design of optimal systems. The Continuous Preference Trend Mining (CPTM) algorithm and framework proposed in this study address some fundamental challenges in the context of product and design analytics. The first contribution is the development of a new predictive trend mining technique that captures a hidden trend of customer purchase patterns from large accumulated transactional data. Unlike traditional, static data mining algorithms, the CPTM does not assume stationarity and dynamically extracts valuable knowledge about customers over time. By generating trend-embedded future data, the CPTM algorithm not only shows higher prediction accuracy in comparison with static models, but also provides essential properties that could not be achieved with a previously proposed model: avoiding an over-fitting problem, identifying performance information of the constructed model, and allowing numeric prediction. The second contribution is a predictive design methodology for the early design stage. The framework enables engineering designers to optimize product design over multiple life cycles while reflecting customer preferences and technological obsolescence using the CPTM algorithm. For illustration, the developed framework is applied to an example of tablet PC design in the leasing market, and the result shows that the selection of an optimal design is achieved over multiple life cycles.

2014 ◽  
Vol 136 (6) ◽  
Author(s):  
Jungmok Ma ◽  
Harrison M. Kim

Product and design analytics is emerging as a promising area for the analysis of large-scale data and the use of the extracted knowledge for the design of optimal systems. The continuous preference trend mining (CPTM) algorithm and application proposed in this study address some fundamental challenges in the context of product and design analytics. The first contribution is the development of a new predictive trend mining technique that captures a hidden trend of customer purchase patterns from accumulated transactional data. Unlike traditional, static data mining algorithms, the CPTM does not assume stationarity but dynamically extracts valuable knowledge from customers over time. By generating trend-embedded future data, the CPTM algorithm not only shows higher prediction accuracy in comparison with well-known static models but also provides essential properties that could not be achieved with previously proposed models: utilizing historical data selectively, avoiding an over-fitting problem, identifying performance information of a constructed model, and allowing numeric prediction. The second contribution is the formulation of the initial design problem, which can reveal an opportunity for multiple profit cycles. This mathematical formulation enables design engineers to optimize product design over multiple life cycles while reflecting customer preferences and technological obsolescence using the CPTM algorithm. For illustration, the developed framework is applied to an example of tablet PC design in the leasing market, and the result shows that the determination of the optimal design is achieved over multiple life cycles.
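The central mechanism here is the generation of trend-embedded future data: extrapolating how customer preferences shift over time and synthesizing training records for a future period. The following is a minimal Python sketch of that idea only; the linear per-level trend, the attribute names, and the sampling scheme are illustrative assumptions, not the paper's actual CPTM formulation.

```python
# Sketch: extrapolate each attribute level's preference share with a linear
# trend, then sample synthetic "future" transactions from the extrapolated
# shares. All data and the linear model are assumptions for illustration.
import numpy as np

def generate_future_data(history, next_period, n_samples=1000, rng=None):
    """history: dict mapping attribute level -> per-period preference shares.
    Returns n_samples sampled attribute levels for the future period."""
    rng = rng or np.random.default_rng(0)
    periods = np.arange(len(next(iter(history.values()))))
    levels = list(history)
    # Fit a linear trend per level and evaluate it at the future period.
    raw = np.array([np.polyval(np.polyfit(periods, history[lv], 1), next_period)
                    for lv in levels]).clip(min=0.0)
    probs = raw / raw.sum()          # renormalize extrapolated shares
    return rng.choice(levels, size=n_samples, p=probs)

# Example: 10-inch screens gaining share over four quarters, 7-inch losing it.
history = {"7in": [0.50, 0.45, 0.40, 0.35], "10in": [0.50, 0.55, 0.60, 0.65]}
future = generate_future_data(history, next_period=4)
```

A predictive model trained on such synthetic records can then be validated against held-out periods, which is how properties like over-fitting avoidance and performance identification become measurable.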


2021 ◽  
Vol 1 ◽  
pp. 3229-3238
Author(s):  
Torben Beernaert ◽  
Pascal Etman ◽  
Maarten De Bock ◽  
Ivo Classen ◽  
Marco De Baar

Abstract The design of ITER, a large-scale nuclear fusion reactor, is intertwined with profound research and development efforts. Tough problems call for novel solutions, but the low maturity of those solutions can lead to unexpected problems. If designers keep solving such emergent problems in iterative design cycles, the complexity of the resulting design is bound to increase. Instead, we want to show designers the sources of emergent design problems, so they may be dealt with more effectively. We propose to model the interplay between multiple problems and solutions in a problem network. Each problem and solution is then connected to a dynamically changing engineering model, a graph of physical components. By analysing the problem network and the engineering model, we can (1) derive which problem has emerged from which solution and (2) compute the contribution of each design effort to the complexity of the evolving engineering model. The method is demonstrated for a sequence of problems and solutions that characterized the early design stage of an optical subsystem of ITER.
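A problem network of this kind is, at its core, a directed graph over problems and solutions. The sketch below illustrates the two analyses named in the abstract on a toy network; the node names, edge labels, and the complexity measure (components added to the engineering model) are hypothetical, not ITER data.

```python
# Toy problem network: directed edges record "problem solved_by solution"
# and "solution caused problem". All content is invented for illustration.
from collections import defaultdict

edges = {
    ("P1: stray light", "S1: add mirror baffle"): "solved_by",
    ("S1: add mirror baffle", "P2: baffle blocks view"): "caused",
    ("P2: baffle blocks view", "S2: reshape baffle"): "solved_by",
}

# (1) Derive which problem emerged from which solution.
caused_by = defaultdict(list)
for (src, dst), kind in edges.items():
    if kind == "caused":
        caused_by[dst].append(src)
for problem, sources in caused_by.items():
    print(f"{problem} emerged from {sources}")

# (2) Attribute complexity growth: components each solution adds to the
# engineering model (hypothetical counts per design effort).
components_added = {"S1: add mirror baffle": 2, "S2: reshape baffle": 0}
print("total added complexity:", sum(components_added.values()))
```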


Author(s):  
Markus Mäck ◽  
Michael Hanss

Abstract The early design stage of mechanical structures is often characterized by unknown or only partially known boundary conditions and environmental influences. Particularly in the case of safety-relevant components, such as the crumple zone structure of a car, those uncertainties must be appropriately quantified and accounted for in the design process. For this purpose, possibility theory provides a suitable tool for the modeling of incomplete information and uncertainty propagation. However, the numerical propagation of uncertainty described by possibility theory is accompanied by high computational costs. The necessarily repeated model evaluations make the uncertainty analysis difficult to realize for complex, large-scale models. Oftentimes, simplified and idealized models are used for the uncertainty analysis to speed up the simulation while accepting a loss of accuracy. The proposed multifidelity scheme for possibilistic uncertainty analysis, instead, takes advantage of the low costs of an inaccurate low-fidelity model and the accuracy of an expensive high-fidelity model. For this purpose, the functional dependency between the high- and low-fidelity model is exploited and captured in a possibilistic way. This results in a significant speedup of the uncertainty analysis while ensuring accuracy by using only a small number of expensive high-fidelity model evaluations. The proposed approach is applied to an automotive car crash scenario in order to emphasize its versatility and applicability.
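To make the ingredients concrete, the sketch below propagates a triangular fuzzy input through alpha-cuts and uses a cheap low-fidelity model corrected toward a few high-fidelity runs. The models, the linear correction, and the monotonicity assumption (so interval endpoints suffice) are all simplifying assumptions; the paper captures the high/low-fidelity dependency possibilistically rather than with a fixed linear fit.

```python
# Sketch: multifidelity alpha-cut propagation under strong assumptions
# (triangular fuzzy input, monotone response, linear fidelity correction).
import numpy as np

def lo_model(x):   # cheap, biased surrogate (hypothetical)
    return 0.8 * x**2

def hi_model(x):   # expensive reference model (hypothetical)
    return x**2 + 0.5 * x

# Calibrate a linear correction from only three high-fidelity evaluations.
xs = np.array([0.0, 1.0, 2.0])
a, b = np.polyfit(lo_model(xs), hi_model(xs), 1)

def corrected(x):
    return a * lo_model(x) + b

# Triangular fuzzy input (1, 2, 3): alpha-cut interval [1 + alpha, 3 - alpha].
for alpha in (0.0, 0.5, 1.0):
    lo_x, hi_x = 1 + alpha, 3 - alpha
    print(alpha, [corrected(lo_x), corrected(hi_x)])  # output interval
```

After calibration, every remaining alpha-cut evaluation costs only low-fidelity runs, which is the source of the speedup the abstract describes.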


2021 ◽  
Author(s):  
Oluvaseun Owojaiye

Advancement in technology has brought considerable improvement to processor design, and manufacturers now place multiple processors on a single chip. Supercomputers today consist of clusters of interconnected nodes that collaborate to solve complex and advanced computation problems. Message Passing Interface (MPI) and Open Multiprocessing (OpenMP) are popular programming models for optimizing sequential codes by parallelizing them on the different multiprocessor architectures that exist today. In this thesis, we parallelize the non-slicing floorplan algorithm based on Multilevel Floorplanning/placement of large-scale modules using B*tree (MB*tree) with MPI and OpenMP on distributed and shared memory architectures, respectively. In VLSI (Very Large Scale Integration) design automation, floorplanning is an initial and vital task performed in the early design stage. Experimental results using MCNC benchmark circuits show that our parallel algorithm produced better results than the corresponding sequential algorithm; we achieved speedups of up to 4 times, reducing computation time while maintaining floorplan solution quality. We also compared the two parallel versions; the OpenMP implementation gave slightly better results than the corresponding MPI implementation.
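The thesis itself parallelizes the MB*tree algorithm with MPI and OpenMP; as a rough shared-memory analogue, the Python sketch below runs independent stochastic floorplan searches in parallel worker processes and keeps the best result. The shelf-packing cost function and module sizes are toy stand-ins, not the B*tree representation.

```python
# Toy analogue of worker-parallel floorplan search (not the MB*tree method):
# each worker explores random module orders; the best bounding box wins.
import random
from multiprocessing import Pool

MODULES = [(3, 2), (2, 2), (4, 1), (1, 3)]  # (width, height) per module

def shelf_pack(order, max_width=6):
    """Greedy shelf packing of modules in the given order; returns area."""
    x = shelf_y = shelf_h = 0
    for i in order:
        w, h = MODULES[i]
        if x + w > max_width:      # module does not fit: open a new shelf
            shelf_y += shelf_h
            x, shelf_h = 0, 0
        x += w
        shelf_h = max(shelf_h, h)
    return max_width * (shelf_y + shelf_h)

def random_search(seed, iters=10_000):
    """One worker's independent random search; returns (best_area, order)."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(iters):
        order = rng.sample(range(len(MODULES)), len(MODULES))
        best = min(best, (shelf_pack(order), order))
    return best

if __name__ == "__main__":
    with Pool(4) as pool:          # 4 worker processes ~ 4 threads/ranks
        results = pool.map(random_search, range(4))
    print("best floorplan area:", min(results)[0])
```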


Author(s):  
Min Jung Kwak ◽  
Yoo Suk Hong ◽  
Nam Wook Cho

In recent years, sustainable product design has become a great concern to product manufacturers. An effective way to enhance product sustainability is to design products that are easy to disassemble and recycle. An end-of-life (EOL) strategy is concerned with how to disassemble a product and what to do with each of the resulting disassembled parts. A sound understanding of the EOL strategy from the early design stage can improve the ease of disassembly and recycling in an efficient and effective manner. We introduce a novel concept of eco-architecture, which represents a scheme by which the physical components are allocated to EOL modules. An EOL module is a physical chunk of connected components, or a feasible subassembly, which can be processed by the same EOL option without further disassembly. In this paper, a method for analyzing and optimizing the eco-architecture of a product in the architecture design stage is proposed. Using mathematical programming, it produces an optimal eco-architecture based on the estimation of the economic values and costs of possible EOL modules under the given environmental regulations.
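The flavor of such an optimization can be shown in miniature: assign each EOL module one option so as to maximize net value subject to a regulatory constraint. The modules, values, masses, and the 30% recycled-mass floor below are invented for illustration, and brute-force enumeration stands in for the paper's mathematical programming formulation.

```python
# Miniature eco-architecture optimization: pick an EOL option per module,
# maximizing net value under an assumed recycled-mass regulation.
from itertools import product

modules = {"housing": 2.0, "battery": 0.5, "board": 0.3}   # mass in kg
net_value = {   # (module, option) -> estimated value minus cost (invented)
    ("housing", "reuse"): 5.0, ("housing", "recycle"): 1.0, ("housing", "disposal"): -0.5,
    ("battery", "reuse"): 0.0, ("battery", "recycle"): 0.8, ("battery", "disposal"): -2.0,
    ("board", "reuse"): 3.0, ("board", "recycle"): 0.4, ("board", "disposal"): -0.2,
}
options = ["reuse", "recycle", "disposal"]
total_mass = sum(modules.values())

best = None
for assign in product(options, repeat=len(modules)):
    plan = dict(zip(modules, assign))
    recycled = sum(m for mod, m in modules.items() if plan[mod] == "recycle")
    if recycled / total_mass < 0.30:     # assumed regulatory floor
        continue
    value = sum(net_value[(mod, opt)] for mod, opt in plan.items())
    if best is None or value > best[0]:
        best = (value, plan)
print(best)
```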


2021 ◽  
Author(s):  
Venkat P. Nemani ◽  
Jinqiang Liu ◽  
Navaid Ahmed ◽  
Adam Cartwright ◽  
Gül E. Kremer ◽  
...  

Abstract Design for Remanufacturing (DfRem) is an attractive approach for sustainable product development. Evaluating DfRem strategies from both economic and environmental perspectives at an early design stage allows designers to make informed decisions when choosing the best design option. Studying the long-term implications of a particular design scenario requires quantifying the benefits of remanufacturing over multiple life cycles while considering the reliability of the product. In addition to comparing designs on a one-to-one basis, we find that including reliability provides a different insight into comparing design strategies. We present a reliability-informed cost and energy analysis framework that accounts for product reliability over multiple remanufacturing cycles within a certain warranty policy. The variation of the reuse rate over successive remanufacturing cycles is formulated using a branched power-law model, which provides probabilistic scenarios of reusing units or replacing them with new ones. To demonstrate the utility of this framework, we use the case study of a hydraulic manifold, a component of a transmission used in some agricultural equipment, and apply real-world field reliability data to quantify the transmission’s reliability. Three design improvements are proposed for the manifold, and we quantify the costs and energy consumption associated with each design change over multiple remanufacturing cycles.
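A branched reuse-or-replace process of this kind can be simulated directly. In the sketch below, the reuse probability decays per cycle as a power law, p_k = p1 * k^(-b), and each unit branches into "reuse" or "replace with new" at every cycle; the power-law form and all cost parameters are assumptions standing in for the paper's fitted model and warranty policy.

```python
# Monte Carlo sketch of branched reuse/replace scenarios over remanufacturing
# cycles, with an assumed power-law decay of the reuse rate.
import random

def simulate(n_units=10_000, cycles=4, p1=0.9, b=0.4,
             cost_reuse=20.0, cost_new=100.0, seed=1):
    rng = random.Random(seed)
    total_cost = 0.0
    for _ in range(n_units):
        for k in range(1, cycles + 1):
            p_reuse = p1 * k ** (-b)     # reuse rate decays with cycle k
            total_cost += cost_reuse if rng.random() < p_reuse else cost_new
    return total_cost / n_units

print("expected cost per unit over 4 cycles:", round(simulate(), 2))
```

Running the same simulation with per-design parameters (and energy in place of cost) is how candidate design changes could be compared on a common, reliability-aware basis.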


2021 ◽  

The absence of existing standards for product recovery planning and the associated difficulty in prioritising conflicting design requirements are among the main challenges faced during product design. In this paper, a concept of Design for Multiple Life-Cycles (DFMLC) is proposed to address these challenges. The objective of the DFMLC model is to assist designers in evaluating design attributes of Multiple Life-Cycle Products (MLCP) at the early design stage. The methodology adopted for the evaluation of MLCP design strategies is based on a modified Analytical Hierarchy Process (AHP). Two mapping matrices relating the design guidelines and design strategies to MLCP design attributes were developed for the modified AHP model. Disassemblability (> 21 %) was found to be the most important design element for MLCP, followed by serviceability (> 20 %) and reassembly (> 12 %).
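The percentages above are AHP priority weights. The sketch below shows the standard geometric-mean priority calculation from a pairwise comparison matrix; the 3x3 matrix is invented for illustration and does not reproduce the paper's modified AHP or its mapping matrices, so the resulting weights differ from those reported.

```python
# Standard AHP priorities via row geometric means (illustrative matrix).
import numpy as np

criteria = ["disassemblability", "serviceability", "reassembly"]
# A[i][j]: relative importance of criterion i over j (Saaty 1-9 scale).
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

gm = A.prod(axis=1) ** (1 / len(A))   # row geometric means
weights = gm / gm.sum()               # normalized priority weights
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.1%}")
```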


Author(s):  
Guanghsu A. Chang ◽  
Cheng-Chung Su ◽  
John W. Priest

Many conflicting issues exist between product design and manufacturing departments. In the early design stage, designers often do not have enough expertise to successfully address all these issues, which results in a product design with a low level of assemblability and manufacturability. Hence, an intelligent decision support system is needed in the early design stages to improve a design. This paper proposes a web-based intelligent decision support system, CBR-DFMA, connected to a case base, a database, and a knowledge base. Early experimental results indicate that by employing this system, potential design problems can be detected in advance, design expertise can be effectively disseminated, and effective training can be offered to designers.
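The retrieval step typical of case-based reasoning (CBR) systems like this one is a weighted nearest-neighbor match of a new design against stored cases. The sketch below shows that step only; the feature names, weights, cases, and advice strings are hypothetical, not contents of the CBR-DFMA case base.

```python
# Sketch of weighted nearest-neighbor case retrieval (hypothetical data).
def similarity(query, case, weights):
    """Weighted matching score over categorical design features."""
    score = sum(w for f, w in weights.items() if query.get(f) == case.get(f))
    return score / sum(weights.values())

cases = [
    {"fastener": "screw", "material": "ABS", "advice": "consider snap fits"},
    {"fastener": "snap", "material": "ABS", "advice": "good for assembly"},
]
weights = {"fastener": 0.6, "material": 0.4}
query = {"fastener": "screw", "material": "ABS"}

best = max(cases, key=lambda c: similarity(query, c, weights))
print("retrieved advice:", best["advice"])
```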


Author(s):  
Lukman Irshad ◽  
H. Onan Demirel ◽  
Irem Y. Tumer

Abstract Human errors and poor ergonomics are attributed to a majority of large-scale accidents and malfunctions in complex engineered systems. Human Error and Functional Failure Reasoning (HEFFR) is a framework developed to assess potential functional failures, human errors, and their propagation paths during early design stages so that more reliable systems with improved performance and safety can be designed. In order to perform a comprehensive analysis using this framework, a wide array of potential failure scenarios needs to be tested. Coming up with use cases that cover a majority of faults can be challenging, or even impossible, for a single engineer or a team of engineers. In the field of software engineering, automated test case generation techniques have been widely used for software testing. This research explores these methods to create a use case generation technique that covers both component-related and human-related fault scenarios. The proposed technique is a time-based simulation that employs a modified Depth First Search (DFS) algorithm to simulate events as the event propagation is analyzed using HEFFR at each timestep. This approach is applied to a hold-up tank design problem, and the results are analyzed to explore its capabilities and limitations.
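The combination of DFS with time-stepped propagation can be pictured as enumerating fault paths through a component graph, one timestep per hop. The toy below illustrates that structure on a hold-up-tank-like system; the components, propagation rules, and depth limit are invented and stand in for HEFFR's actual behavioral models.

```python
# Toy DFS enumeration of time-stepped fault propagation paths (invented
# component graph; not the HEFFR behavioral models).
propagates_to = {                 # component -> downstream components
    "inlet_valve": ["tank"],
    "tank": ["outlet_valve", "level_sensor"],
    "outlet_valve": [],
    "level_sensor": [],
}

def dfs_events(component, t, path, max_t=3):
    """Yield fault propagation paths as lists of (timestep, component)."""
    path = path + [(t, component)]
    if t == max_t or not propagates_to[component]:
        yield path
        return
    for nxt in propagates_to[component]:
        yield from dfs_events(nxt, t + 1, path, max_t)

for scenario in dfs_events("inlet_valve", 0, []):
    print(scenario)
```

In the actual framework, each generated scenario would be handed to HEFFR at every timestep to assess the resulting functional failures and human errors.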

