A Nongraphical Method to Determine the Optimum Disassembly Plan in Remanufacturing

2012 ◽  
Vol 135 (2) ◽  
Author(s):  
Niloufar Ghoreishi ◽  
Mark J. Jakiela ◽  
Ali Nekouzadeh

Optimizing a disassembly process involves maximizing the number of disassembled valuable parts (cores) while minimizing the number of disassembly operations. Often, some disassembly operations are common to two or more cores, or removing a core requires the prior removal of other cores (known as precedence relations); these couplings complicate allocating the disassembly cost among the cores. To overcome this complexity, current optimization methods (decision trees) determine the optimum sequence of disassembly operations rather than the optimum set of cores to be disassembled, and they become difficult to implement as the number of cores grows. In this paper, we develop a mechanized nongraphical approach that determines the optimum set of cores to be disassembled, and the disassembly operations they require, based on the functionality statuses of the cores. The approach introduces a new characterization of the disassembly process and its precedence relations, and it can be implemented conveniently in computer code even when the product consists of many cores. Its application is explained with an example, which shows that optimum disassembly can increase net profit significantly compared with complete disassembly.
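
The paper's own mechanized formulation is not reproduced here, but the selection problem it targets can be sketched directly: choose the subset of cores whose resale value, net of the cost of the union of their required operations (including those forced by precedence), is maximal. A minimal Python illustration with hypothetical cores, operations, costs, and precedence relations:

```python
from itertools import combinations

# Hypothetical example data (not from the paper): each core has a resale
# value (zero if nonfunctional), a set of required disassembly operations,
# and possibly precedence relations (cores that must be removed first).
core_value = {"A": 40.0, "B": 25.0, "C": 60.0, "D": 10.0}
core_ops = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4, 5}, "D": {5, 6}}
op_cost = {1: 8.0, 2: 5.0, 3: 7.0, 4: 12.0, 5: 4.0, 6: 30.0}
precedes = {"C": {"A"}, "D": {"C"}}  # removing C requires removing A first, etc.

def closure(cores):
    """All cores that must be removed to free every core in `cores`."""
    result, frontier = set(cores), list(cores)
    while frontier:
        for pred in precedes.get(frontier.pop(), set()):
            if pred not in result:
                result.add(pred)
                frontier.append(pred)
    return result

def profit(targets):
    removed = closure(targets)
    ops = set().union(*(core_ops[c] for c in removed)) if removed else set()
    revenue = sum(core_value[c] for c in removed)  # every freed core is sold
    return revenue - sum(op_cost[o] for o in ops)

# Exhaustive search over core subsets (fine for small products; the paper's
# contribution is precisely a method that scales past this).
best = max((set(s) for r in range(len(core_value) + 1)
            for s in combinations(core_value, r)), key=profit)
print("optimal cores:", closure(best), "profit:", profit(best))
```

With these numbers the optimum skips core D (its extra operation costs more than the core returns), mirroring the abstract's point that selective disassembly can outperform complete disassembly.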

Author(s):  
John G. Michopoulos ◽  
Sam G. Lambrakos ◽  
Nick E. Tran

The goal of the present work is threefold: first, to create a forward continuum model of a multi-species diffusing system under the simultaneous presence of chemical reactivity and temperature, as the general case of all hydrogen storage systems; second, to cast the hydrogen storage problem in a pragmatic product-design context in which the design parameters of the system are determined via optimization methods that utilize extensive experimental data encoding the system's behavior; and third, to demonstrate this methodology by characterizing certain systemic parameters. The context of the work is thus a data-driven characterization of coupled heat and mass diffusion models of hydrogen storage systems from a multiphysics perspective at the macro length scale. In particular, a single-wall nanotube (SWNT) based composite is modeled by coupled partial differential equations representing the spatio-temporal evolution of temperature and hydrogen concentration distributions. Analytical solutions of these equations are adopted for an inverse analysis that defines a nonlinear optimization problem: the model parameters are determined by minimizing an objective function constructed from experimentally acquired and model-produced data. Simulations demonstrating the applicability of the methodology and a discussion of its potential extension to multi-scale and manufacturing process optimization are also presented.
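
The inverse step described here, determining model parameters by minimizing an objective function built from experimental and model-produced data, can be sketched with a toy analytical solution standing in for the paper's coupled heat/mass diffusion model. Everything below (the functional form, parameter names, and data) is illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the paper's analytical solution: hydrogen concentration
# relaxing toward saturation c_sat with rate k. The real model couples heat
# and multi-species mass diffusion; this form is illustrative only.
def model(params, t):
    c_sat, k = params
    return c_sat * (1.0 - np.exp(-k * t))

# Synthetic "experimental" data with noise (the paper uses measured data).
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 10.0, 50)
c_obs = model([2.5, 0.8], t_obs) + rng.normal(0.0, 0.02, t_obs.size)

# Objective: residuals between model prediction and measurements; the
# optimizer minimizes their sum of squares.
def residuals(params):
    return model(params, t_obs) - c_obs

fit = least_squares(residuals, x0=[1.0, 0.1], bounds=([0, 0], [10, 10]))
print("estimated (c_sat, k):", fit.x)
```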


2012 ◽  
Vol 729 ◽  
pp. 144-149 ◽  
Author(s):  
Imre Felde

Predicting boundary conditions of the third kind arising during heat treatment processes is an essential requirement for characterizing heat transfer phenomena. In this work, the performance of four optimization techniques is studied: the conjugate gradient method, the Levenberg-Marquardt method, the simplex method, and the NSGA-II algorithm. The methods are used to estimate the heat transfer coefficient during transient heat transfer, and their performance is demonstrated using numerical techniques.
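
As a concrete sketch of this kind of inverse estimation, consider a lumped-capacitance stand-in for the transient problem, with the heat transfer coefficient recovered by the Levenberg-Marquardt method (one of the four techniques compared). The simplified cooling law and all numbers below are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Lumped-capacitance toy problem (the paper's transient case is richer):
# a quenched probe cools as T(t) = T_inf + (T0 - T_inf) * exp(-h*A/(m*cp) * t).
T0, T_inf = 850.0, 60.0          # initial and quenchant temperature, deg C
A, m, cp = 1.2e-3, 0.05, 460.0   # surface area m^2, mass kg, heat capacity J/(kg K)

def temperature(h, t):
    return T_inf + (T0 - T_inf) * np.exp(-h * A / (m * cp) * t)

# Synthetic cooling-curve "measurements" with sensor noise.
t_obs = np.linspace(0.0, 30.0, 60)
rng = np.random.default_rng(1)
T_obs = temperature(900.0, t_obs) + rng.normal(0.0, 2.0, t_obs.size)

# Levenberg-Marquardt least-squares fit of the heat transfer coefficient.
fit = least_squares(lambda h: temperature(h[0], t_obs) - T_obs,
                    x0=[100.0], method="lm")
print("estimated heat transfer coefficient, W/(m^2 K):", fit.x[0])
```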


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Till Massing

Abstract Tewari et al. (Parametric characterization of multimodal distributions with non-Gaussian modes, pp 286–292, 2011) introduced Gaussian mixture copula models (GMCM) for clustering problems, which do not assume normality of the mixture components as Gaussian mixture models (GMM) do. In this paper, we propose Student t mixture copula models (SMCM) as an extension of GMCMs. GMCMs require weak assumptions, yielding a flexible fit and a powerful clustering tool. Our SMCM extension offers, in a natural way, even more flexibility than the GMCM approach. We discuss estimation issues and compare Expectation-Maximization (EM)-based methods with numerical simplex optimization. We illustrate the SMCM as a tool for image segmentation.
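
The numerical simplex route compared against EM here can be illustrated on a much simpler stand-in: directly maximizing the log-likelihood of a two-component 1D Gaussian mixture with Nelder-Mead. In the SMCM itself the density would be the Student t mixture copula density evaluated on rank-transformed data; the toy below only shows the optimization mechanics:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic two-component data; in the SMCM setting these would be
# rank-transformed observations and a t-mixture copula density.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.5, 1.0, 700)])

def neg_log_lik(theta):
    w_logit, mu1, mu2, log_s1, log_s2 = theta
    w = 1.0 / (1.0 + np.exp(-w_logit))        # mixture weight kept in (0, 1)
    dens = (w * norm.pdf(x, mu1, np.exp(log_s1))
            + (1.0 - w) * norm.pdf(x, mu2, np.exp(log_s2)))
    return -np.sum(np.log(dens + 1e-300))     # guard against log(0)

# Derivative-free simplex (Nelder-Mead) maximization of the likelihood.
res = minimize(neg_log_lik, x0=[0.0, -1.0, 1.0, 0.0, 0.0],
               method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
w = 1.0 / (1.0 + np.exp(-res.x[0]))
print("weight:", w, "means:", res.x[1:3], "sds:", np.exp(res.x[3:5]))
```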


Author(s):  
Joicy V. M. Peixoto ◽  
Rafaela S. de Almeida ◽  
Jaine P. R. da Rocha ◽  
Gabriel M. Maciel ◽  
Nádya C. Santos ◽  
...  

ABSTRACT The correct characterization of germplasm banks is fundamental for breeders to succeed in breeding programs. Several studies have sought to obtain genotypes with resistance to pests; however, there is no consensus about which methodology is the most appropriate to characterize a tomato germplasm bank with different levels of pest resistance. The objective of this study was to compare multivariate analysis methods for evaluating genetic diversity in tomato genotypes with different levels of resistance to pests. The experiments were conducted at the Vegetable Experimental Station of the Federal University of Uberlândia - Monte Carmelo campus (18° 42’ 43.19” S, 47° 29’ 55.8” W, 873 m altitude) from April 2013 to November 2016. Sixteen genotypes from the interspecific cross between LA-716 (S. pennellii) and the pre-commercial line UFU-057, followed by backcrossing and self-fertilization, were evaluated along with the pre-commercial line UFU-057 (recurrent parent), Santa Clara, and the wild accession S. pennellii (donor parent). Acylsugar content, foliar trichomes, and repellency to the South American tomato pinworm and the leaf miner were analyzed. The experiment followed a randomized block design totaling 76 plots (19 genotypes × 4 blocks). It was concluded that there was genetic variability among the evaluated genotypes. Graphic dispersion by principal components showed the greatest discriminating power. Genotypes UFU-057F2RC27#4.3, UFU-057F2RC28#2.2 and UFU-057F2RC27#4.7 had the highest acylsugar levels and resistance to Liriomyza spp. and T. absoluta.
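
As a sketch of the principal-component dispersion the study found most discriminating, the snippet below projects a placeholder genotype-by-trait matrix onto its first two components; the study's actual inputs were the measured acylsugar, trichome, and repellency variables for the 19 genotypes:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder trait matrix: 19 genotypes x 4 traits (random values stand in
# for measured acylsugar content, trichome counts, and repellency scores).
rng = np.random.default_rng(3)
traits = rng.normal(size=(19, 4))

X = StandardScaler().fit_transform(traits)  # traits have different units, so standardize
pca = PCA(n_components=2)
coords = pca.fit_transform(X)               # genotype coordinates for the dispersion plot
print("explained variance ratios:", pca.explained_variance_ratio_)
```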


2020 ◽  
Vol 37 (1) ◽  
pp. 75-116
Author(s):  
Kensuke Takita

Abstract The primary goal of the present paper is to argue for the hypothesis that labeling is required for linearization, which is called Labeling for Linearization (LfL). To achieve this goal, it is first argued that labels are not necessary for semantic interpretation. It is then proposed that labels are necessary for linearization at the PF-interface in that they serve as a device to encode structural asymmetries that are employed to determine precedence relations, which are asymmetric as well. It is also shown that LfL can remove several problems of the original labeling framework. Building on the idea that Spell-Out applies to the whole phase but not to its subparts, it is illustrated that the LfL-based analysis can solve the problem concerning the variable ways of applying Spell-Out that arises in the standard phase theory. Extending the LfL-based framework to Japanese, a novel analysis of particle-stranding ellipsis is also proposed. Incorporating some insights of recent approaches on which particle-stranding ellipsis arises through a PF-deletion process, it is shown that the proposed analysis based on LfL offers a theoretically more suitable characterization of the PF-deletion process. In this way, the present article contributes not only to sharpening the core theoretical notions regarding structure building and linearization in terms of labeling but also to deepening our understanding of the structure of Japanese.


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Paolo Cinat ◽  
Marco Paggi ◽  
Giorgio Gnecco

Additive manufacturing technologies are a key element of the current era of Industry 4.0, producing mechanical components by adding successive layers of material. They may therefore also be used to produce surfaces tailored to achieve a desired mechanical contact response. In this work, we develop a method to prototype profiles that optimize a suitable trade-off between two different target mechanical responses. The mechanical design problem is solved by relying on both physical assumptions and optimization methods. An algorithm is proposed that exploits an analogy between genetics and the multiscale characterization of roughness, where the various length scales are described in terms of rough profiles, named chromosomes. Finally, the proposed algorithm is tested on a representative example, and the topological and spectral features of the roughness of the optimized profiles are discussed.
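
A minimal sketch of the genetics-roughness analogy, under heavy assumptions: each chromosome is a vector of spectral amplitudes, one per length scale of a superposed-harmonics profile, and the fitness is a placeholder trade-off between two scalar responses (RMS roughness and peak height) rather than the paper's contact-mechanics objectives:

```python
import numpy as np

rng = np.random.default_rng(4)
N_POP, N_GENES, N_GEN = 40, 8, 200

def profile(amplitudes, x):
    # Superpose harmonics: gene i sets the amplitude of the scale with
    # 2**i oscillations over the domain.
    return sum(a * np.sin(2**i * 2 * np.pi * x) for i, a in enumerate(amplitudes))

x = np.linspace(0.0, 1.0, 512)

def fitness(chrom):
    z = profile(chrom, x)
    rms = np.sqrt(np.mean(z**2))
    peak = np.max(np.abs(z))
    # Placeholder two-response trade-off: hit both targets at once.
    return -abs(rms - 0.3) - 0.5 * abs(peak - 0.8)

pop = rng.uniform(0.0, 1.0, (N_POP, N_GENES))
for _ in range(N_GEN):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-N_POP // 2:]]      # truncation selection
    cut = rng.integers(1, N_GENES, N_POP // 2)           # one-point crossover
    kids = np.array([np.concatenate([a[:c], b[c:]])
                     for a, b, c in zip(parents, rng.permutation(parents), cut)])
    kids += rng.normal(0.0, 0.02, kids.shape)            # Gaussian mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best spectral amplitudes:", np.round(best, 3))
```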


2021 ◽  
Vol 34 (4) ◽  
pp. 547-555
Author(s):  
Ben Moussa Oum Salama ◽  
Ayad Ahmed Nour El Islam ◽  
Tarik Bouchala

This paper presents eddy current non-destructive characterization of three aeronautical metal sheets by deterministic and stochastic inversion methods. The procedure couples the finite element method with three optimization algorithms (the simplex method and genetic and particle swarm algorithms) to simultaneously determine the electric conductivity, magnetic permeability, and thickness of Al, Ti, and 304L stainless steel sheets widely used in the aeronautical industry. Applying these methods reveals the performance of each inversion algorithm. A qualitative and quantitative comparison shows that the simplex method is more advantageous than the genetic and particle swarm algorithms, since it is faster and more stable.
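
The inversion loop can be sketched end to end if a toy closed-form forward model stands in for the finite element simulation; the feature map, material numbers, and noise level below are all assumptions, and Nelder-Mead plays the role of the simplex method the authors found fastest:

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model mapping (conductivity, relative permeability, thickness)
# to a few "coil impedance" features; in the paper this is a finite element
# eddy-current simulation.
def forward(p):
    sigma, mu_r, d = p
    skin = 1.0 / np.sqrt(sigma * mu_r)           # skin-depth-like quantity (toy)
    return np.array([np.exp(-d / skin), sigma * d, mu_r / (1.0 + d)])

true = np.array([35.0, 1.0, 2.0])                # illustrative sheet parameters
measured = forward(true) * (1.0 + 0.01 * np.random.default_rng(5).normal(size=3))

def misfit(p):
    if np.any(np.asarray(p) <= 0):               # keep parameters physical
        return 1e9
    return np.sum((forward(p) - measured) ** 2)

res = minimize(misfit, x0=[10.0, 2.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 10000})
print("recovered (sigma, mu_r, d):", np.round(res.x, 3))
```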


2021 ◽  
Vol 118 (36) ◽  
pp. e2105548118
Author(s):  
Aitor Franco ◽  
Pablo Gracia ◽  
Adai Colom ◽  
José D. Camino ◽  
José Ángel Fernández-Higuero ◽  
...  

α-synuclein aggregation is present in Parkinson’s disease and other neuropathologies. Among the assemblies that populate the amyloid formation process, oligomers and short fibrils are the most cytotoxic. The human Hsc70-based disaggregase system can resolve α-synuclein fibrils, but its ability to target other toxic assemblies has not been studied. Here, we show that this chaperone system preferentially disaggregates toxic oligomers and short fibrils, while its activity against large, less toxic amyloids is severely impaired. Biochemical and kinetic characterization of the disassembly process reveals that this behavior is the result of an all-or-none abrupt solubilization of individual aggregates. High-speed atomic force microscopy explicitly shows that disassembly starts with the destabilization of the tips and rapidly progresses to completion through protofilament unzipping and depolymerization without accumulation of harmful oligomeric intermediates. Our data provide molecular insights into the selective processing of toxic amyloids, which is critical to identify potential therapeutic targets against increasingly prevalent neurodegenerative disorders.


Author(s):  
Han P. Bao ◽  
ChunHsi Lei

Disassembly planning and costing are major tasks in achieving sustainable manufacturing. This paper presents a systematic approach to identify the feasible ways to disassemble a product and then to select the most economical one, using reliable time data gathered from experimental and practice-oriented sources. The disassembly process is modeled with the Petri net approach, a technique that has proven fairly popular with the research community over the last few decades. The result of our systematic approach is a reliable derivation of a time-effective disassembly plan.
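
A minimal flavor of Petri-net-based planning, with an invented four-transition net: places are subassembly states, transitions are timed disassembly operations, and a shortest-path search over reachable markings returns the most economical firing sequence:

```python
import heapq
from itertools import count

# Invented example net (not from the paper): each transition consumes and
# produces places (subassembly states) and takes a known time.
transitions = {
    "unscrew_cover": ({"product"}, {"cover", "chassis"}, 30.0),
    "pry_cover":     ({"product"}, {"cover", "chassis_scratched"}, 12.0),
    "remove_board":  ({"chassis"}, {"board", "frame"}, 45.0),
    "remove_board2": ({"chassis_scratched"}, {"board", "frame"}, 60.0),
}
goal = {"board"}  # the core we want freed

def best_plan(initial):
    """Dijkstra over reachable markings: time-optimal firing sequence."""
    pq = [(0.0, 0, frozenset(initial), [])]
    seen = set()
    tiebreak = count(1)  # keeps heap entries comparable
    while pq:
        t, _, marking, plan = heapq.heappop(pq)
        if goal <= marking:
            return t, plan
        if marking in seen:
            continue
        seen.add(marking)
        for name, (pre, post, dt) in transitions.items():
            if pre <= marking:  # transition enabled
                new = frozenset((marking - pre) | post)
                heapq.heappush(pq, (t + dt, next(tiebreak), new, plan + [name]))
    return None

print(best_plan({"product"}))
```

With these times the prying route wins overall (72 s vs. 75 s) even though it makes the subsequent board removal slower, which is exactly the kind of whole-sequence trade-off a timed net captures.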


Author(s):  
Emil Fridman ◽  
Francisco Álvarez Velarde ◽  
Pablo Romojaro Otero ◽  
Haileyesus Tsige-Tamirat ◽  
Antonio Jiménez-Carrascosa ◽  
...  

Abstract In the framework of the Horizon 2020 project ESFR-SMART (2017-2021), the European Sodium Fast Reactor (ESFR) core was updated through safety-related modifications and optimization of the core design from the earlier FP7 CP-ESFR project (2009-2013). This study is dedicated to neutronic analyses of the improved ESFR core design. The work is reported in two parts. Part I deals with the evaluation of the safety-related neutronic parameters of the fresh Beginning-of-Life (BOL) core, carried out by eight organizations using both continuous-energy Monte Carlo and deterministic computer codes. In addition to the neutronic characterization of the core, special emphasis was placed on the calibration and verification of the computational tools involved in the analyses. Part II is devoted to once-through and realistic batch-wise burnup calculations aimed at establishing the equilibrium core state, which will later serve as a basis for detailed safety analyses.

