A Common Weight Linear Optimization Approach for Multicriteria ABC Inventory Classification

2015
Vol 2015
pp. 1-11
Author(s):
S. M. Hatefi
S. A. Torabi

Organizations typically employ the ABC inventory classification technique to exercise efficient control over a large number of inventory items. The ABC inventory classification problem is the classification of a large number of items into three groups: A, very important; B, moderately important; and C, relatively unimportant. The traditional ABC classification accounts for only one criterion, namely the annual dollar usage of the items, but other important criteria in the real world strongly affect the classification. This paper proposes a novel methodology based on a common weight linear optimization model to solve the multiple criteria inventory classification problem. The proposed methodology enables the classification of inventory items via a set of common weights, which is essential for a fair classification. It offers remarkable computational savings compared with existing approaches while requiring no subjective information. Furthermore, it is easy for managers to apply. The proposed model is applied to an illustrative example and a case study taken from the literature. Both numerical results and qualitative comparisons with existing methods reveal several merits of the proposed approach for ABC analysis.
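The common-weight idea above can be illustrated with a small linear program. The following Python sketch is a simplified illustration, not the authors' exact formulation: it finds one weight vector shared by all items that maximizes the total weighted score while capping every item's score at 1. The criteria matrix, the score cap, and the lower bound on weights are assumptions made for the example.

```python
# Minimal common-weight LP sketch: one weight vector w for all items,
# maximizing the total weighted score subject to each item's score <= 1.
# Criteria values are assumed to be normalized to [0, 1] beforehand.
import numpy as np
from scipy.optimize import linprog

X = np.array([            # hypothetical items x criteria (already normalized)
    [0.9, 0.2, 0.7],
    [0.4, 0.8, 0.3],
    [0.1, 0.5, 0.6],
])
n_items, n_criteria = X.shape
eps = 1e-3                # small lower bound keeps every criterion in play

c = -X.sum(axis=0)        # linprog minimizes, so negate: max sum_i w . x_i
res = linprog(c, A_ub=X, b_ub=np.ones(n_items),
              bounds=[(eps, None)] * n_criteria, method="highs")

w = res.x
scores = X @ w            # one common score per item
print("common weights:", np.round(w, 3))
print("ranking (best first):", np.argsort(-scores))
# Items are then cut into A/B/C groups, e.g. top 20% -> A, next 30% -> B, rest -> C.
```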

Author(s):  
Nazanin Esmaeili
Ebrahim Teimoury
Fahimeh Pourmohammadi

In today's competitive world, the quality of after-sales services plays a significant role in customer satisfaction and retention. Some after-sales activities require spare parts, and owing to the importance of customer satisfaction, the needed spare parts must be supplied until the end of the warranty period. In this study, a mixed-integer linear optimization model is presented to redesign and plan the sales and after-sales services supply chain, addressing the challenges of supplying spare parts after production is stopped due to demand reduction. Three different options are considered for supplying spare parts: production/procurement of extra parts while the product is still being produced, remanufacturing, and procurement of parts just when they are needed. Considering the challenges of supplying spare parts for after-sales services based on the product's life cycle is one contribution of this paper. The paper also addresses the uncertainties associated with different parameters through Mulvey's scenario-based optimization approach. The applicability of the model is investigated using a numerical example from the literature. The results indicate that the production/procurement of extra parts and remanufacturing are preferred to the third option. Moreover, remanufacturing is recommended when the remanufacturing cost is less than 23% of the production cost.
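The trade-off among the three supply options can be shown with a toy mixed-integer program. The sketch below is not the paper's supply chain model; the unit costs, capacities, and demand are hypothetical numbers chosen only to show how the cheaper options are exhausted before just-in-time procurement is used.

```python
# Toy sourcing MILP: cover total spare-parts demand from three options --
# extra production during the product's life, remanufacturing, and
# just-in-time (JIT) procurement -- at minimum cost.  All numbers are hypothetical.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

demand = 600
cost = np.array([4.0, 0.8, 9.0])          # unit cost: extra production, remanufacturing, JIT
capacity = np.array([300.0, 200.0, 1e6])  # per-option capacity (JIT effectively unbounded)

res = milp(
    c=cost,                                                         # minimize total cost
    constraints=LinearConstraint(np.ones((1, 3)), demand, np.inf),  # cover demand
    integrality=np.ones(3),                                         # whole parts only
    bounds=Bounds(0, capacity),
)
print("units per option:", res.x, "total cost:", res.fun)
# Here the extra-production and remanufacturing capacities are used first and
# JIT procurement only covers the remainder, mirroring the preference reported above.
```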


2018
Vol 52 (4-5)
pp. 1219-1232
Author(s):
Atena Gholami
Reza Sheikh
Neda Mizani
Shib Sankar Sana

Customer recognition, classification, and target market selection are among the most important success factors of a marketing system. ABC classification of customers based on axiomatic design reveals customer behavior in each class in a logical way. Missing data is a common occurrence and can have a significant effect on decision-making problems. In this context, the proposed article derives customers' behavioral rules using incomplete rough set theory. Based on the proposed axiomatic design, the managers of a firm can map the rules onto the designed structures. This study demonstrates how to identify customers, determine their characteristics, and facilitate the development of a marketing strategy.
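The incomplete rough set machinery behind the rule derivation can be sketched in a few lines. In the toy example below (hypothetical customer records, with None marking a missing value), two customers are treated as indiscernible when they agree on every attribute where neither value is missing, which yields the lower and upper approximations of a customer class.

```python
# Rough set approximations over an incomplete information system:
# tolerance classes are built from records with missing values (None).
customers = {                      # hypothetical attribute vectors
    "c1": ("high", "frequent", None),
    "c2": ("high", None,       "urban"),
    "c3": ("low",  "rare",     "rural"),
    "c4": ("low",  "rare",     None),
}
class_A = {"c1", "c2"}             # decision class, e.g. customers labelled 'A'

def tolerant(x, y):
    """Indiscernible: equal wherever both values are known."""
    return all(a is None or b is None or a == b for a, b in zip(x, y))

def tolerance_class(name):
    return {other for other, vec in customers.items()
            if tolerant(customers[name], vec)}

lower = {c for c in customers if tolerance_class(c) <= class_A}   # certainly A
upper = {c for c in customers if tolerance_class(c) & class_A}    # possibly A
print("lower:", lower)
print("upper:", upper)   # upper - lower is the boundary region used to extract rules
```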


2020
Vol 10 (22)
pp. 8233
Author(s):
Pei-Chun Lin
Hung-Chieh Chang

Most current classification models approach the ABC classification problem as a ranking problem; that is, a group of inventory items is ranked in descending order of its overall weighted score across criteria. In this paper, we present an extended version of the Hadi-Vencheh model for multiple-criteria ABC inventory classification. The proposed model is based on the nonlinear weighted product method (WPM), which determines a common set of weights for all items. Our proposed nonlinear WPM incorporates multiple criteria with different measurement units without converting the performance of each inventory item on each criterion into a normalized attribute value, thereby providing an improvement over the model proposed by Hadi-Vencheh. Our study includes various criteria for ABC classification and demonstrates an efficient algorithm for solving nonlinear programming problems in which the feasible solution set does not have to be convex. The algorithm presented in this study substantially improves the solution efficiency of the canonical coordinates method (CCM) algorithm when applied to large-scale nonlinear programming problems. The modified algorithm was tested to compare our proposed model's results with those derived using the Hadi-Vencheh model and to demonstrate the algorithm's efficacy. The practical objective of the study was to develop an efficient nonlinear optimization solver by optimizing the quality of existing solutions, thus improving time and space efficiency.
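The scoring rule at the heart of the model is the weighted product. The sketch below shows only that rule, with hypothetical items and an arbitrary common weight vector; the paper's nonlinear program for determining the common weights (and the CCM-based solver) is not reproduced here. Because only the relative ordering of scores matters, the criteria can keep their original units.

```python
# Weighted product method (WPM) scoring with one common weight vector.
import numpy as np

# hypothetical items x criteria: annual dollar usage, lead time (days), criticality
X = np.array([
    [25000.0, 12.0, 3.0],
    [ 8000.0,  4.0, 2.0],
    [ 1500.0, 30.0, 1.0],
])
w = np.array([0.6, 0.3, 0.1])        # a common weight vector applied to every item

scores = np.prod(X ** w, axis=1)     # WPM score: prod_j x_ij ** w_j, no normalization needed
ranking = np.argsort(-scores)        # descending; top-ranked items become class A
print(np.round(scores, 2), ranking)
```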


2019
Vol 53 (5)
pp. 1775-1789
Author(s):
Qingxian An
Yao Wen
Junhua Hu
Xiyang Lei

ABC analysis is a well-known technique for inventory classification. However, it classifies inventory by considering only one indicator, even though other important factors may affect the classification. To address this issue, researchers have proposed multiple criteria inventory classification (MCIC) solutions based on data envelopment analysis (DEA)-like methods. However, most previous models evaluate items with different weight sets, and their index systems contain only quantitative criteria and output indicators. To avoid these shortcomings, we propose an improved common-weight DEA model for the MCIC problem. This model simultaneously considers quantitative and qualitative criteria and establishes a comprehensive index system that includes both inputs and outputs. Apart from its improved discriminating power and lack of subjectivity, this non-parametric, linear programming model provides the performance scores of all items through a single computation. A case study is performed to validate the new model and compare its performance with that of traditional ABC analysis, DEA–CCR, and DEA–CI. The results show that, apart from the highly improved discriminating power and significant reduction in computational burden, the proposed model achieves a more comprehensive ABC inventory classification than the traditional models.
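For contrast with the single-computation property claimed above, the sketch below implements the DEA–CCR baseline named in the comparison, not the proposed common-weight model: each item gets its own most favorable weights, so one linear program must be solved per item, which is exactly the per-item burden the common-weight formulation removes. The input and output data are hypothetical.

```python
# DEA-CCR multiplier model, solved once per item (baseline, not the proposed model).
import numpy as np
from scipy.optimize import linprog

inputs  = np.array([[2.0], [3.0], [5.0]])                     # items x input criteria
outputs = np.array([[10.0, 1.0], [12.0, 3.0], [9.0, 2.0]])    # items x output criteria

def ccr_efficiency(o):
    """max u.y_o  s.t.  v.x_o = 1  and  u.y_j - v.x_j <= 0 for every item j."""
    n_in, n_out = inputs.shape[1], outputs.shape[1]
    c = np.concatenate([np.zeros(n_in), -outputs[o]])           # maximize u.y_o
    A_ub = np.hstack([-inputs, outputs])                        # u.y_j <= v.x_j
    A_eq = np.concatenate([inputs[o], np.zeros(n_out)])[None]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(inputs)),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * (n_in + n_out), method="highs")
    return -res.fun

print([round(ccr_efficiency(o), 3) for o in range(len(inputs))])
```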


Author(s):  
Walid Moudani
Grace Zaarour
Félix Mora-Camino

This paper proposes a predictive model that handles customer insolvency in advance for large mobile telecommunication companies, with the purpose of minimizing their losses while preserving overall customer satisfaction, which may have important consequences for the quality and returns of operations. A new mathematical formulation taking into consideration a set of business rules and customer satisfaction is proposed. Customer insolvency is framed as a classification problem, since the main purpose is to categorize each customer into one of two classes: potentially insolvent or potentially solvent. Therefore, a model producing precise business predictions, using knowledge discovery and data mining techniques on enormous, heterogeneous, and noisy data, is proposed. A fuzzy approach to evaluate and analyze customer behavior is developed, segmenting customers into groups that provide a better understanding of them. These groups, together with many other significant variables, feed into a classification algorithm based on the rough fuzzy set technique to classify the customers. A real case study is considered, followed by analysis and comparison of the results in order to select the best classification model: the one that maximizes accuracy for insolvent customers and minimizes the misclassification rate for solvent customers.
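The two-stage idea (fuzzy segmentation feeding a classifier) can be sketched with synthetic data. In the snippet below, triangular membership functions segment customers by average payment delay, and the memberships plus one extra variable feed a classifier; a decision tree stands in for the paper's rough fuzzy set classifier, and every variable, threshold, and number is hypothetical.

```python
# Fuzzy segmentation of customers followed by a classifier (decision tree as a
# stand-in for the rough fuzzy set technique).  Synthetic, hypothetical data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

rng = np.random.default_rng(0)
delay = rng.uniform(0, 60, size=200)            # average payment delay (days)
monthly_bill = rng.uniform(10, 200, size=200)   # another behavioural variable
insolvent = (delay + 0.1 * monthly_bill + rng.normal(0, 5, 200) > 45).astype(int)

membership = np.column_stack([                  # low / medium / high risk groups
    tri(delay, -1, 0, 20),
    tri(delay, 10, 30, 50),
    tri(delay, 40, 60, 61),
])

features = np.column_stack([membership, monthly_bill])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(features, insolvent)
print("training accuracy:", round(clf.score(features, insolvent), 3))
```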


2019
Vol 38 (3)
pp. 279-290
Author(s):
Seyed Zeinab Aliahmadi
Farnaz Barzinpour
Mir Saman Pishvaee

In many countries, municipal solid waste management is considered a very important challenge, and the most relevant costs in this field are dedicated to the collection process. Therefore, this study proposes a mathematical model with multiple depots and multiple intermediate facilities to minimize the fixed and variable costs of waste collection. Intermediate facilities are used in developed countries' waste collection networks because they reduce the long-term costs of waste management and increase the quality of the waste collection process. In reality, the amount of waste generated per day is not deterministic, so a fuzzy optimization approach was adopted to cope with this uncertainty. Furthermore, a system in which vehicles can collect waste over multiple tours, with a maximum number of tours per vehicle, was also considered. Due to the high complexity of this model, a genetic algorithm was developed, and its efficiency was confirmed by comparison with the exact solution on small instances. The initial solution of this algorithm was obtained by a proposed heuristic algorithm. Finally, a case study on the vehicle routing of municipal solid waste was conducted in a district of Tehran, Iran, and the solutions of the model were validated by comparing its results with the current real-life situation. By implementing the results of the proposed model, contractors could improve vehicle routes and reduce costs without any additional cost.
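The genetic algorithm ingredients (permutation chromosomes, crossover, mutation, elitist selection) can be shown on a heavily simplified version of the problem. The sketch below routes a single vehicle through a set of collection points with no depots, intermediate facilities, fuzzy demand, or tour limits; the coordinates and GA parameters are toy choices, not the paper's setup.

```python
# Tiny GA for a single-vehicle collection route: permutation encoding,
# order crossover, swap mutation, elitist truncation selection.
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0, 10, size=(15, 2))       # collection point coordinates

def tour_length(perm):
    route = points[perm]
    return np.linalg.norm(np.diff(route, axis=0), axis=1).sum()

def order_crossover(p1, p2):
    i, j = sorted(rng.choice(len(p1), size=2, replace=False))
    child = -np.ones(len(p1), dtype=int)
    child[i:j] = p1[i:j]                         # copy a slice from parent 1
    child[child < 0] = [g for g in p2 if g not in child[i:j]]   # fill the rest in p2's order
    return child

def mutate(perm, rate=0.2):
    if rng.random() < rate:
        i, j = rng.choice(len(perm), size=2, replace=False)
        perm[i], perm[j] = perm[j], perm[i]      # swap two visits
    return perm

pop = [rng.permutation(len(points)) for _ in range(60)]
for _ in range(300):
    pop.sort(key=tour_length)
    parents = pop[:20]                           # keep the best routes (elitism)
    children = [mutate(order_crossover(parents[rng.integers(20)],
                                       parents[rng.integers(20)]))
                for _ in range(40)]
    pop = parents + children

print("best tour length:", round(tour_length(min(pop, key=tour_length)), 2))
```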


2018
Vol 2 (1)
pp. 21
Author(s):  
Pedro Urena

Ontology enrichment is a classification problem in which an algorithm categorizes an input conceptual unit into the corresponding node of a target ontology. Conceptual enrichment is of great importance both to Knowledge Engineering and Natural Language Processing because it helps maximize the efficacy of intelligent systems, making them more adaptable to scenarios where information is produced by means of language. Following previous research on distributional semantics, this paper presents a case study of ontology enrichment using a feature-extraction method which relies on collocational information from corpora. The major advantage of this method is that it can help locate an input unit within its corresponding superordinate node in a taxonomy using a relatively small number of lexical features. In order to evaluate the proposed framework, this paper presents an experiment consisting of the automatic classification of a chemical substance in a taxonomy of toxicology.
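A toy version of the collocation-based classification step can make the idea concrete. In the sketch below, an input term is attached to the taxonomy node whose seed terms have the most similar collocate profile (cosine similarity over co-occurrence counts); the micro-corpus, the stopword list, the taxonomy nodes, and the seed terms are all hypothetical.

```python
# Distributional classification of a term into a toy taxonomy via collocates.
from collections import Counter
from math import sqrt

corpus = [
    "benzene is a toxic solvent that causes irritation",
    "ethanol is a common solvent used in labs",
    "arsenic is a toxic metal that causes poisoning",
    "toluene is a volatile solvent that causes irritation and dizziness",
]
stopwords = {"is", "a", "that", "in", "used", "and"}
taxonomy = {"solvent": ["ethanol", "toluene"], "toxic_metal": ["arsenic"]}

def collocates(term):
    """Bag of content words co-occurring with `term` in the same sentence."""
    bag = Counter()
    for sent in corpus:
        words = sent.split()
        if term in words:
            bag.update(w for w in words if w != term and w not in stopwords)
    return bag

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(term):
    profile = collocates(term)
    node_profiles = {node: sum((collocates(s) for s in seeds), Counter())
                     for node, seeds in taxonomy.items()}
    return max(node_profiles, key=lambda n: cosine(profile, node_profiles[n]))

print(classify("benzene"))   # lands under 'solvent' in this toy setup
```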


Author(s):  
Ahlam Mallak
Madjid Fathi

Feature selection is a crucial step to overcome the curse of dimensionality in data mining. This work proposes Recursive k-means Silhouette Elimination (RkSE) as a new unsupervised feature selection algorithm to reduce dimensionality in univariate and multivariate time-series datasets. k-means clustering is applied recursively to select cluster-representative features, with a silhouette measure applied within each cluster and a user-defined threshold serving as the feature selection or elimination criterion. The proposed method is evaluated on multi-sensor readings from a hydraulic test rig in two different ways: (1) reducing dimensionality in a multivariate classification problem using various classifiers of different functionalities; (2) classifying univariate data in a sliding-window scenario, where RkSE is used as a window compression method that reduces window dimensionality by selecting the best time points in each window. The results are validated using 10-fold cross-validation and compared to the results obtained when classification is performed directly, with no feature selection applied. Additionally, a new taxonomy for k-means-based feature selection methods is proposed. The experimental results and observations from the two comprehensive experiments demonstrated in this work reveal the capabilities and accuracy of the proposed method.
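The recursive selection loop can be sketched roughly as follows. This is a loose reading of the abstract, not the authors' reference implementation: features are clustered with k-means, each feature receives a silhouette value, and within every cluster only features above a user-defined threshold survive (the best-scoring feature is always kept as the cluster representative); the loop stops when nothing more is eliminated. The values of k, the threshold, and the synthetic sensor data are arbitrary.

```python
# Rough sketch of a recursive k-means / silhouette feature-elimination loop.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

def rkse_sketch(X, k=3, threshold=0.4, max_iter=20):
    """X: samples x features.  Returns indices of the selected features."""
    selected = np.arange(X.shape[1])
    for _ in range(max_iter):
        if len(selected) <= k:
            break
        F = X[:, selected].T                       # one row per feature
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(F)
        sil = silhouette_samples(F, labels)
        keep = []
        for c in range(k):
            idx = np.where(labels == c)[0]
            best = idx[np.argmax(sil[idx])]        # cluster representative
            keep.extend(i for i in idx if sil[i] >= threshold or i == best)
        keep = np.array(sorted(set(keep)))
        if len(keep) == len(selected):             # nothing eliminated: stop
            break
        selected = selected[keep]
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                               # e.g. 12 sensor channels
X[:, 6:] = X[:, :6] + 0.05 * rng.normal(size=(200, 6))       # redundant copies
print("selected features:", rkse_sketch(X))
```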


2020
Author(s):
Ahmed Abdelmoaty
Wessam Mesbah
Mohammad A. M. Abdel-Aal
Ali T. Alawami

In the recent electricity market framework, the profit of generation companies depends on the operator's decision on the schedule of their units, the energy price, and the optimal bidding strategies. Due to the expanded integration of uncertain, highly intermittent renewable generators such as wind plants, coordination with other facilities to mitigate the risk of imbalances is mandatory. Accordingly, coordination of wind generators with emerging Electric Vehicles (EVs) is expected to boost the performance of the grid. In this paper, we propose a robust optimization approach for the coordination between wind-thermal generators and EVs in a virtual power plant (VPP) environment. The objective of maximizing the profit of the VPP Operator (VPPO) is studied. The optimal bidding strategy of the VPPO in the day-ahead market under uncertainties of wind power, energy prices, imbalance prices, and demand is obtained for the worst-case scenario. A case study is conducted to assess the effectiveness of the proposed model in terms of the VPPO's profit, and a comparison between the proposed model and scenario-based optimization is presented. Our results confirm that, despite the conservative behavior of the worst-case robust optimization model, it protects the decision maker from the fluctuations of the uncertain parameters involved in the production and bidding processes. In addition, robust optimization is a more tractable problem and does not suffer from the high computational burden associated with scenario-based stochastic programming, making it more practical for real-life scenarios.
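The conservatism trade-off mentioned in the results can be illustrated with a one-period toy example, not the paper's VPP model: a day-ahead bid is chosen when delivered wind is uncertain and shortfalls pay an imbalance penalty, and the worst-case (robust) bid is compared with the expected-value (scenario-based) bid. All prices, scenarios, and probabilities are hypothetical.

```python
# Robust vs scenario-based day-ahead bid for an uncertain wind producer (toy example).
import numpy as np

price, penalty = 50.0, 80.0                      # $/MWh energy price and imbalance penalty
wind_scenarios = np.array([20.0, 35.0, 50.0])    # possible delivered wind (MWh)
probs = np.array([0.2, 0.5, 0.3])

def profit(bid, wind):
    delivered = np.minimum(bid, wind)
    shortfall = np.maximum(bid - wind, 0.0)
    return price * delivered - penalty * shortfall

bids = np.linspace(0, 60, 241)
worst_case = np.array([profit(b, wind_scenarios).min() for b in bids])
expected = np.array([(profit(b, wind_scenarios) * probs).sum() for b in bids])

robust_bid = bids[np.argmax(worst_case)]         # maximize the worst-case profit
stochastic_bid = bids[np.argmax(expected)]       # maximize the expected profit
print(f"robust bid: {robust_bid:.2f} MWh, scenario-based bid: {stochastic_bid:.2f} MWh")
# The robust bid never exceeds the lowest wind scenario, i.e. it is more
# conservative, which is the behavior discussed in the abstract.
```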

