A Proposed Extended Version of the Hadi-Vencheh Model to Improve Multiple-Criteria ABC Inventory Classification

2020 ◽  
Vol 10 (22) ◽  
pp. 8233
Author(s):  
Pei-Chun Lin ◽  
Hung-Chieh Chang

Most current classification models approach the ABC classification problem as a ranking problem; that is, a group of inventory items is ranked in descending order of its overall weighted score across criteria. In this paper, we present an extended version of the Hadi-Vencheh model for multiple-criteria ABC inventory classification. The proposed model is based on the nonlinear weighted product method (WPM), which determines a common set of weights for all items. Our proposed nonlinear WPM incorporates multiple criteria with different measurement units without converting the performance of each inventory item in terms of each criterion into a normalized attribute value, thereby providing an improvement over the model proposed by Hadi-Vencheh. Our study incorporates various criteria for ABC classification and demonstrates an efficient algorithm for solving nonlinear programming problems in which the feasible solution set does not have to be convex. The algorithm presented in this study substantially improves the solution efficiency of the canonical coordinates method (CCM) algorithm when applied to large-scale nonlinear programming problems. The modified algorithm was tested to compare the results of our proposed model to those derived using the Hadi-Vencheh model and to demonstrate the algorithm's efficacy. The practical objective of the study was to develop an efficient nonlinear optimization solver by optimizing the quality of existing solutions, thus improving time and space efficiency.
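To make the scoring step concrete, the sketch below computes weighted product scores for a toy set of items; the item data, criteria, and weight vector are illustrative assumptions, and the paper's nonlinear program for deriving the common weights is not reproduced here.

```python
import numpy as np

def wpm_scores(X, w):
    """Weighted product method scores.
    X: (n_items, n_criteria) positive criterion values in native units.
    w: (n_criteria,) nonnegative weights summing to 1.
    Returns the overall weighted product score of each item."""
    return np.prod(X ** w, axis=1)

# Assumed criteria: annual dollar usage, lead time (days), criticality rating.
X = np.array([[5200.0,  7.0, 3.0],
              [ 980.0,  2.0, 1.0],
              [3100.0, 14.0, 2.0]])
w = np.array([0.5, 0.3, 0.2])          # placeholder common weights
order = np.argsort(-wpm_scores(X, w))  # items in descending score order
print(order)                           # indices from class A downward
```

Because the weighted product aggregates multiplicatively, rescaling any criterion to a different unit multiplies every score by the same constant and leaves the ranking unchanged, which is consistent with the abstract's claim that no normalization step is needed.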


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
S. M. Hatefi ◽  
S. A. Torabi

Organizations typically employ the ABC inventory classification technique to exercise efficient control over a huge number of inventory items. The ABC inventory classification problem is the classification of a large number of items into three groups: A, very important; B, moderately important; and C, relatively unimportant. The traditional ABC classification accounts for only one criterion, namely the annual dollar usage of the items, but in the real world there are other important criteria that strongly affect the ABC classification. This paper proposes a novel methodology based on a common-weight linear optimization model to solve the multiple-criteria inventory classification problem. The proposed methodology enables the classification of inventory items via a set of common weights, which is essential for a fair classification. It offers remarkable computational savings compared with existing approaches, needs no subjective information, and is easy for managers to apply. The proposed model is applied to an illustrative example and a case study taken from the literature. Both the numerical results and qualitative comparisons with existing methods reveal several merits of the proposed approach for ABC analysis.
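As a rough illustration of the common-weight idea, the sketch below solves a small linear program that picks one weight vector to score all items at once; the normalized data, objective, and constraints are assumptions in the spirit of the abstract, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed normalized criterion values: rows are items, columns are criteria.
Y = np.array([[0.9, 0.4, 0.7],
              [0.3, 1.0, 0.2],
              [0.6, 0.5, 0.9]])
n, m = Y.shape
eps = 1e-3                       # keep every criterion weight strictly positive

# Maximize the total weighted score subject to each item's score being <= 1,
# so a single (common) weight vector scores every item comparably.
res = linprog(c=-Y.sum(axis=0),
              A_ub=Y, b_ub=np.ones(n),
              bounds=[(eps, None)] * m)
w = res.x
scores = Y @ w                   # common-weight score of each item
ranking = np.argsort(-scores)    # A items first, then B, then C
print(w, ranking)
```

Because every item is scored with the same weights, the resulting ranking cannot favor one item by tailoring weights to it, which is the fairness property the abstract emphasizes.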


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems, using only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method is based on comparing the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method, with the preference relation replaced by a special domination characteristic. The expected quality of contributors is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of contributors in a community tagging system (provided that contributor behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
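The eigenvector step can be illustrated with a short power-iteration sketch; the domination matrix below is made up, and the paper-specific construction of the domination characteristic from shared-image tag sets is omitted.

```python
import numpy as np

def perron_vector(D, iters=1000, tol=1e-10):
    """Positive (Perron) eigenvector of a positive matrix via power iteration.
    D[i, j] is assumed to measure how strongly contributor i's tag sets
    dominate contributor j's on the images they both tagged."""
    q = np.ones(D.shape[0]) / D.shape[0]
    for _ in range(iters):
        q_next = D @ q
        q_next /= q_next.sum()          # normalize to a quality distribution
        if np.abs(q_next - q).max() < tol:
            break
        q = q_next
    return q

D = np.array([[1.0, 2.0, 0.50],         # assumed domination characteristics
              [0.5, 1.0, 0.25],
              [2.0, 4.0, 1.00]])
print(perron_vector(D))                 # higher value = higher expected quality
```

For a matrix with strictly positive entries, the Perron-Frobenius theorem guarantees this iteration converges to a unique positive eigenvector, so the quality estimates are well defined.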


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly helpful to the healthcare and biomedical sectors for predicting disease. Even for trivial symptoms, it is difficult to meet a doctor in the hospital at any time, so big data provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model relies on structured input, which requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep learning concept. Datasets for diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, each dataset is normalized to bring the attribute ranges to a common scale. Then, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to emphasize large-scale deviations. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning algorithms, namely a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation of the proposed prediction approach against existing models confirms its effectiveness across various performance measures.
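The first two phases can be illustrated with a minimal sketch; the patient records and the weight vector standing in for the JA-MVO output are placeholders.

```python
import numpy as np

def normalize(X):
    """Min-max normalization per attribute (phase a)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + 1e-12)   # guard against constant columns

def weighted_features(X, w):
    """Weighted normalized feature extraction (phase b):
    each normalized attribute is scaled by its weight."""
    return normalize(X) * w

X = np.array([[140.0, 80.0, 6.1],         # assumed patient records
              [120.0, 70.0, 5.4],
              [160.0, 95.0, 7.8]])
w = np.array([1.4, 0.8, 2.1])             # placeholder for the JA-MVO weights
print(weighted_features(X, w))            # features passed on to the predictor
```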


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1670
Author(s):  
Waheeb Abu-Ulbeh ◽  
Maryam Altalhi ◽  
Laith Abualigah ◽  
Abdulwahab Ali Almazroi ◽  
Putra Sumari ◽  
...  

Cyberstalking is a growing anti-social problem that is being transformed on a large scale and in various forms. Cyberstalking detection has become increasingly popular in recent years and has been technically investigated by many researchers. However, cyberstalking victimization, an essential part of cyberstalking, has empirically received less attention from the research community. This paper attempts to address this gap and develops a model to understand and estimate the prevalence of cyberstalking victimization. The model is built on routine activities and lifestyle exposure theories and includes eight hypotheses. Data were collected from 757 respondents at Jordanian universities. The study takes a quantitative approach and uses structural equation modeling for data analysis. The results revealed a modest prevalence range that depends on the type of cyberstalking. The results also indicated that proximity to motivated offenders, suitable targets, and digital guardians significantly influence cyberstalking victimization. The outcome of the moderation hypothesis testing demonstrated that age and residence have a significant effect on cyberstalking victimization. The proposed model is an essential element for assessing cyberstalking victimization in societies, providing a valuable understanding of the prevalence of cyberstalking victimization. This can assist researchers and practitioners in future research on cyberstalking victimization.


Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

With the rapid development of hospital informatization and Internet medical services in recent years, most hospitals have launched online appointment registration systems to eliminate patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and have no idea how to choose a department when registering. To guide patients in seeking medical care and registering effectively, we propose CIDRS, an intelligent self-diagnosis and department recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in the cloud computing environment. We also establish a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus. This model is used to optimize the self-diagnosis and department recommendation tasks. To overcome the limited computing power of terminals, we deploy the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results show that the proposed model outperforms traditional deep learning models and other pre-trained language models.
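A minimal sketch of department recommendation as sequence classification is shown below; it substitutes the public bert-base-chinese checkpoint for the paper's CHMBERT and uses an assumed department list, so the freshly initialized classification head would need fine-tuning on real complaint-department pairs before its predictions mean anything.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

DEPARTMENTS = ["内科", "外科", "儿科", "皮肤科"]        # assumed label set

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(DEPARTMENTS))   # stand-in for CHMBERT
model.eval()

complaint = "最近咳嗽发烧三天，伴有咽痛"                  # patient's self-report
inputs = tokenizer(complaint, return_tensors="pt",
                   truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits                     # (1, num_labels)
print(DEPARTMENTS[logits.argmax(dim=-1).item()])        # recommended department
```

In a deployment like the one described, this inference step would sit behind a containerized micro-service so that the heavy model runs in the cloud rather than on the patient's terminal.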


2010 ◽  
Vol 23 (12) ◽  
pp. 3157-3180 ◽  
Author(s):  
N. Eckert ◽  
H. Baya ◽  
M. Deschatres

Snow avalanches are natural hazards strongly controlled by the mountain winter climate, but their recent response to climate change has thus far been poorly documented. In this paper, hierarchical modeling is used to obtain robust indexes of the annual fluctuations of runout altitudes. The proposed model includes a possible level shift and distinguishes common large-scale signals in both mean- and high-magnitude events from the interannual variability. Application to the data available in France over the last 61 winters shows that the mean runout altitude is no different now than it was 60 yr ago, but that snow avalanches have been retreating since 1977. This trend is of particular note for high-magnitude events, which have seen their probability rates halved, a crucial result in terms of hazard assessment. Avalanche control measures, observation errors, and model limitations are insufficient explanations for these trends. On the other hand, strong similarities in the pattern of behavior of the proposed runout indexes and several climate datasets are shown, as well as a consistent evolution of the preferred flow regime. The proposed runout indexes may therefore be usable as indicators of climate change at high altitudes.
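On a synthetic series, the level-shift component can be illustrated by scanning for the change point that minimizes squared error; this toy sketch ignores the hierarchical structure and the separate treatment of high-magnitude events in the paper's full Bayesian model.

```python
import numpy as np

# Synthetic annual runout index with an assumed step change in 1977.
rng = np.random.default_rng(0)
years = np.arange(1949, 2010)
runout = np.where(years < 1977, 0.0, -0.15) + rng.normal(0, 0.3, years.size)

def fit_shift(y):
    """Find the single change point minimizing the two-segment squared error."""
    best = None
    for k in range(5, y.size - 5):                      # candidate change points
        sse = (((y[:k] - y[:k].mean()) ** 2).sum()
               + ((y[k:] - y[k:].mean()) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, k, y[:k].mean(), y[k:].mean())
    return best

sse, k, before, after = fit_shift(runout)
print(years[k], before, after)       # estimated shift winter and segment levels
```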


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 79 ◽  
Author(s):  
Xiaoyu Han ◽  
Yue Zhang ◽  
Wenkai Zhang ◽  
Tinglei Huang

Relation extraction is a vital task in natural language processing. It aims to identify the relationship between two specified entities in a sentence. Besides the information contained in the sentence itself, additional information about the entities has been verified to be helpful in relation extraction. Additional information such as entity types obtained by NER (Named Entity Recognition) and descriptions provided by a knowledge base both have their limitations. Nevertheless, in Chinese relation extraction there exists another way to provide additional information that can overcome these limitations, since Chinese characters usually have explicit meanings and can carry more information than English letters. We suggest that the characters that constitute the entities can provide additional information that is helpful for the relation extraction task, especially on large-scale datasets. This assumption has never been verified before, and the main obstacle has been the lack of large-scale Chinese relation datasets. In this paper, we first generate a large-scale Chinese relation extraction dataset based on a Chinese encyclopedia. Second, we propose an attention-based model that uses the characters composing the entities. The results on the generated dataset show that these characters provide useful information for the Chinese relation extraction task. Using this information, the attention mechanism can recognize the crucial part of the sentence that expresses the relation. The proposed model outperforms other baseline models on our Chinese relation extraction dataset.
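A minimal sketch of the idea, with placeholder dimensions and random data: the characters of the two entities form an attention query over the sentence characters, and the attended representation is classified into a relation. This is an assumed simplification, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CharEntityAttention(nn.Module):
    def __init__(self, vocab_size=5000, dim=64, n_relations=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, n_relations)

    def forward(self, sent_ids, ent_char_ids):
        sent = self.embed(sent_ids)                       # (B, L, d) sentence
        query = self.embed(ent_char_ids).mean(dim=1)      # (B, d) entity chars
        att = torch.softmax(
            (sent @ query.unsqueeze(-1)).squeeze(-1), dim=-1)  # (B, L) weights
        context = (att.unsqueeze(-1) * sent).sum(dim=1)   # attended sentence
        return self.out(context)                          # relation logits

model = CharEntityAttention()
sent = torch.randint(0, 5000, (2, 30))     # batch of 2 sentences, 30 characters
ents = torch.randint(0, 5000, (2, 6))      # characters of both entities
print(model(sent, ents).shape)             # torch.Size([2, 10])
```

The attention weights over sentence positions are exactly what lets the model point to "the crucial part of the sentence that expresses the relation".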


Author(s):  
Young Hyun Kim ◽  
Eun-Gyu Ha ◽  
Kug Jin Jeon ◽  
Chena Lee ◽  
Sang-Sun Han

Objectives: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) dataset. Methods: In total, 2,760 DPRs were collected from 746 subjects who had 2 to 17 DPRs each, with various changes in image characteristics due to dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics, or tooth development). The test dataset included the latest DPR of each subject (746 images), and the other DPRs (2,014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, rank-3, and rank-5 accuracies, running time, and gradient-weighted class activation mapping (Grad-CAM)-applied images. Results: The model achieved rank-1, rank-3, and rank-5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. All rank-1 accuracy values of the proposed model were above 80% regardless of changes in image characteristics. The average running time to train the proposed model was 60.9 sec per epoch, and the prediction time for the 746 test DPRs was short (3.2 sec/image). The Grad-CAM technique verified that the model automatically identified humans by focusing on identifiable dental information. Conclusion: The proposed model showed good performance in fully automatic human identification despite the differing image characteristics of DPRs acquired from the same patients. Our model is expected to assist experts in fast and accurate identification by comparing large numbers of images and proposing identification candidates at high speed.
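A minimal sketch of the described modification in PyTorch; the hidden size of the replacement head is an assumption, since the abstract does not specify it.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SUBJECTS = 746                               # one identity class per subject

vgg = models.vgg16(weights="IMAGENET1K_V1")      # pretrained VGG16 backbone
vgg.classifier = nn.Sequential(                  # replacement: two FC layers
    nn.Linear(512 * 7 * 7, 4096),                # hidden size assumed
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(4096, NUM_SUBJECTS),
)

x = torch.randn(1, 3, 224, 224)                  # a DPR resized to 224x224
logits = vgg(x)                                  # (1, 746) identity scores
print(logits.topk(5).indices)                    # rank-5 candidate subjects
```

Reporting the top-k indices mirrors how the study evaluates rank-1, rank-3, and rank-5 accuracy: identification counts as correct if the true subject appears among the top k candidates.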

