Classification Performance for Making Decisions about Products Missing from the Shelf

2011, Vol 2011, pp. 1-13
Author(s): Dimitris Papakiriakopoulos, Georgios Doukidis

The out-of-shelf problem is among the most important retail problems. This work employs two different classification algorithms, C4.5 and naïve Bayes, to build a mechanism that decides whether or not a product is available on a retail store shelf. Using the same classification methods and feature spaces, we examined the classification performance of the algorithms in four different retail chains and used ROC curves and the area under the curve (AUC) measure to compare predictive accuracy. Based on the results obtained for the different retail chains, we identified approaches for developing and introducing such a mechanism in different retail contexts.
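
A minimal sketch of the kind of comparison the abstract describes, using scikit-learn's CART decision tree as a stand-in for C4.5 and Gaussian naïve Bayes; the synthetic per-product features and all parameters are illustrative assumptions, not the authors' data or pipeline.

```python
# Hedged sketch: compare a decision tree (CART, standing in for C4.5) and
# naive Bayes by ROC AUC on a synthetic out-of-shelf detection task.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for per-product features (e.g. recent sales velocity,
# days since last sale); label 1 = product missing from the shelf.
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("C4.5-like decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]   # probability that the item is missing
    print(f"{name}: AUC = {roc_auc_score(y_te, scores):.3f}")
```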

2021, Vol 15 (1), pp. 66-75
Author(s): Loránd Szabó, Dávid Abriha, Kwanele Phinzi, Szilárd Szabó

In this study, imagery from two high-resolution satellites, PlanetScope and SkySat, was compared in terms of its ability to classify urban vegetation. We applied Random Forest and Support Vector Machine classification methods to a study area in the center of Rome, Italy. We performed the classifications based on the spectral bands alone and then additionally included the NDVI. We evaluated the classification performance of the classifiers for the different sets of input data using ROC curves and AUC values. Additional statistical analyses were applied to reveal the correlation structure of the satellite bands and the NDVI, and General Linear Modeling was used to evaluate the AUC of the different models. Although the different classification methods did not produce significantly different outcomes (AUC values between 0.96 and 0.99), SVM performed better. The contribution of NDVI resulted in significantly higher AUC values. SkySat's bands provided slightly better input data relative to PlanetScope, but the difference was minimal (~3%); accordingly, both satellites ensured excellent classification results.
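
A rough sketch of the band-plus-NDVI workflow described above, with synthetic pixel values standing in for the PlanetScope/SkySat bands; the band layout, labels, and parameters are assumptions for illustration only.

```python
# Hedged sketch: append NDVI to the spectral bands, then compare Random Forest
# and SVM by AUC, mirroring the study's workflow (all data here is synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
bands = rng.uniform(0.0, 1.0, size=(n, 4))           # assumed blue, green, red, NIR
labels = (bands[:, 3] - bands[:, 2] + rng.normal(0, 0.15, n) > 0.1).astype(int)

ndvi = (bands[:, 3] - bands[:, 2]) / (bands[:, 3] + bands[:, 2] + 1e-9)
X = np.column_stack([bands, ndvi])                    # spectral bands + NDVI

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)
for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf", probability=True, random_state=0))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```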


Entropy, 2019, Vol 21 (8), pp. 721
Author(s): YuGuang Long, LiMin Wang, MingHui Sun

Due to the simplicity and competitive classification performance of naive Bayes (NB), researchers have proposed many approaches to improve NB by weakening its attribute independence assumption. Theoretical analysis based on Kullback–Leibler divergence shows that the difference between NB and its variants lies in the different orders of conditional mutual information represented by the augmenting edges in the tree-shaped network structure. In this paper, we propose to relax the independence assumption by further generalizing tree-augmented naive Bayes (TAN) from 1-dependence Bayesian network classifiers (BNC) to arbitrary k-dependence. Sub-models of TAN are built to represent specific conditional dependence relationships, so that one of them may "best match" the conditional probability distribution over the training data. Extensive experimental results reveal that the proposed algorithm achieves a bias-variance trade-off and substantially better generalization performance than state-of-the-art classifiers such as logistic regression.
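
A small sketch of the conditional mutual information I(X_i; X_j | C) that TAN-style structure learning ranks augmenting edges by; the counting-based estimator and the toy discrete data are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: empirical conditional mutual information I(Xi; Xj | C),
# the edge weight used when augmenting a naive Bayes structure TAN-style.
import numpy as np
from collections import Counter

def cond_mutual_info(xi, xj, c):
    """Estimate I(Xi; Xj | C) from counts over discrete arrays."""
    n = len(c)
    joint = Counter(zip(xi, xj, c))
    p_ic, p_jc, p_c = Counter(zip(xi, c)), Counter(zip(xj, c)), Counter(c)
    mi = 0.0
    for (a, b, k), cnt in joint.items():
        p_abk = cnt / n
        mi += p_abk * np.log2(p_abk * (p_c[k] / n)
                              / ((p_ic[(a, k)] / n) * (p_jc[(b, k)] / n)))
    return mi

# Toy binary data: X2 is a noisy copy of X1, X3 is independent noise.
rng = np.random.default_rng(0)
C = rng.integers(0, 2, 1000)
X1 = (C + rng.integers(0, 2, 1000)) % 2
X2 = (X1 + (rng.random(1000) < 0.2).astype(int)) % 2
X3 = rng.integers(0, 2, 1000)

print("I(X1;X2|C) =", round(cond_mutual_info(X1, X2, C), 3))   # relatively large
print("I(X1;X3|C) =", round(cond_mutual_info(X1, X3, C), 3))   # close to zero
```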


2014, Vol 2014, pp. 1-16
Author(s): Qingchao Liu, Jian Lu, Shuyan Chen, Kangjia Zhao

This study presents the applicability of a Naïve Bayes classifier ensemble for traffic incident detection. The standard Naïve Bayes (NB) has been applied to traffic incident detection and has achieved good results. However, the detection result of a practically implemented NB depends on the choice of an optimal threshold, which is determined mathematically using Bayesian concepts in the incident-detection process. To avoid the burden of choosing the optimal threshold and tuning the parameters, and to improve the limited classification performance of the NB and enhance detection performance, we propose an NB classifier ensemble for incident detection. In addition, we propose combining Naïve Bayes and a decision tree (NBTree) to detect incidents. In this paper, we discuss extensive experiments performed to evaluate the performance of three algorithms: standard NB, the NB ensemble, and NBTree. The experimental results indicate that the five combination rules of the NB classifier ensemble perform significantly better than standard NB and slightly better than NBTree on some indicators. More importantly, the performance of the NB classifier ensemble is very stable.
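
A hedged sketch of the ensemble idea described above: several naïve Bayes members trained on bootstrap samples and combined by two simple rules (probability averaging and majority vote). The synthetic data and the number of members are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: NB ensemble over bootstrap samples with two combination rules.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Rare "incident" class, mimicking the imbalance typical of incident detection.
X, y = make_classification(n_samples=3000, n_features=10, weights=[0.95, 0.05],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

rng = np.random.default_rng(1)
members = []
for _ in range(15):                                   # 15 bootstrap-trained members
    idx = rng.integers(0, len(X_tr), len(X_tr))
    members.append(GaussianNB().fit(X_tr[idx], y_tr[idx]))

probs = np.stack([m.predict_proba(X_te)[:, 1] for m in members])
avg_rule = (probs.mean(axis=0) > 0.5).astype(int)                 # averaging rule
vote_rule = (np.stack([m.predict(X_te) for m in members]).mean(axis=0) > 0.5).astype(int)

print("averaging rule accuracy:", (avg_rule == y_te).mean())
print("majority vote accuracy:", (vote_rule == y_te).mean())
```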


2018, Vol 41 (05), pp. 544-549
Author(s): Ladina Vonzun, Franziska Maria Winder, Martin Meuli, Ueli Moerlen, Luca Mazzone, ...

Abstract Purpose The aim of this study was to describe the sonographic evolution of fetal head circumference (HC) and width of the posterior horn of the lateral ventricle (Vp) after open fetal myelomeningocele (fMMC) repair and to assess whether pre- or postoperative measurements are helpful in predicting the need for shunting during the first year of life. Patients & Methods All 30 children older than one year by January 2017 who previously had fMMC repair at the Zurich Center for Fetal Diagnosis and Therapy were included. The sonographic evolution of fetal HC and Vp before and after fMMC repair was assessed and compared between the non-shunted (N = 16) and the shunted group (N = 14). ROC curves were generated for the fetal HC Z-score and Vp to show their predictive accuracy for the need for shunting until 1 year of age. Results HC was not an independent factor for predicting shunting. However, the need for shunting was directly dependent on the preoperative Vp as well as the Vp before delivery. A Vp > 10 mm at evaluation for fMMC repair or > 15 mm before delivery identifies 100% of the infants needing shunt placement, at false-positive rates of 44% and 25%, respectively. All fetuses with a Vp > 15 mm at first evaluation received a shunt. Conclusion Fetuses demonstrating a Vp > 15 mm before in utero MMC repair are extremely likely to develop hydrocephalus requiring a shunt during the first year of life. This compelling piece of evidence must be appropriately integrated into prenatal counseling.
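
For readers unfamiliar with the trade-off the ROC analysis above summarizes, the toy sketch below shows how a single Vp cut-off translates into a sensitivity and a false-positive rate; all numbers are fabricated for illustration and are not patient data.

```python
# Hedged sketch with fabricated Vp values: sensitivity and false-positive rate
# implied by a single cut-off, the quantities an ROC curve trades off.
import numpy as np

rng = np.random.default_rng(2)
vp_shunted = rng.normal(16, 3, 14)        # fabricated Vp (mm), shunted group
vp_not_shunted = rng.normal(10, 3, 16)    # fabricated Vp (mm), non-shunted group

def rates(threshold_mm):
    sensitivity = (vp_shunted > threshold_mm).mean()    # shunted correctly flagged
    fpr = (vp_not_shunted > threshold_mm).mean()        # non-shunted wrongly flagged
    return sensitivity, fpr

for t in (10, 12, 15):
    s, f = rates(t)
    print(f"Vp > {t} mm: sensitivity {s:.0%}, false-positive rate {f:.0%}")
```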


2019
Author(s): Yue Zhang, Yuan Nie, Si-Zhe Wan, Cong Liu, Xuan Zhu

Abstract Background The prediction of prognosis is an important part of management in decompensated cirrhosis (DeCi) patients, who have high long-term mortality. Lactate is a known predictor of outcome in critically ill patients. The aim of this study was to assess the prognostic value of lactate in DeCi patients. Methods We performed a single-center, observational, retrospective study of 456 DeCi patients extracted from hospitalization records. Univariate and multivariate analyses were used to determine whether lactate was independently associated with the prognosis of DeCi patients. The AUROC was calculated to assess predictive accuracy compared with existing scores. Results The serum lactate level was significantly higher in nonsurviving patients than in surviving patients. Univariate and multivariate analyses demonstrated that lactate was an independent risk factor for 6-month mortality (odds ratio: 1.412, P = 0.001). ROC curves were drawn to evaluate the prediction efficiency of lactate for 6-month mortality (AUROC: 0.716, P < 0.001). Based on our patient cohort, the new scores (MELD + lactate score, Child-Pugh + lactate score) had good accuracy for predicting 6-month mortality (AUROC = 0.769, P < 0.001; AUROC = 0.766, P < 0.001). Additionally, the performance of the new scores was superior to that of the existing scores (all P < 0.001). Conclusion Serum lactate at admission may be useful for predicting 6-month mortality in DeCi patients, and the predictive value of the MELD score and Child-Pugh score was improved by incorporating lactate. Lactate should be part of rapid diagnosis and early initiation of therapy to improve clinical outcomes.
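
A minimal sketch of the kind of comparison reported above: the AUROC of a baseline score alone versus the score combined with admission lactate in a logistic model. The synthetic cohort, coefficients, and variable names are assumptions, not the study's data.

```python
# Hedged sketch on fabricated data: does adding lactate to a baseline score
# (a stand-in for MELD) improve the AUROC for 6-month mortality?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 456
meld = rng.normal(18, 6, n)                      # fabricated baseline score
lactate = rng.lognormal(0.6, 0.4, n)             # fabricated admission lactate
logit = -6 + 0.18 * meld + 0.9 * lactate         # assumed true risk model
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

auc_score_alone = roc_auc_score(died, meld)
X = np.column_stack([meld, lactate])
combo = LogisticRegression().fit(X, died)
auc_combo = roc_auc_score(died, combo.predict_proba(X)[:, 1])
print(f"score alone AUROC: {auc_score_alone:.3f}   score + lactate AUROC: {auc_combo:.3f}")
```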


2020
Author(s): Harith Al-Sahaf, A Song, K Neshatian, Mengjie Zhang

Image classification is a complex but important task, especially in areas of machine vision and image analysis such as remote sensing and face recognition. One of the challenges in image classification is finding an optimal set of features for a particular task, because the choice of features has a direct impact on classification performance. However, the goodness of a feature is highly problem dependent, and domain knowledge is often required. To address these issues, we introduce a Genetic Programming (GP) based image classification method, Two-Tier GP, which operates directly on raw pixels rather than on features. The first tier in a classifier automatically defines features based on the raw image input, while the second tier makes the classification decision. Compared to conventional feature-based image classification methods, Two-Tier GP achieved better accuracies on a range of different tasks. Furthermore, by using the features defined by the first tier of these Two-Tier GP classifiers, conventional classification methods obtained higher accuracies than they did when classifying on manually designed features. Analysis of the evolved Two-Tier image classifiers shows that genuine features are captured in the programs and that the mechanism for achieving high accuracy can be revealed. The Two-Tier GP method has clear advantages in image classification, such as high accuracy, good interpretability, and the removal of the explicit feature extraction process. © 2012 IEEE.
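
Not the authors' evolved programs, but a hand-written stand-in for the two-tier structure described above: a feature-defining tier that aggregates raw pixel windows, feeding a decision tier that classifies. The window positions and threshold are assumptions for illustration.

```python
# Hedged sketch: a fixed two-tier "program" on raw pixels. In Two-Tier GP both
# tiers would be evolved; here they are hand-written to show the structure.
import numpy as np

def tier_one_features(image):
    """Feature-defining tier: aggregate raw pixel windows into scalar features."""
    background = image[:8, :].mean()       # mean intensity of the top rows
    centre = image[8:24, 8:24].mean()      # mean intensity of a central window
    return np.array([background, centre])

def tier_two_decision(features):
    """Decision tier: a simple comparison over the tier-one features."""
    return int(features[1] - features[0] > 0.1)

rng = np.random.default_rng(4)
image = rng.random((32, 32))
image[8:24, 8:24] += 0.5                   # synthetic bright object in the centre
image = np.clip(image, 0.0, 1.0)
print("predicted class:", tier_two_decision(tier_one_features(image)))
```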


Author(s): Muskan Patidar

Abstract: Social networking platforms have given us more opportunities than ever before, and their benefits are undeniable. Despite these benefits, people may be humiliated, insulted, bullied, and harassed by anonymous users, strangers, or peers. Cyberbullying refers to the use of technology to humiliate and slander other people. It takes the form of hate messages sent through social media and emails. With the exponential increase in social media users, cyberbullying has emerged as a form of bullying through electronic messages. We propose a possible solution to this problem: our project aims to detect cyberbullying in tweets using ML classification algorithms such as Naïve Bayes, KNN, Decision Tree, Random Forest, and Support Vector Machine, and we will also apply NLTK (Natural Language Toolkit) unigram, bigram, trigram, and n-gram features with Naïve Bayes to check its accuracy. Finally, we will compare the results of the proposed and baseline features with other machine learning algorithms. The findings of the comparison indicate the significance of the proposed features in cyberbullying detection. Keywords: Cyberbullying, Machine Learning Algorithms, Twitter, Natural Language Toolkit
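
A hedged sketch of the kind of n-gram naïve Bayes pipeline the project describes, here using scikit-learn's CountVectorizer rather than NLTK; the toy tweets and labels are made up for illustration.

```python
# Hedged sketch: unigram-to-trigram bag-of-words features feeding multinomial
# naive Bayes for cyberbullying detection (toy data, not the project's corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["you are so stupid and ugly", "nobody likes you loser",
          "great game last night friends", "happy birthday have a great day"]
labels = [1, 1, 0, 0]                                 # 1 = bullying, 0 = not

model = make_pipeline(CountVectorizer(ngram_range=(1, 3)),   # uni-, bi-, tri-grams
                      MultinomialNB())
model.fit(tweets, labels)
print(model.predict(["you are such a loser"]))        # -> [1] on this toy data
```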


Mekatronika, 2019, Vol 1 (2), pp. 115-121
Author(s): Asrul Adam, Ammar Faiz Zainal Abidin, Zulkifli Md Yusof, Norrima Mokhtar, Mohd Ibrahim Shapiai

In this paper, developments in EEG signal peak detection and classification methods based on time-domain analysis are discussed. The use of peak classification algorithms has become one of the most significant approaches in several applications. Generally, a peak detection and classification algorithm is the first step in detecting any event-related variation of the signals. A review of the variety of peak models, their respective classification methods, and their applications has been carried out. In addition, this paper also discusses the existing feature selection algorithms in the field of peak classification.
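
As a concrete example of the time-domain first step the review describes, the sketch below detects peaks in a synthetic signal and extracts simple per-peak features (amplitude, width); the sampling rate and thresholds are assumptions.

```python
# Hedged sketch: time-domain peak detection plus simple per-peak features,
# the step that precedes peak classification (synthetic signal, assumed values).
import numpy as np
from scipy.signal import find_peaks, peak_widths

fs = 256                                               # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(5)
signal = np.sin(2 * np.pi * 3 * t) + 0.2 * rng.normal(size=t.size)

peaks, props = find_peaks(signal, height=0.5, distance=fs // 10)
widths_s = peak_widths(signal, peaks, rel_height=0.5)[0] / fs   # widths in seconds

for idx, amp, w in zip(peaks, props["peak_heights"], widths_s):
    print(f"peak at {t[idx]:.3f} s   amplitude {amp:.2f}   width {w * 1000:.0f} ms")
```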

