Context Inference Engine (CiE)

Author(s):  
Umar Mahmud ◽  
Muhammad Younus Javed

Context Awareness is the ability of systems and applications to sense the environment and infer the activity going on in it. Context encompasses all knowledge bounded within an environment and includes attributes of both machines and users. A context-aware system is composed of context gathering and context inference modules. This chapter proposes a Context Inference Engine (CiE) that classifies the current context as one of several known context activities. The engine follows a Minkowski distance-based classification approach with standard deviation-based ranks that indicate the likelihood of the classified activity for the current context. Empirical results on different data sets show that the proposed algorithm performs close to Support Vector Machines (SVM) and outperforms probabilistic reasoning methods, where performance is quantified as classification success.

Author(s):  
Umar Mahmud ◽  
Mohammed Younus Javed

Context Awareness is the task of inferring activity from contextual data acquired through sensors present in the environment. ‘Context’ encompasses all knowledge bounded by a scope and includes attributes of machines and users. A general context-aware system is composed of context gathering and context inference modules. This paper proposes a Context Inference Engine (CiE) that classifies the current context as one of several recorded context activities. The engine follows a distance-measure-based classification approach with standard deviation-based ranks to identify likely activities. The paper presents the algorithm and some results of the context classification process.
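
Neither abstract gives implementation details, but the core idea described above (match the current context against recorded activity profiles by Minkowski distance, then use a standard-deviation-derived rank to indicate how likely each candidate is) can be sketched roughly as follows. This is a minimal illustration in Python/NumPy; the feature encoding, the choice of the Minkowski order p, and the exact ranking rule are assumptions, not the authors' published algorithm.

```python
import numpy as np

def classify_context(current, activity_profiles, p=2):
    """Hypothetical sketch: match a numeric context vector against recorded activities.

    activity_profiles maps an activity name to an array of past context vectors (rows).
    The activity whose mean profile has the smallest Minkowski distance wins; a
    standard-deviation-based rank indicates how 'likely' each candidate is.
    """
    scores = {}
    for activity, samples in activity_profiles.items():
        mean = samples.mean(axis=0)
        std = samples.std(axis=0) + 1e-9                 # avoid division by zero
        dist = np.sum(np.abs(current - mean) ** p) ** (1.0 / p)
        # rank: average number of standard deviations the current context
        # lies from the recorded mean (assumed ranking rule)
        rank = np.mean(np.abs(current - mean) / std)
        scores[activity] = (dist, rank)
    best = min(scores, key=lambda a: scores[a][0])
    return best, scores

# toy usage: two recorded activities described by three context attributes
profiles = {
    "meeting": np.array([[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]]),
    "idle":    np.array([[0.1, 0.0, 0.2], [0.2, 0.1, 0.1]]),
}
print(classify_context(np.array([0.85, 0.15, 0.85]), profiles, p=2))
```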


2012 ◽  
Vol 24 (4) ◽  
pp. 1047-1084 ◽  
Author(s):  
Xiao-Tong Yuan ◽  
Shuicheng Yan

We investigate Newton-type optimization methods for solving piecewise linear systems (PLSs) with a nondegenerate coefficient matrix. Such systems arise, for example, from the numerical solution of the linear complementarity problem, which is useful for modeling several learning and optimization problems. In this letter, we propose an effective damped Newton method, PLS-DN, to find the exact (up to machine precision) solution of nondegenerate PLSs. PLS-DN exhibits a provable semi-iterative property, that is, the algorithm converges globally to the exact solution in a finite number of iterations. The rate of convergence is shown to be at least linear before termination. We emphasize the applications of our method in modeling, from a novel perspective of PLSs, statistical learning problems such as box-constrained least squares, elitist Lasso (Kowalski & Torrésani, 2008), and support vector machines (Cortes & Vapnik, 1995). Numerical results on synthetic and benchmark data sets are presented to demonstrate the effectiveness and efficiency of PLS-DN on these problems.
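
As an illustration of the kind of iteration the letter describes, below is a minimal damped (semismooth) Newton sketch for a piecewise linear system written in the canonical form Ax + B|x| = c. It is not the authors' PLS-DN implementation: the residual-based backtracking rule and the choice of generalized Jacobian element (sign(0) = 0) are assumptions made for illustration.

```python
import numpy as np

def pls_damped_newton(A, B, c, x0=None, tol=1e-12, max_iter=100):
    """Damped semismooth Newton sketch for the piecewise linear system A x + B |x| = c."""
    n = len(c)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = A @ x + B @ np.abs(x) - c                  # residual of the PLS
        if np.linalg.norm(r) < tol:
            break
        D = np.diag(np.sign(x))                        # one element of the generalized Jacobian of |x|
        step = np.linalg.solve(A + B @ D, -r)          # Newton direction
        t = 1.0
        # damping: simple backtracking on the residual norm
        while t > 1e-8 and np.linalg.norm(
                A @ (x + t * step) + B @ np.abs(x + t * step) - c) >= np.linalg.norm(r):
            t *= 0.5
        x = x + t * step
    return x
```

Box-constrained least squares and the SVM dual can be recast in this Ax + B|x| = c form, which is the modeling perspective the letter emphasizes.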


Author(s):  
Melih S. Aslan ◽  
Hossam Abd El Munim ◽  
Aly A. Farag ◽  
Mohamed Abou El-Ghar

Graft failure of kidneys after transplantation is most often the consequence of acute rejection. Hence, early detection of kidney rejection is important for the treatment of renal diseases. In this chapter, the authors introduce a new automatic approach to distinguish normal kidney function from kidney rejection using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The kidney has three regions, named the cortex, medulla, and pelvis. In their experiments, the authors use the medulla region because its specific responses to DCE-MRI are helpful in identifying kidney rejection. In their process, they segment the kidney using the level set method and then employ several classification methods, namely the Euclidean distance, the Mahalanobis distance, and least squares support vector machines (LS-SVM). The authors' preliminary results are very encouraging, and reproducibility was achieved on 55 clinical data sets. The classification accuracy, diagnostic sensitivity, and diagnostic specificity are 84%, 75%, and 96%, respectively.
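
The distance-based part of the classification stage can be illustrated with a short sketch: a new case is assigned to the class whose mean medulla time-intensity signal is nearest under a Euclidean or Mahalanobis metric. The feature layout and class statistics below are hypothetical placeholders, not the authors' data, and the LS-SVM classifier is outside the scope of this sketch.

```python
import numpy as np

def nearest_class(signal, class_means, class_cov):
    """Assign a medulla time-intensity signal to the nearest class.

    class_means: dict mapping class name -> mean signal vector
    class_cov:   pooled covariance matrix used for the Mahalanobis metric
    """
    cov_inv = np.linalg.inv(class_cov)
    results = {}
    for label, mu in class_means.items():
        diff = signal - mu
        results[label] = {
            "euclidean": float(np.linalg.norm(diff)),
            "mahalanobis": float(np.sqrt(diff @ cov_inv @ diff)),
        }
    # decide using the Mahalanobis distance (Euclidean kept for comparison)
    return min(results, key=lambda c: results[c]["mahalanobis"]), results

# toy usage with hypothetical 4-point signals and an identity covariance
means = {"normal": np.array([1.0, 2.0, 1.5, 1.0]),
         "rejection": np.array([0.5, 0.8, 0.7, 0.6])}
print(nearest_class(np.array([0.9, 1.8, 1.4, 1.1]), means, np.eye(4)))
```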


2016 ◽  
Vol 28 (6) ◽  
pp. 1217-1247 ◽  
Author(s):  
Yunlong Feng ◽  
Yuning Yang ◽  
Xiaolin Huang ◽  
Siamak Mehrkanoon ◽  
Johan A. K. Suykens

This letter addresses the robustness problem when learning a large margin classifier in the presence of label noise. We achieve this purpose by proposing robustified large margin support vector machines. The robustness of the proposed robust support vector classifiers (RSVC), which is interpreted from a weighted viewpoint in this work, is due to the use of nonconvex classification losses. Besides being robust, the proposed RSVC is also smooth, which again benefits from the use of smooth classification losses. The idea behind RSVC comes from M-estimation in statistics, since the proposed robust and smooth classification losses can be taken as one-sided cost functions in robust statistics. Its Fisher consistency property and generalization ability are also investigated. Besides robustness and smoothness, another nice property of RSVC is that its solution can be obtained by iteratively solving weighted squared hinge loss-based support vector machine problems. We further show that each iteration is a quadratic programming problem in its dual space and can be solved using state-of-the-art methods. We thus propose an iteratively reweighted type algorithm and provide a constructive proof of its convergence to a stationary point. The effectiveness of the proposed classifiers is verified on both artificial and real data sets.
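
A rough way to see the "iteratively reweighted squared-hinge SVM" structure described above is the sketch below: an outer loop re-estimates per-sample weights from the current losses (so points with very large losses, likely label noise, are discounted), and an inner step fits a weighted squared-hinge linear classifier. The letter solves each weighted subproblem as a QP in the dual; here plain gradient descent on the primal is substituted for brevity, and the Gaussian-shaped reweighting rule is an assumption, not the paper's exact loss.

```python
import numpy as np

def rsvc_sketch(X, y, C=1.0, sigma=1.0, n_outer=10, n_inner=500, lr=1e-3):
    """Iteratively reweighted squared-hinge classifier (illustrative sketch).

    X: (n, d) feature matrix, y: (n,) labels in {-1, +1}.
    """
    n, d = X.shape
    beta, b = np.zeros(d), 0.0
    w = np.ones(n)                                    # per-sample weights
    for _ in range(n_outer):
        for _ in range(n_inner):                      # weighted squared-hinge subproblem
            m = y * (X @ beta + b)                    # margins
            g = w * np.maximum(0.0, 1.0 - m)          # weighted hinge residuals (active points)
            beta -= lr * (beta - 2.0 * C * (g * y) @ X)
            b -= lr * (-2.0 * C * np.sum(g * y))
        # reweight: smoothly discount samples with large losses (M-estimation flavor)
        losses = np.maximum(0.0, 1.0 - y * (X @ beta + b)) ** 2
        w = np.exp(-losses / (2.0 * sigma ** 2))
    return beta, b
```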


Author(s):  
Cagatay Catal ◽  
Serkan Tugul ◽  
Basar Akpinar

Software repositories consist of thousands of applications, and the manual categorization of these applications into domain categories is very expensive and time-consuming. In this study, we investigate the use of an ensemble-of-classifiers approach to solve the automatic software categorization problem when the source code is not available. To this end, we used three data sets (package level, class level, and method level) that belong to 745 closed-source Java applications from the Sharejar repository. We applied the Vote algorithm, AdaBoost, and Bagging ensemble methods, with Support Vector Machines, Naive Bayes, J48, IBk, and Random Forests as base classifiers. The best performance was achieved with the Vote algorithm, whose base classifiers were AdaBoost with J48, AdaBoost with Random Forest, and Random Forest. We showed that the Vote approach with method-level attributes provides the best performance for automatic software categorization; these results demonstrate that the proposed approach can effectively categorize applications into domain categories in the absence of source code.
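
Under the assumption that the Weka-style setup above can be approximated in scikit-learn (J48 approximated by a CART decision tree), the winning Vote configuration, AdaBoost over decision trees, AdaBoost over random forests, and a plain random forest, might be reproduced roughly as follows. Extracting the package-, class-, or method-level attribute vectors from the applications is outside the scope of this sketch, and the hyperparameters shown are illustrative defaults.

```python
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier

# hard-voting ensemble mirroring the best configuration reported above
# (scikit-learn >= 1.2; older versions use base_estimator= instead of estimator=)
vote = VotingClassifier(
    estimators=[
        ("ada_tree", AdaBoostClassifier(estimator=DecisionTreeClassifier(),
                                        n_estimators=100)),
        ("ada_rf", AdaBoostClassifier(estimator=RandomForestClassifier(n_estimators=50),
                                      n_estimators=20)),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    voting="hard",
)

# X_train: method-level attribute vectors, y_train: domain category labels
# vote.fit(X_train, y_train); predictions = vote.predict(X_test)
```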


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yixue Zhu ◽  
Boyue Chai

With the development of increasingly advanced information and electronic technology, especially with regard to physical information systems, cloud computing systems, and social services, big data is becoming ubiquitous, creating benefits for people while also posing huge challenges. With the advent of the big data era, data sets keep growing in scale, and traditional analysis methods can no longer handle them; mining the hidden information behind big data, especially in the field of e-commerce, has become a key factor in competition among enterprises. We use a support vector machine method based on parallel computing to analyze such data. First, the training samples are divided into several working subsets using the SOM self-organizing neural network classification method; each subset is trained separately, and the training results of the working sets are then merged, so that massive data prediction and analysis problems can be handled quickly. This paper argues that big data offers scalability and supports a quality assessment system, so it is meaningful to address the two-sidedness of quality assessment with big data. Finally, considering the excellent performance of parallel support vector machines in data mining and analysis, we apply this method to the big data analysis of e-commerce. The research results show that parallel support vector machines can solve the problem of processing large-scale data sets and that, even in the presence of dirty data, the effective rate is increased by at least 70%.
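
The "partition, train per subset, merge" scheme described above can be sketched as follows. The abstract partitions the data with a SOM; here k-means is used as a stand-in clustering step (an assumption made to keep the sketch self-contained), one SVM is trained per working subset in parallel, and prediction routes each query to the SVM of its nearest cluster.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def train_parallel_svm(X, y, n_subsets=4, n_jobs=-1):
    """Partition the training set, then fit one SVM per working subset in parallel.

    Assumes every subset contains samples from more than one class.
    """
    km = KMeans(n_clusters=n_subsets, n_init=10, random_state=0).fit(X)

    def fit_subset(k):
        mask = km.labels_ == k
        return SVC(kernel="rbf", C=1.0).fit(X[mask], y[mask])

    models = Parallel(n_jobs=n_jobs)(delayed(fit_subset)(k) for k in range(n_subsets))
    return km, models

def predict_parallel_svm(km, models, X_new):
    """Merge step: route each query to the SVM trained on its nearest cluster."""
    clusters = km.predict(X_new)
    return np.array([models[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(clusters, X_new)])
```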

