Quality of classification with LERS system in the data size context

2018 ◽  
Vol 16 (1/2) ◽  
pp. 29-38 ◽  
Author(s):  
M. Sudha ◽  
A. Kumaravel

Rough set theory is a simple yet powerful methodology for extracting and minimizing rules from decision tables. Its central concepts are the core, the reduct, and the discovery of knowledge in the form of rules. Decision rules characterize the decision states and support prediction in new situations. The theory was initially proposed as a useful tool for the analysis of decision states. The approach produces two types of decision rules, certain and possible, based on approximation. Prediction quality may be strongly affected when the data size grows large, and the application of rough set theory in this direction has not previously been considered. The main objective of this paper is therefore to study the influence of data size on the number of rules generated by rough set methods. The performance of these methods is reported through metrics such as accuracy and quality of classification. The results show the range of performance obtained and are the first of their kind in the current research trend.
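The distinction between certain and possible rules mentioned above comes from lower and upper approximations. A minimal sketch, on a toy decision table of my own invention (the attribute names and values are illustrative assumptions, not taken from the paper):

```python
from collections import defaultdict

# A toy decision table: (condition attributes) -> decision
table = [
    ({"outlook": "sunny", "windy": False}, "play"),
    ({"outlook": "sunny", "windy": False}, "stay"),   # conflicts with row 1
    ({"outlook": "rainy", "windy": True},  "stay"),
    ({"outlook": "sunny", "windy": True},  "play"),
]

# Group objects into indiscernibility classes by their condition values.
classes = defaultdict(list)
for conds, dec in table:
    classes[tuple(sorted(conds.items()))].append(dec)

certain, possible = [], []
for conds, decisions in classes.items():
    if len(set(decisions)) == 1:
        # Every object in the class shares one decision: a certain rule
        # (the class lies in the lower approximation of that decision).
        certain.append((dict(conds), decisions[0]))
    else:
        # Conflicting decisions: the class supports only possible rules
        # (it lies in the upper approximation of each decision seen).
        for d in set(decisions):
            possible.append((dict(conds), d))

print(len(certain), "certain rules;", len(possible), "possible rules")
```

Here the two sunny/calm rows conflict, so that class yields only possible rules, while the two unambiguous classes yield certain rules.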

2011 ◽  
pp. 38-69 ◽  
Author(s):  
Hung Son Nguyen

This chapter presents the Boolean reasoning approach to problem solving and its applications in rough sets. Boolean reasoning has become a powerful tool for designing effective and accurate solutions to many problems in decision-making, approximate reasoning and optimization, and in recent years it has become a recognized technique for developing many interesting concept approximation methods in rough set theory. This chapter presents a general framework for concept approximation that combines the classical Boolean reasoning method with many modern techniques in machine learning and data mining. This modified approach, called the “approximate Boolean reasoning” methodology, has been proposed as an even more powerful tool for problem solving in rough set theory and its applications in data mining. Through some of the most representative applications to KDD problems, including feature selection, feature extraction, data preprocessing, classification by decision rules and decision trees, and association analysis, the author hopes to convince the reader that the proposed approach not only retains all the merits of its antecedent but also offers the possibility of balancing the quality of the designed solution against its computational time.


Author(s):  
Jiye Liang ◽  
Yuhua Qian ◽  
Deyu Li

In rough set theory, rule extraction and rule evaluation are two important issues. In this chapter, the concepts of positive approximation and converse approximation are first introduced; these can be seen as dynamic approximations of target concepts based on a granulation order. Two algorithms for rule extraction, called MABPA and REBCA, are then designed and applied to hierarchically generate decision rules from a decision table. Furthermore, to evaluate the overall performance of a decision rule set, three measures are proposed for evaluating the certainty, consistency and support, respectively, of a decision rule set extracted from a decision table. Experimental analyses on several decision tables show that these three new measures are adequate for evaluating the decision performance of a decision rule set extracted from a decision table in rough set theory. The measures may be helpful for determining which rule extraction technique should be chosen in a practical decision problem.
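The classical per-rule quantities underlying such rule-set measures can be sketched briefly. This is a hedged illustration on an invented toy table using the textbook definitions of certainty and support; the chapter's own aggregate measures for whole rule sets are not reproduced here:

```python
# Toy decision table: (condition attributes) -> decision
table = [
    ({"temp": "high"}, "flu"),
    ({"temp": "high"}, "flu"),
    ({"temp": "high"}, "cold"),
    ({"temp": "low"},  "cold"),
]

def matches(obj_conds, rule_conds):
    # An object matches a rule if it agrees on every rule condition.
    return all(obj_conds.get(a) == v for a, v in rule_conds.items())

def certainty_and_support(rule_conds, rule_dec):
    n = len(table)
    cond_hits = [o for o in table if matches(o[0], rule_conds)]
    both = [o for o in cond_hits if o[1] == rule_dec]
    # certainty: fraction of condition-matching objects with the rule's decision
    certainty = len(both) / len(cond_hits) if cond_hits else 0.0
    # support: fraction of the whole table matching both condition and decision
    support = len(both) / n
    return certainty, support

rules = [({"temp": "high"}, "flu"), ({"temp": "low"}, "cold")]
for conds, dec in rules:
    c, s = certainty_and_support(conds, dec)
    print(conds, "->", dec, "certainty=%.2f support=%.2f" % (c, s))
```

A whole-rule-set measure can then be formed by averaging or otherwise aggregating these per-rule values, which is the kind of evaluation the chapter's measures formalize.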


2010 ◽  
Vol 129-131 ◽  
pp. 1191-1195
Author(s):  
Yan Lou

Using data mining on 3D FEM simulation results and rough set theory (RST), the effects of the extrusion process and die structure on the quality of AZ80 magnesium extrudate were analysed, and the weights of these effects were obtained. The results show that the effect of billet temperature on product quality is dominant, with an average weight of 0.27. The second most important parameter is ram speed, with an average weight of 0.22. In addition, the effect of the die characteristic parameters on the extrudate was found to be insignificant.


2013 ◽  
pp. 1225-1251
Author(s):  
Chun-Che Huang ◽  
Tzu-Liang (Bill) Tseng ◽  
Hao-Syuan Lin

Patent infringement risk is a significant issue for corporations due to the increased appreciation of intellectual property rights. If a corporation gives insufficient protection to its patents, it may lose both product profits and industry competitiveness. Many studies on patent infringement have focused on measuring patent trend indicators and patent monetary value. However, very few studies have attempted to develop a categorization mechanism for measuring and evaluating patent infringement risk, for example by categorizing patent infringement cases and then determining the significant attributes and deriving infringement decision rules. This study applies Rough Set Theory (RST), which is well suited to processing qualitative information, to induce rules and derive the significant attributes for categorizing patent infringement risk. Moreover, a concept hierarchy and a credibility index can be integrated with RST to enhance the application of the finalized decision rules.


Author(s):  
Benjamin Griffiths

Rough Set Theory (RST), since its introduction in Pawlak (1982), continues to develop as an effective tool in data mining. Within a set-theoretical structure, its remit is closely concerned with the classification of objects to decision attribute values, based on their description by a number of condition attributes. In RST, this classification is achieved through the construction of ‘if .. then ..’ decision rules. The development of RST has been in many directions; among the earliest was the allowance for misclassification in the constructed decision rules, namely the Variable Precision Rough Sets model (VPRS) (Ziarko, 1993), for which recent references include Beynon (2001), Mi et al. (2004), and Slezak and Ziarko (2005). Further developments of RST have included its operation within a fuzzy environment (Greco et al., 2006) and a dominance relation based approach (Greco et al., 2004). The regular major international conferences ‘International Conference on Rough Sets and Current Trends in Computing’ (RSCTC, 2004) and ‘International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing’ (RSFDGrC, 2005) continue to include RST research covering the varying directions of its development. This is also true of the associated book series ‘Transactions on Rough Sets’ (Peters and Skowron, 2005), which further includes doctoral theses on the subject. What is clear is that RST is still evolving, and the eclectic attitude to its development means that the definitive concomitant RST data mining techniques are still to be realised. Grzymala-Busse and Ziarko (2000), in a defence of RST, discussed a number of points relevant to data mining and also made comparisons between RST and other techniques.
Within the area of data mining and the desire to identify relationships between condition attributes, the effectiveness of RST is particularly pertinent due to the inherent intent within RST-type methodologies for data reduction and feature selection (Jensen and Shen, 2005). That is, subsets of condition attributes are identified that perform the same role as all the condition attributes in a considered data set (termed β-reducts in VPRS, see later). Chen (2001) addresses this when discussing the original RST, stating that it follows a reductionist approach and is lenient to inconsistent data (contradicting condition attributes, one aspect of underlying uncertainty). This encyclopaedia article describes and demonstrates the practical application of an RST-type methodology in data mining, namely VPRS, using nascent software initially described in Griffiths and Beynon (2005). The use of VPRS, through its relatively simple structure, outlines many of the rudiments of RST-based methodologies. The software utilised is oriented towards ‘hands on’ data mining, with graphs presented that clearly elucidate ‘veins’ of possible information identified from β-reducts over different allowed levels of misclassification associated with the constructed decision rules (Beynon and Griffiths, 2004). Further findings are briefly reported from undertaking VPRS in a resampling environment, with leave-one-out and bootstrapping approaches adopted (Wisnowski et al., 2003). The importance of these results lies in the identification of the more influential condition attributes, pertinent to accruing the most effective data mining results.
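The VPRS notion of allowed misclassification can be illustrated in a few lines. This is a minimal sketch, assuming a majority-inclusion formulation of the β threshold on an invented table; it is not the article's software or its exact parameterisation:

```python
from collections import Counter, defaultdict

# Toy table: (condition-attribute tuple) -> decision
table = [
    (("a1",), "yes"),
    (("a1",), "yes"),
    (("a1",), "no"),    # this class is 2/3 majority "yes"
    (("a2",), "no"),
]

def beta_positive_classes(table, beta):
    """Classes admitted to the beta-positive region with their decisions."""
    classes = defaultdict(list)
    for conds, dec in table:
        classes[conds].append(dec)
    positive = {}
    for conds, decs in classes.items():
        dec, count = Counter(decs).most_common(1)[0]
        if count / len(decs) >= beta:     # tolerate up to (1 - beta) errors
            positive[conds] = dec
    return positive

# beta = 1.0 recovers classical RST: only the fully consistent class qualifies.
print(beta_positive_classes(table, beta=1.0))
# beta = 0.6 also admits the 2/3-majority class, giving a "possible" rule
# with a controlled misclassification level.
print(beta_positive_classes(table, beta=0.6))
```

Lowering β admits more (and larger) classes into the positive region, which is what drives the different β-reducts found at different allowed misclassification levels.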


Author(s):  
Yasuo Kudo ◽  
Tetsuya Murai

This paper focuses on rough set theory, which provides mathematical foundations for the set-theoretical approximation of concepts as well as reasoning about data. Also presented is the concept of relative reducts, one of the most important notions for rule generation based on rough set theory. From the viewpoint of approximation, the authors introduce an evaluation criterion for relative reducts using the roughness of the partitions constructed from them. The proposed criterion evaluates each relative reduct by the average coverage of the decision rules based on it, which also corresponds to evaluating the roughness of the partition constructed from the relative reduct.
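The average-coverage idea can be sketched concretely. The following is a hedged illustration on an invented consistent table, scoring an attribute subset by the mean coverage of the rules it induces; the paper's exact criterion and notation are not reproduced:

```python
from collections import defaultdict

# Toy decision table: ({attribute: value}) -> decision
table = [
    ({"a": 1, "b": 0}, "d1"),
    ({"a": 1, "b": 1}, "d1"),
    ({"a": 0, "b": 0}, "d2"),
]

def average_coverage(reduct):
    """Mean coverage of the decision rules induced by an attribute subset."""
    # Build one indiscernibility class per distinct value tuple on the subset.
    classes = defaultdict(list)
    for conds, dec in table:
        key = tuple(conds[a] for a in reduct)
        classes[key].append(dec)
    dec_counts = defaultdict(int)
    for _, dec in table:
        dec_counts[dec] += 1
    coverages = []
    for key, decs in classes.items():
        for d in set(decs):
            # coverage: fraction of the decision class the rule accounts for
            coverages.append(decs.count(d) / dec_counts[d])
    return sum(coverages) / len(coverages)

print(average_coverage(["a"]))       # coarser partition, fewer and more general rules
print(average_coverage(["a", "b"]))  # finer partition, lower average coverage
```

The coarser subset `["a"]` already separates the decisions and yields fewer, more general rules with higher average coverage, which is the behaviour the proposed criterion rewards.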


Author(s):  
Nikos Pelekis ◽  
Babis Theodoulidis ◽  
Ioannis Kopanakis ◽  
Yannis Theodoridis

Quality of Service Open Shortest Path First (QOSPF), based on QoS routing, has been recognized as a missing piece in the evolution of QoS-based services in the Internet. Data mining has emerged as a tool for data analysis, discovery of new information, and autonomous decision-making. This paper focuses on routing algorithms and their applications to computing QoS routes in the OSPF protocol. The proposed approach is a data mining approach based on rough set theory, in which the attribute-value system describing the links of the network is created from the network topology. Rough set theory offers a knowledge discovery approach to extracting routing decisions from the attribute set. The extracted rules can then be used to select significant routing attributes and make routing selections in routers. A case study is conducted to demonstrate that rough set theory is effective in finding the most significant attribute set. It is shown that the algorithm based on data mining and rough sets offers a promising approach to the attribute-selection problem in Internet routing.


2011 ◽  
pp. 239-268 ◽  
Author(s):  
Krzysztof Pancerz ◽  
Zbigniew Suraj

This chapter continues a new research trend binding rough set theory with concurrency theory. In general, this trend concerns the following problems: discovering concurrent system models from experimental data represented by information systems, dynamic information systems or specialized matrices; the use of rough set methods for extracting knowledge from data; the use of rules for describing system behaviors; and the modeling and analysis of concurrent systems by means of Petri nets on the basis of extracted rules. Some automated methods of discovering concurrent system models from data tables are presented. The data tables are created on the basis of observations or specifications of process behaviors in the modeled systems. The proposed methods are based on rough set theory and colored Petri net theory.


2011 ◽  
Vol 14 (04) ◽  
pp. 715-735
Author(s):  
Wen-Rong Jerry Ho

The main purpose of this paper is to advocate a rule-based forecasting technique for anticipating stock index volatility. The paper sets up a stock index indicator projection prototype using a multiple criteria decision making model consisting of cluster analysis (CA) and Rough Set Theory (RST) to select the important attributes and forecast the TSEC Capitalization Weighted Stock Index. The projection prototype was then applied to forecast the stock index in the first half of 2009, achieving an accuracy of 66.67%. The results indicate that the decision rules are valid for appropriately forecasting stock index volatility.


2013 ◽  
Vol 411-414 ◽  
pp. 2085-2088
Author(s):  
Xiao Qing Geng ◽  
Yu Wang

In this paper, rough set theory is applied to reduce the complexity of the data space and to induce decision rules. A generic label correcting (GLC) algorithm incorporating the decision rules is proposed to solve supply chain modeling problems. The proposed approach is flexible because, by combining various operators and comparators, different types of paths in the reduced networks can be solved with a single algorithm.
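For readers unfamiliar with the label-correcting family the GLC algorithm generalizes, a minimal sketch follows. The graph, costs, and FIFO candidate list are illustrative assumptions; the paper's GLC variant additionally plugs in rule-derived operators and comparators in place of plain addition and `<`:

```python
from collections import deque

def label_correcting(graph, source):
    """Basic FIFO label-correcting shortest-path search.

    graph: {node: [(neighbor, cost), ...]}
    Returns the cheapest cost from source to every node.
    """
    labels = {node: float("inf") for node in graph}
    labels[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, cost in graph[u]:
            if labels[u] + cost < labels[v]:   # correct the label if improved
                labels[v] = labels[u] + cost
                queue.append(v)                # re-examine v's successors
    return labels

graph = {
    "s": [("a", 2), ("b", 5)],
    "a": [("b", 1), ("t", 6)],
    "b": [("t", 2)],
    "t": [],
}
print(label_correcting(graph, "s"))
```

Swapping the `+` operator and the `<` comparator for domain-specific versions is what makes the generic scheme adaptable to different path types, which is the flexibility the paper exploits.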

