Research on the Improved Apriori Based on Rough Set Method

2011 ◽  
Vol 467-469 ◽  
pp. 306-311
Author(s):  
Xian Wen Luo

The paper adopts the knowledge reduction method from rough set theory to adjust the Apriori algorithm and proposes an "itemset reduction method" to reduce the number of candidate sets and improve the efficiency of the algorithm. In the experiments, the results of the improved algorithm and the original Apriori algorithm are compared, and favorable results are obtained.
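
The abstract does not reproduce the itemset reduction step itself, but the candidate-set blow-up it targets comes from Apriori's level-wise candidate generation. Below is a minimal, generic Apriori sketch in Python (not the author's improved algorithm; the transactions and support threshold are illustrative) showing the join and prune steps whose output the paper aims to shrink.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Plain Apriori: return frequent itemsets with their supports.

    The level-wise candidate generation below is the step whose size
    the paper's itemset-reduction idea aims to reduce.
    """
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # L1: frequent single items
    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]): support(frozenset([i])) for i in items}
    frequent = {s: sup for s, sup in frequent.items() if sup >= min_support}
    all_frequent = dict(frequent)

    k = 2
    while frequent:
        prev = list(frequent)
        # join step: union of frequent (k-1)-itemsets that yields a k-itemset
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        frequent = {c: support(c) for c in candidates if support(c) >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

if __name__ == "__main__":
    data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    print(apriori(data, min_support=0.6))
```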

2011 ◽  
Vol 219-220 ◽  
pp. 604-607 ◽  
Author(s):  
Xu Yang Wang

Formal concept analysis and rough set theory provide two different methods for data analysis and knowledge processing. The knowledge reduction in this paper combines the two models. For an initial data set described by a formal context, rough set theory is applied to find the absolutely necessary attribute sets, which fully reflect the concepts and their hierarchy structure. The value cores of the attribute values are then calculated for all objects, and redundant attributes are deleted. Finally, repeated instances are deleted to obtain the minimal formal context. Constructing the concept lattice of the minimal formal context can reduce the size of the concept lattice of the initial table to a certain extent.
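
As a rough illustration of the reduction pipeline described above (an interpretation, not the paper's algorithm), the sketch below represents a formal context as object-to-attribute-set rows, drops attributes whose removal leaves the object partition unchanged, and then removes repeated instances. The example context and helper names are invented.

```python
def partition(context, attrs):
    """Group objects by their attribute pattern restricted to `attrs`."""
    groups = {}
    for obj, row in context.items():
        groups.setdefault(frozenset(row & attrs), set()).add(obj)
    return frozenset(frozenset(g) for g in groups.values())

def remove_redundant_attributes(context):
    """Greedily drop attributes whose removal leaves the partition unchanged."""
    attrs = set().union(*context.values())
    for a in sorted(attrs):
        rest = attrs - {a}
        if partition(context, rest) == partition(context, attrs):
            attrs = rest
    return attrs

def minimal_context(context):
    attrs = remove_redundant_attributes(context)
    reduced = {obj: frozenset(row & attrs) for obj, row in context.items()}
    # delete repeated instances (objects with identical reduced rows)
    seen, minimal = set(), {}
    for obj, row in reduced.items():
        if row not in seen:
            seen.add(row)
            minimal[obj] = row
    return attrs, minimal

if __name__ == "__main__":
    ctx = {
        "o1": {"a", "b"},
        "o2": {"a", "b"},   # duplicate of o1
        "o3": {"b", "c"},
        "o4": {"c"},
    }
    print(minimal_context(ctx))
```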


Author(s):  
Ayaho Miyamoto

This paper describes a method for acquiring rule-type knowledge from field inspection data on highway bridges. The proposed method enhances a traditional data mining technique by applying rough set theory to the traditional decision table reduction method. The rough set approach helps with exceptional and contradictory data, which the traditional decision table reduction method simply removes from the analysis. Instead of automatically removing all apparently contradictory cases, the proposed method determines whether the data really are contradictory and therefore must be removed. The method has been tested with real data on bridge members, including girders and filled joints, in bridges owned and managed by a highway corporation in Japan. Field data, however, contain numerous inconsistent records, so a new method is proposed to solve the resulting problem of data loss. The new method reveals some generally unrecognized decision rules in addition to generally accepted knowledge. Finally, a computer program is developed to perform the calculation routines, and field inspection data on highway bridges are used to show the applicability of the proposed method.
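
The step of finding apparently contradictory records can be illustrated with a small sketch: group inspection records by their condition attributes and flag groups whose members carry different decisions. This is only the detection half; the paper's criterion for keeping or removing such records is not reproduced, and the attribute names below are hypothetical.

```python
from collections import defaultdict

def find_inconsistent_records(records, condition_attrs, decision_attr):
    """Group records by their condition-attribute values and flag groups
    whose members carry different decision values (apparent contradictions)."""
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[a] for a in condition_attrs)].append(rec)
    inconsistent = []
    for key, members in groups.items():
        decisions = {m[decision_attr] for m in members}
        if len(decisions) > 1:
            inconsistent.append((key, decisions))
    return inconsistent

if __name__ == "__main__":
    inspections = [
        {"crack": "wide", "corrosion": "high", "rating": "repair"},
        {"crack": "wide", "corrosion": "high", "rating": "monitor"},  # apparent contradiction
        {"crack": "none", "corrosion": "low",  "rating": "ok"},
    ]
    print(find_inconsistent_records(inspections, ["crack", "corrosion"], "rating"))
```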


2013 ◽  
Vol 347-350 ◽  
pp. 3119-3122
Author(s):  
Yan Xue Dong ◽  
Fu Hai Huang

The basic theory of rough sets is given and a method for texture classification is proposed. Texture features are extracted according to GLCM (gray-level co-occurrence matrix) theory to generate 32 feature vectors, which form a decision table. After attribute discretization and knowledge reduction, a minimum set of classification rules is found. Experimental results show that using rough set theory for texture classification, together with an appropriate discretization method and reduction algorithm, yields better classification results.
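
Assuming the abstract's "GCLM" refers to the gray-level co-occurrence matrix (GLCM), the following NumPy sketch computes one co-occurrence matrix and two common Haralick-style features; the paper's 32-component feature vector and its discretization and reduction steps are not reproduced.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to a joint probability table."""
    mat = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                mat[image[r, c], image[r2, c2]] += 1
    return mat / mat.sum()

def glcm_features(p):
    """Two common texture features computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)
    return {"contrast": contrast, "energy": energy}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 8, size=(32, 32))   # image quantized to 8 gray levels
    print(glcm_features(glcm(img)))
```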


Author(s):  
JIYE LIANG ◽  
ZONGBEN XU

Rough set theory is emerging as a powerful tool for reasoning about data, and knowledge reduction is one of the important topics in rough set research. It has been proven that finding the minimal reduct of an information system is an NP-hard problem, as is finding the minimal reduct of an incomplete information system; the main cause of the NP-hardness is the combinatorial explosion over attribute subsets. In this paper, knowledge reduction is defined from the viewpoint of information, and a heuristic algorithm based on rough entropy is proposed for knowledge reduction in incomplete information systems; the time complexity of this algorithm is O(|A|²|U|). An illustrative example shows the application potential of the algorithm.
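
A hedged sketch of the general idea (not the paper's exact definitions): tolerance classes are formed over an incomplete table by treating missing values as wildcards, a textbook form of rough entropy is computed over them, and attributes are added greedily until the entropy of the full attribute set is reached. The entropy formula and stopping rule used in the paper may differ.

```python
import math

MISSING = "*"

def tolerance_class(data, i, attrs):
    """Objects indistinguishable from object i under attrs, treating MISSING as a wildcard."""
    return [j for j in range(len(data))
            if all(data[i][a] == data[j][a]
                   or data[i][a] == MISSING or data[j][a] == MISSING
                   for a in attrs)]

def rough_entropy(data, attrs):
    """One textbook form of rough entropy over tolerance classes (assumption:
    the paper's definition may differ): E(A) = -sum_i (1/|U|) * log2(1/|S_A(x_i)|)."""
    n = len(data)
    return -sum((1.0 / n) * math.log2(1.0 / len(tolerance_class(data, i, attrs)))
                for i in range(n))

def greedy_reduct(data, all_attrs):
    """Add, one at a time, the attribute that lowers rough entropy the most,
    until the entropy of the full attribute set is reached."""
    target = rough_entropy(data, all_attrs)
    reduct = []
    while rough_entropy(data, reduct) > target:
        best = min((a for a in all_attrs if a not in reduct),
                   key=lambda a: rough_entropy(data, reduct + [a]))
        reduct.append(best)
    return reduct

if __name__ == "__main__":
    table = [  # rows are objects, "*" marks a missing attribute value
        {"a0": 1, "a1": MISSING, "a2": 0},
        {"a0": 1, "a1": 2,       "a2": 1},
        {"a0": 0, "a1": 2,       "a2": 1},
        {"a0": 0, "a1": 3,       "a2": 0},
    ]
    print(greedy_reduct(table, ["a0", "a1", "a2"]))
```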


Author(s):  
Yasuo Kudo ◽ 
Tetsuya Murai

In this paper, we propose a parallel computation framework for a heuristic attribute reduction method. Attribute reduction is a key technique for using rough set theory as a tool in data mining. The authors have previously proposed a heuristic attribute reduction method to compute as many relative reducts as possible from a given dataset with numerous attributes. We parallelize our method using OpenMP (open multiprocessing). We also evaluate the performance of the parallelized attribute reduction method experimentally.
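
The paper parallelizes the heuristic with OpenMP; as a loose, language-shifted analogue, the Python sketch below scores candidate attributes for a partial reduct in separate worker processes. The dependency-style quality measure and the toy data are assumptions made for illustration only.

```python
from concurrent.futures import ProcessPoolExecutor

# Toy dataset: rows of (condition attribute values, decision value).
DATA = [
    ((1, 0, 1), "y"),
    ((1, 1, 0), "n"),
    ((0, 1, 1), "y"),
    ((0, 0, 0), "n"),
]

def dependency(attr_indices):
    """Fraction of objects whose condition-attribute pattern (restricted to
    attr_indices) determines the decision uniquely, used as a quality score."""
    groups = {}
    for cond, dec in DATA:
        groups.setdefault(tuple(cond[i] for i in attr_indices), set()).add(dec)
    consistent = sum(1 for cond, _ in DATA
                     if len(groups[tuple(cond[i] for i in attr_indices)]) == 1)
    return consistent / len(DATA)

def score_candidate(args):
    current, candidate = args
    return candidate, dependency(tuple(sorted(set(current) | {candidate})))

if __name__ == "__main__":
    current = ()                 # partial reduct built so far
    candidates = [0, 1, 2]
    # Evaluate every candidate attribute in its own worker process,
    # analogous to distributing loop iterations across OpenMP threads.
    with ProcessPoolExecutor() as pool:
        scores = dict(pool.map(score_candidate, [(current, c) for c in candidates]))
    print(scores)
```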


2016 ◽  
Vol 693 ◽  
pp. 1346-1349
Author(s):  
Xiao Yu Chen ◽  
Wen Liao Du ◽  
An Sheng Li ◽  
Kun Li ◽  
Chun Hua Qian

Rough set theory is a useful tool for attribute reduction in fault diagnosis of rotating machinery, but it cannot be used efficiently when samples are added incrementally. Aiming at the problem of incremental attribute reduction, a novel attribute reduction algorithm based on the binary discernibility matrix is proposed for the two updating situations, and the algorithm has low space complexity. Finally, bearing fault diagnosis experiments confirm the correctness of the attribute reduction method.
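
Reading the "binary resolution matrix" as a binary discernibility matrix (rows indexed by object pairs with different decisions, columns by condition attributes), a minimal sketch of building it, and of the incremental update that appends only the rows contributed by a new sample, might look as follows. This is an interpretation, not the paper's algorithm, and the toy diagnosis data are invented.

```python
import numpy as np

def discernibility_rows(new_obj, objects, n_attrs):
    """Binary rows for all pairs (new_obj, existing object) with different decisions:
    entry [k] = 1 if condition attribute k takes different values in the pair."""
    rows = []
    cond_new, dec_new = new_obj
    for cond, dec in objects:
        if dec != dec_new:
            rows.append([1 if cond[k] != cond_new[k] else 0 for k in range(n_attrs)])
    return np.array(rows, dtype=np.uint8)

def build_matrix(objects, n_attrs):
    """Full binary discernibility matrix, built pair by pair."""
    matrix = np.empty((0, n_attrs), dtype=np.uint8)
    seen = []
    for obj in objects:
        rows = discernibility_rows(obj, seen, n_attrs)
        if rows.size:
            matrix = np.vstack([matrix, rows])
        seen.append(obj)
    return matrix

if __name__ == "__main__":
    # toy bearing-diagnosis style data: (condition attribute values, fault class)
    samples = [((0, 1, 1), "normal"), ((1, 1, 0), "inner race"), ((1, 0, 0), "outer race")]
    m = build_matrix(samples, n_attrs=3)
    print(m)
    # incremental update: only the rows for the new sample are appended
    new = ((0, 0, 1), "inner race")
    m = np.vstack([m, discernibility_rows(new, samples, 3)])
    print(m)
```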


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Hua Li ◽  
Deyu Li ◽  
Yanhui Zhai ◽  
Suge Wang ◽  
Jing Zhang

Owing to the high dimensionality of multilabel data, feature selection in multilabel learning is necessary to reduce redundant features and improve the performance of multilabel classification. Rough set theory, as a valid mathematical tool for data analysis, has been widely applied to feature selection (also called attribute reduction). In this study, we propose a variable precision attribute reduct for multilabel data based on rough set theory, called the δ-confidence reduct, which can correctly capture the uncertainty implied among labels. Furthermore, the judgement theorem and discernibility matrix associated with the δ-confidence reduct are also introduced, from which we obtain an approach to knowledge reduction in multilabel decision tables.
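
The δ-confidence reduct itself is defined formally in the paper; as a loose sketch of the underlying confidence computation only, the code below groups multilabel objects into equivalence classes under a set of condition attributes and reports, per class, the labels whose within-class frequency reaches a threshold δ. The data and threshold are illustrative.

```python
from collections import defaultdict

def label_confidence(rows, attrs):
    """For each equivalence class under `attrs`, the fraction of objects
    carrying each label (multilabel data: one row may have several labels)."""
    classes = defaultdict(list)
    for cond, labels in rows:
        classes[tuple(cond[a] for a in attrs)].append(labels)
    conf = {}
    for key, members in classes.items():
        counts = defaultdict(int)
        for labels in members:
            for lab in labels:
                counts[lab] += 1
        conf[key] = {lab: c / len(members) for lab, c in counts.items()}
    return conf

def confident_labels(rows, attrs, delta):
    """Per equivalence class, the labels whose confidence reaches delta."""
    return {key: {lab for lab, c in conf.items() if c >= delta}
            for key, conf in label_confidence(rows, attrs).items()}

if __name__ == "__main__":
    # toy multilabel decision table: (condition attribute values, set of labels)
    data = [
        ((1, 0), {"sports", "news"}),
        ((1, 0), {"sports"}),
        ((0, 1), {"finance"}),
    ]
    print(confident_labels(data, attrs=[0, 1], delta=0.6))
```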

