A Variable Precision Covering-Based Rough Set Model Based on Functions

2014 ◽  
Vol 2014 ◽  
pp. 1-5 ◽  
Author(s):  
Yanqing Zhu ◽  
William Zhu

Classical rough set theory is a technique of granular computing for handling the uncertainty, vagueness, and granularity in information systems. Covering-based rough sets are proposed to generalize this theory for dealing with covering data. By introducing a concept of misclassification rate functions, an extended variable precision covering-based rough set model is proposed in this paper. In addition, we define the f-lower and f-upper approximations in terms of neighborhoods in the extended model and study their properties. Particularly, two coverings with the same reductions are proved to generate the same f-lower and f-upper approximations. Finally, we discuss the relationships between the new model and some other variable precision rough set models.
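To make the neighborhood-based construction concrete, the following Python sketch computes f-lower and f-upper approximations under illustrative assumptions: the neighborhood of x is taken as the intersection of all covering blocks containing x, the misclassification rate of a neighborhood with respect to X as 1 - |N(x) ∩ X| / |N(x)|, and the thresholds f(x) and 1 - f(x) stand in for the paper's misclassification rate functions. None of these choices is claimed to reproduce the authors' exact definitions.

```python
def neighborhood(x, covering):
    """Intersection of all covering blocks that contain x (an illustrative choice)."""
    blocks = [set(block) for block in covering if x in block]
    return set.intersection(*blocks) if blocks else set()

def misclassification_rate(nbr, target):
    """Fraction of the neighborhood that falls outside the target set X."""
    if not nbr:
        return 0.0
    return 1.0 - len(nbr & target) / len(nbr)

def f_lower(universe, covering, target, f):
    """Objects whose neighborhood misclassifies X at a rate within the tolerance f(x)."""
    return {x for x in universe
            if misclassification_rate(neighborhood(x, covering), target) <= f(x)}

def f_upper(universe, covering, target, f):
    """Objects whose neighborhood still overlaps X beyond the complementary tolerance."""
    return {x for x in universe
            if misclassification_rate(neighborhood(x, covering), target) < 1.0 - f(x)}

# Toy covering of a five-element universe.
U = {1, 2, 3, 4, 5}
C = [{1, 2}, {2, 3, 4}, {4, 5}]
X = {1, 2, 3}
f = lambda x: 0.25          # a constant misclassification rate function
print(f_lower(U, C, X, f))  # {1, 2}
print(f_upper(U, C, X, f))  # {1, 2, 3}
```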

Author(s):  
Malcolm J. Beynon ◽  
Benjamin Griffiths

This chapter considers, and elucidates, the general methodology of rough set theory (RST), a nascent approach to rule-based classification associated with soft computing. The elucidation undertaken in this chapter has two parts: firstly, the levels of pre-processing that may be necessary when undertaking an RST-based analysis, and secondly, the presentation of an analysis using variable precision rough sets (VPRS), a development of the original RST that allows misclassification to exist in the constructed “if … then …” decision rules. Throughout the chapter, bespoke software underpins the pre-processing and VPRS analysis undertaken, including screenshots of its output. The problem of US bank credit ratings allows a pertinent demonstration of the soft computing approaches described throughout.


Author(s):  
B. K. Tripathy

Granular Computing has emerged as a framework in which information granules are represented and manipulated by intelligent systems, and it forms a unified conceptual and computing platform. Rough set theory, put forth by Pawlak, is based upon a single equivalence relation taken at a time; from a granular computing point of view, it is therefore single-granular computing. In 2006, Qiang et al. introduced multigranular computing using rough sets; this model came to be called optimistic multigranular rough sets after they introduced a second type, pessimistic multigranular rough sets, in 2010. Since then, several properties of multigranulations have been studied, and further basic notions on multigranular rough sets have been introduced; among the recent additions are the Neighborhood-Based Multigranular Rough Sets (NMGRS) and the Covering-Based Multigranular Rough Sets (CBMGRS). In this chapter, the authors discuss all these topics on multigranular computing and suggest some problems for further study.
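As a rough illustration of the optimistic/pessimistic distinction, the Python sketch below computes both multigranular lower approximations over partitions given explicitly as lists of blocks. The toy partitions and helper names are illustrative assumptions, following the standard definitions reported in the multigranulation literature rather than any formulation specific to this chapter.

```python
def equivalence_class(x, partition):
    """Block of the given partition (one granulation) that contains x."""
    for block in partition:
        if x in block:
            return set(block)
    return {x}

def optimistic_lower(universe, partitions, target):
    """x is accepted if its class under AT LEAST ONE granulation is contained in the target."""
    return {x for x in universe
            if any(equivalence_class(x, P) <= target for P in partitions)}

def pessimistic_lower(universe, partitions, target):
    """x is accepted only if its class under EVERY granulation is contained in the target."""
    return {x for x in universe
            if all(equivalence_class(x, P) <= target for P in partitions)}

U = {1, 2, 3, 4, 5, 6}
P1 = [{1, 2}, {3, 4}, {5, 6}]   # first equivalence relation, given as a partition
P2 = [{1}, {2, 3}, {4, 5, 6}]   # second equivalence relation, given as a partition
X = {1, 2, 3}
print(optimistic_lower(U, [P1, P2], X))   # {1, 2, 3}
print(pessimistic_lower(U, [P1, P2], X))  # {1, 2}
```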


Author(s):  
Malcolm J. Beynon

Rough set theory (RST), since its introduction in Pawlak (1982), continues to develop as an effective tool in classification problems and decision support. In the majority of applications using RST-based methodologies, “if … then …” decision rules are constructed and used to describe the results of an analysis. The variety of applications in management and decision making using RST recently includes discovering the operating rules of a Sicilian irrigation purpose reservoir (Barbagallo, Consoli, Pappalardo, Greco, & Zimbone, 2006), feature selection in customer relationship management (Tseng & Huang, 2007), and the decisions that insurance companies make to satisfy customers’ needs (Shyng, Wang, Tzeng, & Wu, 2007). As a nascent symbolic machine learning technique, the popularity of RST is a direct consequence of its set-theoretical operational processes, which mitigate inhibiting issues associated with traditional techniques, such as within-group probability distribution assumptions (Beynon & Peel, 2001). Instead, the rudiments of the original RST are based on an indiscernibility relation, whereby objects are grouped into certain equivalence classes and inference is taken from these groups. Characteristics like this mean that decision support will be built upon the underlying RST philosophy of “Let the data speak for itself” (Dunstch & Gediga, 1997). Recently, RST was viewed as being of fundamental importance in artificial intelligence and cognitive sciences, including decision analysis and decision support systems (Tseng & Huang, 2007). One of the first developments of RST was the variable precision rough sets model (VPRSβ), which allows a level of misclassification to exist in the classification of objects, resulting in probabilistic rules (see Ziarko, 1993; Beynon, 2001; Li and Wang, 2004). VPRSβ has specifically been applied as a potential decision support system with the UK Monopolies and Mergers Commission (Beynon & Driffield, 2005), in predicting bank credit ratings (Griffiths & Beynon, 2005), and in the diffusion of Medicaid home care programs (Kitchener, Beynon, & Harrington, 2004). Further developments of RST include extended variable precision rough sets (VPRS(l,u)), which infer asymmetric bounds on the possible classification and misclassification of objects (Katzberg & Ziarko, 1996); dominance-based rough sets, which base their approach on a dominance relation (Greco, Matarazzo, & Slowinski, 2004); fuzzy rough sets, which allow graded membership of objects to constructed sets (Greco, Inuiguchi, & Slowinski, 2006); and the probabilistic Bayesian rough set model, which considers an appropriate certainty gain function (Ziarko, 2005). The diversity of work on RST can be viewed in the annual volumes of the Transactions on Rough Sets (most recent year 2006), as well as in the annual conferences dedicated to RST and its developments (see, for example, RSCTC, 2004). In this article, the theory underlying VPRS(l,u) is described, with its special case VPRSβ used in an example analysis. The utilisation of VPRS(l,u) and VPRSβ is without loss of generality to other developments such as those referenced; their relative simplicity allows the non-proficient reader the opportunity to fully follow the details presented.
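To fix ideas, the following sketch implements the (l, u) thresholding on the conditional membership P(X | E) of an equivalence class E, with the symmetric VPRSβ case recovered by setting l = β and u = 1 − β. The partition and decision concept in the example are invented for illustration, and the thresholding form follows the standard statement of the model rather than the exact notation of any one cited paper.

```python
def conditional_membership(block, target):
    """P(X | E): proportion of an equivalence class E lying inside the concept X."""
    return len(block & target) / len(block)

def vprs_lower(partition, target, u):
    """(l, u)-lower approximation: union of classes with P(X | E) >= u."""
    return set().union(*[E for E in partition if conditional_membership(E, target) >= u])

def vprs_upper(partition, target, l):
    """(l, u)-upper approximation: union of classes with P(X | E) > l."""
    return set().union(*[E for E in partition if conditional_membership(E, target) > l])

# Symmetric special case VPRS_beta: l = beta, u = 1 - beta, with 0 <= beta < 0.5.
partition = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9, 10}]
X = {1, 2, 3, 4, 5, 7}
beta = 0.2
print(vprs_lower(partition, X, 1 - beta))  # {1, 2, 3, 4}: the class lies at least 80% inside X
print(vprs_upper(partition, X, beta))      # all objects: every class exceeds 20% membership in X
```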


2012 ◽  
Vol 9 (3) ◽  
pp. 1-17 ◽  
Author(s):  
D. Calvo-Dmgz ◽  
J. F. Gálvez ◽  
D. Glez-Peña ◽  
S. Gómez-Meire ◽  
F. Fdez-Riverola

DNA microarrays have contributed to the exponential growth of genomic and experimental data in the last decade. This large amount of gene expression data has been used by researchers seeking the diagnosis of diseases such as cancer using machine learning methods. In turn, explicit biological knowledge about gene functions has also grown tremendously over the last decade. This paper presents a novel model for microarray data classification that integrates explicit biological knowledge, provided as gene sets, into the classification process by means of Variable Precision Rough Set Theory (VPRS); the model is able to highlight which part of the provided biological knowledge has been important for classification. Based on this knowledge, we transform the input microarray data into supergenes, and then apply rough set theory to select the most promising supergenes and to derive a set of easily interpretable classification rules. The proposed model is evaluated over three breast cancer microarray datasets, obtaining successful results compared to classical classification techniques. The experimental results show that there are no significant differences between our model and classical techniques, but our model is able to provide a biologically interpretable explanation of how it classifies new samples.
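The abstract does not spell out how the gene sets collapse the expression matrix into supergenes, so the sketch below uses a mean-expression summary per gene set as an assumed, illustrative aggregation; the gene names, gene sets, and matrix values are hypothetical. The resulting samples-by-supergenes matrix is the kind of input a VPRS-style rule learner could then discretize and reduce.

```python
import numpy as np

def build_supergenes(expression, gene_index, gene_sets):
    """Collapse a samples-by-genes expression matrix into samples-by-supergenes.

    expression : ndarray of shape (n_samples, n_genes)
    gene_index : dict mapping gene name -> column index in `expression`
    gene_sets  : dict mapping gene-set name -> list of gene names (prior knowledge)
    The mean over each gene set is an illustrative aggregation choice.
    """
    names, columns = [], []
    for set_name, genes in gene_sets.items():
        cols = [gene_index[g] for g in genes if g in gene_index]
        if cols:
            names.append(set_name)
            columns.append(expression[:, cols].mean(axis=1))
    return names, np.column_stack(columns)

# Hypothetical toy data: 3 samples, 4 genes, 2 curated gene sets.
expr = np.array([[1.0, 2.0, 0.5, 3.0],
                 [0.8, 1.9, 0.4, 2.7],
                 [2.1, 0.3, 1.8, 0.9]])
index = {"BRCA1": 0, "BRCA2": 1, "TP53": 2, "ESR1": 3}
sets = {"repair_pathway": ["BRCA1", "BRCA2"], "hormone_response": ["ESR1"]}
names, supergenes = build_supergenes(expr, index, sets)
print(names, supergenes.shape)  # ['repair_pathway', 'hormone_response'] (3, 2)
```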


Data Mining ◽  
2011 ◽  
pp. 142-173 ◽  
Author(s):  
Jerzy W. Grzymala-Busse ◽  
Wojciech Ziarko

The chapter is focused on the data mining aspect of the applications of rough set theory. Consequently, the theoretical part is minimized to emphasize the practical application side of the rough set approach in the context of data analysis and model-building applications. Initially, the original rough set approach is presented and illustrated with detailed examples showing how data can be analyzed with this approach. The next section illustrates the Variable Precision Rough Set Model (VPRSM) to expose similarities and differences between these two approaches. Then, the data mining system LERS, based on a different generalization of the original rough set theory than VPRSM, is presented. Brief descriptions of algorithms are also cited. Finally, some applications of the LERS data mining system are listed.
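As a reference point for the original approach the chapter illustrates before moving to VPRSM and LERS, the sketch below computes classical (Pawlak) lower and upper approximations from a small, hypothetical decision table; the attribute and decision names are invented for illustration.

```python
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Group objects with identical values on the chosen attributes."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attributes)].add(obj)
    return list(classes.values())

def lower_approximation(classes, target):
    """Union of indiscernibility classes completely contained in the target concept."""
    return set().union(*[E for E in classes if E <= target])

def upper_approximation(classes, target):
    """Union of indiscernibility classes that intersect the target concept."""
    return set().union(*[E for E in classes if E & target])

# Hypothetical decision table: condition attributes 'temperature' and 'cough', decision 'flu'.
table = {
    "p1": {"temperature": "high",   "cough": "yes", "flu": "yes"},
    "p2": {"temperature": "high",   "cough": "yes", "flu": "no"},
    "p3": {"temperature": "normal", "cough": "no",  "flu": "no"},
    "p4": {"temperature": "high",   "cough": "no",  "flu": "yes"},
}
classes = indiscernibility_classes(table, ["temperature", "cough"])
flu = {o for o, row in table.items() if row["flu"] == "yes"}
print(lower_approximation(classes, flu))  # {'p4'}
print(upper_approximation(classes, flu))  # {'p1', 'p2', 'p4'}
```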


Author(s):  
Hiroshi Sakai ◽  
Masahiro Inuiguchi

Rough sets and granular computing, known as new methodologies for computing technology, are now attracting great interest from researchers. This special issue presents 12 articles, most of which were presented at the second Japanese workshop on Rough Sets held at Kyushu Institute of Technology in Tobata, Kitakyushu, Japan, on August 17-18, 2005. The first article studies the relation between rough set theory and formal concept analysis; the two frameworks are analyzed and connected by means of morphisms. The second article introduces the object-oriented paradigm into rough set theory and proposes object-oriented rough set models; theoretical aspects of these new models are also examined. The third article considers relations between generalized rough sets, topologies, and modal logics, and presents some topological properties of rough sets induced by equivalence relations. The fourth article focuses on a family of polymodal systems and investigates theoretical aspects of these systems, such as completeness; by combining polymodal logic concepts and rough set theory, a new framework named multi-rough sets is established. The fifth article focuses on information incompleteness in fuzzy relational models and proposes a generalized possibility-based fuzzy relational model. The sixth article presents the developed software EVALPSN (Extended Vector Annotated Logic Program with Strong Negation) and its application to pipeline valve control. The seventh article presents properties of attribute reduction in variable precision rough set models; ten kinds of meaningful reducts are newly proposed, and hierarchical relations among these reducts are examined. The eighth article proposes attribute-value reduction for Kansei analysis using information granulation and presents illustrative results for some databases in the UCI Machine Learning Repository. The ninth article investigates cluster analysis for data with tolerance of errors; two new clustering algorithms, based on entropy-regularized fuzzy c-means, are proposed. The tenth article applies binary decision trees to handwritten Japanese Kanji recognition; consideration of the experimental results on real Kanji data is also presented. The eleventh article applies a rough-set-based method to analysing the character of the screen design in individual web sites; the obtained characterization provides useful knowledge for generating new web sites. The last article focuses on rule generation in non-deterministic information systems; discernibility functions are introduced for generating minimal certain rules, and a new algorithm is proposed for handling each discernibility function. Finally, we would like to acknowledge all the authors for their efforts and contributions. We are very grateful to the reviewers for their thorough and on-time reviews. We are also grateful to Prof. Toshio Fukuda and Prof. Kaoru Hirota, Editors-in-Chief of JACIII, for inviting us to serve as Guest Editors of this journal, and to Mr. Uchino and Mr. Ohmori of Fuji Technology Press for their kind assistance in the publication of this special issue.


Author(s):  
Guilong Liu ◽  
William Zhu

Rough set theory is an important technique for knowledge discovery in databases. Classical rough set theory, proposed by Pawlak, is based on equivalence relations, but many interesting and meaningful extensions have been made based on binary relations and coverings. This paper makes a comparison between covering rough sets and rough sets based on binary relations. It also focuses on the authors’ study of the conditions under which a covering rough set can be generated by a binary relation and a binary-relation-based rough set can be generated by a covering.
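The two constructions being compared can be sketched as follows: successor neighborhoods turn a binary relation into a candidate covering, and minimal neighborhoods turn a covering back into a binary relation. The conditions under which these constructions generate matching rough sets are the paper's subject; the sketch only shows the mappings themselves, with invented example data.

```python
def successor_neighborhoods(universe, relation):
    """Family {R_s(x) : x in U} with R_s(x) = {y : (x, y) in R}; it covers U when R is reflexive."""
    return [{y for y in universe if (x, y) in relation} for x in sorted(universe)]

def relation_from_covering(universe, covering):
    """Binary relation induced by a covering: x R y iff y lies in every block containing x."""
    rel = set()
    for x in universe:
        nbr = set(universe)
        for block in covering:
            if x in block:
                nbr &= set(block)
        rel |= {(x, y) for y in nbr}
    return rel

U = {1, 2, 3}
R = {(1, 1), (1, 2), (2, 2), (3, 2), (3, 3)}
print(successor_neighborhoods(U, R))                      # [{1, 2}, {2}, {2, 3}]
print(relation_from_covering(U, [{1, 2}, {2, 3}]) == R)   # True for this particular example
```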

