Rough Sets and Granular Computing in Geospatial Information

Author(s):  
Iftikhar U. Sikder

The representation of geographic entities is characterized by inherent granularity due to scale- and resolution-specific observations. This article discusses various aspects of rough set-based approximation modeling of spatial and conceptual granularity. It outlines the context and applications of rough set theory in representing objects with indeterminate boundaries, spatial reasoning, and knowledge discovery.
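The core approximation machinery can be sketched in a few lines. The example below is illustrative only (the cells, the partition, and the target region are hypothetical): an equivalence partition of the universe, e.g. raster cells grouped by a coarse classification, approximates a target region from below and above.

```python
# Minimal sketch of Pawlak rough-set approximation (hypothetical data):
# equivalence classes (granules) approximate a target region X.

def lower_upper(equiv_classes, X):
    """Return (lower, upper) approximations of X under the partition."""
    X = set(X)
    lower, upper = set(), set()
    for block in equiv_classes:
        block = set(block)
        if block <= X:      # block lies entirely inside X: certainly in X
            lower |= block
        if block & X:       # block overlaps X: possibly in X
            upper |= block
    return lower, upper

partition = [{1, 2}, {3, 4}, {5, 6}]   # granules at a given resolution
X = {1, 2, 3}                          # target region
low, up = lower_upper(partition, X)
print(sorted(low))   # [1, 2]
print(sorted(up))    # [1, 2, 3, 4]
```

The gap between the two approximations (here {3, 4}) is the boundary region: cells that cannot be classified at this granularity.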

Author(s):  
B. K. Tripathy

Granular Computing has emerged as a framework in which information granules are represented and manipulated by intelligent systems, providing a unified conceptual and computing platform. Rough set theory, put forth by Pawlak, is based upon a single equivalence relation taken at a time; from a granular computing point of view, it is therefore single-granular computing. In 2006, Qian et al. introduced multigranular computing using rough sets; this model came to be called optimistic multigranular rough sets after they introduced a second type, pessimistic multigranular rough sets, in 2010. Since then, several properties of multigranulations have been studied, and several extensions of the basic multigranular rough set notions have been introduced. Some of these, the Neighborhood-Based Multigranular Rough Sets (NMGRS) and the Covering-Based Multigranular Rough Sets (CBMGRS), have been added recently. In this chapter, the authors discuss all these topics on multigranular computing and suggest some problems for further study.
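The optimistic/pessimistic distinction can be illustrated with a small sketch (the data and function names are hypothetical, not taken from the chapter): under the optimistic multigranular lower approximation, an object's class must lie inside the target set under at least one granulation; the pessimistic version requires this under every granulation.

```python
# Hedged sketch of optimistic vs. pessimistic multigranular lower
# approximations; each granulation is a partition of the universe.

def block_of(partition, x):
    """Equivalence class of x in the given partition."""
    return next(b for b in partition if x in b)

def optimistic_lower(partitions, X):
    # x belongs if its class fits inside X under AT LEAST ONE granulation
    return {x for x in X if any(block_of(p, x) <= X for p in partitions)}

def pessimistic_lower(partitions, X):
    # x belongs if its class fits inside X under EVERY granulation
    return {x for x in X if all(block_of(p, x) <= X for p in partitions)}

P1 = [frozenset({1, 2}), frozenset({3, 4})]
P2 = [frozenset({1}), frozenset({2, 3}), frozenset({4})]
X = {1, 2}
print(sorted(optimistic_lower([P1, P2], X)))   # [1, 2]
print(sorted(pessimistic_lower([P1, P2], X)))  # [1]
```

Object 2 illustrates the difference: its class {1, 2} fits inside X under P1, but its class {2, 3} under P2 does not, so it survives only the optimistic test.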


Author(s):  
Guilong Liu ◽  
William Zhu

Rough set theory is an important technique for knowledge discovery in databases. Classical rough set theory, proposed by Pawlak, is based on equivalence relations, but many interesting and meaningful extensions have been made based on binary relations and coverings. This paper makes a comparison between covering rough sets and rough sets based on binary relations. It also presents the authors' study of the conditions under which a covering rough set can be generated by a binary relation, and a binary-relation-based rough set by a covering.
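A minimal sketch of the connection studied here, with hypothetical data: a binary relation R induces successor neighborhoods R(x) = {y : (x, y) ∈ R}; when every R(x) is non-empty these neighborhoods form a covering of the universe, and neighborhood-based approximations can be computed from either description.

```python
# Illustrative sketch: successor neighborhoods of a binary relation
# yield a covering, and both drive the same approximation operators.

def successor(R, U):
    """Map each x in U to its successor neighborhood under relation R."""
    return {x: {y for (a, y) in R if a == x} for x in U}

def lower(nbhd, X):
    return {x for x in nbhd if nbhd[x] <= X}

def upper(nbhd, X):
    return {x for x in nbhd if nbhd[x] & X}

U = {1, 2, 3}
R = {(1, 1), (2, 1), (2, 2), (3, 3)}
n = successor(R, U)
covering = [frozenset(v) for v in n.values()]  # members: {1}, {1, 2}, {3}
X = {1, 2}
print(sorted(lower(n, X)))   # [1, 2]
print(sorted(upper(n, X)))   # [1, 2]
```

When some R(x) is empty the neighborhoods fail to cover U, which is one reason conditions on R (such as seriality) appear in results of this kind.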


Author(s):  
Hiroshi Sakai ◽  
Masahiro Inuiguchi

Rough sets and granular computing, known as new methodologies for computing technology, are now attracting great interest among researchers. This special issue presents 12 articles, most of which were presented at the Second Japanese Workshop on Rough Sets held at Kyushu Institute of Technology in Tobata, Kitakyushu, Japan, on August 17-18, 2005. The first article studies the relation between rough set theory and formal concept analysis. These two frameworks are analyzed and connected by using the method of morphism. The second article introduces the object-oriented paradigm into rough set theory, and object-oriented rough set models are proposed. Theoretical aspects of these new models are also examined. The third article considers relations between generalized rough sets, topologies, and modal logics, and some topological properties of rough sets induced by equivalence relations are presented. The fourth article focuses on a family of polymodal systems, and theoretical aspects of these systems, such as completeness, are investigated. By combining polymodal logic concepts and rough set theory, a new framework named multi-rough sets is established. The fifth article focuses on information incompleteness in fuzzy relational models, and a generalized possibility-based fuzzy relational model is proposed. The sixth article presents the software EVALPSN (Extended Vector Annotated Logic Program with Strong Negation) and its application to pipeline valve control. The seventh article presents the properties of attribute reduction in variable precision rough set models. Ten kinds of meaningful reducts are newly proposed, and hierarchical relations among these reducts are examined. The eighth article proposes attribute-value reduction for Kansei analysis using information granulation, and illustrative results for some databases in the UCI Machine Learning Repository are presented.
The ninth article investigates cluster analysis for data with tolerance to errors. Two new clustering algorithms, based on entropy-regularized fuzzy c-means, are proposed. The tenth article applies binary decision trees to handwritten Japanese Kanji recognition. A discussion of experimental results on real Kanji data is also presented. The eleventh article applies a rough-set-based method to analyzing the character of the screen design of individual web sites. The extracted characteristics provide useful knowledge for generating new web sites. The last article focuses on rule generation in non-deterministic information systems. For generating minimal certain rules, discernibility functions are introduced. A new algorithm is also proposed for handling every discernibility function. Finally, we would like to acknowledge all the authors for their efforts and contributions. We are also very grateful to the reviewers for their thorough and timely reviews. We are likewise grateful to Prof. Toshio Fukuda and Prof. Kaoru Hirota, Editors-in-Chief of JACIII, for inviting us to serve as Guest Editors of this Journal, and to Mr. Uchino and Mr. Ohmori of Fuji Technology Press for their kind assistance in the publication of this special issue.


2014 ◽  
Vol 543-547 ◽  
pp. 2017-2023
Author(s):  
Qing Guan ◽  
Jian He Guan

This paper proposes a new extension of fuzzy rough set theory that uses partitions of interval set-values for granular computing during knowledge discovery. The natural intervals of attribute values in a decision system are transformed into multiple sub-intervals of [0, 1] by normalization, and some characteristics of interval set-values of decision systems in fuzzy rough set theory are discussed. The correctness and effectiveness of the approach are shown in experiments. The approach presented in this paper can also be used as a data preprocessing step for symbolic knowledge discovery or machine learning methods other than rough set theory.
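The normalization step can be sketched as follows, assuming simple min-max scaling (the paper's exact transform may differ): each natural attribute interval is mapped onto a sub-interval of [0, 1] relative to the attribute's global range.

```python
# Assumed min-max normalization of interval attribute values onto [0, 1];
# the attribute range and intervals below are hypothetical.

def normalize_interval(lo, hi, a_min, a_max):
    """Map the interval [lo, hi], lying within the attribute range
    [a_min, a_max], to a sub-interval of [0, 1]."""
    span = a_max - a_min
    return ((lo - a_min) / span, (hi - a_min) / span)

# attribute observed as intervals, with global range [10, 50]
print(normalize_interval(10, 30, 10, 50))   # (0.0, 0.5)
print(normalize_interval(30, 50, 10, 50))   # (0.5, 1.0)
```

After this step every attribute's interval values live in [0, 1], so fuzzy-rough operators defined on the unit interval can be applied uniformly.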


Author(s):  
Richard Jensen

Data reduction is an important step in knowledge discovery from data. The high dimensionality of databases can be reduced using suitable techniques, depending on the requirements of the data mining process. These techniques fall into one of two categories: those that transform the underlying meaning of the data features and those that are semantics-preserving. Feature selection (FS) methods belong to the latter category, where a smaller set of the original features is chosen based on a subset evaluation function. The process aims to determine a minimal feature subset from a problem domain while retaining a suitably high accuracy in representing the original features. In knowledge discovery, feature selection methods are particularly desirable as they facilitate the interpretability of the resulting knowledge. To this end, rough set theory has been successfully used as a tool that enables the discovery of data dependencies and the reduction of the number of features contained in a dataset using the data alone, requiring no additional information.


2014 ◽  
Vol 2014 ◽  
pp. 1-5 ◽  
Author(s):  
Yanqing Zhu ◽  
William Zhu

Classical rough set theory is a technique of granular computing for handling the uncertainty, vagueness, and granularity in information systems. Covering-based rough sets are proposed to generalize this theory for dealing with covering data. By introducing a concept of misclassification rate functions, an extended variable precision covering-based rough set model is proposed in this paper. In addition, we define the f-lower and f-upper approximations in terms of neighborhoods in the extended model and study their properties. In particular, two coverings with the same reductions are proved to generate the same f-lower and f-upper approximations. Finally, we discuss the relationships between the new model and some other variable precision rough set models.
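Variable-precision approximations driven by a misclassification rate can be sketched as follows. This follows the common variable-precision formulation with a single uniform threshold beta, which is an assumed simplification of the paper's f-approximations; the neighborhoods and target set are hypothetical.

```python
# Sketch of variable-precision neighborhood approximations: a neighborhood
# counts toward the lower approximation if its misclassification rate
# w.r.t. X stays within the tolerance beta (hypothetical data).

def misclassification(block, X):
    """Fraction of the neighborhood falling outside X."""
    return 1 - len(block & X) / len(block)

def vp_lower(neighborhoods, X, beta):
    return {x for x, n in neighborhoods.items()
            if misclassification(n, X) <= beta}

def vp_upper(neighborhoods, X, beta):
    return {x for x, n in neighborhoods.items()
            if misclassification(n, X) < 1 - beta}

nbhd = {1: {1, 2}, 2: {1, 2}, 3: {3, 4, 5}, 4: {3, 4, 5}, 5: {3, 4, 5}}
X = {1, 2, 3}
print(sorted(vp_lower(nbhd, X, 0.0)))   # [1, 2]  (classical lower approx.)
print(sorted(vp_lower(nbhd, X, 0.7)))   # [1, 2, 3, 4, 5]
```

Setting beta = 0 recovers the classical (strict-inclusion) approximations; raising beta admits neighborhoods that are "mostly" inside X, here the block {3, 4, 5} with rate 2/3.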


Author(s):  
S. Arjun Raj ◽  
M. Vigneshwaran

In this article, we use rough set theory to generate a set of decision concepts in order to solve a medical problem. Based on data officially published by the International Diabetes Federation (IDF), rough sets are used to diagnose diabetes. The lower and upper approximations of the decision concepts and their boundary regions are formulated here.
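The construction can be illustrated with a toy decision table (hypothetical patient records, not the IDF data): indiscernibility on the condition attributes yields the lower approximation (certain cases), the upper approximation (possible cases), and the boundary region of the decision concept "diabetic".

```python
# Illustrative sketch only: approximating the decision concept "diabetic"
# under indiscernibility on the condition attributes (made-up records).

patients = [
    {"glucose": "high", "bmi": "high", "diabetic": True},
    {"glucose": "high", "bmi": "high", "diabetic": False},  # conflicts with row 0
    {"glucose": "high", "bmi": "low",  "diabetic": True},
    {"glucose": "low",  "bmi": "low",  "diabetic": False},
]

def classes(rows, attrs):
    """Indiscernibility classes (as sets of row indices) on attrs."""
    blocks = {}
    for i, r in enumerate(rows):
        blocks.setdefault(tuple(r[a] for a in attrs), set()).add(i)
    return list(blocks.values())

X = {i for i, r in enumerate(patients) if r["diabetic"]}   # {0, 2}
blocks = classes(patients, ["glucose", "bmi"])
lower = {i for b in blocks if b <= X for i in b}    # certainly diabetic
upper = {i for b in blocks if b & X for i in b}     # possibly diabetic
print(sorted(lower), sorted(upper), sorted(upper - lower))
# [2] [0, 1, 2] [0, 1]  -> boundary holds the conflicting records
```

Records 0 and 1 share identical symptoms but different outcomes, so they land in the boundary region: the table cannot decide them at this granularity.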


Author(s):  
JIYE LIANG ◽  
ZHONGZHI SHI

Rough set theory is a relatively new mathematical tool for use in computer applications in circumstances characterized by vagueness and uncertainty. In this paper, we introduce the concepts of information entropy, rough entropy, and knowledge granulation in rough set theory, and establish the relationships among these concepts. These results are helpful for understanding the essence of concept approximation and for establishing granular computing in rough set theory.
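The three measures can be computed directly from a partition of the universe. The sketch below follows common formulations in the literature (the paper's exact notation may differ); under these definitions the identity E + GK = 1 falls out algebraically, since E = 1 - Σp_i² and GK = Σp_i².

```python
# Sketch of information entropy E, rough entropy Er, and knowledge
# granulation GK for a partition (definitions as commonly formulated;
# the partition below is hypothetical).
from math import log2

def measures(partition):
    n = sum(len(b) for b in partition)
    p = [len(b) / n for b in partition]
    info_entropy = sum(pi * (1 - pi) for pi in p)                     # E
    rough_entropy = sum(pi * log2(len(b))                             # Er
                        for pi, b in zip(p, partition))
    granulation = sum(pi * pi for pi in p)                            # GK
    return info_entropy, rough_entropy, granulation

E, Er, GK = measures([{1, 2}, {3, 4}, {5, 6, 7, 8}])
print(E, Er, GK)        # 0.625 1.5 0.375
print(E + GK)           # 1.0
```

Finer partitions push E up and GK down, which is the sense in which both measures track the "resolution" of the knowledge.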

