A comparison of two types of rough approximations based on Nj-neighborhoods

2021 ◽  
pp. 1-14
Author(s):  
Tareq M. Al-Shami ◽  
Ibtesam Alshammari ◽  
Mohammed E. El-Shafei

In 1982, Pawlak proposed the concept of rough sets as a novel mathematical tool to address the issues of vagueness and uncertain knowledge. Topological concepts and results are closely related to those of rough set theory; therefore, some researchers have investigated topological aspects and their applications in rough set theory. In this discussion, we study further properties of Nj-neighborhoods, especially those related to a topological space. Then, we define new kinds of approximation spaces and establish their main properties. Finally, we compare the approximations and accuracy measures introduced herein with their counterparts induced from the interior and closure topological operators and from E-neighborhoods.
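For orientation, a minimal sketch of the neighborhood-style operators that such approximation spaces build on, written in the classical Pawlak form; the exact Nj-based operators and accuracy measure compared in the paper may differ in detail:

```latex
% Sketch: neighborhood-based lower/upper approximations of X \subseteq U and accuracy,
% where N_j(x) denotes the N_j-neighborhood of x (the paper's precise definitions may vary).
\underline{\mathrm{apr}}_j(X) = \{\, x \in U : N_j(x) \subseteq X \,\}, \qquad
\overline{\mathrm{apr}}_j(X)  = \{\, x \in U : N_j(x) \cap X \neq \emptyset \,\}, \qquad
\alpha_j(X) = \frac{|\underline{\mathrm{apr}}_j(X)|}{|\overline{\mathrm{apr}}_j(X)|}
\quad \bigl(\overline{\mathrm{apr}}_j(X) \neq \emptyset\bigr)
```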

Author(s):  
Kanchana. M ◽  
Rekha. S

Rough set theory is a new mathematical tool for dealing with vague, imprecise, inconsistent and uncertain knowledge. In recent years, research on rough set theory and its applications has attracted increasing attention. In this paper, we introduce and analyze rough set theory and determine the factors for coronavirus diagnosis by using the indiscernibility matrix.
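As a rough illustration of how indiscernibility classes and a discernibility matrix can be computed from a symptom table, here is a minimal Python sketch; the attribute names and data are hypothetical placeholders, not the study's actual data.

```python
from itertools import combinations

# Hypothetical decision table: condition attributes are symptoms, decision is the diagnosis.
table = [
    {"fever": "high", "cough": "yes", "breathless": "no", "covid": "yes"},
    {"fever": "high", "cough": "yes", "breathless": "no", "covid": "yes"},
    {"fever": "no",   "cough": "no",  "breathless": "no", "covid": "no"},
    {"fever": "high", "cough": "no",  "breathless": "no", "covid": "no"},
]
conditions = ["fever", "cough", "breathless"]

def indiscernibility_classes(rows, attrs):
    """Group objects that agree on every attribute in attrs."""
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(classes.values())

def discernibility_matrix(rows, attrs, decision):
    """For each pair of objects with different decisions, record the attributes that distinguish them."""
    matrix = {}
    for i, j in combinations(range(len(rows)), 2):
        if rows[i][decision] != rows[j][decision]:
            matrix[(i, j)] = {a for a in attrs if rows[i][a] != rows[j][a]}
    return matrix

print(indiscernibility_classes(table, conditions))
print(discernibility_matrix(table, conditions, "covid"))
```

In this toy table every nonempty matrix entry contains "cough", so that single attribute already discerns the two decision classes; reading such entries is one common way a reduct of diagnostic factors is identified.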


2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Weidong Tang ◽  
Jinzhao Wu ◽  
Dingwei Zheng

The core concepts of rough set theory are information systems and approximation operators of approximation spaces. Approximation operators draw close links between rough set theory and topology. This paper is devoted to the discussion of fuzzy rough sets and their topological structures. Fuzzy rough approximations are further investigated. Fuzzy relations are studied by means of topology or lower and upper sets. Topological structures of fuzzy approximation spaces are given by means of pseudoconstant fuzzy relations. Fuzzy topologies satisfying the (CC) axiom are investigated. It is proved that there exists a one-to-one correspondence between the set of all preorder fuzzy relations and the set of all fuzzy topologies satisfying the (CC) axiom; the concept of fuzzy approximating spaces is introduced, and decision conditions under which a fuzzy topological space is a fuzzy approximating space are obtained. This illustrates that fuzzy relations and fuzzy approximation spaces can be studied by means of topology, and vice versa. Moreover, fuzzy pseudoclosure operators are examined.
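For reference, one widely used form of the fuzzy rough approximation operators is sketched below, using the standard min/max connectives; the paper's operators may be defined with other t-norms and implicators, so this is only an orientation.

```latex
% Fuzzy lower and upper approximations of a fuzzy set A under a fuzzy relation R on U
% (a common formulation with the Kleene--Dienes implicator and min t-norm; other choices exist).
(\underline{R}A)(x) = \inf_{y \in U} \max\bigl(1 - R(x,y),\, A(y)\bigr), \qquad
(\overline{R}A)(x) = \sup_{y \in U} \min\bigl(R(x,y),\, A(y)\bigr)
```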


2011 ◽  
pp. 1-37 ◽  
Author(s):  
Piotr Wasilewski ◽  
Dominik Slezak

We present three types of knowledge that can be specified according to rough set theory. Then, we present three corresponding types of algebraic structures appearing in rough set theory. This leads to the following three types of vagueness: crispness, classical vagueness, and a new concept of “intermediate” vagueness. We also propose two classifications of information systems and approximation spaces. Based on them, we differentiate between information and knowledge.


Author(s):  
S. Arjun Raj ◽  
M. Vigneshwaran

In this article, we use rough set theory to generate the set of decision concepts in order to solve a medical problem. Based on data officially published by the International Diabetes Federation (IDF), rough sets have been used to diagnose diabetes. The lower and upper approximations of decision concepts and their boundary regions are formulated here.
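A minimal sketch of how the lower and upper approximations and the boundary region of a decision concept can be computed from a decision table; the toy data below are hypothetical and are not the IDF figures used in the article.

```python
# Toy decision table: each object is described by condition attributes and a decision (illustrative only).
objects = {
    1: {"glucose": "high",   "bmi": "high",   "diabetic": "yes"},
    2: {"glucose": "high",   "bmi": "normal", "diabetic": "yes"},
    3: {"glucose": "normal", "bmi": "high",   "diabetic": "no"},
    4: {"glucose": "high",   "bmi": "normal", "diabetic": "no"},
}
conditions = ["glucose", "bmi"]
concept = {x for x, row in objects.items() if row["diabetic"] == "yes"}

# Equivalence classes of the indiscernibility relation over the condition attributes.
classes = {}
for x, row in objects.items():
    classes.setdefault(tuple(row[a] for a in conditions), set()).add(x)

# Lower approximation: classes contained in the concept; upper: classes meeting it; boundary: the difference.
lower = set().union(*[c for c in classes.values() if c <= concept])
upper = set().union(*[c for c in classes.values() if c & concept])
boundary = upper - lower

print("lower:", lower, "upper:", upper, "boundary:", boundary)
```

Objects 2 and 4 agree on every condition attribute but carry different decisions, so they fall into the boundary region, which is exactly the kind of inconsistency the approximations are meant to expose.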


Author(s):  
JIYE LIANG ◽  
ZHONGZHI SHI

Rough set theory is a relatively new mathematical tool for computer applications in circumstances characterized by vagueness and uncertainty. In this paper, we introduce the concepts of information entropy, rough entropy and knowledge granulation in rough set theory, and establish the relationships among these concepts. These results are very helpful for understanding the essence of concept approximation and establishing granular computing in rough set theory.
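One common way these quantities are written, stated here only as a sketch of the usual conventions (the paper's exact normalisations may differ): for a partition U/R = {X_1, ..., X_m} of the universe U,

```latex
% Information entropy, rough entropy and knowledge granulation for U/R = {X_1,...,X_m}
% (common conventions; the paper's definitions may be normalised differently).
E(R)   = \sum_{i=1}^{m} \frac{|X_i|}{|U|}\left(1 - \frac{|X_i|}{|U|}\right), \qquad
E_r(R) = \sum_{i=1}^{m} \frac{|X_i|}{|U|} \log_2 |X_i|, \qquad
GK(R)  = \frac{1}{|U|^{2}} \sum_{i=1}^{m} |X_i|^{2}
```

Under these conventions the identity E(R) = 1 - GK(R) follows directly, which illustrates the kind of relationship among the three quantities that the paper establishes.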


Author(s):  
B. K. Tripathy

Granular Computing has emerged as a framework in which information granules are represented and manipulated by intelligent systems, forming a unified conceptual and computing platform. Rough set theory as put forth by Pawlak is based on a single equivalence relation taken at a time; from a granular computing point of view, it is therefore single-granular computing. In 2006, Qian et al. introduced multigranular computing using rough sets, later called optimistic multigranular rough sets after they introduced a second type, pessimistic multigranular rough sets, in 2010. Since then, several properties of multigranulations have been studied, and further notions based on multigranular rough sets have been introduced; some of these, the Neighborhood-Based Multigranular Rough Sets (NMGRS) and the Covering-Based Multigranular Rough Sets (CBMGRS), have been added recently. In this chapter, the authors discuss all these topics on multigranular computing and suggest some problems for further study.
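For two equivalence relations R_1 and R_2 on U, the optimistic and pessimistic multigranular lower approximations are usually written as follows; this is a sketch of the standard two-granulation case, with upper approximations defined dually:

```latex
% Optimistic ("or") and pessimistic ("and") multigranular lower approximations of X \subseteq U
% for two granulations R_1, R_2 (standard two-relation case; upper approximations are dual).
\underline{R_1 + R_2}^{\,O}(X) = \{\, x \in U : [x]_{R_1} \subseteq X \ \text{or}\  [x]_{R_2} \subseteq X \,\}, \qquad
\underline{R_1 + R_2}^{\,P}(X) = \{\, x \in U : [x]_{R_1} \subseteq X \ \text{and}\ [x]_{R_2} \subseteq X \,\}
```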


Author(s):  
Benjamin Griffiths

Rough Set Theory (RST), since its introduction in Pawlak (1982), continues to develop as an effective tool in data mining. Within a set-theoretical structure, its remit is closely concerned with the classification of objects to decision attribute values, based on their description by a number of condition attributes. In RST, this classification is achieved through the construction of ‘if .. then ..’ decision rules. The development of RST has proceeded in many directions; amongst the earliest was the allowance for mis-classification in the constructed decision rules, namely the Variable Precision Rough Sets model (VPRS) (Ziarko, 1993), for which recent references include Beynon (2001), Mi et al. (2004), and Slezak and Ziarko (2005). Further developments of RST include its operation within a fuzzy environment (Greco et al., 2006) and a dominance relation based approach (Greco et al., 2004). The regular major international conferences ‘International Conference on Rough Sets and Current Trends in Computing’ (RSCTC, 2004) and ‘International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing’ (RSFDGrC, 2005) continue to include RST research covering the varying directions of its development. This is true also for the associated book series entitled ‘Transactions on Rough Sets’ (Peters and Skowron, 2005), which further includes doctoral theses on this subject. RST is still evolving, and the eclectic attitude to its development means that the definitive concomitant RST data mining techniques are still to be realised. Grzymala-Busse and Ziarko (2000), in a defence of RST, discussed a number of points relevant to data mining and made comparisons between RST and other techniques. Within the area of data mining and the desire to identify relationships between condition attributes, the effectiveness of RST is particularly pertinent due to the inherent intent within RST type methodologies for data reduction and feature selection (Jensen and Shen, 2005); that is, identifying subsets of condition attributes that perform the same role as all the condition attributes in a considered data set (termed β-reducts in VPRS, see later). Chen (2001) addresses this when discussing the original RST, stating that it follows a reductionist approach and is lenient to inconsistent data (contradicting condition attributes - one aspect of underlying uncertainty). This encyclopaedia article describes and demonstrates the practical application of an RST type methodology in data mining, namely VPRS, using nascent software initially described in Griffiths and Beynon (2005). The use of VPRS, through its relatively simple structure, outlines many of the rudiments of RST based methodologies. The software utilised is oriented towards ‘hands on’ data mining, with graphs presented that clearly elucidate ‘veins’ of possible information identified from β-reducts, over different allowed levels of mis-classification associated with the constructed decision rules (Beynon and Griffiths, 2004). Further findings are briefly reported when undertaking VPRS in a resampling environment, with leave-one-out and bootstrapping approaches adopted (Wisnowski et al., 2003). The importance of these results lies in the identification of the more influential condition attributes, pertinent to accruing the most effective data mining results.
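In the precision-threshold formulation common in the VPRS literature cited here (a sketch only; Ziarko's original paper parameterises by an admissible error rather than a precision β), the β-approximations of a concept X are built from the conditional proportions of the equivalence classes E ∈ U/R:

```latex
% VPRS beta-lower (positive region) and beta-upper approximations of X,
% with precision threshold 0.5 < beta <= 1 (sketch; some authors use the error 1 - beta instead).
\underline{R}_{\beta}(X) = \bigcup \Bigl\{\, E \in U/R : \tfrac{|X \cap E|}{|E|} \ge \beta \,\Bigr\}, \qquad
\overline{R}_{\beta}(X)  = \bigcup \Bigl\{\, E \in U/R : \tfrac{|X \cap E|}{|E|} > 1 - \beta \,\Bigr\}
```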


2012 ◽  
Vol 3 (2) ◽  
pp. 38-52 ◽  
Author(s):  
Tutut Herawan

This paper presents an alternative way of constructing a topological space in an information system. Rough set theory for reasoning about data in information systems is used to construct the topology. Using the concept of an indiscernibility relation in rough set theory, it is shown that the topology constructed is a quasi-discrete topology. Furthermore, the dependency of attributes is applied to define a finer topology and to further characterize the roughness property of a set. Meanwhile, the notions of base and sub-base of the topology are applied to find attribute reductions and the degree of rough membership, respectively.
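A small Python sketch of the construction: the quasi-discrete topology generated by an indiscernibility partition consists of all unions of equivalence classes, so every open set is also closed. The example universe and attribute values are hypothetical, not taken from the paper.

```python
from itertools import combinations

# Hypothetical information system: each object is described by two attribute values.
rows = {"o1": ("a", "x"), "o2": ("a", "x"), "o3": ("b", "y"), "o4": ("b", "z")}

# Blocks (equivalence classes) of the indiscernibility relation: objects with identical descriptions.
blocks = {}
for obj, desc in rows.items():
    blocks.setdefault(desc, set()).add(obj)
blocks = [frozenset(b) for b in blocks.values()]

def unions_of_blocks(blocks):
    """The topology generated by the partition: every union of blocks (including the empty union)."""
    opens = set()
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            opens.add(frozenset().union(*combo))
    return opens

topology = unions_of_blocks(blocks)

# Quasi-discrete: the complement of every open set is again open (clopen sets).
universe = frozenset(rows)
assert all(universe - g in topology for g in topology)
print(sorted(sorted(g) for g in topology))
```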


2011 ◽  
Vol 230-232 ◽  
pp. 625-628
Author(s):  
Lei Shi ◽  
Xin Ming Ma ◽  
Xiao Hong Hu

E-business has grown rapidly in the last decade, and massive amounts of data on customer purchases, browsing patterns and preferences have been generated. Classification of electronic data plays a pivotal role in mining this valuable information and thus has become one of the most important applications of E-business. Support Vector Machines are popular and powerful machine learning techniques, and they offer state-of-the-art performance. Rough set theory is a formal mathematical tool to deal with incomplete or imprecise information, and one of its important applications is feature selection. In this paper, rough set theory and support vector machines are combined to construct a classification model to classify E-business data effectively.
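A hedged sketch of the kind of pipeline described: a rough-set dependency measure is used to greedily select condition attributes, and a support vector machine (scikit-learn's SVC, assumed available) is then trained on the reduced feature set. The data, attribute indices and helper names are synthetic placeholders, not the paper's e-business data or implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic, already-discretised data: rows are customers, columns are condition attributes.
X = np.array([[1, 0, 2], [1, 0, 1], [0, 1, 2], [0, 1, 1], [1, 1, 0], [0, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

def dependency(X, y, attrs):
    """Rough-set dependency: fraction of objects in the positive region of the partition induced by attrs."""
    if not attrs:
        return 0.0
    classes = {}
    for i, row in enumerate(X):
        classes.setdefault(tuple(row[list(attrs)]), []).append(i)
    positive = sum(len(idx) for idx in classes.values() if len(set(y[idx])) == 1)
    return positive / len(y)

# Greedy forward selection of a (possibly non-minimal) reduct.
selected, remaining = [], set(range(X.shape[1]))
while dependency(X, y, selected) < dependency(X, y, list(range(X.shape[1]))):
    best = max(remaining, key=lambda a: dependency(X, y, selected + [a]))
    selected.append(best)
    remaining.remove(best)

# Train the SVM classifier on the selected condition attributes only.
clf = SVC(kernel="rbf").fit(X[:, selected], y)
print("selected attributes:", selected, "train accuracy:", clf.score(X[:, selected], y))
```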


Author(s):  
Malcolm J. Beynon

Rough set theory (RST), since its introduction in Pawlak (1982), continues to develop as an effective tool in classification problems and decision support. In the majority of applications using RST based methodologies, there is the construction of ‘if .. then ..’ decision rules that are used to describe the results from an analysis. The variation of applications in management and decision making, using RST, recently includes discovering the operating rules of a Sicilian irrigation purpose reservoir (Barbagallo, Consoli, Pappalardo, Greco, & Zimbone, 2006), feature selection in customer relationship management (Tseng & Huang, 2007) and decisions that insurance companies make to satisfy customers’ needs (Shyng, Wang, Tzeng, & Wu, 2007). As a nascent symbolic machine learning technique, the popularity of RST is a direct consequence of its set theoretical operational processes, mitigating inhibiting issues associated with traditional techniques, such as within-group probability distribution assumptions (Beynon & Peel, 2001). Instead, the rudiments of the original RST are based on an indiscernibility relation, whereby objects are grouped into certain equivalence classes and inference taken from these groups. Characteristics like this mean that decision support will be built upon the underlying RST philosophy of “Let the data speak for itself” (Dunstch & Gediga, 1997). Recently, RST was viewed as being of fundamental importance in artificial intelligence and cognitive sciences, including decision analysis and decision support systems (Tseng & Huang, 2007). One of the first developments of RST was the variable precision rough sets model (VPRSβ), which allows a level of mis-classification to exist in the classification of objects, resulting in probabilistic rules (see Ziarko, 1993; Beynon, 2001; Li and Wang, 2004). VPRSβ has specifically been applied as a potential decision support system with the UK Monopolies and Mergers Commission (Beynon & Driffield, 2005), predicting bank credit ratings (Griffiths & Beynon, 2005) and diffusion of medicaid home care programs (Kitchener, Beynon, & Harrington, 2004). Further developments of RST include extended variable precision rough sets (VPRS(l,u)), which infer asymmetric bounds on the possible classification and mis-classification of objects (Katzberg & Ziarko, 1996), dominance-based rough sets, which base their approach on a dominance relation (Greco, Matarazzo, & Slowinski, 2004), fuzzy rough sets, which allow graded membership of objects to constructed sets (Greco, Inuiguchi, & Slowinski, 2006), and the probabilistic Bayesian rough sets model, which considers an appropriate certainty gain function (Ziarko, 2005). The diversity of work on RST can be viewed in the annual volumes of the Transactions on Rough Sets (most recent year 2006), and also in the annual conferences dedicated to RST and its developments (see for example, RSCTC, 2004). In this article, the theory underlying VPRS(l,u) is described, with its special case of VPRSβ used in an example analysis. The utilisation of VPRS(l,u), and VPRSβ, is without loss of generality to other developments such as those referenced; its relative simplicity allows the non-proficient reader the opportunity to fully follow the details presented.
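The asymmetric-bound variant can be sketched as follows, assuming bounds 0 ≤ l < u ≤ 1 (the paper's notation may differ): an equivalence class E enters the u-lower approximation when the conditional proportion of X in E reaches u, and the l-upper approximation when that proportion exceeds l; VPRSβ is recovered by taking u = β and l = 1 - β.

```latex
% VPRS_{l,u} approximations of X with asymmetric bounds 0 <= l < u <= 1 (sketch).
\underline{R}_{u}(X) = \bigcup \Bigl\{\, E \in U/R : \tfrac{|X \cap E|}{|E|} \ge u \,\Bigr\}, \qquad
\overline{R}_{l}(X)  = \bigcup \Bigl\{\, E \in U/R : \tfrac{|X \cap E|}{|E|} > l \,\Bigr\}
```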

