Uncertainty Analysis of Knowledge Reductions in Rough Sets

2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Ying Wang ◽  
Nan Zhang

Uncertainty analysis is a vital issue in intelligent information processing, especially in the age of big data. Rough set theory has attracted much attention in this field since it was proposed. Relative reduction is an important problem in rough set theory, and different relative reductions have been investigated to preserve specific classification abilities in various applications. This paper examines the uncertainty of five different relative reductions in four respects, namely the relationships among reducts, boundary region granularity, rule variance, and uncertainty measures, based on a constructed decision table.
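The notion of a relative reduct above can be made concrete on a toy decision table: a reduct is a minimal subset of condition attributes that preserves some classification ability, here (as one common choice, an assumption rather than the paper's exact criterion) the positive region. The table values and attribute indices below are purely illustrative.

```python
from itertools import combinations

# Toy decision table: each row is (condition attribute values, decision).
# The values are illustrative, not taken from the paper.
rows = [
    ((0, 0, 1), 0),
    ((0, 1, 1), 0),
    ((1, 0, 0), 1),
    ((1, 1, 0), 1),
    ((0, 1, 0), 1),
]

def partition(attr_idx):
    """Group objects that are indiscernible on the chosen attributes."""
    blocks = {}
    for i, (cond, _) in enumerate(rows):
        key = tuple(cond[j] for j in attr_idx)
        blocks.setdefault(key, []).append(i)
    return list(blocks.values())

def positive_region(attr_idx):
    """Objects whose indiscernibility class is decision-consistent."""
    pos = set()
    for block in partition(attr_idx):
        if len({rows[i][1] for i in block}) == 1:
            pos |= set(block)
    return pos

full = (0, 1, 2)
target = positive_region(full)

# Relative reducts: minimal attribute subsets preserving the positive region.
reducts = []
for r in range(1, len(full) + 1):
    for subset in combinations(full, r):
        if positive_region(subset) == target:
            reducts.append(subset)
    if reducts:
        break  # all minimal-size reducts found

print(reducts)  # [(2,)]: attribute 2 alone preserves the positive region
```

Brute-force enumeration is only feasible for tiny tables; practical reduct algorithms use heuristics, which is exactly where the different reduction criteria compared in the paper come in.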

2014 ◽  
Vol 1 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Sharmistha Bhattacharya Halder

The concept of the rough set was first developed by Pawlak (1982). Since then, it has been successfully applied in many research fields, such as pattern recognition, machine learning, knowledge acquisition, economic forecasting, and data mining. However, the original rough set model cannot effectively deal with noisy data sets, and latent useful knowledge in the boundary region may not be fully captured. To overcome such limitations, some extended rough set models have been put forward that combine rough sets with other available soft computing technologies, and many researchers have been motivated to investigate probabilistic approaches to rough set theory. The variable precision rough set model (VPRSM) is one of the most important extensions. The Bayesian rough set model (BRSM) (Slezak & Ziarko, 2002), as a hybrid development between rough set theory and Bayesian reasoning, can deal with many practical problems that could not be effectively handled by the original rough set model. Based on the Bayesian decision procedure with minimum risk, Yao (1990) put forward a new model called the decision-theoretic rough set model (DTRSM), which brings new insights into probabilistic approaches to rough set theory. Throughout this paper, the concept of the decision-theoretic rough set is studied, and a new concept, the Bayesian decision-theoretic rough set, is introduced. Lastly, a comparative study is made between the Bayesian decision-theoretic rough set and the rough set defined by Pawlak (1982).
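The minimum-risk idea behind DTRSM can be sketched briefly: given losses for accepting, deferring, and rejecting an object under the two possible states, the Bayesian decision procedure yields two thresholds α and β that split objects into positive, boundary, and negative regions by their conditional probability. The loss values below are illustrative assumptions; only the threshold formulas follow the standard derivation.

```python
# Losses lam[(action, state)]: actions P/B/N = accept/defer/reject,
# states P/N = object belongs / does not belong to the concept.
# The numeric values are illustrative, not from the paper.
lam = {
    ("P", "P"): 0.0, ("P", "N"): 4.0,   # cost of accepting
    ("B", "P"): 1.0, ("B", "N"): 1.0,   # cost of deferring
    ("N", "P"): 6.0, ("N", "N"): 0.0,   # cost of rejecting
}

# Thresholds from the Bayesian minimum-risk rule.
alpha = (lam[("P", "N")] - lam[("B", "N")]) / (
    (lam[("P", "N")] - lam[("B", "N")]) + (lam[("B", "P")] - lam[("P", "P")]))
beta = (lam[("B", "N")] - lam[("N", "N")]) / (
    (lam[("B", "N")] - lam[("N", "N")]) + (lam[("N", "P")] - lam[("B", "P")]))

def region(pr):
    """Three-way decision by conditional probability Pr(X | [x])."""
    if pr >= alpha:
        return "POS"
    if pr <= beta:
        return "NEG"
    return "BND"

print(alpha, beta)  # 0.75 and about 0.167 for these losses
print(region(0.9), region(0.5), region(0.1))  # POS BND NEG
```

With loss-neutral choices of λ the thresholds degenerate to α = 1, β = 0, recovering Pawlak's original model, which is the sense in which DTRSM generalizes it.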


2019 ◽  
Vol 6 (1) ◽  
pp. 3-17 ◽  
Author(s):  
Yunlong Cheng ◽  
Fan Zhao ◽  
Qinghua Zhang ◽  
Guoyin Wang

2021 ◽  
Vol 40 (1) ◽  
pp. 1609-1621
Author(s):  
Jie Yang ◽  
Wei Zhou ◽  
Shuai Li

Vague sets are a further extension of fuzzy sets. In rough set theory, a target concept can be characterized by different rough approximation spaces when it is a vague concept. The uncertainty measure of vague sets in rough approximation spaces is an important issue: if the uncertainty measure is not accurate enough, different rough approximation spaces of a vague concept may yield the same result, making it impossible to strictly distinguish these approximation spaces for characterizing the vague concept. In this paper, this problem is solved from the perspective of similarity. First, based on the similarity between vague information granules (VIGs), we propose an uncertainty measure with strong distinguishing ability called rough vague similarity (RVS). Furthermore, by studying the multi-granularity rough approximations of a vague concept, we reveal how RVS changes with granularity and conclude that the RVS between any two rough approximation spaces can degenerate to a granularity measure and an information measure. Finally, a case study and related experiments verify that RVS performs better at reflecting the differences among rough approximation spaces that describe a vague concept.
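For readers unfamiliar with vague sets: each element carries an interval [t, 1 − f] bounded by a truth-membership degree t and a false-membership degree f with t + f ≤ 1. The sketch below shows one common interval-based similarity between vague values, averaged over a granule; this is a generic illustration and an assumption, not the paper's RVS measure.

```python
# A vague value is the interval [t, 1 - f] with t + f <= 1.
# This similarity is a standard textbook-style choice, not the paper's RVS.

def vague_similarity(a, b):
    """Similarity of two vague values a = (t_a, f_a), b = (t_b, f_b)."""
    (ta, fa), (tb, fb) = a, b
    return 1.0 - (abs(ta - tb) + abs(fa - fb)) / 2.0

def granule_similarity(A, B):
    """Average pointwise similarity over two equal-length vague granules."""
    assert len(A) == len(B)
    return sum(vague_similarity(x, y) for x, y in zip(A, B)) / len(A)

A = [(0.6, 0.2), (0.3, 0.5)]  # illustrative vague information granules
B = [(0.5, 0.3), (0.3, 0.5)]
print(granule_similarity(A, B))  # 0.95
```

A measure of this shape is sensitive to both endpoints of the interval, which is the kind of discriminating power the paper argues an uncertainty measure for vague concepts needs.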


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Hengrong Ju ◽  
Huili Dou ◽  
Yong Qi ◽  
Hualong Yu ◽  
Dongjun Yu ◽  
...  

The decision-theoretic rough set is a quite useful rough set model that introduces decision costs into probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation, which may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision monotonicity and cost decrease, two different algorithms are designed to compute reducts. The comparison between these two algorithms shows the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion can generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, so the uncertainty that comes from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost-minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
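The relaxation from the classical relation to a δ-cut quantitative one can be illustrated simply: instead of requiring two objects to agree on every attribute, relate them when the fraction of matching attributes reaches δ. The exact form of the relation in the paper may differ; this sketch only conveys the idea, with made-up attribute vectors.

```python
# delta-cut quantitative indiscernibility (illustrative form): x and y are
# related when the proportion of attributes on which they agree is >= delta.

def related(x, y, delta):
    matches = sum(1 for a, b in zip(x, y) if a == b)
    return matches / len(x) >= delta

x = (1, 0, 1, 1)
y = (1, 0, 0, 1)

print(related(x, y, 1.0))   # False: the classical relation demands all match
print(related(x, y, 0.75))  # True: 3 of 4 attributes agree
```

Setting δ = 1 recovers the classical indiscernibility relation, so the δ-cut model strictly generalizes the original one.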


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Feng Hu ◽  
Hang Li

Rough set theory is a powerful mathematical tool introduced by Pawlak to deal with imprecise, uncertain, and vague information. The neighborhood-based rough set model extends rough set theory by dividing the dataset into three regions, where the boundary region indicates overlap between majority-class and minority-class samples. Based on the distribution of the original dataset, only the minority-class samples in the boundary region, which overlap with majority-class samples, are oversampled. Thus, NRSBoundary-SMOTE expands the decision space of the minority class while shrinking that of the majority class. In experiments on four kinds of classifiers, NRSBoundary-SMOTE achieves higher accuracy than other methods with C4.5, CART, and KNN, but performs worse than SMOTE with SVM.
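The two steps described above, finding boundary minority samples and interpolating new ones SMOTE-style, can be sketched as follows. The neighborhood radius `eps`, the distance metric, and the toy points are assumptions for illustration; the paper's neighborhood model and sampling details may differ.

```python
import random

def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def boundary_minority(X, y, minority=1, eps=1.5):
    """Indices of minority samples whose eps-neighborhood contains a
    majority sample, i.e. samples lying in the boundary region."""
    picked = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        if yi != minority:
            continue
        nbrs = [y[j] for j, xj in enumerate(X) if j != i and dist(xi, xj) <= eps]
        if any(lbl != minority for lbl in nbrs):  # overlaps the majority class
            picked.append(i)
    return picked

def smote_like(X, y, idx, minority=1, n_new=2, seed=0):
    """Interpolate new minority points between boundary samples and
    other minority samples, SMOTE-style."""
    rng = random.Random(seed)
    minority_pts = [X[i] for i, lbl in enumerate(y) if lbl == minority]
    new = []
    for _ in range(n_new):
        base = X[rng.choice(idx)]
        mate = rng.choice(minority_pts)
        t = rng.random()
        new.append(tuple(a + t * (b - a) for a, b in zip(base, mate)))
    return new

# Toy data: class 0 clusters near the origin, class 1 (minority) far away,
# except one minority point pushed into the majority cluster.
X = [(0, 0), (1, 0), (0, 1), (5, 5), (5, 6), (1.2, 0.8)]
y = [0, 0, 0, 1, 1, 1]

idx = boundary_minority(X, y)
print(idx)  # [5]: only the overlapping minority point is in the boundary
print(smote_like(X, y, idx))
```

Restricting oversampling to `idx` is what lets the method enlarge the minority decision space exactly where the classes overlap, rather than everywhere, as plain SMOTE does.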


Rough set theory is a mathematical method proposed by Pawlak. It has been developed to manage uncertainty in information containing missing values and noise, and it extends conventional set theory to support approximation in decision-making processes. The fundamental assumption of rough set theory is that some information is associated with every object of the universe. Rough set theory associates with each set two crisp sets, called the lower and upper approximations. The lower approximation of a set consists of all elements that surely belong to the set; the upper approximation consists of all elements that possibly belong to the set; and the boundary region consists of all elements that cannot be classified uniquely as belonging to the set or to its complement with respect to the available knowledge. Rough sets are applied in several domains, such as pattern recognition, medicine, finance, intelligent agents, telecommunications, control theory, vibration analysis, conflict resolution, image analysis, process industry, marketing, and banking risk assessment. This paper gives a detailed survey of rough set theory, its properties, and its various applications.
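The lower and upper approximations defined above are easy to compute once the universe is partitioned into equivalence classes of the indiscernibility relation. The universe, classes, and target set below are illustrative.

```python
# Toy universe partitioned by an indiscernibility relation (illustrative):
classes = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}  # the target set

# Lower approximation: union of classes entirely contained in X.
lower = set().union(*[c for c in classes if c <= X])
# Upper approximation: union of classes that intersect X.
upper = set().union(*[c for c in classes if c & X])
# Boundary region: elements that cannot be classified either way.
boundary = upper - lower

print(sorted(lower))     # [1, 2]: surely belong to X
print(sorted(upper))     # [1, 2, 3, 4]: possibly belong to X
print(sorted(boundary))  # [3, 4]: undecidable with this knowledge
```

A set is "rough" precisely when this boundary is non-empty: the available knowledge (the partition) is too coarse to describe X exactly.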


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 91089-91102
Author(s):  
Jianguo Tang ◽  
Jianghua Wang ◽  
Chunling Wu ◽  
Guojian Ou

2021 ◽  
Vol 8 (4) ◽  
pp. 2084-2094
Author(s):  
Vilat Sasax Mandala Putra Paryoko

The Proportional Feature Rough Selector (PFRS) is a feature selection method developed on the basis of Rough Set Theory (RST). It refines the division of the data set into several key regions: the lower approximation, the upper approximation, and the boundary region. PFRS exploits the boundary region to identify two smaller regions, the Member Section (MS) and the Non-Member Section (NMS). However, PFRS has so far been used only for feature selection in binary classification of text data. It was also developed without regard to relationships among features, so it could be improved by taking the correlation between features in the data set into account. This study therefore adapts PFRS to multi-label classification on mixed data, i.e. text and non-text data, and incorporates feature correlation to improve multi-label classification performance. Experiments were conducted on the public 515k Hotel Reviews and Netflix TV Shows data sets, using four classification methods: DT, KNN, NB, and SVM. The study compares PFRS feature selection on multi-label data against the extended, correlation-aware PFRS. The results show that PFRS improves classification performance; with correlation taken into account, PFRS yields accuracy gains of up to 23.76%. The extended PFRS also shows a significant speed improvement across all classification methods, so correlation-aware PFRS contributes to improving classification performance.

