Discretization Algorithm for Incomplete Economic Information in Rough Set Based on Big Data

Symmetry ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 1245
Author(s):  
Xiangyang Li ◽  
Yangyang Shen

Discretization based on rough sets divides the space formed by continuous attribute values with as few breakpoints as possible while maintaining the original indiscernibility relation of the decision system, so that related information can be accurately classified and identified. In this study, a discretization algorithm for incomplete economic information in rough sets based on big data is proposed. First, a deep-learning-based filling algorithm is used to complete the incomplete economic information. Then, a rough set discretization algorithm based on breakpoint discrimination is applied to discretize the completed economic information. The performance of the algorithm was tested on multiple data sets and compared with other algorithms. Experimental results show that the algorithm is effective for rough-set-based discretization of incomplete economic information: as the number of candidate breakpoints grows, it maintains high computational efficiency, effectively improves the completeness of the incomplete economic information, and achieves superior application performance.
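
The abstract does not give the algorithm's details, but the general idea of consistency-preserving breakpoint selection can be illustrated with a short, hypothetical Python sketch: candidate cuts are the midpoints between consecutive attribute values, and cuts are kept greedily only while pairs of objects with different decisions remain unseparated. The data, function names, and greedy scoring rule below are illustrative assumptions, not the authors' method.

```python
def candidate_breakpoints(values):
    """Midpoints between consecutive distinct attribute values."""
    distinct = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]

def separates(cut, values, i, j):
    """True if the cut places objects i and j on different sides."""
    return (values[i] < cut) != (values[j] < cut)

def greedy_discretize(values, decisions):
    """Greedily pick cuts that separate the most still-unseparated pairs of
    objects carrying different decisions (a simplified discrimination score)."""
    cuts = candidate_breakpoints(values)
    pending = {(i, j) for i in range(len(values)) for j in range(i + 1, len(values))
               if decisions[i] != decisions[j]}
    chosen = []
    while pending and cuts:
        best = max(cuts, key=lambda c: sum(separates(c, values, i, j) for i, j in pending))
        if not any(separates(best, values, i, j) for i, j in pending):
            break  # remaining pairs cannot be separated on this attribute
        chosen.append(best)
        cuts.remove(best)
        pending = {p for p in pending if not separates(best, values, *p)}
    return sorted(chosen)

if __name__ == "__main__":
    income = [1.2, 1.5, 2.0, 2.1, 3.3, 3.4]               # one continuous attribute
    label = ["low", "low", "mid", "mid", "high", "high"]  # decision attribute
    print(greedy_discretize(income, label))               # -> [1.75, 2.7]
```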

Kybernetes ◽  
2017 ◽  
Vol 46 (4) ◽  
pp. 693-705 ◽  
Author(s):  
Yasser F. Hassan

Purpose – This paper aims to utilize machine learning and soft computing to propose a new rough set method that uses a deep learning architecture for many real-world applications.
Design/methodology/approach – The objective of this work is to propose a model for deep rough set theory that uses more than one decision table and approximates these tables into a classification system; i.e., the paper proposes a novel framework of deep learning based on multiple decision tables.
Findings – The paper coordinates the local properties of the individual decision tables to provide an appropriate global decision from the system.
Research limitations/implications – Rough set learning assumes the existence of a single decision table, whereas real-world decision problems involve several decisions with several different decision tables. The newly proposed model can handle multiple decision tables.
Practical implications – The proposed classification model is implemented on social networks with preferred features that are freely distributed as social entities, with an accuracy of around 91 per cent.
Social implications – Deep learning using rough set theory simulates the way the brain thinks and can solve the problem of different information about the same problem existing in different decision systems.
Originality/value – This paper utilizes machine learning and soft computing to propose a new rough set method that uses a deep learning architecture for many real-world applications.
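
The framework itself is not specified in the abstract, but the idea of coordinating local decisions from several decision tables into one global decision can be sketched as follows; the nearest-match local rule, the fixed vote weights, and the toy social-network attributes are all assumptions made only for illustration.

```python
from collections import Counter

def local_decision(table, query):
    """Decision of the stored object most similar to the query
    (similarity = number of matching condition-attribute values)."""
    best = max(table, key=lambda row: sum(row["cond"].get(a) == v
                                          for a, v in query.items()))
    return best["dec"]

def global_decision(tables, weights, query):
    """Weighted vote over the local decisions of all decision tables."""
    votes = Counter()
    for table, w in zip(tables, weights):
        votes[local_decision(table, query)] += w
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    t1 = [{"cond": {"age": "young", "active": "yes"}, "dec": "connect"},
          {"cond": {"age": "old",   "active": "no"},  "dec": "ignore"}]
    t2 = [{"cond": {"posts": "many", "active": "yes"}, "dec": "connect"},
          {"cond": {"posts": "few",  "active": "no"},  "dec": "ignore"}]
    query = {"age": "young", "active": "yes", "posts": "many"}
    print(global_decision([t1, t2], weights=[0.6, 0.4], query=query))  # -> connect
```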


2008 ◽  
Vol 2008 ◽  
pp. 1-13 ◽  
Author(s):  
Aboul ella Hassanien ◽  
Mohamed E. Abdelhafez ◽  
Hala S. Own

The main goal of this study is to investigate the relationship between psychosocial variables and adherence in diabetic child patients, and to obtain a classifier function with which the patients can be classified on the basis of their assessed adherence level. Rough set theory is used to identify the most important attributes and to induce decision rules from 302 samples of Kuwaiti diabetic children aged 7–13 years. To increase the efficiency of the classification process, a rough set with Boolean reasoning discretization algorithm is introduced to discretize the data; the rough set reduction technique is then applied to find all reducts of the data, which contain the minimal subsets of attributes associated with a class label for classification. Finally, the rough set dependency rules are generated directly from all generated reducts. A rough confusion matrix is used to evaluate the performance of the predicted reducts and classes. A comparison has been made between the results obtained using rough sets and those of decision tree, neural network, and statistical discriminant analysis classifier algorithms. Rough sets show higher overall accuracy rates and generate more compact rules.
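
As a rough illustration of the reduct step described above, the following hypothetical Python sketch enumerates the minimal attribute subsets that preserve the consistency of a small discretized decision table; the toy adherence data and the brute-force search are assumptions and do not reflect the study's actual data set or tooling.

```python
from itertools import combinations

def consistent(table, attrs):
    """True if objects identical on `attrs` always share the same decision."""
    seen = {}
    for cond, dec in table:
        key = tuple(cond[a] for a in attrs)
        if seen.setdefault(key, dec) != dec:
            return False
    return True

def reducts(table, all_attrs):
    """All minimal attribute subsets as consistent as the full attribute set."""
    target = consistent(table, all_attrs)
    found = []
    for r in range(1, len(all_attrs) + 1):
        for subset in combinations(all_attrs, r):
            if consistent(table, subset) == target and \
               not any(set(f) <= set(subset) for f in found):
                found.append(subset)
    return found

if __name__ == "__main__":
    # toy table: (condition attributes, decision = adherence level)
    rows = [({"age": "7-9",   "support": "high", "hba1c": "low"},  "adherent"),
            ({"age": "7-9",   "support": "low",  "hba1c": "high"}, "non-adherent"),
            ({"age": "10-13", "support": "high", "hba1c": "low"},  "adherent"),
            ({"age": "10-13", "support": "low",  "hba1c": "high"}, "non-adherent")]
    print(reducts(rows, ["age", "support", "hba1c"]))  # -> [('support',), ('hba1c',)]
```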


2020 ◽  
Vol 39 (5) ◽  
pp. 7107-7122
Author(s):  
Zhang Chuanchao

In view of the big data volume, high feature dimensionality, and dynamic nature of large-scale intuitionistic fuzzy information systems, this paper integrates intuitionistic fuzzy rough sets with generalized dynamic sampling theory and proposes a generalized attribute reduction algorithm based on the similarity relation of intuitionistic fuzzy rough sets and dynamic reduction. It uses dynamic reduction sampling theory to divide a big data set into small data sets, uses the cardinality of the relative positive region instead of the dependency degree as the decision-making condition, obtains the reduced attributes of large intuitionistic fuzzy decision information systems, and thereby achieves the goal of extracting key features and diagnosing faults. The innovation of this paper is that it integrates generalized dynamic reduction with intuitionistic fuzzy rough sets and solves the big-data problem that intuitionistic fuzzy rough sets alone cannot handle. Taking an actual data set as an example, the scientific soundness, rationality, and effectiveness of the algorithm are verified in terms of stability, diagnostic accuracy, optimization ability, and time complexity. Compared with similar algorithms, the advantages of the proposed algorithm for big data processing are confirmed.
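
The positive-region criterion mentioned above can be sketched in simplified form; the snippet below uses a crisp indiscernibility relation instead of the paper's intuitionistic fuzzy similarity relation and omits the dynamic-sampling split, so it should be read only as an assumption-laden illustration of dropping attributes while the positive-region cardinality stays unchanged.

```python
from collections import defaultdict

def positive_region_size(rows, attrs):
    """|POS_attrs(D)|: number of objects whose indiscernibility class
    (w.r.t. `attrs`) is pure in the decision attribute."""
    classes = defaultdict(list)
    for cond, dec in rows:
        classes[tuple(cond[a] for a in attrs)].append(dec)
    return sum(len(decs) for decs in classes.values() if len(set(decs)) == 1)

def reduce_attributes(rows, attrs):
    """Greedy backward elimination keeping |POS| equal to that of the full set."""
    full = positive_region_size(rows, attrs)
    kept = list(attrs)
    for a in attrs:
        trial = [x for x in kept if x != a]
        if trial and positive_region_size(rows, trial) == full:
            kept = trial
    return kept

if __name__ == "__main__":
    # toy fault-diagnosis table: (condition attributes, decision)
    rows = [({"vib": "high", "temp": "high", "load": "low"},  "fault"),
            ({"vib": "high", "temp": "high", "load": "high"}, "fault"),
            ({"vib": "low",  "temp": "low",  "load": "low"},  "normal"),
            ({"vib": "low",  "temp": "high", "load": "high"}, "normal")]
    print(reduce_attributes(rows, ["vib", "temp", "load"]))  # -> ['vib']
```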


2019 ◽  
Vol 53 (3) ◽  
pp. 281-294
Author(s):  
Jean-Michel Foucart ◽  
Augustin Chavanne ◽  
Jérôme Bourriau

Many contributions of Artificial Intelligence (AI) to medicine are envisaged. In orthodontics, several automated solutions have been available for a few years in X-ray imaging (automated cephalometric analysis, automated airway analysis) or for a few months (automatic analysis of digital models, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of automated model analysis, both in terms of digitization and of segmentation. Comparing the model analysis results obtained automatically with those obtained by several orthodontists demonstrates the reliability of the automatic analysis; the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics which, based on deep learning and big data, should make it possible, in the medium term, to move towards a more preventive and more predictive orthodontics.


Author(s):  
S. Arjun Raj ◽  
M. Vigneshwaran

In this article we use rough set theory to generate the set of decision concepts in order to solve a medical problem. Based on data officially published by the International Diabetes Federation (IDF), rough sets have been used to diagnose diabetes. The lower and upper approximations of the decision concepts and their boundary regions are formulated here.
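
A minimal sketch of the lower approximation, upper approximation, and boundary region of a decision concept is given below; the toy glucose/BMI table is a placeholder assumption and is unrelated to the IDF data used by the authors.

```python
from collections import defaultdict

def approximations(rows, attrs, concept_decision):
    """Lower approximation, upper approximation and boundary region (as sets of
    object indices) of the concept X = {objects with decision == concept_decision}."""
    classes = defaultdict(set)
    for i, (cond, _) in enumerate(rows):
        classes[tuple(cond[a] for a in attrs)].add(i)
    concept = {i for i, (_, dec) in enumerate(rows) if dec == concept_decision}
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= concept:
            lower |= cls   # class certainly inside the concept
        if cls & concept:
            upper |= cls   # class possibly inside the concept
    return lower, upper, upper - lower

if __name__ == "__main__":
    rows = [({"glucose": "high", "bmi": "high"}, "diabetic"),
            ({"glucose": "high", "bmi": "high"}, "non-diabetic"),
            ({"glucose": "high", "bmi": "low"},  "diabetic"),
            ({"glucose": "low",  "bmi": "low"},  "non-diabetic")]
    lower, upper, boundary = approximations(rows, ["glucose", "bmi"], "diabetic")
    print(lower, upper, boundary)   # -> {2} {0, 1, 2} {0, 1}
```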


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms, the difficulty is meeting doctors at any time in the hospital; thus, big data provides essential data regarding diseases on the basis of the patient's symptoms. For several medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep learning concept. Here, different data sets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the data set is normalized in order to bring the attributes' ranges to a common level. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to create large-scale deviation. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are subjected to hybrid deep learning algorithms, namely the "Deep Belief Network (DBN)" and "Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Further, the comparative evaluation of the proposed prediction approach against existing models certifies its effectiveness through various performance measures.
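
The normalization and weighted feature extraction phases can be illustrated with a short sketch; the fixed weight vector below is a placeholder assumption standing in for the JA-MVO-optimized weights, and the sample attribute values are invented.

```python
import numpy as np

def min_max_normalize(X):
    """Scale every attribute (column) into [0, 1]."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / np.where(mx > mn, mx - mn, 1.0)

def weighted_features(X, weights):
    """Multiply each normalized attribute by its weight to stretch the scale of
    the more informative attributes (weights assumed to come from the optimizer)."""
    return min_max_normalize(X) * np.asarray(weights)

if __name__ == "__main__":
    X = np.array([[148.0, 33.6, 50.0],    # e.g. glucose, BMI, age (invented values)
                  [ 85.0, 26.6, 31.0],
                  [183.0, 23.3, 32.0]])
    w = [0.8, 0.5, 0.2]                   # placeholder weights, not JA-MVO output
    print(weighted_features(X, w))
```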

