On local multigranulation covering decision-theoretic rough sets

2021 ◽  
pp. 1-24
Author(s):  
Mengmeng Li ◽  
Chiping Zhang ◽  
Minghao Chen ◽  
Weihua Xu

Multi-granulation decision-theoretic rough sets use the granular structures induced by multiple binary relations to approximate a target concept, which yields a more accurate description of the approximation space. However, computing the approximations of the target set under this model is very time-consuming. Local rough sets not only inherit the advantages of classical rough sets in dealing with imprecise, fuzzy, and uncertain data, but also break through the limitation that classical rough sets require a large amount of labeled data. In this paper, to exploit both the computational efficiency of local rough sets and the more accurate approximation-space description of multi-granulation decision-theoretic rough sets, we combine the two in the covering approximation space to obtain the local multigranulation covering decision-theoretic rough set model. This provides an effective tool for discovering knowledge and making decisions over large data sets. We first propose four types of local multigranulation covering decision-theoretic rough set models in the covering approximation space, in which a target concept is approximated through the maximal or minimal descriptors of objects. Moreover, some important properties and decision rules are studied, and we explore reduction among the four types of models. Furthermore, we discuss the relationships between the proposed models and other representative models. Finally, an illustrative case of medical diagnosis is given to explain and evaluate the advantages of the proposed model.
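To make the core mechanism concrete, here is a minimal Python sketch of decision-theoretic three-way regions in a single covering approximation space, using the minimal descriptor of each object. The names (`minimal_descriptor`, `dtrs_regions`) and the thresholds alpha/beta are illustrative; the paper's local multigranulation model combines several coverings and is not reproduced here.

```python
# Sketch: three-way decision-theoretic regions in one covering space,
# thresholding Pr(target | md(x)) with (alpha, beta), alpha > beta.

def minimal_descriptor(x, cover):
    """Intersection of all covering blocks containing x (its minimal descriptor)."""
    blocks = [K for K in cover if x in K]
    return set.intersection(*blocks)

def dtrs_regions(universe, cover, target, alpha=0.7, beta=0.3):
    """Split the universe into positive/boundary/negative regions of `target`."""
    pos, bnd, neg = set(), set(), set()
    for x in universe:
        md = minimal_descriptor(x, cover)
        pr = len(md & target) / len(md)  # conditional probability of the target
        if pr >= alpha:
            pos.add(x)
        elif pr <= beta:
            neg.add(x)
        else:
            bnd.add(x)
    return pos, bnd, neg

universe = {1, 2, 3, 4, 5, 6}
cover = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {2, 3, 4}]
target = {1, 2, 3}
print(dtrs_regions(universe, cover, target))
```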

2014 ◽  
Vol 644-650 ◽  
pp. 2120-2123 ◽  
Author(s):  
De Zhi An ◽  
Guang Li Wu ◽  
Jun Lu

Many data mining methods are currently available. This paper studies the application of rough set methods in data mining, focusing on attribute reduction algorithms based on rough sets in the rule-extraction stage. In data mining, rough sets are often used for knowledge reduction and, on that basis, for rule extraction. Attribute reduction is one of the core research topics of rough set theory. In this paper, the traditional attribute reduction algorithm based on rough sets is studied and improved, and a new attribute reduction algorithm is proposed for data mining over large data sets.
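For reference, a minimal sketch of the classical positive-region attribute reduction that such work builds on, assuming a simple greedy strategy; the data layout and the improvement proposed in the paper are not shown.

```python
# Sketch: greedy positive-region reduct over a decision table.

from collections import defaultdict

def partition(rows, attrs):
    """Group row indices by their values on `attrs` (indiscernibility classes)."""
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].add(i)
    return list(classes.values())

def positive_region(rows, attrs, labels):
    """Indices whose indiscernibility class is consistent on the decision label."""
    pos = set()
    for cls in partition(rows, attrs):
        if len({labels[i] for i in cls}) == 1:
            pos |= cls
    return pos

def greedy_reduct(rows, all_attrs, labels):
    """Add the attribute that grows the positive region most, until it matches
    the positive region of the full attribute set."""
    full = positive_region(rows, all_attrs, labels)
    reduct = []
    while positive_region(rows, reduct, labels) != full:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: len(positive_region(rows, reduct + [a], labels)))
        reduct.append(best)
    return reduct

rows = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}]
labels = [0, 1, 1]
print(greedy_reduct(rows, ["a", "b"], labels))  # ["b"]: b alone decides the label
```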


Author(s):  
Mohammad Atique ◽  
Leena Homraj Patil

Attribute reduction and feature selection are central issues in rough set theory. Researchers have proposed several rough-set-based attribute reduction methods; however, the existing methods are time-consuming on large data sets. Since the key lies in reducing the attributes and selecting the relevant features, the main aim is to reduce the dimensionality of huge amounts of data to obtain a smaller subset that still provides the useful information. Feature selection reduces the dimensionality of the feature space and improves overall performance; the challenge is to cope with high-dimensional data. To address these issues, this chapter describes a feature selection method based on a proposed neighborhood positive approximation approach, together with attribute reduction for data sets. The proposed system performs attribute reduction and finds the relevant features. Evaluation shows that the proposed neighborhood positive approximation algorithm is effective and feasible for large data sets and also reduces the feature space.
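A minimal sketch of the neighborhood positive region, the building block behind neighborhood-based feature selection. The delta radius and data are illustrative assumptions; the chapter's positive-approximation accelerator is not reproduced.

```python
# Sketch: neighborhood positive region on numeric attributes.

import numpy as np

def neighborhood(X, i, attrs, delta):
    """Indices within Euclidean distance delta of sample i on `attrs`."""
    d = np.linalg.norm(X[:, attrs] - X[i, attrs], axis=1)
    return np.where(d <= delta)[0]

def positive_region(X, y, attrs, delta=0.2):
    """Samples whose delta-neighborhood on `attrs` is pure in the label y."""
    return {i for i in range(len(X))
            if len(set(y[neighborhood(X, i, attrs, delta)])) == 1}

X = np.array([[0.1, 0.9], [0.15, 0.8], [0.9, 0.1], [0.85, 0.2]])
y = np.array([0, 0, 1, 1])
print(positive_region(X, y, attrs=[0, 1]))  # all four: neighborhoods are pure
```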


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using the methods and software described in [1].
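A minimal numpy sketch of the two per-pixel operations described above: forming a difference spectrum by subtraction, or removing the 1 eV offset (20 channels at 20 channels/eV) and adding. Array names, shapes, and the direction of the shift are assumptions for illustration.

```python
# Sketch: per-pixel processing of two EELS spectra offset by 1 eV.

import numpy as np

CH_PER_EV = 20
OFFSET_CH = 1 * CH_PER_EV  # the two spectra are offset by 1 eV

def difference_spectrum(s1, s2):
    """Subtract the offset spectra; fixed-pattern detector artifacts cancel."""
    return s1 - s2

def summed_spectrum(s1, s2):
    """Numerically remove the energy offset, then add to get a normal spectrum."""
    s2_aligned = np.roll(s2, OFFSET_CH)
    return (s1 + s2_aligned)[OFFSET_CH:]  # drop channels wrapped by the shift

s1 = np.random.poisson(100, 1024).astype(float)
s2 = np.random.poisson(100, 1024).astype(float)
print(difference_spectrum(s1, s2).shape, summed_spectrum(s1, s2).shape)
```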


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
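A minimal sketch of the multivariate pipeline named above (PCA followed by cluster analysis) on a per-particle element-fraction matrix. It uses scikit-learn; the data and cluster count are invented for illustration, and none of the robustness issues noted above (skewed cluster sizes, zeros at detection limits, non-normality) are handled here.

```python
# Sketch: PCA projection then k-means clustering of particle compositions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((500, 8))                       # 500 particles x 8 element fractions
X /= X.sum(axis=1, keepdims=True)              # normalize rows to fractions

scores = PCA(n_components=3).fit_transform(X)  # project onto 3 principal components
clusters = KMeans(n_clusters=4, n_init=10).fit_predict(scores)
print(np.bincount(clusters))                   # particles per cluster
```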


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large arrays of information in distributed systems. A singular value decomposition method is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency in distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information due to low computing performance, so it is proposed to use distributed systems that apply singular value decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which testifies to the expediency of using distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, while MPI performs calculations more efficiently for small amounts of information. As the data sets grow, it is advisable to use the MapReduce model.
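A minimal sketch of the singular value decomposition step used to shrink the data before distribution: a rank-k truncation keeps most of the structure while cutting the volume each node must process. The rank k and matrix sizes are illustrative.

```python
# Sketch: rank-k truncated SVD as a data reduction step.

import numpy as np

def truncated_svd(A, k):
    """Rank-k approximation of A: keep the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

A = np.random.rand(1000, 200)
U, s, Vt = truncated_svd(A, k=20)
# Store/transmit the factors instead of A:
# 1000*20 + 20 + 20*200 values instead of 1000*200.
A_approx = U @ np.diag(s) @ Vt
print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))  # relative error
```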


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  

Computers ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 47
Author(s):  
Fariha Iffath ◽  
A. S. M. Kayes ◽  
Md. Tahsin Rahman ◽  
Jannatul Ferdows ◽  
Mohammad Shamsul Arefin ◽  
...  

A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants, who are required to write computer programs capable of solving them. An online judge system is used to automate the judging of the programs submitted by the users; online judges are systems designed for the reliable evaluation of submitted source code. Traditional online judging platforms are not ideally suited to programming labs, as they do not support partial scoring or efficient detection of plagiarized code. Considering this, in this paper we present an online judging framework that is capable of automatically scoring code by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by extracting fingerprints of programs and comparing the fingerprints instead of whole files. We used winnowing to select fingerprints among the k-gram hash values of a source code, generated by the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets, comparing its run time with MOSS, a widely used plagiarism detection technique.
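A minimal sketch of the fingerprinting scheme described above: rolling Rabin–Karp hashes over k-grams, then winnowing (keep the minimum hash in each window of w consecutive hashes). The parameters k, w, base, and modulus are illustrative, not the paper's settings.

```python
# Sketch: Rabin-Karp k-gram hashing plus winnowing fingerprint selection.

def kgram_hashes(text, k, base=257, mod=(1 << 61) - 1):
    """Rolling Rabin-Karp hash of every k-gram of `text`."""
    if len(text) < k:
        return []
    h = 0
    for c in text[:k]:
        h = (h * base + ord(c)) % mod
    hashes = [h]
    top = pow(base, k - 1, mod)
    for i in range(k, len(text)):
        h = ((h - ord(text[i - k]) * top) * base + ord(text[i])) % mod
        hashes.append(h)
    return hashes

def winnow(hashes, w):
    """Fingerprints: the rightmost minimum hash of each window of w hashes."""
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        j = w - 1 - window[::-1].index(m)  # rightmost minimum in the window
        fingerprints.add((i + j, m))       # position helps locate matches
    return fingerprints

def similarity(a, b, k=5, w=4):
    """Jaccard overlap of winnowed fingerprints (positions ignored)."""
    fa = {h for _, h in winnow(kgram_hashes(a, k), w)}
    fb = {h for _, h in winnow(kgram_hashes(b, k), w)}
    return len(fa & fb) / max(1, len(fa | fb))

print(similarity("int main(){return 0;}", "int main( ){ return 0; }"))
```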


2021 ◽  
Author(s):  
Věra Kůrková ◽  
Marcello Sanguineti
