On Foundations and Applications of the Paradigm of Granular Rough Computing

Author(s):  
Lech Polkowski ◽  
Maria Semeniuk-Polkowska

Granular computing, initiated by Lotfi A. Zadeh, has acquired wide popularity as a tool for approximate reasoning, fusion of knowledge, and cognitive computing. The need for formal methods of granulation, and for means of computing with granules, is addressed in this work by applying methods of rough mereology. Rough mereology is an extension of mereology that takes as its primitive notion the relation of being a part to a degree. Granules are formed as classes of objects that are parts, to a given degree, of a given object. In addition to an exposition of this mechanism of granulation, we also point to some applications, such as granular logics for approximate reasoning and classifiers built from granulated data sets.
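
The granulation mechanism can be illustrated with a short sketch. The following Python fragment is only an illustration of the idea, not the authors' rough mereological machinery: mu is an assumed "part to a degree" function on feature tuples, and a granule about a centre object at degree r simply collects the objects that are parts of the centre to degree at least r.

```python
# Minimal sketch of granulation via a "part to a degree" function mu(x, y) in [0, 1].
# Both mu and the universe below are invented for illustration.

def granule(centre, universe, mu, r):
    """Return the granule g_r(centre) = {y : mu(y, centre) >= r}."""
    return [y for y in universe if mu(y, centre) >= r]

def mu(y, x):
    """Illustrative 'part to a degree': fraction of matching features."""
    return sum(a == b for a, b in zip(y, x)) / len(x)

universe = [(1, 0, 1), (1, 1, 1), (0, 0, 0), (1, 0, 0)]
print(granule((1, 0, 1), universe, mu, r=2 / 3))   # objects agreeing on at least 2 of 3 features
```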

Author(s):  
Qing-Hua Zhang ◽  
Long-Yang Yao ◽  
Guan-Sheng Zhang ◽  
Yu-Ke Xin

In this paper, a new incremental knowledge acquisition method is proposed based on rough set theory, decision trees and granular computing. In order to process dynamic data effectively, describing the data with rough set theory, computing equivalence classes and calculating the positive region with a hash algorithm are analyzed first. Then, attribute reduction, value reduction and the extraction of the rule set are completed efficiently with the hash algorithm. Finally, for each newly added data item, the incremental knowledge acquisition method is applied to update the original rules. Both algorithm analysis and experiments show that, for processing dynamic information systems, the time complexity of the proposed algorithm is lower than that of the traditional algorithms and of the incremental knowledge acquisition algorithms based on granular computing, owing to the efficiency of the hash algorithm, and that the algorithm is also more effective when dealing with huge data sets.
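
As a rough illustration of the hash-based step described above (a minimal sketch under an assumed data layout, not the authors' incremental algorithm), a Python dict can serve as the hash structure that groups the rows of a decision table into equivalence classes and collects the positive region:

```python
# Decision table as rows of condition-attribute values plus a decision value
# (layout assumed for illustration). Hashing (a dict keyed by the condition
# signature) builds the equivalence classes in one pass.
from collections import defaultdict

def equivalence_classes(rows, cond_idx):
    """Group row indices by their condition-attribute signature (hash key)."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        key = tuple(row[j] for j in cond_idx)
        classes[key].append(i)
    return classes

def positive_region(rows, cond_idx, dec_idx):
    """Indices of rows in equivalence classes with a unique decision value."""
    pos = []
    for members in equivalence_classes(rows, cond_idx).values():
        decisions = {rows[i][dec_idx] for i in members}
        if len(decisions) == 1:          # consistent class -> in the positive region
            pos.extend(members)
    return sorted(pos)

# Last column is the decision attribute.
rows = [("a", 1, "yes"), ("a", 1, "yes"), ("a", 2, "no"), ("b", 2, "no"), ("b", 2, "yes")]
print(positive_region(rows, cond_idx=[0, 1], dec_idx=2))   # -> [0, 1, 2]
```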


Author(s):  
Mamata Rath

Big data analytics is a refined process for the fusion of large data sets, comprising many data elements, with the aim of exposing hidden patterns, undetected associations, business logic, client preferences, and other helpful business information. Big data analytics involves demanding techniques to mine and extract relevant data, including the actions of probing a database, effectively mining the data, and querying and inspecting data in order to enhance the technical execution of various task segments. The capacity to synthesize large amounts of data can enable an organization to manage considerable information that can influence the business. In this way, the primary goal of big data analytics is to help business organizations gain an enhanced comprehension of their data and, subsequently, make efficient and informed decisions.


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-17 ◽  
Author(s):  
Yuan Gao ◽  
Xiangjian Chen ◽  
Xibei Yang ◽  
Pingxin Wang ◽  
Jusheng Mi

Recently, multigranularity has become an interesting topic, since different levels of granularity can provide different information from the viewpoint of Granular Computing (GrC). However, established research has paid less attention to investigating attribute reduction from the multigranularity view. This paper proposes an algorithm based on the multigranularity view. To construct a framework of multigranularity attribute reduction, two main problems are addressed: (1) the multigranularity structure must be constructed first; in this paper it is constructed on the basis of radii, as different information granularities can be induced by employing different radii, which yields neighborhood-based multigranularity; (2) attribute reduction must be designed and realized from the viewpoint of multigranularity. Unlike the traditional process, which computes a reduct with a fixed granularity, our algorithm aims to obtain the reduct from the multigranularity viewpoint. To realize the new algorithm, two main steps are executed: (1) considering that different decision classes may require different key condition attributes, an ensemble selector is applied over the decision classes; (2) to accelerate the process of attribute reduction, only the finest and the coarsest granularities are employed. Experiments over 15 UCI data sets were conducted. Compared with the traditional single-granularity approach, the multigranularity algorithm not only generates reducts that provide better classification accuracy but also reduces the elapsed time. This study suggests new directions for considering both the classification accuracy and the time efficiency of reducts.
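
A minimal sketch of neighborhood-based granulation may help make the role of the radius concrete. The code below is purely illustrative (the data, attribute set and radii are made up); it only shows how different radii induce finer or coarser information granules, not the paper's ensemble reduction algorithm:

```python
# Neighborhood granulation: the granule of a sample, for a given radius, is the
# set of samples whose distance over the selected condition attributes is within
# that radius. Smaller radii give finer granularities, larger radii coarser ones.
import numpy as np

def neighborhood(data, i, attrs, radius):
    """Indices of samples within `radius` of sample i over attributes `attrs`."""
    diffs = data[:, attrs] - data[i, attrs]
    dists = np.linalg.norm(diffs, axis=1)
    return np.where(dists <= radius)[0]

data = np.array([[0.10, 0.20], [0.15, 0.22], [0.90, 0.80], [0.88, 0.79]])
for r in (0.05, 0.10, 0.50):     # from a finer to a coarser granularity
    print(r, [neighborhood(data, i, attrs=[0, 1], radius=r).tolist() for i in range(len(data))])
```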


Author(s):  
K. G. Srinivasa ◽  
K. R. Venugopal ◽  
L. M. Patnaik

Efficient tools and algorithms for knowledge discovery in large data sets have been devised in recent years. These methods exploit the capability of computers to search huge amounts of data in a fast and effective manner. However, the data to be analyzed is imprecise and afflicted with uncertainty. In the case of heterogeneous data sources such as text, audio and video, the data may moreover be ambiguous and partly conflicting. Besides, patterns and relationships of interest are usually vague and approximate. Thus, in order to make the information mining process more robust or, so to speak, human-like, methods for searching and learning require tolerance towards imprecision, uncertainty and exceptions; that is, they must have approximate reasoning capabilities and be capable of handling partial truth. Properties of this kind are typical of soft computing. Soft computing techniques such as Genetic Algorithms (GA), Artificial Neural Networks, Fuzzy Logic, Rough Sets and Support Vector Machines (SVM) have been found to be effective when used in combination. Therefore, soft computing algorithms are used to accomplish data mining across different applications (Mitra S, Pal S K & Mitra P, 2002; Alex A Freitas, 2002). Extensible Markup Language (XML) is emerging as a de facto standard for information exchange among applications on the World Wide Web, owing to XML's inherent self-describing capacity and its flexibility in organizing data. In an XML representation, semantics are associated with the contents of the document by means of self-describing tags which can be defined by the users. Hence XML can be used as a medium for interoperability over the Internet. With these advantages, the amount of data being published on the Web in the form of XML is growing enormously, and many naïve users find the need to search over large XML document collections (Gang Gou & Rada Chirkova, 2007; Luk R et al., 2000).
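
To illustrate the self-describing character of XML mentioned above, the following small Python example (with a hypothetical document) parses a fragment using the standard xml.etree.ElementTree module; the tag names themselves convey the semantics of the content:

```python
# Illustration of XML's self-describing tags on an invented document.
import xml.etree.ElementTree as ET

doc = """
<catalog>
  <item id="1">
    <name>sensor reading</name>
    <value unit="C">21.5</value>
  </item>
</catalog>
"""

root = ET.fromstring(doc)
for item in root.findall("item"):   # tag names carry the semantics of the data they hold
    print(item.get("id"), item.findtext("name"),
          item.find("value").get("unit"), item.findtext("value"))
```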


2016 ◽  
Vol 2 (3) ◽  
pp. 141-158 ◽  
Author(s):  
Giuseppe D’Aniello ◽  
Angelo Gaeta ◽  
Vincenzo Loia ◽  
Francesco Orciuoli

2014 ◽  
Vol 54 (1) ◽  
pp. 147
Author(s):  
Peter Goldschmidt ◽  
Charles Crawley ◽  
Bashirul Haq ◽  
Santhosh Palanisamy

A well performing as expected is healthy. It is essential for efficient field operations to detect well health (WH) issues that may reduce the production efficiency and/or the overall recovery of an asset. The authors describe an early predictive WH meta-monitoring tool called ARTAM-WH, developed to assist in maintaining stable operations (with high recovery). This is achieved by routinely checking the relevant individual WH parameters in the context of the well's operating environment. As alert complexity increases, so does the risk of false alerts. Moreover, existing systems raise alarms/alerts only after significant deviation, without completing extensive cross-checking. An early predictive WH meta-monitoring tool is therefore highly desirable, and this study fills the gap. ARTAM-WH is a new approach designed to provide early identification of existing or developing WH issues, which are precursors to WH problems if not addressed. ARTAM-WH uses in-situ monitoring systems, when available, or basic pressure, temperature and flow (gas, oil, water) data coupled with static data, conducts preliminary analysis and proper cross-checking, and then notifies the engineers/operators. What differentiates this approach is that it does not look at small data sets but at a large palette, including both static (well construct) data and dynamic current performance versus performance expectations. ARTAM-WH provides supporting evidence of WH issues to the appropriate stakeholder. Notification includes the necessary and sufficient evidence derived from approximate reasoning algorithms that combine multiple variables to identify possible issues and test the WH hypotheses. ARTAM-WH frees up engineers/operators to focus on higher-priority activities such as developing solutions, allowing such problems to be handled through normal planning rather than emergency fixes.
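
The cross-checking idea can be sketched informally. The fragment below is a hypothetical toy, not the ARTAM-WH implementation: deviations of several measured variables from their expected values are mapped to degrees in [0, 1] and combined into a single alert score, so that no single noisy reading triggers an alarm on its own. All variable names, tolerances and the threshold are invented for illustration.

```python
# Toy approximate-reasoning cross-check over several well variables.

def deviation_degree(measured, expected, tolerance):
    """0 while within tolerance, rising linearly towards 1 as the deviation grows."""
    dev = abs(measured - expected)
    return min(1.0, max(0.0, (dev - tolerance) / tolerance))

def health_alert(readings, expectations, tolerances, threshold=0.6):
    """Combine per-variable degrees (here by averaging) and compare to a threshold."""
    degrees = {k: deviation_degree(readings[k], expectations[k], tolerances[k]) for k in readings}
    score = sum(degrees.values()) / len(degrees)
    return score, score >= threshold, degrees

readings     = {"pressure": 180.0, "temperature": 95.0, "oil_rate": 820.0}
expectations = {"pressure": 200.0, "temperature": 90.0, "oil_rate": 1000.0}
tolerances   = {"pressure": 10.0, "temperature": 5.0, "oil_rate": 50.0}
print(health_alert(readings, expectations, tolerances))
```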


2016 ◽  
Vol 1 (2) ◽  
pp. 95-113 ◽  
Author(s):  
Andrzej Skowron ◽  
Andrzej Jankowski ◽  
Soma Dutta

Decision support in solving problems related to complex systems requires relevant computation models for the agents as well as methods for reasoning about the properties of the computations performed by agents. Agents perform computations on complex objects [e.g., (behavioral) patterns, classifiers, clusters, structural objects, sets of rules, aggregation operations, (approximate) reasoning schemes]. In Granular Computing (GrC), all such constructed and/or induced objects are called granules. To model the interactive computations performed by agents, which are crucial for complex systems, we extend the existing GrC approach to the Interactive Granular Computing (IGrC) approach by introducing complex granules (c-granules or granules, for short). Many advanced tasks concerning complex systems may be classified as control tasks performed by agents aiming at achieving high-quality computational trajectories relative to the considered quality measures defined over the trajectories. Here, the new challenges are to develop strategies to control, predict, and bound the behavior of the system. We propose to investigate these challenges using the IGrC framework. The reasoning that aims at controlling computations so as to achieve the required targets is called adaptive judgement. This reasoning deals with granules and the computations over them. Adaptive judgement is more than a mixture of reasoning based on deduction, induction and abduction. Due to uncertainty, the agents generally cannot predict exactly the results of actions (or plans). Moreover, the approximations of the complex vague concepts initiating actions (or plans) drift with time. Hence, adaptive strategies for evolving approximations of concepts are needed. In particular, adaptive judgement is very much needed in the efficiency management of granular computations carried out by agents for risk assessment, risk treatment, and cost/benefit analysis. In the paper, we emphasize the role of rough set-based methods in IGrC. The discussed approach is a step towards the realization of the Wisdom Technology (WisTech) program and has been developed over the years on the basis of work experience from different real-life projects.
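
The notion of a control task over computational trajectories admits a toy illustration. The sketch below is not the IGrC framework itself; under invented states, actions and a made-up quality measure, it merely shows an agent greedily extending a trajectory so as to keep its quality as high as possible:

```python
# Toy control task over a computational trajectory (purely illustrative).

def control_trajectory(state, actions, quality, steps=5):
    """Greedy control: extend the trajectory by the action maximising its quality."""
    trajectory = [state]
    for _ in range(steps):
        state = max((a(state) for a in actions), key=lambda s: quality(trajectory + [s]))
        trajectory.append(state)
    return trajectory

# Invented example: states are numbers, quality rewards staying close to a target.
actions = [lambda s: s + 1, lambda s: s - 1, lambda s: s]
quality = lambda traj: -abs(traj[-1] - 3)        # closer to 3 is better
print(control_trajectory(0, actions, quality))   # -> [0, 1, 2, 3, 3, 3]
```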

