Incremental Algorithm: Recently Published Documents

Total documents: 248 (five years: 38)
H-index: 25 (five years: 2)

2021. Author(s): Yu Hu, Yan Zhu Hu, Zhong Su, Xiao Li Li, Zhen Meng, et al.

Abstract: As an effective tool for data analysis, Formal Concept Analysis (FCA) is widely used in software engineering and machine learning. The construction of the concept lattice is a key step of FCA, and how to update the concept lattice efficiently remains an open and important issue. The main aim of this paper is to provide a solution to this problem: we propose an incremental algorithm for concept lattice construction based on image structure similarity (SsimAddExtent). In addition, we provide a time complexity analysis and experiments that demonstrate the effectiveness of the algorithm.
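The incremental step in this family of algorithms can be illustrated with a minimal sketch (not the paper's SsimAddExtent): when a new object arrives, the family of concept intents grows only by intersections of the new object's intent with the intents already stored, so the lattice can be updated without rebuilding it from scratch. The toy context and helper names below are invented for illustration.

```python
# Minimal sketch of incremental concept-intent maintenance (not SsimAddExtent).
# The intents of a formal context are the full attribute set M together with
# all intersections of object intents, so adding one object only requires
# intersecting its intent with every intent already stored.

def update_intents(intents, new_object_intent):
    """Return the family of concept intents after adding one object."""
    updated = set(intents)
    for intent in intents:
        # Since M itself is in `intents`, this also adds the new object's intent.
        updated.add(intent & new_object_intent)
    return updated

# Toy context: objects described by subsets of the attributes in M.
M = frozenset({"a", "b", "c", "d"})
intents = {M}                                  # intents of the empty context
for obj in [{"a", "b"}, {"b", "c"}, {"a", "b", "d"}]:
    intents = update_intents(intents, frozenset(obj))

for intent in sorted(intents, key=len):
    print(set(intent))
```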


2021, Vol. 182, p. 108229. Author(s): Juan De La Torre Cruz, Francisco Jesús Cañadas Quesada, Damián Martínez-Muñoz, Nicolás Ruiz Reyes, Sebastián García Galán, et al.

2021, pp. 1-11. Author(s): Bin Qin

In practice, large and complex databases are ubiquitous, and the notion of homomorphism offers a mathematical tool for studying data compression in knowledge bases. This paper investigates knowledge bases in dynamic environments and their data compression with homomorphisms, where "dynamic" means that the underlying information systems must be updated over time as new information arrives. First, the relationships among knowledge bases, information systems and relation information systems are illustrated. Next, the idea of a non-incremental algorithm for data compression with homomorphism and the concept of a dynamic knowledge base are introduced, and two incremental algorithms for data compression with homomorphism in dynamic knowledge bases are presented. Finally, an experimental analysis demonstrates the application of the non-incremental and incremental algorithms to data compression when computing the knowledge reduction of dynamic knowledge bases.
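To make the idea of compression concrete, the toy sketch below merges attributes that induce the same partition of objects and keeps one representative per group. This is only a simple attribute-level illustration of homomorphism-style compression, not the paper's construction; the table and attribute names are invented.

```python
# Illustrative only: compress an information system by merging attributes that
# carry identical information, i.e. attributes whose value columns induce the
# same partition of objects. The surviving representatives play the role of
# the image system under a simple attribute-level mapping.

def compress_attributes(table):
    """table: dict attribute -> tuple of values (one per object).
    Returns (compressed_table, mapping from original attribute to representative)."""
    signature_to_rep = {}
    mapping = {}
    compressed = {}
    for attr, column in table.items():
        # Relabel each column by first-occurrence indices to capture its partition.
        seen, labels = {}, []
        for value in column:
            labels.append(seen.setdefault(value, len(seen)))
        signature = tuple(labels)
        rep = signature_to_rep.setdefault(signature, attr)
        mapping[attr] = rep
        if rep == attr:
            compressed[attr] = column
    return compressed, mapping

system = {
    "a1": ("x", "x", "y", "z"),
    "a2": (1, 1, 2, 3),        # same partition as a1 -> merged into a1
    "a3": ("p", "q", "q", "p"),
}
small, attr_map = compress_attributes(system)
print(attr_map)   # {'a1': 'a1', 'a2': 'a1', 'a3': 'a3'}
print(small)      # only representative attributes are kept
```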


2021, Vol. 15. Author(s): Gopalakrishnan Srinivasan, Kaushik Roy

Spiking neural networks (SNNs), with their inherent capability to learn sparse spike-based input representations over time, offer a promising solution for enabling the next generation of intelligent autonomous systems. Nevertheless, end-to-end training of deep SNNs is both compute- and memory-intensive because of the need to backpropagate error gradients through time. We propose BlocTrain, a scalable and complexity-aware incremental algorithm for memory-efficient training of deep SNNs. We divide a deep SNN into blocks, where each block consists of a few convolutional layers followed by a classifier, and train the blocks sequentially using local errors from their classifiers. Once a given block is trained, our algorithm dynamically identifies easy vs. hard classes using the class-wise accuracy and trains the deeper block only on the hard-class inputs. In addition, we incorporate a hard-class detector (HCD) per block that is used during inference to exit early for easy-class inputs and to activate the deeper blocks only for hard-class inputs. Using BlocTrain, we trained a ResNet-9 SNN divided into three blocks on CIFAR-10 and obtained 86.4% accuracy, with up to 2.95× lower memory requirement during training, 1.89× higher compute efficiency per inference (due to the early-exit strategy), and 1.45× memory overhead (primarily due to classifier weights) compared to the end-to-end network. We also trained a ResNet-11, divided into four blocks, on CIFAR-100 and obtained 58.21% accuracy, one of the first reported accuracies for an SNN trained entirely with spike-based backpropagation on CIFAR-100.
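A rough sketch of the block-wise training pattern described in the abstract is given below. It is not the authors' BlocTrain implementation: it uses plain (non-spiking) convolutional blocks and random tensors in place of CIFAR-10, and the 0.5 hard-class threshold is purely illustrative.

```python
# Hedged sketch of block-wise local training with hard-class routing
# (illustrative only; plain ANN blocks stand in for spiking layers).
import torch
import torch.nn as nn

def make_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(), nn.AvgPool2d(2))

class LocalHead(nn.Module):
    """Small per-block classifier providing the local training signal."""
    def __init__(self, ch, num_classes=10):
        super().__init__()
        self.pool, self.fc = nn.AdaptiveAvgPool2d(1), nn.Linear(ch, num_classes)
    def forward(self, x):
        return self.fc(self.pool(x).flatten(1))

blocks = nn.ModuleList([make_block(3, 16), make_block(16, 32)])
heads = nn.ModuleList([LocalHead(16), LocalHead(32)])
criterion = nn.CrossEntropyLoss()

# Random tensors stand in for a CIFAR-10 batch.
features = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

for block, head in zip(blocks, heads):
    opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(5):                              # a few local steps per block
        opt.zero_grad()
        loss = criterion(head(block(features)), labels)
        loss.backward()                             # error stays local to this block
        opt.step()
    with torch.no_grad():
        features = block(features)                  # frozen input for the next block
        preds = head(features).argmax(1)
        acc = {c: (preds[labels == c] == c).float().mean().item()
               for c in range(10) if (labels == c).any()}
    # Classes the block already handles well are "easy"; only hard-class
    # inputs are passed on to train the deeper block (0.5 is illustrative).
    hard = {c for c, a in acc.items() if a < 0.5}
    keep = torch.tensor([int(c) in hard for c in labels])
    if keep.any():
        features, labels = features[keep], labels[keep]
```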


2021. Author(s): Shaoxia Zhang, Deyu Li, Yanhui Zhai

Abstract: Decision implication is an elementary representation of decision knowledge in formal concept analysis. The decision implication canonical basis (DICB), a set of decision implications that is complete and non-redundant, is the most compact representation of decision implications, and the method based on true premises (MBTP) is currently the most efficient way to generate it. In practical applications, however, data changes dynamically, and MBTP has to regenerate the whole DICB from scratch. This paper proposes an incremental algorithm for DICB generation, which obtains a new DICB simply by modifying and updating the existing one. Experimental results verify that when the number of samples greatly exceeds the number of condition attributes, which is the common case in practice, the incremental algorithm is significantly superior to MBTP. Furthermore, we conclude that even for data with fewer samples than condition attributes, when new samples are continually added, the incremental algorithm remains more efficient than MBTP, because it only needs to modify the existing DICB, which is only part of the work MBTP performs.


2021, Vol. 14 (8), pp. 1351-1364. Author(s): Wenfei Fan, Chao Tian, Yanghao Wang, Qiang Yin

This paper studies how to catch duplicates, mismatches and conflicts in the same process. We adopt a class of entity enhancing rules that embed machine learning predicates, unify entity resolution and conflict resolution, and are collectively defined across multiple relations. We detect discrepancies as violations of such rules. We establish the complexity of the discrepancy detection and incremental detection problems with these rules; both are NP-complete and W[1]-hard. To cope with the intractability and to scale to large datasets, we develop parallel algorithms and parallel incremental algorithms for discrepancy detection. We show that both algorithms are parallelly scalable, i.e., they guarantee to reduce runtime as more processors are used. Moreover, the parallel incremental algorithm is relatively bounded. The complexity bounds and algorithms carry over to denial constraints, a special case of the entity enhancing rules. Using real-life and synthetic datasets, we experimentally verify the effectiveness, scalability and efficiency of the algorithms.
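For a flavour of what incremental detection buys in the denial-constraint special case, the sketch below rechecks only tuple pairs that involve newly inserted tuples. The constraint, relation and field names are invented for illustration; this is not the paper's parallel algorithm.

```python
# Hedged illustration of incremental violation detection for a toy denial
# constraint: no two tuples may agree on "phone" but disagree on "name".
# When a batch of tuples is inserted, only pairs involving at least one new
# tuple need to be rechecked.

def violates(t1, t2):
    return t1["phone"] == t2["phone"] and t1["name"] != t2["name"]

def incremental_violations(old_tuples, old_violations, new_tuples):
    """Extend the known violation set instead of rechecking all pairs."""
    violations = set(old_violations)
    for i, t_new in enumerate(new_tuples):
        for t_old in old_tuples:                       # new vs. existing tuples
            if violates(t_new, t_old):
                violations.add((t_old["id"], t_new["id"]))
        for t_other in new_tuples[i + 1:]:             # new vs. new tuples
            if violates(t_new, t_other):
                violations.add((t_new["id"], t_other["id"]))
    return violations

db = [{"id": 1, "name": "Ann", "phone": "555-01"},
      {"id": 2, "name": "Bob", "phone": "555-02"}]
known = set()                                          # no violations initially
delta = [{"id": 3, "name": "Anne", "phone": "555-01"}] # conflicts with tuple 1
known = incremental_violations(db, known, delta)
print(known)                                           # {(1, 3)}
```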


2021, Vol. 17 (2), pp. 39-62. Author(s): Nguyen Long Giang, Le Hoang Son, Nguyen Anh Tuan, Tran Thi Ngan, Nguyen Nhu Son, et al.

The tolerance rough set model is an effective tool for solving the attribute reduction problem directly on incomplete decision systems without pre-processing missing values. In practical applications, incomplete decision systems are often changed and updated, especially when attributes are added or removed. To find reducts on dynamic incomplete decision systems, researchers have proposed many incremental algorithms that decrease execution time. However, these incremental algorithms are mainly based on the filter approach, in which classification accuracy is evaluated only after the reduct has been obtained; as a result, such filter algorithms do not achieve the best results in terms of either the number of attributes in the reduct or classification accuracy. This paper proposes two distance-based filter-wrapper incremental algorithms: IFWA_AA for the case of adding attributes and IFWA_DA for the case of deleting attributes. Experimental results show that the proposed filter-wrapper incremental algorithm IFWA_AA significantly decreases the number of attributes in the reduct and improves classification accuracy compared to filter incremental algorithms such as UARA and IDRA.
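The tolerance relation underlying this model is easy to state. The sketch below (not IFWA_AA/IFWA_DA, and using an invented toy table) computes tolerance classes in an incomplete decision table and shows how adding a condition attribute refines them, which is the kind of quantity an incremental algorithm updates rather than recomputes from scratch.

```python
# Minimal sketch of the tolerance relation for incomplete decision systems:
# a missing value (None) is tolerant of everything, and two objects are
# tolerant w.r.t. a set of attributes if they are tolerant on each attribute.

def tolerant(x, y, attrs):
    return all(x[a] == y[a] or x[a] is None or y[a] is None for a in attrs)

def tolerance_classes(table, attrs):
    """For each object index, the set of indices of objects tolerant with it."""
    return {i: {j for j, y in enumerate(table) if tolerant(x, y, attrs)}
            for i, x in enumerate(table)}

# Toy incomplete decision table: condition attributes a1, a2 and decision d.
table = [
    {"a1": 1,    "a2": "x",  "d": 0},
    {"a1": 1,    "a2": None, "d": 0},
    {"a1": 2,    "a2": "x",  "d": 1},
    {"a1": None, "a2": "y",  "d": 1},
]

print(tolerance_classes(table, ["a1"]))          # coarser tolerance classes
print(tolerance_classes(table, ["a1", "a2"]))    # adding a2 refines the classes
```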

