Null Models for Formal Contexts

Information ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 135
Author(s):  
Maximilian Felde ◽  
Tom Hanika ◽  
Gerd Stumme

Null model generation for formal contexts is an important task in the realm of formal concept analysis. These random models are particularly useful for, but not limited to, comparing the performance of algorithms. Nonetheless, a thorough investigation of how to generate null models for formal contexts is absent. Thus, we suggest a novel approach using Dirichlet distributions. We recollect and analyze the classical coin-toss model, recapitulate some of its shortcomings, and examine its stochastic properties. Building upon this, we propose a model that is capable of generating random formal contexts as well as null models for a given input context. Through an experimental evaluation we show that our approach is a significant improvement with respect to the variety of contexts generated. Furthermore, we demonstrate the applicability of our null models to real-world datasets.
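The contrast between the two generators can be sketched as follows. This is a simplified illustration, not the paper's exact Dirichlet construction: the function names, the single concentration parameter `alpha`, and the per-row density sampling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coin_toss_context(n_obj, n_att, p):
    """Classical coin-toss null model: every cell of the incidence
    relation is an independent Bernoulli(p) trial."""
    return rng.random((n_obj, n_att)) < p

def dirichlet_context(n_obj, n_att, alpha):
    """Hypothetical Dirichlet-flavoured generator (simplified): draw a
    per-object attribute-probability vector from Dirichlet(alpha) and
    fill each row by sampling attributes under it, so row densities and
    attribute frequencies vary far more than under a single global p."""
    ctx = np.zeros((n_obj, n_att), dtype=bool)
    for i in range(n_obj):
        probs = rng.dirichlet(np.full(n_att, alpha))
        k = int(rng.integers(1, n_att + 1))  # row density varies per object
        cols = rng.choice(n_att, size=k, replace=False, p=probs)
        ctx[i, cols] = True
    return ctx
```

Under the coin-toss model every row has expected density p, which is one of the shortcomings the abstract alludes to; the Dirichlet draw deliberately breaks that uniformity.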

2021 ◽  
Author(s):  
Yixuan Yang ◽  
Doo-Soon Park ◽  
Fei Hao ◽  
Sony Peng ◽  
Min-Pyo Hong ◽  
...  

Abstract In the era of artificial intelligence and the fourth industrial revolution, social network analysis is a significant topic in big data analysis. Clique detection is a state-of-the-art technique in social network structure mining and is widely used in particular social networks such as signed networks. Signed networks contain positive and negative relationships, which call for detecting not only cliques and maximal cliques but also maximal balanced cliques. In this paper, we address these problems with two algorithms. First, we modify the three-way concept lattice algorithm, using a modified formal context and a supplement formal context, to obtain an object-induced three-way concept lattice (OE-concept) that detects the maximal balanced cliques. Second, to reduce memory cost and improve efficiency, we modify the formal concept analysis algorithm, using the modified formal context combined with the supplement formal context, to find the maximal balanced cliques. Additionally, we test our proposed approaches, including their running time, on four real-world datasets in the experimental section.
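The balance condition behind "maximal balanced clique" can be checked directly: a clique in a signed network is balanced when its vertices split into two camps with positive edges inside each camp and negative edges between them. A minimal sketch of that check (not the paper's OE-concept construction; names are assumptions):

```python
from itertools import combinations

def is_balanced_clique(vertices, sign):
    """Check structural balance of a clique in a signed network.
    sign maps an unordered vertex pair to +1 (positive tie) or -1."""
    def s(u, v):
        return sign.get((u, v), sign.get((v, u)))
    # the sign of the edge to an anchor vertex decides each vertex's camp
    anchor = vertices[0]
    side = {anchor: 0}
    for v in vertices[1:]:
        side[v] = 0 if s(anchor, v) == 1 else 1
    # verify every pair agrees with the two-camp assignment
    return all(
        s(u, v) == (1 if side[u] == side[v] else -1)
        for u, v in combinations(vertices, 2)
    )
```

For example, a triangle with one positive and two negative edges is balanced, while an all-negative triangle is not.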


2022 ◽  
Vol 14 (1) ◽  
pp. 0-0

Discovering and using valuable, meaningful data hidden in large databases can be of strategic importance in the managerial decision-making process for organizations seeking competitive advantage. With the increasing data flow, it has become more difficult for organizations to store this data and extract useful knowledge for managing their business operations and functions. The knowledge discovery process, which is based on data mining methods, has been widely used in business operations and management functions. This paper investigates formal concept analysis, a powerful tool for knowledge representation and discovery, and explains association rule mining based on formal concept analysis. An experimental study is given for the employee selection function of HRM, using the formal concept analysis method to model the qualifications of candidates required for a job position. The candidates' qualifications are modelled with concept lattices and matched against those determined in the job specification.
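The matching step described above amounts to treating candidates as objects and qualifications as attributes of a formal context, then selecting candidates whose attribute set covers the job specification. A minimal sketch, with hypothetical candidate data (not from the paper):

```python
def intent(context, objects):
    """Attributes shared by all given objects (the derivation operator
    applied to a set of objects)."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def matching_candidates(context, job_spec):
    """Candidates whose qualification set covers the job specification."""
    return {c for c, quals in context.items() if job_spec <= quals}

# Hypothetical formal context: candidate -> set of qualifications
candidates = {
    "Alice": {"SQL", "Python", "HR experience"},
    "Bob":   {"SQL", "Excel"},
    "Carol": {"SQL", "Python", "Excel", "HR experience"},
}
```

Here `matching_candidates(candidates, {"SQL", "Python"})` selects Alice and Carol; in the paper this selection is read off the concept lattice rather than computed by direct set comparison.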


2012 ◽  
Vol 5 (4) ◽  
pp. 85-98 ◽  
Author(s):  
László Kovács

The morpheme analysis module is an important component of natural language processing engines. Parser modules are usually based on rule systems created by human experts. In this paper, a novel approach is tested for implementing the morpheme analyzer module. The proposed structure is based on the theory of formal concept analysis. Word inflection can be considered a classification problem in which the class label denotes the corresponding transformation rule. The main benefit of the proposed method is its efficient generalization. The proposed morpheme analyzer module was implemented in a prototype question generation application.
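The "transformation rule as class label" idea can be made concrete: a rule derived from an inflected form and its lemma is a (strip-suffix, add-suffix) pair, and classifying a word means predicting which such pair applies. A minimal sketch of the rule encoding (the FCA-based classifier itself is not shown; names are assumptions):

```python
def infer_rule(word, lemma):
    """Derive a transformation rule (strip, add) from an inflected form
    and its lemma; the rule acts as the class label."""
    i = 0
    while i < min(len(word), len(lemma)) and word[i] == lemma[i]:
        i += 1
    return (word[i:], lemma[i:])

def apply_rule(word, rule):
    """Apply a (strip, add) rule to an inflected word to recover a lemma."""
    strip, add = rule
    base = word[: len(word) - len(strip)] if strip else word
    return base + add
```

For instance, the pair ("running", "run") yields the rule ("ning", ""), and applying ("ice", "ouse") to "mice" recovers "mouse"; generalization comes from many words sharing one rule label.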


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph and subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions because of the complexity of exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection and perform well in many applications. However, most existing works use the state-of-the-art greedy 2-approximation algorithm, which provides solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. Some methods, on the other hand, can estimate multiple subtensors, but they give a density guarantee with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing both a theoretical and a practical solution for estimating multiple dense subtensors in tensor data with a higher lower bound on the density. In particular, we prove a higher lower bound on the density of the estimated subgraphs and subtensors. We also propose a novel approach showing that there are multiple dense subtensors whose density guarantee is greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, demonstrating its efficiency and feasibility.
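The greedy 2-approximation the abstract refers to is, in the graph case, the classic peeling algorithm: repeatedly remove a minimum-degree vertex and keep the intermediate subgraph with the highest average degree density |E|/|V|. A minimal, unoptimized sketch (an O(n·m) illustration, not the paper's algorithm):

```python
def density(edges, verts):
    """Average-degree density |E(verts)| / |verts| of an induced subgraph."""
    e = sum(1 for u, v in edges if u in verts and v in verts)
    return e / len(verts) if verts else 0.0

def densest_subgraph_peel(vertices, edges):
    """Greedy peeling: remove a minimum-degree vertex at each step and
    return the densest induced subgraph seen; a 2-approximation of the
    densest subgraph."""
    verts = set(vertices)
    best, best_d = set(verts), density(edges, verts)
    while len(verts) > 1:
        deg = {v: 0 for v in verts}
        for u, v in edges:
            if u in verts and v in verts:
                deg[u] += 1
                deg[v] += 1
        verts = verts - {min(verts, key=deg.__getitem__)}
        d = density(edges, verts)
        if d > best_d:
            best, best_d = set(verts), d
    return best, best_d
```

On a 4-clique with a pendant vertex attached, peeling strips the pendant first and returns the 4-clique with density 1.5, which is the behavior whose one-shot, loose-guarantee nature the paper sets out to improve.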


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot's state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments their performance degrades or they even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Second, based on this investigation, we propose a novel approach named EMO for eliminating moving objects in SLAM by fusing LiDAR and mmW-radar, aimed at improving the accuracy and robustness of state estimation. The method fully exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects are efficiently detected by radar via the Doppler effect, accurately segmented and localized by LiDAR, and then filtered out of the point clouds through data association and accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets.
The results demonstrate the effectiveness of the method at improving SLAM accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
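The fusion filtering step can be sketched as dropping LiDAR points that associate with radar detections whose Doppler (radial) speed exceeds a threshold. This is a hypothetical simplification of EMO's pipeline: the function name, the association radius, and the speed threshold are assumptions, and the actual method also performs segmentation and time-space synchronization.

```python
import numpy as np

def filter_moving_points(lidar_pts, radar_dets, v_thresh=0.5, assoc_radius=1.0):
    """Drop LiDAR points (N x 3, xyz) lying within assoc_radius of any
    radar detection (M x 4, xyz + Doppler speed) whose absolute radial
    speed exceeds v_thresh; the remainder approximates the static scene."""
    moving = radar_dets[np.abs(radar_dets[:, 3]) > v_thresh][:, :3]
    if moving.size == 0:
        return lidar_pts
    # pairwise distances between LiDAR points and moving detections
    dists = np.linalg.norm(lidar_pts[:, None, :] - moving[None, :, :], axis=2)
    keep = (dists > assoc_radius).all(axis=1)
    return lidar_pts[keep]
```

The filtered cloud then feeds the SLAM front end in place of the raw scan, which is the mechanism behind the reported accuracy gain.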

