Least General Generalizations in Description Logic: Verification and Existence

2020 ◽  
Vol 34 (03) ◽  
pp. 2854-2861
Author(s):  
Jean Christoph Jung ◽  
Carsten Lutz ◽  
Frank Wolter

We study two forms of least general generalizations in description logic, the least common subsumer (LCS) and most specific concept (MSC). While the LCS generalizes from examples that take the form of concepts, the MSC generalizes from individuals in data. Our focus is on the complexity of existence and verification, the latter meaning to decide whether a candidate concept is the LCS or MSC. We consider cases with and without a background TBox and a target signature. Our results range from coNP-complete for LCS and MSC verification in the description logic ℰℒ without TBoxes to undecidability of LCS and MSC verification and existence in ℰℒℐ with TBoxes. To obtain results in the presence of a TBox, we establish a close link between the problems studied in this paper and concept learning from positive and negative examples. We also give a way to regain decidability in ℰℒℐ with TBoxes and study single example MSC as a special case.
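In the TBox-free case the abstract refers to, the LCS of two ℰℒ concepts can be computed by the classical product construction on description trees: keep the shared top-level concept names and recurse over all matching pairs of existential restrictions. The sketch below is illustrative only; the `ELConcept` encoding and `lcs` helper are assumptions for exposition, not the paper's own notation or algorithm.

```python
# A minimal sketch of LCS computation for EL concepts without a TBox,
# via the product construction on description trees. Illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class ELConcept:
    names: frozenset   # top-level concept names in the conjunction
    exists: tuple      # pairs (role, ELConcept), one per ∃r.C conjunct


def lcs(c: ELConcept, d: ELConcept) -> ELConcept:
    """Least common subsumer of two EL concepts (no TBox)."""
    shared = c.names & d.names
    succs = []
    # Cross product: one ∃r.lcs(C', D') for every pair of r-successors
    # with the same role; this pairing is what makes the LCS grow
    # exponentially in general.
    for r1, c1 in c.exists:
        for r2, d1 in d.exists:
            if r1 == r2:
                succs.append((r1, lcs(c1, d1)))
    return ELConcept(shared, tuple(succs))


# Example: LCS of ∃hasChild.(Male ⊓ Rich) and ∃hasChild.(Female ⊓ Rich)
# is ∃hasChild.Rich.
a = ELConcept(frozenset(), (("hasChild", ELConcept(frozenset({"Male", "Rich"}), ())),))
b = ELConcept(frozenset(), (("hasChild", ELConcept(frozenset({"Female", "Rich"}), ())),))
print(lcs(a, b))
```

With a background TBox the picture changes fundamentally: as the abstract notes, the LCS need not exist at all, which is why verification and existence become separate decision problems.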

2021 ◽  
pp. 024-033
Author(s):  
O.V. Zakharova

Establishing the semantic similarity of information is an integral part of solving any information retrieval task, including tasks related to big data processing, discovery of semantic web services, and categorization and classification of information. Special functions that determine quantitative indicators of the degree of semantic similarity make it possible to rank the retrieved information by its semantic proximity to the goal or to the search request/template. Forming such measures should take many aspects into account, from the meanings of the matched concepts to the specifics of the business task in which the comparison is done. Usually, to construct such similarity functions, semantic approaches are combined with structural ones, which provide syntactic comparison of concept descriptions. This allows concept descriptions to be made more detailed, and the impact of syntactic matching can be significantly reduced by using more expressive description logics to represent information and by shifting the focus to semantic properties. Today, DL ontologies are the most developed tools for representing semantics, and the reasoning mechanisms of description logics (DLs) provide the possibility of logical inference. Most of the estimates presented in this paper are based on basic DLs that support only the intersection constructor, but the described approaches can be applied to any DL that provides basic reasoning services. This article analyzes existing DL-based approaches, models, and measures. A classification of the estimation methods by both the level at which similarity is defined and the type of matching is proposed. The main attention is paid to establishing similarity between concepts (conceptual-level models).
The task of establishing the similarity value between instances, or between a concept and an instance, consists of finding the most specific concept for the instance(s) and then evaluating the similarity between the resulting concepts. The term existential similarity is introduced. Examples of applying certain types of measures to evaluate the degree of semantic similarity of notions and/or knowledge, based on a geometry ontology, are demonstrated.
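One common family of structural measures in this line of work can be sketched as a Jaccard-style overlap of the atomic conjuncts of two concept descriptions. The set encoding and the toy geometry concepts below are illustrative assumptions, not the article's own formalism or ontology.

```python
# A hedged sketch of a Jaccard-style structural similarity over the sets
# of top-level atomic conjuncts of two concept descriptions.

def jaccard_similarity(c: set, d: set) -> float:
    """|C ∩ D| / |C ∪ D| over the sets of atomic conjuncts."""
    if not c and not d:
        return 1.0  # two empty descriptions are taken as identical
    return len(c & d) / len(c | d)


# Toy geometry concepts, each modeled as a conjunction of atomic features.
square = {"Polygon", "FourSides", "EqualSides", "RightAngles"}
rectangle = {"Polygon", "FourSides", "RightAngles"}
triangle = {"Polygon", "ThreeSides"}

print(jaccard_similarity(square, rectangle))  # 0.75
print(jaccard_similarity(square, triangle))   # 0.2
```

For instances, the same measure can be applied after computing the most specific concept of each instance, which is exactly the reduction to concept-level similarity that the abstract describes.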




Author(s):  
Thanh-Luong Tran ◽  
Quang-Thuy Ha ◽  
Thi-Lan-Giao Hoang ◽  
Linh Anh Nguyen ◽  
Hung Son Nguyen ◽  
...  

2018 ◽  
Vol 41 ◽  
Author(s):  
Daniel Crimston ◽  
Matthew J. Hornsey

Abstract. As a general theory of extreme self-sacrifice, Whitehouse's article misses one relevant dimension: people's willingness to fight and die in support of entities not bound by biological markers or ancestral kinship (allyship). We discuss research on moral expansiveness, which highlights individuals' capacity to self-sacrifice for targets that lie outside traditional in-group markers, including racial out-groups, animals, and the natural environment.


Author(s):  
Dr. G. Kaemof

A mixture of polycarbonate (PC) and styrene-acrylonitrile copolymer (SAN) is a very good example of the efficiency of electron microscopic investigations in determining optimum production procedures for high-grade product properties.
The following parameters were varied: composition of the charge (PC : SAN 50 : 50, 60 : 40, 70 : 30), kind of compounding machine (single-screw extruder, twin-screw extruder, discontinuous kneader), and mass temperature (lowest and highest possible temperature).
The transmission electron microscopic (TEM) investigations were carried out on ultra-thin sections, whose PC phase was selectively etched by triethylamine.
The phase transition (matrix to disperse phase) does not occur, as might be expected, at a PC to SAN ratio of 50 : 50, but at a ratio of 65 : 35. Our results show that the matrix is preferably formed by the component with the lower melt viscosity (in this special case SAN), even at concentrations of less than 50 %.


2016 ◽  
Vol 32 (3) ◽  
pp. 204-214 ◽  
Author(s):  
Emilie Lacot ◽  
Mohammad H. Afzali ◽  
Stéphane Vautier

Abstract. Test validation based on usual statistical analyses is paradoxical: from a falsificationist perspective, they do not test that test data are ordinal measurements, and, from an ethical perspective, they do not justify the use of test scores. This paper (i) proposes some basic definitions, in which measurement is a special case of scientific explanation, starting from the examples of memory accuracy and suicidality as scored by two widely used clinical tests/questionnaires. It then shows (ii) how to elicit the logic of the observable test events underlying the test scores, and (iii) how the measurability of the target theoretical quantities, memory accuracy and suicidality, can and should be tested at the scale of the individual respondent as opposed to the scale of aggregates of respondents. (iv) Criterion-related validity is revisited to stress that invoking the explanatory power of test data should draw attention to counterexamples instead of statistical summarization. (v) Finally, it is argued that justifying the use of test scores in specific settings should be part of the test validation task because, as test specialists, psychologists are responsible for proposing their tests for social uses.

