Ontology Learning from Thesauri

Author(s):  
Javier Nogueras-Iso ◽  
Javier Lacasta ◽  
Jacques Teller ◽  
Gilles Falquet ◽  
Jacques Guyot

Ontology learning encompasses the methods and techniques used for the (semi-)automatic processing of knowledge resources in order to facilitate knowledge acquisition during ontology construction. This chapter focuses on ontology learning techniques that use thesauri as input sources. Thesauri are one of the most promising sources for the creation of domain ontologies thanks to the richness of their term definitions, the a priori relationships between terms, and the consensus provided by their extensive use in the library context. Apart from reviewing the state of the art, this chapter shows how ontology learning techniques can be applied in the urban domain for the development of domain ontologies.
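
As a purely illustrative companion to the chapter's topic (not the authors' actual procedure), the minimal Python sketch below shows one common starting point for ontology learning from a thesaurus: mapping BT/NT (broader/narrower term) links onto a draft subclass hierarchy and keeping RT (related term) links as untyped associative relations for later refinement. The `ThesaurusTerm` records and the urban-domain terms are hypothetical.

```python
# Minimal sketch (not the chapter's actual method): deriving a draft class
# hierarchy from thesaurus relations. Assumes the thesaurus is available as
# simple term records with BT (broader term) and RT (related term) links.

from dataclasses import dataclass, field

@dataclass
class ThesaurusTerm:
    label: str
    broader: list = field(default_factory=list)   # BT links
    related: list = field(default_factory=list)   # RT links

def thesaurus_to_draft_ontology(terms):
    """Map BT/NT links to a subclass hierarchy and keep RT links as
    untyped associative relations to be refined manually later."""
    subclass_of = []      # (child, parent) pairs -> subclass candidates
    associated_with = []  # (term, term) pairs -> object-property candidates
    for t in terms:
        for parent in t.broader:
            subclass_of.append((t.label, parent))
        for other in t.related:
            associated_with.append((t.label, other))
    return {"subclass_of": subclass_of, "associated_with": associated_with}

# Hypothetical urban-domain fragment:
terms = [
    ThesaurusTerm("public transport", broader=["transport"]),
    ThesaurusTerm("tram", broader=["public transport"], related=["rail infrastructure"]),
]
print(thesaurus_to_draft_ontology(terms))
```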

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence, yet it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting the embedding space so that significant changes can be communicated more effectively and the underlying relationships in sensor data can be interpreted more comprehensively. We conduct this research as part of our work towards a method for aligning the axes of the latent embedding space with meaningful real-world metrics, so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic for the many fields concerned with producing more meaningful and explainable outputs from deep learning, and with providing means for knowledge injection and model calibration in order to maintain user confidence.
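
For illustration only (the authors' actual alignment method is not reproduced here), the sketch below shows one simple way to work with a latent embedding space in this spirit: project the embeddings onto their top principal axes and flag time steps whose projected jump is unusually large. The encoder producing the 64-dimensional vectors, the threshold rule and the simulated change are all hypothetical.

```python
# Illustrative sketch (not the authors' method): detecting change in a stream
# of learned embeddings by projecting them onto a few principal axes and
# flagging steps that jump much further than the typical step size.

import numpy as np

def project(embeddings, n_axes=2):
    """Project embeddings onto their top principal axes via SVD, a simple
    stand-in for aligning latent axes with more interpretable directions."""
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_axes].T

def change_points(embeddings, sigma=3.0):
    """Return indices where consecutive projected embeddings jump by more
    than `sigma` standard deviations above the mean step size."""
    proj = project(embeddings)
    deltas = np.linalg.norm(np.diff(proj, axis=0), axis=1)
    cutoff = deltas.mean() + sigma * deltas.std()
    return np.flatnonzero(deltas > cutoff) + 1

# Hypothetical sensor windows already encoded into 64-d vectors:
rng = np.random.default_rng(0)
stream = rng.normal(size=(100, 64))
stream[60:] += 3.0                      # simulated fine-grained state change
print(change_points(stream))
```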


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 37 ◽  
Author(s):  
Luca Cappelletti ◽  
Tommaso Fontana ◽  
Guido Walter Di Donato ◽  
Lorenzo Di Tucci ◽  
Elena Casiraghi ◽  
...  

Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have proposed novel, interesting solutions that have been applied in a variety of fields. Over the same period, the successful results achieved by deep learning techniques have opened the way to their application to difficult problems where human skill cannot provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques have not been designed to tackle “complex data”, that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. In particular, they often require critical parameters to be set manually, or rely on complex architectures and/or training phases that make their computational load impractical. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
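
As a hedged sketch of the general idea (not the architecture proposed in the paper), the snippet below implements a very small encoder-decoder style imputer in PyTorch: it is trained to reconstruct the observed entries of zero-filled rows and then fills the missing entries with its reconstruction. Layer sizes, training settings and the NaN-masking convention are illustrative assumptions.

```python
# Minimal sketch, not the paper's architecture: a small denoising
# autoencoder that learns to reconstruct rows whose missing entries (NaNs)
# were zero-filled, then fills the gaps with its output.
# Assumes numeric, roughly standardized features.

import numpy as np
import torch
import torch.nn as nn

class DAEImputer(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):
        return self.net(x)

def impute(data, epochs=200, lr=1e-2):
    """data: 2-D float array with np.nan marking missing entries."""
    mask = ~np.isnan(data)                       # True where observed
    filled = np.nan_to_num(data, nan=0.0)
    x = torch.tensor(filled, dtype=torch.float32)
    m = torch.tensor(mask, dtype=torch.float32)

    model = DAEImputer(data.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon = model(x)
        # Train only on observed entries; the missing ones are what we predict.
        loss = (((recon - x) * m) ** 2).sum() / m.sum()
        loss.backward()
        opt.step()

    with torch.no_grad():
        recon = model(x).numpy()
    return np.where(mask, data, recon)           # keep observed values as-is

# Hypothetical usage on a tiny matrix with two missing values:
toy = np.array([[1.0, 2.0, np.nan], [1.1, np.nan, 3.0], [0.9, 2.1, 2.9]])
print(impute(toy))
```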


2019 ◽  
Vol 4 (4) ◽  
pp. 828-849 ◽  
Author(s):  
Daniel C. Elton ◽  
Zois Boukouvalas ◽  
Mark D. Fuge ◽  
Peter W. Chung

We review a recent groundswell of work which uses deep learning techniques to generate and optimize molecules.


2012 ◽  
Vol 06 (04) ◽  
pp. 491-507 ◽  
Author(s):  
IVO SERRA ◽  
ROSARIO GIRARDI ◽  
PAULO NOVAIS

Learning Non-Taxonomic Relationships is a sub-field of Ontology Learning that aims at automating the extraction of these relationships from text. This article discusses the problem of Learning Non-Taxonomic Relationships of ontologies and proposes a generic process for approaching it. Some techniques representing the state of the art in this field are discussed, along with their advantages and limitations. Finally, a framework for Learning Non-Taxonomic Relationships being developed by the authors is briefly discussed. The framework is intended as a customizable solution for achieving good effectiveness in the extraction of non-taxonomic relationships according to the characteristics of the corpus.
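
As a toy illustration of one step such a generic process might include (not the authors' framework), the sketch below proposes candidate non-taxonomic relations by pairing known concept labels that co-occur in a sentence and treating the short connecting phrase as a tentative relation label, to be filtered by frequency and validated by a human. The stopword list and example corpus are invented for the example.

```python
# Naive candidate extraction: (concept, connecting phrase, concept) triples.
# Real systems would use dependency parsing and statistical filtering.

from collections import Counter

STOPWORDS = {"and", "or", "of", "the", "a", "an", "with", "in", "on"}

def candidate_relations(sentences, concepts):
    """Count candidate relation triples found in the corpus."""
    counts = Counter()
    for sent in sentences:
        lowered = sent.lower()
        found = sorted((lowered.find(c), c) for c in concepts if c in lowered)
        for (i, c1), (j, c2) in zip(found, found[1:]):
            connector = lowered[i + len(c1):j].strip(" ,.;")
            words = connector.split()
            # keep short connectors that are not made only of stopwords
            if 0 < len(words) <= 3 and not all(w in STOPWORDS for w in words):
                counts[(c1, connector, c2)] += 1
    return counts.most_common()

corpus = [
    "The museum exhibits paintings from the renaissance period.",
    "Each museum exhibits paintings and sculptures.",
]
print(candidate_relations(corpus, ["museum", "paintings", "sculptures"]))
```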


2003 ◽  
Vol 18 (4) ◽  
pp. 293-316 ◽  
Author(s):  
MEHRNOUSH SHAMSFARD ◽  
AHMAD ABDOLLAHZADEH BARFOROUSH

In recent years there have been several efforts to automate the ontology acquisition and construction process. The proposed systems differ from each other in some factors and have many features in common. This paper presents the state of the art in Ontology Learning (OL) and introduces a framework for classifying and comparing OL systems. The dimensions of the framework concern what to learn, from where to learn it, and how it may be learnt. They include features of the input, the methods of learning and knowledge acquisition, the elements learned, the resulting ontology, and the evaluation process. To derive this framework, over 50 OL systems, or modules thereof, described in recent articles are studied here, and seven prominent ones that illustrate the greatest differences are selected for analysis according to our framework. In this paper, after a brief description of the seven selected systems, we describe the dimensions of the framework. Then we place the representative ontology learning systems into our framework. Finally, we describe the differences, strengths and weaknesses of the various values for our dimensions, in order to give researchers a guideline for choosing the appropriate features when creating or using an OL system for their own domain or application.


Author(s):  
Luis Felipe Borja ◽  
Jorge Azorin-Lopez ◽  
Marcelo Saval-Calvo

Human behaviour analysis has been a subject of study in various fields of science (e.g. sociology, psychology, computer science). In particular, the automated understanding of the behaviour of both individuals and groups remains a very challenging problem, from the sensor systems through to the artificial intelligence techniques. Being aware of the extent of the topic, the objective of this paper is to review the state of the art, focusing on machine learning techniques and on computer vision as the sensor system feeding the artificial intelligence techniques. Moreover, the literature lacks a review comparing levels of abstraction in terms of activity duration. In this paper, a review of methods and techniques based on machine learning for classifying group behaviour in sequences of images is presented. The review takes into account the different levels of understanding and the number of people in the group.


Author(s):  
Luís G. Magalhães ◽  
Telmo Adão ◽  
Emanuel Peres

Accurate modeling/reconstruction and visualization of real environments, particularly archaeological sites, is both a major challenge and a crucial task. This work addresses the entire process of the virtual reconstruction of archaeological sites, from the construction of the virtual model to its visualization. The chapter begins with an introduction to the process of virtual reconstruction of archaeological sites, identifying the several stages that should take place to obtain a faithful virtual representation of an archaeological site and its artifacts. Each stage is then characterized, and its main methods and techniques are identified, in dedicated sections. The authors' contribution to the state of the art is highlighted for each stage. The chapter ends with the authors' vision of future trends for this field and unveils what their contributions to this vision could be.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1469
Author(s):  
Caleb Vununu ◽  
Suk-Hwan Lee ◽  
Ki-Ryong Kwon

In computer-aided diagnosis (CAD) systems, the automatic classification of the different types of human epithelial type 2 (HEp-2) cells represents one of the critical steps in the diagnosis of autoimmune diseases. Most methods tackle this task using the supervised learning paradigm. However, the need for thousands of manually annotated examples constitutes a serious concern for the state-of-the-art HEp-2 cell classification methods. We present in this work a method that uses active learning in order to minimize the need to annotate the majority of the examples in the dataset. For this purpose, we use cross-modal transfer learning coupled with parallel deep residual networks. First, the parallel networks, which simultaneously take different wavelet coefficients as inputs, are trained in a fully supervised way on a very small, already annotated dataset. Then, the trained networks are applied to the target dataset, which is considerably larger than the first one, using active learning techniques in order to select only the images that really need to be annotated. The obtained results show that active learning, when combined with an efficient transfer learning technique, can achieve satisfactory discrimination performance with only a few annotated examples at hand. This will help in building CAD systems by simplifying the burdensome task of labeling images while maintaining performance comparable to the state-of-the-art methods.
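
For readers unfamiliar with the selection step, the following sketch shows a generic uncertainty-sampling criterion (least confidence), one common way active learning picks which images to annotate; it is not necessarily the exact selection rule used by the authors, and the probability values are invented.

```python
# Generic uncertainty sampling (one common active-learning criterion, not
# necessarily the paper's selection rule): pick the unlabeled cell images
# whose current prediction is least confident and annotate only those.

import numpy as np

def least_confident(probabilities, budget):
    """probabilities: (n_samples, n_classes) softmax outputs of networks
    pretrained on the small annotated set.
    Returns indices of the `budget` samples to annotate next."""
    confidence = probabilities.max(axis=1)          # top-class probability
    return np.argsort(confidence)[:budget]          # lowest confidence first

# Hypothetical pool of 5 HEp-2 images, 6 cell classes:
probs = np.array([
    [0.90, 0.02, 0.02, 0.02, 0.02, 0.02],
    [0.25, 0.20, 0.20, 0.15, 0.10, 0.10],
    [0.50, 0.30, 0.05, 0.05, 0.05, 0.05],
    [0.18, 0.17, 0.17, 0.17, 0.16, 0.15],
    [0.70, 0.10, 0.05, 0.05, 0.05, 0.05],
])
print(least_confident(probs, budget=2))   # the two most ambiguous images
```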


2021 ◽  
Author(s):  
Tao Zhang ◽  
Zhenhua Tan

With the development of social media and human-computer interaction, video has become one of the most common data formats. As a research hotspot, emotion recognition systems are essential for serving people by perceiving their emotional state in videos. In recent years, a large number of studies have focused on tackling emotion recognition based on the three most common modalities in videos, namely face, speech and text. The focus of this paper is to sort out the relevant studies of emotion recognition using facial, speech and textual cues, given the lack of review papers concentrating on these three modalities. Moreover, because deep learning techniques are effectively leveraged to learn latent representations for emotion recognition, this paper focuses on emotion recognition methods based on deep learning. We first introduce widely accepted emotion models for the purpose of interpreting the definition of emotion. Then we introduce the state of the art in emotion recognition based on a single modality, including facial expression recognition, speech emotion recognition and textual emotion recognition. For multimodal emotion recognition, we summarize the feature-level and decision-level fusion methods in detail. In addition, the relevant benchmark datasets, the definition of metrics and the performance of the state of the art in recent years are outlined for the convenience of readers who want to find out the current research progress. Finally, we explore some potential research challenges and opportunities to give researchers a reference for enriching emotion recognition-related research.
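
As a toy illustration of decision-level fusion as summarized above (the emotion classes, weights and probabilities are invented and not tied to any specific system surveyed), each modality classifier outputs class probabilities and the final decision is their weighted average:

```python
# Decision-level fusion sketch: combine per-modality class probabilities
# with reliability weights and take the argmax of the fused distribution.

import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_decisions(modality_probs, weights):
    """modality_probs: dict modality -> probability vector over EMOTIONS.
    weights: dict modality -> scalar reliability weight."""
    total = sum(weights.values())
    fused = sum(weights[m] * np.asarray(p) for m, p in modality_probs.items()) / total
    return EMOTIONS[int(np.argmax(fused))], fused

# Hypothetical outputs of face, speech and text classifiers for one clip:
probs = {
    "face":   [0.10, 0.70, 0.15, 0.05],
    "speech": [0.20, 0.40, 0.30, 0.10],
    "text":   [0.05, 0.55, 0.30, 0.10],
}
weights = {"face": 0.5, "speech": 0.3, "text": 0.2}
print(fuse_decisions(probs, weights))
```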

