Recherche d'équivalences et structuration des réseaux notionnels [Searching for equivalences and structuring notional networks]

Terminology ◽  
1996 ◽  
Vol 3 (1) ◽  
pp. 53-83
Author(s):  
Marc Van Campenhoudt

Terminologists generally take a conceptual approach, which leads them to consider the semantic relations observed between the concepts they describe. They are therefore turning their attention to the work of cognitivists, and those who specialize in semantic networks are trying, like them, to build terminological knowledge bases. The object of this paper is to examine the various relations between constituent parts and the whole, to describe how they interact with hyponymy (class inclusion), and to consider their role in the establishment of equivalences in multilingual terminology. In particular, the typology of meronomic (part-whole) relations proposed by certain cognitivists is compared with the relations that may be observed in nautical terminology.
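
As a rough illustration of the distinction at stake, the following Python sketch keeps is-a (hyponymic) and part-whole (meronomic) links in separate relation sets of a small multilingual term bank; the nautical terms and links are invented for the example and are not taken from the paper's data.

    # A minimal sketch of a multilingual terminological knowledge base that keeps
    # hyponymic (is-a) and meronomic (part-whole) links in separate relation sets,
    # so that equivalence matching can treat them differently.
    # All terms and links below are illustrative, not taken from the paper.

    from collections import defaultdict

    class TermBank:
        def __init__(self):
            self.is_a = defaultdict(set)      # hyponym -> {hypernyms}
            self.part_of = defaultdict(set)   # part    -> {wholes}
            self.terms = defaultdict(set)     # concept -> {(language, term)}

        def add_is_a(self, hyponym, hypernym):
            self.is_a[hyponym].add(hypernym)

        def add_part_of(self, part, whole):
            self.part_of[part].add(whole)

        def add_term(self, concept, language, term):
            self.terms[concept].add((language, term))

        def ancestors(self, concept):
            """Transitive closure over the is-a relation only."""
            seen, stack = set(), [concept]
            while stack:
                for parent in self.is_a[stack.pop()]:
                    if parent not in seen:
                        seen.add(parent)
                        stack.append(parent)
            return seen

    tb = TermBank()
    tb.add_is_a("cargo ship", "ship")
    tb.add_part_of("hull", "ship")
    tb.add_term("ship", "en", "ship")
    tb.add_term("ship", "fr", "navire")

    print(tb.ancestors("cargo ship"))   # {'ship'}
    print(tb.part_of["hull"])           # {'ship'}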

Author(s):  
Philippe Martin ◽  
Michel Eboueya

This chapter first argues that current approaches for sharing and retrieving learning objects, or any other kind of information, are neither efficient nor scalable, essentially because almost all of them rely on the manual or automatic indexing or merging of independently created formal or informal resources. It then shows that large, tightly interconnected, collaboratively updated formal or semi-formal knowledge bases (semantic networks) can, should, and probably will be used as a shared medium for researching, publishing, teaching, learning, evaluating, and collaborating, and thus ease or complement traditional methods such as face-to-face teaching and document publishing. To test and support these claims, the authors have implemented their ideas in a knowledge server named WebKB-2 and begun representing their research domain and several courses at their universities. The same underlying techniques could be applied to a semantic/learning grid or peer-to-peer network.
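
The sketch below is a generic illustration (not WebKB-2's actual data model) of one property such a shared semantic network needs: every statement records its contributor, so independently added statements can coexist and be filtered or compared rather than merged destructively.

    # A generic sketch of a collaboratively updated semantic network in which
    # each statement keeps its contributor.  Invented example, not WebKB-2 code.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Statement:
        subject: str
        relation: str
        object: str
        contributor: str

    class SharedKB:
        def __init__(self):
            self.statements = set()

        def assert_stmt(self, subject, relation, obj, contributor):
            self.statements.add(Statement(subject, relation, obj, contributor))

        def about(self, subject, contributor=None):
            return [s for s in self.statements
                    if s.subject == subject
                    and (contributor is None or s.contributor == contributor)]

    kb = SharedKB()
    kb.assert_stmt("semantic_network", "subtype_of", "knowledge_representation", "alice")
    kb.assert_stmt("semantic_network", "used_for", "e-learning", "bob")
    print(kb.about("semantic_network"))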


2015 ◽  
Vol 21 (5) ◽  
pp. 661-664
Author(s):  
Zornitsa Kozareva ◽  
Vivi Nastase ◽  
Rada Mihalcea

Graph structures naturally model connections. In natural language processing (NLP), connections are ubiquitous, at anything from small to web scale. We find them between words – as grammatical, collocational, or semantic relations – contributing to the overall meaning and maintaining the cohesive structure of the text and the unity of the discourse. We find them between concepts in ontologies and other knowledge repositories – since the early days of artificial intelligence, associative or semantic networks have been proposed and used as knowledge stores, because they naturally capture language units and the relations between them, and they allow a variety of inference and reasoning processes, simulating some of the functionalities of the human mind. We find them between complete texts or web pages, and between entities in a social network, where they model relations at web scale. Beyond the more frequently encountered 'regular' graphs, hypergraphs have also appeared in our field to model relations between more than two units.
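
As a concrete, if minimal, example of one of these graph views, the Python sketch below builds a word co-occurrence graph from raw text with a sliding window; the window size and toy sentences are arbitrary choices for illustration.

    # A minimal word co-occurrence graph: edge weights count how often two
    # words appear within a small sliding window of each other.

    from collections import defaultdict

    def cooccurrence_graph(sentences, window=2):
        graph = defaultdict(lambda: defaultdict(int))  # word -> neighbour -> count
        for sentence in sentences:
            tokens = sentence.lower().split()
            for i, w in enumerate(tokens):
                for v in tokens[i + 1:i + 1 + window]:
                    if v != w:
                        graph[w][v] += 1
                        graph[v][w] += 1
        return graph

    g = cooccurrence_graph([
        "semantic networks store concepts and relations",
        "graphs model relations between words and concepts",
    ])
    print(dict(g["relations"]))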


2015 ◽  
Vol 3 (2) ◽  
pp. 39-51
Author(s):  
Ying Zheng ◽  
Harry Zhou

This article presents an intelligent corporate governance analysis and rating system, called the IDA System, capable of retrieving the SEC-required documents of public companies and analyzing and rating them in terms of recommended corporate governance practices. Using techniques of analogical learning, local knowledge bases, databases, and question-dependent semantic networks, the IDA System automatically evaluates the strengths, deficiencies, and risks of a company's corporate governance practices based on the documents stored in the SEC EDGAR database (U.S. Securities and Exchange Commission, 2013). The resulting score reduces a complex corporate governance process and its related policies to a single number, enabling concerned government agencies, investors, and legislators to assess the governance characteristics of individual companies.
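
The abstract does not disclose the IDA System's scoring model; the sketch below merely illustrates, with hypothetical criteria and weights, how several separately scored governance dimensions can be reduced to the kind of single number described.

    # Hypothetical reduction of several governance criterion scores to one
    # number via a weighted average.  Criteria and weights are invented.

    def governance_score(criterion_scores, weights):
        """criterion_scores and weights are dicts keyed by criterion name."""
        total_weight = sum(weights[c] for c in criterion_scores)
        return sum(criterion_scores[c] * weights[c] for c in criterion_scores) / total_weight

    scores = {"board_independence": 0.8, "audit_quality": 0.6, "disclosure": 0.9}
    weights = {"board_independence": 3, "audit_quality": 2, "disclosure": 1}
    print(round(governance_score(scores, weights), 2))   # 0.75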


Author(s):  
P. O. Skobelev ◽  
O. I. Lakhin ◽  
I. V. Mayorov ◽  
E. V. Simonova

Introduction: New solutions are currently required for managing industrial resources in order to maintain a high level of adaptability and efficiency. Classical combinatorial or heuristic methods and tools cannot provide adequate solutions for real-time resource management.
Purpose: Development of a method for planning industrial resources based on multi-agent technologies and ontologies, so that the system can adapt to unforeseen events such as new orders or unavailable resources.
Results: An adaptive planning method has been developed in which agents continuously improve system performance in real time by identifying and resolving conflict situations caused by unforeseen events. To adjust multi-agent planning to the specific features of the production process, semantic networks (ontologies) are used as the basis of ontological knowledge bases that store information about the peculiarities of a particular enterprise. To this end, the following elements have been developed: a basic planning ontology, an ontology editor for creating a specialized enterprise ontology, a knowledge base in the form of a semantic Wikipedia for the enterprise, and a multi-agent scheduler that can be customized, using the basic and specialized ontologies, in accordance with specific production features and the requirements of the technological operations.
Practical relevance: The developed system and planning method are not limited to machine-building enterprises and can also be recommended for managing projects, supply chains, etc.
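
A heavily simplified sketch of such an event-driven rescheduling loop is given below; the agent negotiation and ontology lookups of the actual system are abstracted into a plain resource-matching function, and the orders and resources are invented.

    # Simplified adaptive rescheduling: when an unforeseen event arrives (new
    # order or resource failure), only the affected orders are re-matched to
    # resources instead of rebuilding the whole schedule.

    def assign(order, resources, schedule):
        for r in resources:
            if resources[r]["available"] and order["operation"] in resources[r]["skills"]:
                schedule[order["id"]] = r
                return True
        return False

    def handle_event(event, orders, resources, schedule):
        if event["type"] == "new_order":
            orders.append(event["order"])
            assign(event["order"], resources, schedule)
        elif event["type"] == "resource_down":
            resources[event["resource"]]["available"] = False
            for oid, r in list(schedule.items()):
                if r == event["resource"]:        # reschedule only affected orders
                    del schedule[oid]
                    order = next(o for o in orders if o["id"] == oid)
                    assign(order, resources, schedule)

    orders, schedule = [], {}
    resources = {"mill_1": {"available": True, "skills": {"milling"}},
                 "mill_2": {"available": True, "skills": {"milling"}}}
    handle_event({"type": "new_order", "order": {"id": 1, "operation": "milling"}},
                 orders, resources, schedule)
    handle_event({"type": "resource_down", "resource": "mill_1"},
                 orders, resources, schedule)
    print(schedule)   # order 1 moved to mill_2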


2002 ◽  
Vol 26 (1) ◽  
pp. 1-23 ◽  
Author(s):  
Rose M. Marra ◽  
David H. Jonassen

Semantic networks and expert systems can support learning and critical thinking as Mindtools. Mindtools are computer-based tools that function as intellectual partners with the learner in order to engage and facilitate critical thinking and higher-order learning. Semantic networks and expert systems in particular are cognitive reflection tools that help learners build a representation of what they know by designing their own knowledge bases. Semantic networks have been used as a knowledge-elicitation tool for expert system construction; however, the effects of using these tools together have never been formally studied. This study examined the effects of building semantic networks on the coherence and utility of expert systems subsequently constructed. Subjects who first constructed semantic networks produced expert systems with significantly more rules and rule types than a control group. The task's intentional ambiguity and the differences in the thinking required for semantic network and expert system construction may explain why other measures of expert system complexity showed no significant differences.
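
As an illustration of the link between the two tools, the sketch below mechanically turns the relations of a small semantic network into the behaviour of a tiny forward-chaining rule engine; the domain content is invented and the translation scheme is only one of many possible.

    # Links of a learner-built semantic network reinterpreted as rules of a
    # tiny forward-chaining inference loop.  Invented domain content.

    links = [  # (concept, relation, concept) triples from a semantic network
        ("mammal", "has_property", "warm_blooded"),
        ("whale", "is_a", "mammal"),
    ]

    def infer(facts, links):
        """Each is_a link acts as a rule propagating the parent's properties."""
        changed = True
        while changed:
            changed = False
            for subj, rel, obj in links:
                if rel == "is_a" and subj in facts.get("entities", set()):
                    for s2, r2, o2 in links:
                        if s2 == obj and r2 == "has_property":
                            if o2 not in facts.setdefault(subj, set()):
                                facts[subj].add(o2)
                                changed = True
        return facts

    facts = {"entities": {"whale"}}
    print(infer(facts, links))   # whale inherits 'warm_blooded'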


Author(s):  
Ruobing Xie ◽  
Xingchi Yuan ◽  
Zhiyuan Liu ◽  
Maosong Sun

Sememes are defined as the minimum semantic units of human languages. People have manually annotated words with lexical sememes and formed linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, and it suffers from significant annotation inconsistency and noise. In this paper, we explore, for the first time, automatically predicting lexical sememes from the semantic meanings of words encoded in word embeddings. Moreover, we apply matrix factorization to learn the semantic relations between sememes and words. In our experiments, we take the real-world sememe knowledge base HowNet for training and evaluation, and the results demonstrate the effectiveness of our method for lexical sememe prediction. Our method will be of great use for verifying the annotations of existing noisy sememe knowledge bases and for suggesting annotations for new words and phrases.
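
The following numpy sketch is a simplified stand-in for the approach described (not the paper's exact model): with fixed pretrained word embeddings V and a binary word-sememe matrix M, it learns sememe vectors S such that M ≈ V·Sᵀ and then scores candidate sememes for an unannotated word from its embedding alone.

    # Simplified sememe prediction via matrix factorization (illustrative only).

    import numpy as np

    rng = np.random.default_rng(0)
    n_words, n_sememes, dim = 50, 10, 8
    V = rng.normal(size=(n_words, dim))                  # pretrained word embeddings (fixed)
    M = (rng.random((n_words, n_sememes)) < 0.2) * 1.0   # existing word-sememe annotations

    S = rng.normal(scale=0.1, size=(n_sememes, dim))     # sememe vectors to learn
    lr, reg = 0.005, 0.01
    for _ in range(500):                                 # gradient descent on squared error
        err = V @ S.T - M                                # shape (n_words, n_sememes)
        S -= lr * (err.T @ V + reg * S)

    new_word_vec = rng.normal(size=dim)                  # embedding of an unannotated word
    scores = S @ new_word_vec                            # higher score = more likely sememe
    print(np.argsort(scores)[::-1][:3])                  # top-3 predicted sememe indices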


Terminology ◽  
2004 ◽  
Vol 10 (2) ◽  
pp. 241-263 ◽  
Author(s):  
Caroline Barrière

Corpus analysis is today at the heart of building Terminological Knowledge Bases (TKBs). Important terms are usually first extracted from a corpus and then related to one another via semantic relations. This research brings the discovery of semantic relations to the forefront in order to identify less stable lexical units or unlabeled concepts, which are important to include in a TKB to facilitate knowledge organization. We suggest a concept hierarchy made of concept nodes defined via a representational structure that emphasizes both labeling and conceptual representation. The Conceptual Graph formalism chosen for conceptual representation allows a compositional view of concepts, which is relevant to their comparison and their organization in a concept lattice. Examples manually extracted from a scuba-diving corpus are presented to explore the possibilities of this approach. Subsequently, steps toward the semi-automatic construction of a concept hierarchy from corpus analysis are presented in order to evaluate their underlying hypotheses and feasibility.
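
A minimal sketch of this compositional view appears below: each (possibly unlabeled) concept node is represented as a set of features, and nodes are ordered by feature-set inclusion, which is the basic operation behind a concept lattice; the scuba-diving features are invented for the example.

    # Concept nodes as feature sets, ordered into a hierarchy by set inclusion.
    # Features are invented, not taken from the paper's scuba-diving corpus.

    concepts = {
        "dive":            {"act:descend", "medium:water"},
        "night dive":      {"act:descend", "medium:water", "time:night"},
        "deep night dive": {"act:descend", "medium:water", "time:night", "depth:deep"},
    }

    def subsumes(general, specific):
        """A concept subsumes another if its features are a subset of the other's."""
        return concepts[general] <= concepts[specific]

    def direct_parents(name):
        candidates = [c for c in concepts if c != name and subsumes(c, name)]
        # keep only the most specific subsumers
        return [c for c in candidates
                if not any(subsumes(c, d) and c != d for d in candidates)]

    print(direct_parents("deep night dive"))   # ['night dive']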


2012 ◽  
Vol 3 (4) ◽  
pp. 47-56 ◽  
Author(s):  
Jorge González Lorenzo ◽  
José Emilio Labra Gayo ◽  
José María Álvarez Rodríguez

The emerging Web of Data, part of the Semantic Web initiative, and the sheer mass of information now available make it possible to deploy new services and applications based on the reuse of existing vocabularies and datasets. A huge amount of this information is published by governments and organizations using Semantic Web languages and formats such as RDF, with implicit graph structures expressed in the W3C standard languages RDF Schema and OWL, but new, flexible programming models are required to process and exploit this data. In this context, the use of algorithms such as Spreading Activation is growing as a way to find relevant and related information in this new data realm. Nevertheless, the problem of efficiently exploring these large knowledge bases has not yet been solved, which is why new paradigms are emerging to boost the definitive deployment of the Web of Data. This cornerstone is being addressed by applying new programming models such as MapReduce in combination with long-established techniques from document and information retrieval. In this paper, an implementation of the Spreading Activation technique based on the MapReduce programming model is introduced, along with the problems of applying this paradigm to graph-based structures. Finally, a concrete experiment with real data is presented to illustrate the algorithm's performance and scalability.
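
The sketch below shows how a single spreading-activation step can be expressed in the map/reduce style discussed in the paper: the map phase emits a share of each active node's energy to its neighbours, and the reduce phase sums the contributions per node. It runs locally in plain Python; the graph, decay factor, and initial activation are illustrative, and a real deployment would hand the same two functions to a MapReduce framework.

    # One spreading-activation step written as a local map phase and reduce phase.

    from collections import defaultdict

    graph = {                       # adjacency list of an RDF-like node graph
        "A": ["B", "C"],
        "B": ["C"],
        "C": [],
    }
    activation = {"A": 1.0}         # initial activation
    DECAY = 0.8                     # fraction of energy passed on

    def map_phase(activation, graph):
        for node, energy in activation.items():
            neighbours = graph.get(node, [])
            for n in neighbours:
                yield n, DECAY * energy / len(neighbours)

    def reduce_phase(pairs):
        summed = defaultdict(float)
        for node, value in pairs:
            summed[node] += value
        return dict(summed)

    activation = reduce_phase(map_phase(activation, graph))
    print(activation)               # {'B': 0.4, 'C': 0.4}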

