Towards a Semantic Web-enabled Knowledge Base to Elicit Security Requirements for Misuse Cases

2018 ◽  
Vol 2 ◽  
pp. e25614 ◽  
Author(s):  
Florian Pellen ◽  
Sylvain Bouquin ◽  
Isabelle Mougenot ◽  
Régine Vignes-Lebbe

Xper3 (Vignes Lebbe et al. 2016) is a collaborative knowledge base publishing platform that, since its launch in November 2013, has been adopted by over 2,000 users (Pinel et al. 2017). This is mainly due to its user-friendly interface and the simplicity of its data model. The data are stored in a MySQL relational database, but the exchange format uses the TDWG standard format SDD (Structured Descriptive Data; Hagedorn et al. 2005). However, each Xper3 knowledge base is a closed world that the author(s) may or may not share with the scientific community or the public by publishing its content and/or identification keys (Kopfstein 2016). The explicit taxonomic, geographic and phenotypic limits of a knowledge base are not always well defined in the metadata fields. Conversely, terminology vocabularies, such as the Phenotype and Trait Ontology (PATO) and the Plant Ontology (PO), and the software used to edit them, such as Protégé and Phenoscape, are essential to the Semantic Web but difficult to handle for biologists without computer skills. These ontologies constitute open worlds and are themselves expressed as RDF (Resource Description Framework) triples. Protégé offers visualization and reasoning capabilities for these ontologies (Gennari et al. 2003, Musen 2015). Our challenge is to combine the user-friendliness of Xper3 with the expressive power of OWL (Web Ontology Language), the W3C standard for building ontologies. We therefore focused on analyzing the representation of the same taxonomic content under Xper3 and under different models in OWL. After this critical analysis, we chose a description model that allows automatic export from SDD to OWL and can be easily enriched. We will present the results obtained and their validation on two knowledge bases, one on parasitic crustaceans (Sacculina) and the other on extant and fossil ferns (Corvez and Grand 2014). The evolution of the Xper3 platform and the perspectives opened by this link with Semantic Web standards will be discussed.
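
To make the target representation concrete, here is a minimal sketch in Python (using rdflib) of a taxon description expressed as OWL-style RDF triples; the namespace, class and state names are illustrative assumptions, not the actual Xper3/SDD export model:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/xper3/")  # hypothetical export namespace

g = Graph()
g.bind("owl", OWL)
g.bind("ex", EX)

# A taxon and one of its descriptive character states, modelled as OWL terms.
g.add((EX.Taxon, RDF.type, OWL.Class))
g.add((EX.hasCharacterState, RDF.type, OWL.ObjectProperty))
g.add((EX.Sacculina, RDF.type, EX.Taxon))
g.add((EX.Sacculina, EX.hasCharacterState, EX.ExternaGlobular))

print(g.serialize(format="turtle"))
```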


2007 ◽  
Vol 19 (2) ◽  
pp. 297-309 ◽  
Author(s):  
Yuanbo Guo ◽  
Abir Qasem ◽  
Zhengxiang Pan ◽  
Jeff Heflin

Author(s):  
Souad Bouaicha ◽  
Zizette Boufaida

Although OWL (Web Ontology Language) and SWRL (Semantic Web Rule Language) add considerable expressiveness to the Semantic Web, they have expressive limitations. Some reasoning problems require modifying existing knowledge in an ontology. This kind of problem cannot be fully resolved by OWL and SWRL, as they only support monotonic inference. In this paper, the authors propose SWRLx (Extended Semantic Web Rule Language) as an extension of the SWRL rules. The set of rules obtained with SWRLx is posted to the Jess engine using rewrite meta-rules. The reason for this combination is that it allows new knowledge to be inferred and stored in the knowledge base. The authors propose a formalism for SWRLx along with its implementation through an adaptation of different object-oriented techniques; the Jess rule engine is used to translate these techniques into the Jess model. They include a demonstration of the importance of this kind of reasoning. To verify their proposal, they use a case study on the interpretation of a preventive medical check-up.
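
To illustrate why monotonic inference is the sticking point, the following minimal Python sketch (an illustration only, not the authors' SWRLx/Jess implementation) shows a rewrite-style rule that retracts an existing fact before asserting its replacement:

```python
# A rewrite-style rule over a toy fact store: it retracts the existing fact
# before asserting the new one, a step monotonic inference cannot express.
facts = {("patient1", "cholesterol", "normal")}

def update_measurement(facts, subject, attribute, new_value):
    """Replace any existing (subject, attribute, *) fact with a new value."""
    stale = {f for f in facts if f[0] == subject and f[1] == attribute}
    facts -= stale                              # retract (the non-monotonic step)
    facts.add((subject, attribute, new_value))  # assert the updated knowledge

update_measurement(facts, "patient1", "cholesterol", "high")
print(facts)  # {('patient1', 'cholesterol', 'high')}
```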


Author(s):  
Thabet Slimani ◽  
Boutheina Ben Yaghlane ◽  
Khaled Mellouli

Due to the rapidly increasing use of information and communications technology, Semantic Web technology is being applied in a large spectrum of applications in which domain knowledge is represented by means of an ontology in order to support reasoning performed by a machine. A semantic association (SA) is a set of relationships between two entities in a knowledge base, represented as graph paths consisting of a sequence of links. Because the number of relationships between entities in a knowledge base might be much greater than the number of entities, tools and methods are needed to discover new, unexpected links and relevant semantic associations in the large store of preliminarily extracted semantic associations. Semantic association mining is a rapidly growing field of research that studies these issues in order to create efficient methods and tools to help filter the overwhelming flow of information and extract the knowledge that reflects the user's needs. In this work, the authors present an approach for extracting association rules (SWARM: Semantic Web Association Rule Mining) from a structured semantic association store. They then present a new method for discovering relevant semantic associations between a preliminarily extracted SA and predefined, user-specified features, using the Hyperclique Pattern (HP) approach. In addition, the authors present an approach for extracting hidden entities in a knowledge base. The experimental results on synthetic and real-world data show the benefit of the proposed methods and demonstrate their promising effectiveness.
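
As a rough illustration of semantic associations as graph paths, the following Python sketch enumerates link paths between two entities in a toy triple store; the entities and relations are invented for the example and do not come from the paper:

```python
from collections import deque

# Illustrative triples; a real semantic association store is far larger.
triples = [
    ("PersonA", "worksFor", "LabX"),
    ("LabX", "partOf", "UnivY"),
    ("PersonB", "studiesAt", "UnivY"),
]

def associations(triples, source, target, max_len=4):
    """Enumerate link paths (semantic associations) between two entities."""
    adj = {}
    for s, p, o in triples:
        adj.setdefault(s, []).append((p, o))
        adj.setdefault(o, []).append((p, s))  # links are traversed both ways
    queue = deque([(source, [], {source})])
    while queue:
        node, path, seen = queue.popleft()
        if node == target and path:
            yield path
            continue
        if len(path) < max_len:
            for pred, nxt in adj.get(node, []):
                if nxt not in seen:
                    queue.append((nxt, path + [(node, pred, nxt)], seen | {nxt}))

for path in associations(triples, "PersonA", "PersonB"):
    print(path)
    # [('PersonA', 'worksFor', 'LabX'), ('LabX', 'partOf', 'UnivY'),
    #  ('UnivY', 'studiesAt', 'PersonB')]
```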


Author(s):  
Christopher Walton

In the introductory chapter of this book, we discussed the means by which knowledge can be made available on the Web: that is, the representation of knowledge in a form by which it can be automatically processed by a computer. To recap, we identified two essential steps that were deemed necessary to achieve this task:

1. We discussed the need to agree on a suitable structure for the knowledge that we wish to represent. This is achieved through the construction of a semantic network, which defines the main concepts of the knowledge and the relationships between these concepts. We presented an example network that contained the main concepts to differentiate between kinds of cameras. Our network is a conceptualization, or an abstract view of a small part of the world. A conceptualization is defined formally in an ontology, which is in essence a vocabulary for knowledge representation.

2. We discussed the construction of a knowledge base, which is a store of knowledge about a domain in machine-processable form; essentially a database of knowledge. A knowledge base is constructed through the classification of a body of information according to an ontology. The result will be a store of facts and rules that describe the domain. Our example described the classification of different camera features to form a knowledge base. The knowledge base is expressed formally in the language of the ontology over which it is defined.

In this chapter we elaborate on these two steps to show how we can define ontologies and knowledge bases specifically for the Web. This will enable us to construct Semantic Web applications that make use of this knowledge. The chapter is devoted to a detailed explanation of the syntax and pragmatics of the RDF, RDFS, and OWL Semantic Web standards. The Resource Description Framework (RDF) is an established standard for knowledge representation on the Web. Taken together with the associated RDF Schema (RDFS) standard, we have a language for representing simple ontologies and knowledge bases on the Web.
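
The following minimal Python sketch (using rdflib, with a hypothetical camera vocabulary rather than the book's actual example) previews the two steps in RDF/RDFS terms: a small ontology of camera concepts, and a knowledge base of facts classified against it:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

CAM = Namespace("http://example.org/cameras#")  # hypothetical vocabulary
g = Graph()
g.bind("cam", CAM)

# Step 1: the ontology -- the concepts and relationships of the conceptualization.
g.add((CAM.Camera, RDF.type, RDFS.Class))
g.add((CAM.SLR, RDFS.subClassOf, CAM.Camera))
g.add((CAM.hasLens, RDF.type, RDF.Property))
g.add((CAM.hasLens, RDFS.domain, CAM.Camera))

# Step 2: the knowledge base -- facts classified according to the ontology.
g.add((CAM.myCamera, RDF.type, CAM.SLR))
g.add((CAM.myCamera, CAM.hasLens, Literal("50mm lens")))

print(g.serialize(format="turtle"))
```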


2020 ◽  
pp. 016555152093438
Author(s):  
Jose L. Martinez-Rodriguez ◽  
Ivan Lopez-Arevalo ◽  
Ana B. Rios-Alvarado

The Semantic Web provides guidelines for the representation of information about real-world objects (entities) and their relations (properties). This is helpful for the dissemination and consumption of information by people and applications. However, the information is mainly contained within natural language sentences, which do not have a structure or linguistic descriptions ready to be directly processed by computers. Thus, the challenge is to identify and extract the elements of information that can be represented. Hence, this article presents a strategy to extract information from sentences and represent it with Semantic Web standards. Our strategy involves Information Extraction tasks and a hybrid semantic similarity measure to obtain entities and relations that are then associated with individuals and properties from a Knowledge Base to create RDF triples (Subject–Predicate–Object structures). The experiments demonstrate the feasibility of our method and show that it outperforms a pattern-based method from the literature in accuracy.
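
A minimal sketch of the final representation step, assuming rdflib and hypothetical knowledge-base identifiers; a lookup table stands in here for the semantic-similarity linking the authors describe:

```python
from rdflib import Graph, Namespace

# Hypothetical knowledge-base namespace and identifiers; the paper's pipeline
# links entities and relations via a hybrid semantic similarity measure.
EX = Namespace("http://example.org/kb/")

sentence_triple = ("Paris", "is capital of", "France")  # output of an IE step

entity_links = {"Paris": EX.Paris, "France": EX.France}  # entity linking
property_links = {"is capital of": EX.capitalOf}         # property linking

g = Graph()
s, p, o = sentence_triple
g.add((entity_links[s], property_links[p], entity_links[o]))
print(g.serialize(format="turtle"))
```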


2011 ◽  
pp. 74-100
Author(s):  
Eliana Campi ◽  
Gianluca Lorenzo

This chapter presents technologies and approaches for information retrieval in a knowledge base. We intend to show that the use of an ontology for domain representation and knowledge search offers a more efficient approach to knowledge management. This approach focuses on the meaning of words, making it an important element in the building of the Semantic Web. Search based on both keywords and ontology concepts allows more effective information retrieval by exploiting the semantics of the information in a variety of data. We present a method for building a taxonomy and for annotating and searching documents with taxonomy concepts. We also describe our experience with the creation of an informal taxonomy, automatic classification, and the validation of search results with traditional measures such as precision, recall and F-measure.
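
For reference, the following short Python sketch computes the three traditional measures on illustrative document sets (the sets are invented for the example):

```python
# Illustrative retrieved/relevant sets; real evaluation uses judged corpora.
retrieved = {"d1", "d2", "d3", "d4"}   # documents returned by the search
relevant  = {"d2", "d3", "d5"}         # documents judged relevant

tp = len(retrieved & relevant)
precision = tp / len(retrieved)                            # 2/4 = 0.50
recall    = tp / len(relevant)                             # 2/3 ≈ 0.67
f_measure = 2 * precision * recall / (precision + recall)  # ≈ 0.57

print(f"precision={precision:.2f} recall={recall:.2f} f-measure={f_measure:.2f}")
```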


Terminology ◽  
2019 ◽  
Vol 25 (2) ◽  
pp. 222-258 ◽  
Author(s):  
Pilar León-Araúz ◽  
Arianne Reimerink ◽  
Pamela Faber

Reutilization and interoperability are major issues in the fields of knowledge representation and extraction, as reflected in initiatives such as the Semantic Web and the Linked Open Data Cloud. This paper shows how terminological resources can be integrated and reused within different types of application. EcoLexicon is a multilingual terminological knowledge base (TKB) on environmental science that integrates conceptual, linguistic and visual information. It has led to the following by-products: (i) the EcoLexicon English Corpus; (ii) EcoLexiCAT, a terminology-enhanced translation tool; and (iii) Manzanilla, an image annotation tool. This paper explains EcoLexicon and its by-products, and shows how the latter exploit and enhance the data in the TKB.


2019 ◽  
Vol 35 (S1) ◽  
pp. 68-68
Author(s):  
Gergő Merész ◽  
Bence Takács

Introduction: In Hungary, the procedure for health technology assessment of innovative pharmaceutical products allows 13 assessors 43 calendar days to evaluate reimbursement submissions. These short timelines have created a need for smart capacity building, namely streamlining the scientific evaluation process while making sure that the quality of the critical appraisals remains high. The objective of this study was to present and evaluate the implementation of an online knowledge base to distill community knowledge, and also for management purposes.

Methods: The scope and the content, functional and technical specifications were developed, and information technology security requirements were identified during the pre-implementation phase. An existing platform was chosen for adaptation, ensuring that descriptive follow-up data on uptake are available for monitoring purposes. Both the adaptation and the maintenance were carried out internally by the Department of Health Technology Assessment at the National Institute of Pharmacy and Nutrition.

Results: The key requirements identified when developing the specification were searchability, low maintenance needs, low operating costs and attractiveness to users. An existing open-source, flat-file content management system was chosen for adaptation. In terms of content, a health technology assessment handbook, process documentation and a news bulletin section were created, and corporate identity elements were added. Since the start of the service in September 2018, the number of total daily page downloads from the knowledge base has varied between four and 1,193 (on average 205 per day), with the assessment handbook topping the overall page visit statistics.

Conclusions: The implementation of this knowledge base enables the Department of Technology Assessment to rely more on formalized community knowledge when carrying out critical appraisals, while enabling better knowledge and quality management. Uptake remains an issue in the long run, indicating a need for continuous content development.

