A new competency ontology for learning environments personalization

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Gilbert Paquette ◽  
Olga Marino ◽  
Rim Bejaoui

Abstract Competency is a central concept in human resource management, training and education. We define a competency as the capacity of a person to display a generic skill, with a certain level of performance, when applied to one or more knowledge entities. Competencies, and the competency referentials that group them, are essential elements for user models, e-portfolios, adaptive learning, and personalization in technology-based learning. But to be processed both by humans and by software tools, competencies should be represented in a formal, unambiguous model called an ontology. Moreover, this model should use a shared vocabulary to describe the generic skills and the knowledge entities; defining and linking shared vocabularies is the purpose of ontologies in the Semantic Web. The goal of our research is to develop a competency ontology for the Semantic Web to be used as a shared referential in the description of competencies and competency profiles. We analysed five previous competency models and developed COMP2, a new competency ontology that integrates important elements of the previous models with the richness of the Semantic Web vocabulary. COMP2 supports processing by both humans and computers: its graphic model is highly readable by humans for design, evaluation and communication purposes, and it translates, together with its data sets, to standard Semantic Web code for machine processing. The ontology is composed of five stages that are interlinked with other ontologies in use within the web of linked open data. We present an example of the use of the ontology for competency-based personalization in learning environments.
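The core modelling idea, a competency as a generic skill applied to a knowledge entity at a performance level, can be sketched as RDF. The following minimal Python/rdflib example is illustrative only: the comp2 namespace and the class and property names are hypothetical placeholders, not COMP2's actual vocabulary.

```python
# Minimal sketch: a competency as a skill applied to a knowledge entity
# at a given performance level, expressed as RDF with rdflib.
# The comp: namespace and all class/property names are hypothetical
# placeholders, not COMP2's actual vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

COMP = Namespace("http://example.org/comp2#")  # hypothetical namespace

g = Graph()
g.bind("comp", COMP)

competency = COMP.AnalyseLinkedDataSets
g.add((competency, RDF.type, COMP.Competency))
g.add((competency, RDFS.label, Literal("Analyse linked data sets", lang="en")))
g.add((competency, COMP.hasSkill, COMP.Analyse))          # generic skill
g.add((competency, COMP.appliesTo, COMP.LinkedDataSet))   # knowledge entity
g.add((competency, COMP.performanceLevel, Literal("expert")))

print(g.serialize(format="turtle"))
```

Serializing to Turtle, as above, is one way such a graphic competency model translates into standard Semantic Web code for machine processing.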

2020 ◽  
Vol 1 (1) ◽  
pp. 428-444 ◽  
Author(s):  
Silvio Peroni ◽  
David Shotton

OpenCitations is an infrastructure organization for open scholarship dedicated to the publication of open citation data as Linked Open Data using Semantic Web technologies, thereby providing a disruptive alternative to traditional proprietary citation indexes. Open citation data are valuable for bibliometric analysis, increasing the reproducibility of large-scale analyses by enabling publication of the source data. Following brief introductions to the development and benefits of open scholarship and to Semantic Web technologies, this paper describes OpenCitations and its data sets, tools, services, and activities. These include the OpenCitations Data Model; the SPAR (Semantic Publishing and Referencing) Ontologies; OpenCitations’ open software of generic applicability for searching, browsing, and providing REST APIs over resource description framework (RDF) triplestores; Open Citation Identifiers (OCIs) and the OpenCitations OCI Resolution Service; the OpenCitations Corpus (OCC), a database of open downloadable bibliographic and citation data made available in RDF under a Creative Commons public domain dedication; and the OpenCitations Indexes of open citation data, of which the first and largest is COCI, the OpenCitations Index of Crossref Open DOI-to-DOI Citations, which currently contains over 624 million bibliographic citations and is receiving considerable usage by the scholarly community.
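As an illustration of how the COCI index can be consumed programmatically, the following sketch queries the public COCI REST API for the citations of a DOI. The endpoint shape and field names follow the OpenCitations documentation as we understand it; verify them against opencitations.net before relying on them.

```python
# Sketch: fetching the citations of a DOI from the COCI REST API.
# Endpoint and field names are taken from the public OpenCitations
# documentation; treat them as assumptions to verify.
import requests

DOI = "10.1038/nature12373"  # example DOI
url = f"https://opencitations.net/index/coci/api/v1/citations/{DOI}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
for citation in resp.json():
    # Each record carries an Open Citation Identifier (OCI) plus
    # the citing and cited DOIs.
    print(citation["oci"], citation["citing"], "->", citation["cited"])
```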


2016 ◽  
Vol 3 (23) ◽  
pp. 124
Author(s):  
José Nelson Pérez-Castillo ◽  
María Fernanda Díaz-Hernández ◽  
Nubia Rincón-Mosquera

Semantic Web and its contribution to the open data strategy of the Colombian State

Abstract One of the great opportunities arising from the current development of Information and Communication Technologies (ICT) is the construction of a Semantic Web in the country that contributes to the Open Data Strategy of the Colombian State. Open data in Colombia was established by Decree 2573 of 2014 and implemented through a Guide to Open Data at all levels of public administration. The decree guarantees the free adoption of technologies and international recommendations, giving software developers the opportunity to adopt one of the most important guidelines of the W3C (World Wide Web Consortium): to use and implement a "smart" network that can link diverse data sets in a meaningful way. We believe that without this implementation, the government's effort to harness the great potential of open data may stop halfway, leaving the information available on the traditional web in a static and centralized form, with the consequent risk of it becoming a new data silo, or of the data not being exploited with all the advantages of growth and benefit that the union of the two concepts, Semantic Web and open data, could provide. This paper presents an overview of the advantages of building the Semantic Web in the country to fully support the state strategy through the meaningful linking of open data sets. It concludes that research centres should be established for the analysis and design of semantic networks with ontologies, as part of the global efforts towards open data.


2019 ◽  
Vol 24 (1) ◽  
pp. 129-141
Author(s):  
Peter Hinkelmanns

Abstract The inclusion of Semantic Web technologies in the lexicographic 'Middle High German Conceptual Database' (MHDBDB) is a challenge for this long-term project. Since the 1970s, the MHDBDB has aimed to provide an onomasiological dictionary for Middle High German. The latest technological revision dates back to 1992, so there is a growing demand for a more contemporary infrastructure and better usability. The data models themselves, as well as the linking of data sets with authority files, need to be modernised to ensure compatibility with the Semantic Web. This paper summarises the current discussion on formats and ontologies for online dictionaries, with a focus on Middle High German lexicography.
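For concreteness, a minimal sketch of what a Semantic-Web-compatible dictionary entry could look like is given below, using the W3C OntoLex-Lemon vocabulary commonly discussed for such lexicographic projects; the entry URI and the lemma are invented for illustration and do not come from the MHDBDB.

```python
# Sketch: a Middle High German lexical entry expressed with the W3C
# OntoLex-Lemon vocabulary, a usual candidate for Semantic Web
# dictionaries. The entry URI and the lemma "minne" are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")

g = Graph()
g.bind("ontolex", ONTOLEX)

entry = URIRef("http://example.org/mhdbdb/minne")        # hypothetical URI
form = URIRef("http://example.org/mhdbdb/minne#form")    # hypothetical URI

g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, RDF.type, ONTOLEX.Form))
# "gmh" is the ISO 639 language code for Middle High German.
g.add((form, ONTOLEX.writtenRep, Literal("minne", lang="gmh")))

print(g.serialize(format="turtle"))
```

Linking such entries to authority files would then amount to adding further triples pointing at external URIs, which is precisely the modernisation the paper calls for.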


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5204
Author(s):  
Anastasija Nikiforova

Nowadays, governments launch open government data (OGD) portals that provide data that can be accessed and used by everyone for their own needs. Although the potential economic value of open (government) data is estimated in the millions and billions, not all open data are reused. Moreover, the open (government) data initiative, as well as users' intent for open (government) data, is changing continuously; today, in line with IoT and smart-city trends, real-time and sensor-generated data are of greater interest to users. These "smarter" open (government) data are also considered to be one of the crucial drivers of a sustainable economy, and might have an impact on information and communication technology (ICT) innovation and become a creativity bridge in developing a new ecosystem in Industry 4.0 and Society 5.0. The paper inspects the OGD portals of 60 countries in order to understand how well their content corresponds to Society 5.0 expectations. The paper reports on the extent to which countries provide these data, focusing on factors that facilitate open (government) data success, for both the portal in general and data sets of interest in particular. The presence of "smarter" data, their level of accessibility, availability, currency and timeliness, as well as support for users, are analyzed. A list of the most competitive countries by data category is provided. This makes it possible to understand which OGD portals react to users' needs and to the Industry 4.0 and Society 5.0 demand for the opening and updating of data for further potential reuse, which is essential in the digital, data-driven world.
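Many national OGD portals are built on CKAN, so a currency and timeliness check of the kind described can be sketched against the CKAN action API as follows. The portal URL and query are placeholders, and the sketch is an illustration of the idea, not the paper's actual assessment instrument.

```python
# Sketch: checking the currency of data sets on a CKAN-based OGD portal
# via the metadata_modified timestamp of matching packages. The portal
# URL and query string are placeholders; package_search and
# metadata_modified are part of the documented CKAN action API.
from datetime import datetime, timezone
import requests

PORTAL = "https://demo.ckan.org"  # placeholder portal
resp = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"q": "air quality", "rows": 5},
    timeout=30,
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for pkg in resp.json()["result"]["results"]:
    modified = datetime.fromisoformat(pkg["metadata_modified"]).replace(
        tzinfo=timezone.utc
    )
    age_days = (now - modified).days
    print(f"{pkg['name']}: last modified {age_days} days ago")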


2021 ◽  
Vol 10 (4) ◽  
pp. 251
Author(s):  
Christina Ludwig ◽  
Robert Hecht ◽  
Sven Lautenbach ◽  
Martin Schorcht ◽  
Alexander Zipf

Public urban green spaces are important for urban quality of life. Still, comprehensive open data sets on urban green spaces are not available for most cities. As open and globally available data sets, Sentinel-2 satellite imagery and OpenStreetMap (OSM) data have high potential for urban green space mapping, but that potential is limited by their respective uncertainties. Sentinel-2 imagery cannot distinguish public from private green spaces, and its spatial resolution of 10 m fails to capture fine-grained urban structures, while in OSM, green spaces are not mapped consistently or with the same level of completeness everywhere. To address these limitations, we propose fusing these data sets under explicit consideration of their uncertainties. The Sentinel-2-derived Normalized Difference Vegetation Index was fused with OSM data using Dempster–Shafer theory to enhance the detection of small vegetated areas. The distinction between public and private green spaces was made using a Bayesian hierarchical model and OSM data. The analysis was performed on land use parcels derived from OSM data and tested for the city of Dresden, Germany. The overall accuracy of the final map of public urban green spaces was 95% and was mainly influenced by the uncertainty of the public accessibility model.
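The fusion step relies on Dempster's rule of combination. A minimal sketch over the frame {green, not_green} is given below; the mass values are invented for illustration and are not taken from the paper.

```python
# Sketch: Dempster's rule of combination over the frame
# {green, not_green}, as used conceptually to fuse an NDVI-based
# evidence source with an OSM-based one. Mass values are illustrative.
from itertools import product

FRAME = frozenset({"green", "not_green"})

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions (dicts mapping frozensets to masses)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    # Normalize by the non-conflicting mass (Dempster's rule).
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# NDVI source: strong evidence for vegetation, some ignorance.
m_ndvi = {frozenset({"green"}): 0.7, FRAME: 0.3}
# OSM source: weaker, partly conflicting evidence.
m_osm = {frozenset({"green"}): 0.5, frozenset({"not_green"}): 0.2, FRAME: 0.3}

for subset, mass in combine(m_ndvi, m_osm).items():
    print(sorted(subset), round(mass, 3))
```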


2020 ◽  
Vol 12 (1) ◽  
pp. 580-597
Author(s):  
Mohamad Hamzeh ◽  
Farid Karimipour

Abstract An inevitable aspect of modern petroleum exploration is the simultaneous consideration of large, complex, and disparate spatial data sets. In this context, the present article proposes the optimized fuzzy ELECTRE (OFE) approach, which combines the artificial bee colony (ABC) optimization algorithm, fuzzy logic, and an outranking method to assess petroleum potential at the petroleum system level in a spatial framework, using experts' knowledge and the information available in the discovered petroleum accumulations simultaneously. It uses the characteristics of the essential elements of a petroleum system as key criteria. To demonstrate the approach, a case study was conducted on the Red River petroleum system of the Williston Basin. After the assorted preprocessing steps, eight spatial data sets associated with the criteria were integrated using the OFE to produce a map that delineates the areas with the highest petroleum potential and the lowest risk for further exploratory investigations. Success and prediction rate curves were used to measure the performance of the model. Both success and prediction accuracies lie in the range of 80–90%, indicating excellent model performance. Considering the five-class petroleum potential, the proposed approach outperforms the spatial models used in previous studies. In addition, comparing the results of the fuzzy ELECTRE (FE) and OFE indicated that optimizing the weights with the ABC algorithm improved accuracy by approximately 15%, yielding a relatively higher success rate and lower risk in petroleum exploration.
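The outranking core of ELECTRE rests on a concordance index: for each ordered pair of alternatives, the sum of the weights of the criteria on which the first is at least as good as the second. A minimal NumPy sketch follows; the criterion values and weights are illustrative, and the paper's OFE additionally fuzzifies the criteria and tunes the weights with the ABC algorithm.

```python
# Sketch: the concordance step at the core of ELECTRE-style outranking.
# Criterion values and weights are illustrative only.
import numpy as np

# Rows: alternatives (e.g. map cells); columns: criteria (e.g. source
# rock, reservoir, seal quality), all oriented so larger is better.
X = np.array([
    [0.8, 0.4, 0.6],
    [0.5, 0.9, 0.3],
    [0.7, 0.6, 0.7],
])
w = np.array([0.5, 0.3, 0.2])  # criterion weights, summing to 1

n = X.shape[0]
concordance = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            # Sum of weights of criteria on which alternative i is at
            # least as good as alternative j.
            concordance[i, j] = w[X[i] >= X[j]].sum()

print(concordance)
```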


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 621
Author(s):  
Giuseppe Psaila ◽  
Paolo Fosci

Internet technology and mobile technology have enabled the production and diffusion of massive data sets concerning almost every aspect of day-to-day life. Remarkable examples are social media and apps for volunteered information production, as well as Open Data portals on which public administrations publish authoritative and (often) geo-referenced data sets. In this context, JSON has become the most popular standard for representing and exchanging possibly geo-referenced data sets over the Internet. Analysts wishing to manage, integrate and cross-analyze such data sets need a framework that lets them access possibly remote storage systems for JSON data sets and retrieve and query those data sets by means of a single query language (independent of the specific storage technology), exploiting possibly remote computational resources (such as cloud servers) while working comfortably on the PC in their office, more or less unaware of the real location of the resources. In this paper, we present the current state of the J-CO Framework, a platform-independent and analyst-oriented software framework to manipulate and cross-analyze possibly geo-tagged JSON data sets. The paper presents the general approach behind the J-CO Framework, illustrating the query language with a simple yet non-trivial example of geographical cross-analysis. The paper also presents the novel features introduced by the re-engineered version of the execution engine and the most recent components, i.e., the storage service for large single JSON documents and the user interface that allows analysts to comfortably share data sets and computational resources with other analysts possibly working in different parts of the world. Finally, the paper reports the results of an experimental campaign, which shows that the execution engine performs more than satisfactorily, demonstrating that the framework can actually be used by analysts to process JSON data sets.
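The kind of geographical cross-analysis that the J-CO query language abstracts can be sketched directly in Python. The following is not J-CO-QL (whose syntax is not reproduced here); the file names and feature properties are invented for illustration.

```python
# Sketch: a geographical cross-analysis over two geo-tagged JSON
# (GeoJSON) data sets, written directly with shapely. This is NOT
# J-CO-QL; file names and properties are hypothetical.
import json
from shapely.geometry import shape

with open("districts.geojson") as f:   # polygons, hypothetical file
    districts = json.load(f)["features"]
with open("poi.geojson") as f:         # points, hypothetical file
    pois = json.load(f)["features"]

# Cross-analysis: count points of interest falling inside each district.
for district in districts:
    polygon = shape(district["geometry"])
    count = sum(
        polygon.contains(shape(poi["geometry"])) for poi in pois
    )
    print(district["properties"].get("name", "?"), count)
```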

