Semantic Data Mapping on E-Learning Usage Index Tool to Handle Heterogeneity of Data Representation

2014 ◽  
Vol 69 (5) ◽  
Author(s):  
Arda Yunianta ◽  
Norazah Yusof ◽  
Mohd Shahizan Othman ◽  
Abdul Aziz ◽  
Nataniel Dengen ◽  
...  

Distribution and heterogeneity of data are current issues in data-level implementation. Different data representations between applications make the integration problem increasingly complex. Data stored in different applications sometimes have similar meaning, but because their representations differ, the applications cannot be integrated with one another. Many researchers have found that semantic technology is the best way to resolve current data integration issues: it can handle heterogeneous data with different representations and sources, and it allows data mapping across different databases and data formats that carry the same meaning. This paper focuses on semantic data mapping using a semantic ontology approach. In the first level of the process, a semantic data mapping engine produces a data mapping description in Turtle (.ttl) file format that can be used by a local Java application with the Jena library and a triple store. In the second level, a D2R Server that can be accessed from the outside environment is provided over the HTTP protocol, supporting SPARQL clients, Linked Data clients (RDF formats), and HTML browsers. Future work will continue on this topic, focusing on the E-Learning Usage Index Tool (IPEL) application so that it is able to integrate with other system applications such as the Moodle e-learning system.
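The abstract describes a local Java application consuming the generated Turtle mapping through the Jena library and querying the published D2R Server over SPARQL. A minimal sketch of that consumer side, assuming a hypothetical mapping file name and endpoint URL (neither is given in the abstract), could look like this:

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class MappingConsumer {
    public static void main(String[] args) {
        // Load the Turtle mapping produced by the mapping engine
        // ("ipel-mapping.ttl" is a hypothetical file name).
        Model mapping = RDFDataMgr.loadModel("ipel-mapping.ttl");
        System.out.println("Mapping triples loaded: " + mapping.size());

        // Query the D2R Server's SPARQL endpoint over HTTP
        // (the endpoint URL below is an assumption, not from the paper).
        String endpoint = "http://localhost:2020/sparql";
        String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                QuerySolution row = rs.next();
                System.out.println(row.get("s") + " " + row.get("p") + " " + row.get("o"));
            }
        }
    }
}
```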

Author(s):  
Arda Yunianta ◽  
Norazah Yusof ◽  
Arif Bramantoro ◽  
Haviluddin Haviluddin ◽  
Mohd Shahizan Othman ◽  
...  

Many applications are developed in the education domain. Information and data for each application are stored in distributed locations, with different data representations in each database. This situation leads to heterogeneity at the data integration level. Data heterogeneity may cause many problems. One major issue concerns the semantic relationships among data in education-domain applications, where learning data may have the same name but a different meaning, or a different name but the same meaning. This paper discusses a semantic data mapping process to handle this semantic relationship problem in the education domain. There are two main parts in the semantic data mapping process. The first part is the semantic data mapping engine, which produces a data mapping description in Turtle (.ttl) file format, a standard RDF serialization, that can be used by a local Java application with the Jena library and a triple store. The Turtle file contains detailed information about the data schema of every application inside the database system. The second part provides a D2R Server that can be accessed from the outside environment over the HTTP protocol, using SPARQL clients, Linked Data clients (RDF formats), and HTML browsers. To implement the semantic data mapping process, this paper focuses on the student grading system in the learning environment of the education domain. By following the proposed process, a Turtle file is produced as the result of the first part. Finally, this file is combined and integrated with other Turtle files in order to map and link the data representations of other applications.
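The final step, combining the generated Turtle file with Turtle files from other applications, can be illustrated with a short Jena sketch; the file names below are hypothetical and only stand in for the per-application mappings described in the abstract:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;

public class MappingMerger {
    public static void main(String[] args) {
        // Per-application mapping files (hypothetical names).
        Model grading = RDFDataMgr.loadModel("grading-system.ttl");
        Model moodle  = RDFDataMgr.loadModel("moodle-usage.ttl");

        // Union the two mappings so resources sharing the same URIs
        // (i.e. the same meaning) are linked across applications.
        Model combined = ModelFactory.createUnion(grading, moodle);

        // Write the integrated mapping back out as Turtle.
        RDFDataMgr.write(System.out, combined, RDFFormat.TURTLE);
    }
}
```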


This paper explores the aspects of providing education through an e-learning model, evaluating its relevance to distance education and to ICT systems. A subset of e-learning is web-based learning, which makes learning easier, more engaging, structured, and properly managed. The paper defines a university ontology describing how e-learning provides resources that are available online in a designated cloud and can be delivered anywhere, at any time, among the users. In the proposed model, data is stored in the designated cloud and users are able to share it efficiently, as the model provides services to learners. Provenance, or trust with respect to the academic resource, is a major concern in these types of models: the data users access must be trustworthy, which benefits learners, researchers, developers, and users in future work as well. This paper proposes an e-learning model that is well organized and structured, such that the machine responds with accurate, trustworthy, desired information and results. The paper defines an ontology for semantic structuring and semantic rendering, and applies provenance to the suggested ontology to achieve authentic results. It is also desirable to establish trust in the source contents of the Semantic Web, so that a user receiving data can verify whether the received data is in fact trustworthy. The defined ontology is suitable for consumption by both humans and machines in the context of e-learning and semantic data rendering on the Web.
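As a rough illustration of attaching provenance statements to resources in such a university ontology, the following Jena sketch annotates a hypothetical course resource with its source and creator using Dublin Core terms; the URIs and property choices are assumptions, not taken from the paper:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DCTerms;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class ProvenanceExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/university#"; // hypothetical namespace

        // A course resource from the (hypothetical) university ontology.
        Resource courseClass = model.createResource(ns + "Course");
        Resource course = model.createResource(ns + "SemanticWeb101")
                .addProperty(RDF.type, courseClass)
                .addProperty(RDFS.label, "Introduction to the Semantic Web");

        // Provenance annotations: where the material came from and who created it.
        course.addProperty(DCTerms.source, model.createResource("http://example.org/repository/sw101"))
              .addProperty(DCTerms.creator, "Department of Computer Science");

        model.write(System.out, "TURTLE");
    }
}
```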


2019 ◽  
pp. 10-21
Author(s):  
Dimitris Kanellopoulos

The advent of social networking applications, media streaming technologies, and synchronous communications has created an evolution towards dynamic shared media experiences. In this new model, geographically distributed groups of users can be immersed in a common virtual networked environment in which they can interact and collaborate in real time within the context of simultaneous media content consumption. In this environment, intra-stream and inter-stream synchronization techniques are used inside the consumers’ playout devices, while synchronization of media streams across multiple separated locations is also required. This synchronization is known as multipoint, group, or Inter-Destination Multimedia Synchronization (IDMS) and is needed in many applications such as social TV and synchronous e-learning. This survey paper discusses intra- and inter-stream synchronization issues, but it mainly focuses on the most well-known IDMS techniques that can be used in emerging distributed multimedia applications. In addition, it provides some research directions for future work.
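To make the IDMS idea concrete, here is a minimal sketch, not tied to any specific technique from the survey, of how a receiver might adjust its playout point toward a reference (master) playout position reported for the group; all class and field names, and the threshold value, are illustrative assumptions:

```java
// Hypothetical sketch of a master/slave-style IDMS playout correction.
public class IdmsReceiver {
    private static final long THRESHOLD_MS = 80; // tolerated asynchrony

    private long localPlayoutPositionMs; // current playout point of this receiver

    public IdmsReceiver(long startPositionMs) {
        this.localPlayoutPositionMs = startPositionMs;
    }

    /** Handle a synchronization report carrying the group's reference playout position. */
    public void onSyncReport(long referencePositionMs) {
        long drift = localPlayoutPositionMs - referencePositionMs;
        if (Math.abs(drift) > THRESHOLD_MS) {
            // In a real system the correction would be applied gradually
            // (frame skips/pauses or adaptive playout rate); here we simply snap.
            localPlayoutPositionMs = referencePositionMs;
        }
    }

    public long getLocalPlayoutPositionMs() {
        return localPlayoutPositionMs;
    }
}
```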


Author(s):  
Shubham Dubey ◽  
Biro Piroska ◽  
Manjulata Gautam

The world is changing rapidly, and so is academia. E-learning has altered the area of academics and education. ICT-enabled learning offers ideal services to students by providing any type of content on demand, proportional to the performance of the student. Learners' concentration has been found to waver; thus there is a need to keep the mind engaged with the course in its entirety until the objectives of the course are achieved. Several e-learning platforms are available, such as EdX, Udacity, Khan Academy, and Alison, which have large numbers of learners registered for various courses. Studies suggest that these platforms suffer from the common problem of learners dropping out. Investigations also claim that the early-leaving rate is increasing due to lack of content quality, distraction factors, learners changing their minds, outdated or overly terse information, and other detracting factors. These issues have been observed on the basis of early-leaving rates in various MOOCs. Thus there is enormous scope for minimizing the impact of these causes on learners. This can be achieved by identifying the factors affecting learners' motivation during the course. This study aims to identify these factors. The approach is to search for certain keywords in previous literature (41 papers in total) and then calculate their frequencies and the co-factors associated with them. Both grouped-factor contributions and individual-factor contributions have been considered. The study gives a direction for future work towards overcoming these factors and engaging learners in ICT-enabled learning.
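The keyword-frequency and co-occurrence counting step described here is straightforward to sketch; the snippet below is a simplified illustration with made-up keywords and documents, not the authors' actual tooling:

```java
import java.util.*;

public class KeywordFrequency {
    public static void main(String[] args) {
        // Hypothetical dropout-related keywords and abstracts; the real study
        // used 41 papers and its own keyword list.
        List<String> keywords = List.of("distraction", "content quality", "motivation");
        List<String> papers = List.of(
                "learners report distraction and low motivation in long courses",
                "poor content quality increases distraction and dropout");

        Map<String, Integer> frequency = new HashMap<>();
        Map<String, Integer> coOccurrence = new HashMap<>();

        for (String paper : papers) {
            List<String> present = new ArrayList<>();
            for (String kw : keywords) {
                if (paper.contains(kw)) {
                    frequency.merge(kw, 1, Integer::sum);
                    present.add(kw);
                }
            }
            // Count keyword pairs that appear together in the same paper.
            for (int i = 0; i < present.size(); i++) {
                for (int j = i + 1; j < present.size(); j++) {
                    coOccurrence.merge(present.get(i) + " + " + present.get(j), 1, Integer::sum);
                }
            }
        }
        System.out.println("Frequencies:    " + frequency);
        System.out.println("Co-occurrences: " + coOccurrence);
    }
}
```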


Author(s):  
Damaris Fuentes-Lorenzo ◽  
Juan Miguel Gómez ◽  
Ángel García Crespo

This chapter deals with a semantic wiki application devoted to news publishing, CoolWikNews. This semantic application offers the functionalities of a traditional wiki, enhanced with semantic data. It focuses on simplicity of both use and browsing, accurate retrieval of information, and the flexibility to be applied in any domain apart from news publishing. In this chapter, definitions related to this topic are explained, along with the steps taken so far and the problems still to be overcome. The internals and advantages of CoolWikNews are presented, paying particular attention to how the application overcomes the problems that have arisen. The chapter concludes with future work and several remarks.


Author(s):  
Reinaldo Padilha França ◽  
Ana Carolina Borges Monteiro ◽  
Rangel Arthur ◽  
Yuzo Iano

The Semantic Web concept is an extension of the web obtained by adding semantics to the current data representation format; it can be considered a network of correlated meanings. It is the result of combining web-based conceptions and technologies with knowledge representation. The internet has gone through many changes across web versions 1.0, 2.0, and 3.0, the last of which is called the smart web. The concept of Web 3.0 is associated with the Semantic Web, since technological advances have allowed the internet to be present beyond the devices that were built specifically to receive a connection: it is no longer limited to computers or smartphones, as it embraces reading, writing, and execution off-screen, performed by machines. Therefore, this chapter aims to provide an updated review of the Semantic Web and its technologies, showing its technological origins and its path to success with a concise bibliographic background, categorizing and synthesizing the potential of these technologies.


2010 ◽  
Vol 2 (5) ◽  
Author(s):  
Johan Berntsson ◽  
Norman Lin ◽  
Zoltan Dezso

In this paper we present a general-purpose middleware, called ExtSim, that allows OpenSim to communicate with external simulation software and to synchronize the in-world representation of the simulator state. We briefly present two projects in ScienceSim where ExtSim has been used, Galaxsee, an interactive real-time N-body simulation, and a protein folding demonstration, before discussing the merits and problems of the current approach. The main limitation is that, until now, we have been restricted to a third-party viewer and a fixed server-client protocol, but we present our work on a new viewer, called 3Di Viewer “Rei”, which opens new possibilities for enhancing both the performance and the richness of visualization suitable for scientific computing. Finally, we discuss some ideas we are currently studying for future work.
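The abstract does not give ExtSim's protocol details; purely as an illustration of the general pattern of pushing external-simulation state into a virtual world, a hypothetical sketch of a periodic state-update sender might look as follows (all class names, the port, and the message format are invented here and are not ExtSim's API):

```java
import java.io.PrintWriter;
import java.net.Socket;

/** Hypothetical illustration only: not ExtSim's actual protocol or API. */
public class ExternalSimSender {
    public static void main(String[] args) throws Exception {
        // Connect to an (assumed) in-world synchronization endpoint.
        try (Socket socket = new Socket("localhost", 9300);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {

            double x = 0, y = 0, z = 0;
            for (int step = 0; step < 100; step++) {
                // Advance a toy simulation and push the object's new position
                // as a simple line-based message.
                x += 0.1;
                y += 0.05;
                out.printf("UPDATE body-1 %.3f %.3f %.3f%n", x, y, z);
                Thread.sleep(100); // roughly 10 updates per second
            }
        }
    }
}
```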


2016 ◽  
Vol 861 ◽  
pp. 547-555
Author(s):  
Melisa Čović ◽  
Ulrich Pont ◽  
Neda Ghiassi ◽  
Mahnameh Taheri ◽  
Rainer Bräuer ◽  
...  

The timely availability and quality of building product information is a critical prerequisite for a successful building delivery process. However, little is known about the processes by which stakeholders acquire and use such data. This contribution documents the results of recent relevant surveys addressing the processing of building product data by planners, clients, and the industry. Web questionnaires and interviews with opinion leaders were conducted. Altogether, over 100 participants provided pertinent insights regarding the strengths and weaknesses of current data representation practices. A comparison of the obtained data with that of an earlier study allows the evolutionary trends in web-based data provision to be documented. Most importantly, the results facilitate the formulation of strategies for a more effective presentation and distribution of building product data.


2020 ◽  
Vol 11 (01) ◽  
pp. 023-033
Author(s):  
Robert C. McClure ◽  
Caroline L. Macumber ◽  
Julia L. Skapik ◽  
Anne Marie Smith

Abstract Background Electronic clinical quality measures (eCQMs) seek to quantify the adherence of health care to evidence-based standards. This requires a high level of consistency to reduce the effort of data collection and ensure comparisons are valid. Yet, there is considerable variability in local data capture, in the use of data standards, and in implemented documentation processes, so organizations struggle to implement quality measures and extract data reliably for comparison across patients, providers, and systems. Objective In this paper, we discuss opportunities for harmonization within and across eCQMs; specifically, at the level of the measure concept, the logical clauses or phrases, the data elements, and the codes and value sets. Methods The authors, experts in measure development, quality assurance, standards, and implementation, reviewed measure structure and content to describe the state of the art for measure analysis and harmonization. Our review resulted in the identification of four measure component levels for harmonization. We provide examples of harmonization for each of the four measure components based on experience with current quality measurement programs, including the Centers for Medicare and Medicaid Services eCQM programs. Results In general, there are significant issues with the lack of harmonization across measure concepts, logical phrases, and data elements. This magnifies implementation problems, confuses users, and requires more elaborate data mapping and maintenance. Conclusion Comparisons using semantically equivalent data are needed to accurately measure performance and reduce workflow interruptions, with the aim of reducing evidence-based care gaps. It comes as no surprise that electronic health records designed for purposes other than quality improvement and used within a fragmented care delivery system would benefit greatly from common data representation, measure harmony, and consistency. We suggest that by enabling measure authors and implementers to deliver consistent electronic quality measure content in these four key areas, the industry can improve quality measurement.
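As a toy illustration of the lowest harmonization level, codes and value sets, the snippet below shows how two measures referencing one shared value set definition avoid divergent code lists; the value set identifier and codes are invented for the example and are not taken from the paper:

```java
import java.util.Set;

public class ValueSetExample {
    // A single shared value set (hypothetical identifier and code list) that both
    // measures reference, instead of each maintaining its own code list.
    static final String DIABETES_VALUE_SET_ID = "2.16.840.1.113883.3.464.EXAMPLE";
    static final Set<String> DIABETES_CODES = Set.of("E11.9", "E11.65", "E10.9"); // ICD-10-CM

    static boolean hasDiabetesDiagnosis(Set<String> patientDiagnosisCodes) {
        // Both eCQMs call the same predicate, so they stay semantically aligned.
        return patientDiagnosisCodes.stream().anyMatch(DIABETES_CODES::contains);
    }

    public static void main(String[] args) {
        Set<String> patient = Set.of("I10", "E11.9");
        System.out.println("Value set " + DIABETES_VALUE_SET_ID
                + " matched: " + hasDiabetesDiagnosis(patient));
    }
}
```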

