Linked Open Data - Applications, Trends and Future Developments
Latest Publications


TOTAL DOCUMENTS: 6 (FIVE YEARS: 6)

H-INDEX: 0 (FIVE YEARS: 0)

Published By IntechOpen

ISBN: 9781839626715, 9781839626722

Author(s):  
Julthep Nandakwang ◽  
Prabhas Chongstitvatana

Currently, Linked Data is growing at a rapid rate along with the growth of the Web. Aside from new information created expressly as Semantic Web-ready, part of it comes from transforming existing structured data into five-star open data. However, a great deal of legacy data in structured and semi-structured form, such as tables and lists, which are the principal human-readable formats, is still waiting to be transformed. In this chapter, we discuss research attempts to transform table and list data into various machine-readable formats. Furthermore, our research proposes a novel method for transforming tables and lists into RDF while thoroughly preserving their essential structure, so that their original form can be reconstructed without loss of information. We introduce a system named TULIP, which embodies this conversion method as a tool for the future development of the Semantic Web. Our method is more flexible than other approaches: the TULIP data model retains complete information about the source, so it can be projected into different views. This tool can be used to produce a tremendous amount of machine-usable data at a broader scale.
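
To make the idea concrete, the sketch below shows one generic way a small table could be mapped to RDF triples with rdflib while keeping row order and column labels. The TULIP vocabulary itself is not given in this abstract, so the namespace and property names (EX, hasRow, rowIndex) are placeholders rather than TULIP's actual terms.

```python
# Illustrative sketch only: the TULIP vocabulary is not detailed in this
# abstract, so a generic namespace (EX) and row/cell terms are assumed.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/table#")   # placeholder vocabulary

def table_to_rdf(header, rows, table_uri="http://example.org/data/table1"):
    """Convert a simple header+rows table into RDF triples, preserving
    row order and column labels so the original table can be rebuilt."""
    g = Graph()
    g.bind("ex", EX)
    table = URIRef(table_uri)
    g.add((table, RDF.type, EX.Table))
    for i, row in enumerate(rows):
        row_node = URIRef(f"{table_uri}/row{i}")
        g.add((table, EX.hasRow, row_node))
        g.add((row_node, EX.rowIndex, Literal(i)))
        for col, value in zip(header, row):
            g.add((row_node, EX[col], Literal(value)))
    return g

# Example: a two-column, two-row table
g = table_to_rdf(["country", "capital"],
                 [["Thailand", "Bangkok"], ["Japan", "Tokyo"]])
print(g.serialize(format="turtle"))
```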


Author(s):  
Anju Shukla ◽  
Shishir Kumar ◽  
Harikesh Singh

Computational approaches play a significant role in fields such as medical applications, astronomy, and weather science by performing complex calculations quickly. Today, personal computers are very powerful but underutilized: most computer resources are idle 75% of the time, and servers are often unproductive. This motivates distributed computing, in which geographically distributed resources are used to meet the demand for high-performance computing. The Internet allows users to access heterogeneous services and run applications over a distributed environment. Because of the openness and heterogeneous nature of distributed computing, developers must deal with several issues such as load balancing, interoperability, fault occurrence, resource selection, and task scheduling. Load balancing is the mechanism for distributing the load among resources optimally. The objective of this chapter is to discuss the need for load balancing and the issues that define the research scope. Various load balancing algorithms and scheduling methods used for performance optimization of web resources are analyzed, and a systematic literature review with their solutions and limitations is presented. The chapter provides a concise narrative of the problems encountered and directions for future extension.
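
As a minimal illustration of the load-balancing problem discussed above, the following Python sketch assigns tasks greedily to the currently least-loaded server. It is one generic heuristic, not any specific algorithm surveyed in the chapter, and the task costs and server count are invented for the example.

```python
# A minimal least-loaded (greedy) task-assignment sketch; the chapter surveys
# many load-balancing algorithms, and this is only one generic illustration.
import heapq

def assign_tasks(task_costs, num_servers):
    """Assign each task to the currently least-loaded server (greedy heuristic)."""
    # Min-heap of (current_load, server_id)
    heap = [(0.0, s) for s in range(num_servers)]
    heapq.heapify(heap)
    assignment = {}
    for task_id, cost in enumerate(task_costs):
        load, server = heapq.heappop(heap)
        assignment[task_id] = server
        heapq.heappush(heap, (load + cost, server))
    return assignment

# Example: five tasks of unequal cost spread over three servers
print(assign_tasks([5, 3, 8, 2, 4], 3))
```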


Author(s):  
Monday Osagie Adenomon

This book chapter investigates the place of the backtesting approach in financial time series analysis for choosing a reliable Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model to analyze stock returns in Nigeria. To achieve this, the chapter used secondary data collected from www.cashcraft.com under stock trend and analysis: daily Zenith Bank stock prices from October 21st, 2004 to May 8th, 2017. Nine GARCH variants were considered, each with a maximum lag of 2: standard GARCH (sGARCH), Glosten-Jagannathan-Runkle GARCH (gjrGARCH), exponential GARCH (eGARCH), integrated GARCH (iGARCH), asymmetric power ARCH (apARCH), threshold GARCH (TGARCH), nonlinear GARCH (NGARCH), nonlinear asymmetric GARCH (NAGARCH), and absolute value GARCH (AVGARCH). Most of the information criteria for the sGARCH model were unavailable due to lack of convergence. The lowest information criteria were associated with apARCH(2,2) with a Student's t distribution, followed by NGARCH(2,1) with a skewed Student's t distribution. The backtesting result for apARCH(2,2) was not available, and eGARCH(1,1) with a skewed Student's t distribution, NGARCH(1,1), NGARCH(2,1), and TGARCH(2,1) failed the backtest, but eGARCH(1,1) with a Student's t distribution passed. Therefore, under the backtesting approach, eGARCH(1,1) with a Student's t distribution emerged as the superior model for modeling Zenith Bank stock returns in Nigeria. The chapter recommends the backtesting approach for selecting a reliable GARCH model.
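
A hedged sketch of the model-selection step is shown below using the Python arch package. The chapter does not state its toolchain, the returns series here is synthetic, and only a few of the nine variants are fitted, so this illustrates comparing information criteria rather than reproducing the reported results.

```python
# Illustrative only: synthetic returns, a subset of the candidate models, and
# the `arch` package as an assumed (not stated) toolchain.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)   # placeholder daily returns

candidates = {
    "sGARCH(1,1)-t":     dict(vol="GARCH",  p=1, q=1, dist="t"),
    "eGARCH(1,1)-t":     dict(vol="EGARCH", p=1, q=1, dist="t"),
    "eGARCH(1,1)-skewt": dict(vol="EGARCH", p=1, q=1, dist="skewt"),
}

for name, spec in candidates.items():
    res = arch_model(returns, mean="Constant", **spec).fit(disp="off")
    print(f"{name}: AIC={res.aic:.1f}  BIC={res.bic:.1f}")

# The model with the lowest information criteria that also passes an
# out-of-sample VaR backtest (e.g., counting exceedances against the
# predicted quantile) would be retained, as the chapter recommends.
```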


Author(s):  
Jung-Ran Park ◽  
Andrew Brenza ◽  
Lori Richards

The BIBFRAME model is designed with a high degree of flexibility, in that it can accommodate any number of existing models as well as models yet to be developed within the Web environment. The model's flexibility is intended to foster extensibility. This study discusses the relationship of BIBFRAME to the prevailing content standards and models employed, or in the process of being adopted, by cultural heritage institutions such as museums, archives, libraries, historical societies, and community centers. The aim is to determine the degree to which BIBFRAME, as it is currently understood, can be a viable and extensible framework for bibliographic description and exchange in the Web environment. We highlight areas of compatibility as well as areas of incompatibility. BIBFRAME holds the promise of freeing library data from the silos of online catalogs, permitting library data to interact with data both within and outside the library community. We discuss some of the challenges that need to be addressed in order to realize the potential capabilities of the BIBFRAME model.
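
As a small illustration of the kind of data BIBFRAME describes, the sketch below builds a minimal Work/Instance pair with rdflib. The resource URIs are placeholders, only a few core BIBFRAME 2.0 terms (Work, Instance, instanceOf, title) are used, and the title is simplified to a plain literal rather than a full bf:Title node.

```python
# Minimal, illustrative BIBFRAME description; URIs are placeholders and the
# modeling is deliberately simplified.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
g = Graph()
g.bind("bf", BF)

work = URIRef("http://example.org/work/1")
instance = URIRef("http://example.org/instance/1")

g.add((work, RDF.type, BF.Work))
g.add((work, BF.title, Literal("Linked Open Data")))   # simplified title
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))

print(g.serialize(format="turtle"))
```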


Author(s):  
Kingsley Okoye

Today, one of the state-of-the-art technologies that has shown its importance for data integration and analysis is linked open data (LOD) systems and applications. LOD consists of machine-readable resources and mechanisms that are useful for describing data properties. However, one issue with existing systems and data models is the need not only to represent the derived information (data) in formats that humans can easily understand, but also to create systems that can process the information they contain or support. Technically, the main mechanism for developing such data or information processing systems is aggregating or computing metadata descriptions for the various process elements. This is because there is, more than ever, a need for a more generalized and standard definition of data (or information) in order to create systems capable of providing understandable formats for different data types and sources. To this end, this chapter proposes a semantic-based linked open data framework (SBLODF) that integrates the different elements (entities) within information systems or models with semantics (metadata descriptions) to produce explicit and implicit information in response to users' searches or queries. In essence, this work introduces a machine-readable and machine-understandable system that proves useful for encoding knowledge about different process domains, as well as providing the discovered information (knowledge) at a more conceptual level.
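
The sketch below illustrates, in a generic way, the query side of such a framework: metadata descriptions attached to process elements and retrieved with SPARQL via rdflib. SBLODF itself is not specified at the code level in this abstract, so the namespace, the performedBy property, and the example data are assumptions made for illustration.

```python
# Generic illustration: attach metadata descriptions to process elements and
# answer a user query with SPARQL. Names and data are placeholders.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/process#")
g = Graph()
g.bind("ex", EX)

step = URIRef("http://example.org/process/step1")
g.add((step, RDF.type, EX.ProcessElement))
g.add((step, RDFS.label, Literal("order validation")))
g.add((step, EX.performedBy, Literal("billing service")))

# A user query: find process elements and who performs them
results = g.query("""
    PREFIX ex: <http://example.org/process#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label ?actor WHERE {
        ?e a ex:ProcessElement ; rdfs:label ?label ; ex:performedBy ?actor .
    }
""")
for label, actor in results:
    print(label, "->", actor)
```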


Author(s):  
Kuo-Chi Chang ◽  
Kai-Chun Chu ◽  
Hsiao-Chuan Wang ◽  
Yuh-Chung Lin ◽  
Tsui-Lien Hsu ◽  
...  

Modern FABs use a large number of high-energy processes, including plasma, CVD, and ion implantation, and furnaces are among the important tools for semiconductor manufacturing. To meet production management requirements, the FAB installed an IoT-based research setup for a 12-inch, 7 nm-level furnace chip process. Two furnace processing tool measurement points were set up in a 12-inch, 7 nm-level factory in Hsinchu Science Park, Taiwan. The system monitors continuously, 24 hours a day; the data obtained every second are sequentially sent to and stored in the cloud system. This study uses the cloud database for big data analysis and decision making. The lower limits for TEOS, C2H4, and CO are 0.4, 1.5, and 1 ppm, respectively. Enabling IoT integration and big data operations across all semiconductor processes is an important step toward intelligent FAB production and an important contribution of this research.
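
A minimal sketch of the per-second threshold check described above follows. Only the lower limits (TEOS 0.4 ppm, C2H4 1.5 ppm, CO 1 ppm) come from the abstract; the reading format and the cloud push are placeholder assumptions rather than the deployed system's actual interface.

```python
# Illustrative monitoring sketch; payload format and cloud push are assumed.
import json
import time

LOWER_LIMIT_PPM = {"TEOS": 0.4, "C2H4": 1.5, "CO": 1.0}

def check_reading(reading):
    """Return the gases whose concentration fell below the lower limit."""
    return [gas for gas, limit in LOWER_LIMIT_PPM.items()
            if reading.get(gas, float("inf")) < limit]

def push_to_cloud(reading):
    # Placeholder: in the described system each per-second record is sent to
    # the cloud database for big data analysis and decision making.
    print(json.dumps({"ts": time.time(), **reading}))

reading = {"TEOS": 0.35, "C2H4": 1.7, "CO": 1.2}   # example values
alarms = check_reading(reading)
if alarms:
    print("Below lower limit:", alarms)
push_to_cloud(reading)
```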

