Velocity on the Web

Author(s):  
Riccardo Tommasini

A new generation of Web applications is pushing the Web infrastructure to process data as soon as they arrive and before they lose their value. However, the Web infrastructure as it stands is not adequate for this task, and Stream Processing technologies cannot deal with heterogeneous data streams and events. To solve these issues, we need to investigate how to identify, represent, and process streams and events on the Web. In this chapter, we discuss recent advancements in taming Velocity on the Web of Data without neglecting Data Variety. To this end, we present a Design Science research investigation that builds on the state of the art in Stream Reasoning and RDF Stream Processing. We present our research results for representing and processing streams and events on the Web, and we discuss their potential impact.
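To make the stream-processing idea concrete: RDF Stream Processing engines typically evaluate continuous queries over time-based windows. The Python sketch below is a hypothetical illustration of ours, not taken from the chapter, of a tumbling-window query over timestamped triples; the class, predicate, and sensor names are invented.

```python
from dataclasses import dataclass

@dataclass
class TimestampedTriple:
    subject: str
    predicate: str
    obj: str
    timestamp: float  # seconds since the stream started

class TumblingWindow:
    """Collects stream items into fixed-width, non-overlapping time windows."""

    def __init__(self, width: float):
        self.width = width
        self.window_end = width
        self.buffer = []

    def push(self, item: TimestampedTriple):
        """Add an item; return the content of a window that just closed, if any."""
        closed = None
        if item.timestamp >= self.window_end:
            closed = self.buffer
            self.buffer = []
            while item.timestamp >= self.window_end:
                self.window_end += self.width
        self.buffer.append(item)
        return closed

# Continuous query: report sensors whose reading exceeds 30 in each window.
window = TumblingWindow(width=10.0)
stream = [
    TimestampedTriple(":sensor1", ":hasTemperature", "28.0", 2.0),
    TimestampedTriple(":sensor2", ":hasTemperature", "33.5", 7.5),
    TimestampedTriple(":sensor1", ":hasTemperature", "31.0", 12.0),
]
for triple in stream:
    closed = window.push(triple)
    if closed is not None:
        hot = [t.subject for t in closed
               if t.predicate == ":hasTemperature" and float(t.obj) > 30.0]
        print("window closed, hot sensors:", hot)
```

A production engine would register such a query once and evaluate it continuously as windows close, which is precisely the "process before the data lose value" discipline the abstract describes.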

Author(s):  
Norman Schaffer ◽  
Martin Engert ◽  
Girts Leontjevs ◽  
Helmut Krcmar

Software tools hold great promise to support the modeling, analysis, and innovation of business models. Current tools focus only on the design of business models and do not incorporate the complexity of the existing interdependencies between business model components. Nor do these tools allow simulating the inherent dynamics within the models or different strategic decision scenarios. In this research, we use design science research to develop a prototype that is capable of modeling and simulating dynamic business models. We use system dynamics as the simulation approach and containers to allow deployment as web applications. This paper represents the first of three design cycles, realizing six out of 59 requirements collected from the literature on software tools for business models. We contribute toward the design of novel artifacts for business model innovation as well as their evaluation. Future research can use these results to build tools that consider and address the complexity of business models. Lastly, we present several options for extending the proposed tool in the future.
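Since system dynamics is named as the simulation approach, a minimal stock-and-flow sketch may clarify what simulating a dynamic business model involves. The model below, a single "customers" stock with acquisition and churn flows, is our own hypothetical example, not the paper's prototype.

```python
def simulate(customers0=1000.0, acquisition_rate=0.08, churn_rate=0.03,
             dt=0.25, horizon=24.0):
    """Euler integration of the stock equation d(customers)/dt = inflow - outflow."""
    customers, t = customers0, 0.0
    trajectory = [(t, customers)]
    while t < horizon:
        inflow = acquisition_rate * customers   # customer acquisition flow
        outflow = churn_rate * customers        # churn flow
        customers += (inflow - outflow) * dt    # integrate the stock
        t += dt
        trajectory.append((round(t, 2), round(customers, 1)))
    return trajectory

print(simulate()[-1])  # stock level after 24 simulated months
```

Interdependencies between business model components would appear as additional coupled stocks and flows, with the rates exposed as the decision variables a user varies across strategic scenarios.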


Author(s):  
Alexey Cheptsov ◽  
Stefan Wesner ◽  
Bastian Koller

Modern Semantic Web scenarios require reasoning algorithms to be flexible, modular, and highly configurable. The monolithic approach followed in the design of most existing reasoners is not sufficient when dealing with today's challenges of analyzing data across multiple sources of heterogeneous data, or when the data volume grows to "Big Data" sizes. The "reasoning as a workflow" concept has attracted a lot of attention in the design of new-generation Semantic Web applications, offering many opportunities to improve both the flexibility and the scalability of the reasoning process. Treating a single workflow component as a service allows a reasoning algorithm to target a much wider range of Semantic Web use cases by taking advantage of a service-oriented, component-based implementation. We introduce a technique for developing service-oriented Semantic Reasoning applications based on the workflow concept. We also present the Large Knowledge Collider, a software platform for developing workflow-based Semantic Web applications that takes advantage of on-demand high-performance computing and cloud infrastructures.
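The "reasoning as a workflow" idea can be sketched as the composition of independent services. The Python toy below is ours, not the Large Knowledge Collider API: it chains reasoning steps that, in a deployment, could each be a remote service invocation.

```python
from typing import Any, Callable, Dict

# Each workflow component maps a shared payload to an enriched payload.
Service = Callable[[Dict[str, Any]], Dict[str, Any]]

def make_workflow(*steps: Service) -> Service:
    """Compose independent reasoning services into a single workflow."""
    def run(payload: Dict[str, Any]) -> Dict[str, Any]:
        for step in steps:
            payload = step(payload)
        return payload
    return run

# Hypothetical components standing in for real reasoning services.
def load_ontology(p):
    p["axioms"] = [("A", "subClassOf", "B")]
    return p

def materialize(p):
    # Toy forward chaining: apply each subclass axiom to the asserted facts.
    inferred = []
    for s, _, o in p["facts"]:
        for sub, _, sup in p["axioms"]:
            if o == sub:
                inferred.append((s, "type", sup))
    p["inferred"] = inferred
    return p

pipeline = make_workflow(load_ontology, materialize)
print(pipeline({"facts": [("x", "type", "A")]})["inferred"])
```

Because each step shares only the payload contract, a component can be swapped for a scaled-out remote service without changing the rest of the workflow, which is the flexibility the abstract argues for.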


Author(s):  
Leila Zemmouchi-Ghomari

Data play a central role in the effectiveness and efficiency of web applications, such as the Semantic Web. However, data are distributed across a very large number of online sources, so significant effort is needed to integrate these data for proper utilization. A promising solution to this issue is the linked data initiative, which is based on four principles for publishing web data and which promotes interlinked, structured online data rather than the existing web of documents. This paper surveys the basic ideas, techniques, and applications of the linked data initiative. The authors discuss open issues in Linked Data and potential directions for addressing these pending questions.
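As a concrete illustration of the publishing principles the survey refers to (HTTP URIs that, when dereferenced, return useful structured descriptions), here is a minimal Python sketch of dereferencing a Linked Data URI with content negotiation. The requests package and the DBpedia URI are assumptions of ours for the example.

```python
import requests  # assumption: the requests package is available

def dereference(uri: str) -> str:
    """Dereference a Linked Data URI, asking for Turtle via HTTP content
    negotiation, and return the RDF description as text."""
    response = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=10)
    response.raise_for_status()
    return response.text

# DBpedia is a well-known Linked Data source that honors content negotiation.
print(dereference("http://dbpedia.org/resource/Linked_data")[:300])
```

The returned Turtle contains links to further URIs, which is the fourth principle in action: following links integrates data across sources without a central database.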


2020 ◽  
Author(s):  
Nils Brinckmann ◽  
Massimiliano Pittore ◽  
Matthias Rüster ◽  
Benjamin Proß ◽  
Juan Camilo Gomez-Zapata

Today's Earth-related scientific questions are more complex and more interdisciplinary than ever, so much so that it is extremely challenging for single-domain experts to master all the different aspects of a problem at once. As a consequence, modular and distributed frameworks are increasingly gaining momentum, since they allow the collaborative development of complex, multidisciplinary processing solutions.

A technical implementation focuses on the use of modern web technologies, with their broad variety of standards, protocols, and available development frameworks. RESTful services, one of the main drivers of the modern web, are often suboptimal for implementing complex scientific processing solutions. In fact, while they offer great flexibility, they also tend to be bound to very specific formats (and are often poorly documented).

With the introduction of the Web Processing Service (WPS) specifications, the Open Geospatial Consortium (OGC) proposed a standard for implementing a new generation of computing modules that overcome most of the drawbacks of the RESTful approach. The WPS allows a flexible and reliable specification of input and output formats, as well as the exploration of a service's capabilities through the GetCapabilities and DescribeProcess operations.

The main drawback of the WPS approach with respect to RESTful services is that the latter can easily be implemented in any programming language, while efficient WPS integration currently relies mostly on Java, C, and Python implementations. In Earth Science research we are often confronted with a plethora of programming languages and coding environments. Converting already existing, complex scientific programs into a language suitable for WPS integration can be a daunting effort and may even introduce additional errors due to conflicts and misunderstandings between the original code authors and the developers working on the WPS integration. Maintenance of these hybrid processing components is also often difficult, since most scientists are not familiar with web programming technologies and, conversely, the web developers cannot (or do not have the time to) get adequately acquainted with the underlying science.

Facing these problems in the context of the RIESGOS project, we developed a framework for a Java-based WPS server able to run any kind of scientific code: scripts or command-line programs. The proposed approach is based on the use of Docker containers to encapsulate the running processes, and Docker images to manage all necessary dependencies.

A simple set of ASCII configuration files provides all the information needed for WPS integration: how to call the program, how to pass input parameters (including command-line arguments and input files), and how to interpret the program's output, both from stdout and from serialized files. A set of predefined format converters is provided, along with extension mechanisms that allow maximum flexibility.

The result is an encapsulated, modular, safe, and extendable architecture that allows scientists to expose their scientific programs on the web with little effort, and to collaboratively create complex, multidisciplinary processing pipelines.
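To make the configuration-driven design concrete, the sketch below shows, in Python rather than the project's Java, how such a wrapper might read a small ASCII config file and launch the described program inside a Docker container. The config keys and image name are hypothetical, not the RIESGOS schema.

```python
import configparser
import subprocess

# Hypothetical wrapper config (illustrative only), e.g. wrapper.cfg:
#   [process]
#   docker_image = quakesim:1.0
#   command = python run_simulation.py

def run_wrapped_process(config_path: str, *args: str) -> str:
    """Run a containerized scientific program described by an ASCII config
    file and return its stdout for later parsing by format converters."""
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    image = cfg["process"]["docker_image"]
    command = cfg["process"]["command"].split()
    completed = subprocess.run(
        ["docker", "run", "--rm", image, *command, *args],
        capture_output=True, text=True, check=True)
    return completed.stdout

# Usage: the captured stdout would next be mapped to WPS outputs
# according to the output rules in the same configuration file.
# print(run_wrapped_process("wrapper.cfg", "--magnitude", "7.5"))
```

The key design point is that scientists only edit the config file and the container image; the WPS server code never changes, so no scientific code has to be rewritten in a web-friendly language.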


2014 ◽  
Vol 536-537 ◽  
pp. 494-498
Author(s):  
Wen Ming Shuai ◽  
Xiu Fen Fu

With the rapid development of information technology, heterogeneous Web data are growing quickly, and so are the requirements for accessing the Web of data. In view of this, a method of heterogeneous data integration based on SOA (Service-Oriented Architecture) is proposed. The method combines middleware technology with SOA design: using XML and Web services technologies, it presents a framework for SOA-based heterogeneous data integration and introduces the architecture of the SOA data-integration middleware. Experimental results show that this method effectively reduces the coupling of the heterogeneous data integration system and improves its scalability.
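A minimal sketch of the middleware idea, assuming XML as the canonical exchange format: adapters map records from heterogeneous sources into one XML envelope that the SOA services exchange. The function and field names are ours, for illustration only.

```python
import json
import xml.etree.ElementTree as ET

def to_canonical_xml(record: dict) -> str:
    """Wrap a source record in a common XML envelope, the exchange format
    the integration middleware would pass between services."""
    root = ET.Element("record")
    for key, value in record.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Two heterogeneous sources, one canonical representation.
from_csv = dict(zip(["id", "name"], "42,widget".split(",")))
from_json = json.loads('{"id": 43, "name": "gadget"}')
print(to_canonical_xml(from_csv))
print(to_canonical_xml(from_json))
```

Because consumers see only the canonical XML, a new data source requires only a new adapter, which is where the reduced coupling the experiments measure comes from.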


2018 ◽  
Vol 48 (3) ◽  
pp. 84-90 ◽  
Author(s):  
E. A. Lapchenko ◽  
S. P. Isakova ◽  
T. N. Bobrova ◽  
L. A. Kolpakova

It is shown that the application of Internet technologies is relevant to the selection of crop production technologies and the formation of a rational composition of the machine-and-tractor fleet, taking into account the conditions and production resources of a particular agricultural enterprise. The work gives a short description of the web applications "ExactFarming", "Agrivi" and "AgCommand", which make it possible to select technologies and technical means of soil treatment, and of their functions. "ExactFarming" allows users to collect and store information about temperature, precipitation, and weather forecasts in certain areas, keep records of information about crops, and make technological maps using expert templates. "Agrivi" stores and provides access to weather information for fields with certain crops. It has algorithms to detect and warn about risks related to diseases and pests, and it provides economic calculations of crop profitability and crop planning. "AgCommand" tracks the position of machinery and equipment in the fields and provides data on the weather situation in order to plan the use of agricultural machinery. The web applications presented above do not relate the technologies applied to the agro-climatic features of the farm's location zone. They do not take into account the phytosanitary conditions of previous years, or the relief and contours of the fields, when drawing up technological maps or selecting the machine-and-tractor fleet. The Siberian Physical-Technical Institute of Agrarian Problems of the Siberian Federal Scientific Center of AgroBioTechnologies of the Russian Academy of Sciences has developed a software complex, PIKAT, for supporting machine agrotechnologies for the production of spring wheat grain at an agricultural enterprise; on this basis, there is a plan to develop a web application that will consider all the main factors limiting the yield of cultivated crops.


2019 ◽  
Author(s):  
FRANCISCO CARLOS PALETTA

This work presents partial results of a research project conducted at the Observatory of the Labor Market in Information and Documentation, School of Communications and Arts of the University of São Paulo, on Information Science and the Digital Humanities. It discusses the Digital Humanities and informational literacy, highlights the evolution of the Web, the digital library, and their connections with the Digital Humanities, and reflects on the challenges of Digital Humanities transdisciplinarity and its connections with Information Science. This is an exploratory study, mainly owing to the currency and emergence of the theme and the incipient bibliography existing both in Brazil and abroad.

Keywords: Digital Humanities; Information Science; Transdisciplinarity; Information Literacy; Web of Data; Digital Age.

