Asynchronous XSLT

Author(s):  
Michael Kay

This paper describes a proposal for language extensions to XSLT 3.0, and to the XDM data model, to provide for asynchronous processing. The proposal is particularly motivated by the requirement for asynchronous retrieval of external resources on the JavaScript platform (whether client-side or server-side), but other use cases for asynchronous processing, and other execution platforms, are also considered.
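
To make the motivation concrete, the following minimal sketch (not the paper's proposal itself) shows why retrieval must be asynchronous on the JavaScript platform: fetch() returns a Promise, so an XSLT engine cannot block while a doc() call loads an external resource. The applyTransform function is a hypothetical stand-in for an engine entry point.

```typescript
// A hedged sketch: both resources must be awaited before the transform
// can run; a blocking, synchronous doc() call is impossible here.
async function transformWithExternalDoc(
  stylesheetUrl: string,
  sourceUrl: string
): Promise<Document> {
  const [stylesheetText, sourceText] = await Promise.all([
    fetch(stylesheetUrl).then(r => r.text()),
    fetch(sourceUrl).then(r => r.text()),
  ]);
  const parser = new DOMParser();
  const stylesheet = parser.parseFromString(stylesheetText, "application/xml");
  const source = parser.parseFromString(sourceText, "application/xml");
  return applyTransform(stylesheet, source); // hypothetical engine call
}

// Hypothetical placeholder for an XSLT engine's transform entry point.
declare function applyTransform(stylesheet: Document, source: Document): Document;
```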

Author(s):  
Kostyantyn Kharchenko

We review an approach to organizing the automated execution of computations using web services (in particular, REST services). The proposed solution simplifies the introduction of new functionality into applied systems built on service-oriented and microservice architecture principles. Its main idea is the maximal separation of server-side and client-side development: clients state abstract computation goals without any dependency on the existing applied services. We propose a centralized scheme for organizing the computations (orchestration) and a knowledge base holding the set of rules used to build, in multiple steps, a concrete computational scenario from an abstract goal. The software architecture of the applied system is extended with a task-execution subsystem composed of a service that processes incoming execution requests, a service registry, and an orchestration service. Clients send requests to the execution subsystem without any reference to the real services to be called. The service registry searches the knowledge base for a matching input request template, and then for the abstract operation description corresponding to that template. Each abstract operation may already have an implementation in the form of a workflow composed of invocations of the operations of real applied services. If no corresponding workflow exists in the database, one can be synthesized dynamically from the input and output data and from the functionality descriptions of the abstract operation and the registered applied services. The workflows are executed by the orchestrator service. Thus new functions can be added on the client side without any changes on the server side; conversely, newly added services can affect how the calculations are executed without updating the clients.
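
The following sketch illustrates the request flow described above, with all type and function names hypothetical: the client states an abstract goal, the registry resolves it (or synthesizes a workflow when none is stored), and the orchestrator executes the resulting chain of service calls.

```typescript
// Hypothetical shapes for the execution subsystem's request flow.
interface AbstractGoal { goal: string; inputs: Record<string, unknown>; }
interface WorkflowStep { service: string; operation: string; }
interface Workflow { steps: WorkflowStep[]; }

interface ServiceRegistry {
  // Matches the request against stored templates, then looks up the
  // abstract operation and its stored workflow implementation, if any.
  resolve(goal: AbstractGoal): Workflow | undefined;
  // Synthesizes a workflow from operation and service descriptions
  // when no stored implementation exists.
  synthesize(goal: AbstractGoal): Workflow;
}

async function execute(
  goal: AbstractGoal,
  registry: ServiceRegistry,
  invoke: (step: WorkflowStep, data: unknown) => Promise<unknown>
): Promise<unknown> {
  const workflow = registry.resolve(goal) ?? registry.synthesize(goal);
  let data: unknown = goal.inputs;
  for (const step of workflow.steps) {
    data = await invoke(step, data); // orchestrator calls each applied service in turn
  }
  return data;
}
```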


2003
Vol 3 (2)
pp. 170-173
Author(s):
Karthik Ramani
Abhishek Agrawal
Mahendra Babu
Christoph Hoffmann

New and efficient paradigms for web-based collaborative product design in a global economy will be driven by increased outsourcing, increased competition, and pressure to reduce product development time. We have developed a collaborative shape design system with a three-tier (client-server-database) architecture, Computer Aided Distributed Design and Collaboration (CADDAC). CADDAC has a centralized geometry kernel and constraint solver. The server side provides support for solid modeling, constraint-solving operations, data management, and synchronization of clients. The client side performs real-time creation, modification, and deletion of geometry over the network. To keep the clients thin, many computationally intensive operations are performed at the server; only the graphics rendering pipeline operations are performed on the client side. A key contribution of this work is a flexible architecture that decouples Application Data (Model), Controllers, Viewers, and Collaboration. This decoupling allows new features to be developed and managed in a modular fashion.
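
A minimal sketch of the thin-client split described above, with hypothetical message types and endpoint (not CADDAC's actual protocol): clients send modeling operations, the server's centralized kernel and solver do the heavy work, and clients receive only the data needed for rendering.

```typescript
// Hypothetical operation and response shapes for a server-side geometry kernel.
type ModelingOp =
  | { kind: "create"; shape: "box" | "cylinder"; params: number[] }
  | { kind: "modify"; featureId: string; params: number[] }
  | { kind: "delete"; featureId: string };

interface RenderUpdate {
  featureId: string;
  triangles: number[]; // tessellated geometry for the client's rendering pipeline only
}

async function sendOp(op: ModelingOp): Promise<RenderUpdate> {
  // The server evaluates the operation against the centralized kernel and
  // constraint solver, then returns render data so clients stay synchronized.
  const response = await fetch("/caddac/ops", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(op),
  });
  return response.json();
}
```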


2013
Vol 739
pp. 628-631
Author(s):
Xiao Meng Chen
Wei Chang Feng

The E-Box multimedia system is built around the rich audio and video resources available on the Internet. Its server side automatically searches for network video and audio resources, integrates them, and sends them to the client side so that the user can watch them in real time, as with broadcast television, operating the system entirely by remote control; in short, it is a very easy-to-use multimedia system. This article introduces its infrastructure and main technical ideas, together with details of the server side and the client side.
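
The abstract gives no implementation detail, but the server-side idea can be sketched as follows, with all names hypothetical: periodically query known network sources for audio/video resources and integrate the results into a single catalogue the client can browse.

```typescript
// Hypothetical resource shape and source APIs for server-side integration.
interface MediaResource { title: string; streamUrl: string; source: string; }

async function integrateResources(sourceApis: string[]): Promise<MediaResource[]> {
  const catalogue: MediaResource[] = [];
  const seen = new Set<string>();
  for (const api of sourceApis) {
    const found: MediaResource[] = await (await fetch(api)).json();
    for (const item of found) {
      if (!seen.has(item.streamUrl)) { // drop duplicates found via multiple sources
        seen.add(item.streamUrl);
        catalogue.push(item);
      }
    }
  }
  return catalogue;
}
```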


2011
Vol 338
pp. 796-799
Author(s):
Wei Chang Feng

The E-Yuan multimedia system is built around the rich audio and video resources available on the Internet. Its server side automatically searches for network video and audio resources, integrates them, and sends them to the client side so that the user can watch them in real time, as with broadcast television, operating the system entirely by remote control; in short, it is a very easy-to-use multimedia system. This article introduces its infrastructure and main technical ideas, together with details of the server side and the client side. At the same time, improvements in how video resources are collected and integrated are elaborated in full.
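
The client side of such a system can be sketched as follows (hypothetical endpoint and fields): the client fetches the server's integrated catalogue and drives real-time playback from remote-control keys.

```typescript
// Hypothetical catalogue entry shape delivered by the server.
interface CatalogueEntry { id: string; title: string; streamUrl: string; }

async function playSelected(
  serverBase: string,
  onKey: (handler: (key: string) => void) => void
): Promise<void> {
  const entries: CatalogueEntry[] = await (await fetch(`${serverBase}/catalogue`)).json();
  let index = 0;
  const video = document.createElement("video");
  document.body.appendChild(video);

  const play = () => { video.src = entries[index].streamUrl; void video.play(); };
  onKey(key => { // remote-control keys mapped by the platform
    if (key === "next") index = (index + 1) % entries.length;
    if (key === "prev") index = (index - 1 + entries.length) % entries.length;
    play();
  });
  play();
}
```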


2017
Vol 7 (1.1)
pp. 230
Author(s):
C. Vasan Sai Krishna
Y. Bhuvana
P. Pavan Kumar
R. Murugan

In a typical DoS attack, the attacker tries to bring the server down by sending a large number of bogus queries that consume its computing power and bandwidth. Because a server's bandwidth and computing power usually exceed those of a single client machine, the attacker enlists a group of connected computers: a DDoS attack involves many client machines hijacked by the attacker (together called a botnet). As the server handles all of these requests, its resources are exhausted and it can no longer provide service. In this project, we are concerned with reducing the computational load on the server side by giving each client a puzzle to solve. To prevent such attacks, we use a client-puzzle mechanism, which requires the client machine to perform a task that consumes computational resources before its request is accepted. The client's request is not sent directly to the server; instead, an intermediate server monitors all requests destined for the main server. Before a client's request is forwarded, the client must solve a puzzle and submit the answer. The intermediate server validates the answer and either grants the client access or blocks it from reaching the server.
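
One common client-puzzle construction (the chapter's exact scheme may differ) is a hash pre-image search, sketched below: the intermediate server issues a random nonce and a difficulty d, and the client must find a counter such that SHA-256(nonce + counter) begins with d zero hex digits. Solving costs roughly 16^d hash evaluations, while verification costs one.

```typescript
import { createHash, randomBytes } from "crypto";

// Intermediate server: issue a fresh puzzle per incoming request.
function issuePuzzle(difficulty: number) {
  return { nonce: randomBytes(16).toString("hex"), difficulty };
}

// Client: brute-force a counter whose hash meets the difficulty target.
function solvePuzzle(nonce: string, difficulty: number): number {
  const target = "0".repeat(difficulty);
  for (let counter = 0; ; counter++) { // expected cost ~16^difficulty hashes
    const digest = createHash("sha256").update(nonce + counter).digest("hex");
    if (digest.startsWith(target)) return counter;
  }
}

// Intermediate server: one hash to verify, so it stays cheap while each
// (possibly hijacked) client pays the solving cost before reaching the server.
function verify(nonce: string, difficulty: number, counter: number): boolean {
  const digest = createHash("sha256").update(nonce + counter).digest("hex");
  return digest.startsWith("0".repeat(difficulty));
}
```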


2021
Author(s):
Andrii Salnikov
Balázs Kónya

Distributed e-Infrastructure is a key component of modern big science. Service discovery in e-Science environments, such as the Worldwide LHC Computing Grid (WLCG), is a crucial functionality that relies on a service registry. In this paper, we re-formulate the requirements for the service endpoint registry based on our more than ten years of experience with the many systems designed for or used within the WLCG e-Infrastructure. To satisfy those requirements, the paper proposes the novel idea of using the existing, well-established Domain Name System (DNS) infrastructure, together with a suitable data model, as a service endpoint registry. The presented ARC Hierarchical Endpoints Registry (ARCHERY) system consists of a minimalistic data model representing services and their endpoints within e-Infrastructures, a rendering of the data model embedded into DNS records, a lightweight software layer for DNS-record management, and client-side data discovery. Our approach required minimal software development and inherits all the benefits of one of the most reliable distributed information-discovery sources on the Internet, the DNS infrastructure; in particular, deployment, management and operation of ARCHERY rely entirely on DNS. Results from ARCHERY deployment use cases are provided together with a performance analysis.
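
The client-side discovery half of such a system can be sketched with a standard DNS resolver, as below. The key=value rendering of endpoint data shown here is illustrative only; the actual ARCHERY data model embedded in DNS records is defined in its documentation.

```typescript
import { promises as dns } from "dns";

interface Endpoint { url: string; type: string; }

// Resolve TXT records under a registry domain and parse them into endpoints.
// Assumed record format (illustrative): "u=https://ce.example.org t=org.example.service"
async function discoverEndpoints(registryDomain: string): Promise<Endpoint[]> {
  const records = await dns.resolveTxt(registryDomain);
  return records
    .map(chunks => chunks.join("")) // long TXT records arrive split into chunks
    .map(txt => {
      const fields = new Map(
        txt.split(" ").map(p => [p[0], p.slice(2)] as [string, string])
      );
      return { url: fields.get("u") ?? "", type: fields.get("t") ?? "" };
    })
    .filter(e => e.url !== "");
}
```

Because resolution, caching and replication are handled by ordinary DNS servers, the registry needs no dedicated server-side software, which is the core of the reliability argument above.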


2015
Vol 12 (2)
pp. 655-681
Author(s):
Tomas Cerny
Miroslav Macik
Michael Donahoo
Jan Janousek

Increasing demands on user interface (UI) usability, adaptability, and dynamic behavior drive ever-growing development and maintenance complexity. Traditional UI design techniques result in complex descriptions for data presentations with significant information restatement. In addition, the multiple concerns in UI development lead to descriptions that exhibit concern tangling, which results in high fragment replication. Concern-separating approaches address these issues; however, they fail to maintain the separation of concerns for execution tasks such as rendering or UI delivery to clients. During the rendering process on the server side, the separation collapses into entangled concerns that are provided to clients. Such client-side entanglement may seem inconsequential, since the clients simply display what is sent to them; however, it compromises client performance, leading to problems such as replication and fragment granularity ill-suited for effective caching. This paper considers the advantages brought by concern separation from both perspectives. It proposes an extension to aspect-oriented UI design with distributed concern delivery (DCD) for client-server applications. Such an extension lessens the server-side involvement in UI assembly and reduces fragment replication in the provided UI descriptions. The server provides clients with individual UI concerns, and the clients become partially responsible for UI assembly. This change increases client-side concern reuse and extends caching opportunities, reducing the volume of information transmitted between client and server and improving UI responsiveness and performance. The underlying aspect-oriented UI design automates the server-side derivation of concerns related to data presentations adapted to runtime context, security, conditions, etc. The approach is evaluated in a case study applying DCD to an existing production web application. Our results demonstrate decreased volumes of UI descriptions assembled on the server side and extended client-side caching abilities, reducing the required data/fragment transmission and improving UI responsiveness. Furthermore, we evaluate the potential benefits of DCD integration in selected UI frameworks.
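
A minimal sketch of the DCD idea (hypothetical endpoints and a deliberately simplified weaving step): the server delivers each UI concern separately, the client caches the reusable concerns and re-fetches only the volatile data concern, then assembles the final UI locally.

```typescript
// Client-side cache for reusable concerns (layout, presentation).
const concernCache = new Map<string, string>();

async function fetchConcern(
  name: "layout" | "presentation" | "data",
  form: string
): Promise<string> {
  const key = `${name}:${form}`;
  if (name !== "data" && concernCache.has(key)) {
    return concernCache.get(key)!; // reuse a previously delivered concern
  }
  const body = await (await fetch(`/ui/${form}/${name}`)).text(); // hypothetical endpoints
  if (name !== "data") concernCache.set(key, body);
  return body;
}

async function assembleForm(form: string): Promise<string> {
  const [layout, presentation, data] = await Promise.all([
    fetchConcern("layout", form),
    fetchConcern("presentation", form),
    fetchConcern("data", form),
  ]);
  // Client-side weaving: only the small data concern traveled uncached.
  return layout.replace("{presentation}", presentation).replace("{data}", data);
}
```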


Author(s):  
Matt Woodburn
Gabriele Droege
Sharon Grant
Quentin Groom
Janeen Jones
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To achieve this sustainably, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns, depending on the purpose of a particular implementation. The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centered around the 'ObjectGroup' (Fig. 1), a class that may represent any group (of any size) of physical collection objects that share one or more common characteristics. This generic definition of the 'collection' in 'collection descriptions' is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of the classes and properties within the standard is likely to be relevant; in some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model forming part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models. In summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that do not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and to define any necessary constraints within the wider scope of the standard and model. This is the concept of the 'collection description scheme': a profile that defines elements such as which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked; and how the various classes should be related to each other.
Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard, discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection description use cases.
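
As a concrete illustration of how a collection description might be expressed, the following sketch models a single ObjectGroup with one user-defined metric. The property names are simplified stand-ins, not the normative terms of the draft standard, and the example values are hypothetical.

```typescript
// Simplified, illustrative shapes for the standard's central concepts.
interface MeasurementOrFact {
  measurementType: string;  // user-defined rather than fixed by the standard
  measurementValue: string;
  measurementUnit?: string;
}

interface ObjectGroup {
  identifier: string;
  description?: string;
  // Examples of shared characteristics that define the group:
  discipline?: string;
  preservationMethod?: string;
  measurementsOrFacts: MeasurementOrFact[];
  relatedObjectGroups?: string[]; // interlinking per the chosen description scheme
}

const herbariumSheets: ObjectGroup = {
  identifier: "example.org/og/botany-sheets", // hypothetical identifier
  description: "Pressed plant specimens on herbarium sheets",
  discipline: "Botany",
  preservationMethod: "dried",
  measurementsOrFacts: [
    { measurementType: "objectCount", measurementValue: "120000" },
  ],
};
```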


Author(s):  
Subrata Acharya

There is a need to be able to verify plaintext HTTP content transfers. Common sense dictates that authentication and sensitive content should always be protected by SSL/HTTPS, but there is still great exploitation potential in the modification of static content in transit. Pre-computed signatures and client-side verification offer integrity protection for HTTP content in applications where SSL is not feasible. In this chapter, the authors demonstrate a mechanism by which a Web browser or other HTTP client can verify that content transmitted over an untrusted channel has not been modified. Verifiable HTTP is not intended to replace SSL; rather, it is intended for applications where SSL is not feasible, specifically when serving high-volume static content and/or content from non-secure sources such as Content Distribution Networks. Finally, the authors find that content verification is effective, with server-side overhead similar to SSL. With future optimizations such as native browser support, content verification could achieve comparable client-side efficiency.
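
The general technique can be sketched as follows (not necessarily the chapter's exact protocol): the publisher pre-computes a signature over each static resource, the signature travels with the response in a header (the header name here is hypothetical), and the client verifies it against a trusted public key before using the content.

```typescript
// Fetch a resource and verify its pre-computed signature with Web Crypto.
async function fetchVerified(url: string, publicKey: CryptoKey): Promise<string> {
  const response = await fetch(url);
  const body = await response.arrayBuffer();

  // Hypothetical header carrying the base64-encoded, pre-computed signature.
  const sigHeader = response.headers.get("X-Content-Signature");
  if (!sigHeader) throw new Error("unsigned content");
  const signature = Uint8Array.from(atob(sigHeader), c => c.charCodeAt(0));

  // Verify the body against the publisher's trusted public key.
  const ok = await crypto.subtle.verify(
    { name: "RSASSA-PKCS1-v1_5" },
    publicKey,
    signature,
    body
  );
  if (!ok) throw new Error("content modified in transit");
  return new TextDecoder().decode(body);
}
```

Because signatures are computed once at publish time, the per-request server cost is just serving one extra header, which is consistent with the server-side overhead comparison above.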

