Compiling XSLT3, in the browser, in itself

Author(s):  
John Lumley ◽  
Debbie Lockett ◽  
Michael Kay

This paper describes the development of a compiler for XSLT 3.0 which can run directly in modern browsers. It exploits a virtual machine written in JavaScript, Saxon-JS, which interprets an execution plan for an XSLT transform, consuming source documents and interpolating the results into the displayed web page. Ordinarily these execution plans (Stylesheet Export File, SEF), which are written in XML, are generated offline by the Java-based Saxon-EE product. Saxon-JS has been extended to handle dynamic XPath evaluation by adding an XPath parser and a compiler from the XPath parse tree to SEF. By constructing an XSLT transform that consumes an XSLT stylesheet and creates an appropriate SEF, exploiting this XPath compiler, we have managed to construct an in-browser compiler for XSLT 3.0 with a high level of standards compliance. This opens the way to supporting dynamic transforms and in-browser stylesheet construction and execution, and offers a potential route to language-portable XSLT compiler technology.
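As an illustration of the general idea only, the sketch below parses a tiny arithmetic expression of the kind an XPath parser would handle and serializes its parse tree as a nested XML "execution plan". The element names (`arith`, `int`) are invented for this sketch; the real SEF vocabulary is defined by Saxon and is considerably richer.

```python
# Minimal sketch: compile "NUM (+ NUM)*" into a left-nested XML plan,
# loosely analogous to serializing an XPath parse tree to a SEF-like form.
import re
import xml.etree.ElementTree as ET

def compile_expr(src: str) -> ET.Element:
    tokens = re.findall(r"\d+|\+", src)

    def literal(tok: str) -> ET.Element:
        return ET.Element("int", {"val": tok})  # hypothetical element name

    tree = literal(tokens[0])
    for i in range(1, len(tokens), 2):         # pairs of (operator, operand)
        node = ET.Element("arith", {"op": "plus"})
        node.append(tree)                      # left-associative nesting
        node.append(literal(tokens[i + 1]))
        tree = node
    return tree

print(ET.tostring(compile_expr("1 + 2 + 3"), encoding="unicode"))
# <arith op="plus"><arith op="plus"><int val="1" /><int val="2" /></arith><int val="3" /></arith>
```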

Author(s):  
Nigel Ward

Potential applicants to graduate school find it difficult to predict, even approximately, which schools will accept them. We have created a predictive model of admissions decision-making, packaged in the form of a web page that allows students to enter their information and see a list of schools where they are likely to be accepted. This paper explains the rationale for the model’s design and parameter values. Interesting issues include the way that evidence is combined, the estimation of parameters, and the modeling of uncertainty.
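The paper's actual model is not reproduced here, but a common way to combine several pieces of evidence into a single probability is to add weighted features in log-odds space and pass the sum through a logistic function. In the sketch below the features, weights, and intercept are invented purely for illustration.

```python
# Hypothetical sketch of evidence combination in log-odds space;
# a real model would estimate the weights from admissions data.
import math

def admission_probability(gpa: float, gre_pct: float, research: bool) -> float:
    z = -6.0 + 1.2 * gpa + 2.5 * gre_pct + 0.8 * research  # weighted evidence
    return 1.0 / (1.0 + math.exp(-z))                      # logistic squash

print(f"{admission_probability(gpa=3.6, gre_pct=0.85, research=True):.2f}")  # ~0.78
```

One virtue of combining evidence additively in log-odds space is that each new piece of evidence shifts the estimate independently, which makes the parameters easier to estimate and to explain.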


Author(s):  
Pooja Gaikwad ◽  
Priyanka Ghumare ◽  
Gayatri Chaudhari ◽  
Sayali Nikam ◽  
Rupali Murtdak ◽  
...  

In this paper, a detailed overview of steganography, including its types, tools, and techniques, is presented. This research involves steganography using the Binwalk tool on Necromancer, a deliberately vulnerable virtual machine: to gain root access to the VM, there are 11 flags to collect along the way. A few of the flags are found by using the Binwalk tool to reveal the hint hidden behind an image, so image steganography is applied in one of the Necromancer flags. The flags are simply encrypted codes. Steganography refers to the act of camouflaging secret data within an image, audio, or video file in order to avoid detection; the secret data is then extracted at its destination. Steganography is often combined with encryption as an additional step for hiding or protecting data.
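As a concrete illustration of image steganography, the sketch below hides a byte string in the least-significant bits of a grayscale image using Pillow and then recovers it. This is a generic LSB scheme, not necessarily the exact procedure behind the Necromancer flags; Binwalk itself works differently, scanning a file for known signatures (e.g. `binwalk -e image.png` carves out embedded files).

```python
# Generic LSB image-steganography sketch with Pillow (illustrative only).
from PIL import Image

def embed(img: Image.Image, data: bytes) -> Image.Image:
    """Write each bit of `data` into the lowest bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    out = img.copy()
    px = out.load()
    w, _ = out.size
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        px[x, y] = (px[x, y] & ~1) | bit   # overwrite the lowest bit only
    return out

def extract(img: Image.Image, length: int) -> bytes:
    """Read `length` bytes back out of the lowest bits of the pixels."""
    px = img.load()
    w, _ = img.size
    data = bytearray()
    for b in range(length):
        value = 0
        for i in range(8):
            pos = b * 8 + i
            value |= (px[pos % w, pos // w] & 1) << i
        data.append(value)
    return bytes(data)

cover = Image.new("L", (64, 64), color=128)  # plain grayscale cover image
stego = embed(cover, b"flag{demo}")
print(extract(stego, len(b"flag{demo}")))    # b'flag{demo}'
```

Because only the lowest bit of each pixel changes, the stego image is visually indistinguishable from the cover image, which is exactly the camouflage property described above.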


InterConf ◽  
2021 ◽  
pp. 437-442
Author(s):  
Bahar Asgarova ◽  
Gulshan Sattarova

Have you ever thought about the amount of data you create each day? Every message you send, every credit card transaction, even every web page you open… together they add up to the roughly 2.5 quintillion bytes of data produced daily by the global population. This offers endless opportunities for the most forward-thinking businesses in many areas to leverage this data, and the banking industry is no exception. With digital banking used by almost half of the world's adult population, financial institutions have enough data at their disposal to rethink the way they work, becoming more efficient, more customer-focused, and ultimately more profitable.


Author(s):  
Francisco Yus

In this chapter the author analyzes, from a cognitive pragmatics point of view and, more specifically, from a relevance-theoretic approach, the way Internet users assess the qualities of web pages in their search for optimally relevant interpretive outcomes. The relevance of a web page is measured as a balance between the interest that its information provides (the so-called “positive cognitive effects” in relevance theory terminology) and the mental effort involved in its extraction. On paper, optimal relevance is achieved when the interest is high and the effort involved is low. However, as the relevance grid in this chapter shows, there are many possible combinations when measuring the relevance of content on web pages. The author also addresses how the quality and design of web pages may influence the way balances of interest (cognitive effects) and mental effort are assessed by users when processing the information contained on the web page. The analysis yields interesting implications for how web pages should be designed and for web usability in general.


Author(s):  
Harshala Bhoir ◽  
K. Jayamalini

Nowadays the Internet is widely used by users to find required information, and searching the web for useful information has become more difficult. A web crawler helps by extracting links, relevant and irrelevant, from the web, downloading web pages programmatically. This paper implements a web crawler with Scrapy and Beautiful Soup, two Python frameworks, to crawl news on news web sites. Scrapy is a web crawling framework that allows a programmer to create spiders that define how a certain site, or a group of sites, will be scraped; it has built-in support for extracting data from HTML sources using XPath and CSS expressions. Beautiful Soup is a library that extracts data from web pages; it provides a few simple methods for navigating, searching, and modifying a parse tree, and it automatically converts incoming documents to Unicode and outgoing documents to UTF-8. The proposed system uses the Beautiful Soup and Scrapy frameworks to crawl news web sites. This paper also compares the Scrapy and Beautiful Soup 4 web crawler frameworks.
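To make the comparison concrete, here is a minimal Scrapy spider; the site URL and CSS selectors are placeholders, since the paper's target news sites are not reproduced here.

```python
# Hypothetical Scrapy spider: scrape headlines and follow pagination.
import scrapy

class NewsSpider(scrapy.Spider):
    name = "news"
    start_urls = ["https://example.com/news"]  # placeholder news site

    def parse(self, response):
        for article in response.css("article"):          # placeholder selector
            yield {
                "headline": article.css("h2::text").get(),
                "url": article.css("a::attr(href)").get(),
            }
        next_page = response.css("a.next::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

The same extraction task in Beautiful Soup is a parsing job rather than a crawl: fetching must be done separately (here with requests, a common pairing), after which Beautiful Soup navigates the parse tree.

```python
# Hypothetical Beautiful Soup equivalent: fetch one page, pull out links.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/news").text  # placeholder URL
soup = BeautifulSoup(html, "html.parser")
for link in soup.find_all("a", href=True):
    print(link["href"], link.get_text(strip=True))
```

The contrast mirrors the paper's comparison: Scrapy supplies crawling machinery (scheduling, link following, item pipelines), while Beautiful Soup supplies only parsing and tree navigation.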


2020 ◽  
Vol 3 (1) ◽  
pp. 320-330
Author(s):  
Adam Muc ◽  
Tomasz Muchowski ◽  
Albert Zawadzki ◽  
Adam Szeleziński

Businesses are increasingly confronted with server-related problems. More and more, businesses are enabling remote working and need to rely on network services. The provision of network services requires rebuilding the network infrastructure and the way employees are provided with data. Web applications and server services use common dependencies and require a specific network configuration, which often leads to collisions between network ports and between the configurations of shared dependencies. This problem can be solved by separating the conflicting applications onto different servers, but this involves the cost of maintaining several servers. Another solution may be to isolate applications with virtual machines, but this involves a significant overhead on server resources, as each virtual machine must be equipped with its own operating system. An alternative to virtual machines is application containerization, which is growing in popularity. Containerization also isolates applications, but runs them on the server's native operating system, eliminating the resource overhead of virtual machines. This article presents an example of web application containerization.
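As a sketch of the approach (the article does not prescribe a particular stack), a web application can be containerized with a short Dockerfile; the base image, port, and entry point below are assumptions for a hypothetical Python app.

```dockerfile
# Hypothetical Dockerfile for a small Python web app (app.py on port 5000).
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

Built with `docker build -t webapp .` and run with `docker run -p 5000:5000 webapp`, each container carries its own dependencies and port mapping while sharing the host kernel, which is what removes the per-VM operating-system overhead described above.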


2018 ◽  
Vol 41 ◽  
Author(s):  
Maria Babińska ◽  
Michal Bilewicz

The problem of extended fusion and identification can be approached from a diachronic perspective. Based on our own research, as well as findings from the fields of social, political, and clinical psychology, we argue that the way contemporary emotional events shape local fusion is similar to the way in which historical experiences shape extended fusion. We propose a reciprocal process in which historical events shape contemporary identities, whereas contemporary identities shape interpretations of past traumas.


2020 ◽  
Vol 43 ◽  
Author(s):  
Aba Szollosi ◽  
Ben R. Newell

The purpose of human cognition depends on the problem people try to solve. Defining the purpose is difficult, because people seem capable of representing problems in an infinite number of ways. The way in which the function of cognition develops needs to be central to our theories.

