The security mechanism in the World Wide Web (WWW) and the Common Gateway Interface (CGI). Example of Central Police University entrance examination system

Author(s): Chenyuan Kou, F. Springsteel

Author(s): Michael Lang

Although its conceptual origins can be traced back a few decades (Bush, 1945), it is only recently that hypermedia has become popularized, principally through its ubiquitous incarnation as the World Wide Web (WWW). In its earlier forms, the Web could only properly be regarded as a primitive, constrained hypermedia implementation (Bieber & Vitali, 1997). Through the emergence in recent years of standards such as eXtensible Markup Language (XML), XLink, Document Object Model (DOM), Synchronized Multimedia Integration Language (SMIL) and WebDAV, as well as additional functionality provided by the Common Gateway Interface (CGI), Java, plug-ins and middleware applications, the Web is now moving closer to an idealized hypermedia environment. Of course, not all hypermedia systems are Web based, nor can all Web-based systems be classified as hypermedia (see Figure 1). See the terms and definitions at the end of this article for clarification of intended meanings. The focus here shall be on hypermedia systems that are delivered and used via the platform of the WWW; that is, Web-based hypermedia systems.
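The CGI mechanism mentioned above is what first allowed Web servers to return dynamically generated pages rather than static documents. The following is a minimal sketch of that contract, not the examination system named in the title: the server passes request data to the program through environment variables such as QUERY_STRING, and the program writes an HTTP response, headers first, to standard output. The "user" parameter and the HTML body are illustrative assumptions.

```python
#!/usr/bin/env python3
# Minimal CGI sketch: read a query-string parameter and return an HTML page.
# The parameter name "user" is an illustrative assumption, not taken from the article.
import os
import html
from urllib.parse import parse_qs

# The web server exposes the request's query string as an environment variable.
params = parse_qs(os.environ.get("QUERY_STRING", ""))
user = params.get("user", ["anonymous"])[0]

# A CGI response is written to stdout: headers, a blank line, then the body.
print("Content-Type: text/html")
print()
# Escaping user-supplied input is the kind of basic safeguard a CGI security mechanism relies on.
print(f"<html><body><p>Hello, {html.escape(user)}!</p></body></html>")
```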


2021, Vol. 8 (7), pp. 202321
Author(s): Metod Jazbec, Barna Pàsztor, Felix Faltings, Nino Antulov-Fantulin, Petter N. Kolm

We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a non-profit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies in the S&P 500 index, an equity market index that measures the stock performance of US companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the US stock market. Furthermore, we analyse and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings support the conclusion that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events in financial markets.
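As a concrete illustration of the information-theoretic step, the sketch below estimates a lag-one transfer entropy from a daily sentiment series to a daily return series using quantile binning and plug-in probability estimates. This is a toy reconstruction, not the authors' pipeline: the synthetic sentiment and return series, the number of bins and the single-day lag are all assumptions made for illustration.

```python
# Sketch: discretized (binned) transfer entropy with lag 1, in bits.
import numpy as np

def discretize(x, bins=3):
    """Map a real-valued series to integer symbols via quantile binning."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)

def transfer_entropy(source, target, bins=3):
    """Plug-in estimate of T(source -> target) with a one-step lag."""
    s = discretize(np.asarray(source), bins)
    t = discretize(np.asarray(target), bins)
    # Symbols considered: target_{t+1}, target_t, source_t
    y_next, y_now, x_now = t[1:], t[:-1], s[:-1]
    n = len(y_next)

    def prob(*cols):
        # Joint probability table estimated by counting symbol tuples.
        keys, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        return {tuple(k): c / n for k, c in zip(keys, counts)}

    p_xyz = prob(y_next, y_now, x_now)
    p_yz = prob(y_now, x_now)
    p_xy = prob(y_next, y_now)
    p_y = prob(y_now)

    te = 0.0
    for (yn, yc, xc), p in p_xyz.items():
        # p(y_next | y_now, x_now) / p(y_next | y_now), rewritten with joints.
        te += p * np.log2(p * p_y[(yc,)] / (p_yz[(yc, xc)] * p_xy[(yn, yc)]))
    return te

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sentiment = rng.normal(size=500)                               # stand-in for daily sentiment scores
    returns = 0.3 * np.roll(sentiment, 1) + rng.normal(size=500)   # returns partly driven by lagged sentiment
    print(f"TE(sentiment -> returns): {transfer_entropy(sentiment, returns):.4f} bits")
    print(f"TE(returns -> sentiment): {transfer_entropy(returns, sentiment):.4f} bits")
```

With the coupling built into this synthetic example, the sentiment-to-returns estimate should come out clearly larger than the reverse direction, which is the qualitative pattern the abstract describes for real news and market data.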


2021
Author(s): Polina Gafurova, Alexander Elizarov, Evgeny Lipachev

Algorithms for forming document metadata for unstructured digital mathematical collections are proposed. They are based on search queries to open scientific resources of the World Wide Web such as DBpedia, Wikidata, Wikipedia and Freebase. The developed algorithms make it possible to generate metadata in cases where it is very difficult or even impossible to extract it from the documents using text analytics methods. The results of applying these algorithms to form the basic metadata sets of the retro-collections included in the Lobachevskii Digital Mathematical Library (Lobachevskii-DML, https://lobachevskii-dml.ru/) are presented. The composition of the main metadata set complies with the requirements of the European Digital Mathematical Library (EuDML). An XML format based on the NISO Journal Article Tag Suite (JATS) was used to represent the metadata.
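As an illustration of the kind of query-based metadata enrichment described above, the sketch below looks an author name up through Wikidata's public search API and wraps the result in a small JATS-style <contrib> element. It is a simplified example under assumptions, not the library's actual algorithm: the element layout is a pared-down fragment rather than the full EuDML/NISO JATS metadata set, and the ext-link-type value is illustrative.

```python
# Sketch: enrich author metadata via the Wikidata search API and emit a JATS-style fragment.
import requests
import xml.etree.ElementTree as ET

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def lookup_author(name: str) -> dict:
    """Search Wikidata for an entity matching the author name; return the top hit or {}."""
    params = {
        "action": "wbsearchentities",
        "search": name,
        "language": "en",
        "format": "json",
        "limit": 1,
    }
    resp = requests.get(WIKIDATA_API, params=params, timeout=10)
    resp.raise_for_status()
    hits = resp.json().get("search", [])
    return hits[0] if hits else {}

def contrib_element(name: str) -> ET.Element:
    """Build a JATS-like <contrib> element, attaching the Wikidata identifier if one was found."""
    hit = lookup_author(name)
    contrib = ET.Element("contrib", attrib={"contrib-type": "author"})
    string_name = ET.SubElement(contrib, "string-name")
    string_name.text = name
    if hit:
        # ext-link-type="wikidata" is an illustrative choice, not a value mandated by the paper.
        ext = ET.SubElement(contrib, "ext-link", attrib={"ext-link-type": "wikidata"})
        ext.text = hit["id"]  # a Q-identifier such as "Q7160"
    return contrib

if __name__ == "__main__":
    elem = contrib_element("Nikolai Lobachevsky")
    print(ET.tostring(elem, encoding="unicode"))
```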


2009
Author(s): Blair Williams Cronin, Ty Tedmon-Jones, Lora Wilson Mau
