Web-Powered Databases

2003 ◽  
pp. 176-202
Author(s):  
Claude Del Vigna

Web-powered databases (WPDB) refer both to databases accessible through the Web and to their underlying architecture. This chapter concerns that architecture: it presents the low-level implementation of a WPDB mock-up. The claim supporting the chapter is that this low-level analysis can facilitate the understanding of the fundamental mechanisms embedded in a WPDB. All components of the mock-up, except the Web server itself, are coded in C++, illustrating how techniques such as Internet connections, multitasking, multithreading, and named pipes can be used to build a WPDB architecture. Moreover, beyond its explanatory aim, the chapter has a very practical side, as the C++ code can be used as a guideline or even reused as-is for the development of more complex WPDBs.
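The chapter's mock-up is coded in C++; purely to illustrate the named-pipe mechanism it builds on, here is a minimal Python sketch of the same pattern on a POSIX system. The pipe path and function names are hypothetical and not taken from the chapter.

import os

FIFO_PATH = "/tmp/wpdb_requests"   # hypothetical pipe name

# Create the named pipe once; both processes then open it by path.
if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

def send_query(sql: str) -> None:
    """Web-gateway side: forward an SQL request to the database process."""
    with open(FIFO_PATH, "w") as pipe:      # blocks until a reader has opened the pipe
        pipe.write(sql + "\n")

def serve_forever() -> None:
    """Database side: block on the pipe and handle requests as they arrive."""
    while True:
        with open(FIFO_PATH, "r") as pipe:
            for request in pipe:
                print("received request:", request.strip())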

Author(s):  
Milena Vesić ◽  
Nenad Kojić

Web applications are the most common type of application in modern society, since they can be accessed by a large number of users at any time from any device; the only condition for their use is an Internet connection. Most applications run over the HTTP protocol in a client-server architecture. This architecture is based on the use of an API (Application Programming Interface), most often following the REST (Representational State Transfer) architecture. If a website contains several features that fill their content with data from the web server, a separate HTTP request must typically be generated for each of them, using one of the existing methods (GET, POST, PUT, DELETE). This style of communication can become a serious problem when the Internet connection is weak and many HTTP requests are issued, because the client must wait for each request to be executed and for the web server to return the data. In this paper, one implementation of GraphQL is presented. GraphQL is an open-source data query and manipulation language for APIs. GraphQL enables faster application development and requires less server code. Its key advantage lies in the number of HTTP requests, because all the data a page needs can be obtained with a single request. The paper presents a comparative analysis of the REST architecture and GraphQL on the example of a real website, considering different qualities of Internet connection, code complexity, and the number of required requests.
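As a sketch of the difference the paper measures, the following Python snippet (standard library only) sends one GraphQL request that gathers data a REST client would typically fetch with three separate GETs. The endpoint, schema, and field names are purely illustrative and not taken from the paper's website.

import json
import urllib.request

GRAPHQL_URL = "https://example.com/graphql"   # hypothetical endpoint

# One GraphQL request fetches the user, their posts, and unread notifications,
# data that a REST client would typically collect with three separate GETs
# (/users/42, /users/42/posts, /users/42/notifications).
QUERY = """
query PageData($id: ID!) {
  user(id: $id) {
    name
    posts(limit: 5) { title }
    notifications(unread: true) { message }
  }
}
"""

def fetch_page_data(user_id: str) -> dict:
    payload = json.dumps({"query": QUERY, "variables": {"id": user_id}}).encode()
    req = urllib.request.Request(
        GRAPHQL_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)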


2016 ◽  
Vol 1 (1) ◽  
pp. 001
Author(s):  
Harry Setya Hadi

String searching is a common operation in computing, because text is the main form of data storage. Boyer-Moore, which matches the pattern from right to left, is considered the most efficient method in practice, and matching from that direction also gives it among the best theoretical results. A web server connected to a computer network is accessed by many users from different places, with both good and bad intentions. Every activity performed by a user is stored in the web server logs, and the log reports kept on the web server can help an administrator search for erroneous web requests. A web server log is a record of a web site's activity that contains data such as the IP address, time of access, the page opened, the action performed, and the access method. The large amount of data contained in the resulting logs can be mined for useful information.
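A minimal sketch of the right-to-left matching idea, using the bad-character shift of Boyer-Moore in its Horspool simplification, applied to a single made-up web server log line:

def horspool_search(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of `pattern` in `text`,
    or -1 if absent, using the Boyer-Moore bad-character shift
    (Horspool simplification)."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift table: distance from a character's last occurrence in the
    # pattern (excluding the final position) to the pattern's end.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = m - 1                      # index of the window's last character in text
    while i < n:
        k = 0
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1                 # compare right to left
        if k == m:
            return i - m + 1       # full match found
        i += shift.get(text[i], m) # skip ahead using the bad-character rule
    return -1

# Example: locate a failed request in one web-server log line.
log_line = '192.0.2.1 - - [10/Oct/2023:13:55:36] "GET /missing HTTP/1.1" 404 512'
print(horspool_search(log_line, '" 404 '))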


2009 ◽  
Vol 43 (1) ◽  
pp. 203-205 ◽  
Author(s):  
Chetan Kumar ◽  
K. Sekar

The identification of sequence (amino acid or nucleotide) motifs occurring in a particular order in biological sequences has proved to be of interest. This paper describes a computing server, SSMBS, which can locate and display the occurrences of user-defined biologically important sequence motifs (a maximum of five) present in a specific order in protein and nucleotide sequences. While the server can efficiently locate motifs specified using regular expressions, it can also find occurrences of long and complex motifs. The computation is carried out by an algorithm developed using the concepts of quantifiers in regular expressions. The web server is available to users around the clock at http://dicsoft1.physics.iisc.ernet.in/ssmbs/.
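The server's own algorithm is not reproduced here; the following Python sketch only illustrates the underlying idea of chaining user-defined motifs with regular-expression quantifiers so that they must occur in the specified order. The motifs and the toy sequence are hypothetical.

import re

def find_ordered_motifs(sequence: str, motifs: list[str]):
    """Report spans where the given motifs (already written as regular
    expressions) occur in the specified order; non-greedy '.*?' gaps
    between motifs play the role of the quantifiers."""
    pattern = ".*?".join(f"({m})" for m in motifs)
    return [match.span() for match in re.finditer(pattern, sequence)]

# Toy example: an N-glycosylation-like site followed by a C..C pair.
protein = "MKTNASGLLWCAHCRNDS"
print(find_ordered_motifs(protein, [r"N[^P][ST]", r"C..C"]))   # -> [(3, 14)]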


2020 ◽  
Author(s):  
Snehal D. Karpe ◽  
Vikas Tiwari ◽  
Sowdhamini Ramanathan

Insect Olfactory Receptors (ORs) are a diverse family of membrane protein receptors responsible for most insect olfactory perception and communication, and hence they are of utmost importance for developing repellents or pesticides. Accurate gene prediction of insect ORs from newly sequenced genomes is therefore an important but challenging task. We have developed a dedicated web server, 'insectOR', to predict and validate insect OR genes using multiple gene prediction algorithms, accompanied by relevant validations. It is possible to employ this server nearly automatically to perform rapid prediction of OR gene loci from thousands of OR-protein-to-genome alignments, resolve gene boundaries for tandem OR genes and refine them further to provide more complete OR gene models. InsectOR outperformed the popular genome annotation pipelines (MAKER and the NCBI eukaryotic genome annotation pipeline) in terms of overall sensitivity at the base, exon and locus levels when tested on two distantly related insect genomes. It displayed more than 95% nucleotide-level precision in both tests. Finally, given the same input data and parameters, InsectOR missed fewer than 2% of gene loci, in contrast to the 55% of loci missed by MAKER for Drosophila melanogaster. The web server is freely available at http://caps.ncbs.res.in/insectOR/. All major browsers are supported. The website is implemented in Python with Jinja2 for templating and the Bootstrap framework, which uses HTML, CSS and JavaScript/Ajax. The core pipeline is written in Perl.
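As a small illustration of the nucleotide-level precision and sensitivity figures quoted above (not of the insectOR pipeline itself), here is a sketch that scores predicted gene loci against reference annotations on toy coordinates:

def interval_bases(intervals):
    """Set of genomic positions covered by (start, end) intervals, end-exclusive."""
    covered = set()
    for start, end in intervals:
        covered.update(range(start, end))
    return covered

def nucleotide_metrics(predicted, reference):
    """Base-level precision and sensitivity of predicted loci against a reference."""
    pred, ref = interval_bases(predicted), interval_bases(reference)
    tp = len(pred & ref)                       # bases predicted and annotated
    precision = tp / len(pred) if pred else 0.0
    sensitivity = tp / len(ref) if ref else 0.0
    return precision, sensitivity

# Toy coordinates, purely illustrative.
print(nucleotide_metrics([(100, 200), (300, 360)], [(110, 200), (300, 380)]))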


Data Mining ◽  
2013 ◽  
pp. 1312-1319
Author(s):  
Marco Scarnò

CASPUR allows many Italian academic institutions located in the Centre-South of Italy to access more than 7 million articles through a digital library platform. The behaviour of its users was analyzed by considering their “traces”, which are stored in the web server log file. Using several web mining and data mining techniques, the author discovered a gradual and dynamic change in the way articles are accessed; in particular, there is evidence of an increase in journal browsing compared with searching. This phenomenon is interpreted with the idea that browsing better meets users' needs when they want to keep abreast of the latest advances in their scientific field, compared with more generic searching inside the digital library.
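A minimal sketch of the kind of log mining described: classify requests in web server log lines as browsing or searching and count them per month. The URL paths used to distinguish the two modes are assumptions for illustration, not the platform's real paths.

import re
from collections import Counter

# Matches the timestamp and requested path of a Common Log Format line.
LOG_PATTERN = re.compile(r'\[(\d{2})/(\w{3})/(\d{4}).*?\] "GET (\S+)')

def access_mode(path: str) -> str:
    """Hypothetical mapping from URL path to access mode."""
    if "/search" in path:
        return "search"
    if "/journal" in path or "/toc" in path:
        return "browse"
    return "other"

def monthly_modes(log_lines):
    """Count browse vs. search accesses per month from web server log lines."""
    counts = Counter()
    for line in log_lines:
        m = LOG_PATTERN.search(line)
        if m:
            day, month, year, path = m.groups()
            counts[(year, month, access_mode(path))] += 1
    return counts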


Author(s):  
Ibrahim Mahmood Ibrahim ◽  
Siddeeq Y. Ameen ◽  
Hajar Maseeh Yasin ◽  
Naaman Omar ◽  
Shakir Fattah Kak ◽  
...  

Today, web services have increased rapidly and are accessed by many users, leading to massive traffic on the Internet. The web server suffers from this growth, and it becomes challenging to manage the total traffic as the number of users increases: the server becomes overloaded, response times rise, and bottlenecks appear, so this massive traffic must be shared among several servers. Load balancing technologies and server clusters are therefore potent methods for dealing with server bottlenecks. Load balancing techniques distribute the load among the servers in a cluster so that all web servers remain balanced. The motivation of this paper is to give an overview of the load balancing techniques used to enhance the efficiency of web servers in terms of response time, throughput, and resource utilization. Researchers have addressed different algorithms and obtained good results; for example, the pending-job and IP-hash algorithms achieve better performance.
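For illustration, minimal Python sketches of the two strategies named above, IP hash and pending job, with a hypothetical backend pool:

import hashlib

# Hypothetical backend pool; the paper only names the strategies.
SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def ip_hash(client_ip: str, servers=SERVERS) -> str:
    """IP-hash balancing: the same client is always routed to the same backend."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

def least_pending(pending: dict) -> str:
    """Pending-job balancing: pick the backend with the fewest outstanding requests."""
    return min(pending, key=pending.get)

print(ip_hash("203.0.113.7"))
print(least_pending({"10.0.0.1": 4, "10.0.0.2": 1, "10.0.0.3": 3}))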


Author(s):  
August-Wilhelm Scheer

The emergence of what we call today the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approaches evolved into the Web that drives today’s information society and explores the full potential still ahead. The initiative, formerly known as a wide-area hypertext information retrieval project, quickly gained momentum due to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C). In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided by the Web were static, meaning the information stayed unchanged until the original publisher decided to update it. For a long time, the WWW, today referred to as Web 1.0, was understood as a technical means to publish information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data. With the advent of dynamic concepts on the server side (script languages like the hypertext preprocessor (PHP) or Perl, and Web applications with JSP or ASP) and on the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed editing Web sites via the browser at run-time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet, technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It took broadband Internet access, flat rates, and digitalized media processing for the Web to catch on.


2011 ◽  
pp. 259-273
Author(s):  
Carlos D. Santos ◽  
Márcio A. Gonçalves ◽  
Fabio Kon

Open source communities such as the ones responsible for Linux and Apache became well known for producing, with volunteer labor innovating over the Internet, high-quality software that has been widely adopted by organizations. In the web server market, Apache has dominated in terms of market share for over 15 years, outperforming corporations and research institutions. The resource-based view (RBV) of firms posits that an organization outperforms its competitors because it has valuable, rare, imperfectly imitable, and non-substitutable resources. Accordingly, one concludes that Apache possesses such resources to sustain its competitive advantage; however, one does not know what those resources are. This chapter is an effort to locate them, answering the question: “What resources enable Apache to outperform its for-profit competitors consistently?” The research draws on the RBV to develop a series of propositions about Apache’s internal resources and organizational capabilities. For each proposition developed, methods for its empirical validation are proposed, and future research directions are provided.

