client request
Recently Published Documents


TOTAL DOCUMENTS

30
(FIVE YEARS 10)

H-INDEX

3
(FIVE YEARS 0)

2021 ◽  
Vol 11 (23) ◽  
pp. 11267
Author(s):  
Achraf Gazdar ◽  
Lotfi Hidri ◽  
Belgacem Ben Youssef ◽  
Meriam Kefi

Video streaming services are among the most resource-consuming applications on the Internet. Thus, minimizing the resources consumed at runtime in general, and the server/network bandwidth in particular, remains a challenge for researchers. Currently, most streaming techniques used on the Internet open one stream per client request, which makes the consumed bandwidth grow linearly. Hence, many broadcasting/streaming protocols have been proposed in the literature to minimize the streaming bandwidth. These protocols fall into two main categories: reactive and proactive broadcasting protocols. While the first category is recommended for streaming unpopular videos, the second is recommended for popular videos. In this context, this paper proposes an enhanced version of the reactive protocol Slotted Stream Tapping (SST), called Share All SST (SASST), which we prove further reduces the streaming bandwidth compared to SST. We also propose a new proactive protocol, named the New Optimal Proactive Protocol (NOPP), based on an optimal scheduling of video segments on streaming channels. SASST and NOPP are intended for cloud and CDN (content delivery network) environments where IP multicast or multicast HTTP over QUIC can be enabled, as their key principle is to share ongoing streams among clients requesting the same video content. Thus, clients and servers are often services running on virtual machines or in containers belonging to the same cloud or CDN infrastructure.
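The key principle of stream sharing can be sketched as follows: a client arriving mid-stream joins the ongoing multicast, and the server only opens a short "patch" stream for the missed prefix instead of a full stream. This is a minimal illustration of the general idea behind reactive protocols such as SST, not the actual protocol; all names are assumptions.

```python
# Illustrative sketch of reactive stream sharing: a new request reuses an
# ongoing full stream when one exists and only pays for the missed prefix.
class StreamServer:
    def __init__(self, video_length):
        self.video_length = video_length
        self.ongoing = []  # start times of full streams currently playing

    def request(self, now):
        """Return the cost (in stream-seconds) of serving a request at time `now`."""
        active = [t for t in self.ongoing if now - t < self.video_length]
        if active:
            # Share the most recent ongoing stream; patch only [start, now).
            return now - max(active)
        # No shareable stream: open a full one.
        self.ongoing.append(now)
        return self.video_length

server = StreamServer(video_length=60)
print(server.request(0))    # first client: full stream, cost 60
print(server.request(10))   # second client: shares the stream, patch cost 10
```

With one stream per request the cost of the second client would be another full 60; sharing reduces it to the 10-second patch, which is the bandwidth saving these protocols exploit.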


2021 ◽  
Vol 3 (3) ◽  
pp. 368-375
Author(s):  
Aep Setiawan ◽  
Rifa Ade Rahmah

The College of Vocational Studies of IPB University (SV-IPB) uses a client-server system as its information technology architecture. The server provides several services that support teaching and learning at SV-IPB, including the Modular Object-Oriented Dynamic Learning Environment (MOODLE) used for e-learning. SV-IPB provides two virtual machines, one used as a web server and one as a database server. Relying on a single web server to handle requests is less reliable, because when it fails there is no other web server to back it up and the service stops; in other words, a single web server does not provide high availability. To overcome this problem, cluster technology can be used to group several web servers at SV-IPB. The web server clustering technology used is the Gluster File System (GlusterFS) with a distributed-replicated volume type. Based on the tests carried out, this project solves the problem described earlier: when one web server is down, another web server can still respond, so the processing of client requests does not stop. In addition, the clustering technology enables load balancing across the web servers, reducing the load on each server because requests are distributed alternately among them.
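The failover-plus-load-balancing behaviour described above can be sketched in a few lines: requests rotate round-robin over the cluster, and a server marked down is skipped so client requests keep being served. This is purely illustrative and not part of the GlusterFS setup itself.

```python
# Minimal sketch of round-robin load balancing with failover: a down server
# is skipped, so requests continue as long as one server remains healthy.
from itertools import cycle

class Cluster:
    def __init__(self, servers):
        self.servers = servers
        self.down = set()
        self._ring = cycle(servers)

    def route(self):
        """Pick the next healthy server, or raise if all are down."""
        for _ in range(len(self.servers)):
            server = next(self._ring)
            if server not in self.down:
                return server
        raise RuntimeError("all web servers are down")

cluster = Cluster(["web1", "web2"])
cluster.down.add("web1")          # simulate one node failing
print(cluster.route())            # requests are still served by web2
```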


Author(s):  
Anna Vadimovna Lapkina ◽  
Andrew Alexandrovitch Petukhov

The problem of automatically classifying requests, as well as the problem of determining the routing rules for requests on the server side, is directly connected with the analysis of the user interface of dynamic web pages. This problem can be solved at the browser level, since the browser contains complete information about the possible requests arising from interaction between the user and the web application. In this paper, in order to extract classification features, we suggest using data from the request execution context in the web client. A request context, or request trace, is a collection of additional identification data that can be obtained by observing the execution of a web page's JavaScript code or the changes to user interface elements caused by their activation. Such data include, for example, the position and style of the element that caused the client request, the JavaScript function call stack, and the changes in the page's DOM tree after the request was initiated. In this study, an implementation based on the Chrome DevTools Protocol is used to solve the problem at the browser level and to automate the collection of request traces.
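A request trace of the kind described above might be represented as a small record from which classification features are derived. The field names and features below are illustrative assumptions, not the paper's actual schema.

```python
# Illustrative shape of a "request trace": contextual data gathered while
# observing a page in the browser, flattened into classification features.
from dataclasses import dataclass

@dataclass
class RequestTrace:
    element_selector: str   # element that triggered the client request
    element_position: tuple # (x, y) position on the page
    element_style: dict     # computed style of the element
    js_call_stack: list     # JS frames active when the request fired
    dom_mutations: int = 0  # DOM-tree changes after the request

def extract_features(trace):
    """Flatten a trace into features usable by a request classifier."""
    return {
        "stack_depth": len(trace.js_call_stack),
        "is_visible": trace.element_style.get("display") != "none",
        "mutated_dom": trace.dom_mutations > 0,
    }

trace = RequestTrace("button#buy", (120, 340), {"display": "block"},
                     ["onClick", "fetchPrice"], dom_mutations=3)
print(extract_features(trace))
```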


2021 ◽  
Vol 1 ◽  
pp. 84-90
Author(s):  
Rustam Kh. Khamdamov ◽  
◽  
Komil F. Kerimov ◽  

Web applications are increasingly being used for activities such as reading news, paying bills, and shopping online. As these services grow, so do the number and scale of attacks on them, such as theft of personal information, bank data, and other cases of cybercrime. All of the above is a consequence of the openness of information in the database: web application security depends heavily on database security. Client request data is usually retrieved through a set of queries that the application issues on behalf of the user. If the data entered by the user is not checked very carefully, a whole host of attacks can exploit web applications to threaten the database. Unfortunately, due to time constraints, web application programmers usually focus on the functionality of web applications, and only a few worry about security. This article presents methods for detecting anomalies using a database firewall. Penetration methods and types of attacks are investigated. A database firewall is proposed that can block both known and unknown attacks on web applications. The software can operate in various modes depending on its configuration. There are almost no false positives, and the performance overhead is relatively small. The developed database firewall is designed to protect against attacks on web application databases. It works as a proxy, which means that SQL statements received from the client are first sent to the firewall rather than to the database server itself. The firewall analyzes each request: requests that are considered anomalous are blocked, and an empty result is returned to the client.
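One common way such a proxy firewall decides whether a query is anomalous is to normalize each incoming SQL statement to a "skeleton" with literals stripped, and block any statement whose skeleton was not observed during a learning phase. The sketch below illustrates that idea only; the regexes and the whitelist approach are assumptions, not the paper's actual implementation.

```python
# Sketch of skeleton-based SQL screening: strip literals, compare against a
# whitelist of known query shapes, and block anything unfamiliar.
import re

def skeleton(sql):
    """Replace string and numeric literals with placeholders."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals
    return re.sub(r"\s+", " ", sql).strip().lower()

# Learned during normal operation of the application (illustrative).
known = {skeleton("SELECT name FROM users WHERE id = 42")}

def allow(sql):
    return skeleton(sql) in known

print(allow("SELECT name FROM users WHERE id = 7"))         # True: same shape
print(allow("SELECT name FROM users WHERE id = 7 OR 1=1"))  # False: blocked
```

An injected `OR 1=1` changes the query's skeleton, so it fails the whitelist check even though the firewall has never seen that specific attack before — which is how unknown attacks can be blocked.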


Author(s):  
Cristian Margineanu ◽  
Costin Grigoras ◽  
Mihai Carabas ◽  
Sergiu Weisz ◽  
Darius Mihai ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3820
Author(s):  
Abdul Ghafar Jaafar ◽  
Saiful Adli Ismail ◽  
Mohd Shahidan Abdullah ◽  
Nazri Kama ◽  
Azri Azmi ◽  
...  

Application Layer Distributed Denial of Service (DDoS) attacks are very challenging to detect. A shortfall of the application layer allows HTTP DDoS to be formed, since request headers are not compulsory in an HTTP request. Furthermore, the headers are editable, giving an attacker the advantage of executing HTTP DDoS with headers almost identical to those of a genuine client request. To the best of the authors' knowledge, there are no recent studies that provide forged request header patterns obtained by executing current HTTP DDoS attack scripts. Moreover, current HTTP DDoS datasets are not publicly available, which makes it difficult for researchers to disclose false headers and causes them to rely on old datasets rather than more current attack patterns. Hence, this study conducted an analysis to disclose the forged request header patterns created by HTTP DDoS. The results successfully disclose eight forged request header patterns constituted by HTTP DDoS. The analysis was executed using actual machines and eight real attack scripts, each capable of overwhelming a web server within a minimal duration. The request header patterns are explained and supported by a critical analysis.
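Header-pattern screening of this kind can be pictured as a set of rules that flag requests whose header set deviates from what mainstream browsers send. The three rules below are illustrative examples only, not the eight patterns disclosed by the study.

```python
# Hedged sketch of forged-header screening: flag header sets that real
# browsers would not normally produce.
SUSPICIOUS = [
    ("missing Accept", lambda h: "accept" not in h),
    ("missing Accept-Language", lambda h: "accept-language" not in h),
    ("empty User-Agent", lambda h: not h.get("user-agent", "").strip()),
]

def screen(headers):
    """Return the names of all suspicious patterns matched by `headers`."""
    h = {k.lower(): v for k, v in headers.items()}
    return [name for name, rule in SUSPICIOUS if rule(h)]

bot = {"User-Agent": "", "Host": "example.com"}
print(screen(bot))  # all three rules fire for this bot-like request
```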


2019 ◽  
Vol 14 (1) ◽  
pp. 48-56
Author(s):  
Albert Yakobus Chandra

Within the World Wide Web (WWW), the web server is one of the key factors that allows a website to run well and serve the needs of its users. With the right web server for a website system, the website can be expected to run smoothly at all times. Many web server options are currently available for running a website system; the two most popular are Apache and Nginx. This study tests both web servers to determine which one performs best at completing client requests. The tests use the Apache Bench tool to benchmark varying numbers of client requests, from 100 up to 1,000,000 requests, and measure the time needed to complete them. The benchmarking results of this study show that, in terms of elapsed time, Nginx takes less time than Apache to complete client requests.
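The benchmark idea — fire N concurrent requests at a server and time how long they take, much as Apache Bench's `-n`/`-c` options do — can be reproduced in a self-contained way against a local test server. The request counts below are illustrative, far smaller than the study's.

```python
# Minimal self-contained benchmark: start a local HTTP server, issue 100
# concurrent requests, and report the total elapsed time.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def fetch(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    statuses = list(pool.map(fetch, range(100)))
elapsed = time.perf_counter() - start

print(f"{len(statuses)} requests in {elapsed:.2f}s")
server.shutdown()
```

Repeating this against Apache and Nginx with increasing request counts, and comparing the elapsed times, is essentially what the study's Apache Bench runs measure.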


Images conceal valuable data, and the need for image retrieval is high given the rapidly growing volumes of image data. Image mining deals with the extraction of image patterns from a large collection of images in a database. Clearly, image mining differs from low-level computer vision and image processing techniques: the focus of image mining is the extraction of patterns from a large collection of images according to a client request, whereas the focus of computer vision and image processing techniques is the detection and extraction of specific features from a single image. In image mining, the objective is the discovery of image patterns that are significant within a given collection of images, as specified by the client request. In this paper, clustering strategies are examined and compared. In addition, we propose a methodology, HDK, that uses more than one clustering technique to improve the performance of image retrieval. This approach uses a dynamic, divide-and-conquer K-Means clustering technique with equivalency and compatible-relation concepts to improve the performance of K-Means on high-dimensional datasets. It also exploits features such as color, texture, and shape for accurate and effective retrieval.
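The clustering step such retrieval approaches build on can be shown with a compact K-Means on toy feature vectors standing in for color/texture descriptors. This is plain K-Means for illustration, not the proposed HDK method; the data and the choice of k are arbitrary.

```python
# Compact K-Means sketch: partition feature vectors into k clusters by
# alternating assignment to the nearest center and center recomputation.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        centers = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two obvious groups of toy feature vectors.
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```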


Big data refers to unstructured and structured data that cannot be processed by traditional systems; it is characterized not only by the volume of data but also by its velocity and variety. Processing here means storing and analyzing the data to extract knowledge for decision making. Every living thing, non-living thing, and device generates tremendous amounts of data every fraction of a second. Hadoop is a software framework for processing big data to extract knowledge from stored data, enhance business, and help solve societal problems. Hadoop has two main components: HDFS for storage and MapReduce for processing. HDFS comprises a name node and data nodes for storage; MapReduce comprises the Job Tracker and Task Tracker frameworks. Whenever a client asks Hadoop to store data, the name node responds with the data nodes that have free memory, the client writes the data to those data nodes, and Hadoop's replication factor copies the blocks to other data nodes to provide fault tolerance; the name node stores the metadata of the data nodes. Replication serves as backup because HDFS uses commodity hardware for storage, and the name node, being the single point of failure in Hadoop, has a backup secondary name node. Whenever a client wants to process data, it contacts the name node's Job Tracker, which communicates with the Task Trackers to get the task done. All of these Hadoop components are frameworks on top of the OS that efficiently utilize and manage system resources for big data processing. Big data processing performance is measured with benchmark programs. In our research work we compared the processing (i.e., execution) time of the word-count benchmark using Hadoop MapReduce Python jar code, a Pig script, and a Hive query, all with the same input file, big.txt. Hive is much faster than Pig and the MapReduce Python jar code: MapReduce execution time is 1 min 29 s, Pig execution time is 57 s, and Hive execution time is 31 s.
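The word-count benchmark compared above follows the classic MapReduce flow: map emits (word, 1) pairs, the shuffle groups pairs by key, and reduce sums each group. The tiny in-process imitation below mirrors that logic only; it is not Hadoop itself.

```python
# In-process imitation of MapReduce word count: map -> shuffle -> reduce.
from collections import defaultdict

def map_phase(text):
    """Map: emit a (word, 1) pair for every word in the input."""
    return [(word, 1) for word in text.split()]

def shuffle(pairs):
    """Shuffle: group the emitted values by key (word)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

counts = reduce_phase(shuffle(map_phase("big data big hadoop data big")))
print(counts)  # {'big': 3, 'data': 2, 'hadoop': 1}
```

On a cluster, the map and reduce phases run in parallel across data nodes and the shuffle moves data over the network, which is where the execution-time differences between MapReduce, Pig, and Hive come from.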


2019 ◽  
Vol 16 (2) ◽  
pp. 65-87
Author(s):  
Fayçal M'hamed Bouyakoub ◽  
Abdelkader Belkhir ◽  
Amina Belkacemnacer ◽  
Sara Harfouche

The article presents an electronic negotiation agent integrated within a multiagent system for an electronic tourism platform. The e-negotiation process is based on a win-win approach, using a bargaining protocol. However, with the proliferation of services, the task of searching for relevant services becomes more and more difficult. Thus, the authors also propose a search agent to find tourism services corresponding to the client's request and profile. The discovery process uses a quantitative similarity measure.
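One plausible form of a quantitative similarity measure for service discovery is set overlap between the tags of the client's request/profile and those of a candidate service. The Jaccard measure and the tags below are illustrative assumptions, not the measure the article actually defines.

```python
# Illustrative service discovery by Jaccard similarity over tag sets.
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B|, or 0.0 when both sets are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

request = {"beach", "family", "budget"}          # client request + profile tags
services = {
    "SeasideResort": {"beach", "family", "luxury"},
    "CityTour": {"museum", "history"},
}
best = max(services, key=lambda s: jaccard(request, services[s]))
print(best)  # SeasideResort
```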

