request queue
Recently Published Documents

TOTAL DOCUMENTS: 11 (FIVE YEARS: 3)
H-INDEX: 2 (FIVE YEARS: 0)

2021 ◽  
Vol 2091 (1) ◽  
pp. 012003
Author(s):  
Rakesh Kumar ◽  
Bhavneet Singh Soodan ◽  
Godlove Suila Kuaban ◽  
Piotr Czekalski ◽  
Sapana Sharma

Abstract Queuing theory has been extensively used in the modelling and performance analysis of cloud computing systems. The phenomenon of task (or request) reneging, that is, the dropping of requests from the request queue, often occurs in cloud computing systems, and it is important to consider it when developing performance evaluation models for cloud computing infrastructures. The majority of queuing-theoretic studies of cloud computing data centres do not consider the fact that tasks can be removed from the queue without being serviced. The removal of tasks from the queue may be due to user impatience, execution deadline expiration, security reasons, or an active queue management strategy. The reneging can be correlated in nature, that is, if a request is dropped (or reneged) at a given time epoch, then there is a probability that a request may or may not be dropped at the next time epoch. This kind of dropping (or reneging) of requests is referred to as correlated request reneging. In this paper we model a cloud computing infrastructure with correlated request reneging using queuing theory. An M/M/1/N queuing model with correlated reneging is used to analyse the performance of the load-balancing server of a cloud computing system. Both steady-state and transient performance analyses are carried out. Important performance measures such as the average queue size, average delay, probability of task blocking, and probability of no waiting in the queue are studied. Finally, comparisons are presented that describe the effect of correlated task reneging relative to simple exponential reneging.
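As a point of reference for the performance measures named in this abstract, the following is a minimal sketch of the classical M/M/1/N queue without reneging (it does not reproduce the paper's correlated-reneging model): it computes the average queue size, blocking probability, probability of no waiting, and average delay via Little's law. The arrival rate, service rate, and capacity values are illustrative assumptions.

```java
// Minimal M/M/1/N steady-state sketch (no reneging; illustrative baseline only).
public class MM1NQueue {
    public static void main(String[] args) {
        double lambda = 4.0;  // arrival rate (requests/s) -- assumed value
        double mu = 5.0;      // service rate (requests/s) -- assumed value
        int N = 10;           // system capacity -- assumed value
        double rho = lambda / mu;

        // p_n = rho^n * p0, with p0 = (1 - rho) / (1 - rho^(N+1)) for rho != 1
        double p0 = (rho == 1.0) ? 1.0 / (N + 1)
                                 : (1 - rho) / (1 - Math.pow(rho, N + 1));

        double meanQueueSize = 0.0;   // L = sum over n of n * p_n
        for (int n = 0; n <= N; n++) {
            meanQueueSize += n * Math.pow(rho, n) * p0;
        }
        double blockingProb = Math.pow(rho, N) * p0;          // arrival finds the system full
        double probNoWait = p0;                               // arrival finds the system empty (PASTA)
        double effectiveLambda = lambda * (1 - blockingProb); // rate of admitted requests
        double meanDelay = meanQueueSize / effectiveLambda;   // W = L / lambda_eff (Little's law)

        System.out.printf("L = %.4f, P_block = %.4f, P(no wait) = %.4f, W = %.4f s%n",
                meanQueueSize, blockingProb, probNoWait, meanDelay);
    }
}
```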



2021 ◽  
Vol 64 (4) ◽  
pp. 174-188
Author(s):  
Yang Zhang ◽  
Zhanyou Ma ◽  
Jiaqi Fan ◽  
Qiannan Si




2020 ◽  
Vol 76 (4) ◽  
pp. 3129-3154
Author(s):  
Juan Fang ◽  
Mengxuan Wang ◽  
Zelin Wei

Abstract Multiple CPUs and GPUs are integrated on the same chip and share memory, and access requests from different cores interfere with each other. Memory requests from the GPU seriously interfere with CPU memory access performance. Requests from multiple CPUs are also intertwined when accessing memory, and their performance is greatly affected. The difference in access latency between GPU cores increases the average latency of memory accesses. To solve the problems encountered in the shared memory of heterogeneous multi-core systems, we propose a step-by-step memory scheduling strategy that improves system performance. The step-by-step memory scheduling strategy first creates a new memory request queue based on the request source and isolates CPU requests from GPU requests when the memory controller receives a memory request, thereby preventing GPU requests from interfering with CPU requests. Then, for the CPU request queue, a dynamic bank partitioning strategy is implemented, which dynamically maps applications to different bank sets according to their memory access characteristics and eliminates memory request interference between CPU applications without affecting bank-level parallelism. Finally, for the GPU request queue, criticality is introduced to measure the difference in memory access latency between cores. Building on the first-ready, first-come first-served (FR-FCFS) policy, we implement criticality-aware memory scheduling to balance the locality and criticality of application accesses.
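A rough sketch of the first step described above, separating the unified memory request queue by request source so GPU traffic cannot crowd out CPU traffic, might look like the following. The `MemRequest` type, the single open-row check, and the criticality field are simplified assumptions for illustration, not the paper's actual controller design; dynamic bank partitioning is omitted.

```java
import java.util.ArrayDeque;
import java.util.Comparator;
import java.util.Deque;

// Simplified sketch: split the memory request queue by source, then pick per queue.
public class SplitMemoryScheduler {
    enum Source { CPU, GPU }

    record MemRequest(Source source, int coreId, long row, long arrivalTime, int criticality) {}

    private final Deque<MemRequest> cpuQueue = new ArrayDeque<>();
    private final Deque<MemRequest> gpuQueue = new ArrayDeque<>();
    private long openRow = -1;  // row currently open in a single, simplified bank

    // Step 1: isolate CPU and GPU requests into separate queues on arrival.
    public void enqueue(MemRequest r) {
        (r.source() == Source.CPU ? cpuQueue : gpuQueue).add(r);
    }

    // CPU queue: serve row hits first (FR-FCFS-like), otherwise the oldest request.
    public MemRequest pickCpu() {
        return cpuQueue.stream()
                .filter(r -> r.row() == openRow)
                .findFirst()
                .orElse(cpuQueue.peekFirst());
    }

    // GPU queue: prefer requests from cores with higher criticality
    // (e.g. cores that have been observing longer memory latencies).
    public MemRequest pickGpu() {
        return gpuQueue.stream()
                .max(Comparator.comparingInt(MemRequest::criticality))
                .orElse(null);
    }
}
```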





Author(s):  
Agung Riyadi

One of the ways to connect an Android application to a database is to use Volley and a REST API. With a REST API, the Android application does not connect directly to the database; instead, an API acts as the intermediary. In Android development, Volley has limitations when handling requests for large amounts of data, so an evaluation is needed to test its capabilities. This research tests Android Volley retrieving data through a REST API, presented in the form of an application that retrieves medicinal plant data. The test results show that a Volley error occurs when the back button is pressed and another process is started while a previous Volley request has not yet finished loading. This error occurred on several Android versions, such as Lollipop and Marshmallow, and on several device brands. Therefore, when using Android Volley, developers need to check the request-queue processes initiated by the user: if a data retrieval by Volley has not completed, the pending request should be cancelled so that an Application Not Responding (ANR) error does not occur.
Keywords: Android, Volley, WP REST API, ANR Error
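The mitigation suggested by this abstract, cancelling in-flight Volley requests so a back press does not leave stale work in the request queue, is typically done with Volley's request tags. The sketch below uses the standard `RequestQueue.cancelAll(tag)` pattern; the URL, tag name, and activity name are illustrative placeholders, not taken from the paper.

```java
import android.app.Activity;
import android.os.Bundle;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.StringRequest;
import com.android.volley.toolbox.Volley;

public class PlantListActivity extends Activity {
    private static final String TAG_PLANTS = "plant_data";  // illustrative tag
    private static final String URL = "https://example.com/wp-json/wp/v2/plants";  // placeholder URL
    private RequestQueue queue;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        queue = Volley.newRequestQueue(this);

        StringRequest request = new StringRequest(Request.Method.GET, URL,
                response -> { /* render the medicinal plant list */ },
                error -> { /* show an error message */ });
        request.setTag(TAG_PLANTS);  // tag the request so it can be cancelled later
        queue.add(request);
    }

    @Override
    protected void onStop() {
        super.onStop();
        // Cancel any request still in the queue before the activity goes away,
        // so a back press does not leave Volley loading into a dead UI.
        if (queue != null) {
            queue.cancelAll(TAG_PLANTS);
        }
    }
}
```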



2017 ◽  
Vol 11 (4) ◽  
pp. 29-46
Author(s):  
Manish Kumar ◽  
Abhinav Bhandari

As the world grows increasingly dependent on the Internet, the availability of web services has become a key concern for organizations. Application Layer DDoS (AL-DDoS) attacks can hamper the availability of web services to legitimate users by flooding the request queue of the web server. Hence, it is pertinent to study the queue scheduling policies of the web server under HTTP request flooding attacks, which forms the basis of this research work. In this paper, the various types of AL-DDoS attacks launched by exploiting the HTTP protocol are reviewed. The key aim is to compare the request-queue scheduling policies of a web server against an HTTP request flooding attack using the NS2 simulator. Various simulation scenarios are presented for comparison, and it is established that the queue scheduling policy can play a significant role in tolerating AL-DDoS attacks.
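The paper's experiments are NS2 simulations, which are not reproduced here. As a toy illustration of why the scheduling policy of the request queue matters under an HTTP flood, the sketch below contrasts a plain FIFO (drop-tail) queue with a per-client round-robin (fair) queue; the traffic mix, buffer size, and client names are made-up assumptions.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy comparison: FIFO/drop-tail vs. per-client round-robin service of a bounded
// request queue when one source ("attacker") floods it. Illustrative only.
public class QueuePolicyDemo {
    public static void main(String[] args) {
        List<String> arrivals = new ArrayList<>();
        for (int i = 0; i < 95; i++) arrivals.add("attacker");  // flood traffic
        for (int i = 0; i < 5; i++) arrivals.add("legit");      // legitimate traffic

        int capacity = 50, serviceSlots = 20;

        // FIFO / drop-tail: whoever arrives first fills the shared queue.
        Deque<String> fifo = new ArrayDeque<>();
        for (String src : arrivals) if (fifo.size() < capacity) fifo.add(src);
        long fifoLegit = fifo.stream().limit(serviceSlots).filter("legit"::equals).count();

        // Per-client round-robin: one bounded sub-queue per source, served in turn.
        Map<String, Deque<String>> perClient = new LinkedHashMap<>();
        for (String src : arrivals) {
            Deque<String> q = perClient.computeIfAbsent(src, k -> new ArrayDeque<>());
            if (q.size() < capacity / 2) q.add(src);  // each source gets an equal buffer share
        }
        long rrLegit = 0;
        for (int served = 0; served < serviceSlots; ) {
            boolean any = false;
            for (Deque<String> q : perClient.values()) {
                if (served >= serviceSlots) break;
                String r = q.poll();
                if (r != null) { any = true; served++; if (r.equals("legit")) rrLegit++; }
            }
            if (!any) break;  // all sub-queues drained
        }
        System.out.println("Legit requests served -- FIFO: " + fifoLegit
                + ", round-robin: " + rrLegit);
    }
}
```

Under these assumed numbers the flood fills the shared FIFO buffer before any legitimate request arrives, while the fair policy still serves all five legitimate requests.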



2015 ◽  
Vol 21 ◽  
pp. 92-102 ◽  
Author(s):  
Maurice Khabbaz ◽  
Chadi Assi ◽  
Mazen Hasna ◽  
Ali Ghrayeb ◽  
Wissam Fawaz


2014 ◽  
Vol 15 (3) ◽  
pp. 1155-1167 ◽  
Author(s):  
Maurice Khabbaz ◽  
Mazen Hasna ◽  
Chadi M. Assi ◽  
Ali Ghrayeb

