incoming request
Recently Published Documents

TOTAL DOCUMENTS: 9 (five years: 5)
H-INDEX: 1 (five years: 0)

2021 ◽  
Vol 11 (23) ◽  
pp. 11532
Author(s):  
Tomasz Rak ◽  
Dariusz Rzonca

Simulation models use software tools to solve complex mathematical problems and are beneficial in areas such as performance engineering and communications systems. Nevertheless, to achieve more accurate results, researchers should use more detailed models. Analyzing system operations in the early modeling phases can help one make better decisions about the solution. In this paper, we introduce the use of the QPME tool, based on queueing Petri nets, to model a system stream generator; this formalism was not considered during the tool's initial development. As a result of the analysis, an alternative design model is proposed. By comparing the behavior of the proposed generator against the one already developed, a better adjustment of the stream to the customer's needs was obtained. The study results show that appropriately adjusting queueing Petri net models can help produce better streams of data (tokens).
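QPME itself is a Java-based Eclipse tool, so the following is only a loose stdlib sketch of the core idea behind a stream (token) generator feeding a queueing place: tokens emitted at stochastic intervals queue for a single service station, and the observed waiting times show how well the generated stream matches the consumer's capacity. All names and rates are illustrative, not taken from the paper.

```python
import random

def simulate_token_stream(arrival_rate, service_rate, horizon, seed=42):
    """Simulate a single queueing place fed by a Poisson token generator.

    Returns per-token waiting times (time spent queued before service
    starts). Loosely mirrors a generator transition feeding a queueing
    place in a queueing Petri net.
    """
    rng = random.Random(seed)
    t = 0.0               # current simulation time
    server_free_at = 0.0  # when the service station next becomes idle
    waits = []
    while t < horizon:
        t += rng.expovariate(arrival_rate)   # next token is emitted
        start = max(t, server_free_at)       # token waits if station busy
        waits.append(start - t)
        server_free_at = start + rng.expovariate(service_rate)
    return waits

waits = simulate_token_stream(arrival_rate=0.8, service_rate=1.0, horizon=10_000)
print(f"tokens: {len(waits)}, mean wait: {sum(waits) / len(waits):.2f}")
```

Raising `arrival_rate` toward `service_rate` makes mean waits grow sharply, which is the kind of stream/consumer mismatch a more detailed model helps expose early.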


Author(s):  
Chinmai Shetty ◽  
Sarojadevi H ◽  
Suraj Prabhu

The flexibility provided by cloud service providers at reduced cost has made the cloud tremendously popular. The cloud service provider must schedule incoming requests dynamically, and in a cloud environment tasks must be scheduled so that proper resource utilization is achieved. Task scheduling therefore plays a significant role in the functionality and performance of cloud computing systems. While many approaches exist for improving task scheduling in the cloud, it remains an unresolved issue. In the proposed framework we attempt to optimize the usage of cloud computing resources by applying machine learning techniques. Rather than arbitrarily assigning a task to a scheduling algorithm, the framework uses a neural network to dynamically predict the scheduling algorithm best suited to each incoming request. The framework considers the scheduling parameters cost, throughput, makespan and degree of imbalance. The algorithms chosen for scheduling are 1) MET, 2) MCT, 3) Sufferage, 4) Min-min, 5) Min-mean and 6) Min-var. The framework includes four neural networks, one to predict the best algorithm for each scheduling parameter considered for optimization. The PCA algorithm is used to extract relevant features from the input data set. The proposed framework shows scope for improving overall system performance by dynamically selecting a precise scheduling algorithm for each incoming request from the user.
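Of the candidate algorithms listed, Min-min is the most commonly sketched. The following is a generic textbook rendition, not the authors' implementation; the ETC (expected time to compute) matrix and all names are illustrative.

```python
def min_min_schedule(etc, n_machines):
    """Min-min heuristic: repeatedly pick the unassigned task whose
    minimum expected completion time across all machines is smallest,
    assign it to that machine, and update the machine's ready time.

    etc[i][j] = expected time to compute task i on machine j.
    Returns (task -> machine assignment, overall makespan).
    """
    ready = [0.0] * n_machines       # when each machine becomes free
    unassigned = set(range(len(etc)))
    assignment = {}
    while unassigned:
        # earliest completion time over all (task, machine) pairs
        ct, task, machine = min(
            (ready[j] + etc[i][j], i, j)
            for i in unassigned
            for j in range(n_machines)
        )
        assignment[task] = machine
        ready[machine] = ct
        unassigned.remove(task)
    return assignment, max(ready)

# 4 tasks on 2 machines (illustrative ETC matrix)
etc = [[3, 5], [2, 4], [6, 1], [4, 4]]
assignment, makespan = min_min_schedule(etc, n_machines=2)
print(assignment, makespan)
```

MCT differs only in considering tasks in arrival order, and Sufferage in prioritizing the task that would "suffer" most if denied its best machine, so this skeleton adapts to several of the listed algorithms.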


2020 ◽  
Vol 4 (4) ◽  
pp. 38
Author(s):  
Lisa Muller ◽  
Christos Chrysoulas ◽  
Nikolaos Pitropakis ◽  
Peter J. Barclay

The shift towards microservisation observable in recent developments of the cloud landscape for applications has led to the emergence of the Function as a Service (FaaS) concept, also called Serverless. This term describes the event-driven, reactive programming paradigm of functional components in container instances, which are scaled, deployed, executed and billed by the cloud provider on demand. However, increasing reports of issues with Serverless services have shown significant obscurity regarding their reliability. In particular, developers and especially system administrators struggle with latency compliance. In this paper, following a systematic literature review, the performance indicators influencing traffic and the effective delivery of the provider's underlying infrastructure are determined by carrying out empirical measurements based on the example of a File Upload Stream on Amazon's Web Service Cloud. This popular example was used as an experimental baseline in this study, based on different incoming request rates. Different parameters were used to monitor and evaluate changes through the function's logs. It was found that the so-called Cold-Start, meaning the time needed to provision a new instance, can increase the Round-Trip-Time by 15% on average. A Cold-Start happens after an instance has not been called for around 15 min, or after around 2 h have passed, which marks the end of the instance's lifetime. The research shows how these numbers have changed in comparison to earlier related work, as Serverless is a fast-growing field of development. Furthermore, emphasis is given to future research to improve the technology, algorithms, and support for developers.
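The cold-start behaviour described (roughly 15 min idle timeout, roughly 2 h lifetime, ~15% RTT overhead) can be mimicked with a toy lifecycle model. The constants below come from the abstract or are purely illustrative; they are not AWS-documented values, and real platforms vary.

```python
COLD_AFTER = 15 * 60    # instance recycled after ~15 min idle (per the study)
LIFETIME = 2 * 60 * 60  # instance retired after ~2 h (per the study)
BASE_RTT = 0.20         # warm round-trip time in seconds (illustrative)

class Instance:
    """Toy FaaS instance: a request is 'cold' if the instance was never
    warmed, sat idle too long, or exceeded its lifetime."""

    def __init__(self):
        self.created = None
        self.last_used = None

    def handle(self, now):
        cold = (
            self.created is None
            or now - self.last_used > COLD_AFTER
            or now - self.created > LIFETIME
        )
        if cold:
            self.created = now          # fresh container is provisioned
        self.last_used = now
        return BASE_RTT * 1.15 if cold else BASE_RTT  # ~15% cold penalty

inst = Instance()
r1 = inst.handle(0)            # first call: cold start
r2 = inst.handle(10)           # 10 s later: warm
r3 = inst.handle(10 + 20 * 60) # 20 min idle: cold again
```

Keep-alive pings spaced under the idle timeout are the usual workaround, at the cost of paying for otherwise idle invocations.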


2020 ◽  
Vol 9 (01) ◽  
pp. 24921-24924
Author(s):  
Minakshi Roy ◽  
Shamsh Ahsan ◽  
Gaurav Kumar ◽  
Ajay Vimal

With the worldwide growth of the Internet, we need a protocol that is faster and provides better support for the following: faster connection establishment time, good congestion control, connection migration and good error correction. One of the key aspects taken under consideration was the current connection establishment time whenever a website is requested, together with poor video buffering over existing Internet connections. The prime objective is to create a proxy server which routes incoming connection requests to QUIC-supporting libraries if the client supports QUIC. If the client does not support QUIC, the proxy routes the incoming request to an existing web server, which can then handle the request using TCP. After creation of the proxy server, a website was created with which we can test various aspects of the QUIC protocol. Keywords: QUIC, Protocol, UDP, Network
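The routing decision could be sketched as follows. This is not the authors' proxy: it is a minimal illustration assuming the proxy can see the transport and the first bytes of each incoming connection, and it uses the fact that QUIC long-header packets (RFC 9000) set the high bit of their first byte, while non-QUIC web traffic arrives over TCP.

```python
def looks_like_quic_initial(first_bytes: bytes) -> bool:
    """Rough heuristic: a QUIC long-header packet (e.g. the Initial
    sent on connection setup) has the high bit of byte 0 set (RFC 9000)."""
    return len(first_bytes) > 0 and bool(first_bytes[0] & 0x80)

def route(transport: str, payload: bytes) -> str:
    """Send QUIC-looking UDP datagrams to the QUIC-capable backend;
    everything else falls back to the classic TCP web server."""
    if transport == "udp" and looks_like_quic_initial(payload):
        return "quic-backend"
    return "tcp-backend"

print(route("udp", b"\xc0\x00\x00\x00\x01"))  # QUIC long header
print(route("tcp", b"GET / HTTP/1.1\r\n"))    # plain HTTP over TCP
```

In practice a client learns of QUIC support via the Alt-Svc mechanism and retries over UDP itself, so a real proxy mostly just needs to listen on both transports and forward accordingly.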


Displaying examination results through a single central entity for lakhs of students becomes a tedious task, and may sometimes also result in server crashes. These servers typically rely on heavy, often unrestricted threads spawned to handle each incoming request, which is why server resources are used up quickly. We propose a solution that is three-fold. First, multiple Volunteer entities are brought in to hold the data and donate a portion of their computing power to offload the enormous work placed on the central entity. Second, the central entity is changed to play the role of a dispatcher that generates, monitors and assigns extremely lightweight, independent processes (called agents) to each user request, without requiring any additional hardware upgrade. Each agent is responsible for satisfying its assigned user requests. Third, we introduce a load balancing technique, derived from the ideas of autonomous-agent load balancing in the cloud, that balances load among the Volunteer entities and the central entity such that a Volunteer entity can continue with its own tasks and not be overwhelmed by its volunteer work, while ensuring fast response time, better reliability and better response for the user.
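A minimal sketch of the dispatcher/agent idea, assuming the lightweight agents map onto asyncio tasks (far cheaper than OS threads) and volunteers are picked round-robin; all node names are hypothetical and the result fetch is a stub.

```python
import asyncio
import itertools

VOLUNTEERS = ["volunteer-a", "volunteer-b", "central"]  # hypothetical nodes
_rr = itertools.cycle(VOLUNTEERS)

async def agent(request_id: int) -> str:
    """One lightweight, independent 'agent' per user request."""
    node = next(_rr)        # trivial round-robin offload to a volunteer
    await asyncio.sleep(0)  # stand-in for fetching the result from the node
    return f"request {request_id} served by {node}"

async def dispatcher(n_requests: int):
    """The dispatcher only spawns and gathers agents; it does no heavy
    per-request work itself."""
    return await asyncio.gather(*(agent(i) for i in range(n_requests)))

results = asyncio.run(dispatcher(6))
for line in results:
    print(line)
```

A production version would replace the round-robin with the paper's load-aware policy so an overloaded volunteer is skipped rather than blindly cycled to.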


2015 ◽  
Vol 14 (6) ◽  
pp. 5803-5808 ◽  
Author(s):  
Settu Bharti ◽  
Naseeb Singh

Cloud computing is an emerging paradigm in the computer industry in which computing is moved to a cloud of computers. Cloud computing is a way to increase capacity or add capabilities dynamically without investing in new infrastructure, training new personnel or licensing new software. This paper focuses on the load balancing issues of cloud computing and on techniques to reduce waiting time and turnaround time. Load balancing is done with the help of load balancers, to which each incoming request is redirected transparently to the client who makes it. Based on predetermined parameters, such as availability or current load, the load balancer uses a scheduling algorithm to determine which server should handle the request and forwards it to the selected server.
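A balancer of the kind described, choosing by availability and current load, might look like the following minimal sketch; it is illustrative and not tied to any particular load balancer product.

```python
class LoadBalancer:
    """Minimal 'least current load' balancer: each incoming request is
    forwarded to whichever available server has the fewest requests in
    flight, transparently to the client."""

    def __init__(self, servers):
        self.load = {s: 0 for s in servers}       # active requests per server
        self.available = {s: True for s in servers}

    def dispatch(self, request_id):
        candidates = [s for s in self.load if self.available[s]]
        if not candidates:
            raise RuntimeError("no available servers")
        server = min(candidates, key=lambda s: self.load[s])
        self.load[server] += 1
        return server

    def complete(self, server):
        """Called when a server finishes a request."""
        self.load[server] -= 1

lb = LoadBalancer(["s1", "s2", "s3"])
targets = [lb.dispatch(i) for i in range(4)]
print(targets)
```

Round-robin, weighted round-robin and response-time-based policies drop into the same `dispatch` hook, which is why load balancers typically make the scheduling algorithm configurable.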


2013 ◽  
Vol 4 (2) ◽  
pp. 277-283 ◽  
Author(s):  
Imtiyaz Ahmad Lone ◽  
Jahangeer Ali ◽  
Kalimullah Lone

In this paper, security attacks on ARP are classified and logically organized and represented in a more lucid manner. ARP provides no authentication mechanism for incoming request packets; this is the reason any client can forge an ARP message containing malicious information to poison the ARP cache of a target host. There are many possible attacks on ARP which can make communication insecure, such as man-in-the-middle (MITM), denial of service (DoS) and cloning attacks.
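On the defensive side, the cache-poisoning symptom described (an IP address suddenly rebinding to a different MAC) can be flagged by a simple watcher. This sketch assumes ARP replies have already been parsed into (ip, mac) pairs; real tools such as arpwatch work on the same principle.

```python
def watch_arp(replies):
    """Flag ARP replies that rebind a known IP to a new MAC, a common
    symptom of cache poisoning. `replies` is an iterable of (ip, mac)
    pairs in the order they were seen on the wire.

    Returns a list of (ip, old_mac, new_mac) alerts.
    """
    bindings = {}  # ip -> last MAC seen for that ip
    alerts = []
    for ip, mac in replies:
        if ip in bindings and bindings[ip] != mac:
            alerts.append((ip, bindings[ip], mac))
        bindings[ip] = mac
    return alerts

replies = [
    ("10.0.0.1", "aa:aa:aa:aa:aa:aa"),   # gateway announces itself
    ("10.0.0.2", "bb:bb:bb:bb:bb:bb"),
    ("10.0.0.1", "cc:cc:cc:cc:cc:cc"),   # gateway 'moves': suspicious
]
alerts = watch_arp(replies)
print(alerts)
```

Legitimate rebindings do occur (DHCP churn, NIC replacement), so such alerts are a prompt for investigation rather than proof of an attack.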


Author(s):  
Jay Ramanathan ◽  
Rajiv Ramnath

Complex service-oriented organizations (such as IT customer service or a hospital emergency department) deal with many challenges due to incoming request types that we characterize as non-routine. Each such request reflects significant variations in the environment, and consequently in requirements, which drives the discovery of processing needs. At the same time, such organizations are often challenged with sharing high-cost resources and satisfying multiple stakeholders with different expectations. Performance improvement in this context is particularly challenging and requires new methods. To address this, the authors present an ontology designed for highly dynamic service organizations where traceable workflow data is difficult to obtain and there are many stakeholders. The ontology provides the contextual framework by which useful knowledge can be successfully extracted from performance data mined from scattered sources. Specifically, the service ontology 1) captures tacit knowledge as explicit in-the-micro feedback from workers performing Roles, 2) provides the structure for organizing in-the-small execution data from evolving processes and instances, and 3) aggregates process-instance metrics into a performance and decision-making facility aligned with the in-the-large goals of stakeholders. Using actual customer service requests, the authors illustrate the benefits of the ontology for relating aggregated goals to feedback from individual worker roles. The authors also illustrate the benefits in terms of identifying actionable improvement targets.


2011 ◽  
Vol 05 (04) ◽  
pp. 337-361 ◽  
Author(s):  
BENOIT CHRISTOPHE ◽  
VINCENT VERDOT ◽  
VINCENT TOUBIANA

With the proliferation of connected devices and the widespread adoption of the Web, ubiquitous computing has recently taken shape as an emergent paradigm called the 'Web of Things', in which Web-enabled objects are offered through interconnected smart spaces. While some predict a near future with billions of Web-enabled objects, the success of this vision now depends on the creation of efficient processes and the availability of tools enabling users or applications to find connected objects matching a set of requirements (and expectations). We present ongoing work that aims to develop a search process dedicated to the 'Web of Things' and that relies on three contributions: firstly, the creation and use of semantic profiles for connected objects; secondly, the establishment of similarities between the semantic profiles of different connected objects to gather them into clusters; thirdly, the computation of a score associating a 'context of search' with an incoming request, enabling the selection of the most appropriate search algorithms, involving either probabilistic or precise reasoning.
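The second contribution, similarity between semantic profiles, could be approximated as in the following sketch, which assumes profiles are weighted feature vectors and uses plain cosine similarity with a greedy single-pass clustering; the threshold, profiles and names are all illustrative, not the paper's actual method.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two sparse profiles (feature -> weight)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(profiles, threshold=0.7):
    """Greedy single-pass clustering: attach each object to the first
    cluster whose representative profile is similar enough, else start
    a new cluster."""
    clusters = []  # list of (representative profile, member names)
    for name, prof in profiles.items():
        for rep, members in clusters:
            if cosine(rep, prof) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((prof, [name]))
    return [members for _, members in clusters]

profiles = {
    "lamp-1":   {"lighting": 1.0, "dimmable": 0.5},
    "lamp-2":   {"lighting": 1.0, "dimmable": 0.4},
    "sensor-1": {"temperature": 1.0, "indoor": 0.8},
}
groups = cluster(profiles)
print(groups)
```

Clustering similar objects up front is what lets a search process restrict an incoming request to the few relevant clusters rather than scoring every object on the Web of Things individually.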

