A QoS-Latency Aware Event Stream Processing with Elastic-FaaS

Stream processing systems need to be elastically scalable to process and respond to unpredictable, massive load spikes in real time with high throughput and low latency. Although modern cloud technologies can help provision the required computing resources elastically and on the fly, the right point in time to do so varies among systems based on their expected QoS characteristics. The latency sensitivity of stream processing applications varies with their nature and pre-set requirements: for some applications, even a small delay in the response has a huge impact, whereas for others it matters little. For the former, the processing systems are expected to be highly available, elastically scalable, and fast enough to respond whenever there is a spike. Provisioning systems elastically under FaaS takes far less time than provisioning virtual machines and containers. However, current FaaS systems have limitations that must be overcome to handle unexpected spikes in real time. This paper proposes a new algorithm, Elastic-FaaS, built on top of existing FaaS to overcome this QoS latency issue. Whenever there is demand, our algorithm provisions more FaaS container instances than any typical FaaS can normally provision, avoiding the latency issue. We have evaluated our algorithm on an event stream processing system, and the results show that Elastic-FaaS outperforms typical FaaS, improving throughput while meeting high-accuracy and low-latency requirements.
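The abstract gives no pseudocode for Elastic-FaaS, so the following is only a minimal, hypothetical sketch of the scaling idea it describes; the function names and the burst factor are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: on a load spike, provision more FaaS container
# instances than a typical concurrency-driven autoscaler would target.
import math

def target_instances(arrival_rate: float, per_instance_rate: float,
                     burst_factor: float = 1.5) -> int:
    """Number of FaaS container instances to keep warm for the current load.

    A typical FaaS autoscaler would target ceil(arrival_rate / per_instance_rate);
    Elastic-FaaS, as described, provisions beyond that so latency-sensitive
    events are not queued while additional containers cold-start.
    """
    baseline = math.ceil(arrival_rate / per_instance_rate)
    return math.ceil(baseline * burst_factor)

# Example: 1,200 events/s at 100 events/s per instance ->
# a typical FaaS would target 12 instances; this sketch provisions 18.
print(target_instances(1200, 100))
```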

2021
Author(s): Frank Appiah

This is about the overall functionality and complexity (size) of the open source event stream processing system, or StreamEPS for short. The elements of the platform will be functional if the design follows the application interfaces described in this work. The engine architecture details the overall functionality in terms of the engine core, engine context, and engine processing.
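Purely to illustrate the three-part decomposition the abstract names (engine core, engine context, engine processing), here is a hypothetical sketch of how such a separation of concerns could look. StreamEPS itself is a Java code base, and none of these class or method names are from its real API.

```python
# Hypothetical illustration only -- not the StreamEPS API.
from typing import Any, Callable, Dict

class EngineContext:
    """Holds configuration and shared state the engine runs against."""
    def __init__(self, config: Dict[str, Any]):
        self.config = config

class EngineProcessing:
    """Wraps the per-event processing logic."""
    def __init__(self, handler: Callable[[Any], Any]):
        self.handler = handler

class EngineCore:
    """Drives events through the processing stage under a given context."""
    def __init__(self, context: EngineContext, processing: EngineProcessing):
        self.context, self.processing = context, processing
    def send(self, event: Any) -> Any:
        return self.processing.handler(event)

engine = EngineCore(EngineContext({"name": "demo"}),
                    EngineProcessing(lambda e: e.upper()))
print(engine.send("tick"))  # -> "TICK"
```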


Author(s): Prof. (Dr) Pawan Bhaladhare

At any given time, thousands of people are searching for a particular thing, and only a fraction of them get the answer they wanted. A quick search will usually surface the right answer somewhere, but reaching it is not always fast: every single query returns hundreds of results, which is helpful but also somewhat confusing for the user, who may have to try many links before finding the desired answer. In our proposed system, the user simply enters the search query and selects the desired website for the answer, and within a few seconds the answer is displayed. When the user inputs the query and a particular website, the data is scraped from that website and fed to an NLP system that is responsible for minimizing the size of the answer while taking care not to change or lose any valuable data.
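A minimal sketch of the described pipeline, assuming the requests and beautifulsoup4 packages: scrape the user-chosen page, then shrink the text with a simple frequency-based extractive summary. The abstract does not name the actual NLP model used, so the summarizer here (and the example URL) are stand-in assumptions.

```python
# Sketch: scrape a user-selected page, then extract its top-scoring sentences.
import re
from collections import Counter

import requests
from bs4 import BeautifulSoup

def scrape_text(url: str) -> str:
    """Fetch the page and flatten it to visible text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

def summarize(text: str, max_sentences: int = 3) -> str:
    """Keep the sentences whose words occur most often, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(sentences, key=lambda s: -sum(
        words[w] for w in re.findall(r"[a-z']+", s.lower())))
    top = set(scored[:max_sentences])
    return " ".join(s for s in sentences if s in top)

answer = summarize(scrape_text("https://example.com/article"))  # hypothetical URL
print(answer)
```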


2021
Vol 7, pp. e426
Author(s): Nirav Bhatt, Amit Thakkar

Stream data is data that is generated continuously from different data sources, ideally defined as data with no discrete beginning or end. Processing stream data is a part of big data analytics that aims at querying the continuously arriving data and extracting meaningful information from the stream. Although such streams were earlier processed with batch analytics, there are now applications, such as the stock market, patient monitoring, and traffic analysis, where producing output at the level of hours or minutes makes a drastic difference. The primary goal of any real-time stream processing system is to process stream data as soon as it arrives. Correspondingly, analytics on stream data also has to consider surrounding dependent data. For example, stock market analytics results are often useless if we do not consider the associated or dependent parameters that affect the result. In real-world applications, these dependent streams usually arrive from a distributed environment. Hence, the stream processing system has to be designed to deal with delays in the arrival of such data from distributed sources. We have designed a stream processing model that can deal with all possible latency and provide an end-to-end low-latency system. We have performed stock market prediction by considering affecting parameters, such as USD, OIL Price, and Gold Price, with an equal arrival rate. We have calculated the Normalized Root Mean Square Error (NRMSE), which simplifies comparison among models with different scales. A comparative analysis of the experiments presented in the report shows a significant improvement in the result when the affecting parameters are considered. In this work, we have used a statistical approach to forecast the probability of data latency arising from distributed sources. Moreover, we have performed preprocessing of the stream data to ensure at-least-once delivery semantics. In the direction of providing low processing latency, we have also implemented exactly-once processing semantics. Extensive experiments have been performed with varying window sizes and data arrival rates. We conclude that system latency can be reduced when the window size is equal to the data arrival rate.
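For reference, NRMSE is the root mean square error scaled by the spread of the observed series, which is what makes scores comparable across models operating on different scales. The abstract does not say which normalizer (range or mean) was used; the common range-normalized form is:

```latex
\mathrm{NRMSE} \;=\; \frac{1}{y_{\max} - y_{\min}}
\sqrt{\frac{1}{n} \sum_{t=1}^{n} \left( \hat{y}_t - y_t \right)^2}
```

where y_t are the observed values, ŷ_t the predictions, and n the number of observations.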


ETRI Journal
2009
Vol 31 (4), pp. 463-465
Author(s): Jongik Kim, Oh-Cheon Kwon, Hyunsuk Kim



High-throughput, low-latency stream processing systems are required to be elastic enough to scale on demand for varying load spikes. However, current stream processing systems resort to load shedding, which impacts the final accuracy. To get rid of this issue, elasticity can be implemented in all the kinds of resources involved in a stream processing system. This paper focuses on providing elastic scalability in queues and Serverless functions for event stream processing systems. First, we explain in detail the need for elastic multi-queues with Serverless functions for event stream processing, and then propose an algorithm for elastic scalability of multiple M/M/s/K queues with Serverless functions for efficient stream processing. The experimental results show that, with the help of our proposed algorithm, the system scales very well within a short span of time. The increased availability in turn helps achieve high processing throughput at low latency.
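The abstract names the M/M/s/K model but gives no formulas, so the sketch below is only a reference implementation of the standard M/M/s/K steady-state equations, plus an illustrative scaling rule (not the paper's algorithm): add Serverless workers until the blocking probability, i.e. the fraction of events shed because the queue is full, drops below a target.

```python
# Sketch: size the Serverless worker pool from M/M/s/K blocking probability.
import math

def blocking_probability(lam: float, mu: float, s: int, K: int) -> float:
    """P(an arriving event is dropped) for an M/M/s/K queue (requires s <= K)."""
    a = lam / mu  # offered load
    # Unnormalized steady-state weights p_n / p_0.
    weights = [a**n / math.factorial(n) for n in range(s + 1)]
    weights += [a**n / (math.factorial(s) * s**(n - s))
                for n in range(s + 1, K + 1)]
    p0 = 1.0 / sum(weights)
    return weights[K] * p0  # probability the system is full

def servers_needed(lam: float, mu: float, K: int, target: float = 0.01) -> int:
    """Smallest worker count keeping the shed fraction under the target."""
    for s in range(1, K + 1):  # at most K servers fit in a K-capacity system
        if blocking_probability(lam, mu, s, K) <= target:
            return s
    return K

# Example: 90 events/s, 10 events/s per worker, capacity 30 -> prints 10.
print(servers_needed(90, 10, 30))
```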

