An Efficient Stream Data Processing Model for Multiuser Cryptographic Service

2018 · Vol 2018 · pp. 1-10
Author(s): Li Li, Fenghua Li, Guozhen Shi, Kui Geng

In view of the demand for high-concurrency encryption and decryption services over massive data in the security field, this paper proposes a dual-channel pipeline parallel data processing model (DPP) based on the characteristics of cryptographic operations, and realizes cryptographic operations over cross-data streams with different service requirements in a multiuser environment. Cryptographic operation requests are encapsulated in job packages; the dual-channel mechanism and parallel job-package scheduling divide the input data flow, ensuring synchronization between dependent and parallel job packages and hiding the processing of independent job packages within the processing of dependent ones. Prototype experiments show that the model processes multiservice cross-data streams correctly and rapidly. Increasing the pipeline depth and improving the processing performance of each pipeline stage are the keys to improving overall system performance.
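The abstract does not publish the DPP scheduler itself, so the following is a minimal Python sketch of the dual-channel idea under stated assumptions: job packages carry a dependency flag, dependent packages are processed in stream order, and independent packages from other streams are drained concurrently so their latency is hidden. All class, queue, and function names here are hypothetical.

```python
import queue
import threading

class JobPackage:
    """Encapsulates one cryptographic operation request for one user stream."""
    def __init__(self, user_id, block, depends_on_previous=False):
        self.user_id = user_id
        self.block = block
        self.depends_on_previous = depends_on_previous  # e.g. chained modes such as CBC

dependent_q = queue.Queue()    # channel 1: packages that must run in stream order
independent_q = queue.Queue()  # channel 2: packages free to run in any order

def dispatch(pkg):
    # The dual-channel mechanism divides the input flow by dependency.
    (dependent_q if pkg.depends_on_previous else independent_q).put(pkg)

def process(pkg):
    # Stand-in for the real cipher core (e.g. one block encryption).
    return bytes(b ^ 0x5A for b in pkg.block)

def independent_worker():
    # Runs concurrently, so independent packages are hidden inside the
    # latency of the dependent chain handled on the main path.
    while True:
        pkg = independent_q.get()
        if pkg is None:          # sentinel: shut the worker down
            break
        process(pkg)

threading.Thread(target=independent_worker, daemon=True).start()

# Interleave a dependent chain (user-A) with order-free packages (user-B).
for i in range(4):
    dispatch(JobPackage("user-A", bytes([i] * 16), depends_on_previous=True))
    dispatch(JobPackage("user-B", bytes([i] * 16)))
while not dependent_q.empty():
    process(dependent_q.get())   # dependent packages in strict arrival order
independent_q.put(None)
```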

Author(s): Parimala N.

A data stream is a real-time continuous sequence that may be comprised of data or events. Data stream processing differs from the processing of static data residing in a database: stream data is seen only once and is too voluminous to store in full. A small portion of the data, called a window, is considered at a time for querying, computing aggregates, and similar operations. In this chapter, the authors explain the different types of window movement over incoming data. A query on a stream is repeatedly executed on the new data produced by the movement of the window. SQL extensions to handle continuous queries are addressed in this chapter. Streams that contain transactional data as well as those that contain events are considered.
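As an illustration of window movement, here is a small Python sketch (data and window sizes are arbitrary) contrasting a tumbling window, which jumps forward by its full size, with a sliding window, which advances by a smaller slide; the "continuous query" (here, an average) is re-executed each time the window moves.

```python
from collections import deque

def tumbling(stream, size):
    """Non-overlapping windows: the window jumps forward by its full size."""
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) == size:
            yield list(buf)   # the continuous query re-runs on each new window
            buf.clear()

def sliding(stream, size, slide=1):
    """Overlapping windows: the window moves forward by `slide` items."""
    buf = deque(maxlen=size)
    for i, item in enumerate(stream, 1):
        buf.append(item)
        if len(buf) == size and (i - size) % slide == 0:
            yield list(buf)

readings = [3, 5, 4, 8, 6, 7, 2, 9]
for w in tumbling(readings, 4):
    print("tumbling avg:", sum(w) / len(w))
for w in sliding(readings, 4, slide=2):
    print("sliding  avg:", sum(w) / len(w))
```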


2020 · Vol 1 (1) · pp. 1-21
Author(s): Devesh Kumar Lal, Ugrasen Suman

Processing real-time data streams is complex owing to their large volume and variety, which require many processing units to run in real time. A windowing mechanism lowers the number of processing units required, so selecting an appropriate window size is vital for stream data processing: a coarse window directly increases overall processing time, while a finely sized window incurs higher management costs. To manage such streams of data, we have proposed the SBASH architecture, which helps determine a unipartite size for a sheer window. The sheer window reduces the overall latency of data stream processing by a certain extent, and the time complexity of processing such a sheer window is equivalent to w log n_w. These windows are allocated and retrieved in a stack-based manner, where stacks ≥ n, which helps reduce the number of comparisons made during retrieval.
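The abstract does not describe SBASH's internals, so the following Python sketch only illustrates the stack-based allocation idea under assumptions: fixed-size ("sheer") windows are pushed round-robin onto several stacks and retrieved in LIFO order, keeping each stack shallow so a retrieval touches fewer windows. All names are hypothetical.

```python
class SheerWindowStore:
    def __init__(self, num_stacks, window_size):
        self.window_size = window_size            # fixed "sheer" window size w
        self.stacks = [[] for _ in range(num_stacks)]
        self.push_order = []                      # which stack received each window
        self.current = []
        self.next_stack = 0

    def append(self, item):
        """Buffer items; push each full window onto the next stack, round-robin."""
        self.current.append(item)
        if len(self.current) == self.window_size:
            self.stacks[self.next_stack].append(self.current)
            self.push_order.append(self.next_stack)
            self.next_stack = (self.next_stack + 1) % len(self.stacks)
            self.current = []

    def pop_latest(self):
        """Retrieve windows LIFO; spreading them over many stacks keeps each
        stack shallow, reducing the comparisons needed per retrieval."""
        if not self.push_order:
            return None
        return self.stacks[self.push_order.pop()].pop()

store = SheerWindowStore(num_stacks=4, window_size=3)
for x in range(10):
    store.append(x)
print(store.pop_latest())   # [6, 7, 8] - the most recently completed window
```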


Data streams pose several computational challenges because of their massive volume and very fast arrival rate. They are gaining the attention of today's research community for their utility in almost all fields. Organizing the data into groups enables researchers to derive useful and valuable information and conclusions from the categories that are discovered. Clustering makes this organization easier and plays an important role in exploratory data analysis. This paper focuses on the amalgamation of two important algorithms: density-based clustering, used to group the data, and a dissimilarity matrix algorithm, used to find outliers among the data. Before the data is fed in, the algorithm filters out sparse data, and a continuous monitoring system performs frequent outlier and inlier checks on the live stream data using a buffer timer. This approach provides an optimistic way of recognizing outlier data that may later be reverted to inlier status based on certain criteria; the DenDis approach thus considers every data point as one that "may get life in future".
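A hedged sketch of the dissimilarity-matrix side of such an approach: pairwise distances over the buffered points flag any point with too few close neighbors as an outlier, which may later revert to inlier status as new points arrive in the buffer. Thresholds and function names are illustrative, not taken from the paper.

```python
import math

def dissimilarity_matrix(points):
    """Pairwise Euclidean distances between buffered stream points."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def flag_outliers(points, eps, min_neighbors):
    """Flag a point as an outlier if fewer than `min_neighbors` other points
    lie within distance `eps`; new arrivals can raise a flagged point's
    neighbor count and revert it to an inlier ("may get life in future")."""
    d = dissimilarity_matrix(points)
    outliers = []
    for i in range(len(points)):
        close = sum(1 for j in range(len(points)) if i != j and d[i][j] <= eps)
        if close < min_neighbors:
            outliers.append(i)
    return outliers

buffer = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0), (5.0, 5.0)]
print(flag_outliers(buffer, eps=0.5, min_neighbors=2))   # -> [3]
```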


2015 · Vol 22 (3) · pp. 99-104
Author(s): Henryk Krawczyk, Michał Nykiel, Jerzy Proficz

The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides substantial resources for massively parallel processing. Moreover, the Center's status as one of the main nodes in the PIONIER network enables fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially under real-time constraints, can be challenging and requires dedicated software components. We propose a solution for such parallel analysis on the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry or oil-spill simulation on the sea surface.

