COMPARISON OF SYSTEM PERFORMANCE FOR STREAMING DATA ANALYSIS IN IMAGE PROCESSING TASKS BY SLIDING WINDOW

2014 · Vol 38 (4) · pp. 804-810
Author(s): N. L. Kazanskiy, V. I. Protsenko, P. G. Serafimovich
Author(s): S. N. Kumar, A. Lenin Fred, L. R. Jonisha Miriam, Parasuraman Padmanabhan, Balázs Gulyás, ...

Author(s): Dimitrios Katramatos, Meng Yue, Shinjae Yoo, Kerstin Kleese van Dam, Jin Xu, ...

Author(s): Mária Ždímalová, Tomáš Bohumel, Katarína Plachá-Gregorovská, Peter Weismann, Hisham El Falougy

2019 · Vol 15 (12) · pp. 155014771989454
Author(s): Hao Luo, Kexin Sun, Junlu Wang, Chengfeng Liu, Linlin Ding, ...

With the development of streaming data processing technology, real-time event monitoring and querying have become a central problem in the field. This article investigates coal mine disaster events and proposes a new anti-aliasing model for abnormal events together with a multistage identification method. Coal mine micro-seismic signals are of great importance for studying the vibration characteristics, attenuation laws, and damage assessment of coal mine disasters. However, owing to factors such as geological structure and energy loss, micro-seismic signals of the same kind of disaster may drift in the time domain during transmission, appearing weakened or enhanced, which reduces the accuracy of identifying abnormal events (the coal mine disaster events). The current mine disaster monitoring method is a lagged identification that observes a series of sensors using a 10-s-long data waveform as the monitoring unit. The identification method proposed in this article first takes advantage of the dynamic time warping (DTW) algorithm, widely used in audio recognition, to build the anti-aliasing model; it decides whether the perceived data are a disaster signal by fitting their similarity to the template waveform of historical disaster data. Second, because the real-time monitoring data arrive as a continuous stream, the start point of the disaster waveform must be located before the disaster signal can be identified. The article therefore proposes a strategy based on a variable sliding window to align the two waveforms: the perceptual window slides gradually to locate the start points of the perceived disaster wave and the template wave, which guarantees the accuracy of the matching. Finally, the article proposes a multistage identification mechanism based on the sliding-window matching strategy and the characteristics of coal mine disaster waveforms; it adjusts the early-warning level according to how much of the disaster signal has been identified, raising the level each time a 1/N-sized piece of the template is successfully matched, and uses the piecewise aggregate approximation (PAA) method to optimize the computation. Experimental results show that the proposed method is more accurate and can run in real time.
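The pipeline this abstract describes (DTW similarity against a historical template, a variable sliding window to locate the waveform start point, PAA compression, and a warning level raised per matched 1/N template piece) can be sketched in a few functions. The sketch below is illustrative only; all names, window parameters, and thresholds are assumptions, not the authors' implementation.

```python
import numpy as np

def paa(signal, n_segments):
    """Piecewise Aggregate Approximation: compress a waveform into
    n_segments segment means to cheapen the DTW computation."""
    chunks = np.array_split(np.asarray(signal, dtype=float), n_segments)
    return np.array([c.mean() for c in chunks])

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def find_start_point(stream, template, min_win, max_win, step, threshold):
    """Variable sliding window: try window sizes between min_win and
    max_win (both should exceed the PAA segment count) at each offset,
    and return the offset whose compressed window best matches the
    template, or None if nothing beats the threshold."""
    t = paa(template, 32)
    best_start, best_dist = None, np.inf
    for start in range(0, max(1, len(stream) - min_win), step):
        for win in range(min_win, min(max_win, len(stream) - start) + 1, step):
            d = dtw_distance(paa(stream[start:start + win], 32), t)
            if d < best_dist:
                best_start, best_dist = start, d
    return best_start if best_dist < threshold else None

def warning_level(stream, start, template, n_stages, threshold):
    """Multistage identification: match successive 1/N-sized pieces of
    the template and raise the warning level for each piece that fits."""
    level, offset = 0, start
    for piece in np.array_split(np.asarray(template, dtype=float), n_stages):
        seg = stream[offset:offset + len(piece)]
        if len(seg) < len(piece) or dtw_distance(paa(seg, 8), paa(piece, 8)) >= threshold:
            break
        level += 1
        offset += len(piece)
    return level
```

In use, find_start_point would run on each incoming sensor window, and once a start point is found, warning_level would be re-evaluated as new samples arrive, escalating the alert stage by stage.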


Author(s): Guillaume Aupy, Brice Goglin, Valentin Honoré, Bruno Raffin

With the goal of performing exascale computing, input/output (I/O) management becomes ever more critical to maintaining system performance. While the computing capacity of machines keeps increasing, the I/O capabilities of systems do not grow as fast. We can generate more data but cannot manage them efficiently because of the variability of I/O performance, so limiting requests to the parallel file system (PFS) becomes necessary. To address this issue, new strategies such as online in situ analysis are being developed. The idea is to overcome the limitation of basic postmortem data analysis, where the data must first be stored on the PFS and processed later. Several software solutions let users dedicate nodes specifically to data analysis and distribute the computation tasks over different sets of nodes; so far, however, they rely on the user manually partitioning resources and allocating tasks (simulations, analyses). In this work, we propose a memory-constrained model for in situ analysis. We use this model to derive scheduling policies that determine both the number of resources that should be dedicated to analysis functions and an efficient schedule for those functions. We evaluate these policies and show the importance of accounting for memory constraints in the model. Finally, we discuss the challenges that must be addressed to build automatic tools for in situ analytics.
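As a concrete illustration of the resource-partitioning decision this paper automates, the sketch below packs analysis functions onto dedicated nodes with a first-fit-decreasing policy under a per-node memory budget. This is a generic bin-packing heuristic with assumed parameters, not the paper's scheduling policy.

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    name: str
    memory_gb: float  # peak memory footprint of the analysis function

def schedule_in_situ(analyses, node_memory_gb, total_nodes):
    """Pack analysis functions onto dedicated nodes, first-fit decreasing
    by memory footprint; whatever nodes remain run the simulation."""
    nodes = []  # each entry: [free_memory_gb, [assigned function names]]
    for a in sorted(analyses, key=lambda a: a.memory_gb, reverse=True):
        if a.memory_gb > node_memory_gb:
            raise ValueError(f"{a.name} does not fit on any node")
        for slot in nodes:
            if slot[0] >= a.memory_gb:
                slot[0] -= a.memory_gb
                slot[1].append(a.name)
                break
        else:  # no existing node has room: dedicate a new one
            nodes.append([node_memory_gb - a.memory_gb, [a.name]])
    if len(nodes) > total_nodes:
        raise ValueError("not enough nodes for all analysis functions")
    return [slot[1] for slot in nodes], total_nodes - len(nodes)

# Example: three analyses on 64 GB nodes out of a 128-node allocation.
plan, simulation_nodes = schedule_in_situ(
    [Analysis("histogram", 24), Analysis("compression", 40),
     Analysis("feature_tracking", 20)], node_memory_gb=64, total_nodes=128)
```

A real policy would also weigh compute cost and data movement, which is where a memory-only heuristic like this one stops and the paper's model begins.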


2019 · Vol 3 (1) · pp. 6
Author(s): Konstantinos Demertzis, Nikos Tziritas, Panayiotis Kikiras, Salvador Llopis Sanchez, Lazaros Iliadis

A Security Operations Center (SOC) is a central technical-level unit responsible for monitoring, analyzing, assessing, and defending an organization's security posture on an ongoing basis. SOC staff work closely with incident response teams, security analysts, network engineers, and organization managers, using sophisticated data processing technologies such as security analytics, threat intelligence, and asset criticality to ensure that security issues are detected, analyzed, and quickly addressed. These techniques are part of a reactive security strategy because they rely on the human factor and on the experience and judgment of security experts, with supplementary technology used to evaluate risk impact and minimize the attack surface. This study suggests an active security strategy that adopts a vigorous method combining ingenuity, data analysis, processing, and decision-making support to face various cyber hazards. Specifically, the paper introduces a novel intelligence-driven cognitive computing SOC based exclusively on progressive, fully automatic procedures. The proposed λ-Architecture Network Flow Forensics Framework (λ-ΝF3) is an efficient cybersecurity defense framework against adversarial attacks. It implements the Lambda machine learning architecture, which can analyze a mixture of batch and streaming data, using two accurate novel computational intelligence algorithms: an Extreme Learning Machine neural network with a Gaussian Radial Basis Function kernel (ELM/GRBFk) for batch data analysis and a Self-Adjusting Memory k-Nearest Neighbors classifier (SAM/k-NN) for examining patterns in real-time streams. It is a big data forensics tool that can enhance the automated defense strategies of SOCs so they respond effectively to the threats their environments face.
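The Lambda split this abstract describes (one model trained offline on archived batch data, another adapting online to the stream, with a serving step merging the two) can be sketched minimally as below. The ELM/GRBFk and SAM/k-NN components are replaced by simple stand-ins (a memorized archive and a bounded-memory k-NN); everything here is an assumption for illustration, not the framework's actual code.

```python
import numpy as np
from collections import deque

class LambdaFlowClassifier:
    """Toy Lambda-style classifier for flow records with small
    non-negative integer labels (e.g. 0 = benign, 1 = attack)."""

    def __init__(self, k=5, memory=10_000):
        self.k = k
        self.recent = deque(maxlen=memory)  # speed layer: (x, y) pairs
        self.batch_X = None                 # batch layer: archived flows
        self.batch_y = None

    def retrain_batch(self, X, y):
        """Batch layer: memorize the archive; a real system would fit an
        offline model (the paper uses ELM/GRBFk) here instead."""
        self.batch_X, self.batch_y = np.asarray(X, float), np.asarray(y)

    def observe(self, x, y):
        """Speed layer: absorb a freshly labeled streaming record,
        evicting the oldest once the bounded memory is full."""
        self.recent.append((np.asarray(x, float), y))

    def _knn(self, X, y, x):
        order = np.argsort(np.linalg.norm(X - x, axis=1))
        return np.bincount(y[order[:self.k]]).argmax()

    def predict(self, x):
        """Serving layer: prefer the speed layer when it holds enough
        recent evidence, otherwise fall back to the batch model."""
        x = np.asarray(x, float)
        if len(self.recent) >= self.k:
            Xr = np.stack([p[0] for p in self.recent])
            yr = np.array([p[1] for p in self.recent])
            return self._knn(Xr, yr, x)
        return self._knn(self.batch_X, self.batch_y, x)
```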


2019 · Vol 9 (11) · pp. 2382
Author(s): Jose Rabadan, Victor Guerra, Carlos Guerra, Julio Rufo, Rafael Perez-Jimenez

In this work, a new Time Difference of Arrival (TDoA) scheme for distance measurement based on Optical Camera Communication (OCC) systems is proposed. It relies on optical pulses instead of radio-frequency signals as the time reference triggers, and on a rolling shutter camera whose characteristics allow the timer modules used in conventional TDoA techniques to be replaced by image processing of the illuminated area in the picture. This processing of the camera's images provides the time measurements and requires a specific analysis, which is presented in this work. The system's performance and properties, such as resolution and range, depend mainly on camera characteristics such as the frame capture rate and the image quality. The new technique is suitable for implementation on smartphones or other Commercial Off-The-Shelf (COTS) devices equipped with a camera and speakers.
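The timing trick can be made concrete: a rolling shutter exposes image rows sequentially at a fixed interval, so the first row an optical pulse illuminates acts as a timestamp whose resolution equals the per-row readout time. The sketch below shows that conversion; the camera parameters are assumed values, not figures from the paper.

```python
ROW_READOUT_S = 18.9e-6   # per-row readout interval (assumed camera spec)
FRAME_PERIOD_S = 1 / 30   # 30 fps capture rate (assumed)

def pulse_timestamp(frame_index, first_lit_row):
    """Arrival time of an optical pulse, reconstructed from the frame it
    lands in and the first image row it illuminates."""
    return frame_index * FRAME_PERIOD_S + first_lit_row * ROW_READOUT_S

def tdoa(frame_a, row_a, frame_b, row_b):
    """Time difference of arrival between two pulses; comparing row
    indices replaces the hardware timer of a conventional TDoA receiver."""
    return pulse_timestamp(frame_a, row_a) - pulse_timestamp(frame_b, row_b)
```

Under these assumed numbers the timing resolution is one row readout (about 19 µs), which is why the abstract notes that resolution and range are governed by the camera's capture rate and image quality.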

