Adaptive replica consistency policy for Kafka

2018 ◽  
Vol 173 ◽  
pp. 01019 ◽  
Author(s):  
Zonghuai Guo ◽  
Shiwang Ding

With the rapid development of the Internet, big-data real-time computing frameworks such as Storm, S4 and Spark Streaming are widely used in real-time monitoring, real-time recommendation, real-time transaction analysis and other systems that consume data streams, and the Kafka messaging system has been widely deployed alongside them. A Kafka cluster, however, incurs considerable network, disk and memory overhead to guarantee message reliability, which increases cluster load. To address this problem, an adaptive replica synchronization strategy based on message heat and replica update frequency is proposed. The results show that, by dynamically adjusting replica synchronization, the Kafka cluster can still guarantee message reliability while significantly reducing resource overhead and improving cluster throughput, thereby ensuring system availability and high performance.
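As a rough illustration of what such an adaptive policy could look like, the sketch below chooses a replica-acknowledgement mode per partition from two metrics standing in for "message heat" and replica update frequency. The metric names, thresholds, and decision rule are assumptions for illustration only; the paper's actual algorithm is not reproduced here.

```python
# Hypothetical sketch of an adaptive replica-synchronization policy for Kafka.
# Metric names, thresholds, and the decision rule are illustrative assumptions,
# not the strategy proposed in the paper.
from dataclasses import dataclass


@dataclass
class PartitionStats:
    reads_per_sec: float   # proxy for "message heat" (consumer demand)
    writes_per_sec: float  # proxy for replica update frequency


def choose_sync_mode(stats: PartitionStats,
                     heat_threshold: float = 1000.0,
                     update_threshold: float = 500.0) -> str:
    """Pick a replica acknowledgement mode for one partition.

    Hot, frequently updated partitions keep strict replication (all in-sync
    replicas acknowledge); colder partitions fall back to cheaper modes to
    save network, disk, and memory.
    """
    if stats.reads_per_sec >= heat_threshold and stats.writes_per_sec >= update_threshold:
        return "acks=all"   # full ISR acknowledgement, highest reliability
    if stats.reads_per_sec >= heat_threshold:
        return "acks=1"     # leader-only acknowledgement
    return "acks=0"         # fire-and-forget, minimal overhead


if __name__ == "__main__":
    print(choose_sync_mode(PartitionStats(reads_per_sec=1500, writes_per_sec=800)))
    print(choose_sync_mode(PartitionStats(reads_per_sec=50, writes_per_sec=10)))
```

In practice the same idea can also be expressed through per-topic settings such as `min.insync.replicas`; the point of the sketch is only the heat-driven switching, not a specific Kafka knob.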

2020 ◽  
Author(s):  
Markus Wiedemann ◽  
Bernhard S.A. Schuberth ◽  
Lorenzo Colli ◽  
Hans-Peter Bunge ◽  
Dieter Kranzlmüller

Precise knowledge of the forces acting at the base of tectonic plates is of fundamental importance, but models of mantle dynamics are still often qualitative in nature. One particular problem is that we cannot access the deep interior of our planet and therefore cannot make direct in situ measurements of the relevant physical parameters. Fortunately, modern software and powerful high-performance computing infrastructures allow us to generate complex three-dimensional models of the time evolution of mantle flow through large-scale numerical simulations.

In this project, we aim to visualize the resulting convective patterns that occur thousands of kilometres below our feet and to make them "accessible" using high-end virtual reality techniques.

Models with several hundred million grid cells are nowadays possible on modern supercomputing facilities, such as those available at the Leibniz Supercomputing Centre. These models provide quantitative estimates of inaccessible parameters, such as buoyancy and temperature, as well as predictions of the associated gravity field and seismic wavefield that can be tested against Earth observations.

3-D visualizations of the computed physical parameters allow us to inspect the models as if one were actually travelling down into the Earth. In this way, convective processes that occur thousands of kilometres below our feet become virtually accessible by combining the simulations with high-end VR techniques.

The large data set used here poses severe challenges for real-time visualization, because it cannot fit into graphics memory while requiring rendering with strict deadlines. This makes it necessary to balance the amount of displayed data against the time needed to render it.

As a solution, we introduce a rendering framework and describe the workflow that allows us to visualize this geoscientific dataset. Our example exceeds 16 TByte in size, which is beyond the capabilities of most visualization tools. To display this dataset in real time, we reduce and declutter it through isosurfacing and mesh optimization techniques.

Our rendering framework relies on multithreading and data-decoupling mechanisms that allow us to upload data to graphics memory while maintaining high frame rates. The final visualization application can be executed in a CAVE installation as well as on head-mounted displays such as the HTC Vive or Oculus Rift. The latter devices will allow for viewing our example on-site at the EGU conference.
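The isosurfacing step is the key data-reduction idea: a full voxel volume collapses to a triangle mesh that fits in graphics memory. The authors' rendering framework is custom and not shown; the sketch below only illustrates that reduction using NumPy and scikit-image (both assumptions, not tools named in the abstract) on a small synthetic field.

```python
# Minimal sketch of isosurface-based data reduction before rendering.
# NumPy and scikit-image stand in for the authors' custom pipeline.
import numpy as np
from skimage import measure

# Stand-in for one time step of a temperature field on a regular grid.
# A real mantle-flow model would be loaded from simulation output and be far larger.
nx, ny, nz = 128, 128, 128
x, y, z = np.meshgrid(np.linspace(-1, 1, nx),
                      np.linspace(-1, 1, ny),
                      np.linspace(-1, 1, nz), indexing="ij")
temperature = np.exp(-(x**2 + y**2 + z**2) * 4.0)  # synthetic "hot blob"

# Extract one isosurface: the voxel grid collapses to a triangle mesh,
# which is what gets uploaded to graphics memory instead of the raw volume.
verts, faces, normals, values = measure.marching_cubes(temperature, level=0.5)

print(f"voxels in volume : {temperature.size:,}")
print(f"mesh vertices    : {len(verts):,}")
print(f"mesh triangles   : {len(faces):,}")
# Further mesh optimization (decimation, deduplication) would shrink the mesh
# again before streaming it to the VR renderer.
```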


2017 ◽  
Author(s):  
Christian Patauner ◽  
Roberto Biasi ◽  
Mario Andrighettoni ◽  
Gerald Angerer ◽  
Dietrich Pescoller ◽  
...  

2005 ◽  
Vol 38 (2) ◽  
pp. 32-39
Author(s):  
M. Palomera-Pérez ◽  
L. Almeida ◽  
H. Benítez-Pérez

2013 ◽  
Vol 401-403 ◽  
pp. 1507-1513 ◽  
Author(s):  
Zhong Hu Yuan ◽  
Wen Tao Liu ◽  
Xiao Wei Han

In weld image acquisition systems, real-time image processing has long been a design bottleneck, especially where large data volumes and strict real-time requirements exceed what a traditional MCU can handle. A high-speed image acquisition and processing card built around a high-performance FPGA better meets the large data volumes and demanding real-time requirements of most image processing systems. Data collection, storage and display were implemented in Verilog, and, to reduce the influence of noise on edge detection, a preprocessing algorithm combining image enhancement and median filtering was used. Compared with a software implementation of the same preprocessing, the FPGA implementation has a large speed advantage, simplifies subsequent processing, and greatly improves the speed and efficiency of the entire image processing system. The results show that the system suppresses image noise effectively and extracts edge positions more accurately, so it can be applied to seam tracking, where real-time requirements are high.
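For reference, the preprocessing chain (median filtering plus enhancement before edge detection) can be sketched in software as below. The paper implements this in Verilog on an FPGA; OpenCV is used here only as a stand-in, and the kernel size, enhancement method, and edge thresholds are illustrative assumptions.

```python
# Software sketch of the weld-image preprocessing chain described above.
# The paper's pipeline runs in Verilog on an FPGA; OpenCV is a stand-in.
import cv2
import numpy as np


def preprocess_weld_image(gray: np.ndarray) -> np.ndarray:
    """Denoise and enhance a grayscale weld frame, then extract edges."""
    # Median filtering suppresses the impulse noise typical of welding scenes.
    denoised = cv2.medianBlur(gray, 5)
    # Simple contrast enhancement (histogram equalization) sharpens the seam region.
    enhanced = cv2.equalizeHist(denoised)
    # Edge detection on the cleaned frame; thresholds are placeholder values.
    return cv2.Canny(enhanced, 50, 150)


if __name__ == "__main__":
    # Synthetic test frame: a bright diagonal "seam" on a noisy background.
    img = np.full((240, 320), 40, dtype=np.uint8)
    cv2.line(img, (0, 200), (319, 40), color=220, thickness=3)
    noise = np.random.randint(0, 60, img.shape, dtype=np.uint8)
    edges = preprocess_weld_image(cv2.add(img, noise))
    print("edge pixels found:", int(np.count_nonzero(edges)))
```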


1997 ◽  
Vol 1 (3) ◽  
pp. 241-256 ◽  
Author(s):  
William Johnston ◽  
Jin Guojun ◽  
Case Larsen ◽  
Jason Lee ◽  
Gary Hoo ◽  
...  


Author(s):  
Alejandra Sarahi Sanchez-Moreno ◽  
Hector Manuel Perez-Meana ◽  
Jesus Olivares-Mercado ◽  
Gabriel Sanchez-Perez ◽  
Karina Toscano-Medina

Facial recognition systems have captivated research attention in recent years, and facial recognition technology is often required in real-time systems. With this rapid development, diverse machine learning algorithms for face detection and recognition have been proposed to address the existing challenges. In this paper we propose a system for face detection and recognition under unconstrained conditions in video sequences. We analyze learning-based and hand-crafted feature extraction approaches that have demonstrated high performance in facial recognition tasks. In the proposed system, we compare traditional algorithms with state-of-the-art facial recognition algorithms based on the approaches discussed. Experiments on unconstrained face detection and face recognition datasets show that learning-based algorithms achieve remarkable performance on the challenges posed by real-time systems.
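The hand-crafted versus learning-based comparison can be framed as swapping the feature extractor inside an otherwise fixed verification loop. The sketch below uses a HOG descriptor from scikit-image as the hand-crafted baseline and leaves the learning-based extractor as a pluggable CNN embedding; these choices, the threshold, and the synthetic inputs are illustrative assumptions, not the paper's exact methods.

```python
# Minimal sketch of comparing a hand-crafted descriptor with a learning-based
# embedding for face verification. HOG and the placeholder threshold are
# illustrative assumptions.
import numpy as np
from skimage.feature import hog


def handcrafted_features(face_img: np.ndarray) -> np.ndarray:
    """HOG descriptor of an aligned grayscale face crop (hand-crafted baseline)."""
    return hog(face_img, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1))


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def same_person(f1: np.ndarray, f2: np.ndarray, threshold: float = 0.8) -> bool:
    """Verification decision; the threshold is a placeholder tuned per extractor.

    A learning-based pipeline would swap handcrafted_features() for a CNN
    embedding (e.g. a FaceNet-style network) and keep this decision logic.
    """
    return cosine_similarity(f1, f2) >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face_a = rng.random((128, 128))                        # stand-in for an aligned face crop
    face_b = face_a + rng.normal(0, 0.05, face_a.shape)    # slightly perturbed copy
    fa, fb = handcrafted_features(face_a), handcrafted_features(face_b)
    print("same person?", same_person(fa, fb))
```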


2019 ◽  
Vol 9 (4) ◽  
pp. 760 ◽  
Author(s):  
Tuan Thanh Le ◽  
JongBeom Jeong ◽  
Eun-Seok Ryu

In recent years, the rapid development of surveillance through closed-circuit television (CCTV) has made it an indispensable element of security systems. CCTV systems designed for video compression and encryption still need improvement to reach the best performance at different security levels. In particular, the advent of 360 video makes CCTV promising for surveillance without blind areas. Compared to current systems, 360 CCTV requires large bandwidth and low latency to run smoothly, so the system must be made more robust to sustain this performance. Video transmission and transcoding are essential processes for converting codecs, changing bitrates or resizing the resolution of 360 videos, and high-performance transcoding is one of the key factors of a real-time CCTV stream. Additionally, securing video streams from cameras to endpoints is an important priority in CCTV research. In this paper, a real-time transcoding system designed with the ARIA block cipher encryption algorithm is presented. Experimental results show that the proposed method achieved approximately 200% speedup over libx265 in FFmpeg for the transcoding task, and it could handle multiple transcoding sessions simultaneously at high performance for both a live 360 CCTV system and existing 2D/3D CCTV systems.
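The overall flow (transcode a segment, then encrypt it before transmission) can be sketched as below. FFmpeg performs the codec conversion; AES-CTR from the Python cryptography package is used purely as a stand-in because the paper's ARIA block cipher is not exposed by common Python crypto libraries. The file names, bitrate, and key handling are all illustrative assumptions, not the authors' pipeline.

```python
# Rough sketch of the transcode-then-encrypt idea for a CCTV stream segment.
# AES-CTR is a stand-in for the paper's ARIA block cipher.
import os
import subprocess
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def transcode(src: str, dst: str, bitrate: str = "2M") -> None:
    """Re-encode a segment to HEVC at a target bitrate using FFmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx265", "-b:v", bitrate, dst],
        check=True,
    )


def encrypt_segment(path: str, key: bytes, nonce: bytes) -> bytes:
    """Encrypt a transcoded segment before it leaves the camera side."""
    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))  # ARIA stand-in
    encryptor = cipher.encryptor()
    with open(path, "rb") as f:
        return encryptor.update(f.read()) + encryptor.finalize()


if __name__ == "__main__":
    transcode("camera_segment.mp4", "segment_hevc.mp4")  # hypothetical file names
    ciphertext = encrypt_segment("segment_hevc.mp4", os.urandom(16), os.urandom(16))
    print("encrypted bytes:", len(ciphertext))
```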

