A Shared Memory Cache Layer across Multiple Executors in Apache Spark

Author(s):  
Wei Rang ◽  
Donglin Yang ◽  
Dazhao Cheng
Author(s):  
Kun Liu ◽  
Jan Boehm ◽  
Christian Alis

Change detection has long been a challenging problem, despite extensive research in fields such as remote sensing, photogrammetry, computer vision, and robotics. In this paper, we combine voxel grids and Apache Spark to propose an efficient method that addresses the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of equal size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform that provides fault tolerance and in-memory caching. These features significantly enhance performance and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
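The core idea of voxel-grid change detection can be sketched in a few lines: bin each point cloud into fixed-size voxels and report voxels occupied in only one of the two clouds. This is a minimal single-machine sketch; the function names and voxel size are illustrative assumptions, and the Spark distribution of this computation described in the abstract is omitted here.

```python
# Hedged sketch of voxel-grid change detection between two point clouds.
# Names (voxel_key, changed_voxels) and the voxel size are illustrative,
# not the paper's implementation; Spark parallelization is omitted.
from math import floor

def voxel_key(point, voxel_size):
    """Map a 3D point to the integer index of the voxel containing it."""
    x, y, z = point
    return (floor(x / voxel_size),
            floor(y / voxel_size),
            floor(z / voxel_size))

def changed_voxels(cloud_a, cloud_b, voxel_size=1.0):
    """Voxels occupied in exactly one cloud, i.e. candidate changes."""
    occ_a = {voxel_key(p, voxel_size) for p in cloud_a}
    occ_b = {voxel_key(p, voxel_size) for p in cloud_b}
    return occ_a ^ occ_b  # symmetric difference of occupancy sets

cloud_a = [(0.2, 0.1, 0.3), (2.5, 0.4, 0.1)]
cloud_b = [(0.3, 0.2, 0.2)]  # the second structure disappeared
print(sorted(changed_voxels(cloud_a, cloud_b)))  # → [(2, 0, 0)]
```

Because each point maps to its voxel independently, the binning step is embarrassingly parallel, which is what makes the representation a good fit for a platform like Spark.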


Author(s):  
E Wes Bethel ◽  
Mark Howison

Given the computing industry trend of increasing processing capacity by adding more cores to a chip, the focus of this work is tuning the performance of a staple visualization algorithm, raycasting volume rendering, for shared-memory parallelism on multi-core CPUs and many-core GPUs. Our approach is to vary tunable algorithmic settings, along with known algorithmic optimizations and two different memory layouts, and measure performance in terms of absolute runtime and L2 memory cache misses. Our results indicate there is a wide variation in runtime performance on all platforms, as much as 254% for the tunable parameters we test on multi-core CPUs and 265% on many-core GPUs, and the optimal configurations vary across platforms, often in a non-obvious way. For example, our results indicate the optimal configurations on the GPU occur at a crossover point between those that maintain good cache utilization and those that saturate computational throughput. This result is likely to be extremely difficult to predict with an empirical performance model for this particular algorithm because it has an unstructured memory access pattern that varies locally for individual rays and globally for the selected viewpoint. Our results also show that optimal parameters on modern architectures are markedly different from those in previous studies run on older architectures. In addition, given the dramatic performance variation across platforms for both optimal algorithm settings and performance results, there is a clear benefit for production visualization and analysis codes to adopt a strategy for performance optimization through auto-tuning. These benefits will likely become more pronounced in the future as the number of cores per chip and the cost of moving data through the memory hierarchy both increase.
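The auto-tuning strategy the authors advocate amounts to sweeping the space of tunable settings, benchmarking each configuration, and keeping the fastest. The sketch below illustrates that loop under stated assumptions: the parameter names (tile width, rays per block) and the stand-in workload are hypothetical, not the paper's actual volume renderer.

```python
# Minimal auto-tuning sketch: exhaustively benchmark every combination of
# tunable settings and keep the fastest. The parameters and workload are
# illustrative stand-ins, not the paper's raycasting kernel.
import itertools
import timeit

def render(tile_width, rays_per_block):
    """Stand-in workload whose cost depends on the configuration."""
    total = 0.0
    for i in range(tile_width * rays_per_block):
        total += i * 0.5
    return total

def autotune(search_space):
    """Return the configuration with the lowest measured runtime."""
    best_cfg, best_time = None, float("inf")
    for values in itertools.product(*search_space.values()):
        params = dict(zip(search_space.keys(), values))
        elapsed = timeit.timeit(lambda: render(**params), number=5)
        if elapsed < best_time:
            best_cfg, best_time = params, elapsed
    return best_cfg

space = {"tile_width": [8, 16, 32], "rays_per_block": [64, 128]}
print(autotune(space))
```

In practice the benchmark would run on the target platform itself, which is the point of the abstract's argument: the optimal configuration varies across architectures in non-obvious ways, so measuring beats modeling.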


1987 ◽  
Vol 1 (3) ◽  
pp. 26-44 ◽  
Author(s):  
R.E. Benner ◽  
G.R. Montry ◽  
G.G. Weigand ◽  
Iain Duff

2015 ◽  
Vol 75 (1) ◽  
pp. 4-19
Author(s):  
Xiang Shi ◽  
Xiaofei Liao ◽  
Dayang Zheng ◽  
Hai Jin ◽  
Haikun Liu
