A parallel architecture for ray-tracing with an embedded intersection algorithm

Author(s):  
Alexandre S. Nery ◽  
Nadia Nedjah ◽  
Felipe M. G. França ◽  
Lech Jozwiak
2011 ◽  
Vol 70 (2) ◽  
pp. 189-202

Author(s):  
Wang Jun-Feng ◽  
Ding Gang-Yi ◽  
Wang Yi-Ou ◽  
Li Yu-Gang ◽  
Zhang Fu-Quan

Abstract: This paper proposes a parallel-computing analysis model, HPM, and analyzes the CPU–GPU parallel architecture based on this model. On this basis, we study the parallel optimization of the ray-tracing algorithm on the CPU–GPU architecture, exploiting parallelism at three levels: between nodes, among the multi-core CPUs within each node, and within the GPU. This improves the computation speed of the ray-tracing algorithm. The paper uses spatial subdivision to partition the ground data, constructs a KD-tree organization over it, and improves the KD-tree construction method to reduce the algorithm's time complexity. The ground data is distributed evenly across the computing nodes, and each node performs parallel optimization with a combined CPU–GPU approach. This method dramatically improves rendering speed while preserving image quality and provides an effective means of quickly generating photorealistic images.
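
The abstract does not spell out how the improved KD-tree construction works, so the following C++ sketch only illustrates the general idea: a median-split build that uses std::nth_element instead of fully sorting at every node, which keeps each tree level linear and the whole construction O(n log n). The Point3 record and the toy data are assumptions standing in for the paper's ground data, not the authors' actual structures.

    #include <algorithm>
    #include <cstddef>
    #include <array>
    #include <memory>
    #include <vector>

    // Minimal 3-D point standing in for one record of the paper's
    // "ground data"; the actual data layout is not given in the abstract.
    struct Point3 {
        std::array<float, 3> c;
    };

    // One KD-tree node: the split axis, the index of the median point,
    // and the two child subtrees.
    struct KdNode {
        int axis = -1;
        std::size_t point = 0;            // index into the point array
        std::unique_ptr<KdNode> left, right;
    };

    // Recursive median-split construction over idx[lo, hi).
    // std::nth_element partitions around the median in linear time, so
    // each tree level costs O(n) and the whole build is O(n log n) --
    // one standard way to lower KD-tree construction cost compared
    // with sorting the points at every node.
    std::unique_ptr<KdNode> build(std::vector<std::size_t>& idx,
                                  const std::vector<Point3>& pts,
                                  std::size_t lo, std::size_t hi, int depth)
    {
        if (lo >= hi) return nullptr;
        const int axis = depth % 3;       // cycle the split axis: x, y, z, x, ...
        const std::size_t mid = lo + (hi - lo) / 2;
        std::nth_element(idx.begin() + lo, idx.begin() + mid, idx.begin() + hi,
                         [&](std::size_t a, std::size_t b) {
                             return pts[a].c[axis] < pts[b].c[axis];
                         });
        auto node = std::make_unique<KdNode>();
        node->axis  = axis;
        node->point = idx[mid];
        node->left  = build(idx, pts, lo, mid, depth + 1);
        node->right = build(idx, pts, mid + 1, hi, depth + 1);
        return node;
    }

    int main()
    {
        // Toy "ground data": in the paper's pipeline this would be one
        // computing node's share of the evenly distributed scene data.
        std::vector<Point3> pts = {
            {{2.f, 3.f, 1.f}}, {{5.f, 4.f, 2.f}}, {{9.f, 6.f, 3.f}},
            {{4.f, 7.f, 5.f}}, {{8.f, 1.f, 4.f}}, {{7.f, 2.f, 6.f}},
        };
        std::vector<std::size_t> idx(pts.size());
        for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
        auto root = build(idx, pts, 0, idx.size(), 0);
        return root ? 0 : 1;
    }

In the workflow the abstract describes, each computing node would presumably run such a build over its share of the partitioned ground data before CPU–GPU ray traversal begins.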


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have raised concern among scientists and the general public about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten to run on the next-generation, highly parallel architectures. Scientists not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

