Physically Based Rendering
Recently Published Documents

Total documents: 52 (last five years: 17)
H-index: 8 (last five years: 1)

2021 ◽  
Vol 40 (5) ◽  
pp. 1-18
Author(s):  
Julien Philip ◽  
Sébastien Morgenthaler ◽  
Michaël Gharbi ◽  
George Drettakis

We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method allows illumination to be changed synthetically while coherently rendering cast shadows and complex glossy materials. We start with multiple images of the scene and a three-dimensional mesh obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is well explained as the sum of a view-independent diffuse component and a view-dependent glossy term concentrated around the mirror-reflection direction. We design a convolutional network around input feature maps that facilitate learning of an implicit representation of scene materials and illumination, enabling both relighting and free-viewpoint navigation. We generate these input maps by exploiting the best elements of both image-based and physically based rendering. We sample the input views to estimate diffuse scene irradiance, and compute the new illumination caused by user-specified light sources using path tracing. To facilitate the network's understanding of materials and to synthesize plausible glossy reflections, we reproject the views and compute mirror images. We train the network on a synthetic dataset where each scene is also reconstructed with MVS. We show results of our algorithm relighting real indoor scenes and performing free-viewpoint navigation with complex and realistic glossy reflections, which have so far remained out of reach for view-synthesis techniques.
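The glossy term described in the abstract is concentrated around the mirror-reflection direction. As a minimal sketch (the function name and inputs are illustrative, not from the paper), that direction follows from the standard reflection formula r = d - 2(d·n)n:

```python
import numpy as np

def mirror_direction(view_dir, normal):
    """Reflect an incoming view direction about a surface normal.

    Implements r = d - 2 (d . n) n, the direction around which a
    view-dependent glossy lobe is centred.
    """
    d = np.asarray(view_dir, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)          # normals must be unit length
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling down-right hits a floor facing up and bounces up-right.
r = mirror_direction([1.0, -1.0, 0.0], [0.0, 1.0, 0.0])
```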


2021 ◽  
Author(s):  
George Psomathianos ◽  
Nikitas Sourdakos ◽  
Konstantinos Moustakas

2021 ◽  
Vol 36 (04) ◽  
pp. 791-794
Author(s):  
Ljupka Radosavljević

The topic of this research concerns the possibilities for, and the analysis of, generating PBR (Physically Based Rendering) materials for stone.
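A PBR stone material of the kind this thesis concerns is typically described by a small metallic-roughness parameter set (here following the glTF 2.0 convention; the field names and sample values are illustrative, not taken from the thesis):

```python
from dataclasses import dataclass

@dataclass
class PBRMaterial:
    """Minimal metallic-roughness material description (glTF 2.0 style).

    In production these scalars are usually per-texel texture maps;
    single values suffice to sketch the parameter space.
    """
    base_color: tuple   # linear-space RGB albedo
    roughness: float    # 0 = mirror-smooth, 1 = fully rough
    metallic: float     # 0 = dielectric, 1 = metal

# Stone is a dielectric, so metallic stays 0 and roughness is high.
stone = PBRMaterial(base_color=(0.55, 0.53, 0.50), roughness=0.85, metallic=0.0)
```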


2021 ◽  
Vol 102 ◽  
pp. 04015
Author(s):  
Rion Sato ◽  
Michael Cohen

We introduce a way of implementing physically based renderers that can switch rendering methods within a single ray-tracing library. Various physically based rendering (PBR) methods can generate beautiful images that closely match human perception of the real world. However, to verify whether an implementation correctly obeys the mathematical models of PBR, corresponding pairs of pixels in images generated by different rendering methods must be compared. For such a comparison, the result images must show the same scene, at the same resolution, and from the same camera angle. We first explain the fundamental theory of PBR, then present an overview of Embree, a ray-tracing library developed by Intel, as the basis of a rendering-switchable implementation. Finally, we demonstrate result images computed by a renderer we developed. The renderer can switch rendering methods and can be extended with implementations of further methods.
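The rendering-switchable design the abstract describes can be sketched as a strategy pattern: the renderer fixes the scene, camera, and resolution, and only the integration method is swapped, so images from different methods remain comparable pixel by pixel. Everything below is a toy stand-in (the constant "radiance" values and method names are assumptions, not the paper's code):

```python
class Renderer:
    """Fixes resolution and pixel grid; the shading method is pluggable."""

    def __init__(self, width, height, method):
        self.width, self.height = width, height
        self.method = method  # callable: (x, y) -> radiance estimate

    def render(self):
        return [[self.method(x, y) for x in range(self.width)]
                for y in range(self.height)]

def path_tracer(x, y):        # stand-in for a full path-tracing estimator
    return 0.5

def direct_lighting(x, y):    # stand-in for a direct-illumination-only method
    return 0.3

# Same scene stand-in, same resolution, same camera: per-pixel
# differences between methods are therefore well defined.
img_a = Renderer(4, 4, path_tracer).render()
img_b = Renderer(4, 4, direct_lighting).render()
diff = max(abs(a - b) for ra, rb in zip(img_a, img_b) for a, b in zip(ra, rb))
```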


2021 ◽  
Author(s):  
Gonçalo Soares ◽  
João Madeiras Pereira

Real-time physically based rendering has long been regarded as the holy grail of computer graphics. With the introduction of Nvidia's family of RTX-enabled GPUs, light transport simulation under real-time constraints started to look like a reality. This paper presents Lift, an educational framework written in C++ that explores the RTX hardware pipeline using the low-level Vulkan API and its Ray Tracing extension, recently made available by the Khronos Group. Furthermore, to obtain rendered images with low variance, we integrated the AI-based denoiser available in Nvidia's OptiX framework. Lift's development arose primarily in the context of the graduate 3D Programming course taught at Instituto Superior Técnico and of Master's theses focused on real-time ray tracing, and it provides the foundations for laboratory assignments and project development. The platform aims to make it easier for students to learn and to develop their own physically based rendering approaches by programming the shaders of the RT pipeline, and to compare them with the built-in progressive unidirectional and bidirectional path tracers. The GUI allows a user to specify camera settings and navigation speed, to select the input scene as well as the rendering method, to define the number of samples per pixel and the path length, and to denoise the generated image either every frame or only the final frame. Statistics related to timings, image resolution, and the total number of accumulated samples are provided as well. Such a platform demonstrates that physically accurate images can nowadays be rendered in real time under different lighting conditions, and shows how well a denoiser can reconstruct images rendered with just one sample per pixel.
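The progressive path tracers the abstract mentions accumulate one (or a few) samples per pixel each frame and display the running mean, so variance falls as frames accumulate. A minimal sketch of that accumulation (the sample values are illustrative; Lift itself does this per pixel on the GPU):

```python
def accumulate(prev_mean, n_prev, new_sample):
    """Incremental mean update: mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n.

    Avoids storing all past samples; only the running mean and the
    sample count per pixel are kept between frames.
    """
    n = n_prev + 1
    return prev_mean + (new_sample - prev_mean) / n, n

# Four frames' worth of noisy radiance estimates for one pixel.
mean, n = 0.0, 0
for sample in [0.2, 0.6, 0.4, 0.8]:
    mean, n = accumulate(mean, n, sample)
```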


2020 ◽  
Vol 32 (1) ◽  
Author(s):  
Van Nhan Nguyen ◽  
Robert Jenssen ◽  
Davide Roverso

Abstract In unmanned aerial vehicle (UAV) flights, power lines are considered one of the most threatening hazards and one of the most difficult obstacles to avoid. In recent years, many vision-based techniques have been proposed to detect power lines to facilitate self-flying UAVs and automatic obstacle avoidance. However, most of the proposed methods are based on a common three-step approach: (i) edge detection, (ii) the Hough transform, and (iii) spurious line elimination based on power line constraints. These approaches are not only slow and inaccurate but also require a huge amount of effort in post-processing to distinguish between power lines and spurious lines. In this paper, we introduce LS-Net, a fast single-shot line-segment detector, and apply it to power line detection. The LS-Net is by design fully convolutional, and it consists of three modules: (i) a fully convolutional feature extractor, (ii) a classifier, and (iii) a line segment regressor. Due to the unavailability of large datasets with annotations of power lines, we render synthetic images of power lines using the physically based rendering approach and propose a series of effective data augmentation techniques to generate more training data. With a customized version of the VGG-16 network as the backbone, the proposed approach outperforms existing state-of-the-art approaches. In addition, the LS-Net can detect power lines in near real time. This suggests that our proposed approach has a promising role in automatic obstacle avoidance and as a valuable component of self-flying UAVs, especially for autonomous power line inspection.
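One subtlety of augmenting line-segment data, as the abstract's augmentation techniques must handle, is keeping geometric annotations consistent with image transforms. A hedged sketch of a single such augmentation (a horizontal flip; the paper's actual augmentation set differs):

```python
def hflip_segment(segment, width):
    """Mirror a line segment ((x1, y1), (x2, y2)) across the vertical
    axis of an image that is `width` pixels wide, so the annotation
    still matches the horizontally flipped image."""
    (x1, y1), (x2, y2) = segment
    return ((width - 1 - x1, y1), (width - 1 - x2, y2))

# A power-line segment annotated on a 100-pixel-wide synthetic image.
seg = ((10, 5), (90, 60))
flipped = hflip_segment(seg, width=100)
```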


Author(s):  
A. El Saer ◽  
C. Stentoumis ◽  
I. Kalisperakis ◽  
P. Nomikou

Abstract. In this work, we present a methodology for precise 3D modelling and multi-source geospatial data blending for the purposes of Virtual Reality immersive and interactive experiences. We evaluate it on the volcanic island of Santorini due to its formidable geological terrain and the interest it poses for scientific and touristic purposes. The methodology developed here consists of three main steps. Initially, bathymetric and SRTM data are scaled down to match the smallest resolution of our dataset (LIDAR). Afterwards, the resulting elevations are combined based on the slope of the relief, while considering a buffer area to enforce a smoother terrain. As a final step, the orthophotos are combined with the estimated Digital Terrain Model by applying a nearest-neighbour matching scheme, leading to the final terrain background. In addition to this, both onshore and offshore points of interest were modelled via image-based 3D reconstruction and added to the virtual scene. The overall volume of geospatial data that needs to be visualized in applications demanding photo-textured hyper-realistic models poses a significant challenge. The 3D models are therefore treated via a mesh optimization workflow, suitable for efficient and fast visualization in virtual reality engines, through mesh simplification, physically based rendering texture map baking, and levels of detail.
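The elevation-combination step above can be sketched as a weighted cross-fade between two co-registered grids, with the weight ramping across a buffer zone so no seam appears. The weights and sample values below are assumptions for illustration, not the paper's actual parameters:

```python
import numpy as np

def blend_dems(dem_a, dem_b, weight):
    """Cross-fade two co-registered elevation grids.

    weight = 1 keeps dem_a, weight = 0 keeps dem_b; intermediate
    values blend smoothly inside the buffer area.
    """
    w = np.clip(weight, 0.0, 1.0)
    return w * dem_a + (1.0 - w) * dem_b

a = np.full((2, 3), 100.0)              # e.g. high-resolution LIDAR elevations
b = np.full((2, 3), 80.0)               # e.g. resampled SRTM/bathymetric elevations
w = np.array([[1.0, 0.5, 0.0]] * 2)     # linear ramp across the buffer columns
blended = blend_dems(a, b, w)
```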


Author(s):  
A. El Saer ◽  
C. Stentoumis ◽  
I. Kalisperakis ◽  
L. Grammatikopoulos ◽  
P. Nomikou ◽  
...  

Abstract. In this contribution, we propose a versatile image-based methodology for 3D reconstructing underwater scenes of high fidelity and integrating them into a virtual reality environment. Typically, underwater images suffer from colour degradation (blueish images) due to the propagation of light through water, which is a more absorbing medium than air, as well as the scattering of light on suspended particles. Other factors, such as artificial lights, also diminish the quality of the images and, thus, the quality of the image-based 3D reconstruction. Moreover, degraded images have a direct impact on the user's perception of the virtual environment, due to geometric and visual degenerations. Here, it is argued that these can be mitigated by image pre-processing algorithms and specialized filters. The impact of different filtering techniques on the images is evaluated, in order to eliminate colour degradation and mismatches in the image sequences. The methodology in this work consists of five sequential pre-processing steps: saturation enhancement, haze reduction, and Rayleigh distribution adaptation to de-haze the images; global histogram matching to minimize differences among images of the dataset; and image sharpening to strengthen the edges of the scene. The 3D reconstruction of the models is based on open-source structure-from-motion software. The models are optimized for virtual reality through mesh simplification, physically based rendering texture map baking, and levels of detail. The results of the proposed methodology are qualitatively evaluated on image datasets captured on the seabed of the island of Santorini, Greece, using a ROV platform.
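Two of the five pre-processing steps above can be sketched with common textbook forms (these are simplified stand-ins, not necessarily the exact filters the paper evaluates): pushing channels away from their per-pixel gray mean to enhance saturation, and a crude global match that shifts the image mean toward a dataset-wide reference.

```python
import numpy as np

def enhance_saturation(img, k=1.5):
    """Push each pixel's channels away from their gray mean,
    increasing the colourfulness of washed-out blueish frames."""
    gray = img.mean(axis=-1, keepdims=True)
    return np.clip(gray + k * (img - gray), 0.0, 1.0)

def match_global_mean(img, ref_mean):
    """Crude global histogram matching: shift brightness so the image
    mean equals a reference mean shared across the image sequence."""
    return np.clip(img + (ref_mean - img.mean()), 0.0, 1.0)

# A blueish underwater pixel: the blue channel dominates.
img = np.zeros((1, 1, 3))
img[..., :] = [0.2, 0.3, 0.6]
out = match_global_mean(enhance_saturation(img), ref_mean=0.4)
```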

