view synthesis
Recently Published Documents

TOTAL DOCUMENTS: 748 (five years: 183)
H-INDEX: 32 (five years: 8)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 470
Author(s):  
Wenxin Zhang ◽  
Yumei Wang ◽  
Yu Liu

Generating high-quality panoramas is a key element in promoting the development of VR content. Panoramas produced by traditional image stitching algorithms suffer from limitations such as artifacts and irregular shapes. We consider this problem from the perspective of view synthesis and propose a view synthesis approach based on optical flow to generate high-quality omnidirectional panoramas. In the first stage, we present a novel optical flow estimation algorithm that establishes a dense correspondence between the overlapping areas of the left and right views; the result can be regarded as an approximation of the scene parallax. In the second stage, reconstructed versions of the left and right views are generated by warping pixels under the guidance of the optical flow, and an alpha blending algorithm synthesizes the final novel view. Experimental results demonstrate that our approach yields a better subjective experience than the comparison algorithms, without cracks or artifacts. In addition to the commonly used image quality metrics PSNR and SSIM, we also compute MP-PSNR, which provides accurate quality predictions for synthesized views. Our approach achieves improvements of about 1 dB in MP-PSNR and PSNR, and about 25% in SSIM.
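As a rough illustration of the second stage, the sketch below warps each source view toward the target position under a dense flow field and alpha-blends the two reconstructions. The function names, nearest-neighbor sampling, and blending weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def warp_with_flow(img, flow, t):
    """Warp an image toward the novel view by a fraction t of the flow.

    img  : (H, W, 3) float array, source view
    flow : (H, W, 2) float array, dense correspondence (dx, dy) to the other view
    t    : position in [0, 1] along the baseline (assumed parameterization)
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward warping with nearest-neighbor sampling (a simplification;
    # a real implementation would interpolate and handle disocclusions).
    src_x = np.clip(xs + t * flow[..., 0], 0, w - 1).astype(np.int64)
    src_y = np.clip(ys + t * flow[..., 1], 0, h - 1).astype(np.int64)
    return img[src_y, src_x]

def synthesize_view(left, right, flow_lr, flow_rl, t=0.5):
    """Alpha-blend the two warped reconstructions into the novel view."""
    warped_left = warp_with_flow(left, flow_lr, t)
    warped_right = warp_with_flow(right, flow_rl, 1.0 - t)
    alpha = 1.0 - t  # weight the nearer view more strongly (assumed scheme)
    return alpha * warped_left + (1.0 - alpha) * warped_right
```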


IEEE Access ◽  
2022 ◽  
pp. 1-1
Author(s):  
M. Shahzeb Khan Gul ◽  
M. Umair Mukati ◽  
Michel Bätz ◽  
Soren Forchhammer ◽  
Joachim Keinert

2022 ◽  
Vol 65 (1) ◽  
pp. 99-106
Author(s):  
Ben Mildenhall ◽  
Pratul P. Srinivasan ◽  
Matthew Tancik ◽  
Jonathan T. Barron ◽  
Ravi Ramamoorthi ◽  
...  

We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully connected (nonconvolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis.
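A minimal PyTorch sketch of the per-ray rendering step described above: an assumed network `mlp` maps a 5D coordinate to color and density, and the outputs are composited with the standard volume rendering weights. The sampling scheme, tensor shapes, and parameter names are placeholders, not the authors' released code.

```python
import torch

def render_ray(mlp, origin, direction, t_near, t_far, n_samples=64):
    """Composite colors along one camera ray with classic volume rendering.

    mlp       : assumed callable mapping (x, y, z, theta, phi) -> (r, g, b, sigma)
    origin    : (3,) ray origin, direction : (3,) unit viewing direction
    """
    # Sample depths uniformly along the ray (stratified sampling omitted).
    t_vals = torch.linspace(t_near, t_far, n_samples)
    points = origin + t_vals[:, None] * direction            # (n_samples, 3)
    dirs = direction.expand(n_samples, 3)
    # Spherical viewing direction (theta, phi) from the unit direction vector.
    theta = torch.acos(dirs[:, 2:3])
    phi = torch.atan2(dirs[:, 1:2], dirs[:, 0:1])
    rgb_sigma = mlp(torch.cat([points, theta, phi], dim=-1))  # (n_samples, 4)
    rgb, sigma = rgb_sigma[:, :3], rgb_sigma[:, 3]

    # Alpha compositing: alpha_i = 1 - exp(-sigma_i * delta_i).
    deltas = torch.cat([t_vals[1:] - t_vals[:-1], torch.tensor([1e10])])
    alpha = 1.0 - torch.exp(-torch.relu(sigma) * deltas)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j).
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)                # rendered pixel color
```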


2021 ◽  
Author(s):  
Pulkit Gera ◽  
Aakash K T ◽  
Dhawal Sirikonda ◽  
P. J. Narayanan

2021 ◽  
Author(s):  
Zuria Bauer ◽  
Zuoyue Li ◽  
Sergio Orts-Escolano ◽  
Miguel Cazorla ◽  
Marc Pollefeys ◽  
...  

Author(s):  
Wei Gao ◽  
Linjie Zhou ◽  
Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task because of the large number of pixels involved and the intensive computations required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces computation by using compact-resolution (CR) representation and super-resolution (SR) techniques, as well as lightweight neural networks. The proposed architecture consists of three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are jointly trained with an integrated loss combining the CR, VS, and SR objectives. Moreover, exploiting the redundancy of deep neural networks, we apply an efficient lightweight strategy that prunes filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while maintaining competitive image quality.
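The cascade and its joint training objective can be pictured with the hypothetical sketch below; the individual CR/VS/SR architectures, the loss weights, and the use of an L1 reconstruction loss are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class CascadedViewSynthesis(nn.Module):
    """Illustrative three-stage pipeline: compact-resolution (CR) encoding,
    view synthesis (VS) at reduced resolution, and super-resolution (SR)
    back to full resolution. The actual network designs are not specified here."""

    def __init__(self, cr_net, vs_net, sr_net):
        super().__init__()
        self.cr_net, self.vs_net, self.sr_net = cr_net, vs_net, sr_net

    def forward(self, input_views):
        compact = self.cr_net(input_views)   # down-scaled compact views
        novel_lr = self.vs_net(compact)      # new views at low resolution
        novel_hr = self.sr_net(novel_lr)     # full-resolution reconstruction
        return compact, novel_lr, novel_hr

def joint_loss(compact, novel_lr, novel_hr, gt_compact, gt_lr, gt_hr,
               w_cr=1.0, w_vs=1.0, w_sr=1.0):
    """Weighted sum of per-stage reconstruction losses (weights are assumed)."""
    l1 = nn.functional.l1_loss
    return (w_cr * l1(compact, gt_compact)
            + w_vs * l1(novel_lr, gt_lr)
            + w_sr * l1(novel_hr, gt_hr))
```

Filter pruning for inference acceleration would then be applied to the trained networks as a separate step; it is not shown in this sketch.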


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6680
Author(s):  
Min-Jae Lee ◽  
Gi-Mun Um ◽  
Joungil Yun ◽  
Won-Sik Cheong ◽  
Soon-Yong Park

In this paper, we propose a multi-view stereo matching method, EnSoft3D (Enhanced Soft 3D Reconstruction), to obtain dense and high-quality depth images. Multi-view stereo is an active research area with a wide range of applications. Motivated by the Soft3D reconstruction method, we introduce a new multi-view stereo matching scheme. The original Soft3D method was introduced for novel view synthesis; it also reconstructs occlusion-aware depth by integrating the matching costs of Plane Sweep Stereo (PSS) with soft visibility volumes. However, Soft3D has an inherent limitation in that erroneous PSS matching costs are never updated. To overcome this limitation, the proposed scheme introduces an update process for the PSS matching costs: an inverse consensus kernel is derived from the object surface consensus volume, and the PSS matching costs are iteratively updated using this kernel. The proposed EnSoft3D method reconstructs highly accurate 3D depth images because both the multi-view matching costs and the soft visibility are updated simultaneously. The performance of the proposed method is evaluated on structured and unstructured benchmark datasets. Disparity error is measured to verify 3D reconstruction accuracy, and both PSNR and SSIM are measured to verify the simultaneous improvement of view synthesis.
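A toy sketch of the described cost update follows, with the inverse consensus kernel modeled simply as one minus the consensus volume; the paper's exact kernel derivation and the recomputation of soft visibility inside each iteration are not reproduced here, and all names and constants are assumptions.

```python
import numpy as np

def iterative_pss_update(pss_cost, consensus, n_iters=3, strength=0.5):
    """Illustrative iterative update of plane-sweep matching costs.

    pss_cost  : (H, W, D) matching cost volume from plane sweep stereo
    consensus : (H, W, D) surface consensus volume in [0, 1]
                (agreement across views / soft visibility)
    """
    cost = pss_cost.copy()
    for _ in range(n_iters):
        inverse_kernel = 1.0 - consensus
        # Raise the cost where views disagree, lower it where they agree.
        cost = cost * (1.0 + strength * (inverse_kernel - 0.5))
        # In the actual method the soft visibility (consensus) would be
        # recomputed from the updated costs here; this sketch keeps it fixed.
        consensus = np.clip(consensus, 0.0, 1.0)
    depth_index = np.argmin(cost, axis=-1)  # winner-take-all depth per pixel
    return cost, depth_index
```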


2021 ◽  
Author(s):  
M. Shahzeb Khan Gul ◽  
M. Umair Mukati ◽  
Michel Bätz ◽  
Soren Forchhammer ◽  
Joachim Keinert
