A geometric space-view redundancy descriptor for light fields: Predicting the compression potential of the JPEG Pleno light field datasets

Author(s):
Marcio P. Pereira
Gustavo Alves
Carla L. Pagliari
Murilo B. de Carvalho
Eduardo A. B. da Silva
...
Author(s):
Shuyao Zhou
Tianqian Zhu
Kanle Shi
Yazi Li
Wen Zheng
...

Light fields are vector functions that map the geometry of light rays to their corresponding plenoptic attributes. They describe the holographic information of scenes by representing the amount of light flowing in every direction through every point in space. The physical concept of the light field was first proposed in 1936, and light fields are becoming increasingly important in computer graphics, especially with the fast growth of computing capacity and network bandwidth. In this article, light field imaging is reviewed from the following aspects, with an emphasis on the achievements of the past five years: (1) depth estimation, (2) content editing, (3) image quality, (4) scene reconstruction and view synthesis, and (5) industrial products, since light field technologies also intersect with industrial applications. State-of-the-art research has focused on light field acquisition, manipulation, and display, and has extended from the laboratory to industry. Given these achievements and challenges, in the near future light field applications could offer greater portability, accessibility, compatibility, and ability to visualize the world.


2020
Vol 2020 (28)
pp. 293-298
Author(s):
Cehao Yu
Sylvia Pont

In complex scenes, the light reflected by surfaces causes secondary illumination, which contributes significantly to the actual light in a space (the "light field"). Secondary illumination depends on the primary illumination, geometry, and materials of a space. Hence, primary and secondary illumination can have non-identical spectral properties and render object colors differently. Lighting technology and research predominantly rely on the color rendering properties of the illuminant; little attention has been given to the impact of secondary illumination on the "effective color rendering" within light fields. Here we measure the primary and secondary illumination for a simple spatial geometry and demonstrate empirically their differing "effective color rendering" properties. We found that color distortions due to secondary illumination from chromatic furnishing materials led to systematic and significant color shifts, and to major differences between the lamp-specified color rendition and temperature and the actual, light-based "effective color rendering" and "effective color temperature". On the basis of these results, we propose a methodological switch from assessing only the color rendering and temperature of illuminants to also assessing the "effective color rendering and temperature" in context.


2020
Author(s):  
Yuan Gao

This thesis discusses approaches and techniques to convert Sparsely-Sampled Light Fields (SSLFs) into Densely-Sampled Light Fields (DSLFs), which can be used for visualization on 3DTV and Virtual Reality (VR) devices. As an example, a movable 1D large-scale light field acquisition system for capturing SSLFs in real-world environments is evaluated. This system consists of 24 sparsely placed RGB cameras and two Kinect V2 sensors. The real-world SSLF data captured with this setup can be leveraged to reconstruct real-world DSLFs. To this end, three challenging problems need to be solved for this system: (i) how to estimate the rigid transformation from the coordinate system of a Kinect V2 to the coordinate system of an RGB camera; (ii) how to register the two Kinect V2 sensors with a large displacement; (iii) how to reconstruct a DSLF from an SSLF with moderate and large disparity ranges. To overcome these three challenges, we propose: (i) a novel self-calibration method, which takes advantage of the geometric constraints from the scene and the cameras, for estimating the rigid transformations from the camera coordinate frame of one Kinect V2 to the camera coordinate frames of the 12 nearest RGB cameras; (ii) a novel coarse-to-fine approach for recovering the rigid transformation from the coordinate system of one Kinect to that of the other by means of local color and geometry information; (iii) several novel algorithms, falling into two groups, for reconstructing a DSLF from an input SSLF: novel view synthesis methods, inspired by state-of-the-art video frame interpolation algorithms, and Epipolar-Plane Image (EPI) inpainting methods, inspired by Shearlet Transform (ST)-based DSLF reconstruction approaches.


2020
Vol 6 (12)
pp. 138
Author(s):
Nicola Viganò
Felix Lucka
Ombeline de La Rochefoucauld
Sophia Bethany Coban
Robert van Liere
...

X-ray plenoptic cameras acquire multi-view X-ray transmission images in a single exposure (a light field). Their development is challenging: designs have appeared only recently, and they are still affected by important limitations. Concurrently, the lack of available real X-ray light-field data hinders dedicated algorithmic development. Here, we present a physical emulation setup for rapidly exploring the parameter space of both existing and conceptual camera designs. This will assist and accelerate the design of X-ray plenoptic imaging solutions, and provide a tool for generating unlimited real X-ray plenoptic data. We also demonstrate that X-ray light fields allow for reconstructing sharp spatial structures in three dimensions (3D) from single-shot data.


2021
Author(s):
Yuriy Anisimov
Gerd Reis
Didier Stricker

The ability to create an accurate three-dimensional reconstruction of a captured scene draws attention to the principles of light fields. This paper presents an approach for light field camera calibration and rectification based on pairwise pattern-based parameter extraction. It is followed by a correspondence-based algorithm for refining the camera parameters from arbitrary scenes using a triangulation filter and nonlinear optimization. The effectiveness of our approach is validated on both real and synthetic data.


2015
Vol 1 (1)
Author(s):
Farnoud Kazemzadeh
Alexander Wong

We present a device and method for performing lens-free spectral light-field fusion microscopy at sub-pixel resolutions while taking advantage of the large field-of-view capability. A collection of lasers at different wavelengths is used in pulsed mode and enables the capture of interferometric light-field encodings of a specimen placed near the detector. Numerically fusing the spectral complex light-fields obtained from the encodings produces an image of the specimen at higher resolution and signal-to-noise ratio while suppressing various aberrations and artifacts.


2006
Vol 11 (3)
pp. 263-276
Author(s):
P. Miškinis

A qualitative analysis of hypersound generation, described by the inhomogeneous Burgers equation, is presented for the case of a non-harmonic, arbitrary light field. The qualitative possibility of discrete values appearing for the extinction coefficient of the sound wave, and the possibility of generating the same sound signal with different light fields, are shown.
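In standard notation (the symbols here are generic, not necessarily the paper's own), the inhomogeneous Burgers equation underlying such an analysis reads:

```latex
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = \nu\,\frac{\partial^{2} u}{\partial x^{2}} + f(x, t)
```

where \(u\) is the acoustic velocity field, \(\nu\) the dissipation coefficient related to extinction, and \(f(x, t)\) a source term driven by the light field; a non-harmonic, arbitrary \(f\) is what distinguishes the case considered here.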


2019
Vol 86 (12)
pp. 758-764
Author(s):
Maximilian Schambach
Fernando Puente León

We present a novel method to reconstruct multispectral images of flat objects from spectrally coded light fields as taken by an unfocused light field camera with a spectrally coded microlens array. In this sense, the spectrally coded light field camera is used as a multispectral snapshot imager, acquiring a multispectral datacube in a single exposure. The multispectral image, corresponding to the light field's central view, is reconstructed by shifting the spectrally coded subapertures onto the central view according to their respective disparity. We assume that the disparity of the scene is approximately constant and non-zero. Since the spectral mask is identical for all subapertures, the missing spectral data of the central view are filled in from the shifted spectrally coded subapertures. We investigate the reconstruction quality for different spectral masks and camera parameter sets optimized for real-life applications such as in-line production monitoring, for which the constant-disparity constraint naturally holds. For synthesized reference scenes, using 16 color channels, we achieve a reconstruction PSNR of up to 51 dB.
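The shift-and-fill reconstruction described above can be sketched as follows. The array layout, function name, and integer-shift simplification are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reconstruct_central_view(lf, channel_of_view, disparity, n_channels):
    """Fill a multispectral cube for the central view by shifting each
    spectrally coded subaperture onto it, assuming constant disparity.

    lf              : (S, T, H, W) coded light field, one spectral
                      channel recorded per subaperture view
    channel_of_view : (S, T) int array, spectral channel of each view
    disparity       : constant scene disparity (pixels per view offset)
    n_channels      : number of spectral channels in the mask
    """
    S, T, H, W = lf.shape
    s0, t0 = S // 2, T // 2
    cube = np.zeros((n_channels, H, W))
    count = np.zeros((n_channels, H, W))
    for s in range(S):
        for t in range(T):
            # integer shift of this view onto the central view
            dy = int(round(disparity * (s - s0)))
            dx = int(round(disparity * (t - t0)))
            shifted = np.roll(lf[s, t], (dy, dx), axis=(0, 1))
            c = channel_of_view[s, t]
            cube[c] += shifted
            count[c] += 1
    # average where several views contribute to the same channel
    return cube / np.maximum(count, 1)
```

A practical implementation would use sub-pixel interpolation instead of integer `np.roll` shifts, but the data flow (shift each coded view, deposit it into its spectral channel) is the same.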


Sensors
2019
Vol 19 (12)
pp. 2687
Author(s):
Chiara Galdi
Valeria Chiesa
Christoph Busch
Paulo Lobato Correia
Jean-Luc Dugelay
...

The term “plenoptic” comes from the Latin plenus (“full”) + optic. The plenoptic function is the 7-dimensional function representing the intensity of the light observed from every position and direction in 3-dimensional space. Thanks to the plenoptic function, it is possible to define the direction of every ray in the light-field vector function. Imaging systems are rapidly evolving with the emergence of light-field-capturing devices. Consequently, existing image-processing techniques need to be revisited to match the richer information provided. This article explores the use of light fields for face analysis. This field of research is very recent but already includes several works reporting promising results. Such works deal with the main steps of face analysis and include, but are not limited to: face recognition; face presentation attack detection; facial soft-biometrics classification; and facial landmark detection. This article aims to review the state of the art on light fields for face analysis, identifying future challenges and possible applications.
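For a static, monochromatic scene, the 7-dimensional plenoptic function reduces to a 4D light field, commonly given a two-plane parameterization in which each sample L(u, v, s, t) is a ray through two parallel planes. A minimal sketch of that mapping (the coordinate conventions and function name are illustrative assumptions):

```python
import numpy as np

def ray_from_two_plane(u, v, s, t, plane_sep=1.0):
    """Map a two-plane light field sample L(u, v, s, t) to a 3D ray.

    (u, v) lies on the first plane at z = 0 and (s, t) on a parallel
    plane at z = plane_sep; the ray passes through both points.
    Returns (origin, unit direction).
    """
    origin = np.array([u, v, 0.0])
    direction = np.array([s - u, t - v, plane_sep])
    return origin, direction / np.linalg.norm(direction)
```

This is the sense in which the plenoptic function "defines the direction of every ray": the four plane coordinates fix both a point and a direction in space.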


2016
Vol 28 (1)
pp. 92-100
Author(s):
Francisco C. Calderon
Carlos A. Parra
Cesar L. Niño

The light field (LF) is a function that describes the amount of light traveling in every direction (angular) through every point (spatial) in a scene. An LF can be captured in several ways: using arrays of cameras or, more recently, using a single camera with a special lens that allows the capture of the angular and spatial information of the light rays of a scene. This recent camera implementation gives a different approach to finding the depth of a scene using only a single camera. In order to estimate the depth, we describe a taxonomy similar to the one used in stereo depth-map algorithms: create a cost tensor to represent the matching cost between different disparities; then, using a support-weight window, aggregate the cost tensor; finally, using a winner-takes-all optimization algorithm, search for the best disparities. This paper explains in detail the several changes made to a stereo-like taxonomy so that it can be applied to a light field, and evaluates this algorithm using a recent database that, for the first time, provides several ground-truth light fields with respective ground-truth depth maps.
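The cost tensor / aggregation / winner-takes-all pipeline can be sketched for a single pair of views. A uniform box window stands in for the support-weight window, and the wrap-around shift and names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def box_aggregate(cost, radius):
    """Sum the per-pixel cost over a square support window
    (wrap-around borders for simplicity)."""
    out = np.zeros_like(cost)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(cost, (dy, dx), axis=(0, 1))
    return out

def wta_disparity(center, side, disparities, radius=2):
    """Toy version of the three-stage taxonomy for one view pair:
    1. cost tensor: absolute difference between the central view and
       the side view shifted by each candidate disparity,
    2. aggregation: sum over a square support window,
    3. winner-takes-all: per-pixel argmin over disparities.
    """
    costs = np.stack([
        box_aggregate(np.abs(center - np.roll(side, d, axis=1)), radius)
        for d in disparities
    ])
    return np.asarray(disparities)[np.argmin(costs, axis=0)]
```

In the light field setting, the cost tensor would accumulate evidence from many angular views rather than one side view, and the uniform box would be replaced by adaptive support weights; the overall structure stays the same.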

