Realistic Projection on Casual Dual-Planar Surfaces with Global Illumination Compensation

2016 ◽  
Vol 16 (03) ◽  
pp. 1650014
Author(s):  
Shamsuddin N. Ladha ◽  
Kate Smith-Miles ◽  
Sharat Chandran

Projectors are deployed in increasingly demanding environments. The fidelity of the projected image as seen by a user is compromised when projectors are deployed in dual-planar environments (e.g., the corner of a room or an office cubicle), diminishing the richness of the user experience. There are many reasons for this. The focus of this paper is compensating for the global illumination effects due to inter-reflection of light; in the process, we also correct geometric distortion. Our system is built from off-the-shelf components and is easily deployable without any elaborate setup. In this paper, we describe two complementary methods to compensate for global illumination effects in dual-planar environments. Our methods are based on a systematic adaptation and interpretation of the classical radiosity equation in the image domain. The technique neither assumes nor computes 3D scene geometry, relying instead on implicit inference. The system is calibrated once in an off-line mode; our first method computes corrected images and video in real time, and our second method offers a richer scene with a modest increase in computational time. The corrected images, when projected, have better contrast and are more appealing to the user.
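
The compensation idea can be sketched with a small linear-algebra example. If inter-reflection is modeled by a discrete radiosity equation B = E + ρFB (B the perceived image, E the projected image, F a form-factor matrix, ρ a scalar reflectance), then projecting E = (I − ρF)T makes the perceived image equal a target T. The function names, the single scalar reflectance, and the tiny dense form-factor matrix below are illustrative assumptions, not the paper's actual image-domain calibration:

```python
import numpy as np

def compensate(target, form_factors, rho):
    """Solve the discrete radiosity model B = E + rho * F @ B for the
    emitted image E that makes the perceived image B equal the target
    (pixels flattened to a vector)."""
    n = target.size
    A = np.eye(n) - rho * form_factors   # (I - rho * F)
    E = A @ target.ravel()               # emitted = (I - rho * F) @ target
    return E.reshape(target.shape)

def perceived(emitted, form_factors, rho):
    """Forward model: the image perceived after inter-reflection."""
    n = emitted.size
    A = np.eye(n) - rho * form_factors
    return np.linalg.solve(A, emitted.ravel()).reshape(emitted.shape)
```

A round trip through the forward model recovers the target exactly, which is the essence of compensation: the projector pre-subtracts the light that inter-reflection will add back.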

Vestnik MEI ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 70-75
Author(s):  
Vladimir P. Budak ◽  
Viktor S. Zheltov ◽  
Tatyana V. Meshkova ◽  
Viktor D. Chembaev ◽  
...  

Computer-aided design of lighting systems has remained relevant for more than ten years. The most popular CAD packages for calculating lighting systems, such as DIAlux and Relux, are based on solving the radiosity equation. Using this equation, illuminance distributions can be modeled, from which the standardized quantitative lighting characteristics can be calculated. However, the human eye perceives brightness, not illuminance. The qualitative parameters of lighting are closely linked with the spatial-angular distribution of brightness, whose calculation requires solving the global illumination equation. An analysis of the engineering issues involved in designing lighting systems points to a clear need for so-called view-independent calculation of lighting scenes, i.e., the ability to visually represent a scene from different viewing (camera) positions. We consider an approach based on local estimations of the Monte Carlo method, one of the efficient techniques for solving the global illumination equation, and present an algorithm for view-independent modeling based on the local estimations method. Various algorithms for finding the intersections of rays cast from a light source with the scene under study are investigated.
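
As an illustration of the ray-scene intersection step the article investigates, here is a minimal ray-sphere intersection test. The geometry, function name, and tolerance are assumptions for illustration only; the article's acceleration structures and local-estimation machinery are not reproduced:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive ray parameter t at which the ray
    origin + t * direction hits the sphere, or None on a miss.
    `direction` is assumed to be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c               # discriminant of the quadratic in t
    if disc < 0.0:
        return None                      # ray misses the sphere
    sqrt_d = math.sqrt(disc)
    for t in ((-b - sqrt_d) / 2.0, (-b + sqrt_d) / 2.0):
        if t > 1e-9:                     # nearest hit in front of the origin
            return t
    return None
```

In a full renderer this test runs once per cast ray per primitive, which is exactly why the intersection-search algorithms the article compares dominate the running time.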


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. U65-U76 ◽  
Author(s):  
Tongning Yang ◽  
Jeffrey Shragge ◽  
Paul Sava

Image-domain wavefield tomography is a velocity model building technique that uses seismic images as the input and seismic wavefields as the information carrier. However, the method suffers from an uneven illumination problem when it applies a penalty operator to highlight image inaccuracies due to velocity model error. Uneven illumination, caused by complex geology such as salt or by incomplete data, creates defocusing in common-image gathers even when the migration velocity model is correct. This additional defocusing violates the wavefield tomography assumption that the migrated images are perfectly focused when the model is correct. Therefore, defocusing arising from illumination mixes with defocusing arising from model errors and degrades the model reconstruction. We addressed this problem by incorporating the illumination effects into the penalty operator such that only the defocusing caused by model errors was used for model construction. This was done by first characterizing the illumination defocusing in the gathers through illumination analysis, and then constructing an illumination-based penalty that does not penalize the illumination defocusing. This method improved the robustness and effectiveness of image-domain wavefield tomography applied in areas characterized by poor illumination. Our tests on synthetic examples demonstrated that velocity models were more accurately reconstructed by our method using illumination compensation, leading to a more accurate model and better subsurface images than the conventional approach without illumination compensation.
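
The role of an illumination-aware penalty can be illustrated on a toy common-image gather: a conventional differential-semblance-style penalty |h| penalizes all energy away from zero subsurface offset, while an illumination-based penalty downweights defocusing predicted by illumination analysis. The function, the specific weighting form, and the arrays below are illustrative assumptions, not the authors' operator:

```python
import numpy as np

def penalty_objective(gather, offsets, illum_defocus=None):
    """Differential-semblance-style objective on a common-image gather.
    gather: 2D array (depth x subsurface offset). The conventional
    penalty |h| annihilates energy focused at offset h = 0. If an
    illumination defocusing template (same shape) is supplied, offsets
    where illumination alone creates defocusing are downweighted, so
    that residual defocusing is attributed to model error only."""
    P = np.abs(offsets)[np.newaxis, :] * np.ones_like(gather)
    if illum_defocus is not None:
        P = P / (1.0 + np.abs(illum_defocus))  # de-emphasize illumination defocusing
    r = P * gather
    return 0.5 * np.sum(r * r)
```

A perfectly focused gather yields a zero objective; a defocused gather yields a positive one, and the illumination-aware weighting reduces the contribution of defocusing that the correct model would still produce.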


2006 ◽  
Vol 12 (4) ◽  
pp. 593-615 ◽  
Author(s):  
Gustavo Olague ◽  
Francisco Fernández ◽  
Cynthia B. Pérez ◽  
Evelyne Lutton

We present a new bio-inspired approach to the problem of stereo image matching. The approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction; it has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
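
The epidemic-style propagation can be sketched as a breadth-first "infection" over a pixel grid: a few explicitly matched seed pixels spread their disparity to uninfected neighbours, which are then skipped by the explicit matcher. This is a schematic of the idea only, under assumed 4-connectivity and a trivial "inherit the neighbour's value" rule; the paper's actual transmission rules are richer:

```python
from collections import deque

def infect(shape, seeds):
    """Epidemic-style propagation on an h x w pixel grid. `seeds` is a
    list of ((y, x), disparity) pairs for explicitly matched pixels;
    uninfected 4-neighbours inherit (are 'infected' by) a nearby value
    instead of being matched explicitly, saving matching cost.
    Returns a dense map {pixel: disparity}."""
    h, w = shape
    disparity = dict(seeds)              # explicitly matched pixels
    frontier = deque(seeds)
    while frontier:
        (y, x), d = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in disparity:
                disparity[(ny, nx)] = d  # infected: value propagated, not computed
                frontier.append(((ny, nx), d))
    return disparity
```

The computational saving comes from the ratio of seeds to total pixels: only the seeds pay the full matching cost, while the rest of the map is filled by propagation.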


Author(s):  
E. Paiz-Reyes ◽  
M. Brédif ◽  
S. Christophe

Abstract. Archivists, historians and national mapping agencies, among others, are archiving large datasets of historical photographs. However, the capturing devices used to acquire these images introduced a variety of effects that influenced the quality of the final picture, e.g., geometric distortion, chromatic aberration, and depth-of-field variation. This paper focuses specifically on geometric distortion, for the purpose of co-visualizing historical photos within a 3D model of the photographed scene. The distortion function of an image is ordinarily estimated only on the image domain, by adjusting its parameters to observations of point correspondences; such a function may overfit, oscillate, or not be well defined outside of that domain. The contribution of this work is the description of a distortion model defined on the whole undistorted image plane. We extrapolate the distortion estimated on the image domain and then transfer this distortion information to the view of the 3D scene. This makes it possible to look at the scene through an estimated camera and zoom out to see the context around the original photograph with a well-defined, well-behaved distortion. These findings may be a significant addition to the overall purpose of creating innovative ways to examine and visualize old photographs.
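
For concreteness, a polynomial radial distortion of the kind typically fitted on the image domain can be written as a Brown-style model. This sketch is an assumed generic model for illustration, not the authors' extrapolated formulation; the coefficient names k1, k2 and the principal point (cx, cy) follow common convention:

```python
def distort(x, y, cx, cy, k1, k2):
    """Brown-style radial distortion: maps an undistorted point (x, y)
    to its distorted position around the principal point (cx, cy).
    Fitted on point correspondences inside the image, a polynomial like
    this can oscillate or diverge far outside the calibrated radius."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy                   # squared radius from principal point
    scale = 1.0 + k1 * r2 + k2 * r2 * r2     # even polynomial in r
    return cx + dx * scale, cy + dy * scale
```

The behaviour outside the calibrated domain is exactly the problem the paper addresses: zooming out of the original photograph evaluates the model at radii never constrained by the fit.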


2018 ◽  
Author(s):  
Soumyajit Gupta ◽  
Shachi Mittal ◽  
Andre Kajdacsy-Balla ◽  
Rohit Bhargava ◽  
Chandrajit Bajaj

Abstract. High-dimensional data, for example from infrared spectral imaging, involves an inherent trade-off between acquisition time and the quality of spatial-spectral data. Minimum Noise Fraction (MNF), developed by Green et al. [1], has been extensively studied as an algorithm for noise removal in HSI (Hyper-Spectral Imaging) data. However, there is a speed-accuracy trade-off in manually deciding the relevant bands in the MNF space, which with current methods can amount to a person-month of effort for analyzing an entire TMA (Tissue Micro Array). We propose three approaches, termed 'Fast MNF', 'Approx MNF' and 'Rand MNF', that reduce the computational time of the algorithm and fully automate the band selection process. This automated approach is shown to perform at the same level of reconstruction accuracy as MNF with large speedup factors, allowing the same task to be accomplished in hours. The different approximations of the algorithm show the trade-off between reconstruction accuracy, storage (50×), and runtime speed (60×). We apply the approach to automating the denoising of different tissue histology samples, in which the accuracy of classification (differentiating between the different histologic and pathologic classes) strongly depends on the SNR (signal-to-noise ratio) of the recovered data. We therefore also compare the effect of the proposed denoising algorithms on classification accuracy. Since denoising HSI data is done without any ground truth, we also use a metric that assesses the quality of denoising in the image domain between the noisy and denoised images.
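
The core MNF transform can be sketched as noise whitening followed by an eigendecomposition, ordering components by signal-to-noise ratio. This is a minimal textbook sketch assuming the noise covariance is estimated separately (e.g. from spatial differences); it implements none of the Fast/Approx/Rand accelerations proposed here:

```python
import numpy as np

def mnf(X, noise):
    """Minimum Noise Fraction transform sketch.
    X: (pixels x bands) data; noise: (pixels x bands) noise estimate.
    Maximizing v' Sx v / v' Sn v is solved by whitening with the noise
    covariance and diagonalizing the whitened data covariance.
    Returns the transformed data and eigenvalues, highest SNR first."""
    Sn = np.cov(noise, rowvar=False)
    Sx = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(Sn)            # Sn = L L'
    Li = np.linalg.inv(L)
    M = Li @ Sx @ Li.T                    # symmetric in the noise-whitened space
    evals, U = np.linalg.eigh(M)          # ascending eigenvalues
    order = np.argsort(evals)[::-1]       # high-SNR components first
    V = Li.T @ U[:, order]                # back to band space
    return X @ V, evals[order]
```

Denoising then amounts to keeping only the leading high-SNR components before inverting the transform; band selection is the step the paper automates.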

