Illumination Space: A Feature Space for Radiance Maps

2021 ◽  
Author(s):  
Andrew Chalmers

From red sunsets to blue skies, the natural world contains breathtaking scenery with complex lighting that many computer graphics applications strive to emulate. Achieving such realism is a computationally challenging task and requires proficiency with rendering software. To aid in this process, radiance maps (RMs) are a convenient storage structure for representing the real world. In this form, they can be used to realistically illuminate synthetic objects or to replace backdrops in chroma key compositing. An artist can also freely swap one RM for another that better matches the desired lighting or background conditions. This motivates the need for a large collection of RMs so that an artist has a range of environmental conditions to choose from. Owing to their practicality, databases of RMs have grown continually since their inception. However, a comprehensive collection of RMs is not useful without a method for searching through it. This thesis defines a semantic feature space that allows an artist to interactively browse databases of RMs, with applications to both lighting and backdrop replacement in mind. The features are automatically extracted from the RMs in an offline pre-processing step and are queried in real time for browsing. Illumination features concisely describe the lighting properties of an RM, allowing an artist to find an RM to illuminate their target scene. Texture features describe the visual elements of an RM, allowing an artist to search the database for reflective or backdrop properties for their target scene. Combining the two sets of features allows an artist to search for RMs with desirable illumination effects that match the background environment.
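
To make the offline feature-extraction step concrete, here is a minimal, hypothetical sketch of the kind of illumination descriptors one could pull from an equirectangular radiance map; the thesis does not spell out its exact feature definitions here, so the quantities and names below are illustrative, not the author's.

```python
# Hypothetical illumination descriptors for an equirectangular radiance map
# (H x W x 3 float array of linear HDR values). Illustrative only.
import numpy as np

def illumination_features(rm: np.ndarray) -> dict:
    h, w, _ = rm.shape
    # Solid-angle weight per row: rows near the poles cover less of the sphere.
    theta = (np.arange(h) + 0.5) / h * np.pi            # polar angle per row
    weights = np.sin(theta)[:, None]                    # shape (H, 1)

    luminance = rm @ np.array([0.2126, 0.7152, 0.0722]) # (H, W)
    wsum = (weights * np.ones((h, w))).sum()

    # Overall brightness and a crude "key light" direction.
    mean_lum = float((luminance * weights).sum() / wsum)
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi        # azimuth per column
    dirs = np.stack([np.sin(theta)[:, None] * np.cos(phi)[None, :],
                     np.cos(theta)[:, None] * np.ones((1, w)),
                     np.sin(theta)[:, None] * np.sin(phi)[None, :]], axis=-1)
    lw = (luminance * weights)[..., None]
    dominant_dir = (dirs * lw).sum(axis=(0, 1))
    dominant_dir /= np.linalg.norm(dominant_dir) + 1e-8

    mean_color = (rm * weights[..., None]).sum(axis=(0, 1)) / wsum
    return {"mean_luminance": mean_lum,
            "dominant_direction": dominant_dir,
            "mean_color": mean_color}
```

The solid-angle weighting matters because equirectangular rows near the poles represent far less of the sphere than rows near the equator.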


2021 ◽  
Author(s):  
Lohit Petikam

Art direction is crucial for films and games to maintain a cohesive visual style. This involves carefully controlling visual elements such as lighting and colour to unify the director's vision of a story. With today's computer graphics (CG) technology, 3D animated films and games have become increasingly photorealistic. Unfortunately, art direction using CG tools remains laborious. Since realistic lighting can go against artistic intentions, art direction is almost impossible to preserve in real-time and interactive applications. New live applications such as augmented and mixed reality (AR and MR) now demand automatically art-directed compositing in unpredictably changing real-world lighting.

This thesis addresses the problem of dynamically art-directed 3D compositing into real scenes. Realism is a basic component of art direction, so we begin by optimising scene geometry capture in realistic composites. We find low perceptual thresholds for retaining perceived seamlessness with respect to optimised real-scene fidelity. We then propose new techniques for automatically preserving art-directed appearance and shading for virtual 3D characters. Our methods allow artists to specify their intended appearance for different lighting conditions. Unlike previous work, artists can direct and animate stylistic edits that automatically adapt to changing real-world environments. We achieve this with a new framework for look development and art direction using a novel latent space of varied lighting conditions. For more dynamic stylised lighting, we also propose a new framework for art-directing stylised shadows using novel parametric shadow editing primitives. This is a first approach to preserving art direction and stylisation under varied lighting in AR/MR.
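
As a rough illustration of adapting an artist-specified look to the current lighting condition, the sketch below blends reference looks keyed to nearby points in a lighting descriptor space using inverse-distance weighting. The thesis builds a learned latent space and richer edit primitives, so everything here (names, dimensions, weighting scheme) is a simplified stand-in rather than the proposed framework.

```python
# Simplified stand-in: blend artist-authored look parameters, keyed to a few
# reference lighting conditions (as latent vectors), for a new condition.
import numpy as np

def blend_look(latent_query, latent_refs, look_refs, eps=1e-6):
    """latent_query: (D,), latent_refs: (N, D), look_refs: (N, P)."""
    d = np.linalg.norm(latent_refs - latent_query, axis=1)   # distance to refs
    w = 1.0 / (d + eps)                                      # inverse-distance
    w /= w.sum()
    return w @ look_refs                                     # (P,) blended look

# Example: three reference lighting conditions, each with 4 look parameters.
refs = np.array([[0.0, 0.1], [0.8, 0.2], [0.4, 0.9]])
looks = np.random.rand(3, 4)
print(blend_look(np.array([0.5, 0.5]), refs, looks))
```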


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 33-33 ◽  
Author(s):  
I Davies ◽  
J Howes ◽  
J Huber ◽  
J Nicholls

We report a series of experiments in which spatial judgments of the real world were compared with equivalent judgments of photographs of the same real-world scenes. In experiment 1, subjects judged the angle from the horizontal of natural slopes. Judgments of slope correlated with true slope (r=0.88), but judgments were in general overestimates. Equivalent judgments of slope in photographs again correlated with true slope (r=0.91), but judgments tended to be overestimates for small angles (6°) and underestimates for larger angles (up to 25°). In experiment 2, slope judgments were made under laboratory conditions rather than in the natural world. The slopes, which were viewed monocularly, varied from 5° to 45° and were either plain, textured, included perspective information (a rectangle drawn on the surface), or had both texture and perspective. Judgments were overestimates, but the correlation with true slope was high (r=0.97). Slopes with either texture or perspective were judged more accurately than plain slopes, but combining texture and perspective information conferred no further benefit. Judgment of the angle of the same slopes in photographs produced similar results, but the degree of overestimation (closer to the vertical) was greater than for the real slopes. In experiment 3, subjects either judged the distance of landmarks ranging from 200 m to 5000 m from the observation point or judged the distance to the same landmarks in photographs. In both cases subjects' judgments were well described by a power function with exponents close to one. Although there are large individual differences, subjects' judgments of slope and distance are accurate up to a scale factor, and photographs yield judgments similar to those for real scenes.
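
For readers who want to reproduce the power-function analysis on their own data, a minimal fitting sketch follows. The distances and judgments below are invented placeholders, not the experiment's measurements.

```python
# Fit judged = a * distance**b by linear regression in log-log space.
import numpy as np

true_dist = np.array([200.0, 500.0, 1000.0, 2000.0, 5000.0])   # metres
judged    = np.array([230.0, 540.0, 1100.0, 1900.0, 5200.0])   # placeholder data

b, log_a = np.polyfit(np.log(true_dist), np.log(judged), 1)    # slope = exponent
a = np.exp(log_a)
print(f"judged = {a:.2f} * d^{b:.2f}")                          # exponent near 1
```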


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1443
Author(s):  
Mai Ramadan Ibraheem ◽  
Shaker El-Sappagh ◽  
Tamer Abuhmed ◽  
Mohammed Elmogy

The formation of a malignant neoplasm can be seen as the deterioration of a pre-malignant skin neoplasm in its function and structure. Distinguishing melanocytic skin neoplasms is a challenging task due to their high visual similarity to other types of lesions and the intra-structural variability of melanocytic neoplasms. In addition, different lesion types can appear highly similar, with inhomogeneous features and fuzzy boundaries. The abnormal growth of melanocytic neoplasms takes various forms, from a uniform, typical pigment network to an irregular, atypical shape, which can be described by the border irregularity of the melanocytic lesion image. This work proposes analytical reasoning about this human-observable phenomenon as a high-level feature for determining the neoplasm growth phase, using a novel pixel-based feature space. The pixel-based feature space, which comprises the high-level features together with other color and texture features, is fed into a classifier to distinguish the different melanocytic neoplasm phases. The proposed system was evaluated on the PH2 benchmark dataset of dermoscopic images. It achieved an average accuracy of 95.1% using a support vector machine (SVM) classifier with the radial basis function (RBF) kernel. Furthermore, it reached an average Dice similarity coefficient (DSC) of 95.1%, an area under the curve (AUC) of 96.9%, and a sensitivity of 99%. The results of the proposed system outperform those of other state-of-the-art multiclass techniques.
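
A minimal sketch of the classification stage is given below, assuming per-lesion feature vectors have already been extracted; the arrays are random placeholders rather than PH2 data, and the hyperparameters are illustrative, not the paper's.

```python
# RBF-kernel SVM over per-lesion feature vectors (growth-phase cues plus
# color/texture features). Feature extraction itself is omitted.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 24)          # placeholder feature vectors
y = np.random.randint(0, 3, 200)     # placeholder phase labels (3 classes)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())
```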


2001 ◽  
Vol 10 (6) ◽  
pp. 613-631 ◽  
Author(s):  
Oliver Bimber ◽  
L. Miguel Encarnação ◽  
Pedro Branco

A prototype of an optical extension for table-like rear-projection systems is described. A large, half-silvered mirror beam splitter is used as the optical combiner to unify a virtual and a real workbench. The virtual workbench has been enabled to display computer graphics beyond its projection boundaries and to combine virtual environments with the adjacent real world. A variety of techniques are described and referred to that allow indirect interaction with virtual objects through the mirror. Furthermore, the optical distortion that is caused by the half-silvered mirror combiner is analyzed, and techniques are presented to compensate for this distortion.
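
One standard geometric ingredient in mirror-based combiners of this kind is reflecting virtual geometry (or the viewpoint) across the mirror plane so the reflected image lines up with the real workbench. The sketch below builds the 4x4 reflection matrix for a plane with unit normal n and offset d; it is textbook geometry, not the paper's full distortion-compensation method.

```python
# Reflection matrix for the plane n·x + d = 0 (n a unit normal).
import numpy as np

def reflection_matrix(n, d):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)   # I - 2 n n^T
    m[:3, 3] = -2.0 * d * n             # translate by -2 d n
    return m

# Example: mirror plane y = 0.5 (normal (0, 1, 0), d = -0.5).
print(reflection_matrix([0, 1, 0], -0.5) @ np.array([0.0, 1.0, 0.0, 1.0]))
```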


2020 ◽  
Vol 82 (9) ◽  
pp. 614-618
Author(s):  
Lily Apedaile

Model-based inquiry, inquiry-based learning, and phenomenon are all popular terms in K–12 science education right now. Science education in our public education system is rapidly changing due to the implementation of the Next Generation Science Standards (NGSS). These standards ask teachers to move away from direct instruction to having students develop their understanding of the natural world through guided-learning activities. Under NGSS, students are expected to develop this understanding through one of the main scientific practices, model building, which requires a complex, real-world phenomenon to drive the learning experience. Phenomena work best in the classroom when they apply to students’ lives and pique their interest. Finding such phenomena can be hard – especially finding ones that have not already been thoroughly explained on the internet. A great way to find a complex, real-world phenomenon that will interest students is to partner with a local research lab to bring part of their research project into the classroom. This article lays out a process for bringing a local research project into the classroom and designing NGSS-aligned curricula around this project to create a more authentic learning experience for high school students.


2011 ◽  
Vol 2 (2) ◽  
pp. 1
Author(s):  
Luciana Nedel ◽  
Anderson Maciel ◽  
Carla Dal Sasso Freitas ◽  
Claudio Jung ◽  
Manuel Oliveira ◽  
...  

The Computer Graphics, Image Processing and Interaction (CGIP) group at UFRGS concentrates expertise from many different and complementary graphics-related domains. In this paper we introduce the group and present our research lines and some ongoing projects. We selected mainly the projects related to 3D interaction and navigation, which include applications such as massive data visualization, surgery planning and simulation, tracking and computer vision algorithms, and modeling approaches for human perception and the natural world.


2020 ◽  
Author(s):  
Kevin Miller ◽  
Sarah Jo Venditto

Decisions in the natural world are rarely made in isolation. Each action that an organism selects will affect the future situations in which it finds itself, and those situations will in turn affect the future actions that are available. Achieving real-world goals often requires successfully navigating a sequence of many actions. An efficient and flexible way to achieve such goals is to construct an internal model of the environment, and use it to plan behavior multiple steps into the future. This process is known as multi-step planning, and its neural mechanisms are only beginning to be understood. Here, we review recent advances in our understanding of these mechanisms, many of which take advantage of multi-step decision tasks for humans and animals.
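
As a purely didactic illustration of what multi-step planning means computationally, the sketch below gives an agent an internal model (transition and reward tables) and has it evaluate every action sequence a few steps deep before committing to the first action. It is not intended as a model of the neural mechanisms discussed in the review.

```python
# Depth-limited lookahead over an internal model of a tiny environment.
import itertools
import numpy as np

T = np.array([[1, 2], [3, 0], [0, 3], [3, 3]])           # T[state, action] -> next state
R = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 0.]])   # R[state, action]

def plan(state, depth=3):
    best_return, best_first = -np.inf, None
    for seq in itertools.product(range(2), repeat=depth):
        s, total = state, 0.0
        for a in seq:                     # simulate the sequence in the model
            total += R[s, a]
            s = T[s, a]
        if total > best_return:
            best_return, best_first = total, seq[0]
    return best_first

print(plan(0))   # first action of the highest-return 3-step sequence
```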


Author(s):  
Salman Qadri

The purpose of this study is to highlight the significance of machine vision for the identification and classification of kidney stones. A novel optimized fused texture feature framework was designed to identify stones in the kidney. A fused set of 234 texture features, comprising GLCM, RLM, and histogram features, was extracted from each region of interest (ROI). On each image, eight ROIs of sizes 16x16, 20x20, and 22x22 were taken. The resulting feature space of 280,800 values (1200x234) was difficult to handle, so to overcome this data handling issue a feature optimization technique, POE+ACC, was applied to obtain the 30 best features for each ROI. The optimized fused feature dataset of 1200x30 values was fed to four machine vision classifiers: Random Forest, MLP, J48, and Naïve Bayes. Finally, it was observed that Random Forest provided the best results among the deployed classifiers, with 90% accuracy on the 22x22 ROI.
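
A hedged sketch of one slice of this pipeline, GLCM texture features per ROI feeding a Random Forest, is shown below. The RLM and histogram features and the POE+ACC selection step are omitted, and all parameters and data are placeholders rather than the study's.

```python
# GLCM texture features per ROI, classified with a Random Forest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(roi: np.ndarray) -> np.ndarray:
    """roi: 2-D uint8 array, e.g. a 22x22 patch."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Placeholder ROIs and labels stand in for the segmented kidney-stone patches.
rois = [np.random.randint(0, 256, (22, 22), dtype=np.uint8) for _ in range(100)]
labels = np.random.randint(0, 2, 100)

X = np.vstack([glcm_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```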


Author(s):  
Ida Nurhaida ◽  
Hong Wei ◽  
Remmy A. M. Zen ◽  
Ruli Manurung ◽  
Aniati M. Arymurthy

This paper systematically investigates the effect of image texture features on batik motif retrieval performance. The retrieval process uses a query motif image to find matching motif images in a database. In this study, feature fusion of various image texture features, such as Gabor, Log-Gabor, Grey Level Co-Occurrence Matrix (GLCM), and Local Binary Pattern (LBP) features, is attempted for motif image retrieval. For performance evaluation, both individual features and fused feature sets are applied. Experimental results show that optimal feature fusion outperforms individual features in batik motif retrieval. Among the individual features tested, Log-Gabor features provide the best result. The proposed approach is best used in a scenario where a query image containing multiple basic motif objects is applied to a dataset in which the retrieved images also contain multiple motif objects. The retrieval rate reaches 84.54% rank-3 precision when the feature space fuses Gabor, GLCM, and Log-Gabor features. The investigation also shows that the proposed method does not work well in a retrieval scenario where a query image containing multiple basic motif objects is applied to a dataset in which the retrieved images contain only one basic motif object.
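
The sketch below shows the general shape of fused-feature retrieval: concatenate two texture descriptors per image, then rank database images by distance to the query. Gabor and Log-Gabor filtering is omitted (LBP and GLCM stand in), so it illustrates the approach rather than the paper's exact feature set.

```python
# Fused texture descriptor (LBP histogram + GLCM statistics) and
# nearest-neighbour retrieval over a database of motif images.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def fused_descriptor(img: np.ndarray) -> np.ndarray:
    """img: 2-D uint8 greyscale motif image."""
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = np.array([graycoprops(glcm, p)[0, 0]
                           for p in ("contrast", "energy", "correlation")])
    return np.concatenate([lbp_hist, glcm_feats])

def retrieve(query: np.ndarray, database: list, k: int = 3):
    q = fused_descriptor(query)
    dists = [np.linalg.norm(q - fused_descriptor(d)) for d in database]
    return np.argsort(dists)[:k]          # indices of the k closest motifs
```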

