Adaptive Rendering of Dynamic 3D Scenes

Author(s):  
Vyacheslav Gonahchyan

Rendering dynamic 3D scenes is challenging because preprocessing steps such as merging and simplifying polygonal models or precalculating visibility information are not possible. The dynamic behavior of objects (visibility changes, movement) forces command buffers to be rebuilt, and rejecting invisible objects often does not yield performance gains. We propose an adaptive method for visualizing dynamic scenes that selects the most efficient way to record and use command buffers and the number of hardware occlusion queries to issue. The adaptive method is based on a performance model that estimates the execution time of the main stages of forward rendering. Testing showed the proposed method to be effective when rendering large dynamic scenes.
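
As a rough illustration of the kind of decision such a performance model supports, the sketch below estimates the per-frame cost of two command-buffer strategies and picks the cheaper one. The cost constants, strategy names, and overall structure are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch: pick a command-buffer strategy from estimated per-frame cost.
# All coefficients and names are illustrative placeholders, not the paper's model.

from dataclasses import dataclass

@dataclass
class SceneStats:
    visible_objects: int      # objects that passed visibility tests this frame
    changed_objects: int      # objects whose visibility or transform changed
    occlusion_queries: int    # hardware occlusion queries issued this frame

@dataclass
class CostModel:
    # Per-GPU constants (microseconds) that would need calibration in practice.
    t_record_cmd: float = 2.0   # cost to record one draw command
    t_submit_cmd: float = 0.3   # cost to replay one pre-recorded command
    t_occ_query: float = 5.0    # cost of one occlusion query round-trip

    def full_rerecord(self, s: SceneStats) -> float:
        """Rebuild the whole command buffer every frame."""
        return s.visible_objects * self.t_record_cmd

    def reuse_with_queries(self, s: SceneStats) -> float:
        """Reuse buffers, re-record only changed objects, pay for occlusion queries."""
        return (s.visible_objects * self.t_submit_cmd
                + s.changed_objects * self.t_record_cmd
                + s.occlusion_queries * self.t_occ_query)

def choose_strategy(model: CostModel, stats: SceneStats) -> str:
    costs = {
        "rerecord": model.full_rerecord(stats),
        "reuse+queries": model.reuse_with_queries(stats),
    }
    return min(costs, key=costs.get)

if __name__ == "__main__":
    stats = SceneStats(visible_objects=20_000, changed_objects=500, occlusion_queries=64)
    print(choose_strategy(CostModel(), stats))
```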

2017 ◽  
Vol 79 (7) ◽  
Author(s):  
Chayanan Nawapornanan ◽  
Sarun Intakosum ◽  
Veera Boonjing

Share-frequent pattern mining is more practical than traditional frequent pattern mining because it can reflect useful knowledge such as the total costs and profits of patterns. Mining share-frequent patterns has become one of the most important research issues in data mining. However, previous algorithms extract a large number of candidates and spend a lot of time generating and testing many useless candidates during the mining process. This paper proposes a new efficient method for discovering share-frequent patterns. The new method reduces the number of candidates by generating candidates only from high transaction-measure-value patterns. The downward closure property of transaction-measure-value patterns ensures the correctness of the proposed method. Experimental results on dense and sparse datasets show that the proposed method is very efficient in terms of execution time. It also decreases the number of useless candidates generated during the mining process by at least 70%.
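
The sketch below illustrates the pruning idea under stated assumptions: the share of a pattern is not anti-monotone, but its transaction-measure-value (the summed measure of the transactions that contain it) is, so candidates are generated only from high transaction-measure-value patterns. The function names and toy data are ours, not the paper's.

```python
# Minimal, illustrative sketch of share-frequent pattern mining with
# transaction-measure-value (tmv) pruning; names and data are hypothetical.

from itertools import combinations

# Each transaction maps item -> measure value (e.g., purchased quantity).
transactions = [
    {"a": 2, "b": 1, "c": 3},
    {"a": 1, "c": 2},
    {"b": 4, "c": 1, "d": 2},
    {"a": 3, "b": 2},
]

total_mv = sum(sum(t.values()) for t in transactions)

def share(pattern):
    """Share: summed measure of the pattern's own items over supporting transactions."""
    return sum(sum(t[i] for i in pattern)
               for t in transactions if all(i in t for i in pattern)) / total_mv

def tmv(pattern):
    """Transaction-measure-value: total measure of supporting transactions.
    Unlike share, tmv is anti-monotone, so it supports downward-closure pruning."""
    return sum(sum(t.values())
               for t in transactions if all(i in t for i in pattern)) / total_mv

def mine(min_share):
    items = sorted({i for t in transactions for i in t})
    level = [(i,) for i in items if tmv((i,)) >= min_share]
    results = {}
    while level:
        for p in level:
            if share(p) >= min_share:
                results[p] = share(p)
        # Generate next-level candidates only from high-tmv patterns.
        nxt = set()
        for p, q in combinations(level, 2):
            if p[:-1] == q[:-1]:
                cand = tuple(sorted(set(p) | set(q)))
                if tmv(cand) >= min_share:
                    nxt.add(cand)
        level = sorted(nxt)
    return results

print(mine(min_share=0.2))
```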


Author(s):  
Min Liu ◽  
Yang Liu ◽  
Cong Liu ◽  
Juan Wang ◽  
Minghu Wu

The dynamic texture (DT) model, which treats a transient video process as a sample from a spatiotemporal model, has shown surprising performance for moving-object detection in scenes with background motion (e.g., swaying branches, falling snow, waving water). However, DT parameter estimation is based on batch PCA, which is computationally inefficient for high-dimensional vectors. In addition, in the DT literature the dimension of the state space is either given or set experimentally. In this work, the authors present a new framework to address these issues. First, they introduce a two-step method that combines batch PCA and incremental PCA (IPCA) to estimate the DT parameters in a micro video element (MVE) group. The parameters of the first DT are learned with batch PCA and serve as the basis parameters. The parameters of the remaining DTs are estimated by IPCA from the basis parameters and the arriving observation vectors. Second, inspired by the concept of observability from control theory, the authors extend an adaptive method for salient motion detection based on the increment of singular entropy (ISE). The proposed scheme is tested in various scenes. Its computational efficiency outperforms state-of-the-art methods, and its Equal Error Rate (EER) is lower than that of other methods.
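
A minimal sketch of the two-step idea, using scikit-learn's IncrementalPCA as a stand-in: the first chunk of vectorized frames is fitted in batch, later chunks update the subspace incrementally, and an entropy of the singular spectrum serves as a simple proxy for the ISE criterion. The chunk sizes, number of components, toy data, and entropy formula are assumptions, not the authors' exact procedure.

```python
# Batch estimate on the first chunk, then incremental updates on later chunks.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 32 * 32))   # 200 vectorized frames (toy data)

ipca = IncrementalPCA(n_components=10)

# Step 1: "batch" estimate from the first micro video element (first chunk).
ipca.partial_fit(frames[:50])

# Step 2: incremental updates from the remaining chunks as frames arrive.
for start in range(50, len(frames), 50):
    ipca.partial_fit(frames[start:start + 50])

def singular_entropy(singular_values):
    """Entropy of the normalized singular spectrum; its increment between
    consecutive chunks can serve as a saliency indicator (our simplification)."""
    p = singular_values / singular_values.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

print("singular entropy:", singular_entropy(ipca.singular_values_))
```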


Author(s):  
Benoit Gallet ◽  
Michael Gowanlock

Abstract Given two datasets (or tables) A and B and a search distance $$\epsilon$$, the distance similarity join, denoted as $$A \ltimes _\epsilon B$$, finds the pairs of points ($$p_a$$, $$p_b$$), where $$p_a \in A$$ and $$p_b \in B$$, such that the distance between $$p_a$$ and $$p_b$$ is $$\le \epsilon$$. If $$A = B$$, then the similarity join is equivalent to a similarity self-join, denoted as $$A \bowtie _\epsilon A$$. In this paper we propose Heterogeneous Epsilon Grid Joins (HEGJoin), a heterogeneous CPU-GPU distance similarity join algorithm. Efficiently partitioning the work between the CPU and the GPU is a challenge: the work partitioning strategy needs to consider the different characteristics and computational throughput of the processors (CPU and GPU), as well as the data-dependent nature of the similarity join that affects the overall execution time (e.g., the number of queries, their distribution, the dimensionality, etc.). In addition to HEGJoin, we design one dynamic and two static work partitioning strategies. We also propose a performance model for each static partitioning strategy to distribute the work between the processors. We evaluate the performance of all three partitioning methods using the execution time and the load imbalance between the CPU and GPU as performance metrics. HEGJoin achieves a speedup of up to $$5.46\times$$ ($$3.97\times$$) over the GPU-only (CPU-only) algorithm on our first test platform, and up to $$1.97\times$$ ($$12.07\times$$) over the GPU-only (CPU-only) algorithm on our second test platform.
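
For context, the sketch below shows a minimal CPU-only epsilon-grid join in 2D: the points of B are bucketed into cells of side $$\epsilon$$, and each query point of A is compared only against the 3x3 neighborhood of its cell. It illustrates the grid structure such joins rely on, not HEGJoin's CPU-GPU partitioning; the names and the 2D restriction are our simplifications.

```python
# Minimal grid-based epsilon join (2D, CPU-only); illustrative only.
from collections import defaultdict
from math import floor, dist

def grid_join(A, B, eps):
    # Bucket B's points into grid cells of side eps.
    grid = defaultdict(list)
    for pb in B:
        grid[(floor(pb[0] / eps), floor(pb[1] / eps))].append(pb)

    # For each query point of A, check only the surrounding 3x3 cells.
    pairs = []
    for pa in A:
        cx, cy = floor(pa[0] / eps), floor(pa[1] / eps)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for pb in grid.get((cx + dx, cy + dy), ()):
                    if dist(pa, pb) <= eps:
                        pairs.append((pa, pb))
    return pairs

A = [(0.1, 0.1), (0.9, 0.9), (2.0, 2.0)]
B = [(0.15, 0.12), (1.0, 1.0), (5.0, 5.0)]
print(grid_join(A, B, eps=0.2))
```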


Author(s):  
Panagiotis Kouvaros ◽  
Alessio Lomuscio

We introduce an efficient method for the complete verification of ReLU-based feed-forward neural networks. The method implements branching on the ReLU states on the basis of a notion of dependency between the nodes. This results in dividing the original verification problem into a set of sub-problems whose MILP formulations require fewer integrality constraints. We evaluate the method on all of the ReLU-based fully connected networks from the first competition for neural network verification. The experimental results obtained show 145% performance gains over the present state-of-the-art in complete verification.
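
As background (and not specific to this paper), complete verifiers commonly encode each unstable ReLU $$y = \max(0, x)$$ with pre-activation bounds $$l \le x \le u$$, $$l < 0 < u$$, using one binary variable $$\delta$$ via the standard big-M constraints:

$$y \ge 0, \quad y \ge x, \quad y \le u\,\delta, \quad y \le x - l\,(1 - \delta), \quad \delta \in \{0, 1\}.$$

Branching on a ReLU's state fixes $$\delta$$ to 0 or 1 in the corresponding sub-problem; when dependencies imply the states of further nodes on that branch, their binaries can be fixed as well, which is what leaves fewer integrality constraints in each MILP.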

