Automatic LOD selection using viewpoint entropy

Author(s):  
Xiaodong Wang ◽  
Fengju Kang ◽  
Hao Gu

The discrete Level of Detail (LOD) strategy is an effective way to balance the inherent trade-off between quality and speed in computer graphics. Applying the strategy, however, requires both an appropriate selection approach and a visually comfortable transition procedure to reduce the popping artifacts introduced by switching between levels. In this paper, a novel automatic LOD selection method based on viewpoint entropy is proposed. First, the traditional calculation of the viewpoint entropy of a scene is improved for the preprocessing of LOD models. Then, a runtime rendering scheme combined with our selection method is designed to balance the frame rate during real-time rendering, and a novel smooth LOD transition strategy based on Alternate Frame Rendering (AFR) is put forward to eliminate popping artifacts. Finally, the proposal is evaluated on a complex example scene to verify its real-time performance and its ability to preserve the continuity of rendered images. Experimental results show that the method stabilizes the frame rate and reduces the loss of continuity.
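The abstract does not spell out the improved entropy computation or the switching criterion, so the following is only a minimal Python sketch of the classical viewpoint-entropy formula (Vázquez et al.) together with a hypothetical tolerance-based selection rule; the function names, the `tolerance` parameter, and the finest-first level ordering are assumptions for illustration, not the authors' method.

```python
import numpy as np

def viewpoint_entropy(face_areas, background_area):
    """Classical viewpoint entropy:
    H = -sum_i (a_i / a_t) * log2(a_i / a_t),
    where a_i are the projected face areas plus the background area
    and a_t is their sum over the whole viewport."""
    areas = np.append(np.asarray(face_areas, dtype=float), background_area)
    a_t = areas.sum()
    p = areas[areas > 0] / a_t        # zero-area faces contribute nothing
    return float(-(p * np.log2(p)).sum())

def select_lod(levels, face_areas_per_level, background_area, tolerance):
    """Hypothetical rule: pick the coarsest level whose entropy stays
    within `tolerance` of the finest level's. `levels` is finest-first."""
    entropy = [viewpoint_entropy(a, background_area)
               for a in face_areas_per_level]
    for i in range(len(levels) - 1, 0, -1):   # try coarsest first
        if entropy[0] - entropy[i] <= tolerance:
            return levels[i]
    return levels[0]
```

In practice the per-face projected areas are typically obtained with an item buffer: each face is rendered in a unique color and its pixels are counted.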

Author(s):  
David J. Lobina

The study of cognitive phenomena is best approached in an orderly manner. It must begin with an analysis of the function in intension at the heart of any cognitive domain (its knowledge base), then proceed to the manner in which such knowledge is put into use in real-time processing, concluding with a domain’s neural underpinnings, its development in ontogeny, etc. Such an approach to the study of cognition involves the adoption of different levels of explanation/description, as prescribed by David Marr and many others, each level requiring its own methodology and supplying its own data to be accounted for. The study of recursion in cognition is badly in need of a systematic and well-ordered approach, and this chapter lays out the blueprint to be followed in the book by focusing on a strict separation between how this notion applies in linguistic knowledge and how it manifests itself in language processing.


Author(s):  
Parastoo Soleimani ◽  
David W. Capson ◽  
Kin Fun Li

The first step in a scale-invariant image matching system is scale space generation. Nonlinear scale space generation algorithms such as AKAZE reduce noise and distortion at different scales while retaining the borders and keypoints of the image. An FPGA-based hardware architecture for AKAZE nonlinear scale space generation is proposed to speed up this algorithm for real-time applications. The three contributions of this work are (1) mapping the two passes of the AKAZE algorithm onto a hardware architecture that processes multiple image sections in parallel, (2) multi-scale line buffers that can be reused across scales, and (3) a time-sharing mechanism in the memory management unit that lets multiple sections of the image be processed in parallel while preventing the artifacts that partitioning the image would otherwise introduce. We also use approximations in the algorithm to make the hardware implementation more efficient while maintaining the repeatability of the detection. A frame rate of 304 frames per second at a resolution of 1280 × 768 is achieved, which compares favorably with other work.
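For orientation, AKAZE builds its nonlinear scale space by repeatedly applying explicit diffusion steps with a Perona-Malik conductivity, grouped into fast explicit diffusion (FED) cycles. Below is a minimal NumPy sketch of one such step for a grayscale image; the contrast parameter `k` and step size `tau` are illustrative values, and none of the paper's FPGA-specific partitioning, line buffering, or time-sharing is modeled here.

```python
import numpy as np

def conductivity_g2(image, k):
    """Perona-Malik g2 conductivity used in KAZE/AKAZE:
    g = 1 / (1 + |grad L|^2 / k^2); small near edges, ~1 in flat regions."""
    gy, gx = np.gradient(image)
    return 1.0 / (1.0 + (gx ** 2 + gy ** 2) / k ** 2)

def diffusion_step(image, k=0.03, tau=0.25):
    """One explicit step of L <- L + tau * div(g * grad L). AKAZE chains
    such steps into FED cycles to reach each scale; k and tau are
    illustrative, not the paper's values."""
    g = conductivity_g2(image, k)
    gy, gx = np.gradient(image)
    div = np.gradient(g * gy, axis=0) + np.gradient(g * gx, axis=1)
    return image + tau * div

# Toy scale space from a grayscale image normalized to [0, 1]:
# scales = [image]
# for _ in range(4):
#     scales.append(diffusion_step(scales[-1]))
```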


2018 ◽  
Vol 25 (4) ◽  
pp. 1135-1143 ◽  
Author(s):  
Faisal Khan ◽  
Suresh Narayanan ◽  
Roger Sersted ◽  
Nicholas Schwarz ◽  
Alec Sandy

Multi-speckle X-ray photon correlation spectroscopy (XPCS) is a powerful technique for characterizing the dynamic nature of complex materials over a range of time scales, and it has been successfully applied to a wide range of systems. Recent developments in higher-frame-rate detectors, while aiding the study of faster dynamical processes, create large amounts of data that require parallel computational techniques to process in near real time. Here, an implementation of the multi-tau and two-time autocorrelation algorithms using the Hadoop MapReduce framework for distributed computing is presented. The system scales well with increasing data size and has served the users of beamline 8-ID-I at the Advanced Photon Source with near-real-time autocorrelations for the past five years.
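As a point of reference, the multi-tau scheme correlates at short lags at full time resolution and then repeatedly halves the resolution for longer lags, giving logarithmically spaced delays at fixed cost. The sketch below is a plain single-node NumPy version for a (time, height, width) frame stack; the distribution over Hadoop MapReduce that is the subject of the paper is not shown, and the lag layout is the textbook one rather than the beamline's exact configuration.

```python
import numpy as np

def multi_tau_g2(frames, lags_per_level=8, levels=6):
    """Multi-tau intensity autocorrelation, averaged over all pixels:
    g2(tau) = <I(t) * I(t+tau)> / (<I(t)> * <I(t+tau)>).
    `frames` has shape (T, H, W); returns (taus, g2) as 1-D arrays."""
    data = frames.astype(float)
    taus, g2 = [], []
    dt = 1  # current lag spacing, in units of the original frame interval
    for level in range(levels):
        # full lag range at level 0, upper half only at coarser levels
        start = 1 if level == 0 else lags_per_level // 2 + 1
        for lag in range(start, lags_per_level + 1):
            if lag >= data.shape[0]:
                return np.array(taus), np.array(g2)
            num = (data[:-lag] * data[lag:]).mean()
            den = data[:-lag].mean() * data[lag:].mean()
            taus.append(lag * dt)
            g2.append(num / den)
        # average frame pairs: halve the time resolution for the next level
        n = (data.shape[0] // 2) * 2
        data = 0.5 * (data[0:n:2] + data[1:n:2])
        dt *= 2
    return np.array(taus), np.array(g2)

# Example on synthetic counting data:
# taus, g2 = multi_tau_g2(np.random.poisson(5.0, (1024, 64, 64)))
```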


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Yaghoub Dabiri ◽  
Alex Van der Velden ◽  
Kevin L. Sack ◽  
Jenny S. Choy ◽  
Julius M. Guccione ◽  
...  

An understanding of left ventricle (LV) mechanics is fundamental for designing better preventive, diagnostic, and treatment strategies for improved heart function. Because of the costs of the clinical and experimental studies needed to treat and understand heart function, respectively, in-silico models play an important role. Finite element (FE) models, which have been used to create in-silico LV models for different cardiac health and disease conditions as well as for cardiac device design, are time-consuming and require powerful computational resources, which limits their use when real-time results are needed. As an alternative, we sought to use deep learning (DL) for LV in-silico modeling. We used 80 four-chamber heart FE models to train feedforward and recurrent neural network (RNN) models with long short-term memory (LSTM) for LV pressure and volume, and 120 LV-only FE models to train LV stress predictions. The active material properties of the myocardium and time were the features for the LV pressure and volume training; the passive material properties and element centroid coordinates were the features of the LV stress prediction models. For six test FE models, the DL error for LV volume was 1.599 ± 1.227 ml and the error for pressure was 1.257 ± 0.488 mmHg; for 20 LV FE test examples, the mean absolute errors were 0.179 ± 0.050 kPa for myofiber stress, 0.049 ± 0.017 kPa for cross-fiber stress, and 0.039 ± 0.011 kPa for shear stress. After training, the DL runtime was on the order of seconds, whereas the equivalent FE runtime was on the order of several hours (pressure and volume) or 20 min (stress). We conclude that with DL, LV in-silico simulations can be provided for applications requiring real-time results.
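The paper's network architectures are not reproduced here; the sketch below is only a plausible PyTorch shape for the pressure/volume part: an LSTM mapping a sequence of [active material parameters, time] features to pressure and volume at each time step. The layer sizes, feature count, and training loop are assumptions, and the random tensors merely stand in for the FE-generated training data.

```python
import torch
import torch.nn as nn

class LVSurrogate(nn.Module):
    """Illustrative LSTM surrogate: maps a sequence of
    [active material parameters, time] features to [pressure, volume]."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # per-step outputs: pressure, volume

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.head(h)                # (batch, time, 2)

# Hypothetical training loop; random tensors stand in for FE-generated data.
model = LVSurrogate()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(80, 50, 4)                 # 80 FE models, 50 time steps
y = torch.randn(80, 50, 2)                 # pressure/volume targets
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Once trained, a forward pass over new material parameters takes milliseconds, which is what makes the seconds-scale runtime reported in the abstract plausible relative to hours-long FE solves.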

