Fast Rendering
Recently Published Documents


TOTAL DOCUMENTS

68
(FIVE YEARS 8)

H-INDEX

10
(FIVE YEARS 1)

2021 ◽  
Author(s):  
Jian Li ◽  
Pei-Rong Liu ◽  
Xinyu Wang ◽  
Hao Cui ◽  
Yurong Ma

Abstract: Traditional geological modeling methods suffer from insufficient utilization of geological survey data, inaccurate expression of the stratigraphic model, and large model data volumes, so a 3D geological model cannot be smoothly loaded and rendered on the web end. In this paper, a 3D geological implicit modeling method of regular voxel splitting based on hierarchical interpolation data is proposed. The method first converts and fuses the borehole and geological section data from a geological survey, compares the applicability of different interpolation algorithms through cross-validation, and uses the best-fitting algorithm to interpolate and densify the discrete points within each stratum. It then constructs regular voxels, designs five different regular voxel split types, and divides the voxels accordingly. In addition, the data structure of the voxel split model is designed, and the irregular voxel metadata structure is parsed and displayed through Three.js. Using this method, the full workflow from data processing to model construction and visualization is demonstrated on survey data from an area in Zhengzhou. The experimental results show that the model can integrate multisource hierarchical interpolation data, express different stratum structures accurately and smoothly, and realize fast rendering, spatial query, and analysis of the internal information of a geological body in a browser.
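The cross-validation step the abstract describes — scoring candidate interpolators on borehole points and keeping the best fit — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the inverse-distance-weighting interpolator, the candidate power parameters, and the synthetic borehole coordinates are all assumptions for the example.

```python
import math

def idw(points, q, power=2.0):
    """Inverse-distance-weighted estimate of a stratum elevation at query point q,
    from known (x, y, elevation) borehole samples."""
    num = den = 0.0
    for (x, y, z) in points:
        d = math.hypot(x - q[0], y - q[1])
        if d < 1e-12:
            return z  # query coincides with a borehole
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

def loocv_rmse(points, power):
    """Leave-one-out cross-validation error for one interpolator setting:
    predict each borehole from the others and measure the misfit."""
    errs = []
    for i, (x, y, z) in enumerate(points):
        rest = points[:i] + points[i + 1:]
        errs.append((idw(rest, (x, y), power) - z) ** 2)
    return math.sqrt(sum(errs) / len(errs))

# Synthetic borehole tops of one stratum: (x, y, elevation)
boreholes = [(0, 0, 10.0), (10, 0, 12.0), (0, 10, 11.0),
             (10, 10, 13.0), (5, 5, 11.5)]

# Pick the candidate with the lowest cross-validation error,
# then use it to densify the discrete points of the stratum.
best = min([1.0, 2.0, 3.0], key=lambda p: loocv_rmse(boreholes, p))
```

In the same spirit, the paper compares several interpolation algorithms rather than power parameters of a single one, but the selection loop is identical: fit on all-but-one sample, score the held-out prediction, keep the winner.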


2021 ◽  
Vol 11 (18) ◽  
pp. 8750
Author(s):  
Styliani Verykokou ◽  
Argyro-Maria Boutsi ◽  
Charalabos Ioannidis

Mobile Augmented Reality (MAR) is designed to keep pace with high-end mobile devices and their powerful sensors. This evolution excludes users with low-end devices and network constraints. This article presents ModAR, a hybrid Android prototype that extends the MAR experience to this target group. It combines feature-based image matching and pose estimation with fast rendering of 3D textured models. Planar objects in the real environment are used as pattern images for overlaying users' meshes or the app's default ones. Since ModAR is based on the OpenCV C++ library at the Android NDK and the OpenGL ES 2.0 graphics API, it has no dependencies on additional software, operating-system version, or model-specific hardware. The developed 3D graphics engine implements optimized vertex-data rendering that combines data grouping, synchronization, sub-texture compression, and instancing for limited CPU/GPU resources and a single-threaded approach. It achieves up to a 3× speed-up compared to standard index rendering, and AR overlay of a 50 K-vertex 3D model in less than 30 s. Several deployment scenarios for pose estimation demonstrate that the oriented FAST detector with an upper threshold on features per frame, combined with the ORB descriptor, yields the best results in terms of robustness and efficiency, achieving a 90% reduction of image matching time compared to the AGAST detector with the BRISK descriptor, and a pattern-recognition accuracy above 90% over a wide range of scale changes, regardless of in-plane rotations and partial occlusions of the pattern.
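The matching stage behind numbers like these pairs binary descriptors (ORB, BRISK) by Hamming distance and rejects ambiguous correspondences. A minimal sketch of that idea, with toy integer descriptors standing in for real 256-bit ORB descriptors and a ratio threshold chosen only for illustration:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors,
    represented here as integers of equal bit-width."""
    return bin(a ^ b).count("1")

def match(desc_pattern, desc_frame, ratio=0.8):
    """Brute-force matching with a nearest/second-nearest ratio test:
    a pattern descriptor is matched to a frame descriptor only when the
    best distance clearly beats the runner-up, which discards ambiguous
    pattern/frame correspondences before pose estimation."""
    matches = []
    for i, d in enumerate(desc_pattern):
        dists = sorted((hamming(d, f), j) for j, f in enumerate(desc_frame))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches
```

Capping the number of detected features per frame, as the abstract describes, bounds the cost of this quadratic matching loop — one practical reason the thresholded FAST/ORB combination runs so much faster than unconstrained alternatives.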


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xiaonan Cao

This paper starts from the study of realistic three-dimensional models and, from the two aspects of ink-art style simulation and three-dimensional display technology, explores a three-dimensional display model for ink-style 3D models; experiments on a software development platform and auxiliary software verify the model's feasibility. To address the problem of real-time rendering of large-scale 3D scenes in the model, an efficient visibility-culling method and a multiresolution fast rendering method are designed to realize the rapid construction and rendering of ink-art 3D virtual reality scenes in a big-data environment. A two-dimensional cellular automaton is used to simulate a brushstroke model in ink-wash style, and outlines are drawn along the brushstroke path to obtain an effect close to the artistic style of ink-wash painting. The model surface is given ink-style brushstroke texture patterns, and, with reference to the model's depth map, normal map, and curvature map, the drawing effect is simulated by procedural texture mapping. Example verification shows that the rapid visualization analysis model for ink-art big data designed in this paper meets the prediction requirements of the three-dimensional display indicators for ink-art big data. The fast visibility-culling method achieves high efficiency when handling large-scale three-dimensional ink-art virtual reality scenes in a big-data environment, and the multiresolution fast rendering method preserves the appearance of the model without major deformation.
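The cellular-automaton brushstroke idea can be illustrated with a simple ink-diffusion rule: each cell passes a fraction of its ink equally to its four neighbours per step, producing the soft, feathered edges characteristic of ink-wash strokes. The grid size, diffusion rate, and update rule below are illustrative assumptions, not the paper's specific automaton:

```python
def diffuse(grid, rate=0.2, steps=1):
    """One or more ink-diffusion updates of a 2D cellular automaton.
    Each cell keeps (1 - rate) of its ink and donates rate/4 to each
    in-bounds 4-neighbour; ink that would leave the grid is lost."""
    h, w = len(grid), len(grid[0])
    for _ in range(steps):
        new = [[c * (1 - rate) for c in row] for row in grid]
        for y in range(h):
            for x in range(w):
                share = grid[y][x] * rate / 4
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        new[ny][nx] += share
        grid = new
    return grid

# Deposit one unit of ink at the centre of a 5x5 canvas,
# then let it bleed outward for one step.
canvas = [[0.0] * 5 for _ in range(5)]
canvas[2][2] = 1.0
canvas = diffuse(canvas)
```

Stamping such a diffused footprint repeatedly along a stroke path, as the abstract describes, yields the outline-drawing effect; the per-cell ink values then drive the texture intensity in the procedural mapping step.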


2020 ◽  
Author(s):  
Michail Schwab

The dominant markup language for Web visualizations, Scalable Vector Graphics (SVG), is comparatively easy to learn, and is open, accessible, customizable via CSS, and searchable via the DOM, with easy interaction handling and debugging. Because these attributes allow visualization creators to focus on design rather than implementation details, tools built on top of SVG, such as D3.js, are essential to the visualization community. However, slow SVG rendering can limit designs by effectively capping the number of on-screen data points, and this can force visualization creators to switch to Canvas or WebGL, which are less flexible (e.g., no search or styling via CSS) and harder to learn. We introduce Scalable Scalable Vector Graphics (SSVG) to reduce these limitations and allow complex and smooth visualizations to be created with SVG. SSVG automatically translates interactive SVG visualizations into a dynamic virtual DOM (VDOM) by intercepting JavaScript function calls, bypassing the browser's slow 'to specification' rendering. Decoupling the SVG visualization specification from SVG rendering, and obtaining a dynamic VDOM, creates flexibility and opportunity for visualization systems research. SSVG uses this flexibility to free up the main thread for more interactivity and renders the visualization with Canvas or WebGL on a web worker. Together, these concepts create a drop-in JavaScript library that can improve rendering performance by 3-9× with only one line of code added. To demonstrate applicability, we describe the use of SSVG on multiple example visualizations, including published visualization research. A free copy of this paper, collected data, and source code are available as open science at osf.io/ge8wp.
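The core interception idea — record attribute writes into a virtual node instead of touching the real (slow) DOM, then let a Canvas/WebGL back end replay only what changed — can be sketched language-agnostically. SSVG itself is a JavaScript library; the Python class below is only an illustrative model of the pattern, and its names (`VNode`, `flush`) are made up for this sketch:

```python
class VNode:
    """Minimal virtual-DOM node. setAttribute-style calls are recorded
    here rather than applied to a live SVG element, so a renderer can
    replay the accumulated changes in one batched pass."""

    def __init__(self, tag):
        self.tag = tag
        self.attrs = {}
        self.dirty = set()
        self.children = []

    def set_attribute(self, name, value):
        # Intercepted write: only note a change if the value actually differs,
        # so redundant updates cost nothing at render time.
        if self.attrs.get(name) != value:
            self.attrs[name] = value
            self.dirty.add(name)

    def flush(self):
        """Return and clear the pending changes -- the set a Canvas or
        WebGL back end (possibly on a worker thread) would redraw."""
        changes = {k: self.attrs[k] for k in self.dirty}
        self.dirty.clear()
        return changes
```

Because the visualization code still thinks it is mutating SVG elements, existing D3.js code can run unmodified on top of such a shim — which is what makes a one-line drop-in integration possible.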


2020 ◽  
Vol 6 (9) ◽  
pp. 91
Author(s):  
Ibtissam Constantin ◽  
Joseph Constantin ◽  
André Bigand

Convolutional neural networks usually require large labeled data sets to construct accurate models. However, in many real-world scenarios, such as global illumination, labeling data is a time-consuming and costly task requiring human intelligence. Semi-supervised learning methods address this issue by making use of a small labeled data set and a larger set of unlabeled data. In this paper, our contributions focus on the development of a robust algorithm that combines active learning and a deep semi-supervised convolutional neural network to reduce the labeling workload and to accelerate convergence in the case of real-time global illumination. While the theoretical concepts of photo-realistic rendering are well understood, the need to deliver highly dynamic interactive content in vast virtual environments has increased recently; in particular, the quality measure of computer-generated images is of great importance. The experiments are conducted on global illumination scenes that contain diverse distortions. Good consistency can be seen between human psycho-visual thresholds and the quality measures of the learning models. A comparison has also been made with SVM and other state-of-the-art deep learning models: we perform transfer learning by running the convolutional base of these models over our image set, and then use the output features of the convolutional base as input to retrain the parameters of the fully connected layer. The obtained results show that our proposed method provides promising efficiency in terms of precision, time complexity, and optimal architecture.
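The active-learning half of such a pipeline typically works by querying a human for labels only on the unlabeled samples the current model is least sure about. A minimal sketch of uncertainty sampling for a binary quality judgment; the entropy criterion and the function names here are illustrative assumptions, not necessarily the paper's exact selection rule:

```python
import math

def uncertainty(p):
    """Entropy of a binary prediction probability p; it peaks at p = 0.5,
    where the model is least sure of the image-quality label."""
    eps = 1e-12  # guard against log(0)
    return -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))

def select_queries(probs, k):
    """Active-learning step: rank unlabeled samples by prediction entropy
    and return the indices of the k most uncertain ones, to be sent to a
    human annotator. Everything else stays in the semi-supervised pool."""
    ranked = sorted(range(len(probs)),
                    key=lambda i: uncertainty(probs[i]),
                    reverse=True)
    return ranked[:k]
```

Iterating this loop — train on the small labeled set, score the unlabeled pool, label only the top-k uncertain samples, retrain — is what lets the combined method cut the labeling workload while still converging quickly.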


Author(s):  
Guangming An ◽  
Changlei Jiang ◽  
Taichi Watanabe
Keyword(s):  

2019 ◽  
Vol 32 (2) ◽  
pp. 427-446 ◽  
Author(s):  
Joseph Constantin ◽  
André Bigand ◽  
Ibtissam Constantin

2017 ◽  
Vol 36 (6) ◽  
pp. 1-15 ◽  
Author(s):  
Pramook Khungurn ◽  
Rundong Wu ◽  
James Noeckel ◽  
Steve Marschner ◽  
Kavita Bala
