3D Content
Recently Published Documents


TOTAL DOCUMENTS: 189 (FIVE YEARS: 25)
H-INDEX: 13 (FIVE YEARS: 1)

2022 ◽  
Vol 59 (1) ◽  
pp. 3-18
Author(s):  
Jiří Frank ◽  
Josef Kortan ◽  
Miroslav Kukrál ◽  
Vojtěch Leischner ◽  
Lukáš Menšík ◽  
...  

One of the challenges that museums often face is how to present their 'treasures' in a form that is both comprehensive and relevant to today's audiences. Digital content alone is not enough in this context, and 3D content is increasingly gaining importance. One of the most accessible and at the same time most effective 3D digitisation methods is photogrammetry. When the procedures are followed correctly, the result is not only high-quality content with a wide range of uses but also a potential stepping stone towards effective business models. Photogrammetry can reduce acquisition costs quite significantly and make 3D digitisation accessible to a wider range of institutions.


2021 ◽  
Vol 11 (21) ◽  
pp. 9889
Author(s):  
Zehao He ◽  
Xiaomeng Sui ◽  
Liangcai Cao

Holographic display has the potential to be utilized in many 3D application scenarios because it provides all the depth cues that human eyes can perceive. However, the shortage of 3D content has limited the application of holographic 3D displays. To enrich 3D content for holographic display, a 2D-to-3D rendering approach is presented. In this method, 2D images are first classified into three categories: distant view images, perspective view images, and close-up images. For each category, the computer-generated depth map (CGDM) is calculated using a corresponding gradient model. The resulting CGDMs are applied in a layer-based holographic algorithm to obtain computer-generated holograms (CGHs). The correctly reconstructed region of the image changes with the reconstruction distance, providing a natural 3D display effect. This realistic 3D effect means the proposed approach could be applied in areas such as education, navigation, and the health sciences in the future.
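
As a rough illustration of the first two steps, the sketch below builds a CGDM for the distant-view category from a simple linear vertical gradient model and quantises it into the depth layers a layer-based CGH algorithm would propagate. The gradient model and the function names are illustrative assumptions; the paper's actual per-category models are not given in the abstract.

import numpy as np

def vertical_gradient_cgdm(height, width):
    # Assumed gradient model for a distant-view image: depth rises
    # linearly from the bottom row (near, 0.0) to the top row (far, 1.0).
    column = np.linspace(1.0, 0.0, height)
    return np.tile(column[:, None], (1, width))

def slice_into_layers(depth_map, num_layers):
    # Quantise a CGDM (values in [0, 1]) into boolean layer masks for a
    # layer-based CGH algorithm, nearest layer first.
    idx = np.minimum((depth_map * num_layers).astype(int), num_layers - 1)
    return [idx == k for k in range(num_layers)]

depth = vertical_gradient_cgdm(480, 640)
masks = slice_into_layers(depth, num_layers=8)
print(sum(int(m.sum()) for m in masks))  # 307200: every pixel in exactly one layer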


Author(s):  
Yashvi Desai ◽  
Naisha Shah ◽  
Vrushali Shah ◽  
Prasenjit Bhavathankar ◽  
Kaisar Katchi
Keyword(s):  

Author(s):  
B. Danthine ◽  
G. Hiebel ◽  
C. Posch ◽  
H. Stadler

Abstract. This article presents a use case showing how a semantic network can be used to enrich the existing virtual exhibition “They Shared their Destiny. Women and the Cossacks’ Tragedy in Lienz 1945” about the fate of women during the Cossack tragedy in Lienz. By connecting information about people, events, finds, and places via CIDOC CRM, the goal was not only to make this information interoperable but also to integrate the resulting knowledge graph into the exhibition, thus providing a further navigation level and enhancing the visitors’ experience.

First, a short introduction to the existing exhibition and the presented project is given. In the second part, the scientific background of CIDOC CRM and its semantically enriched 3D content is outlined. In the third part, the implementation of the project as a use case is described with respect to the data modelling and the integration of the semantic network into the 3-dimensional environment, as well as the integration of spatial aspects and other internet resources. Finally, a summary is given with an outlook on future planned projects.
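
As a sketch of the kind of data modelling involved, the following minimal example encodes one event, one place, and one person with rdflib, using standard CIDOC CRM classes (E5 Event, E53 Place, E21 Person) and properties. The project namespace and entity identifiers are hypothetical; the exhibition's actual graph is of course far richer.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("https://example.org/lienz/")  # hypothetical project namespace

g = Graph()
g.bind("crm", CRM)

# An E5 Event connected to an E53 Place and an E21 Person via CRM properties.
event = EX["cossack-tragedy-1945"]
g.add((event, RDF.type, CRM["E5_Event"]))
g.add((event, RDFS.label, Literal("Cossacks' Tragedy in Lienz, 1945")))

place = EX["lienz"]
g.add((place, RDF.type, CRM["E53_Place"]))
g.add((place, RDFS.label, Literal("Lienz")))
g.add((event, CRM["P7_took_place_at"], place))

person = EX["person-001"]  # placeholder identifier
g.add((person, RDF.type, CRM["E21_Person"]))
g.add((person, CRM["P11i_participated_in"], event))

print(g.serialize(format="turtle"))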


2021 ◽  
Vol 11 (16) ◽  
pp. 7536
Author(s):  
Kyungho Yu ◽  
Juhyeon Noh ◽  
Hee-Deok Yang

Recently, three-dimensional (3D) content used in various fields has attracted attention owing to the development of virtual reality and augmented reality technologies. To produce 3D content, objects must be modelled as vertices; however, high-quality modelling is time-consuming and costly. Drawing-based modelling is a technique that shortens the time required for modelling. It refers to creating a 3D model based on a user's line drawing, which is a 3D feature represented by two-dimensional (2D) lines. The extracted line drawing provides information about a 3D model in the 2D space. It is sometimes necessary to generate a line drawing from a 2D cartoon image to represent its 3D information. Extracting consistent line drawings from 2D cartoons is difficult because styles and techniques differ depending on the designer who produced them. Therefore, it is necessary to extract line drawings that clearly show the geometric characteristics of 2D cartoon shapes in various styles. This paper proposes a method for automatically extracting line drawings. Pairs of 2D cartoon shading images and line drawings are learned using a conditional generative adversarial network model, which outputs the line drawings of the cartoon artwork. The experimental results show that the proposed method can obtain line drawings representing the 3D geometric characteristics with 2D lines when a 2D cartoon painting is used as the input.
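
A minimal sketch of a pix2pix-style conditional GAN objective of the kind described, written in PyTorch: the generator maps a shaded cartoon image to a line drawing, and the discriminator judges (input, drawing) pairs. The discriminator D, the L1 weight, and the exact pairing are assumptions; the paper's architecture and hyperparameters are not given in the abstract.

import torch
import torch.nn as nn

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def generator_loss(D, shaded, fake_lines, real_lines, lambda_l1=100.0):
    # Fool the discriminator on the generated pair, plus an L1 term that
    # keeps the generated strokes close to the ground-truth line drawing.
    pred_fake = D(torch.cat([shaded, fake_lines], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_l1 * l1(fake_lines, real_lines)

def discriminator_loss(D, shaded, fake_lines, real_lines):
    # Standard real/fake split; detach() stops gradients into the generator.
    pred_real = D(torch.cat([shaded, real_lines], dim=1))
    pred_fake = D(torch.cat([shaded, fake_lines.detach()], dim=1))
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real))
                  + bce(pred_fake, torch.zeros_like(pred_fake)))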


Author(s):  
Nashwan Alsalam Ali ◽  
Abdul Monem S. Rahma ◽  
Shaimaa H. Shaker

The rapidly growing exchange of 3D content over the internet has made securing 3D content a very important issue. The solution is to encrypt the data of the 3D content, which comprises two main parts: the texture map and the 3D model. Standard encryption methods such as AES and DES are not a suitable solution for 3D applications because of the structure of 3D content, which must maintain its dimensionality and spatial stability. These problems are overcome by using chaotic maps in cryptography, which provide confusion and diffusion through uncorrelated numbers and randomness. Various works have applied chaotic systems to 3D content encryption. This survey reviews the approaches and structures used by 3D content encryption methods across different papers. It finds that methods using chaotic maps with a large keyspace are more robust to various attacks than encryption schemes without chaotic maps, and that methods encrypting the texture, polygons, and vertices of 3D content provide full protection, whereas methods that cover only some of these components provide partial protection.
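
To make the chaotic-map idea concrete, the sketch below derives a keystream from the logistic map and uses it as a pure confusion step that permutes the vertex order of a 3D model; the coordinate values are untouched, so dimensionality and spatial stability are preserved. This is a deliberately minimal illustration: the surveyed schemes typically also diffuse the coordinate values and encrypt the texture map.

import numpy as np

def logistic_sequence(n, x0=0.7, r=3.99):
    # Keystream from the logistic map x_{k+1} = r * x_k * (1 - x_k);
    # r near 4 and 0 < x0 < 1 give chaotic, key-sensitive output.
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt_vertices(vertices, x0=0.7, r=3.99):
    # Confusion only: sort the keystream to obtain a chaotic permutation.
    # (x0, r) act as the secret key; the permutation is regenerated from
    # the key at decryption rather than transmitted.
    perm = np.argsort(logistic_sequence(len(vertices), x0, r))
    return vertices[perm], perm

def decrypt_vertices(cipher, perm):
    plain = np.empty_like(cipher)
    plain[perm] = cipher
    return plain

v = np.random.rand(10, 3)            # toy model: 10 vertices
c, perm = encrypt_vertices(v)
assert np.allclose(decrypt_vertices(c, perm), v)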


Author(s):  
U. Bacher

Abstract. In aerial data acquisition, a new era started with the introduction of the first real hybrid sensor systems, such as the Leica CityMapper-2. Hybrid in this context means the combination of an (oblique) camera system with a topographic LiDAR in an integrated aerial mapping system. By combining these complementary sub-systems into one system, the weaknesses of one sub-system can be compensated by the alternative data source. An example is the mapping of low-light urban canyons, where image-based systems mostly produce unreliable results; for a LiDAR sensor, the geometrical reconstruction of these areas is straightforward and leads to accurate results. The paper gives a detailed overview of the development and technical characteristics of hybrid sensor systems. The process of data acquisition is discussed and strategies for hybrid urban mapping are proposed. A hybrid sensor alone is just one part of the whole procedure to generate 3D content; as important as the sensor itself is the workflow to generate the products. Here again, a hybrid approach, with the processing of all datasets in one environment, is discussed. Special attention is paid to the hybrid orientation of the data and the integrated generation of base and enhanced products. The paper is rounded off by a discussion of the advantages of LiDAR data for 3D mesh generation in urban modelling.
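
One elementary step in such a hybrid workflow is attaching image colour to LiDAR points once the hybrid orientation is known. The sketch below does this with an undistorted pinhole camera model; the calibration inputs K, R, t are assumed to come from the hybrid orientation, and occlusion handling is omitted.

import numpy as np

def colorize_points(points_world, image, K, R, t):
    # Project LiDAR points (N x 3) into one calibrated frame image and
    # attach the RGB value of the pixel each visible point falls on.
    cam = R @ points_world.T + t[:, None]      # world -> camera coordinates
    in_front = cam[2] > 0                      # keep points before the camera
    uv = (K @ cam)[:, in_front]
    uv = (uv[:2] / uv[2]).round().astype(int)  # perspective division
    h, w = image.shape[:2]
    valid = (0 <= uv[0]) & (uv[0] < w) & (0 <= uv[1]) & (uv[1] < h)
    colors = image[uv[1, valid], uv[0, valid]]
    return points_world[in_front][valid], colors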


2021 ◽  
Author(s):  
Raymond Phan

In this work, we describe a system for accurately estimating depth through synthetic depth maps in unconstrained conventional monocular images and video sequences, to semi-automatically convert these into their stereoscopic 3D counterparts. In current accepted industry efforts, this conversion is either performed automatically in a black-box fashion, or carried out manually by human operators, known as rotoscopers, who extract features and objects on a frame-by-frame basis. Automatic conversion is the least labour-intensive, but allows little to no user intervention, and error correction can be difficult. Manual conversion is the most accurate and provides the most control, but is very time-consuming and prohibitive for all but the largest production studios. Noting the merits and disadvantages of these two methods, a semi-automatic method blends the two, allowing faster yet accurate conversion while decreasing the time to release 3D content. Semi-automatic methods require the user to place strokes over the image, or over several keyframes in the case of video, corresponding to a rough estimate of the depths in the scene at those strokes. Afterwards, the remaining depths are determined, creating depth maps from which stereoscopic 3D content is generated, and Depth Image Based Rendering is employed to synthesise the artificial views. Here, depth map estimation can be considered a multi-label image segmentation problem: each class is a depth value. Additionally, for video, we allow the option of labelling only the first frame, and the strokes are propagated using one of two techniques: a modified computer vision object tracking algorithm, and edge-aware temporally consistent optical flow.

Fundamentally, this work combines the merits of two well-respected segmentation algorithms: Graph Cuts and Random Walks. The diffusion of depths with smooth gradients from Random Walks, combined with the edge-preserving properties of Graph Cuts, can create the best possible result. To demonstrate that the proposed framework generates good-quality stereoscopic content with minimal effort, we create results and compare them to the current best-known semi-automatic conversion framework. We also show that our results are more suitable for human perception in comparison to this framework.
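
As a rough sketch of the Depth Image Based Rendering step mentioned above, the function below shifts each pixel horizontally by a disparity proportional to its normalised depth to synthesise one artificial view. The linear disparity model and the hole handling are simplifying assumptions; the work's actual rendering and hole filling are not reproduced here.

import numpy as np

def dibr_view(image, depth, max_disparity=16):
    # depth is normalised to [0, 1] with 0 = near; nearer pixels shift more.
    # Disoccluded holes stay 0 here (real pipelines inpaint them), and at
    # collisions later pixels simply overwrite earlier ones.
    h, w = depth.shape
    view = np.zeros_like(image)
    disp = (max_disparity * (1.0 - depth)).round().astype(int)
    for y in range(h):
        x_src = np.arange(w)
        x_dst = np.clip(x_src + disp[y], 0, w - 1)
        view[y, x_dst] = image[y, x_src]
    return view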


