normal maps: Recently Published Documents

Total documents: 78 (five years: 8)
H-index: 12 (five years: 1)

Author(s): Friedrich Hegenbarth, Dušan Repovš

Abstract Let $X^{n}$ be an oriented closed generalized $n$-manifold, $n \ge 5$. In our recent paper (Proc. Edinb. Math. Soc. (2) 63 (2020), no. 2, 597–607), we constructed a map $t:\mathcal{N}(X^{n}) \to H^{st}_{n}(X^{n}; \mathbb{L}^{+})$ which extends the normal invariant map for the case when $X^{n}$ is a topological $n$-manifold. Here, $\mathcal{N}(X^{n})$ denotes the set of all normal bordism classes of degree one normal maps $(f,\,b): M^{n} \to X^{n}$, and $H^{st}_{*}(X^{n}; \mathbb{E})$ denotes the Steenrod homology of the spectrum $\mathbb{E}$. An important non-trivial question arose: is the map $t$ bijective? (Note that this holds in the case when $X^{n}$ is a topological $n$-manifold.) The purpose of this paper is to prove that the answer to this question is affirmative.


Sensors, 2021, Vol. 21 (10), 3469
Author(s): Leo Miyashita, Akihiro Nakamura, Takuto Odagawa, Masatoshi Ishikawa

We propose a novel method for detecting features on normal maps and describing them as binary features, called BIFNOM, which is three-dimensionally rotation invariant and detects and matches interest points at high speed regardless of whether a target is textured or textureless, rigid or non-rigid. Conventional methods of detecting features on normal maps can also be applied to textureless targets, in contrast with features on luminance images; however, they either cannot deal with three-dimensional rotation between pairs of corresponding interest points, due to how orientation is defined, or have difficulty achieving fast detection and matching due to a heavyweight descriptor. We address these issues by introducing a three-dimensional local coordinate system and converting each normal vector to a binary code, achieving real-time feature detection and matching at more than 750 fps. Furthermore, we present an extended descriptor and criteria for real-time tracking, and evaluate the performance with both simulations and an actual system.
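The speed argument rests on reducing normal vectors to short binary codes compared by Hamming distance. The sketch below illustrates that idea only; the sampling pattern, code length, threshold, and the `binary_descriptor`/`match` helpers are illustrative assumptions, not the published BIFNOM descriptor.

```python
import numpy as np

def binary_descriptor(normal_patch):
    """Toy binary code for a patch of unit normals.

    normal_patch: (K, 3) normals sampled at fixed offsets around an
    interest point, expressed in a local coordinate frame so that the
    code stays comparable across views. Each component is binarized
    by its sign.
    """
    return (normal_patch.ravel() > 0.0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return int(np.count_nonzero(a != b))

def match(query, candidates, max_dist=12):
    """Return the index of the nearest candidate code, or None."""
    dists = [hamming(query, c) for c in candidates]
    best = int(np.argmin(dists))
    return best if dists[best] <= max_dist else None
```

Because the codes are plain bit vectors, matching costs a handful of XOR/popcount operations per candidate, which is what makes frame rates in the hundreds of fps plausible.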


Author(s): Dong-Yu She, Kun Xu

Abstract Learning discriminative representations with deep neural networks often relies on massive labeled data, which is expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning, which leverages the input itself as supervision, is strongly preferred for its soaring performance on visual representation learning. This paper introduces a contrastive self-supervised framework for learning generalizable representations on synthetic data, which can be obtained easily with complete controllability. Specifically, we propose to optimize a contrastive learning task and a physical property prediction task simultaneously. Given a synthetic scene, the first task aims to maximize agreement between a pair of synthetic images generated by our proposed view sampling module, while the second task aims to predict three physical property maps, i.e., depth maps, instance contour maps, and surface normal maps. In addition, a feature-level domain adaptation technique with adversarial training is applied to reduce the domain difference between realistic and synthetic data. Experiments demonstrate that our proposed method achieves state-of-the-art performance on several visual recognition datasets.
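A minimal PyTorch sketch of the two-task objective described above: a standard NT-Xent-style contrastive loss over paired embeddings, plus a regression loss on the predicted property maps. The loss choice for the property heads (L1), the weighting `lam`, and the function names are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss over a batch of paired embeddings (B, D)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # 2B x D
    sim = z @ z.t() / tau                         # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))             # exclude self-pairs
    B = z1.size(0)
    # Positive for row i is the other view of the same scene.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

def total_loss(z1, z2, pred_maps, gt_maps, lam=1.0):
    """Contrastive term plus physical-property prediction term.

    pred_maps/gt_maps: dicts with 'depth', 'contour', and 'normal'
    tensors rendered from the synthetic scene.
    """
    prop = sum(F.l1_loss(pred_maps[k], gt_maps[k]) for k in gt_maps)
    return nt_xent(z1, z2) + lam * prop
```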


Author(s): Mukul Khanna, Tanu Sharma, Ayyappa Swamy Thatavarthy, K. Madhava Krishna

2021
Author(s): Rui Fan, Hengli Wang, Bohuan Xue, Huaiyang Huang, Yuan Wang, ...

This paper proposes three-filters-to-normal (3F2N), an accurate and ultrafast surface normal estimator (SNE) designed for structured range sensor data, e.g., depth/disparity images. 3F2N SNE computes surface normals by simply performing three filtering operations (two image gradient filters, in the horizontal and vertical directions respectively, and a mean/median filter) on an inverse depth image or a disparity image. Despite the simplicity of 3F2N SNE, no similar method exists in the literature. To evaluate the performance of our proposed SNE, we created three large-scale synthetic datasets (easy, medium and hard) using 24 3D mesh models, each of which is used to generate 1800–2500 pairs of depth images (resolution: 480 × 640 pixels) and the corresponding ground-truth surface normal maps from different views. 3F2N SNE demonstrates state-of-the-art performance, outperforming all other existing geometry-based SNEs, with average angular errors on the easy, medium and hard datasets of 1.66 degrees, 5.69 degrees and 15.31 degrees, respectively. Furthermore, our C++ and CUDA implementations achieve processing speeds of over 260 Hz and 21 kHz, respectively. Our datasets and source code are publicly available at sites.google.com/view/3f2n.
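A minimal NumPy sketch of the three-filter recipe the abstract describes: two gradient filters on the inverse depth image give the normal's x and y components, and a median over per-neighbour estimates supplies the z component. The kernel choice (Sobel), the 4-neighbourhood, and the boundary handling here are assumptions rather than the authors' exact implementation; the official code is at the linked site.

```python
import numpy as np
from scipy.ndimage import sobel

def three_filters_to_normal(depth, fx, fy, cx, cy):
    """Sketch of a 3F2N-style surface normal estimator.

    depth: (H, W) depth image in metres (<= 0 marks invalid pixels).
    fx, fy, cx, cy: pinhole camera intrinsics.
    Returns an (H, W, 3) array of unit normals in the camera frame.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = np.where(depth > 0, depth, np.nan)

    # Filters 1 and 2: horizontal/vertical gradients of the inverse depth.
    inv_z = 1.0 / z
    nx = fx * sobel(inv_z, axis=1)   # ~ d(1/Z)/du
    ny = fy * sobel(inv_z, axis=0)   # ~ d(1/Z)/dv

    # Back-project pixels to 3-D camera coordinates.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    # One nz estimate per 4-neighbour, via the tangent-plane constraint
    # nx*dx + ny*dy + nz*dz = 0. (np.roll wraps at image borders; a real
    # implementation would treat boundaries explicitly.)
    nz_candidates = []
    for du, dv in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        dx = np.roll(x, (-dv, -du), axis=(0, 1)) - x
        dy = np.roll(y, (-dv, -du), axis=(0, 1)) - y
        dz = np.roll(z, (-dv, -du), axis=(0, 1)) - z
        dz = np.where(dz == 0, np.nan, dz)
        nz_candidates.append(-(nx * dx + ny * dy) / dz)

    # Filter 3: the median over the candidate nz values.
    nz = np.nanmedian(np.stack(nz_candidates), axis=0)

    n = np.stack([nx, ny, nz], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

Because every step is a small fixed-size convolution or an elementwise operation, the whole estimator maps naturally onto GPU kernels, which is consistent with the reported kHz-range CUDA throughput.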


Author(s): A. Dhanda, M. Reina Ortiz, A. Weigert, A. Paladini, A. Min, ...

Abstract. In this paper, we propose a workflow for recreating places of cultural heritage in Virtual Reality (VR) using structure-from-motion (SfM) photogrammetry. The unique texture of heritage places makes them ideal for full photogrammetric capture. An optimized model is created from the photogrammetric data so that it is small enough to render in a real-time environment. The optimized model, combined with mesh maps (texture maps, normal maps, etc.), looks like the original high-detail model. The capture of a whole space makes it possible to create a VR experience with six degrees of freedom (6DoF) that allows the user to explore the historic place. Creating these experiences can bring people to cultural heritage that is endangered or too remote for them to access. The workflow described in this paper is demonstrated with the case study of Myin-pya-gu, an 11th-century temple in Bagan, Myanmar.
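The reason baked mesh maps can stand in for the high-detail model is that a tangent-space normal map perturbs per-pixel shading normals on the low-poly surface. The following is a generic decode-and-shade sketch of that mechanism, not the authors' pipeline; the function names and the simple Lambertian model are assumptions.

```python
import numpy as np

def decode_normal_map(rgb):
    """Map 8-bit tangent-space normal-map texels to unit vectors.

    rgb: (H, W, 3) uint8 texture; channels encode (x, y, z) in [0, 1].
    """
    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0   # [0,1] -> [-1,1]
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def lambert_shade(normals, light_dir=(0.0, 0.0, 1.0)):
    """Diffuse shading from the perturbed normals (tangent space)."""
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)
```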


2018, Vol. 36 (2), pp. 267–277
Author(s): Xavier Chermain, Frédéric Claux, Stéphane Mérillou
