rigid transformations
Recently Published Documents


TOTAL DOCUMENTS

36
(FIVE YEARS 3)

H-INDEX

7
(FIVE YEARS 0)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ze Lin Tan ◽  
Jing Bai ◽  
Shao Min Zhang ◽  
Fei Wei Qin

Abstract
In an image-based virtual try-on network, the features of both the target clothes and the input human body should be preserved. However, current techniques fail to solve the problems of blurriness in complex clothing details and of artifacts in occluded human-body regions at the same time. To tackle this issue, we propose a non-local virtual try-on network, NL-VTON. Because convolution is a local operation, limited by its kernel size and rectangular receptive field, it is unsuitable for the large non-rigid transformations of persons and clothes in virtual try-on; we therefore introduce a non-local feature attention module and a grid regularization loss to capture detailed features of complex clothes, and design a human-body segmentation prediction network to further alleviate artifacts in occluded regions. Quantitative and qualitative experiments on the Zalando dataset demonstrate that our proposed method significantly improves the preservation of body and clothing features compared with state-of-the-art methods.
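The non-local module the abstract refers to follows the general self-attention pattern of Wang et al.'s non-local neural networks: every spatial position attends to every other position, so the receptive field is global rather than bounded by a convolution kernel. A minimal NumPy sketch with illustrative random weights (not the paper's trained module):

```python
import numpy as np

def non_local_block(x, w_theta, w_phi, w_g):
    """Generic non-local (self-attention) operation over feature vectors.

    x: (N, C) array of N spatial positions with C channels.
    w_theta, w_phi, w_g: (C, C) projection matrices (learned in practice).
    Each output position aggregates features from ALL positions, so the
    effective receptive field is global, not a local kernel window.
    """
    theta = x @ w_theta                       # queries, (N, C)
    phi = x @ w_phi                           # keys,    (N, C)
    g = x @ w_g                               # values,  (N, C)
    attn = theta @ phi.T                      # pairwise affinities, (N, N)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # softmax over positions
    return x + attn @ g                       # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
w = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]
y = non_local_block(x, *w)
print(y.shape)  # (16, 8)
```

In a real network the flattened spatial grid of a feature map plays the role of the N positions, and the projections are 1×1 convolutions.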



2021 ◽  
Author(s):  
Alexis R. Tudor ◽  
Gunner Stone ◽  
Alireza Tavakkoli ◽  
Emily M. Hand


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1187
Author(s):  
Bin Liu ◽  
Weiming Wang ◽  
Jun Zhou ◽  
Bo Li ◽  
Xiuping Liu

Canonical extrinsic representations of non-rigid shapes in different poses are preferable in many computer graphics applications, such as shape correspondence and retrieval. The main reason is that they provide a pose-invariant signature for these tasks, which significantly reduces the difficulty caused by pose variation. Existing methods based on multidimensional scaling (MDS) always introduce significant geometric distortion. In this paper, we present a novel shape unfolding algorithm that deforms any given 3D shape into a canonical pose invariant to non-rigid transformations. The proposed method effectively preserves the local structure of a given 3D model by regularizing a local rigid-transform energy based on shape deformation techniques, and largely reduces geometric distortion. Our algorithm is quite simple, requiring only the solution of two linear systems in an alternating iteration. Its computational efficiency can be improved with parallel computation, and its robustness is guaranteed with a cascade strategy. Experimental results demonstrate the enhanced efficacy of our algorithm compared with state-of-the-art methods for 3D shape unfolding.
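The paper's own unfolding energy is not reproduced in this abstract, but the classical MDS baseline it improves on is compact enough to sketch: given a matrix of pairwise (e.g. geodesic) distances, double-center the squared distances and embed via the top eigenpairs. The geometric distortion criticized above arises exactly in this embedding step.

```python
import numpy as np

def classical_mds(d, k=3):
    """Classical multidimensional scaling.

    d: (n, n) matrix of pairwise distances.
    Returns an (n, k) embedding whose Euclidean distances approximate d.
    When d contains geodesic distances of a bent shape, the embedding is
    a pose-normalized "canonical form" (at the cost of local distortion).
    """
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For distances that are already Euclidean, the embedding reproduces the input geometry exactly, up to a rigid transformation; distortion appears only when geodesic distances of a curved surface are forced into a flat Euclidean space.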



Author(s):  
Patrick Talon ◽  
Juan Ignacio Bravo Perez-Villar ◽  
Anneley Hadland ◽  
Nina Sofia Wyniawskyj ◽  
David Petit ◽  
...  


2020 ◽  
Vol 10 (3) ◽  
pp. 1156
Author(s):  
Chang Shu ◽  
Lin-Lin Li ◽  
Guoqing Li ◽  
Xi Chen ◽  
Hua Han

In this paper, we propose a novel noniterative algorithm to simultaneously estimate optimal rigid transformations for serial section images, which is a key component in performing volume reconstructions of serial sections of biological tissue. To avoid the error accumulation and propagation caused by current algorithms, we add an extra condition: that the positions of the first and last section images should remain unchanged. This constrained simultaneous registration problem has not previously been solved. Our solution is noniterative; thus, it can simultaneously compute rigid transformations for a large number of serial section images in a short time. We demonstrate that our algorithm obtains optimal solutions under ideal conditions and shows great robustness under nonideal circumstances. Further, we experimentally show that our algorithm outperforms state-of-the-art methods in terms of speed and accuracy.
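The paper's simultaneous, endpoint-constrained solver is not given in the abstract, but its pairwise building block is the classical least-squares rigid alignment of corresponding point sets (the Kabsch / orthogonal Procrustes problem), which can be sketched as:

```python
import numpy as np

def best_rigid_transform(p, q):
    """Least-squares rigid transform (R, t) mapping points p onto q.

    p, q: (n, 3) corresponding point sets. Solved in closed form via
    SVD of the cross-covariance matrix; this is the pairwise step that
    serial-section registration pipelines chain together (the paper's
    noniterative simultaneous solver is not reproduced here).
    """
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    h = (p - cp).T @ (q - cq)                 # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    s = np.eye(3)
    s[2, 2] = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ s @ u.T
    t = cq - r @ cp
    return r, t
```

Chaining such pairwise transforms section by section is what accumulates and propagates error along the stack; fixing the first and last sections, as proposed above, removes that drift.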



2019 ◽  
Vol 86 (11) ◽  
pp. 685-698 ◽  
Author(s):  
Markus Ulrich ◽  
Patrick Follmann ◽  
Jan-Hendrik Neudeck

Abstract
Matching, i.e. determining the exact 2D pose (e.g., position and orientation) of objects, is still one of the key tasks in machine vision applications like robot navigation, measuring, or grasping an object. There are many classic approaches to matching, based on edges or on the pure gray values of the template. In recent years, deep learning has been applied mainly to more difficult tasks, where the objects of interest come from many different categories with high intra-class variation and classic algorithms fail. In this work, we compare one of the latest deep-learning-based object detectors with classic shape-based matching. We evaluate the methods both on a matching dataset and on an object detection dataset that contains rigid objects and is thus also suitable for shape-based matching. We show that for datasets of this type, where rigid objects appear under rigid transformations, shape-based matching still outperforms recent object detectors in runtime, robustness, and precision if only a single template image per object is used. On the other hand, we show that for object detection, the deep-learning-based approach outperforms the classic approach if annotated data is available for training. Ultimately, the choice of the best-suited approach depends on the conditions and requirements of the application.
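The gray-value flavor of classic matching mentioned above can be illustrated with brute-force normalized cross-correlation; this is a deliberately simplified sketch, since industrial shape-based matching additionally uses edge orientations, rotation sampling, and image pyramids:

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalized cross-correlation template matching.

    Returns the (row, col) of the best match of `template` in `image`.
    Mean subtraction and normalization make the score invariant to
    linear brightness changes; the score is 1.0 at a perfect match.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Real systems replace the quadruple loop with FFT-based correlation or, as here, with edge-based shape matching, which is what makes the single-template setting competitive in runtime.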



2019 ◽  
Vol 11 (14) ◽  
pp. 3894
Author(s):  
Fabrice Monna ◽  
Nicolas Navarro ◽  
Jérôme Magail ◽  
Rodrigue Guillon ◽  
Tanguy Rolland ◽  
...  

Photospheres, or 360° photos, offer valuable opportunities for perceiving space, especially when viewed through head-mounted displays designed for virtual reality. Here, we propose to take advantage of this potential for archaeology and cultural heritage, and to extend it by augmenting the images with existing documentation, such as 2D maps or 3D models, resulting from research studies. Photospheres are generally produced in the form of distorted equirectangular projections, neither georeferenced nor oriented, so that any registration of external documentation is far from straightforward. The present paper seeks to fill this gap by providing simple practical solutions, based on rigid and non-rigid transformations. Immersive virtual environments augmented by research materials can be very useful to contextualize archaeological discoveries, and to test research hypotheses, especially when the team is back at the laboratory. Colleagues and the general public can also be transported to the site, almost physically, generating an authentic sense of presence, which greatly facilitates the contextualization of the archaeological information gathered. This is especially true with head-mounted displays, but the resulting images can also be inspected using applications designed for the web, or viewers for smartphones, tablets and computers.
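The registration problem described above starts from the equirectangular (plate carrée) layout used by most photosphere viewers, where longitude and latitude map linearly to pixel coordinates. A minimal sketch of that mapping, assuming the common convention of longitude in [-180, 180) across the image width and latitude from +90 at the top to -90 at the bottom:

```python
def lonlat_to_pixel(lon_deg, lat_deg, width, height):
    """Map longitude/latitude (degrees) to equirectangular pixel coords.

    Assumes lon in [-180, 180) maps linearly to x in [0, width) and
    lat in [90, -90] maps linearly to y in [0, height]. Registering a
    georeferenced point into an oriented photosphere then reduces to a
    rotation of the (lon, lat) direction followed by this linear map.
    """
    x = (lon_deg + 180.0) / 360.0 * width
    y = (90.0 - lat_deg) / 180.0 * height
    return x, y
```

Because the sphere-to-plane mapping is linear in angles, straight lines and distances are distorted everywhere except the equator, which is why overlaying flat 2D maps or 3D models requires the rigid and non-rigid corrections discussed in the paper.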



2019 ◽  
Vol 24 (7) ◽  
pp. 414-421
Author(s):  
Peter Wiles ◽  
Travis Lemon ◽  
Alessandra King

Students move from slides, flips, and turns into reasoning about the characteristics of rigid transformations.


