feature preserving
Recently Published Documents

TOTAL DOCUMENTS: 265 (five years: 45)
H-INDEX: 24 (five years: 5)


Author(s):  
Shigang Wang ◽  
Shuai Peng ◽  
Jiawen He

The point cloud of an oral scan denture contains a large amount of data and many redundant points. To address the problems of incomplete feature preservation and the appearance of holes in relatively flat regions when processing point cloud data, a point cloud simplification algorithm based on feature preservation is proposed. First, the algorithm builds the spatial topology of the point cloud with a kd-tree and searches the k-neighborhood of each sampling point. On this basis it computes, for each point, the curvature, the angle between normal vectors, the distance from the point to the neighborhood centroid, and the standard deviation and average of the distances from the point to its neighbors; the detailed features of the point cloud are then extracted by multi-feature extraction and threshold determination. For the non-feature region, the point cloud is spatially partitioned with an octree to obtain the K value and the initial cluster centers for the K-means clustering algorithm, and further subdivision yields the simplified result for the non-feature region. Finally, the extracted detail features and the simplified non-feature region are merged to obtain the final simplification result. Experimental results show that the algorithm better retains the feature information of the point cloud model and effectively avoids holes during simplification. The simplified results offer good smoothness, simplicity, and precision, and are of high practical value.
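The kd-tree neighborhood search and feature-thresholding step described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it uses only one of the listed criteria (a covariance-based surface-variation proxy for curvature), and the function name and threshold are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_feature_points(points, k=16, curvature_thresh=0.05):
    """Flag candidate feature points of a point cloud (illustrative
    single-criterion sketch of the paper's multi-feature extraction).

    For each point, the k-neighborhood is found via a kd-tree, and the
    smallest-eigenvalue ratio of the local covariance matrix serves as a
    surface-variation (curvature) proxy; points above the threshold are
    kept as detail features.
    """
    tree = cKDTree(points)                   # spatial topology of the cloud
    _, idx = tree.query(points, k=k + 1)     # k neighbors + the point itself
    feature_mask = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs]
        centroid = nbr_pts.mean(axis=0)
        cov = np.cov((nbr_pts - centroid).T)
        eigvals = np.sort(np.linalg.eigvalsh(cov))   # ascending order
        variation = eigvals[0] / eigvals.sum()       # ~0 on flat regions
        feature_mask[i] = variation > curvature_thresh
    return feature_mask
```

In a full pipeline, the points where the mask is False (the flat, non-feature region) would then be handed to the octree-seeded K-means step for clustering-based reduction.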


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ze Lin Tan ◽  
Jing Bai ◽  
Shao Min Zhang ◽  
Fei Wei Qin

In an image-based virtual try-on network, the features of both the target clothes and the input human body should be preserved. However, current techniques fail to simultaneously solve the blurring of complex clothing details and the artifacts in occluded regions of the human body. To tackle this issue, we propose NL-VTON, a non-local virtual try-on network. Since convolution is a local operation, limited by its kernel size and rectangular receptive field and thus ill-suited to the large non-rigid transformations of persons and clothes in virtual try-on, we introduce a non-local feature attention module and a grid regularization loss to capture the detailed features of complex clothes, and design a human-body segmentation prediction network to further alleviate artifacts in occluded regions. Quantitative and qualitative experiments on the Zalando dataset demonstrate that our proposed method significantly improves the preservation of body and clothing features compared with state-of-the-art methods.
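The non-local feature attention idea referenced above, in which every spatial position attends to every other position rather than only a local window, can be sketched in NumPy. This is a generic non-local (self-attention) block in the style of Wang et al.'s non-local networks, not the paper's exact module; the projection matrices `w_theta`, `w_phi`, `w_g`, `w_out` are illustrative parameters.

```python
import numpy as np

def nonlocal_block(x, w_theta, w_phi, w_g, w_out):
    """Minimal non-local (self-attention) feature block.

    x has shape (N, C), where N = H*W flattened spatial positions and
    C = channels. Unlike a convolution, the affinity matrix lets every
    position aggregate context from the whole feature map.
    """
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    attn = theta @ phi.T                            # (N, N) pairwise affinity
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over all positions
    y = attn @ g                                    # globally aggregated features
    return x + y @ w_out                            # residual connection
```

The residual connection means the block can fall back to the identity mapping, which is why such modules can be dropped into an existing generator without destabilizing it.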


2021 ◽  
Vol 13 (19) ◽  
pp. 3968
Author(s):  
Daning Tan ◽  
Yu Liu ◽  
Gang Li ◽  
Libo Yao ◽  
Shun Sun ◽  
...  

In recent years, the interpretation of SAR images has improved significantly with the development of deep learning, and using conditional generative adversarial nets (CGANs) for SAR-to-optical transformation, also known as image translation, has become popular. Most existing CGAN-based image translation methods are modifications of CycleGAN and pix2pix and focus in practice on style transformation. Moreover, SAR and optical images have heterogeneous features and large spectral differences, leading to problems such as incomplete image details and spectral distortion when transforming SAR images of urban or semi-urban areas and complex terrain. To address these problems of SAR-to-optical transformation, this paper proposes Serial GANs, a feature-preserving heterogeneous remote sensing image transformation model, for the first time. The model completes the SAR-to-optical transformation with a serial Despeckling GAN and Colorization GAN: the Despeckling GAN transforms the SAR image into an optical gray image, retaining texture details and semantic information, and the Colorization GAN transforms that gray image into an optical color image while keeping the structural features unchanged. The model provides a new idea for heterogeneous image transformation: through the decoupled network design, structural detail information and spectral information remain relatively independent during the transformation, enhancing the detail of the generated optical images and reducing their spectral distortion.
Using SEN-2 satellite images as the reference, this paper compares the degree of similarity between the images generated by different models and the reference. The results reveal that the proposed model has clear advantages in feature reconstruction and parameter economy, and that Serial GANs have great potential for decoupled image transformation.
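The serial, decoupled structure of the pipeline can be illustrated with stand-in stages. The two functions below are placeholders, not the paper's trained generators: a median filter stands in for the Despeckling GAN (speckle suppression on a single-channel image) and a channel replication stands in for the Colorization GAN (adding spectral channels without altering structure). Only the composition pattern is the point.

```python
import numpy as np

def despeckle_g(sar):
    """Stand-in for the Despeckling GAN generator: a 3x3 median filter
    that suppresses speckle while keeping a single-channel gray output."""
    h, w = sar.shape
    pad = np.pad(sar, 1, mode="edge")
    shifted = [pad[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(shifted, axis=0)

def colorize_g(gray):
    """Stand-in for the Colorization GAN generator: expands the gray
    image to three channels, leaving structural features unchanged."""
    return np.repeat(gray[..., None], 3, axis=-1)

def serial_gans(sar):
    # Stage 1: SAR -> optical gray (texture, semantics).
    # Stage 2: optical gray -> optical color (spectral information).
    return colorize_g(despeckle_g(sar))
```

Because the two stages only communicate through the intermediate gray image, structural detail and spectral information are handled by separate networks, which is the decoupling the abstract describes.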


2021 ◽  
Vol 30 (05) ◽  
Author(s):  
Ping Guan ◽  
Jun Qiang ◽  
Wuji Liu ◽  
Xixi Li ◽  
Dongfang Wang

2021 ◽  
Vol 7 (8) ◽  
pp. 153
Author(s):  
Jieying Wang ◽  
Jiří Kosinka ◽  
Alexandru Telea

Medial descriptors are of significant interest for image simplification, representation, manipulation, and compression. B-splines, in turn, are well-known tools for specifying smooth curves in computer graphics and geometric design. In this paper, we integrate the two by modeling medial descriptors with stable and accurate B-splines for image compression. Representing medial descriptors with B-splines not only greatly improves compression but also yields an effective vector representation of raster images. A comprehensive evaluation shows that our Spline-based Dense Medial Descriptors (SDMD) method achieves much higher compression ratios at similar or even better quality than the well-known JPEG technique. We illustrate our approach with applications in generating super-resolution images and salient-feature-preserving image compression.
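The core encoding step, fitting a B-spline to a medial-axis branch so that a handful of knots and coefficients replace every pixel along the branch, can be sketched with SciPy's smoothing-spline routines. This is an illustrative sketch, not SDMD itself; the function name and parameters are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_medial_branch(xs, ys, smooth=1.0, n_eval=100):
    """Fit a cubic B-spline to one medial-axis branch (illustrative of
    the spline-encoding idea in SDMD).

    Returns the compact spline representation tck = (knots, coefficients,
    degree) -- the data actually stored for compression -- plus a dense
    reconstruction of the branch for quality checks.
    """
    tck, _ = splprep([xs, ys], s=smooth)   # smoothing controls the rate/quality trade-off
    u = np.linspace(0.0, 1.0, n_eval)
    rx, ry = splev(u, tck)                 # reconstructed branch points
    return tck, np.column_stack([rx, ry])
```

Raising `smooth` drops more control points (better compression, coarser branch); `smooth=0` interpolates the branch exactly, mirroring the quality/ratio trade-off the evaluation measures.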


2021 ◽  
Vol 13 (16) ◽  
pp. 3089
Author(s):  
Annan Zhou ◽  
Yumin Chen ◽  
John P. Wilson ◽  
Heng Su ◽  
Zhexin Xiong ◽  
...  

High-resolution DEMs are important spatial data used in a wide range of analyses and applications. However, the high cost of obtaining high-resolution DEM data over large areas with higher-precision sensors poses a challenge for many geographic analysis applications. Inspired by the excellent performance of convolutional neural networks (CNNs) in super-resolution (SR) image analysis, this paper investigates the use of deep residual neural networks to generate high-resolution DEMs from low-resolution DEMs. An enhanced double-filter deep residual neural network (EDEM-SR) method is proposed, which uses filters with different receptive field sizes to fuse and extract features and reconstruct a more realistic high-resolution DEM. The results were compared with those of the bicubic, bilinear, and EDSR methods. In terms of numerical accuracy and terrain feature preservation, EDEM-SR generates reconstructed DEMs that better match the original DEMs, achieves lower MAE and RMSE, and improves the accuracy of derived terrain parameters; MAE is reduced by about 30 to 50% compared with traditional interpolation methods. The results show how the EDEM-SR method can generate high-resolution DEMs from low-resolution DEMs.
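The two accuracy measures used to compare the methods above, MAE and RMSE between a reference DEM and a reconstructed DEM, are straightforward to compute. A minimal sketch, with an assumed function name:

```python
import numpy as np

def dem_errors(dem_true, dem_pred):
    """Mean absolute error and root-mean-square error between a
    reference DEM and a reconstructed (super-resolved) DEM, the two
    metrics used to rank EDEM-SR against the interpolation baselines."""
    diff = dem_pred - dem_true
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    return mae, rmse
```

Because RMSE squares the residuals, it penalizes the occasional large elevation error (e.g. on ridgelines) more heavily than MAE does, which is why reporting both gives a fuller picture of terrain-feature preservation.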

