style consistency
Recently Published Documents


TOTAL DOCUMENTS: 29 (five years: 10)
H-INDEX: 6 (five years: 2)

2021, Vol. 10 (9), pp. 593
Author(s): Kejia Huang, Chenliang Wang, Shaohua Wang, Runying Liu, Guoxiong Chen, ...

With the extensive application of big spatial data and the emergence of spatial computing, augmented reality (AR) map rendering has attracted significant attention. A common issue in existing solutions is that AR-GIS systems rely on different platform-specific graphics libraries on different operating systems, so rendering implementations vary from platform to platform. This causes performance degradation and rendering styles that are inconsistent across environments. However, high-performance rendering consistency across devices is critical in AR-GIS, especially for edge collaborative computing. In this paper, we present a high-performance, platform-independent AR-GIS rendering engine: the augmented reality universal graphics library (AUGL) engine. A unified cross-platform interface is proposed to preserve AR-GIS rendering style consistency across platforms. High-performance AR-GIS map symbol drawing models are defined and implemented on top of a unified algorithm interface. We also develop a pre-caching strategy, optimized spatial-index querying, and a GPU-accelerated vector drawing algorithm that minimizes IO latency throughout the rendering process. Comparisons with existing AR-GIS visualization engines indicate that the performance of the AUGL engine is twice that of the AR-GIS rendering engines on the Android, iOS, and Vuforia platforms. The drawing efficiency for vector polygons is improved significantly, and the rendering performance is more than three times better than the average performance of existing Android and iOS systems.
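
As a rough illustration of the engine design described above, the sketch below combines a platform-independent symbol-drawing interface, a toy grid spatial index, and an LRU tile pre-cache that keeps IO off the per-frame rendering path. It is a minimal Python sketch under assumed names and structure, not the AUGL API, which the abstract does not specify.

# Illustrative sketch only: class and method names are assumptions.
from abc import ABC, abstractmethod
from collections import OrderedDict

class SymbolRenderer(ABC):
    """Unified drawing interface; each platform supplies its own backend."""
    @abstractmethod
    def draw_polygon(self, vertices, style): ...
    @abstractmethod
    def draw_line(self, vertices, style): ...

class GridIndex:
    """Toy spatial index: buckets feature ids by integer grid cell."""
    def __init__(self, cell=256):
        self.cell, self.cells = cell, {}
    def insert(self, fid, x, y):
        key = (int(x // self.cell), int(y // self.cell))
        self.cells.setdefault(key, []).append(fid)
    def query(self, x, y):
        return self.cells.get((int(x // self.cell), int(y // self.cell)), [])

class TileCache:
    """LRU pre-cache: tiles near the camera are loaded before they become
    visible, hiding IO latency from the per-frame rendering loop."""
    def __init__(self, capacity=64):
        self.capacity, self.tiles = capacity, OrderedDict()
    def prefetch(self, key, loader):
        if key not in self.tiles:
            self.tiles[key] = loader(key)          # IO happens off the hot path
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)     # evict least recently used
        else:
            self.tiles.move_to_end(key)
        return self.tiles[key]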


2021
Author(s): Matteo Cardaioli, Mauro Conti, Andrea Di Sorbo, Enrico Fabrizio, Sonia Laudanna, ...

Symmetry, 2020, Vol. 12 (12), pp. 2074
Author(s): Hung-Hsiang Wang, Chih-Ping Chen

Brand style and product identity are critical to the core value of a brand, yet identifying style and identity still depends heavily on the judgment of human experts. As deep learning for image recognition has made rapid progress in recent years, its application to brand style and design features has potential. This investigation assessed the car styling evolution of two car brands, Dodge and Jaguar, by training a convolutional neural network. The method used heat map analysis from deep learning and was supplemented by statistical methods. The two datasets in this investigation were a car design features dataset and a car style images dataset. Results using the deep learning method show that the average accuracy of the last ten models under verification was 95.90%, and that 78% of the new cars continue the early brand style. Moreover, Jaguar had a higher proportion of style consistency than Dodge. Results using statistical methods reveal that the two brands evolved along different trends in vehicle length. In terms of design features, Jaguar had no noticeable rocket-tailfin design features. The deep learning heat map method indicates the focus area of a design feature, and the method is beneficial for future brand style analysis.
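
As a hedged sketch of the kind of pipeline the abstract describes, the snippet below substitutes a standard ResNet-18 with a class-activation-map (CAM) heat map; the authors' exact network, layers, datasets, and class labels are not given in the abstract, so all names and choices here are assumptions, and the classifier head would first need fine-tuning on the brand image data.

# Hedged sketch: CAM heat maps for brand-style analysis (not the authors' exact model).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # e.g. Dodge vs. Jaguar; untrained here
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def cam_heatmap(img_path):
    """Return a (7, 7) class-activation map highlighting the image regions
    that drive the predicted brand-style class, plus the predicted class."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = torch.nn.Sequential(*list(model.children())[:-2])(x)   # (1, 512, 7, 7)
        logits = model(x)
        cls = logits.argmax(dim=1).item()
        weights = model.fc.weight[cls]                                  # (512,) classifier weights
        cam = torch.einsum("c,chw->hw", weights, feats[0])              # weighted sum of feature maps
        cam = F.relu(cam)
        return (cam / (cam.max() + 1e-8)).numpy(), cls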


Author(s): Ning Wang, Jingyuan Li, Lefei Zhang, Bo Du

We study the task of image inpainting, in which an image with a missing region is recovered with plausible content. Recent approaches based on deep neural networks have exhibited potential for producing elegant detail and are able to take advantage of background information, which provides texture cues for the missing region of the image. These methods often perform pixel/patch-level replacement on the deep feature maps of the missing region and therefore enable the generated content to have texture similar to the background region. However, this kind of replacement is a local strategy and often performs poorly when the background information is misleading. To this end, in this study, we propose a multi-scale image contextual attention learning (MUSICAL) strategy that flexibly handles richer background information while avoiding its misuse. However, such a strategy alone may not generate content in a reasonable style. To address this issue, both the style loss and the perceptual loss are introduced into the proposed method to achieve style consistency in the generated image. Furthermore, we have also noticed that replacing some of the downsampling layers in the baseline network with stride-1 dilated convolution layers is beneficial for producing sharper and more finely detailed results. Experiments on the Paris Street View, Places, and CelebA datasets indicate the superior performance of our approach compared to the state of the art.
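
As a hedged sketch of the two auxiliary losses mentioned above, the snippet below computes a perceptual loss and a Gram-matrix style loss from VGG-16 features; the specific backbone, layer choice, and loss weights are assumptions, since the abstract does not state them.

# Hedged sketch: perceptual and style losses from (assumed) VGG-16 layers.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = [3, 8, 15]   # relu1_2, relu2_2, relu3_3 (assumed layer choice)

def vgg_features(x):
    feats, out = [], x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in LAYERS:
            feats.append(out)
        if i == LAYERS[-1]:
            break
    return feats

def gram(f):
    # Normalized Gram matrix of a feature map, used as a style descriptor.
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_loss(pred, target):
    return sum(F.l1_loss(p, t) for p, t in zip(vgg_features(pred), vgg_features(target)))

def style_loss(pred, target):
    return sum(F.l1_loss(gram(p), gram(t)) for p, t in zip(vgg_features(pred), vgg_features(target)))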


Author(s): Shancheng Fang, Hongtao Xie, Jianjun Chen, Jianlong Tan, Yongdong Zhang

In this work, we propose an entirely learning-based method to automatically synthesize text sequences in natural images by leveraging conditional adversarial networks. As vanilla GANs struggle to capture structural text patterns, directly employing GANs for text image synthesis typically results in illegible images. Therefore, we design a two-stage architecture to generate repeated characters in images. First, a character generator attempts to synthesize local character appearance independently, so that legible characters in sequence can be obtained. To achieve style consistency across characters, we propose a novel style loss based on variance minimization. Second, we design a pixel-manipulation word generator constrained by self-regularization, which learns to convert local characters into a plausible word image. Experiments on the SVHN, ICDAR, and IIIT5K datasets demonstrate that our method is able to synthesize visually appealing text images. In addition, we show that the high-quality images synthesized by our method can be used to boost the performance of a scene text recognition algorithm.
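
One plausible reading of the variance-minimization style loss is sketched below: channel-wise feature statistics serve as a per-character style descriptor, and their variance across the characters of a word is penalized so that all characters share one style. The authors' exact formulation is not given in the abstract, so this is an assumption.

# Hedged sketch of a variance-minimization style loss (assumed formulation).
import torch

def variance_style_loss(char_features):
    """char_features: (N, C, H, W) feature maps, one per generated character.
    Uses channel-wise mean/std as a style descriptor and minimizes its
    variance across the N characters of a word."""
    n, c, h, w = char_features.shape
    flat = char_features.view(n, c, -1)
    style = torch.cat([flat.mean(dim=2), flat.std(dim=2)], dim=1)   # (N, 2C)
    return style.var(dim=0, unbiased=False).mean()

# Example: 5 characters with 64-channel feature maps
loss = variance_style_loss(torch.randn(5, 64, 16, 16))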

