visual fidelity
Recently Published Documents

TOTAL DOCUMENTS: 45 (five years: 11)
H-INDEX: 15 (five years: 2)
2021 ◽ Vol 173 ◽ pp. 102834
Author(s): Yang-Wai Chow, Willy Susilo, Jianfeng Wang, Richard Buckland, Joonsang Baek, ...

2020 ◽ Vol 2020 (15) ◽ pp. 259-1-259-4
Author(s): Francisco Diaz-Barrancas, Halina C. Cwierz, Pedro J. Pardo, Angel Luis Perez, Maria Isabel Suero

Virtual reality has advanced rapidly in recent years; its use in industries such as video games, automotive, and medicine is now a reality. Previous work has established that color is one of the most important characteristics for creating a sense of realism in the user. Using spectral techniques, we show that it is possible to improve the fidelity with which a real scene is reproduced inside a virtual reality scene. To this end, we developed a workflow that allows the user to compare real scenes with their virtual reproductions. The results of this comparison show an improvement in the visual fidelity of real scenes reproduced inside a virtual reality system relative to previous work.
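Comparisons like the one described above are typically quantified with a color-difference metric in a perceptually uniform space. As a minimal sketch (not the authors' actual pipeline), the CIE76 ΔE*ab distance between a measured real color and its rendered counterpart, both expressed in CIELAB, can be computed as:

```python
def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors,
    each given as an (L*, a*, b*) tuple."""
    return sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)) ** 0.5

# Hypothetical measurements of the same surface in the real and virtual scene.
real_lab = (53.2, 80.1, 67.2)
virtual_lab = (52.8, 79.5, 66.9)

de = delta_e_cie76(real_lab, virtual_lab)
# A ΔE below roughly 2.3 is commonly taken as a just-noticeable difference.
```

Later formulas (CIEDE2000) weight lightness, chroma, and hue differences separately and track perception better, but CIE76 illustrates the idea.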


2019 ◽ Vol 26 (7) ◽ pp. S165
Author(s): A Prabhu, S Masghati, PW Hernandez, SJ Kim, NC Klein

Author(s): Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, ...

Deep neural networks (DNNs) are vulnerable to adversarial examples: inputs with imperceptible perturbations that mislead DNNs into incorrect results. Recently, the adversarial patch, with noise confined to a small, localized region, has emerged because of its easy applicability in the real world. However, existing attack strategies are still far from generating visually natural patches with strong attacking ability, since they often ignore the perceptual sensitivity of the attacked network to the adversarial patch, including both correlations with the image context and visual attention. To address this problem, this paper proposes a perceptual-sensitive generative adversarial network (PS-GAN) that simultaneously enhances the visual fidelity and the attacking ability of the adversarial patch. To improve visual fidelity, we treat patch generation as a patch-to-patch translation via an adversarial process, taking any type of seed patch as input and outputting a similar adversarial patch with high perceptual correlation with the attacked image. To further enhance attacking ability, an attention mechanism coupled with adversarial generation is introduced to predict the critical areas for placing patches, which helps produce more realistic and aggressive patches. Extensive experiments under semi-whitebox and black-box settings on two large-scale datasets, GTSRB and ImageNet, demonstrate that the proposed PS-GAN outperforms state-of-the-art adversarial patch attack methods.
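The attention-guided placement step can be illustrated with a toy sketch. This is not the authors' implementation: the function name and the dense window search are assumptions; PS-GAN predicts placement with a learned attention model rather than an exhaustive scan. The sketch simply pastes a patch at the window where a given attention (saliency) map accumulates the most mass:

```python
import numpy as np

def place_patch_at_attention_peak(image, patch, attention):
    """Toy stand-in for attention-guided patch placement: slide a window the
    size of the patch over the attention map, pick the window with the largest
    total attention, and paste the patch there."""
    h, w = image.shape[:2]
    ph, pw = patch.shape[:2]
    best_sum, best_pos = -1.0, (0, 0)
    # Scan every valid top-left position (patch must stay inside the image).
    for y in range(h - ph + 1):
        for x in range(w - pw + 1):
            window_sum = attention[y:y + ph, x:x + pw].sum()
            if window_sum > best_sum:
                best_sum, best_pos = window_sum, (y, x)
    out = image.copy()
    y, x = best_pos
    out[y:y + ph, x:x + pw] = patch  # overwrite pixels with the patch
    return out, best_pos
```

In the full method this placement interacts with the generator: patches are both optimized for attack strength at the chosen location and constrained by the GAN loss to resemble the seed patch.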


2019 ◽ Author(s): Pranava Madhyastha, Josiah Wang, Lucia Specia
