Engineering pupil function for optical adversarial attack

2022 ◽  
Author(s):  
Kyulim Kim ◽  
Jeongsoo Kim ◽  
Seungri Song ◽  
Jun-Ho Choi ◽  
Chulmin Joo ◽  
...


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4011
Author(s):  
Chuanwei Yao ◽  
Yibing Shen

The image deconvolution technique can recover a latent sharp image from images blurred by aberrations. Accurately obtaining the point spread function (PSF) of the imaging system is a prerequisite for robust deconvolution. In this paper, a computational imaging method based on wavefront coding is proposed to reconstruct the wavefront aberration of a photographic system. First, a group of images affected by local aberration is obtained by applying wavefront coding at the optical system's spectral plane. Then, the PSF is recovered accurately by pupil function synthesis, and finally, the aberration-affected images are restored by image deconvolution. After aberration correction, the image's coefficient of variation and mean relative deviation improve by 60% and 30%, respectively, and the image reaches the resolution limit of the sensor, as verified with a resolution test board. Simulation experiments further confirm the method's robustness to noise. By shifting the complexity of optical design into a post-processing algorithm, this method offers an economical and efficient strategy for obtaining high-resolution, high-quality images with a simple large-field lens.
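The recovery described above hinges on deconvolving the blurred image with the estimated PSF. As a rough illustration of that final stage only (not the paper's pupil-function-synthesis pipeline), the following Python sketch applies a standard Wiener deconvolution, assuming a blurred grayscale image and a PSF array of the same shape are already available; the `nsr` regularization value is an arbitrary assumption.

```python
# Illustrative only: a standard Wiener deconvolution, assuming the PSF has
# already been estimated. This is not the paper's pupil-function-synthesis pipeline.
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, nsr: float = 0.01) -> np.ndarray:
    """Recover a sharp image from `blurred` given a PSF of the same shape.

    `nsr` is an assumed noise-to-signal ratio used as regularization.
    """
    # Centered PSF -> optical transfer function of the same size as the image.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    # Wiener filter: F_hat = conj(H) * G / (|H|^2 + NSR)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```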


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3922
Author(s):  
Sheeba Lal ◽  
Saeed Ur Rehman ◽  
Jamal Hussain Shah ◽  
Talha Meraj ◽  
Hafiz Tayyab Rauf ◽  
...  

Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples cause DL models to misclassify instances that humans would consider benign, and their effects have also been demonstrated in practical, physical-world scenarios. Adversarial attacks and defenses, and the reliability of machine learning more broadly, have therefore drawn growing interest and have become a hot research topic in recent years. We introduce a framework that defends against an adversarial speckle-noise attack by combining adversarial training with a feature fusion strategy, preserving classification with correct labelling. We evaluate and analyze the adversarial attacks and defenses on retinal fundus images for the diabetic retinopathy recognition problem. Results obtained on the retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and show that the proposed defensive model is robust.
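As a loose illustration of the kind of defense described (not the authors' implementation), the sketch below perturbs a batch of images with multiplicative speckle noise and mixes the perturbed batch into a training step. The model, loss function, and noise intensity are placeholder assumptions, and the paper's feature fusion component is omitted.

```python
# Loose illustration of speckle-noise adversarial training (not the authors' code).
# `model`, `loss_fn`, and the noise intensity are placeholder assumptions; the
# feature fusion component of the paper is omitted here.
import torch

def speckle_attack(images: torch.Tensor, intensity: float = 0.1) -> torch.Tensor:
    """Multiplicative (speckle) noise: x' = x + x * n, with n ~ N(0, intensity^2)."""
    noise = torch.randn_like(images) * intensity
    return torch.clamp(images + images * noise, 0.0, 1.0)

def adversarial_training_step(model, images, labels, optimizer, loss_fn):
    """One training step that mixes the clean batch with a speckle-perturbed batch."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) + loss_fn(model(speckle_attack(images)), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```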


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Victor R. Kebande ◽  
Sadi Alawadi ◽  
Feras Awaysheh ◽  
Jan A. Persson

Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 52
Author(s):  
Richard Evan Sutanto ◽  
Sukho Lee

Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such manipulated data are called adversarial examples, and using them to attack an AI system is called an adversarial attack; such attacks can pose a major threat to an AI-led society. Therefore, major IT companies such as Google are now studying ways to build AI systems that are robust against adversarial attacks by developing effective defense methods. However, establishing an effective defense is difficult, partly because it is hard to know in advance what kind of adversarial attack method an opponent will use. In this paper, we therefore propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only with normal images and also use it as the initial condition of the Deep Image Prior (DIP) network. This is in contrast to other neural-network-based detection methods, which require many adversarial noisy images to train the network. Experimental results indicate the validity of the proposed method.
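The sketch below shows one plausible detection heuristic in this spirit, not the authors' exact DIP-based algorithm: an input is reconstructed by a network trained only on clean images, and a large disagreement between the classifier's predictions on the original and reconstructed inputs is flagged as adversarial. `blurring_net`, `classifier`, and the threshold are assumed placeholders.

```python
# One plausible detection heuristic in this spirit (not the authors' exact DIP algorithm):
# reconstruct the input with a network trained only on clean images and flag a large
# prediction disagreement as adversarial. `blurring_net`, `classifier`, and the
# threshold are assumed placeholders; inputs are batched tensors.
import torch
import torch.nn.functional as F

def detect_adversarial(image: torch.Tensor, blurring_net, classifier, threshold: float = 0.5) -> bool:
    """Return True if the input is suspected to carry adversarial noise."""
    with torch.no_grad():
        reconstructed = blurring_net(image)            # clean-image prior suppresses crafted noise
        p_original = F.softmax(classifier(image), dim=1)
        p_filtered = F.softmax(classifier(reconstructed), dim=1)
        # A large divergence between the two predictions suggests adversarial perturbation.
        divergence = F.kl_div(p_filtered.log(), p_original, reduction="batchmean")
    return divergence.item() > threshold
```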


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research on visualization and pattern recognition in computer science. Although deep neural networks achieve satisfactory performance in image and voice recognition, pattern analysis, and intrusion detection, they perform poorly against adversarial examples. Introducing a small amount of noise into the original data can cause deep neural networks to misclassify the resulting adversarial examples, even though humans still perceive them as normal. In this paper, a robust diversity adversarial training method against adversarial attacks is demonstrated. In this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, and MNIST and Fashion-MNIST were used as datasets. Results revealed that the diversity training method lowered the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy rates of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
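As a hedged illustration of the general recipe (not the paper's TensorFlow implementation), the Python sketch below augments each training batch with adversarial examples crafted at several perturbation strengths using FGSM; the model, loss function, and epsilon values are placeholder assumptions.

```python
# Hedged illustration of a "diversity" adversarial training recipe (not the paper's
# TensorFlow code): each batch is augmented with adversarial examples crafted at
# several perturbation strengths via FGSM. `model`, `loss_fn`, and the epsilon
# values are placeholder assumptions.
import torch

def fgsm(model, images, labels, eps, loss_fn):
    """Fast Gradient Sign Method: perturb inputs along the sign of the input gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss_fn(model(images), labels).backward()
    return torch.clamp(images + eps * images.grad.sign(), 0.0, 1.0).detach()

def diversity_training_step(model, images, labels, optimizer, loss_fn, eps_list=(0.05, 0.1, 0.2)):
    """Train on the clean batch plus adversarial batches of varying strength."""
    # Craft the adversarial batches first (this also populates model gradients) ...
    adv_batches = [fgsm(model, images, labels, eps, loss_fn) for eps in eps_list]
    # ... then reset gradients before the actual parameter update.
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    for adv in adv_batches:
        loss = loss + loss_fn(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```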

