USAGE OF ADVERSARIAL EXAMPLES TO PROTECT A HUMAN IMAGE FROM BEING DETECTED BY RECOGNITION SYSTEMS BASED ON DEEP NEURAL NETWORKS

Author(s):  
S. A. Sakulin ◽  
A. N. Alfimtsev ◽  
D. A. Loktev ◽  
A. O. Kovalenko ◽  
V. V. Devyatkov

Recently, human recognition systems based on deep machine learning, in particular on deep neural networks, have become widespread. Research on protection against recognition by such systems has therefore become relevant. This article proposes a method of designing a specially selected type of camouflage, applied to clothing, that protects a person both from recognition by a human observer and from a deep-neural-network recognition system. The camouflage is constructed from adversarial examples generated by a deep neural network. The article describes experiments on protecting a person from recognition by the Faster R-CNN (Region-based Convolutional Neural Network) Inception V2 and Faster R-CNN ResNet101 systems. The implementation of the camouflage is considered at a macro level, which assesses the combination of the camouflage and the background, and at a micro level, which analyzes the relationship between the properties of individual camouflage regions and those of adjacent regions, with constraints on their continuity, smoothness, closure, and asymmetry. The dependence of the camouflage characteristics on the observation conditions and the environment is also considered: the transparency of the atmosphere, the pixel intensities of the sky horizon and the background, the contrast between the background and the camouflaged object, and the distance to the object. As an example of a possible attack, a "black box" attack is considered, which involves preliminary testing of generated adversarial examples on a target recognition system without knowledge of the internal structure of that system. The results of these experiments showed the high efficiency of the proposed method in the virtual world, where there is access to each pixel of the image supplied to the input of the systems.
In the real world, the results are less impressive, which can be explained by color distortion when printing on fabric, as well as by the insufficient spatial resolution of the print.
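The adversarial-example idea behind such camouflage can be illustrated with the classic fast gradient sign method (FGSM). This is a generic sketch on a toy linear "detector" in NumPy, not the authors' camouflage-generation pipeline: the input is pushed along the sign of the loss gradient, the direction that most reduces the detector's confidence in the true label.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.5):
    """One-step fast gradient sign method against a logistic 'detector'
    p = sigmoid(w.x + b): move x along the sign of the loss gradient,
    the direction that most increases the loss for the true label."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)                  # toy detector weights
b = 0.0
x = w.copy()                            # an input the detector fires on
p_before = sigmoid(w @ x + b)           # high "person detected" score
x_adv = fgsm_perturb(x, w, b, y_true=1.0)
p_after = sigmoid(w @ x_adv + b)        # confidence drops after the attack
```

In the black-box setting described above, the gradient is not available directly, so perturbations would instead be validated by repeated queries to the target system.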

Author(s):  
Felix Specht ◽  
Jens Otto

Abstract Condition monitoring systems based on deep neural networks are used for system failure detection in cyber-physical production systems. However, deep neural networks are vulnerable to attacks with adversarial examples. Adversarial examples are manipulated inputs, e.g. sensor signals, that are able to mislead a deep neural network into misclassification. A consequence of such an attack may be the manipulation of the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. This can result in a serious threat to production systems and employees. This work introduces an approach named CyberProtect to prevent misclassification caused by adversarial example attacks. The approach generates adversarial examples for retraining a deep neural network, which results in a hardened variant of the deep neural network. The hardened deep neural network sustains a significantly better classification rate (82% compared to 20%) while under attack with adversarial examples, as shown by empirical results.
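The hardening loop described above (generate adversarial examples against the current model, then retrain on them with correct labels) can be sketched as follows. This is an illustrative reconstruction with a logistic-regression stand-in for the monitoring network and an assumed FGSM attacker; a linear stand-in shows only the loop structure and understates the gains the paper reports for real deep networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=300, lr=0.5):
    """Gradient-descent logistic regression, standing in for the
    condition-monitoring DNN."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm(X, y, w, b, eps=0.3):
    """Adversarial copies of the inputs against the current model."""
    p = sigmoid(X @ w + b)
    return X + eps * np.sign(np.outer(p - y, w))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X.sum(axis=1) > 0).astype(float)          # toy "healthy vs. failure" labels

w, b = train(X, y)                             # 1) train the monitor
acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
X_adv = fgsm(X, y, w, b)                       # 2) attack it
# 3) harden: retrain on clean + adversarial data with correct labels
w_h, b_h = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
```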


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Florian Stelzer ◽  
André Röhm ◽  
Raul Vicente ◽  
Ingo Fischer ◽  
Serhiy Yanchuk

Abstract Deep neural networks are among the most widely applied machine learning tools showing outstanding performance in a broad range of tasks. We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops. This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals. The network states emerge in time as a temporal unfolding of the neuron’s dynamics. By adjusting the feedback-modulation within the loops, we adapt the network’s connection weights. These connection weights are determined via a back-propagation algorithm, where both the delay-induced and local network connections must be taken into account. Our approach can fully represent standard Deep Neural Networks (DNN), encompasses sparse DNNs, and extends the DNN concept toward dynamical systems implementations. The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
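The folding idea can be sketched minimally: the hidden and output nodes of a conventional two-layer network are computed one per time step by a single reused nonlinearity, with earlier states fed back as weighted "delay" terms. This idealized NumPy sketch omits the continuous-time dynamics and feedback-modulation details of the actual Fit-DNN; it only demonstrates that a sequential single-nonlinearity evaluation reproduces the layered forward pass.

```python
import numpy as np

def f(z):
    return np.tanh(z)

def forward_standard(x, W1, W2):
    """Conventional two-layer network: y = f(W2 f(W1 x))."""
    return f(W2 @ f(W1 @ x))

def forward_folded(x, W1, W2):
    """The same computation 'folded in time': one scalar nonlinearity f,
    applied at successive time steps; earlier states re-enter through
    weighted feedback terms (delays idealized away between steps)."""
    n_hidden, n_out = W1.shape[0], W2.shape[0]
    state = []                          # the neuron's state history in time
    for j in range(n_hidden):           # steps 0..n_hidden-1: hidden nodes
        state.append(f(W1[j] @ x))      # input modulation drives the neuron
    for k in range(n_out):              # later steps: output nodes
        # delayed-feedback term: weighted sum of earlier neuron states
        state.append(f(W2[k] @ np.array(state[:n_hidden])))
    return np.array(state[n_hidden:])

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(6, 4))
W2 = rng.normal(size=(2, 6))
out_layered = forward_standard(x, W1, W2)
out_folded = forward_folded(x, W1, W2)   # identical to the layered result
```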


2021 ◽  
Author(s):  
Luke Gundry ◽  
Gareth Kennedy ◽  
Alan Bond ◽  
Jie Zhang

The use of Deep Neural Networks (DNNs) for the classification of electrochemical mechanisms, based on training with simulations of the initial potential cycle, has been reported. In this paper,...


2021 ◽  
pp. 1-15
Author(s):  
Wenjun Tan ◽  
Luyu Zhou ◽  
Xiaoshuo Li ◽  
Xiaoyu Yang ◽  
Yufei Chen ◽  
...  

BACKGROUND: The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans and pulmonary research. PURPOSE: Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and then objectively evaluates and compares their performances. METHODS: First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset consisting of 7,307 slices for training and 3,888 slices for testing was made available to participants. Second, by analyzing the performance of different convolutional neural networks from 12 different institutions for pulmonary vascular segmentation, the reasons for some defects and improvements are summarized. The models are mainly based on U-Net, Attention, GAN, and multi-scale fusion networks. Performance is measured in terms of the Dice coefficient, over-segmentation ratio and under-segmentation rate. Finally, we discuss several proposed methods to improve pulmonary vessel segmentation results using deep neural networks. RESULTS: Compared with the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms do an admirable job of pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85. The Dice coefficients for the top three algorithms are about 0.80. CONCLUSIONS: Study results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply effective post-processing into deep neural network training and optimization is significant for further improving the accuracy of pulmonary vascular segmentation.
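The reported metrics can be computed directly from binary masks. The challenge's exact over-/under-segmentation definitions are not restated above, so this sketch assumes false positives and false negatives measured relative to the ground-truth vessel area; only the Dice coefficient has a single standard form.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice coefficient plus over-/under-segmentation measures for binary
    masks. (Definitions of the over/under measures vary; here they are
    taken relative to the ground-truth vessel area.)"""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    over = np.logical_and(pred, ~gt).sum() / gt.sum()   # false positives / |GT|
    under = np.logical_and(~pred, gt).sum() / gt.sum()  # false negatives / |GT|
    return dice, over, under

gt = np.zeros((8, 8), int); gt[2:6, 2:6] = 1            # 16-pixel "vessel"
pred = np.zeros((8, 8), int); pred[3:7, 2:6] = 1        # prediction shifted one row
dice, over, under = seg_metrics(pred, gt)
# intersection = 3 rows x 4 cols = 12 pixels
# dice = 2*12/(16+16) = 0.75; over = 4/16 = 0.25; under = 4/16 = 0.25
```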


2019 ◽  
Vol 10 (15) ◽  
pp. 4129-4140 ◽  
Author(s):  
Kyle Mills ◽  
Kevin Ryczko ◽  
Iryna Luchak ◽  
Adam Domurad ◽  
Chris Beeler ◽  
...  

We present a physically-motivated topology of a deep neural network that can efficiently infer extensive parameters (such as energy, entropy, or number of particles) of arbitrarily large systems, doing so with scaling.


2018 ◽  
Vol 129 (4) ◽  
pp. 649-662 ◽  
Author(s):  
Christine K. Lee ◽  
Ira Hofer ◽  
Eilon Gabel ◽  
Pierre Baldi ◽  
Maxime Cannesson

Abstract Background The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. Methods The data used to train and validate the algorithm consist of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed the improvement of the deep neural network from adding the American Society of Anesthesiologists (ASA) Physical Status Classification, and the robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index. Results In-hospital mortality in the training and test sets was 0.81% and 0.73%, respectively. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90; 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve overall, at 0.97 (95% CI, 0.94 to 0.99). Conclusions Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
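The areas under the ROC curve used to compare the models above can be computed directly from risk scores and outcome labels via the Mann-Whitney formulation (the probability that a random positive case outscores a random negative case), without fitting any curve. A small self-contained sketch on made-up scores:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case receives a higher
    score than a randomly chosen negative case (ties count as 1/2)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 0, 1, 1])                 # 1 = in-hospital death
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9])   # model risk scores
auc = roc_auc(scores, labels)
# 6 of the 9 positive/negative pairs are ranked correctly: auc = 6/9
```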


Author(s):  
Anna Ilina ◽  
Vladimir Korenkov

The task of counting the number of people is relevant when conducting various kinds of events, such as seminars, lectures, conferences and meetings. Instead of monotonous manual counting of participants, it is much more effective to use facial recognition technology, which makes it possible not only to quickly count those present but also to recognize each of them, enabling further analysis of the data, identification of patterns in it, and prediction. The research conducted in this paper assesses the quality of facial recognition technology in images and video streams, based on a deep neural network, for solving the problem of automating attendance tracking.


2020 ◽  
Vol 61 (11) ◽  
pp. 1967-1973
Author(s):  
Takashi Akagi ◽  
Masanori Onishi ◽  
Kanae Masuda ◽  
Ryohei Kuroki ◽  
Kohei Baba ◽  
...  

Abstract Recent rapid progress in deep neural network techniques has allowed recognition and classification of various objects, often exceeding the performance of the human eye. In plant biology and crop sciences, some deep neural network frameworks have been applied mainly for effective and rapid phenotyping. In this study, beyond simple optimizations of phenotyping, we propose an application of deep neural networks to make an image-based internal disorder diagnosis that is hard even for experts, and to visualize the reasons behind each diagnosis to provide biological interpretations. Here, we exemplified classification of calyx-end cracking in persimmon fruit by using five convolutional neural network models with various layer structures and examined potential analytical options involved in the diagnostic qualities. With 3,173 visible RGB images from the fruit apex side, the neural networks successfully made the binary classification of each degree of disorder, with up to 90% accuracy. Furthermore, feature-visualization methods such as Grad-CAM and LRP highlight the regions of the image that contribute to the diagnosis. They suggest that specific patterns of color unevenness, such as in the fruit peripheral area, can be indexes of calyx-end cracking. These results not only provided novel insights into indexes of fruit internal disorders but also proposed the potential applicability of deep neural networks in plant biology.
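The Grad-CAM combination step behind such visualizations is simple to state: weight each feature map of the last convolutional layer by the spatial mean of the class-score gradient, sum the weighted maps, and apply ReLU. A toy NumPy sketch with synthetic activations and gradients (no trained CNN involved), illustrating the mechanism only:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Grad-CAM heatmap from a conv layer's activations (K, H, W) and the
    gradients of the target class score w.r.t. those activations: weight
    each map by the spatial mean of its gradient, sum, then ReLU."""
    alphas = grads.mean(axis=(1, 2))                   # one weight per channel
    cam = np.tensordot(alphas, feature_maps, axes=1)   # weighted sum of maps
    return np.maximum(cam, 0.0)                        # keep positive evidence

# Toy check: channel 0 activates in the top-left corner and has positive
# gradient; channel 1 activates bottom-right but has negative gradient.
A = np.zeros((2, 4, 4)); A[0, :2, :2] = 1.0; A[1, 2:, 2:] = 1.0
G = np.zeros((2, 4, 4)); G[0] = 1.0; G[1] = -1.0
cam = grad_cam(A, G)
# the heatmap highlights only the top-left region
```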


Blood ◽  
2019 ◽  
Vol 134 (Supplement_1) ◽  
pp. 2084-2084 ◽  
Author(s):  
Ta-Chuan Yu ◽  
Wen-Chien Chou ◽  
Chao-Yuan Yeh ◽  
Cheng-Kun Yang ◽  
Sheng-Chuan Huang ◽  
...  

Purpose Differential counting of blood cells is the basis of diagnostic hematology. In many circumstances, identification of cells in bone marrow smears is the gold standard for diagnosis. Presently, methods for automatic differential counting of peripheral blood are readily available commercially. However, morphological assessment and differential counting of bone marrow smears are still performed manually. This procedure is tedious, time-consuming and laden with high inter-operator variation. In recent years, deep neural networks have proven useful in many medical image recognition tasks, such as diagnosis of diabetic retinopathy and detection of cancer metastasis in lymph nodes. However, there has been no published work on using deep neural networks for complete differential counting of an entire bone marrow smear. In this work, we present the results of using a deep convolutional neural network for automatic differential counting of bone marrow nucleated cells. Materials & Methods Bone marrow smears from patients with either benign or malignant disorders at National Taiwan University Hospital were collected for this study. The bone marrow smears were stained with Liu's stain, a modified Romanowsky stain. Digital images of the smears were taken using a 1000x oil-immersion lens and a 20 MP color CCD camera on a single microscope with standard illumination and white-balance settings. The contour of each nucleated cell was manually delineated. These cells were then divided into a training/validation set and a test set. Each cell was then classified into 1 of 11 categories (blast, promyelocyte, neutrophilic myelocyte, neutrophilic metamyelocyte, neutrophils, eosinophils and precursors, basophil, monocyte and precursors, lymphocyte, erythroid lineage cells, and invalid cell). In the training/validation set, the classification of each cell was annotated once by an experienced medical technician or hematologist.
The annotated dataset was used to train a Path Aggregation Network for the instance segmentation task. In the test set, cell classification was annotated by three medical technicians or hematologists; only over-2/3 consensus was regarded as valid. After the neural network model was fully trained, its ability to classify and detect bone marrow nucleated cells was evaluated in terms of precision, recall and accuracy. During model training, we used group normalization and a stochastic gradient descent optimizer. Random noise, Gaussian blur, rotation, contrast and color shift were also used for data augmentation. Results Digital images of 150 bone marrow aspirate smears were taken for this study. They included 61 for acute leukemia, 39 for lymphoma, 2 for myelodysplastic syndrome (MDS), 2 for myeloproliferative neoplasm (MPN), 10 for MDS/MPN, 12 for multiple myeloma, 4 for hemolytic anemia, 9 for aplastic anemia, 8 for infectious etiology and 3 for solid cancers. The final data contained 5,927 images and 187,730 nucleated bone marrow cells, divided into 2 sets: 5,630 images containing 170,966 cells as the training/validation set, and 297 images containing 16,764 cells as the test set. Among the 16,764 cells annotated in the test set, 15,676 cells (93.6%) reached over 2/3 consensus. The trained neural network achieved 0.832 recall and 0.736 precision for the cell detection task, 0.79 mean intersection over union (IoU) for the cell segmentation task, and a mean average precision of 0.659 and accuracy of 0.801 for cell classification. For individual cell categories, the model performed best on "erythroid-lineage-cells" (0.971 recall, 0.935 precision) and worst on "monocyte-and-precursors" (0.825 recall, 0.337 precision). Conclusions We have created the largest and most comprehensive annotated bone marrow smear image dataset for deep neural network training.
Compared with previous works, our approach is more practical for clinical application because it is able to take in an entire smear field and generate differential counts without any other preprocessing steps. Current results are highly encouraging. With continued expansion of the dataset, our model should become more precise and clinically useful. Figure Disclosures Yeh: aether AI: Other: CEO and co-founder. Yang: aether AI: Employment. Tien: Novartis: Honoraria; Daiichi Sankyo: Honoraria; Celgene: Research Funding; Roche: Honoraria; Johnson & Johnson: Honoraria; Alexion: Honoraria; BMS: Honoraria; Roche: Research Funding; Celgene: Honoraria; Pfizer: Honoraria; Abbvie: Honoraria. Hsu: aether AI: Employment.
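Detection precision and recall figures like those above follow from matching predicted cells to ground truth at an IoU threshold. This is a minimal sketch with boxes and an assumed greedy one-to-one matching at IoU >= 0.5 (the paper's exact matching rule is not restated here):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_pr(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truths at an
    IoU threshold; returns (precision, recall) for the detection task."""
    matched, tp = set(), 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j not in matched and iou(p, g) > best:
                best, best_j = iou(p, g), j
        if best >= thr:
            matched.add(best_j)
            tp += 1
    return tp / len(preds), tp / len(gts)

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]      # two annotated cells
preds = [(1, 1, 11, 11), (50, 50, 60, 60)]    # one good hit, one false alarm
prec, rec = detection_pr(preds, gts)
# precision = 1/2, recall = 1/2
```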


Author(s):  
Firdaus .

This paper proposes an improved method for the author name disambiguation problem, covering both homonyms and synonyms. The prepared data are the distances between each pair of author attributes, computed using the Levenshtein distance. Using deep neural networks, we found large gains in performance. The results show an accuracy of 99.6% with a small number of hidden layers.
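The per-attribute feature mentioned above is the classic Levenshtein edit distance (minimum number of unit-cost insertions, deletions and substitutions); a self-contained dynamic-programming sketch:

```python
def levenshtein(a, b):
    """Edit distance between two strings: the minimum number of
    insertions, deletions and substitutions turning a into b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete all of a's prefix
    for j in range(n + 1):
        d[0][j] = j                       # insert all of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[m][n]

# e.g. two spellings of the same surname attribute
dist = levenshtein("kitten", "sitting")   # -> 3
```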

