Minimal Modifications of Deep Neural Networks using Verification

10.29007/699q ◽  
2020 ◽  
Author(s):  
Ben Goldberger ◽  
Guy Katz ◽  
Yossi Adi ◽  
Joseph Keshet

Deep neural networks (DNNs) are revolutionizing the way complex systems are designed, developed and maintained. As part of the life cycle of DNN-based systems, there is often a need to modify a DNN in subtle ways that affect certain aspects of its behavior, while leaving other aspects of its behavior unchanged (e.g., if a bug is discovered and needs to be fixed, without altering other functionality). Unfortunately, retraining a DNN is often difficult and expensive, and may produce a new DNN that is quite different from the original. We leverage recent advances in DNN verification and propose a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior. Using a proof-of-concept implementation, we demonstrate the usefulness and potential of our approach in addressing two real-world needs: (i) measuring the resilience of DNN watermarking schemes; and (ii) bug repair in already-trained DNNs.
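The idea of a provably minimal, retraining-free repair can be illustrated on the simplest possible case. The sketch below is a toy example, not the paper's verification-based algorithm: all weights, class names and the margin are made up. For a single linear output layer, the smallest L2 change to one class's weight vector that makes a misclassified input satisfy a margin constraint has a closed form (the minimum-norm solution of a single linear equation), which mirrors the paper's notion of a minimal modification that leaves the rest of the network untouched.

```python
import numpy as np

def minimally_repair(w_correct, w_wrong, x, margin=0.1):
    """Smallest-L2 delta for w_correct so that
    (w_correct + delta - w_wrong) @ x >= margin."""
    gap = (w_correct - w_wrong) @ x
    if gap >= margin:
        return np.zeros_like(w_correct)  # already classified correctly
    # Minimum-norm solution of the linear constraint delta @ x = margin - gap
    return (margin - gap) / (x @ x) * x

w_cat = np.array([0.2, -0.1])   # hypothetical output weights for class "cat"
w_dog = np.array([0.5,  0.3])   # hypothetical output weights for class "dog"
x = np.array([1.0, 2.0])        # an input the toy model currently gets wrong

delta = minimally_repair(w_cat, w_dog, x)
repaired_gap = ((w_cat + delta) - w_dog) @ x  # exactly meets the 0.1 margin
```

In the paper this is done for full networks via verification queries rather than a closed form, but the goal is the same: the repaired model differs from the original by as little as possible.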

2021 ◽  
Author(s):  
Chih-Kuan Yeh ◽  
Been Kim ◽  
Pradeep Ravikumar

Understanding complex machine learning models such as deep neural networks with explanations is crucial in various applications. Many explanations stem from the model perspective, and may not necessarily communicate why the model is making its predictions at the right level of abstraction. For example, providing importance weights to individual pixels in an image can only express which parts of that particular image are important to the model, but humans may prefer an explanation that explains the prediction through concept-based thinking. In this work, we review the emerging area of concept-based explanations. We start by introducing concept explanations, including the class of Concept Activation Vectors (CAVs), which characterize concepts using vectors in appropriate spaces of neural activations, and discuss different properties of useful concepts and approaches to measure the usefulness of concept vectors. We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats. Finally, we discuss case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
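The core CAV construction can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the "activations" are random stand-ins, the separator is a plain least-squares fit rather than a trained probe, and the gradients are synthetic. A CAV is the normal of a linear boundary separating activations of concept examples from activations of random examples; a TCAV-style score is then the fraction of inputs whose class-logit gradient has positive directional derivative along that vector.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for hidden-layer activations: "striped" concept images vs random images
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

# Fit a linear separator (least squares here for simplicity); its normal is the CAV
X = np.vstack([concept_acts, random_acts])
y = np.array([1.0] * 50 + [-1.0] * 50)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
cav = w / np.linalg.norm(w)  # unit-norm concept activation vector

# Stand-in for d(class logit)/d(activation) over a batch of inputs
grads = rng.normal(loc=0.2, size=(100, 8))
tcav_score = np.mean(grads @ cav > 0)  # fraction of inputs where the
                                       # concept pushes the class score up
```

In actual TCAV the separator is trained on real activations and the gradients come from backpropagation, but the geometry is exactly this: a direction in activation space plus a directional derivative.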


Author(s):  
Gary Smith ◽  
Jay Cordes

Computer software, particularly deep neural networks and Monte Carlo simulations, is extremely useful for the specific tasks it has been designed to do, and it will get even better, much better. However, we should not assume that computers are smarter than us just because they can tell us the first 2000 digits of pi or show us a street map of every city in the world. One of the paradoxical things about computers is that they can excel at things that humans consider difficult (like calculating square roots) while failing at things that humans consider easy (like recognizing stop signs). They cannot pass simple tests like the Winograd Schema Challenge because they do not understand the world the way humans do. They have neither common sense nor wisdom. They are our tools, not our masters.


Author(s):  
Wen Xu ◽  
Jing He ◽  
Yanfeng Shu

Transfer learning is an emerging technique in machine learning by which we can solve a new task with the knowledge obtained from an old task, in order to address the lack of labeled data. In particular, deep domain adaptation (a branch of transfer learning) has received the most attention in recently published articles. The intuition behind this is that deep neural networks usually have a large capacity to learn representations from one dataset, and part of that information can be reused for a new task. In this research, we first present the complete scenarios of transfer learning according to domains and tasks. Second, we conduct a comprehensive survey of deep domain adaptation and categorize the recent advances into three types based on their implementation approach: fine-tuning networks, adversarial domain adaptation, and sample-reconstruction approaches. Third, we discuss the details of these methods and introduce some typical real-world applications. Finally, we conclude our work and explore some potential issues to be further addressed.
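The first of the three families, fine-tuning, can be sketched minimally. The example below is illustrative only, with synthetic data and a made-up toy task: a "pretrained" random projection plays the role of the frozen feature extractor, and only a new linear head is trained on the target task, which is the simplest way to reuse representations learned on a source dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
W_pre = rng.normal(size=(4, 16))  # stand-in "pretrained" layer, kept frozen

def features(x):
    """Frozen representation learned on the (hypothetical) source task."""
    return np.tanh(x @ W_pre)

# Small labeled set from the new domain, with a toy target task
X_new = rng.normal(size=(200, 4))
y_new = (X_new[:, 0] > 0).astype(float)

# Train only the new head (logistic regression on frozen features)
Z = features(X_new)
w_head = np.zeros(16)
for _ in range(300):
    p = 1 / (1 + np.exp(-(Z @ w_head)))
    w_head -= 0.5 * Z.T @ (p - y_new) / len(y_new)

acc = np.mean((Z @ w_head > 0) == (y_new == 1))  # well above chance
```

In practice the frozen extractor is a deep network trained on a large source dataset, and one may also unfreeze its top layers with a small learning rate; the division of labor shown here is the same.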


2019 ◽  
Vol 6 (4) ◽  
pp. 104 ◽  
Author(s):  
Liang Liang ◽  
Bill Sun

Artificial heart valves, used to replace diseased human heart valves, are life-saving medical devices. Currently, at the device development stage, new artificial valves are primarily assessed through time-consuming and expensive benchtop tests or animal implantation studies. Computational stress analysis using the finite element (FE) method presents an attractive alternative to physical testing. However, FE computational analysis requires a complex process of numeric modeling and simulation, as well as in-depth engineering expertise. In this proof of concept study, our objective was to develop machine learning (ML) techniques that can estimate the stress and deformation of a transcatheter aortic valve (TAV) from a given set of TAV leaflet design parameters. Two deep neural networks were developed and compared: the autoencoder-based ML-models and the direct ML-models. The ML-models were evaluated through Monte Carlo cross validation. From the results, both proposed deep neural networks could accurately estimate the deformed geometry of the TAV leaflets and the associated stress distributions within a second, with the direct ML-models (ML-model-d) having slightly larger errors. In conclusion, although this is a proof-of-concept study, the proposed ML approaches have demonstrated great potential to serve as a fast and reliable tool for future TAV design.
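The "direct" surrogate idea above reduces to learning a fast map from design parameters to a stress quantity. The sketch below uses entirely synthetic data and hypothetical parameter names (thickness, height, curvature), and a simple least-squares basis in place of the paper's deep networks; it only illustrates why such a surrogate evaluates in microseconds once fitted, instead of requiring a full FE run per design.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical leaflet design parameters: thickness, height, curvature
params = rng.uniform(0.2, 1.0, size=(500, 3))
# Made-up ground-truth response standing in for FE-computed peak stress
stress = 5.0 / params[:, 0] + 2.0 * params[:, 1]

# Fit a surrogate: simple feature basis + linear least squares
X = np.column_stack([np.ones(500), params, 1.0 / params])
coef, *_ = np.linalg.lstsq(X, stress, rcond=None)

pred = X @ coef
rel_err = np.max(np.abs(pred - stress) / stress)  # near zero on this toy response
```

A deep network replaces the hand-chosen basis when the response surface is unknown, and the autoencoder variant additionally compresses the full deformed geometry rather than a single scalar.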


2018 ◽  
pp. 20170545 ◽  
Author(s):  
Jeremy R Burt ◽  
Neslisah Torosdagli ◽  
Naji Khosravan ◽  
Harish RaviPrakash ◽  
Aliasghar Mortazi ◽  
...  

2020 ◽  
Author(s):  
Timothy J. Hackmann

Abstract
Microbes can metabolize more chemical compounds than any other group of organisms. As a result, their metabolism is of interest to investigators across biology. Despite the interest, information on the metabolism of specific microbes is hard to access. Information is buried in the text of books and journals, and investigators have no easy way to extract it. Here we investigate whether neural networks can extract this information and predict metabolic traits. For proof of concept, we predicted two traits: whether microbes carry out one type of metabolism (fermentation) or produce one metabolite (acetate). We collected written descriptions of 7,021 species of bacteria and archaea from Bergey’s Manual. We read the descriptions and manually identified (labeled) which species were fermentative or produced acetate. We then trained neural networks to predict these labels. In total, we identified 2,364 species as fermentative, and 1,009 species as also producing acetate. Neural networks could predict which species were fermentative with 97.3% accuracy. Accuracy was even higher (98.6%) when predicting species also producing acetate. We used these predictions to draw phylogenetic trees of species with these traits. The resulting trees were close to the actual trees (drawn using the labels). Previous counts of fermentative species are 4-fold lower than our own. For acetate-producing species, they are 100-fold lower. This undercounting confirms the past difficulty of extracting metabolic traits from text. Our approach with neural networks can extract information efficiently and accurately. It paves the way for putting more metabolic traits into databases, giving investigators easy access to this information.
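The text-to-trait pipeline can be sketched at its smallest scale. The descriptions below are invented, not from Bergey's Manual, and a one-layer logistic model stands in for the paper's neural networks: each species description becomes a bag-of-words vector, and a classifier learns which words signal the fermentative label.

```python
import numpy as np

# Toy species descriptions with manual fermentative labels (1 = fermentative)
docs = [
    ("ferments glucose to acetate and lactate", 1),
    ("strictly aerobic, oxidizes methane", 0),
    ("fermentative metabolism, produces acetate", 1),
    ("obligate aerobe, non-fermentative", 0),
]
vocab = sorted({w for text, _ in docs for w in text.replace(",", "").split()})

def vectorize(text):
    """Bag-of-words count vector over the training vocabulary."""
    words = text.replace(",", "").split()
    return np.array([words.count(w) for w in vocab], dtype=float)

X = np.array([vectorize(t) for t, _ in docs])
y = np.array([label for _, label in docs], dtype=float)

# Logistic regression: the linear core of a one-layer network
w = np.zeros(len(vocab))
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 1.0 * X.T @ (p - y) / len(y)

# Predict the trait for an unseen (made-up) description
p_new = 1 / (1 + np.exp(-(vectorize("ferments lactose to lactate") @ w)))
```

At the scale of the paper, the vectorizer and classifier are richer and the labels cover 7,021 species, but the structure is the same: text in, trait probability out.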


2017 ◽  
Vol 1 (3) ◽  
pp. 83 ◽  
Author(s):  
Chandrasegar Thirumalai ◽  
Ravisankar Koppuravuri

In this paper, we use deep neural networks to predict bike-sharing usage from previous years’ usage data. We choose deep neural networks because they can achieve higher accuracy: unlike many other machine learning techniques, they let us add any number of hidden layers to improve prediction accuracy, and the model can be trained in the way we want so that we achieve the results we want. Many AI experts today regard deep learning as the most capable AI technique available, one that can achieve remarkable results. We apply this technique to predict the bike-sharing usage of a rental company, so that it can make sound business decisions based on previous years’ data.


Author(s):  
Anibal Pedraza ◽  
Oscar Deniz ◽  
Gloria Bueno

Abstract
The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. The so-called adversarial attacks have the ability to fool deep neural networks with imperceptible perturbations. While the effect is striking, it has been suggested that such carefully selected injected noise does not necessarily appear in real-world scenarios. In contrast to this, some authors have looked for ways to generate adversarial noise in physical scenarios (traffic signs, shirts, etc.), thus showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We show this using images from tasks related to microscopy, and also from general object recognition with the well-known ImageNet dataset. A comparison between these natural and the artificially generated adversarial examples is performed using distance metrics and image quality metrics. We also show that the natural adversarial examples are in fact at a greater distance from the originals than the artificially generated adversarial examples.
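The comparison described above can be sketched with two common metrics. The images below are random stand-ins, not the paper's microscopy or ImageNet data, and the noise levels are chosen only to illustrate the reported direction of the effect: natural adversarial examples sit farther from their originals (larger L2 distance, lower PSNR) than artificially crafted ones.

```python
import numpy as np

rng = np.random.default_rng(4)
original = rng.uniform(0, 1, size=(32, 32))                      # stand-in image
adv_artificial = original + rng.normal(0, 0.01, size=(32, 32))   # tiny crafted-style noise
adv_natural = original + rng.normal(0, 0.10, size=(32, 32))      # larger real-world drift

def l2(a, b):
    """Euclidean distance between two images."""
    return float(np.linalg.norm(a - b))

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means more similar."""
    mse = np.mean((a - b) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

d_art, d_nat = l2(original, adv_artificial), l2(original, adv_natural)
q_art, q_nat = psnr(original, adv_artificial), psnr(original, adv_natural)
# With these noise levels: d_art < d_nat and q_art > q_nat
```

The paper additionally uses image quality metrics beyond PSNR; the point of the measurement is the same, quantifying how far each kind of adversarial example strays from its original.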

