gradient based
Recently Published Documents

TOTAL DOCUMENTS: 3348 (FIVE YEARS: 1040)
H-INDEX: 74 (FIVE YEARS: 12)

2022 ◽ Vol 44 (1) ◽ pp. 1-54
Author(s): Maria I. Gorinova ◽ Andrew D. Gordon ◽ Charles Sutton ◽ Matthijs Vákár

A central goal of probabilistic programming languages (PPLs) is to separate modelling from inference. However, this goal is hard to achieve in practice. Users are often forced to re-write their models to improve the efficiency of inference or to meet restrictions imposed by the PPL. Conditional independence (CI) relationships among parameters are a crucial aspect of probabilistic models that capture a qualitative summary of the specified model and can facilitate more efficient inference. We present an information flow type system for probabilistic programming that captures CI relationships, and show that, for a well-typed program in our system, the distribution it implements is guaranteed to have certain CI relationships. Further, by using type inference, we can statically deduce which CI properties are present in a specified model. As a practical application, we consider the problem of how to perform inference on models with mixed discrete and continuous parameters. Inference on such models is challenging in many existing PPLs, but can be improved through a workaround, where the discrete parameters are used implicitly, at the expense of manual model re-writing. We present a source-to-source semantics-preserving transformation, which uses our CI type system to automate this workaround by eliminating the discrete parameters from a probabilistic program. The resulting program can be seen as a hybrid inference algorithm on the original program, where continuous parameters can be drawn using efficient gradient-based inference methods, while the discrete parameters are inferred using variable elimination. We implement our CI type system and its example application in SlicStan, a compositional variant of Stan.
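The workaround the transformation automates can be pictured with a small, hypothetical example (not the paper's SlicStan code): a discrete mixture indicator is summed out by hand, leaving a log-density that depends only on continuous parameters and is therefore amenable to gradient-based inference.

```python
# Illustrative sketch: marginalising a discrete latent z so only continuous
# parameters remain. Model, data and names here are hypothetical stand-ins.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def mixture_logp(y, mu, log_weights):
    """Two-component Gaussian mixture with the discrete indicator summed out:
    log p(y) = sum_n logsumexp_z [ log p(z) + log p(y_n | z) ]."""
    comp_logp = norm.logpdf(y[:, None], loc=mu[None, :], scale=1.0)  # shape (n, 2)
    return np.sum(logsumexp(log_weights[None, :] + comp_logp, axis=1))

y = np.array([-2.1, -1.9, 2.0, 2.2])
mu = np.array([-2.0, 2.0])                  # continuous parameters
log_w = np.log(np.array([0.5, 0.5]))        # mixture weights
print(mixture_logp(y, mu, log_w))           # smooth in mu, so gradient-based samplers apply
```

In a real PPL the gradient of this marginalised density with respect to the continuous parameters would come from automatic differentiation, while the discrete indicator is recovered afterwards by variable elimination.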


Aerospace ◽ 2022 ◽ Vol 9 (1) ◽ pp. 43
Author(s): Robert Valldosera Martinez ◽ Frederico Afonso ◽ Fernando Lau

In order to decrease the airframe noise emitted by a two-dimensional high-lift configuration during take-off and landing, a morphing airfoil has been designed through a shape design optimisation procedure starting from a baseline airfoil (NLR 7301), with the aim of emulating a high-lift configuration in terms of aerodynamic performance. A methodology has been implemented to accomplish these aerodynamic improvements by means of the compressible steady RANS equations at a given angle of attack, with the objective of maximising the lift coefficient up to values equivalent to those of the high-lift configuration, whilst respecting the imposed structural constraints to guarantee a realistic optimised design. For this purpose, a gradient-based optimisation through the discrete adjoint method has been undertaken. Once the optimised airfoil is obtained, unsteady simulations are carried out to obtain surface pressure distributions over a certain time span, which later serve as input data for the aeroacoustic prediction framework based on the Farassat 1A formulation; the resulting data for both configurations are post-processed to allow a comparative analysis. In conclusion, the morphing airfoil has proven to be advantageous in terms of aeroacoustics: its noise is reduced with respect to the conventional high-lift configuration at a comparable lift coefficient, despite being hampered by a significant drag coefficient increase due to stall at the morphing airfoil's trailing edge.
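As a rough illustration of the kind of gradient-based, constrained optimisation loop described (a sketch only, not the paper's RANS/adjoint tool chain; the objective, gradient and constraint below are hypothetical stand-ins), scipy's SLSQP can be driven with an externally supplied "adjoint" gradient:

```python
# Schematic shape-optimisation loop: maximise a lift-like objective under a
# structural-style constraint, using an analytic gradient in place of a
# discrete-adjoint evaluation. Entirely a toy surrogate.
import numpy as np
from scipy.optimize import minimize

def lift_and_adjoint_gradient(shape_params):
    cl = np.sum(np.tanh(shape_params))            # toy "lift coefficient"
    grad = 1.0 / np.cosh(shape_params) ** 2       # its analytic gradient ("adjoint")
    return cl, grad

def objective(x):
    cl, grad = lift_and_adjoint_gradient(x)
    return -cl, -grad                             # maximise lift = minimise -CL

# Keep the total shape deformation bounded, mimicking a structural constraint.
cons = {"type": "ineq", "fun": lambda x: 0.5 - np.linalg.norm(x)}

x0 = np.full(4, 0.1)                              # baseline design parameters
res = minimize(objective, x0, jac=True, method="SLSQP", constraints=[cons])
print(res.x, -res.fun)
```

In the actual workflow each objective evaluation would be a RANS solve and each gradient an adjoint solve; the loop structure, however, is the same.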


Author(s): Chenhua Geng ◽ Hong-Ye Hu ◽ Yijian Zou

Abstract Differentiable programming is a new programming paradigm which enables large-scale optimization through automatic calculation of gradients, also known as auto-differentiation. This concept emerged from deep learning and has also been generalized to tensor network optimizations. Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multiscale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR). By introducing several gradient-based optimization methods for isometric tensor networks and comparing them with the Evenbly-Vidal method, we show that auto-differentiation performs better in both stability and accuracy. We numerically test our methods on the 1D critical quantum Ising spin chain and the 2D classical Ising model. We calculate the ground-state energy of the 1D quantum model, the internal energy of the classical model, and the scaling dimensions of scaling operators, and find that they all agree well with theory.
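A minimal sketch of the core move in gradient-based optimisation of isometric tensors, assuming a toy quadratic energy in place of a real MERA/TNR cost function: take a Euclidean gradient step, then retract back onto the isometric manifold with a polar decomposition.

```python
# Gradient descent on an isometry W (W^T W = I) with polar-decomposition
# retraction. Toy symmetric "Hamiltonian" H, not the paper's tensor networks.
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
H = rng.standard_normal((n, n))
H = 0.5 * (H + H.T)

def retract(A):
    """Project A onto the set of isometries via its polar factor (SVD)."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

W = retract(rng.standard_normal((n, k)))   # random initial isometry
lr = 0.05
for _ in range(200):
    grad = -2.0 * H @ W                    # d/dW of E(W) = -Tr(W^T H W)
    W = retract(W - lr * grad)             # step, then restore isometry

print(np.trace(W.T @ H @ W))               # approaches the sum of the 3 largest eigenvalues
print(np.allclose(W.T @ W, np.eye(k)))     # isometric constraint preserved
```

In the differentiable-programming setting the gradient would come from auto-differentiation of the full network contraction rather than from a closed form, but the constrained update is analogous.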


Author(s): Soumyashee Soumyaprakash Panda ◽ Ravi Hegde

Abstract Free-space diffractive optical networks are a class of trainable optical media that are currently being explored as a novel hardware platform for neural engines. The training phase of such systems is usually performed on a computer, and the learned weights are then transferred onto the optical hardware ("ex-situ training"). Although this process of weight transfer has many practical advantages, it is often accompanied by performance-degrading faults in the fabricated hardware. Being analog systems, these engines are also subject to performance degradation due to noise in the inputs and during optoelectronic conversion. Considering diffractive optical networks (DON) trained for image classification tasks on standard datasets, we numerically study the performance degradation arising from weight faults and injected noise, and methods to ameliorate these effects. Training regimens based on intentional fault and noise injection during the training phase are found to be only marginally successful at imparting fault tolerance or noise immunity. We propose an alternative training regimen using gradient-based regularization terms in the training objective, which are found to impart some degree of fault tolerance and noise immunity compared to the injection-based training regimen.
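One possible form of such a gradient-based regularization term, sketched on a toy logistic model rather than a DON (the penalty and its finite-difference gradients are illustrative assumptions, not the paper's formulation): penalising the sensitivity of the loss to weight perturbations so that small hardware faults change the output less.

```python
# Toy regularised training objective: task loss + lambda * ||dL/dw||^2,
# optimised with finite-difference gradients as a stand-in for autodiff.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)

def task_loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                       # logistic model
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def grad_fd(f, w, eps=1e-5):
    """Central finite-difference gradient (stand-in for automatic differentiation)."""
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

lam = 0.1
def total_loss(w):
    # Penalise the squared gradient norm so small weight faults perturb the loss less.
    return task_loss(w) + lam * np.sum(grad_fd(task_loss, w) ** 2)

w = np.zeros(5)
for _ in range(100):
    w -= 0.5 * grad_fd(total_loss, w)
print(task_loss(w))
```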


Fluids ◽ 2022 ◽ Vol 7 (1) ◽ pp. 25
Author(s): Minghao W. Rostami ◽ Weifan Liu ◽ Amy Buchmann ◽ Eva Strawbridge ◽ Longhua Zhao

In this work, we outline a methodology for determining optimal helical flagella placement and phase shift that maximize fluid pumping through a rectangular flow meter above a simulated bacterial carpet. This method uses a Genetic Algorithm (GA) combined with a gradient-based method, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, to solve the optimization problem, and the Method of Regularized Stokeslets (MRS) to simulate the fluid flow. The method produces placements and phase shifts for small carpets and could be adapted to larger carpets and various fluid tasks. Our results show that, given identical helices, optimal pumping configurations are influenced by the size of the flow meter. We also show that intuitive designs, such as uniform placement, do not always lead to a high-performance carpet.
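The hybrid global-plus-local strategy (evolutionary search followed by BFGS refinement) can be sketched on a stand-in objective; the real objective, the MRS-simulated flux through the flow meter, is far too heavy to reproduce here, so everything below is a hypothetical surrogate.

```python
# Tiny genetic algorithm (truncation selection, blend crossover, Gaussian
# mutation) followed by a BFGS polish of the best candidate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def objective(x):
    # Hypothetical multimodal surrogate for "negative pumped flux".
    return np.sum(x ** 2) + 2.0 * np.sum(np.cos(3.0 * x))

pop = rng.uniform(-3, 3, size=(40, 4))
for _ in range(60):
    scores = np.apply_along_axis(objective, 1, pop)
    parents = pop[np.argsort(scores)[:20]]                  # keep the best half
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        kids.append(0.5 * (a + b) + rng.normal(0, 0.1, size=4))  # crossover + mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmin(np.apply_along_axis(objective, 1, pop))]
refined = minimize(objective, best, method="BFGS")          # gradient-based local refinement
print(refined.x, refined.fun)
```

The GA explores the multimodal landscape globally, while BFGS exploits local gradient information to sharpen the best placement it finds, mirroring the division of labour described above.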


Electronics ◽ 2022 ◽ Vol 11 (1) ◽ pp. 154
Author(s): Yuxin Ding ◽ Miaomiao Shao ◽ Cai Nie ◽ Kunyang Fu

Deep learning methods have been applied to malware detection. However, deep learning algorithms are not robust: they can easily be fooled by adversarial samples. In this paper, we study how to generate malware adversarial samples using deep learning models. Gradient-based methods are usually used to generate adversarial samples. These methods generate adversarial samples case by case, which makes producing a large number of adversarial samples very time-consuming. To address this issue, we propose a novel method to generate adversarial malware samples. Unlike gradient-based methods, we extract feature byte sequences from benign samples. Feature byte sequences represent the characteristics of benign samples and can affect the classification decision. We directly inject feature byte sequences into malware samples to generate adversarial samples. Feature byte sequences can be shared to produce different adversarial samples, so a large number of adversarial samples can be generated efficiently. We compare the proposed method with random-injection and gradient-based methods. The experimental results show that the adversarial samples generated using our proposed method have a high success rate.
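A deliberately simplified sketch of the injection idea (the "classifier" below is a hypothetical byte-histogram score, not the paper's deep models): appending a benign-characteristic byte sequence to a malware byte string shifts its statistics toward the benign profile, lowering the malicious score without touching the original payload bytes.

```python
# Toy demonstration of feature-byte-sequence injection against a
# histogram-based score. All data and the scoring rule are made up.
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    h = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256).astype(float)
    return h / h.sum()

def malicious_score(data: bytes, benign_profile: np.ndarray) -> float:
    # Higher when the byte distribution is far from the benign profile.
    return float(np.abs(byte_histogram(data) - benign_profile).sum())

benign_profile = byte_histogram(bytes(range(256)) * 4)         # stand-in benign statistics
malware = bytes([0x90] * 2000)                                  # stand-in malware payload
feature_seq = bytes(range(256)) * 8                             # "benign feature byte sequence"

print(malicious_score(malware, benign_profile))                 # before injection
print(malicious_score(malware + feature_seq, benign_profile))   # after injection: lower
```

Because the same feature sequence can be appended to many different malware files, adversarial samples are produced in bulk without per-sample gradient computations, which is the efficiency argument made in the abstract.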


2022
Author(s): Daniel Simanowitsch ◽ Anand Sudhi ◽ Alexander Theiss ◽ Camli Badrya ◽ Stefan Hein
