An Axiomatization for Bilinear Models

Author(s):  
Uta Wille
Author(s):  
Amir Ardestani-Jaafari, Erick Delage

In this article, we discuss an alternative method for deriving conservative approximation models for two-stage robust optimization problems. The method relies mainly on a linearization scheme employed in bilinear programming; accordingly, we refer to the resulting models as linearized robust counterpart models. We identify a close relation between this linearized robust counterpart model and the popular affinely adjustable robust counterpart model. We also describe how both types of models can be modified to make the approximations less conservative, drawing heavily on the use of valid linear and conic inequalities in the linearization process for bilinear models. Finally, we demonstrate how to employ this new scheme in location-transportation and multi-item newsvendor problems to improve the numerical efficiency and performance guarantees of robust optimization.
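The linearization scheme alluded to here is, in spirit, the classical McCormick relaxation from bilinear programming, which replaces a bilinear term w = x·y with four valid linear inequalities over a box of bounds. A minimal sketch of those envelopes (the function name and the particular bounds are illustrative, not taken from the article):

```python
def mccormick_envelope(xL, xU, yL, yU):
    """Return the McCormick envelopes of w = x*y on [xL, xU] x [yL, yU]:
    two linear lower bounds and two linear upper bounds on w."""
    lower = [
        lambda x, y: xL * y + yL * x - xL * yL,  # w >= xL*y + yL*x - xL*yL
        lambda x, y: xU * y + yU * x - xU * yU,  # w >= xU*y + yU*x - xU*yU
    ]
    upper = [
        lambda x, y: xU * y + yL * x - xU * yL,  # w <= xU*y + yL*x - xU*yL
        lambda x, y: xL * y + yU * x - xL * yU,  # w <= xL*y + yU*x - xL*yU
    ]
    return lower, upper

# The true product always lies between the envelopes on the box:
lower, upper = mccormick_envelope(0.0, 2.0, -1.0, 1.0)
x, y = 1.5, 0.5
w = x * y  # 0.75
assert all(l(x, y) <= w + 1e-9 for l in lower)
assert all(u(x, y) >= w - 1e-9 for u in upper)
```

Replacing each bilinear term in the robust counterpart with such valid inequalities yields a linear (hence tractable) conservative approximation; tightening the relaxation with additional valid inequalities is what makes the approximation less conservative.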


2000, Vol 12 (6), pp. 1247-1283
Author(s):  
Joshua B. Tenenbaum, William T. Freeman

Perceptual systems routinely separate “content” from “style,” classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.
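The SVD-based fitting mentioned above can be illustrated on the simplest bilinear setting: observations that factor into a style matrix times a content matrix, recovered by a truncated singular value decomposition of the stacked data. A toy sketch with synthetic noiseless data (all names and dimensions are illustrative, not the authors' experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
S, C, K = 4, 5, 2            # number of styles, contents, latent dimension
A = rng.normal(size=(S, K))  # style factors (unknown to the learner)
B = rng.normal(size=(K, C))  # content factors (unknown to the learner)
Y = A @ B                    # observed style-by-content data matrix

# A rank-K SVD of the stacked observations recovers a bilinear fit:
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
A_hat = U[:, :K] * s[:K]     # estimated style parameters
B_hat = Vt[:K, :]            # estimated content parameters

# The rank-K reconstruction matches the data (factors are identified
# only up to an invertible linear transformation between A_hat and B_hat)
assert np.allclose(A_hat @ B_hat, Y)
```

With missing data or a richer (symmetric) bilinear model, the closed-form SVD step is replaced by the iterative expectation-maximization fitting the abstract refers to.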


Author(s):  
Ronald K. Pearson

One of the main points of Chapter 4 is that nonlinear moving-average (NMAX) models are both inherently better-behaved and easier to analyze than more general NARMAX models. For example, it was shown in Sec. 4.2.2 that if ɡ(···) is a continuous map from R^(q+1) to R^1 and if y_s = ɡ(u_s, ..., u_s), then u_k → u_s implies y_k → y_s. Although it is not always satisfied, continuity is a relatively weak condition to impose on the map ɡ(···); for example, Hammerstein or Wiener models built from moving-average models and the hard saturation nonlinearity are discontinuous members of the class of NMAX models. This chapter considers the analytical consequences of requiring ɡ(···) to be analytic, implying the existence of a Taylor series expansion. Although this requirement is much stronger than continuity, it often holds, and when it does, it leads to an explicit representation: Volterra models. The principal objective of this chapter is to define the class of Volterra models and discuss various important special cases and qualitative results. Most of this discussion is concerned with the class V(N, M) of finite Volterra models, which includes the class of linear finite impulse response models as a special case, along with a number of practically important nonlinear moving-average model classes. In particular, the finite Volterra model class includes Hammerstein models, Wiener models, and Uryson models, along with other more general model structures. In addition, one of the results established in this chapter is that most of the bilinear models discussed in Chapter 3 may be expressed as infinite-order Volterra models. This result is somewhat analogous to the equivalence between finite-dimensional linear autoregressive models and infinite-dimensional linear moving-average models discussed in Chapter 2. The bilinear model result presented here is strictly weaker, however, since there exist classes of bilinear models that do not possess Volterra series representations.
Specifically, it is shown in Sec. 5.6 that completely bilinear models do not possess Volterra series representations. Conversely, one of the results discussed at the end of this chapter is that the class of discrete-time fading memory systems may be approximated arbitrarily well by finite Volterra models (Boyd and Chua, 1985).
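The finite Volterra class described above can be made concrete with a second-order example: the output is an affine-plus-quadratic function of the last M inputs, and the Hammerstein special case corresponds to a diagonal second-order kernel. A minimal sketch (function and coefficient names are illustrative, not the book's notation):

```python
import numpy as np

def volterra2(u, h0, h1, h2):
    """Evaluate a finite second-order Volterra model:
    y[k] = h0 + sum_i h1[i]*u[k-i] + sum_{i,j} h2[i,j]*u[k-i]*u[k-j],
    with i, j = 0, ..., M-1 and u[k] taken as 0 for k < 0."""
    M = len(h1)
    u = np.asarray(u, dtype=float)
    y = np.empty_like(u)
    for k in range(len(u)):
        # window of past inputs [u[k], u[k-1], ..., u[k-M+1]], zero-padded
        w = np.array([u[k - i] if k - i >= 0 else 0.0 for i in range(M)])
        y[k] = h0 + h1 @ w + w @ h2 @ w
    return y

# A Hammerstein model with a square nonlinearity followed by an FIR
# filter g is the diagonal special case h2 = diag(g):
g = np.array([0.5, 0.25])
y = volterra2([1.0, 2.0, 0.0], 0.0, np.zeros(2), np.diag(g))
# y[k] = 0.5*u[k]**2 + 0.25*u[k-1]**2, so y = [0.5, 2.25, 1.0]
```

Because the kernels h1 and h2 have finite extent M, this is a nonlinear moving-average model: the output depends on finitely many past inputs and inherits the stability behavior discussed in Chapter 4.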


2020, Vol 24 (10), pp. 2844-2851
Author(s):  
Tennison Liu, Nhan Duy Truong, Armin Nikpour, Luping Zhou, Omid Kavehei
