A general duality theory for uplink and downlink beamforming

Author(s):  
H. Boche ◽  
M. Schubert


1978 ◽  
Vol 18 (1) ◽  
pp. 65-75 ◽  
Author(s):  
C.H. Scott ◽  
T.R. Jefferson

Duality is by now a widely accepted and useful tool in the analysis of optimization problems posed in real finite-dimensional vector spaces. Although similar ideas have carried over to optimization problems in complex space, they have mainly concerned problems of the linear and quadratic programming variety. In this paper we present a general duality theory for convex mathematical programs in finite-dimensional complex space and, by means of an example, show that this formulation captures all previous results in the area.
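
For orientation, here is the shape of a primal-dual pair of the linear kind that such a theory subsumes (a sketch in the spirit of Levinson's complex linear programming; the cones and notation below are our assumptions, not the paper's own formulation):

    % Primal and dual over closed convex cones S \subseteq \mathbb{C}^m and
    % T \subseteq \mathbb{C}^n, with polar cones
    % S^* = \{\, y : \operatorname{Re}(y^{\mathsf H} x) \ge 0 \text{ for all } x \in S \,\}
    % (and likewise T^*).
    \text{(P)}\quad \min_{z \in \mathbb{C}^n} \operatorname{Re}\!\left(c^{\mathsf H} z\right)
        \quad \text{s.t.}\quad Az - b \in S,\; z \in T,
    \\
    \text{(D)}\quad \max_{w \in \mathbb{C}^m} \operatorname{Re}\!\left(b^{\mathsf H} w\right)
        \quad \text{s.t.}\quad c - A^{\mathsf H} w \in T^{*},\; w \in S^{*}.
    \\
    % Weak duality, for any feasible pair (z, w): both terms on the right
    % are nonnegative by the cone memberships above.
    \operatorname{Re}\!\left(c^{\mathsf H} z\right) - \operatorname{Re}\!\left(b^{\mathsf H} w\right)
      = \operatorname{Re}\!\left((c - A^{\mathsf H} w)^{\mathsf H} z\right)
      + \operatorname{Re}\!\left(w^{\mathsf H}(Az - b)\right) \;\ge\; 0.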


2013 ◽  
Vol 23 (03) ◽  
pp. 457-502 ◽  
Author(s):  
Sebastian Kerkhoff

Inspired by work of Mašulović, we outline a general duality theory for clones that will allow us to dualize any given clone, together with its relational counterpart and the relationship between them. Afterwards, we put the approach to work and illustrate it by producing some specific results for concrete examples as well as some general results that come from studying the duals of clones in a rather abstract fashion.
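
In the classical finite setting, the relational counterpart of a clone arises from the Pol-Inv Galois connection: a clone is the set of operations preserving some set of relations. A minimal sketch of the preservation test underlying that connection (Python; the function name and the Boolean example are ours, not the paper's):

    from itertools import product

    def preserves(op, arity, relation):
        """Return True iff the `arity`-ary operation `op` preserves `relation`,
        i.e. applying `op` coordinatewise to any `arity` tuples of the relation
        again yields a tuple of the relation."""
        width = len(next(iter(relation)))
        for rows in product(relation, repeat=arity):
            image = tuple(op(*(row[k] for row in rows)) for k in range(width))
            if image not in relation:
                return False
        return True

    # Example: the Boolean majority operation preserves the order relation
    # {(0,0), (0,1), (1,1)} on {0,1}, so it belongs to Pol({<=}).
    majority = lambda x, y, z: (x and y) or (y and z) or (x and z)
    order = {(0, 0), (0, 1), (1, 1)}
    print(preserves(majority, 3, order))  # True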


1995 ◽  
Vol 33 (3) ◽  
pp. 428-439 ◽  
Author(s):  
B. A. Davey ◽  
L. Heindorf ◽  
R. McKenzie

2021 ◽  
Vol 36 ◽  
Author(s):  
Sergio Valcarcel Macua ◽  
Ian Davies ◽  
Aleksi Tukiainen ◽  
Enrique Munoz de Cote

Abstract We propose a fully distributed actor-critic architecture, named Diffusion-Distributed-Actor-Critic (Diff-DAC), with application to multitask reinforcement learning (MRL). During the learning process, agents communicate their value and policy parameters to their neighbours, diffusing the information across a network of agents with no need for a central station. Each agent can only access data from its local task, but aims to learn a common policy that performs well for the whole set of tasks. The architecture is scalable, since the computational and communication cost per agent depends on the number of neighbours rather than on the overall number of agents. We derive Diff-DAC from duality theory and provide novel insights into the actor-critic framework, showing that it is actually an instance of the dual-ascent method. We prove almost-sure convergence of Diff-DAC to a common policy under general assumptions that hold even for deep neural network approximations. Under more restrictive assumptions, we also prove that this common policy is a stationary point of an approximation of the original problem. Numerical results on multitask extensions of common continuous-control benchmarks demonstrate that Diff-DAC stabilises learning and has a regularising effect that induces higher performance and better generalisation properties than previous architectures.
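
The diffuse-then-adapt update the abstract alludes to can be sketched as follows (a minimal sketch under our own assumptions: plain parameter averaging over neighbourhoods followed by a local gradient step; `diffusion_step` and its arguments are illustrative names, not the paper's exact recursion):

    import numpy as np

    def diffusion_step(params, neighbours, grads, step_size=1e-2):
        """One diffuse-then-adapt update for every agent in the network.

        params:     dict mapping agent id -> parameter vector (np.ndarray)
        neighbours: dict mapping agent id -> list of neighbour ids (incl. itself)
        grads:      dict mapping agent id -> gradient of that agent's local
                    task objective at the current params
        """
        new_params = {}
        for agent, nbrs in neighbours.items():
            # Diffusion: average parameters over the local neighbourhood,
            # spreading information without a central station.
            averaged = np.mean([params[n] for n in nbrs], axis=0)
            # Adaptation: descend the agent's local objective only.
            new_params[agent] = averaged - step_size * grads[agent]
        return new_params

Note that the per-agent cost scales with the neighbourhood size len(nbrs) rather than the total number of agents, which is the scalability property the abstract claims.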

