Upper-semi-continuity and cone-concavity of multi-valued vector functions in a duality theory for vector optimization

1997 · Vol 46 (2) · pp. 169-192
Author(s): Issoufou Kouada

Filomat · 2017 · Vol 31 (14) · pp. 4555-4570
Author(s): I. Ahmad, Krishna Kummari, Vivek Singh, Anurag Jayswal

The aim of this work is to study optimality conditions for nonsmooth minimax programming problems involving locally Lipschitz functions, by means of the notion of convexifactors introduced in [J. Dutta, S. Chandra, Convexifactors, generalized convexity and vector optimization, Optimization, 53 (2004) 77-94]. Building on these optimality conditions, Mond-Weir and Wolfe type duality theories are then developed for such minimax programming problems. The results in this paper extend the corresponding results in the literature that were obtained using the generalized Clarke subdifferential.
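The problem class treated in this line of work can be sketched as follows; the notation here is a generic illustration, not taken verbatim from the paper:

```latex
% Nonsmooth minimax programming problem (illustrative formulation):
\min_{x \in \mathbb{R}^n} \; \max_{1 \le i \le p} f_i(x)
\quad \text{subject to} \quad g_j(x) \le 0, \quad j = 1, \dots, m,
```

where each $f_i$ and $g_j$ is locally Lipschitz. A convexifactor $\partial^* h(x)$ is a closed (not necessarily convex or compact) set of generalized gradients that can be strictly smaller than the Clarke subdifferential $\partial^\circ h(x)$, so optimality and duality conditions stated in terms of convexifactors can be sharper than their Clarke-subdifferential counterparts.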





Author(s): Walter Briec, Paola Ravelojaona


2021 · Vol 36
Author(s): Sergio Valcarcel Macua, Ian Davies, Aleksi Tukiainen, Enrique Munoz de Cote

Abstract: We propose a fully distributed actor-critic architecture, named Diffusion-Distributed-Actor-Critic (Diff-DAC), with application to multitask reinforcement learning (MRL). During the learning process, agents communicate their value and policy parameters to their neighbours, diffusing the information across a network of agents with no need for a central station. Each agent can only access data from its local task but aims to learn a common policy that performs well for the whole set of tasks. The architecture is scalable, since the computational and communication cost per agent depends on the number of neighbours rather than on the total number of agents. We derive Diff-DAC from duality theory and provide novel insights into the actor-critic framework, showing that it is actually an instance of the dual-ascent method. We prove almost-sure convergence of Diff-DAC to a common policy under general assumptions that hold even for deep neural-network approximations. Under more restrictive assumptions, we also prove that this common policy is a stationary point of an approximation of the original problem. Numerical results on multitask extensions of common continuous-control benchmarks demonstrate that Diff-DAC stabilises learning and has a regularising effect that induces higher performance and better generalisation than previous architectures.
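The diffusion idea described above, neighbours exchanging parameters and averaging them with no central station, can be sketched as a simple consensus step. The uniform combination weights, the network layout, and the function name below are illustrative assumptions, not the paper's actual update rule:

```python
def diffusion_step(params, adjacency):
    """One diffusion (consensus) step over a network of agents.

    Each agent replaces its parameter vector with the uniform average of
    its own vector and its neighbours' vectors. This is a simple
    illustrative combination rule; Diff-DAC's actual weights may differ.

    params: list of parameter vectors (lists of floats), one per agent.
    adjacency: symmetric 0/1 matrix; adjacency[i][j] == 1 iff agents
        i and j are neighbours.
    """
    n = len(params)
    new_params = []
    for i in range(n):
        # Closed neighbourhood: the agent itself plus its neighbours.
        hood = [j for j in range(n) if j == i or adjacency[i][j]]
        dim = len(params[i])
        avg = [sum(params[j][k] for j in hood) / len(hood)
               for k in range(dim)]
        new_params.append(avg)
    return new_params
```

On a connected network, repeated diffusion steps drive all agents' parameters to a common value, which illustrates how agents can approach a common policy without any central coordinator, while each agent's per-step cost depends only on its number of neighbours.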


