Local-utopia policy selection for multi-objective reinforcement learning

Author(s):  
Simone Parisi ◽  
Alexander Blank ◽  
Tobias Viernickel ◽  
Jan Peters

Author(s):  
Akkhachai Phuphanin ◽  
Wipawee Usaha

Coverage control is crucial for the deployment of wireless sensor networks (WSNs). However, most coverage control schemes are based on single-objective optimization, such as coverage area alone, and do not consider conflicting objectives such as energy consumption, the number of working nodes, and wasteful overlapping areas. This paper proposes a Multi-Objective Optimization (MOO) coverage control scheme called Scalarized Q Multi-Objective Reinforcement Learning (SQMORL). Its two objectives are to maximize the coverage area and to minimize the overlapping area, thereby reducing energy consumption. Performance is evaluated both in simulation and on a multi-agent lighting control hardware testbed. Simulation results show that SQMORL obtains more efficient area coverage with fewer working nodes than other existing schemes. The hardware testbed results show that the SQMORL algorithm finds the optimal policy with good accuracy across repeated runs.
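The core idea behind scalarized multi-objective Q-learning is to collapse a vector of per-objective rewards (here, coverage to be maximized and overlap to be minimized) into a single weighted scalar, which ordinary Q-learning can then optimize. The sketch below is illustrative only, not the paper's implementation: the per-action objective vectors, the weights, and the single-state (bandit-style) setting are all assumptions made for brevity.

```python
import random

def scalarize(reward_vec, weights):
    """Linear scalarization: collapse a reward vector into one scalar."""
    return sum(w * r for w, r in zip(weights, reward_vec))

def scalarized_q_sketch(reward_vecs, weights, episodes=2000,
                        alpha=0.1, epsilon=0.1, seed=0):
    """Toy scalarized Q-learning over a single-state problem.

    Each action yields a fixed (coverage, -overlap) reward vector;
    we learn a Q-value per action from the scalarized reward using
    an epsilon-greedy policy.
    """
    rng = random.Random(seed)
    q = [0.0] * len(reward_vecs)
    for _ in range(episodes):
        if rng.random() < epsilon:                      # explore
            a = rng.randrange(len(reward_vecs))
        else:                                           # exploit
            a = max(range(len(q)), key=q.__getitem__)
        r = scalarize(reward_vecs[a], weights)
        q[a] += alpha * (r - q[a])                      # TD update
    return q

# Hypothetical per-action outcomes: (area covered, overlap area).
actions = [(0.9, 0.5), (0.7, 0.1), (0.4, 0.0)]
# Maximize coverage, minimize overlap: negate overlap before weighting.
vectors = [(c, -o) for c, o in actions]
weights = (0.6, 0.4)

q = scalarized_q_sketch(vectors, weights)
best = max(range(len(q)), key=q.__getitem__)  # action trading off both objectives
```

With these assumed numbers the scalarized values are 0.34, 0.38, and 0.24, so the learner settles on the middle action: good coverage with little overlap, rather than maximum coverage at high overlap. Changing the weight vector shifts which trade-off the learned policy prefers.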

