Intelligent Mobile Messaging for Smart Cities Based on Reinforcement Learning

2018 ◽  
pp. 227-253 ◽  
Author(s):  
Behrooz Shahriari ◽  
Melody Moh
2020 ◽  
Vol 14 (10) ◽  
pp. 1278-1285
Author(s):  
Jing Tang ◽  
Xin Wei ◽  
Jialin Zhao ◽  
Yun Gao

2019 ◽  
Vol 1343 ◽  
pp. 012058
Author(s):  
Jose Vazquez-Canteli ◽  
Thomas Detjeen ◽  
Gregor Henze ◽  
Jérôme Kämpf ◽  
Zoltan Nagy

2020 ◽  
Vol 21 (4) ◽  
pp. 295-302
Author(s):  
Haris Ballis ◽  
Loukas Dimitriou

Abstract
Smart Cities promise their residents quick journeys in a clean and sustainable environment. Despite the benefits accrued by the introduction of traffic management solutions (e.g. improved travel times, maximisation of throughput), these solutions usually fall short of assessing the environmental impact around the implementation areas. However, environmental performance is a primary goal of contemporary mobility planning, and solutions guaranteeing environmental sustainability are therefore significant. This study presents an advanced Artificial Intelligence (AI) based signal control framework able to incorporate environmental considerations into the core of the signal optimisation process. More specifically, a highly flexible Reinforcement Learning (RL) algorithm has been developed to identify efficient and, more importantly, environmentally friendly signal control strategies. The methodology is deployed on a large-scale micro-simulation environment able to realistically represent urban traffic conditions. Alternative signal control strategies are designed, applied, and evaluated against their achieved traffic efficiency and environmental footprint. Based on the results obtained from applying the methodology to a core part of the urban road network of Nicosia, Cyprus, the best strategy achieved a 4.8% increase in network throughput, a 17.7% decrease in average queue length and a remarkable 34.2% decrease in delay, while also reducing CO emissions by a considerable 8.1%. These encouraging results showcase the ability of RL-based traffic signal control to ensure improved air-quality conditions for the residents of dense urban areas.
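The abstract does not give implementation details, but the central idea of folding an emissions penalty into the reward of an RL signal controller can be illustrated with a minimal tabular Q-learning sketch. Everything below (the queue-based state encoding, the two phases, the reward weights and the toy environment) is an assumption for illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical tabular Q-learning controller for a single signalised junction.
# State: a tuple of queue lengths per approach; action: which phase gets green.
# The reward mixes traffic efficiency (negative queue cost) with a CO penalty,
# mirroring the idea of folding environmental cost into signal optimisation.
ALPHA = 0.1      # learning rate (assumed)
GAMMA = 0.95     # discount factor (assumed)
EPSILON = 0.1    # exploration rate (assumed)
CO_WEIGHT = 0.5  # weight of the emissions penalty in the reward (assumed)

q_table = defaultdict(float)  # maps (state, action) -> estimated return

def choose_phase(state, phases):
    """Epsilon-greedy selection of the next green phase."""
    if random.random() < EPSILON:
        return random.choice(phases)
    return max(phases, key=lambda a: q_table[(state, a)])

def reward(queue_lengths, co_emissions):
    """Negative cost: longer queues and higher CO output are both penalised."""
    return -(sum(queue_lengths) + CO_WEIGHT * co_emissions)

def update(state, action, r, next_state, phases):
    """One-step Q-learning backup."""
    best_next = max(q_table[(next_state, a)] for a in phases)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next - q_table[(state, action)])

def toy_step(state, action):
    """Stand-in environment with random dynamics, for illustration only."""
    queues = tuple(max(0, q + random.randint(-2, 2) - (3 if i == action else 0))
                   for i, q in enumerate(state))
    co = sum(queues) * random.uniform(0.8, 1.2)  # crude proxy: idling traffic emits more CO
    return queues, reward(queues, co)

# Toy training loop over a two-phase junction with two approaches.
phases = [0, 1]
state = (3, 3)
for _ in range(1000):
    action = choose_phase(state, phases)
    next_state, r = toy_step(state, action)
    update(state, action, r, next_state, phases)
    state = next_state
```

The CO_WEIGHT term is where the efficiency-versus-environment trade-off described in the abstract would be tuned; in the actual study this evaluation is performed against a large-scale micro-simulation rather than a toy environment.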


2019 ◽  
Vol 4 (3) ◽  
pp. 52 ◽  
Author(s):  
Serrano

Intelligent infrastructure, including smart cities and intelligent buildings, must learn and adapt to the variable needs and requirements of users, owners and operators in order to be future-proof and to provide a return on investment based on Operational Expenditure (OPEX) and Capital Expenditure (CAPEX). To address this challenge, this article presents a biological algorithm based on neural networks and deep reinforcement learning that enables infrastructure to be intelligent by making predictions about its different variables. In addition, the proposed method makes decisions based on real-time data. Intelligent infrastructure must be able to proactively monitor, protect and repair itself: this includes independent components and assets working the same way any autonomous biological organism would. Neurons of artificial neural networks are associated with a prediction or decision layer based on a deep reinforcement learning algorithm that takes into consideration all of its previous learning. The proposed method was validated against an intelligent infrastructure dataset with outstanding results: the intelligent infrastructure was able to learn, predict and adapt to its variables, and components could make relevant decisions autonomously, emulating a living biological organism in which data flows exhaustively.
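The coupling of a prediction layer with a reinforcement-learning decision layer that the abstract describes can be sketched in simplified form. The sketch below is not the author's algorithm: the linear predictor, the state buckets, the action set and all learning rates are hypothetical, chosen only to show how a predicted infrastructure variable can feed an RL decision about monitoring, protection or repair.

```python
import random

# Illustrative sketch: a prediction layer estimates the next value of an
# infrastructure variable (e.g. a normalised sensor reading), and a
# reinforcement-learning decision layer chooses an action from that prediction.
ACTIONS = ["monitor", "protect", "repair"]  # assumed action set
ALPHA_PRED = 0.05   # learning rate of the prediction layer (delta rule, assumed)
ALPHA_Q = 0.1       # learning rate of the decision layer (assumed)
GAMMA = 0.9
EPSILON = 0.1

weights = [0.0, 0.0]   # linear predictor: w0 + w1 * last_reading
q_values = {}          # (state_bucket, action) -> estimated value

def predict(last_reading):
    """Prediction layer: forecast the next value of the variable."""
    return weights[0] + weights[1] * last_reading

def train_predictor(last_reading, actual):
    """Delta-rule update so the predictor tracks the observed variable."""
    error = actual - predict(last_reading)
    weights[0] += ALPHA_PRED * error
    weights[1] += ALPHA_PRED * error * last_reading

def bucket(prediction):
    """Coarse state: is the predicted reading low, normal, or critical?"""
    return "low" if prediction < 0.3 else "normal" if prediction < 0.7 else "critical"

def decide(state):
    """Decision layer: epsilon-greedy choice over the learned action values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values.get((state, a), 0.0))

def learn_decision(state, action, reward, next_state):
    """Q-learning backup so the decision layer accumulates previous learning."""
    best_next = max(q_values.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_values.get((state, action), 0.0)
    q_values[(state, action)] = old + ALPHA_Q * (reward + GAMMA * best_next - old)
```

In this reading, each component of the infrastructure would run its own predictor and decision policy, so that assets act autonomously in the way the abstract compares to a biological organism.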


2019 ◽  
Vol 57 (4) ◽  
pp. 88-93 ◽  
Author(s):  
Lei Zhao ◽  
Jiadai Wang ◽  
Jiajia Liu ◽  
Nei Kato
