Reinforcement learning applications in dynamic pricing of retail markets

Author(s):  
C.V.L. Raju ◽  
Y. Narahari ◽  
K. Ravikumar
Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1818
Author(s):  
Jaein Song ◽  
Yun Ji Cho ◽  
Min Hee Kang ◽  
Kee Yeon Hwang

As ridesharing services (including taxis) are often run by private companies, profitability is the top operational priority. This leads drivers increasingly to refuse passengers bound for low-demand areas, where finding a subsequent fare is difficult, causing problems such as extended waiting times for passengers traveling to these regions. To resolve this problem during the worst-affected period, the study used Seoul's taxi data and a reinforcement learning algorithm to find appropriate region-by-region surge rates for ridesharing services between 10:00 p.m. and 4:00 a.m. In the reinforcement learning, the outcome of a centrality analysis was applied as a weight affecting drivers' destination choice probability. Furthermore, the reward function used in learning was adjusted according to whether the passenger waiting time was included or not; profit was used as the reward value. By applying a negative reward for passenger waiting time, the study was able to identify a more appropriate surge level. Across regions, the surge averaged 1.6: areas on the outskirts of the city and in residential districts showed a higher surge, while central areas had a lower surge. With these differentiated surges, drivers' refusals to take passengers can be lessened and passenger waiting times shortened. The supply of ridesharing services in low-demand regions can be increased by as much as 7.5%, allowing regional equity problems related to ridesharing services in Seoul to be reduced to a greater extent.
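The mechanism the abstract describes — learning a surge multiplier per region with a reward of profit minus a waiting-time penalty — can be sketched as a simple tabular, bandit-style reinforcement learner. The region names, surge action set, demand levels, elasticities, and the outcome model below are all illustrative assumptions, not the paper's actual Seoul data or algorithm.

```python
import random

# Minimal sketch, assuming a toy market model: each region has a base demand
# level; raising the surge dampens ride requests but raises driver acceptance
# (and thus shortens passenger waiting). None of these numbers come from the
# paper; they exist only to make the reward trade-off concrete.

REGIONS = ["central", "residential", "outskirts"]
SURGE_LEVELS = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]  # candidate surge multipliers
BASE_FARE = 10.0       # assumed base fare per trip
WAIT_PENALTY = 0.5     # weight of the negative waiting-time reward
DEMAND = {"central": 0.9, "residential": 0.5, "outskirts": 0.3}  # toy demand

def expected_outcome(region, surge):
    """Toy expected (profit, waiting time) for one period at a given surge."""
    requests = max(0.0, 1.0 - 0.6 * (surge - 1.0))           # demand elasticity
    accept = min(1.0, DEMAND[region] + 0.5 * (surge - 1.0))  # driver acceptance
    profit = requests * accept * BASE_FARE * surge
    waiting = (1.0 - accept) * 30.0                          # minutes
    return profit, waiting

def train(episodes=20000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy value learning of a surge level per region."""
    rng = random.Random(seed)
    q = {r: [0.0] * len(SURGE_LEVELS) for r in REGIONS}
    for _ in range(episodes):
        region = rng.choice(REGIONS)
        if rng.random() < epsilon:                 # explore a random surge
            a = rng.randrange(len(SURGE_LEVELS))
        else:                                      # exploit current estimate
            a = max(range(len(SURGE_LEVELS)), key=lambda i: q[region][i])
        profit, waiting = expected_outcome(region, SURGE_LEVELS[a])
        reward = profit - WAIT_PENALTY * waiting   # negative reward for waiting
        q[region][a] += alpha * (reward - q[region][a])
    # greedy surge per region after learning
    return {r: SURGE_LEVELS[max(range(len(SURGE_LEVELS)),
                                key=lambda i: q[r][i])]
            for r in REGIONS}

surge_by_region = train()
```

Under this assumed model the learner settles on higher surges for the low-demand outskirts than for the high-demand center, mirroring the paper's qualitative finding; the paper's full setup (centrality-weighted destination choice, real trip data) is richer than this single-step sketch.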


2016 ◽  
Vol 7 (5) ◽  
pp. 2187-2198 ◽  
Author(s):  
Byung-Gook Kim ◽  
Yu Zhang ◽  
Mihaela van der Schaar ◽  
Jang-Won Lee
