HIGHWAY ON-RAMP CONTROL USING SLIDING MODES

Author(s):  
Jia Lei ◽  
Umit Ozguner
Author(s):  
Arnau Doria-Cerezo ◽  
Victor Repecho ◽  
Domingo Biel

2016 ◽  
Vol 2016 ◽  
pp. 1-10
Author(s):  
T. Osuna ◽  
O. E. Montano ◽  
Y. Orlov

The L2-gain analysis is extended to hybrid mechanical systems operating under unilateral constraints and admitting both sliding modes and collision phenomena. Sufficient conditions for such a system to be internally asymptotically stable and to possess an L2-gain less than an a priori given disturbance attenuation level are derived in terms of two independent inequalities, imposed on the continuous-time dynamics and on the discrete disturbance factor that occurs at the collision time instants. The former inequality may be viewed as the Hamilton-Jacobi inequality for discontinuous vector fields, and it is specified separately beyond and along the sliding modes that occur in the system between collisions. Thus interpreted, the former inequality imposes the desired integral input-to-state stability (iISS) property on the Filippov dynamics between collisions, whereas the latter inequality ensures that the impact dynamics (when the state trajectory hits the unilateral constraint) are input-to-state stable (ISS). Coupled together, these inequalities form a constructive procedure whose effectiveness is supported by a numerical study of an impacting double integrator driven by a sliding-mode controller. The desired disturbance attenuation level is shown to be satisfactorily achieved under external disturbances during the collision-free phase and in the presence of uncertainties in the transition phase.
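The benchmark mentioned above, an impacting double integrator under a sliding-mode controller, can be sketched as follows. This is a minimal illustration only, not the authors' exact setup: the sliding-surface slope c, relay gain k, restitution coefficient e, and the unilateral constraint x >= 0 are all illustrative assumptions.

```python
def simulate_sliding_mode_double_integrator(
    x0=1.0, v0=0.0, c=1.0, k=2.0, e=0.5, dt=1e-3, steps=20000
):
    """Double integrator x'' = u under the relay sliding-mode law
    u = -k*sign(s) with sliding surface s = v + c*x, subject to a
    unilateral constraint x >= 0 enforced by a Newton restitution
    rule v+ = -e*v- at each impact (illustrative parameters)."""
    x, v = x0, v0
    for _ in range(steps):
        s = v + c * x
        u = -k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
        x += dt * v          # explicit Euler integration
        v += dt * u
        if x < 0.0:          # constraint hit: reset with restitution
            x = 0.0
            v = -e * v
    return x, v
```

After a reaching phase, the trajectory slides along s = 0 (so x decays as x' = -c*x) and converges to a small chattering neighborhood of the origin, with any constraint crossings absorbed by the restitution reset.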


2015 ◽  
Vol 2015 ◽  
pp. 1-16
Author(s):  
Chao Lu ◽  
Yanan Zhao ◽  
Jianwei Gong

Reinforcement learning (RL) has shown great potential for motorway ramp control, especially under congestion caused by incidents. However, existing applications are limited to single-agent tasks and, being based on Q-learning, have inherent drawbacks for dealing with coordinated ramp control problems. To solve these problems, a Dyna-Q-based multiagent reinforcement learning (MARL) system named Dyna-MARL has been developed in this paper. Dyna-Q is an extension of Q-learning that combines model-free and model-based methods to obtain the benefits of both. The performance of Dyna-MARL is tested on a simulated motorway segment in the UK with real traffic data collected during AM peak hours. Test results compared with isolated RL and noncontrolled situations show that Dyna-MARL achieves superior traffic operation, increasing total throughput and reducing total travel time and CO2 emissions. Moreover, with a suitable coordination strategy, Dyna-MARL can maintain a highly equitable motorway system by balancing the travel time of road users from different on-ramps.
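The Dyna-Q mechanism that Dyna-MARL builds on can be sketched in tabular form: each real transition drives a direct Q-learning update, is stored in a learned model, and is then replayed in extra "planning" updates. This is a generic single-agent sketch, not the paper's multiagent system; the corridor environment, state/action counts, and all hyperparameters are illustrative assumptions.

```python
import random

def dyna_q(env_step, n_states, n_actions, episodes=50, planning_steps=10,
           alpha=0.1, gamma=0.95, eps=0.1, start=0, seed=0):
    """Tabular Dyna-Q: direct Q-learning from real transitions plus
    planning updates replayed from a learned deterministic model."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # (s, a) -> (reward, next_state, done)
    for _ in range(episodes):
        s, done = start, False
        while not done:
            if rng.random() < eps:                     # eps-greedy action
                a = rng.randrange(n_actions)
            else:                                      # random tie-breaking
                best = max(Q[s])
                a = rng.choice([i for i in range(n_actions) if Q[s][i] == best])
            r, s2, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])      # direct RL update
            model[(s, a)] = (r, s2, done)              # model learning
            for _ in range(planning_steps):            # planning updates
                (ps, pa), (pr, ps2, pdone) = rng.choice(list(model.items()))
                ptarget = pr + (0.0 if pdone else gamma * max(Q[ps2]))
                Q[ps][pa] += alpha * (ptarget - Q[ps][pa])
            s = s2
    return Q

def corridor(s, a):
    """Toy 5-state corridor: action 1 moves right, 0 moves left;
    reaching state 4 pays reward 1 and ends the episode."""
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return (1.0, s2, True) if s2 == 4 else (0.0, s2, False)
```

After training, e.g. `Q = dyna_q(corridor, n_states=5, n_actions=2)`, the greedy policy prefers moving right in every nonterminal state; the planning loop is what lets the single real reward propagate back quickly, which is the advantage Dyna-Q holds over plain Q-learning.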

