Optimizing Right-Turn Signals to Benefit Pedestrian–Vehicle Interactions

Author(s):  
Jiawen Wang ◽  
Chengcheng Yang ◽  
Jieshuang Dong ◽  
Xizhao Zhou

At most urban signalized intersections in right-hand-traffic countries, right-turning vehicles are usually not controlled by a dedicated signal. To address the problem of signal control under pedestrian–vehicle interaction, this paper establishes a right-turn signal optimization (RTSO) model that considers both efficiency and safety. First, the main factors influencing vehicle and pedestrian behavior during pedestrian–vehicle interaction are analyzed, and a pedestrian–vehicle interaction model (PVI model) at an urban road crosswalk is established. This model is used to determine the probabilities of four pedestrian–vehicle interaction situations. Then, based on traffic conflict theory, two objective functions are constructed: one minimizing the total delay of traffic participants under pedestrian–vehicle interaction, and one minimizing the potential conflicts under pedestrian–vehicle interaction. An RTSO model is then obtained by introducing a safety-efficiency coefficient to combine these two objective functions. Finally, the PVI model and the delay model are verified using video observation data and a cellular automata simulation platform of pedestrian–vehicle interaction. Using these models, a field signal plan, the delay-minimization scheme, the conflict-minimization scheme, and the proposed scheme are numerically analyzed under different yielding rates, and the proposed scheme is further analyzed under different safety-efficiency coefficients. The results show that the RTSO model has certain advantages in increasing safety and reducing delay. In addition, based on these results, the paper recommends values of the safety-efficiency coefficient for different application scenarios.
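The core of the RTSO model is a single objective that trades delay against conflicts through the safety-efficiency coefficient. Below is a minimal Python sketch of such a weighted combination, assuming a coefficient `alpha` in [0, 1] and normalized delay and conflict terms; the function names and the toy delay/conflict models are illustrative stand-ins, not the paper's calibrated formulation.

```python
# Minimal sketch of a combined safety-efficiency objective, assuming the RTSO
# model weights a normalized delay term and a normalized conflict term with a
# single coefficient alpha in [0, 1]. All names and models here are
# hypothetical, for illustration only.

def rtso_objective(green_share, alpha, delay_model, conflict_model,
                   delay_ref, conflict_ref):
    """Weighted combination of total delay and potential conflicts."""
    d = delay_model(green_share) / delay_ref        # normalized delay
    c = conflict_model(green_share) / conflict_ref  # normalized conflicts
    return alpha * c + (1.0 - alpha) * d            # larger alpha -> safety emphasis

if __name__ == "__main__":
    # Toy stand-in models: delay falls and conflicts rise as the right-turn
    # green share grows.
    delay = lambda g: 100.0 / (0.2 + g)     # veh-s, toy function
    conflicts = lambda g: 50.0 * g          # expected conflicts, toy function
    best = min((g / 100 for g in range(5, 96)),
               key=lambda g: rtso_objective(g, 0.4, delay, conflicts, 200.0, 25.0))
    print(f"toy optimal right-turn green share: {best:.2f}")
```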

2021 ◽  
Author(s):  
Wei Xia ◽  
Taimoor Akhtar ◽  
Christine A. Shoemaker

Abstract. This study introduces a novel dynamically normalized objective function (DYNO) for multi-variable (i.e., temperature and velocity) model calibration problems. DYNO combines the error metrics of multiple variables into a single objective function by dynamically normalizing each variable's error terms using information available during the search. DYNO is proposed to dynamically adjust the weight of each variable's error and hence balance the calibration to each variable during the optimization search. DYNO is applied to calibrate a tropical hydrodynamic model in which temperature and velocity observation data are used for calibration simultaneously. We also investigate the efficiency of DYNO by comparing the results obtained with DYNO to the results of calibrating to either temperature or velocity observations only. The results indicate that DYNO can balance the calibration in terms of water temperature and velocity, and that calibrating to only one variable (e.g., temperature or velocity) cannot guarantee the goodness-of-fit of the other variable (e.g., velocity or temperature). Our study suggests that both temperature and velocity measurements should be used for hydrodynamic model calibration in practice. Our example problems were computed with the parallel optimization method PODS, but DYNO can also easily be used in serial applications.
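A minimal sketch of the dynamic-normalization idea is shown below: each variable's error is rescaled by the range of errors observed so far during the search, so that temperature and velocity contribute comparably to a single objective. The running min/max normalization and the class name are illustrative assumptions, not DYNO's exact definition.

```python
import numpy as np

# Sketch of a dynamically normalized multi-variable objective in the spirit of
# DYNO: each variable's error is rescaled by the range of errors seen so far
# in the search, so neither temperature nor velocity dominates. The
# running-min/max normalization is an assumption for illustration; the paper's
# exact normalization may differ.

class DynamicallyNormalizedObjective:
    def __init__(self, n_vars):
        self.lo = np.full(n_vars, np.inf)   # best error seen per variable
        self.hi = np.full(n_vars, -np.inf)  # worst error seen per variable

    def __call__(self, errors):
        errors = np.asarray(errors, dtype=float)
        self.lo = np.minimum(self.lo, errors)
        self.hi = np.maximum(self.hi, errors)
        span = np.where(self.hi > self.lo, self.hi - self.lo, 1.0)
        return float(np.mean((errors - self.lo) / span))  # equal weight per variable

# Toy usage: combine RMSE-like errors for temperature (deg C) and velocity (m/s).
obj = DynamicallyNormalizedObjective(n_vars=2)
for temp_err, vel_err in [(1.2, 0.30), (0.9, 0.35), (1.0, 0.20)]:
    print(round(obj([temp_err, vel_err]), 3))
```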


2011 ◽  
Vol 16 (1) ◽  
pp. 73-81
Author(s):  
Gregory M. Benton ◽  
Bitapi C. Sinha

This first study of interpretation in India examined the effectiveness of interpretive facilities and exhibits in conveying conservation messages. Kanha Tiger Reserve features a large budget, advanced technology, and international visitation. The single-case, multiple-methods approach examined visitor knowledge and behavior regarding exhibits. Pre- and post-program surveys, video observation of visitor flow through the interpretive center, and the readability of exhibit text were analyzed. Results from the survey indicate that visitor knowledge increased in spite of noise in the center. Video observation data suggest that visitor interest, measured by attention index and holding power, was greatest for the management-related exhibits and decreased as participants moved further into the interpretive center. Images of tigers were found to be more important for attraction and holding power than the center's advanced floor light panels and other interpretive techniques. Visitors favored dioramas, maps, and models over text for readability.


2020 ◽  
Vol 224 (1) ◽  
pp. 1-16
Author(s):  
Mianshui Rong ◽  
Xiaojun Li ◽  
Lei Fu

SUMMARY Given the improvements that have been made in the forward calculation of seismic noise horizontal-to-vertical spectral ratios (NHVSRs) and earthquake ground motion HVSRs (EHVSRs), a number of HVSR inversion methods have been proposed to identify underground velocity structures. Compared with studies on NHVSR inversion, research on EHVSR-based inversion methods is relatively rare. In this paper, to make full use of the widely available and constantly accumulating strong-motion observation data, we propose an S-wave HVSR inversion method based on the diffuse-field approximation, in which the S-wave components of earthquake ground motion recordings are used as the data source. Improvements to the objective function have also been achieved in this work: an objective function with a slope term is introduced, which mitigates the multi-solution phenomenon encountered when working with multi-peaked HVSR curves. A synthetic case is then used to verify the proposed method, and the method is applied to invert the underground velocity structures of six KiK-net stations based on earthquake observations. The results show that the proposed S-wave EHVSR inversion method is effective for identifying underground velocity structures.
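A minimal sketch of an HVSR misfit augmented with a slope term is given below, assuming the slope term penalizes differences between the frequency derivatives of the observed and modelled HVSR curves so that candidate models with the correct peak shape are preferred; the weight `w` and the function names are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

# Sketch of an HVSR misfit with an added slope term. The slope term here
# compares the derivatives (shapes) of observed and modelled HVSR curves in
# addition to their amplitudes; weight w and names are illustrative only.

def hvsr_misfit(freqs, hvsr_obs, hvsr_model, w=0.5):
    amp_term = np.mean((hvsr_obs - hvsr_model) ** 2)
    slope_obs = np.gradient(hvsr_obs, freqs)
    slope_mod = np.gradient(hvsr_model, freqs)
    slope_term = np.mean((slope_obs - slope_mod) ** 2)
    return amp_term + w * slope_term

# Toy example: a model curve with the right peak shape scores better than a
# flat curve with a similar average amplitude error.
f = np.linspace(0.5, 10.0, 200)
obs = 1.0 + 2.0 * np.exp(-((f - 2.0) ** 2) / 0.5)
good = 1.0 + 1.9 * np.exp(-((f - 2.1) ** 2) / 0.5)
flat = np.full_like(f, obs.mean())
print(hvsr_misfit(f, obs, good), hvsr_misfit(f, obs, flat))
```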


2020 ◽  
Author(s):  
Guillaume Drouen ◽  
Daniel Schertzer ◽  
Ioulia Tchiguirinskaia

As cities come under greater pressure from the global impacts of climate change, in particular the risk of heavier rainfall and flooding, there is a growing need to establish a hierarchical form of resilience in which critical infrastructure can become sustainable. The main difficulty is that geophysics and urban dynamics are strongly nonlinear, with an associated extreme variability over a wide range of space-time scales. To better link fundamental and experimental research on these topics, an advanced urban hydro-meteorological observatory with the associated SaaS developments, the Fresnel platform (https://hmco.enpc.fr/portfolio-archive/fresnel-platform/), has been purposely set up to provide the concerned communities with the necessary observation data thanks to an unprecedented deployment of higher-resolution sensors that readily yield Big Data.

To give an example, the installation of the polarimetric X-band radar on the ENPC campus (east of Paris) introduced a paradigm change in the prospects for environmental monitoring in Île-de-France. The radar has been operated since May 2015 and has several characteristics that make it of central importance for the environmental monitoring of the region. In particular, it demonstrated the crucial importance of high-resolution 3D+1 data, whereas earlier remote sensing developments had mostly focused on vertical measurements.

This presentation discusses the associated Fresnel SaaS (Software as a Service) platform as an example of present-day IT tools for dynamically enhancing urban resilience. It is rooted in an integrated suite of modular components based on an asynchronous, event-driven JavaScript runtime environment. It features a non-blocking interaction model and high scalability to ensure optimized availability. It includes a comprehensive database, accessible in real time, to support multi-criteria choices, and it has been built up through stakeholder consultation and participative co-creation. At the same time, these components are designed so that they can be tuned for specific case studies with the help of an adjustable visual interface. Depending on the case study, the components can be integrated to satisfy particular needs with the help of maps, other visual tools, and forecasting systems, possibly from third parties.

All these developments have greatly benefited from the support of the Chair "Hydrology for a Resilient City" (https://hmco.enpc.fr/portfolio-archive/chair-hydrology-for-resilient-cities/), endowed by a world leader in industrial water management, and from previous EU framework programmes. To sustain the necessary public-private partnerships, Fresnel facilitates synergies between research and innovation and fosters theoretical research, national and international collaborative networking, and the development of various aspects of data science for a resilient city.


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Zhaowei Qu ◽  
Yuzhou Duan ◽  
Hongyu Hu ◽  
Xianmin Song

To estimate the capacity of roundabouts more accurately, the priority rank of each stream is determined through the classification technique given in the Highway Capacity Manual 2010 (HCM2010), which is based on a macroscopic analysis of the relationship between entry flow and circulating flow. A conflict matrix is then established using the additive conflict flow method, considering the impacts of traffic characteristics and of limited priority under high volume. Correspondingly, the conflict relationships of the streams are built using probability theory. Furthermore, the entry capacity model of roundabouts is built, and a sensitivity analysis is conducted on the model parameters. Finally, the entrance delay model is derived using queuing theory, and the proposed capacity model is compared with the model proposed by Wu and with that in the HCM2010. The results show that the capacity calculated by the proposed model is lower than the others for an A-type roundabout, while it is basically consistent with the estimated values from the HCM2010 for a B-type roundabout.
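For reference, the HCM 2010 single-lane roundabout entry capacity takes the exponential form c = 1130·exp(−1.0×10⁻³·v_c), with the circulating flow v_c in pc/h. The sketch below pairs that published expression with a simple M/M/1-style delay approximation purely for illustration; the paper's entry capacity model derived from the conflict matrix and its queuing-theory entrance delay model are more detailed.

```python
import math

# Sketch relating circulating flow, entry capacity, and entry delay.
# Capacity follows the HCM 2010 single-lane form c = 1130 * exp(-1.0e-3 * v_c);
# the delay is a simple M/M/1-style approximation used only for illustration,
# not the paper's derived model.

def entry_capacity_hcm2010(circulating_flow_pcph):
    return 1130.0 * math.exp(-1.0e-3 * circulating_flow_pcph)

def entry_delay_s(entry_flow_pcph, capacity_pcph):
    x = entry_flow_pcph / capacity_pcph          # degree of saturation
    if x >= 1.0:
        return float("inf")                      # oversaturated entry
    service_time = 3600.0 / capacity_pcph        # seconds per vehicle
    return service_time / (1.0 - x)              # mean time in queue + service

for v_c in (300, 600, 900):
    c = entry_capacity_hcm2010(v_c)
    print(v_c, round(c), round(entry_delay_s(450, c), 1))
```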


2021 ◽  
Vol 13 (23) ◽  
pp. 4747
Author(s):  
Sergey Korolev ◽  
Aleksei Sorokin ◽  
Igor Urmanov ◽  
Aleksandr Kamaev ◽  
Olga Girina

Currently, video observation systems are actively used for volcano activity monitoring. Video cameras allow us to remotely assess the state of a dangerous natural object and, if the technical capabilities are available, to detect thermal anomalies. However, the continuous use of visible-band cameras instead of special tools (for example, thermal cameras) produces a large number of images that require special algorithms, both for preliminarily filtering out images in which the area of interest is hidden by weather or illumination conditions, and for detecting volcano activity. Existing algorithms analyze a preselected region of interest in the frame, which may occasionally be changed to observe events in a specific area of the volcano; setting this region in advance and keeping it up to date is difficult, especially for an observation network with multiple cameras. The accumulated multiyear archives of images with documented eruptions make it possible to use modern deep learning technologies for whole-frame analysis to solve this task. The article presents the development of algorithms to classify volcano images produced by video observation systems. The focus is on algorithms for creating a labelled dataset from an unstructured archive, using both existing techniques and techniques proposed by the authors. The developed solution was tested on the archive of the video observation system for the volcanoes of Kamchatka, in particular the observation data for the Klyuchevskoy volcano. The tests show the high efficiency of convolutional neural networks for volcano image classification, with a classification accuracy of 91%. The resulting dataset of 15,000 images, labelled into three scene classes, is the first dataset of this kind for Kamchatka volcanoes. It can be used to develop monitoring systems for other stratovolcanoes that occupy most of the video frame.
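A minimal sketch of a three-class whole-frame scene classifier in PyTorch is shown below; the layer sizes, the 128×128 input resolution, and the class labels are illustrative assumptions and do not reproduce the authors' network.

```python
import torch
import torch.nn as nn

# Sketch of a three-class scene classifier for fixed-camera volcano images,
# in the spirit of the CNN approach described above. The architecture, input
# resolution (3x128x128), and class names are assumptions for illustration.

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, 3),   # e.g. clear / obscured / activity (assumed labels)
)

# Forward pass on a dummy batch to confirm tensor shapes.
dummy = torch.randn(4, 3, 128, 128)
print(classifier(dummy).shape)   # torch.Size([4, 3])
```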


Author(s):  
Marwa K. Farhan ◽  
Muayad S. Croock

Wireless devices have been providing extensive services over recent years. Since most of these devices are randomly distributed, a fundamental trade-off to be addressed concerns the transmission rate, latency, and packet loss of ad hoc route selection in device-to-device (D2D) networks. Therefore, this paper introduces a notion of weighted transmission rate and total delay, together with the probability of packet loss. By designing optimal transmission algorithms, the proposed algorithm aims to select the path for device-to-device communication that maximizes the transmission rate while maintaining minimum delay and packet loss. Using the Lagrange optimization method, the Lagrangian optimization of rate, delay, and probability of packet loss (LORDP) algorithm is modeled. For a practical design, we consider a fading wireless channel scenario. The proposed algorithm computes the optimal cost objective function and yields the best possible solution for the corresponding path. Moreover, a simulation of the optimized algorithm is presented based on the optimal cost objective function. Simulation results establish the efficiency of the proposed LORDP algorithm.
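A minimal sketch of a Lagrangian-style path cost for D2D route selection is given below, combining the bottleneck rate, the total delay, and the end-to-end loss probability with fixed multipliers; the multipliers and the per-link fields are illustrative assumptions, not the exact LORDP formulation.

```python
# Sketch of a Lagrangian-style path cost that trades off rate, delay, and
# packet-loss probability for D2D route selection. Multipliers lambda_d and
# lambda_p and the per-link fields are hypothetical, for illustration only.

def path_cost(links, lambda_d=1.0, lambda_p=100.0):
    """links: list of dicts with 'rate' (Mb/s), 'delay' (ms), 'loss' (probability)."""
    rate = min(l["rate"] for l in links)         # bottleneck rate of the path
    delay = sum(l["delay"] for l in links)       # total path delay
    success = 1.0
    for l in links:
        success *= (1.0 - l["loss"])
    loss = 1.0 - success                         # end-to-end loss probability
    # Maximize rate while penalizing delay and loss -> minimize the negative.
    return -(rate - lambda_d * delay - lambda_p * loss)

paths = {
    "A": [{"rate": 20, "delay": 5, "loss": 0.02}, {"rate": 15, "delay": 4, "loss": 0.01}],
    "B": [{"rate": 30, "delay": 12, "loss": 0.05}],
}
best = min(paths, key=lambda k: path_cost(paths[k]))
print("selected path:", best)
```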


Filomat ◽  
2018 ◽  
Vol 32 (4) ◽  
pp. 1273-1283
Author(s):  
Marija Krstic

In this paper we study a stochastic model for tumor-immune interaction with delay. More precisely, we extend the deterministic delay tumor-immune interaction model by introducing random perturbations and obtain a stochastic model. For this model, we first prove the existence and uniqueness of the global positive solution, and then, by using suitable Lyapunov functionals, we obtain stability conditions for the equilibrium state in which tumor cells and resting cells approach their carrying capacities. We also carry out numerical simulations with reliable data to illustrate our theoretical findings.
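A minimal sketch of simulating such a stochastically perturbed delay model with the Euler-Maruyama scheme is given below; the delayed-logistic drift and all parameter values are toy stand-ins for the tumor-immune dynamics, intended only to show how the delay and the random perturbation enter a numerical simulation.

```python
import numpy as np

# Sketch of Euler-Maruyama integration of a stochastically perturbed delay
# equation. The delayed-logistic drift and the parameters r, K, sigma, tau
# are illustrative stand-ins, not the paper's tumor-immune model.

rng = np.random.default_rng(0)
r, K, sigma, tau = 0.3, 1.0, 0.05, 2.0   # growth rate, capacity, noise, delay
dt, T = 0.01, 50.0
n, lag = int(T / dt), int(tau / dt)

x = np.empty(n + 1)
x[:lag + 1] = 0.1                         # constant history on [-tau, 0]
for k in range(lag, n):
    drift = r * x[k] * (1.0 - x[k - lag] / K)      # delayed logistic drift
    diffusion = sigma * x[k]                       # multiplicative noise
    x[k + 1] = x[k] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()

print("final state near carrying capacity:", round(x[-1], 3))
```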


2014 ◽  
Vol 587-589 ◽  
pp. 2151-2155
Author(s):  
Juan Zi Zhang

Based on an analysis of traffic safety factors at freeway entrances and exits, this paper applies traffic behavior and traffic conflict theory, selects the number of conflicts as the evaluation index, and uses SSAM software to analyze the safety factors in the freeway entrance area. On this basis, the paper puts forward recommendations for intersection traffic safety design.


2015 ◽  
Vol 2015 ◽  
pp. 1-6 ◽  
Author(s):  
Ying Liu ◽  
Cheng Sun ◽  
Yiming Bie

Unidirectional pedestrian movement is a special phenomenon in the evacuation process of large public buildings and of urban environments at the pedestrian scale. Several macroscopic models of collective behavior have been built to predict pedestrian flow. However, current models do not explain the diffusion behavior in pedestrian crowd movement, which can be important in representing spatial-temporal crowd density differentiation during the movement process. This study builds a macroscopic model for describing crowd diffusion behavior and evaluating unidirectional pedestrian flow. The proposed model discretizes time and describes walking speed with a geometric distribution to calculate the downstream pedestrian crowd flow and analyze the movement process from the upstream number of pedestrians and the average walking speed. The simulated results are calibrated against video observation data from a baseball stadium to verify the model's precision. Statistical results verify that the proposed pedestrian diffusion model can accurately describe macroscopic pedestrian movement behavior within the margin of error.
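A minimal sketch of the geometric speed-discretization idea is shown below: pedestrians released upstream are assumed to reach the downstream section after a geometrically distributed number of extra time steps, and downstream counts follow from convolving the upstream release profile with that distribution. The parameter p, the base travel time, and the function name are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

# Sketch of discretizing walking speed with a geometric distribution: each
# pedestrian released upstream reaches the downstream section after k extra
# time steps with probability P(k) = (1 - p)**k * p. Downstream counts are
# the convolution of the upstream release profile with this distribution.
# p and base_steps are illustrative assumptions.

def downstream_counts(upstream, p=0.4, base_steps=5, horizon=40):
    k = np.arange(horizon)
    travel_pmf = (1.0 - p) ** k * p              # geometric distribution of extra delay
    arrivals = np.convolve(upstream, travel_pmf)[:horizon]
    shifted = np.zeros(horizon)
    shifted[base_steps:] = arrivals[:horizon - base_steps]  # minimum travel time
    return shifted

upstream = np.zeros(40)
upstream[0:10] = 20                              # 20 pedestrians released per step for 10 steps
print(np.round(downstream_counts(upstream), 1))
```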

