Use of Neural Networks for Modelling of Passenger Dynamics in Airport Terminal Environment

2014 ◽  
Vol 708 ◽  
pp. 107-112
Author(s):  
Pavlína Hlavsová ◽  
Jaromír Široký

Neural networks are computational methods inspired by the central nervous systems of animals, particularly the human brain. As modern mathematical methods, neural networks have been used to solve a wide variety of both practical and theoretical tasks. The aim of this paper is to illustrate the use of neural networks for modelling passenger dynamics in the airport terminal environment. Such a model could be used for passenger flow control, since effective flow management requires prediction of passenger dynamics for accurate passenger flow modelling and simulation.
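
As a rough illustration of the kind of model the abstract describes, the sketch below fits a small feed-forward network (a multilayer perceptron) that maps terminal-state features to expected zone occupancy. The abstract does not specify the architecture or inputs, so the feature set, network size, and synthetic data here are all invented for illustration.

```python
# Hypothetical sketch: a small feed-forward network mapping terminal-state
# features (time of day, scheduled departures, queue length) to the number of
# passengers expected in a terminal zone. All names and data are invented;
# this is not the authors' model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic training data: [hour of day, departures in next hour, current queue length]
X = rng.uniform([0, 0, 0], [24, 20, 300], size=(500, 3))
# Invented target: passengers present in the check-in area 15 minutes ahead
y = 10 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 20, size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)

# Predict expected occupancy for a given terminal state
print(model.predict([[17.5, 12, 150]]))
```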


2021 ◽  
pp. 2150461
Author(s):  
Xiang Li ◽  
Yan Bai ◽  
Kaixiong Su

The increase in urban traffic demand has directly affected some large cities, which now face more serious urban rail transit congestion. To ensure passenger travel efficiency and improve the service level of urban rail transit, we propose a multi-line collaborative passenger flow control model for urban rail transit networks. The model is based on passenger flow characteristics and congestion propagation rules. Considering passenger demand constraints, as well as section transport and station capacity constraints, a linear programming model is established with the aim of minimizing both the total delayed time of passengers and the control intensities at each station. The network formed by Line 2, Line 6 and Line 8 of the Beijing metro is used as the study case to analyze control stations, control durations and control intensities. The results show that the number of delayed passengers is significantly reduced and the average flow control ratio is relatively balanced across stations, which indicates that the model can effectively relieve congestion and provide quantitative references for urban rail transit operators when designing more effective passenger flow control measures.
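
To make the optimization structure concrete, here is a toy single-period sketch of an inflow-control linear program solved with scipy: the number of passengers admitted at each station is a decision variable, station and shared section capacities are constraints, and minimizing delayed passengers reduces to maximizing admissions. It is a simplification under assumed numbers, not the authors' multi-line, multi-period formulation.

```python
# Toy single-period inflow-control LP, loosely in the spirit of the model
# described above. All numbers and variable names are illustrative.
import numpy as np
from scipy.optimize import linprog

demand = np.array([800.0, 600.0, 500.0])       # passengers wanting to enter each station
station_cap = np.array([700.0, 550.0, 400.0])  # station entry capacity per period
section_cap = 1500.0                           # downstream section transport capacity

# Decision variables x_i: passengers admitted at station i.
# Delayed passengers = sum(demand - x), so minimizing delay == maximizing sum(x).
c = -np.ones(3)

# Shared section capacity: x_1 + x_2 + x_3 <= section_cap
A_ub = np.ones((1, 3))
b_ub = np.array([section_cap])

# 0 <= x_i <= min(demand_i, station capacity_i)
bounds = [(0, min(d, s)) for d, s in zip(demand, station_cap)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
admitted = res.x
print("admitted per station:", admitted)
print("delayed passengers:", (demand - admitted).sum())
print("flow-control ratio per station:", 1 - admitted / demand)
```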


2017 ◽  
Author(s):  
Stefania Bracci ◽  
Ioannis Kalfas ◽  
Hans Op de Beeck

Recent studies showed agreement between how the human brain and neural networks represent objects, suggesting that we might start to understand the underlying computations. However, we know that the human brain is prone to biases at many perceptual and cognitive levels, often shaped by learning history and evolutionary constraints. Here we explore one such bias, namely the bias to perceive animacy, and use the performance of neural networks as a benchmark. We performed an fMRI study that dissociated object appearance (what an object looks like) from object category (animate or inanimate) by constructing a stimulus set that includes animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the animate objects (e.g., a cow-mug). Behavioral judgments and deep neural networks categorized images mainly by animacy, setting all objects (lookalikes and inanimate) apart from the animate ones. In contrast, activity patterns in ventral occipitotemporal cortex (VTC) were strongly biased towards object appearance: animals and lookalikes were similarly represented and separated from the inanimate objects. Furthermore, this bias interfered with proper object identification, such as failing to signal that a cow-mug is a mug. The bias in VTC to represent a lookalike as animate was present even when participants performed a task requiring them to report the lookalikes as inanimate. In conclusion, VTC representations, in contrast to neural networks, fail to veridically represent objects when visual appearance is dissociated from animacy, probably due to biased processing of visual features typical of animate objects.
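
The abstract does not spell out how brain, behavioral, and network representations were compared; a common approach for this kind of question is representational similarity analysis (RSA), sketched below with random stand-in data for VTC activity patterns and deep-layer activations. Everything here is an assumption for illustration, not the authors' pipeline.

```python
# Hypothetical RSA sketch: compare the representational geometry of fMRI
# activity patterns with that of DNN features. Data are random stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli = 27  # e.g., triads of animal / lookalike / inanimate objects (illustrative)

# Fake data: rows = stimuli, columns = voxels or DNN units
vtc_patterns = rng.normal(size=(n_stimuli, 200))   # stand-in for VTC activity patterns
dnn_features = rng.normal(size=(n_stimuli, 4096))  # stand-in for a deep-layer activation

# Representational dissimilarity matrices (condensed upper-triangle form)
rdm_vtc = pdist(vtc_patterns, metric="correlation")
rdm_dnn = pdist(dnn_features, metric="correlation")

# Second-order similarity: how alike are the two representational geometries?
rho, p = spearmanr(rdm_vtc, rdm_dnn)
print(f"RDM correlation (Spearman rho): {rho:.3f}, p = {p:.3g}")
```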


2019 ◽  
Vol 8 (6) ◽  
pp. 243 ◽  
Author(s):  
Yong Han ◽  
Shukang Wang ◽  
Yibin Ren ◽  
Cheng Wang ◽  
Peng Gao ◽  
...  

Predicting the passenger flow of metro networks is of great importance for traffic management and public safety. However, such predictions are very challenging, as passenger flow is affected by complex spatial dependencies (nearby and distant) and temporal dependencies (recent and periodic). In this paper, we propose a novel deep-learning-based approach, named STGCNNmetro (spatiotemporal graph convolutional neural networks for metro), to collectively predict two types of passenger flow volumes—inflow and outflow—in each metro station of a city. Specifically, instead of representing metro stations by grids and employing conventional convolutional neural networks (CNNs) to capture spatiotemporal dependencies, STGCNNmetro transforms the city metro network to a graph and makes predictions using graph convolutional neural networks (GCNNs). First, we apply stereogram graph convolution operations to seamlessly capture the irregular spatiotemporal dependencies along the metro network. Second, a deep structure composed of GCNNs is constructed to capture the distant spatiotemporal dependencies at the citywide level. Finally, we integrate three temporal patterns (recent, daily, and weekly) and fuse the spatiotemporal dependencies captured from these patterns to form the final prediction values. The STGCNNmetro model is an end-to-end framework which can accept raw passenger flow-volume data, automatically capture the effective features of the citywide metro network, and output predictions. We test this model by predicting the short-term passenger flow volume in the citywide metro network of Shanghai, China. Experiments show that the STGCNNmetro model outperforms seven well-known baseline models (LSVR, PCA-kNN, NMF-kNN, Bayesian, MLR, M-CNN, and LSTM). We additionally explore the sensitivity of the model to its parameters and discuss the distribution of prediction errors.
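
As a point of reference for readers unfamiliar with graph convolutions, the sketch below implements one generic GCN layer (symmetrically normalized adjacency with self-loops) on a toy station graph. It shows how station features propagate along metro-network edges; it is not the authors' stereogram graph convolution, and the graph, features, and weights are invented.

```python
# Minimal numpy sketch of one graph convolution layer of the form
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), applied to a toy metro-station graph.
import numpy as np

rng = np.random.default_rng(2)
n_stations, in_dim, out_dim = 5, 8, 16

# Adjacency of a toy metro graph (1 = stations linked by a track section)
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

# Node features, e.g. recent inflow/outflow counts per station (random stand-ins)
H = rng.normal(size=(n_stations, in_dim))
W = rng.normal(scale=0.1, size=(in_dim, out_dim))  # learnable weights in a real model

A_hat = A + np.eye(n_stations)                 # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalisation

H_next = np.maximum(0, A_norm @ H @ W)         # ReLU(A_norm H W)
print(H_next.shape)  # (5, 16): new station embeddings after one graph convolution
```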

