Data-Driven Analysis of Airport Security Checkpoint Operations

Aerospace ◽  
2020 ◽  
Vol 7 (6) ◽  
pp. 69
Author(s):  
Stef Janssen ◽  
Régis van der Sommen ◽  
Alexander Dilweg ◽  
Alexei Sharpanskykh

Airport security checkpoints are the most important bottleneck in airport operations, but few studies aim to understand them empirically. In this work we address this lack of data-driven quantitative analysis and insights about the security checkpoint process. To this end, we followed a total of 2277 passengers through the security checkpoint process at Rotterdam The Hague Airport (RTM), and published detailed timing data about their journey through the process. This dataset is unique in the scientific literature, and can aid future researchers in the modelling and analysis of the security checkpoint. Our analysis showed important differences between six identified passenger types. Business passengers were found to be the fastest group, while passengers with reduced mobility (PRM) and families were the slowest two groups. We also identified events that hindered the performance of the security checkpoint, in which groups of passengers had to wait a long time for security employees or other passengers. A total of 335 such events occurred, with an average of 2.3 passengers affected per event. A passenger with a high luggage drop time was followed by such an event in 27% of cases, making high luggage drop times the most frequent cause. To mitigate this waiting time for subsequent passengers in the security checkpoint process, we performed an experiment with a so-called service lane. This lane was used to process passengers expected to be slow, while the remaining lanes processed the other passengers. It was found that the mean throughput of the service lane setups was higher than the average throughput of the standard lanes, making it a promising setup to investigate further.
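To make the reported service-lane comparison concrete, the sketch below shows one way throughput per lane could be computed from per-passenger timing records. It is a minimal illustration with made-up numbers and hypothetical column names (`lane_id`, `lane_type`, `exit_time`), not the authors' analysis code or the schema of the published RTM dataset.

```python
# Minimal sketch (not the authors' code): compare mean throughput of a dedicated
# service lane against standard lanes, given a hypothetical per-passenger table.
import pandas as pd

# Hypothetical example records; exit_time is seconds since the start of observation.
records = pd.DataFrame({
    "lane_id":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "lane_type": ["service"] * 3 + ["standard"] * 6,
    "exit_time": [40, 95, 150, 60, 140, 230, 55, 130, 215],
})

def lane_throughput(exit_times: pd.Series) -> float:
    """Passengers processed per hour over the observed span of one lane."""
    span_h = (exit_times.max() - exit_times.min()) / 3600.0
    return (len(exit_times) - 1) / span_h if span_h > 0 else float("nan")

per_lane = records.groupby(["lane_type", "lane_id"])["exit_time"].apply(lane_throughput)
print(per_lane.groupby(level="lane_type").mean())  # mean throughput per lane type
```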

Author(s):  
Daniel Roten ◽  
Kim B. Olsen

ABSTRACT We use deep learning to predict surface-to-borehole Fourier amplification functions (AFs) from discretized shear-wave velocity profiles. Specifically, we train a fully connected neural network and a convolutional neural network using mean AFs observed at ∼600 KiK-net vertical array sites. Compared with predictions based on theoretical SH 1D amplifications, the neural network (NN) results in up to 50% reduction of the mean squared log error between predictions and observations at sites not used for training. In the future, NNs may lead to a purely data-driven prediction of site response that is independent of proxies or simplifying assumptions.
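As a rough illustration of the approach described, the following sketch trains a small fully connected network to map a discretized shear-wave velocity profile to an amplification function using a mean squared log error. The layer sizes, number of depth bins and frequencies, and the synthetic data are assumptions for illustration; they are not the architecture or data of Roten and Olsen.

```python
# Minimal sketch of a fully connected network predicting an amplification
# function (AF) from a discretized Vs profile, trained with mean squared log error.
import torch
import torch.nn as nn

n_depth_bins, n_freqs = 30, 50  # assumed discretization, not from the paper

model = nn.Sequential(
    nn.Linear(n_depth_bins, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_freqs),      # predicted log-amplification per frequency
)

def msle(pred_log_af, target_af):
    """Mean squared log error between predicted and observed amplification."""
    return ((pred_log_af - torch.log(target_af)) ** 2).mean()

# Synthetic stand-in data: random velocity profiles and positive target AFs.
vs_profiles = torch.rand(64, n_depth_bins) * 2000 + 200   # m/s
target_af = torch.rand(64, n_freqs) * 5 + 0.5

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    optimizer.zero_grad()
    loss = msle(model(vs_profiles), target_af)
    loss.backward()
    optimizer.step()
```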


2021 ◽  
Author(s):  
Ashley Gillman ◽  
Stephen Rose ◽  
Jye Smith ◽  
Jason A Dowling ◽  
Nicholas Dowson

Abstract
Background/Aims: Patient motion during positron emission tomography (PET) imaging can corrupt the image by causing blurring and quantitation error due to misalignment with the attenuation correction image. Data-driven techniques for tracking motion in PET imaging allow for retrospective motion correction, where motion may not have been prospectively anticipated.
Methods: A two-minute PET acquisition of a Hoffman phantom was acquired on a Biograph mCT Flow, during which the phantom was rocked, simulating periodic motion with varying frequency. Motion was tracked using the sensitivity method, the axial centre-of-mass (COM) method, a novel 3D-COM method, and the principal component analysis (PCA) method. A separate two-minute acquisition was acquired with no motion as a gold standard. The tracking signal was discretised into 10 gates using k-means clustering. Motion was modelled and corrected using the reconstruct-transform-add (RTA) technique, leveraging Multimodal Image Registration using Block-matching and Robust Regression (Mirorr) for rigid registration of non-attenuation-corrected 4D PET and Software for Tomographic Image Reconstruction (STIR) for PET reconstructions. Evaluation was performed by segmenting white matter (WM) and grey matter (GM) in the attenuation correction computed tomography (CT). The mean uptake in the GM region was compared with that in the WM region. Additionally, the difference between the intensity distributions of the WM and GM regions was measured with the t-statistic from a Welch's t-test.
Results: The difference in mean uptake between the WM and GM regions ranked the techniques in order of efficacy: no correction, sensitivity, axial-COM, 3D-COM, PCA, no motion. PCA correction had a greater WM/GM separation, as measured by the t-value, than the no-motion scan. This was attributed to interpolation blurring during motion correction reducing class variance.
Conclusion: Of the techniques examined, PCA was found to be most effective for tracking rigid motion. The sensitivity and axial-COM techniques are mostly sensitive to axial motion, and so were ineffective in this phantom experiment. 3D-COM demonstrates improved transaxial motion sensitivity, but not to the level of effectiveness of PCA.
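The sketch below illustrates, under stated assumptions, the core of the PCA tracking and k-means gating steps described in the Methods: extract a surrogate motion signal as the first principal component of time-binned count data, then discretise it into 10 gates. The frame dimensions and Poisson counts are hypothetical stand-ins, not the study's data or pipeline.

```python
# Minimal sketch of data-driven motion tracking: PCA surrogate signal plus
# k-means gating into 10 gates, on hypothetical time-binned PET count data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical input: counts per short time frame, flattened over detector bins.
n_frames, n_bins = 240, 500            # e.g. 0.5 s frames over a 2 min scan
frames = rng.poisson(lam=20, size=(n_frames, n_bins)).astype(float)

# PCA method: the first principal component of the frame-by-frame variation
# is taken as the surrogate motion signal.
signal = PCA(n_components=1).fit_transform(frames).ravel()

# Discretise the tracking signal into 10 gates via k-means clustering; motion
# would then be modelled and corrected per gate (e.g. with RTA).
gates = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(
    signal.reshape(-1, 1)
)
print(np.bincount(gates))  # number of time frames assigned to each gate
```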


2017 ◽  
Author(s):  
Marielle Saunois ◽  
Philippe Bousquet ◽  
Benjamin Poulter ◽  
Anna Peregon ◽  
Philippe Ciais ◽  
...  

Abstract. Following the recent Global Carbon Project (GCP) synthesis of the decadal methane (CH4) budget over 2000–2012 (Saunois et al., 2016), we analyse here the same dataset with a focus on quasi-decadal and inter-annual variability in CH4 emissions. The GCP dataset integrates results from top-down studies (exploiting atmospheric observations within atmospheric inverse-modelling frameworks) and bottom-up models, inventories, and data-driven approaches (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations). The annual global methane emissions from top-down studies, which by construction match the observed methane growth rate within their uncertainties, all show an increase in total methane emissions over the period 2000–2012, but this increase is not linear over the 13 years. Despite differences between individual studies, the mean emission anomaly of the top-down ensemble shows no significant trend in total methane emissions over the period 2000–2006, during the plateau of atmospheric methane mole fractions, nor over the period 2008–2012, during the renewed atmospheric methane increase. However, the top-down ensemble mean produces an emission shift between 2006 and 2008, leading to 22 [16–32] Tg CH4 yr−1 higher methane emissions over the period 2008–2012 compared to 2002–2006. This emission increase mostly originated from the tropics, with a smaller contribution from mid-latitudes and no significant change from boreal regions. The regional contributions remain uncertain in top-down studies. Tropical South America and South and East Asia seem to contribute the most to the emission increase in the tropics. However, these two regions have only limited atmospheric measurements and therefore remain poorly constrained. The sectorial partitioning of this emission increase between the periods 2002–2006 and 2008–2012 differs from one atmospheric inversion study to another. However, all top-down studies suggest smaller changes in fossil fuel emissions (from oil, gas, and coal industries) compared to the mean of the bottom-up inventories included in this study. This difference is partly driven by a smaller emission change in China in the top-down studies compared to the estimate in the EDGARv4.2 inventory, which should be revised to smaller values in the near future. Though the sectorial partitioning of six of the eight individual top-down studies is not consistent with the observed change in atmospheric 13CH4, the partitioning derived from the ensemble mean is consistent with this isotopic constraint. At the global scale, the top-down ensemble mean suggests that the dominant contribution to the resumed atmospheric CH4 growth after 2006 comes from microbial sources (more from the agriculture and waste sectors than from natural wetlands), with an uncertain but smaller contribution from fossil CH4 emissions. In addition, a decrease in biomass burning emissions (in agreement with the biomass burning emission databases) makes the balance of sources consistent with atmospheric 13CH4 observations. The methane loss (in particular through OH oxidation) has not been investigated in detail in this study, although it may play a significant role in the recent atmospheric methane changes.
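For readers who want to see how the quoted period comparison is formed, the sketch below shows, with purely illustrative numbers, how an ensemble-mean emission shift between 2002–2006 and 2008–2012 can be computed from annual top-down estimates. The real GCP ensemble and the 22 [16–32] Tg CH4 yr−1 figure come from the studies synthesised in the paper, not from these placeholder values.

```python
# Illustrative computation of the 2008-2012 minus 2002-2006 emission shift
# for a small, made-up ensemble of annual top-down emission estimates.
import numpy as np

years = np.arange(2000, 2013)
# Hypothetical annual global emissions (Tg CH4 / yr) for three ensemble members.
ensemble = np.array([
    [548, 550, 549, 551, 550, 552, 551, 560, 570, 572, 573, 574, 575],
    [552, 553, 551, 552, 554, 553, 555, 562, 575, 577, 578, 579, 580],
    [545, 546, 547, 546, 548, 547, 549, 555, 566, 568, 569, 570, 571],
], dtype=float)

early = ensemble[:, (years >= 2002) & (years <= 2006)].mean(axis=1)
late = ensemble[:, (years >= 2008) & (years <= 2012)].mean(axis=1)
shift = late - early  # per-member shift between the two periods

print(f"ensemble mean shift: {shift.mean():.1f} Tg CH4/yr "
      f"[{shift.min():.1f}-{shift.max():.1f}]")
```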


Atmosphere ◽  
2019 ◽  
Vol 10 (7) ◽  
pp. 421 ◽  
Author(s):  
Barouch Giechaskiel ◽  
Alessandro A. Zardini ◽  
Tero Lähde ◽  
Adolfo Perujo ◽  
Anastasios Kontses ◽  
...  

The scientific literature indicates that solid particle number (SPN) emissions of motorcycles are usually higher than those of passenger cars. The L-category (e.g., mopeds, motorcycles) Euro 4 and 5 environmental steps were designed to reduce the emissions of particulate matter and ozone precursors such as nitrogen oxides and hydrocarbons. In this study the SPN emissions of one moped and eight motorcycles, all fulfilling the Euro 4 standards, were measured with an SPN measurement system employing a catalytic stripper to minimize volatile artefacts. Although the particulate matter mass emissions were <1.5 mg/km for all vehicles tested, two motorcycles and the moped were close to the SPN limit for passenger cars (6 × 10^11 particles/km with sizes larger than 23 nm) and four motorcycles exceeded the limit by a factor of up to four. The measurement repeatability was satisfactory (10% deviation from the mean) and concentration differences between the tailpipe and the dilution tunnel were small, indicating that performing robust SPN measurements for regulatory control purposes is feasible. However, steady-state tests with the moped showed major differences between the tailpipe and dilution tunnel sampling points for sub-23 nm particles. Thus, the measurement procedures for particles from small-displacement-engine mopeds and motorcycles need to be better defined for a possible future introduction into regulations.
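As a simple illustration of the two quantitative checks mentioned above (repeatability as deviation from the mean, and exceedance relative to the passenger-car SPN limit), the sketch below uses hypothetical repeat measurements; it is not the study's data or processing chain.

```python
# Illustrative repeatability and limit-exceedance check for SPN emissions,
# using made-up repeat measurements for one motorcycle.
import numpy as np

spn_limit = 6e11  # particles/km (>23 nm), passenger-car limit used as benchmark

repeats = np.array([2.1e12, 2.4e12, 2.2e12])  # hypothetical repeats, particles/km

mean_spn = repeats.mean()
repeatability = np.max(np.abs(repeats - mean_spn)) / mean_spn  # deviation from mean
exceedance_factor = mean_spn / spn_limit

print(f"mean SPN: {mean_spn:.2e} #/km")
print(f"max deviation from mean: {repeatability:.0%}")
print(f"factor above passenger-car limit: {exceedance_factor:.1f}x")
```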


2020 ◽  
Vol 50 (3) ◽  
pp. 190-196
Author(s):  
Sheldon H. Jacobson

The Transportation Security Administration (TSA) is responsible for protecting the nation's air transportation system. Risk-based security is a paradigm for aligning security resources (i.e., personnel, technology, and time) with security risks. PreCheck is one approach that the TSA uses to implement this strategy. Given that passengers enrolled in PreCheck undergo background checks and fingerprinting, they experience expedited screening at airport security checkpoints, with standard screening lanes dedicated to passengers not enrolled in PreCheck. This difference can favorably impact the TSA's ability to detect threat items like firearms. This paper uses publicly available data on firearm detection, the number of passengers screened, and the fraction of passenger screenings in PreCheck lanes to estimate the number of firearms missed at airport security checkpoints in the United States. To achieve this, it defines risky firearms as firearms carried by passengers not enrolled in PreCheck and assumes that such firearms are brought to checkpoints only through standard screening lanes. Under this assumption, the number of risky firearms missed in the recent past is estimated, given more current risky firearm detection rates. This analysis suggests that increasing the number of PreCheck passenger screenings may reduce the number of undetected risky firearms passing through security checkpoints.
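The following sketch is a minimal, hypothetical illustration of the kind of estimate described, not Jacobson's actual model: assuming a detection probability for standard lanes and that all risky firearms arrive through standard (non-PreCheck) lanes, it infers the number of risky firearms brought to and missed at checkpoints. All numerical inputs are placeholders.

```python
# Illustrative estimate of undetected "risky" firearms from public-style figures.
# Every value below is a hypothetical placeholder, not data from the paper.
detected_firearms = 4_000        # firearms detected at checkpoints in a year
passengers_screened = 800e6      # total passenger screenings
precheck_fraction = 0.35         # fraction of screenings in PreCheck lanes
detection_rate = 0.80            # assumed probability a standard lane detects a firearm

# Risky firearms brought to checkpoints, inferred from those actually detected.
risky_brought = detected_firearms / detection_rate
risky_missed = risky_brought - detected_firearms

# Carriage rate among passengers not enrolled in PreCheck (standard lanes only).
standard_screenings = passengers_screened * (1 - precheck_fraction)
carriage_rate = risky_brought / standard_screenings

print(f"estimated risky firearms missed: {risky_missed:.0f}")
print(f"carriage rate per standard screening: {carriage_rate:.2e}")
```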


2012 ◽  
Vol 27 (6) ◽  
pp. 1228-1233 ◽  
Author(s):  
Qais Naziri ◽  
Aaron J. Johnson ◽  
Hasan A. Hooper ◽  
Said H. Sana ◽  
Michael A. Mont

2006 ◽  
Vol 53 (12) ◽  
pp. 121-128 ◽  
Author(s):  
B. Wett

So far, extremely efficient metabolic pathways for nitrogen removal exclusively by autotrophic organisms are well established in the scientific literature but not in practice. This paper presents results from the successful implementation of rejection water deammonification in a full-scale single-sludge system at the WWTP Strass, Austria. Anaerobic ammonia-oxidising biomass was accumulated during a 2.5-year start-up period in which the reactor size was gradually scaled up in steps. The pH-controlled deammonification system (DEMON) has reached a design capacity of eliminating approximately 300 kg of nitrogen per day. Energy savings exceed expectations, with the mean specific demand for compressed air decreasing from 109 m3 (kg N)−1 to 29 m3 (kg N)−1. The dominance of autotrophic metabolism is confirmed by organic effluent loads exceeding influent loads.

