ESTIMATION OF THE LOGIT MODEL FOR THE ONLINE CONTRAFLOW PROBLEM

Transport ◽  
2010 ◽  
Vol 25 (4) ◽  
pp. 433-441
Author(s):  
Habibollah Nassiri ◽  
Ali Edrissi ◽  
Hamed Alibabai

Contraflow, or lane reversal, is an efficient way to increase the outbound capacity of a network by reversing the direction of inbound roads during evacuations. Hence, it can be considered a potential remedy for congestion problems in the context of homeland security, natural disasters and urban evacuations, especially in response to an expected disaster. Most contraflow studies are performed offline, so strategies are generated beforehand for future implementation. Online contraflow models, however, are often computationally demanding and time-consuming. This study contributes to the state of the art of contraflow modelling in two regards. First, it focuses on the calibration of a Logit choice model that predicts the online contraflow directions of strategic lanes based on the set of directions obtained from offline scenarios. This is the first effort to adjust offline results for application to an online case. The second contribution of this paper is the generation of the calibration data set through a novel simulation-based approach. The calibrated Logit model is then tested on the network of the City of Fort Worth, Texas. The results show the high performance of this approach in generating beneficial strategies, including an increase of up to 16% in throughput compared to the no-contraflow case.
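As an illustration of the kind of choice model involved, the following minimal sketch fits a binary logit model that predicts whether a strategic lane should be reversed. The features (demand ratio, distance to the nearest exit, volume/capacity ratio) and the data are hypothetical placeholders, not the variables or calibration data used in the study.

```python
# Hypothetical sketch of a binary logit choice model for lane reversal,
# fitted with scikit-learn. Features and labels are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [outbound/inbound demand ratio, distance to exit (km), volume/capacity ratio]
# Each label: 1 if the offline scenarios reversed this strategic lane, 0 otherwise.
X = np.array([
    [3.2, 1.5, 0.95],
    [0.8, 4.0, 0.60],
    [2.5, 2.2, 0.88],
    [1.1, 3.5, 0.70],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression()
model.fit(X, y)

# Probability that a new strategic lane should run contraflow in the online case.
p_reverse = model.predict_proba([[2.9, 1.8, 0.92]])[0, 1]
print(f"P(reverse lane) = {p_reverse:.2f}")
```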

2021 ◽  
Vol 5 (2) ◽  
pp. 1-9
Author(s):  
Fattah Alizadeh ◽  
Sazan Luqman

The increasing number of cars inside cities creates problems in traffic control. This issue can be addressed by implementing a computer-based automatic system known as an Automatic Car Plate Recognition System (ACPRS). The main purpose of the current paper is to propose an automatic system to detect, extract, segment, and recognize car plate numbers in the Kurdistan Region of Iraq (KRI). To do so, a frontal image of the car is captured and used as the input to the system. After applying the required pre-processing steps, the SURF descriptor is used to detect and extract the car plate from the whole input image. After segmentation of the extracted plate, an efficient projection-based technique is exploited to describe the digits and the city name on the registered car plate. The system is evaluated on 200 sample images taken under various testing conditions. Under controlled conditions, the proposed system achieves its best accuracy of 94%, demonstrating its high performance.
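As a rough illustration of the projection-based idea, the sketch below segments characters on a binarized plate image by thresholding its vertical projection profile. The threshold fraction is an assumption, and the paper's exact projection descriptor is not reproduced here.

```python
# Illustrative sketch of projection-based character segmentation on a plate image.
import cv2
import numpy as np

def segment_characters(plate_gray, min_col_fraction=0.05):
    # Binarize with Otsu's method so characters become foreground pixels.
    _, binary = cv2.threshold(plate_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Vertical projection: count foreground pixels in each column.
    col_profile = (binary > 0).sum(axis=0)
    threshold = min_col_fraction * binary.shape[0]

    segments, start = [], None
    for x, count in enumerate(col_profile):
        if count > threshold and start is None:
            start = x                      # entering a character region
        elif count <= threshold and start is not None:
            segments.append((start, x))    # leaving a character region
            start = None
    if start is not None:
        segments.append((start, len(col_profile)))
    return [binary[:, a:b] for a, b in segments]
```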


Author(s):  
A. Popov ◽  
O.N. Lopateeva ◽  
A.K. Ovsyankin ◽  
M. M. Satsuk ◽  
A. A. Artyshko ◽  
...  

Among the measures aimed at the effective delivery of public services in a modern urban environment, one of the most important is quality and efficiency control of the work performed. Timely street cleaning is hampered by several groups of problems, including the lack of a single automated information system (AIS) for controlling the work performed. There is therefore a need to improve and automate this area. This approach combines high performance, owing to the speed of the system, with effective quality control of street cleaning. The purpose of this work is the study and analysis of existing information systems (IS) that make it possible to automate quality control and operational performance of the above tasks, and, on the basis of the research conducted, to develop an IS whose requirements and functionality, agreed with the customer (the administration of the Central District of Krasnoyarsk), allow this process to be automated. This article presents the main aspects of the design and the software solutions for implementing the algorithm in the form of an AIS intended to automate the process of monitoring the cleanliness of city streets. The AIS was developed in the PhpStorm integrated development environment in the PHP programming language.


Author(s):  
C. Sauer ◽  
F. Bagusat ◽  
M.-L. Ruiz-Ripoll ◽  
C. Roller ◽  
M. Sauer ◽  
...  

This work aims at the characterization of a modern concrete material. For this purpose, we perform two experimental series of inverse planar plate impact (PPI) tests with the ultra-high performance concrete B4Q, using two different witness plate materials. Hugoniot data in the range of particle velocities from 180 to 840 m/s and stresses from 1.1 to 7.5 GPa are derived from both series. Within the experimental accuracy, they can be seen as one consistent data set. Moreover, we conduct corresponding numerical simulations and find a reasonably good agreement between simulated and experimentally obtained curves. From the simulated curves, we derive numerical Hugoniot results that serve as a homogenized, mean shock response of B4Q and add further consistency to the data set. Additionally, the comparison of simulated and experimentally determined results allows us to identify experimental outliers. Furthermore, we perform a parameter study which shows that a significant influence of the applied pressure-dependent strength model on the derived equation of state (EOS) parameters is unlikely. In order to compare the current results to our own partially reevaluated previous work and selected recent results from the literature, we use simulations to numerically extrapolate the Hugoniot results. Considering their inhomogeneous nature, a consistent picture emerges for the shock response of the discussed concrete and high-strength mortar materials. Hugoniot results from this and earlier work are presented for further comparisons. In addition, a full parameter set for B4Q, including validated EOS parameters, is provided for the application in simulations of impact and blast scenarios.
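The mapping from particle velocity to Hugoniot stress is commonly summarized by a linear shock-velocity/particle-velocity fit. As a rough illustration only, the sketch below evaluates the standard jump relations over the tested particle-velocity range; the density and fit coefficients are placeholder assumptions, not the calibrated B4Q parameters reported in the paper.

```python
# Sketch of the Rankine-Hugoniot jump relation with a linear fit Us = c0 + s * up.
# rho0, c0 and s are placeholders, not the calibrated B4Q parameters.
import numpy as np

rho0 = 2400.0   # initial density [kg/m^3], assumed
c0 = 2900.0     # bulk sound speed [m/s], assumed
s = 1.5         # slope of the Us-up fit, assumed

up = np.linspace(180.0, 840.0, 5)   # particle velocities [m/s], as in the tests
us = c0 + s * up                    # shock velocity [m/s]
sigma = rho0 * us * up              # Hugoniot stress [Pa]

for u, stress in zip(up, sigma):
    print(f"up = {u:6.1f} m/s  ->  stress = {stress / 1e9:.2f} GPa")
```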


2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management as it enables us to discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing is done to remove unnecessary and redundant words from the documents. Then, the indexing of documents is done by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm and inverted indexing. After providing the index to the documents, query matching is performed for the user queries using the Bhattacharyya distance. Finally, query optimisation is done by the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB data set and the Twenty Newsgroups data set. The analysis shows that the proposed algorithm offers high performance with a precision of 1, recall of 0.70 and F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storing and retrieval of information.
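As a rough illustration of two of the building blocks named above, the sketch below constructs a plain inverted index and ranks candidate documents by the Bhattacharyya distance between normalized term-frequency distributions; the piFCM clustering and Pearson-based query optimisation stages of the proposed method are not reproduced.

```python
# Minimal sketch: inverted index plus Bhattacharyya-distance query matching.
import math
from collections import Counter, defaultdict

docs = {
    "d1": "fuzzy clustering groups similar documents together",
    "d2": "inverted indexing speeds up document retrieval",
}

# Inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def bhattacharyya_distance(text_a, text_b):
    # Normalize term frequencies into discrete probability distributions.
    pa, pb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    na, nb = sum(pa.values()), sum(pb.values())
    terms = set(pa) | set(pb)
    coeff = sum(math.sqrt((pa[t] / na) * (pb[t] / nb)) for t in terms)
    return -math.log(coeff) if coeff > 0 else float("inf")

query = "document retrieval"
candidates = set.union(*(index.get(t, set()) for t in query.lower().split()))
ranked = sorted(candidates, key=lambda d: bhattacharyya_distance(query, docs[d]))
print(ranked)
```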


Author(s):  
Mohammed R. Elkobaisi ◽  
Fadi Al Machot

The use of IoT-based Emotion Recognition (ER) systems is in increasing demand in many domains such as active and assisted living (AAL), health care and industry. Combining the emotion and the context in a unified system could enhance the scope of human support, but it is currently a challenging task due to the lack of a common interface capable of providing such a combination. In this sense, we aim at providing a novel approach based on a modeling language that can be used even by caregivers or non-experts to model human emotion w.r.t. context for human support services. The proposed modeling approach is based on a Domain-Specific Modeling Language (DSML), which helps to integrate different IoT data sources in an AAL environment. Consequently, it provides a conceptual support level related to the current emotional states of the observed subject. For the evaluation, we apply the well-validated System Usability Scale (SUS) to show that the proposed modeling language achieves high performance in terms of usability and learnability metrics. Furthermore, we evaluate the runtime performance of the model instantiation by measuring the execution time using well-known IoT services.


Author(s):  
Denys Rozumnyi ◽  
Jan Kotera ◽  
Filip Šroubek ◽  
Jiří Matas

Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects travel a considerable distance during the exposure time of a single frame, and therefore, their position in the frame is not well defined. They appear as semi-transparent streaks due to motion blur and cannot be reliably tracked by general trackers. We propose a novel approach called Tracking by Deblatting, based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. By postprocessing, non-causal Tracking by Deblatting estimates continuous, complete, and accurate object trajectories for the whole sequence. Tracked objects are precisely localized with higher temporal resolution than by conventional trackers. Energy minimization by dynamic programming is used to detect abrupt changes of motion, called bounces. High-order polynomials are then fitted to smooth trajectory segments between bounces. The output is a continuous trajectory function that assigns a location to every real-valued time stamp from zero to the number of frames. The proposed algorithm was evaluated on a newly created dataset of videos from a high-speed camera using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baselines both in recall and trajectory accuracy. Additionally, we show that precise physical quantities can be calculated from the trajectory function, such as radius, gravity, and sub-frame object velocity. Velocity estimation is compared to high-speed camera and radar measurements. Results show high performance of the proposed method in terms of Trajectory-IoU, recall, and velocity estimation.
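The segment-wise polynomial fitting between bounces can be illustrated with a short sketch. The bounce times, polynomial degree, and NumPy-based fitting below are assumptions for illustration; the paper's dynamic-programming bounce detection and deblatting steps are not shown.

```python
# Illustrative sketch: fit one polynomial per smooth segment between bounces,
# yielding a continuous trajectory function of real-valued frame time.
import numpy as np

def fit_trajectory(times, positions, bounce_times, degree=3):
    """Fit one polynomial per segment; positions is an (N, 2) array of x, y."""
    boundaries = [times[0], *bounce_times, times[-1]]
    segments = []
    for t0, t1 in zip(boundaries[:-1], boundaries[1:]):
        mask = (times >= t0) & (times <= t1)
        coeffs_x = np.polyfit(times[mask], positions[mask, 0], degree)
        coeffs_y = np.polyfit(times[mask], positions[mask, 1], degree)
        segments.append((t0, t1, coeffs_x, coeffs_y))
    return segments

def evaluate(segments, t):
    """Return the (x, y) location for any real-valued time stamp t."""
    for t0, t1, cx, cy in segments:
        if t0 <= t <= t1:
            return np.polyval(cx, t), np.polyval(cy, t)
    raise ValueError("time stamp outside the tracked sequence")
```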


Radiocarbon ◽  
2004 ◽  
Vol 46 (3) ◽  
pp. 1161-1187 ◽  
Author(s):  
Konrad A Hughen ◽  
John R Southon ◽  
Chanda J H Bertrand ◽  
Brian Frantz ◽  
Paula Zermeño

This paper describes the methods used to develop the Cariaco Basin PL07-58PC marine radiocarbon calibration data set. Background measurements are provided for the period when Cariaco samples were run, as well as revisions leading to the most recent version of the floating varve chronology. The floating Cariaco chronology has been anchored to an updated and expanded Preboreal pine tree-ring data set, with better estimates of uncertainty in the wiggle-match. Pending any further changes to the dendrochronology, these results represent the final Cariaco 58PC calibration data set.


2018 ◽  
Vol 10 (8) ◽  
pp. 80
Author(s):  
Lei Zhang ◽  
Xiaoli Zhi

Convolutional neural networks (CNNs) have made great progress in face detection. They mostly take computation-intensive networks as the backbone in order to obtain high precision, and they cannot achieve a good detection speed without the support of high-performance GPUs (Graphics Processing Units). This limits CNN-based face detection algorithms in real applications, especially speed-dependent ones. To alleviate this problem, we propose a lightweight face detector in this paper, which takes a fast residual network as its backbone. Our method can run fast even on cheap and ordinary GPUs. To guarantee its detection precision, multi-scale features and multi-context are fully exploited in efficient ways. Specifically, feature fusion is first used to obtain semantically strong multi-scale features. Then multi-context, including both local and global context, is added to these multi-scale features without extra computational burden. The local context is added through a depthwise separable convolution based approach, and the global context through simple global average pooling. Experimental results show that our method can run at about 110 fps on VGA (Video Graphics Array)-resolution images, while still maintaining competitive precision on the WIDER FACE and FDDB (Face Detection Data Set and Benchmark) datasets as compared with its state-of-the-art counterparts.
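As an illustration of the two context mechanisms described above, the following hedged PyTorch sketch adds local context with a depthwise separable convolution and global context with global average pooling; the channel counts and the additive fusion are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a context block: depthwise separable conv (local context)
# plus global average pooling (global context) over a feature map.
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Depthwise separable convolution: per-channel 3x3 conv + 1x1 pointwise conv.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
        )
        # Global context: squeeze spatial dimensions to 1x1, then project.
        self.global_ctx = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
        )

    def forward(self, features):
        # The 1x1 global descriptor broadcasts back over the spatial grid.
        return features + self.local(features) + self.global_ctx(features)

# Example: enrich a multi-scale feature map with little extra computation.
fmap = torch.randn(1, 64, 80, 80)
out = ContextBlock(64)(fmap)
print(out.shape)  # torch.Size([1, 64, 80, 80])
```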


Boreas ◽  
2010 ◽  
Vol 39 (4) ◽  
pp. 674-688 ◽  
Author(s):  
ANNE E. BJUNE ◽  
H. JOHN B. BIRKS ◽  
SYLVIA M. PEGLAR ◽  
ARVID ODLAND

1996 ◽  
Vol 8 (9) ◽  
pp. 1178-1180 ◽  
Author(s):  
F. Dorgeuille ◽  
B. Mersali ◽  
M. Feuillade ◽  
S. Sainson ◽  
S. Slempkes ◽  
...  
