Computing Performance Measures with National Performance Management Research Data Set

Author(s):  
Kartik Kaushik ◽  
Elham Sharifi ◽  
Stanley Ernest Young
Author(s):  
Venktesh Pandey ◽  
Natalia Ruiz Juri

The National Performance Management Research Data Set (NPMRDS), made available by the Federal Highway Administration in 2013, provides fine-resolution travel-time data, which have been used in numerous network performance management and operations applications. This article discusses corridor-level performance measures computed using the NPMRDS. Three measures are analyzed on a 20.2-mile-long corridor in San Antonio, Texas: corridor travel time, corridor travel-time reliability, and day-to-day variation in travel time. The primary contributions of this article are the analysis of the impact of using two different approaches for travel-time aggregation across segments (instantaneous and time-dependent) and the definition of a mean absolute error-based method to identify days when travel times deviate significantly from typical traffic conditions. The findings suggest that the temporal patterns of corridor travel times obtained using instantaneous and time-dependent aggregation approaches are similar; however, instantaneous travel-time estimates lead to inaccuracies that become more apparent during peak hours and for longer segments. In addition, it is found that a k-means clustering analysis performed on daily travel-time profiles provides a useful statistic for corridor performance analysis. Using this methodology, 9.23% of weekdays in the 20-month study period are classified as atypical for the corridor. The numerical results reinforce the value of the NPMRDS in estimating corridor performance measures and highlight potential limitations of traditional techniques for evaluating corridor performance measures when applied in practice to support enhanced traffic planning and operations.
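The difference between the two aggregation approaches can be sketched as follows. The per-segment travel times and the 5-minute epoch resolution below are synthetic illustrations, not data from the San Antonio corridor.

```python
# tt[s][e]: travel time (minutes) on segment s during 5-minute epoch e.
# Values rise toward epoch 3 to mimic a building peak.
EPOCH_MIN = 5
tt = [
    [2.0, 2.0, 3.5, 5.0, 4.0],  # segment 0
    [1.5, 1.5, 2.5, 4.5, 3.0],  # segment 1
    [3.0, 3.0, 3.0, 6.0, 4.5],  # segment 2
]

def instantaneous(tt, depart_epoch):
    """Instantaneous aggregation: sum every segment's travel time
    observed at the departure epoch, ignoring when the trip actually
    reaches each segment."""
    return sum(seg[depart_epoch] for seg in tt)

def time_dependent(tt, depart_epoch):
    """Time-dependent aggregation: trace the trajectory, reading each
    segment's travel time from the epoch active when it is entered."""
    clock = depart_epoch * EPOCH_MIN
    for seg in tt:
        epoch = min(int(clock // EPOCH_MIN), len(seg) - 1)
        clock += seg[epoch]
    return clock - depart_epoch * EPOCH_MIN

# Departing at epoch 2, the instantaneous estimate misses the congestion
# the trip rides into on downstream segments:
print(instantaneous(tt, 2))   # 9.0
print(time_dependent(tt, 2))  # 12.0
```

As the abstract notes, the gap between the two estimates widens exactly when conditions change while the trip is underway, i.e., during peak periods and on longer corridors.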


Data ◽  
2017 ◽  
Vol 2 (4) ◽  
pp. 39 ◽  
Author(s):  
Virginia Sisiopiku ◽  
Shaghayegh Rostami-Hosuri

Author(s):  
Darshan Mukund Pandit ◽  
Kartik Kaushik ◽  
Cinzia Cirillo

Integration of various datasets is crucial given the emphasis placed by the Moving Ahead for Progress in the 21st Century (MAP-21) Act on holistic reporting of performance measures across variables related to road transportation. None is more confounding than the merger of geospatial datasets, which is necessary, for example, to combine vehicle travel-time and volume information for road segments. One such merged dataset is released as the National Performance Management Research Dataset (NPMRDS). The NPMRDS is supposed to exclusively cover the National Highway System (NHS) and the Strategic Highway Network (STRAHNET), sub-selected from the Highway Performance Monitoring System (HPMS). However, the coverage is not perfect: not only are many extra road segments included in the NPMRDS, but some NHS/STRAHNET road segments are not fully covered by corresponding NPMRDS segments. Further, very little literature exists on the method the Texas Transportation Institute uses to orchestrate the conflation. Therefore, an attempt was made to create a conflation algorithm that might perform better. The benchmark for the proposed algorithm is the identification of segments wrongly conflated during the creation of the NPMRDS geospatial dataset. The proposed methodology uses a combination of five measures of similarity between the HPMS and NPMRDS segments. The proposed method successfully identifies significant numbers of mismatched segments: about 5% excess NPMRDS segments, and about 3% HPMS segments without an NPMRDS counterpart.
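A minimal sketch of flagging mismatched segment pairs with a combined similarity score. The abstract does not name the five measures, so the three used here (length ratio, endpoint proximity, bearing agreement), their equal weighting, and the straight-line segment geometry are assumptions for illustration only.

```python
import math

def length(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def bearing(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def similarity(a, b):
    """Combine three similarity measures, each scaled to (0, 1]."""
    len_ratio = min(length(a), length(b)) / max(length(a), length(b))
    end_dist = (math.dist(a[0], b[0]) + math.dist(a[1], b[1])) / 2
    proximity = 1.0 / (1.0 + end_dist)        # 1 when endpoints coincide
    ang = abs(bearing(a) - bearing(b)) % (2 * math.pi)
    ang = min(ang, 2 * math.pi - ang)         # smallest angle between headings
    heading = 1.0 - ang / math.pi             # 1 when parallel, 0 when opposed
    return (len_ratio + proximity + heading) / 3

# A well-matched pair scores near 1; a mismatch scores noticeably lower
# and would be flagged for review against a chosen cutoff.
good = similarity(((0, 0), (1, 0)), ((0.05, 0.02), (1.0, 0.03)))
bad = similarity(((0, 0), (1, 0)), ((3, 3), (3, 5)))
```

Real HPMS/NPMRDS geometries are polylines rather than straight segments, so a production version would evaluate these measures over vertex chains (e.g., with a Hausdorff-style distance) rather than endpoints alone.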


Author(s):  
Chowdhury Siddiqui ◽  
Michael Dennis

This paper presents a framework for establishing targets for the national system performance measures for reliability. The paper outlines step-by-step procedures followed using the National Performance Management Research Data Set and provides a possible range of estimates for future years' targets for South Carolina highways. The paper focuses only on the percentage of person-miles traveled that are reliable on both the Interstate and non-Interstate National Highway System. The framework presented is reproducible for other state Departments of Transportation and accounts for construction projects that might influence the future predicted target number(s).
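A hedged sketch of the reliability measure such targets are set against, assuming the standard federal definition (a segment counts as reliable when its level of travel time reliability, LOTTR = 80th/50th percentile travel time, stays below 1.5 in every assessment period, and the measure is the person-miles share on reliable segments). The segment data below are invented, not South Carolina values.

```python
from statistics import quantiles

def lottr(times):
    """LOTTR = 80th percentile / 50th percentile travel time."""
    q = quantiles(times, n=20, method='inclusive')
    return q[15] / q[9]   # q[15] = 80th pct, q[9] = 50th pct

# (length_mi, person_volume, {assessment_period: travel time samples})
segments = [
    (2.0, 10000, {"AM": [4, 4, 5, 5, 6], "PM": [4, 5, 5, 6, 6]}),
    (3.0,  8000, {"AM": [6, 6, 7, 12, 14], "PM": [6, 7, 8, 8, 9]}),
]

def pct_person_miles_reliable(segments, threshold=1.5):
    total = sum(L * v for L, v, _ in segments)
    reliable = sum(L * v for L, v, periods in segments
                   if all(lottr(t) < threshold for t in periods.values()))
    return 100 * reliable / total
```

Here the second segment fails the threshold in its AM period, so only the first segment's person-miles count as reliable (about 45% of the total).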


Author(s):  
Ernest Tufuor ◽  
Laurence Rilett ◽  
Sean Murphy

The 6th edition of the Highway Capacity Manual (HCM6) introduced a methodology for estimating and forecasting arterial travel time (TT) distributions (TTD) and their associated travel time reliability (TTR) metrics. Recently, it was shown that the HCM6 severely underestimated both the TTD and the TTR metrics for a test network in Lincoln, NE, U.S. Subsequently, it was shown that the underestimation issue could be eliminated through a proposed calibration methodology. Because this validation and calibration work was done on a single, relatively short section of arterial roadway, there is an open research question as to whether this finding applies to longer and more congested arterial roadways. The goal of this paper is to validate and calibrate the HCM6 TTR methodology on five arterial roadway testbeds that are longer and more congested than the original testbed. Empirical data from the National Performance Management Research Data Set (NPMRDS), which is managed by INRIX, were used to represent the ground truth. Similar to the original study, it was found that the HCM6 TTR methodology severely underestimated the TTDs, and their respective TTR metrics, on all five testbeds. This is problematic, because the HCM6 methodology indicates that the corridors had more reliable TT than the empirical data would suggest. It was also shown that the calibration methodology eliminated this underestimation. It is recommended that users of the HCM6 TTR methodology validate and, if necessary, calibrate the model using local empirical travel data.
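The kind of TTR comparison described above can be sketched as follows, assuming two common indices (the travel time index, TTI, and the planning time index, PTI, the 95th-percentile travel time over free-flow) rather than the paper's exact metric set; all values are synthetic.

```python
from statistics import mean, quantiles

def ttr_metrics(travel_times, free_flow):
    """TTI = mean TT / free-flow TT; PTI = 95th-pct TT / free-flow TT."""
    p95 = quantiles(travel_times, n=20, method='inclusive')[18]
    return {"TTI": mean(travel_times) / free_flow, "PTI": p95 / free_flow}

empirical = [10, 10, 11, 12, 12, 13, 15, 18, 22, 30]  # minutes, heavy tail
modeled   = [10, 10, 10, 11, 11, 11, 12, 12, 13, 14]  # underestimating model
free_flow = 10.0

emp = ttr_metrics(empirical, free_flow)
mod = ttr_metrics(modeled, free_flow)
```

The underestimated distribution loses the heavy right tail, so its PTI comes out far lower than the empirical one: the corridor looks more reliable than the data suggest, which is exactly the failure mode the paper flags.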


2015 ◽  
Vol 27 (1) ◽  
pp. 25-53 ◽  
Author(s):  
Chong M. Lau ◽  
Glennda Scully

ABSTRACT Organizational politics is ubiquitous in organizations. Yet to date, no prior research has investigated, in a systematic empirical manner, the mediating role of organizational politics in performance measurement systems. The primary purpose of this research is to investigate whether perceptions of organizational politics mediate the relationships between performance measures and employees' trust in their superiors. As organizational politics may also affect employees' perceptions of fairness, a model is used to investigate (1) whether performance measures affect organizational politics; (2) whether organizational politics, in turn, affects procedural and interpersonal fairness; and (3) whether fairness perceptions subsequently affect trust in superiors. Based on a sample of 104 responses, the partial least squares results indicate that organizational politics and fairness perceptions significantly mediate the relationship between nonfinancial performance measures and trust. In contrast, the results indicate that the mediating effects of organizational politics and fairness on the relationship between financial performance measures and trust are generally insignificant.


Author(s):  
Simona Babiceanu ◽  
Sanhita Lahiri ◽  
Mena Lockwood

This study uses a suite of performance measures that was developed by taking into consideration various aspects of congestion and reliability, to assess impacts of safety projects on congestion. Safety projects are necessary to help move Virginia’s roadways toward safer operation, but can contribute to congestion and unreliability during execution, and can affect operations after execution. However, safety projects are assessed primarily for safety improvements, not for congestion. This study identifies an appropriate suite of measures, and quantifies and compares the congestion and reliability impacts of safety projects on roadways for the periods before, during, and after project execution. The paper presents the performance measures, examines their sensitivity based on operating conditions, defines thresholds for congestion and reliability, and demonstrates the measures using a set of Virginia safety projects. The data set consists of 10 projects totalling 92 mi and more than 1M data points. The study found that, overall, safety projects tended to have a positive impact on congestion and reliability after completion, and the congestion variability measures were sensitive to the threshold of reliability. The study concludes with practical recommendations for primary measures that may be used to measure overall impacts of safety projects: percent vehicle miles traveled (VMT) reliable with a customized threshold for Virginia; percent VMT delayed; and time to travel 10 mi. However, caution should be used when applying the results directly to other situations, because of the limited number of projects used in the study.
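The three recommended measures can be sketched on made-up segment records. The study's customized Virginia reliability threshold is represented here by a generic LOTTR cutoff of 1.5, and the delay speed threshold of 45 mph is likewise an assumption; all numbers are illustrative.

```python
def corridor_measures(segments, speed_threshold=45.0, reliability_cutoff=1.5):
    """segments: list of (length_mi, daily_vmt, avg_speed_mph, lottr)."""
    total_vmt = sum(vmt for _, vmt, _, _ in segments)
    vmt_reliable = sum(vmt for _, vmt, _, lottr in segments
                       if lottr < reliability_cutoff)
    vmt_delayed = sum(vmt for _, vmt, speed, _ in segments
                      if speed < speed_threshold)
    # Corridor traversal time from per-segment speeds, scaled to 10 mi.
    total_len = sum(length for length, _, _, _ in segments)
    corridor_time_hr = sum(length / speed for length, _, speed, _ in segments)
    time_to_10mi_min = 60 * corridor_time_hr * (10 / total_len)
    return (100 * vmt_reliable / total_vmt,   # percent VMT reliable
            100 * vmt_delayed / total_vmt,    # percent VMT delayed
            time_to_10mi_min)                 # time to travel 10 mi

# One free-flowing and one congested segment:
segments = [(5.0, 50000, 60.0, 1.2), (5.0, 30000, 30.0, 1.8)]
pct_rel, pct_del, t10 = corridor_measures(segments)
```

Weighting by VMT rather than by segment count means a short but heavily traveled work zone moves the measures more than a long, lightly used one, which matches the study's intent of capturing traveler-experienced impact.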


Author(s):  
Marcus Pietsch ◽  
Pierre Tulowitzki ◽  
Colin Cramer

Both organizational and management research suggest that schools and their leaders need to be ambidextrous to secure prosperity and long-term survival in dynamic environments characterized by competition and innovation. In this context, ambidexterity refers to the ability to simultaneously pursue exploitation and exploration and thus to deliver efficiency, control and incremental improvements while embracing flexibility, autonomy and discontinuous innovation. Using a unique, randomized and representative data set of N = 405 principals, we present findings on principals’ exploitation and exploration. The results indicate: (a) that principals engage far more often in exploitative than in explorative activities; (b) that exploitative activities in schools are executed at the expense of explorative activities; and (c) that explorative and ambidextrous activities of principals are positively associated with the (perceived) competition between schools. The study brings a novel perspective to educational research and demonstrates that applying the concept of ambidexterity has the potential to further our understanding of effective educational leadership and management.


2020 ◽  
Vol 29 (3) ◽  
pp. 1608-1617
Author(s):  
Catriona M. Steele ◽  
Melanie Peladeau-Pigeon ◽  
Emily Barrett ◽  
Talia S. Wolkin

Purpose Reference data from healthy adults under the age of 60 years suggest that the 75th and 95th percentiles for pharyngeal residue on swallows of thin liquids are 1%(C2-4)² and 3%(C2-4)², respectively. We explored how pharyngeal residue below versus above these values prior to a swallow predicts penetration–aspiration. Method The study involved retrospective analysis of a previous research data set from 305 adults at risk for dysphagia. Participants swallowed six thin boluses and three each of mildly, moderately, and extremely thick barium in videofluoroscopy. Raters measured preswallow residue in %(C2-4)² units and Penetration–Aspiration Scale (PAS) scores for each swallow. Swallows were classified as (a) "clean baseline" (with no preswallow residue), (b) "clearing" swallows of residue with no new material added, or (c) swallows of "additional material" plus preswallow residue. Frequencies of PAS scores of ≥ 3 were compared across swallow type by consistency according to residue severity (i.e., ≤ vs. > 1%(C2-4)² and ≤ vs. > 3%(C2-4)²). Results The data set comprised 2,541 clean baseline, 209 clearing, and 1,722 swallows of additional material. On clean baseline swallows, frequencies of PAS scores of ≥ 3 were 5% for thin and mildly thick liquids and 1% for moderately/extremely thick liquids. Compared to clean baseline swallows, the odds of penetration–aspiration on thin liquids increased 4.60-fold above the 1% threshold and 4.20-fold above the 3% threshold (mildly thick: 2.11-fold > 1%(C2-4)², 2.26-fold > 3%(C2-4)²). PAS scores of ≥ 3 did not occur with clearing swallows of moderately/extremely thick liquids. Lower frequencies of above-threshold preswallow residue were seen for swallows of additional material than for clearing swallows. Compared to clean baseline swallows, the odds of PAS scores of ≥ 3 on swallows of additional material increased ≥ 1.86-fold above the 1% threshold and ≥ 2.15-fold above the 3% threshold, depending on consistency.
Conclusion The data suggest that a pharyngeal residue threshold of 1%(C2-4)² is a meaningful cut-point for delineating increased risk of penetration–aspiration on a subsequent swallow.
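The fold-increases reported above are odds ratios. A minimal sketch of how such a ratio is computed from 2 × 2 counts; the counts below are invented for illustration and are not the study's data.

```python
def odds_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Odds ratio from counts: odds of the event above the threshold
    divided by odds of the event below it."""
    odds_exp = events_exposed / (n_exposed - events_exposed)
    odds_unexp = events_unexposed / (n_unexposed - events_unexposed)
    return odds_exp / odds_unexp

# e.g., PAS >= 3 on 30 of 150 swallows above a residue threshold versus
# 50 of 1,000 swallows below it:
ratio = odds_ratio(30, 150, 50, 1000)
print(round(ratio, 2))  # 4.75
```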

