Prediction of Bridge Component Ratings Using Ordinal Logistic Regression Model

2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Pan Lu ◽  
Hao Wang ◽  
Denver Tolliver

Prediction of bridge component condition is fundamental for well-informed decisions regarding the maintenance, repair, and rehabilitation (MRR) of highway bridges. The National Bridge Inventory (NBI) condition rating is a major source of bridge condition data in the United States. In this study, a type of generalized linear model (GLM), the ordinal logistic statistical model, is presented and compared with the traditional regression model. The proposed model is evaluated in terms of reliability (the ability of a model to accurately predict bridge component ratings, or the agreement between predictions and actual observations) and model fitness. Six criteria were used for evaluation and comparison: prediction error, bias, accuracy, out-of-range forecasts, Akaike's Information Criterion (AIC), and log likelihood (LL). In this study, an external validation procedure was developed to quantitatively compare the forecasting power of the models for highway bridge component deterioration. The GLM method described in this study allows modeling of ordinal and categorical dependent variables and shows slightly but significantly better model fitness and prediction performance than the traditional regression model.
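The ordinal logistic (proportional-odds) model the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's fitted model: the single predictor (bridge age), the coefficient, and the cutpoints are all hypothetical values chosen for demonstration.

```python
import math

def ordinal_probs(x, beta, cutpoints):
    """Proportional-odds model: P(Y <= k | x) = logistic(c_k - beta * x).
    Returns the probability of each ordinal category (e.g., NBI-style ratings)."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - beta * x) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical example: bridge age as the lone predictor, four rating levels
probs = ordinal_probs(x=40, beta=0.05, cutpoints=[0.5, 1.5, 3.0])
predicted = probs.index(max(probs))  # most likely rating category
```

Unlike a traditional linear regression, the output is a valid probability distribution over the discrete rating categories, so out-of-range forecasts cannot occur by construction.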

2015 ◽  
Vol 31 (4) ◽  
pp. 2235-2254 ◽  
Author(s):  
Ebrahim AmiriHormozaki ◽  
Gokhan Pekcan ◽  
Ahmad Itani

Horizontally curved bridges were investigated following a statistical evaluation of typical details commonly used in the United States. Both seismically and non-seismically designed bridges were considered, where the primary differences are in column confinement, type of bearings, and abutment support length. Columns and bearings were found to be the most seismically vulnerable components for both categories. Central angle was identified as an important factor that increases the demand on various components, particularly columns. Furthermore, larger angles lead to increased deformations at the supports, which adversely affect the seismic vulnerability. Consistent with the fragility curves that account for the central angle explicitly, a second set of system fragility curves was introduced for cases when the central angle is not specified, as is the case in the National Bridge Inventory. Comparison of fragility parameters to those suggested by HAZUS-MH highlighted the need for revisions to account for current design practices and central angle.
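A fragility curve of the kind compared here is conventionally a lognormal cumulative distribution in the ground-motion intensity measure. The sketch below assumes that standard form; the median capacities and dispersion are invented for illustration and are not the study's fitted parameters.

```python
import math

def fragility(im, median, beta):
    """Lognormal fragility: P(reaching a damage state | intensity measure im),
    with median capacity `median` and log-standard deviation `beta`."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

# Illustrative effect of central angle: a larger angle is modeled as a lower
# median capacity, which raises the damage probability at the same intensity.
p_small_angle = fragility(im=0.4, median=0.60, beta=0.5)
p_large_angle = fragility(im=0.4, median=0.45, beta=0.5)
```

At `im` equal to the median capacity the curve passes through 0.5, which is why the median and dispersion pair is the usual way fragility parameters (e.g., in HAZUS-MH) are tabulated.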


2021 ◽  
Author(s):  
Gaowei Xu ◽  
Fae Azhari

The United States National Bridge Inventory (NBI) records element-level condition ratings on a scale of 0 to 9, representing failed to excellent conditions. Current bridge management systems apply Markov decision processes to find optimal repair schemes given the condition ratings. The deterioration models used in these approaches fail to consider the effect of structural age. In this study, a condition-based bridge maintenance framework is proposed where the state of a bridge component is defined using a three-dimensional random variable that depicts the working age, condition rating, and initial age. The proportional hazard model with a Weibull baseline hazard function translates the three-dimensional random variable into a single hazard indicator for decision-making. To demonstrate the proposed method, concrete bridge decks were taken as the element of interest. Two optimal hazard criteria help select the repair scheme (essential repair, general repair, or no action) that leads to minimum annual expected life-cycle costs.
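The proportional hazard model with a Weibull baseline can be written down directly. The sketch below is a minimal illustration of that functional form; the shape, scale, covariate coding (condition rating and initial age), coefficients, and decision thresholds are all assumed values, not those fitted for concrete decks in the study.

```python
import math

def weibull_ph_hazard(t, shape, scale, covariates, gammas):
    """Proportional hazard with Weibull baseline:
    h(t | z) = (shape/scale) * (t/scale)**(shape-1) * exp(sum(g*z)),
    collapsing (working age t, covariates z) into one hazard indicator."""
    baseline = (shape / scale) * (t / scale) ** (shape - 1)
    return baseline * math.exp(sum(g * z for g, z in zip(gammas, covariates)))

# Hypothetical deck: working age 20 yr, condition rating 5, initial age 10 yr
h = weibull_ph_hazard(t=20, shape=1.8, scale=50.0,
                      covariates=[5, 10], gammas=[-0.3, 0.02])

# Two illustrative hazard criteria select among the three repair schemes
repair = "essential" if h > 0.05 else "general" if h > 0.02 else "none"
```

With `shape > 1` the baseline hazard grows with working age, which is what lets a single scalar threshold on `h` stand in for the three-dimensional state when choosing a repair scheme.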


Author(s):  
Pan Lu ◽  
Shiling Pei ◽  
Denver Tolliver

Accurate prediction of bridge component condition over time is critical for determining a reliable maintenance, repair, and rehabilitation (MRR) strategy for highway bridges. Based on bridge inspection data, regression models are the most widely adopted tools used by researchers and state agencies to predict future bridge condition (FHWA 2007). Various regression models can produce quite different results because of the differences in modeling assumptions. The evaluation of model quality can be challenging and sometimes subjective. In this study, an external validation procedure was developed to quantitatively compare the forecasting power of different regression models for highway bridge component deterioration. Several regression models for highway bridge component rating over time were compared using the proposed procedure and a traditional apparent model evaluation method based on the goodness-of-fit to data. The results obtained by applying the two methods are compared and discussed in this paper.


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4336
Author(s):  
Piervincenzo Rizzo ◽  
Alireza Enshaeian

Bridge health monitoring is increasingly relevant for the maintenance of existing structures or new structures with innovative concepts that require validation of design predictions. In the United States there are more than 600,000 highway bridges. Nearly half of them (46.4%) are rated as fair, while about 1 out of 13 (7.6%) is rated in poor condition. As such, the United States is one of those countries in which bridge health monitoring systems are installed to complement conventional periodic nondestructive inspections. This paper reviews the challenges associated with bridge health monitoring related to the detection of specific bridge characteristics that may be indicators of anomalous behavior. The methods used to detect loss of stiffness, time-dependent and temperature-dependent deformations, fatigue, corrosion, and scour are discussed; these are all major factors that contribute to the long-term degradation of bridges. Owing to the extent of the existing scientific literature, this review focuses on systems installed on U.S. bridges over the last 20 years. Issues related to wireless sensor drift are discussed as well. The scope of the paper is to help newcomers, practitioners, and researchers navigate the many methodologies that have been proposed and developed to identify damage using data collected from sensors installed on real structures.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Huihui Zhang ◽  
Yini Liu ◽  
Fangyao Chen ◽  
Baibing Mi ◽  
Lingxia Zeng ◽  
...  

Abstract Background Since December 2019, the coronavirus disease 2019 (COVID-19) has spread quickly among the population and brought a severe global impact. However, considerable geographical disparities in the distribution of COVID-19 incidence exist among different cities. In this study, we aimed to explore the effect of sociodemographic factors on COVID-19 incidence across 342 cities in China from a geographic perspective. Methods Official surveillance data on COVID-19 and sociodemographic information for China's 342 cities were collected. A local geographically weighted Poisson regression (GWPR) model and a traditional generalized linear model (GLM) Poisson regression were compared to select the optimal analysis. Results Compared to the GLM Poisson regression model, a significantly lower corrected Akaike Information Criterion (AICc) was reported for the GWPR model (61953.0 in GLM vs. 43218.9 in GWPR). Spatial auto-correlation of residuals was not found in the GWPR model (global Moran's I = −0.005, p = 0.468), indicating that the GWPR model captured the spatial auto-correlation. Cities with a higher gross domestic product (GDP), limited health resources, and a shorter distance to Wuhan were at higher risk for COVID-19. Furthermore, with the exception of some southeastern cities, as population density increased, the incidence of COVID-19 decreased. Conclusions There are potential effects of sociodemographic factors on COVID-19 incidence. Moreover, our findings and methodology could guide other countries by helping them understand the local transmission of COVID-19 and develop tailored country-specific intervention strategies.
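The AICc comparison used to select between the two models is a standard formula: AIC plus a small-sample penalty that grows with the number of parameters. The sketch below shows the computation; the log-likelihoods and effective parameter counts are illustrative stand-ins chosen only to echo the reported ordering, not the study's fitted values.

```python
def aicc(log_lik, k, n):
    """Corrected Akaike Information Criterion:
    AICc = (2k - 2*logL) + 2k(k+1)/(n - k - 1)."""
    aic = 2 * k - 2 * log_lik
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Illustrative numbers for n = 342 cities (not the paper's fitted likelihoods):
glm = aicc(log_lik=-30950.0, k=6, n=342)      # one global coefficient set
gwpr = aicc(log_lik=-21500.0, k=85.0, n=342)  # coefficients vary locally, so
                                              # the effective k is much larger
better = "GWPR" if gwpr < glm else "GLM"
```

The point the comparison captures: GWPR pays a far larger parameter penalty than the global GLM, so its lower AICc can only come from a substantially better fit to the spatially varying incidence.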


2021 ◽  
pp. 088506662110668
Author(s):  
Asha Singh ◽  
Chen Liang ◽  
Stephanie L. Mick ◽  
Chiedozie Udeh

Background The Cardiac Surgery Score (CASUS) was developed to assist in predicting post-cardiac surgery mortality using parameters measured in the intensive care unit. It is calculated by assigning points to ten physiologic variables and adding them to obtain a score (additive CASUS), or by logistic regression to weight the variables and estimate the probability of mortality (logistic CASUS). Both additive and logistic CASUS have been externally validated elsewhere, but not yet in the United States of America (USA). This study aims to validate CASUS in a quaternary hospital in the USA and compare the predictive performance of additive to logistic CASUS in this setting. Methods Additive and logistic CASUS (postoperative days 1-5) were calculated for 7098 patients at Cleveland Clinic from January 2015 to February 2017. 30-day mortality data were abstracted from institutional records and the Death Registries for Ohio State and the Centers for Disease Control. Given the low event rate, model discrimination was assessed by area under the curve (AUROC), partial AUROC (pAUC), and average precision (AP). Calibration was assessed by curves and quantified using Harrell's Emax and the Integrated Calibration Index (ICI). Results The 30-day mortality rate was 1.37%. For additive CASUS, the odds ratio for mortality was 1.41 (1.35-1.46, P < 0.001). Additive and logistic CASUS had comparable pAUC and AUROC (all > 0.83). However, additive CASUS had greater AP, especially on postoperative day 1 (0.22 vs. 0.11). Additive CASUS also had better calibration curves, and lower Emax and ICI, on all days. Conclusions Additive and logistic CASUS discriminated well for postoperative 30-day mortality in our quaternary center in the USA; however, logistic CASUS under-predicted mortality in our cohort. Given its ease of calculation and better predictive accuracy, additive CASUS may be the preferred model for postoperative use. Validation in more typical cardiac surgery centers in the USA is recommended.
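The two forms of the score described in the abstract differ only in how the ten variables are combined, which a short sketch makes explicit. The point assignments and the logistic intercept/slope below are hypothetical placeholders, not the published CASUS coefficients.

```python
import math

def additive_casus(points):
    """Additive CASUS: simple sum of the points assigned to the
    ten physiologic variables measured in the ICU."""
    return sum(points)

def logistic_mortality(score, intercept, slope):
    """Logistic form: weights the score to estimate P(30-day mortality).
    Intercept and slope here are illustrative, not published values."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * score)))

points = [0, 1, 0, 2, 1, 0, 0, 3, 1, 0]  # hypothetical day-1 point assignments
score = additive_casus(points)
p_death = logistic_mortality(score, intercept=-6.0, slope=0.35)
```

The additive form needs no coefficients beyond the point table, which is the "ease of calculation" the conclusion cites; the logistic form trades that simplicity for a direct probability estimate, and with a 1.37% event rate a miscalibrated intercept readily under-predicts mortality.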


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Tim Hua ◽  
Chris Chankyo Kim ◽  
Zihan Zhang ◽  
Alex Lyford

As COVID-19 spread throughout the United States, governors and health experts (HEs) received a surge in followers on Twitter. This paper investigates how HEs, Democratic governors, and Republican governors discuss COVID-19 on Twitter. Tweets dating from January 1st, 2020 to October 18th, 2020 from the official accounts of all fifty governors and 46 prominent U.S.-based HEs were scraped using the Python package Twint (N = 192,403) and analyzed using a custom-built wordcount program (Twintproject, 2020). The most significant finding is that in 2020, Democratic governors mentioned death at 4.03 times the rate of Republican governors in their COVID-19 tweets. In 2019, Democratic governors already mentioned death at twice the rate of Republicans. We believe this is substantial evidence that Republican governors are less comfortable talking about death than their Democratic counterparts. We also found that Democratic governors tweet about masks, stay-at-home measures, and solutions more often than Republicans. After controlling for state-level variations in COVID-19 data, our regression model confirms that party affiliation is still correlated with the prevalence of tweets in these three categories. However, there is not a large difference between governors of the two parties in the proportions of COVID-19 tweets, tweets about the economy, tweets about vaccines, and tweets containing "science-like" words. HEs tweeted about death and vaccines more than the governors. They also tweeted about solutions and testing at rates similar to the governors' and mentioned lockdowns, the economy, and masks less frequently.
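The core measurement behind findings like the "4.03 times the rate" figure is a keyword mention rate per group of tweets. The sketch below assumes a simple case-insensitive substring match; the toy tweets and the single-keyword list stand in for the scraped corpus and the paper's actual word categories.

```python
def mention_rate(tweets, keywords):
    """Share of tweets mentioning any keyword (case-insensitive substring)."""
    hits = sum(any(k in t.lower() for k in keywords) for t in tweets)
    return hits / len(tweets)

# Toy tweets standing in for the two groups' corpora (illustrative only)
dem = ["Wear a mask.", "We mourn every death.", "Deaths are rising."]
rep = ["Reopening our economy.", "Testing sites open today.", "Great jobs report."]

rate_dem = mention_rate(dem, ["death"])
rate_rep = mention_rate(rep, ["death"])
ratio = rate_dem / max(rate_rep, 1e-9)  # guard against a zero denominator
```

Comparing such per-group rates, rather than raw counts, is what makes the measure robust to one group simply tweeting more overall.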


2021 ◽  
Vol 9 ◽  
Author(s):  
Fu-Sheng Chou ◽  
Laxmi V. Ghimire

Background: Pediatric myocarditis is a rare disease with multiple etiologies. Mortality associated with the disease is 5–8%. Prognostic factors have been identified using national hospitalization databases, but applying these identified risk factors to mortality prediction has not been reported. Methods: We used the Kids' Inpatient Database for this project. We manually curated fourteen variables as predictors of mortality based on current knowledge of the disease, and compared the performance of mortality prediction between linear regression models and a machine learning (ML) model. For ML, the random forest algorithm was chosen because of the categorical nature of the variables. Based on variable importance scores, a reduced model was also developed for comparison. Results: We identified 4,144 patients from the database for randomization into the primary (for model development) and testing (for external validation) datasets. We found that the conventional logistic regression model had low sensitivity (~50%) despite high specificity (>95%) and overall accuracy. On the other hand, the ML model struck a good balance between sensitivity (89.9%) and specificity (85.8%). The reduced ML model with the top five variables (mechanical ventilation, cardiac arrest, ECMO, acute kidney injury, ventricular fibrillation) was sufficient to approximate the prediction performance of the full model. Conclusions: The ML algorithm outperforms the linear regression model for mortality prediction in pediatric myocarditis in this retrospective dataset. Prospective studies are warranted to further validate the applicability of our model in clinical settings.
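The sensitivity/specificity trade-off the results describe comes straight from the confusion matrix, which a short sketch makes concrete. The counts below are illustrative numbers chosen only to echo the reported pattern (roughly 50%/>95% for logistic regression vs. 89.9%/85.8% for the random forest), not the study's actual data.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity (recall of deaths) and specificity (recall of survivors)
    from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative confusion counts on a hypothetical test split:
lr_sens, lr_spec = sens_spec(tp=25, fn=25, tn=970, fp=30)   # logistic regression
ml_sens, ml_spec = sens_spec(tp=45, fn=5, tn=860, fp=140)   # random forest
```

With a rare outcome, overall accuracy is dominated by the abundant survivors, which is why a model can look accurate while missing half the deaths; reporting the two recalls separately, as the abstract does, avoids that trap.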

