Discovering Crash Severity Factors of Grade Crossing With a Machine Learning Approach

Author(s):  
Dahye Lee ◽  
Jeffery Warner ◽  
Curtis Morgan

According to the Federal Railroad Administration (FRA) Highway-Rail Grade Crossing Accident/Incident database, more than 12,000 accidents occurred in the United States between 2012 and 2017, resulting in approximately 3,900 casualties. Despite repeated efforts to fully understand the risk factors that contribute to highway-rail grade crossing collisions, many uncertainties remain. This paper proposes a machine learning approach to identify significant factors, along with their individual impacts on crash severity at grade crossings. One of the most efficient and accurate machine learning algorithms, extreme gradient boosting (XGB or XGBoost), is applied to analyze 21 accident- and crossing-related characteristics with respect to driver injury severity. Previous studies across many areas of transportation research have shown the XGB model to outperform other machine learning and statistical classification methods, such as the multinomial logit model, multiple additive regression trees, decision trees, and random forests, particularly in prediction accuracy. Applying the algorithm is therefore expected to provide highly reliable results for identifying the important factors that affect injury severity at grade crossings, and in turn to aid the identification of crossings where those factors are present. The FRA Highway-Rail Grade Crossing Accident/Incident database from 2012 to 2017 is fused with the FRA Highway-Rail Crossing Inventory database for the analysis. Observations with missing information were removed from the original database. Crossings positioned under or over the railroad, as well as pedestrians and other types of highway users, were also excluded because they were not of specific interest in this study. After the cleaning process, the dataset condensed to a total of 1,250 accidents out of the 12,630 retrieved from the combined database. The results show that adjacent highway traffic volume and train speed are the most significant factors affecting accident and injury severity, followed by the driver's age and the estimated vehicle speed. The results also indicate that truck-involved accidents, crossings equipped with a combination of gates, flashing lights, and other warning devices, and male highway users are associated with higher injury rates. This study can thus provide guidance to decision-makers in recognizing risks at grade crossings that may lead to driver casualties.
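As a rough illustration of the modeling step described in this abstract, the sketch below fits an XGBoost classifier to a fused accident/inventory table and ranks predictors by importance. The file name, column names, and severity coding are hypothetical placeholders, not the authors' actual variables.

```python
# Hypothetical sketch of an XGBoost severity model; feature names and the
# injury-severity coding are illustrative assumptions, not the authors' data.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assume `crossings` is the fused FRA accident/inventory table after cleaning,
# with accident- and crossing-related predictors and a severity label
# (e.g., 0 = no injury, 1 = injury, 2 = fatality).
crossings = pd.read_csv("fra_crossing_accidents_2012_2017.csv")  # hypothetical file
features = ["highway_aadt", "train_speed", "driver_age", "vehicle_speed",
            "truck_involved", "warning_device_type", "driver_gender"]  # subset shown
X = pd.get_dummies(crossings[features], drop_first=True)
y = crossings["injury_severity"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      objective="multi:softprob", eval_metric="mlogloss")
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# Gain-based importances highlight the most influential predictors.
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```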

PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0252873
Author(s):  
Hawazin W. Elani ◽  
André F. M. Batista ◽  
W. Murray Thomson ◽  
Ichiro Kawachi ◽  
Alexandre D. P. Chiavegatto Filho

Introduction Little is understood about the socioeconomic predictors of tooth loss, a condition that can negatively impact an individual's quality of life. The goal of this study is to develop machine-learning algorithms to predict complete and incremental tooth loss among adults and to compare the predictive performance of these models. Methods We used data from the National Health and Nutrition Examination Survey from 2011 to 2014. We developed multiple machine-learning algorithms and assessed their predictive performance by examining the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and positive and negative predictive values. Results Extreme gradient boosting trees presented the highest performance in the prediction of edentulism (AUC = 88.7%; 95% CI: 87.1, 90.2), the absence of a functional dentition (AUC = 88.3%; 95% CI: 87.3, 89.3), and missing any tooth (AUC = 83.2%; 95% CI: 82.0, 84.4). Although, as expected, age and routine dental care emerged as strong predictors of tooth loss, the machine-learning approach identified additional predictors, including socioeconomic conditions. Indeed, models incorporating socioeconomic characteristics predicted tooth loss better than those relying on clinical dental indicators alone. Conclusions Future application of machine-learning algorithms to longitudinal cohorts to identify individuals at risk of tooth loss could assist clinicians in prioritizing interventions directed toward the prevention of tooth loss.
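The abstract reports AUCs with 95% confidence intervals alongside sensitivity and specificity; the snippet below is a minimal sketch of one common way to obtain such estimates (bootstrap resampling of the test set), assuming a fitted binary classifier with predicted probabilities. It is not the authors' exact evaluation code.

```python
# Sketch of bootstrap AUC confidence intervals plus sensitivity/specificity;
# the data and variable names are placeholders, not the NHANES pipeline.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def bootstrap_auc(y_true, y_prob, n_boot=1000, seed=0):
    """Return a percentile 95% CI for the AUC via bootstrap resampling."""
    rng = np.random.default_rng(seed)
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # skip degenerate resamples
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    return np.percentile(aucs, [2.5, 97.5])

# y_test: observed edentulism (0/1); y_prob: predicted probabilities from a model
# low, high = bootstrap_auc(y_test, y_prob)
# tn, fp, fn, tp = confusion_matrix(y_test, (y_prob >= 0.5).astype(int)).ravel()
# sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
```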


2021 ◽  
Vol 9 (5) ◽  
pp. 1034
Author(s):  
Carlos Sabater ◽  
Lorena Ruiz ◽  
Abelardo Margolles

This study aimed to recover metagenome-assembled genomes (MAGs) from human fecal samples to characterize the glycosidase profiles of Bifidobacterium species exposed to different prebiotic oligosaccharides (galacto-oligosaccharides, fructo-oligosaccharides and human milk oligosaccharides, HMOs) as well as high-fiber diets. A total of 1806 MAGs were recovered from 487 infant and adult metagenomes. Unsupervised and supervised classification of the glycosidases encoded in the MAGs using machine-learning algorithms made it possible to establish characteristic hydrolytic profiles for B. adolescentis, B. bifidum, B. breve, B. longum and B. pseudocatenulatum, yielding classification rates above 90%. Glycosidase families GH5_44, GH32, and GH110 were characteristic of B. bifidum. The presence or absence of GH1, GH2, GH5 and GH20 was characteristic of B. adolescentis, B. breve and B. pseudocatenulatum, while families GH1 and GH30 were relevant in MAGs from B. longum. These characteristic profiles allowed discriminating bifidobacteria regardless of prebiotic exposure. Correlation analysis of glycosidase activities suggests strong associations between glycosidase families comprising HMO-degrading enzymes, which are often found in MAGs from the same species. The mathematical models proposed here may contribute to a better understanding of the carbohydrate metabolism of some common bifidobacteria species and could be extrapolated to other microorganisms of interest in future studies.
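A possible shape of the supervised step, assuming each MAG is represented by counts of glycosidase (GH) families and labeled with its Bifidobacterium species, is sketched below. The file and column names are illustrative assumptions, and a random forest stands in for whichever classifiers the authors used.

```python
# Illustrative sketch of species classification from GH-family profiles;
# the feature matrix construction and column names are assumptions only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assume `gh_counts` holds one row per MAG with counts of each GH family
# (e.g., columns "GH1", "GH2", "GH20", "GH32", ...) and a "species" label.
gh_counts = pd.read_csv("mag_glycosidase_profiles.csv")  # hypothetical file
X = gh_counts.drop(columns=["species"])
y = gh_counts["species"]

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2%}")
```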


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
John Foley ◽  
Naghmeh Moradpoor ◽  
Henry Ochenyi

One of the important features of the Routing Protocol for Low-Power and Lossy Networks (RPL) is the objective function (OF). The OF shapes an IoT network's routing strategies and network topology. Detecting combined attacks against OFs is a cutting-edge capability that will become a necessity as next-generation low-power wireless networks grow rapidly and continue to be exploited. However, the current literature lacks studies on the vulnerability analysis of OFs, particularly with respect to combined attacks. Furthermore, machine learning is a promising solution for the global networks of IoT devices, both for analysing their ever-growing generated data and for predicting cyberattacks against such devices. In this paper, we therefore analyse the vulnerability of two popular RPL OFs and detect combined attacks against them using machine learning algorithms across different simulated scenarios. For this, we created a novel IoT dataset based on power and network metrics, which is deployed as part of an RPL IDS/IPS solution to enhance information security. The captured results show that our machine learning approach successfully detects combined attacks against the two OFs based on the power and network metrics, with the multilayer perceptron (MLP) and random forest (RF) algorithms being the most successful classifiers for the single and ensemble models, respectively.
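A hedged sketch of such a classifier stage is shown below: RF and MLP models trained on power and network metrics to flag combined-attack traffic. The dataset file and column names are placeholders rather than the authors' published dataset.

```python
# Sketch of RF and MLP classifiers on power/network metrics; the file and
# column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

data = pd.read_csv("rpl_of_dataset.csv")  # hypothetical IoT dataset
X = data.drop(columns=["label"])          # power and network metrics
y = data["label"]                         # benign vs. combined-attack traffic

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=1),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```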


Author(s):  
Tashi Ngamdung ◽  
Marco daSilva

The United States Department of Transportation's (US DOT) Research and Innovative Technology Administration's John A. Volpe National Transportation Systems Center (Volpe Center), under the direction of the US DOT Federal Railroad Administration (FRA) Office of Research and Development (R&D), is leveraging the National Highway Traffic Safety Administration (NHTSA)-sponsored Integrated Vehicle Based Safety System (IVBSS) Light Vehicle (LV) Field Operational Test (FOT) to collect and analyze drivers' activities at or on approach to highway-rail grade crossings. Grade crossings in Michigan, Indiana, and Ohio were cross-referenced with IVBSS LV FOT research vehicle locations to identify the times research vehicles were present at a crossing. The IVBSS LV FOT included 108 participants who took a total of 22,656 trips. Of these, 3,137 trips included a total of 4,215 grade crossing events. The analysis was based on drivers' activities during the 4,215 grade crossing events. Neither looking behavior nor distraction differed significantly by gender. However, when analyzed by age group, younger drivers (20 to 30 years old) were significantly more likely to be distracted than middle-aged drivers (40 to 50 years old) or older drivers (60 to 70 years old). For looking behavior, the data revealed that older drivers were more likely to look at least one way at or on approach to a highway-rail crossing (43.8 percent exhibited this behavior) than either middle-aged drivers (35.0 percent) or younger drivers (25.3 percent).


Author(s):  
Elric Zweck ◽  
Katherine L. Thayer ◽  
Ole K. L. Helgestad ◽  
Manreet Kanwar ◽  
Mohyee Ayouty ◽  
...  

Background Cardiogenic shock (CS) is a heterogeneous syndrome with varied presentations and outcomes. We used a machine learning approach to test the hypothesis that patients with CS have distinct phenotypes at presentation, which are associated with unique clinical profiles and in-hospital mortality. Methods and Results We analyzed data from 1959 patients with CS from 2 international cohorts: the CSWG (Cardiogenic Shock Working Group Registry) (myocardial infarction [CSWG-MI; n=410] and acute-on-chronic heart failure [CSWG-HF; n=480]) and the DRR (Danish Retroshock MI Registry) (n=1069). Clusters of patients with CS were identified in CSWG-MI using the consensus k-means algorithm and subsequently validated in CSWG-HF and DRR. Patients in each phenotype were further categorized by their Society for Cardiovascular Angiography and Interventions staging. The machine learning algorithm revealed 3 distinct clusters in CS: "non-congested (I)," "cardiorenal (II)," and "cardiometabolic (III)" shock. Among the 3 cohorts (CSWG-MI versus DRR versus CSWG-HF), in-hospital mortality was 21% versus 28% versus 10%, 45% versus 40% versus 32%, and 55% versus 56% versus 52% for clusters I, II, and III, respectively. The "cardiometabolic shock" cluster had the highest risk of developing stage D or E shock, as well as the highest in-hospital mortality among the phenotypes, regardless of cause. Despite baseline differences, each cluster showed reproducible demographic, metabolic, and hemodynamic profiles across the 3 cohorts. Conclusions Using machine learning, we identified and validated 3 distinct CS phenotypes, with specific and reproducible associations with mortality. These phenotypes may allow for targeted patient enrollment in clinical trials and foster the development of tailored treatment strategies in subsets of patients with CS.
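A simplified sketch of consensus k-means clustering in the spirit of the approach described above is given below: k-means is repeated on subsamples, the results are aggregated into a co-association matrix, and consensus clusters are extracted from that matrix. The input matrix, k = 3, and all parameter values are assumptions for illustration, not the authors' implementation.

```python
# Sketch of consensus k-means via a co-association matrix; inputs are assumed.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

def consensus_kmeans(X, k=3, n_runs=100, subsample=0.8, seed=0):
    """Repeat k-means on subsamples, accumulate how often pairs co-cluster,
    then extract k consensus clusters from the resulting similarity matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    co_assign = np.zeros((n, n))
    co_sampled = np.zeros((n, n))
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=rng.integers(1_000_000)).fit_predict(X[idx])
        same = (labels[:, None] == labels[None, :]).astype(float)
        co_assign[np.ix_(idx, idx)] += same
        co_sampled[np.ix_(idx, idx)] += 1
    consensus = co_assign / np.maximum(co_sampled, 1)   # fraction of co-clustering
    # Cluster the consensus matrix; 1 - consensus serves as a distance.
    return AgglomerativeClustering(n_clusters=k, metric="precomputed",
                                   linkage="average").fit_predict(1 - consensus)

# X = StandardScaler().fit_transform(presentation_features)  # hypothetical matrix
# phenotype = consensus_kmeans(X, k=3)
```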


2020 ◽  
Author(s):  
Jia Xue ◽  
Junxiang Chen ◽  
Ran Hu ◽  
Chen Chen ◽  
Chengda Zheng ◽  
...  

BACKGROUND It is important to measure the public response to the COVID-19 pandemic. Twitter is an important data source for infodemiology studies involving public response monitoring. OBJECTIVE The objective of this study is to examine COVID-19–related discussions, concerns, and sentiments using tweets posted by Twitter users. METHODS We analyzed 4 million Twitter messages related to the COVID-19 pandemic using a list of 20 hashtags (eg, “coronavirus,” “COVID-19,” “quarantine”) from March 7 to April 21, 2020. We used a machine learning approach, Latent Dirichlet Allocation (LDA), to identify popular unigrams and bigrams, salient topics and themes, and sentiments in the collected tweets. RESULTS Popular unigrams included “virus,” “lockdown,” and “quarantine.” Popular bigrams included “COVID-19,” “stay home,” “corona virus,” “social distancing,” and “new cases.” We identified 13 discussion topics and categorized them into 5 different themes: (1) public health measures to slow the spread of COVID-19, (2) social stigma associated with COVID-19, (3) COVID-19 news, cases, and deaths, (4) COVID-19 in the United States, and (5) COVID-19 in the rest of the world. Across all identified topics, the dominant sentiments for the spread of COVID-19 were anticipation that measures can be taken, followed by mixed feelings of trust, anger, and fear related to different topics. The public tweets revealed a significant feeling of fear when people discussed new COVID-19 cases and deaths compared to other topics. CONCLUSIONS This study showed that Twitter data and machine learning approaches can be leveraged for an infodemiology study, enabling research into evolving public discussions and sentiments during the COVID-19 pandemic. As the situation rapidly evolves, several topics are consistently dominant on Twitter, such as confirmed cases and death rates, preventive measures, health authorities and government policies, COVID-19 stigma, and negative psychological reactions (eg, fear). Real-time monitoring and assessment of Twitter discussions and concerns could provide useful data for public health emergency responses and planning. Pandemic-related fear, stigma, and mental health concerns are already evident and may continue to influence public trust when a second wave of COVID-19 occurs or there is a new surge of the current pandemic.
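For readers unfamiliar with the method, the following is a minimal sketch of LDA topic modeling on preprocessed tweet text using scikit-learn; the toy input, tokenization choices, and the number of topics (13, mirroring the abstract) are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal LDA topic-modeling sketch; the two example tweets are placeholders
# standing in for the full preprocessed corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["stay home and practice social distancing",
          "new cases reported today"]  # placeholder corpus

vectorizer = CountVectorizer(stop_words="english", ngram_range=(1, 2), min_df=1)
dtm = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=13, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```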


PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0241239
Author(s):  
Kai On Wong ◽  
Osmar R. Zaïane ◽  
Faith G. Davis ◽  
Yutaka Yasui

Background Canada is an ethnically diverse country, yet the lack of ethnicity information in many of its large databases impedes effective population research and interventions. Automated ethnicity classification using machine learning has shown potential to address this data gap, but its performance in Canada is largely unknown. This study developed a large-scale machine learning framework to predict ethnicity using a novel set of name and census location features. Methods Using the 1901 census, multiclass and binary classification machine learning pipelines were developed. The 13 ethnic categories examined were Aboriginal (First Nations, Métis, Inuit, and all combined), Chinese, English, French, Irish, Italian, Japanese, Russian, Scottish, and others. The machine learning algorithms included regularized logistic regression, C-support vector, and naïve Bayes classifiers. Name features consisted of the entire name string, substrings, double metaphones, and various name-entity patterns, while location features consisted of the entire location string and substrings of province, district, and subdistrict. Predictive performance metrics included sensitivity, specificity, positive predictive value, negative predictive value, F1, area under the receiver operating characteristic curve, and accuracy. Results The census contained 4,812,958 unique individuals. For multiclass classification, the highest performance achieved was 76% F1 and 91% accuracy. For the binary classifications of Chinese, French, Italian, Japanese, Russian, and others, F1 ranged from 68% to 95% (median 87%). The lower performance for English, Irish, and Scottish (F1 of 63–67%) was likely due to their shared cultural and linguistic heritage. Adding census location features to the name-based models strongly improved prediction for the Aboriginal category (F1 increased from 50% to 84%). Conclusions An automated machine learning approach using only name and census location features can predict the ethnicity of Canadians, with performance varying by ethnic category.
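As an illustration of name-feature classification in the spirit of this study, the sketch below feeds character n-grams of full name strings to a regularized logistic regression. The toy names and labels are placeholders, not records from the 1901 census.

```python
# Sketch of a name-based ethnicity classifier; training data is a toy placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

names = ["jean tremblay", "giovanni rossi", "mary macdonald"]   # placeholder
labels = ["French", "Italian", "Scottish"]                      # placeholder

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # name substring features
    LogisticRegression(C=1.0, max_iter=1000),                 # regularized (L2) model
)
clf.fit(names, labels)
print(clf.predict(["pierre gagnon"]))  # would be judged from learned n-gram patterns
```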


Author(s):  
Marco A. Alvarez ◽  
SeungJin Lim

Current search engines impose an overhead on motivated students and Internet users who employ the Web as a valuable resource for education. A user searching for good educational materials on a technical subject often spends extra time filtering irrelevant pages or ends up with commercial advertisements. It would be ideal if, given a technical subject from an educationally motivated user, suitable materials for that subject were automatically identified by affordable machine processing of the recommendation set returned by a search engine. In this scenario, the user saves a significant amount of time filtering out less useful Web pages, and the user's learning goal on the subject can be achieved more efficiently without clicking through numerous pages. This type of convenient learning is called One-Stop Learning (OSL). In this paper, the contributions made by Lim and Ko (Lim and Ko, 2006) for OSL are redefined and modeled using machine learning algorithms. Four selected supervised learning algorithms, Support Vector Machine (SVM), AdaBoost, Naive Bayes, and Neural Networks, are evaluated using the same data as in (Lim and Ko, 2006). The results presented in this paper are promising: the highest precision (98.9%) and overall accuracy (96.7%), obtained using SVM, are superior to the results reported by Lim and Ko. Furthermore, the machine learning approach presented here demonstrates that the small set of features used to represent each Web page yields a good solution to the OSL problem.
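A hedged sketch of how the four classifiers named above might be compared on a page-level feature matrix is shown below; the feature file, columns, and label are placeholders, and the hyperparameters are not taken from the paper.

```python
# Sketch comparing SVM, AdaBoost, Naive Bayes, and a neural network on a
# small set of page features; data and column names are hypothetical.
import pandas as pd
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pages = pd.read_csv("osl_page_features.csv")   # hypothetical labelled pages
X = pages.drop(columns=["suitable"])           # small set of page features
y = pages["suitable"]                          # suitable for learning: yes/no

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "AdaBoost": AdaBoostClassifier(n_estimators=200),
    "NaiveBayes": GaussianNB(),
    "NeuralNet": make_pipeline(StandardScaler(),
                               MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```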


2019 ◽  
Vol 9 (12) ◽  
pp. 122 ◽  
Author(s):  
Marina Sánchez-Rico ◽  
Jesús M. Alvarado

The study of diagnostic associations entails a large number of methodological problems regarding the application of machine learning algorithms, with collinearity and wide variability being some of the most prominent. To overcome these, we propose and test the use of uniform manifold approximation and projection (UMAP), a recent and popular dimensionality reduction technique. We demonstrate its effectiveness on a large Spanish clinical database of patients diagnosed with depression, applying UMAP before grouping patients with a hierarchical agglomerative cluster analysis. By extensively studying the behavior and results of this procedure, and validating them with purely unsupervised metrics, we show that they are consistent with well-known relationships, which supports the applicability of UMAP to advance the study of comorbidities.
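A minimal sketch of this two-step procedure, UMAP for dimensionality reduction followed by hierarchical agglomerative clustering validated with an unsupervised metric, is shown below. The random input matrix and all parameter values are illustrative assumptions, not the study's configuration.

```python
# Sketch of UMAP + hierarchical agglomerative clustering on a placeholder
# patient-by-diagnosis matrix (binary comorbidity indicators).
import numpy as np
import umap                                   # umap-learn package
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 40)).astype(float)  # placeholder data
X_scaled = StandardScaler().fit_transform(X)

embedding = umap.UMAP(n_components=5, n_neighbors=15,
                      min_dist=0.1, random_state=42).fit_transform(X_scaled)

labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(embedding)
print("silhouette:", silhouette_score(embedding, labels))  # unsupervised validation
```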


2020 ◽  
Vol 25 (4) ◽  
pp. 433-448 ◽  
Author(s):  
Alex Ingrams

In this paper, the author argues that the conflict between the copious amounts of digital data processed by public organisations and the need for policy-relevant insights to aid public participation constitutes a 'public information paradox'. Machine learning (ML) approaches may offer one solution to this paradox through algorithms that transparently collect data and use statistical modelling to provide insights for policymakers. Such an approach is tested in this paper. The test applies an unsupervised machine learning approach, latent Dirichlet allocation (LDA), to thousands of public comments submitted to the United States Transportation Security Administration (TSA) on a 2013 proposed regulation for the use of new full-body imaging scanners in airport security terminals. The analysis yields salient topic clusters that policymakers could use to make sense of large amounts of text, such as in an open public comment process. The results are compared with the TSA's actual final rule, and the author reflects on the new questions that the implementation of ML in open rule-making processes raises for transparency.

