Predicting the Retroreflectivity Degradation of Waterborne Paint Pavement Markings using Advanced Machine Learning Techniques

Author(s):  
Momen R. Mousa ◽  
Saleh R. Mousa ◽  
Marwa Hassan ◽  
Paul Carlson ◽  
Ibrahim A. Elnaml

Waterborne paint is the most common marking material used throughout the United States. Because of budget constraints, most transportation agencies repaint their markings on a fixed schedule, a practice of questionable efficiency and economy. To overcome this problem, state agencies could evaluate marking performance using the measured retroreflectivity of waterborne paints applied in the National Transportation Product Evaluation Program (NTPEP) or retroreflectivity degradation models developed in previous studies. Generally, both options lack accuracy because of the high dimensionality and multicollinearity of retroreflectivity data. Therefore, the objective of this study was to employ an advanced machine learning algorithm to develop performance prediction models for waterborne paints considering the variables believed to affect their performance. To achieve this objective, a total of 17,952 skip and wheel retroreflectivity measurements were collected from 10 test decks included in the NTPEP. Based on these data, two CatBoost models were developed that can predict, with an acceptable level of accuracy, the skip and wheel retroreflectivity of waterborne paints for up to 3 years using only the initial measured retroreflectivity and the anticipated project conditions over the intended prediction horizon, such as line color, traffic, and air temperature. These models could be used by transportation agencies throughout the United States to (1) compare different products and select the best product for a specific project, and (2) determine the expected service life of a specific product based on a specified threshold retroreflectivity to plan for future restriping activities.
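
A minimal sketch of how such a gradient-boosted model could be set up with the CatBoost library is shown below; the file name, feature columns, and hyperparameters are illustrative assumptions, not the authors' actual NTPEP dataset or configuration.

```python
# Hedged sketch: predict skip retroreflectivity from initial retroreflectivity
# and project conditions. All column names and files here are hypothetical.
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("ntpep_waterborne_paint.csv")  # hypothetical data export
features = ["initial_retro", "line_color", "aadt",
            "air_temp", "elapsed_months"]        # assumed predictor set
X, y = df[features], df["skip_retro"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = CatBoostRegressor(
    iterations=500, depth=6, learning_rate=0.05,
    cat_features=["line_color"],  # CatBoost handles categoricals natively
    verbose=False,
)
model.fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```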

2021 ◽  
Vol 8 ◽  
Author(s):  
Keith Carlson ◽  
Faraz Dadgostari ◽  
Michael A. Livermore ◽  
Daniel N. Rockmore

This paper introduces a novel linked structure-content representation of federal statutory law in the United States and analyzes and quantifies its structure using tools and concepts drawn from network analysis and complexity studies. The organizational component of our representation is based on the explicit hierarchical organization within the United States Code (USC) as well as an embedded cross-reference citation network. We couple this structure with a layer of content-based similarity derived from the application of a “topic model” to the USC. The resulting representation is the first that explicitly models the USC as a “multinetwork” or “multilayered network” incorporating hierarchical structure, cross-references, and content. We report several novel descriptive statistics of this multinetwork. These include the results of this first application of the machine learning technique of topic modeling to the USC, as well as multiple measures articulating the relationships between the organizational and content network layers. We find a high degree of assortativity of “titles” (the highest-level hierarchy within the USC) with related topics. We also present a link prediction task and show that machine learning techniques are able to recover information about structure from content. Success in this prediction task has a natural interpretation as indicating a form of mutual information. We connect the relational findings between organization and content to a measure of “ease of search” in this large hyperlinked document, which has implications for the ways in which the structure of the USC supports (or doesn’t support) broad, useful access to the law. The measures developed in this paper have the potential to enable comparative work in the study of statutory networks that ranges across time and geography.
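
A hedged sketch of the topic-modeling step the paper applies to the USC, using scikit-learn's latent Dirichlet allocation; the toy section texts and topic count are placeholders, not the authors' corpus or settings.

```python
# Sketch: fit a topic model to USC section texts, then derive a
# content-similarity layer from the per-section topic mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

sections = [
    "the secretary shall prescribe regulations to carry out this chapter",
    "whoever knowingly transports any stolen vehicle shall be fined",
]  # toy stand-ins; a real run would use every USC section

vec = CountVectorizer(stop_words="english", max_features=5000)
dtm = vec.fit_transform(sections)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
theta = lda.fit_transform(dtm)            # per-section topic mixtures

content_layer = cosine_similarity(theta)  # similarity edges between sections
```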


2018 ◽  
Author(s):  
Marcus Nguyen ◽  
S. Wesley Long ◽  
Patrick F. McDermott ◽  
Randall J. Olsen ◽  
Robert Olson ◽  
...  

Nontyphoidal Salmonella species are the leading bacterial cause of food-borne disease in the United States. Whole genome sequences and paired antimicrobial susceptibility data are available for Salmonella strains because of surveillance efforts from public health agencies. In this study, a collection of 5,278 nontyphoidal Salmonella genomes, collected over 15 years in the United States, was used to generate XGBoost-based machine learning models for predicting minimum inhibitory concentrations (MICs) for 15 antibiotics. The MIC prediction models have average accuracies between 95% and 96% within ±1 two-fold dilution factor and can predict MICs with no a priori information about the underlying gene content or resistance phenotypes of the strains. By selecting diverse genomes for training sets, we show that highly accurate MIC prediction models can be generated with fewer than 500 genomes. We also show that our approach for predicting MICs is stable over time despite annual fluctuations in antimicrobial resistance gene content in the sampled genomes. Finally, using feature selection, we explore the important genomic regions identified by the models for predicting MICs. To date, this is one of the largest MIC modeling studies to be published. Our strategy for developing whole genome sequence-based models for surveillance and clinical diagnostics can be readily applied to other important human pathogens.
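
An illustrative sketch of XGBoost MIC regression under the common convention of log2-transformed MIC labels and k-mer count features; the feature files, hyperparameters, and rounding rule are assumptions rather than the study's exact pipeline.

```python
# Sketch: regress log2(MIC) on genomic k-mer counts and score accuracy
# within +/- 1 two-fold dilution (1 unit in log2 space).
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

X = np.load("kmer_counts.npy")   # hypothetical k-mer count matrix
y = np.load("log2_mic.npy")      # hypothetical log2(MIC) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4)
model.fit(X_tr, y_tr)

pred = np.round(model.predict(X_te))  # snap to the nearest dilution step
within_one = np.mean(np.abs(pred - y_te) <= 1)
print(f"Accuracy within ±1 dilution: {within_one:.2%}")
```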


Author(s):  
Scott Wark ◽  
Thao Phan

Between 2016 and 2020, Facebook allowed advertisers in the United States to target their advertisements using three broad “ethnic affinity” categories: “African American,” “U.S.-Hispanic,” and “Asian American.” This paper uses the life and death of these “ethnic affinity” categories to argue that they exemplify a novel mode of racialisation made possible by machine learning techniques. These categories worked by analysing users’ preferences and behaviour: they were supposed to capture an “affinity” for a broad demographic group, rather than registering membership of that group. That is, they were supposed to allow advertisers to “personalise” content for users depending on behaviourally determined affinities. We argue that, in effect, Facebook’s ethnic affinity categories were supposed to operationalise a “post-racial” mode of categorising users. But the paradox of personalisation is that in order to apprehend users as individuals, platforms must first assemble them into groups based on their likenesses with other individuals. This article uses an analysis of these categories to argue that even in the absence of data on a user’s race—even after the demise of the categories themselves—users can still be subject to techniques of inclusion or exclusion for discriminatory ends. The inductive machine learning techniques that platforms like Facebook employ to classify users generate “proxies,” like racialised preferences or language use, as racialising substitutes. This article concludes by arguing that Facebook’s ethnic affinity categories in fact typify novel modes of racialisation today.


2021 ◽  
Vol 1 (4) ◽  
pp. 268-280
Author(s):  
Bamanga Mahmud Ahmad ◽  
Ahmadu Asabe Sandra ◽  
Musa Yusuf Malgwi ◽  
Dahiru I. Sajoh

Machine learning techniques are commonly used in clinical decision support systems for the identification and prediction of different diseases. Heart disease is the leading cause of death for both men and women around the world. Because the heart is one of the essential organs of the human body, it is one of the most critical concerns in the medical domain, and several researchers have developed intelligent medical systems to enhance the ability to diagnose and predict heart disease. However, few studies have examined the capabilities of ensemble methods in developing heart disease detection and prediction models. In this study, the researchers assessed how an ensemble model, which offers more stable performance than a base learning algorithm, can lead to better results than other heart disease prediction models. Patient heart disease records were extracted from the University of California, Irvine (UCI) Machine Learning Repository archive. To achieve the aim of this study, the researchers developed a meta-algorithm. The experimental results show that the ensemble model is a superior solution in terms of high predictive accuracy and reliable diagnostic output. This work also presents an ensemble heart disease prediction model as a valuable, cost-effective, and timely predictive option with a user-friendly graphical user interface that is scalable and expandable. Based on these findings, the researchers suggest Bagging as the best ensemble classifier to adopt, as it achieved the highest prediction probability score in the implementation of heart disease prediction.
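
A minimal sketch of the kind of Bagging ensemble the study recommends, evaluated by cross-validation on UCI-style heart disease records; the file name and base learner are assumptions, not the authors' exact pipeline.

```python
# Sketch: bagged decision trees on UCI-style heart disease data.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("heart.csv")  # hypothetical UCI repository export
X, y = df.drop(columns="target"), df["target"]

bagging = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # base learner for the ensemble
    n_estimators=100,
    random_state=0,
)
scores = cross_val_score(bagging, X, y, cv=10)
print("Mean 10-fold CV accuracy:", scores.mean())
```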


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S695-S695
Author(s):  
Timothy Sullivan

Abstract. Background: Outpatient antibiotic misuse is common, yet it is difficult to identify and prevent. Novel methods are needed to better identify unnecessary antibiotic use in the outpatient setting. Methods: The Twitter developer platform was accessed to identify Tweets describing outpatient antibiotic use in the United States between November 2018 and March 2019. Unique English-language Tweets reporting recent antibiotic use were aggregated, reviewed, and labeled as describing possible misuse or not describing misuse. Possible misuse was defined as antibiotic use for a diagnosis or symptoms for which antibiotics are not indicated based on national guidelines, or the use of antibiotics without evaluation by a healthcare provider (Figure 1). Tweets were randomly divided into training and testing sets consisting of 80% and 20% of the data, respectively. Training set Tweets were preprocessed via a natural language processing pipeline, converted into numerical vectors, and used to generate a logistic regression algorithm to predict misuse in the testing set. Analyses were performed in Python using the scikit-learn and nltk libraries. Results: 4000 Tweets were included, of which 1028 were labeled as describing possible outpatient antibiotic misuse. The algorithm correctly identified Tweets describing possible antibiotic misuse in the testing set with specificity = 94%, sensitivity = 55%, PPV = 75%, NPV = 87%, and area under the ROC curve = 0.91 (Figure 2). Conclusion: A machine learning algorithm using Twitter data identified episodes of self-reported antibiotic misuse with good test performance, as defined by the area under the ROC curve. Analysis of Twitter data captured some episodes of antibiotic misuse, such as the use of non-prescribed antibiotics, that are not easily identified by other methods. This approach could be used to generate novel insights into the causes and extent of antibiotic misuse in the United States, and to monitor antibiotic misuse in real time. Disclosures: All authors: No reported disclosures.
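
A hedged sketch of the pipeline described (nltk preprocessing, vectorization, scikit-learn logistic regression); the labeled-tweet file and the TF-IDF vectorizer choice are assumptions.

```python
# Sketch: text -> TF-IDF vectors -> logistic regression -> ROC AUC.
# Requires the nltk stopwords corpus (nltk.download("stopwords")).
import pandas as pd
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

tweets = pd.read_csv("labeled_tweets.csv")  # hypothetical: text, misuse
X_tr, X_te, y_tr, y_te = train_test_split(
    tweets["text"], tweets["misuse"], test_size=0.2, random_state=0)

vec = TfidfVectorizer(stop_words=stopwords.words("english"))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_tr), y_tr)

probs = clf.predict_proba(vec.transform(X_te))[:, 1]
print("Area under the ROC curve:", roc_auc_score(y_te, probs))
```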


2019 ◽  
Author(s):  
Sing-Chun Wang ◽  
Yuxuan Wang

Abstract. Occurrences of devastating wildfires have been on the rise in the United States for the past decades. While the environmental controls, including weather, climate, and fuels, are known to play important roles in controlling wildfires, the interrelationships between fires and the environmental controls are highly complex and may not be well represented by traditional parametric regressions. Here we develop a model integrating multiple machine learning algorithms to predict gridded monthly wildfire burned area during 2002–2015 over the South Central United States and identify the relative importance of the environmental drivers on the burned area for both the winter-spring and summer fire seasons of that region. The developed model is able to alleviate the issue of unevenly distributed burned area data and achieves cross-validation (CV) R2 values of 0.42 and 0.40 for the two fire seasons. For the total burned area over the study domain, the model can explain 50% and 79% of the interannual variability in total burned area for the winter-spring and summer fire seasons, respectively. The prediction model ranks relative humidity (RH) anomalies and preceding months’ drought severity as the top two most important predictors of the gridded burned area for both fire seasons. Sensitivity experiments with the model show that the effect of climate change, represented by a group of climate-anomaly variables, contributes the most to the burned area for both fire seasons. Antecedent fuel amount and conditions are found to outweigh weather effects for the burned area in the winter-spring fire season, while current-month fire weather is more important for the summer fire season, likely because of the controlling effect of weather on fuel moisture in this season. This developed model allows us to predict gridded burned area and to assess specific fire management strategies for different fire mechanisms in the two seasons.
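
A minimal sketch of cross-validated burned-area regression with feature rankings, using a random forest as a stand-in for the paper's multi-algorithm framework; the predictor names and data file are hypothetical.

```python
# Sketch: score a gridded burned-area regressor by CV R^2 and rank drivers.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("gridded_fire_data.csv")  # hypothetical grid-month rows
predictors = ["rh_anomaly", "drought_lag", "fuel_amount", "wind_speed"]
X, y = df[predictors], df["burned_area"]

rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("CV R^2:", cross_val_score(rf, X, y, cv=5, scoring="r2").mean())

rf.fit(X, y)
for name, imp in sorted(zip(predictors, rf.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")  # impurity-based importance ranking
```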


2018 ◽  
Vol 21 ◽  
pp. 45-48
Author(s):  
Shilpa Balan ◽  
Sanchita Gawand ◽  
Priyanka Purushu

Cybersecurity plays a vital role in protecting the privacy and data of people. In recent times, there have been several issues relating to cyber fraud, data breaches, and cyber theft, and many people in the United States have been victims of identity theft. Thus, an understanding of cybersecurity plays an important role in protecting one's information and devices. As the adoption of smart devices and social networking increases, cybersecurity awareness needs to be spread. This research aims to build a machine learning classification algorithm to determine the awareness of cybersecurity among the general public in the United States. We were able to attain a good F-measure score when evaluating the performance of the classification model built for this study.
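
A minimal sketch of evaluating such a classification model by F-measure with scikit-learn; the survey file and the choice of classifier are assumptions.

```python
# Sketch: train a stand-in classifier and report the F-measure.
import pandas as pd
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("cyber_awareness_survey.csv")  # hypothetical survey data
X, y = df.drop(columns="aware"), df["aware"]    # "aware" = binary label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)              # stand-in classifier
print("F-measure:", f1_score(y_te, clf.predict(X_te)))
```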


2018 ◽  
Vol 57 (2) ◽  
Author(s):  
Marcus Nguyen ◽  
S. Wesley Long ◽  
Patrick F. McDermott ◽  
Randall J. Olsen ◽  
Robert Olson ◽  
...  

ABSTRACT Nontyphoidal Salmonella species are the leading bacterial cause of foodborne disease in the United States. Whole-genome sequences and paired antimicrobial susceptibility data are available for Salmonella strains because of surveillance efforts from public health agencies. In this study, a collection of 5,278 nontyphoidal Salmonella genomes, collected over 15 years in the United States, was used to generate extreme gradient boosting (XGBoost)-based machine learning models for predicting MICs for 15 antibiotics. The MIC prediction models had an overall average accuracy of 95% within ±1 two-fold dilution step (confidence interval, 95% to 95%), an average very major error rate of 2.7% (confidence interval, 2.4% to 3.0%), and an average major error rate of 0.1% (confidence interval, 0.1% to 0.2%). The model predicted MICs with no a priori information about the underlying gene content or resistance phenotypes of the strains. By selecting diverse genomes for the training sets, we show that highly accurate MIC prediction models can be generated with fewer than 500 genomes. We also show that our approach for predicting MICs is stable over time, despite annual fluctuations in antimicrobial resistance gene content in the sampled genomes. Finally, using feature selection, we explore the important genomic regions identified by the models for predicting MICs. To date, this is one of the largest MIC modeling studies to be published. Our strategy for developing whole-genome sequence-based models for surveillance and clinical diagnostics can be readily applied to other important human pathogens.
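
Because this version of the abstract also reports very major and major error rates, a small sketch of how those two metrics are conventionally computed from susceptible/resistant calls follows; the toy arrays are illustrative, and deriving the calls from breakpoints is assumed to have happened upstream.

```python
# Sketch: very major error (resistant called susceptible) and major error
# (susceptible called resistant), computed from toy 0/1 resistance calls.
import numpy as np

true_resistant = np.array([1, 0, 1, 0, 0, 1])  # toy breakpoint-derived truth
pred_resistant = np.array([1, 0, 0, 0, 1, 1])  # toy model calls

vme = np.mean(pred_resistant[true_resistant == 1] == 0)  # very major error
me = np.mean(pred_resistant[true_resistant == 0] == 1)   # major error
print(f"VME: {vme:.1%}  ME: {me:.1%}")
```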


2020 ◽  
Vol 16 ◽  
Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

Background and Introduction: Diabetes mellitus (DM) is a metabolic disorder that has emerged as a serious public health issue worldwide. According to the World Health Organization (WHO), without interventions, the number of diabetes cases is expected to reach at least 629 million by 2045. Uncontrolled diabetes gradually leads to progressive damage to the eyes, heart, kidneys, blood vessels, and nerves. Method: The paper presents a critical review of existing statistical and Artificial Intelligence (AI) based machine learning techniques with respect to DM complications, namely retinopathy, neuropathy, and nephropathy. The statistical and machine learning analytic techniques are used to structure the subsequent content review. Result: It has been inferred that statistical analysis can help only in inferential and descriptive analysis, whereas AI-based machine learning models can provide actionable prediction models for faster and more accurate diagnosis of complications associated with DM. Conclusion: The integration of AI-based analytics techniques like machine learning and deep learning in clinical medicine will result in improved disease management through faster disease detection and cost reduction for disease treatment.

