New machine learning-based prediction models for fracture energy of asphalt mixtures

Measurement ◽  
2019 ◽  
Vol 135 ◽  
pp. 438-451 ◽  
Author(s):  
Hamed Majidifard ◽  
Behnam Jahangiri ◽  
William G. Buttlar ◽  
Amir H. Alavi

The main aim of agencies involved in the construction of asphalt roads is to improve the field performance of asphalt mixtures. The rising use of recycled and novel materials in asphalt mixtures has rendered previous semi-empirical mixture design methods partly incapable of accurately predicting field performance. Meeting this challenge calls for a shift towards an approach involving mixture performance tests. This project investigates the performance of modern recycled asphalt mixes containing ground tire rubber, recycled asphalt shingles (RAS), reclaimed asphalt pavement (RAP), and rejuvenators. Performance tests targeting several distress types were considered to evaluate the effect of using these components in asphalt mixtures. Combining these performance tests with prediction of mixture field performance should provide more robust and reliable design criteria for modern recycled asphalt mixtures, leading to better roads. To this end, the performance of eighteen different dense-graded asphalt mixtures paved in Missouri was investigated. The sections contain a wide range of RAP and RAS contents and different types of additives. The large number of sections investigated and the associated breadth of asphalt mixtures tested provided a robust data set to evaluate the range, repeatability, and relative values provided by modern mixture performance tests. As cracking is one of the most prevalent distresses in Missouri, performance tests such as the disk-shaped compact tension test (DC(T)) and the Illinois flexibility index test (I-FIT) were used to evaluate the cracking potential of the sampled field cores. In addition, the Hamburg wheel tracking test (HWTT) was employed to assess rutting and stripping potential.
Asphalt binder replacement (ABR) and binder grade bumping at low temperature were found to be critical factors in low-temperature cracking resistance as assessed by the DC(T) fracture energy test. Six sections were found to perform well in the DC(T) test, likely as a result of binder grade bumping (softer grade selection) or because of low recycling content. However, all of the sections were characterized as having brittle behavior as predicted by the I-FIT flexibility index. Service life and ABR were key factors in the I-FIT test. Finally, a performance-space diagram combining DC(T) fracture energy and HWTT rut depth was used to identify mixtures with a higher usable temperature interval (UTImix), some of which contained significant amounts of recycled material. In the second phase of chapter 2, the poor-performing mixtures were redesigned to improve their performance by changing mixture components, including recycling content, rejuvenator type and amount, binder type, and crumb rubber quantity. Finally, the optimum content of the components was determined based on mixture performance and material costs. The testing results, along with the field performance data, were used to develop a specification for MoDOT to screen mixtures and to support quality control and quality assurance of plant-produced asphalt concrete. Field monitoring is a potential means to identify the most reliable cracking performance test. Also, a new cracking index was introduced based on the SCB (I-FIT) test to improve the test's reliability and correlation with field results. In the third chapter of this study, a prediction tool was developed to predict the performance of asphalt mixtures at high and low temperatures. This tool is based on two different prediction models, one for DC(T) fracture energy and one for the Hamburg wheel track test.
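The performance-space diagram described above amounts to a two-axis screening rule. The sketch below illustrates the idea; the DC(T) fracture energy and HWTT rut depth thresholds are illustrative assumptions, not the MoDOT specification limits developed in this work.

```python
def screen_mixture(dct_fracture_energy_j_m2, hwtt_rut_depth_mm,
                   dct_threshold=400.0, rut_threshold=12.5):
    """Classify a mixture on a DC(T)/HWTT performance-space diagram.

    Thresholds (J/m^2 and mm) are illustrative placeholders; agencies
    set their own limits. Passing both axes suggests a wide usable
    temperature interval; failing one flags cracking or rutting risk.
    """
    crack_ok = dct_fracture_energy_j_m2 >= dct_threshold
    rut_ok = hwtt_rut_depth_mm <= rut_threshold
    if crack_ok and rut_ok:
        return "pass"
    if crack_ok:
        return "rutting risk"
    if rut_ok:
        return "cracking risk"
    return "fail"
```

A mixture with 450 J/m² fracture energy and 8 mm rut depth would pass both axes under these assumed limits, while a soft, rut-prone mixture would be flagged on the HWTT axis only.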
For the DC(T) fracture energy model, genetic programming was used to develop the prediction model, and a Convolutional Neural Network (CNN) was used to train the Hamburg wheel track model on 10,000 data points. A database containing a comprehensive collection of Hamburg and DC(T) test results was used to develop the machine learning-based prediction models. This tool can be used for pre-design purposes to design an asphalt mixture with balanced rutting and cracking performance. The models were formulated in terms of typical influencing mixture property variables, such as asphalt binder high-temperature performance grade (PG), mixture type, aggregate size, aggregate gradation, asphalt content, and total asphalt binder recycling content, as well as test parameters such as temperature and number of cycles. Model accuracy was assessed through a rigorous validation process and found to be quite acceptable, despite the relatively small size of the training set. Since performance tests may be cost-prohibitive for some users, the proposed ML-based models can save time and expense during the material screening phase. Pavement distress inspections are performed using sophisticated data collection vehicles and/or foot-on-ground surveys. In either approach, the process of distress detection is human-dependent, expensive, inefficient, and/or unsafe. Automated pavement distress detection from road images remains a challenging problem for pavement researchers and the computer-vision community. In the fourth chapter of the dissertation, we extracted 7237 Google Street View images and manually annotated them for classification into nine distress classes. Afterward, the YOLO (You Only Look Once) deep learning framework was implemented to train the model on the labeled dataset.
Also, a U-Net-based model was developed to quantify the severity of the distresses, and finally, a hybrid model was developed by integrating the YOLO and U-Net models to classify the distresses and quantify their severity simultaneously. The outputs of the distress classification and segmentation models were used to develop a comprehensive pavement condition tool that rates each pavement image according to the type and severity of the distresses extracted. As a result, over-dependence on human judgement throughout the pavement condition evaluation process can be avoided. The outcome of this study could be conveniently employed to evaluate pavement condition during the service life of a road and to help make sound, timely rehabilitation decisions.
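The rating step above converts per-image detections into a single condition score. A minimal sketch of such a scheme follows; the distress classes and per-class deduction weights are hypothetical placeholders, and the dissertation's actual rating formula may differ.

```python
def rate_pavement(detections, weights=None):
    """Rate a pavement image from (distress_class, severity) detections.

    `detections` is a list of (class_name, severity) pairs, with severity
    in [0, 1] as a segmentation model might report. The per-class weights
    are hypothetical placeholders, not the tool's actual values.
    Returns a 0-100 condition score (100 = distress-free).
    """
    if weights is None:
        weights = {"alligator crack": 25.0, "longitudinal crack": 10.0,
                   "transverse crack": 10.0, "block crack": 15.0,
                   "pothole": 30.0, "sealed crack": 5.0}
    deduction = sum(weights.get(name, 10.0) * severity
                    for name, severity in detections)
    return max(0.0, 100.0 - deduction)
```

Under these assumed weights, an image with a single full-severity pothole scores 70, and the score is floored at zero once deductions accumulate.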


2019 ◽  
Author(s):  
Oskar Flygare ◽  
Jesper Enander ◽  
Erik Andersson ◽  
Brjánn Ljótsson ◽  
Volen Z Ivanov ◽  
...  

**Background:** Previous attempts to identify predictors of treatment outcomes in body dysmorphic disorder (BDD) have yielded inconsistent findings. One way to increase precision and clinical utility could be to use machine learning methods, which can incorporate multiple non-linear associations in prediction models. **Methods:** This study used a random forests machine learning approach to test whether it is possible to reliably predict remission from BDD in a sample of 88 individuals who had received internet-delivered cognitive behavioral therapy for BDD. The random forest models were compared to traditional logistic regression analyses. **Results:** Random forests correctly identified 78% of participants as remitters or non-remitters at post-treatment. The accuracy of prediction was lower at subsequent follow-ups (68%, 66% and 61% correctly classified at the 3-, 12- and 24-month follow-ups, respectively). Depressive symptoms, treatment credibility, working alliance, and initial severity of BDD were among the most important predictors at the beginning of treatment. By contrast, the logistic regression models did not identify consistent and strong predictors of remission from BDD. **Conclusions:** The results provide initial support for the clinical utility of machine learning approaches in the prediction of outcomes of patients with BDD. **Trial registration:** ClinicalTrials.gov ID: NCT02010619.
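The random forest idea used here, bootstrap-sampled trees voting by majority, can be sketched in pure Python with depth-1 trees (decision stumps). This is a toy illustration of the technique, not the study's actual model, which would use full trees and many clinical predictors.

```python
import random

def fit_stump(X, y, feat_idx):
    """Best single-feature threshold split by 0/1 training accuracy."""
    best = (-1.0, 0.0, 0, 1)  # (accuracy, threshold, left_label, right_label)
    for t in sorted(set(row[feat_idx] for row in X)):
        for left, right in ((0, 1), (1, 0)):
            preds = [left if row[feat_idx] <= t else right for row in X]
            acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
            if acc > best[0]:
                best = (acc, t, left, right)
    return best[1], best[2], best[3]

def random_forest_predict(X_train, y_train, x, n_trees=25, seed=0):
    """Majority vote over stumps fit on bootstrap resamples.

    Each tree sees a bootstrap sample of the rows and one randomly
    chosen feature -- the two sources of randomness that make a forest
    more robust than a single tree.
    """
    rng = random.Random(seed)
    n, d = len(X_train), len(X_train[0])
    votes = 0
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        feat = rng.randrange(d)                      # random feature choice
        Xb = [X_train[i] for i in idx]
        yb = [y_train[i] for i in idx]
        t, left, right = fit_stump(Xb, yb, feat)
        votes += left if x[feat] <= t else right
    return 1 if votes * 2 >= n_trees else 0
```

On a cleanly separable one-feature dataset, the ensemble recovers the threshold even though each tree trains on a noisy resample.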


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the generalization and robustness of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among the wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmark to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized through integrating machine learning and SEIR models.
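For context, the SIR baseline that the machine learning models are compared against can be integrated with a few lines of forward-Euler stepping. The β and γ values in the usage note are illustrative, not fitted COVID-19 parameters.

```python
def simulate_sir(s0, i0, r0, beta, gamma, days, dt=1.0):
    """Forward-Euler integration of the SIR compartmental model.

    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
    Returns the (S, I, R) trajectory, one tuple per time step.
    """
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    path = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # S -> I transitions this step
        new_rec = gamma * i * dt          # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append((s, i, r))
    return path
```

With, say, β = 0.3 and γ = 0.1 (basic reproduction number R₀ = 3) in a population of 1000, the epidemic curve rises, peaks, and declines while the total population is conserved at every step.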


2019 ◽  
Vol 21 (9) ◽  
pp. 662-669 ◽  
Author(s):  
Junnan Zhao ◽  
Lu Zhu ◽  
Weineng Zhou ◽  
Lingfeng Yin ◽  
Yuchen Wang ◽  
...  

Background: Thrombin is the central protease of the vertebrate blood coagulation cascade and is closely related to cardiovascular diseases. The inhibitory constant Ki is the most significant property of thrombin inhibitors. Method: This study was carried out to predict the Ki values of thrombin inhibitors from a large data set using machine learning methods. By finding non-intuitive regularities in high-dimensional datasets, machine learning can be used to build effective predictive models. A total of 6554 descriptors for each compound were collected, and an efficient descriptor selection method was used to find the appropriate descriptors. Four different methods, multiple linear regression (MLR), K Nearest Neighbors (KNN), Gradient Boosting Regression Tree (GBRT) and Support Vector Machine (SVM), were implemented to build prediction models with the selected descriptors. Results: The SVM model was the best among these methods, with R2=0.84, MSE=0.55 for the training set and R2=0.83, MSE=0.56 for the test set. Several validation methods, such as the y-randomization test and applicability domain evaluation, were adopted to assess the robustness and generalization ability of the model. The final model shows excellent stability and predictive ability and can be employed for rapid estimation of the inhibitory constant, which is helpful for designing novel thrombin inhibitors.
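The R² and MSE figures reported above are the standard regression diagnostics; they can be computed with a small helper like the one below (the toy values in the test are hypothetical, not the study's data).

```python
def regression_metrics(y_true, y_pred):
    """Return (R^2, MSE) for paired observed/predicted values.

    R^2 = 1 - SS_res/SS_tot, where SS_res is the sum of squared
    residuals and SS_tot the total sum of squares about the mean.
    """
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot
    return r2, mse
```

A perfect predictor gives R² = 1 and MSE = 0; a model that only predicts the mean of the observations gives R² = 0.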


2020 ◽  
Vol 16 ◽  
Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

Background and Introduction: Diabetes mellitus is a metabolic disorder that has emerged as a serious public health issue worldwide. According to the World Health Organization (WHO), without interventions, the number of diabetes cases is expected to reach at least 629 million by 2045. Uncontrolled diabetes gradually leads to progressive damage to the eyes, heart, kidneys, blood vessels and nerves. Method: The paper presents a critical review of existing statistical and Artificial Intelligence (AI) based machine learning techniques with respect to DM complications, namely retinopathy, neuropathy and nephropathy. The statistical and machine learning analytic techniques are used to structure the subsequent content review. Result: It has been inferred that statistical analysis can help only with inferential and descriptive analysis, whereas AI-based machine learning models can provide actionable prediction models for faster and more accurate diagnosis of complications associated with DM. Conclusion: The integration of AI-based analytics techniques, such as machine learning and deep learning, into clinical medicine will result in improved disease management through faster disease detection and reduced treatment costs.


2021 ◽  
Vol 15 ◽  
Author(s):  
Alhassan Alkuhlani ◽  
Walaa Gad ◽  
Mohamed Roushdy ◽  
Abdel-Badeeh M. Salem

Background: Glycosylation is one of the most common post-translational modifications (PTMs) in organism cells. It plays important roles in several biological processes, including cell-cell interaction, protein folding, antigen recognition, and immune response. In addition, glycosylation is associated with many human diseases such as cancer, diabetes and coronaviruses. The experimental techniques for identifying glycosylation sites are time-consuming, labor-intensive, and expensive. Therefore, computational intelligence techniques are becoming very important for glycosylation site prediction. Objective: This paper is a theoretical discussion of the technical aspects of applying biotechnological and computational approaches (e.g., artificial intelligence and machine learning) to digital bioinformatics research and intelligent biocomputing. Computational intelligence techniques have shown efficient results for predicting N-linked, O-linked and C-linked glycosylation sites. In the last two decades, many studies have been conducted on glycosylation site prediction using these techniques. In this paper, we analyze and compare a wide range of intelligent techniques from these studies across multiple aspects. The current challenges and difficulties facing software developers and knowledge engineers in predicting glycosylation sites are also included. Method: The comparison between these different studies covers many criteria, including databases, feature extraction and selection, machine learning classification methods, evaluation measures and performance results. Results and conclusions: Many challenges and problems are presented. Consequently, more effort is needed to obtain more accurate prediction models for the three basic types of glycosylation sites.


2018 ◽  
Author(s):  
Liyan Pan ◽  
Guangjian Liu ◽  
Xiaojian Mao ◽  
Huixian Li ◽  
Jiexin Zhang ◽  
...  

BACKGROUND Central precocious puberty (CPP) in girls seriously affects their physical and mental development in childhood. The method of diagnosis—gonadotropin-releasing hormone (GnRH)–stimulation test or GnRH analogue (GnRHa)–stimulation test—is expensive and makes patients uncomfortable due to the need for repeated blood sampling. OBJECTIVE We aimed to combine multiple CPP-related features and construct machine learning models to predict response to the GnRHa-stimulation test. METHODS In this retrospective study, we analyzed clinical and laboratory data of 1757 girls who underwent a GnRHa test in order to develop XGBoost and random forest classifiers for prediction of response to the GnRHa test. The local interpretable model-agnostic explanations (LIME) algorithm was used with the black-box classifiers to increase their interpretability. We measured the sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of the models. RESULTS Both the XGBoost and random forest models achieved good performance in distinguishing between positive and negative responses, with AUC ranging from 0.88 to 0.90, sensitivity from 77.91% to 77.94%, and specificity from 84.32% to 87.66%. Basal serum luteinizing hormone, follicle-stimulating hormone, and insulin-like growth factor-I levels were found to be the three most important factors. In the interpretable LIME models, the abovementioned variables made high contributions to the prediction probability. CONCLUSIONS The prediction models we developed can help diagnose CPP and may be used as a prescreening tool before the GnRHa-stimulation test.
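The sensitivity, specificity, and AUC figures reported above can be computed directly from predicted scores; the sketch below uses the rank-based (Mann-Whitney) formulation of AUC, and the toy scores in the test are hypothetical.

```python
def classification_metrics(y_true, scores, threshold=0.5):
    """Sensitivity, specificity, and AUC from binary labels and scores.

    AUC is computed as the fraction of (positive, negative) pairs in
    which the positive example scores higher (ties count half), which
    equals the area under the ROC curve.
    """
    pairs = list(zip(y_true, scores))
    tp = sum(1 for y, s in pairs if y == 1 and s >= threshold)
    fn = sum(1 for y, s in pairs if y == 1 and s < threshold)
    tn = sum(1 for y, s in pairs if y == 0 and s < threshold)
    fp = sum(1 for y, s in pairs if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    pos = [s for y, s in pairs if y == 1]
    neg = [s for y, s in pairs if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    auc = wins / (len(pos) * len(neg))
    return sensitivity, specificity, auc
```

A classifier whose scores perfectly rank all positives above all negatives gets AUC = 1.0 regardless of the threshold, which is why AUC is reported alongside the threshold-dependent sensitivity and specificity.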


2019 ◽  
Vol 41 (2) ◽  
pp. 284-287
Author(s):  
Pedro Guilherme Coelho Hannun ◽  
Luis Gustavo Modelli de Andrade

Abstract Introduction: The prediction of post-transplantation outcomes is clinically important and involves several problems. The current prediction models based on standard statistics are very complex, are difficult to validate, and do not provide accurate predictions. Machine learning, a statistical technique that allows the computer to make future predictions using previous experience, is beginning to be used to solve these issues. In the field of kidney transplantation, computational forecasting has been reported for the prediction of chronic allograft rejection, delayed graft function, and graft survival. This paper describes machine learning principles and the steps needed to make a prediction, and briefly reviews its most recent applications in the literature. Discussion: There is compelling evidence that machine learning approaches based on donor and recipient data are better at providing improved prognosis of graft outcomes than traditional analysis. The immediate expectations that emerge from this new prediction modelling technique are that it will generate better clinical decisions based on dynamic and local practice data and optimize organ allocation as well as post-transplantation care management. Despite the promising results, there is not yet a substantial number of studies determining the feasibility of its application in a clinical setting. Conclusion: The way we deal with stored data in electronic health records will change radically in the coming years, and machine learning will become part of the clinical daily routine, whether to predict clinical outcomes or to suggest diagnoses based on institutional experience.

