Classifying Compensations in Construction Disputes Using Machine Learning Techniques

Author(s):
Murat Ayhan
Irem Dikmen
M. Talat Birgonul
...

Disputes are highly likely in construction projects, and they are detrimental because they can lead to cost overruns and delays. Knowing the likely compensation with some certainty can keep parties from pursuing inconclusive claims. Decision support systems can help clarify what compensation, if any, can be obtained. Within this context, the primary objective of this research is to predict the compensation associated with construction disputes by applying machine learning (ML) techniques to past project data, so that in new projects decision support can be provided through forecasts of the likely compensation. To do this, a conceptual model identifying the attributes affecting compensation was established based on an extensive literature review. Using these attributes, data from real-world dispute cases were collected. Insignificant attributes were eliminated via chi-square tests to establish a simpler classification model, which was then tested with alternative single and ensemble ML techniques. The Naïve Bayes (NB) classifier produced the highest average classification accuracy, 80.61%, when the One-vs-All (OvA) decomposition technique was used. The conceptual model can guide construction professionals in dispute-management decision-making, and the promising results indicate that the classification model has the potential to identify compensations. This study can help mitigate disputes by keeping parties from resorting to unpleasant and inconclusive resolution processes.
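
As a hedged illustration of the pipeline summarized above (not the authors' implementation), the Python sketch below combines chi-square attribute elimination with a Naïve Bayes classifier wrapped in a One-vs-All scheme using scikit-learn; the data set and the number of retained attributes are placeholders.

```python
# Minimal sketch: chi-square attribute selection + One-vs-All Naive Bayes.
# The synthetic data stand in for the dispute-case attributes of the study.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=200, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

pipeline = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=10)),  # drop weakly associated attributes
    ("clf", OneVsRestClassifier(GaussianNB())),  # OvA decomposition of NB
])

# Average classification accuracy over cross-validation folds.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print(f"Mean accuracy: {scores.mean():.2%}")
```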

2020
Vol 89
pp. 20-29
Author(s):
Sh. K. Kadiev
R. Sh. Khabibulin
P. P. Godlevskiy
V. L. Semikov
...

Introduction. An overview of research on classification as a machine learning method is given. Articles containing mathematical models and algorithms for classification were selected. The use of classification in intelligent management decision support systems across various subject areas is also relevant. Goal and objectives. The purpose of the study is to analyze papers on classification as a machine learning method. To achieve this, the following tasks must be solved: 1) identify the most widely used classification methods in machine learning; 2) highlight the advantages and disadvantages of each of the selected methods; 3) analyze the possibility of using classification methods in intelligent management decision support systems for forecasting, preventing, and responding to emergencies. Methods. To obtain the results, general scientific and specialized methods of inquiry were used: analysis, synthesis, generalization, and classification. Results and discussion. Based on the analysis, studies with a mathematical formulation and available software implementations were identified. Classification issues arising when machine learning is applied in the development of intelligent decision support systems are considered. Conclusion. The analysis revealed that a sufficient number of algorithms are available for performing classification while organizing the acquired knowledge within a subject area. Accurate classification is one of the fundamental problems in the development of management decision support systems, including those for fire and emergency prevention and response, where timely and effective decisions by duty-shift officials in disaster management are also essential. Key words: decision support, analysis, classification, machine learning, algorithm, mathematical models.
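
As a rough, assumption-laden illustration of the kind of comparison this survey motivates, the sketch below scores several commonly used classifiers with cross-validation; the data set and model list are generic placeholders, not the algorithms or data analyzed in the paper.

```python
# Compare widely used classification methods via 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "Logistic regression": LogisticRegression(max_iter=1000),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:>20}: mean accuracy {scores.mean():.3f}")
```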


2021
Vol 503 (3)
pp. 4581-4600
Author(s):
Orlando Luongo
Marco Muccino

ABSTRACT We alleviate the circularity problem, whereby gamma-ray bursts are not perfect distance indicators, by means of a new model-independent technique based on Bézier polynomials. We use the well-established Amati and Combo correlations. We consider improved calibrated catalogues of mock data from differential Hubble rate points. To obtain our mock data, we use those machine learning scenarios that adapt well to gamma-ray bursts, discussing in detail how we handle small amounts of data with our machine learning techniques. We explore only three machine learning treatments, i.e. linear regression, neural network, and random forest, emphasizing the quantitative statistical motivations behind these choices. Our calibration strategy consists of taking Hubble's data, creating the mock compilation using machine learning, and calibrating the aforementioned correlations through Bézier polynomials, first with a standard chi-square analysis and then by means of a hierarchical Bayesian regression procedure. The corresponding catalogues, built up from the two correlations, have been used to constrain dark energy scenarios. We thus employ Markov chain Monte Carlo numerical analyses based on the most recent Pantheon supernova data, baryonic acoustic oscillations, and our gamma-ray burst data. We test the standard ΛCDM model and the Chevallier–Polarski–Linder parametrization. We discuss the recent H0 tension in view of our results. Moreover, we highlight a further severe tension over Ωm and conclude that a slightly evolving dark energy model is possible.
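
A minimal sketch of one step of the strategy described above, under simplifying assumptions: fitting a second-order Bézier polynomial to differential Hubble-rate points with a weighted chi-square (least-squares) fit. The data points and starting values below are illustrative placeholders, not the catalogues or mock compilations used by the authors.

```python
# Fit a degree-2 Bezier approximation of H(z) to illustrative H(z) data.
import numpy as np
from scipy.optimize import curve_fit

# Illustrative H(z) points: redshift, H(z) [km/s/Mpc], uncertainty.
z = np.array([0.07, 0.20, 0.40, 0.60, 0.90, 1.30, 1.75, 2.00])
Hz = np.array([69.0, 72.9, 82.0, 87.9, 104.0, 121.0, 146.0, 159.0])
sigma = np.array([19.6, 9.9, 8.4, 6.1, 13.0, 11.0, 18.0, 15.0])

z_max = z.max()

def bezier_H(z, b0, b1, b2):
    """Degree-2 Bezier curve in t = z/z_max with control points b0, b1, b2."""
    t = z / z_max
    return b0 * (1 - t) ** 2 + 2 * b1 * t * (1 - t) + b2 * t ** 2

# Weighted least-squares (chi-square) fit of the control points.
popt, pcov = curve_fit(bezier_H, z, Hz, p0=[70, 100, 160],
                       sigma=sigma, absolute_sigma=True)
print("Best-fit Bezier coefficients:", popt)
print("H(z=0) estimate (= b0):", popt[0])
```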


Materials
2021
Vol 14 (9)
pp. 2297
Author(s):
Ayaz Ahmad
Furqan Farooq
Krzysztof Adam Ostrowski
Klaudia Śliwa-Wieczorek
Slawomir Czarnecki

Structures located on the coast are subjected to the long-term influence of chloride ions, which cause the corrosion of steel reinforcement in concrete elements. This corrosion severely affects the performance of the elements and may shorten the lifespan of an entire structure. Although laboratory experiments might provide the required data, they can be problematic in terms of time and cost. Thus, the application of individual machine learning (ML) techniques has been investigated to predict surface chloride concentrations (Cc) in marine structures. For this purpose, values of Cc in tidal, splash, and submerged zones were collected through an extensive literature survey and incorporated into the article. Gene expression programming (GEP), the decision tree (DT), and an artificial neural network (ANN) were used to predict the surface chloride concentrations, and the most accurate algorithm was then selected. The GEP model was the most accurate compared to the ANN and DT models, as confirmed by K-fold cross-validation and by the correlation coefficient (R²), mean absolute error (MAE), mean square error (MSE), and root mean square error (RMSE). As shown in the article, the proposed method is an effective and accurate way to predict the surface chloride concentration without the inconvenience of laboratory tests.
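
A minimal sketch of the evaluation procedure described above, using only the decision tree (DT) model for brevity (GEP would require a dedicated genetic-programming library and the ANN a separate network definition); the synthetic data merely stand in for the literature-collected Cc records, and the hyperparameters are illustrative.

```python
# K-fold cross-validation of a decision tree regressor with MAE/MSE/RMSE/R2.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for exposure-zone attributes and measured Cc values.
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=1)

kf = KFold(n_splits=10, shuffle=True, random_state=1)
mae, mse, rmse, r2 = [], [], [], []

for train_idx, test_idx in kf.split(X):
    model = DecisionTreeRegressor(max_depth=6, random_state=1)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    mae.append(mean_absolute_error(y[test_idx], pred))
    mse.append(mean_squared_error(y[test_idx], pred))
    rmse.append(np.sqrt(mse[-1]))
    r2.append(r2_score(y[test_idx], pred))

print(f"MAE={np.mean(mae):.2f}  MSE={np.mean(mse):.2f}  "
      f"RMSE={np.mean(rmse):.2f}  R2={np.mean(r2):.3f}")
```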


2020
Vol 21 (15)
pp. 5280
Author(s):
Irini Furxhi
Finbarr Murphy

The practice of non-testing approaches in nanoparticle hazard assessment is necessary to identify and classify potential risks in a cost-effective and timely manner. Machine learning techniques have been applied in the field of nanotoxicology with encouraging results. A neurotoxicity classification model for diverse nanoparticles is presented in this study. A data set compiled from multiple literature sources, consisting of nanoparticle physicochemical properties, exposure conditions, and in vitro characteristics, is used to predict cell viability. Pre-processing techniques were applied, including normalization and two supervised instance methods: a synthetic minority over-sampling technique to address biased predictions, and the production of subsamples via bootstrapping. The classification model was developed using random forest, and goodness-of-fit together with additional robustness and predictability metrics was used to evaluate its performance. Information gain analysis identified exposure dose and duration, toxicological assay, cell type, and zeta potential as the five most important attributes for predicting neurotoxicity in vitro. This is the first tissue-specific machine learning tool for predicting neurotoxicity caused by nanoparticles in in vitro systems. The model performs better than non-tissue-specific models.
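
A hedged sketch of the pipeline outlined above, assuming the third-party imbalanced-learn package for SMOTE; the synthetic data, class weights, and feature count are placeholders for the real physicochemical, exposure, and in vitro attributes.

```python
# SMOTE over-sampling + random forest + information-gain-style ranking.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data: binary cell-viability outcome.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=42)

# Re-balance the training set only, then fit the random forest.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)
clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_bal, y_bal)
print("Balanced accuracy:",
      balanced_accuracy_score(y_test, clf.predict(X_test)))

# Rank attributes by mutual information (an information-gain criterion).
gain = mutual_info_classif(X_train, y_train, random_state=42)
print("Top attributes by mutual information:", np.argsort(gain)[::-1][:5])
```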

