Data Intelligence Model and Meta-Heuristic Algorithms-Based Pan Evaporation Modelling in Two Different Agro-Climatic Zones: A Case Study from Northern India

Atmosphere ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1654
Author(s):  
Nand Lal Kushwaha ◽  
Jitendra Rajput ◽  
Ahmed Elbeltagi ◽  
Ashraf Y. Elnaggar ◽  
Dipaka Ranjan Sena ◽  
...  

Precise quantification of evaporation has a vital role in effective crop modelling, irrigation scheduling, and agricultural water management. In recent years, data-driven models using meta-heuristic algorithms have attracted the attention of researchers worldwide. In this investigation, we examined the performance of models employing four meta-heuristic algorithms, namely, support vector machine (SVM), random tree (RT), reduced error pruning tree (REPTree), and random subspace (RSS), for simulating daily pan evaporation (EPd) at two locations in north India representing a semi-arid climate (New Delhi) and a sub-humid climate (Ludhiana). The most suitable combinations of meteorological input variables as covariates to estimate EPd were ascertained through the subset regression technique followed by sensitivity analyses. Statistical indicators such as root mean square error (RMSE), mean absolute error (MAE), Nash–Sutcliffe efficiency (NSE), Willmott index (WI), and correlation coefficient (r), followed by graphical interpretations, were utilized for model evaluation. The SVM algorithm outperformed the other applied algorithms in reconstructing the EPd time series, with acceptable statistical criteria (NSE = 0.937, 0.795; WI = 0.984, 0.943; r = 0.968, 0.902; MAE = 0.055, 0.993 mm/day; and RMSE = 0.092, 1.317 mm/day) during the testing phase at the New Delhi and Ludhiana stations, respectively. This study also demonstrated and discussed the potential of meta-heuristic algorithms for producing reasonable estimates of daily evaporation from minimal meteorological input variables, with the applicability of the best candidate model vetted in two diverse agro-climatic settings.
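The evaluation metrics named in this abstract can be computed directly from an observed and a simulated series. A minimal NumPy sketch follows; the EPd values below are made up for illustration, not the study's data:

```python
# Sketch: RMSE, MAE, Nash-Sutcliffe efficiency (NSE), Willmott index (WI)
# and correlation coefficient (r) for a pair of daily pan-evaporation series.
import numpy as np

def evaluate(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    nse = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    wi = 1 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return {"RMSE": rmse, "MAE": mae, "NSE": nse, "WI": wi, "r": r}

obs = [4.1, 5.0, 6.2, 5.5, 4.8, 6.9]   # observed EPd (mm/day), illustrative
sim = [4.3, 4.9, 6.0, 5.7, 4.6, 7.1]   # simulated EPd (mm/day), illustrative
scores = evaluate(obs, sim)
```

NSE and WI both approach 1 for a perfect simulation, which is why the testing-phase values reported above (NSE up to 0.937, WI up to 0.984) indicate a close fit.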

2020 ◽  
Vol 24 (1) ◽  
pp. 47-56
Author(s):  
Ove Oklevik ◽  
Grzegorz Kwiatkowski ◽  
Mona Kristin Nytun ◽  
Helene Maristuen

The quality of any economic impact assessment largely depends on the adequacy of the input variables and chosen assumptions. This article presents a direct economic impact assessment of a music festival hosted in Norway and sensitivity analyses of two study design assumptions: the estimated number of attendees and the chosen definition (size) of the affected area. Empirically, the article draws on a state-of-the-art framework of an economic impact analysis and uses primary data from 471 event attendees. The results show that, first, an economic impact analysis is a complex task that requires high precision in assessing different monetary flows entering and leaving the host region, and second, the study design assumptions exert a tremendous influence on the final estimation. Accordingly, the study offers a fertile agenda for local destination marketing organizations and event managers on how to conduct reliable economic impact assessments and explains which elements of such analyses are particularly important for final estimations.
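How the two study-design assumptions propagate into the final figure can be sketched with a toy direct-impact calculation. All numbers here (attendee count, spending per head, leakage share) are hypothetical and not taken from the festival study:

```python
# Sketch: direct impact = gross attendee spending minus flows that leak
# out of the host region; then vary the two contested assumptions.
def direct_impact(attendees, spend_per_head, leakage_share):
    """Gross attendee spending minus money leaving the host region."""
    gross = attendees * spend_per_head
    return gross * (1 - leakage_share)

base = direct_impact(attendees=10_000, spend_per_head=120.0, leakage_share=0.3)

# Sensitivity: shift the attendee estimate by +/-20% and widen/narrow the
# affected area (modelled crudely here as a change in the leakage share).
low  = direct_impact(8_000, 120.0, 0.4)   # fewer visitors, smaller area
high = direct_impact(12_000, 120.0, 0.2)  # more visitors, larger area
```

Even these crude shifts move the estimate by roughly a third in either direction, which is the article's point about assumptions dominating the result.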


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4523 ◽  
Author(s):  
Carlos Cabo ◽  
Celestino Ordóñez ◽  
Fernando Sánchez-Lasheras ◽  
Javier Roca-Pardiñas ◽  
and Javier de Cos-Juez

We analyze the utility of multiscale supervised classification algorithms for object detection and extraction from laser scanning or photogrammetric point clouds. Only the geometric information (the point coordinates) was considered, thus making the method independent of the systems used to collect the data. A maximum of five features (input variables) was used, four of them related to the eigenvalues obtained from a principal component analysis (PCA). PCA was carried out at six scales, defined by the diameter of a sphere around each observation. Four multiclass supervised classification models were tested (linear discriminant analysis, logistic regression, support vector machines, and random forest) in two different scenarios, urban and forest, formed by artificial and natural objects, respectively. The results obtained were accurate (overall accuracy over 80% for the urban dataset, and over 93% for the forest dataset), in the range of the best results found in the literature, regardless of the classification method. For both datasets, the random forest algorithm provided the best results when discrimination capacity, computing time, and the ability to estimate the relative importance of each variable were considered together.
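The eigenvalue features described here can be sketched for a single point and scale. The feature definitions (linearity, planarity, sphericity) are common choices in the point-cloud literature and are assumed rather than taken from the paper; the synthetic patch is illustrative:

```python
# Sketch: PCA eigenvalue features from a spherical neighbourhood of a
# point cloud, evaluated at one scale (sphere radius).
import numpy as np

def eigen_features(points, center, radius):
    pts = points[np.linalg.norm(points - center, axis=1) <= radius]
    cov = np.cov(pts.T)                            # 3x3 covariance of XYZ
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]    # l1 >= l2 >= l3
    l1, l2, l3 = ev / ev.sum()                     # normalised eigenvalues
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}

rng = np.random.default_rng(0)
# Synthetic near-planar patch (e.g. a wall or ground fragment): the
# planarity feature should dominate for such a neighbourhood.
plane = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.01, 500)]
feats = eigen_features(plane, center=np.zeros(3), radius=1.0)
```

Repeating this at six radii, as the paper describes, yields the multiscale feature vector fed to the classifiers.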


2012 ◽  
Vol 2012 ◽  
pp. 1-10
Author(s):  
Pijush Samui

The main objective of site characterization is the prediction of in situ soil properties at any half-space point at a site based on limited tests. In this study, the Support Vector Machine (SVM) has been used to develop a three-dimensional site characterization model for Bangalore, India, based on a large amount of Standard Penetration Test (SPT) data. SVM is a type of learning machine based on statistical learning theory that performs regression by introducing an ε-insensitive loss function. The database consists of 766 boreholes, with more than 2700 field SPT values spread over a 220 sq km area of Bangalore. The model is applied to corrected SPT values. Three input variables (x, y, and z, the coordinates of a point in Bangalore) were used for the SVM model, and the output of the SVM was the corrected SPT value. The results presented in this paper clearly highlight that the SVM is a robust tool for site characterization. A sensitivity analysis of the SVM parameters (σ, C, and ε) is also presented.
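The ε-insensitive loss mentioned in the abstract is what distinguishes SVM regression from ordinary least squares: residuals inside an ε-wide tube incur no penalty. A minimal sketch with illustrative values:

```python
# Sketch: the epsilon-insensitive loss minimised by SVM regression.
# Residuals smaller than eps cost nothing; larger ones cost linearly.
import numpy as np

def eps_insensitive(y_true, y_pred, eps=0.1):
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

y_true = np.array([10.0, 12.0, 15.0])   # e.g. measured SPT values
y_pred = np.array([10.05, 12.5, 14.0])  # model output (illustrative)
loss = eps_insensitive(y_true, y_pred, eps=0.1)
```

The first residual (0.05) falls inside the ε = 0.1 tube and contributes zero loss; only the ε parameter, the penalty weight C, and the kernel width σ then govern the fit, which is why the paper's sensitivity analysis targets exactly those three.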


2013 ◽  
Vol 291-294 ◽  
pp. 2164-2168 ◽  
Author(s):  
Li Tian ◽  
Qiang Qiang Wang ◽  
An Zhao Cao

Because line loss rates are volatile, research on line loss rate prediction is needed. Combining the optimization ability of heuristic algorithms with the regression ability of the support vector machine, a heuristic algorithm–support vector machine model is constructed. A case study shows that, compared with the other heuristic algorithms tested, the genetic algorithm offers good search efficiency and speed, and the resulting prediction model achieves high accuracy.
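The hybrid idea (a genetic algorithm searching SVM hyper-parameters) can be sketched with a minimal GA. To stay self-contained, the "validation error" below is a stand-in convex surface rather than a real SVM fit, and the parameter names (log C, log γ) are assumptions:

```python
# Sketch: a tiny genetic algorithm (truncation selection, mean crossover,
# Gaussian mutation) minimising a stand-in validation-error surface over
# two SVM hyper-parameters on a log scale.
import random
random.seed(1)

def val_error(logC, logg):
    # Hypothetical smooth error surface with its minimum at (2, -3);
    # in the real model this would be a cross-validated SVM error.
    return (logC - 2) ** 2 + (logg + 3) ** 2

def ga(pop_size=20, gens=40, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: val_error(*p))
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            mid = [(x + y) / 2 for x, y in zip(a, b)]  # crossover
            children.append(tuple(
                min(hi, max(lo, c + random.gauss(0, 0.3)))  # mutation
                for c in mid))
        pop = parents + children
    return min(pop, key=lambda p: val_error(*p))

best = ga()
```

Keeping the parents each generation (elitism) guarantees the best error never worsens, which is the property that makes such hybrids reliable for hyper-parameter search.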


2003 ◽  
Vol 24 (3) ◽  
pp. 214-223 ◽  
Author(s):  
Nicholas Graves ◽  
Tanya M. Nicholls ◽  
Arthur J. Morris

Abstract
Objective: To model the economic costs of hospital-acquired infections (HAIs) in New Zealand, by type of HAI.
Design: Monte Carlo simulation model.
Setting: Auckland District Health Board Hospitals (DHBH), the largest publicly funded hospital group in New Zealand, supplying secondary and tertiary services. Costs are also estimated for predicted HAIs in admissions to all hospitals in New Zealand.
Patients: All adults admitted to general medical and general surgical services.
Method: Data on the number of cases of HAI were combined with data on the estimated prolongation of hospital stay due to HAI to produce an estimate of the number of bed days attributable to HAI. A cost per bed day value was applied to provide an estimate of the economic cost. Costs were estimated for predicted infections of the urinary tract, surgical wounds, the lower and upper respiratory tracts, the bloodstream, and other sites, and for cases of multiple sites of infection. Sensitivity analyses were undertaken for input variables.
Results: The estimated costs of predicted HAIs in medical and surgical admissions to Auckland DHBH were $10.12 (US $4.56) million and $8.64 (US $3.90) million, respectively. They were $51.35 (US $23.16) million and $85.26 (US $38.47) million, respectively, for medical and surgical admissions to all hospitals in New Zealand.
Conclusions: The method used produces results that are less precise than those of a specifically designed study using primary data collection, but has been applied at a lower cost. The estimated cost of HAIs is substantial, but only a proportion of infections can be avoided. Further work is required to identify the most cost-effective strategies for the prevention of HAI.
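The cost model described (cases × extra bed days × cost per bed day, with uncertain inputs) lends itself to a short Monte Carlo sketch. Every distribution and parameter value below is illustrative, not taken from the study:

```python
# Sketch: Monte Carlo estimate of HAI costs with uncertain inputs.
# Total cost per draw = cases * extra bed days per case * cost per bed day.
import random
random.seed(0)

def simulate_cost(n_draws=10_000):
    totals = []
    for _ in range(n_draws):
        cases = random.gauss(1_000, 100)          # HAI cases (hypothetical)
        extra_days = random.uniform(5, 9)         # stay prolongation per case
        cost_per_day = random.uniform(400, 600)   # $ per bed day (hypothetical)
        totals.append(cases * extra_days * cost_per_day)
    totals.sort()
    mean = sum(totals) / n_draws
    ci95 = (totals[int(0.025 * n_draws)], totals[int(0.975 * n_draws)])
    return mean, ci95

mean_cost, ci = simulate_cost()
```

Repeating the simulation while holding one input fixed at its extremes is the sensitivity analysis the abstract refers to.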


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Danni Chen ◽  
JianDong Zhao ◽  
Peng Huang ◽  
Xiongna Deng ◽  
Tingting Lu

Purpose: The sparrow search algorithm (SSA) is a novel global optimization method, but it easily falls into local optima, which limits its search accuracy and stability. The purpose of this study is to propose an improved SSA combining Levy flight and opposition-based learning (LOSSA). The LOSSA shows better search accuracy, faster convergence speed and stronger stability. Design/methodology/approach: To further enhance the optimization performance of the algorithm, the Levy flight operation is introduced into the producers' search process of the original SSA to strengthen the algorithm's ability to jump out of local optima. The opposition-based learning strategy generates better solutions for the SSA, which helps accelerate convergence. On the one hand, the performance of the LOSSA is evaluated by a set of numerical experiments based on classical benchmark functions. On the other hand, the hyper-parameter optimization problem of the Support Vector Machine (SVM) is also used to test the ability of the LOSSA to solve practical problems. Findings: First, the effectiveness of the two improvements is verified by the Wilcoxon signed-rank test. Second, the statistical results of the numerical experiments show the significant improvement of the LOSSA over the original algorithm and other nature-inspired heuristic algorithms. Finally, the feasibility and effectiveness of the LOSSA in solving the hyper-parameter optimization problem of machine learning algorithms are demonstrated. Originality/value: An improved SSA based on Levy flight and opposition-based learning is proposed in this paper. The experimental results show that the overall performance of the LOSSA is satisfactory. Compared with the SSA and other nature-inspired heuristic algorithms, the LOSSA shows better search accuracy, faster convergence speed and stronger stability. Moreover, the LOSSA also showed strong optimization performance in the hyper-parameter optimization of the SVM model.
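The two ingredients named in the abstract can be sketched in isolation. The Levy step uses Mantegna's algorithm with the common β = 1.5 constant, and the opposition operator reflects a candidate inside its bounds; both are standard formulations, assumed here rather than taken from the paper:

```python
# Sketch: a Levy-flight step (Mantegna's algorithm, beta = 1.5) and
# opposition-based learning for a bounded candidate solution.
import math, random
random.seed(3)

BETA = 1.5
SIGMA = (math.gamma(1 + BETA) * math.sin(math.pi * BETA / 2)
         / (math.gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))
         ) ** (1 / BETA)

def levy_step():
    # Heavy-tailed step length: mostly small moves, occasional long jumps
    # that let a producer escape a local optimum.
    u = random.gauss(0, SIGMA)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / BETA)

def opposite(x, lb, ub):
    # Opposition-based learning: evaluate the mirror image of a candidate;
    # keeping the better of the pair speeds up convergence.
    return [lb + ub - xi for xi in x]

x = [0.2, -1.5, 3.0]                    # illustrative candidate
x_opp = opposite(x, lb=-5.0, ub=5.0)
step = levy_step()
```

In the full LOSSA, the Levy step perturbs the producers' positions each iteration, and the opposition pass is applied to the population to seed better starting solutions.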


1984 ◽  
Vol 11 (1) ◽  
pp. 4-6 ◽  
Author(s):  
D. K. Pahalwan ◽  
R. S. Tripathi

Abstract A field experiment was conducted during the dry seasons of 1981 and 1982 to determine the optimal irrigation schedule for summer peanuts (Arachis hypogaea L.) in relation to evaporative demand and crop water requirement at different growth stages. The peanut crop required the most frequent irrigation during the pegging to pod formation stage, followed by pod development to maturity and planting to flowering. The highest pod yield and water use efficiency were obtained when irrigations were scheduled at an irrigation water (IW) to cumulative pan evaporation (CPE) ratio of 0.5 during planting to flowering, 0.9 during pegging to pod formation, and 0.7 during pod development to maturity. The profile water contribution to total crop water use was higher under less frequent irrigation schedules, particularly when irrigations were scheduled at an IW/CPE ratio of 0.5 up to the pod formation stage.
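Scheduling by an IW/CPE ratio means irrigating a fixed depth IW each time cumulative pan evaporation reaches IW divided by the ratio. A sketch with the abstract's 0.9 ratio and made-up daily evaporation:

```python
# Sketch: irrigation days under an IW/CPE schedule. A higher ratio means
# a smaller CPE threshold, hence more frequent irrigation.
def irrigation_days(daily_evap, iw, ratio):
    threshold = iw / ratio          # CPE (mm) that triggers an irrigation
    days, cpe = [], 0.0
    for day, e in enumerate(daily_evap, start=1):
        cpe += e
        if cpe >= threshold:
            days.append(day)
            cpe = 0.0               # counter resets after each irrigation
    return days

evap = [6.0] * 20                   # 6 mm/day pan evaporation, hypothetical
days = irrigation_days(evap, iw=50.0, ratio=0.9)   # pegging-stage ratio
```

With the same evaporation series, dropping the ratio to 0.5 raises the threshold to 100 mm and roughly halves the irrigation frequency, matching the stage-wise pattern reported.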


2004 ◽  
Vol 26 (1-2) ◽  
pp. 45-55
Author(s):  
Torsten Mattfeldt ◽  
Danilo Trijic ◽  
Hans‐Werner Gottfried ◽  
Hans A. Kestler

The subclassification of incidental prostatic carcinoma into the categories T1a and T1b is of major prognostic and therapeutic relevance. In this paper an attempt was made to find out which properties mainly predispose to these two tumour categories, and whether it is possible to predict the category from a battery of clinical and histopathological variables using newer methods of multivariate data analysis. The incidental prostatic carcinomas of the decade 1990–99 diagnosed at our department were reexamined. Besides acquisition of routine clinical and pathological data, the tumours were scored by immunohistochemistry for proliferative activity and p53‐overexpression. Tumour vascularization (angiogenesis) and epithelial texture were investigated by quantitative stereology. Learning vector quantization (LVQ) and support vector machines (SVM) were used for the purpose of prediction of tumour category from a set of 10 input variables (age, Gleason score, preoperative PSA value, immunohistochemical scores for proliferation and p53‐overexpression, 3 stereological parameters of angiogenesis, 2 stereological parameters of epithelial texture). In a stepwise logistic regression analysis with the tumour categories T1a and T1b as dependent variables, only the Gleason score and the volume fraction of epithelial cells proved to be significant as independent predictor variables of the tumour category. Using LVQ and SVM with the information from all 10 input variables, more than 80% of the cases could be correctly predicted as T1a or T1b, with specificity, sensitivity, and negative and positive predictive values of 74–92%. Using only the two significant input variables, Gleason score and epithelial volume fraction, the accuracy of prediction was no worse. Thus, descriptive and quantitative texture parameters of tumour cells are of major importance for the extent of propagation in the prostate gland in incidental prostatic adenocarcinomas.
Classical statistical tools and neural approaches led to consistent conclusions.
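The finding that two predictors suffice can be illustrated with a from-scratch logistic regression on two synthetic features standing in for Gleason score and epithelial volume fraction. The cohort, the decision rule generating its labels, and all parameter values are invented for illustration:

```python
# Sketch: logistic regression trained by stochastic gradient descent on a
# synthetic two-feature cohort; category 1 stands in for T1b.
import math, random
random.seed(7)

def train_logreg(X, y, lr=0.1, epochs=500):
    w = [0.0] * (len(X[0]) + 1)                  # bias + one weight/feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            g = p - yi                           # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z > 0 else 0

# Synthetic cohort: higher "Gleason" and "epithelial fraction" -> category 1.
X = [(random.uniform(2, 10), random.uniform(0.1, 0.9)) for _ in range(200)]
y = [1 if g + 5 * v > 9 else 0 for g, v in X]
w = train_logreg(X, y)
acc = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

When the class boundary is (nearly) linear in a couple of features, as the stepwise analysis suggested here, a linear model on those two variables matches more elaborate classifiers, which is what the paper observed with LVQ and SVM.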

