Model Training
Recently Published Documents

TOTAL DOCUMENTS: 1031 (five years: 684)
H-INDEX: 24 (five years: 7)

Author(s): Kiran M. Mane, S. P. Chavan, S. A. Salokhe, P. A. Nadgouda, ...

Large amounts of natural fine aggregate (NFA) and cement are used in construction, with major environmental consequences. Industrial wastes such as ground granulated blast furnace slag (GGBFS), fly ash, metakaolin, and silica fume can partially replace cement, while sand produced by crushers can partially replace the natural fine aggregate. However, using such alternative materials often causes problems with the fresh properties of concrete. In this paper, an artificial neural network (ANN) implemented in MATLAB is used to predict the slump of concrete made with pozzolanic materials and with natural fine aggregate (NFA) partially replaced by manufactured sand (MS). Slump tests were carried out in accordance with IS 1199:1959, and the findings were used to build the ANN model. A total of 131 test records were employed, with 80% used for model training and 20% for model testing, and 25 material properties served as inputs for determining the slump of concrete in which pozzolans partially substitute for cement and manufactured sand (MS) partially substitutes for natural fine aggregate (NFA). The results show that the workability of concrete deteriorates markedly as the proportion of manufactured sand replacing natural sand grows. The ANN model is highly accurate and can forecast the slump of concrete prepared with partial replacement of cement by pozzolans and of natural fine aggregate (NFA) by manufactured sand (MS).
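To make the described setup concrete, the following is a minimal sketch of such an ANN slump predictor, written in Python with scikit-learn (the paper uses a MATLAB ANN toolbox); the CSV file, column names, and network size are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch of the ANN slump predictor described above. The CSV
# file, column names, and network size are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: 131 mixes, 25 material-property inputs, slump target.
data = pd.read_csv("slump_mixes.csv")
X = data.drop(columns=["slump_mm"])   # 25 mix/material properties
y = data["slump_mm"]                  # slump measured per IS 1199:1959

# 80% of the 131 records for training, 20% held out for testing, as above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
ann.fit(X_train, y_train)
print("R^2 on held-out mixes:", ann.score(X_test, y_test))
```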


Entropy, 2022, Vol. 24 (1), pp. 132
Author(s): Eyad Alsaghir, Xiyu Shi, Varuna De Silva, Ahmet Kondoz

Deep learning, in general, is built on input data transformation and presentation, model training with parameter tuning, and recognition of new observations using the trained model. However, this comes at a high computational cost, due to the extensive input database and the long training time required. Although the model learns its parameters from the transformed input data, no direct research has investigated the mathematical relationship between the transformed information (i.e., features, excitations) and the model's learnt parameters (i.e., weights). This research explores the mathematical relationship between the input excitations and the weights of a trained convolutional neural network. The objective is to investigate three aspects of this assumed feature-weight relationship: (1) the mathematical relationship between the training images' features and the model's learnt parameters, (2) the mathematical relationship between the image features of a separate test dataset and a trained model's learnt parameters, and (3) the mathematical relationship between the difference between training and testing image features and the model's learnt parameters, using a separate test dataset. The paper empirically demonstrates the existence of a mathematical relationship between the test image features and the model's learnt weights through ANOVA analysis.
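As an illustration of the kind of feature-weight test the abstract describes, the sketch below bins the first-layer filters of a trained CNN by weight magnitude and runs a one-way ANOVA on their mean excitations; the choice of ResNet-18, the binning scheme, and the random stand-in images are assumptions, not the authors' exact protocol.

```python
# Illustrative feature-vs-weight ANOVA: do filters with low/mid/high mean
# weight magnitude respond differently to a batch of test images?
import numpy as np
import torch
import torchvision.models as models
from scipy.stats import f_oneway

cnn = models.resnet18(weights="IMAGENET1K_V1").eval()
weights = cnn.conv1.weight.detach().numpy()     # shape (64, 3, 7, 7)
w_stat = np.abs(weights).mean(axis=(1, 2, 3))   # mean |weight| per filter

images = torch.randn(32, 3, 224, 224)           # stand-in test batch
with torch.no_grad():
    # Mean excitation of each first-layer filter over the batch.
    responses = cnn.conv1(images).mean(dim=(0, 2, 3)).numpy()

# One-way ANOVA across the three weight-magnitude groups of filters.
order = np.argsort(w_stat)
low, mid, high = np.array_split(responses[order], 3)
print(f_oneway(low, mid, high))
```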


2022, Vol. 56, pp. 155-162
Author(s): Korina-Konstantina Drakaki, Georgia-Konstantina Sakki, Ioannis Tsoukalas, Panagiotis Kossieris, Andreas Efstratiadis

Abstract. Motivated by the challenges induced by the so-called Target Model and the associated changes to the current structure of the energy market, we revisit the problem of day-ahead prediction of power production from Small Hydropower Plants (SHPPs) without storage capacity. Using as an example a typical run-of-river SHPP in Western Greece, we test alternative forecasting schemes (from regression-based to machine learning) that take advantage of different levels of information. In this respect, we investigate whether it is preferable to use the known energy production of previous days as a predictor, or to predict the day-ahead inflows and then estimate the resulting energy production via simulation. Our analyses indicate that the second approach becomes clearly more advantageous when expert knowledge about the hydrological regime and the technical characteristics of the SHPP is incorporated within the model training procedure. Beyond this, we also focus on the predictive uncertainty that characterizes such forecasts, with the overarching objective of moving beyond standard, yet risky, point forecasting methods that provide only a single expected value of power production. Finally, we discuss the use of the proposed forecasting procedure under uncertainty in the real-world electricity market.
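The two forecasting routes contrasted above can be sketched as follows; the power-curve constants (head, efficiency, turbine capacity), the data columns, and the use of gradient boosting as the regression scheme are illustrative assumptions.

```python
# Route (a): regress tomorrow's energy directly on recent observations.
# Route (b): forecast the day-ahead inflow, then convert it to energy
# through a simplified representation of the plant's characteristics.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("shpp_daily.csv")   # assumed columns: inflow_m3s, energy_MWh

G_RHO = 9.81                          # rho*g in kN/m^3 for water
HEAD, EFF = 45.0, 0.85                # assumed net head (m) and efficiency
Q_MAX = 4.0                           # assumed turbine discharge capacity (m^3/s)

def energy_from_inflow(q_m3s):
    """Daily energy (MWh) from mean inflow via a simplified power curve."""
    q_used = np.clip(q_m3s, 0.0, Q_MAX)   # flow above capacity spills
    return G_RHO * EFF * HEAD * q_used * 24 / 1000.0

X = df[["inflow_m3s"]].shift(1).dropna()  # yesterday's inflow as predictor
y_inflow = df["inflow_m3s"].iloc[1:]
y_energy = df["energy_MWh"].iloc[1:]

# Route (a): direct day-ahead energy regression.
direct = GradientBoostingRegressor().fit(X, y_energy)

# Route (b): inflow forecast, then simulation through the power curve.
inflow_model = GradientBoostingRegressor().fit(X, y_inflow)
energy_via_simulation = energy_from_inflow(inflow_model.predict(X))
```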


2022
Author(s): Nils Koerber

In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, we present the Microscopic Image Analyzer (MIA). MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and is compatible with commonly used open source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and ranked among the top three on all tested data sets. The source code is available at https://github.com/MIAnalyzer/MIA.


Water, 2022, Vol. 14 (2), pp. 191
Author(s): Shen Chiang, Chih-Hsin Chang, Wei-Bo Chen

To better understand the effects and constraints of different data lengths on data-driven model training for rainfall-runoff simulation, the support vector regression (SVR) approach was applied as the core algorithm of the data-driven model in the present study. Various feature-selection strategies and different data lengths were employed in the training phase of the model. The validated results of the SVR were compared with the rainfall-runoff simulation derived from a physically based hydrologic model, the Hydrologic Modeling System (HEC-HMS). The HEC-HMS was considered a conventional approach and was calibrated with a dataset period identical to that of the SVR. Our results showed that both the SVR and HEC-HMS models can be adopted for short and long periods of rainfall-runoff simulation. However, the SVR model estimated the rainfall-runoff relationship reasonably well even when only one year of observational data, or a single typhoon event, was used, whereas the HEC-HMS model needed more parameter optimization and inference to achieve the same level of performance. Overall, the SVR model was superior to the HEC-HMS model in rainfall-runoff simulation performance.
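A minimal sketch of such an SVR rainfall-runoff setup is given below; the lag structure (current plus two previous time steps of rainfall, plus antecedent flow) and the column names are assumptions standing in for the paper's feature-selection strategies.

```python
# SVR rainfall-runoff sketch: lagged rainfall and antecedent discharge
# predict the current discharge. Column names are assumed.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

obs = pd.read_csv("catchment_hourly.csv")  # assumed columns: rain_mm, flow_cms

feats = pd.DataFrame({
    "rain_t":  obs["rain_mm"],
    "rain_t1": obs["rain_mm"].shift(1),
    "rain_t2": obs["rain_mm"].shift(2),
    "flow_t1": obs["flow_cms"].shift(1),
}).dropna()
target = obs["flow_cms"].loc[feats.index]

# Train on a window as short as one year, or a single typhoon event, by
# slicing `feats`/`target` accordingly before fitting.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(feats, target)
```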


2022, Vol. 14 (2), pp. 328
Author(s): Pengliang Wei, Ran Huang, Tao Lin, Jingfeng Huang

A method based on a deep semantic segmentation model can achieve state-of-the-art accuracy and high computational efficiency in large-scale crop mapping. However, such models cannot be widely used in actual large-scale crop mapping applications, mainly because annotating ground truth data for deep semantic segmentation model training is time-consuming. At the operational level, it is extremely difficult to obtain the large amount of ground reference data needed for model training by photointerpretation. To solve this problem, this study introduces a workflow for extracting rice distribution information in regions short of training samples, using a deep semantic segmentation model (i.e., U-Net) trained on pseudo-labels. Based on time-series Sentinel-1 images, the Cropland Data Layer (CDL), and the U-Net model, the optimal multi-temporal datasets for rice mapping were identified using a global search method. Then, based on these optimal datasets, the proposed workflow (a combination of K-Means and random forest) was used directly to extract rice-distribution information for Jiangsu (i.e., the K–RF pseudo-labels). For comparison, the optimal well-trained U-Net model acquired from Arkansas (i.e., the transfer model) was also transferred to Jiangsu to extract local rice-distribution information (i.e., the TF pseudo-labels). Finally, the high-confidence pseudo-labels generated by the two methods were used to retrain U-Net models suitable for rice mapping in Jiangsu. For regions of Jiangsu with different rice planting patterns, the final results showed that, compared with the U-Net model trained on the TF pseudo-labels, the rice-area extraction errors could be further reduced by using the U-Net model trained on the K–RF pseudo-labels. In addition, compared with existing rule-based rice mapping methods, the U-Net model trained on the K–RF pseudo-labels robustly extracted the spatial distribution of rice. Overall, this study provides new options for applying deep semantic segmentation models to regions short of training samples.
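The K–RF pseudo-labelling step can be sketched as follows; the array shapes, the cluster-to-class mapping, and the confidence threshold are illustrative assumptions rather than the paper's exact configuration.

```python
# K-RF pseudo-labelling sketch: cluster pixel backscatter time series with
# K-Means, map clusters to rice/non-rice, and keep only the random
# forest's confident predictions as pseudo-labels for U-Net retraining.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

X = np.load("s1_pixel_series.npy")   # hypothetical (N pixels, T dates) stack

# Step 1: unsupervised clustering of Sentinel-1 backscatter trajectories.
clusters = KMeans(n_clusters=8, random_state=0).fit_predict(X)

# Step 2: assign rice/non-rice to clusters (the paper leverages CDL-derived
# knowledge; the cluster IDs below are purely illustrative).
rice_clusters = [2, 5]
y_coarse = np.isin(clusters, rice_clusters).astype(int)

# Step 3: refine with a random forest; only confident pixels become
# pseudo-labels for the subsequent U-Net retraining.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_coarse)
proba = rf.predict_proba(X)[:, 1]
confident = (proba > 0.9) | (proba < 0.1)
pseudo_labels = (proba > 0.5).astype(int)[confident]
```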


Author(s): Lucas Woltmann, Claudio Hartmann, Dirk Habich, Wolfgang Lehner

Abstract. Cardinality estimation is a fundamental task in database query processing and optimization. As shown in recent papers, machine learning (ML)-based approaches may deliver more accurate cardinality estimates than traditional approaches. However, learning a data-dependent ML model requires executing a large number of training queries during the model training phase, making it very time-consuming. Many of these training (example) queries use the same base data, have the same query structure, and differ only in their selective predicates. To speed up the model training phase, our core idea is to determine a predicate-independent pre-aggregation of the base data and to execute the example queries over this pre-aggregated data. Based on this idea, we present a specific aggregate-based training phase for ML-based cardinality estimation approaches in this paper. As we show with different workloads in our evaluation, we achieve an average speedup of 90 with our aggregate-based training phase, and thus outperform indexes.
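The pre-aggregation idea can be illustrated with a small sketch: all example queries of one template differ only in their predicate values, so a single predicate-independent GROUP BY answers each of them with a lookup instead of a base-table scan. Table and column names below are hypothetical.

```python
# Aggregate-based training sketch for ML cardinality estimation.
import pandas as pd

base = pd.read_csv("orders.csv")   # assumed base table

# One-off pre-aggregation over the predicate columns of the template
#   SELECT COUNT(*) FROM orders WHERE status = ? AND region = ?
agg = base.groupby(["status", "region"]).size()

def training_cardinality(status, region):
    """Answer one example query from the aggregate, without a table scan."""
    return int(agg.get((status, region), 0))

# Thousands of (query, cardinality) training pairs for the ML model now
# cost one lookup each rather than one query execution each.
example_label = training_cardinality("shipped", "EU")
```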


Author(s): Abdelali Elmoufidi, Ayoub Skouta, Said Jai-Andaloussi, Ouail Ouchetto

In ophthalmology, glaucoma affects an increasing number of people and is a major cause of blindness. Early detection avoids severe ocular complications such as glaucoma, cystoid macular edema, or proliferative diabetic retinopathy. Artificial intelligence has been shown to be beneficial for glaucoma assessment. In this paper, we describe an approach to automating glaucoma diagnosis using fundus images. The proposed framework proceeds as follows: the Bi-dimensional Empirical Mode Decomposition (BEMD) algorithm is applied to decompose the Regions of Interest (ROIs) into components (BIMFs + residue); the VGG19 CNN architecture is used to extract features from the decomposed BEMD components; and the features of the same ROI are then fused into a bag of features. Because these bags of features are very long, Principal Component Analysis (PCA) is used to reduce their dimensionality. The resulting bags of features are the inputs to a classifier based on the Support Vector Machine (SVM). To train the models, we used two public datasets, ACRIMA and REFUGE. For testing, we used parts of ACRIMA and REFUGE plus four other public datasets: RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF. Using the model trained on REFUGE, an overall precision of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% is obtained on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively. Likewise, using the model trained on ACRIMA, an accuracy of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% is obtained on the same datasets, respectively. The experimental results obtained on the different datasets demonstrate the efficiency and robustness of the proposed approach, and a comparison with recent work in the literature shows a significant advance over previous proposals.
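The feature-extraction and classification chain (VGG19 features per component, fusion, PCA, SVM) can be condensed into the sketch below; the BEMD step is elided behind a hypothetical load_decomposed_rois() helper, and the PCA dimensionality is an assumption.

```python
# Sketch of the VGG19 -> bag of features -> PCA -> SVM chain. The BEMD
# decomposition is assumed done by a hypothetical helper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")

def bag_of_features(components):
    """Fuse VGG19 features of one ROI's BEMD components into one vector."""
    batch = preprocess_input(np.stack(components).astype("float32"))
    return backbone.predict(batch, verbose=0).ravel()

# rois: per-image lists of 224x224x3 BEMD components (BIMFs + residue);
# labels: 0 = healthy, 1 = glaucoma. Both come from the assumed helper.
rois, labels = load_decomposed_rois()
X = np.array([bag_of_features(c) for c in rois])

clf = make_pipeline(PCA(n_components=100), SVC(kernel="rbf"))
clf.fit(X, labels)
```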

