A BAYESIAN APPROACH BASED ON ACQUISITION FUNCTION FOR OPTIMAL SELECTION OF DEEP LEARNING HYPERPARAMETERS: A CASE STUDY WITH ENERGY MANAGEMENT DATA

2020 ◽  
Vol 2 (1) ◽  
pp. 22-27
Author(s):  
Muhammad Ali ◽  
Krishneel Prakash ◽  
Hemanshu Pota

With the recent rollout of smart meters, huge amounts of data are generated on an hourly and daily basis. Researchers and industry practitioners can leverage this big data to make intelligent decisions via deep learning (DL) algorithms. However, the performance of DL algorithms is heavily dependent on the proper selection of hyperparameters, and poorly selected hyperparameters usually lead to suboptimal results. Traditional approaches set parameters manually by trial and error, which is a time-consuming and difficult process. In this paper, a Bayesian approach based on an acquisition function is presented for the automatic selection of optimal parameters from the provided data. The acquisition function searches the input space for the best parameters, evaluating the next points based on past observations. The tuning process identifies the best model parameters by iterating the objective function and minimizing the loss over optimizable variables such as the learning rate and hidden layer size. To validate the presented approach, we conducted a case study on real-life energy management datasets, constructing a deep learning model on the MATLAB platform. A performance comparison was drawn between random parameters and the optimal parameters selected by the presented approach. The comparison results illustrate that the presented approach is effective, as it brings a notable improvement in the performance of the learning algorithm.
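The core of such a tuner is the acquisition function, which trades off the surrogate model's predicted loss against its uncertainty when choosing the next point to evaluate. As an illustration only (the candidate learning rates, the surrogate mean/uncertainty values, and the xi exploration constant below are hypothetical, not taken from the paper), a minimal expected-improvement calculation for loss minimisation might look like:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """Expected improvement of a candidate with surrogate prediction
    (mu, sigma) over the best observed loss so far (minimisation)."""
    if sigma == 0.0:
        return 0.0
    z = (best - mu - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    return (best - mu - xi) * cdf + sigma * pdf

# Candidate learning rates -> (predicted mean loss, predicted uncertainty).
candidates = {1e-4: (0.40, 0.05), 1e-3: (0.32, 0.10), 1e-2: (0.35, 0.20)}
best_loss = 0.34  # best loss observed so far
# The acquisition function picks the next point to evaluate: high uncertainty
# can beat a slightly better predicted mean (exploration vs. exploitation).
next_lr = max(candidates, key=lambda lr: expected_improvement(*candidates[lr], best_loss))
```

Note how the most uncertain candidate wins here even though its predicted mean loss is not the lowest; that is the exploration behaviour the abstract refers to.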

GEOMATICA ◽  
2021 ◽  
pp. 1-23
Author(s):  
Roholah Yazdan ◽  
Masood Varshosaz ◽  
Saied Pirasteh ◽  
Fabio Remondino

Automatic detection and recognition of traffic signs from images is an important topic in many applications. At first, we segmented the images using a classification algorithm to delineate the areas where the signs are more likely to be found. In this regard, shadows, objects with similar colours, and extreme illumination changes can significantly affect the segmentation results. We propose a new shape-based algorithm to improve the accuracy of the segmentation. The algorithm incorporates the sign geometry to filter out wrongly classified pixels from the classification results. We performed several tests to compare the performance of our algorithm against those obtained by popular techniques such as Support Vector Machines (SVM), K-Means, and K-Nearest Neighbours. In these tests, to overcome the unwanted illumination effects, the images were transformed into the Hue-Saturation-Intensity (HSI), YUV, normalized red-green-blue, and Gaussian colour spaces. Among the traditional techniques used in this study, the best results were obtained with SVM applied to the images transformed into the Gaussian colour space. The comparison results also suggested that adding the geometric constraints proposed in this study improves the quality of sign image segmentation by 10%–25%. We also compared the SVM classifier enhanced by incorporating the geometry of signs with a U-shaped deep learning algorithm. The results suggested that the performance of the two techniques is very close. The deep learning results could perhaps be improved if a more comprehensive data set were provided.
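The geometric filtering step can be sketched as a shape test on candidate blobs produced by segmentation. The thresholds below (minimum area, circularity bounds, aspect-ratio bounds) are illustrative assumptions, not the paper's actual values; circularity is near 1.0 for circular signs and around 0.6–0.8 for triangular and rectangular ones, while elongated shadow streaks score far lower:

```python
import math

def passes_sign_geometry(area, perimeter, aspect_ratio):
    """Keep a segmented blob only if its shape could plausibly be a traffic
    sign: compact (circle/triangle/rectangle-like) with a near-square
    bounding box. Thresholds are illustrative."""
    if area < 100:  # too few pixels to be a sign
        return False
    circularity = 4.0 * math.pi * area / (perimeter ** 2)  # 1.0 for a circle
    return 0.4 <= circularity <= 1.2 and 0.5 <= aspect_ratio <= 2.0
```

A circular blob of radius 10 (area ≈ 314, perimeter ≈ 63, aspect ratio 1.0) passes, while a 1 × 120 pixel streak, the kind of wrong pixel run a shadow produces, is rejected by the circularity bound.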


2013 ◽  
Vol 8 (No. 4) ◽  
pp. 186-194
Author(s):  
M. Heřmanovský ◽  
P. Pech

This paper demonstrates an application of the previously published method for the selection of optimal catchment descriptors, according to which similar catchments can be identified for the purpose of estimating the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters for a set of tested catchments, based on the physical similarity approach. For the purpose of the analysis, the following data from the Model Parameter Estimation Experiment (MOPEX) project were taken: a priori model parameter sets used as reference values for comparison with the newly estimated parameters, and catchment descriptors of four categories (climatic descriptors, soil properties, land cover and catchment morphology). The inverse clustering method, with Andrews' curves for a homogeneity check, was used for the catchment grouping process. The optimal catchment descriptors were selected on the basis of two criteria, one comparing different subsets of catchment descriptors of the same size (MIN), the other evaluating the improvement after the addition of another catchment descriptor (MAX). The results suggest that the proposed method and the two criteria used may lead to the selection of a subset of conditionally optimal catchment descriptors from a broader set. As expected, the quality of the resulting subset of optimal catchment descriptors is mainly dependent on the number and type of the descriptors in the broader set. In the presented case study, six to seven catchment descriptors (two climatic, two soil and at least two land-cover descriptors) were identified as optimal for regionalisation of the SAC-SMA model parameters for a set of MOPEX catchments.
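The physical-similarity idea behind the catchment grouping can be sketched as a distance computation on scaled descriptors: catchments close in descriptor space are treated as candidates for sharing model parameters. The descriptor values below are hypothetical, purely for illustration:

```python
# Hypothetical descriptors per catchment: [mean annual precipitation (mm),
# forest fraction, mean slope (%)] -- illustrative values only.
rows = [[800, 0.30, 5], [820, 0.35, 6], [400, 0.80, 20]]

def normalise(rows):
    """Min-max scale each descriptor column so distances are comparable."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    span = [max(c) - l for c, l in zip(cols, lo)]
    return [[(v - l) / s if s else 0.0 for v, l, s in zip(row, lo, span)]
            for row in rows]

def distance(a, b):
    """Euclidean distance in descriptor space = physical (dis)similarity."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

scaled = normalise(rows)
# Most physically similar catchment to catchment 0 (a donor for parameters).
nearest = min(range(1, len(rows)), key=lambda i: distance(scaled[0], scaled[i]))
```

Without the min-max scaling, the precipitation column (hundreds of mm) would dominate the distance, which is why descriptor normalisation matters before any grouping.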


2020 ◽  
Vol 14 ◽  
Author(s):  
Yaqing Zhang ◽  
Jinling Chen ◽  
Jen Hong Tan ◽  
Yuxuan Chen ◽  
Yunyi Chen ◽  
...  

Emotion is the human brain's reaction to objective things. In real life, human emotions are complex and changeable, so research into emotion recognition is of great significance for real-life applications. Recently, many deep learning and machine learning methods have been widely applied to emotion recognition based on EEG signals. However, traditional machine learning methods have a major disadvantage: the feature extraction process is usually cumbersome and relies heavily on human experts. End-to-end deep learning methods then emerged as an effective way to address this disadvantage with the help of raw signal features and time-frequency spectra. Here, we investigated the application of several deep learning models to EEG-based emotion recognition, including deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid model of CNN and LSTM (CNN-LSTM). The experiments were carried out on the well-known DEAP dataset. Experimental results show that the CNN and CNN-LSTM models achieved high classification performance in EEG-based emotion recognition, with accuracies on raw data of 90.12% and 94.17%, respectively. The DNN model was not as accurate as the other models, but its training speed was fast. The LSTM model was not as stable as the CNN and CNN-LSTM models; moreover, with the same number of parameters, the LSTM trained much more slowly and had difficulty converging. Additional parameter comparison experiments, covering the number of epochs, the learning rate, and the dropout probability, were also conducted. The comparison results show that the DNN model converged to its optimum with fewer epochs and a higher learning rate, whereas the CNN model needed more epochs to learn. As for the dropout probability, dropping roughly 50% of the units each time was appropriate.
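A CNN-LSTM pipeline of the kind compared above can be sketched as a toy forward pass in NumPy: a 1-D convolution extracts local features from the raw EEG window, an LSTM summarises them over time, and a final layer produces class probabilities. The window length, channel count, layer sizes, and the two-class (e.g. low/high valence) head below are illustrative assumptions with random untrained weights, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def conv1d(x, w):
    """Valid 1-D convolution: x is (T, C_in), w is (K, C_in, C_out)."""
    k = w.shape[0]
    return np.stack([np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0] - k + 1)])

def lstm_last(x, Wx, Wh, b):
    """Minimal LSTM over x (T, C); returns the final hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for xt in x:
        i, f, o, g = np.split(xt @ Wx + h @ Wh + b, 4)  # the four gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

T, C, F, H = 64, 4, 8, 16        # samples, EEG channels, conv filters, LSTM units
x = rng.standard_normal((T, C))  # one hypothetical EEG window
feats = np.maximum(conv1d(x, 0.1 * rng.standard_normal((5, C, F))), 0.0)  # conv+ReLU
h = lstm_last(feats,
              0.1 * rng.standard_normal((F, 4 * H)),
              0.1 * rng.standard_normal((H, 4 * H)),
              np.zeros(4 * H))
logits = h @ (0.1 * rng.standard_normal((H, 2)))  # two classes, e.g. low/high valence
probs = np.exp(logits) / np.exp(logits).sum()     # softmax over the two classes
```

The same sketch makes the parameter-count comparison concrete: the LSTM carries 4·H gate blocks per input, which is why, at equal parameter budgets, it trains more slowly than a plain CNN.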


2012 ◽  
Vol 14 (01) ◽  
pp. 1250004 ◽  
Author(s):  
Andre Kooiman ◽  
Sukhad Subodh Keshkamat

Selection of scale in science and planning is often guided by ad-hoc decisions and by arguments of accuracy and the availability of existing data and resources. A more analytical approach to the selection of scale, and a bridge between theoretical insight and practical application, is required. This paper reviews recent developments in theoretical thinking on scale from the perspective of geo-information science and compares these with a real-life case. The concept of scale is framed as a three-dimensional boundary object that explains scale choice as the resultant of rationalities in reality, model, and data scales. It is applied to a case study of how issues of scale were handled in the Reconstruction program of the Province of North Brabant in The Netherlands. The Reconstruction is an ongoing regional spatial planning exercise that started in the year 2000 in response to major veterinary, environmental, social, and economic problems in areas with concentrations of intensive livestock keeping. It combines legislation and policies at the international, national, regional, and municipal levels. Geographic information was produced to support and present the results of the planning process and the related Strategic Environmental Assessment (SEA). The scale of the various intermediate and final geo-information products varied from 1:5000 to 1:400,000 and depended on the plan stage, plan status and target audience, plan instrument, level of participation, data characteristics, costs, and technology. By comparing theory with the case study we bring out the criteria and conditions for selecting an appropriate scale, whereby the usefulness of academic research in geographic information science for planning and decision making could be improved.


2020 ◽  
Vol 16 (11) ◽  
pp. e1007575 ◽  
Author(s):  
Alireza Yazdani ◽  
Lu Lu ◽  
Maziar Raissi ◽  
George Em Karniadakis

Mathematical models of biological reactions at the system level lead to a set of ordinary differential equations with many unknown parameters that need to be inferred using relatively few experimental measurements. Having a reliable and robust algorithm for parameter inference and prediction of the hidden dynamics has been one of the core subjects in systems biology, and is the focus of this study. We have developed a new systems-biology-informed deep learning algorithm that incorporates the system of ordinary differential equations into the neural networks. Enforcing these equations effectively adds constraints to the optimization procedure that manifest themselves as an imposed structure on the observational data. Using a few scattered and noisy measurements, we are able to infer the dynamics of unobserved species, external forcing, and the unknown model parameters. We have successfully tested the algorithm on three different benchmark problems.
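The central trick, adding the ODE system to the training loss, can be sketched for a hypothetical one-species decay model dx/dt = -k·x. The exponential "surrogate" below stands in for the neural network, finite differences stand in for automatic differentiation, and a grid search stands in for gradient descent; all values are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical one-species system: dx/dt = -k * x, x(0) = 1, true k = 0.5.
k_true = 0.5
t_data = np.linspace(0.0, 4.0, 9)
x_data = np.exp(-k_true * t_data)   # "measurements" (noise-free for clarity)
t_coll = np.linspace(0.0, 4.0, 33)  # collocation points for the ODE residual

def surrogate(t, theta):
    """Stand-in for the neural network's prediction of x(t)."""
    return np.exp(-theta * t)

def loss(params, lam=1.0):
    """Data misfit plus ODE-residual penalty, the structure used in
    systems-biology-informed (physics-informed) training."""
    theta, k = params               # surrogate shape and inferred kinetic rate
    data = np.mean((surrogate(t_data, theta) - x_data) ** 2)
    eps = 1e-4                      # finite-difference stand-in for autodiff
    dxdt = (surrogate(t_coll + eps, theta) - surrogate(t_coll - eps, theta)) / (2 * eps)
    resid = np.mean((dxdt + k * surrogate(t_coll, theta)) ** 2)
    return data + lam * resid

# Crude grid search stand-in for gradient descent: recover k near 0.5.
grid = [(th, kk) for th in np.linspace(0.1, 1.0, 10)
                 for kk in np.linspace(0.1, 1.0, 10)]
best_theta, best_k = min(grid, key=loss)
```

The residual term is what lets the unknown rate k be inferred jointly with the fit: any k inconsistent with the slope of the surrogate at the collocation points is penalised even where no data exist.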


2019 ◽  
Vol 15 (2) ◽  
pp. 800-811 ◽  
Author(s):  
Ivan la Fe-Perdomo ◽  
Gerardo Beruvides ◽  
Ramon Quiza ◽  
Rodolfo Haber ◽  
Marcelino Rivas

2018 ◽  
Author(s):  
K S Naveenkumar ◽  
Babu R Mohammed Harun ◽  
R Vinayakumar ◽  
KP Soman

Abstract: Protein classification is based on the biological sequence; we propose an approach that classifies proteomic data using a deep learning algorithm. The algorithm focuses mainly on classifying the protein-vector sequences used for the representation of proteomics. Selecting the type of protein representation is challenging, as the resulting accuracy depends on it. The protein representation used here is the n-gram, specifically the 3-gram, together with a Keras embedding layer for biological sequences such as proteins. In this paper we work on protein classification to show the strength of this representation of the biological sequences of proteins.
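The 3-gram representation, the input that an embedding layer then maps to dense vectors, can be sketched in a few lines; the sequence below is an arbitrary example, not from the paper's data:

```python
vocab = {}

def ngrams(seq, n=3):
    """Split a protein sequence into overlapping n-grams (here 3-grams),
    the 'words' an embedding layer later maps to dense vectors."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def encode(seq, n=3):
    """Map each n-gram to an integer id, growing the vocabulary on the fly.
    The ids index rows of the embedding matrix during training."""
    return [vocab.setdefault(g, len(vocab)) for g in ngrams(seq, n)]

ids = encode("MKTAYIAK")  # arbitrary example sequence -> [0, 1, 2, 3, 4, 5]
```

With a 20-letter amino-acid alphabet there are at most 8000 distinct 3-grams, so the embedding table stays small while still capturing local sequence context.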


Smart Cities ◽  
2021 ◽  
Vol 4 (3) ◽  
pp. 1220-1243
Author(s):  
Hafiz Suliman Munawar ◽  
Fahim Ullah ◽  
Siddra Qayyum ◽  
Amirhossein Heravi

Floods are one of the most fatal and devastating disasters, causing an immense loss of human lives and damage to property, infrastructure, and agricultural lands. To address this, there is a need to develop and implement real-time flood management systems that can instantly detect flooded regions and initiate relief activities as early as possible. Current imaging systems, relying on satellites, have demonstrated low accuracy and delayed response, making them unreliable and impractical for emergency responses to natural disasters such as flooding. This research employs Unmanned Aerial Vehicles (UAVs) to develop an automated imaging system that can identify inundated areas from aerial images. The Haar cascade classifier was explored in the case study to detect landmarks such as roads and buildings in the aerial images captured by UAVs and to identify flooded areas. The extracted landmarks are added to the training dataset used to train a deep learning algorithm. Experimental results show that buildings and roads can be detected in the images with 91% and 94% accuracy, respectively. An overall accuracy of 91% was recorded in classifying flooded and non-flooded regions in the input case study images. The system has shown promising results on test images belonging to both pre- and post-flood classes. Flood relief and rescue workers can quickly locate flooded regions and rescue stranded people using this system. Such real-time flood inundation systems will help transform disaster management systems in line with modern smart city initiatives.
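Haar cascades rest on integral images and rectangle features that are cheap to evaluate over many candidate windows. A minimal sketch of the machinery (not the trained cascade used in the study) is below; the toy patch simulating a bright band over a dark band is an invented example:

```python
def integral_image(img):
    """Summed-area table: any rectangle sum then costs four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle whose top-left corner is (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_edge_feature(ii, x, y, w, h):
    """Two-rectangle Haar feature: top half minus bottom half. A cascade
    stage thresholds many such features to cheaply reject non-object windows."""
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)

# Toy 4x4 patch: bright band over dark band (e.g. a roof edge over shadow).
img = [[1, 1, 1, 1],
       [1, 1, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
ii = integral_image(img)
```

Because each feature costs only a handful of lookups regardless of window size, a cascade can scan a full UAV frame fast enough for the real-time use the abstract targets.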


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
A Chavez-Badiola ◽  
A Flores-Saiffe ◽  
R Valencia-Murillo ◽  
G Mendizabal-Ruiz ◽  
A Santibañez-Morales ◽  
...  

Abstract Study question: Can ERICA's deep-learning capabilities allow it to learn specifics from individual clinics and improve its performance through a quality assurance and fine-tuning process? Summary answer: Quality assurance and fine-tuning allowed ERICA to adapt to the unique specifications of individual clinics, resulting in improved performance at each clinic. What is known already: Machine learning (ML) solutions to real-life problems have shown that generalizability (applicability of a model to different scenarios) of a single model is fundamentally a suboptimal approach, due to the risk of underspecification. Underspecification becomes relevant in environments where there is a myriad of protocols and approaches, as during IVF treatments. It is naïve to assume that the different features extracted from embryos to predict treatment success carry the same weight across the overall heterogeneity of protocols. This underspecification problem takes on special relevance when deploying an ML-based product, like ERICA, in a clinical setting. Study design, size, duration: Retrospective analysis of results from the quality assurance (QA) and fine-tuning (adaptation) process performed for a deep learning algorithm named ERICA (Embryo Ranking Intelligent Classification Assistant) at five clinics (1879 embryos) between August and September 2020. Participants/materials, setting, methods: QA and fine-tuning consist of a transfer-learning approach (starting from the ERICA Core model) and re-training using embryos of each clinic exclusively. Results are assessed by a 10-fold cross-validation approach, which splits the database into 10 parts and iteratively validates on each while training on the rest. The performance of ERICA is assessed both before and after the fine-tuning process, and results are presented as averages per clinic. Embryos considered for QA and fine-tuning had known outcomes.
Main results and the role of chance: After the fine-tuning, ERICA showed an average improvement of 13% in accuracy (from 50.2% to 63.2%); 36.6% in specificity (from 22.4% to 59%); 11% in positive predictive value (from 51% to 62%); 19.6% in negative predictive value (from 44.6% to 64.2%); and 3.4% in F1 score (from 60% to 63.4%). Sensitivity decreased from 78% to 65.4%. Our results suggest ERICA's Core is robust, lending itself to being fine-tuned: it learns from individual laboratory specifics and in this way adapts to new clinics. The results demonstrate that the Core model tends to classify embryos from new clinics as having a good prognosis, since it showed high sensitivity and low specificity, both showing an improved balance following the fine-tuning process. Additionally, the probability of finding a good-prognosis embryo in the different labels behaved as expected, decreasing from Optimal (65.8%) to Poor prognosis (37.4%). Limitations, reasons for caution: Underspecification is a challenge to Artificial Intelligence (AI) based solutions pursuing a general model. For this study, our approach of QA followed by a fine-tuning process to overcome underspecification was successful. However, it was only applied to 5 clinics, and the findings remain to be proven on a larger scale. Wider implications of the findings: Performance of QA should be considered standard before clinical implementation of any AI-based solution. Our results should be interpreted as the theoretical/expected future performance of ERICA for each clinic. Regular assessments of performance for all models generated after fine-tuning are encouraged. Trial registration number: Not applicable.
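The 10-fold cross-validation used to assess the fine-tuning can be sketched as an index split: each fold is held out once for validation while the remaining nine re-train the model. Pooling all 1879 embryos into one split here is purely illustrative, since in the study the fine-tuning and validation ran per clinic:

```python
def kfold_indices(n, k=10):
    """Split n sample indices into k roughly equal, non-overlapping folds;
    each fold is held out once while the rest are used to re-train."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(1879, 10)  # the study's 1879 embryos, pooled here
```

Averaging the per-fold metrics gives the before/after figures reported above while ensuring every embryo is validated on exactly once.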

