Tuning Deep Neural Networks for Predicting Energy Consumption in Arid Climate Based on Buildings Characteristics

2021, Vol 13 (22), pp. 12442
Author(s): Amal A. Al-Shargabi, Abdulbasit Almhafdy, Dina M. Ibrahim, Manal Alghieth, Francisco Chiclana

The dramatic growth in the number of buildings worldwide has led to increased interest in predicting energy consumption, especially for residential buildings. As heating and cooling systems strongly affect the operating cost of buildings, it is worth developing models to predict their heating and cooling loads. In contrast to the majority of existing related studies, which are based on historical energy consumption data, this study considers building characteristics, such as area and floor height, to develop prediction models of heating and cooling loads. In particular, this study proposes deep neural network models tuned over several hyper-parameters: the number of hidden layers, the number of neurons in each layer, and the learning algorithm. The tuned models are constructed using a dataset generated with the Integrated Environmental Solutions Virtual Environment (IESVE) simulation software for Buraydah, the capital of the Qassim region in Saudi Arabia. The Qassim region was selected because of its harsh arid climate of extremely cold winters and hot summers, which means that a great deal of energy is used for cooling and heating residential buildings. Through model tuning, optimal parameters of the deep learning models are determined using the following performance measures: Mean Square Error (MSE), Root Mean Square Error (RMSE), Regression (R) values, and the coefficient of determination (R2). The five-layer deep neural network model, with 20 neurons in each layer and the Levenberg–Marquardt algorithm, outperformed the models with fewer layers. It achieved an MSE of 0.0075, an RMSE of 0.087, and R and R2 both as high as 0.99 in predicting the heating load, and an MSE of 0.245, an RMSE of 0.495, and R and R2 both as high as 0.99 in predicting the cooling load. As the developed prediction models were based on building characteristics, the outcomes of the research may be relevant to architects at the pre-design stage of heating- and cooling-energy-efficient buildings.
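As an illustration of the tuning loop described above, the following minimal Python sketch (not the authors' code) varies the number of hidden layers of a 20-neuron-per-layer MLP and scores each candidate with MSE, RMSE and R2. The data are random placeholders standing in for the IESVE-generated dataset, and scikit-learn's default Adam optimizer is used because Levenberg–Marquardt training is not available in that library.

# Hypothetical tuning sketch: vary the number of hidden layers of a
# 20-neuron-per-layer MLP and keep the configuration with the lowest MSE.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 4))   # placeholder building characteristics (e.g. area, floor height)
y = X @ np.array([3.0, 1.5, 2.0, 0.5]) + rng.normal(scale=0.1, size=1000)  # placeholder heating load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

best = None
for n_layers in (2, 3, 4, 5):                          # candidate numbers of hidden layers
    model = MLPRegressor(hidden_layer_sizes=(20,) * n_layers, max_iter=2000, random_state=0)
    model.fit(scaler.transform(X_tr), y_tr)
    pred = model.predict(scaler.transform(X_te))
    mse = mean_squared_error(y_te, pred)
    if best is None or mse < best[1]:
        best = (n_layers, mse, np.sqrt(mse), r2_score(y_te, pred))

print("best layers=%d  MSE=%.4f  RMSE=%.4f  R2=%.3f" % best)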

Energies, 2021, Vol 14 (13), pp. 3876
Author(s): Sameh Monna, Adel Juaidi, Ramez Abdallah, Aiman Albatayneh, Patrick Dutournie, ...

Since buildings are one of the major contributors to global warming, efforts should be intensified to make them more energy-efficient, particularly existing buildings. This research analyzes the energy savings from a suggested retrofitting program using energy simulation of typical existing residential buildings. For the assessment of the energy retrofitting program by computer simulation, the most commonly used residential building types were selected. The energy consumption of the selected residential buildings was assessed, and a baseline for evaluating energy retrofitting was established. Three levels of retrofitting programs were implemented, ordered by cost, with the first level being the least costly and the third the most expensive. Simulation models were created for two different building types in three different climatic zones in Palestine. The findings suggest that water heating, space heating, space cooling, and electric lighting are the highest energy consumers in ordinary houses. Level-one measures resulted in a 19–24% decrease in energy consumption due to reduced heating and cooling loads. Combining levels one and two reduced energy consumption for heating, cooling, and lighting by 50–57%. Applying all three levels reduced total energy use for heating, cooling, lighting, water heating, and air conditioning by 71–80%.
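The savings arithmetic behind these percentages is straightforward; the toy calculation below expresses each retrofit level's consumption as a reduction from a baseline. The kWh figures are placeholders, not the study's simulation outputs.

# Illustrative only: percentage savings of each retrofit level relative to a
# baseline annual consumption (kWh); the numbers are made up.
baseline_kwh = 25000.0
retrofit_kwh = {"level 1": 19800.0, "levels 1+2": 11500.0, "levels 1+2+3": 6200.0}

for level, kwh in retrofit_kwh.items():
    saving = 100.0 * (baseline_kwh - kwh) / baseline_kwh
    print(f"{level}: {saving:.0f}% reduction")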


Author(s): Pablo Martínez Fernández, Pablo Salvador Zuriaga, Ignacio Villalba Sanchís, Ricardo Insa Franco

This paper presents the application of machine learning systems based on neural networks to model the energy consumption of electric metro trains, as a first step in a research project that aims to optimise the energy consumed for traction in the Metro Network of Valencia (Spain). An experimental dataset was gathered and used for training. Four input variables (train speed and acceleration, track slope and curvature) and one output variable (traction power) were considered. The fully trained neural network shows good agreement with the target data, with a relative mean square error of around 21%. Additional tests with independent datasets also give good results (relative mean square error = 16%). The neural network has been applied to five simple case studies to assess its performance. It correctly models basic consumption trends (e.g. the influence of the slope) and properly reproduces acceleration, holding and braking, although it tends to slightly underestimate the energy regenerated during braking. Overall, the neural network provides a consistent estimation of traction power and the global energy consumption of metro trains, and thus may be used as a modelling tool during further stages of the research.
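The paper does not spell out how its relative mean square error is defined, so the short sketch below assumes one common normalisation (MSE divided by the mean squared target) purely to illustrate the evaluation step; the traction-power arrays are invented.

import numpy as np

def relative_mse(y_true, y_pred):
    # Assumed definition: MSE normalised by the mean squared target value.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2) / np.mean(y_true ** 2)

# Placeholder arrays standing in for measured and predicted traction power (kW);
# negative values represent energy regenerated during braking.
measured = np.array([850.0, 920.0, 400.0, -150.0, 0.0])
predicted = np.array([830.0, 900.0, 420.0, -120.0, 10.0])
print(f"relative MSE: {relative_mse(measured, predicted):.1%}")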


2020, Vol 10 (11), pp. 3829
Author(s): Arash Moradzadeh, Amin Mansour-Saatloo, Behnam Mohammadi-Ivatloo, Amjad Anvari-Moghaddam

Nowadays, since the energy management of buildings contributes to their operation cost, many efforts are made to optimize the energy consumption of buildings. Moreover, most of the energy consumed in buildings is devoted to indoor heating and cooling comfort. In this regard, this paper proposes a heating and cooling load forecasting methodology through which the energy consumption of buildings can be optimized. Multilayer perceptron (MLP) and support vector regression (SVR) are employed for heating and cooling load forecasting of residential buildings. MLP and SVR are applications of artificial neural networks and machine learning, respectively. These methods are commonly used for modeling and regression and produce a nonlinear mapping between input and output variables. The proposed methods are trained using data on the characteristics of each sample in the dataset. To apply the proposed methods, a simulated dataset is used, in which the technical parameters of the building serve as input variables and the heating and cooling loads are the output variables of each network. Finally, the simulation and numerical results illustrate the effectiveness of the proposed methodologies.
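A minimal sketch of the MLP-versus-SVR comparison follows. It assumes a generic tabular dataset of building parameters (random placeholders, not the simulated dataset used in the paper) and reports RMSE and R2 for both regressors.

# Hypothetical comparison of MLP and SVR for heating-load regression.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(768, 8))                                        # eight building parameters (placeholder)
y = X @ rng.uniform(1, 5, size=8) + rng.normal(scale=0.2, size=768)   # placeholder heating load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1)),
    "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: RMSE={np.sqrt(mean_squared_error(y_te, pred)):.3f}  R2={r2_score(y_te, pred):.3f}")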


2020
Author(s): Tiago Luciano Passafaro, Fernando B. Lopes, João R. R. Dórea, Mark Craven, Vivian Breen, ...

Abstract Background: Deep neural networks (DNN) are a particular case of artificial neural networks (ANN) composed of multiple hidden layers, and have recently gained attention in genome-enabled prediction of complex traits. Yet, few studies in genome-enabled prediction have assessed the performance of DNN compared to traditional regression models. Strikingly, no clear superiority of DNN has been reported so far, and results seem highly dependent on the species and traits of application. Nevertheless, the relatively small datasets used in previous studies, most with fewer than 5,000 observations, may have precluded the full potential of DNN. Therefore, the objective of this study was to investigate the impact of the dataset sample size on the performance of DNN compared to Bayesian regression models for genome-enabled prediction of body weight in broilers by sub-sampling 63,526 observations of the training set. Results: Predictive performance of DNN improved as sample size increased, reaching a plateau at a prediction correlation of about 0.32 when 60% of the entire training set was used (i.e., 39,510 observations). Interestingly, DNN showed superior prediction correlation using up to 3% of the training set, but poorer prediction correlation after that compared to Bayesian Ridge Regression (BRR) and Bayes Cπ. Regardless of the amount of data used to train the predictive machines, DNN displayed the lowest mean square error of prediction compared to all other approaches. The predictive bias was lower for DNN compared to the Bayesian models regardless of the amount of data used, with estimates close to one for larger sample sizes. Conclusions: DNN had worse prediction correlation compared to BRR and Bayes Cπ, but improved mean square error of prediction and bias relative to both Bayesian models for genome-enabled prediction of body weight in broilers. Such findings highlight advantages and disadvantages of the predictive approaches depending on the criterion used for comparison. Nonetheless, further analysis is necessary to detect scenarios where DNN can clearly outperform Bayesian benchmark models.
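The sub-sampling design can be illustrated with a short learning-curve sketch: a model is refit on increasing fractions of the training set and prediction correlation is tracked on a held-out set. Ridge regression is used below as a generic stand-in for both the DNN and the Bayesian models, and the SNP data are simulated placeholders, far smaller than the broiler dataset.

# Learning-curve sketch for genome-enabled prediction (placeholder data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge   # stand-in for DNN / Bayesian regression

rng = np.random.default_rng(0)
n_train, n_val, n_snp = 5000, 1000, 200                               # toy sizes
X = rng.integers(0, 3, size=(n_train + n_val, n_snp)).astype(float)   # SNP genotypes coded 0/1/2
beta = rng.normal(size=n_snp) * (rng.random(n_snp) < 0.1)             # sparse marker effects
y = X @ beta + rng.normal(scale=2.0, size=n_train + n_val)            # simulated body weight

X_tr, y_tr, X_va, y_va = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

for frac in (0.03, 0.10, 0.30, 0.60, 1.00):
    n = int(frac * n_train)
    model = Ridge(alpha=10.0).fit(X_tr[:n], y_tr[:n])
    r = pearsonr(y_va, model.predict(X_va))[0]
    print(f"{frac:>4.0%} of training set (n={n}): prediction correlation = {r:.2f}")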


Author(s): Mohammad Kaveh, Reza Amiri Chayjan, Behrooz Khezri

Abstract This paper presents the application of feed-forward and cascade-forward neural networks to model the non-linear behavior of pistachio nut, squash and cantaloupe seeds during the drying process. The performance of the feed-forward and cascade-forward ANNs was compared with that of nonlinear and linear regression models using statistical indices, namely the mean square error (MSE), mean absolute error (MAE), standard deviation of mean absolute error (SDMAE) and the coefficient of determination (R2). The best feed-forward back-propagation topology for predicting effective moisture diffusivity and specific energy consumption was 3-3-4-2, trained with the Levenberg-Marquardt (LM) algorithm. This structure predicts effective moisture diffusivity and specific energy consumption with R2 = 0.9677 and 0.9716, respectively, and an MSE of 0.00014. The highest R2 values for predicting the drying rate and moisture ratio were 0.9872 and 0.9944, respectively.
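The cascade-forward idea, in which each layer receives the raw inputs together with all earlier activations rather than only the previous layer's output, can be sketched in a few lines of NumPy. The weights and the tanh activation below are placeholders; only the 3-3-4-2 layer sizes are borrowed from the abstract for illustration.

# Conceptual cascade-forward pass with random placeholder weights.
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 3, 4, 2]   # 3 inputs, hidden layers of 3 and 4 neurons, 2 outputs

def cascade_forward(x, sizes, rng):
    # Layer i sees the inputs plus the activations of all earlier layers.
    collected = x
    a = x
    for n_units in sizes[1:]:
        W = rng.normal(scale=0.5, size=(n_units, collected.shape[0]))
        b = np.zeros(n_units)
        a = np.tanh(W @ collected + b)
        collected = np.concatenate([collected, a])
    return a

x = np.array([0.2, -0.5, 0.8])          # e.g. scaled drying-air conditions (placeholder)
print(cascade_forward(x, sizes, rng))   # two outputs: diffusivity and energy consumption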


BMC Genomics, 2020, Vol 21 (1)
Author(s): Tiago L. Passafaro, Fernando B. Lopes, João R. R. Dórea, Mark Craven, Vivian Breen, ...

Abstract Background Deep neural networks (DNN) are a particular case of artificial neural networks (ANN) composed of multiple hidden layers, and have recently gained attention in genome-enabled prediction of complex traits. Yet, few studies in genome-enabled prediction have assessed the performance of DNN compared to traditional regression models. Strikingly, no clear superiority of DNN has been reported so far, and results seem highly dependent on the species and traits of application. Nevertheless, the relatively small datasets used in previous studies, most with fewer than 5000 observations, may have precluded the full potential of DNN. Therefore, the objective of this study was to investigate the impact of the dataset sample size on the performance of DNN compared to Bayesian regression models for genome-enabled prediction of body weight in broilers by sub-sampling 63,526 observations of the training set. Results Predictive performance of DNN improved as sample size increased, reaching a plateau at a prediction correlation of about 0.32 when 60% of the entire training set was used (i.e., 39,510 observations). Interestingly, DNN showed superior prediction correlation using up to 3% of the training set, but poorer prediction correlation after that compared to Bayesian Ridge Regression (BRR) and Bayes Cπ. Regardless of the amount of data used to train the predictive machines, DNN displayed the lowest mean square error of prediction compared to all other approaches. The predictive bias was lower for DNN compared to the Bayesian models across all dataset sizes, with estimates close to one for larger sample sizes. Conclusions DNN had worse prediction correlation compared to BRR and Bayes Cπ, but improved mean square error of prediction and bias relative to both Bayesian models for genome-enabled prediction of body weight in broilers. Such findings highlight advantages and disadvantages of the predictive approaches depending on the criterion used for comparison. Furthermore, the inclusion of more data per se is not a guarantee that the DNN will outperform the Bayesian regression methods commonly used for genome-enabled prediction. Nonetheless, further analysis is necessary to detect scenarios where DNN can clearly outperform Bayesian benchmark models.


Energies, 2021, Vol 14 (5), pp. 1331
Author(s): Hossein Moayedi, Amir Mosavi

A reliable prediction of sustainable energy consumption is key for designing environmentally friendly buildings. In this study, three novel hybrid intelligent methods, namely the grasshopper optimization algorithm (GOA), wind-driven optimization (WDO), and biogeography-based optimization (BBO), are employed to optimize the multitarget prediction of heating loads (HLs) and cooling loads (CLs) in heating, ventilation and air conditioning (HVAC) systems. To optimize the applied algorithms, a series of swarm-based iterations is performed, and the best structure is proposed for each model. The GOA, WDO, and BBO algorithms are combined with a class of feedforward artificial neural networks (ANNs) called the multi-layer perceptron (MLP) to predict the HL and CL. According to the sensitivity analysis, the WDO with a swarm size of 500 yields the best-fitted ANN. The proposed WDO-ANN provided an accurate prediction of the heating load (training: R2 = 0.977, RMSE = 0.183; testing: R2 = 0.973, RMSE = 0.190) and the best-fitted prediction of the cooling load (training: R2 = 0.99, RMSE = 0.147; testing: R2 = 0.99, RMSE = 0.148).
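The swarm-based tuning loop can be caricatured as a population of candidate network configurations scored by validation RMSE, as in the hypothetical sketch below. The real GOA, WDO and BBO algorithms also update the population between iterations, and the data here are random placeholders.

# Hypothetical population search over MLP hyper-parameters (stand-in for GOA/WDO/BBO).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
X = rng.uniform(size=(768, 8))
y = X @ rng.uniform(1, 4, size=8) + rng.normal(scale=0.3, size=768)   # placeholder heating load
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=2)

population = [{"hidden": int(rng.integers(4, 64)), "alpha": 10 ** rng.uniform(-5, -1)}
              for _ in range(10)]                                     # candidate configurations
best = None
for cand in population:                                               # one evaluation pass of the "swarm"
    model = MLPRegressor(hidden_layer_sizes=(cand["hidden"],), alpha=cand["alpha"],
                         max_iter=2000, random_state=2).fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_va, model.predict(X_va)))
    if best is None or rmse < best[0]:
        best = (rmse, cand)

print(f"best RMSE = {best[0]:.3f} with configuration {best[1]}")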


Sensors, 2021, Vol 21 (1), pp. 229
Author(s): Xianzhong Tian, Juan Zhu, Ting Xu, Yanjun Li

The latest results in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and huge energy consumption. The traditional approach is to run DNNs in the central cloud, but this requires significant amounts of data to be transferred over the wireless network and also results in long latency. To solve this problem, offloading part of the DNN computation to edge clouds has been proposed, realizing collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) that adapts to the user's mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of the proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces the total latency, improves the performance of the DNN, and adjusts well to different network conditions.
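For the chain-topology case, the core trade-off that MDPO optimizes can be illustrated with a toy partition search: run the first layers on the device, upload the intermediate feature map, and finish on the edge. The latency and data-size numbers below are invented, and the sketch ignores the mobility handling and graph topologies covered by the full algorithm.

# Toy chain-topology partition search: minimise device compute + upload + edge compute.
local_ms    = [40.0, 95.0, 140.0, 80.0, 25.0]   # per-layer latency on the mobile device (ms)
edge_ms     = [4.0, 9.0, 14.0, 8.0, 2.5]        # per-layer latency on the edge cloud (ms)
out_mbit    = [6.0, 3.0, 1.0, 0.4, 0.1]         # size of each layer's output feature map (Mbit)
input_mbit  = 12.0                              # raw input size if everything is offloaded
uplink_mbps = 50.0                              # assumed wireless uplink rate

def total_latency(cut):
    # Layers [0, cut) run on the device; the remaining layers run on the edge cloud.
    device = sum(local_ms[:cut])
    edge = sum(edge_ms[cut:])
    if cut == len(local_ms):                    # fully local execution: nothing to upload
        upload = 0.0
    else:
        data = out_mbit[cut - 1] if cut > 0 else input_mbit
        upload = data / uplink_mbps * 1000.0    # ms
    return device + upload + edge

best_cut = min(range(len(local_ms) + 1), key=total_latency)
print(f"best partition after layer {best_cut}: {total_latency(best_cut):.1f} ms")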


2021, Vol 11 (1)
Author(s): Dipendra Jha, Vishu Gupta, Logan Ward, Zijiang Yang, Christopher Wolverton, ...

Abstract The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, due to their impressive ability to efficiently extract data-driven linkages from various input materials representations to their output properties. While the application of traditional ML techniques has become quite ubiquitous, there have been limited applications of more advanced deep learning (DL) techniques, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is attractive to build deeper neural networks in a bid to boost model performance, but in practice naively increasing depth leads to performance degradation due to the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data are available. We present a general deep learning framework based on Individual Residual learning (IRNet), composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property prediction models. We find that the proposed IRNet models not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also lead to significantly (up to 47%) better model accuracy as compared to plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
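The individual-residual idea can be sketched as dense blocks with identity shortcuts around them, which is what allows gradients to propagate through very deep stacks. The PyTorch module below is a simplified illustration with assumed layer widths and input size, not the released IRNet architecture.

# Simplified residual stack for vector-based materials representations.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU())

    def forward(self, x):
        return x + self.body(x)                  # identity shortcut around each block

class DeepRegressor(nn.Module):
    def __init__(self, in_dim, width=128, depth=17):
        super().__init__()
        self.stem = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResidualDenseBlock(width) for _ in range(depth)])
        self.head = nn.Linear(width, 1)          # scalar property prediction

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

model = DeepRegressor(in_dim=145)                # assumed feature-vector length
print(model(torch.randn(4, 145)).shape)          # torch.Size([4, 1])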


2020, Vol 39 (8), pp. 1296-1307
Author(s): Fanchao MENG, Guoyu REN, Jun GUO, Lei ZHANG, Ruixue ZHANG, ...
