Sensors, 2021, Vol. 21 (1), pp. 229
Author(s): Xianzhong Tian, Juan Zhu, Ting Xu, Yanjun Li

Recent advances in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and high energy consumption. The traditional approach is to execute DNNs in the central cloud, but this requires transferring large amounts of data over the wireless network and likewise results in long latency. To address this problem, offloading part of the DNN computation to edge clouds has been proposed, enabling collaborative execution between mobile devices and edge clouds. However, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) that adapts to user mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of the proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces total latency, improves DNN performance, and adapts well to different network conditions.
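To illustrate the core idea of partitioned DNN offloading for a chain-topology network, here is a minimal sketch: it picks the layer at which to split the DNN between device and edge so that device compute, network transfer, and edge compute together give the lowest total latency. All costs and sizes below are made-up illustrative values, and the function name `best_partition` is hypothetical; MDPO itself additionally models user mobility and graph topologies, which this sketch omits.

```python
def best_partition(device_ms, edge_ms, out_kb, bandwidth_kbps, input_kb):
    """Pick a split index k for a chain-topology DNN: layers [0, k) run on
    the mobile device, layers [k, n) run on the edge cloud, and the tensor
    produced by layer k-1 (or the raw input when k == 0) is transferred.
    Returns (best split index, total latency in ms).

    device_ms / edge_ms: per-layer compute latency on device / edge (ms).
    out_kb: per-layer output tensor size (kB); input_kb: input size (kB).
    """
    n = len(device_ms)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):  # k == n means local-only, k == 0 means edge-only
        local = sum(device_ms[:k])                      # device-side compute
        sent_kb = input_kb if k == 0 else out_kb[k - 1]
        # no transfer when the whole job stays on the device
        transfer = 0.0 if k == n else sent_kb / bandwidth_kbps * 1000.0
        remote = sum(edge_ms[k:])                       # edge-side compute
        latency = local + transfer + remote
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency

# Illustrative numbers: the edge is ~10x faster per layer, and early layers
# produce large intermediate tensors, so splitting after layer 1 wins here.
device_ms = [30.0, 40.0, 50.0]
edge_ms = [3.0, 4.0, 5.0]
out_kb = [200.0, 50.0, 10.0]
k, latency = best_partition(device_ms, edge_ms, out_kb,
                            bandwidth_kbps=10000.0, input_kb=500.0)
```

Note that the two baselines from the evaluation fall out as special cases: `k == 0` is edge-only execution and `k == len(device_ms)` is local-only execution, so the search can never do worse than either.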
