Optimal Workload Allocation for Edge Computing Network Using Application Prediction

2021, Vol 2021, pp. 1-13
Author(s): Zhenquan Qin, Zanping Cheng, Chuan Lin, Zhaoyi Lu, Lei Wang

By deploying edge servers at the network edge, a mobile edge computing network strengthens real-time processing near the end devices and relieves the heavy load on the core network. Given the limited computing and storage resources on the edge server side, the workload allocation among edge servers for each Internet of Things (IoT) application affects the response time of the application's requests. Hence, when the access devices of the edge servers are densely deployed, workload allocation becomes a key factor affecting the quality of user experience (QoE). To address this problem, this paper proposes an edge workload allocation scheme that uses an application prediction (AP) algorithm to minimize response delay; the underlying allocation problem is proved to be NP-hard. First, in the application prediction model, a long short-term memory (LSTM) method is used to predict the tasks of future access devices. Second, based on the prediction results, edge workload allocation is decomposed into two subproblems, task assignment and resource allocation, which can be solved in linear time using historical execution data. Simulation results show that the proposed AP algorithm effectively reduces device response delay and the average completion time of the task sequence, approaching the theoretically optimal allocation.
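The abstract does not give the prediction model's details; as a hedged illustration, a minimal LSTM predictor of per-application task arrivals from a sliding window of history might look like the following sketch (all module names, dimensions, and the window length are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class TaskPredictor(nn.Module):
    """Illustrative LSTM mapping a window of past per-application task
    counts to a prediction for the next time slot (a stand-in for the
    AP model described in the abstract, not its actual code)."""
    def __init__(self, num_apps, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_apps, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_apps)

    def forward(self, x):
        # x: (batch, window_length, num_apps) historical task counts
        out, _ = self.lstm(x)
        # predict the next slot's workload from the last hidden state
        return self.head(out[:, -1, :])

# Hypothetical usage: predict next-slot task counts for 8 IoT
# applications from the previous 24 time slots.
model = TaskPredictor(num_apps=8)
window = torch.rand(1, 24, 8)
next_slot = model(window)  # shape: (1, 8)
```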

Author(s): Bingqian Du, Chuan Wu, Zhiyi Huang

Cloud computing has been widely adopted to support various computation services. A fundamental problem faced by cloud providers is how to efficiently allocate resources upon user requests and how to price resource usage, in order to maximize resource efficiency and hence provider profit. Existing studies establish detailed performance models of cloud resource usage and propose offline or online algorithms to decide allocation and pricing. In contrast, we adopt a black-box approach and leverage model-free Deep Reinforcement Learning (DRL) to capture the dynamics of cloud users and better characterize the inherent connections between an optimal allocation/pricing policy and the states of the dynamic cloud system. The goal is to learn, through trial and error, a policy that maximizes the net profit of the cloud provider and outperforms decisions made on explicit performance models. We combine long short-term memory (LSTM) units with fully connected neural networks in our DRL model to handle online user arrivals, and we adjust the outputs and update methods of basic DRL algorithms to address both resource allocation and pricing. Evaluation based on real-world datasets shows that our DRL approach significantly outperforms basic DRL algorithms and state-of-the-art white-box online cloud resource allocation/pricing algorithms, in terms of both profit and the number of accepted users.
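The abstract describes the network only at a high level; a minimal sketch of a policy network combining an LSTM over arriving requests with fully connected heads for allocation and pricing could look like this (layer sizes, head structure, and all names are assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

class AllocPricePolicy(nn.Module):
    """Sketch in the spirit of the abstract: an LSTM encodes the online
    sequence of user requests, and fully connected heads output an
    allocation decision and a price."""
    def __init__(self, request_dim, num_resources, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(request_dim, hidden, batch_first=True)
        self.alloc_head = nn.Linear(hidden, num_resources)  # logits per resource
        self.price_head = nn.Linear(hidden, 1)              # scalar price

    def forward(self, requests):
        # requests: (batch, seq_len, request_dim) of arrived user requests
        _, (h, _) = self.encoder(requests)
        state = h[-1]                                # last-layer hidden state
        alloc_logits = self.alloc_head(state)
        price = torch.relu(self.price_head(state))   # keep the price non-negative
        return alloc_logits, price

# Hypothetical usage on a batch with ten requests observed so far.
policy = AllocPricePolicy(request_dim=6, num_resources=4)
logits, price = policy(torch.rand(1, 10, 6))
```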


2020, Vol 69 (3), pp. 3280-3295
Author(s): Qixun Zhang, Jingran Chen, Lei Ji, Zhiyong Feng, Zhu Han, ...

Author(s): Molong Duan, Chinedum Okwudire

This paper proposes a method for near-energy-optimal allocation of control effort in dual-input over-actuated systems using a linear time-invariant (LTI) controller. The method assumes a quadratic energy cost functional, and the non-causal energy-optimal control ratio within the redundant actuation space is defined. Near-optimal control allocation is achieved by using an LTI controller to align the control inputs with a causal approximation of the energy-optimal control ratio. Using an LTI controller for allocation incurs a low computational burden compared with techniques in the literature that require optimization at each time step. Moreover, the proposed method achieves broadband, near-optimal control allocation, as opposed to traditional allocation methods that rely on a static system model. The proposed method is validated through simulations and experiments on an over-actuated hybrid feed drive system, demonstrating significant improvements in energy efficiency without sacrificing positioning performance.
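The cost functional itself is not reproduced in the abstract, but a static analogue of the quadratic allocation problem makes the idea of an energy-optimal control ratio concrete (the paper's frequency-domain, generally non-causal ratio is richer than this toy version; the symbols below are illustrative):

```latex
% Illustrative static analogue (not the paper's exact formulation):
% minimize quadratic effort cost while producing a demanded virtual
% control v with two redundant inputs u_1, u_2.
\[
  \min_{u_1,\,u_2} \; J = r_1 u_1^2 + r_2 u_2^2
  \quad \text{s.t.} \quad b_1 u_1 + b_2 u_2 = v
\]
% Lagrangian stationarity (2 r_i u_i = \lambda b_i) gives
\[
  u_i = \frac{b_i / r_i}{\,b_1^2/r_1 + b_2^2/r_2\,}\, v,
  \qquad
  \frac{u_2}{u_1} = \frac{b_2\, r_1}{b_1\, r_2},
\]
% i.e., effort concentrates on the actuator with more authority (b_i)
% and lower energy weight (r_i); the LTI allocator approximates the
% dynamic counterpart of such a ratio causally.
```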


Author(s): Chang Li, Roger Fales

This work focuses on an accurate extended Kalman filter (EKF) estimator applied to a forced-feedback metering poppet valve system (FFMPVS). The EKF estimates the positions and velocities of the main poppet valve, the pilot poppet valve, and the piston from the control volume pressure, the load pressure, and the pressure between the pilot poppet and the actuator housing, all of which are disturbed by noise. The estimator exploits recursive optimal state estimation to track the states of this nonlinear, time-variant dynamical system in real time, and it is robust to parameter variations while filtering measurement noise. It is shown that the EKF tracks the states accurately and promptly in both steady-state and transient conditions, while also filtering the noise in the measured pressures.
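The abstract does not spell out the filter equations; for orientation, one predict/update cycle of a generic extended Kalman filter looks like the sketch below, where the dynamics f, h and their Jacobians are placeholders for the (unstated) FFMPVS valve model:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of a generic extended Kalman filter.
    x, P  : state estimate and covariance
    u, z  : control input and (noisy) pressure measurements
    f, h  : nonlinear state-transition and measurement functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R  : process and measurement noise covariances
    (The valve model is not given in the abstract; f and h would
    encode the poppet/piston dynamics.)"""
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                     # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```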


2017, Vol 43 (2), pp. 311-347
Author(s): Miguel Ballesteros, Chris Dyer, Yoav Goldberg, Noah A. Smith

We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks—the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance.
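As a hedged sketch of the stack LSTM idea only (this simplified version discards popped states, whereas the structure described in the abstract keeps a persistent history and moves a top pointer, so the names and details here are assumptions):

```python
import torch
import torch.nn as nn

class StackLSTM:
    """Simplified stack LSTM: an LSTM whose top state can be reverted
    on pop, so the embedding always summarizes the current stack
    contents. Push and pop are constant time, as in the abstract."""
    def __init__(self, input_size, hidden_size):
        self.cell = nn.LSTMCell(input_size, hidden_size)
        h0 = torch.zeros(1, hidden_size)
        c0 = torch.zeros(1, hidden_size)
        self.states = [(h0, c0)]   # history of (h, c); top is states[-1]

    def push(self, x):
        # x: (1, input_size) embedding of the pushed element
        h, c = self.cell(x, self.states[-1])
        self.states.append((h, c))

    def pop(self):
        # revert to the previous summary in O(1)
        self.states.pop()

    def embedding(self):
        # continuous-space summary of the current stack contents
        return self.states[-1][0]

# Hypothetical usage with 50-dimensional word embeddings.
stack = StackLSTM(input_size=50, hidden_size=100)
stack.push(torch.rand(1, 50))
stack.push(torch.rand(1, 50))
stack.pop()
summary = stack.embedding()  # shape: (1, 100)
```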


1978, Vol 46 (2), pp. 571-576
Author(s): J. K. Adamowicz

Visual short-term memory of young and older adults was studied in relation to imaging ability. Both recall and recognition memory tasks were used and additional variables included stimulus complexity and response delay (recognition tasks) and stimulus complexity and visual masking (recall tasks). Young and older participants were matched on visual discrimination, verbal intelligence, and imaging ability. Stimuli consisted of abstract visual patterns. Age-related decrements in recognition and recall were observed but performance was related to imaging ability only with recall tasks and only for older adults. The results were discussed with reference to mediational strategies and locus of occurrence of age-related decrements in short-term memory.


2021, Vol 2021, pp. 1-13
Author(s): Yu Weng, Haozhen Chu, Zhaoyi Shi

Although intelligent vehicles already provide a variety of services, executing computation-intensive applications remains a great challenge. Edge computing can provide plenty of computing resources for intelligent vehicles by offloading complex services from the base station (BS) to edge computing nodes. Before selecting a computing node for a service, it is necessary to account for the resource requirements of the vehicles, user mobility, and the state of the mobile core network, since all of these affect the users' quality of experience (QoE). To maximize QoE, we use multiagent reinforcement learning to build an intelligent offloading system, dividing the goal into two suboptimization problems: global node scheduling and independent exploration by agents. We apply the improved Kuhn–Munkres (KM) algorithm to node scheduling to make full use of existing edge computing nodes, and we guide intelligent vehicles toward areas with potentially idle computing nodes to encourage autonomous exploration. Finally, performance evaluations on a simulated dataset illustrate the effectiveness of the constructed system.
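The improved KM variant used by the authors is not specified; the plain assignment step it builds on can be sketched with SciPy's solver for the same minimum-cost matching problem, using a hypothetical vehicle-to-node cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[v, n] is the estimated QoE penalty of
# serving vehicle v at edge node n (latency, load, mobility, etc.).
cost = np.array([[4.0, 1.5, 3.0],
                 [2.0, 5.0, 1.0],
                 [3.5, 2.5, 2.0]])

# Solve the minimum-cost one-to-one assignment of vehicles to edge
# nodes, the problem the Kuhn-Munkres (Hungarian) algorithm addresses.
vehicles, nodes = linear_sum_assignment(cost)
for v, n in zip(vehicles, nodes):
    print(f"vehicle {v} -> edge node {n} (cost {cost[v, n]})")
```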


Kybernetes, 2021, Vol ahead-of-print (ahead-of-print)
Author(s): Xunfa Lu, Cheng Liu, Kin Keung Lai, Hairong Cui

Purpose: The purpose of the paper is to better measure the risks and volatility of the Bitcoin market using the proposed novel risk measurement model.

Design/methodology/approach: The joint regression analysis of value at risk (VaR) and expected shortfall (ES) can effectively overcome the non-elicitability problem of ES and thus better measure the risks and volatility of financial markets. Because of the advantages of the long short-term memory (LSTM) model in processing nonlinear time series, the paper embeds LSTM into the joint regression combined forecasting framework for VaR and ES, constructs a joint regression combined forecasting model based on LSTM for jointly measuring VaR and ES, i.e. the LSTM-joint-combined (LSTM-J-C) model, and uses it to investigate the risks of the Bitcoin market.

Findings: Empirical results show that the proposed LSTM-J-C model improves the forecasting performance of VaR and ES in the Bitcoin market more effectively than the historical simulation, the GARCH model, and the joint regression combined forecasting model.

Social implications: The proposed LSTM-J-C model can provide theoretical support and practical guidance to cryptocurrency market investors, policy makers, and regulatory agencies for measuring and controlling cryptocurrency market risks.

Originality/value: A novel risk measurement model, namely the LSTM-J-C model, is proposed to jointly estimate the VaR and ES of Bitcoin; it also provides risk managers with more accurate forecasts of volatility in the Bitcoin market.
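The paper's exact joint loss is not given in the abstract; joint VaR/ES regression is commonly trained with a Fissler-Ziegel scoring function such as the FZ0 loss sketched below, shown only as a plausible stand-in (the return series and forecast values are simulated, not Bitcoin data):

```python
import numpy as np

def fz0_loss(r, var, es, alpha=0.05):
    """FZ0 joint scoring function for (VaR, ES) at level alpha, from
    the Fissler-Ziegel class of jointly elicitable losses; var and es
    are expected to be negative (loss tail). Minimizing its sample
    mean jointly elicits VaR and ES, the kind of joint-regression
    objective the abstract refers to (not necessarily the paper's)."""
    hit = (r <= var).astype(float)
    return (-hit * (var - r) / (alpha * es)
            + var / es + np.log(-es) - 1.0)

# Illustrative use on simulated returns with constant forecasts.
rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.04   # stand-in for daily returns
loss = fz0_loss(r, var=-0.066, es=-0.083).mean()
print(f"mean FZ0 loss: {loss:.4f}")
```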

