With the rapid development of Internet technology and mobile terminals, users' demand for high-speed networks keeps growing. Mobile edge computing adopts a distributed caching approach to cope with the impact of massive data traffic on communication networks, reducing network latency and improving the quality of user service. In this paper, a deep reinforcement learning algorithm is proposed to solve the task offloading problem among multiple service nodes. Experiments are carried out on the iFogSim simulation platform with the Google Cluster Trace dataset. The results show that the task offloading strategy based on the DDQN algorithm performs well in terms of energy consumption and cost, which verifies the application prospects of deep reinforcement learning algorithms in mobile edge computing.
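The core of the DDQN approach mentioned above is its decoupled target computation: the online network selects the greedy next action while the target network evaluates it. The following is a minimal numerical sketch of that target update; all variable names, shapes, and the reward interpretation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a batch of transitions and a small discrete action
# space (e.g. which edge service node a task is offloaded to).
batch, n_actions, gamma = 4, 3, 0.9

# Stand-ins for the online and target networks' Q-values at the next states.
q_online_next = rng.normal(size=(batch, n_actions))
q_target_next = rng.normal(size=(batch, n_actions))
rewards = rng.normal(size=batch)       # e.g. negative weighted energy + cost
done = np.array([0, 0, 1, 0])          # 1 marks a terminal transition

# Double DQN target: the online network picks the greedy action, the target
# network evaluates it -- this decoupling reduces the overestimation bias
# of vanilla DQN.
greedy_actions = q_online_next.argmax(axis=1)
y = rewards + gamma * (1 - done) * q_target_next[np.arange(batch), greedy_actions]
print(y.shape)  # (4,)
```

In a full offloading agent, `y` would serve as the regression target for the online network's Q-value at the stored state-action pair, with the target network's weights periodically copied from the online network.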