Efficient Execution of Deep Neural Networks on Mobile Devices with NPU

Author(s):  
Tianxiang Tan ◽  
Guohong Cao
Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 229
Author(s):  
Xianzhong Tian ◽  
Juan Zhu ◽  
Ting Xu ◽  
Yanjun Li

The latest results in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and high energy consumption. The traditional approach is to run DNNs in the central cloud, but this requires large amounts of data to be transferred over the wireless network and also incurs long latency. To address this problem, offloading part of the DNN computation to edge clouds has been proposed, enabling collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) that adapts to user mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of MDPO against local-only execution and edge-only execution; experiments show that MDPO significantly reduces the total latency, improves DNN performance, and adapts well to different network conditions.
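
As a rough illustration of the partition-offloading idea this work builds on (not the MDPO algorithm itself, which additionally models user mobility and graph topologies), the sketch below enumerates the split points of a chain-topology DNN and picks the one with the lowest estimated end-to-end latency. All layer timings, feature sizes, and the uplink rate are hypothetical.

```python
# Hedged sketch of chain-topology DNN partition offloading: layers[:i] run on the
# device, layers[i:] run on the edge cloud, and the split minimizes estimated latency.
# This is NOT the paper's MDPO algorithm; all numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    device_ms: float   # assumed per-layer latency on the mobile device
    edge_ms: float     # assumed per-layer latency on the edge server
    out_kb: float      # assumed size of the layer's output feature map (KB)

def best_split(layers, uplink_kb_per_s, input_kb=512.0):
    """Return (split_index, latency_ms): layers[:i] run locally, layers[i:] on the edge."""
    best_i, best_ms = 0, float("inf")
    for i in range(len(layers) + 1):
        local = sum(l.device_ms for l in layers[:i])
        edge = sum(l.edge_ms for l in layers[i:])
        if i == len(layers):
            upload = 0.0  # fully local execution, nothing is transmitted
        else:
            sent_kb = layers[i - 1].out_kb if i > 0 else input_kb  # raw input if i == 0
            upload = sent_kb / uplink_kb_per_s * 1000.0
        total = local + upload + edge
        if total < best_ms:
            best_i, best_ms = i, total
    return best_i, best_ms

if __name__ == "__main__":
    net = [Layer("conv1", 40, 4, 800), Layer("conv2", 60, 6, 400),
           Layer("fc1", 30, 3, 16), Layer("fc2", 10, 1, 4)]
    split, latency = best_split(net, uplink_kb_per_s=1000)  # ~1 MB/s uplink (hypothetical)
    print(f"offload from layer index {split}; estimated total latency {latency:.1f} ms")
```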


2021 ◽  
pp. 495-508
Author(s):  
Shashank Reddy Danda ◽  
Xiaoyong Yuan ◽  
Bo Chen

Author(s):  
Chakkrit Termritthikun ◽  
Paisarn Muneesawang

The growth of high-performance mobile devices has resulted in more research into on-device image recognition. The research problems have been the latency and accuracy of automatic recognition, which remain obstacles to real-world usage. Although recently developed deep neural networks can achieve accuracy comparable to that of a human, some of them are still too slow. This paper describes the development of the architecture of a new convolutional neural network model, NU-LiteNet, which builds on SqueezeNet to reduce the model size to a degree suitable for smartphones. The model size of NU-LiteNet is 2.6 times smaller than that of SqueezeNet. The model outperformed other Convolutional Neural Network (CNN) models for mobile devices (e.g., SqueezeNet and MobileNet), with an accuracy of 81.15% and 69.58% on the Singapore and Paris landmark datasets, respectively. The shortest execution time, 0.7 seconds per image, was recorded with NU-LiteNet on mobile phones.
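
To illustrate the kind of SqueezeNet-style squeeze/expand design that compact mobile CNNs such as NU-LiteNet build on, the hedged Keras sketch below wires two "fire" blocks into a tiny classifier. It is not the published NU-LiteNet architecture; the filter counts and the 50-class output are illustrative only.

```python
# Hedged sketch: a SqueezeNet-style "fire" block in Keras, illustrating the
# squeeze/expand idea behind compact mobile CNNs. NOT the NU-LiteNet architecture.
import tensorflow as tf
from tensorflow.keras import layers

def fire_block(x, squeeze_filters, expand_filters):
    # 1x1 "squeeze" convolution reduces the channel count ...
    s = layers.Conv2D(squeeze_filters, 1, activation="relu")(x)
    # ... then parallel 1x1 and 3x3 "expand" convolutions restore capacity cheaply.
    e1 = layers.Conv2D(expand_filters, 1, activation="relu", padding="same")(s)
    e3 = layers.Conv2D(expand_filters, 3, activation="relu", padding="same")(s)
    return layers.Concatenate()([e1, e3])

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, 3, strides=2, activation="relu")(inputs)
x = layers.MaxPooling2D(3, strides=2)(x)
x = fire_block(x, 16, 64)
x = fire_block(x, 16, 64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(50, activation="softmax")(x)  # e.g. 50 landmark classes (hypothetical)
model = tf.keras.Model(inputs, outputs)
model.summary()
```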


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1495
Author(s):  
Hyun Kwon ◽  
Hyunsoo Yoon ◽  
Ki-Woong Park

Mobile devices such as sensors are used to connect to the Internet and provide services to users. Web services are vulnerable to automated attacks, which can restrict mobile devices from accessing websites. To prevent such automated attacks, CAPTCHAs are widely used as a security solution. However, when a high level of distortion is applied to a CAPTCHA to make it resistant to automated attacks, the CAPTCHA becomes difficult for a human to recognize. In this work, we propose a method for generating a CAPTCHA image that resists recognition by machines while remaining recognizable to humans. The method utilizes style transfer and creates a new image, called a style-plugged CAPTCHA image, by incorporating the styles of other images while keeping the content of the original CAPTCHA. In our experiments, we used the TensorFlow machine learning library and six CAPTCHA datasets in use on actual websites. The experimental results show that the proposed scheme reduces the rate of recognition by the DeCAPTCHA system to 3.5% and 3.2% using one style image and two style images, respectively, while maintaining recognizability by humans.
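
As a hedged illustration of the style-transfer step the scheme relies on (not the authors' exact pipeline or model), the sketch below blends a style image into a CAPTCHA image using the publicly available arbitrary-image-stylization model on TensorFlow Hub. The file names are placeholders.

```python
# Hedged sketch: applying fast style transfer to a CAPTCHA image with an
# off-the-shelf TensorFlow Hub model. Not the paper's pipeline; paths are placeholders.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path, size=256):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return tf.image.resize(img[np.newaxis, ...], (size, size))

# Pre-trained arbitrary-style-transfer model (public TF Hub module).
stylize = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("captcha.png")       # placeholder CAPTCHA image
style = load_image("style_texture.jpg")   # placeholder style image

stylized = stylize(content, style)[0]     # batch of stylized images, values in [0, 1]
out = (stylized[0].numpy() * 255).astype(np.uint8)
Image.fromarray(out).save("style_plugged_captcha.png")
```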


Author(s):  
Ivan Miguel Pires ◽  
Nuno Pombo ◽  
Nuno M. Garcia ◽  
Francisco Flórez-Revuelta

The recognition of Activities of Daily Living (ADL) and their environments based on the sensors available in off-the-shelf mobile devices is an emerging topic. These devices are capable of acquiring and processing sensor data for the correct recognition of ADL and their environments, providing fast and reliable feedback to the user. However, the methods implemented in a mobile application for this purpose should be adapted to the limited resources of these devices. This paper demonstrates a mobile application that implements a framework split into several modules, including data acquisition, data processing, data fusion, and classification methods based on the sensor data acquired from the accelerometer, gyroscope, magnetometer, microphone, and Global Positioning System (GPS) receiver. The framework adapts to the number of sensors available in the mobile device and implements classification with Deep Neural Networks (DNN), reporting an accuracy between 58.02% and 89.15%.
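
As a hedged illustration of the classification stage only, the sketch below trains a small dense network on fused, pre-extracted sensor features. The feature dimensionality, activity classes, and dummy data are hypothetical and are not taken from the framework described above.

```python
# Hedged sketch: a small dense classifier over fused sensor features, in the spirit
# of the ADL framework's DNN stage. Feature sizes, classes, and data are hypothetical.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

NUM_FEATURES = 40   # e.g. statistics from accelerometer, gyroscope, magnetometer, audio, GPS
NUM_CLASSES = 5     # e.g. walking, running, standing, going upstairs, driving (illustrative)

model = tf.keras.Sequential([
    layers.Input(shape=(NUM_FEATURES,)),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),               # keep the model small enough for a mobile device
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Dummy data standing in for windows of fused sensor features.
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2)
```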

