Analysis of the acceleration of deep learning inference models on a heterogeneous architecture based on OpenVINO

Author(s): Fatima Zahra Guerrouj, Mohamed Abouzahir, Mustapha Ramzi, El Mehdi Abdali
2021, Vol 3 (3), pp. 190-207

Author(s): S. K. B. Sangeetha

In recent years, deep-learning systems have made great progress, particularly in the disciplines of computer vision and pattern recognition. Deep-learning technology can enable inference models to perform real-time object detection and recognition. Using deep-learning-based designs, eye tracking systems can determine the position of the eyes or pupils, regardless of whether visible-light or near-infrared image sensors are used. For emerging vehicle electronic systems, such as driver monitoring systems and new touch screens, accurate and reliable eye gaze estimation is critical. In demanding, unregulated, low-power situations, such systems must operate efficiently and at a reasonable cost. A thorough examination of the different deep learning approaches is required to take into account all of the limitations and opportunities of eye gaze tracking. The goal of this research is to review the history of eye gaze tracking and how deep learning has contributed to computer vision-based tracking. Finally, this research presents a generalized system model for deep learning-driven eye gaze direction diagnostics, as well as a comparison of several approaches.
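As a purely illustrative sketch (not the system model proposed in the work above; the input size, layer widths and number of gaze classes are assumptions), a minimal convolutional network for coarse gaze-direction classification from a cropped eye image could look like this in PyTorch:

import torch
import torch.nn as nn

class GazeDirectionNet(nn.Module):
    """Toy CNN: grayscale eye crop -> logits over coarse gaze directions."""
    def __init__(self, num_directions: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x64x64 input
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 16x32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 32x16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_directions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one 64x64 grayscale eye crop -> gaze-direction logits.
logits = GazeDirectionNet()(torch.randn(1, 1, 64, 64))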


Sensors, 2021, Vol 21 (2), pp. 630
Author(s): Wenquan Jin, Rongxu Xu, Sunhwan Lim, Dong-Hwan Park, Chanwon Park, ...

Computation offloading enables intensive computational tasks in edge computing to be distributed across multiple computing resources of the server to overcome hardware limitations. Deep learning derives the inference approach from the learning approach using a volume of data and a sufficient computing resource. Deploying the resulting domain-specific inference approaches to edge computing, however, provides intelligent services close to the edge of the network. In this paper, we propose intelligent edge computing that provides a dynamic inference approach for building environment control. The dynamic inference approach is based on a rules engine deployed on the edge gateway, which selects an inference function according to the triggered rule. The edge gateway is deployed at the entry of a network edge and provides comprehensive functions, including device management, device proxy, client service, intelligent service and a rules engine. These functions are provided by microservices provider modules, which offer flexibility, extensibility and light weight when offloading domain-specific solutions to the edge gateway. Additionally, the intelligent services can be updated by offloading the microservices provider module together with the inference models. Using the rules engine, the edge gateway then operates an intelligent scenario based on the deployed rule profile by requesting the inference model from the intelligent service provider. The inference models are derived by training the building user data with the deep learning model on the edge server, which provides a high-performance computing resource. The intelligent service provider includes the inference models and provides intelligent functions in the edge gateway on a constrained hardware resource based on microservices. Moreover, to bridge the Internet of Things (IoT) device network to the Internet, the gateway provides device management and a proxy that enable device access from web clients.
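As a minimal sketch of the rule-driven selection of inference functions described above (the names, rule conditions and placeholder inference functions are assumptions for illustration, not the paper's API), the rules engine on the edge gateway could be modelled as follows:

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # evaluated against a sensor reading
    inference: Callable[[dict], str]    # inference function run when triggered

def infer_hvac_setpoint(reading: dict) -> str:
    # placeholder for a deployed inference model (e.g. temperature control)
    return "lower_setpoint" if reading["temperature"] > 26.0 else "hold"

def infer_lighting(reading: dict) -> str:
    # placeholder for a deployed inference model (e.g. lighting control)
    return "dim" if reading["occupancy"] == 0 else "bright"

RULES: Dict[str, Rule] = {
    "comfort":  Rule(lambda r: "temperature" in r, infer_hvac_setpoint),
    "lighting": Rule(lambda r: "occupancy" in r, infer_lighting),
}

def rules_engine(reading: dict) -> Dict[str, str]:
    """Run every rule whose condition is triggered and collect the decisions."""
    return {name: rule.inference(reading)
            for name, rule in RULES.items()
            if rule.condition(reading)}

# Example: one reading triggers both rules and yields two control decisions.
print(rules_engine({"temperature": 27.5, "occupancy": 0}))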


Sensors, 2019, Vol 19 (20), pp. 4503
Author(s): Patrick Thiam, Peter Bellmann, Hans A. Kestler, Friedhelm Schwenker

Standard feature engineering involves manually designing measurable descriptors based on some expert knowledge in the domain of application, followed by the selection of the best performing set of designed features for the subsequent optimisation of an inference model. Several studies have shown that this whole manual process can be efficiently replaced by deep learning approaches which are characterised by the integration of feature engineering, feature selection and inference model optimisation into a single learning process. In the following work, deep learning architectures are designed for the assessment of measurable physiological channels in order to perform an accurate classification of different levels of artificially induced nociceptive pain. In contrast to previous works, which rely on carefully designed sets of hand-crafted features, the current work aims at building competitive pain intensity inference models through autonomous feature learning, based on deep neural networks. The assessment of the designed deep learning architectures is based on the BioVid Heat Pain Database (Part A) and experimental validation demonstrates that the proposed uni-modal architecture for the electrodermal activity (EDA) and the deep fusion approaches significantly outperform previous methods reported in the literature, with respective average performances of 84.57% and 84.40% for the binary classification experiment consisting of the discrimination between the baseline and the pain tolerance level (T0 vs. T4) in a Leave-One-Subject-Out (LOSO) cross-validation evaluation setting. Moreover, the experimental results clearly show the relevance of the proposed approaches, which also offer more flexibility in the case of transfer learning due to the modular nature of deep neural networks.
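As an illustrative sketch of the Leave-One-Subject-Out evaluation protocol mentioned above (using synthetic data and a placeholder classifier rather than the authors' deep architectures or the BioVid recordings), the subject-wise split can be expressed with scikit-learn:

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))            # e.g. windows of EDA-derived features
y = rng.integers(0, 2, size=200)          # 0 = baseline (T0), 1 = tolerance (T4)
subjects = rng.integers(0, 10, size=200)  # subject ID for each sample

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])   # train on all remaining subjects
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"LOSO mean accuracy: {np.mean(scores):.3f}")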


2019, Vol 53 (3), pp. 281-294
Author(s): Jean-Michel Foucart, Augustin Chavanne, Jérôme Bourriau

Many contributions of Artificial Intelligence (AI) are envisaged in medicine. In orthodontics, several automated solutions have been available for a few years in X-ray imaging (automated cephalometric analysis, automated airway analysis) or for a few months (automatic analysis of digital models, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of the automated analysis of models, both in terms of their digitization and their segmentation. The comparison of the model analysis results obtained automatically and by several orthodontists demonstrates the reliability of the automatic analysis; the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable with the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics which, based on deep learning and big data, should make it possible, in the medium term, to move towards a more preventive and more predictive orthodontics.

