Real-time Neural Networks Implementation Proposal for Microcontrollers

Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1597
Author(s):  
Caio José B. V. Guimarães ◽  
Marcelo A. C. Fernandes

The adoption of intelligent systems with Artificial Neural Networks (ANNs) embedded in hardware for real-time applications is in growing demand in fields such as the Internet of Things (IoT) and Machine to Machine (M2M). However, applying ANNs in this type of system poses a significant challenge due to the high computational power required to process their basic operations. This paper presents an implementation strategy for a Multilayer Perceptron (MLP)-type neural network on a microcontroller (a low-cost, low-power platform). A modular, matrix-based MLP with the full classification process was implemented on the microcontroller, as was the backpropagation training. Testing and validation were performed through Hardware-In-the-Loop (HIL) evaluation of the Mean Squared Error (MSE) of the training process, the classification results, and the processing time of each implementation module. The results revealed a linear relationship between the hyperparameter values and the processing time required for classification, and showed that this processing time is compatible with the timing requirements of many applications in the fields mentioned above. These findings show that this implementation strategy and this platform can be applied successfully in real-time applications that require the capabilities of ANNs.
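As a rough illustration of the matrix-based MLP with backpropagation training that this abstract refers to, the sketch below shows the forward pass, the delta computation, and the weight updates in NumPy, with training MSE as the tracked metric. The layer sizes, sigmoid activation, learning rate, and XOR task are assumptions chosen for brevity; the paper's actual implementation runs on a microcontroller, which this desktop sketch does not attempt to reproduce.

```python
# Minimal NumPy sketch of a matrix-based MLP with backpropagation (illustration
# only; network size, activation, learning rate and the XOR task are assumptions).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)       # hidden activations
        self.y = sigmoid(self.W2 @ self.h + self.b2)  # output activations
        return self.y

    def backprop(self, x, target):
        y = self.forward(x)
        d2 = (y - target) * y * (1 - y)               # output delta (MSE loss)
        d1 = (self.W2.T @ d2) * self.h * (1 - self.h) # hidden delta
        self.W2 -= self.lr * np.outer(d2, self.h)     # gradient-descent updates
        self.b2 -= self.lr * d2
        self.W1 -= self.lr * np.outer(d1, x)
        self.b1 -= self.lr * d1
        return float(np.mean((y - target) ** 2))      # per-sample MSE

# Example: learn XOR and track the training MSE, mirroring the HIL validation metric.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
net = MLP(2, 4, 1)
for epoch in range(5000):
    mse = np.mean([net.backprop(x, t) for x, t in zip(X, T)])
print("final MSE:", mse, "outputs:", [round(float(net.forward(x)[0]), 2) for x in X])
```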

Author(s):  
Giovanny Mondragón-Ruiz ◽  
Alonso Tenorio-Trigoso ◽  
Manuel Castillo-Cara ◽  
Blanca Caminero ◽  
Carmen Carrión

The Internet of Things (IoT) has posed new requirements on the underlying processing architecture, especially for real-time applications such as event-detection services. Complex Event Processing (CEP) engines provide a powerful tool for implementing these services. Fog computing has arisen as a solution to support IoT real-time applications, in contrast to the Cloud-based approach. This work analyses a CEP-based Fog architecture for real-time IoT applications that uses a publish-subscribe protocol. A testbed has been developed with low-cost and local resources to verify the suitability of CEP engines for low-cost computing resources. To assess performance, we have analysed the effectiveness and cost of the proposal in terms of latency and resource usage, respectively. Results show that the fog computing architecture reduces event-detection latencies by up to 35% while using the available computing resources more efficiently, when compared to a Cloud deployment. The performance evaluation also identifies the communication between the CEP engine and the final users as the most time-consuming component of latency. Moreover, the latency analysis concludes that the time required by the CEP engine is related to the compute resources but depends nonlinearly on the number of connected things.
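To make the latency metric concrete, the following toy sketch evaluates a CEP-style sliding-window rule in-process and reports detection latency. It is not the CEP engine, publish-subscribe stack, or testbed used in the work; the threshold, window length, and simulated sensor stream are invented for illustration.

```python
# In-process sketch of a CEP-style event-detection rule and a latency measurement.
# All parameters and the sensor stream are illustrative assumptions.
import time
from collections import deque

WINDOW_S = 2.0      # sliding time window (seconds)
THRESHOLD = 30.0    # value threshold for the rule
MIN_HITS = 3        # readings above threshold needed inside the window

window = deque()    # (timestamp, value) pairs currently inside the window

def on_reading(ts, value):
    """Feed one reading; return detection latency in ms if the rule fires."""
    window.append((ts, value))
    while window and ts - window[0][0] > WINDOW_S:   # expire old readings
        window.popleft()
    hits = [t for t, v in window if v > THRESHOLD]
    if len(hits) >= MIN_HITS:
        # latency from the first contributing reading to detection
        return (time.time() - hits[0]) * 1000.0
    return None

# Simulated stream: normal readings followed by a burst above the threshold.
for value in [22.0, 24.0, 31.5, 33.0, 35.2]:
    latency_ms = on_reading(time.time(), value)
    if latency_ms is not None:
        print(f"event detected, detection latency ≈ {latency_ms:.1f} ms")
    time.sleep(0.1)
```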


2021 ◽  
Author(s):  
Nicholas Parkyn

Emerging heterogeneous computing, edge computing, and machine learning/AI-at-the-edge technologies are driving approaches and techniques for processing and analysing onboard instrument data in near real time. The author has used edge computing and neural networks combined with high-performance heterogeneous computing platforms to accelerate AI workloads. The heterogeneous computing hardware used is readily available and low cost, delivers impressive AI performance, and can run multiple neural networks in parallel. Collecting, processing, and learning from onboard instrument data in near real time is not a trivial problem due to data volumes and the complexities of data filtering, data storage, and continual learning. Little research has been done on continual machine learning, which aims at a higher level of machine intelligence by providing artificial agents with the ability to learn from a non-stationary and never-ending stream of data. The author has applied the concept of continual learning to build a system that continually learns from actual boat performance and refines predictions previously made using static VPP (velocity prediction program) data. The neural networks used are initially trained on the output of traditional VPP software and continue to learn from actual data collected under real sailing conditions. The author will present the system design, the AI and edge computing techniques used, and the approaches he has researched for incremental training to realise continual learning.
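The sketch below illustrates the general pattern of pre-training on static VPP output and then incrementally updating the model from streamed measurements. The feature set, synthetic data, network size, and use of scikit-learn's partial_fit are assumptions made for demonstration; they are not the author's system, which targets edge hardware and a richer continual-learning scheme.

```python
# Hedged sketch: pre-train on a fabricated VPP table, then refine incrementally
# from streamed "measured" samples. All data and model choices are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def vpp_table(tws, twa):
    """Placeholder for traditional VPP output: boat speed from true wind speed/angle."""
    return 0.4 * tws * np.sin(np.radians(twa)) + 1.0

# 1) Pre-train on synthetic VPP predictions.
X_vpp = np.column_stack([rng.uniform(4, 25, 2000), rng.uniform(30, 170, 2000)])
y_vpp = vpp_table(X_vpp[:, 0], X_vpp[:, 1])
model = MLPRegressor(hidden_layer_sizes=(32, 32), solver="adam",
                     max_iter=500, random_state=0)
model.fit(X_vpp, y_vpp)

# 2) Continually refine with measured samples arriving as a stream
#    (real performance assumed to differ from the VPP by a bias + noise).
for _ in range(200):
    X_live = np.column_stack([rng.uniform(4, 25, 16), rng.uniform(30, 170, 16)])
    y_live = vpp_table(X_live[:, 0], X_live[:, 1]) * 0.9 + rng.normal(0, 0.1, 16)
    model.partial_fit(X_live, y_live)   # incremental update, no full retraining

x_test = np.array([[12.0, 90.0]])
print("VPP says:", vpp_table(12.0, 90.0),
      "| model after continual updates:", model.predict(x_test)[0])
```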


2022 ◽  
pp. 166-201
Author(s):  
Asha Gowda Karegowda ◽  
Devika G.

Artificial neural networks (ANNs) are often well suited to classification problems. Even so, training an ANN remains a challenging task for problems with large, high-dimensional search spaces. These difficulties are greater for applications that involve fine-tuning the ANN control parameters: weights and biases. No single search and optimization method suits the weights and biases of an ANN for all problems. Traditional heuristic approaches fall short because of their poor convergence speed and their tendency to end up in local optima. In this connection, meta-heuristic algorithms have proven to provide consistent solutions for optimizing ANN training parameters. This chapter provides a critique of the existing heuristic and meta-heuristic literature on training neural networks, covering the algorithms, their applicability, and their reliability for parameter optimization. In addition, real-time applications of ANNs are presented. Finally, future directions to be explored in the field of ANNs are outlined, which will be of potential interest to upcoming researchers.
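As one concrete example of the meta-heuristic idea surveyed in this chapter, the sketch below uses particle swarm optimisation (PSO) to search the weight and bias space of a tiny feed-forward network instead of gradient-based training. PSO is chosen here only as a representative meta-heuristic; the coefficients, network size, and XOR task are assumptions for brevity.

```python
# Illustrative sketch: PSO optimising the flattened weights/biases of a small
# network on XOR. Not tied to any specific method reviewed in the chapter.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 0], dtype=float)

N_IN, N_HID = 2, 4
DIM = N_IN * N_HID + N_HID + N_HID + 1          # all weights and biases, flattened

def mse(vec):
    """Decode a particle into network parameters and return the training MSE."""
    i = 0
    W1 = vec[i:i + N_IN * N_HID].reshape(N_HID, N_IN); i += N_IN * N_HID
    b1 = vec[i:i + N_HID]; i += N_HID
    W2 = vec[i:i + N_HID]; i += N_HID
    b2 = vec[i]
    h = np.tanh(X @ W1.T + b1)
    y = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((y - T) ** 2)

# Standard global-best PSO loop.
SWARM, ITERS, W, C1, C2 = 30, 300, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best MSE found by PSO:", mse(gbest))
```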


2019 ◽  
Vol 962 ◽  
pp. 41-48
Author(s):  
Tzong Daw Wu ◽  
Jiun Shen Chen ◽  
Ching Pei Tseng ◽  
Cheng Chang Hsieh

This study presents a real-time method for determining the thickness of each layer in multilayer thin films. Artificial neural networks (ANNs) were introduced to estimate the thicknesses from a transmittance spectrum. After training on theoretical spectra generated by thin-film optics and perturbed with noise, the ANNs were applied to estimate the thicknesses of four-layer nanoscale films consisting of TiO2, Ag, Ti, and TiO2 thin films assembled sequentially on polyethylene terephthalate (PET) substrates. The results reveal that the mean squared error of the estimation is 2.6 nm², which is accurate enough to monitor film growth in real time.
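The sketch below shows only the shape of the regression pipeline described here: synthetic spectra in, four layer thicknesses out, with MSE in nm² as the metric. The spectrum generator is a crude placeholder, not the thin-film optics model (e.g. a transfer-matrix calculation) used in the study, and all ranges and network settings are assumptions.

```python
# Placeholder pipeline: train a regressor on fake "spectra" to predict four
# layer thicknesses. The optics below is NOT physically meaningful.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 800, 100)            # nm, assumed sampling grid

def fake_transmittance(d):
    """Placeholder spectrum for thicknesses d = [TiO2, Ag, Ti, TiO2] in nm."""
    phase = np.outer(d, 2 * np.pi / wavelengths)    # (4, 100) interference-like terms
    return 0.5 + 0.1 * np.cos(phase).sum(axis=0) - 0.002 * d[1]

# Training set: random thicknesses, spectra perturbed by measurement noise.
D = rng.uniform([40, 5, 1, 40], [120, 20, 5, 120], size=(5000, 4))
S = np.array([fake_transmittance(d) for d in D]) + rng.normal(0, 0.01, (5000, 100))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=400, random_state=0)
model.fit(S[:4500], D[:4500])

pred = model.predict(S[4500:])
print("thickness MSE [nm^2]:", np.mean((pred - D[4500:]) ** 2))
```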


2016 ◽  
Vol 17 (6) ◽  
pp. 703-716 ◽  
Author(s):  
Sina Zarrabian ◽  
Rabie Belkacemi ◽  
Adeniyi A. Babalola

In this paper, a novel intelligent control is proposed based on Artificial Neural Networks (ANNs) to mitigate cascading failure (CF) and prevent blackout in smart grid systems after an N-1-1 contingency condition in real time. The fundamental contribution of this research is to deploy the machine learning concept to prevent blackout at the early stages of its occurrence and to make smart grids more resilient, reliable, and robust. The proposed method provides the best action-selection strategy for adaptive adjustment of generators' output power through frequency control. This method is able to relieve congestion of transmission lines and prevent consecutive transmission-line outages after an N-1-1 contingency condition. The proposed ANN-based control approach is tested on an experimental 100 kW test system developed by the authors to test intelligent systems. Additionally, the proposed approach is validated on the large-scale IEEE 118-bus power system through simulation studies. Experimental results show that the ANN approach is very promising and provides accurate and robust control that prevents blackout. The technique is compared to a heuristic multi-agent system (MAS) approach based on communication interchanges; the ANN approach showed a more accurate and robust response than the MAS algorithm.
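A heavily simplified sketch of the "ANN selects a corrective action" pattern follows: a small classifier maps post-contingency measurements to a discrete generation-adjustment action. The features, actions, and the rule used to label the synthetic training data are invented for illustration; they are not the authors' control scheme, training data, or test system.

```python
# Toy sketch of ANN-based action selection after a contingency. Everything
# about the features, labels and actions is an assumption for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
ACTIONS = ["hold", "raise_generation", "lower_generation_and_shed"]

# Synthetic samples: frequency deviation [Hz] and worst line loading [% of rating].
X = np.column_stack([rng.uniform(-0.5, 0.5, 4000), rng.uniform(50, 130, 4000)])

def label(df, loading):
    if loading > 110:                 # line close to tripping -> relieve congestion
        return 2
    return 1 if df < -0.1 else 0      # under-frequency -> raise generation

y = np.array([label(df, ld) for df, ld in X])
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0).fit(X, y)

# After an N-1-1 event, the measured state is fed to the network in real time.
state = np.array([[-0.18, 118.0]])    # 0.18 Hz under-frequency, overloaded line
print("selected action:", ACTIONS[clf.predict(state)[0]])
```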


Author(s):  
Fereshteh Hoseini ◽  
Mostafa Ghobaei Arani ◽  
Alireza Taghizadeh

With the increasing use of cloud services and the growing number of requests to process tasks with minimum time and cost, resource allocation and scheduling, especially in real-time applications, become more challenging. Resource scheduling is one of the most important scheduling problems and belongs to the class of NP-hard problems. In this paper, an efficient algorithm is proposed to schedule real-time cloud services while considering resource constraints. The simulation results show that the proposed algorithm shortens the processing time of tasks and decreases the number of canceled tasks.
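To make the problem setting concrete, the sketch below simulates deadline-constrained tasks competing for a fixed pool of cores and counts cancelled tasks, using a plain earliest-deadline-first policy as a baseline. This is not the algorithm proposed in the paper; the task parameters and cancellation rule are assumptions.

```python
# Baseline sketch: earliest-deadline-first scheduling with resource constraints,
# cancelling tasks that can no longer meet their deadline. Illustration only.
import heapq

class Task:
    def __init__(self, tid, arrival, duration, deadline, cores):
        self.tid, self.arrival, self.duration, self.deadline, self.cores = \
            tid, arrival, duration, deadline, cores

def simulate(tasks, total_cores, horizon=200):
    """Return (finished_ids, cancelled_ids) under an earliest-deadline-first policy."""
    pending = []                      # heap of (deadline, tid, task) waiting for cores
    running = []                      # list of (finish_time, cores, tid)
    finished, cancelled = [], []
    queue = sorted(tasks, key=lambda x: x.arrival)
    used = 0
    for now in range(horizon):
        while queue and queue[0].arrival <= now:          # admit arrivals
            tk = queue.pop(0)
            heapq.heappush(pending, (tk.deadline, tk.tid, tk))
        for item in list(running):                        # free cores of finished tasks
            finish, cores, tid = item
            if finish <= now:
                running.remove(item)
                used -= cores
                finished.append(tid)
        deferred = []
        while pending:                                    # start tasks in deadline order
            deadline, tid, tk = heapq.heappop(pending)
            if now + tk.duration > deadline:              # deadline unreachable: cancel
                cancelled.append(tid)
            elif used + tk.cores <= total_cores:          # enough free cores: start it
                used += tk.cores
                running.append((now + tk.duration, tk.cores, tid))
            else:
                deferred.append((deadline, tid, tk))      # wait for resources
        for item in deferred:
            heapq.heappush(pending, item)
    return finished, cancelled

# Toy workload on a 4-core node; parameters are arbitrary.
tasks = [Task(i, arrival=i % 5, duration=3 + i % 4, deadline=(i % 5) + 8, cores=1 + i % 3)
         for i in range(12)]
done, dropped = simulate(tasks, total_cores=4)
print("finished:", done, "cancelled:", dropped)
```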


Author(s):  
Javier Garcia-Guzman ◽  
Lisardo Prieto González ◽  
Jonatan Pajares Redondo ◽  
Mat Max Montalvo Martinez ◽  
María Jesús López Boada

Given the high number of vehicle-crash victims, reducing this figure has been established as a priority in the transportation sector. For this reason, much recent research focuses on including control systems in existing vehicles to improve their stability, comfort, and handling. These systems need to know, at every moment, the behavior of the vehicle (its state variables, among others) as different maneuvers are performed, so that they can actuate the vehicle's systems (brakes, steering, suspension) and, in this way, achieve good behavior. The main problem arises from the inability to directly capture several required dynamic vehicle variables, such as roll angle, from low-cost sensors. Previous studies demonstrate that low-cost sensors can provide data in real time with the required precision and reliability. Furthermore, other research indicates that neural networks are efficient mechanisms for estimating roll angle. Nevertheless, it is necessary to assess whether the fusion of data coming from low-cost devices with estimations provided by neural networks can fulfil the reliability and appropriateness requirements for using these technologies to improve overall safety in production vehicles. The increase in computing power, the reduction in power consumption and device size, and the wide variety of communication technologies and networking protocols using the Internet have led to the development of the Internet of Things (IoT). To address this issue, this study has two main goals: 1) determine the appropriateness and performance of neural networks embedded in low-cost sensor kits to estimate the roll angle required to evaluate rollover-risk situations; 2) compare low-cost control unit devices (Intel Edison and Raspberry Pi 3 Model B) in providing the roll angle estimation with this artificial neural network-based approach. To fulfil these objectives, an experimental environment has been set up consisting of a van with two sets of low-cost kits, one including a Raspberry Pi 3 Model B, a low-cost Inertial Measurement Unit (BNO055 - 37€) and a GPS (Mtk3339 - 53€), and the other having an Intel Edison System on Chip linked to a SparkFun 9 Degrees of Freedom module. This experimental environment is tested in different maneuvers for comparison purposes. Neural networks embedded in the low-cost sensor kits provide roll angle estimations very close to the real values. Moreover, both the Intel Edison and the Raspberry Pi 3 Model B have enough computing capability to successfully run the neural-network-based roll angle estimation to determine rollover-risk situations while fulfilling the real-time operation constraints stated for this problem.
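The sketch below captures the two checks this study is about: a small feed-forward network estimating roll angle from low-cost IMU-style signals, and a per-sample inference timing check against a real-time budget. The training data are synthetic, and the feature set, network size, and 10 ms budget are assumptions, not the authors' dataset or thresholds.

```python
# Sketch: neural-network roll-angle estimation plus a real-time timing check.
# Data, features and the timing budget are assumptions for illustration.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for logged maneuver data: 4 signals -> roll angle [deg].
X = rng.normal(0, 1, (20000, 4))      # lateral accel, roll rate, yaw rate, speed
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] - 0.5 * X[:, 2] * X[:, 3] + rng.normal(0, 0.2, 20000)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=300, random_state=0)
model.fit(X[:18000], y[:18000])

# Accuracy on held-out samples.
err = model.predict(X[18000:]) - y[18000:]
print("RMSE [deg]:", float(np.sqrt(np.mean(err ** 2))))

# Real-time check: estimate one roll angle per IMU sample and time it.
sample = X[:1]
t0 = time.perf_counter()
for _ in range(1000):
    model.predict(sample)
per_sample_ms = (time.perf_counter() - t0) / 1000 * 1e3
print(f"inference: {per_sample_ms:.3f} ms/sample (budget assumed: 10 ms)")
```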


Author(s):  
Cristian Grava ◽  
Alexandru Gacsádi ◽  
Ioan Buciu

In this paper we present an original implementation of a homogeneous algorithm for motion estimation and compensation in image sequences using Cellular Neural Networks (CNNs). CNNs have proven their efficiency in real-time image processing because they can be implemented on a CNN chip or emulated on a Field Programmable Gate Array (FPGA). The motion information is obtained using a CNN implementation of the well-known Horn & Schunck method. This information is then used in a CNN implementation of a motion-compensation method. Through our algorithm we obtain a homogeneous implementation for real-time applications in artificial vision or medical imaging. The algorithm is illustrated on some classical sequences, and the results confirm its validity.
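For reference, the sketch below implements the classical Horn & Schunck iteration on a CPU with NumPy, together with a very simple warping step for compensation. This is the textbook formulation the CNN templates emulate, not the CNN or FPGA implementation presented in the paper; the derivative kernels, smoothness weight alpha, and iteration count are standard assumptions.

```python
# Classical Horn & Schunck optical flow (CPU reference), plus a naive
# motion-compensation step. Parameters are standard textbook choices.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Estimate dense flow (u, v) between two grayscale float frames."""
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)        # spatial derivatives
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2 - im1, np.full((2, 2), 0.25))   # temporal derivative
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)   # smoothness term
        d = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = u_bar - Ix * d, v_bar - Iy * d
    return u, v

def compensate(frame, u, v):
    """Naive motion compensation: pull each pixel from its estimated source."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - u).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - v).astype(int), 0, h - 1)
    return frame[src_y, src_x]

# Toy example: a bright square shifted by 2 pixels between frames.
f1 = np.zeros((64, 64)); f1[20:30, 20:30] = 1.0
f2 = np.zeros((64, 64)); f2[20:30, 22:32] = 1.0
u, v = horn_schunck(f1, f2)
print("mean horizontal flow around the square:", float(u[20:30, 20:32].mean()))
```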

