Porting Rulex Software to the Raspberry Pi for Machine Learning Applications on the Edge

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6526
Author(s):  
Ali Walid Daher ◽  
Ali Rizik ◽  
Marco Muselli ◽  
Hussein Chible ◽  
Daniele D. Caviglia

Edge Computing makes it possible to perform measurements and cognitive decisions outside a central server by carrying out data storage, manipulation, and processing on the Internet of Things (IoT) node. Moreover, Artificial Intelligence (AI) and Machine Learning applications have become a routine component of virtually every industrial or research system. Consequently, we adopted the Raspberry Pi, a low-cost computing platform that is profitably applied in the field of IoT. As for the software, among the plethora of Machine Learning (ML) paradigms reported in the literature, we identified Rulex as a good ML platform, suitable for implementation on the Raspberry Pi. In this paper, we present the porting of the Rulex ML platform to the board to perform ML forecasts in an IoT setup. Specifically, we explain the porting of Rulex's libraries to Windows 32-bit, Ubuntu 64-bit, and Raspbian 32-bit. To carry out an in-depth verification of the application possibilities, we perform forecasts on five unrelated datasets from five different applications, with varying sizes in terms of number of records, skewness, and dimensionality. These include a small Urban Classification dataset, three larger datasets concerning Human Activity detection, a Biomedical dataset related to mental state, and a Vehicle Activity Recognition dataset. The overall accuracies for these forecasts are 84.13%, 99.29% (for SVM), 95.47% (for SVM), and 95.27% (for KNN), respectively. Finally, an image-based gender classification dataset is employed to perform image classification on the Edge. Moreover, a novel image pre-processing algorithm was developed that converts images into time series by relying on statistical contour-based detection techniques. Even though the dataset contains inconsistent and random images, in terms of subjects and settings, Rulex achieves an overall accuracy of 96.47% while competing with a literature dominated by forward-facing and mugshot images. Additionally, the power consumption of the Raspberry Pi in a Client/Server setup was compared with that of an HP laptop: the board takes more time, but consumes less energy for the same ML task.
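
The abstract does not detail the contour-based image-to-time-series conversion. The sketch below is not the authors' algorithm, only a minimal illustration of the general idea, assuming OpenCV: the centroid-to-contour distance signature is one common way to turn a shape contour into a 1-D signal.

```python
# A hedged sketch of converting an image into a 1-D time series via
# contour detection. NOT the paper's method; one plausible illustration.
import cv2
import numpy as np

def image_to_time_series(path, n_samples=256):
    """Return a fixed-length 1-D signal derived from the largest contour."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Binarize with Otsu's threshold, then extract external contours
    # (OpenCV 4 return signature assumed).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no contour found")
    contour = max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) points

    # Distance from the contour centroid to each boundary point,
    # resampled to a fixed length so every image yields equal-size input.
    centroid = contour.mean(axis=0)
    distances = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(distances) - 1, n_samples).astype(int)
    signal = distances[idx]
    return (signal - signal.mean()) / (signal.std() + 1e-8)  # normalize
```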

2021 ◽  
Author(s):  
Nicholas Parkyn

Emerging heterogeneous computing, edge computing, and machine learning/AI-at-the-edge technologies drive the approaches and techniques for processing and analysing onboard instrument data in near real-time. The author has used edge computing and neural networks, combined with high-performance heterogeneous computing platforms, to accelerate AI workloads. The heterogeneous computing hardware used is readily available and low cost, delivers impressive AI performance, and can run multiple neural networks in parallel. Collecting, processing, and learning from onboard instrument data in near real-time is not a trivial problem, due to data volumes and the complexities of data filtering, data storage, and continual learning. Little research has been done on continual machine learning, which aims at a higher level of machine intelligence by providing artificial agents with the ability to learn from a non-stationary and never-ending stream of data. The author has applied the concept of continual learning to build a system that continually learns from actual boat performance and refines predictions previously made using static Velocity Prediction Program (VPP) data. The neural networks used are initially trained on the output of traditional VPP software and continue to learn from data collected under real sailing conditions. The author will present the system design, the AI and edge computing techniques used, and the approaches researched for incremental training to realise continual learning.
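
To make the two-phase idea concrete, here is a minimal sketch of pretraining on VPP output followed by incremental updates from live data, assuming scikit-learn. The feature layout and the toy regressor are illustrative assumptions; the author's actual system uses neural networks, not this placeholder model.

```python
# Phase 1: train on synthetic VPP output; Phase 2: keep learning from
# live instrument data. All data below is made-up placeholder input.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)

# Phase 1: initial training on VPP predictions (features could be
# wind speed, wind angle, heel, etc.).
X_vpp = np.random.rand(1000, 3)               # placeholder VPP inputs
y_vpp = X_vpp @ np.array([4.0, 1.5, -0.5])    # placeholder VPP boat speed
model.fit(X_vpp, y_vpp)

# Phase 2: continual learning from streamed samples, one small batch
# at a time, so predictions drift toward actual boat performance.
def on_new_samples(X_live, y_live):
    model.partial_fit(X_live, y_live)
    return model.predict(X_live)
```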


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Tarun Dhar Diwan ◽  
Siddartha Choubey ◽  
H. S. Hota ◽  
S. B. Goyal ◽  
Sajjad Shaukat Jamal ◽  
...  

Identification of anomalous and malicious traffic in Internet of Things (IoT) networks is essential for IoT security. Tracking and blocking unwanted traffic flows in an IoT network requires a framework that identifies attacks accurately, quickly, and with low complexity. Many machine learning (ML) algorithms have proved their efficiency at detecting intrusions in IoT networks, but they suffer from misclassification problems caused by inappropriate and irrelevant features. In this paper, an in-depth study is presented to address such issues. We present a lightweight, low-cost feature selection technique for IoT intrusion detection that achieves low complexity and high accuracy thanks to its low computational time. A novel feature selection technique is proposed that integrates rank-based chi-square, Pearson correlation, and score correlation to extract the relevant features from all those available in the dataset. Feature entropy estimation is then applied to validate the relationships among the extracted features used to identify malicious traffic in IoT networks. Finally, an extreme gradient boosting ensemble approach is used to classify traffic into the relevant attack types. The simulation is performed on three datasets, i.e., NSL-KDD, UNSW-NB15, and CICIDS2017, and results are presented on different test sets. On the NSL-KDD dataset, the accuracy was approximately 97.48%; the accuracies on UNSW-NB15 and CICIDS2017 were approximately 99.96% and 99.93%, respectively. A state-of-the-art comparison with existing techniques is also presented.
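
The following is a hedged sketch of the pipeline shape described above: chi-square and Pearson rankings fused into one feature score, followed by gradient-boosted classification. The fusion rule and the entropy-validation step here are illustrative assumptions, not the paper's exact method; it assumes scikit-learn and XGBoost.

```python
# Illustrative feature-selection + XGBoost pipeline (not the paper's
# exact ranking/fusion rule; entropy validation is omitted).
import numpy as np
from sklearn.feature_selection import chi2
from xgboost import XGBClassifier

def select_and_classify(X_train, y_train, X_test, k=20):
    # Rank features by chi-square score (chi2 requires non-negative
    # inputs, hence the crude abs() guard for this sketch).
    chi_scores, _ = chi2(np.abs(X_train), y_train)
    # Rank features by absolute Pearson correlation with the label.
    pearson = np.array([abs(np.corrcoef(X_train[:, j], y_train)[0, 1])
                        for j in range(X_train.shape[1])])
    # Fuse the two rankings (simple average of normalized scores).
    fused = chi_scores / chi_scores.max() + pearson / pearson.max()
    top = np.argsort(fused)[-k:]

    clf = XGBClassifier(n_estimators=200, max_depth=6)
    clf.fit(X_train[:, top], y_train)
    return clf.predict(X_test[:, top])
```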


Author(s):  
Kirti Magudia ◽  
Christopher P. Bridge ◽  
Katherine P. Andriole ◽  
Michael H. Rosenthal

With vast interest in machine learning, more investigators are proposing to assemble large datasets for machine learning applications. We aim to delineate multiple possible roadblocks to exam retrieval that may present themselves and lead to significant time delays. This HIPAA-compliant, institutional review board–approved, retrospective clinical study required identification and retrieval of all outpatient and emergency patients undergoing abdominal and pelvic computed tomography (CT) at three affiliated hospitals in the year 2012. If a patient had multiple abdominal CT exams, the first exam was selected for retrieval (n=23,186). Our experience in attempting to retrieve 23,186 abdominal CT exams yielded 22,852 valid CT abdomen/pelvis exams and identified four major categories of challenges when retrieving large datasets: cohort selection and processing, retrieving DICOM exam files from PACS, data storage, and non-recoverable failures. The retrieval took 3 months of project time and at least 300 person-hours split between the primary investigator (a radiologist), a data scientist, and a software engineer. Exam selection and retrieval may take significantly longer than planned. We share our experience so that other investigators can anticipate and plan for these challenges. We also hope to help institutions better understand the demands that may be placed on their infrastructure by large-scale medical imaging machine learning projects.
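
For readers unfamiliar with PACS retrieval, the sketch below shows one typical step, a DICOM C-FIND study query, assuming pynetdicom. The host, port, and AE titles are hypothetical placeholders; the authors' actual retrieval workflow is not described at this level in the abstract.

```python
# A hedged sketch of a DICOM C-FIND study query against a PACS,
# assuming pynetdicom. All endpoint details are hypothetical.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="RESEARCH_SCU")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.StudyDate = "20120101-20121231"   # the 2012 cohort window
query.ModalitiesInStudy = "CT"
query.StudyInstanceUID = ""             # ask PACS to return the UID

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        # 0xFF00/0xFF01 are "pending" statuses carrying a match.
        if status and status.Status in (0xFF00, 0xFF01):
            print(identifier.StudyInstanceUID)
    assoc.release()
```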


2021 ◽  
Vol 10 (3) ◽  
pp. 40
Author(s):  
Gilson Augusto Helfer ◽  
Jorge Luis Victória Barbosa ◽  
Douglas Alves ◽  
Adilson Ben da Costa ◽  
Marko Beko ◽  
...  

The present work proposes a low-cost portable device as an enabling technology for agriculture, using multispectral imaging and machine learning for soil texture analysis. Clay is an important factor for the verification and monitoring of soil use due to its fast reaction to chemical and surface changes. The system developed analyses the reflectance in selected wavebands for clay prediction, with each wavelength selected through an LED lamp panel. A NoIR microcamera controlled by a Raspberry Pi is employed to acquire the image and unfold it into RGB histograms. Results showed good prediction performance, with an R2 of 0.96, an RMSEC of 3.66%, and an RMSECV of 16.87%. Its high portability allows the equipment to be used in the field, providing strategic information related to soil science.
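
A minimal sketch of the image-to-histogram step, assuming OpenCV on the Raspberry Pi, is shown below. The regression model and the LED waveband control are the authors' own; the file name and the 256-bin histogram choice here are illustrative assumptions.

```python
# Unfold a captured soil image into concatenated R, G, B histograms,
# yielding a feature vector for a downstream regression model.
import cv2
import numpy as np

def rgb_histogram_features(image_path, bins=256):
    img = cv2.imread(image_path)  # OpenCV loads in BGR channel order
    feats = []
    for channel in range(3):
        hist = cv2.calcHist([img], [channel], None, [bins], [0, 256])
        feats.append(hist.ravel() / hist.sum())  # normalize per channel
    return np.concatenate(feats)  # length 3 * bins feature vector

# These features would then feed a calibration/regression model
# (e.g., PLS or linear regression) to predict clay content.
```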


Author(s):  
Teguh Wahyono ◽  
Yaya Heryadi

The aim of this chapter is to describe and analyze the application of machine learning to anomaly detection, an important area of study. Many phenomena relate to anomaly detection, such as extreme climate change, intrusion detection for network security, fraud detection in e-banking, engine fault diagnosis, spacecraft anomaly detection, vessel tracking, and airline safety. This chapter attempts to provide a structured and broad overview of the extensive research on anomaly detection techniques spanning multiple research areas and application domains. A quantitative meta-analysis approach is used to examine the development of research on these matters, covering the methods, the techniques utilized, application development, the technology employed, and the research trends being developed.


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 600
Author(s):  
Gianluca Cornetta ◽  
Abdellah Touhafi

Low-cost, high-performance embedded devices are proliferating, and a plethora of new platforms are available on the market. Some of them either have embedded GPUs or can be connected to external Machine Learning (ML) hardware accelerators. These enhanced hardware features enable new applications in which AI-powered smart objects can effectively and pervasively run distributed ML algorithms in real time, shifting part of the raw data analysis and processing from the cloud or edge to the device itself. In this context, Artificial Intelligence (AI) can be considered the backbone of the next generation of Internet of Things (IoT) devices, which will no longer be mere data collectors and forwarders, but truly “smart” devices with built-in data wrangling and data analysis features that leverage lightweight machine learning algorithms to make autonomous decisions in the field. This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those that are more suitable to run on resource-constrained embedded devices. In addition, several machine learning algorithms have been built on top of a custom multi-dimensional array library. The designed framework has been evaluated, and its performance stressed, on Raspberry Pi III and IV embedded computers.
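
The custom array library itself is not reproduced here; the following is only a hedged illustration of the kind of lightweight, dependency-free ML routine suited to resource-constrained devices, a minimal k-NN classifier over plain Python lists.

```python
# A minimal, dependency-free k-NN classifier; illustrative only,
# not the authors' library.
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = [(sum((a - b) ** 2 for a, b in zip(row, x)), label)
             for row, label in zip(train_X, train_y)]
    dists.sort(key=lambda pair: pair[0])
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Example: two tiny 2-D classes.
X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y = ["a", "a", "b", "b"]
print(knn_predict(X, y, [0.95, 1.05]))  # -> "b"
```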


2021 ◽  
Vol 12 (1) ◽  
pp. 89
Author(s):  
Ruiqi Chen ◽  
Tianyu Wu ◽  
Yuchen Zheng ◽  
Ming Ling

In Internet of Things (IoT) scenarios, it is challenging to deploy Machine Learning (ML) algorithms on low-cost Field Programmable Gate Arrays (FPGAs) in a real-time, cost-efficient, and high-performance way. This paper introduces Machine Learning on FPGA (MLoF), a series of ML IP cores implemented on low-cost FPGA platforms, aimed at helping more IoT developers achieve comprehensive performance across various tasks. With Verilog, we deploy and accelerate Artificial Neural Networks (ANNs), Decision Trees (DTs), K-Nearest Neighbors (k-NNs), and Support Vector Machines (SVMs) on 10 different FPGA development boards from seven manufacturers. Additionally, we analyze and evaluate our design on six datasets, and compare the best-performing FPGAs with traditional SoC-based systems including the NVIDIA Jetson Nano, Raspberry Pi 3B+, and STM32L476 Nucleo. The results show that Lattice's ICE40UP5 achieves the best overall performance with low power consumption, on which MLoF reduces power by 891% and increases performance by 9 times on average. Moreover, its Cost-Power-Latency Product (CPLP) outperforms SoC-based systems by 25 times, which demonstrates the significance of MLoF for endpoint deployment of ML algorithms. Furthermore, we make all of the code open source in order to promote future research.
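
The abstract does not define CPLP precisely; one plausible reading is a simple product of cost, power, and latency, where lower is better. The toy computation below follows that assumed definition with made-up placeholder numbers, not the paper's measurements.

```python
# Assumed CPLP definition: cost x power x latency (lower is better).
# All figures are hypothetical placeholders for illustration.
def cplp(cost_usd, power_w, latency_s):
    return cost_usd * power_w * latency_s

fpga = cplp(cost_usd=6.0, power_w=0.01, latency_s=0.002)   # hypothetical FPGA
soc = cplp(cost_usd=35.0, power_w=2.5, latency_s=0.001)    # hypothetical SoC
print(f"SoC/FPGA CPLP ratio: {soc / fpga:.1f}x")
```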


2021 ◽  
Vol 13 (17) ◽  
pp. 3479
Author(s):  
Maria Pia Del Rosso ◽  
Alessandro Sebastianelli ◽  
Dario Spiller ◽  
Pierre Philippe Mathieu ◽  
Silvia Liberata Ullo

In recent years, the growth of Machine Learning (ML) algorithms has raised the number of studies of their applicability in a variety of different scenarios. Among them, one of the hardest is aerospace, due to its peculiar physical requirements. In this context, this work presents a feasibility study with a prototype of an on-board Artificial Intelligence (AI) model and realistic testing equipment and scenario. As a case study, the detection of volcanic eruptions has been investigated, with the objective of swiftly producing alerts and allowing immediate interventions. Two Convolutional Neural Networks (CNNs) have been designed and realized from scratch, showing how to implement them efficiently for identifying eruptions while adapting their complexity to fit on-board requirements. The CNNs are then tested with experimental hardware, by means of a drone with a payload composed of a generic processing unit (Raspberry Pi), an AI processing unit (a Movidius stick), and a camera. The hardware employed to build the prototype is low cost, easy to find, and easy to use. Moreover, the dataset has been published on GitHub and made available to everyone. The results are promising and encourage the employment of the proposed system in future missions, given that ESA has already taken the first steps of on-board AI with the PhiSat-1 satellite, launched in September 2020.
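
As a rough illustration of the scale of network involved, here is a hedged sketch of a compact binary-classification CNN, assuming TensorFlow/Keras. This is not one of the paper's two architectures; the input size and layer widths are illustrative assumptions chosen to suit constrained on-board hardware.

```python
# A compact eruption / no-eruption CNN sketch (illustrative only).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # eruption probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# For the Movidius stick, a trained model would typically be converted
# (e.g., via OpenVINO) before on-board inference.
```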


Author(s):  
Medha Misra ◽  
Pawan Singh ◽  
Anil Kumar

Smart home is an emerging technology that is growing continuously. It integrates many new technologies through home networking to improve the quality and standard of human living, and many projects are researching diverse technologies in order to apply them to the smart home system. As technology evolves, it brings mankind additional ease and advancement; everyday devices are increasingly expected to operate over the internet, and this project is based on the idea of making such devices accessible to the owner anytime, anywhere. In particular, we intend to connect the electrical appliances in a house to a kind of local area network so that they can be operated by the respective authorities in order to minimize electricity wastage. Further, these devices, together with sensors, can be made to operate on their own, intelligently and accurately. The project can also be useful in hostels and classrooms, where electrical appliances are often found operating even when not necessary. The idea is to turn a house into a smart house. The project utilizes current technologies such as Wi-Fi and low-cost electronic modules to meet the requirements of IoT, and provides both a web and an app interface for ease of access. The technologies used include a Wi-Fi module, a relay module, a Raspberry Pi, and sensors for automatic support and feedback to the user. The project can be extended to the cloud for data storage and for providing access to authorized users.
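
To make the relay-over-the-network idea concrete, here is a minimal sketch assuming Flask and RPi.GPIO on a Raspberry Pi. The pin number and HTTP route are hypothetical assumptions, not the project's actual code.

```python
# Web-controlled relay sketch: one GPIO pin toggled via an HTTP route.
from flask import Flask
import RPi.GPIO as GPIO

RELAY_PIN = 17  # hypothetical BCM pin wired to the relay module
GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

app = Flask(__name__)

@app.route("/appliance/<state>")
def switch(state):
    # e.g. GET /appliance/on energizes the relay, anything else opens it
    GPIO.output(RELAY_PIN, GPIO.HIGH if state == "on" else GPIO.LOW)
    return {"relay": state}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # reachable on the local network
```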


This paper presents a low-cost automation system for textile industries in which colour and shape are detected and a pick-and-place robotic arm handles the samples. Edge detection techniques and a contour approximation algorithm are used for pattern detection. The main goal is to count the number of samples of each pattern or shape. The system makes use of a Raspberry Pi with a Pi camera, which captures images of the textiles being moved on a conveyor belt. The system is programmed on the OpenCV platform, and simulation results using the OpenCV environment coded in Python are presented.
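
A minimal sketch of the contour-approximation step, assuming OpenCV with Python, follows. The Canny thresholds and the vertex-count-to-shape mapping are illustrative; colour detection and the robotic arm control are omitted.

```python
# Count shapes by the vertex count of approximated contours.
import cv2

def count_shapes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    counts = {"triangle": 0, "rectangle": 0, "circle": 0}
    for c in contours:
        # Approximate the contour to a polygon; epsilon is 4% of perimeter.
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 3:
            counts["triangle"] += 1
        elif len(approx) == 4:
            counts["rectangle"] += 1
        elif len(approx) > 6:      # many vertices ~ round shape
            counts["circle"] += 1
    return counts
```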

