inference engines
Recently Published Documents


TOTAL DOCUMENTS: 89 (five years: 22)

H-INDEX: 11 (five years: 2)

Author(s):  
Anuj Dubey ◽  
Afzal Ahmad ◽  
Muhammad Adeel Pasha ◽  
Rosario Cammarota ◽  
Aydin Aysu

Intellectual Property (IP) theft of trained machine learning (ML) models through side-channel attacks on inference engines is becoming a major threat. Indeed, several recent works have shown reverse engineering of model internals using such attacks, but research on building defenses remains largely unexplored. There is a critical need to efficiently and securely port defenses from cryptography, such as masking, to ML frameworks. Existing works, however, revealed that a straightforward adaptation of such defenses either provides only partial security or leads to high area overheads. To address those limitations, this work proposes a fundamentally new direction: constructing neural networks that are inherently more compatible with masking. The key idea is to use modular arithmetic in neural networks and then efficiently realize masking, in either Boolean or arithmetic fashion, depending on the type of neural network layer. We demonstrate our approach on edge-computing-friendly binarized neural networks (BNN) and show how to modify the training and inference of such a network to work with modular arithmetic without sacrificing accuracy. We then design novel masking gadgets using Domain-Oriented Masking (DOM) to efficiently mask the operations unique to ML, such as the activation function and the output-layer classification, and we prove their security in the glitch-extended probing model. Finally, we implement fully masked neural networks on an FPGA, quantify that they can achieve a similar latency while reducing the FF and LUT costs over the state-of-the-art protected implementations by 34.2% and 42.6%, respectively, and demonstrate their first-order side-channel security with up to 1M traces.
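The key idea above, using modular arithmetic so that additive masking composes cleanly with linear layers, can be illustrated with a minimal first-order arithmetic-masking sketch. The modulus, weights, and function names here are illustrative assumptions, not the paper's actual gadget design:

```python
import random

Q = 2**8  # modulus for the modular arithmetic; an illustrative choice

def mask(x):
    """Split a secret value into two additive shares mod Q (first-order masking)."""
    r = random.randrange(Q)
    return (x - r) % Q, r

def masked_dot(w, x_shares):
    """Dot product with public weights over masked activations.
    Linearity mod Q lets each share be processed independently,
    so the secret activations are never recombined in the clear."""
    s0 = sum(wi * a0 for wi, (a0, _) in zip(w, x_shares)) % Q
    s1 = sum(wi * a1 for wi, (_, a1) in zip(w, x_shares)) % Q
    return s0, s1  # shares of the dot product

def unmask(shares):
    return sum(shares) % Q

w = [3, 251, 7]
x = [5, 2, 9]
x_shares = [mask(v) for v in x]
assert unmask(masked_dot(w, x_shares)) == sum(wi * xi for wi, xi in zip(w, x)) % Q
```

The non-linear parts (activation, output classification) are exactly where this simple trick stops working, which is why the paper designs dedicated DOM gadgets for them.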


2021 ◽  
Vol 4 (5) ◽  
pp. 356-362
Author(s):  
Johanes Perdamenta Sembiring ◽  
Jonson Manurung

Data on 12 diseases were obtained from the agriculture and horticulture department of Deli Serdang, namely cutworms, thrips, leaf caterpillars, armyworms, aphids, root nematodes, purple blotch, powdery mildew, stem neck rot, anthracnose, fusarium wilt, bulb rot, and dead shoots (Phytophthora porri Foister), along with 30 symptoms of disease in shallot plants. An expert system is a computer program that provides expert advice (decisions, recommendations, or problem solving) as if a human expert had been consulted. The backward chaining method is the opposite of forward chaining: it starts with a hypothesis (an object) and asks for information to confirm or reject it, which is why backward-chaining inference engines are often called object-driven or goal-driven. In this study, a web-based expert system was designed using the backward chaining method to detect the type of disease in shallot plants based on the symptoms the plants exhibit. Using backward chaining, the designed expert system yields a certainty value for each disease in shallot plants. The test results concluded that the expert system for diagnosing diseases in shallots using the backward chaining method performed as expected.
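The goal-driven behavior described above can be sketched in a few lines: a hypothesis (a candidate disease) is confirmed only when every symptom its rule requires is among the observed facts. The rule contents below are hypothetical placeholders, not the paper's actual knowledge base:

```python
# Minimal backward-chaining sketch: start from a hypothesis (a disease)
# and check whether the observed symptoms satisfy its rule.
RULES = {
    "purple blotch": {"purple lesions on leaves", "white spots turning brown"},
    "fusarium wilt": {"yellowing leaves", "twisted leaves", "rotten basal plate"},
}

def backward_chain(goal, facts):
    """Return True if every symptom required by `goal` is among the observed facts."""
    required = RULES.get(goal, set())
    return bool(required) and required <= set(facts)

observed = {"yellowing leaves", "twisted leaves", "rotten basal plate"}
assert backward_chain("fusarium wilt", observed)
assert not backward_chain("purple blotch", observed)
```

In an interactive system, the engine would instead ask the user for each missing symptom in turn rather than receiving the full fact set up front; the paper additionally attaches a certainty value to each conclusion.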


Author(s):  
Xinhui Lai ◽  
Thomas Lange ◽  
Aneesh Balakrishnan ◽  
Dan Alexandrescu ◽  
Maksim Jenihhin

2021 ◽  
Vol 14 (1) ◽  
pp. 1-10
Author(s):  
Iin Intan Uljanah ◽  
Shofwatul Uyun

Determining the land suitability class of plants, specifically cocoa (Theobroma cacao), is important because each plant has different growth characteristics. This research aims at implementing an algorithm to determine the land suitability class of cocoa plants using the Multi-Layer Inference Fuzzy Tsukamoto (MLIFT) method. The research uses 18 input variables, 15 of which are non-linguistic (crisp) and the rest linguistic (fuzzy), as the data on the growth requirements of cocoa plants. The algorithm consists of three main steps, namely fuzzification, the Tsukamoto inference engine, and defuzzification, organized in three layers. The first layer contains seven inference engines, while the second and third layers each contain one. The inference process in Fuzzy Tsukamoto computes the weighted average of the results of each rule's inference. Based on the testing results, it can be concluded that the multi-layer inference Fuzzy Tsukamoto for determining the land suitability class of cocoa plants achieves an accuracy of 97%.
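The Tsukamoto scheme described above (monotone consequent membership functions, with the final output a firing-strength-weighted average of each rule's crisp result) can be sketched on a toy single-layer case. The two rules, variable names, and ranges below are invented for illustration and are not the paper's 18-variable rule base:

```python
def mu_up(x, a, b):
    """Monotonically increasing membership on [a, b]."""
    if x <= a: return 0.0
    if x >= b: return 1.0
    return (x - a) / (b - a)

def mu_down(x, a, b):
    """Monotonically decreasing membership on [a, b]."""
    return 1.0 - mu_up(x, a, b)

def tsukamoto(rainfall_mm, temperature_c):
    """Toy Tsukamoto inference mapping two inputs to a suitability score in [0, 100].
    Each rule's firing strength is inverted through its monotone consequent
    membership to get a crisp value; results are combined by weighted average."""
    # Rule 1: IF rainfall is HIGH AND temperature is WARM THEN suitability is HIGH
    a1 = min(mu_up(rainfall_mm, 1000, 2000), mu_up(temperature_c, 20, 28))
    z1 = 100 * a1          # inverse of the increasing consequent membership
    # Rule 2: IF rainfall is LOW THEN suitability is LOW
    a2 = mu_down(rainfall_mm, 1000, 2000)
    z2 = 100 * (1 - a2)    # inverse of the decreasing consequent membership
    total = a1 + a2
    return (a1 * z1 + a2 * z2) / total if total else 0.0
```

In the multi-layer variant, the crisp outputs of the seven first-layer inference engines would themselves feed the single engines of the second and third layers.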


Author(s):  
Weisi Luo ◽  
Dong Chai ◽  
Xiaoyue Run ◽  
Jiang Wang ◽  
Chunrong Fang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2984
Author(s):  
Pierre-Emmanuel Novac ◽  
Ghouthi Boukli Hacene ◽  
Alain Pegatoquet ◽  
Benoît Miramond ◽  
Vincent Gripon

Embedding Artificial Intelligence onto low-power devices is a challenging task that has been partly overcome with recent advances in machine learning and hardware design. Presently, deep neural networks can be deployed on embedded targets to perform different tasks such as speech recognition, object detection or Human Activity Recognition. However, there is still room for optimization of deep neural networks on embedded devices. These optimizations mainly address power consumption, memory and real-time constraints, but also easier deployment at the edge. Moreover, there is still a need for a better understanding of what can be achieved for different use cases. This work focuses on quantization and deployment of deep neural networks onto low-power 32-bit microcontrollers. The quantization methods relevant in the context of embedded execution on a microcontroller are first outlined. Then, a new framework for end-to-end deep neural network training, quantization and deployment is presented. This framework, called MicroAI, is designed as an alternative to existing inference engines (TensorFlow Lite for Microcontrollers and STM32Cube.AI). Our framework can indeed be easily adjusted and/or extended for specific use cases. Execution using single-precision 32-bit floating-point as well as fixed-point on 8- and 16-bit integers is supported. The proposed quantization method is evaluated with three different datasets (UCI-HAR, Spoken MNIST and GTSRB). Finally, a comparison study between MicroAI and both existing embedded inference engines is provided in terms of memory and power efficiency. On-device evaluation is done using ARM Cortex-M4F-based microcontrollers (Ambiq Apollo3 and STM32L452RE).
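The 8- and 16-bit fixed-point execution mentioned above rests on mapping floats to scaled integers. A minimal sketch of signed fixed-point quantization with saturation follows; the function names and the specific Qm.n convention are illustrative assumptions, not MicroAI's actual implementation:

```python
def quantize_q(x, frac_bits, width):
    """Quantize a float to a signed fixed-point integer with `frac_bits`
    fractional bits, saturating to the `width`-bit two's-complement range
    (e.g. Q7 on int8: width=8, frac_bits=7, representable range [-1, 1))."""
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    q = round(x * (1 << frac_bits))
    return max(lo, min(hi, q))

def dequantize_q(q, frac_bits):
    """Recover the approximate float value from its fixed-point representation."""
    return q / (1 << frac_bits)

# int8/Q7 examples
assert quantize_q(0.5, 7, 8) == 64        # 0.5 * 2^7
assert quantize_q(1.5, 7, 8) == 127       # saturates at the int8 maximum
assert dequantize_q(64, 7) == 0.5
```

On a Cortex-M4F, such integer representations allow layers to run with SIMD integer instructions instead of floating-point operations, which is what makes the 8/16-bit modes attractive for memory and power.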


2021 ◽  
Author(s):  
Adedamola Wuraola ◽  
Nitish Patel ◽  
Sing Kiong Nguang
