Effect Analysis of Low-Level Hardware Faults on Neural Networks using Emulated Inference

Author(s):  
Fin Hendrik Bahnsen ◽  
Vanessa Klebe ◽  
Goerschwin Fey

Author(s):  
Siddhivinayak Kulkarni

Developments in technology and the Internet have led to an increase in the number of digital images and videos, and thousands of images are added to the WWW every day. A content-based image retrieval (CBIR) system typically starts from a query image supplied by the user, from which low-level image features are extracted. These low-level features are used to find the images in the database that are most similar to the query image, ranked according to their similarity. This chapter evaluates various CBIR techniques based on fuzzy logic and neural networks and proposes a novel fuzzy approach to classify colour images based on their content, to pose queries in terms of natural language, and to fuse the queries with neural networks for fast and efficient retrieval. A number of classification and retrieval experiments were conducted on sets of images, and promising results were obtained.
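As a concrete illustration of the retrieval loop described above (query image, low-level feature extraction, similarity ranking), the following is a minimal sketch only: it assumes a simple colour-histogram feature and histogram-intersection similarity, and the function names are hypothetical rather than taken from the chapter.

```python
# Minimal CBIR sketch (illustrative only, not the chapter's system):
# extract a low-level colour histogram from each image and rank the
# database by histogram intersection with the query image.
import numpy as np

def color_histogram(image, bins=8):
    """Quantise an RGB image (H x W x 3, uint8) into a joint colour histogram."""
    pixels = (image.reshape(-1, 3) // (256 // bins)).astype(np.int64)  # quantise channels
    idx = pixels[:, 0] * bins * bins + pixels[:, 1] * bins + pixels[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()                              # normalise to a distribution

def retrieve(query, database, top_k=5):
    """Rank database images by histogram intersection with the query image."""
    q = color_histogram(query)
    scores = [np.minimum(q, color_histogram(img)).sum() for img in database]
    ranked = np.argsort(scores)[::-1]                     # most similar first
    return [(int(i), float(scores[i])) for i in ranked[:top_k]]

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(10)]
print(retrieve(db[3], db, top_k=3))                       # the query itself ranks first
```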



Author(s):  
Bernhard R. Kämmerer

We propose a method to incorporate the uncertainty of data into the computation process of neural networks. A measure of certainty is assigned to each input element in order to modulate that element's contribution to the overall input activity. The degree of certainty may result from knowledge about the sensor data (e.g. detectable hardware faults or information from preprocessing steps) or may be determined by previous neurons. The method is developed and studied within the scope of the perceptron model and tested on an image processing application.
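The core idea lends itself to a short sketch. The following is an illustration of certainty-modulated input, not the paper's exact formulation: each input element is scaled by a certainty value in [0, 1] before entering the perceptron's weighted sum, so inputs flagged as unreliable (e.g. by a detected hardware fault) contribute less.

```python
# Sketch of a certainty-modulated perceptron unit (an illustration of the
# idea, not the paper's exact formulation): each input x_i is scaled by a
# certainty value c_i in [0, 1] before the weighted sum, so unreliable
# inputs contribute less to the overall input activity.
import numpy as np

def certainty_perceptron(x, c, w, b):
    """Perceptron output with per-input certainty modulation."""
    x = np.asarray(x, dtype=float)
    c = np.asarray(c, dtype=float)        # 1.0 = fully trusted, 0.0 = ignore
    activation = np.dot(w, c * x) + b     # certainty gates each contribution
    return 1.0 if activation >= 0.0 else 0.0

# Example: the third input comes from a sensor flagged as faulty.
x = [0.9, 0.2, 5.0]        # the faulty channel reports a wild value
c = [1.0, 1.0, 0.0]        # its certainty is set to zero, so it is ignored
w = [0.5, -0.3, 0.1]
print(certainty_perceptron(x, c, w, b=-0.2))
```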


Author(s):  
Sucithra B. ◽  
Angelin Gladston

Plant leaf recognition has been carried out widely using low-level features, with scale-invariant feature transform (SIFT) techniques used to extract them. Leaves that match on low-level features but not from a semantic perspective cannot be recognized. To address this, global features have been extracted using convolutional neural networks. Even then, issues remain, such as leaf images captured under varying illumination, rotation, and viewing angles. To address such issues, the closeness between the low-level features and the global features is computed using multiple distance measures, and a leaf recognition framework is proposed on that basis. The matched patches are evaluated both quantitatively and qualitatively. The experimental results obtained are promising for the proposed closeness-based leaf recognition framework.
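A rough sketch of the closeness computation is given below. It only illustrates fusing several distance measures between feature vectors; the SIFT and CNN feature extractors are out of scope, so random vectors stand in for leaf features, and the fusion weights are assumptions rather than the chapter's values.

```python
# Illustrative sketch of the "closeness" idea: combine several distance
# measures between a query leaf's feature vector and each database entry.
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a - b)

def manhattan(a, b):
    return np.abs(a - b).sum()

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def closeness(query_feat, db_feats, weights=(0.4, 0.3, 0.3)):
    """Fuse multiple distance measures into one closeness score per database leaf."""
    measures = (euclidean, manhattan, cosine_distance)
    scores = []
    for feat in db_feats:
        fused = sum(w * m(query_feat, feat) for w, m in zip(weights, measures))
        scores.append(fused)
    return int(np.argmin(scores))           # smallest fused distance = closest leaf

rng = np.random.default_rng(1)
db = [rng.normal(size=128) for _ in range(5)]               # stand-ins for leaf features
print(closeness(db[2] + 0.01 * rng.normal(size=128), db))   # expected: 2
```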


2018 ◽  
pp. 112-119
Author(s):  
K. V. Panfilova ◽  
D. V. Vorotnev ◽  
R. V. Golovanov ◽  
S. V. Umnyashkin ◽  
I. O. Sharonov

There are many frameworks for building, training, and executing neural networks, each offering its own format for storing the network architecture. Two frameworks are considered in this paper: Caffe and Torch, which store the architecture of a neural network in Google Protocol Buffers (protobuf) and the built-in Torch format, respectively. The existence of different formats makes it difficult to port neural networks to the end-point devices of different vendors. To resolve these issues, the Khronos Group proposed the universal NNEF format, which acts as a mediator between frameworks and proprietary low-level libraries. The NNEF format stores the description of a neural network as a computational graph. This paper considers the two main approaches to developing an import (parsing) library for neural networks stored in NNEF: online and offline parsing. For each approach, advantages and disadvantages are noted, which should help developers choose the right way to implement an NNEF parser. The main advantage of the offline parser is simplicity of debugging, while that of the online parser is its low computational complexity.
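The offline/online distinction can be illustrated on a toy, NNEF-like text format (this is not the real NNEF grammar): the offline parser builds the whole computational graph in memory and only then executes it, while the online parser executes each operation as soon as its line has been parsed.

```python
# Toy illustration of offline vs. online parsing on a simplified,
# NNEF-like text format (NOT the real NNEF grammar).
# Each line has the shape:  output = op(input[, input])
graph_text = """\
a = input()
b = relu(a)
c = scale(b, 2.0)
"""

OPS = {
    "input": lambda env, args: 1.5,                       # stand-in input value
    "relu":  lambda env, args: max(0.0, env[args[0]]),
    "scale": lambda env, args: env[args[0]] * float(args[1]),
}

def parse_line(line):
    out, rhs = [s.strip() for s in line.split("=")]
    op, arglist = rhs.split("(", 1)
    args = [a.strip() for a in arglist.rstrip(")").split(",") if a.strip()]
    return out, op.strip(), args

def run_offline(text):
    """Offline parser: build the whole graph in memory first, execute afterwards."""
    graph = [parse_line(l) for l in text.strip().splitlines()]   # full in-memory graph
    env = {}
    for out, op, args in graph:
        env[out] = OPS[op](env, args)
    return env

def run_online(text):
    """Online parser: execute each operation as soon as its line is parsed."""
    env = {}
    for line in text.strip().splitlines():
        out, op, args = parse_line(line)
        env[out] = OPS[op](env, args)
    return env

print(run_offline(graph_text) == run_online(graph_text))   # True: same result
```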


Author(s):  
Nouma Izeboudjen ◽  
Ahcene Farah ◽  
Hamid Bessalah ◽  
Ahmed Bouridene ◽  
Nassim Chikhi

Artificial neural networks (ANNs) are systems derived from the field of neuroscience and characterized by intensive arithmetic operations. These networks display interesting features such as parallelism, classification, optimization, adaptation, generalization, and associative memory. Since the pioneering work of McCulloch and Pitts (1943), there has been much discussion on the implementation of ANNs, and a huge diversity of ANNs has been designed (Lindsey & Lindblad, 1994). The benefit of such implementations is well stated by Lippmann (1984): “The great interest of building neural networks remains in the high speed processing that can be achieved through massively parallel implementation”. In a later paper, Lindsey and Lindblad (1995) posed the real dilemma of hardware implementation: “Build a general, but probably expensive system that can be reprogrammed for several kinds of tasks, like CNAPS for example? Or build a specialized chip to do one thing but very quickly, like the IBM ZISC processor?” To overcome this dilemma, most researchers agree that an ideal solution should combine the performance obtained with specific hardware implementations and the flexibility offered by software tools and general-purpose chips.

Since their commercial introduction in the mid-1980s, and thanks to advances in both microelectronic technology and dedicated CAD tools, FPGA devices have progressed in an evolutionary and a revolutionary way. The evolutionary process has brought faster and bigger FPGAs, better CAD tools, and better technical support; the revolutionary process has introduced high-performance multipliers, microprocessors, and DSP functions. This has a direct impact on FPGA implementation of ANNs, and much research has been carried out to investigate the use of FPGAs for ANN implementation (Omondi & Rajapakse, 2006). Another attractive key feature of FPGAs is their flexibility, which can be obtained at different levels: exploitation of the programmability of the FPGA, dynamic or run-time reconfiguration (RTR) (Xilinx XAPP290, 2004), and application of the design-for-reuse concept (Keating & Bricaud, 2002). However, a big disadvantage of FPGAs is the low-level, hardware-oriented programming model needed to fully exploit their potential performance. High-level VHDL synthesis tools have been proposed to bridge the gap between high-level application requirements and the low-level FPGA hardware, but these tools are not algorithm- or application-specific. Special concepts therefore need to be developed for automatic ANN implementation before synthesis tools can be used.

In this paper, we present a high-level design methodology for ANN implementation that attempts to build a bridge between the synthesis tool and the ANN design requirements. The method offers high flexibility in the design while meeting speed/area performance constraints. Three implementation variants of the back-propagation-based ANN are considered: the off-chip implementation, the on-chip global implementation, and the dynamically reconfigured ANN. To achieve our goal, a design-for-reuse strategy has been applied. To validate the approach, three case studies are carried out using Virtex-II and Virtex-4 FPGA devices; a comparative study is performed and new conclusions are drawn.
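As a small illustration of the kind of bit-accurate reference model that typically precedes an HDL implementation of such a network, the sketch below models a single fixed-point neuron in software. The Q4.12 format, saturation behaviour, and hard-limiting activation are assumptions chosen for illustration, not the chapter's actual design parameters.

```python
# Fixed-point reference model of one ANN neuron, of the kind often written
# as a software golden model before committing a design to VHDL on an FPGA.
# Q4.12 format and 16-bit saturation are illustrative assumptions.
FRAC_BITS = 12                     # Q4.12 fixed point
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * SCALE))

def saturate(x, bits=16):
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, x))

def neuron_fixed(inputs, weights, bias):
    """MAC-style neuron: accumulate products, rescale, saturate, hard-limit."""
    acc = to_fixed(bias) << FRAC_BITS          # keep the bias at product scale
    for x, w in zip(inputs, weights):
        acc += to_fixed(x) * to_fixed(w)       # products are at 2*FRAC_BITS scale
    out = saturate(acc >> FRAC_BITS)           # back to Q4.12, then saturate
    return 1 if out >= 0 else 0                # hard-limiting activation

print(neuron_fixed([0.5, -0.25], [0.8, 0.4], bias=0.1))   # expected: 1
```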

