Efficient Systolic-Array Redundancy Architecture for Offline/Online Repair

Electronics, 2020, Vol. 9 (2), pp. 338
Author(s): Keewon Cho, Ingeol Lee, Hyeonchan Lim, Sungho Kang

Neural-network computing has revolutionized the field of machine learning. The systolic-array architecture is widely used to accelerate neural-network computing and was adopted by Google in its Tensor Processing Unit (TPU). To ensure the correct operation of the neural network, the reliability of the systolic-array architecture must be guaranteed. This paper proposes an efficient systolic-array redundancy architecture based on partitioning the systolic array and rearranging the connections of its elements. The proposed architecture supports both offline and online repair through an extended redundancy architecture and programmable fuses, and it can ensure reliability even in online situations, which previous fault-tolerant schemes have not considered.
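As a rough, hypothetical illustration of the general idea of spare-column redundancy in a systolic array (not the specific partitioning and connection-rearranging scheme proposed in the paper), the sketch below remaps the output columns of a matrix multiplication around a faulty processing-element (PE) column, the way a programmable fuse would steer data around a defective column in hardware:

```python
# Generic sketch of spare-column redundancy for a systolic-array matrix
# multiply. Hypothetical illustration only; the paper's scheme partitions
# the array and rearranges element connections, which is not modelled here.
import numpy as np

def column_remap(n_logical, faulty_col=None):
    """Map each logical output column to a physical PE column.

    With one spare physical column, a defective column (marked by a
    programmable fuse in hardware) is skipped and all later columns
    shift over by one.
    """
    mapping, p = [], 0
    for _ in range(n_logical):
        if p == faulty_col:
            p += 1                        # bypass the faulty column
        mapping.append(p)
        p += 1
    return mapping

def matmul_with_spare(A, B, faulty_col=None):
    """Compute A @ B column by column, routing around a faulty PE column."""
    n = B.shape[1]
    mapping = column_remap(n, faulty_col)
    assert max(mapping) <= n              # must fit in n + 1 physical columns
    C = np.zeros((A.shape[0], n))
    for j in range(n):
        # Physical column mapping[j] performs the MACs for logical column j;
        # the arithmetic is identical, only the routing changes.
        C[:, j] = A @ B[:, j]
    return C

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(matmul_with_spare(A, B, faulty_col=2), A @ B)
```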

Author(s): Shiyu Yang, Kuangrong Hao, Yongsheng Ding, Jian Liu

Today, the development of self-driving technology plays a key role in the construction of smart cities. The explosion of convolutional neural network (CNN) technology has made it possible to perform end-to-end tasks on images. However, today’s CNNs are deeper and more accurate, and without improved computation methods that reduce the number of network parameters, this makes it very difficult to run neural-network computing on small devices. In this paper, we further optimize the network computation methods based on MobileNets to reduce the number of network parameters. At the same time, we add BatchNormalization and the Swish activation function to the network structure. We design our own network for end-to-end prediction of the steering angle in the self-driving-car task. The final simulation results show that our network’s storage footprint can be reduced and its execution speed improved while maintaining accuracy.
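A minimal sketch, assuming PyTorch, of the kind of building block such a network might use: a MobileNets-style depthwise-separable convolution with BatchNormalization and the Swish activation (x · sigmoid(x)) mentioned in the abstract. Layer sizes and the class names are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical depthwise-separable block with BatchNorm + Swish, in the
# spirit of MobileNets; sizes are illustrative only.
import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)

class DepthwiseSeparableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch),
        # followed by a 1x1 pointwise convolution. This factorisation is
        # what cuts the parameter count versus a standard 3x3 convolution.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = Swish()

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

# Example: features from a 3-channel road image, feeding a steering-angle head.
block = DepthwiseSeparableBlock(3, 32)
features = block(torch.randn(1, 3, 64, 64))   # -> (1, 32, 64, 64)
```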


2021, Vol. 0 (0)
Author(s): Idris Kharroubi, Thomas Lim, Xavier Warin

Abstract We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE. For that, we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks, under constraints on the neural network and its derivative. We then derive an algorithm that converges to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.
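For orientation, a hedged sketch of the standard objects involved (the notation here is generic, taken from the constrained-BSDE literature, and is not necessarily that of the paper): a BSDE whose gains process Z is constrained to a convex set C, together with the face-lift operator built from the support function of C.

```latex
% Generic constrained BSDE and face-lift operator; illustrative notation only.
\begin{aligned}
Y_t &= g(X_T) + \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s
       + K_T - K_t - \int_t^T Z_s\,\mathrm{d}W_s,\\
Z_t &\in C \ \text{ for a.e. } t \in [0,T], \qquad K \text{ nondecreasing},\\
F_C[\varphi](x) &= \sup_{u \in \mathbb{R}^d}
       \bigl\{ \varphi(x+u) - \delta_C(u) \bigr\},
\qquad \delta_C(u) := \sup_{z \in C} z \cdot u .
\end{aligned}
```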


2021, Vol. 5 (1)
Author(s): Samuel Maddrell-Mander, Lakshan Ram Madhan Mohan, Alexander Marshall, Daniel O’Hanlon, Konstantinos Petridis, ...

Abstract This paper presents the first study of Graphcore’s Intelligence Processing Unit (IPU) in the context of particle physics applications. The IPU is a new type of processor optimised for machine learning. Comparisons are made for neural-network-based event simulation, multiple-scattering correction, and flavour tagging, implemented on IPUs, GPUs and CPUs, using a variety of neural network architectures and hyperparameters. Additionally, a Kálmán filter for track reconstruction is implemented on IPUs and GPUs. The results indicate that IPUs hold considerable promise in addressing the rapidly increasing compute needs in particle physics.
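For context, the Kálmán filter mentioned in the abstract reduces to a predict/update recursion applied once per measurement. The NumPy sketch below is a generic single-track illustration of that recursion; it is not drawn from the paper's batched IPU/GPU implementations, and all symbols are standard Kalman-filter notation rather than the authors' code.

```python
# Generic single-step Kalman filter (predict + update), as used conceptually
# in track reconstruction; illustrative only.
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle.
    x, P : state estimate and covariance
    F, Q : state-transition model and process noise
    H, R : measurement model and measurement noise
    z    : observed measurement (e.g. a detector hit position)
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```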


Terminology, 2022
Author(s): Ayla Rigouts Terryn, Véronique Hoste, Els Lefever

Abstract As with many tasks in natural language processing, automatic term extraction (ATE) is increasingly approached as a machine learning problem. So far, most machine learning approaches to ATE broadly follow the traditional hybrid methodology: first extracting a list of unique candidate terms, then classifying these candidates based on the predicted probability that they are valid terms. However, with the rise of neural networks and word embeddings, the next development in ATE might be towards sequential approaches, i.e., classifying each occurrence of each token within its original context. To test the validity of such approaches for ATE, two sequential methodologies were developed, evaluated, and compared: a feature-based conditional random fields classifier and an embedding-based recurrent neural network. An additional comparison was made with a machine learning interpretation of the traditional approach. All systems were trained and evaluated on identical data in multiple languages and domains to identify their respective strengths and weaknesses. The sequential methodologies proved to be valid approaches to ATE, and the neural network even outperformed the more traditional approach. Interestingly, a combination of multiple approaches can outperform all of them separately, pointing to new ways to push the state of the art in ATE.
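A minimal sketch, assuming PyTorch, of what a sequential (token-level) formulation of ATE can look like: every token occurrence is labelled in context, for example with BIO-style tags (B-Term / I-Term / O), by an embedding layer feeding a recurrent network. The architecture, sizes and tag set are illustrative assumptions, not the authors' system.

```python
# Hypothetical BiLSTM tagger for sequential term extraction; illustrative only.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_tags=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)   # one score per BIO tag

    def forward(self, token_ids):                  # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                         # (batch, seq_len, n_tags)

# Each occurrence of a token gets its own prediction, so the same word can be
# part of a term in one context and not in another.
model = BiLSTMTagger(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (2, 12)))  # 2 sentences, 12 tokens
```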

