binary neural network
Recently Published Documents

TOTAL DOCUMENTS: 131 (FIVE YEARS: 72)
H-INDEX: 8 (FIVE YEARS: 3)

2021 ◽  
Vol 20 (5s) ◽  
pp. 1-24
Author(s):  
Febin P. Sunny ◽  
Asif Mirza ◽  
Mahdi Nikdast ◽  
Sudeep Pasricha

Domain-specific neural network accelerators have garnered attention because of their improved energy efficiency and inference performance compared to CPUs and GPUs. Such accelerators are thus well suited for resource-constrained embedded systems. However, mapping sophisticated neural network models on these accelerators still entails significant energy and memory consumption, along with high inference time overhead. Binarized neural networks (BNNs), which utilize single-bit weights, represent an efficient way to implement and deploy neural network models on accelerators. In this paper, we present a novel optical-domain BNN accelerator, named ROBIN, which intelligently integrates heterogeneous microring resonator optical devices with complementary capabilities to efficiently implement the key functionalities in BNNs. We perform detailed fabrication-process variation analyses at the optical device level, explore efficient corrective tuning for these devices, and integrate circuit-level optimization to counter thermal variations. As a result, our proposed ROBIN architecture possesses the desirable traits of being robust, energy-efficient, low-latency, and high-throughput when executing BNN models. Our analysis shows that ROBIN can outperform the best-known optical BNN accelerators and many electronic accelerators. Specifically, our energy-efficient ROBIN design exhibits energy-per-bit values that are ∼4× lower than electronic BNN accelerators and ∼933× lower than a recently proposed photonic BNN accelerator, while a performance-efficient ROBIN design shows ∼3× and ∼25× better performance than electronic and photonic BNN accelerators, respectively.
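The abstract describes photonic hardware, but the core arithmetic that any BNN accelerator implements reduces to dot products over single-bit values. Below is a minimal NumPy sketch of the XNOR-popcount formulation commonly used for {-1, +1} weights and activations; the function names and vector sizes are illustrative assumptions, not taken from the ROBIN paper.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors via XNOR and popcount.

    Encoding +1 as bit 1 and -1 as bit 0, the dot product equals
    2 * popcount(XNOR(a, w)) - n, where n is the vector length.
    """
    a = a_bits > 0
    w = w_bits > 0
    matches = np.count_nonzero(~(a ^ w))  # XNOR, then popcount
    n = a_bits.size
    return 2 * matches - n

rng = np.random.default_rng(0)
x = binarize(rng.standard_normal(64))
w = binarize(rng.standard_normal(64))
assert xnor_popcount_dot(x, w) == int(np.dot(x, w))
```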


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2600
Author(s):  
Yiyang Zhao ◽  
Yongjia Wang ◽  
Ruibo Wang ◽  
Yuan Rong ◽  
Xianyang Jiang

Since the memristor was discovered, it has shown great application potential in neuromorphic computing. Currently, most memristor-based neural networks exploit the special analog characteristics of the device. However, owing to manufacturing-process limitations, non-ideal characteristics such as non-linearity, asymmetry, and inconsistent device periodicity appear frequently, making it a challenge to employ memristors at scale. In contrast, a binary neural network (BNN) requires its weights to be either +1 or −1, which can be mapped onto digital memristors with high technical maturity. Building on this, a highly robust BNN inference accelerator with a binary sigmoid activation function is proposed. In the accelerator, the inputs of each network layer are either +1 or 0, which facilitates feature encoding and reduces the peripheral circuit complexity of the memristor hardware. The proposed two-column reference memristor structure, together with a current-controlled voltage source (CCVS) circuit, not only solves the problem of mapping positive and negative weights onto the memristor array but also eliminates the sneak-current effect under the minimum conductance status. Compared to the traditional differential-pair structure of a BNN, the proposed two-column reference scheme reduces both the number of memristors and the latency to refresh the memristor array by nearly 50%. The influence of non-ideal factors of the memristor array, such as array yield, conductance fluctuation, and reading noise, on the accuracy of the BNN is investigated in detail based on a newly developed memristor circuit model with non-ideal characteristics. The experimental results demonstrate that when the array yield α ≥ 5%, or the reading noise σ ≤ 0.25, a recognition accuracy greater than 97% is achieved on the MNIST data set.
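The paper's circuit details are not reproduced here; the following is a rough behavioral sketch of how a two-column reference scheme can produce signed outputs from {0, +1} inputs and {-1, +1} weights. All conductance values, names, and the idealized linear crossbar model are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Illustrative parameters (hypothetical values, not from the paper)
G_ON, G_OFF = 1.0e-4, 1.0e-6   # high / low conductance states (S)
G_REF = (G_ON + G_OFF) / 2.0   # reference-column conductance

def map_weights(w):
    """Map {-1, +1} weights to conductances: +1 -> G_ON, -1 -> G_OFF."""
    return np.where(w > 0, G_ON, G_OFF)

def column_mac(v_in, w):
    """Signed multiply-accumulate using a shared reference column.

    v_in: input voltages encoded in {0, +1}
    The signed dot product is proportional to the difference between the
    weight-column current and the reference-column current.
    """
    G = map_weights(w)
    i_col = v_in @ G                       # weight-column current
    i_ref = v_in @ np.full_like(G, G_REF)  # reference-column current
    return i_col - i_ref

rng = np.random.default_rng(1)
w = np.where(rng.random(8) > 0.5, 1, -1)
x = (rng.random(8) > 0.5).astype(float)    # {0, +1} inputs
# The column difference is proportional to the signed dot product x . w
assert np.isclose(column_mac(x, w), x @ (w * (G_ON - G_OFF) / 2))
```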


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6868
Author(s):  
Wenzhe Liu ◽  
Jiehua Zhang ◽  
Zhuo Su ◽  
Zhongzhu Zhou ◽  
Li Liu

As is well known, defects directly affect the lifetime and function of the machines in which they occur, and can even cause potentially catastrophic casualties. Therefore, quality assessment before mounting is an indispensable requirement for factories. Beyond recognition accuracy, current networks suffer from excessive computational complexity, making them difficult to deploy in the manufacturing process. To address these issues, this paper introduces binary networks into the area of surface defect detection for the first time, since binary networks constrain weights and activations to +1 and −1. The proposed Bi-ShuffleNet and U-BiNet use binary convolution layers and low-bitwidth activations to reach comparable performance while incurring much lower computational cost. Extensive experiments conducted on the real-life NEU and Magnetic Tile datasets show that the proposed networks require the fewest operations (OPs) with little accuracy decline. When classifying defects, Bi-ShuffleNet yields results comparable to counterpart networks, with at least a 2× reduction in inference complexity. Defect segmentation results show similar trends. Some network design rules for defect detection with binary networks are also summarized in this paper.
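As a generic illustration of the binary convolution building block such networks rely on (not the actual Bi-ShuffleNet or U-BiNet code), the following PyTorch sketch binarizes weights and activations with a sign function and a clipped straight-through estimator for gradients; layer sizes and class names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a clipped straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients through only where |x| <= 1
        return grad_out * (x.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Conv layer whose weights and inputs are binarized to {-1, +1}."""
    def forward(self, x):
        bw = BinarizeSTE.apply(self.weight)
        bx = BinarizeSTE.apply(x)
        return F.conv2d(bx, bw, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

layer = BinaryConv2d(3, 16, kernel_size=3, padding=1, bias=False)
y = layer(torch.randn(1, 3, 32, 32))   # output shape: (1, 16, 32, 32)
```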


2021 ◽  
Author(s):  
José Yauri ◽  
Aura Hernández-Sabaté ◽  
Paul Folch ◽  
Débora Gil

The study of mental workload is essential for human work efficiency and health, and for avoiding accidents, since workload compromises both performance and awareness. Although workload has been widely studied using several physiological measures, minimising the sensor network as much as possible remains both a challenge and a requirement. Electroencephalogram (EEG) signals have shown a high correlation to specific cognitive and mental states such as workload. However, there is not enough evidence in the literature to validate how well models generalize to new subjects performing tasks with a workload similar to those included during the model's training. In this paper we propose a binary neural network to classify EEG features across different mental workloads. Two workloads, low and medium, are induced using two variants of the N-Back Test. The proposed model was validated on a dataset collected from 16 subjects and showed a high level of generalization capability: the model reported an average recall of 81.81% in a leave-one-subject-out evaluation.
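A leave-one-subject-out evaluation of the kind reported above can be sketched as follows. The feature matrix, labels, and the stand-in classifier (a scikit-learn logistic regression rather than the paper's neural network) are placeholders for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

def leave_one_subject_out(X, y, subjects,
                          make_model=lambda: LogisticRegression(max_iter=1000)):
    """Average recall over folds where each subject is held out in turn.

    X: (n_samples, n_features) EEG feature matrix
    y: (n_samples,) binary workload labels (0 = low, 1 = medium)
    subjects: (n_samples,) subject identifier per sample
    """
    recalls = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        model = make_model().fit(X[train], y[train])
        recalls.append(recall_score(y[test], model.predict(X[test])))
    return float(np.mean(recalls))
```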


2021 ◽  
Author(s):  
Jiadong Chen ◽  
Shiping Wen ◽  
Kaibo Shi ◽  
Yin Yang

2021 ◽  
Vol 117 ◽  
pp. 102156
Author(s):  
Shengyu He ◽  
Haitao Meng ◽  
Zhaoheng Zhou ◽  
Yongjun Liu ◽  
Kai Huang ◽  
...  
