error resilience
Recently Published Documents

TOTAL DOCUMENTS: 370 (FIVE YEARS: 56)
H-INDEX: 17 (FIVE YEARS: 3)

2022 ◽  
Vol 27 (2) ◽  
pp. 1-30
Author(s):  
Jaechul Lee ◽  
Cédric Killian ◽  
Sebastien Le Beux ◽  
Daniel Chillet

The energy consumption of manycore architectures is dominated by data movement, which calls for energy-efficient and high-bandwidth interconnects. To overcome the bandwidth limitation of electrical interconnects, integrated optics appears as a promising technology. However, it suffers from a high power overhead related to low laser efficiency, which calls for techniques and methods to reduce its energy cost. Meanwhile, approximate computing is emerging as an efficient way to reduce energy consumption and improve execution speed in embedded computing systems. It allows reduced accuracy on data at the cost of a tolerable application output error. In this context, the work presented in this article exploits both features by defining approximate communications for error-tolerant applications. We propose a method to design a realistic and scalable nanophotonic interconnect supporting approximate data transmission and power adaptation according to the communication distance, thereby improving energy efficiency. For this purpose, data can be sent by combining a low optical power signal with truncation of the Least Significant Bits (LSB) of floating-point numbers, while the overall power is adapted to the communication distance. We define two communication ranges, short and long, which require only four power levels; this reduces the area and power overhead of controlling the laser output power. A transmission model estimates the laser power required for a targeted BER and number of truncated bits, while the optical network interface allows configuring, at runtime, the number of approximated and truncated bits and the laser output powers. We explore the energy efficiency provided by each communication scheme, and we investigate the error resilience of the benchmarks over several approximation and truncation schemes. Simulation results on ApproxBench applications show that, compared to an interconnect using only robust communications, approximating the optical transmission reduces laser power by up to 53% with limited degradation at the application level, i.e., less than 9% output error. Finally, we show that our solution is scalable and yields a 10% reduction in total energy consumption, a 35× reduction in laser driver size, and a 10× reduction in the laser controller compared to a state-of-the-art solution.
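
As a rough software illustration of the data-approximation side of such a scheme (not the authors' hardware model), the sketch below zeroes a configurable number of least-significant mantissa bits of a 32-bit float and picks one of four assumed power levels from the communication distance; the names truncate_lsb and select_power_level, the two-range distance threshold, and the power values are hypothetical.

    import struct

    def truncate_lsb(value: float, n_bits: int) -> float:
        """Zero out the n least-significant mantissa bits of a 32-bit float
        (meaningful for n_bits <= 23, the float32 mantissa width)."""
        raw = struct.unpack("<I", struct.pack("<f", value))[0]
        raw &= ~((1 << n_bits) - 1)          # clear the n lowest bits
        return struct.unpack("<f", struct.pack("<I", raw))[0]

    # Hypothetical four power levels: one (robust, approximate) pair per range.
    POWER_LEVELS_MW = {"short": (0.8, 0.3), "long": (1.6, 0.6)}

    def select_power_level(distance_hops: int, short_range_max: int = 4):
        """Pick the (robust, approximate) laser power pair for a given distance."""
        rng = "short" if distance_hops <= short_range_max else "long"
        return POWER_LEVELS_MW[rng]

    # Example: send 3.14159 with 12 truncated bits over an assumed 6-hop path.
    print(truncate_lsb(3.14159, 12), select_power_level(6))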


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Yuanyuan Chen ◽  
Sebastian Ecker ◽  
Lixiang Chen ◽  
Fabian Steinlechner ◽  
Marcus Huber ◽  
...  

High-dimensional quantum entanglement is currently one of the most prolific fields in quantum information processing due to its high information capacity and error resilience. A versatile method for harnessing high-dimensional entanglement has long been hailed as an absolute necessity in the exploration of quantum science and technologies. Here we exploit Hong-Ou-Mandel interference to manipulate discrete frequency entanglement in arbitrary-dimensional Hilbert space. The generation and characterization of two-, four- and six-dimensional frequency entangled qudits are theoretically and experimentally investigated, allowing for the estimation of entanglement dimensionality in the whole state space. Additionally, our strategy can be generalized to engineer higher-dimensional entanglement in other photonic degrees of freedom. Our results may provide a more comprehensive understanding of frequency shaping and interference phenomena, and pave the way to more complex high-dimensional quantum information processing protocols.
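
As a textbook-level toy model only (not the authors' experimental configuration), the sketch below evaluates the well-known quantum-beating form of the Hong-Ou-Mandel coincidence probability for a two-color frequency-entangled pair; the detuning, bandwidth, and Gaussian envelope are illustrative assumptions.

    import numpy as np

    def coincidence_probability(tau, dw, sigma):
        """Toy HOM model for (|w1,w2> + |w2,w1>)/sqrt(2): the coincidence rate
        beats at the detuning dw, damped by an assumed Gaussian bandwidth envelope."""
        envelope = np.exp(-(sigma * tau) ** 2 / 2.0)
        return 0.5 * (1.0 - envelope * np.cos(dw * tau))

    tau = np.linspace(-20e-12, 20e-12, 5)                  # relative delays in seconds
    print(coincidence_probability(tau, dw=2 * np.pi * 200e9, sigma=2 * np.pi * 50e9))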


Author(s):  
José A. Moríñigo ◽  
Andrés Bustos ◽  
Rafael Mayo-García

2021 ◽  
Vol 20 (5s) ◽  
pp. 1-25
Author(s):  
Elbruz Ozen ◽  
Alex Orailoglu

As deep learning algorithms are widely adopted, an increasing number of them are positioned in embedded application domains with strict reliability constraints. The expenditure of significant resources to satisfy performance requirements in deep neural network accelerators has thinned out the margins for delivering safety in embedded deep learning applications, thus precluding the adoption of conventional fault tolerance methods. The potential of exploiting the inherent resilience characteristics of deep neural networks nevertheless remains unexplored, offering a promising low-cost path towards safety in embedded deep learning applications. This work demonstrates such exploitation by combining a reduction of the vulnerability surface through proper design of the quantization scheme with shaping of the parameter distributions at each layer through appropriate training methods, thus delivering deep neural networks of high resilience purely through algorithmic modifications. Unequaled error resilience characteristics can thus be injected into safety-critical deep learning applications, tolerating high bit error rates at absolutely zero hardware, energy, and performance cost while even improving the error-free model accuracy.
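
To make this kind of evaluation concrete, a minimal fault-injection sketch (not the authors' framework) could flip random bits in signed 8-bit quantized weights and measure the resulting perturbation; the quantization scale, bit width, and the bit error rate used here are assumptions, and any training-time shaping of the parameter distributions is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def quantize_int8(weights: np.ndarray, scale: float) -> np.ndarray:
        """Uniform symmetric quantization to signed 8-bit integers."""
        return np.clip(np.round(weights / scale), -128, 127).astype(np.int8)

    def inject_bit_flips(q: np.ndarray, ber: float) -> np.ndarray:
        """Flip each stored bit independently with probability `ber`."""
        bits = np.unpackbits(q.view(np.uint8))
        flips = rng.random(bits.shape) < ber
        return np.packbits(bits ^ flips).view(np.int8).reshape(q.shape)

    # Toy layer with a narrow weight distribution (as training-based shaping would give).
    weights = rng.normal(0.0, 0.02, size=10_000)
    q = quantize_int8(weights, scale=0.02 * 4 / 127)
    faulty = inject_bit_flips(q, ber=1e-3)
    print("mean |error| after flips:", np.abs(faulty.astype(int) - q.astype(int)).mean())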


2021 ◽  
Vol 17 (4) ◽  
pp. 1-19
Author(s):  
Mahmoud Masadeh ◽  
Yassmeen Elderhalli ◽  
Osman Hasan ◽  
Sofiene Tahar

Machine learning is widely used these days to extract meaningful information from the zettabytes of sensor data collected daily. Many applications that require analyzing and understanding the data to identify trends, e.g., surveillance, exhibit some error tolerance. Approximate computing has emerged as an energy-efficient design paradigm that takes advantage of the intrinsic error resilience of a wide set of error-tolerant applications; inexact results can reduce power consumption, delay, area, and execution time. To increase the energy efficiency of machine learning on FPGAs, we consider approximation at the hardware level, e.g., approximate multipliers. However, errors in approximate computing heavily depend on the application, the applied inputs, and user preferences. Meanwhile, dynamic partial reconfiguration has been introduced as a key differentiating capability of recent FPGAs; it significantly reduces design area, power consumption, and reconfiguration time by adaptively changing a selected part of the FPGA design without interrupting the remaining system. Thus, integrating Dynamic Partial Reconfiguration (DPR) with Approximate Computing (AC) can significantly improve the efficiency of FPGA-based design approximation. In this article, we propose hardware-efficient quality-controlled approximate accelerators, which are suitable for FPGA-based machine learning algorithms as well as any error-resilient application. Experimental results on three case studies of image blending, audio blending, and image filtering demonstrate that the proposed adaptive approximate accelerator satisfies the required quality with an accuracy of 81.82%, 80.4%, and 89.4%, respectively. On average, the partial bitstream was found to be 28.6× smaller than the full bitstream.
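
As a software analogue of the quality-controlled approximation idea (the actual design is an FPGA accelerator whose configuration is swapped via DPR), the sketch below emulates a truncation-based approximate multiplier plus a monitor that falls back to a finer configuration when the observed relative error exceeds a user threshold; the truncation widths, the threshold, and the function names are assumptions.

    def approx_mul(a: int, b: int, drop_bits: int) -> int:
        """Truncation-based approximate multiply: drop low bits of both operands."""
        return ((a >> drop_bits) * (b >> drop_bits)) << (2 * drop_bits)

    def quality_controlled_mul(a: int, b: int, max_rel_err: float, levels=(4, 2, 0)):
        """Emulate DPR-style mode switching: try coarser approximations first and
        'reconfigure' to a finer one when the quality constraint is violated."""
        exact = a * b                      # reference, used here only to measure quality
        for drop in levels:
            approx = approx_mul(a, b, drop)
            rel_err = abs(approx - exact) / max(abs(exact), 1)
            if rel_err <= max_rel_err:
                return approx, drop        # this configuration satisfies the quality target
        return exact, 0

    print(quality_controlled_mul(12345, 6789, max_rel_err=0.01))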


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-22
Author(s):  
Uzair Sharif ◽  
Daniel Mueller-Gritschneder ◽  
Ulf Schlichtmann

Safety-critical embedded systems may either use specialized hardware or rely on Software-Implemented Hardware Fault Tolerance (SIHFT) to meet soft error resilience requirements. SIHFT has the advantage that it can be used with low-cost, off-the-shelf components such as standard Micro-Controller Units. To this end, SIHFT methods apply redundancy in the software computation and special checker codes to detect transient errors, so-called soft errors, that corrupt either the data flow or the control flow of the software and may lead to Silent Data Corruption (SDC). So far, this has been done by applying separate SIHFT methods for data flow and control flow protection, which leads to large overheads in computation time. This work, in contrast, presents REPAIR, a method that exploits the checks of SIHFT data flow protection to also detect control flow errors, thereby yielding higher SDC resilience with less computational overhead. The data flow protection methods duplicate the computation and place checks strategically throughout the program. These checks ensure that the two redundant computation paths, which work on two different parts of the register file, yield the same result. By updating the pairing between the registers used in the primary computation path and those used in the duplicated path, the REPAIR method makes these checks also fail with high coverage when a control flow error that leads to an illegal jump occurs. Extensive RTL fault injection simulations are carried out to accurately quantify soft error resilience on MiBench programs and an embedded case study running on an OpenRISC processor. On average, our method performs slightly better in terms of soft error resilience than the best state-of-the-art method while requiring significantly lower overheads. These results show that REPAIR is a valuable addition to the set of known SIHFT methods.
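
A very loose software illustration of the duplicate-and-check idea behind SIHFT data flow protection is sketched below (REPAIR itself works at the register level and additionally updates the register pairing to catch control flow errors, which this sketch does not model): each value is computed twice and a check compares the redundant copies before the result is used; the decorator name and exception type are made up for the example.

    class SilentDataCorruption(Exception):
        """Raised when the redundant computation paths disagree."""

    def protected(fn):
        """Run fn twice on the same inputs and compare the results; in real SIHFT
        the shadow computation uses a disjoint register set, so a soft error in
        either copy makes the comparison fail."""
        def wrapper(*args):
            primary = fn(*args)
            shadow = fn(*args)
            if primary != shadow:
                raise SilentDataCorruption(f"{fn.__name__}: {primary} != {shadow}")
            return primary
        return wrapper

    @protected
    def dot(xs, ys):
        return sum(x * y for x, y in zip(xs, ys))

    print(dot([1, 2, 3], [4, 5, 6]))      # checks pass, prints 32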


2021 ◽  
Author(s):  
Jianping Zeng ◽  
Hongjune Kim ◽  
Jaejin Lee ◽  
Changhee Jung

2021 ◽  
Vol 124 ◽  
pp. 114331
Author(s):  
Zhi Liu ◽  
Yuhong Liu ◽  
Zhengming Chen ◽  
Gang Guo ◽  
Haibin Wang

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5461
Author(s):  
Hameed Ullah Khan ◽  
Nasru Minallah ◽  
Arbab Masood ◽  
Amaad Khalil ◽  
Jaroslav Frnda ◽  
...  

The introduction of 5G, with its very high speeds and ever-advancing cellular device capabilities, has increased the demand for high-data-rate wireless multimedia communication. Data compression, transmission robustness, and error resilience are introduced to meet today's increased demand for high data rates. An innovative approach is to devise a setup of source bit codes (SBCs) that ensures convergence of the joint source-channel coding (JSCC) and correspondingly yields a lower bit error ratio (BER). The soft-bit-assisted source and channel codes are optimized jointly for optimum convergence. Source bit codes assisted by iterative detection are used with a rate-1 precoder to evaluate the performance of this scheme for transmitting data-partitioned (DP) H.264/AVC frames over a narrowband correlated Rayleigh fading channel. A novel combination of sphere packing (SP) modulation aided differential space-time spreading (DSTS) with SBC is designed for the video transmission to cope with channel fading. Furthermore, the effect of SBCs with different Hamming distances d(H,min) but similar coding rates on objective video quality, such as peak signal-to-noise ratio (PSNR), and on the overall bit error ratio (BER) is explored. EXtrinsic Information Transfer (EXIT) charts are used to analyze the convergence behavior of the SBC and its iterative scheme. Specifically, the experiments show that the proposed error protection scheme with SBC d(H,min) = 6 outperforms SBCs of the same code rate but with d(H,min) = 3 by 3 dB, at a PSNR degradation of 1 dB. Furthermore, simulation results show that a gain of 27 dB in Eb/N0 is achieved with the rate-1/3 SBC compared to the benchmark rate-1 SBC codes.
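
To make the role of d(H,min) tangible, the sketch below computes the minimum Hamming distance of a small, made-up rate-1/3 source bit code; the codeword table is illustrative only and is not the SBC used in the paper.

    from itertools import combinations

    def hamming(a: str, b: str) -> int:
        """Number of bit positions in which two equal-length codewords differ."""
        return sum(x != y for x, y in zip(a, b))

    # Hypothetical rate-1/3 mapping: 2 source bits -> 6 coded bits.
    sbc = {"00": "000000", "01": "011101", "10": "101011", "11": "110110"}

    d_min = min(hamming(a, b) for a, b in combinations(sbc.values(), 2))
    print("d(H,min) =", d_min)   # at a fixed rate, a larger d(H,min) gives stronger error protection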

