Selection of an error-correcting code for FPGA-based physical unclonable functions

Author(s):  
Brian Jarvis ◽  
Kris Gaj
Author(s):  
Emanuele Strieder ◽  
Christoph Frisch ◽  
Michael Pehl

Physical Unclonable Functions (PUFs) are used in various key-generation schemes and protocols. Such schemes are deemed secure even for PUFs with challenge-response behavior, as long as no responses and no reliability information about the PUF are exposed. This work, however, reveals a pitfall in these constructions: when state-of-the-art helper data algorithms are used to correct noisy PUF responses, an attacker can exploit the publicly accessible helper data and challenges. We show that with this public information and knowledge of the underlying error-correcting code, an attacker can break the security of the system: the redundancy in the error-correcting code reveals machine-learnable features and labels. Learning these features and labels yields a predictive model for the dependencies between different challenge-response pairs (CRPs) without direct access to the actual PUF response. We provide results based on simulated data of a k-SUM PUF model and an Arbiter PUF model. We also demonstrate the attack for a k-SUM PUF model generated from real data and discuss the impact on more recent PUF constructions such as the Multiplexer PUF and the Interpose PUF. The analysis reveals that the frequently used repetition code is especially vulnerable: for a k-SUM PUF combined with a repetition code, observing as few as 800 challenges and helper-data bits suffices to reduce the entropy of the key to a single bit. The analysis also shows that other linear block codes, such as the BCH, Reed-Muller, and Single Parity Check codes, are affected by the same problem. The code-dependent insights gained from the analysis allow us to suggest mitigation strategies against the identified attack. While the demonstrated vulnerability advances Machine Learning (ML) towards realistic attacks on key-storage systems with PUFs, our analysis also facilitates a better understanding and evaluation of existing approaches and protocols with PUFs. It therefore brings the community one step closer to a more complete leakage assessment of PUFs.
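To make the leakage concrete, here is a minimal sketch of the code-offset construction with a repetition code, assuming toy random response bits in place of an actual k-SUM or Arbiter PUF model (all names and parameters are illustrative, not the paper's setup): because each helper-data block is the response XORed with an all-zero or all-one codeword, the XOR of any two helper bits within a block equals the XOR of the corresponding response bits, handing an attacker key-independent labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n, blocks = 5, 4                          # repetition length, key bits
key = rng.integers(0, 2, blocks)          # secret key bits
resp = rng.integers(0, 2, (blocks, n))    # toy PUF response bits per block

# Code-offset helper data: response XOR codeword (all-zero or all-one).
helper = resp ^ key[:, None]

# Public helper bits leak relative response values within each block:
# h_i XOR h_j == r_i XOR r_j, independent of the secret key bit.
for b in range(blocks):
    assert np.array_equal(helper[b] ^ helper[b, 0], resp[b] ^ resp[b, 0])
print("helper data exposes pairwise response relations in every block")
```

These pairwise relations are exactly the kind of features and labels on which a predictive CRP model can be trained.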


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Florent Bernard ◽  
Viktor Fischer ◽  
Crina Costea ◽  
Robert Fouquet

The paper analyzes Ring-Oscillator-based Physical Unclonable Functions (PUFs) and proposes several enhancements. PUFs are used to extract a unique signature of an integrated circuit in order to authenticate a device and/or to generate a key. We show that designers of RO PUFs implemented in FPGAs need precise control of placement and routing and an appropriate selection of RO pairs to obtain independent bits in the PUF response. We provide a method to identify which comparisons are suitable when selecting pairs of ROs. Regarding power consumption, we propose a simple improvement that reduces the consumption of the PUF published by Suh et al. in 2007 by up to 96.6%. Last but not least, we point out that ring oscillators significantly influence one another and can even lock to each other. This calls the reliability of the PUF into question and should be taken into account during the design.
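As an illustration of the pair-selection idea, the following is a minimal sketch, assuming measured RO frequencies and a simple noise-margin threshold; the paper's actual selection criterion may differ, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = 200e6 + rng.normal(0, 0.5e6, 32)   # hypothetical RO frequencies (Hz)
sigma = 0.05e6                             # assumed measurement noise (Hz)

def select_pairs(freqs, margin):
    """Greedily pick disjoint RO pairs whose frequency gap exceeds the
    noise margin, so each comparison yields a stable, independent bit."""
    used, pairs = set(), []
    for i in range(len(freqs)):
        if i in used:
            continue
        for j in range(i + 1, len(freqs)):
            if j not in used and abs(freqs[i] - freqs[j]) > margin:
                pairs.append((i, j))
                used.update((i, j))
                break
    return pairs

pairs = select_pairs(freqs, margin=3 * sigma)
bits = [int(freqs[a] > freqs[b]) for a, b in pairs]
print(f"{len(pairs)} reliable pairs -> {bits}")
```

Using each oscillator in at most one comparison avoids the correlated bits that arise when one RO participates in several pairs.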


Author(s):  
Vincent Immler ◽  
Karthik Uppund

Several publications have presented tamper-evident Physical Unclonable Functions (PUFs) for secure storage of cryptographic keys and tamper detection. Unfortunately, previously published PUF-based key-derivation schemes do not sufficiently take into account the specifics of the underlying application, i.e., an attacker who tampers with the physical parameters of the PUF outside of an idealized noise error model. This is a notable extension of existing schemes for PUF key derivation, which are typically concerned with helper data leakage, i.e., by how much the PUF's entropy is diminished when an attacker gains access to its helper data. To address the specifics of tamper-evident PUFs, we formalize the aspect of tamper-sensitivity, thereby providing a new tool to rate by how much an attacker is allowed to tamper with the PUF. This complements existing criteria such as the effective number of secret bits for entropy and the failure rate for reliability. As a result, it enables a fair comparison among different schemes, independent of the PUF implementation, since its unit is based on the noise standard deviation of the underlying PUF measurement. To overcome the limitations of previous schemes, we then propose an Error-Correcting Code (ECC) based on the Lee metric, a distance metric well suited to describing the distance between q-ary symbols output by an equidistant quantization, i.e., a higher-order alphabet PUF. This novel approach is required because the underlying symbols' bits are not i.i.d., which prevents applying previous state-of-the-art approaches. We present the concept of our scheme and demonstrate its feasibility based on an empirical PUF distribution. Our approach increases the number of effective secret bits by over 21% compared to previous approaches based on equidistant quantization. At the same time, we improve tamper-sensitivity compared to an equiprobable quantization while ensuring similar reliability and entropy. Hence, this work opens up a new direction for interpreting the PUF output and details a practically relevant scheme that outperforms all previous constructions.
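For concreteness, the Lee distance the scheme builds on can be computed as in the minimal sketch below, assuming q-ary symbols from an equidistant quantizer; the example values are illustrative, not from the paper.

```python
def lee_distance(x, y, q):
    """Lee distance between two equal-length words over Z_q: per symbol,
    the shorter way around the cyclic alphabet of size q."""
    assert len(x) == len(y)
    return sum(min((a - b) % q, (b - a) % q) for a, b in zip(x, y))

# A small drift in quantized PUF symbols costs little Lee distance,
# while a large (tamper-induced) shift costs a lot:
q = 8
print(lee_distance([3, 5, 1], [4, 5, 1], q))  # 1: one symbol off by a step
print(lee_distance([3, 5, 1], [7, 1, 5], q))  # 12: every symbol shifted far
```

This is why the metric suits tamper-sensitivity: distance grows with the magnitude of the physical deviation, not just with the number of flipped bits.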


2020 ◽  
Vol 4 ◽  
pp. 17-24
Author(s):  
I.V. Kochetova ◽  
A.V. Levenets

The article proposes a simulation model of an adaptive system for transmitting discrete messages, in which information about the state of the communication channel is used to set the parameters of an error-correcting code, making it possible to manage the bandwidth of the communication channel and optimize the performance of the data-transmission equipment. The efficiency of certain error-correcting coding methods in the simulated system is assessed with respect to the value of Eb/N0. The model makes it possible to estimate parameters such as the size of transmitted messages and the number of messages retransmitted over the feedback channel for a given value of Eb/N0.
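A minimal sketch of the adaptive selection idea, assuming hypothetical Eb/N0 thresholds and a hypothetical code set; the article's simulated system and parameters may differ.

```python
def pick_code(ebno_db):
    """Map an estimated channel Eb/N0 (dB) to an error-correcting code;
    thresholds and codes here are assumptions for illustration."""
    if ebno_db >= 9.0:
        return ("uncoded", 1.0)          # clean channel: no redundancy
    if ebno_db >= 6.0:
        return ("Hamming(7,4)", 4 / 7)   # light protection
    if ebno_db >= 3.0:
        return ("BCH(15,7)", 7 / 15)     # moderate protection
    return ("repetition(5,1)", 1 / 5)    # noisy channel: max redundancy

for ebno in (10.0, 7.5, 4.0, 1.0):
    name, rate = pick_code(ebno)
    print(f"Eb/N0 = {ebno:4.1f} dB -> {name}, rate {rate:.2f}")
```

Trading code rate against channel quality in this way is what lets the transmitter hold throughput near the channel's capacity as conditions change.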


2019 ◽  
Vol 42 ◽  
Author(s):  
Gian Domenico Iannetti ◽  
Giorgio Vallortigara

Some of the foundations of Heyes' radical reasoning seem to be based on a fractional selection of the available evidence. Using an ethological perspective, we argue against Heyes' rapid dismissal of innate cognitive instincts. Heyes' use of fMRI studies of literacy to claim that culture assembles pieces of mental technology seems an example of the incorrect reverse inferences and overlap theories pervasive in cognitive neuroscience.


1975 ◽  
Vol 26 ◽  
pp. 395-407
Author(s):  
S. Henriksen

The first question to be answered, in seeking coordinate systems for geodynamics, is: what is geodynamics? The answer is, of course, that geodynamics is that part of geophysics which is concerned with movements of the Earth, as opposed to geostatics, which is the physics of the stationary Earth. But as far as we know, there is no stationary Earth: eppur si muove. So geodynamics is actually coextensive with geophysics, and coordinate systems suitable for the one should be suitable for the other. At the present time, there are not many coordinate systems, if any, that can be identified with a static Earth. Certainly the only coordinate of aeronomic (atmospheric) interest is height, and this is usually expressed either as geopotential height or as pressure. In oceanology, the most important coordinate is depth, and this, like height in the atmosphere, is expressed as metric depth from mean sea level, as dynamic depth, or as pressure. Only for the solid Earth do we find "static" systems in use, and even here there is a real question as to whether the systems are dynamic or static. So it would seem that the answer to the question of what kind of coordinate systems we are seeking must be that we are looking for the same systems as are used in geophysics, and these systems are dynamic in nature already; that is, their definition involves time.


1978 ◽  
Vol 48 ◽  
pp. 515-521
Author(s):  
W. Nicholson

A routine has been developed for the processing of the 5820 plates of the survey. The plates are measured on the automatic measuring machine GALAXY, and the measures are subsequently processed by computer to edit them and then refer them to the SAO catalogue. A start has been made on measuring the plates, but the final selection of stars to be measured is still a matter for discussion.


Author(s):  
P.J. Killingworth ◽  
M. Warren

Ultimate resolution in the scanning electron microscope is determined not only by the diameter of the incident electron beam, but also by the interaction of that beam with the specimen material. Generally, while the minimum beam diameter diminishes with increasing voltage, owing to the reduced effect of the aberration component and magnetic interference, the excited volume within the sample increases with electron energy. Thus, for any given material and imaging signal, there is an optimum voltage to achieve the best resolution. In the case of organic materials, which are in general of low density, electrically non-conducting, and possibly susceptible to radiation and heat damage as well, the selection of correct operating parameters is extremely critical and is achieved by iterative adjustment.
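As a rough illustration of why such an optimum voltage exists, here is a toy numerical sketch: it assumes a hypothetical probe-diameter scaling, uses the Kanaya-Okayama electron-range formula for the excited volume, and combines the two in quadrature; none of these modeling choices come from the article.

```python
import numpy as np

A, Z, rho = 12.0, 6, 1.2   # carbon-like organic sample (g/mol, -, g/cm^3)

def probe_nm(E_keV):
    return 5.0 / np.sqrt(E_keV)    # assumed probe-diameter scaling (nm)

def ko_range_nm(E_keV):
    # Kanaya-Okayama electron range, micrometres converted to nanometres
    return 1000.0 * 0.0276 * A * E_keV**1.67 / (Z**0.89 * rho)

E = np.linspace(0.5, 30.0, 300)    # accelerating voltage (kV)
# Assumption: a fixed 10% of the range degrades the imaging signal.
res = np.hypot(probe_nm(E), 0.1 * ko_range_nm(E))
print(f"toy optimum near {E[np.argmin(res)]:.1f} kV for this material")
```

The two terms pull in opposite directions as voltage rises, so their combination has a minimum; for low-density organic specimens that minimum sits at a comparatively low voltage.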

