standardization effort
Recently Published Documents


TOTAL DOCUMENTS: 27 (five years: 6)
H-INDEX: 7 (five years: 2)

Author(s): Joppe W. Bos, Marc Gourjon, Joost Renes, Tobias Schneider, Christine Van Vredendaal

In the final phase of the post-quantum cryptography standardization effort, the focus has been extended to include the side-channel resistance of the candidates. While some schemes have already been extensively analyzed in this regard, there is no such study yet of the finalist Kyber.

In this work, we demonstrate the first completely masked implementation of Kyber which is protected against first- and higher-order attacks. To the best of our knowledge, this results in the first higher-order masked implementation of any post-quantum secure key encapsulation mechanism. This is realized by introducing two new techniques. First, we propose a higher-order algorithm for the one-bit compression operation, based on a masked bit-sliced binary search that can be applied to prime moduli. Second, we propose a technique which enables one to compare uncompressed masked polynomials with compressed public polynomials. This avoids the costly masking of the ciphertext compression while remaining instantiable at arbitrary orders.

We show performance results for first-, second- and third-order protected implementations on the Arm Cortex-M0+ and Cortex-M4F. Notably, our implementation of first-order masked Kyber decapsulation requires 3.1 million cycles on the Cortex-M4F, a factor 3.5 overhead compared to the unprotected optimized implementation in pqm4. We experimentally show that the first-order implementation of our new modules on the Cortex-M0+ is hardened against attacks using 100,000 traces, and we mechanically verify the security in a fine-grained leakage model using the verification tool scVerif.
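For orientation, the following is a minimal C sketch (not from the paper) of the unmasked one-bit compression operation that the proposed masked bit-sliced binary search computes securely on shares; the constant q = 3329 is Kyber's prime modulus, and the masked algorithm itself operates on shares rather than on x directly:

```c
#include <stdint.h>

#define KYBER_Q 3329  /* Kyber's prime modulus */

/* Reference (unmasked) one-bit compression:
 * compress_1(x) = round(2*x / q) mod 2 for x in [0, q).
 * The result is 1 exactly when x lies in the middle band
 * (q/4, 3q/4) of the field; the paper's contribution is
 * computing this decision at higher masking orders without
 * ever unmasking x. */
static uint8_t compress_1(uint16_t x)
{
    uint32_t t = ((uint32_t)x << 1) + (KYBER_Q >> 1);
    return (uint8_t)((t / KYBER_Q) & 1);
}
```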


Author(s): Peter Pessl, Lukas Prokop

NIST’s post-quantum standardization effort has very recently entered its final round. This makes studying the implementation-security aspects of the remaining candidates an increasingly important task, as such analyses can aid the final selection process and enable appropriately secure wider deployment after standardization. However, lattice-based key-encapsulation mechanisms (KEMs), which are prominently represented among the finalists, have thus far received little attention when it comes to fault attacks.

Interestingly, many of these KEMs exhibit structural similarities. They can be seen as variants of the encryption scheme of Lyubashevsky, Peikert, and Rosen, and they employ the Fujisaki-Okamoto transform (FO) to achieve CCA2 security. The latter involves re-encrypting a decrypted plaintext and testing the ciphertexts for equivalence. This corresponds to the classic countermeasure of computing the inverse operation and hence prevents many fault attacks.

In this work, we show that despite this inherent protection, practical fault attacks are still possible. We present an attack that requires a single instruction-skipping fault in the decoding process, which is run as part of the decapsulation. After observing whether this fault actually changed the outcome (effective fault) or whether the correct result is still returned (ineffective fault), we can set up a linear inequality involving the key coefficients. After gathering enough of these inequalities by faulting many decapsulations, we can solve for the key using a bespoke statistical solving approach. As our attack only requires distinguishing effective from ineffective faults, various detection-based countermeasures, including many forms of double execution, can be bypassed.

We apply this attack to Kyber and NewHope, both of which belong to the aforementioned class of schemes. Using fault simulations, we show that, e.g., 6,500 faulty decapsulations are required for full key recovery on Kyber512. To demonstrate practicality, we use clock glitches to attack Kyber running on a Cortex-M4. As we argue that other schemes of this class, such as Saber, might also be susceptible, the presented attack clearly shows that one cannot rely on the FO transform’s fault deterrence and that proper countermeasures are still needed.
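To make the attack surface concrete, here is a hypothetical C sketch of the FO-transformed decapsulation flow common to this class of KEMs; all names and sizes (indcpa_dec, indcpa_enc, hash_coins, kdf, MSG_BYTES, CT_BYTES) are illustrative placeholders and do not correspond to any particular scheme's API:

```c
#include <stdint.h>
#include <string.h>

#define MSG_BYTES 32     /* illustrative sizes, not a specific scheme's */
#define CT_BYTES  768

/* Hypothetical primitives standing in for a lattice KEM's internals. */
void indcpa_dec(uint8_t *m, const uint8_t *ct, const uint8_t *sk);
void indcpa_enc(uint8_t *ct, const uint8_t *m, const uint8_t *pk,
                const uint8_t *coins);
void hash_coins(uint8_t *coins, const uint8_t *m);
void kdf(uint8_t *key, const uint8_t *m, const uint8_t *ct);

int decaps(uint8_t *key, const uint8_t *ct,
           const uint8_t *sk, const uint8_t *pk)
{
    uint8_t m[MSG_BYTES], coins[MSG_BYTES], ct2[CT_BYTES];

    indcpa_dec(m, ct, sk);          /* decoding: the step the attack faults */
    hash_coins(coins, m);           /* FO: randomness re-derived from m */
    indcpa_enc(ct2, m, pk, coins);  /* deterministic re-encryption */

    /* A skipped instruction in indcpa_dec that happens to leave m
     * unchanged ("ineffective" fault) still passes this check. */
    if (memcmp(ct, ct2, CT_BYTES) != 0)
        return -1;                  /* reject: ciphertexts differ */

    kdf(key, m, ct);
    return 0;
}
```

Because the equivalence test only reveals pass/fail, each faulted decapsulation leaks exactly the effective/ineffective bit, and it is this single bit per fault that the attack converts into one linear inequality on the secret key.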


Author(s): Dave Lewis, Linda Hogan, David Filip, P. J. Wall

In this paper, we examine the challenges of developing international standards for Trustworthy AI that aim both to be globally applicable and to address the ethical questions key to building trust at a commercial and societal level. We begin by examining the validity of grounding standards that aim for international reach in human rights agreements, and the need to accommodate variations in prioritization and trade-offs when implementing rights in different societal and cultural settings. We then examine the major recent proposals from the OECD, the EU, and the IEEE on the ethical governance of Trustworthy AI systems in terms of their scope and use of normative language. From this analysis, we propose a preliminary minimal model of the functional roles relevant to Trustworthy AI as a framing for further standards development in this area. We also identify the different types of interoperability reference points that may exist between these functional roles and remark on the potential role they could play in future standardization. Finally, we examine a current AI standardization effort under ISO/IEC JTC1 to consider how future Trustworthy AI standards may build on existing standards in developing ethical guidelines, in particular on the ISO standard on Social Responsibility. We conclude by proposing some future directions for research and development of Trustworthy AI standards.


2019, Vol 98 (1), pp. 119-131
Author(s): Joanna Isabelle Olszewska, Michael Houghtaling, Paulo J. S. Goncalves, Nicola Fabiano, Tamas Haidegger, ...

Robotics is a fast-growing field which requires the efficient development of adapted standards. Hence, in this paper, we propose a development methodology to support the robot standardization effort led by international technical and professional associations such as the Institute of Electrical and Electronics Engineers (IEEE). Our proposed standard development life cycle is a middle-out, iterative, collaborative, and incremental approach which we have successfully applied to the development of the new IEEE Ontological Standard for Ethically Driven Robotics and Automation Systems (IEEE P7007).


Author(s): Veera Ragavan Sampath Kumar, Alaa Khamis, Sandro Fiorini, Joel Luís Carbonera, Alberto Olivares Alarcos, ...

The current fourth industrial revolution, or ‘Industry 4.0’ (I4.0), is driven by digital data, connectivity, and cyber systems, and it has the potential to create impressive new business opportunities. With the arrival of I4.0, the scenario of various intelligent systems interacting reliably and securely with each other becomes a reality that technical systems need to address. One major aspect of I4.0 is the adoption of a coherent approach to semantic communication between multiple intelligent systems, including human and artificial (software or hardware) agents. For this purpose, ontologies can provide a solution by formalizing smart manufacturing knowledge in an interoperable way. Hence, this paper presents the few existing ontologies for I4.0, together with the current state of the standardization effort in the Factory 4.0 domain and examples of real-world I4.0 scenarios.


Author(s): Haresh A. Suthar

This paper presents a VHDL implementation of the H.264 video coding standard, the video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goals of the H.264/AVC standardization effort are enhanced compression performance and the provision of a “network-friendly” video representation addressing both “conversational” (video telephony) and “non-conversational” (storage, broadcast, or streaming) applications. The H.264 standard comprises fundamental blocks such as transform and quantization, intra prediction, inter prediction, and Context-Adaptive Variable-Length Coding (CAVLC). Each block is designed and integrated into one top-level module in VHDL.
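As an illustration of the transform block, the following is a small behavioral C model (a sketch for exposition, not the paper's VHDL) of the 4x4 forward integer transform defined by H.264/AVC, computed as the standard row/column butterfly for Y = Cf · X · Cf^T:

```c
#include <stdint.h>

/* H.264/AVC 4x4 forward integer transform, Y = Cf * X * Cf^T with
 * Cf = [ 1  1  1  1 ;  2  1 -1 -2 ;  1 -1 -1  1 ;  1 -2  2 -1 ].
 * Inputs are 9-bit prediction residuals, so int16_t cannot overflow. */
static void forward_transform_4x4(const int16_t x[4][4], int16_t y[4][4])
{
    int16_t tmp[4][4];
    for (int i = 0; i < 4; i++) {           /* butterfly on each row */
        int16_t s0 = x[i][0] + x[i][3], s3 = x[i][0] - x[i][3];
        int16_t s1 = x[i][1] + x[i][2], s2 = x[i][1] - x[i][2];
        tmp[i][0] = s0 + s1;
        tmp[i][1] = 2 * s3 + s2;
        tmp[i][2] = s0 - s1;
        tmp[i][3] = s3 - 2 * s2;
    }
    for (int j = 0; j < 4; j++) {           /* then on each column */
        int16_t s0 = tmp[0][j] + tmp[3][j], s3 = tmp[0][j] - tmp[3][j];
        int16_t s1 = tmp[1][j] + tmp[2][j], s2 = tmp[1][j] - tmp[2][j];
        y[0][j] = s0 + s1;
        y[1][j] = 2 * s3 + s2;
        y[2][j] = s0 - s1;
        y[3][j] = s3 - 2 * s2;
    }
}
```

The multiplier-free butterfly (adds, subtracts, and shifts by one) is what makes this block attractive for hardware; quantization then folds the remaining scaling factors into the quantization step.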


2011, Vol 12 (1-2), pp. 35-66
Author(s): Mats Carlsson, Per Mildner

SICStus Prolog has evolved for nearly 25 years. This is an appropriate point in time for revisiting the main language and design decisions and trying to distill some lessons. SICStus Prolog was conceived in a context of multiple, conflicting Prolog dialect camps and a fledgling standardization effort. We reflect on the impact of this effort, and of role-model implementations, on our development. After summarizing the development history, we give a guided tour of the system's anatomy, exposing some designs that have not been published before. We give an overview of our new interactive development environment and describe a sample of key applications. Finally, we try to identify key good, and not so good, design decisions.

