worst case
Recently Published Documents

TOTAL DOCUMENTS: 7151 (2078 in the last five years)
H-INDEX: 90 (12 in the last five years)

2022 · Vol 15 (1) · pp. 1-27
Author(s): Franz-Josef Streit, Paul Krüger, Andreas Becher, Stefan Wildermann, Jürgen Teich

FPGA-based Physical Unclonable Functions (PUFs) have emerged as a viable alternative to permanent key storage by turning the effects of inaccuracies during the manufacturing process of a chip into a unique, FPGA-intrinsic secret. However, many fixed PUF designs may suffer from unsatisfactory statistical properties in terms of uniqueness, uniformity, and robustness. Moreover, a PUF signature may alter over time due to aging or changing operating conditions, rendering a PUF insecure in the worst case. As a remedy, we propose CHOICE, a novel class of FPGA-based PUF designs with tunable uniqueness and reliability characteristics. Using the addressable shift registers available on an FPGA, we show that a wide configuration space for adjusting a device-specific PUF response is obtained without any sacrifice of randomness. In particular, we demonstrate the concept of address-tunable propagation delays, whereby we are able to increase or decrease the probability of obtaining “1”s in the PUF response. Experimental evaluations on a group of six 28 nm Xilinx Artix-7 FPGAs show that CHOICE PUFs provide a large range of configurations that allow fine-tuning to an average uniqueness between 49% and 51%, while simultaneously achieving bit error rates below 1.5%, thus outperforming state-of-the-art PUF designs. Moreover, at only a single FPGA slice per PUF bit, CHOICE is one of the smallest PUF designs currently available for FPGAs. It is well known that signal propagation delays are affected by temperature, as the operating temperature impacts the internal currents of the transistors that ultimately make up the circuit. We therefore comprehensively investigate how temperature variations affect the PUF response and demonstrate how the tunability of CHOICE enables us to determine configurations that show a high robustness to such variations. As a case study, we present a cryptographic key generation scheme based on CHOICE PUF responses as the device-intrinsic secret and investigate the design objectives of resource cost, performance, and temperature robustness to show the practicability of our approach.
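For reference, the two statistics quoted above can be computed as follows; this is a minimal sketch of the standard PUF quality metrics (average inter-chip Hamming distance for uniqueness, intra-chip Hamming distance for the bit error rate) on synthetic responses, not data or code from the paper:

```python
import numpy as np

def uniqueness(responses):
    """Average pairwise inter-chip Hamming distance, in percent; ideal is 50%.
    responses: (num_chips, num_bits) array of reference PUF responses."""
    n = len(responses)
    dists = [np.mean(responses[i] != responses[j])
             for i in range(n) for j in range(i + 1, n)]
    return 100.0 * np.mean(dists)

def bit_error_rate(reference, reads):
    """Average intra-chip Hamming distance between a reference response and
    repeated readouts, in percent; lower means a more robust PUF."""
    return 100.0 * np.mean(reads != reference)

rng = np.random.default_rng(1)
chips = rng.integers(0, 2, size=(6, 128))          # six devices, mirroring the test setup
noisy = chips[0] ^ (rng.random((50, 128)) < 0.01)  # 50 re-reads with ~1% bit flips
print(uniqueness(chips), bit_error_rate(chips[0], noisy))  # ~50%, ~1%
```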


2022 · Vol 155 · pp. 111932
Author(s): Francisco Gutierrez-Garcia, Angel Arcos-Vargas, Antonio Gomez-Exposito

2022 · Vol 27 (1) · pp. 1-24
Author(s): Ding Han, Guohui Li, Quan Zhou, Jianjun Li, Yong Yang, ...

Response Time Analysis (RTA) is an important and promising technique for analyzing the schedulability of real-time tasks under both Global Fixed-Priority (G-FP) scheduling and Global Earliest Deadline First (G-EDF) scheduling. Most existing RTA methods for tasks under global scheduling are dominated by partitioned scheduling, due to the pessimism of the ⌊·/m⌋-based interference calculation, where m is the number of processors. The two-part execution scenario is an effective technique that addresses this pessimism at the cost of efficiency. Its main idea is to compute a more accurate upper bound on the interference by dividing the execution of the target job into two parts and calculating the interference on the target job in each part. This article proposes a novel RTA execution framework that improves the two-part execution scenario by eliminating unnecessary calculation, without sacrificing the accuracy of the schedulability test. The key observation is that, after the division of the execution of the target job, the two-part execution scenario enumerates all possible execution times of the target job in the first part to calculate the final Worst-Case Response Time (WCRT), yet only a few specific execution times can actually yield the final result. A set of experiments tests the performance of the proposed execution framework, and the results show that it improves the efficiency of the two-part execution scenario analysis in terms of execution time.
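For background, here is a minimal Python sketch of the classical fixed-point RTA for G-FP scheduling with the coarse ⌊·/m⌋ interference bound that the abstract criticizes; the task model and bound are textbook material, not this article's refined two-part analysis:

```python
import math

def response_time(C, D, m, hp):
    """Iterate R = C + floor(I(R) / m) until it stabilizes or exceeds D.
    C, D: WCET and deadline of the task under analysis; m: number of
    processors; hp: list of (C_j, T_j) pairs for higher-priority tasks."""
    R = C
    while True:
        # Coarse bound: each higher-priority task j contributes at most
        # ceil(R / T_j) * C_j units of interference in a window of length R.
        # Dividing the total by m is exactly the pessimistic step that
        # two-part execution scenario analyses refine.
        I = sum(math.ceil(R / Tj) * Cj for (Cj, Tj) in hp)
        R_next = C + I // m
        if R_next == R:
            return R            # converged; R <= D means schedulable
        if R_next > D:
            return None         # bound exceeded the deadline
        R = R_next
```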


Electronics · 2022 · Vol 11 (2) · pp. 280
Author(s): Michael Haider, Dominik Bortis, Grayson Zulauf, Johann W. Kolar, Yasuo Ono

The motor integration of single-phase-supplied Variable-Speed Drives (VSDs) is prevented by the significant volume, short lifetime, and operating-temperature limit of the electrolytic capacitors required to buffer the pulsating power drawn from the single-phase grid. The DC-link energy storage requirement is eliminated by using the kinetic energy of the motor as a buffer. The proposed concept is called the Motor-Integrated Power Pulsation Buffer (MPPB), and a control technique and structure are detailed that meet the requirements for nominal and faulted operation with a simple reconfiguration of existing controller blocks. A 7.5 kW, motor-integrated hardware demonstrator validated the proposed MPPB concept and loss models for a scroll compressor drive used in auxiliary railway applications. The MPPB drive with a front-end CISPR 11/Class A EMI filter, PFC rectifier stage, and output-side inverter stage achieved a power density of 0.91 kW/L (15 W/in³). The grid-to-motor-shaft efficiency exceeded 90% for all loads over 5 kW, or 66% of nominal load, with a worst-case loss penalty over a conventional system of only 17%.
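As a back-of-envelope illustration of why a buffer is needed at all, the sketch below computes the twice-line-frequency energy swing a single-phase buffer must absorb, and the speed ripple a purely kinetic buffer would see; the grid frequency and rotor inertia are assumed values, not figures from the paper:

```python
import math

P = 7.5e3        # nominal power in W (from the abstract)
f_grid = 50.0    # assumed grid frequency in Hz (not stated in the abstract)

# Single-phase instantaneous power pulsates at twice the grid frequency,
# p(t) = P * (1 - cos(2*w*t)), so the buffer's peak-to-peak energy swing
# is dE = P / (2*pi*f_grid).
dE = P / (2 * math.pi * f_grid)
print(f"peak-to-peak buffer energy swing: {dE:.1f} J")  # ~23.9 J

# Stored kinetically, dE = 0.5 * J * (w_max**2 - w_min**2), which fixes
# the speed ripple of a motor-inertia buffer.
J = 0.01                    # hypothetical rotor inertia in kg*m^2
w_max = 2 * math.pi * 50.0  # hypothetical peak mechanical speed in rad/s
w_min = math.sqrt(w_max**2 - 2 * dE / J)
print(f"speed ripple: {w_max - w_min:.1f} rad/s")
```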


2022 · Vol 6 (POPL) · pp. 1-29
Author(s): Yuanbo Li, Kris Satya, Qirun Zhang

Dyck-reachability is a fundamental formulation for program analysis, which has been widely used to capture properly-matched-parenthesis program properties such as function calls/returns and field writes/reads. Bidirected Dyck-reachability is a relaxation of Dyck-reachability on bidirected graphs, where each edge u → v labeled with an open parenthesis “(ᵢ” is accompanied by an inverse edge v → u labeled with the corresponding close parenthesis “)ᵢ”, and vice versa. In practice, many client analyses such as alias analysis adopt the bidirected Dyck-reachability formulation. Bidirected Dyck-reachability admits an optimal reachability algorithm: given a graph with n nodes and m edges, it computes all-pairs reachability information in O(m) time. This paper focuses on the dynamic version of bidirected Dyck-reachability. In particular, we consider the problem of maintaining all-pairs Dyck-reachability information in bidirected graphs under a sequence of edge insertions and deletions. Dynamic bidirected Dyck-reachability can formulate many program analysis problems in the presence of code changes. Unfortunately, solving dynamic graph reachability problems is challenging. For example, even for maintaining transitive closure, the fastest deterministic dynamic algorithm requires O(n²) update time to achieve O(1) query time, and all-pairs Dyck-reachability is a generalization of transitive closure. Despite extensive research on incremental computation, there has been no algorithmic development on dynamic graph algorithms for program analysis with worst-case guarantees. Our work fills this gap and proposes the first dynamic algorithm for Dyck-reachability on bidirected graphs. Our dynamic algorithm handles each graph update (i.e., edge insertion or deletion) in O(n·α(n)) time and supports any all-pairs reachability query in O(1) time, where α(n) is the inverse Ackermann function. We have implemented and evaluated our dynamic algorithm on an alias analysis and a context-sensitive data-dependence analysis for Java. We compare our dynamic algorithm against a straightforward approach based on the O(m)-time optimal bidirected Dyck-reachability algorithm and a recent incremental Datalog solver. Experimental results show that our algorithm achieves orders-of-magnitude speedup over both approaches.
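Because reachability in a bidirected Dyck graph is an equivalence relation over nodes, a disjoint-set (union-find) forest is the natural core data structure, and its amortized O(α(n)) cost is plausibly where the inverse Ackermann function in the update bound comes from. A minimal sketch of that primitive, not the paper's full dynamic algorithm:

```python
class DisjointSet:
    """Union-find with path halving and union by rank; amortized cost per
    operation is O(alpha(n)), the inverse Ackermann function. This is only
    the core primitive, not the full dynamic Dyck-reachability algorithm."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra           # attach the shorter tree under the taller
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```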


2022 · Vol 12 (1)
Author(s): Pasquale Arpaia, Federica Crauso, Mirco Frosolone, Massimo Mariconda, Simone Minucci, ...

Abstract: A personalized model of the human knee for enhancing the inter-individual reproducibility of a measurement method for monitoring Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) after transdermal delivery is proposed. The model is based on the solution of Maxwell's equations in the electro-quasi-static limit via Finite Element Analysis. The dimensions of the custom geometry are estimated on the basis of the knee circumference at the patella, the body mass index, and the sex of each individual. An optimization algorithm identifies the electrical parameters of each subject from experimental impedance spectroscopy data. Muscular tissues were characterized anisotropically by extracting Cole–Cole equation parameters from experimental data acquired with a twofold excitation, both transversal and parallel to the tissue fibers. A sensitivity and optimization analysis aimed at reducing the computational burden of model customization achieved a worst-case reconstruction error lower than 5%. The personalized knee model and the optimization algorithm were validated in vivo in an experimental campaign on thirty volunteers, 67% healthy and 33% affected by knee osteoarthritis (Kellgren–Lawrence grade ranging in [1, 4]), with an average error of 3%.
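For reference, the Cole–Cole dispersion model named in the abstract has, in one common convention, the closed form Z(ω) = R∞ + (R₀ − R∞)/(1 + (jωτ)^α); the sketch below evaluates it with illustrative placeholder parameters, not the paper's fitted values for muscle tissue:

```python
import numpy as np

def cole_cole(f, r_inf, r0, tau, alpha):
    """Complex impedance Z(w) = R_inf + (R_0 - R_inf) / (1 + (j*w*tau)**alpha),
    one common form of the Cole-Cole dispersion model."""
    w = 2 * np.pi * f
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

freqs = np.logspace(2, 6, 50)   # 100 Hz .. 1 MHz sweep
z = cole_cole(freqs, r_inf=40.0, r0=120.0, tau=1e-5, alpha=0.8)
print(z[0], z[-1])              # low/high-frequency limits approach R_0 and R_inf
```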


Entropy · 2022 · Vol 24 (1) · pp. 116
Author(s): Mikhail Moshkov

In this paper, based on results from rough set theory, test theory, and exact learning, we investigate decision trees over infinite sets of binary attributes represented as infinite binary information systems. We define the notion of a problem over an information system and study three Shannon-type functions that characterize, in the worst case, how the minimum depth of a decision tree solving a problem depends on the number of attributes in the problem description. The three functions correspond to (i) decision trees using attributes, (ii) decision trees using hypotheses (an analog of equivalence queries from exact learning), and (iii) decision trees using both attributes and hypotheses. The first function has two possible types of behavior: logarithmic and linear (this result follows from more general results published earlier by the author). The second and third functions have three possible types of behavior: constant, logarithmic, and linear (these results were published earlier by the author without the proofs that are given in the present paper). Based on the obtained results, we divide the set of all infinite binary information systems into four complexity classes; within each class, the type of behavior of each of the three functions does not change.
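As a hedged sketch of what such a Shannon-type function looks like (the notation below is ours, not necessarily the paper's):

```latex
% For an information system U, let z range over problems whose description
% uses at most n attributes, and let h(z) denote the minimum depth of a
% decision tree of the given kind solving z. The Shannon-type worst-case
% function studied is then
\[
  H_U(n) \;=\; \max_{z \,:\, \dim(z) \le n} h(z),
\]
% and the abstract's classification says H_U(n) grows as Theta(log n) or
% Theta(n) for attribute-only trees, with O(1) as a third possibility once
% hypotheses (equivalence queries) are allowed.
```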


Author(s): Giampiero Mastinu, Laura Solari

Abstract. Purpose: The paper aims to promote the transition of local public transport to low/zero emissions; in particular, urban buses are considered. Method: The life cycle assessment of electric and biomethane-fuelled urban buses is performed with the commercial SimaPro software (v9.1.1), with attention focused on the powertrains. Both midpoint and endpoint analyses are performed. Regarding environmental impact, the best compressed biomethane gas (CBG) powertrain is compared to the best electric one; additionally, the worst-case scenario is considered for both CBG and electric powertrains. Results: The CBG powertrain outperforms the electric one if overall greenhouse gas emissions are considered; however, the electric powertrain seems more promising for human health and the ecosystem. Conclusions: The environmental performance of both powertrains is good. Each of the two technologies has strengths and weaknesses that nonetheless make it a good candidate for clean local public transport of the future. The analysis performed in the paper suggests a future investigation of a hybrid electric-CBG powertrain, since such a solution could combine the strengths of both the biomethane and the electric powertrain.


Symmetry · 2022 · Vol 14 (1) · pp. 138
Author(s): Wei Liu, Yang Liu

Tail risk management is of great significance in the investment process. As an extension of the asymmetric tail risk measure Conditional Value at Risk (CVaR), the higher moment coherent risk (HMCR) measure is compatible with the higher-moment information (skewness and kurtosis) of the probability distribution of asset returns, as well as capturing distributional asymmetry. To overcome the difficulties arising from the asymmetry and ambiguity of the underlying distribution, we propose a Wasserstein distributionally robust mean-HMCR portfolio optimization model based on the kernel smoothing method and optimal transport, where the ambiguity set is defined as a Wasserstein “ball” around the empirical distribution in the weighted kernel density estimation (KDE) distribution function family. Leveraging Fenchel duality, we obtain computationally tractable DCP (difference-of-convex programming) reformulations and show that the ambiguous version preserves the asymmetry of the HMCR measure. Preliminary empirical test results for portfolio selection demonstrate the efficiency of the proposed model.
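For background, HMCR in Krokhmal's formulation is HMCR_{p,α}(X) = min_η { η + (1 − α)⁻¹ E[(X − η)₊ᵖ]^{1/p} }, which recovers CVaR at p = 1; the sketch below evaluates it on a synthetic loss sample and is background only, not the paper's distributionally robust reformulation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hmcr(losses, alpha=0.95, p=3):
    """HMCR_{p,alpha}(X) = min_eta eta + (1-alpha)**-1 * E[(X-eta)_+**p]**(1/p).
    p = 1 recovers CVaR; p > 1 also weights the tail's higher moments."""
    losses = np.asarray(losses, dtype=float)

    def objective(eta):
        excess = np.maximum(losses - eta, 0.0)
        return eta + (np.mean(excess ** p) ** (1.0 / p)) / (1.0 - alpha)

    return minimize_scalar(objective).fun  # objective is convex in eta

rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=10_000)  # heavy-tailed synthetic loss sample
print(hmcr(sample, alpha=0.95, p=2))
```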


Author(s): Wouter van Eekelen, Dick den Hertog, Johan S.H. van Leeuwaarden

A notorious problem in queueing theory is to compute the worst possible performance of the GI/G/1 queue under mean-dispersion constraints for the interarrival- and service-time distributions. We address this extremal queue problem by measuring dispersion in terms of mean absolute deviation (MAD) instead of the more conventional variance, which makes methods for distribution-free analysis available. Combined with random walk theory, we obtain explicit expressions for the extremal interarrival- and service-time distributions and, hence, the best possible upper bounds for all moments of the waiting time. We also obtain tight lower bounds that, together with the upper bounds, provide robust performance intervals. We show that all bounds are computationally tractable and remain sharp even when the mean and MAD are not known precisely but are estimated from available data instead. Summary of Contribution: Queueing theory is a classic OR topic with a central role for the GI/G/1 queue. Although this queueing system is conceptually simple, it is notoriously hard to determine the worst-case expected waiting time when only the first two moments of the interarrival- and service-time distributions are known. In this setting, the exact form of the extremal distribution can only be determined numerically, as the solution to a nonconvex nonlinear optimization problem. Our paper demonstrates that using mean absolute deviation (MAD) instead of variance alleviates the computational intractability of the extremal GI/G/1 queue problem, enabling us to state the worst-case distributions explicitly.
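For intuition, MAD-based distribution-free bounds classically rest on the Ben-Tal and Hochman (1972) result that, on a bounded support with given mean and MAD, the distribution maximizing the expectation of any convex function is a three-point distribution. A minimal sketch of that construction (the support endpoints are assumed inputs; this is background, not the paper's GI/G/1 derivation):

```python
def extremal_three_point(a, b, mu, d):
    """Worst-case (convex-order maximal) distribution on [a, b] with mean mu
    and mean absolute deviation d: three atoms at a, mu, and b."""
    assert a < mu < b and 0 < d <= 2 * (mu - a) * (b - mu) / (b - a)
    p_a = d / (2 * (mu - a))
    p_b = d / (2 * (b - mu))
    p_mu = 1.0 - p_a - p_b
    return {a: p_a, mu: p_mu, b: p_b}

dist = extremal_three_point(a=0.0, b=10.0, mu=2.0, d=1.0)
print(dist)  # probabilities sum to 1, mean is mu, and E|X - mu| = d
```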

