Gage MPC: Bypassing Residual Function Leakage for Non-Interactive MPC

2021 ◽  
Vol 2021 (4) ◽  
pp. 528-548
Author(s):  
Ghada Almashaqbeh ◽  
Fabrice Benhamouda ◽  
Seungwook Han ◽  
Daniel Jaroslawicz ◽  
Tal Malkin ◽  
...  

Abstract
Existing models for non-interactive MPC cannot provide full privacy for inputs, because they inherently leak the residual function (i.e., the output of the function on the honest parties’ inputs together with all possible values of the adversarial inputs). For example, in any non-interactive sealed-bid auction, the last bidder can figure out what the highest previous bid was. We present a new MPC model which avoids this privacy leak. To achieve this, we utilize a blockchain in a novel way, incorporating smart contracts and arbitrary parties that can be incentivized to perform computation (“bounty hunters,” akin to miners). Security is maintained under a monetary assumption about the parties: an honest party can temporarily supply a recoverable collateral of value higher than the computational cost an adversary can expend. We thus construct non-interactive MPC protocols with strong security guarantees (full security, no residual leakage) in the short term. Over time, as the adversary can invest more and more computational resources, the security guarantee decays. Thus, our model, which we call Gage MPC, is suitable for secure computation with limited-time secrecy, such as auctions. A key ingredient in our protocols is a primitive we call “Gage Time Capsules” (GaTC): a time capsule that allows a party to commit to a value that others are able to reveal, but only at a designated computational cost. A GaTC allows a party to commit to a value together with a monetary collateral. If the original party properly opens the GaTC, it can recover the collateral. Otherwise, the collateral is used to incentivize bounty hunters to open the GaTC. This primitive is used to ensure completion of Gage MPC protocols on the desired inputs. As a requisite tool (of independent interest), we present a generalization of garbled circuits that is more robust: it can tolerate exposure of extra input labels. This is in contrast to Yao’s garbled circuits, whose secrecy breaks down if even a single extra label is exposed. Finally, we present a proof-of-concept implementation of a special case of our construction, yielding an auction functionality over an Ethereum-like blockchain.
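A minimal sketch of the time-capsule idea, under simplifying assumptions: the designated opening cost is modeled as a brute-force search over the last bytes of a key (difficulty_bits, a multiple of 8 here), and the collateral is plain bookkeeping. The paper's actual construction uses proofs of work and smart contracts; the names below (GaTC, open_by_committer, open_by_bounty_hunter) are illustrative, not the paper's API.

```python
import os, hashlib

def _keystream(key: bytes, n: int) -> bytes:
    return hashlib.shake_256(key).digest(n)

class GaTC:
    def __init__(self, value: bytes, collateral: int, difficulty_bits: int = 16):
        self.collateral = collateral          # locked until the capsule opens
        self._key = os.urandom(32)
        # Limit the brute-force space to 2**difficulty_bits by publishing
        # all but difficulty_bits of the key material as a hint.
        self.hint = self._key[: 32 - difficulty_bits // 8]
        self.ciphertext = bytes(a ^ b for a, b in zip(value, _keystream(self._key, len(value))))
        self.commitment = hashlib.sha256(self._key).hexdigest()

    def open_by_committer(self):
        # Honest opening: reveal the key and recover the collateral.
        refund, self.collateral = self.collateral, 0
        return self._key, refund

    def open_by_bounty_hunter(self):
        # Forced opening: search the missing key bytes; the collateral pays
        # for this work, whose cost grows as 2**difficulty_bits.
        missing = 32 - len(self.hint)
        for guess in range(256 ** missing):
            key = self.hint + guess.to_bytes(missing, "big")
            if hashlib.sha256(key).hexdigest() == self.commitment:
                bounty, self.collateral = self.collateral, 0
                return key, bounty
        raise RuntimeError("key space exhausted")

cap = GaTC(b"sealed bid: 42", collateral=100)
key, bounty = cap.open_by_bounty_hunter()
print(bytes(a ^ b for a, b in zip(cap.ciphertext, _keystream(key, len(cap.ciphertext)))), bounty)
```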

2021 ◽  
pp. 1-33
Author(s):  
Carmit Hazay ◽  
Mor Lilintal

Although most applications encountered in practice are captured more efficiently by RAM programs, the area of secure two-party computation (2PC) has seen tremendous improvement mostly for Boolean circuits. One of the most studied objects in this domain is garbled circuits. Analogously, garbled RAM (GRAM) provides similar security guarantees for RAM programs, with applications to constant-round 2PC. In this work we consider the notion of gradual GRAM, which requires no memory garbling algorithm. Our approach provides several qualitative advantages over prior works due to its conceptual similarity to the analogous garbling mechanism for Boolean circuits. We next revisit the GRAM construction from STOC 2015 (pp. 449–458) and improve it in two orthogonal aspects: matching it directly with tree-based ORAMs and exploring its consistency with gradual ORAM.
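A minimal sketch of the tree-based ORAM access pattern that such GRAM constructions build on (Path-ORAM style), under simplifying assumptions: buckets are plain Python lists and there is no encryption or stash/eviction logic. It only illustrates why every access touches one root-to-leaf path, which is the structure a garbled RAM must hide.

```python
import random

class TreeORAM:
    def __init__(self, depth: int):
        self.depth = depth
        self.tree = {i: [] for i in range(2 ** (depth + 1) - 1)}  # heap layout
        self.position = {}  # block id -> assigned leaf in [0, 2**depth)

    def _path(self, leaf: int):
        node = leaf + 2 ** self.depth - 1  # leaf's index in heap layout
        while True:
            yield node
            if node == 0:
                return
            node = (node - 1) // 2

    def access(self, block_id, new_value=None):
        leaf = self.position.setdefault(block_id, random.randrange(2 ** self.depth))
        found = None
        for node in self._path(leaf):       # always read the whole path
            bucket = self.tree[node]
            for i, (bid, val) in enumerate(bucket):
                if bid == block_id:
                    found = bucket.pop(i)[1]
                    break
        # Remap to a fresh random leaf so repeated accesses look unrelated.
        self.position[block_id] = random.randrange(2 ** self.depth)
        value = new_value if new_value is not None else found
        self.tree[0].append((block_id, value))  # write back at the root
        return found

oram = TreeORAM(depth=3)
oram.access("x", 7)
print(oram.access("x"))  # 7, retrieved via a freshly sampled path
```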


2019 ◽  
Vol 2019 ◽  
pp. 1-18 ◽  
Author(s):  
Xin Fang ◽  
Stratis Ioannidis ◽  
Miriam Leeser

Secure Function Evaluation (SFE) has received recent attention due to the massive collection and mining of personal data, but remains impractical due to its large computational cost. Garbled Circuits (GC) is a protocol for implementing SFE that can evaluate any function expressible as a Boolean circuit and obtain the result while keeping each party’s input private. Recent advances have led to a surge of garbled circuit implementations in software for a variety of different tasks. However, these implementations are inefficient, and therefore GC is not widely used, especially for large problems. This research investigates, implements, and evaluates secure computation generation using a heterogeneous computing platform featuring FPGAs. We have designed and implemented SIFO: secure computational infrastructure using FPGA overlays. Unlike traditional FPGA design, a coarse-grained overlay architecture is adopted, which supports mapping SFE problems that are too large to fit on a single FPGA. Host tools provided include an SFE problem generator, a parser, and automatic host code generation. Our design allows repurposing an FPGA to evaluate different SFE tasks without the need for reprogramming and fully exploits the parallelism of any GC problem. Our system demonstrates an order of magnitude speedup compared with an existing software platform.
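A minimal sketch of the garbled-gate evaluation that such a system maps onto hardware, shown in plain Python under simplifying assumptions: SHA-256 as the row-encryption primitive and no point-and-permute or free-XOR optimizations. Each of the four table rows encrypts the output label under one pair of input labels; the evaluator can decrypt exactly one row.

```python
import os, hashlib, itertools, random

def enc(ka: bytes, kb: bytes, payload: bytes) -> bytes:
    # One-time XOR pad derived from both input labels (also decrypts).
    pad = hashlib.sha256(ka + kb).digest()[: len(payload)]
    return bytes(x ^ y for x, y in zip(payload, pad))

def garble_and_gate():
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}  # (label_0, label_1)
    table = []
    for va, vb in itertools.product((0, 1), repeat=2):
        out = labels["c"][va & vb]
        # A zero tag lets the evaluator recognize the row that decrypts.
        table.append(enc(labels["a"][va], labels["b"][vb], out + b"\x00" * 4))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, la: bytes, lb: bytes) -> bytes:
    for ct in table:
        pt = enc(la, lb, ct)
        if pt.endswith(b"\x00" * 4):
            return pt[:16]
    raise ValueError("no row decrypted")

labels, table = garble_and_gate()
out = evaluate(table, labels["a"][1], labels["b"][1])
print(out == labels["c"][1])  # True: AND(1, 1) = 1
```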


2016 ◽  
Vol 2016 (4) ◽  
pp. 144-164 ◽  
Author(s):  
Marina Blanton ◽  
Fattaneh Bayatbabolghani

Abstract
Computation based on genomic data is becoming increasingly popular today, be it for medical or other purposes. Non-medical uses of genomic data in a computation often take place in a server-mediated setting where the server offers the ability for joint genomic testing between the users. Undeniably, genomic data is highly sensitive and, in contrast to other types of biometric data, discloses a plethora of information not only about the data owner, but also about his or her relatives. Thus, there is an urgent need to protect genomic data. This is particularly true when the data is used in computation for what we call recreational, non-health-related purposes. Towards this goal, in this work we put forward a framework for server-aided secure two-party computation with a security model motivated by genomic applications. One particular security setting that we treat in this work provides stronger security guarantees with respect to malicious users than the traditional malicious model. In particular, we incorporate certified inputs into secure computation based on garbled circuit evaluation to guarantee that a malicious user is unable to modify her inputs in order to learn unauthorized information about the other user’s data. Our solutions are general in the sense that they can be used to securely evaluate arbitrary functions and offer attractive performance compared to the state of the art. We apply the general constructions to three specific types of genomic tests: paternity, genetic compatibility, and ancestry testing, and we implement the constructions. The results show that all such private tests can be executed within a matter of seconds or less despite the large size of one’s genomic data.
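A minimal sketch of the certified-inputs idea, under simplifying assumptions: certification is modeled with an HMAC issued by a trusted certifier, whereas the paper builds real signatures into the garbled-circuit evaluation itself. The functions below are illustrative only; the point is that a malicious user cannot later swap in different genomic data, because the server checks the tag before the computation runs.

```python
import hmac, hashlib

CERTIFIER_KEY = b"held by the certification authority"  # e.g., a sequencing lab

def certify(genome: bytes) -> bytes:
    # Run once by the certifier when the genome is produced.
    return hmac.new(CERTIFIER_KEY, genome, hashlib.sha256).digest()

def server_accepts(genome: bytes, tag: bytes) -> bool:
    # Only inputs with a valid certificate are fed into the 2PC.
    expected = hmac.new(CERTIFIER_KEY, genome, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

alice = b"ACGTACGT..."
tag = certify(alice)
print(server_accepts(alice, tag))           # True
print(server_accepts(b"TAMPERED...", tag))  # False: modified input rejected
```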


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1511
Author(s):  
Taylor Simons ◽  
Dah-Jye Lee

There has been a recent surge in publications related to binarized neural networks (BNNs), which use binary values to represent both the weights and activations in deep neural networks (DNNs). Due to the bitwise nature of BNNs, there have been many efforts to implement BNNs on ASICs and FPGAs. While BNNs are excellent candidates for these kinds of resource-limited systems, most implementations still require very large FPGAs or CPU-FPGA co-processing systems. Our work focuses on reducing the computational cost of BNNs even further, making them more efficient to implement on FPGAs. We target embedded visual inspection tasks, such as quality-inspection sorting of manufactured parts and agricultural produce. We propose a new binarized convolutional layer, called the neural jet features layer, that learns well-known classic computer vision kernels that are efficient to calculate as a group. We show that on visual inspection tasks, neural jet features perform comparably to standard BNN convolutional layers while using fewer computational resources. We also show that neural jet features tend to be more stable than BNN convolutional layers when training small models.
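A minimal sketch of why binarized layers are cheap on bitwise hardware: with weights and activations in {-1, +1}, a dot product reduces to XNOR plus popcount. Plain NumPy stands in for the FPGA datapath here; the sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.integers(0, 2, 64, dtype=np.uint8)  # weight bits (1 encodes +1, 0 encodes -1)
x = rng.integers(0, 2, 64, dtype=np.uint8)  # activation bits

# Full-precision reference: dot product over {-1, +1} values.
ref = int(np.dot(2 * w.astype(int) - 1, 2 * x.astype(int) - 1))

# Binarized version: XNOR, popcount, then rescale to the {-1, +1} result.
matches = np.count_nonzero(~(w ^ x) & 1)   # positions where the bits agree
binarized = 2 * matches - w.size           # (#agree) - (#disagree)

print(ref == binarized)  # True: the two computations are identical
```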


Author(s):  
Michael Nierla ◽  
Alexander Sutor ◽  
Stefan Johann Rupitsch ◽  
Manfred Kaltenbacher

Purpose
This paper presents a novel stageless evaluation scheme for a vector Preisach model that exploits rotational operators for the description of vector hysteresis. It is meant to resolve the discretization errors that arise when the standard matrix-based implementation of Preisach-based models is applied.

Design/methodology/approach
The newly developed evaluation uses a nested-list data structure. Together with an adapted form of the Everett function, it represents both the additional rotational operator and the switching operator of the standard scalar Preisach model in a stageless fashion, i.e. without introducing discretization errors. Additionally, the presented updating and simplification rules ensure the computational efficiency of the scheme.

Findings
A comparison between the stageless evaluation scheme and the commonly used matrix approach reveals not only an improvement in accuracy up to machine precision but also a reduction in computational resources.

Research limitations/implications
The presented evaluation scheme is especially designed for a vector Preisach model that is based on an additional rotational operator. A direct application to other vector Preisach models that do not rely on rotational operators is not intended. Nevertheless, the presented methodology allows an easy adaptation to similar vector Preisach schemes that use modified setting rules for the rotational operator and/or the switching operator.

Originality/value
Prior to this contribution, the vector Preisach model based on rotational operators could only be evaluated using a matrix-based approach that works with discretized forms of the rotational and switching operators. The presented evaluation scheme offers reduced computational cost at much higher accuracy. Therefore, it is of great interest to all users of the mentioned or similar vector Preisach models.
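A minimal sketch of the memory-list idea behind a stageless Preisach evaluation, shown for the classic scalar model only (the paper's contribution extends this to a vector model with rotational operators, which is not reproduced here). Memory is a short list of dominant input extrema maintained by the wiping-out rule, and the output is a finite sum of Everett-function terms, so no alpha-beta discretization grid is ever introduced. The quadratic Everett function is illustrative.

```python
def everett(alpha: float, beta: float) -> float:
    # Stand-in Everett function; in practice it is identified from
    # first-order reversal curve measurements.
    return 0.25 * (alpha - beta) ** 2

def push_turning_point(ext: list, u: float) -> None:
    # ext alternates dominant maxima and minima: [M1, m1, M2, m2, ...].
    rising = len(ext) % 2 == 0          # even length: next entry is a maximum
    while len(ext) >= 2 and ((u >= ext[-2]) if rising else (u <= ext[-2])):
        ext.pop(); ext.pop()            # wiping-out rule erases dominated loops
    ext.append(u)

def output(ext: list, sat: float = 1.0) -> float:
    # f = -E(sat, -sat) + 2 * sum_k [E(M_k, m_{k-1}) - E(M_k, m_k)], m_0 = -sat.
    f, prev_min = -everett(sat, -sat), -sat
    for i in range(0, len(ext), 2):
        M = ext[i]
        m = ext[i + 1] if i + 1 < len(ext) else M   # rising tip: zero-width pair
        f += 2.0 * (everett(M, prev_min) - everett(M, m))
        prev_min = m
    return f

ext = []
for u in (0.8, -0.5, 0.5, 0.0, 0.6):    # successive turning points of the input
    push_turning_point(ext, u)
print(ext)          # [0.8, -0.5, 0.6]: the 0.5/0.0 minor loop was wiped out
print(output(ext))  # 0.38 for the illustrative Everett function
```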


2018 ◽  
Author(s):  
Benjamin Brown-Steiner ◽  
Noelle E. Selin ◽  
Ronald Prinn ◽  
Simone Tilmes ◽  
Louisa Emmons ◽  
...  

Abstract. While state-of-the-art complex chemical mechanisms expand our understanding of atmospheric chemistry, their sheer size and computational requirements often limit simulations to short lengths, or ensembles to only a few members. Here we present and compare three 25-year offline simulations with chemical mechanisms of different levels of complexity using CESM Version 1.2 CAM-chem (CAM4): the MOZART-4 mechanism, the Reduced Hydrocarbon mechanism, and the Super-Fast mechanism. We show that, for most regions and time periods, the differences in simulated ozone chemistry between these three mechanisms are smaller than the model-observation differences themselves. The MOZART-4 mechanism and the Reduced Hydrocarbon mechanism are in close agreement in their representation of ozone throughout the troposphere during all time periods (annual, seasonal, and diurnal). While the Super-Fast mechanism tends to have higher simulated ozone variability and differs from the MOZART-4 mechanism over regions of high biogenic emissions, it is surprisingly capable of simulating ozone adequately given its simplicity. We explore the trade-offs between chemical mechanism complexity and computational cost by identifying regions where the simpler mechanisms are comparable to the MOZART-4 mechanism and regions where they are not. The Super-Fast mechanism is three times as fast as the MOZART-4 mechanism, which allows for longer simulations, or ensembles with more members, that may not be feasible with the MOZART-4 mechanism given limited computational resources.


2019 ◽  
Vol 14 (5) ◽  
Author(s):  
Ashley Guy ◽  
Alan Bowling

Microscale dynamic simulations can require significant computational resources to generate desired time evolutions. Microscale phenomena are often driven by even smaller scale dynamics, requiring multiscale system definitions to combine these effects. At the smallest scale, large active forces lead to large resultant accelerations, requiring small integration time steps to fully capture the motion and dictating the integration time for the entire model. Multiscale modeling techniques aim to reduce this computational cost, often by separating the system into subsystems or coarse graining to simplify calculations. A multiscale method has been previously shown to greatly reduce the time required to simulate systems in the continuum regime while generating equivalent time histories. This method identifies a portion of the active and dissipative forces that cancel and contribute little to the overall motion. The forces are then scaled to eliminate these noncontributing portions. This work extends that method to include an adaptive scaling method for forces that have large changes in magnitude across the time history. Results show that the adaptive formulation generates time histories similar to those of the unscaled truth model. Computation time reduction is consistent with the existing method.
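A toy illustration of the force-scaling idea, under stated assumptions: an overdamped particle whose large active and drag forces nearly cancel, so the net force is small but the system is stiff. Scaling both canceling forces by s < 1 preserves the quasi-steady motion v = F_active / c while slowing the fast transient, which permits much larger time steps. The adaptive rule below (recompute s each step from the step size and damping) is a simplified stand-in for the paper's formulation, not the authors' method.

```python
import math

m, c = 1e-6, 1.0                      # tiny mass: relaxation time m/c = 1e-6 s
F_active = lambda t: math.sin(t)      # slowly varying driving force

def simulate(dt: float, t_end: float, adaptive: bool = True):
    t, v, vs = 0.0, 0.0, []
    while t < t_end:
        if adaptive:
            # Pick s so the scaled relaxation time m/(s*c) is well resolved
            # by dt yet still fast compared to the forcing (period ~ 2*pi).
            s = min(1.0, m / (c * 10 * dt))
        else:
            s = 1.0
        a = s * (F_active(t) - c * v) / m   # both canceling forces are scaled
        v += a * dt                          # explicit Euler step
        vs.append(v)
        t += dt
    return vs

# dt = 1e-3 is ~1000x the unscaled relaxation time: explicit Euler would blow
# up with s = 1, but tracks the quasi-steady solution with the adaptive scale.
v_hist = simulate(dt=1e-3, t_end=6.0)
print(round(v_hist[-1], 2), round(math.sin(6.0), 2))  # v tracks sin(t) up to a small lag
```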


2019 ◽  
Vol 20 (5) ◽  
pp. 314-320
Author(s):  
Yu. I. Buryak ◽  
A. A. Screennikov

This work addresses the problem of justifying a rational composition for a team of specialists who must prepare a group of aircraft within a given time. Substantiating the optimal team composition requires solving the problem of scheduling the work on a group of aircraft for different team compositions. This, in turn, requires considering a huge number of options for ordering the work performed on each aircraft and for organizing the sequence in which one specialist services several aircraft. Finding solutions by combinatorial optimization incurs an unacceptably high computational cost. The article proposes an approach for finding not the optimal but a rational admissible solution, one that is not much worse than the optimal solution but does not require large computational resources to determine. An algorithm for rational work scheduling based on discrete-event modeling is proposed. Planning is carried out sequentially in time; when ordering the work, the job with the maximum duration is scheduled first whenever possible. The developed algorithm has been implemented in software, which made it possible to investigate some properties of the solutions obtained. Examples of calculating work schedules for a group of aircraft under different team compositions are given. The problem of justifying a rational team composition is solved by applying the rational scheduling algorithm while sequentially increasing the number of specialists. An example of substantiating the rational composition of a team preparing a group of eight aircraft, each requiring five types of work, is given and analyzed in detail. The high speed of the rational-scheduling calculations made it possible to consider all possible team variants (tens of thousands of options) and to justify a variant in which the number of specialists is minimal while still ensuring preparation of the aircraft within the given time. The low demand for computing resources allows solving problems with a sufficiently large number of work types per aircraft.
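A minimal sketch of the greedy rule described above, under simplifying assumptions: every specialist can perform every job and the jobs are independent (the paper's discrete-event model also handles per-aircraft ordering constraints, which are omitted here). Jobs are taken longest-first and assigned to the earliest-free specialist, and the team is grown until the makespan meets the deadline.

```python
import heapq

def makespan(durations, n_specialists: int) -> float:
    # Longest-duration-first list scheduling on identical specialists.
    free_at = [0.0] * n_specialists          # min-heap of release times
    heapq.heapify(free_at)
    for d in sorted(durations, reverse=True):
        start = heapq.heappop(free_at)       # earliest-free specialist
        heapq.heappush(free_at, start + d)
    return max(free_at)

def minimal_team(durations, deadline: float):
    for n in range(1, len(durations) + 1):
        if makespan(durations, n) <= deadline:
            return n
    return None  # infeasible even with one specialist per job

# Eight aircraft, five jobs each (durations in hours, illustrative).
jobs = [1.5, 0.5, 2.0, 1.0, 0.75] * 8
print(minimal_team(jobs, deadline=8.0))  # 6: the smallest team that fits 46 h of work
```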


Author(s):  
Prithwish Kundu ◽  
Muhsin M. Ameen ◽  
Chao Xu ◽  
Umesh Unnikrishnan ◽  
Tianfeng Lu ◽  
...  

The stiffness of large chemistry mechanisms has proven to be a major hurdle towards predictive engine simulations. As a result, detailed chemistry mechanisms with a few thousand species need to be reduced based on target conditions so that they can be accommodated within the available computational resources. The computational cost of simulations typically increases super-linearly with the number of species and reactions. This work aims to bring detailed chemistry mechanisms within the realm of engine simulations by coupling the framework of unsteady flamelets with fast chemistry solvers. A previously developed Tabulated Flamelet Model (TFM) framework for non-premixed combustion was used in this study. The flamelet solver consists of the traditional operator-splitting scheme with VODE (Variable-coefficient ODE solver) and a numerical Jacobian for solving the chemistry. In order to use detailed mechanisms with thousands of species, a new framework with the LSODES (Livermore Solver for ODEs in Sparse form) chemistry solver and an analytical Jacobian was implemented in this work. Results from 1D simulations show that with the new framework, the computational cost is linearly proportional to the number of species in a given chemistry mechanism. As a result, the new framework is 2–3 orders of magnitude faster than the conventional VODE solver for large chemistry mechanisms. This new framework was used to generate unsteady flamelet libraries for n-dodecane using a detailed chemistry mechanism with 2,755 species and 11,173 reactions. The Engine Combustion Network (ECN) Spray A experiments, which consist of an igniting n-dodecane spray under turbulent, high-pressure engine conditions, are simulated using large eddy simulations (LES) coupled with detailed mechanisms. A grid with a 0.06 mm minimum cell size and a peak cell count of 22 million was used. The framework is validated against ignition delay and lift-off lengths across a range of ambient temperatures. Qualitative results from the simulations were compared against experimental OH and CH2O PLIF data. The models are able to capture the spatial and temporal trends in species observed in the experiments. Quantitative and qualitative comparisons between the predictions of the reduced and detailed mechanisms are presented in detail. The main goal of this study is to demonstrate that detailed reaction mechanisms (∼1000 species) can now be used in engine simulations, with a linear increase in computational cost with the number of species during the tabulation process and a small increase in the 3D simulation cost.
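A minimal sketch of why exploiting Jacobian sparsity changes the cost scaling, under simplifying assumptions: the "mechanism" is a linear chain of first-order reactions S1 -> S2 -> ... -> Sn, so each species couples only to its neighbors and the Jacobian is (at most) tridiagonal. SciPy's BDF integrator with a declared sparsity pattern stands in for LSODES; the rates and sizes are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse import diags

n = 2000                      # number of species
k = np.linspace(1.0, 2.0, n)  # per-species consumption rates

def rhs(t, y):
    dy = -k * y                # each species is consumed...
    dy[1:] += k[:-1] * y[:-1]  # ...and produced from its predecessor
    return dy

# Declare a tridiagonal pattern (a safe superset of the true coupling) so
# the implicit solver factorizes a sparse, not dense, n x n Jacobian.
pattern = diags([np.ones(n - 1), np.ones(n), np.ones(n - 1)], [-1, 0, 1])

y0 = np.zeros(n); y0[0] = 1.0  # all mass starts in the first species
sol = solve_ivp(rhs, (0.0, 5.0), y0, method="BDF", jac_sparsity=pattern)
print(sol.success, float(sol.y[:, -1].sum()))  # total mass decays as the chain drains
```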


2017 ◽  
Author(s):  
Jimmy Kromann ◽  
Jan Jensen ◽  
Monika Kruszyk ◽  
Mikkel Jessing ◽  
Morten Jørgensen

While computational prediction of chemical reactivity is possible, it usually requires expert knowledge, and there are relatively few computational tools that a bench chemist can use to help guide synthesis. The RegioSQM method for predicting the regioselectivity of electrophilic aromatic substitution reactions of heteroaromatic systems is presented in this paper. RegioSQM protonates all aromatic C-H carbon atoms and identifies those with the lowest free energies in chloroform, computed using the PM3 semiempirical method, as the most nucleophilic centers. In a retrospective analysis of more than 525 literature examples of electrophilic aromatic halogenation reactions, these positions are found to correlate qualitatively with the regiochemical outcome in 96% of cases. The method is automated and requires only a SMILES string of the molecule of interest, which can easily be generated using chemical drawing programs such as ChemDraw. The computational cost is 1-10 minutes per molecule depending on size, using relatively modest computational resources, and the method is freely available via a web server at regiosqm.org. RegioSQM should therefore be of practical use in the planning of organic synthesis.
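A minimal sketch of the site-enumeration step using RDKit, under stated assumptions: the energy ranking is stubbed out, since the real method protonates each candidate site, embeds 3D conformers, and computes PM3 free energies in chloroform via an external quantum chemistry package.

```python
from rdkit import Chem

def aromatic_ch_sites(smiles: str):
    """Return indices of aromatic carbons bearing at least one H --
    the candidate positions for electrophilic aromatic substitution."""
    mol = Chem.MolFromSmiles(smiles)
    return [a.GetIdx() for a in mol.GetAtoms()
            if a.GetIsAromatic() and a.GetSymbol() == "C" and a.GetTotalNumHs() > 0]

def pm3_free_energy_of_protonated(mol, site: int) -> float:
    # Hypothetical stub: protonate `site`, embed in 3D, run a PM3
    # calculation with solvation, and return the free energy.
    raise NotImplementedError("requires an external semiempirical QM package")

print(aromatic_ch_sites("c1ccc2[nH]ccc2c1"))  # indole: candidate C-H positions
```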

