Polynomial Batch Codes for Efficient IT-PIR

2016 · Vol 2016 (4) · pp. 202-218
Author(s): Ryan Henry

Abstract Private information retrieval (PIR) is a way for clients to query a remote database without the database holder learning the clients’ query terms or the responses they generate. Compelling applications for PIR abound in the cryptographic and privacy research literature, yet existing PIR techniques are notoriously inefficient. Consequently, no PIR-based application to date has seen real-world at-scale deployment. This paper proposes new “batch coding” techniques to help address PIR’s efficiency problem. The new techniques exploit the connection between ramp secret sharing schemes and efficient information-theoretically secure PIR (IT-PIR) protocols. This connection was previously observed by Henry, Huang, and Goldberg (NDSS 2013), who used ramp schemes to construct efficient “batch queries” with which clients can fetch several database records for the same cost as fetching a single record using a standard, non-batch query. The new techniques in this paper generalize and extend those of Henry et al. to construct “batch codes” with which clients can fetch several records for only a fraction of the cost of fetching a single record using a standard non-batch query over an unencoded database. The batch codes are highly tuneable, providing a means to trade off (i) lower server-side computation cost, (ii) lower server-side storage cost, and/or (iii) lower uni- or bi-directional communication cost, in exchange for a comparatively modest decrease in resilience to Byzantine database servers.
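To make the ramp-scheme connection concrete, here is a minimal Python sketch of a Goldberg-style IT-PIR batch query in the spirit the abstract describes. It is not the paper’s batch-code construction: the field size, the evaluation points, and the single blinding point (t = 1) are illustrative assumptions.

```python
# A toy sketch (assumed parameters, not the paper's construction) of a
# Goldberg-style IT-PIR batch query built from a ramp secret sharing scheme:
# one polynomial query fetches q records at the cost of one response per server.
import random

P = 2**31 - 1  # prime field modulus, illustrative only

def interp_at(xs, ys, x):
    """Lagrange-interpolate the points (xs, ys) over GF(P) and evaluate at x."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def batch_query_shares(indices, num_records, num_servers, t=1):
    """Ramp-share the batch: entry k of the query vector is a polynomial that
    evaluates to [indices[m] == k] at point m+1, blinded by t random points."""
    q = len(indices)
    server_xs = list(range(q + 1, q + 1 + num_servers))
    shares = [[0] * num_records for _ in range(num_servers)]
    for k in range(num_records):
        xs = list(range(1, q + 1)) + [q + 1000 + j for j in range(t)]
        ys = [1 if indices[m] == k else 0 for m in range(q)] \
             + [random.randrange(P) for _ in range(t)]
        for s, sx in enumerate(server_xs):
            shares[s][k] = interp_at(xs, ys, sx)
    return server_xs, shares

db = [random.randrange(P) for _ in range(8)]      # 8 one-word records
server_xs, shares = batch_query_shares([2, 5], len(db), num_servers=5)
responses = [sum(a * b for a, b in zip(share, db)) % P for share in shares]
# The responses lie on a degree-(q+t-1) polynomial r with r(m+1) = db[indices[m]].
assert interp_at(server_xs, responses, 1) == db[2]
assert interp_at(server_xs, responses, 2) == db[5]
```

Each server sees only uniformly random field elements (the blinding point hides the basis-vector values), yet the client recovers two records from one round of single-record-sized responses.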

2019 · Vol 2019 (4) · pp. 112-131
Author(s): Syed Mahbub Hafiz, Ryan Henry

Abstract We study both the practical and theoretical efficiency of private information retrieval (PIR) protocols in a model wherein several untrusted servers work to obliviously service remote clients’ requests for data and yet no pair of servers colludes in a bid to violate said obliviousness. In exchange for such a strong security assumption, we obtain new PIR protocols exhibiting remarkable efficiency with respect to every cost metric—download, upload, computation, and round complexity—typically considered in the PIR literature. The new constructions extend a multiserver PIR protocol of Shah, Rashmi, and Ramchandran (ISIT 2014), which exhibits a remarkable property of its own: to fetch a b-bit record from a collection of r such records, the client need only download b + 1 bits total. We find that allowing “a bit more” download (and optionally introducing computational assumptions) yields a family of protocols offering very attractive trade-offs. In addition to Shah et al.’s protocol, this family includes as special cases (2-server instances of) the seminal protocol of Chor, Goldreich, Kushilevitz, and Sudan (FOCS 1995) and the recent DPF-based protocol of Boyle, Gilboa, and Ishai (CCS 2016). An implicit “folklore” axiom that dogmatically permeates the research literature on multiserver PIR posits that the latter protocols are the “most efficient” protocols possible in the perfectly and computationally private settings, respectively. Yet our findings soundly refute this supposed axiom: These special cases are (by far) the least performant representatives of our family, with essentially all other parameter settings yielding instances that are significantly faster.
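For reference, the 2-server protocol of Chor et al. that anchors one end of this family reduces, in its simplest form, to an XOR trick. The toy Python sketch below (with an assumed 4-record database of machine words) illustrates it.

```python
# A minimal sketch of the 2-server XOR protocol of Chor et al.: the client
# additively shares a standard basis vector over GF(2); each server XORs
# together the records its share selects. Toy database, illustrative only.
import secrets

def query_shares(index, num_records):
    """Two XOR-shares of the basis vector e_index; each alone is uniform."""
    share0 = [secrets.randbelow(2) for _ in range(num_records)]
    share1 = share0.copy()
    share1[index] ^= 1            # the shares differ only at the queried slot
    return share0, share1

def server_answer(share, db):
    """XOR of the records selected by the share (reveals nothing by itself)."""
    ans = 0
    for bit, record in zip(share, db):
        if bit:
            ans ^= record
    return ans

db = [0b1010, 0b0111, 0b1100, 0b0001]
s0, s1 = query_shares(2, len(db))
assert server_answer(s0, db) ^ server_answer(s1, db) == db[2]
```

Since each share is uniformly random on its own, neither server learns the queried index, matching the non-collusion model the abstract assumes.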


2020 · Vol 4 (02) · pp. 34-45
Author(s): Naufal Dzikri Afifi, Ika Arum Puspita, Mohammad Deni Akbar

Shift to The Front II Komplek Sukamukti Banjaran is a project implemented by a telecommunications company. Like every project, its implementation has a time limit specified in the contract. Project scheduling plays an important role in predicting both the cost and the duration of a project, and every project should be completed on or before the date specified in the contract. Delay in a project can be anticipated by accelerating the completion duration using the crashing method with linear programming; linear programming streamlines the crashing calculation, which would otherwise require repeated manual iteration. The objective function in this scheduling problem is to minimize cost. This study aims to find a trade-off between cost and the minimum time expected to complete the project. The acceleration of the project duration was evaluated with the addition of 4, 3, 2, and 1 hours of overtime work. The normal duration of this project is 35 days with a service fee of Rp. 52,335,690. From the results of the crashing analysis, the chosen alternative is to add 1 hour of overtime, reducing the duration to 34 days at a total service cost of Rp. 52,375,492. This acceleration affects the entire project: Shift to The Front II spans 33 different locations, and if all of these locations can be accelerated, the completion duration of the entire project can be shortened effectively.
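As a rough illustration of the crashing-by-linear-programming approach, the Python/SciPy sketch below minimizes crash cost subject to a deadline on a hypothetical two-activity network. The durations and per-day costs are illustrative assumptions chosen to echo the abstract’s 35-day/34-day totals, not the project’s actual activity data.

```python
# Project crashing as an LP: minimize crash cost subject to precedence
# constraints and a contractual deadline. Two-activity toy network, assumed.
from scipy.optimize import linprog

# Variables: x = [t_start, t_mid, t_end, y1, y2]
#   Activity 1 (start->mid): 10 days normal, crashable by <= 2 days.
#   Activity 2 (mid->end):   25 days normal, crashable by <= 3 days.
c = [0, 0, 0, 50_000, 39_802]   # Rp per day crashed on each activity

A_ub = [
    [1, -1, 0, -1, 0],   # t_mid >= t_start + 10 - y1
    [0, 1, -1, 0, -1],   # t_end >= t_mid  + 25 - y2
    [0, 0, 1, 0, 0],     # t_end <= 34 (one day under the 35-day normal plan)
]
b_ub = [-10, -25, 34]
bounds = [(0, 0), (0, None), (0, None), (0, 2), (0, 3)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)  # crashes the cheaper activity by 1 day: extra Rp 39,802
```

The solver crashes only the cheaper activity by the single day needed to meet the deadline, which is exactly the kind of least-cost trade-off the study seeks.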


2011
Author(s): Huong Giang (Lily) Nguyen, Xiangkang Yin, Luong Hoang Luong

2020 · Vol 12 (7) · pp. 2767
Author(s): Víctor Yepes, José V. Martí, José García

The optimization of cost and CO₂ emissions in earth-retaining walls is of practical relevance, since these structures are widely used in civil engineering. Optimizing cost is essential for the competitiveness of the construction company, while optimizing emissions reduces the environmental impact of construction. To address the optimization, black hole metaheuristics were used, along with a discretization mechanism based on min–max normalization. The stability of the algorithm was evaluated with respect to the solutions obtained, and the steel and concrete quantities obtained in both optimizations were analyzed. Additionally, the geometric variables of the structure were compared. Finally, the results were compared with those of another algorithm applied to the same problem. The results show a trade-off between the use of steel and concrete: solutions that minimize CO₂ emissions favor concrete over steel compared with those that minimize cost. When the geometric variables are compared, most remain similar in both optimizations, except for the distance between buttresses. The comparison with the other algorithm shows that the black hole algorithm performs well on this optimization problem.
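The abstract does not spell out the discretization mechanism, but a plausible minimal sketch is a continuous black-hole position update followed by min–max normalization onto a discrete design catalogue. Everything below (the toy catalogue, the simplified update with the event-horizon/absorption step omitted) is an assumption for illustration.

```python
# Sketch: continuous black-hole-style move, then min-max normalization to
# map each coordinate into [0, 1] and round onto a discrete design catalogue.
import random

REBAR_CATALOGUE = [8, 10, 12, 16, 20, 25, 32]   # hypothetical bar diameters (mm)

def black_hole_step(star, black_hole):
    """Move a candidate ('star') toward the current best ('black hole')."""
    return [s + random.random() * (bh - s) for s, bh in zip(star, black_hole)]

def min_max_discretize(position, lo, hi, catalogue):
    """Min-max normalize each coordinate, then snap it onto the catalogue."""
    out = []
    for x, l, h in zip(position, lo, hi):
        z = (x - l) / (h - l) if h > l else 0.0
        z = min(max(z, 0.0), 1.0)               # clamp to [0, 1]
        out.append(catalogue[round(z * (len(catalogue) - 1))])
    return out

star = [14.2, 9.1, 27.8]
best = [16.0, 10.0, 25.0]
moved = black_hole_step(star, best)
print(min_max_discretize(moved, lo=[8] * 3, hi=[32] * 3, catalogue=REBAR_CATALOGUE))
```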


Author(s): Vincent E. Castillo, John E. Bell, Diane A. Mollenkopf, Theodore P. Stank

2021 · Vol 11 (1)
Author(s): Jeonghyuk Park, Yul Ri Chung, Seo Taek Kong, Yeong Won Kim, Hyunho Park, ...

Abstract There have been substantial efforts in using deep learning (DL) to diagnose cancer from digital images of pathology slides. Existing algorithms typically train deep neural networks that are either specialized to specific cohorts or trained on an aggregate of all cohorts when only a few images are available for the target cohort. In our experiments with The Cancer Genome Atlas dataset, a trade-off between decreasing the number of models and their cancer detection performance was evident: cohort-specific models achieve higher performance, but at the cost of having to acquire a large dataset from each cohort of interest. Constructing annotated datasets for individual cohorts is extremely time-consuming, with the acquisition cost growing linearly with the number of cohorts. Another issue with cohort-specific models is maintenance: all of them may need to be adjusted when a new DL algorithm is adopted (training even a single model can require a non-negligible amount of computation) or when more data is added to some cohorts. To resolve the sub-optimal behavior of a universal cancer detection model trained on an aggregate of cohorts, we investigated how cohorts can be grouped to augment a dataset without the number of models growing linearly with the number of cohorts. This study introduces several metrics that measure the morphological similarity between cohort pairs and demonstrates how these metrics can be used to control the trade-off between performance and the number of models.
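The abstract leaves the grouping procedure unspecified; one plausible reading is that, given a pairwise cohort-similarity matrix computed with the proposed metrics, cohorts are clustered and one model is trained per cluster. The sketch below uses made-up similarity values and off-the-shelf hierarchical clustering, so it illustrates the trade-off knob rather than the paper’s actual method.

```python
# Sketch: cluster cohorts on a pairwise morphological-similarity matrix and
# serve each cluster with one model. Cohort names and values are illustrative.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

cohorts = ["LUAD", "LUSC", "COAD", "READ", "BRCA"]
sim = np.array([
    [1.00, 0.85, 0.30, 0.28, 0.40],
    [0.85, 1.00, 0.32, 0.30, 0.38],
    [0.30, 0.32, 1.00, 0.90, 0.35],
    [0.28, 0.30, 0.90, 1.00, 0.33],
    [0.40, 0.38, 0.35, 0.33, 1.00],
])
dist = 1.0 - sim                              # similarity -> distance
Z = linkage(squareform(dist), method="average")
# The threshold t is the trade-off knob: larger t -> fewer models (coarser
# groups), smaller t -> more cohort-specific (typically better) models.
labels = fcluster(Z, t=0.5, criterion="distance")
for name, group in zip(cohorts, labels):
    print(name, "-> model group", group)
```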


2020 · Vol 15 (1) · pp. 4-17
Author(s): Jean-François Biasse, Xavier Bonnetain, Benjamin Pring, André Schrottenloher, William Youmans

Abstract We propose a heuristic algorithm to solve the underlying hard problem of the CSIDH cryptosystem (and of other isogeny-based cryptosystems using elliptic curves whose endomorphism ring is isomorphic to an imaginary quadratic order 𝒪). Let Δ = Disc(𝒪) (in CSIDH, Δ = −4p for p the security parameter) and let 0 < α < 1/2. Our algorithm requires:

- a classical circuit of size $2^{\tilde{O}\left(\log(|\Delta|)^{1-\alpha}\right)}$;
- a quantum circuit of size $2^{\tilde{O}\left(\log(|\Delta|)^{\alpha}\right)}$;
- polynomial classical and quantum memory.

Essentially, we propose to reduce the size of the quantum circuit below the state-of-the-art complexity $2^{\tilde{O}\left(\log(|\Delta|)^{1/2}\right)}$ at the cost of increasing the required classical circuit size. The classical circuit remains subexponential, which is a superpolynomial improvement over the exponential classical state-of-the-art solutions to these problems. Our method requires polynomial memory, both classical and quantum.
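Restating the trade-off in the abstract’s own notation, with $L := \log|\Delta|$:

```latex
\[
  \underbrace{2^{\tilde{O}\left(L^{1-\alpha}\right)}}_{\text{classical circuit}}
  \quad\text{vs.}\quad
  \underbrace{2^{\tilde{O}\left(L^{\alpha}\right)}}_{\text{quantum circuit}},
  \qquad 0 < \alpha < \tfrac{1}{2}.
\]
```

As $\alpha$ approaches 1/2 the two sizes converge to the prior state of the art $2^{\tilde{O}(L^{1/2})}$; pushing $\alpha$ toward 0 shrinks the quantum circuit toward polynomial size while the classical circuit, though larger, stays subexponential in $L$.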


Author(s): Henk Ernst Blok, Djoerd Hiemstra, Sunil Choenni, Franciska de Jong, Henk M. Blanken, ...

2013 · Vol 2013 · pp. 1-6
Author(s): Orhan Bozkurt, Mehmet İslamoğlu

As the variety of materials used in the construction industry has expanded, new techniques have been adopted to optimize the quality and efficiency of the output. Recent innovations in the construction industry have therefore led researchers to focus more on increasing the mechanical efficiency of the output than on its cost effectiveness. However, practitioners in the industry are particularly concerned with the cost effectiveness of the work; in other words, they also want researchers to justify innovative techniques economically. The aim of this study is to provide a comparative analysis of the cost efficiency of polymer concrete used to manufacture durable and long-lasting reinforced concrete structures.

