Multilinear Maps from Obfuscation

2020
Vol 33 (3)
pp. 1080-1113
Author(s):
Martin R. Albrecht
Pooya Farshim
Shuai Han
Dennis Hofheinz
Enrique Larraia
...  

Abstract. We provide constructions of multilinear groups equipped with natural hard problems from indistinguishability obfuscation, homomorphic encryption, and NIZKs. This complements known results on the construction of indistinguishability obfuscators from multilinear maps in the reverse direction. We provide two distinct, but closely related, constructions and show that multilinear analogues of the DDH assumption hold for them. Our first construction is symmetric and comes with a $\kappa$-linear map $\mathbf{e}: \mathbb{G}^{\kappa} \longrightarrow \mathbb{G}_T$ for prime-order groups $\mathbb{G}$ and $\mathbb{G}_T$. To establish the hardness of the $\kappa$-linear DDH problem, we rely on the existence of a base group for which the $\kappa$-strong DDH assumption holds. Our second construction is for the asymmetric setting, where $\mathbf{e}: \mathbb{G}_1 \times \cdots \times \mathbb{G}_{\kappa} \longrightarrow \mathbb{G}_T$ for a collection of $\kappa+1$ prime-order groups $\mathbb{G}_i$ and $\mathbb{G}_T$, and relies only on the 1-strong DDH assumption in its base group. In both constructions, the linearity $\kappa$ can be set to any arbitrary but a priori fixed polynomial value in the security parameter. We rely on a number of powerful tools in our constructions: probabilistic indistinguishability obfuscation (PIO), dual-mode NIZK proof systems (with perfect soundness, witness indistinguishability, and zero knowledge), and additively homomorphic encryption for the group $\mathbb{Z}_N^{+}$. At a high level, we enable "bootstrapping" multilinear assumptions from their simpler counterparts in standard cryptographic groups and show the equivalence of PIO and multilinear maps under the existence of the aforementioned primitives.
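For orientation, the multilinear analogue of DDH referred to above can be stated in its standard form as follows (a generic sketch of the assumption, not necessarily the paper's exact formulation): given $\kappa+1$ random exponents in the source group, the target-group element carrying their product in the exponent should be computationally indistinguishable from a random target-group element,

$$\bigl(g,\; g^{a_1},\ldots,g^{a_{\kappa+1}},\; \mathbf{e}(g,\ldots,g)^{a_1 a_2 \cdots a_{\kappa+1}}\bigr) \;\approx_c\; \bigl(g,\; g^{a_1},\ldots,g^{a_{\kappa+1}},\; \mathbf{e}(g,\ldots,g)^{r}\bigr),$$

where $a_1,\ldots,a_{\kappa+1}$ and $r$ are sampled uniformly from $\mathbb{Z}_p$.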

2021
Vol 13 (8)
pp. 4113
Author(s):
Valeria Superti
Cynthia Houmani
Ralph Hansmann
Ivo Baur
Claudia R. Binder

With increasing urbanisation, new approaches such as the Circular Economy (CE) are needed to reduce resource consumption. In Switzerland, Construction & Demolition (C&D) waste accounts for the largest share of waste (84%). Beyond limiting the depletion of primary resources, implementing recycling strategies for C&D waste, such as using recycled aggregates to produce recycled concrete (RC), can also decrease the amount of landfilled C&D waste. However, the use of RC still faces adoption barriers. In this research, we examined the factors driving the adoption of recycled products for a CE in the C&D sector, focusing on RC for structural applications. We developed a behavioural framework to understand the determinants of architects' decisions to recommend RC. We collected and analysed survey data from 727 respondents. The analyses focused on architects' a priori beliefs about RC, behavioural factors affecting their recommendations of RC, and project-specific contextual factors that might play a role in the recommendation of RC. Our results show that the main factors facilitating architects' recommendation of RC are a senior position, a high level of knowledge of RC and of the Minergie label, beliefs about the reduced environmental impact of RC, and favourable prescriptive social norms expressed by clients and other architects. We emphasise the importance of a holistic theoretical framework in approaching decision-making processes related to the adoption of innovation, and the importance of the agency of each involved actor for a transition towards a circular construction sector.


2021
Vol 15 (2)
pp. 1-25
Author(s):
Amal Alhosban
Zaki Malik
Khayyam Hashmi
Brahim Medjahed
Hassan Al-Ababneh

Service-Oriented Architectures (SOA) enable the automatic creation of business applications from independently developed and deployed Web services. As Web services are inherently a priori unknown, delivering reliable Web service compositions is a significant and challenging problem. Services involved in an SOA often do not operate under a single processing environment and need to communicate using different protocols over a network. Under such conditions, designing a fault management system that is both efficient and extensible is a challenging task. In this article, we propose SFSS, a self-healing framework for SOA fault management that predicts, identifies, and resolves faults in SOAs. In SFSS, we identified a set of high-level exception handling strategies based on the QoS performance of the different component services and the preferences articulated by the service consumers. Multiple recovery plans are generated and evaluated according to the performance of the selected component services, and the best recovery plan is then executed. We assess the overall user dependence (i.e., whether the service is independent of other services) using the generated plan and the available invocation information of the component services. The experimental results show that the proposed technique enhances service selection quality by choosing the services with the highest scores and improves overall system performance. They also indicate the applicability of SFSS and show improved performance in comparison to similar approaches.
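The plan-selection step described above can be illustrated with a minimal sketch: candidate recovery plans are scored by a weighted combination of the QoS attributes of their component services, and the highest-scoring plan is chosen. The attribute names and weights below are illustrative assumptions, not the SFSS implementation.

# Minimal sketch: rank candidate recovery plans by a weighted QoS score
# of their component services and execute the best one.
def plan_score(plan, weights):
    """Average weighted QoS score over the services in one recovery plan."""
    total = 0.0
    for service in plan["services"]:
        total += (weights["availability"] * service["availability"]
                  - weights["latency"] * service["latency"]
                  - weights["cost"] * service["cost"])
    return total / len(plan["services"])

def select_recovery_plan(plans, weights):
    """Return the candidate recovery plan with the highest score."""
    return max(plans, key=lambda p: plan_score(p, weights))

plans = [
    {"name": "retry-same-service",
     "services": [{"availability": 0.95, "latency": 0.30, "cost": 0.10}]},
    {"name": "substitute-service",
     "services": [{"availability": 0.99, "latency": 0.45, "cost": 0.20}]},
]
weights = {"availability": 1.0, "latency": 0.5, "cost": 0.3}
print(select_recovery_plan(plans, weights)["name"])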


2016
Vol 2016
pp. 1-17
Author(s):
Erkhembayar Jadamba
Miyoung Shin

Drug repositioning offers new clinical indications for old drugs. Recently, many computational approaches have been developed to repurpose marketed drugs for human diseases by mining various kinds of biological data, including disease expression profiles, pathways, drug phenotype expression profiles, and chemical structure data. However, despite encouraging results, a comprehensive and efficient computational drug repositioning approach is needed that includes the high-level integration of available resources. In this study, we propose a systematic framework employing experimental genomic knowledge and pharmaceutical knowledge to reposition drugs for a specific disease. Specifically, we first obtain experimental genomic knowledge from disease gene expression profiles and pharmaceutical knowledge from drug phenotype expression profiles, and construct a pathway-drug network representing a priori known associations between drugs and pathways. To discover promising candidates for drug repositioning, we initialize node labels for the pathway-drug network using identified disease pathways and known drugs associated with the phenotype of interest, and perform network propagation in a semisupervised manner. To evaluate our method, we conducted experiments to reposition 1309 drugs based on four different breast cancer datasets and verified the promising candidate drugs for breast cancer by a two-step validation procedure. Our experimental results show that the proposed framework is a useful approach for discovering promising candidates for breast cancer treatment.
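The semisupervised propagation step can be sketched as follows: known disease pathways and drugs seed the node labels, and the labels are diffused over the normalized adjacency matrix of the pathway-drug network. This is a generic label-propagation sketch assuming a symmetric adjacency matrix, not the authors' exact algorithm.

import numpy as np

def propagate(A, y0, alpha=0.7, iters=100):
    """A: symmetric adjacency matrix; y0: initial node labels (seeds = 1)."""
    d = A.sum(axis=1)
    d[d == 0] = 1.0                      # guard isolated nodes
    S = A / np.sqrt(np.outer(d, d))      # symmetric normalization D^{-1/2} A D^{-1/2}
    y = y0.copy()
    for _ in range(iters):
        y = alpha * S @ y + (1 - alpha) * y0
    return y

# Toy network: nodes 0-1 are pathways, nodes 2-4 are drugs.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
y0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # pathway 0 is a known disease pathway
print(propagate(A, y0).round(3))            # higher scores = stronger candidates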


Author(s):  
Dewi Ulya Mailasari

This article describes students' difficulties in memorizing English vocabulary at the Integrated Islamic Elementary School (SDIT) Amal Insani Jepara. The study uses a descriptive qualitative method. The results indicate that students have difficulty memorizing vocabulary, in part because there is no visible intrinsic motivation: students regard English as just another compulsory subject. In addition to the factors of integration and talent, attitudinal and motivational factors appear to play the main role in the difficulties that SDIT Amal Insani students have in remembering English vocabulary. Students with a high level of intelligence coupled with high enthusiasm, for example because there is a reward from the teacher, remember vocabulary as easily as other subject matter. Students with an a priori negative attitude and low motivation find it difficult to remember the vocabulary they have been given.


2016
Vol 2 (1)
pp. 475-478
Author(s):
Nico Hoffmann
Edmund Koch
Uwe Petersohn
Matthias Kirsch
Gerald Steiner

Abstract. Intraoperative thermal neuroimaging is a novel intraoperative imaging technique for the characterization of perfusion disorders, neural activity, and other pathological changes of the brain. It is based on the correlation of (sub-)cortical metabolism and perfusion with the heat emitted by the cortical surface. In order to minimize the required computational resources and prevent unwanted artefacts in subsequent data analysis workflows, foreground detection is an important preprocessing technique to differentiate pixels representing the cerebral cortex from background objects. We propose an efficient classification framework that integrates characteristic dynamic thermal behaviour into this classification task to include additional discriminative features. The first stage of our framework consists of learning a representation of characteristic thermal time-frequency behaviour. This representation models latent interconnections in the time-frequency domain that cover specific, yet a priori unknown, thermal properties of the cortex. In a second stage these features are used to classify each pixel's state with conditional random fields. We quantitatively evaluate several approaches to learning high-level features and their impact on the overall prediction accuracy. The introduction of high-level features leads to a significant accuracy improvement over a baseline classifier.
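The two-stage idea can be illustrated with a deliberately simplified sketch: per-pixel time-frequency features are computed from the temporal thermal signal and fed to a pixel-wise classifier. The paper learns its feature representation and uses conditional random fields for the second stage; the plain FFT features and logistic regression below are stand-ins for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pixel_features(video):
    """video: (T, H, W) thermal sequence -> per-pixel magnitude spectra."""
    T, H, W = video.shape
    signals = video.reshape(T, H * W).T          # one temporal signal per pixel
    return np.abs(np.fft.rfft(signals, axis=1))  # time-frequency features

rng = np.random.default_rng(0)
video = rng.normal(size=(128, 16, 16))           # synthetic stand-in for real data
labels = rng.integers(0, 2, size=16 * 16)        # 1 = cortex, 0 = background (toy)

X = pixel_features(video)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))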


Author(s):  
Diandian Zhang
Li Lu
Jeronimo Castrillon
Torsten Kempf
Gerd Ascheid
...  

Spinlocks are a common technique in Multi-Processor Systems-on-Chip (MPSoCs) to protect shared resources and prevent data corruption. Without a priori application knowledge, the control of spinlocks is often highly random, which can degrade system performance significantly. To improve this, a centralized control mechanism for spinlocks is proposed in this paper, which utilizes application-specific information during spinlock control. The complete control flow is presented, starting from the integration of high-level user-defined information down to a low-level realization of the control. An Application-Specific Instruction-set Processor (ASIP) called OSIP, originally designed for task scheduling and mapping, is extended to support this mechanism. The case studies demonstrate the high efficiency of the proposed approach and at the same time highlight the efficiency and flexibility advantages of using an ASIP as the system controller in MPSoCs.
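A conceptual sketch of application-aware spinlock arbitration (not the OSIP realization, which is implemented on the ASIP itself): a central controller grants a contended lock to the highest-priority waiting task instead of leaving the outcome to chance.

import heapq

class CentralLockController:
    def __init__(self):
        self.owner = {}      # lock_id -> core currently holding the lock
        self.waiting = {}    # lock_id -> max-heap of (-priority, core)

    def request(self, lock_id, core, priority):
        """Grant the lock if it is free, otherwise enqueue the requester by priority."""
        if lock_id not in self.owner:
            self.owner[lock_id] = core
            return True
        heapq.heappush(self.waiting.setdefault(lock_id, []), (-priority, core))
        return False

    def release(self, lock_id):
        """Hand the lock to the highest-priority waiter, if any."""
        queue = self.waiting.get(lock_id, [])
        if queue:
            _, core = heapq.heappop(queue)
            self.owner[lock_id] = core
            return core
        del self.owner[lock_id]
        return None

ctrl = CentralLockController()
ctrl.request("frame_buffer", core=0, priority=1)
ctrl.request("frame_buffer", core=1, priority=5)   # high-priority task waits
print(ctrl.release("frame_buffer"))                # lock is passed to core 1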


2019
Vol 214
pp. 07017
Author(s):
Jean-Marc Andre
Ulf Behrens
James Branson
Philipp Brummer
Olivier Chaze
...  

The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier-1 Grid sites together. Moreover, the farm is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and can hence access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts from the different types of machines and their underlying segmented networks. During LHC technical stop periods, the HLT cloud is set to a static mode of operation in which it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, occurring once or more per day. For that, the cloud dynamically follows the LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
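The dynamic use of resources between fills amounts to a control loop that follows the LHC beam state. The sketch below is highly simplified; every function name is a hypothetical stand-in for the actual DAQ and OpenStack interfaces, which are not described here.

import time

OPPORTUNISTIC_STATES = {"no beam", "beam dump", "machine development"}

def control_loop(get_beam_state, resume_vms, hibernate_vms, poll_seconds=60):
    """Resume opportunistic VMs when the LHC is idle, hibernate them before data taking."""
    cloud_active = False
    while True:
        state = get_beam_state()        # hypothetical: query the LHC/DAQ status
        if state in OPPORTUNISTIC_STATES and not cloud_active:
            resume_vms()                # hypothetical: wake hibernated VMs
            cloud_active = True
        elif state not in OPPORTUNISTIC_STATES and cloud_active:
            hibernate_vms()             # hypothetical: suspend VMs so they never compete with the DAQ
            cloud_active = False
        time.sleep(poll_seconds)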


2020
Vol 2020
pp. 1-14
Author(s):
Huige Wang
Kefei Chen
Tianyu Pan
Yunlei Zhao

Functional encryption (FE) can implement fine-grained control over encrypted plaintext by permitting users to compute only some specified functions on the encrypted plaintext, using private keys associated with those functions. Recently, many FE schemes have been put forward; nonetheless, most of them cannot resist chosen-ciphertext attacks (CCAs), especially those in the secret-key setting. This changed with work on a generic transformation of public-key functional encryption (PK-FE) from chosen-plaintext (CPA) security to chosen-ciphertext (CCA) security, where the underlying schemes are required to have special properties such as restricted delegation or verifiability; however, no examples of underlying schemes with these features have been found so far. Later, a CCA-secure functional encryption scheme from projective hash functions was proposed, but it applies only to inner product functions. Constructing such a scheme therefore requires some nontrivial techniques. Our key contribution in this work is to propose CCA-secure functional encryption in the public-key and secret-key settings, respectively. The existing generic transformation from (adaptively) simulation-based CPA- (SIM-CPA-) secure schemes for deterministic functions to (adaptively) simulation-based CCA- (SIM-CCA-) secure schemes for randomized functions does not imply whether it can be applied directly to the CCA setting for deterministic functions. We give an affirmative answer and derive a SIM-CCA-secure scheme for deterministic functions by making some modifications to it. Based on this derived scheme, we also propose an (adaptively) indistinguishability-based CCA- (IND-CCA-) secure SK-FE for deterministic functions. The final results show that our scheme can be instantiated both under nonstandard assumptions (e.g., hard problems on multilinear maps and indistinguishability obfuscation (IO)) and under standard assumptions (e.g., DDH, RSA, LWE, and LPN).


2016
Vol 19 (A)
pp. 255-266
Author(s):
Jung Hee Cheon
Jinhyuck Jeong
Changmin Lee

Let $\mathbf{f}$ and $\mathbf{g}$ be polynomials of bounded Euclidean norm in the ring $\mathbb{Z}[X]/\langle X^{n}+1\rangle$. Given the polynomial $[\mathbf{f}/\mathbf{g}]_{q}\in \mathbb{Z}_{q}[X]/\langle X^{n}+1\rangle$, the NTRU problem is to find $\mathbf{a},\mathbf{b}\in \mathbb{Z}[X]/\langle X^{n}+1\rangle$ with small Euclidean norm such that $[\mathbf{a}/\mathbf{b}]_{q}=[\mathbf{f}/\mathbf{g}]_{q}$. We propose an algorithm to solve the NTRU problem, which runs in $2^{O(\log^{2}\lambda)}$ time when $\Vert \mathbf{g}\Vert$, $\Vert \mathbf{f}\Vert$, and $\Vert \mathbf{g}^{-1}\Vert$ are within some range. The main technique of our algorithm is the reduction of a problem on a field to one on a subfield. The GGH scheme, the first candidate of an (approximate) multilinear map, was recently found to be insecure by the Hu–Jia attack using low-level encodings of zero, but no polynomial-time attack was known without them. In the GGH scheme without low-level encodings of zero, our algorithm can be applied directly to attack the scheme if we have some top-level encodings of zero and a known pair of plaintext and ciphertext. Using our algorithm, we can construct a level-$0$ encoding of zero and use it to attack a security ground of this scheme in time quasi-polynomial in its security parameter, using the parameters suggested by Garg, Gentry and Halevi ['Candidate multilinear maps from ideal lattices', Advances in Cryptology – EUROCRYPT 2013 (Springer, 2013) 1–17].
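The field-to-subfield reduction mentioned above can be sketched schematically; this presentation assumes the relative norm map is the tool used and is not the paper's precise statement. For a proper subfield $K'$ of $K=\mathbb{Q}[X]/\langle X^{n}+1\rangle$, an instance is mapped down via

$$h=[\mathbf{f}/\mathbf{g}]_{q}\;\longmapsto\; N_{K/K'}(h)=\bigl[\,N_{K/K'}(\mathbf{f})/N_{K/K'}(\mathbf{g})\,\bigr]_{q}\in K',$$

where the relative norms of short $\mathbf{f}$ and $\mathbf{g}$ remain comparatively short, so lattice reduction in the lower-dimensional subfield can recover a short solution, which is then lifted back to $K$.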


2009
Vol 6 (2)
pp. 3007-3040
Author(s):
J. Timmermans
W. Verhoef
C. van der Tol
Z. Su

Abstract. In remote sensing, evapotranspiration is estimated using a single surface temperature. This surface temperature is an aggregate over multiple canopy components. The temperatures of the individual components can differ significantly, introducing errors in the evapotranspiration estimates. The temperature aggregate also has a high level of directionality. An inversion method is presented in this paper to retrieve four canopy component temperatures from directional brightness temperatures. The Bayesian method uses both a priori information and sensor characteristics to solve the ill-posed inversion problem. The method is tested in two case studies: 1) a sensitivity analysis, using a large forward-simulated dataset, and 2) an experimental study, using datasets from two field campaigns. The results of the sensitivity analysis show that the Bayesian approach is able to retrieve the four component temperatures from directional brightness temperatures with good success rates using multi-directional sensors ($\Re_{\mathrm{spectra}} \approx 0.3$, $\Re_{\mathrm{gonio}} \approx 0.3$, and $\Re_{\mathrm{AATSR}} \approx 0.5$), and no improvement using mono-angular sensors ($\Re \approx 1$). The results of the experimental study show that the approach gives good results for high LAI values (RMSE$_{\mathrm{grass}}$ = 0.50 K, RMSE$_{\mathrm{wheat}}$ = 0.29 K, RMSE$_{\mathrm{sugar\ beet}}$ = 0.75 K, RMSE$_{\mathrm{barley}}$ = 0.67 K), but for low LAI values the measurement setup introduces extra disturbances in the directional brightness temperatures (RMSE$_{\mathrm{young\ maize}}$ = 2.85 K, RMSE$_{\mathrm{mature\ maize}}$ = 2.85 K). As these disturbances were only present for two crops and can be eliminated using masked thermal images, the method is considered successful.
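For orientation, a Bayesian retrieval of this kind is typically posed as a regularized least-squares problem; the formulation below is the standard optimal-estimation form with generic symbols, not necessarily the paper's exact notation:

$$\hat{\mathbf{T}} = \arg\min_{\mathbf{T}} \;\bigl(\mathbf{y}-F(\mathbf{T})\bigr)^{\top}\mathbf{S}_{\epsilon}^{-1}\bigl(\mathbf{y}-F(\mathbf{T})\bigr) + \bigl(\mathbf{T}-\mathbf{T}_{a}\bigr)^{\top}\mathbf{S}_{a}^{-1}\bigl(\mathbf{T}-\mathbf{T}_{a}\bigr),$$

where $\mathbf{y}$ holds the directional brightness temperatures, $F$ is the forward (observation) model, $\mathbf{T}_{a}$ the a priori component temperatures, and $\mathbf{S}_{\epsilon}$, $\mathbf{S}_{a}$ the sensor-noise and a priori covariance matrices.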

