Zvika Brakerski
and
Henry Yuen
Quantum Garbled Circuits
Manuscript, 2020.
[Abstract] [PDF]
We present a garbling scheme for quantum circuits, thus achieving a decomposable randomized encoding scheme for quantum computation. Specifically, given a quantum circuit and a quantum input, we show how to compute an encoding from which it is possible to derive the output of the computation and nothing else. In the classical setting, garbled circuits (and randomized encodings in general) are a versatile cryptographic tool with many applications such as secure multiparty computation, delegated computation, depth-reduction of cryptographic primitives, complexity lower bounds, and more. However, a quantum analogue for garbling general circuits was not known prior to this work. We hope that our quantum randomized encoding scheme can similarly be useful for applications in quantum computing and cryptography.
The properties of our scheme are analogous to ones achieved in the classical setting (in particular to the so-called point-and-permute garbling method). Our scheme has perfect correctness, and has perfect information-theoretic security if we allow the encoding size to grow exponentially with the depth of the circuit. This exponential blowup can be avoided via computational assumptions (specifically, the existence of quantum-secure pseudorandom generators). The encoding process is decomposable: each input qubit can be encoded independently, when given access to a string of classical randomness and EPR pairs. The complexity of the encoding is comparable to that of classical garbled circuits in terms of dependence on the size and depth of the encoded circuit (and on the security parameter in the computational setting). Furthermore, the encoding can be computed via a constant-depth quantum circuit with bounded-arity gates as well as quantum fan-out gates (which come "for free" in the classical setting).
Gorjan Alagic,
Zvika Brakerski,
Yfke Dulek
and
Christian Schaffner
Impossibility of Quantum Virtual Black-Box Obfuscation of Classical Circuits
Manuscript, 2020.
[Abstract] [PDF]
Virtual black-box obfuscation is a strong cryptographic primitive: it encrypts a circuit while maintaining its full input/output functionality. A remarkable result by Barak et al. (Crypto 2001) shows that a general obfuscator that obfuscates classical circuits into classical circuits cannot exist. A promising direction that circumvents this impossibility result is to obfuscate classical circuits into quantum states, which would potentially be better capable of hiding information about the obfuscated circuit. We show that, under the assumption that learning-with-errors (LWE) is hard for quantum computers, this quantum variant of virtual black-box obfuscation of classical circuits is generally impossible. On the way, we show that in the presence of dependent classical auxiliary input, even the small class of classical point functions cannot be quantum virtual black-box obfuscated.

A proof of quantumness is a method for provably demonstrating (to a classical verifier) that a quantum device can perform computational tasks that a classical device with comparable resources cannot. Providing a proof of quantumness is the first step towards constructing a useful quantum computer. There are currently three approaches for exhibiting proofs of quantumness: (i) Inverting a classically-hard one-way function (e.g. using Shor's algorithm). This seems technologically out of reach. (ii) Sampling from a classically-hard-to-sample distribution (e.g. BosonSampling). This may be within reach of near-term experiments, but for all such tasks known, verification requires exponential time. (iii) Interactive protocols based on cryptographic assumptions. The use of a trapdoor scheme allows for efficient verification, and implementation seems to require far fewer resources than (i), yet still more than (ii).
In this work we propose a significant simplification to approach (iii) by employing the random oracle heuristic. (We note that we do not apply the Fiat-Shamir paradigm.) We give a two-message (challenge-response) proof of quantumness based on any trapdoor claw-free function. In contrast to earlier proposals we do not need an adaptive hardcore bit property. This allows the use of smaller security parameters and more diverse computational assumptions (such as Ring Learning with Errors), significantly reducing the quantum computational effort required for a successful demonstration.

We propose a new approach to construct general-purpose indistinguishability obfuscation (iO). Our construction is obtained via a new intermediate primitive that we call split fully-homomorphic encryption (split FHE), which we show to be sufficient for constructing iO. Specifically, split FHE is FHE where decryption takes the following two-step syntactic form: (i) A secret decryption step uses the secret key and produces a hint which is (asymptotically) shorter than the length of the encrypted message, and (ii) a public decryption step that only requires the ciphertext and the previously generated hint (and not the entire secret key), and recovers the encrypted message. In terms of security, the hints for a set of ciphertexts should not allow one to violate semantic security for any other ciphertexts.
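As a toy illustration of this two-step syntax (and only the syntax: the sketch below is a plain symmetric scheme with no homomorphism and no security claims, and all names are hypothetical, not from the paper), decryption can be split into a secret step producing a short hint and a public step using only the ciphertext and hint:

```python
import hashlib
import os

class ToySplitDecryption:
    """Hypothetical sketch of the split-decryption SYNTAX only.

    This is NOT the paper's construction: no homomorphism, no lattices.
    It is a symmetric scheme whose decryption happens to split into
    (i) a secret step using the key that outputs a short hint, and
    (ii) a public step using only ciphertext + hint.
    """

    HINT_LEN = 16  # bytes; shorter than the (long) message

    def __init__(self):
        self.sk = os.urandom(32)  # secret key, never leaves its holder

    def encrypt(self, msg: bytes):
        nonce = os.urandom(16)
        # Per-ciphertext pad seed derived from the secret key and nonce.
        hint = hashlib.sha256(self.sk + nonce).digest()[:self.HINT_LEN]
        pad = hashlib.shake_256(hint).digest(len(msg))
        return (nonce, bytes(a ^ b for a, b in zip(msg, pad)))

    def secret_decrypt(self, ct) -> bytes:
        # Step (i): uses sk, outputs a hint shorter than the message.
        nonce, _body = ct
        return hashlib.sha256(self.sk + nonce).digest()[:self.HINT_LEN]

    @staticmethod
    def public_decrypt(ct, hint: bytes) -> bytes:
        # Step (ii): needs only the ciphertext and the hint, not sk.
        _nonce, body = ct
        pad = hashlib.shake_256(hint).digest(len(body))
        return bytes(a ^ b for a, b in zip(body, pad))
```

Note that, as in the definition above, a hint for one ciphertext reveals nothing about others, since each ciphertext uses a fresh nonce.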
Next, we show a generic candidate construction of split FHE based on three building blocks: (i) A standard FHE scheme with linear decrypt-and-multiply (which can be instantiated with essentially all LWE-based constructions), (ii) a linearly homomorphic encryption scheme with short decryption hints (such as the Damgård-Jurik encryption scheme, based on the DCR problem), and (iii) a cryptographic hash function (which can be based on a variety of standard assumptions). Our approach is heuristic in the sense that our construction is not provably secure and makes implicit assumptions about the interplay between these underlying primitives. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model.
We view our construction as a big departure from the state-of-the-art constructions, and it is in fact quite simple.
Zvika Brakerski
and
Omri Shmueli
Scalable Pseudorandom Quantum States
CRYPTO 2020.
[Abstract] [PDF]
Efficiently sampling a quantum state that is hard to distinguish from a truly random quantum state is an elementary task in quantum information theory that has both computational and physical uses. This is often referred to as a pseudorandom (quantum) state generator, or PRS generator for short.
In existing constructions of PRS generators, security scales with the number of qubits in the states, i.e. the (statistical) security parameter for an n-qubit PRS is roughly n. Perhaps counterintuitively, n-qubit PRS are not known to imply k-qubit PRS even for k smaller than n. Therefore, the question of scalability for PRS was thus far open: is it possible to construct n-qubit PRS generators with security parameter λ for all n, λ? Indeed, we believe that PRS with tiny (even constant) n and large λ can be quite useful.
We resolve the problem in this work, showing that any quantum-secure one-way function implies scalable PRS. We follow the paradigm of first showing a statistically secure construction when given oracle access to a random function, and then replacing the random function with a quantum-secure (classical) pseudorandom function to achieve computational security. However, our methods deviate significantly from prior works since scalable pseudorandom states require randomizing the amplitudes of the quantum state, and not just the phase as in all prior works. We show how to achieve this using Gaussian sampling.
Zvika Brakerski,
Venkata Koppula
and
Tamer Mour
NIZK from LPN and Trapdoor Hash via Correlation Intractability for Approximable Relations
CRYPTO 2020.
[Abstract] [PDF]
We present new non-interactive zero-knowledge argument systems (NIZK), based on standard assumptions that were previously not known to imply it. In particular, we rely on the hardness of both the learning parity with noise (LPN) assumption, and the existence of trapdoor hash functions (TDH, defined by Döttling et al., Crypto 2019). Such TDH can be based on a number of standard assumptions, including DDH, QR, DCR, and LWE.
We revisit the correlation intractability (CI) framework for converting Σ-protocols into NIZK, and present a different strategy for instantiating it by putting together two new components. First, while prior works considered the search-complexity of the relations for which CI is sought, we consider their probabilistic representation. Namely, a distribution over lower-complexity functions that bitwise-computes the target function with all but small (constant) probability. The second component is a new perspective for quantifying the class of relations for which CI is achieved. We show that it is instructive to consider CI for approximable relations (CI-Apx), which is quantified by a class of relations but requires CI to hold against any approximation of any relation in this class.
We show that CI-Apx for just constant-degree polynomials suffices for NIZK, if the underlying Σ-protocol is implemented using a suitable commitment scheme. We show that such a commitment scheme can be constructed based on LPN. We then show how to construct CI-Apx for constant-degree polynomials from any suitable TDH (with an enhanced correctness property that is satisfied by all existing TDH constructions).
Zvika Brakerski
and
Vinod Vaikuntanathan
Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE
Manuscript, 2020.
[Abstract] [PDF]
We propose a candidate Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme for circuits, with ciphertext size that depends only on the depth of the policy circuit (and not its size). This, in particular, gives us a Broadcast Encryption (BE) scheme where all parameters (in particular, the size of the keys and ciphertexts) have a polylogarithmic dependence on the number of users, a task that was only known to be achievable assuming ideal multilinear maps or indistinguishability obfuscation. Our construction relies on techniques from lattice-based (and in particular LWE-based) cryptography, but we are unable to provide a security proof. We analyze some attempts at cryptanalysis.
Zvika Brakerski
and
Nico Döttling
Hardness of LWE on General Entropic Distributions
EUROCRYPT 2020.
[Abstract] [PDF]
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape. In many of its applications the so-called ``LWE secret'' is not sampled uniformly, but comes from a distribution with some min-entropy. This variant, known as ``Entropic LWE'', has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius.
In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that it is essentially tight. Tightness is shown unconditionally for highly composite moduli, and via a black-box impossibility result for arbitrary moduli.
Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself: the probability of recovering a random sample s from this distribution given s+e, where e is Gaussian noise (i.e. the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive Entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.

We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round-preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value, as black-box use of primitives tends to lead to more efficient constructions.
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed ``non-compact'' variant of 2-party secret sharing from 2-round OT.
Zvika Brakerski,
Nico Döttling,
Sanjam Garg
and
Giulio Malavolta
Leveraging Linear Decryption: Rate-1 Fully-Homomorphic Encryption and Time-Lock Puzzles
TCC 2019.
[Abstract] [PDF]
We show how to combine a fully-homomorphic encryption scheme with linear decryption and a linearly-homomorphic encryption scheme to obtain constructions with new properties. Specifically, we present the following new results.
- Rate-1 Fully-Homomorphic Encryption: We construct the first scheme with message-to-ciphertext length ratio (i.e., rate) 1−z for z=o(1). Our scheme is based on the hardness of the Learning with Errors (LWE) problem, and z is proportional to the noise-to-modulus ratio of the assumption. Our building block is a construction of a new high-rate linearly-homomorphic encryption scheme. One application of this result is the first general-purpose secure function evaluation protocol in the preprocessing model where the communication complexity is within an additive factor of the optimal insecure protocol.
- Fully-Homomorphic Time-Lock Puzzles: We construct the first time-lock puzzle where one can evaluate any function over a set of puzzles without solving them, from standard assumptions. Prior work required the existence of sub-exponentially hard indistinguishability obfuscation.
Zvika Brakerski
and
Omri Shmueli
(Pseudo) Random Quantum States with Binary Phase
TCC 2019.
[Abstract] [PDF]
We prove a quantum informationtheoretic conjecture due to Ji, Liu and Song (CRYPTO 2018) which suggested that a uniform superposition with random binary phase is statistically indistinguishable from a Haar random state. That is, any polynomial number of copies of the aforementioned state is within exponentially small trace distance from the same number of copies of a Haar random state.
As a consequence, we get a provable elementary construction of pseudorandom quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction of quantum state t-designs for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for t>2), this observation shows that polylogarithmic depth suffices for all t.
We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires a quantum circuit of restricted form: one layer of Hadamard gates, followed by a sequence of Toffoli gates. This structure may be useful for efficiency and simplicity of implementation.
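For intuition, the resulting binary-phase state can be simulated directly as a state vector. This is a toy sketch, not the paper's circuit: the pseudorandom function is replaced by a truly random Boolean function, and only the final amplitudes are computed.

```python
import numpy as np

n = 6                                      # number of qubits (tiny, for simulation)
rng = np.random.default_rng(0)
f = rng.integers(0, 2, size=2**n)          # stand-in for a (pseudo)random f: {0,1}^n -> {0,1}

# State 2^{-n/2} * sum_x (-1)^{f(x)} |x>: the Hadamard layer creates the
# uniform superposition and the phase circuit flips signs according to f.
psi = ((-1.0) ** f) / np.sqrt(2**n)

assert np.isclose(np.linalg.norm(psi), 1.0)    # a valid (normalized) quantum state
assert np.isrealobj(psi)                       # amplitudes are real-valued
assert np.allclose(np.abs(psi), 2 ** (-n / 2)) # uniform magnitudes, only signs vary
```

The assertions mirror the claim above: every amplitude is real and of equal magnitude, with only the sign pattern carrying the (pseudo)randomness.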

In (single-server) Private Information Retrieval (PIR), a server holds a large database DB of size n, and a client holds an index i and wishes to retrieve DB[i] without revealing i to the server. It is well known that information-theoretic privacy, even against an "honest but curious" server, requires linear communication complexity. This is true even if quantum communication is allowed, and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database ("input purification attack"). Nevertheless, there have been some proposals of protocols that achieve sublinear communication and appear to provide some notion of privacy, most notably a protocol due to Le Gall (ToC 2012) with square-root communication complexity, and a protocol by Kerenidis et al. (QIC 2016) with logarithmic communication complexity and a linear amount of shared entanglement.
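The linear-communication baseline is met by the trivial protocol in which the server simply sends the entire database, which hides i perfectly. A minimal sketch (function names are illustrative, not from any of the cited works):

```python
def trivial_pir_server(db):
    # Server's single message: the whole database.
    # Communication is linear in n, but the message is independent of i,
    # so the client's index is perfectly hidden.
    return list(db)

def trivial_pir_client(i, server_msg):
    # Client reads off DB[i] locally; the server learned nothing about i.
    return server_msg[i]

db = [7, 1, 4, 9]
server_msg = trivial_pir_server(db)
assert trivial_pir_client(2, server_msg) == db[2]
assert len(server_msg) == len(db)  # linear communication
```

The protocols cited above aim to beat this linear bound, which is what forces the relaxed privacy model studied in the paper.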
We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sublinear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular, recalling that PIR is a special case of Fully Homomorphic Encryption).
Benny Applebaum,
Zvika Brakerski
and
Rotem Tsabary
Degree 2 is Complete for the Round-Complexity of Malicious MPC
EUROCRYPT 2019.
[Abstract] [PDF]
We show, via a non-interactive reduction, that the existence of a secure multiparty computation (MPC) protocol for degree-2 functions implies the existence of a protocol with the same round complexity for general functions. Thus, when considering the round complexity of MPC, it is sufficient to consider very simple functions.
Our completeness theorem applies in various settings: information-theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blowup which is polynomial in the number of players and in S, 2^D, where S and D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed.
As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases.
- A 3-round perfectly-secure protocol (with guaranteed output delivery) against an active adversary that corrupts less than a quarter of the parties.
- A 2-round statistically-secure protocol that achieves security with ``selective abort'' against an active adversary that corrupts less than half of the parties.
- Assuming one-way functions, a 2-round computationally-secure protocol that achieves security with (standard) abort against an active adversary that corrupts less than half of the parties. This gives a new and conceptually simpler proof of the recent result of Ananth et al. (Crypto 2018).
Technically, our non-interactive reduction draws from the encoding method of Applebaum, Brakerski and Tsabary (TCC 2018). We extend these methods to ones that can be meaningfully analyzed even in the presence of malicious adversaries.

We show that any multiparty functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest. This settles the round complexity of information-theoretic semi-honest MPC, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000). The protocol is efficient for NC^{1} functionalities. Furthermore, given black-box access to a one-way function, the protocol can be made efficient for any polynomial functionality, at the cost of only guaranteeing computational security.
Technically, we extend and relax the notion of randomized encoding to specifically address multiparty functionalities. The property of a multiparty randomized encoding (MPRE) is that if the functionality g is an encoding of the functionality f, then for any (permitted) coalition of players, their respective outputs and inputs in g allow them to simulate their respective inputs and outputs in f, without learning anything else, including the other outputs of f.
Zvika Brakerski
and
Nico Döttling
Two-Message Statistically Sender-Private OT from LWE
TCC 2018.
[Abstract] [PDF]
We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game-based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness-indistinguishable and zero-knowledge proof systems and multiparty computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number-theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate and thus allows instantiating the aforementioned applications in a post-quantum secure manner.
Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.

We propose a generalization of the celebrated Ring Learning with Errors (RLWE) problem (Lyubashevsky, Peikert and Regev, Eurocrypt 2010, Eurocrypt 2013), wherein the ambient ring is not the ring of integers of a number field, but rather an order (a full-rank subring). We show that our Order-LWE problem enjoys worst-case hardness with respect to short-vector problems in invertible-ideal lattices of the order.
The definition allows us to provide a new analysis for the hardness of the abundantly used Polynomial-LWE (PLWE) problem (Stehlé et al., Asiacrypt 2009), different from the one recently proposed by Rosca, Stehlé and Wallet (Eurocrypt 2018).
This suggests that Order-LWE may be used to analyze and possibly design useful relaxations of RLWE.
We show that Order-LWE can naturally be harnessed to prove security for RLWE instances where the ``RLWE secret'' (which often corresponds to the secret key of a cryptosystem) is not sampled uniformly as required for RLWE hardness. We start by showing worst-case hardness even if the secret is sampled from a subring of the sample space. Then, we study the case where the secret is sampled from an ideal of the sample space or a coset thereof (equivalently, some of its CRT coordinates are fixed or leaked). In the latter case, we show an interesting threshold phenomenon where the amount of RLWE noise determines whether the problem is tractable.
Lastly, we address the longstanding question of whether a high-entropy secret is sufficient for RLWE to be intractable. Our result on sampling from ideals shows that simply requiring high entropy is insufficient. We therefore propose a broad class of distributions for which we conjecture that hardness should hold, and provide evidence via a reduction to a concrete lattice problem.
Ben Berger
and
Zvika Brakerski
Zero-Knowledge Protocols for Search Problems
ICALP 2018 (Brief Announcement), SCN 2018.
[Abstract] [PDF]
We consider natural ways to extend the notion of Zero-Knowledge (ZK) Proofs beyond decision problems. Specifically, we consider search problems, and define zero-knowledge proofs in this context as interactive protocols in which the prover can establish the correctness of a solution to a given instance without the verifier learning anything beyond the intended solution, even if it deviates from the protocol. The goal of this work is to initiate a study of Search Zero-Knowledge (search-ZK), the class of search problems for which such systems exist. This class trivially contains search problems where the validity of a solution can be efficiently verified (using a single-message proof containing only the solution). A slightly less obvious, but still straightforward, way to obtain zero-knowledge proofs for search problems is to let the prover send a solution and prove in zero-knowledge that the instance-solution pair is valid. However, there may be other ways to obtain such zero-knowledge proofs, and they may be more advantageous. In fact, we prove that there are search problems for which the aforementioned approach fails, but search zero-knowledge protocols still exist. On the other hand, we show sufficient conditions on search problems under which some form of zero-knowledge can be obtained using the straightforward way.
Zvika Brakerski
and
Yael Kalai
Witness Indistinguishability for any Single-Round Argument with Applications to Access Control
PKC 2020.
[Abstract] [PDF]
Consider an access policy for some resource which only allows access to users of the system who own a certain set of attributes. Specifically, we consider the case where such an access structure is defined by some monotone function f:{0,1}^{N} → {0,1}, belonging to some class of functions F (e.g. conjunctions, space-bounded computation), where N is the number of possible attributes.
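As a toy illustration of such a monotone policy (a conjunction, the simplest member of the class mentioned above; all names are hypothetical), one can sketch:

```python
def conjunction_policy(required):
    """Monotone access policy f: {0,1}^N -> {0,1} that accepts exactly
    when every required attribute position is set (a conjunction)."""
    def f(attrs):
        return all(attrs[i] for i in required)
    return f

# A policy over N = 4 attributes requiring attributes 0 and 3.
f = conjunction_policy({0, 3})
assert f([1, 0, 0, 1])        # holds both required attributes: access granted
assert not f([1, 1, 1, 0])    # missing attribute 3: access denied
# Monotonicity: acquiring more attributes never revokes access.
assert f([1, 1, 1, 1])
```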
In this work we show that any succinct single-round delegation scheme for the function class F can be converted into a succinct single-round private access control protocol. That is, a verifier can be convinced that an approved user (i.e. one which holds an approved set of attributes) is accessing the system, without learning any additional information about the user or the set of attributes.
As a main tool of independent interest, we show that assuming a quasi-polynomially secure two-message oblivious transfer scheme with statistical sender privacy (which can be based on quasi-polynomial hardness of the DDH, QR, DCR or LWE assumptions), we can convert any single-round protocol into a witness indistinguishable one, with similar communication complexity.
Note: Prior versions of this report contained an additional result about a new batch delegation scheme for monotone formulae (see ePrint history for this report). We intend to publish these results in future work.
Zvika Brakerski
Quantum FHE (Almost) as Secure as Classical
CRYPTO 2018.
[Abstract] [PDF]
Fully homomorphic encryption (FHE) schemes allow applying an arbitrary efficient computation to encrypted data without decrypting it first. In Quantum FHE (QFHE) we may want to apply an arbitrary quantumly-efficient computation to (classical or quantum) encrypted data.
We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with comparable properties to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst-case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded-depth multi-key classical FHE.
Technically, we rely on the outline of Mahadev (arXiv 2017) which achieves this functionality by relying on superpolynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.

We give a protocol for producing certifiable randomness from a single untrusted quantum device that is polynomial-time bounded. The randomness is certified to be statistically close to uniform from the point of view of any computationally unbounded quantum adversary that may share entanglement with the quantum device. The protocol relies on the existence of post-quantum secure trapdoor claw-free functions, and introduces a new primitive for constraining the power of an untrusted quantum device. We then show how to construct this primitive based on the hardness of the learning with errors (LWE) problem.
The randomness protocol can also be used as the basis for an efficiently verifiable quantum supremacy proposal, thus answering an outstanding challenge in the field.

We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of 1/2 − 1/poly(n). Thus we can only show that "very hard" LPN is harder than some "very mildly hard" worst-case problem.
Specifically, we consider the (n,m,w)-nearest codeword problem ((n,m,w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error w/m ≈ log^{2}(n)/n, (n,m,w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio 1/2 − 1/poly(n).
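For concreteness, a brute-force (n,m,w)-NCP solver, exponential in n and purely illustrative (it plays no role in the reduction), can be sketched as:

```python
import itertools

import numpy as np

def nearest_codeword(G, t):
    """Brute-force (n,m,w)-NCP: enumerate all 2^n messages and return the
    codeword in the row span of G (over GF(2)) closest to t in Hamming
    distance. Exponential in n; for illustration only."""
    n, m = G.shape
    best = None
    for bits in itertools.product([0, 1], repeat=n):
        c = (np.array(bits) @ G) % 2
        d = int(np.sum(c != t))
        if best is None or d < best[0]:
            best = (d, c)
    return best[1]

rng = np.random.default_rng(1)
n, m, w = 4, 12, 2
G = rng.integers(0, 2, size=(n, m))          # generating matrix
msg = rng.integers(0, 2, size=n)
c = (msg @ G) % 2                            # a codeword
err = np.zeros(m, dtype=int)
err[:w] = 1                                  # error of Hamming weight w
t = (c + err) % 2                            # target: close to the code
found = nearest_codeword(G, t)
assert int(np.sum(found != t)) <= w          # at least as close as c is
```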
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that (n,m,w)-NCP with the aforementioned parameters lies in the complexity class Search-BPP^{SZK} (i.e. reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be NP-hard. We then show that LPN with very low noise rate log^{2}(n)/n implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in BPP^{SZK}).
Zvika Brakerski,
Alex Lombardi,
Gil Segev
and
Vinod Vaikuntanathan
Anonymous IBE, Leakage Resilience and Circular Security from New Assumptions
EUROCRYPT 2018.
[Abstract] [PDF]
In anonymous identity-based encryption (IBE), ciphertexts not only hide their corresponding messages, but also their target identity. We construct an anonymous IBE scheme based on the Computational Diffie-Hellman (CDH) assumption in general groups (and thus, as a special case, based on the hardness of factoring Blum integers).
Our approach extends and refines the recent treebased approach of Cho et al. (CRYPTO 17) and Döttling and Garg (CRYPTO 17). Whereas the tools underlying their approach do not seem to provide any form of anonymity, we introduce two new building blocks which we utilize for achieving anonymity: blind garbled circuits (which we construct based on any oneway function), and blind batch encryption (which we construct based on CDH).
We then further demonstrate the applicability of our newlydeveloped tools by showing that batch encryption implies a publickey encryption scheme that is both resilient to leakage of a (1−o(1))fraction of its secret key, and KDM secure (or circular secure) with respect to all linear functions of its secret key (which, in turn, is known to imply KDM security for boundedsize circuits). These yield the first highrate leakageresilient encryption scheme and the first KDMsecure encryption scheme based on the CDH or Factoring assumptions.
Finally, relying on our techniques we also construct a batch encryption scheme based on the hardness of the Learning Parity with Noise (LPN) problem, albeit with very small noise rate Ω(log^{2}(n)/n). Although this batch encryption scheme is not blind, we show that it still implies standard (i.e., nonanonymous) IBE, leakage resilience and KDM security. IBE and highrate leakage resilience were not previously known from LPN, even with extremely low noise.

The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that under quantum polynomial-time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum polynomial time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.
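For readers less familiar with LWE, the following is a minimal sketch of the LWE sample distribution whose hardness is discussed above (all parameter choices here are illustrative, not taken from the paper):

```python
import random

def lwe_sample(s, q, sigma):
    """One LWE sample (a, <a,s> + e mod q), with a uniform in Z_q^n
    and e a small error (a Gaussian rounded to the nearest integer).
    Purely illustrative parameters and representation."""
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]
    e = round(random.gauss(0, sigma))
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

q, n = 3329, 16
s = [random.randrange(q) for _ in range(n)]
a, b = lwe_sample(s, q, sigma=2.0)
# The noise rate (roughly sigma/q) is the quantity that governs the
# "extent of extrapolation" in the eDCP equivalence described above.
```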
Zvika Brakerski,
Yael Kalai and
Renen Perlman
Succinct Spooky Free Compilers Are Not Black Box Sound
ASIACRYPT 2017.
[Abstract] [PDF]
It is tempting to think that if we encrypt a sequence of messages {x_{i}} using a semantically secure encryption scheme, such that each x_{i} is encrypted with its own independently generated public key pk_{i}, then even if the scheme is malleable (or homomorphic), malleability is limited to acting on each x_{i} independently. However, it is known that this is not the case, and in fact even non-local malleability might be possible. This phenomenon is known as spooky interactions.
We formally define the notion of spooky free compilers, which has been implicit in the delegation of computation literature. A spooky free compiler allows one to encode a sequence of queries to a multi-prover interactive proof system (MIP) in a way that allows the MIP prover algorithm to be applied to the encoded values on the one hand, and prevents spooky interactions on the other. In our definition, the compiler is allowed to be tailored to a specific MIP and does not need to support any other operation.
We show that (under a plausible complexity assumption) spooky free compilers that are sufficiently succinct to imply delegation schemes for NP with communication n^{a} (for any constant a<1) cannot be proven secure via a black-box reduction to a falsifiable assumption. On the other hand, we show that it is possible to construct non-succinct spooky free fully homomorphic encryption, the strongest conceivable flavor of spooky free compiler, in a straightforward way from any fully homomorphic encryption scheme.
Our impossibility result relies on adapting the techniques of Gentry and Wichs (2011), which rule out succinct adaptively sound delegation protocols. We note that spooky free compilers are only known to imply non-adaptive delegation, so the aforementioned result cannot be applied directly. Interestingly, we are still unable to show that spooky free compilers imply adaptive delegation, nor can we apply our techniques directly to rule out arbitrary non-adaptive NP delegation.

A witness encryption (WE) scheme can take any NP statement as a public key and use it to encrypt a message. If the statement is true then it is possible to decrypt the message given a corresponding witness, but if the statement is false then the message is computationally hidden. Ideally, the encryption procedure should run in polynomial time, but it is also meaningful to define a weaker notion, which we call non-trivially exponentially efficient WE (XWE), where the encryption runtime is only required to be much smaller than the trivial 2^{m} bound for NP relations with witness size m. We show how to construct such XWE schemes for all of NP with encryption runtime 2^{m/2} under the subexponential learning with errors (LWE) assumption. For NP relations that can be verified in NC1 (e.g., SAT) we can also construct such XWE schemes under the subexponential Decisional Bilinear Diffie-Hellman (DBDH) assumption. Although we find the result surprising, it follows via a very simple connection to attribute-based encryption.
We also show how to upgrade the above results to get non-trivially exponentially efficient indistinguishability obfuscation for null circuits (niO), which guarantees that the obfuscations of any two circuits that always output 0 are indistinguishable. In particular, under the LWE assumption we get an XniO scheme where the obfuscation time is 2^{n/2} for all circuits with input size n. It is known that in the case of indistinguishability obfuscation (iO) for all circuits, non-trivially efficient XiO schemes imply fully efficient iO schemes (Lin et al., PKC '16), but it remains a fascinating open problem whether any such connection exists for WE or niO.
Lastly, we explore a potential approach toward constructing fully efficient WE and niO schemes via multi-input ABE.

In a constrained PRF, the owner of the PRF key K can generate constrained keys K_{f} that allow anyone to evaluate the PRF on inputs x that satisfy the predicate f (namely, where f(x) is ``true'') but reveal no information about the PRF evaluation on the other inputs. A private constrained PRF goes further by requiring that the constrained key K_{f} hides the predicate f.
Boneh, Kim and Montgomery (EUROCRYPT 2017) presented a construction of private constrained PRF for point function constraints, and Canetti and Chen (EUROCRYPT 2017) presented a completely different construction for NC1 constraints. In this work, we show two constructions of LWE-based constraint-hiding constrained PRFs for general predicates described by polynomial-size circuits.
The two constructions are based on two distinct techniques that we show have further applicability by constructing weak attribute-hiding predicate encryption schemes. In a nutshell, the first construction imports the technique of modulus switching from the FHE world into the domain of trapdoor extension and homomorphism. The second construction shows how to use the duality between FHE secret-key/randomness and ABE randomness/secret-key to construct a scheme with dual use of the same values for both FHE and ABE purposes.

We construct a 4-round multiparty computation protocol in the plain model for any functionality, secure against a malicious adversary. Our protocol relies on the subexponential hardness of the Learning with Errors (LWE) problem with slightly superpolynomial noise ratio, and on the existence of adaptively secure commitments. Our round complexity matches a lower bound of Garg et al. (EUROCRYPT '16), and improves upon the state of the art: 6 rounds based on assumptions similar to ours, and 5 rounds relying on indistinguishability obfuscation and other strong assumptions.
To do this, we construct an LWE-based multi-key FHE scheme with a very simple one-round distributed setup procedure (vs. the trusted setup required in previous LWE-based constructions). This lets us construct a 3-round semi-malicious protocol without setup using the approach of Mukherjee and Wichs (EUROCRYPT '16). Finally, subexponential hardness and adaptive commitments are used to "compile" the protocol into the fully malicious setting.
Boaz Barak,
Zvika Brakerski, Ilan Komargodski and
Pravesh Kothari
Limits on Low-Degree Pseudorandom Generators (Or: Sum-of-Squares Meets Program Obfuscation)
EUROCRYPT 2018.
[Abstract] [PDF]
We prove that for every function G:{+1,−1}^{n} → R^{m}, if every output of G is a polynomial (over R) of degree at most d with at most s monomials and m > Õ(s·n^{⌈ d/2 ⌉}), then there is a polynomial time algorithm that can distinguish a vector of the form z=G(x) from a vector z where each coordinate is sampled independently according to the marginal distributions of the coordinates of G (assuming the latter satisfy some non-degeneracy condition).
In particular, if G:{+1,−1}^{n} → {+1,−1}^{m} and m is as above, then G cannot be a pseudorandom generator.
Our algorithm is based on semidefinite programming and in particular the sum of squares (SOS) hierarchy.
As a corollary, we refute some conjectures recently made in the cryptographic literature. This includes refuting the assumptions underlying Lin and Tessaro's (2017) recently proposed candidate construction for indistinguishability obfuscation from bilinear maps, by showing that any blockwise 2-local PRG with block size b cannot go beyond m ≈ 2^{2b} n. We give an even stronger bound of m ≈ 2^{b} n on the output length of random blockwise 2-local PRGs. We also show that a generalized notion of generators runs into similar barriers.
We complement our algorithm by presenting a class of candidate generators with blockwise locality 3 and constant block size, that resists both Gaussian elimination and SOS algorithms whenever m = n^{1.5−ε}. This class is extremely easy to describe: Let G be any simple non-abelian group, and interpret the blocks of x as elements in G; then each output of the generator is of the form x_{i} * x_{j} * x_{k}, where i,j,k are random and "*" is the group operation.
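The candidate generator in the last paragraph is simple enough to write down directly. Below is an illustrative Python sketch (function names and parameter choices are ours, not the paper's) using A5, the smallest simple non-abelian group, with group elements represented as even permutations of five points:

```python
import random
from itertools import permutations

def parity(p):
    """Parity of a permutation given as a tuple (0 = even)."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2

def compose(p, q):
    """The group operation: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

# A5: even permutations of 5 points -- the smallest simple non-abelian group.
A5 = [p for p in permutations(range(5)) if parity(p) == 0]
assert len(A5) == 60

def candidate_prg(x, m, rng):
    """Each output is x_i * x_j * x_k for random indices i,j,k,
    exactly the form described in the abstract."""
    out = []
    for _ in range(m):
        i, j, k = (rng.randrange(len(x)) for _ in range(3))
        out.append(compose(compose(x[i], x[j]), x[k]))
    return out

rng = random.Random(0)
n = 16
x = [rng.choice(A5) for _ in range(n)]          # seed: n group elements
z = candidate_prg(x, m=int(n ** 1.4), rng=rng)  # m below the n^{1.5} barrier
```

Note that each output lies in A5 again (even permutations compose to even permutations), matching the constant block size claimed for the candidate.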
Daniel Benarroch,
Zvika Brakerski and
Tancrède Lepoint
FHE Over the Integers: Decomposed and Batched in the Post-Quantum Regime
PKC 2017.
[Abstract] [PDF]
Fully homomorphic encryption over the integers (FHE-OI) is currently the only alternative to lattice-based FHE. FHE-OI includes a family of schemes whose security is based on the hardness of different variants of the approximate greatest common divisor (AGCD) problem. The majority of these works are based on the noise-free variant of AGCD, which is potentially weaker than the general one. In particular, the noise-free variant relies on the hardness of factoring and is thus vulnerable to quantum attacks.
A lot of effort was made to port techniques from second-generation lattice-based FHE (using tensoring) to FHE-OI. Gentry, Sahai and Waters (Crypto 13) showed that third-generation techniques (which were later formalized using the ``gadget matrix'') can also be ported. In this work, we propose a comprehensive study of applying third-generation FHE techniques to the regime of FHE-OI. We present and analyze a third-generation FHE-OI scheme based on decisional AGCD without the noise-free assumption. We proceed to show a batch version of our scheme where each ciphertext can encode a vector of messages and operations are performed coordinate-wise. We use an AGCD variant similar to that of Cheon et al. (Eurocrypt 13), who suggested the batch approach for second-generation FHE, but we do not require the noise-free component or a subset sum assumption. However, like Cheon et al., we do require circular security for our scheme, even for bounded homomorphism. Lastly, we discuss some of the obstacles towards efficient implementation of our schemes and propose a number of possible optimizations.
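For context, an AGCD instance consists of samples x_i = p·q_i + r_i for a secret integer p and small noise r_i; the noise-free variant additionally publishes one exact multiple of p (r = 0). A minimal sketch (all sizes and names here are illustrative, not the paper's parameters):

```python
import random

def agcd_samples(p, num, q_bits, noise_bits, noisefree_first=False):
    """Samples x_i = p*q_i + r_i of the AGCD problem. If noisefree_first,
    the first sample has r = 0 -- the potentially weaker variant that the
    abstract's scheme avoids. Bit sizes are illustrative only."""
    out = []
    for i in range(num):
        q = random.getrandbits(q_bits)
        r = 0 if (noisefree_first and i == 0) else random.getrandbits(noise_bits)
        out.append(p * q + r)
    return out

p = random.getrandbits(100) | 1             # secret odd integer
xs = agcd_samples(p, num=5, q_bits=200, noise_bits=40)
assert all(x % p < 2 ** 40 for x in xs)     # the noise is small modulo p
```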
Zvika Brakerski,
Justin Holmgren and
Yael Kalai
Non-Interactive RAM and Batch NP Delegation from any PIR
STOC 2017.
[Abstract] [PDF]
We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the prover and the verifier do not need to interact at all: the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements in a completely adaptive manner. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be reused), and required either obfuscation-based assumptions or superpolynomial hardness assumptions.
We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message delegation protocols for batch NP statements. Specifically, we can simultaneously prove the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.

In (key-policy) attribute-based encryption (ABE), messages are encrypted with respect to attributes x, and keys are generated with respect to policy functions f. The ciphertext is decryptable by a key only if f(x)=0. Adding homomorphic capabilities to ABE is a long-standing open problem, with current techniques only allowing compact homomorphic evaluation on ciphertexts with respect to the same x. Recent advances in the study of multi-key FHE also allow cross-attribute homomorphism, with ciphertext size growing (quadratically) with the number of input ciphertexts.
We present an ABE scheme where homomorphic operations can be performed compactly across attributes. Of course, decrypting the resulting ciphertext needs to be done with a key for a policy f with f(x_{i})=0 for all attributes involved in the computation. In our scheme, the target policy f needs to be known to the evaluator; we call this targeted homomorphism. Our scheme is secure under the polynomial hardness of learning with errors (LWE) with subexponential modulus-to-noise ratio.
We present a second scheme where there need not be a single target policy. Instead, the decryptor only needs a set of keys representing policies f_{j} s.t. for any attribute x_{i} there exists f_{j} with f_{j}(x_{i})=0. In this scheme, the ciphertext size grows (quadratically) with the size of the set of policies (and is still independent of the number of inputs or attributes). Again, the target set of policies needs to be known at evaluation time. This latter scheme is secure in the random oracle model under the polynomial hardness of LWE with subexponential noise ratio.
Zvika Brakerski and
Or Dagmi
Shorter Circuit Obfuscation in Challenging Security Models
SCN 2016.
[Abstract] [PDF]
The study of program obfuscation has seen great progress in recent years, which is crucially attributed to the introduction of graded encoding schemes by Garg, Gentry and Halevi (Eurocrypt 2013). In such schemes, elements of a ring can be encoded such that the content of the encoding is hidden, but restricted algebraic manipulations, followed by zero-testing, can be performed publicly. This primitive currently underlies all known constructions of general-purpose obfuscators.
However, the security properties of the current candidate graded encoding schemes are not well understood, and new attacks are frequently introduced. It is therefore important to assume as little as possible about the security of the graded encoding scheme, and to use as conservative a security model as possible. This often comes at the cost of reducing the efficiency or the functionality of the obfuscator.
In this work, we present a candidate obfuscator, based on composite-order graded encoding schemes, which obfuscates circuits directly, à la Zimmerman (Eurocrypt 2015) and Applebaum-Brakerski (TCC 2015). Our construction requires a graded encoding scheme with only 3 ``plaintext slots'' (= sub-rings of the underlying ring), which is directly related to the size and complexity of the obfuscated program. We prove that our obfuscator is superior to previous works in two different security models.
1. We prove that our obfuscator is indistinguishability-secure (iO) in the Unique Representation Generic Graded Encoding model. Previous works either required a composite-order scheme with polynomially many slots, or were provable in a milder security model. This immediately translates to a polynomial improvement in efficiency, and shows that improved security does not come at the cost of efficiency in this case.
2. Following Badrinarayanan et al. (Eurocrypt 2016), we consider a model where finding any ``non-trivial'' encoding of zero breaks the security of the encoding scheme. We show that, perhaps surprisingly, secure obfuscation is possible in this model even for some classes of non-evasive functions (for example, any class of conjunctions). We define the property required of the function class, formulate an appropriate (generic) security model, and prove that our aforementioned obfuscator is virtual black-box (VBB) secure in this model.
Zvika Brakerski and
Renen Perlman
Lattice-Based Fully Dynamic Multi-Key FHE with Short Ciphertexts
CRYPTO 2016.
[Abstract] [PDF]
We present a multi-key fully homomorphic encryption scheme that supports an unbounded number of homomorphic operations for an unbounded number of parties. Namely, it allows performing arbitrarily many computational steps on inputs encrypted by an a priori unbounded (polynomial) number of parties. Inputs from new parties can be introduced into the computation dynamically, so the final set of parties need not be known ahead of time. Furthermore, the length of the ciphertexts, as well as the space complexity of an atomic homomorphic operation, grow only linearly with the current number of parties.
Prior works either supported only an a priori bounded number of parties (Lopez-Alt, Tromer and Vaikuntanathan, STOC 12), or only supported single-hop evaluation, where all inputs need to be known before the computation starts (Clear and McGoldrick, Crypto 15; Mukherjee and Wichs, Eurocrypt 16). In all aforementioned works, the ciphertext length grew at least quadratically with the number of parties.
Technically, our starting point is the LWE-based approach of previous works. Our result is achieved via a careful use of Gentry's bootstrapping technique, tailored to the specific scheme. Our hardness assumption is that the scheme of Mukherjee and Wichs is circular secure (and thus bootstrappable). A leveled scheme can be achieved under standard LWE.
Zvika Brakerski,
Chris Brzuska and
Nils Fleischhacker
On Statistically Secure Obfuscation with Approximate Correctness
CRYPTO 2016.
[Abstract] [PDF]
Goldwasser and Rothblum (TCC 07) prove that statistical indistinguishability obfuscation (iO) cannot exist if the obfuscator must maintain perfect correctness (under a widely believed complexity-theoretic assumption: NP is not contained in SZK). However, for many applications of iO, such as constructing public-key encryption from one-way functions (one of the main open problems in theoretical cryptography), approximate correctness is sufficient. It had been unknown thus far whether statistical approximate iO (saiO) can exist.
We show that saiO does not exist, even for a minimal correctness requirement, if NP is not in the intersection of AM and coAM, and if one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a ``baseline'' for the obfuscated program.
We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible, there is a small fraction of parameters that still allow building public-key encryption from one-way functions and thus deserve further investigation.

The notion of zero knowledge has driven the field of cryptography since its conception over thirty years ago. It is well established that two-message zero-knowledge protocols for NP do not exist, and that four-message zero-knowledge arguments exist under the minimal assumption of one-way functions. Resolving the precise round complexity of zero knowledge has been an outstanding open problem for far too long.
In this work, we present a three-message zero-knowledge argument system with soundness against uniform polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for RAM computations (Kalai and Paneth, TCC 2016-B and Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely on a three-message variant of their protocol based on keyless collision-resistant hash functions secure against uniform adversaries, as well as other standard primitives.
More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee against real-world adversaries, which we formalize following Rogaway's ``human-ignorance'' approach (VIETCRYPT 2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our protocol to finding collisions in the underlying hash function.
Zvika Brakerski and Vinod Vaikuntanathan
Circuit-ABE from LWE: Unbounded Attributes and Semi-Adaptive Security
CRYPTO 2016.
[Abstract] [PDF]
We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed-length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial-size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth-d circuit can have as many as 2^{d} inputs). This is in contrast to previous LWE-based schemes where the length of the public parameters has to grow linearly with the maximal attribute length.
We prove that our scheme is semi-adaptively secure; namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the ``complexity leveraging'' technique is not applicable for unbounded attributes.)
We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques, which program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a priori unbounded sequence of LWE matrices, and to have fine-grained control over which trapdoor is embedded in each and every one of them, all with a succinct representation.

We show how to securely obfuscate conjunctions, which are functions f(x_{1}, ..., x_{n}) = ∧_{i} y_{i} where each literal y_{i} is either just x_{i} or ¬x_{i}; e.g., f(x_{1}, ..., x_{n}) = x_{1} ∧ ¬x_{3} ∧ ¬x_{7} ... ∧ x_{n−1}. Whereas prior work of Brakerski and Rothblum (CRYPTO 2013) showed how to achieve this using a nonstandard object called cryptographic multilinear maps, our scheme is based on an ``entropic'' variant of the Ring Learning with Errors (Ring LWE) assumption. As our core tool, we prove that hardness assumptions on the recent multilinear map construction of Gentry, Gorbunov and Halevi (TCC 2015) can be established based on entropic Ring LWE. We view this as a first step towards proving the security of additional multilinear-map-based constructions, and in particular program obfuscators, under standard assumptions.
Our scheme satisfies virtual black-box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
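To illustrate the function class being obfuscated (in the clear; hiding the pattern is the paper's contribution, and this encoding is our own illustrative choice, not the scheme's), a conjunction can be described by a pattern over {0,1,*}: position i holds '1' for the literal x_i, '0' for ¬x_i, and '*' if x_i does not appear.

```python
def make_conjunction(pattern):
    """pattern: string over {'0','1','*'} -- '1' means literal x_i,
    '0' means NOT x_i, '*' means x_i is unused. Returns the plain
    (unobfuscated) conjunction f described in the abstract."""
    def f(x):
        return all(c == '*' or str(bit) == c for c, bit in zip(pattern, x))
    return f

# f(x) = x_1 AND (NOT x_3) over 4 variables (1-indexed as in the abstract)
f = make_conjunction("1*0*")
assert f([1, 0, 0, 1]) is True
assert f([1, 1, 1, 0]) is False
```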
Zvika Brakerski and Gil Segev
Hierarchical Functional Encryption
ITCS 2017.
[Abstract] [PDF]
Functional encryption provides finegrained access control for encrypted data, allowing each user to learn only specific functions of the encrypted data. We study the notion of hierarchical functional encryption, which augments functional encryption with delegation capabilities, offering significantly more expressive access control.
We present a generic transformation that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing (somewhat surprisingly) that the existence of functional encryption is equivalent to that of its hierarchical generalization.
Instantiating our transformation with existing functional encryption schemes yields a variety of hierarchical schemes offering various trade-offs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support arbitrary hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.
Zvika Brakerski, Ilan Komargodski and Gil Segev
From Single-Input to Multi-Input Functional Encryption in the Private-Key Setting
EUROCRYPT 2016.
[Abstract] [PDF]
We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of x_{1}, ..., x_{t} to compute f(x_{1}, ..., x_{t}) but nothing else. Our construction assumes any general-purpose private-key single-input scheme (without any additional assumptions), and is proven to be adaptively secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is subexponentially secure.
Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security and efficiency.
Previous constructions of multi-input functional encryption schemes either relied on somewhat stronger assumptions and provided weaker security guarantees (Goldwasser et al., EUROCRYPT '14), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al., EUROCRYPT '15).
Independent and concurrent work by Ananth and Jain (ePrint '15) shows (among other contributions) how to construct a selectively secure multi-input private-key functional encryption scheme based on any public-key functional encryption scheme. In this context, our result both relies on a weaker assumption and guarantees stronger security.
Zvika Brakerski and Vinod Vaikuntanathan
Constrained Key-Homomorphic PRFs from Standard Lattice Assumptions
(or: How to Secretly Embed a Circuit in Your PRF)
TCC 2015.
[Abstract] [PDF]
Boneh et al. (Crypto 13) and Banerjee and Peikert (Crypto 14) constructed pseudorandom functions (PRFs) from the Learning with Errors (LWE) assumption by embedding combinatorial objects, a path and a tree respectively, in instances of the LWE problem. In this work, we show how to generalize this approach to embed circuits, inspired by recent progress in the study of Attribute Based Encryption.
Embedding a universal circuit for some class of functions allows us to produce constrained keys for functions in this class, which gives us the first constrained PRF (CPRF) based on standard lattice assumptions for general bounded-description, bounded-depth functions, for arbitrary polynomial bounds on the description size and the depth. (A constrained key w.r.t. a circuit C enables one to evaluate the PRF on all x for which C(x)=1, but reveals nothing about the PRF values at other points.) We rely on the LWE assumption and on the one-dimensional SIS (Short Integer Solution) assumption, which are both related to the worst-case hardness of general lattice problems. Previous constructions for similar function classes relied on such exotic assumptions as the existence of multilinear maps or secure program obfuscation. The main drawback of our construction is that it does not allow collusion (i.e., providing more than a single constrained key to an adversary). Similarly to the aforementioned previous works, our PRF family is also key-homomorphic.
Interestingly, our constrained keys are very short. Their length does not depend directly either on the size of the constraint circuit or on the input length. We are not aware of any prior construction achieving this property, even relying on strong assumptions such as indistinguishability obfuscation.
Benny Applebaum and Zvika Brakerski
Obfuscating Circuits via Composite-Order Graded Encoding
TCC 2015.
[Abstract] [PDF]
We present a candidate obfuscator based on composite-order Graded Encoding Schemes (GES), which are a generalization of multilinear maps. Our obfuscator operates on circuits directly without converting them into formulas or branching programs as was done in previous solutions. As a result, the time and size complexity of the obfuscated program, measured by the number of GES elements, is directly proportional to the circuit complexity of the program being obfuscated. This improves upon previous constructions whose complexity was related to the formula or branching program size. Known instantiations of Graded Encoding Schemes allow us to obfuscate circuit classes of polynomial degree, which include for example families of circuits of logarithmic depth.
We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated by a generic graded encoding model. We further consider a more robust model which provides more power to the adversary and extend our results to this setting as well.
As a secondary contribution, we define a new simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.

In a functional encryption (FE) scheme, the owner of the secret key can generate restricted decryption keys that allow users to learn specific functions of the encrypted messages and nothing else. In many known constructions of FE schemes, such a notion of security is guaranteed only for messages that are fixed ahead of time (i.e., before the adversary even interacts with the system). This is called selective security, which is too restrictive for many realistic applications. Achieving adaptive security (also called full security), where security is guaranteed even for messages that are adaptively chosen at any point in time, seems significantly more challenging. The handful of known fullysecure schemes are based on specifically tailored techniques that rely on strong assumptions (such as obfuscation assumptions or multilinear maps assumptions).
In this paper we show that any sufficiently expressive selectively-secure FE scheme can be transformed into a fully secure one without introducing any additional assumptions. We present a direct black-box transformation, making novel use of hybrid encryption, a classical technique that was originally introduced for improving the efficiency of encryption schemes, combined with a new technique we call the Trojan Method. This method allows embedding a secret execution thread in the functional keys of the underlying scheme, which will only be activated within the proof of security of the resulting scheme. As another application of the Trojan Method, we show how to construct functional encryption schemes for arbitrary circuits starting from ones for shallow circuits (NC1 or even TC0).
Zvika Brakerski and Gil Segev
Function-Private Functional Encryption in the Private-Key Setting
TCC 2015.
[Abstract] [PDF]
Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Whereas the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided.
Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages m_{1}, ..., m_{T} together with decryption keys corresponding to functions f_{1}, ..., f_{T}, reveal essentially no information other than the values { f_{i}(m_{j})}_{i,j=1,...,T}. Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme, and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based either on obfuscation assumptions, on the Learning with Errors assumption, or even on general public-key encryption (offering various tradeoffs between security and efficiency).
Zvika Brakerski and Guy Rothblum
Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding
TCC 2014.
[Abstract] [PDF]
We present a new general-purpose obfuscator for all polynomial-size circuits. The obfuscator uses graded encoding schemes, a generalization of multilinear maps. We prove that the obfuscator exposes no more information than the program's black-box functionality, and achieves virtual black-box security, in the generic graded encoding scheme model. This proof is under a plausible worst-case complexity-theoretic assumption related to the Exponential Time Hypothesis, in addition to standard cryptographic assumptions.
Zvika Brakerski and Guy Rothblum
Black-Box Obfuscation for d-CNFs
ITCS 2014.
[Abstract] [PDF]
We show how to securely obfuscate a new class of functions: conjunctions of NC^{0}_{d} circuits. These are functions of the form C(x) = ∧_{i} C_{i}(x), where each C_{i} is a boolean NC^{0}_{d} circuit, whose output bit is only a function of d = O(1) bits of the input x. For example, d-CNFs, where each clause is a disjunction of at most d variables, are in this class. Given such a function, we produce an obfuscated program that preserves the input-output functionality of the given function, but reveals nothing else. Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013).
We prove that the construction is a secure obfuscation in a generic multilinear group model, under the black-box definition of Barak et al. (CRYPTO 2001). Security is based on a new worst-case hardness assumption about exponential hardness of the NP-complete problem 3SAT, the Bounded Speedup Hypothesis.
One of the new techniques we introduce is a method for enforcing input consistency, which we call randomizing subassignments. We hope that this technique can find further application in constructing secure obfuscators.
Zvika Brakerski and Vinod Vaikuntanathan
Lattice-Based FHE as Secure as PKE
ITCS 2014.
[Abstract] [PDF]
We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of Õ(n^{1.5+ε})-approximation for lattice problems (such as GapSVP) under quantum reductions for any ε>0 (or Õ(n^{2+ε})-approximation under classical reductions). This matches the best known hardness for ``regular'' (non-homomorphic) lattice-based public-key encryption up to the ε term. A number of previous methods had hit a roadblock at quasipolynomial approximation. (As usual, a circular security assumption can be used to achieve a non-leveled FHE scheme.)
Our approach consists of three main ideas: Noise-bounded sequential evaluation of high fan-in operations; Circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
Zvika Brakerski and Guy Rothblum
Obfuscating Conjunctions
CRYPTO 2013.
[Abstract] [PDF]
We show how to securely obfuscate the class of conjunction functions (functions like f(x_{1}, ... , x_{n}) = x_{1} ∧ ¬x_{4} ∧ ¬x_{6} ∧ ... ∧ x_{n-2}). Given any function in the class, we produce an obfuscated program which preserves the input-output functionality of the given function, but reveals nothing else.
Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.
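For readers unfamiliar with the class, a conjunction over n variables can be represented by a pattern over {0, 1, *}; the sketch below (hypothetical helper names, not part of the paper) merely evaluates such a function in the clear, with none of the obfuscation machinery:

```python
# Evaluate a conjunction in the clear. A pattern character '1' means the
# variable appears positively, '0' means it appears negated, and '*' means
# it does not appear in the conjunction at all.

def conjunction(pattern):
    def f(x):  # x: a bit string of the same length as pattern
        return all(p == '*' or p == b for p, b in zip(pattern, x))
    return f

# f(x) = x1 AND (NOT x4); variables x2, x3, x5 are unconstrained.
f = conjunction('1**0*')
assert f('10100') and not f('10110')
```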

We show that the Learning with Errors (LWE) problem is classically at least as hard as standard worst-case lattice problems, even with polynomial modulus. Previously this was only known under quantum reductions. Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.
Zvika Brakerski and Moni Naor
Fast Algorithms for Interactive Coding
SODA 2013.
[Abstract] [PDF]
Consider two parties who wish to communicate in order to execute some
interactive protocol $\pi$. However, the communication channel
between them is noisy: An adversary sees everything that is
transmitted over the channel and can change a constant
fraction of the bits as he pleases, thus interrupting the
execution of $\pi$ (which was designed for an errorless
channel). If $\pi$ only contains a single long message, then a
good error correcting code would overcome the noise with only
a constant overhead in communication. However, this solution
is not applicable to interactive protocols consisting of
many short messages.
Schulman (FOCS '92, STOC '93) presented the notion of interactive coding: A simulator that, given any protocol $\pi$, is able to simulate it (i.e.\ produce its intended transcript) even with constant rate adversarial channel errors, and with only constant (multiplicative) communication overhead. Until recently, however, the running time of all known simulators was exponential (or subexponential) in the communication complexity of $\pi$ (denoted $N$ in this work). Brakerski and Kalai (FOCS '12) recently presented a simulator that runs in time $\poly(N)$. Their simulator is randomized (each party flips private coins) and has failure probability roughly $2^{-N}$.
In this work, we improve the computational complexity of interactive coding. While at least $N$ computational steps are required (even just to output the transcript of $\pi$), the BK simulator runs in time $\tilde\Omega(N^2)$.
We present two efficient algorithms for interactive coding: The first with computational complexity $O(N \log N)$ and exponentially small (in $N$) failure probability; and the second with computational complexity $O(N)$, but failure probability $1/\poly(N)$. (Computational complexity is measured in the RAM model.)
Zvika Brakerski, Craig Gentry and Shai Halevi
Packed Ciphertexts in LWE-based Homomorphic Encryption
PKC 2013.
[Abstract] [PDF]
We observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertext. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of ``General-LWE'').
Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general-LWE'' schemes, suggesting yet another tradeoff that can be optimized for different settings.
Zvika Brakerski
When Homomorphism Becomes a Liability
TCC 2013.
[Abstract] [PDF]
We show that an encryption scheme cannot have a simple decryption circuit and be homomorphic at the same time. Specifically, if a scheme can homomorphically evaluate the majority function, then its decryption circuit cannot be a linear function of the secret key (or even a succinct polynomial), even if decryption error is allowed.
An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of noise with low Hamming weight cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public-key encryption.
An additional corollary is that the recent candidate fully homomorphic encryption scheme, suggested by Bogdanov and Lee (ePrint '11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: One by applying the aforementioned general statement, and another by directly attacking one of the components of the scheme.
Zvika Brakerski and Yael Tauman Kalai
Efficient Interactive Coding Against Adversarial Noise
FOCS 2012.
[Abstract] [PDF]
In this work, we study the fundamental problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blowup in the communication complexity ($\cc$). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(\cc)}$. Previous solutions are either inefficient or are resilient only to stochastic noise.
Zvika Brakerski
Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP
CRYPTO 2012.
[Abstract] [PDF]
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically ($B \to B^2\cdot\poly(n)$) with every multiplication (before ``refreshing''), our noise only grows linearly ($B \to B\cdot\poly(n)$).
We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values.
Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$ which carries obvious advantages for implementation. In addition, security can be classically reduced to the worst-case hardness of the GapSVP problem (with quasipolynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
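The practical gap between the two growth rates is easy to see in a toy calculation (all parameters illustrative, not taken from the paper): count how many successive multiplications keep the noise below the decryption threshold $q/4$ under quadratic versus linear growth:

```python
# Toy model of noise growth under homomorphic multiplication. Decryption
# works while the noise bound stays below q/4; all numbers are illustrative.

def mult_depth(q, B0, growth):
    B, depth = B0, 0
    while growth(B) < q // 4:
        B = growth(B)
        depth += 1
    return depth

q, B0, p = 2**200, 4, 2**10                          # p stands in for poly(n)
quadratic = mult_depth(q, B0, lambda B: B * B * p)   # B -> B^2 * poly(n)
linear    = mult_depth(q, B0, lambda B: B * p)       # B -> B * poly(n)
assert linear > quadratic    # linear growth supports far more multiplications
```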
Zvika Brakerski, Craig Gentry and Vinod Vaikuntanathan
(Leveled) Fully Homomorphic Encryption without Bootstrapping
ITCS 2012.
[Abstract] [PDF]
We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions.
A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomialsize circuits), without Gentry's bootstrapping procedure.
Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE (RLWE) problems that have $2^k$ security against known attacks. For RLWE, we have:
- A leveled FHE scheme that can evaluate $L$-level arithmetic circuits with $\tilde{O}(k \cdot L^3)$ per-gate computation, i.e., computation quasilinear in the security parameter. Security is based on RLWE for an approximation factor exponential in $L$. This construction does not use the bootstrapping procedure.
- A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is $\tilde{O}(k^2)$, independent of $L$. Security is based on the hardness of RLWE for quasipolynomial factors (as opposed to the subexponential factors needed in previous schemes).
We obtain similar results to the above for LWE, but with worse performance.
Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width (e.g., where a constant fraction of levels have width at least $k$), we can reduce the per-gate computation of the bootstrapped version to $\tilde{O}(k)$, independent of $L$, by batching the bootstrapping operation.
Previous FHE schemes all required $\tilde{\Omega}(k^{3.5})$ computation per gate.
At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).
Zvika Brakerski and Vinod Vaikuntanathan
Efficient Fully Homomorphic Encryption from (Standard) LWE
FOCS 2011.
[Abstract] [PDF]
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worstcase hardness of ``short vector problems'' on arbitrary lattices. Additionally, relying on LWE makes our scheme very natural to understand (and implement).
Our construction improves on previous works in two aspects:
- We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new relinearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.
- We deviate from the ``squashing paradigm'' used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, {\em without introducing additional assumptions}. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption).
Our scheme has very short ciphertexts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is k·polylog(k) + log |DB| bits per single-bit query, which is better than any known scheme (here, k is a security parameter).
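The additive half of such homomorphic evaluation can be sketched with a toy Regev-style scheme that carries the message bit in the least significant bit (illustrative parameters, not secure; the paper's relinearization step, which enables multiplication, is not shown):

```python
import random

n, q = 8, 1 << 16                     # toy parameters, not secure

s = [random.randrange(q) for _ in range(n)]   # secret key

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

def encrypt(m):                       # m in {0, 1}, carried in the low bit
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-4, 4)         # small noise
    return a, (dot(a, s) + 2 * e + m) % q

def decrypt(ct):
    a, b = ct
    d = (b - dot(a, s)) % q
    if d > q // 2:                    # centre the noise term around zero
        d -= q
    return d % 2

def add_ct(c1, c2):                   # homomorphic XOR: noise terms add
    (a1, b1), (a2, b2) = c1, c2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

for m1 in (0, 1):
    for m2 in (0, 1):
        assert decrypt(add_ct(encrypt(m1), encrypt(m2))) == m1 ^ m2
```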
Zvika Brakerski and Yael Tauman Kalai
A Parallel Repetition Theorem for Leakage Resilience
TCC 2012.
[Abstract] [PDF]
A leakage resilient encryption scheme is one which stays secure even against an attacker
that obtains a bounded amount of side information on the secret key (say
$\lambda$ bits of ``leakage''). A fundamental question is whether
parallel repetition amplifies leakage resilience. Namely, if we
secret share our message, and encrypt the shares under two independent keys,
will the resulting scheme be resilient to $2\lambda$ bits of leakage?
Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an
example of a publickey encryption scheme that is resilient to $\lambda$ bits
of leakage, and yet its $2$repetition is not resilient to even
$(1+\epsilon)\lambda$ bits of leakage. In their counterexample, the repeated
schemes share secretly generated public parameters.
In this work we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition does in fact amplify leakage. In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition.
More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectively, their direct product is resilient to $\sum (\lambda_i-1)$ bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
Zvika Brakerski and Vinod Vaikuntanathan
Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages
CRYPTO 2011.
[Abstract] [PDF]
We present a somewhat homomorphic encryption scheme that is both very simple to describe and analyze, and whose security (quantumly) reduces to the worst-case hardness of problems on ideal lattices. We then transform it into a fully homomorphic encryption scheme using standard ``squashing'' and ``bootstrapping'' techniques introduced by Gentry (STOC 2009).
One of the obstacles in going from ``somewhat'' to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step forward towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key.
Our scheme is based on the ring learning with errors (RLWE) assumption that was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a,b=as+2e), where s,a,e are all degree-$(n-1)$ integer polynomials whose coefficients are independently drawn from easy-to-sample distributions.
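To illustrate how little machinery the scheme needs, here is a toy symmetric-key sketch in the same spirit, with ciphertexts of the form (a, a·s + 2e + m) over Z_q[x]/(x^n + 1) (parameters and helper names are illustrative; the actual scheme is public-key and uses secure parameters):

```python
import random

n, q = 16, 1 << 13            # toy ring Z_q[x]/(x^n + 1); not secure

def polymul(f, g):            # negacyclic convolution: x^n = -1
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < n:
                res[i + j] = (res[i + j] + fi * gj) % q
            else:
                res[i + j - n] = (res[i + j - n] - fi * gj) % q
    return res

def small():                  # polynomial with coefficients in {-1, 0, 1}
    return [random.randint(-1, 1) for _ in range(n)]

s = small()                   # secret key

def encrypt(m):               # m: n message bits, one per coefficient
    a = [random.randrange(q) for _ in range(n)]
    b = [(asi + 2 * ei + mi) % q
         for asi, ei, mi in zip(polymul(a, s), small(), m)]
    return a, b               # ciphertext (a, a*s + 2e + m)

def decrypt(ct):
    a, b = ct
    noisy = [(bi - asi) % q for bi, asi in zip(b, polymul(a, s))]
    return [(d if d <= q // 2 else d - q) % 2 for d in noisy]

msg = [random.randint(0, 1) for _ in range(n)]
assert decrypt(encrypt(msg)) == msg
```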
Zvika Brakerski and Gil Segev
Better Security for Deterministic Public-Key Encryption: The Auxiliary-Input Setting
CRYPTO 2011.
[Abstract] [PDF]
Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O'Neill (CRYPTO '07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security when the plaintext is distributed over a small set. Bellare et al. addressed this difficulty by requiring semantic security to hold only when the plaintext has high min-entropy from the adversary's point of view.
In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary's point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees.
We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the decisional Diffie-Hellman (and, more generally, on the $d$-linear) assumption, and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, quadratic residuosity and Paillier's composite residuosity). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.
Zvika Brakerski, Jonathan Katz, Gil Segev and Arkady Yerukhimovich
Limits on the Power of Zero-Knowledge Proofs in Cryptographic Constructions
TCC 2011.
[Abstract] [PDF]
For over 20 years, black-box impossibility results have been used to argue the infeasibility of constructing certain cryptographic primitives (e.g., key agreement) from others (e.g., one-way functions). A widely recognized limitation of such impossibility results, however, is that they say nothing about the usefulness of (known) non-black-box techniques. This is unsatisfying, as we would at least like to rule out constructions using the set of techniques we have at our disposal.
With this motivation in mind, we suggest a new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on zero-knowledge proofs relative to some oracle. We show that our framework is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one, something ruled out by prior black-box separation results. On the other hand, we show that several black-box impossibility results still hold even in a setting that allows for zero-knowledge proofs.
Zvika Brakerski, Yael Tauman Kalai, Jonathan Katz and Vinod Vaikuntanathan
Overcoming the Hole in the Bucket: Public-Key Cryptography Resilient to Continual Memory Leakage
FOCS 2010.
[Abstract] [PDF]
In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side channel attacks which, through various physical means, can recover part of the secret key. We explore the possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used.
We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, at each time period, can probe the entire memory (containing a secret key) and ``leak'' up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC '04]).
Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public-key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].
The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public-key encryption (and IBE) schemes that are resilient to continual leakage.
Zvika Brakerski and Shafi Goldwasser
Circular and Leakage Resilient Public-Key Encryption Under Subgroup Indistinguishability
(or: Quadratic Residuosity Strikes Back)
CRYPTO 2010.
[Abstract] [PDF]
The main results of this work are new public-key encryption schemes that, under the quadratic residuosity (QR) assumption (or Paillier's decisional composite residuosity (DCR) assumption), achieve key-dependent message security as well as high resilience to secret key leakage and high resilience to the presence of auxiliary input information. In particular, under what we call the subgroup indistinguishability assumption, of which the QR and DCR are special cases, we can construct a scheme that has:
- Key-dependent message (circular) security. Achieves security even when encrypting affine functions of its own secret key (in fact, w.r.t.\ affine ``key-cycles'' of predefined length). Our scheme also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine functions using previous techniques of [BGK, ePrint09] or [BHHI, Eurocrypt10].
- Leakage resiliency. Remains secure even if any adversarial low-entropy (efficiently computable) function of the secret key is given to the adversary. A proper selection of parameters allows for a ``leakage rate'' of $(1-o(1))$ of the length of the secret key.
- Auxiliary-input security. Remains secure even if any sufficiently \emph{hard to invert} (efficiently computable) function of the secret key is given to the adversary.
Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for leakage rate $(1-o(1))$ of the secret key length, under the QR assumption. We note that leakage resilient schemes under the DCR and the QR assumptions, for the restricted case of composite modulus product of safe primes, were implied by the work of [NS, Crypto09], using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of $o(1)$ of the secret key length.
Zvika Brakerski and Yael Tauman Kalai
A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model
Manuscript, 2010.
[Abstract] [PDF]
In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity based encryption schemes, all in the standard model (without relying on random oracles).
We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000) and the ``prefix method''. Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present simple constructions based on the RSA assumption, the short integer solution (SIS) assumption, and the computational Diffie-Hellman (CDH) assumption over bilinear groups.
Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the short integer solution (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call ring trapdoor functions. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.
Finally, we show a connection between ring signatures and identity based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: The first is based on the learning with errors (LWE) assumption, and is similar to recently introduced IBE schemes; The second is based on the $d$-linear assumption over bilinear groups.
Zvika Brakerski, Shafi Goldwasser and Yael Tauman Kalai
Black-Box Circular-Secure Encryption Beyond Affine Functions
TCC 2011.
[Abstract] [PDF]
We show how to achieve public-key encryption schemes that can securely encrypt nonlinear functions of their own secret key. Specifically, we show that for any constant $d\in\mathbb{N}$, there exists a public-key encryption scheme that can securely encrypt any function $f$ of its own secret key, assuming $f$ can be expressed as a polynomial of total degree $d$. Such a scheme is said to be key-dependent message (KDM) secure w.r.t. degree-$d$ polynomials. We also show that for any constants $c,e$, there exists a public-key encryption scheme that is KDM secure w.r.t. all Turing machines with description size $c\log \lambda$ and running time $\lambda^e$, where $\lambda$ is the security parameter. The security of such public-key schemes can be based either on the standard decision Diffie-Hellman (DDH) assumption or on the learning with errors (LWE) assumption (with certain parameter settings).
In the case of functions that can be expressed as degree-$d$ polynomials, we show that the resulting schemes are also secure with respect to key cycles of any length. Specifically, for any polynomial number $n$ of key pairs, our schemes can securely encrypt a degree-$d$ polynomial whose variables are the collection of coordinates of all $n$ secret keys. Prior to this work, it was not known how to achieve this for nonlinear functions.
Our key idea is a general transformation that amplifies KDM security. The
transformation takes an encryption scheme that is KDM secure w.r.t.
some functions even when the secret keys are weak (i.e. chosen from an
arbitrary distribution with entropy $k$), and outputs a scheme that is KDM
secure w.r.t. a richer class of functions. The resulting scheme may no
longer be secure with weak keys. Thus, in some sense, this transformation
converts security with weak keys into amplified KDM security.
Zvika Brakerski and Boaz Patt-Shamir
Distributed Discovery of Large Near-Cliques
PODC 2009 (Brief Announcement), DISC 2009, Distributed Computing 2011.
[Abstract] [PDF]
Given an undirected graph and $0\le\epsilon\le1$, a set of nodes is called an $\epsilon$-near clique if all but an $\epsilon$ fraction of the pairs of nodes in the set have a link between them. In this paper we present a fast synchronous network algorithm that uses small messages and finds a near-clique. Specifically, we present a constant-time algorithm that finds, with constant probability of success, a linear-size $\epsilon$-near clique if there exists an $\epsilon^3$-near clique of linear size in the graph. The algorithm uses messages of $O(\log n)$ bits. The failure probability can be reduced to $n^{-\Omega(1)}$ in $O(\log n)$ time, and the algorithm also works if the graph contains a clique of size $\Omega(n/\log^{\alpha}\log n)$ for some $\alpha \in (0,1)$.
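The definition can be checked directly. The following is a hypothetical global helper (not the paper's distributed algorithm, which works with small messages in the network model): a node set is an $\epsilon$-near clique if all but an $\epsilon$ fraction of its pairs are connected.

```python
from itertools import combinations

def is_near_clique(adj, nodes, eps):
    """adj: symmetric adjacency dict, node -> set of neighbors.
    Returns True iff at most an eps fraction of pairs in `nodes` lack an edge."""
    pairs = list(combinations(sorted(nodes), 2))
    if not pairs:
        return True
    missing = sum(1 for u, v in pairs if v not in adj[u])
    return missing <= eps * len(pairs)
```

For a triangle with one edge removed, 1 of 3 pairs is missing, so the set is a 0.4-near clique but not a 0.3-near clique.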

Public-key
encryption schemes rely for their IND-CPA security on per-message fresh
randomness. In practice, randomness may be of poor quality for a variety of
reasons, leading to failure of the schemes. Expecting the systems to improve
is unrealistic. What we show in this paper is that we can, instead, improve
the cryptography to offset the lack of good randomness. We provide
public-key encryption schemes that achieve IND-CPA security when the randomness they
use is of high quality, but, when the latter is not the case, rather than
breaking completely, they achieve a weaker but still useful notion of
security that we call IND-CDA. This hedged public-key encryption provides
the best possible security guarantees in the face of bad randomness. We
provide simple RO-based ways to make in-practice IND-CPA schemes hedge
secure with minimal software changes. We also provide non-RO-model schemes
relying on lossy trapdoor functions (LTDFs) and techniques from
deterministic encryption. They achieve adaptive security by establishing
and exploiting the anonymity of LTDFs, which we believe is of independent
interest.
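The RO-based hedging idea can be sketched as follows, assuming an underlying scheme exposed as `encrypt(pk, msg, coins)` (a hypothetical interface): instead of feeding the possibly-bad randomness to the scheme directly, hash it together with the public key and message to derive the coins, so that bad randomness degrades security gracefully rather than breaking the scheme. This is a sketch of the idea only; a real instantiation would also use an unambiguous encoding of the hash input.

```python
import hashlib

def hedged_encrypt(encrypt, pk: bytes, msg: bytes, randomness: bytes) -> bytes:
    """Derive encryption coins by hashing (pk, msg, randomness)."""
    coins = hashlib.sha256(pk + msg + randomness).digest()
    return encrypt(pk, msg, coins)
```

Note that when the randomness repeats on identical inputs, the ciphertext repeats too, which is exactly the kind of leakage the weaker IND-CDA-style guarantee must tolerate; fresh randomness restores fresh coins.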
Zvika Brakerski and Oded Goldreich
From Absolute Distinguishability to Positive Distinguishability
Studies in Complexity and
Cryptography 2011 (Book Chapter).
[Abstract] [PDF]
We study methods of converting algorithms that distinguish pairs
of distributions with a gap whose absolute value is noticeable
into corresponding algorithms in which the gap is always positive.
Our focus is on designing algorithms that, in addition to the tested string,
obtain a fixed number of samples from each distribution.
Needless to say, such algorithms cannot provide a very reliable
guess for the sign of the original distinguishability gap;
still, we show that even guesses that are noticeably better than random
are useful in this setting.

Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan, are
pseudorandom functions in which the owner of the seed produces a public key
that constitutes a commitment to all values of the function and can then
produce, for any input $x$, a proof that the function has been evaluated
correctly on $x$, preserving pseudorandomness for all other inputs. No
public key (even a falsely generated one) should allow for proving more than
one value per input.
VRFs are both a natural and a useful primitive, and previous works have
suggested a variety of constructions and applications. Still, there are many
open questions in the study of VRFs, especially regarding their relation to more
widely studied cryptographic primitives and their construction from a wider
variety of cryptographic assumptions.
In this work we define a natural relaxation of VRFs that we call weak
verifiable random functions, where pseudorandomness is required to hold only
for randomly selected inputs. We conduct a study of weak VRFs, focusing on
applications, constructions, and their relationship to other cryptographic
primitives. We show:
- Constructions. We present constructions of weak VRFs based on
a variety of assumptions, including general assumptions such as (enhanced)
trapdoor permutations, as well as constructions based on specific
number-theoretic assumptions such as the Diffie-Hellman assumption in bilinear
groups.
- Separations. Verifiable random functions (both weak and standard) cannot be
constructed from one-way permutations in a black-box manner. This constitutes
the first result separating (standard) VRFs from any cryptographic primitive.
- Applications. Weak VRFs capture the essence of
constructing non-interactive zero-knowledge proofs for all NP languages.
Zvika Brakerski
Local Property Restoring
Manuscript, 2008.
[Abstract] [PDF]
In this paper we present the new notion of local restoring of
properties. A restorer gets implicit access, in the form of oracle
queries, to a function (which may correspond to an object such as a graph, a
code or a matrix) that is close to having a property, such as function
monotonicity or graph bipartiteness. The restorer can then, using a small number
of queries to the function, locally compute the value of a ``corrected''
function at any desired point. The corrected function is required to be both
close to the original function and to strictly have the property. In the
case of error-correcting codes, this corresponds to local self-correcting. Here
we extend the definition and study to general functions and properties.
We define the new notion and go on to give restorers for properties in the
dense-graph model, namely bipartiteness and $\rho$-Clique, and for function
monotonicity. We also show some general relations between property restoring
and other notions such as property testing and tolerant property testing.
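As a toy illustration of the definition (ours, not one of the paper's restorers), consider the property "f is constant" on $\{0,\dots,n-1\}$: if f is close to constant, the corrected function is the majority value, computed from a few oracle queries at points fixed by shared randomness, so that every query answers according to the same corrected function.

```python
import random

class ConstantRestorer:
    """Local restorer for the property 'f is constant' (toy example)."""

    def __init__(self, oracle, n, seed, samples=15):
        # Shared randomness fixes the sample points once, so all
        # queries are answered consistently with one corrected function.
        rng = random.Random(seed)
        self.points = [rng.randrange(n) for _ in range(samples)]
        self.oracle = oracle

    def query(self, x):
        # O(1) oracle queries, independent of the domain size.
        vals = [self.oracle(p) for p in self.points]
        return max(set(vals), key=vals.count)  # majority value
```

If f is 0 everywhere except at a single corrupted point, then with high probability the restorer returns 0 at every query, including the corrupted point, so the corrected function is close to f and strictly constant.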
Zvika Brakerski and Boaz Patt-Shamir
Jitter-Approximation Tradeoff for Periodic Scheduling
IPDPS 2004, Wireless Networks 2006.
[Abstract] [PDF]
We consider an asymmetric wireless communication setting, where a
server periodically broadcasts data items to different
mobile clients. The goal is to serve items at a prescribed
rate, while minimizing the energy consumption of the mobile
users. Abstractly, we are presented with a set of jobs, each with a
known execution time and a requested period, and the task is to
design a schedule for these jobs over a single shared resource
without preemption.
Given any solution
schedule, its period approximation is the
maximal factor by which the average period of a job in the schedule
is blown up w.r.t. its requested period, and the jitter is roughly
the maximal variability of times between two
consecutive occurrences of the same job. Schedules with
low jitter allow the mobile devices to save power by
having their receivers switched off longer.
In this paper we consider a
scenario where clients may be willing to settle for non-optimal
period approximation so that the jitter is improved.
We present a parametric jitter-approximation tradeoff algorithm that
allows us to choose various combinations between jitter optimality
and period optimality for any given set of jobs.
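The two measures above can be computed directly for a cyclic schedule given as one cycle of unit-length slots. These are hypothetical helpers in a simplified model (not the paper's algorithm), and "jitter" is taken here as the max-minus-min gap between consecutive occurrences, one common formalization of the variability described above.

```python
def period_approximation(schedule, requested):
    """Maximal blow-up of a job's average period w.r.t. its requested period."""
    n = len(schedule)
    counts = {j: schedule.count(j) for j in requested}
    return max((n / counts[j]) / requested[j] for j in requested)

def jitter(schedule, job):
    """Max minus min cyclic gap between consecutive occurrences of `job`;
    zero iff the job is served perfectly periodically."""
    n = len(schedule)
    pos = [i for i, j in enumerate(schedule) if j == job]
    gaps = [(pos[(k + 1) % len(pos)] - pos[k]) % n or n
            for k in range(len(pos))]
    return max(gaps) - min(gaps)
```

For example, in the cycle `['A', 'B', 'A', 'C']` job A has zero jitter (gaps 2, 2), while in `['A', 'A', 'B', 'C']` its gaps are 1 and 3, giving jitter 2 and forcing a receiver to stay awake longer.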
Zvika Brakerski, Aviv Nisgav and Boaz Patt-Shamir
General Perfectly Periodic Scheduling
PODC 2002, Algorithmica 2006.
[Abstract] [PDF]
In a perfectly periodic schedule, each job must be scheduled
precisely some fixed number of time units after its previous
occurrence. Traditionally, motivated by centralized systems, the
perfect periodicity requirement is relaxed, the main goal being to
attain the requested average rate. Recently, motivated by mobile
attain the requested average rate. Recently, motivated by mobile
clients with limited power supply, perfect periodicity seems to be
an attractive alternative that allows clients to save energy by
reducing their ``busy waiting'' time. In this case, clients may be
willing to compromise their requested service rate in order to get
perfect periodicity. In this paper, we study a general model of
perfectly periodic schedules, where each job has a requested period
and a length; we assume that $m$ jobs can be served in parallel for
some given $m$. Job lengths may not be truncated, but granted
periods may be different than the requested periods.
We present an algorithm which computes schedules such that the
worstcase proportion between the requested period and the granted
period is guaranteed to be close to the lower bound. This algorithm
improves on previous algorithms for perfect schedules in providing a
worstcase guarantee rather than an averagecase guarantee, in
generalizing unit length jobs to arbitrary length jobs, and in
generalizing the singleserver model to multiple servers.
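The two notions above can be sketched as follows, for a cyclic schedule given as one cycle of slots (hypothetical helpers, not the paper's algorithm): a job is perfectly periodic iff all cyclic gaps between its consecutive occurrences are equal, and the quality of a solution is the worst-case proportion between granted and requested periods.

```python
def is_perfectly_periodic(schedule, job):
    """True iff `job` recurs every fixed number of slots, cyclically."""
    n = len(schedule)
    pos = [i for i, j in enumerate(schedule) if j == job]
    gaps = {(pos[(k + 1) % len(pos)] - pos[k]) % n or n
            for k in range(len(pos))}
    return len(gaps) == 1

def worst_case_ratio(granted, requested):
    """Worst-case proportion between granted and requested periods."""
    return max(granted[j] / requested[j] for j in requested)
```

In the cycle `['A', 'B', 'A', 'C']`, job A is perfectly periodic with granted period 2; if it requested period 3, it contributes a blow-up of only 2/3 < 1, whereas a job granted a longer period than requested contributes a ratio above 1.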
Zvika Brakerski, Vladimir Dreizin and Boaz Patt-Shamir
Dispatching in Perfectly-Periodic Schedules
J. Algorithms 2003.
[Abstract] [PDF]
In a perfectly-periodic schedule, time is divided into time slots, and each client gets a slot precisely every predefined number of time slots, called the period of that client. Periodic schedules are useful in mobile communication, where they can help save power in the mobile device, and they also enjoy the best possible smoothness. In this paper we study the question of dispatching in a perfectly-periodic schedule, namely how to find the next item to schedule, assuming that the schedule is already given somehow. Simple dispatching algorithms suffer from either linear time complexity per slot or from exponential space requirements. We show that if the schedule is given in a natural tree representation, then there exists a way to get the best possible running time per slot for a given space parameter, or the best possible space (up to a polynomial) for a given time parameter. We show that in many practical cases, the running time is constant and the space complexity is polynomial.