Zvika Brakerski  
I am an Associate Professor at the Department of
Computer Science and Applied Mathematics of the Weizmann Institute of Science, and a scientific consultant.
My research interests lie in the foundations of computer science. I am currently working mostly in cryptography and quantum computing. I completed my Ph.D. right here at the department in 2011, advised by Shafi Goldwasser. I then spent two years as a Simons Postdoctoral Fellow at the Computer Science Department of Stanford University, hosted by Dan Boneh. I received my M.Sc. from the Faculty of Engineering of Tel Aviv University in 2002. My advisor was Boaz Patt-Shamir. Prior to that, in 2001, I received a joint B.Sc. from the Faculty of Engineering and the School of Computer Science of Tel Aviv University.
Contact Information at Weizmann:
Zvika Brakerski
Office:
Ziskind Building, Room 205 Email: zvika at mail.bzsci.com 

Our analysis shows that an even simpler construction, applying a random (binary) phase followed by a random computational-basis permutation, suffices, assuming that the input is orthogonal and \emph{flat} (that is, has high min-entropy when measured in the computational basis).
Using quantum-secure one-way functions (which imply quantum-secure pseudorandom functions and permutations), we obtain an efficient cryptographic instantiation of the above.
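As a toy illustration (not the cryptographic instantiation itself), the phase-then-permutation operation on a flat state can be simulated directly on a state vector. The register size and the fixed-phase input below are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                       # number of qubits (toy size)
N = 2 ** n

# A "flat" input state: uniform magnitudes, arbitrary fixed phases,
# so it has maximal min-entropy in the computational basis.
state = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)

# Random binary phase: multiply each amplitude by +/-1.
phase = rng.choice([-1.0, 1.0], size=N)
# Random computational-basis permutation.
perm = rng.permutation(N)

scrambled = (phase * state)[perm]

# The map is unitary, so the norm is preserved.
print(round(np.linalg.norm(scrambled), 6))
```

In the cryptographic instantiation, the random phase and permutation would be replaced by pseudorandom ones derived from a quantum-secure PRF/PRP.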
In this work, we develop a new array of techniques that we use to construct a quantum state obfuscator, a powerful notion formalized recently by Coladangelo and Gunn (arXiv:2311.07794) in their pursuit of better software copy-protection schemes. Quantum state obfuscation refers to the task of compiling a quantum program, consisting of a quantum circuit C with a classical description and an auxiliary quantum state |a⟩, into a functionally-equivalent obfuscated quantum program that hides as much as possible about C and |a⟩. We prove the security of our obfuscator when applied to any pseudo-deterministic quantum program, i.e. one that computes a (nearly) deterministic classical-input/classical-output functionality. Our security proof is with respect to an efficient classical oracle, which may be heuristically instantiated using quantum-secure indistinguishability obfuscation for classical circuits.
Our result improves upon the recent work of Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023), who also showed how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, but only ones with a completely classical description. Furthermore, our result answers a question of Coladangelo and Gunn, who provide a construction of quantum state indistinguishability obfuscation with respect to a quantum oracle. Indeed, our quantum state obfuscator together with Coladangelo-Gunn gives the first candidate realization of a ``best-possible'' copy-protection scheme for all polynomial-time functionalities.
We proceed by refining and extending the definition of pseudoentanglement, introduced by Aaronson et al., 2022, using the new operational measures, and we present constructions of pseudoentangled states (for our new definition) based on post-quantum cryptographic assumptions. Finally, we discuss the relations between computational entanglement theory and other topics, such as quantum cryptography and notions of pseudoentropy, as well as the relevance of our new definitions to the study of the AdS/CFT correspondence.
We believe that, in addition to the contributions presented in the current manuscript, our work opens multiple research directions, of relevance both to the theoretical quantum information community and to future applications of quantum networks and cryptography.
We show that, similarly to PRS, PRSPD can be constructed from any post-quantum one-way function. As far as the authors are aware, this is the first construction of a family of states that satisfies both pseudorandomness and provability of destruction. We show that many cryptographic applications that were shown based on PRS variants using quantum communication can be based on (variants of) PRSPD using only classical communication. This includes symmetric encryption, message authentication, one-time signatures, commitments, and classically verifiable private quantum coins.
Certifying qubits was previously only known to be possible based on the hardness of the Learning with Errors problem and the use of an adaptive hardcore bit (Brakerski et al., 2018). Our framework allows certification of qubits based only on the existence of post-quantum trapdoor claw-free functions, or on quantum fully homomorphic encryption. These can be instantiated, for example, from Ring Learning with Errors.
On the technical side, we show that the quantum soundness of any such protocol can be reduced to proving a bound on a simple algorithmic task: informally, answering ``two challenges simultaneously'' in the protocol. Our reduction formalizes the intuition that these protocols demonstrate quantumness by leveraging the impossibility of rewinding a general quantum prover. This allows us to prove tight bounds on the quantum soundness of (Kahanamoku-Meyer et al., 2021) and (Kalai et al., 2022), showing that no quantum polynomial-time prover can succeed with probability larger than cos^{2}(π/8) ≈ 0.853. Previously, only an upper bound on the success probability of classical provers, and a lower bound on the success probability of quantum provers, were known. We then extend this proof of quantum soundness to show that provers that approach the quantum soundness bound must perform almost anti-commuting measurements. This certifies that the prover holds a qubit.
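For reference, the soundness bound quoted above has a simple closed form:

```latex
\cos^{2}\!\left(\frac{\pi}{8}\right)
 \;=\; \frac{1+\cos(\pi/4)}{2}
 \;=\; \frac{2+\sqrt{2}}{4}
 \;\approx\; 0.8536 .
```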
In this work we show, for the former example of black-hole radiation decoding, that it also implies the existence of secure quantum cryptography. In fact, we show an existential equivalence between the hardness of black-hole radiation decoding and a variety of cryptographic primitives, including bit-commitment schemes and oblivious transfer protocols (using quantum communication). This can be viewed (with proper disclaimers, as we discuss) as providing a physical justification for the existence of secure cryptography. We conjecture that such connections may be found in other high-energy physics phenomena.
We consider EFI pairs — efficiently samplable, statistically far but computationally indistinguishable pairs of distributions. Building on the work of Yan (2022), which shows an equivalence between EFI pairs and statistical commitment schemes, we show that EFI pairs are necessary and sufficient for a large class of quantum-cryptographic applications. Specifically, while it was known how to construct commitment schemes, oblivious transfer, and general secure multiparty computation from any EFI, we show how to construct EFI pairs from minimalistic versions of each one of these primitives. We also construct from EFI quantum computational zero-knowledge (𝖰𝖢𝖹𝖪) proofs for all of 𝖰𝖨𝖯, and construct EFI pairs from essentially any nontrivial 𝖰𝖢𝖹𝖪.
This suggests that, for much of quantum cryptography, EFI pairs play a similar role to that played by OWFs in the classical setting: they are simple to describe, essential, and also serve as a linchpin for demonstrating equivalence between primitives.
To achieve rate-1 on both the receiver's and the sender's end, we use the LPN assumption, with slightly sub-constant noise rate 1/m^{ε} for any ε>0, together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution, which inherently requires an expensive "bootstrapping" operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE).
For our DDH-based solution we develop a new technique that may be of independent interest. We show that it is possible to "emulate" the binary group Z_{2} (or any other small-order group) inside a prime-order group Z_{p} in a function-private manner. That is, Z_{2} operations are mapped to Z_{p} operations such that the outcome of the latter does not reveal additional information beyond the Z_{2} outcome. Our encoding technique uses the discrete Gaussian distribution, which to our knowledge was not done before in the context of DDH.
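The paper's encoding lives in the exponent of a DDH group; the stripped-down integer sketch below only illustrates the shape of the idea — a bit b lifted to b + 2g with g drawn from a discrete Gaussian, so that sums of encodings carry the Z_2 sum in their parity while the Gaussian mass smooths out everything else. The naive rejection sampler and all parameters are our own toy choices:

```python
import random, math

def discrete_gaussian(sigma):
    """Naive rejection sampler for a discrete Gaussian over the integers
    (toy quality; real constructions use careful, constant-time samplers)."""
    bound = 10 * int(sigma) + 10
    while True:
        z = random.randint(-bound, bound)
        if random.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

def encode_bit(b, sigma=20.0):
    # A Z_2 element b is lifted to the integer b + 2g, g <- discrete Gaussian.
    return b + 2 * discrete_gaussian(sigma)

# Adding encodings over the integers (in the real scheme: in the exponent of
# a prime-order group) and reducing mod 2 recovers the Z_2 sum; the Gaussian
# mask is what hides everything beyond that parity.
bits = [1, 0, 1, 1]
total = sum(encode_bit(b) for b in bits)
print(total % 2 == sum(bits) % 2)
```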
We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted.
Along the way, we make several additional contributions:
1. We put forth a framework for reductions (or general interaction) with stateful solvers for a computational problem, which may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting.
2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically "restored" post-measurement. This shows that the novel rewinding technique of Chiesa et al. (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
We introduce a notion of classical binding for quantum commitments which provides guarantees analogous to the classical case. In our notion, the receiver performs a (partial) measurement of the quantum commitment string, and the outcome of this measurement determines a single value that the sender may open. We expect that our notion can replace classical commitments in various settings, leaving the security proof essentially unchanged. As an example, we show a soundness proof for the GMW zero-knowledge proof system.
We construct a non-interactive quantum commitment scheme which is classically statistically binding and has a classical opening, based on the existence of any post-quantum one-way function. Prior candidates had inherently quantum openings and were not classically binding. In contrast, we show that it is impossible to achieve classical binding for statistically hiding commitments, regardless of assumption or round complexity.
Our scheme is simply Naor's commitment scheme (which classically requires a common random string, CRS), but executed in superposition over all possible values of the CRS, and repeated several times. We hope that this technique for using quantum communication to remove a CRS may find other uses.
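For reference, Naor's classical commitment (the object being executed in superposition over the CRS) can be sketched as follows. The hash-based PRG stand-in and parameter sizes are illustrative choices of ours, not part of the scheme's security analysis:

```python
import hashlib, secrets

N = 16          # seed length in bytes (toy parameter)
OUT = 48        # PRG output length: 3x the seed, as in Naor's scheme

def prg(seed: bytes) -> bytes:
    # Hash-based stand-in for a pseudorandom generator (illustrative only).
    out = b""
    counter = 0
    while len(out) < OUT:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:OUT]

def commit(b: int, crs: bytes):
    # Commit to bit b: output PRG(seed) if b = 0, PRG(seed) XOR crs if b = 1.
    seed = secrets.token_bytes(N)
    mask = crs if b == 1 else bytes(OUT)
    com = bytes(x ^ y for x, y in zip(prg(seed), mask))
    return com, seed            # (commitment, opening)

def verify(com: bytes, b: int, seed: bytes, crs: bytes) -> bool:
    mask = crs if b == 1 else bytes(OUT)
    return com == bytes(x ^ y for x, y in zip(prg(seed), mask))

crs = secrets.token_bytes(OUT)   # the common random string
com, opening = commit(1, crs)
print(verify(com, 1, opening, crs), verify(com, 0, opening, crs))
```

Binding holds because, for a random CRS, no seed pair (s0, s1) with PRG(s0) XOR PRG(s1) = crs exists except with negligible probability; hiding follows from PRG security.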
We present a novel structural property of Clifford unitaries: their (normalized) trace is bounded by 1/√2 in absolute value, regardless of the dimension. We show a similar property for the q-ary Cliffords. This allows us to analyze a very simple single-query identity test under the Clifford promise and show that it has (at least) constant soundness. The same test has perfect soundness under the Pauli promise.
We use the test to show that identity/Pauli/Clifford testing (without promise) are all computationally equivalent, thus establishing computational hardness for Pauli and Clifford testing.
It is a known fact that an unstructured lattice can be cast as an ideal lattice in some order of a number field (and thus, in a rather trivial sense, ideals in orders are as general as unstructured lattices). However, it is not known whether this connection can be used to imply useful hardness results for structured lattices, or alternatively new algorithmic techniques for unstructured lattices.
In this work we show that the Order-LWE problem (a generalization of the well-known Ring-LWE problem) on certain orders is at least as hard as the (unstructured) LWE problem, so in general one should not hope to solve Order-LWE more efficiently than LWE. However, we only show that this connection holds in orders that are very ``skewed'', and in particular irrelevant for cryptographic applications. We then discuss the ability to embed unstructured lattices in ``friendlier'' orders, which requires devising an algorithm for computing the conductor of relevant orders. One of our technical tools is an improved hardness result for Order-LWE, closing a gap left in prior work.
We consider the total setting, where m is sufficiently large (with respect to u and k) that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of k-SUM, in particular its connections to problems in computational geometry, extends to the total setting.
The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a runtime of u^{O(1/log k)}. This beats the known (conditional) lower bounds for worst-case k-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound via a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case k-SUM on m elements in time u^{o(1/log k)} would give a super-polynomial improvement in the complexity of algorithms for lattice problems.
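A brute-force solver makes the total setting concrete: once m is sufficiently larger than u^{1/k}, a random instance contains a solution with high probability. The parameters below are toy choices of ours, far below the fine-grained regime:

```python
import itertools, random

def ksum_total(elems, k, u):
    """Brute-force solver for modular k-SUM: find k indices whose
    elements sum to 0 mod u (exponential in k; fine for toy sizes)."""
    for idx in itertools.combinations(range(len(elems)), k):
        if sum(elems[i] for i in idx) % u == 0:
            return idx
    return None

random.seed(1)
u, k, m = 1000, 3, 100          # m >> u^{1/k}, so a solution exists w.h.p.
elems = [random.randrange(u) for _ in range(m)]
sol = ksum_total(elems, k, u)
print(sol, sum(elems[i] for i in sol) % u)   # a solution triple, residue 0
```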
For standard LWE (not over rings), entropic results are known using a ``lossiness approach'', but it was not known how to adapt this approach to the ring setting. In this work we present the first such results, where entropic security is established either under RLWE or under the Decisional Small Polynomial Ratio (DSPR) assumption, which is a mild variant of the NTRU assumption.
In the context of general entropic distributions, our results in the ring setting essentially match the known lower bounds (Bolboceanu et al., Asiacrypt 2019; Brakerski and Döttling, Eurocrypt 2020).
Our construction improves upon the prior construction of lattice-based DP-NIZK by Kim and Wu (Crypto 2018), since we only require leveled FHE as opposed to HS (which also translates to improved LWE parameters when instantiated). Alternatively, the recent constructions of NIZK without preprocessing from either circular-secure FHE (Canetti et al., STOC 2019) or polynomial Learning with Errors (Peikert and Shiehian, Crypto 2019) could be used to obtain a similar final statement. Nevertheless, we note that our statement is formally incomparable to these works (since leveled FHE is not known to imply circular-secure FHE or the hardness of LWE). We view this as evidence for the potential of our technique, which we hope can find additional applications in future works.
An appealing approach towards resolving this problem is to show efficiently encodable and decodable constructions of a combinatorial object called tree codes (Schulman, STOC 93). After a lot of effort in this direction, the current state of the art has deterministic constructions of tree codes that are efficiently encodable but require a logarithmic (instead of constant) alphabet (Cohen, Haeupler, and Schulman, STOC 18). However, we still lack (even heuristic) candidate constructions that are efficiently decodable.
In this work, we show that tree codes that are efficiently encodable, but not necessarily efficiently decodable, also imply deterministic and efficient interactive coding schemes that are resilient to adversarial errors. Our result immediately implies a deterministic and efficient interactive coding scheme with a logarithmic alphabet (i.e., 1/log log rate). We show this result using a novel implementation of hashing through deterministic tree codes that is powerful enough to yield interactive coding schemes.
Our work follows the high-level outline of the recent work of Gay and Pass [ePrint 2020], who showed a way to remove the heuristic step from the homomorphic-encryption-based iO approach of Brakerski, Döttling, Garg, and Malavolta [EUROCRYPT 2020]. They thus obtain a construction proved secure under the circular security of natural homomorphic encryption schemes; specifically, they use homomorphic encryption schemes based on LWE and DCR, respectively. In this work we show how to remove the DCR assumption and remain with a scheme based on the circular security of LWE alone. Along the way we relax some of the requirements in the Gay-Pass blueprint and thus obtain a scheme that is secure under a relaxed assumption. Specifically, we do not require security in the presence of a key cycle, but rather only in the presence of a key-randomness cycle.
In the classical setting, garbled circuits (and randomized encodings in general) are a versatile cryptographic tool with many applications, such as secure multiparty computation, delegated computation, depth-reduction of cryptographic primitives, complexity lower bounds, and more. However, a quantum analogue for garbling general circuits was not known prior to this work. We hope that our quantum randomized encoding scheme can similarly be useful for applications in quantum computing and cryptography.
The properties of our scheme are as follows:
To illustrate the usefulness of quantum randomized encoding, we use it to design a conceptually simple zero-knowledge (ZK) proof system for the complexity class QMA. Our protocol has the so-called Σ format with a single-bit challenge, and allows the inputs to be delayed to the last round. The only previously known ZK Σ-protocol for QMA is due to Broadbent and Grilo (FOCS 2020), which does not have the aforementioned properties.
In this work we propose a significant simplification of approach (iii) by employing the random oracle heuristic. (We note that we do not apply the Fiat-Shamir paradigm.) We give a two-message (challenge-response) proof of quantumness based on any trapdoor claw-free function. In contrast to earlier proposals, we do not need an adaptive hardcore bit property. This allows the use of smaller security parameters and more diverse computational assumptions (such as Ring Learning with Errors), significantly reducing the quantum computational effort required for a successful demonstration.
Next, we show a generic candidate construction of split FHE based on three building blocks: (i) a standard FHE scheme with linear decrypt-and-multiply (which can be instantiated with essentially all LWE-based constructions), (ii) a linearly homomorphic encryption scheme with short decryption hints (such as the Damgård-Jurik encryption scheme, based on the DCR problem), and (iii) a cryptographic hash function (which can be based on a variety of standard assumptions). Our approach is heuristic in the sense that our construction is not provably secure and makes implicit assumptions about the interplay between these underlying primitives. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model.
We view our construction as a big departure from the state-of-the-art constructions, and it is in fact quite simple.
In existing constructions of PRS generators, security scales with the number of qubits in the states, i.e. the (statistical) security parameter for an n-qubit PRS is roughly n. Perhaps counterintuitively, n-qubit PRS are not known to imply k-qubit PRS even for k smaller than n. The question of scalability for PRS was thus far open: is it possible to construct n-qubit PRS generators with security parameter λ for all n, λ? Indeed, we believe that PRS with tiny (even constant) n and large λ can be quite useful.
We resolve the problem in this work, showing that any quantum-secure one-way function implies scalable PRS. We follow the paradigm of first showing a statistically secure construction when given oracle access to a random function, and then replacing the random function with a quantum-secure (classical) pseudorandom function to achieve computational security. However, our methods deviate significantly from prior works, since scalable pseudorandom states require randomizing the amplitudes of the quantum state, and not just the phase as in all prior works. We show how to achieve this using Gaussian sampling.
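A toy sketch of the amplitude-randomization idea: derive one pseudorandom Gaussian per basis state from a keyed function and normalize the result into a state vector. The keyed-hash PRF stand-in and the Box-Muller conversion are illustrative choices of ours, not the paper's construction:

```python
import hashlib, hmac, math, struct

def prf_gaussians(key: bytes, n_qubits: int):
    """Derive one pseudorandom standard Gaussian per basis state, using a
    keyed hash as a stand-in PRF and Box-Muller on its output bits."""
    amps = []
    for x in range(2 ** n_qubits):
        d = hmac.new(key, struct.pack(">I", x), hashlib.sha256).digest()
        u1 = (int.from_bytes(d[:8], "big") + 1) / 2 ** 64   # u1 in (0, 1]
        u2 = int.from_bytes(d[8:16], "big") / 2 ** 64
        amps.append(math.sqrt(-2 * math.log(u1)) * math.cos(2 * math.pi * u2))
    return amps

amps = prf_gaussians(b"secret key", 4)
norm = math.sqrt(sum(a * a for a in amps))
state = [a / norm for a in amps]   # a 4-qubit state with Gaussian-shaped amplitudes
print(round(sum(a * a for a in state), 6))
```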
We revisit the correlation intractability (CI) framework for converting Σ-protocols into NIZK, and present a different strategy for instantiating it by putting together two new components. First, while prior works considered the search complexity of the relations for which CI is sought, we consider their probabilistic representation, namely a distribution over lower-complexity functions that bitwise computes the target function with all but small (constant) probability. The second component is a new perspective for quantifying the class of relations for which CI is achieved. We show that it is instructive to consider CI for approximable relations (CI-Apx), which is quantified by a class of relations but requires CI to hold against any approximation of any relation in this class.
We show that CI-Apx for just constant-degree polynomials suffices for NIZK, if the underlying Σ-protocol is implemented using a suitable commitment scheme. We show that such a commitment scheme can be constructed based on LPN. We then show how to construct CI-Apx for constant-degree polynomials from any suitable TDH (with an enhanced correctness property that is satisfied by all existing TDH constructions).
In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly composite moduli, and via black-box impossibility for arbitrary moduli.
Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself: the probability of recovering a random sample s from this distribution, given s+e, where e is Gaussian noise (i.e. the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
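The lossiness property can be probed empirically: a well-separated secret distribution is a good "code" for Gaussian noise (recovery from s+e succeeds), while a clustered one is not. The parameters and the nearest-element decoder below are illustrative choices of ours:

```python
import random

def recover_rate(secret_set, sigma, q, trials=2000):
    """Empirically estimate the probability of recovering s from s + e
    (e a rounded Gaussian, arithmetic mod q) by nearest-element decoding."""
    random.seed(2)
    lst = sorted(secret_set)
    ok = 0
    for _ in range(trials):
        s = random.choice(lst)
        e = round(random.gauss(0, sigma))
        y = (s + e) % q
        # nearest-element decoding on the circle Z_q
        guess = min(lst, key=lambda t: min((y - t) % q, (t - y) % q))
        ok += guess == s
    return ok / trials

q = 1024
spread = {i * 128 for i in range(8)}   # well-separated secrets: a good "code"
dense = set(range(8))                  # clustered secrets: a bad "code"
print(recover_rate(spread, 8.0, q), recover_rate(dense, 8.0, q))
```

With noise of standard deviation 8, the well-separated secrets are recovered essentially always, while the clustered ones are mostly lost — the "lossy" regime in which entropic hardness kicks in.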
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed ``non-compact'' variant of 2-party secret sharing from 2-round OT.
As a consequence, we get a provable elementary construction of pseudorandom quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction of quantum state t-designs for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for t>2), this observation shows that polylogarithmic depth suffices for all t.
We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires quantum circuits of a restricted form: one layer of Hadamard gates, followed by a sequence of Toffoli gates. This structure may be useful for efficiency and simplicity of implementation.
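The restricted circuit form can be simulated on a small register: one layer of Hadamard gates (with a phase ancilla enabling kickback), followed by Toffoli gates, which are real permutation matrices, so every amplitude stays real throughout. The specific register size and gate placements below are arbitrary toy choices:

```python
import numpy as np

def hadamard_layer(n):
    # H^{tensor n} as an explicit (real) matrix.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    U = np.array([[1.0]])
    for _ in range(n):
        U = np.kron(U, H)
    return U

def toffoli(n, c1, c2, t):
    """Permutation matrix of a Toffoli gate on an n-qubit register
    (qubit 0 is the most significant bit of the basis index)."""
    N = 2 ** n
    U = np.zeros((N, N))
    for x in range(N):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
        if bits[c1] and bits[c2]:
            bits[t] ^= 1
        y = sum(b << (n - 1 - i) for i, b in enumerate(bits))
        U[y, x] = 1.0
    return U

n = 4                                    # 3 data qubits + 1 phase ancilla
state = np.zeros(2 ** n); state[1] = 1.0  # |0001>: ancilla (last qubit) in |1>
state = hadamard_layer(n) @ state         # ancilla becomes |->, enabling kickback
state = toffoli(n, 0, 1, 3) @ state       # Toffolis targeting the ancilla flip
state = toffoli(n, 1, 2, 3) @ state       # signs of selected amplitudes
print(np.allclose(state.imag, 0), round(np.linalg.norm(state), 6))
```

All intermediate states have real amplitudes, some of them negative, matching the real-valued-amplitude property noted above.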
We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two protocols above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sublinear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular, recalling that PIR is a special case of Fully Homomorphic Encryption).
Our completeness theorem applies in various settings: information-theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blowup which is polynomial in the number of players and in S·2^D, where S and D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed.
As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases.
Technically, we extend and relax the notion of randomized encoding to specifically address multiparty functionalities. The property of a multiparty randomized encoding (MPRE) is that if the functionality g is an encoding of the functionality f, then for any (permitted) coalition of players, their respective inputs and outputs in g allow them to simulate their respective inputs and outputs in f, without learning anything else, including the other outputs of f.
Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
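The transference principle invoked here is usually stated quantitatively via Banaszczyk's theorem (the paper may use a different variant); for an n-dimensional lattice L with dual L^* and successive minima λ_i:

```latex
1 \;\le\; \lambda_1(L)\cdot\lambda_n(L^{*}) \;\le\; n,
\qquad\text{hence}\qquad
\lambda_1(L)\cdot\lambda_1(L^{*}) \;\le\; n,
```

so at least one of λ_1(L), λ_1(L^*) is at most √n — the lattice or its dual must have a short vector.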
The definition allows us to provide a new analysis for the hardness of the abundantly used Polynomial-LWE (PLWE) problem (Stehlé et al., Asiacrypt 2009), different from the one recently proposed by Rosca, Stehlé and Wallet (Eurocrypt 2018). This suggests that Order-LWE may be used to analyze, and possibly design, useful relaxations of RLWE.
We show that Order-LWE can naturally be harnessed to prove security for RLWE instances where the ``RLWE secret'' (which often corresponds to the secret key of a cryptosystem) is not sampled uniformly as required for RLWE hardness. We start by showing worst-case hardness even if the secret is sampled from a subring of the sample space. Then, we study the case where the secret is sampled from an ideal of the sample space or a coset thereof (equivalently, some of its CRT coordinates are fixed or leaked). In the latter case, we show an interesting threshold phenomenon, where the amount of RLWE noise determines whether the problem is tractable.
Lastly, we address the long-standing question of whether a high-entropy secret is sufficient for RLWE to be intractable. Our result on sampling from ideals shows that simply requiring high entropy is insufficient. We therefore propose a broad class of distributions for which we conjecture that hardness should hold, and provide evidence via a reduction to a concrete lattice problem.
In this work we show that any succinct single-round delegation scheme for the function class F can be converted into a succinct single-round private access control protocol. That is, a verifier can be convinced that an approved user (i.e. one who holds an approved set of attributes) is accessing the system, without learning any additional information about the user or the set of attributes.
As a main tool of independent interest, we show that, assuming a quasi-polynomially secure two-message oblivious transfer scheme with statistical sender privacy (which can be based on quasi-polynomial hardness of the DDH, QR, DCR or LWE assumptions), we can convert any single-round protocol into a witness-indistinguishable one with similar communication complexity.
Note: Prior versions of this report contained an additional result about a new batch delegation scheme for monotone formulae (see ePrint history for this report). We intend to publish these results in future work.
We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with properties comparable to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst-case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded-depth multi-key classical FHE.
Technically, we rely on the outline of Mahadev (arXiv 2017), which achieves this functionality by relying on a super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.
The randomness protocol can also be used as the basis for an efficiently verifiable quantum supremacy proposal, thus answering an outstanding challenge in the field.
Specifically, we consider the (n,m,w)-nearest codeword problem ((n,m,w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and of rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error w/m ≈ log^{2}n/n, (n,m,w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio 1/2 - 1/poly(n).
Our proof relies on a smoothing lemma for codes which we show to have further implications: we show that (n,m,w)-NCP with the aforementioned parameters lies in the complexity class Search-BPP^{SZK} (i.e. it is reducible to a problem that has a statistical zero-knowledge protocol), implying that it is unlikely to be NP-hard. We then show that LPN with very low noise rate log^{2}(n)/n implies the existence of collision-resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in BPP^{SZK}).
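For intuition, the nearest codeword problem can be solved by brute force at toy sizes; the parameters below are illustrative choices of ours, far from the cryptographic regime:

```python
import itertools, random

def nearest_codeword(G, target):
    """Brute-force (n, m, w)-NCP for a tiny code: G is an n x m generator
    matrix over GF(2); returns the codeword closest to target in Hamming
    distance (enumerates all 2^n messages; illustrative only)."""
    n, m = len(G), len(G[0])
    best, best_d = None, m + 1
    for msg in itertools.product([0, 1], repeat=n):
        cw = [sum(msg[i] * G[i][j] for i in range(n)) % 2 for j in range(m)]
        d = sum(c != t for c, t in zip(cw, target))
        if d < best_d:
            best, best_d = cw, d
    return best, best_d

random.seed(0)
n, m, w = 4, 12, 1
G = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]
msg = [random.randint(0, 1) for _ in range(n)]
cw = [sum(msg[i] * G[i][j] for i in range(n)) % 2 for j in range(m)]
target = cw[:]; target[3] ^= 1          # flip one bit: distance at most w = 1
found, dist = nearest_codeword(G, target)
print(dist <= w)
```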
Our approach extends and refines the recent tree-based approach of Cho et al. (CRYPTO 17) and Döttling and Garg (CRYPTO 17). Whereas the tools underlying their approach do not seem to provide any form of anonymity, we introduce two new building blocks which we utilize for achieving anonymity: blind garbled circuits (which we construct based on any one-way function), and blind batch encryption (which we construct based on CDH).
We then further demonstrate the applicability of our newly developed tools by showing that batch encryption implies a public-key encryption scheme that is both resilient to leakage of a (1−o(1))-fraction of its secret key, and KDM-secure (or circular-secure) with respect to all linear functions of its secret key (which, in turn, is known to imply KDM security for bounded-size circuits). These yield the first high-rate leakage-resilient encryption scheme and the first KDM-secure encryption scheme based on the CDH or Factoring assumptions.
Finally, relying on our techniques, we also construct a batch encryption scheme based on the hardness of the Learning Parity with Noise (LPN) problem, albeit with a very low noise rate Ω(log^{2}(n)/n). Although this batch encryption scheme is not blind, we show that it still implies standard (i.e., non-anonymous) IBE, leakage resilience and KDM security. IBE and high-rate leakage resilience were not previously known from LPN, even with extremely low noise.
We formally define the notion of spooky-free compilers, which has been implicit in the literature on delegation of computation. A spooky-free compiler allows one to encode a sequence of queries to a multi-prover interactive proof system (MIP) in a way that, on the one hand, allows applying the MIP prover algorithm to the encoded values and, on the other hand, prevents spooky interactions. In our definition, the compiler is allowed to be tailored to a specific MIP and does not need to support any other operation.
We show that (under a plausible complexity assumption) spooky-free compilers that are sufficiently succinct to imply delegation schemes for NP with communication n^{a} (for any constant a<1) cannot be proven secure via a black-box reduction to a falsifiable assumption. On the other hand, we show that it is possible to construct non-succinct spooky-free fully homomorphic encryption, the strongest conceivable flavor of spooky-free compiler, in a straightforward way from any fully homomorphic encryption scheme.
Our impossibility result relies on adapting the techniques of Gentry and Wichs (2011), which rule out succinct adaptively sound delegation protocols. We note that spooky-free compilers are only known to imply non-adaptive delegation, so the aforementioned result cannot be applied directly. Interestingly, we are still unable to show that spooky-free compilers imply adaptive delegation, nor can we apply our techniques directly to rule out arbitrary non-adaptive NP delegation.
We also show how to upgrade the above results to get non-trivially exponentially efficient indistinguishability obfuscation for null circuits (niO), which guarantees that the obfuscations of any two circuits that always output 0 are indistinguishable. In particular, under the LWE assumption we get an XniO scheme where the obfuscation time is 2^{n/2} for all circuits with input size n. It is known that in the case of indistinguishability obfuscation (iO) for all circuits, non-trivially efficient XiO schemes imply fully efficient iO schemes (Lin et al., PKC '16), but it remains a fascinating open problem whether any such connection exists for WE or niO.
Lastly, we explore a potential approach toward constructing fully efficient WE and niO schemes via multi-input ABE.
Boneh, Kim and Montgomery (EUROCRYPT 2017) presented a construction of a private constrained PRF for point-function constraints, and Canetti and Chen (EUROCRYPT 2017) presented a completely different construction for NC1 constraints. In this work, we show two constructions of LWE-based constraint-hiding constrained PRFs for general predicates described by polynomial-size circuits.
The two constructions are based on two distinct techniques that we show have further applicability, by constructing weak attribute-hiding predicate encryption schemes. In a nutshell, the first construction imports the technique of modulus switching from the FHE world into the domain of trapdoor extension and homomorphism. The second construction shows how to use the duality between FHE secret key/randomness and ABE randomness/secret key to construct a scheme with dual use of the same values for both FHE and ABE purposes.
To do this, we construct an LWE-based multi-key FHE scheme with a very simple one-round \emph{distributed setup procedure} (vs. the trusted setup required in previous LWE-based constructions). This lets us construct a 3-round semi-malicious protocol without setup, using the approach of Mukherjee and Wichs (EUROCRYPT '16). Finally, subexponential hardness and adaptive commitments are used to ``compile'' the protocol into the fully malicious setting.
A lot of effort was made to port techniques from second-generation lattice-based FHE (using tensoring) to FHE-OI. Gentry, Sahai and Waters (Crypto 13) showed that third-generation techniques (which were later formalized using the ``gadget matrix'') can also be ported. In this work, we propose a comprehensive study of applying third-generation FHE techniques to the regime of FHE-OI. We present and analyze a third-generation FHE-OI scheme based on decisional AGCD without the noise-free assumption. We proceed to show a batch version of our scheme, where each ciphertext can encode a vector of messages and operations are performed coordinate-wise. We use an AGCD variant similar to that of Cheon et al. (Eurocrypt 13), who suggested the batch approach for second-generation FHE, but we do not require the noise-free component or a subset-sum assumption. However, like Cheon et al., we do require circular security for our scheme, even for bounded homomorphism. Lastly, we discuss some of the obstacles to efficient implementation of our schemes, as well as a number of possible optimizations.
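For readers unfamiliar with the AGCD structure underlying FHE-OI, here is a minimal sketch in the style of the classic DGHV integer scheme (not the third-generation scheme of this work); the parameters are toy-sized and chosen purely for illustration.

```python
import random

random.seed(0)
p = 10007   # the secret key: an odd integer (toy size; real keys are huge)

def encrypt(m):
    # An AGCD sample hiding the bit m: large multiple of p plus small even noise.
    q = random.randrange(1, 2**20)
    r = random.randrange(-15, 16)
    return q * p + 2 * r + m

def decrypt(c):
    # Centered reduction mod p, then parity; correct while the noise |2r + m|
    # stays below p/2 (homomorphic operations grow the noise).
    d = c % p
    if d > p // 2:
        d -= p
    return d % 2

c0, c1 = encrypt(0), encrypt(1)
print(decrypt(c0), decrypt(c1))             # 0 1
print(decrypt(c0 + c1), decrypt(c0 * c1))   # XOR and AND of the bits: 1 0
```

Integer addition and multiplication of ciphertexts act as XOR and AND on the plaintext bits, which is what makes the noise-growth bookkeeping (the subject of the batching and optimization discussion above) the central concern.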
We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message delegation protocols for batch NP statements. Specifically, we can simultaneously prove the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.
We present an ABE scheme where homomorphic operations can be performed compactly across attributes. Of course, decrypting the resulting ciphertext needs to be done with a key corresponding to a policy f with f(x_{i})=0 for all attributes involved in the computation. In our scheme, the target policy f needs to be known to the evaluator; we call this targeted homomorphism. Our scheme is secure under the polynomial hardness of learning with errors (LWE) with subexponential modulus-to-noise ratio.
We present a second scheme where there need not be a single target policy. Instead, the decryptor only needs a set of keys representing policies f_{j} such that for any attribute x_{i} there exists f_{j} with f_{j}(x_{i})=0. In this scheme, the ciphertext size grows (quadratically) with the size of the set of policies (and is still independent of the number of inputs or attributes). Again, the target set of policies needs to be known at evaluation time. This latter scheme is secure in the random oracle model under the polynomial hardness of LWE with subexponential noise ratio.
However, the security properties of the current candidate graded encoding schemes are not well understood, and new attacks are frequently introduced. It is therefore important to assume as little as possible about the security of the graded encoding scheme, and to use security models that are as conservative as possible. This often comes at the cost of reducing the efficiency or the functionality of the obfuscator.
In this work, we present a candidate obfuscator, based on composite-order graded encoding schemes, which obfuscates circuits directly à la Zimmerman (Eurocrypt 2015) and Applebaum-Brakerski (TCC 2015). Our construction requires a graded encoding scheme with only 33 ``plaintext slots'' (= subrings of the underlying ring), which is directly related to the size and complexity of the obfuscated program. We prove that our obfuscator is superior to previous works in two different security models.
1. We prove that our obfuscator is indistinguishability-secure (iO) in the Unique Representation Generic Graded Encoding model. Previous works either required a composite-order scheme with polynomially many slots, or were provable in a milder security model. This immediately translates to a polynomial improvement in efficiency, and shows that improved security does not come at the cost of efficiency in this case.
2. Following Badrinarayanan et al. (Eurocrypt 2016), we consider a model where finding any ``non-trivial'' encoding of zero breaks the security of the encoding scheme. We show that, perhaps surprisingly, secure obfuscation is possible in this model even for some classes of non-evasive functions (for example, any class of conjunctions). We define the property required of the function class, formulate an appropriate (generic) security model, and prove that our aforementioned obfuscator is virtual-black-box (VBB) secure in this model.
Prior works either supported only an a priori bounded number of parties (López-Alt, Tromer and Vaikuntanathan, STOC 12), or only supported single-hop evaluation, where all inputs need to be known before the computation starts (Clear and McGoldrick, Crypto 15; Mukherjee and Wichs, Eurocrypt 16). In all aforementioned works, the ciphertext length grew at least quadratically with the number of parties.
Technically, our starting point is the LWE-based approach of previous works. Our result is achieved via a careful use of Gentry's bootstrapping technique, tailored to the specific scheme. Our hardness assumption is that the scheme of Mukherjee and Wichs is circular secure (and thus bootstrappable). A leveled scheme can be achieved under standard LWE.
We show that saiO does not exist, even for a minimal correctness requirement, if NP is not in the intersection of AM and coAM, and if one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a ``baseline'' for the obfuscated program.
We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible, there is a small range of parameters that still allows building public-key encryption from one-way functions and thus deserves further investigation.
In this work, we present a three-message zero-knowledge argument system with soundness against uniform polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for RAM computations (Kalai and Paneth, TCC 2016B and Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely on a three-message variant of their protocol based on keyless collision-resistant hash functions secure against uniform adversaries, as well as other standard primitives.
More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee against real-world adversaries, which we formalize following Rogaway's ``human-ignorance'' approach (VIETCRYPT 2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our protocol to finding collisions in the underlying hash function.
We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the ``complexity leveraging'' technique is not applicable for unbounded attributes.)
We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques, which program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a priori unbounded sequence of LWE matrices, and to have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.
Our scheme satisfies virtual black-box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
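To make the functionality concrete, here is a heavily simplified hash-based sketch of conjunction (pattern-matching) obfuscation. It is emphatically not the construction of this work: it leaks the wildcard positions in the clear (the real obfuscator hides them), and it only behaves like an obfuscation when the pattern's fixed bits have enough entropy for the salted hash to hide them.

```python
import hashlib
import secrets

def obfuscate(pattern):
    # Toy sketch only: stores the wildcard positions in the clear, plus a
    # salted hash of the fixed bits; only meaningful for high-entropy patterns.
    idx = [i for i, c in enumerate(pattern) if c != '*']
    salt = secrets.token_bytes(16)
    fixed = ''.join(pattern[i] for i in idx).encode()
    tag = hashlib.sha256(salt + fixed).hexdigest()
    return idx, salt, tag

def evaluate(obf, x):
    # Project x onto the non-wildcard coordinates and compare hashes.
    idx, salt, tag = obf
    proj = ''.join(x[i] for i in idx).encode()
    return hashlib.sha256(salt + proj).hexdigest() == tag

obf = obfuscate("1*0*1")          # accepts x with x0=1, x2=0, x4=1
print(evaluate(obf, "11011"))     # True
print(evaluate(obf, "11111"))     # False
```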
We present a generic transformation that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing (somewhat surprisingly) that the existence of functional encryption is equivalent to that of its hierarchical generalization.
Instantiating our transformation with the existing functional encryption schemes yields a variety of hierarchical schemes offering various trade-offs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support arbitrary hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.
Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security and efficiency.
Previous constructions of multi-input functional encryption schemes either relied on somewhat stronger assumptions and provided weaker security guarantees (Goldwasser et al., EUROCRYPT '14), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al., EUROCRYPT '15).
Independent and concurrent work by Ananth and Jain (ePrint '15) shows (among other contributions) how to construct a selectively secure multi-input private-key functional encryption scheme based on any public-key functional encryption scheme. In this context, our result both relies on a weaker assumption and guarantees stronger security.
Embedding a universal circuit for some class of functions allows us to produce constrained keys for functions in this class, which gives us the first constrained PRF (CPRF) based on standard lattice assumptions for general bounded-description, bounded-depth functions, for arbitrary polynomial bounds on the description size and the depth. (A constrained key w.r.t. a circuit C enables one to evaluate the PRF on all x for which C(x)=1, but reveals nothing about the PRF values at other points.) We rely on the LWE assumption and on the one-dimensional SIS (Short Integer Solution) assumption, which are both related to the worst-case hardness of general lattice problems. Previous constructions for similar function classes relied on such exotic assumptions as the existence of multilinear maps or secure program obfuscation. The main drawback of our construction is that it does not allow collusion (i.e., providing more than a single constrained key to an adversary). Similarly to the aforementioned previous works, our PRF family is also key-homomorphic.
Interestingly, our constrained keys are very short. Their length does not depend directly either on the size of the constraint circuit or on the input length. We are not aware of any prior construction achieving this property, even relying on strong assumptions such as indistinguishability obfuscation.
We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated by a generic graded encoding model. We further consider a more robust model which provides more power to the adversary and extend our results to this setting as well.
As a secondary contribution, we define a new simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.
In this paper we show that any sufficiently expressive selectively secure FE scheme can be transformed into a fully secure one without introducing any additional assumptions. We present a direct black-box transformation, making novel use of hybrid encryption, a classical technique that was originally introduced for improving the efficiency of encryption schemes, combined with a new technique we call the Trojan Method. This method allows embedding a secret execution thread in the functional keys of the underlying scheme, which will only be activated within the proof of security of the resulting scheme. As another application of the Trojan Method, we show how to construct functional encryption schemes for arbitrary circuits starting from ones for shallow circuits (NC1 or even TC0).
Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages m_{1}, ..., m_{T} together with decryption keys corresponding to functions f_{1}, ..., f_{T} reveal essentially no information other than the values { f_{i}(m_{j})}_{i,j=1,...,T}. Despite this great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme, and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based either on obfuscation assumptions, on the Learning with Errors assumption, or even on general public-key encryption (offering various trade-offs between security and efficiency).
We prove that the construction is a secure obfuscation in a generic multilinear group model, under the black-box definition of Barak et al. (CRYPTO 2001). Security is based on a new worst-case assumption about the exponential hardness of the NP-complete problem 3-SAT, the Bounded Speedup Hypothesis.
One of the new techniques we introduce is a method for enforcing input consistency, which we call randomizing sub-assignments. We hope that this technique can find further applications in constructing secure obfuscators.
Our approach consists of three main ideas: noise-bounded sequential evaluation of high fan-in operations; circuit sequentialization using Barrington's Theorem; and, finally, successive dimension-modulus reduction.
Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.
Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.
Schulman (FOCS 92, STOC 93) presented the notion of interactive coding: a simulator that, given any protocol $\pi$, is able to simulate it (i.e.\ produce its intended transcript) even with constant-rate adversarial channel errors, and with only constant (multiplicative) communication overhead. Until recently, however, the running time of all known simulators was exponential (or subexponential) in the communication complexity of $\pi$ (denoted $N$ in this work). Brakerski and Kalai (FOCS 12) recently presented a simulator that runs in time $\poly(N)$. Their simulator is randomized (each party flips private coins) and has failure probability roughly $2^{-N}$.
In this work, we improve the computational complexity of interactive coding. While at least $N$ computational steps are required (even just to output the transcript of $\pi$), the BK simulator runs in time $\tilde\Omega(N^2)$.
We present two efficient algorithms for interactive coding: The first with computational complexity $O(N \log N)$ and exponentially small (in $N$) failure probability; and the second with computational complexity $O(N)$, but failure probability $1/\poly(N)$. (Computational complexity is measured in the RAM model.)
Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general-LWE'' schemes, suggesting yet another trade-off that can be optimized for different settings.
An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of noise with low Hamming weight cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public-key encryption.
An additional corollary is that the recent candidate fully homomorphic encryption, suggested by Bogdanov and Lee (ePrint '11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: One by applying the aforementioned general statement, and another by directly attacking one of the components of the scheme.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blowup in the communication complexity ($\cc$). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(\cc)}$.
Previous solutions are either inefficient or are resilient only to stochastic noise.
We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values.
Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$ which carries obvious advantages for implementation. In addition, security can be classically reduced to the worstcase hardness of the GapSVP problem (with quasipolynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE (RLWE) problems that have $2^k$ security against known attacks. For RLWE, we have:
Based on the RLWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width (e.g., where a constant fraction of levels have width at least $k$), we can reduce the per-gate computation of the bootstrapped version to $\tilde{O}(k)$, independent of $L$, by batching the bootstrapping operation. Previous FHE schemes all required $\tilde{\Omega}(k^{3.5})$ computation per gate.
At the core of our construction is a much more effective approach for managing the noise level of latticebased ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).
Our construction improves on previous works in two aspects:
Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is k·polylog(k) + log |DB| bits per single-bit query, which is better than any known scheme (here, k is a security parameter).
Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an example of a public-key encryption scheme that is resilient to $\lambda$ bits of leakage, and yet its $2$-repetition is not resilient to even $(1+\epsilon)\lambda$ bits of leakage. In their counterexample, the repeated schemes share secretly generated public parameters.
In this work we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition does in fact amplify leakage. In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition.
More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectively, their direct product is resilient to $\sum (\lambda_i - 1)$ bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
One of the obstacles in going from ``somewhat'' to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, that the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key.
Our scheme is based on the ring learning with errors (RLWE) assumption that was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a, b = a·s + 2e), where s, a, e are all degree-(n−1) integer polynomials whose coefficients are independently drawn from easy-to-sample distributions.
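The simplicity claim can be made concrete: with toy parameters and naive negacyclic polynomial arithmetic, a scheme with this key structure fits in a few lines of Python. This is only an illustrative sketch of the public key (a, b = a·s + 2e) and the standard encrypt/decrypt template around it, not the paper's full homomorphic scheme; all parameter choices are hypothetical.

```python
import random

n, q = 16, 40961   # toy ring Z_q[x]/(x^n + 1); real parameters are far larger

def pmul(f, g):
    # Negacyclic convolution: multiplication mod (x^n + 1, q).
    res = [0] * n
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] = (res[i + j] + f[i] * g[j]) % q
            else:
                res[i + j - n] = (res[i + j - n] - f[i] * g[j]) % q
    return res

def padd(f, g): return [(a + b) % q for a, b in zip(f, g)]
def small(): return [random.choice([-1, 0, 1]) for _ in range(n)]

random.seed(7)
s = small()                                 # secret key: a "small" polynomial
a = [random.randrange(q) for _ in range(n)]
e = small()
b = padd(pmul(a, s), [2 * x for x in e])    # public key (a, b = a*s + 2e)

def encrypt(m):                             # m: list of n bits
    v, e1, e2 = small(), small(), small()
    c0 = padd(padd(pmul(b, v), [2 * x for x in e1]), m)
    c1 = padd(pmul(a, v), [2 * x for x in e2])
    return c0, c1

def decrypt(c0, c1):
    # c0 - c1*s = 2*(e*v + e1 - e2*s) + m, so centered mod 2 recovers m.
    d = [(x - y) % q for x, y in zip(c0, pmul(c1, s))]
    return [(x if x <= q // 2 else x - q) % 2 for x in d]

msg = [random.randint(0, 1) for _ in range(n)]
print(decrypt(*encrypt(msg)) == msg)   # True
```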
In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary's point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees.
We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the decisional Diffie-Hellman (and, more generally, on the $d$-linear) assumption, and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, quadratic residuosity and Paillier's composite residuosity). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.
With this motivation in mind, we suggest a new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on zero-knowledge proofs relative to some oracle. We show that our framework is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one, something ruled out by prior black-box separation results. On the other hand, we show that several black-box impossibility results still hold even in a setting that allows for zero-knowledge proofs.
We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, at each time period, can probe the entire memory (containing a secret key) and ``leak'' up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g., by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC04]).
Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public-key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].
The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public-key encryption (and IBE) schemes that are resilient to continual leakage.
In particular, under what we call the subgroup indistinguishability assumption, of which the QR and DCR are special cases, we can construct a scheme that has:
Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for a leakage rate of $(1-o(1))$ of the secret-key length, under the QR assumption. We note that leakage-resilient schemes under the DCR and the QR assumptions, for the restricted case of a composite modulus that is a product of safe primes, were implied by the work of [NS, Crypto09], using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of $o(1)$ of the secret-key length.
We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen-message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen-message attack). Our transformation uses the ``prefix method'' and the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000). Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present simple constructions based on the RSA assumption, the short integer solution (SIS) assumption, and the computational Diffie-Hellman (CDH) assumption over bilinear groups.
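The chameleon hash ingredient can be illustrated with the classic discrete-log construction in the style of Krawczyk-Rabin: anyone can hash, but the trapdoor holder can open the hash to any message. The group sizes and trapdoor below are toy values chosen purely for illustration.

```python
# Toy discrete-log chameleon hash; all sizes are tiny and illustrative.
p, qo = 1283, 641        # p = 2*qo + 1, both prime
g = 4                    # generator of the order-qo subgroup of Z_p*
t = 123                  # the trapdoor
h = pow(g, t, p)         # public key

def ch(m, r):
    # CH(m, r) = g^m * h^r mod p; collision-resistant without the trapdoor.
    return (pow(g, m % qo, p) * pow(h, r % qo, p)) % p

def collide(m, r, m2):
    # With the trapdoor t, solve m + t*r = m2 + t*r2 (mod qo) for r2.
    return (r + (m - m2) * pow(t, -1, qo)) % qo

m, r, m2 = 37, 11, 250
r2 = collide(m, r, m2)
print(ch(m, r) == ch(m2, r2))   # True: a collision, computable only via t
```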
Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the short integer solution (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call ring trapdoor functions. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.
Finally, we show a connection between ring signatures and identity-based encryption (IBE) schemes. Using this connection, together with our new constructions of ring signature schemes, we obtain two IBE schemes: the first is based on the learning with errors (LWE) assumption and is similar to recently introduced IBE schemes; the second is based on the $d$-linear assumption over bilinear groups.
In the case of functions that can be expressed as degree-$d$ polynomials, we show that the resulting schemes are also secure with respect to key cycles of any length. Specifically, for any polynomial number $n$ of key pairs, our schemes can securely encrypt a degree-$d$ polynomial whose variables are the collection of coordinates of all $n$ secret keys. Prior to this work, it was not known how to achieve this for nonlinear functions.
Our key idea is a general transformation that amplifies KDM security. The transformation takes an encryption scheme that is KDM secure w.r.t. some functions even when the secret keys are weak (i.e. chosen from an arbitrary distribution with entropy $k$), and outputs a scheme that is KDM secure w.r.t. a richer class of functions. The resulting scheme may no longer be secure with weak keys. Thus, in some sense, this transformation converts security with weak keys into amplified KDM security.
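To make the function class concrete (an illustration of the statement only, with hypothetical toy sizes; no encryption is performed here): a KDM function in the degree-$d$ class is a polynomial of total degree at most $d$ whose variables range jointly over the coordinates of all $n$ secret keys.

```python
import itertools
import random

random.seed(0)
d, n, key_len, modulus = 2, 3, 4, 101   # toy sizes, chosen for illustration

# n secret keys, each a vector over Z_modulus. The KDM function class consists
# of degree-<=d polynomials in ALL n*key_len coordinates taken together.
keys = [[random.randrange(modulus) for _ in range(key_len)] for _ in range(n)]
coords = [c for key in keys for c in key]   # flatten: n*key_len variables

def random_degree_d_poly(num_vars, d, modulus):
    """Sample a coefficient for every monomial of total degree <= d."""
    monomials = [m for k in range(d + 1)
                 for m in itertools.combinations_with_replacement(range(num_vars), k)]
    return {m: random.randrange(modulus) for m in monomials}

def evaluate(poly, point, modulus):
    total = 0
    for monomial, coeff in poly.items():
        term = coeff
        for var in monomial:
            term = (term * point[var]) % modulus
        total = (total + term) % modulus
    return total

f = random_degree_d_poly(len(coords), d, modulus)
message = evaluate(f, coords, modulus)   # the key-dependent "message"
```

The point of the theorem is that a ciphertext under any one of the $n$ public keys may safely encrypt such a `message`, even though it depends on every secret key in the cycle.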
VRFs are both a natural and a useful primitive, and previous works have suggested a variety of constructions and applications. Still, many questions about VRFs remain open, especially regarding their relation to more widely studied cryptographic primitives and their construction from a wider variety of cryptographic assumptions.
In this work we define a natural relaxation of VRFs that we call weak verifiable random functions, where pseudorandomness is required to hold only for randomly selected inputs. We conduct a study of weak VRFs, focusing on applications, constructions, and their relationship to other cryptographic primitives. We show:
We define the new notion and go on to give restorers for properties in the dense-graph model, namely Bipartiteness and $\rho$-Clique, as well as for monotonicity of functions. We also show some general relations between property restoring and other notions such as property testing and tolerant property testing.