Zvika Brakerski
Associate Professor at the Department of
Computer Science and Applied Mathematics of the Weizmann Institute of Science.
My research interests lie in the foundations of computer science. I am currently working mostly in cryptography and quantum computing. I completed my Ph.D. right here at the department in 2011, advised by Shafi Goldwasser. I then spent two years as a Simons Postdoctoral Fellow at the Computer Science Department of Stanford University, hosted by Dan Boneh. I received my M.Sc. from the Faculty of Engineering of Tel-Aviv University in 2002. My advisor was Boaz Patt-Shamir. Prior to that, in 2001, I received a joint B.Sc. from the Faculty of Engineering and the School of Computer Science of Tel-Aviv University.
Contact Information:
Zvika Brakerski
Office:
Ziskind Building, Room 109
We show that, similarly to PRS, PRSPD can be constructed from any post-quantum one-way function. As far as the authors are aware, this is the first construction of a family of states that satisfies both pseudorandomness and provability of destruction. We show that many cryptographic applications that were shown based on PRS variants using quantum communication can be based on (variants of) PRSPD using only classical communication. This includes symmetric encryption, message authentication, one-time signatures, commitments, and classically verifiable private quantum coins.
Certifying qubits was previously only known to be possible based on the hardness of the Learning with Errors problem and the use of an adaptive hardcore bit property (Brakerski et al., 2018). Our framework allows certification of qubits based only on the existence of post-quantum trapdoor claw-free functions, or on quantum fully homomorphic encryption. These can be instantiated, for example, from Ring Learning with Errors.
On the technical side, we show that the quantum soundness of any such protocol can be reduced to proving a bound on a simple algorithmic task: informally, answering ``two challenges simultaneously'' in the protocol. Our reduction formalizes the intuition that these protocols demonstrate quantumness by leveraging the impossibility of rewinding a general quantum prover. This allows us to prove tight bounds on the quantum soundness of the protocols of Kahanamoku-Meyer et al. (2021) and Kalai et al. (2022), showing that no quantum polynomial-time prover can succeed with probability larger than cos²(π/8) ≈ 0.853. Previously, only an upper bound on the success probability of classical provers, and a lower bound on the success probability of quantum provers, were known. We then extend this proof of quantum soundness to show that provers that approach the quantum soundness bound must perform almost anti-commuting measurements. This certifies that the prover holds a qubit.
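For reference, the tight bound evaluates via the half-angle identity; it is the same constant as the optimal quantum winning probability of the CHSH game:

```latex
\cos^2(\pi/8) \;=\; \frac{1+\cos(\pi/4)}{2} \;=\; \frac{2+\sqrt{2}}{4} \;\approx\; 0.8536 .
```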
In this work we show, for the former example of black-hole radiation decoding, that it also implies the existence of secure quantum cryptography. In fact, we show an existential equivalence between the hardness of black-hole radiation decoding and a variety of cryptographic primitives, including bit-commitment schemes and oblivious transfer protocols (using quantum communication). This can be viewed (with proper disclaimers, as we discuss) as providing a physical justification for the existence of secure cryptography. We conjecture that such connections may be found in other high-energy physics phenomena.
We consider EFI pairs — efficiently samplable, statistically far but computationally indistinguishable pairs of distributions. Building on the work of Yan (2022), which shows an equivalence between EFI pairs and statistical commitment schemes, we show that EFI pairs are necessary and sufficient for a large class of quantum-cryptographic applications. Specifically, while it was known how to construct commitment schemes, oblivious transfer, and general secure multiparty computation from any EFI, we show how to construct EFI pairs from minimalistic versions of each one of these primitives. We also construct quantum computational zero-knowledge (𝖰𝖢𝖹𝖪) proofs for all of 𝖰𝖨𝖯 from EFI, and construct EFI pairs from essentially any non-trivial 𝖰𝖢𝖹𝖪.
This suggests that, for much of quantum cryptography, EFI pairs play a similar role to that played by OWFs in the classical setting: they are simple to describe, essential, and also serve as a linchpin for demonstrating equivalence between primitives.
To achieve rate-1 on both the receiver's and sender's end, we use the LPN assumption with slightly sub-constant noise rate 1/m^ε for any ε>0, together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive ``bootstrapping'' operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE).
For our DDH-based solution we develop a new technique that may be of independent interest. We show that it is possible to ``emulate'' the binary group Z2 (or any other small-order group) inside a prime-order group Zp in a function-private manner. That is, Z2 operations are mapped to Zp operations such that the outcome of the latter does not reveal additional information beyond the Z2 outcome. Our encoding technique uses the discrete Gaussian distribution, which to our knowledge had not previously been used in the context of DDH.
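To convey the flavor of such an encoding, here is a toy Python sketch. This is not the paper's actual construction (which works in the exponent of a DDH group); the prime p, the Gaussian width sigma, and the slow rejection sampler are purely illustrative. A bit b is encoded as b + 2r mod p with r a discrete Gaussian, so sums of encodings yield a Zp element whose parity is the Z2 outcome, while the Gaussian mass obscures the individual inputs:

```python
import math, random

# Toy sketch (illustrative, not the paper's construction): emulate Z_2
# addition inside Z_p by encoding bit b as b + 2r mod p, r a discrete Gaussian.
p = (1 << 61) - 1      # a large prime; illustrative choice
sigma = 2 ** 20        # width: much larger than #operations, much smaller than p

def disc_gauss(sigma):
    """Rejection-sample a discrete Gaussian on Z (slow toy sampler)."""
    while True:
        z = random.randint(-10 * sigma, 10 * sigma)
        if random.random() < math.exp(-z * z / (2 * sigma ** 2)):
            return z

def encode(b):                      # b in {0, 1}
    return (b + 2 * disc_gauss(sigma)) % p

def decode(z):                      # parity of the centered representative
    zc = z if z <= p // 2 else z - p
    return zc % 2

x, y = encode(1), encode(1)
assert decode((x + y) % p) == 0     # 1 + 1 = 0 over Z_2
```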
We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted.
Along the way, we make several additional contributions:
1. We put forth a framework for reductions (or general interaction) with stateful solvers for a computational problem, i.e., solvers that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting.
2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically "restored" post-measurement. This shows that the novel rewinding technique of Chiesa et al. (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
We introduce a notion of classical binding for quantum commitments which provides guarantees analogous to the classical case. In our notion, the receiver performs a (partial) measurement of the quantum commitment string, and the outcome of this measurement determines a single value that the sender may open. We expect that our notion can replace classical commitments in various settings, leaving the security proof essentially unchanged. As an example we show a soundness proof for the GMW zero-knowledge proof system.
We construct a non-interactive quantum commitment scheme which is classically statistically-binding and has a classical opening, based on the existence of any post-quantum one-way function. Prior candidates had inherently quantum openings and were not classically binding. In contrast, we show that it is impossible to achieve classical binding for statistically hiding commitments, regardless of assumption or round complexity.
Our scheme is simply Naor's commitment scheme (which classically requires a common random string, CRS), but executed in superposition over all possible values of the CRS, and repeated several times. We hope that this technique for using quantum communication to remove a CRS may find other uses.
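For context, here is a sketch of the classical Naor commitment that the construction runs in superposition over the CRS (Python; the hash-based PRG stand-in and the lengths are illustrative). The CRS is a random string s of three times the PRG seed length; a commitment to 0 is G(r), and a commitment to 1 is G(r) XOR s:

```python
import hashlib, secrets

# Classical baseline: Naor's commitment. G stretches n bytes to 3n bytes;
# here it is SHA-256-based, purely as an illustrative PRG stand-in.
n = 16

def G(r: bytes) -> bytes:
    out = b""
    i = 0
    while len(out) < 3 * n:
        out += hashlib.sha256(r + bytes([i])).digest()
        i += 1
    return out[: 3 * n]

def commit(crs: bytes, bit: int):
    r = secrets.token_bytes(n)
    c = G(r) if bit == 0 else bytes(a ^ b for a, b in zip(G(r), crs))
    return c, r                     # c is sent; (bit, r) is the opening

def verify(crs: bytes, c: bytes, bit: int, r: bytes) -> bool:
    expected = G(r) if bit == 0 else bytes(a ^ b for a, b in zip(G(r), crs))
    return c == expected

crs = secrets.token_bytes(3 * n)    # the common random string
c, r = commit(crs, 1)
assert verify(crs, c, 1, r)
```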
We present a novel structural property of Clifford unitaries: their (normalized) trace is bounded by 1/√2 in absolute value, regardless of the dimension. We show a similar property for the q-ary Cliffords. This allows us to analyze a very simple single-query identity test under the Clifford promise and show that it has (at least) constant soundness. The same test has perfect soundness under the Pauli promise.
We use the test to show that identity/Pauli/Clifford testing (without promise) are all computationally equivalent, thus establishing computational hardness for Pauli and Clifford testing.
It is a known fact that an unstructured lattice can be cast as an ideal lattice in some order of a number field (and thus, in a rather trivial sense, ideals in orders are as general as unstructured lattices). However, it is not known whether this connection can be used to imply useful hardness results for structured lattices, or alternatively new algorithmic techniques for unstructured lattices.
In this work we show that the Order-LWE problem (a generalization of the well known Ring-LWE problem) on certain orders is at least as hard as the (unstructured) LWE problem. So in general one should not hope to solve Order-LWE more efficiently than LWE. However, we only show that this connection holds in orders that are very ``skewed'' and in particular irrelevant for cryptographic applications. We then discuss the ability to embed unstructured lattices in ``friendlier'' orders, which requires devising an algorithm for computing the conductor of relevant orders. One of our technical tools is an improved hardness result for Order-LWE, closing a gap left in prior work.
We consider the total setting where m is sufficiently large (with respect to u and k), so that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of k-SUM, in particular connections to problems in computational geometry, extends to the total setting.
The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a run-time of u^{O(1/log k)}. This beats the known (conditional) lower bounds for worst-case k-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound, by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case k-SUM on m elements in time u^{o(1/log k)} will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
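To make the upper-bound side concrete, here is a toy Python sketch of Wagner's k-tree algorithm for k=4 over Z_u with u a power of two (list sizes tuned to roughly u^{1/3}; all parameters are illustrative):

```python
import random
from collections import defaultdict

# Toy sketch of Wagner's k-tree algorithm for k = 4 over Z_u, u = 2^24.
# Lists of size ~u^{1/3}; each merge level cancels another 8 low-order bits.
u, m = 1 << 24, 1 << 8
random.seed(0)
lists = [[random.randrange(u) for _ in range(m)] for _ in range(4)]

def merge(A, B, bits):
    """Join pairs whose sums vanish modulo 2^bits, keeping provenance."""
    mask = (1 << bits) - 1
    table = defaultdict(list)
    for a in A:
        table[(-a[0]) & mask].append(a)
    return [((a[0] + b[0]) % u, a[1] + b[1])
            for b in B for a in table[b[0] & mask]]

wrapped = [[(x, [x]) for x in lst] for lst in lists]
ab = merge(wrapped[0], wrapped[1], 8)                  # sums = 0 mod 2^8
cd = merge(wrapped[2], wrapped[3], 8)
solutions = [parts for s, parts in merge(ab, cd, 24)]  # sums = 0 mod u
print("found", len(solutions), "4-sums equal to 0 mod u")
```

Cancelling a growing number of low-order bits per level, with one level per halving of k, is what yields the u^{O(1/log k)} running time as k grows.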
For standard LWE (not over rings), entropic results are known using a ``lossiness approach'', but it was not known how to adapt this approach to the ring setting. In this work we present the first such results, where entropic security is established either under RLWE or under the Decisional Small Polynomial Ratio (DSPR) assumption, which is a mild variant of the NTRU assumption.
In the context of general entropic distributions, our results in the ring setting essentially match the known lower bounds (Bolboceanu et al., Asiacrypt 2019; Brakerski and Döttling, Eurocrypt 2020).
Our construction improves upon the prior construction of lattice-based DP-NIZK by Kim and Wu (Crypto 2018) since we only require leveled FHE as opposed to HS (which also translates to improved LWE parameters when instantiated). Alternatively, the recent construction of NIZK without preprocessing from either circular-secure FHE (Canetti et al., STOC 2019) or polynomial Learning with Errors (Peikert and Shiehian, Crypto 2019) could be used to obtain a similar final statement. Nevertheless, we note that our statement is formally incomparable to these works (since leveled FHE is not known to imply circular secure FHE or the hardness of LWE). We view this as evidence for the potential in our technique, which we hope can find additional applications in future works.
An appealing approach towards resolving this problem is to show efficiently encodable and decodable constructions of a combinatorial object called tree codes (Schulman, STOC 93). After a lot of effort in this direction, the current state of the art has deterministic constructions of tree codes that are efficiently encodable but require a logarithmic (instead of constant) alphabet (Cohen, Haeupler, and Schulman, STOC 18). However, we still lack (even heuristic) candidate constructions that are efficiently decodable.
In this work, we show that tree codes that are efficiently encodable, but not efficiently decodable, also imply deterministic and efficient interactive coding schemes that are resilient to adversarial errors. Our result immediately implies a deterministic and efficient interactive coding scheme with a logarithmic alphabet (i.e., rate 1/log log). We show this result using a novel implementation of hashing through deterministic tree codes that is powerful enough to yield interactive coding schemes.
Our work follows the high-level outline of the recent work of Gay and Pass [ePrint 2020], who showed a way to remove the heuristic step from the homomorphic-encryption based iO approach of Brakerski, Döttling, Garg, and Malavolta [EUROCRYPT 2020]. They thus obtain a construction proved secure under a circular security assumption on natural homomorphic encryption schemes --- specifically, they use homomorphic encryption schemes based on LWE and DCR, respectively. In this work we show how to remove the DCR assumption and remain with a scheme based on the circular security of LWE alone. Along the way we relax some of the requirements in the Gay-Pass blueprint and thus obtain a scheme that is secure under a relaxed assumption. Specifically, we do not require security in the presence of a key-cycle, but rather only in the presence of a key-randomness cycle.
In the classical setting, garbled circuits (and randomized encodings in general) are a versatile cryptographic tool with many applications such as secure multiparty computation, delegated computation, depth-reduction of cryptographic primitives, complexity lower-bounds, and more. However, a quantum analogue for garbling general circuits was not known prior to this work. We hope that our quantum randomized encoding scheme can similarly be useful for applications in quantum computing and cryptography.
The properties of our scheme are as follows:
To illustrate the usefulness of quantum randomized encoding, we use it to design a conceptually-simple zero-knowledge (ZK) proof system for the complexity class QMA. Our protocol has the so-called Σ format with a single-bit challenge, and allows the inputs to be delayed to the last round. The only previously-known ZK Σ-protocol for QMA is due to Broadbent and Grilo (FOCS 2020), which does not have the aforementioned properties.
In this work we propose a significant simplification to approach (iii) by employing the random oracle heuristic. (We note that we do not apply the Fiat-Shamir paradigm.) We give a two-message (challenge-response) proof of quantumness based on any trapdoor claw-free function. In contrast to earlier proposals we do not need an adaptive hard-core bit property. This allows the use of smaller security parameters and more diverse computational assumptions (such as Ring Learning with Errors), significantly reducing the quantum computational effort required for a successful demonstration.
Next, we show a generic candidate construction of split FHE based on three building blocks: (i) A standard FHE scheme with linear decrypt-and-multiply (which can be instantiated with essentially all LWE-based constructions), (ii) a linearly homomorphic encryption scheme with short decryption hints (such as the Damgård-Jurik encryption scheme, based on the DCR problem), and (iii) a cryptographic hash function (which can be based on a variety of standard assumptions). Our approach is heuristic in the sense that our construction is not provably secure and makes implicit assumptions about the interplay between these underlying primitives. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model.
We view our construction as a significant departure from state-of-the-art constructions; it is also, in fact, quite simple.
In existing constructions of PRS generators, security scales with the number of qubits in the states, i.e. the (statistical) security parameter for an n-qubit PRS is roughly n. Perhaps counter-intuitively, n-qubit PRS are not known to imply k-qubit PRS even for k smaller than n. The question of scalability for PRS was therefore open: is it possible to construct n-qubit PRS generators with security parameter λ for all n, λ? Indeed, we believe that PRS with tiny (even constant) n and large λ can be quite useful.
We resolve the problem in this work, showing that any quantum-secure one-way function implies scalable PRS. We follow the paradigm of first showing a statistically secure construction when given oracle access to a random function, and then replacing the random function with a quantum-secure (classical) pseudorandom function to achieve computational security. However, our methods deviate significantly from prior works since scalable pseudorandom states require randomizing the amplitudes of the quantum state, and not just the phase as in all prior works. We show how to achieve this using Gaussian sampling.
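The following numpy fragment illustrates the high-level idea of randomizing amplitudes rather than phases (a seeded classical RNG stands in for the quantum-secure PRF; this is a conceptual sketch, not the paper's sampler):

```python
import numpy as np

# Conceptual sketch: amplitudes drawn as i.i.d. complex Gaussians (a seeded
# RNG stands in for the PRF), then normalized -- randomizing the amplitudes
# of the state, not just its phases. n and the seed are illustrative.
n, seed = 4, 1234
N = 1 << n
rng = np.random.default_rng(seed)
amps = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = amps / np.linalg.norm(amps)      # an n-qubit state vector
assert np.isclose(np.linalg.norm(psi), 1.0)
```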
We revisit the correlation intractability (CI) framework for converting Σ-protocols into NIZK, and present a different strategy for instantiating it by putting together two new components. First, while prior works considered the search-complexity of the relations for which CI is sought, we consider their probabilistic representation. Namely, a distribution over lower-complexity functions that bitwise-computes the target function with all but small (constant) probability. The second component is a new perspective for quantifying the class of relations for which CI is achieved. We show that it is instructive to consider CI for approximable relations (CI-Apx), which is quantified by a class of relations but requires CI to hold against any approximation of any relation in this class.
We show that CI-Apx for just constant-degree polynomials suffices for NIZK, if the underlying Σ-protocol is implemented using a suitable commitment scheme. We show that such a commitment scheme can be constructed based on LPN. We then show how to construct CI-Apx for constant-degree polynomials from any suitable trapdoor hash function (TDH) with an enhanced correctness property that is satisfied by all existing TDH constructions.
In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly-composite moduli, and via a black-box impossibility result for arbitrary moduli.
Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself: the probability of recovering a random sample s from this distribution, given s+e, where e is Gaussian noise (i.e. the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
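In symbols (our notation), the property asks that for the secret distribution S and Gaussian noise D_σ:

```latex
\Pr_{s \leftarrow S,\; e \leftarrow D_\sigma}\big[\, \mathcal{A}(s+e) = s \,\big] \;\le\; \varepsilon
\qquad \text{for every recovery algorithm } \mathcal{A},
```

i.e., S should be a poor error-correcting code against the Gaussian noise, exactly as stated above.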
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed ``non-compact'' variant of 2-party secret sharing from 2-round OT.
As a consequence, we get a provable elementary construction of pseudorandom quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction of quantum state t-designs for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for t>2), this observation shows that polylogarithmic depth suffices for all t.
We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires a quantum circuit of restricted form: one layer of Hadamard gates, followed by a sequence of Toffoli gates. This structure may be useful for efficiency and simplicity of implementation.
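A minimal numpy illustration of such a binary-phase state, |ψ_f⟩ = 2^{-n/2} Σ_x (-1)^{f(x)} |x⟩, with a truly random f standing in for the pseudorandom (or (2t)-wise independent) function:

```python
import numpy as np

# The n-qubit binary-phase state |psi_f> = 2^{-n/2} sum_x (-1)^{f(x)} |x>,
# written out as a state vector; a truly random f stands in for the PRF.
n = 8
N = 1 << n
rng = np.random.default_rng(7)
f = rng.integers(0, 2, size=N)          # f : {0,1}^n -> {0,1}
psi = ((-1.0) ** f) / np.sqrt(N)        # real-valued amplitudes, as noted above
assert np.isclose(psi @ psi, 1.0)
```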
We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two protocols above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).
Our completeness theorem applies in various settings: information theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blow-up which is polynomial in the number of players and in S, 2^D, where S and D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed.
As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases.
Technically, we extend and relax the notion of randomized encoding to specifically address multi-party functionalities. The property of a multi-party randomized encoding (MPRE) is that if the functionality g is an encoding of the functionality f, then for any (permitted) coalition of players, their respective inputs and outputs in g allow them to simulate their respective inputs and outputs in f, without learning anything else, including the other outputs of f.
Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
The definition allows us to provide a new analysis for the hardness of the abundantly used Polynomial-LWE (PLWE) problem (Stehlé et al., Asiacrypt 2009), different from the one recently proposed by Rosca, Stehlé and Wallet (Eurocrypt 2018). This suggests that Order-LWE may be used to analyze and possibly design useful relaxations of RLWE.
We show that Order-LWE can naturally be harnessed to prove security for RLWE instances where the ``RLWE secret'' (which often corresponds to the secret-key of a cryptosystem) is not sampled uniformly as required for RLWE hardness. We start by showing worst-case hardness even if the secret is sampled from a subring of the sample space. Then, we study the case where the secret is sampled from an ideal of the sample space or a coset thereof (equivalently, some of its CRT coordinates are fixed or leaked). In the latter, we show an interesting threshold phenomenon where the amount of RLWE noise determines whether the problem is tractable.
Lastly, we address the long-standing question of whether a high-entropy secret is sufficient for RLWE to be intractable. Our result on sampling from ideals shows that simply requiring high entropy is insufficient. We therefore propose a broad class of distributions where we conjecture that hardness should hold, and provide evidence via reduction to a concrete lattice problem.
In this work we show that any succinct single-round delegation scheme for the function class F can be converted into a succinct single-round private access control protocol. That is, a verifier can be convinced that an approved user (i.e. one which holds an approved set of attributes) is accessing the system, without learning any additional information about the user or the set of attributes.
As a main tool of independent interest, we show that assuming a quasi-polynomially secure two-message oblivious transfer scheme with statistical sender privacy (which can be based on quasi-polynomial hardness of the DDH, QR, DCR or LWE assumptions), we can convert any single-round protocol into a witness indistinguishable one, with similar communication complexity.
Note: Prior versions of this report contained an additional result about a new batch delegation scheme for monotone formulae (see ePrint history for this report). We intend to publish these results in future work.
We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with comparable properties to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded depth multi-key classical FHE.
Technically, we rely on the outline of Mahadev (arXiv 2017) which achieves this functionality by relying on super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.
The randomness protocol can also be used as the basis for an efficiently verifiable quantum supremacy proposal, thus answering an outstanding challenge in the field.
Specifically, we consider the (n,m,w)-nearest codeword problem ((n,m,w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error w/m ≈ log²(n)/n, (n,m,w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio 1/2 - 1/poly(n).
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that (n,m,w)-NCP with the aforementioned parameters lies in the complexity class Search-BPP^{SZK} (i.e. it is reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be NP-hard. We then show that LPN with very low noise rate log²(n)/n implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in BPP^{SZK}).
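For intuition, here is a toy (n,m,w)-NCP instance generator with the relative error discussed above (Python; parameters far below the cryptographic regime):

```python
import numpy as np

# Sample a toy (n, m, w)-NCP instance: a random rank-n binary code in
# dimension m, and a target within Hamming distance w of the code, with
# w/m on the order of log^2(n)/n. Parameters are for intuition only.
rng = np.random.default_rng(3)
n, m = 512, 2048
w = int(m * np.log2(n) ** 2 / n)              # here w = 324
G = rng.integers(0, 2, size=(n, m))           # generating matrix
x = rng.integers(0, 2, size=n)                # message; nearest codeword is xG
e = np.zeros(m, dtype=np.int64)
e[rng.choice(m, size=w, replace=False)] = 1   # weight-w error
t = (x @ G + e) % 2                           # the target vector
```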
Our approach extends and refines the recent tree-based approach of Cho et al. (CRYPTO 17) and Döttling and Garg (CRYPTO 17). Whereas the tools underlying their approach do not seem to provide any form of anonymity, we introduce two new building blocks which we utilize for achieving anonymity: blind garbled circuits (which we construct based on any one-way function), and blind batch encryption (which we construct based on CDH).
We then further demonstrate the applicability of our newly-developed tools by showing that batch encryption implies a public-key encryption scheme that is both resilient to leakage of a (1−o(1))-fraction of its secret key, and KDM secure (or circular secure) with respect to all linear functions of its secret key (which, in turn, is known to imply KDM security for bounded-size circuits). These yield the first high-rate leakage-resilient encryption scheme and the first KDM-secure encryption scheme based on the CDH or Factoring assumptions.
Finally, relying on our techniques we also construct a batch encryption scheme based on the hardness of the Learning Parity with Noise (LPN) problem, albeit with very small noise rate Ω(log²(n)/n). Although this batch encryption scheme is not blind, we show that it still implies standard (i.e., non-anonymous) IBE, leakage resilience and KDM security. IBE and high-rate leakage resilience were not previously known from LPN, even with extremely low noise.
We formally define the notion of spooky free compilers, which has been implicit in the delegation of computation literature. A spooky free compiler allows encoding a sequence of queries to a multi-prover interactive proof system (MIP) in a way that, on the one hand, allows the MIP prover algorithm to be applied to the encoded values and, on the other hand, prevents spooky interactions. In our definition, the compiler is allowed to be tailored to a specific MIP and does not need to support any other operation.
We show that (under a plausible complexity assumption) spooky free compilers that are sufficiently succinct to imply delegation schemes for NP with communication n^a (for any constant a<1) cannot be proven secure via a black-box reduction to a falsifiable assumption. On the other hand, we show that it is possible to construct non-succinct spooky free fully homomorphic encryption, the strongest conceivable flavor of spooky free compiler, in a straightforward way from any fully homomorphic encryption scheme.
Our impossibility result relies on adapting the techniques of Gentry and Wichs (2011) which rule out succinct adaptively sound delegation protocols. We note that spooky free compilers are only known to imply non-adaptive delegation, so the aforementioned result cannot be applied directly. Interestingly, we are still unable to show that spooky free compilers imply adaptive delegation, nor can we apply our techniques directly to rule out arbitrary non-adaptive NP-delegation.
We also show how to upgrade the above results to get non-trivially exponentially efficient indistinguishability obfuscation for null circuits (niO), which guarantees that the obfuscations of any two circuits that always output 0 are indistinguishable. In particular, under the LWE assumption we get an XniO scheme where the obfuscation time is 2^{n/2} for all circuits with input size n. It is known that in the case of indistinguishability obfuscation (iO) for all circuits, non-trivially efficient XiO schemes imply fully efficient iO schemes (Lin et al., PKC '16), but it remains a fascinating open problem whether any such connection exists for WE or niO.
Lastly, we explore a potential approach toward constructing fully efficient WE and niO schemes via multi-input ABE.
Boneh, Kim and Montgomery (EUROCRYPT 2017) presented a construction of private constrained PRF for point function constraints, and Canetti and Chen (EUROCRYPT 2017) presented a completely different construction for NC1 constraints. In this work, we show two constructions of LWE-based constraint-hiding constrained PRFs for general predicates described by polynomial-size circuits.
The two constructions are based on two distinct techniques that we show have further applicability by constructing weak attribute-hiding predicate encryption schemes. In a nutshell, the first construction imports the technique of modulus switching from the FHE world into the domain of trapdoor extension and homomorphism. The second construction shows how to use the duality between FHE secret-key/randomness and ABE randomness/secret-key to construct a scheme with dual use of the same values for both FHE and ABE purposes.
To do this, we construct an LWE based multi-key FHE scheme with a very simple one-round \emph{distributed setup procedure} (vs. the trusted setup required in previous LWE based constructions). This lets us construct a 3-round semi-malicious protocol without setup using the approach of Mukherjee and Wichs (EUROCRYPT '16). Finally, subexponential hardness and adaptive commitments are used to "compile" the protocol into the fully malicious setting.
A lot of effort was made to port techniques from second generation lattice-based FHE (using tensoring) to FHE-OI. Gentry, Sahai and Waters (Crypto 13) showed that third generation techniques (which were later formalized using the ``gadget matrix'') can also be ported. In this work, we propose a comprehensive study of applying third generation FHE techniques to the regime of FHE-OI. We present and analyze a third generation FHE-OI scheme based on decisional AGCD without the noise-free assumption. We proceed to show a batch version of our scheme, where each ciphertext can encode a vector of messages and operations are performed coordinate-wise. We use an AGCD variant similar to that of Cheon et al. (Eurocrypt 13), who suggested the batch approach for second generation FHE, but we do not require the noise-free component or a subset sum assumption. However, like Cheon et al., we do require circular security for our scheme, even for bounded homomorphism. Lastly, we discuss some of the obstacles towards efficient implementation of our schemes and discuss a number of possible optimizations.
We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message delegation protocols for batch NP statements. Specifically, we can simultaneously prove the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.
We present an ABE scheme where homomorphic operations can be performed compactly across attributes. Of course, decrypting the resulting ciphertext needs to be done with a key corresponding to a policy f with f(x_i)=0 for all attributes x_i involved in the computation. In our scheme, the target policy f needs to be known to the evaluator; we call this targeted homomorphism. Our scheme is secure under the polynomial hardness of learning with errors (LWE) with sub-exponential modulus-to-noise ratio.
We present a second scheme where there need not be a single target policy. Instead, the decryptor only needs a set of keys representing policies f_j such that for any attribute x_i there exists f_j with f_j(x_i)=0. In this scheme, the ciphertext size grows (quadratically) with the size of the set of policies (and is still independent of the number of inputs or attributes). Again, the target set of policies needs to be known at evaluation time. This latter scheme is secure in the random oracle model under the polynomial hardness of LWE with sub-exponential noise ratio.
However, the security properties of the current candidate graded encoding schemes are not well understood, and new attacks are frequently introduced. It is therefore important to assume as little as possible about the security of the graded encoding scheme, and to use security models that are as conservative as possible. This often comes at the cost of reducing the efficiency or the functionality of the obfuscator.
In this work, we present a candidate obfuscator, based on composite-order graded encoding schemes, which obfuscates circuits directly a la Zimmerman (Eurocrypt 2015) and Applebaum-Brakerski (TCC 2015). Our construction requires a graded encoding scheme with only 3 ``plaintext slots'' (= sub-rings of the underlying ring), which is directly related to the size and complexity of the obfuscated program. We prove that our obfuscator is superior to previous works in two different security models.
1. We prove that our obfuscator is indistinguishability-secure (iO) in the Unique Representation Generic Graded Encoding model. Previous works either required a composite-order scheme with polynomially many slots, or were provable in a milder security model. This immediately translates to a polynomial improvement in efficiency, and shows that improved security does not come at the cost of efficiency in this case.
2. Following Badrinarayanan et al. (Eurocrypt 2016), we consider a model where finding any ``non-trivial'' encoding of zero breaks the security of the encoding scheme. We show that, perhaps surprisingly, secure obfuscation is possible in this model even for some classes of non-evasive functions (for example, any class of conjunctions). We define the property required of the function class, formulate an appropriate (generic) security model, and prove that our aforementioned obfuscator is virtual-black-box (VBB) secure in this model.
Prior works either supported only an a-priori bounded number of parties (Lopez-Alt, Tromer and Vaikuntanathan, STOC 12), or only supported single-hop evaluation where all inputs need to be known before the computation starts (Clear and McGoldrick, Crypto 15; Mukherjee and Wichs, Eurocrypt 16). In all aforementioned works, the ciphertext length grew at least quadratically with the number of parties.
Technically, our starting point is the LWE-based approach of previous works. Our result is achieved via a careful use of Gentry's bootstrapping technique, tailored to the specific scheme. Our hardness assumption is that the scheme of Mukherjee and Wichs is circular secure (and thus bootstrappable). A leveled scheme can be achieved under standard LWE.
We show that saiO does not exist, even for a minimal correctness requirement, if NP is not in the intersection of AM and coAM, and if one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a ``baseline'' for the obfuscated program.
We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible, there is a small fraction of parameters that would still allow building public-key encryption from one-way functions and thus deserve further investigation.
In this work, we present a three-message zero-knowledge argument system with soundness against uniform polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for RAM computations (Kalai and Paneth, TCC 2016-B; Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely on a three-message variant of their protocol based on a keyless collision-resistant hash function secure against uniform adversaries, as well as other standard primitives.
More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee against real-world adversaries, which we formalize following Rogaway's ``human-ignorance'' approach (VIETCRYPT 2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our protocol to finding collisions in the underlying hash function.
We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the ``complexity leveraging'' technique is not applicable for unbounded attributes.)
We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.
Our scheme satisfies virtual black box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
We present a generic transformation that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing (somewhat surprisingly) that the existence of functional encryption is equivalent to that of its hierarchical generalization.
Instantiating our transformation with the existing functional encryption schemes yields a variety of hierarchical schemes offering various trade-offs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support arbitrary hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.
Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security and efficiency.
Previous constructions of multi-input functional encryption schemes either relied on somewhat stronger assumptions and provided weaker security guarantees (Goldwasser et al., EUROCRYPT '14), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al., EUROCRYPT '15).
Independent and concurrent work by Ananth and Jain (ePrint '15) shows (among other contributions) how to construct a selectively-secure multi-input private-key functional encryption scheme based on any public-key functional encryption scheme. In this context, our result both relies on a weaker assumption and guarantees stronger security.
Embedding a universal circuit for some class of functions allows us to produce constrained keys for functions in this class, which gives us the first standard-lattice-assumption-based constrained PRF (CPRF) for general bounded-description bounded-depth functions, for arbitrary polynomial bounds on the description size and the depth. (A constrained key w.r.t. a circuit C enables one to evaluate the PRF on all x for which C(x)=1, but reveals nothing about the PRF values at other points.) We rely on the LWE assumption and on the one-dimensional SIS (Short Integer Solution) assumption, which are both related to the worst-case hardness of general lattice problems. Previous constructions for similar function classes relied on such exotic assumptions as the existence of multilinear maps or secure program obfuscation. The main drawback of our construction is that it does not allow collusion (i.e. providing more than a single constrained key to an adversary). Similarly to the aforementioned previous works, our PRF family is also key homomorphic.
Interestingly, our constrained keys are very short. Their length does not depend directly either on the size of the constraint circuit or on the input length. We are not aware of any prior construction achieving this property, even relying on strong assumptions such as indistinguishability obfuscation.
We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated by a generic graded encoding model. We further consider a more robust model which provides more power to the adversary and extend our results to this setting as well.
As a secondary contribution, we define a new simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.
In this paper we show that any sufficiently expressive selectively-secure FE scheme can be transformed into a fully secure one without introducing any additional assumptions. We present a direct black-box transformation, making novel use of hybrid encryption, a classical technique that was originally introduced for improving the efficiency of encryption schemes, combined with a new technique we call the Trojan Method. This method allows embedding a secret execution thread in the functional keys of the underlying scheme, which will only be activated within the proof of security of the resulting scheme. As another application of the Trojan Method, we show how to construct functional encryption schemes for arbitrary circuits starting from ones for shallow circuits (NC1 or even TC0).
Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages m_1, ..., m_T together with decryption keys corresponding to functions f_1, ..., f_T reveal essentially no information other than the values {f_i(m_j)} for i,j = 1,...,T. Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme, and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based either on obfuscation assumptions, on the Learning with Errors assumption, or even on general public-key encryption (offering various trade-offs between security and efficiency).
We prove that the construction is a secure obfuscation in a generic multilinear group model, under the black-box definition of Barak et al. (CRYPTO 2001). Security is based on a new worst-case hardness assumption about exponential hardness of the NP-complete problem 3-SAT, the Bounded Speedup Hypothesis.
One of the new techniques we introduce is a method for enforcing input consistency, which we call randomizing sub-assignments. We hope that this technique can find further application in constructing secure obfuscators.
Our approach consists of three main ideas: Noise-bounded sequential evaluation of high fan-in operations; Circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.
Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.
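Informally, the tradeoff can be summarized as follows (a rough statement; the paper tracks the exact noise-rate bookkeeping):

```latex
% Dimension-modulus tradeoff (informal): hardness is governed by n log q,
% so dimension can be traded against modulus:
\mathrm{LWE}_{n,\,q} \;\approx\; \mathrm{LWE}_{n/k,\; q^{k}}
\qquad \text{(for suitable noise rates, keeping } n \log q \text{ fixed).}
```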
Schulman (FOCS 92, STOC 93) presented the notion of interactive coding: A simulator that, given any protocol $\pi$, is able to simulate it (i.e.\ produce its intended transcript) even with constant rate adversarial channel errors, and with only constant (multiplicative) communication overhead. Until recently, however, the running time of all known simulators was exponential (or sub-exponential) in the communication complexity of $\pi$ (denoted $N$ in this work). Brakerski and Kalai (FOCS 12) recently presented a simulator that runs in time $\poly(N)$. Their simulator is randomized (each party flips private coins) and has failure probability roughly $2^{-N}$.
In this work, we improve the computational complexity of interactive coding. While at least $N$ computational steps are required (even just to output the transcript of $\pi$), the BK simulator runs in time $\tilde\Omega(N^2)$.
We present two efficient algorithms for interactive coding: The first with computational complexity $O(N \log N)$ and exponentially small (in $N$) failure probability; and the second with computational complexity $O(N)$, but failure probability $1/\poly(N)$. (Computational complexity is measured in the RAM model.)
Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general-LWE'' schemes, suggesting yet another tradeoff that can be optimized for different settings.
An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of noise with low Hamming weight cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public key encryption.
An additional corollary is that the recent candidate fully homomorphic encryption, suggested by Bogdanov and Lee (ePrint '11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: One by applying the aforementioned general statement, and another by directly attacking one of the components of the scheme.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity (CC). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(CC)}$.
Previous solutions are either inefficient or are resilient only to stochastic noise.
We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values.
Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$ which carries obvious advantages for implementation. In addition, security can be classically reduced to the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
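A sketch of the resulting decryption invariant (standard notation for such schemes; details such as the exact rounding are simplified): the message is placed next to q/2 rather than at the least significant bit, so correctness depends only on the ratio between q and the noise bound.

```latex
\langle \mathbf{c}, \mathbf{s} \rangle \;\equiv\; \lfloor q/2 \rfloor \cdot m + e \pmod{q},
\qquad |e| < q/4
\;\;\Longrightarrow\;\;
m = \Big\lceil \tfrac{2}{q} \cdot \big(\langle \mathbf{c}, \mathbf{s} \rangle \bmod q\big) \Big\rfloor \bmod 2 .
```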
Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE (RLWE) problems that have $2^k$ security against known attacks. For RLWE, we have:
Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least $k$ -- we can reduce the per-gate computation of the bootstrapped version to $\tilde{O}(k)$, independent of $L$, by batching the bootstrapping operation. Previous FHE schemes all required $\tilde{\Omega}(k^{3.5})$ computation per gate.
At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).
Our construction improves on previous works in two aspects:
Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query, which is better than any known scheme (here, $k$ is a security parameter).
Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an example of a public-key encryption scheme that is resilient to $\lambda$ bits of leakage, and yet its $2$-repetition is not resilient to even $(1+\epsilon)\lambda$ bits of leakage. In their counter-example, the repeated schemes share secretly generated public parameters.
In this work we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition does in fact amplify leakage. In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition.
More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectively, their direct product is resilient to $\sum (\lambda_i-1)$ bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
One of the obstacles in going from ``somewhat'' to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step forward towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key.
Our scheme is based on the ring learning with errors (RLWE) assumption that was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a,b=as+2e), where s,a,e are all degree (n-1) integer polynomials whose coefficients are independently drawn from easy to sample distributions.
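A self-contained toy implementation of exactly this template in Python/numpy (parameters are far too small for security, and the distributions are simplified placeholders for the ones in the paper):

```python
import numpy as np

# Toy sketch of the scheme above: the ring R_q = Z_q[x]/(x^n + 1), secret s,
# public key (a, b = a*s + 2e). Encryption of m in {0,1}^n:
#   c0 = b*v + 2*e0 + m,  c1 = a*v + 2*e1,
# so that c0 - c1*s = 2*(e*v + e0 - e1*s) + m, and reducing mod 2 recovers m.
n, q = 16, 1 << 15
rng = np.random.default_rng(1)

def polymul(f, g):
    """Multiply in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    h = np.convolve(f, g)
    res = np.zeros(n, dtype=np.int64)
    for i, c in enumerate(h):
        res[i % n] += c if (i // n) % 2 == 0 else -c
    return res % q

small = lambda: rng.integers(-2, 3, size=n)   # "easy to sample" small polys

s, a, e = small() % q, rng.integers(0, q, size=n), small()
b = (polymul(a, s) + 2 * e) % q               # public key (a, b)

def enc(m):
    v, e0, e1 = small(), small(), small()
    return (polymul(b, v) + 2 * e0 + m) % q, (polymul(a, v) + 2 * e1) % q

def dec(c0, c1):
    d = (c0 - polymul(c1, s)) % q
    d = np.where(d > q // 2, d - q, d)        # center mod q before mod 2
    return d % 2

m = rng.integers(0, 2, size=n)
assert np.array_equal(dec(*enc(m)), m)
```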
In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary's point of view. In such cases, the framework of Bellare et al. may fall short of providing robust security guarantees.
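For intuition (a folklore observation, not the paper's contribution): with a deterministic scheme, an adversary who can enumerate a small set of candidate plaintexts simply re-encrypts them and compares, which is why some entropy or hardness assumption on the plaintext is unavoidable.

    # Folklore attack on deterministic encryption of predictable plaintexts.
    # `enc_pk` is a placeholder for the deterministic public-key encryption.
    def recover_low_entropy_plaintext(enc_pk, target_ct, candidates):
        for m in candidates:
            if enc_pk(m) == target_ct:   # determinism makes comparison possible
                return m
        return None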
We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the decisional Diffie-Hellman (and, more generally, on the $d$-linear) assumption, and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, quadratic residuosity and Paillier's composite residuosity). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.
With this motivation in mind, we suggest a new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on zero-knowledge proofs relative to some oracle. We show that our framework is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one, something ruled out by prior black-box separation results. On the other hand, we show that several black-box impossibility results still hold even in a setting that allows for zero-knowledge proofs.
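A structural sketch of the Naor-Yung template referenced above (our own illustration; the placeholder functions are hypothetical stand-ins): encrypt the message under two independent CPA-secure keys and attach a zero-knowledge proof that both ciphertexts encrypt the same plaintext.

    import os

    # Naor-Yung shape (illustration only). `cpa_encrypt` and
    # `zk_prove_consistency` are hypothetical placeholders for the underlying
    # CPA scheme and the (possibly oracle-relative) zero-knowledge proof.
    def naor_yung_encrypt(cpa_encrypt, zk_prove_consistency, pk1, pk2, crs, m):
        r1, r2 = os.urandom(32), os.urandom(32)   # fresh encryption randomness
        c1 = cpa_encrypt(pk1, m, r1)
        c2 = cpa_encrypt(pk2, m, r2)
        proof = zk_prove_consistency(crs, (pk1, pk2, c1, c2), (m, r1, r2))
        return (c1, c2, proof)
    # Decryption uses only the first secret key, after verifying the proof;
    # the second ciphertext and the proof exist solely to enforce consistency
    # against chosen-ciphertext attacks.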
We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, at each time period, can probe the entire memory (containing a secret key) and ``leak'' up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC04]).
Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].
The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) constructing public-key encryption (and IBE) schemes that are resilient to continual leakage.
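A toy illustration of the update idea (our own, not the paper's scheme): keep the long-lived secret as two additive shares and re-randomize the sharing each period, so that a bounded per-period leak on the current shares cannot be accumulated across periods, while functionality is preserved.

    import secrets

    # Toy model of key updates under continual leakage (illustration only).
    MOD = 2**64
    s = secrets.randbelow(MOD)                   # long-lived secret
    x = secrets.randbelow(MOD)
    shares = (x, (s - x) % MOD)                  # additive sharing of s

    def update(shares):
        r = secrets.randbelow(MOD)               # fresh re-randomizer
        return ((shares[0] + r) % MOD, (shares[1] - r) % MOD)

    for period in range(5):
        leaked = shares[0] & 0xFF                # adversary leaks 8 bits this period
        shares = update(shares)                  # refresh before the next period

    assert (shares[0] + shares[1]) % MOD == s    # the secret is still usable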
In particular, under what we call the subgroup indistinguishability assumption, of which the QR and DCR assumptions are special cases, we can construct a scheme that has: key-dependent message (circular) security; leakage resilience for a $(1-o(1))$ fraction of the secret key; and auxiliary-input security with respect to sufficiently hard-to-invert functions of the secret key.
Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resilience for a leakage rate of $(1-o(1))$ of the secret key length under the QR assumption. We note that leakage-resilient schemes under the DCR and QR assumptions, for the restricted case where the composite modulus is a product of safe primes, were implied by the work of [NS, Crypto09] using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of $o(1)$ of the secret key length.
We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000), together with the ``prefix method''. Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present simple constructions based on the RSA assumption, the short integer solution (SIS) assumption, and the computational Diffie-Hellman (CDH) assumption over bilinear groups.
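A runnable toy of the generic shape (tiny, insecure parameters; our own illustration, not the paper's exact transformation): a discrete-log chameleon hash $CH(m, r) = g^m h^r \bmod p$, with the full signature obtained by signing the hash value of the message, under fresh randomness, with the weakly secure scheme.

    import secrets

    # Chameleon-hash-then-sign sketch (illustration only; insecure sizes).
    p = 0xFFFFFFFFFFFFFFC5                 # the largest 64-bit prime (toy size)
    g = 2
    trapdoor = secrets.randbelow(p - 1)    # chameleon trapdoor
    h = pow(g, trapdoor, p)

    def chameleon_hash(m, r):
        # messages are encoded as integers
        return (pow(g, m % (p - 1), p) * pow(h, r, p)) % p

    def full_sign(weak_sign, m):
        # `weak_sign`: signer of the underlying weakly secure scheme (placeholder)
        r = secrets.randbelow(p - 1)
        return weak_sign(chameleon_hash(m, r)), r
    # The trapdoor allows a security reduction to find collisions
    # CH(m, r) = CH(m', r'), i.e., to commit to hash values before seeing
    # the adversary's messages.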
Next, we observe that this general transformation also applies in the setting of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the short integer solution (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call ring trapdoor functions. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.
Finally, we show a connection between ring signatures and identity-based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: the first is based on the learning with errors (LWE) assumption and is similar to recently introduced IBE schemes; the second is based on the $d$-linear assumption over bilinear groups.
In the case of functions that can be expressed as degree-$d$ polynomials, we show that the resulting schemes are also secure with respect to key cycles of any length. Specifically, for any polynomial number $n$ of key pairs, our schemes can securely encrypt a degree-$d$ polynomial whose variables are the collection of coordinates of all $n$ secret keys. Prior to this work, it was not known how to achieve this for nonlinear functions.
Our key idea is a general transformation that amplifies KDM security. The transformation takes an encryption scheme that is KDM secure w.r.t. some functions even when the secret keys are weak (i.e. chosen from an arbitrary distribution with entropy $k$), and outputs a scheme that is KDM secure w.r.t. a richer class of functions. The resulting scheme may no longer be secure with weak keys. Thus, in some sense, this transformation converts security with weak keys into amplified KDM security.
VRFs are both a natural and a useful primitive, and previous works have suggested a variety of constructions and applications. Still, there are many open questions in the study of VRFs, especially regarding their relation to more widely studied cryptographic primitives and the possibility of constructing them from a wider variety of cryptographic assumptions.
In this work we define a natural relaxation of VRFs that we call weak verifiable random functions, where pseudorandomness is required to hold only for randomly selected inputs. We conduct a study of weak VRFs, focusing on their applications, constructions, and relationship to other cryptographic primitives.
We define this new notion and construct restorers for properties in the dense-graph model, namely Bipartiteness and $\rho$-Clique, as well as for monotonicity of functions. We also show general relations between property restoring and other notions such as property testing and tolerant property testing.
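As a toy, offline analogue of restoring (our own illustration; actual restorers must answer queries locally with sublinear access to the object): repairing a $0/1$ array to a nearest monotone one under Hamming distance.

    # Offline "restorer" for monotonicity of 0/1 arrays (illustration only):
    # a monotone 0/1 array has the form 0^k 1^(n-k), so it suffices to pick
    # the cut k that minimizes the number of flipped entries.
    def restore_monotone_bits(bits):
        n = len(bits)
        ones = [0] * (n + 1)              # ones[i] = number of 1s in bits[:i]
        for i, b in enumerate(bits):
            ones[i + 1] = ones[i] + b
        # cost(k) = 1s in the prefix (flipped to 0) + 0s in the suffix (flipped to 1)
        k = min(range(n + 1),
                key=lambda k: ones[k] + (n - k) - (ones[n] - ones[k]))
        return [0] * k + [1] * (n - k)

    assert restore_monotone_bits([0, 1, 0, 0, 1, 1]) == [0, 0, 0, 0, 1, 1]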
