Zvika Brakerski
and Nir Magrafta
Real-Valued Somewhat-Pseudorandom Unitaries
TCC 2024.
[Abstract] [PDF]
We explore a very simple distribution of unitaries: random (binary) phase -- Hadamard -- random (binary) phase -- random computational-basis permutation. We show that this distribution is statistically indistinguishable from Haar-random unitaries for any polynomial set of orthogonal input states (in any basis) with polynomial multiplicity. This shows that even though real-valued unitaries cannot be completely pseudorandom (Haug, Bharti, Koh, arXiv:2306.11677), we can still obtain some pseudorandom properties without giving up on the simplicity of a real-valued unitary.
Our analysis shows that an even simpler construction, applying a random (binary) phase followed by a random computational-basis permutation, would suffice, assuming that the input is orthogonal and ``flat'' (that is, has high min-entropy when measured in the computational basis).
Using quantum-secure one-way functions (which imply quantum-secure pseudorandom functions and permutations), we obtain an efficient cryptographic instantiation of the above.
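For small n, the sampled unitary can be written out explicitly. Below is a minimal numpy sketch, reading the composition as applied left to right (an assumption on ordering); the truly random phases and permutation stand in for the pseudorandom functions and permutations of the cryptographic instantiation:

```python
import numpy as np

def binary_phase(n, rng):
    # Diagonal unitary with uniform +/-1 entries on the computational basis.
    return np.diag(rng.choice([1.0, -1.0], size=2**n))

def hadamard_layer(n):
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    layer = np.array([[1.0]])
    for _ in range(n):
        layer = np.kron(layer, h)
    return layer

def basis_permutation(n, rng):
    # Permutation of the 2^n computational-basis states.
    return np.eye(2**n)[rng.permutation(2**n)]

def sample_unitary(n, rng=np.random.default_rng()):
    # phase, then Hadamard layer, then phase, then permutation
    # (matrix product written right to left); every factor is real.
    return basis_permutation(n, rng) @ binary_phase(n, rng) \
        @ hadamard_layer(n) @ binary_phase(n, rng)

U = sample_unitary(3)
assert np.allclose(U @ U.T, np.eye(8))  # real orthogonal, hence unitary
```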
-
A major unresolved question in quantum cryptography is whether it is possible to obfuscate arbitrary quantum computation. Indeed, there is much yet to understand about the feasibility of quantum obfuscation even in the classical oracle model, where one is given for free the ability to obfuscate any classical circuit.
In this work, we develop a new array of techniques that we use to construct a quantum state obfuscator, a powerful notion formalized recently by Coladangelo and Gunn (arXiv:2311.07794) in their pursuit of better software copy-protection schemes. Quantum state obfuscation refers to the task of compiling a quantum program, consisting of a quantum circuit C with a classical description and an auxiliary quantum state |a>, into a functionally-equivalent obfuscated quantum program that hides as much as possible about C and |a>. We prove the security of our obfuscator when applied to any pseudo-deterministic quantum program, i.e. one that computes a (nearly) deterministic classical input / classical output functionality. Our security proof is with respect to an efficient classical oracle, which may be heuristically instantiated using quantum-secure indistinguishability obfuscation for classical circuits.
Our result improves upon the recent work of Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023) who also showed how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, but only ones with a completely classical description. Furthermore, our result answers a question of Coladangelo and Gunn, who provide a construction of quantum state indistinguishability obfuscation with respect to a quantum oracle. Indeed, our quantum state obfuscator together with Coladangelo-Gunn gives the first candidate realization of a ``best-possible'' copy-protection scheme for all polynomial-time functionalities.
-
We initiate a rigorous study of computational entanglement theory, inspired by the emerging usefulness of ideas from quantum information theory in computational complexity. We define new operational computational measures of entanglement -- the computational one-shot entanglement cost and distillable entanglement. We then show that the computational measures are fundamentally different from their information-theoretic counterparts by presenting gaps between them.
We proceed by refining and extending the definition of pseudo-entanglement, introduced by Aaronson et al., 2022, using the new operational measures; and we present constructions of pseudo-entangled states (for our new definition) based on post-quantum cryptographic assumptions.
Finally, we discuss the relations between computational entanglement theory and other topics, such as quantum cryptography and notions of pseudoentropy, as well as the relevance of our new definitions to the study of the AdS/CFT correspondence.
We believe that, in addition to the contributions presented in the current manuscript, our work opens multiple research directions, of relevance both to the theoretical quantum information theory community and to future applications of quantum networks and cryptography.
Stav Medina
and Zvika Brakerski
Limits on Adaptive Security for Attribute-Based Encryption
TCC 2024.
[Abstract] [PDF]
This work addresses the long quest for proving full (adaptive) security for attribute-based encryption (ABE). We show that in order to prove full security in a black-box manner, the scheme must be ``irregular'' in the sense that it is impossible to ``validate'' secret keys to ascertain consistent decryption of ciphertexts. This extends a result of Lewko and Waters (Eurocrypt 2014) that was only applicable to straight-line proofs (without rewinding). Our work, therefore, establishes that it is impossible to circumvent the irregularity property using creative proof techniques, so long as the adversary is used in a black-box manner.
As a consequence, our work provides an explanation as to why some lattice-based ABE schemes cannot be proven fully secure, even though no known adaptive attacks exist.
Amit Behera,
Zvika Brakerski,
Or Sattath
and Omri Shmueli
Pseudorandomness with Proof of Destruction and Applications
QCrypt 2023, TCC 2023.
[Abstract] [PDF]
Two fundamental properties of quantum states that quantum information theory explores are pseudorandomness and provability of destruction. We introduce the notion of quantum pseudorandom states with proofs of destruction (PRSPD) that combines both these properties. Like standard pseudorandom states (PRS), these are efficiently generated quantum states that are indistinguishable from random, but they can also be measured to create a classical string. This string is verifiable (given the secret key) and certifies that the state has been destructed.
We show that, similarly to PRS, PRSPD can be constructed from any post-quantum one-way function. As far as the authors are aware, this is the first construction of a family of states that satisfies both pseudorandomness and provability of destruction.
We show that many cryptographic applications that were shown based on PRS variants using quantum communication can be based on (variants of) PRSPD using only classical communication. This includes symmetric encryption, message authentication, one-time signatures, commitments, and classically verifiable private quantum coins.
-
A test of quantumness is a protocol that allows a classical verifier to certify (only) that a prover is not classical. We show that tests of quantumness that follow a certain template, which captures recent proposals such as (Kalai et al., 2022), can in fact do much more. Namely, the same protocols can be used for certifying a qubit, a building-block that stands at the heart of applications such as certifiable randomness and classical delegation of quantum computation.
Certifying qubits was previously only known to be possible based on the hardness of the Learning with Errors problem and the use of an adaptive hardcore bit (Brakerski et al., 2018). Our framework allows certification of qubits based only on the existence of post-quantum trapdoor claw-free functions, or on quantum fully homomorphic encryption. These can be instantiated, for example, from Ring Learning with Errors.
On the technical side, we show that the quantum soundness of any such protocol can be reduced to proving a bound on a simple algorithmic task: informally, answering ``two challenges simultaneously'' in the protocol. Our reduction formalizes the intuition that these protocols demonstrate quantumness by leveraging the impossibility of rewinding a general quantum prover. This allows us to prove tight bounds on the quantum soundness of (Kahanamoku-Meyer et al., 2021) and (Kalai et al., 2022), showing that no quantum polynomial-time prover can succeed with probability larger than cos²(π/8) ≈ 0.853. Previously, only an upper bound on the success probability of classical provers, and a lower bound on the success probability of quantum provers, were known. We then extend this proof of quantum soundness to show that provers that approach the quantum soundness bound must perform almost anti-commuting measurements. This certifies that the prover holds a qubit.
-
We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time. Specifically, we revisit four foundational results in this context---two protocols and two worst-case to average-case reductions. We show how to improve the approximation factor in each result by a factor of roughly √(n/log n) when running the protocol or reduction in 2^{εn} time instead of polynomial time, and we show a novel protocol with no polynomial-time analog.
Zvika Brakerski
Black-Hole Radiation Decoding is Quantum Cryptography
CRYPTO 2023, QIP 2024.
[Abstract] [PDF]
We propose to study equivalence relations between phenomena in high-energy physics and the existence of standard cryptographic primitives, and show the first example where such an equivalence holds. A small number of prior works showed that high-energy phenomena can be explained by cryptographic hardness. Examples include using the existence of one-way functions to explain the hardness of decoding black-hole Hawking radiation (Harlow and Hayden 2013, Aaronson 2016), and using pseudorandom quantum states to explain the hardness of computing the AdS/CFT dictionary (Bouland, Fefferman and Vazirani, 2020).
In this work we show, for the former example of black-hole radiation decoding, that it also implies the existence of secure quantum cryptography. In fact, we show an existential equivalence between the hardness of black-hole radiation decoding and a variety of cryptographic primitives, including bit-commitment schemes and oblivious transfer protocols (using quantum communication). This can be viewed (with proper disclaimers, as we discuss) as providing a physical justification for the existence of secure cryptography. We conjecture that such connections may be found in other high-energy physics phenomena.
Zvika Brakerski,
Ran Canetti
and Luowen Qian
On the Computational Hardness Needed for Quantum Cryptography
ITCS 2023.
[Abstract] [PDF]
In the classical model of computation, it is well established that one-way functions (OWF) are essential for almost every computational cryptographic application. In the quantum setting, however, OWFs appear not to be essential (Kretschmer 2021; Ananth et al., Morimae and Yamakawa 2022), and the question of whether a minimal primitive exists remains open.
We consider EFI pairs -- efficiently samplable, statistically far but computationally indistinguishable pairs of distributions. Building on the work of Yan (2022), which shows equivalence between EFI pairs and statistical commitment schemes, we show that EFI pairs are necessary and sufficient for a large class of quantum-cryptographic applications. Specifically, while it was known how to construct commitment schemes, oblivious transfer, and general secure multiparty computation from any EFI, we show how to construct EFI pairs from minimalistic versions of each one of these primitives. We also construct from EFI quantum computational zero knowledge (QCZK) proofs for all of QIP, and construct EFI pairs from essentially any non-trivial QCZK.
This suggests that, for much of quantum cryptography, EFI pairs play a similar role to that played by OWFs in the classical setting: they are simple to describe, essential, and also serve as a linchpin for demonstrating equivalence between primitives.
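For intuition, here is a classical toy instance of the EFI template (SHA-256 in counter mode is an illustrative PRG stand-in, not a proven one; the EFI pairs in the paper are quantum states):

```python
import hashlib, secrets

# D0 outputs a pseudorandom 64-byte string whose support has size at
# most 2^128; D1 outputs a uniform 64-byte string. The pair is
# statistically far, yet computationally indistinguishable assuming
# the expansion behaves as a PRG.

def sample_d0() -> bytes:
    seed = secrets.token_bytes(16)
    return hashlib.sha256(seed + b"0").digest() + hashlib.sha256(seed + b"1").digest()

def sample_d1() -> bytes:
    return secrets.token_bytes(64)
```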
-
We show that it is possible to perform n independent copies of 1-out-of-2 oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is n(1+o(1)) for sufficiently large n. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-1 fully homomorphic encryption (Rate-1 FHE, Brakerski et al., TCC 2019).
To achieve rate-1 both on the receiver's and sender's end, we use the LPN assumption, with slightly sub-constant noise rate 1/m^ε for any ε>0, together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive "bootstrapping" operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE).
For our DDH-based solution we develop a new technique that may be of independent interest. We show that it is possible to "emulate" the binary group Z2 (or any other small-order group) inside a prime-order group Zp in a function-private manner. That is, Z2 operations are mapped to Zp operations such that the outcome of the latter does not reveal additional information beyond the Z2 outcome. Our encoding technique uses the discrete Gaussian distribution, which to our knowledge was not done before in the context of DDH.
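As a toy additive analogue of this idea (an assumption for illustration only; the paper's actual encoding lives in the exponent of a DDH group and is more delicate), one can hide which representatives were summed behind discrete Gaussian noise while preserving the parity:

```python
import random

P = 2**31 - 1  # an arbitrary large prime (toy parameter)

def encode_bit(b, sigma=50.0):
    # Sample a small discrete-Gaussian integer with parity b and view it
    # in Z_P. The distribution of a sum of such encodings depends
    # (almost) only on the parity of the sum, which is the
    # function-privacy intuition.
    while True:
        x = round(random.gauss(0.0, sigma))
        if x % 2 == b:
            return x % P

def decode(z):
    # Lift to the centered representative in (-P/2, P/2], read the parity.
    x = z if z <= P // 2 else z - P
    return x % 2

b1, b2 = 1, 1
assert decode((encode_bit(b1) + encode_bit(b2)) % P) == (b1 ^ b2)
```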
Nir Bitansky,
Zvika Brakerski
and Yael Kalai
Constructive Post-Quantum Reductions
Crypto 2022, QCrypt 2022.
[Abstract] [PDF]
Is it possible to convert classical reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a non-constructive post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage.
We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted.
Along the way, we make several additional contributions:
1. We put forth a framework for reductions (or general interaction) with stateful solvers for a computational problem, which may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting.
2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically "restored" post-measurement. This shows that the novel rewinding technique of Chiesa et al. (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
Nir Bitansky and
Zvika Brakerski
Classical Binding for Quantum Commitments
TCC 2021.
[Abstract] [PDF]
In classical commitments, statistical binding means that for almost any commitment transcript there is at most one possible opening. While quantum commitments (for classical messages) sometimes have benefits over their classical counterparts (e.g. in terms of assumptions), they provide a weaker notion of binding: essentially, that the sender cannot open a given commitment to a random value with probability noticeably greater than 1/2.
We introduce a notion of classical binding for quantum commitments which provides guarantees analogous to the classical case. In our notion, the receiver performs a (partial) measurement of the quantum commitment string, and the outcome of this measurement determines a single value that the sender may open. We expect that our notion can replace classical commitments in various settings, leaving the security proof essentially unchanged. As an example we show a soundness proof for the GMW zero-knowledge proof system.
We construct a non-interactive quantum commitment scheme which is classically statistically-binding and has a classical opening, based on the existence of any post-quantum one-way function. Prior candidates had inherently quantum openings and were not classically binding.
In contrast, we show that it is impossible to achieve classical binding for statistically hiding commitments, regardless of assumption or round complexity.
Our scheme is simply Naor's commitment scheme (which classically requires a common random string, CRS), but executed in superposition over all possible values of the CRS, and repeated several times. We hope that this technique for using quantum communication to remove a CRS may find other uses.
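For background, here is a minimal sketch of the classical Naor commitment that the scheme runs in superposition (the SHA-256-based PRG and byte-level parameters are illustrative stand-ins for the post-quantum PRG the scheme assumes):

```python
import hashlib, secrets

def prg(seed: bytes, out_len: int) -> bytes:
    # Toy length-tripling PRG via SHA-256 in counter mode.
    out, ctr = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:out_len]

def commit(crs: bytes, bit: int, seed: bytes) -> bytes:
    # Naor: commit to 0 with G(s); commit to 1 with G(s) XOR crs.
    expansion = prg(seed, len(crs))
    if bit == 0:
        return expansion
    return bytes(a ^ b for a, b in zip(expansion, crs))

def open_check(crs: bytes, com: bytes, bit: int, seed: bytes) -> bool:
    return commit(crs, bit, seed) == com

n = 16                              # toy security parameter (in bytes)
crs = secrets.token_bytes(3 * n)    # the 3n-long common random string
seed = secrets.token_bytes(n)
com = commit(crs, 1, seed)
assert open_check(crs, com, 1, seed)
```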
-
We consider the problem of subgroup testing for a quantum circuit C: given access to C, determine whether it implements a unitary from a subgroup G of the unitary group. In particular, the group G can be the trivial subgroup (i.e., identity testing) or groups such as the Pauli or Clifford groups, or even their q-ary extensions. We also consider a promise version of the problem where C is promised to be in some subgroup of the unitaries that contains G (e.g., identity testing for Clifford circuits).
We present a novel structural property of Clifford unitaries: their (normalized) trace is bounded by 1/√2 in absolute value, regardless of the dimension. We show a similar property for the q-ary Cliffords. This allows us to analyze a very simple single-query identity test under the Clifford promise and show that it has (at least) constant soundness. The same test has perfect soundness under the Pauli promise.
We use the test to show that identity/Pauli/Clifford testing (without promise) are all computationally equivalent, thus establishing computational hardness for Pauli and Clifford testing.
-
Efficient lattice-based cryptography usually relies on the intractability of problems on lattices with algebraic structure such as ideal-lattices or module-lattices. It is an important open question to evaluate the hardness of such lattice problems, and their relation to the hardness of problems on unstructured lattices.
It is a known fact that an unstructured lattice can be cast as an ideal-lattice in some order of a number field (and thus, in a rather trivial sense, that ideals in orders are as general as unstructured lattices). However, it is not known whether this connection can be used to imply useful hardness results for structured lattices, or alternatively new algorithmic techniques for unstructured lattices.
In this work we show that the Order-LWE problem (a generalization of the well known Ring-LWE problem) on certain orders is at least as hard as the (unstructured) LWE problem. So in general one should not hope to solve Order-LWE more efficiently than LWE. However, we only show that this connection holds in orders that are very ``skewed'' and in particular irrelevant for cryptographic applications. We then discuss the ability to embed unstructured lattices in ``friendlier'' orders, which requires devising an algorithm for computing the conductor of relevant orders. One of our technical tools is an improved hardness result for Order-LWE, closing a gap left in prior work.
-
In this work, we show the first worst-case to average-case reduction for the classical k-SUM problem. A k-SUM instance is a collection of m integers, and the goal of the k-SUM problem is to find a subset of k elements that sums to 0. In the average-case version, the m elements are chosen uniformly at random from some interval [-u, u].
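As a reference point for the problem just defined, a brute-force solver (illustrative only; exponential in k, far from the u^{O(1/log k)} average-case algorithms discussed below):

```python
from itertools import combinations

def k_sum(elems, k):
    # Brute-force k-SUM: find k distinct positions whose elements sum to 0.
    for idxs in combinations(range(len(elems)), k):
        if sum(elems[i] for i in idxs) == 0:
            return idxs
    return None

print(k_sum([3, -1, 5, -2, -4, 6, 2], 3))  # (0, 1, 3): 3 + (-1) + (-2) == 0
```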
We consider the total setting where m is sufficiently large (with respect to u and k), so that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of k-SUM, in particular connections to problems in computational geometry, extends to the total setting.
The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a run-time of u^{O(1/log k)}. This beats the known (conditional) lower bounds for worst-case k-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound, by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case k-SUM on m elements in time u^{o(1/log k)} will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
-
Non-committing encryption (NCE) is a type of public key encryption which comes with the ability to equivocate ciphertexts to encryptions of arbitrary messages, i.e., it allows one to find coins for key generation and encryption which ``explain'' a given ciphertext as an encryption of any message. NCE is the cornerstone for constructing adaptively secure multiparty computation [Canetti et al. STOC'96] and can be seen as the quintessential notion of security for public key encryption to realize ideal communication channels. A large body of literature investigates the best message-to-ciphertext ratio (i.e., the rate) that one can hope to achieve for NCE. In this work we propose a near-complete resolution of this question: we show how to construct NCE with constant rate in the plain model from a variety of assumptions, such as the hardness of the learning with errors (LWE), the decisional Diffie-Hellman (DDH), or the quadratic residuosity (QR) problem. Prior to our work, constructing NCE with constant rate required a trusted setup and indistinguishability obfuscation [Canetti et al. ASIACRYPT'17].
Zvika Brakerski
and
Nico Döttling
Lossiness and Entropic Hardness for Ring-LWE
TCC 2020.
[Abstract] [PDF]
The hardness of the Ring Learning with Errors problem (RLWE) is a central building block for efficiency-oriented lattice-based cryptography. Many applications use an ``entropic'' variant of the problem where the so-called ``secret'' is not distributed uniformly as prescribed but instead comes from some distribution with sufficient min-entropy. However, the hardness of the entropic variant has not been substantiated thus far.
For standard LWE (not over rings), entropic results are known via a ``lossiness approach,'' but it was not known how to adapt this approach to the ring setting. In this work we present the first such results, where entropic security is established either under RLWE or under the Decisional Small Polynomial Ratio (DSPR) assumption, which is a mild variant of the NTRU assumption.
In the context of general entropic distributions, our results in the ring setting essentially match the known lower bounds (Bolboceanu et al., Asiacrypt 2019; Brakerski and DΓΆttling, Eurocrypt 2020).
Zvika Brakerski,
Sanjam Garg
and
Rotem Tsabary
FHE-Based Bootstrapping of Designated-Prover NIZK
TCC 2020.
[Abstract] [PDF]
We present a novel tree-based technique that can convert any designated-prover NIZK proof system (DP-NIZK) which maintains zero-knowledge only for a single statement into one that allows proving an unlimited number of statements in ZK, while maintaining all parameters succinct. Our transformation requires leveled fully-homomorphic encryption. We note that single-statement DP-NIZK can be constructed from any one-way function. We also observe a two-way derivation between DP-NIZK and attribute-based signatures (ABS), and as a result derive new constructions of ABS and homomorphic signatures (HS).
Our construction improves upon the prior construction of lattice-based DP-NIZK by Kim and Wu (Crypto 2018) since we only require leveled FHE as opposed to HS (which also translates to improved LWE parameters when instantiated). Alternatively, the recent construction of NIZK without preprocessing from either circular-secure FHE (Canetti et al., STOC 2019) or polynomial Learning with Errors (Peikert and Shiehian, Crypto 2019) could be used to obtain a similar final statement. Nevertheless, we note that our statement is formally incomparable to these works (since leveled FHE is not known to imply circular secure FHE or the hardness of LWE). We view this as evidence for the potential in our technique, which we hope can find additional applications in future works.
Zvika Brakerski,
Yael Kalai
and Raghuvansh Saxena
Deterministic and Efficient Interactive Coding from Hard-to-Decode Tree Codes
FOCS 2020.
[Abstract] [PDF]
The field of Interactive Coding studies how an interactive protocol can be made resilient to channel errors. Even though this field has received abundant attention since Schulman's seminal paper (FOCS 92), constructing interactive coding schemes that are both deterministic and efficient, and at the same time resilient to adversarial errors (with constant information and error rates), remains an elusive open problem.
An appealing approach towards resolving this problem is to show efficiently encodable and decodable constructions of a combinatorial object called tree codes (Schulman, STOC 93). After a lot of effort in this direction, the current state of the art has deterministic constructions of tree codes that are efficiently encodable but require a logarithmic (instead of constant) alphabet (Cohen, Haeupler, and Schulman, STOC 18). However, we still lack (even heuristic) candidate constructions that are efficiently decodable.
In this work, we show that tree codes that are efficiently encodable, but not efficiently decodable, also imply deterministic and efficient interactive coding schemes that are resilient to adversarial errors. Our result immediately implies a deterministic and efficient interactive coding scheme with a logarithmic alphabet (i.e., 1/loglog rate). We show this result using a novel implementation of hashing through deterministic tree codes that is powerful enough to yield interactive coding schemes.
Zvika Brakerski,
Nico Döttling,
Sanjam Garg
and
Giulio Malavolta
Factoring and Pairings are not Necessary for iO: Circular-Secure LWE Suffices
Manuscript, 2020.
[Abstract] [PDF]
We construct indistinguishability obfuscation (iO) solely under circular-security properties of encryption schemes based on the Learning with Errors (LWE) problem. Circular-security assumptions were used before to construct (non-leveled) fully-homomorphic encryption (FHE), but our assumption is stronger and requires circular randomness-leakage-resilience. In contrast with prior works, this assumption can be conjectured to be post-quantum secure, yielding the first provably secure iO construction that is (plausibly) post-quantum secure.
Our work follows the high-level outline of the recent work of Gay and Pass [ePrint 2020], who showed a way to remove the heuristic step from the homomorphic-encryption based iO approach of Brakerski, Döttling, Garg, and Malavolta [EUROCRYPT 2020]. They thus obtain a construction proved secure under a circular-security assumption on natural homomorphic encryption schemes --- specifically, homomorphic encryption schemes based on LWE and DCR, respectively. In this work we show how to remove the DCR assumption and obtain a scheme based on the circular security of LWE alone. Along the way we relax some of the requirements in the Gay-Pass blueprint and thus obtain a scheme that is secure under a relaxed assumption. Specifically, we do not require security in the presence of a key-cycle, but rather only in the presence of a key-randomness cycle.
Zvika Brakerski
and
Henry Yuen
Quantum Garbled Circuits
QIP 2021, STOC 2022.
[Abstract] [PDF]
We present a garbling scheme for quantum circuits, thus achieving a decomposable randomized encoding scheme for quantum computation. Specifically, we show how to compute an encoding of a given quantum circuit and quantum input, from which it is possible to derive the output of the computation and nothing else.
In the classical setting, garbled circuits (and randomized encodings in general) are a versatile cryptographic tool with many applications such as secure multiparty computation, delegated computation, depth-reduction of cryptographic primitives, complexity lower-bounds, and more. However, a quantum analogue for garbling general circuits was not known prior to this work. We hope that our quantum randomized encoding scheme can similarly be useful for applications in quantum computing and cryptography.
The properties of our scheme are as follows:
- Our scheme has perfect correctness, and has perfect information-theoretic security if we allow the encoding size to blow-up considerably (double-exponentially in the depth of the circuit in the worst-case). This blowup can be avoided via computational assumptions (specifically, the existence of quantum-secure pseudorandom generators). In the computational case, the size of the encoding is proportional to the size of the circuit being garbled, up to a polynomial in the security parameter.
- The encoding process is decomposable: each input qubit can be encoded independently, when given access to classical randomness and EPR pairs.
- The complexity of encoding essentially matches the size of its output and furthermore it can be computed via a constant-depth quantum circuit with bounded-arity gates as well as quantum fan-out gates (which come ``for free'' in the classical setting). Formally this is captured by the complexity class QNC^0_f.
To illustrate the usefulness of quantum randomized encoding, we use it to design a conceptually-simple zero-knowledge (ZK) proof system for the complexity class QMA. Our protocol has the so-called Σ format with a single-bit challenge, and allows the inputs to be delayed to the last round. The only previously-known ZK Σ-protocol for QMA is due to Broadbent and Grilo (FOCS 2020), which does not have the aforementioned properties.
Gorjan Alagic,
Zvika Brakerski,
Yfke Dulek
and
Christian Schaffner
Impossibility of Quantum Virtual Black-Box Obfuscation of Classical Circuits
QCrypt 2020, QIP 2021, CRYPTO 2021.
[Abstract] [PDF]
Virtual black-box obfuscation is a strong cryptographic primitive: it encrypts a circuit while maintaining its full input/output functionality. A remarkable result by Barak et al. (Crypto 2001) shows that a general obfuscator that obfuscates classical circuits into classical circuits cannot exist. A promising direction that circumvents this impossibility result is to obfuscate classical circuits into quantum states, which would potentially be better capable of hiding information about the obfuscated circuit. We show that, under the assumption that learning-with-errors (LWE) is hard for quantum computers, this quantum variant of virtual black-box obfuscation of classical circuits is generally impossible. On the way, we show that under the presence of dependent classical auxiliary input, even the small class of classical point functions cannot be quantum virtual black-box obfuscated.
-
A proof of quantumness is a method for provably demonstrating (to a classical verifier) that a quantum device can perform computational tasks that a classical device with comparable resources cannot. Providing a proof of quantumness is the first step towards constructing a useful quantum computer. There are currently three approaches for exhibiting proofs of quantumness: (i) Inverting a classically-hard one-way function (e.g. using Shor's algorithm). This seems technologically out of reach. (ii) Sampling from a classically-hard-to-sample distribution (e.g. BosonSampling). This may be within reach of near-term experiments, but for all such tasks known, verification requires exponential time. (iii) Interactive protocols based on cryptographic assumptions. The use of a trapdoor scheme allows for efficient verification, and implementation seems to require far fewer resources than (i), yet still more than (ii).
In this work we propose a significant simplification to approach (iii) by employing the random oracle heuristic. (We note that we do not apply the Fiat-Shamir paradigm.) We give a two-message (challenge-response) proof of quantumness based on any trapdoor claw-free function. In contrast to earlier proposals we do not need an adaptive hard-core bit property. This allows the use of smaller security parameters and more diverse computational assumptions (such as Ring Learning with Errors), significantly reducing the quantum computational effort required for a successful demonstration.
-
We propose a new approach to construct general-purpose indistinguishability obfuscation (iO). Our construction is obtained via a new intermediate primitive that we call split fully-homomorphic encryption (split FHE), which we show to be sufficient for constructing iO. Specifically, split FHE is FHE where decryption takes the following two-step syntactic form: (i) A secret decryption step uses the secret key and produces a hint which is (asymptotically) shorter than the length of the encrypted message, and (ii) a public decryption step that only requires the ciphertext and the previously generated hint (and not the entire secret key), and recovers the encrypted message. In terms of security, the hints for a set of ciphertexts should not allow one to violate semantic security for any other ciphertexts.
Next, we show a generic candidate construction of split FHE based on three building blocks: (i) A standard FHE scheme with linear decrypt-and-multiply (which can be instantiated with essentially all LWE-based constructions), (ii) a linearly homomorphic encryption scheme with short decryption hints (such as the Damgard-Jurik encryption scheme, based on the DCR problem), and (iii) a cryptographic hash function (which can be based on a variety of standard assumptions). Our approach is heuristic in the sense that our construction is not provably secure and makes implicit assumptions about the interplay between these underlying primitives. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model.
We view our construction as a big departure from the state-of-the-art constructions, and it is in fact quite simple.
Zvika Brakerski
and
Omri Shmueli
Scalable Pseudorandom Quantum States
QCrypt 2020, CRYPTO 2020.
[Abstract] [PDF]
Efficiently sampling a quantum state that is hard to distinguish from a truly random quantum state is an elementary task in quantum information theory that has both computational and physical uses. This is often referred to as a pseudorandom (quantum) state generator, or PRS generator for short.
In existing constructions of PRS generators, security scales with the number of qubits in the states, i.e. the (statistical) security parameter for an n-qubit PRS is roughly n. Perhaps counter-intuitively, n-qubit PRS are not known to imply k-qubit PRS even for k smaller than n. The question of scalability for PRS thus remained open: is it possible to construct n-qubit PRS generators with security parameter λ, for all n, λ? Indeed, we believe that PRS with tiny (even constant) n and large λ can be quite useful.
We resolve the problem in this work, showing that any quantum-secure one-way function implies scalable PRS. We follow the paradigm of first showing a statistically secure construction when given oracle access to a random function, and then replacing the random function with a quantum-secure (classical) pseudorandom function to achieve computational security. However, our methods deviate significantly from prior works since scalable pseudorandom states require randomizing the amplitudes of the quantum state, and not just the phase as in all prior works. We show how to achieve this using Gaussian sampling.
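The information-theoretic core of amplitude randomization can be phrased in a few lines; a minimal numpy sketch (illustrative; the construction derandomizes the Gaussian sampling with a quantum-secure PRF and works with suitably discretized amplitudes):

```python
import numpy as np

def haar_random_state(dim, rng=np.random.default_rng()):
    # A normalized vector of i.i.d. complex Gaussians is exactly
    # Haar-distributed, so pseudorandomly sampling Gaussian amplitudes
    # (not just phases) is the template for scalable PRS.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi = haar_random_state(2**4)
assert np.isclose(np.linalg.norm(psi), 1.0)
```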
Zvika Brakerski,
Venkata Koppula
and
Tamer Mour
NIZK from LPN and Trapdoor Hash via Correlation Intractability for Approximable Relations
CRYPTO 2020.
[Abstract] [PDF]
We present new non-interactive zero-knowledge argument systems (NIZK), based on standard assumptions that were previously not known to imply it. In particular, we rely on the hardness of both the learning parity with noise (LPN) assumption, and the existence of trapdoor hash functions (TDH, defined by DΓΆttling et al., Crypto 2019). Such TDH can be based on a number of standard assumptions, including DDH, QR, DCR, and LWE.
We revisit the correlation intractability (CI) framework for converting Σ-protocols into NIZK, and present a different strategy for instantiating it by putting together two new components. First, while prior works considered the search-complexity of the relations for which CI is sought, we consider their probabilistic representation: namely, a distribution over lower-complexity functions that bitwise-computes the target function with all but small (constant) probability. The second component is a new perspective for quantifying the class of relations for which CI is achieved. We show that it is instructive to consider CI for approximable relations (CI-Apx), which is quantified by a class of relations but requires CI to hold against any approximation of any relation in this class.
We show that CI-Apx for just constant-degree polynomials suffices for NIZK, if the underlying Σ-protocol is implemented using a suitable commitment scheme. We show that such a commitment scheme can be constructed based on LPN. We then show how to construct CI-Apx for constant-degree polynomials from any suitable TDH (with an enhanced correctness property that is satisfied by all existing TDH constructions).
Zvika Brakerski
and
Vinod Vaikuntanathan
Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE
ITCS 2022.
[Abstract] [PDF]
We propose a candidate Ciphertext-Policy Attribute Based Encryption (CP-ABE) scheme for circuits, with ciphertext size that depends only on the depth of the policy circuit (and not its size). This, in particular, gives us a Broadcast Encryption (BE) scheme where all parameters (in particular, the size of the keys and ciphertexts) have a poly-logarithmic dependence on the number of users, a task that was only known to be achievable assuming ideal multilinear maps or indistinguishability obfuscation. Our construction relies on techniques from lattice-based (and in particular LWE-based) cryptography, but we are unable to provide a security proof. We analyze some attempts at cryptanalysis.
Zvika Brakerski
and
Nico Döttling
Hardness of LWE on General Entropic Distributions
EUROCRYPT 2020.
[Abstract] [PDF]
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape. In many of its applications the so-called ``LWE secret'' is not sampled uniformly, but comes from a distribution with some min-entropy. This variant, known as ``Entropic LWE'', has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius.
In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly-composite moduli, and using black-box impossibility for arbitrary moduli.
Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself: the probability of recovering a random sample s from this distribution, given s+e, where e is Gaussian noise (i.e. the quality of the distribution of secrets as an error correcting code for Gaussian noise). We hope that this characterization will make it easier to derive entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
-
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value as black-box use of primitives tends to lead to more efficient constructions.
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed ``non-compact'' variant of 2-party secret sharing from 2-round OT.
Zvika Brakerski,
Nico Döttling,
Sanjam Garg
and
Giulio Malavolta
Leveraging Linear Decryption: Rate-1 Fully-Homomorphic Encryption and Time-Lock Puzzles
TCC 2019.
[Abstract] [PDF]
We show how to combine a fully-homomorphic encryption scheme with linear decryption and a linearly-homomorphic encryption scheme to obtain constructions with new properties. Specifically, we present the following new results.
- Rate-1 Fully-Homomorphic Encryption: We construct the first scheme with message-to-ciphertext length ratio (i.e., rate) 1-z for z = o(1). Our scheme is based on the hardness of the Learning with Errors (LWE) problem, and z is proportional to the noise-to-modulus ratio of the assumption. Our building block is a construction of a new high-rate linearly-homomorphic encryption scheme. One application of this result is the first general-purpose secure function evaluation protocol in the preprocessing model where the communication complexity is within an additive factor of the optimal insecure protocol.
- Fully-Homomorphic Time-Lock Puzzles: We construct the first time-lock puzzle where one can evaluate any function over a set of puzzles without solving them, from standard assumptions. Prior work required the existence of sub-exponentially hard indistinguishability obfuscation.
Zvika Brakerski
and
Omri Shmueli
(Pseudo) Random Quantum States with Binary Phase
TCC 2019.
[Abstract] [PDF]
We prove a quantum information-theoretic conjecture due to Ji, Liu and Song (CRYPTO 2018) which suggested that a uniform superposition with random binary phase is statistically indistinguishable from a Haar random state. That is, any polynomial number of copies of the aforementioned state is within exponentially small trace distance from the same number of copies of a Haar random state.
As a consequence, we get a provable elementary construction of pseudorandom quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction of quantum state t-designs for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for t>2), this observation shows that polylogarithmic depth suffices for all t.
We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires a quantum circuit of a restricted form: applying one layer of Hadamard gates, followed by a sequence of Toffoli gates. This structure may be useful for efficiency and simplicity of implementation.
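For intuition, the state family in question is easy to write down; a minimal numpy sketch (the lookup-table f below is an illustrative stand-in for the PRF or (2t)-wise independent function):

```python
import numpy as np

def binary_phase_state(n, f):
    # |psi_f> = 2^{-n/2} * sum_x (-1)^{f(x)} |x>: a uniform superposition
    # with binary phases, hence real-valued amplitudes.
    dim = 2**n
    return np.array([(-1.0) ** f(x) for x in range(dim)]) / np.sqrt(dim)

# A truly random f gives a state statistically close to Haar over
# polynomially many copies; a post-quantum PRF gives a PRS, and a
# (2t)-wise independent function gives a t-design.
table = np.random.default_rng().integers(0, 2, size=2**6)
psi = binary_phase_state(6, lambda x: int(table[x]))
assert np.isclose(np.linalg.norm(psi), 1.0)
```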
-
In (single-server) Private Information Retrieval (PIR), a server holds a large database DB of size n, and a client holds an index i and wishes to retrieve DB[i] without revealing i to the server. It is well known that information-theoretic privacy, even against an "honest but curious" server, requires linear communication complexity. This is true even if quantum communication is allowed, and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database ("input purification attack"). Nevertheless, there have been some proposals of protocols that achieve sub-linear communication and appear to provide some notion of privacy. Most notable are a protocol due to Le Gall (ToC 2012) with square-root communication complexity, and a protocol by Kerenidis et al. (QIC 2016) with logarithmic communication complexity and a linear amount of shared entanglement.
We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).
Benny Applebaum,
Zvika Brakerski
and
Rotem Tsabary
Degree 2 is Complete for the Round-Complexity of Malicious MPC
EUROCRYPT 2019.
[Abstract] [PDF]
We show, via a non-interactive reduction, that the existence of a secure multi-party computation (MPC) protocol for degree-2 functions implies the existence of a protocol with the same round complexity for general functions. This shows that, when considering the round complexity of MPC, it suffices to consider very simple functions.
Our completeness theorem applies in various settings: information theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blow-up which is polynomial in the number of players and in S, 2^D, where S and D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed.
As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases.
- 3-round perfectly-secure protocol (with guaranteed output delivery) against an active adversary that corrupts less than a quarter of the parties.
- 2-round statistically-secure protocol that achieves security with ``selective abort'' against an active adversary that corrupts less than half of the parties.
- Assuming one-way functions, 2-round computationally-secure protocol that achieves security with (standard) abort against an active adversary that corrupts less than half of the parties. This gives a new and conceptually simpler proof of the recent result of Ananth et al. (Crypto 2018).
Technically, our non-interactive reduction draws from the encoding method of Applebaum, Brakerski and Tsabary (TCC 2018). We extend these methods to ones that can be meaningfully analyzed even in the presence of malicious adversaries.
-
We show that any multi-party functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest. This settles the round complexity of information-theoretic semi-honest MPC, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000). The protocol is efficient for NC1 functionalities. Furthermore, given black-box access to a one-way function, the protocol can be made efficient for any polynomial functionality, at the cost of only guaranteeing computational security.
Technically, we extend and relax the notion of randomized encoding to specifically address multi-party functionalities. The property of a multi-party randomized encoding (MPRE) is that if the functionality g is an encoding of the functionality f, then for any (permitted) coalition of players, their respective outputs and inputs in g allow them to simulate their respective inputs and outputs in f, without learning anything else, including the other outputs of f.
Zvika Brakerski
and
Nico Döttling
Two-Message Statistically Sender-Private OT from LWE
TCC 2018.
[Abstract] [PDF]
We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness indistinguishable and zero knowledge proof systems and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate and thus allows to instantiate the aforementioned applications in a post-quantum secure manner.
Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
-
We propose a generalization of the celebrated Ring Learning with Errors (RLWE) problem (Lyubashevsky, Peikert and Regev, Eurocrypt 2010, Eurocrypt 2013), wherein the ambient ring is not the ring of integers of a number field, but rather an order (a full rank subring). We show that our Order-LWE problem enjoys worst-case hardness with respect to short-vector problems in invertible-ideal lattices of the order.
The definition allows us to provide a new analysis for the hardness of the abundantly used Polynomial-LWE (PLWE) problem (Stehlé et al., Asiacrypt 2009), different from the one recently proposed by Rosca, Stehlé and Wallet (Eurocrypt 2018).
This suggests that Order-LWE may be used to analyze and possibly design useful relaxations of RLWE.
We show that Order-LWE can naturally be harnessed to prove security for RLWE instances where the ``RLWE secret'' (which often corresponds to the secret-key of a cryptosystem) is not sampled uniformly as required for RLWE hardness. We start by showing worst-case hardness even if the secret is sampled from a subring of the sample space. Then, we study the case where the secret is sampled from an ideal of the sample space or a coset thereof (equivalently, some of its CRT coordinates are fixed or leaked). In the latter, we show an interesting threshold phenomenon where the amount of RLWE noise determines whether the problem is tractable.
Lastly, we address the long-standing question of whether a high-entropy secret is sufficient for RLWE to be intractable. Our result on sampling from ideals shows that simply requiring high entropy is insufficient. We therefore propose a broad class of distributions where we conjecture that hardness should hold, and provide evidence via a reduction to a concrete lattice problem.
Ben Berger and
Zvika Brakerski
Zero-Knowledge Protocols for Search Problems
ICALP 2018 (Brief Announcement), SCN 2018.
[Abstract] [PDF]
We consider natural ways to extend the notion of Zero-Knowledge (ZK) Proofs beyond decision problems. Specifically, we consider search problems, and define zero-knowledge proofs in this context as interactive protocols in which the prover can establish the correctness of a solution to a given instance without the verifier learning anything beyond the intended solution, even if it deviates from the protocol. The goal of this work is to initiate a study of Search Zero-Knowledge (search-ZK), the class of search problems for which such systems exist. This class trivially contains search problems where the validity of a solution can be efficiently verified (using a single message proof containing only the solution). A slightly less obvious, but still straightforward, way to obtain zero-knowledge proofs for search problems is to let the prover send a solution and prove in zero-knowledge that the instance-solution pair is valid. However, there may be other ways to obtain such zero-knowledge proofs, and they may be more advantageous. In fact, we prove that there are search problems for which the aforementioned approach fails, but still search zero-knowledge protocols exist. On the other hand, we show sufficient conditions for search problems under which some form of zero-knowledge can be obtained using the straightforward way.
Zvika Brakerski
and Yael Kalai
Witness Indistinguishability for any Single-Round Argument with Applications to Access Control
PKC 2020.
[Abstract] [PDF]
Consider an access policy for some resource which only allows access to users of the system who own a certain set of attributes. Specifically, we consider the case where such an access structure is defined by some monotone function f: {0,1}^N → {0,1}, belonging to some class of functions F (e.g. conjunctions, space-bounded computation), where N is the number of possible attributes.
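For concreteness, a toy member of such a class F (a conjunction policy; illustrative only):

```python
def conjunction_policy(required):
    # A monotone policy f: {0,1}^N -> {0,1} accepting exactly the users
    # holding every attribute in `required`.
    def f(attrs):
        return int(all(attrs[i] == 1 for i in required))
    return f

f = conjunction_policy({0, 3})
assert f([1, 0, 0, 1]) == 1 and f([1, 1, 1, 0]) == 0
```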
In this work we show that any succinct single-round delegation scheme for the function class F can be converted into a succinct single-round private access control protocol. That is, a verifier can be convinced that an approved user (i.e. one which holds an approved set of attributes) is accessing the system, without learning any additional information about the user or the set of attributes.
As a main tool of independent interest, we show that assuming a quasi-polynomially secure two-message oblivious transfer scheme with statistical sender privacy (which can be based on quasi-polynomial hardness of the DDH, QR, DCR or LWE assumptions), we can convert any single-round protocol into a witness indistinguishable one, with similar communication complexity.
Note: Prior versions of this report contained an additional result about a new batch delegation scheme for monotone formulae (see ePrint history for this report). We intend to publish these results in future work.
Zvika Brakerski
Quantum FHE (Almost) as Secure as Classical
CRYPTO 2018.
[Abstract] [PDF]
Fully homomorphic encryption schemes (FHE) make it possible to apply arbitrary efficient computation to encrypted data without decrypting it first. In Quantum FHE (QFHE) we may want to apply an arbitrary quantumly efficient computation to (classical or quantum) encrypted data.
We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with comparable properties to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded depth multi-key classical FHE.
Technically, we rely on the outline of Mahadev (arXiv 2017) which achieves this functionality by relying on super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.
-
We give a protocol for producing certifiable randomness from a single untrusted quantum device that is polynomial-time bounded. The randomness is certified to be statistically close to uniform from the point of view of any computationally unbounded quantum adversary, that may share entanglement with the quantum device. The protocol relies on the existence of post-quantum secure trapdoor claw-free functions, and introduces a new primitive for constraining the power of an untrusted quantum device. We then show how to construct this primitive based on the hardness of the learning with errors (LWE) problem.
The randomness protocol can also be used as the basis for an efficiently verifiable quantum supremacy proposal, thus answering an outstanding challenge in the field.
-
We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of 1/2-1/poly(n). Thus we can only show that "very hard" LPN is harder than some "very mildly hard" worst case problem.
Specifically, we consider the (n,m,w)-nearest codeword problem ((n,m,w)-NCP) which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error w/m ≈ log^2(n)/n, (n,m,w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio 1/2-1/poly(n).
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that (n,m,w)-NCP with the aforementioned parameters lies in the complexity class Search-BPP^SZK (i.e. reducible to a problem that has a statistical zero knowledge protocol) implying that it is unlikely to be NP-hard. We then show that LPN with very low noise rate log^2(n)/n implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in BPP^SZK).
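For intuition, here is a brute-force solver for toy (n,m,w)-NCP instances (our illustration; it enumerates all 2^n codewords and therefore says nothing about the hard parameter regimes in the reduction):
    import itertools, random

    def nearest_codeword(G, t):
        """G: n x m binary generator matrix (rows span the code); t: length-m target.
        Returns the codeword closest to t in Hamming distance, and that distance."""
        n, m = len(G), len(G[0])
        best, best_d = None, m + 1
        for x in itertools.product([0, 1], repeat=n):   # exponential in n: toy sizes only
            c = [sum(x[i] * G[i][j] for i in range(n)) % 2 for j in range(m)]
            d = sum(ci != ti for ci, ti in zip(c, t))
            if d < best_d:
                best, best_d = c, d
        return best, best_d

    random.seed(1)
    n, m, w = 4, 12, 2
    G = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]
    x = [random.randint(0, 1) for _ in range(n)]
    t = [sum(x[i] * G[i][j] for i in range(n)) % 2 for j in range(m)]
    for j in random.sample(range(m), w):                # plant a target at distance w
        t[j] ^= 1
    print(nearest_codeword(G, t))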
Zvika Brakerski,
Alex Lombardi,
Gil Segev
and Vinod Vaikuntanathan
Anonymous IBE, Leakage Resilience and Circular Security from New Assumptions
EUROCRYPT 2018.
[Abstract] [PDF]
In anonymous identity-based encryption (IBE), ciphertexts not only hide their corresponding messages, but also their target identity. We construct an anonymous IBE scheme based on the Computational Diffie-Hellman (CDH) assumption in general groups (and thus, as a special case, based on the hardness of factoring Blum integers).
Our approach extends and refines the recent tree-based approach of Cho et al. (CRYPTO 17) and Döttling and Garg (CRYPTO 17). Whereas the tools underlying their approach do not seem to provide any form of anonymity, we introduce two new building blocks which we utilize for achieving anonymity: blind garbled circuits (which we construct based on any one-way function), and blind batch encryption (which we construct based on CDH).
We then further demonstrate the applicability of our newly-developed tools by showing that batch encryption implies a public-key encryption scheme that is both resilient to leakage of a (1-o(1))-fraction of its secret key, and KDM secure (or circular secure) with respect to all linear functions of its secret key (which, in turn, is known to imply KDM security for bounded-size circuits). These yield the first high-rate leakage-resilient encryption scheme and the first KDM-secure encryption scheme based on the CDH or Factoring assumptions.
Finally, relying on our techniques we also construct a batch encryption scheme based on the hardness of the Learning Parity with Noise (LPN) problem, albeit with very small noise rate Ω(log^2(n)/n). Although this batch encryption scheme is not blind, we show that it still implies standard (i.e., non-anonymous) IBE, leakage resilience and KDM security. IBE and high-rate leakage resilience were not previously known from LPN, even with extremely low noise.
-
The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and Van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.
Zvika Brakerski,
Yael Kalai and
Renen Perlman
Succinct Spooky Free Compilers Are Not Black Box Sound
ASIACRYPT 2017.
[Abstract] [PDF]
It is tempting to think that if we encrypt a sequence of messages {xi} using a semantically secure encryption scheme, such that each xi is encrypted with its own independently generated public key pki, then even if the scheme is malleable (or homomorphic), malleability is limited to acting on each xi independently. However, it is known that this is not the case, and in fact even non-local malleability might be possible. This phenomenon is known as spooky interactions.
We formally define the notion of spooky free compilers, which has been implicit in the delegation of computation literature. A spooky free compiler allows encoding a sequence of queries to a multi-prover interactive proof system (MIP) in a way that, on the one hand, allows applying the MIP prover algorithm to the encoded values and, on the other hand, prevents spooky interactions. In our definition, the compiler is allowed to be tailored to a specific MIP and does not need to support any other operation.
We show that (under a plausible complexity assumption) spooky free compilers that are sufficiently succinct to imply delegation schemes for NP with communication n^a (for any constant a<1) cannot be proven secure via black-box reduction to a falsifiable assumption. On the other hand, we show that it is possible to construct non-succinct spooky free fully homomorphic encryption, the strongest conceivable flavor of spooky free compiler, in a straightforward way from any fully homomorphic encryption scheme.
Our impossibility result relies on adapting the techniques of Gentry and Wichs (2011) which rule out succinct adaptively sound delegation protocols. We note that spooky free compilers are only known to imply non-adaptive delegation, so the aforementioned result cannot be applied directly. Interestingly, we are still unable to show that spooky free compilers imply adaptive delegation, nor can we apply our techniques directly to rule out arbitrary non-adaptive NP-delegation.
-
A witness encryption (WE) scheme can take any NP statement as a public-key and use it to encrypt a message. If the statement is true then it is possible to decrypt the message given a corresponding witness, but if the statement is false then the message is computationally hidden. Ideally, the encryption procedure should run in polynomial time, but it is also meaningful to define a weaker notion, which we call non-trivially exponentially efficient WE (XWE), where the encryption run-time is only required to be much smaller than the trivial 2^m bound for NP relations with witness size m. We show how to construct such XWE schemes for all of NP with encryption run-time 2^(m/2) under the sub-exponential learning with errors (LWE) assumption. For NP relations that can be verified in NC1 (e.g., SAT) we can also construct such XWE schemes under the sub-exponential Decisional Bilinear Diffie-Hellman (DBDH) assumption. Although we find the result surprising, it follows via a very simple connection to attribute-based encryption.
We also show how to upgrade the above results to get non-trivially exponentially efficient indistinguishability obfuscation for null circuits (niO), which guarantees that the obfuscations of any two circuits that always output 0 are indistinguishable. In particular, under the LWE assumption we get an XniO scheme where the obfuscation time is 2^(n/2) for all circuits with input size n. It is known that in the case of indistinguishability obfuscation (iO) for all circuits, non-trivially efficient XiO schemes imply fully efficient iO schemes (Lin et al., PKC '16), but it remains a fascinating open problem whether any such connection exists for WE or niO.
Lastly, we explore a potential approach toward constructing fully efficient WE and niO schemes via multi-input ABE.
-
In a constrained PRF, the owner of the PRF key K can generate constrained keys Kf that allow anyone to evaluate the PRF on inputs x that satisfy the predicate f (namely, where f(x) is ``true'') but reveal no information about the PRF evaluation on the other inputs. A private constrained PRF goes further by requiring that the constrained key Kf hides the predicate f.
Boneh, Kim and Montgomery (EUROCRYPT 2017) presented a construction of private constrained PRF for point function constraints, and Canetti and Chen (EUROCRYPT 2017) presented a completely different construction for NC1 constraints. In this work, we show two constructions of LWE-based constraint-hiding constrained PRFs for general predicates described by polynomial-size circuits.
The two constructions are based on two distinct techniques that we show have further applicability by constructing weak attribute-hiding predicate encryption schemes. In a nutshell, the first construction imports the technique of modulus switching from the FHE world into the domain of trapdoor extension and homomorphism. The second construction shows how to use the duality between FHE secret-key/randomness and ABE randomness/secret-key to construct a scheme with dual use of the same values for both FHE and ABE purposes.
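For readers new to the notion, the constrained-PRF syntax can be illustrated with the classical GGM tree, which supports prefix predicates (our toy sketch; it is neither of the paper's LWE constructions and, unlike them, does not hide the constraint):
    import hashlib

    def prg(seed):                    # length-doubling PRG from SHA-256 (illustrative)
        return hashlib.sha256(seed + b'0').digest(), hashlib.sha256(seed + b'1').digest()

    def eval_prf(seed, bits):         # GGM: walk the binary tree, one PRG call per input bit
        for b in bits:
            seed = prg(seed)[b]
        return seed

    def constrain(key, prefix):       # constrained key for the set {x : x starts with prefix}
        return eval_prf(key, prefix)  # = the seed at the tree node addressed by prefix

    key = b'master-key-seed'
    ck = constrain(key, [1, 0])       # allows evaluating on inputs of the form 10...
    x = [1, 0, 1, 1]
    assert eval_prf(key, x) == eval_prf(ck, x[2:])   # agreement on all permitted inputs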
-
We construct a 4-round multi-party computation protocol in the plain model for any functionality, secure against a malicious adversary. Our protocol relies on the sub-exponential hardness of the Learning with Errors (LWE) problem with slightly super-polynomial noise ratio, and on the existence of adaptively secure commitments. Our round complexity matches a lower bound of Garg et al. (EUROCRYPT '16), and outperforms the state of the art: 6 rounds based on assumptions similar to ours, and 5 rounds relying on indistinguishability obfuscation and other strong assumptions.
To do this, we construct an LWE based multi-key FHE scheme with a very simple one-round \emph{distributed setup procedure} (vs. the trusted setup required in previous LWE based constructions). This lets us construct a 3-round semi-malicious protocol without setup using the approach of Mukherjee and Wichs (EUROCRYPT '16). Finally, subexponential hardness and adaptive commitments are used to "compile" the protocol into the fully malicious setting.
Boaz Barak,
Zvika Brakerski, Ilan Komargodski and
Pravesh Kothari
Limits on Low-Degree Pseudorandom Generators (Or: Sum-of-Squares Meets Program Obfuscation)
EUROCRYPT 2018.
[Abstract] [PDF]
We prove that for every function G:{+1,-1}^n -> R^m, if every output of G is a polynomial (over R) of degree at most d with at most s monomials and m > Õ(s·n^⌈d/2⌉), then there is a polynomial time algorithm that can distinguish a vector of the form z=G(x) from a vector z where each coordinate is sampled independently according to the marginal distributions of the coordinates of G (assuming the latter satisfy some non-degeneracy condition).
In particular, if G:{+1,-1}^n -> {+1,-1}^m and m is as above, then G cannot be a pseudorandom generator.
Our algorithm is based on semidefinite programming and in particular the sum of squares (SOS) hierarchy.
As a corollary, we refute some conjectures recently made in the cryptographic literature. This includes refuting the assumptions underlying Lin and Tessaro's (2017) recently proposed candidate construction for indistinguishability obfuscation from bilinear maps, by showing that any block-wise 2-local PRG with block size b cannot go beyond m ≈ 2^(2b)·n. We give an even stronger bound of m ≈ 2^b·n on the output length of random block-wise 2-local PRGs. We also show that a generalized notion of generators runs into similar barriers.
We complement our algorithm by presenting a class of candidate generators with block-wise locality 3 and constant block size, that resists both Gaussian elimination and SOS algorithms whenever m = n^(1.5-ε). This class is extremely easy to describe: Let G be any simple non-abelian group, and interpret the blocks of x as elements in G, then each output of the generator is of the form xi * xj * xk, where i,j,k are random and "*" is the group operation.
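This candidate is concrete enough to sketch in code; below is our toy rendition with G = A5, the smallest simple non-abelian group, and hypothetical sizes n, m:
    import random

    def compose(p, q):                # permutation composition: (p*q)(i) = p(q(i))
        return tuple(p[q[i]] for i in range(len(p)))

    def random_a5():                  # uniform even permutation of 5 points
        while True:
            p = random.sample(range(5), 5)
            if sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5)) % 2 == 0:
                return tuple(p)

    random.seed(0)
    n, m = 64, 128                    # hypothetical sizes; the attacks above kick in near m ~ n^1.5
    triples = [tuple(random.sample(range(n), 3)) for _ in range(m)]  # public index choices

    def generator(x):                 # x: n seed blocks, each an element of A5
        return [compose(compose(x[i], x[j]), x[k]) for (i, j, k) in triples]

    seed = [random_a5() for _ in range(n)]
    print(generator(seed)[0])         # one output block: x_i * x_j * x_k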
Daniel Benarroch,
Zvika Brakerski and
Tancrède Lepoint
FHE Over the Integers: Decomposed and Batched in the Post-Quantum Regime
PKC 2017.
[Abstract] [PDF]
Fully homomorphic encryption over the integers (FHE-OI) is currently the only alternative to lattice-based FHE. FHE-OI includes a family of schemes whose security is based on the hardness of different variants of the approximate greatest common divisor (AGCD) problem. The majority of these works are based on the noise-free variant of AGCD, which is potentially weaker than the general one. In particular, the noise-free variant relies on the hardness of factoring and is thus vulnerable to quantum attacks.
A lot of effort was made to port techniques from second generation lattice-based FHE (using tensoring) to FHE-OI. Gentry, Sahai and Waters (Crypto 13) showed that third generation techniques (which were later formalized using the ``gadget matrix'') can also be ported. In this work, we propose a comprehensive study of applying third generation FHE techniques to the regime of FHE-OI. We present and analyze a third generation FHE-OI scheme based on decisional AGCD without the noise-free assumption. We proceed to show a batch version of our scheme where each ciphertext can encode a vector of messages and operations are performed coordinate-wise. We use an AGCD variant similar to that of Cheon et al. (Eurocrypt 13), who suggested the batch approach for second generation FHE, but we do not require the noise-free component or a subset sum assumption. However, like Cheon et al., we do require circular security for our scheme, even for bounded homomorphism. Lastly, we discuss some of the obstacles towards efficient implementation of our schemes and consider a number of possible optimizations.
Zvika Brakerski,
Justin Holmgren and
Yael Kalai
Non-Interactive RAM and Batch NP Delegation from any PIR
STOC 2017.
[Abstract] [PDF]
We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g. the worst case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the prover and the verifier do not need to interact at all: The verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements in a completely adaptive manner. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions.
We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message delegation protocols for batch NP statements. Specifically, we can simultaneously prove the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.
-
In (key-policy) attribute based encryption (ABE), messages are encrypted respective to attributes x, and keys are generated respective to policy functions f. The ciphertext is decryptable by a key only if f(x)=0. Adding homomorphic capabilities to ABE is a long standing open problem, with current techniques only allowing compact homomorphic evaluation on ciphertexts respective to the same x. Recent advances in the study of multi-key FHE also allow cross-attribute homomorphism with ciphertext size growing (quadratically) with the number of input ciphertexts.
We present an ABE scheme where homomorphic operations can be performed compactly across attributes. Of course, decrypting the resulting ciphertext needs to be done with a key respective to a policy f with f(xi)=0 for all attributes involved in the computation. In our scheme, the target policy f needs to be known to the evaluator; we call this targeted homomorphism. Our scheme is secure under the polynomial hardness of learning with errors (LWE) with sub-exponential modulus-to-noise ratio.
We present a second scheme where there need not be a single target policy. Instead, the decryptor only needs a set of keys representing policies fj s.t. for any attribute xi there exists fj with fj(xi)=0. In this scheme, the ciphertext size grows (quadratically) with the size of the set of policies (and is still independent of the number of inputs or attributes). Again, the target set of policies needs to be known at evaluation time. This latter scheme is secure in the random oracle model under the polynomial hardness of LWE with sub-exponential noise ratio.
Zvika Brakerski and
Or Dagmi
Shorter Circuit Obfuscation in Challenging Security Models
SCN 2016.
[Abstract] [PDF]
The study of program obfuscation is seeing great progress in recent years, which is crucially attributed to the introduction of graded encoding schemes by Garg, Gentry and Halevi (Eurocrypt 2013). In such schemes, elements of a ring can be encoded such that the content of the encoding is hidden, but restricted algebraic manipulations, followed by zero-testing, can be performed publicly. This primitive currently underlies all known constructions of general-purpose obfuscators.
However, the security properties of the current candidate graded encoding schemes are not well understood, and new attacks are frequently introduced. It is therefore important to assume as little as possible about the security of the graded encoding scheme, and to use as conservative security models as possible. This often comes at the cost of reducing the efficiency or the functionality of the obfuscator.
In this work, we present a candidate obfuscator, based on composite-order graded encoding schemes, which obfuscates circuits directly a la Zimmerman (Eurocrypt 2015) and Applebaum-Brakerski (TCC 2015). Our construction requires a graded encoding scheme with only 3 ``plaintext slots'' (= sub-rings of the underlying ring), which is directly related to the size and complexity of the obfuscated program. We prove that our obfuscator is superior to previous works in two different security models.
1. We prove that our obfuscator is indistinguishability-secure (iO) in the Unique Representation Generic Graded Encoding model. Previous works either required a composite-order scheme with polynomially many slots, or were provable in a milder security model. This immediately translates to a polynomial improvement in efficiency, and shows that improved security does not come at the cost of efficiency in this case.
2. Following Badrinarayanan et al. (Eurocrypt 2016), we consider a model where finding any ``non-trivial'' encoding of zero breaks the security of the encoding scheme. We show that, perhaps surprisingly, secure obfuscation is possible in this model even for some classes of non-evasive functions (for example, any class of conjunctions). We define the property required of the function class, formulate an appropriate (generic) security model, and prove that our aforementioned obfuscator is virtual-black-box (VBB) secure in this model.
Zvika Brakerski and
Renen Perlman
Lattice-Based Fully Dynamic Multi-Key FHE with Short Ciphertexts
CRYPTO 2016.
[Abstract] [PDF]
We present a multi-key fully homomorphic encryption scheme that supports an unbounded number of homomorphic operations for an unbounded number of parties. Namely, it allows performing arbitrarily many computational steps on inputs encrypted by an a-priori unbounded (polynomial) number of parties. Inputs from new parties can be introduced into the computation dynamically, so the final set of parties need not be known ahead of time. Furthermore, the length of the ciphertexts, as well as the space complexity of an atomic homomorphic operation, grow only linearly with the current number of parties.
Prior works either supported only an a-priori bounded number of parties (Lopez-Alt, Tromer and Vaikuntanathan, STOC 12), or only supported single-hop evaluation where all inputs need to be known before the computation starts (Clear and McGoldrick, Crypto 15, Mukherjee and Wichs, Eurocrypt 16). In all aforementioned works, the ciphertext length grew at least quadratically with the number of parties.
Technically, our starting point is the LWE-based approach of previous works. Our result is achieved via a careful use of Gentry's bootstrapping technique, tailored to the specific scheme. Our hardness assumption is that the scheme of Mukherjee and Wichs is circular secure (and thus bootstrappable). A leveled scheme can be achieved under standard LWE.
Zvika Brakerski,
Chris Brzuska and
Nils Fleischhacker
On Statistically Secure Obfuscation with Approximate Correctness
CRYPTO 2016.
[Abstract] [PDF]
Goldwasser and Rothblum (TCC 07) prove that statistical indistinguishability obfuscation (iO) cannot exist if the obfuscator must maintain perfect correctness (under a widely believed complexity theoretic assumption: NP not in SZK). However, for many applications of iO, such as constructing public-key encryption from one-way functions (one of the main open problems in theoretical cryptography), approximate correctness is sufficient. It had been unknown thus far whether statistical approximate iO (saiO) can exist.
We show that saiO does not exist, even for a minimal correctness requirement, if NP is not in the intersection of AM and coAM, and if one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a ``baseline'' for the obfuscated program.
We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible, there is a small fraction of parameters that still allows building public-key encryption from one-way functions and thus deserves further investigation.
-
The notion of Zero Knowledge has driven the field of cryptography since its conception over thirty years ago. It is well established that two-message zero-knowledge protocols for NP do not exist, and that four-message zero-knowledge arguments exist under the minimal assumption of one-way functions. Resolving the precise round complexity of zero-knowledge has been an outstanding open problem for far too long.
In this work, we present a three-message zero-knowledge argument system with soundness against uniform polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for RAM computations (Kalai and Paneth, TCC 2016B and Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely on a three-message variant of their protocol based on key-less collision-resistant hash functions secure against uniform adversaries, as well as other standard primitives.
More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee against real-world adversaries, which we formalize following Rogaway's ``human-ignorance'' approach (VIETCRYPT 2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our protocol to finding collisions in the underlying hash function.
Zvika Brakerski and Vinod Vaikuntanathan
Circuit-ABE from LWE: Unbounded Attributes and Semi-Adaptive Security
CRYPTO 2016.
[Abstract] [PDF]
We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth d circuit can have as many as 2^d inputs). This is in contrast to previous LWE-based schemes where the length of the public parameters has to grow linearly with the maximal attribute length.
We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the ``complexity leveraging'' technique is not applicable for unbounded attributes.)
We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.
-
We show how to securely obfuscate conjunctions, which are functions f(x1, ... , xn) = ∧i yi where each literal yi is either xi or !xi, e.g., f(x1, ... , xn) = x1 ∧ !x3 ∧ !x7 ∧ ... ∧ xn-1. Whereas prior work of Brakerski and Rothblum (CRYPTO 2013) showed how to achieve this using a non-standard object called cryptographic multilinear maps, our scheme is based on an ``entropic'' variant of the Ring Learning with Errors (Ring LWE) assumption. As our core tool, we prove that hardness assumptions on the recent multilinear map construction of Gentry, Gorbunov and Halevi (TCC 2015) can be established based on entropic Ring LWE. We view this as a first step towards proving the security of additional multilinear map based constructions, and in particular program obfuscators, under standard assumptions.
Our scheme satisfies virtual black box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
Zvika Brakerski and Gil Segev
Hierarchical Functional Encryption
ITCS 2017.
[Abstract] [PDF]
Functional encryption provides fine-grained access control for encrypted data, allowing each user to learn only specific functions of the encrypted data. We study the notion of hierarchical functional encryption, which augments functional encryption with delegation capabilities, offering significantly more expressive access control.
We present a generic transformation that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing (somewhat surprisingly) that the existence of functional encryption is equivalent to that of its hierarchical generalization.
Instantiating our transformation with the existing functional encryption schemes yields a variety of hierarchical schemes offering various trade-offs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support arbitrary hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.
Zvika Brakerski, Ilan Komargodski and Gil Segev
From Single-Input to Multi-Input Functional Encryption in the Private-Key Setting
EUROCRYPT 2016.
[Abstract] [PDF]
We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of x1, ..., xt to compute f(x1, ..., xt) but nothing else. Our construction assumes any general-purpose private-key single-input scheme (without any additional assumptions), and is proven to be adaptively-secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure.
Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security and efficiency.
Previous constructions of multi-input functional encryption schemes either relied on somewhat stronger assumptions and provided weaker security guarantees (Goldwasser et al., EUROCRYPT '14), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al., EUROCRYPT '15).
Independent and concurrent work by Ananth and Jain (ePrint '15) shows (among other contributions) how to construct a selectively-secure multi-input private-key functional encryption scheme based on any public-key functional encryption scheme. In this context, our result both relies on a weaker assumption and guarantees stronger security.
Zvika Brakerski and Vinod Vaikuntanathan
Constrained Key-Homomorphic PRFs from Standard Lattice Assumptions
(or: How to Secretly Embed a Circuit in Your PRF)
TCC 2015.
[Abstract] [PDF]
Boneh et al. (Crypto 13) and Banerjee and Peikert (Crypto 14) constructed pseudorandom functions (PRFs) from the Learning with Errors (LWE) assumption by embedding combinatorial objects, a path and a tree respectively, in instances of the LWE problem. In this work, we show how to generalize this approach to embed circuits, inspired by recent progress in the study of Attribute Based Encryption.
Embedding a universal circuit for some class of functions allows us to produce constrained keys for functions in this class, which gives us the first standard-lattice-assumption-based constrained PRF (CPRF) for general bounded-description bounded-depth functions, for arbitrary polynomial bounds on the description size and the depth. (A constrained key w.r.t. a circuit C enables one to evaluate the PRF on all x for which C(x)=1, but reveals nothing about the PRF values at other points.) We rely on the LWE assumption and on the one-dimensional SIS (Short Integer Solution) assumption, which are both related to the worst case hardness of general lattice problems. Previous constructions for similar function classes relied on such exotic assumptions as the existence of multilinear maps or secure program obfuscation. The main drawback of our construction is that it does not allow collusion (i.e. providing more than a single constrained key to an adversary). Like the aforementioned previous works, our PRF family is key homomorphic.
Interestingly, our constrained keys are very short. Their length does not depend directly either on the size of the constraint circuit or on the input length. We are not aware of any prior construction achieving this property, even relying on strong assumptions such as indistinguishability obfuscation.
Benny Applebaum and Zvika Brakerski
Obfuscating Circuits via Composite-Order Graded Encoding
TCC 2015.
[Abstract] [PDF]
We present a candidate obfuscator based on composite-order Graded Encoding Schemes (GES), which are a generalization of multilinear maps. Our obfuscator operates on circuits directly without converting them into formulas or branching programs as was done in previous solutions. As a result, the time and size complexity of the obfuscated program, measured by the number of GES elements, is directly proportional to the circuit complexity of the program being obfuscated. This improves upon previous constructions whose complexity was related to the formula or branching program size. Known instantiations of Graded Encoding Schemes allow us to obfuscate circuit classes of polynomial degree, which include for example families of circuits of logarithmic depth.
We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated by a generic graded encoding model. We further consider a more robust model which provides more power to the adversary and extend our results to this setting as well.
As a secondary contribution, we define a new simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.
-
In a functional encryption (FE) scheme, the owner of the secret key can generate restricted decryption keys that allow users to learn specific functions of the encrypted messages and nothing else. In many known constructions of FE schemes, such a notion of security is guaranteed only for messages that are fixed ahead of time (i.e., before the adversary even interacts with the system). This is called selective security, which is too restrictive for many realistic applications. Achieving adaptive security (also called full security), where security is guaranteed even for messages that are adaptively chosen at any point in time, seems significantly more challenging. The handful of known fully-secure schemes are based on specifically tailored techniques that rely on strong assumptions (such as obfuscation assumptions or multilinear maps assumptions).
In this paper we show that any sufficiently expressive selectively-secure FE scheme can be transformed into a fully secure one without introducing any additional assumptions. We present a direct black-box transformation, making novel use of hybrid encryption, a classical technique that was originally introduced for improving the efficiency of encryption schemes, combined with a new technique we call the Trojan Method. This method allows embedding a secret execution thread in the functional keys of the underlying scheme, which will only be activated within the proof of security of the resulting scheme. As another application of the Trojan Method, we show how to construct functional encryption schemes for arbitrary circuits starting from ones for shallow circuits (NC1 or even TC0).
Zvika Brakerski and Gil Segev
Function-Private Functional Encryption in the Private-Key Setting
TCC 2015.
[Abstract] [PDF]
Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Whereas the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided.
Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages m1, ..., mT together with decryption keys corresponding to functions f1, ..., fT, reveal essentially no information other than the values { fi(mj)}i,j=1,...,T. Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme, and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based either on obfuscation assumptions, on the Learning with Errors assumption, or even on general public-key encryption (offering various trade-offs between security and efficiency).
Zvika Brakerski and Guy Rothblum
Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding
TCC 2014.
[Abstract] [PDF]
We present a new general-purpose obfuscator for all polynomial-size circuits. The obfuscator uses graded encoding schemes, a generalization of multilinear maps. We prove that the obfuscator exposes no more information than the program's black-box functionality, and achieves virtual black-box security, in the generic graded encoding scheme model. This proof is under a plausible worst-case complexity-theoretic assumption related to the Exponential Time Hypothesis, in addition to standard cryptographic assumptions.
Zvika Brakerski and Guy Rothblum
Black-Box Obfuscation for d-CNFs
ITCS 2014.
[Abstract] [PDF]
We show how to securely obfuscate a new class of functions: conjunctions of NC0_d circuits. These are functions of the form C(x) = ∧i Ci(x), where each Ci is a boolean NC0_d circuit, whose output bit is only a function of d = O(1) bits of the input x. For example, d-CNFs, where each clause is a disjunction of at most d variables, are in this class. Given such a function, we produce an obfuscated program that preserves the input-output functionality of the given function, but reveals nothing else. Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013).
We prove that the construction is a secure obfuscation in a generic multilinear group model, under the black-box definition of Barak et al. (CRYPTO 2001). Security is based on a new worst-case hardness assumption about exponential hardness of the NP-complete problem 3-SAT, the Bounded Speedup Hypothesis.
One of the new techniques we introduce is a method for enforcing input consistency, which we call randomizing sub-assignments. We hope that this technique can find further application in constructing secure obfuscators.
Zvika Brakerski and Vinod Vaikuntanathan
Lattice-Based FHE as Secure as PKE
ITCS 2014.
[Abstract] [PDF]
We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of Õ(n^(1.5+ε))-approximation for lattice problems (such as GapSVP) under quantum reductions for any ε>0 (or Õ(n^(2+ε))-approximation under classical reductions). This matches the best known hardness for ``regular'' (non-homomorphic) lattice based public-key encryption up to the ε term. A number of previous methods had hit a roadblock at quasipolynomial approximation. (As usual, a circular security assumption can be used to achieve a non-leveled FHE scheme.)
Our approach consists of three main ideas: Noise-bounded sequential evaluation of high fan-in operations; Circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
Zvika Brakerski and Guy Rothblum
Obfuscating Conjunctions
CRYPTO 2013.
[Abstract] [PDF]
We show how to securely obfuscate the class of conjunction functions (functions like f(x1, ... , xn) = x1 ∧ !x4 ∧ !x6 ∧ ... ∧ xn-2). Given any function in the class, we produce an obfuscated program which preserves the input-output functionality of the given function, but reveals nothing else.
Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.
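In code, the function class itself is tiny (our hypothetical encoding: 1 for a positive literal, 0 for a negated one, None for an absent variable); the obfuscator's job is to permit evaluation while hiding the pattern, which this plain evaluator of course does not:
    # Plain (unobfuscated) evaluator for the conjunction class -- illustration only.
    def make_conjunction(pattern):    # pattern[i]: 1 -> x_i, 0 -> !x_i, None -> unused
        def f(x):
            return all(b is None or xi == b for xi, b in zip(x, pattern))
        return f

    f = make_conjunction([1, None, None, 0, None, 0])   # f(x) = x1 AND !x4 AND !x6
    print(f([1, 0, 1, 0, 1, 0]))      # True
    print(f([1, 0, 1, 1, 1, 0]))      # False: x4 = 1 violates !x4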
-
We show that the Learning with Errors (LWE) problem is classically at least as hard as standard worst-case lattice problems, even with polynomial modulus. Previously this was only known under quantum reductions. Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.
Zvika Brakerski and Moni Naor
Fast Algorithms for Interactive Coding
SODA 2013.
[Abstract] [PDF]
Consider two parties who wish to communicate in order to execute some interactive protocol $\pi$. However, the communication channel between them is noisy: An adversary sees everything that is transmitted over the channel and can change a constant fraction of the bits as he pleases, thus interrupting the execution of $\pi$ (which was designed for an errorless channel). If $\pi$ only contains a single long message, then a good error correcting code would overcome the noise with only a constant overhead in communication. However, this solution is not applicable to interactive protocols consisting of many short messages.
Schulman (FOCS 92, STOC 93) presented the notion of interactive coding: A simulator that, given any protocol $\pi$, is able to simulate it (i.e.\ produce its intended transcript) even with constant rate adversarial channel errors, and with only constant (multiplicative) communication overhead. Until recently, however, the running time of all known simulators was exponential (or sub-exponential) in the communication complexity of $\pi$ (denoted $N$ in this work). Brakerski and Kalai (FOCS 12) recently presented a simulator that runs in time $\poly(N)$. Their simulator is randomized (each party flips private coins) and has failure probability roughly $2^{-N}$.
In this work, we improve the computational complexity of interactive coding. While at least $N$ computational steps are required (even just to output the transcript of $\pi$), the BK simulator runs in time $\tilde\Omega(N^2)$.
We present two efficient algorithms for interactive coding: The first with computational complexity $O(N \log N)$ and exponentially small (in $N$) failure probability; and the second with computational complexity $O(N)$, but failure probability $1/\poly(N)$. (Computational complexity is measured in the RAM model.)
Zvika Brakerski, Craig Gentry and Shai Halevi
Packed Ciphertexts in LWE-based Homomorphic Encryption
PKC 2013.
[Abstract] [PDF]
We observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertexts. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of ``General-LWE'').
Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general-LWE'' schemes, suggesting yet another tradeoff that can be optimized for different settings.
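The following toy numpy sketch (ours; insecure parameters, and only homomorphic addition shown) conveys the packing idea: one secret matrix decrypts k plaintext slots from a single Regev-type ciphertext, and adding ciphertexts acts on all slots simultaneously.
    import numpy as np

    q, n, k = 2**15, 32, 8                  # toy parameters -- insecure, illustration only
    rng = np.random.default_rng(0)
    S = rng.integers(0, q, (k, n))          # secret matrix: one row per plaintext slot

    def encrypt(m):                         # m: length-k 0/1 vector packed in one ciphertext
        a = rng.integers(0, q, n)
        e = rng.integers(-4, 5, k)          # small per-slot noise
        return a, (S @ a + e + (q // 2) * m) % q

    def decrypt(ct):                        # round each slot to the nearest multiple of q/2
        a, b = ct
        return (((b - S @ a) % q + q // 4) // (q // 2)) % 2

    m1, m2 = rng.integers(0, 2, k), rng.integers(0, 2, k)
    a1, b1 = encrypt(m1); a2, b2 = encrypt(m2)
    ct_sum = ((a1 + a2) % q, (b1 + b2) % q)     # one addition = k slot-wise XORs
    assert (decrypt(ct_sum) == (m1 ^ m2)).all()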
Zvika Brakerski
When Homomorphism Becomes a Liability
TCC 2013.
[Abstract] [PDF]
We show that an encryption scheme cannot have a simple decryption circuit and be homomorphic at the same time. Specifically, if a scheme can homomorphically evaluate the majority function, then its decryption circuit cannot be a linear function of the secret key (or even a succinct polynomial), even if decryption error is allowed.
An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of noise with low Hamming weight cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public key encryption.
An additional corollary is that the recent candidate fully homomorphic encryption, suggested by Bogdanov and Lee (ePrint '11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: One by applying the aforementioned general statement, and another by directly attacking one of the components of the scheme.
Zvika Brakerski and Yael Tauman Kalai
Efficient Interactive Coding Against Adversarial Noise
FOCS 2012.
[Abstract] [PDF]
In this work, we study the fundamental problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity ($\cc$). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(\cc)}$.
Previous solutions are either inefficient or are resilient only to stochastic noise.
Zvika Brakerski
Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP
CRYPTO 2012.
[Abstract] [PDF]
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically ($B \to B^2\cdot\poly(n)$) with every multiplication (before ``refreshing''), our noise only grows linearly ($B \to B\cdot\poly(n)$).
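As a back-of-the-envelope comparison (our arithmetic, not the paper's text): starting from fresh noise $B$ and evaluating a depth-$L$ multiplication tree, quadratic growth yields noise roughly $B^{2^L}\cdot\poly(n)^{2^L-1}$, so correctness forces $\log q = \Omega(2^L \log B)$; linear growth yields only $B\cdot\poly(n)^L$, so $\log q = O(\log B + L\log n)$ suffices.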
We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values.
Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$ which carries obvious advantages for implementation. In addition, security can be classically reduced to the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
Zvika Brakerski, Craig Gentry and Vinod Vaikuntanathan
(Leveled) Fully Homomorphic Encryption without Bootstrapping
ITCS 2012.
[Abstract] [PDF]
We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions.
A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure.
Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE (RLWE) problems that have $2^k$ security against known attacks. For RLWE, we have:
- A leveled FHE scheme that can evaluate $L$-level arithmetic circuits with $\tilde{O}(k \cdot L^3)$ per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in $L$. This construction does not use the bootstrapping procedure.
- A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is $\tilde{O}(k^2)$, independent of $L$. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes).
We obtain similar results to the above for LWE, but with worse performance.
Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least $k$ -- we can reduce the per-gate computation of the bootstrapped version to $\tilde{O}(k)$, independent of $L$, by batching the bootstrapping operation.
Previous FHE schemes all required $\tilde{\Omega}(k^{3.5})$ computation per gate.
At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).
Zvika Brakerski and Vinod Vaikuntanathan
Efficient Fully Homomorphic Encryption from (Standard) LWE
FOCS 2011.
[Abstract] [PDF]
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of ``short vector problems'' on arbitrary lattices. Additionally, relying on LWE makes our scheme very natural to understand (and implement).
Our construction improves on previous works in two aspects:
- We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new re-linearization technique (a schematic sketch follows this list). In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.
- We deviate from the ``squashing paradigm'' used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, {\em without introducing additional assumptions}. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption).
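As a schematic of the re-linearization idea mentioned above (our simplified sketch; the paper's key-switching mechanism is more involved): a ciphertext $(\mathbf{a}, b)$ decrypts via $f_{\mathbf{a},b}(\mathbf{s}) = b - \langle \mathbf{a}, \mathbf{s} \rangle$, a function linear in the secret $\mathbf{s}$. Multiplying two ciphertexts corresponds to the product $f_{\mathbf{a}_1,b_1}(\mathbf{s}) \cdot f_{\mathbf{a}_2,b_2}(\mathbf{s}) = h_0 + \sum_i h_i s_i + \sum_{i \leq j} h_{i,j} s_i s_j$, which is quadratic in $\mathbf{s}$ and would blow up ciphertext size if iterated. Re-linearization adds to the public evaluation key encryptions of the monomials $s_i$ and $s_i s_j$ under a fresh secret $\mathbf{t}$; substituting these turns the quadratic expression back into a ciphertext linear under $\mathbf{t}$, so the size stays fixed across multiplications.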
Our scheme has very short ciphertexts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is k·polylog(k)+log |DB| bits per single-bit query, which is better than any known scheme (here, k is a security parameter).
Zvika Brakerski and Yael Tauman Kalai
A Parallel Repetition Theorem for Leakage Resilience
TCC 2012.
[Abstract] [PDF]
A leakage resilient encryption scheme is one which stays secure even against an attacker that obtains a bounded amount of side information on the secret key (say $\lambda$ bits of ``leakage''). A fundamental question is whether parallel repetition amplifies leakage resilience. Namely, if we secret share our message, and encrypt the shares under two independent keys, will the resulting scheme be resilient to $2\lambda$ bits of leakage?
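The construction in question is simple to state in code (our toy sketch; the two encryption schemes themselves are abstracted away):
    import os

    def share(msg):                   # 2-out-of-2 XOR secret sharing
        r = os.urandom(len(msg))
        return r, bytes(a ^ b for a, b in zip(msg, r))

    def reconstruct(s1, s2):
        return bytes(a ^ b for a, b in zip(s1, s2))

    msg = b'attack at dawn'
    s1, s2 = share(msg)               # each share alone is uniformly random
    assert reconstruct(s1, s2) == msg
    # The 2-repetition scheme would encrypt s1 under pk1 and s2 under pk2
    # (independent keys); the question is whether lambda bits of leakage
    # resilience per copy compose to 2*lambda for the pair.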
Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an example of a public-key encryption scheme that is resilient to $\lambda$ bits of leakage, and yet its $2$-repetition is not resilient to even $(1+\epsilon)\lambda$ bits of leakage. In their counter-example, the repeated schemes share secretly generated public parameters.
In this work we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition does in fact amplify leakage. In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition.
More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectively, their direct product is resilient to $\sum (\lambda_i-1)$ bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
Zvika Brakerski and Vinod Vaikuntanathan
Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages
CRYPTO 2011.
[Abstract] [PDF]
We present a somewhat homomorphic encryption scheme that is both very simple to describe and analyze, and whose security (quantumly) reduces to the worst-case hardness of problems on ideal lattices. We then transform it into a fully homomorphic encryption scheme using standard ``squashing'' and ``bootstrapping'' techniques introduced by Gentry (STOC 2009).
One of the obstacles in going from ``somewhat'' to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step forward towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key.
Our scheme is based on the ring learning with errors (RLWE) assumption that was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a,b=as+2e), where s,a,e are all degree (n-1) integer polynomials whose coefficients are independently drawn from easy to sample distributions.
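A toy numerical rendition of this template (ours; the parameters are illustrative and provide no security, and none of the homomorphic or bootstrapping machinery is included):
    import numpy as np

    n, q = 16, 2**13                          # toy parameters -- insecure, illustration only
    rng = np.random.default_rng(0)

    def polymul(f, g):                        # multiplication in Z_q[x]/(x^n + 1)
        res = np.zeros(2 * n, dtype=np.int64)
        for i in range(n):
            res[i:i + n] += int(f[i]) * g
        return (res[:n] - res[n:]) % q        # reduce using x^n = -1 (negacyclic)

    def small():                              # small centered coefficients ('easy to sample')
        return rng.integers(-2, 3, n)

    s = small()                               # secret key
    a = rng.integers(0, q, n)
    b = (polymul(a, s) + 2 * small()) % q     # public key (a, b = a*s + 2e)

    def encrypt(m):                           # m: 0/1 coefficient vector
        r = small()
        return (polymul(a, r) + 2 * small()) % q, (polymul(b, r) + 2 * small() + m) % q

    def decrypt(c0, c1):
        d = (c1 - polymul(c0, s)) % q
        d = np.where(d > q // 2, d - q, d)    # lift to centered representatives
        return d % 2                          # the noise is even, so mod 2 recovers m

    m = rng.integers(0, 2, n)
    assert (decrypt(*encrypt(m)) == m).all()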
Zvika Brakerski and Gil Segev
Better Security for Deterministic Public-Key Encryption: The Auxiliary-Input Setting
CRYPTO 2011.
[Abstract] [PDF]
Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O'Neill (CRYPTO '07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security when the plaintext is distributed over a small set. Bellare et al. addressed this difficulty by requiring semantic security to hold only when the plaintext has high min-entropy from the adversary's point of view.
In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary's point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees.
We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the decisional Diffie-Hellman (and, more generally, on the $d$-linear) assumption, and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, quadratic residuosity and Paillier's composite residuosity). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.
Zvika Brakerski, Jonathan Katz, Gil Segev and Arkady Yerukhimovich
Limits on the Power of Zero-Knowledge Proofs in Cryptographic Constructions
TCC 2011.
[Abstract] [PDF]
For over 20 years, black-box impossibility results have been used
to argue the infeasibility of constructing certain cryptographic
primitives (e.g., key agreement) from others (e.g., one-way
functions). A widely recognized limitation of such impossibility
results, however, is that they say nothing about the usefulness of
(known) nonblack-box techniques. This is
unsatisfying, as we would at least like to rule out constructions
using the set of techniques we have at our disposal.
With this motivation in mind, we suggest a new framework for
black-box constructions that encompasses constructions with a
nonblack-box flavor: specifically, those that rely on
zero-knowledge proofs relative to some oracle. We show that
our framework is powerful enough to capture the Naor-Yung/Sahai
paradigm for building a (shielding) CCA-secure public-key encryption
scheme from a CPA-secure one, something ruled out by prior
black-box separation results. On the other hand, we show that several
black-box impossibility results still hold even in a
setting that allows for zero-knowledge proofs.
Zvika Brakerski, Yael Tauman Kalai, Jonathan Katz and Vinod Vaikuntanathan
Overcoming the Hole in the Bucket: Public-Key Cryptography Resilient to Continual Memory Leakage
FOCS 2010.
[Abstract] [PDF]
In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side-channel attacks which, through various physical means, can recover part of the secret key. We explore the possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used.
We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, at each time period, can probe the entire memory (containing a secret key) and ``leak'' up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC04]).
Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].
The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public key encryption (and IBE) schemes that are resilient to continual leakage.
Zvika Brakerski and Shafi Goldwasser
Circular and Leakage Resilient Public-Key Encryption Under Subgroup Indistinguishability
(or: Quadratic Residuosity Strikes Back)
CRYPTO 2010.
[Abstract] [PDF]
The main results of this work are new public-key encryption schemes
that, under the quadratic residuosity (QR) assumption (or
Paillier's decisional composite residuosity (DCR) assumption), achieve key-dependent
message security as well as high resilience to secret key leakage
and high resilience to the presence of auxiliary input information.
In particular, under what we call the subgroup
indistinguishability assumption, of which the QR and DCR are special
cases, we can construct a scheme that has:
- Key-dependent message (circular) security. Achieves security
even when encrypting affine functions of its own secret key (in fact,
w.r.t. affine ``key-cycles'' of predefined length). Our scheme also
meets the requirements for extending key-dependent message security to
broader classes of functions beyond affine functions using previous
techniques of [BGK, ePrint09] or [BHHI, Eurocrypt10].
- Leakage resiliency.
Remains secure even if any
adversarial low-entropy (efficiently computable) function of the
secret key is given to the adversary. A proper selection of parameters
allows for a ``leakage rate'' of $(1-o(1))$ of the length of the secret key.
- Auxiliary-input security. Remains secure even if
any sufficiently \emph{hard to invert} (efficiently computable) function of the
secret key is given to the adversary.
Our scheme is the first to achieve key-dependent security and auxiliary-input
security based on the DCR and QR assumptions. Previous schemes that achieved
these properties relied either on the DDH or LWE assumptions. The proposed
scheme is also the first to achieve leakage resiliency for leakage rate
$(1-o(1))$ of the secret key length, under the QR assumption. We note that
leakage resilient schemes under the DCR and the QR assumptions, for the
restricted case of composite modulus product of safe primes, were implied by
the work of [NS, Crypto09], using hash proof systems. However, under the QR
assumption, known constructions of hash proof systems only yield a leakage rate
of $o(1)$ of the secret key length.
Zvika Brakerski and Yael Tauman Kalai
A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model
Manuscript, 2010.
[Abstract] [PDF]
In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity based encryption schemes, all in the standard model (without relying on random oracles).
We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000), together with the ``prefix method''. Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present simple constructions based on the RSA assumption, the short integer solution (SIS) assumption, and the computational Diffie-Hellman (CDH) assumption over bilinear groups.
Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the short integer solution (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call ring trapdoor functions. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.
Finally, we show a connection between ring signatures and identity based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: the first is based on the learning with errors (LWE) assumption, and is similar to recently introduced IBE schemes; the second is based on the $d$-linear assumption over bilinear groups.
Zvika Brakerski, Shafi Goldwasser and Yael Tauman Kalai
Black-Box Circular-Secure Encryption Beyond Affine Functions
TCC 2011.
[Abstract] [PDF]
We show how to achieve public-key encryption schemes that can securely encrypt
nonlinear functions of their own secret key.
Specifically, we show that for any constant $d\in\mathbb{N}$, there exists a
public-key encryption scheme that can securely encrypt any function $f$ of its
own secret key, assuming $f$ can be expressed as a polynomial of total
degree $d$. Such a scheme is said to be key-dependent message (KDM) secure
w.r.t. degree-$d$ polynomials. We also show that for any constants $c,e$,
there exists a public-key encryption scheme that is KDM secure w.r.t. all
Turing machines with description size $c\log \lambda$ and running time
$\lambda^e$, where $\lambda$ is the security parameter. The security of such
public-key schemes can be based either on the standard decision Diffie-Hellman
(DDH) assumption or on the learning with errors (LWE) assumption (with certain
parameter settings).
In the case of functions that can be expressed as degree-$d$ polynomials, we
show that the resulting schemes are also secure with respect to key
cycles of any length. Specifically, for any polynomial number $n$ of key
pairs, our schemes can securely encrypt a degree-$d$ polynomial whose variables
are the collection of coordinates of all $n$ secret keys. Prior to this work,
it was not known how to achieve this for nonlinear functions.
Our key idea is a general transformation that amplifies KDM security. The
transformation takes an encryption scheme that is KDM secure w.r.t.
some functions even when the secret keys are weak (i.e. chosen from an
arbitrary distribution with entropy $k$), and outputs a scheme that is KDM
secure w.r.t. a richer class of functions. The resulting scheme may no
longer be secure with weak keys. Thus, in some sense, this transformation
converts security with weak keys into amplified KDM security.
Zvika Brakerski and Boaz Patt-Shamir
Distributed Discovery of Large Near-Cliques
PODC 2009 (Brief Announcement), DISC 2009, Distributed Computing 2011.
[Abstract] [PDF]
Given an undirected graph and $0\le\epsilon\le1$, a set of nodes is called an $\epsilon$-near clique if all but an $\epsilon$ fraction of the pairs of nodes in the set have a link between them. In this paper we present a fast synchronous network algorithm that uses small messages and finds a near-clique. Specifically, we present a constant-time algorithm that finds, with constant probability of success, a linear-size $\epsilon$-near clique if there exists an $\epsilon^3$-near clique of linear size in the graph. The algorithm uses messages of $O(\log n)$ bits. The failure probability can be reduced to $n^{-\Omega(1)}$ in $O(\log n)$ time, and the algorithm also works if the graph contains a clique of size $\Omega(n/\log^{\alpha}\log n)$ for some $\alpha \in (0,1)$.
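As a small illustration of the objects involved (not of the distributed algorithm itself, which is the paper's contribution), the following Python checker tests the $\epsilon$-near-clique property exactly as defined above; the adjacency-dictionary encoding of the graph is our own choice.

    from itertools import combinations

    def is_near_clique(adj, nodes, eps):
        # adj: dict mapping each node to the set of its neighbors (undirected)
        # True iff all but an eps fraction of the pairs in `nodes` are linked
        pairs = list(combinations(list(nodes), 2))
        if not pairs:
            return True
        missing = sum(1 for u, v in pairs if v not in adj[u])
        return missing <= eps * len(pairs)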
-
Public-key
encryption schemes rely for their IND-CPA security on per-message fresh
randomness. In practice, randomness may be of poor quality for a variety of
reasons, leading to failure of the schemes. Expecting the systems to improve
is unrealistic. What we show in this paper is that we can, instead, improve
the cryptography to offset the possible lack of good randomness. We provide
public-key encryption schemes that achieve IND-CPA security when the randomness they
use is of high quality, but, when the latter is not the case, rather than
breaking completely, they achieve a weaker but still useful notion of
security that we call IND-CDA. This hedged public-key encryption provides
the best possible security guarantees in the face of bad randomness. We
provide simple RO-based ways to make in-practice IND-CPA schemes hedge
secure with minimal software changes. We also provide non-RO model schemes
relying on lossy trapdoor functions (LTDFs) and techniques from
deterministic encryption. They achieve adaptive security by establishing
and exploiting the anonymity of LTDFs which we believe is of independent
interest.
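To give the flavor of the RO-based approach, here is a minimal Python sketch under our own reading of it (SHA-256 stands in for the random oracle, base_encrypt for an arbitrary IND-CPA encryption algorithm, and byte-string encodings are assumed): rather than handing the possibly-bad coins directly to the base scheme, derive the actual encryption coins by hashing the public key, the message, and the supplied coins together, so that the derived coins are well distributed whenever these inputs jointly carry enough entropy.

    import hashlib

    def hedged_encrypt(base_encrypt, pk: bytes, msg: bytes, coins: bytes) -> bytes:
        # derive coins from pk, message and the supplied randomness; entropy
        # in the message compensates for bad coins
        derived = hashlib.sha256(b"hedge" + pk + msg + coins).digest()
        return base_encrypt(pk, msg, derived)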
Zvika Brakerski and Oded Goldreich
From Absolute Distinguishability to Positive Distinguishability
Studies in Complexity and
Cryptography 2011 (Book Chapter).
[Abstract] [PDF]
We study methods of converting algorithms that distinguish pairs
of distributions with a gap that is noticeable in absolute value
into corresponding algorithms in which the gap is always positive.
Our focus is on designing algorithms that, in addition to the tested string,
obtain a fixed number of samples from each distribution.
Needless to say, such algorithms cannot provide a very reliable
guess for the sign of the original distinguishability gap;
still, we show that even guesses that are noticeably better than random
are useful in this setting.
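The following Python sketch shows one natural way such a conversion can use its extra samples (our own illustration of the setting, not necessarily the paper's transformation): estimate the sign of the given distinguisher D's gap empirically, and orient D's answer on the tested string accordingly. The resulting guess is only noticeably better than random, which, as noted above, already suffices.

    import random

    def positive_distinguisher(D, tested, samples_x, samples_y):
        # empirical acceptance rate of D on samples from each distribution
        px = sum(D(x) for x in samples_x) / len(samples_x)
        py = sum(D(y) for y in samples_y) / len(samples_y)
        if px == py:
            return random.randrange(2)      # no signal: guess at random
        ans = D(tested)
        return ans if px > py else 1 - ans  # flip the answer if the gap looks negative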
-
Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan, are
pseudorandom functions in which the owner of the seed produces a public key
that constitutes a commitment to all values of the function and can then
produce, for any input $x$, a proof that the function has been evaluated
correctly on $x$, preserving pseudorandomness for all other inputs. No
public key (even a falsely generated one) should allow for proving more than
one value per input.
VRFs are both a natural and a useful primitive, and previous works have
suggested a variety of constructions and applications. Still, there are many
open questions in the study of VRFs, especially regarding their relation to more
widely studied cryptographic primitives and the possibility of constructing them
from a wider variety of cryptographic assumptions.
In this work we define a natural relaxation of VRFs that we call weak
verifiable random functions, where pseudorandomness is required to hold only
for randomly selected inputs. We conduct a study of weak VRFs, focusing on
applications, constructions, and their relationship to other cryptographic
primitives. We show:
- Constructions. We present constructions of weak VRFs based on
a variety of assumptions, including general assumptions such as (enhanced)
trapdoor permutations, as well as constructions based on specific
number-theoretic assumptions such as the Diffie-Hellman assumption in bilinear
groups.
- Separations.
Verifiable random functions (both weak and standard) cannot be constructed from
one-way permutations in a black-box manner. This constitutes the first result
separating (standard) VRFs from any cryptographic primitive.
- Applications. Weak VRFs capture the essence of
constructing non-interactive zero-knowledge proofs for all NP languages.
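For concreteness, the syntax described above can be summarized by the following Python-style interface (a sketch of the definition only; the names are ours and no construction is implied).

    class WeakVRF:
        def keygen(self):
            # returns (pk, sk); pk commits to all values of the function
            raise NotImplementedError

        def evaluate(self, sk, x):
            # returns (y, proof): the value at x and a proof of correct
            # evaluation; y must remain pseudorandom on inputs x that are
            # chosen at random (the "weak" relaxation)
            raise NotImplementedError

        def verify(self, pk, x, y, proof):
            # must accept at most one value y per input x, even for a
            # maliciously generated pk
            raise NotImplementedError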
Zvika Brakerski
Local Property Restoring
Manuscript, 2008.
[Abstract] [PDF]
In this paper we present the new notion of local restoring of
properties. A restorer gets implicit access, in the form of oracle
queries, to a function (which may correspond to an object such as a graph, a
code or a matrix) that is close to having a property, such as function
monotonicity or graph bipartiteness. The restorer then, using a small number of
queries to the function, can locally compute the value of a ``corrected''
function at any desired point. The corrected function is required to be both
close to the original function and strictly have the property. In the
case of error-correcting codes, this corresponds to local self-correction. Here
we extend the definition and study to general functions and properties.
We define the new notion and go on to give restorers for properties in the
dense graph model: Bipartiteness and $\rho$-Clique, and for function
monotonicity. We also show some general relations between property restoring
and other notions such as property testing and tolerant property testing.
Zvika Brakerski and Boaz Patt-Shamir
Jitter-Approximation Tradeoff for Periodic Scheduling
IPDPS 2004, Wireless Networks 2006.
[Abstract] [PDF]
We consider an asymmetric wireless communication setting, where a
server periodically broadcasts data items to different
mobile clients. The goal is to serve items at a prescribed
rate, while minimizing the energy consumption of the mobile
users. Abstractly, we are presented with a set of jobs, each with a
known execution time and a requested period, and the task is to
design a schedule for these jobs over a single shared resource
without preemption.
Given any solution
schedule, its period approximation is the
maximal factor by which the average period of a job in the schedule
is blown up w.r.t. its requested period, and the jitter is roughly
the maximal variability of times between two
consecutive occurrences of the same job. Schedules with
low jitter allow the mobile devices to save power by
having their receivers switched off longer.
In this paper we consider a
scenario where clients may be willing to settle for non-optimal
period approximation so that the jitter is improved.
We present a parametric jitter-approximation tradeoff algorithm that
allows us to choose various combinations of jitter optimality
and period optimality for any given set of jobs.
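To make the two measures concrete, the following Python sketch computes the period approximation and (a max-minus-min reading of) the jitter of a given cyclic schedule; unit-length jobs, the schedule encoding, and the exact variability measure are our own illustrative choices.

    def schedule_quality(schedule, requested_period):
        # schedule: cyclic list with one job name per slot
        # requested_period: dict mapping each job to its requested period
        occ = {}
        for t, job in enumerate(schedule):
            occ.setdefault(job, []).append(t)
        cycle = len(schedule)
        approx = jitter = 0.0
        for job, times in occ.items():
            # gaps between consecutive occurrences, wrapping around the cycle
            gaps = [(times[(i + 1) % len(times)] - times[i]) % cycle
                    for i in range(len(times))]
            avg = sum(gaps) / len(gaps)  # average granted period of this job
            approx = max(approx, avg / requested_period[job])
            jitter = max(jitter, max(gaps) - min(gaps))
        return approx, jitter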
Zvika Brakerski, Aviv Nisgav and Boaz Patt-Shamir
General Perfectly Periodic Scheduling
PODC 2002, Algorithmica 2006.
[Abstract] [PDF]
In a perfectly periodic schedule, each job must be scheduled
precisely some fixed number of time units after its previous
occurrence. Traditionally, motivated by centralized systems, the
perfect periodicity requirement is relaxed, the main goal being to
attain the requested average rate. Recently, motivated by mobile
clients with limited power supply, perfect periodicity seems to be
an attractive alternative that allows clients to save energy by
reducing their ``busy waiting'' time. In this case, clients may be
willing to compromise their requested service rate in order to get
perfect periodicity. In this paper, we study a general model of
perfectly periodic schedules, where each job has a requested period
and a length; we assume that $m$ jobs can be served in parallel for
some given $m$. Job lengths may not be truncated, but granted
periods may differ from the requested periods.
We present an algorithm which computes schedules such that the
worst-case proportion between the requested period and the granted
period is guaranteed to be close to the lower bound. This algorithm
improves on previous algorithms for perfect schedules in providing a
worst-case guarantee rather than an average-case guarantee, in
generalizing unit length jobs to arbitrary length jobs, and in
generalizing the single-server model to multiple servers.
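As a companion illustration (our own encoding: unit-length jobs, with the $m$ parallel servers represented by putting at most $m$ jobs in each slot), the following Python sketch checks perfect periodicity and reports each job's granted period. The worst-case proportion discussed above is then the maximum, over all jobs, of the granted period relative to the requested one.

    def granted_periods(schedule):
        # schedule: cyclic list of slots, each a set of jobs served in parallel
        # returns {job: granted period} if perfectly periodic, else None
        occ = {}
        for t, slot in enumerate(schedule):
            for job in slot:
                occ.setdefault(job, []).append(t)
        n = len(schedule)
        periods = {}
        for job, times in occ.items():
            gaps = {(times[(i + 1) % len(times)] - times[i]) % n
                    for i in range(len(times))}
            if len(gaps) != 1:
                return None  # some job does not recur at a fixed interval
            periods[job] = gaps.pop()
        return periods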
Zvika Brakerski, Vladimir Dreizin and Boaz Patt-Shamir
Dispatching in Perfectly-Periodic Schedules
J. Algorithms 2003.
[Abstract] [PDF]
In a perfectly-periodic schedule, time is divided into time-slots, and each client gets a time slot precisely every predefined number of time slots, called the period of that client. Periodic schedules are useful in mobile communication where they can help save power in the mobile device, and they also enjoy the best possible smoothness. In this paper we study the question of dispatching in a perfectly periodic schedule, namely how to find the next item to schedule, assuming that the schedule is already given somehow. Simple dispatching algorithms suffer from either linear time complexity per slot or from exponential space requirement. We show that if the schedule is given in a natural tree representation, then there exists a way to get the best possible running time per slot for a given space parameter, or the best possible space (up to a polynomial) for a given time parameter. We show that in many practical cases, the running time is constant and the space complexity is polynomial.
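To illustrate the linear-time baseline mentioned above (with our own toy encoding, in which each client is described by a fixed period and an offset), the naive dispatcher below scans every client at every slot; this is exactly the per-slot linear cost that the paper's tree representation is designed to avoid.

    def naive_dispatch(periods, offsets, t):
        # periods, offsets: dicts mapping each client to its period and to the
        # first slot in which it is served; returns the client due at slot t
        for client, p in periods.items():
            if (t - offsets[client]) % p == 0:
                return client
        return None  # no client is scheduled in this slot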