Indistinguishability Obfuscation
To recap the previous lecture, we introduced the notion of program obfuscation and the definition of Virtual Black Box (VBB) security. Very informally, VBB security says that, from the point of view of a user, an obfuscation of a program is no more useful than black-box oracle access to the original program. As shown by Barak et al., this definition of security for obfuscation is impossible to achieve. The paper that introduced this impossibility result also introduced, as a weaker alternative notion of security, indistinguishability obfuscation (\(i\mathcal{O}\)).
Definitions
Recall that for VBB security we had to specify the model of computation: our proofs and discussions distinguished between Turing machines and circuits. In these notes, the \(i\mathcal{O}\) definitions are given for circuits; however, the definitions and proofs port over nicely to other models of computation. From here on, a circuit is denoted simply by \(C\).
An \(i\mathcal{O}\) for circuits is a ppt algorithm \(\mathcal{O}\) such that:
Correctness: \(\forall{k}\in\mathbb{N}, \forall{C}, \mathcal{O}(1^k, C) \equiv C\)
Efficiency: \(\forall{k}\in\mathbb{N}, \forall{C}, \lvert \mathcal{O}(1^k, C) \rvert = \text{poly}(k, \lvert C \rvert)\)
\(i\mathcal{O}\) Security: \(\forall{C_1,C_2}\) such that \(\lvert C_1 \rvert = \lvert C_2 \rvert, C_1 \equiv C_2\), \[\begin{aligned} \mathcal{O}(1^k, C_1) & \stackrel{c}{=}\mathcal{O}(1^k, C_2) \end{aligned}\]
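To make the definition concrete, here is a minimal Python sketch of the \(i\mathcal{O}\) security game; `obfuscate` and `distinguisher` are hypothetical, caller-supplied components standing in for \(\mathcal{O}\) and a ppt adversary, not a real library.

```python
import secrets

def io_security_experiment(obfuscate, distinguisher, C1, C2, k):
    """One run of the iO security game (hypothetical components).

    obfuscate(k, C)        -- candidate obfuscator, standing in for O(1^k, C)
    distinguisher(k, obfC) -- ppt adversary guessing which circuit was obfuscated
    C1, C2                 -- circuits assumed equal in size and functionally equivalent

    iO security asks that no ppt distinguisher wins this game with
    probability noticeably better than 1/2.
    """
    b = secrets.randbelow(2)                   # challenger samples a bit
    obf = obfuscate(k, C1 if b == 0 else C2)   # obfuscate the chosen circuit
    return distinguisher(k, obf) == b          # did the adversary guess b?
```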
Looking at the definition of \(i\mathcal{O}\) above, we can see that \(i\mathcal{O}\) is to VBB what witness indistinguishability is to Zero-Knowledge. \(i\mathcal{O}\) is a weaker notion of security than VBB, but it is still useful. We will show that with just \(i\mathcal{O}\) and one-way functions, we can build a public-key encryption scheme.
Using \(i\mathcal{O}\) for Public-Key Encryption
\(i\mathcal{O}+ \text{OWF} \Rightarrow \text{PKE}\)
Proof Overview
The theorem is proved via the following two lemmas:
\(i\mathcal{O}\Rightarrow \text{Witness Encryption}\)
\(\text{Witness Encryption} + \text{OWF} \Rightarrow\text{PKE}\)
Therefore, to continue on with the proof, we introduce the concept of Witness Encryption.
Witness Encryption using \(i\mathcal{O}\)
Definitions
A Witness Encryption scheme for an \(NP\) relation \(R = \{(x, w) \mid V_R(x, w) = 1\}\), where \(V_R\) is a polynomial-time verifier, is a tuple of algorithms \((E, D)\) such that:
Correctness: \(\forall k \in \mathbb{N}, \forall{m}\in\{0, 1\},\forall{(x, w)}\in R,\) \[\begin{aligned} \Pr[D(1^k, E(1^k, x, m), w) = m] = 1 \end{aligned}\]
Security: \(\forall{x}\) such that \(\nexists w : (x, w) \in R\) (i.e., \(x \notin L_R\)), \(E(1^k, x, 0) \stackrel{c}{=}E(1^k, x, 1)\).
\(i\mathcal{O}\Rightarrow \text{Witness Encryption}\)
Let \(\mathcal{O}\) be an \(i\mathcal{O}\). We construct a candidate witness encryption scheme \((E, D)\):
\[\begin{aligned} C_{x,m}(y) &:= \left\{\begin{array}{cc} m& \text{if $(x, y) \in R$} \\ \bot& \text{otherwise} \end{array}\right. \\ E(1^k, x, m) &:= \mathcal{O}(1^k, C_{x, m}) \\ D(1^k, c, w) &:= c(w)\end{aligned}\]
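The construction is short enough to phrase as (hypothetical) code. The sketch below assumes a candidate obfuscator `obfuscate(k, C)` whose output can be run directly, and represents \(C_{x,m}\) as a Python closure over the relation's verifier `V_R`; all names are placeholders, not a real library.

```python
def make_witness_encryption(obfuscate, V_R):
    """Candidate witness encryption (E, D) for R = {(x, w) : V_R(x, w) = 1},
    built from a hypothetical iO candidate `obfuscate(k, C)`."""

    def E(k, x, m):
        # C_{x,m}: outputs m on any valid witness for x, and None (standing in
        # for bot) on anything else.
        def C_x_m(y):
            return m if V_R(x, y) else None
        return obfuscate(k, C_x_m)   # the ciphertext is the obfuscated circuit

    def D(k, c, w):
        # Decryption simply runs the (obfuscated) circuit on the witness.
        return c(w)

    return E, D
```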
To show that this construction is a valid Witness Encryption \((E, D)\), we need to show that correctness and security are satisfied:
Correctness: This follows directly from the correctness of \(\mathcal{O}\): the obfuscated circuit computes the same function as \(C_{x, m}\), which on input \(w\) outputs the message \(m\) if and only if \((x, w)\) is in the relation \(R\).
Security: Fix \(x \notin L_R\), i.e., an \(x\) with no valid witness. Then the circuits \(C_{x, 0}\) and \(C_{x, 1}\) are equal in size and equivalent in functionality: since there is no valid witness for \(x\), both output \(\bot\) on every input. We can therefore apply \(i\mathcal{O}\) security to conclude: \[\begin{aligned} \mathcal{O}(1^k, C_{x, 0}) \stackrel{c}{=}\mathcal{O}(1^k, C_{x, 1}) \end{aligned}\] Recall that by construction, the following statements are true: \[\begin{aligned} E(1^k, x, 0) &= \mathcal{O}(1^k, C_{x, 0}) \\ E(1^k, x, 1) &= \mathcal{O}(1^k, C_{x, 1}) \end{aligned}\] Therefore, we finally show: \[\begin{aligned} E(1^k, x, 0) = \mathcal{O}(1^k, C_{x, 0}) \stackrel{c}{=}\mathcal{O}(1^k, C_{x, 1}) = E(1^k, x, 1) \end{aligned}\]
Public Key Encryption using Witness Encryption and OWF
\(\text{OWF} + \text{WE} \Rightarrow \text{PKE}\)
We first begin with a construction of a candidate Public Key Encryption scheme \((G, E, D)\). The construction relies on the following: \[\begin{aligned} \hat{G}&: \{0,1\}^k \rightarrow \{0,1\}^{2k} \\ WE &:= (\hat{E}, \hat{D}) \end{aligned}\]
\(\hat{G}\) is a PRG with expansion factor \(2\), whose existence is implied by one-way functions.
Now, recall that our Witness Encryption scheme requires a relation. To satisfy this requirement, we construct the following relation so that we can later invoke \(WE\) security in our proof: \[\begin{aligned} R_{\hat{G}} := \{(z, s) : \hat{G}(s) = z, z \in \{0,1\}^{2k}, s \in \{0,1\}^k \}\end{aligned}\] We now have all the necessary primitives to construct our candidate Public Key Encryption scheme:
\[\begin{aligned} G(1^k) &:= \begin{cases} \text{1. sample $s \in\{0,1\}^k$} \\ \text{2. $z := \hat{G}(s)$} \\ \text{3. output $pk := z$, $sk := s$} \end{cases} \\ \\ E(1^k, pk, m) &:= \hat{E}(1^k, pk, m) \\ \\ D(1^k, sk, c) &:= \hat{D}(1^k, c, sk) \end{aligned}\]
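As a minimal sketch, assuming a length-doubling PRG `G_hat` and a witness encryption scheme `(E_hat, D_hat)` for \(R_{\hat{G}}\) (for example the one built from \(i\mathcal{O}\) above), the construction looks as follows; the component names are illustrative placeholders, not a real library.

```python
import secrets

def make_pke(G_hat, E_hat, D_hat):
    """Candidate PKE (Gen, Enc, Dec) from a length-doubling PRG G_hat and a
    witness encryption scheme (E_hat, D_hat) for the relation R_{G_hat};
    all three components are hypothetical placeholders."""

    def Gen(k):
        s = secrets.token_bytes(k)    # seed s, standing in for s <- {0,1}^k
        z = G_hat(s)                  # public key z := G_hat(s)
        return z, s                   # (pk, sk)

    def Enc(k, pk, m):
        # Encrypt m under the statement "pk lies in the image of G_hat".
        return E_hat(k, pk, m)

    def Dec(k, sk, c):
        # The seed sk is a witness for pk = G_hat(sk), so WE decryption works.
        return D_hat(k, c, sk)

    return Gen, Enc, Dec
```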
To show that \((G, E, D)\) is a valid Public Key Encryption scheme, we want to argue for completeness and security.
Completeness: Since \(E\) and \(D\) use the witness encryption scheme under the hood, it suffices to observe that \(G\) always outputs a pair with \((pk, sk) = (z, s) \in R_{\hat{G}}\), because \(z = \hat{G}(s)\). The underlying witness encryption scheme's correctness then implies completeness of this PKE scheme.
Security: To prove that the PKE is secure, recall that it suffices to show single-message indistinguishability. Therefore, we want to show \(E(1^k, pk, 0) \stackrel{c}{=}E(1^k, pk, 1)\) for an honestly generated \(pk\). The proof is the following hybrid argument: \[\begin{aligned} E(1^k, pk, 0) &= \hat{E}(1^k, \hat{G}(s), 0) \\ &\stackrel{c}{=}\hat{E}(1^k, U_{2k}, 0) \\ &\stackrel{c}{=}\hat{E}(1^k, U_{2k}, 1) \\ &\stackrel{c}{=}\hat{E}(1^k, \hat{G}(s), 1) \\ &= E(1^k, pk, 1) \end{aligned}\]
The second step follows from the security of the PRG. The third step follows from \(WE\) security: a uniformly random \(z \in \{0,1\}^{2k}\) lies outside the image of \(\hat{G}\) except with probability at most \(2^{-k}\), so it has no witness and the two encryptions are computationally indistinguishable. The fourth step follows from the security of the PRG once again.
Best Possible Obfuscation [Goldwasser, Rothblum]
The road to obfuscation began with ruling out a definition for an obfuscator that was far too strong: VBB. From here, we lowered our expectations and worked with indistinguishability obfuscation. However, this definition may feel too weak. The motivation behind Best Possible Obfuscation is to obtain an obfuscator that feels stronger than \(i\mathcal{O}\). In the end, however, the two notions turn out to be equivalent.
Definition
The idea behind a best possible obfuscator (BPO) is that there may be some information about a circuit which cannot possibly be hidden by any functionality-preserving obfuscation. We ask only that an obfuscator do the best that it can. Informally, we ask that the obfuscated program leak as little information about the underlying circuit as any other program with the same functionality and size. More formally:
A best possible obfuscator (BPO) for circuits is a ppt algorithm \(\mathcal{O}\) such that:
Correctness: \(\forall{k}\in\mathbb{N}, \forall{C}, \mathcal{O}(1^k, C) \equiv C\)
Efficiency: \(\forall{k}\in\mathbb{N}, \forall{C}, \lvert \mathcal{O}(1^k, C) \rvert = \text{poly}(k, \lvert C \rvert)\)
BPO Security: \(\forall\) ppt \(A\), \(\exists\) ppt \(S\), \(\forall\) circuits \(C_1, C_2\) such that \(\lvert C_1 \rvert = \lvert C_2 \rvert\) and \(C_1 \equiv C_2\), \[\begin{aligned} A(1^k, \mathcal{O}(1^k, C_1)) \stackrel{c}{=}S(1^k, C_2) \end{aligned}\]
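Analogously to the \(i\mathcal{O}\) game sketched earlier, the BPO requirement compares two distributions; the hypothetical helper below just makes the two sides explicit, with `obfuscate`, `adversary`, and `simulator` supplied by the caller.

```python
def bpo_experiment_samples(obfuscate, adversary, simulator, C1, C2, k):
    """Sample one value from each side of the BPO security requirement.

    BPO security: for every ppt adversary there exists a ppt simulator such
    that, whenever C1 and C2 are equal in size and functionally equivalent,
    the two values produced here are computationally indistinguishable.
    All arguments are hypothetical, caller-supplied components.
    """
    left = adversary(k, obfuscate(k, C1))   # A(1^k, O(1^k, C1)): sees only the obfuscation
    right = simulator(k, C2)                # S(1^k, C2): sees C2 in the clear
    return left, right
```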
VBB Security and BPO Security
VBB security \(\Rightarrow\) BPO security
We prove the contrapositive: if \(\mathcal{O}\) is not a BPO, then it is not VBB secure. So suppose some obfuscator \(\mathcal{O}\) is not a BPO. Then there is a ppt adversary \(A\) for which no simulator works; in particular, the candidate simulator \(S := A \circ \mathcal{O}\) fails. Hence there exist a ppt distinguisher \(D\) and two circuits \(C_1\), \(C_2\), equal in size and functionality, such that: \[\begin{aligned} \biggr\lvert\Pr[D(1^k, A(1^k, \mathcal{O}(1^k, C_1))) = 1] - \Pr[D(1^k, S(1^k, C_2)) = 1] \biggr\rvert \geq \delta(k) \end{aligned}\] Above, \(\delta(\cdot)\) is a non-negligible function. Since \(S(1^k, C_2) = A(1^k, \mathcal{O}(1^k, C_2))\), we can rewrite the above as follows: \[\begin{aligned} \biggr\lvert\Pr[D(1^k, A(1^k, \mathcal{O}(1^k, C_1))) = 1] - \Pr[D(1^k, A(1^k, \mathcal{O}(1^k, C_2))) = 1] \biggr\rvert \geq \delta(k) \end{aligned}\] We want to show that there exists an adversary \(A^*\) that breaks VBB security, i.e., an \(A^*\) for which no black-box simulator \(S^*\) can succeed.
Since \(C_1 \equiv C_2\), a black-box simulator cannot tell which of the two circuits its oracle implements, so the following is true for every ppt \(S^*\): \[\begin{aligned} \Pr[S^{*^{C_1}}(1^k) = 1] = \Pr[S^{*^{C_2}}(1^k) = 1] \end{aligned}\]
Now, we have to ask: what is \(A^*\)? Consider \(A^* := D \circ A\). Using this construction, it follows that: \[\begin{aligned} \biggr\lvert\Pr[A^*(1^k, \mathcal{O}(1^k, C_1)) = 1] - \Pr[A^*(1^k, \mathcal{O}(1^k, C_2)) = 1] \biggr\rvert \geq \delta(k) \end{aligned}\] So \(A^*\) distinguishes the two obfuscations with non-negligible advantage. Now suppose, toward a contradiction, that \(\mathcal{O}\) is VBB secure. Then there is a ppt black-box simulator \(S^*\) for \(A^*\) such that, for each \(b \in \{1, 2\}\): \[\begin{aligned} \biggr\lvert\Pr[A^*(1^k, \mathcal{O}(1^k, C_b)) = 1] - \Pr[S^{*^{C_b}}(1^k) = 1]\biggr\rvert = \text{ negl$(k)$} \end{aligned}\] Combining this with the fact above that \(\Pr[S^{*^{C_1}}(1^k) = 1] = \Pr[S^{*^{C_2}}(1^k) = 1]\), the triangle inequality gives: \[\begin{aligned} \delta(k) \leq \biggr\lvert\Pr[A^*(1^k, \mathcal{O}(1^k, C_1)) = 1] - \Pr[A^*(1^k, \mathcal{O}(1^k, C_2)) = 1] \biggr\rvert \leq \text{negl$(k)$} + \text{negl$(k)$} \end{aligned}\] Contradiction, since \(\delta\) is non-negligible. Thus, VBB security cannot hold when BPO security can be broken; equivalently, VBB security implies BPO security.
Equivalence of BPO and \(i\mathcal{O}\)
We introduced BPO in hopes that it would give us a stronger notion of obfuscation to work with when compared to \(i\mathcal{O}\). However, the two have been shown to be equivalent.
BPO \(\equiv\) \(i\mathcal{O}\)
We first prove the forward direction.
BPO \(\Rightarrow\) \(i\mathcal{O}\)
Fix \(C_1, C_2\) such that \(\lvert C_1 \rvert = \lvert C_2 \rvert\) and \(C_1 \equiv C_2\), and let \(\mathcal{O}\) be a BPO. We want to show that \(\mathcal{O}(1^k, C_1) \stackrel{c}{=}\mathcal{O}(1^k, C_2)\). Consider the identity adversary \(A(1^k, \hat C) = \hat C\) and let \(S\) be its simulator guaranteed by BPO security. Applying BPO security to the pair \(C_1, C_2\): \[\begin{aligned} \mathcal{O}(1^k, C_1) = A(1^k, \mathcal{O}(1^k, C_1)) \stackrel{c}{=}S(1^k, C_2) \end{aligned}\] Applying BPO security to the pair \(C_2, C_2\) instead of \(C_1, C_2\): \[\begin{aligned} \mathcal{O}(1^k, C_2) = A(1^k, \mathcal{O}(1^k, C_2)) \stackrel{c}{=}S(1^k, C_2) \end{aligned}\] Therefore, we have that: \[\begin{aligned} \mathcal{O}(1^k, C_1) \stackrel{c}{=}S(1^k, C_2) \stackrel{c}{=}\mathcal{O}(1^k, C_2) \end{aligned}\]
Now consider the backwards direction.
\(i\mathcal{O}\Rightarrow\) BPO
Say there is an \(i\mathcal{O}\) \(\mathcal{O}\), so that for all pairs of circuits \(C_1, C_2\) equal in size and functionality, \(\mathcal{O}(1^k, C_1) \stackrel{c}{=}\mathcal{O}(1^k, C_2)\). We want to show that \(\mathcal{O}\) is also a BPO. For every ppt \(A\), define its simulator as \(S(1^k, C_2) := A(1^k, \mathcal{O}(1^k, C_2))\). Then, from the definition of \(i\mathcal{O}\) security, the following holds: \[\begin{aligned} \biggr\lvert \Pr[A(1^k, \mathcal{O}(1^k, C_1)) = 1] - \Pr[A(1^k, \mathcal{O}(1^k, C_2)) = 1] \biggr\rvert &= \\ \biggr\lvert \Pr[A(1^k, \mathcal{O}(1^k, C_1)) = 1] - \Pr[S(1^k, C_2) = 1] \biggr\rvert &= \text{ negl$(k)$} \end{aligned}\]
Amplifying \(i\mathcal{O}\)
There may be some classes of circuits that are easier or harder to obfuscate. For example, it could be that we know how to construct obfuscators that work on shallow circuits but not deep ones. We will show that, using fully homomorphic encryption, we can bootstrap an \(i\mathcal{O}\) obfuscator for shallow circuits into an \(i\mathcal{O}\) obfuscator for all circuits of size polynomial in the input. We give the intuition for the proof here; the complete proof is deferred to the next lecture.
An \(i\mathcal{O}\) \(\mathcal{O}\) for a circuit class \(\mathcal{C}\) is an obfuscator for which correctness and efficiency hold for every \(C \in \mathcal{C}\), and \(i\mathcal{O}\) security holds for every pair of circuits in \(\mathcal{C}\) that are equal in size and functionality. Before stating a theorem on the implications of the existence of such an \(i\mathcal{O}\), we introduce prerequisite definitions.
Definitions
\(NC_1\) is the class of decision problems decidable by circuit families of polynomial size and logarithmic depth (with bounded fan-in gates).
A Fully Homomorphic Encryption scheme (FHE) is a tuple of algorithms
\((Gen, Enc, Dec, Eval)\) such that:
\((Gen, Enc, Dec)\) is a CPA-secure Public Key Encryption scheme.
For every circuit \(C\), every input \(x\), and every key pair \((pk, sk)\) output by \(Gen(1^k)\): \(Dec(sk, Eval(pk, C, Enc(pk, x))) = C(x)\)
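As a reference point, the FHE syntax can be written down as a minimal Python interface sketch; the method names mirror \((Gen, Enc, Dec, Eval)\) but are otherwise assumptions, not any particular library's API.

```python
from typing import Any, Callable, Protocol, Tuple

class FHE(Protocol):
    """Abstract interface of an FHE scheme (names mirror Gen/Enc/Dec/Eval)."""

    def Gen(self, k: int) -> Tuple[Any, Any]:
        """Return a key pair (pk, sk) for security parameter k."""
        ...

    def Enc(self, pk: Any, x: Any) -> Any:
        """Encrypt x under pk; (Gen, Enc, Dec) is CPA secure as a PKE."""
        ...

    def Dec(self, sk: Any, c: Any) -> Any:
        """Decrypt ciphertext c with sk."""
        ...

    def Eval(self, pk: Any, C: Callable[[Any], Any], c: Any) -> Any:
        """Homomorphically apply C to the plaintext of c, so that
        Dec(sk, Eval(pk, C, Enc(pk, x))) == C(x)."""
        ...
```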
We show the following:
\(i\mathcal{O}\) for \(NC_1\) + Fully Homomorphic Encryption (FHE) \(\Rightarrow\) \(i\mathcal{O}\) for all polynomial size circuits.
The main idea is that data and programs are interchangeable. We will need a fully homomorphic encryption scheme, which allows us to apply a function to an encrypted message and obtain an encryption of the function applied to that message. Informally, instead of using our \(i\mathcal{O}\) obfuscator to obfuscate the circuit itself, we will encrypt the circuit and then obfuscate the decryption algorithm. Someone holding the encrypted circuit and the obfuscated decryption algorithm can evaluate it on an input \(x\) as follows: construct the function \(f_x\) which, given a circuit \(C\), returns \(C(x)\) (that is, a universal circuit evaluator with the input \(x\) hardcoded); use FHE evaluation to apply \(f_x\) to the encryption of \(C\), obtaining an encryption of \(C(x)\); and then decrypt to recover \(C(x)\). However, since \(C\) is only ever seen in encrypted form, the evaluator will not be able to discover anything about the circuit itself.
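The following sketch records this intuition only, under heavy assumptions: `obfuscate_nc1` is a hypothetical \(i\mathcal{O}\) candidate for shallow circuits, `fhe` follows the interface sketched above, and circuits are plain Python callables. It shows the data flow; the actual construction (in particular, what the obfuscated program must verify before decrypting) is the subject of the next lecture.

```python
def amplify_io(obfuscate_nc1, fhe, k, C):
    """Intuition only: publish an FHE encryption of C plus an obfuscated
    decryption program, instead of obfuscating C directly.

    obfuscate_nc1 -- hypothetical iO candidate for shallow (NC1) circuits
    fhe           -- object with the (Gen, Enc, Dec, Eval) interface above
    C             -- the circuit to hide, here a plain Python callable
    """
    pk, sk = fhe.Gen(k)
    enc_C = fhe.Enc(pk, C)                 # the circuit is only ever published encrypted

    def decrypt_result(ct):                # the program handed to the obfuscator
        return fhe.Dec(sk, ct)             # (the real construction adds checks here)

    obf_dec = obfuscate_nc1(k, decrypt_result)

    def evaluate(x):
        f_x = lambda circuit: circuit(x)   # universal evaluator with x hardcoded: f_x(C) = C(x)
        ct = fhe.Eval(pk, f_x, enc_C)      # an encryption of C(x)
        return obf_dec(ct)                 # recover C(x) without ever seeing C

    return evaluate
```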