Quantum One-Time Programs
Abstract
One-time programs (Goldwasser, Kalai and Rothblum, CRYPTO 2008) are functions that
can be run on any single input of a user’s choice, but not on a second input. Classically, they are
unachievable without trusted hardware, but the destructive nature of quantum measurements
seems to provide a quantum path to constructing them. Unfortunately, Broadbent, Gutoski
and Stebila showed that even with quantum techniques, a strong notion of one-time programs,
similar to ideal obfuscation, cannot be achieved for any non-trivial quantum function. On the
positive side, Ben-David and Sattath (Quantum, 2023) showed how to construct a one-time
program for a certain (probabilistic) digital signature scheme, under a weaker notion of one-
time program security. There is a vast gap between achievable and provably impossible notions
of one-time program security, and it is unclear what functionalities are one-time programmable
under the achievable notions of security.
In this work, we present new, meaningful, yet achievable definitions of one-time program
security for probabilistic classical functions. We show how to construct one-time programs satisfying these definitions for all functions in the classical oracle model and for constrained pseudorandom functions in the plain model. Finally, we examine the limits of these notions: we
show a class of functions which cannot be one-time programmed in the plain model, as well
as a class of functions which appears to be highly random given a single query, but whose
one-time program form leaks the entire function even in the oracle model.
2 Technical Overview
2.1 Definitional Works
2.2 Positive Results
2.3 Impossibility Results
2.4 Concurrent Work and Related Works
3 Preliminaries
3.1 Quantum Information and Computation
3.2 Quantum Query Model
3.3 Compressed Random Oracles
3.4 Subspace States and Direct Product Hardness
3.5 Tokenized Signature Definitions
8 Applications
8.1 Signature Tokens
8.2 One-Time NIZK Proofs
8.3 Future Work: One-Time MPC
8.4 One-Time Programs for Verifiable Functions Imply Quantum Money
9 References
A Missing Proofs for Families of Single-Query Unlearnable Functions
A.1 Pairwise Independent and Highly Random Functions
D Additional Prelims
D.1 NIZK
1 Introduction
The notion of one-time programs, first proposed by Goldwasser, Kalai and Rothblum [GKR08a],
allows us to compile a program into one that can be run on a single input of a user’s choice,
but only one. If realizable, one-time programs would have wide-ranging applications in software
protection, digital rights management, electronic tokens and electronic cash. Unfortunately, one-
time programs immediately run into a fundamental barrier: software can be copied multiple times
at will, and therefore, if it can be run on a single input of a user’s choice, it can also be run on as
many inputs as desired.
To circumvent this barrier, [GKR08a] designed a one-time program with the assistance of a
specialized stateful hardware device that they called a one-time memory. A one-time memory is a
device instantiated with two strings (𝑠0 , 𝑠1 ); it takes as input a choice bit 𝑏 ∈ {0, 1}, outputs 𝑠𝑏 and
then self-destructs. Using one-time memory devices, Goldwasser et al. showed how to compile
any program into a one-time program, assuming one-way functions exist. Goyal et al. [GIS+ 10]
extended these results by achieving unconditional security against malicious parties and using a
weaker type of one-time memories that store single bits. Notwithstanding these developments,
the security of these schemes rests on shaky grounds: security relies on how much one is willing
to trust the impenetrability of these hardware devices in the hands of a motivated and resourceful
adversary who may be willing to mount sophisticated side-channel attacks. This brings us to the motivating question of our paper: is there any other way to construct one-time programs?
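For intuition, the stateful behavior of a one-time memory can be sketched in a few lines of code. This is purely an illustrative model of the hardware assumption (the class name and error handling are our own); real security rests on the device being tamper-proof, not on a software check:

```python
class OneTimeMemory:
    """Sketch of a one-time memory: holds (s0, s1), reveals s_b for a
    single choice bit b, then self-destructs. Illustrative model only."""

    def __init__(self, s0, s1):
        self._secrets = (s0, s1)
        self._destroyed = False

    def read(self, b):
        if self._destroyed:
            raise RuntimeError("one-time memory has self-destructed")
        if b not in (0, 1):
            raise ValueError("choice bit must be 0 or 1")
        out = self._secrets[b]
        self._secrets = None        # self-destruct: erase both strings
        self._destroyed = True
        return out
```

A second `read` raises an error, modeling the self-destruct; the unread string s_{1-b} is never revealed.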
One might hope that the quantum no-cloning theorem [WZ82] gives us a solution. The
no-cloning theorem states that quantum information cannot be generically copied, so if one can
encode the given program into an appropriate quantum state, one might expect to circumvent the
barrier. However, there is a simple impossibility result by Broadbent, Gutoski and Stebila [BGS13]
that rules out quantum one-time versions of any deterministic program. Indeed, given a candidate
quantum one-time program state |𝜓𝑓 ⟩, an adversary can evaluate 𝑓 many times on different inputs
as follows: it first evaluates the program on some input 𝑥, measures the output register to obtain
𝑓 (𝑥). Since 𝑓 is deterministic, the measurement does not disturb the state of the program at all
(if the computation is perfectly correct). The adversary then uncomputes the first evaluation,
restoring the initial program state. She can repeat this process on as many inputs as she wishes.
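To make the rerun attack concrete, here is a toy statevector simulation (our own illustrative example, not the paper's formal model): a 3-qubit system with input, output, and program registers, a permutation-matrix evaluation unitary, and a deterministic toy f, showing that the output measurement is gentle and that uncomputation restores the program state exactly:

```python
import numpy as np

def f(x):                    # a toy deterministic function, f(x) = NOT x
    return x ^ 1

# Evaluation unitary U: |x>_inp |u>_out |p>_prog -> |x>_inp |u XOR f(x)>_out |p>_prog
U = np.zeros((8, 8))
for x in (0, 1):
    for u in (0, 1):
        for p in (0, 1):
            src = (x << 2) | (u << 1) | p
            dst = (x << 2) | ((u ^ f(x)) << 1) | p
            U[dst, src] = 1.0

basis = lambda b: np.eye(2)[b]
prog = np.array([1.0, 1.0]) / np.sqrt(2)      # candidate "program state"

# Evaluate on x = 0, then check that the output measurement is deterministic.
state = U @ np.kron(np.kron(basis(0), basis(0)), prog)
p_fx = sum(abs(state[(x << 2) | (f(0) << 1) | p]) ** 2
           for x in (0, 1) for p in (0, 1))
assert np.isclose(p_fx, 1.0)     # measuring out yields f(0) with probability 1,
                                 # so the measurement does not disturb the state
state = U.T @ state              # uncompute (U is a real permutation matrix)
assert np.allclose(state, np.kron(np.kron(basis(0), basis(0)), prog))
```

Since the program state is restored exactly, the adversary can repeat the evaluation on any other input, which is precisely the observation behind the [BGS13] impossibility for deterministic programs.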
While this impossibility result rules out one-time programs for deterministic functionalities, it raises the following natural question: can we construct quantum one-time programs for randomized functions 𝑓 : 𝒳 × ℛ → 𝒴 that let the user choose the input 𝑥 ∈ 𝒳 but not the randomness 𝑟 ∈ ℛ? One
might hope that by forcing the evaluation procedure to utilize the inherent randomness of quantum
information in sampling 𝑟 ← ℛ, measuring the output would collapse the program state in a way
that does not allow further evaluations. However, once again, [BGS13] showed that it is impossi-
ble to compile any quantum channel into a one-time program, unless it is (essentially) learnable with
just one query. This is a much more general impossibility; in fact, it rules out non-trivial one-time
programs for classical randomized functions. (We refer the reader to Section 2 for a description of
this impossibility result.)
On the other hand, more recently, Ben-David and Sattath [BDS23] demonstrated the first in-
stance of a one-time program for a certain randomized function. In particular, they construct a
digital signature scheme where the (randomized) signing procedure can be compiled into a one-
time program that the user can use to generate a single signature for a message of her choice.
At a first glance, this positive result might seem like a contradiction to the [BGS13] impossi-
bility; however, that is not so, and the difference lies in which definition of one-time programs
one achieves. Ben-David and Sattath [BDS23] achieve a much weaker notion of one-time security
than what was proven to be impossible by [BGS13]. On the one hand, [BGS13] demanded that
an adversarial user should not be able to do anything other than evaluate the one-time program
on a single input, an ideal obfuscation-like guarantee [Had00, BGI+ 01]. On the other hand, the
positive result of [BDS23] only claimed security in the sense that an adversarial user cannot output
two different valid signatures.
The starting point of this paper is that there is a vast gap between these two security no-
tions. Within the gap, one could imagine several meaningful and useful intermediate notions of
quantum one-time programs for classical randomized functions. For example, strengthening the
[BDS23] definition, one could imagine requiring that the user should not even be able to verify the
correctness of two input-output pairs (and not just be unable to produce them). Such a definition
is a meaningful strengthening in the context of indistinguishability games (such as in pseudoran-
dom functions) rather than unpredictability games (such as in digital signatures). One could also
imagine realizing one-time programs for a wider class of functions than the signature tokens of
[BDS23].
In this work, we revisit notions of quantum one-time programs and make progress on these
questions. We propose a number of security notions of quantum one-time programs for random-
ized functions; give constructions both in the plain model and a classical oracle model; and exam-
ine the limits of these notions by showing negative results. We next describe our contributions in
more detail.
pass the above impossibility results.
We additionally introduce a weaker but highly useful security notion, which we call the operational definition, in which the adversary cannot "evaluate" twice given a one-time program².
Constructions and Positive Results. We give a very generic construction for one-time sampling programs in the classical oracle model³, inspired by the one-time signature scheme in [BDS23]. We
allow an honest user to choose its own input and then generate a random string by measuring a
"signature token" state. The evaluation is on the user’s input together with this freshly generated
randomness.
In particular, an honest evaluator does not need to run a classical circuit coherently on a quantum input; in our construction, it only needs quantum memory and a single measurement to evaluate the program. An adversary, by contrast, will likely need the power to evaluate large-depth classical circuits coherently on quantum states. We prove security under the single-effective-query simulation-based definition.
Theorem 1.1 (Informal). There exists a secure one-time sampling program for all functions (with sufficiently long randomness) in the classical oracle model, with respect to our simulation-based, single-effective-query one-time sampling security.
We also instantiate the classical oracle using indistinguishability obfuscation, to get a com-
piler in the plain model, and prove its security for the class of pseudorandom functions under an
operational security definition for cryptographic functionalities.
Theorem 1.2 (Informal). Assuming post-quantum iO and LWE (or alternatively subexponentially secure
iO and OWFs), there exists one-time sampling programs for constrained PRFs.
Impossibilities. To complement our constructions in the classical oracle model and the plain
model, we also give two new negative results. The first negative result shows we cannot hope
to one-time program all randomized functionalities in the plain model, even under the weakest
possible operational security definitions. This impossibility is inspired by the work of [AP21,
ABDS20]. We tweak the idea to work with randomized circuits that can only be evaluated once.
Theorem 1.3 (Informal). Assuming LWE and quantum FHE, there exists a family of circuits with high min-entropy outputs for which no secure one-time sampling programs exist.
We also show that having high min-entropy outputs is not a sufficient condition for a secure one-time program. Our second impossibility result shows that there exists a family of randomized functions that has high min-entropy and is unlearnable under a single physical query, yet cannot be one-time programmed even in the classical oracle model, even under the weakest possible operational security definitions⁴.
We summarize the definitions presented in this work and their corresponding impossibilities and/or constructions in Figure 1. We recommend that readers return to this figure after going through the technical overview.
2. Throughout the work, we may use the terms "one-time programs" and "one-time sampling programs" interchangeably; both refer to one-time sampling programs unless otherwise specified.
3. A classical oracle is a classical circuit that can be accessed coherently by quantum users in a black-box manner.
4. This function is securely one-time programmable under the single-effective-query simulation-based definition, but in a "meaningless" sense, since both the simulator and the real-world adversary can fully learn the functionality. This demonstrates the separations and relationships between several of our definitions.
• Single physical query, quantum-output, simulation-based (Definition 4.4): strong impossibility in the oracle model (Section 2.1) [BGS13]; construction for single-physical-query learnable (trivial) functions only.
• Single physical query, classical-output, simulation-based (Definition 4.6): impossibility for generic constructions in the oracle model (Section 2.3, Section 7.3); no construction (N/A).
• Single effective query, quantum-output, simulation-based (Definition 4.8): impossibility for generic constructions in the plain model (Section 2.3, Section 7); construction for all functions (with proper randomness length) in the classical oracle model.
• Operational definitions (Section 4.4): impossibility for generic constructions in the plain model (Section 2.3, Section 7); constructions for random functions in the classical oracle model and for constrained PRFs in the plain model.
Figure 1: Definitions with Impossibilities and Constructions. The exact impossibility results and positive results for operational definitions depend on which definition of single-query model we work with. See Section 4.3, Section 4.4, and Section 7 for details.
Applications. Using the techniques we developed for one-time programs, we construct the fol-
lowing one-time cryptographic primitives:
• One-Time Signatures. We compile a wide class of existing signature schemes to add sig-
nature tokens, which allow a delegated party to sign exactly one message of their choice.
Notably, our construction only changes the signing process while leaving the verification
almost unmodified, unlike [BDS23]’s construction. Thus, it enables signature tokens for ex-
isting schemes with keys which are already distributed.
• One-Time NIZK Proofs. We show how a proving authority can delegate to a subsidiary the
ability to non-interactively prove a single (true) statement in zero-knowledge.
• Public-Key Quantum Money. We show that one-time programs satisfying a mild notion of
security imply public-key quantum money.
2 Technical Overview
2.1 Definitional Works
First Attempt at Defining One-Time Sampling Programs. As we discussed in the introduction,
we cannot achieve one-time security for deterministic classical functions without hardware as-
sumptions, even after encoding them into quantum states: by applying the gentle measurement
lemma [Aar04, Win99], any adversary can repair the program state after a measurement on the
program’s output that gives a deterministic outcome.
We therefore resort to considering classical randomized computation, which we model as the following procedure: the user (adversary) picks its own input 𝑥; the program samples a random string 𝑟 and outputs the evaluation 𝑓(𝑥, 𝑟) to the user, for some deterministic function 𝑓. Note that it is essential that the user does not get to pick their own randomness 𝑟 – otherwise the evaluation is deterministic again and is subject to the above attack.
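In code, a single honest evaluation of this model might look like the following sketch (the function names and the 128-bit randomness length are our own illustrative choices):

```python
import secrets

def honest_evaluation(f, x, rand_bits=128):
    """One honest run: the user supplies x, but the randomness r is
    sampled by the program itself and is not under the user's control."""
    r = secrets.randbits(rand_bits)   # sampled by the program, not the user
    return r, f(x, r)

# Example with a toy deterministic f(x, r):
r, y = honest_evaluation(lambda x, r: (x + r) % 1000, x=7)
assert y == (7 + r) % 1000
```

The point of the model is exactly this asymmetry: x is adversarially chosen, r is not.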
For correctness, we need to guarantee that after an honest evaluation, the user gets the outcome
𝑓 (𝑥, 𝑟) for its own choice of 𝑥 and a uniformly random 𝑟. For security, the hope is that when
the output of 𝑓 looks "random enough" (e.g. 𝑓 is a hash function or a pseudorandom function),
the adversary should not be able to do more than evaluating the program honestly once. We
discuss several candidate definitions, the corresponding impossibility (no-go) results as well as
our solutions that circumvent the impossibilities.
Ideally, we would establish a simulation-based security definition. Such a definition requires the existence of a QPT algorithm 𝖲𝗂𝗆 which can reproduce the adversary's real-world view given a single query to 𝑓:

𝖮𝖳𝖯(𝑓) ≈ 𝖲𝗂𝗆^{𝑓_1}

where 𝑓_1 denotes that 𝖲𝗂𝗆 may query 𝑓 a single time. Indeed, such a definition is formalized, and subsequently ruled out, by Broadbent, Gutoski, and Stebila [BGS13].
This definition can be adapted to sampling programs by sampling 𝑓 from a function family ℱ at the start of the experiment. Additionally, to prevent a trivial definition which could be satisfied by 𝖲𝗂𝗆 choosing its own 𝑓, the distinguisher gets access to the sampled 𝑓. Unfortunately, this candidate definition suffers from impossibility results of its own.
Barriers for Stateless One-Time Programs. Even more problematic, the above definition en-
counters impossibilities even in the oracle model, where we ensure that the program received by
𝒜 consists of oracle-aided circuits, preventing the non-black-box attack described earlier from ap-
plying.
This limitation primarily arises from the fact that 𝖲𝗂𝗆 is given a stateful oracle, while 𝒜 is
provided with a stateless one-time program (which includes a stateless oracle). To illustrate,
consider the following 𝒜 and distinguisher 𝒟: 𝒜 receives a possibly oracle-aided program and
simply passes the program itself to 𝒟. Let 𝒪𝑓 be a stateless oracle for 𝑓 that outputs 𝑦 = 𝑓(𝑥, 𝑟) on any input (𝑥, 𝑟) and is not restricted in the number of queries it can answer. 𝒟 is given arbitrary oracle access to 𝒪𝑓, so 𝒟 can perform the following attack using gentle measurement (Lemma 3.2) and uncomputation:
1. Evaluate the program given by 𝒜 on |𝑥1⟩𝗂𝗇𝗉 |0⟩𝗈𝗎𝗍 |0⟩𝖼𝗁𝖾𝖼𝗄, where the input register 𝗂𝗇𝗉 contains 𝑥1, 𝑟1: some arbitrary 𝑥1 of 𝒟's choice and some randomness 𝑟1 sampled by the program. 𝗈𝗎𝗍 is an output register and 𝖼𝗁𝖾𝖼𝗄 is an additional register in 𝒟's memory.
2. Get outcome |𝑥1 ⟩𝗂𝗇𝗉 |𝑟1 , 𝑦1 = 𝑓 (𝑥1 , 𝑟1 )⟩𝗈𝗎𝗍 |0⟩𝖼𝗁𝖾𝖼𝗄 .
3. 𝒟 does not proceed to measure the register 𝗈𝗎𝗍. Instead, it performs a gentle measurement by checking whether the 𝑦1 value in 𝗈𝗎𝗍 equals the correct 𝑦1 = 𝑓(𝑥1, 𝑟1), writing the outcome into register 𝖼𝗁𝖾𝖼𝗄; it can do so because it has access to 𝒪𝑓. Then it measures the bit in register 𝖼𝗁𝖾𝖼𝗄.
4. Since the above measurement gives outcome 1 with probability 1, 𝒟 can uncompute the
above results and make sure that the program state is undisturbed. Then, it can evaluate the
program again on some different 𝑥2 of its choice.
5. Now consider the ideal world: 𝖲𝗂𝗆 cannot produce a program that contains more information than what is given by a single quantum oracle access to 𝑓. Therefore, unless 𝖲𝗂𝗆 can "learn" 𝑓 in a single quantum query and produce a program that performs very closely to a real program on most inputs, 𝒟 can easily detect the difference between the two worlds.
The above argument is formalized in [BGS13], which rules out stateless one-time programs for quantum channels, even in the oracle model, unless the function can be learned in a single query (for example, a constant function). The simulation-based definition discussed above can be viewed as a special case of [BGS13]'s definition. Only in this trivial case can 𝖲𝗂𝗆 fully recover the functionality of 𝑓 and produce a program that looks like a real-world program, since both 𝖲𝗂𝗆 and 𝒜 can learn everything about 𝑓.
To get around the above oracle-model attack, we first consider the following weakening: what
if we limit both the adversary and simulator to output only classical information? Intuitively, this re-
quires both 𝒜 and 𝖲𝗂𝗆 to dequantize and "compress" what they can learn from the program/oracle
into a piece of classical information, so that 𝒜 cannot output the entire functionality unless it has "learned" the classical description of the functionality. However, we will show in Section 2.3 that there even exists a function with high min-entropy output whose classical description can be "learned" given any stateless, oracle-based one-time sampling program, but is unlearnable given only a single query. Thus we will need to explore other avenues.
These impossibility results appear to stem more from a definitional limitation than a funda-
mental obstacle. The adversary is always given a stateless program, but the oracle given to the
simulator is by definition strongly stateful: it shuts down after answering any single query (we
call such an oracle a single physical query oracle). Therefore, 𝖲𝗂𝗆 is more restricted than 𝒜 in the real world.
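The strongly stateful oracle that the simulator receives can be caricatured classically as follows (an illustrative sketch of our own; the real object answers a single quantum query, which this classical code cannot capture):

```python
class SinglePhysicalQueryOracle:
    """Classical caricature of the single-physical-query oracle: it
    answers exactly one query and then shuts down."""

    def __init__(self, f):
        self._f = f
        self._shut_down = False

    def query(self, x, r):
        if self._shut_down:
            raise RuntimeError("oracle shut down after its single query")
        self._shut_down = True
        return self._f(x, r)
```

Contrast this with the stateless program held by 𝒜, which imposes no such restriction; this asymmetry is the definitional limitation discussed above.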
6
The Single-Effective-Query Model. To avoid the above issue, we weaken the restriction on the "single query" which 𝖲𝗂𝗆 can make. Under the traditional one-time security, 𝖲𝗂𝗆 can make merely one physical query, but 𝒜 and 𝒟 can actually make many queries, as long as the measurements they make on those queries are "gentle" (for example, a query where the outcome 𝑓(𝑥, 𝑟) is unmeasured and later uncomputed) or repeated (for example, two classical queries on the same (𝑥, 𝑟)).
In the single-effective-query model, we relax 𝖲𝗂𝗆’s single-physical-query restriction to also al-
low multiple queries, as long as they are "gentle" or repeated. We will define a stateful oracle
𝑓𝖲𝖤𝖰 which tracks at all times which evaluations 𝑓 (𝑥; 𝑟) the adversary has knowledge about. If
𝑓𝖲𝖤𝖰 receives a query to some 𝑥′ while it knows the adversary has knowledge about an evalua-
tion on 𝑥 ̸= 𝑥′ , it will refuse to answer. Using this oracle, we may define single-effective query
simulation-security in the same manner as our previous attempt by giving the simulator access to
the single-effective-query oracle 𝑓𝖲𝖤𝖰 instead of the single-physical-query oracle 𝑓1 :
𝖮𝖳𝖯(𝑓) ≈ 𝖲𝗂𝗆^{𝑓_𝖲𝖤𝖰}
The reader may be concerned that, since 𝑓 is not sampled from any distribution here, this definition is subject to the previously discussed impossibility for deterministic functions. As we will see shortly, the randomization of 𝑓 is directly baked into the definition of 𝑓𝖲𝖤𝖰.
Defining the Single-Effective-Query Oracle. To define the single-effective-query oracle 𝑓𝖲𝖤𝖰, we use techniques from compressed random oracles, which were introduced by Zhandry to analyze security in the quantum-accessible random oracle model (QROM) [Zha19a]. Very roughly speaking, a compressed oracle gives an efficient method of simulating quantum query access to a random oracle on the fly, by lazily sampling responses in superposition which can be "forgotten" as necessary.
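The classical starting point of this idea is ordinary lazy sampling, which can be sketched as follows (the compressed oracle is, very roughly, the purified quantum analogue of this bookkeeping; the class and names are our own):

```python
import secrets

class LazyRandomOracle:
    """Classical lazy sampling: a response is drawn only when a point is
    first queried, and recorded in a database of queried points."""

    def __init__(self, out_bits=64):
        self.database = {}          # recorded (input -> output) points
        self._out_bits = out_bits

    def query(self, x):
        if x not in self.database:
            self.database[x] = secrets.randbits(self._out_bits)
        return self.database[x]

oracle = LazyRandomOracle()
y = oracle.query("point")
assert oracle.query("point") == y    # consistent on repeated queries
assert len(oracle.database) == 1     # only queried points are recorded
```

The quantum difficulty, which the compressed oracle resolves, is that superposition queries can also "unquery" points, so the database must be able to forget entries coherently.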
The first main idea in the [Zha19a] compressed oracle technique is to take a purified view of the joint state of the adversary's query register and the oracle: evaluating a random function in the adversary's view is equivalent to evaluating some function 𝐻 drawn from a uniform superposition over all functions (of the corresponding input and output lengths), ∑_𝐻 |𝐻⟩_ℋ. When the adversary makes a query of the form ∑_{𝑥,𝑢} 𝛼_{𝑥,𝑢} |𝑥, 𝑢⟩, the oracle applies the operation

∑_{𝑥,𝑢} 𝛼_{𝑥,𝑢} |𝑥, 𝑢⟩ ⊗ ∑_𝐻 |𝐻⟩ ⟹ ∑_{𝑥,𝑢,𝐻} 𝛼_{𝑥,𝑢} |𝑥, 𝑢 + 𝐻(𝑥)⟩ ⊗ |𝐻⟩
When 𝑓𝖲𝖤𝖰 decides to answer a query |𝑥, 𝑢⟩, it computes |𝑥, 𝑢 ⊕ 𝑓(𝑥; 𝐻(𝑥))⟩ by reading register ℋ_𝑥 in the computational basis. The first query made results in the joint state (under the oracle unitary 𝑈_{𝑓_$})

|𝑥*, 𝑢⟩_𝒬 ⊗ |𝐻_∅⟩ ⟶ ∑_𝑟 ∑_{𝐻 : 𝐻(𝑥*) = 𝑟} |𝑥*, 𝑢 ⊕ 𝑓(𝑥*, 𝑟)⟩_𝒬 ⊗ |𝑟⟩_{ℋ_{𝑥*}} ⊗ ⨂_{𝑥 ∈ 𝒳, 𝑥 ≠ 𝑥*} |𝐻(𝑥)⟩_{ℋ_𝑥}
If 𝑓(𝑥; 𝑟) were to uniquely determine 𝑟, then measuring 𝑓(𝑥*, 𝐻(𝑥*)) would fully collapse register ℋ_{𝑥*} while leaving the others untouched. Afterwards, the single-effective-query oracle 𝑓𝖲𝖤𝖰 could detect which input was evaluated and measured by comparing each register ℋ_𝑥 to the uniform superposition ∑_{𝑟∈ℛ} |𝑟⟩. It could then use this information to decide whether to answer further queries. On the other hand, if there were many collisions 𝑓(𝑥*; 𝑟*) = 𝑓(𝑥*; 𝑟₂*), or the adversary erased its knowledge of 𝑓(𝑥*; 𝑟*) by querying on the same register again, then ℋ_{𝑥*} might not be fully collapsed. In this case, it is actually beneficial that 𝑓𝖲𝖤𝖰 does not completely consider 𝑥* to have been queried, since this represents a "gentle" query which would allow the adversary to continue evaluating a real one-time program.
When we switch to the compressed version of 𝐻, collapsing ℋ_{𝑥*} to |𝑟*⟩ corresponds to recording (𝑥*, 𝑟*) in a database 𝐷. Since the adversary's queries may be in superposition, the database register ℋ may become entangled with the adversary. In other words, the general state of the system is ∑_{𝑎,𝐷} 𝛼_{𝑎,𝐷} |𝑎⟩_𝒜 ⊗ |𝐷⟩_ℋ, where 𝒜 belongs to the adversary and ℋ belongs to 𝑓𝖲𝖤𝖰. Using this view, 𝑓𝖲𝖤𝖰 may directly read the currently recorded query off of its database register to decide whether to answer a new query. The entanglement between the adversary's register and the database register enables 𝑓𝖲𝖤𝖰 to answer or reject a new query 𝑥 precisely when the adversary does not have another outstanding query 𝑥′. As a result, the database register will always contain databases with at most one entry.
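Ignoring superposition entirely, the bookkeeping that 𝑓𝖲𝖤𝖰 performs on its at-most-one-entry database can be caricatured classically as follows. This hypothetical sketch (our own) captures only the "refuse queries on a second input" rule and the consistency of repeated queries, not the quantum gentleness conditions:

```python
import secrets

class SEQOracleSketch:
    """Classical caricature of f_SEQ: the database holds at most one
    entry (x -> r); queries on any other input x' are refused."""

    def __init__(self, f, rand_bits=64):
        self._f = f
        self._rand_bits = rand_bits
        self._db = {}               # at most one recorded (x -> r) entry

    def query(self, x):
        if self._db and x not in self._db:
            return None             # outstanding knowledge about some x' != x
        if x not in self._db:
            self._db[x] = secrets.randbits(self._rand_bits)
        return self._f(x, self._db[x])
```

Repeated queries on the same x are answered consistently (a "repeated" query), while a query on a fresh x′ is refused once some evaluation has been recorded.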
Functions for which SEQ Access is Meaningful. The single-effective-query simulation defini-
tion captures all functions, including those that are trivially one-time programmable. Similarly to
the notion of ideal or virtual black-box obfuscation, any "unlearnability" properties depend on the
interaction of the function with the obfuscation definition. For example, deterministic functions
can be fully learned given access to an SEQ oracle, since measuring evaluations will never restrict
further queries.
Intuitively, a function must satisfy two loose properties in order to have any notion of unlearn-
ability with SEQ access:
• High Randomness. To restrict further queries, learning (via measuring) 𝑓 (𝑥; 𝑟) must col-
lapse the SEQ oracle’s internal state, causing (𝑥, 𝑟) to be recorded in the purified oracle 𝐻.
• Unforgeability. To have any hope that 𝑓 has properties that cannot be learned with SEQ
access, 𝑓 cannot be learnable given, say, a single evaluation 𝑓 (𝑥; 𝑟).
As an example, truly random functions exemplify both of these properties. A truly random func-
tion has maximal randomness on every input and 𝑓 (𝑥; 𝑟) is independent of 𝑓 (𝑥′ ; 𝑟′ ). We formally
explore SEQ access to truly random functions and a few other function families in Section 5.3.
is the security parameter. These parameters ensure that 𝐴 has exponentially many elements but is
still exponentially small compared to the entire space.
At a high-level, our one-time scheme requires an authorized user to query an oracle on sub-
space vectors of 𝐴 or its dual subspace 𝐴⊥ . Let 𝑓 be the function we want to one-time program.
Consider the simple case where 𝑥 is a single bit in {0, 1}. Let 𝐺 be a PRG or an extractor (which can be modeled as a random oracle, since we already work in the oracle model). The one-time program
consists of a copy of the subspace state |𝐴⟩ along with access to the following classical oracle:
𝒪(𝑥, 𝑣) =  𝑓(𝑥, 𝐺(𝑣))   if 𝑥 = 0 and 𝑣 ∈ 𝐴
           𝑓(𝑥, 𝐺(𝑣))   if 𝑥 = 1 and 𝑣 ∈ 𝐴⊥
           ⊥             otherwise
To evaluate on input 𝑥, an honest user measures the state |𝐴⟩ to obtain a uniformly random vector in the subspace 𝐴 if 𝑥 = 0, or applies a binary QFT to |𝐴⟩ and measures to obtain a uniformly random vector in the dual subspace 𝐴⊥ if 𝑥 = 1. It then inputs (𝑥, 𝑣) into the oracle 𝒪 and obtains the evaluation 𝒪(𝑥, 𝑣) = 𝑓(𝑥, 𝐺(𝑣)), where the randomness 𝐺(𝑣) is uniformly random because the subspace vector is passed through the random oracle.
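A classical sketch of this oracle, for illustration only: we use a toy GF(2) span-membership check in place of the paper's formal subspace machinery, and a SHA-256 stand-in for 𝐺 (all names and the 4-dimensional example are our own assumptions):

```python
import hashlib
import numpy as np

def in_span(basis, v):
    """Membership of v in the GF(2) row span of `basis`, via elimination."""
    reduced = []                          # rows with distinct pivot columns
    for b in basis:
        b = np.array(b, dtype=np.uint8) % 2
        for row, piv in reduced:
            if b[piv]:
                b = b ^ row
        nz = np.flatnonzero(b)
        if nz.size:
            reduced.append((b, nz[0]))
    v = np.array(v, dtype=np.uint8) % 2
    for row, piv in reduced:
        if v[piv]:
            v = v ^ row
    return not v.any()

def make_oracle(A_basis, A_perp_basis, f):
    G = lambda v: hashlib.sha256(bytes(v)).digest()   # stand-in for G
    def O(x, v):
        if (x == 0 and in_span(A_basis, v)) or (x == 1 and in_span(A_perp_basis, v)):
            return f(x, G(v))
        return None                                   # the "⊥" branch
    return O

# Toy subspaces of F_2^4: A = span{e1, e2}, A_perp = span{e3, e4}.
O = make_oracle([[1, 0, 0, 0], [0, 1, 0, 0]], [[0, 0, 1, 0], [0, 0, 0, 1]],
                lambda x, r: (x, r[:4]))
assert O(0, [1, 1, 0, 0]) is not None     # v in A, x = 0: accepted
assert O(0, [0, 0, 1, 0]) is None         # v not in A: rejected
assert O(1, [0, 0, 1, 1]) is not None     # v in A_perp, x = 1: accepted
```

The quantum part of the construction, of course, lies in the state |𝐴⟩ that gates which vectors an honest (or adversarial) user can actually produce.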
For security, we leverage an "unclonability" property of the state |𝐴⟩ ([BDS23, BKNY23]) called "direct-product hardness": an adversary given one copy of |𝐴⟩ and polynomially many queries to the above oracle should not be able to produce two vectors 𝑣, 𝑣′ which satisfy either of the following: (1) 𝑣 ∈ 𝐴, 𝑣′ ∈ 𝐴⊥; (2) 𝑣, 𝑣′ ∈ 𝐴 or 𝑣, 𝑣′ ∈ 𝐴⊥ but 𝑣 ≠ 𝑣′.
First, we consider a simpler scenario: the evaluation is destructive to the subspace state once the user has obtained the outcome 𝑓(𝑥, 𝐺(𝑣)), provided the function 𝑓 behaves randomly enough that measuring the output 𝑓(𝑥, 𝐺(𝑣)) is (computationally) equivalent to having measured the subspace vector 𝑣. Now it will be hard for the user to make a second query to the oracle 𝒪 on a different input (𝑥′, 𝑣′) with either 𝑥 ≠ 𝑥′ or 𝑣 ≠ 𝑣′, because doing so would break the direct-product hardness property mentioned above.
More generally, however, 𝑓 may be only somewhat random, or the adversary may perform
superposition queries. In these cases, |𝐴⟩ will be only partially collapsed in the real program,
potentially allowing further queries. This partial collapse also corresponds to a partial collapse of
the single-effective-query oracle’s database register, similarly restricting further queries.
To establish security, the main gap that the simulator needs to bridge is the usage of a subspace
state |𝐴⟩ versus a purified random oracle to control query access. Additionally, the real world eval-
uates 𝑓 (𝑥, 𝐺(𝑣)), where 𝑣 is a subspace vector corresponding to 𝑥, while the ideal world evaluates
𝑓 (𝑥, 𝐻(𝑥)) directly.5 If we were to purify 𝐺 as a compressed oracle, then |𝐴⟩ collapsing corre-
sponds to 𝐺 recording some subspace vector 𝑣 in its database. At a high level, this allows the
simulator to bridge the aforementioned gap by using a careful caching routine to ensure that |𝐴⟩
collapses/𝑣 is recorded in the cache if and only if 𝑥 is recorded in 𝐻. Using the direct product
hardness property, we can be confident that at most one 𝑣 and corresponding 𝑥 are recorded in the
simulator. Thus, to show that the simulator is indistinguishable from the real one-time program,
we can simply swap the role of 𝑥 and 𝑣 in the oracle, changing between 𝐺 and 𝐻. We provide
more details in Section 5.1.
5. Although 𝐻 and 𝐺 are both random oracles, we differentiate them to emphasize that they act on different domains.
Operational Security for Cryptographic Functionalities. While the classical oracle construction is clean, implementing the oracle itself with concrete code bumps into the plain-model versus black-box obfuscation barrier again. We cannot hope to make one-time sampling programs for all functions (not even for all high min-entropy output functions) in the plain model, due to the counter-example we provide in the first paragraph of Section 2.3. Moreover, the simulation definition we achieve above captures functions all the way from those that can be meaningfully one-time programmed, like a random function, to "meaningless" one-time programs of, for example, a constant function, which can be learned in a single query.
One may wonder: what meaningful functionalities can we implement one-time programs for, and what security notions can we realize for them in the plain model?
We consider a series of relaxed security notions we call "operational one-time security definitions", which are implied by the simulation-based definition. The intuition behind these definitions is to capture the guarantee that no QPT adversary can evaluate the program twice.
Consider a cryptographic functionality 𝑓. We define the security game as follows: the QPT adversary 𝒜 receives a one-time program for 𝑓 and outputs its own choice of two input-randomness pairs (𝑥1, 𝑟1), (𝑥2, 𝑟2). For each (𝑥𝑖, 𝑟𝑖), 𝒜 needs to answer some challenges from the challenger with respect to the cryptographic functionality 𝑓. The security guarantee is that 𝒜's probability of winning both challenges for 𝑖 ∈ {1, 2} is upper bounded by its advantage in a cryptographic security game of winning a single such challenge, but without having access to the one-time program.
For some cryptographic functionalities, this cryptographic challenge is simply to compute 𝑦𝑖 =
𝑓 (𝑥𝑖 , 𝑟𝑖 ). A good example is a one-time signature scheme: 𝒜 produces two messages of its own
choice, but it should not be able to produce valid signatures for both of them with non-negligible
probability.
More generically, 𝒜 produces two input-randomness pairs and gives them to the challenger. The challenger then prepares challenges independently for 𝑖 ∈ {1, 2}, and 𝒜 has to provide answers such that 𝒜's inputs, the challenges, and the final answers together satisfy a predicate. In the above signature example, the predicate simply verifies that the signature is valid for the message.
A slightly more contrived predicate answers challenges in the pseudorandomness game
of a PRF: given an 𝖮𝖳𝖯 for a PRF, no QPT adversary can produce two input-randomness pairs
(𝑥1, 𝑟1), (𝑥2, 𝑟2) of its own choice such that it wins the pseudorandomness game with respect
to both of these inputs. That is, the challenger flips an independent uniform bit for each
𝑖 ∈ {1, 2} to decide whether to let 𝑦𝑖 = 𝖯𝖱𝖥(𝑥𝑖, 𝑟𝑖) or to let 𝑦𝑖 be truly random. Security says that 𝒜's
overall advantage should not be noticeably larger than 1/2: 𝒜 can always evaluate once and
answer one of the challenges with probability 1, but for the other challenge it can only make a
random guess.
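To make the game concrete, here is a minimal classical mock-up of the challenger's two-challenge phase. The HMAC-based `prf` and the function names are illustrative stand-ins for this sketch, not the scheme's actual primitives.

```python
import hmac, hashlib, secrets

def prf(key: bytes, x: bytes, r: bytes) -> bytes:
    """Stand-in PRF: HMAC-SHA256 over x || r."""
    return hmac.new(key, x + r, hashlib.sha256).digest()

def challenge_phase(key, pairs):
    """Challenger side of the two-input pseudorandomness game.

    For each (x_i, r_i) it flips an independent uniform bit b_i: if
    b_i = 0 it returns PRF_k(x_i, r_i), else a fresh uniform string.
    The adversary must later guess both bits."""
    challenges = []
    for (x, r) in pairs:
        b = secrets.randbelow(2)
        y = prf(key, x, r) if b == 0 else secrets.token_bytes(32)
        challenges.append((b, y))
    return challenges

# The adversary wins only if it guesses b_1 AND b_2 correctly; one-time
# security says its success probability is at most 1/2 + negl, since a
# single evaluation of the OTP resolves only one of the two challenges.
```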
One-Time Program for PRFs in the Plain Model. However, one cannot hope to achieve a one-
time program construction that is secure for all functions in the plain model, even if we
restrict ourselves to the weakest operational definition and to functions with high min-entropy
output. As mentioned above, we give a counter-example circuit (under some
mild computational assumptions) in Section 2.3.
We therefore turn to constructions for specific functionalities in the plain model
and give a secure construction for a family of PRFs, with respect to the aforementioned security
guarantee: no QPT adversary can produce two input-randomness pairs (𝑥1 , 𝑟1 ), (𝑥2 , 𝑟2 ) of its own
choice such that it can win the pseudorandomness game with respect to both of these inputs.
To replace the oracle in the above construction, we use 𝗂𝖮 (indistinguishability
obfuscation, [BGI+ 01]), which guarantees that the obfuscations of two functionally-equivalent
circuits are indistinguishable.
The construction in the plain model bears similarities to the one in the oracle model. Let
𝖯𝖱𝖥𝑘 (·) be the PRF serving as our main functionality. In our actual construction, we apply another 𝖯𝖱𝖥 𝐺
to the subspace vector 𝑣 to extract randomness, but we omit it here for clarity of presentation. We
give out a subspace state |𝐴⟩ and an 𝗂𝖮 obfuscation of a program.
The following program is a simplification of the actual program we put into 𝗂𝖮:
𝖯𝖱𝖥𝑘,𝐴 (𝑥, 𝑣) =
  𝖯𝖱𝖥𝑘 (𝑥, 𝑣)  if 𝑥 = 0, 𝑣 ∈ 𝐴
  𝖯𝖱𝖥𝑘 (𝑥, 𝑣)  if 𝑥 = 1, 𝑣 ∈ 𝐴⊥
  ⊥            otherwise.
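As a sanity check of the branching logic (not of the actual obfuscated construction), the following sketch evaluates the simplified program classically, testing subspace membership by Gaussian elimination over 𝔽₂. The helper names `in_span_f2` and `prf_k_A` are hypothetical.

```python
import numpy as np

def in_span_f2(basis: np.ndarray, v: np.ndarray) -> bool:
    """Check v ∈ span(basis) over F_2: appending v must not raise the rank."""
    def rank2(A):
        A = A.copy() % 2
        r = 0
        for c in range(A.shape[1]):
            piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
            if piv is None:
                continue
            A[[r, piv]] = A[[piv, r]]
            for i in range(A.shape[0]):
                if i != r and A[i, c]:
                    A[i] ^= A[r]
            r += 1
        return r
    return rank2(np.vstack([basis, v]) % 2) == rank2(basis)

def prf_k_A(prf, k, A, A_perp, x: int, v: np.ndarray):
    """Branching program put under iO (simplified): evaluate the PRF only
    when the subspace vector matches the input bit."""
    if x == 0 and in_span_f2(A, v):
        return prf(k, x, v)
    if x == 1 and in_span_f2(A_perp, v):
        return prf(k, x, v)
    return None  # None plays the role of ⊥
```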
To show security, we use a constrained PRF ([BKW17]): a PRF key 𝑘𝐶 constrained
to a circuit 𝐶 allows evaluation on inputs 𝑥 that satisfy 𝐶(𝑥) = 1 and outputs ⊥ on the in-
puts that do not. Constrained pseudorandomness security guarantees that the adversary
should not be able to distinguish between (𝑥, 𝖯𝖱𝖥𝑘 (𝑥)) and (𝑥, 𝑦 ← random), where 𝒜 can choose 𝑥
such that 𝐶(𝑥) = 0 and 𝑘 is the unconstrained key.
Our proof uses a hybrid argument to show that a successful adversary can be used to violate
constrained pseudorandomness security. Let us denote 𝐴0 = 𝐴, 𝐴1 = 𝐴⊥ . First we invoke the
security of 𝗂𝖮: we change the above program to one using a constrained key 𝑘𝐶 to evaluate the 𝖯𝖱𝖥,
where 𝐶(𝑥, 𝑣) = 1 if and only if 𝑣 ∈ 𝐴𝑥 . This circuit is functionally equivalent to the original
one. Next, we invoke a computational version of the subspace-state direct-product hardness property:
we change the one-time security game to reject any pair of adversarially chosen inputs (𝑥1 , 𝑣1 ), (𝑥2 , 𝑣2 )
such that both 𝑣1 ∈ 𝐴𝑥1 and 𝑣2 ∈ 𝐴𝑥2 hold. Such a rejection happens only with negligible proba-
bility, by the direct-product hardness property. Finally, the adversary must produce
some (𝑥, 𝑣) with 𝐶(𝑥, 𝑣) = 0 and distinguish 𝖯𝖱𝖥𝑘 (𝑥, 𝑣) from a random value; we therefore use
it to break constrained pseudorandomness security.
More Applications and Implications: Generic Signature Tokens, NIZK, Quantum Money. We
show how to lift signature schemes satisfying a property called blind unforgeability to
one-time security. Unlike the [BDS23] signature token scheme, where the verification key has to
be updated each time a one-time signing token is delegated to some delegatee, our signature token
scheme can use an existing public verification procedure.
Apart from the above one-time PRF in the plain model, we also instantiate one-time NIZK
from iO and LWE in the plain model, using a construction similar to the one-time PRF and the
NIZK from iO in [SW14]. The proof requires more careful handling because the NIZK proof is
also publicly verifiable.
Finally, we show that one-time programs for publicly verifiable functionalities (e.g. signatures, NIZK)
imply public-key quantum money. Despite the destructible nature of the one-time program, we
can design a public verification procedure that gently tests a program's capability to compute a
function, and use a one-time program token state as the banknote.
2.3 Impossibility Results
Impossibility Result in the Plain Model. In this paragraph, we return to the discussion of
impossibility results from the paragraph "Barriers for stateless one-time programs" (Section 2.1). We
describe the high-level idea behind the program used to show an impossibility result in the
plain model, inspired by the approach in [AP21, ABDS20]. This impossibility holds even for the
weakest definition we consider: 𝒜 cannot produce two input-output pairs after getting one
copy of the 𝖮𝖳𝖯.
We have provided a table in Figure 1 in Section 1.1 that summarizes the security
definitions we discuss in this work, together with their corresponding impossibility and positive results;
it clarifies the relationships between the several definitions proposed in this work.
We design an encryption circuit 𝐶 with a random "hidden point" such that anyone with non-
black-box access to the circuit can "extract" this hidden point by running a quantum fully homo-
morphic encryption scheme on the one-time program. With only oracle access, however, one
cannot find this hidden point with any polynomial number of queries.
Let 𝖲𝖪𝖤 be a secret-key encryption scheme, and let 𝑎, 𝑏 be two values chosen uniformly at random
from {0, 1}𝑛 . The circuit comes with some classical auxiliary information, given directly to 𝒜 in the
real world (and to 𝖲𝗂𝗆 in the simulated world, in addition to oracle access). Let 𝖰𝖧𝖤 be a quan-
tum fully homomorphic encryption scheme with semantic security. We also need a compute-and-compare
obfuscation (which can be built from LWE): an obfuscation of a circuit 𝖢𝖢[𝑓, 𝑚, 𝑦] that
does the following: on input 𝑥, it checks whether 𝑓 (𝑥) = 𝑦; if so, it outputs the secret message 𝑚,
and otherwise it outputs ⊥. The obfuscation security guarantees that when 𝑦
has high entropy in the view of the adversary, the program is indistinguishable from a dummy
program that always outputs ⊥ (a distributional generalization of point-function obfuscation).
In the auxiliary information, we give out 𝖼𝗍𝑎 = 𝖰𝖧𝖤.𝖤𝗇𝖼(𝑎) along with the encryption and
evaluation keys 𝖰𝖧𝖤.𝗉𝗄 of the QFHE scheme. We also give the compute-and-compare obfuscation of the
following program:
𝖢𝖢[𝑓, (𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝗌𝗄), 𝑏](𝑥) =
  (𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝗌𝗄)  if 𝑓 (𝑥) = 𝑏
  ⊥                  otherwise
where 𝑓 (𝑥) = 𝖲𝖪𝖤.𝖣𝖾𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝖣𝖾𝖼(𝖰𝖧𝖤.𝗌𝗄, 𝑥)). Any adversary with non-black-box access
to the program can homomorphically encrypt the program and then evaluate it to obtain a QFHE en-
cryption of a ciphertext 𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) (doubly encrypted, first by 𝖲𝖪𝖤 and then by 𝖰𝖧𝖤). This ciphertext,
once fed into the above 𝖢𝖢 obfuscation program, yields all the secrets we need to recover
the functionality of the circuit 𝐶.
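The input/output behaviour of the compute-and-compare circuit (before obfuscation) is easy to state in code. This toy sketch only mirrors that functionality; `make_cc` is a hypothetical name and `None` stands in for ⊥.

```python
def make_cc(f, secret_msg, target_y):
    """Compute-and-compare program CC[f, m, y]: release m iff f(x) = y.

    The real object is an *obfuscation* of this circuit; here we only
    model its input/output behaviour."""
    def cc(x):
        return secret_msg if f(x) == target_y else None  # None plays ⊥
    return cc
```

In the attack above, the adversary feeds the doubly encrypted ciphertext into this program; since it decrypts to 𝑏, the program releases both secret keys.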
However, when the adversary is given only oracle access, we can invoke the obfuscation security of the compute-
and-compare program and the semantic security of QFHE to completely remove the information about 𝑏
from the oracle, reducing the functionality to a regular SKE encryption scheme (which
behaves like a random oracle in the classical oracle model). The actual proof is more intricate,
involving a combination of hybrid arguments, quantum query lower bounds, and induction, since
the secret information about 𝑎, 𝑏 is scattered throughout the auxiliary input. We direct readers to Sec-
tion 7 for details.
Impossibility Result in the Oracle Model. In this section, we show a circuit family with high
min-entropy output which cannot be one-time programmed in the oracle model with respect to the
classical-output simulator definition discussed in Section 2.1. It can be one-time programmed with
respect to our single-effective-query simulator definition, but only in a "meaningless" sense, since
both the simulator and the real-world adversary can fully learn the functionality.
In short, this demonstrates several separations: (1) It separates single-physical-query unlearn-
able functions from single-effective-query unlearnable ones: one can fully recover the functionality
given a single-effective-query oracle, but one cannot output two input-output pairs when given
only one physical query. (2) It is a single-physical-query unlearnable function that cannot be
securely one-time programmed with respect to the operational definition in which we require the
adversary to output two different correct evaluations. (3) A high min-entropy output dis-
tribution is not sufficient to prevent the adversary from evaluating twice (i.e. it is not sufficient for being "meaningfully"
one-time programmed). Note that this result does not contradict our
result on single-effective-query unlearnable families of functions in Appendix A, because this family is
neither truly random nor pairwise-independent.
Now consider the following circuit. An adversary receives an oracle-based one-time program for it,
and a simulator gets only one (physical) query to the functionality's oracle. Both are required to
output a piece of classical information to a distinguisher.
Let 𝑎 be a uniformly random string in {0, 1}𝑛 and let 𝑘 be a random PRF key, where
𝖯𝖱𝖥𝑘 (·) maps {0, 1}2𝑛 → {0, 1}2𝑛 . Let our circuit be the following:
𝑓𝑎,𝑘 (𝑥; 𝑟) =
  (𝑎, 𝖯𝖱𝖥𝑘 (0‖𝑟))  if 𝑥 = 0
  (𝑘, 𝖯𝖱𝖥𝑘 (𝑎‖𝑟))  if 𝑥 = 𝑎        (1)
  (0, 𝖯𝖱𝖥𝑘 (𝑥‖𝑟))  otherwise.
When 𝒜 is given an actual program, even an oracle-aided circuit, 𝒜 can do the follow-
ing: evaluate the program on input 0 (and some randomness 𝑟 it cannot control) to get output
(𝑎, 𝖯𝖱𝖥𝑘 (0‖𝑟)); but instead of measuring the entire output, measure only the first 𝑛 bits to get 𝑎
with probability 1; then evaluate the program again on 𝑎 and some 𝑟′ to obtain (𝑘, 𝖯𝖱𝖥𝑘 (𝑎‖𝑟′ )). It
can now reconstruct the classical description of the entire circuit from 𝑎 and 𝑘.
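A classical shadow of this two-evaluation attack can be sketched as follows. The real attack is quantum (the point being that the first output component is deterministic, so measuring only it is gentle); `make_f` and `recover_description` are illustrative names for this sketch.

```python
import secrets

def make_f(a: bytes, k: bytes, prf):
    """Classical model of the circuit f_{a,k} from Eq. (1)."""
    def f(x: bytes, r: bytes):
        if x == b"\x00":
            return (a, prf(k, b"\x00" + r))   # leaks the hidden point a
        if x == a:
            return (k, prf(k, a + r))          # leaks the PRF key k
        return (b"\x00", prf(k, x + r))
    return f

def recover_description(f):
    """Two-evaluation attack: query on input 0 to learn a (keeping only
    the first component, the analogue of measuring just the first n
    qubits), then query on a to learn k."""
    a, _ = f(b"\x00", secrets.token_bytes(8))
    k, _ = f(a, secrets.token_bytes(8))
    return a, k
```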
But when given only a single physical query, we can remove the information about 𝑘 using a
[BBBV97a]-style argument: for a random 𝑘, no adversary can place non-negligible query weight on 𝑘
with a single query.
and plain model. [GM24] focuses on the oracle model. (2) We show simulation-based security for
our construction in the oracle model, which is likely stronger than the security definition used in
[GM24]; meanwhile, [GM24]’s security definition is likely stronger than the weaker security def-
inition we consider in the oracle model, the operational definition; (3) We show an impossibility
result for generic one-time randomized programs in the plain model; (4) We give constructions for
PRFs and NIZKs in the plain model whereas all constructions in [GM24] are in the oracle model;
(5) We also show a generic way to lift plain signature schemes satisfying a security notion called
blind unforgeability to one-time signature tokens. (6) On the other hand, the oracle construction
in [GM24] is more generic, using any signature token state as the quantum part of the one-time
program, whereas we use the subspace state (namely, the signature token state of [BDS23]).
One-Time Programs. One-time programs were first proposed by Goldwasser, Kalai, and Roth-
blum [GKR08b] and further studied in a number of followup works [GIS+ 10, GG17, ACE+ 22].
Although these are impossible in the plain model, a number of alternative models have been
proposed to enable them, ranging from hardware assumptions to protein sequencing. Broad-
bent, Gutoski, and Stebila [BGS13] asked the question of whether quantum mechanics can act as
a stand-in for hardware assumptions. However, they found that quantum one-time programs are
only possible for “trivial” functions, which can be learned in a single query, and are generally im-
possible for deterministic classical functions. A pair of later works circumvented the impossibility
by allowing the program to output an incorrect answer with some probability [RKB+ 18, RKFW21].
Although their results are quite interesting, they do not give formal security definitions for their
schemes, and they seem to assume a weaker adversarial model where the evaluator must make many in-
termediate physical measurements in an online manner. In contrast, we present a formal treatment
with an adversary who may perform arbitrary quantum computations on the one-time program
as a whole.
[CGLZ19] develops the first quantum one-time program for classical message-authentication
codes, assuming stateless classical hardware tokens.
Separately, [LSZ20] studied the security of classical one-time memories under quantum superposition
attacks. [Liu23] builds quantum one-time memories from a quantum random oracle in the depth-
bounded adversary model, where the honest party needs only short-term quantum memory, while
an adversary that attempts to maintain quantum memory for a longer term cannot perform
attacks due to its bounded quantum depth.
Signature Tokens. Signature tokens are a special case of one-time programs that allow the evalu-
ator to sign a single message, and no more. They were proposed by Ben-David and Sattath [BDS23]
in the oracle model and subsequently generalized to the plain model using indistinguishability ob-
fuscation [CLLZ21b]. Both of these works consider a very specific form of one-time security: an
adversarial evaluator should not be able to output two (whole) valid signatures.
3 Preliminaries
3.1 Quantum Information and Computation
We provide some basics frequently used in this work and refer to [NC02] for comprehensive de-
tails.
A projection is a linear operator 𝑃 on a Hilbert space that is idempotent (𝑃 2 = 𝑃 ) and
Hermitian (𝑃 † = 𝑃 ).
A projective-valued measurement (PVM) is a generalization of this idea to an entire set of mea-
surement outcomes. A PVM is a collection of projection operators {𝑃𝑖 } associated with
the outcomes of a measurement, satisfying two important properties: (1) orthogonality:
𝑃𝑖 𝑃𝑗 = 0 for all 𝑖 ̸= 𝑗; (2) completeness: Σ𝑖 𝑃𝑖 = 𝐈, where 𝐈 is the identity operator.
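Both properties can be checked numerically for the simplest PVM, the computational-basis measurement on a qubit:

```python
import numpy as np

# A two-outcome PVM on C^2: projectors onto |0> and |1>.
P0 = np.array([[1, 0], [0, 0]], dtype=float)
P1 = np.array([[0, 0], [0, 1]], dtype=float)

for P in (P0, P1):
    assert np.allclose(P @ P, P)          # idempotent: P^2 = P
    assert np.allclose(P.conj().T, P)     # Hermitian: P† = P
assert np.allclose(P0 @ P1, 0)            # orthogonality
assert np.allclose(P0 + P1, np.eye(2))    # completeness
```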
Definition 3.1 (Trace distance). Let 𝜌, 𝜎 ∈ ℂ^{2^𝑛 × 2^𝑛} be the density matrices of two quantum states. The
trace distance between 𝜌 and 𝜎 is

‖𝜌 − 𝜎‖tr := (1/2) 𝖳𝗋[√((𝜌 − 𝜎)† (𝜌 − 𝜎))].
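Since Tr √(M†M) is the sum of the singular values of M, the trace distance is half the nuclear norm of ρ − σ. A small numerical check (for |0⟩ versus |+⟩ the distance is 1/√2); the helper name `trace_distance` is ours:

```python
import numpy as np

def trace_distance(rho, sigma):
    """(1/2) * Tr sqrt((rho-sigma)^† (rho-sigma)), i.e. half the sum of
    singular values (nuclear norm) of rho - sigma."""
    return 0.5 * np.linalg.norm(rho - sigma, ord='nuc')

ket0 = np.array([[1.0], [0.0]])
ket_plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho = ket0 @ ket0.T            # |0><0|
sigma = ket_plus @ ket_plus.T  # |+><+|
# trace_distance(rho, sigma) equals 1/sqrt(2) for these two pure states
```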
Here, we only state a key lemma for our construction: the Gentle Measurement Lemma pro-
posed by Aaronson [Aar04], which gives a way to perform measurements without destroying the
state.
Lemma 3.2 (Gentle Measurement Lemma [Aar04]). Suppose a measurement on a mixed state 𝜌 yields
a particular outcome with probability 1 − 𝜖. Then after the measurement, one can recover a state 𝜌̃ such that
‖𝜌̃ − 𝜌‖tr ≤ √𝜖. Here ‖·‖tr is the trace distance (defined in Definition 3.1).
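A small numerical illustration of the lemma for a pure state: a projective measurement that succeeds with probability 1 − ε disturbs the state by at most √ε in trace distance.

```python
import numpy as np

eps = 0.04
# |psi> = sqrt(1-eps)|0> + sqrt(eps)|1>; measuring {P0, I - P0} yields
# outcome 0 with probability 1 - eps.
psi = np.array([np.sqrt(1 - eps), np.sqrt(eps)])
rho = np.outer(psi, psi)

P0 = np.diag([1.0, 0.0])
post = P0 @ psi / np.linalg.norm(P0 @ psi)  # state after seeing outcome 0
rho_post = np.outer(post, post)

# trace distance = half the nuclear norm of the difference
td = 0.5 * np.linalg.norm(rho - rho_post, ord='nuc')
# Gentle measurement bound: td <= sqrt(eps)
```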
Definition 3.3 (Classical Oracle). A classical oracle 𝒪 is a unitary transformation of the form 𝑈𝑓 |𝑥, 𝑦, 𝑧⟩ →
|𝑥, 𝑦 + 𝑓 (𝑥), 𝑧⟩ for classical function 𝑓 : {0, 1}𝑛 → {0, 1}𝑚 . Note that a classical oracle can be queried in
quantum superposition.
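A toy instance of such an oracle unitary, written as a permutation matrix on two one-bit registers, confirms unitarity and shows how a superposition query is answered (the specific f here is just an example):

```python
import numpy as np

# Toy classical oracle U_f on basis |x, y> with x, y ∈ {0,1} and
# f(x) = 1 - x; U_f |x, y> = |x, y ⊕ f(x)>.
f = lambda x: 1 - x
dim = 4  # basis index = 2*x + y
U = np.zeros((dim, dim))
for x in (0, 1):
    for y in (0, 1):
        U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0

assert np.allclose(U @ U.T, np.eye(dim))  # permutation matrix => unitary

# Querying in superposition: U acts linearly on (|0,0> + |1,0>)/sqrt(2).
state = np.zeros(dim)
state[0] = state[2] = 1 / np.sqrt(2)
out = U @ state  # (|0,1> + |1,0>)/sqrt(2), since f(0)=1 and f(1)=0
```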
In the rest of the paper, unless specified otherwise, the word "oracle" means "classical
oracle". A quantum oracle algorithm with oracle access to 𝒪 is a sequence of local unitaries 𝑈𝑖
interleaved with oracle queries 𝑈𝑓 . The query complexity of a quantum oracle algorithm is the
number of oracle calls to 𝒪.
• 𝖣𝖾𝖼𝗈𝗆𝗉 is defined by |𝑥, 𝑢⟩ ⊗ |𝐷⟩ ↦→ |𝑥, 𝑢⟩ ⊗ 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 |𝐷⟩, where 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 is defined as
follows for any 𝐷 such that 𝐷(𝑥) = ⊥:

𝖣𝖾𝖼𝗈𝗆𝗉𝑥 |𝐷⟩ := (1/√|𝒴|) Σ_{𝑦∈𝒴} |𝐷 ∪ {(𝑥, 𝑦)}⟩    (2)

𝖣𝖾𝖼𝗈𝗆𝗉𝑥 ( (1/√|𝒴|) Σ_{𝑦∈𝒴} |𝐷 ∪ {(𝑥, 𝑦)}⟩ ) := |𝐷⟩    (3)

𝖣𝖾𝖼𝗈𝗆𝗉𝑥 ( (1/√|𝒴|) Σ_{𝑦∈𝒴} (−1)^{𝑦·𝑢} |𝐷 ∪ {(𝑥, 𝑦)}⟩ ) := (1/√|𝒴|) Σ_{𝑦∈𝒴} (−1)^{𝑦·𝑢} |𝐷 ∪ {(𝑥, 𝑦)}⟩  for 𝑢 ̸= 0    (4)
queries, then 𝐷(𝑥) = ⊥. This is evident from the definition of 𝖣𝖾𝖼𝗈𝗆𝗉. Second, the compressed
oracle acts identically on all inputs, up to their names. In other words, querying 𝑥1 is the same
as renaming 𝑥1 to 𝑥2 in both the query register and the database register, then querying 𝑥2 , then
undoing the renaming.
Claim 3.6. Let 𝖲𝖶𝖨𝖳𝖢𝖧𝑥1 ,𝑥2 be the unitary which maps any database 𝐷 to the unique 𝐷′ defined by
𝐷′ (𝑥1 ) = 𝐷(𝑥2 ), 𝐷′ (𝑥2 ) = 𝐷(𝑥1 ), and 𝐷′ (𝑥3 ) = 𝐷(𝑥3 ) for all 𝑥3 ∈/ {𝑥1 , 𝑥2 }, and which acts as the identity
on all orthogonal states. Let 𝑈𝑥1 ,𝑥2 be the unitary which maps |𝑥1 ⟩ ↦→ |𝑥2 ⟩, |𝑥2 ⟩ ↦→ |𝑥1 ⟩ and acts as the
identity on all orthogonal states. Then (𝑈𝑥1 ,𝑥2 ⊗ 𝐼 ⊗ 𝖲𝖶𝖨𝖳𝖢𝖧𝑥1 ,𝑥2 ) commutes with the compressed oracle
query unitary.
Proof. This follows from the observation that (𝑈𝑥1 ,𝑥2 ⊗ 𝐼 ⊗ 𝖲𝖶𝖨𝖳𝖢𝖧𝑥1 ,𝑥2 ) commutes with both
𝖣𝖾𝖼𝗈𝗆𝗉 and 𝖢𝖮′ and is self-inverse.
Definition 3.7 (Subspace States). For any subspace 𝐴 ⊆ 𝔽𝑛2 , the subspace state |𝐴⟩ is defined as
1 ∑︁
|𝐴⟩ = √︀ |𝑎⟩ .
|𝐴| 𝑎∈𝐴
Note that given 𝐴, the subspace state |𝐴⟩ can be constructed efficiently.
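For concreteness, a small script can build |A⟩ for a subspace of 𝔽₂⁴ by enumerating the elements of its span. The helper `subspace_state` is illustrative, and this brute-force enumeration is of course not the efficient construction referred to above.

```python
import numpy as np
from itertools import product

def subspace_state(basis, n):
    """Uniform superposition over A = span(basis) ⊆ F_2^n, as a vector
    of 2^n amplitudes indexed by the bit-string of each element."""
    vecs = {tuple(np.bitwise_xor.reduce(
                [np.zeros(n, dtype=int)] +
                [np.array(b) for b, c in zip(basis, coeffs) if c]))
            for coeffs in product((0, 1), repeat=len(basis))}
    state = np.zeros(2 ** n)
    for v in vecs:
        state[int("".join(map(str, v)), 2)] = 1.0
    return state / np.linalg.norm(state)

# A = span{1000, 0100} ⊆ F_2^4 has 4 elements, each with amplitude 1/2.
psi = subspace_state([(1, 0, 0, 0), (0, 1, 0, 0)], n=4)
```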
Definition 3.8 (Subspace Membership Oracles). A subspace membership oracle 𝒪𝐴 for subspace 𝐴 is a
classical oracle that outputs 1 on input vector 𝑣 if and only if 𝑣 ∈ 𝐴.
Query Lower Bounds for Direct-Product Hardness. We now present a query lower bound
for "cloning" subspace states, which is useful for our security proof in the classical oracle
model. The theorem states that, given one copy of a random subspace state, exponen-
tially many queries to the membership oracles are needed to produce two vectors such that either
one lies in the primal subspace and the other in the dual subspace, or both lie in the same subspace
but are distinct.
Theorem 3.9 (Direct-Product Hardness, [BDS23, BKNY23]). Let 𝐴 ⊆ 𝔽𝑛2 be a uniformly random sub-
space of dimension 𝑛/2. Let 𝜖 > 0 be such that 1/𝜖 = 𝑜(2𝑛/2 ). Given one copy of |𝐴⟩, and quantum access
to membership oracles for 𝐴 and 𝐴⊥ , an adversary needs Ω(√(𝜖 2^{𝑛/2})) queries to output with probability at
least 𝜖 either of the following: (1) a pair (𝑣, 𝑤) such that 𝑣 ∈ 𝐴 and 𝑤 ∈ 𝐴⊥ ; (2) a pair (𝑣, 𝑤) with 𝑣 ̸= 𝑤
such that both lie in 𝐴 or both lie in 𝐴⊥ .
3.5 Tokenized Signature Definitions
A tokenized signature scheme is a signature scheme (𝖦𝖾𝗇, 𝖲𝗂𝗀𝗇, 𝖵𝖾𝗋𝗂𝖿𝗒) equipped with two addi-
tional algorithms 𝖦𝖾𝗇𝖳𝗈𝗄 and 𝖳𝗈𝗄𝖲𝗂𝗀𝗇. 𝖦𝖾𝗇𝖳𝗈𝗄 takes in a signing key 𝗌𝗄 and outputs a quantum
token |𝑇 ⟩. We overload it to also take in an integer 𝑛, in which case it outputs 𝑛 signing tokens.
𝖳𝗈𝗄𝖲𝗂𝗀𝗇 takes in a quantum token |𝑇 ⟩ and a message 𝑚, then outputs a signature 𝜎 on 𝑚.
A tokenized signature scheme must satisfy correctness and tokenized unforgeability. Correct-
ness requires that a signature token can be used to generate a valid signature on any message 𝑚.
Definition 3.10. A tokenized signature scheme (𝖦𝖾𝗇, 𝖲𝗂𝗀𝗇, 𝖵𝖾𝗋𝗂𝖿𝗒, 𝖦𝖾𝗇𝖳𝗈𝗄, 𝖳𝗈𝗄𝖲𝗂𝗀𝗇) has tokenized un-
forgeability if for every QPT adversary 𝒜,

Pr[ 𝖵𝖾𝗋𝗂𝖿𝗒(𝗏𝗄, 𝑚𝑖 , 𝜎𝑖 ) = 𝖠𝖼𝖼𝖾𝗉𝗍 ∀𝑖 ∈ [𝑛 + 1] :
    (𝗌𝗄, 𝗏𝗄) ← 𝖦𝖾𝗇(1𝜆 ),
    ⊗_{𝑖=1}^{𝑛} |𝑇𝑖 ⟩ ← 𝖦𝖾𝗇𝖳𝗈𝗄(𝗌𝗄, 𝑛),
    ((𝑚1 , 𝜎1 ), . . . , (𝑚𝑛+1 , 𝜎𝑛+1 )) ← 𝒜(𝗏𝗄, ⊗_{𝑖=1}^{𝑛} |𝑇𝑖 ⟩) ] = 𝗇𝖾𝗀𝗅(𝜆).
Let ℱ be a family of randomized classical functions, such that any 𝑓 ∈ ℱ takes as input 𝑥 ∈ 𝒳 ,
then samples 𝑅 ←$ ℛ, and outputs 𝑓 (𝑥, 𝑅). For simplicity, let 𝒳 = {0, 1}𝑚 for some parameter
𝑚 ∈ ℕ, and let 1𝑚 be implicit in the description of 𝑓 .
Definition 4.1 (Syntax of 𝖮𝖳𝖯). Let ℱ be a family of randomized classical functions. For any given
𝑓 ∈ ℱ, define 𝒳 , ℛ, 𝒴 to be the sets for which 𝑓 : 𝒳 × ℛ → 𝒴, and let 𝑚 be the bit-length of inputs to 𝑓 ;
i.e. let 𝒳 = {0, 1}𝑚 .
Next, 𝖮𝖳𝖯 is the following set of quantum polynomial-time algorithms:
𝖦𝖾𝗇𝖾𝗋𝖺𝗍𝖾(1𝜆 , 𝑓 ): Takes as input the security parameter 1𝜆 and a description of the function 𝑓 ∈ ℱ, and
outputs a quantum state 𝖮𝖳𝖯(𝑓 ). We assume that 1𝑚 is implicit in 𝖮𝖳𝖯(𝑓 ).
𝖤𝗏𝖺𝗅(𝖮𝖳𝖯(𝑓 ), 𝑥): Takes as input a quantum state 𝖮𝖳𝖯(𝑓 ) and classical input 𝑥 ∈ {0, 1}𝑚 , and outputs
a classical value 𝑦 ∈ 𝒴.
Definition 4.2 (Correctness). 𝖮𝖳𝖯 satisfies correctness for a given function family ℱ if 𝖦𝖾𝗇𝖾𝗋𝖺𝗍𝖾 and
𝖤𝗏𝖺𝗅 run in polynomial time, and for every 𝑓 ∈ ℱ and 𝑥 ∈ {0, 1}𝑚 , there exists a negligible function
𝗇𝖾𝗀𝗅(·), such that for all 𝜆 ∈ ℕ, the distributions {𝖤𝗏𝖺𝗅(𝖮𝖳𝖯(𝑓 ), 𝑥)} and {𝑓 (𝑥, 𝑟)}
are 𝗇𝖾𝗀𝗅(𝜆)-close in statistical distance, where the randomness in the second distribution {𝑓 (𝑥, 𝑟)} is over
the choice of 𝑟 ←$ ℛ.
Remark 4.3. In this work, we consider the randomness input to be sampled uniformly. Such a distribution
suffices for many applications we will discuss (one-time signatures, one-time encryption).
where both 𝒜 and 𝖲𝗂𝗆 are allowed to output oracle-aided circuits using the oracles given to them (respectively).
𝒪𝑓 is a stateless oracle for 𝑓 that outputs 𝑦 = 𝑓 (𝑥, 𝑟) on any input (𝑥, 𝑟) and is not restricted in the number
of queries that it can answer.
Remark 4.5 (Auxiliary Input). The auxiliary information 𝖺𝗎𝗑𝑓 is a piece of classical, public information
sampled together when sampling/choosing 𝑓 . Therefore, all parties, 𝒜, 𝒟, 𝖲𝗂𝗆 get to see 𝖺𝗎𝗑𝑓 .
For instance, when 𝑓 is a signing function or a publicly-verifiable proof algorithm, this 𝖺𝗎𝗑𝑓 can be the
public verification key.
The above definition is a worst-case definition for all 𝑓 , but we will see in the following discus-
sions that even if we relax the definition to an average case 𝑓 sampled from the function family, a
strong impossibility result still holds.
our simulator is given only oracle access to 𝑓 , the program we give to 𝒜 consists of actual code
(and quantum states). Once 𝒜 has non-black-box access to the given one-time program, 𝒜 may
be able to perform attacks that the simulator cannot: for instance, evaluating ho-
momorphically on the one-time program. As demonstrated in [ABDS20, AP21], this type of non-
black-box attack is applicable even if the program is a quantum state.
Formalizing the actual non-black-box attack takes some effort due to the randomized evalua-
tion of 𝑓 in our setting. Nevertheless, we show in Section 7 that even for one-time programs with a sampling
circuit, there exist circuits that can never be securely one-time programmed when the adversary is given non-
black-box access.
Barriers for Stateless One-Time Programs. Even worse, the above definition suffers from impos-
sibilities even in the oracle model, where we make sure that the program 𝒜 receives also consists
of oracle-aided circuits, so that the above non-black-box attack does not apply.
This barrier results mainly from the fact that we give 𝖲𝗂𝗆 a stateful oracle but give 𝒜 a stateless
one-time program (which contains a stateless oracle).
Consider the following 𝒜 and distinguisher 𝒟: 𝒜 receives a possibly oracle-aided program and
simply forwards the program to 𝒟. 𝒟 is given unrestricted oracle access to 𝒪𝑓 , so 𝒟 can perform
the following attack using gentle measurement (Lemma 3.2) and un-computation, discussed in
the paragraph "Overcoming barriers for stateless one-time programs" of Section 2.1.
In conclusion, unless the function 𝑓 itself is "learnable" through a single oracle query (for
example, a constant function), one cannot achieve the above definition. Only in this trivial case can
the simulator fully recover the functionality of 𝑓 and make up a program that looks like a real-
world program, since both 𝖲𝗂𝗆 and 𝒜 can learn everything about 𝑓 . This argument is formalized
in [BGS13], which rules out stateless one-time programs even in the oracle model. Note that this
impossibility holds even if we consider a randomized 𝑓 sampled from the function family, and/or
give a verification oracle that verifies whether a computation regarding 𝑓 is correct instead of full
access to 𝑓 .
To get around the above oracle-model attack, we first consider the following weakening: what
if we limit both the adversary and simulator to output only classical information? Intuitively, this re-
quires both 𝒜 and 𝖲𝗂𝗆 to dequantize and "compress" what they can learn from the program/oracle
into a piece of classical information.
Definition 4.6 (Single physical query classical-output simulation-based one-time security). Let 𝜆
be the security parameter. For all (non-uniform) quantum polynomial-time adversaries 𝒜′ with classical
output, there is a (non-uniform) quantum polynomial-time simulator 𝖲𝗂𝗆 that is given single (quantum)
query access to 𝑓 such that for any QPT distinguisher 𝒟, for any 𝑓 ∈ ℱ𝜆 , there exists a negligible 𝗇𝖾𝗀𝗅(·)
such that:

|Pr[1 ← 𝒟^{𝒪𝑓}(𝒜′ (1𝜆 , 𝖮𝖳𝖯(𝑓 ), 𝖺𝗎𝗑𝑓 ), 𝖺𝗎𝗑𝑓 )] − Pr[1 ← 𝒟^{𝒪𝑓}(𝖲𝗂𝗆^{𝒪𝑓^{(1)}(·,$)}(𝖺𝗎𝗑𝑓 ), 𝖺𝗎𝗑𝑓 )]| ≤ 𝗇𝖾𝗀𝗅(𝜆),

where both 𝒜′ and 𝖲𝗂𝗆 are allowed to output oracle-aided circuits, but only classical information.
𝒪𝑓 is a stateless oracle for 𝑓 that outputs 𝑦 = 𝑓 (𝑥, 𝑟) on any input (𝑥, 𝑟) and is not restricted in the
number of queries that it can answer.
However, we will demonstrate in Section 7 that even for 𝑓 with high min-entropy output, there
exists a family of functions such that no single-physical-query simulator can "learn" much about 𝑓
when given only access to the stateful oracle 𝒪𝑓^{(1)}(·, $), yet there exists an efficient 𝒜 that can fully
recover the functionality of 𝑓 when given a stateless oracle-aided one-time sampling program.
Therefore, 𝒜 can produce a classical output that separates it from 𝖲𝗂𝗆.
Re-defining the Stateful Single-Query Oracle. These barriers inspire us to revisit the defini-
tion of the "single-query" oracle we give to 𝖲𝗂𝗆 and to consider a weakening of the single-query
restriction: 𝖲𝗂𝗆 may make only one physical query, but 𝒜 and 𝒟 can
actually make many queries, as long as the measurements they make on those queries are "gentle".
Is it possible to allow 𝖲𝗂𝗆 to make "gentle" queries to the oracle for 𝑓 , just as the adversary and
distinguisher can with their stateless oracles? We hope that for certain functionalities with sam-
pled random inputs, we will be able to detect whether 𝖲𝗂𝗆 makes a destructive (but meaningful)
measurement on its query or a gentle (but likely uninformative) one, so that the stateful oracle
can turn off once a "destructive" query has been made, but stay on as long as only gentle
queries have been made.
The second weaker security definition we propose examines the above "gentle measurement
attack" by the distinguisher more closely, and redefines what a single query means in the quantum
query model. We develop this notion in the following subsection.
The single-effective-query (SEQ) oracle 𝑓$,1 augments 𝑓$ with the ability to recognize how
many distinct (effective) queries the user has made to 𝐻, by implementing 𝐻 as a compressed
oracle that records the user's queries. If answering a given query would increase the number of
recorded queries to 2 or more, then 𝑓$,1 does not answer it. Moreover, whenever 𝑓$,1 does answer a query,
it flips a bit 𝑏 to indicate that it has done so. The formal definition of 𝑓$,1 is given below.
Compressed Single-Effective-Query Oracle 𝑓$,1 . We define 𝑓$,1 to implement the random oracle
𝐻 using the compressed oracle technique described in Section 3.3. In this case, the oracle's ℋ
register contains a superposition over databases 𝐷, which are sets of input/output pairs (𝑥, 𝑟).
(In general, the function may or may not output its randomness 𝑟.)
When 𝑓$,1 is initialized, ℋ is initialized to |∅⟩.
𝑓$,1 responds to queries via an isometry implementing the following procedure coherently on
basis states |𝑥, 𝑢, 𝑏⟩𝒬 ⊗ |𝐷⟩ℋ . In particular, it applies the map

|𝑥, 𝑢, 𝑏⟩𝒬 ⊗ |𝑥, 𝑟⟩𝒬′ ↦→ |𝑥, 𝑢 ⊕ 𝑓 (𝑥; 𝑟), 𝑏 ⊕ 1⟩𝒬 ⊗ |𝑥, 𝑟⟩𝒬′

to registers (𝒬, 𝒬′ ), and finally uncomputes the register 𝒬′ by querying 𝒬′ to the compressed oracle 𝐻.
Claim 4.7. Let 𝐴 be an oracle algorithm interacting with 𝑓$,1 . After every query in this interaction, ℋ is
supported on databases 𝐷 with at most one entry.
Proof. This is clearly true after 0 queries. We proceed by induction. Consider the projector onto
the space spanned by states of the form
for some 𝑥 ∈ 𝒳 , 𝑢 ∈ 𝒴, and 𝑟 ∈ ℛ. 𝑓$,1 acts as the identity on all states outside of this space.
Furthermore, this space is invariant under queries to 𝐻. This follows from the fact that the com-
pressed oracle operation 𝖣𝖾𝖼𝗈𝗆𝗉 maps |𝑥, 𝑢⟩ ⊗ |𝐷⟩ ↦→ |𝑥, 𝑢⟩ ⊗ |𝐷′ ⟩ where 𝐷 and 𝐷′ are the same,
except for the possibility that 𝐷(𝑥) ̸= 𝐷′ (𝑥), and the fact that the compressed oracle operation 𝖢𝖮′
does not modify |𝐷⟩. Finally, 𝑓$,1 only operates on ℋ by querying 𝐻 on states in this space, so it is
also invariant on the space.
Note that this is in the oracle model, so both 𝖮𝖳𝖯 and 𝖲𝗂𝗆 can output oracle-aided programs.
This definition helps us circumvent the strong impossibility result mentioned above: 𝖲𝗂𝗆 can
also make up a program that uses the single-effective-query oracle 𝑓$,1 . If the distinguisher tries to
gently query the oracle, only checking the correctness of evaluations, then 𝖲𝗂𝗆 can gently query
the 𝑓$,1 oracle as well, which does not prevent further queries.
4.3 Single-Query Learning Game and Learnability
In this section, we will give some further characterization and generalization of what types of
functions can be made into one-time sampling programs with a meaningful security notion.
Ultimately, we want the adversary to be unable to evaluate the program twice. Clearly, this is only
possible if the function itself cannot be learned with a single oracle query, or at least if it is
hard to learn two input-output pairs given one oracle query. We formalize such unlearnability
through several definitions in this section.
The first definition is the most natural: the adversary is challenged to correctly evaluate the function on
two different inputs of its own choice.
Definition 4.9 (Single-Query Learning Game). A learning game for a function family ℱ = {ℱ𝜆 :
[𝑁 ] → [𝑀 ]}, a distribution family 𝒟 = {𝒟𝑓 }, and an adversary 𝒜 is denoted 𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾^{𝒜}_{ℱ,𝒟}(1^𝜆 ),
which consists of the following steps:
which consists of the following steps:
1. Sampling Phase: At the beginning of the game, the challenger takes a security parameter 𝜆 and
samples a function (𝑓, 𝖺𝗎𝗑𝑓 ) ← ℱ𝜆 at random according to distribution 𝒟𝑓 , where 𝖺𝗎𝗑𝑓 is some
classical auxiliary information.
2. Query Phase: 𝒜 then gets a single oracle access to 𝑓 and the classical auxiliary information 𝖺𝗎𝗑𝑓 ;
3. Challenge Phase:
(a) 𝒜 outputs two input-output tuples (𝑥1 , 𝑟1 ; 𝑦1 ), (𝑥2 , 𝑟2 ; 𝑦2 ) where (𝑥1 , 𝑟1 ) ̸= (𝑥2 , 𝑟2 ).
(b) Challenger checks if 𝑓 (𝑥1 , 𝑟1 ) = 𝑦1 and 𝑓 (𝑥2 , 𝑟2 ) = 𝑦2.
Challenger outputs 1 if and only if both the above checks pass.
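The game can be mirrored by a classical challenger, with the single quantum query crudely modelled as one classical oracle call. The names `one_shot` and `learning_game` are hypothetical, introduced only for this sketch.

```python
def one_shot(f):
    """Wrap f so that it answers at most one query."""
    used = [False]
    def oracle(x, r):
        if used[0]:
            raise RuntimeError("single query exceeded")
        used[0] = True
        return f(x, r)
    return oracle

def learning_game(f, adversary):
    """Challenger for the single-query learning game: the adversary gets
    one oracle call and must output two *distinct* input-randomness
    pairs together with correct outputs for both."""
    (x1, r1, y1), (x2, r2, y2) = adversary(one_shot(f))
    if (x1, r1) == (x2, r2):
        return 0
    return int(f(x1, r1) == y1 and f(x2, r2) == y2)
```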
Definition 4.10 (Single-Query 𝛾-Unlearnability). Let 𝜆 be the security parameter and let 𝛾 = 𝛾(𝜆) be a
function. A function family ℱ = {ℱ𝜆 }𝜆∈ℕ is single-query 𝛾-unlearnable if for all (non-uniform) quantum
polynomial-time adversaries 𝒜,
Pr_{𝑓←ℱ𝜆}[𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾^{𝒜}_{ℱ,𝒟}(1^𝜆 ) = 1] ≤ 𝛾(𝜆).
The unlearnability definition most often referred to in this work is the case 𝛾 = 𝗇𝖾𝗀𝗅(𝜆). We
will often refer to the following definition as "single-query unlearnable" for short.
Definition 4.11 (Single-query 𝗇𝖾𝗀𝗅(𝜆)-unlearnable functions). Let 𝜆 be the security parameter. A func-
tion family ℱ = {ℱ𝜆 }𝜆∈ℕ is single-query unlearnable if for all (non-uniform) quantum polynomial-time
adversaries 𝒜,
Pr_{𝑓←ℱ𝜆}[𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾^{𝒜}_{ℱ,𝒟}(1^𝜆 ) = 1] ≤ 𝗇𝖾𝗀𝗅(𝜆).
Remark 4.13 (Examples of Single-Query 𝛾-Unlearnability). For some functionalities, 𝒜 may be able
to learn two input-output pairs with a larger probability (e.g. for a binary-outcome random function, a random
guess succeeds with probability at least 1/2), but not non-negligibly larger than some threshold 𝛾(𝜆).
Generalized Unlearnability. Note that the above single-query learnability notion is not strong enough
when we want 𝑓 to be a cryptographic functionality: the adversary may learn something impor-
tant about (𝑥1 , 𝑟1 , 𝑦1 ), (𝑥2 , 𝑟2 , 𝑦2 ) without outputting the entire input-output pairs.
For example, when 𝑓 is a pseudorandom function, it may not suffice to guarantee that 𝒜
cannot correctly compute two input-output pairs. We should also rule out 𝒜's ability to win
an indistinguishability-based pseudorandomness game for both inputs (𝑥1 , 𝑟1 , 𝑦1 ), (𝑥2 , 𝑟2 , 𝑦2 ).
Before going into this more generic definition, we first define a notion of "predicate" important
to our generic definition.
Definition 4.14 (Predicate). A predicate 𝑃(𝑓, 𝑥, 𝑟, 𝑧, 𝖺𝗇𝗌) is a binary-outcome function that runs a program 𝑓 on some input (𝑥, 𝑟) to get output 𝑦, and outputs 0/1 depending on whether the tuple (𝑥, 𝑟, 𝑦, 𝑧, 𝖺𝗇𝗌) satisfies a binary relation 𝑅_𝑓 corresponding to 𝑓, i.e., (𝑥, 𝑟, 𝑦, 𝑧, 𝖺𝗇𝗌) ∈ 𝑅_𝑓. Here 𝑧 is some auxiliary input that specifies the relation.
Note that the above predicate definition implies that anyone with the capability to evaluate 𝑓 (even with only oracle access) can verify whether the predicate 𝑃(𝑓, 𝑥, 𝑟, 𝑧, 𝖺𝗇𝗌) is satisfied.
Remark 4.15. We provide three concrete examples:
1. A first concrete example for the above predicate is a secret-key encryption scheme: 𝑓 is an encryption function, and 𝖺𝗇𝗌 is an alleged valid ciphertext on message 𝑥 under randomness 𝑟. The predicate 𝑃 simply encrypts 𝑥 using 𝑟 and checks whether 𝖺𝗇𝗌 is the resulting ciphertext.
2. Another example is when 𝑓 is a signing function. The predicate signs message 𝑥 using randomness 𝑟
and checks if 𝖺𝗇𝗌 is a valid signature for message 𝑥 with randomness 𝑟.
3. When 𝑓 is a PRF, a possible predicate is to check if an alleged evaluation 𝑦 is indeed the evaluation
𝖯𝖱𝖥(𝑘, 𝑥|𝑟).
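The "re-evaluate and compare" structure of these predicates can be made concrete in a short sketch. HMAC-SHA256 below is a hypothetical stand-in for the PRF (an assumption for illustration only), and the relation 𝑅_𝑓 is plain equality, matching example 3:

```python
import hmac, hashlib

def prf(key, x, r):
    # Hypothetical PRF instantiation, for illustration only.
    return hmac.new(key, x + b"|" + r, hashlib.sha256).digest()

def predicate(f, x, r, z, ans):
    """P(f, x, r, z, ans): run f on (x, r) to get y, then test the relation.
    Here R_f is simple equality ans == y, and z is unused."""
    y = f(x, r)
    return ans == y

key = b"k" * 32
f = lambda x, r: prf(key, x, r)
assert predicate(f, b"msg", b"rand", None, f(b"msg", b"rand"))
assert not predicate(f, b"msg", b"rand", None, b"\x00" * 32)
```

As the text observes, anyone who can evaluate 𝑓, even only through oracle access, can run this check.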
Definition 4.16 (Generalized Single-Query Learning Game). A generalized single-query learning game for a sampler 𝖲𝖺𝗆𝗉 (which samples a function in ℱ_𝜆), a predicate 𝑃 = {𝑃_𝜆}, and an adversary 𝒜 is denoted as 𝖦𝖾𝗇𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾_{𝖲𝖺𝗆𝗉,𝑃}^{𝒜}(1^𝜆), and consists of the following steps:
1. Sampling Phase: At the beginning of the game, the challenger samples (𝑓, 𝖺𝗎𝗑𝑓 ) ← 𝖲𝖺𝗆𝗉(1𝜆 ),
where 𝖺𝗎𝗑𝑓 is some classical auxiliary information.
2. Query Phase: 𝒜 then gets a single oracle access to 𝑓 and also gets 𝖺𝗎𝗑_𝑓 ;
3. Challenge Phase:
(a) 𝒜 outputs two input-randomness pairs (𝑥1 , 𝑟1 ), (𝑥2 , 𝑟2 ) where (𝑥1 , 𝑟1 ) ̸= (𝑥2 , 𝑟2 ).
(b) Challenger prepares challenges ℓ1 , ℓ2 i.i.d using (𝑥1 , 𝑟1 ) and (𝑥2 , 𝑟2 ) respectively and sends
them to 𝒜.
(c) 𝒜 outputs answers 𝖺𝗇𝗌1 , 𝖺𝗇𝗌2 for challenges ℓ1 , ℓ2 .
The game outputs 1 if and only if the predicates 𝑃_𝜆(𝑓, 𝑥1, 𝑟1, ℓ1, 𝖺𝗇𝗌1) and 𝑃_𝜆(𝑓, 𝑥2, 𝑟2, ℓ2, 𝖺𝗇𝗌2) are both satisfied.
Definition 4.17 (Generalized Single-query 𝛾-Unlearnable functions). Let 𝜆 be the security parameter. A function family ℱ = {ℱ_𝜆}_{𝜆∈ℕ} is generalized single-query 𝛾-unlearnable for some 𝛾 = 𝛾(𝜆) if for all (non-uniform) quantum polynomial-time adversaries 𝒜,
Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←𝖲𝖺𝗆𝗉(1^𝜆)}[𝖦𝖾𝗇𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾_{𝖲𝖺𝗆𝗉,𝑃}^{𝒜}(1^𝜆) = 1] ≤ 𝛾.
Remark 4.18 (Example). To demonstrate how the above definition works at a concrete level, we give an example for PRFs: in the challenge phase, the adversary provides (𝑥1, 𝑟1), (𝑥2, 𝑟2). The challenger then samples two independent, uniform random bits 𝑏1, 𝑏2: if 𝑏1 = 0, let 𝑦1 be the 𝖯𝖱𝖥 evaluation on (𝑥1, 𝑟1); else let 𝑦1 be a random value. We assign 𝑦2 similarly using 𝑏2. The security guarantees that 𝒜 should not be able to output correct guesses for both 𝑏1, 𝑏2 with overall probability non-negligibly larger than 1/2: 𝒜 always has the power to compute one of them correctly with probability 1, but for the other challenge, 𝒜 should not be able to win with probability larger than in a regular pseudorandomness game, where it is not given the power to evaluate the PRF.
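The challenge-phase mechanics of this remark can be mocked up classically. In the sketch below, HMAC-SHA256 is again an illustrative stand-in for the PRF, and the "challenge" ℓ is the real-or-random value 𝑦; giving the guesser the key shows why the bound is 1/2 rather than 1/4, since a key-holder wins both bits with probability essentially 1.

```python
import hmac, hashlib, secrets

key = b"k" * 32  # illustrative fixed key
f = lambda x, r: hmac.new(key, x + b"|" + r, hashlib.sha256).digest()

def challenge(x, r):
    """Challenger's side for one pair: flip b; send the real evaluation if
    b = 0, else a uniformly random value."""
    b = secrets.randbits(1)
    y = f(x, r) if b == 0 else secrets.token_bytes(32)
    return b, y

def challenge_phase(guess):
    (x1, r1), (x2, r2) = (b"x1", b"r1"), (b"x2", b"r2")
    b1, y1 = challenge(x1, r1)
    b2, y2 = challenge(x2, r2)
    # The game outputs 1 iff both bits are guessed correctly.
    return guess(x1, r1, y1) == b1 and guess(x2, r2, y2) == b2

# A guesser holding the key wins both challenges (up to a 2^-256 collision
# between a random string and the true evaluation).
key_guesser = lambda x, r, y: 0 if y == f(x, r) else 1
assert challenge_phase(key_guesser)
```

A one-time-program adversary is between these extremes: it can evaluate once, so it can answer one challenge like the key-holder, but should do no better than 1/2 on the other.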
All the above definitions are well-defined for single-physical-query oracles or single-effective-query oracles. We also make the following simple observation:
Claim 4.19. Single-effective-query 𝛾-unlearnability implies single-physical-query 𝛾-unlearnability.
A single physical query can always be implemented using the single-effective-query oracle, so the single-effective-query oracle is at least as permissive; therefore anything unlearnable with a single effective query is also unlearnable with a single physical query.
Remark 4.20. However, the converse implication does not hold. We give a counterexample in Section 7.3.
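Claim 4.19's reasoning is already visible in a classical caricature of the two oracles (these classes are illustrative toys, not the quantum oracles of Section 4.2): the single-physical-query oracle answers exactly one call, while the single-effective-query oracle keeps answering as long as every call repeats the first effective input, so any single-physical-query strategy runs unchanged against the SEQ oracle.

```python
class SinglePhysicalQueryOracle:
    """Answers exactly one query, then refuses everything."""
    def __init__(self, f):
        self.f, self.used = f, False
    def query(self, x, r):
        if self.used:
            return None
        self.used = True
        return self.f(x, r)

class SingleEffectiveQueryOracle:
    """Keeps answering while every query repeats the first effective input."""
    def __init__(self, f):
        self.f, self.locked = f, None
    def query(self, x, r):
        if self.locked is None:
            self.locked = (x, r)
        if (x, r) != self.locked:
            return None
        return self.f(x, r)

f = lambda x, r: (3 * x + r) % 101
spq, seq = SinglePhysicalQueryOracle(f), SingleEffectiveQueryOracle(f)
assert spq.query(5, 7) == seq.query(5, 7) == 22  # one query: same behavior
assert spq.query(5, 7) is None                   # SPQ: no second query at all
assert seq.query(5, 7) == 22                     # SEQ: repeating is allowed
assert seq.query(6, 8) is None                   # SEQ: new effective query refused
```

The asymmetry in the last two lines is also why the converse fails: an SEQ adversary can re-query the same effective input, which a single physical query never allows.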
Strong Operational One-Time Security We first give a one-time security notion that relaxes the simulation-based definitions (Definition 4.8 and Definition 4.6).
The following definition says that, given a one-time program for 𝑓, no QPT (or, in the oracle model, polynomial-query quantum) adversary should be able to learn two input-output pairs with noticeably larger probability than a simulator with a single query (physical/effective, resp.) to the oracle.
Definition 4.21 (Strong Operational Security). A one-time (sampling) program for a function family ℱ_𝜆 satisfies strong operational security if for all (non-uniform) quantum polynomial-time adversaries 𝒜 receiving one copy of 𝖮𝖳𝖯(𝑓) and 𝖺𝗎𝗑_𝑓, where (𝑓, 𝖺𝗎𝗑_𝑓) ← ℱ_𝜆, there is a (non-uniform) quantum polynomial-time simulator 𝖲𝗂𝗆, given a single (physical/effective, resp.) quantum query to 𝑓, and a negligible function 𝗇𝖾𝗀𝗅(𝜆) such that the following holds for all 𝜆 ∈ ℕ:

| Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←ℱ_𝜆}[𝑓(𝑥1, 𝑟1) = 𝑦1 ∧ 𝑓(𝑥2, 𝑟2) = 𝑦2 : ((𝑥1, 𝑟1, 𝑦1), (𝑥2, 𝑟2, 𝑦2)) ← 𝒜(𝖮𝖳𝖯(𝑓), 𝖺𝗎𝗑_𝑓)] − Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←ℱ_𝜆}[𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾_{ℱ,𝒟}^{𝖲𝗂𝗆}(1^𝜆) = 1] | ≤ 𝗇𝖾𝗀𝗅(𝜆),
where 𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾_{ℱ,𝒟}^{𝖲𝗂𝗆}(1^𝜆) is the single-query learning game defined in Definition 4.10 and (𝑥1, 𝑟1) ≠ (𝑥2, 𝑟2).
Remark 4.22. We will call the above 𝛾-strong operational one-time security if Pr[𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾_{ℱ,𝒟}^{𝖲𝗂𝗆}(1^𝜆) = 1] = 𝛾.
Generalized One-time Security Similar to the discussion in Section 4.3, the above security is not sufficient when we want 𝑓 to be a cryptographic functionality: the adversary may learn something important about (𝑥1, 𝑟1, 𝑦1), (𝑥2, 𝑟2, 𝑦2) without outputting the entire input-output pairs.
Corresponding to the above generalized learning game in Definition 4.16, we give the follow-
ing definition:
Definition 4.23 (Generalized Operational One-Time Security Game). A generalized operational one-time security game for a sampler 𝖲𝖺𝗆𝗉 (which samples a function in ℱ_𝜆), a predicate 𝑃 = {𝑃_𝜆}, and an adversary 𝒜 is denoted as 𝖦𝖾𝗇𝖮𝖳𝖯_{𝖲𝖺𝗆𝗉,𝑃}^{𝒜}(1^𝜆), and consists of the following steps:
1. Sampling Phase: At the beginning of the game, the challenger samples (𝑓, 𝖺𝗎𝗑_𝑓) ← 𝖲𝖺𝗆𝗉(1^𝜆), where 𝖺𝗎𝗑_𝑓 is some classical auxiliary information.
2. Query Phase: 𝒜 then gets a single copy of 𝖮𝖳𝖯(𝑓) and the classical auxiliary information 𝖺𝗎𝗑_𝑓 ;
3. Challenge Phase:
(a) 𝒜 outputs two input-randomness pairs (𝑥1 , 𝑟1 ), (𝑥2 , 𝑟2 ) where (𝑥1 , 𝑟1 ) ̸= (𝑥2 , 𝑟2 ).
(b) Challenger prepares challenges ℓ1 , ℓ2 i.i.d using (𝑥1 , 𝑟1 ) and (𝑥2 , 𝑟2 ) respectively and sends
them to 𝒜.
(c) 𝒜 outputs answers 𝖺𝗇𝗌1 , 𝖺𝗇𝗌2 for challenges ℓ1 , ℓ2 .
The game outputs 1 if and only if the predicates 𝑃_𝜆(𝑓, 𝑥1, 𝑟1, ℓ1, 𝖺𝗇𝗌1) and 𝑃_𝜆(𝑓, 𝑥2, 𝑟2, ℓ2, 𝖺𝗇𝗌2) are both satisfied.
Definition 4.24 (Generalized Strong Operational Security). A one-time (sampling) program for a function family ℱ_𝜆 satisfies generalized strong operational security if for all (non-uniform) quantum polynomial-time adversaries 𝒜 there is a (non-uniform) quantum polynomial-time simulator 𝖲𝗂𝗆, given a single (physical/effective, resp.) quantum query to 𝑓, and a negligible function 𝗇𝖾𝗀𝗅(𝜆) such that the following holds for all 𝜆 ∈ ℕ:

| Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←𝖲𝖺𝗆𝗉(1^𝜆)}[𝖦𝖾𝗇𝖮𝖳𝖯_{𝖲𝖺𝗆𝗉,𝑃}^{𝒜}(1^𝜆) = 1] − Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←𝖲𝖺𝗆𝗉(1^𝜆)}[𝖦𝖾𝗇𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾_{𝖲𝖺𝗆𝗉,𝑃}^{𝖲𝗂𝗆}(1^𝜆) = 1] | ≤ 𝗇𝖾𝗀𝗅(𝜆),

where 𝖦𝖾𝗇𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾_{𝖲𝖺𝗆𝗉,𝑃}^{𝖲𝗂𝗆}(1^𝜆) is the generalized single-query learning game defined in Definition 4.16.
Remark 4.25 (Example). We give a formal concrete example of the above definition for PRFs in Definition 6.14.
Weak Operational One-time Security We finally present a relatively limited but intuitive defi-
nition: no efficient quantum adversary should be able to output two distinct samples, i.e. tuples
of the form (𝑥, 𝑟, 𝑓 (𝑥, 𝑟)) with non-negligible probability, given the one-time program 𝖮𝖳𝖯(𝑓 ) for
𝑓.
We can observe that this definition is only applicable to single-query 𝗇𝖾𝗀𝗅(𝜆)-unlearnable functions as defined in Definition 4.11. But this function class already covers many cryptographic applications with search-based security, such as one-time signatures, encryption and proofs.
Definition 4.26 (Weak operational one-time security). A one-time (sampling) program for a function family ℱ_𝜆 satisfies weak operational one-time security if for all (non-uniform) quantum polynomial-time adversaries 𝒜 there exists a negligible function 𝗇𝖾𝗀𝗅(𝜆) such that the following holds for all 𝜆 ∈ ℕ:

Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←ℱ_𝜆}[𝑓(𝑥1, 𝑟1) = 𝑦1 ∧ 𝑓(𝑥2, 𝑟2) = 𝑦2 : ((𝑥1, 𝑟1, 𝑦1), (𝑥2, 𝑟2, 𝑦2)) ← 𝒜(𝖮𝖳𝖯(𝑓), 𝖺𝗎𝗑_𝑓)] ≤ 𝗇𝖾𝗀𝗅(𝜆),

where (𝑥1, 𝑟1) ≠ (𝑥2, 𝑟2).
Remark 4.27. It is easy to observe that for a family of functions ℱ_𝜆 which is 𝛾-unlearnable for every inverse-polynomial 𝛾 (i.e., 𝗇𝖾𝗀𝗅(𝜆)-unlearnable), Definition 4.26 and Definition 4.21 are equivalent.
First, it is easy to observe that Definition 4.21 implies Definition 4.26. When ℱ_𝜆 is 𝗇𝖾𝗀𝗅(𝜆)-unlearnable, the winning probability of the learning game for 𝖲𝗂𝗆 is 𝗇𝖾𝗀𝗅(𝜆), which makes Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←ℱ_𝜆}[𝑓(𝑥1, 𝑟1) = 𝑦1 ∧ 𝑓(𝑥2, 𝑟2) = 𝑦2 : ((𝑥1, 𝑟1, 𝑦1), (𝑥2, 𝑟2, 𝑦2)) ← 𝒜(𝖮𝖳𝖯(𝑓), 𝖺𝗎𝗑_𝑓)] negligible.
Operational one-time security for verifiable functions Another natural definition could require that adversaries cannot produce two input-output pairs (𝑥, 𝑓(𝑥, 𝑟)) without necessarily providing the randomness 𝑟 used to generate the output. This notion makes the most sense when there is a (not necessarily efficient) verification algorithm 𝖵𝖾𝗋𝗂𝖿𝗒_𝑓 that takes pairs of the form (𝑥, 𝑦) and either accepts or rejects. We call such function families verifiable.
Definition 4.29 (Operational one-time security for verifiable functions). A one-time (sampling) program for a verifiable function family ℱ_𝜆 satisfies verifiable operational one-time security if for all (non-uniform) quantum polynomial-time adversaries 𝒜 there exists a negligible function 𝗇𝖾𝗀𝗅(𝜆) such that the following holds for all 𝜆 ∈ ℕ:

Pr_{(𝑓,𝖺𝗎𝗑_𝑓)←ℱ_𝜆}[𝖵𝖾𝗋𝗂𝖿𝗒_𝑓(𝑥1, 𝑦1) = 1 ∧ 𝖵𝖾𝗋𝗂𝖿𝗒_𝑓(𝑥2, 𝑦2) = 1 : ((𝑥1, 𝑦1), (𝑥2, 𝑦2)) ← 𝒜(𝖮𝖳𝖯(𝑓), 𝖺𝗎𝗑_𝑓)] ≤ 𝗇𝖾𝗀𝗅(𝜆),

where 𝑥1 ≠ 𝑥2.
4.5 Relationships among the definitions
[Figure: implications among the one-time security definitions, from single-(physical-)query simulation-based security (Definition 4.4) to generalized strong operational security (Definition 4.24).]
Lemma 4.30. Suppose 𝖮𝖳𝖯 is a one-time program compiler that satisfies the single-query simulation-
based security definition (Definition 4.4) for a function family ℱ. Then, it also satisfies the single-query
classical-output simulation-based security definition (Definition 4.6) for ℱ.
Lemma 4.31. Suppose 𝖮𝖳𝖯 is a one-time program compiler that satisfies the single-query simulation-based
security definition (Definition 4.4) for a function family ℱ. Then, it also satisfies the single-effective-query
simulation-based security definition (Definition 4.8) for ℱ.
Lemma 4.32. Suppose 𝖮𝖳𝖯 is a one-time program compiler that satisfies the single-query classical-output simulation-based security (Definition 4.6) for a function family ℱ. Then, it satisfies strong operational security (Definition 4.21) for ℱ.
Proof. Looking into Definition 4.21, we can observe that the adversary and simulator in this definition are simply a special case of the classical-output adversary and simulator: they output two correctly evaluated input-output pairs.
Lemma 4.33. Suppose 𝖮𝖳𝖯 is a one-time program compiler that satisfies the single-effective-query simulation-based security (Definition 4.8) for a function family ℱ. Then, it satisfies generalized strong operational security (Definition 4.24) for ℱ.
Proof. The difference between Definition 4.24 (resp. Definition 4.16) and Definition 4.21 (resp. Definition 4.9) is that we can view the adversary/simulator as outputting a potentially quantum state together with its own choice of (𝑥1, 𝑟1) and (𝑥2, 𝑟2) in the challenge phase; the quantum state is then used to answer the challenger's challenges. Therefore, it is a special case of the single-physical/effective-query (depending on which oracle we give to 𝖲𝗂𝗆) simulation definition with quantum outputs, but not necessarily a special case of the simulation definition with classical outputs.
Then the state |𝜓1⟩ ⊗ · · · ⊗ |𝜓𝑚⟩ gives an overwhelming fraction of its amplitude to values 𝐯 = (𝑣1, . . . , 𝑣𝑚) for which 𝒪_{𝐴_𝑖^{𝑥_𝑖}}(𝑣_𝑖) = 1 for all 𝑖 ∈ [𝑚].
Then after applying 𝒪𝑓,𝐺,𝐀 to 𝒬, the state of the 𝒬 register gives an overwhelming fraction of
its amplitude to values (𝑥, 𝐯, 𝑦) such that 𝑦 = 𝑓 (𝑥, 𝐺(𝐯)).
Note that 𝐺(𝐯) is uniformly random over ℛ due to the randomness of 𝐺. Then the output
of 𝖤𝗏𝖺𝗅(1𝜆 , 𝖮𝖳𝖯(𝑓 ), 𝑥) is negligibly close in statistical distance to 𝑓 (𝑥, 𝑅), where 𝑅 is uniformly
random over ℛ.
Theorem 5.2. Let 𝜆 be the security parameter and let 𝑚, ℓ ∈ ℕ. For any function 𝑓 ∈ ℱ : 𝒳 × ℛ → {0, 1}^ℓ with 𝒳 = {0, 1}^𝑚, the OTP construction given in Figure 3 satisfies the single-effective-query simulation-based OTP security notion of Definition 4.8.
In general, 𝑓(𝑥, 𝑟) may or may not output its randomness 𝑟; this does not affect our proof.
or randomness. However, we remark that the SEQ simulation definition may not be very mean-
ingful when the randomness is small. For example, if |ℛ| is only polynomially large, then any
29
𝖦𝖾𝗇𝖾𝗋𝖺𝗍𝖾(1𝜆 , 𝑓 ):
{︃ {︃
1 if 𝑣 ∈ 𝐴𝑖 ∖{0}, 1 if 𝑣 ∈ 𝐴⊥
𝑖 ∖{0},
𝒪𝐴0 (𝑣) = and 𝒪𝐴1 (𝑣) =
𝑖 0 otherwise, 𝑖
0 otherwise.
(𝑥, 𝐯, 𝑢) otherwise
(︁ )︁
4. Output 𝖮𝖳𝖯(𝑓 ) = (|𝐴𝑖 ⟩)𝑖∈[𝑚] , 𝒪𝑓,𝐺,𝐀 .
𝖤𝗏𝖺𝗅(𝖮𝖳𝖯(𝑓 ), 𝑥):
measurement made to 𝑓 can at most disturb the program state by 1 − 1/𝗉𝗈𝗅𝗒. Thus, the adversary
may be able to perform a second query with a 1/𝗉𝗈𝗅𝗒 success rate.
The above theorem, combined with the relationships between security definitions in Section 4.5, directly gives the following corollary:
Corollary 5.3. For function families satisfying the unlearnability definitions in Definition 4.17 (or Definition 4.10, Definition 4.11, respectively), there exist secure one-time sampling programs for them in the classical oracle model with respect to security definition Definition 4.24 (or Definition 4.21, Definition 4.26, respectively).
Intuition. To gain intuition about the simulator, we first recall the differences between the real
and ideal worlds. In the real world, the OTP uses randomness 𝐺(𝑣) to evaluate 𝑓 , where 𝑣 is a
measurement of |𝐴⟩ in a basis corresponding to the chosen input 𝑥. If 𝑓 is sufficiently random,
then measuring 𝑓 (𝑥, 𝐺(𝑣)) may collapse |𝐴⟩, preventing further queries. On the other hand, in the
ideal world, the SEQ oracle uses randomness 𝐻(𝑥) to evaluate 𝑓 . If 𝑓 is sufficiently random, then
measuring 𝑓 (𝑥; 𝐻(𝑥)) may collapse the SEQ oracle’s internal state, preventing further queries.
The main gaps that the simulator needs to bridge between these worlds are the usage of |𝐴⟩
versus the usage of an internal state to control query access, as well as the usage of 𝐺(𝑣) versus
𝐻(𝑥). Since 𝐺 and 𝐻 are internal to the OTP oracle and SEQ oracle, respectively, the latter is
not an issue even if 𝑓 outputs its randomness directly. The simulator addresses the former by
maintaining a cache for subspace vectors. If it detects that the SEQ oracle will not permit further queries, it stores the most recent subspace vector locally. This collapses |𝐴⟩ in the view of the adversary, ensuring that a successful 𝑓 evaluation in the ideal world looks similar to an evaluation of 𝑓 using |𝐴⟩ in the real world.
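The bookkeeping in this intuition can be caricatured classically. In the sketch below, the constant randomness and dictionary-free oracle are stand-ins for the compressed-oracle machinery (assumptions for illustration only); the point is just the cache update: once the SEQ oracle commits, the simulator records the subspace vector and refuses any other one.

```python
class SEQOracle:
    """Locks onto the first queried input x; the constant 7 stands in for
    the per-input randomness H(x)."""
    def __init__(self, f):
        self.f, self.locked_x = f, None
    def query(self, x):
        if self.locked_x is None:
            self.locked_x = x
        if x != self.locked_x:
            return None          # no further effective queries permitted
        return self.f(x, 7)

class Simulator:
    """Caches the subspace vector v once the SEQ oracle commits, mimicking
    how measuring f(x, G(v)) would collapse |A> in the real world."""
    def __init__(self, oracle):
        self.oracle, self.vector_cache = oracle, None
    def answer(self, x, v):
        if self.vector_cache is not None and self.vector_cache != v:
            return (x, v, None)  # |A> already "collapsed" onto the cached vector
        y = self.oracle.query(x)
        if y is not None:
            self.vector_cache = v
        return (x, v, y)

sim = Simulator(SEQOracle(lambda x, r: x ^ r))
assert sim.answer(3, "v3") == (3, "v3", 4)     # first query succeeds, v3 cached
assert sim.answer(3, "v3") == (3, "v3", 4)     # repeating the same query is fine
assert sim.answer(5, "v5") == (5, "v5", None)  # a fresh vector is refused
```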
1. For each 𝑖 ∈ [𝑚], sample a subspace 𝐴𝑖 ⊆ 𝔽𝜆2 of dimension 𝜆/2 uniformly at random.
2. Initialize vector cache register 𝒱 = 𝒱1 × · · · × 𝒱𝑚 to |0𝜆 ⟩𝒱1 ⊗ · · · ⊗ |0𝜆 ⟩𝒱𝑚 .
3. Prepare an oracle 𝒪𝖲𝗂𝗆 as follows.
(a) 𝒪𝖲𝗂𝗆 acts on a query register 𝒬 = (𝒬𝑥 , 𝒬𝐯 , 𝒬𝑢 ), which contains superpositions over
states of the form |𝑥, 𝐯, 𝑢⟩, where 𝑥 ∈ 𝒳 , 𝐯 = (𝑣1 , . . . , 𝑣𝑚 ) ∈ {0, 1}𝑛·𝑚 , 𝑢 ∈ 𝒴.
(b) If 𝒱 contains 0^{𝑛·𝑚} or 𝐯, and if 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ {0} for all 𝑖 ∈ [𝑚], then 𝒪_𝖲𝗂𝗆 does the following:
i. Prepare |𝑥 ⊕ 1, 0, 0⟩ in a register 𝒬′ = (𝒬′𝑥 , 𝒬′𝑢 , ℬ ′ ), then query 𝑓$,1 on register
𝒬′ .
ii. If ℬ ′ has value 0, apply a CNOT from register 𝒬𝐯 to register 𝒱.
iii. Uncompute step 3(b)i
iv. Query 𝑓$,1 on |𝑥, 𝑢, 0⟩𝒬𝑥 ,𝒬𝑢 ,ℬ .
v. Prepare |𝑥 ⊕ 1, 0, 0⟩ in a register 𝒬′ = (𝒬′𝑥 , 𝒬′𝑢 , ℬ ′ ), then query 𝑓$,1 on register
𝒬′ .
vi. If ℬ ′ has value 0, apply a CNOT from register 𝒬𝐯 to register 𝒱.
vii. Uncompute step 3(b)v.
4. Output ((|𝐴_𝑖⟩)_{𝑖∈[𝑚]}, 𝒪_𝖲𝗂𝗆).
• 𝖧𝗒𝖻0 : The real distribution 𝖮𝖳𝖯(𝑓 ). Recall that 𝖮𝖳𝖯(𝑓 ) outputs an oracle 𝒪𝑓,𝐺,𝐴 which acts
as follows on input (𝑥, 𝑣, 𝑢):
1. Vector Check: It checks that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ {0} for all 𝑖 ∈ [𝑚]. If not, it immediately outputs (𝑥, 𝑣, 𝑢).
2. Evaluation: Compute 𝑢 ⊕ 𝑓(𝑥; 𝐺(𝑣)).
3. Output (𝑥, 𝑣, 𝑢 ⊕ 𝑓(𝑥; 𝐺(𝑣))).
• 𝖧𝗒𝖻1 : The only difference from 𝖧𝗒𝖻0 is that 𝐺 is implemented as a compressed oracle. 𝒪𝑓,𝐺,𝐴
maintains the compressed oracle’s database register 𝒟 internally. To evaluate 𝑓 in step 2, it
queries |𝑣, 0⟩𝒬′ to 𝐺 in register 𝒬′ to obtain |𝑣, 𝐺(𝑣)⟩𝒬′ , then applies the isometry
to registers 𝒬 and 𝒬′ , and finally queries 𝐺 on register 𝒬′ again to reset it to |𝑣, 0⟩𝒬′ .
• 𝖧𝗒𝖻2 : The only difference from 𝖧𝗒𝖻1 is in step 1 of 𝒪_{𝑓,𝐺,𝐴}. Instead of checking that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ {0}, it checks that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥).
• 𝖧𝗒𝖻3 : The only difference from 𝖧𝗒𝖻2 is we add a single-effective-query check to 𝒪_{𝑓,𝐺,𝐴}. It now answers basis state queries |𝑥, 𝑣, 𝑢⟩ as follows:
1. Vector Check: Check that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) for all 𝑖 ∈ [𝑚]. If not, immediately output (𝑥, 𝑣, 𝑢).
2. SEQ Check (New): Look inside 𝐺's compressed database register 𝒟 to see if there is an entry of the form (𝑣′, 𝑟) for some 𝑟 and 𝑣′ ∉ {0, 𝑣}. If so, then immediately output register 𝒬.
3. Evaluation: Compute |𝑥, 𝑣, 𝑢⟩ ↦→ |𝑥, 𝑣, 𝑢 ⊕ 𝑓(𝑥; 𝐺(𝑣))⟩ on register 𝒬. This involves querying the compressed oracle 𝐺 twice, as described in 𝖧𝗒𝖻1.
4. Output register 𝒬.
• 𝖧𝗒𝖻4 : The only difference from 𝖧𝗒𝖻3 is we add a caching routine to 𝒪_{𝑓,𝐺,𝐴}.8 The oracle maintains a register ℛ_𝑥 which is initialized to |0⟩. On receiving query (𝑥, 𝑣, 𝑢), the oracle 𝒪_{𝑓,𝐺,𝐴} does the following:
1. Vector Check: Check that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) for all 𝑖 ∈ [𝑚]. If not, immediately output (𝑥, 𝑣, 𝑢).
2. SEQ Check: Do the single-effective-query check that was added in 𝖧𝗒𝖻3.
3. Cache 1 (New): Look inside 𝐺's compressed database register 𝒟. If it contains a nonempty database 𝐷 ≠ ∅, then perform a CNOT from register 𝒬_𝑥 to register ℛ_𝑥.
4. Evaluation: Apply the isometry |𝑥, 𝑣, 𝑢⟩_𝒬 ↦→ |𝑥, 𝑣, 𝑢 ⊕ 𝑓(𝑥; 𝐺(𝑣))⟩_𝒬 to register 𝒬. This involves querying the compressed oracle 𝐺 twice, as described in 𝖧𝗒𝖻1.
5. Cache 2 (New): Look inside 𝐺's compressed database register 𝒟. If there is an entry of the form (𝑣′, 𝑟) for some 𝑟 and 𝑣′, then perform a CNOT from register 𝒬_𝑥 to register ℛ_𝑥.
6. Output register 𝒬.
• 𝖧𝗒𝖻5 : In this hybrid, 𝒪_{𝑓,𝐺,𝐴} swaps the roles of 𝑣 and 𝑥 in the cache and compressed oracle.9 The oracle, which we rename to 𝒪_{𝑓,𝐻,𝐴}, maintains a cache register 𝒱 and a compressed oracle 𝐻 : 𝒳 → ℛ instead of 𝐺 : 𝔽_2^𝑛 → ℛ. On query (𝑥, 𝑣, 𝑢), it does the following:
1. Vector Check: Check that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) for all 𝑖 ∈ [𝑚]. If not, immediately output (𝑥, 𝑣, 𝑢).
2. SEQ Check (Modified): Look inside 𝒱 to see if it contains some 𝑣′ ∉ {0, 𝑣}. If so, immediately output 𝒬.
8
Intuitively, this hybrid will ensure that whenever some 𝑣 corresponding to input 𝑥 is recorded, 𝑥 is also recorded.
Thus, the internal oracle state will look like |0, ∅⟩ or |𝑥, {(𝑣, 𝑟)}⟩ in 𝖧𝗒𝖻4 . This intuition is made formal in Claim 5.6.
9
Intuitively, this modifies the internal oracle state from |𝑥, {(𝑣, 𝑟)}⟩ to |𝑣, {(𝑥, 𝑟)}⟩, without modifying the case where
the oracle state would be |0, ∅⟩.
3. Cache 1 (Modified): Look inside 𝐻's compressed database register 𝒟. If it contains a nonempty database 𝐷 ≠ ∅, then perform a CNOT from register 𝒬_𝑣 to register 𝒱.
4. Evaluation (Modified): Apply the isometry |𝑥, 𝑣, 𝑢⟩_𝒬 ↦→ |𝑥, 𝑣, 𝑢 ⊕ 𝑓(𝑥; 𝐻(𝑥))⟩_𝒬 to register 𝒬. This involves querying the compressed oracle 𝐻 twice, analogously to the procedure in 𝖧𝗒𝖻1.
5. Cache 2 (Modified): Look inside 𝐻's compressed database register 𝒟. If it contains a nonempty database 𝐷 ≠ ∅, then perform a CNOT from register 𝒬_𝑣 to register 𝒱.
6. Output register 𝒬.
• 𝖧𝗒𝖻6 : The only difference from 𝖧𝗒𝖻5 is a change to the SEQ check in step 2. Instead of looking inside 𝒱 to see if it contains some 𝑣′ ∉ {0, 𝑣}, the oracle 𝒪_{𝑓,𝐻,𝐴} instead looks inside 𝐻's compressed database register 𝒟 to see if there is an entry of the form (𝑥′, 𝑟) for 𝑥′ ≠ 𝑥.
• 𝖧𝗒𝖻7 = 𝖲𝗂𝗆 : The only differences from 𝖧𝗒𝖻6 are in the caching routine in steps 3 and 5. It replaces each of these steps with the following procedure:
1. Prepare a |0⟩ state in register ℬ. Controlled on 𝐻’s database register having an entry of
the form (𝑥′ , 𝑟) for 𝑥′ ̸= 𝑥 ⊕ 1, apply a NOT operation to register ℬ.
2. If register ℬ contains |1⟩, apply a CNOT operation from register 𝒬𝑣 to register 𝒱.
3. Uncompute step 1.
𝖧𝗒𝖻0 is perfectly indistinguishable from 𝖧𝗒𝖻1 by the properties of a compressed random oracle.
We now show indistinguishability for each of the other sequential pairs of hybrid experiments.
Proof. The only difference between these is that 𝖧𝗒𝖻2 additionally returns early whenever some 𝑣_𝑖 ∈ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) ∖ {0}. This condition can only occur with negligible probability; otherwise, we could break direct product hardness for subspace states (Theorem 3.9) by embedding a challenge subspace state at a random index 𝑖*, using polynomially many queries to the challenge membership oracles to run 𝖧𝗒𝖻1, and, after every query, if 𝑣_𝑖 ∈ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) ∖ {0} for some 𝑖 ∈ [𝑚], measuring the query register to obtain 𝑣. Since 𝑖* is independent of the adversary's view, 𝑖 = 𝑖* with probability 1/𝑚 whenever this occurs. Note that 𝑚 is polynomial in 𝜆.
Proof. It is sufficient to show that the single-effective-query check added in step 2 causes an early
output only with probability 𝗇𝖾𝗀𝗅(𝜆). We reduce this fact to the direct product hardness of sub-
space states (Theorem 3.9). Say that some adversary 𝒜𝖮𝖳𝖯 caused this event to occur with notice-
able probability. We construct an adversary 𝒜𝐷𝑃 to break direct product hardness as follows:
1. 𝒜_𝐷𝑃 receives from the challenger (|𝐴*⟩, 𝒪_{𝐴*}, 𝒪_{𝐴*^⊥}) for some subspace 𝐴* ⊆ 𝔽_2^𝑛 of dimension 𝑛/2 sampled uniformly at random.
2. 𝒜_𝐷𝑃 samples an index 𝑖* ←$ [𝑚] in which to embed 𝐴*, and sets (|𝐴_{𝑖*}⟩, 𝒪_{𝐴_{𝑖*}}, 𝒪_{𝐴_{𝑖*}^⊥}) = (|𝐴*⟩, 𝒪_{𝐴*}, 𝒪_{𝐴*^⊥}).
4. 𝒜_𝐷𝑃 uses (|𝐴_𝑖⟩, 𝒪_{𝐴_𝑖}, 𝒪_{𝐴_𝑖^⊥})_{𝑖∈[𝑚]} to construct 𝖮𝖳𝖯(𝑓), as described in 𝖧𝗒𝖻3. Then it runs 𝒜_𝖮𝖳𝖯 on this construction, with the following modification: in step 2, it measures the early-return condition instead of checking it coherently.
5. 𝒜_𝐷𝑃 terminates 𝖧𝗒𝖻3 as soon as the measurement result indicates to return early. Then, it measures register 𝒬_𝑣 and the oracle database 𝒟 to obtain 𝑣 and (𝑣′, 𝑟). It outputs 𝑣_{𝑖*} and 𝑣′_{𝑖*}.
Let 𝜈(𝜆) be the probability that 𝒜𝐷𝑃 finds and outputs two vectors 𝑣𝑖* and 𝑣𝑖′* . 𝜈(𝜆) is non-
negligible because otherwise, 𝒜𝖮𝖳𝖯 would trigger the early return condition with only negligible
probability.
By definition of the early return condition, there is at least one index where 𝑣_𝑖 ≠ 𝑣′_𝑖. With probability ≥ 1/𝑚, this index is 𝑖*, because the value of 𝑖* is independent of 𝒜_𝖮𝖳𝖯's view. Furthermore, by definition of 𝖧𝗒𝖻2, the only vectors 𝑤 queried to 𝐺 are those that satisfy 𝑤_𝑖 ∈ (𝐴_𝑖 ∪ 𝐴_𝑖^⊥) ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥), so at all times, the entries (𝑤, 𝑟) in 𝐺's database satisfy this form. Therefore 𝑣′_𝑖 ∈ (𝐴_𝑖 ∪ 𝐴_𝑖^⊥) ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥). Similarly, if step 2 is reached, then 𝑣_𝑖 ∈ (𝐴_𝑖 ∪ 𝐴_𝑖^⊥) ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥). Therefore whenever 𝑣_{𝑖*} ≠ 𝑣′_{𝑖*}, 𝒜_𝐷𝑃 wins the direct product hardness game. This occurs with probability ≥ 𝜈(𝜆)/𝑚. By Theorem 3.9, 𝜈(𝜆) must be negligible.
To aid in showing that 𝖧𝗒𝖻3 is indistinguishable from 𝖧𝗒𝖻4 , we prove that the cache introduced
in 𝖧𝗒𝖻4 maintains the invariant that 𝑥 is cached if and only if some 𝑣 uniquely corresponding to 𝑥
is recorded in the compressed oracle 𝐺.
Claim 5.6. After every query in 𝖧𝗒𝖻4, the internal state of 𝒪_{𝑓,𝐺,𝐴}, consisting of the cache register ℛ_𝑥 and 𝐺's database register 𝒟, lies entirely within the space spanned by states of the form

|0⟩_{ℛ_𝑥} ⊗ |∅⟩_𝒟 and |𝑥⟩_{ℛ_𝑥} ⊗ |{(𝑣, 𝑟)}⟩_𝒟

for some 𝑥 ∈ 𝒳, 𝑟 ∈ ℛ, and 𝑣 ∈ {0, 1}^{𝑚·𝑛} such that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) for all 𝑖 ∈ [𝑚]. In other words, if the state at time 𝑡 is 𝜌^𝑡_{ℛ_𝑥,𝒟} and Π_{𝖧𝗒𝖻4} projects onto this space, then Tr[Π_{𝖧𝗒𝖻4} 𝜌^𝑡_{ℛ_𝑥,𝒟}] = 1.
Proof. This is true when the oracle is initialized, since registers ℛ_𝑥, 𝒟 are initialized to |0, ∅⟩. We now show that 𝒪_{𝑓,𝐺,𝐴} leaves the space determined by Π_{𝖧𝗒𝖻4} invariant. It suffices to consider the action of 𝒪_{𝑓,𝐺,𝐴} on these basis states.
We go through the operations of 𝒪_{𝑓,𝐺,𝐴} step by step. If step 1 (vector check) does not return early, then 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) for all 𝑖 ∈ [𝑚]. In step 2 (SEQ check), there are two cases. If 𝐷 has an entry (𝑣′, 𝑟) with 𝑣′ ∉ {0, 𝑣}, then the SEQ check causes 𝒪_{𝑓,𝐺,𝐴} to return register 𝒬 without modifying its internal state. On the other hand, if 𝑣 = 𝑣′ or if 𝐷 = ∅, then 𝒪_{𝑓,𝐺,𝐴} proceeds to step 3.
We claim that at the end of step 3 (cache 1), ℛ_𝑥 contains |0⟩. If 𝐷 = ∅, step 3 leaves ℛ_𝑥 as |0⟩. If 𝑣 = 𝑣′, 𝒪_{𝑓,𝐺,𝐴} applies a CNOT operation from 𝒬_𝑥 to ℛ_𝑥. Before this operation, ℛ_𝑥 contained a value 𝑥′ such that 𝑣′_𝑖 = 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥′_𝑖} for all 𝑖 ∈ [𝑚]. Since 𝑣_𝑖 ∉ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) for all 𝑖 ∈ [𝑚], 𝑣 uniquely determines 𝑥, so 𝑥′ = 𝑥. Therefore after applying the CNOT, register ℛ_𝑥 contains |𝑥 ⊕ 𝑥⟩ = |0⟩.
In step 4 (evaluation), the oracle queries 𝐺 on 𝑣. During this, the compressed oracle modifies
its database register 𝒟, but the new databases 𝐷′ in the support of the state satisfy 𝐷′ (𝑤) = 𝐷(𝑤)
for all 𝑤 ̸= 𝑣. Since 𝐷(𝑤) = ⊥ for all 𝑤 ̸= 𝑣, register 𝒟 is supported on |∅⟩ and |{(𝑣, 𝑟)}⟩ for some
𝑟 ∈ ℛ at the end of this step.
Applying step 5 (cache 2) produces |𝑥⟩ℛ𝑥 ⊗|{(𝑣, 𝑟)}⟩𝒟 when 𝐷 = {(𝑣, 𝑟)} and produces |0⟩ℛ𝑥 ⊗
|∅⟩𝒟 when 𝐷 = ∅. Step 6 does not modify these registers further.
Proof. By Claim 5.6, the state of registers (ℛ_𝑥, 𝒟) in 𝖧𝗒𝖻4 is always supported on states of the form |0, ∅⟩ or |𝑥, {(𝑣, 𝑟)}⟩ where 𝑥 ∈ 𝒳, 𝑟 ∈ ℛ, and 𝑣 ∈ {0, 1}^{𝑚·𝑛} such that 𝑣_𝑖 ∈ 𝐴_𝑖^{𝑥_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥) for all 𝑖 ∈ [𝑚]. This condition implies that 𝑣 uniquely determines 𝑥, which we now denote by 𝑥_𝑣. Therefore there is an isometry mapping

|0, {(𝑣, 𝑟)}⟩_{ℛ_𝑥,𝒟} ↦→ |𝑥_𝑣, {(𝑣, 𝑟)}⟩_{ℛ_𝑥,𝒟}.

Let 𝑈 be a unitary which implements this isometry. The state of 𝖧𝗒𝖻4 at any time 𝑡 can be generated by running 𝖧𝗒𝖻3 until time 𝑡 with the following modification: before each query, apply 𝑈†, and after answering it, apply 𝑈.
and acting as the identity on all orthogonal states. We show by induction over the time 𝑡 that

(𝐼_{𝒜,𝒬} ⊗ 𝑈_{𝒱,𝒟}) |𝜓^{𝖧𝗒𝖻4}_𝑡⟩_{𝒜,𝒬,𝒱,𝒟} = |𝜓^{𝖧𝗒𝖻5}_𝑡⟩_{𝒜,𝒬,𝒱,𝒟}.

This is clearly true for 𝑡 = 0, since both hybrids initialize 𝒱, 𝒟 to |0, ∅⟩, on which 𝑈 acts as the identity. Now consider some time 𝑡. By the inductive hypothesis and linearity of quantum computation, it suffices to consider the actions of 𝒪^{𝖧𝗒𝖻4}_{𝑓,𝐺,𝐴} and 𝒪^{𝖧𝗒𝖻5}_{𝑓,𝐻,𝐴} on the same basis state |𝑥, 𝑣, 𝑢⟩_𝒬 ⊗ |𝑥′, 𝐷⟩_{ℛ_𝑥,𝒟}. Furthermore, by Claim 5.6, we may restrict ourselves to basis states of this form where 𝑣′_𝑖 ∈ 𝐴_𝑖^{𝑥′_𝑖} ∖ (𝐴_𝑖 ∩ 𝐴_𝑖^⊥).
Observe that step 1 (vector check) of 𝒪^{𝖧𝗒𝖻4}_{𝑓,𝐺,𝐴} and 𝒪^{𝖧𝗒𝖻5}_{𝑓,𝐻,𝐴} are identical, and that step 2 (SEQ check) is also identical after applying 𝑈 to registers ℛ_𝑥 = 𝒱 and 𝒟. If either step prompts an early return, then the states are identical. Otherwise, the input state is now constrained to (1) 𝐷 = ∅ and 𝑥′ = 0 or (2) 𝐷 ∈ {{(𝑣, 𝑟)}}_{𝑟∈ℛ} and 𝑥 = 𝑥′. In either case, after step 3, the states of 𝖧𝗒𝖻4 and 𝖧𝗒𝖻5 are, respectively:
|𝑥, 𝑣, 𝑢⟩_𝒬 ⊗ |0, 𝐷_𝑣⟩_{ℛ_𝑥,𝒟} and |𝑥, 𝑣, 𝑢⟩_𝒬 ⊗ |0, 𝐷_𝑥⟩_{𝒱,𝒟},
where either (1) 𝐷_𝑣 = 𝐷_𝑥 = ∅ or (2) 𝐷_𝑣 = {(𝑣, 𝑟)} and 𝐷_𝑥 = {(𝑥, 𝑟)} for some 𝑟 ∈ ℛ. In either case, 𝐷_𝑣 is related to 𝐷_𝑥 by 𝐷_𝑣(𝑣) = 𝐷_𝑥(𝑥), 𝐷_𝑣(𝑥) = ⊥ = 𝐷_𝑥(𝑣), and 𝐷_𝑣(𝑤) = 𝐷_𝑥(𝑤) for all 𝑤 ∉ {𝑥, 𝑣}.
By Claim 3.6, the states after each query to 𝐺 (respectively 𝐻) in step 4 (evaluation) are identical up to renaming 𝑣 to 𝑥 in the compressed database. Thus, at the end of step 4, the states are, respectively:

𝛼_∅ |𝑥, 𝑣, 𝜓_∅⟩_𝒬 ⊗ |0, ∅⟩_{ℛ_𝑥,𝒟} + Σ_𝑟 𝛼_𝑟 |𝑥, 𝑣, 𝑢 ⊕ 𝑓(𝑥; 𝑟)⟩_𝒬 ⊗ |0, {(𝑣, 𝑟)}⟩_{ℛ_𝑥,𝒟}

𝛼_∅ |𝑥, 𝑣, 𝜓_∅⟩_𝒬 ⊗ |0, ∅⟩_{𝒱,𝒟} + Σ_𝑟 𝛼_𝑟 |𝑥, 𝑣, 𝑢 ⊕ 𝑓(𝑥; 𝑟)⟩_𝒬 ⊗ |0, {(𝑥, 𝑟)}⟩_{𝒱,𝒟}
Claim 5.11. 𝖧𝗒𝖻6 and 𝖧𝗒𝖻7 are perfectly indistinguishable.
Proof. Observe that the new caching procedure in 𝖧𝗒𝖻7 for steps 3 and 5 applies a CNOT operation
from register 𝒬𝑣 to register 𝒱 if and only if 𝐻’s database register contains an entry (𝑥′ , 𝑟) for
𝑥′ ̸= 𝑥 ⊕ 1. The single-effective-query check from step 2 ensures that 𝐻’s database register is in
the span of |𝐷⟩ where 𝐷 = ∅ or 𝐷 contains exactly one entry of the form (𝑥, 𝑟). Since 𝑥 ≠ 𝑥 ⊕ 1, the new caching procedure applies a CNOT if and only if 𝐻's database register contains an entry of the form (𝑥, 𝑟).
On the other hand, steps 3 and 5 in 𝖧𝗒𝖻6 apply the same CNOT operation if 𝐻's database register is non-empty. The single-effective-query check in step 2 ensures that this occurs only when 𝐻 contains an entry of the form (𝑥, 𝑟). This is identical to the new caching procedure in 𝖧𝗒𝖻7.
• High Min-Entropy. If 𝑓 (𝑥; 𝑟) is not sufficiently dependent on the randomness 𝑟, then mea-
suring 𝑓 (𝑥; 𝑟) may only gently measure 𝑟. In this case, the SEQ oracle will allow additional
queries with some lower, but still inverse polynomial, amplitude.
• Unforgeability. If it is possible to compute some 𝑓 (𝑥′ ; 𝑟′ ) given only 𝑓 (𝑥; 𝑟), then the adver-
sary could learn two function evaluations using one query.
We emphasize that any reasonable notion of unlearnability must be average-case over the choice
of 𝑓 from some family. Otherwise, an adversary could trivially learn everything about 𝑓 by receiv-
ing it as auxiliary input.
Truly Random Functions. As a concrete example, a truly random function exemplifies both of
the above properties; it has maximal entropy on every input and 𝑓 (𝑥; 𝑟) is completely independent
of 𝑓 (𝑥′ ; 𝑟′ ). Indeed, we are able to show that any adversary with SEQ access to a truly random
function cannot output two input/output pairs, except with negligible probability.
Proposition 5.12. Random functions with superpolynomial range size are single-effective-query 𝗇𝖾𝗀𝗅(𝜆)-unlearnable (Definition 4.11). More formally, for functions ℱ : 𝒳 × ℛ → 𝒴, where |ℛ| = 2^𝜆 and |𝒴| is superpolynomial in 𝜆, and for all (non-uniform) quantum polynomial-time adversaries 𝒜, there exists a negligible function 𝗇𝖾𝗀𝗅(·) such that:

Pr_𝑓 [𝑓(𝑥1, 𝑟1) = 𝑦1 ∧ 𝑓(𝑥2, 𝑟2) = 𝑦2 : ((𝑥1, 𝑟1, 𝑦1), (𝑥2, 𝑟2, 𝑦2)) ← 𝒜^{𝑓_{$,1}}(1^𝜆)] ≤ 𝗇𝖾𝗀𝗅(𝜆), where (𝑥1, 𝑟1) ≠ (𝑥2, 𝑟2),

where 𝑓 is sampled uniformly at random from ℱ and 𝑓_{$,1} is the single-effective-query oracle for 𝑓 defined in Section 4.2.
To prove this claim, we introduce the following technical lemma about compositions 𝑓 ∘ 𝐻 of random functions, where 𝑓 and 𝐻 are implemented as compressed oracles. It shows that if 𝑓 records a query 𝑥‖𝑦, then 𝐻 must record a corresponding query 𝑥 with 𝐻(𝑥) = 𝑦. Intuitively, this implies that 𝑓 can only record a single query, since that is the restriction on 𝐻. Any input/output pairs that the adversary learns will be recorded by 𝑓, so they can only learn a single one. We prove the technical lemma in Appendix C.2.11
Here, 𝑞 is the number of queries the adversary has made. Since 𝐷_𝐻 contains at most one entry at a time (Claim 4.7) and ℛ has superpolynomial size, 𝑝′ must be negligible in 𝜆. Since 𝒴 also has superpolynomial size, 𝑝 must be negligible as well.
11
Appendix C.2 also contains a related technical lemma which shows that if an adversary has access to 𝐻 ∘ 𝐺 and 𝐻
records an entry (𝑦, 𝑧), then 𝐺 records an entry (𝑥, 𝑦) for some 𝑥. The difference from the lemma mentioned here is that
𝐻 does not also take 𝐺’s input 𝑥 as part of its input.
As an immediate corollary of Proposition 5.12, pseudorandom functions are SEQ-unlearnable under Definition 4.11.
Pairwise Independent Functions. We are also able to relax the requirement that 𝑓 is a truly
random function to just require that it is pairwise independent and has high entropy. Intuitively,
the pairwise independence plays the role of unforgeability by ensuring the adversary cannot use
one evaluation 𝑓 (𝑥; 𝑟) to learn anything about other evaluations 𝑓 (𝑥′ ; 𝑟′ ). We prove the following
statement in Appendix A.1.
1. Pairwise independence: For any (𝑥, 𝑟, 𝑦), (𝑥′ , 𝑟′ , 𝑦 ′ ) ∈ 𝒳 × ℛ × 𝒴 such that (𝑥, 𝑟) ̸= (𝑥′ , 𝑟′ ),
Pr𝑓 [𝑓 (𝑥, 𝑟) = 𝑦 ∧ 𝑓 (𝑥′ , 𝑟′ ) = 𝑦 ′ ] = Pr𝑓 [𝑓 (𝑥, 𝑟) = 𝑦] · Pr𝑓 [𝑓 (𝑥′ , 𝑟′ ) = 𝑦 ′ ].
2. High Randomness: There is a negligible function 𝜈(𝜆) such that for any (𝑥, 𝑟, 𝑦) ∈ 𝒳 × ℛ × 𝒴, Pr_𝑓 [𝑓(𝑥, 𝑟) = 𝑦] ≤ 𝜈(𝜆).
3. 1/|ℛ| = 𝗇𝖾𝗀𝗅(𝜆).
Then ℱ is SEQ-𝗇𝖾𝗀𝗅(𝜆)-unlearnable.
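Condition 1 is the standard pairwise-independence product rule. The affine family 𝑓_{𝑎,𝑏}(𝑥) = 𝑎𝑥 + 𝑏 mod 𝑝 over a prime 𝑝 is the textbook example, and for a small prime the rule can be verified exactly by enumerating all keys (a sketch of condition 1 only; the paper's families additionally need the high-randomness and large-|ℛ| conditions):

```python
from itertools import product

# Affine family f_{a,b}(x) = a*x + b mod p over keys (a, b); p = 5 is small
# enough to enumerate every key exactly.
p = 5

def f(a, b, x):
    return (a * x + b) % p

def joint_prob(x1, y1, x2, y2):
    # Exact probability over a uniform key (a, b) that both outputs match.
    hits = sum(1 for a, b in product(range(p), repeat=2)
               if f(a, b, x1) == y1 and f(a, b, x2) == y2)
    return hits / p**2

for x1, y1, x2, y2 in product(range(p), repeat=4):
    if x1 != x2:
        # For distinct inputs the 2x2 linear system has a unique key (a, b),
        # so the joint probability is 1/p^2 = Pr[f(x1)=y1] * Pr[f(x2)=y2].
        assert joint_prob(x1, y1, x2, y2) == 1 / p**2
```

The uniqueness of the solving key is exactly why one evaluation reveals nothing about any other, which is the "unforgeability" role pairwise independence plays in the proposition.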
Definition 5.15. Let ℱ = {𝑓 : 𝒳 → 𝒴}𝑓 be a function family associated with a distribution 𝖣𝗂𝗌𝗍𝗋ℱ over
function and auxiliary input pairs (𝑓, 𝖺𝗎𝗑𝑓 ) and let 𝑃 be a predicate on 𝑓 , 𝖺𝗎𝗑𝑓 , and a pair of strings (𝑥, 𝑠).
ℱ and 𝖣𝗂𝗌𝗍𝗋ℱ are quantum blind unforgeable with respect to 𝑃 if for every QPT adversary 𝒜 and blinding set 𝐵 ⊂ 𝒳 × ℛ,

Pr[𝑥 ∈ 𝐵 ∧ 𝑃 (𝑓, 𝖺𝗎𝗑𝑓 , 𝑥, 𝑠) = 𝖠𝖼𝖼𝖾𝗉𝗍 : (𝑓, 𝖺𝗎𝗑𝑓 ) ← 𝖣𝗂𝗌𝗍𝗋ℱ , (𝑥, 𝑠) ← 𝒜^{𝑓𝐵}(𝖺𝗎𝗑𝑓 )] ≤ 𝗇𝖾𝗀𝗅(𝜆),

where 𝑓𝐵 denotes a (quantumly-accessible) oracle that takes as input 𝑥, then outputs 𝑓 (𝑥) if 𝑥 ∉ 𝐵, and otherwise outputs ⊥.
We will show that if a randomized function 𝑓 is quantum blind unforgeable and uses the
sampled randomness in a particular way, then it is hard to come up with two input/output pairs
of 𝑓 when given SEQ access to it.
Proposition 5.16. Let ℱ = {𝑓 : 𝒳 × ℛ → 𝒴}𝑓 be a function family associated with a distribution 𝖣𝗂𝗌𝗍𝗋. Let 𝐺 : 𝒳 × ℛ′ → ℛ be a random function where |ℛ′ |2 /|ℛ| = 𝗇𝖾𝗀𝗅(𝜆), and define 𝑓𝐺 : 𝒳 × ℛ′ → 𝒴 by 𝑓𝐺 (𝑥; 𝑟) = 𝑓 (𝑥; 𝐺(𝑥, 𝑟)). Define 𝖣𝗂𝗌𝗍𝗋𝐺 as the distribution which samples 𝑓 ← 𝖣𝗂𝗌𝗍𝗋, then outputs 𝑓𝐺 .
If (ℱ, 𝖣𝗂𝗌𝗍𝗋) is blind-unforgeable with respect to the predicate that outputs 𝖠𝖼𝖼 on input (𝑓, (𝑥, 𝑟), 𝑦) such that 𝑓 (𝑥, 𝑟) = 𝑦, then (ℱ𝐺 , 𝖣𝗂𝗌𝗍𝗋𝐺 ) is single-effective-query unlearnable under Definition 4.26.
Intuitively, because the blind unforgeability property is also being applied to 𝑟, it also ensures
that 𝑓 must be highly randomized. For example, if 𝑓 ignored its randomness 𝑟, then it would be
trivially forgeable simply by outputting the same 𝑓 (𝑥; 𝑟) with two different 𝑟 and 𝑟′ .
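The wrapper 𝑓𝐺 is easy to state in code. In the sketch below, HMAC-SHA256 stands in for both the blind-unforgeable 𝑓 and the random function 𝐺; both stand-ins (and the key names) are our own, purely to illustrate how the small randomness 𝑟 ∈ ℛ′ is stretched into the large randomness space ℛ:

```python
# The randomness-refreshing wrapper f_G(x; r) = f(x; G(x, r)) from
# Proposition 5.16, with keyed HMAC-SHA256 as a toy model of both the
# underlying randomized function f and the random function G.
import hmac, hashlib

def H(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

kf, kg = b"f-key", b"G-key"   # illustrative keys, not part of the paper

def f(x: bytes, rho: bytes) -> bytes:   # underlying randomized function
    return H(kf, x, rho)

def G(x: bytes, r: bytes) -> bytes:     # models the random function G
    return H(kg, x, r)

def f_G(x: bytes, r: bytes) -> bytes:   # r ranges over the small space R'
    return f(x, G(x, r))

# distinct small-randomness values map to distinct large-randomness values
# (except with negligible probability), so f_G stays highly randomized per x
assert f_G(b"x", b"r1") != f_G(b"x", b"r2")
```

Because 𝐺 depends on 𝑥 as well as 𝑟, reusing the same 𝑟 on two different inputs still produces independent-looking internal randomness.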
To prove this claim, we use another technical lemma which shows that adversaries who can
find images of 𝐺 must know corresponding preimages. Intuitively, it will allow us to show that
an adversary who finds an input/output pair ((𝑥, 𝑟), 𝑦) must know an 𝑟′ such that 𝐺(𝑥, 𝑟′ ) = 𝑟.
We prove the technical lemma in Appendix C.3.
Lemma 5.17. Let 𝐺 : 𝒳1 × 𝒳2 → 𝒴 be a random function where |𝒳2 | < |𝒴|. Consider an oracle algorithm 𝐴 that makes 𝑞 queries to 𝐺, then outputs two vectors of 𝑘 values 𝐱(1) = (𝑥1(1) , . . . , 𝑥𝑘(1) ) and 𝐲 = (𝑦1 , . . . , 𝑦𝑘 ). Let 𝑝 be the probability that for every 𝑖, there exists an 𝑥𝑖(2) ∈ 𝒳2 such that 𝐺(𝑥𝑖(1) , 𝑥𝑖(2) ) = 𝑦𝑖 .
Now consider running the same experiment where 𝐺 is instead implemented as a compressed oracle, and measuring its database register after 𝐴 outputs to obtain 𝐷. Let 𝑝′ be the probability that for every 𝑖, there exists an 𝑥𝑖(2) ∈ 𝒳2 such that 𝐷(𝑥𝑖(1) ‖𝑥𝑖(2) ) = 𝑦𝑖 . If 𝑘 and 𝑞 are 𝗉𝗈𝗅𝗒(𝜆) and |𝒳2 |𝑘 /|𝒴| = 𝗇𝖾𝗀𝗅(𝜆), then¹²

𝑝 ≤ 𝑝′ + 𝗇𝖾𝗀𝗅(𝜆)
Proof of Proposition 5.16. We first claim that no adversary can output a tuple ((𝑥, 𝑟), 𝑦) such that
𝑓 (𝑥, 𝑟) = 𝑦 and 𝑟 is not in the image of 𝐺(𝑥, ·), except with negligible probability. This follows
from quantum blind unforgeability along with the observation that the SEQ oracle for 𝑓𝐺 can be
evaluated using a list of all evaluations 𝑓 (𝑥, 𝐺(𝑥, 𝑟′ )) for 𝑟′ ∈ ℛ′ .
Second, we claim that if an adversary outputs two valid input/output tuples ((𝑥1 , 𝑟1 ), 𝑦1 ) and
((𝑥2 , 𝑟2 ), 𝑦2 ) where 𝑥1 ̸= 𝑥2 , then with overwhelming probability 𝑟1 ̸= 𝑟2 . Say the adversary did
so with probability 𝑝. By the previous claim, whenever the adversary succeeds, there exist 𝑟1′ and
𝑟2′ such that 𝑟1 = 𝐺(𝑥1 , 𝑟1′ ) and 𝑟2 = 𝐺(𝑥2 , 𝑟2′ ), except with negligible probability. If 𝑟1 = 𝑟2 , then
𝐺(𝑥1 , 𝑟1′ ) = 𝐺(𝑥2 , 𝑟2′ ). Thus, we could find a collision in 𝐺 with probability 𝑝/|ℛ′ |2 − 𝗇𝖾𝗀𝗅(𝜆) by guessing 𝑟1′ and 𝑟2′ . Lemma 3.5 shows that after 𝑞 queries, the probability of finding a collision is 𝑂(𝑞 3 /|ℛ|). Since |ℛ′ |2 /|ℛ| = 𝗇𝖾𝗀𝗅(𝜆), we have 𝑝 ≤ 𝑂(𝑞 3 |ℛ′ |2 /|ℛ|) + 𝗇𝖾𝗀𝗅(𝜆) = 𝗇𝖾𝗀𝗅(𝜆).
Lemma 5.17 shows that if we were to implement 𝐺 as a compressed oracle, then whenever the
adversary finds two valid input/output tuples where 𝑟1 ̸= 𝑟2 , with overwhelming probability 𝐺’s
compressed database contains entries (𝑥′1 ‖𝑟1′ , 𝑟1 ) and (𝑥′2 ‖𝑟2′ , 𝑟2 ). By Lemma 5.13, whenever this
occurs, 𝐻 also contains two entries, except with negligible probability. However, 𝐻 never contains
more than one entry (Claim 4.7). Combining all of these facts together, any QPT adversary given
SEQ access to 𝑓𝐺 ← 𝖣𝗂𝗌𝗍𝗋𝐺 cannot output two distinct tuples ((𝑥1 , 𝑟1 ), 𝑦1 ) and ((𝑥2 , 𝑟2 ), 𝑦2 ) such
that 𝑓 (𝑥1 , 𝑟1 ) = 𝑦1 and 𝑓 (𝑥2 , 𝑟2 ) = 𝑦2 , except with negligible probability.
In Section 8, we use the techniques developed in this section to compile any signature scheme satisfying quantum blind unforgeability into one supporting signature tokens, almost without modifying the verification process.
¹² We remark that the reliance on the number of queries is unlikely to be tight. However, this bound is sufficient for our purposes since we will anyway combine it with results that require a query-bounded adversary.
6 Construction in the Plain Model
In this section, we give a construction of a one-time sampling program in the plain model for
constrained PRFs. We prove the following theorem:
Theorem 6.1. Assuming the security of post-quantum indistinguishability obfuscation and LWE (or alternatively, assuming sub-exponentially secure iO and OWFs), there exist secure one-time sampling programs for constrained PRFs 𝖯𝖱𝖥 : {0, 1}𝑘 × {0, 1}𝑛 × {0, 1}ℓ → {0, 1}𝑚 with respect to the weak operational security definition (Definition 4.26). Here 𝜆, 𝑘, 𝑚 ∈ ℕ and ℓ ≥ 𝑛 · 𝜆; {0, 1}𝑘 is the key space for the PRF, {0, 1}𝑛 the input space, and {0, 1}ℓ the randomness space.
6.1 Preliminaries
6.1.1 Indistinguishability Obfuscation
Definition 6.2 (Indistinguishability Obfuscator (iO) [BGI+ 01, GGH+ 16, SW14]). A uniform PPT
machine 𝗂𝖮 is an indistinguishability obfuscator for a circuit class {𝒞𝜆 }𝜆∈ℕ if the following conditions are
satisfied:
• For all 𝜆, all 𝐶 ∈ 𝒞𝜆 , all inputs 𝑥, we have
Pr[𝐶̂(𝑥) = 𝐶(𝑥) : 𝐶̂ ← 𝗂𝖮(1𝜆 , 𝐶)] = 1
• (Post-quantum security): For all (not necessarily uniform) QPT adversaries (𝖲𝖺𝗆𝗉, 𝐷), the following
holds: if Pr[∀𝑥, 𝐶0 (𝑥) = 𝐶1 (𝑥) : (𝐶0 , 𝐶1 , 𝜎) ← 𝖲𝖺𝗆𝗉(1𝜆 )] > 1−𝛼(𝜆) for some negligible function
𝛼, then there exists a negligible function 𝛽 such that:
|Pr[𝐷(𝜎, 𝗂𝖮(1𝜆 , 𝐶0 )) = 1 : (𝐶0 , 𝐶1 , 𝜎) ← 𝖲𝖺𝗆𝗉(1𝜆 )] − Pr[𝐷(𝜎, 𝗂𝖮(1𝜆 , 𝐶1 )) = 1 : (𝐶0 , 𝐶1 , 𝜎) ← 𝖲𝖺𝗆𝗉(1𝜆 )]| ≤ 𝛽(𝜆)
Whenever we assume the existence of 𝗂𝖮 in the rest of the paper, we refer to 𝗂𝖮 for the class of
polynomial-size circuits, i.e. when 𝒞𝜆 is the collection of all circuits of size at most 𝜆.
• Output. 𝗌𝗁𝖮 outputs a circuit 𝑆̂ that computes membership in 𝑆. Precisely, let 𝑆(𝑥) be the function that decides membership in 𝑆. Then there exists a negligible function 𝗇𝖾𝗀𝗅 such that

Pr[𝑆̂(𝑥) = 𝑆(𝑥) ∀𝑥 : 𝑆̂ ← 𝗌𝗁𝖮(𝑆)] ≥ 1 − 𝗇𝖾𝗀𝗅(𝑛)
• Security. For security, consider the following game between an adversary and a challenger.
– The adversary submits to the challenger a subspace 𝑆0 of dimension 𝑑0 .
– The challenger samples a uniformly random bit 𝑏 ← {0, 1} and a uniformly random subspace 𝑆1 ⊆ 𝔽𝑛 of dimension 𝑑1 such that 𝑆0 ⊆ 𝑆1 . It then runs 𝑆̂ ← 𝗌𝗁𝖮(𝑆𝑏 ), and gives 𝑆̂ to the adversary.
– The adversary makes a guess 𝑏′ for 𝑏.
𝗌𝗁𝖮 is secure if all QPT adversaries have negligible advantage in this game.
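The plaintext functionality that 𝗌𝗁𝖮 hides is just subspace membership over 𝔽₂. One compact way to decide it is to row-reduce the generators once and then reduce each query vector against the resulting basis; encoding vectors of 𝔽₂ⁿ as 𝑛-bit integers is our own illustrative choice:

```python
# Membership test for a subspace S = span(gens) of F_2^n, with vectors
# encoded as n-bit integers (bit i = coordinate i). This is the circuit
# S(x) that shO obfuscates, not the obfuscator itself.
def make_membership(gens, n):
    basis = [0] * n                      # basis[i] has leading bit i (or is 0)

    def reduce(v):
        # eliminate set bits from high to low using the stored pivots
        for i in range(n - 1, -1, -1):
            if (v >> i) & 1 and basis[i]:
                v ^= basis[i]
        return v

    for g in gens:
        r = reduce(g)
        if r:                            # g is independent of current basis
            basis[r.bit_length() - 1] = r
    return lambda x: reduce(x) == 0      # x ∈ S  iff  x reduces to 0

S = make_membership([0b0011, 0b0110], n=4)   # S = span{0011, 0110}
assert S(0b0101) and S(0) and not S(0b0001)
```

The same routine, run on a basis of the orthogonal complement, decides membership in 𝑆^⊥, which is how both the primal and dual checks in this section can be realized.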
Zhandry [Zha19b] gives a construction of a subspace hiding obfuscator based on one-way
functions and 𝗂𝖮.
Theorem 6.4 (Theorem 6.3 in [Zha19b]). If injective one-way functions exist, then any indistinguishabil-
ity obfuscator, appropriately padded, is also a subspace hiding obfuscator for field 𝔽 and dimensions 𝑑0 , 𝑑1 ,
as long as |𝔽|𝑛−𝑑1 is exponential.
Note that by applying 𝐻 ⊗𝑛 , which is the QFT for 𝔽𝑛2 , to the state |𝐴𝑠,𝑠′ ⟩, one obtains exactly |𝐴⊥𝑠′ ,𝑠 ⟩. Additionally, note that given |𝐴⟩ and 𝑠, 𝑠′ , one can efficiently construct |𝐴𝑠,𝑠′ ⟩ as follows:

∑︁𝑎∈𝐴 |𝑎⟩ —(add 𝑠)→ ∑︁𝑎∈𝐴 |𝑎 + 𝑠⟩ —(𝐻 ⊗𝑛 )→ ∑︁𝑎′ ∈𝐴⊥ (−1)⟨𝑎′ ,𝑠⟩ |𝑎′ ⟩ —(add 𝑠′ )→ ∑︁𝑎′ ∈𝐴⊥ (−1)⟨𝑎′ ,𝑠⟩ |𝑎′ + 𝑠′ ⟩ —(𝐻 ⊗𝑛 )→ ∑︁𝑎∈𝐴 (−1)⟨𝑎,𝑠′ ⟩ |𝑎 + 𝑠⟩
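The four-step construction above can be verified numerically for a small example (𝑛 = 4). The subspace, shifts, and integer encoding below are arbitrary choices for illustration; the final state matches |𝐴_{𝑠,𝑠′}⟩ up to a global phase (−1)^{⟨𝑠,𝑠′⟩}:

```python
# Simulate: |A> --add s--> --H^n--> --add s'--> --H^n--> |A_{s,s'}>
import numpy as np

n, N = 4, 16
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):
    Hn = np.kron(Hn, H1)               # H^{⊗n} on 2^n amplitudes

def span(gens):                        # subspace of F_2^n as a set of ints
    A = {0}
    for g in gens:
        A |= {a ^ g for a in A}
    return sorted(A)

def dot(a, b):                         # <a, b> over F_2
    return bin(a & b).count("1") % 2

def shift(psi, s):                     # the map |a> -> |a + s|
    out = np.zeros_like(psi)
    for j in range(N):
        out[j ^ s] = psi[j]
    return out

A = span([0b0011, 0b0101])             # a 2-dimensional subspace of F_2^4
s, sp = 0b1001, 0b0110

psi = np.zeros(N)
psi[A] = 1 / np.sqrt(len(A))           # |A>
psi = Hn @ shift(Hn @ shift(psi, s), sp)

target = np.zeros(N)                   # |A_{s,s'}> as written above
for a in A:
    target[a ^ s] = (-1) ** dot(a, sp) / np.sqrt(len(A))

assert np.allclose(psi, (-1) ** dot(s, sp) * target)
```

Chasing the amplitudes through the middle two steps also reproduces the intermediate state supported on 𝐴^⊥ with phases (−1)^{⟨𝑎′,𝑠⟩}, exactly as displayed.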
Definition 6.6 (Coset Subspace Obfuscation Programs). We denote 𝗌𝗁𝖮(𝐴 + 𝑠) for the following pro-
gram: 𝗂𝖮(𝗌𝗁𝖮𝐴 (· − 𝑠)), where 𝗌𝗁𝖮𝐴 () denotes the subspace-hiding program 𝗌𝗁𝖮(𝐴), and 𝗌𝗁𝖮 is the sub-
space hiding obfuscator defined in Section 6.1.2. Therefore, 𝗌𝗁𝖮𝐴 (·−𝑠) is the program that on input 𝑥, runs
program 𝗌𝗁𝖮(𝐴) on input 𝑥 − 𝑠. 𝗂𝖮(𝗌𝗁𝖮𝐴 (· − 𝑠)) is an indistinguishability obfuscation of 𝗌𝗁𝖮𝐴 (· − 𝑠).
Theorem 6.7 (Computational Direct Product Hardness, [CLLZ21a, CHV23]). Assume the existence of post-quantum 𝗂𝖮 and injective one-way functions. Let 𝐴 ⊆ 𝔽𝑛2 be a uniformly random subspace of dimension 𝑛/2, and 𝑠, 𝑠′ be uniformly random in 𝔽𝑛2 . Given one copy of |𝐴𝑠,𝑠′ ⟩, 𝗌𝗁𝖮(𝐴 + 𝑠) and 𝗌𝗁𝖮(𝐴⊥ + 𝑠′ ), any polynomial-time adversary outputs, with only negligible probability, a pair (𝑣, 𝑤) such that either of the following is satisfied: (1) 𝑣 ∈ 𝐴 + 𝑠 and 𝑤 ∈ 𝐴⊥ + 𝑠′ ; (2) 𝑣, 𝑤 ∈ 𝐴 + 𝑠 or 𝑣, 𝑤 ∈ 𝐴⊥ + 𝑠′ and 𝑣 ̸= 𝑤.
Definition 6.8 ((Post-quantum) Puncturable PRF). A PRF family 𝐹 : {0, 1}𝑘(𝜆) × {0, 1}𝑛(𝜆) →
{0, 1}𝑚(𝜆) with key generation procedure 𝖪𝖾𝗒𝖦𝖾𝗇𝐹 is said to be puncturable if there exists an algorithm
𝖯𝗎𝗇𝖼𝗍𝗎𝗋𝖾𝐹 , satisfying the following conditions:
• Functionality preserved under puncturing: Let 𝑆 ⊆ {0, 1}𝑛(𝜆) . For all 𝑥 ∈ {0, 1}𝑛(𝜆) where 𝑥 ∉ 𝑆, we have that:

𝐹 (𝖯𝗎𝗇𝖼𝗍𝗎𝗋𝖾𝐹 (𝐾, 𝑆), 𝑥) = 𝐹 (𝐾, 𝑥)
• Pseudorandom at punctured points: For every QPT adversary (𝐴1 , 𝐴2 ), there exists a negligible
function 𝗇𝖾𝗀𝗅 such that the following holds. Consider an experiment where 𝐾 ← 𝖪𝖾𝗒𝖦𝖾𝗇𝐹 (1𝜆 ),
(𝑆, 𝜎) ← 𝐴1 (1𝜆 ), and 𝐾𝑆 ← 𝖯𝗎𝗇𝖼𝗍𝗎𝗋𝖾𝐹 (𝐾, 𝑆). Then, for all 𝑥 ∈ 𝑆,
|Pr[𝐴2 (𝜎, 𝐾𝑆 , 𝑆, 𝐹 (𝐾, 𝑥)) = 1] − Pr𝑟←{0,1}𝑚(𝜆) [𝐴2 (𝜎, 𝐾𝑆 , 𝑆, 𝑟) = 1]| ≤ 𝗇𝖾𝗀𝗅(𝜆)
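For a single punctured point, the classic GGM tree gives a concrete instantiation of this syntax: the punctured key is the list of sibling seeds along the path to 𝑥. The sketch below uses SHA-256 as a stand-in for a length-doubling PRG; the names (`prg`, `puncture`, `F_punctured`) are our own illustration, not the paper's scheme:

```python
# Toy GGM-style puncturable PRF punctured at a single point x.
import hashlib

def prg(seed: bytes):
    # models a length-doubling PRG: seed -> (left child, right child)
    h0 = hashlib.sha256(b"0" + seed).digest()
    h1 = hashlib.sha256(b"1" + seed).digest()
    return h0, h1

def F(key: bytes, x: str) -> bytes:          # x is a bit-string
    s = key
    for b in x:
        s = prg(s)[int(b)]
    return s

def puncture(key: bytes, x: str):
    """Co-path seeds: enough to evaluate F everywhere except at x."""
    s, copath = key, []
    for i, b in enumerate(x):
        left, right = prg(s)
        sibling = right if b == "0" else left
        copath.append((x[:i] + ("1" if b == "0" else "0"), sibling))
        s = left if b == "0" else right
    return copath

def F_punctured(copath, x: str) -> bytes:
    for prefix, seed in copath:
        if x.startswith(prefix):
            return F(seed, x[len(prefix):])
    raise ValueError("x is the punctured point")

key = b"\x00" * 32
kx = puncture(key, "0110")
assert F_punctured(kx, "0111") == F(key, "0111")   # functionality off x
```

Functionality is preserved off 𝑥 because every other input diverges from the punctured path at some node, whose seed the co-path contains; the value at 𝑥 itself is never derivable from the co-path, which is the source of pseudorandomness at the punctured point.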
Constrained PRFs A PRF 𝐹 : 𝒦 × 𝒳 → 𝒴 has an additional key space 𝒦𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇 and two additional algorithms 𝐹.𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇 and 𝐹.𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇𝖤𝗏𝖺𝗅, as follows. A constrained key 𝐾𝐶 with respect to a circuit 𝐶 enables the evaluation of 𝐹 (𝐾, 𝑥) for all 𝑥 such that 𝐶(𝑥) = 1, and for no other 𝑥.
𝖪𝖾𝗒𝖦𝖾𝗇(1𝜆 , 1𝑛 ) → 𝗆𝗌𝗄. On input the security parameter 𝜆, outputs the master secret key 𝗆𝗌𝗄.
𝖤𝗏𝖺𝗅(𝗆𝗌𝗄, 𝑥) → 𝑦 ∈ 𝒴 : On master secret key 𝗆𝗌𝗄 and value 𝑥 ∈ 𝒳 , outputs the evaluation 𝑦 ∈ 𝒴.
𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇(𝗆𝗌𝗄, 𝐶) → 𝗌𝗄𝐶 : takes as input the master secret key 𝗆𝗌𝗄 and the description of a circuit 𝐶 (whose domain is contained in 𝒳 ); outputs a constrained key 𝗌𝗄𝐶 .
𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇𝖤𝗏𝖺𝗅(𝗌𝗄𝐶 , 𝑥) → 𝑦/⊥: On input a secret key 𝗌𝗄𝐶 , and an input 𝑥 ∈ {0, 1}𝑛 , the constrained
evaluation algorithm 𝖯𝖱𝖥.𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇𝖤𝗏𝖺𝗅 outputs an element 𝑦 ∈ {0, 1}𝑚 .
Definition 6.9 (Constrained PRF Correctness). A constrained PRF is correct for a circuit class 𝒞 if, for 𝗆𝗌𝗄 ← 𝖯𝖱𝖥.𝖪𝖾𝗒𝖦𝖾𝗇(1𝜆 ), every circuit 𝐶 ∈ 𝒞, and every input 𝑥 ∈ {0, 1}𝑛 such that 𝐶(𝑥) = 1, it is the case that:

𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇𝖤𝗏𝖺𝗅(𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇(𝗆𝗌𝗄, 𝐶), 𝑥) = 𝖤𝗏𝖺𝗅(𝗆𝗌𝗄, 𝑥)
Here the adversary 𝒜 is said to be admissible as long as it satisfies the following conditions: (1) it makes at most one query to the constrain oracle 𝖢𝗈𝗇𝗌𝗍𝗋𝖺𝗂𝗇(𝗆𝗌𝗄, ·), and its queried circuit 𝐶 must be such that 𝐶(𝑥) = 0; (2) it must not send 𝑥 as one of its evaluation queries to 𝖤𝗏𝖺𝗅(𝗆𝗌𝗄, ·).
We only need single-key security of the above definition for our use.
Remark 6.11 (Double-challenge security). We will make a simple remark here on the following variant
of the above security game: the adversary submits two arbitrarily chosen 𝑥1 , 𝑥2 ; the challenger chooses
𝑟1,0 , 𝑟1,1 , and 𝑟2,0 , 𝑟2,1 independently as in the above security game. 𝒜 receives both 𝑟1,𝑏1 and 𝑟2,𝑏2 and has
to guess both 𝑏1 , 𝑏2 correctly. The winning probability of any 𝒜 in this "double-challenge" version of the
game is upper bounded by the probability of it winning the single challenge game. We will make use of this
fact later.
The type of constrained PRF we use in this work can be built from standard lattice assumptions
([BV15]) or alternatively from subexponentially-secure iO and OWFs [BLW17].
Invertible PRFs An invertible pseudorandom function (IPF) is an injective PRF whose inverse function can be computed efficiently (given the secret key). Therefore, in addition to the PRF algorithms 𝖪𝖾𝗒𝖦𝖾𝗇 and 𝖤𝗏𝖺𝗅, it has the following algorithm:
𝖨𝗇𝗏𝖾𝗋𝗍(𝗌𝗄, 𝑦) → 𝑥/⊥: On input the secret key 𝗌𝗄 and a value 𝑦 ∈ 𝒴, outputs the unique 𝑥 ∈ 𝒳 such that 𝖤𝗏𝖺𝗅(𝗌𝗄, 𝑥) = 𝑦, or ⊥ if no such 𝑥 exists.
Definition 6.12 (Correctness for Invertible PRF). An invertible PRF is correct if for all 𝗌𝗄 ← 𝖯𝖱𝖥.𝖪𝖾𝗒𝖦𝖾𝗇(1𝜆 ) and all 𝑥 ∈ 𝒳 , it is the case that:
𝐹.𝖨𝗇𝗏𝖾𝗋𝗍(𝗌𝗄, 𝐹.𝖤𝗏𝖺𝗅(𝗌𝗄, 𝑥)) = 𝑥,
and 𝐹.𝖨𝗇𝗏𝖾𝗋𝗍(𝗌𝗄, 𝑦) = ⊥ whenever 𝑦 is not an image of any 𝑥 ∈ 𝒳 .
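To make the 𝖤𝗏𝖺𝗅/𝖨𝗇𝗏𝖾𝗋𝗍 syntax concrete, here is a minimal invertible keyed function built as a 4-round Feistel permutation with HMAC-SHA256 round functions. This only sketches the interface: being a permutation, every 𝑦 is an image (the ⊥ branch is vacuous), and the toy makes no claim to the extracting property the paper also needs:

```python
# Minimal invertible keyed function: 4-round Feistel over 32-byte blocks.
import hmac, hashlib

N = 16  # half-block size in bytes

def round_fn(key: bytes, i: int, half: bytes) -> bytes:
    # round-indexed PRF round function
    return hmac.new(key, bytes([i]) + half, hashlib.sha256).digest()[:N]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def eval_(sk: bytes, x: bytes) -> bytes:      # F.Eval(sk, x)
    L, R = x[:N], x[N:]
    for i in range(4):
        L, R = R, xor(L, round_fn(sk, i, R))
    return L + R

def invert(sk: bytes, y: bytes) -> bytes:     # F.Invert(sk, y)
    L, R = y[:N], y[N:]
    for i in reversed(range(4)):              # undo rounds in reverse order
        L, R = xor(R, round_fn(sk, i, L)), L
    return L + R

sk = b"k" * 32
x = bytes(range(32))
assert invert(sk, eval_(sk, x)) == x          # Definition 6.12 correctness
```

Inversion works round by round because each Feistel round (𝐿, 𝑅) ↦ (𝑅, 𝐿 ⊕ 𝑓ᵢ(𝑅)) is a bijection whose inverse needs only forward evaluations of the round function.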
In this paper, we only need the "regular" pseudorandomness property of the IPF and therefore
we do not provide additional security definitions as in [BKW17]. We also do not need the IPF in
our construction to be puncturable/constrainable.
In addition, we would like the IPF we use to act as extractors on their inputs:
Definition 6.13 (Extracting PRF). An extracting PRF with error 𝜖(·) for min-entropy 𝑘(·) is a (punc-
turable) PRF 𝐹 mapping 𝑛(𝜆) bits to ℓ(𝜆) bits such that for all 𝜆, if 𝑋 is any distribution over 𝑛(𝜆) bits
with min-entropy greater than 𝑘(𝜆), then the statistical distance between (𝗌𝗄, 𝐹 (𝐾, 𝑋)) and (𝗌𝗄, 𝑟 ←
{0, 1}ℓ(𝜆) ) is at most 𝜖(·), where 𝗌𝗄 ← 𝖪𝖾𝗒𝖦𝖾𝗇(1𝜆 ).
The constrained PRFs and invertible PRFs used in this work can all be obtained from LWE or from subexponentially-secure iO plus OWFs [SW14, BKW17, BV15]. Our construction uses the following building blocks:
1. A constrained PRF 𝐹1 : {0, 1}𝑘1 × {0, 1}𝑛+ℓ → {0, 1}𝑚 , where {0, 1}𝑘1 is the key space and
{0, 1}𝑛+ℓ is the input space.
Let 𝗌𝗄1 ← 𝐹1 .𝖪𝖾𝗒𝖦𝖾𝗇(1𝜆 ).
2. An extracting invertible PRF 𝐹2 : {0, 1}𝑘2 × {0, 1}𝑛·𝜆 → {0, 1}ℓ , where {0, 1}𝑘2 is the key space and {0, 1}𝑛·𝜆 is the input space; ℓ ≥ 𝑛 · 𝜆 and 𝜆 is the security parameter.
The PRF is extracting with negligible error for inputs with min-entropy 𝑛 · 𝜆/2.
Let 𝗌𝗄2 ← 𝐹2 .𝖪𝖾𝗒𝖦𝖾𝗇(1𝜆 ).
3. Sample 𝑛 random subspaces 𝐴1 , · · · , 𝐴𝑛 independently from 𝔽𝜆2 , where each dim(𝐴𝑖 ) = 𝜆/2.
Sample 2𝑛 random strings, 𝑠1 , 𝑠′1 , · · · 𝑠𝑛 , 𝑠′𝑛 each uniformly random from {0, 1}𝜆 .
Prepare the coset subspace-hiding obfuscation programs {(𝗌𝗁𝖮(𝐴1 + 𝑠1 ), 𝗌𝗁𝖮(𝐴⊥1 + 𝑠′1 )), · · · , (𝗌𝗁𝖮(𝐴𝑛 + 𝑠𝑛 ), 𝗌𝗁𝖮(𝐴⊥𝑛 + 𝑠′𝑛 ))} as defined in Definition 6.6.
For convenience, we will use the notation 𝗌𝗁𝖮0𝑖 for 𝗌𝗁𝖮(𝐴𝑖 + 𝑠𝑖 ), and 𝗌𝗁𝖮1𝑖 for 𝗌𝗁𝖮(𝐴⊥𝑖 + 𝑠′𝑖 ), for the rest of this section.
The 𝖮𝖳𝖯(𝖼𝖯𝖱𝖥(𝗌𝗄1 , ·)) consists of the subspace states (|𝐴1 ⟩ , · · · , |𝐴𝑛 ⟩) and an 𝗂𝖮 of the program in Figure 5:
Correctness The correctness follows from the extracting property of the PRF 𝐹2 : in any honest evaluation, the string 𝑢 ∈ {0, 1}𝜆·𝑛 satisfies 𝗌𝗁𝖮𝑥𝑖 𝑖 (𝑢𝑖 ) = 1 for each 𝑖 ∈ [𝑛]. Therefore 𝑢 has min-entropy 𝑛 · 𝜆/2. By the extracting property of 𝐹2 and the evaluation correctness of 𝐹1 , the above scheme satisfies correctness (Definition 4.2).
𝐻0 : In this hybrid, the challenger plays the original game defined in Definition 6.14 using the
above construction:
1. The challenger prepares the program 𝖮𝖳𝖯(𝐹1 .𝖤𝗏𝖺𝗅(𝗌𝗄1 , ·)) as in Section 6.2. 𝒜 gets a copy of
the one-time program for 𝖮𝖳𝖯(𝐹1 .𝖤𝗏𝖺𝗅(𝗌𝗄1 , ·)).
2. 𝒜 outputs two (input, randomness) pairs (𝑥1 , 𝑟1 ), (𝑥2 , 𝑟2 ) such that 𝑥1 ̸= 𝑥2 or 𝑟1 ̸= 𝑟2 .
3. Challenger samples two independent, uniform random bits 𝑏1 ← {0, 1}, 𝑏2 ← {0, 1}.
If 𝑏1 = 0, then let 𝑦1 = 𝐹1 .𝖤𝗏𝖺𝗅(𝗌𝗄1 , 𝑥1 , 𝑟1 ); else let 𝑦1 ← {0, 1}𝑚 .
If 𝑏2 = 0, then let 𝑦2 = 𝐹1 .𝖤𝗏𝖺𝗅(𝗌𝗄1 , 𝑥2 , 𝑟2 ); else let 𝑦2 ← {0, 1}𝑚 .
Challenger sends (𝑦1 , 𝑦2 ) to 𝒜.
4. 𝒜 outputs guesses (𝑏′1 , 𝑏′2 ) for (𝑏1 , 𝑏2 ) respectively. 𝒜 wins if and only if 𝑏′1 = 𝑏1 and 𝑏′2 = 𝑏2 .
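The shape of the challenge phase in 𝐻₀ can be sketched as a classical simulation, with HMAC-SHA256 standing in for 𝐹₁ and a trivial guessing adversary; the concrete pairs and key sizes are illustrative stand-ins only:

```python
# Toy simulation of the double-bit real-or-random challenge in H_0.
import hmac, hashlib, secrets

m = 32  # output length in bytes

def F1(sk, x, r):                      # stand-in for F_1.Eval(sk_1, x, r)
    return hmac.new(sk, x + r, hashlib.sha256).digest()[:m]

def play(adversary_guess):
    sk = secrets.token_bytes(32)
    (x1, r1), (x2, r2) = (b"x1", b"r1"), (b"x2", b"r2")  # adversary's pairs
    b1, b2 = secrets.randbelow(2), secrets.randbelow(2)
    y1 = F1(sk, x1, r1) if b1 == 0 else secrets.token_bytes(m)
    y2 = F1(sk, x2, r2) if b2 == 0 else secrets.token_bytes(m)
    g1, g2 = adversary_guess(y1, y2)
    return g1 == b1 and g2 == b2       # win iff both bits are correct

# an adversary that ignores (y1, y2) wins with probability 1/4
wins = sum(play(lambda y1, y2: (0, 0)) for _ in range(2000))
assert 0.15 < wins / 2000 < 0.35
```

Security of the construction amounts to showing that no QPT adversary, even one holding the one-time program, does noticeably better than this baseline on both bits at once.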
𝐻1 : In this hybrid, the challenger modifies the original 𝖮𝖳𝖯 program in Figure 5 and the game
as follows:
Hardcoded: 𝗌𝗄2 , {𝗌𝗁𝖮0𝑖 , 𝗌𝗁𝖮1𝑖 }𝑖∈[𝑛] .
On input (𝑥 ∈ {0, 1}𝑛 , 𝑟 ∈ {0, 1}ℓ ):
i. Compute 𝑢 = 𝑢1 ‖𝑢2 ‖ · · · ‖𝑢𝑛 ∈ {0, 1}𝑛·𝜆 ← 𝐹2 .𝖨𝗇𝗏𝖾𝗋𝗍(𝗌𝗄2 , 𝑟), where each 𝑢𝑖 ∈ 𝔽𝜆2 . If the inversion result is ⊥, output 0.
ii. If 𝗌𝗁𝖮𝑥𝑖 𝑖 (𝑢𝑖 ) = 1 for all 𝑖 ∈ [𝑛], where 𝑥𝑖 is the 𝑖-th bit of 𝑥: output 1.
iii. Else: output 0.
𝐻2 : In this hybrid, all steps are the same except in step 2, the challenger additionally checks if
𝐹2 .𝖨𝗇𝗏𝖾𝗋𝗍(𝗌𝗄2 , 𝑟1 ) and 𝐹2 .𝖨𝗇𝗏𝖾𝗋𝗍(𝗌𝗄2 , 𝑟2 ) are in the subspaces with respect to 𝑥1 , 𝑥2 : if so, abort the
game and 𝒜 loses.
3. Challenger samples two independent, uniform random bits 𝑏1 ← {0, 1}, 𝑏2 ← {0, 1}.
If 𝑏1 = 0, then let 𝑦1 = 𝐹1 .𝖤𝗏𝖺𝗅(𝗌𝗄1 , 𝑥1 , 𝑟1 ); else let 𝑦1 ← {0, 1}𝑚 .
If 𝑏2 = 0, then let 𝑦2 = 𝐹1 .𝖤𝗏𝖺𝗅(𝗌𝗄1 , 𝑥2 , 𝑟2 ); else let 𝑦2 ← {0, 1}𝑚 .
Challenger sends (𝑦1 , 𝑦2 ) to 𝒜.
4. 𝒜 outputs guesses (𝑏′1 , 𝑏′2 ) for (𝑏1 , 𝑏2 ) respectively. 𝒜 wins if and only if 𝑏′1 = 𝑏1 and 𝑏′2 = 𝑏2 .
Claim 6.15. Assuming the post-quantum security of 𝗂𝖮, the difference between 𝒜’s advantage in 𝐻0 and 𝐻1 is negligible.
Proof. The programs 𝖼𝖯𝖱𝖥𝖮𝖳𝖯 in 𝐻0 and 𝐻1 have the same functionality: we only change the point at which we check that the input vectors 𝑢 are in the corresponding subspaces. Therefore, the claim follows from the security of 𝗂𝖮.
Claim 6.16. By the security of subspace-hiding obfuscation (Definition 6.3), the difference between 𝒜’s
advantage in 𝐻1 and 𝐻2 is negligible.
Proof. We invoke the computational direct product hardness (Theorem 6.7): given a subspace state and subspace-hiding obfuscations for the corresponding primal and dual cosets, it is hard to produce two different vectors in the cosets. Therefore, the event that causes the challenger to abort in Step 2 of 𝐻2 occurs with negligible probability.
Otherwise, if all the preimages of 𝑟1 , 𝑟2 are valid subspace vectors, then since we require 𝑥1 ̸= 𝑥2 or 𝑟1 ̸= 𝑟2 , there must exist at least one index 𝑖* ∈ [𝑛] such that 𝑢1,𝑖* ̸= 𝑢2,𝑖* . Therefore, we can build a reduction that breaks the computational direct product hardness property: the reduction can sample its own 𝗆𝗌𝗄, 𝗌𝗄2 and constrain the key 𝗆𝗌𝗄 on the circuit 𝐶𝐴 , since it is given the programs 𝗌𝗁𝖮𝐴𝑖 ’s. Upon receiving the adversary’s output 𝑟1 , 𝑟2 , it can invert them to find the vectors 𝑢1,𝑖* , 𝑢2,𝑖* that allow it to break Theorem 6.7.
Claim 6.17. Assuming adaptive single-key constrained pseudorandomness of the constrained PRF 𝐹1 , 𝒜’s advantage in 𝐻2 is negligible.
Proof. If there exists an 𝒜 that wins the game in 𝐻2 with probability non-negligibly larger than 1/2,
then we can build a reduction ℬ to break the adaptive single-key constrained pseudorandomness
in Definition 6.10.
Note that in this game, we have ruled out all 𝒜 that output (𝑥1 , 𝑟1 ), (𝑥2 , 𝑟2 ) satisfying 𝐶𝐴 (𝑥1 , 𝑟1 ) = 𝐶𝐴 (𝑥2 , 𝑟2 ) = 1. That is, at least one of the above evaluations is 0. Therefore, the reduction, which makes a single key query on the circuit 𝐶𝐴 , can use this input as the challenge input in the Definition 6.10 security game. If both inputs satisfy 𝐶𝐴 (𝑥1 , 𝑟1 ) = 𝐶𝐴 (𝑥2 , 𝑟2 ) = 0, then we use the variant game in Remark 6.11. In either case, the security of the constrained PRF guarantees that 𝒜’s winning probability in the game of 𝐻2 is 1/2 + 𝗇𝖾𝗀𝗅(𝜆).
2. There exists a family of single-physical-query unlearnable, high average min-entropy output, but partially deterministic functions for which there is no secure OTP, even in the oracle model, with respect to the weakest operational definition (Definition 4.26).
The first result in the non-black-box model is inspired by the non-black-box impossibility result
of quantum obfuscation and copy protection in [ABDS20, AP21]. However, the circuit family they
use is a deterministic one. In order to show a nontrivial result for the one-time program, we design
a family of randomized circuits which have almost full entropy output, but can nevertheless be
"learned" through a single non-black-box evaluation.
7.1 Preliminaries
7.1.1 Quantum Fully Homomorphic Encryption Scheme
We give the definition of the type of QFHE we need for the construction in this section.
Definition 7.1 (Quantum Fully Homomorphic Encryption). Let ℳ be the Hilbert space associated
with the message space (plaintexts), 𝒞 be the Hilbert space associated with the ciphertexts, and ℛ𝖾𝗄 be the
Hilbert space associated with the evaluation key. A quantum fully homomorphic encryption scheme is a
tuple of QPT algorithms 𝖰𝖧𝖤 = (𝖪𝖾𝗒𝖦𝖾𝗇, 𝖤𝗇𝖼, 𝖣𝖾𝖼, 𝖤𝗏𝖺𝗅) satisfying:
𝖪𝖾𝗒𝖦𝖾𝗇(1𝜆 ) : a classical probabilistic algorithm that outputs a public key, a secret key as well as an
evaluation key, (𝗉𝗄, 𝗌𝗄, 𝖾𝗄).
𝖤𝗇𝖼(𝗉𝗄, 𝜌ℳ ) : takes as input a state 𝜌ℳ in the space 𝐿(ℳ) and outputs a ciphertext 𝜎 in 𝐿(𝒞).
𝖣𝖾𝖼(𝗌𝗄, 𝜎) : takes a quantum ciphertext 𝜎, and outputs a state 𝜌ℳ in the message space 𝐿(ℳ).
𝖤𝗏𝖺𝗅(𝖾𝗄, 𝑈, 𝜎1 , · · · , 𝜎𝑘 ) : takes as input a quantum circuit 𝑈 with a 𝑘-qubit input and a 𝑘 ′ -qubit output. Its output is a sequence of 𝑘 ′ quantum ciphertexts.
The semantic security is analogous to the classical semantic security of FHE. We refer to [BJ15].
Classical Ciphertexts for Classical Plaintexts For the impossibility result, we require a QFHE
scheme where ciphertexts of classical plaintexts are also classical. Given any 𝑥 ∈ {0, 1}, we want 𝖰𝖧𝖤.𝖤𝗇𝖼(𝗉𝗄, |𝑥⟩ ⟨𝑥|) to be a computational basis state |𝑧⟩ ⟨𝑧| for some 𝑧 ∈ {0, 1}𝑙 (here, 𝑙 is the length of ciphertexts for 1-bit messages). In this case, we write 𝖰𝖧𝖤.𝖤𝗇𝖼(𝗉𝗄, 𝑥). We also want the same to be true for evaluated ciphertexts: if 𝑈 |𝑥⟩ ⟨𝑥| 𝑈 † = |𝑦⟩ ⟨𝑦| for basis states 𝑥 ∈ {0, 1}𝑛 , 𝑦 ∈ {0, 1}𝑛′ , then we have 𝖰𝖧𝖤.𝖤𝗏𝖺𝗅(𝖾𝗄, 𝑈, 𝖰𝖧𝖤.𝖤𝗇𝖼(𝗉𝗄, |𝑥⟩ ⟨𝑥|)) → 𝖰𝖧𝖤.𝖤𝗇𝖼(𝗉𝗄, |𝑦⟩ ⟨𝑦|), where the result is a classical ciphertext.
The QFHE schemes in [Bra18, Mah20] satisfy the above requirement.
Note that we also need to evaluate on a possibly arbitrary polynomial depth circuit. The QFHE
schemes in [Bra18, Mah20] still require circular security to go beyond leveled FHE.
We define the following class of unpredictable distributions over pairs of the form (𝖢𝖢[𝑓, 𝑦, 𝑧], 𝖺𝗎𝗑),
where 𝖺𝗎𝗑 is auxiliary quantum information. These distributions are such that 𝑦 is computation-
ally unpredictable given 𝑓 and 𝖺𝗎𝗑.
Definition 7.3 (Unpredictable Distributions). We say that a family of distributions 𝐷 = {𝐷𝜆 } where
𝐷𝜆 is a distribution over pairs of the form (𝖢𝖢[𝑓, 𝑦, 𝑧], 𝖺𝗎𝗑) where 𝖺𝗎𝗑 is a quantum state, belongs to the
class of unpredictable distributions if the following holds. There exists a negligible function 𝗇𝖾𝗀𝗅 such
that, for all QPT algorithms 𝒜,
Pr(𝖢𝖢[𝑓,𝑦,𝑧],𝖺𝗎𝗑)←𝐷𝜆 [𝒜(1𝜆 , 𝑓, 𝖺𝗎𝗑) = 𝑦] ≤ 𝗇𝖾𝗀𝗅(𝜆).
We assume that a program 𝑃 has an associated set of parameters 𝑃.𝗉𝖺𝗋𝖺𝗆 (e.g. input size, output size, circuit size, etc.), which we are not required to hide.
Definition 7.4 (Compute-and-Compare Obfuscation). A PPT algorithm 𝖢𝖢.𝖮𝖻𝖿 is an obfuscator for
the class of unpredictable distributions (or sub-exponentially unpredictable distributions) if for any family
of distributions 𝐷 = {𝐷𝜆 } belonging to the class, the following holds:
• Functionality Preserving: there exists a negligible function 𝗇𝖾𝗀𝗅 such that for all 𝜆, every program 𝑃
in the support of 𝐷𝜆 ,
Pr[∀𝑥, 𝑃̃︀(𝑥) = 𝑃 (𝑥) : 𝑃̃︀ ← 𝖢𝖢.𝖮𝖻𝖿(1𝜆 , 𝑃 )] ≥ 1 − 𝗇𝖾𝗀𝗅(𝜆)
• Distributional Indistinguishability: there exists an efficient simulator 𝖲𝗂𝗆 such that:
(𝖢𝖢.𝖮𝖻𝖿(1𝜆 , 𝑃 ), 𝖺𝗎𝗑) ≈𝑐 (𝖲𝗂𝗆(1𝜆 , 𝑃.𝗉𝖺𝗋𝖺𝗆), 𝖺𝗎𝗑)
where (𝑃, 𝖺𝗎𝗑) ← 𝐷𝜆 .
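Before obfuscation, the compute-and-compare functionality 𝖢𝖢[𝑓, 𝑦, 𝑧] is simply the following program; 𝖢𝖢.𝖮𝖻𝖿 must preserve this input/output behavior while hiding 𝑦 and 𝑧. A minimal sketch (the choice of 𝑓 here is an arbitrary stand-in):

```python
# The compute-and-compare program CC[f, y, z]: release z iff f(x) = y.
# None plays the role of ⊥.
import hashlib

def make_cc(f, y, z):
    def cc(x):
        return z if f(x) == y else None
    return cc

f = lambda x: hashlib.sha256(x).digest()   # illustrative choice of f
y = f(b"target")
cc = make_cc(f, y, b"secret keys")

assert cc(b"target") == b"secret keys"     # correct input releases z
assert cc(b"other") is None                # everything else returns ⊥
```

The Functionality Preserving condition above says the obfuscated 𝑃̃ computes exactly this map (with overwhelming probability), while Distributional Indistinguishability says 𝑃̃ reveals nothing beyond 𝑃.𝗉𝖺𝗋𝖺𝗆 when 𝑦 is unpredictable.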
Combining the results of [WZ17, GKW17] with those of [Zha16], we have the following two
theorems.
Theorem 7.5. Assuming the quantum hardness of LWE, there exist obfuscators for unpredictable distributions, as in Definition 7.4.
7.2 Impossibility Result for Single-Query Security in the Plain Model: for fully ran-
domized functions
In this section, we present a lower bound/impossibility result for a generic one-time program in
the plain model (i.e. without using black-box oracles). The result states that there is no way to
construct a generic one-time sampling program for all randomized functionalities if the programs
allow non-black-box access.
Our result is inspired by the non-black-box impossibility result of quantum obfuscation and
copy protection in [ABDS20, AP21]. However, the circuit family they use is a deterministic one.
In order to show a nontrivial result for the one-time program, we design a family of randomized
circuits which have almost full entropy output, but can nevertheless be "learned" through a single
non-black-box evaluation.
Theorem 7.7. Assuming the post-quantum security of LWE and QFHE, there exists a family of randomized circuits which are single-query 𝗇𝖾𝗀𝗅(𝜆)-unlearnable (Definition 4.11), but not one-time program secure with respect to the weak operational one-time security (Definition 4.26).
We construct the following circuit which has high-entropy outputs and is 1-query unlearnable
with only (quantum) single-query access to the function and access to a piece of classical informa-
tion, but once put into any OTP in the plain model, is insecure.
First we give a few building blocks for the following circuit 𝐶.
We design the following auxiliary information: 𝖺𝗎𝗑 = (𝖼𝗍𝑎 = 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, 𝑎); 𝑃̃ = 𝖢𝖢.𝖮𝖻𝖿(𝖲𝖪𝖤.𝖣𝖾𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝖣𝖾𝖼(𝖰𝖧𝖤.𝗌𝗄, ·)), 𝑏, (𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝗌𝗄)); 𝖰𝖧𝖤.𝗉𝗄).
else:
output 𝖲𝖪𝖤.𝖤𝗇𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝑥; 𝑟).
Note that the program 𝑃̃ in the auxiliary information 𝖺𝗎𝗑 is a compute-and-compare obfusca-
tion program of the following circuit:
𝖢𝖢[𝑓, (𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝗌𝗄), 𝑏](𝑥) = (𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝗌𝗄) if 𝑓 (𝑥) = 𝑏, and ⊥ otherwise,
where 𝑓 (𝑥) = 𝖲𝖪𝖤.𝖣𝖾𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝖣𝖾𝖼(𝖰𝖧𝖤.𝗌𝗄, 𝑥)).
Claim 7.8. The above circuit with auxiliary information (𝐶, 𝖺𝗎𝗑) can be perfectly reconstructed by any
QPT adversary given any one-time program with correctness of the above circuit.
Proof. Given any OTP |𝜓𝐶 ⟩ of the circuit 𝐶 together with the auxiliary information 𝖺𝗎𝗑, a QPT adversary can perform the following attack:
1. Encrypt the program: 𝖼𝗍|𝜓𝐶 ⟩ ← 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, |𝜓𝐶 ⟩) using the QFHE public key 𝖰𝖧𝖤.𝗉𝗄
given in 𝖺𝗎𝗑.
2. Homomorphically evaluate the program on the input 𝖼𝗍𝑎 = 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, 𝑎) from 𝖺𝗎𝗑,
with respect to a universal quantum circuit 𝑈 , to obtain an outcome 𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) :
𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) := 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, 𝖲𝖪𝖤.𝖤𝗇𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝑏; 𝑟𝑏 )) ← 𝖰𝖧𝖤.𝖤𝗏𝖺𝗅(𝖰𝖧𝖤.𝗉𝗄, 𝑈, 𝖼𝗍|𝜓𝐶 ⟩ , 𝖼𝗍𝑎 ).
for some random 𝑟𝑏 .
The above evaluation holds due to the correctness of the OTP scheme and the QFHE scheme: when one evaluates 𝑈 (|𝜓𝐶 ⟩ , 𝑎) honestly, one obtains 𝖲𝖪𝖤.𝖤𝗇𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝑏; 𝑟𝑏 ) for some random classical string 𝑟𝑏 . Therefore, by the correctness of QFHE, we obtain a QFHE ciphertext 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, 𝖲𝖪𝖤.𝖤𝗇𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝑏; 𝑟𝑏 )). This evaluation procedure is randomized, and the original OTP state |𝜓𝐶 ⟩ gets destroyed during the procedure.
Since the message 𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏; 𝑟𝑏 ) is classical, the ciphertext 𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) under QFHE is classi-
cal by the property of the QFHE we use.
3. Evaluate the compute-and-compare obfuscation program 𝑃̃ on input 𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) .
Note that by the correctness of the compute-and-compare obfuscation program, the input
𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) satisfies that 𝑓 (𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) ) = 𝖲𝖪𝖤.𝖣𝖾𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝖣𝖾𝖼(𝖰𝖧𝖤.𝗌𝗄, 𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) )) =
𝑏.
Therefore, one will obtain the information (𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝗌𝗄).
4. Now one can first decrypt the QFHE ciphertext 𝖼𝗍𝑎 = 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, 𝑎) to obtain 𝑎 and
the doubly-encrypted ciphertext 𝖼𝗍𝖲𝖪𝖤.𝖤𝗇𝖼(𝑏) to obtain 𝑏, with keys 𝖰𝖧𝖤.𝗌𝗄, 𝖲𝖪𝖤.𝗌𝗄.
Given the above information, one can fully reconstruct the circuit 𝐶 together with all the auxiliary information in 𝖺𝗎𝗑 perfectly (note that the information in 𝖺𝗎𝗑 is classical and can be copied and kept in the first place).
Remark 7.9. Note that the above construction actually shows a stronger statement than an infeasibility
result of OTP: it lets a QPT adversary recover the entire circuit perfectly, which obviously allows it to violate
the OTP security. But if we only need the adversary to output two input-output pairs, storing 𝖲𝖪𝖤.𝗌𝗄 as
the secret message in the compute-and-compare program suffices.
Claim 7.10. Assuming the post-quantum security of LWE, the above circuit with auxiliary information (𝐶, 𝖺𝗎𝗑) satisfies single-query unlearnability (Definition 4.11), for both physical and effective queries.
Proof. We will prove through a sequence of hybrids to show that the oracle is indistinguishable
from a regular SKE functionality, which is single-query unlearnable.
The proof is similar to the proof of Claim 47 in [AP21], with some modifications. We directly use the [BBBV97b] argument instead of the adversary method.
Let |𝜑𝑖 ⟩ be the state of the adversary after the 𝑖-th query to the oracle 𝒪𝐶 (quantum black-box
access to the circuit 𝐶), i.e. |𝜑𝑖 ⟩ = 𝑈𝑖 𝒪𝐶 · · · 𝒪𝐶 𝑈2 𝒪𝐶 𝑈1 |𝜑0 ⟩, where |𝜑0 ⟩ is the initial adversary
state. We first make the following claim:
Claim 7.11. Assuming the post-quantum security of LWE, the sum of squared query amplitudes on strings starting with 𝑎 satisfies ∑︀𝑖≤𝑇 𝑊𝑎 (|𝜑𝑖 ⟩) ≤ 𝗇𝖾𝗀𝗅(𝜆), where 𝑇 is the total number of steps.
Proof. We prove this by induction and the security properties of QFHE and 𝖢𝖢.𝖮𝖻𝖿.
Base case: before the adversary makes the first query, clearly 𝑊𝑎 (|𝜑0 ⟩) is negligible (in fact 0 here). For the first query, we consider the following hybrids:
1. 𝐻0 : this is the original game where we give out auxiliary information 𝖺𝗎𝗑 = (𝖼𝗍𝑎 = 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, 𝑎);
𝑃̃ = 𝖢𝖢.𝖮𝖻𝖿(𝖲𝖪𝖤.𝖣𝖾𝖼(𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝖣𝖾𝖼(𝖰𝖧𝖤.𝗌𝗄, ·)), 𝑏, (𝖲𝖪𝖤.𝗌𝗄, 𝖰𝖧𝖤.𝗌𝗄)), 𝖰𝖧𝖤.𝗉𝗄).
2. 𝐻1 : reprogram the oracle 𝒪𝐶 to have the functionality 𝒪′𝐶 in Figure 8.

Figure 8: 𝒪′𝐶 in 𝐻1
By [BBBV97a] (taking 𝑇 = 0), the adversary’s state |𝜑0 ⟩ in 𝐻0 and 𝐻1 has negligible difference in terms of trace distance.
Let the projection Π𝑎 := |𝑎⟩ ⟨𝑎|𝑥 ⊗ 𝐈𝑟,𝒜 , where 𝑥, 𝑟 are the registers corresponding to the input (𝑥, 𝑟) to 𝒪𝐶 and 𝒜 represents the rest of the registers in the adversary’s state. We can measure the adversary’s first query by projecting the state |𝜑0 ⟩ onto 𝒪𝐶 𝑈1 Π𝑎 (𝒪𝐶 𝑈1 )† .
The adversary with state |𝜑0 ⟩ has negligible difference in query weight on 𝑎 for the first query in 𝐻1 and 𝐻2 : since 𝑏 is sampled uniformly at random, for the adversarial state 𝑈1 |𝜑0 ⟩, 𝑏 satisfies the unpredictable distribution property of Definition 7.3. Therefore, by the property of Definition 7.4, any measurement of the adversary’s behavior in 𝐻1 and 𝐻2 results in computationally indistinguishable outcomes.
In more detail, the reduction to the compute-and-compare security works as follows: since the oracle 𝒪𝐶 is now independent of 𝑎, 𝑏, the reduction can sample the 𝖲𝖪𝖤, 𝖰𝖧𝖤 keys, prepare the oracle 𝒪𝐶 , and sample its own 𝑎; it then receives the obfuscated compute-and-compare program 𝑃̃ (or the simulated program 𝖲𝗂𝗆(1𝜆 , 1|𝑓 | )) from the challenger, where 𝑏 is uniformly random. If
the measurement 𝒪𝐶 𝑈1 Π𝑎 (𝒪𝐶 𝑈1 )† on the adversary’s first query gives outcome 1, then it outputs "real program"; else it outputs "simulated program".
The query weights in 𝐻2 and 𝐻3 have negligible difference by the security of the QFHE. Since the program 𝑃̃ has been replaced with a simulated program, the 𝖰𝖧𝖤 reduction can prepare the programs as well as the oracle 𝒪𝐶 . It receives 𝖼𝗍𝑎 or 𝖼𝗍0 from the challenger. If the measurement on the adversary’s first query returns 1, it guesses 𝖼𝗍𝑎 ; else it guesses 𝖼𝗍0 .
The adversary’s first query weight on 𝑎 in 𝐻3 is negligible, since now there is no information about 𝑎 anywhere in 𝖺𝗎𝗑, and 𝑎 is just a uniformly random string in {0, 1}𝑛 . By the above arguments, the adversary’s first query’s weight on 𝑎 is negligible in the original game 𝐻0 .
Induction: the above argument applies to the 𝑘-th query: if the sum of squared amplitudes over the first (𝑘 − 1) queries, ∑︀𝑖≤𝑘−1 𝑊𝑎 (|𝜑𝑖 ⟩), is negligible, then we can invoke the above arguments and show that 𝑊𝑎 (|𝜑𝑘 ⟩) is negligible as well.
Since we have shown that the total (squared) query weight on 𝑎, ∑︀𝑖≤𝑇 𝑊𝑎 (|𝜑𝑖 ⟩), is negligible, we can replace the oracle 𝒪𝐶 for the entire game with the oracle 𝒪′𝐶 from 𝐻1 above (Figure 8); by [BBBV97a], the trace distance between |𝜑𝑇 ⟩ using the original oracle 𝒪𝐶 and |𝜑𝑇 ⟩ using the oracle 𝒪′𝐶 in Figure 8 is negligible. Now it remains to show that 𝒪′𝐶 together with 𝖺𝗎𝗑 is single-query unlearnable for any QPT adversary.
By similar hybrids as above, we can replace the information in 𝖺𝗎𝗑 with a dummy program
and dummy ciphertext so that 𝖺𝗎𝗑 = (𝖼𝗍0 = 𝖰𝖧𝖤.𝖤𝗇𝖼(𝖰𝖧𝖤.𝗉𝗄, 0𝑛 ), 𝖲𝗂𝗆(1𝜆 , 1|𝑓 | ), 𝖰𝖧𝖤.𝗉𝗄). The
adversary’s advantage in the unlearnability game should have negligible difference by similar
hybrid arguments.
Now recall that we instantiate 𝖲𝖪𝖤 from a 𝖯𝖱𝖥 using the textbook construction of IND-CPA
SKE. We can then show the following: suppose there exists an adversary that violates the single-query
unlearnability of a 𝖯𝖱𝖥 mapping (2 · |𝑅|)-bit inputs to |𝑚|-bit outputs (which is essentially a random
oracle when accessed in the oracle model, and thus satisfies single-query unlearnability and has high
min-entropy outputs). The reduction can simulate the oracle 𝒪𝐶′ on query 𝑥 by querying the 𝖯𝖱𝖥 oracle
on some random string 𝑟1 of its own choice; the 𝖯𝖱𝖥 oracle returns (𝑟2, 𝖯𝖱𝖥(𝗌𝗄, 𝑟1‖𝑟2)),
where 𝑟2 is randomness chosen by the randomized 𝖯𝖱𝖥 oracle itself; the reduction then replies
to the adversary with (𝑟1‖𝑟2, 𝖯𝖱𝖥(𝗌𝗄, 𝑟1‖𝑟2) ⊕ 𝑚). In the end, if the adversary wins by outputting two
pairs (𝑚, 𝑟, 𝖯𝖱𝖥(𝗌𝗄, 𝑟) ⊕ 𝑚) and (𝑚′, 𝑟′, 𝖯𝖱𝖥(𝗌𝗄, 𝑟′) ⊕ 𝑚′), then the reduction outputs (𝑟, 𝖯𝖱𝖥(𝗌𝗄, 𝑟))
and (𝑟′, 𝖯𝖱𝖥(𝗌𝗄, 𝑟′)), breaking the single-query unlearnability of the 𝖯𝖱𝖥 (see Remark 4.12
and Section 6).
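The textbook PRF-based SKE referenced above encrypts as 𝖤𝗇𝖼(𝗌𝗄, 𝑚) = (𝑟, 𝖯𝖱𝖥(𝗌𝗄, 𝑟) ⊕ 𝑚). A minimal runnable sketch, with HMAC-SHA256 standing in for the abstract quantum-secure PRF (a stand-in chosen only so the snippet runs; the paper treats the PRF as an abstract primitive):

```python
import hmac, hashlib, os

def prf(key: bytes, x: bytes) -> bytes:
    # Stand-in PRF (HMAC-SHA256); the paper's PRF is an abstract primitive.
    return hmac.new(key, x, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def enc(sk: bytes, m: bytes):
    r = os.urandom(32)                  # fresh randomness per encryption
    return (r, xor(prf(sk, r), m))      # ct = (r, PRF(sk, r) XOR m)

def dec(sk: bytes, ct) -> bytes:
    r, c = ct
    return xor(prf(sk, r), c)

sk = os.urandom(32)
m = b"attack at dawn".ljust(32, b"\x00")
assert dec(sk, enc(sk, m)) == m
```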
7.3 Impossibility Result for Partially Randomized Functions in the Oracle Model
In this section, we give an example of a function family that cannot be compiled into a one-time
program under even the weakest definition of operational security, even in the classical oracle
model. It is known that deterministic functions fall into this category, but this counterexample
is not only randomized but also has high entropy in a very strong sense. It is however partially
deterministic: the (high) entropy is restricted to one half of the output and the other half is essen-
tially deterministic, demonstrating that high entropy is not sufficient for a function to be one-time
programmable.
Moreover, this example also demonstrates the following:
1. It separates single-physical-query unlearnable functions from single-effective-query unlearnable
ones.
2. It is a single-physical-query unlearnable function that cannot be securely one-time pro-
grammed with respect to the classical-output simulation definition (Definition 4.6).
Suppose 𝖯𝖱𝖥 is a length-preserving pseudorandom function that is secure against adversaries
who are allowed to make quantum superposition queries.13 Let 𝑎 be a uniformly random string
in {0, 1}𝑛 . Let 𝑘 be a random PRF key. Consider the function family ℱ𝑛 = {𝑓𝑎,𝑘 }𝑎,𝑘∈{0,1}𝑛 , where
𝑓𝑎,𝑘 : {0, 1}𝑛 × {0, 1}𝑛 → {0, 1}2𝑛 is defined as
               (𝑎, 𝖯𝖱𝖥𝑘(0‖𝑟))   if 𝑥 = 0,
𝑓𝑎,𝑘(𝑥; 𝑟) =   (𝑘, 𝖯𝖱𝖥𝑘(𝑎‖𝑟))   if 𝑥 = 𝑎,          (6)
               (0, 𝖯𝖱𝖥𝑘(𝑥‖𝑟))   otherwise.
Also define an associated distribution 𝒟 over this function family such that 𝑎 ← {0, 1}𝑛 is chosen
uniformly at random, and the key 𝑘 is sampled according to the PRF key-generation procedure.
First, we establish that this function family cannot be compiled into a one-time program even
under the weak operational security definition.
Lemma 7.12. ℱ cannot be compiled into a one-time program under the weak operational security definition.
Proof. We construct an adversary 𝒜 that learns the entire function given its one-time program.
First, 𝒜 runs the one-time program evaluation procedure on input 𝑥 = 0,
and measures the first 𝑛 bits of the output to obtain the value 𝑎. Since the first 𝑛 bits of 𝑓𝑎,𝑘(0, ·)
always equal 𝑎, by the gentle measurement lemma this measurement cannot disturb
the one-time program state. Having learned the value of 𝑎, the adversary 𝒜 uncomputes its
query on 0 to restore the initial state of the one-time program. Finally, the adversary makes a
second query to the one-time program evaluation procedure on input 𝑎, and measures the first 𝑛
bits of the output to get the 𝖯𝖱𝖥 key 𝑘. This reveals the entire function description to the adversary.
In particular, the adversary breaks the weakest operational security definition by computing
two input-output pairs (𝑥1, 𝑓𝑎,𝑘(𝑥1)) and (𝑥2, 𝑓𝑎,𝑘(𝑥2)).
If the PRF has output length 𝑚, this counterexample is indistinguishable from a function 𝑓 *
that has high min-entropy for every input 𝑥 ∈ {0, 1}𝑛 .
Claim 7.14 (Single physical query unlearnable). The function family ℱ defined in Equation (6) is
unlearnable given a single physical query under the associated probability distribution 𝒟.
13 The GGM construction, for instance, is secure against quantum superposition queries provided that the underlying
PRG is quantum-secure [Zha12b].
Proof. We show that for every non-uniform quantum-polynomial-time 𝒜,
Pr_{𝑓←ℱ𝑛}[𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾^𝒜_{ℱ,𝒟} = 1] ≤ 𝗇𝖾𝗀𝗅(𝑛).
We consider a sequence of hybrids. Let the first hybrid ℋ1 be the learning game. In the second
hybrid ℋ2 , the function family is now
               (𝑎, 𝖯𝖱𝖥𝑘(0‖𝑟))   if 𝑥 = 0,
𝑓𝑎,𝑘(𝑥; 𝑟) =   (0, 𝖯𝖱𝖥𝑘(𝑎‖𝑟))   if 𝑥 = 𝑎,
               (0, 𝖯𝖱𝖥𝑘(𝑥‖𝑟))   otherwise.
Since the adversary 𝒜 gets to make a single quantum query to the oracle 𝑂^{(1)}_{𝑓(·,$)}, and since 𝑎 ←
{0, 1}𝑛 is sampled uniformly at random by the challenger, with overwhelming probability the
weight placed by this query on input 𝑥 = 𝑎 must be negligible. Therefore, hybrids ℋ1 and ℋ2 are
indistinguishable by [BBBV97a].
Now consider a third hybrid ℋ3 where the PRF is replaced by a random oracle 𝐻.
               (𝑎, 𝐻(0‖𝑟))   if 𝑥 = 0,
𝑓𝑎,𝑘(𝑥; 𝑟) =   (0, 𝐻(𝑎‖𝑟))   if 𝑥 = 𝑎,
               (0, 𝐻(𝑥‖𝑟))   otherwise.
By the security of the PRF against quantum superposition query attacks, hybrids ℋ2 and ℋ3 are in-
distinguishable. Therefore, since the random oracle family is unlearnable under a single physical
query, so is the function family ℱ.
Since this function family cannot be compiled into a one-time program that satisfies the weak
operational security definition, by Lemma 7.12 and Claim 7.14 it also cannot be compiled into a
one-time program that satisfies the single-physical-query classical-output simulation-based definition.
Corollary 7.15. ℱ cannot be compiled into a one-time program under the single-physical-query classical-
output simulation-based definition.
Claim 7.16 (SEQ learnable). There is an adversary that, given single-effective-query access to 𝑓𝑎,𝑘, succeeds
in the learning game 𝖫𝖾𝖺𝗋𝗇𝗂𝗇𝗀𝖦𝖺𝗆𝖾ℱ,𝒟 with probability 1.
Proof. The proof of Lemma 7.12 also shows that the function is learnable in the SEQ model.
This trivially implies that the function family is one-time programmable in the SEQ model.
Corollary 7.17. The 𝖮𝖳𝖯 construction in Section 5.1 gives a one-time compiler for ℱ in the classical oracle
model, under the single effective query simulation-based definition.
8 Applications
8.1 Signature Tokens
Motivation and Comparison to [BDS23] In this section we briefly discuss how to generate one-
time tokens for Fiat-Shamir signature schemes by embedding our construction from Section 5.1
into the plain signature scheme.
One might wonder why we cannot simply use the signature token in [BDS23], where the sig-
natures are simply measured subspace vectors corresponding to the messages.
One unsatisfactory property of [BDS23] is that it does not satisfy the regular existential unforgeability
of signatures: if we give a "signing oracle" to the adversary, it can trivially break the
one-time security. A more idealized notion would allow the adversary to query a signing
oracle, while still preventing it from producing two signatures neither of which is on a queried message.
The other unsatisfactory part of the [BDS23] signature scheme is that one has to use subspace
vectors as signatures, making it hard to integrate other properties of a signature scheme we may want
(e.g., short signatures). More importantly, a corporation may have used a plain
signature scheme for a long time, but when it occasionally needs to delegate a one-time sign-
ing key to some external third party, it would have to change its entire verification
scheme to the subspace signature token scheme, which can result in extra cost and inconve-
nience. Therefore, an interesting question is: can we build a generic way to upgrade an existing
signature scheme to be one-time secure such that the verification algorithm stays (almost) unchanged?
The advantage of the signature schemes below over [BDS23] is that they preserve the origi-
nal signature scheme’s properties. In particular, the verification algorithm is almost identical to
the original verification algorithm of the signature scheme being compiled; the signature tokens
produce signatures from the original scheme on messages of the form 𝑚‖𝑟 for some 𝑟. Thus, the
verifier can use the original verification procedure and ignore the latter half of the signed message.
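To illustrate the "verifier ignores the suffix" idea, here is a minimal classical sketch in which an HMAC stands in for the original signature scheme (a symmetric stand-in chosen only so the snippet runs; the paper's schemes are public-key, and all names are illustrative):

```python
import hmac, hashlib, os

n = 16

def sign(sk, msg):            # stand-in for the original scheme's Sign
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(sk, msg, sig):     # stand-in for the original scheme's Verify
    return hmac.compare_digest(sign(sk, msg), sig)

def token_sign(sk, m):
    # The token signs m||r for a token-chosen r, using the ORIGINAL Sign.
    r = os.urandom(n)
    return r, sign(sk, m + r)

def compiled_verify(sk, m, r, sig):
    # The verifier reassembles m||r and runs the unchanged original Verify,
    # ignoring the r-suffix as far as the message content is concerned.
    return verify(sk, m + r, sig)

sk = os.urandom(32)
r, sig = token_sign(sk, b"pay 5 coins")
assert compiled_verify(sk, b"pay 5 coins", r, sig)
```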
Blind Unforgeability We describe here how to compile signature schemes satisfying a certain
notion of unforgeability with quantum query access into signature tokens (see Section 3.5 for def-
initions). The notion we require is a slight variant on blind unforgeability.
Definition 8.1 (Quantum Blind Unforgeability [AMRS20]). A signature scheme (𝖦𝖾𝗇, 𝖲𝗂𝗀𝗇, 𝖵𝖾𝗋𝗂𝖿𝗒)
for message space ℳ is blind-unforgeable if for every QPT adversary 𝒜 and blinding set 𝐵 ⊂ ℳ,
the probability that 𝒜, given quantum query access to 𝖲𝗂𝗀𝗇𝐵(𝗌𝗄, ·), outputs a valid message-signature
pair (𝑚, 𝜎) with 𝑚 ∈ 𝐵 is negligible. Here 𝖲𝗂𝗀𝗇𝐵(𝗌𝗄, ·) denotes a (quantumly-accessible) signing oracle
that signs a message 𝑚 using 𝗌𝗄 if 𝑚 ∈/ 𝐵, and otherwise outputs ⊥.
This definition differs from the original in that the adversary may choose its blinding set 𝐵.
[AMRS20] show that the hardness of this task is polynomially related to their original definition,
which samples 𝐵 uniformly at random.
We note that any sub-exponentially secure signature scheme is blind-unforgeable, since a
reduction could simply query for all signatures in ℳ∖𝐵 and then simulate the blind-unforgeability
experiment. [AMRS20] also give several other signature schemes that are blind-unforgeable
(under the original definition).
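The blinded signing oracle 𝖲𝗂𝗀𝗇_𝐵 itself is a one-line wrapper around the plain signer; a classical sketch (HMAC as a stand-in signer, ⊥ rendered as None, all names illustrative):

```python
import hmac, hashlib, os

def sign(sk, m):                      # stand-in signer
    return hmac.new(sk, m, hashlib.sha256).digest()

def make_blinded_oracle(sk, B):
    # Sign_B(sk, .): refuses to sign messages in the blinding set B.
    def oracle(m):
        return None if m in B else sign(sk, m)
    return oracle

sk = os.urandom(32)
B = {b"forge-me"}
sign_B = make_blinded_oracle(sk, B)

assert sign_B(b"benign") is not None     # messages outside B are signed
assert sign_B(b"forge-me") is None       # blinded messages get ⊥
```

In the blind-unforgeability game the adversary gets (superposition) access to `sign_B` and must forge a signature on some message inside 𝐵.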
Construction. Given a signing key 𝗌𝗄, the signer constructs a signature token by outputting a
one-time program for the following functionality:
To sign a message 𝑚 using a signature token 𝑇 , the temporary signer evaluates 𝑇 (𝑚) and
measures the output.
Hardcoded: A signing key 𝗌𝗄 and two PRF keys 𝑘1 and 𝑘2.
On input 𝑚:
Theorem 8.2. If (𝖦𝖾𝗇, 𝖲𝗂𝗀𝗇, 𝖵𝖾𝗋𝗂𝖿𝗒) satisfies blind-unforgeability (Definition 8.1), 𝖯𝖱𝖥 is a pseudorandom
function secure against quantum queries, and the one-time program satisfies SEQ simulation security
(Definition 4.8), then the above construction is one-time unforgeable (Definition 3.10).
Proof. We first show that any QPT adversary can only sign messages of the form 𝑚‖𝑟‖𝖯𝖱𝖥(𝑘1 , 𝑟)
for some 𝑟. Consider the following hybrid experiments:
• Setup: produces a master secret key 𝗆𝗌𝗄 together with a verification key/common reference
string 𝖢𝖱𝖲.
• Delegate: on input 𝗆𝗌𝗄, produces a one-time proving token 𝜌.
• Prove: on input 𝜌 and a statement-witness pair (𝑥, 𝑤), produces a proof 𝜋.
• Verify: on input 𝑥, 𝜋 and 𝗏𝗄, outputs accept or reject.
Note that all objects here are classical except for the proving token 𝜌 which is a quantum state.
In addition to the usual properties of completeness, soundness and zero knowledge, we require
that the proving token is one-time use only.
We construct a one-time proof token, following constructions of one-time PRFs in the plain
model. The proving token consists of a sequence of 𝑛 = |𝑥| many subspace states corresponding
to subspaces 𝐴1 , . . . , 𝐴𝑛 together with the obfuscation of a program that contains a PRF key 𝐾
together with the 𝐴𝑖; takes as input 𝑥, 𝑤 and 𝑛 vectors 𝑣1, . . . , 𝑣𝑛; checks that (𝑥, 𝑤) ∈ 𝑅𝐿, and
that each 𝑣𝑖 ∈ 𝐴𝑖 if 𝑥𝑖 = 0 and 𝑣𝑖 ∈ 𝐴𝑖⊥ if 𝑥𝑖 = 1. If all checks pass, output 𝖯𝖱𝖥𝐾(𝑥, 𝑣1, . . . , 𝑣𝑛);
otherwise output ⊥.14
The security of the one-time proof follows from similar arguments of the security for one-time
PRF in the plain model Section 6.2. We give a more formal description of the scheme below.
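Ignoring the obfuscation and the quantum encoding of the subspace states, the classical logic of the proving program can be sketched as follows, with membership oracles standing in for the 𝐴𝑖/𝐴𝑖⊥ checks and a toy relation for 𝑅𝐿 (all concrete choices here are illustrative assumptions):

```python
import hmac, hashlib, os

def prf(K, data):                      # stand-in for PRF_K
    return hmac.new(K, data, hashlib.sha256).digest()

def make_prove_program(K, R_L, in_A, in_A_perp, n):
    # Sketch of the (to-be-obfuscated) proving program: membership oracles
    # in_A[i] / in_A_perp[i] stand in for checking v_i in A_i resp. A_i^perp.
    def P(x, w, vs):
        if not R_L(x, w):
            return None                # reject invalid witnesses
        for i in range(n):
            ok = in_A_perp[i](vs[i]) if (x >> i) & 1 else in_A[i](vs[i])
            if not ok:
                return None            # v_i not in the required subspace
        return prf(K, bytes([x]) + b"".join(vs))
    return P

# Toy instantiation: n = 2, "subspaces" modeled as small sets of vectors.
n = 2
A = [{b"a0"}, {b"a1"}]
A_perp = [{b"b0"}, {b"b1"}]
in_A = [lambda v, S=S: v in S for S in A]
in_A_perp = [lambda v, S=S: v in S for S in A_perp]
R_L = lambda x, w: w == x + 1          # toy relation

P = make_prove_program(os.urandom(32), R_L, in_A, in_A_perp, n)
assert P(0b01, 0b10, [b"b0", b"a1"]) is not None   # x_0=1 -> v_0 in A_0^perp
assert P(0b01, 0b10, [b"a0", b"a1"]) is None       # wrong subspace for v_0
assert P(0b01, 0b11, [b"b0", b"a1"]) is None       # invalid witness
```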
One-time Security Definition The one-time NIZK scheme first needs to satisfy the usual NIZK
soundness and zero-knowledge properties. We omit these standard definitions here and refer to
[SW14] Section 5.5 for details.
We then define a very natural one-time security through the following game, as a special case
of the Definition 4.23 for NIZK proofs:
1. The challenger samples 𝖢𝖱𝖲 ← 𝖲𝖾𝗍𝗎𝗉(1𝜆 ) and prepares the program 𝖮𝖳𝖯 for the 𝖯𝗋𝗈𝗏𝖾 func-
tionality. 𝒜 gets a copy of the one-time program for 𝖮𝖳𝖯 and 𝖢𝖱𝖲.
2. 𝒜 outputs two instance-proof pairs (or instance-randomness-proof tuples, for a relaxed no-
tion) (𝑥1, 𝜋1), (𝑥2, 𝜋2), which must satisfy 𝑥1 ̸= 𝑥2 or 𝜋1 ̸= 𝜋2; otherwise 𝒜 loses.
3. The challenger checks whether 𝖵𝖾𝗋𝗂𝖿𝗒(𝑥1, 𝜋1) = 1 and 𝖵𝖾𝗋𝗂𝖿𝗒(𝑥2, 𝜋2) = 1, and outputs 1 if and
only if both checks pass.
We say that a one-time sampling program for NIZK satisfies security if for any QPT adversary 𝒜,
there exists a negligible function 𝗇𝖾𝗀𝗅(·) such that for all 𝜆 ∈ ℕ, Pr[the above game outputs 1] ≤ 𝗇𝖾𝗀𝗅(𝜆).
Construction Let 𝐹 be a constrainable PRF that takes inputs of ℓ bits and outputs 𝜆 bits. Let 𝑓 (·)
be a PRG. Let 𝐿 be a language and 𝑅(·, ·) a relation that takes in an instance and a witness. Our
system allows proofs for instances of ℓ bits and witnesses of ℓ′ bits. The bounds ℓ and ℓ′
can be specified at setup, although we suppress that notation here.
For simplicity of presentation, we omit the second PRF used to extract randomness
in Section 6, since it is only used to extract full entropy and does not affect the security proof.
𝖲𝖾𝗍𝗎𝗉(1𝜆 ) : The setup algorithm in our case generates a common reference string 𝖢𝖱𝖲 along with
a one-time proof token.
14 We may also obtain a construction from random-oracle-based NIZKs such as Fiat-Shamir, but this would require the
use of classical oracles, while using iO and a PRF gives a plain-model result.
1. The setup algorithm first chooses a puncturable PRF key 𝐾 for 𝐹 . Next, it creates an
obfuscation of the 𝖵𝖾𝗋𝗂𝖿𝗒 NIZK of Figure 11. The size of the program is padded to be the
maximum of itself and the program we define later in the security game.
2. It samples 𝑛 independent subspaces {|𝐴𝑖 ⟩}𝑖∈[𝑛] . It creates an obfuscation of the program
Prove NIZK of Figure 10. The size of the program is padded to be the maximum of itself
and the program we define later in the security game.
The common reference string 𝖢𝖱𝖲 consists of the two obfuscated programs and a master
secret key 𝗆𝗌𝗄 = {𝐴𝑖 }𝑖∈[𝑛] .
𝖣𝖾𝗅𝖾𝗀𝖺𝗍𝖾: takes in a master secret key 𝗆𝗌𝗄 = {𝐴𝑖 }𝑖∈[𝑛] and outputs {|𝐴𝑖 ⟩}𝑖∈[𝑛] as the one-time
proof token.
𝖯𝗋𝗈𝗏𝖾(𝖢𝖱𝖲, (𝑥, 𝑤), {|𝐴𝑖 ⟩}𝑖∈[𝑛] ) : The NIZK prove algorithm runs the obfuscated program of 𝖯𝗋𝗈𝗏𝖾
from CRS on inputs (𝑥, 𝑤) and subspace states |𝐴𝑖 ⟩ , 𝑖 ∈ [𝑛] in the following way: Apply
QFT to each |𝐴𝑖 ⟩ if 𝑥𝑖 = 1, else apply identity operator. Run the program in Figure 10 on
input (𝑥, 𝑤) and the modified subspace states. If 𝑅(𝑥, 𝑤) holds the program returns a proof
𝜋 = (𝑟, 𝐹 (𝐾, 𝑥‖𝑟)).
𝖵𝖾𝗋𝗂𝖿𝗒(𝑥, 𝜋, 𝖢𝖱𝖲): Run the input (𝑥, 𝜋) into the obfuscated program Figure 11 and output the
program’s output.
One-Time Security The correctness and one-time security proofs follow relatively straightforwardly,
in a similar way to the proof for the PRF construction in Section 6.
We first replace the PRF 𝐹 ’s hardcoded key 𝐾 in both programs with a key 𝐾 * that is con-
strained to evaluate only on (𝑥, 𝑟) such that 𝑟 is a tuple of subspace vectors corresponding
to 𝑥. By the property of iO, this change is indistinguishable. Then, by the computational direct
product hardness of the subspace states, we can argue that in any two proofs the adversary provides,
at least one of 𝑟1 and 𝑟2 is not a valid tuple of subspace vectors; producing an accepting proof on
such an 𝑟 would break the constrained pseudorandomness of the constrained PRF.
Soundness The proof of regular soundness for the construction is inspired by the proof of Theorem 9
in [SW14], but we need to use subexponentially secure iO and slightly more complicated hybrids.
In hybrid 1, for some instance 𝑥* ∈ / 𝐿, we constrain the PRF key 𝐾 used in the program of Figure 10
to a constrained key 𝐾 * that evaluates only on inputs (𝑥‖𝑟) with 𝑥 ̸= 𝑥* . Since 𝑥* ∈ / 𝐿, the
functionality of the program 𝖯𝗋𝗈𝗏𝖾𝖮𝖳𝖯 is unchanged and we can invoke iO. The next few steps
deviate from [SW14].
We design hybrids (2.𝑗.1) for 𝑗 = 1, 2, · · · , 𝑡, where 𝑡 = 2^{𝜆·𝑛/2}: for all tuples of vectors 𝑢1 ∈ 𝐴1^{𝑥1}, · · · , 𝑢𝑛 ∈ 𝐴𝑛^{𝑥𝑛},
we fix a lexicographic order on them and call the 𝑗-th tuple 𝐮𝑗. In hybrid (2.𝑗.1), we modify
the program 𝖵𝖾𝗋𝗂𝖿𝗒𝖮𝖳𝖯 as follows.
1. Parse 𝜋 := (𝑟, 𝑦).
2. If for all 𝑖 ∈ [𝑛], 𝗌𝗁𝖮𝑖^{𝑥𝑖}(𝑟𝑖) = 1, where 𝑥𝑖 is the 𝑖-th bit of 𝑥: proceed to
step 3; else output 0.
3. If 𝑥 = 𝑥* and 𝑟 < 𝑟𝑗 : output 0.
4. Else if 𝑥 = 𝑥* and 𝑟 = 𝑟𝑗 :
check if 𝑓 (𝑦) = 𝑓 (𝑦 * ): if yes, output 1; if no, output 0.
5. Else if 𝑥 ̸= 𝑥* or 𝑟 > 𝑟𝑗 :
check if 𝑓 (𝑦) = 𝑓 (𝐹 (𝐾𝑥* ,𝑟𝑗 , 𝑥‖𝑟)): if yes, output 1; if no, output 0.
Figure 12: Program NIZK 𝖵𝖾𝗋𝗂𝖿𝗒𝖮𝖳𝖯 in hybrid (2.𝑗.1). Note that the key 𝐾𝑥* ,𝑟𝑗 is a punctured/constrained
key that does not evaluate at (𝑥* ‖𝑟𝑗 ).
Note that the functionality of the program is essentially the same as in the original program
of Figure 11, and therefore we can invoke the security of iO. After hybrid (2.𝑗.1), we introduce
hybrid (2.𝑗.2), in which we replace the value 𝑦 * = 𝐹 (𝐾, 𝑥* ‖𝑟𝑗 ) hardcoded in the program with a
random value; this follows from the pseudorandomness at punctured/constrained values. In the
following hybrid (2.𝑗.3), we replace the value 𝑓 (𝑦 * ) with a random value from the range of 𝑓,
which is indistinguishable by the security of the PRG 𝑓. Now, with overwhelming probability,
when the input is (𝑥* , 𝜋 = (𝑟𝑗 , 𝑦)), there exists no value 𝑦 that makes the program output 1.
After hybrid (2.𝑗.3), we move to the next hybrid (2.𝑗+1.1), where 𝖵𝖾𝗋𝗂𝖿𝗒𝖮𝖳𝖯 is the same pro-
gram as in Figure 12 but with the counter incremented from 𝑗 to 𝑗+1. By the above observations,
hybrid (2.𝑗.3) is statistically indistinguishable from hybrid (2.𝑗+1.1), and now for all inputs where
𝑥 = 𝑥* and 𝑟 < 𝑟𝑗+1 , the program outputs 0.
Finally, after hybrid (2.𝑡.3), we obtain a 𝖵𝖾𝗋𝗂𝖿𝗒 program that outputs 0 on all inputs (𝑥, 𝜋 = (𝑟, 𝑦))
where 𝑥 = 𝑥* and 𝑟 = (𝑟1 , · · · , 𝑟𝑛 ) with 𝑟𝑖 ∈ 𝐴𝑖^{𝑥𝑖}. The adversary’s advantage in getting an
acceptance on instance 𝑥* is therefore 0.
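The hybrid verifier of Figure 12 can be sketched classically as follows (hash-based stand-ins for the PRG 𝑓 and the constrained PRF 𝐹; the subspace-membership check is abstracted as a callable; all names are illustrative):

```python
import hmac, hashlib

def f_prg(y):                       # stand-in for the PRG f
    return hashlib.sha256(b"prg" + y).digest()

def F(K, data):                     # stand-in for the (constrained) PRF F
    return hmac.new(K, data, hashlib.sha256).digest()

def make_hybrid_verify(K, x_star, r_j, y_star, in_subspace):
    # Verify_OTP in hybrid (2.j.1), following Figure 12: inputs with x = x*
    # and r below the lexicographic cutoff r_j are rejected outright, r = r_j
    # is checked against the hardcoded f(y*), and everything else is checked
    # against the (punctured) PRF as in the original program.
    def V(x, pi):
        r, y = pi                                       # step 1
        if not in_subspace(x, r):                       # step 2
            return 0
        if x == x_star and r < r_j:                     # step 3
            return 0
        if x == x_star and r == r_j:                    # step 4
            return 1 if f_prg(y) == f_prg(y_star) else 0
        return 1 if f_prg(y) == f_prg(F(K, x + r)) else 0  # step 5
    return V

K = b"k" * 32
x_star, r_j, y_star = b"xs", b"r5", b"ys"
V = make_hybrid_verify(K, x_star, r_j, y_star, lambda x, r: True)

assert V(x_star, (b"r1", b"anything")) == 0          # r < r_j: always reject
assert V(x_star, (r_j, y_star)) == 1                 # cutoff: hardcoded check
assert V(b"xo", (b"r9", F(K, b"xo" + b"r9"))) == 1   # honest proof accepted
```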
Zero Knowledge The zero-knowledge proof follows exactly from [SW14]. The simulator 𝑆, on
input 𝑥, runs the setup algorithm and outputs the corresponding 𝖢𝖱𝖲 along with 𝜋 = 𝐹 (𝐾, 𝑥‖𝑟)
for some 𝑟 of its own choice, since it samples the subspaces itself. The simulator’s output has exactly
the same distribution as any real prover’s for any 𝑥 ∈ 𝐿 and witness 𝑤 with 𝑅(𝑥, 𝑤) = 1.
Definition 8.4 (Public-key quantum money). A quantum money scheme consists of a pair of quantum
polynomial-time algorithms (𝖦𝖾𝗇, 𝖵𝖾𝗋) with the following syntax, correctness and security specifications.
• Syntax: The generation procedure 𝖦𝖾𝗇(1𝜆 ) takes as input a security parameter 𝜆 and outputs a
classical serial number 𝜎 and a quantum state |𝜓⟩. The verification algorithm 𝖵𝖾𝗋(𝜎, |𝜓⟩) outputs an
accept/reject bit, 0 or 1.
• Correctness: There exists a negligible function 𝗇𝖾𝗀𝗅 such that Pr[𝖵𝖾𝗋(𝜎, |𝜓⟩) = 1 : (𝜎, |𝜓⟩) ← 𝖦𝖾𝗇(1𝜆 )] ≥ 1 − 𝗇𝖾𝗀𝗅(𝜆).
• Security: For all quantum polynomial-time algorithms 𝐴, there exists a negligible function 𝗇𝖾𝗀𝗅 such
that 𝐴 wins the following security game with probability at most 𝗇𝖾𝗀𝗅(𝜆):
– The challenger runs (𝜎, |𝜓⟩) ← 𝖦𝖾𝗇(1𝜆 ), and gives 𝜎, |𝜓⟩ to 𝐴.
– 𝐴 produces a (potentially entangled) joint state 𝜌1,2 over two registers 𝜌1 and 𝜌2 , and sends 𝜌1,2
to the challenger.
– The challenger runs 𝑏1 ← 𝖵𝖾𝗋(𝜎, 𝜌1 ) and 𝑏2 ← 𝖵𝖾𝗋(𝜎, 𝜌2 ). The adversary 𝐴 wins if and only if
𝑏1 = 𝑏2 = 1.
Theorem 8.5. Let ℱ be a verifiable randomized function family. Suppose there exists a one-time pro-
gram scheme that satisfies the operational one-time security notion for verifiable functions (Definition 4.29).
Then, a public-key quantum money scheme exists.
Proof. Let ℱ be a verifiable randomized function family, and let 𝖮𝖳𝖯 = (𝖦𝖾𝗇𝖾𝗋𝖺𝗍𝖾𝖮𝖳𝖯 , 𝖤𝗏𝖺𝗅𝗎𝖺𝗍𝖾𝖮𝖳𝖯 )
be a one-time program scheme that satisfies the operational one-time security notion for verifiable
functions. Then, we can construct a quantum money scheme 𝖰𝖬𝗈𝗇𝖾𝗒 = (𝖦𝖾𝗇𝖰𝖬 , 𝖵𝖾𝗋𝖰𝖬 ) as fol-
lows:
• 𝖦𝖾𝗇𝖰𝖬 (1𝜆 ): Sample a function along with its verification key 𝑓, 𝑣𝑘𝑓 ← ℱ𝜆 , and generate its
one-time program |𝜓⟩ ← 𝖮𝖳𝖯(𝑓 ). Output classical serial number 𝜎 = 𝑣𝑘𝑓 and the money
state |𝜓⟩ = 𝖮𝖳𝖯(𝑓 ).
• 𝖵𝖾𝗋𝖰𝖬 (𝜎, 𝜌): Sample a random 𝑥 ← 𝒳 and initialize an input register 𝑋 to be |𝑥⟩. Coherently
run (without performing any measurement) 𝖤𝗏𝖺𝗅𝗎𝖺𝗍𝖾𝖮𝖳𝖯 (𝜌, 𝑥) to get the output in register
𝑌 . Coherently run 𝖵𝖾𝗋ℱ (𝑣𝑘𝑓 , 𝑋, 𝑌 ) on the 𝑋 and 𝑌 registers and measure the resulting
accept/reject bit. Accept if the verification passes and reject otherwise.
The correctness of this quantum money scheme follows from the correctness of the one-time
program 𝖮𝖳𝖯(𝑓 ), the verification procedure 𝖵𝖾𝗋ℱ , and the gentle measurement lemma (Lemma 3.2).
We now prove security. Suppose for contradiction there exists an adversary 𝐴 that wins the
quantum money security game with non-negligible probability, so that it outputs a joint state 𝜌1,2
on two registers 𝜌1 and 𝜌2 such that both verification checks pass. Then, there exists an adversary
𝐵 that breaks the one-time program security of 𝖮𝖳𝖯(𝑓 ): given 𝖮𝖳𝖯(𝑓 ) and 𝑣𝑘𝑓 as auxiliary infor-
mation, 𝐵 runs 𝐴 on the same input, 𝐴(𝖮𝖳𝖯(𝑓 ), 𝑣𝑘𝑓 ). Suppose that 𝐴 outputs (a joint state on)
two registers 𝜌1 , 𝜌2 . Then, since the verification procedure verifies by sampling a random input
to test the functionality on each state, running the verification procedure on both these registers
results in two samples (𝑥1 , 𝑦1 ) and (𝑥2 , 𝑦2 ) with non-negligible probability, breaking the one-time
program security of 𝖮𝖳𝖯.
9 References
[Aar04] Scott Aaronson. Limitations of quantum advice and one-way communication. In
Proceedings. 19th IEEE Annual Conference on Computational Complexity, 2004., pages 320–
332. IEEE, 2004. 4, 15
[ABDS20] Gorjan Alagic, Zvika Brakerski, Yfke Dulek, and Christian Schaffner. Impossibility of
quantum virtual black-box obfuscation of classical circuits, 2020. 3, 5, 12, 20, 49, 51
[AC12] Scott Aaronson and Paul Christiano. Quantum money from hidden subspaces. In
Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 41–
60. ACM, 2012. 8
[ACE+ 22] Ghada Almashaqbeh, Ran Canetti, Yaniv Erlich, Jonathan Gershoni, Tal Malkin, It-
sik Pe’er, Anna Roitburd-Berman, and Eran Tromer. Unclonable polymers and their
cryptographic applications. In Orr Dunkelman and Stefan Dziembowski, editors,
EUROCRYPT 2022, Part I, volume 13275 of LNCS, pages 759–789. Springer, Cham,
May / June 2022. 14
[AMRS20] Gorjan Alagic, Christian Majenz, Alexander Russell, and Fang Song. Quantum-access-
secure message authentication via blind-unforgeability. In Anne Canteaut and Yu-
val Ishai, editors, EUROCRYPT 2020, Part III, volume 12107 of LNCS, pages 788–817.
Springer, Cham, May 2020. 39, 57
[AP21] Prabhanjan Ananth and Rolando L. La Placa. Secure software leasing. Springer-
Verlag, 2021. 3, 5, 12, 20, 49, 51, 53
[BBBV97a] Charles H Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani. Strengths
and weaknesses of quantum computing. SIAM journal on Computing, 26(5):1510–1523,
1997. 13, 50, 53, 54, 56
[BBBV97b] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani. Strengths
and weaknesses of quantum computing. SIAM Journal on Computing, 26(5):1510–1523,
Oct 1997. 53
[BDS23] Shalev Ben-David and Or Sattath. Quantum tokens for digital signatures. Quantum,
7:901, 2023. 1, 2, 3, 4, 8, 9, 11, 14, 17, 56, 57
[BGI+ 01] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil
Vadhan, and Ke Yang. On the (im)possibility of obfuscating programs. In Annual
International Cryptology Conference, pages 1–18. Springer, 2001. 2, 11, 19, 41
[BGS13] Anne Broadbent, Gus Gutoski, and Douglas Stebila. Quantum one-time programs. In
Annual Cryptology Conference, pages 344–360. Springer, 2013. 1, 2, 4, 5, 6, 14, 20
[BJ15] Anne Broadbent and Stacey Jeffery. Quantum homomorphic encryption for circuits of
low t-gate complexity. In Annual Cryptology Conference, pages 609–629. Springer, 2015.
49
[BKNY23] James Bartusek, Fuyuki Kitagawa, Ryo Nishimaki, and Takashi Yamakawa. Obfusca-
tion of pseudo-deterministic quantum circuits. In Proceedings of the 55th Annual ACM
Symposium on Theory of Computing, pages 1567–1578, 2023. 9, 17
[BKS23] James Bartusek, Dakshita Khurana, and Akshayaram Srinivasan. Secure computation
with shared epr pairs (or: How to teleport in zero-knowledge). In Annual International
Cryptology Conference, pages 224–257. Springer, 2023. 62
[BKW17] Dan Boneh, Sam Kim, and David J Wu. Constrained keys for invertible pseudoran-
dom functions. In Theory of Cryptography Conference, pages 237–263. Springer, 2017. 11,
44, 45
[BLW17] Dan Boneh, Kevin Lewi, and David J Wu. Constraining pseudorandom functions
privately. In IACR International Workshop on Public Key Cryptography, pages 494–524.
Springer, 2017. 44
[Bra18] Zvika Brakerski. Quantum FHE (almost) as secure as classical. In Annual International
Cryptology Conference, pages 67–95. Springer, 2018. 49
[BV15] Zvika Brakerski and Vinod Vaikuntanathan. Constrained key-homomorphic prfs from
standard lattice assumptions: Or: How to secretly embed a circuit in your prf. In The-
ory of Cryptography: 12th Theory of Cryptography Conference, TCC 2015, Warsaw, Poland,
March 23-25, 2015, Proceedings, Part II 12, pages 1–30. Springer, 2015. 44, 45
[CGLZ19] Kai-Min Chung, Marios Georgiou, Ching-Yi Lai, and Vassilis Zikas. Cryptography
with disposable backdoors. Cryptography, 3(3):22, 2019. 14
[CHV23] Céline Chevalier, Paul Hermouet, and Quoc-Huy Vu. Semi-quantum copy-protection
and more. In Theory of Cryptography Conference, pages 155–182. Springer, 2023. 42, 43
[CLLZ21a] Andrea Coladangelo, Jiahui Liu, Qipeng Liu, and Mark Zhandry. Hidden cosets and
applications to unclonable cryptography. In Advances in Cryptology–CRYPTO 2021:
41st Annual International Cryptology Conference, CRYPTO 2021, Virtual Event, August
16–20, 2021, Proceedings, Part I 41, pages 556–584. Springer, 2021. 8, 42, 43
[CLLZ21b] Andrea Coladangelo, Jiahui Liu, Qipeng Liu, and Mark Zhandry. Hidden cosets and
applications to unclonable cryptography. In Tal Malkin and Chris Peikert, editors,
CRYPTO 2021, Part I, volume 12825 of LNCS, pages 556–584, Virtual Event, August
2021. Springer, Cham. 14
[DFM20] Jelle Don, Serge Fehr, and Christian Majenz. The measure-and-reprogram technique
2.0: multi-round Fiat-Shamir and more. In Annual International Cryptology Conference,
pages 602–631, 2020. 79
[DFMS19] Jelle Don, Serge Fehr, Christian Majenz, and Christian Schaffner. Security of the
Fiat-Shamir transformation in the quantum random-oracle model. In Advances in
Cryptology–CRYPTO 2019: 39th Annual International Cryptology Conference, Santa Bar-
bara, CA, USA, August 18–22, 2019, Proceedings, Part II 39, pages 356–383. Springer,
2019. 79
[GG17] Rishab Goyal and Vipul Goyal. Overcoming cryptographic impossibility results using
blockchains. In Yael Kalai and Leonid Reyzin, editors, TCC 2017, Part I, volume 10677
of LNCS, pages 529–561. Springer, Cham, November 2017. 14
[GGH+ 16] Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent
Waters. Candidate indistinguishability obfuscation and functional encryption for all
circuits. SIAM Journal on Computing, 45(3):882–929, 2016. 41
[GIS+ 10] Vipul Goyal, Yuval Ishai, Amit Sahai, Ramarathnam Venkatesan, and Akshay Wadia.
Founding cryptography on tamper-proof hardware tokens. In Daniele Micciancio,
editor, Theory of Cryptography, 7th Theory of Cryptography Conference, TCC 2010, Zurich,
Switzerland, February 9-11, 2010. Proceedings, volume 5978 of Lecture Notes in Computer
Science, pages 308–326. Springer, 2010. 1, 14
[GKR08a] Shafi Goldwasser, Yael Tauman Kalai, and Guy N Rothblum. One-time programs. In
Advances in Cryptology–CRYPTO 2008: 28th Annual International Cryptology Conference,
Santa Barbara, CA, USA, August 17-21, 2008. Proceedings 28, pages 39–56. Springer, 2008.
1
[GKR08b] Shafi Goldwasser, Yael Tauman Kalai, and Guy N. Rothblum. One-time programs. In
David Wagner, editor, CRYPTO 2008, volume 5157 of LNCS, pages 39–56. Springer,
Berlin, Heidelberg, August 2008. 14
[GKW17] Rishab Goyal, Venkata Koppula, and Brent Waters. Lockable obfuscation. In 2017
IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 612–
621. IEEE, 2017. 50
[GM24] Sam Gunn and Ramis Movassagh. Quantum one-time protection of any randomized
algorithm. private communication, 2024. 13, 14
[Had00] Satoshi Hada. Zero-knowledge and code obfuscation. In Tatsuaki Okamoto, editor,
ASIACRYPT 2000, volume 1976 of LNCS, pages 443–457. Springer, Berlin, Heidelberg,
December 2000. 2
[LSZ20] Qipeng Liu, Amit Sahai, and Mark Zhandry. Quantum immune one-time memories.
Cryptology ePrint Archive, 2020. 14
[Mah20] Urmila Mahadev. Classical homomorphic encryption for quantum circuits. SIAM
Journal on Computing, 52(6):FOCS18–189, 2020. 49
[NC02] Michael A Nielsen and Isaac Chuang. Quantum computation and quantum informa-
tion, 2002. 14
[RKB+ 18] Marie-Christine Roehsner, Joshua A. Kettlewell, Tiago B. Batalhão, Joseph F. Fitzsi-
mons, and Philip Walther. Quantum advantage for probabilistic one-time programs.
Nature Communications, 9(1):5225, Dec 2018. 14
[SW14] Amit Sahai and Brent Waters. How to use indistinguishability obfuscation: deniable
encryption, and more. In Proceedings of the forty-sixth annual ACM symposium on Theory
of computing, pages 475–484, 2014. 11, 41, 45, 59, 61, 62
[VZ21] Thomas Vidick and Tina Zhang. Classical proofs of quantum knowledge. In Annual
International Conference on the Theory and Applications of Cryptographic Techniques, pages
630–660. Springer, 2021. 42
[Win99] Andreas Winter. Coding theorem and strong converse for quantum channels. IEEE
Transactions on Information Theory, 45(7):2481–2485, 1999. 4
[YZ21] Takashi Yamakawa and Mark Zhandry. Classical vs quantum random oracles. In An-
nual International Conference on the Theory and Applications of Cryptographic Techniques,
pages 568–597. Springer, 2021. 23, 79, 80, 82
[Zha12a] Mark Zhandry. How to construct quantum random functions. In Proceedings of the
2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, FOCS ’12, page
679–687, USA, 2012. IEEE Computer Society. 51
[Zha12b] Mark Zhandry. How to construct quantum random functions. In 2012 IEEE 53rd
Annual Symposium on Foundations of Computer Science, pages 679–687. IEEE, 2012. 55
[Zha16] Mark Zhandry. The magic of ELFs. In Matthew Robshaw and Jonathan Katz, editors,
CRYPTO 2016, Part I, volume 9814 of LNCS, pages 479–508. Springer, Berlin, Heidel-
berg, August 2016. 50
[Zha19a] Mark Zhandry. How to record quantum queries, and applications to quantum indif-
ferentiability. In Alexandra Boldyreva and Daniele Micciancio, editors, Advances in
Cryptology – CRYPTO 2019, pages 239–268, Cham, 2019. Springer International Pub-
lishing. 2, 7, 15, 16, 23
[Zha19b] Mark Zhandry. Quantum lightning never strikes the same state twice. In Annual
International Conference on the Theory and Applications of Cryptographic Techniques, pages
408–438. Springer, 2019. 41, 42
1. Pairwise independence: For any (𝑥, 𝑟, 𝑦), (𝑥′ , 𝑟′ , 𝑦 ′ ) ∈ 𝒳 × ℛ × 𝒴 such that (𝑥, 𝑟) ̸= (𝑥′ , 𝑟′ ),
Pr𝑓 [𝑓 (𝑥, 𝑟) = 𝑦 ∧ 𝑓 (𝑥′ , 𝑟′ ) = 𝑦 ′ ] = Pr𝑓 [𝑓 (𝑥, 𝑟) = 𝑦] · Pr𝑓 [𝑓 (𝑥′ , 𝑟′ ) = 𝑦 ′ ].
2. High Randomness: There is a negligible function 𝜈(𝜆) such that for any (𝑥, 𝑟, 𝑦) ∈ 𝒳 × ℛ × 𝒴,
Pr𝑓 [𝑓 (𝑥, 𝑟) = 𝑦] ≤ 𝜈(𝜆).
3. 1/|ℛ| = 𝗇𝖾𝗀𝗅(𝜆)
Then ℱ is SEQ-𝗇𝖾𝗀𝗅(𝜆)-unlearnable.
Proof.
1. The SEQ oracle implements a compressed oracle for 𝐻. In particular, it maintains a database
register 𝒟𝐻 that stores ∅ or a list of values (𝑥, 𝑟) ∈ 𝒳 × ℛ. See Section 3.3 for a formal
definition of the compressed oracle representation.
The SEQ oracle also stores the function 𝑓 on register ℱ. Let ℱ be initialized to the uniform
superposition:
|𝐹∅ ⟩ := (1/√|ℱ|) ∑_{𝑓∈ℱ} |𝑓 ⟩ℱ
Then the SEQ oracle will answer queries coherently, without measuring the superposition.
Also, let us represent 𝑓 as its truth table, so for every (𝑥, 𝑟) ∈ 𝒳 × ℛ, there is a register ℱ𝑥,𝑟
that holds the value 𝑦 = 𝑓 (𝑥, 𝑟).
Let 𝒪 = 𝒟𝐻 × ℱ be the oracle’s internal register, let 𝒬 be the query register submitted to
the oracle, and let 𝒜 be the adversary’s private register. We can assume that the state of the
system is a pure state over 𝒜 × 𝒬 × 𝒪 since the adversary and the SEQ oracle act as unitaries
over these registers.
2. Let us define the states that 𝒪 is allowed to be in, and let 𝐸 project onto the allowed states.
𝐸 = 𝕀𝒜×𝒬 ⊗ 𝐸𝒪
𝐸𝒪 projects onto all states on 𝒪 in the span of the following basis states:
|∅⟩𝒟𝐻 ⊗ |𝐹∅ ⟩ℱ   or   (|(𝑥, 𝑟)⟩𝒟𝐻 ⊗ |𝐹𝑥,𝑟,𝑦 ⟩ℱ)_{(𝑥,𝑟,𝑦)∈𝒳×ℛ×𝒴}
adversary will lose the learning game with overwhelming probability because
Pr_{𝑓←ℱ}[𝑓 (𝑥, 𝑟) = 𝑦 ∧ 𝑓 (𝑥′ , 𝑟′ ) = 𝑦 ′ ] = Pr_{𝑓←ℱ}[𝑓 (𝑥, 𝑟) = 𝑦] · Pr_{𝑓←ℱ}[𝑓 (𝑥′ , 𝑟′ ) = 𝑦 ′ ] ≤ 𝜈(𝜆)² = 𝗇𝖾𝗀𝗅(𝜆)
(b) Case 2: Otherwise, the measurement on 𝒟𝐻 returns some (𝑥′′, 𝑟′′), and we also measure 𝑦′′. At the end of step 2, the state on the ℱ register is |𝐹𝑥′′,𝑟′′,𝑦′′⟩. When we measure ℱ in step 3, we obtain a uniformly random 𝑓 ← ℱ𝑥′′,𝑟′′,𝑦′′.
At least one of (𝑥, 𝑟) and (𝑥′, 𝑟′) does not equal (𝑥′′, 𝑟′′). Without loss of generality, let us say that (𝑥, 𝑟) ≠ (𝑥′′, 𝑟′′). Then the probability that the adversary wins the learning game is
Pr[𝒜 wins] ≤ Pr_{𝑓←ℱ𝑥′′,𝑟′′,𝑦′′}[𝑓(𝑥, 𝑟) = 𝑦] ≤ 𝜈(𝜆) = 𝗇𝖾𝗀𝗅(𝜆)
We’ve shown that in both cases, the adversary wins the learning game with negligible prob-
ability. Therefore ℱ is SEQ-𝗇𝖾𝗀𝗅(𝜆)-unlearnable.
Lemma A.2. After any polynomial number of queries to the SEQ oracle, the state of the system |𝜓⟩ satisfies
‖𝐸 · |𝜓⟩ ‖ ≥ 1 − 𝗇𝖾𝗀𝗅(𝜆).
Proof.
1. Let us define some useful operations.
• Let 𝑉𝑥 be a projector acting on 𝒪 that projects onto all states for which 𝒟𝐻 contains ∅
or a single entry of the form (𝑥, 𝑟) for some 𝑟 ∈ ℛ. Furthermore, let 𝑉 be the following
projector acting on 𝒜 × 𝒬 × 𝒪:
𝑉 = Σ_{𝑥∈𝒳} 𝕀𝒜×𝒬𝑢×𝒬𝑏 ⊗ |𝑥⟩⟨𝑥|𝒬𝑥 ⊗ (𝑉𝑥)𝒪
In other words, 𝑉 verifies whether the query 𝑥 on 𝒬𝑥 , if answered by the SEQ oracle,
would keep the number of queries recorded in 𝒟𝐻 ≤ 1.
Note that 𝐸 and 𝑉 commute with each other and share an eigenbasis.
• Let 𝖣𝖾𝖼𝗈𝗆𝗉 and 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 be defined as they were in Section 3.3.
• The SEQ oracle maintains an internal register ℛ that is used to store an intermediate
𝑟-value. Then 𝖢𝖮′𝐻 copies the output of 𝐻(𝑥) into the ℛ register. 𝖢𝖮′𝐻 computes the
following mapping:
𝖢𝖮′𝐻 : |𝑥⟩𝒬𝒳 ⊗ |𝑟′⟩ℛ ⊗ |(𝑥, 𝑟)⟩𝒟𝐻 → |𝑥⟩𝒬𝒳 ⊗ |𝑟′ ⊕ 𝑟⟩ℛ ⊗ |(𝑥, 𝑟)⟩𝒟𝐻
𝖢𝖮′𝐹 : |𝑥, 𝑢, 𝑏⟩𝒬 ⊗ |𝑟⟩ℛ ⊗ |𝑓⟩ℱ → |𝑥, 𝑢 ⊕ 𝑓(𝑥, 𝑟), 𝑏 ⊕ 1⟩𝒬 ⊗ |𝑟⟩ℛ ⊗ |𝑓⟩ℱ
3. Let us assume that at the beginning of the query, the state |𝜓⟩ is in the span of 𝐸. This is true
at the beginning of the first query because the state of 𝒪 is |∅⟩𝒟𝐻 ⊗ |𝐹∅ ⟩ℱ , which is in the
span of 𝐸. Then we will show that at the end of the query, the state |𝜓 ′ ⟩ is negligibly close to
a state in the span of 𝐸. Then by induction, after any polynomial number of queries to the
SEQ oracle, the state of 𝒪 will be negligibly close to a state in the span of 𝐸.
4. Next, we split |𝜓⟩ into the components perpendicular to 𝑉 and in the span of 𝑉:
|𝜓⟩ = (𝕀 − 𝑉) · |𝜓⟩ + 𝑉 · |𝜓⟩
Note that 𝐬𝑉̄ := (𝕀 − 𝑉) · |𝜓⟩ and 𝐬𝑉 := 𝑉 · |𝜓⟩ are still in the span of 𝐸 because applying (𝕀 − 𝑉) or 𝑉 to |𝜓⟩ just checks whether 𝒬𝒳 records a different 𝑥-value than 𝒟𝐻.
The SEQ oracle applies 𝕀 to the first component 𝐬𝑉̄ and applies 𝖣𝖾𝖼𝗈𝗆𝗉 ∘ 𝖢𝖮′𝐻 ∘ 𝖢𝖮′𝐹 ∘ 𝖢𝖮′𝐻 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉 to 𝐬𝑉. We will show that applying 𝖣𝖾𝖼𝗈𝗆𝗉 ∘ 𝖢𝖮′𝐻 ∘ 𝖢𝖮′𝐹 ∘ 𝖢𝖮′𝐻 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉 to 𝐬𝑉 produces a state that gives an overwhelming fraction of its amplitude to a state in the span of 𝐸.
5. Lemma A.3 says that for any state |𝜑⟩ on 𝒜 × 𝒬 × 𝒪 such that 𝑉 · |𝜑⟩ = |𝜑⟩,
‖𝐸 · |𝜑⟩‖ − 𝗇𝖾𝗀𝗅(𝜆) ≤ ‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐸 · |𝜑⟩‖
This means that if a state starts in the span of 𝐸 and 𝑉, then applying 𝖣𝖾𝖼𝗈𝗆𝗉 to it will not move it out of the span of 𝐸, except by a negligible amount.
Let us set |𝜑⟩ = 𝐬𝑉 / ‖𝐬𝑉‖. Note that 𝐸 |𝜑⟩ = |𝜑⟩ and 𝑉 |𝜑⟩ = |𝜑⟩. Then applying Lemma A.3 shows that
1 − 𝗇𝖾𝗀𝗅(𝜆) ≤ ‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐬𝑉‖ / ‖𝐬𝑉‖
‖𝐬𝑉‖ − 𝗇𝖾𝗀𝗅′(𝜆) ≤ ‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐬𝑉‖
6. Next, the operations 𝖢𝖮′𝐻 and 𝖢𝖮′𝐹 commute with 𝐸. If a state |𝜑⟩ is in the span of 𝐸, then 𝖢𝖮′𝐻 ∘ 𝖢𝖮′𝐹 ∘ 𝖢𝖮′𝐻 |𝜑⟩ will be in the span of 𝐸 as well. Since these operations are unitaries, they also preserve norms. Therefore,
‖𝐸 · 𝖢𝖮′𝐻 ∘ 𝖢𝖮′𝐹 ∘ 𝖢𝖮′𝐻 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐬𝑉‖ = ‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐬𝑉‖ ≥ ‖𝐬𝑉‖ − 𝗇𝖾𝗀𝗅′(𝜆)
Furthermore, 𝖢𝖮′𝐻 ∘ 𝖢𝖮′𝐹 ∘ 𝖢𝖮′𝐻 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐬𝑉 will be in the span of 𝑉 since all of the operations 𝖢𝖮′𝐻, 𝖢𝖮′𝐹, 𝖣𝖾𝖼𝗈𝗆𝗉 map a state in the span of 𝑉 to a state in the span of 𝑉.
7. After applying 𝖢𝖮′𝐻 ∘ 𝖢𝖮′𝐹 ∘ 𝖢𝖮′𝐻, the SEQ oracle applies 𝖣𝖾𝖼𝗈𝗆𝗉 again. We can apply Lemma A.3 again to show that
‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 ∘ 𝖢𝖮′𝐻 ∘ 𝖢𝖮′𝐹 ∘ 𝖢𝖮′𝐻 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐬𝑉‖ ≥ ‖𝐬𝑉‖ − 𝗇𝖾𝗀𝗅′′(𝜆)
Combining this with the component 𝐬𝑉̄, on which the oracle acts as the identity and which remains in the span of 𝐸, the state |𝜓′⟩ at the end of the query is negligibly close to a state in the span of 𝐸. This shows that after any polynomial number of queries to the SEQ oracle, the state of the system |𝜓⟩ satisfies ‖𝐸 · |𝜓⟩‖ ≥ 1 − 𝗇𝖾𝗀𝗅(𝜆).
Lemma A.3. For any state |𝜓⟩ on 𝒜 × 𝒬 × 𝒪, such that 𝑉 · |𝜓⟩ = |𝜓⟩, ‖𝐸 · |𝜓⟩ ‖ − 𝗇𝖾𝗀𝗅(𝜆) ≤ ‖𝐸 ·
𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐸 · |𝜓⟩ ‖.
Proof.
1. A generic state over 𝒜 × 𝒬 × 𝒪 such that 𝑉 · |𝜓⟩ = |𝜓⟩ can be written as follows:
|𝜓⟩ = Σ_{𝑎,𝑥,𝑢,𝑏} 𝛼𝑎,𝑥,𝑢,𝑏 · |𝑎⟩𝒜 ⊗ |𝑥, 𝑢, 𝑏⟩𝒬 ⊗ |Ψ𝑎,𝑥,𝑢,𝑏⟩𝒪
where 1 = Σ_{𝑎,𝑥,𝑢,𝑏} |𝛼𝑎,𝑥,𝑢,𝑏|², and for each (𝑎, 𝑥, 𝑢, 𝑏), 𝑉𝑥 · |Ψ𝑎,𝑥,𝑢,𝑏⟩ = |Ψ𝑎,𝑥,𝑢,𝑏⟩. This means the 𝒟𝐻 register of |Ψ𝑎,𝑥,𝑢,𝑏⟩ contains either ∅ or (𝑥, 𝑟) for some 𝑟 ∈ ℛ.
Next,
𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐸 · |𝜓⟩ = Σ_{𝑎,𝑥,𝑢,𝑏} 𝛼𝑎,𝑥,𝑢,𝑏 · |𝑎⟩𝒜 ⊗ |𝑥, 𝑢, 𝑏⟩𝒬 ⊗ (𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ𝑎,𝑥,𝑢,𝑏⟩𝒪)
‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐸 · |𝜓⟩‖₂² = Σ_{𝑎,𝑥,𝑢,𝑏} |𝛼𝑎,𝑥,𝑢,𝑏|² · ‖𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ𝑎,𝑥,𝑢,𝑏⟩𝒪‖₂²
Additionally,
𝐸 · |𝜓⟩ = Σ_{𝑎,𝑥,𝑢,𝑏} 𝛼𝑎,𝑥,𝑢,𝑏 · |𝑎⟩𝒜 ⊗ |𝑥, 𝑢, 𝑏⟩𝒬 ⊗ (𝐸𝒪 · |Ψ𝑎,𝑥,𝑢,𝑏⟩𝒪)
‖𝐸 · |𝜓⟩‖₂² = Σ_{𝑎,𝑥,𝑢,𝑏} |𝛼𝑎,𝑥,𝑢,𝑏|² · ‖𝐸𝒪 · |Ψ𝑎,𝑥,𝑢,𝑏⟩𝒪‖₂²
It suffices to prove that for any 𝑥 ∈ 𝒳 , and any state |Ψ⟩ on register 𝒪 such that 𝑉𝑥 ·|Ψ⟩ = |Ψ⟩,
‖𝐸𝒪 · |Ψ⟩𝒪 ‖2 − 𝗇𝖾𝗀𝗅(𝜆) ≤ ‖𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪 ‖2
because that would imply that
‖𝐸 · |𝜓⟩‖₂² − 𝗇𝖾𝗀𝗅(𝜆) = Σ_{𝑎,𝑥,𝑢,𝑏} |𝛼𝑎,𝑥,𝑢,𝑏|² · (‖𝐸𝒪 · |Ψ𝑎,𝑥,𝑢,𝑏⟩𝒪‖₂² − 𝗇𝖾𝗀𝗅(𝜆)) ≤ ‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐸 · |𝜓⟩‖₂²
‖𝐸 · |𝜓⟩‖₂ − 𝗇𝖾𝗀𝗅′(𝜆) ≤ ‖𝐸 · 𝖣𝖾𝖼𝗈𝗆𝗉 · 𝐸 · |𝜓⟩‖₂
which is what we wanted to prove. From now on, we will focus on proving that for any 𝑥 ∈ 𝒳 and any state |Ψ⟩ on register 𝒪 such that 𝑉𝑥 · |Ψ⟩ = |Ψ⟩,
‖𝐸𝒪 · |Ψ⟩𝒪‖₂ − 𝗇𝖾𝗀𝗅(𝜆) ≤ ‖𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪‖₂
2. Let
𝐸𝒪 · |Ψ⟩𝒪 = 𝑣∅ · |∅⟩𝒟𝐻 ⊗ |𝐹∅⟩ℱ + Σ_{(𝑟,𝑦)∈ℛ×𝒴} 𝑣𝑟,𝑦 · |(𝑥, 𝑟)⟩𝒟𝐻 ⊗ |𝐹𝑥,𝑟,𝑦⟩ℱ
We used the fact that for any (𝑥, 𝑟), the sets (ℱ𝑥,𝑟,𝑦)𝑦∈𝒴 partition ℱ.
Therefore, we only need to focus on the remaining terms.
3. By Lemma A.4, writing 𝐸̄𝒪 := 𝕀 − 𝐸𝒪,
𝐸̄𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩ = 𝐸̄𝒪 · Σ_{(𝑟,𝑦)∈ℛ×𝒴} 𝑣𝑟,𝑦 · (𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · |(𝑥, 𝑟)⟩𝒟𝐻) ⊗ |𝐹𝑥,𝑟,𝑦⟩ℱ
= Σ_{(𝑟,𝑦)∈ℛ×𝒴} 𝑣𝑟,𝑦 · (1 − 1/|ℛ|) · 𝐸̄𝒪 · |(𝑥, 𝑟)⟩ ⊗ |𝐹𝑥,𝑟,𝑦⟩ℱ
+ Σ_{(𝑟,𝑦)∈ℛ×𝒴} 𝑣𝑟,𝑦 · (−1/|ℛ|) · Σ_{𝑟′∈ℛ∖{𝑟}} 𝐸̄𝒪 · |(𝑥, 𝑟′)⟩ ⊗ |𝐹𝑥,𝑟,𝑦⟩ℱ
+ Σ_{(𝑟,𝑦)∈ℛ×𝒴} 𝑣𝑟,𝑦 · (1/√|ℛ|) · 𝐸̄𝒪 · |∅⟩ ⊗ |𝐹𝑥,𝑟,𝑦⟩ℱ
Collecting terms by the contents of the 𝒟𝐻 register,
𝐸̄𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩
= (1/√|ℛ|) · Σ_{𝑟′∈ℛ} |(𝑥, 𝑟′)⟩ ⊗ (𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · (Σ_{(𝑟,𝑦)∈(ℛ∖{𝑟′})×𝒴} (−𝑣𝑟,𝑦/√|ℛ|) · |𝐹𝑥,𝑟,𝑦⟩ℱ)
+ |∅⟩ ⊗ (𝕀 − |𝐹∅⟩⟨𝐹∅|) · (Σ_{(𝑟,𝑦)∈ℛ×𝒴} (𝑣𝑟,𝑦/√|ℛ|) · |𝐹𝑥,𝑟,𝑦⟩ℱ)
4. Let 𝑀𝑥 be a matrix whose columns are ((1/√|ℛ|) · |𝐹𝑥,𝑟,𝑦⟩)_{(𝑟,𝑦)∈ℛ×𝒴}, and let 𝐯 be the column vector whose (𝑟, 𝑦)-th entry is 𝑣𝑟,𝑦.
Also, for a given 𝑟′ ∈ ℛ, let 𝐯𝑟′ be the same as 𝐯 except that for every 𝑦 ∈ 𝒴, the (𝑟′, 𝑦)-th entry is set to 0. Note that ‖𝐯𝑟′‖ ≤ ‖𝐯‖ ≤ 1.
Then,
𝐸̄𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩ = (1/√|ℛ|) · Σ_{𝑟′∈ℛ} |(𝑥, 𝑟′)⟩ ⊗ (𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · (−𝑀𝑥 · 𝐯𝑟′) + |∅⟩ ⊗ (𝕀 − |𝐹∅⟩⟨𝐹∅|) · (𝑀𝑥 · 𝐯)
5. We know that
‖(𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · 𝑀𝑥‖₂ ≤ ‖(𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀𝑥‖₂ ≤ 1/√|ℛ|
Therefore,
‖𝐸̄𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩‖₂² ≤ (1/|ℛ|) · Σ_{𝑟′∈ℛ} (1/|ℛ|) · ‖𝐯𝑟′‖₂² + (1/|ℛ|) · ‖𝐯‖₂² ≤ (1/|ℛ|) · (‖𝐯‖₂² + ‖𝐯‖₂²) ≤ 2/|ℛ|
‖𝐸̄𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩‖₂ ≤ √(2/|ℛ|) = 𝗇𝖾𝗀𝗅(𝜆)
6. Finally, since 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 is a unitary and therefore preserves norms,
‖𝐸𝒪 · |Ψ⟩𝒪‖₂² = ‖𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪‖₂² = ‖𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪‖₂² + ‖𝐸̄𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪‖₂² ≤ ‖𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪‖₂² + 2/|ℛ|
Then,
‖𝐸𝒪 · |Ψ⟩𝒪‖₂² − 2/|ℛ| ≤ ‖𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪‖₂²
‖𝐸𝒪 · |Ψ⟩𝒪‖₂ − 𝗇𝖾𝗀𝗅(𝜆) ≤ ‖𝐸𝒪 · 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · 𝐸𝒪 · |Ψ⟩𝒪‖₂
Lemma A.4. 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · |(𝑥, 𝑟)⟩𝒟𝐻 = (1 − 1/|ℛ|) · |(𝑥, 𝑟)⟩ − (1/|ℛ|) · Σ_{𝑟′∈ℛ∖{𝑟}} |(𝑥, 𝑟′)⟩ + (1/√|ℛ|) · |∅⟩
Proof. Let
𝐯𝟎 = (1/|ℛ|) · Σ_{𝑟′∈ℛ} |(𝑥, 𝑟′)⟩
𝐯𝟏 = |(𝑥, 𝑟)⟩ − (1/|ℛ|) · Σ_{𝑟′∈ℛ} |(𝑥, 𝑟′)⟩
so that |(𝑥, 𝑟)⟩ = 𝐯𝟎 + 𝐯𝟏. 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 maps the uniform component 𝐯𝟎 to (1/√|ℛ|) · |∅⟩ and acts as the identity on the orthogonal component 𝐯𝟏. Then
𝖣𝖾𝖼𝗈𝗆𝗉𝑥 · |(𝑥, 𝑟)⟩ = (1/√|ℛ|) · |∅⟩ + 𝐯𝟏
= |(𝑥, 𝑟)⟩ − (1/|ℛ|) · Σ_{𝑟′∈ℛ} |(𝑥, 𝑟′)⟩ + (1/√|ℛ|) · |∅⟩
= (1 − 1/|ℛ|) · |(𝑥, 𝑟)⟩ − (1/|ℛ|) · Σ_{𝑟′∈ℛ∖{𝑟}} |(𝑥, 𝑟′)⟩ + (1/√|ℛ|) · |∅⟩
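Lemma A.4 can be checked numerically: on the span of |∅⟩ and the states |(𝑥, 𝑟)⟩, 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 is the involution that swaps |∅⟩ with the uniform superposition. A small numpy sketch (the dimension below is an illustrative assumption):

```python
import numpy as np

R = 5                       # |R|; index 0 encodes the empty database, 1..R encode (x, r)
dim = R + 1
empty = np.zeros(dim); empty[0] = 1.0
uniform = np.zeros(dim); uniform[1:] = 1.0 / np.sqrt(R)  # uniform superposition over (x, r')

# Decomp_x: the involution swapping |empty> with the uniform superposition,
# acting as the identity on everything orthogonal to both.
D = (np.eye(dim)
     - np.outer(empty, empty) - np.outer(uniform, uniform)
     + np.outer(empty, uniform) + np.outer(uniform, empty))

for r in range(1, R + 1):
    ket_r = np.zeros(dim); ket_r[r] = 1.0
    # Right-hand side claimed by Lemma A.4:
    expected = (1 - 1 / R) * ket_r + (1 / np.sqrt(R)) * empty
    for rp in range(1, R + 1):
        if rp != r:
            ket_rp = np.zeros(dim); ket_rp[rp] = 1.0
            expected -= (1 / R) * ket_rp
    assert np.allclose(D @ ket_r, expected)

assert np.allclose(D @ D, np.eye(dim))  # Decomp_x is an involution, hence unitary
```

The final assertion also confirms the fact used in step 6 of the proof of Lemma A.3: 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 is unitary and so preserves norms.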
Lemma A.5. For any 𝑟′ ∈ ℛ and any matrix 𝑀 acting on the ℱ register,
‖(𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · 𝑀‖₂ ≤ ‖(𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀‖₂
Proof.
1. Note that 𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′| is a projector because the states (|𝐹𝑥,𝑟′,𝑦′⟩)_{𝑦′∈𝒴} are mutually orthogonal. Therefore,
‖𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|‖₂ ≤ 1
2. |𝐹∅⟩ is annihilated by 𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|:
(𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · |𝐹∅⟩ = |𝐹∅⟩ − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩ · ⟨𝐹𝑥,𝑟′,𝑦′|𝐹∅⟩
= |𝐹∅⟩ − Σ_{𝑦′∈𝒴} Σ_{𝑓∈ℱ𝑥,𝑟′,𝑦′} |𝑓⟩ · (1/√|ℱ𝑥,𝑟′,𝑦′|) · (|ℱ𝑥,𝑟′,𝑦′| / √(|ℱ𝑥,𝑟′,𝑦′| · |ℱ|))
= |𝐹∅⟩ − Σ_{𝑦′∈𝒴} Σ_{𝑓∈ℱ𝑥,𝑟′,𝑦′} |𝑓⟩ · (1/√|ℱ|)
= |𝐹∅⟩ − Σ_{𝑓∈ℱ} |𝑓⟩ · (1/√|ℱ|) = |𝐹∅⟩ − |𝐹∅⟩ = 𝟎
In the last step we used the fact that the sets (ℱ𝑥,𝑟′,𝑦′)𝑦′∈𝒴 partition ℱ.
3. Next,
(𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · (𝕀 − |𝐹∅⟩⟨𝐹∅|)
= (𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) − (𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · |𝐹∅⟩⟨𝐹∅|
= 𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|
4. Finally,
‖(𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · 𝑀‖₂ = ‖(𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|) · (𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀‖₂
≤ ‖𝕀 − Σ_{𝑦′∈𝒴} |𝐹𝑥,𝑟′,𝑦′⟩⟨𝐹𝑥,𝑟′,𝑦′|‖₂ · ‖(𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀‖₂
≤ ‖(𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀‖₂
Lemma A.6. Let 𝑀𝑥 be a matrix whose columns are ((1/√|ℛ|) · |𝐹𝑥,𝑟,𝑦⟩)_{(𝑟,𝑦)∈ℛ×𝒴}. Then
‖(𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀𝑥‖₂ ≤ 1/√|ℛ|
Proof.
1. The ((𝑟, 𝑦), (𝑟′, 𝑦′))-th entry of 𝑀𝑥† · 𝑀𝑥 is
(𝑀𝑥† · 𝑀𝑥)(𝑟,𝑦),(𝑟′,𝑦′) = (1/|ℛ|) · ⟨𝐹𝑥,𝑟,𝑦|𝐹𝑥,𝑟′,𝑦′⟩
= 1/|ℛ| if (𝑟, 𝑦) = (𝑟′, 𝑦′);  0 if 𝑟 = 𝑟′ ∧ 𝑦 ≠ 𝑦′;  (1/|ℛ|) · √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦]) · √(Pr𝑓[𝑓(𝑥, 𝑟′) = 𝑦′]) if 𝑟 ≠ 𝑟′
by Lemma A.7.
2. Let us write 𝑀𝑥† · 𝑀𝑥 as a linear combination of PSD matrices. First, let |Φ0⟩ be a column vector such that the (𝑟, 𝑦)-th entry is √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] / |ℛ|). Additionally, let |Φ0,𝑟⟩ be a column vector such that the (𝑟′, 𝑦)-th entry is 0 if 𝑟 ≠ 𝑟′, and √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦]) if 𝑟 = 𝑟′.
Note that these vectors have unit norm:
‖|Φ0⟩‖₂² = Σ_{𝑟,𝑦} Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] / |ℛ| = Σ_𝑟 (1/|ℛ|) · Σ_𝑦 Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] = Σ_𝑟 1/|ℛ| = 1
‖|Φ0,𝑟⟩‖₂² = Σ_{𝑦∈𝒴} Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] = 1
Second, let 𝑁𝑥 = Σ_{𝑟∈ℛ} (1/|ℛ|) · |Φ0,𝑟⟩⟨Φ0,𝑟|.
Therefore, 𝑀𝑥† · 𝑀𝑥 = (1/|ℛ|) · 𝕀 + |Φ0⟩⟨Φ0| − 𝑁𝑥.
3. For any |Φ1⟩ of unit norm that is orthogonal to |Φ0⟩,
‖𝑀𝑥 · |Φ1⟩‖₂² = ⟨Φ1| 𝑀𝑥† · 𝑀𝑥 |Φ1⟩ = 1/|ℛ| + |⟨Φ0|Φ1⟩|² − ⟨Φ1| 𝑁𝑥 |Φ1⟩ ≤ 1/|ℛ|
since 𝑁𝑥 is PSD and ⟨Φ0|Φ1⟩ = 0.
4. Next,
𝑀𝑥 · |Φ0⟩ = Σ_{(𝑟,𝑦)∈ℛ×𝒴} (√(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦]) / |ℛ|) · |𝐹𝑥,𝑟,𝑦⟩ = Σ_{𝑟∈ℛ} (1/|ℛ|) · Σ_{𝑦∈𝒴} (1/√|ℱ|) · Σ_{𝑓∈ℱ𝑥,𝑟,𝑦} |𝑓⟩ = |𝐹∅⟩
We used the fact that for any (𝑥, 𝑟), the sets (ℱ𝑥,𝑟,𝑦)𝑦∈𝒴 partition ℱ.
5. Any vector |Φ⟩ of unit norm can be written as |Φ⟩ = 𝛼 · |Φ0⟩ + 𝛽 · |Φ1⟩ for some vector |Φ1⟩ of unit norm that is orthogonal to |Φ0⟩ and for some 𝛼, 𝛽 for which |𝛼|² + |𝛽|² = 1. Next,
(𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀𝑥 · |Φ⟩ = 𝛽 · (𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀𝑥 · |Φ1⟩, so
‖(𝕀 − |𝐹∅⟩⟨𝐹∅|) · 𝑀𝑥‖₂ ≤ 1/√|ℛ|
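Lemma A.6's spectral-norm bound can be verified numerically for the family of all functions on a toy domain (the set sizes below are illustrative assumptions):

```python
import numpy as np
from itertools import product

# Toy instantiation: the family of ALL functions (X x R) -> Y.
X, Rs, Ys = [0], [0, 1], [0, 1]
points = list(product(X, Rs))                   # the (x, r) pairs
family = list(product(Ys, repeat=len(points)))  # each f as a truth table
F = len(family)

def F_state(idx=None, y=None):
    """|F_empty> when idx is None, else |F_{x,r,y}> for the point with index idx."""
    v = np.array([1.0 if idx is None or f[idx] == y else 0.0 for f in family])
    return v / np.linalg.norm(v)

F_empty = F_state()
x = 0
R = len(Rs)
# Columns of M_x are (1/sqrt(|R|)) |F_{x,r,y}> for (r, y) in R x Y.
M_x = np.column_stack([F_state(points.index((x, r)), y) / np.sqrt(R)
                       for r, y in product(Rs, Ys)])

P = np.eye(F) - np.outer(F_empty, F_empty)      # I - |F_empty><F_empty|
spec_norm = np.linalg.norm(P @ M_x, 2)          # largest singular value
assert spec_norm <= 1 / np.sqrt(R) + 1e-9       # Lemma A.6 bound
```

In this toy case the bound is in fact tight: the witness |Φ1⟩ with entries (√½, −√½, 0, 0) achieves norm exactly 1/√|ℛ|.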
Lemma A.7. The states |𝐹∅⟩ and |𝐹𝑥,𝑟,𝑦⟩ satisfy:
• ⟨𝐹∅|𝐹𝑥,𝑟,𝑦⟩ = √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦]).
• If (𝑥, 𝑟) = (𝑥′, 𝑟′) but 𝑦 ≠ 𝑦′, then ⟨𝐹𝑥,𝑟,𝑦|𝐹𝑥′,𝑟′,𝑦′⟩ = 0.
• If (𝑥, 𝑟) ≠ (𝑥′, 𝑟′), then ⟨𝐹𝑥,𝑟,𝑦|𝐹𝑥′,𝑟′,𝑦′⟩ = √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] · Pr𝑓[𝑓(𝑥′, 𝑟′) = 𝑦′]).
Proof. First,
⟨𝐹∅|𝐹𝑥,𝑟,𝑦⟩ = ((1/√|ℱ|) · Σ_{𝑓∈ℱ} ⟨𝑓|) · ((1/√|ℱ𝑥,𝑟,𝑦|) · Σ_{𝑓′∈ℱ𝑥,𝑟,𝑦} |𝑓′⟩)
= (1/√(|ℱ| · |ℱ𝑥,𝑟,𝑦|)) · Σ_{𝑓∈ℱ𝑥,𝑟,𝑦} ⟨𝑓|𝑓⟩
= |ℱ𝑥,𝑟,𝑦| / √(|ℱ| · |ℱ𝑥,𝑟,𝑦|) = √(|ℱ𝑥,𝑟,𝑦| / |ℱ|) = √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦])
Next, if (𝑥, 𝑟) = (𝑥′, 𝑟′), but 𝑦 ≠ 𝑦′, then ℱ𝑥,𝑟,𝑦 ∩ ℱ𝑥′,𝑟′,𝑦′ = {}, so ⟨𝐹𝑥,𝑟,𝑦|𝐹𝑥′,𝑟′,𝑦′⟩ = 0.
Finally, if (𝑥, 𝑟) ≠ (𝑥′, 𝑟′), then
⟨𝐹𝑥,𝑟,𝑦|𝐹𝑥′,𝑟′,𝑦′⟩ = ((1/√|ℱ𝑥,𝑟,𝑦|) · Σ_{𝑓∈ℱ𝑥,𝑟,𝑦} ⟨𝑓|) · ((1/√|ℱ𝑥′,𝑟′,𝑦′|) · Σ_{𝑓′∈ℱ𝑥′,𝑟′,𝑦′} |𝑓′⟩)
= (1/√(|ℱ𝑥,𝑟,𝑦| · |ℱ𝑥′,𝑟′,𝑦′|)) · Σ_{𝑓∈ℱ𝑥,𝑟,𝑦∩ℱ𝑥′,𝑟′,𝑦′} ⟨𝑓|𝑓⟩
= |ℱ𝑥,𝑟,𝑦 ∩ ℱ𝑥′,𝑟′,𝑦′| / √(|ℱ𝑥,𝑟,𝑦| · |ℱ𝑥′,𝑟′,𝑦′|)
= |ℱ| · Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦 ∧ 𝑓(𝑥′, 𝑟′) = 𝑦′] / √(|ℱ| · Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] · |ℱ| · Pr𝑓[𝑓(𝑥′, 𝑟′) = 𝑦′])
= Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] · Pr𝑓[𝑓(𝑥′, 𝑟′) = 𝑦′] / √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] · Pr𝑓[𝑓(𝑥′, 𝑟′) = 𝑦′])
= √(Pr𝑓[𝑓(𝑥, 𝑟) = 𝑦] · Pr𝑓[𝑓(𝑥′, 𝑟′) = 𝑦′])
In the second-to-last equality we used pairwise independence to replace the joint probability by the product of the marginals.
B.1 Preliminaries
We review the measure-and-reprogram lemma developed in [DFMS19] and [DFM20]. We adopt the formulation presented in [YZ21, Section 4.2].
Definition B.1 (Reprogramming Oracle). Let 𝒜 be a quantum algorithm that is given quantum oracle access to an oracle 𝒪, where 𝒪 is initialized to compute a classical function 𝑓 : 𝒳 → 𝒴. At some point in an execution of 𝒜𝒪, we say that we reprogram 𝒪 to output 𝑔(𝑥) on 𝑥 ∈ 𝒳 if we update the oracle to compute the function 𝑓𝑥,𝑔 defined by
𝑓𝑥,𝑔(𝑥′) = 𝑔(𝑥′) if 𝑥′ = 𝑥, and 𝑓(𝑥′) otherwise.
This updated oracle is used in the rest of the execution of 𝒜𝒪. We denote the above reprogramming procedure as 𝒪 ← 𝖱𝖾𝗉𝗋𝗈𝗀𝗋𝖺𝗆(𝒪, 𝑥, 𝑔).
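For purely classical queries, Definition B.1's reprogramming is just a wrapper around the old oracle; a minimal sketch (the function names here are our own, not from the definition):

```python
def reprogram(oracle, x, g):
    """Return the oracle f_{x,g}: answer g on input x, the old oracle elsewhere."""
    def f_x_g(query):
        return g(query) if query == x else oracle(query)
    return f_x_g

# Toy usage: start from the identity function, reprogram the point 3.
oracle = lambda v: v
oracle = reprogram(oracle, 3, lambda v: -v)
assert oracle(2) == 2 and oracle(3) == -3
# Reprogrammings compose: a later reprogramming at a new point keeps the old one.
oracle = reprogram(oracle, 5, lambda v: 0)
assert oracle(3) == -3 and oracle(5) == 0
```

In the quantum setting of Definition B.2 the same update is applied to a quantumly-accessible classical oracle, so the wrapper picture only describes the function being computed, not the adversary's superposition access.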
Definition B.2 (Measure-and-Reprogram Algorithm). Let 𝒳, 𝒴, 𝒵 be sets of classical strings and 𝑘 be a positive integer. Let 𝒜 be a 𝑞-quantum-query algorithm that is given quantum oracle access to an oracle that computes a classical function 𝑓 : 𝒳 → 𝒴. The algorithm 𝒜, when given a (possibly quantum) input 𝗂𝗇𝗉𝗎𝗍, outputs 𝐱 ∈ 𝒳^𝑘 and 𝑧 ∈ 𝒵. For a function 𝑔 : 𝒳 → 𝒴, we define a measure-and-reprogram algorithm 𝒜̃[𝑓, 𝑔] as follows:
𝒜̃[𝑓, 𝑔](𝗂𝗇𝗉𝗎𝗍)
1. For each 𝑗 ∈ [𝑘], uniformly pick (𝑖𝑗 , 𝑏𝑗 ) ∈ ([𝑞] × {0, 1}) ∪ {(⊥, ⊥)} such that there does not exist
𝑗 ̸= 𝑗 ′ such that 𝑖𝑗 = 𝑖𝑗 ′ ̸= ⊥.
2. Run 𝒜𝒪 (𝗂𝗇𝗉𝗎𝗍) where the oracle 𝒪 is initialized to be a quantumly-accessible classical oracle that
computes the classical function 𝑓 . When 𝒜 makes its 𝑖th query, the oracle is simulated as follows:
(a) If 𝑖 = 𝑖𝑗 for some 𝑗 ∈ [𝑘], measure 𝒜’s query register to obtain 𝑥′𝑗 , and do either of the following:
i. If 𝑏𝑗 = 0, reprogram 𝒪 ← 𝖱𝖾𝗉𝗋𝗈𝗀𝗋𝖺𝗆(𝒪, 𝑥′𝑗 , 𝑔) and answer 𝒜’s 𝑖th query using the
reprogrammed oracle.
ii. If 𝑏𝑗 = 1, answer 𝒜’s 𝑖th query by using the oracle before the reprogramming and then
reprogram 𝒪 ← 𝖱𝖾𝗉𝗋𝗈𝗀𝗋𝖺𝗆(𝒪, 𝑥′𝑗 , 𝑔).
(b) Otherwise, answer 𝒜's 𝑖th query by just using the oracle 𝒪 without any measurement or reprogramming.
3. Let (𝐱 = (𝑥1 , . . . , 𝑥𝑘 ), 𝑧) be 𝒜’s output.
4. For all 𝑗 ∈ [𝑘] such that 𝑖𝑗 = ⊥, set 𝑥′𝑗 := 𝑥𝑗 .
5. Output (𝐱′ , 𝑧), where 𝐱′ := (𝑥′1 , . . . , 𝑥′𝑘 ).
Lemma B.3. Let 𝒳 , 𝒴, 𝒵, and 𝒜 be as in Definition B.2. Then, for any input 𝗂𝗇𝗉𝗎𝗍, functions 𝑓, 𝑔 : 𝒳 →
𝒴, 𝐱* ∈ 𝒳 𝑘 such that 𝑥*𝑗 ̸= 𝑥*𝑗 ′ for all 𝑗 ̸= 𝑗 ′ , and relation 𝑅 ⊆ 𝒳 𝑘 × 𝒴 𝑘 × 𝒵, we have
Pr[𝐱′ = 𝐱* ∧ (𝐱′, 𝑔(𝐱′), 𝑧) ∈ 𝑅 : (𝐱′, 𝑧) ← 𝒜̃[𝑓, 𝑔](𝗂𝗇𝗉𝗎𝗍)] ≥ (1/(2𝑞 + 1))^{2𝑘} · Pr[𝐱 = 𝐱* ∧ (𝐱, 𝑔(𝐱), 𝑧) ∈ 𝑅 : (𝐱, 𝑧) ← 𝒜^{|𝑓𝐱*,𝑔⟩}(𝗂𝗇𝗉𝗎𝗍)],
where 𝒜̃[𝑓, 𝑔] is the measure-and-reprogram algorithm as defined in Definition B.2, and 𝑓𝐱*,𝑔 is defined as
𝑓𝐱*,𝑔(𝑥′) := 𝑔(𝑥′) if ∃𝑗 ∈ [𝑘] s.t. 𝑥′ = 𝑥*𝑗, and 𝑓(𝑥′) otherwise.
B.2 Security Proof
Theorem B.5. Let ℱ be a function family that is 1-query unlearnable (Definition 4.11) and 2-wise replaceable. Then the scheme in Figure 3 satisfies the one-time security of Definition 4.26.
Proof. We first prove one-time secrecy for what we call a "mini-scheme" where the user’s input is
one bit, i.e. 𝒳 = {0, 1}. We will call this user input 𝑏 ∈ {0, 1} for "bit".
Suppose for contradiction that there exists a (quantum) polynomial time algorithm 𝒜 that takes as input a one-time sampling program 𝒫𝑓 = (|𝐴⟩, 𝒪𝐴, 𝒪𝐴⊥, 𝒪𝑓,𝐴) and breaks one-time secrecy with non-negligible probability. Note that we can consider 𝒜 as having access to an oracle 𝑓𝐴, which computes 𝑓 restricted to valid inputs:
𝑓𝐴(𝑏, 𝑢) = 𝑓(𝑏, 𝐻(𝑢)) if 𝑢 ∈ 𝐴𝑏, and ⊥ otherwise.
We will use 𝒜 to construct a (quantum) polynomial time algorithm ℬ that either (1) breaks the
one-query unlearnability of 𝑓 , or (2) violates the direct product hardness for random subspaces.
ℬ(1𝜆 , 𝒫𝑓 )
1. Sample a random function 𝑔 ← ℱ.
2. Run the measure-and-reprogram algorithm 𝒜̃[𝑔, 𝑓𝐴] as defined in Definition B.2 with 𝑘 = 2.
3. Output the output of 𝒜̃[𝑔, 𝑓𝐴].
Using the shorthand 𝐱 = (𝑥1 , 𝑥2 ) ∈ ({0, 1} × ℛ)2 where 𝑥1 = (𝑏1 , 𝑟1 ) and 𝑥2 = (𝑏2 , 𝑟2 ) and
𝐲 = (𝑦1 , 𝑦2 ) ∈ 𝒴 2 , define the relation 𝑅 ⊆ ({0, 1} × ℛ)2 × 𝒴 2 as
𝑅 = {(𝐱, 𝐲) | 𝑓 (𝑥1 ) = 𝑦1 , 𝑓 (𝑥2 ) = 𝑦2 } .
By Lemma B.3, we have that for all 𝐱* ∈ ({0, 1} × ℛ)² such that 𝑥*1 ≠ 𝑥*2,
Pr_{𝑓←ℱ}[(𝐱, 𝐲) ∈ 𝑅 ∧ 𝐱 = 𝐱* : (𝐱, 𝐲) ← ℬ(1^𝜆, 𝒫𝑓)]
≥ (1/(2𝑞 + 1)⁴) · Pr_{𝑓,𝑔←ℱ}[(𝐱, 𝐲) ∈ 𝑅 ∧ 𝐱 = 𝐱* : (𝐱, 𝐲) ← 𝒜^{|𝑔𝐱*,𝑓⟩}(1^𝜆, |𝐴⟩, 𝒪𝐴, 𝒪𝐴⊥)].
Since ℱ is 2-wise replaceable, the above probability expression is
= (1/(2𝑞 + 1)⁴) · Pr_{𝑓←ℱ}[(𝐱, 𝐲) ∈ 𝑅 ∧ 𝐱 = 𝐱* : (𝐱, 𝐲) ← 𝒜^{|𝑓⟩}(1^𝜆, |𝐴⟩, 𝒪𝐴, 𝒪𝐴⊥)]
= (1/(2𝑞 + 1)⁴) · Pr_{𝑓←ℱ}[(𝐱, 𝐲) ∈ 𝑅 ∧ 𝐱 = 𝐱* : (𝐱, 𝐲) ← 𝒜(1^𝜆, 𝒫𝑓)] ≥ 𝗇𝗈𝗇-𝗇𝖾𝗀𝗅(𝜆),
where the last inequality holds because 𝒜 breaks the one-time security of 𝒫𝑓 , by assumption. Note
that ℬ makes at most 2 queries, both of which are classical, to 𝒪𝑓,𝐴 . Consider the following two
cases:
Case 1. ℬ makes two valid and distinct classical queries 𝑥1 = (𝑏1 , 𝑟1 ), 𝑥2 = (𝑏2 , 𝑟2 ) ∈ {0, 1} × ℛ
to 𝒪𝑓,𝐴 such that 𝒪𝑓,𝐴 (𝑥1 ) ̸= ⊥ and 𝒪𝑓,𝐴 (𝑥2 ) ̸= ⊥. This means that 𝑟1 ∈ 𝐴𝑏1 and 𝑟2 ∈ 𝐴𝑏2 . Since
these are classical queries, they can be recorded, and this immediately gives us an adversary that
breaks the direct product hardness of subspace states (Theorem 3.9).
Case 2. ℬ makes at most one valid classical query 𝑥 = (𝑏, 𝑟) ∈ {0, 1} × ℛ to 𝒪𝑓,𝐴 such that 𝒪𝑓,𝐴(𝑥) ≠ ⊥. This is impossible because it would break the 1-query unlearnability of ℱ.
Remark B.6. We conjecture that the above proof extends to function families beyond uniformly random functions (which are shown to be 𝑘-wise replaceable in [YZ21]). We leave it to future work to characterize the 𝑘-wise replaceability property.
We note that this only describes the state in between queries to 𝑥. A second query to 𝑥 will
apply 𝖣𝖾𝖼𝗈𝗆𝗉𝑥 to the database register before determining the response, which maps the database
register back to |𝐷 ∪ (𝑥, 𝑦)⟩. Thus, despite the support of the state on other databases, 𝐺(𝑥) cannot
actually change in between queries to 𝑥. However, this form is relevant when looking at the
database register without knowledge of 𝑥.
Lemma C.1 (Compressed Oracle Chaining). Let 𝐺 : 𝒳 → 𝒴 and 𝐻 : 𝒴 → 𝒵 be random oracles
implemented by the compressed oracle technique. Consider running an interaction of an oracle algorithm
with their composition 𝐻 ∘ 𝐺 until query 𝑡, then measuring the internal state of 𝐺 and 𝐻 to obtain 𝐷𝐺 and
𝐷𝐻 .
Let 𝐸𝑡 be the event that after the measurement at time 𝑡, for all (𝑦, 𝑧) ∈ 𝐷𝐻 , there exists an 𝑥 ∈ 𝒳 such
that (𝑥, 𝑦) ∈ 𝐷𝐺. Then
Pr[𝐸𝑡] ≥ 1 − 4𝑡² · (2/|𝒴| − 1/|𝒴|²)
Proof. We first explicitly describe how 𝐻 ∘ 𝐺 works. The internal states of 𝐻 and 𝐺 are stored
in registers 𝒟𝐻 and 𝒟𝐺 , respectively. In general, we can consider a basis query |𝑥, 𝑢⟩𝒬 in register
𝒬 = 𝒬𝒳 , 𝒬𝒵 . For convenience, we insert into 𝒬 an additional work register 𝒬𝒴 which is initialized
to |0⟩, resulting in |𝑥, 0, 𝑢⟩𝒬 . To compute 𝐻 ∘ 𝐺 on this query, first query (𝒬𝒳 , 𝒬𝒴 ) to 𝐺 to obtain
|𝑥, 𝐺(𝑥), 𝑢⟩𝒬 , i.e. apply 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 ∘ 𝖢𝖮′𝐺 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 on (𝒬𝒳 , 𝒬𝒴 , 𝒟𝐺 ).15 Then query (𝒬𝒴 , 𝒬𝒵 ) to 𝐻
to obtain |𝑥, 𝐺(𝑥), 𝑢 ⊕ 𝐻(𝐺(𝑥))⟩. Finally, query (𝒬𝒳 , 𝒬𝒴 ) to 𝐺 again to obtain |𝑥, 0, 𝑢 ⊕ 𝐻(𝐺(𝑥))⟩
and return registers (𝒬𝒳 , 𝒬𝒵 ).
Observe that 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 commutes with the query to 𝐻, since they operate on disjoint registers (𝒬𝒳, 𝒟𝐺) and (𝒬𝒴, 𝒬𝒵, 𝒟𝐻), respectively. Furthermore, 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 = 𝐼. Thus, if we
write the operation of 𝐻 as 𝑈𝐻, we may write the implementation of a query to 𝐻 ∘ 𝐺 as
𝑈𝐻∘𝐺 = 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 ∘ 𝖢𝖮′𝐺 ∘ 𝑈𝐻 ∘ 𝖢𝖮′𝐺 ∘ 𝖣𝖾𝖼𝗈𝗆𝗉𝐺
where 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 and 𝖢𝖮′𝐺 act on registers (𝒬𝒳, 𝒬𝒴, 𝒟𝐺), while 𝑈𝐻 acts on registers (𝒬𝒴, 𝒬𝒵, 𝒟𝐻).
Now consider the interaction of the algorithm with 𝐻 ∘ 𝐺, where the algorithm maintains an additional internal register 𝒜. Define the projector 𝐸 onto states |𝑎⟩𝒜 ⊗ |𝑥, 0, 𝑢⟩𝒬 ⊗ |𝐷𝐺, 𝐷𝐻⟩𝒟 where 𝐷𝐺 and 𝐷𝐻 satisfy the requirements of event 𝐸𝑡, and define 𝐸̄ = 𝐼 − 𝐸. We will upper bound the norm ‖𝐸̄ · 𝑈𝐻∘𝐺 |𝜓⟩‖ from after the query in terms of the norm ‖𝐸̄ · |𝜓⟩‖ from before the query. To do this, we will individually bound the norm after each step in terms of the norm before that step, e.g. bound ‖𝐸̄ · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 |𝜓′⟩‖ in terms of the norm ‖𝐸̄ · |𝜓′⟩‖.
We first bound the intermediate operations, in between the two 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 operations.
Claim C.2.
‖𝐸̄ · (𝖢𝖮′𝐺) · (𝑈𝐻) · (𝖢𝖮′𝐺 · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺) |𝜓⟩‖ = ‖𝐸̄ · (𝖣𝖾𝖼𝗈𝗆𝗉𝐺) |𝜓⟩‖
𝒟𝐺 , applying 𝐻 always results in a valid database state with respect to the event 𝐸. In particular,
if 𝐷𝐻 and 𝐷𝐺 already satisfied 𝐸 before the query to 𝐻, they continue to do so afterwards. Thus
‖𝐸̄ · (𝑈𝐻) · (𝖢𝖮′𝐺 · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺) |𝜓⟩‖ = ‖𝐸̄ · (𝖢𝖮′𝐺 · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺) |𝜓⟩‖
Putting this together with the prior bound on the effect of 𝖢𝖮′𝐺 yields the claim.
Next, we bound the effect of 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 on a general state |𝜓 ′ ⟩. In general, both the first and the
last 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 operation will operate on a state of the form
Σ_{𝑎,𝑥,𝑢,𝐷𝐺,𝐷𝐻} 𝛼𝑎,𝑥,𝑢,𝐷𝐺,𝐷𝐻 · |𝑎⟩𝒜 ⊗ |𝑥, 0, 𝑢⟩𝒬 ⊗ |𝐷𝐺, 𝐷𝐻⟩𝒟
where 𝒜 is the adversary’s internal register, 𝒬 contains the (expanded) query, and 𝒟 contains the
oracle databases. This is clearly true for the first application; it holds for the second because 𝖢𝖮′𝐺
is applied twice and 𝑈𝐻 does not modify the registers that 𝖢𝖮′𝐺 acts on.16
Claim C.3.
‖𝐸̄ · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 |𝜓′⟩‖ ≤ ‖𝐸̄ · |𝜓′⟩‖ + √(2/|𝒴| − 1/|𝒴|²)
• 𝐸𝑌 1,𝑍 projects onto states where (1) 𝐷𝐺 and 𝐷𝐻 satisfy 𝐸, (2) 𝐷𝐺 (𝑥) = 𝑦 for some 𝑦, (3)
𝐷𝐻 (𝑦) = 𝑧 for some 𝑧, and (4) 𝑥 is the unique preimage of 𝑦 under 𝐺, i.e. 𝐷𝐺 (𝑥′ ) ̸= 𝑦 for all
𝑥 ̸= 𝑥′ .
• 𝐸𝑌 +,𝑍 projects onto states where (1) 𝐷𝐺 and 𝐷𝐻 satisfy 𝐸, (2) 𝐷𝐺 (𝑥) = 𝑦 for some 𝑦, (3)
𝐷𝐻 (𝑦) = 𝑧 for some 𝑧, and (4) there exists at least one 𝑥′ ̸= 𝑥 such that 𝐷𝐺 (𝑥′ ) = 𝑦.
• 𝐸𝑌,⊥ projects onto states where (1) 𝐷𝐺 and 𝐷𝐻 satisfy 𝐸, (2) 𝐷𝐺 (𝑥) = 𝑦 for some 𝑦, and (3)
𝐷𝐻 (𝑦) = ⊥.
• 𝐸⊥ projects onto states where (1) 𝐷𝐺 and 𝐷𝐻 satisfy 𝐸, (2) 𝐷𝐺 (𝑥) = ⊥.
Observe that 𝐸 = 𝐸𝑌1,𝑍 + 𝐸𝑌+,𝑍 + 𝐸𝑌,⊥ + 𝐸⊥, so 𝐼 = 𝐸̄ + 𝐸𝑌1,𝑍 + 𝐸𝑌+,𝑍 + 𝐸𝑌,⊥ + 𝐸⊥. Furthermore, 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 only modifies register 𝒟𝐺, where it maps 𝐷𝐺 to 𝐷𝐺′, with the only potential difference being that 𝐷𝐺′(𝑥) ≠ 𝐷𝐺(𝑥). In the case where 𝑥 is not part of an (𝑥, 𝑦) ∈ 𝐷𝐺 and (𝑦, 𝑧) ∈ 𝐷𝐻 pair mandated by 𝐸, this modification does not affect the containment of the state in the space 𝐸. Thus
‖𝐸̄ · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 · 𝐸𝑌,⊥ |𝜓′⟩‖ = ‖𝐸̄ · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 · 𝐸⊥ |𝜓′⟩‖ = 0
Similarly, because there is a "backup" preimage 𝐷𝐺(𝑥′) = 𝑦 for 𝑦 in the case 𝐸𝑌+,𝑍,
‖𝐸̄ · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 · 𝐸𝑌+,𝑍 |𝜓′⟩‖ = 0
For basis vectors in the support of 𝐸𝑌1,𝑍, we can write 𝐷𝐺 = 𝐷𝐺′ ∪ (𝑥, 𝑦) and 𝐷𝐻 = 𝐷𝐻′ ∪ (𝑦, 𝑧), where 𝐷𝐺′(𝑥) = ⊥ and 𝐷𝐻′(𝑦) = ⊥. 𝐷𝐺′ and 𝐷𝐻′ represent 𝐷𝐺 and 𝐷𝐻 with 𝑥 and 𝑦 removed, respectively. Furthermore, there does not exist an 𝑥′ ∈ 𝒳 such that 𝐷𝐺′(𝑥′) = 𝑦, since
16 We note that 𝖢𝖮′𝐺 and 𝑈𝐻 do not commute, despite this, since 𝑈𝐻 performs certain actions controlled on the registers that 𝖢𝖮′𝐺 acts on.
𝑥 is the unique preimage of 𝑦 under 𝐺. Recall from Appendix C.1 that the effect of 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 on |𝑥, 0, 𝐷 ∪ (𝑥, 𝑦)⟩ is
|𝑥, 0⟩ ⊗ ((1 − 1/|𝒴|) · |𝐷 ∪ (𝑥, 𝑦)⟩ − (1/|𝒴|) · Σ_{𝑦′≠𝑦} |𝐷 ∪ (𝑥, 𝑦′)⟩ + (1/√|𝒴|) · |𝐷⟩)
Since 𝐷𝐻(𝑦) ≠ ⊥, the projection 𝐸̄ · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 · |𝑎⟩𝒜 ⊗ |𝑥, 0, 𝑢⟩𝒬 ⊗ |𝐷𝐺′ ∪ (𝑥, 𝑦), 𝐷𝐻′ ∪ (𝑦, 𝑧)⟩𝒟 is
|𝑎, 𝑥, 0, 𝑢, 𝐷𝐻′ ∪ (𝑦, 𝑧)⟩𝒜,𝒬,𝒟𝐻 ⊗ (−(1/|𝒴|) · Σ_{𝑦′≠𝑦} |𝐷𝐺′ ∪ (𝑥, 𝑦′)⟩ + (1/√|𝒴|) · |𝐷𝐺′⟩)𝒟𝐺
We can write 𝐸𝑌1,𝑍 |𝜓′⟩ in general as
𝐸𝑌1,𝑍 |𝜓′⟩ = Σ_{𝑎,𝑥,𝑢,𝑦} Σ_{𝐷𝐺′ ∌ 𝑦, 𝐷𝐻 ∋ 𝑦} 𝛼𝑎,𝑥,𝑢,𝑦,𝐷𝐺′,𝐷𝐻 · |𝑎⟩ ⊗ |𝑥, 0, 𝑢⟩ ⊗ |𝐷𝐺′ ∪ (𝑥, 𝑦), 𝐷𝐻⟩
Putting together Claim C.2 and Claim C.3, the norm after any single query to 𝐻 ∘ 𝐺 is bounded as
‖𝐸̄ · 𝑈𝐻∘𝐺 |𝜓⟩‖ ≤ ‖𝐸̄ · (𝖢𝖮′𝐺) · (𝑈𝐻) · (𝖢𝖮′𝐺 · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺) |𝜓⟩‖ + √(2/|𝒴| − 1/|𝒴|²)
= ‖𝐸̄ · 𝖣𝖾𝖼𝗈𝗆𝗉𝐺 |𝜓⟩‖ + √(2/|𝒴| − 1/|𝒴|²)
≤ ‖𝐸̄ · |𝜓⟩‖ + 2 · √(2/|𝒴| − 1/|𝒴|²)
The norm starts at 0, so after 𝑡 queries to 𝐻 ∘ 𝐺 it is at most 2𝑡 · √(2/|𝒴| − 1/|𝒴|²). The probability of seeing the event corresponding to 𝐸̄ when we measure 𝒟 is the square of the norm, which is at most 4𝑡² · (2/|𝒴| − 1/|𝒴|²). Therefore the probability of seeing the complementary event 𝐸𝑡 is at least 1 − 4𝑡² · (2/|𝒴| − 1/|𝒴|²), as claimed.
Lemma C.5. Let 𝐺 : 𝒳1 × 𝒳2 → 𝒴 be a random function where |𝒳2| < |𝒴|. Consider an oracle algorithm 𝐴 that makes 𝑞 queries to 𝐺, then outputs two vectors of 𝑘 values 𝐱^(1) = (𝑥1^(1), …, 𝑥𝑘^(1)) and 𝐲 = (𝑦1, …, 𝑦𝑘). Let 𝑝 be the probability that for every 𝑖, there exists an 𝑥𝑖^(2) ∈ 𝒳2 such that 𝐺(𝑥𝑖^(1), 𝑥𝑖^(2)) = 𝑦𝑖.
Now consider running the same experiment where 𝐺 is instead implemented as a compressed oracle, and measuring its database register after 𝐴 outputs to obtain 𝐷. Let 𝑝′ be the probability that for every 𝑖, there exists an 𝑥𝑖^(2) ∈ 𝒳2 such that 𝐷(𝑥𝑖^(1)‖𝑥𝑖^(2)) = 𝑦𝑖. If 𝑘 and 𝑞 are 𝗉𝗈𝗅𝗒(𝜆) and |𝒳2|^𝑘/|𝒴| = 𝗇𝖾𝗀𝗅(𝜆), then17
𝑝 ≤ 𝑝′ + 𝗇𝖾𝗀𝗅(𝜆)
17 We remark that the reliance on the number of queries is unlikely to be tight. A tighter bound might be achieved by performing a direct computation of the effects of querying 𝐺 on every 𝑥 ∈ 𝒳 at the end of the experiment.
Proof. Consider the adversary 𝐵 which attempts to find 𝑘 input/output pairs by running 𝐴 to obtain 𝐱^(1) and 𝐲, then guessing a uniform 𝐱^(2) ∈ 𝒳2^𝑘 to construct the vector 𝐱 = (𝑥1^(1)‖𝑥1^(2), …, 𝑥𝑘^(1)‖𝑥𝑘^(2)) and outputting (𝐱, 𝐲). Denote by 𝑝𝐵 the probability that it outputs (𝐱, 𝐲) such that 𝐺(𝑥𝑖^(1), 𝑥𝑖^(2)) = 𝑦𝑖 for all 𝑖. Since 𝐱^(2) is independent of 𝐴, we have 𝑝𝐵 ≥ 𝑝/|𝒳2|^𝑘.
Now consider running 𝐵 with a compressed oracle, then measuring the compressed oracle to obtain a database 𝐷. Note that this is the same distribution over databases as running 𝐴. Denote by 𝑝𝐵′ the probability that it successfully outputs (𝐱, 𝐲) such that (𝑥𝑖^(1)‖𝑥𝑖^(2), 𝑦𝑖) ∈ 𝐷. We may decompose the corresponding event into two mutually exclusive components:
• Let 𝐸𝐵,+′ be the event that (𝑥𝑖^(1)‖𝑥𝑖^(2), 𝑦𝑖) ∈ 𝐷 for all 𝑖 and 𝐷 contains a collision (𝑥0*, 𝑦*) and (𝑥1*, 𝑦*).
• Let 𝐸𝐵,1′ be the event that (𝑥𝑖^(1)‖𝑥𝑖^(2), 𝑦𝑖) ∈ 𝐷 for all 𝑖 and 𝐷 does not contain a collision.
By Lemma 3.4,
√𝑝𝐵 ≤ √𝑝𝐵′ + √(𝑘/|𝒴|)
Since 𝑘 and 𝑞 are polynomial in 𝜆 and |𝒳2|^𝑘/|𝒴| = 𝗇𝖾𝗀𝗅(𝜆), we obtain the desired result by squaring both sides of the inequality.
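The purely classical counting step 𝑝𝐵 ≥ 𝑝/|𝒳2|^𝑘 can be checked exhaustively on a toy domain with a trivial query-free adversary (all sizes and the adversary below are illustrative assumptions):

```python
from itertools import product
from fractions import Fraction

# Toy domain for G : X1 x X2 -> Y, enumerating ALL functions G.
X1, X2, Y = [0, 1], [0, 1], [0, 1, 2, 3]
points = list(product(X1, X2))
all_G = list(product(Y, repeat=len(points)))
k = 1

# A trivial query-free adversary A: always output x1 = 0 and y = 0.
x1_out, y_out = 0, 0

# p: probability over G that SOME x2 completes x1 to a preimage of y.
p = Fraction(sum(any(G[points.index((x1_out, x2))] == y_out for x2 in X2)
                 for G in all_G), len(all_G))

# p_B: B additionally guesses x2 uniformly and needs G(x1, x2) = y exactly.
p_B = Fraction(sum(G[points.index((x1_out, x2))] == y_out
                   for G in all_G for x2 in X2), len(all_G) * len(X2))

assert p_B >= p / Fraction(len(X2)) ** k
```

Here 𝑝 = 1 − (3/4)² = 7/16 while 𝑝𝐵 = 1/4 ≥ (7/16)/2, matching the lemma's use of the guessing reduction; the compressed-oracle half of the argument is quantum and is not modeled by this sketch.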
D Additional Prelims
We give some additional preliminaries in this section.
D.1 NIZK
A NIZK scheme for NP should satisfy the following properties:
Correctness A NIZK proof (𝖲𝖾𝗍𝗎𝗉, 𝖣𝖾𝗅𝖾𝗀𝖺𝗍𝖾, 𝖯𝗋𝗈𝗏𝖾, 𝖵𝖾𝗋𝗂𝖿𝗒) is correct if there exists a negligible function 𝗇𝖾𝗀𝗅(·) such that for all 𝜆 ∈ ℕ, all 𝑥 ∈ 𝐿, and all 𝑤 ∈ ℛ𝐿(𝑥), it holds that
Pr[𝖵𝖾𝗋𝗂𝖿𝗒(𝖢𝖱𝖲, 𝖯𝗋𝗈𝗏𝖾(𝖢𝖱𝖲, 𝗍𝗈𝗄𝖾𝗇, 𝑤, 𝑥), 𝑥) = 1] ≥ 1 − 𝗇𝖾𝗀𝗅(𝜆)
where (𝖢𝖱𝖲, 𝗆𝗌𝗄) ← 𝖲𝖾𝗍𝗎𝗉(1^𝜆) and 𝗍𝗈𝗄𝖾𝗇 ← 𝖣𝖾𝗅𝖾𝗀𝖺𝗍𝖾(𝗆𝗌𝗄).
Computational Soundness A one-time NIZK proof (𝖲𝖾𝗍𝗎𝗉, 𝖣𝖾𝗅𝖾𝗀𝖺𝗍𝖾, 𝖯𝗋𝗈𝗏𝖾, 𝖵𝖾𝗋𝗂𝖿𝗒) is computationally sound if there exists a negligible function 𝗇𝖾𝗀𝗅(·) such that for all unbounded adversaries 𝒜 and all 𝑥 ∉ 𝐿, it holds that:
Pr[𝖵𝖾𝗋𝗂𝖿𝗒(𝖢𝖱𝖲, 𝜋 ← 𝒜(𝖢𝖱𝖲, 𝑥, 𝗍𝗈𝗄𝖾𝗇), 𝑥) = 1] ≤ 𝗇𝖾𝗀𝗅(𝜆)
where (𝖢𝖱𝖲, 𝗆𝗌𝗄) ← 𝖲𝖾𝗍𝗎𝗉(1^𝜆), 𝗍𝗈𝗄𝖾𝗇 ← 𝖣𝖾𝗅𝖾𝗀𝖺𝗍𝖾(𝗆𝗌𝗄).
Computational Zero Knowledge A one-time NIZK proof (𝖲𝖾𝗍𝗎𝗉, 𝖣𝖾𝗅𝖾𝗀𝖺𝗍𝖾, 𝖯𝗋𝗈𝗏𝖾, 𝖵𝖾𝗋𝗂𝖿𝗒) is computationally zero-knowledge if there exists a simulator 𝑆 such that for all non-uniform QPT adversaries with quantum advice {𝜌𝜆}𝜆∈ℕ, all statements 𝑥 ∈ 𝐿, and all witnesses 𝑤 ∈ ℛ𝐿(𝑥), it holds that 𝑆(1^𝜆, 𝑥) ≈𝑐 𝖯𝗋𝗈𝗏𝖾(𝖢𝖱𝖲, 𝗍𝗈𝗄𝖾𝗇, 𝑤, 𝑥), where (𝖢𝖱𝖲, 𝗆𝗌𝗄) ← 𝖲𝖾𝗍𝗎𝗉(1^𝜆) and 𝗍𝗈𝗄𝖾𝗇 ← 𝖣𝖾𝗅𝖾𝗀𝖺𝗍𝖾(𝗆𝗌𝗄).