ABSTRACT

Title of Dissertation: PRACTICAL CRYPTOGRAPHY FOR BLOCKCHAINS: SECURE PROTOCOLS WITH MINIMAL TRUST

Noemi Sara Maria Glaeser
Doctor of Philosophy, 2024

Dissertation Directed by: Professor Jonathan Katz, Department of Computer Science

In 2008, Satoshi Nakamoto introduced Bitcoin, the first digital currency without a trusted authority whose security is maintained by a decentralized blockchain. Since then, a plethora of decentralized applications have been proposed utilizing blockchains as a public bulletin board. This growing popularity has put pressure on the ecosystem to prioritize scalability at the expense of trustlessness and decentralization. This work explores the role cryptography has to play in the blockchain ecosystem to improve performance while maintaining minimal trust and strong security guarantees.

I discuss a new paradigm for scalability, called naysayer proofs, which sits between optimistic and zero-knowledge approaches. Next, I cover two on-chain applications: First, I show how to improve the security of a class of coin mixing protocols by giving a formal security treatment of the construction paradigm and patching the security of an existing, insecure protocol. Second, I show how to construct practical on-chain protocols for a large class of elections and auctions which simultaneously offer fairness and non-interactivity without relying on a trusted third party. Finally, I look to the edges of the blockchain and formalize new design requirements for the problem of backing up high-value but rarely-used secret keys, such as those used to secure the reserves of a cryptocurrency exchange, and develop a protocol which efficiently meets these new challenges.

All of these works will be deployed in practice or have seen interest from practitioners. These examples show that advanced cryptography has the potential to meaningfully nudge the blockchain ecosystem towards increased security and reduced trust.

PRACTICAL CRYPTOGRAPHY FOR BLOCKCHAINS: SECURE PROTOCOLS WITH MINIMAL TRUST

by

Noemi Sara Maria Glaeser

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2024

Advisory Committee:
Professor Jonathan Katz, Chair/Advisor
Dr. Giulio Malavolta, Co-Advisor
Professor Dana Dachman-Soled, Dean’s Representative
Professor Ian Miers
Professor Gabriel Kaptchuk

© Copyright by Noemi Sara Maria Glaeser 2024

Acknowledgments

It takes a village to raise a child, and apparently to raise a Ph.D. as well. My graduate career has been shaped by countless people who have given me the energy, support, and confidence to keep going in the graduate program over the last five-and-a-half years. I hope they know who they are, and I will try to list as many as I can here.

First and foremost, I want to thank my advisors for taking me on as a student. To Jonathan Katz, who took me on with only limited experience in cryptography; and Giulio Malavolta, who agreed to co-advise me as a relative unknown. Even though I was constantly coming and going, Jon and Giulio always answered when I reached out with questions and supported me in applying to internships, traveling to conferences, etc.
Thanks also to Gilles Barthe, my first advisor at MPI-SP, who was patient with me when I was still trying to figure out what sort of security and privacy research I was interested in, helped put me in Giulio’s hands, and continued to be willing to support me even when I was no longer his student. And thank you to Duncan Buell, who, back in 2015, convinced me to study math and computer science.

I am grateful to the National Science Foundation for the Graduate Research Fellowship which helped fund me during most of graduate school, and to the Maryland-Max Planck program for allowing me to travel widely, meet people from all over the world, and collaborate across many institutions. I am also indebted to NTT Research and a16z crypto for amazing internship experiences which had a great effect on shaping my research agenda and led to a large portion of the research in this dissertation.

The wider faculty at UMD, MPI-SWS, MPI-SP, and Bocconi University also left a mark as role models, lunch and coffee companions, and by dispensing occasional doses of wisdom and experience. The same goes for the researchers and visitors at NTT Research and a16z crypto who I had the privilege of coming into contact with during my summer internships. Thank you especially to all of my coauthors, each of whom taught me something new about how to do and present great research. Finally, I am grateful to the other students and young researchers I met at these institutions or through these internships, research visits, and conferences for creating a kind, fun, and supportive research community. In particular, I want to thank Erica Blum, Miranda Christ, Jason Fan, Phillip Gajland (and the rest of the kicker squad at MPI-SP), Doruk Gür, Lillian Huang, Hannah Keller, Hunter Kippen, Lisa Masserova, Vedant Nanda, Michael Rosenberg, István Seres, Noel Warford, and the Bocconi Clowns (Damiano Abram, Pedro Branco, Ruta Jawale, Darya Kaviani, Tamer Mour, and Nikolaj Schwartzbach).

On the personal side, thank you to my family, especially my parents for being my biggest fans and always encouraging and supporting me in my endeavors, and my sister Tullia for her pragmatism and reminding me when things don’t merit my constant worrying. Thanks also to my many flatmates over the last five and a half years who always made my various houses (some seven or eight of them) feel like homes; and especially Sarah, who reminds me to eat and sleep even when I feel very overwhelmed. Thank you to my friends (especially Joelle, Liz, and Noémie; Marie and Marisa; Makana; and Spyral), who remind me I’m a whole person and not just a grad student.

On a technical level, thank you to Alex Block and Doruk Gür for feedback on naysayer proofs for FRI (Section 3.4.2) and Dilithium (Section 3.4.3), respectively, and to Joe Bonneau, Jonathan Katz, Giulio Malavolta, and István Seres for proofreading and comments on larger sections of this dissertation.

Basis of this Dissertation

[GGJ+24] Sanjam Garg, Noemi Glaeser, Abhishek Jain, Michael Lodder, and Hart Montgomery. Hot-cold threshold wallets with proofs of remembrance, 2024. Under submission.

[GMM+22] Noemi Glaeser, Matteo Maffei, Giulio Malavolta, Pedro Moreno-Sanchez, Erkan Tairi, and Sri Aravinda Krishnan Thyagarajan. Foundations of coin mixing services. In Heng Yin, Angelos Stavrou, Cas Cremers, and Elaine Shi, editors, ACM CCS 2022, pages 1259–1273. ACM Press, November 2022.

[GSZB24] Noemi Glaeser, István András Seres, Michael Zhu, and Joseph Bonneau.
Cicada: A framework for private non-interactive on-chain auctions and voting. In Workshop on Cryptographic Tools for Blockchains, 2024. Also under submission.

[SGB24] István András Seres, Noemi Glaeser, and Joseph Bonneau. Short paper: Naysayer proofs. In Jeremy Clark and Elaine Shi, editors, Financial Cryptography and Data Security, 2024.

Other Publications by the Author

[AGRS24] Behzad Abdolmaleki, Noemi Glaeser, Sebastian Ramacher, and Daniel Slamanig. Circuit-succinct universally-composable NIZKs with updatable CRS. In 37th IEEE Computer Security Foundations Symposium, 2024.

[GKMR23] Noemi Glaeser, Dimitris Kolonelos, Giulio Malavolta, and Ahmadreza Rahimi. Efficient registration-based encryption. In Weizhi Meng, Christian Damsgaard Jensen, Cas Cremers, and Engin Kirda, editors, ACM CCS 2023, pages 1065–1079. ACM Press, November 2023.

Table of Contents

Acknowledgments iii
Basis of This Dissertation v
Other Publications by the Author vii
1 Introduction 1
2 Preliminaries 3
  2.1 Notation 3
  2.2 Non-interactive Proof Systems 4
  2.3 Bilinear Pairings 5
  2.4 Shamir Secret Sharing 5
  2.5 Digital Signatures 6
    2.5.1 BLS Signatures 6
  2.6 KZG Polynomial Commitments 7
  2.7 Pedersen Commitments 8
  2.8 Universal Composability (UC) Framework 8
3 Naysayer Proofs 11
  3.1 Related Work 14
  3.2 Model 18
  3.3 Formal Definitions 18
  3.4 Constructions 21
    3.4.1 Merkle Commitments 22
    3.4.2 FRI 23
    3.4.3 Post-quantum Signature Schemes 24
    3.4.4 Verifiable Shuffles 25
    3.4.5 Summary 26
  3.5 Storage Considerations 27
  3.6 Extensions and Future Work 28
4 Cryptocurrency Mixers 29
  4.1 Our Contributions 31
  4.2 Technical Overview 32
  4.3 Related Work 35
  4.4 Additional Preliminaries 36
    4.4.1 Adaptor Signatures 37
    4.4.2 Linear-Only Homomorphic Encryption 39
    4.4.3 One-More Discrete Logarithm Assumption 39
  4.5 The A2L Protocol 40
    4.5.1 Randomizable Puzzles and Homomorphic Encryption 41
  4.6 Counterexamples of A2L 43
    4.6.1 Key Recovery Attack 44
    4.6.2 One-More Signature Attack 45
  4.7 Blind Conditional Signatures 47
  4.8 The A2L+ Protocol 51
  4.8.1 Security Analysis 51
  4.9 UC-Secure Blind Conditional Signatures 57
    4.9.1 Ideal Functionality 57
    4.9.2 The A2LUC Protocol 58
    4.9.3 Security Analysis 60
  4.10 Performance Evaluation 65
    4.10.1 A2L+ 65
    4.10.2 A2LUC 66
5 Fair and Non-interactive On-chain Voting and Auctions 69
  5.1 Related Work 71
    5.1.1 Voting and Auctions 71
    5.1.2 Time-based Cryptography 72
  5.2 Our Contributions 73
  5.3 Additional Preliminaries 74
    5.3.1 Voting and Auction Schemes 74
    5.3.2 Time-Lock Puzzles 76
    5.3.3 Vector Packing 79
  5.4 Time-Locked Voting and Auction Protocols 80
  5.5 The Cicada Framework 82
    5.5.1 Security Proof of Cicada 85
    5.5.2 Ballot/Bid Correctness Proofs 86
    5.5.3 Security Proofs of Sigma Protocols 90
  5.6 Implementation 93
    5.6.1 Parameter Settings 93
    5.6.2 Our Implementation 94
    5.6.3 Deployment Costs 96
  5.7 Extensions 99
    5.7.1 Everlasting Ballot Privacy for HTLP-based Protocols 99
    5.7.2 A Trusted Setup Protocol for the CJSS Scheme 101
6 High-Value Secret-Key Backups 103
  6.1 Novel Design Requirements 105
    6.1.1 Our Contribution 107
  6.2 Technical Overview 108
    6.2.1 Related Work 112
  6.3 Additional Preliminaries 112
    6.3.1 Leftover Hash Lemma 112
    6.3.2 Model 113
  6.4 Hot and Cold Key Shares 114
    6.4.1 Additive Secret Sharing from Additive ElGamal 115
  6.5 Proofs of Remembrance 116
    6.5.1 PoK of (Unencrypted) Key Share 116
    6.5.2 PoK of Encrypted Key Share 117
  6.6 Throback: Hot-Cold Threshold BLS with Proofs of Remembrance and Proactive Refresh 118
    6.6.1 Subprotocols 118
    6.6.2 Security Analysis 124
    6.6.3 Implementation and Evaluation 132
    6.6.4 Trustless Proactive Refresh Using a Bulletin-Board 132
7 Conclusion 137
Bibliography 139

Chapter 1

Introduction

Bitcoin [Nak08] was the first digital currency to successfully implement a fully trustless and decentralized payment system. Underpinning Bitcoin is a blockchain, a distributed append-only ledger to record transactions. In Bitcoin, the blockchain’s consistency is enforced via a “proof-of-work” (PoW) consensus mechanism in which participants solve difficult computational puzzles (hash preimages) to append the newest bundle of transactions (a block) to the chain. Ethereum [But14] introduced programmability via smart contracts, special applications which sit on top of the consensus layer and can maintain state and modify it programmatically. This has led to the emergence of a number of decentralized applications enabling more diverse functionalities.

Unfortunately, as the number of applications and their users has increased, developers have been forced to sacrifice trustlessness and sometimes even security or privacy in favor of performance and scalability. This dissertation uses cryptographic tools to enable blockchain applications which are both practical and secure while staying true to the original blockchain ethos and minimizing trust. All the works discussed were created in collaboration with industry practitioners and have seen interest or deployment in the blockchain ecosystem.

In Chapter 2, I introduce the necessary background and building blocks used throughout this dissertation.

Chapter 3 describes naysayer proofs, a new paradigm for verifying zero-knowledge proofs in the so-called optimistic setting. One notable application of naysayer proofs is for scalability, where it sits between the existing solutions—optimistic rollups and zero-knowledge rollups—and can provide better performance and accessibility for certain parties in rollup ecosystems. Since their publication, naysayer proofs have seen interest in production deployment from at least two startups (that I am aware of) in the blockchain space.

In Chapters 4 and 5, I move to on-chain applications. Chapter 4 analyzes the security of a class of coin mixing services which require minimal functionalities of the underlying blockchain, rendering them highly interoperable. I discuss two gaps in the formal treatment of a previous protocol and attacks which exploit them. To close this gap, I introduce a new primitive called blind conditional signatures (BCS) and use it to prove the security of two new coin mixing protocols. Chapter 5 describes Cicada, a smart contract protocol for realizing the first fair and non-interactive on-chain elections and auctions. This resolves important security and usability hurdles present in previous systems, which are widely used for on-chain governance votes or sales of digital goods such as non-fungible tokens (NFTs). Cicada is accompanied by a Solidity smart contract implementation, making it easily deployable on Ethereum and a large number of “layer 2” (L2) chains. Furthermore, we have reached out to Optimism, one of the largest L2s in the Ethereum ecosystem, to discuss the use of Cicada for their retroactive public goods funding (retroPGF) vote.1

Finally, Chapter 6 moves off-chain and looks at securing the edges of the system: cryptocurrency wallets. I envision a system that allows users to conveniently back up their rarely-used, high-value keys (such as the signing keys of high-balance wallets). This new setting necessitates a novel set of design requirements.
I develop new security definitions that capture them and construct a UC-secure protocol that implements threshold BLS signatures in our new model. The protocol is practically efficient for the envisioned large numbers of custodians: for a 67-out-of-100 threshold configuration, setup takes 170 ms and signing less than 1 ms. This design was created with collaborators from Lit Protocol2 and the Linux Foundation and is in the process of being adopted in production by the former. An Apache 2.0-licensed implementation is also publicly available in Hyperledger Labs3, a popular open source organization.

I conclude in Chapter 7 by discussing potential future directions for these three works and a broader outlook on the role of cryptography in blockchains.

1 https://community.optimism.io/docs/governance/
2 https://www.litprotocol.com/
3 https://github.com/hyperledger-labs

Chapter 2

Preliminaries

Contents
  2.1 Notation 3
  2.2 Non-interactive Proof Systems 4
  2.3 Bilinear Pairings 5
  2.4 Shamir Secret Sharing 5
  2.5 Digital Signatures 6
    2.5.1 BLS Signatures 6
  2.6 KZG Polynomial Commitments 7
  2.7 Pedersen Commitments 8
  2.8 Universal Composability (UC) Framework 8

2.1 Notation

I use [n] to denote the range {1, . . . , n}. For other ranges (mostly zero-indexed), we explicitly write the (inclusive) endpoints, e.g., [0, n]. Concatenation of vectors x, y is written as x||y. Let λ be the security parameter. I use the uppercase variable X for the free variable of a polynomial, e.g., f(X). I use a calligraphic font, e.g., S or X, to denote sets or domains. When applying an operation to two sets of equal size ℓ I mean pairwise application, e.g., Z = X + Y means z_i = x_i + y_i for all i ∈ [ℓ]. Sampling an element x uniformly at random from a set X is written as x ←$ X. I use := to denote variable assignment, y ← Alg(x) to assign to y the output of some algorithm Alg on input x, and y ←$ Alg(x) if the algorithm is randomized (or sometimes Alg(x) →$ y). When I want to be explicit about the randomness r used, I write y ← Alg(x; r). An algorithm is PPT if it runs in probabilistic polynomial time, and a function is negligible if it vanishes faster than the inverse of any polynomial. I use D_1 ≈_λ D_2 to denote that two distributions D_1, D_2 have statistical distance bounded by a negligible function negl(λ).

For a non-interactive proof system Π, I write π ← Π.Prove(x; w) to show that the proving algorithm takes as input an instance x and witness w and outputs a proof π. Verification is written as Π.Verify(x, π) and outputs a bit b. I distinguish the key-pairs used in a signature scheme (vk, sk for “verification” and “signing” key, respectively) from those used in an encryption scheme (ek, dk for “encryption” and “decryption” key, respectively).

2.2 Non-interactive Proof Systems

A language L ⊆ {0, 1}* is a set of elements. In this dissertation, I will only consider the set of NP languages. Every language L ∈ NP has a corresponding polynomially-decidable relation R_L (i.e., decidable by a circuit C(x, w) such that |C| ∈ poly(|x|)) such that x ∈ L iff ∃w s.t. (x, w) ∈ R_L, with |w| ∈ poly(|x|). Conversely, I may refer to the language corresponding to a relation R as L_R.
Definition 1 (Non-interactive proof system). A non-interactive proof system Π for some NP language L is a tuple of PPT algorithms (Setup, Prove, Verify):

• Setup(1^λ) → crs: Given a security parameter, output a common reference string crs. This algorithm might use private randomness (a trusted setup).

• Prove(crs, x, w) → π: Given the crs, an instance x, and witness w such that (x, w) ∈ R_L, output a proof π.

• Verify(crs, x, π) → {0, 1}: Given the crs and a proof π for the instance x, output a bit indicating accept or reject.

(Perfect) completeness requires that for all (x, w) ∈ R_L,

  Pr[ Verify(crs, x, π) = 1 | crs ← Setup(1^λ) ∧ π ← Prove(crs, x, w) ] = 1.

Definition 2 (Computational soundness). Soundness requires that for all x ∉ L, λ ∈ N, and all PPT adversaries A,

  Pr[ Verify(crs, x, π) = 1 | crs ← Setup(1^λ) ∧ π ← A(crs, x) ] ≤ negl(λ).

I refer the reader to [Tha23b, Gol01] for a formal description of other properties of proof systems (e.g., correctness, zero-knowledge).

There are numerous definitions of succinctness adopted for proof systems in the literature. In this work, I require succinct proof systems only to have proofs which are sublinear in the size of the witness and “work-saving” verification [Tha23a]:

Definition 3 (Succinctness). We say a proof system Π is succinct if |π| ∈ o(|x| + |w|) and Π.Verify(crs, x, π) runs in time |x| + o(|w|).

2.3 Bilinear Pairings

Definition 4 (Bilinear pairing). A bilinear pairing is a map e : G_1 × G_2 → G_T where G_1, G_2, and G_T are cyclic groups of prime order p. Let g_1, h_1 ∈ G_1 and g_2, h_2 ∈ G_2 be generators of their respective groups. Bilinearity requires the map e to have the following properties:

  e(g_1 h_1, g_2) = e(g_1, g_2) · e(h_1, g_2)
  e(g_1, g_2 h_2) = e(g_1, g_2) · e(g_1, h_2)

(Note this implies e(g_1^a, g_2) = e(g_1, g_2^a) = e(g_1, g_2)^a for all a ∈ Z_p^*.) Pairings used in cryptography are generally also required to be non-degenerate, i.e., e(g_1, g_2) ≠ 1.

Based on the (in)equality of the groups G_1, G_2 and the (non-)existence of an efficiently computable homomorphism between them, pairings can be divided into three types. The constructions in this dissertation will use a type-3 pairing, which is asymmetric (G_1 ≠ G_2) and has no such efficiently computable homomorphism. In this case, the Decisional Diffie-Hellman (DDH) assumption is believed to hold in both G_1 and G_2; this is referred to as the symmetric external Diffie-Hellman (SXDH) assumption.

2.4 Shamir Secret Sharing

Shamir [Sha79] introduced a scheme to share a secret among n parties such that any t parties can work together to recover the secret, but with any fewer parties the secret remains information-theoretically hidden.

Construction 1 (Shamir secret sharing [Sha79]). Let p be a prime.

• {s_1, . . . , s_n} ← Share(s, t, n): Given a secret s ∈ Z_p and t ≤ n ∈ N, compute a t-out-of-n sharing of s by choosing a random degree-(t−1) polynomial f(X) ∈ Z_p[X] such that f(0) = s. For i ∈ [n], compute s_i := (i, f(i)).

• {s′, ⊥} ← Reconstruct(S, t, n): Given some set of shares S, check if |S| < t. If so, output ⊥. Otherwise, without loss of generality, let S′ := {s_1, . . . , s_t} be the first t entries of S, where s_i := (x_i, y_i). Output the Lagrange interpolation at 0:

  s′ := Σ_{i=1}^{t} y_i · Π_{j=1, j≠i}^{t} x_j / (x_j − x_i).

The secret sharing scheme is correct, since for any secret s and values t ≤ n ∈ N, we have Reconstruct(Share(s, t, n), t, n) = s. For notational convenience, let Share(s, t, n; r)[i] denote the i-th share of s computed with randomness r.
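To make Construction 1 concrete, the following is a minimal Python sketch of t-out-of-n Shamir sharing and reconstruction over a toy prime field. The modulus, the use of Python’s random module, and the helper names share/reconstruct are illustrative assumptions; a real implementation would use a cryptographically sized field and secure randomness.

```python
import random

p = 2**61 - 1  # toy prime modulus (assumption); real deployments use a larger field

def share(s, t, n):
    """Split secret s into n Shamir shares with threshold t."""
    # random degree-(t-1) polynomial f with f(0) = s
    coeffs = [s] + [random.randrange(p) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares, t):
    """Lagrange-interpolate f(0) from any t shares; return None if too few."""
    if len(shares) < t:
        return None
    pts = shares[:t]
    s = 0
    for i, (xi, yi) in enumerate(pts):
        lam = 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                # multiply in x_j / (x_j - x_i) mod p
                lam = lam * xj % p * pow(xj - xi, -1, p) % p
        s = (s + yi * lam) % p
    return s

secret = 123456789
shares = share(secret, t=3, n=5)
assert reconstruct(random.sample(shares, 3), 3) == secret  # any 3 shares suffice
```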
The reconstruction algorithm can be generalized to interpolate any point f(k) (not just the secret at k = 0) and thereby recover the k-th share:

• {s_k, ⊥} ← Interpolate(S, k, t, n): If |S| < t, output ⊥. Otherwise, use the first t entries (x_1, y_1), . . . , (x_t, y_t) of S to interpolate

  f(k) = Σ_{i=1}^{t} y_i · Π_{j=1, j≠i}^{t} (x_j − k) / (x_j − x_i).

Output s_k := (k, f(k)).

2.5 Digital Signatures

A digital signature scheme is a triple of algorithms (KGen, Sign, Verify). The key generation algorithm (vk, sk) ← KGen(1^λ) outputs a verification-signing key pair. The owner of the signing key sk can compute signatures on a message m by running σ ← Sign(sk, m), which can be publicly verified using the corresponding verification key vk by running Verify(vk, m, σ).

2.5.1 BLS Signatures

A popular digital signature scheme is the BLS signature scheme, which uses bilinear pairings (Section 2.3).

Construction 2 (BLS signature scheme [BLS01]).

• (sk, vk) ← KGen(1^λ): Given the security parameter 1^λ, generate elliptic curve groups G_1, G_2 of prime order p (where log p = λ) with generators g_1 and g_2, respectively, and an efficiently computable1 asymmetric pairing e : G_1 × G_2 → G_T. Sample x ←$ Z_p and output the keypair consisting of signing key sk := x and verification key vk := g_2^x. For simplicity, we add the group description (p, G_1, G_2, G_T, g_1, g_2, e) to the verification key vk. Let H : {0, 1}* → G_1 be a public hash function.

• σ ← Sign(sk, m): Given a signing key sk ∈ Z_p and a message m ∈ {0, 1}*, compute a signature σ := H(m)^sk.

• {0, 1} ← Verify(vk, m, σ): Given a verification key vk ∈ G_2, message m ∈ {0, 1}*, and signature σ ∈ G_1, if e(σ, g_2) = e(H(m), vk), output 1. Else output 0.

The security of BLS relies on the gap co-Diffie-Hellman assumption on (G_1, G_2), i.e., co-DDH being easy but co-CDH being hard on G_1, G_2, as well as the existence of an efficiently computable homomorphism ϕ : G_2 → G_1 (type-2 pairing). Since we require a type-3 pairing for our purposes (i.e., no efficiently computable ϕ exists), we rely on a stronger variant of the co-GDH assumption (see discussion in [BLS01, §3.1] and [SV05, §2.2]).

1 i.e., in time poly(λ)

Threshold variant. Sharing a BLS signing key sk ∈ Z_p via Shamir secret sharing leads directly to a robust t-out-of-n threshold signature [BLS01]. More specifically, each party i ∈ [n] receives a t-out-of-n Shamir secret share sk_i of the key. The “partial” public keys vk_i := g_2^{sk_i} are published along with the public key vk. A partial signature is computed in exactly the same way as a regular BLS signature, but under the secret key share: σ_i := H(m)^{sk_i}. This value is publicly verifiable by checking that (g_2, vk_i, H(m), σ_i) is a co-Diffie-Hellman tuple (i.e., it is of the form (g_2, g_2^a, h, h^a) where g_2 ∈ G_2 and h ∈ G_1). Given t valid partial signatures on a message m ∈ {0, 1}*, anyone can recover a regular BLS signature:

• σ ← Reconstruct(S := {(i, σ_i)}): Let S′ ⊆ S be the set of valid partial signatures in S. If |S′| < t, output ⊥. Otherwise, without loss of generality, assume the first t valid signatures come from users 1, . . . , t and recover the complete signature as

  σ ← Π_{i=1}^{t} σ_i^{λ_i}, where λ_i = Π_{j=1, j≠i}^{t} j / (j − i) (mod p).

Notice that the reconstruction simply performs Shamir reconstruction of the signing key shares sk_i in the exponent, and thus the output will equal H(m)^sk. Hence, the complete signature is indistinguishable from a regular BLS signature, and verification proceeds exactly as in the regular scheme.
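The “reconstruction in the exponent” can be illustrated with a self-contained Python sketch. A real pairing group is not needed to see the arithmetic, so the order-p subgroup of Z_q^* below is a hypothetical stand-in for G_1, and all parameters are toy values chosen only for illustration.

```python
import random

# Toy "threshold BLS" reconstruction in the exponent. The order-p subgroup of
# Z_q^* stands in for the pairing group G1 of real BLS; these parameters are
# illustrative assumptions, not secure values.
p, q = 1019, 2039   # q = 2p + 1, so the squares mod q form a group of order p
g = 4               # generator of that order-p subgroup

sk = 777            # full signing key, Shamir-shared below (t = 3, n = 5)
coeffs = [sk] + [random.randrange(p) for _ in range(2)]
f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
Hm = pow(g, 99, q)  # stand-in for the hash-to-group value H(m)

# each party i holds sk_i = f(i) and publishes sigma_i = H(m)^{sk_i}
partials = [(i, pow(Hm, f(i), q)) for i in range(1, 6)]

def reconstruct_sig(parts):
    """Lagrange reconstruction in the exponent: prod_i sigma_i^{lambda_i}."""
    sigma = 1
    for i, si in parts:
        lam = 1
        for j, _ in parts:
            if j != i:
                lam = lam * j % p * pow(j - i, -1, p) % p  # lambda_i mod p
        sigma = sigma * pow(si, lam, q) % q
    return sigma

# any t = 3 partial signatures reconstruct H(m)^sk, i.e., a regular signature
assert reconstruct_sig(partials[:3]) == pow(Hm, sk, q)
```

Because the exponents are Shamir shares, the product equals H(m) raised to f(0) = sk, exactly as Construction 2’s reconstruction states.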
2.6 KZG Polynomial Commitments

KZG commitments can be instantiated using either a symmetric or asymmetric pairing; I give the asymmetric version of KZG below.

Construction 3 (KZG polynomial commitments [KZG10]).

• crs ← Setup(1^λ, d): Given a security parameter 1^λ, generate elliptic curve groups G_1, G_2 of prime order p (where log p = λ) with generators g_1 and g_2, respectively, and an efficiently computable asymmetric pairing e : G_1 × G_2 → G_T. To allow commitments to polynomials in Z_p[X] with degree at most d, sample τ ←$ Z_p and output crs := {g_1, g_1^τ, g_1^{τ^2}, . . . , g_1^{τ^d}, g_2, g_2^τ}. For simplicity, we add the group description (p, G_1, G_2, G_T, g_1, g_2, e) to crs.

• com_f ← Com(crs, f(X)): Let f(X) = a_0 + a_1 X + · · · + a_d X^d ∈ Z_p[X]. Use crs to compute and output

  g_1^{f(τ)} = g_1^{a_0} · (g_1^τ)^{a_1} · · · (g_1^{τ^d})^{a_d} = g_1^{a_0 + a_1 τ + · · · + a_d τ^d} ∈ G_1.

• (f(i), π_i) ← Open(crs, f(X), i): To open f(X) at i, let q_i(X) := (f(X) − f(i)) / (X − i) ∈ Z_p[X].2 Then compute com_{q_i} ← Com(crs, q_i(X)) and output (f(i), com_{q_i}) ∈ Z_p × G_1.

• {0, 1} ← Verify(crs, com_f, i, y, π_i): To confirm y = f(i), it suffices to check that q_i(X) = (f(X) − y) / (X − i) at X = τ. This can be done with a single pairing check:

  e(com_f / g_1^y, g_2) =? e(π_i, g_2^τ / g_2^i).

2 This is a polynomial by Little Bézout’s Theorem.

The security of the scheme relies on the d-Strong Diffie-Hellman assumption (d-SDH), which states that given {g_1, g_1^τ, . . . , g_1^{τ^d}, g_2, g_2^τ}, it is difficult to compute (c, g_1^{1/(τ−c)}) for any c ∈ Z_p \ {−τ}. This assumption is stronger than d-SDH in the symmetric case when G_1 = G_2, which in turn implies DDH in G_1.

2.7 Pedersen Commitments

Next, I recall Pedersen commitments [Ped92], a commitment scheme which is unconditionally (information-theoretically) hiding and computationally binding (by the discrete logarithm assumption on G).

Construction 4 (Pedersen commitment scheme [Ped92]). Let G be a group of order p and g, h be generators of G. The following is a commitment scheme for elements x ∈ Z_p.

• (com, decom) ← Com(x): Sample r ←$ Z_p and return com := g^x h^r and decommitment information (x, r).

• (x, r) ← Open(com, decom): To open com, directly output decom = (x, r).

• {0, 1} ← Verify(com, x, r): To confirm the opening of com to x, it suffices to check that com = g^x h^r.

A PoK of the committed value can be computed using a Sigma protocol due to Okamoto [Oka93], which can be made non-interactive using the Fiat-Shamir transform [FS87]. I refer to this protocol as Π_ped and present it in Figure 2.1.

PoK of Pedersen opening (Π_ped)

Parameters: Group G_1 of order p with generators g_1, h_1.

Prove(com_ped; (v, r)) → π_ped: Given a Pedersen commitment com_ped = g_1^v h_1^r, a party can prove knowledge of the opening (v, r) as follows:
1. The prover samples s_1, s_2 ←$ Z_p and sends a := g_1^{s_1} h_1^{s_2} to the verifier.
2. The verifier sends back a uniform challenge c ←$ Z_p.
3. The prover computes t_1 := s_1 + vc and t_2 := s_2 + rc and sends both values to the verifier.
The proof is defined as π_ped := (a, c, (t_1, t_2)).

Verify(com_ped, π_ped) → {0, 1}: Given the commitment com_ped and a proof π_ped, the verifier parses π_ped := (a, c, (t_1, t_2)) and outputs 1 iff a · com_ped^c = g_1^{t_1} h_1^{t_2}.

Figure 2.1: The proof system Π_ped used to prove knowledge of the opening to a Pedersen commitment [Oka93].

2.8 Universal Composability (UC) Framework

In the universal composability (UC) framework [Can20], the security requirements of a protocol are defined via an ideal functionality which is executed by a trusted party. To prove that a protocol UC-realizes a given ideal functionality, we show that the execution of this protocol (in the real or hybrid world) can be emulated in the ideal world, where in both worlds there is an additional adversary E (the environment) which models arbitrary concurrent protocol executions. Specifically, we show that for any adversary A attacking the protocol
execution in the real world (by controlling communication channels and corrupting parties involved in the protocol execution), there exists an adversary S (the simulator) in the ideal world who can produce a protocol execution which no environment E can distinguish from the real-world execution.

Below we describe the UC framework as it is presented in [CLOS02a]. All parties are represented as probabilistic interactive Turing machines (ITMs) with input, output, and ingoing/outgoing communication tapes. For simplicity, we assume that all communication is authenticated, so an adversary can only delay but not forge or modify messages from parties involved in the protocol. Since messages may be delayed, the order of message delivery is also not guaranteed (asynchronous communication). We consider a PPT malicious, adaptive adversary who can corrupt or tamper with parties at any point during the protocol execution.

The execution in both worlds consists of a series of sequential party activations. Only one party can be activated at a time (by writing a message on its input tape). In the real world, the execution of a protocol Π occurs among parties P_1, . . . , P_n with adversary A and environment E. In the ideal world, interaction takes place between dummy parties P̃_1, . . . , P̃_n communicating with the ideal functionality F, with the adversary (simulator) S and environment E. Every copy of F is identified by a unique session identifier sid.

In both the real and ideal worlds, the environment is activated first and activates either the adversary (A resp. S) or an uncorrupted (dummy) party by writing on its input tape. If A (resp. S) is activated, it can take an action or return control to E. After a (dummy) party (or F) is activated, control returns to E. The protocol execution ends when E completes an activation without writing on the input tape of another party.

We denote with real_{Π,A,E}(λ, x) the random variable describing the output of the real-world execution of Π with security parameter λ and input x in the presence of adversary A and environment E. We write the corresponding distribution ensemble as {real_{Π,A,E}(λ, x)}_{λ∈N, x∈{0,1}*}. The output of the ideal-world interaction with ideal functionality F, adversary (simulator) S, and environment E is represented by the random variable ideal_{F,S,E}(λ, x) and corresponding distribution ensemble {ideal_{F,S,E}(λ, x)}_{λ∈N, x∈{0,1}*}.

The actions each party can take are summarized below:

• Environment E: read output tapes of the adversary (A or S) and any uncorrupted (dummy) parties; then write on the input tape of one party (the adversary A or S or any uncorrupted (dummy) parties).

• Adversary A: read its own tapes and the outgoing communication tapes of all parties; then deliver a pending message to a party by writing it on the recipient’s ingoing communication tape, or corrupt a party (which becomes inactive: its tapes are given to A and A controls its actions from this point on, and E is notified of the corruption).

• Real-world party P_i: only follows its code (potentially writing to its output tape or sending messages via its outgoing communication tape).
• Dummy party P̃_i: acts only as a simple relay with the ideal functionality F, copying inputs from its input tape to its outgoing communication tape (to F) and any messages received on its ingoing communication tape (from F) to its output tape.

• Adversary S: read its own input tape and the public headers (see below) of the messages on F’s and dummy parties’ outgoing communication tapes; then deliver a message to F from a dummy party or vice versa by copying it from the sender’s outgoing communication tape to the recipient’s incoming communication tape, or send its own message to F by writing on the latter’s incoming communication tape, or corrupt a dummy party (which becomes inactive: its tapes are given to S and S controls its actions from this point on, and E and F are notified of the corruption).

• Ideal functionality F: read incoming communication tape; then send any messages specified by its definition to the dummy parties and/or adversary S by writing to its outgoing communication tape.

Definition 5. We say a protocol Π UC-realizes an ideal functionality F if for any PPT adversary A, there exists a simulator S such that for any environment E, the distribution ensembles {real_{Π,A,E}(λ, x)}_{λ∈N, x∈{0,1}*} and {ideal_{F,S,E}(λ, x)}_{λ∈N, x∈{0,1}*} are computationally indistinguishable.

Chapter 3

Naysayer Proofs*

Contents
  3.1 Related Work 14
  3.2 Model 18
  3.3 Formal Definitions 18
  3.4 Constructions 21
    3.4.1 Merkle Commitments 22
    3.4.2 FRI 23
    3.4.3 Post-quantum Signature Schemes 24
    3.4.4 Verifiable Shuffles 25
    3.4.5 Summary 26
  3.5 Storage Considerations 27
  3.6 Extensions and Future Work 28

In most blockchains with programming capabilities, e.g., Ethereum [Woo24], developers are incentivized to minimize the storage and computation complexity of on-chain programs. Applications with high compute or storage incur significant fees, commonly referred to as gas, to compensate validators in the network. Often, these costs are passed on to users of an application.

High gas costs have motivated many applications to utilize verifiable computation (VC) [GGP10], off-loading expensive operations to powerful but untrusted off-chain entities who perform arbitrary computation and provide a short proof1 that the claimed result is correct. This computation can even depend on secret inputs not known to the verifier by relying on zero-knowledge proofs (i.e., zkSNARKs).

1 More precisely, a succinct non-interactive argument of knowledge, or SNARK.
* Portions of this section have been adapted from [SGB24].
VC leads to a paradigm in which smart contracts, while capable of arbi- trary computation, primarily act as verifiers and outsource all significant com- putation off-chain (see Figure 3.1). A motivating application is so-called “zk”- rollups2 [Sta, ZKs, Azt, dYd, Scr], which combine transactions from many users into a single smart contract which verifies a proof that all have been executed correctly. However, verifying these proofs can still be costly. For example, the StarkEx rollup has spent hundreds of thousands of dollars to date to verify FRI polynomial commitment opening proofs.3 We observe that this proof verification is often wasteful. In most applica- tions, provers have strong incentives to post only correct proofs, suffering direct financial penalties (in the form of a lost security deposit) or indirect costs to their reputation and business for posting incorrect proofs. As a result, a sig- nificant fraction of a typical layer-1 blockchain’s storage and computation is expended verifying proofs, which are almost always correct.4 This state of affairs motivates us to propose a new paradigm called nay- sayer proofs (Figure 3.2). In this paradigm, the verifier (e.g., a rollup smart 2The “zk” part of the name is often a misnomer, since these services do not necessarily offer the zero-knowledge property (and in fact most do not). Instead, the term is used by 12 Figure 3.2: The naysayer proof approach. As in VC, the off-chain prover computes y′ = f(y, w) and π, which it posts on-chain. This time, the proof is not verified on-chain, but is provisionally accepted while waiting for the challenge period to pass. Any party can verify π off-chain and, if it fails, issue a challenge by creating a naysayer proof πnay. The on-chain verifier checks any submitted naysayer proofs, and if they pass, it rejects the claimed result y′. If the challenge period elapses without any successful naysaying, y′ is accepted. contract) optimistically accepts a submitted proof without verifying its correct- ness. Instead, any observer can check the proof off-chain and, if needed, prove its incorrectness to the verifier by submitting a naysayer proof. The verifier then checks the naysayer proof and, if it is correct, rejects the original proof. Otherwise, if no party successfully naysays the original proof before the end of the challenge period, the original proof is accepted. To deter denial of service, naysayers may be required to post collateral, which is forfeited if their naysayer proof is incorrect. This paradigm potentially saves the verifier work in two ways. First, in the optimistic case, where the proof is not challenged, the verifier does no work at all (just like the related fraud proof paradigm; see Section 3.1). We expect this to almost always be the case in practice. Second, even in the pessimistic case, we will see below that checking the naysayer proof can be much more efficient than checking the original proof. In other words, the naysayer acts as a helper to the practitioners to emphasize succinctness, which is the more relevant property in practice. 3https://etherscan.io/address/0x3e6118da317f7a433031f03bb71ab870d87dd2dd 4At the time of this writing, we are unaware of any major rollup service which has posted an incorrect proof in production. 13 https://etherscan.io/address/0x3e6118da317f7a433031f03bb71ab870d87dd2dd verifier by reducing the cost of the verification procedure in fraudulent cases. 
At worst, checking the naysayer proof is equivalent to verifying the original proof (this is the trivial naysayer construction).

Naysayer proofs enable other interesting trade-offs. For instance, naysayer proofs can be instantiated with a lower security level than the original proof system. This is because a violation of the naysayer proof system’s soundness undermines only the completeness of the original proof system. For an application like a rollup service, this would result in a loss of liveness; importantly, the rollup users’ funds would remain secure. Liveness could be restored by falling back to full proof verification on-chain.

We will formally define naysayer proofs in Section 3.3 and show that every succinct proof system has a logarithmic-size and constant-time naysayer proof. Before that, we discuss related work in Section 3.1 and define our system model in more detail in Section 3.2. In Section 3.4, we construct naysayer proofs for four concrete proof systems and evaluate their succinctness. We discuss storage considerations in Section 3.5 and conclude with some open directions in Section 3.6.

3.1 Related Work

A concept related to the naysayer paradigm is refereed delegation [FK97]. The idea has found widespread adoption [TR19, KGC+18, AAB+24] under the name “fraud proofs” or “fault proofs” and is the core idea behind optimistic rollups [Eth23b, Lab23, Opt23a]. In classic refereed delegation, a server can outsource a heavy computation to two external parties, who independently compute and return the result. If the reported results disagree, the parties engage in a bisection protocol which pinpoints the single step of the computation which gave rise to the disagreement by recursively halving the computational trace (essentially performing a binary search). Once the discrepancy has been reduced to a single step of the computation, the original server can re-execute only that step to determine which of the two parties’ results is correct.

In the context of optimistic rollups, a “prover” performs the computation off-chain and posts the result on-chain, where it is provisionally accepted. Any party can then challenge the correctness of the result by posting a challenge on-chain and engaging in the bisection protocol with the prover via on-chain messages (Figure 3.3). (The term “fraud proof” or “fault proof” refers to these messages.5) Once the problematic step is identified, it is re-executed on-chain to resolve the dispute. A dispute can also be resolved non-interactively by re-running the entire computation on-chain (Figure 3.4), an approach initially taken by Optimism [Sin22, Buc]. If no one challenges the prover’s result before the end of the challenge period (typically 7 days [Fic]), it is accepted and irreversibly committed on the layer-1 chain.

5 Despite the name, this is not actually a proof system, nor does it depend on any proof system.
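As an illustration of the bisection protocol described above, here is a minimal Python sketch in which a referee binary-searches two claimed computation traces and re-executes only the single disputed step. The trace representation and the helper names bisect_disagreement/referee are simplifying assumptions, not the deployed fraud-proof machinery.

```python
def bisect_disagreement(trace_a, trace_b):
    """Binary-search two equal-length traces that start equal but end unequal;
    return the index of the first step at which they are known to diverge."""
    lo, hi = 0, len(trace_a) - 1   # invariant: agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    return hi

def referee(step_fn, trace_a, trace_b):
    """Re-execute the single disputed step and decide which party is honest."""
    i = bisect_disagreement(trace_a, trace_b)
    correct = step_fn(trace_a[i - 1])   # both traces agree at step i - 1
    return "A" if correct == trace_a[i] else "B"

# toy computation: repeated squaring mod 97; party B corrupts the trace
step = lambda s: s * s % 97
honest = [3]
for _ in range(8):
    honest.append(step(honest[-1]))
cheat = honest[:5] + [42]               # hypothetical faulty step...
for _ in range(3):
    cheat.append(step(cheat[-1]))       # ...propagated forward
print(referee(step, honest, cheat))     # -> "A"
```

The referee’s on-chain cost is a single step re-execution plus O(log n) comparisons, regardless of the total length of the computation.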
Figure 3.3: Interactive fraud “proofs”. Like naysayer proofs, fraud “proofs” make use of challengers and challenge periods. Again, the off-chain “prover” computes y′ = f(y, w), but without providing any proof of correctness π. During the challenge period, anyone (with access to w, e.g., the batch of transactions) can re-compute f(y, w). If the result y′′ does not equal y′, the party engages in a bisection protocol with the original “prover” to narrow the disagreement to a single step f_i(b) of the computation. The defender and challenger submit their respective one-step results a′ ≠ a′′ to the on-chain verifier, who re-executes f_i(b) on-chain. Based on the result, it may reject the original claim y′. If the challenge period elapses without any successful fraud proofs, y′ is accepted.

The naysayer approach offers significant speedups for the challenger over fraud proofs, since for succinct proof systems, verification is much more efficient than the original computation. Notice that there is a slight semantic difference between fraud proofs and naysayer proofs: A fraud proof challenges the correctness of the prover’s computation, and thus can definitively show that the computation output is incorrect. In contrast, a naysayer proof challenges the correctness of the accompanying proof, and can therefore only show that the proof is invalid—the computation itself may still have been correct. A prover who performs the computation honestly has no incentive to attach an incorrect proof6, since that would mean it wasted computational power to compute the result, but would forfeit the reward (and likely incur some additional penalty).

6 It is possible that an honest prover will still attach an incorrect proof if, for example, the proof generation software has a bug.

We compare classic verifiable computation, fraud proofs, and our naysayer proofs in Table 3.1. We discuss the main differences in more detail below.

Figure 3.4: Non-interactive fraud “proofs”. The non-interactive version functions similarly, except that there is no bisection game. Instead, the on-chain verifier simply re-executes the entire computation f(y, w) on-chain to decide whether or not to reject y′. Again, if the challenge period elapses without any challenges, y′ is accepted.

Assumptions. Both fraud proofs and naysayer proofs work under an optimistic assumption, meaning a computation is accepted as correct unless some party challenges it. This requires assuming that at least one honest party is present to challenge any incorrect results. This party must also be able to submit a challenge, meaning we rely on the censorship-resistance of the underlying blockchain and assume new messages are added within some known, bounded delay. VC does not rely on these assumptions since every computation is checked at the time the result is submitted. It is well known that this leads to the faster on-chain finality of zk-rollups, which use the VC paradigm and thus do not require a challenge period.

On-chain interaction. Except for the interactive version of fraud proofs, all of the approaches require only a single message from the (off-chain) prover or challenger to the (on-chain) verifier. VC offers the strongest form of non-interactivity, since it consists of one round total (for the original computation and (non-existent) challenge phase). At the other end of the spectrum, optimistic rollups almost universally employ interactive fraud proofs, requiring multiple on-chain messages in case of a dispute. This means the challenge period must be long enough to ensure that all the messages of the dispute resolution protocol can be posted on-chain, even in the presence of some fraction of malicious consensus nodes who delay inclusion of the challenger’s (or prover’s) messages.
We conjecture that by virtue of having a non-interactive challenge phase, naysayer proofs (and 16 VC fraud proof fraud proof naysayer proof (interactive) (non-interactive) No optimistic assumption # # # Non-interactive # G# G# Off-chain f G# Off-chain Π.Verify # - - Witness-independent challenge - # # Witness-independent resolution G# # No Π.Prove # # Table 3.1: Trade-offs between VC, fraud proofs, and naysayer proofs. non-interactive fraud proofs) admit a shorter challenge period. Further- more, the challenge period must also be long enough to accommodate the challenge resolution protocol to run on-chain. Thus, naysayer proofs should have an advantage even over non-interactive fraud proofs, since for all practical use cases, the on-chain resolution of the former (verifying a naysayer proof) will always be faster than re-computing the function f on-chain. On-chain computation & witnesses. As is their goal, none of the approaches require running the original computation f on-chain, except for non- interactive fraud proofs in the (rare) case of a dispute. Compared to VC, fraud proofs and naysayer proofs do not require running proof verification on-chain (fraud proofs do not use a proof system at all). However, fraud proofs require the full computation input (including any off-chain input w, which we refer to as the witness) to be available to potential challengers and at least in part to the verifier. Neither VC nor naysayer proofs require this information to verify the correctness of the output y′: they use only the statement and proof, which are already available on-chain. Underlying proof system. Finally, a major advantage of fraud proofs is that they do not use any proof system at all. This makes them much easier to implement and deploy. VC and naysayer proofs, on the other hand, require computing a succinct proof, which is costly both in terms of im- plementation complexity and prover runtime. However, the design and efficiency of the bisection protocol can depend significantly on the pro- gramming model used [KGC+18] and the particular function f being com- puted [PD16, PB17, SNBB19, SJSW19]. We thus view naysayer proofs as a drop-in replacement for the many application-specific fault proofs, offering an alternative which is both more general and more efficient. 17 3.2 Model There are three entities in a naysayer proof system. We assume that all parties can read and write to a public bulletin board (e.g., a blockchain). Fix a function f : X ×W → Y and let Lf be the language {(x, y) : ∃w s.t. y = f(x,w)}. Let Rf = {((x, y), w)} be the corresponding relation. We assume f, x are known by all parties. When f : Y×W → Y is a state transition function with y′ = f(y, w), this corresponds to the rollup scenarios described above. Prover The prover posts y and a proof π to the bulletin board claiming (x, y) ∈ Lf . Verifier The verifier does not directly verify the validity of y or π, rather, it waits for time Tnay. If no one naysays (y, π) within that time, the verifier accepts y. In the pessimistic case, a party (or multiple parties) naysay the validity of π by posting proof(s) πnay. The verifier checks the validity of each πnay, and if any of them pass, it rejects y. Naysayer If Verify(crs, (x, y), π) = 0, then the naysayer posts a naysayer proof πnay to the public bulletin board before Tnay time elapses. 
Note that, due to the optimistic paradigm, we must assume a synchronous communication model: in partial synchrony or asynchrony, the adversary can arbitrarily delay the posting of naysayer proofs, and one cannot enforce soundness of the underlying proofs. Furthermore, we assume that the public bulletin board offers censorship-resistance, i.e., anyone who wishes to write to it can do so successfully within a certain time bound. Finally, we assume that there is at least one honest party who will submit a naysayer proof for any invalid π.

3.3 Formal Definitions

Next, we introduce a formal definition and syntax for naysayer proofs. A naysayer proof system Π_nay can be seen as a “wrapper” around an underlying proof system Π. For example, Π_nay defines a proving algorithm Π_nay.Prove which uses the original prover Π.Prove as a subroutine.

Definition 6 (Naysayer proof). Given a non-interactive proof system Π = (Setup, Prove, Verify) for an NP language L, the naysayer proof system corresponding to Π is a tuple of PPT algorithms Π_nay = (Setup, Prove, Naysay, VerifyNay) defined as follows:

Setup(1^λ, 1^{λ_nay}) →$ (crs, crs_nay): Given (potentially different) security parameters 1^λ and 1^{λ_nay}, output two common reference strings crs and crs_nay. This algorithm may use private randomness.

Prove(crs, x, w) → π′: Given a statement x and witness w such that (x, w) ∈ R_L, output π′ = (π, aux), where π ← Π.Prove(crs, x, w).

Naysay(crs_nay, (x, π′), td_nay) → π_nay: Given a statement x and values π′ = (π, aux) where π is a (potentially invalid) proof that ∃w s.t. (x, w) ∈ R_L using the proof system Π, output a naysayer proof π_nay disputing π. This algorithm may also make use of some (private) trapdoor information td_nay ⊆ w.

VerifyNay(crs_nay, (x, π′), π_nay) → {0, ⊥}: Given a statement-proof pair (x, π′) and a naysayer proof π_nay disputing π′, output a bit indicating whether the evidence is sufficient to reject (0) or inconclusive (⊥).

A trivial naysayer proof system always exists in which π_nay = ⊤, π′ = (π, ⊥), and VerifyNay simply runs the original verification procedure, outputting 0 if Π.Verify(crs, x, π) = 0 and ⊥ otherwise. We say a proof system Π is efficiently naysayable if there exists a corresponding naysayer proof system Π_nay such that VerifyNay is asymptotically faster than Verify. If VerifyNay is only concretely faster than Verify, we say Π_nay is a weakly efficient naysayer proof. Note that some proof systems already have constant proof size and verification time [Gro16, Sch90] and therefore can, at best, admit only weakly efficient naysayer proofs. Moreover, if td_nay = ⊥, we say Π_nay is a public naysayer proof (see Section 3.4.4 for an example of a non-public naysayer proof).

Definition 7 (Naysayer completeness). Given a proof system Π, a naysayer proof system Π_nay = (Setup, Prove, Naysay, VerifyNay) is complete if, for all honestly generated crs, crs_nay and all values of aux,7 given an invalid statement-proof pair (x, π), Naysay outputs a valid naysayer proof π_nay. That is, for all λ, λ_nay ∈ N and all aux, x, π, the following expression equals 1:

  Pr[ VerifyNay(crs_nay, (x, (π, aux)), π_nay) = 0 |
      (crs, crs_nay) ← Setup(1^λ, 1^{λ_nay})
      ∧ Π.Verify(crs, x, π) ≠ 1
      ∧ π_nay ← Naysay(crs_nay, (x, (π, aux)), ⊥) ]

Definition 8 (Naysayer soundness). Given a proof system Π, a naysayer proof system Π_nay is sound if, for all PPT adversaries A, and for all honestly generated crs, crs_nay, all (x, w) ∈ R_L, and all correct proofs π′, A produces a passing naysayer proof π_nay with at most negligible probability.
That is, for all λ, λ_nay ∈ N, and all td_nay, the following expression is bounded by negl(λ_nay):8

  Pr[ VerifyNay(crs_nay, (x, π′), π_nay) = 0 |
      (crs, crs_nay) ← Setup(1^λ, 1^{λ_nay})
      ∧ (x, w) ∈ R_L
      ∧ π′ ← Prove(crs, x, w)
      ∧ π_nay ← A(crs_nay, (x, π′), td_nay) ]

7 We do not place any requirement on aux.
8 If we assume aux is computed correctly, the second and third line of the precondition can be simplified to see that Π_nay is required to be a sound proof system for the language L_nay = {(x, π) : x ∉ L ∨ Π.Verify(crs, x, π) ≠ 1}.

Next, we show that every proof system has a corresponding naysayer proof system with logarithmic-sized (in the size of the verification circuit) naysayer proofs and constant verification time (i.e., a succinct naysayer proof system).

Lemma 1. A claimed satisfying assignment for a circuit C : X → {0, 1} on input x ∈ X is efficiently naysayable. That is, if C(x) ≠ 1, there is an O(log |C|)-size proof of this fact which can be checked in constant time, assuming oracle access to the wire assignments of C(x).

Proof. Without loss of generality, let C be a circuit of fan-in 2. If C(x) ≠ 1, then there must be some gate of C for which the wire assignment is inconsistent. Let i be the index of this gate (note |i| ∈ O(log |C|)). To confirm that C(x) ≠ 1, a party can re-evaluate the indicated gate G_i on its inputs a, b and compare the result to the output wire c. That is, if G_i(a, b) ≠ c, the verifier rejects the satisfying assignment.

Theorem 1. Every proof system Π with poly(|x|, |w|) verification complexity has a succinct naysayer proof.

Proof. Given any proof system Π, the evaluation of Π.Verify(crs, ·, ·) can be represented as a circuit C. (We assume this circuit description is public.) Then the following is a complete and sound naysayer proof system Π_nay:

Setup(1^λ, 1^{λ_nay}): Output crs ← Π.Setup(1^λ) and crs_nay := ∅.

Prove(crs, x, w) → π′: Let π ← Π.Prove(crs, x, w) and aux be the wire assignments of Π.Verify(crs, x, π). Output π′ = (π, aux).

Naysay(crs_nay, (x, π′), td_nay): Parse π′ = (π, aux)9 and output π_nay := ⊤ if aux = aux′∥0. Otherwise, evaluate Π.Verify(crs, x, π). If the result is not 1, search aux to find an incorrect wire assignment for some gate G_i ∈ C. Output π_nay := i.

VerifyNay(crs_nay, (x, π′), π_nay): Parse π′ = (·, aux) and π_nay = i. If aux = aux′∥0, output 0, indicating rejection of the proof π′. Otherwise, obtain the values in, out ∈ aux corresponding to the gate G_i and check G_i(in) =? out. If the equality does not hold, output 0; else output ⊥.

Completeness (if a π fails to verify, we can naysay (π, aux)) follows by Lemma 1. If Π.Verify(crs, x, π) ≠ 1, then we have two cases: If aux is consistent with a correct evaluation of Π.Verify(crs, x, π), either aux = aux′∥0 (and VerifyNay rejects) or we can apply the lemma to find an index i such that G_i(in) ≠ out for in, out ∈ aux, where G_i ∈ C. Alternatively, if aux is not consistent with a correct evaluation, there must be some gate (with index i′) which was evaluated incorrectly, i.e., G_{i′}(in) ≠ out for in, out ∈ aux.

Soundness follows by the completeness of Π. If (x, w) ∈ R_L and π′ = (π, aux) is computed correctly, completeness of Π implies Π.Verify(crs, x, π) = 1.

9 If |aux| is larger than the number of wires in C, truncate it to the appropriate length.
The proof of Theorem 1 gives a generic way to build a succinct naysayer proof system for any proof system Π with polynomial-time verification. For succinct proof systems, the generic construction even allows efficient (sublinear) naysaying, since the runtime of Naysay depends only on the runtime of Π.Verify, which is sublinear if Π is succinct.

Notice that although the syntax gives π′ = (π, aux) as an input to the VerifyNay algorithm, in the generic construction the algorithm makes no use of π. Thus, if the naysayer rollup from Figure 3.2 were instantiated with this generic construction, π would not need to be posted on-chain, since the on-chain verifier (running the VerifyNay algorithm) never reads it. In fact, the verifier does not even need most of aux, only the values corresponding to the gate Gi, which is determined by πnay. Thus, although π′ must be available to all potential naysayers, only a small (adaptively chosen) fraction of it must be accessible on-chain. In Section 3.5, we discuss how to leverage this insight to reduce the storage costs of a naysayer rollup.

3.4 Constructions

Our first construction (Section 3.4.1) is a concrete example of the generic naysayer construction from Theorem 1, applied to Merkle trees. Many proof systems have some repetitive structure in their verification algorithm, and this structure allows for more efficient naysaying. A common example is a verification procedure which is a conjunction of multiple independent checks: since all the statements in the conjunction must hold for a proof to be accepted, it suffices for naysaying to point out a single clause of the conjunction which does not hold. Other examples are Plonk proofs [GWC19], whose verification requires multiple bilinear pairing checks, and proofs with multi-round soundness amplification. We then highlight three naysayer proof constructions which take advantage of repetition in the verification algorithm, all falling into the conjunction category, to achieve better naysayer performance: the FRI polynomial commitment scheme (Section 3.4.2) and two post-quantum signature schemes (Section 3.4.3). Then, in Section 3.4.4, we give an example of a non-public naysayer proof which uses a trapdoor to reduce the size and verification complexity of the naysayer proof. Finally, in Section 3.4.5 we estimate the performance of each naysayer proof system compared to that of the original proof system.

3.4.1 Merkle Commitments

Figure 3.5: Each node in a Merkle tree consists of a hash of its children. The root h is a commitment to the vector of leaves (v0, v1, . . . , v7). An opening proof for the element v2 is its copath (black nodes); the “verification trace” for the proof is the path (gray nodes).

            Proof size     Verification
Original    log n H        log n H
Naysayer    log log n B    1 H

Table 3.2: Cost savings of the naysayer paradigm applied to Merkle proofs. H = hash output size/hash operations, B = bits.
Merkle trees [Mer88] and their variants are ubiquitous in modern systems, including Ethereum's state storage [Eth24b]. A Merkle tree can be used to commit to a vector v of elements as shown in Figure 3.5, with the root h acting as a commitment to v. The party who created the tree can prove the inclusion of some element vi at position i in the tree by providing the corresponding copath. For example, to open the leaf at position 2, a prover provides its value v2 and an opening proof π = (h011, h00, h1) consisting of the copath from the leaf v2 to the root h. The proof π is checked by using its contents to recompute the root h′ starting from v2 and then checking that h = h′. This involves recomputing the nodes along the path from the leaf to the root (the gray nodes in the figure). These nodes can be seen as a “verification trace” for the proof π.

In the context of a naysayer proof system, the prover provides π along with the verification trace aux = (h010, h01, h0). A naysayer can point out an error at a particular point of the trace by submitting the index of the incorrect entry of aux (e.g., πnay = 1 to indicate h01). The naysayer verifier checks πnay by computing a single hash using π and oracle access to aux, e.g., checking H(h010, h011) =? h01, where h010, h01 ∈ aux and h011 ∈ π. This is exactly the generic construction from Theorem 1.

3.4.2 FRI

            Proof size                      Verification
Original    O(λ log² d) H + O(λ log d) F    O(λ log² d) H + O(λ log d) F
Naysayer    2 log(q log d) + 1 B            best: O(1) F; worst: O(log d) H

Table 3.3: Cost savings of the naysayer paradigm applied to FRI opening proofs. H = hash output size/hash operations, F = field element size/operations, B = bits.

The Fast Reed-Solomon IOP of proximity (FRI) [BBHR18a] is used as a building block in many non-interactive proof systems, including the STARK IOP [BBHR18b]. Below, we describe only the parts of FRI as applied in STARK; we refer the reader to the cited works for details.

The FRI commitment to a polynomial p(X) ∈ F[X]≤d is the root of a Merkle tree with ρ^{-1}d leaves. Each leaf is an evaluation of p(X) on the set L0 ⊂ F, where ρ^{-1}d = |L0| ≪ |F| for a constant 0 < ρ < 1 (the Reed-Solomon rate parameter). We focus on the verifier's cost in the proof of proximity. Let δ ∈ (0, 1 − √ρ) be a parameter of the scheme. The prover sends log d + 1 values (the roots of successive “foldings” of the original Merkle tree, plus the value of the constant polynomial encoded by the final tree). The verifier makes q = λ/log(1/(1 − δ)) queries to ensure 2^{-λ} soundness error; the prover responds to each query with 2 log d Merkle opening proofs (2 for each folded root). For each query, the verifier must check each Merkle authentication path, amounting to O(log d · log(ρ^{-1}d)) hashes per query. Furthermore, it must perform log d arithmetic checks (roughly 3 additions, 2 divisions, and 2 multiplications in F per folding) per query to ensure the consistency of the folded evaluations. Therefore, the overall FRI verification consists of O(λ log² d) hashes and O(λ log d) field operations.

A FRI proof is invalid if any of the above checks fails. Therefore a straightforward naysayer proof π^FRI_nay = (i, j, k) need only point out a single Merkle proof (the jth proof for the ith query, i ∈ [q], j ∈ [2 log d]) or a single arithmetic check k ∈ [q log d] which fails. The naysayer verifier only needs to recompute that particular check: O(log(ρ^{-1}d)) hashes in the former case10 or a few arithmetic operations over F in the latter.
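The following toy Python sketch (ours) models this structure: FRI verification is treated as a flat conjunction of per-query checks, with `run_check` a caller-supplied stand-in for a single Merkle-path or folding check; none of this is a real FRI implementation.

```python
# Toy model (ours): verification is a conjunction of q * 2 log d Merkle-path
# checks and q * log d folding checks. Naysaying a failing proof means naming
# one failing check; naysayer verification reruns only that single check.

def fri_verify(q, log_d, run_check):
    """Full verification: every check in the conjunction must pass."""
    return all(run_check(kind, i, j)
               for i in range(q)
               for kind, bound in (("merkle", 2 * log_d), ("fold", log_d))
               for j in range(bound))

def fri_naysay(q, log_d, run_check):
    """Find one failing check; its index is the entire naysayer proof."""
    for i in range(q):
        for kind, bound in (("merkle", 2 * log_d), ("fold", log_d)):
            for j in range(bound):
                if not run_check(kind, i, j):
                    return (kind, i, j)   # ~2 log(q log d) + 1 bits
    return None

def fri_verify_nay(pi_nay, run_check):
    """Re-run only the indicated check instead of all of them."""
    kind, i, j = pi_nay
    return 0 if not run_check(kind, i, j) else None  # 0 = reject, None = ⊥
```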
This approach can lead to dramatic concrete savings: according to [Hab22], for λ = 128, d = 2^12 (smaller than most polynomial degrees used in production systems today), ρ = 2^{-3}, q = 91, and δ = 9, the size of a vanilla FRI opening proof (i.e., without concrete optimizations) can be estimated at around 322KB. A naysayer proof for the same parameter settings is 2 log(q log d) + 1 ≈ 2·10 + 1 = 21 bits < 3 bytes.

10 One could use a Merkle naysayer proof (Section 3.4.1) to further reduce the naysayer verification from checking a full Merkle path to a single hash evaluation.

3.4.3 Post-quantum Signature Schemes

            Proof size             Verification
Original    O(λ) F                 O(λ) F + 1 H
Naysayer    2 + log k + log d B    best: O(1) F; worst: O(λ) F + 1 H

Table 3.4: Cost savings of the naysayer paradigm applied to CRYSTALS-Dilithium signatures. H = hash output size/hash operations, F = field element size/operations, B = bits. Since the parameter k depends on λ and d is a constant, |πnay| ∈ O(log λ).

With the advent of account abstraction [Eth23a], Ethereum users can define their own preferred digital signature schemes, including post-quantum signatures as recently standardized by NIST [BHK+19, DKL+18, PFH+22]. Compared to their classical counterparts, post-quantum signatures generally have either substantially larger signatures or substantially larger public keys.12 Since this makes post-quantum signatures expensive to verify on-chain, these schemes are prime candidates for the naysayer proof paradigm.

12 Considering the NIST-standardized post-quantum signature schemes, Dilithium has 1.3KB public keys and 2.4KB signatures at its lowest provided security level (NIST level 2) [DKL+21]; the “small” variant of SPHINCS+ at NIST level 1 has 32B public keys but 7.8KB signatures [ABB+22]; and FALCON at level 1 has 897B public keys and 666B signatures [FHK+20]. By comparison, 2048-bit RSA requires only 256B both for public keys and for signatures while offering comparable security [Gir20] (only against classical adversaries, of course).

CRYSTALS-Dilithium [DKL+18]. We give a simplified version of signature verification in lattice-based signatures like CRYSTALS-Dilithium. In these schemes, the verifier checks that the following holds for a signature σ = (z1, z2, c), public key pk = (A, t), and message M:

∥z1∥∞ < β ∧ ∥z2∥∞ < β ∧ c = H(M, w, pk).    (3.1)

Here β is a constant, A ∈ Rq^{k×ℓ}, z1 ∈ Rq^ℓ, and z2, t ∈ Rq^k for the polynomial ring Rq := Zq[X]/(X^d + 1), and w = Az1 + z2 − ct mod q. (Dilithium uses d = 256.) We write elements of Rq as polynomials p(X) = Σj∈[d] αj X^j with coefficients αj ∈ Zq.

Since Equation (3.1) is a conjunction, the naysayer prover must show that

(∃zi ∈ {z1, z2} : ∥zi∥∞ > β) ∨ c ≠ H(M, w, pk).    (3.2)

If the first check of Equation (3.1) fails, the naysayer gives an index i for which the infinity norm of one of the polynomials in z1 or z2 is too large. (In particular, it can give a tuple (b, i, j) such that αj > β for zi = ··· + αj X^j + ··· ∈ zb.)13 If the second check fails, the naysayer indicates that clause to the naysayer verifier, who must recompute w and perform a single hash evaluation, which is compared to c. Overall, πnay is a tuple (a, b, i, j) indicating a clause a ∈ [2] of Equation (3.2), the vector zb with b ∈ [2], an entry i ∈ [max{k, ℓ}] of that vector, and the index j ∈ [d] of the offending coefficient in that entry. Since k ≥ ℓ, we have |πnay| = (2 + log k + log d) bits.

13 The same idea can be applied to constructions bounding the ℓ2 norm, but with lower efficiency gains for the naysayer verifier, who must recompute the full ℓ2 norm of either z1 or z2.
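A sketch (ours, simplified) of the cheap case, naysaying the norm bound in Equation (3.1): the naysayer names one coefficient, and the verifier rechecks only that coefficient. The representation of z1, z2 as nested lists and the centered-representative convention for the infinity norm are assumptions of this sketch.

```python
def centered(alpha: int, q: int) -> int:
    """Representative of alpha in (-q/2, q/2], as used by the infinity norm."""
    return alpha - q if alpha > q // 2 else alpha

def verify_nay_norm(z, pi_nay, beta: int, q: int):
    """Recheck the single coefficient named by the naysayer.

    z: pair (z1, z2), each a list of polynomials, each a list of d coefficients.
    pi_nay = (b, i, j): vector z_b, entry i, coefficient index j.
    """
    b, i, j = pi_nay
    alpha = z[b][i][j]  # a single lookup, via oracle access to the signature
    # Output 0 (reject the signature) iff the named coefficient is too large.
    return 0 if abs(centered(alpha, q)) > beta else None
```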
The verifier is very efficient when naysaying the first clause, and only slightly faster than the original verifier for the second clause.

SPHINCS+ [BHK+19]. The signature verifier in SPHINCS+ checks several Merkle authentication proofs, requiring hundreds or even thousands of hash evaluations. An efficient naysayer proof can easily be devised akin to the Merkle naysayer described in Section 3.4.1: given a verification trace, the naysayer prover simply points to the hash evaluation in one of the Merkle trees where the signature verification fails.

3.4.4 Verifiable Shuffles

            Proof size             Verification
Original    O(√n) G                O(n) G
Naysayer    log n B + 3 G + 1 F    O(1) G + 1 H

Table 3.5: Cost savings of the naysayer paradigm applied to Bayer-Groth shuffles. H = hash output size/hash operations, G = group element size/operations, B = bits.

Verifiable shuffles appear in many (blockchain) applications, such as single secret leader election [BEHG20], mix-nets [Cha81], cryptocurrency mixers [SNBB19], and e-voting [Adi08]. The state-of-the-art proof system for proving the correctness of a shuffle is due to Bayer and Groth [BG12]. Their proof system is computationally heavy to verify on-chain, as the proof size is O(√n) and the verification time is O(n), where n is the number of shuffled elements.

Most shuffling protocols (of public keys, re-randomizable commitments, or ElGamal ciphertexts) admit a particularly efficient naysayer proof if the naysayer knows at least one of the shuffled elements. Let us consider the simple case of shuffling public keys. The shuffler wishes to prove membership in the following NP language:

Lperm := {((pki, pk′i)_{i=1}^{n}, R) : ∃ r, w1, . . . , wn ∈ Fp, σ ∈ Perm(n) s.t. ∀i ∈ [n], pki = g^{wi} ∧ pk′i = g^{r·wσ(i)} ∧ R = g^r}.

Here Perm(n) is the set of all permutations f : [n] → [n]. Suppose a party knows that for some j ∈ [n], the prover did not correctly include pk′j = g^{r·wj} in the shuffle. The party can naysay by showing that

(g, pkj, R, pk′j) ∈ LDH ∧ pk′j ∉ {pk′1, . . . , pk′n},

where LDH is the language of Diffie-Hellman tuples.14 To produce such a proof, however, the naysayer must know the discrete logarithm wj. Unlike our previous examples, which were public naysayer proofs, this is an example of a private Naysay algorithm using tdnay := wj. The naysayer proof is πnay := (j, pk′j, πDH). The Diffie-Hellman proof can be checked in constant time and, with the right data structure for the permuted list (e.g., a hash table), so can the list non-membership. Thus, πnay is an O(log n)-sized naysayer proof with O(1) verification, yielding exponential savings compared to verifying the original Bayer-Groth shuffle proof.

14 Membership in LDH can be shown via a proof of knowledge of discrete logarithm equality [CP93] consisting of 2 group elements and 1 field element, which can be verified with 4 exponentiations and 2 multiplications in the group.
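The corresponding naysayer verifier can be sketched in a few lines of Python (ours); `verify_dh` stands in for a standard Chaum-Pedersen discrete-log-equality verifier (see footnote 14) and is not implemented here, and `pks_out_set` is assumed to be a precomputed hash set of the shuffled outputs.

```python
def verify_nay_shuffle(g, pks_in, pks_out_set, R, pi_nay, verify_dh):
    """pi_nay = (j, pk_prime_j, pi_dh), as described above."""
    j, pk_prime_j, pi_dh = pi_nay
    # 1. Check the proof that (g, pk_j, R, pk'_j) is a Diffie-Hellman tuple:
    #    a constant number of group operations.
    if not verify_dh(g, pks_in[j], R, pk_prime_j, pi_dh):
        return None  # ⊥: the naysayer's evidence is invalid
    # 2. Check that the honest re-randomization pk'_j is missing from the
    #    shuffled output: O(1) expected time with a hash set.
    return 0 if pk_prime_j not in pks_out_set else None
```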
3.4.5 Summary

We showed the asymptotic cost savings of the verifiers in the four examples discussed in Sections 3.4.1 to 3.4.4 in their respective tables. Note that the verifier speedup is exponential for verifiable shuffles and logarithmic for the Merkle and FRI openings. For CRYSTALS-Dilithium, our naysayer proof is only weakly efficient (see Section 3.3), as there is no asymptotic gap between the complexity of the original signature verification and the naysayer verification in the worst case.

As for proof size, in all of the examples our naysayer proofs are logarithmically smaller than the original proofs. (Note that this calculation does not include the size of aux, but we will see in the next section that aux does not meaningfully impact the proof size for the verifier.) Furthermore, in most cases the naysayer proof consists of an integer index or indices rather than group or field elements. Representing the former requires only a few bits, while the latter are normally at least λ bits long, so in practice naysayer proofs can be concretely smaller even when they are not asymptotically smaller. This can lead to savings even when the original proof is constant-size (e.g., a few group elements).

3.5 Storage Considerations

So far, we have assumed that the naysayer verifier can read the instance x, the original proof π and aux, and the naysayer proof πnay in their entirety. A naysayer proof system thus requires increased storage (long-term for aux, and temporary for πnay, only in case of a challenge). In exchange, the verifier only needs to compute VerifyNay instead of Verify. A useful naysayer proof system should therefore compensate for the increased storage by considerably reducing verification costs. In blockchain contexts, where storage is typically very costly, storing all of this data on-chain may be prohibitively expensive. Fortunately, as we pointed out previously, the verifier, the only entity which requires data to be stored on-chain in order to access it, does not access all of this data.

Blockchains such as Ethereum differentiate costs between persistent storage (which we call Sper) and “call data” (Scall), which is available only for one transaction and is significantly cheaper as a result. Verifiable computation proofs, for example, are usually posted to Scall, with only the verification result persisted to Sper. Some applications now use a third, even cheaper tier of data storage, namely off-chain data availability services (SDA), which promise to make data available off-chain but which on-chain contracts cannot read. Verifiable storage, an analog of verifiable computation, enables a verifier to store only a short commitment to a large vector [CF13, Mer88] or polynomial [KZG10], with an untrusted storage provider (SDA) storing the full values. Individual data items (elements of the vector or evaluations of the polynomial) can be provided as needed to Scall or Sper along with short proofs that they are correct with respect to the stored commitment. (Ethereum implemented this type of storage, commonly referred to as “blob data,” using KZG commitments in EIP-4844 [BFL+22].)

This suggests an optimization for naysayer proofs in a blockchain context: the prover posts only a binding commitment Com(π′), which the contract stores in Sper, while the actual proof π′ = (π, aux) is stored in SDA. We assume that potential naysayers can read π′ from SDA. In the optimistic case, the full proof π′ is never written to the more expensive Scall or Sper. Otherwise, when naysaying is necessary, the naysayer sends openings of the erroneous elements to the verifier (in Scall), who checks that these data elements are valid with respect to the on-chain commitment Com(π′) stored in Sper.
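As a self-contained illustration of this commit-then-open pattern, the following Python sketch (ours) uses a Merkle tree over the chunks of π′ in place of Ethereum's KZG-based blob commitments; it assumes the number of chunks is a power of two and uses SHA-256 as the hash.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    """Com(pi'): the only value persisted on-chain (assumes 2^k leaves)."""
    layer = [H(x) for x in leaves]
    while len(layer) > 1:
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def open_leaf(leaves, idx):
    """Copath for leaves[idx]; posted to call data only on a challenge."""
    layer, path = [H(x) for x in leaves], []
    while len(layer) > 1:
        path.append(layer[idx ^ 1])
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def check_opening(root, leaf, idx, path):
    """On-chain check that an opened chunk matches the stored commitment."""
    node = H(leaf)
    for sib in path:
        node = H(node, sib) if idx % 2 == 0 else H(sib, node)
        idx //= 2
    return node == root
```

Only the root is persisted in Sper; a challenger posts (leaf, index, copath) to Scall, and the contract runs check_opening before running VerifyNay on the opened values.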
Note that most naysayer proof systems do not require reading all of π′ for verification, so even the pessimistic case offers significant savings over storing all of π′ in Scall.

3.6 Extensions and Future Work

In the months since the publication of the original paper [SGB24], naysayer proofs have already received attention from various groups of practitioners hoping to deploy them in production. We see many interesting directions for investigation, both for immediate deployments and to improve our understanding of this new paradigm and its implications.

Rollup design space. Naysayer proofs can be viewed as one of several ways of combining two established solutions, verifiable computation and fraud proofs, to offer different tradeoffs. There may be other ways to combine these two paradigms which are better suited to certain application scenarios. For example, one can imagine a different fraud/naysayer proof hybrid in which the prover sends only the instance x, but a challenger can provide a corrected instance x′ and a proof of the new statement (i.e., “nesting” a zk-rollup inside an optimistic rollup). This can reduce both prover and verifier costs, but makes issuing a challenge less accessible than in either VC or fraud proofs.

Naysaying zkVMs. An emerging trend in zk-rollups is so-called “zero-knowledge virtual machines,” or zkVMs (again, the “zk” part may or may not hold). A zkVM expresses each language as a program P in a standard instruction set and, given an instance (a, b) s.t. b = P(a), proves a conjunction of correct state transitions based on P and the instruction set. Given this repetitive structure of the proven relation, challenging an incorrect instance (a, b) s.t. b ≠ P(a) can be done extremely efficiently. Designing proof systems to efficiently disprove state transitions of popular zkVM instruction sets is an interesting direction for future work.

Other naysayer constructions. In all of our application-specific naysayer constructions, the verification of the base proof system was a conjunction. An interesting question to explore is whether there are other application-specific naysayer proofs which are particularly efficient or useful.

Computing aux. A drawback of (generic) naysayer proofs is that the prover must additionally run Π.Verify to create the verification trace aux. It would be very useful to separate or outsource this computation to another party to reduce the added computational burden on provers (who already spend significant computational power to compute proofs).

Implementing query access. Section 3.5 introduced a straightforward way to implement efficient and secure query access to π′ via a secure data availability service (e.g., Ethereum blob data). Both on- and off-chain, there may be other, more efficient approaches to realizing this access (or removing it entirely) and/or reducing the underlying trust assumptions.

Chapter 4

Cryptocurrency Mixers†

Contents
4.1 Our Contributions . . . . . . . . . . . . . . . . . . 31
4.2 Technical Overview . . . . . . . . . . . . . . . . . 32
4.3 Related Work . . . . . . . . . . . . . . . . . . . . 35
4.4 Additional Preliminaries . . . . . . . . . . . . . . 36
4.4.1 Adaptor Signatures . . . . . . . . . . . . . . . . 37
4.4.2 Linear-Only Homomorphic Encryption . . . . . . . . 39
4.4.3 One-More Discrete Logarithm Assumption . . . . . . 39
4.5 The A2L Protocol . . . . . . . . . . . . . . . . . . 40
4.5.1 Randomizable Puzzles and Homomorphic Encryption . . 41
4.6 Counterexamples of A2L . . . . . . . . . . . . . . . 43
4.6.1 Key Recovery Attack . . . . . . . . . . . . . . . 44
4.6.2 One-More Signature Attack . . . . . . . . . . . . 45
4.7 Blind Conditional Signatures . . . . . . . . . . . . 47
4.8 The A2L+ Protocol . . . . . . . . . . . . . . . . . . 51
4.8.1 Security Analysis . . . . . . . . . . . . . . . . 51
4.9 UC-Secure Blind Conditional Signatures . . . . . . . 57
4.9.1 Ideal Functionality . . . . . . . . . . . . . . . 57
4.9.2 The A2LUC Protocol . . . . . . . . . . . . . . . . 58
4.9.3 Security Analysis . . . . . . . . . . . . . . . . 60
4.10 Performance Evaluation . . . . . . . . . . . . . . . 65
4.10.1 A2L+ . . . . . . . . . . . . . . . . . . . . . . 65
4.10.2 A2LUC . . . . . . . . . . . . . . . . . . . . . . 66

† Portions of this section have been adapted from [GMM+22].

Bitcoin and cryptocurrencies sharing Bitcoin's core principles have attained huge prominence as decentralized and publicly verifiable payment systems. They have attracted not only cryptocurrency enthusiasts but also banks [Eur21], leading IT companies (e.g., Facebook and PayPal), and payment providers such as Visa [CEG+21]. At the same time, the initial perception of payment unlinkability based on pseudonyms has been refuted in numerous academic research works [MPJ+13, SHMSM21], and the blockchain surveillance industry [HSRK21] demonstrates this privacy breach in practice. This has led to a large body of work devoted to providing a privacy-preserving overlay to Bitcoin in the form of coin mixing protocols [BBSU12, GFW22].

Decentralized coin mixing protocols such as CoinJoin [Wik22] or CoinShuffle [RMK14, RMK16, RM17] allow a set of mutually distrusting users to mix their coins to achieve unlinkability: that is, the coins cannot be linked to their initial owners even by malicious participants. These protocols suffer from a common drawback, the bootstrapping problem, i.e., how to find a set of participants to execute the protocol. In fact, while a high number of participants is desirable to improve the anonymity guarantees provided by the coin mixing protocol, such a high number is at the same time undesirable, as it results in poor scalability and makes bootstrapping harder.

An alternative mechanism is one in which a third party, referred to as the hub, alleviates the bootstrapping problem by connecting users who want to mix their coins. Moreover, the hub itself can provide a coin mixing service by acting as a tumbler. In more detail, users send their coins to the hub, which, after collecting all the coins, sends them back to the users in a randomized order, thereby providing unlinkability against an observer of such transfers (e.g., an observer of the corresponding Bitcoin transactions).

Synchronization Puzzles. There are numerous reported cases of “exit scams” by mixing services which took in new payments but stopped providing the mixing service [Sto22]. This has prompted the design of numerous cryptographic protocols [BNM+14, VR15, Coi22, HBG16] to remove trust from the hub, providing a trade-off between trust assumptions, minimum number of transactions, and Bitcoin compatibility [HAB+17]. Of particular interest is the work by Heilman et al. [HAB+17], which lays the groundwork for the core cryptographic primitive which can be used to build a mixing service.
This primitive, referred to as a synchronization puzzle, enables unlinkability even from the view of a corrupt hub. However, Heilman et al. only present informal descriptions of the security and privacy notions of interest. Furthermore, the proposed protocol (TumbleBit) relies on hashed time-lock contracts (HTLCs), a type of smart contract incompatible with major cryptocurrencies such as Monero, Stellar, Ripple, MimbleWimble, and Zerocash (shielded addresses), lowering the interoperability of the solution. The recent work of Tairi et al. [TMM21] attempts to overcome both of these limitations. It gives formal security notions for a synchronization puzzle in the universal composability (UC) framework [Can20]. It also provides an instantiation of the synchronization puzzle (called A2L) that is simultaneously more efficient and more interoperable than TumbleBit, requiring only timelocks and digital signature verification from the underlying cryptocurrencies.

In this work, we identify a gap in their security analysis, and we substantiate the issue by presenting two concrete counterexamples: there exist two encryption schemes (secure under standard cryptographic assumptions) that satisfy the prerequisites of their security notions, yet yield completely insecure systems. This shows that our understanding of synchronization puzzles as a cryptographic primitive is still inadequate. Establishing firm foundations for this important cryptographic primitive requires us to rethink this object from the ground up.

4.1 Our Contributions

We summarize the contributions of this work below.

Counterexamples. First, we identify a gap in the security model of the synchronization puzzle protocol A2L [TMM21], presenting two concrete counterexamples (Section 4.6). Specifically, we show that there exist underlying cryptographic building blocks that satisfy the prerequisites stated in A2L, yet allow for:

• a key recovery attack, in which a user can learn the long-term secret decryption key of the hub;

• a one-more signature attack, in which a user can obtain n signed transactions from the hub while only engaging in n − 1 successful instances of signing a transaction which pays the hub. In other words, the user obtains n coins from the hub while the hub receives only n − 1 coins.

Both attacks run in polynomial time and succeed with overwhelming probability.

Definitions. To place the synchronization puzzle on firmer foundations, we propose a new cryptographic notion that we call blind conditional signatures (BCS). Our new notion intuitively captures the functionality of a synchronization puzzle from [HAB+17, TMM21]. BCS is a simple and easy-to-understand tool, and we formalize its security notions in both the game-based (Section 4.7) and universal composability (Section 4.9) settings. The proposed game-based definitions for BCS are akin to the well-understood standard security notions for regular blind signatures [Cha82, SU17]. We hope that this abstraction may lay the foundations for further studies of this primitive in all cryptocurrencies, scriptless or not.

Constructions. We give two constructions, one that satisfies our game-based security guarantees and one that is UC-secure. Both require only the same limited functionality as A2L from the underlying blockchain. In more detail:

• We give a modified version of A2L (Sections 4.8 and 4.8.1), which we refer to as A2L+, that satisfies the game-based notions (Section 4.7) of BCS, albeit in the linear-only encryption (LOE) model [Gro04]. In this model, the attacker does not directly have access to a homomorphic encryption
In this model, the attacker does not directly have access to a homomorphic encryption 31 scheme; instead, it can perform the legal operations by querying the cor- responding oracles. This is a strong model with a non-falsifiable flavor, similar to the generic/algebraic group model [Sho97, Mau05, FKL18]. • We then provide a less efficient construction A2LUC that securely realizes the UC notion of BCS (Section 4.9). This scheme significantly departs from the construction paradigm of A2L and is based on general-purpose cryptographic tools such as secure two-party computation (2PC). Our results hint at the fact that achieving UC-security for a synchronization puzzle requires a radical departure from current construction paradigms, and it is likely to lead to less efficient schemes. On the other hand, we view the game- based definitions (a central contribution of our work) as a reasonable middle ground between security and efficiency. 4.2 Technical Overview To put our work into context, we give a brief overview of A2L [TMM21] recast as a synchronization puzzle (a notion first introduced in [HAB+17]), and discuss how it can be used as a coin mixing protocol. We then outline the vulnerabilities in A2L and discuss how to fix them using the tools that we develop in this work. Synchronization Puzzles A synchronization puzzle protocol is a protocol between three parties: Alice, Bob, and Hub (refer to Figure 4.1 for a depiction). The synchronization puzzle begins with Hub and Bob executing a puzzle promise protocol (step 1) with respect to some message, mHB such that Bob receives a puzzle τ that contains a signature s (at this point still hidden) on mHB . Bob wishes to solve the puzzle and obtain the embedded signature. To do this, he sends the puzzle τ privately to Alice (step 2), who executes a puzzle solve protocol (step 3) with Hub with respect to some message mAH such that, at the end of the protocol, Alice obtains the signature s, whereas Hub obtains a signature s′ on mAH . Alice then sends the signature s privately to Bob (step 4). Such a protocol must satisfy the following properties. Blindness: The puzzle solve protocol does not leak any information to Hub about τ , and Hub blindly helps solve the puzzle. This ensures that Hub cannot link puzzles across interactions. Unlockability: If step 3 is successfully completed, then the secret s must be a valid secret for Bob’s puzzle τ . This guarantees that Hub cannot learn a signature on mAH , without at the same time revealing a signature on mHB . Unforgeability: Bob cannot output a valid signature on mHB before Alice inter- acts with the Hub. Towards a Coin Mixing Service As shown in [HAB+17, TMM21], the synchronization puzzle is the cryptographic core of a coin mixing service. First, 32 Hub 1. Puzzle Promise Alice Bob 3. Puzzle Solve Alice Bob 2. Send Puzzle Alice Bob 4. Send Solution Puzzle Promise Puzzle Promise Puzzle Solve Puzzle Solve Figure 4.1: Protocol flow of the synchronization puzzle, the underlying crypto- graphic mechanism of Tumblebit and A2L. Our approach in Blind Conditional Signatures follows a similar execution. Dotted double-edged arrows indicate 2- party protocols. Solid arrows indicate secure point-to-point communication. Alice and Bob define the messages mAH : (A v−−→ H) and mHB : (H v−−→ B) where (Ui v−−→ Uj) denotes a cryptocurrency payment (e.g., on-chain transaction or a payment over payment channels) that transfers v coins from Ui to Uj . 
Throughout the rest of this work, we mainly focus on the synchronization puzzle as a cryptographic primitive. The application to a coin mixing protocol follows as prescribed in prior works [HAB+17, TMM21].

The A2L System. In A2L, the blindness property is achieved by making use of a re-randomizable, linearly homomorphic (CPA-secure) encryption scheme. The puzzle τ contains a ciphertext c ← Enc(ekH, s) encrypting the signature s under the encryption key ekH of Hub. During the puzzle solve step, Alice first re-randomizes the ciphertext (and the underlying plaintext) as c → c′ = Enc(ekH, s + r) for a random scalar r. Hub then decrypts c′ to obtain s + r, which in turn reveals a signature s′ on mAH.1 Alice can then strip off the re-randomization factor r and send s to Bob later in step 4. In the analysis, it is argued that the CPA-security of the encryption scheme ensures unforgeability, whereas the re-randomization process guarantees blindness. Unfortunately, we show in this work that this claim is flawed.

1 This is achieved via the notion of adaptor signatures, but for the sake of this overview we ignore the exact details of this aspect.

Counterexamples. We observe that the encryption scheme is only CPA-secure, while the Hub is offering a decryption oracle in disguise. In this setting, the right notion of security is the stronger CCA-security, which accounts for exactly this scenario. However, CCA-security is at odds with blindness, since we require the scheme to be (i) linearly homomorphic and (ii) publicly re-randomizable.2 We substantiate this concern by showing two counterexamples. Specifically, we show that there exist two encryption schemes that satisfy the prerequisites spelled out by A2L but enable two concrete attacks against the protocol. Depending on the scheme, we can launch one of the following attacks:

2 It is well known that no encryption scheme satisfying either of these properties can be CCA-secure.

• A key recovery attack that completely recovers the long-term secret key of the hub, i.e., the decryption key dkH.

• A one-more signature attack that allows one to obtain n + 1 signatures on transactions from Hub to Bob while only revealing n signatures on transactions from Alice to Hub. Effectively, this allows one to steal coins from the hub.

We stress that both of these schemes are specifically crafted to make the protocol fail: their purpose is to highlight a gap in the security model of A2L. As such, they do not imply that A2L as implemented is insecure, although we cannot prove it secure either. For a detailed description of the attacks, we refer the reader to Section 4.6.

Can We Fix This? In light of our attacks, the natural question is whether we can establish formally rigorous security guarantees for the (appropriately patched) A2L system. While it seems unlikely that A2L can achieve UC-security (more discussion on this later), we investigate whether it satisfies some weaker, but still meaningful, notion of security. Our main observation here is that a weak notion of CCA-security for encryption schemes suffices to provide formal guarantees for A2L.
This notion, which we refer to as one-more CCA-security, (roughly) states that it is hard to recover the plaintexts of n ciphertexts while querying a decryption oracle at most n − 1 times. Importantly, this notion is, in principle, not in conflict with the homomorphism and re-randomization requirements, contrary to standard CCA-security.

Towards establishing a formal analysis of A2L, we introduce the notion of blind conditional signatures (BCS) as the cryptographic cornerstone of a synchronization puzzle. We propose game-based definitions (Section 4.7) similar in spirit to the well-established security definitions of regular blind signatures [Cha82, SU17]. We then prove that A2L+, our appropriately modified version of A2L, satisfies these definitions (Section 4.8). Our analysis comes with an important caveat: we analyze the security of our scheme in the linear-only encryption model. This model, introduced by Groth [Gro04], considers only adversaries that are restricted to performing “legal” operations on ciphertexts, similarly to the generic/algebraic group model. While this is far from a complete analysis, it increases our confidence in the security of the system.3

3 We resort to the LOE model because of the seemingly inherent conflict between linear homomorphism and CCA-like security, both of which are needed for our application (in our setting, the adversary has access to something akin to a decryption oracle). Indeed, even proving that ElGamal encryption is CCA1-secure in the standard model is a long-standing open problem, and we believe that the A2L approach would inherently hit this barrier without some additional assumption.

UC-Security. The next question that we set out to answer is whether we can construct a synchronization puzzle that satisfies the strong notion of UC-security. We do not know how to prove that A2L (or A2L+) is secure under composition, which is why we prove A2L+ secure only in the game-based setting. The technical difficulty in proving UC-security is that blindness is unconditional, so we lack a “trapdoor mechanism” that allows the simulator to link adversarial sessions during the security analysis; the proof of UC-security in [TMM21] is flawed for this same reason. Thus, in Section 4.9.2 we develop a different protocol (called A2LUC) that we can prove UC-secure in the standard model. The scheme relies on standard general-purpose cryptographic tools, such as 2PC, and incurs a significant increase in computation costs. We stress that we view this scheme as a proof of concept and leave further improvements in practical efficiency as an open problem. We hope that the scheme will shed some light on the barriers that need to be overcome in order to construct a practically efficient UC-secure synchronization puzzle.

4.3 Related Work

We recall some relevant related work in the literature.

Unlinkable Transactions. CoinJoin [Wik22], CoinShuffle [RMK14, RMK16, RM17], and Möbius [MM18] are coin mixing protocols that rely on interested users coming together and making on-chain transactions to mix their coins. These proposals suffer from the bootstrapping problem (users having to find other interested users for the mix), in addition to requiring custom scripting language support from the underlying currency and completing the mix with on-chain transactions.
Perun [DEFM19] and MixEth [SNBB19] are mixing solutions that rely on Ethereum smart contracts to resolve contentions among users. An alternative design choice is to incorporate coin unlinkability natively in the currency. Monero [LRR+19] and Zcash [BCG+14] are the two most popular examples of currencies that allow for unlinkable transactions without any special coin mixing protocol. This is enabled by complex on-chain cryptographic mechanisms that are not supported in other currencies.

RCCA Security. A security notion related to one-more CCA is that of re-randomizable replayable CCA (RCCA) encryption [PR07]. This notion guarantees security even if the adversary has access to a decryption oracle, but only for ciphertexts that do not decrypt to the challenge messages. This is slightly different from what we require in our setting, since in our application the adversary always queries the oracle on encryptions of new (non-challenge) messages (because of the plaintext re-randomization). This makes it challenging to leverage the guarantees provided by this notion in our analysis.

4.4 Additional Preliminaries

In this chapter, we require that any digital signature scheme ΠDS := (KGen, Sign, Verify) satisfies the standard notion of strong existential unforgeability [GMR88]. We require any non-interactive proof system NIZK (see Section 2.2) to be (1) zero-knowledge, i.e., there exists a simulator π ← Sim(td, x) that computes valid proofs without knowledge of the witness; (2) sound, i.e., it is infeasible for an adversary to output a valid proof for a statement x ∉ L (see Definition 2); and (3) UC-secure, i.e., one can efficiently extract a valid witness from the proofs computed by the adversary (with knowledge of the setup trapdoor td), even in the presence of simulated proofs. For formal security definitions, we refer the reader to [DMP88, CKS11].

In this chapter, we will deal with so-called hard relations R of statement-witness pairs (Y, y) which are decidable in polynomial time (see Section 2.2) but where, for all PPT adversaries A, the probability of A on input Y outputting a witness y is negligible. We will also require a PPT sa