LEDGER VOL 3 (2018) 68-90 · ISSN 2379-5980 · DOI 10.5195/LEDGER.2018.127

Quantum Attacks on Bitcoin, and How to Protect Against Them

Divesh Aggarwal, Gavin Brennen (gavin.brennen@mq.edu.au), Troy Lee (troyjlee@gmail.com), Miklos Santha (miklos.santha@gmail.com), Marco Tomamichel (marco.tomamichel@uts.edu.au)

The key cryptographic protocols used to secure the internet and financial transactions of today are all susceptible to attack by the development of a sufficiently large quantum computer. One particular area at risk is cryptocurrencies, a market currently worth over 100 billion USD. We investigate the risk posed to Bitcoin, and other cryptocurrencies, by attacks using quantum computers. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates. We analyze an alternative proof-of-work called Momentum, based on finding collisions in a hash function, that is even more resistant to speedup by a quantum computer. We also review the available post-quantum signature schemes to see which one would best meet the security and efficiency requirements of blockchain applications.

Introduction

A sufficiently large quantum computer would break much of the public-key cryptography currently used to secure the internet and financial transactions, and would also threaten Bitcoin.1 The basic attack vectors on Bitcoin by quantum computers are known in the Bitcoin community.2 Our contribution in this paper is to more precisely and quantitatively analyze these threats to give reasonable estimates as to when they might be viable. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. This means that transactions, once on the blockchain, would still be relatively protected even in the presence of a quantum computer.

The elliptic curve signature scheme used by Bitcoin is well-known to be broken by Shor’s algorithm for computing discrete logarithms.3 We analyse exactly how long it might take to derive the secret key from a published public key on a future quantum computer. This is critical in the context of Bitcoin as the main window for this attack is from the time a transaction is broadcast until the transaction is processed into a block on the blockchain with several blocks after it. By our most optimistic estimates, as early as 2027 a quantum computer could exist that can break the elliptic curve signature scheme in less than 10 minutes, the block time used in Bitcoin.

We also suggest some countermeasures that can be taken to secure Bitcoin against quantum attacks. We analyse an alternative proof-of-work scheme called Momentum,4 based on finding collisions in a hash function, and show that it admits even less of a quantum speedup than the proof-of-work used by Bitcoin. We also review alternative signature schemes that are believed to be quantum safe.

Blockchain Basics

In this section we give a basic overview of how Bitcoin works, so that we can refer to specific parts of the protocol when we describe possible quantum attacks. We will keep this discussion at an abstract level, as many of the principles apply equally to other cryptocurrencies with the same basic structure as Bitcoin.

All Bitcoin transactions are stored in a public ledger called the blockchain. Individual transactions are bundled into blocks, and all transactions in a block are considered to have occurred at the same time. A time ordering is placed on these transactions by placing them in a chain. Each block in the chain (except the very first, or genesis block) has a pointer to the block before it in the form of the hash of the previous block’s header.

Blocks are added to the chain by miners. Miners can bundle unprocessed transactions into a block and add them to the chain by doing a proof-of-work (PoW). Bitcoin, and many other coins, use a PoW developed by Adam Back called Hashcash.5 The hashcash PoW is to find a well-formed block header such that h(header) ≤ t, where h is a cryptographically secure hash function and header is the block header. A well-formed header contains summary information of a block such as a hash derived from transactions in the block,6 a hash of the previous block header, a time stamp, as well as a so-called nonce, a 32-bit register that can be freely chosen. An illustration of a block can be found in Table 1. The parameter t is a target value that can be changed to adjust the difficulty of the PoW. In Bitcoin, this parameter is dynamically adjusted every 2016 blocks such that the network takes about 10 minutes on average to solve the PoW.

In Bitcoin the hash function chosen for the proof of work is two sequential applications of the SHA256 : {0,1}* → {0,1}^256 hash function, i.e. h(·) = SHA256(SHA256(·)). As the size of the range of h is then 2^256, the expected number of hashes that need to be tried to accomplish the hashcash proof of work with parameter t is 2^256/t. Rather than t, the Bitcoin proof-of-work is usually specified in terms of the difficulty D, where D = 2^224/t. This is the expected number of hashes needed to complete the proof of work divided by 2^32, the number of available nonces. In other words, the difficulty is the expected number of variations of transactions and time stamps that need to be tried when hashing block headers, when for each fixing of the transactions and time stamp all nonces are tried.

Table 1. Example of a Bitcoin block.

    Version                       0x20000012
    Previous block header hash    00...0dfff7669865430b...
    Merkle root                   730d68233e25bec2...
    Timestamp                     2017-08-07 02:12:18
    Difficulty                    860,221,984,436.22
    Nonce                         941660394
    Transaction 1
    Transaction 2
    ...
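As a concrete illustration of the double-SHA256 check and the target/difficulty relation just described, here is a minimal Python sketch. The header bytes are a toy stand-in (Bitcoin's exact header serialization and little-endian conventions are ignored), and the target is made artificially easy so the loop terminates quickly.

```python
import hashlib

def pow_hash(header: bytes) -> int:
    """Double SHA-256 of a candidate block header, read as a 256-bit integer."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big")  # ignores Bitcoin's little-endian display convention

def target_from_difficulty(difficulty: float) -> int:
    """t = 2^224 / D: the largest acceptable hash value at difficulty D."""
    return int(2**224 / difficulty)

# Toy search with an artificially easy target (2^240, i.e. difficulty 2^-16);
# at real difficulties the expected work is D * 2^32 hashes.
target = 2**240
for nonce in range(2**32):
    header = b"toy-header-fields|" + nonce.to_bytes(4, "big")  # placeholder header, not real serialization
    if pow_hash(header) <= target:
        print("nonce", nonce, "satisfies h(header) <= t")
        break
```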

Miners can bundle unprocessed transactions into a block however they like, and are awarded a number of bitcoins for succeeding in the PoW task. The “generation” transaction paying the mining reward is also a transaction included in the block, ensuring that different miners will be searching over disjoint block headers for a good hash pre-image.

Once a miner finds a header satisfying h(header) ≤ t, they announce this to the network and the block is added to the chain. Note that it is easy to verify that a claimed header satisfies the PoW condition — it simply requires one evaluation of the hash function.

The purpose of the PoW is so that one party cannot unilaterally manipulate the blockchain in order to, for example, double spend. It is possible for the blockchain to fork, but at any one time the protocol dictates that miners should work on the fork that is currently the longest. Once a block has k many blocks following it in the longest chain, a party who wants to create a longest chain not including this block would have to win a PoW race starting k blocks behind. If the party controls much less than half of the computing power of the network, this becomes very unlikely as k grows. In Bitcoin, a transaction is usually considered safe once it has 6 blocks following it.
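To make the "unlikely as k grows" statement concrete, here is a minimal sketch of the gambler's-ruin estimate from the original Bitcoin whitepaper (reference 1): an attacker holding a fraction q < 1/2 of the network hash rate ever erases a k-block deficit with probability (q/(1−q))^k. This sketch ignores the whitepaper's Poisson correction for blocks the attacker mines during the confirmation window.

```python
def catch_up_probability(q: float, k: int) -> float:
    """Probability an attacker with hash-rate share q ever erases a k-block deficit."""
    if q >= 0.5:
        return 1.0                      # a majority attacker eventually wins the race
    return (q / (1.0 - q)) ** k

for q in (0.1, 0.3):
    print(q, [round(catch_up_probability(q, k), 6) for k in range(7)])
# q = 0.1: probability falls below 1e-5 within 6 confirmations;
# q = 0.3: roughly 0.006 after 6 confirmations.
```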

The first question we will look at in Section 3.1 is what advantage a quantum computer would have in performing the hashcash PoW, and if it could unilaterally “come from behind” to manipulate the blockchain.

The second aspect of Bitcoin that is important for us is the form that transactions take. When Bob wants to send bitcoin to Alice, Alice first creates (an ideally fresh) private-public key pair. The public key is hashed to create an address. This address is what Alice provides to Bob as the destination to send the bitcoin. Bitcoin uses the hash of the public key as the address instead of the public key not for security reasons but simply to save space.7 As we see later, this design choice does have an impact on the quantum security.

To send bitcoin to Alice, Bob must also point to transactions on the blockchain where bitcoin was sent to addresses that he controls. The sum of bitcoin received to these referenced transactions must add up to at least the amount of bitcoin Bob wishes to send to Alice. Bob proves that he owns these addresses by stating the public key corresponding to each address and using his private key corresponding to this address to sign the message saying he is giving these bitcoins to Alice.

Quantum Attacks on Bitcoin

3.1. Attacks on the Bitcoin Proof-of-Work—In this section, we investigate the advantage a quantum computer would have in performing the hashcash PoW used by Bitcoin. Our findings can be summarized as follows: Using Grover search,8 a quantum computer can perform the hashcash PoW by performing quadratically fewer hashes than is needed by a classical computer. However, the extreme speed of current specialized ASIC hardware for performing the hashcash PoW, coupled with much slower projected gate speeds for current quantum architectures, essentially negates this quadratic speedup, at the current difficulty level, giving quantum computers no advantage. Future improvements to quantum technology allowing gate speeds up to 100GHz could allow quantum computers to solve the PoW about 100 times faster than current technology. However, such a development is unlikely in the next decade, at which point classical hardware may be much faster, and quantum technology might be so widespread that no single quantum enabled agent could dominate the PoW problem.

We now go over these results in detail. Recall that the Bitcoin PoW task is to find a valid block header such that h(header) ≤ t, where h(·) = SHA256(SHA256(·)). The security of the blockchain depends on no agent being able to solve the PoW task first with probability greater than 50%. We will investigate the amount of classical computing power that would be needed to match one quantum computer in performing this task.

We will work in the random oracle model,9 and in particular assume that Pr[h(header) ≤ t] = t/2^256, where the probability is taken uniformly over all well-formed block headers that can be created with transactions available in the pool at any given time (such well-formed block headers can be found by varying the nonce, the transactions included in the block as well as the least significant bits of the timestamp of the header). On a classical computer, the expected number of block headers and nonces which need to be hashed in order to find one whose hash value is at most t is D·2^32, where D is the hashing difficulty defined by D = 2^224/t.10

For quantum computers in the random oracle model we can restrict our attention to the generic quantum approach to solving the PoW task using Grover's algorithm.8 By Grover's algorithm, searching a database of N items for a marked item can be done with O(√N) many queries to the database (whereas any classical computer would require Ω(N) queries to complete the same task).

Let N = 2^256 be the size of the range of h for the following. By our assumptions, with probability at least 0.9999 a random set of 10·N/t many block headers will contain at least one element whose hash is at most t. We can fix some deterministic function g mapping S = {0,1}^⌈log(10·N/t)⌉ to distinct well-formed block headers. We also define a function f which determines if a block header is "good" or not: f(x) = 1 if h(g(x)) ≤ t, and f(x) = 0 if h(g(x)) > t.

A quantum computer can compute f on a superposition of inputs, i.e. perform the mapping

∑_{x∈S} a_x |x⟩ → ∑_{x∈S} (−1)^{f(x)} a_x |x⟩.

Each application of this operation is termed an oracle call. Using Grover's algorithm a quantum algorithm can search through S to find a good block header by computing

#O = (π/4)·√(10·N/t) = π·2^14·√(10·D)

oracle calls. The Grover algorithm can be adapted to run with this scaling even if the number of solutions is not known beforehand, and even if no solutions exist.11
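To get a feel for the numbers, the following sketch plugs an example difficulty into the expressions above and compares the expected classical hash count, D·2^32, with the Grover oracle-call count; the difficulty value is only illustrative.

```python
import math

def classical_hashes(difficulty: float) -> float:
    """Expected classical hash evaluations: D * 2^32."""
    return difficulty * 2**32

def grover_oracle_calls(difficulty: float) -> float:
    """Grover oracle calls over ~10*N/t headers: pi * 2^14 * sqrt(10*D)."""
    return math.pi * 2**14 * math.sqrt(10 * difficulty)

D = 1e12   # example difficulty, close to the value used later in this section
print(f"classical: {classical_hashes(D):.3e} hashes")
print(f"quantum:   {grover_oracle_calls(D):.3e} oracle calls")
# The gap grows like sqrt(D), but each oracle call is far more expensive than a
# classical hash once error correction overheads are included (see below).
```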

While the number of oracle calls determines the number of hashes that need to be performed, additional overhead will be incurred to compute each hash, to construct the appropriate block header, and to do quantum error correction. We now analyze these factors to determine a more realistic estimate of the running time in two ways. First, we estimate the running time based on a well studied model for universal quantum computers with error correction.

On a classical computer, a hash function such as SHA256 uses basic boolean gate operations, whereas on a quantum computer, these elementary boolean gates are translated into reversible logical quantum gates, which introduces some overhead. There are a total of 64 rounds of hashing in the SHA256 protocol and each round can be done using an optimized circuit with 683 Toffoli quantum gates.12 The Toffoli gate is a three-qubit controlled-controlled-not gate defined by its action on a bit string: Toffoli|x1⟩|x2⟩|x3⟩ = |x1⟩|x2⟩|x3 ⊕ x1x2⟩. Most quantum error correction codes use T gates rather than Toffoli gates as the representative time-consuming gate. The T gate is a single qubit gate defined by the action T|x⟩ = e^{iπx/4}|x⟩. Like the Toffoli, the T gate is a so-called non-Clifford gate, which means that, for most error correction codes, it is more resource demanding to implement fault tolerantly, requiring (for example) state distillation factories. A careful analysis of the cost to perform the SHA256 function call as well as the inversion about the mean used in the Grover algorithm finds a total T gate count of 474168 for one oracle call.13 In that circuit decomposition, the T gates can only be parallelized by roughly a factor of three.

There is additional overhead needed by quantum computers to perform error correction. In deciding on a good quantum error correction code there are a variety of tradeoffs to consider: tolerance to a particular physical error model, versatility for subroutines, number of qubits used, logical gate complexity, and the amount of classical processing of error syndromes and feedback. Adopting the surface code, which has the advantages of a relatively high fault tolerance error threshold and local syndrome measurements, we can adapt the analysis of Suchara et al. to estimate the total run time of the quantum algorithm.13 The time needed to run the Grover algorithm and successfully mine a block is

τ = #O·#G/s = π·2^14·√(10·D)·#G/s,

where #G is the number of cycles needed for one oracle call, and s is the quantum computer clock speed. Using a surface code, where the dominant time cost is in distilling magic states to implement T gates, one finds that #G is a product of two factors, where the first factor includes the logical T gate depth for calling the SHA256 function twice as required by the Bitcoin PoW, and twice again to make the circuit reversible, as well as the inversion about the mean. The second factor, c_τ, is the overhead factor in time needed for quantum error correction. It counts the number of clock cycles per logical T gate and is a function of the difficulty and the physical gate error rate p_g. For a fixed gate error rate, the overhead factor c_τ is bounded above by the cost to invert a 256 bit hash (maximum difficulty).

Because the quantum algorithm runs the hashing in superposition, there is no direct translation of quantum computing power into a hashing rate. However, we can define an effective hash rate, hQC, as the expected number of calls on a classical machine divided by the expected time to find a solution on the quantum computer, viz.

h_QC = (N/t)/τ = 0.28·s·√D / c_τ(D, p_g).

Because the time overhead is bounded, asymptotically the effective hashing rate improves as the square root of the difficulty, reflecting the quadratic advantage obtainable from quantum processors.

The Grover algorithm can be parallelized over d quantum processors. In the optimal strategy, each processor is dedicated to search over the entire space of potential solutions, and the expected number of oracle calls needed to find a solution is #O_d = 0.39·#O/√d.14 This implies an expected time to find a solution of τ_d = 0.39·τ/√d, and the effective hash rate using d quantum processors in parallel is h_{QC,d} = 2.56·h_QC·√d. The number of logical qubits needed in the Grover algorithm is fixed at 2402, independent of the difficulty. The number of physical qubits needed is n_Q = 2402·c_{nQ}(D, p_g), where c_{nQ} is the overhead in space, i.e. physical qubits, incurred due to quantum error correction, and is also a function of difficulty and gate error rate.

In Appendix A we show how to calculate the overheads in time and space incurred by error correction. The results showing the performance of a quantum computer for blockchain attacks are given in Figure 1. To connect these results to achievable specifications, we focus on superconducting circuits, which as of now have the fastest quantum gate speeds among candidate quantum technologies and offer a promising path forward to scalability. Assuming maximum gate speeds attainable on current devices of s = 66.7 MHz,15 an experimentally challenging, but not implausible, physical gate error rate of p_g = 5×10^−4, and close to current difficulty D = 10^12, the overheads are c_τ = 538.6 and c_{nQ} = 1810.7, implying an effective hash rate of h_QC = 13.8 GH/s using n_Q = 4.4×10^6 physical qubits. This is more than one thousand times slower than off-the-shelf ASIC devices which achieve hash rates of 14 TH/s;16 the reason being the slow quantum gate speed and delays for fault tolerant T gate construction.

Quantum technologies are poised to scale up significantly in the next decades, with a quantum version of Moore's law likely to take over for quantum clock speeds, gate fidelities, and qubit number. Guided by current improvements in superconducting quantum circuit technology, forecasts for such improvements are given in Appendices B and C. This allows us to estimate the power of a quantum computer as a function of time, as shown in Figure 2. Evidently, it will be some time before quantum computers outcompete classical machines for this task, and when they do, a single quantum computer will not have a majority of hashing power.

Fig. 2. Hash rate of the total Bitcoin network vs. a single quantum computer, 2020–2040.

Nonetheless, certain attacks become more profitable for an adversary armed with quantum computers with even a modest hashing power advantage over classical miners. One example is a mining pool attack wherein a malicious outside party pays pool members to withhold their valid block solutions.17 This reduces the effective mining power of the pool and increases the relative power of the adversary. Smart contracts can be added to the blockchain to enforce the attacker's bribes and the pool members' compliance if they agree to withhold. Remarkably, such an attack is profitable even when the hashing power of the attacker is well below half of the entire network. For example, an attacker with 0.1% of the total network hashing power could, with only a small bribe, cause pool revenue to decrease by 10%. This level of quantum hashing power could be realized by an adversary controlling 20 quantum computers running in parallel with specifications at the minimum of the optimistic assumptions outlined in Appendix A, where the effective hash rate scales like h_QC = 0.04·s·√D, assuming difficulty D = 10^13 and clock speed s = 50 GHz.

3.2. Attacks on Signatures—Signatures in Bitcoin are made using the Elliptic Curve Digital Signature Algorithm based on the secp256k1 curve. The security of this system is based on the hardness of the Elliptic Curve Discrete Log Problem (ECDLP). While this problem is still believed to be hard classically, an efficient quantum algorithm to solve this problem was given by Shor.3 This algorithm means that a sufficiently large universal quantum computer can efficiently compute the private key associated with a given public key, rendering this scheme completely insecure. The implications for Bitcoin are the following:

(1) (Reusing addresses) To spend bitcoin from an address, the public key associated with that address must be revealed. Once the public key is revealed in the presence of a quantum computer, the address is no longer safe and thus should never be used again. While always using fresh addresses is already the suggested practice in Bitcoin, in practice this is not always followed. Any address that has bitcoin and for which the public key has been revealed is completely insecure.

(2) (Processed transactions) If a transaction is made from an address which has not been spent from before, and this transaction is placed on the blockchain with several blocks following it, then this transaction is reasonably secure against quantum attacks. The private key could be derived from the published public key, but as the address has already been spent this would have to be combined with out-hashing the network to perform a double spending attack. As we have seen in Section 3.1, even with a quantum computer a double spending attack is unlikely once the transaction has many blocks following it.

(3) (Unprocessed transactions) After a transaction has been broadcast to the network, but before it is placed on the blockchain, it is at risk from a quantum attack. If the secret key can be derived from the broadcast public key before the transaction is placed on the blockchain, then an attacker could use this secret key to broadcast a new transaction from the same address to his own address. If the attacker then ensures that this new transaction is placed on the blockchain first, then he can effectively steal all the bitcoin behind the original address.

We view item (3) as the most serious attack. To determine the seriousness of this attack it is important to precisely estimate how much time it would take a quantum computer to compute the ECDLP, and if this could be done in a time close to the block interval. For an instance with an n-bit prime field, a recently optimized analysis shows a quantum computer can solve the problem using 9n + 2⌈log2(n)⌉ + 10 logical qubits and (448·log2(n) + 4090)·n^3 Toffoli gates.18 Bitcoin uses n = 256 bit signatures, so the number of Toffoli gates is 1.28×10^11, which can be slightly parallelized to depth 1.16×10^11. Each Toffoli can be realized using a small circuit of T gate depth one acting on 7 qubits in parallel (including 4 ancilla qubits).19
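The logical resource counts quoted above can be reproduced by evaluating the formulas from Roetteler et al.18 at n = 256; the sketch below is only arithmetic on the expressions given in the text.

```python
import math

def ecdlp_logical_qubits(n: int) -> int:
    """Logical qubits for an n-bit prime field: 9n + 2*ceil(log2 n) + 10."""
    return 9 * n + 2 * math.ceil(math.log2(n)) + 10

def ecdlp_toffoli_count(n: int) -> float:
    """Toffoli gate count: (448*log2(n) + 4090) * n^3."""
    return (448 * math.log2(n) + 4090) * n**3

n = 256
print(ecdlp_logical_qubits(n))          # 2330 (2334 once the 4 logical ancilla qubits are added)
print(f"{ecdlp_toffoli_count(n):.2e}")  # ~1.28e11 Toffoli gates, up to rounding
```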

Following the analysis of Sec. 3.1, we can estimate the resources needed for a quantum attack on the digital signatures. As with block mining, the dominant time is consumed by distilling magic states for the logical T gates. The time to solve the ECDLP on a quantum processor is

τ = 1.28×10^11 · c_τ(p_g)/s,

where the time overhead c_τ now only depends on the gate error rate, and s is again the clock speed. The number of physical qubits needed is n_Q = 2334·c_{nQ}(p_g), where the first factor is the number of logical qubits including 4 logical ancilla qubits, and c_{nQ} is the space overhead.

The performance of a quantum computer to attack digital signatures is given in Figure 3. Using a surface code with a physical gate error rate of p_g = 5×10^−4, the overhead factors are c_τ = 291.7 and c_{nQ} = 735.3, and the time to solve the problem at 66.6 MHz clock speed is 6.49 days using 1.7×10^6 physical qubits. Looking forward to performance improvements, for a 10 GHz clock speed and an error rate of 10^−5, the signature is cracked in 30 minutes using 485550 qubits. The latter makes the attack in item (3) quite possible and would render the current Bitcoin system highly insecure. An estimate of the time required for a quantum computer to break the signature scheme as a function of time is given in Figure 4, based on the model described in Appendices B and C.
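Under the same surface-code assumptions, the wall-clock time and physical qubit count follow directly from the overhead factors quoted above; the sketch below simply evaluates those two products for the 66.6 MHz scenario, using only constants given in the text.

```python
TOFFOLI_COUNT = 1.28e11      # Toffoli gates for the 256-bit ECDLP (Sec. 3.2)
LOGICAL_QUBITS = 2334        # including 4 logical ancilla qubits

def ecdlp_runtime_days(c_tau: float, clock_hz: float) -> float:
    """tau = 1.28e11 * c_tau(p_g) / s, converted from seconds to days."""
    return TOFFOLI_COUNT * c_tau / clock_hz / 86400

def ecdlp_physical_qubits(c_nq: float) -> float:
    """n_Q = 2334 * c_nQ(p_g)."""
    return LOGICAL_QUBITS * c_nq

# Surface-code overheads quoted for p_g = 5e-4: c_tau = 291.7, c_nQ = 735.3
print(f"{ecdlp_runtime_days(291.7, 66.6e6):.2f} days")        # ~6.5 days
print(f"{ecdlp_physical_qubits(735.3):.2e} physical qubits")  # ~1.7e6
```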

3.3. Future Enhancements of Quantum Attacks—We have described attacks on the Bitcoin protocol using known quantum algorithms and error correction schemes. While some of the estimates for quantum computing speed and scaling may appear optimistic, it is important to keep in mind that there are several avenues for improved performance of quantum computers to solve the aforementioned problems.

First, the assumed error correction code here is the surface code which needs significant classical computational overhead for state distillation, error syndrome extraction, and correction. Other codes which afford transversal Clifford and non-Clifford gates could overcome the need for slow state distillation.20 In fact the slow down from classical processing for syndrome extraction and correction could be removed entirely using a measurement free protocol.21 Recent analysis of measurement free error correction using the surface code finds error thresholds only about 6 times worse than the measurement based approach.22 This could potentially dramatically improve overall speed of error correction.

Second, reductions in logical gate counts of the quantum circuits are possible as more efficient advanced quantum-computation techniques are developed. For example, using a particular large-size example problem (including oracle implementations) that was analyzed in a previous work,23 a direct comparison of the concrete gate counts, obtained by the software package Quipper, has been made between the old and the new linear-systems solving quantum algorithms,24,25 showing an improvement of several orders of magnitude.26 Given that the quantum Shor and Grover algorithms have been well studied and highly optimized, one would not expect such a dramatic improvement; nonetheless, it is likely that some improvement is possible.

Fig. 4. Time to break the signature scheme for a quantum computer, 2020–2040.

Third, different quantum algorithms might provide relative speedups. Recent work by Kaliski27 presents a quantum algorithm for the Discrete Logarithm Problem: find m given b = a^m, where b is a known target value and a is a known base, using queries to a so-called "magic box" subroutine which computes the most significant bit of m. By repeating queries using judiciously chosen powers of the base, all bits of m can be calculated and the problem solved. Problem queries can be distributed to many quantum computers to solve in parallel. While each such query would require a number of logical qubits and gates comparable to solving the entire problem, there may be some overall speedup since the number of measurements at the end is reduced and the required precision of logical gates may be lower, meaning smaller overheads for fault tolerant implementation.

Countermeasures

4.1. Alternative Proofs-of-Work—As we have seen in the last section, a quantum computer can use Grover search to perform the Bitcoin proof-of-work using quadratically fewer hashes than are needed classically. In this section we investigate alternative proofs-of-work that might offer less of a quantum advantage. The basic properties we want from a proof-of-work are:

(1) (Difficulty) The difficulty of the problem can be adjusted in accordance with the computing power available in the network.

(2) (Asymmetry) It is much easier to verify the proof-of-work has been successfully completed than to perform the proof-of-work.

(3) (No quantum advantage) The proof-of-work cannot be accomplished significantly faster with a quantum computer than with a classical computer.

The Bitcoin proof-of-work accomplishes items (1) and (2), but we would like to find an alternative proof of work that does better on (3).

Similar considerations have been investigated by authors who, instead of (3), look for proofs-of-work that cannot be accelerated by ASICs. An approach to doing this is to look at memory intensive proofs-of-work. Several interesting candidates have been suggested for this, such as Momentum,4 based on finding collisions in a hash function, Cuckoo Cycle,28 based on finding constant sized subgraphs in a random graph, and Equihash,29 based on the generalized birthday problem. These are also good candidates for a more quantum resistant proof-of-work.

These schemes all build on the hashcash-style proof-of-work and use the following template. Let h1 : {0,1}* → {0,1}^n be a cryptographically secure hash function and H = h1(header) be the hash of the block header. The goal is then to find a nonce x such that

h1(H ‖ x) ≤ t and P(H, x) holds,

for some predicate P. The fact that the header and nonce have to satisfy the predicate P means that the best algorithm will no longer simply iterate through nonces x in succession. Having a proof-of-work of this form also ensures that the parameter t can still be chosen to vary the difficulty.

In what follows, we will analyse this template for the Momentum proof-of-work, as this can be related to known quantum lower bounds. For the Momentum proof of work, let h2 : {0,1}* → {0,1}^ℓ be another hash function with ℓ ≤ n. In the original Momentum proposal h1 can be taken as SHA-256 and h2 as a memory intensive hash function, but this is less important for our discussion. The proof-of-work is to find H, a, b such that

h1(H ‖ a ‖ b) ≤ t and h2(H ‖ a) = h2(H ‖ b) and a, b ≤ 2^ℓ.   (1)

First let's investigate the running time needed to solve this proof-of-work, assuming that the hash functions h1, h2 can be evaluated in unit time. Taking a subset S ⊆ {0,1}^ℓ and evaluating h2(H ‖ a) for all a ∈ S, we expect to find about |S|^2/2^ℓ many collisions. Notice that by using an appropriate data structure, these collisions can be found in time about |S|.

One algorithm is then as follows. For each H, we evaluate h2 on a subset S and find about |S|^2/2^ℓ many pairs a, b such that h2(H ‖ a) = h2(H ‖ b). For each collision we then test h1(H ‖ a ‖ b) ≤ t. In expectation, we will have to perform this second test 2^n/t many times. Thus the number of H's we will have to try is about m = max{1, 2^{n+ℓ}/(t·|S|^2)}, since we have to try at least one H. As for each H we spend time |S|, the total running time is m·|S|. We see that it is smallest when |S| = √(2^{n+ℓ}/t), that is when m = 1, and we just try one H. This optimal running time is then T = √(2^{n+ℓ}/t), and to achieve it we have to use a memory of size equal to the running time, which might be prohibitive. For some smaller memory |S| < √(2^{n+ℓ}/t) the running time will be 2^{n+ℓ}/(t·|S|).
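Here is a minimal Python sketch of the classical algorithm just described, using a dictionary as the "appropriate data structure" so that collisions in h2 for one header are found in time roughly |S|. The hash functions and the parameters n, ℓ, t are toy stand-ins, not the ones a real Momentum deployment would use.

```python
import hashlib
from collections import defaultdict

ELL = 20             # toy output length of h2 in bits (real deployments use a much larger space)
TARGET = 2**28       # toy target t for the n = 32-bit output of h1

def h1(data: bytes) -> int:
    """Toy 32-bit 'outer' hash."""
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def h2(data: bytes) -> int:
    """Toy ell-bit 'collision' hash."""
    return int.from_bytes(hashlib.sha256(b"h2|" + data).digest(), "big") % 2**ELL

def momentum_search(H: bytes, set_size: int):
    """For one header hash H, bucket h2(H||a) over a in S, then test each collision against h1."""
    buckets = defaultdict(list)                    # collision lookup in time ~ |S|
    for a in range(set_size):
        buckets[h2(H + a.to_bytes(8, "big"))].append(a)
    for members in buckets.values():               # about |S|^2 / 2^ell collisions expected
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                if h1(H + a.to_bytes(8, "big") + b.to_bytes(8, "big")) <= TARGET:
                    return a, b
    return None                                    # no luck: move on to the next header H

print(momentum_search(b"example-header-hash", set_size=1 << 13))
```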

Now let us look at the running time on a quantum computer. On a quantum computer we can do the following. Call H good if there exist a, b ∈ S such that h1(H ‖ a ‖ b) ≤ t and h2(H ‖ a) = h2(H ‖ b). Testing if an H is good requires finding a collision, and therefore necessitates at least |S|^{2/3} time by the quantum query lower bound of Aaronson and Shi.30 Note that this lower bound is tight, as finding such a collision can also be done in roughly |S|^{2/3} time using Ambainis's element distinctness algorithm.31 We have argued above that a set of m = max{1, 2^{n+ℓ}/(t·|S|^2)} many H's is needed to find at least one good H. By the optimality of Grover search we know that we have to perform at least √m many tests to find a good H.32 As testing if an H is good requires time |S|^{2/3}, the total running time is at least √m·|S|^{2/3}. As the classical running time is m·|S|, we see that, unlike for the current proof-of-work in Bitcoin, with this proposal a quantum computer would not be able to achieve a quadratic advantage as soon as S is more than constant size. In particular, since √m·|S|^{2/3} is also minimized when |S| = √(2^{n+ℓ}/t), the running time of even the fastest quantum algorithm is at least T^{2/3}, which is substantially larger than T^{1/2}.
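For completeness, the minimization behind the T^{2/3} bound can be written out explicitly; this is only a restatement of the argument above, using m = max{1, 2^{n+ℓ}/(t·|S|^2)}:

√m·|S|^{2/3} = √(2^{n+ℓ}/t) · |S|^{−1/3}   when |S| ≤ √(2^{n+ℓ}/t) (so m > 1),
√m·|S|^{2/3} = |S|^{2/3}                    when |S| ≥ √(2^{n+ℓ}/t) (so m = 1).

The first expression decreases with |S| and the second increases, so the minimum is attained at |S| = √(2^{n+ℓ}/t) = T, where both expressions equal T^{2/3}.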

4.2. Review of Post-Quantum Signature Schemes—Many presumably quantum-safe public-key signature schemes have been proposed in the literature. Some examples of these are hash-based signature schemes (LMS,33 XMSS,34 SPHINCS,35 and NSW36), code-based schemes (CFS37 and QUARTZ38), schemes based on multivariate polynomials (RAINBOW39), and lattice-based schemes (GPV,40 LYU,41 BLISS,42 ring-TESLA,43 DILITHIUM,44 and NTRU45). Each of these cryptosystems has a varying degree of efficiency. For a comparison in terms of signature size and key size, see Table 2.

In the blockchain context the most important parameters of a signature scheme are the signature and public key lengths, as these must be stored in some capacity to fully verify transactions, and the time to verify the signature. Looking at Table 2, with respect to the sum of signature and public key lengths, the only reasonable options are hash and lattice based schemes.

Hash based schemes like XMSS have the advantage of having provable security, at least assuming the chosen hash function behaves like a random oracle. The generic quantum attack against these schemes is to use Grover's algorithm, which means that their quantum security level is half of the classical security level. In contrast, the best known quantum attack against DILITHIUM at the 138-bit classical security level requires time 2^125. Thus at the same level of quantum security, lattice based schemes have some advantage in signature plus public key length.

Although the lattice based scheme BLISS has the shortest sum of signature and public key lengths of all the schemes in Table 2, there are some reasons not to choose BLISS in practice. The security of BLISS relies on the hardness of the NTRU problem and the assumption that solving this problem is equivalent to finding a short vector in a so-called NTRU lattice. It has been shown recently that this assumption might be too optimistic, at least for large parameters.46 Moreover, there is a history of attacks on prior NTRU-based signature schemes.47,48 Perhaps most fatally, BLISS is difficult to implement in a secure way as it is very susceptible to side channel attacks. The production-grade strongSwan implementation of BLISS has been attacked in this way by Pessl et al.,49 who showed that the signing key could be recovered after observing about 6000 signature generations.

Acknowledgement

MT and GB would like to thank Michael Bremner for initial discussions. TL would like to thank John Tromp for helpful comments and discussions about proof-of-work and Ronald de Wolf for conversations about parallel quantum search. This material is based on work supported in part by the Singapore National Research Foundation under NRF RF Award No. NRF-NRFF2013-13. Research at the Centre for Quantum Technologies is partially funded by the Singapore Ministry of Education and the National Research Foundation under grant R-710-000-012-135. This research was supported in part by the QuantERA ERA-NET Cofund project QuantAlgo.

Table 2. Comparison of signature and public key sizes for ECDSA and candidate post-quantum signature schemes: lattice-based (GPV,50 LYU,50 BLISS,42 FALCON-512*,51 ring-TESLA,43 qTESLA-128*,52 DILITHIUM*44), multivariate (RAINBOW53), hash-based (LMS,54 XMSS,34 SPHINCS,35 NSW36), and code-based (CFS,37 QUARTZ38).

Author Contributions

All authors contributed equally.

Notes and References 1 Nakamoto, S. “Bitcoin: A Peer-to-Peer Electronic Cash System.” Bitcoin.org (2009) (accessed 2 October 2018) http://www.bitcoin.org/pdf. 2 Buterin, V. “Bitcoin Is Not Quantum-Safe, and How We Can Fix It When Needed.” Bitcoin Magazine (2013) (accessed 2 October 2018) http://bitcoinmagazine.com/articles/ bitcoin-is-not-quantum-safe-and-how-we-can-fix-1375242150/. 3 Shor, P. W. “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer.” SIAM Review 41.2 303–332 (1999) https://doi.org/10.1137/S0036144598347011. 4 Larimer, D. “Momentum–A Memory-Hard Proof-of-Work via Finding Birthday Collisions.” (2014) (accessed 2 October 2018) http://www.hashcash.org/papers/momentum.pdf. 5 Back, A. “Hashcash–A Denial of Service Counter-Measure.” Hashcash.org (2002) (accessed 2 October 2018) http://www.hashcash.org/papers/hashcash.pdf. 6 Specifically the root of a Merkle tree of hashes of the transactions. 7 In early versions of the Bitcoin protocol the public key could be used as an address. 8 Grover, L. K. “A Fast Quantum Mechanical Algorithm for Database Search.” In STOC ’96 Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing. New York: Association for Computing Machinery 212–219 (1996) https://doi.org/10.1145/237814.237866. 9 Bellare, M., Rogaway, P. “Random Oracles are Practical: A Paradigm for Designing Efficient Protocols.” In CCS ’93 Proceedings of the 1st ACM Conference on Computer and Communications Security. New York: Association for Comuting Machinery 62–73 (1993) https://doi.org/10.1145/168588.168596. 10 According to blockchain.info, on August 8, 2017, the hashing difficulty was D = 860 109 and target was t = 2184:4. 11 Boyer, M., Brassard, G., Høyer, P., Tapp, A. “Tight Bounds on Quantum Searching.” Fortschritte der Physik 46.4-5 493–505 (1998) https://doi.org/10.1002/(SICI)1521-3978(199806)46:4/5<493:: AID-PROP493>3.0.CO;2-P. 12 Parent, A., Ro¨tteler, M., Svore, K. M. “Reversible Circuit Compilation with Space Constraints.” CoRR (arXiv) (2015) (accessed 2 October 2018) https://arxiv.org/abs/1510.00377. 13 Suchara, M., Faruque, A., Lai, C.-Y., Paz, G., Chong, F., Kubiatowicz, J. D. “Estimating the Resources for Quantum Computation with the QuRE Toolbox.” EECS Department, University of California, Berkeley (accessed 2 October 2018) http://www2.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-119. html. 14 Gingrich, R. M., Williams, C. P., Cerf, N. J. “Generalized Quantum Search with Parallelism.” Physical Review A 61.5 052313 (2000) https://link.aps.org/doi/10.1103/PhysRevA.61.052313. 15 Kirchhoff, S., et al. “Optimized Cross-Resonance Gate for Coupled Transmon Systems.” arXiv (2017) (accessed 2 October 2018) https://arxiv.org/abs/1701.01841. 16 Using e.g. the Bitmain Antminer S9. 17 Velner, Y., Teutsch, J., Luu, L. “Smart Contracts Make Bitcoin Mining Pools Vulnerable.” IACR Cryptology ePrint Archive (2017) (accessed 2 October 20 18) http://eprint.iacr.org/2017 /230. 19 Selinger, P. “Quantum Circuits of $T$-Depth One.” Physical Review A 87.4 042302 (2013) https://link. aps.org/doi/10.1103/PhysRevA.87.042302. 21 Paz-Silva, G. A., Brennen, G. K., Twamley, J. “Fault Tolerance with Noisy and Slow Measurements and Preparation.” Physical Review Letters 105.10 100501 (2010) https://link.aps.org/doi/10.1103/ PhysRevLett.105.100501. 22 Ekmel Ercan, H., et al. “Measurement-Free Implementations of Small-Scale Surface Codes for Quantum Dot Qubits.” arXiv (2017) (accessed 2 October 2018) https://arxiv.org/abs/1708.08683. 
23 Scherer, A., Valiron, B., Mau, S.-C., Alexander, S., van den Berg, E., Chapuran, T. E. “Concrete Resource Analysis of the Quantum Linear-System Algorithm Used to Compute the Electromagnetic Scattering Cross Section of a 2D Target.” Quantum Information Processing 16.3 60 (2017) https://doi.org/10.1007/ s11128-016-1495-5. 35 Bernstein, D. J., et al. “SPHINCS: Practical Stateless Hash-Based Signatures.” In E. Oswald, M. Fischlin (Eds.), Advances in Cryptology–EUROCRYPT 2015, 34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, April 26-30, 2015. Berlin: Springer 368–397 (2015) https://doi.org/10.1007/978-3-662-46800-5_15. 36 Naor, D., Shenhav, A., Wool, A. “One-Time Signatures Revisited: Have They Become Practical?” IACR Cryptology ePrint Archive (2005) (accessed 2 October 2018) https://eprint.iacr.org/2005/442. 37 Courtois, N., Finiasz, M., Sendrier, N. “How to Achieve a McEliece-Based Digital Signature Scheme.” In C. Boyd (Ed.), Advances in Cryptology—ASIACRYPT 2001. Berlin: Springer 157–174 (2001) https: //doi.org/10.1007/3-540-45682-1_10. 38 Patarin, J., Courtois, N., Goubin, L. “Quartz, 128-Bit Long Digital Signatures.” In D. Naccache (Ed.), Topics in Cryptology–CT-RSA 2001, The Cryptographers’ Track at RSA Conference 2001 San Francisco, CA, USA, April 8–12, 2001. Berlin: Springer 282–297 (2001) https://doi.org/10.1007/3-540-45353-9_ 21. 39 Ding, J., Schmidt, D. “Rainbow, A New Multivariable Polynomial Signature Scheme.” In J. Ioannidis, A. Keromytis, M. Yung (Eds.), Applied Cryptography and Network Security, Third International Conference, ACNS 2005, New York, NY, USA, June 7-10, 2005. Berlin: Springer 164–175 (2005) https://doi.org/10. 1007/11496137_12. 40 Gentry, C., Peikert, C., Vaikuntanathan, V. “Trapdoors for Hard Lattices and New Cryptographic Constructions.” In STOC ’08 Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing. New York: Association for Computing Machinery 197–206 (2008) https://doi.org/10.1145/1374376.1374407. 41 Lyubashevsky, V. “Lattice Signatures Without Trapdoors.” In D. Pointcheval, T. Johansson (Eds.), Advances in Cryptology–EUROCRYPT 2012, 31st Annual International Conference on the Theory and Applications of Cryptographic Techniques, Cambridge, UK, April 15-19, 2012. Berlin: Springer 738–7 55 (2012 ) https: //doi.org/10.1007/978-3-642-29011-4_43. 42 Ducas, L., Durmus, A., Lepoint, T., Lyubashevsky, V. “Lattice Signatures and Bimodal Gaussians.” In R. Canetti, J. A. Garay (Eds.), Advances in Cryptology–CRYPTO 2013, 33rd Annual Cryptology Conference, Santa Barbara, CA, USA, August 18-22, 2013. Berlin: Springer 40–56 (2013) https://doi.org/10.1007/ 978-3-642-40041-4_3. 43 Akleylek, S., Bindel, N., Buchmann, J., Kra¨mer, J., Marson, G. “An Efficient Lattice-Based Signature Scheme with Provably Secure Instantiation.” In D. Pointcheval, A. Nitaj, T. Rachidi (Eds.), Progress in Cryptology–AFRICACRYPT 2016, 8th International Conference on Cryptology in Africa. Berlin: Springer 44–60 (2016) https://doi.org/10.1007/978-3-319-31517-1_3. 44 Ducas, L., Lepoint, T., Lyubashevsky, V., Schwabe, P., Seiler, G., Stehle´, D. “CRYSTALS–Dilithium: Digital Signatures from Module Lattices.” IACR Cryptology ePrint Archive, 2017 (2017) (accessed 2 October 20 18) https://eprint.iacr.org/2017 /633.pdf. 45 Melchor, C. A., Boyen, X., Deneuville, J.-C., Gaborit, P. “Sealing the Leak on Classical NTRU Signatures.” In M. 
Mosca (Ed.), Post-Quantum Cryptography, 6th International Workshop, PQCrypto 2014, Waterloo, ON, Canada, October 1-3, 2014. Berlin: Springer 1–21 (2014) https://doi.org/10.1007/ 978-3-319-11659-4_1. 46 Kirchner, P., Fouque, P. “Revisiting Lattice Attacks on Overstretched NTRU Parameters.” In J.-S. Coron, J. B. Nielsen (Eds.), Advances in Cryptology–EUROCRYPT 2017, 36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, April 30 – May 4, 2017. Berlin: Springer 3–26 (2017) https://doi.org/10.1007/978-3-319-56620-7_1. 47 Nguyen, P. Q., Regev, O. “Learning a Parallelepiped: Cryptanalysis of GGH and NTRU Signatures.” In S. Vaudenay (Ed.), Advances in Cryptology–EUROCRYPT 2006, 24th Annual International Conference on the Theory and Applications of Cryptographic Techniques, St. Petersburg, Russia, May 28 - June 1, 2006. Berlin: Springer 271–288 (2006) https://doi.org/10.1007/11761679_17. 48 Ducas, L., Nguyen, P. Q. “Learning a Zonotope and More: Cryptanalysis of NTRUSign Countermeasures.” In X. Wang, K. Sako (Eds.), Advances in Cryptology–ASIACRYPT 2012, 18th International Conference on the Theory and Application of Cryptology and Information Security, Beijing, China, December 2-6, 2012. Berlin: Springer 433–450 (2012) https://doi.org/10.1007/978-3-642-34961-4_27. 49 Pessl, P., Bruinderink, L., Yarom, Y. “To BLISS-B or Not to Be—Attacking strongSwan’s Implementation of Post-Quantum Signatures.” In CCS ’17 Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. New York: Association from Computing Machinery 1843–1855 (2017) https: //doi.org/10.1145/3133956.3134023. 50 Howe, J., Poppelmann, T., O’Neill, M., O’Sullivan, E., Guneysu, T. “Practical Lattice-Based Digital Signature Schemes.” ACM Transactions on Embedded Computing Systems 14.3 41:1–41:24 (2015) https: //doi.acm.org/10.1145/2724713. 51 Fouque, P.-A., et al. “FALCON: Fast-Fourier Lattice-Based Compact Signatures over NTRU.” (2018) (accessed 2 October 2018) https://falcon-sign.info/. 52 Bindel, N., et al. “Submission to NIST’s post-quantum project: Lattice-Based Digital Signature Scheme qTESLA.” (2018) (accessed 2 October 2018) Full list of submissions and associated documentation can be found at https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/ Round-1-Submissions. 53 Petzoldt, A., Bulygin, S., Buchmann, J. “Selecting Parameters for the Rainbow Signature Scheme.” In N. Sendrier (Ed.), Post-Quantum Cryptography, Third International Workshop, PQCrypto 2010, Darmstadt, Germany, May 25-28, 2010. Berlin: Springer 218–240 (2010) https://doi.org/10.1007/ 978-3-642-12929-2_16. 66 Co´rcoles, A., et al. “Demonstration of a Quantum Error Detection Code Using a Square Lattice of Four Superconducting Qubits.” Nature Communications 6 6979 (2015) https://www.doi.org/10.1038/ ncomms7979. 67 Sheldon, S., Magesan, E., Chow, J. M., Gambetta, J. M. “Procedure for Systematically Tuning Up CrossTalk in the Cross-Resonance Gate.” Physical Review A 93.6 060302 (2016) https://www.doi.org/10. 1103/PhysRevA.93.060302. 68 Deng, X.-H., Barnes, E., Economou, S. E. “Robustness of Error-Suppressing Entangling Gates in CavityCoupled Transmon Qubits.” Physical Review B 96.3 035441 (2017) https://www.doi.org/10.1103/ PhysRevB.96.035441.

Appendix A: Estimating Error Correction Resource Overheads for Quantum Attacks

Here we describe how the overhead factors for quantum error correction are calculated in order to obtain resource costs for quantum attacks on blockchains and digital signatures. The method follows the analysis given in Fowler et al.55 and Matthew et al.56 We first determine n_T and n_C, the number of T gates and Clifford gates respectively needed in the algorithm; the blockchain attack acts on n_L = 2402 logical qubits and the digital signature attack on n_L = 2334 logical qubits.57 The pseudo-code to compute the overhead is given in Table 3.

If we look some years into the future we can speculate as to plausible improvements in quantum computer technology. If we assume a quantum error correction code that supports transversal Clifford and non-Clifford gates, so that there is no distillation slow down, and that error correction is done in a measurement free manner so that no classical error syndrome processing is necessary, then the number of cycles needed for one oracle call is determined solely by the circuit depth, which is 2142094. This is based on an overall circuit depth calculated as follows. The oracle invokes two calls to the SHA256 hash function, and this is done twice, once to compute it and again to uncompute it. Each hash has a reversible circuit depth of 528768. Similarly, there are two multi-controlled phase gates used, one for the inversion about the mean and one for the function call, each having a circuit depth of 13511, for a total depth of 4·528768 + 2·13511 = 2142094 (these numbers are from Suchara et al. but could be further optimized13). Then, accepting potential overhead in space and physical qubit number, but assuming no time penalty for error correction or non-Clifford gate distillation, this implies an improved effective hashing rate of h_QC = 0.04·s·√D, which is substantially faster. For superconducting circuits, ultrafast geometric phase gates are possible at 50 GHz, essentially limited by the microwave resonator frequency.58 Using the above very optimistic assumptions, at difficulty D = 10^12 the effective hash rate would be h_QC = 2.0×10^3 TH/s.

Table 3. Pseudo-code for computing the error correction overheads: CalculateFactoryResources(p_g, n_T) iterates layers of error correction in the magic state distillation factory, and CalculateCircuitResources(p_g, n_C, n_L) chooses the code distance for the circuit and returns the total number of physical qubits.

Fig. 5. (a) Hashes per second in the Bitcoin network and (b) the corresponding hashing difficulty, with extrapolations to 2040.

Appendix B: Modeling the Development of Bitcoin Network Difficulty

The total number of hashes per second in the Bitcoin network is taken from blockchain.info. The data points in Figure 5a are the hash rates for the first of January (2012–2015) and the first of January and July (2016–2017). The two dotted curves correspond to optimistic and less optimistic assumptions for the extrapolations. The optimistic extrapolation assumes that the present growth continues exponentially for five years and then saturates into a linear growth as the market gets saturated with fully optimized ASIC Bitcoin miners. The less optimistic assumption assumes linear growth at the present rate.

From the extrapolation of the Bitcoin network hashrate we can determine the difficulty as a function of time. The expected number of hashes required to find a block in 10 minutes (600 seconds) is given by rate(t)·600, where rate(t) is the total hash rate displayed in Figure 5a. Thus the Bitcoin hashing difficulty is calculated as D(t) = rate(t)·600/2^32 for the two scenarios discussed above. In Figure 5b we compare this with values from blockchain.info for the first of January of 2015–2017.
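The conversion from hash rate to difficulty is a one-line formula. As a sanity check, plugging in a mid-2017 network rate of roughly 6×10^18 hashes per second reproduces the difficulty of about 860×10^9 quoted in Section 2; the 6×10^18 figure is an approximate value consistent with Figure 5a, not an exact data point.

```python
BLOCK_TIME_S = 600           # target block interval of 10 minutes

def difficulty_from_rate(rate_hashes_per_s: float) -> float:
    """D(t) = rate(t) * 600 / 2^32."""
    return rate_hashes_per_s * BLOCK_TIME_S / 2**32

print(f"{difficulty_from_rate(6e18):.3e}")   # ~8.4e11, i.e. roughly the 860e9 of footnote 10
```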

Appendix C: Modeling the Development of Quantum Computers

There are several aspects of the development of quantum technologies that we must model. Since only a few data points are available at this early stage of the development, there is necessarily a lot of uncertainty in our estimates. We therefore give two different estimates, one that is optimistic about the pace of the development and another one that is considerably more pessimistic. Nonetheless, these predictions should be considered as a very rough estimate and might need to be adapted in the future.

First, we need to make an assumption on the number of qubits available at any point in time. As we focus only on solid state superconducting implementations, there are only a few data points available. We assume that the number of available qubits will grow exponentially in time in the near future. The optimistic assumption is that the number will double every 10 months, whereas the less optimistic assumption is that the number doubles every 20 months. These two extrapolations are plotted in Figure 6a. The data points are taken from recently demonstrated superconducting devices.

Fig. 6. Prediction of the number of qubits, the quantum gate frequency (in gate operations per second) and the quantum gate infidelity as a function of time. The fourth plot models a reduction of the overhead due to theoretical advances.

We predict that the quantum gate frequency grows exponentially for the next years. This assumes that the classical control circuits will be sufficiently fast to control quantum gates at these frequencies. After a couple of years the growth slows down considerably because faster classical control circuits are necessary to further accelerate the quantum gates. We cap the quantum gate frequency at 50 GHz (for the optimistic case) or 5 GHz (for the less optimistic case), respectively, mostly because we expect that classical control circuits will not be able to control the quantum gates at higher frequencies. (See, e.g., Herr et al. for progress in this direction.65) This is shown in Figure 6b. The data points are demonstrated two-qubit gate times taken from Córcoles et al.,59,66 Sheldon et al.,67 and Deng et al.68

The predicted development of the gate infidelity is shown in Figure 6c. We assume that the gate infidelity will continue to drop exponentially but that this development will stall at an infidelity of 5×10^−6 (optimistic case) or 5×10^−5 (less optimistic case). For the optimistic case we expect that the gate infidelity will continue to follow DiVincenzo's law, which predicts a reduction of the infidelity by a factor of 2 per year. The data points are taken from Córcoles et al.,59 Chow et al.,61 Córcoles et al.,66 Sheldon et al.,67 and Reynolds.64

Finally, we assume that the number of qubits and time steps required by any algorithm will be reduced over time for two reasons. First, the gate fidelity will increase over time and thus allow for more efficient fault-tolerant schemes to be used. Second, theoretical advances will allow a decrease in the number of qubits and gates required to implement the algorithm and the fault-tolerant schemes. We expect that this factor will be overhead(t) = b^(t−2017), where b ∈ {0.75, 0.85} for the optimistic and less optimistic assumptions, respectively.
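The assumptions of this appendix can be combined into a single toy model of the optimistic branch, sketched below. The 2017 starting values for qubit count, gate frequency, and gate infidelity are illustrative placeholders rather than values taken from the paper; only the growth rates, caps, and the overhead factor follow the text above.

```python
def optimistic_forecast(year: int):
    """Toy extrapolation of quantum hardware under the optimistic assumptions of Appendix C."""
    dt = year - 2017
    qubits = 50 * 2 ** (dt * 12 / 10)         # doubling every 10 months; 50 qubits in 2017 is a placeholder
    gate_hz = min(66.7e6 * 2 ** dt, 50e9)     # assumed yearly doubling, capped at 50 GHz; 66.7 MHz start is a placeholder
    infidelity = max(5e-3 * 0.5 ** dt, 5e-6)  # halves each year, floored at 5e-6; 5e-3 start is a placeholder
    overhead = 0.75 ** dt                     # overhead reduction factor b^(t-2017) with b = 0.75
    return qubits, gate_hz, infidelity, overhead

for y in (2020, 2025, 2030):
    q, f, e, o = optimistic_forecast(y)
    print(y, f"{q:.2e} qubits, {f:.2e} Hz, infidelity {e:.1e}, overhead x{o:.2f}")
```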

18 Roetteler, M., Naehrig, M., Svore, K., Lauter, K. “Quantum Resource Estimates for Computing Elliptic Curve Discrete Logarithms.” arXiv (2017) (accessed 2 October 2018) https://arxiv.org/abs/1706. 20 Paetznick, A., Reichardt, B. W. “Universal Fault-Tolerant Quantum Computation with Only Transversal Gates and Error Correction.” Physical Review Letters 111.9 090505 (2013) https://link.aps.org/doi/10.1103/PhysRevLett.111.090505. 24 Harrow, A. W., Hassidim, A., Lloyd, S. “Quantum Algorithm for Linear Systems of Equations.” Physical Review Letters 103.15 150502 (2009) https://link.aps.org/doi/10.1103/PhysRevLett.103. 25 Childs, A. M., Kothari, R., Somma, R. “Quantum Linear Systems Algorithm with Exponentially Improved Dependence on Precision.” arXiv (2015) (accessed 2 October 2018) https://www.arxiv.org/abs/1511. 26 Scherer, A., personal communication. 27 Kaliski, B. S., Jr. “A Quantum “Magic Box” for the Discrete Logarithm Problem.” IACR Cryptology ePrint Archive (2017) (accessed 2 October 2018) https://eprint.iacr.org/2017/745. 28 Tromp, J. “Cuckoo Cycle: A Memory Bound Graph-Theoretic Proof-of-Work.” In M. Brenner, N. Christin, B. Johnson, K. Rohloff (Eds.), Financial Cryptography and Data Security, FC 2015 International Workshops, BITCOIN, WAHC, and Wearable, San Juan, Puerto Rico, January 30, 2015, Revised Selected Papers. New York: Springer 49–62 (2015) https://doi.org/10.1007/978-3-662-48051-9_4. 29 Biryukov, A., Khovratovich, D. “Equihash: Asymmetric Proof-of-Work Based on the Generalized Birthday Problem.” Ledger 2 1–30 (2017) https://doi.org/10.5195/ledger.2017.48. 30 Aaronson, S., Shi, Y. “Quantum Lower Bounds for the Collision and the Element Distinctness Problems.” Journal of the ACM 51.4 595–605 (2004) https://doi.org/10.1145/1008731.1008735. 31 Ambainis, A. “Quantum Walk Algorithm for Element Distinctness.” SIAM Journal on Computing 37.1 210–239 (2007) https://doi.org/10.1137/S0097539705447311. 32 Bennett, C., Bernstein, E., Brassard, G., Vazirani, U. “Strengths and Weaknesses of Quantum Computing.” SIAM Journal on Computing 26.5 1510–1523 (1997) https://doi.org/10.1137/S0097539796300933. 33 Leighton, F. T., Micali, S. “Large Provably Fast and Secure Digital Signature Schemes Based on Secure Hash Functions.” Google Patents (accessed 2 October 2018) https://patents.google.com/patent/US5432852A/en. 34 Buchmann, J., Dahmen, E., Hülsing, A. “XMSS–A Practical Forward Secure Signature Scheme Based on Minimal Security Assumptions.” In B. Yang (Ed.), Post-Quantum Cryptography, 4th International Workshop, PQCrypto 2011, Taipei, Taiwan, November 29–December 2, 2011. Berlin: Springer 117–129 (2011) https://doi.org/10.1007/978-3-642-25405-5_8. 54 de Oliveira, A. K. D., López, J., Cabral, R. “High Performance of Hash-Based Signature Schemes.” International Journal of Advanced Computer Science and Applications 8.3 421–432 (2017) http://dx.doi.org/10.14569/IJACSA.2017.080358. 55 Fowler, A. G., Mariantoni, M., Martinis, J. M., Cleland, A. N. “Surface Codes: Towards Practical Large-Scale Quantum Computation.” Physical Review A 86 032324 (2012) https://www.doi.org/10.1103/PhysRevA.86.032324. 56 Matthew, A., Di Matteo, O., Gheorghiu, V., Mosca, M., Parent, A., Schanck, J. “Estimating the Cost of Generic Quantum Pre-Image Attacks on SHA-2 and SHA-3.” IACR Cryptology ePrint Archive (2016) (accessed 2 October 2018) https://eprint.iacr.org/2016/992. 57 The factor of 20 for the number of Clifford gates per T gate is based on the construction of T gate depth one representations of the Toffoli gate in Selinger, P. “Quantum Circuits of $T$-Depth One.”19 58 Romero, G., Ballester, D., Wang, Y. M., Scarani, V., Solano, E. “Ultrafast Quantum Gates in Circuit QED.” Physical Review Letters 108.12 120501 (2012) https://www.doi.org/10.1103/PhysRevLett.108.120501. 59 Córcoles, A. D., et al. “Process Verification of Two-Qubit Quantum Gates by Randomized Benchmarking.” Physical Review A 87.3 030301 (2013) https://www.doi.org/10.1103/PhysRevA.87.030301. 60 Barends, R., et al. “Superconducting Quantum Circuits at the Surface Code Threshold for Fault Tolerance.” Nature 508.7497 500–503 (2014) https://doi.org/10.1038/nature13171. 61 Chow, J. M., et al. “Implementing a Strand of a Scalable Fault-Tolerant Quantum Computing Fabric.” Nature Communications 5 https://doi.org/10.1038/ncomms5015. 62 IBM. “IBM Makes Quantum Computing Available on IBM Cloud to Accelerate Innovation.” (accessed 2 October 2018) https://www-03.ibm.com/press/us/en/pressrelease/49661.wss. 63 IBM. “IBM Doubles Compute Power for Quantum Systems, Developers Execute 300K+ Experiments on IBM Quantum Cloud.” (accessed 2 October 2018) https://developer.ibm.com/dwblog/2017/quantum-computing-16-qubit-processor/. 64 Reynolds, M. “Google on Track for Quantum Computer Breakthrough by End of 2017.” New Scientist (accessed 2 October 2018) https://www.newscientist.com/article/2138373-google-on-track-for-quantum-computer-breakthrough-by-end-of-2017/. 65 Herr, Q. P., Herr, A. Y., Oberg, O. T., Ioannidis, A. G. “Ultra-Low-Power Superconductor Logic.” Journal of Applied Physics 109.10 103903 (2011) http://www.doi.org/10.1063/1.3585849.
57 The factor of 20 for the number of Clifford gates per T gate is based is based on the construction of T gate depth one representations of the Toffoli gate in Selinger , P. ”Quantum Circuits of $T$-Depth One .” 19 58 Romero, G. , Ballester , D. , Wang , Y. M. , Scarani , V. , Solano , E. “ Ultrafast Quantum Gates in Circuit QED .” Physical Review Letters 108 . 12 120501 ( 2012 ) https://www.doi.org/10.1103/PhysRevLett. 108.120501. 59 Co´rcoles, A. D. , et al. “ Process Verification of Two-Qubit Quantum Gates by Randomized Benchmarking .” Physical Review A 87.3 030301 ( 2013 ) https://www.doi.org/10.1103/PhysRevA.87.030301. 60 Barends, R. , et al. “ Superconducting Quantum Circuits at the Surface Code Threshold for Fault Tolerance . ” Nature 508 .7497 500 - 503 ( 2014 ) https://doi.org/10.1038/nature13171. 61 Chow, J. M. , et al. “ Implementing a Atrand of a Scalable Fault-Tolerant Quantum Computing Fabric .” Nature Communications 5 https://doi.org/10.1038/ncomms5015. 62 IBM. “ IBM Makes Quantum Computing Available on IBM Cloud to Accelerate Innovation.” (Accessed 2 October 2018 ) https://www-03.ibm.com/press/us/en/pressrelease/49661.wss. 63 IBM. “ IBM Doubles Compute Power for Quantum Systems , Developers Execute 300K+ Experiments on IBM Quantum Cloud.” (accessed 2 October 2018 ) https://developer.ibm.com/dwblog/2017/ quantum-computing-16 - qubit-processor/. 64 Reynolds, M. “ Google on Track for Quantum Computer Breakthrough by End of 2017.” New Scientist (accessed 2 October 2018 ) https://www.newscientist.com/article/ 2138373-google-on -track-for-quantum-computer-breakthrough-by- end- of-2017/. 65 Herr, Q. P. , Herr , A. Y. , Oberg , O. T. , Ioannidis , A. G. Ultra-Low-Power Superconductor Logic . ” Journal of Applied Physics 109.10 103903 ( 2011 ) http://www.doi.org/10.1063/1.3585849. 160ns 42ns Google, projected for end of 2017