Understanding Topological Quantum Error-Correction Codes Using Classical Spin Models


Ruben Sarard Andrist

DISS. ETH No. 20588

Understanding Topological Quantum Error-Correction Codes
Using Classical Spin Models

A dissertation submitted to
ETH Zurich

for the degree of
DOCTOR OF SCIENCES

presented by
Ruben Sarard Andrist
Dipl. Phys. ETH, ETH Zürich
Date of birth: March 27th, 1984
citizen of Boltigen BE, Switzerland

accepted on the recommendation of
Prof. Dr. Helmut G. Katzgraber, examiner
Prof. Dr. Miguel Angel Martín-Delgado, co-examiner

2012

“In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move.” — Douglas Adams

Abstract

Quantum computers promise an impressive speedup for specific tasks, such as factorization and search algorithms, and they also excel at simulating physical systems. Unfortunately, sensitivity to noise makes most current quantum computing schemes error-prone and nonscalable, allowing only for small proof-of-principle devices. One promising avenue to resolve this problem is to perform active quantum error correction using topology: by encoding quantum states in topological properties of a larger system, decoherence can be prevented at the hardware level. At the same time, the intrinsic locality of the operations required for error correction renders this approach suitable for actual physical realizations. The key figure of merit when analyzing topological error-correction codes is the critical error threshold, which denotes the maximum tolerable error rate for reliable error correction. This stability of a code can be understood by mapping the problem onto a classical statistical spin model with disorder. Because the mapping associates quantum errors with percolating domain walls, the scenario of feasible error recovery corresponds to finding the related spin system in an ordered state. This fruitful synergy between quantum computing proposals and frustrated spin models is used in this thesis to numerically calculate – for the first time – the error threshold of several topological error-correction codes via large-scale Monte Carlo simulations. The encouraging results for the Toric code, topological color codes and topological subsystem codes for different error channels raise hopes that reliable quantum computation can be accomplished using topological error correction.


Zusammenfassung

Mit einem Quantencomputer können bestimmte Probleme erheblich effizienter gelöst werden als mit klassischen Rechenmaschinen, so zum Beispiel die Zerlegung von Primzahlen, das Suchen in einer unsortierten Menge, aber auch die Simulation von physikalischen Systemen. Leider reagieren sämtliche Ansätze für die Realisierung einer solchen Maschine sehr empfindlich auf äussere Einflüsse und sind deswegen äusserst fehleranfällig. Ein vielversprechender Weg, um Quantencomputer dennoch zuverlässig zu machen, wird von der topologischen Fehlerkorrektur beschrieben: Indem ein Quantenzustand in den topologischen Eigenschaften eines grösseren Systems kodiert wird, kann man ihn bereits auf der Hardwareebene vor Degeneration schützen. Ein weiterer Vorzug dieses Ansatzes liegt in der intrinsischen Lokalität aller notwendigen Operationen für die Fehlerkorrektur, wodurch physikalische Umsetzungen erleichtert werden. Ein essentielles Leistungsmerkmal von topologischen Fehlerkorrekturkodes ist die kritische Fehlerrate, bis zu welcher zuverlässige Fehlerkorrektur möglich ist. Diese Stabilität kann untersucht werden, indem das Problem in ein klassisches Spin-Modell umformuliert wird. Da bei dieser Abbildung Quantenfehler zu systemdurchdringenden Domänengrenzen werden, deutet das Vorliegen eines geordneten Spin-Zustandes auf wirksame Fehlerkorrektur hin. Mithilfe dieser nützlichen Synergie gelang es im Rahmen dieser Doktorarbeit, die kritische Fehlerrate verschiedener topologischer Codes in umfangreichen Monte-Carlo-Simulationen numerisch zu bestimmen. Die vielversprechenden Resultate für den Torischen Code, topologische Farb-Codes und topologische Teilsystem-Codes nähren die Hoffnung, dass verlässliche Quanten-Berechnungen mit topologischen Fehlerkorrekturverfahren möglich sein könnten.


Table of Contents

0 Front Matter
  Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
  Zusammenfassung . . . . . . . . . . . . . . . . . . . . . . . .  vi
  Table of Contents . . . . . . . . . . . . . . . . . . . . . . . viii
  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . xii

1 Quantum Computing                                                 2
  1.1 From Classical to Quantum Computing . . . . . . . . . . . .   3
  1.2 The Power of Quantum Parallelism . . . . . . . . . . . . . .  4
  1.3 Building a Quantum Computer . . . . . . . . . . . . . . . .   6
  1.4 State of a Quantum Bit . . . . . . . . . . . . . . . . . . .  7
  1.5 Manipulating Qubits . . . . . . . . . . . . . . . . . . . .  10
  1.6 Quantum Error Channels . . . . . . . . . . . . . . . . . . . 13

2 Quantum Error Correction                                         18
  2.1 Classical Codes . . . . . . . . . . . . . . . . . . . . . .  19
  2.2 Quantum Repetition Codes . . . . . . . . . . . . . . . . . . 21
  2.3 Stabilizer Formalism . . . . . . . . . . . . . . . . . . . . 23
  2.4 Topological Stabilizer Codes . . . . . . . . . . . . . . . . 25
  2.5 Relation to Classical Spin Models . . . . . . . . . . . . .  30

3 Classical Statistical Mechanics                                  36
  3.1 Classical Spin Systems . . . . . . . . . . . . . . . . . . . 37
  3.2 Monte Carlo Simulations . . . . . . . . . . . . . . . . . .  40
  3.3 Continuous Phase Transitions . . . . . . . . . . . . . . . . 44

4 Topological Color Codes                                          48
  4.1 Qubit Arrangement . . . . . . . . . . . . . . . . . . . . .  49
  4.2 Related Classical Spin Models . . . . . . . . . . . . . . .  51
  4.3 Numerical Simulation . . . . . . . . . . . . . . . . . . . . 56
  4.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . .  58

5 Effect of the Depolarizing Channel                               62
  5.1 Mapping for a General Channel . . . . . . . . . . . . . . .  63
  5.2 Toric Code with Depolarizing Noise . . . . . . . . . . . . . 65
  5.3 Color Codes with Depolarizing Noise . . . . . . . . . . . .  68
  5.4 Numerical Details . . . . . . . . . . . . . . . . . . . . .  70
  5.5 Code Optimizations . . . . . . . . . . . . . . . . . . . . . 71
  5.6 Results . . . . . . . . . . . . . . . . . . . . . . . . . .  73

6 Topological Subsystem Codes                                      78
  6.1 Constructing the Code . . . . . . . . . . . . . . . . . . .  79
  6.2 Associated Classical Spin System . . . . . . . . . . . . . . 80
  6.3 Monte Carlo Simulations . . . . . . . . . . . . . . . . . .  82
  6.4 Code Optimizations . . . . . . . . . . . . . . . . . . . . . 83
  6.5 Numerical Results . . . . . . . . . . . . . . . . . . . . .  85

7 Measurement Errors                                               90
  7.1 Fault-Tolerant Toric Code . . . . . . . . . . . . . . . . .  91
  7.2 Fault-Tolerant Color Codes . . . . . . . . . . . . . . . . . 96
  7.3 Numerical Simulations . . . . . . . . . . . . . . . . . . .  98
  7.4 Code Optimizations . . . . . . . . . . . . . . . . . . . .  102
  7.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 104

8 Conclusion and Outlook                                          112

A Optimized Temperature Sets for Parallel Tempering               118

B Source Code Listing                                             124

Bibliography                                                      136

Curriculum Vitae                                                  146

Acknowledgements                                                  152

Introduction

With the miniaturization of transistors on silicon chips approaching the atomic size limit and quantum effects hindering further increments in clock speed, a shift to a new paradigm in computing is imminent. Harnessing the power of quantum effects in a more potent "quantum computer", while promising impressive computational capabilities, is currently limited by the susceptibility of quantum systems to external disturbances. As a result, quantum error correction is of paramount importance for engineering scalable, useful and stable quantum computing devices. This thesis covers the analysis of topological quantum error-correction codes – a prominent candidate for efficiently dealing with the noise inherently present in systems for quantum computation.

Chapter 1 outlines the broader context of the topic at hand and provides the reader with a refresher of the necessary prerequisites for a discussion of quantum computing. Chapter 2 reviews quantum error-correction schemes, explains Kitaev's Toric code, and gives a detailed outline of how these error-correction codes are related to classical spin models. The mapping associates faulty qubits with defective spin-spin interactions, and imperfections in the error-correction process with flipped domains. Thus the disordered state characterized by system-spanning domain walls can be identified with the proliferation of minor faults having an adverse effect on the topologically encoded information. Due to their disordered interactions, these complex statistical systems cannot be solved analytically and, for most realizations of topological error-correction codes, have not been investigated previously. Chapter 3 describes the methods from statistical mechanics used to investigate these classical spin models.

Each of the following chapters, 4 to 7, is dedicated to the error stability of one realization of a topological error-correction code with respect to a specific type of error. Understanding how well each proposal is suited to different error channels is essential in evaluating its practicality. Through a mapping analogous to the one presented for the Toric code in chapter 2, each quantum setup is associated with a corresponding classical spin system, where an ordered phase corresponds to the scenario of feasible error correction. This allows for the investigation of the code's properties using state-of-the-art numerical simulation methods and statistical techniques, as well as newly developed observables. By utilizing highly efficient bit-masking and hardware-optimized code, large instances of these complex statistical systems could be simulated, allowing for a detailed finite-size scaling analysis and, thereby, a precise numerical calculation of the respective error thresholds. The results are summarized and discussed in chapter 8. The encouraging findings accentuate the feasibility of topological error correction and raise hopes in the endeavor towards efficient and reliable quantum computation.

Chapter 1

Quantum Computing

"There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened." — Douglas Adams

The advent of electronic computing machines dates back to vacuum-tube and relay-based devices in the first half of the 20th century. One of the very first programmable devices was Konrad Zuse's Z4, which – when installed here at ETH Zurich in 1950 – was the only operational computer on the European continent [1, 2]. Built from 2200 relays, it was able to store 64 numbers and complete one arithmetic operation every three seconds. From these early stages, development in the field was greatly catalyzed by the invention of transistors [3], to amplify electric signals, and of integrated circuits [4]. The impressive pace of progress is underlined by Moore's 1965 observation that the number of transistors on integrated circuits doubles every two years¹ [5]. Since then the increasing transistor numbers have nurtured an exponential growth in processing power, and Moore's surprisingly accurate prediction has held up for nearly half a century.

However, there are fundamental boundaries to the continuation of this trend: Further clock speed increases are already being stalled by quantum effects, with the lack being compensated by packing increasingly many conventional cores onto the same chip. The scalability of this approach is limited by the relatively high power consumption and

¹ Moore's original 1965 paper noted that "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year".


associated cooling requirements. Furthermore, at the current pace, transistors in integrated circuits would reach the size of individual molecules by 2020. And even if we discovered means to deal with quantum effects and the limitations of the current photolithographic production process at that stage, transistors on silicon wafers would hit the atomic size limit merely ten years later. As Moore's projection has shrugged off other obstacles in its way before, this will likely not signify the end of the steep progress curve, but a shift to a new paradigm in computing – for instance towards optical [6, 7], DNA [8] or quantum computers [9, 10].

Amongst these candidates, quantum computing seems the furthest out of reach, yet it entices with some very compelling arguments: While its raw processing speed is, currently, reminiscent of that of the last century's vacuum tubes, its theoretical computing capabilities are believed to exceed those of any conceivable classical calculator [11]. By exploiting quantum phenomena in their very building blocks, quantum computers promise an impressive speedup for specific tasks such as prime factorization [12] and search algorithms [13]. In particular, they also excel at simulating physical and chemical systems [14, 15] – the very key to understanding our universe. Unfortunately, the quantum systems used for computation are highly susceptible to external noise [16], which makes quantum error correction one of the prime prerequisites for successfully building the next generation of computers. This chapter introduces the broader context of the topic and provides the reader with a refresher of the notation and tools for dealing with quantum systems.

1.1 From Classical to Quantum Computing

The basic unit of information in current computers is the binary digit, or "bit", an entity that can store a single discrete value: 0 or 1. For processing, all information is translated into streams of bits, which are then manipulated using elementary bit operations. The state of a classical computer at any given moment is uniquely identified by the contents of all available storage units, and computation is achieved by defining how the machine transitions from the current state to the next one. The generic model for such a computing machine was introduced by Alan Turing in 1936 [17]. He outlined that a program for any conceivable computer could be translated into an equivalent instruction set that would run on his "Universal Turing Machine". In classical complexity theory this notion has led to the development of the strong Church-Turing thesis [17, 18], stating that "any algorithmic process can be simulated efficiently using a (probabilistic) Turing machine". Here the word efficient refers to the ability of achieving the task within a number of steps that grows at most polynomially with the size of the task to simulate (as opposed to exponentially).

The strong Church-Turing thesis was challenged for the first time by the concept of classical analog computers [19], whose registers can store continuous information (for instance as an electric current). This enhanced building block could, in theory, allow more complex tasks to be solved efficiently. Unfortunately, the realization of such a device was thwarted by the presence of noise, which could not be dealt with efficiently. Apparently analog computers do not have a significant advantage over their digital counterparts under realistic noise assumptions [20]. This has procured the understanding that error correction plays an important role in the conception and analysis of new computing paradigms.

In an effort to derive an improved version of the Church-Turing thesis, David Deutsch tried to reformulate it using the boundaries known to modern physics as a foundation [9]. Naturally, this led him to the consideration of quantum systems as the essential building blocks in a new model for computation – a concept that had also been contemplated [10] by Richard Feynman a few years earlier. The universal quantum computer devised by Deutsch in 1985 uses a quantum two-level system as its basic unit of information. In accordance with the laws of quantum mechanics, a quantum bit, or "qubit", can not only store its two pure basis states, but also arbitrary (complex) superpositions thereof.

There are two key features that make Deutsch's proposal particularly interesting: First, there are quantum effects inherent to multi-qubit systems which can be utilized to develop novel, more powerful algorithms. With quantum superposition and entanglement, the state of a quantum register is much more versatile than its classical counterpart [10, 11]. Unfortunately, due to the wave function collapse during a quantum measurement, harnessing this potential is a very challenging, yet not entirely impossible, task [12, 13]. Second, and more importantly, in contrast to analog computers, it has been shown that error correction is feasible – albeit very costly – for qubits in a quantum computer [21].

1.2 The Power of Quantum Parallelism

The first quantum algorithms that showed a significant improvement over their classical counterparts were published by Peter Shor for prime factorization and the discrete logarithm [12]. The central ingredient to his insights is quantum parallelism [20, 22], which refers to the ability to evaluate a function for superposition input states. That is, if we are able to implement a function f(x): {0,1} → {0,1} in a quantum circuit, then we can compute a superposition of multiple function values by supplying the appropriate input state:

\[
  f\left( \frac{|0\rangle + |1\rangle}{\sqrt{2}} \right)
  = \frac{|f(0)\rangle + |f(1)\rangle}{\sqrt{2}} . \tag{1.1}
\]

The resulting quantum state on the right-hand side is remarkable because it contains information about both return values of the function f(x) – despite the function only having been evaluated once, with a single quantum circuit. Note that this is in contrast to the current trend towards classical parallelism on silicon chips, which builds on multiple circuits. The essential difference is that quantum parallelism can make use of entangled multi-qubit states, allowing it to scale linearly to arbitrarily many qubits. The drawback, on the other hand, is that these solutions, while actually generated in parallel, only exist as a quantum superposition: Any conceivable measurement will cause a collapse of the wave function, extracting at most a classical amount of information. In fact, a direct measurement would just yield one of the solutions at random, which would be no improvement over a randomized classical solver. The best we can hope to do is to perform a carefully chosen measurement in such a way that it allows us to extract useful information about the superposition state.

A minimal example to demonstrate the power of quantum parallelism is the quantum coin toss: Given an unknown coin, the task is to determine whether it is fair or forged (i.e., whether the two sides are different or equal). Classically we need to perform two measurements – looking at either side once. In a quantum world, we can do better using what is known as Deutsch's algorithm [9]: Suppose the sides are represented by a function f(x): {0,1} → {0,1}, where the input specifies the side to be measured and the binary output means heads or tails. If the coin function f(x) is implemented in a quantum circuit, we can apply the unitary transformation U_f: |x, y⟩ → |x, y ⊕ f(x)⟩ to a suitable superposition input state

\[
  \left( \frac{|0\rangle + |1\rangle}{\sqrt{2}} \right)
  \left( \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right)
  \;\xrightarrow{\;U_f\;}\;
  \left( \frac{(-1)^{f(0)}|0\rangle + (-1)^{f(1)}|1\rangle}{\sqrt{2}} \right)
  \left( \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right) ,
\]

where the bit-wise addition on the second qubit, y ⊕ f(x), leaves its anti-symmetric state invariant for f(x) = 0 and introduces a sign for f(x) = 1. Since the sign change depends on the function values of f(x), the resulting state of the first qubit is symmetric when f(0) = f(1) and anti-symmetric otherwise. Furthermore, the overall phase is physically irrelevant and, because the two states are orthogonal, there exists a quantum measurement to distinguish them with certainty. Hence we can solve the task in just one quantum evaluation of f(x). Note that the carefully crafted two-qubit input state and function evaluation have caused quantum interference that sets one state's amplitude to zero. While we have only gained one bit of information [i.e., the parity of \(\sum_x f(x)\)], it is important to appreciate that it is a bit of summary information which – classically – would require all function values to be evaluated.
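The coin-toss argument above can be verified directly with linear algebra. The following sketch is not part of the thesis (the function and variable names are my own): it builds the unitary U_f as a permutation matrix and recovers the parity f(0) ⊕ f(1) from a single application of U_f.

```python
import numpy as np

def U_f(f):
    """Permutation matrix for the unitary |x, y> -> |x, y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0  # basis index |x,y> = 2x+y
    return U

def deutsch(f):
    """One evaluation of f decides whether the 'coin' f is fair (constant)."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)    # (|0> + |1>)/sqrt(2)
    minus = np.array([1.0, -1.0]) / np.sqrt(2)  # (|0> - |1>)/sqrt(2)
    state = U_f(f) @ np.kron(plus, minus)       # single quantum evaluation
    # Probability of finding the first qubit in |+> (projector |+><+| (x) 1):
    proj = np.kron(np.outer(plus, plus), np.eye(2))
    p_plus = float(state @ proj @ state)
    return p_plus > 0.5  # symmetric first qubit  <=>  f(0) == f(1)

assert deutsch(lambda x: 0) and deutsch(lambda x: 1)              # constant
assert not deutsch(lambda x: x) and not deutsch(lambda x: 1 - x)  # balanced
```

For a constant (fair) coin the first qubit remains |+⟩ with probability one, while for a balanced (forged) coin it ends up in the orthogonal state |−⟩, so the single measurement distinguishes the two cases with certainty.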

1.3 Building a Quantum Computer

Given the prospective quadratic or even exponential speedup due to quantum parallelism, the next question to ask is how to construct a device capable of harnessing its power. For a quantum system to be a suitable component, it should feature an ideal two-level quantum system that is well isolated from external influences, yet still highly accessible to allow for precise manipulations and measurements. Furthermore, it must allow for efficient multi-qubit coupling to implement controlled quantum gates, as well as to transfer the stored quantum state to a different qubit.

There are several proposals for building an actual physical qubit, all of which have their respective advantages and disadvantages. Cavity quantum electrodynamics focuses on quantized electromagnetic fields as a highly mobile carrier of quantum information, allowing for the use of many existing optical techniques. Because photons do not interact readily, a mediator is needed to have photons exchange quantum information. This can be achieved, for instance, by coherently coupling the photon to another quantum object in a single-mode cavity. In circuit quantum electrodynamics [23], this process takes place in an electronic circuit, where the photon is stored in an on-chip resonator and interacts with superconducting qubits. These are built from superconducting circuits coupled to Josephson junctions, where the quantum information is stored either as a charge (presence/absence of a Cooper pair) or in the magnetic flux. Despite the low temperatures required for superconductivity and to reduce thermal noise, this potent combination is one of the most promising candidates for quantum computation.

Another approach for the realization of a controllable quantum two-level system relies on storing the quantum information in stable electronic states of a charged atom. For the purpose of precise manipulations and interactions, these ions are trapped by means of a magnetic field [24, 25].
Radiation with electromagnetic pulses can then be used to accomplish single-qubit operations as well as to control interactions between neighboring atoms. The information stored in these suspended quantum systems has proven to be very stable, and single-qubit manipulations can be carried out reliably. Furthermore, coherent entanglement of up to 14 trapped ions has recently been demonstrated [26], making this another example of a viable architecture for quantum information processing. Its main challenge lies in scaling to larger numbers of qubits, because interactions are limited to nearby atoms within a linear trap.

Further approaches include electrons in a quantum dot [27] as well as controlling ensembles of molecules using nuclear magnetic resonance [28]. More recently, advancements have also been made in the concept of using Majorana fermions for quantum computation [29]. At the time of this writing, a hybrid system combining the desired properties from different

approaches appears to be the most probable to succeed. While this poses new problems in terms of engineering a reliable interface between different technologies, it would allow for highly stable storage entities, connected by a mobile "bus system" to units that allow for fast manipulations – each utilizing the approach best suited for the task. However, each of these components must operate reliably, underlining the absolute necessity of quantum error correction when trying to build such a device.

1.4 State of a Quantum Bit

A quantum two-level system can be in a superposition state that is an arbitrary linear combination of the chosen basis states. The following equivalent notations describe the single-qubit basis states and their dual counterparts:

\[
  |0\rangle = |{\uparrow}\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} , \qquad
  |1\rangle = |{\downarrow}\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} , \tag{1.2}
\]
\[
  \langle 0| = \langle{\uparrow}| = \begin{pmatrix} 1 & 0 \end{pmatrix} , \qquad
  \langle 1| = \langle{\downarrow}| = \begin{pmatrix} 0 & 1 \end{pmatrix} . \tag{1.3}
\]

The bra-ket notation is useful when calculating with quantum states: The inner product is expressed as ⟨φ|ψ⟩, which refers to a (vertical) state vector being multiplied by its dual (horizontal) counterpart. Two states are said to be orthogonal when their inner product vanishes, ⟨φ|ψ⟩ = 0. Furthermore, the outer product is useful for defining quantum projection operators: It can be written as |φ⟩⟨φ|, which applied to a state |ψ⟩ yields

\[
  (|\varphi\rangle\langle\varphi|)\,|\psi\rangle
  = |\varphi\rangle \underbrace{\langle\varphi|\psi\rangle}_{\text{scalar}} . \tag{1.4}
\]

This has both the interpretation of an operator being applied to the state (left-hand side) and of multiplying |φ⟩ with the complex scalar resulting from the inner product of |ψ⟩ and |φ⟩ (right-hand side). As an alternative pair of basis states, the following symmetric/anti-symmetric superposition states are frequently used in examples for quantum computation:

\[
  |+\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}
            = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} , \tag{1.5}
\]
\[
  |-\rangle = \frac{|0\rangle - |1\rangle}{\sqrt{2}}
            = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix} . \tag{1.6}
\]

The orthogonality of these two superposition states can be verified using the vector notation; their relative phases are 0 and π, respectively.
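These relations are easy to check numerically. The short sketch below (my own illustration, representing kets as NumPy vectors) confirms the orthogonality ⟨+|−⟩ = 0 and the projector identity of equation (1.4):

```python
import numpy as np

# Basis kets of eq. (1.2) and the superposition states of eqs. (1.5)-(1.6).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# <+|-> = 0: the two superposition states are orthogonal.
assert abs(np.vdot(plus, minus)) < 1e-12

# Eq. (1.4): the outer product |+><+| applied to |0> gives |+> times the
# scalar <+|0>.
projector = np.outer(plus, plus.conj())
lhs = projector @ ket0
rhs = plus * np.vdot(plus, ket0)
assert np.allclose(lhs, rhs)
```

Here `np.vdot` conjugates its first argument, which matches the convention that a bra is the conjugate transpose of the corresponding ket.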

Given a computational basis, the general state of a two-level quantum system can be any superposition of |0⟩ and |1⟩ with complex amplitudes α and β. The Bloch sphere representation [20] provides a geometrical interpretation of a single-qubit state: Because we know that αᾱ + ββ̄ = 1, the state can be written in terms of an overall phase δ, a relative phase φ ∈ [0, 2π) and a mixing angle θ ∈ [0, π]:

\[
  |\psi\rangle = \alpha |0\rangle + \beta |1\rangle
  = e^{i\delta} \left[ \cos(\theta/2)\, |0\rangle + e^{i\varphi} \sin(\theta/2)\, |1\rangle \right] . \tag{1.7}
\]

Note that cos(θ/2) is real and non-negative; i.e., both the imaginary part and the sign of α are absorbed in the overall phase e^{iδ}, which is physically meaningless. We can thus represent |ψ⟩ on a 2-sphere using θ as the inclination and φ as the azimuth:

\[
  |\psi\rangle \;\rightarrow\; \vec{r}
  = (\cos\varphi \sin\theta,\; \sin\varphi \sin\theta,\; \cos\theta) . \tag{1.8}
\]
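As an aside, the Bloch vector of equation (1.8) can equivalently be computed as the vector of Pauli expectation values rᵢ = ⟨ψ|σ̂ᵢ|ψ⟩ – a standard identity that the text does not spell out. A small numerical sketch (the function name is my own):

```python
import numpy as np

# Pauli matrices (introduced as eqs. (1.15)-(1.17) later in the chapter).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(theta, phi):
    """State of eq. (1.7) and its Bloch vector via Pauli expectation values."""
    psi = np.array([np.cos(theta / 2),
                    np.exp(1j * phi) * np.sin(theta / 2)])
    return np.real([np.vdot(psi, P @ psi) for P in (X, Y, Z)])

theta, phi = 0.7, 1.9
r = bloch_vector(theta, phi)
expected = (np.cos(phi) * np.sin(theta),   # eq. (1.8)
            np.sin(phi) * np.sin(theta),
            np.cos(theta))
assert np.allclose(r, expected)
```

In particular, θ = 0 reproduces the north pole (0, 0, 1) for |0⟩ and θ = π the south pole (0, 0, −1) for |1⟩.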

Defined in this manner, the north and south pole correspond to the basis vectors |0⟩ and |1⟩, respectively (see figure 1.1). Furthermore, the alternative basis states |±⟩ from equations (1.5) and (1.6) are mapped to opposite points on the equator at r⃗ = (±1, 0, 0). In fact, any two antipodal points on the sphere's surface represent orthogonal, pure quantum states. The Bloch sphere representation will be particularly useful in understanding the effect of single-qubit operations and error channels.

More exciting than the state of a single qubit are the properties that arise in multi-qubit systems. The state of a composite system consisting of independent qubits can be written as the tensor product of their states:

\[
  |+\rangle_1 \otimes |-\rangle_2
  = \frac{|0\rangle_1 + |1\rangle_1}{\sqrt{2}} \otimes \frac{|0\rangle_2 - |1\rangle_2}{\sqrt{2}}
  = \frac{1}{2} \left[ |00\rangle - |01\rangle + |10\rangle - |11\rangle \right] . \tag{1.9}
\]

However, not all composite states can be decomposed into a tensor product of individual qubit states. Consider for instance the following correlated state:

\[
  |\psi_{\mathrm{EPR}}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}} . \tag{1.10}
\]

We call this an entangled state because a measurement on the first qubit allows us to infer what a measurement on the second qubit will yield. The particular example in equation (1.10) is dubbed an Einstein-Podolsky-Rosen pair or Bell state; it finds application in examples of quantum teleportation [30] and superdense coding [31].
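The tensor-product structure is straightforward to reproduce with NumPy's Kronecker product. The sketch below (my own illustration) checks equation (1.9) and verifies that the EPR state of equation (1.10) fails the factorization condition ad = bc, which a two-qubit product state a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩ must satisfy:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# Eq. (1.9): |+>_1 (x) |->_2 = (|00> - |01> + |10> - |11>)/2.
product = np.kron(plus, minus)
assert np.allclose(product, np.array([1, -1, 1, -1]) / 2)

# Eq. (1.10): the EPR/Bell pair.  It cannot be written as a tensor product
# of single-qubit states, since a*d != b*c for its amplitudes.
epr = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
a, b, c, d = epr
assert not np.isclose(a * d, b * c)   # entangled: not factorizable
```

The convention is that `np.kron` orders the composite basis as |00⟩, |01⟩, |10⟩, |11⟩, matching the index ordering used in the text.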


Figure 1.1: The Bloch sphere is a geometrical representation of a single-qubit state. By convention, the north and south pole are identified with the basis states |0⟩ and |1⟩. Any superposition state is associated with a point on the sphere's surface, with its orthogonal state placed on the opposite side. The visualization is useful as an intuitive way of understanding single-qubit states as well as manipulations and error channels acting on them.

Indeed, entanglement is what enables quantum parallelism and makes this computing paradigm much more powerful than a classical computer: The state of a set of N classical bits describes a point in an N-dimensional, discrete vector space over Z₂. In particular, the dimension of the classical state space grows only linearly with the number of bits we add to the system. In contrast, for a set of N qubits we must consider the possibility that they could be in any superposition of the 2^N basis states. Therefore, each state represents a point in the Hilbert space H^{⊗N}, which has dimension 2^N. For instance, the following 8 = 2³ parameters can be used to describe a three-qubit state:

\[
\begin{aligned}
  |\psi\rangle = {} & \alpha_{000} |000\rangle + \alpha_{001} |001\rangle
                    + \alpha_{010} |010\rangle + \alpha_{100} |100\rangle \\
              + {} & \alpha_{110} |110\rangle + \alpha_{101} |101\rangle
                    + \alpha_{011} |011\rangle + \alpha_{111} |111\rangle .
\end{aligned} \tag{1.11}
\]

As a result qubits can, in principle, store exponentially more information between measurements, but only a classical amount can be extracted.

1.5 Manipulating Qubits

Manipulations in quantum mechanics are represented as linear operators. In bra-ket notation such an operator is a linear combination of outer products, and Â|ψ⟩ is used to denote the result of applying the operator Â to the state |ψ⟩:

\[
  \hat{A} |\psi\rangle = \hat{A}(|\psi\rangle)
  := \sum_i a_i\, |\varphi_i\rangle \langle \chi_i | \psi \rangle . \tag{1.12}
\]

The result is a linear combination of |φᵢ⟩ with each amplitude dictated by aᵢ and the inner products ⟨χᵢ|ψ⟩. If the resulting state is the same as the original one (up to a complex prefactor), Â|ψ⟩ = λ|ψ⟩, then |ψ⟩ is considered to be an eigenstate of operator Â with eigenvalue λ.

By selecting an operational basis, i.e., a set of states {|χᵢ⟩} that span a given Hilbert space H, each state can be expressed as a complex state vector v⃗, which denotes a linear combination of the basis states:

\[
  |\psi\rangle = \sum_j v_j\, |\chi_j\rangle . \tag{1.13}
\]

Furthermore, each operator Â can be represented by a matrix Aᵢⱼ acting on the state vector. Due to the linearity, the action of such an operator is completely determined by its action on the basis states,

\[
  \hat{A} \Big( \sum_j v_j\, |\chi_j\rangle \Big)
  = \sum_j v_j\, \hat{A}(|\chi_j\rangle)
  = \sum_j v_j \sum_i A_{ij}\, |\chi_i\rangle . \tag{1.14}
\]
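Equation (1.14) implies that a matrix representation can be assembled column by column from the operator's action on the basis kets. A minimal sketch of this construction (function and variable names are my own):

```python
import numpy as np

def operator_matrix(apply_op, dim):
    """Assemble A_ij column by column from the operator's action on basis kets."""
    A = np.zeros((dim, dim), dtype=complex)
    for j in range(dim):
        e_j = np.zeros(dim)
        e_j[j] = 1.0            # basis ket |chi_j>
        A[:, j] = apply_op(e_j)  # its image becomes column j of A
    return A

# Example: the qubit-flip operation |0> <-> |1>, given only as a function.
flip = lambda v: v[::-1]
A = operator_matrix(flip, 2)

# Linearity, eq. (1.14): A(sum_j v_j |chi_j>) = sum_j v_j A|chi_j>,
# i.e. the ordinary matrix-vector product A @ v.
v = np.array([0.3, 0.4])
assert np.allclose(A @ v, flip(v))
```

The reconstructed matrix is of course the qubit-flip matrix that appears as equation (1.15) below.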

The basic single-qubit manipulations can be represented as 2×2 matrices, which are usually referred to as qubit-flip (X̂), phase-flip (Ẑ) and their combination (Ŷ = iX̂Ẑ). They correspond to the Pauli matrices:

\[
  \hat{X} = \hat{\sigma}_X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} , \tag{1.15}
\]
\[
  \hat{Y} = \hat{\sigma}_Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} , \tag{1.16}
\]
\[
  \hat{Z} = \hat{\sigma}_Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} . \tag{1.17}
\]

The X̂ operator exchanges the amplitudes of |0⟩ and |1⟩, which is why it is referred to as a qubit-flip gate. Furthermore, the states |+⟩ and |−⟩ are eigenvectors of σ̂_X with eigenvalues +1 and −1, respectively. Hence these two states are unaffected by X̂ and are mapped onto themselves


Figure 1.2: Effect of Pauli gates on the Bloch sphere: The qubit-flip gate X̂ can be visualized as a rotation about the x-axis by an angle of π. Therefore the basis states |0⟩ and |1⟩ are exchanged as expected. Furthermore, we note that the eigenstates |+⟩ and |−⟩ of the Pauli X̂ gate are not affected by this rotation. In a similar way, the Pauli gates Ŷ and Ẑ can be visualized as rotations about the y- and z-axis, respectively.

when considering the action of the operator on the Bloch sphere. For a general state,

    |ψ⟩ = α|0⟩ + e^{iφ} β|1⟩  →  β|0⟩ + e^{−iφ} α|1⟩ ,    (1.18)

the effect of the X̂-gate corresponds to a rotation of the Bloch sphere by π about the x-axis (see figure 1.2). Similarly, the Ŷ and Ẑ operators correspond to rotations about the y- and z-axis, respectively.

The peculiarity of quantum mechanics lies in the implicit influence of the observer: a system's behavior depends on whether and how it is measured during the course of its evolution. This phenomenon is referred to as the collapse of the wave function. When an operator is applied as a measurement, the outcome can only be one of the operator's eigenvalues, and the post-measurement state is the eigenstate corresponding to this eigenvalue. For instance, the Pauli operator Ẑ has the eigenstates |0⟩ and |1⟩ with eigenvalues ±1, respectively:

    Ẑ|0⟩ = +|0⟩ ,   Ẑ|1⟩ = −|1⟩ .    (1.19)
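As a quick numerical check (our own numpy sketch, not part of the thesis), the matrices of eqs. (1.15)–(1.17) and the eigenstate relations discussed above can be verified directly:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)     # eq. (1.15)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)  # eq. (1.16)
Z = np.array([[1, 0], [0, -1]], dtype=complex)    # eq. (1.17)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

assert np.allclose(X @ ket0, ket1)       # X exchanges the amplitudes of |0>, |1>
assert np.allclose(X @ plus, plus)       # |+> is a +1 eigenstate of X
assert np.allclose(X @ minus, -minus)    # |-> is a -1 eigenstate of X
assert np.allclose(Y, 1j * X @ Z)        # Y = i X Z
assert np.allclose(Z @ ket0, ket0) and np.allclose(Z @ ket1, -ket1)  # eq. (1.19)
```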

Quantum Computing

For a measurement of Ẑ, the possible outcomes are therefore ±1, and the post-measurement state will be |0⟩ if +1 is measured, and |1⟩ for −1. Therefore, a measurement of Ẑ can be used to distinguish the basis states |0⟩ and |1⟩.

Furthermore, when multiple eigenstates of an operator share the same eigenvalue λ, they are considered to span an eigenspace of states. Note that, from the degenerate measurement outcome, the states sharing an eigenvalue cannot be distinguished. Therefore there is no collapse of the wave function with respect to this subspace. This is used in projective measurements, which are used frequently in quantum error correction. They are constructed from projection operators which project a quantum state onto the subspace spanned by a set of orthogonal states {|φᵢ⟩}:

    P̂ := Σ_i |φᵢ⟩⟨φᵢ| .    (1.20)

Because, by definition, all |φᵢ⟩ are eigenstates of P̂ with eigenvalue +1, the corresponding eigenspace is spanned by all states |φᵢ⟩, and the result of applying P̂ is a projection of the state into this eigenspace. By combining multiple such projection operators with different eigenvalues, the Hilbert space of a quantum system can be decomposed into several orthogonal eigenspaces.

Finally, to achieve quantum computation, we also require a way to have multiple qubits interact. The quantum CNOT-gate flips a target qubit based on the state of a control qubit [20]. Its 4 × 4 matrix representation can be written as:

    U_CNOT = ( 1 0 0 0
               0 1 0 0
               0 0 0 1
               0 0 1 0 ) .    (1.21)

Equivalently, we can interpret the gate as a binary addition (modulo 2) on the second qubit: |φ, ψ⟩ → |φ, ψ ⊕ φ⟩. In combination with arbitrary single-qubit operations, the CNOT-gate can be used to construct any conceivable multi-qubit operator [20].
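The mod-2 interpretation of the CNOT-gate can be checked with a small numpy sketch (the `basis` helper is ours, introduced only for illustration):

```python
import numpy as np

# The CNOT matrix of eq. (1.21).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def basis(c, t):
    """Two-qubit computational basis state |c,t> as a length-4 vector."""
    v = np.zeros(4, dtype=complex)
    v[2 * c + t] = 1.0
    return v

# CNOT acts as binary addition modulo 2 on the target: |c,t> -> |c, t XOR c>.
for c in (0, 1):
    for t in (0, 1):
        assert np.allclose(CNOT @ basis(c, t), basis(c, t ^ c))
```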


1.6  Quantum Error Channels

Quantum superposition states are a rather fragile occurrence: While quantum mechanics dictates that a system's evolution must be unitary (and thus reversible), this requirement only holds for a perfectly isolated quantum system. When allowed to interact with its environment, only the composite system adheres to the rule. This can convey the impression that the subsystem evolved non-unitarily whenever an exchange of information with the environment takes place. This phenomenon is referred to as decoherence; its symptoms are not unlike the effect of a quantum measurement that causes the system's wave function to collapse. The post-decoherence state may differ from the original one or even be replaced by an entirely unknown state. Because quantum systems can never be perfectly isolated, dealing with decoherence represents a big challenge in trying to build reliable quantum computing devices.

For the analysis and discussion of different strategies to cope with decoherence, it is important to have a concise description of the type and frequency of errors that can occur. Since the need for classical error correction had its origin in unreliable communication channels, error types are referred to as noisy (quantum) channels in information theory [32]. The typical assumption is that individual errors are random and uncorrelated, both in time and location. The extent to which an error channel disturbs a set of qubits can be quantified either by the maximum number of affected qubits or by a constant qubit error rate p. While perfect error correction can be achieved in the former scenario (by adding as many qubits as needed), we can only hope to reduce the error rate using a code in the latter situation. For a small error rate, even though concurrent errors on all qubits are possible, such an event is highly unlikely (probability p^N for p ≪ 1). Therefore, we consider an error channel to be correctable if restoring the original information succeeds with very high probability.
Describing the effect of an error channel is more convenient in the density operator formulation of quantum mechanics, because it provides simple means to describe a state which is not completely known. In particular, it will allow us to write the output of an error channel as a mixed state of the possible error channel results. The density operator ρ describes a mixed state as consisting of a weighted combination of several pure states |ψᵢ⟩:

    ρ = Σ_i p_i |ψᵢ⟩⟨ψᵢ| .    (1.22)

Equivalently, this can also be interpreted as a statistical ensemble of states, where the pᵢ denote the fraction of ensemble states that are in state |ψᵢ⟩. Thus, if the effect of an error channel is measured many times, the ensemble interpretation provides an explanation for the distribution

of measurement outcomes. Note that superposition states such as |+⟩ are not mixed states, because we have complete information about the qubit state (both amplitude and the relative phase), and all the states in the corresponding ensemble would be identical. On the other hand, an unknown state which was prepared as |ψ⟩ or |ψ′⟩ with respective probabilities p and (1 − p) is best described as a mixed state (or ensemble) with density operator

    ρ = p |ψ⟩⟨ψ| + (1 − p) |ψ′⟩⟨ψ′| .    (1.23)
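The distinction between a superposition and a mixture can be made concrete with a short numpy sketch (our own illustration, not from the thesis): the purity Tr ρ² equals 1 only for pure states.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

rho_pure = np.outer(plus, plus.conj())                 # the superposition |+><+|
rho_mixed = 0.5 * np.outer(ket0, ket0.conj()) \
          + 0.5 * np.outer(ket1, ket1.conj())          # eq. (1.22) with p_i = 1/2

def purity(rho):
    return np.trace(rho @ rho).real

assert np.isclose(purity(rho_pure), 1.0)    # pure state: complete information
assert np.isclose(purity(rho_mixed), 0.5)   # completely mixed qubit
```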

In order to interpret such mixed quantum states in the picture of the Bloch sphere, the matrix representation of the density operator ρ can be expanded in terms of the identity matrix 1̂ and the Pauli matrices σ⃗ = (σ_X, σ_Y, σ_Z) [20]:

    ρ = (1̂ + r⃗ · σ⃗) / 2 = 1/2 ( 1 + r_z        r_x − i r_y
                                  r_x + i r_y    1 − r_z    ) .    (1.24)

The real, three-dimensional vector r⃗ = (r_x, r_y, r_z) in this definition maps pure states to their previously defined coordinates on the Bloch sphere's surface, but also allows for the identification of the Bloch sphere's interior with mixed quantum states.

A general quantum error channel is a mapping E: ρ → ρ′ which acts on an ensemble of states. It can be represented by a set of operation elements {Eᵢ},

    E(ρ) = Σ_i p_i Eᵢ ρ Eᵢ† ,    (1.25)

where pᵢ represents the probability for Eᵢ to act on the ensemble. Error correction then corresponds to finding a mapping R that, when composed with the error mapping, acts trivially on any code state:

    (R ∘ E)(ρ) = ρ .    (1.26)

As an example, consider the qubit-flip channel, which is the quantum analogue of the classical bit-flip channel. The qubit-flip gate, described by X̂, exchanges the states |0⟩ and |1⟩ by performing a rotation by π about the x-axis. A qubit affected by the qubit-flip channel will thus remain unharmed with probability (1 − p), while having a qubit-flip gate X̂ applied with probability p:

    E_flip(ρ) = (1 − p) 1̂ρ1̂ + p X̂ρX̂ .    (1.27)

The operation elements of the qubit-flip channel are therefore 1̂ and X̂.


Figure 1.3: Effect of the qubit-flip channel on the Bloch sphere: Identification of the Bloch sphere's inside with mixed quantum states allows for the visualization of the qubit-flip channel with probability p as shrinking the Bloch sphere by a factor of (1 − 2p) along both the y- and z-axis. This nicely illustrates that the eigenstates |±⟩ of X̂ are not affected by the qubit-flip channel – apart from a global phase that is attained.

The effect on the Bloch sphere can be analyzed by applying the operation elements to the Bloch sphere representation (1.24) of a density operator. For the basis states, the resulting mixed ensemble of states lies on the z-axis between the two poles. At a qubit-flip rate of p = 0.5, the output is completely unknown, and therefore the basis states are mapped onto the center, which is the completely mixed state. On the other hand, the eigenstates |+⟩ and |−⟩ of the X̂ operator are not affected by X̂, resulting in an overall rescaling of the Bloch sphere along both the y- and z-axis, but not along the x-axis (see figure 1.3).

The depolarizing channel is a more realistic type of noise that takes into account possible correlations between qubit-flip and phase-flip errors: With probability (1 − p) the state remains unharmed, while each of the possible errors X̂, Ŷ and Ẑ occurs with probability p/3. Because a Ŷ-gate is a combined flip and phase error, this introduces correlations, and the corresponding channel is given by:

    E(ρ) = (1 − p) ρ + (p/3) ( X̂ρX̂ + ŶρŶ + ẐρẐ ) .    (1.28)
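A small numpy sketch (our own helper functions, not part of the thesis) makes the geometric picture quantitative: applying the operation elements of eqs. (1.27) and (1.28) to a Bloch-vector state shows that the qubit-flip channel leaves r_x untouched while scaling r_y and r_z by (1 − 2p), and that the depolarizing channel shrinks the whole vector isotropically by (1 − 4p/3).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_bloch(r):
    """Density matrix of eq. (1.24) for a Bloch vector r = (rx, ry, rz)."""
    return 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)

def bloch_from_rho(rho):
    return np.array([np.trace(rho @ P).real for P in (X, Y, Z)])

def flip_channel(rho, p):          # eq. (1.27)
    return (1 - p) * rho + p * X @ rho @ X

def depolarizing(rho, p):          # eq. (1.28)
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

r = np.array([0.3, 0.4, 0.5])
# Qubit-flip: r_x unchanged, r_y and r_z shrink by (1 - 2p).
assert np.allclose(bloch_from_rho(flip_channel(rho_from_bloch(r), 0.25)),
                   [0.3, 0.2, 0.25])
# Depolarizing: the whole Bloch vector shrinks by (1 - 4p/3) = 0.6 for p = 0.3.
assert np.allclose(bloch_from_rho(depolarizing(rho_from_bloch(r), 0.3)), 0.6 * r)
```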

Figure 1.4: Effect of the depolarizing channel on the Bloch sphere: The depolarizing channel has the same probability for each type of error to occur. Consequently the effect on the Bloch sphere is isotropic: the sphere is rescaled by a factor of (1 − 4p/3). Since the depolarizing channel also has the interpretation of replacing a qubit with a completely mixed state, its outcome will always be an ensemble that is closer to the Bloch sphere's center than the original density operator.

Equivalently, the depolarizing channel can also be described as replacing the state with a completely mixed state with a chance of p′ = 4p/3 [20]. The effect of this error channel on the Bloch sphere can therefore be thought of as isotropically shrinking the sphere (see figure 1.4).

Errors arising from faulty channels such as the qubit-flip and the depolarizing channel complicate the use of quantum computers, but as will be shown in chapter 2, they are not entirely unmanageable with proper error-correction techniques in place. Unfortunately, the measurements and manipulations carried out during the error-correction procedure itself are also quantum operations, and as such they are equally susceptible to disturbance. Consequently, even if we have the best error-correction strategy in place, a faulty measurement can distort our perception of whether and where an error happened, leading to a faulty error-correction procedure. This has forced the development of new strategies to deal with the added difficulty arising from possible measurement errors; a concept which is summarized under the term of fault-tolerant quantum computation [33, 34, 35]. The performance of topological codes under this error channel is analyzed in chapter 7.

Chapter 2

Quantum Error Correction

“The impossible often has a kind of integrity to it which the merely improbable lacks.”
— Douglas Adams

An error-correction code is a scheme to prevent loss of information due to an error channel. Whenever information is stored or transmitted on a fragile carrier, the chance of degradation can be reduced greatly by storing the information redundantly. Then, as long as the damage caused by the error channel is not too severe, error recovery is possible by reconstructing the missing information from the redundant parts. In classical storage media, stability to errors is achieved at the hardware level, for instance, by (redundantly) magnetizing a whole patch on the hard disk, rather than just a single atom. Topological quantum error correction tries to accomplish the same by encoding the logical information on a surface of non-trivial topology. Because the code states are separated by global excitations, the quantum information stored in such a topological system is well protected as long as errors remain small and local. Indeed, these topological stabilizer codes are currently amongst the best candidates for efficiently dealing with decoherence in a quantum computer, in particular due to the locality of all the operations involved in the error-correction process.

The analysis of an error-correction code covers the likelihood and types of errors which can be dealt with, as well as the consideration of possible undetectable errors. After outlining these concepts for a classical code, this chapter explains how they can be adapted to a quantum world, followed by a detailed discussion of topological quantum error-correction codes and their relation to classical spin models.

2.1  Classical Codes

Concepts from classical error-correction theory are already used abundantly in current technology: Since digital information can easily be duplicated, simply having one or two backups at hand is sufficient to safeguard against potential failure – a strategy that is coined a “repetition code”. In fact, smarter error-correction codes even allow us to detect and correct errors on the spot, using only a few extra bits [36, 37].

In order to discuss classical error-correction codes, a notion of distance is instrumental: The Hamming distance of two states is the number of bits by which they differ in their binary notation. With this definition, every bit-flip error will move the state by a distance of one. In general, the addition of redundant bits effectively increases the dimensionality (and thus the volume) of the space of possible states H. This allows for the limitation of all operations to a suitable code subspace C ⊂ H of valid code states (see figure 2.1).

To construct, for instance, a classical code that stores the logical information of four bits in seven physical bits, we need to choose 16 = 2⁴ states from all possible 7-bit states which we deem valid code states. This is essentially a mapping of all possible 4-bit states ψ ∈ Z₂^⊗4 to a subset C ⊂ Z₂^⊗7 =: H of all possible states. In doing so, we use three extra bits to redundantly store the information. An example of how these states can be chosen is shown in table 2.1.

    0000 → 0000000        0001 → 1101001
    1000 → 1110000        1001 → 0011001
    0100 → 1001100        0101 → 0100101
    1100 → 0111100        1101 → 1010101
    0010 → 0101010        0011 → 1000011
    1010 → 1011010        1011 → 0110011
    0110 → 1100110        0111 → 0001111
    1110 → 0010110        1111 → 1111111

Table 2.1: Example for encoding the logical information of four bits (2⁴ = 16 values) in seven physical bits, known as the Hamming(7,4) code [36]. For the above translation table, only 16 of the 2⁷ 7-bit states were selected to represent valid code states that are used to store information. They can be verified by checking that the parity of the bit-groups (1,3,5,7), (2,3,6,7) and (4,5,6,7) is even. The remaining states are error states which can only occur if a bit of a code state is flipped. Notice that all the code states differ by at least three bits (i.e., the minimum Hamming distance is d = 3). As a result, recovery from any single flip error is possible: because bit flips can only move the state by a distance of one, the resulting error state always has a unique closest neighbor in the code subspace.

Without the encoding, any 4-bit state ψ₄ ∈ Z₂^⊗4 that is


Figure 2.1: Code states of a classical code: Redundantly encoding information corresponds to selecting a few valid code states C from a larger state space H. (a) An error state is said to be correctable if the original code state can be uniquely identified. If d describes the minimum distance between any two code states, then error correction is possible when the number of errors is less than d/2: the resulting state will always be closer to the original code state than to any other code state. (b) If the number of errors is between d/2 and d, errors can still be detected reliably (because no other code state can be reached in less than d steps), but recovery is not possible. (c) For d or more errors, there is a possibility for errors to cause a direct change between code states. This is considered an undetectable error.

affected by a bit-flip error channel would be transformed into another 4-bit state,

    E₄(ψ): ψ₄ → ψ₄′ ∈ Z₂^⊗4 ,    (2.1)

thus corrupting the information. For encoded information, the stability to errors depends on the choice of the code states: Ideally, all encoded states are surrounded¹ by states outside the code, such that any single bit-flip error E: ψ₇ → ψ₇′ transforms a code state ψ₇ ∈ C into a neighboring non-valid state ψ₇′ ∈ H \ C. For the code in table 2.1, single bit-flip errors cannot transform one valid code state into another, because they were carefully chosen to differ by at least three bits. This property, the minimal Hamming distance of any pair of code states, is referred to as a code distance of d = 3. As a result, whenever we find a state that is not from the list of valid code states, we know that an error has occurred.

¹ The surrounding of a state is the set of all states which have a distance of 1.

Furthermore, since the code distance in this case is d = 3 and every bit-flip moves the state by a distance of one, the code can detect up to d − 1 = 2 errors with certainty (see figure 2.1). Error recovery, on the other hand, is only feasible when the output of the error channel allows for the unique identification of the original state (i.e., the error state is guaranteed to be closer to the original state than to any other code state). This is equivalent to requiring that the effect of the error channel cannot map two different code states to the same error state. For the code described in table 2.1, this translates to ⌈d/2⌉ − 1 = 1 bit-flip errors being correctable, while recovery from multiple errors could potentially change the code state and thus affect the encoded information. Furthermore, a number of errors equal to the code distance d = 3 can directly transform one code state into another, which is referred to as an undetectable error.

The important realization is that the total volume of the state space H grows exponentially with every bit, while the number of neighbor states only increases polynomially (for a fixed number of errors). Therefore we can always achieve error correction by adding an adequate number of redundant bits. A large variety of classical error-correction codes has been developed [36, 38, 37] with numerous applications in the electronics industry, ranging from compact disks to transmission protocols, as well as two-dimensional bar-codes (see figure 3.1).
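The parity-group structure of table 2.1 can be sketched in a few lines of Python (the function names are ours, introduced for illustration): encoding places the parity bits at positions 1, 2 and 4, and the syndrome of a corrupted word spells out the position of a single flipped bit.

```python
# Parity groups of the Hamming(7,4) code from table 2.1 (1-indexed positions).
GROUPS = [(1, 3, 5, 7), (2, 3, 6, 7), (4, 5, 6, 7)]

def encode(d1, d2, d3, d4):
    w = [0, 0, d1, 0, d2, d3, d4]          # data bits at positions 3, 5, 6, 7
    w[0] = w[2] ^ w[4] ^ w[6]              # parity bit p1 at position 1
    w[1] = w[2] ^ w[5] ^ w[6]              # parity bit p2 at position 2
    w[3] = w[4] ^ w[5] ^ w[6]              # parity bit p3 at position 4
    return w

def correct(w):
    # The three group parities, read as a binary number, give the
    # (1-indexed) position of a single flipped bit; 0 means no error.
    pos = sum(2**k for k, g in enumerate(GROUPS)
              if sum(w[i - 1] for i in g) % 2)
    if pos:
        w = list(w)
        w[pos - 1] ^= 1
    return w

word = encode(1, 1, 0, 1)                  # 1101 -> 1010101, cf. table 2.1
assert ''.join(map(str, word)) == '1010101'
broken = list(word)
broken[4] ^= 1                             # flip one physical bit
assert correct(broken) == word             # any single flip is corrected
```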

2.2  Quantum Repetition Codes

While classical bits can only suffer from a single discrete type of error – the bit-flip – the task of error correction turns out to be much more delicate in a quantum world. Because the internal state of a qubit has multiple continuous degrees of freedom, there exists a whole class of possible errors that could occur. Furthermore, it turns out that quantum information, contrary to its classical counterpart, cannot be duplicated. This circumstance is known as the quantum no-cloning theorem [39, 40], and it is the main reason why error correction is such a challenging task.

P. Shor demonstrated [21] that error correction is possible in spite of these difficulties. His 9-qubit code encodes the logical information of one qubit such that recovery from an arbitrary single-qubit error is possible. While inspired by classical repetition codes, the strategy has to account for the constraints of quantum measurements: We cannot directly measure the individual qubits to detect errors, as this would cause their superposition state to collapse. The basic idea can be demonstrated with the following simplified three-qubit repetition code that can correct individual qubit-flip errors. Given an arbitrary state |ψ⟩ = α|0⟩ + β|1⟩,

we generate an entangled three-qubit quantum state of the form

    |ψ⟩ → α|000⟩ + β|111⟩ .    (2.2)

Note that this is not copying of the state, but rather a distribution of the information to all qubits by encoding it in an entangled subspace. In analogy to the code states in classical codes, we consider the subspace with basis vectors |000⟩ and |111⟩ to be the code subspace of the whole Hilbert space H^⊗3. The goal is to find measurements which allow us to test whether a state is in the code subspace, without actually measuring individual qubits. This can be achieved by projective two-qubit measurements of the form

    Ŝ = |00⟩⟨00| + |11⟩⟨11| − |01⟩⟨01| − |10⟩⟨10| .    (2.3)
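A short numpy check (our own sketch, not from the thesis) confirms that the operator of eq. (2.3), applied to qubits one and two, leaves the encoded state untouched while flagging a flip:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# The operator of eq. (2.3) equals Z (x) Z; tensoring with the identity on
# the third qubit gives its action on the full three-qubit code.
S12 = np.kron(np.kron(Z, Z), I2)

a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0b000], psi[0b111] = a, b                      # encoded a|000> + b|111>
assert np.allclose(S12 @ psi, psi)                 # eigenvalue +1: no collapse

X1 = np.kron(X, np.kron(I2, I2))                   # flip the first qubit
assert np.allclose(S12 @ (X1 @ psi), -(X1 @ psi))  # eigenvalue -1: error seen
```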

When applied to the first two qubits in the code, the measured eigenvalue λ₁₂ of Ŝ₁₂ will be positive when the qubits agree, and negative when they differ. Since all states in the code space described by equation (2.2) have their qubits aligned, this allows us to infer whether a qubit-flip error occurred on one of the two qubits. However, because the eigenvalue −1 is degenerate, we cannot determine which qubit was affected by the channel. Likewise, because the eigenvalue +1 is degenerate, we cannot distinguish code states, and the encoded state does not collapse when the measurement is applied. Furthermore, by also applying the same check operator Ŝ₂₃ to qubits two and three, we can gain enough information to locate the error. This can be seen by considering the effect of a single qubit-flip channel on the encoded state. The possible outcomes are given by

    E(|ψ⟩) =  α|100⟩ + β|011⟩   (1st qubit flipped)
              α|010⟩ + β|101⟩   (2nd qubit flipped)
              α|001⟩ + β|110⟩   (3rd qubit flipped)
              α|000⟩ + β|111⟩   (no error) .    (2.4)

Note that the measured eigenvalues λ₁₂, λ₂₃ ∈ {±1} have different values for all four possibilities. These sets of eigenvalues correspond to four eigenspaces of the Hilbert space,

    H^⊗3 = C₊₊ ⊕ C₋₊ ⊕ C₋₋ ⊕ C₊₋ ,    (2.5)

where the first one (corresponding to λ₁₂ = λ₂₃ = +1) is the code subspace. Thus, the effect of measuring the check operators Ŝ₁₂ and Ŝ₂₃ is the projection of the state onto one of these subspaces, with the measured eigenvalues denoting which one.

If the number of errors is constant, this three-qubit code can correct exactly one qubit-flip error on an arbitrary physical qubit. For an error

rate of p per qubit, the chance of successful error correction is given by p′ = (1 − p)³ + 3(1 − p)²p. For error rates p < 0.5, this provides improved reliability compared to using a single physical qubit to hold the state. This gain can be increased by repeatedly applying the same strategy to further encode the physical qubits. Through this code concatenation we can, in principle, reach arbitrary precision in quantum manipulations. Of course, this does not account for the increased complexity of handling a large number of qubits, which might offset the gain in reliability.

For a general code, the error threshold theorem [35] asserts that a quantum computer built from flawed components can still reliably store [41] and even compute [42] with quantum information, provided that the effects of all errors are sufficiently small per qubit and computational step. This requirement is expressed as a maximum tolerable error rate, the critical error threshold. Below this limit we can, in principle, reach an arbitrary precision in our computation by concatenating the code multiple times.
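Since only bit-flip errors act, the three-qubit code can be simulated classically; the following Monte Carlo sketch (our own illustration) reproduces the success probability p′ = (1 − p)³ + 3(1 − p)²p quoted above via majority voting:

```python
import random

# Majority voting succeeds whenever at most one of the three qubits flips.
def recovered(p, rng):
    flips = sum(rng.random() < p for _ in range(3))
    return flips <= 1

p = 0.1
rng = random.Random(42)
trials = 200_000
empirical = sum(recovered(p, rng) for _ in range(trials)) / trials
exact = (1 - p)**3 + 3 * (1 - p)**2 * p     # = 0.972 for p = 0.1
assert abs(empirical - exact) < 0.005
```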

2.3  Stabilizer Formalism

Analogous to the code words in a classical error-correction scheme, a quantum code has a representation of the logical values as code states. The stabilizer formalism [43, 44] allows for the systematic construction of a code subspace, given a number of physical qubits. Let Pₙ be the group of tensor products of single-qubit Pauli operators for each of the n physical qubits. A state |ψ⟩ is said to be fixed by an operator Ŝ ∈ Pₙ if it satisfies

    Ŝ|ψ⟩ = +|ψ⟩ .    (2.6)

This allows for the definition of the stabilizer S of a code C as the Abelian group of all operators which fix all code states |ψ⟩ ∈ C:

    S = { Ŝ ∈ Pₙ | Ŝ|ψ⟩ = +|ψ⟩  ∀ |ψ⟩ ∈ C } .    (2.7)

In fact, it is sufficient to show that requirement (2.6) holds for a set {Ŝᵢ} of independent stabilizer generators, which can be used to write any element in S as a product. Equivalently, starting from a set of k check operators {Ŝᵢ} ⊂ Pₙ, the Hilbert space H^⊗n of n qubits can be partitioned into 2^k subspaces according to the eigenvalues of each check operator Ŝᵢ:

    H = ⊕_{λ} C_{λ} ,    (2.8)

where each subspace is labeled by the set of measurement outcomes {λ}, called the error syndrome, and the code space is the simultaneous +1 eigenspace of the generators (by convention). The very same check

operators {Ŝᵢ} can then be used for error detection by measuring the error syndrome: only an eigenvalue of +1 for all check operators signals a valid code state.

A code's degeneracy describes the association of code states with the encoded logical information. If the mapping is one-to-one, i.e., each logical state corresponds to exactly one code state, the code is called non-degenerate, and any change of code state must be prevented to protect the encoded information. For a degenerate code, multiple code states correspond to the same logical information, and a change between these states does not affect the encoded quantum information.

In this picture, the Pauli operators share the property that they map eigenspaces onto eigenspaces. In order to discuss their effect on the code subspace, the notion of the code's normalizer is important: The normalizer N(S) is the group of all operators P̂ ∈ Pₙ for which commuting past any element of S yields another (possibly the same) element of S:

    N(S) = { P̂ ∈ Pₙ | ∀ Ŝ ∈ S ∃ Ŝ′ ∈ S : P̂Ŝ′ = ŜP̂ } .    (2.9)

Since such an operator will not change the sign of the measured eigenvalues, all operators in N(S) cause an undetectable change of code state. For a degenerate code, only some of these constitute actual errors affecting the encoded information, while others act trivially on the encoded information. Errors can thus be divided into three categories:

• Operators in Pₙ \ N(S) map the code subspace to another eigenspace and are thereby detectable, as their effect alters the error syndrome. They are also correctable if the target eigenspace is unique among all possible errors.

• Elements in the normalizer N(S) do not affect the error syndrome because they map the code subspace onto itself. But if they are not in the stabilizer, they act on the code space non-trivially. These are considered undetectable errors.

• Finally, elements in the stabilizer S are undetectable and act trivially on the code subspace by definition.

Each of the check operators which generate a stabilizer can be thought of as a projective measurement into eigenspaces. Since the spectrum of operators consisting of Pauli matrices is degenerate and consists only of the values ±1, each check operator has two such eigenspaces. Because we have defined the code subspace to be the simultaneous +1-eigenspace, the measurement of a negative eigenvalue will signal that an error has occurred and the state has been projected to an eigenspace outside the code.

The projective measurements demonstrated for the three-qubit code in equation (2.3) correspond to the check operators Ẑ₁Ẑ₂ and Ẑ₂Ẑ₃. As another illustration, consider the following set of k = 4 check operators which generate the stabilizer for a 5-qubit code that can correct both qubit-flip and qubit-phase errors:

    Ŝ₁ = Î₁ X̂₂ Ẑ₃ Ẑ₄ X̂₅
    Ŝ₂ = X̂₁ Î₂ X̂₃ Ẑ₄ Ẑ₅
    Ŝ₃ = Ẑ₁ X̂₂ Î₃ X̂₄ Ẑ₅
    Ŝ₄ = Ẑ₁ Ẑ₂ X̂₃ Î₄ X̂₅ .    (2.10)

The code encodes the logical information of n = 1 qubit using k = 4 extra qubits (i.e., in n + k = 5 physical qubits). Besides the pure case, there are 3(n + k) possible errors ({X̂ᵢ, Ŷᵢ, Ẑᵢ} on each qubit), compared to 2^k distinct error syndromes. It is straightforward to verify that any single-qubit error has a unique error syndrome for the four stabilizers specified above. Indeed, we can see that the code is optimal for n = 1 qubit: the 2^k possible error syndromes cannot distinguish and identify the 3(k + 1) + 1 different error situations for k < 4 (i.e., at least 4 extra qubits are needed for reliable encoding).
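That every single-qubit error indeed produces a unique syndrome can be verified mechanically. The sketch below (our own, using the binary-symplectic representation of Pauli operators rather than explicit 32 × 32 matrices) checks that the identity plus the 15 single-qubit errors yield all 16 = 2⁴ distinct syndromes:

```python
# Represent a Pauli string by a pair (x, z) of 5-bit masks; two Pauli
# operators anticommute iff the symplectic product x1&z2 + z1&x2 is odd.
def pauli(s):
    x = sum(1 << i for i, c in enumerate(s) if c in 'XY')
    z = sum(1 << i for i, c in enumerate(s) if c in 'ZY')
    return x, z

# The four check operators of eq. (2.10): S1 = I X Z Z X, and so on.
STABILIZERS = [pauli(s) for s in ('IXZZX', 'XIXZZ', 'ZXIXZ', 'ZZXIX')]

def syndrome(err):
    """4-bit syndrome: bit k is set iff the error anticommutes with S_{k+1}."""
    ex, ez = pauli(err)
    bits = 0
    for k, (sx, sz) in enumerate(STABILIZERS):
        if (bin(sx & ez).count('1') + bin(sz & ex).count('1')) % 2:
            bits |= 1 << k
    return bits

base = 'IIIII'
errors = [base] + [base[:i] + P + base[i + 1:] for i in range(5) for P in 'XYZ']
syndromes = [syndrome(e) for e in errors]
assert len(set(syndromes)) == 16   # identity + 15 single-qubit errors: distinct
```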

2.4  Topological Stabilizer Codes

The threshold theorem [35] establishes that quantum computation can be carried out with arbitrarily high precision if the qubit error rate does not exceed a critical value. The increased reliability in spite of errors is achieved by employing concatenated codes – codes nested within other codes [45] – until the desired precision is reached. However, this construction of combined codes heavily relies on the unrealistic assumption that multi-qubit gates can be applied reliably to any set of physical qubits regardless of spatial separation. Fortunately, it has been shown that the threshold theorem still applies even if quantum operations are restricted to qubits which are geometrically close [35, 46].

A particularly interesting class of codes which are intrinsically local are topological error-correction codes [47, 48, 49, 50, 29, 51]. By encoding the logical quantum state on a topologically non-trivial surface, these codes are able to attain superb protection from decoherence, while still only requiring local gates for the error-correction procedure. As an added benefit, scaling the number of physical qubits is straightforward: rather than resorting to concatenated codes, simply adding qubits to enlarge the lattice does the job. Because all the steps for error correction are local, the setup also lends itself as a starting point for fault-tolerant quantum computation [33, 34, 35], which considers the possibility of an

imperfect error-correction process (see chapter 7). Furthermore, while the approach currently relies on active error correction, it is a step towards self-correcting codes, which achieve error tolerance at the hardware level [52, 53].

The first topological code introduced [47] was the Toric code by Kitaev in 1997. He demonstrated a systematic way of defining local stabilizer generators for a large number of physical qubits to yield a code that exploits the topology of the underlying setup. It requires qubits to be placed in a rectangular arrangement on the surface of a torus, with check operators acting on groups of four qubits that are situated next to each other. In an effort to provide a consistent description of the Toric code and topological color codes, a slightly adapted² explanation from the original publication will be used.

Given the setup on a square lattice, the language of the stabilizer formalism can be used to describe the code. After assigning colors to the faces of the lattice in a checkerboard pattern, the colored plaquettes are associated with check operators Ŝᵢ of the form X̂^⊗4 and Ẑ^⊗4, respectively. Each of these is the tensor product of X̂ or Ẑ operators acting on the four qubits adjacent to its location (see figure 2.2). Defined in this manner, the check operators {Ŝᵢ} are the generators of a stabilizer S (see section 2.3). The corresponding code subspace C is the simultaneous +1 eigenspace of the generators, such that the number of qubits that contribute a negative eigenvalue must be even for each plaquette operator. For instance, if we consider the case where all qubits are in an independent basis state |0⟩ or |1⟩, then each Ẑ^⊗4-plaquette must have an even number of |1⟩ qubits to be a valid code state, because the eigenvalue of |1⟩, with respect to Ẑ, is negative.

If the lattice has dimension L × L, then there are N = L² physical qubits, which have a Hilbert space H^⊗N. Likewise, there are N stabilizers, but they are not all independent: Because the lattice is on a torus, we can represent any stabilizer as the product of all other stabilizers of the same type:

    Ŝᵢʷ = ⊗_{j≠i} Ŝⱼʷ ,   w = X̂, Ẑ .    (2.11)

Because Pauli operators of the same type commute and square to the identity, the tensor product in equation (2.11) contains exactly the four Pauli operators in Ŝᵢʷ. Therefore, there are only L² − 2 independent check operators, leaving a code subspace that can encode two qubits. Note that this is an effect of the underlying topology; without relation (2.11) encoding would not be possible.

² The original paper considered qubits to reside on the edges of a rectangular lattice, with check operators acting on the four qubits around a site or plaquette.


2.4 Topological Stabilizer Codes


Figure 2.2: Arrangement of qubits and check operators for the Toric code. (a) The Toric code places qubits at the vertices of a square lattice, with check operators $\hat{S}^X$ and $\hat{S}^Z$ attached to plaquettes in a checkerboard pattern (one check operator per plaquette). The underlying topological structure is established by periodic boundary conditions on the two-dimensional lattice. (b) An error operator acting on one of the qubits gives rise to a change in eigenvalue for the two adjacent check operators it anticommutes with. A qubit-phase error $\hat{Z}$ on the blue qubit causes the eigenvalue of the two adjacent plaquette operators of type $\hat{S}^X$ to become negative. (c) When multiple errors are next to each other, we consider them to form an error chain. Only the check operators at the beginning and end of the chain attain a negative eigenvalue, while intermediate plaquettes are adjacent to two qubits (i.e., their eigenvalue is negated twice). Note that a chain consisting of qubit-flip errors ($\hat{X}$) spans multiple plaquettes which have an associated check operator of type $\hat{S}^Z$, and vice versa.


Because the check operators of the Toric code consist of products of $\hat{X}$ or $\hat{Z}$ but not both, we can treat error correction for qubit-flip and phase-flip errors separately. For simplicity, the following explanation will thus focus only on qubit-flip errors described by operators $\hat{X}$ and the respective stabilizers of the form $\hat{Z}^{\otimes 4}$. To understand which errors can be detected and corrected, it is useful to introduce the notion of an error chain $E$, which consists of a set of faulty qubits. It deserves its name because a line of faulty qubits connects plaquettes forming a chain. Any plaquette with two faulty qubits is considered "in the middle" of the chain. We call an error chain closed if it does not start or end at any plaquette, or equivalently if the number of faulty qubits adjacent to each check operator is even. If some plaquette operator has an odd number of faulty qubits, it is considered a boundary and we call the error chain open (see figure 2.2).

The notion of error chains is relevant in describing the code states as well as the operators which leave the code space invariant. Note that the eigenvalue with respect to each check operator is negative when an odd number of faulty qubits are adjacent. Therefore an error chain $E$ will change the eigenvalue for the check operators forming its boundary $\partial E$. Closed error chains have no boundary and do not change any of the eigenvalues. Because they commute with all generators, they are part of the normalizer $N(S)$. We can check that closed loops do indeed commute with check operators because they share an even number of qubits with each of them, so the sign does not change. But which of these undetectable operators affect the encoded information? It turns out that this depends on the homology of the error chain in question:

• A chain is called homologically trivial if it can be expressed as a product of check operators. That is, we can find a subset of check operators $\hat{S}^Z_i$ such that their tensor product represents $E$. Since the Pauli operators shared by adjacent stabilizer generators cancel, this corresponds to the possibility of tiling the "inside" of the chain $E$. Such a chain is contained in the stabilizer $S$ and does not affect the encoded information.

• A chain is considered homologically non-trivial if it cannot be expressed as a product of check operators. This is the case when (part of) the error chain spans the system: Such an error chain cannot be detected, because it is in the normalizer $N(S)$, but it affects the logical information, because it is not in the stabilizer itself. Such chains represent a non-trivial mapping of the code space onto itself. Note that these loops have no tilable "inside" (see figure 2.3).

Error correction for the Toric stabilizer code is performed by measuring the eigenvalues of all check operators, yielding the error syndrome. This



Figure 2.3: Different types of error chains on the lattice of the Toric code. (a) A chain which starts and ends at some plaquette operators is considered to be open. Equivalently, this property can be expressed using the notion that for some plaquette operators the number of adjacent faulty qubits is odd, causing a change in eigenvalue. These plaquettes are considered the boundary, and the operator producing the chain does not commute with the stabilizer generators on the chain's boundary. (b) A chain which has an even number of faulty qubits for all plaquette operators is considered closed. Since it does not have a boundary, it commutes with all check operators and is therefore in the normalizer of the stabilizer, $N(S)$. Chains which are homologically trivial can be represented as a product of check operators, because their inside is tilable. Therefore they are in the stabilizer and, while undetectable, also do not alter the encoded information. (c) Chains which are homologically non-trivial cannot be represented as a product of check operators, hence they are not in the stabilizer and act non-trivially on the encoded information. At the same time they are boundary-less, and therefore undetectable. The appearance of such a system-spanning error chain is what needs to be prevented by performing active error correction.


information essentially denotes the boundary $\partial E$ of the error chain that affected the system, and the task is to deduce, from this, the location of the actual errors which occurred. However, the error syndrome is highly ambiguous, such that identifying the right chain from all the possible solutions seems a hopeless endeavor. Fortunately, the error detection does not need to be perfect: It is sufficient to identify the correct error up to homology. Since local loops are in the stabilizer and act trivially on the code subspace, correcting for an error $E'$ which only differs from $E$ by a (set of) homologically trivial loops does not affect the encoded information. Therefore, it makes sense to classify errors into the same error class $\bar{E}$ when they are identical up to an element from the stabilizer $S$ (i.e., they share the same homology and have the same effect on the encoded state). Since many error histories have an equivalent effect, the ideal strategy is to compute which equivalence class $\bar{E}$ has occurred with the highest total probability $P(\bar{E})$. Reliable error correction is then possible with high probability when there is a class of errors that dominates over the others.

2.5 Relation to Classical Spin Models

The initial paper relating topological error-correction codes to classical statistical spin systems with disorder was published by Dennis, Kitaev, Landahl and Preskill in 2002 [52]. Starting from Kitaev's Toric code they realized that, for a careful choice of the interaction constant $J$ and the temperature $T$, the various choices for connecting the end points of a quantum error syndrome could be related to the configurations of a disordered spin system defined on the dual lattice. The following is a brief summary of how the mapping was established for the Toric code under the qubit-flip channel. In the subsequent chapters of this thesis, the method will be adapted for other topological stabilizer codes and error channels.

As introduced in section 2.4, Kitaev's proposal for the Toric code arranges qubits on a square lattice with operators $\hat{X}^{\otimes 4}$ and $\hat{Z}^{\otimes 4}$ acting on the qubits around each plaquette. During the error correction process, all check operators are measured. The resulting error syndrome denotes the end points of the error chain, but it is highly ambiguous in terms of the actual error chain $E$, i.e., where the errors occurred. Successful error correction amounts to applying the error-correction procedure for an error from the correct error class. For a constant qubit error rate $p$, the probability for a specific error chain $E$ is determined by the number of faulty qubits $|E|$:
$$P(E) = (1-p)^{N-|E|}\, p^{|E|} \,, \tag{2.12}$$
where $N$ is the number of qubits in the setup. Equivalently, we can



Figure 2.4: Ambiguity of the error syndrome and error classification. (a) An error chain $E$ only anticommutes with the check operators on its boundary $\partial E$. Therefore the error syndrome only signals where the boundary of the actual error chain is, a piece of information that is highly ambiguous. (b) An alternative error chain $E'$ which shares the same boundary $\partial E' = \partial E$ and thus also the same error syndrome. During the error-correction procedure, these two errors cannot be distinguished using the information from the error syndrome. Notice, however, that this error chain only differs from $E$ by a product of check operators (the ones on plaquettes surrounded by $C = E + E'$). As a result, correcting for $E'$ when $E$ was the actual error only introduces a homologically trivial cycle $C$, which does not affect the encoded information. Hence $E$ and $E'$ are said to be in the same error class $\bar{E}$. (c) Yet another error chain $E''$ with the same boundary. However, in this case the cycle $C' = E + E''$ spans the system, which means that correcting $E''$ when $E$ has occurred introduces a non-trivial loop, thereby altering the encoded information.


describe the error chain with boolean variables $n^E_\ell \in \{0,1\}$ for each qubit, indicating whether an error occurred on qubit $\ell$. Then the probability (2.12) can be expressed as
$$P(E) = \prod_\ell (1-p)^{1-n^E_\ell}\, p^{\,n^E_\ell} = (1-p)^N \prod_\ell \left(\frac{p}{1-p}\right)^{n^E_\ell} \,, \tag{2.13}$$

where the product is over all qubits $\ell$. Because the stabilizer measurements only yield the boundary of the error chain, there are many other error chains $E'$ that are compatible with the same error syndrome. If the chains $E$ and $E'$ share the same boundary, then they can only differ by a (set of) cycles $C$, which have no boundary (see figure 2.4). The relative probability of $E' = E + C$ compared to $E$ depends on the relative number of faulty qubits, which increases for every qubit in $C \setminus E$ but decreases for qubits in $C \cap E$. Therefore, using analogous boolean descriptors for the cycle, $n^C_\ell$, we can write the relative probability $P(E')/P(E)$ as:
$$\frac{P(E')}{P(E)} = \prod_\ell \left(\frac{p}{1-p}\right)^{n^C_\ell (1 - 2 n^E_\ell)} \propto \prod_\ell \exp\bigl[\beta J \tau_\ell \underbrace{(1 - 2 n^C_\ell)}_{u_\ell}\bigr] \,. \tag{2.14}$$

The newly defined variable $u_\ell \in \{\pm 1\}$ is negative for all links in $C$, and we have introduced carefully-chosen coupling constants $\tau_\ell \in \{\pm 1\}$ and a factor $\beta J$ such that
$$e^{-2 \beta J \tau_\ell} = \left[\frac{p}{1-p}\right]^{1 - 2 n^E_\ell} \,. \tag{2.15}$$

Note that the sign of $\tau_\ell$ is dictated by the presence of an error in the reference chain $E$, and $\beta J$ is related to the error probability via the Nishimori condition [54]:
$$2 \beta J = -\ln\left(\frac{p}{1-p}\right) \,. \tag{2.16}$$
The constraint for $C$ to be cyclic (i.e., to have no boundary) imposes the additional requirement that the number of adjacent faulty qubits $\ell \in C$ must be even for every plaquette. A simple way to satisfy this condition is to introduce new Ising variables $s_i \in \{\pm 1\}$ for each plaquette of the opposite color. For any choice of these spin variables, the set of variables $u_\ell = s_i s_j$, with $\ell$ the edge between plaquettes $i$ and $j$, describes such a cyclic set $C$. Note that this approach describes only cycles which are homologically trivial, because the construction implicitly satisfies the tiling condition (see figure 2.5).
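The construction $u_\ell = s_i s_j$ can be checked numerically. The sketch below (illustrative, not from the thesis; the vertex-qubit convention of section 2.4 is assumed, with spins on the odd-colored faces) assigns random Ising spins and verifies that the set $C$ of qubits with $u_\ell = -1$ has no boundary, i.e., every even-colored plaquette touches an even number of qubits in $C$:

```python
import random

def odd_faces_of_vertex(i, j, L):
    """The two odd-parity (Z-colored) faces adjacent to vertex qubit (i, j)."""
    faces = [((i - 1) % L, (j - 1) % L), ((i - 1) % L, j),
             (i, (j - 1) % L), (i, j)]
    return [f for f in faces if (f[0] + f[1]) % 2 == 1]

def cycle_from_spins(spins, L):
    """Qubits l with u_l = s_i * s_j = -1; by construction they form a
    closed (boundary-less), homologically trivial cycle C."""
    C = set()
    for i in range(L):
        for j in range(L):
            f1, f2 = odd_faces_of_vertex(i, j, L)
            if spins[f1] * spins[f2] == -1:
                C.add((i, j))
    return C

def is_closed(C, L):
    """True if every even-parity plaquette contains an even number of C qubits."""
    for a in range(L):
        for b in range(L):
            if (a + b) % 2 == 0:
                face = {(a, b), ((a + 1) % L, b),
                        (a, (b + 1) % L), ((a + 1) % L, (b + 1) % L)}
                if len(face & C) % 2 != 0:
                    return False
    return True
```

Closure holds for any spin configuration, because each even plaquette is surrounded by four links whose product of $s_i s_j$ factors contains every adjacent spin exactly twice.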




Figure 2.5: Example spin configuration to represent an error chain from an error class $\bar{E}$. (a) The reference error chain $E$ defining the error class $\bar{E}$. The different spin configurations are used to sample error chains from the same error class, according to their relative probability. Interactions between spins along the reference error chain are considered to be negative, such that the Boltzmann weight of the reference chain is proportional to $(p/(1-p))^{|E|}$. (b) An alternative error chain which differs from the reference chain by a product of spin-plaquettes. Together with the reference chain, it forms the boundary $E + E'$ of a flipped domain of spins. Along the domain wall, spins are therefore anti-aligned and the corresponding spin products $s_i s_j$ in the Hamiltonian are negative. As a result, all the terms along $E'$ increase the energy, while the ones along $E$ (which were originally positive) are satisfied. Thus the Boltzmann weight for $E'$ is proportional to $(p/(1-p))^{|E'|}$, producing the correct relative weight in comparison to the reference chain $E$.


With this we have found that the spin configurations enumerate all error chains $E'$ that differ from the reference chain $E$ by a tileable set of cycles. With relation (2.16), their Boltzmann weight is also proportional to the probability of the respective error chain. Therefore, it is possible to sample the fluctuation of error chains within the same error class by sampling configurations from the classical statistical model described by the partition function
$$Z_{\{\tau_{ij}\}} = \sum_{\{s\}} \exp\Bigl(\beta J \sum_{\langle i,j \rangle} \tau_{ij}\, s_i s_j \Bigr) \,. \tag{2.17}$$

Here $\beta J$ is dictated by the Nishimori condition [equation (2.16)] and the $\tau_{ij}$ are quenched, disordered interaction constants which are negative if the associated qubit was faulty in the reference chain. Because the mapping identifies error chains with domain walls and their difference with a flipped patch of spins, we can identify the ordered state with the scenario where error-chain fluctuations are small and correct error classification is feasible. And while this sampling does not implicitly consider homologically non-trivial cycles, we can interpret percolating domain walls as error fluctuations which are too strong to reliably distinguish cycles of different homology.

To summarize, we can gain a better understanding of a quantum error-correction code by studying the phase diagram of the related classical spin system. Because the Nishimori condition from the mapping imposes a requirement on the relation of $\beta J$ and $p$, the synergy is only valid along the Nishimori line. Once the phase boundary $T_c(p)$ of the classical-statistical model is known, the critical error threshold $p_c$ can be identified as the intersection point of the phase boundary and the Nishimori line, i.e., the largest $p$ such that the Nishimori line is still within the ordered phase. For the Toric code with qubit-flip errors, this was carried out by Dennis et al. in 2002. By performing numerical simulations of the related random-bond Ising model with small amounts of disorder $p$, they were able to show [52] that the critical error threshold of the Toric code with qubit-flip errors is approximately $p_c = 0.1094(2)$.
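Equation (2.16) makes the Nishimori line easy to evaluate numerically. A small illustrative sketch (not part of the thesis): given an error rate $p$, it returns the temperature on the Nishimori line in units of $J$; evaluated at the threshold $p_c \approx 0.1094$ quoted above, it gives the temperature of the Nishimori point, $T/J \approx 0.95$.

```python
import math

def nishimori_temperature(p, J=1.0):
    """Temperature on the Nishimori line, from 2*beta*J = ln((1-p)/p)."""
    return 2.0 * J / math.log((1.0 - p) / p)
```

The line runs from $T = 0$ at $p = 0$ to $T \to \infty$ as $p \to 1/2$.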


Chapter 3

Classical Statistical Mechanics

“It is a mistake to think you can solve any major problems just with potatoes.” — Douglas Adams

The field of classical statistical mechanics deals with the thermodynamic behavior of systems consisting of many particles. Rather than dealing with individual particles or specific states, it seeks to explain the properties arising in these large systems using probability theory. For magnetic systems, the Ising ferromagnet has emerged as the toy model of choice [55, 56]. It has earned its reputation as the drosophila among statistical models partly due to its rich behavior despite its simplicity, but mostly because there is a plethora of applications in related and even peripheral fields [57, 58, 59].

The partition function found in the mapping for Kitaev's Toric code (section 2.5) can be interpreted as a classical spin model where the different spin configurations are weighted proportional to the likelihood of the error chain they represent. The existence of such a relation is instrumental in understanding the fluctuations of error chains, because it allows for the existing methods and numerical tools from statistical mechanics to be applied.

This chapter introduces classical statistical spin models, using the Ising ferromagnet and Ising spin glasses as examples. Furthermore, Monte Carlo simulations and possible optimizations are discussed as a way to numerically calculate the properties of complex statistical spin systems. Particular attention is given to the observables needed to detect and locate continuous phase transitions, as this will be essential to identify the ordered state which signals stability of the related quantum error-correction code.


3.1 Classical Spin Systems

The Ising ferromagnet was developed in 1925 by Ernst Ising and Wilhelm Lenz as a simple model to investigate the interaction of localized magnetic moments [55]. These Ising spins are two-valued entities which represent highly anisotropic magnetic moments, pointing either up or down. They are placed on the sites of a hypercubic lattice and the Hamiltonian describing their interaction is given by:
$$H = -J \sum_{\langle i,j \rangle} s_i s_j - h \sum_i s_i \,. \tag{3.1}$$

The sum in the first term is over nearest neighbors $\langle i, j \rangle$ on the lattice and energetically favors alignment of spins $s_{i,j} \in \{\pm 1\}$ when the global interaction strength $J$ is positive. The second term represents an interaction of each individual spin with an external field of strength $h$. The resulting Hamiltonian in equation (3.1) can be solved exactly in one dimension [55], as well as in two dimensions for a vanishing external field $h = 0$ [60]. Furthermore, the model exhibits a finite-temperature transition to an ordered state when the space dimension is larger than one. In the high-temperature regime the system's dynamics are dominated by thermal fluctuations and the tendency for nearby spins to align is small. As the temperature is lowered towards the critical temperature $T_c$, the typical distance at which spins are correlated grows until it spans the system. For lower temperatures, the spin-spin interactions are more relevant than thermal fluctuations and the system orders. Typical states for $h = 0$ and different temperature regimes are shown in figure 3.1. Note that the ground state, where all spins point in the same direction, is two-fold degenerate. This is due to the global spin-reversal symmetry of the Hamiltonian in equation (3.1) for a vanishing external field.

In equilibrium statistical physics, we are interested in the (averaged) system properties for an ensemble of states. When the temperature $T$ is fixed, the appropriate ensemble is the canonical ensemble, where the partition function is given by the sum of Boltzmann factors for all states:
$$Z = \sum_{\{s\}} e^{-H(\{s\})/k_B T} \,. \tag{3.2}$$

Here $k_B$ denotes the Boltzmann constant and $T$ is the constant ensemble temperature; they are usually combined as the inverse temperature $\beta = 1/k_B T$. The equilibrium probability for the system to be in a specific state $\{s\}$ (called its weight) is then given by the Boltzmann distribution:
$$P_{\{s\}} = e^{-\beta H(\{s\})} / Z \,, \tag{3.3}$$



Figure 3.1: Typical states of an Ising ferromagnet, with different magnetization shown as black and white. (a) At low temperature $T < T_c$, the system is in an ordered state with most spins aligned in the same direction. The direction of the magnetization is arbitrary and chosen spontaneously when cooled past the transition temperature. (b) At the transition temperature, thermal fluctuations and spin interactions compete; clusters of all sizes coexist. (c) In the high-temperature regime, dynamics are dominated by thermal fluctuations and there is little correlation between neighboring spins.

where the numerator is the state's Boltzmann factor and the division by the partition function ensures proper normalization. Given any observable $O$, we can calculate its equilibrium ensemble average via:
$$\langle O \rangle = \sum_{\{s\}} O(\{s\})\, P_{\{s\}} = \frac{1}{Z} \sum_{\{s\}} O(\{s\})\, e^{-\beta H(\{s\})} \,. \tag{3.4}$$

The sum is over all possible configurations $\{s\}$ of the system and the expectation value can be interpreted as the weighted average of the observable value for each configuration. For the Ising ferromagnet, the transition to an ordered state can be quantified by studying the average magnetization of the system:
$$m := \frac{1}{N} \sum_i^N s_i \,. \tag{3.5}$$

This property acts as a local order parameter [61]: At high temperatures the fluctuating spin orientation is random and the average value vanishes, while the spin alignment in the ordered regime causes the order parameter to be non-zero:
$$\langle m \rangle = \begin{cases} 0 & \text{disordered phase,} \\ \text{non-zero} & \text{ordered phase.} \end{cases} \tag{3.6}$$
Therefore, for an expectation value close to one for the local order parameter, the system is in an ordered state. The ground state of the Ising ferromagnet is fully magnetized (all spins up or all down).
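For very small systems, the ensemble average in equation (3.4) can be evaluated exactly by enumerating all $2^N$ states. The following illustrative sketch (not from the thesis) does this for a short periodic Ising chain; note that at $h = 0$ the strict average $\langle m \rangle$ vanishes in any finite system because of the global spin-reversal symmetry, which is why finite-size studies typically measure quantities such as $\langle |m| \rangle$ instead.

```python
import itertools
import math

def exact_average(L, beta, observable, J=1.0):
    """Exact canonical average of `observable` for a periodic Ising
    chain of L spins at inverse temperature beta (h = 0)."""
    Z = 0.0
    acc = 0.0
    for spins in itertools.product((-1, 1), repeat=L):
        E = -J * sum(spins[i] * spins[(i + 1) % L] for i in range(L))
        w = math.exp(-beta * E)  # Boltzmann factor of the state
        Z += w
        acc += observable(spins) * w
    return acc / Z

def magnetization(spins):
    return sum(spins) / len(spins)
```

The same enumeration applied to $\langle |m| \rangle$ yields a strictly positive value at low temperatures.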

However, such perfect and pure systems as modeled by the Ising ferromagnet are a rare sight in the real world. Therefore it is natural to ask how the physical properties change if disorder is introduced. While all nearest-neighbor interactions in the idealized Ising ferromagnet have the same coupling strength $J$, considering varying interaction constants can allow for a more detailed modeling of actual physical systems [61]. As a generalization of the Ising ferromagnet, bond-dependent coupling strengths can be introduced into the Hamiltonian by adding additional, bond-dependent parameters $\tau_{ij}$:
$$H = -J \sum_{\langle i,j \rangle} \tau_{ij}\, s_i s_j \,. \tag{3.7}$$

These disordered bond strengths $\tau_{ij}$ are quenched random variables.¹ The assumption that interactions remain unchanged over time greatly simplifies the study of the Hamiltonian's equilibrium properties. The variation in magnitude renders some of the Hamiltonian terms more significant (energetically) while reducing others to mere bystanders. Rather than considering a specific set of interaction strengths $\{\tau_{ij}\}$, they are typically drawn from a disorder distribution that is specified along with the disordered model. This allows for the definition of an ensemble of disorder realizations, where each possible choice of the interactions is represented according to its probability. Equilibrium expectation values of such a disordered model can be calculated by averaging over the disorder distribution in addition to the thermal trace:
$$\bigl[\langle O \rangle_T\bigr]_{\mathrm{av}} = \int d\tau\, \mathcal{P}(\tau) \Bigl[\frac{1}{Z} \sum_{\{s\}} O(\{s\})\, e^{-\beta H(\{\tau\},\{s\})}\Bigr] \,, \tag{3.8}$$

where $\langle \cdot \rangle_T$ is the thermal average for the thermodynamic ensemble and $[\cdot]_{\mathrm{av}}$ stands for an average over disorder realizations. Numerically, equation (3.8) is estimated by a statistical average over a set of $N_{\mathrm{sa}}$ disorder realizations chosen randomly from the disorder distribution $\mathcal{P}(\tau_{ij})$.

The field of spin glasses [61, 62] deals with systems whose quenched interaction constants are chosen randomly from a distribution that is typically bimodal or Gaussian. The competing interactions in such models cause frustration in situations where the links around a plaquette cannot be satisfied simultaneously. This gives rise to a rough energy landscape, thus increasing the complexity of the model considerably. The resulting disordered systems display stunning properties such as diverging time scales for spin rearrangements at low temperatures [63, 64] and memory effects [63, 65, 66]. They are of particular interest not only because they model certain physical materials [61], but because they represent generic NP-hard optimization problems [67] that find applications in many other fields of science, for example the topological quantum error-correction codes studied in this thesis.

¹ The typical notation of the Ising spin-glass Hamiltonian combines $J$ and $\tau_{ij}$ into one bond-dependent constant $J_{ij}$. However, for the discussion of statistical systems related to quantum error correction, separating the disorder $\{\tau_{ij}\}$ from $J$ is more useful. Furthermore, $h = 0$ is assumed here.
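The notion of frustration mentioned above can be made concrete: a plaquette of the square lattice is frustrated exactly when the product of its four couplings $\tau_{ij}$ is negative, because then no assignment of the four spins satisfies every bond. An illustrative sketch (not from the thesis; the bond-array layout is an assumption):

```python
def frustrated_plaquettes(tau_h, tau_v, L):
    """Count frustrated plaquettes of an L x L periodic lattice.
    tau_h[i][j]: bond from (i, j) to (i, j+1); tau_v[i][j]: bond
    from (i, j) to (i+1, j).  A plaquette is frustrated iff the
    product of its four surrounding bonds is -1."""
    count = 0
    for i in range(L):
        for j in range(L):
            prod = (tau_h[i][j] * tau_v[i][(j + 1) % L]
                    * tau_h[(i + 1) % L][j] * tau_v[i][j])
            if prod == -1:
                count += 1
    return count
```

Flipping a single bond of a ferromagnetic configuration frustrates exactly the two plaquettes that share it, which is why isolated antiferromagnetic bonds alone do not destroy order.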

3.2 Monte Carlo Simulations

In classical statistical mechanics we are interested in computing, for a given ensemble of states, the expectation values of the model's relevant physical properties. Typically, these are observables such as the energy, the magnetization or the specific heat, which can be computed by performing a trace over the system's partition function $Z$. As shown in section 3.1, this would require sampling all possible system configurations $\{s\}$ according to their weight in the current ensemble:
$$\langle O \rangle_T = \frac{1}{Z} \sum_{\{s\}} O(\{s\})\, e^{-\beta H(\{s\})} \,, \tag{3.9}$$

where the sum is over all possible configurations. Unfortunately, the exponentially growing phase-space volume of typical models in statistical mechanics renders full sampling unfeasible for numerical simulations. For instance: A system with just 42 Ising spins – tiny by any standard in statistical physics – has over four trillion configurations. Instead, in Monte Carlo simulations [68], observables are estimated by performing a randomized sampling of states in configuration space. Simple-sampling Monte Carlo would require us to pick an entirely new state from the whole configuration space (at random, according to the respective ensemble weights) for each step. Given the phase-space volume, this is often an inefficient and impractical endeavor. However, the process can be simplified using Markov-chain sampling, where the next state is chosen by proposing a small change to the current configuration, which is then accepted or rejected based on the resulting change in energy. In place of a set of random, independent system configurations, this generates a chain of states where each differs at most by a small modification from the previous one. While the desired Boltzmann distribution is reproduced implicitly with simple sampling, two criteria must be met for Markov-chain sampling to achieve the same:

• Ergodicity – All possible configurations must be reachable via such a chain of small modifications to the initial state. Otherwise, if there are two states with non-zero weight for which no connecting Markov chain exists, at least one of them will not be sampled despite its weight.

• Balance Relation – In equilibrium, the resulting distribution should remain stable. This is only possible if the following balance relation is satisfied:
$$\sum_{i \neq j} L_{ij} P_{\mathrm{eq}}(i) \stackrel{!}{=} \sum_{i \neq j} L_{ji} P_{\mathrm{eq}}(j) \,, \tag{3.10}$$

where $L_{ij}$ is the transition probability from state $i$ to $j$. For an equilibrium distribution to be stable, the change in $P_{\mathrm{eq}}$ due to transitions to neighboring states $j$ must be compensated by an equal total influx from (other) states $j \to i$. Equation (3.10) is necessary, as there cannot be a stable equilibrium distribution otherwise. One way to satisfy this general balance requirement is by enforcing detailed balance:
$$P_{\mathrm{eq}}(i) L_{ij} \stackrel{!}{=} P_{\mathrm{eq}}(j) L_{ji} \,. \tag{3.11}$$
Using the transition probability matrix $L_{ij}$, each step in the Markov chain can be written as:
$$P_{t+1}(j) = \sum_i L_{ij} P_t(i) \,. \tag{3.12}$$

We want to show that $P_{t \to \infty}$ converges to some equilibrium distribution $P_{\mathrm{eq}}$. This can be achieved by looking at the properties of the transition matrix $L$. Suppose ergodicity is given; then we can choose $n$ such that any state can be reached within $n$ steps. This assures that $(L^n)_{ij} > 0 \;\; \forall i, j$. The theorem by Perron and Frobenius states that any matrix with strictly positive elements has a non-degenerate largest eigenvalue. Since we are dealing with a probability matrix, the largest possible eigenvalue is 1. Therefore, if we can assure that a stable distribution exists, then we know it is unique. Furthermore, by considering the limit
$$\lim_{m \to \infty} (L^n)^m P_t(i) \to P_\infty(i) \,, \tag{3.13}$$

it can be seen that only the equilibrium distribution, which has eigenvalue 1, will persist in the long-term limit, while all others are suppressed.

The Metropolis-Hastings algorithm [69] is a Monte Carlo method published in 1953 as a numerical "method [...] of calculating the properties of any substance which may be considered as composed of interacting molecules". Advertised as such a generic one-size-fits-all solution, it calculates numerical estimates of physical properties by performing a Markov-chain sampling over configuration space. Applied to spin systems, the method updates the physical system in single-spin steps by proposing an update at each site, which is accepted with a probability depending on the energy change it would cause. For Ising spin systems this translates to attempting a spin flip for a spin $s_i$, then accepting it

with a probability given by
$$P_{\mathrm{accept}} = \min\bigl(1, e^{-\beta \Delta E}\bigr) \,, \qquad \Delta E = E_{\mathrm{after}} - E_{\mathrm{before}} \,. \tag{3.14}$$
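A minimal Metropolis sweep for the two-dimensional Ising ferromagnet, following equation (3.14), might look as follows (an illustrative sketch, not the thesis code):

```python
import math
import random

def energy(spins, L, J=1.0):
    """Total energy of a periodic L x L Ising lattice, H = -J sum s_i s_j,
    counting each bond once (right and down neighbors)."""
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            E -= J * s * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return E

def metropolis_sweep(spins, L, beta, J=1.0):
    """One sweep of single-spin-flip Metropolis updates."""
    for i in range(L):
        for j in range(L):
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * J * spins[i][j] * nb  # energy change if s_ij flips
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i][j] *= -1
```

Moves that lower the energy are always accepted, while uphill moves are accepted with the Boltzmann factor $e^{-\beta \Delta E}$, which is what reproduces the equilibrium distribution (3.3).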

The method's particular advantage is the ability to sample from a probability distribution $P(\{s\})$ whilst only requiring knowledge of a function that is proportional to $P(\{s\})$. With this we can avoid calculating the entire partition function $Z$, which is merely a normalization factor for the Boltzmann weight of each state. Sampling the equilibrium distribution $P_{\mathrm{eq}}$ requires a sufficient number of steps in the Markov-chain process to ensure diffusion to all possible states. For complex systems such as spin glasses, which are usually characterized by a rough energy landscape, Markov-chain sampling is biased as long as the system remains localized in a metastable state (see figure 3.2). Furthermore, the time required to overcome an energy barrier to populate neighboring minima grows exponentially with its height and with inverse temperature: Since the typical energy difference satisfies $\beta \Delta E \gg 1$, almost no moves are accepted. Consequently, equilibration is a very challenging task, especially in the low-temperature regime. Several simulation methods have been developed trying to alleviate this difficulty, such as simulated annealing [70] and genetic algorithms [71]. A particularly efficient one, which is at the same time relatively simple to implement, is parallel tempering [72], also known as replica-exchange Monte Carlo [73] or expanded ensemble methods [74]. Rather than equilibrating an individual simulation of the system at a specific temperature, $N_T$ replicas are simulated concurrently for a temperature set $[T_1, \ldots, T_N]$, ranging from below the expected transition to well above it, $T_N \approx 2 T_c$. Because observables often need to be investigated as a function of temperature, the overhead for parallel tempering is minimal. At the same time, introducing a possibility for neighboring systems to swap configurations allows for the diffusion of replicas in temperature space.
As a result, rather than remaining in local minima, configurations can now be heated up to higher temperatures, where they can readily move in phase space before cooling back down in a different location (see figure 3.2). This enhancement can greatly reduce the time required for equilibration. Note that the possibility of exchanging replicas is in addition to the regular Metropolis single-spin dynamics, as it does not (by itself) sample the system's phase space. After a sweep of single-spin updates, a proposed exchange of spin configurations amongst neighboring temperatures is accepted with probability
$$P_{\mathrm{swap}} = \min\bigl(1, e^{(E_1 - E_2)(\beta_1 - \beta_2)}\bigr) \,, \tag{3.15}$$

which can be shown to satisfy detailed balance [72]. Furthermore, it can



Figure 3.2: Depiction of metastable states in a rough energy landscape. (a) In disordered spin systems with competing interactions, even a small change can cause a large difference in the energy. This is represented by the steep energy barriers surrounding local minima. (b) The rough energy landscape can cause the system to remain localized in a metastable state rather than sampling the whole phase space in a Monte Carlo simulation. (c) Parallel tempering can overcome these energy barriers by heating up the system, allowing for the randomization of its configuration, then cooling back down in a different portion of phase space.

be seen from equation (3.15) that, for parallel tempering to be most efficient, the temperature set must be chosen such that there is a reasonable overlap in the energy distributions of ensembles at neighboring temperatures. If the overlap is too small, the energy difference in the exponent of equation (3.15) is typically large, resulting in a low acceptance rate for swaps and less diffusion in temperature space. A more thorough discussion of how to generate a good temperature set can be found in appendix A.

Proper equilibration needs to be verified in all these numerical simulations. This can be achieved numerically using logarithmic binning of the data, where observables are measured after (and averaged over) an exponentially growing number of simulation steps $2^{t_e}$. The resulting measurement $O(T, t_e)$ is the thermal average of an observable between time steps $t_1 = 2^{t_e - 1}$ and $t_2 = 2^{t_e} - 1$. By investigating the time evolution of $O(T, t_e)$ as a function of $t_e$, the equilibration of each observable can be plotted as a function of (logarithmic) time. A simulation is deemed in thermal equilibrium only if the last three bins agree within error bars for every observable considered. Combined with the exponentially growing bin size, this constitutes a conservative test for equilibration.
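The replica-exchange step of equation (3.15) can be sketched as follows (illustrative, not the thesis implementation; `configs` and `energies` are assumed to hold the current state and energy of each replica, ordered by inverse temperature):

```python
import math
import random

def attempt_swaps(energies, betas, configs):
    """Propose replica exchanges between neighboring temperatures,
    accepting with P_swap = min(1, exp((E1 - E2)(beta1 - beta2)))."""
    for k in range(len(betas) - 1):
        arg = (energies[k] - energies[k + 1]) * (betas[k] - betas[k + 1])
        if arg >= 0 or random.random() < math.exp(arg):
            configs[k], configs[k + 1] = configs[k + 1], configs[k]
            energies[k], energies[k + 1] = energies[k + 1], energies[k]
```

A swap that moves the lower-energy configuration to the colder (larger $\beta$) replica has $\arg \geq 0$ and is always accepted, which is the mechanism that lets relaxed configurations diffuse down in temperature.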

3.3 Continuous Phase Transitions

Continuous phase transitions [75] in statistical physics are characterized by the absence of latent heat as well as a continuous free energy of the system. Instead, the singularity of the transition is found in derived observables of the free energy, which exhibit a power-law behavior near the critical temperature. For instance, the correlation length ξ, which indicates the typical distance over which spins are correlated, diverges at the transition as

    ξ ∼ |T − Tc|^(−ν) ,    (3.16)

where ν is the critical exponent describing the divergence near the transition temperature Tc. In fact, in the vicinity of the transition, the behavior of several observables is described by a power law: The magnetization, while singular, does not diverge but instead shows a singular kink at the transition. Its scaling relation is given by

    m ∼ |T − Tc|^β ,    (3.17)

where β is the corresponding scaling exponent. The set of scaling exponents for different observables can be shown to be related using arguments from renormalization group theory [76]; typically only two of them are independent while the rest can be expressed via scaling relations.

Numerical simulations of statistical systems can only consider the properties of finite systems in order to extrapolate what the values in a thermodynamic (infinite) system would be. However, in finite-size systems, the effects of the transition are typically washed out considerably and system properties can exhibit systematic deviations from their value in the thermodynamic limit. To account for the resulting nonanalytic behavior, the observables can be expressed in a finite-size scaling form [77]. For instance, in an Ising system of linear extent L, the finite-size average of the order parameter (equation (3.17)) is asymptotically given by

    ⟨m_L⟩_T ∼ L^(−β/ν) M̃[L^(1/ν) (T − Tc)] ,    (3.18)

while the magnetic susceptibility χ = L^d (⟨m²⟩ − ⟨m⟩²) is expected to scale as

    χ_L ∼ L^(γ/ν) C̃[L^(1/ν) (T − Tc)] .    (3.19)

Here β, γ and ν are scaling exponents, and M̃, C̃ are unknown scaling functions whose argument vanishes at the transition, where T = Tc. As a result, the lines for different system sizes are expected to cross at the transition. Unfortunately, because the scaling forms in equations (3.18) and (3.19) are only correct in the limit L → ∞, systematic deviations from the crossing point can

still arise. Furthermore, estimating the value in the thermodynamic limit, lim_{L→∞} ⟨m_L⟩, is complicated because the scaling exponents are typically not known and would need to be extracted by simultaneously fitting Tc and the scaling exponents.

A far better approach is the construction of quantities whose finite-size scaling form is dimensionless. Typical quantities are the Binder ratio [61] as well as the dimensionless two-point finite-size correlation length divided by the system size [78]. For systems with a spin-reversal symmetry, the latter has proven [79] to be a more precise estimate because the Binder ratio becomes negative and steep at the transition, rendering detection of the crossing difficult. The two-point finite-size correlation function ξ_L is a finite-size adaptation of the correlation length ξ which can be constructed from the system's order parameter by considering the wave-vector dependent susceptibility:

    χ_m(k⃗) = (1/N) Σ_{i,j} ⟨s_i s_j⟩ e^(i k⃗·R⃗_ij) .    (3.20)

From this, the two-point finite-size correlation function ξ_L is then calculated by performing an Ornstein-Zernike approximation [78]:

    ξ_L = [1 / (2 sin(k_min/2))] · sqrt( [χ_m(0)]_av / [χ_m(k⃗_min)]_av − 1 ) .    (3.21)

Here [·]_av denotes an average over disorder samples and k⃗_min is the smallest non-zero wave vector of the system. Near the transition, ξ_L is expected to scale as

    ξ_L / L ∼ X̃[L^(1/ν) (T − Tc)] ,    (3.22)

where X̃ is an unknown, dimensionless scaling function, i.e., in contrast to the finite-size scaling forms in equations (3.18) and (3.19), there is no size- or scaling-exponent-dependent prefactor for this dimensionless quantity. Because the argument of the function X̃ is zero at the transition temperature (up to scaling corrections) and hence independent of L, we expect lines of different system sizes to cross at this point. If, however, the lines do not meet, we know that no transition occurs in the studied temperature range. Furthermore, to account for the non-analytic corrections to scaling, a careful finite-size scaling analysis is required. This can be achieved by comparing the properties of systems at multiple finite sizes and taking the limit L → ∞:

    lim_{L→∞} Tc(L) = Tc(∞) =: Tc .    (3.23)

Note that this numerical estimate is feasible for dimensionless quantities, because we only need to fit for the transition temperature Tc(∞), but not for any of the scaling exponents. In practice, when the crossing of the two-point finite-size correlation functions ξ_L in equation (3.21) is used to detect the transition, each pair of distinct system sizes (L1, L2) gives rise to a crossing point Tc*(L1, L2). To cleanly estimate the transition temperature in the thermodynamic limit, only the crossing points for systems which differ by the same factor, (L, 2L), are considered. The limit value can then be estimated via a linear fit to the data points Tc*(L, 2L) in a 1/L plot, where the infinite system corresponds to the vertical axis. While the linear fit works surprisingly well for a wide range of systems (when the linear system sizes L are sufficiently large), it is important to individually verify the quality of each fit, to avoid an adverse effect from small systems which exhibit severe finite-size effects and to ensure there is no systematic deviation from the assumption of a linear fit.

In order to estimate error bars for this extrapolation, a careful statistical analysis of the sample obtained through Monte Carlo simulations is necessary. The bootstrap resampling method [80] generates a new sample from the initial set by picking N individual measurements, at random, with repetition. As a result, these generated samples are distributed according to the distribution of the original sample (which is our best guess of what the actual population looks like). By calculating Tc for each resampling, we obtain a histogram of Tc values, allowing for a detailed discussion of the variability of this observable.
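Both steps, the linear 1/L extrapolation and the bootstrap error bars, fit in a few lines. The sketch below assumes the crossing points have already been determined; the function names are illustrative, not taken from the thesis code:

```python
import numpy as np

def extrapolate_tc(inv_sizes, crossings):
    """Linear fit of crossing points Tc*(L, 2L) versus 1/L; the intercept
    at 1/L = 0 is the thermodynamic-limit estimate Tc(infinity)."""
    slope, intercept = np.polyfit(inv_sizes, crossings, 1)
    return intercept

def bootstrap(data, estimator, n_boot=1000, rng=None):
    """Bootstrap resampling [80]: draw samples of the original size, at
    random with repetition, and evaluate the estimator on each; the spread
    of the returned values gives the error bar."""
    rng = rng or np.random.default_rng()
    data = np.asarray(data)
    picks = (data[rng.integers(0, len(data), size=len(data))]
             for _ in range(n_boot))
    return np.array([estimator(p) for p in picks])
```

Resampling rows of (1/L, Tc*) pairs jointly and passing `extrapolate_tc` as the estimator yields exactly the histogram of Tc values described above.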


Chapter 4

Topological Color Codes

“The knack of flying is learning how to throw yourself at the ground and miss.”
— Douglas Adams

The first topological code introduced was Kitaev's Toric code [47], which combined the stabilizer formalism with a regular qubit arrangement on a torus to generate an intrinsically local error-correction code that is both very stable and scalable. While the code introduced in chapter 2.4 is defined on a regular square lattice with checkerboard decomposition, analogous codes can be defined for any lattice with two-colorable faces; a generalization which is referred to as surface codes [47]. In 2006, Héctor Bombín and Miguel Ángel Martín-Delgado showed that the concept could also be realized on trivalent lattices with three-colorable faces. The topological color codes they introduced [48] share the same properties as the Toric code: the stabilizer generators are intrinsically local and the encoding is in a topologically-protected code subspace. Furthermore, color codes are able to encode more qubits than the Toric code and, for some specific lattice structures, even gain additional computational capabilities. This chapter introduces the quantum setup required for topological color codes as well as the mapping to related classical statistical spin systems with disordered plaquette interactions. In addition to the comparison with the Toric code, one particular question of interest is whether the enhanced capabilities of topological color codes on a specifically-crafted square-octagonal lattice come at the price of increased susceptibility to external disturbances (compared to the regular hexagonal arrangement).

4.1 Qubit Arrangement

Topological color codes are stabilizer codes which are defined for qubit arrangements on a two-dimensional lattice which has trivalent vertices and three-colorable faces. A simple example which satisfies both conditions is the regular hexagonal or “honeycomb” lattice. For simplicity, the definition is initially carried out specifically for the hexagonal lattice, but it can be generalized to any arrangement that fits the description. For the construction of the code, consider a physical system with a qubit at every lattice site and introduce the following check operators Ŝ_i: For each hexagon ⬡_i there are two types of check operators, which correspond to Pauli operators of the form X̂^⊗6 and Ẑ^⊗6:

    Ŝ^X_i := ⊗_{ℓ∈⬡_i} X̂_ℓ ,    Ŝ^Z_i := ⊗_{ℓ∈⬡_i} Ẑ_ℓ .    (4.1)

Each of these check operators Ŝ_i acts on the qubits ℓ adjacent to the hexagon ⬡_i, analogous to the plaquette operators in the Toric code. These operators are intrinsically local and they commute pairwise, because the number of qubits shared between any pair of stabilizers is even. The Abelian group generated by the check operators is the stabilizer S of the code. As described in chapter 2.3, the code subspace C fixed by S is the simultaneous +1 eigenspace, i.e., the subspace of all states which have an eigenvalue of +1 for all check operators:

    C = { |ψ⟩ : Ŝ^X_i |ψ⟩ = Ŝ^Z_i |ψ⟩ = +|ψ⟩  ∀ Ŝ^X_i, Ŝ^Z_i ∈ S } .    (4.2)
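The pairwise commutation can be verified mechanically: an X-type and a Z-type Pauli product anticommute on every qubit they share, so they commute overall precisely when the overlap is even. A toy check (the qubit labels are hypothetical, chosen only to mimic two adjacent hexagons):

```python
def commute(support_a, support_b):
    """True iff an X-type and a Z-type check operator with the given qubit
    supports commute, i.e. iff they share an even number of qubits."""
    return len(set(support_a) & set(support_b)) % 2 == 0

hex_a = {0, 1, 2, 3, 4, 5}   # qubits of one hexagon
hex_b = {4, 5, 6, 7, 8, 9}   # adjacent hexagon, sharing qubits 4 and 5
```

On a trivalent lattice with three-colorable faces, adjacent plaquettes always share an even number of qubits (here two), which is what makes the stabilizer group Abelian.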

Note that there are both Ŝ^X_i and Ŝ^Z_i check operators for each plaquette, in contrast to the alternating placement in the Toric code (see figure 4.1). In order to understand the effect of different error operators E ∈ P_n, a generalization of the error chains from the Toric code is instrumental. Note that, because every vertex is trivalent, a single faulty qubit will cause a change in eigenvalue for the three stabilizer generators on adjacent plaquettes. Furthermore, flipping a second qubit next to the initial one will correct two plaquettes but flip a fourth one. The coloring of the lattice's faces demands that the two plaquettes flipped in this manner are of the same color, and we can consider the resulting error set as an error chain connecting the two plaquettes. Because of the coloring properties, these error chains come in three flavors, which can also branch when chains of each color meet at one vertex (see figure 4.1). In analogy to the Toric code, an error chain E is said to have a border ∂E whenever some plaquette operator has an odd number of adjacent faulty qubits. Such an error chain anticommutes with some of the stabilizer generators and is therefore a detectable error. The normalizer of the code consists of all


Figure 4.1: Qubit arrangement for topological color codes on a regular honeycomb lattice. (a) Hexagon plaquettes are the location of two check operators each, one of type Ŝ^X = X̂^⊗6 and one of type Ŝ^Z = Ẑ^⊗6. (b) The faces of the lattice are three-colorable (labeled A, B and C) and we can associate a color with each of the stabilizer generators. (c) Connected sets of qubits can be considered chains of errors connecting the plaquettes at the boundary of the chain. However, with the colorability of the underlying lattice, there are now chains of three different colors, depending on the color of the plaquettes they reside on. Note that there are three colors of chains for each error type (i.e., a total of six for color codes, compared to one per type in the Toric code). (d) Branching between loops of different colors is a novel feature for color codes which is not present in the Toric code.


closed error chains. Loops which are local leave the code space invariant because they can be represented as a product of check operators (and are therefore in the stabilizer S). Homologically non-trivial error chains, on the other hand, affect the code subspace and are considered undetectable errors. The smallest possible loop consists of the six qubits around an elementary hexagon plaquette (see figure 4.2). Note that these minimal loops are elementary equivalences that can be used to construct chains of the same homology. Apart from the regular hexagonal arrangement, which was studied earlier by Katzgraber et al. [79], another interesting setup to study is the square-octagonal lattice (see figure 4.3). While topological quantum memories excel at preserving quantum states, they can also be used for computation under certain conditions. In the context of a quantum error code, transversal gate implementation refers to the possibility of applying a gate to the encoded logical state simply by applying a (set of) gates to the individual physical qubits. While some gates can be applied transversally for all topological codes, this is not the case for the whole Clifford group of quantum gates, an important group of manipulations that is used frequently in examples of quantum computation and in particular quantum error correction. Both the Toric code as well as topological color codes in a hexagonal arrangement require code deformations [52, 81, 82] to implement all gates from the Clifford group. These manipulations are more complicated and prone to errors because they require modifications of the code. The square-octagonal arrangement on the other hand has the special property that all its stabilizer generators have support on a number of qubits that is a multiple of four. This fact, together with the properties of general color codes, has been shown to allow for the transversal implementation of the whole Clifford group of gates [48].
By comparing the error threshold of color codes in this potent arrangement to the regular hexagonal lattice and to the Toric code, we can estimate whether the enhanced computing capabilities come at the price of reduced robustness against environmental noise.

4.2 Related Classical Spin Models

Because topological color codes are Calderbank-Shor-Steane (CSS) codes [83, 84] (i.e., their stabilizer generators are either products of X̂ or of Ẑ Pauli operators, but not both), qubit-flip and qubit-phase errors can be treated independently. Errors of type X̂ cause violations of Ẑ-type check operators while commuting with X̂-type check operators, and vice versa. Considering the regular hexagonal lattice and only qubit-flip errors for now, the procedure is as in the mapping for the Toric code (chapter 2.5): The goal is to formulate the fluctuations


Figure 4.2: Error chains and cycles of different homology: (a) This red loop represents the smallest possible loop, which consists of the qubits around a single hexagonal plaquette. Since this corresponds to a stabilizer generator, such loops do not affect the code subspace. Indeed, all homologically trivial loops can be decomposed into these elementary loops. (b) The green system-spanning loop represents a homologically non-trivial error chain which affects the code subspace and is at the same time not detectable. (c) An open blue error chain which causes negative eigenvalues in the error syndrome for the two blue hexagons at its boundary. The error can be fixed, for instance, by reconnecting the two points (as indicated with the dashed blue chain). While this corrects for a different error than has actually occurred, the difference is merely a trivial cycle which does not affect the encoded information. (d) Branching of error chains can lead to a more complex error syndrome where check operators of different color have a negative eigenvalue. Such an error can be fixed by applying another branched error chain with the same boundary.


Figure 4.3: Square-octagonal arrangement of qubits: Besides satisfying the constraints on vertex coordination number and coloring properties, all its stabilizer generators have support on a number of qubits that is a multiple of four. (a) This set of errors constitutes an open blue chain which causes negative eigenvalues in the error syndrome for the two blue squares at its boundary. (b) This local red loop is homologically trivial and does not affect the error syndrome or the encoded information. (c) The green error chain spans the system and represents a non-trivial error which affects the code subspace and is at the same time not detectable.

of error chains in terms of flipped domains in a related classical spin system. There are check operators of type Ŝ^Z_i = Ẑ^⊗6 on all the hexagons (not just one color as in the Toric code), each of which will contribute an eigenvalue λ_i ∈ {±1} to the error syndrome. Starting from a reference error chain E which occurs with probability

    P(E) = (1 − p)^(N−|E|) p^|E| = (1 − p)^N ∏_ℓ [p/(1 − p)]^(n^E_ℓ) ,    (4.3)

we would like to sample all errors which share the same error syndrome as E. To share the same syndrome, they can only differ by a (set of) closed cycles and can therefore be generated by adding elementary hexagonal loops to the reference chain. This can be expressed by attaching a classical spin variable s_i to each plaquette, i.e., where the stabilizer generators Ŝ^Z_i reside. From this it can be seen that any choice of the classical spins

Figure 4.4: Placement of Ising spins: The classical spins are placed on each of the plaquettes of the quantum lattice (i.e., they are situated on the triangular lattice, which is dual to the honeycomb lattice). Because each of the Ising spins corresponds to an elementary hexagon loop, the set of all possible spin configurations represents all possible cycles which are homologically trivial. When considering a reference chain E, the spin configurations enumerate all possible fluctuations of chains which are in the same error class Ē. In the figure, the reference chain E differs from the sampled chain E′ (dashed) by a homologically trivial loop with area two. Note that applying X̂^⊗6 to the plaquettes with flipped spins does indeed swap the two error chains.

s_i does indeed denote a valid set of cycles C via

    u^C_ℓ = s_i s_j s_k = { +1 if ℓ ∉ C ;  −1 if ℓ ∈ C } ,    (4.4)

where the indices i, j, k denote the spins on plaquettes adjacent to the qubit site ℓ. In particular, flipping a spin is equivalent to applying an elementary chain operator X̂^⊗6_i which flips all the qubits adjacent to the plaquette ⬡_i. Notice that adding an elementary loop to the chain does not change the error syndrome or affect the encoded information (because the operator X̂^⊗6_i is in the stabilizer S). In accordance with the mapping for the Toric code, the relative probability of the modified error chain E′ can then be expressed as

    P(E′) = P(E) ∏_ℓ [p/(1 − p)]^(n^C_ℓ (1 − 2 n^E_ℓ)) ∝ ∏_ℓ exp[J τ_ℓ u_ℓ] ,    (4.5)

which is valid if the constants τ_ℓ describe a reference chain and the Nishimori condition is met:

    e^(−2 J τ_ℓ) = [p/(1 − p)]^(1 − 2 n^E_ℓ) .    (4.6)

In this representation of chains as spin configurations, the error state of each qubit is dictated by the product of spins in equation (4.4). Therefore each qubit contributes a term of the form J τ_ijk s_i s_j s_k, yielding an overall partition function of the form

    Z_{τ_ijk} = Σ_{s} exp[ J Σ_{i,j,k∈△} τ_ijk s_i s_j s_k ] ,    (4.7)

which describes a disordered statistical system with a multi-spin interaction for each plaquette. Note that the spins s_i defined for the mapping are located on the triangular lattice which is dual to the original hexagonal arrangement. Every qubit corresponds to a triangle in the new lattice and dictates the sign of the associated plaquette interaction via τ_ijk. Thus the Hamiltonian for the system related to color codes [as described by equation (4.7)] is given by

    H = −J Σ_{i,j,k∈△} τ_ijk s_i s_j s_k ,    (4.8)

where J is a global coupling constant chosen according to the Nishimori condition (4.6), the Ising spins take values s_i ∈ {±1}, and the mapping requires the τ_ijk to be quenched random interactions, distributed according to the error rate p:

    P(τ_ijk) = { 1 − p for τ_ijk = +1 ;  p for τ_ijk = −1 } .    (4.9)
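The Nishimori condition (4.6) and the disorder distribution (4.9) can be combined into a small setup routine. The sketch below uses illustrative helper names and considers the uniform case (a trivial reference chain, n^E_ℓ = 0), where equation (4.6) reduces to e^(−2J) = p/(1 − p):

```python
import numpy as np

def nishimori_coupling(p):
    """Dimensionless coupling J on the Nishimori line, from
    exp(-2J) = p / (1 - p)  =>  J = 0.5 * log((1 - p) / p)."""
    return 0.5 * np.log((1.0 - p) / p)

def sample_disorder(n_triangles, p, rng):
    """Quenched interaction signs tau_ijk of eq. (4.9): one sign per
    triangle of the dual lattice (= per qubit), -1 with probability p."""
    return np.where(rng.random(n_triangles) < p, -1, +1)
```

For instance, at p = 0.109 the Nishimori coupling evaluates to J ≈ 1.05.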

While the mapping and resulting partition function is identical for a topological color code in the square-octagonal arrangement, the related classical statistical system is defined on a different lattice: The dual lattice to the square-octagonal setup consists of triangles arranged in a pattern resembling the “Union Jack”, i.e., with a coordination number of 4 or 8 at each lattice site.

4.3 Numerical Simulation

The Hamiltonian in equation (4.8) belongs to the family of (disordered) p-spin models. For the particular case at hand, i.e., p = 3 in two dimensions, it has recently been shown [85] that finding the ground state is an NP-hard problem. The dual lattice considered in the mapping of topological color codes inherits the coloring properties of the initial qubit arrangement: The vertices of the triangular lattice can be decomposed into three colored sublattices. Because each term in the Hamiltonian is a product of differently colored spins, the system has a global Z2 × Z2 symmetry that is realized by flipping the spins on two of the sublattices. Note that applying the global symmetry does not affect the error chains under consideration, because the two flipped spins in equation (4.4) cancel out. Furthermore, because of the symmetry, the absolute value of the total magnetization is not a good order parameter, as it is not invariant under the symmetry. Instead, the ordering of the spins on a particular sublattice σ is investigated:

    ⟨m_σ⟩ = (1/N_σ) Σ_{i∈σ} s_i ,    (4.10)

where N_σ denotes the number of spins in the sublattice. This observable is by design not affected by the symmetries and can be used as an order parameter. When finite systems with periodic boundary conditions are studied, particular care must be taken to ensure these sublattices are well defined. In the numerical simulations this is achieved by investigating linear system sizes which are a multiple of 6: While the regular triangular lattice requires the linear system size to be a multiple of 3 to ensure three-colorability (depending on the lattice representation), the Union Jack lattice requires L to be even. To locate the critical temperature of the phase transition, a finite-size correlation function ξ_L is constructed from the sublattice magnetization as

described in chapter 3.3. The Hamiltonian in equation (4.8) is then simulated, at fixed error rates p, using the parallel tempering Monte Carlo technique [72]. Equilibration of the individual simulations is tested using logarithmic binning of the data, where the last three bins are required to agree within error bars. The detailed simulation parameters can be found in table 4.1.

Table 4.1: Simulation parameters: L is the linear system size, Nsa is the number of disorder samples, t_eq = 2^te is the number of equilibration sweeps, Tmin [Tmax] is the lowest [highest] temperature, and NT the number of temperatures used.

    p     L       Nsa     te   Tmin    Tmax    NT
    0.00  12, 18  50      18   2.200   2.350   31
    0.00  24, 30  50      19   2.200   2.350   31
    0.00  36      50      20   2.200   2.350   31
    0.01  12, 18  5 000   18   1.900   2.400   51
    0.01  24, 30  5 000   19   1.900   2.400   51
    0.01  36      5 000   20   1.900   2.400   51
    0.02  12, 18  5 000   18   1.900   2.400   51
    0.02  24, 30  5 000   19   1.900   2.400   51
    0.02  36      5 000   20   1.900   2.400   51
    0.03  12, 18  5 000   18   1.700   2.200   51
    0.03  24, 30  5 000   19   1.700   2.200   51
    0.03  36      5 000   20   1.700   2.200   51
    0.04  12, 18  5 000   18   1.700   2.200   51
    0.04  24, 30  5 000   19   1.700   2.200   51
    0.04  36      5 000   20   1.700   2.200   51
    0.06  12, 18  5 000   18   1.600   2.100   51
    0.06  24, 30  5 000   19   1.600   2.100   51
    0.06  36      5 000   20   1.600   2.100   51
    0.08  12, 18  5 000   18   1.400   2.000   61
    0.08  24, 30  5 000   19   1.400   2.000   61
    0.08  36      5 000   20   1.400   2.000   61
    0.10  12, 18  10 000  18   0.750   2.600   38
    0.10  24, 30  10 000  19   0.750   2.600   38
    0.10  36      10 000  20   0.750   2.600   38
    0.10  42      10 000  22   0.750   2.600   38
    0.11  —       —       —    —       —       —
    0.11  —       —       —    —       —       —
    0.11  —       —       —    —       —       —
    0.11  —       —       —    —       —       —
    0.12  12, 18  5 000   18   0.750   2.600   38
    0.12  24, 30  5 000   19   0.750   2.600   38
    0.12  36      5 000   20   0.750   2.600   38
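The simulation building blocks can be sketched compactly. The fragment below is illustrative only (not the thesis code): it represents the triangular dual lattice as an L × L periodic grid with one up- and one down-pointing triangle per cell, and shows a single parallel-tempering exchange sweep with the detailed-balance acceptance rule:

```python
import numpy as np

def energy(spins, tau_up, tau_dn, J=1.0):
    """Energy of the three-body Hamiltonian (4.8): spins on an L x L
    periodic grid, one quenched sign per elementary triangle (= per qubit
    of the honeycomb code)."""
    sx = np.roll(spins, -1, axis=0)        # neighbor in +x direction
    sy = np.roll(spins, -1, axis=1)        # neighbor in +y direction
    sxy = np.roll(sx, -1, axis=1)          # diagonal neighbor
    up = tau_up * spins * sx * sy          # up-pointing triangles
    dn = tau_dn * sxy * sx * sy            # down-pointing triangles
    return -J * float(up.sum() + dn.sum())

def pt_sweep(configs, energies, betas, rng):
    """One parallel-tempering exchange sweep: neighboring temperatures
    swap configurations with probability
    min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    for i in range(len(betas) - 1):
        delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if rng.random() < min(1.0, np.exp(delta)):
            configs[i], configs[i + 1] = configs[i + 1], configs[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
```

Between exchange sweeps, each replica performs ordinary single-spin Metropolis updates at its own temperature.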


4.4 Results

At low temperatures T and small disorder rates p, the system is expected to exhibit ferromagnetic ordering for the individual sublattices. Above a disorder-dependent critical temperature Tc(p), there is a phase transition to a disordered, paramagnetic state. The critical transition temperature Tc(p) is determined by performing large-scale Monte Carlo simulations of the Hamiltonian in equation (4.8) for different finite system sizes L. The transition is then identified by locating the crossing of the two-point finite-size correlation functions ξ_L, as indicated in chapter 3.3. By calculating the transition temperature Tc(p) for multiple discrete values of the disorder rate, the phase boundary in the T–p phase diagram can be pinpointed. The critical error threshold is then found where the Nishimori line from equation (4.6) intersects this phase boundary. For the pure system, where the error rate vanishes, the transition temperature Tc(p = 0) can be calculated analytically [86] and agrees with the one for the two-dimensional Ising model, Tc ≈ 2.2692: Interestingly, the three-body Ising model on the Union Jack lattice shares the same transition temperature as the regular Ising model, despite being in a different universality class [86, 87]. This is also what we find numerically, as indicated in figure 4.5. As errors are introduced into the system, the transition temperature is slowly reduced. At p = 0.100 there is still a clean crossing of the correlation functions, indicating a transition at Tc(p) ≈ 1.29. For p ≈ 0.109 critical behavior sets in, and for p = 0.110 no sign of a transition is visible. The critical error rate is thus estimated as pc = 0.109(2), as indicated in figure 4.5. The findings are summarized with the depiction of the whole estimated phase diagram in figure 4.6.
These results show that the error threshold for topological color codes in a square-octagonal arrangement agrees with the previous estimates for hexagonal color codes [79] and for the Toric code [88, 89, 90, 91]. In fact, the phase boundaries of the spin models related to these three quantum codes are identical within error bars, despite the pronounced differences in arrangement and connectivity. In particular, this also shows that the enhanced computing capabilities on the square-octagonal lattice do not come at the expense of increased susceptibility to external noise.



Figure 4.5: Finite-size correlation function ξ_L/L as a function of temperature T: (a) The pure model with p = 0.000 shows a clear crossing of the correlation functions for different system sizes. The critical temperature coincides with the one found for the two-dimensional Ising model. (b) At a disorder rate of p = 0.100, a clear crossing is still visible, albeit at a far lower transition temperature of Tc ≈ 1.28. (c) For p ≲ pc = 0.109 there is still a signature of a transition (data for different L cross). (d) At the critical error threshold, corrections to scaling are large and detecting the transition temperature becomes difficult.



Figure 4.6: Estimated T – p phase diagram for color codes on a square-octagonal lattice: The dashed phase boundary is a guide to the eye, while the actual numerical results for Tc (p) are represented by the data points. The critical error threshold is found where the Nishimori line (solid blue) intersects the phase boundary. For disorder rates exceeding the critical error threshold of pc = 0.109(2), the ferromagnetic ordering on the sublattices is lost. Note that the error threshold coincides with the results for the Toric code and topological color codes in a hexagonal arrangement.


Chapter 5

Effect of the Depolarizing Channel

“We have normality. I repeat, we have normality. Anything you still can't cope with is therefore your own problem.”
— Douglas Adams

For simplicity, the color codes introduced in chapter 4 were analyzed for individual qubit-flip errors independent of potential phase errors. This is possible whenever errors are uncorrelated and the check operators are products of only one type of Pauli operator (i.e., for CSS codes). However, while this simplest model for noise lends itself to an initial code analysis, it is not necessarily a good description of the noise we expect to find in real-world applications. In particular, knowledge of the typical correlations in an error channel might allow for the design of a custom-tailored code that performs better under these specific circumstances. This chapter outlines the mapping for a general error channel whose error types can potentially be correlated. Because the effects of single-qubit operators can be decomposed into the three Pauli matrices X̂, Ŷ and Ẑ, such a general error channel can be characterized by the probability p_w for each of these errors to occur:

    E_general(|ψ⟩) : |ψ⟩ → 1|ψ⟩  with probability 1 − p_x − p_y − p_z ,
                           X̂|ψ⟩  with probability p_x ,
                           Ŷ|ψ⟩  with probability p_y ,
                           Ẑ|ψ⟩  with probability p_z .    (5.1)

The subsequent numerical analysis in this chapter focuses on the stability of the Toric code and topological color codes under the depolarizing channel, where each type of error occurs with equal probability. This channel plays a fundamental role in quantum-information protocols where noise has to be taken into account, including quantum cryptography [92, 93], quantum distillation of entanglement [94], and even quantum teleportation [95]. Experimentally, controllable depolarization has been realized recently in photonic quantum-information channels [96] with a Ti:sapphire pulsed laser and nonlinear crystals, as well as in two-qubit Bell states [97].

5.1 Mapping for a General Channel

The ultimate goal is to relate the resilience of the stabilizer code to depolarizing noise to the ordered phase of a suitably chosen classical spin model. It is instructive to carry out the mapping for the general channel in equation (5.1), whose effect on the density operator ρ is:

    E_p(ρ) = (1 − p) ρ + Σ_{w=x,y,z} p_w σ̂_w ρ σ̂_w .    (5.2)

Here the probability for each type of error to occur is p_w ∈ [0, 1], with p := Σ_w p_w. Note that the depolarizing channel is the special case where p_w = p/3. When the channel is applied to each physical qubit in a quantum memory, the probability for a set of Pauli errors E ∈ P_n is

    P(E) = (1 − p)^N ∏_{w=X,Y,Z} [p_w/(1 − p)]^(|E_w|) ,    (5.3)

where N is the total number of qubits and |E_w| is the number of appearances of σ̂_w in the tensor product forming E. In a generalization of the mapping in chapter 2.5, the related classical spin Hamiltonian is then constructed as follows:

1. Attach a classical spin s^w_i to the site of each check operator Ŝ^w_i. For the Toric code, this associates one spin with every plaquette, alternating in a checkerboard pattern of type s^X and s^Z. For color codes, because there are multiple check operators per plaquette, there are multiple Ising spins at the same site of the dual lattice, one for each type of check operator.

2. Associate with each single-qubit Pauli operator σ̂^w_ℓ an interaction term of the form

    J_w ∏_{i,w′} s^{w′}_i = J_w s^X_1 s^X_2 ··· s^X_{N_X} s^Z_1 s^Z_2 ··· s^Z_{N_Z} ,    (5.4)

such that the spins s^{w′}_i in the product correspond to the check operators Ŝ^{w′}_i which anticommute with σ̂^w_ℓ. For instance, an error

E↵ect of the Depolarizing Channel of type ˆ`Y anticommutes both with adjacent check operators of ˆ as well as of type Z. ˆ Therefore the product in (5.4) contains type X ˆ ` on the other all adjacent spins sX and sZi . For an error of type Z i hand, the product only contains the spins of type sX i . 3. Attach to each coupling an interaction variable ⌧`w = ±1 dictated by the Pauli error E, through the conditions ˆ`w E = ⌧`w E ˆ`w . Thus the interaction ⌧`w is negative when the corresponding Pauli error ˆ`w is part of the chain. The resulting Hamiltonian takes the general form X Y w H= Jw ⌧`w si , `,w 7i 3`

(5.5)

where the sum is over all Pauli operators ˆ`w and there are only three di↵erent couplings Jw since we set J ˆ`w := Jw , with w = x, y, z and ` the qubit label. For the mapping, we require the interaction constants to be Jw =

1 px py pz log 2 , 4 pw (1 p)

w = x, y, z .

(5.6)

This relates the error probability in equation (5.3) to the Boltzmann factor for the ordered state, {s_i = 1}, given the interactions generated by E:

    P(E) \propto e^{-H_E(\{s_i = 1\})} .   (5.7)

Note that replacing the error E → E′ = E + C, with C a homologically trivial cycle, is equivalent to considering a different spin configuration in the classical Hamiltonian where the “inside” of C is flipped. Therefore different spin configurations correspond to error chain fluctuations within the same error class. Finally, to complete the mapping, the label E in the Hamiltonian must, as in the previous mappings in chapters 2.5 and 4.2, describe quenched randomness. In particular, the coupling configuration associated with an error chain E must appear with probability P(E). Equivalently, this means that for each qubit ℓ the random coupling signs are distributed according to

    P(+1,+1,+1) = 1-p ,
    P(+1,-1,-1) = p_x , \quad P(-1,+1,-1) = p_y , \quad P(-1,-1,+1) = p_z .   (5.8)
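The disorder distribution (5.8) and the couplings (5.6) translate directly into code. The following is a minimal sketch (function names and structure are ours, not the thesis's production code), with the inverse temperature absorbed into the couplings:

```python
import math
import random

def couplings(px, py, pz):
    """Interaction constants of equation (5.6):
    J_w = -(1/4) log[ p_x p_y p_z / (p_w^2 (1 - p)) ]."""
    p = px + py + pz
    return {w: -0.25 * math.log(px * py * pz / (pw * pw * (1.0 - p)))
            for w, pw in (("x", px), ("y", py), ("z", pz))}

def sample_coupling_signs(px, py, pz, rng=random):
    """Draw (tau_x, tau_y, tau_z) for one qubit according to equation (5.8)."""
    r = rng.random()
    if r < px:
        return (+1, -1, -1)            # X error on this qubit
    if r < px + py:
        return (-1, +1, -1)            # Y error
    if r < px + py + pz:
        return (-1, -1, +1)            # Z error
    return (+1, +1, +1)                # no error, probability 1 - p

def single_qubit_energy(J, tau):
    """H_E({s_i = 1}) restricted to a single qubit: -sum_w J_w tau_w."""
    return -(J["x"] * tau[0] + J["y"] * tau[1] + J["z"] * tau[2])
```

For a single qubit, exp(−H) then reproduces the relative error probabilities, e.g. e^{−H_X}/e^{−H_I} = p_x/(1−p), which is precisely the correspondence expressed by equation (5.7).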

This defines the disorder distribution required to sample for a generic channel. In the case of the depolarizing channel, this simplifies to

    p_x = p_y = p_z = p/3 \quad \text{and} \quad J_x = J_y = J_z = J .   (5.9)

The resulting model has two parameters, p and J, and the Nishimori condition becomes

    -4J = \ln \frac{p/3}{1-p} .   (5.10)

5.2 Toric Code with Depolarizing Noise

In contrast to the qubit-flip channel, the depolarizing channel also encompasses a chance for a combined Ŷ = i X̂ Ẑ error (otherwise this would only happen with a frequency of p_y = p_x p_z). As such, the depolarizing channel is more general, because it allows for the unified, correlated effect of all three basic types of errors. Furthermore, this form of noise is important, because it can also be interpreted as an error channel which replaces the qubit with the completely mixed state with a probability of p′ = 4p/3 [20]:

    \mathcal{E}_{\rm depolarize}(\rho) = (1-p)\rho + \sum_{w=x,y,z} \frac{p}{3}\, \hat{\sigma}^w \rho\, \hat{\sigma}^w = (1-p')\rho + p'\, \frac{I}{2} .   (5.11)

This models the situation where there might be strong interaction with the environment that is likely to affect the entire qubit state by replacing it with one that is completely unknown.

For the Toric code, the single-qubit operators X̂ and Ẑ produce two-body interactions, because each bit flip [phase flip] affects the stabilizer generators on two neighboring plaquettes (see chapter 2.5). The Ŷ-errors combine correlated qubit-flip and phase-flip errors and anticommute with both Ŝ^X and Ŝ^Z check operators. Therefore they introduce four-body interactions as indicated in figure 5.1. The result is a spin model with both two-spin and four-spin interactions, where the coupling signs τ_ℓ are parametrized by a Pauli error chain E ⊂ P_n:

    H_E = -J \sum_{i,j,k,l \in \{+\}} \left( \tau_{ij}^X s_i^X s_j^X + \tau_{kl}^Z s_k^Z s_l^Z + \tau_{ijkl}^Y s_i^X s_j^X s_k^Z s_l^Z \right) .   (5.12)

The set {+} describes all qubit locations, which correspond to the crossing points of error chains or, equivalently, to the crossings of X̂-type and Ẑ-type interactions in the classical model: As a geometrical interpretation, we can consider the spins of type s_i^X to reside on the bottom layer of two stacked two-dimensional Ising square lattices, while spins of type
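For concreteness, the Hamiltonian (5.12) can be evaluated naively as follows (a deliberately unoptimized sketch; the convention for which plaquette and vertex spins an edge couples is one consistent choice made here, not necessarily the thesis's):

```python
import numpy as np

def toric_depolarizing_energy(sx, sz, tau, J=1.0):
    """Energy of equation (5.12) for one disorder sample.

    sx, sz : (L, L) arrays of +/-1 spins on the two stacked square lattices.
    tau    : dict of +/-1 coupling signs tau["x"], tau["z"], tau["y"], each of
             shape (2, L, L) -- one entry per qubit (edge), split into
             horizontal (index 0) and vertical (index 1) edges.
    """
    L = sx.shape[0]
    E = 0.0
    for d in range(2):                 # 0: horizontal edges, 1: vertical edges
        for i in range(L):
            for j in range(L):
                if d == 0:             # X-type pair: plaquettes above/below
                    px_pair = sx[i, j] * sx[(i - 1) % L, j]
                    pz_pair = sz[i, j] * sz[i, (j + 1) % L]
                else:
                    px_pair = sx[i, j] * sx[i, (j - 1) % L]
                    pz_pair = sz[i, j] * sz[(i + 1) % L, j]
                E -= J * (tau["x"][d, i, j] * px_pair
                          + tau["z"][d, i, j] * pz_pair
                          + tau["y"][d, i, j] * px_pair * pz_pair)
    return E
```

In the pure limit (all τ = +1) the fully aligned configuration yields −3J per qubit, and flipping an entire layer leaves the energy invariant, reflecting the global Z2 symmetry of each sublattice.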

Effect of the Depolarizing Channel

Figure 5.1: Graphical rendering of the lattice for the Toric code under the depolarizing channel. The spin Hamiltonian in the mapping can be thought of as two stacked square lattices which are shifted by half a lattice spacing. Apart from the regular Ising two-spin interactions (which are indicated in orange and light blue), there is an additional four-body term in the Hamiltonian which introduces interactions between the two layers (shown in green color).

s_k^Z are situated on the top layer (see figure 5.1). The two sublattices correspond to the checkerboard decomposition of the plaquettes in the original Toric arrangement and are therefore shifted by half a lattice spacing (see figure 5.2). Without the third interaction term (i.e., for J_y = 0), these are two independent random-bond Ising systems with the typical two-body nearest-neighbor interactions and bond strengths J_x and J_z, respectively. The additional σ̂_ℓ^Y term introduces vertical correlations between the two layers by adding a four-spin interaction with weight J_y where the bonds cross.


Figure 5.2: Placement of classical spins in the mapping: Because the check operators are assigned to different subsets of faces in a checkerboard decomposition, the error chains to be considered are on different sublattices. As for the Toric code under the qubit-flip channel, the spins on the darker tiles are associated with qubit-flip errors (red chain) and reversed domains correspond to fluctuations in qubit-flip error chains. The analogous solution for independent qubit-phase errors (blue chain) gives rise to the sublattice of spins on light tiles. Since the depolarizing channel considers the combined and correlated effect of both qubit-flip and qubit-phase errors, both sets of spins are needed in this case.


5.3 Color Codes with Depolarizing Noise

The mapping for a general channel can also be applied to color codes [48], in which case the physical qubits are placed on the vertices of a trivalent lattice with three-colorable faces, such as, for example, the honeycomb lattice. Now there are two check operators Ŝ_i^X and Ŝ_i^Z attached to each plaquette □_i, taking the form

    \hat{S}_i^X := \bigotimes_{\ell \in \square_i} \hat{\sigma}_\ell^X , \qquad \hat{S}_i^Z := \bigotimes_{\ell \in \square_i} \hat{\sigma}_\ell^Z ,   (5.13)

respectively, with ℓ running over the qubits on the vertices of the plaquette □_i. Because the computing capabilities of color codes depend on the underlying lattice where the qubits are placed [98], two different arrangements are investigated: the regular honeycomb lattice for its simplicity, and the square-octagonal lattice, which allows for the implementation of additional types of quantum gates. In the mapping to a statistical-mechanical model these two arrangements are related to their dual lattices: the regular triangular and Union Jack lattices, respectively.

Since there are two stabilizers for each plaquette, the mapping places two Ising variables s_i^X and s_i^Z per plaquette. For convenience, an artificial spin variable s_i^Y = s_i^X s_i^Z can be introduced. The Hamiltonian is then given by:

    H = -J \sum_{i,j,k \in \{\triangle\}} \left( \tau_{ijk}^X s_i^X s_j^X s_k^X + \tau_{ijk}^Z s_i^Z s_j^Z s_k^Z + \tau_{ijk}^Y s_i^Y s_j^Y s_k^Y \right) .   (5.14)

Note that the first two terms describe the ordinary triangle plaquette interactions that were analyzed in the mapping of color codes (chapter 4.2). The Hamiltonian (5.14) can be interpreted as two triangular models on stacked lattices. Without the third term in the Hamiltonian (i.e., for J_y = 0), the two lattices are independent, representing the scenario of independently correcting errors of type X̂ and Ẑ. When present, this term gives rise to six-body interactions (because each s_i^Y is actually a product of two spins), which introduces correlations between the two layers (see figures 5.3 and 5.4). Interestingly, the resulting model can be considered an interacting eight-vertex model on a Kagomé lattice [99].
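That the third term of (5.14) is a genuine six-body interaction follows from the definition s_i^Y = s_i^X s_i^Z alone; a brute-force check over all spin assignments of a single triangle (illustrative code, not part of the thesis):

```python
import itertools

def y_triangle(sx, sz):
    """Y-triangle term of equation (5.14) built from composites s^Y = s^X s^Z."""
    sy = [a * b for a, b in zip(sx, sz)]
    return sy[0] * sy[1] * sy[2]

# The Y term factorizes into the product of the X- and Z-triangle terms,
# i.e., a six-spin interaction coupling the two layers:
for sx in itertools.product((-1, 1), repeat=3):
    for sz in itertools.product((-1, 1), repeat=3):
        assert y_triangle(sx, sz) == (sx[0]*sx[1]*sx[2]) * (sz[0]*sz[1]*sz[2])
```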

Figure 5.3: Placement of the classical spins for topological color codes under the depolarizing channel: Because there are two check operators for each hexagonal plaquette in color codes, the mapping accordingly attaches two Ising spins per plaquette. Both qubit-flip and qubit-phase errors commute with one of the check operator types and therefore affect only one spin per plaquette. Furthermore, the combined Ŷ-errors affect both spins on all adjacent plaquettes. Consequently, the classical statistical spin model exhibits three-body and six-body interactions. At the same time, with spins sharing the same plaquette, the resulting stacked triangular spin models are not shifted, in contrast to the Toric code. The red [blue] error chains indicate sets of qubits with a qubit-flip [phase] error, with the first [second] spins flipped on enclosed plaquettes, accordingly.


Figure 5.4: Graphical rendering of the lattice for topological color codes under the depolarizing channel: The arrangement arising from the mapping can be visualized as two stacked triangular lattices with triangle plaquette interactions. The Hamiltonian terms for the top (bottom) lattice are the product of the three red (blue) spins in a triangle, with an additional quenched random disorder factor of ±1, indicated in orange (light blue). Furthermore, the possible error correlations for the depolarizing channel introduce an additional six-body interaction term between the two lattices, depicted in green.

5.4 Numerical Details

As illustrated in figure 5.1, the lattice of the Toric code can be partitioned into sublattices such that the only interconnection is given by the four-body interactions of the Hamiltonian in equation (5.12). The ground state of the pure system is then realized when the spins of each sublattice are aligned (i.e., there is a global Z2 symmetry). In this case the sublattice magnetization is a good order parameter,

    m = \frac{1}{N_\Lambda} \sum_{i \in \Lambda} S_i ,   (5.15)

where Λ denotes the sublattice and N_Λ is the number of spins in Λ. Similarly, both of the triangular layers for topological color codes are tripartite, with spins aligned in each sublattice when the system is in the ordered ground state. Hence, an order parameter analogous to equation (5.15) can be defined for a suitable sublattice Λ. Note that particular caution is required when implementing the periodic boundary conditions to ensure that these distinct sublattices are well defined. In accordance with chapter 3.3, a two-point finite-size correlation length can then be derived from the sublattice magnetization to detect the transition.

In all simulations, equilibration is verified by ensuring that all observables averaged over logarithmically-increasing amounts of simulation time are time independent. Once the data for all observables agree for three logarithmic bins, the Monte Carlo simulation for that system size is deemed to be in thermal equilibrium. The detailed simulation parameters can be found in table 5.1.

Table 5.1: Simulation parameters: p is the error rate, L is the linear system size, Nsa is the number of disorder samples, teq = 2^b is the number of equilibration sweeps (system size times number of single-spin Monte Carlo updates), Tmin [Tmax] is the lowest [highest] temperature, and NT the number of temperatures used. For the Toric code, we use L = {12, 16, 18, 24, 32, 36}, while for color codes L = {12, 15, 18, 24, 30, 36}, following the coloring constraint that the system size must be a multiple of 3.

p           | L      | Nsa   | b  | Tmin  | Tmax  | NT
0.00        | 12, 16 | 5 000 | 18 | 3.500 | 4.000 | 42
0.00        | 18, 24 | 1 000 | 19 | 3.500 | 4.000 | 42
0.00        | 30, 36 |   500 | 20 | 3.500 | 4.000 | 42
0.04 – 0.05 | 12, 16 | 5 000 | 20 | 3.200 | 3.800 | 42
0.04 – 0.05 | 18, 24 | 1 000 | 21 | 3.200 | 3.800 | 42
0.04 – 0.05 | 30, 36 |   500 | 22 | 3.200 | 3.800 | 42
0.08 – 0.12 | 12, 16 | 5 000 | 20 | 2.700 | 3.500 | 42
0.08 – 0.12 | 18, 24 | 1 000 | 22 | 2.700 | 3.500 | 42
0.08 – 0.12 | 30, 36 |   500 | 24 | 2.700 | 3.500 | 42
0.15        | 12, 16 | 5 000 | 20 | 2.300 | 3.200 | 42
0.15        | 18, 24 | 1 000 | 22 | 2.300 | 3.200 | 42
0.15        | 30, 36 |   500 | 24 | 2.300 | 3.200 | 42
0.17 – 0.20 | 12, 16 | 5 000 | 21 | 1.500 | 2.800 | 42
0.17 – 0.20 | 18, 24 | 1 000 | 23 | 1.500 | 2.800 | 42
0.17 – 0.20 | 30, 36 |   500 | 25 | 1.500 | 2.800 | 42

5.5 Code Optimizations

When investigating the Hamiltonians in equations (5.12) and (5.14) using the parallel tempering Monte Carlo method, simulations are severely slowed by the multi-spin interactions. A naïve calculation of the energy change for an individual spin flip has to consider all adjacent Hamiltonian terms (up to 16 for the Union Jack lattice) and calculate, for each of them, the product of its spins (up to 6 spins in a term). Fortunately, because all spins and interaction constants are ±1 and can be represented using a single bit, this process can be optimized drastically:

• We first note that in a single update sweep, each Hamiltonian term will be calculated several times – once for each of the spins involved. With this, the complexity of calculating the change in energy for an individual spin flip can be reduced considerably by keeping track of the intermediate product for each Hamiltonian term: Rather than multiplying individual spins, all that is needed is the sum of a few precalculated spin products. However, this compromise also complicates spin updates – along with every spin that is flipped, all affected intermediate results also need to be updated. Nevertheless, the speed increase for this basic optimization is a factor between 2 and 3, depending on the system size.

• Because every intermediate result is only one bit of information as well, the Monte Carlo update process can be further streamlined by storing both the spins and intermediate results of an entire unit cell as bits in the same variable. This corresponds to two spin bits and 6 interaction bits both for the Toric code and color codes.¹ The benefit of this approach is that all intermediate results within the same unit cell can be updated together, along with the spin, simply by applying a XOR bit-mask to the entire variable. The additional speedup from this optimization is approximately 25% to 35%.

• Since each unit cell is represented by 8 bits, each block of four unit cells can comfortably be loaded into a 32-bit variable, allowing for updating of all the terms within the block by applying a single bit mask to the entire variable: The resulting energy difference can be calculated by applying a bit mask to select all the relevant Hamiltonian terms, followed by using the native processor instruction for bit counting to determine the resulting energy change.² The very same bit mask, along with an extra bit for the spin to be flipped, can then be applied as bit-wise XOR to update both the spin as well as all the intermediate results in one instruction. This provides another speedup of approximately 15%.

¹ The Union Jack lattice is represented as a lattice of alternating square unit cells in a checkerboard pattern. On a 64-bit machine this is achieved by storing both unit cells together as 16 bits and loading a block of four double unit cells.
² Curiously, on some exotic machines (which presumably emulate the bit-count instruction), using a lookup table for the number of bits in each byte making up the 32-bit variable is actually faster.


• Since all unit cells are loaded into the same variable, shifting the block of four to the right simply corresponds to storing the two cells on the left, applying a bit shift and loading the next two cells on the right. With this, access to memory can be further reduced by reusing the data already loaded for the previous block. While the speedup from data reuse is limited due to caching, it has still proven to consistently increase simulation speed by approximately 5–8% in large systems.

Using all these optimizations, the code runs approximately four times faster than the naïve approach, thus allowing for the simulation of a system twice the size within the same time. The details of this efficient bit-coding technique are illustrated in figure 5.5.
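The essence of the XOR trick can be illustrated in a few lines (a toy model with a hypothetical bit layout — the actual layout used in the thesis is the one shown in figure 5.5):

```python
# Toy sketch of the bit-coded update described above (hypothetical layout):
# one 8-bit unit cell stores 2 spin bits (bits 0-1) and 6 intermediate
# interaction-product bits (bits 2-7).  Flipping spin 0 toggles the spin bit
# and every intermediate product assumed to contain it, all in one XOR.

SPIN0_MASK = 0b0001_1101   # spin bit 0 plus its (assumed) adjacent products

def flip_spin0(cell: int) -> int:
    return cell ^ SPIN0_MASK

def energy_contribution(cell: int, term_mask: int = 0b1111_1100) -> int:
    """Sum of the +/-1 products selected by term_mask, via a popcount.

    A set bit encodes a product of -1 (unsatisfied term), so with n terms
    and k set bits the sum of products is (n - k) - k = n - 2k.
    """
    selected = cell & term_mask
    n = bin(term_mask).count("1")
    k = bin(selected).count("1")
    return n - 2 * k
```

The same pattern extends directly to 32-bit blocks of four cells: one wider mask, one XOR, and one hardware popcount per proposed spin flip.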

5.6 Results

For the pure system (p = 0) there is a sharp transition visible directly in the sublattice magnetization. The transition temperature T_{c,pure} ≈ 3.64 coincides with the value found analytically. For larger amounts of disorder, a transition can still be located precisely by means of the crossings in the two-point finite-size correlation function ξ_L for different system sizes, as described in chapter 3.3. Sample data for color codes in a square-octagonal arrangement are shown in figure 5.6. A clear crossing can be seen for disorder rates below 17%. As the critical threshold is approached, the correlation functions ξ_L for different system sizes only touch marginally, such that at p = 0.188 both the scenario of a crossing as well as no transition are compatible within error bars. This gives rise to the large error bars near the critical error threshold in the phase diagram shown in figure 5.7. For error rates exceeding the critical threshold, p > p_c, the lines do not meet, indicating that there is no transition in the temperature range studied. The error bars are calculated using a bootstrap analysis of 500 samples.

The crossing of the critical line T_c(p) with the Nishimori line from equation (5.10) determines the error threshold to depolarization. Remarkably, the error thresholds for the different setups investigated here all agree within error bars. The (conservative) estimate for the critical error threshold for all three models is p_c = 0.189(3). These results are summarized in figure 5.7, which shows the estimated phase diagram of all three code realizations. The numerical value of the error threshold was also verified by Masayuki Ohzeki, using an alternative approach known as the duality method [90], which was originally developed within the context of spin glasses. His estimates for all three codes agree with the result found in the numerical simulations within error bars.
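The bootstrap estimate mentioned above follows the standard resampling recipe; a generic sketch (the resampling count of 500 is from the text, everything else is illustrative):

```python
import random
import statistics

def bootstrap_error(samples, estimator=statistics.mean,
                    n_boot=500, rng=random.Random(0)):
    """Bootstrap estimate of the standard error of `estimator` over a set of
    disorder samples, using 500 resamplings as in the text."""
    n = len(samples)
    estimates = []
    for _ in range(n_boot):
        resample = [samples[rng.randrange(n)] for _ in range(n)]
        estimates.append(estimator(resample))
    return statistics.stdev(estimates)
```

In practice the estimator would be the crossing point of ξ_L/L for two system sizes rather than a simple mean; the resampling logic is unchanged.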


Figure 5.5: Bit-coding for the depolarizing channel. (a) The unit cell for the Toric code contains two spins and four links (two red and two blue). In addition to the 6 bits required to store this information, we also keep track of the four-spin interactions arising from the depolarizing channel (dotted green). These act on the four spins adjacent to an intersection (i.e., a combination of red and blue links). The numbers indicate the bit position in a byte where the respective spin/interaction is encoded. (b) The shape of the unit cell is slightly different for color codes, but there are also 2 spins and 6 triangle interaction terms per unit cell. (c) Local updates can be performed in a block of four unit cells. For the state 0x42020588, the first cell is described by 0x42, which translates to one negative correlated interaction term (green) and one flipped blue spin (the other cells are analogous). Flipping this blue spin will affect all adjacent blue and green interaction terms. The whole update can thus be applied as XOR 0xf2a05000, which toggles both the spin and all the affected interaction terms in one instruction. (d) The same bit-pattern has a slightly different meaning for color codes (i.e., bits denote flipped spins or triangle interactions with a negative contribution to the energy). Local updates can still be performed within a block of four unit cells, using adapted bit-masks.


[Figure 5.6 here: four panels of ξ_L/L versus temperature T for system sizes L = 16, 24, 32, 48; (a) p = 0.000 with Tc ≈ 3.65, (b) p = 0.100 with Tc ≈ 2.88, (c) p = 0.170 with Tc ≈ 2.13, (d) p = 0.188.]

Figure 5.6: Crossings of the correlation function ξ_L/L for color codes in an octagon-square arrangement. The results for all three codes are virtually identical, therefore just one sample is shown here. The codes can be compared in the phase diagram in figure 5.7. (a) The pure system shows a clear transition at a critical temperature of Tc = 3.65. (b) As disorder is introduced, the transition temperature is lowered and the crossing becomes less pronounced. (c) At a disorder rate of p = 0.170 the data still exhibit a clear crossing at a transition temperature of Tc(p) ≈ 2.13(2). (d) At p = 0.188, i.e., close to the critical error rate, corrections to scaling become large and the system is difficult to equilibrate. With large error bars, both the scenario of a crossing as well as no crossing are compatible with the picture found.


[Figure 5.7 here: phase diagram for the depolarizing channel, Tc(p) versus p, showing the Nishimori line and the phase boundaries of the Toric code, color codes (triangular) and color codes (Union Jack); the stable ordered phase, where error correction is feasible, lies below the boundary, and pc = 0.189(3) is marked on the p axis.]

Figure 5.7: Phase boundary estimated from Monte Carlo simulations, used to determine the error threshold of the Toric code as well as two realizations of color codes. The error threshold pc corresponds to the point where the Nishimori line intersects the phase boundary. Remarkably, the phase boundaries for all three codes agree within error bars. The stable ordered phase (shaded) corresponds to the regime where quantum error correction is feasible.

These results are remarkable for two reasons. First, all three realizations of topological codes appear to exhibit the same error threshold, regardless of the particular choice of the lattice. As for the case of the qubit-flip channel (see chapter 4), the individual differences do not appear to have an effect. This suggests that there is a fundamental connection between these codes that dictates the maximum tolerable amount of noise. Second, the numerical value for the error threshold at pc = 0.189(3) exceeds the value one would expect from previous results: Individually correcting errors at a maximum rate of px = pz = 0.109(2) would only allow for the correction of depolarizing noise up to a threshold value of p′c = 3px/2 ≈ 0.164. This shows that a code can be more efficient by exploiting knowledge of the correlations in the error source.
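The comparison of the two thresholds is elementary arithmetic; as a quick check (reading 2p/3 as the effective qubit-flip rate under depolarization is our gloss on the text's estimate):

```python
# Under depolarizing noise with total rate p, a given qubit suffers an
# effective bit flip (X or Y component) with probability 2p/3.  Demanding
# 2p/3 <= p_x with the qubit-flip threshold p_x = 0.109 gives the naive
# depolarizing threshold quoted in the text:
p_x = 0.109
p_naive = 3.0 * p_x / 2.0
assert abs(p_naive - 0.1635) < 1e-9     # ~0.164, as quoted
assert p_naive < 0.189                  # measured threshold p_c = 0.189(3)
```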


Chapter 6

Topological Subsystem Codes

“The mere thought hadn’t even begun to speculate about the merest possibility of crossing my mind.”
— Douglas Adams

All topological codes share the advantage that the operators involved in the error-correction procedure are local, thus rendering actual physical realizations more feasible. However, in practice the decoherence of quantum states is not the only source of errors: Both syndrome measurement and qubit manipulations are delicate tasks and should be kept as simple as possible to minimize errors. For the Toric code and topological color codes considered in chapters 4 and 5, the check operators – while local – still involve the combined measurement of four, six or even eight qubits at a time. Independently of topological error correction, the pursuit of codes that build on even simpler check operators has produced a class of codes referred to as stabilizer subsystem codes. These are the result of applying the stabilizer formalism to operator quantum error correction [100]: By considering some of the encoded logical qubits to be gauge qubits where no information is stored, they allow for complicated measurements to be split into several ones that involve a smaller number of qubits [53, 101]. Furthermore, during a calculation, these ancillary gauge qubits can be used to absorb decoherence effects. This chapter investigates a combination of these two paradigms, which was introduced by Héctor Bombín [50]: a class of topological subsystem codes that share the characteristic properties of both worlds. These codes are local in a two-dimensional setting and require only measurements of neighboring qubit pairs for error syndrome retrieval. Furthermore, the topologically-encoded state is well protected as long as correction succeeds up to homology.


6.1 Constructing the Code

In subsystem codes logical quantum information is only encoded in a suitable subsystem A of the whole system C = A ⊗ B, while the state of the gauge qubits encoded in subsystem B is irrelevant. Rather than requiring that we find a recovery operation R to restore the whole system, (R ∘ E)(ρ) = ρ, it is therefore sufficient to find a mapping such that

    (\mathcal{R} \circ \mathcal{E})(\rho_A \otimes \rho_B) = \rho_A \otimes \rho'_B ,   (6.1)

i.e., the subsystem A is reverted to the original state, while the state of subsystem B changes to some arbitrary state ρ′_B without the encoded logical information in A being affected. Stabilizer subsystem codes are determined by a gauge group G ⊂ P_n and the choice of a stabilizer S which is the center of this group. Gauge operators G_i ∈ G should not affect the encoded state, i.e., each of the stabilizer eigenspaces C_s can be further decomposed into C_s = A_s ⊗ B_s such that elements in G act trivially on A_s, where the logical information is encoded. If N(S) is the normalizer of the stabilizer, detectable errors are then given by P_n \ N(S), while the undetectable ones are in N(S) \ G. A topological subsystem color code [50] can be constructed from a two-dimensional lattice Λ with triangular faces and three-colorable vertices. Here we consider the regular triangular lattice as the simplest possible representation of a lattice satisfying these requirements, but a code could in principle be generated for any lattice that is the dual of a 2-colex [48]. As a first step in constructing the code C_Λ, a new lattice Λ̄ is generated by extrusion of the vertices and edges in Λ. That is, the new lattice will contain an additional face for each original edge and vertex. This process is illustrated in figure 6.1 (a) and (b). After placing physical qubits on the vertices of Λ̄, the connecting links are then assigned generators such that the edges around triangle faces are of type ẐẐ and the links extending from a triangle are alternating of type X̂X̂ and ŶŶ, as indicated in figure 6.1 (c). The resulting lattice is represented graphically in figure 6.2. Note that all the check operators introduced in this manner are 2-local, i.e., they are products of only two Pauli operators.



Figure 6.1: (a) A regular triangular lattice satisfies the vertex three-colorability requirement (indicated by A, B, C). (b) To construct a topological subsystem code, we extrude the edges and vertices by placing three qubits (red balls) inside each of the triangular unit cells and connecting them with σ̂^Z ⊗ σ̂^Z gauge generators (dotted blue lines). The links between these triangles are assigned σ̂^X ⊗ σ̂^X and σ̂^Y ⊗ σ̂^Y gauge generators (yellow and green solid lines, respectively). (c) For the mapping, gauge generators represented by colored lines are associated with Ising spins s^x, s^y, s^z and the qubits with interactions. (d) Introducing new Ising spin variables s^{ZZ} = s^Z s′^Z allows for the removal of the local Z2 symmetry.

6.2 Associated Classical Spin System

To construct the related classical statistical-mechanical system, Ising spins s_i = ±1 are placed at the site of each gauge generator G_i. Single-qubit Pauli operators σ̂_ℓ^w are mapped onto classical interaction terms according to the generators G_i with which they anticommute. This gives rise to a Hamiltonian of the general form

    H_\tau(s) := -J \sum_{\ell} \sum_{w=x,y,z} \tau_\ell^w \prod_i s_i^{\,g^w_{i\ell}} ,   (6.2)

where i enumerates all Ising spins and ℓ all physical qubit sites, respectively. For each spin s_i the exponent g^w_{iℓ} ∈ {0, 1} is 0 if σ̂_ℓ^w commutes with


Figure 6.2: Graphical representation of the qubit arrangement for topological subsystem color codes on a regular triangular lattice. Each of the triangular unit cells (large grey triangles) contains three physical qubits (red balls). The two-qubit gauge generators σ̂^w ⊗ σ̂^w are shown in green (w = x), yellow (w = y) and blue (w = z). These are the lines connecting the qubits (red balls). They are arranged such that the physical qubits in each triangle are connected by z-type links and links extending from the triangle are alternatingly of type x and y.

G_i, and 1 if it anticommutes. The signs of the couplings τ_ℓ^w = ±1 are then quenched random variables satisfying the constraint τ_ℓ^X τ_ℓ^Y τ_ℓ^Z = 1. For the depolarizing channel, they are distributed according to:

    (\tau_\ell^X, \tau_\ell^Y, \tau_\ell^Z) =
    \begin{cases}
    (+1,+1,+1) & \text{with probability } 1-p ,\\
    (+1,-1,-1) & \text{with probability } p/3 ,\\
    (-1,+1,-1) & \text{with probability } p/3 ,\\
    (-1,-1,+1) & \text{with probability } p/3 .
    \end{cases}   (6.3)

They are all positive with probability 1 − p and the other three configurations have probability p/3 each. In the specific case studied here, the Hamiltonian therefore has the geometry depicted in figure 6.2 and takes the form

    H = -J \sum_{\ell}^{n} \left[ \left( \tau_\ell^X s_\ell^Y + \tau_\ell^Y s_\ell^X \right) s_\ell^Z \bar{s}_\ell^Z + \tau_\ell^Z s_\ell^X s_\ell^Y \right] ,   (6.4)


where ℓ enumerates qubit sites and spin types are assigned as shown in figure 6.1. Notice that z-labeled spins are arranged in triangles, and that flipping each of these triads of spins together does not change the energy of the system. Therefore, there is a local Z2 gauge symmetry which can be fixed by introducing new Ising variables s_ℓ^{ZZ} = s_ℓ^Z s̄_ℓ^Z. Notice that these spins are constrained: If j, k, ℓ are three qubit sites in a triangle, s_j^{ZZ} s_k^{ZZ} s_ℓ^{ZZ} = 1. The Hamiltonian to be simulated therefore reads:

    H = -J \sum_{\ell}^{n} \left( \tau_\ell^X s_\ell^Y s_\ell^{ZZ} + \tau_\ell^Y s_\ell^X s_\ell^{ZZ} + \tau_\ell^Z s_\ell^X s_\ell^Y \right) .   (6.5)

This Hamiltonian has no remaining local gauge symmetry and consists only of two-spin interactions, apart from the constraints on the spins s_ℓ^{ZZ}.
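The commutation rule that determines the exponents g^w_{iℓ} in equation (6.2) can be made explicit for the degree-four qubit neighborhood of figure 6.1(c) (an illustrative sketch; names are ours):

```python
def anticommutes(w, link_type):
    """Single-qubit sigma^w versus a two-qubit link generator sigma^t x sigma^t:
    on the shared qubit they anticommute exactly when the Pauli labels differ."""
    return w != link_type

# Gauge generators adjacent to one qubit: one x-link, one y-link and the two
# z-edges of its triangle (the neighborhood shown in figure 6.1(c)).
neighborhood = {"sx": "x", "sy": "y", "sz": "z", "sz_bar": "z"}

def interaction_spins(w):
    """Spins entering the classical term for sigma^w, i.e., those with g = 1."""
    return sorted(s for s, t in neighborhood.items() if anticommutes(w, t))

# Reproduces the three terms of equation (6.4):
assert interaction_spins("x") == ["sy", "sz", "sz_bar"]
assert interaction_spins("y") == ["sx", "sz", "sz_bar"]
assert interaction_spins("z") == ["sx", "sy"]
```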

6.3 Monte Carlo Simulations

After freeing the Hamiltonian from the local symmetry in equation (6.5), it still has a global Z2 × Z2 symmetry. Indeed, we can color spins according to their nearest colored vertex in the original lattice, producing three sublattices A, B and C, as indicated in figure 6.1(a). Flipping all the spins in two of these sublattices together is compatible with the constraints on s^{ZZ} and leaves the energy invariant, giving rise to the indicated symmetry. We are thus left with a disordered spin system with the two parameters J and p. The system is expected to exhibit ordering at low temperatures T and disorder rates p: In the ground state each sublattice has aligned spins and thus the sublattice magnetization is a good order parameter:

    m = \frac{1}{N_\Lambda} \sum_{i \in \Lambda} s_i ,   (6.6)

where N_Λ = L²/3 (L the linear system size) represents the number of spins in one of the sublattices. From this, the two-point finite-size correlation function can be constructed as indicated in chapter 3.3. In all simulations, equilibration is tested using a logarithmic binning of the data: Once the data for all observables agree for three logarithmically-sized bins within error bars, we deem the Monte Carlo simulation for that system size to be in thermal equilibrium. The detailed simulation parameters can be found in table 6.1.
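The order parameter (6.6) and the derived correlation length can be sketched as follows; for simplicity the sketch works on a plain square grid rather than the tripartite sublattices, and the second-moment definition of ξ_L is the standard one (assumed to match the construction of chapter 3.3):

```python
import numpy as np

def sublattice_magnetization(spins, mask):
    """Order parameter of equation (6.6): mean spin over one sublattice."""
    return spins[mask].mean()

def correlation_length(configs, L):
    """Second-moment finite-size correlation length from spin configurations.

    configs : array of shape (n_samples, L, L) with +/-1 entries.
    Uses the susceptibilities at k = 0 and at the smallest nonzero wave
    vector k_min = 2*pi/L along one axis.
    """
    k_min = 2.0 * np.pi / L
    phases = np.exp(1j * k_min * np.arange(L))
    m0 = configs.reshape(len(configs), -1).mean(axis=1)        # k = 0
    mk = (configs * phases[None, None, :]).mean(axis=(1, 2))   # k = k_min
    chi0 = (np.abs(m0) ** 2).mean()
    chik = (np.abs(mk) ** 2).mean()
    return np.sqrt(chi0 / chik - 1.0) / (2.0 * np.sin(k_min / 2.0))
```

The crossing of ξ_L/L for different L then locates the transition, as in figure 5.6.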


Table 6.1: Simulation parameters: p is the error rate for the depolarizing channel, L is the linear system size, Nsa is the number of disorder samples, teq = 2^b is the number of equilibration sweeps, Tmin [Tmax] is the lowest [highest] temperature, and NT the number of temperatures used.

p             | L     | Nsa   | b  | Tmin | Tmax | NT
0.000 – 0.020 | 9, 12 | 3 200 | 17 | 1.40 | 2.50 | 24
0.000 – 0.020 | 18    | 1 600 | 18 | 1.40 | 2.50 | 24
0.000 – 0.020 | 24    |   400 | 19 | 1.40 | 2.50 | 28
0.030 – 0.040 | 9, 12 | 4 800 | 18 | 1.25 | 2.40 | 28
0.030 – 0.040 | 18    | 2 400 | 19 | 1.25 | 2.40 | 28
0.030 – 0.040 | 24    |   800 | 20 | 1.25 | 2.40 | 32
0.045 – 0.060 | 9, 12 | 9 600 | 19 | 0.90 | 2.20 | 32
0.045 – 0.060 | 18    | 4 800 | 21 | 0.90 | 2.20 | 36
0.045 – 0.060 | 24    | 2 400 | 24 | 0.90 | 2.20 | 48

6.4 Code Optimizations

For the numerical simulations, the bit-coding technique introduced for the depolarizing channel (chapter 5.5) can be adapted to represent the spins and interaction terms of topological subsystem codes. Here, however, the main challenge lies in the complex lattice structure arising from the mapping, compared to the models of the previous chapters 2, 4 and 5. A naïve approach solves this with a precalculated lookup table of nearest neighbors, which can lead to severe slowdown and inefficient use of the caches. An explicit adjacency list can be avoided in this case by choosing a suitable unit cell: the implicit neighbors of a cell in a rectangular grid can easily be calculated on the fly, and a rectangular shape suitable for simulations is obtained by considering a larger unit cell consisting of two of the original triangles. This larger unit cell contains 6 qubits and 12 stabilizers connecting them both internally and to neighboring cells. With one spin assigned to each stabilizer and three interaction terms dictated by each qubit, the total information to store is 30 bits, which fits in a 32-bit variable.

The goal is then to formulate all spin updates such that they involve only one or two unit cells and can be applied efficiently using bit-masks. An example of how this can be achieved is detailed in figure 6.3. Note that particular care was taken when assigning bit locations to spins and interaction terms, such that entities needed for updates across cell boundaries are stored together. For instance, the spins with numbers 0 and 4 correspond to stabilizers that act on qubits from two different cells (which are stacked vertically). With this choice of the bit-coding scheme, only the last byte of the top cell and the first two bytes of the bottom cell need to be accessed to perform a local update. Likewise, the update across the horizontal border can be achieved using the top two bytes from the left cell and the third byte from the right cell. Thus this particular choice of encoding simplifies the handling of interactions across cell boundaries. Furthermore, the encoding respects the similarity of the top and bottom triangles, such that updates can be performed using the same bit-masks for both. The detailed source code for these local updates can be found in appendix B.6.
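The core of the bit-coding idea is that a local update reduces to XOR-ing a packed 32-bit cell with a precomputed mask. The bit positions and the mask below are illustrative stand-ins, not the actual masks of appendix B.6:

```python
# Sketch of the bit-coding idea from this section: the 30 bits of one
# unit cell (spins plus interaction terms) live in a single 32-bit word,
# so a local spin flip together with its affected interaction signs
# becomes one XOR against a precomputed mask.  Bit positions and the
# mask below are illustrative stand-ins, not the thesis's actual masks.

def flip(cell, mask):
    """Apply a local update: toggle every bit selected by `mask`."""
    return cell ^ mask

# Illustrative mask: flipping the spin at bit 4 also toggles the signs
# of two (hypothetical) interaction terms stored at bits 5 and 6.
UPDATE_MASK = (1 << 4) | (1 << 5) | (1 << 6)  # 0o160 in octal notation

cell = 0
cell = flip(cell, UPDATE_MASK)
assert cell == 0o160
assert flip(cell, UPDATE_MASK) == 0  # applying the same update twice is a no-op
```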


Figure 6.3: Optimization via bit-coding for topological subsystem codes. (a) The lattice structure for subsystem codes is more complex than in previous codes. A suitable choice of a rectangular unit cell contains 6 qubits (red circles) and 12 stabilizers (rectangles). For the simulation one needs to keep track of the spins (which are attached to the stabilizers), as well as the interaction terms (3 per qubit). These 30 bits of information can conveniently be encoded in a 32-bit variable. The number within each rectangle denotes the location where the corresponding spin is encoded. Each of the qubits contributes three interaction terms to the Hamiltonian in equation (6.5), which are assigned to the three-bit gaps between spin locations. The advantage of this interleaved encoding of spins and interaction terms is two-fold: First, the bits are arranged such that updates involving multiple unit cells are close by (i.e., all spins and interactions at the bottom are stored in the last byte). This simplifies the handling of cell boundary conditions considerably. Second, the bit arrangement for both blue triangles is identical, such that the same bit-masks can be used for local updates (up to bit-shifting). (b) Close-up of the interaction terms arising from errors on an individual qubit. A phase error Ẑ introduces a two-spin interaction term between the spins for the Ŝ^X and Ŝ^Y stabilizers. Both X̂ and Ŷ errors produce three-spin interactions.


6.5 Numerical Results

For the pure system (p = 0) there is a sharp transition visible directly in the sublattice magnetization. The transition temperature Tc,pure ≈ 1.65(1) has not been computed before. For non-vanishing amounts of disorder p > 0, a possible transition can be located precisely by means of the two-point finite-size correlation function ξL. Sample data for a disorder strength of p = 0.048 (i.e., on average 4.8% of the physical qubits have failed) are shown in figure 6.4, indicating a transition temperature of Tc(p) = 1.251(8). At p ≈ 0.055, the lines only touch marginally, such that both the scenario of a crossing and that of no transition are compatible within error bars. For error rates p > pc, the lines do not meet, indicating that there is no transition in the temperature range studied. The threshold pc is recovered as the critical p along the Nishimori line [54],

    −4βJ = log[ (p/3) / (1 − p) ] ,   (6.7)

where the ferromagnetic phase of a sublattice is lost [52]. The entire phase diagram is shown in figure 6.5, with a (conservative) estimate for the crossing point at pc ≈ 0.055(2). While this critical error rate is (numerically) lower than the thresholds calculated for the Toric code and topological color codes (see chapters 4 and 5), it is important to note that this is the consequence of a compromise in favor of a much simpler syndrome-measurement and error-correction procedure. The relevant error rate for an actual physical realization of a topological quantum memory is the expected number of errors during the time it takes to apply one round of error correction. As long as this remains below the theoretical threshold, quantum information can be preserved over time. For the topological subsystem codes investigated here, the error-correction procedure involves only two-local measurements and is therefore much simpler to implement. As a result, the qubits are given less time to decohere, and the actual error rate will be lower. Seen in this light, the critical error threshold of pc = 0.055(2) is a very encouraging number that underlines the feasibility of error correction using topological subsystem codes.
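For orientation, the temperature at which the Nishimori line is evaluated for a given p follows directly from the condition in equation (6.7). The sketch below assumes the depolarizing-channel form exp(−4βJ) = (p/3)/(1 − p) together with J = 1 and k_B = 1:

```python
# Sketch: temperature on the Nishimori line of Eq. (6.7).  Assuming
# exp(-4*J/T) = (p/3)/(1-p) with J = 1 and k_B = 1, the line reads
#   T(p) = 4 / ln( 3*(1-p)/p ).
import math

def nishimori_temperature(p, J=1.0):
    return 4.0 * J / math.log(3.0 * (1.0 - p) / p)

# Near the estimated threshold p_c = 0.055(2), the Nishimori temperature
# is of order one, consistent with the phase diagram of figure 6.5.
assert 0.9 < nishimori_temperature(0.055) < 1.1
# The line rises with increasing disorder rate p:
assert nishimori_temperature(0.03) < nishimori_temperature(0.055)
```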



Figure 6.4: Crossing of the two-point finite-size correlation function ξL/L for different fixed disorder rates p. (a) At a disorder rate of p = 0.048, there is a clear crossing visible for the lines of different system sizes L, indicating a transition temperature of Tc(p) ≈ 1.251(8). (b) At p = 0.052 the transition temperature is lowered to Tc(p) ≈ 1.17(4). The shaded area corresponds to the error bar of the estimate, calculated via bootstrap resampling. (c) For p = 0.054, as we get closer to the critical error threshold, the crossing in the correlation functions is less pronounced, with the transition occurring at Tc(p) ≈ 1.10(5). (d) For a disorder rate of p = 0.057, the lines for different system sizes approach but do not touch within error bars. At this amount of disorder, no transition to an ordered state occurs in the system.




Figure 6.5: Computed phase diagram for the classical disordered spin Hamiltonian in equation (6.5). Each data point Tc(p) on the phase boundary (dashed curve separating white and shaded regions) is calculated by locating the crossing of the correlation function ξL/L for different system sizes L at a fixed disorder rate p. The Nishimori line (blue solid line) indicates where the requirement for the mapping holds. The error threshold pc ≈ 0.055(2) is found where the Nishimori line intersects the phase boundary between the ordered phase (shaded) and the disordered phase (not shaded, larger T and p). Below pc the system's ordered state indicates that error correction is feasible. The (red) shaded vertical bar corresponds to the statistical error estimate for the error threshold pc.


Chapter 7

Measurement Errors

“The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.” — Douglas Adams

The error-correction codes presented in the previous chapters assumed perfect syndrome measurement, a daring assumption given that the manipulations for error correction are of a quantum nature as well. For real-world applications, it is necessary to devise strategies that succeed in preserving quantum information over time even if some of the intermediate measurements are faulty. This approach gives rise to the notion of fault-tolerant quantum computation [33, 34, 35]: In addition to a qubit error rate p, the error model also considers a chance for measurement outcomes to be wrong, i.e., the measured eigenvalue of each check operator is incorrect at a constant rate q. As for qubit errors, the assumption is made that there are no correlations between any of the individual errors. In particular, measurement errors are not considered to be caused by defective hardware and are therefore non-systematic. Upon measuring the same check operator again at a later time, the outcome is still correct with probability (1 − q) and incorrect with probability q, independent of whether it was erroneous earlier. This allows for repetition of the error-correction process in order to amend errors introduced by earlier faulty measurements. Indeed, the best strategy to preserve the encoded information is to repeatedly measure and correct errors. In this chapter, the error-correction schemes arising with possible measurement errors are discussed, followed by an analysis of the Ising lattice gauge theories found in the mapping to classical spin systems.


7.1 Fault-Tolerant Toric Code

The mapping for the fault-tolerant Toric code was introduced by Dennis et al. [52]. In their 2002 paper they demonstrate that the Toric code with imperfect syndrome measurement is related to a three-dimensional Ising lattice gauge theory with disorder. The “crude lower bound” noted in the paper estimates pc > 0.0114 for equal qubit-flip and measurement-error rates p = q. After Wang et al. improved the lower bound to pc ≥ 0.029 using zero-temperature considerations [102], Ohno et al. studied the model numerically in 2004. By investigating the area and perimeter of different loops in the system, they estimated an error threshold of pc ≈ 0.033, consistent with previous findings [103]. While this result is drastically lower than the threshold without faulty measurements (pc,q=0 = 0.109(2)), it is still large enough for physical realizations to be considered feasible. The following is an outline of how the setup is related to a classical spin model. For simplicity, the repeated error-correction process is considered to occur at discrete time steps, possibly with operations for quantum computation occurring in between. Each round of error correction consists of two phases: First, all check operators are measured to identify the error syndrome, which might be incorrect due to faulty measurements. Second, error correction is carried out for the syndrome attained in the first phase. Note that this will actually introduce errors whenever the syndrome is faulty, and these errors can only be corrected in a later round of error correction. This process can be modeled by considering vertically stacked replicas of the original quantum setup, each representing one round of error correction. Since these occur at discrete time steps, it is instructive to think of the additional vertical dimension as time. The state of each layer is related to the one immediately below it by the effect of the error channel, followed by syndrome measurement and error correction.
For perfect syndrome measurement, q = 0, the eigenvalues of all check operators can be determined perfectly, and the subsequent error-correction procedure restores the system to a valid code state. In this case the state of all layers will be in the code space, and the quantum information is preserved if p is below the error threshold for the Toric code without measurement errors. However, if measurements can be incorrect, some of the chain boundaries signaled by the error syndrome are not caused by actual qubit errors, but are merely virtual images of such errors due to the faulty measurements. This can be illustrated by considering, for now, a one-dimensional cut through the Toric code's checkerboard lattice along a line of qubits connected by plaquette operators Ŝ^Z. In the fault-tolerant regime, replicas of the resulting row of qubits are stacked vertically, with each layer representing one round of error correction. Since syndrome measurement


Figure 7.1: Error histories for a single row of qubits in the Toric code: The cut shown above considers a row of qubits which is replicated and stacked along the vertical (time-like) axis. (a) Horizontal links correspond to chains of faulty qubits. They are detected by check operators at their boundary. (b) Plaquettes with check operators are connected by vertical links, which represent measurements. Errors in the measured syndrome lead to faulty qubits during error correction. (c) A temporal equivalence consists of a “vertical plaquette” with two qubit errors and two measurement errors. This can be interpreted as two consecutive qubit errors which pass unnoticed because the corresponding measurements are faulty.

occurs between the layers, measurements can be considered to reside on vertical links connecting the plaquettes. In this simplified setup, a linear chain of qubit errors gives rise to two check operators with a negative eigenvalue, one at either boundary of the chain. The subsequent error-correction procedure consists of flipping the affected qubits back to the proper state (see figure 7.1). However, the exact same error syndrome is also measured if no qubit-flips occurred, but the check operators suffered from measurement errors. The error-correction procedure then flips the same qubits, introducing an error that will only be discovered at a later stage, when the measurement outcomes are correct again. In this simple 1+1-dimensional case, the resulting lattice consists of vertical plaquettes with qubits on the horizontal links and measurements on the vertical ones. Analogous to the error chains in earlier chapters, we introduce the concept of an error history, consisting of a set of faulty qubits and measurements. These error histories can likewise be classified into equivalence classes Ē, such that whenever two error histories E and E′ share the same class, correcting for E′ even though E has happened still leads to successful error recovery.



Figure 7.2: Stacked layers representing fault-tolerant quantum computation: The Toric code introduced in chapter 2.4 arranges qubits on a checkerboard lattice with stabilizer generators attached to colored plaquettes. (a) Horizontal links correspond to the usual error chains for the Toric code: flipping the four qubits around a plaquette is an elementary equivalence because its operator is in the stabilizer of the code. (b) As for the case of a single row of qubits (figure 7.1), the vertical links represent measurements which can be faulty. (c) The resulting lattice for error histories consists of spatial and time-like links forming a cubic lattice.

With this terminology, we can now investigate the setup for the full lattice: Starting from a set of vertically stacked checkerboard lattices, consider the plaquettes with generators Ŝ^Z to be connected by vertical links representing syndrome measurements. The result is a cubic lattice formed by the spatial and time-like links connecting plaquette centers (see figure 7.2). The qubits and measurements reside on the links of this lattice, and error histories are the sets of links in this three-dimensional grid which are faulty. The probability of a specific error history consisting of b qubit-flip errors and m faulty measurements is given by

    P(E) = (1 − p)^(N−b) p^b · (1 − q)^(M−m) q^m ∝ [p/(1 − p)]^b [q/(1 − q)]^m ,   (7.1)

where N is the total number of qubits and M the total number of check operators. From this, the goal is to express the probabilities of error classes in terms of the partition function of a suitable statistical model,

    P(Ē) := Σ_{E′∈Ē} P(E′) ∝ Z_E(T) := Σ_s e^(−βH_E(s)) ,   (7.2)

for a suitably parametrized Hamiltonian H_E. Note that error histories which are in the same class can only differ by elementary equivalences, i.e., histories which have no boundary and are homologically trivial.
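The combinatorial content of equations (7.1) and (7.2) can be mirrored in a few lines: each history contributes a weight depending only on its error counts (b, m), and a class probability sums these weights. The (b, m) pairs below are toy values, not an actual equivalence class:

```python
# Sketch: relative weights of error histories, Eq. (7.1).  Up to an
# E-independent constant, a history with b qubit flips and m faulty
# measurements has weight (p/(1-p))**b * (q/(1-q))**m, and the class
# probability of Eq. (7.2) sums these weights over one class.

def history_weight(b, m, p, q):
    return (p / (1 - p)) ** b * (q / (1 - q)) ** m

def class_weight(histories, p, q):
    """histories: iterable of (b, m) error counts belonging to one class."""
    return sum(history_weight(b, m, p, q) for b, m in histories)

w = history_weight(2, 1, 0.1, 0.1)
assert abs(w - (1.0 / 9.0) ** 3) < 1e-12
# For p, q < 1/2, every additional error strictly lowers the weight:
assert history_weight(3, 1, 0.1, 0.1) < w
```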

These equivalences are loops consisting of four links around a plaquette:

• Horizontal plaquettes correspond to the error chains from the Toric code without measurement errors. Since no time-like links are involved, these equivalences are considered “spatial”.

• Vertical loops consist of two qubit links and two measurement links which are faulty. This corresponds to the scenario where two consecutive qubit-flips occur on a qubit, but the process remains undiscovered because there were concurrent measurement errors on the adjacent stabilizers.

In order to generate all error histories in the same class as a reference history E, classical Ising spins s_i^t, s_i^s are attached to the elementary equivalences. A spin state of s_i = −1 specifies that the sampled error history E′ differs from the reference history E by the equivalence associated with the spin s_i. To express the relative probability of E′ with respect to the reference chain E, the relative number of qubit and measurement errors is needed. We start by defining a set of variables τ_ℓ ∈ {±1} which are negative when the respective qubit (or measurement) ℓ is in the error history. Now let s_i = −1 for the set of elementary equivalences by which a sampled history E′ differs from the reference history E. This allows us to express whether a qubit or measurement is in E′ using the variables τ_ℓ and the spins attached to the equivalences.

• Whether a qubit error ℓ is part of the history E′ can be expressed as the signum of the product

    τ_ℓ s_i^s s_j^s s_k^t s_l^t .   (7.3)

The spins refer to the two elementary spatial loops containing ℓ and the two time-like equivalences that the qubit takes part in.

• Likewise, the presence of a measurement error ℓ can be expressed as

    τ_ℓ s_i^t s_j^t s_k^t s_l^t ,   (7.4)

where all four related equivalences are time-like.

With this, the sum of flipped qubits and faulty measurements can be written as a Hamiltonian of the form

    H_E = −J Σ_{□s} τ_□ s_i^s s_j^s s_k^t s_l^t − K Σ_{□t} τ_□ s_i^t s_j^t s_k^t s_l^t ,   (7.5)

where the first [second] sum runs over all spatial [time-like] plaquettes of the lattice, and we have introduced positive interaction

constants J and K, which need to be chosen according to the (adapted) Nishimori condition [54]:

    e^(−2βJ) = p/(1 − p) ,   e^(−2βK) = q/(1 − q) .   (7.6)

With this choice, the Boltzmann factor of the reference history is

    exp[−βH_E(s_i = 1)] = e^(βJ Σ_{□s} τ_□ + βK Σ_{□t} τ_□)
                        = e^(βJ(N−2b)) e^(βK(M−2m))
                        = e^(β(JN+KM)) [p/(1 − p)]^b [q/(1 − q)]^m ,   (7.7)

where N [M] denotes the total number of qubits [check operators] and b [m] is the number of qubit-flip [measurement] errors in E. The expression found is indeed proportional to the probability we expect according to equation (7.1). Moreover, if two spin configurations {s} and {s′} differ only by the sign of a single spin, and two error histories E and E′ are equivalent up to the corresponding elementary equivalence, then

    H_E({s}) = H_{E′}({s′}) .   (7.8)
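That the Boltzmann factor of equation (7.7) reproduces the weights of equation (7.1) can be checked numerically. The sketch below uses the Nishimori couplings of equation (7.6) with arbitrary toy parameters:

```python
# Numerical check of Eq. (7.7): with the Nishimori couplings of Eq. (7.6),
# exp(-2*bJ) = p/(1-p) and exp(-2*bK) = q/(1-q), the Boltzmann factor
# exp(bJ*(N-2b)) * exp(bK*(M-2m)) equals
# exp(bJ*N + bK*M) * (p/(1-p))**b * (q/(1-q))**m.  All numbers are toy values.
import math

def boltzmann_factor(N, M, b, m, p, q):
    bJ = -0.5 * math.log(p / (1.0 - p))  # beta*J from Eq. (7.6)
    bK = -0.5 * math.log(q / (1.0 - q))  # beta*K from Eq. (7.6)
    return math.exp(bJ * (N - 2 * b)) * math.exp(bK * (M - 2 * m))

def expected(N, M, b, m, p, q):
    bJ = -0.5 * math.log(p / (1.0 - p))
    bK = -0.5 * math.log(q / (1.0 - q))
    return math.exp(bJ * N + bK * M) * (p / (1 - p)) ** b * (q / (1 - q)) ** m

lhs = boltzmann_factor(20, 20, 3, 2, 0.05, 0.03)
rhs = expected(20, 20, 3, 2, 0.05, 0.03)
assert abs(lhs - rhs) < 1e-9 * rhs
```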

Putting together equations (7.7) and (7.8), we obtain the desired result:

    P(Ē) ∝ Z_E(βJ) := Σ_s e^(−βH_E(s)) .   (7.9)

This partition function describes an Ising lattice gauge theory with multispin interactions and four parameters, J, K, p and q. The mapping is valid along the Nishimori sheet described by equation (7.6). Interestingly, for p = q the resulting Hamiltonian is isotropic:

    H = −J Σ_□ τ_□ s_i s_j s_k s_l ,   (7.10)

where □ iterates over all plaquettes in the lattice (both vertical and horizontal). Because p = q also implies J = K via equation (7.6), the model to investigate for this special case has only two parameters, namely J and p.
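A minimal sketch of evaluating the isotropic Hamiltonian of equation (7.10): each plaquette term multiplies the four spins it touches with its quenched sign τ. The plaquette list below is a toy example rather than a full cubic lattice:

```python
# Sketch: energy of the isotropic random-plaquette gauge Hamiltonian,
# Eq. (7.10): H = -J * sum over plaquettes of tau * s_i s_j s_k s_l,
# where tau = -1 marks plaquettes frustrated by the reference history.
# The plaquette bookkeeping below is a toy example.

def gauge_energy(J, plaquettes, tau, s):
    e = 0.0
    for (i, j, k, l), t in zip(plaquettes, tau):
        e -= J * t * s[i] * s[j] * s[k] * s[l]
    return e

s = [1, 1, 1, 1, -1, -1]
plaquettes = [(0, 1, 2, 3), (2, 3, 4, 5)]
# Second plaquette: the two flipped spins multiply back to +1.
assert gauge_energy(1.0, plaquettes, [1, 1], s) == -2.0
# A tau = -1 plaquette is frustrated in this background:
assert gauge_energy(1.0, plaquettes, [1, -1], s) == 0.0
```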



7.2 Fault-Tolerant Color Codes

For topological color codes, consider a three-dimensional lattice consisting of stacked triangular layers, each representing one round of error-syndrome measurement. The qubits reside on intermediate hexagonal layers and are connected to their respective check operators via vertical links, as indicated in figure 7.3. In this case, an error history is represented by a set of faulty qubits and vertical links, and there are two distinct types of elementary equivalences. First, if two error histories differ exactly on six triangles (qubits) surrounding a vertex i, they are equivalent because the corresponding operator Ŝ_i^X = X̂^⊗6 is in the stabilizer and has no effect on encoded states. Since there are no vertical (time-like) links involved, we call this equivalence “spatial”. The second, “time-like” equivalence

Figure 7.3: Construction of the lattice gauge theory for topological color codes on a hexagonal lattice: (a) The resulting lattice structure consists of stacked triangular and hexagonal layers, such that each triangle vertex is centered above one of the three-colored hexagons. The classical spins at each of these sites represent the measurement history of a particular check operator, with time corresponding to the vertical axis. As for the color code in chapter 4, qubits reside on the vertices of the trivalent (hexagonal) lattice. The resulting lattice gauge theory has two elementary equivalences: (b) Colored loops in the hexagonal planes correspond to spatial equivalences, with the minimal loops (such as the one indicated in red) taking the role of elementary equivalences. This represents the scenario where a set of (six) qubit errors changes the state within the code subspace, i.e., without altering the encoded information. (c) Temporal equivalences consist of two errors on the same qubit and three faulty measurements. This scenario is analogous to the case of two unnoticed consecutive errors for the Toric code (see figure 7.2).


is uncovered by considering two stacked qubits along with the triangle and vertical links connecting them. This represents again the scenario of two subsequent qubit-flips on the same qubit, which remain unnoticed because of three concurrent measurement errors. Using analogous spins s_i and variables τ_△ [τ_⬡] to describe the qubit [measurement] errors in the history E, we can determine the errors in the sampled history E′ as follows.

• A qubit-flip error on qubit ℓ is affected by three spatial equivalences (the three colored elementary loops including ℓ), as well as the time-like equivalences right above it and just beneath. Thus a qubit's error state is determined by the sign of

    τ_△ s_i^s s_j^s s_k^s s_ℓ^t s_m^t ,   (7.11)

where τ_△ ∈ {±1} is negative if this qubit-flip error is in the reference history E.

• A measurement of Ŝ_i^Z is affected by the six time-like equivalences that include its vertical link. Whether a measurement error occurs is thus dictated by the sign of

    τ_⬡ s_i^t s_j^t s_k^t s_ℓ^t s_m^t s_n^t ,   (7.12)

where τ_⬡ ∈ {±1} is negative if the measurement error is in the error history E and the s_i^t are the six time-like equivalences that include this particular measurement error.

With this, the Hamiltonian for topological color codes with measurement errors can be written as

    H_E = −J Σ_△ τ_△ s_i^s s_j^s s_k^s s_l^t s_m^t − K Σ_⬡ τ_⬡ s_i^t s_j^t s_k^t s_l^t s_m^t s_n^t ,   (7.13)

where the first [second] sum is over all qubits [measurements], represented by the five [six] equivalences affected by the corresponding error. The choice of the constants J and K is identical to equation (7.6) for the Toric code. This Hamiltonian also describes an Ising lattice gauge theory, but even for the choice p = q it is not isotropic, unlike the Toric code with measurement errors.



7.3 Numerical Simulations

The study of the Hamiltonians in equations (7.5) and (7.13) constitutes a considerable numerical challenge, because the models have no local order parameter due to the presence of a local gauge symmetry. Therefore, the construction of observables such as the two-point finite-size correlation function (which was derived from the magnetization) is not possible. Instead, observables which work for lattice gauge theories are required. In previous studies [103], the area and perimeter laws of increasingly larger Wilson loops were studied to locate the phase transition. Another, simpler method consists of locating the peak position of the specific heat. While not as precise as the area and perimeter laws, it suffers from smaller finite-size errors. For topological color codes, the presence of disorder and the complexity of the lattice render a clean scaling analysis using the area/perimeter laws [104] difficult. Furthermore, large Wilson loops show strong corrections to scaling even for the moderate system sizes which can be simulated numerically. Therefore, to minimize finite-size effects for color codes, we investigate in detail the smallest Wilson loops in the system,

    W_⬡ := Π_{k∈⬡} s_k ,   (7.14)

which correspond to single hexagon plaquettes ⬡. The average Wilson loop value is then given by

    w = (1/N_⬡) Σ_⬡ W_⬡ .   (7.15)

Without disorder (p = 0) this order parameter shows a clear signal of a transition. However, the detection of the transition temperature Tc becomes increasingly difficult when a finite error probability (p > 0) is introduced. Therefore, we also study the whole distribution f(w). For a first-order phase transition we expect a characteristic double-peak structure to emerge near the transition. To better pinpoint the transition, we introduce a derived order parameter ζ related to the skewness of the distribution,

    ζ(w̃) = [⟨w̃³⟩_T]_av / [⟨w̃²⟩_T]_av^(3/2) ,   (7.16)

where w̃ = w − ⟨w⟩_T, ⟨···⟩_T denotes a thermal average and [···]_av represents an average over the disorder. The skewness of a symmetric function is zero and becomes positive (negative) when the function is slanted to the left (right). Therefore we can identify the point where ζ crosses the horizontal axis with an equally-weighted double-peak structure in f(w), i.e., ζ(T = Tc) = 0.
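A plain sample-moment estimator of the skewness in equation (7.16) can be written in a few lines. This sketch works on a single series of w samples; the full analysis additionally separates the thermal average from the disorder average:

```python
# Sketch: sample-moment estimator of the skewness in Eq. (7.16), here for
# one series of Wilson loop values w (the full analysis separates the
# thermal average <...>_T from the disorder average [...]_av).

def skewness(samples):
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in samples) / n  # third central moment
    return m3 / m2 ** 1.5

# A symmetric sample has zero skewness; an asymmetric one does not:
assert abs(skewness([-1.0, 0.0, 1.0])) < 1e-12
assert skewness([0.0, 0.0, 0.0, 1.0]) > 0.0
```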



Figure 7.4: Average Wilson loop value [⟨w⟩_T]_av as a function of the simulation temperature T for different system sizes L. There is a sharp drop visible at the transition, which becomes more pronounced for larger system sizes L. However, this picture is much less clear once disorder is introduced into the system. At a qubit-flip and measurement error rate of only p = 0.030 (inset), the transition is washed out considerably. This renders locating the transition temperature purely from the average Wilson loop value impractical and imprecise.

Figure 7.4 shows the average Wilson loop value as a function of temperature for the pure system (p = 0) and for an error rate of p = 0.030 (inset). For the system without disorder, the transition is clearly visible. Note that extrapolating to the thermodynamic limit shows that the transition seems to be first order also for p = 0. However, when p > 0 it is difficult to determine the location of the transition. An alternative view is provided by the histogram of Wilson loop values f(w) for different temperatures (figure 7.5). Below Tc (left panel), one can observe the development of a shoulder that eventually becomes a second peak. The two peaks have a symmetric weight distribution at the transition temperature (center panel). Above Tc the initial peak starts to disappear (right panel) and the distribution slants to the right. This property is mirrored by the skewness of the distribution shown in figure 7.6.




Figure 7.5: Wilson loop histograms f(w) for systems of size L = 15 with p = 0.000 (top row, blue) and p = 0.030 (bottom row, red). The panels show data for temperatures slightly below (left), at (center), and above the transition (right). The double-peak structure is indicative of a first-order transition. For T ≈ Tc the peaks have equal weight, whereas for T < Tc (T > Tc) the distribution is slanted to the left (right), thus motivating the use of the skewness ζ(w̃) as an order parameter.

The skewness ζ(w̃) has a positive [negative] peak where the second [first] peak in f(w) develops [disappears], with a zero crossing where the weight distribution is symmetric, i.e., at T = Tc. Comparison of the results from the skewness to the peak position of the specific heat, as well as to a Maxwell construction, shows that not only does the skewness deliver precise results, but for large disorder rates (i.e., p ≳ 0.03) it is also the only method that reliably shows a sign of a possible transition. Since there are still some corrections to scaling for the different system sizes, the transition temperature Tc in the thermodynamic limit can be estimated by plotting the size-dependent crossing point Tc*(N) against 1/N and applying a linear fit. This is shown in the inset of figure 7.6 for a disorder rate of p = 0.030. The estimated value for the transition temperature in the thermodynamic limit, Tc*(∞) ≈ 1.12(3), is found where the fitted line intersects the vertical axis.
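The 1/N extrapolation can be sketched with an ordinary least-squares line; the data points below are synthetic and chosen only to exercise the fit:

```python
# Sketch: extrapolating finite-size crossing temperatures Tc*(N) to the
# thermodynamic limit with a linear fit in 1/N, as in the inset of
# figure 7.6.  The data points below are synthetic.

def extrapolate(inv_N, Tc):
    """Least-squares line Tc = a*(1/N) + b; returns the intercept b = Tc(inf)."""
    n = len(inv_N)
    mx = sum(inv_N) / n
    my = sum(Tc) / n
    a = sum((x - mx) * (y - my) for x, y in zip(inv_N, Tc)) \
        / sum((x - mx) ** 2 for x in inv_N)
    return my - a * mx

# Synthetic data lying exactly on Tc = 1.12 + 50/N:
inv_N = [0.0010, 0.0005, 0.0002]
Tc = [1.12 + 50.0 * x for x in inv_N]
assert abs(extrapolate(inv_N, Tc) - 1.12) < 1e-9
```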




Figure 7.6: Skewness ζ(w̃) for different disorder rates p as a function of the simulation temperature T at a fixed system size of L = 15. The crossing points with the horizontal axis (arrows) correspond to the temperature where both peaks in the Wilson loop histograms have equal weight, thus signaling the transition temperature found in a Maxwell construction. The inset shows the extrapolation of this transition, ζ[w̃, T = Tc*(N)] = 0, for larger system sizes at p = 0.030. By plotting the transition temperature Tc*(N) against the inverse system size 1/N, the value in the thermodynamic limit can be estimated with a linear fit. From this analysis, at an error rate of 3%, the transition temperature is found to be Tc*(∞) = 1.12(3).

For the numerical simulations we study systems of size L² × M, with L [M] denoting the linear system size along the horizontal [vertical] direction. Periodic boundary conditions are applied in all dimensions to reduce finite-size effects. The colorability and stacking of the layers require that L [M] be a multiple of 3 [2]. For the system to be as isotropic as possible, we choose parameter pairs such that M ∈ {L, L − 1}. Because the numerical complexity of the system increases considerably when p > 0, we use the parallel tempering Monte Carlo technique [72]. The detailed simulation parameters are listed in table 7.1. In all simulations, equilibration is tested via logarithmic binning of the data: Once the last three bins agree within statistical error bars, the simulation for that system size is deemed to be in thermal equilibrium.
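The replica-swap move at the heart of parallel tempering [72] can be sketched as follows; the acceptance rule is the standard Metropolis criterion for exchanging configurations between neighboring temperatures:

```python
# Sketch: the replica-swap step of parallel tempering Monte Carlo.
# Two copies at inverse temperatures beta1 < beta2 exchange their
# configurations with Metropolis probability
#   min(1, exp[(beta1 - beta2) * (E1 - E2)]),
# which leaves the joint Boltzmann distribution invariant.
import math, random

def swap_probability(beta1, E1, beta2, E2):
    return min(1.0, math.exp((beta1 - beta2) * (E1 - E2)))

def try_swap(beta1, E1, beta2, E2, rng=random):
    return rng.random() < swap_probability(beta1, E1, beta2, E2)

# A swap handing the lower energy to the colder replica is always accepted:
assert swap_probability(0.5, -10.0, 1.0, -5.0) == 1.0
# The reverse move is suppressed:
assert swap_probability(0.5, -5.0, 1.0, -10.0) < 1.0
```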


Measurement Errors

Table 7.1: Simulation parameters: L is the layer size, M is the number of layers, Nsa is the number of disorder samples, teq = 2^te is the number of equilibration sweeps, Tmin [Tmax] is the lowest [highest] temperature, and NT is the number of temperatures.

p            L      M      Nsa    te   Tmin   Tmax   NT
0.00         6, 9   6, 8   1600   15   1.20   2.00   64
0.00         12     12     800    15   1.20   2.00   64
0.00         15     14     400    17   1.20   2.00   64
0.02         6, 9   6, 8   1600   16   0.90   1.80   52
0.02         12     12     800    17   0.90   1.80   52
0.02         15     14     400    19   0.90   1.80   52
0.03–0.039   6, 9   6, 8   1600   17   0.70   1.40   52
0.03–0.039   12     12     800    19   0.70   1.40   52
0.03–0.039   15     14     400    21   0.70   1.40   52
0.04–0.060   6, 9   6, 8   1600   18   0.50   1.20   52
0.04–0.060   12     12     800    20   0.50   1.20   52
0.04–0.060   15     14     400    22   0.50   1.20   52

7.4 Code Optimizations

For the Toric code and topological color codes with measurement errors, the simulation can also be accelerated using the bit-coding technique demonstrated in chapters 5.5 and 6.4. Here the unit cell has a simpler form than for subsystem codes, but due to the three-dimensional lattice, an additional neighboring cell in the vertical direction is needed for the local updates (see figure 7.7). For the Toric code, the Hamiltonian consists of four-spin plaquette terms, both for vertical as well as horizontal interactions. The resulting unit cell contains three spins (one on each primary edge) and the three plaquette terms which contain two of these spins. In comparison, the lattice found for topological color codes is a bit more complex: here there are two hexagon spins and one triangle spin, one hexagon plaquette interaction term and two vertical, time-like terms involving five qubits. However, as the bit-coding shows, apart from the larger number of involved spins, these two setups are not very different: in both cases we have two in-plane spins and one vertical one, as well as one in-plane interaction and two vertical ones. This similarity is also reflected in the bit-masks used for updating, which differ only in a few bits. Listings 7.1 and 7.2 show excerpts from the source listings for codes with measurement errors, appendices B.9 and B.10. Each snippet shows the bit-masks for local updates (in octal notation) for the Toric code and color codes, respectively. Within UPDATE, the first bit-mask is used for vertical

interactions and the second one for horizontal ones. This separation is necessary to allow for different error rates p ≠ q, which require different interaction constants J ≠ K. In this representation, the bit-masks for both codes are identical up to the additional spins in the interactions for topological color codes. Finally, the last argument denotes the spin to be updated. It is remarkable that the two codes have such a similar representation even in the regime of faulty measurements.

Listing 7.1: Bit-masks (octal) for local updates in the Toric code

UPDATE(s, 040004, 0200000002, 010); // v
UPDATE(s, 000404, 0100000001, 020); // v
UPDATE(s, 000000, 0000010203, 040); // h


Listing 7.2: Bit-masks (octal) for local updates in color codes

UPDATE(s, 04040004, 0200000002, 010); // 5
UPDATE(s, 04000404, 0100000001, 020); // 5
UPDATE(s, 00000000, 0003010203, 040); // 6

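To illustrate the multi-spin-coding idea behind these bit-masks: each bit position of a machine word holds the same spin in a different disorder sample, so Ising spin products reduce to bitwise XOR. The function below is a hypothetical minimal example, not the UPDATE macro from the appendix:

```cpp
#include <cstdint>

// Multi-spin coding: bit k of each word stores one spin of sample k
// (a set bit encodes spin -1). The product of the four Ising spins of a
// plaquette is then the XOR of the packed words; a set bit in the result
// marks a sample whose plaquette product is -1.
uint64_t plaquette_product(uint64_t s0, uint64_t s1,
                           uint64_t s2, uint64_t s3) {
    return s0 ^ s1 ^ s2 ^ s3;
}
```

In this way a single word operation advances 64 disorder realizations at once; the octal masks above merely select which packed spins enter each interaction term.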


Figure 7.7: Optimizations for the Toric code and topological color codes using bit-coding. (a) The spin model related to the Toric code is a classical Ising lattice gauge theory with spins on the edges of a cubic lattice and four-spin plaquette interactions in all planes. Thus each unit cell contains three spins and three plaquette interaction terms (denoted as arrows around the plaquettes, though the direction is irrelevant for an Ising gauge theory). (b) Local updates involve a spin on an edge, and the four plaquettes which contain this edge. To allow for updates along every axis, a set of 5 unit cells needs to be loaded, as shown here. (c) The unit cell for color codes likewise contains three spins, two of which reside in the horizontal (hexagonal) layer and one which lies along the vertical axis as for the Toric code. There is a hexagon plaquette interaction as well as two vertical interaction terms involving 5 spins. Even though the overall setup (3 spins and 3 interaction terms) is the same as for the Toric code, each term involves a larger number of spins (5 or 6).



7.5 Results

The Toric code and topological color codes with measurement errors were shown to be related to Ising lattice gauge theories in sections 7.1 and 7.2. Recall that the Nishimori sheet in equation (7.6) is the locus describing a quantum computer in the presence of external noise, while the rest of the phase diagram is merely introduced as an auxiliary tool to locate the multicritical point. On the Nishimori sheet, the effects of thermal fluctuations and quenched randomness are in balance. For weak disorder p and low temperatures T, the lattice gauge theory is in an ordered “Higgs phase”. In terms of the error-correction code, this indicates that all likely error histories for a given error syndrome are in the same equivalence class and error recovery is feasible. However, at a critical disorder value pc (and a corresponding temperature determined by Nishimori’s relation), magnetic flux tubes condense and the system enters the disordered “confinement phase”. In this case, the disordered state means that the error syndrome cannot point to likely error patterns belonging to a unique topological class; therefore the topologically encoded information is vulnerable to damage. This allows us to plot phase diagrams analogous to the previous chapters, with the critical transition temperature Tc(p, q) as a function of both the qubit and the measurement error rate. The following sections summarize the results for the Toric code and topological color codes, both for the special case of q = p (i.e., equal qubit and measurement error rates) as well as for q ≠ p.

Toric Code with Measurement Errors

For the Toric code with equal qubit-flip and measurement error rates q = p, the resulting Ising lattice gauge theory was analyzed previously by Ohno et al., who found a critical error threshold of p_c^{TC,q=p} = 0.033 [103]. This agrees with the result calculated using the peak position in the specific heat. Furthermore, the plot also shows results from simulations with non-equal qubit-flip and measurement error rates, which suggest that measurement errors have a more severe impact on the performance of a code than qubit errors. Preliminary data for the Toric code with relative error rates q = 2p and q = p/2 are shown in figure 7.8. If the measurement error rate is double the qubit-flip rate, q = 2p, the error threshold is estimated to be pc = 0.017(3). This means that if 3.4% of the measurements are faulty, the fault-tolerant Toric code can at most sustain a qubit error rate of 1.7%. For a reduced measurement error rate of q = p/2, the estimated error threshold increases to pc = 0.047(6), meaning that up to 4.7% flip errors can be fixed if only 2.3% of the measurements are faulty.




Figure 7.8: For the Toric code with measurement errors, the threshold of p_c^{TC,q=p} = 0.033 was first calculated by Ohno et al. Their results indicate that, for equal qubit and measurement error rates, the Toric code can preserve quantum information up to an error threshold of 3.3%. This numerical result agrees with the value recovered when using the peak position in the specific heat as an indicator for the transition. The resulting phase boundary is shaded in red (for the data by Ohno et al., the dashed line is a guide to the eye). The error threshold is found where this line intersects the blue Nishimori line. Furthermore, for the case of non-equal qubit-flip and measurement error rates q ≠ p, the figure indicates current preliminary results: at an increased measurement error rate of q = 2p (i.e., measurement errors are twice as probable as qubit-flip errors), the resulting error threshold is reduced to p_c^{q=2p} = 0.017(3) (shown in green). Conversely, when the number of faulty measurements is below the qubit error rate, i.e., q = p/2, the resulting error threshold is estimated to be p_c^{q=p/2} = 0.047(6). Because these numerical results were calculated using the peak position in the specific heat, the error bars for these estimates are considerably larger than the precision claimed by Ohno et al.




With this additional information for non-equal qubit and measurement error rates, we can also estimate the dependence of the error threshold pc on the measurement error rate q. For optimal syndrome measurement and the special case p = q, the threshold has been calculated previously (see chapters 2 and 7.1). Furthermore, there are estimates for q = p/2 and q = 2p from our numerical simulations, which allow us to fill in a p–q phase diagram as indicated in figure 7.9. Note that the dashed line is only an estimate and more detailed simulations are needed to exactly determine this dependence. In particular, the continuation of the phase boundary is difficult to extrapolate for p ≪ q, i.e., for the scenario of a significantly higher measurement error rate.


Figure 7.9: Measurement error rate dependence of the error threshold. This plot compares the current estimates for different p/q ratios in the fault-tolerant Toric code. For a vanishing measurement error rate of q = 0 (the horizontal axis), the error threshold is p_c^{TC,q=0} = 0.109(2), as indicated in chapter 2.5. Furthermore, the estimate by Ohno et al., p_c^{TC,q=p} ≈ 0.033, is indicated along the line q = p. With the current preliminary estimates for q = 2p and q = p/2, we can estimate the dependence p_c(q). Note that the dashed line is a guide to the eye – further investigations are needed to get a more detailed understanding of this dependence.



Topological Color Codes with Measurement Errors

In comparison, the analysis for topological color codes in the fault-tolerant regime shows a slightly different picture: while the error thresholds are identical when q = 0, the codes appear to differ when measurement errors are taken into consideration. The error threshold determined for q = p, using the skewness of Wilson loop distributions, is p_c^{CC,q=p} = 0.048(2). The findings are summarized and compared to the Toric code in the phase diagram shown in figure 7.10.


Figure 7.10: Estimated p–Tc phase diagram of the Ising lattice gauge theories found in the mapping of color codes. The detailed analysis of Wilson loop distributions and their skewness indicates a critical error threshold of pc ≈ 0.048(2). The dashed lines are guides to the eye, while the solid blue line represents the Nishimori line. Close to the critical error rate, determination of Tc(p) is difficult, resulting in large error bars. The blue square at T = 0 is computed using a Maxwell construction of the Wilson loop order parameter and agrees perfectly with the estimate from the skewness. The results for color codes highlight that for fault-tolerant quantum computation, the Toric code and topological color codes are indeed different. To further underline this result, the transition of both the Toric code and color codes was also located by looking at the position of the peak in the specific heat (indicated in red and green). Finally, the purple data for the Toric code, shown for comparison, were computed by Ohno et al. [103].



To ensure the validity of these results, we also calculate the transition temperature using the peak position of the specific heat for both the Toric code as well as topological color codes. From this we can conclude that the two proposals really do exhibit different behavior when measurement errors are present, as indicated by the comparison in figure 7.10. The numerical value of p_c^{CC,q=p} = 0.048(2) means that recovery from concurrent qubit-flip and measurement errors (at an equal rate) is feasible up to a critical rate of 4.8% – a threshold that is significantly larger than the 3.3% calculated for the Toric code in the same regime. The estimates for non-equal error rates q ≠ p further allow us to enhance the previous p–q phase diagram in figure 7.9 to clearly outline the differences between the Toric code and topological color codes in the fault-tolerant regime (see figure 7.11).


Figure 7.11: Measurement error rate dependence of the error threshold, both for the Toric code (shaded in red) and topological color codes (shaded in yellow). For a perfect error syndrome measurement, i.e., q = 0 (the horizontal axis), the error thresholds of both codes agree, p_c^{TC,q=0} = p_c^{CC,q=0} = 0.109(2), as indicated in chapters 2 and 4. For a non-vanishing measurement error rate q > 0, the error threshold estimates differ considerably between the two codes. This suggests that color codes can withstand a higher rate of measurement errors than the Toric code. The annotated estimates are p_c^{q=2p} = 0.025(4), p_c^{q=p} = 0.048(3) and p_c^{q=p/2} = 0.066(7) for color codes, compared to p_c^{q=2p} = 0.017(3) and p_c^{q=p/2} = 0.047(6) for the Toric code. Note that the dashed lines are guides to the eye and more detailed results are needed to pinpoint the exact dependence of the threshold on both the qubit and measurement error rate.




Discussion

Remarkably, the error thresholds calculated for topological color codes and the Toric code do not agree when measurements can be faulty – in contrast to the scenarios of perfect error syndrome measurement studied in chapters 2, 4 and 5, where perfect agreement is found across the board. This seems to suggest that color codes can withstand a higher rate of measurement errors than the Toric code. Furthermore, the transition for the lattice gauge theory related to color codes seems to exhibit signs of a first-order phase transition, as underlined by the double-peak structure of the Wilson loop histograms – this feature is not observed for the fault-tolerant Toric code. This pronounced difference is even more remarkable because the source code to simulate both models only differs by a few lines where the bit-masks are slightly altered to change the connectivity within the unit cell (see section 7.4). Further investigations are needed to gain a better understanding of why the two codes suddenly behave differently when measurement errors are taken into consideration.


Chapter 8

Conclusion and Outlook

“I may not have gone where I intended to go, but I think I have ended up where I needed to be.” — Douglas Adams

In the course of this thesis, several realizations of topological quantum error-correction codes were analyzed with respect to their stability under specific error channels. This was accomplished by relating each arrangement to a classical statistical spin model with disorder. Because error chains give rise to domain walls under this mapping, the ordered state can be identified with the scenario of feasible error correction, while the proliferation of errors in the quantum setup is associated with the disordered state. After numerically calculating the p–T phase diagram of the classical-statistical spin model, the error threshold can be identified as the intersection point of the Nishimori line, where the mapping is valid, with the boundary of the ordered phase. This critical error threshold represents the maximum amount of perturbation each setup can sustain. The results are summarized in table 8.1.

The first project (chapter 4) investigated topological color codes on both the honeycomb and square-octagon lattice under the qubit-flip channel. The latter arrangement is of particular interest because, contrary to both the Toric code and color codes on a honeycomb lattice, it allows for the transversal implementation of the whole Clifford group of quantum gates. A comparison of the results with the previously calculated error threshold for the Toric code showed that the extended computing capabilities on a square-octagonal lattice do not come at the expense of an increased susceptibility to external noise. The error threshold pc = 10.9% for both realizations of color codes agrees within error bars with the value for the Toric code. In fact, the phase diagrams of all three models are identical within error bars.

The implications of possible error correlations were analyzed in the second project (chapter 5), which dealt with the effects of the depolarizing channel. Rather than correcting the individual types of errors independently, knowledge of the error correlations can be exploited in a code that deals with the entire effect of the error channel at once. The mapping shown for a general error channel provides the means to study any (correlated) combination of X̂, Ŷ and Ẑ errors. When applied to the depolarizing channel, where the rates of all three basic types of errors are equal, it relates the setup of the Toric code to an alternating 8-vertex model. Similarly, topological color codes are related to stacked triangular lattices with plaquette interactions, which are in turn related to eight-vertex models on Kagomé lattices. Remarkably, the calculated threshold of approximately pc = 19% is considerably larger than the stability of p′c = (3/2)pc,flip ≈ 16.4% one could obtain when ignoring correlations and treating errors independently. This suggests that detailed knowledge of the correlations in the error source can allow for a more efficient, custom-tailored code to be designed.

With topological subsystem codes, this thesis also considered a code that emphasizes the simplicity of the required check operators. The setup investigated in chapter 6 only requires the combined measurement of two qubits at a time for error syndrome retrieval (compared to 4, 6 or even 8 for the Toric code and topological color codes). This is achieved by incorporating elements from stabilizer subsystem codes, which consider some of the logical qubits to be gauge qubits, where no information is stored.
The numerically calculated error threshold of pc = 0.055(2) is remarkable given the simplicity of the error-correction procedure: with a streamlined syndrome measurement process, the physical qubits are given less time to decohere, and the error rate between rounds of error correction in an actual physical realization will be smaller.

Finally, with the fault-tolerant schemes studied in chapter 7, the thesis also investigated the performance of codes under more realistic conditions, taking into account possible measurement errors. For the Toric code, an estimate for the interdependence of qubit and measurement error rates is provided from current estimates using the specific heat. The mapping for fault-tolerant codes gave rise to three-dimensional Ising lattice gauge theories, providing an interface to yet another field in physics. The estimated error threshold of 4.8% for topological color codes exceeds previous estimates for the Toric code considerably and nurtures hope for the realization of fault-tolerant quantum computation using topological color codes. Remarkably, topological color codes appear to be more stable to measurement errors than the Toric code.

On one hand, the results discovered as part of this thesis provide a much more detailed insight into the stability of various quantum setups than was previously available, as well as being instrumental in understanding the importance of each code’s properties. At the same time, they also outline numerous prospects for future research.

The thorough analysis of the depolarizing channel has shown that the error threshold for correlated noise is considerably larger than should be expected from the result of the qubit-flip channel. This has established that detailed knowledge of the error source can indeed allow for a better, channel-adjusted code to be designed. However, the depolarizing channel is only one special case of the general channel for which the mapping was carried out. Further investigations of different choices of the individual error rates will be instrumental in understanding how noise correlations can give rise to this increased stability. In particular, it would be interesting to investigate how the error threshold changes when the rate for correlated noise is tuned between 0 and p/3, i.e., between the two special cases studied so far.

Topological subsystem codes have been shown to have an impressive error threshold while only requiring 2-local check operators that are much simpler than those in other topological codes. As this is a very promising combination for physical implementations, continued research with respect to different error sources and variations of the initial lattice (such as the Union Jack lattice) will be essential in evaluating this proposal in more detail.

For fault-tolerant quantum computation, only the simplest possible error source, the qubit-flip channel, has been studied in conjunction with measurement errors. However, the regime applies to any code which is to be analyzed with respect to real-world applications. In particular, it is of interest to test whether the increased error threshold for depolarizing noise also carries over to the fault-tolerant regime.
Finally, the results for fault-tolerant quantum error correction raise new and interesting questions themselves: for all previous instances the error thresholds for the Toric code and topological color codes were identical, yet when measurement errors are present the two setups appear to show a different stability. At the same time, investigation of Wilson loop distributions suggests that the transition for the hypercubic gauge theory is continuous, while the one related to color codes is first-order. Further investigations for different qubit and measurement error rates p ≠ q might allow for a more detailed understanding of what makes the codes behave differently. Preliminary data suggest an error threshold of pc ≈ 0.017(3) when q = 2p and a threshold of pc ≈ 0.047(6) when q = p/2. Furthermore, employing methods known from the field of quantum chromodynamics might allow for more precise observables to locate the Higgs-to-confinement transition in these lattice gauge theories. Indeed, the profound relationship between complex statistical-mechanical models and topological quantum error-correction codes promises to deliver a plethora of new paradigms to be studied in the coming years.

Error Source      Topological Error Code        Threshold pc
Qubit-Flip        Toric code                    0.109(2)†
Qubit-Flip        Color codes, honeycomb        0.109(2)
Qubit-Flip        Color codes, square-oct.      0.109(2)
Depolarization    Toric code                    0.189(2)
Depolarization    Color codes, honeycomb        0.189(2)
Depolarization    Subsystem codes               0.055(2)
Q.-Flip & Meas.   Toric code                    0.033††
Q.-Flip & Meas.   Toric code, q = 2p            0.017(3)
Q.-Flip & Meas.   Toric code, q = p/2           0.047(6)
Q.-Flip & Meas.   Color codes, q = p            0.048(3)
Q.-Flip & Meas.   Color codes, q = 2p           0.025(4)
Q.-Flip & Meas.   Color codes, q = p/2          0.066(7)

Table 8.1: Summary of the error thresholds calculated numerically as part of this thesis. Note that the numbers for different error channels cannot be compared directly: for the qubit-flip channel, pc only refers to the maximum amount of flip errors which can be sustained, while for the depolarizing channel, pc is the sum of all three basic error types. Furthermore, the lower threshold for subsystem codes is the result of a compromise for a simpler error-correction procedure. Likewise, a lower value of pc is to be expected in fault-tolerant schemes due to the additional presence of measurement errors. Remarkably, the behavior of the Toric code and topological color codes appears to be different in the fault-tolerant regime, despite finding perfect agreement in all other instances. † The threshold for the Toric code is shown as a reference; it was calculated numerically by Dennis et al. [52]. †† The numerical estimate for the fault-tolerant Toric code was obtained by Ohno et al. [103].


Appendix A

Optimized Temperature Sets for Parallel Tempering

Parallel tempering Monte Carlo [72, 73, 74] reduces equilibration times by allowing systems to move in temperature space and thus more easily overcome free-energy barriers which would otherwise be insurmountable: at higher temperatures, thermal fluctuations allow a system to escape metastable states on time scales several orders of magnitude faster. This is achieved by concurrently simulating M copies of the system at different fixed temperatures [T1, ..., TM] and exchanging the configurations of neighboring systems after each sweep with the probability given in equation (3.15), namely

P_swap = e^{(E1 − E2)(β1 − β2)} .   (3.15)

However, the efficiency of the approach strongly depends on the particular choice of the temperatures: when the temperatures are too far apart, both ΔE and Δβ in equation (3.15) will typically be large and the majority of swap attempts will be rejected. As a result, the M replicas are essentially independent simple Monte Carlo simulations with no benefit from parallel tempering. Likewise, when temperatures are too close, the overhead required to simulate many replicas is not offset by the speedup from parallel tempering and CPU time is wasted. There is a handful of recipes [105, 106, 107, 108, 109, 72] on how to select the temperature set for parallel tempering Monte Carlo to run efficiently.

For parallel tempering to be efficient, the temperature set should be chosen to allow configurations to easily move in temperature space. This can be expressed, for instance, as a flat distribution of the acceptance rates at different temperatures acc(T). If the average acceptance rate at a specific temperature is low, this essentially cuts the temperature set in half with little exchange of systems between the two sides. Another way to express the ability of systems to move in temperature space is via the typical round-trip time of a system.

When no information about the critical temperature and the properties of a possible transition is known, a geometric progression lends itself as a starting point. Given the desired range [T1, TM] and size M of the set, the temperatures can be generated using

T_i = T_1 · (T_M / T_1)^{(i−1)/(M−1)} .   (A.1)
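Equation (A.1) can be implemented in a few lines; this sketch (with an illustrative interface) generates the geometric set for a given range and size:

```cpp
#include <cmath>
#include <vector>

// Geometric temperature progression of equation (A.1):
// T_i = T_1 * (T_M / T_1)^{(i-1)/(M-1)}, i = 1, ..., M.
std::vector<double> geometric_temperatures(double t1, double tm, int m) {
    std::vector<double> temps;
    for (int i = 1; i <= m; i++) {
        double frac = static_cast<double>(i - 1) / (m - 1);
        temps.push_back(t1 * std::pow(tm / t1, frac));
    }
    return temps;
}
```

Neighboring temperatures then have a constant ratio, which concentrates points at the low-temperature end of the range.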

The geometric progression produces a smaller step width at low temperatures, where relaxation is slow. This works particularly well for disordered systems, where the specific heat of the system does not exhibit a strong divergence. As a next step, information gathered from the initial run (i.e., using a linear set or a geometric progression) can be used to generate an adapted temperature set. Because the energy of a system typically equilibrates rather fast, energy-derived observables can produce usable estimates even if the initial run was not equilibrated very well. In particular, since the exchange probability for neighboring replicas depends only on Δβ and ΔE, the typical value of these differences is all that needs to be estimated. As the first estimator written during my thesis, I used the average energy per spin to accomplish this. The approach follows these steps:

• Data from the initial run is read from a file.

• A low-pass filter is applied to reduce statistical fluctuations.

• An approximator function f is fitted to the data such that f(x) can be used to estimate the energy at any temperature. As a good start, linear interpolation can be used – the current implementation uses splines for a smoother result.

• f(x) is used to estimate the step width for a flat histogram, initially assuming the energy distribution to be of vanishing width:

P(β1 → β2) = e^{(β1 − β2)[f(β1) − f(β2)]} .   (A.2)

The target acceptance rate which yields the desired range and size of the temperature set is then found via binary search. While this is a very crude estimator, its advantage lies in only requiring a very rough estimate of the energy as a function of the temperature T. As for the geometric progression, results are best when there is no strong divergence in the specific heat. In particular for spin glasses, this method has proven to reliably produce flat histograms using very little run-time information.

The initial approach outlined above can be improved further by using information about the specific heat (provided by the user) or by numerical differentiation of the energy. Furthermore, the acceptance probability itself can also be used to further improve a temperature set, provided that this information was measured during the simulation. A sample implementation as an R-script is given in listing A.1. One of the more involved methods uses feedback optimization to improve round-trip times [105]. This particular estimator was also implemented to allow for a more detailed estimate when the required measurements from a previous run are available.

The resulting combined estimator calculates the temperature set from the five methods mentioned above (linear, geometric, energy, acceptance and feedback) and represents them as a density in temperature space. The default behavior of the script is to combine results from the feedback method, the energy estimator and a geometric progression, using each method only if the necessary information is available from a previous run. In particular, the feedback optimization increases the temperature density near the transition, while the energy estimator and geometric progression provide a sensible default when the measurements for the more involved methods are not available. The resulting temperature set is not highly optimized (which would require detailed investigation of round-trip times and acceptance rates), but it works surprisingly well for most situations. Furthermore, the fact that it can be fully automated and requires only very little run-time information makes it suitable for inclusion in comprehensive simulation libraries, such as, for instance, the ALPS library [110, 111, 112].
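The binary-search step described above can be sketched as follows; next_beta and the linear test function are illustrative assumptions, not the actual interface of the script:

```cpp
#include <cmath>
#include <functional>

// Given an energy approximator f(beta), find beta2 > beta1 such that the
// predicted swap acceptance of equation (A.2),
//   P = exp((beta1 - beta2) * (f(beta1) - f(beta2))),
// matches a target rate. Since the energy f decreases with beta, P shrinks
// monotonically as the step grows, so bisection applies.
double next_beta(const std::function<double(double)>& f,
                 double beta1, double target, double beta_max) {
    double lo = beta1, hi = beta_max;
    for (int it = 0; it < 100; it++) {
        double mid = 0.5 * (lo + hi);
        double acc = std::exp((beta1 - mid) * (f(beta1) - f(mid)));
        if (acc > target) lo = mid;  // acceptance still too high: widen step
        else              hi = mid;  // acceptance too low: narrow step
    }
    return 0.5 * (lo + hi);
}
```

Repeating this from each newly found temperature builds up the full set for a chosen target acceptance rate.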

120
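As an illustration of the two simplest ingredients, a geometric progression and an acceptance-rate criterion based on a rough energy estimate, the following sketch generates a temperature set in C++. This is not the thesis' R-script: the function names, the binary search over the number of replicas, and the caller-supplied model energy E(T) are assumptions made for this example.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Geometric progression of n temperatures between tmin and tmax.
std::vector<double> geometric_set(double tmin, double tmax, int n) {
    std::vector<double> temps;
    double ratio = std::pow(tmax / tmin, 1.0 / (n - 1));
    for (int i = 0; i < n; ++i)
        temps.push_back(tmin * std::pow(ratio, i));
    return temps;
}

// Crude predicted swap acceptance between neighboring temperatures,
// using a rough caller-supplied estimate of the energy E(T):
//   p_swap ~ min(1, exp((E_i - E_j)(beta_i - beta_j)))
double mean_acceptance(const std::vector<double>& temps,
                       const std::function<double(double)>& energy) {
    double acc = 0.0;
    for (size_t i = 0; i + 1 < temps.size(); ++i) {
        double de = energy(temps[i]) - energy(temps[i + 1]);
        double db = 1.0 / temps[i] - 1.0 / temps[i + 1];
        acc += std::min(1.0, std::exp(de * db));
    }
    return acc / (temps.size() - 1);
}

// Binary search on the number of replicas until the predicted mean
// acceptance rate reaches the requested target: more replicas mean
// smaller temperature steps and hence higher swap acceptance.
std::vector<double> temperature_set(double tmin, double tmax, double target,
                                    const std::function<double(double)>& energy) {
    int lo = 2, hi = 1024;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (mean_acceptance(geometric_set(tmin, tmax, mid), energy) < target)
            lo = mid + 1;
        else
            hi = mid;
    }
    return geometric_set(tmin, tmax, lo);
}
```

Note that the script described in the text instead binary-searches the target acceptance rate that yields the desired range and size of the temperature set; the sketch above varies the number of replicas at a fixed range, but uses the same ingredients, namely a rough E(T) and a predicted swap acceptance.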

Listing A.1: R-script for temperature set generation

#!/usr/bin/Rscript

void parallel_tempering(std::vector<System> &sys) {
    for (uint i = 0; i < sys.size(); i++) {
        sys[i].pt_hasSwapped = 0;
        if (i < sys.size() - 1 &&
            (sys[i].energy > sys[i + 1].energy ||
             rng.unif() < exp((sys[i].energy - sys[i + 1].energy) *
                              (sys[i].beta - sys[i + 1].beta)))) {
            swap(sys[i].energy, sys[i + 1].energy);
            swap(sys[i].pt_up, sys[i + 1].pt_up);
            swap(sys[i].pt_down, sys[i + 1].pt_down);
            sys[i].swap_configuration(sys[i + 1]);
            sys[i].pt_hasSwapped = 1;
        }
        if (i == 0) { sys[i].pt_up = 1; sys[i].pt_down = 0; }
        if (i == sys.size() - 1 && i > 0) {
            sys[i].pt_up = 0;
            sys[i].pt_down = 1;
            sys[i].pt_hasSwapped = sys[i - 1].pt_hasSwapped;
        }
    }
}
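The swap criterion in the listing above is the standard Metropolis rule for replica exchange: two neighboring replicas swap with probability min(1, exp((E_i - E_j)(beta_i - beta_j))); the first clause of the if-statement merely short-circuits the exponential when it would exceed one. A minimal sketch (the helper name is illustrative, not part of the library):

```cpp
#include <algorithm>
#include <cmath>

// Metropolis acceptance probability for exchanging two replicas with
// energies e1, e2 at inverse temperatures beta1 > beta2 (replica 1 colder):
//   p = min(1, exp((e1 - e2) * (beta1 - beta2)))
double swap_probability(double e1, double beta1, double e2, double beta2) {
    return std::min(1.0, std::exp((e1 - e2) * (beta1 - beta2)));
}
```

When the colder replica currently has the higher energy, the argument of the exponential is positive and the swap is always accepted; otherwise it is accepted with a Boltzmann-suppressed probability, which preserves detailed balance in the joint ensemble of all replicas.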

void Simulation::timestep() {
    parallel_tempering(systems);
    single_updates(systems);
    if (te >= tem_min && t % (1 << std::max(0, te - tem_max)) == 0) {
        prepare_measurements(observers);
        apply_measurements(systems, observers);
    }
}

void Simulation::simulate() {
    running = true;
    for (te = 0; (1 << te) < t; te++);
    while (running) {
        this->timestep();
        this->estimate_wall_clock_time();
        for (t++, te = 0; (1 << te) < t; te++);
        if (te > te_max || sigterm)
            running = false;
    }
}
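In simulate(), the inner for-loops maintain te as the smallest integer with 2^te >= t, i.e. te = ceil(log2 t); combined with the modulo test in timestep(), measurements are therefore taken on a logarithmically thinned schedule. The helper below reproduces that schedule in isolation (its name and standalone form are assumptions for this sketch; tem_min and tem_max are the same cutoffs used in timestep()):

```cpp
#include <algorithm>
#include <vector>

// Enumerate the sweeps t at which timestep() records measurements:
// te = ceil(log2(t)), and sweep t qualifies when te >= tem_min and
// t is a multiple of 2^(te - tem_max) (clamped at 2^0 = 1).
std::vector<long> measurement_sweeps(long t_total, int tem_min, int tem_max) {
    std::vector<long> sweeps;
    for (long t = 1; t <= t_total; ++t) {
        int te = 0;
        while ((1L << te) < t) ++te;  // te = ceil(log2(t))
        if (te >= tem_min && t % (1L << std::max(0, te - tem_max)) == 0)
            sweeps.push_back(t);
    }
    return sweeps;
}
```

With tem_max = 3, for instance, every octave of sweep numbers beyond t = 8 contributes exactly four measurements, so the total number of measurements grows only logarithmically with the length of the run, which is convenient for monitoring equilibration on logarithmic time scales.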


Source Code Listing

Listing B.4: Subsystem class

#include "libglass/configuration.hpp"
#include "libglass/simulation.hpp"
#include "libglass/observer.hpp"
#include "libglass/system.hpp"
#include "libglass/archive.hpp"

class SubSys : public AbstractSystem {

private:

    static int L, N;             // linear system size and number of spins
    static uint *J;              // disorder realization for this simulation
    static double p;             // error rate per qubit
    static double *cosx, *sinx;  // wave vector lookup tables

    uint *spins;    // spin and Hamiltonian term configuration
    double *myexp;  // temperature dependent lookup table

public:

    SubSys();
    virtual ~SubSys();

    // functionality for the simulation
    void init(Configuration &config);
    static void static_init(Configuration &config);
    void swap_configuration(SubSys &other);
    double hamiltonian();
    void single_update();
    void measure(Observer &obs);

    // archive methods for automated checkpointing
    template <class Archive>
    static void static_archive(Archive &archive) {
        archive & L & p;
        N = L * L;
    }
    template <class Archive>
    void archive(Archive &ar) {
        AbstractSystem::archive(ar);
        for (int i = 0; i < N; i++)
            ar & spins[i];
    }
};


Listing B.5: Subsystem initialization

static uint d_channel(double p) {
    double r = rng.unif();
    if (r < p / 3.0f)     return 0x6u;
    if (r < p * 2 / 3.0f) return 0xau;
    if (r < p)            return 0xcu;
    return 0x0u;
}

void SubSys::static_init(Configuration &config) {
    require(p);
    CONDITION( 0.0f