Coogee’19
Sydney Quantum Information Theory Workshop
David Poulin
(Sherbrooke, Canada)
Quantum computing with topological wormholes
Locality plays a fundamental role in quantum computation but also severely restricts our ability to store and process quantum information. We argue that this restriction may be unwarranted and proceed to introduce new defects on the surface code, called wormholes. These novel defects entangle two spatially separated sectors of the lattice: when anyonic excitations enter one mouth of a wormhole, they emerge through the other. Wormholes thus serve to connect distant regions of a flat, 2D lattice. We show that these defects are capable of encoding logical qubits and can be used to perform all gates in the Clifford group.
Markus Kesselring
(FU Berlin, Germany)
The twists and boundaries of the 2D color code
Topological error-correcting codes are interesting not only for their capacity to fault-tolerantly store and manipulate quantum information, but also as toy examples of topological phases of matter. Here we explore the color code phase. Specifically, we describe and classify all of its boundaries and twist defects, and discuss their interactions. Importantly, we also give lattice realisations of all of these objects, which greatly extends the toolbox available for color code quantum computation. To demonstrate this, we give some new applications of these objects. To investigate the phase further, we show how to map, or 'unfold', these defects onto simpler phases. In addition to the better-known mapping onto the surface code, we also show an elegant new mapping onto two copies of the three-fermion model.
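As a minimal point of reference for the color code phase (and not the boundary or twist constructions of the talk), the smallest triangular 2D color code is the Steane [[7,1,3]] code, whose X- and Z-type stabilizers can both be read off the parity-check matrix of the classical [7,4,3] Hamming code. The short Python sketch below verifies the CSS commutation condition for this standard example.

    import numpy as np

    # Parity-check matrix of the classical [7,4,3] Hamming code.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

    # The smallest triangular 2D color code (the Steane [[7,1,3]] code) uses
    # the same matrix for its X-type and Z-type stabilizer generators.
    HX, HZ = H, H
    assert not ((HX @ HZ.T) % 2).any()    # every X check commutes with every Z check

    # n - rank(HX) - rank(HZ) = 7 - 3 - 3 = 1 encoded logical qubit
    print("physical qubits:", H.shape[1], "| logical qubits:", H.shape[1] - 2 * H.shape[0])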
Earl Campbell
(Sheffield, UK)
A theory of single-shot error correction for adversarial noise
Single-shot error correction is a technique for correcting physical errors using only a single round of noisy check measurements, such that any residual noise affects only a small number of qubits. We propose a general theory of single-shot error correction and establish a sufficient condition, called good soundness, on the code's measurement checks. Good soundness in topological (more generally, LDPC) codes is shown to entail a macroscopic energy barrier for the associated Hamiltonian. Consequently, 2D topological codes with local checks cannot have good soundness. In tension with this, we also show that for any code there exists a choice of measurement checks that provides good soundness. In other words, every code can perform single-shot error correction, but the required checks may be nonlocal and act on many qubits. If we desire codes with both good soundness and simple measurement checks (the LDPC property), then careful constructions are needed. Finally, we use a double application of the homological product to construct quantum LDPC codes with single-shot error-correcting capabilities. Our double homological product codes exploit redundancy in measurement checks through a process we call metachecking.
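The sketch below is a toy illustration of the metachecking idea only, not the double homological product construction of the talk. When the measurement checks H are redundant, any vector m with m H = 0 (mod 2) is a metacheck: every valid syndrome satisfies it, so a violation signals a measurement error. The toy example uses the cyclic repetition code's checks, for which the single metacheck is the parity of all check outcomes.

    import numpy as np

    def gf2_nullspace(A):
        """Rows spanning the null space of A over GF(2)."""
        A = A.copy().astype(np.uint8) % 2
        m, n = A.shape
        pivots, r = [], 0
        for c in range(n):
            if r == m:
                break
            rows = np.nonzero(A[r:, c])[0]
            if rows.size == 0:
                continue
            A[[r, r + rows[0]]] = A[[r + rows[0], r]]   # bring a pivot into row r
            for i in range(m):
                if i != r and A[i, c]:
                    A[i] ^= A[r]                        # eliminate column c elsewhere
            pivots.append(c)
            r += 1
        free = [c for c in range(n) if c not in pivots]
        basis = []
        for f in free:
            v = np.zeros(n, dtype=np.uint8)
            v[f] = 1
            for i, p in enumerate(pivots):
                v[p] = A[i, f]
            basis.append(v)
        return np.array(basis) if basis else np.zeros((0, n), dtype=np.uint8)

    # Toy redundant check matrix: the checks of the length-6 cyclic repetition code.
    n = 6
    H = (np.eye(n, dtype=np.uint8) + np.roll(np.eye(n, dtype=np.uint8), 1, axis=1)) % 2

    # Metachecks span the left null space of H (all m with m @ H = 0 mod 2).
    M = gf2_nullspace(H.T)                              # here: the all-ones vector

    syndrome = (H @ np.array([1, 1, 0, 0, 0, 0], dtype=np.uint8)) % 2   # valid syndrome
    faulty = syndrome.copy(); faulty[0] ^= 1            # one faulty check measurement
    print((M @ syndrome) % 2, (M @ faulty) % 2)         # [0] vs [1]: metacheck flags the fault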
Wei-Wei Zhang
(Sydney, Australia)
Quantum topology identification with quantum walks and deep neural network
Topological quantum materials are a promising platform for topological quantum computation and the design of novel components for quantum computers. A necessary prerequisite, both for the discovery of novel topological quantum systems and for the analysis of topologically protected edge modes, is the identification of topological invariants of quantum matter. We have shown that continuous-time quantum walks governed by a two-dimensional spin-orbit lattice Hamiltonian can reveal transitions between topological phases, and we have proposed a corresponding experimental implementation with ultracold atoms. In this talk, I will present a universal, automatic method for topology identification which, for the first time, combines quantum walks with a deep neural network. Given the well-developed experimental technology for quantum walks and the learning power of deep neural networks, our work enables the efficient discovery and analysis of novel topological materials, and thereby supports the design of practical quantum computers.
[1] https://arxiv.org/abs/1811.12630
[2] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.121.250403
[3] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.197401
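The talk's method infers topological invariants from quantum-walk data using a deep neural network. As a purely classical baseline for the kind of invariant such a method must learn, the sketch below computes the Chern number of the lower band of the Qi-Wu-Zhang two-band model (an assumed standard toy model, not the Hamiltonian of the talk) with the Fukui-Hatsugai lattice method.

    import numpy as np

    def qwz_hamiltonian(kx, ky, u):
        """Bloch Hamiltonian of the Qi-Wu-Zhang two-band model."""
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        return np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz

    def chern_number(u, N=40):
        """Chern number of the lower band via the Fukui-Hatsugai lattice method."""
        ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
        vecs = np.empty((N, N, 2), dtype=complex)
        for i, kx in enumerate(ks):
            for j, ky in enumerate(ks):
                _, v = np.linalg.eigh(qwz_hamiltonian(kx, ky, u))
                vecs[i, j] = v[:, 0]                     # lower-band eigenvector
        F = 0.0
        for i in range(N):
            for j in range(N):
                a, b = vecs[i, j], vecs[(i + 1) % N, j]
                c, d = vecs[(i + 1) % N, (j + 1) % N], vecs[i, (j + 1) % N]
                # Berry flux through one plaquette from U(1) link variables
                F += np.angle(np.vdot(a, b) * np.vdot(b, c) *
                              np.vdot(c, d) * np.vdot(d, a))
        return int(round(F / (2 * np.pi)))

    for u in (-3.0, -1.0, 1.0, 3.0):
        print("u =", u, " Chern number =", chern_number(u))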
Anirudh Krishna
(Sherbrooke, Canada)
Performing fault-tolerant gates on hypergraph product codes
Hypergraph product codes are a class of quantum LDPC codes introduced by Tillich and Zémor. They possess an asymptotically non-vanishing rate and a distance that grows as the square root of the block size. In this work, we demonstrate how to perform Clifford gates on this class of codes using code deformation. Although generic methods to perform Clifford gates on stabilizer codes already exist, this is the first technique that takes advantage of the underlying structure of these codes. Our methods can be expressed in a purely algebraic manner, without the need to resort to topology, and generalize techniques presented earlier by David Poulin.
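For readers unfamiliar with the code family, the sketch below is a minimal illustration of the Tillich-Zémor hypergraph product itself (not the code-deformation gates of the talk): it builds CSS check matrices from two classical parity-check matrices and verifies the commutation condition. Taking both inputs to be a small repetition code reproduces a small surface code.

    import numpy as np

    def hypergraph_product(H1, H2):
        """Tillich-Zemor hypergraph product of classical parity-check matrices
        H1 (m1 x n1) and H2 (m2 x n2): returns CSS check matrices (HX, HZ)
        acting on n1*n2 + m1*m2 qubits."""
        m1, n1 = H1.shape
        m2, n2 = H2.shape
        HX = np.hstack([np.kron(H1, np.eye(n2, dtype=np.uint8)),
                        np.kron(np.eye(m1, dtype=np.uint8), H2.T)]) % 2
        HZ = np.hstack([np.kron(np.eye(n1, dtype=np.uint8), H2),
                        np.kron(H1.T, np.eye(m2, dtype=np.uint8))]) % 2
        return HX, HZ

    # Toy input: the two parity checks of the length-3 repetition code.
    H_rep = np.array([[1, 1, 0],
                      [0, 1, 1]], dtype=np.uint8)

    HX, HZ = hypergraph_product(H_rep, H_rep)            # a 13-qubit surface code
    assert not ((HX @ HZ.T) % 2).any()                   # X and Z checks commute
    print("HX:", HX.shape, " HZ:", HZ.shape)             # 6 checks each on 13 qubits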
Naomi Nickerson
(PsiQuantum, USA)
Measurement-based fault tolerance beyond foliation
Quantum error correction is most commonly considered from a 'circuit-based' point of view, as codes being operated on with measurements. Alternatively, all of quantum error correction can be phrased in the language of measurement-based quantum computing (MBQC) as the construction of fault-tolerant cluster states (FTCSs). While MBQC is often thought of as a hardware-driven choice, it is in fact a useful theoretical tool regardless of the eventual physical implementation. Surprisingly, while any 2D stabilizer code can be used to construct an equivalent FTCS through foliation, there are FTCSs that cannot be constructed through foliation of a stabilizer code. I will talk about some examples of this type of cluster state with remarkable properties.
Daniel Barter
(ANU, Australia)
Computing Renormalization Invariant Properties of Levin-Wen Models
Levin and Wen defined a 2D lattice model associated to any fusion category. Despite their importance, little is known about these models except in the smallest examples. We explain how to compute renormalization-invariant properties of Levin-Wen models, in particular their full defect theory. This is joint work with Jacob Bridgeman and Corey Jones.
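As a pointer to the kind of fusion-category data a Levin-Wen model is built from (and not the defect-theory computations of the talk), the sketch below writes down the nontrivial F-matrix of the Fibonacci fusion category and checks two of its basic properties.

    import numpy as np

    # Golden ratio and the nontrivial F-matrix [F^{tau tau tau}_tau] of the
    # Fibonacci fusion category, the simplest non-abelian input data for a
    # Levin-Wen model.
    phi = (1 + np.sqrt(5)) / 2
    F = np.array([[1 / phi,            1 / np.sqrt(phi)],
                  [1 / np.sqrt(phi),  -1 / phi]])

    assert np.allclose(F, F.T)               # real and symmetric
    assert np.allclose(F @ F, np.eye(2))     # unitary involution: F^2 = 1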
Michael Freedman
(Microsoft, USA)
Quantum computing with Octonions
We will describe the common geometric foundation of “measurement-only quantum computation”: systems of “equiangular subspaces”. The Octonions turn out, in a precise sense, to be the richest source of these.
Dominic Berry
(Macquarie, Australia)
TBA
Ryan Babbush
(Google, USA)
Balancing compactness and complexity in quantum simulations of chemistry
The efficient simulation of quantum chemistry is among the most anticipated applications of quantum computers. However, as the age of industrial quantum technology dawns, so too dawns the realization that even “polynomial” resource overheads are often prohibitive. There remains a large gap between the capabilities of existing hardware and the resources required to quantum compute classically intractable problems in chemistry. This talk will discuss the overhead required to error-correct these computations using state-of-the-art simulation methods such as qubitization and interaction-picture simulation. We will then discuss the tradeoffs between basis compactness and computational complexity associated with simulating different representations of molecular systems, including second-quantized plane waves, first-quantized plane waves, and tensor factorizations of the arbitrary-basis molecular Hamiltonian.
Guifre Vidal
(Perimeter Institute, Canada)
Tensor networks, geometry and AdS/CFT
The multiscale entanglement renormalization ansatz (MERA) is a (d+1)-dimensional tensor network that describes the ground state of a d-dimensional critical quantum spin system. In 2009, Swingle argued that MERA is a lattice realization of the AdS/CFT correspondence, with the tensor network representing a time slice of AdS, namely hyperbolic space H2. Other authors have since argued that, instead, MERA represents de Sitter spacetime dS2. I will introduce a criterion, based on CFT path integrals, to unambiguously attach a geometry to a tensor network, and conclude that MERA is neither H2 nor dS2, but actually a light cone L2. Finally, I will introduce two new tensor networks, euclidean MERA and lorentzian MERA, corresponding to H2 and dS2 respectively, and discuss the implications of these results for holography and the study of quantum field theory in curved spacetime.
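For concreteness, and independent of the geometric interpretation discussed in the talk, the sketch below builds random MERA tensors of an assumed bond dimension chi = 4 and checks the two defining constraints of the ansatz: the disentangler is unitary and the coarse-graining tensor is an isometry.

    import numpy as np

    chi = 4
    rng = np.random.default_rng(0)

    def random_isometry(rows, cols):
        """Random complex matrix with orthonormal columns (rows >= cols)."""
        q, _ = np.linalg.qr(rng.normal(size=(rows, cols)) +
                            1j * rng.normal(size=(rows, cols)))
        return q

    # Disentangler u: a unitary acting on two neighbouring sites.
    u = random_isometry(chi**2, chi**2)
    # Isometry w: maps two sites to one coarse-grained site.
    w = random_isometry(chi**2, chi).conj().T

    assert np.allclose(u.conj().T @ u, np.eye(chi**2))   # u is unitary
    assert np.allclose(w @ w.conj().T, np.eye(chi))      # w w^dagger = identity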
Nele Callebaut
(Princeton, USA)
Entanglement, 2-dimensional gravity and tensor networks
I will discuss the 2-dimensional theory of gravity that governs the dynamics of the entanglement entropy and entanglement Hamiltonian in a 2-dimensional boundary CFT. It goes by the name Jackiw-Teitelboim (JT) gravity. The regularized boundary of the JT theory of CFT entanglement dynamics is described by an effective 1-dimensional theory called Schwarzian quantum mechanics, which is derived using a process of entanglement renormalization, reminiscent of cMERA. Such a connection between tensor networks and the theory of CFT entanglement dynamics, also known as kinematic space, could have been expected, based on a conjectured relation between kinematic space and MERA.
Ben Brown
(Sydney, Australia)
Parallelized quantum error correction with fracton topological codes
Fracton topological phases have a large number of emergent symmetries that enforce a rigid structure on the excited states of the system. Remarkably, we find that the symmetries of a quantum error-correcting code based on a fracton phase enable us to design highly parallelized decoding algorithms. Here we propose decoding algorithms for the three-dimensional X-cube model in which decoding is subdivided into a series of two-dimensional matching problems, thus significantly simplifying the most time-consuming component of the decoder. In addition, we show that we can leverage the rigid structure of the syndrome data to improve threshold error rates beyond those of other three-dimensional models with point-like excitations. We conclude by using this explicit example to motivate the exploration of parallelizable codes. In particular, we discuss their relationship to other fracton topological codes, as well as to other modern notions that have arisen in the field of quantum error correction, such as single-shot codes and self-correction.
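A central subroutine in such a parallelized decoder is solving each two-dimensional layer as a matching problem. The sketch below shows one generic way to phrase this with networkx: minimum-weight perfect matching of point-like defects on a single periodic 2D layer under assumed Manhattan-distance weights. It is a standalone illustration, not the decoder of the talk.

    import itertools
    import networkx as nx

    def match_defects_2d(defects, Lx, Ly):
        """Pair up point-like defects in one periodic Lx x Ly layer by
        minimum-weight perfect matching on Manhattan distances."""
        g = nx.Graph()
        for (i, p), (j, q) in itertools.combinations(enumerate(defects), 2):
            dx = min(abs(p[0] - q[0]), Lx - abs(p[0] - q[0]))
            dy = min(abs(p[1] - q[1]), Ly - abs(p[1] - q[1]))
            g.add_edge(i, j, weight=-(dx + dy))   # negate: max-weight <=> min-distance
        pairs = nx.max_weight_matching(g, maxcardinality=True)
        return [(defects[i], defects[j]) for i, j in pairs]

    # Four defects on an 8 x 8 layer; the two nearby pairs get matched together.
    print(match_defects_2d([(0, 0), (1, 2), (5, 5), (6, 5)], 8, 8))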