Survey · Part II of III — Algorithms

Quantum Physics · York University, 2019

Deutsch, Shor, and the Geometry of Qubits

RSA encryption protects global banking, government communications, and the internet. It relies on one assumption: that factoring large numbers into primes is computationally hard. In 1994, Peter Shor proved that assumption is only true for classical computers.

This is not an academic concern. RSA-2048, the standard encryption protecting most internet traffic, would take a classical supercomputer millions of years to break. Shor's algorithm, running on a sufficiently large quantum computer, breaks it in hours. The algorithm exists. The mathematics is settled. Only the hardware remains unbuilt at scale.

Before Shor, there was Deutsch — and before the algorithms, there was the geometry. This is the story of how quantum parallelism becomes useful computation.

The First Algorithm That Proved a Point

The Deutsch-Jozsa algorithm, published in 1992 by David Deutsch and Richard Jozsa, has almost no practical application. Its value is entirely philosophical — and that makes it one of the most important papers in the history of the field.

The problem it solves: given a function f that maps n-bit strings to a single bit (0 or 1), determine with certainty whether f is constant (always outputs 0, or always outputs 1) or balanced (outputs 0 for exactly half of inputs and 1 for the other half).

Classically, in the worst case, you need to evaluate f on 2ⁿ⁻¹ + 1 of its 2ⁿ possible inputs (one more than half) before you can be certain of the answer. For n = 100, that is more than 10²⁹ evaluations, far beyond any feasible computation. Deutsch-Jozsa does it in a single evaluation of f, with certainty, using quantum parallelism and interference.

The mechanism: prepare a quantum register in an equal superposition of all 2ⁿ input states using n Hadamard gates — the transformation that takes |0⟩ to (|0⟩ + |1⟩)/√2. Evaluate f on this superposition in a single application of the quantum gate Uf. The interference of amplitudes then encodes the answer: if f is constant, measuring the register yields the all-zeros string |0…0⟩ with certainty; if f is balanced, that outcome has zero amplitude and never appears. One call to Uf suffices.
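The interference argument fits in a few lines of simulation. A minimal sketch (the function name is illustrative, not from any quantum library), with the oracle applied in its equivalent phase-kickback form |x⟩ → (−1)^f(x) |x⟩:

```python
import numpy as np

def deutsch_jozsa(f, n):
    # Decide whether f: {0,1}^n -> {0,1} is constant or balanced with a
    # single oracle query, by direct statevector simulation.
    N = 2 ** n
    # n Hadamards on |0...0>: the uniform superposition over all inputs.
    state = np.full(N, 1 / np.sqrt(N))
    # One query to the oracle in phase form: |x> -> (-1)^f(x) |x>.
    state *= [(-1) ** f(x) for x in range(N)]
    # n Hadamards again; the |0...0> amplitude is the mean of the entries.
    amp_zero = state.sum() / np.sqrt(N)
    # Constant f gives |amplitude| 1; balanced f gives exactly 0.
    return "constant" if abs(amp_zero) > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 1, 3))                       # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3))   # balanced
```

The simulation uses exponential classical resources, of course; the point is that the quantum circuit it mimics makes only one call to f.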

"This was the first example of a quantum algorithm that outperforms any classical algorithm." — Rieffel & Polak, Quantum Computing: A Gentle Introduction

The point was not the problem. The point was the proof of concept: quantum parallelism, combined with interference, produces a computational advantage that is not merely faster but categorically different.

Simon's Hidden String

Daniel Simon's 1994 algorithm was the bridge between Deutsch-Jozsa and Shor — and crucially, it was the algorithm that inspired Shor to look at factoring.

Simon's problem: given a function f such that f(x) = f(y) if and only if x = y or x ⊕ y = a (where a is some hidden nonzero n-bit string), find a. Classically, the best algorithm requires O(2^(n/2)) evaluations of f — exponential in n. Simon's quantum algorithm finds a in O(n) quantum evaluations, followed by O(n²) classical post-processing steps. The speedup is exponential.

The technique was new: quantum parallelism to create a superposition over all inputs, measurement to collapse to a state orthogonal to the hidden string, and classical linear algebra to extract a from enough such measurements. This combination — quantum sampling followed by classical post-processing — became the template for quantum algorithms.
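A sketch of that template, with the quantum subroutine replaced by a classical stand-in that returns what each run of Simon's circuit would: a uniformly random y with y · a = 0 (mod 2). The function names are illustrative; the GF(2) elimination is the post-processing described above:

```python
import random

def dot2(x, y):
    # Inner product over GF(2), treating ints as bit vectors.
    return bin(x & y).count("1") % 2

def simon_sample(a, n):
    # Stand-in for one run of Simon's circuit: a uniformly random
    # n-bit y orthogonal to the hidden string a (y . a = 0 mod 2).
    while True:
        y = random.randrange(2 ** n)
        if dot2(y, a) == 0:
            return y

def recover_hidden_string(n, sample):
    # Classical post-processing: Gaussian elimination over GF(2) until
    # n - 1 linearly independent y's are collected, then brute-force the
    # one-dimensional null space to read off a (fine for small n).
    pivots = {}  # leading-bit position -> reduced row
    while len(pivots) < n - 1:
        y = sample()
        while y:
            lead = y.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = y
                break
            y ^= pivots[lead]
    rows = list(pivots.values())
    return next(a for a in range(1, 2 ** n)
                if all(dot2(y, a) == 0 for y in rows))

n, a = 6, 0b101101
print(bin(recover_hidden_string(n, lambda: simon_sample(a, n))))  # 0b101101
```

The n − 1 independent samples span the orthogonal complement of a, whose own orthogonal complement is exactly {0, a}; that is why the null-space search recovers the hidden string.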

The Algorithm That Should Terrify Banks

Peter Shor's 1994 factoring algorithm is the reason quantum computing moved from academic curiosity to national security concern.

The problem: given a composite integer N, find its prime factors. The best known classical algorithm for this — the general number field sieve — runs in super-polynomial (though sub-exponential) time, roughly exponential in the cube root of the number of digits. For N with 2048 binary digits (RSA-2048), classical factoring is computationally infeasible in any reasonable timeframe.

Shor's key insight, inspired by Simon's algorithm, was that factoring reduces to finding the period of the function f(a) = x^a mod N, for a randomly chosen x coprime to N. If you know the period r of this function, you can extract the prime factors of N using elementary number theory (specifically, by computing the greatest common divisor of x^(r/2) ± 1 and N using Euclid's algorithm on a classical computer, provided r is even and x^(r/2) ≢ −1 mod N; otherwise, a new random x is chosen).
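The classical half of this reduction can be written down directly. In this sketch the quantum period-finding subroutine is replaced by brute force, which is feasible only for tiny N; everything else is the number theory described above:

```python
from math import gcd

def find_period(x, N):
    # Classical stand-in for the quantum subroutine: the smallest r > 0
    # with x^r = 1 (mod N). Exponential in general; fine for tiny N.
    r, v = 1, x % N
    while v != 1:
        v = (v * x) % N
        r += 1
    return r

def shor_classical_part(N, x):
    g = gcd(x, N)
    if g != 1:
        return g, N // g          # lucky: x already shares a factor with N
    r = find_period(x, N)
    if r % 2 == 1:
        return None               # odd period: retry with a different x
    y = pow(x, r // 2, N)
    if y == N - 1:
        return None               # x^(r/2) = -1 (mod N): retry
    p = gcd(y - 1, N)             # y^2 = 1 (mod N) with y != +-1, so both
    return p, N // p              # gcd(y - 1, N) and gcd(y + 1, N) are proper

print(shor_classical_part(15, 7))   # (3, 5)
print(shor_classical_part(21, 2))   # (7, 3)
```

A random x fails (odd r, or x^(r/2) ≡ −1) only with bounded probability, so a few retries suffice in expectation.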

The period r can be exponentially large — classical period-finding is infeasible. But on a quantum computer, the period can be found using the quantum Fourier transform, which operates on the 2^(2L)-dimensional state space of 2L qubits, where L is the number of bits of N. The exponential dimensionality of this space permits the Fourier transform of an exponentially long sequence — something impossible classically — in O(L³) quantum operations.

The quantum Fourier transform finds the period of a sequence in polynomial time by exploiting a Hilbert space whose dimension grows exponentially with the number of qubits.

The algorithm has two main components: modular exponentiation — computing xa mod N for all values of a in superposition — and the inverse quantum Fourier transform (QFT). Together they take O(L³) operations. The entire factoring algorithm runs in polynomial time.
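Why the Fourier transform exposes the period can be seen in a toy example, using NumPy's FFT as a stand-in for the QFT (register size and period are illustrative): after modular exponentiation and a measurement of the second register, the first register is left in a uniform comb of states spaced r apart, and its transform concentrates all probability on multiples of M/r.

```python
import numpy as np

M, r = 256, 8                       # toy register size M = 2^L and period r
state = np.zeros(M)
state[::r] = 1.0                    # comb of spikes spaced r apart
state /= np.linalg.norm(state)      # normalise the M/r spikes

# Unitary (norm="ortho") DFT: the analogue of applying the QFT.
spectrum = np.abs(np.fft.fft(state, norm="ortho")) ** 2
outcomes = np.nonzero(spectrum > 1e-9)[0]
print(outcomes)                     # only multiples of M/r = 32 survive
```

Here r divides M, so the peaks are exact; in the real algorithm they are merely sharp, and a continued-fraction step recovers r from a measured peak.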

In 2001, IBM demonstrated Shor's algorithm on a 7-qubit nuclear magnetic resonance quantum computer, factoring 15 = 3 × 5. In 2007, compiled versions were demonstrated with photonic qubits. In 2012, a 9-element solid-state processor factored 15 using four phase qubits and five superconducting resonators. The same year, a scalable qubit-recycled version factored 21 — the largest integer yet achieved with Shor's algorithm at that time. Each demonstration revealed both progress and the gap remaining.

The Noise That Eats Computation

The problem with quantum computation is that quantum states are fragile. Any interaction between a qubit and its environment — a stray electromagnetic field, a phonon, a photon — causes decoherence: the qubit becomes entangled with the environment, its superposition collapses, and the quantum information is lost.

Decoherence is not just a practical inconvenience. It is a fundamental physical process. And as computations grow longer — more qubits, more gates, more time — errors accumulate. The output of a sufficiently long quantum computation becomes indistinguishable from noise.

Shor addressed this directly. In 1995, he showed that quantum error correction is possible: by encoding one logical qubit into a subspace of multiple physical qubits, errors can be detected and corrected without measuring — and therefore without collapsing — the encoded quantum information. The key insight is that error correction requires only measuring certain correlations between qubits (the syndrome), not the qubit states themselves.
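The syndrome idea can be illustrated with the simplest code of all, the 3-qubit bit-flip repetition code, tracked here purely classically on a basis state (a toy, not Shor's full 9-qubit code): the two parity checks locate an error without ever reading the encoded bit.

```python
def encode(bit):
    # Logical |b> -> |bbb>: the 3-qubit bit-flip repetition code.
    return [bit, bit, bit]

def syndrome(q):
    # Parities q0 XOR q1 and q1 XOR q2: the classical shadow of measuring
    # Z1Z2 and Z2Z3. Neither parity reveals the logical bit value.
    return (q[0] ^ q[1], q[1] ^ q[2])

def correct(q):
    # Each single bit-flip leaves a unique syndrome fingerprint.
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    s = syndrome(q)
    if s in flip_at:
        q[flip_at[s]] ^= 1
    return q

q = encode(1)
q[2] ^= 1                 # a single bit-flip error on qubit 2
print(syndrome(q))        # (0, 1): error located without reading the data
print(correct(q))         # [1, 1, 1]: codeword restored
```

A genuinely quantum version measures the same parities as operator eigenvalues, so the logic carries over to superpositions; phase errors need a second, conjugate layer of the same trick, which is exactly what Shor's 9-qubit code adds.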

A further refinement followed: fault-tolerant quantum computation. Shor's 1996 paper showed that for any quantum computation with t gates, a polynomial-size quantum circuit exists that can tolerate O(1/(log t)^c) inaccuracy and decoherence per gate, for some constant c. Below a threshold error rate per gate, arbitrarily long computations become possible by adding redundancy.

"Any quantum computer realistically built is expected to have unreliable components. Inaccuracies in quantum state transformations during computation pile up, rendering the output of long computations unreliable." — Peter Shor, 1996

The threshold theorem is the theoretical foundation for practical quantum computing. It says: build your qubits well enough — below a certain error rate per gate — and the error correction overhead is manageable. The physical error rate for current superconducting qubits is approaching but has not yet reliably crossed this threshold for large systems.
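The arithmetic behind "manageable overhead" is worth seeing once. Assuming, for illustration only, that one level of concatenated encoding maps a physical error rate p to (p/p_th)·p with an assumed threshold p_th = 10⁻² (both numbers are hypothetical, not measured rates), each added level squares the suppression factor:

```python
p_th = 1e-2   # assumed per-gate threshold error rate (illustrative)
p = 1e-3      # assumed physical per-gate error rate, below threshold
for level in range(4):
    print(f"level {level}: logical error rate ~ {p:.1e}")
    p = (p / p_th) * p   # one more level of concatenated encoding
```

Three levels of encoding take a 10⁻³ physical rate down to roughly 10⁻¹⁰, while the qubit overhead grows only polynomially; above threshold, the same recursion makes things worse instead.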

What the Algorithms Reveal

The four algorithms of this era — Deutsch-Jozsa, Simon's, Shor's factoring, and the error correction framework — collectively define what quantum computing is and is not.

Quantum computing is not universally faster. For most problems — sorting, searching unstructured data, most optimisation tasks — quantum speedup is modest or nonexistent. Grover's algorithm provides a quadratic speedup for unstructured search (O(√N) vs O(N) classically), useful but not revolutionary.

Quantum computing provides exponential speedup for a specific class of problems: those with hidden algebraic structure — periodicity, symmetry, Fourier-transformable properties — that quantum interference can exploit. Factoring is one such problem. Discrete logarithms are another. Quantum simulation of quantum systems — Feynman's original motivation — is a third.

The question Sycamore was built to answer was different: not whether quantum computers can solve useful problems faster, but whether they can perform any computation beyond the reach of classical supercomputers. That is quantum supremacy. And that is Part III.

Part III — Upcoming

Sycamore, Willow, and the Post-Classical Horizon

Original Survey Paper

This essay is adapted from a 43-source graduate survey written at York University in 2019. The original document includes complete mathematical treatment of the Deutsch-Jozsa and Shor algorithms, circuit diagrams, Hamiltonian derivations, and full bibliography.

Kshitij Govil  ·  York University, 2019  ·  Updated for Web, 2026