Survey · Part I of III — Foundations

Quantum Physics · York University, 2019

Turing, Feynman, and the Machine that Thinks in Superposition

"Those who are not shocked when they first come across quantum theory cannot possibly have understood it." — Niels Bohr

Bohr wasn't being dramatic. He was being precise. Quantum mechanics does not ask you to update your intuitions. It asks you to abandon them entirely, and then rebuild your understanding from mathematics that has no everyday analogy. Most people never do this. And so most people have no idea what is actually at stake when physicists talk about quantum computers.

This is an attempt to do it properly. Not to oversimplify, and not to hide behind the equations either. The field spans five decades of physics, computer science, and philosophy of mind. What follows covers the foundations: the classical machine, its limits, and the three people who understood — before anyone had built a single qubit — that nature itself computes differently than we do.

The Machine That Cannot Not Compute

In 1936, Alan Turing published a paper that changed the question. Not "what can machines do?" but "what does it mean to compute?" He proposed a theoretical device of elegant minimalism: an infinitely long tape divided into squares, each holding a symbol. A read/write head scans one square at a time. Preset rules determine what the head does based on the current symbol and the machine's internal state — whether to write, move left, move right, or stop.

That's it. No processor. No memory chip. No operating system. And yet, Turing proved that any computation that can be performed at all can be performed by this machine. The "Universal Turing Machine" — a Turing machine that reads the description of another Turing machine and simulates it — became the theoretical foundation for every computer ever built.
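The tape-head-and-rule-table model is simple enough to sketch in a few lines of code. This is an illustrative toy, not Turing's own example: a machine (here called `flip`, a name chosen for this sketch) that inverts every bit on its tape and halts at the first blank square.

```python
# A minimal Turing machine sketch: a tape, a head, a state, and a rule table.
# The example machine below flips every bit on the tape, then halts.

def run(rules, tape, state="start"):
    tape = dict(enumerate(tape))   # sparse tape: square index -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, " ")             # blank squares read as " "
        write, move, state = rules[(state, symbol)]
        tape[head] = write                       # write a symbol...
        head += {"L": -1, "R": 1, "N": 0}[move]  # ...then move the head
    return "".join(tape[i] for i in sorted(tape)).strip()

# Rule table: (state, symbol) -> (symbol to write, head move, next state)
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "N", "halt"),
}

print(run(flip, "1011"))  # -> 0100
```

Everything the machine "knows" lives in the rule table; the universal machine is simply one whose tape holds another machine's rule table as data.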

"It was possible to invent a Universal Computing Machine that can be used to compute any computable sequence." — Alan Turing, 1936

The Church-Turing hypothesis, formalised by Turing and Alonzo Church independently in 1936, went further: every function which would naturally be regarded as computable can be computed by the universal Turing machine. This is not a theorem. It is a conjecture about the nature of computation itself — and it held unchallenged for nearly fifty years.

What Classical Computing Actually Is

Before quantum computing makes sense, classical computing needs to be understood not as familiar background, but as a specific physical implementation of Turing's abstract model — with specific physical limits that follow from it.

Every digital computer encodes information in two states: high voltage (roughly 5V, representing True or 1) and low voltage (near zero, representing False or 0). Logic gates — AND, OR, NOT and their combinations — process these binary signals according to Boolean logic. An AND gate outputs True only if both inputs are True. An OR gate outputs True if either input is True. Chain enough gates together and you have arithmetic, memory, decision-making.

Information is stored in flip-flops — circuit elements that hold a high or low voltage until reset. Each flip-flop stores one bit. Ordered arrays of flip-flops store the 16-, 32-, or 64-bit "words" that processors handle. The entire architecture — gates, flip-flops, buses — is capable of any Boolean operation. Which, by Church-Turing, means it can compute anything computable.
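The step from gates to arithmetic is smaller than it sounds. As a sketch, here is a half adder — the smallest circuit that adds — built only from AND, OR, and NOT, with XOR derived from them:

```python
# Boolean gates as functions; chaining them yields arithmetic.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# XOR built purely from the three basic gates: (a OR b) AND NOT (a AND b)
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# A half adder: adds two bits, producing a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum, carry)

print(half_adder(1, 1))  # -> (0, 1): one plus one is binary 10
```

Chain half adders into full adders, full adders into a ripple-carry adder, and you have the arithmetic unit of a processor.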

The question Feynman asked in 1982 was not whether classical computers were powerful. It was whether they were the right kind of thing.

Feynman's Question

Richard Feynman's 1982 paper "Simulating Physics with Computers" is not primarily a paper about computers. It is a paper about physics, and about the cost of lying.

Classical physics is deterministic. Given initial conditions, you can calculate what happens next. Classical computers simulate classical systems efficiently — the simulation scales proportionally with the system being modelled. But quantum mechanics is probabilistic, and quantum systems are described by wavefunctions that encode not just states but superpositions of states. When you try to simulate a quantum system on a classical computer, the required resources grow exponentially with the size of the system.

For a system of R particles, each potentially at N points in space, a classical simulation requires Nᴿ configurations. With R = 40 particles and N = 100 spatial points, that is already 10⁸⁰ configurations — more than the estimated number of atoms in the observable universe. The simulation doesn't just get hard. It becomes infeasible on any classical machine that could ever be built.
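Feynman's counting argument is a two-line calculation, and Python's arbitrary-precision integers let you carry it out exactly:

```python
# Classical cost of describing R particles, each at one of N spatial
# points: N**R configurations -- exponential in the particle count.
N = 100
for R in (10, 20, 40):
    print(f"R = {R:2d}: 10^{2 * R} configurations")  # 100**R == 10**(2R)

# Feynman's example: 40 particles already exceed the ~10^80 atoms
# in the observable universe.
assert N ** 40 == 10 ** 80
```

Doubling the particle count squares the cost. That is the exponential blowup the rest of the argument turns on.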

Feynman's proposal was direct: if classical computers cannot simulate quantum systems efficiently, perhaps we need a computer that is itself quantum mechanical. A system of discrete quantum elements — spins, two-level systems — with locally specifiable interactions, obeying quantum mechanical laws, could simulate quantum physics without the exponential blowup. The system would not represent quantum probability. It would be quantum probability.

"Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical." — Richard Feynman, 1982

The Reversibility Problem, and Why It Matters

Before anyone could build a quantum computer, a foundational objection had to be resolved: computation, as classically understood, is irreversible. When an AND gate with inputs (1,0) produces output 0, information is lost — you cannot recover the inputs from the output. Information destruction, by the second law of thermodynamics, generates heat. Rolf Landauer had shown in 1961 that erasing one bit of information releases a minimum of kT·ln2 of energy as heat — about 3×10⁻²¹ joules at room temperature, small but unavoidable.
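Landauer's bound is a one-line calculation from the Boltzmann constant:

```python
import math

# Landauer's bound: erasing one bit dissipates at least k*T*ln(2) of heat.
k = 1.380649e-23               # Boltzmann constant, J/K (2019 SI value)
T = 300                        # room temperature, K
E_bit = k * T * math.log(2)
print(f"{E_bit:.2e} J per erased bit")   # ~2.87e-21 J
```

Tiny per bit, but it is a floor set by physics, not engineering — and a modern processor erases bits trillions of times per second.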

Charles Bennett resolved this in 1973 by proving that computation need not be irreversible. He showed explicitly that any irreversible computation can be carried out by a reversible one — by saving all intermediate results rather than erasing them, printing the desired output, then retracing all steps in reverse to restore the machine to its original state. He realised this construction as a three-tape Turing machine. No information lost. No thermodynamic minimum of heat generated.
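The core trick — keep the inputs, write the result to a fresh bit — can be seen in miniature in the Toffoli gate, a standard reversible gate (used here as an illustration, not Bennett's exact construction). Because nothing is erased, applying the gate a second time undoes it exactly:

```python
# A reversible AND in the style of the Toffoli gate: both inputs are
# preserved, and the result is XORed onto a third "scratch" bit.
# Since no information is destroyed, the gate is its own inverse.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

state = toffoli(1, 1, 0)    # compute: scratch bit now holds 1 AND 1
print(state)                # -> (1, 1, 1)
print(toffoli(*state))      # uncompute: -> (1, 1, 0), back where we started
```

Compute, copy out the answer, uncompute: that compute–uncompute pattern is exactly Bennett's save-then-retrace strategy in gate form.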

This was not merely a theoretical curiosity. Quantum mechanical evolution is inherently reversible — the Schrödinger equation runs equally well forwards and backwards. If classical computation could be made reversible, then the door to a genuinely quantum mechanical computer was open.
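The reversibility claim can be stated in one line. Schrödinger evolution is generated by a unitary operator, and every unitary operator has an exact inverse:

iħ ∂|ψ(t)⟩/∂t = H|ψ(t)⟩   ⟹   |ψ(t)⟩ = U(t)|ψ(0)⟩,  where U(t) = e^(−iHt/ħ)

Since U(t)†U(t) = I, the evolution can always be undone: |ψ(0)⟩ = U(t)†|ψ(t)⟩. Nothing in the dynamics singles out a direction of time.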

Benioff Builds the First Quantum Model

Paul Benioff was the first to construct a fully quantum mechanical model of a computer. His 1980 paper showed that for any Turing machine Q and any number N of computation steps, there exists a Hamiltonian — a quantum mechanical energy operator — whose time evolution correctly enacts those N computation steps on an appropriate initial quantum state.

The result was striking: the total energy of the system remained unchanged after the completion of each model step. The computation dissipated zero energy at each step and proceeded at finite speed — without the need for arbitrarily slow processing. The constraint that computation must generate heat, or must run slowly to avoid it, had been broken at the quantum mechanical level.

The model was coherent, closed, and conservative. It showed that a Hamiltonian description of computation is not merely conceivable — it is mathematically explicit.

Enter Deutsch: The Universal Quantum Computer

David Deutsch's 1985 paper is where quantum computing becomes a formal discipline. Deutsch's critique of Benioff was precise: the quantum mechanical models that existed showed that computation could be performed without violating quantum mechanics. But none of them demonstrated any genuinely quantum property — no interference, no non-separability, no indeterminism visible in the output. They were classical computations dressed in quantum clothing.

Deutsch's contribution was to restate the Church-Turing hypothesis in a physical, unambiguous form:

"Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means." — David Deutsch, 1985

Then he built the machine. Deutsch's universal quantum computer has two components: a finite processor consisting of M two-state observables, and an infinite memory — the quantum equivalent of Turing's infinite tape. Its state is a unit vector in a Hilbert space spanned by simultaneous eigenvectors of position, processor states, and memory states. A single constant unitary operator U encapsulates its entire dynamics.

The crucial addition was eight operations beyond the classical universal set — unitary transformations on a single Hilbert space — that allow programs to evolve computational basis states into linear superpositions of each other. This is quantum parallelism: the ability to evaluate a function on many inputs simultaneously by placing the input in superposition. The computer doesn't choose one input. It processes all of them at once.

Deutsch also defined quantum circuits — the quantum generalisation of classical logic circuits — and showed that a universal quantum gate exists: a gate that, together with unit wires, can construct any quantum computational network. A quantum gate acts on qubits — two-level quantum mechanical systems — and is described by a unitary matrix. For n qubits, the gate is a 2ⁿ × 2ⁿ unitary matrix.
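The exponential dimension comes from the tensor product: gates on separate qubits combine via the Kronecker product, so n qubits live in a 2ⁿ-dimensional space. A minimal pure-Python sketch, building a two-qubit gate from the Hadamard gate and the identity and checking it is unitary:

```python
import math

# An n-qubit gate is a 2**n x 2**n unitary matrix. Two single-qubit
# gates acting in parallel combine via the Kronecker (tensor) product.
s = 1 / math.sqrt(2)
H  = [[s, s], [s, -s]]       # Hadamard gate (2 x 2)
I2 = [[1, 0], [0, 1]]        # single-qubit identity

def kron(A, B):
    # Kronecker product of two matrices given as nested lists.
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

G = kron(H, I2)              # Hadamard on one qubit, identity on the other
print(len(G), len(G[0]))     # -> 4 4: a 2^2 x 2^2 matrix

# Unitarity check: G times its transpose is the 4 x 4 identity
# (H has real entries, so transpose equals conjugate transpose).
GT = list(zip(*G))
P = [[sum(G[i][k] * GT[k][j] for k in range(4)) for j in range(4)]
     for i in range(4)]
print(all(abs(P[i][j] - (i == j)) < 1e-12
          for i in range(4) for j in range(4)))  # -> True
```

Each extra qubit doubles the matrix dimension — the same exponential that defeats classical simulation is, inside the quantum machine, just the natural size of the state space.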

What the Qubit Actually Is

A classical bit is either 0 or 1. A qubit is a two-level quantum mechanical system whose state is a complex unit vector in a two-dimensional Hilbert space:

|ψ⟩ = α₀|0⟩ + α₁|1⟩

where α₀ and α₁ are complex numbers satisfying |α₀|² + |α₁|² = 1. This state vector can be visualised as a point on the surface of a unit sphere — the Bloch sphere. The poles represent the classical states |0⟩ and |1⟩. Every other point on the sphere is a superposition: both 0 and 1 simultaneously, weighted by the probability amplitudes α₀ and α₁.

When measured, the qubit collapses to |0⟩ with probability |α₀|² or to |1⟩ with probability |α₁|². The measurement is irreversible. The superposition is destroyed. The art of quantum computing is in performing useful computation before measurement — extracting information from the interference of amplitudes without prematurely collapsing the state.
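The measurement rule is short enough to state as code. A hedged sketch, representing a qubit as its pair of complex amplitudes (the function name `probabilities` is illustrative):

```python
import math

# A qubit as a pair of complex amplitudes (alpha0, alpha1) with
# |alpha0|^2 + |alpha1|^2 = 1. Measurement yields |0> or |1> with
# probability equal to the squared magnitude of each amplitude.
def probabilities(alpha0, alpha1):
    p0, p1 = abs(alpha0) ** 2, abs(alpha1) ** 2
    assert abs(p0 + p1 - 1) < 1e-12, "state must be normalised"
    return p0, p1

# The equal superposition (|0> + |1>) / sqrt(2): a fair quantum coin.
s = 1 / math.sqrt(2)
p0, p1 = probabilities(s, s)
print(p0, p1)   # ~0.5 each
```

Note what the code cannot show: after one measurement the amplitudes are gone, replaced by whichever outcome occurred. The probabilities are all you ever get to see.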

Geometrically, all quantum state transformations on n qubits are rotations of the 2ⁿ-dimensional complex state space. The circuit model makes this concrete: each quantum gate is a rotation, and a quantum algorithm is a sequence of rotations chosen so that the desired output state has high amplitude when measured.

The quantum computer does not choose between possibilities. It holds all of them simultaneously, and uses interference to amplify the right answer.
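Interference can be shown in the smallest possible example: apply the Hadamard gate twice to |0⟩. The first application creates the equal superposition; in the second, the two paths leading to |1⟩ carry opposite signs and cancel, while the paths to |0⟩ add. A sketch:

```python
import math

# Hadamard applied twice to |0>: superpose, then interfere.
# Amplitudes into |1> cancel; amplitudes into |0> reinforce.
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]

def apply(gate, state):
    # Matrix-vector product: one gate acting on one qubit's amplitudes.
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = apply(H, [1, 0])    # superposition: amplitudes (s, s)
state = apply(H, state)     # interference:  back to (1, 0)
print([round(a, 10) for a in state])   # -> [1.0, 0.0]
```

Measurement now yields |0⟩ with certainty — not because the machine never explored |1⟩, but because the two routes there destructively interfered. Every quantum speedup is an elaborate version of this cancellation.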

Complexity: What Problems Actually Cost

Bernstein and Vazirani extended the Church-Turing thesis in 1993 to the computational complexity setting, asserting that any reasonable model of computation can be effectively simulated on a probabilistic Turing machine with at most polynomial slowdown. They presented a Universal Quantum Turing Machine whose simulation overhead was polynomially bounded — and critically, whose interference patterns were preserved during universal simulation.

Andrew Yao improved on this with the quantum circuit model: a rigorous mathematical model for quantum computation, analogous to the Boolean circuit model in classical complexity theory. The complexity of a quantum circuit is defined as the number of simple gates it contains. Circuit complexity, query complexity, communication complexity — these became the tools for understanding what quantum computers could do faster than classical ones, and by how much.

The stage was now set. The theoretical machine existed. Its complexity properties were understood. What remained was to find problems where the quantum speedup was not merely polynomial but exponential — where quantum computers could solve in hours what classical computers could not solve in the age of the universe.

That is the story of Part II.


Original Survey Paper

This essay is adapted from a 43-source graduate survey written at York University in 2019, covering the full arc from Turing's model through Google's Sycamore processor. The original academic document — with full mathematical appendices, Hamiltonian derivations, and complete bibliography — is available below.

Kshitij Govil  ·  York University, 2019  ·  Updated for Web, 2026