The evening after I published the last article, I came home to find Duoqi Tsukuru clumsily swiping his paw across the phone screen (probably trying to like something). When he saw me come in, he couldn't wait to speak: Meow, meow meow meow meow meow, meow meow meow, meow meow.

Which meant: I thought you were onto something big, but turns out it's nothing special — logic gates, composition, von Neumann architecture — I've got it all figured out! He swished his tail with utter nonchalance.

I said, while brewing tea: We're far from done. The last article only introduced one type among the many computers in the real world. There are other kinds, like DNA computers, and different computers work on different principles. It's like a mother cat giving birth to nine kittens — every kitten is different. Look at you, a calico; your big brother Watanabe Noboru is orange-and-white; your little brother Hatfield is a tuxedo cat. If I only study you, calico Duoqi Tsukuru, can I claim to understand cats? You three brothers — indeed, all cats — share something in common, and understanding that commonality is what it means to begin to understand cats. Computers work the same way.

Do tell, said Duoqi Tsukuru.

You know that computers can be programmed, right? I saw Duoqi Tsukuru nod and continued: The fact that computers are programmable means one computer can "simulate" another through programming. If Computer A can simulate Computer B through programming, then Computer A's "capability" is greater than or equal to Computer B's, because whatever Computer B can do, Computer A can accomplish through its simulated copy, Computer B'. Any questions so far?

Duoqi Tsukuru nodded.
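As an aside, the idea of "Computer A simulating Computer B" can be made concrete with a toy sketch in Python. The two-instruction machine below is my own invention for illustration, not something from the conversation: Python plays the role of Computer A, running a program written for an imaginary Computer B.

```python
# Computer A (Python) simulating a toy Computer B.
# B's entire instruction set: INC (add 1) and DBL (double).
# Whatever B can compute, A can compute by running this simulation,
# so A's capability is at least B's.
def simulate_b(program, value=0):
    for op in program:
        if op == 'INC':
            value += 1
        elif op == 'DBL':
            value *= 2
        else:
            raise ValueError(f'unknown instruction: {op}')
    return value

print(simulate_b(['INC', 'DBL', 'DBL', 'INC']))  # 0 -> 1 -> 2 -> 4 -> 5
```

If B could likewise run an interpreter for A's instructions, the two machines would simulate each other, which is exactly the equivalence of capability discussed next.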

I took a sip of tea and went on: If two computers, Computer A and Computer B, can simulate each other, then their computational capabilities are equivalent — whatever A can do, B can do too, and vice versa. Since they are equivalent in capability, we only need to study A, and the results will apply to B. Furthermore, if we can find a computer that can simulate every conceivable computer, then it represents the upper limit of all computational capability, and we can base our study of computation entirely on it.

"Wonderful!" Duoqi Tsukuru said excitedly. "Wonderful, let's get started! We could become young scientists!"

I smiled wryly: Someone already did this in 1936 — over eighty years ago. His name was Turing, and the computational model he proposed is called the Turing Machine.

Duoqi Tsukuru: Very well, please introduce the Turing Machine.

This is the Turing Machine. I pointed at the diagram and explained to Duoqi Tsukuru: A Turing Machine consists of an infinite tape and a read/write head. The tape is divided into cells, as shown above. Each cell can be blank, or contain a 0 or a 1. The read/write head can do the following things: move left or right (N cells), read the contents of the current cell, and write to the current cell (0, 1, or blank).

The read/write head's behavior is defined by a "state transition table." A typical state transition table looks like this (I printed it on paper and showed it to Duoqi Tsukuru):

Symbol Read | Write Instruction | Move Instruction
Blank | Write 1 | Move tape right
0 | Write 1 | Move tape right
1 | Write 0 | Move tape right

"Wait!" Duoqi Tsukuru raised his paw. "Wait, I have a question — it might not be right — but how does the Turing Machine's read/write head know what to do next? How does it work?"

Here's the thing, I said, handing Duoqi Tsukuru a cup of tea. The Turing Machine is an abstract model. By "abstract model," I mean we don't concern ourselves with its physical construction, but with its properties. Its properties are: it has a tape, it has a read/write head, and the read/write head can perform a limited set of actions. What the read/write head specifically does is determined by the table above. Once we give the Turing Machine such a table, it operates accordingly. This is the premise of the Turing Machine; we simply accept that it can do these things. In other words, anything that can do these things, we call a Turing Machine. For instance, you could be a Turing Machine: your paw can move along the tape, read what's on it, and write on it, and you can consult a table to decide your next step. In that sense, you are a Turing Machine. Understand?

"When I do long multiplication on paper, I'm a Turing Machine?" said Duoqi Tsukuru.

A promising student, indeed. Now, where does this table come from? I know you're about to ask, so I'll answer preemptively. The table represents different computational tasks; designing the table is essentially programming the Turing Machine. We create the table, submit it to the Turing Machine, and it acts accordingly. The only constraint is that our instructions can depend only on the symbol the read/write head reads, and the available actions are limited to writing a symbol (0, 1, or blank) and moving the read/write head.

In fact, the table above is a simple program that performs bitwise inversion on a binary number.
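That program can be checked on a minimal Turing Machine simulator. Here is a sketch in Python under the conversation's simplifications — a single-state machine whose head, per this table, always moves one cell to the right; the function and variable names are mine:

```python
def run_turing_machine(tape, table, steps):
    """Simulate the simplified Turing Machine from the dialogue.
    'table' maps the symbol read to (symbol to write, head movement).
    Cells hold '0', '1', or ' ' (blank); the tape grows on demand
    to mimic an infinite tape."""
    tape, head = list(tape), 0
    for _ in range(steps):
        while head >= len(tape):
            tape.append(' ')
        write, move = table[tape[head]]
        tape[head] = write
        head += move
    return ''.join(tape)

# The bitwise-inversion table from above: flip 0 <-> 1,
# write 1 on blank, and always move one cell to the right.
invert = {' ': ('1', 1), '0': ('1', 1), '1': ('0', 1)}

print(run_turing_machine('1011', invert, steps=4))  # -> 0100
```

Running the head over the four cells of '1011' flips each bit in turn, producing '0100' — the bitwise inversion the table encodes.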

Duoqi Tsukuru's whiskers were covered in water droplets from drinking tea. He shook his head and turned to look at me: Boring. Can't you do something actually interesting? Last time it was single-digit addition, this time it's bitwise inversion — even a dog could be trained to do this.

It was just for illustration, I said. No need to make it complicated. I could explain multi-digit addition or multiplication, or solving the "chickens and rabbits in a cage" problem, but constructing those examples would involve too many details — like how to arrange and map the input. Is that really appropriate for a popular science article? Getting the idea is enough.

Duoqi Tsukuru: Fine. I get it, it's somewhat interesting. But what's the use? What's the significance? This seems like an even less practical computer than the one from the last article. I don't feel like it provides any knowledge.

Me: Fully explaining the significance of the Turing Machine exceeds the scope of this article. But today we can touch on a few points, some of which you've already sensed — for instance, you realized that in a sense you yourself are a Turing Machine. This is actually central to the context in which the Turing Machine was proposed. The problem the Turing Machine addresses is, first and foremost, the question of human logical capability. Turing asserted that any process of human logical reasoning can be expressed by a Turing Machine. Therefore, by studying the properties of the Turing Machine, we can study human logical reasoning — or, one might say, cognitive capability.

Duoqi Tsukuru: Alright. So what was discovered?

Me: Mainly two things. The first is about what humans can do; the second is about what humans cannot do. What humans can do is addressed by the Church-Turing thesis. What humans cannot do is addressed by the Halting Problem.

The Church-Turing thesis states that any problem computable by an algorithm can be computed by a Turing Machine. You might ask: what does "computable by an algorithm" mean? Unfortunately, this concept doesn't have a rigorous definition. Essentially, defining this concept is equivalent to inventing a computational model, and every computational model we've invented so far has turned out to be equivalent in power to the Turing Machine. So the thesis effectively defines computability through the Turing Machine. But what exactly is computable? I think it is, to some extent, a question of mind. That's why the Church-Turing thesis remains merely a "thesis" — it can't even be called a "hypothesis," because it's unlikely we can formulate a statement that can be proven or disproven. This is a consequence of its fundamentally mind-related nature. But it still provides important knowledge: it is a formal model of a very large part of human cognitive ability, enabling us to understand what we can and cannot do.

And what we (Turing Machines) cannot do is the Halting Problem. In The Return of the Condor Heroes, the great swordsman Yang Guo gave Guo Xiang three golden needles, each representing a chance to have a wish fulfilled: whatever Guo Xiang asked for, Yang Guo would make it happen. As a kid, I always thought: every time I got down to the last needle, I'd just wish for three more needles. The story's original point is to satirize human greed, but here it illustrates the idea of going "meta" — making a wish about the wishes themselves. The Church-Turing thesis promises that anything computable by an algorithm can be computed by a Turing Machine. So the "meta" question becomes: does there exist an algorithm that can determine, for any given algorithm, whether it will ever produce a result (halt)? Turing proved that no such algorithm exists on a Turing Machine, and this result is rigorous: such an algorithm is not merely undiscovered but theoretically impossible. The proof constructs an isomorphic relationship between the algorithm being judged and the algorithm doing the judging: they express the same thing in different ways, and the Halting Problem leads to a contradiction between the two.
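One branch of that contradiction can be acted out in a toy Python sketch. This is not the proof, only its self-referential flavor: suppose someone hands us a candidate halting checker (here, a deliberately silly one that always answers "never halts"); we can then build an adversary program that asks the checker about itself and does the opposite. All names here are my own.

```python
def halts(prog):
    # A candidate "halting checker". This silly one always answers
    # False ("never halts"); the argument works against any candidate.
    return False

def adversary():
    # Do the opposite of whatever halts() predicts about us.
    if halts(adversary):
        while True:      # predicted to halt, so loop forever
            pass
    return 'halted'      # predicted to loop forever, so halt

# halts() claims adversary never halts, and yet:
print(adversary())  # prints 'halted' -- the checker is wrong.
# A checker answering True instead would make adversary loop forever,
# wrong again. No candidate escapes: that is the Halting Problem.
```

Whichever answer the checker gives about the adversary, the adversary does the opposite, so no checker can be right about every program.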

Duoqi Tsukuru: Sounds very esoteric. Can you prove it?

Me: I can, but let's save it for another day — this article is already long enough. I just want to add one thing: the proof structure of the Halting Problem is very similar to the construction of Gödel's Incompleteness Theorem, but I haven't yet figured out the exact relationship between them. It has some connection to Russell's "Barber Paradox," which was later resolved through axiomatic set theory — we consider that paradox to be a problem of incomplete mathematical language. But the Halting Problem and Gödel's Incompleteness Theorem suggest that within human logical capability, there seems to be a fundamental self-referential structure... "CRASH!" Just as I was getting into it, I heard a loud bang: Duoqi Tsukuru had knocked over my teapot with one swipe of his paw and scurried away. The tea spilled across the table, soaking the reference books I'd bought from Duozhuayu. They were: Functional Programming: Practice and Theory, The Annotated Turing, and Computability and Automata Theory.