The Fallacy of the "Singularity"

“Singularity” is a nice buzzword. It sounds so scientific and futuristic. It plays on a great fear of our time: obsolescence by machines.

It’s a fear as old as the Industrial Revolution, when a single machine began doing the work of a man. Then two. Then ten. Now complex robots can not only do the work of one person (of many, many people), but actually perform tasks that a human could not.

It’s a fear sensationalized in 2001, Terminator, and Blade Runner.

But is it remotely rational?

The basic version is this: eventually we will build a computer “smart” enough to design a smarter computer. At that moment, we humans will no longer be able to stop the forward march of computers and robots, who will surely build homicidal androids and lead a bloody revolution.

Computers, however, have not really gotten “smarter” in any meaningful sense. Ever. They’ve gotten much, much faster.

Programmers have gotten smarter, and have learned, invented, and discovered new techniques for solving problems, new problems, new solutions, and so on. New instruction sets have been added to processors to encompass complex but common tasks and simplify programs. More transistors let more data flow through at once, and new materials mean that we will continue to cram more information into fewer electrons with less friction (though we can never really get past one bit per lepton). 64-bit processors mean we can keep more of those bits in RAM, and 128-bit file systems can store more data than will ever exist on earth (at least without boiling the oceans).
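
As a rough sanity check on that 128-bit claim, here is some back-of-the-envelope Python. The zettabyte figure for “all the data on earth” is my own ballpark assumption, not a number from this post:

```python
# Back-of-the-envelope arithmetic for 64-bit vs. 128-bit address spaces.
bytes_64 = 2 ** 64      # distinct addresses with 64 bits
bytes_128 = 2 ** 128    # distinct addresses with 128 bits

# Ballpark assumption: "all the data on earth" is on the order of a zettabyte.
zettabyte = 10 ** 21

print(f"2^64  bytes is about {bytes_64 / zettabyte:.3f} ZB")    # ~0.018 ZB
print(f"2^128 bytes is about {bytes_128 / zettabyte:.2e} ZB")   # ~3.40e+17 ZB
```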

Shelly Blake-Plock asked, “What do we mean by ‘smart’?” when we talk about computers. It’s an important question.

Since computers are not capable of independent thought (a human is the ultimate source of all their “thoughts”), I think the best definition of “smart” relates to the kinds of problems a computer can solve. If computer A can solve more problems than computer B, then A is “smarter.” Right?

Except, by that definition, computers haven’t gotten any smarter since the 1930s!

The two important classes of problems are *undecidable* problems, like the Halting Problem, and NP-complete problems, like the Traveling Salesman.

For undecidable problems, no computer will ever be able to come up with an answer, unless we undergo a fundamental shift in computing, and then there will probably just be a different set of undecidable problems.
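
To make the Halting Problem concrete, here is the classic diagonalization argument sketched as Python. The `halts` function is hypothetical; the whole point is that no correct implementation of it can exist:

```python
# Sketch of why a general-purpose halting checker cannot exist.
# Suppose, for contradiction, someone hands us this function:
def halts(program, argument) -> bool:
    """Hypothetically returns True iff program(argument) eventually halts."""
    ...  # no body can make this correct for every possible program

# Then we could build this troublemaker:
def paradox(program):
    if halts(program, program):
        while True:        # loop forever if the checker says "halts"
            pass
    return "done"          # halt immediately if the checker says "loops"

# Does paradox(paradox) halt?
#  - If halts(paradox, paradox) returns True, paradox(paradox) loops forever.
#  - If it returns False, paradox(paradox) halts right away.
# Either way, halts() gave the wrong answer, so no correct halts() can exist.
```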

For NP-complete problems, new computers are able to work on them faster, and reach new solutions, but only because we were too impatient to wait for the old ones. Sometimes people come up with ways to solve a particular NP-complete problem, or a particular instance of it, a little faster, but there is likely an absolute limit on how far that can go. (That is, if NP≠P, as seems likely.)
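
To see what “faster, but only because we were too impatient” looks like in practice, here is a brute-force exact solver for the Traveling Salesman. It is a sketch; real solvers use branch-and-bound and heuristics, but the exact problem still grows factorially:

```python
from itertools import permutations

def shortest_tour(distances):
    """Brute-force exact TSP: try every ordering of the cities.

    distances[i][j] is the cost of traveling from city i to city j.
    With n cities there are (n-1)! tours to check, so doubling your CPU
    speed buys you roughly one extra city before the runtime explodes again.
    """
    cities = range(1, len(distances))          # fix city 0 as the start
    best_tour, best_cost = None, float("inf")
    for order in permutations(cities):
        tour = (0, *order, 0)
        cost = sum(distances[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Four cities is instant; forty would outlast the hardware running it.
example = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(shortest_tour(example))   # ((0, 1, 3, 2, 0), 18)
```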

The trick is that all real computers are less powerful than abstract machines like Universal Turing machines or RASPs (random-access stored-program machines). Given enough time, a simple, abstract Universal Turing machine could solve any problem the most advanced supercomputer could solve. And these mathematical limitations apply even to perfect, abstract machines with literally infinite storage (way more than 128-bit) and infinite time. These mathematical limitations define the word “computable” itself.
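
For a sense of how little machinery the word “computable” actually requires, here is a toy Turing machine simulator (my own minimal formulation, not any particular textbook's), running a machine that walks right and flips the bits of its input:

```python
def run_turing_machine(rules, tape, state="start", accept="halt"):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state), where
    move is -1 (left) or +1 (right). The tape is stored as a dict, so it
    is unbounded in both directions; the model's power comes from
    unlimited storage and time, not from speed.
    """
    tape = dict(enumerate(tape))
    head = 0
    while state != accept:
        symbol = tape.get(head, "_")           # "_" is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A machine that flips 0s and 1s, moving right, and halts at the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_turing_machine(flip, "10110"))   # -> 01001_
```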

So really, computers haven’t gotten one bit (or byte) smarter. Ever.

I think we’re safe from the robot revolution.

Now, if someone builds a machine that runs at one kilohertz, but can think for itself, we’re in trouble.