If Isaac Asimov designed your computer...

Was in the library the other day and in a fit of nostalgia I picked up a bunch of old Isaac Asimov "robots" books. Like nearly all science fiction authors of that era, Asimov got computers pretty much all wrong, in at least three major ways.

Some vague spoilers below, though really these books are so old that you don't have much to complain about.

Form Factor
Probably the most excusable miscalculation is size. In Asimov-land, computers come in two varieties: gigantic tube- or relay-based monstrosities like Multivac, and positronic-brained robots (see the relevant Wikipedia entry). A positronic brain is about the size of a football but seems to be basically a solid mass of compute power. By contrast, all but the largest modern microprocessors have a tiny volume. This isn't that surprising, of course, since Asimov wrote most of these stories before the invention and popularization of the IC, so he was left either to extrapolate from existing computer technology (resulting in gigantic machines) or to invent something fundamentally new and unspecified (resulting in positronic brains).

UI versus AI
As the Wikipedia article points out, the capabilities of Asimov's imagined computers are a poor match for how computers actually work:

Although these stories are well-known, it is hardly ever recognized that Asimov's robots are nothing at all like computers, as the main series of them predated any of the major computer projects. The main stumbling block is that writing a program that would be able to determine whether any of the three laws would be violated is far more difficult than writing one for machine vision, or speech recognition, or even comprehending the activities and motivations in the human world, which is only attempted by determining a vast list of rules to check. Also, the stories' robots never get programming viruses, or require updates. They may, however, have new features installed (like R. Giskard, as we are told in Robots and Empire). Most importantly, they only stop functioning due to a clash between the (hypothetical) subroutines which determine whether one of the laws has been violated, never a crash of a subroutine itself: they are never at a loss as to what is going on, only what to do about it.

Actually, the problem is worse than that. Like many SF authors, Asimov seems to have assumed that AI was mostly an emergent property of assembling enough computational power in one place, but that decent UI was much harder. See, for instance, All the Troubles of the World, in which Multivac is clearly intelligent but you communicate with it more or less via teletype--not even via something as primitive as a text terminal:

Othman used the instrument on Gulliman's desk. His fingers punched out the question with deft strokes: "Multivac, what do you yourself want more than anything else?"

The moment between question and answer lengthened unbearably, but neither Othman nor Gulliman breathed.

And there was a clicking and a card popped out. It was a small card. On it, in precise letters, was the answer:

"I want to die."

Even in the robot novels, it's assumed that AI-type functionality is easier than good UI. See, for instance, Robbie, which features a clearly intelligent robot that understands speech but can't speak.

Hardware and Software
Probably the most interesting difference between Asimov computers/robots and real computers is how much is done in hardware. Modern computers are general purpose computing engines with most of the behavior controlled via software. Based on the fairly sketchy information in the robot stories, it appears that robots operate on a rather different principle.

Here's Asimov's description of a robot which observed the death of a human (from The Naked Sun):

A robot entered. It limped, one leg dragging. Baley wondered why and then shrugged. Even among the primitive robots on Earth, reactions to injury of the positronic paths were never obvious to the layman. A disrupted circuit might strike a leg's functioning, as here, and the fact would be most significant to a roboticist and completely meaningless to anyone else.

And an example of a robot given contradictory orders from The Robots of Dawn:

What was troubling the robot was what the roboticists called an equipotential of contradiction on the second level. Obedience was the Second Law and R. Geronimo was now suffering from two roughly equal and contradictory orders. Robot-block was what the general population called it or, more frequently, roblock for short.

Slowly the robot turned. Its original order was the stronger, but not by much, so that its voice was slurred. "Master, I was told you might say that. If so I was to say--" It paused, then added hoarsely, "I was to say--if you were alone."

For a moment, Baley played irritably with the notion of strengthening his own order and making the roblock more nearly complete, but that would surely cause the kind of damage that would require positronic analysis and reprogramming. The expense of that would be taken out of his salary and it might easily amount to a year's pay.

Asimov is of course never clear about how robots really work, but the above passages suggest two things. First, all the talk of robots' actions being chosen by "potentials" is meant to make you think of voltages, and of robots as a sort of analog computer as opposed to a modern digitally programmed one. Second, if a robot gets sufficiently confused, its brain can suffer physical damage. At another point in The Naked Sun we see a robot which has witnessed the death of a human rendered totally inoperable, beyond any repair. Runaround is another early story with a similar theme of apparent global malfunction caused by a specific conflict.
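To make the contrast concrete, here's a toy sketch (in Python; every name and threshold is invented for illustration, since Asimov never specifies the mechanism) of the "potential" model the passages suggest: each candidate action carries an analog potential, the robot simply takes the strongest one, and when two contradictory orders have nearly equal potential the result is a roblock rather than a clean, recoverable error.

```python
# Toy model of Asimov-style "potential"-based action selection.
# All names and numbers here are hypothetical illustrations.

ROBLOCK_THRESHOLD = 0.05  # potentials closer than this cause roblock


class Roblock(Exception):
    """An "equipotential of contradiction": two orders of roughly equal
    strength. In the stories this is not a clean software exception --
    it can leave the brain physically damaged."""


def choose_action(potentials):
    """Pick the action with the highest potential.

    potentials: dict mapping action name -> potential, a voltage-like
    analog quantity rather than a program instruction.
    """
    ranked = sorted(potentials.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best[1] - runner_up[1] < ROBLOCK_THRESHOLD:
        raise Roblock(f"{best[0]} vs {runner_up[0]}")
    return best[0]


# A clear winner: the robot obeys the stronger order.
print(choose_action({"obey_original_order": 0.9, "obey_baley": 0.6}))

# Two roughly equal contradictory orders: roblock.
try:
    choose_action({"obey_original_order": 0.70, "obey_baley": 0.68})
except Roblock as e:
    print("roblock:", e)
```

Note that in this sketch the failure is still just an exception; the point of the passages above is that in Asimov's robots the analogous failure is a physical event in the hardware.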

By contrast, while modern computers crash all the time, this virtually never causes any hardware damage, and nobody would expect that feeding a computer input it wasn't prepared to handle would somehow physically break it. You can reboot the system and it comes back online just fine. Even in the worst case, all you have to do is reinstall the software and everything is back to normal. If the computer crashes and the hardware is broken, the arrow of causation almost certainly points the other way. The reason, of course, is that most of the interesting stuff in a modern computer happens at the software level, so that's where crashes happen too. From the perspective of the hardware, a malfunctioning program looks much like a properly functioning one.
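A trivial sketch of that point (hypothetical function and inputs): unexpected input produces a software-level error, and the very next call works fine, because nothing below the software layer was touched.

```python
def parse_order(text):
    """Parse a hypothetical "action:strength" order string.

    Given input it wasn't prepared to handle, a real program raises a
    software-level error; the hardware is untouched.
    """
    action, strength = text.split(":")
    return action, float(strength)


try:
    parse_order("garbage with no colon")
except ValueError as e:
    print("crash:", e)

# The "reboot" costs nothing: the very next call works fine.
print(parse_order("obey:0.9"))
```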

That said, this inaccuracy isn't surprising. First, Asimov wrote these stories long before the age of modern software, so he wouldn't have had the appropriate sense of how computers work. In particular, the earliest robot stories were written in the 1940s, and while Turing completeness was known, it certainly hadn't entered the general consciousness--indeed, ENIAC had yet to be built--so it wouldn't have been obvious that the most flexible approach was to build a general-purpose computing platform and then run software on top of it. Second, the electrical-circuit metaphor is a lot more evocative, and of course having a crashed robot be physically destroyed is more interesting from a narrative perspective. That probably explains why Asimov continued to use the same basic model long after the pre-eminence of software in real computers had become apparent.


There's one story (can't remember the title, sorry) in which a robot is employed to draw charts for a scientific journal, using pencil and paper -- a slightly surprising failure to extrapolate the technology of the time, I remember thinking.

The AI/UI thing is mostly a result of the truly stupefying naivete that plagued the entire scientific-intellectual community of the era regarding AI. It appears that back then, people really believed that people's brains worked by making logical deductions. I guess the stereotype of scientists understanding abstract concepts but having no insight into people has some validity to it.

As for the hardware/software issue, Asimov may not have predicted the *current* state of technology, but the amount and sophistication of functionality that's migrated to hardware is increasing all the time. It's possible that future advances in software specification and verification would make it possible for the sophisticated software of a future robot to be committed to hardware as well--especially if robots ever become widely deployed commodity devices, as they appear to be in many of Asimov's works.

(Do I believe that'll happen? No, of course not, because I don't believe robots that are anything like what Asimov describes will ever be built, given his fundamental misconceptions about AI. Still, we should distinguish between faulty predictions based on fundamental misconceptions and those that appear so far to be erroneous, but might still be borne out in the future.)


I'm not sure I agree that the curve of stuff in hardware vs. software is as monotonic as you seem to be implying. In my experience, things move back and forth as functionality and hardware costs trade off (cf. winmodems, or Sutherland and the wheel of reincarnation).

eh? Come on, they're just stories; how on Earth could he have been able to predict anything that happened 20-50 years down the line? They are still fantastic stories. Any writer today who successfully predicts how technology will evolve in 50 years' time should be investing in stocks, not writing books!

Good read, thank you!

-- I wonder if in 50 years' time, somebody will write a piece on, er, this piece, stating that the author can't be blamed for his visions, 'cause bioprocessors were merely theoretical in his time, and that he couldn't have known that these biopros would result in devices much like those described by Asimov ;) --

As much as people like to think there is a huge difference between software and hardware, there really isn't. They are reasonably interchangeable, assuming a bare minimum of hardware exists. Assuming that a robot mind is built on "Society of Mind"-type principles, and that hardware will replace software where maximum speed is needed, it is quite likely that emotional circuits would be partially implemented in hardware and that they would be at least extremely persistent (EEPROM, for example), if not actually gradually migrated to custom circuit designs within the positronic brain.

I never got the impression that Asimov was trying to predict the technology. He said himself that he just picked the word "positronic" because it was something new & sounded good.

His stories were usually more about human nature (individually & collectively) than about technology.

In "Galley Slave", a robot that edits text sits reading the galleys (printed bound drafts) by sitting there and flipping the pages.

Dedicated intelligent computing devices make no appearance in the Asimov robot-powered universe, except for "That Thou Art Mindful of Him", which posits a possible robot takeover by having them assume other shapes such as animals and insects.
