Brooksworm, run!

Matthew Yglesias approvingly quotes David Brooks:
David Brooks really nails an important part of the internet experience:
Until that moment, I had thought that the magic of the information age was that it allowed us to know more, but then I realized the magic of the information age is that it allows us to know less. It provides us with external cognitive servants -- silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves.
Right. I had a weird experience on Monday of playing on a pub trivia team after not having done so for several years. Every time a question got asked that I didn't know the answer to, I felt this overwhelming urge to reach for my iPhone, a device I didn't have back in my earlier quizzing days. The idea of being limited to the information that was actually in my head was very distressing.

So, I started to write about how this is all really obvious and old news (I've seen people referring to their PDAs as external brains since long before they were even networked), but it's actually 6000-year-old news. The first major technology that let you offload substantial amounts of work previously done by your brain was writing. The second was mathematics.

Computer scientists like to talk about a memory hierarchy: a computer can have a lot of different kinds of storage: registers, onboard cache (on the chip), offboard cache (on the motherboard), main memory, hard drive (typically a cache plus the disk), online tape, archival tape, etc. The general principle is that the further away you get from the CPU, the larger the capacity but the longer the access time. So, performance is to a great extent limited by your ability to keep important data at hand. If the working set of your program is too large to be contained in the close/fast/small levels of the memory hierarchy, the CPU tends to idle while data is moved in and out of the memory levels.
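The working-set effect described above is easy to see in miniature. The sketch below simulates a single cache level with LRU eviction (the capacities and access patterns are made up for illustration; real hardware is far more complicated): when the set of items you cycle through fits in the cache, almost every access is a hit, and when it's even slightly too large, every access misses.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache to illustrate the working-set effect."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)          # mark as recently used
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            self.store[key] = True

# Working set that fits: cycling over 8 items with a 10-slot cache.
small = LRUCache(10)
for _ in range(100):
    for k in range(8):
        small.access(k)

# Working set that doesn't fit: cycling over 12 items with a 10-slot cache
# thrashes -- each access evicts something that's needed again soon.
big = LRUCache(10)
for _ in range(100):
    for k in range(12):
        big.access(k)

print(small.misses)  # 8: only the initial cold misses
print(big.misses)    # 1200: every single access misses
```

The cliff between the two cases is the point of Denning's working-set model: a tiny change in working-set size relative to cache size flips you from nearly all hits to all misses.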

Obviously, brains aren't constructed in this way, but you do have short and long term memory, and written words provide a form of ultra-long-term memory, as well as (of course) a way to communicate sections of your memory to others. One way to think about this second feature is that it's the ability to have things in your "memory" that you never actually learned. I.e., they never passed through your biological memory, you just look them up when you need them.

The disadvantage of this form of ultra-long-term memory, of course, is that it's unbelievably slow. I keep paper notebooks, but actually finding things in them can be quite difficult ("you can't grep dead trees"). An electronic memory is obviously much easier to find stuff in. The "remember stuff you never knew" feature is even worse: first you need to find the book you're looking for, then you need to find the right section of the book, then you need to actually digest the information. Only then is it in short-term memory and you actually know it. Compare this to long-term memory, where you just need a reminder and it all starts to swap back into short-term memory (though this can take quite some time).

So, the basic problem with paper memories is that the gap between them and the next step up in the memory hierarchy is just too large. One way to think of electronic memories is that they close that gap. At one level, that's great, but at another it still pretty much sucks. It's massively slower to find (let alone assimilate) things on the Internet than it is to remember them (assuming I actually can). What I really want is to have the information piped directly into my brain. We're a ways away from that, but if we ever get there, it will seem every bit the miracle that the Internet does now, and Google will look just as clunky as books do by comparison.

The above is all about memory, but there are actually at least four mental tasks you can outsource: memory, processing, input, and output. Our current technology lets us outsource all of these to some extent, but really it's quite far from what you'd like.

Required reading:
This sort of enhancement is one of Vernor Vinge's recurring themes. "Bookworm, Run!" is the first place I saw this kind of idea. "The Peace War" is very focused on outsourced processing. "Rainbows End" is a more complete vision of both the potential of this kind of technology and of the threats that come along with it.
P.J. Denning's "The Working Set Model for Program Behavior".

2 Comments

You forgot the outsourced "soul", such as the electric monk in Dirk Gently.

I was discussing this idea just the other day with some colleagues. I described it as having three tiers: short-term memory, long-term memory, and Wikipedia.

I have a concern that isn't really mentioned; I wonder if any of your sources cover it. To me, it seems like brain memory is used in ways that computer memory is not (yet, at least).

I suppose there is some cognitive query-response system whereby my conscious mind finds it needs to know, for instance, where Henry V of England defeated Charles VI of France in 1415, and through some system of address lookup and retrieval, the word Agincourt is delivered to my conscious mind.

Wikipedia just adds a few more steps, such as a page fault in brain memory, and the external lookup.
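The lookup-with-fallback the commenter describes can be sketched as a two-tier store: try the fast in-head memory first; on a miss (the "page fault"), consult the slow external source and cache the answer. Everything here (the dictionaries, the questions, the one-call "learning") is invented for illustration; nothing models real cognition.

```python
# Fast, tiny tier: things you actually know.
BRAIN = {"2 + 2": "4"}

# Slow, vast tier: the external reference.
WIKIPEDIA = {"2 + 2": "4",
             "where Henry V won in 1415": "Agincourt"}

def recall(question):
    if question in BRAIN:              # hit: answer just comes to mind
        return BRAIN[question]
    answer = WIKIPEDIA[question]       # "page fault": slow external lookup
    BRAIN[question] = answer           # cached: next time it's a brain hit
    return answer

print(recall("where Henry V won in 1415"))  # first call goes to Wikipedia
print(recall("where Henry V won in 1415"))  # second call is a brain hit
```

The commenter's worry, in these terms, is about what's lost when the second tier is consulted so cheaply that the caching step stops happening at all.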

But brain memory is used in more subtle ways, and I wonder if we are giving up too much if we allow too much of our knowledge to be 'outsourced', as you are calling it.

An important one that lies at the heart of the creative process is how knowledge is sometimes called to mind unrequested, perhaps because of some sort of subconscious pattern filter that recognized the relevance of some past experience and suggests it -- like a cognitive Clippy the Paperclip.

I'm sure we are all familiar with this phenomenon. I can imagine something similar might have happened to Alan Turing when he was thinking about the Halting Problem and suddenly remembered Cantor's proof that the real numbers are uncountable. This allowed him to adapt a diagonalization argument to show the halting problem undecidable.

If Alan Turing had seen Cantor's Proof and thought 'I won't spend too much time on this, because I can just look it up on wikipedia if I ever need it', then the insight would have been missed.
