Software performance homeostasis

I was talking to Allan Schiffman tonight and he observed that computers are something like two orders of magnitude faster than when he worked on Smalltalk. In fact, it's probably more than that: the computer I had in college was, I think, an 80286, which had a maximum clock speed of 25 MHz [and this in an era when Dell advertised a similar machine as "fast enough to burn the sand off a desert floor"]. I'm typing this on a 1.6 GHz Core Duo (yes, yes, I know clock speed isn't everything, but it's close enough for these purposes). Storage has improved even more: I remember paying $1000+ for a 1 GB hard drive, and now terabyte drives go for about $100. That's all great, but surely you've noticed that the end-to-end performance of systems hasn't improved anywhere near as much. In fact, the UI on my Air is distinctly less zippy than that of X11 systems circa 1995.

There are of course plenty of places to point the finger: GUI chrome, code bloat, more use of interpreted and translated languages like Java and Flash. And it's true that the systems just do a lot more than they used to. But those are all just symptoms. I suspect the underlying cause is something more akin to risk homeostasis: people respond to safety improvements by taking more risks, so their overall level of risk stays about constant. When engineers get more compute power, they spend less time worrying about how to make systems faster and a lot more time worrying about how to add more features, so the overall performance of the system stays somewhere in the "barely acceptable" range. Friends and I used to joke that engineers should be given old, slow machines to work on so that they would be incentivized to think about performance. I'm still not sure that's entirely crazy, though I must admit that it's a lot less fun to be an engineer under those conditions.

6 Comments

I believe at least one division of at least one large software company adopted precisely such a policy at one time. The associated product's commercial success suggests that the idea isn't "entirely crazy".

We had that policy at ParcPlace Systems (circa '88): porting engineers had slower computers than customers, less memory too. I imagine I may have told you about the policy when you complained about the speed of the computers at EIT in '94.

I think having the "porting engineers" use customer-like systems is the right way to go. Having said that, I would rather not see the "who's more important" pissing matches that would ensue between the two sets of engineers.

I'm not sure how homeo the performance is. While it's certainly much more level than the number of FLOPS, I contend there has been pretty substantial improvement. I remember the original 128K Mac and then a Mac SE with two floppies. My current machines are all dramatically more responsive.

I think you may be misremembering the "zippiness" of those circa 1995 systems. Have you tried one lately?

Yeah, but you now take things like streaming video for granted. I remember interviewing in the multimedia group in 1991, being shown a postage-stamp-sized talking head with synchronized audio, and being pretty much completely blown away. That machine was using 100% of its resources to do that. Your not-so-zippy machine is now expected to be able to stream cheesy '80s video full screen to your second monitor while you are editing text in the primary. (While indexing the drive, serving up some wiki pages, and animating a few GIFs in the banner ads on the web page you are looking up an API on.)
