April 2009 Archives

 

April 30, 2009

This is interesting news. McDonald, Hawkes, and Pieprzyk claim to have reduced the cost of finding SHA-1 collisions to about 2^{52} operations. As usual, don't panic: collisions are hard to exploit. However, it does imply that a wise CA would transition at least to randomized serial numbers, and that the SHA-256 transition is now more important.
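To put 2^{52} in perspective, here's a back-of-the-envelope sketch; the per-core hash rate and cluster size are assumptions picked purely for illustration, not measurements:

# Rough wall-clock time for collision searches at various work factors.
# Both constants below are assumptions chosen only for illustration.
RATE_PER_CORE = 2**23      # assumed SHA-1 compressions per second per core
CORES = 1000               # assumed size of the attacker's cluster

def years_on_cluster(bits):
    seconds = 2**bits / (RATE_PER_CORE * CORES)
    return seconds / (365 * 24 * 3600)

for bits in (52, 63, 80):
    print(f"2^{bits} compressions: ~{years_on_cluster(bits):.3g} years on {CORES} cores")

With those made-up numbers, 2^{52} is on the order of days for a well-resourced attacker while 2^{80} is still millions of years; that's the sense in which this matters even though collisions remain hard to exploit.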

I'm still trying to decipher this Schnorr presentation entitled "Average Time Fast SVP and CVP Algorithms: Factoring Integers in Polynomial Time". Presumably, if this led to a practical attack, Schnorr would have presented it differently, but I'd be interested to see an analysis of its impact, if any, from a real cryptographer.

 

April 28, 2009

Bruce Schneier links to this article about a plane between France and Mexico being diverted because a passenger on board was on the US no-fly list and the plane would have gone over the US. I agree with Bruce that the no-fly list is basically stupid, but once you accept its premises this strikes me as not entirely crazy. If your concern is that someone is going to hijack the plane and crash it into a building, then he doesn't even have to land to do that; he just has to get close enough to the target that it's hard to figure out what's going on and divert him in time. So, with that reasoning I can see why you would think it undesirable to even let him into US airspace. Moreover, it has the side benefit of letting TSA look like they're really trying hard to keep you safe, while (mostly) only inconveniencing foreigners. What's the downside from their perspective?
 

April 27, 2009

Dwallach pointed me at Shazam, an iPhone app which does a nice job of music identification. You start up the app, it listens for a sec, thinks for a while, and then tells you what you're listening to; not just the song but also the album, etc. I ran it on a not-so-random selection of songs from the radio and my CD collection (Ice Cube, Murder City Devils, Rolling Stones...) and it nailed all of them. Only objection is that it's fairly slow, but presumably that's a matter of CPU power... Anyway, pretty cool.
 

April 26, 2009

The appropriate cryptographic key length for any given application is a popular topic in the communications security/cryptography community. There's a fair amount of consensus on the resistance of any particular algorithm/key-length pair to the best known attacks.
  • To a first approximation, the strength of a symmetric encryption algorithm against the best known attacks is just the length of its key.
  • Currently, the strength of our standard digest algorithms (SHA-x) against preimage attacks is roughly the output length, and against collision attacks it's at most half the output length, and sometimes less (this is a topic of active research on SHA-1, where the best collision attacks currently stand at about 63 bits as opposed to the theoretically optimal 80 bits).
  • The situation is somewhat less clear for asymmetric algorithms: you can see keylength.com for details, but a 1024-bit RSA key is very approximately equivalent to an 80-bit symmetric key, and a 2048-bit RSA key to 100 bits or so. Elliptic curve keys of size 2n are approximately equivalent to symmetric keys of size n. (A rough sketch of these equivalences appears below.)
Again, this all assumes the best known current attacks.
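Here's the rough sketch promised above; it just encodes the approximations in the bullets plus the usual NIST-style figures (112 bits for RSA-2048, 128 for RSA-3072), so treat the numbers as ballpark estimates rather than anything exact:

# Approximate symmetric-equivalent strengths, per the rough rules above.
# Ballpark figures for the best known classical attacks only.
APPROX_EQUIVALENT_BITS = {
    ("RSA", 1024): 80,
    ("RSA", 2048): 112,     # "100 bits or so" above; NIST calls it 112
    ("RSA", 3072): 128,
    ("ECDSA", 160): 80,     # an ECC key of size 2n ~ n bits of symmetric strength
    ("ECDSA", 256): 128,
    ("AES", 128): 128,      # symmetric: strength ~ key length
    ("AES", 256): 256,
}

def symmetric_equivalent(algorithm, key_size):
    return APPROX_EQUIVALENT_BITS.get((algorithm, key_size))

print(symmetric_equivalent("RSA", 1024))    # 80
print(symmetric_equivalent("ECDSA", 256))   # 128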

There is also some consensus on the near-term lifespan of any set of parameters: keylength.com has a good summary here. What's a lot less clear is the long term. For instance, ECRYPT's recommendation is that 128-bit symmetric keys are fine for "Long term protection: Generic application-independent recommendation, protection from 2009 to 2038." NIST's recommendations aren't that different, though they recommend 192 bits for >> 2030 and 256 bits for >>> 2030, whatever that means. In general, guidelines for keys > 128 bits are somewhat vague. The basic problem is that it's very hard to imagine any circumstances in which a 128-bit symmetric key will become susceptible to brute-force attack with conventional computers. Hopefully Dan Simon will chime in here, but as I understand the situation, if we got quantum computers working they would potentially offer a significant improvement against the main public key algorithms and a potential square-root improvement against symmetric algorithms. Of course, we're nowhere near having a working quantum computer anything like this good.
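The quantum hand-waving above cashes out roughly like this (a sketch, assuming Grover's square-root speedup for symmetric keys and Shor's algorithm for the public-key systems):

# Grover's algorithm searches a 2^n keyspace in roughly 2^(n/2) steps,
# so a symmetric key's effective strength is about halved (in bits).
for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~2^{key_bits // 2} work for a quantum attacker")

# Shor's algorithm, by contrast, factors and takes discrete logs in
# polynomial time, so RSA/DH/ECC at any practical key size would fall
# outright rather than merely being weakened.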

Absent working quantum computers, the only plausible avenue of attack on any system with an effective key strength of 128 bits is an analytic attack of some kind. Each asymmetric algorithm is basically the same at any key length (the keys are just bigger), so you would expect any analytic attack at one key size (e.g., 1024 bits) to also be useful at another (e.g., 2048 bits). The relationship between AES-128, AES-192, and AES-256 is more complicated, but the underlying structure is the same, so you would expect an attack on AES-128 to have an impact on AES-256 as well. Beyond that, it's very hard to predict what form such attacks would take; otherwise those doing the predicting would be hard at work on their CRYPTO submissions. In particular, it's hard to predict what the relative strength of the algorithms would be after such an attack.

This isn't to say that AES-256 isn't likely to be stronger than AES-128—most likely it is, even if there are better analytic attacks, if for no other reason than it uses more rounds—but it's not really 2^{128} times stronger in any meaningful way. Either we don't have any attacks better than those we have now, in which case there's no practical attack and AES-256 is just super-duper secure (but infinity * 2^{128} = infinity), or there are better attacks, in which case all bets are off. Much the same is true for the asymmetric algorithms.

A related issue is "matching" the strength of the symmetric algorithms to the asymmetric algorithms. In protocols such as TLS, one usually uses an asymmetric algorithm (e.g., RSA) for key establishment and a symmetric algorithm (e.g., AES) for encryption. Some communities (the NSA especially) are very hot on choosing a set of algorithms with roughly equivalent nominal strengths (within the assumptions listed above). This can be quite problematic, however, because of the quantization of the symmetric algorithms. For instance, it's very natural to use AES-128, but if you really only care about having 80 bits of equivalent security, it's not at all clear why the fact that you're using AES-128 should drive you to use 3000 bit RSA keys; it's just that the minimum standard symmetric algorithm is AES-128. Similarly, if you're worried about cryptanalysis of AES so you've decided to use AES-256 it's not clear that that implies that you ought to be using an equivalent RSA key size (15000 bits!). Flipping things around, if you're worried about quantum computers, then the improvement for symmetric algorithms isn't the same as for asymmetric algorithms, so it's not clear why it would make sense to just dial up the strengths in parallel under either theory.

 
Turns out that REI carries the Vibram FiveFingers (yes, I know you can order them from Vibram, but I have unusually shaped feet so I'd like to try them on before committing). I stopped by the local store and tried on the 45s (USA 11 1/4), but they're too big even though I normally wear 12s. Unfortunately, 45 was all they had in stock, but I ordered a pair of 44s. Will update when I've had a chance to try them.
 

April 25, 2009

I've been waiting for years for someone to produce a decent retail level heads-up display. Maybe they'll turn out to be useless at the end of the day, but it seems like there are some cool potential applications. Anyway, the NYT has an article on some new technologies:
"People who work on head-mounted displays are hungering for something that people would be willing to wear for more than an hour," he said, "something that would go in one's eyeglasses and not be too much clunkier than regular eyeglasses."

No price has been set for the SBG eyeglasses, which are still in the prototype stage, said Jonathan Waldern, the company's founder and chief technology officer. SBG is concentrating on military and avionics applications, with consumer uses to follow.

...

Contact lenses are also being developed for mobile displays. Babak A. Parviz, an associate professor of electrical engineering, with his team at the University of Washington in Seattle, has created a biocompatible contact lens that has miniaturized electronics and optoelectronics integrated into the lens.

It's cool to see some new vendors enter the market, but really I'm more interested to see this kind of technology making it into the NYT. If this represents some real public interest, then we might see some actual consumer-level products.

 

April 23, 2009

OK, so this is pretty clever:
Biological molecules exhibit homochirality and are optically active. Therefore, it is possible that the scattering of light by biological molecules might result in a macroscopic signature in the form of circular polarization. If this is the case, then circular polarization spectroscopy, which may be utilized in remote sensing, can offer a powerful indicator of the presence of a universal biosignature, namely homochirality. Here, we describe laboratory experiments designed to investigate this idea. We focus on photosynthetic microorganisms, and also show results from macroscopic vegetation and control minerals. In the microorganisms, we find unambiguous circular polarization associated with electronic absorption bands of the photosynthetic apparatus. Macroscopic vegetation yields a stronger and more complex signature while the control minerals produce low-levels of circular polarization unrelated to their spectra. We propose a heuristic explanation of our results, which is that the polarization is produced by circular dichroism in the material after the light has undergone its last scattering event. The results are encouraging for the use of circular polarization spectroscopy in remote sensing of a generic biomarker from space or the ground.

Writeup here. I don't know if it will work, but clever...

 

April 21, 2009

This Boston Globe article covers the growing interest in minimalist running shoes:
That's right. Running shoes are a failed experiment. After nearly four decades of technological gimmicks and outrageous prices, they simply do not perform the function that's their only reason for existence -- protecting your feet. You can now buy running shoes with steel bedsprings embedded in the soles or with microchips that adjust the cushioning, but the injury rate hasn't decreased in almost 40 years. It's actually inched up; Achilles' tendon problems have risen by 10 percent since the '70s.

...

So how do the Tarahumara, running in shoes that barely qualify as shoes, do it? Three years ago, I trekked into the Copper Canyons of Mexico in search of the secret. And once I learned how to run barefoot-style -- landing on the balls of the feet, while keeping my feet directly under my hips -- like the Tarahumara, my ailments suddenly disappeared. Plantar fasciitis, Achilles tendinitis, sore knees -- all gone. Today, I wear something similar to a rubber glove for the foot (it has the thinnest of soles to guard against abrasions), and I haven't looked back.

...

But the unmistakable fact is that there's a trend across the shoe industry toward creating more "minimal" shoes -- those intended to duplicate the experience of, you guessed it, running barefoot. Still, those models just aren't simple enough

I'm not sure I buy this set of arguments, especially the one about injury rates. I hate to keep promulgating homeostasis theories, but I think there's some relevance here. Certainly, if you're an elite athlete you're going to train as hard as you can, which basically means that you keep ramping up the intensity and volume until either (1) you run out of time to train, (2) you get injured, or (3) you get so overtrained you can't keep jacking up the training level. So, even if you suddenly had better injury prevention technology, you wouldn't expect the injury rate to change that much. I don't have any empirical evidence here, but if you compare performances in the 70s to performances today, they've improved really dramatically: even in the marathon, which was very competitive in the 70s, the WR has come down by almost 5 minutes (~4%) since 1969. If you look at something like the Ironman, times have come down over an hour (more than 10%) since the 1980s. I suspect a lot of that improvement is that people are training harder. I don't know to what extent, if any, this is pulling the average person's training load up.

This isn't to say that I'm convinced that modern running shoes improve the situation: the Inov-8 trail shoes I've been running in lately are deliberately unstructured, and even just wearing Injinjis rather than ordinary socks makes your feet feel more flexible and less constrained. I'm certainly enjoying training in them, and my long-term ankle injuries seem better. On the other hand, the one time I tried running a significant distance on the road in them, I started to worry about how much impact I was putting into my legs. After all, we didn't evolve to run on asphalt either. Lately I've also been trying out Sanuks, which are basically sandals attached to a soft nylon shoe upper. As with the Inov-8s, the theory seems to be that they're not really going to support your foot. There does seem to be some kind of effect from having this little support: the first few days my legs hurt, but then I adapted. I don't have any reason to believe it's for the better, though. At some point I'll need to check out the research on this topic.

What I'm really interested in trying, though, is the new Vibram FiveFingers shoes, which are basically a rubber-soled sock. It's pretty clear you need to try them on to get the fit right, but Zombie Runner claims they're getting them soon so hopefully I'll be able to try them and report back.

 

April 20, 2009

Look, if John Boehner wants to believe that global warming isn't happening, or isn't bad, or whatever, then fine. But can we at least be spared this kind of stupidity:
STEPHANOPOULOS: So what is the responsible way? That's my question. What is the Republican plan to deal with carbon emissions, which every major scientific organization has said is contributing to climate change?

BOEHNER: George, the idea that carbon dioxide is a carcinogen that is harmful to our environment is almost comical. Every time we exhale, we exhale carbon dioxide. Every cow in the world, you know, when they do what they do, you've got more carbon dioxide. And so I think it's clear...

OK, so this is, as Wolfgang Pauli is supposed to have said, "not even wrong." First, nobody is claiming that CO2 is a carcinogen. The reason people want to reduce CO2 emissions isn't that they give you cancer; it's that CO2 causes global warming. So, the fact that you exhale it hardly leads to the conclusion that it's somehow a great idea to radically increase the CO2 content of the atmosphere.

Even if the reason to restrict CO2 were that it was bad for humans rather than the environment (like, say, mercury), this wouldn't follow. Why do you think you're exhaling CO2 in the first place? It's a waste product of aerobic respiration (look up the Krebs cycle). Boehner's argument is like suggesting that feces isn't bad for you because you emit it regularly, as do cows, etc., but I'm assuming he'd like to minimize his feces consumption.

Interestingly, while CO2 is a waste product, it's not acutely toxic at anything like normal concentrations. You wouldn't want to breathe an all-CO2 atmosphere, but CO2 is what stimulates the breathing reflex. Oxygen, on the other hand, is fairly toxic once you get too far above its normal partial pressure in the atmosphere.

 
Two papers: one I've posted before, now revised and submitted, and one totally new:

On the Security of Election Audits with Low Entropy Randomness
Eric Rescorla
ekr@rtfm.com

Secure election audits require some method of randomly selecting the units to be audited. Because physical methods such as dice rolling or lottery-style ping pong ball selection are inefficient when a large number of audit units must be selected, some authors have proposed to stretch physical methods by using them to seed randomness tables or random number generators. We analyze the security of these methods when the amount of input entropy is low under the assumption that the attacker can choose the audit units to attack. Our results indicate that under these conditions audits do not necessarily provide the detection probability implied by the standard statistics. This effect is most pronounced for randomness tables, where significantly more units must be audited in order to achieve the detection probability that would be expected if the audit units were selected by a truly random process.

PDF

@Misc{rescorla-audit-entropy-2009,
  author = 	 {Eric Rescorla},
  title = 	 {{On the Security of Election Audits with Low Entropy Randomness}},
  howpublished = {In submission},
  month = 	 {April},
  year = 	 2009,
  note = 	 {\url{http://www.rtfm.com/audit-entropy.pdf}}}

Understanding the Security Properties of Ballot-Based Verification Techniques
Eric Rescorla
ekr@rtfm.com

As interest in the concept of verifiable elections has increased, so has interest in a variety of ballot-oriented mechanisms that offer the potential of more efficient verification than traditional precinct- or machine-level audits. Unfortunately, threat analysis of these methods has lagged their design and in some cases implementation. This makes it difficult for policy makers to assess the merits and applicability of these techniques. This paper provides a fairly non-technical description of the security threats facing these systems with the intent of informing deployment decisions.

PDF

@Misc{rescorla-bba-threat-2009,
  author = 	 {Eric Rescorla},
  title = 	 {{Understanding the Security Properties of Ballot-Based Verification Techniques}},
  howpublished = {In submission},
  month = 	 {April},
  year = 	 2009,
  note = 	 {\url{http://www.rtfm.com/bba-threat.pdf}}}
 

April 18, 2009

It's conference submission time (EVT/WOTE 2009) and along with conference submission time comes its friend, fighting-with-LaTeX time. The big problems I usually have are avoiding bad breaks and convincing LaTeX's broken float algorithm to put my figures (I like figures) where I want them instead of three pages later. Anyway, I recently ran into a problem (on a friend's paper, not my own) with a long author list. What we wanted was an author list with a separate affiliation list and then a footnoted contact address, like so (click to see a PDF):

LaTeX's built-in \author mode is pretty lame, but the authblk package lets you use "author block" mode, with separate author names and affiliations and footnote-style superscripted numbers to connect the two. After \usepackage{authblk} in the preamble, the code you want is:

\author[1,2]{Charles Kinbote}
\author[1]{John Shade}
\author[1]{Charles Xavier Vseslav}
\author[3]{Humbert Humbert}
\author[4]{Clare Quilty}


\affil[1]{Kingdom of Zembla}
\affil[2]{Wordsmith College}
\affil[3]{Independent}
\affil[4]{Beardsley Women's College}

But this is only a partial solution because it doesn't give you the footnote with the author's address. If you're willing to have the author's address attached to the affiliation block, you can just do a separate affiliation that contains the email address of the author:

\author[1,2,*]{Charles Kinbote}
\author[1]{John Shade}
\author[1]{Charles Xavier Vseslav}
\author[3]{Humbert Humbert}
\author[4]{Clare Quilty}


\affil[1]{Kingdom of Zembla}
\affil[2]{Wordsmith College}
\affil[3]{Independent}
\affil[4]{Beardsley Women's College}
\affil[*]{To whom correspondence should be addressed. Email: \url{kinbote@example.com}}

This does work, but it looks pretty terrible. You can attach a footnote directly to the author's name, but this isn't quite what you want either, for two reasons. First, the asterisk shows up after the name, before the superscripted affiliation numbers, when you really want it after them. Second, it's on the baseline of the affiliation numbers, when you really want it aligned with the top of the numbers.

What you need is a combination strategy: you use the fake affiliation with an asterisk, but don't provide an \affil block. This just creates a bare asterisk superscript, but no footnote. To create the footnote, you need to use \footnotetext. Unfortunately, if you just use \footnotetext, you end up with a numeric marker attached to the footnote text at the bottom of the page. What you want is an asterisk. To get this to work, you need to override the footnote style with \renewcommand{\thefootnote}{\fnsymbol{footnote}}, and then reset it so that you get numeric footnotes elsewhere:

\let\oldthefootnote\thefootnote
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{To whom correspondence should be addressed. Email: \url{kinbote@example.com}}
\let\thefootnote\oldthefootnote

Putting it all together:


% Preamble assumed for a runnable example: article class plus authblk and url
\documentclass{article}
\usepackage{authblk}
\usepackage{url}
\title{Your Title Here}  % placeholder title so that \maketitle works

\author[1,2,*]{Charles Kinbote}
\author[1]{John Shade}
\author[1]{Charles Xavier Vseslav}
\author[3]{Humbert Humbert}
\author[4]{Clare Quilty}

\affil[1]{Kingdom of Zembla}
\affil[2]{Wordsmith College}
\affil[3]{Independent}
\affil[4]{Beardsley Women's College}

\pagestyle{empty}

\begin{document}
\maketitle
\thispagestyle{empty}

% Switch to symbolic footnote marks just for the contact footnote...
\let\oldthefootnote\thefootnote
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{To whom correspondence should be addressed. Email: \url{kinbote@example.com}}
% ...then switch back so later footnotes are numbered as usual.
\let\thefootnote\oldthefootnote

% Body text goes here.
\end{document}

Have fun.

Acknowledgement: Body text from the Lorem Ipsum Generator.

 

April 16, 2009

In some effort to demonstrate maximal fogeyness, George Will has decided to devote today's column inches to (and I'm not making this up) denouncing blue jeans:
It is, he says, a manifestation of "the modern trend toward undifferentiated dressing, in which we all strive to look equally shabby." Denim reflects "our most nostalgic and destructive agrarian longings -- the ones that prompted all those exurban McMansions now sliding off their manicured lawns and into foreclosure." Jeans come prewashed and acid-treated to make them look like what they are not -- authentic work clothes for horny-handed sons of toil and the soil. Denim on the bourgeoisie is, Akst says, the wardrobe equivalent of driving a Hummer to a Whole Foods store -- discordant.

So, let's start with this: the rap against Hummers (and SUVs in general) is that they're conspicuous consumption: they're expensive, consume a lot of gas, accelerate slowly, and handle badly. So, unless you're doing a lot of driving offroad or through downtown Mogadishu, a Hummer isn't a particularly good choice on its own merits. What it is good for, however, is signalling that you have a lot of money. Note that this argument applies to some extent to sports cars, but at least a fast, good-handling car is fun to drive even if you're not doing a lot of speeding. None of this applies to jeans, which are generally fairly cheap: you certainly can buy expensive jeans, but you can also buy cheap jeans (I generally wear 501s, which run about $30-40). Obviously, you can pay a couple of hundred dollars for jeans, but outside of sweats they're about the cheapest pants you can buy, so the conspicuous consumption angle doesn't really hold water.

Really, Will has this exactly backwards: jeans are cheap, comfortable, and durable. Even more than durable, they wear well: you can wear jeans long after the point where the level of wear would require you to discard (say) khakis. In other words they're practical clothing. It's people who wear (for instance) suits, who are paying more to make a statement about their wealth and taste and getting a less functional garment.

Long ago, when James Dean and Marlon Brando wore it, denim was, Akst says, "a symbol of youthful defiance." Today, Silicon Valley billionaires are rebels without causes beyond poses, wearing jeans when introducing new products. Akst's summa contra denim is grand as far as it goes, but it only scratches the surface of this blight on Americans' surfaces. Denim is the infantile uniform of a nation in which entertainment frequently features childlike adults ("Seinfeld," "Two and a Half Men") and cartoons for adults ("King of the Hill"). Seventy-five percent of American "gamers" -- people who play video games -- are older than 18 and nevertheless are allowed to vote. In their undifferentiated dress, children and their childish parents become undifferentiated audiences for juvenilized movies (the six -- so far -- "Batman" adventures and "Indiana Jones and the Credit-Default Swaps," coming soon to a cineplex near you). Denim is the clerical vestment for the priesthood of all believers in democracy's catechism of leveling -- thou shalt not dress better than society's most slovenly. To do so would be to commit the sin of lookism -- of believing that appearance matters. That heresy leads to denying the universal appropriateness of everything, and then to the elitist assertion that there is good and bad taste.

I'm tempted to let this little bit of fuddy-duddyism (Seinfeld! Video games! Get off my lawn you damn kids!) stand on its own, but no, let's take it seriously. Will's argument here, such as it is, appears to be that jeans represent a denial of the concept of taste, but let's go back to the beginning of the screed where he complains that people buy jeans which are acid-washed, distressed, etc.—in other words they're not just pulling whatever crap they can find off the shelves, they're exercising, you guessed it, taste. Moreover, I'm pretty certain that if you're the kind of person who buys high-end denim you actually have a pretty good idea of what constitutes good taste in denim and what doesn't, let alone good taste in clothes generally. [For more than you ever wanted to know about high-end denim, check out Style Forum (link from Hovav Shacham).] The lesson here is simple: just because Will can't distinguish stylish denim from non-stylish denim (any more than I can distinguish a high-end tie from a low-end one) doesn't mean there isn't a difference.

Denim is the carefully calculated costume of people eager to communicate indifference to appearances. But the appearances that people choose to present in public are cues from which we make inferences about their maturity and respect for those to whom they are presenting themselves.

At this point, things have gone pretty far off the rails: jeans are simultaneously a calculated costume and yet their wearers don't believe in taste. Huh? Even if we ignore my point above about how jeans actually do embody quite a few taste cues, the wearing of jeans only makes sense as a fashion cue if it's embedded in a cultural matrix (I've always wanted to say that) in which people are supposed to dress a certain way to signal good taste and maturity—it just subverts that sense of taste by preferring something that's nominally worn by the working class. If Will wants to complain that people who wear jeans are thumbing their nose at the man, fine, but that's a totally different complaint than that people who wear jeans are denying that such norms exist.

Do not blame Levi Strauss for the misuse of Levi's. When the Gold Rush began, Strauss moved to San Francisco planning to sell strong fabric for the 49ers' tents and wagon covers. Eventually, however, he made tough pants, reinforced by copper rivets, for the tough men who knelt on the muddy, stony banks of Northern California creeks, panning for gold. Today it is silly for Americans whose closest approximation of physical labor consists of loading their bags of clubs into golf carts to go around in public dressed for driving steers up the Chisholm Trail to the railhead in Abilene.

This sentence is actually unfortunate, since there is a much richer target than jeans, which is to say outdoor wear. Hang out in your average Starbucks and you'll see plenty of people wearing Mountain Hardwear or Patagonia gear you could probably use to scale the North Face of the Eiger (I own jackets from both MH and Patagonia, so I'm not exactly immune from this criticism), which is a bit overkill in terms of keeping you warm at the refrigerator in front of the counter.1

This is not complicated. For men, sartorial good taste can be reduced to one rule: If Fred Astaire would not have worn it, don't wear it. For women, substitute Grace Kelly.

And now we come to Will's true (though absurd) objection: his sense of men's style was frozen sometime in the 1950s and 1960s and he resents that times have changed. Except that he's just making this up, because I'm pretty sure that Fred Astaire would not have worn this visor:

Moreover, I can't help myself: why Fred Astaire? Why not, say, Beau Brummell, or maybe Alcibiades? The answer, of course, is that George Will was born in 1941, so Fred Astaire lines up with his formative years. If he'd been born in 1841 he'd be complaining about how kids didn't wear frock coats any more. Nothing wrong with having your taste set in a certain era—it happens to everyone eventually—but acting as if your particular taste is universal law does make you look fairly silly.

1. Yeah, yeah, I ripped this off from Clueless.

UPDATE: Young fogey Hovav Shacham informs me that I needed to change morning coat to frock coat.

 

April 13, 2009

I went through my mail today and discovered that I'm a proud recipient of the American Community Survey. What's that, you ask? Well, it turns out that for the 2010 census the Census Bureau has decided to switch things up a bit. I'll let them tell it:
In the past, most households received a short-form questionnaire, while one household in six received a long form that contained additional questions and provided more detailed socioeconomic information about the population.

The 2010 Census will be a short-form only census and will count all residents living in the United States as well as ask for name, sex, age, date of birth, race, ethnicity, relationship and housing tenure - taking just minutes to complete.

The more detailed socioeconomic information is now collected through the American Community Survey. The survey provides current data about your community every year, rather than once every 10 years. It is sent to a small percentage of the population on a rotating basis throughout the decade. No household will receive the survey more often than once every five years.

More detailed is right. The ACS is 27 fricking pages and has something like 54 distinct questions for every single person in the household, plus a bunch of questions about the household in general, including such puzzlers as: "13. What is this person's ancestry or ethnic origin?" (Mrs. G suggests that we're all originally from Africa, so I should just write African); "45. What kind of work was this person doing?", which I can't distinguish from "46. What were this person's most important activities and duties."; and "18. What is the annual payment for fire, hazard, and flood insurance on THIS property?" (had to look that one up).

Oh, did I mention that compliance with this puppy is required by law? The fine for failing to return it appears to only be $100, but the fine for false statements is $500. Seems like fun for the whole family.

 

April 12, 2009

I was talking to Allan Schiffman tonight and he observed that computers are something like two orders of magnitude faster than when he worked on Smalltalk. In fact, it's probably more than that: the computer I had in college was, I think, an 80286, which had a maximum clock speed of 25 MHz [and this in an era where Dell advertised a similar machine as "fast enough to burn the sand off a desert floor"]. I'm typing this on a 1.6 GHz Core Duo (yes, yes, I know clock speed isn't everything, but it's close enough for these purposes). Storage has improved even more: I remember paying $1000+ for a 1GB hard drive and now terabyte drives go for about $100. That's all great, but surely you've noticed that the end-to-end performance of systems hasn't improved anywhere near as much. In fact, the UI on my Air is distinctly less zippy than that of X11 systems circa 1995.

There are of course plenty of places to point the finger: GUI chrome, code bloat, more use of interpreted and translated languages like Java and Flash. And it's true that the systems just do a lot more than they used to. But those are all just symptoms. I suspect the underlying cause is something more akin to risk homeostasis: when engineers get more compute power, they spend less time worrying about how to make systems faster and a lot more time worrying about how to add more features, so the overall performance of the system stays somewhere in the "barely acceptable" range. Friends and I used to joke that engineers should be given old, slow machines to work on so that they would be incentivized to think about performance. I'm still not sure that's entirely crazy, though I must admit that it's a lot less fun to be an engineer under those conditions.

 

April 11, 2009

Julian Sanchez's post about the difficulty of evaluating technical arguments has been circulating fairly widely. In the middle is a somewhat strained analogy to cryptography:
Sometimes, of course, the arguments are such that the specialists can develop and summarize them to the point that an intelligent layman can evaluate them. But often--and I feel pretty sure here--that's just not the case. Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic. A specialist would surely see through it, but in an argument between us, the lay observer wouldn't necessarily be able to tell which of us really had the better case on the basis of the arguments alone--at least not without putting in the time to become something of a specialist himself. Actually, I have a possible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what's true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience.

Come to think of it, there's a certain class of rhetoric I'm going to call the "one way hash" argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of "fingerprint" for messages based on the same mathematical idea: It's really easy to run the algorithm in one direction, but much harder and more time consuming to undo. Certain bad arguments work the same way--skim online debates between biologists and earnest ID afficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it's both intelligible--even somewhat intuitive--to the layman and sounds as though it might qualify as some kind of insight. (If it seems too obvious, perhaps paradoxically, we'll tend to assume everyone on the other side thought of it themselves and had some good reason to reject it.) The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it's really possible to explain why the talking point is wrong. So the setup is "snappy, intuitively appealing argument without obvious problems" vs. "rebuttal I probably don't have time to read, let alone analyze closely."

Unfortunately, Sanchez has the cryptography pretty much wrong. He's confused two totally separate cryptographic concepts: public key cryptography and one-way hashes. Some PKC (but not all) involves multiplying large prime numbers. Hash functions (with the exception of VSH, which is impractically slow and not in wide use) don't involve prime numbers at all. Neither does symmetric encryption, which is what you actually use to encrypt data (PKC is used primarily for key exchange and authentication/signatures). Now, it's true that prime multiplication is indeed a one-way function (or at least we hope it is) and hash functions are intended to be as well, but other than that, there's not much of a connection.1 That said, I've seen this post referenced in several places, and with the exception of Paul Hoffman, who pointed it out to me, few seem to have noticed that, well, it's horseshit. I suppose this is an argument in favor of Sanchez's thesis.

1. Hash functions are actually one-way in an important sense that prime number multiplication is not: any given integer has only one factorization, so with enough computational effort it's always possible to reconstruct the original primes. By contrast, many messages hash to any single value, so given a hash value and no other information, it's not possible to determine which of the possible input messages was fed in.
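As a concrete illustration of the one-way/many-to-one point, here's a small Python sketch: computing SHA-1 forward is instant, while even a badly truncated 24-bit version of the output already needs a brute-force (birthday) search to collide. Nothing here should be read as an attack on the full function.

import hashlib
from itertools import count

# Forward direction: easy.
print(hashlib.sha1(b"attack at dawn").hexdigest())

# Many-to-one: brute-force a collision on a 24-bit truncation of SHA-1.
# Expect roughly 2^12 attempts (birthday bound); the same search on the
# full 160-bit output would take around 2^80, and finding a preimage of
# a given value around 2^160.
def truncated_sha1(msg, nbytes=3):
    return hashlib.sha1(msg).digest()[:nbytes]

seen = {}
for i in count():
    msg = str(i).encode()
    tag = truncated_sha1(msg)
    if tag in seen:
        print(f"collision after {i} tries: {seen[tag]!r} vs {msg!r}")
        break
    seen[tag] = msg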

 

April 9, 2009

I have a simple orange juice purchasing strategy: I go to Safeway and buy whichever of the major brands (Tropicana, Minute Maid, Florida's Natural) is on sale. No doubt people with better taste than me can distinguish these, but I don't much care. Anyway, I was in Safeway the other day and nearly missed the (on sale) Tropicana because they had changed the packaging and I initially mistook it for a generic brand.

Helpful comparison photo here

Turns out I wasn't the only person who wasn't impressed:

Tropicana's previous design, with orange and straw, will soon be brought back. The PepsiCo Americas Beverages division of PepsiCo is bowing to public demand and scrapping the changes made to a flagship product, Tropicana Pure Premium orange juice. Redesigned packaging that was introduced in early January is being discontinued, executives plan to announce on Monday, and the previous version will be brought back in the next month.

...

The about-face comes after consumers complained about the makeover in letters, e-mail messages and telephone calls and clamored for a return of the original look.

Some of those commenting described the new packaging as "ugly" or "stupid," and resembling "a generic bargain brand" or a "store brand." "Do any of these package-design people actually shop for orange juice?" the writer of one e-mail message asked rhetorically. "Because I do, and the new cartons stink."

The juice tasted fine, though.

 

April 8, 2009

The WSJ reports that there has been significant penetration of the US power grid and other infrastructure networks:
The spies came from China, Russia and other countries, these officials said, and were believed to be on a mission to navigate the U.S. electrical system and its controls. The intruders haven't sought to damage the power grid or other key infrastructure, but officials warned they could try during a crisis or war.

"The Chinese have attempted to map our infrastructure, such as the electrical grid," said a senior intelligence official. "So have the Russians."

...

The U.S. electrical grid comprises three separate electric networks, covering the East, the West and Texas. Each includes many thousands of miles of transmission lines, power plants and substations. The flow of power is controlled by local utilities or regional transmission organizations. The growing reliance of utilities on Internet-based communication has increased the vulnerability of control systems to spies and hackers, according to government reports.

So, obviously this is bad.

The first question you should be asking at this point is why these infrastructure systems are connected to the Internet at all. Protecting a computer in an environment where the attacker is allowed to transmit arbitrary traffic to it is an extremely difficult problem. I'm not sure that anyone I know would feel comfortable guaranteeing that they could secure a computer under conditions of concerted attack by a dedicated attacker. This doesn't mean that nobody should ever connect their computer to the Internet. After all, it's not like the entire resources of some national intelligence agency are going to be trained on the server where your blog is hosted. But the situation is different with things like the electrical grid, which are attractive national-scale attack targets. [And rumor in the system security community is that these targets are not that well secured.]

It's natural to set up a totally closed network with separate cables, fiber, etc., but I'm not sure how much that actually helps. If you're going to connect geographically distributed sites, that's a lot of cable to protect, so you need to worry about attackers cutting into it at some point in the middle of nowhere and injecting traffic there. The next step is to use crypto: if you have point-to-point links, then you can use simple key management between them, and it's relatively simple to build hardware-based link encryptors which reject any traffic that wasn't protected with the correct key. Obviously you still need to worry about subversion of the encryptors, but they're a much harder attack target than a general-purpose computer running some sort of crypto or firewall or whatever.
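Here's a toy sketch of the reject-anything-without-the-right-key idea, using a pre-shared key and an HMAC per frame; a real link encryptor would also encrypt, handle replay protection, and live in dedicated hardware, so treat this purely as an illustration of the concept:

import hmac, hashlib, os

LINK_KEY = os.urandom(32)   # pre-shared key provisioned on both link devices
TAG_LEN = 32                # length of a SHA-256 HMAC tag

def protect(frame):
    """Append an authentication tag computed under the link key."""
    return frame + hmac.new(LINK_KEY, frame, hashlib.sha256).digest()

def accept(wire):
    """Return the frame if the tag verifies; drop everything else."""
    frame, tag = wire[:-TAG_LEN], wire[-TAG_LEN:]
    expected = hmac.new(LINK_KEY, frame, hashlib.sha256).digest()
    return frame if hmac.compare_digest(tag, expected) else None

print(accept(protect(b"control: open breaker 7")))               # accepted
print(accept(b"traffic injected mid-span" + b"\x00" * TAG_LEN))  # None: rejected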

Unfortunately, this is only a partial answer because you still have to worry about what happens if one end of the link gets compromised. At that point, the attacker can direct traffic to the other end of the link, so we're back to the same problem of securing the end systems, but at least the attack surface is a lot smaller because someone first has to get into one of the systems. So, you need some kind of defense in depth where the end systems are hardened behind the link devices.

Ideally, of course, you wouldn't network these systems at all, but I suspect that's pretty much a nonstarter: the grid is pretty interdependent and the control networks probably need to be as well. Most likely the best we can do here is to have as many air gaps and choke points as possible, to make it hard to get into the system in the first place and then hard for malware to spread.

P.S. It's not a computer security issue per se, but it's worth observing that the electrical grids have non-computational cascading failure modes. See, for instance, the Wikipedia article on the 2003 blackout. This implies that even if you have excellent informational isolation, you still need to worry about localized informational attacks leading to large scale failures by propagation through the grid rather than propagation through the computer network.

 

April 6, 2009

From Section 6 of the Cyber security bill:

(3) SOFTWARE SECURITY.--The Institute shall establish standards for measuring the software security using a prioritized list of software weaknesses known to lead to exploited and exploitable vulnerabilities. The Institute will also establish a separate set of such standards for measuring security in embedded software such as that found in industrial control systems.

Now, not to say that this is totally impossible, but it's not like it's a straightforward matter of standardization like defining a set of screw thread gauges. The problem here is that we don't have a meaningful model for the severity of security vulnerabilities, CVSS notwithstanding, let alone for the probability that they will be exploited. Quoting myself:

I certainly agree that it's useful to have a common nomenclature and system for describing the characteristics of any individual vulnerability, but I'm fairly skeptical of the value of the CVSS aggregation formula. In general, it's pretty straightforward to determine linear values for each individual axis, and all other things being equal, if you have a vulnerability A which is worse on axis X than vulnerability B, then A is worse than B. However, this only gives you a partial ordering of vulnerability severity. In order to get a complete ordering, you need some kind of model for overall severity. Building this kind of model requires some pretty serious econometrics.

CVSS does have a formula which gives you a complete ordering but the paper doesn't contain any real explanation for where that formula comes from. The weighting factors are pretty obviously anchor points (.25, .333, .5) so I'm guessing they were chosen by hand rather than by some kind of regression model. It's not clear, at least to me, why one would want this particular formula and weighting factors rather than some other ad hoc aggregation function or just someone's subjective assessment.

Even if we assume that something like CVSS works, we just have the same problem writ large. Say we have two systems, one with three vulnerabilities ranked moderate, and another with one vulnerability ranked severe. Which system is more secure? I don't even know how to go about answering this question without a massive amount of research. We don't even know how to answer the question of the probability of a single vulnerability being exploited, let alone the probability that a system with some vulnerability profile will be exploited.
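To make the partial-versus-complete ordering problem concrete, here's a small sketch: a dominance check (worse on every axis) gives only a partial order, while a weighted score of the kind NIST would be forced to invent gives a total order, but only by picking weights out of the air. The axes, weights, and scores here are made up for illustration.

# Each vulnerability is scored on a few axes (bigger = worse).
AXES = ("exploitability", "impact", "exposure")
WEIGHTS = (0.5, 0.333, 0.25)        # arbitrary anchor-point weights

def dominates(a, b):
    """Partial order: a is at least as bad as b on every axis."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def ad_hoc_score(vuln):
    """Forced total order via an arbitrary weighted sum."""
    return sum(w * x for w, x in zip(WEIGHTS, vuln))

one_severe     = [(9, 9, 3)]                           # system A
three_moderate = [(5, 4, 6), (4, 5, 5), (6, 3, 4)]     # system B

a, b = one_severe[0], three_moderate[0]
print(dominates(a, b), dominates(b, a))        # False False: incomparable
print(sum(map(ad_hoc_score, one_severe)))      # ~8.2
print(sum(map(ad_hoc_score, three_moderate)))  # ~15.2: so B is "less secure"?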

This isn't to say, of course, that NIST can't come up with some formula for ranking systems based on their vulnerability profiles. After all, you could just invent some ad hoc formula for combining vulnerabilities with arbitrarily chosen weights. But it wouldn't be anything principled, and while it would be "objective", it's not clear it would be meaningful. That said, this is an awfully specific proposal for some lawmaker or his staff to come up with on their own; I wonder who suggested to them that this was a good plan.

 

April 5, 2009

You've heard of I Can Has Cheezburger, but have you seen Fuck You, Penguin? How about Fuck me? No. Fuck you, Fuck you, Penguin?

ANJ's Gorbachev from the game Stalin vs. Martians. Didn't even know there was a Stalin vs. Martians game, did you? Me neither.

 

April 3, 2009

You may or may not have seen this article (Bill here courtesy of Lauren Weinstein; þ Joe Hall):
Key lawmakers are pushing to dramatically escalate U.S. defenses against cyberattacks, crafting proposals that would empower the government to set and enforce security standards for private industry for the first time.

OK, I'm going to stop you right there. I spend a large fraction of my time with computer security people and I don't think I've ever heard any of them use the term "cybersecurity", "cyberattacks", or pretty much "cyber-anything", except for when they're making fun of govspeak like this. Next they'll be talking about setting up speed traps on the Information Superhighway. Anyway, moving on...

The Rockefeller-Snowe measure would create the Office of the National Cybersecurity Adviser, whose leader would report directly to the president and would coordinate defense efforts across government agencies. It would require the National Institute of Standards and Technology to establish "measurable and auditable cybersecurity standards" that would apply to private companies as well as the government. It also would require licensing and certification of cybersecurity professionals.

So, it's sort of credible that NIST would generate some computer security standards. They've already done quite a few, especially in cryptography and communications security, with, I think it's fair to say, pretty mixed results. Some of their standards, especially the cryptographic ones like DES, AES, and SHA-1, have turned out OK, but as you start to move up the stack towards protocols and especially systems, the standards seem increasingly overconstrained and poorly matched to the kinds of practices that people actually engage in. In particular, there have been several attempts by USG to write standards for systems security (e.g., the Common Criteria and the Rainbow Books), and uptake in the private sector has been minimal at best. Even more limited efforts like FIPS-140 (targeted at cryptographic modules) are widely seen as incredibly onerous and a hoop that developers have to jump through, rather than a best practice that they actually believe in.

I haven't gone through the bill completely, but check out this fun bit:

(4) SOFTWARE CONFIGURATION SPECIFICATION LANGUAGE.--The Institute shall, establish standard computer-readable language for completely specifying the configuration of software on computer systems widely used in the Federal government, by government contractors and grantees, and in private sector owned critical infrastructure information systems and networks.

I don't really know what this means, but it sounds pretty hard. Even UNIX systems, which are extremely text-oriented, don't have what you'd call a standard computer-readable configuration language. More like 10 such languages, I guess. I'm definitely looking forward to hearing about NIST's efforts to standardize sendmail.cf.

The licensing and certification clause seems even sillier. There are plenty of professional security certifications you can get, but most people I know view them more as a form of rent seeking by the people who run the certifying classes than as a meaningful credential. I can't think of anyone I know who has one of these certifications. I'm just imagining the day when we're told Bruce Schneier and Ed Felten aren't allowed to work on critical infrastructure systems because they're not certified.

More as I read through the actual document.

 

April 1, 2009

I'm experimenting with using a task management app—not planning to do any sort of GTD thing, just looking for a little technical help with keeping track of all the crap I have to do. The general consensus seems to be for either Things or OmniFocus, and somewhat arbitrarily I selected Things: it's cheaper and seems a bit simpler to use. So far it's working fine, and I figured it was time to buy the iPhone app that goes along with it (OF has this as well).

Here's where things start to go off the rails. Once you have the iPhone app, you want it to sync with the app on your computer: otherwise you have two disjoint systems, which is pretty useless. Unfortunately, it seems that third-party apps can't sync with your computer the way that Apple apps sync, so the vendors need to come up with some hacky network-based scheme. Things' version seems to rely on Bonjour discovery and OF uses a WebDAV server. I don't really want to set up a WebDAV server somewhere, and I'm way too paranoid to want random apps on my machine talking to random other computers on my network; that's why I have a firewall, after all. So, the bottom line is I'm hosed. A little bit of web searching quickly reveals hordes of people complaining about this (indeed, at least one of the early hits is about Things).

As far as I can tell, this is a basic limitation of the iPhone, but it's not clear to me if it's something Apple really doesn't want you to do or they just haven't gotten around to offering it yet. In either case, it's not very convenient.