EKR: October 2006 Archives

 

October 31, 2006

From Steve Bellovin over in IP:
Today's Wall Street Journal has an article on the liquid carry-on rules promulgated by TSA. Much of it is familiar to readers of this list, including the screeners who don't really understand the rules. One thing that I haven't seen mentioned much before is that they're really concerned about containers, not just liquids:
To travelers, some of the regulations are bewildering. You can buy a filled water bottle at an airport shop inside security, for example, but you can't carry your own empty water bottle through security and fill it at a water fountain inside security. Mr. Hawley says there's a classified security reason for that related to the characteristics of liquid explosives. In addition, X-ray machines can detect containers, just not what's inside. So getting all containers out of carry-on bags speeds up security screening.

"As stupid as we may look, we didn't miss that one," Mr. Hawley said.

...

Mr. Hawley said there is method in the madness of requiring everything to be in a bag and strictly limiting the size of containers, not the volume of liquid or gel. Containers larger than three ounces could pose a threat -- a place to mix enough liquid explosives to create a bomb. "It's not the ounces. It's the container we're after," he said.

I don't even know where to start here.

  1. They do let you carry empty bottles onto the plane. I've flown twice with an empty Nalgene bottle.
  2. Any water bottle you can buy inside security will be >3 oz., so I don't see how this really restricts containers. Heck, in the duty free shop you can buy a glass wine bottle, suitable for all sorts of chemical reactions. And get this: after you get on the plane, airline employees come through the aisles to hand you 12 ounce aluminum containers!
  3. You're still allowed to carry all sorts of things which could be used as containers in a pinch. For instance, my messenger bag is vinyl-lined and waterproof.

Wouldn't it be nice if the TSA actually did some real threat modelling rather than just making up stuff on the spot?

 

October 30, 2006

Over at Volokh, Max Boot writes:
Even where government has played a big role in the development process, as with the Internet and the electronic computer, the key advances were usually made by people not on its payroll: William Shockley, John Bardeen, and Walter Brattain (the transistor); Jack Kilby and Robert Noyce (the microchip); Ted Hoff (the microprocessor); Paul Allen and Bill Gates (MS-DOS and Windows); Tim Berners-Lee (the World Wide Web); Marc Andreessen and Eric Bina (the Mosaic browser); and many others.

Well, this is sort of true and sort of not. First, I think the phrase "payroll" is confused. The way that the government does a lot of its research is by handing out grant money to other people to do the work for it. So, for instance, much of the Internet work was paid for by DARPA under contract. As a practical matter, the vast majority of university-level research work in science in the US is paid for by government grants.

More importantly, either Boot doesn't really understand the history of computers or these examples are cherrypicked, because these aren't really the key moments at all. First, they start way too late. The main theory of the modern computer was worked out by people who worked more or less directly for the government (Turing and later von Neumann), and the first general-purpose electronic computer in the US, ENIAC, was built for the government in order to compute ballistic tables.1 Real computers existed (based on tubes) substantially before transistors were in wide use.

I certainly agree that Boot's first three examples (transistors, ICs, and microprocessors) are really important and were developed in the commercial sector. But then when we get to MS-DOS and Windows, things go badly awry. Obviously, Microsoft is incredibly important but as a technology DOS and Windows are nothing special. MS-DOS is basically an imitation of CP/M and even in 1980, when MS-DOS was released, UNIX2 was far superior to MS-DOS (it had pre-emptive multitasking, among other things). Similarly, Windows is an imitation of MacOS, which has roots in interfaces designed at PARC and SRI. The bottom line here is that the history of operating systems and UIs is complicated and innovations were made in a whole bunch of places and systems, some government funded and some not.

Similarly, when we turn to the Internet, packet switching was originally proposed by Baran, who worked for RAND, which was mostly a government contractor, Davies, at the UK National Physical Laboratory, and Kleinrock at MIT (remember that almost all university research work in CS is paid for by government grants). And, as I said earlier, almost all of the post-Baran Internet work before 1993 or so was paid for by DARPA (and then later ARPA). Finally, when we turn to his last two examples, they're just plain wrong. Tim Berners-Lee was at CERN, which is basically a multi-country national lab. Marc Andreessen and Eric Bina were at NCSA, which, as the name "National Center for Supercomputing Applications" suggests, was government funded. Finally, even during the .com boom, a lot of the .com companies came out of universities, where, again, research is mostly funded by NSF, DARPA, etc.

1. Yeah, yeah, I know about Zuse, who may have been first, but whose research didn't really end up on the main line.
2. Remember, 4BSD dates from November of 1980.

 
There's been a lot of fuss over Chris Soghoian's fake boarding pass generator (since taken down; link goes to Boing Boing coverage). What this appears to be is a demonstration of a vulnerability I pointed out back in October of 2003 (Schneier made the same point in August of 2003). Most of the coverage here is of how the FBI is torturing1 Soghoian, which, frankly, is pretty silly but is of course what happens when you show that someone important's security system doesn't work, so it's not exactly unexpected. What I'm interested in, though, is the security aspects.

When the generator first came out, there were claims that you could use it for three purposes:

Use this handy boarding-pass generator to: 1) get through airport security without a ticket, 2) bypass the "extra screening" if you have "SSSS" printed on your ticket, or 3) -- and this is harder -- snag yourself a Business Class seat with a Coach ticket.

It's clear how the first two work. All the processes used by the TSA types are paper-based without any checks to a back-end database, so all you have to do is edit whatever boarding pass HTML/image/whatever they give you for print-at-home and you can have it show whatever you want for the TSA guys. And since the only thing that the gate scanners read is the bar code you're good to go.

I'm not so sure about the third one. Obviously, you can change the seat letters on your boarding pass, but business tends to be full, so you're likely going to be in someone's seat and they'll notice. Moreover, if you've ever flown in business, you know that the FAs know your name without looking at your boarding pass2. That's because they have a list of who is supposed to sit where. So, if you get lucky and business is empty and you decide to give yourself an upgrade, what ordinarily happens when you're in business and you shouldn't be is that they notice and ask to see your boarding pass and then ship you back to your rightful seat. If you've modified your boarding pass, they'll know it doesn't match their list and probably investigate the problem, at which point you can look forward to being thrown off the plane (at best) or more likely arrested.

As a side note, Soghoian doesn't seem to have claimed you could make a totally fake boarding pass that would pass the gate scanners. I suspect that's pretty hard. I haven't studied the bar codes on those passes, but I suspect they contain the ticket number which is used to look up your PNR in the database, in which case it's pretty hard to forge one. You could design the system so that the tickets were self-contained, in which case forgeries would maybe be possible (unless they were cryptographically protected) but I don't think that's how it actually works.

1. It used to be that if you said they were torturing him, it would just be idiomatic for "giving him a really hard time", but now I have to put in clarifying notes like this so you don't think that I'm saying the USG is waterboarding the guy. Thanks, Congress!
2. They also know your elite level, which is how they know what order to take meal preferences in and how much to suck up to you.

 

October 29, 2006

Price of Clif bars at Safeway:
Flavor                          Size (oz)   Price   Price/oz
Banana Nut Bread                2.4         1.60    0.67
Black Cherry Almond             2.4         1.60    0.67
Chocolate Chip                  2.4         1.69    0.70
Chocolate Chip Peanut Crunch    2.4         1.60    0.67
Oatmeal Raisin Walnut           2.4         1.60    0.67
Peanut Butter                   2.4         1.60    0.67

One of these things is not like the other. It's true that chocolate chip really is better than the other flavors listed here, but one usually doesn't see this fine-grained pricing.

 

October 26, 2006

A common problem that occurs in computer and networking applications is to generate a unique identifier. One obvious application is detecting message replays. If you receive two identical messages (e-mails, IP datagrams, etc.) how do you know whether the sender sent them twice or the network just duplicated the message? The standard approach here is to require the sender to include a "unique" identifier in each message. Then you compare the identifiers and if they're the same it's a retransmit and if they're different the sender sent it twice.

Identifier Scope
At this point, the question immediately arises: unique within what scope? For some applications (e.g., identifying files within a global p2p network) you need an identifier that's globally unique. For others (e.g., IP packet reassembly) you need one that's unique to a given host over a short period of time (the lifetime of packets in the network). So, when we talk about uniqueness we need to know what scope the identifier needs to be unique within. The two most common cases are probably:

  • Globally unique: This identifier will never be used for anything else by anyone ever.
  • Locally unique: This identifier will never be re-used by this particular system.

Note that these cases are very closely related; if one thinks of the system identity as part of the identifier, then a locally unique identifier becomes a globally unique one. Indeed, it's common to create globally unique identifiers by gluing a fresh locally unique identifier to a static globally unique name. Thus, for instance, e-mail message IDs look like: E1GdBEu-0005LF-Fz@stiedprstage1.ietf.org.
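As a sketch of this gluing trick (the function and variable names here are hypothetical, not from any particular mail implementation), one might combine a locally unique part with the host's domain name, which is assumed to be globally unique:

```python
import itertools
import os
import socket

# Locally unique part: process ID plus a per-process counter.
_counter = itertools.count(1)

def make_message_id(hostname=None):
    """Build a Message-ID-style globally unique identifier by gluing a
    locally unique value to a (presumed) globally unique host name."""
    host = hostname or socket.getfqdn()
    local_part = "%d.%d" % (os.getpid(), next(_counter))
    return "<%s@%s>" % (local_part, host)
```

The identifier is only as globally unique as the name you glue on; two hosts claiming the same domain name defeat the scheme.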

Locally unique identifiers can be generated in two basic ways: counters and randomly generated.

Counters
There's nothing mysterious about a counter: the first identifier is 1, the second 2, etc. Another variant of this is to use the current time--though you have to be careful to use a high enough resolution timer that successive calls to it produce different results. In either case, counters are simple and easy to implement.

The big problem with a counter is that you have to keep the counter state stored somewhere to make sure you avoid re-using the counter. This isn't necessarily a problem, but there are a bunch of ways it can go wrong. A simple case is persisting the counter across machine reboots. In order to avoid starting again from zero, you need to save the counter to disk after each update. Consider the following code:

1 send_message(msg) {
2    msg.id = ctr;
3 
4    write(msg);
5
6    ctr++;
7    save(ctr);
8 }

Figure 1

Now, what happens if the program crashes or the machine reboots in between line 4 and line 7? In that case, you'll reuse ctr for the next message you send. In order to be safe, you need to save ctr before sending the message. Then if the machine crashes you skip a counter but you don't reuse one. The code looks something like this:

1 send_message(msg) {
2    msg.id = ctr;
3    ctr++;
4    save(ctr);
5
6    write(msg);
7 }

Figure 2

Now at this point you might be thinking that neither you nor anybody you know would write the code in Figure 1, but the code in Figure 2 isn't guaranteed to work either because modern operating systems and hard drives have write caches, so just because you've tried to save the data to disk doesn't mean it actually has ended up on the hard drive platter. Similarly, if you're using time of day, you have to worry about the clock being set back. So, making sure values are never reused is actually fairly tricky in practice.
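A partial mitigation, sketched below under the assumption of a POSIX-style filesystem (the state-file name and error handling are illustrative only), is to force the counter to stable storage before the value is used, following the Figure 2 ordering:

```python
import os

def next_id(path="msg_counter"):
    """Reserve and return the next counter value, saving it to disk
    *before* it is handed out (the Figure 2 ordering)."""
    # Read the last saved counter, defaulting to zero if the
    # (hypothetical) state file has never been written.
    try:
        with open(path) as f:
            ctr = int(f.read())
    except (FileNotFoundError, ValueError):
        ctr = 0
    ctr += 1
    # fsync pushes the write out of the OS cache; as noted above, the
    # drive's own write cache can still reorder or lose it, so this is
    # a mitigation, not a guarantee.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, str(ctr).encode())
        os.fsync(fd)
    finally:
        os.close(fd)
    return ctr
```

A crash between the fsync and the message send wastes a counter value but never reuses one, which is the failure direction you want.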

Another problem with counters is that they're inherently local. After all, it's pretty likely that everyone else is going to start from 0 or 1 too. If you want to have a globally unique identifier you need to scope the counter by some globally unique value like your domain name. This only works if you have such a globally unique value, which not all systems do.

Randomly Generated
The second approach is to use a random number generator. The idea here is that you generate a random number of suitable length and there's a high probability it will be unique. The appropriate length is governed by the math of the birthday paradox, but in general, if you want to generate 2^n identifiers, they need to be somewhat greater than 2n bits long in order to have a high probability of uniqueness. By contrast, because counter values are assigned sequentially, if you want to generate 2^n counter values, they only have to be approximately n bits long. Accordingly, counters are more space efficient.
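You can check the birthday math numerically with the standard approximation p ≈ 1 − exp(−k²/2^(b+1)) for k identifiers of b bits each:

```python
import math

def collision_probability(k, bits):
    """Approximate birthday-paradox probability that at least two of k
    randomly generated identifiers of the given bit length collide."""
    # expm1 keeps precision when the probability is tiny:
    # 1 - exp(-x) == -expm1(-x)
    return -math.expm1(-(float(k) ** 2) / 2.0 ** (bits + 1))
```

For example, generating 2^32 identifiers of 64 bits (i.e., only 2n bits) gives a collision probability around 0.39, while at 128 bits it falls to roughly 10^-20, which is the "somewhat greater than 2n bits" rule of thumb in action.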

The big advantage of randomly generated IDs is that you don't need to worry about keeping state around--well, sort of. Strictly speaking, this is only true if you have a true random number generator like dice or radioactive decay. In practice, most systems use pseudorandom number generators (PRNGs). These are functions that start with some seed data to set the initial state and then crank out a bunch of random-appearing values. If you feed a PRNG the same seed then you'll get the same set of random data. So, in principle you need to do state management here too. However, in practice, this isn't a real problem because you can leverage the existing state of the machine (the clock time, network traffic, process timing info, maybe a stored state file...) as seed data. Even though the entropy of each individual piece of state is low, the probability that all the system state will ever be exactly replicated is extremely low and so you can treat this as a random generator.1

By contrast to counters, you don't need to do anything special to make your randomly generated IDs globally unique. It's just a matter of generating a long enough value. This is obviously very convenient since it means no need to have any global naming scheme.
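Concretely, generating such an ID is a one-liner; this minimal sketch uses the operating system's random generator, which is seeded from exactly the kind of accumulated system state described above:

```python
import os

def random_id(bits=128):
    """A statistically unique identifier: just enough random bytes from
    the OS generator, hex-encoded. No global naming authority needed."""
    return os.urandom(bits // 8).hex()
```

A 128-bit value is a common default; per the birthday-paradox math, it comfortably supports generating billions of identifiers with negligible collision probability.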

The issue that typically freaks people out about randomly generated IDs is that they're not guaranteed to be unique, they're just statistically unique. There's some (low) chance that two randomly generated IDs will be the same. By contrast, counters (if properly implemented) appear to be guaranteed to be unique. I hear this argument a couple of times a year and in my experience no amount of working through the math will convince people that the probability is vanishingly small. They want a guarantee.

Unfortunately, this is a case where people's intuition leads them astray; computers seem deterministic so people trust that a properly implemented counter will always increase. This isn't really so: memories and hard drives are both subject to random errors, so even if you're correctly incrementing your counter, there is some chance that it will (for instance) come back as zero the next time you read it. Of course, storage devices have error correcting codes designed to prevent this kind of thing, but they're based on the same kind of statistics that assure you that randomly generated IDs are OK, so that shouldn't give you any confidence. I don't have good estimates for the probability of this kind of system error but given that my computer acts up randomly reasonably often, it's pretty likely that the real-world probability of counter reuse exceeds the expected probability of RNG collisions.

1. Note that this is a much weaker condition than that the seed data be unpredictable. For our purposes here it just has to be unique. Of course in practice people typically use the same cryptographic random number generators for this purpose as well.

 

October 24, 2006

Coming through customs and immigration in Vancouver on my way back into the US I was selected for secondary screening (apparently because I made the mistake of asking one of the agents where I could wait for Fluffy.) Nothing particularly unusual about that, of course, but once I got into the back room he wanted to know about other trips I'd been on, see my driver's license, know where I worked, how long I'd worked there, see a business card, etc. (All this after I'd passed the immigration checkpoint.)

Of course, I've certainly been asked questions by immigration people when entering other countries, but since I'm a US citizen, it's not like there's any question that I have a legitimate right to reenter the country. Presumably this is all designed to detect whether I'm smuggling beavers into the country by seeing if I act nervous under questioning. If so, I must have failed, since he ended up searching my bag.

 

October 21, 2006

Today was the first time I flew since the Great Liquid Explosive Conspiracy, so I arrived at SJC with my liquids/gels/aerosols (hereafter LGAs) dutifully placed in the required clear quart (or liter) ziptop bag of safety (hereafter CQ(OL)ZBOS) (which I actually had to go out and buy since I only had 1 gallon bags, which are apparently tools of Al Qaeda). At the start of the security checkpoint you're instructed to declare the presence of your LGAs. It's not clear what the TSA personnel are supposed to do at this point because when it looked like he was going to dig around in my CQ(OL)ZBOS, I insisted he replace the rather discolored nitrile gloves he was wearing with new ones. Apparently changing gloves is a lot of effort so he just had me hold up my CQ(OL)ZBOS for inspection and let me go. I shoved my LGAs (still in the bag) into my messenger bag and proceeded to the checkpoint. Apparently you're supposed to let them x-ray the CQ(OL)ZBOS containing your LGAs but nobody seemed to notice.

Anyway, while I'm waiting for my stuff to make its way through the x-ray, the TSA tech monitoring the machine calls over for a bag check. I'm thinking that they're going to hassle me about the empty Nalgene bottle in my bag, but instead they've decided that the elderly Asian man in front of me needs some secondary screening action. The problem, it appears, is that he has some Ls, Gs, or As which haven't been placed in a CQ(OL)ZBOS. At this point, the TSA officers spend a bunch of time doing the following:

  1. Rummage through the guy's bag trying to find all the LGAs.
  2. Try to decide if they're over the critical mass of three ounces.1 This involves calling over another TSA guy to ask him, "hey, is this more than three ounces?" The problem appears to be that the tube is a metric size and nobody can really remember how to convert metric to English. Apparently just eyeballing it isn't precise enough because liquid explosive is like plutonium in that if you have just short of critical mass it fizzles but if you have just over it then you get a satisfyingly earth-shattering kaboom.
  3. Tell the elderly Asian man that he was supposed to have placed his LGAs in a CQ(OL)ZBOS. I didn't catch this entire conversation, but it sure sounded like they were going to send him back through the security checkpoint to get one: "You need to go to Senor Jalapeno and get... How long from now is your flight?"

At this point, I realized that I had not correctly understood the purpose of the CQ(OL)ZBOS. I'd always assumed that the idea was to make it easier for the TSA officers to look at all of your LGAs in isolation to determine if you had anything bad (hence the requirement for it to be clear). Apparently, however, the idea is that in the case that your LGAs are in fact liquid explosives rather than (say) hand sanitizer, that the CQ(OL)ZBOS will contain the explosion, thus saving the lives of everyone else on the plane, which raises the question: for my safety and the safety of others shouldn't they require that people use the thicker and hence safer freezer storage bags?

1. This sentence originally was ungrammatical. Correction due to Terence Spies.

 

October 20, 2006

I recently finished Marc Levinson's The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger. It's all very interesting, but I particularly enjoyed Chapter 7, which describes the process of standardization. The basic problem was that all the pioneers had developed containers with different dimensions and fittings.

The process should be familiar to anyone who's worked in network standards, complete with multiple standards bodies:

These concerns were unrepresented when Marad's two export committees held their first meetings on successive days in November 1958. Neither Pan-Atlantic nor Matson was seeking government construction subsidies, so the only two companies actually operating containerships in 1958 were not invited to join in the process of setting standards for the industry they were creating.

Controversy arose almost immediately. After much debate, the dimension committee agreed to define a "family" of acceptable container sizes, not just a single size. It voted unanimously that 8 feet should be the standard width, despite the fact that some European railroads could not carry loads wider than 7 feet; the committee would "have to be guided mainly by domestic requirements, with the hope that foreign practice would gradually conform to our standards." Then the committee took up container heights. Some maritime industry representatives favored containers 8 feet tall. Trucking industry officials, who were observers without a vote, argued that 8 1/2-foot-tall boxes would let customers squeeze more cargo into each container and allow room for forklifts to work inside. The committee finally agreed that containers should be no more than 8 1/2 feet high but could be less. Length was a tougher issue still. The diversity of containers in use or on order presented a serious operational problem: while a short container could be stacked atop a longer one, its weight would not rest upon the longer one's load-bearing steel corner posts. To support a shorter container above, the bottom container would require either steel posts along its sides or thick, load-bearing walls. More posts or thicker walls, though, would increase weight and reduce interior space, making the container more costly to use. The length question was deferred.

...

The [ANSI] MH-5 subcommittees, involving many of the same participants, went to work on the same issues. The MH-5 subcommittee on dimensions quickly reached a consensus that all pairs of lengths in use or about to be used—12 and 24 feet, 17 and 35 feet, 20 and 40 feet—would be considered "standard." The subcommittee rejected only a proposal to endorse 10-foot containers, because members thought them too small to be efficient, and, in any case, none were planned.

The MH-5 process was dominated by trailer manufacturers, truck lines, and railroads. These interests wanted to reach a decision on container sizes quickly, because once standard dimensions were approved, the domestic use of containers was expected to burgeon. The specifics mattered less: within the limits set by state laws, trucks and railroads could accommodate almost any length and weight. The maritime interests that were influential in the Marad committees, in contrast, cared greatly about the specifics. A ship built with cells for 27-foot containers could not easily be redesigned to carry 35-foot containers.

...

Meanwhile, yet another player entered the standards business. The National Defense Transportation Association, representing companies that handled military cargo, decided that it too would study container dimensions. ... By late summer of 1959 it had agreed unanimously that "standard" containers would be 20 feet or 40 feet long, 8 feet wide, and 8 feet high. The other lengths approved by the MH-5 and Marad committees, and the 8 1/2-foot-high boxes supported by some truckers and most ship lines, would not be acceptable for military freight—a decision Forgash's committee was able to reach only because no one from the maritime industry was involved.

...

The wrangling over container sizes, which had consumed three years in the United States, was now repeated at the international level. By 1962, much of Europe was allowing larger sizes than was America, so the new American standard sizes, 8 feet high, 8 feet wide, and 10, 20, 30, or 40 feet long [Marad changed its mind--EKR], faced no technical obstacles. Economic interests were another story. Many continental European railroads owned fleets of much smaller containers, made for 8 or 10 cubic meters of freight rather than the 72.5 cubic meter volume of a 40-foot container. The Europeans wanted their containers recognized as standard. The British, Japanese, and North American delegations were all opposed, because the European containers were slightly wider than 8 feet. A compromise was struck in April 1963. Smaller containers, including the European railroad sizes and American 5-foot and 6 2/3-foot boxes would be recognized as "Series 2" containers. In 1964, these smaller sizes, along with 10-, 20-, 30-, and 40-foot containers, were formally adopted as ISO standards.

backward-compatibility issues with the installed base:

A ship built with cells for 27-foot containers could not easily be redesigned to carry 35-foot containers. Most ships then carrying containers had shipboard cranes built to handle a particular size, and they would have to be converted to handle other sizes. Large containers might prove impossible to fill with the available freight, but smaller ones would increase costs by requiring more lifts at the dock. Some lines had made large investments that could be rendered worthless if their containers were deemed "nonstandard."

...

Not a single container owned by the two leading containership operators, Sea-Land service (the former Pan-Atlantic) and Matson, conformed to the new "standard" dimensions.

and last-minute hacks:

Instead they proposed a minor change to the fitting that the MH-5 committee was designing based on the Sea-Land patent. If the hole on the top of the fitting were moved half an inch, they estimated, 10,000 containers—about 80 percent of all large containers used by U.S. railroads and ship lines other than Sea-Land—would be "reasonably compatible" with Sea-Land's. The fitting they recommended, they said, would cost less than half as much as the National Castings fitting ($42.24 versus $97.90) and weigh barely half as much (124 pounds versus 236).

...

Through 1966, engineers around the world tested the new fittings and found a variety of shortcomings. As an extra check, a container was put through emergency tests in Detroit, just ahead of another meeting of the ISO committee. It failed, the fittings on the bottom of the test container giving way under heavy loads. When TC104 convened in London in January 1967, it was faced with the uncomfortable fact that the corner fittings it had approved in 1965 were deficient. Nine engineers were named to an ad hoc panel and told to solve the problems quickly. They agreed on the tests that fittings would have to pass, and then two engineers, one British, one American, were sent to a hotel room with their slide rules and told to redesign the fitting so that it could pass the tests. Requiring thicker steel in the walls of each fitting, they calculated, would solve most of the problems. No existing container complied with their "ad hoc" design. Over the bitter complaints of many ship lines that had encountered no problems with their own containers, ISO approved the "ad hoc" design at a meeting in Moscow in June 1967. The thousands of boxes that had been built since ISO first approved corner fittings in 1965 had to have new fittings welded into place, at a cost that reached into the millions of dollars.

Remember, this is a rectangular metal box with fittings at the corners. What makes standards-setting complicated isn't primarily that the technical issues are difficult—though they certainly can be—but that they require coordination between multiple players who often have radically different incentives.

UPDATE: s/MD-5/MH-5/. Thanks to Chris Walsh for pointing this out.

 

October 17, 2006

Earlier today my United affinity Visa card was declined while trying to buy plane tickets. I pay my bill on time, so this is a bit surprising and I called Chase to see what's up. The conversation goes something like this.

EKR: Earlier today my card was declined, so I wanted to see what was going on.
CSR: OK, I can help you with that, but I need to ask you some security questions about the primary cardholder [Mrs. Guesswork].
EKR: OK, I'll try to answer them.
CSR: Where has the primary cardholder ever owned property in locations X, Y, or Z, or none of these?
EKR: Uh, can you repeat those.
CSR: Where has the primary cardholder ever owned property in locations X, Y, or Z, or none of these?
EKR: None of these.
CSR: Which of these locations did the primary cardholder live at, A, B, or C, or none of these.
EKR: Uh, I have no idea. This would have been before she met me, I'll have to ask her.
CSR: That's OK, here's another one. Which of these locations did the primary cardholder live at, D, E, or F in year ZZZ or none of these?
EKR: Uh, this is before I met her. I'll have to have her call you back
CSR: That's OK, you pass.
EKR: I do?
CSR: Yeah.
EKR: OK. Well, so can you tell me why my card was declined?
CSR: This is the charge for XXX?
EKR: Yeah.
CSR: It was rejected because the merchant didn't give us their name.
EKR: Well, it was United and this was a United affinity Visa.
CSR: I don't know but they didn't give us their name, and merchants Foo and Bar do. Something must have gone wrong when their computers transferred the name.
EKR: I don't know.
CSR: Anyway, I can clear this up and reset your account.
EKR: Thanks.

That sure was a useful set of security checks there.

 

October 9, 2006

Back last week before the alleged DPRK nuclear test, the Bush Administration was doing a lot of tough talk about it. Here's Condoleezza Rice:
Secretary of State Condoleezza Rice said Tuesday a North Korean nuclear test would be "a very provocative act" and the United States would have to assess its options should it be carried out.

Rice's warning, at a news conference in Cairo, reflected widespread concern within the Bush administration. She stressed, however, that a North Korean test was an issue "for the neighborhood" and not just for the United States.

"It would be a very provocative act," she said. Still, she said, "they have not yet done it."

Rice did not elaborate on the options she said the United States would consider if North Korea followed through on its threat.

Now that the DPRK claims to have conducted such a test, you might be wondering what those options actually are. Here's Humphrey Appleby to explain it to you:

There are essentially six options. One, ignore it, two, file a protest, three, issue a statement condemning it, four, cut off aid, five, sever diplomatic relations, six, declare war. Now, if we ignore it, we tacitly acknowledge it, if we file a protest it'll be ignored, if we issue a statement it will seem weak, we can't cut off aid because we're not giving any, if we sever relations we risk losing the oil contract and if we declare war... people might just think we're overreacting.

Apparently for now we've decided to go with option 3.

 

October 8, 2006

AP reports that Sen. George Allen (R-VA) has failed to report stock options given to him by Xybernaut and Commonwealth Biotechnologies. Allen says that he didn't report the options because they were underwater:
Allen's office said he sold his Xybernaut stock at a loss and has not cashed in his Commonwealth options because they cost more than the stock is now worth. The senator also said he saw no conflict going to work for companies shortly after assisting them as governor.

"I actually got no money out of Xybernaut. I got paid in stock options which were worthless. Commonwealth Biotech asked me to be on their board. Glad to do it. I learned a lot on their board and enjoyed working with 'em, and they seem to be doing all right, I guess," Allen said.

...

Allen's office said he did not report his Commonwealth options on his past five Senate disclosure reports because their purchase price was higher than the current market value. Allen viewed them as worthless and believed in "good faith" he did not have to report them, aides said.

Allen disclosed the options once - on an amendment to his 2000 ethics report filed three months after the normal filing period ended. He excluded the options from subsequent reports.

As Radley Balko observes, this is nonsense. As long as Allen holds an equity position in the company (whether stock or options), he has a stake in the company's success, especially because these companies appear to be partly dependent on Federal contracts. Indeed, Allen appears to have attempted to help Xybernaut while he was still an options holder:

Reid said he is aware of only one time that Allen's office helped any of his former companies. That came in December 2001 when Allen asked the Army to resolve a lingering issue with Xybernaut. The company asked Allen to intervene, and he urged the Army to give Xybernaut an answer, Reid said.

At the time, Allen still owned options to buy 110,000 shares of Xybernaut stock, which could be affected by any new federal contracts.

The Army answered but did not give Xybernaut what it wanted, and Allen did nothing more, Reid said. The office declined to release the correspondence, saying constituent letters are confidential.

More importantly, the claim that options which are underwater are worthless is simply wrong, as a moment's reflection will tell you. If I buy a share of company X for $10 and the price then falls to $5, I've lost $5. But options are different.

The kind of incentive options issued to employees or directors are effectively calls. They give the option holder the right to buy a stock at some point in the future at a fixed price—no matter what the price of the stock is at that future time. Say that shares of company X are currently trading at $10 and I'm issued an option to buy at $10 (this is called the strike price). If I exercise the option (buy the stock) today, then my profit is $0. But if X shares are ever trading above $10, I can exercise the option, sell at the higher price, and make money. On the other hand, if the stock price never goes over the strike price, I simply don't exercise the option. Because I'm not required to exercise, there's no way I can lose money here: I either make some money or no money.1
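The asymmetry is easy to see in code. Here's a minimal sketch; the `call_payoff` function and the dollar figures are just the hypothetical example above, nothing from Allen's actual filings:

```python
def call_payoff(stock_price, strike):
    """Payoff from exercising a call option: you only exercise
    when the stock trades above the strike, so the payoff is
    never negative."""
    return max(stock_price - strike, 0.0)

# A $10 strike, as in the example above.
print(call_payoff(10.0, 10.0))  # at the money: exercising gains nothing -> 0.0
print(call_payoff(15.0, 10.0))  # stock at $15: exercise, sell, pocket $5 -> 5.0
print(call_payoff(5.0, 10.0))   # stock at $5: don't exercise, lose nothing -> 0.0
```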

Because of this asymmetry, the value of an option isn't determined by whether the stock is currently trading above the strike price but rather by the probability that it will ever trade above the strike price. (There's a complicated model here called Black-Scholes but you don't need to know it to understand this point.) As long as there's some chance that the stock will eventually trade above the strike price, the option has value. That's why employee stock options are valuable even though they're often issued with a strike price equal to the current stock price; the idea is that the stock price will go up.
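For the curious, here's a sketch of the Black-Scholes value of a European call using only the Python standard library. The interest-rate and volatility numbers are made-up illustrative values; the point is just that an at-the-money option, and even an underwater one, carries a positive value:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call: S = stock price,
    K = strike, T = years to expiry, r = risk-free rate,
    sigma = annualized volatility."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money option ($10 strike on a $10 stock) is far from worthless:
print(round(black_scholes_call(10, 10, 1.0, 0.05, 0.3), 4))
# An underwater option ($10 strike, stock now at $5) still has nonzero value:
print(round(black_scholes_call(5, 10, 1.0, 0.05, 0.3), 4))
```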

This really isn't particularly complicated stuff, and something one might find useful to know, particularly if one were serving on the Senate Small Business and Entrepreneurship Committee or the Senate High Tech Task Force.

1. Of course, if you're trading options, then the way you lose money buying calls is that the stock is never worth more than the strike price and you're out the cost of the option, but in this case the options were part of Allen's compensation, so he most likely never paid anything for them.

 

October 7, 2006

The San Diego Tribune has an article questioning whether Allerca is for real. I have no independent information here either way, but if it is a hoax, it's too bad.
 
My paperwork record-keeping abilities could be charitably described as "suboptimal" or "limited." Ordinarily, I pay my bills on time (thanks, Paytrust) so this is only a problem when I need to look at things more than a few months old—you can only imagine how much fun tax time is around here.

Anyway, one does occasionally have to check one's records, and today was one such day. I had a dispute with a vendor about whether a bill had been paid. Ordinarily, I'd be hosed but thanks to the Internet, I was able to look at my Paytrust history, verify the check had been written (4 months ago!), and then check my online bank statements and verify that the check had indeed been cashed. Mission accomplished! The only downside here is that banks and credit cards seem to only carry statements back a year or so, which can be inconvenient (taxes again). Disk space obviously isn't an issue here, so I'm guessing it's just a holdover from the old hierarchical retention policies they had (local paper, remote paper, microfilm, etc.) Hopefully policies will catch up with technology; my accountant sure would appreciate it.

 

October 6, 2006

Allerca is now taking orders for hypoallergenic cats (via NYT). It turns out that no genetic engineering was required. There's a natural mutation which suppresses production of the major allergenic protein, Fel d 1, so they were able to selectively breed for cats with that allele. This is pretty cool, though pricey: the cats are $3950 with an estimated delivery date of November 2007--plus you can pay $1950 for expedited delivery.

Since the Fel d 1-suppressing allele is naturally occurring and no genetic-engineering rocket science was involved, it should be a straightforward matter for another organization to reproduce the breeding program: all you need is the ability to test for the relevant gene--or the expression of the protein--both of which are well-understood technologies. Lots of people want to have cats and cat allergies are incredibly common, so I'd expect to see other vendors of hypoallergenic cats in the not-too-distant future.

 

October 5, 2006

Patricia Dunn has been indicted on wire fraud and conspiracy charges. On the other hand, HPQ is within a dollar of its 3-year high at 37.84, up about 10% from before the scandal. Way to add shareholder value, Patricia.
 

October 3, 2006

Obviously, having people say that you solicit 16-year-olds isn't exactly good for your reputation, but what's really killing Mark Foley is that you can see his IMs for yourself. That's not something people are going to forget. Because the interaction style of IM feels more like telephony than e-mail, it's easy to forget that most IM programs have a feature that lets you log your entire conversation.

It's worth noting that crypto doesn't really help in situations like this because the person revealing the information isn't an attacker listening on the network but the intended recipient of the messages. Even something like Off-the-Record doesn't solve the problem. It's true that OTR provides confidentiality without any ability to prove the source of the messages, but it's highly unlikely that there was any cryptographic proof that Foley sent these messages in the first place. Rather, the recipient claims that he got them and Foley (probably wisely) hasn't said otherwise. In order to have plausible deniability, you first need to be willing to issue a denial.
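For what it's worth, the deniability OTR does provide comes from authenticating messages with a shared-key MAC rather than a digital signature: both parties hold the same MAC key, so a logged transcript proves only that someone with the key wrote a message, not which party it was. A toy sketch of that property (the key and messages here are invented, and this is not OTR's actual protocol):

```python
import hmac, hashlib

# Both ends of the conversation share the same MAC key.
shared_key = b"session key known to both Alice and Bob"

def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag over a message."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

# Alice sends a message; Bob can verify it came from a key holder.
msg = b"meet at noon"
t = tag(msg)
assert hmac.compare_digest(t, tag(msg))

# But Bob, holding the same key, can forge a validly-tagged
# "message from Alice" himself -- so showing a logged transcript
# to a third party proves nothing about who wrote it.
forged = b"a message Alice never sent"
forged_tag = tag(forged)
assert hmac.compare_digest(forged_tag, tag(forged))
```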

I've of course seen products that aim (typically cryptographically) to ensure that the receiver doesn't keep a copy of the communication at all.1 The problem with this kind of thing is that there are lots of reasons why recipients want to keep a copy anyway, and as long as they control their own computers they can bypass whatever protection you've put in place. Really doing an adequate job requires trusted hardware and tightly controlled software, which users are (understandably) reluctant to deploy.

1. I've skimmed the OTR source code and don't see any evidence that it attempts to suppress logging, but even if it did, since it's distributed in source code it would be easy to disable.

 

October 2, 2006

I'm currently watching Die Hard—a movie classic, by the way. For the five people in the US who haven't seen it, the plot revolves around police officer John McClane (played by Bruce Willis), who is trapped in an office building which has been taken over by terrorists--well, armed robbers, actually. For the first 30 minutes or so, he struggles to find some way to get police support, finally ending up calling them on a radio he's taken from one of the robbers, at which point they blow him off because he's using an emergency channel.

Die Hard was made in 1988, before cell phones became ubiquitous, but imagine how different things would have been if he'd just been able to dial 911—well, actually the 911 operators would probably have just blown him off too, so bad example, but you get the idea. Lots of movie plots have been overtaken by advancing technology. (For a hostage situation in the post-cell-phone era, see Inside Man.) Another example is Flight of the Phoenix, in which a bunch of oil workers are stranded in the Gobi after a plane crash. For under $700 you too can have a personal locator beacon which will notify your friends—or at least some satellites—that you are trapped somewhere in Mongolia.

Here's a partial list of movie problems that seem baffling now or probably will in 10 years:

  • Being out of communication (cell phones, sat phones, PLBs...).
  • Being lost (GPSes and integrated nav systems)
  • Dying of a bunch of diseases. Of course a lot of them are substitutable--we've mostly killed TB here in the West but cancer does just as well--but some are unique. Remember when HIV was a death sentence?
  • Medical mysteries. Even now, when you watch something like House, the screenwriters have to find all sorts of obscure reasons why the doctors can't run the normal diagnostics. When you can do a complete rapid DNA sequence in a couple of hours, it's going to be a lot easier to figure out what bacterium/virus/etc. is killing your patient. That doesn't mean you'll be able to cure them, of course. Even now, our diagnostic ability outstrips our ability to treat.
  • Any situation in which information takes up too much space, is too hard to search, etc.

Of course, I can think of at least one movie with a plot that wouldn't have been possible 10 years ago: Cellular.

 

October 1, 2006

A while ago I came across the claim that Hezbollah had compromised Israel's battlefield communications:
"We were able to monitor Israeli communications, and we used this information to adjust our planning," said a Hezbollah commander involved in the battles, speaking on the condition of anonymity. The official refused to detail how Hezbollah was able to intercept and decipher Israeli transmissions. He acknowledged that fighters were not able to hack into Israeli communications around the clock.

The Israeli military refused to comment on whether its radio communications were compromised, citing security concerns. But a former Israeli general, who spoke on the condition of anonymity, said Hezbollah's ability to secretly hack into military transmissions had "disastrous" consequences for the Israeli offensive.

"Israel's military leaders clearly underestimated the enemy and this is just one example," he said.

...

Like most modern militaries, Israeli forces use a practice known as "frequency-hopping" - rapidly switching among dozens of frequencies per second - to prevent radio messages from being jammed or intercepted. It also uses encryption devices to make it difficult for enemy forces to decipher transmissions even if they are intercepted. The Israelis mostly rely on a U.S.-designed communication system called the Single Channel Ground and Airborne Radio System.

This probably needs some unpacking. There are two technologies in play here, frequency hopping and encryption.

Let's start with the encryption. In any communication security environment you want to be able to ensure that attackers can't get access to the data you're transferring (confidentiality), that they can't insert data that you accept as valid (data origin authentication), and that they can't modify your data in flight (integrity). We have fairly well-developed cryptographic techniques for providing these services, provided the cryptographic keys are handled correctly. The particular unit the Israelis are using (SINCGARS) was designed by the NSA--which knows what it's doing. It would be very big news if Iran knew how to break NSA-designed crypto.
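As an illustration of those three services, here's a toy encrypt-then-MAC construction in Python. The hash-based keystream is strictly for demonstration--real systems use vetted ciphers, and nothing here reflects the actual SINCGARS design:

```python
import hmac, hashlib, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash the key, nonce, and a counter.
    For illustration only -- never roll your own cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    # Confidentiality: XOR with the keystream.
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # Authentication and integrity: MAC over nonce + ciphertext.
    return ct, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()

def open_(enc_key, mac_key, nonce, ct, mac):
    # Reject anything an attacker inserted or modified in flight.
    if not hmac.compare_digest(mac, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("forged or modified ciphertext")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk, nonce = os.urandom(32), os.urandom(32), os.urandom(12)
ct, mac = seal(ek, mk, nonce, b"attack at dawn")
assert open_(ek, mk, nonce, ct, mac) == b"attack at dawn"
```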

Even if the NSA had screwed up, you'd expect Israel to have caught it. Israel has some of the best cryptographers in the world. I'd be pretty surprised if the Israeli military is using crypto that hasn't been properly vetted. Obviously, it's possible that Hezbollah got their hands on a few crypto units, but you'd expect the Israelis to change their code keys in response. It's hard to believe that they broke the cipher per se.

The frequency hopping system is a different matter. The general idea behind a frequency hopping scheme is that the sender and receiver have synchronized pseudorandom number generators and use those to constantly adjust what frequencies they're transmitting and receiving on. This makes the signal both hard to jam and hard to intercept (more info here). It's easy to believe that the Iranians developed technology to make it easier to intercept this kind of communication, for instance by monitoring all the candidate channels and using signal-processing techniques to reassemble the signal (disclaimer: I'm not an RF engineer, which is why that's a bit handwavy).
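The synchronization idea can be sketched in a few lines of Python. The channel numbers and seed here are invented, and Python's Mersenne Twister PRNG is not cryptographically strong--a real system would derive the hop sequence from a keyed, cryptographic generator:

```python
import random

# Both radios are seeded with the same shared secret, so they walk
# through the same pseudorandom sequence of channels and stay in sync,
# while an eavesdropper without the seed sees an unpredictable pattern.
CHANNELS = list(range(2320, 2400))  # hypothetical channel numbers
SHARED_SEED = 0xC0FFEE              # agreed on out of band

sender = random.Random(SHARED_SEED)
receiver = random.Random(SHARED_SEED)

for hop in range(10):
    tx = sender.choice(CHANNELS)
    rx = receiver.choice(CHANNELS)
    assert tx == rx  # both sides land on the same channel every hop
```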

Of course, even if you were able to recover the signal, you'd have only ciphertext, but that would still let you do traffic analysis, localization, etc., which could be very useful in a battlefield situation, even if you don't know the actual content of the communication being transmitted.