March 2009 Archives


March 28, 2009

Last Sunday I did the Diablo Trail 50K put on by Save Mount Diablo.

This was a fairly tough course, a bit over 32 miles with 8518 feet of climb, point-to-point from Round Valley Park to Castle Rock Park in Walnut Creek. The way this works is that you drive to Castle Rock and park your car, take a bus to the start, where you register, change if necessary and dump your bags which are carted to the finish.

It was raining hard Saturday night and windy and cold Sunday morning, so I opted for a tank top (Race Ready Ares trail shirt) and a long sleeve shirt (Brooks Dryline; you can't get these any more, unfortunately), plus some light gloves. The first rest stop was at 10 miles, so I decided to carry a hydration system (Patagonia Houdini) instead of a race belt, with a bunch of gels. Even so, it was really cold standing around at the start, and it never warmed up much. It was fine down in the valleys but as soon as you got up to the ridge line it got super-cold and windy.

To make things harder, the overnight rain had left the first 10 miles of the trail incredibly muddy. Even with trail shoes and walking the uphills, people were slipping all over the place. As soon as you got to a downhill your shoes would pick up about an inch of mud, making it really difficult to run. I would say about 40% of the first 10 miles was serious mud and, since we were running on trails that went through pastures, cow manure. By the first aid station around mile 10 I was over 2 hours and starting to feel seriously tired. In retrospect, even knowing that you have to take it easy in a race this long, I think I pushed it too hard. It didn't help that I'd been sick the whole previous week and hadn't completely recovered. All in all, I probably lost about 5-8 places from the start to the finish, which suggests I went out too hard.

On the other hand, I hate DNFing, and while I was still tired by the time I hit the mile 17.2 aid station (a bit over half way), I figured it was mostly a matter of sucking it up to finish. I was starting to feel nauseous and looking for something a bit more substantial than the energy gels I'd brought, so I switched over to some pretzels I picked up at the aid station. It took me a while to get them down, but my stomach finally settled a bit and I made it to the mile 24 aid station without any real problems. Around mile 24 I ran into Joe McDonald, who I'd never met before but who turns out to be a legend in ultra circles. We ran the next 7 miles or so together, just taking it at a moderate pace, and I had an opportunity to quiz him a bit about how to improve in the sport, which was great. The intensity level in ultras and the way you get tired is a lot different than it is in triathlon (remember, you're running for a lot longer, even if the total event time is shorter), so that's something I have to get used to.

The three miles from the mile 29 aid station to the finish were pretty tough. It wasn't so much that I was tired but Joe and Jennifer Ray (advertisement: the RD for Skyline 50K, who seems pretty nice), who caught up with us at about mile 30, decided to pick up the pace a little bit, and while I wasn't quite able to stay with them, I did pick up the pace myself, running rather than walking the uphills, and did the last mile or so fairly hard. Finishing time 7:13:05, which puts me 21st out of 48.


March 27, 2009

Sorry about the lack of content last week—was at IETF and just didn't have time to write anything. I should have some more material up over the weekend. In the meantime, check out this photo of the bathroom sink at the Hilton where we were having the conference:

That thing to the left of the sink is an automatic soap dispenser (surprisingly, powered by a battery pack underneath the sink). Now notice that the sink itself is manually operated. Isn't this kind of backwards? The whole point of automatic soap dispensers and sinks in bathrooms is to appeal to your OCD by freeing you from having to touch any surface which has been touched by any other human without being subsequently sterilized. But when you wash your hands, the sequence of events is that you turn on the water, wet your hands, soap up, rinse, and then turn off the water. So, if you have a manually operated sink, people contaminate the handles with their dirty, unwashed hands, which means that when you go to turn the sink off, your just-washed hands get contaminated again. The advantage of automatic faucets, then, is the automatic shutoff, which omits the last stage. By contrast, having the soap dispenser be automatic doesn't buy you that much because you only need to touch it before washing your hands. There's probably some analogy here to viral spread in computer systems, but for now let's just say that this is how security guys think.


March 24, 2009

Leslie Daigle just summed up the situation with IPv6 at today's ISOC IPv6 press event: "It's [IPv6] sort of a broccoli technology; good for you but not necessarily attractive in its own right."

UPDATE: Corrected the quote a bit. Thanks to Greg Lebovitz for the correction.


March 20, 2009

From my review of draft-meyer-xmpp-e2e-encryption-01:

The context of this draft is that currently messages in XMPP from Alice
to Bob go through Alice and Bob's respective servers in transit.  This
implies that Alice and Bob need to trust their servers both to enforce
appropriate security policies (i.e., to make sure there is TLS along
the whole path if appropriate) and not to actively subvert security, such as
by message sniffing, injection, etc. The purpose of this document is
to allow Alice and Bob to establish an end-to-end secure cryptographic
channel that does not rely on the server for security.

Before talking about the draft details, it's important to get clear on
the threat model. In particular, we need to be clear on how much the
servers are trusted. There are at least three plausible models:

- The server is trusted completely (the current system).
- The server is trusted to authenticate Alice and Bob,
  but should not see the traffic.
- The server is not trusted at all.

Clearly, we're trying to do better than the first of these, so it's
between the second two.  For contrast, in SIP (cf. RFC 4474) the basic
assumption is that the proxy (the server) owns the namespace
associated with it. So, for instance, if the server decides it wants
to take Alice's name away from her and give it to
her sister (also named Alice), it can. So, the proxy is trusted to
authenticate Alice, but shouldn't see the traffic, i.e., the second model.

The security requirements for these two are different. In particular,
in the second case, you need some independent mechanism for Alice and
Bob to authenticate each other. 

I think it's important to be clear on which of these environments
you think is the dominant one. I'm sure there are *some* cases
where people don't trust the servers at all, but I suspect in most cases
they just want (1) not to have to trust the server to enforce security
policy and (2) deter casual sniffing by server operators. In these
cases, a model where the server authenticates the users for an
E2E connection (a la DTLS-SRTP) is appropriate. If that's a common
model, then forcing all users to use a secure independent channel
just because some want to is going to be a very serious inconvenience.
My instinct is that that's a mistake.

The design of a system in which the servers vouch for the users' identities
is fairly straightforward, with DTLS-SRTP as a model: the servers simply
authenticate the users and then pass on digests of the users' certificates
(as provided by the users) along with an authenticated indication of the
users' identities (a la RFC 4474 or even the current TLS model),
and the certificates used on the end-to-end connection are compared to these fingerprints.
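A minimal sketch of that fingerprint comparison (the hash choice, hex rendering, and function names here are my assumptions for illustration, not anything specified by the draft):

```python
import hashlib
import hmac

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, rendered
    as colon-separated hex pairs."""
    digest = hashlib.sha256(cert_der).digest()
    return ":".join(f"{b:02X}" for b in digest)

def fingerprint_matches(cert_der: bytes, signaled_fp: str) -> bool:
    """Compare the certificate presented on the end-to-end connection
    against the fingerprint that arrived via the (server-authenticated)
    signaling channel. Constant-time comparison as a matter of hygiene."""
    return hmac.compare_digest(cert_fingerprint(cert_der), signaled_fp)
```

Note that the fingerprint itself need not be secret; it only needs to arrive over a channel the attacker can't modify.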

As noted above, the design of a system in which the servers aren't trusted
is significantly more complicated. Roughly speaking, there are three
major techniques available here: key/certificate fingerprints, a
shared password, and a short authentication string. See
[] for
some background here.

I think it's generally agreed that fingerprints are too much of a hassle
for regular use, though if your model is that most users will be
happy without verification, you might think that fingerprints are OK for
the exceptionally paranoid.

This leaves us with SAS and shared passwords. The important interface
differences here are as follows:

- The SAS must be verified *after* the connection is set up. The password
  must be set up beforehand.
- You can use the same password with multiple people semi-safely. 
  The SAS is new for every message.
- SAS probably requires modifying TLS. There are existing mechanisms 
  for passwords.
- The SAS is "optional" in the sense that you can generate it and not
  check it. The password is "mandatory" in the sense that if it's
  specified, it must be supplied or the connection will not be set up.
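For concreteness, here is roughly what deriving an SAS from a shared handshake transcript might look like. The label, the hash, and the four-digit rendering are all illustrative assumptions; the important property is just that both endpoints compute the same value from the same handshake, and the users compare it out of band:

```python
import hashlib

def short_auth_string(transcript: bytes, digits: int = 4) -> str:
    """Derive a short authentication string from the (identical) handshake
    transcript held by both endpoints. Users compare the resulting strings
    out of band, e.g., by reading them aloud."""
    h = hashlib.sha256(b"SAS-label:" + transcript).digest()
    # Take the first 4 bytes as an integer and reduce to N decimal digits.
    value = int.from_bytes(h[:4], "big") % (10 ** digits)
    return f"{value:0{digits}d}"
```

An attacker who man-in-the-middles the handshake ends up with two different transcripts, so the two users' strings disagree (except with probability about one in 10^digits).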

Passwords can be further broken down into two variants: ZKPP/PAKE
schemes and ordinary PSK schemes. The relevant differences between
these two are that PSK schemes are susceptible to offline dictionary
attack but that ZKPP/PAKE schemes have a much more problematic IPR situation.

Finally, there is the question of where the authentication is done.
As I noted above, TLS has existing PSK and SRP mechanisms. However,
one could also add at least password and PAKE mechanisms to 
SASL if one wanted and use a channel binding to connect the two.

More to come at the XMPP2 BOF next week in San Francisco, which, for some unknown reason, I'm chairing.


March 19, 2009

Can someone explain to me why, when I go to download Firefox, Xcode, or a bunch of other software for that matter, it happens over HTTP and not HTTPS? Remember, I'm about to install and run this software on my computer: if an attacker has managed to hijack my connection, they can get me to run anything they want. But nooo.... Even if you connect to the site with HTTPS, it redirects you to HTTP to download your file. There are obvious reasons to favor HTTP over HTTPS, namely performance and allowing mirrors. On the other hand, that makes the need for publication of the digest even more critical, since it sucks to have to trust the mirror.

If you're going to use mirrors, the right thing to do here is to publish a digest of the file on an HTTPS-accessible page (remember: these sites already will let you access them over HTTPS, so this doesn't make the situation worse). This would let users download the file from a mirror and then check the digest against the master site. I don't see digests on either site, though. It could just be that I'm missing it, but then surely lots of others are as well.
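The client-side check is then just a digest comparison. A sketch, assuming the master site publishes a SHA-256 digest on its HTTPS page:

```python
import hashlib
import hmac

def verify_download(path: str, published_sha256_hex: str) -> bool:
    """Hash the file fetched from an (untrusted) mirror and compare it
    to the digest published on the master site's HTTPS page. Only the
    digest page needs to be trustworthy; the mirror does not."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return hmac.compare_digest(h.hexdigest(), published_sha256_hex.lower())
```

A tampered-with download from the mirror then fails the check no matter how the attacker modified it, since producing a different file with the same SHA-256 digest is infeasible.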


March 18, 2009

Ezra Klein complains that congresspeople want to twitter rather than blog:

But this is the problem with the public sphere's quick embrace of Twitter. It's intimacy without communication. McCaskill doesn't actually say anything in 140 characters or less. The illusion of transparency comes because in everyday life, we only hear about the dinner plans of people we actually have a relationship with. What's useful about intimacy, however, isn't the exchange of trivia but the access to different perspectives. And I'd really like to hear her perspective! It would be rather nice if senators and congressmen routinely wrote posts explaining their thinking on major issues. A public service, even. Instead, they've all embraced Twitter.

It's not just McCaskill. It's McCain and Dodd and Hoekstra and Boehner and a half dozen more converts every day. And that's no accident. Twitter allows the benefits of blogs -- an authentic connection with your audience -- without exposing you to the dangers of actual, substantive engagement.

I think that's a fairly accurate assessment of Twitter. On the one hand, the Twitter message size limit really lowers the entry barrier to posting anything. It's just not that hard to write 140 characters about anything. On the other hand, because it's really hard to make any kind of sustained point in 140 characters, unless you're incredibly good with words if you want to say anything substantive (i.e., something other than "On my way to the airport") you're mostly limited to preaching to the converted, snark, one-liners, etc. After all, what else could you be expected to say in the space allotted? On the third hand, a lot of people's blogging was lifestyle updates anyway, and Twitter actually seems like a more suitable medium for that: if you want to blog about your new hat it's a lot easier if you don't feel like you have to write a review of it.


March 17, 2009

This really makes me want to move to Angola:
Because of a shortage of vaccines, doctors were unable to save any of the children taken to the city's main pediatric hospital, the hospital's chief, Dr. Luis Bernardino, told the United Nations' Office for the Coordination of Humanitarian Affairs. In any case, many of the children were brought in too late to save, he added.

Rabies deaths in Luanda, where more than four million people are jammed into vast slums, may actually be much higher; the count was based on one hospital.

Roaming packs of dogs have been blamed. Even after deaths began last year, the city did little, Dr. Bernardino said, because it has no service to vaccinate dogs. However, recent news reports from Angola have been contradictory. One said thousands of dogs had been rounded up for vaccination, tested and released if they were rabies-free -- but that can be determined only after the dog has died, by taking a brain sample.

That's pretty scary. This week it's packs of rabid dogs, next week it's zombies. There's no vaccine for zombieism, either, but at least you can tell whether they're infected before they're dead.


March 16, 2009

Perry Metzger (via Mangan's and Patri Friedman) points me to this paper on the impact of Vitamin C on endurance training. The basic result is that Vitamin C supplementation in rats seems to significantly decrease the effect of endurance training. The hypothesized mechanism (and they have expression studies to back it up) is that the production of free radicals during exercise stimulates mitochondrial development in the muscle and that taking antioxidants interferes with this mechanism, resulting in a reduced training effect: there isn't a significant impact on VO2max, but rats treated with training alone will run significantly longer than those treated with training plus vitamin C (where the test is a forced treadmill run with shock as incentive). The authors also did a small human study, and the vitamin C group performs worse but the results aren't statistically significant.

Some initial thoughts:

  • Obviously, it would be nice to see a bigger trial on humans.
  • The study was done on untrained rats and humans. It would be interesting to see a similar study with trained athletes to see if there is any difference.
  • One of the reasons that athletes tend to supplement with C is on the theory that it improves immune function. Getting sick even once has a huge impact on your training cycle. I don't think the data on immune function is really that convincing, but to the extent to which C does prevent you getting sick, you would need to balance that against the impact on training.
  • If C inhibits the training effect, what impact does it have in the post-training period? Is there an argument for some sort of vitamin C cycling?

All that said, I recently ran out of vitamin C and this is making me rethink, at least a little, whether I want to buy more.


March 15, 2009

I've had my Roclite 295s for a bit over two months now, so I've got enough experience for long-term comments. First, they're still really comfortable. You quickly get used to the low heel and the impact it has on your stride, and they're great in the mud, which we've had a lot of with all the recent rain.

My only real complaint is that the inner part of the heel wears incredibly fast. I've got about 200 miles on the shoe and I've pretty much completely worn through both layers of the lining and through the foam on the right shoe, which means now I blister on each heel unless I tape up before every run. This happens to all my shoes eventually, and even with a shoe with a tough liner like my hiking boots, I tend to blister in the same spot, so I suspect that there's something about my foot shape and my stride that creates a lot of friction at the heel. That said, 200 miles is pretty fast; I don't want to have to buy new shoes every 2 months or so. I went by Zombie Runner to buy a new pair and asked if there was anything I could do, and they suggested Engo patches, which are designed to go on the inside of the shoe and reduce friction. I slapped some on today and while it looks like I need to do some trimming to the right size, so far so good:

Good thing, too, since my new pair is on back-order. Hopefully, if I slap the patches on as soon as I get them, I can extend the lifetime of the shoe towards something more like other shoes I have used.


March 14, 2009

I had occasion to wipe my disk the other day prior to bringing the machine into Apple (this never materialized, but that's another story), and Allan Schiffman asked why I trusted Disk Utility if I didn't trust Apple's techs. Now, I do have a story for this, but it suggests an interesting question: say you want to wipe the disk on a computer you don't trust; how can you be sure the wipe was successful? The key problem is that the computer can lie, so just telling it to write zeros (or anything else) to the disk doesn't help. It can just pretend to do the write and then "forget". So you need some way to verify the data was actually written.

The natural thing to do is to read it back. So, for instance, we could write block i and then read it back immediately. Unfortunately, once we've done that there's nothing stopping a malicious machine from writing block i+1 over block i, and keeping the real block i+1 around. In the limit, then, the attacker just needs to erase one disk block, the one it uses as temporary storage. In order to demonstrate complete erasure, then, you need to force the target machine to fill the entire disk with data of your choice, thus leaving no room for the original data (cf. pigeonhole principle).

So, here's a first cut at an approach:

  1. Mount the disk on the target machine remotely on a trusted system.
  2. Write B blocks of data to the target system where B is the capacity of the drive.
  3. Read back all B blocks. If they match, declare success. Otherwise, fail.

Now, as an optimization we can do the job with very little state on the trusted system, as long as we use a little crypto. The basic idea is to write predictable pseudorandom data to the target machine. It needs to be predictable to us but not the target machine, and it needs to be pseudorandom to avoid the target machine compressing it [1] and using some of the remaining space to save the original data. The natural approach here is just to choose a random cryptographic key K and for block i write F(K, i) where F is a function that produces a pseudorandom block from a key and an index (i.e., counter mode). The only state that this requires on the trusted system is the key itself, which is tiny.
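A sketch of F(K, i). I'm using HMAC-SHA256 in counter mode as the PRF to keep this standard-library only; a real implementation might prefer AES-CTR, and the 4096-byte block size is an arbitrary choice:

```python
import hashlib
import hmac

BLOCK_SIZE = 4096  # bytes per disk block, for this sketch

def prf_block(key: bytes, index: int) -> bytes:
    """F(K, i): one disk block of key-dependent pseudorandom fill.
    The output is expanded from HMAC-SHA256 over (block index, counter),
    so the verifier can regenerate any block knowing only the key."""
    out = bytearray()
    counter = 0
    while len(out) < BLOCK_SIZE:
        msg = index.to_bytes(8, "big") + counter.to_bytes(4, "big")
        out += hmac.new(key, msg, hashlib.sha256).digest()
        counter += 1
    return bytes(out[:BLOCK_SIZE])
```

Because the fill is deterministic given K, the trusted system can later recompute block i for comparison without ever having stored it; because it's pseudorandom to anyone without K, the target machine can't compress it or predict it.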

Similarly, we can avoid reading all the data by forcing the target machine to compute a function that requires all the data. For instance, we give it a random key K_2 and force it to return MAC(K_2, data). As long as K_2 is not known during the storage phase, the target needs to know all the data to compute this function. [I'm oversimplifying a little here, but this is true for realistic MAC functions.] This technique requires a slightly richer interface on the target machine, since MAC computation isn't a standard feature of remote disks, but we can get this functionality by installing a small agent, and that agent need not be trusted.
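The check on the target side might look like the following sketch. The verifier regenerates the same block sequence from its own key and recomputes the MAC, so it holds only two small keys, never a copy of the disk contents:

```python
import hashlib
import hmac

def storage_proof(k2: bytes, blocks) -> bytes:
    """MAC(K_2, data) over the full disk contents, computed incrementally
    block by block. Because K_2 is only revealed after the data has been
    written, the target cannot precompute this value without actually
    retaining all the blocks."""
    mac = hmac.new(k2, digestmod=hashlib.sha256)
    for block in blocks:
        mac.update(block)
    return mac.digest()
```

Passing `blocks` as an iterable means the verifier can feed in regenerated blocks one at a time and the target can stream them off the disk; neither side needs the whole data set in memory at once.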

Unfortunately, as Allan pointed out to me, this assumes that you know exactly how much storage is available to the target machine. In practice, however, you don't. First, the machine has some memory, and it can use that as buffer storage. You might be able to get around this with a reboot from CDROM, however. Second, a really malicious manufacturer might lie to you about the disk capacity. Indeed, real disks don't have exactly their rated capacity because some space is used for bad sector concealment. In general, if the machine has storage capacity S and you think it has capacity S' then it can always retain S - S' worth of data that can't be determined via the mechanism described above. If you're really serious about removing all the data on your drive and for some reason you absolutely don't trust the drive or the computer, physical destruction is the way to go.

1. Readers may notice a connection with proofs of retrievability (see, for instance [SW08]), but this problem differs in that we want to impose a stricter condition on the target, not just that it be able to retrieve B blocks but that it actually consume that much storage.

Acknowledgement: Thanks to Hovav Shacham for talking this over with me.
UPDATE: Minor editorial.


March 13, 2009

OK, it's depressing that we need it, but this is a pretty cool idea:

Basically, it's a portable, wheeled shelter designed so that homeless people can sleep in it at night and move it around with their belongings inside during the day.


March 12, 2009

I'm not an expert on quantum computing, but luckily EG reader Dan Simon is. The other day, in the comments section, he explained why he doesn't think it's very relevant. It's worth a read:

My impression is that watermarking is to signal processing what quantum computing is to theoretical physics, or cryptography to number theory: a way for a massive oversupply of researchers in a once-proud field to make a claim to relevance.


Basically, there is one thing that quantum computers have been found to be capable of doing much better than classical computers. That one thing has been characterized variously as "finding hidden subgroups", "solving the abelian stabilizer problem", or "finding periodicities in abelian groups". Because this one thing happens to lead to polynomial-time algorithms for integer factoring and discrete log, quantum computers have been bandied about as an incredible new computing technology, but the truth is that this one thing is really very limited in scope, and in a decade and a half, nobody's found another significant application for it.

Moreover, there are lots of (admittedly informal) reasons for believing that quantum computers can't really do anything interesting beyond this one thing. So we're left with a technology that, even if perfected*, is unlikely to be able to accomplish anything of interest beyond solving a certain narrow class of number theory problems.**

Dan goes on to observe that there are other public key algorithms not in wide use that don't appear to be vulnerable to quantum computing.

This brings us to another class of people besides quantum computing researchers with an interest in hyping the technology: people working on alternatives to factoring and discrete-log based cryptosystems. The deployment cycle of new public key algorithms is incredibly slow: to a first order, everyone outside the government is still using RSA. This means that new public key algorithms with similar "interfaces" to existing algorithms (e.g., they're interchangeable but faster or more secure, etc.) don't have much of a real-world value proposition outside of specialized niches, especially as there are a whole slew of existing algorithms with better properties based on elliptic curves, pairings, etc. But if QC actually worked, then those systems would all be broken and we'd need to reinvent them based on different problems: instant job security for cryptographers.


March 11, 2009

OK, so the new iPod Shuffle looks incredibly sweet, except for the tiny little detail that there are no buttons on the unit. Instead, they're in the headphones, which, as Pogue observes, locks you into the Apple headphones, which don't sound very good and which I, at least, find hideously uncomfortable. The nice new feature, however, is synthesized voice prompts, which (if they work, which is a big if) remedy the major drawback of the old shuffle: you can't tell what's going on because there's no display. Pogue claims that you'll still be able to get the old shuffle, so maybe they'll add the voice feature to that as well at some point. Until then, I think I'll stick with a nano.

March 10, 2009

Over the past few years I and a few collaborators have been working to develop a better system for key establishment for standards-based real time voice (i.e., SIP/RTP). Skype already has such a system, but unfortunately it's a closed system, and the available systems for SIP and RTP had some serious problems. While this job is far from finished, today the IESG approved the first round of documents describing the two major pieces of this protocol: draft-ietf-avt-dtls-srtp and draft-ietf-sip-dtls-srtp-framework. There are still a few smaller documents, and the minor task of getting widespread implementations remains, but this is definitely progress. Thanks are due to everyone who contributed to this effort.

March 9, 2009

I'm planning on doing some more ultras this year and I thought it might be a good idea if I actually trained for them. Triathlon experience indicates you should train the way you plan to race, so I'm trying some new stuff this time:

Hydration pack: support in tris and road races is pretty good, but with trail ultras, the distance between aid stations is a lot longer, both in time and distance. Road races typically have aid stations 1-3 miles apart; for trail races it's more like every 5 miles, and because there's a lot more climbing, that's something like every 30-60 minutes, so you need to carry fluid. You've basically got four options: (1) carry bottles, (2) wear a bottle in a waist pack, (3) wear a fuel belt, and (4) wear a hydration pack. I don't like to carry stuff in my hands, and all the bottle-on-belt schemes seem to max out at about 30-40 ounces, which isn't really enough, and you also have to fumble with the bottles, which is a pain. I thought this time I'd try a hydration pack, and after reading a bunch of reviews I settled on the Patagonia Houdini (no longer available, it seems).

So far, I'm pretty pleased. It takes a bit of getting used to initially, since the weight on your shoulders is different and it seems like it's going to rub on your neck or clavicle. The natural thing for a backpacker to do is to try to take the weight off with the hip belt, but it actually wants to ride higher up on your waist, which is initially a bit odd, but not uncomfortable, really. I only have two small complaints: the drinking tube (I went with a Platypus) tends to slip down a bit and I had to keep readjusting it back into the pack. I think I can fix this with a little adjustment inside the pack. The other problem is that it tends to pull your shirt/jersey up a bit, so I'll want something a bit longer in the future.

PowerGel (new): I used to race with Powergel exclusively, but then Powerfood reformulated it with 300% more sodium and it just seemed too salty, at least the chocolate version [*]. Two weeks ago, though, I was out running and really noticed I wanted more salt, so last weekend I gave it another try with the raspberry cream and strawberry-banana flavors. It's still too salty, but not quite as disgusting somehow, and with the hydration pack you can wash it down quickly. I think I'll be using it more on long runs.

Running with music: The last few races I did, I noticed a lot more people wearing headsets. This seemed kind of odd to me—don't you want to focus on the race?—but after reading this post by Scott Dunlap, I thought I'd give it a shot. I have an old iPod nano with a broken display, but it's almost unnoticeable in the pocket of my Race Readys. The only problem I had was that the headphone cord kept pulling out, until I turned it the other way up so that the cord exits at the bottom, at which point everything was fine. It's hard to evaluate whether music actually makes a difference in your performance, but it certainly decreases the boredom level, which starts to get significant after 2 hours. It's probably worth spending some time tuning the music to the right inspiring level, but that seems like it should be easy enough.


March 8, 2009

A few years ago I briefly subscribed to Bicycling Magazine. I don't think I paid for it; it was some sort of freebie from my credit card company or something like that. Anyway, I quickly came to the conclusion that its primary purpose was to induce me to shell out for new gear and declined to renew my subscription. Despite that I continue to receive copies on a semi-regular basis, complete with the requisite occasional threats about how if I don't pay up I'll stop getting them. Empty threats, apparently, since it's been years now. This makes me wonder, exactly what do I have to do to stop getting the magazine? I realize that the business of magazine publication is primarily advertisement, but don't the advertisers want any evidence at all that people actually subscribe voluntarily?

March 7, 2009

I understand that in-theater videotaping of movies is a major source of piracy, but it's hard to understand the threat model under which this is a useful technique:
In recent years, the problem of camcorder piracy in theaters has become more serious due to technical advances in camcorders. In this paper, as a new deterrent to camcorder piracy, we propose a system for estimating the recording position from which a camcorder recording is made. The system is based on spread-spectrum audio watermarking for the multichannel movie soundtrack. It utilizes a stochastic model of the detection strength, which is calculated in the watermark detection process. Our experimental results show that the system estimates recording positions in an actual theater with a mean estimation error of 0.44 m. The results of our MUSHRA subjective listening tests show the method does not significantly spoil the subjective acoustic quality of the soundtrack. These results indicate that the proposed system is applicable for practical uses.

OK, so let's say that this works as advertised: why does it help? The full article is behind a paywall, but I'm assuming the way this is supposed to work is that you wait for a pirated movie to show up on the file sharing network, then what? Let's assume that each print is separately marked, so you can tell what theater it was taken in and what position the camera was in. I still see several problems.

First, you need to figure out which showing the video was taken at. The easiest way to do this is probably to inject a signal into either the audio or video. As I understand the situation, modern projection equipment generally uses digital audio, so I suppose it's possible that you can reprogram the projection system to add a time signal to the audio track somehow; if you're using digital projection you could probably add it to the video as well. Even so, it seems to me that this technique requires new equipment or at least new software at every theater. That's a pretty significant investment.

Second, you need to be able to go from the position of the camera in the theater to the person doing the taping. Even if we assume that the camera position and the perpetrator's position are the same, people typically sit within a half meter or so of each other, so in a packed theater, there are probably about 4-8 people who potentially did the taping. Or, rather, you now know what seat they were sitting in. But theaters don't typically know where people are sitting, so now we need some way to keep records of where people are sitting, which either means IDing customers and having assigned seating, photographic records of where people are sitting, or both. That's a major change in the way theaters do business.

Of course, even if the theaters (or rather the movie distributors or MPAA) did all this stuff, if they actually started going after pirates this way, it should be pretty easy to circumvent. The low tech countermeasure is just to put the microphone somewhere else in the theater. The high tech countermeasure is to use signal processing techniques to tamper with the time signal, remove the theater-specific watermarks, or just fuzz things enough to remove the information used for positioning. For that matter, when you go into the theater to pirate the film you could presumably—and this is pretty advanced stuff—wear some sort of disguise.

UPDATE: I should probably mention that there's a /. thread on this, which is where I originally saw it. The remote mike idea was suggested there, but it's pretty immediately obvious as soon as you hear about this technique.


March 6, 2009

The Cook County Sheriff is suing Craigslist to force them to remove erotic services (i.e., prostitution) advertisements from the site:
"I've said all along that I'm not blaming them for prostitution," Dart said. "What I am blaming them for is that one part of their site is being horribly misused. Either shut that part of the site down or put some real monitoring in place."

Craigslist, the Web's biggest publication of classified advertisements, promised in November to begin cracking down on ads for prostitution after coming under fire by several state attorneys general.

"Misuse of Craigslist to facilitate criminal activity is unacceptable, and we continue to work diligently to prevent it," said Susan MacTavish Best, a Craigslist spokeswoman. "Misuse of the site is exceptionally rare compared to how much the site is used for legal purposes. Regardless, any misuse of the site is not tolerated on Craigslist.

"Craigslist is an extremely unwise choice for those intent on committing crimes since criminals inevitably leave an electronic trail to themselves," Best continued. "On a daily basis, we are being of direct assistance to police departments and federal authorities nationwide."

I don't really understand Craigslist's argument here. A quick look at the erotic services ads makes it pretty clear that it's full of advertisements for prostitution. It's true that many of the advertisements don't explicitly quote prices, but some do and it's pretty clear what the rest are about. CL's rationale for offering this category of services is to facilitate legal services (escorts, massage, etc.) and I guess there is some plausible deniability that that's what these ads are for rather than for prostitution, but it's more of the "you can't be totally sure" variety than something you'd really believe. It's certainly true that it wouldn't be very convenient for CL to censor this section of their site—and the idea that users are going to do any censoring is pretty implausible. So, whatever CL's intentions, I think it's pretty clear that their system facilitates prostitution and that whatever measures they are taking aren't really sufficient to suppress it.

That said, it's quite possible that as CL claims, the CDA preempts state-level enforcement, so this may all be irrelevant.


March 5, 2009

The Times reports on a discarded jug containing a very small amount of plutonium discovered at the Hanford production facility. This part is pretty cool, though:
Through isotopic analysis, reactor simulations and other techniques, Dr. Schwantes and his team determined when the plutonium was separated and which reactor provided the fuel. Since every reactor produces spent fuel with a unique "fingerprint" of small variations in isotopic concentrations, similar analyses could help investigators determine the source of material for a terrorist bomb.

The researchers also demonstrated how another isotopic signature could be used to calculate when an amount of plutonium had been split from a larger batch, and how big the original batch was. That could aid in determining whether a seized amount of plutonium represented only part of a larger cache.

Overview here (original article here, but behind a paywall). I'd seen this in The Sum of All Fears, but I didn't know it actually worked. The coolest part is that they are able to detect when you divide a sample: it turns out that sodium-22 production depends on the amount of Pu in the sample, so you can use it to model the history of the sample, including the size of the original batch.
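
The separation-date part, at least, is standard radiochronometry, sketched here with round numbers (I'm not claiming this is the paper's exact method): Pu-241 decays to Am-241 with a half-life of about 14.35 years, so the Am-241/Pu-241 ratio dates the last chemical purification.

```python
import math

HALF_LIFE_PU241_YEARS = 14.35

def separation_age_years(am241_to_pu241_ratio):
    """Years since americium was last chemically separated out,
    ignoring Am-241's own much slower decay (432-year half-life)."""
    lam = math.log(2) / HALF_LIFE_PU241_YEARS
    # Am/Pu = (1 - exp(-lam*t)) / exp(-lam*t) = exp(lam*t) - 1
    return math.log(1.0 + am241_to_pu241_ratio) / lam

print(round(separation_age_years(1.0)))  # 14: a 1:1 ratio means one half-life
```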


March 4, 2009

The Supremes decided today that the fact that a drug is FDA approved doesn't pre-empt damages lawsuits for inadequate labelling. The most interesting part of this case for me, though, is that Phenergan (promethazine) can cause "irreversible gangrene" if accidentally injected into an artery rather than a vein. Moreover, it's apparently somewhat tricky1 to administer correctly via an IV injection:

Due to the close proximity of arteries and veins in the areas most commonly used for intravenous injection, extreme care should be exercised to avoid perivascular extravasation or unintentional intra-arterial injection. Reports compatible with unintentional intra-arterial injection of PHENERGAN Injection, usually in conjunction with other drugs intended for intravenous use, suggest that pain, severe chemical irritation, severe spasm of distal vessels, and resultant gangrene requiring amputation are likely under such circumstances. Intravenous injection was intended in all the cases reported but perivascular extravasation or arterial placement of the needle is now suspect. There is no proven successful management of unintentional intra-arterial injection or perivascular extravasation after it occurs. Sympathetic block and heparinization have been employed during the acute management of unintentional intra-arterial injection, because of the results of animal experiments with other known arteriolar irritants. Aspiration of dark blood does not preclude intra-arterial needle placement, because blood is discolored upon contact with PHENERGAN Injection. Use of syringes with rigid plungers or of small-bore needles might obscure typical arterial backflow if this is relied upon alone.

I knew that some phenothiazines caused injection site irritation, but until recently I didn't know that promethazine was this bad. This seems like an excellent reason to avoid promethazine injections altogether, and if you must get them, to have them done in your non-dominant hand.

1. Off-topic rant: why does Baxter think it's a good idea to password protect the PDF to prevent cutting and pasting? Further, why does Apple's PDF viewer—let alone GMail's "view as HTML" feature—think it's a good idea to enforce this kind of caveman DRM? That said, Ghostscript seems to have your interests rather more at heart.


March 3, 2009

Last night I watched David Mamet's Redbelt. One of the difficult things about filming martial arts movies is that you have to compromise between realism and excitement, because fights between good fighters aren't that dramatic unless you really know what you're looking for. As one of my former instructors pointed out, the sort of good clean mechanics you want if you're going to win a fight just don't film that well. On the other hand, the main character of Redbelt is a Jiu-Jitsu instructor (Brazilian, I get the impression, though it's not entirely clear), and that's not an unrealistic starting point, especially for MMA in the early years, which were dominated by BJJ practitioners. It turns out that Mamet is a BJJ purple belt (this is hard to get; BJJ doesn't award belts as easily as your average karate dojo).

The cinematography is really choppy, so the action is hard to follow, but the training scenes aren't too far off, with the exception of there not being some Brazilian guy yelling at you in Portuguese. However, one of the central plot points is a training technique where one fighter is randomly assigned a handicap (e.g., one arm tied) and has to fight someone without a handicap ("you never know when you might get injured"). This actually seems like a quasi-interesting practice mechanism, but in the movie it gets used in competition, and that strikes me as totally unrealistic. Having one hand tied is a huge handicap. For example, if you're right handed, when you throw a jab with your left hand you want to seal off your face with your right hand. If you can't do this, you leave yourself open to the other person's jab or hook. It seems to me that having your right arm tied would more or less preclude punching at all. Similar considerations apply to grappling: it's very hard to choke someone out with only one hand. If the fighters are even remotely evenly matched, the handicapped fighter is almost certainly going to lose, which kind of misses the point of the competition, since the random handicap basically decides the match.

So, this is a little odd as part of the premise for a movie...


March 2, 2009

As I mentioned, I have a Garmin Forerunner 305 (BTW, the 405 is now out and looks really sweet). Anyway, it's not bad for giving you a record of your workout, but like all GPS devices, the vertical accuracy is pretty bad (see here for an overview of why). The problem isn't just that the receivers are wrong; it's that they drift a lot over a short period of time. As an example, I just turned on my 305 and over the past 5 minutes I've seen it record anything from 5 to 57 feet. This isn't a real problem when you're using it as a straight altimeter, since you don't need to be accurate to within more than a few tens of feet. But when you're trying to measure how many feet you've climbed or descended, it's a different story. For example, here's yesterday's workout:

For those of you who live in the area, this is Rancho San Antonio: PG&E Trail + Upper Wildcat Trail (Rancho Runner code 1bEF3MNLKR654V2D1aEF3UTS6RKLNM3FEb1), nominally 16.58 miles with 2515 ft of climb. By contrast, the GPS thinks it's 5200 feet of climbing. Now, the maps that Rancho Runner is based on could be a little inaccurate, but they're not off by nearly 3,000 feet. To get a feel for what's going on, look at the last major downhill, starting at around mile 12.75 and ending around mile 13.75. This is more or less a continuous downhill with no significant uphills, but as you can see, the graph shows a nontrivial amount of climbing. I suspect the inflated aggregate ascent is basically due to this sort of error: since the measured altitude varies a bit around the true altitude, it looks like you're constantly climbing and descending even when you're not, so you get very inaccurate ascent and descent totals.
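
To see how pure vertical noise manufactures climb, here's a little simulation (the numbers are made up: a steady 500 ft descent and 15 ft of Gaussian error standing in for GPS drift):

```python
import random

random.seed(42)

# Hypothetical profile: a steady 500 ft descent over 600 samples, plus
# Gaussian noise standing in for the GPS's vertical error.
true_alt = [1000.0 - 500.0 * i / 600 for i in range(601)]
noisy_alt = [a + random.gauss(0, 15) for a in true_alt]

def total_ascent(alts):
    """Sum of positive altitude changes between successive samples,
    which is how the watch computes aggregate climb."""
    return sum(max(b - a, 0.0) for a, b in zip(alts, alts[1:]))

print(total_ascent(true_alt))   # 0.0: it's all downhill
print(total_ascent(noisy_alt))  # thousands of feet of phantom "climb"
```

The true track has zero ascent, but summing the noisy positive deltas yields thousands of phantom feet, which is exactly the 2515-vs-5200 pattern above.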

What you really want, of course, is to correct the GPS readings with a barometric altimeter, and you can get watch-sized altimeters. For instance, I have a Polar 625 SX. That said, the Polar isn't small and the Forerunner isn't small either, and I think it's fair to assume that if you stick them together, it's not going to get any smaller. So, it's interesting to ask whether you can correct the errors via software-only fixes.

Obviously, if you're willing to stand in one place long enough, you can average out the error, but that's not very useful if you're running, since you may actually be changing your altitude: the system needs to discriminate between real altitude changes and GPS error. This isn't to say you can't average at all, though: one possibility is to assume that there's some maximum slope to the trail and fit some sort of smoothing curve (e.g., a Kalman filter, a spline, an FFT to remove high frequency components, etc.) to the data points and then use that to remove some of the error. Unfortunately, this only works to the extent that the GPS drift is faster than the real rate of climb or descent, and I'm not sure that's true. I'm seeing fairly high levels of drift (3-5 fps) with the unit sitting on my couch, but I have lousy reception here and it might be better outside. A moderately steep trail can easily drop 1-2 fps, so we're right on the edge of this working.
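
Here's a rough sketch of the smoothing approach, using a plain moving average (a much cruder filter than a Kalman filter or spline, and the window size is made up) applied to the same kind of simulated noisy descent:

```python
import random

random.seed(7)

def moving_average(xs, window=15):
    """Centered moving average; the window narrows at the ends."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        seg = xs[max(i - half, 0):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def total_ascent(alts):
    """Sum of positive altitude changes between successive samples."""
    return sum(max(b - a, 0.0) for a, b in zip(alts, alts[1:]))

# Same made-up setup: a steady 500 ft descent plus simulated GPS noise.
true_alt = [1000.0 - 500.0 * i / 600 for i in range(601)]
noisy_alt = [a + random.gauss(0, 15) for a in true_alt]

raw = total_ascent(noisy_alt)
smoothed = total_ascent(moving_average(noisy_alt))
print(raw > smoothed)  # True: smoothing removes most of the phantom climb
```

In this toy case the smoothed track wipes out most of the phantom ascent precisely because the simulated noise varies faster than the simulated slope; on a trail where that assumption fails, the filter would start eating real climb too.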

Another idea, suggested by Kevin Dick, is to estimate the aggregate drift of the system against some natural reference (e.g., a stretch where the horizontal position is more or less fixed) and then use that to produce an overall correction factor. It's hard to predict how well this would work, too, since it depends on the vertical error rate being approximately constant. I'm not sure that's actually true, though, since it depends on how many satellites are in view, their elevations, etc. If the vertical drift isn't relatively constant, your correction factor will be completely out of whack, and you'll still get bogus results.
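
This is how I read Kevin's suggestion (the idle-period numbers below are invented): measure the phantom climb rate while the unit is known to be stationary, then subtract that rate, pro rata, from the whole run.

```python
# Sketch of a drift-rate correction. Only valid if the vertical error
# rate is roughly constant over the run, which is the doubtful part.
def drift_corrected_ascent(run_ascent_ft, run_minutes,
                           idle_ascent_ft, idle_minutes):
    """Scale the stationary drift rate up to the run's duration
    and subtract it from the measured aggregate ascent."""
    drift_per_min = idle_ascent_ft / idle_minutes
    return run_ascent_ft - drift_per_min * run_minutes

# e.g., 40 ft of phantom climb in 5 stationary minutes, a 3-hour run:
print(drift_corrected_ascent(5200, 180, 40, 5))  # 3760.0 corrected feet
```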

UPDATE: Cleaned up the discussion of filtering a bit.


March 1, 2009

This is interesting. Volt, one of Microsoft's major job shops, just, uh, asked their employees to take a 10% pay cut:

We have evaluated all pay rates for our Microsoft agency temporary workers and have concluded that we will be asking each of you to share in these measures by accepting a 10% reduction in your pay rate. These reductions are very difficult for Volt to implement since we value each and every one of you; however this is mandatory in order to continue your assignment at Microsoft and to respond to this economic environment.

We want to support you in continuing your assignment at Microsoft and respectfully ask that you respond by going to the upper left hand corner of this email under the "Vote" response option and select, "Accept" by close of business Tuesday, March 3, 2009. By accepting you agree to the pay adjustment in your pay rate. Volt has prepared a formal written amendment to your employment agreement for your signature and will execute this amendment in your scheduled meeting.

That's sure a delicate way of putting it. If it's "mandatory", then you're not "voting"; you're simply accepting an ultimatum: "take a 10% pay cut or lose the work." I'm not saying that Volt has done something wrong, but it's not like people are going to be somehow fooled into thinking they're voluntarily taking one for the team.