EKR: February 2012 Archives


February 25, 2012

Disclaimer: I am not a car guy. Read the following with that in mind.

As long-time EG readers will know, I've complained in the past that my Prius has a feeble starter/electronics battery which is easy to run down just by leaving the interior lights on, despite the fact that the Prius has a huge battery running the hybrid system to draw on. But the Tesla approach, in which the vehicle systems do draw on the main battery, turns out to have a failure mode I certainly wouldn't want. Michael DeGusta reports that if you leave your Tesla parked for a long time (like months), the car bleeds enough power off of the battery to run the auxiliary vehicle systems [parasitic load] to drain it down into deep discharge (and hence battery damage) territory:

A Tesla Roadster that is simply parked without being plugged in will eventually become a "brick". The parasitic load from the car's always-on subsystems continually drains the battery and if the battery's charge is ever totally depleted, it is essentially destroyed. Complete discharge can happen even when the car is plugged in if it isn't receiving sufficient current to charge, which can be caused by something as simple as using an extension cord. After battery death, the car is completely inoperable. At least in the case of the Tesla Roadster, it's not even possible to enable tow mode, meaning the wheels will not turn and the vehicle cannot be pushed nor transported to a repair facility by traditional means.

The amount of time it takes an unplugged Tesla to die varies. Tesla's Roadster Owners Manual [Full Zipped PDF] states that the battery should take approximately 11 weeks of inactivity to completely discharge [Page 5-2, Column 3: PDF]. However, that is from a full 100% charge. If the car has been driven first, say to be parked at an airport for a long trip, that time can be substantially reduced. If the car is driven to nearly its maximum range and then left unplugged, it could potentially "brick" in about one week. Many other scenarios are possible: for example, the car becomes unplugged by accident, or is unwittingly plugged into an extension cord that is defective or too long.

When a Tesla battery does reach total discharge, it cannot be recovered and must be entirely replaced. Unlike a normal car battery, the best-case replacement cost of the Tesla battery is currently at least $32,000, not including labor and taxes that can add thousands more to the cost.

There's been a lot of controversy about this report (see, for instance, this defense), but Tesla's response seems to be consistent with DeGusta's basic argument, as does the letter that Jalopnik reproduces:

All automobiles require some level of owner care. For example, combustion vehicles require regular oil changes or the engine will be destroyed. Electric vehicles should be plugged in and charging when not in use for maximum performance. All batteries are subject to damage if the charge is kept at zero for long periods of time. However, Tesla avoids this problem in virtually all instances with numerous counter-measures. Tesla batteries can remain unplugged for weeks (even months), without reaching zero state of charge. Owners of Roadster 2.0 and all subsequent Tesla products can request that their vehicle alert Tesla if SOC falls to a low level. All Tesla vehicles emit various visual and audible warnings if the battery pack falls below 5 percent SOC. Tesla provides extensive maintenance recommendations as part of the customer experience.

At present, then, the agreed-upon facts seem to be that:

  1. If you leave the Tesla's batteries at zero charge, battery damage occurs.
  2. If you leave a Tesla unplugged for long enough, even with a charged battery, parasitic load from the vehicle systems will eventually consume the battery's charge, leaving you in state (1) above. [Note that this drain appears to exceed the lithium-ion self-discharge rate, so it is likely parasitic load rather than the cells themselves.]

The controversy really seems to be about whose fault this is, namely whether the customer should have known better, whether Tesla notified them correctly, etc. I don't have a Tesla, so I don't care about that. I'm much more interested in the engineering question of what's going on and what, if anything, can be done about it.

The parasitic load thing isn't totally unfamiliar territory, of course. Any modern vehicle has electronics, and those need power, which they get from the battery. Some do a better job than others. My BMW R1200GS motorcycle, for instance, has this problem, and the manual explicitly tells you to connect it to a trickle charger (an expensive BMW model, of course, though you can use a standard one if you're willing to do a tiny bit of work) if you're not going to ride it for a while, and I duly plug it into the wall whenever I get home. If you don't do that, however, the worst you're going to be out is a new lead-acid battery, which, depending on what vehicle you have, costs something like $50-$200, not $40,000.

However, the level of load we're talking about here seems awfully high. Remember that we're talking about a battery capable of powering your car for 200 miles or so on a single charge (53 kWh). In order to deplete the battery in 11 weeks (~1,850 hrs) you would need continuous battery consumption of around 30 W. For comparison, a MacBook Air has a 50 Wh battery and gets something like 5 hours on a charge (i.e., ~10 W), so it's like the Tesla is running 3 Airs at once 24x7. It's natural to ask where all that power is going, since you don't need anywhere near that much to keep a vehicle on standby. One likely source seems to be the battery cooling system, of which Wikipedia says "Coolant is pumped continuously through the ESS both when the car is running and when the car is turned off if the pack retains more than a 90% charge. The coolant pump draws 146 watts." [Original reference and long discussion here. Note that this post is due to Martin Eberhard, one of the Tesla founders, though apparently no longer with the company at the time he wrote it. Thanks Wayback Machine for preserving this!].
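
Just to make that arithmetic explicit, here's the back-of-the-envelope version (the 53 kWh figure is the commonly cited Roadster pack capacity; everything else follows from the manual's 11 weeks):

```python
# Back-of-the-envelope check of the standby drain implied by the manual.
pack_wh = 53_000              # 53 kWh Roadster pack, in watt-hours
hours = 11 * 7 * 24           # 11 weeks of inactivity ~= 1,848 hours
drain_w = pack_wh / hours     # ~28.7 W of continuous parasitic load

air_w = 50 / 5                # MacBook Air: 50 Wh battery / ~5 h = ~10 W
print(f"{drain_w:.1f} W ~= {drain_w / air_w:.1f} MacBook Airs, 24x7")
# -> 28.7 W ~= 2.9 MacBook Airs, 24x7
```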

Obviously, if you have a load this high, then you're going to deplete the battery. The question then becomes whether there is some way of avoiding permanent battery damage as the depletion gets to dangerous levels. The natural thing to do is to install some sort of cutoff that turns off all power drain once you get close to that level, as in the sketch below. This may end up blowing away a bunch of the car's configuration (though really, it's not that hard to store that stuff in flash memory, even though historically manufacturers have tended not to), but surely it's cheaper to reboot your car than to replace the entire battery pack. However, if the power is going to the cooling system and the cooling system is doing something important, like keeping the battery from being damaged by excessive heat, then this may not help.
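
To be concrete, the kind of cutoff I have in mind looks something like the following. Every name and threshold here is invented for illustration; real battery management systems obviously differ:

```python
# Hypothetical low-charge cutoff: below some floor, persist any volatile
# configuration and shed all discretionary loads rather than deep-discharge.
CUTOFF_SOC = 0.05    # illustrative 5% floor; real safety margins would vary

def standby_tick(battery, config_store, subsystems):
    """Run periodically while the car is parked."""
    if battery.state_of_charge() > CUTOFF_SOC:
        return
    config_store.flush_to_flash()   # rebooting the car beats buying a new pack
    for system in subsystems:
        system.power_off()          # note: this includes the coolant pump,
                                    # which is exactly why it may not be safe
```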

Oh, one more thing. DeGusta claims that Tesla has the capability to remotely monitor the battery and locate the car, and has sent people out to rescue dying vehicles:

In at least one case, Tesla went even further. The Tesla service manager admitted that, unable to contact an owner by phone, Tesla remotely activated a dying vehicle's GPS to determine its location and then dispatched Tesla staff to go there. It is not clear if Tesla had obtained this owner's consent to allow this tracking, or if the owner is even aware that his vehicle had been tracked. Further, the service manager acknowledged that this use of tracking was not something they generally tell customers about.

If true, that would be... interesting.


February 11, 2012

Cryptography is great, but it's not so great if you get arrested and forced to give up your cryptographic keys. Obviously, you could claim that you've forgotten your password (remember that you need a really long key to thwart exhaustive search attacks, so this isn't entirely implausible). However, since you also need to be able to decrypt your data regularly, you need to be able to remember your password, so the claim isn't entirely plausible either, which means that you might end up sitting in jail for a long time on a contempt citation. This general problem has been floating around the cryptographic community for a long time, where it's usually referred to as "rubber hose cryptanalysis", with the idea being that the attacker will torture you (i.e., beat you with a rubber hose) until you give up the key. This xkcd comic sums up the problem. Being technical people, cryptographers have put a lot of work into technical solutions, none of which are really fantastic (see the Wikipedia deniable encryption page for one summary).

Threat model
As usual, it's important to think about the threat model, which in this case is more complicated than it initially seems. We assume that you have some encrypted data and that the attacker has a copy of that data and of the encryption software you have used. All they lack is the key. The attacker insists you hand over the key and has some mechanism for punishing you if you don't comply. Moreover, we need to assume that the attacker isn't a sadist, so as long as there's no point in punishing you further they won't. It's this last point that is the key to all the technical approaches I know of, namely convincing the attacker that they are unlikely to learn anything more by punishing you further, so they might as well stop. Of course, how true that assumption is probably depends on the precise nature of the proceedings and how much it costs the attacker to keep inflicting punishment on you. If you're being waterboarded in Guantanamo, the cost is probably pretty low, so you probably need to be pretty convincing.

Technical Approaches
Roughly speaking, there seem to be two strategies for dealing with the threat of being legally obliged to give up your cryptographic keys:

  • Apparent Compliance/Deniable Encryption
  • Verifiable Destruction

Apparent Compliance/Deniable Encryption
The idea behind an apparent compliance strategy is that you pretend to give up your encryption key, but instead you give up another key that decrypts the message to an innocuous plaintext. More generally, you want a cryptographic scheme which produces a given ciphertext C which maps onto a series of plaintexts M_1, M_2, ... M_n via a set of keys K_1, K_2, ... K_n. Assume for the moment that only M_n is sensitive and that M_1, ... M_{n-1} are either fake or real (but convincing) non-sensitive data. So, when you are captured, you reveal K_1 and claim that you've decrypted the data. If really pressed, you reveal K_2, and so on.

The reason this is supposed to work is that the attacker is assumed not to know n. However, since they have a copy of your software, they presumably know that it's multilevel-capable, so they know that there may be more than one key; they just don't know whether you've given them the last one. All the difficult cryptographic problems are about avoiding revealing n. There are fancy cryptographic ways to do this (the original paper on this is by Canetti, Dwork, Naor, and Ostrovsky), but consider one simple construction. Take each message M_i and encrypt it with K_i to form C_i, then concatenate all the results to form C. The decryption procedure, given a single key, is to decrypt each of the sub-ciphertexts in turn and discard any which don't decrypt correctly (assume there is some simple integrity check). Obviously, with a scheme this trivial, it's easy for an attacker to see how many keys there are just by insisting you provide keys for all the data, so you also pad C with a bunch of random-appearing data which you really can't decrypt at all, which in theory creates plausible deniability. This is approximately what TrueCrypt does:

Until decrypted, a TrueCrypt partition/device appears to consist of nothing more than random data (it does not contain any kind of "signature"). Therefore, it should be impossible to prove that a partition or a device is a TrueCrypt volume or that it has been encrypted (provided that the security requirements and precautions listed in the chapter Security Requirements and Precautions are followed). A possible plausible explanation for the existence of a partition/device containing solely random data is that you have wiped (securely erased) the content of the partition/device using one of the tools that erase data by overwriting it with random data (in fact, TrueCrypt can be used to securely erase a partition/device too, by creating an empty encrypted partition/device-hosted volume within it).
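
To make the naive construction concrete, here's a minimal sketch in Python, using Fernet (from the cryptography package) as the "simple integrity check", since Fernet decryption fails cleanly under a wrong key. The function names are my own invention, and one caveat applies: Fernet tokens carry a recognizable header, so unlike a TrueCrypt volume the real sub-ciphertexts here are distinguishable from the random padding; a serious scheme would need ciphertexts that are themselves indistinguishable from random.

```python
# Toy multilevel encryption: encrypt each M_i under K_i, pad with random
# blobs that no key decrypts, and concatenate. NOT a real deniable scheme.
import base64
import os
import random

from cryptography.fernet import Fernet, InvalidToken

def encrypt_multilevel(messages, keys, n_pad=4):
    blobs = [Fernet(k).encrypt(m) for k, m in zip(keys, messages)]
    # Random padding hides how many real sub-ciphertexts (and keys) exist.
    blobs += [base64.urlsafe_b64encode(os.urandom(96)) for _ in range(n_pad)]
    random.shuffle(blobs)        # don't reveal which blobs are real by order
    return b"\n".join(blobs)

def decrypt_level(ciphertext, key):
    # Try the key against every sub-ciphertext; discard the failures.
    f = Fernet(key)
    plaintexts = []
    for blob in ciphertext.split(b"\n"):
        try:
            plaintexts.append(f.decrypt(blob))
        except InvalidToken:
            pass
    return plaintexts

keys = [Fernet.generate_key() for _ in range(2)]
c = encrypt_multilevel([b"grocery list", b"the real secrets"], keys)
print(decrypt_level(c, keys[0]))   # revealed under pressure: [b'grocery list']
print(decrypt_level(c, keys[1]))   # held back: [b'the real secrets']
```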

How well this works goes back to your threat model. The attacker knows there is some chance that you haven't revealed all the keys, and that maybe if they punish you further you will give them up. So, whether you continue to get punished depends on their cost/benefit calculations, which may be fairly unfavorable to you. The problem is worse yet if the attacker has any way of determining what the correct data looks like. For instance, in one of the early US court cases on this, In re Boucher, customs agents had seen (or at least claimed to have seen) child pornography on the defendant's hard drive and so would presumably have known a valid decryption from an invalid one. Basically, in any setting where the attacker has a good idea of what they are looking for and/or can check the correctness of what you give them, a deniable encryption scheme doesn't work very well, since the whole scheme relies on uncertainty about whether you have actually given up the last key.

Verifiable Destruction
An alternative approach that doesn't rely on this kind of ambiguity is to be genuinely unable to decrypt the data and to have some way of demonstrating this to the attacker. Hopefully, a rational attacker won't continue to punish you once you've demonstrated that you cannot comply. It's the demonstrating part that's the real problem here. Kahn and Schelling famously sum up the problem of how to win at "chicken":

Some teenagers utilize interesting tactics in playing "chicken." The "skillful" player may get into the car quite drunk, throwing whiskey bottles out the window to make it clear to everybody just how drunk he is. He wears dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem;

Of course, as Allan Schiffman once pointed out to me, the really skillful player keeps a spare steering wheel in his car and throws that out the window. Our problem is similar: demonstrating that you have thrown out the data and/or key and that you don't have a spare lying around somewhere.

The technical problem then becomes constructing a system that actually works. There is a huge variety of potential technical options here, but at a high level, solutions seem to fall into two broad classes: active and passive. In an active scheme, you actively destroy the key and/or the data. For instance, you could have the key written on a piece of paper which you eat, or a thermite charge on your computer which melts it to slag when you press a button. In a passive scheme, by contrast, no explicit action is required on your part, but you have some sort of deadman switch which causes the key/data to be destroyed if you're captured. So, you might store the data in a system like Vanish (although there are real questions about the security of Vanish per se), or you might have the key stored offsite with some provider who promises to delete it if you are arrested or if you don't check in every so often.

I'm skeptical of how well active schemes can be made to work: once it becomes widely known how any given commercial scheme works, attackers will take steps to circumvent it. For instance, if there is some button you press to destroy your data, they might taser you first and ask questions later, precisely to keep you from pressing it. Maybe someone can convince me otherwise, but this leaves us mostly with passive schemes (or the semi-passive schemes discussed in a bit). Consider the following strawman scheme:

Your data is encrypted in the usual way, but part of the encryption key is stored offsite in some location inaccessible to the attacker (potentially outside their legal jurisdiction if we're talking about a nation-state type attacker). That key share is stored in a hardware security module (HSM), and if the key storage provider doesn't hear from you (where you have to prove possession of some key) every week (or two weeks, or whatever), they zeroize the HSM, thus destroying your key. It's obviously easy to build a system like this where the encryption software automatically contacts the key storage provider, proves possession, and thus resets the deadman timer, so as long as you use your files every week or so, you're fine.
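
Here's a sketch of the provider's side of this strawman, with the HSM reduced to a variable that can be zeroized. The check-in protocol (an HMAC over a fresh nonce) and all the names are assumptions of mine, not any real service's API:

```python
# Strawman deadman-switch escrow: the provider holds a share of the
# encryption key and zeroizes it if the client stops checking in.
import hashlib
import hmac
import time

DEADMAN_INTERVAL = 7 * 24 * 3600   # one week, per the scheme above

class KeyEscrow:
    def __init__(self, client_mac_key, key_share):
        self.client_mac_key = client_mac_key  # lets the client prove possession
        self.key_share = key_share            # stands in for the HSM contents
        self.last_checkin = time.time()

    def checkin(self, nonce, tag):
        # The client MACs a fresh nonce (in a real protocol, server-supplied
        # to prevent replay); a valid proof resets the deadman timer.
        expected = hmac.new(self.client_mac_key, nonce, hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            self.last_checkin = time.time()

    def tick(self):
        # Run periodically by the provider: zeroize on timer expiry.
        if time.time() - self.last_checkin > DEADMAN_INTERVAL:
            self.key_share = None             # stand-in for zeroizing the HSM
```

The client side just performs a checkin() whenever the encryption software touches the files, which is what makes the scheme passive from the user's point of view.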

So, if you're captured, you just need to hold out until the deadman timer expires, and then the data really isn't recoverable by you or anyone else. Of course, "not recoverable" isn't the same as "provably not recoverable", since you could have kept a backup copy of the keys somewhere, though the software could be designed to make that inconvenient, thus giving some credibility to the argument that you did not. Moreover, this design is premised on the assumption that there is actually somewhere you could store your secret data where the attacker couldn't get it. This may be reasonable if the attacker is the local police, but perhaps less so if the attacker is the US government. And of course any deadman system is hugely brittle: if you forget your key or just don't refresh for a while, your data is gone, which might be somewhat inconvenient.

One thing that people often suggest is some sort of limited-try scheme. The idea here is that the encryption system automatically erases the data (and/or a master key) if the wrong password/key is entered enough times. So, if you can just convincingly lie N times and get the attacker to try those keys, then the data is gone. Alternately, you could have a "coercion" key which deletes all the data. It's clear that you can't build anything like this in a software-only system: the attacker will just image the underlying encrypted data and write their own decryption software which doesn't have the destructive feature. You can, however, build such a system using hardware security modules (assume for now that the HSM can't be broken directly). This is sort of a semi-passive scheme in that you are intentionally destroying the data, but the destruction is triggered by the attacker keying in the alleged encryption key.
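
Sketched as code, the HSM's enforcement logic might look like the following. The ten-try limit, the duress PIN, and the class itself are all illustrative assumptions; the point is only that the counter and the zeroization live inside tamper-resistant hardware, where the attacker can't image them away:

```python
# Limited-try unlock with a "coercion" PIN, as enforced by a hypothetical HSM.
class LimitedTryHSM:
    MAX_TRIES = 10

    def __init__(self, master_key, real_pin, coercion_pin):
        self.master_key = master_key      # never leaves the HSM unwrapped
        self.real_pin = real_pin
        self.coercion_pin = coercion_pin
        self.failures = 0

    def unlock(self, pin):
        if self.master_key is None:
            raise RuntimeError("master key destroyed")
        if pin == self.coercion_pin:      # duress code: wipe immediately
            self.master_key = None
            raise RuntimeError("master key destroyed")
        if pin != self.real_pin:
            self.failures += 1
            if self.failures >= self.MAX_TRIES:
                self.master_key = None    # zeroize after too many bad tries
            raise ValueError("bad PIN")
        self.failures = 0
        return self.master_key
```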

The big drawback of any verifiable destruction system is that it leaves evidence that you could have complied but didn't; in fact, that's the whole point of the system. But this means that the attacker's countermove is to credibly commit to punishing you for noncompliance after the fact. I don't think this question has ever been faced for crypto, but it has been faced in other evidence-gathering contexts. Consider, for instance, the case of driving under the influence: California requires you to take a breathalyzer or blood test as a condition of driving [*], and refusal carries penalties comparable to those for being convicted of DUI. One could imagine a more general legal regime in which actively or passively allowing your encrypted data to be destroyed once you have been arrested was itself illegal, with a penalty large enough that it would almost never be worth refusing to comply (obviously the situation would be different in extra-legal settings, but the general idea seems transferable). I'll defer to any lawyers reading this about how practical such a law would actually be.

Bottom Line
Obviously, neither of these classes of solution seems entirely satisfactory from the perspective of someone who is trying to keep their data secret. On the other hand, it's not clear that this is really a problem that admits of a good technical solution.