Protecting your encrypted data in the face of coercion

Cryptography is great, but it's not so great if you get arrested and forced to give up your cryptographic keys. Obviously, you could claim that you've forgotten them (remember that you need a really long key to thwart exhaustive search attacks, so this isn't entirely implausible). However, since you also need to regularly be able to decrypt your data, you need to be able to remember your password, so the claim isn't entirely plausible either, which means that you might end up sitting in jail for a long time on a contempt citation. This general problem has been floating around the cryptographic community for a long time, where it's usually referred to as "rubber hose cryptanalysis", the idea being that the attacker will torture you (i.e., beat you with a rubber hose) until you give up the key. This xkcd comic sums up the problem. Being technical people, cryptographers have put a lot of work into technical solutions, none of which are really fantastic (see the Wikipedia deniable encryption page for one summary).

Threat model
As usual, it's important to think about the threat model, which in this case is more complicated than it initially seems. We assume that you have some encrypted data and that the attacker has a copy of that data and of the encryption software you have used. All they lack is the key. The attacker insists you hand over the key and has some mechanism for punishing you if you don't comply. Moreover, we need to assume that the attacker isn't a sadist, so as long as there's no point in punishing you further, they won't. It's this last point that is the key to all the technical approaches I know of, namely convincing the attacker that they are unlikely to learn anything more by punishing you further, so they might as well stop. Of course, how true that assumption is probably depends on the precise nature of the proceedings and how much it costs the attacker to keep inflicting punishment on you. If you're being waterboarded in Guantanamo, the cost is probably pretty low, so you probably need to be pretty convincing.

Technical Approaches
Roughly speaking, there seem to be two strategies for dealing with the threat of being legally obliged to give up your cryptographic keys:

  • Apparent Compliance/Deniable Encryption
  • Verifiable Destruction

Apparent Compliance/Deniable Encryption
The idea behind an apparent compliance strategy is that you pretend to give up your encryption key, but instead you give up another key that decrypts the message to an innocuous plaintext. More generally, you want a cryptographic scheme which produces a given ciphertext C which maps onto a series of plaintexts M_1, M_2, ... M_n via a set of keys K_1, K_2, ... K_n. Assume for the moment that only M_n is the real sensitive data and that M_1, ... M_n-1 are either fake or real (but convincing) non-sensitive data. So, when you are captured, you reveal K_1 and claim that you've decrypted the data. If really pressed, you reveal K_2, and so on.

The reason that this is supposed to work is that the attacker is assumed not to know n. However, since they have a copy of your software, they presumably know that it's multilevel capable, so they know that there may be more than one key. They just don't know whether you've given them the last key. All the difficult cryptographic problems are about avoiding revealing n. There are fancy cryptographic ways to do this (the original paper on this is by Canetti, Dwork, Naor, and Ostrovsky), but consider one simple construction. Take each message M_i and encrypt it with K_i to form C_i and then concatenate all the results to form C. The decryption procedure given a single key is to decrypt each of the sub-ciphertexts in turn and discard any which don't decrypt correctly (assume there is some simple integrity check). Obviously, if you have a scheme this trivial, it's easy for an attacker to see how many keys there are just by insisting you provide keys for all the data, so you also pad C with a bunch of random-appearing data which you really can't decrypt at all, which in theory creates plausible deniability (I'll sketch this construction in code below). This is approximately what TrueCrypt does:

Until decrypted, a TrueCrypt partition/device appears to consist of nothing more than random data (it does not contain any kind of "signature"). Therefore, it should be impossible to prove that a partition or a device is a TrueCrypt volume or that it has been encrypted (provided that the security requirements and precautions listed in the chapter Security Requirements and Precautions are followed). A possible plausible explanation for the existence of a partition/device containing solely random data is that you have wiped (securely erased) the content of the partition/device using one of the tools that erase data by overwriting it with random data (in fact, TrueCrypt can be used to securely erase a partition/device too, by creating an empty encrypted partition/device-hosted volume within it).
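To make the simple construction above concrete, here's a minimal sketch in Python using the pyca/cryptography library, whose AES-GCM mode supplies the "simple integrity check". This is purely illustrative and is not TrueCrypt's actual on-disk format; the fixed slot size, the zero-byte padding, and the random filler slots are my own assumptions.

```python
# Sketch of the concatenate-and-pad construction: each message gets its own
# key and its own fixed-size slot; unused slots are random filler, so the
# volume doesn't reveal how many keys exist.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

SLOT = 4096                 # every slot is the same size, so boundaries reveal nothing
NONCE_LEN = 12
OVERHEAD = NONCE_LEN + 16   # nonce plus GCM authentication tag

def encrypt_volume(messages_and_keys, total_slots):
    """messages_and_keys: list of (plaintext, key) pairs, one slot each."""
    slots = []
    for plaintext, key in messages_and_keys:
        padded = plaintext.ljust(SLOT - OVERHEAD, b"\x00")   # naive padding
        nonce = os.urandom(NONCE_LEN)
        slots.append(nonce + AESGCM(key).encrypt(nonce, padded, None))
    while len(slots) < total_slots:
        slots.append(os.urandom(SLOT))                       # indistinguishable filler
    return b"".join(slots)

def decrypt_volume(blob, key):
    """Try the key against every slot; keep whatever authenticates correctly."""
    recovered = []
    for i in range(0, len(blob), SLOT):
        nonce, ct = blob[i:i + NONCE_LEN], blob[i + NONCE_LEN:i + SLOT]
        try:
            recovered.append(AESGCM(key).decrypt(nonce, ct, None).rstrip(b"\x00"))
        except InvalidTag:
            pass                                             # wrong key or filler slot
    return recovered

# Example use (hypothetical):
#   keys = [AESGCM.generate_key(bit_length=128) for _ in range(2)]
#   volume = encrypt_volume([(b"grocery list", keys[0]), (b"real secrets", keys[1])], total_slots=8)
#   decrypt_volume(volume, keys[0])   # -> [b"grocery list"]
```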

How well this works goes back to your threat model. The attacker knows there is some chance that you haven't revealed all the keys and that maybe if they punish you further you will give them up. So, whether you continue to get punished depends on their cost/benefit calculations, which may be fairly unfavorable to you. The problem is worse yet if the attacker has any way of determining what correct data looks like. For instance, in one of the early US court cases on this, In re Boucher, customs agents had seen (or at least claimed to have seen) child pornography on the defendant's hard drive and so would presumably have known a valid decryption from an invalid one. Basically, in any setting where the attacker has a good idea of what they are looking for and/or can check the correctness of what you give them, a deniable encryption scheme doesn't work very well, since the whole scheme relies on uncertainty about when you have actually given up the last key.

Verifiable Destruction
An alternative approach that doesn't rely on this kind of ambiguity is to be genuinely unable to decrypt the data and to have some way of demonstrating this to the attacker. Hopefully, a rational attacker won't continue to punish you once you've demonstrated that you cannot comply. It's the demonstrating part that's the real problem here. Kahn and Schelling famously sum up the problem of how to win at "chicken":

Some teenagers utilize interesting tactics in playing "chicken." The "skillful" player may get into the car quite drunk, throwing whiskey bottles out the window to make it clear to everybody just how drunk he is. He wears dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem;

Of course, as Allan Schiffman once pointed out to me, the really skillful player keeps a spare steering wheel in his car and throws that out the window. And our problem is similar: demonstrating that you have thrown out the data and/or key and that you don't have a spare lying around somewhere.

The technical problem then becomes constructing a system that actually works. There are a huge variety of potential technical options here, but at a high level, solutions seem to fall into two broad classes, active and passive. In an active scheme, you actively destroy the key and/or the data. For instance, you could have the key written on a piece of paper which you eat, or a thermite charge on your computer which melts it to slag when you press a button. In a passive scheme, by contrast, no explicit action is required by you, but you have some sort of deadman switch which causes the key/data to be destroyed if you're captured. So, you might store the data in a system like Vanish (although there are real questions about the security of Vanish per se), or you have the key stored offsite with some provider who promises to delete the key if you are arrested or if you don't check in every so often.

I'm skeptical of how well active schemes can be made to work: once it becomes widely known how any given commercial scheme works, attackers will take steps to circumvent it. For instance, if there is some button you press to destroy your data, they might taser you first and ask questions later to keep you from pressing it. Maybe someone can convince me otherwise, but this leaves us mostly with passive schemes (or semi-passive schemes, as discussed in a bit). Consider the following strawman scheme:

Your data is encrypted in the usual way, but part of the encryption key is stored offsite in some location inaccessible to the attacker (potentially outside their legal jurisdiction if we're talking about a nation-state type attacker). That offsite portion of the key is stored in a hardware security module (HSM), and if the key storage provider doesn't hear from you every week (or two weeks or whatever), with you proving possession of some authentication key each time, they zeroize the HSM, thus destroying your key. It's obviously easy to build a system like this where the encryption software automatically contacts the key storage provider, proves possession, and thus resets the deadman timer, so as long as you use your files every week or so, you're fine.
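Here's a rough sketch of what the provider's side of such a deadman timer might look like. The names (KeyEscrow, check_in), the one-week deadline, and the HMAC-based proof of possession are all my own assumptions for illustration; in a real deployment the key share and the zeroization step would live inside the HSM itself rather than in application code.

```python
# Hypothetical sketch of the provider-side deadman timer for the strawman
# scheme above: the client periodically proves possession of an auth key,
# which resets the timer; miss the deadline and the key share is destroyed.
import hmac
import hashlib
import time

DEADLINE = 7 * 24 * 3600   # one week between required check-ins

class KeyEscrow:
    def __init__(self, key_share, client_auth_key):
        self.key_share = key_share               # offsite portion of the data key
        self.client_auth_key = client_auth_key   # lets the client prove possession
        self.last_seen = time.time()

    def check_in(self, nonce, proof):
        """Client proves possession by MACing a server-chosen nonce."""
        self._expire_if_overdue()
        if self.key_share is None:
            raise RuntimeError("key share already destroyed")
        expected = hmac.new(self.client_auth_key, nonce, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, proof):
            raise ValueError("bad proof of possession")
        self.last_seen = time.time()             # reset the deadman timer
        return self.key_share

    def _expire_if_overdue(self):
        if time.time() - self.last_seen > DEADLINE:
            self.key_share = None                # "zeroize the HSM"
```

The client side would presumably combine the returned share with a locally stored share (say, via XOR or a KDF) to reconstruct the actual encryption key, so that simply using your files performs the check-in and resets the timer as a side effect.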

So, if you're captured, you just need to hold out until the deadman timer expires and then the data really isn't recoverable by you or anyone else. Of course, "not recoverable" isn't the same as "provably not recoverable", since you could have kept a backup copy of the keys somewhere, though the software could be designed to make this inconvenient, thus giving some credibility to the argument that you did not. Moreover, this design is premised on the assumption that there is actually somewhere you could store your secret data where the attacker couldn't get at it. This may be reasonable if the attacker is the local police, but perhaps less so if the attacker is the US government. And of course any deadman system is hugely brittle: if you forget your key or just don't refresh for a while, your data is gone, which might be somewhat inconvenient.

One thing that people often suggest is to have some sort of limited-try scheme. The idea here is that the encryption system automatically erases the data (and/or a master key) if the wrong password/key is entered enough times. So, if you can just convincingly lie N times and get the attacker to try those keys, then the data is gone. Alternatively, you could have a "coercion" key which deletes all the data. It's clear that you can't build anything like this in a software-only system: the attacker will just image the underlying encrypted data and write their own decryption software which doesn't have the destructive feature. You can, however, build such a system using hardware security modules (assume for now that the HSM can't be broken directly). This is sort of a semi-passive scheme in that you are intentionally destroying the data, but the destruction is triggered by the attacker keying in the alleged encryption key.
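As a sketch of what the HSM-side logic might look like, here is a hypothetical limited-try unlock routine with a coercion key. The class name, the try limit, and the decrypt_fn hook are all assumptions for illustration and not any particular vendor's design; the point is only that the failure counter and the erase step live in hardware the attacker can't bypass by re-imaging the disk.

```python
# Hypothetical sketch of limited-try / coercion-key logic as it might run
# inside an HSM: wrong guesses increment a counter, and either the duress
# key or too many failures permanently destroys the key material.
import hmac

MAX_TRIES = 3

class LimitedTryHSM:
    def __init__(self, real_key, coercion_key, decrypt_fn):
        self._real_key = real_key
        self._coercion_key = coercion_key    # "duress" key that wipes everything
        self._decrypt = decrypt_fn           # decryption routine kept inside the HSM
        self._failures = 0

    def unlock(self, candidate_key, ciphertext):
        if self._real_key is None:
            raise RuntimeError("key material destroyed")
        if hmac.compare_digest(candidate_key, self._coercion_key):
            self._zeroize()                  # attacker was handed the duress key
            raise RuntimeError("key material destroyed")
        if hmac.compare_digest(candidate_key, self._real_key):
            self._failures = 0
            return self._decrypt(self._real_key, ciphertext)
        self._failures += 1
        if self._failures >= MAX_TRIES:
            self._zeroize()                  # too many wrong guesses
        raise ValueError("decryption failed")

    def _zeroize(self):
        self._real_key = None
        self._coercion_key = None
```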

The big drawback with any verifiable destruction system is that it leaves evidence that you could have complied but didn't; in fact, that's the whole point of the system. But this means that the attacker's countermove is to credibly commit to punishing you for noncompliance after the fact. I don't think this question has ever been faced for crypto, but it has been faced in other evidence-gathering contexts. Consider, for instance, the case of driving under the influence: California requires you to take a breathalyzer or blood test as a condition of driving [*], and refusal carries penalties comparable to those for being convicted of DUI. One could imagine a more general legal regime in which actively or passively allowing your encrypted data to be destroyed once you have been arrested was itself illegal, with a penalty large enough that it would almost never be worth refusing to comply (obviously the situation would be different in extra-legal settings, but the general idea seems transferable). I'll defer to any lawyers reading this about how practical such a law would actually be.

Bottom Line
Obviously, neither of these classes of solution seems entirely satisfactory from the perspective of someone who is trying to keep their data secret. On the other hand, it's not clear that this is really a problem that admits of a good technical solution.

1 Comment

I was wondering when you were going to bring this up. I was hoping the news would be better. But not expecting it. There appear to be "physics" involved.
