COMSEC: March 2009 Archives

 

March 20, 2009

From my review of draft-meyer-xmpp-e2e-encryption-01:

The context of this draft is that currently messages in XMPP from
alice@atlanta.com to bob@biloxi.net go through Alice and Bob's
respective servers (atlanta.com and biloxi.net) in transit.  This
implies that Alice and Bob need to trust their servers both to enforce
appropriate security policies (i.e., to make sure there is TLS along
the whole path if appropriate) and not to actively subvert security
by, e.g., message sniffing, injection, etc. The purpose of this document is
to allow Alice and Bob to establish an end-to-end secure cryptographic
channel that does not rely on the server for security.

THREAT MODEL
Before talking about the draft details, it's important to get clear on
the threat model. In particular, we need to be clear on how much the
servers are trusted. There are at least three plausible models:

- The server is trusted completely (the current system).
- The server is trusted to authenticate Alice and Bob,
  but should not see the traffic.
- The server is not trusted at all.

Clearly, we're trying to do better than the first of these, so it's
between the second two.  For contrast, in SIP (cf. RFC 4474) the basic
assumption is that the proxy (the server) owns the namespace
associated with it. So, for instance, if atlanta.com decides it wants
to take the name "alice@atlanta.com" away from Alice and give it to
her sister "Alice", it can. So, the proxy is trusted to authenticate
Alice, but shouldn't see the traffic, i.e., the second model.

The security requirements for these two are different. In particular,
in the second case, you need some independent mechanism for Alice and
Bob to authenticate each other. 

I think it's important to be clear on which of these environments
you think is the dominant one. I'm sure there are *some* cases
where people don't trust the servers at all, but I suspect in most cases
they just want (1) not to have to trust the server to enforce security
policy and (2) to deter casual sniffing by server operators. In these
cases, a model where the server authenticates the users for an
E2E connection (a la DTLS-SRTP) is appropriate. If that's a common
model, then forcing all users to use a secure independent channel
just because some want to is going to be a very serious inconvenience.
My instinct is that that's a mistake.


AUTHENTICATION MECHANISMS
The design of a system in which the servers vouch for the users' identities
is fairly straightforward, with DTLS-SRTP as a model: the servers simply
authenticate the users and then pass on digests of the users' certificates
(as provided by the users) along with an authenticated indication of the
users' identities (a la RFC 4474 or even the current TLS model),
and the certificates presented on the end-to-end connection are compared
to these fingerprints.
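
For concreteness, here is a minimal sketch of the endpoint-side check in
that model (in Python, with hypothetical names; the draft doesn't specify
this code): the server relays an authenticated identity plus the fingerprint
of the peer's self-signed certificate, and the client compares that
fingerprint against whatever certificate is actually presented on the
end-to-end connection.

    import hashlib
    import hmac

    def cert_fingerprint(cert_der: bytes) -> str:
        """SHA-256 fingerprint of a DER-encoded certificate, hex-encoded."""
        return hashlib.sha256(cert_der).hexdigest()

    def verify_e2e_peer(asserted_identity: str,
                        asserted_fingerprint: str,
                        presented_cert_der: bytes,
                        expected_identity: str) -> bool:
        """Accept the end-to-end peer only if (a) the server-vouched identity
        is the one we intended to talk to and (b) the certificate presented
        on the end-to-end connection matches the relayed fingerprint."""
        if asserted_identity != expected_identity:
            return False
        return hmac.compare_digest(asserted_fingerprint,
                                   cert_fingerprint(presented_cert_der))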

As noted above, the design of a system in which the servers aren't trusted
is significantly more complicated. Roughly speaking, there are three
major techniques available here: key/certificate fingerprints, a
shared password, and a short authentication string. See
[http://www.educatedguesswork.org/2008/08/authentication.html] for
some background here.

I think it's generally agreed that fingerprints are too much of a hassle
for regular use, though if your model is that most users would be
happy without end-to-end security at all, then you might think that
fingerprints would be OK for the exceptionally paranoid.

This leaves us with SAS and shared passwords. The important interface
differences here are as follows:

- The SAS must be verified *after* the connection is set up. The password
  must be set up beforehand.
- You can use the same password with multiple people semi-safely. 
  The SAS is new for every message.
- SAS probably requires modifying TLS. There are existing mechanisms 
  for passwords.
- The SAS is "optional" in the sense that you can generate it and not
  check it. The password is "mandatory" in the sense that if it's
  specified, it must be supplied or the connection will not be set up.
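
To make the SAS side of that comparison concrete, here is a rough sketch of
how a short authentication string can be derived (this illustrates the
general technique, not anything a particular draft specifies): both
endpoints hash data they already share from the handshake, truncate the
result, and render it as a few digits the users read to each other.

    import hashlib

    def derive_sas(handshake_transcript: bytes, digits: int = 6) -> str:
        """Derive a short authentication string from shared handshake data.
        Both endpoints compute this independently and the users compare the
        values out of band (e.g., by reading them aloud)."""
        h = hashlib.sha256(b"SAS" + handshake_transcript).digest()
        # Reduce the first four bytes to a fixed number of decimal digits.
        value = int.from_bytes(h[:4], "big") % (10 ** digits)
        return f"{value:0{digits}d}"

    # Both sides see the same transcript, so they derive the same SAS.
    transcript = b"bytes both endpoints already share from the handshake"
    print(derive_sas(transcript))

A real protocol needs more machinery than this (e.g., a commitment so an
active attacker can't grind the handshake until the strings happen to
match), but the interface point above stands: the SAS only exists after the
connection is up.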

Passwords can be further broken down into two variants: ZKPP/PAKE
schemes and ordinary PSK schemes. The relevant differences between
these two are that PSK schemes are susceptible to offline dictionary
attack but that ZKPP/PAKE schemes have a much more problematic IPR
situation.
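
The offline dictionary attack distinction is easy to see concretely. In a
naive PSK scheme, an eavesdropper who records any value that is a
deterministic function of the password and public handshake data can test
candidate passwords at leisure; a toy sketch (not any particular protocol):

    import hashlib
    import hmac

    def psk_proof(password: str, nonce: bytes) -> bytes:
        # Toy stand-in for a PSK-derived handshake value visible on the wire.
        return hmac.new(password.encode(), nonce, hashlib.sha256).digest()

    def offline_dictionary_attack(observed: bytes, nonce: bytes, candidates):
        # No further interaction with the victim is needed; just grind.
        for guess in candidates:
            if hmac.compare_digest(psk_proof(guess, nonce), observed):
                return guess
        return None

    nonce = b"public nonce from the recorded handshake"
    recorded = psk_proof("hunter2", nonce)  # what the attacker captured
    print(offline_dictionary_attack(recorded, nonce,
                                    ["password", "letmein", "hunter2"]))

A ZKPP/PAKE scheme is designed precisely so that the recorded transcript
doesn't permit this kind of offline testing; each guess costs the attacker
an active exchange.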

Finally, there is the question of where the authentication is done.
As I noted above, TLS has existing PSK and SRP mechanisms. However,
one could also add at least password and PAKE mechanisms to 
SASL if one wanted and use a channel binding to connect the two.
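
As a sketch of the channel-binding option (a hypothetical construction,
purely to illustrate the idea; a real deployment would use an established
SASL mechanism): the client mixes the TLS connection's "tls-unique" channel
binding into its password proof, so the proof only verifies on the specific
connection it was computed for.

    import hashlib
    import hmac

    def bound_password_proof(password: str, tls_unique: bytes,
                             nonce: bytes) -> bytes:
        """Bind a password-based authenticator to one TLS connection by
        mixing in its channel binding (the 'tls-unique' value)."""
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000)
        return hmac.new(key, nonce + b"|" + tls_unique, hashlib.sha256).digest()

    # The peer recomputes the proof from its own view of the TLS channel; a
    # man-in-the-middle relaying the exchange over a different TLS connection
    # sees a different tls-unique value and the proofs fail to match.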

More to come at the XMPP2 BOF next week in San Francisco, which, for some unknown reason, I'm chairing.

 

March 19, 2009

Can someone explain to me why, when I go to download Firefox, Xcode, or a bunch of other software for that matter, it happens over HTTP and not HTTPS? Remember, I'm about to install and run this software on my computer: if an attacker has managed to hijack my connection, they can get me to run anything they want. But nooo.... Even if you connect to the site with HTTPS, it redirects you to HTTP to download your file. There are obvious reasons to favor HTTP over HTTPS, namely performance and allowing mirrors. On the other hand, that makes the need for publication of the digest even more critical, since it sucks to have to trust the mirror.

If you're going to use mirrors, the right thing to do here is to publish a digest of the file on an HTTPS-accessible page (remember: these sites already will let you access them over HTTPS, so this doesn't make the situation worse). This would let users download the file from a mirror and then check the digest against the master site. I don't see digests on either site, though. It could just be that I'm missing it, but then surely lots of others are as well.
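
Mechanically the check is trivial; here is a minimal sketch (the URLs and digest format are made up for illustration): fetch the published SHA-256 digest over HTTPS, download the file from whatever mirror is convenient, and compare.

    import hashlib
    import urllib.request

    DIGEST_URL = "https://example.org/downloads/firefox.sha256"  # hypothetical
    MIRROR_URL = "http://mirror.example.net/firefox.dmg"         # hypothetical

    def fetch(url: str) -> bytes:
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    # The digest page comes over HTTPS, so it can't be tampered with in transit.
    expected = fetch(DIGEST_URL).decode().split()[0].lower()

    # The file itself can come from any untrusted mirror over plain HTTP.
    blob = fetch(MIRROR_URL)

    if hashlib.sha256(blob).hexdigest() != expected:
        raise SystemExit("digest mismatch -- do not install this file")
    print("digest OK")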

 

March 12, 2009

I'm not an expert on quantum computing, but luckily EG reader Dan Simon is. The other day in the comments section he explained why he doesn't think it's very relevant. It's worth a read:

My impression is that watermarking is to signal processing what quantum computing is to theoretical physics, or cryptography to number theory: a way for a massive oversupply of researchers in a once-proud field to make a claim to relevance.

...

Basically, there is one thing that quantum computers have been found to be capable of doing much better than classical computers. That one thing has been characterized variously as "finding hidden subgroups", "solving the abelian stabilizer problem", or "finding periodicities in abelian groups". Because this one thing happens to lead to polynomial-time algorithms for integer factoring and discrete log, quantum computers have been bandied about as an incredible new computing technology, but the truth is that this one thing is really very limited in scope, and in a decade and a half, nobody's found another significant application for it.

Moreover, there are lots of (admittedly informal) reasons for believing that quantum computers can't really do anything interesting beyond this one thing. So we're left with a technology that, even if perfected*, is unlikely to be able to accomplish anything of interest beyond solving a certain narrow class of number theory problems.**

Dan goes on to observe that there are other public key algorithms not in wide use that don't appear to be vulnerable to quantum computing.

This brings us to another class of people besides quantum computing researchers with an interest in hyping the technology: people working on alternatives to factoring and discrete-log based cryptosystems. The deployment cycle of new public key algorithms is incredibly slow: to a first order, everyone outside the government is still using RSA. This means that new public key algorithms with similar "interfaces" to existing algorithms (e.g., they're interchangeable but faster or more secure, etc.) don't have much of a real-world value proposition outside of specialized niches, especially as there are a whole slew of existing algorithms with better properties based on elliptic curves, pairings, etc. But if QC actually worked, then those systems would all be broken and we'd need to reinvent them based on different problems: instant job security for cryptographers.

 

March 10, 2009

Over the past few years I and a few collaborators have been working to develop a better system for key establishment for standards-based real-time voice (i.e., SIP/RTP). Skype already has such a system, but unfortunately it's a closed system, and the available systems for SIP and RTP had some serious problems. While this job is far from finished, today the IESG approved the first round of documents describing the two major pieces of this protocol: draft-ietf-avt-dtls-srtp and draft-ietf-sip-dtls-srtp-framework. There are still a few smaller documents to go, and the minor task of getting widespread implementations remains, but this is definitely progress. Thanks are due to everyone who contributed to this effort.