May 2011 Archives


May 28, 2011

I've been listening to Gregory Clark's World Economic History -- Pre-History to the Industrial Revolution class on iTunes U. The class, taught out of Clark's popular A Farewell To Alms, is devoted to Clark's thesis that prior to the Industrial Revolution all humans mostly lived on the Malthusian frontier [*]. That is to say, population is maintained in equilibrium with resource levels/income. Anything that adjusts these factors temporarily removes the system from equilibrium, but homeostasis quickly reasserts itself. So, for instance, if some new technology increases crop yields, people respond by having more children or by living longer (whether consciously or because being better fed makes you more fertile and longer-lived), and thus population increases with little or no lasting impact on overall standard of living. Thus (Clark argues) there was nearly no net improvement in standard of living from the Neolithic to the Industrial Revolution.

As I understand it, Clark argues that two factors acted to change this state of affairs. First, technological change accelerated to the point where the amount of resources that could be exploited was changing faster than the time scale on which birth and death rates responded, keeping the system permanently out of equilibrium. Second, people started practicing real fertility control, which acts to damp the population response to higher income levels (and of course there are natural limits on how much lifespan can increase solely on the basis of income.) [There's also a whole bunch of stuff about how these changes are due to cultural and biological evolution as a result of differential reproduction between the rich and the poor, but I don't want to talk about that just yet.]

Much of the first half of the course is devoted to explicating the model, and in classic counterintuitive economist style, Clark makes a big point about how in the Malthusian world, things that you ordinarily think of as good are bad, and vice versa. To take one example, if a horrible disease gets introduced into your society and increases the death rate by 10%, that leaves more resources for everyone else, with the result that the people who don't die of plague have a higher overall standard of living. Here's Clark's Table 2.2 (page 37 of "A Farewell to Alms"):

Malthusian "Virtues" and "Vices"

"Virtues"              "Vices"
Fertility limitation   Fecundity
Bad sanitation         Cleanliness
Harvest failures       Public granaries
Infanticide            Parental solicitude
Income inequality      Income equality
Indolence              Hard work

Note the scare quotes around virtues and vices. Clark sort of equivocates between the view that societies with short life spans and high material standards of living are really better and the view that income levels are just one instrument. For instance, on page 36 he writes:

In summary table 2.2 shows Malthusian "virtues" and "vices." But virtue and vice here are measured with reference only to whether actions raised or lowered material income per person.

This sort of supports the "it's just an instrument" view but then on page 38 Clark writes:

The failure of settled agriculture to improve living conditions, and the possibility that living conditions fell with the arrival of agriculture, have led some economists, anthropologists, and archaeologists to puzzle over why mankind abandoned the superior hunter-gatherer lifestyle for inferior agrarian societies.

This argument isn't a new one. In fact, this specific form of the argument was famously made by Diamond's The Worst Mistake in the History of the Human Race. Clark comes back to the more general theme repeatedly, and suggests in a number of places during the class that people in fact would prefer to live in societies with a lot of disease and violence but correspondingly higher material living standards (though again, there is some ambiguity about the level to which he is actually endorsing this view.)

It seems to me that one ought to raise at least two objections to this general line of argument. The first is that it's not at all obvious that it's that useful to assess people's welfare by their income level (or as I once heard it put more crudely, by the number of calories they are able to consume per day.) [I wish I could remember where I read this.] For one thing, to a great degree people's sense of how happy they are is positional, so if everyone in society gets 100% richer, that doesn't make everyone 100% happier; it's just that instead of being jealous of the guy next to me at the stop light in his Audi RS4, I'm now jealous that he has a 911 Turbo; not much of a net win. For another, in the Malthusian model a lot of the "virtues" that result in a higher standard of living are things people find really unpleasant. In particular, random violence, crop failures, and disease (and uncertainty in general) are incredibly stressful (see Sapolsky's Why Zebras Don't Get Ulcers for a good primer on this.) It's not clear at all that people in general would trade much higher rates of catastrophic events for a somewhat higher level of expected welfare if you survive. Clark does offer some arguments that people make choices along these lines, for instance that people voluntarily joined the East India Company even though the risks were very high, but it's not clear that it's really that useful to use the risk/reward behaviors of 20-year-old male adventure seekers as a stand-in for the entire society.

Even if we are to concede that people are individually happier in societies with high violence and disease rates (and hence lower populations) but high standards of living than they would be in societies with lower violence and disease rates (and hence higher populations) but correspondingly lower standards of living, that does not mean that those societies are truly more desirable. (See, for instance, Parfit's "Only France Survives" in Reasons and Persons.) Surely, most people in societies with a low standard of living would nevertheless prefer being alive to being dead, even if being dead would mean that other people would live better, so it's difficult to say that the society with the lower population is better, especially when that equilibrium is obtained by high death rates (i.e., the killing of people who exist and have their own interests) rather than low birth rates (i.e., the nonexistence of people who might otherwise exist.)


May 25, 2011

As I wrote a while back, NSTIC feels like requirements written with a solution already in mind, specifically what's called a "federated identity" system. The basic idea behind a federated identity system is that a person who wants to do stuff on the Internet (i.e., you) will have a relationship with some identity provider (IdP). At minimum, you'll share some set of authentication credentials with the IdP, so that you're able to establish to them that you're sitting at a given computer. Then, when you go to an arbitrary site on the Internet and want to authenticate, that site (called the relying party) can verify your identity via the IdP. What's nice (at least theoretically) about a system like this is that you can just authenticate once to the IdP and then all other authentication transactions are handled more or less seamlessly via the IdP. (This is often called single sign-on (SSO)). What makes the system federated is that there is expected to be more than one IdP and so I might be able to get my identity through the USPS and you get yours through Wells Fargo, but Amazon trusts both so we're both able to get our $0.59 stoneware bowls via Amazon Prime.

This isn't by any means a new idea. In the US your driver's license is issued by your state, but it's accepted as identification by a whole variety of relying parties, not just the state and its agents. It's not just your driver's license, either; you likely have a relationship with multiple IdPs. For instance, I have a driver's license (issued by California) and a passport (issued by the US State Department), plus a variety of more specialized credentials which are arguably used for authentication/authorization, such as credit cards, library cards, etc. That's not really the state of play for Internet transactions, though, with the partial exception of Facebook Connect (discussed below).

Simple SSO
In the simplest case, all the IdP does is attest to the fact that you have an account with them and maybe some weak indication of your identity, such as your claimed display name and an account identifier. There are a variety of systems of this type, such as OpenID, but by far the most popular is Facebook Connect. The way that Facebook Connect works is that you log into Facebook and then when you visit a Facebook Connect relying party, they are able to get your account information from Facebook. That information mostly consists of things you've told Facebook, though, so there's no real guarantee that, for instance, the name Facebook delivers to the relying party is your real name.
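To make the shape of this concrete, here's a toy sketch of an IdP attesting to an account and a relying party checking the attestation. All the names and the signing scheme here are made up for illustration; real systems like Facebook Connect use OAuth-style tokens and per-application secrets, not a raw shared HMAC key.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared secret between the IdP and this one relying party.
IDP_SECRET = b"idp-and-rp-shared-secret"

def idp_issue_assertion(user_id, display_name):
    """IdP side: sign a claim that this account is currently logged in."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"user_id": user_id, "name": display_name}).encode())
    sig = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def rp_verify_assertion(assertion):
    """Relying-party side: check the signature, then trust the payload."""
    payload, sig = assertion.rsplit(".", 1)
    expected = hmac.new(IDP_SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload))

token = idp_issue_assertion("user123", "Alice")
print(rp_verify_assertion(token))  # {'user_id': 'user123', 'name': 'Alice'}
```

Note that the relying party learns only what the IdP asserts, and the IdP is just asserting whatever the user originally told it, which is exactly the "no real guarantee it's your real name" point above.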

Even a simple system like this presents technical challenges, especially when it's implemented in the extremely limited Web environment (hopefully more on this in a separate post). First, there's the question of privacy: just because I have an account with IdP X and site Y is a relying party for IdP X doesn't mean that when I visit site Y I want them to be able to identify me. The more widely accepted an IdP is, the more serious this problem becomes; if every site in the world accepts my IdP, then potentially I can be tracked everywhere I visit, both by the IdP and by the relying parties. If we want to avoid creating a universal tracking mechanism (people often call this a supercookie), we need to somehow let users decide whether to authenticate themselves to relying parties via their IdPs. This creates obvious UI challenges, especially because there's plenty of evidence that users get fixated on whatever task they are trying to accomplish and tend to just click through whatever dialogs are required to achieve that objective.

The second challenge is dealing with multiple IdPs: in an environment where there are a lot of different IdPs and different relying parties accept different IdPs, then we need some mechanism to let relying parties discover which IdP (or IdPs) I have an account with. This is actually a little trickier than it sounds to do in a privacy-preserving way because the profile of which IdPs I support can itself be used as a user-specific fingerprint even if none of the IdPs directly discloses my identity to the relying party. Moreover, when I have a pile of different IdPs, I need to somehow select which IdP to authenticate with, which means more UI.
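As a toy illustration of why the set of IdPs is itself identifying, here's a back-of-the-envelope entropy calculation. The numbers (20 IdPs, each held with 25% probability) are made up purely for illustration:

```python
import math

def h(p):
    """Entropy in bits of a single yes/no fact that is true with probability p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# If there are N IdPs and each user independently has an account with each
# one with probability p, the "which IdPs do you have?" profile carries
# about N * h(p) bits of fingerprinting information.
N, p = 20, 0.25
bits = N * h(p)
print(round(bits, 1))  # ~16.2 bits
```

Sixteen-odd bits is enough to single out one user in a population of roughly 75,000, without any IdP ever disclosing an identity directly.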

Real-Life Attributes and Minimal Disclosure
Once you get past a system that just carries an identifier—especially one that's unique but not really tied to any form of verifiable identity—life gets more complicated. Consider your driver's license, which typically has your name, address, age, picture, and driver's license number. When I go to a bar to buy a drink, all I need is to demonstrate that I'm over 18, but there's no reason for the bar to know my real name, let alone my address. In general, the more attributes an identity system can prove, the more useful it is, but also the more of a privacy threat it potentially is if any relying party just learns everything about me.

There has been a lot of work on cryptographic systems designed to allow people to prove individual properties to relying parties without revealing information that the relying party doesn't need to see (the term here is "minimal disclosure"), such as Microsoft's U-Prove. The idea here is that you will establish a whole bunch of attributes about yourself to one or more "claims providers". You can individually prove these claims to relying parties without revealing information about other claims. In the best case, even claims providers/IdPs don't get to know which relying parties you are proving claims to or which claims you are proving (e.g., the state wouldn't get to find out that you were proving your age to a bar.)
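U-Prove and its relatives rest on fairly heavy cryptography (blind signatures and the like), but the commit-and-selectively-reveal flavor of minimal disclosure can be sketched with a much simpler toy: salted hash commitments to individual attributes, where the verifier holds the digests (which in a real system would be signed by the claims provider) and the user reveals only the attribute in question. This is only a sketch under those assumptions; it does not provide U-Prove's unlinkability properties.

```python
import hashlib
import json
import os

def commit_attributes(attrs):
    """Commit to each attribute with its own random salt. In a real system
    the claims provider would sign the digests, not the raw attributes."""
    salts = {k: os.urandom(16).hex() for k in attrs}
    digests = {k: hashlib.sha256((salts[k] + json.dumps(v)).encode()).hexdigest()
               for k, v in attrs.items()}
    return salts, digests

def reveal(attrs, salts, key):
    """Disclose a single attribute (value plus salt); the rest stay hidden."""
    return {"key": key, "value": attrs[key], "salt": salts[key]}

def verify(digests, disclosure):
    """Relying party checks the disclosed value against the committed digest."""
    d = hashlib.sha256((disclosure["salt"] +
                        json.dumps(disclosure["value"])).encode()).hexdigest()
    return d == digests[disclosure["key"]]

attrs = {"name": "Alice", "over_18": True, "address": "123 Main St"}
salts, digests = commit_attributes(attrs)
proof = reveal(attrs, salts, "over_18")  # the bar learns only this attribute
print(verify(digests, proof))            # True
```

The bar checks the one digest it cares about; the name and address commitments are never opened, so it learns nothing about them beyond the fact that they exist.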

Making this work requires a lot of cryptography, though at this point that's pretty well understood. However, it also requires user interaction to allow users to determine which claims are to be proven to which relying parties. So, for instance, you would visit a site and it would somehow tell your Web browser which claims it wanted you to prove; your browser would produce some UI to let you agree; once you've agreed, your browser would cooperate with the claims provider to prove those claims to the relying party.1

Putting it Together
Putting the pieces together here, what NSTIC seems to envision is a federated identity system of users, IdPs, claims providers, and relying parties. As a user you'd be able to select your IdP/claims providers and as a relying party you'd be able to decide which of these you trust. The whole system would be glued together by privacy-preserving cryptographic protocols. In the next post, I'll try to explain some of the challenges of actually building a system like this in the Web environment.

1. It's worth noting that if you don't mind your IdP/claims provider learning who you authenticate to and which claims you prove, then you don't need any crypto magic. This is basically the kind of system Facebook Connect already is.


May 9, 2011

MSNBC (from Reuters) reports that Sen. Charles Schumer wants Amtrak to create a "no-ride" list for trains (þ Volokh):
Schumer, citing U.S. intelligence analysts, said attacks were also considered on Christmas and New Year's Day and following the president's State of the Union address.

He called on the U.S. Department of Homeland Security to expand the Secure Flight monitoring program, which cross-checks air travelers with the terror watch list in an attempt to prevent anyone on the "no-fly list" from boarding, for use on Amtrak.

Such a procedure would create an Amtrak "no-ride list" to keep suspected terrorists off the U.S. rail system, he said.

This is one of those situations where reasoning by analogy can lead you seriously astray. Airplanes are an unusual case not so much because they are uniquely vulnerable, but because they are uniquely secure. It's true that planes are relatively fragile in that a small amount of explosive can kill a lot of people, and that even small accidents tend to kill everyone. [This is only a partially unique property, though, so a full account of why planes are such an attractive target surely needs to involve some social and psychological factors.] However, they are also ordinarily well protected, so it's hard to get access to them to do damage. At least in theory planes are kept in secure conditions on the ground so it's hard to place a bomb, and obviously once they're in the air it takes something like a surface-to-air missile (or at least good luck with a gun) to cause catastrophic damage. This means that if you want to attack a plane, it's very convenient to actually be on it, so it's at least arguably useful to keep suspected terrorists off the plane.

But this doesn't apply to trains, which are (a) not particularly well secured and (b) easily accessible when they are in transit, which means you need to secure hundreds of miles of track. This means that if you want to attack a train, you don't need to be on it, you just need to get access to the track and damage it at the right time. (See here for a list of train accidents). It's like these guys have never seen Bridge on the River Kwai.

Even if that weren't true, it's important to remember that while planes are already a limited-access type thing, trains often are not. Amtrak may do some kind of passenger identification, but there are lots of commuter trains (e.g., Caltrain) where not only aren't passengers identified, you can get on the train without a ticket. Instead, the conductors just come by periodically and audit. Converting to a system where you actually checked ID for each passenger before they got on (and remember that something like 50-100 people might get on in 2 minutes on an open platform) seems like it would be prohibitively expensive. These trains don't carry as many people as some Amtrak trains, but they certainly have enough passengers that if you could kill a significant fraction of them it would be bad. And this doesn't even get into the question of subways. In general, train security is set at a level designed to deter fare evasion, not to protect the train itself.

Even if you think that airplane security is set at an appropriate level (which IMHO it probably isn't) this seems like a security measure which comes at a huge amount of cost and very little benefit.


May 7, 2011

Like anyone else who listens to public radio, I've often felt that donating money would be a lot more attractive if you could just make the pledge breaks stop. Of course, since radio is a broadcast medium and you're only donating a small fraction of what they want to raise, that's not really possible. Technically speaking, I suppose they could set up some alternate, encrypted broadcast, but that sounds like more trouble than it's worth: conventional radios can't do any of that stuff, and while satellite radios can do encryption, not that many people have them, and of course those who do aren't exactly the typical public radio demographic. It's of course obvious that Internet streaming could be used to provide this service, but that's not really that great a substitute either, especially for those of us who tend to listen to radios in our cars. However, KQED, at least, seems to have decided it's worth a try; this time around they are offering a pledge-free streaming option, at least sort of.

I say "sort of" because it's not like anyone who donates gets access. Rather they're positioning it as one of their "gifts":

Public comments about our on-air fundraising drives have not been ignored! KQED has listened and is proud to have developed technology to respond with an alternative. The new Pledge-Free Stream is the first attempt by any public radio station to offer listeners the satisfaction of giving without pledge break interruptions. We believe it is critical for our organization to recognize how we can best serve you -- our members and listeners. Through this launch of the Pledge-Free Stream, we will be evaluating listener interest and feedback to inform us on the viability of this product in the future.

A few observations about this plan. First, it seems to reflect a rather different view of the role of the pledge breaks in KQED's programming than I would have expected—and certainly than the one I have. As suggested above, I always figured that the point of the pledge break was to hassle you to donate some money and that once that purpose had been fulfilled they would naturally stop with the hassling, except that with radio it's kind of an all-or-nothing proposition. That always seemed implicit to me in the exhortations from the announcers that once they hit their fund-raising goal they would go back to regular programming. However, that's clearly not how KQED sees things, since they're actually requiring you to separately pay for pledge-break freeness:

Because the Pledge-Free Stream is a separate gift item, you must select it when making your donation. For example, if you'd like to donate $75 and receive the KQED Wave T-shirt, you would still need to select the Pledge-Free Stream and give an additional $45, for a total of $120.

And since they run pledge drives three times a year and this "gift" only applies to a single drive, you're looking at paying $135/year not to listen to fundraising.

So, why isn't KQED just providing this service to anyone who pledges?

At this time, we are not offering the Pledge-Free Stream as a free gift. Current members and recent donors will also need to give an additional donation of $45 to receive this service as a separate thank-you gift. Since this is the first time KQED or any public radio station has offered a pledge-free stream, it is important for us to accurately measure public interest. By your response and the number of people who donate for this gift, we will be able to evaluate how to offer the Pledge-Free Stream in the future. Your feedback will help us improve our service to you, as well as understand the value of this gift. There is also a substantial cost that KQED must cover to produce this secondary stream, from equipment to doubling the number of announcers. The funds raised through this service will help offset those costs.

As far as I can tell, the cost rationale is basically bogus. There's a fixed cost to producing the extra content, but as soon as they offer it to anyone they've already incurred that cost. [Also, how high can it really be? The programming being preempted is largely national programming they pay NPR or PRI for, plus the interstitial announcements they need to run.] As for the costs of hosting this, if we assume that they're running a 32kbps stream and every listener uses the system 24x7, they would be able to host the service on CloudFront for something like $.05/listener-day, or about $1.00 for the entire pledge period. Really, it's far less, since nobody listens 24x7.
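For what it's worth, the arithmetic behind that estimate looks like the following. The $0.15/GB transfer price is an assumption, roughly in line with CloudFront's published rates at the time, and the three-week drive length is likewise assumed:

```python
# Sanity check on the hosting-cost estimate for a 32 kbps stream
# listened to 24x7.
BITRATE_BPS = 32_000       # 32 kbps stream
SECONDS_PER_DAY = 86_400
PRICE_PER_GB = 0.15        # assumed transfer price, $/GB
DRIVE_DAYS = 21            # assumed length of a pledge drive

gb_per_listener_day = BITRATE_BPS * SECONDS_PER_DAY / 8 / 1e9
cost_per_listener_day = gb_per_listener_day * PRICE_PER_GB

print(round(gb_per_listener_day, 3))                # 0.346 GB/listener-day
print(round(cost_per_listener_day, 3))              # ~$0.052/listener-day
print(round(cost_per_listener_day * DRIVE_DAYS, 2)) # ~$1.09 per drive
```

So even under the absurd worst case of a listener streaming around the clock for the whole drive, the bandwidth bill is on the order of a dollar, against a $45 "gift" price.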

The opening part of this paragraph seems to me to suggest a more likely rationale, namely that they're trialing a subscription service and want to see what the market will bear. There's actually one more factor to consider that they don't raise at all: I've been talking as if forcing people who have already donated to listen to pledge breaks were all loss, but arguably it's not. If nothing else, it forces those people to listen to a lot of free advertising for the station's sponsors (and especially to listen to the announcers pimping the various "gifts"). Allowing people to opt out to some extent diminishes the value those sponsors are receiving for funding the pledge drive, and perhaps diminishes their willingness to donate.


May 4, 2011

If everyone loved passwords, then we wouldn't be having an extended discussion about how to get rid of them (incidentally, I was at IIW this week, where the suckiness of passwords is a basic assumption.) So, what's not to like?

The biggest problem with passwords as currently deployed is that they are replayable: in order for Alice to authenticate to her bank, she must provide her bank with her password. The unfortunate consequence of that is that once Alice has authenticated to her bank, then the bank can impersonate her in the future. This doesn't sound so bad, since it's not that useful for the bank to impersonate me to itself, but it has two very bad implications:

  • Phishing: If some attacker can convince me that they are my bank, and I give them my password, then they can impersonate me indefinitely, including to my bank. This sort of fraud is a huge issue for banks.
  • Unsafe Password Reuse: I'm not (overly) worried about my bank impersonating me to itself, but if I use the same password with two banks, then evil bank A might impersonate me to good bank B. More generally, any time I use the same password at two different sites, then I have to worry about whether I trust both those sites. This is what motivates the advice people usually get to use a different password at each site.

It turns out that there are technical mechanisms for alleviating these issues. The basic principle is to arrange that the site never gets to see a replayable password. The technology is complicated and there are a bunch of different mechanisms, but the basic idea is that when Alice establishes her account she gives the site some numeric verifier (V) derived from her password. Then when she comes back, she types her password into her browser, which can then prove to the server that she knows the password corresponding to V without ever sending the server the password itself. PwdHash is one example of a system in this general spirit, as are PAKE-based systems.
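To give a flavor of the simplest of these, here's a sketch of the PwdHash idea: deriving an unrelated per-site password from a master password and the site's domain. The real PwdHash extension's hash and encoding details differ from this sketch, and PAKE systems like SRP go further by never giving the server anything replayable at all.

```python
import base64
import hashlib
import hmac

def site_password(master_password, domain):
    """Derive a per-site password by keying an HMAC with the master
    password over the site's domain (PwdHash-style sketch; the real
    extension uses different hash and encoding details)."""
    mac = hmac.new(master_password.encode(), domain.encode(),
                   hashlib.sha256).digest()
    return base64.b64encode(mac)[:12].decode()

# The same master password yields unrelated passwords at each site, so
# evil bank A learning one of them doesn't help it impersonate you to
# good bank B.
print(site_password("hunter2", "bank-a.example"))
print(site_password("hunter2", "bank-b.example"))
```

Note that each site still sees (and can replay) its own derived password; the win is that a password captured at one site is useless at every other one.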

Password Proliferation
As Constant observes, it's probably not necessary to worry about Slashdot stealing your password and using it to impersonate you to Kayak, but even people without a lot of commercial relationships tend to have a fair number of accounts that they probably don't treat interchangeably. This is particularly difficult when those accounts span a spectrum of security. Consider the following accounts, ranked roughly in increasing order of sensitivity:

  • Slashdot
  • Twitter
  • Gmail
  • Amazon
  • Bank of America
  • Morgan Stanley

I think there is a pretty fair argument that each of these represents a distinct level of security. I don't much care whether people post as me on Slashdot, but I probably do care about Twitter. Unlike Twitter, Gmail holds actual private information; at Amazon there's actual money involved, but less than is on the table with my bank account, and perhaps less than what's in my investment portfolio at Morgan Stanley. (Note: these providers do not necessarily represent my actual accounts.) Since these exist at different levels of security, they should have different passwords. Moreover, at the highest levels, I most likely want to use different credentials for each site. The end result is that I need to have (and likely remember, see below) a whole pile of passwords. This is not something that people like.

Backward Compatibility
Although we know how to build password-based systems that don't reveal the user's password to the relying party, we don't really know how to deploy them securely. The basic problem is that users are already prepared to type their passwords into Web forms that give the password to the server. It's not at all clear how to construct a UI that the user can be sure is safe, and thus can type their password into, and that also can't be imitated by a malicious Web site. [Technical note: it's easy to build UI that can't be imitated precisely, but the test is whether users will be fooled by bad imitations.] (More about this issue can be found here.)

Low Entropy Space
A well-known problem with passwords is that they generally have very low entropy, which is to say that your average user draws their password from a relatively small set of likely choices. This means that if I have some oracle which will tell me whether a given candidate password is valid (e.g., a list of encrypted passwords, a server which I can try to log into, etc.), it doesn't take as many attempts as one would like to hit on the right one by trying the most probable candidates first. This is generally called a dictionary attack.
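Here's what the attack looks like in miniature. The dictionary is a toy and the hashes are unsalted SHA-256 purely for illustration; real password files and real crackers are considerably more elaborate, and salting forces the attacker to attack each hash separately.

```python
import hashlib

# A leaked list of (unsalted) password hashes -- the "oracle".
leaked_hashes = {
    "ekr": hashlib.sha256(b"letmein").hexdigest(),
    "alice": hashlib.sha256(b"correcthorse").hexdigest(),
}

# A tiny dictionary of the most probable candidate passwords.
dictionary = ["password", "123456", "letmein", "qwerty"]

cracked = {}
for user, h in leaked_hashes.items():
    for guess in dictionary:
        if hashlib.sha256(guess.encode()).hexdigest() == h:
            cracked[user] = guess

print(cracked)  # {'ekr': 'letmein'} -- the low-entropy choice falls immediately
```

The point is that the attacker never has to search the full space of possible passwords, just the small set people actually use.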

The low entropy of passwords isn't actually quite as bad as it sounds: even though users generally choose terrible passwords, in order to check a candidate password you generally need to try to log into the site in question, which affords the site the ability to do velocity checks and/or limited-try capabilities, such as locking your account after some fixed number of login failures. However, even then you need to use a password with a certain minimum level of security; if I use "ekr" as my username, then this is going to be a lot of attackers' first guess, so I need to get far enough up the entropy curve to make this kind of attack infeasible. That said, low-entropy passwords significantly weaken the guarantees of password diversification technologies like PwdHash, since it's comparatively easy for an attacker who has a verifier to extract the original password.

This brings us back to the memorability problem. Every new password is something else to remember, and (loosely) the higher the entropy of a password, the harder it is to remember. In the limit, if I have a randomly generated password for each site, I'm pretty much going to need some password manager to remember them (either that or a big pile of post-it notes).1
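The entropy/memorability trade-off is easy to quantify with the usual back-of-the-envelope formula for randomly chosen passwords, bits = length x log2(alphabet size):

```python
import math

def entropy_bits(length, alphabet_size):
    """Entropy of a password of `length` symbols drawn uniformly at
    random from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

print(round(entropy_bits(8, 26), 1))    # 8 random lowercase letters: 37.6 bits
print(round(entropy_bits(16, 94), 1))   # 16 random printable chars: 104.9 bits
print(round(entropy_bits(4, 2048), 1))  # 4 random dictionary words: 44.0 bits
```

Of course this only applies to genuinely random passwords; the whole problem described above is that the passwords people can actually remember are drawn from a far smaller effective space.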

Aside from the drawbacks listed above, passwords are inherently a 1-1 mechanism. If I have relationships with five different banks, I need to have established a password—even if it's the same one—with each of them. This sort of entry barrier is a pain for users, but especially for new sites, which have trouble converting visitors to users because they first need to drive them through an annoying registration experience. So, passwords don't really permit any notion of delegating trust to someone else. This is also true in the inverse sense, where I can't easily give you permission to look at my bank balance without giving you permission to make funds transfers, at least not without the bank going to a lot of effort.

A related concern is that passwords don't really have any mechanism for establishing stuff about users outside the system. For instance, when I want to sign up for a credit card, the issuer really wants to know that it's me, but the best they can do is use the (not-really) secrecy of my social security number, address, etc. as a weak password. Once I've signed up with them, a password may be fine, but it's not a useful entry point into the relationship. Similar arguments apply for proving that I'm over 21 or that I live in a given state.

Next Up: What kind of architecture is NSTIC contemplating?
Hopefully the above gives you a sense of the sort of concerns that are motivating something like NSTIC. While formally NSTIC is written as a set of requirements, to my eyes it's more like one of those documents whose authors start with a given solution in mind and write the requirements around that. In the next post in this series I'll try to talk a little bit about that implicit architecture.

1. Some people use a sort of lame mental hash function to generate related but distinct passwords for each site, but this seems to involve a fair amount of mental overhead.


May 1, 2011

As I said earlier, a lot of the use cases used to motivate NSTIC are about the inadequacy of existing 1-1 authentication mechanisms. As you can imagine, a huge amount of research has gone into trying to figure out how to build a set of systems which don't have these drawbacks, but the results haven't been entirely satisfactory. The following is an attempt to briefly survey the space and why it's proven so difficult. To be honest, I'm getting a little tired of writing this kind of thing (an older attempt to do this in longer form for the non-Web context can be found here), but it has to get done if you're to make sense of the rest.

It's probably easiest to get a sense of what the problem is by looking at deficiencies in the existing password-type systems on the Web. As you all know, what happens now is that you go to some site—which, if you're lucky, uses HTTPS—and it gives you a form (i.e., text fields on the page) to enter your username and password. This user interface is completely under the control of the Web site, and to a first order just looks like any other Web form to the browser.1 You type that stuff in, hit return or click on the submit button, and the browser sends the username and password to the server, which verifies them and either lets you log in or not. [Technical note: each page you fetch/link you click on a site is sort of an independent transaction. The site uses Web cookies to string the transactions together so you don't have to type your username/password on each page.] There's plenty to hate here, but before we talk about that, it's worth talking about the stuff that's good.
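A toy model of that flow, with all the Web machinery stripped away: the password is checked once, and then a random cookie stands in for it on subsequent requests. (In a real deployment the stored passwords would be hashed and the cookie would be carried in HTTP headers; this just shows the bookkeeping.)

```python
import secrets

PASSWORDS = {"alice": "monkey10"}  # stored server-side (hashed, in reality)
SESSIONS = {}                      # cookie value -> username

def handle_login(username, password):
    """Check the submitted form; on success, mint a session cookie
    (what the server sends back in a Set-Cookie header)."""
    if PASSWORDS.get(username) != password:
        return None                        # back to the login page
    cookie = secrets.token_hex(16)
    SESSIONS[cookie] = username
    return cookie

def handle_request(cookie):
    """Every later page load: who is this, if anyone?"""
    return SESSIONS.get(cookie)

c = handle_login("alice", "monkey10")
print(handle_request(c))        # alice
print(handle_request("bogus"))  # None
```

This is also why the cookie itself becomes a secondary secret: anyone who steals it gets the session without ever seeing the password.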

From the user's perspective one of the most important properties is portability. Say I buy a new machine or I want to use a kiosk somewhere: as long as I remember my password (this is a lot easier if I use the password "monkey10" everywhere than if I generate a random 16-character password for each site), then I just sit down, type it in, and I'm good to go. Even if I have a really long password, I can write it down on a piece of paper, which will survive the failure of any particular device. This sounds simple, but it's actually a feature that many of the proposed fixes for this problem don't have. To give you just one example, pretty much all the systems that involve you having a long-term client-side public key then require some way to haul that key around. There have been a lot of proposed answers to this (USB tokens, smartphones, etc.) but none of them have come close to taking off.

Backward Compatibility
Say you've just invented a really good remote authentication technique. What now? Well, if it involves modifying the client, then you've got a real problem since Web browsers turn over comparatively slowly (10% of the net is still running IE 6). So, even if you manage to convince all the browser manufacturers to put your system in, you're looking at years before you can count on everyone being able to authenticate with it, and hence before you can use it exclusively. Similar reasoning applies when you need to modify the server, since any new mechanism on the client is useless without server support. The only lowest common denominator mechanism is passwords through Web forms, which is why so many new authentication systems have been structured as enhancements to that basic mechanism, either on the server side (e.g., if you don't recognize this image, don't proceed) or on the client side (e.g., PwdHash) so that they can be deployed unilaterally.

Site Control of Look and Feel
Passwords in Web forms aren't the only authentication mechanism the Web was designed with. Indeed, HTTP supports not one but two password-based authentication mechanisms, "Basic" (i.e., passwords in the clear in the HTTP header) and "Digest" (i.e., challenge-response in the HTTP header). Neither of these sees much usage, most likely due to the hideous user interface they typically have, which involves the browser bringing up a modal or semi-modal dialog as you first go to the page (generally before you see anything on the page). It turns out that this is not what site operators want: they want to control the UI experience, including offering first-time registration without an annoying dialog box, password recovery, branding, etc. In other words, they want to brand it, and they aren't really interested in authentication mechanisms which don't offer that ability.

Next Up: What's bad about passwords?
While passwords have some useful features, if they were really great then we wouldn't be having this discussion. Next, I'll talk about some of the obvious drawbacks, but as you're reading you should remember that, while annoying, none of them has been severe enough to push us over the edge into actually discarding passwords for most applications.

1. The exception here is that there is a special indicator that tells the browser that the password field should display dots or stars or whatever instead of your real password.