EKR: June 2005 Archives

 

June 30, 2005

Although credit card style systems (and I include in this VISA debit cards, which have the same properties) have a lot of inherent security flaws, looked at from a purely user interface perspective there are some real advantages:

Static and reusable
Because the credit card number almost never changes, you can memorize it. Even better, merchants can memorize it, which is what lets Amazon do 1-click ordering.

Easy to read
There are three separate ways for the merchant to get the information they need to clear your credit card. The easiest is to swipe the mag stripe, but the raised digits let them use a credit card press (or even some carbon paper and a pencil in a pinch) or they can just transcribe the credit card number visually off the face.

Short
Credit card numbers are fairly short, which makes them easy to type into fields (e.g., at Amazon). This would be impractical if the numbers were much longer.

Cheap
Finally, credit cards are incredibly cheap to manufacture. I don't know what credit card companies pay, but you can buy mag stripe cards in lots of 1000 for less than $.15 each, so I imagine the price to the credit card issuers is more like $.05. Mag stripe readers are also cheap, and credit card terminals are extremely simple and cheap to manufacture.

(Nearly?) all proposed more secure solutions involve giving up one or more of these properties. If one doesn't, it's going to be basically isomorphic to credit cards: a symmetric key which you give directly to the merchants in order to execute a transaction.

The naive approach that everyone thinks of first is to use digital signatures. Every credit card account gets a public/private key pair, and when you want to execute a transaction you just digitally sign it. This is an outstanding design in that it manages to sacrifice nearly all of the above properties. You'd sign each transaction separately, so the authenticator always changes. Performing the signature requires extensive computation, so it's not cheap. And the signatures are long, so you can't really type them in: the shortest digital signature scheme based on a standard assumption (BLS) is 163 bits, and since each decimal digit carries about 3.3 bits, that maps to about 50 decimal digits! (Yes, there are alternative encodings, but they're still long.) That means you need some kind of electronic interface to deliver them. Probably no more mag stripe interfaces, at least not static ones.

Despite all this, signatures aren't an inherently bad technology for making purchases on the Internet. Of course, they'd require entirely new software deployments on the client, but that's arguably merely a transition problem, and software only takes half of forever to replace. However, for in-person transactions that involve swiping credit cards, this means that the customer has to have some handheld computing device. Typically this is assumed to be a smartcard, which means replacing every point of sale terminal--not an easy task. Even if you've succeeded, note that you've now basically ruled out the possibility of dumb POS devices like the old credit card swipe machines. I still get asked to use these fairly often...

Years ago I was a bit player in Visa/MasterCard's SET signature-based electronic payment system. SET managed to have all the disadvantages I mentioned above and then some. Aside from requiring extensive processing on the client side, the protocols, and the PKI in particular, were fiendishly complex. In addition, the computational effort required on the server side was truly excessive. To make matters worse, Visa and MasterCard never offered any real incentives to merchants to deploy SET. This explains why, going on 10 years later, you're still not using SET to buy stuff on the Internet.

Another possibility is to have a device that simulates a credit card but that produces a different credit card number for each transaction. The card would generate a stream of valid credit card numbers (some credit card issuers already have web sites that let you produce temporary numbers in order to let users shop online while reducing their exposure to fraud). This type of system can be made to have a lot of the same UI properties as today's system: the numbers would have to be a little longer, but not enormously--maybe 20 or 25 digits instead of 16. You could implement this with a smartcard, but a more attractive approach would be an LCD display like a SecurID token. Then people could key in the numbers just like with a conventional card. You might even be able to convince it to simulate a mag stripe, just like those CD-to-tape adapters you can get for your car. Merchants wouldn't necessarily be able to store the number, since it would change every time, but you could imagine having the card also produce long-term codes, one per merchant, thus making merchant database compromise more easily containable. This design has at least one major disadvantage: the cards are sure to be expensive and will likely be bulkier than an ordinary credit card.
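To make the idea concrete, here's a minimal sketch of how such a card might derive per-transaction numbers, in the spirit of one-time-password schemes. Everything here--the secret, the counter handling, the 20-digit output--is an illustrative assumption, not a description of any deployed system:

import hashlib
import hmac
import struct

def one_time_card_number(secret, counter, digits=20):
    # Hypothetical scheme: MAC the transaction counter under the card's
    # secret, then reduce to a fixed number of decimal digits. A real
    # system would also need issuer prefixes, check digits, etc.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    return str(int.from_bytes(mac, "big") % 10**digits).zfill(digits)

# The issuer holds the same secret and a window of expected counters, so
# it can recognize each number once and reject replays.
print(one_time_card_number(b"example shared secret", 1))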

The principal concern with any of these systems is getting merchants to be willing to deploy them. The issuers can ship new cards to users whenever they're ready, but unless the merchants deploy the readers and server side software, people will have to use standard credit cards. Back when I was working on SET, everyone assumed that the acquiring banks would give merchants who deployed SET a break on their credit card processing charges, but as far as I know this didn't happen--though they may have belatedly decided to do it after I was no longer involved. Some incentive like that will surely be required to get deployment of any new system: remember that customers have no real liability when their cards are stolen so their incentive to change is extremely small.

If we ever do see attempts to deploy a new system, expect the initial few generations of cards that get rolled out to be dual purpose, e.g., a smart card with a mag stripe and raised credit card digits on the face. You could use the card in either mode but merchants would get a break for using it in secure mode. I don't see many signs of this happening in the US though. I keep hearing that smartcards for financial transactions are big in Europe, but I'm not that familiar with the European financial industry so I don't know if this is true.

 

June 28, 2005

NYT reports that the State Department doesn't screen passports against blacklists of criminals or terrorists:
WASHINGTON, June 28 - The names of more than 30 fugitives, including 9 murder suspects and one person on the Federal Bureau of Investigation's most-wanted list, did not trigger any warnings in a test of the nation's passport processing system, federal auditors have found.

Insufficient oversight by the State Department allows criminals, illegal immigrants and suspected terrorists to fraudulently obtain a United States passport far too easily, according to a report on the test by the Government Accountability Office to be released Wednesday.

The lapses occurred because passport applications are not routinely checked against comprehensive lists of wanted criminals and suspected terrorists, according to the report, which was provided to The New York Times by an official critical of the State Department who had access to it in advance. For example, one of the 67 suspects included in the test managed to get a passport 17 months after he was first placed on an F.B.I. wanted list, the report said.

And this isn't even what State thinks is the real problem:

The real problem, Ms. Harty said, is the ease with which people can obtain fraudulent identification, like birth certificates, to apply for a passport. The State Department, she said, tries to block fraudulent applications by checking them against other records, like Social Security files.

Ms. Harty said that to screen passport applicants better, she had secured commitments from the F.B.I. and the federal Terrorist Screening Center to provide more complete access to records, including the comprehensive list of suspected terrorists.

"Nobody should have a passport in an attempt to flee from prosecution," Ms. Harty said.

If you're going to have a watchlist-based system, it helps to (1) make it difficult for people to get false documents and (2) actually check the blacklist. Outstanding!

 

June 27, 2005

Spaf and Schneier weigh in in an article from Saturday's NYT about credit card theft:
"Right now it is very easy to get somebody's identity," said Eugene H. Spafford, executive director of the Center for Education and Research in Information Assurance and Security at Purdue University. "Plus there is a low threshold for authentication." To use someone else's credit card, for instance, all that is needed is the number, name, expiration date and, possibly, the three-digit security code. (In the CardSystems case, all that information was stolen.)

What may be additionally required, Mr. Spafford said, are stronger authenticators like so-called digital wallets, which contain all the data needed for transactions in encrypted form.

Some experts argue that protecting personal data is a hopeless task, that the emphasis should be on making transactions more secure. "Making information harder to use is the key," Mr. Schneier said. "Making it harder to steal is a dead end."

This deserves some unpacking.

The basic problem with credit card authentication is that the information required by the merchant to run your credit card is exactly the same information that you require to use it: the number, the exp date, the security code, and maybe your ZIP. Every time you give your credit card to someone in a restaurant, they have an opportunity to steal your card information (remember having to tear up your carbons?). And of course, Mastercard's database has exactly the same information. So, any compromise of the merchant's or issuer's systems leads to the attacker being able to forge credit card charges. Not all authentication systems are like this.

Credit card authentication is tricky because it involves a large number of parties (you, the merchant, two banks, VISA...) so let's take a step back and talk about a simple system: user authentication where I convince some remote computer that I am who I say I am. The way that all of these systems work is that the server has some verifier V that it stores with my record. I have some secret information S that corresponds to V. When I authenticate, I provide an authenticator A (based on S) which the server checks against V.

Roughly speaking, there are three classes of system:

V=S and the server stores S.
Anyone who compromises the server once can simply steal S and can then impersonate me to the server any time that they want. These systems are usually called password-equivalent.

A=S.
This is how old-style UNIX passwords (and a lot of SSH password authentication) work. The system stores a password hash and the user gives the server his password. If the server is compromised and the attacker steals the password file, then he can't directly impersonate the user. However, if he has long-term access to the server he can of course capture the user's password when it comes over the network. In addition, because the system needs to be able to check S against V, an attacker who has stolen V can verify whether a given guess at S is correct by checking it against V. This isn't a problem if S is well chosen, but if it's, for instance, a common word, then it's pretty easy to guess. This is called a dictionary attack.

S,A, and V are all different.
This is how public key authentication works. You store the private key (S). The server stores the public key (V). The server provides some challenge which you sign to create A. The server can verify that you know S but can't use that information to impersonate you to anyone else. (Note for crypto-nerds: the non-password-equivalent zero-knowledge password protocols fit roughly into this category as well.)
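To make the three classes concrete, here's a minimal sketch. The hashing details are arbitrary, and the third class uses the third-party Python cryptography package purely for illustration:

import hashlib
import hmac
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Class 1: V = S. The server stores the secret itself, so one server
# compromise yields a credential as good as the original.
client_s = b"s3cret"
server_v = client_s
assert hmac.compare_digest(server_v, client_s)

# Class 2: A = S but V = hash(S). Stealing the password file doesn't
# directly yield S, but it enables dictionary attacks on weak choices,
# and S still crosses the wire on every login.
salt = os.urandom(16)
server_v = hashlib.sha256(salt + client_s).digest()
assert hmac.compare_digest(server_v, hashlib.sha256(salt + client_s).digest())

# Class 3: S, A, and V all differ. The server holds only the public key
# and a signature over its challenge; neither lets it impersonate you.
s = Ed25519PrivateKey.generate()   # S: private key, stays with the client
v = s.public_key()                 # V: public key, stored by the server
challenge = os.urandom(32)         # fresh for every authentication
a = s.sign(challenge)              # A: signature over the challenge
v.verify(a, challenge)             # raises InvalidSignature if forged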

From a security perspective, public-key type systems are vastly superior. However, their deployment has been spotty at best. The major reason is that it requires changing both the client and the server. In particular, computing A from S is nontrivial and requires software on the client side, which is an obvious deployment hassle. This used to be a big problem with login authentication, but in the wake of the wide deployment of SSH it's starting to go away--though other usage and user education issues still remain. However, it's easy to see how it could be a problem with credit card systems, since the terminals used to authenticate credit cards are extremely primitive and the protocols are difficult to change.

Credit card authentication is more complicated but basically a password-equivalent scheme. You give your credit card to the merchant, they give it to their acquiring bank, and so on all the way down the line. Anyone in this chain has the opportunity to steal your credit card number and use it. Merchants routinely keep copies of your credit card number to enable features like Amazon one-click, so merchant database theft is a real problem (as is merchant fraud). And of course the back-end probably has all the credit card numbers sitting in some database, ready for theft (this isn't the only possible implementation, but it's the easiest one).

So far, most of the reactions of the credit companies have been to add new authenticators that aren't actually printed on the credit card (your ZIP code and the security code) but since those authenticators need to be provided to any merchant you want to do business with they won't stay secret for long. As long as the system is designed so that your "secret" information and the information the merchants get are one and the same, credit card theft and fraud will continue to be a real problem.

In my next post on this topic, some of the obstacles to removing password equivalence from the credit card network and a special SET retrospective.

 
The Grokster decision was supposed to come out today. Here's what's on the NYT front page right now:
Justices Rule Internet File-Sharing Services May Be Sued for Encouraging Illegal Sharing of Music and Movies
Looks like the Supremes ruled against Grokster. We'll have to wait for the opinions to see how.

UPDATE: The opinions are up here, here and here. I haven't read them yet, though.

 

June 26, 2005

In a comment on Crooked Timber, Dan Simon writes:
Jet, I'm using "judicial activism" in the only sense in which it makes sense to me: the (ab)use of judicial authority to overrule democratically enacted laws. And yes, Pat, many hypocrites do "tend to love it or hate it depending on whether it goes their way" but I'm not one of them. I take a principled stand against it, irrespective of its political direction in any particular case.

Dan's position (at least as stated above) seems to embody a complete rejection of judicial review. If Congress were to pass a law that directly violated the Constitution (e.g., declaring Christianity the national religion and banning the practice of Islam), then Dan's position would imply objecting to the Supreme Court overruling said law. It seems to me that there are two primary positions that are consistent with this view:

  1. The government should be a pure democracy without any limitations on the popular will.
  2. There should be limitations on the popular will but it's not the court's job to enforce them.

Now, (1) is certainly a reasonable position in some political science sense (though I think that the public choice literature indicates that it's a bad idea), but it's pretty clear that it's not the form of government we have: the Constitution at least nominally constrains the government by certain rules. Indeed, the oaths taken by Senators, Representatives, and the President explicitly commit them to support and defend the Constitution. So, that leaves us with (2).

The question at hand, then, is what check there is against those officers violating their oaths and acting in an unconstitutional fashion. One view would be that there's no check other than voting them out of office. However, if you believe, as I do, that many of the activities that Congress has historically been willing to engage in are blatantly unconstitutional, then this is fairly small comfort, since we're more or less back to unrestricted democracy, which, while perhaps fine, isn't the system we're supposed to have.

None of this is to say, of course, that I think that every time the Supremes rule a law unconstitutional they've made the right decision. The question is whether as a structural matter they should be able to do so at all. And if you think they shouldn't, then what's the point of having a constitution that limits the power of the government?

 

June 25, 2005

Back when I was in high school and taking multiple choice tests, the conventional wisdom was to trust your first choice and not to change your answer unless you were sure. Turns out that this is the wrong answer. Kruger et al. report in J. Pers. Soc. Psychology that when people are down to two answers, their first instinct is more often wrong than right. However, it turns out that they regret the times when they incorrectly change from their first instinct more than the times they incorrectly stick with it, and they remember those cases better. The authors suggest that this accounts for the belief that one ought to stick with one's first instinct.

Of course, for students of the economic psychology literature, this result shouldn't be too surprising. The data consistently demonstrates that people's ability to estimate quantities is fairly bad and is improved by systematic techniques for arriving at the answers. Of course, that doesn't necessarily mean that your second intuition is any better, but if you're actually trying to consider the problem rather than just use your gut, then it's probably worth going with that answer...

 

June 24, 2005

One of my big complaints about technical writing is the tendency authors have to focus on details while missing the big picture. This is a particular annoyance when you're trying to review some system that's described purely in terms of some PDUs1 and a state diagram and the first thing you have to do is figure out how the protocol works before you can actually review it. RFC 4101 is a probably quixotic attempt to help correct this situation:
Writing Protocol Models
Eric Rescorla and the Internet Architecture Board

The IETF process depends on peer review. However, IETF documents are generally written to be useful for implementors, not reviewers. In particular, while great care is generally taken to provide a complete description of the state machines and bits on the wire, this level of detail tends to get in the way of initial understanding. This document describes an approach for providing protocol "models" that allow reviewers to quickly grasp the essence of a system.

Also, today the IESG approved Datagram TLS and Pre-Shared Key Ciphersuites for Transport Layer Security (TLS) as Proposed Standards.

1. Protocol Data Units---protocol messages

 
For endurance athletes, the two staple workouts are easy distance and interval training. If you want to get good, you've just got to put in the mileage. Or so we thought until recently. Eu-Jin Goh pointed me to this paper in the June Journal of Applied Physiology. The authors report significant improvement in endurance performance in untrained but active subjects with six sessions of sprint intervals totaling less than 15 minutes over two weeks:
Six sessions of sprint interval training increases muscle oxidative potential and cycle endurance capacity in humans

Kirsten A. Burgomaster,1 Scott C. Hughes,1 George J. F. Heigenhauser,2 Suzanne N. Bradwell,1 and Martin J. Gibala1

1Exercise Metabolism Research Group, Department of Kinesiology, and 2Department of Medicine, McMaster University, Hamilton, Ontario, Canada

Parra et al. (Acta Physiol. Scand 169: 157-165, 2000) showed that 2 wk of daily sprint interval training (SIT) increased citrate synthase (CS) maximal activity but did not change "anaerobic" work capacity, possibly because of chronic fatigue induced by daily training. The effect of fewer SIT sessions on muscle oxidative potential is unknown, and aside from changes in peak oxygen uptake (VO2peak), no study has examined the effect of SIT on "aerobic" exercise capacity. We tested the hypothesis that six sessions of SIT, performed over 2 wk with 1-2 days rest between sessions to promote recovery, would increase CS maximal activity and endurance capacity during cycling at ~80% VO2peak. Eight recreationally active subjects [age = 22 +/- 1 yr; VO2peak = 45 +/- 3 ml·kg^-1·min^-1 (mean +/- SE)] were studied before and 3 days after SIT. Each training session consisted of four to seven "all-out" 30-s Wingate tests with 4 min of recovery. After SIT, CS maximal activity increased by 38% (5.5 +/- 1.0 vs. 4.0 +/- 0.7 mmol·kg protein^-1·h^-1) and resting muscle glycogen content increased by 26% (614 +/- 39 vs. 489 +/- 57 mmol/kg dry wt) (both P < 0.05). Most strikingly, cycle endurance capacity increased by 100% after SIT (51 +/- 11 vs. 26 +/- 5 min; P < 0.05), despite no change in VO2peak. The coefficient of variation for the cycle test was 12.0%, and a control group (n = 8) showed no change in performance when tested ~2 wk apart without SIT. We conclude that short sprint interval training (~15 min of intense exercise over 2 wk) increased muscle oxidative potential and doubled endurance capacity during intense aerobic cycling in recreationally active individuals.

This is a very interesting result. A doubling of endurance is an amazing improvement. Two caveats: (1) eight subjects in each group is really small; (2) this is in active but untrained subjects, so it's not clear that it will work for people who are trained. Nevertheless, as Kevin Dick observes, if reproducible this would be a great way for people to kick-start their training, getting themselves to the point where they can do longer workouts.

 

June 23, 2005

From Rubicon: The Last Years of the Roman Republic:
All the same, Crassus was not the only man to have dreamed of pushing Rome's supremacy to the limits of the world. Something was changing in the mood of the Republic. Globalizing fantasies were much in the air. The globe itself could be found on coins as well as triumphal floats.

Which raises the question: what's on a Roman globe? Even with the best available mapmaking, they can't have known about North America or much of Asia and Africa. So, what was on the other sections? Blank space? "Here there be dragons," as on European maps? I haven't been able to find any high enough resolution pictures of Roman coins to get a good answer...

 

June 22, 2005

Today's NYT carries a story about improvements in IEDs used by Iraqi insurgents. According to the article, the new IEDs use shaped charges for better destructive power. In addition, they've started to use infrared remote controls rather than radio, because IR is harder to jam.
 
Today's TechWeb News has an article about the dire threat that iPods pose to enterprise security. The latest round of handwringing was set off by Abe Usher's demonstration of a program that copies all the document files on a hard drive onto your iPod. Now, we've known that this was possible for years, and writing a program like this is incredibly trivial (it's a one-liner on UNIX). Usher's primary contribution appears to have been to give this attack a cool-sounding name: "pod slurping".
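For what it's worth, here's roughly what such a program amounts to, as a Python sketch; the mount point and extension list are made up, and the UNIX one-liner is more or less find / -name '*.doc' -exec cp {} /mnt/ipod \;

import shutil
from pathlib import Path

DEST = Path("/Volumes/IPOD/.slurped")  # hypothetical iPod mount point
EXTENSIONS = {".doc", ".xls", ".pdf", ".txt"}

DEST.mkdir(parents=True, exist_ok=True)
for p in Path("/").rglob("*"):
    # Copy anything that looks like a document; skip whatever we can't
    # read, just as an attacker with an ordinary account would have to.
    if p.is_file() and p.suffix.lower() in EXTENSIONS:
        try:
            shutil.copy2(p, DEST / p.name)
        except OSError:
            pass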

It's not clear why Usher decided to focus on the iPod, since the same attack is possible with USB memory sticks, which are now so small they can fit easily in your wallet. Anyway, as I observed the last time this came up, stopping people who have physical access to your machines from stealing your confidential information is basically impossible--unless you're willing to strip search them on the way in and out. And this has been true pretty much ever since the invention of compact removable media--even a 5.25" floppy can carry plenty of confidential stuff. The take home is simple: if you don't trust people, don't let them near your computers, or any other confidential stuff for that matter.

 

June 21, 2005

From Newsweek:
Counterinsurgency experts are alarmed by how fast the other side's tactics can evolve. A particularly worrisome case is the ongoing arms race over improvised explosive devices. The first IEDs were triggered by wires and batteries; insurgents waited on the roadside and detonated the primitive devices when Americans drove past. After a while, U.S. troops got good at spotting and killing the triggermen when bombs went off. That led the insurgents to replace their wires with radio signals. The Pentagon, at frantic speed and high cost, equipped its forces with jammers to block those signals, accomplishing the task this spring. The insurgents adapted swiftly by sending a continuous radio signal to the IED; when the signal stops or is jammed, the bomb explodes. The solution? Track the signal and make sure it continues. Problem: the signal is encrypted. Now the Americans are grappling with the task of cracking the encryption on the fly and mimicking it--so far, without success. Still, IED casualties have dropped, since U.S. troops can break the signal and trigger the device before a convoy passes. That's the good news. The bad news is what the new triggering system says about the insurgents' technical abilities.

Kind of puts your communications security problems into perspective, doesn't it?

 

June 20, 2005

Radley Balko points to an article about a drug testing device that appears to be rather too sensitive:
A Welsh assembly member who called for his colleagues to volunteer to try out a new drug detection machine has tested "positive" for cannabis himself.

Swabs taken from Conservative AM William Graham's hands at the Welsh assembly building revealed traces of the drug, probably from a door handle.

He had arranged for police to come in to demonstrate the hi-tech machine.

...

It is so sensitive it can detect the equivalent in drugs of a grain of salt in an Olympic-sized swimming pool.

A similar test of fellow AM Ms Hart showed that although she had not been using drugs, her hands had been cross contaminated with traces of the substance, from door handles, money or other public areas.

"You could pick it up from anywhere couldn't you?" she said.

The machine is used by Gwent Police to test people queuing for a night club, and to detect traces of drugs in a house where the actual substances had already been removed.

...

Divisional Crime Prevention Officer Pc Simon James said that while the results could not be used as evidence, they can indicate to officers that a person should be searched or questioned.

Let me see if I have this right: when ordinary people going into a club show up positive, they're subject to suspicion and potentially questioning or search, but when politicians come up positive, we just accept that they're not on drugs because they say so? If they're going to maintain their innocence, then a reasonable person would have to conclude that this device has an unacceptably high false positive rate. Unfortunately, we're dealing with politicians:

Mr Graham, who represents South Wales East, said: "Anything that deters people from taking drugs is a good thing. If people know this thing exists then they will know that they might get caught".

Outstanding.

 
Watchers of Canadian culture will want to read Margaret Wente's 7 things You Can't Say in Canada as well as seven more from Colby Cosh. Nominees for an equivalent list for the US are now open in the comments section.
 
I've been meaning to write something about Colin Percival's cache timing attack, but haven't gotten around to it. A reader writes in to prompt me, so here goes.

There's been a lot of debate about how serious this attack is. The Linux discussion in particular has been fairly acrimonious, with Linus Torvalds arguing that the attack isn't interesting and Percival pitching its importance fairly hard. The following is my summary of the attack and its implications.

The basic observation is that in Intel's hyperthreaded CPUs, multiple hyperthreads (and therefore operating system processes) can share access to the cache. Therefore, by observing when its data is evicted from the cache, an attacker can get information about the state of a process he doesn't control. Percival describes two uses for this observation: a covert channel between two cooperating processes and a malicious process which obtains information about the cryptographic keying material of a process owned by an unsuspecting victim. The covert channel isn't very interesting. The only systems in which people have made any real attempt to remove covert channels are multi-level secure systems, which haven't exactly made it big in consumer use. Percival seems to be mostly using it to help him describe the other attack, which is more interesting.

The key stealing attack takes advantage of the observation that the cache behavior of RSA operations (really, modular multiplies in general) depends on the key. This allows a monitoring process to learn about bits of the key by watching the victim process's cache behavior. The details aren't really important, but you should know the result: Percival's attack allows the attacker to recover enough bits of the RSA private key to recover the whole thing. (Of interest only to nerds: OpenSSL uses the Chinese Remainder Theorem to perform its private key operations, so what you actually do is recover parts of p and q and then factor the modulus.)

The most important thing to know is that the attack only works if the attacker can actually run programs on your computer. So, this is only realistically a problem if the attacker either has a legitimate account on your system or has broken in. So, if you don't let other people use your computer, then this would have to be combined with a remote attack on your computer. In general, commercial sites which do real transaction volume run their servers on dedicated machines, so it's not likely that ordinary people will have logins to those machines. The most likely environment in which this attack makes sense is shared servers like those run by hosting providers. One user on such a system could potentially capture the private key of another user's web server. More interestingly, he might attack the server's SSH key and try to steal users' passwords.

The second thing to know is that an attacker who is running with root/Administrator privileges can easily snoop on memory and thus can steal your private key without resorting to anything this sophisticated. Because most operating systems are riddled with local "privilege escalation" attacks, unless you're exceedingly careful about your system security, an attacker can probably just escalate to root and then steal your private key directly. This fact has been the source of most of the debate, with Torvalds espousing the view that there's nothing that special about this attack and Percival asserting that one should try to close all vulnerabilities.

The final factor to consider is whether attackers will really steal private keys. Consider that every vulnerability in a SSL-enabled Web server is a possible avenue to steal that server's private key (it's possible to run with your private key in a hardware security module to defend against this, but that's comparatively rare). Yet, I know of no malware designed for this purpose and in fact, have yet to hear of an attack on a commercial system that involved private key theft.

None of this is to say that Operating System vendors or administrators shouldn't be concerned about this attack. Any attack that potentially leads to compromise of sensitive material needs to be addressed at some point. However, I don't consider this to be any worse than your average privilege escalation attack.

 

June 19, 2005

News.com reports that the DoJ wants ISPs to retain logs of customer activity.
In Europe, the Council of Justice and Home Affairs ministers say logs must be kept for between one and three years. One U.S. industry representative, who spoke on condition of anonymity, said the Justice Department is interested in at least a two-month requirement.

Justice Department officials endorsed the concept at a private meeting with Internet service providers and the National Center for Missing and Exploited Children, according to interviews with multiple people who were present. The meeting took place on April 27 at the Holiday Inn Select in Alexandria, Va.

"It was raised not once but several times in the meeting, very emphatically," said Dave McClure, president of the U.S. Internet Industry Association, which represents small to midsize companies. "We were told, 'You're going to have to start thinking about data retention if you don't want people to think you're soft on child porn.'"

This is phrased as being about retention, but the ISPs can only retain what they captured in the first place, and in many cases the answer is surprisingly little.

The first thing you have to realize is that the Internet isn't like the telephone network. In the PSTN, each call setup and termination requires explicit creation of state at each switch along the path. The Internet (and packet switched networks in general) is different: state typically exists only at the endpoints. The intermediate routers just forward packets. For instance, when you make a Web (HTTP) connection to Amazon.com, there's TCP state on your client and your server (and maybe on a firewall or two in between) but as far as the intermediate routers are concerned, they just see a bunch of mostly uninterpreted IP datagrams, with traffic from multiple senders and receivers mixed together. The routers don't even attempt to reassemble them into connections, nor do they need to. That's part of the elegance of the Internet design. What logging occurs generally happens at the connection endpoints.

Web (HTTP)
The simplest case is HTTP. If you connect to a Web server, that server will keep some logs of your activity. Your web browser may keep logs too, but of course those live on your computer. So, your ISP generally doesn't keep logs of your Web browsing. On the other hand, if your ISP runs your Web site for you, they probably keep logs, but that's in their capacity as your Web server, not as your ISP. For instance, EG runs at Dreamhost, but they're my hosting provider only. They don't provide my home Internet service.

Mail (SMTP)
There's a lot more opportunity to log e-mail traffic. Most home users get their e-mail service from their ISP. What this means in practice is that when mail is sent to them it gets delivered to the ISP's e-mail server. The user reads their mail by contacting the ISP's server to pick it up, using POP, IMAP, or Web mail. Because the mail server is involved in mail delivery—and typically has the mail lying around for a while—it's easy for it to keep logs, and standard mail servers do so by default. These logs typically contain to/from information and the disposition of the mail, as well as a timestamp. Often, when you actually read your mail is logged too. Because the mail server has access to the content (i.e., the message body), it can of course keep a copy, but standard practice isn't to do so.
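As a purely hypothetical illustration (real formats vary from one mail server to another), an entry of that kind might look like:

2005-06-19 09:12:31 from=<alice@example.com> to=<bob@example.net> status=delivered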

When users send mail, they typically deliver it to the ISP's mail server, which then takes care of the ultimate delivery. This has the advantage that it's "fire and forget". If the message can't be delivered right away the mail server will keep trying even if your machine is disconnected from the Internet. A lot of ISPs actually require their users to use their mail servers under the theory that it helps them suppress spam. As before, these transactions are easy to log, and as far as I know this is standard practice.

IM
The situation with IM is fairly complicated. The general rule is that whoever runs your IM service (e.g., Yahoo, AOL, MSN) has an opportunity to log but it's inconvenient for your ISP. If you run your own server (e.g., Jabber/XMPP) then whoever runs that server can log traffic, as with HTTP. Whoever runs the service has the opportunity to access the actual data traffic, but they typically don't.

Non-server logging
Of course, just because something isn't logged now doesn't mean it couldn't be. In theory, ISPs could capture every packet that goes through their routers. They could decode them and synthesize their own logs or simply record them to disk for future processing. In practice, however, this would be a substantially nontrivial undertaking. The routers that ISPs use aren't set up to record this kind of detailed information. In practice, this probably means putting some sort of tap on the network. This is, of course, possible, but is a substantially different issue from merely retaining some logging information.

 
From The Hunt for Red October (1984):
During her last overhaul, the Dallas had received a very special toy to go with her BQQ-5 sonar system. Called the BC-10, it was the most powerful computer yet installed aboard a submarine. Though only about the size of a business desk, it cost over five million dollars and ran at eighty million operations per second.
Emphasis mine.

Twenty years and 13 or so cycles through Moore's law later, this kind of massive computing power comes in a rather more convenient package.

 

June 18, 2005

USA Today reports that some photo printers are refusing to print customer pictures which look "too professional":
Wal-Mart spokeswoman Jacquie Young said her company's photo departments are instructed to err on the side of protecting copyrights, even if that means a conflict with an insistent customer. She would not say what signs of professionalism the photofinishers are told to look for.

In the printing labs for the Kodak EasyShare Gallery, the photo Web site formerly known as Ofoto, professionally taken pictures are placed on the walls to remind technicians of such images' telltale signs, such as school photos and stylish backdrops in posed pictures of children.

The idea that photofinishers can be sued for copyright infringement for inadvertently printing copyrighted material is fairly problematic. There's just no reasonable way for them to detect copyrighted material. Obviously, it's desirable to prevent copyright infringement, but that doesn't mean that it's worth imposing arbitrary costs on everyone else in order to prevent it. The right analogy here is to common carriers, which generally have no liability even when they're used to transport illegal materials. I.e., FedEx doesn't get in trouble if someone ships drugs or weapons and isn't legally required to scan packages for them.

 

June 17, 2005

I get asked a lot about the performance of encryption. Here are some microbenchmarks that should give you a feel for the situation. Obviously, real protocols behave differently, but these numbers convey the orders of magnitude.

Symmetric Encryption Algorithms

Algorithm      Speed (MB/s)
DES            67
3DES (EDE)     24
AES-128        69
AES-256        55
RC-4           125

Message Digests

Algorithm      Speed (MB/s)
MD5            309
HMAC-MD5       315
SHA-1          116

Public Key Algorithms

Algorithm      Private Key Ops/s    Public Key Ops/s
RSA-1024       245                  4614
RSA-2048       41                   1411
DSA-1024       495                  410

All measurements taken using OpenSSL on a single-processor 3 GHz Pentium running FreeBSD.
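These figures are the sort of thing OpenSSL's built-in benchmark produces; if you want to check them against your own hardware, something along these lines should work, though the exact algorithm names vary a bit across OpenSSL versions:

openssl speed des des-ede3 aes-128-cbc aes-256-cbc rc4 md5 hmac sha1 rsa1024 rsa2048 dsa1024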

The take home message here is that well designed communications security systems are fast enough for almost any practical Internet communications scenario and most Intranet ones. For the few cases where you actually need speeds that approach or exceed 1Gb/s, acceleration hardware is readily available.

 

June 15, 2005

Anne Applebaum's column in today's WaPo makes the same point that security types have been making about airport security: it's almost certainly unjustifiable by any reasonable cost-benefit analysis.
This is not to say that the uniformed screeners aren't more professional than they were in the past or that their presence doesn't create a degree of psychological comfort, both for government officials, who can claim to be doing something to keep us all safer, as well as for those passengers who continue to believe that engaging in ritualistic shoe-removal gives them mysterious, magical protection against terrorism. On the grand scale of things, though, that's all it is: magical protection.

...

But, then, this isn't a country that has ever been good at risk analysis. If it were, we would never have invented the TSA at all. Instead, we would have taken that $5.5 billion, doubled the FBI's budget, and set up a questioning system that identifies potentially suspicious passengers, as the Israelis do. Even now, it's not too late to abolish the TSA, create a federal training program for airport screeners, and then let private companies worry about how many people to hire, which technology to buy and how long the tables in front of the X-ray machines should be (that last issue being featured in a recent government report). But every time that suggestion is made in Congress, someone denounces the plan as a "privatization" of our security and a sellout.

Which is why I conclude that we don't actually want value for money. No, we want every passenger to have the chance to recite that I-packed-these-bags-myself mantra to a uniformed official before boarding an airplane. Magic words, it seems, are what make Americans feel really safe.

The long lines and intrusive searches aren't a side effect of an effective screening process; they're an essential part of providing the appearance of security.

 

June 14, 2005

The State of Wisconsin Pharmacy Board has disciplined a pharmacist for refusing to fill an oral contraceptive prescription and refusing to transfer it to another pharmacist. In this case, the woman in question was getting a last minute refill and because another pharmacist was not on hand she actually missed doses, so this was a fairly substantial inconvenience. This isn't exactly what you'd call an ideal test case. The pharmacist's contract explicitly specified that he would furnish "All services generally performed by a registered pharmacist in the customary manner and extent ordinarily performed at pharmacies." so he was pretty clearly in breach of contract. Moreover, he appears to have claimed that he would make alternate arrangements, but in practice he did not do so. Nevertheless, the case makes fairly interesting reading. At the very least, it should make clear that a generic right not to fill prescriptions without a requirement to transfer them to another pharmacist will likely result in substantial inconvenience—and potential harm—to patients.
 

June 13, 2005

From The Subtle Knife, somewhere in a parallel universe, two children stop in an abandoned store:
Before they left, Will dropped some coins in the till behind the counter.

"What you doing?" she said.

"Paying. You have to pay for things. Don't they pay for things in your world?"

"They don't in this one! I bet those other kids en't paying for a thing."

"They might not, but I do."

This is obviously intended to be an indication of maturity--paying for what you take--but I'm not so sure. Will and Lyra are in a parallel universe which uses entirely different currency. Will is from our universe and our fiat money is totally different from whatever they use. From the perspective of the people who own the store, they might as well just be stealing whatever they take.

But of course, that's not the view from the perspective of the children: they have a limited amount of funds and so paying for their "purchases" constrains their ability to just take anything they want. Obviously, if there are other people who are going to be coming by (the owners or other looters), this is a good thing. But if nobody else is around, then this constraint just makes the situation worse. And, of course, if the economy has totally broken down, there's no way for them to earn more money, which is really inefficient if they're the only people around.

Unfortunately, real maturity often means that the right decisions aren't particularly clear.

 
My laptop computer has no firewire ports, so I can't use iPod's firewire-based charging. On my most recent trip I forgot my iPod charger and was reduced to using a friend's Mac to charge my 3G iPod. But here's the surprising thing: while she can (or at least says she can) charge her iPod (a U2 edition) via USB, I can't, even when I use her cable. I see that you can buy 3rd-party iPod USB chargers. Does anyone know if these are likely to work with my iPod? It sure would be nice not to have to carry the brick around.
 

June 12, 2005

In one of the last vestiges of airline route regulation, Southwest Airlines is legally forbidden from flying from their base at Dallas Love Field to any state besides Texas, New Mexico, Oklahoma, Kansas, Arkansas, Louisiana, Mississippi and Alabama. Southwest can't even sell you a ticket from Love to a destination outside these states even if they stop somewhere inside first. This law principally benefits American Airlines, which has its hub at Dallas-Fort Worth (DFW).

Southwest is pushing to have the law in question, the Wright Amendment, repealed, [*] but of course American is against it. The best part of this is American's statement opposing the change:

"This push by Southwest reflects the understandably selfish intentions of a company that today is roaming the halls of Congress seeking special favors. If Southwest were sincere about growing and competing, they would be flying from DFW Airport -- and they wouldn't need an act of Congress. We're confident the community will not let Southwest risk the quality of life for North Texans just to preserve and expand their monopoly at Love Field."

Ah, the genius of capitalism...

 

June 11, 2005

IAD terminal C (at least at gate C7) has a free wireless AP, SSID 05B404045912. Pretty convenient since it let me look at SeatGuru while talking to customer service. Over in terminal D, where I've just been rerouted to, all there is is the Red Carpet Club fee-for-service T-Mobile AP...
 
Daum and Lucks's demonstration of colliding PostScript files is getting a fair amount of attention. The attack is straightforward. They generate a pair of colliding prefixes A and B and then tack on a common PostScript program P. Because of the way that iterated hash functions work, H(A || P) = H(B || P), so now they have two files that collide. The trick here is the PostScript program, which actually contains two entirely separate documents. P then looks at its prefix and displays document 1 if the prefix is A and document 2 if the prefix is B.
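The property being exploited is that MD5, like other iterated hash functions, processes its input block by block, so once two equal-length, block-aligned prefixes collide, any common suffix preserves the collision. A Python sketch, with the colliding blocks left as placeholders since producing them is the hard part:

import hashlib

def still_collides(a, b, suffix):
    # If a and b are distinct MD5-colliding byte strings of equal length
    # (a multiple of MD5's 64-byte block size, as in the published
    # attacks), the internal hash state after processing a equals the
    # state after processing b, so any common suffix leaves the final
    # digests equal.
    return hashlib.md5(a + suffix).digest() == hashlib.md5(b + suffix).digest()

# a, b = <colliding blocks from Wang et al.'s MD5 attack>
# assert still_collides(a, b, postscript_program)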

Daum and Lucks argue that this shows that the current attacks on MD5 are serious:

Recently, the world of cryptographic hash functions has turned into a mess. A lot of researchers announced algorithms ("attacks") to find collisions for common hash functions such as MD5 and SHA-1 (see [B+, WFLY, WY, WYY-a, WYY-b]). For cryptographers, these results are exciting - but many so-called "practitioners" turned them down as "practically irrelevant". The point is that while it is possible to find colliding messages M and M', these messages appear to be more or less random - or rather, contain a random string of some fixed length (e.g., 1024 bit in the case of MD5). If you cannot exercise control over colliding messages, these collisions are theoretically interesting but harmless, right? In the past few weeks, we have met quite a few people who thought so.

With this page, we want to demonstrate how badly wrong this kind of reasoning is! We hope to provide convincing evidence even for people without much technical or cryptographical background.

Superficially, this is a convincing argument, but I don't think it holds up under examination. First, consider the scenario Daum and Lucks envision:

  1. Alice prepares the pair of colliding files.
  2. The signing party views the "innocuous" version in a PostScript viewer. This is a key point because if you look at the source of the PostScript file you can see both alternative documents (though of course one could obfuscate this...)
  3. The signing party signs the innocuous document.
  4. Alice transfers the signature to the "bad" version of the file and presents it to the relying party.
  5. The relying party then views the bad version (again in a PostScript viewer) and is fooled.

What makes this all work is that what's being signed is a program and that the victim only sees the program's output and is willing to sign based on that. But if you're willing to do that, you've already got a problem, even without compromise of digest functions. Consider the following document:

This file contains a simple JavaScript function that displays one document fragment if the current month is June and the other fragment if it isn't. The links below let you force the switch:
Click here to change to Not June mode
Click here to change to June mode

This technique lets us mount a simple attack: prepare a document like the one above. Set it to display the innocuous message from days 1-5 and then a less innocuous message after day 5. Get the signing party to sign sometime on day 1. Then on day 6 present it to the relying party. The signing party and the relying party see different things, just as in the Daum and Lucks case.

There are a few obvious objections here. The first is that this is an HTML file, not PostScript. PostScript does have conditionals, but it doesn't seem to have a Date operator. There is probably some other conditional you could use, but I haven't looked too hard. PDF, however, has support for JavaScript, so you may be able to make it work with PDF. In any case, it's not clear why one would think that people are more willing to sign PostScript than HTML.

Second, this attack isn't quite as elegant as the Daum/Lucks attack. The signing party might decide to look at the file later and notice what had happened. However, a Date is just the simplest kind of conditional. JavaScript is quite powerful, and you should be able to use more sophisticated mechanisms to figure out what to display, e.g., by checking some remote web page. Actually, if you have a network connection, you can mount this kind of attack without having any kind of program on the client: just have the "document" be an inline image linked to in the HTML file the victim signs. You can then make it appear any way you want whenever you want, and even condition the behavior on which computer is doing the asking.
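Stripped to its essence, the attack is just this (a hypothetical sketch, with Python standing in for JavaScript and the texts and cutoff chosen arbitrarily):

import datetime

# A "document" that is really a program: it renders one text for the
# signing party (who views it on days 1-5) and another for the relying
# party (who views it later). The file's bytes never change, so the
# signature stays valid; only what the viewer displays changes.
INNOCUOUS = "I agree to pay $10."
LESS_SO = "I agree to pay $10,000."

def render():
    return INNOCUOUS if datetime.date.today().day <= 5 else LESS_SO

print(render())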

The bottom line here is that you can't safely sign content that you didn't create based purely on the way it appears in some viewing application (this is one of the concerns with XML signatures as well [*]). Daum and Lucks have just found another way to demonstrate this.

 

June 10, 2005

I just tried to dereference www2006.org and was treated to Firefox printing "waiting for www2006.org" over and over. A little bit of protocol debugging clears up what's going on. Here's the HTTP request captured from the network:
New TCP connection #17: 192.168.1.115(63782) <-> augur.ecs.soton.ac.uk(80)

GET / HTTP/1.1
Host: www.www2006.org
User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.7.7) Gecko/20050508 Firefox/1.0.3
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive

www.www2006.org is CNAMED to augur.ecs.soton.ac.uk, so we connect there and ask for www.www2006.org in the Host header. But I typed in www2006.org, so why did we get www.www2006.org? Well, let's try dig...

[34] dig www2006.org

; <<>> DiG 9.3.0 <<>> www2006.org
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47778
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;www2006.org.                   IN      A

;; AUTHORITY SECTION:
www2006.org.            705     IN      SOA     dns0.webcentre.net. hostmaster.webcentre.net. 2005051701 7200 3600 604800 3600

;; Query time: 12 msec
;; SERVER: 64.102.6.247#53(64.102.6.247)
;; WHEN: Fri Jun 10 12:40:25 2005
;; MSG SIZE  rcvd: 94

Now things become clear: there's no IP address available for www2006.org, so Firefox tries prepending a www. to the front, in case I mistyped. There is an IP address for that:

[35] dig www.www2006.org

; <<>> DiG 9.3.0 <<>> www.www2006.org
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12134
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 6, ADDITIONAL: 7

;; QUESTION SECTION:
;www.www2006.org.               IN      A

;; ANSWER SECTION:
www.www2006.org.        553     IN      CNAME   augur.ecs.soton.ac.uk.
augur.ecs.soton.ac.uk.  1291    IN      A       152.78.68.160
...

So, we connect to augur.ecs.soton.ac.uk, and ask for www.www2006.org, which brings us back to the request at the top. Here's the response:

HTTP/1.1 302 Found
Date: Fri, 10 Jun 2005 17:55:04 GMT
Server: Apache/2.0.46 (Red Hat)
Location: http://www2006.org/
Content-Length: 287
Content-Type: text/html; charset=iso-8859-1
Via: 1.1 Application and Content Networking System Software 5.1.13
Connection: Close

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://www2006.org/">here</a>.</p>
<hr />
<address>Apache/2.0.46 (Red Hat) Server at www.www2006.org Port 80</address>
</body></html>

Unfortunately, this response is a redirect to www2006.org, so we go back to the beginning of the cycle, resulting in an infinite loop.

What we've got here is an interaction of DNS misconfiguration and a browser bug. Note that connecting to www.www2006.org and providing a Host header for www2006.org works fine.

 
One thing that's worth noting about restricting people from running OS/X on generic PC hardware is that trusted computing technology is of fairly little use. It's certainly easy enough to arrange for a trusted computing module in all legitimate Apple computers, but that doesn't buy you much, since it just amounts to the OS checking for the TC module, which is little better than serial number checks. The other alternative is to require part of the code to run in the TC module, perhaps by encrypting that section of code. However, then the attacker just reverse engineers that interface and replaces that section of the code. You could, of course, make the whole thing run in that kind of trusted hardware, but this isn't compatible with the general design of such systems, which is to mate a general purpose CPU with a small trusted computing base.
 
Apple's transition from PPC to Intel combined with their hardware-based revenue model creates an unusual set of imperatives for them: create a non-portable operating system—one which can't be run on commodity PC hardware. This goes against a 30+ year old trend towards maximal portability going back at least as far as the transition of UNIX to C. To make things even more confusing, OS/X is based (at least mostly) on FreeBSD, which runs on exactly such hardware, and Apple clearly wants to leverage as much of that common heritage as possible.

Now, in practice, Apple doesn't need to make it impossible to run OS/X on commodity machines, just irritating enough that people who otherwise would have bought Macs don't buy PCs instead. That's basically a matter of compensating for the price premium that Apple charges for its hardware. So, it's interesting to explore what Apple might do.

The simplest thing to do is to simply embed checks in the software to see if it's running on authorized hardware. For instance, every machine could have a serial number that's checked by the operating system. Potential candidates on commodity hardware include a CPU serial number like Intel at one point had, or the Ethernet MAC address. These serial numbers could be chosen from a range that was identifiable as being owned by Apple (MAC addresses are already assigned this way). They could also be digitally signed, but that's overkill. This sort of mechanism is fairly easy to counter. All the attacker has to do is find all the checks in the software and distribute a patch file that nulls them out. The defender can of course try to hide the checks, use different idioms, etc., so it's kind of an arms race, and given how frequently Apple delivers new software revisions, if there's any serious demand, patches will be available most of the time.
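A naive version of such a check is tiny, which is part of why it's so easy to find and null out. A hypothetical Python sketch (the vendor prefix is illustrative, not a real whitelist):

import uuid

APPLE_OUIS = {0x000D93}  # illustrative Apple-style OUI prefix

def looks_like_our_hardware():
    # uuid.getnode() returns the 48-bit Ethernet MAC address as an
    # integer (on a best-effort basis); the top 24 bits are the OUI.
    return (uuid.getnode() >> 24) in APPLE_OUIS

if not looks_like_our_hardware():
    raise SystemExit("unsupported hardware")
# An attacker just patches out this branch (or spoofs the MAC), which is
# why checks like this are a speed bump rather than real protection.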

Because Apple controls the hardware on which OS/X runs, there's something better they can do: simply arrange that all of their hardware components (BIOS, chipsets, graphics cards, etc.) have semi-proprietary interfaces that aren't available in commodity PCs and wire the interfaces to that hardware fairly deeply into their software. Obviously, people can replace these drivers, but we're now talking about very substantial amounts of work, more than the average person looking to save a couple of hundred bucks is going to be willing to do.

I imagine that this, plus some kind of per-copy license enforcement, is the approach Apple will take.

 

June 7, 2005

Palo Alto is voting today on a $493/year parcel tax (replacing the $293 parcel tax expiring in 2006) to go to Palo Alto Schools. Despite the two-thirds majority requirement, it looks like it's going to pass. Reasonable people can differ, of course, on whether it's a good idea to levy this kind of tax on property, but there's one pretty annoying feature:
Section 3. An optional exemption shall also be available for a person 65 years or older who owns and occupies as a principal residence a parcel (as defined above) and applies to the District for such exemption in accordance with guidelines established by the District.

This is a canny political move--conventional wisdom is that seniors disproportionately vote in this kind of special election--but it's hard to see how it embodies any sort of fairness. People's benefit from improved property values scales with the current value of their property, not their age.

 

June 6, 2005

CNN reports that UPS has lost backup tapes containing the identities of 3.9 million Citigroup customers [*].
"We deeply regret this incident, which occurred in spite of the enhanced security procedures we require of our couriers," Kevin Kessinger, executive vice president of Citigroup (Research), said in a statement. "Beginning in July, this data will be sent electronically in encrypted form," said Kessinger, who heads the company's consumer finance business in North America.

In its letter, New York-based Citigroup told the people affected there was "little risk of your account being compromised because you have already received your loan."

"No additional credit may be obtained from CitiFinancial without your prior approval, either by initiating a new application or by providing positive proof of identification," the nation's No. 1 financial services company said in the letter

It wouldn't be crazy to ask why the records weren't encrypted in the first place.

That said, it sounds like Citi has the right idea--make it harder to use this kind of personal data to execute financial transactions. As far as I can tell, the battle to keep this data secure is pretty much lost; it's time to focus on damage control.

 

June 5, 2005

A distributed system is one on which I cannot get any work done because some machine I have never heard of has crashed.
   —Leslie Lamport

The first thing I noticed when I went to check out my books at the library today was that there was a long line. It quickly became apparent why: the automatic checkout machines were down. When we got up to the front of the line, the checkout librarian started checking us out manually--writing down our card numbers and books on a piece of paper. Here's how the conversation went (from memory, so only approximately accurate):

EKR: What's the problem?
Librarian: Our computers are down.
EKR: I didn't realize they were so brittle.
Librarian: Well, our Internet connection is down.
EKR: That's not a very good design.
Librarian: Yes it is. The libraries all use the same checkout system and so if the Internet connection is down then the other libraries have no way of knowing if something has been checked out.
EKR: Well, you could just check things out on the computers here and then synch things up when the Internet connection comes back online.
Librarian: No. The Internet connection is either up or it's down.
EKR: No, I mean you can just check people out here and then when the connection comes back online you just upload the changes.
Librarian: But when people return something at Mitchell Park it shows up here right away. We can't do that if the connection is down.

At this point she had finished writing down my books and there were people behind me, so I gave up. But the great thing about blogs is that now I can talk about it here.

Say you want to have a distributed system like this. You've got two branches, which should have a common view of the universe. Call them Alpha and Beta. So, when someone checks out a book at Alpha, Beta knows about it and vice versa. In order to achieve this, you either have Alpha and Beta linked up to each other or to a common central server.

The central server is easier to explain so let's start with that. In the basic design, the computers at Alpha and Beta are dumb and all the information is stored at the central server--call it Central. Whenever anyone at Alpha or Beta wants to know anything, it asks Central. Whenever they want to change anything, they tell Central to change it. And when they want to know the state--even on something they just changed--they ask Central.

This design has the nice property that Alpha and Beta can never get out of synch, since all the brains are at Central (I'm deliberately ignoring database locking here for the moment). However, it has the annoying property that it's really slow, because neither Alpha nor Beta can ever display anything without talking to Central first. The natural fix for this is for Alpha and Beta to each maintain a replica of the database locally. Then, when they want to display something they just read it out of their replica.1 Of course, if Alpha changes anything it needs to tell the central server, which notifies Beta, and vice versa. Note that at this point it becomes clear that you don't really need the central server. Alpha and Beta can just maintain replicas and notify each other whenever anything changes. There are advantages to having a central server, but it's easiest to explain the rest of this if we assume that there are just two machines connected together.
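Here's a minimal sketch of the replica scheme just described, with the central server factored out as suggested (Python, made-up names, and I'm ignoring failures and locking just as the text does):

    class Replica:
        def __init__(self, name):
            self.name = name
            self.db = {}       # local copy of the shared state
            self.peers = []    # the other replicas to notify

        def write(self, key, value):
            self.db[key] = value              # apply locally first...
            for peer in self.peers:           # ...then tell everyone else
                peer.apply_remote(key, value)

        def apply_remote(self, key, value):
            self.db[key] = value              # don't re-notify: avoids loops

        def read(self, key):
            return self.db.get(key)           # reads never touch the network

    alpha, beta = Replica("Alpha"), Replica("Beta")
    alpha.peers, beta.peers = [beta], [alpha]
    alpha.write("book-42", "checked out")
    assert beta.read("book-42") == "checked out"

Reads are now fast and local; the price is that the write path depends on the network, which is exactly where disconnection bites.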

This all works fine as long as all the machines are connected all the time. But what happens if for some reason one becomes disconnected? There are four basic strategies for dealing with disconnected operation:

  1. Forbid it. If Alpha's network connection goes down, no users at Alpha can look up or check out books.
  2. Read-only. If Alpha's network connection goes down, people at Alpha can look up stuff, but not check stuff out.
  3. Partial write. Alpha and Beta each get assigned some subset of the database. They're allowed to read anything in the database but can only write their assigned section. When the network comes back online, they synch up. Note that if Alpha changes something while they're disconnected and Beta tries to read it, it gets the wrong result until they're reconnected.
  4. Concurrent write. Each of Alpha and Beta is allowed to read and write any record. When the network connection comes back online they synch up. This may mean resolving conflicts in records which have been changed by both machines. This isn't so much of a problem in the library context because any patron or any copy of a given book can only be at one location at once (though think about what happens if I put a hold on a book from location Alpha and then go to Beta to pick it up and check it out). However, in other systems it's common to have records changed at two places simultaneously. Re-synchronization in such systems can be a real pain in the ass (cf. CVS).

What's weird about the system at the Palo Alto Library is that they're pretending to run a Read-Only system when they're actually using what's basically a Partial Write scheme. They're letting people check books out, but instead of keying the checkouts into the computer, they're writing them down on paper with the intention of keying them in when the connection comes back up. A reasonable database system would let you do all of this in the computer and then synch up automatically when the Internet connection came back. Apparently the Palo Alto librarians aren't IT-savvy enough to demand a reasonable system.
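For what it's worth, the synch-up-later behavior I was asking for is about ten lines of code. A sketch (Python, hypothetical names; the conflict resolution of strategy 4 omitted):

    class BranchTerminal:
        def __init__(self):
            self.online = True
            self.pending = []   # checkouts recorded while disconnected

        def checkout(self, card, book):
            record = (card, book)
            if self.online:
                self.replicate(record)
            else:
                self.pending.append(record)   # no paper required

        def reconnect(self):
            self.online = True
            while self.pending:               # replay queued checkouts in order
                self.replicate(self.pending.pop(0))

        def replicate(self, record):
            print("synched:", record)         # stand-in for the real upload

    t = BranchTerminal()
    t.online = False
    t.checkout("card-123", "Century Rain")
    t.reconnect()   # prints: synched: ('card-123', 'Century Rain')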

1. In a lot of environments it's not efficient to keep a full replica. In particular, if there's a lot of locality of reference (Alpha mostly works on some subset of the data, Beta mostly works on another) then you can get a more efficient system with caches rather than replicas. But library systems aren't necessarily this way.

 
From Alastair Reynolds's Century Rain:
'This is Niagara,' said Skellsgard. 'As you might have gathered, he's a citizen of the Federation of Polities.'

'It's all right,' Niagara said. 'I won't be the least bit offended if you call me a Slasher. You probably regard the term as an insult.'

'Isn't it?' Auger asked, surprised.

'Only if you want it to be.' Niagara made a careful gesture, like some religious benediction: a diagonal slice across his chest and a stab to the heart. 'A slash and a dot,' he said. 'I doubt it means anything to you, but this was once the mark of an alliance of progressive thinkers linked together by one of the very first computer networks. The Federation of Polities can trace its existence right back to that fragile collective, in the early days of the Void Century. It's less a stigma than a mark of community.'

I eagerly await more tales of the final battle between the Slashers and their mortal enemies the Farkers.

 

June 4, 2005

Neal Asher Gridlinked, The Line of Polity, The Skinner, and Brass Man

Fairly hard science space opera. The setting is reminiscent of Iain Banks's Culture novels. The civilization at the center of the story is called the Human Polity but it's really run by AIs. Most of the stories center on the interactions between the Polity and the less advanced human worlds at its periphery. The most noticeable characteristic of Asher's books is the many varieties of technologically enhanced humans (whether through cybernetic, biological, or nanotechnological means) with which he populates them.

John Barnes The Giraut Leones series:
A Million Open Doors, Earth Made of Glass, and The Merchants of Souls (this is far weaker).
In the far future, humanity has settled most of the nearby star systems using sublight ships but in the process has fragmented into the "Thousand Cultures": designer civilizations based on preservation of individual ethnic groups, literary traditions, or as-yet untried political theories. Everything changes with the invention of instantaneous matter transmission, which forces the cultures back together.

The first, and best, of these books is A Million Open Doors. The protagonist, Giraut Leones, is a 22ish musician in a culture based on an idealized version of French troubadour culture. He gets sent on a mission to Utilitopia, a culture founded on Rational Christianity, which considers emotion weak and monetary transactions the only moral way to relate to other people, and somewhat accidentally starts a revolution. What's particularly fine here is how effectively Barnes manages to portray the cultures, which are completely artificial and yet somehow believable.

Barnes is fairly obsessed with violence and arguably misogynistic. Characteristic of these are Kaleidoscope Century (extremely violent) and Candle (less violent), which are both set in the "Meme Wars" universe. So is The Sky so Big and Black, which I haven't read. Mother of Storms is in a different universe but equally violent and probably more misogynistic.

Barnes has also written a lot of pulp: Patton's Spaceship, Washington's Dirigible, and Caesar's Bicycle are fairly readable, though schlocky.

Peter F. Hamilton The Neutronium Alchemist series: The Reality Dysfunction Part I: Emergence, The Reality Dysfunction Part II: Expansion, The Neutronium Alchemist Part I: Consolidation, The Neutronium Alchemist Part II: Conflict, The Naked God Part I: Flight, The Naked God Part II: Faith

Over 3,000 pages of wide scope space opera. Humans have expanded throughout the galaxy and are divided into two major sects: Christian Adamists who reject biotechnology and Edenists who use it extensively and have a telepathic link to each other and their (biological) ships and habitats. Aliens are in the picture, but don't play any active role. The basic plot concerns humanity's first encounter with what Banks calls an "Outside Context Problem"--an aggressive force they have no real way to counter. A Second Chance at Eden is set in the same universe but not in the same plotline.

Fallen Dragon and Pandora's Star (first in a series) are pretty solid. I'm not a big fan of the Greg Mandel mysteries (A Quantum Murder and The Nano Flower) but some people like them. They're SF mysteries set in the fairly near future.

Karl Schroeder Ventus
Another post-singularity story. This one's about the hunt for a fragment of an evil godlike intelligence which is bent on resurrecting itself and taking over.

Permanence
An interesting premise: we've settled stars in the local area using sublight drives. Shortly afterward, we invent an FTL drive. The only problem is that it can't be started in most of the star systems we've colonized, which are centered on brown dwarf stars. This creates a two-class system: worlds which have access to FTL and those which don't. Unfortunately, the actual plot lags quite a bit. Not too bad, though.

 
Peter Gutmann is not a big fan of X.509 [*]:
Denis Pinkas writes:

>The Directory (i.e. X.500) failed, but the good part of it, i.e. X.509,
>remains.

"good"?  Hmm [flips through dictionary]... oh I see, you're using "good" here
to mean "ample, substantial", as in "The PKIX profile of X.509 covers a _good_
part of 2,000 pages of text".  Well, there can be no doubt that X.509 has
become gooder than X.500 ever was, and is only going to become gooder in the
future.

Peter.

See also Peter's X.509 Style Guide.

 

June 3, 2005

Ebru Demir and Barry Dickson have shown that forcing the male splicing pattern of a single gene in female Drosophila causes them to display male sexual behavior, including courting females rather than males [*]:
fruitless Splicing Specifies Male Courtship Behavior in Drosophila
Ebru Demir and Barry J. Dickson*

All animals exhibit innate behaviors that are specified during their development. Drosophila melanogaster males (but not females) perform an elaborate and innate courtship ritual directed toward females (but not males). Male courtship requires products of the fruitless (fru) gene, which is spliced differently in males and females. We have generated alleles of fru that are constitutively spliced in either the male or the female mode. We show that male splicing is essential for male courtship behavior and sexual orientation. More importantly, male splicing is also sufficient to generate male behavior in otherwise normal females. These females direct their courtship toward other females (or males engineered to produce female pheromones). The splicing of a single neuronal gene thus specifies essentially all aspects of a complex innate behavior.

Makes you wonder about the implications for human behavior, doesn't it?

 
WaPo reports on Steven Brill's "Clear" card, which lets airport passengers pay $60, submit to a background check, and get special security treatment. It's now being tested in Orlando. Bruce Schneier raises the obvious concern that "As soon as you make an easy path and a hard path through a security system, you invite the bad guys to try to take the easy path".

That's a generic argument against this kind of program, but it seems to me that there's a more substantial problem with the implementation here:

He's giving them good reason to listen: In its proposal to Orlando officials--which beat a rival bid from technology integrator Unisys Corp.--Verified ID promised to share 29 percent of Clear's first-year revenue with the airport authority and as much as 22.5 percent in succeeding years. The airport also would get 2.5 percent of Clear's future nationwide revenue.

The proposal says Verified ID expects to have 3.3 million members across the nation within six years, with annual memberships likely costing $100.
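Running the numbers in that proposal (my arithmetic, not theirs): 3.3 million members at $100/year is about $330 million in annual revenue, and 2.5 percent of that is over $8 million a year flowing to the Orlando airport authority alone.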

I'm not sure it's that great an idea to be giving airports incentives to make the non-VIP security lines worse.

 

June 2, 2005

So far, the FCC has received over 7000 comments on whether to allow cell phone usage in aircraft. I haven't paged through all 7000 of them, of course, but I just looked at the most recently filed 10. They break down like this:
  • 1 in favor without rationale.
  • 1 against without rationale.
  • 2 against because it will cause "air rage".
  • 7 against because they don't want to have to listen to other people.
  • 1 against because people won't be able to hear announcements.
Does not add up to 10 because some people cited more than one reason.

I totally understand that people talking on cell phones is annoying. Guess what, lots of things in life are annoying. Heck, lots of things on planes are annoying, including seatmates who want to talk to you, having to sit next to fat people, and crying babies, but we don't ask the FCC to ban them. Just because cellphones happen to be a piece of communications equipment doesn't mean that the fact that they're annoying is a good reason for the FCC to ban them (though it's potentially a good reason for the FAA to do so).

Also, check out the FBI comments. Quite the laundry list of requirements. More later.

 

June 1, 2005


The guys over at Gizmodo have a piece on bat bombs, but who can forget pigeon-guided missiles?
 
The European Court of Justice just ruled that it has no authority to stop parallel trading in drugs:
The business has sprung from a pricing policy in Europe unique to the drug industry. Each European country sets the price that drug manufacturers charge for medicines based on how wealthy the country is. The prices drug makers set for Eastern Europe can be as much as 30 percent to 70 percent less than in Britain, for example.

A network of wholesalers buys the drugs, and sells some of them locally. In lower-priced countries, however, the wholesalers often divert some of the drugs to countries that have higher prices, including Britain, Germany and some Scandinavian countries. The wholesalers repackage the drugs with instructions in the local language, but otherwise make no changes to the product, and they can make a substantial profit.

So, the interesting question for me is how the drug companies will respond. They can obviously refuse to sell in the lower-cost countries (where they make less money anyway), try to lobby for restrictions on imports (which I doubt they'll get), or they can live with the loss of profits. Any guesses?