SYSSEC: September 2009 Archives

 

September 28, 2009

My comments can be found here. You may also be interested in ACCURATE's comments, which can be found here.
 

September 27, 2009

[See here]

S 5.2.2 requires that systems be written in programming languages which support block-structured exception handling:

The above requirement may be satisfied by using COTS extension packages to add missing control constructs to languages that could not otherwise conform. For example, C99[2] does not support block-structured exception handling, but the construct can be retrofitted using (e.g.) cexcept[3] or another COTS package.

The use of non-COTS extension packages or manufacturer-specific code for this purpose is not acceptable, as it would place an unreasonable burden on the VSTL to verify the soundness of an unproven extension (effectively a new programming language). The package must have a proven track record of performance supporting the assertion that it would be stable and suitable for use in voting systems, just as the compiler or interpreter for the base programming language must.

One could interpret this requirement as saying only that the language must support this functionality, not that it actually be used, in which case the requirement is unobjectionable. However, S 5.2.5 makes clear that programmers are expected to actually use exception constructs, which is significantly more problematic.

The first issue is that an exception-oriented programming style is significantly different from an error code-oriented programming style, so complying with the spirit of this requirement implies a very substantial rewrite of any existing system which uses error codes. However, experience with previous VVSG programming style requirements suggests that vendors will do the minimum required to comply with the letter of the VVSG. Because the two styles are just similar enough that the conversion can be done semi-mechanically, the likely result will be a program which superficially works but which now has subtle bugs introduced during the conversion process.

One class of bugs that deserves particular attention involves cleaning up objects properly before function exit. When a function creates a set of objects and then encounters an error, it is important to clean up those objects before returning with the error. Failure to do so can leak memory and, more importantly, lead to improper finalization of objects which may be connected to non-memory resources such as network connections, hardware, etc. In languages such as Java, this is handled by garbage collection, but C is not garbage collected. Thus, when an exception is thrown from a function deep in the call stack and caught in a higher function, it bypasses all explicit cleanup routines in the intermediate functions, which can lead to serious errors. Handling this situation correctly requires extreme care comparable to that needed with conventional error handling. Writing correct code under these circumstances is challenging under the best of conditions, but is likely to be impractical under conditions where programmers are required to convert existing error code-based software.
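
To make the hazard concrete, here is a minimal sketch using the cexcept macros (the file name, error codes, and functions are invented for illustration): a Throw deep in the call stack jumps straight past the cleanup in process_file().

    #include <stdio.h>
    #include <stdlib.h>
    #include "cexcept.h"

    define_exception_type(int);
    struct exception_context the_exception_context[1];

    /* Hypothetical parser: Throws when the record is malformed. */
    static void parse_record(const char *rec)
    {
        if (rec[0] != 'R')
            Throw 1;            /* longjmps straight to the nearest enclosing Catch */
    }

    static void process_file(const char *path)
    {
        FILE *fp = fopen(path, "r");
        char *buf = malloc(1024);

        if (fp == NULL || buf == NULL || fgets(buf, 1024, fp) == NULL)
            Throw 2;            /* note: this already leaks whatever was allocated */

        parse_record(buf);      /* if this Throws, the cleanup below never runs... */

        free(buf);              /* ...so the buffer leaks... */
        fclose(fp);             /* ...and so does the file handle */
    }

    int main(void)
    {
        int e;

        init_exception_context(the_exception_context);
        Try {
            process_file("records.dat");
        }
        Catch (e) {
            fprintf(stderr, "error %d (process_file's cleanup was skipped)\n", e);
        }
        return 0;
    }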

While it might be the case that a completely new system written along exception-oriented lines would be superior, I am aware of no evidence that a retrofitted system would be superior, and there is a substantial risk that it would be worse.

The second issue is that because C has no native exception handling, systems written in C will need to use a COTS package, and any attempt to retrofit exceptions onto C involves tradeoffs. As an example, the cexcept[3] package cited above does not support conditional exception handling. In C++, it is possible to have a catch statement which catches only some exceptions, e.g.:

    try {
      // ... code that may throw ...
    }
    catch (memory_exception) {
      // handle out-of-memory conditions
    }
    catch (data_exception) {
      // handle bad data
    }
    // Other exceptions get passed through to the caller
In cexcept, by contrast, a Catch statement catches exceptions of all types, and you need to use an explicit conditional to discover which exception was thrown. This creates much the same opportunity to ignore or mishandle unexpected exceptions that error codes do.
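
To illustrate, here is roughly what that looks like with cexcept's Try/Catch macros; the error codes and do_work() are made up for the example, and the exception "type" is just an int discriminated by a switch:

    #include <stdio.h>
    #include "cexcept.h"

    define_exception_type(int);
    struct exception_context the_exception_context[1];

    enum { MEMORY_ERROR = 1, DATA_ERROR = 2, IO_ERROR = 3 };   /* hypothetical codes */

    static void do_work(void)
    {
        Throw IO_ERROR;            /* something the caller didn't plan for */
    }

    int main(void)
    {
        int e;

        init_exception_context(the_exception_context);
        Try {
            do_work();
        }
        Catch (e) {                /* catches every exception, whatever its "type" */
            switch (e) {
            case MEMORY_ERROR:
                fprintf(stderr, "out of memory\n");
                break;
            case DATA_ERROR:
                fprintf(stderr, "bad data\n");
                break;
            default:
                /* IO_ERROR ends up here; a sloppy default case (or none at all)
                   silently swallows it, much like an unchecked return code */
                fprintf(stderr, "unexpected exception %d\n", e);
                break;
            }
        }
        return 0;
    }

Nothing here automatically re-propagates an exception the handler didn't expect, the way C++ does for unlisted exception types; the programmer has to arrange that explicitly.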

Another problem with cexcept is that it is very brittle whenever exception handling is intermixed with conventional error handling. Any code which jumps out of a Try/Catch block (e.g., via return or goto) can result in "undefined" behavior (i.e., the implementation can do anything whatsoever). This, of course, is an easy mistake to make when converting from return codes to exceptions.
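
The dangerous pattern looks something like this half-converted routine (a sketch; lookup() is hypothetical and the usual cexcept setup from the earlier sketches is assumed):

    /* BROKEN: assumes the usual cexcept setup (define_exception_type(int),
       a global exception context, init_exception_context()) done elsewhere. */
    static int lookup(int key)
    {
        int e, err = 0;

        Try {
            if (key < 0)
                return -1;     /* BUG: jumping out of a Try clause (via return,
                                  goto, break, ...) leaves cexcept's saved jump
                                  context dangling; later Throws are undefined */
            /* ... look the key up, Throwing on failure ... */
        }
        Catch (e) {
            err = -1;          /* translate the exception back to an error code */
        }
        return err;
    }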

cexcept is not, of course, the only C exception package. For instance, Doug Jones has developed a different exception package which makes different tradeoffs (though the intermixed exception/return problem described above seems to exist there as well).

Third, the use of the term "COTS" to cover these packages seems to require a fairly loose definition of COTS. While it is true that there are a number of exception packages available for free download, it is not clear to what extent they are in wide use by programmers. In my experience as a professional programmer, I have yet to work on a system written in C which used one of these packages. As the stated purpose of the COTS requirement is to ensure that the packages have seen enough deployment that we can have confidence in their quality, it seems questionable whether any of the available packages meet this standard.

 

September 23, 2009

Nominum is introducing a new "cloud" DNS service called Skye. Part of their pitch for this service is that it's supposedly a lot more secure. Check out this interview with Nominum's John Shalowitz, where he compares using their service to putting fluoride in the water:
In the announcement for Nominum's new Skye cloud DNS services, you say Skye 'closes a key weakness in the internet'. What is that weakness?

A: Freeware legacy DNS is the internet's dirty little secret - and it's not even little, it's probably a big secret. Because if you think of all the places outside of where Nominum is today - whether it's the majority of enterprise accounts or some of the smaller ISPs - they all have essentially been running freeware up until now.

Given all the nasty things that have happened this year, freeware is a recipe for problems, and it's just going to get worse.

...

What characterises that open-source, freeware legacy DNS that you think makes it weaker?

Number one is in terms of security controls. If I have a secret way of blocking a hacker from attacking my software, if it's freeware or open source, the hacker can look at the code.

By virtue of something being open source, it has to be open to everybody to look into. I can't keep secrets in there. But if I have a commercial-grade software product, then all of that is closed off, and so things are not visible to the hacker.

By its very nature, something that is freeware or open source [is open]. There are vendors that take a freeware product and make a slight variant of it, but they are never going to be ever able to change every component to lock it down.

Nominum software was written 100 percent from the ground up, and by having software with source code that is not open for everybody to look at, it is inherently more secure.

First, I should say that I don't have any position on the relative security of Nominum's software versus the various open source DNS products. With that said, I'm not really that convinced. The conventional argument goes that it's harder for attackers to find vulnerabilities in closed source software because it's harder to work with the binaries than the source. This is a proposition which I've seen vigorously argued but for which there isn't much evidence. Now, it's certainly true that if nobody can get access to your program at all, then it's much harder to figure out how it works and how to attack it. However, Nominum does sell DNS software, so unless the stuff they're running on Skye is totally different, it's not clear how much of an advantage this is.

Shalowitz also argues that being closed source lets him hide "secret way[s] of blocking a hacker from attacking my software". This seems even less convincing, primarily because it's not really clear that such techniques exist; there's been a huge amount of work on software attack and defense in the public literature, so how likely is it that Nominum has really invented something fundamentally new? And if you did in fact have such a technique, but one that's only secure as long as it's secret, then it's far more vulnerable to reverse engineering than programs ordinarily are, since the attacker just needs to reverse engineer it once and it's insecure forever. By contrast, if they reverse engineer your program to find a vulnerability, you can close that vulnerability and then they need to find a new one.

Again, this isn't to say that Nominum's system is or isn't more secure than other DNS servers (though DJBDNS, for instance, has a very good reputation). I don't have any detailed information one way or the other. However, this particular argument doesn't seem to me to establish anything useful.

 

September 19, 2009

A number of political blogs (e.g., Obsidian Wings, Matthew Yglesias) seem to have a problem with comment impersonation. The general pattern is that someone will show up and post something more or less blatantly offensive under the name of a well-known commenter. This is then followed by a series of posts asking "was that really John or just a comment spoofer?" and "can someone check/block their IP?", and often eventual removal of the offending comment, leaving everyone confused about what the fuss was about.

Obviously, the underlying source of the problem is that most blog software has completely open commenting: you don't need to register, you can provide any identity you want, and it will just accept it. This is convenient if you regularly post from random machines, but it makes this sort of impersonation trivial. The natural "security guy" defense here is of course to require actual user authentication [this seems to be supported by most blog software], but that's really overkill for this situation, where all we really want to do is stop random people from impersonating random other people. Here, then, is my suggestion for a small set of changes which would make most casual impersonation very difficult (a rough sketch of the decision logic follows the list):

  1. The first time a given identity is used, record the IP from which it is used and install a cookie in the browser which is used to make the comment.
  2. In the future, restrict use of that identity to requests which either come from that source IP or present the right cookie.
  3. If you see a request from a different source IP that presents the right cookie, add that source IP to the whitelist.
  4. If you see a request from a whitelisted source IP without a cookie, install the cookie.
  5. Have a manual mechanism (e.g., e-mail challenge response) for allowing a new computer to post comments under an existing name.
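
Here is that rough sketch of the per-comment decision logic, in C for concreteness. The record layout, limits, and function names (struct identity, allow_comment(), etc.) are all invented for illustration; a real blog engine would keep this state in its comment database and handle step 1 (creating the record and generating the cookie) when an identity is first seen.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_IPS 16

    /* One record per commenter identity (created the first time the name is used). */
    struct identity {
        char name[64];
        char cookie[64];           /* secret token installed in the commenter's browser */
        char ips[MAX_IPS][46];     /* whitelisted source IPs (room for IPv6 text form) */
        int  n_ips;
    };

    static bool ip_whitelisted(const struct identity *id, const char *ip)
    {
        for (int i = 0; i < id->n_ips; i++)
            if (strcmp(id->ips[i], ip) == 0)
                return true;
        return false;
    }

    /* Decide whether a comment posted under this identity should be accepted.
       *set_cookie tells the caller to (re)install the cookie in the response. */
    static bool allow_comment(struct identity *id, const char *ip,
                              const char *cookie, bool *set_cookie)
    {
        *set_cookie = false;

        if (cookie != NULL && strcmp(cookie, id->cookie) == 0) {
            /* Right cookie (step 2); whitelist a new source IP (step 3). */
            if (!ip_whitelisted(id, ip) && id->n_ips < MAX_IPS)
                snprintf(id->ips[id->n_ips++], sizeof id->ips[0], "%s", ip);
            return true;
        }
        if (ip_whitelisted(id, ip)) {
            /* Known IP but no cookie: accept and install the cookie (step 4). */
            *set_cookie = true;
            return true;
        }
        /* Neither matches: fall back to the manual challenge in step 5. */
        return false;
    }

Note that the cookie, not the IP, is the stronger credential here: matching the cookie both authenticates the comment and grows the IP whitelist, while a bare whitelisted IP only gets you in long enough to have the cookie reinstalled.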

This isn't perfect in a number of respects. First, it doesn't provide perfect security. For instance, if I ever post from a hotspot (which generally has a NATted network), anyone else from that hotspot will be able to post as me. However, that seems relatively unlikely given the form of attack which we're generally seeing here, which is mostly trolls trying to disrupt the conversation. The second problem, of course, is that it's a little inconvenient if you have multiple computers, but even people who do post from multiple computers generally only have a few and those would quickly be whitelisted. The big advantage of this scheme is that it provides reasonable deterrence against a common attack and is generally pretty transparent to most users. We don't have a comment impersonation problem here on EG, and I'm too lazy to implement it for the public good, but I'm a little surprised that hosting services like Typepad haven't implemented something similar.

 

September 15, 2009

Ed Felten writes about the problem of fleeing voters:
Well designed voting systems tend to have a prominent, clearly labeled control or action that the voter uses to officially cast his or her vote. This might be a big red "CAST VOTE" button. The Finnish system mistakenly used the same "OK" button used previously in the process, making voter mistakes more likely. Adding to the problem, the voter's smart card was protruding from the front of the machine, making it all too easy for a voter to grab the card and walk away.

No voting machine can stop a "fleeing voter" scenario, where a voter simply walks away during the voting process (we conventionally say "fleeing" even if the voter leaves by mistake), but some systems are much better than others in this respect. Diebold's touchscreen voting machines, for all their faults, got this design element right, pulling the voter's smart card all of the way into the machine and ejecting it only when the voter was supposed to leave -- thus turning the voter's desire to return the smart card into a countermeasure against premature voter departure, rather than a cause of it. (ATM machines often use this same trick of holding the card inside the machine to stop the user from grabbing the card and walking away at the wrong time.) Some older lever machines use an even simpler method against fleeing voters: the same big red handle that casts the ballot also opens the curtains so the voter can leave.

I was at the Fidelity office in Palo Alto today and noticed an ingenious solution to a related problem: fleeing customers. Their investment terminals have dead-man switches (well, pressure mats).

The way this works (apparently) is that there's a pressure-sensitive mat in front of the terminal, positioned so that you need to (or at least it's really inconvenient not to) stand on the mat in order to use the terminal. When you step off the mat to walk away, the terminal logs you out, so there's only a minimal window of vulnerability where you're logged in but not present. Now, obviously, a real attacker could tamper with the mats to keep you logged in, but this seems like a pretty good safeguard against simple user error being exploited by subsequent customers. You could imagine building a similar safeguard into voting machines, where the machine rings some alarm if you step away.

UPDATE: Fixed blockquote...