SYSSEC: August 2007 Archives


August 31, 2007

Computerworld reports that the Bank of India Web site was attacked and seeded with a rather excessive amount of malware:
Although the bank's site had been scoured of all malware by Friday morning, it's currently offline. "This site is under temporary maintenance and will be available after 09:00 IST on 1.09.07," a prominent message currently reads.

Researchers at Sunbelt Software Inc. first posted details of the hack yesterday afternoon after finding rogue code embedded in the site's HTML. That code, actually an IFRAME exploit, silently redirected users to a hacker server, which pushed 22 different pieces of malware onto vulnerable PCs. By Sunbelt's tally, the malware included one worm, three rootkits, five Trojan downloaders, and several password stealers. "The biggest issue is the sheer volume of malware we've had to analyze," said Alex Eckelberry, Sunbelt's CEO, in a blog posting yesterday.
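For context, this class of attack usually works by injecting a hidden IFRAME into otherwise-legitimate pages so the victim's browser silently fetches the exploit server's content. As a rough sketch (the page, URL, and the zero-dimension hiding trick are all illustrative assumptions, not details from the Sunbelt analysis), a minimal detector using only Python's stdlib HTML parser might look like:

```python
from html.parser import HTMLParser

class HiddenIframeFinder(HTMLParser):
    """Flag <iframe> tags with zero/tiny dimensions -- a common way
    to load an exploit server's page without the victim noticing."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        if a.get("width") in ("0", "1") or a.get("height") in ("0", "1"):
            self.suspicious.append(a.get("src", "(no src)"))

# Hypothetical injected page, for illustration only:
page = ('<html><body>Welcome to the bank.'
        '<iframe src="http://evil.example/x" width="0" height="0">'
        '</iframe></body></html>')
finder = HiddenIframeFinder()
finder.feed(page)
print(finder.suspicious)   # the injected exploit URL
```

Real injections are of course obfuscated in all sorts of ways, so this is a toy, but it shows why the rogue code is easy to miss by eye and easy to find once you know what to grep for.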

I guess if something's worth doing it's worth doing right. Outstanding!


August 20, 2007

Here's Skype's official word on what caused their outage:
In an update to users on Skype's Heartbeat blog, employee Villu Arak said the disruption was not because of hackers or any other malicious activity.

Instead, he said that the disruption "was triggered by a massive restart of our users' computers across the globe within a very short timeframe as they re-booted after receiving a routine set of patches through Windows Update," Arak wrote.

Microsoft Corp. released its monthly patches last Tuesday, and many computers are set to automatically download and install them. Installation requires a computer restart.

"The high number of restarts affected Skype's network resources. This caused a flood of log-in requests, which, combined with the lack of peer-to-peer network resources, prompted a chain reaction that had a critical impact," Arak wrote.

Arak did not blame Microsoft for the troubles and said the outage ultimately rested with Skype. Arak said Skype's network normally has an ability to heal itself in such cases, but a previously unknown glitch in Skype's software prevented that from occurring quickly enough.

Some thoughts:

  • The phrasing "lack of peer-to-peer network resources" is quite interesting. One design goal for P2P systems is that their ability to handle load scales smoothly (or at least semi-smoothly) with the number of clients (peers) trying to use the system. It would be interesting to know what happened here.
  • This is probably not a behavior you'd see in a truly decentralized system. If, for instance, everyone in the world rebooted their SIP client, this would probably not cause all the SIP phones in the world to stop working for two days, though it might cause transient outages as people independently rebooted their machines.
  • How hard would it be for an attacker to trigger this sort of behavior intentionally by bouncing a large number of Skype clients which they have taken over (i.e., zombies in a botnet)?
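The core problem described above is a thundering herd: a huge population of clients all logging in within the same short window. A toy simulation (all the numbers here are made up for illustration, not Skype's actual figures) shows why synchronized reboots produce a far higher peak login rate than the same reboots spread out with jitter:

```python
import random

def peak_login_rate(n_clients, window_secs, seed=0):
    """Bucket each client's login attempt into a one-second slot and
    return the busiest slot. Spreading the same number of reboots
    over a wider window flattens the peak the server must absorb."""
    rng = random.Random(seed)
    slots = [0] * window_secs
    for _ in range(n_clients):
        slots[rng.randrange(window_secs)] += 1
    return max(slots)

# Hypothetical numbers: a million clients rebooting within one
# minute vs. the same million spread over an hour of random jitter.
synced = peak_login_rate(1_000_000, 60)
jittered = peak_login_rate(1_000_000, 3600)
print(synced, jittered)   # the synchronized peak is far higher
```

This is why the standard mitigation for the herd (whether or not Skype adopted it) is randomized delay or exponential backoff before retrying a login, so that a correlated trigger like Patch Tuesday doesn't translate into a correlated load spike.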

August 8, 2007

Infrant (makers of ReadyNAS, now owned by Netgear) just released a security advisory for remote root SSH access to their box:
NETGEAR has released an add-on to toggle SSH support for the ReadyNAS systems based on a potential exploit to obtain root user access to the ReadyNAS RAIDiator OS. Each ReadyNAS system incorporates a different root password that can be used by NETGEAR Support to understand and/or fix a ReadyNAS system remotely using the ReadyNAS serial number as a key. An attacker that has obtained the algorithm (and your serial number) to generate the root password would be able to remotely access the ReadyNAS and view, change, or delete data on the ReadyNAS.

ReadyNAS installation most vulnerable to this attack is in an unsecure LAN and where the ReadyNAS SSH port (22) is accessible by untrusting clients. Typical home environments are safe if a firewall is utilized and port 22 is not forwarded to the ReadyNAS from the router. We do advise that all ReadyNAS users perform this add-on installation regardless.

Installation of the ToggleSSH add-on will disable remote SSH access and thus close the vulnerability. At the same time, if you need remote access assistance from NETGEAR Support, you can install the ToggleSSH add-on again to re-enable SSH access during the time when the remote access is needed.

In other words, NETGEAR support can remotely log into any ReadyNAS box as root and manage it. A few notes:

  • I'm having trouble imagining any conditions under which I'd want NETGEAR support to have remote access to my fileserver (and no, I don't own one of these). I wonder if there's some way to change the root password or if you're stuck with this backdoor. Is this really something they need often, or was it just a cunning plan that didn't get filtered out at some higher level?
  • They don't disclose the algorithm they use to produce the password. Some such algorithms are good and some are bad. It would be interesting to know which type this is.
  • There are three major ways to build a system like this on the verifying side:
    1. Have the box simply know its own password.
    2. Have the password-generation algorithm built into the box.
    3. Use public key cryptography. E.g., the password is a digital signature over the serial number.
    If I had to bet, it would be on (1) or (2). (2) is obviously pretty bad since it means that anyone who has a single box can reverse engineer the algorithm and generate as many passwords as they want. Anyone take one of these apart and know?
  • What kind of auditing is available to find out if your box has already been taken over by some attacker who knows the key, or just by someone from NETGEAR tech support?
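To make the difference between these designs concrete, here's a sketch of what option (2) presumably looks like (the master secret, derivation function, and password format are all hypothetical; NETGEAR hasn't disclosed the real algorithm):

```python
import hashlib
import hmac

# Hypothetical sketch of option (2): every box ships with the same
# master secret and derives its root password from its serial number.
MASTER_SECRET = b"factory-master-secret"   # made-up value

def root_password(serial: str) -> str:
    mac = hmac.new(MASTER_SECRET, serial.encode(), hashlib.sha256)
    return mac.hexdigest()[:12]   # truncated to a typeable password

# Support can compute the password for any unit from its serial:
print(root_password("RN-000123"))
```

The problem is that the same `MASTER_SECRET` has to live in every firmware image, so anyone who pulls it out of a single box can mint root passwords for the whole fleet. Option (3) avoids this: the box stores only a public key and verifies a signature over its serial number, so nothing extractable from the box lets an attacker generate passwords for other units.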
Oh, and what were they thinking having this on by default? Outstanding!

August 5, 2007

While we're on the subject of armed robots, it's sort of worth asking the question of what sorts of inputs caused them to "spin out of control". Are these the kind of inputs that could potentially be presented by attackers? If so, I hope they were actually fixed, not just covered up with a kill switch.
Wired reports that the DoD has taken delivery of three "special weapons observation remote reconnaissance direct action system" (SWORDS) robots. (Pretty tricky with those acronyms, guys!). Anyway, these are remote-controlled robots armed with M-249 machine guns.

Apparently these robots were uh, a bit flakey, but the manufacturers say they've got all the bugs worked out now:

The SWORDS -- modified versions of bomb-disposal robots used throughout Iraq -- were first declared ready for duty back in 2004. But concerns about safety kept the robots from being sent over to the battlefield. The machines had a tendency to spin out of control from time to time. That was an annoyance during ordnance-handling missions; no one wanted to contemplate the consequences during a firefight.

So the radio-controlled robots were retooled, for greater safety. In the past, weak signals would keep the robots from getting orders for as much as eight seconds -- a significant lag during combat. Now, the SWORDS won't act on a command, unless it's received right away. A three-part arming process -- with both physical and electronic safeties -- is required before firing. Most importantly, the machines now come with kill switches, in case there's any odd behavior. "So now we can kill the unit if it goes crazy," Zecca says.

OK, so ignoring the wisdom of starting from a platform which used to "spin out of control", I'm sort of interested in how the "kill switch" works. As far as I know, there are two basic ways to build a system like this:

  • Fail-unsafe. The kill command is just a separate command that tells the unit to shut down.
  • Fail-safe. The control unit regularly (or continuously) sends a signal. If the robot stops getting the signal it shuts down.

It should be pretty clear that if you think there's a high likelihood that the robot's going to go nuts and you want to minimize the chance that it kills your own people, random civilians, their pets, etc., you probably want something that fails safe. This is especially true in view of the implication in this article that signal strength isn't always what you might like. You really don't want to have a situation where the robot is busy slaughtering innocent bystanders and you can't shut it down because your control unit is showing zero bars.
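The fail-safe design is just a watchdog timer. A minimal sketch (the timeout value and interface are invented for illustration, and obviously nothing like the real SWORDS control logic):

```python
import time

class FailSafeKillSwitch:
    """Fail-safe design: the robot disarms itself unless it keeps
    hearing a heartbeat from the operator's control unit."""
    def __init__(self, timeout_secs: float):
        self.timeout = timeout_secs
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called each time a heartbeat arrives from the operator."""
        self.last_heartbeat = time.monotonic()

    def weapons_enabled(self) -> bool:
        # Loss of signal (jamming, dead battery, zero bars)
        # automatically disables the weapon -- no command needed.
        return time.monotonic() - self.last_heartbeat < self.timeout

switch = FailSafeKillSwitch(timeout_secs=0.05)
print(switch.weapons_enabled())   # True: heartbeat is fresh
time.sleep(0.1)                   # simulate losing the link
print(switch.weapons_enabled())   # False: fails safe
```

The fail-unsafe alternative inverts this: the robot stays armed until it receives an explicit kill command, which is exactly the command you may not be able to deliver when the link is bad.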

On the other hand, a fail-safe system is also much easier to DoS, and availability matters rather more when the system being DoSed is shooting your enemies than when it's serving up copies of Girls Gone Wild. All the attacker has to do is somehow jam your signal (and remember that since you probably want to have a cryptographically secured control channel, they only need to introduce enough errors to make the integrity checks fail). This makes the problem of designing the control channel a lot more difficult. I'd definitely be interested in hearing more about the design of the protocol for these gizmos.
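To see why the jammer has such an easy job, consider a standard MAC-authenticated control channel (the key, framing, and commands below are all invented for illustration): any command whose tag fails to verify must be dropped, so flipping a single bit is as good as blocking the whole frame.

```python
import hashlib
import hmac

KEY = b"shared-control-channel-key"   # hypothetical shared key

def send(command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the robot can authenticate commands."""
    return command + hmac.new(KEY, command, hashlib.sha256).digest()

def receive(frame: bytes):
    """Verify the tag; a frame that fails verification must be dropped."""
    command, tag = frame[:-32], frame[-32:]
    expected = hmac.new(KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None   # integrity check failed: drop the command
    return command

frame = send(b"CEASE FIRE")
print(receive(frame))   # the command gets through intact

# A jammer doesn't need to forge commands or break the crypto --
# flipping one bit makes the frame fail verification:
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(receive(corrupted))   # None: the command is lost
```

So the cryptography protects you from forged commands but does nothing for availability, which is why the fail-safe/fail-unsafe choice is really a choice about which failure the attacker gets to impose on you.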


August 3, 2007

For the past couple months I've been spending most of my time working on California's Top-to-Bottom Review of electronic voting systems certified for use in California.

The overall project was performed under the auspices of UC and led by Matt Bishop (UC Davis) and David Wagner (UC Berkeley), who did a great job of negotiating a wide variety of organizational obstacles to get the project going and keep it on track.

This project reviewed the systems of three manufacturers:

  • Diebold Election Systems Inc. (DESI)
  • Hart InterCivic
  • Sequoia Voting Systems
Each company makes both an optical scanner for paper ballots and a computerized direct recording electronic (DRE) voting machine (these are often called touchscreens, but the Hart system actually uses a clickwheel), as well as a back-end election management system.

Each system was assigned to three teams:

  • A documentation team which reviewed only the documentation.
  • A "red team" which conducted penetration testing.
  • A source code team which reviewed the source code.

There was also an accessibility team for all the systems.

I led the Hart source code team, consisting of me, Srinivas Inguva, Hovav Shacham, and Dan Wallach, sited at an undisclosed location which can now be disclosed as SRI International in Menlo Park. Our report was published yesterday, just ahead of the statutory deadline for the State to decide whether these systems will continue to be certified (more detail here). You can get it here and all the reports here.

I wasn't planning on saying much about this on EG. Most of what I have to say is already said better in our report. I did want to say a word about my team, who put in extraordinary amounts of effort under an extremely tight timeline; just over a month from the time we got the Hart source to the delivery of the final report. Thanks, guys, and I look forward to working with you again, hopefully next time in a room with 24x7 air conditioning.