Comments on Network Endpoint Assessment

I've reviewed the two NEA drafts (draft-thomson-nea-problem-statement-03 
and draft-khosravi-nea-requirements-01) and have some high-level 
concerns, discussed below.

Any real-world computer can be in an insecure state in a variety of 
ways, including: 
- Configured in an insecure fashion 
- Running versions of software with security holes 
  (these can either be known or unknown). 
- Actually running malicious software

For obvious reasons, it would be nice to be able to know when your 
machine--or a machine about to connect to your network--wasn't 
in a secure state. NEA appears to be intended to address this.

The idea is that when your machine connects to the network, there 
is a protocol that runs between the connecting machine and some 
server on the network that allows this determination to be made.

As far as I can tell, there are five potential objectives for 
a protocol of this type: 
1. Pure network management a la SNMP (I've heard this called 
   an inventory protocol) 
2. Letting network operators determine whether their computers 
   are in a secure state. 
3. Providing users with an incentive to keep their computers 
   secure by stopping them from connecting their insecure 
   computers to a local network, even if that network is 
   secure, on the theory that they'll eventually connect to 
   a less secure network. 
4. Stopping insecure computers from connecting to a hostile 
   local network. 
5. Stopping infected computers from connecting to a secure 
   local network. 

Based on S 3 of draft-thomson-nea-problem-statement-03, the 
objectives of this effort appear to include (1)-(4) and 
potentially (5). 

Objectives (3)-(5) are all potentially adversarial situations: the 
user wants to connect to the network and probably doesn't particularly 
care whether his system is completely up to date. This is particularly 
true in case (3) where the local network is secure and it's just some 
theoretical other network the user might connect to that's hostile. 

In situations like this, there's a serious concern about the client 
machine lying about its state. It's no doubt straightforward for the 
user to install some fake agent that would produce false 
information. One interesting special case here is systems which 
completely fail to comply with network security policies (e.g., it's a 
Windows-only shop but your machine is running Linux), in which case 
lying is very attractive. This actually happens in settings where Web 
servers only accept particular browser versions; Safari has explicit 
support for controlling the user agent it advertises. 

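The browser-version analogy is easy to make concrete: nothing in HTTP stops a client from advertising whatever identity it likes, and a self-reported security posture is in exactly the same position. A minimal sketch using Python's standard library (the URL and identity string are illustrative):

```python
from urllib.request import Request

# The client, not the server, decides what identity string is sent.
# A fake posture agent can "advertise" compliance just as easily.
req = Request("http://example.com/",
              headers={"User-Agent": "ApprovedBrowser/1.0"})
print(req.get_header("User-agent"))  # ApprovedBrowser/1.0
```

The server never sees anything but the claim, so accepting the claim is equivalent to trusting the client.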
The lying problem is even worse in case (5), in that if your machine 
is infected the malware can of course advertise any security posture 
that it wants to. These drafts explicitly disclaim solving this 
problem, but I wonder how useful this technology is without that. 

So, I'm skeptical about how effective the enforcement feature will be, 
as opposed to something that was purely advisory. There are, of 
course, techniques using trusted hardware to stop machines from lying 
about their posture, but that's not something the IETF is in the 
business of standardizing. 

A secondary problem here is the difficulty of getting a full picture 
of the software running on a computer. In the reference architecture 
posited in draft-thomson-nea-problem-statement, there are a variety 
of posture collectors which talk to the posture broker. Say I 
install a new insecure application which doesn't implement 
this protocol. How does the broker (and by extension the network 
enforcement point) know about it? I don't see how it can without 
having extensive insight into the OS which sort of obviates the 
point of having this split-up architecture. 

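To make the gap concrete, here is a toy model of the split architecture (the class and collector names are my own, not from the drafts): the broker can only aggregate what registered collectors report, so software that ships without a collector is simply invisible to it.

```python
# Toy model of the posture-broker architecture; all names here are
# illustrative, not taken from any NEA specification.
class PostureBroker:
    def __init__(self):
        self.collectors = {}

    def register(self, component, collector):
        """A posture collector announces itself to the broker."""
        self.collectors[component] = collector

    def posture_report(self):
        """Aggregate whatever the registered collectors claim."""
        return {name: collect() for name, collect in self.collectors.items()}

broker = PostureBroker()
broker.register("antivirus", lambda: {"version": "9.2", "up_to_date": True})
broker.register("os_patches", lambda: {"level": "2006-07"})

# A newly installed, insecure application that implements no collector
# never appears here, so the enforcement point cannot act on it.
report = broker.posture_report()
print(sorted(report))  # ['antivirus', 'os_patches']
```

The report is complete only with respect to components that opted in, which is precisely the problem.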
This effort proposes to standardize the posture attribute protocol, 
which AFAICT exists solely within a single computer. I have 
three concerns about this: 
1. It's not clear to me that it will work (see the previous point). 
2. Even if it does work, I'm not sure that intra-host protocols 
   are suitable for IETF standardization. 
3. The system should be designed so that the agent that runs  
   on the client can be monolithic and directly access the status 
   of each component (as I believe current systems do) without 
   being required to impersonate agents for every component. 

The proposed effort seems to have already assumed that evaluation 
of appropriate posture should take place on the network/server 
side rather than on the connecting host. Why? I can see that this 
superficially appears more attractive in terms of making  
enforcement decisions, but as I noted above, I'm skeptical that 
that will actually work. 

There are obvious advantages to doing evaluation on the host 
side: the network can provide the current policy definitions 
and the host can do arbitrary processing--and user 
interaction--in order to figure out what steps are necessary. 
A second benefit is that the security requirements for host-side 
evaluation are much simpler; the network just publishes an 
authenticated policy. A third benefit is that the host can 
cache the policy and use rsync-style fast fetching to update 
the cache. 

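As a sketch of what host-side evaluation could look like (HMAC stands in for whatever signature scheme the network would actually use, and all names and policy fields are invented for illustration):

```python
import hashlib
import hmac
import json

# Stand-in for a key the host already trusts; a real deployment would
# verify a public-key signature over the published policy instead.
TRUSTED_KEY = b"network-policy-key"

def verify_policy(blob: bytes, tag: bytes) -> dict:
    """Accept the published policy only if it authenticates."""
    expected = hmac.new(TRUSTED_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("policy is not authentic")
    return json.loads(blob)

# The network publishes an authenticated policy; the host can cache the
# blob and re-fetch only when it changes.
policy_blob = json.dumps({"min_patch_level": 42}).encode()
policy_tag = hmac.new(TRUSTED_KEY, policy_blob, hashlib.sha256).digest()

policy = verify_policy(policy_blob, policy_tag)

# Evaluation happens locally, with full access to the host's own state.
local_posture = {"patch_level": 57}
compliant = local_posture["patch_level"] >= policy["min_patch_level"]
print(compliant)  # True
```

Note that the host never has to prove anything to the network here; the only security requirement is that the policy itself be authentic.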
Even if enforcement is desired, I would argue that this is 
better done on the client side. If you trust the client, then 
it can do the right thing and all the advantages I mentioned 
above obtain. If you don't trust the client, expect it to lie.