Mismanagement in Vulnerability Management Systems

I’m always scouring the net for interesting presentations, and this is an interesting one from BSides Detroit by Gordon MacKay¹, which has been posted online by Adrian Crenshaw (irongeek.com)².

The presentation is about a flaw in vulnerability management systems, which is the kind of software Gordon MacKay now develops for Digital Defense Inc.

Here is an overview image from the YouTube video, about 10 minutes in.

[Image: vulnerability scanning technologies overview]

 

Gordon reviews the three ways a vulnerability management system (VMS) runs:

  1. Credential-based
  2. Remote unauthenticated
  3. Agent-based

 

Briefly, a VMS is software that scans and rates devices on the network, including computers, printers, firewalls, routers, and more.  Anything on the network runs software, and the VMS tries to determine whether you have patched each device. How much risk is assigned depends on the patch level. Of course there are several ways to test the devices on the network, and manufacturers go about it differently (hence the three methods above). So why is this important? Because one has to review each computer for its patches, determine whether it is vulnerable to certain attacks, and then declare a risk factor (part of the risk management model).
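As a rough sketch (my own illustration, not code from the talk), the core VMS decision can be thought of as: compare a device’s installed patches against known vulnerabilities and assign a risk factor from the worst unpatched issue. All the CVE names, patch IDs, and severity cutoffs below are invented for the example.

```python
# Hypothetical sketch of the core VMS decision: compare a device's
# installed patches against known vulnerabilities and rate the risk.
# CVE names, patch IDs, and severity numbers are invented.

# Known vulnerabilities, each fixed by a specific patch, with a severity.
KNOWN_VULNS = {
    "CVE-A": {"fixed_by": "KB001", "severity": 7.5},
    "CVE-B": {"fixed_by": "KB002", "severity": 4.0},
    "CVE-C": {"fixed_by": "KB003", "severity": 9.8},
}

def rate_device(installed_patches):
    """Return (open_vulns, risk_factor) for a device's patch set."""
    open_vulns = [cve for cve, v in KNOWN_VULNS.items()
                  if v["fixed_by"] not in installed_patches]
    if not open_vulns:
        return open_vulns, "None"
    worst = max(KNOWN_VULNS[c]["severity"] for c in open_vulns)
    if worst >= 9.0:
        return open_vulns, "Critical"
    if worst >= 7.0:
        return open_vulns, "High"
    if worst >= 4.0:
        return open_vulns, "Medium"
    return open_vulns, "Low"

# A device with KB001 and KB003 installed is only missing KB002,
# leaving CVE-B (severity 4.0) open -> a "Medium" risk factor.
vulns, risk = rate_device({"KB001", "KB003"})
print(vulns, risk)  # ['CVE-B'] Medium
```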

I went into more detail on risk levels and the risk management model at this link on my site³.

[Image: failed risk management model]

There seems to be a problem with current VMS vendor software.

When a scan is performed at a certain point in time, it produces a vulnerability analysis for that moment. To illustrate, let’s say a system is rated a “Medium” risk factor due to a specific vulnerability.

Test system A is rated “Medium” in June of 2016.   The problem creeps in when, in July, the system is renamed or gets a new IP address because the hard drive was rebuilt (hardware failure).

So the system now looks like a different system, but does it still have the old vulnerability or not? We need to rescan.

What if the vulnerability is no longer there in the rebuilt system, or it is still there but the system now appears under a new name? The software may declare the old system fixed and report the “new” system with a changed risk factor.

 

It depends on how the VMS is asked to classify network devices, and what happens when characteristics change [IP address, hostnames (DNS or NETBIOS)].
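To make the failure concrete, here is a small sketch (my own, not the vendor’s code) of a matcher that keys hosts only on (IP address, DNS hostname). When a machine is rebuilt under a new name, the old key simply vanishes from the next scan and the rebuilt machine shows up as an unrelated new host. Hostnames and addresses are made up.

```python
# Hypothetical sketch of the matching problem: a VMS that keys hosts
# only on (IP, DNS hostname) treats a renamed machine as brand-new
# and "loses" the old host's vulnerability history.

def host_key(scan_record):
    # Naive identity: IP address plus DNS hostname.
    return (scan_record["ip"], scan_record["dns"])

june_scan = [{"ip": "10.0.0.5", "dns": "testA", "vulns": ["CVE-X"]}]
july_scan = [{"ip": "10.0.0.5", "dns": "testA-rebuilt", "vulns": []}]

june_keys = {host_key(h) for h in june_scan}
july_keys = {host_key(h) for h in july_scan}

# The old key disappears, so a report diffing the scans declares
# "testA" gone (i.e. fixed), while the same physical box reappears
# as an unrelated "new" host with no history attached.
print(june_keys - july_keys)  # {('10.0.0.5', 'testA')} -- "vanished"
print(july_keys - june_keys)  # {('10.0.0.5', 'testA-rebuilt')} -- "new"
```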

How often do host names change in three months?

Here is data from a slide in the video showing how server host characteristics changed (in Gordon’s data):

  • IP address changed: 4%
  • DNS hostname changed: 46%
  • NETBIOS name changed: 34%

And for client host changes:

  • IP address changed: 35%
  • DNS hostname changed: 42%
  • NETBIOS name changed: 20%
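These rates suggest that a matcher relying on any single characteristic loses track of a large share of hosts within three months. If we assume (purely for illustration; the slide does not give the joint distribution) that the three changes are independent, we can estimate the chance that at least one characteristic changed:

```python
# Per-characteristic change rates over ~3 months, from the slide.
server = {"ip": 0.04, "dns": 0.46, "netbios": 0.34}
client = {"ip": 0.35, "dns": 0.42, "netbios": 0.20}

def any_changed(rates):
    """P(at least one characteristic changed), assuming independence.

    This independence assumption is my simplification; real changes
    (e.g. a rebuild) likely alter several characteristics at once.
    """
    p_none = 1.0
    for p in rates.values():
        p_none *= (1 - p)
    return 1 - p_none

print(round(any_changed(server), 2))  # 0.66
print(round(any_changed(client), 2))  # 0.7
```

In other words, under this toy model roughly two thirds of hosts present at least one changed characteristic within a quarter, so a VMS that cannot correlate across characteristics is guaranteed to mismatch often.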

Why is this mismatching of network devices bad?

Let’s say ‘test system A’ changes its host name.

Now the renamed system has not been scanned yet, or is only partially scanned (due to timing), and the report across all scans no longer sees the old system name, so it declares the old system “fixed”. The “new” test system A shows some different vulnerabilities, so the old vulnerability is gone, right? It depends on the VMS vendor software and what happens in the cracks of the software, the hidden areas where bugs and other effects live: resources may be dispatched unnecessarily, or necessary resources may not be dispatched at all. In other words, management software turns into mismanagement software. Gordon designed his own software with this in mind, tracking systems through time and through changes in the network.

 

If a system is brought online, it should be matched to a previous scan if it is not truly a “new” system, and it should be given a complete scan as soon as possible.

If the system has changed characteristics, there should be a method of tracking the change over time.
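One way to track a host through such changes (a sketch under my own assumptions, not Gordon’s actual algorithm) is to score candidate matches across several characteristics instead of keying on a single one, so a host that changed its DNS name still links to its scan history via MAC, IP, and NETBIOS. The weights and threshold below are invented.

```python
# Hypothetical characteristic-based host matching: score candidate
# matches across several attributes so a host that changed one
# characteristic still links to its scan history. Weights and the
# threshold are invented for illustration.

WEIGHTS = {"mac": 0.5, "ip": 0.2, "dns": 0.15, "netbios": 0.15}
MATCH_THRESHOLD = 0.5  # minimum score to treat it as the same host

def match_score(old, new):
    """Weighted fraction of characteristics that agree (and are set)."""
    return sum(w for attr, w in WEIGHTS.items()
               if old.get(attr) and old.get(attr) == new.get(attr))

def link_to_history(new_host, known_hosts):
    """Return the best-matching known host, or None if it's truly new."""
    best = max(known_hosts, key=lambda h: match_score(h, new_host),
               default=None)
    if best and match_score(best, new_host) >= MATCH_THRESHOLD:
        return best
    return None

known = [{"mac": "aa:bb", "ip": "10.0.0.5",
          "dns": "testA", "netbios": "TESTA"}]
# Rebuilt host: new DNS name, but same MAC, IP, and NETBIOS name
# (score 0.5 + 0.2 + 0.15 = 0.85, above the threshold).
rebuilt = {"mac": "aa:bb", "ip": "10.0.0.5",
           "dns": "testA-new", "netbios": "TESTA"}
print(link_to_history(rebuilt, known) is known[0])  # True
```

The design choice here is the threshold: set it too low and distinct machines get merged; too high and a rebuilt host still looks new. Either error produces the misallocated resources described above.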

 

Depending on how well the vendor software handles this phenomenon, changing characteristics can lead to mismatched vulnerabilities.

 

Has the risk changed as the environment changes?  What is the real risk analysis as the environment changes, as software is added and removed?

 

The network has to be scanned and vulnerabilities found, but in a large environment it is not easy to fix all the vulnerabilities right away. So time passes, and as more vulnerabilities are “created” by hackers, the VMS software has to keep up. This is the problem, and where resources may be spent poorly, since the real risk may be higher or lower than what the software reports.

 

Contact Us to discuss your environment.

 

 

  1. https://www.youtube.com/watch?v=5pkC1fioOe0
  2. http://www.irongeek.com/i.php?page=videos/bsidesdetroit2016/mainlist
  3. http://oversitesentry.com/what-is-your-risk-level/
