Why, you say?
We can review the systems we have and assign a higher risk to some of them depending on the value of their data, their function, and so on.
So let’s say you have three servers.
There is only so much money and labor to go around, so as analytical people we assign each system the best value we can: Low, Moderate, High, or Extreme.
That is, we estimate how likely each system is to be attacked and what the consequences of a successful attack would be.
Thus we keep the high-value targets (Extreme) patched and protected with the greatest share of our resources.
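The tiering described above is essentially a likelihood-times-consequence matrix. Here is a minimal sketch in Python; the server names, 1-to-5 scales, and score thresholds are all illustrative assumptions, not a real standard:

```python
# Illustrative risk-tiering sketch: likelihood x consequence -> tier.
# Scales (1-5) and cutoffs are made up for this example.

def risk_tier(likelihood: int, consequence: int) -> str:
    """Map likelihood (1-5) and consequence (1-5) to a risk tier."""
    score = likelihood * consequence
    if score >= 20:
        return "Extreme"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Moderate"
    return "Low"

# Three hypothetical servers: (likelihood, consequence)
servers = {
    "payroll-db":  (4, 5),  # valuable data: likely target, severe impact
    "web-portal":  (5, 3),  # exposed to the internet
    "dev-sandbox": (2, 1),  # the "low value" box everyone forgets about
}

for name, (likelihood, consequence) in servers.items():
    print(name, "->", risk_tier(likelihood, consequence))
```

Note how the sandbox lands in the Low tier, and so, under this philosophy, gets the fewest resources: exactly the door the rest of this post is about.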
But then a hacker finds a low-value target system (as in the JPMorgan hack: http://dealbook.nytimes.com/2014/10/03/hackers-attack-cracked-10-banks-in-major-assault/ )
So now the hacker is in, sniffing around, finding any weakness you may have and taking it to the next level.
In the cybersecurity field we look at the lower-priority CVSS vulnerabilities and see them as “nice to patch,” since every change is itself a risk: patching a system or fixing a problem can cause problems of its own. But the hacker looks at a low CVSS score and sees an opportunity. If you are not fixing all vulnerabilities, get ready to be hacked.
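The “nice to patch” mindset usually boils down to a severity threshold. A quick sketch of what that policy leaves behind; the CVE identifiers, scores, and the 7.0 cutoff are invented for illustration:

```python
# Illustrative only: a "patch only high-CVSS findings" policy.
# The findings and the 7.0 threshold are made-up example data.

vulns = [
    {"id": "CVE-A", "cvss": 9.8},  # critical: gets patched immediately
    {"id": "CVE-B", "cvss": 6.5},  # medium: deferred
    {"id": "CVE-C", "cvss": 3.1},  # low: "nice to patch" -- still a way in
]

PATCH_THRESHOLD = 7.0

patched = [v["id"] for v in vulns if v["cvss"] >= PATCH_THRESHOLD]
left_open = [v["id"] for v in vulns if v["cvss"] < PATCH_THRESHOLD]

print("patched:", patched)
print("left open:", left_open)
```

Everything in `left_open` is exactly what the patient attacker from the JPMorgan example goes looking for.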
Now you are hoping the hacker will not get further into your environment, thanks to the higher levels of protection on the critical systems… So now this is your picture:
Any mistake will allow the hacker in
<update 03/12/15 – The issue is the criminal hacker has patience and takes their time moving from weakly defended system to other systems>
Any mistake will allow the hacker to advance
You have unwittingly created a system that invites mistakes and is a haven for hackers. Why do you think it takes 210 days before the breach is found?
Risk management as a philosophy must be changed to:
Fix ALL MISTAKES, all CVSS-scored vulnerabilities, all bugs, and create an impenetrable environment with NO insignificant risks.
How do you do this?
The same way we beat the Soviets to the moon. The US scientific environment (with all of its problems) made it to the moon first by testing the individual pieces of space equipment as they were produced.
That allowed more errors to be fixed before they became problems, whereas the Soviets tested only the whole assembled system.
Let me introduce you to what I live and breathe (I am a Systems Engineer, BS, Washington University ’93): see our About Us page.
It is simple, really: test your environment, and check and double-check your IT department. It is not that you do not trust them; it is just a higher-risk environment, and we cannot afford _any_ mistakes.
And finally, I am proposing piece-by-piece testing:
1. Fill out a permission form (we will not attack, or test, your systems without it)
2. We test your systems, piece by piece
3. We write the report
4. Your IT department fixes the problems (we do not do the work)
5. We retest to make sure the fixes were done
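The steps above amount to a loop: test, fix, retest, and repeat until nothing is left open. A minimal sketch, where `find_weaknesses` and `fix` are hypothetical stand-ins for the pentest and the IT department’s remediation:

```python
# Hedged sketch of the test -> report -> fix -> retest cycle.
# The "system" is modeled as items mapped to a fixed/not-fixed flag.

def find_weaknesses(system: dict) -> list:
    """The 'test' step: report every item that is not yet fixed."""
    return [name for name, fixed in system.items() if not fixed]

def fix(system: dict, finding: str) -> None:
    """The 'fix' step: IT remediates one reported finding."""
    system[finding] = True

system = {"ssh-config": False, "tls-cert": True, "old-php": False}

# Keep cycling until the retest comes back clean.
findings = find_weaknesses(system)
while findings:
    for finding in findings:
        fix(system, finding)
    findings = find_weaknesses(system)  # retest

print("remaining findings:", find_weaknesses(system))
```

The retest at the bottom of the loop is the point of step 5: nothing counts as fixed until a fresh test says so.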
With this method you may still make mistakes, but they will be found and fixed, leaving a more secure environment, which is what we are all striving for.