As cybersecurity practitioners we must use the best tools we can find, so if AI (Artificial Intelligence) can help us, we need to use it.
Of course, we have to use real AI tools, not old tools rebranded as "AI" to sell more software for a little while.
What is the definition of AI? Machine software (i.e., software that runs without human modification) that imitates human behavior; or, a branch of computer science dealing with the simulation of intelligent behavior in computers.
So a true cybersecurity AI is a program that runs attacks or defenses on a network or computer without human interaction.
What in today's environment shows small glimpses of intelligence? Bots and viruses, of course.
It is also my opinion that future AI will first arrive as more sophisticated bots or infectious software:
Again, this vulnerability affected entities that did not patch their PeopleSoft HR and Oracle E-Business Suite software.
What makes this vulnerability so bad is that it is a remote code execution flaw: "Easily exploitable vulnerability allows unauthenticated attacker with network access via T3 to compromise Oracle WebLogic Server. Successful attacks of this vulnerability can result in takeover of Oracle WebLogic Server." (from the NIST link).
So if an AI program can program itself to infect and take over other machines, both to spread further and to pursue other goals (such as mining cryptocurrency, the latest action seen in this exploit), then its job becomes easy whenever people find ways not to patch their software.
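From the blue-team side, a first step is simply checking whether the vulnerable component is even reachable. Here is a minimal sketch, assuming the `wls-wsat` endpoint path commonly associated with CVE-2017-10271 exploitation; the function names and the "200 means exposed" heuristic are my assumptions, and an exposed endpoint means "patch or remove this component," not proof of exploitability:

```python
# Hedged blue-team sketch: probe for an exposed wls-wsat endpoint
# (the path below is the surface commonly abused in CVE-2017-10271
# campaigns; hitting it proves exposure, not exploitability).
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

WSAT_PATH = "/wls-wsat/CoordinatorPortType"

def wsat_url(base_url: str) -> str:
    """Build the probe URL from a server base URL."""
    return base_url.rstrip("/") + WSAT_PATH

def wsat_exposed(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the wls-wsat endpoint answers, i.e. the
    component is deployed and should be patched or removed."""
    try:
        with urlopen(wsat_url(base_url), timeout=timeout) as resp:
            return resp.status == 200
    except HTTPError as err:
        # 404 suggests the component is absent; any other HTTP
        # error still means something answered at that path.
        return err.code != 404
    except URLError:
        return False  # host unreachable or connection refused
```

Running `wsat_exposed("http://internal-weblogic:7001")` against your own inventory (only with authorization) gives a quick exposure list to prioritize patching.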
[Image: example of CVE-2017-10271 as it was found]
The key is to patch your machines, and we have to develop "blue team" AI first in this coming "AI war."
To be a bit clearer (clear as mud, I am sure): suppose someone programs an attack program to do the three things mentioned:
- Find a vulnerability
- Exploit the vulnerability and make money by mining cryptocurrency on your machines
- Propagate the program as much as possible
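The three steps above can be sketched as a toy simulation. Everything here is hypothetical (the host names, the dictionary "network," and the infect-if-unpatched rule are all illustrative assumptions); nothing scans or exploits anything real, but it shows why one unpatched machine seeds every other unpatched machine:

```python
# Toy worm-spread simulation (illustrative only, no real scanning):
# models the three steps -- find a vulnerable host, "exploit" it,
# then propagate from the new victim.
def spread(hosts, start):
    """hosts: dict mapping host name -> patched (bool).
    Returns the set of hosts the program ends up on."""
    infected = {start}
    frontier = [start]
    while frontier:
        current = frontier.pop()
        for target, patched in hosts.items():
            # Step 1: "find a vulnerability" -- only unpatched hosts qualify.
            if patched or target in infected:
                continue
            # Step 2: "exploit" -- mark compromised (mining would start here).
            infected.add(target)
            # Step 3: "propagate" -- the new victim scans in turn.
            frontier.append(target)
    return infected

network = {"hr-app": False, "db01": True, "web01": False, "dev02": False}
print(sorted(spread(network, "hr-app")))  # every unpatched host is hit
```

Note that the patched host (`db01`) is the only one left standing, which is the whole argument for patching first.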
So the future of AI (the really scary part) is when a truly non-human, fully automated attack program does all three items and improves itself. The danger in how it will act is still not fully understood; i.e., we are not sure how bad it will get.
The important piece of this puzzle is the exponential rate of improvement a fully electronic AI could achieve.
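To make the exponential point concrete, here is a toy calculation with made-up numbers (a 10% compounding gain per cycle for the program versus a fixed linear gain for human defenders; both rates are pure assumptions for illustration, not measurements):

```python
# Toy arithmetic: exponential (compounding) improvement vs linear
# improvement. All rates are assumed for illustration only.
def cycles_to_overtake(ai_rate=1.10, human_step=0.10,
                       start_ai=1.0, start_human=2.0):
    """Count cycles until the compounding curve passes the linear one."""
    ai, human, cycles = start_ai, start_human, 0
    while ai <= human:
        ai *= ai_rate        # exponential: multiplies each cycle
        human += human_step  # linear: adds a fixed amount each cycle
        cycles += 1
    return cycles

print(cycles_to_overtake())  # prints 13
```

Even starting at half the capability, the compounding curve overtakes the linear one in a handful of cycles, and the gap only widens after that.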
Some people have talked about the "singularity" moment, when an AI will have more capabilities than a human brain (supposedly sometime in the 2020s).
What about a cybersecurity "singularity" moment, when an improving attack program starts to improve so fast that it morphs into something difficult to stop?
Contact me to discuss