The Enemy Has a Say in Your Best Plans

In the field of Cybersecurity we have to do a lot of basic things, as discussed at Behavioralscientist.org.

So what is your plan?  Firewalls, antivirus, vigilant IT staff, keeping devices and software updated…

What are your enemies’ plans?

When your enemy actually interacts with your employees, it shows.

There are always business-level threats, in which employees or vendors are spoofed.

Do you have a new device with Machine Learning (ML, a basic type of Artificial Intelligence, or AI)?  Then the enemy will do something to counteract it.

Enter Adversarial Machine Learning.  An attacker works against your ML goals and tries to corrupt them over time by adding faulty data, thus changing the assumptions baked into your data set (a technique known as data poisoning).
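To make the poisoning idea concrete, here is a minimal sketch using a toy nearest-centroid detector and invented 2-D feature vectors (nothing here comes from a real product or data set):

```python
# Sketch of a data-poisoning attack on a toy nearest-centroid classifier.
# All feature vectors are made-up illustration values.

def centroid(points):
    """Mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def predict(x, c_benign, c_malicious):
    """Assign x to the nearer class centroid (squared Euclidean distance)."""
    d_b = sum((a - b) ** 2 for a, b in zip(x, c_benign))
    d_m = sum((a - b) ** 2 for a, b in zip(x, c_malicious))
    return "benign" if d_b <= d_m else "malicious"

# Clean training data: benign traffic clusters near (1, 1), malicious near (9, 9).
benign = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]
malicious = [[9.0, 8.8], [9.2, 9.1], [8.9, 9.0]]

test_point = [5.5, 5.5]  # closer to the malicious cluster
print(predict(test_point, centroid(benign), centroid(malicious)))  # malicious

# Poisoning: the attacker slips malicious-looking samples into the *benign*
# training set, dragging the benign centroid toward the malicious region.
poisoned_benign = benign + [[8.0, 8.0], [8.5, 8.5], [7.5, 7.8]]
print(predict(test_point, centroid(poisoned_benign), centroid(malicious)))  # benign
```

The model itself never changed; only the data it learned from did, which is exactly why poisoning is hard to spot.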

Another way to use Adversarial Machine Learning is to ‘teach’ your ML model to get better results. It turns out that Generative Adversarial Networks (GANs) do just that.

For example, the paper “Adversarial Machine Learning at Scale” (hosted on Cornell University’s arXiv) opens with:

“Adversarial examples are malicious inputs designed to fool machine learning models.”    
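To sketch what such a malicious input looks like, here is a fast-gradient-sign-style perturbation against a toy linear “malware score” model. The weights, bias, and inputs are invented for illustration; they are not from the paper or any real detector:

```python
# Toy adversarial example: an eps-bounded input change that flips a
# linear classifier's decision. All numbers are made-up illustration values.

w = [2.0, -3.0, 1.5]   # weights of a (hypothetical) trained linear classifier
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return "malicious" if score(x) > 0 else "benign"

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def evade(x, eps):
    """FGSM-style perturbation that lowers the score: for a linear model the
    gradient of the score w.r.t. the input is just w, so subtracting
    eps * sign(w_i) from each feature drops the score as far as any
    eps-bounded change can."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

x = [0.6, 0.1, 0.5]        # score(x) = 0.65, so flagged "malicious"
x_adv = evade(x, eps=0.4)  # each feature changed by at most 0.4
print(classify(x), "->", classify(x_adv))  # malicious -> benign
```

A small, bounded nudge to each feature is enough to slip the sample under the decision threshold.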

Done right, this improves ML models. So far this method has not been widely used by criminals, who are still figuring out how to incorporate it into their attacks.

So they may not use this as an adversarial attack directly; instead, they may devise ML-powered attacks that are harder to distinguish from legitimate activity and that improve faster.
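The defensive flip side is adversarial training: fold the adversarial examples back into the training loop so the model hardens against them. Below is a minimal sketch with a hand-rolled logistic regression and invented toy data; real systems would use a framework such as CleverHans or ART, and none of these numbers come from the article:

```python
# Sketch of adversarial training: train a logistic-regression classifier on
# FGSM-perturbed copies of its inputs. Data, rates, and epsilon are toy values.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def perturb(x, y, w, b, eps):
    """FGSM-style worst-case nudge: move each feature in the direction that
    increases the logistic loss for label y (for a linear model that
    direction is sign((p - y) * w_i))."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

def train(data, epochs=500, lr=0.2, eps=0.0):
    """Logistic regression via SGD; eps > 0 trains on perturbed inputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            if eps > 0:
                x = perturb(x, y, w, b, eps)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(data, w, b, eps=0.0):
    """Accuracy on clean (eps=0) or FGSM-attacked (eps>0) inputs."""
    hits = 0
    for x, y in data:
        if eps > 0:
            x = perturb(x, y, w, b, eps)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        hits += (p > 0.5) == (y == 1)
    return hits / len(data)

data = [([0.5, 0.5], 0), ([1.0, 1.0], 0), ([0.8, 0.6], 0), ([0.6, 0.9], 0),
        ([2.5, 2.5], 1), ([2.0, 2.0], 1), ([2.2, 2.4], 1), ([2.4, 2.1], 1)]

w, b = train(data, eps=0.4)  # adversarially trained model
print("clean accuracy:", accuracy(data, w, b))
print("accuracy under attack:", accuracy(data, w, b, eps=0.4))
```

The design choice is simply to attack the model with its own current weights at every step, so it learns a boundary with slack against bounded perturbations instead of one that barely separates the clean data.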

Ian Goodfellow (the creator of GANs, Generative Adversarial Networks) used this adversarial nature to build a better AI algorithm. Where has this already worked?  He initially approached the problem from a security angle within the AI world, and when he created GANs it became obvious that the technique was making AI better.

Who would have guessed it, but AI is now creating images of cats that are entirely ‘fake’, or better, ‘artificial’. The algorithm generates a new type of cat picture on demand.

The Meow Generator: ML algorithms that design cat pictures.

So what does this really mean? Fake pictures of people, animals, and other items will start to proliferate.

It remains to be seen how this aspect of AI is actually going to be useful.

Do you want to test ML for Cybersecurity?

We are developing new tests for AI and ML – contact us to discuss.

