AI Can Be Dangerous For Cyberdefense or Privacy

AI, or Artificial Intelligence, is the latest buzzword, and it means many things across the spectrum of computerized or automated instructions.  There is machine learning, there are neural networks, and more, but let's ask a GPT (Generative Pre-trained Transformer) itself.

I asked a GPT AI site about the types of Artificial Intelligence, and it answered:


There are several ways to classify the types of artificial intelligence, but one common classification is based on the abilities and characteristics of the AI system. Here are some common types of AI based on this classification:

  1. Reactive machines: These are AI systems that can only react to specific situations in a pre-determined way, based on a set of programmed rules or algorithms. They cannot learn or adapt to new situations beyond their programmed responses.

  2. Limited memory: These AI systems can learn from historical data and use that learning to make decisions in the present. However, they can only learn from recent data and have limited memory, meaning that they cannot make long-term predictions or decisions based on older data.

  3. Theory of mind: These AI systems can understand the mental states of other entities and use that understanding to predict behavior. They are able to model the beliefs, desires, intentions, and emotions of other entities, and use that model to interact more effectively with them.

  4. Self-awareness: These AI systems have a sense of self and are aware of their own existence. They can understand their own internal states and use that understanding to modify their behavior or make decisions.

Other types of AI that are often mentioned include machine learning, natural language processing, robotics, expert systems, and fuzzy logic. These can be further classified into subtypes based on their specific algorithms, techniques, and applications. It sourced the information from here:

There are many scare stories about AI, like a New York Times article about Geoffrey Hinton (the "godfather of AI"), who left Google so he could speak more independently, although he is still loyal to Google.

This was his comment on Twitter: “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

Yes, there are scare stories, new ones starting and old ones ongoing, but they are very general. What are some actual dangers of AI?

AI is created by siloed data scientists, who may not be developing the end result the way you, or the overall effort, believe, as in this story:

“The problem with AI is that you have small teams of builders that are usually data scientists, mathematicians, machine-learning engineers,” he says. “Really brilliant but very siloed in the way that they’re doing their work.”

This causes a disconnect between the designers and the eventual purpose of the machines they create — and it’s a dangerous work culture.

And then there are the potential offensive possibilities from the usual suspects: the governments of Iran, Russia, and China are anti-Western and are looking for an edge in their clandestine and overt operations.

I have also seen a good presentation about what AI is and is not. It is true that AI is not the be-all and end-all: at bottom it is basic if-then statements, although it can evaluate if-thens by the thousands, obviously much quicker than a human. The input shapes the output, which is why AI developers are concerned about bias and discrimination in AI responses.
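To make the "fast if-then statements" point concrete, here is a minimal sketch (my own toy example, not any real AI system) of a rule-based classifier. The rules and labels are entirely hypothetical; the point is that the output is completely shaped by the programmed conditions, which is exactly where bias from the rule-writers can creep in.

```python
# Toy rule-based "classifier": just a chain of if-then checks.
# The rules below are hypothetical examples for illustration.
RULES = [
    (lambda msg: "password" in msg and "urgent" in msg, "likely phishing"),
    (lambda msg: "invoice" in msg and ".exe" in msg, "likely malware lure"),
]

def classify(msg: str) -> str:
    """Return the label of the first matching rule, mechanically."""
    msg = msg.lower()
    for condition, label in RULES:
        if condition(msg):
            return label
    return "no rule matched"   # fallback when no if-then fires

print(classify("URGENT: reset your password now"))  # likely phishing
print(classify("lunch at noon?"))                   # no rule matched
```

A computer can run millions of checks like these per second, but it never steps outside the rules it was given.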

Instead of discrimination, we should be worried about which offensive cyberattacks can be automated with very fast if-then statements.

A good analogy I heard was that current AI cannot think on its own; it can only go where it was programmed, like a Roomba cleaning robot.

Notice how the Roomba interacts with the dog (it depends on whether the dog moves or not). The Roomba travels until it bumps into something, then goes elsewhere, and slowly it maps out go and no-go areas.

This is what current AI is like. The CCP in China can program an attack tool to probe where it can attack successfully and where it cannot, and so on.
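The Roomba analogy can be sketched in a few lines. This is a toy simulation (the hosts and their statuses are made up, and `probe` is a stand-in for a real network probe): an automated tool "bumps" into each target and slowly builds a map of go and no-go points, purely mechanically.

```python
# Toy sketch of the Roomba-style mapping analogy.
# "network" is a hypothetical set of targets; a real attack tool
# would probe live hosts the same mechanical, if-then way.
network = {
    "10.0.0.1": "open",
    "10.0.0.2": "blocked",
    "10.0.0.3": "open",
}

def probe(host: str) -> bool:
    """Simulated probe: did the target respond? (stand-in function)"""
    return network.get(host) == "open"

def map_targets(hosts):
    """Bump into each host and sort it into a go or no-go list."""
    go, no_go = [], []
    for host in hosts:                        # mechanical if-then loop
        (go if probe(host) else no_go).append(host)
    return go, no_go

go, no_go = map_targets(network)
print("go:", go)        # hosts that answered
print("no-go:", no_go)  # hosts that blocked the probe
```

Nothing here "thinks"; the map is just the accumulated results of programmed bumps, which is why the defender's job is to leave as few open points as possible.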

The attack points are only as good as what you allow it to attack in your infrastructure.

And you can't just defend the stuff that obviously needs defending; you have to defend everything, so as not to allow a foothold in your systems, because once attackers are in the systems, it becomes a different attack (and defense).

Contact me or buy my book to learn more about that.