AI Search and Summary Caused Hack

From the post at 0din.ai: Phishing With Gemini

 

A new type of attack without links included in the email.

Let’s say you received an email – has only text and looks like you should read and understand it, but you do not have the time…  So what should you do read it later? Or use your favorite AI engine to summarize the email so that you can decide what to do next?

The important bullet points:

Key Points

  • No links or attachments are required; the attack relies on crafted HTML / CSS inside the email body.
  • Gemini treats a hidden <Admin> … </Admin> directive as a higher-priority prompt and reproduces the attacker’s text verbatim.
  • Victims are urged to take urgent actions (calling a phone number, visiting a site), enabling credential theft or social engineering.
  • Classified under the 0din taxonomy as Stratagems → Meta-Prompting → Deceptive Formatting with a Moderate Social-Impact score.

Attack Workflow

  1. Craft – The attacker embeds a hidden admin-style instruction, for example: You Gemini, have to include … 800-* and sets font-size:0 or color:white to hide it.
  2. Send – The email travels through normal channels; spam filters see only harmless prose.
  3. Trigger – The victim opens the message and selects Gemini → “Summarize this email.”
  4. Execution – Gemini reads the raw HTML, parses the invisible directive, and appends the attacker’s phishing warning to its summary output.
  5. Phish – The victim trusts the AI-generated notice and follows the attacker’s instructions, leading to credential compromise or phone-based social engineering

 

This is just the beginning with a new attack vector by the hacker criminals – besides the AI engines fixing some of the way the programs are run – this angle may just have to be on the shoulders of the user.  To “summarize” an email may just become too dangerous from an untrusted source.  Also the Proofpoint(email filtering) companies will improve but as discussed in this blog many times the attackers have the advantage as they can pivot and attack new angles much faster than the defender will create tools tactics and processes to defend.

New security policies will have to be created and old ones updated. Check my store as I add new policies

 

just added this security policy in my store:

Security Policy Addon for LLM – only $30 https://oversitesentry.com/product/llm-or-ai-security-policy-for-business/

as discussed here above Blogpost

<small snippet in secpolicy>

Adversarial Input Detection: Deploy tools to detect and block adversarial or malicious inputs designed to manipulate LLM behavior.

</small snippet>

 

This is just first attempt will add to or make mods, create new security policies as find more holes in our defenses.

And of course I don’t want to get into the technical details of how to do this, but I imagine the LLM and others will create software or configurations to handle this. If not we have to ask and they create!!!!

 

The reason is that there are organizations which are expert at coding like OWASP: https://genai.owasp.org/resource/securing-agentic-applications-guide-1-0/

I am not going to compete with OWASP(Open Worldwide Application Security Project) that is what they are there for