New Year, Same Problems – Hackers Try to Get Clicks

Yes, this time it is a successful attack on the hospitality industry in Europe – so why include it?

Because if it is working there, it will come here – it is just a matter of time.

The Record (therecord.media) has the story: Russian hackers target European hospitality industry with “Blue Screen of Death” malware.

It usually starts with a phishing email – a fake booking.com message – whose link actually sends you to a command-and-control server, which connects to the machine and installs the ransomware.

Unfortunately, after clicking the link in the fake booking.com email, some systems (so far only in Europe) get the Blue Screen of Death (BSoD) – the screen that appears when an unexpected error happens on Windows machines.
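To make that concrete, here is a minimal Python sketch – my illustration, not code from the Record story, and the URLs and trusted domain below are made up – showing why what matters is where a link actually points, not the booking.com text it displays:

```python
# Minimal sketch: compare a link's real hostname against an allow-list.
# The trusted domain and both sample URLs are hypothetical examples.
from urllib.parse import urlparse

def link_is_trusted(url: str, trusted_domain: str = "booking.com") -> bool:
    """True only if the URL's real host is the trusted domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == trusted_domain or host.endswith("." + trusted_domain)

# The email's display text says "booking.com", but the href tells the truth:
phish = "http://booking.com.reservation-update.example/login"
legit = "https://secure.booking.com/myreservations"

print(link_is_trusted(phish))  # False – real host is reservation-update.example
print(link_is_trusted(legit))  # True
```

Hovering over a link before clicking is the manual version of this same check – the displayed text said booking.com, but the real host did not.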

Hackers are also using our own tools – our AI applications – to attack us. This has been called “vibe hacking,” a play on “vibe coding” (writing software with AI apps). Also interesting to note: Anthropic (the maker of Claude AI) published threat research on abuse of its own platform (the Claude AI app). It is located at https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf

The executive summary has some interesting nuggets:

While specific to Claude, the case studies presented below likely reflect consistent patterns of behaviour across all frontier AI models. Collectively, they show how threat actors are adapting their operations to exploit today’s most advanced AI capabilities:

• Agentic AI systems are being weaponized: AI models are themselves being used to perform sophisticated cyberattacks – not just advising on how to carry them out.

• AI lowers the barriers to sophisticated cybercrime. Actors with few technical skills have used AI to conduct complex operations, like developing ransomware, that would previously have required years of training.

• Cybercriminals are embedding AI throughout their operations. This includes victim profiling, automated service delivery, and in operations that affect tens of thousands of users.

• AI is being used for all stages of fraud operations. Fraudulent actors use AI for tasks like analyzing stolen data, stealing credit card information, and creating false identities.

We’re discussing these incidents publicly in order to contribute to the work of the broader AI safety and security community, and help those in industry, government, and the wider research community strengthen their own defences against the abuse of AI systems. We plan to continue releasing reports like this regularly, and to be transparent about the threats we find.

(Leaving the British spellings – “behaviour,” “defences” – as-is from the report :).

What does this mean? Well, unfortunately the hackers are using our tools against us!!!

If you cannot see yourself working with AI because “change is bad” – understand that the hackers are not operating under that belief.

Contact me to keep working incrementally on your security program.