AI (Artificial Intelligence) has to be used with some intelligence, which of course depends on what you want out of it.
Are you building software? Then it is best to do the hard work of architecture before asking the AI any question that results in code.
This article at Composio.dev is about AI agent reliability:
When Replit’s AI coding tool recently wiped a user’s database in what the company called a “catastrophic failure,” it confirmed every CISO’s worst fear.
That kind of catastrophic failure, a ‘known unknown’, is a failure of Granular Control (Pillar 2) and rightly dominates security conversations. It’s a terrifying and visible risk that CISOs are actively building guardrails to prevent.
As they should, since AI agents should not have the keys to the kingdom, only the permissions they need, just like any other employee role. You would not give the printer operator the right to delete data, right? So why give an AI agent that right?
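To make that concrete, here is a minimal sketch in Python of what least-privilege tool access for an agent could look like. The tool names, the wrapper function, and the allow-list are hypothetical examples, not any particular product's API; the point is simply that destructive operations are never on the agent's allow-list, so they cannot be reached by accident.

```python
# Minimal sketch of least-privilege tool access for an AI agent.
# Tool names and the registry are hypothetical; the idea is deny-by-default.

def read_record(record_id: str) -> str:
    # Read-only lookup: safe for the agent to call.
    return f"record {record_id}: ..."

def delete_record(record_id: str) -> None:
    # Destructive action: exists in the system, but is never exposed to the agent.
    raise RuntimeError("destructive tool invoked")

TOOL_REGISTRY = {
    "read_record": read_record,
    "delete_record": delete_record,
}

# The agent's role lists only the tools it actually needs.
AGENT_ALLOWED_TOOLS = {"read_record"}

def agent_call_tool(tool_name: str, **kwargs):
    """Dispatch a tool call only if it is on the agent's explicit allow-list."""
    if tool_name not in AGENT_ALLOWED_TOOLS:
        # Anything not explicitly allowed is refused; in a real system this
        # would also be logged and escalated to a human.
        raise PermissionError(f"agent may not call '{tool_name}'")
    return TOOL_REGISTRY[tool_name](**kwargs)

print(agent_call_tool("read_record", record_id="42"))   # permitted, read-only path
# agent_call_tool("delete_record", record_id="42")      # raises PermissionError
```

The design choice is the same one we apply to human roles: start from zero permissions and add only what the job requires, rather than starting from full access and trying to subtract the dangerous parts.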
The article linked above gives a detailed rundown of possible attack scenarios against AI agents.
Encodify.com has a story about businesses giving agents the authority to negotiate car sales, and one agent negotiated to sell a car for $1.
There has to be testing, and there have to be guardrails. I know we all want the new software to just work, but it has to prove that it works. Contact me to discuss.
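As an illustration of what "prove it works" can mean in practice, here is a sketch of guardrail tests written with pytest, assuming the hypothetical agent_call_tool wrapper from the earlier sketch. Before the agent goes anywhere near production, the dangerous paths are shown to be blocked, not just assumed to be avoided.

```python
# Hypothetical guardrail tests (pytest) for the agent wrapper sketched earlier.
import pytest

# Assumption: the earlier sketch lives in a module named agent_tools.
from agent_tools import agent_call_tool

def test_agent_can_read():
    # The permitted, read-only path should keep working.
    assert agent_call_tool("read_record", record_id="42").startswith("record 42")

def test_agent_cannot_delete():
    # The destructive path must be refused outright.
    with pytest.raises(PermissionError):
        agent_call_tool("delete_record", record_id="42")
```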
And of course there is the latest AI nightmare, OpenClaw, as covered in this link: https://thenewstack.io/openclaw-moltbot-security-concerns/

