AI Implemented without Governance Concerns

As this story at The Register pointed out:

“Enterprises neglect AI security – and attackers have noticed”

The findings come from Big Blue’s Cost of a Data Breach Report 2025, which shows that AI-related exposures currently make up only a small proportion of the total, but these are anticipated to grow in line with greater adoption of AI in enterprise systems.

 

I know the infosec department is the department of NO, but why can’t we include a new AI system in GRC (Governance, Risk, and Compliance)?

 

Take a look at this snippet:

Based on data reported by 600 organizations globally between March 2024 and February 2025, IBM says 13 percent of them flagged a security incident involving an AI model or AI application that resulted in an infraction.

Almost every one of those breached organizations (97 percent) indicated it did not have proper AI access controls in place.

97%? That means nearly every breached organization was so excited to roll out ChatGPT or other AI tools that governance be damned.

Well, this will have wide-ranging effects soon enough.
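
The snippet never says what “proper AI access controls” would actually look like, so here is a minimal sketch of the idea in Python: a deny-by-default, role-based check sitting in front of the model endpoint, so not every excited user can reach every AI capability. Every role, action, and function name here is made up for illustration; this is not IBM’s recommendation or any vendor’s API.

```python
# Minimal sketch of role-based access control in front of an LLM endpoint.
# All names (roles, actions, call_llm) are hypothetical, not tied to any vendor API.

from dataclasses import dataclass

# Which roles may perform which AI actions; anything not listed is denied.
POLICY = {
    "analyst":  {"chat", "summarize"},
    "engineer": {"chat", "summarize", "code_generation"},
    "admin":    {"chat", "summarize", "code_generation", "fine_tune"},
}

@dataclass
class User:
    name: str
    role: str

def authorize(user: User, action: str) -> bool:
    """Deny by default: the action must be explicitly allowed for the user's role."""
    return action in POLICY.get(user.role, set())

def call_llm(prompt: str) -> str:
    # Placeholder for the real model call (OpenAI, Bedrock, a local model, etc.).
    return f"[model response to: {prompt!r}]"

def handle_request(user: User, action: str, prompt: str) -> str:
    if not authorize(user, action):
        # Refuse (and log) instead of silently forwarding the prompt.
        raise PermissionError(f"{user.name} ({user.role}) may not perform {action!r}")
    return call_llm(prompt)

if __name__ == "__main__":
    print(handle_request(User("dana", "analyst"), "chat", "Summarize our incident report"))
    # Raises PermissionError: analysts are not cleared for fine-tuning jobs.
    # handle_request(User("dana", "analyst"), "fine_tune", "train on customer data")
```

Even a toy gate like this gives GRC something concrete to review: a written allow-list, a default deny, and a refusal path that can be logged and audited.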

 

 

Here is the relevant section from the IBM report:

97%: Share of organizations that reported an AI-related breach and lacked proper AI access controls

Security incidents involving an organization’s AI remain limited – for now. On average, 13% of organizations reported breaches that involved their AI models or applications. However, among those that did, almost all (97%) lacked proper AI access controls. The most common of these security incidents occurred in the AI supply chain, through compromised apps, APIs or plug-ins. These incidents had a ripple effect: they led to broad data compromise (60%) and operational disruption (31%). The findings suggest AI is emerging as a high-value target.
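
The supply-chain detail is the one worth acting on first: compromised apps, APIs and plug-ins. One cheap control that doesn’t wait for a full GRC program is to pin and verify every model or plug-in artifact before it is loaded. A rough sketch in Python, assuming you record an approved SHA-256 digest when an artifact is first reviewed (the file names and hashes below are hypothetical placeholders):

```python
# Minimal sketch of one AI supply-chain control: refuse to load a model or
# plug-in artifact unless its SHA-256 digest matches a hash pinned in advance.
# File names and hashes here are hypothetical placeholders.

import hashlib
from pathlib import Path

# Digests recorded when each artifact was first reviewed and approved.
PINNED_HASHES = {
    "sentiment-model-v3.bin": "9f2c...replace-with-the-real-digest...",
    "crm-plugin-1.4.2.whl":   "a41b...replace-with-the-real-digest...",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    expected = PINNED_HASHES.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not an approved artifact")
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"{path.name} failed integrity check (got {actual})")

if __name__ == "__main__":
    import tempfile
    # Self-contained demo: write a fake artifact, pin its digest as if that had
    # happened at review time, then verify it before "loading" it.
    with tempfile.TemporaryDirectory() as tmp:
        artifact = Path(tmp) / "sentiment-model-v3.bin"
        artifact.write_bytes(b"pretend model weights")
        PINNED_HASHES[artifact.name] = sha256_of(artifact)
        verify_artifact(artifact)
        print(f"{artifact.name} passed the integrity check")
```

This won’t help against a compromised upstream API, but it does stop a silently swapped model file or plug-in from walking straight into production.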

 

As usual, we get new toys and tools that are not secure by default (because why think about security while coding great features?).

Thus new tools have to get hacked first, and only after the damage is done do we become more secure.

Contact me and check our store for a different philosophy.

This is just a small section of what should be done – from ISACA:

AI has taken the world by storm in a mostly good way, increasing productivity, creativity and profits faster and easier than ever before in recent years.

If AI is training computers to think like humans, I consider AI governance in its most simplified definition to be “Making sure AI thinks like a good human.”

Include the AI add-on from my store at this link: AI LLM. Adding it to your security policy is a good idea.