The numbers tell a sobering story of unchecked optimism: 13% of organisations have reported breaches involving AI models or applications, according to IBM’s 2025 Cost of a Data Breach Report. Even worse, 8% of organisations don’t even know if they’ve been compromised.

Worse still, 97% of those breached organisations had no AI access controls in place. Let that sink in. It's like leaving your front door wide open, posting your address online, and then acting surprised when someone robs you.

Welcome to the AI security crisis of 2025, where innovation has outpaced governance by at least three years.


The Rise of Shadow AI: The Hidden Risk

Right now, unauthorised AI usage is thriving in your organisation—whether you know it or not. In industries like healthcare, manufacturing, and financial services, shadow AI usage has surged by over 200% year-over-year.

What is shadow AI? It’s when employees use unsanctioned AI tools, bypassing IT policies for convenience.

  • 71% of UK employees admit to using unauthorised AI tools, with over half using them weekly.
  • In the U.S., 46% of office workers—even those in IT—report using unapproved AI tools.
  • 68% of employees use personal AI accounts at work, and 57% admit to inputting sensitive data into them.

From marketing analysts drafting campaigns in ChatGPT to developers automating workflows with private APIs, this invisible infrastructure circumvents your organisation’s controls.

Smaller companies are especially vulnerable, with firms of 11–50 employees averaging 269 shadow AI tools per 1,000 employees. Most lack dedicated security staff, leaving them defenceless.


The Half-Million Dollar Problem

The financial cost of ignoring shadow AI is staggering. Organisations hit by cyberattacks tied to shadow AI faced an average of $670,000 in additional breach costs compared to those with proper controls.

  • 60% of AI-related security incidents led to data breaches.
  • 31% caused operational disruptions.

Shockingly, 63% of breached organisations had no governance policy in place, and even those with policies often lacked basic oversight like approval processes or access controls.


What’s at Risk?

The data being input into unsecured AI systems should alarm every security professional.

  • 34.8% of corporate data entered into AI is classified as sensitive—a sharp rise from 27.4% last year.
  • Sensitive data often includes source code (18.7%) and R&D materials (17.1%).

Shadow AI exacerbates the risks:

  • 65% of breaches involved compromised personally identifiable information (PII).
  • 40% involved intellectual property theft.

Attack vectors are multiplying, with 45% of breaches stemming from malware in AI models, 33% from chatbots, and 21% from third-party apps.


Shadow AI: A Long-Term Threat

This isn’t a passing trend. Shadow AI tools are becoming entrenched in business workflows:

  • Two specific tools had median usage durations of over 400 days—more than a year of continuous use without approval.
  • Removing these tools often disrupts business operations, as employees rely on them for productivity.

Adding to the challenge, 45% of organisations don’t report AI-related breaches to avoid reputational damage.


Who’s Responsible?

With 76% of organisations still debating which teams should oversee AI security, the leadership vacuum is glaring. Meanwhile, 28% of employees lack access to approved AI tools, so they improvise with personal accounts.

From the employee’s perspective, AI tools make their jobs easier. But from a security standpoint, they’re introducing attack vectors that leadership doesn’t even know exist.

The disconnect is stark:

  • Only 32% of employees are concerned about company or customer data being exposed via AI.
  • Just 29% worry about shadow AI creating security risks.

A Glimmer of Hope

It’s not all bleak. Organisations that integrate AI and automation into their security operations save an average of $1.9 million in breach costs and reduce breach lifecycles by 80 days.

As of 2025, 96% of companies are increasing their AI security budgets. The challenge now is whether they can act fast enough to close the gap.


Best Practices for Securing AI Systems

To protect AI systems and the data they handle, organisations should adopt these strategies:

  1. Use Trusted Data and Track Provenance
    • Source data from verified providers.
    • Maintain secure, immutable logs to detect tampering.

  2. Protect Data Integrity
    • Use checksums and cryptographic hashes to ensure data accuracy.

  3. Authenticate Data Revisions
    • Apply digital signatures to prevent unauthorised changes.

  4. Operate on Trusted Infrastructure
    • Leverage Zero Trust architecture and secure enclaves for sensitive operations.

  5. Classify and Control Access
    • Label data by sensitivity, enforce access controls, and encrypt outputs.

  6. Encrypt Everything
    • Use AES-256 encryption for data at rest and TLS for data in transit.

  7. Secure Storage Devices
    • Store data only on FIPS 140-3–certified devices.

  8. Use Privacy-Preserving Methods
    • Employ techniques like data masking or differential privacy during training.

  9. Delete Data Safely
    • Follow the secure deletion protocols outlined in NIST SP 800-88.

  10. Continuously Assess Risk
    • Conduct regular risk assessments and update controls to address evolving threats.
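
The data-integrity and revision-authentication practices above (checksums and signatures) can be sketched with Python's standard library. `hashlib` gives a tamper-evident digest, and an HMAC stands in here for a full digital signature; a production system would use asymmetric signatures and a proper secrets manager, so the key handling below is purely illustrative:

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest used to detect tampering with a dataset."""
    return hashlib.sha256(data).hexdigest()

def sign_revision(data: bytes, key: bytes) -> str:
    """HMAC-SHA-256 tag: only holders of `key` can produce a valid tag,
    so an unauthorised edit to the data becomes detectable."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_revision(data: bytes, key: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(sign_revision(data, key), tag)

record = b"training-batch-042"
key = b"keep-this-in-a-secrets-manager"  # illustrative only

digest = fingerprint(record)
tag = sign_revision(record, key)

assert verify_revision(record, key, tag)             # untouched data passes
assert not verify_revision(record + b"!", key, tag)  # tampering is caught
```

Storing the digest in an immutable log alongside the data is what makes the tamper check meaningful: an attacker who can rewrite both the data and the log defeats the scheme.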
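
The classify-and-control-access practice reduces, at its simplest, to comparing a user's clearance against a data label. This is a minimal sketch assuming a flat, ordered label scheme (real deployments layer on roles, need-to-know, and audit logging):

```python
# Ordered sensitivity labels; higher number means more restricted.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_access(user_clearance: str, data_label: str) -> bool:
    """Allow access only when the user's clearance meets or exceeds
    the sensitivity label attached to the data."""
    return LEVELS[user_clearance] >= LEVELS[data_label]

assert can_access("confidential", "internal")    # cleared user gets in
assert not can_access("internal", "restricted")  # under-cleared user is denied
```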
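
The privacy-preserving practice can start well before model training: masking sensitive tokens before a prompt ever leaves the organisation. The patterns below are illustrative assumptions, not a complete detector; a production system would use a vetted DLP library:

```python
import re

# Hypothetical detection patterns; real PII detection needs far more coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text: str) -> str:
    """Replace sensitive tokens with placeholders before the text
    is sent to an external AI tool (e.g. in a chatbot prompt)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Refund jane.doe@example.com, card 4111 1111 1111 1111."
print(mask(prompt))
# → Refund [EMAIL], card [CARD].
```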

What Needs to Change

The solutions aren’t complicated—they’re just not being implemented. Organisations need:

  • Continuous monitoring for unauthorised AI tools.
  • Enforceable AI governance policies.
  • Approved AI tools that are both secure and user-friendly.
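
Continuous monitoring for unauthorised AI tools can begin with something as plain as scanning outbound proxy logs for known AI service domains. This sketch assumes a hypothetical "user domain port" log format and a small illustrative domain list, neither of which is a complete inventory:

```python
# Illustrative list of AI service domains; a real deployment would
# maintain a curated, regularly updated feed.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_lines, approved):
    """Return (user, domain) pairs hitting AI services that are
    not on the organisation's approved list."""
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]  # assumed format: "<user> <domain> <port>"
        if domain in AI_DOMAINS and domain not in approved:
            hits.append((user, domain))
    return hits

logs = [
    "alice chat.openai.com 443",
    "bob internal.example.com 443",
    "carol claude.ai 443",
]
print(flag_shadow_ai(logs, approved={"chat.openai.com"}))
# → [('carol', 'claude.ai')]
```

Even a crude report like this gives security teams the visibility that, per the figures above, two-thirds of organisations with governance policies currently lack.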

Currently, only 34% of organisations with AI governance policies actively monitor for shadow AI. The real issue isn’t the lack of policies—it’s the lack of enforcement and visibility.

Security teams must shift from reacting to AI risks to proactively managing them. That means:

  • Asset management for AI systems.
  • Risk assessments tailored to AI vulnerabilities.
  • Incident response plans designed for AI-specific threats.

These proactive measures may lack the glamour of “revolutionary AI,” but they are essential to closing the governance gap before another crisis emerges.

TIME BUSINESS NEWS
