Shadow AI Is the Biggest Threat You're Not Managing: What RSAC 2026 Revealed for UK Businesses
At RSAC 2026 — the world's largest cybersecurity conference, held in San Francisco this April — the message was unambiguous: shadow AI is now the fastest-growing attack surface in business. Not ransomware variants. Not zero-day exploits. The biggest emerging threat comes from your own employees, using AI tools you don't know about, feeding your company's most sensitive data into external services without a second thought.

For UK SMEs, the implications are immediate. Your staff are almost certainly using ChatGPT, Claude, Gemini, Copilot and dozens of other AI tools — pasting customer records, financial data and proprietary information into systems your IT team has never approved or audited. Under GDPR, you remain the data controller regardless of which employee used which unapproved tool.

This guide distils the critical findings from RSAC 2026 into actionable intelligence for UK businesses — what shadow AI is, why agentic AI creates a new category of risk, and what steps you can take to protect your organisation.

What Is Shadow AI?

Shadow AI is the far more dangerous successor to shadow IT. Where shadow IT might involve someone using Dropbox instead of OneDrive, shadow AI involves employees feeding your confidential business data into third-party AI systems that learn from, store, and potentially expose that information.

  • 60%: organisations now using AI-augmented automation
  • <20%: the same figure in 2023
  • 3x: growth in two years

60% of organisations have transitioned to AI-augmented automation, up from less than 20% in 2023. That tripling outpaced governance frameworks — creating a massive gap between what employees are doing with AI and what organisations have authorised.

These scenarios are happening in your business right now:

  • A sales manager pastes a client's contract into ChatGPT to summarise key terms — exposing commercial terms and client details externally
  • An HR officer uploads CVs to an AI screening tool — sharing candidates' personal data with an unvetted third party
  • A developer pastes proprietary source code into an AI assistant — potentially exposing intellectual property
  • A finance team member feeds quarterly figures into an AI tool — sharing unreleased financial data externally

Each action takes seconds. Each creates a data exfiltration path that bypasses every security control your organisation has invested in.

GDPR Warning

Under UK GDPR, your organisation is the data controller even when an employee uses an unapproved AI tool. A single employee pasting customer data into ChatGPT could constitute an unauthorised transfer without a lawful basis or Data Processing Agreement. The ICO can fine up to £17.5 million or 4% of global turnover.

Agentic AI: When AI Stops Chatting and Starts Acting

RSAC 2026 introduced Shadow Agents — unapproved AI agents operating autonomously within businesses, entirely outside security visibility. These are browser extensions that auto-respond to emails, productivity tools connected to your CRM and calendar, or AI assistants granted access to company systems via OAuth.

Conversational AI responds to prompts. Agentic AI takes autonomous actions via APIs, tools, and integrations — a fundamentally different risk profile.

How conversational and agentic AI compare:

  • Interaction: prompt and response vs goal and autonomous action chain
  • System access: text in, text out vs APIs, databases, email and web
  • Persistence: session-based vs continuous background operation
  • Risk profile: data leakage via prompts vs exfiltration, lateral movement and action chains
  • Detection: moderate vs extremely difficult

If compromised, a single AI agent with email access could exfiltrate data, send phishing emails to your contact list, and modify documents across connected storage — all before anyone notices. RSAC researchers highlighted identity propagation: one compromised agent cascading through connected services, creating a blast radius far exceeding a single compromised account.

“An AI agent with the wrong permissions is not a tool — it's an insider threat with superhuman speed. We're building systems that take real-world actions without the security frameworks to contain them.”

— RSAC 2026 Keynote Panel on Agentic AI Security

NIST announced initiatives to define security standards for AI agents. Until those standards exist, organisations operate in a governance vacuum.
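Pending those standards, even a lightweight triage of existing OAuth consents gives you visibility. Below is a minimal sketch: it assumes consent grants have already been exported from an admin audit into simple records (the field names are illustrative, not a real API schema), though scope names such as Mail.Send and Files.ReadWrite.All are genuine Microsoft Graph permissions.

```python
# Sketch: triage exported OAuth consent grants for potential shadow agents.
# Record fields ("app", "scopes") are illustrative; adapt to your export format.

HIGH_RISK_SCOPES = {
    "Mail.Send", "Mail.ReadWrite",   # agent can send or alter email
    "Files.ReadWrite.All",           # broad access to documents
    "Calendars.ReadWrite",
}

def triage_grants(grants):
    """Return grants whose requested scopes include any high-risk permission."""
    flagged = []
    for grant in grants:
        risky = set(grant["scopes"]) & HIGH_RISK_SCOPES
        if risky:
            flagged.append({"app": grant["app"], "risky_scopes": sorted(risky)})
    return flagged

sample = [
    {"app": "AI Email Assistant", "scopes": ["Mail.Send", "Mail.ReadWrite"]},
    {"app": "Timesheet Plugin",   "scopes": ["User.Read"]},
]
print(triage_grants(sample))
# → [{'app': 'AI Email Assistant', 'risky_scopes': ['Mail.ReadWrite', 'Mail.Send']}]
```

Any flagged app warrants a manual review: who consented, what data it can reach, and whether the vendor has a DPA in place.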

Ransomware in 2026: 51 Seconds to Catastrophe

  • 51s: ransomware breakout time
  • 80%: attacks now malware-free
  • 168hrs: average discovery time

Ransomware breakout time has collapsed to 51 seconds. Your security team has less than a minute to contain a threat before it spreads. Most organisations take 7 days to discover a breach, then 12 hours to resolve it.

80% of successful attacks are now malware-free — using stolen credentials, legitimate remote access tools, and identity manipulation. Traditional antivirus is increasingly ineffective.

Ransomware Payments: A Twenty-Fold Explosion

  • 2018: $39M
  • 2019: $92M
  • 2020: $290M
  • 2021: $590M
  • 2022: $457M
  • 2023: $813M+

Global ransomware payments exploded from $39 million in 2018 to over $813 million by 2023, and ransomware accounted for over 90% of cyber insurance losses in the first half of 2025.

Agentic AI Defence: The Silver Lining

  • Traditional discovery: 168 hours
  • Traditional resolution: 12 hours
  • AI-augmented resolution: 38 seconds

Agentic AI, properly deployed by security teams, can compress resolution from 12 hours to 38 seconds — turning the attackers' speed advantage back against them.

The Cyber Insurance Paradox

John Kindervag — creator of zero-trust security — argued at RSAC that cyber insurance is actively making ransomware worse.

“Companies with insurance are more likely to pay ransoms because the insurer covers it. Criminals target insured companies because the payout is guaranteed. We've created a funding mechanism for organised crime and called it risk management.”

— John Kindervag, Creator of Zero Trust, RSAC 2026

The cycle is destructive: companies buy insurance, become more willing to pay, criminals target them, claims rise, premiums increase 20–30% annually. The answer isn't to forgo insurance — it's to ensure your security posture is strong enough that insurance becomes a backstop, not a crutch.

Real-World Shadow AI Incidents

  • Samsung engineering leak: engineers pasted semiconductor source code into ChatGPT; confidential IP was exposed and a company-wide AI ban followed
  • JPMorgan Chase: employees used ChatGPT for financial analysis and client communications; the firm imposed AI restrictions and launched a governance programme
  • Amazon warning: ChatGPT responses resembled internal Amazon data; an urgent employee warning was issued
  • Legal sector (multiple firms): lawyers used AI to draft filings and the AI hallucinated fake citations; court sanctions and misconduct proceedings followed

These are large organisations with dedicated security teams. UK SMEs — typically without a CISO — are far more vulnerable.

GDPR Implications for UK Businesses

  • You are the data controller — responsible when an employee pastes customer data into any AI tool
  • No lawful basis — most employee AI usage occurs without a DPA, legitimate interest assessment, or consent
  • International transfer risk — most AI services process data in the US without adequate safeguards
  • Data minimisation breach — employees paste far more data than necessary
  • No retention control — you lose control over how long data is kept or whether it trains models

UK SME AI Governance Readiness

  • Have a formal AI acceptable use policy: 12%
  • Maintain an approved AI tool list: 8%
  • Monitor the network for AI tool usage: 5%
  • Have DPIAs covering AI usage: 6%
  • Provide AI data handling training: 15%

No AI Policy vs Governed AI

No AI policy (unmanaged shadow AI risk):

  • Data exposure: uncontrolled; any employee, any data, any tool
  • GDPR compliance: likely in breach without knowing it
  • Incident response: no visibility into AI data leaks
  • Productivity: inconsistent and risky AI usage
  • Insurance: may invalidate claims

Governed AI approach (controlled AI enablement):

  • Data exposure: defined boundaries with DPAs in place
  • GDPR compliance: documented and auditable
  • Incident response: monitoring enables rapid detection
  • Productivity: staff empowered with safe, approved tools
  • Insurance: demonstrates due diligence

Shadow AI Audit and Technical Controls

Before you can govern AI usage, you need to understand what's happening:

  1. Network discovery: Review DNS logs and firewall records for AI service domains (api.openai.com, claude.ai, gemini.google.com). Check OAuth logs in M365 or Google Workspace. Examine expense reports for AI subscriptions
  2. Staff survey: Anonymous survey asking which AI tools staff use — frame it positively, not punitively. Ask about data types shared
  3. Risk categorisation: High risk (personal data, financial info, IP with no DPA), medium risk (enterprise tiers available but using free accounts), low risk (general tasks, no sensitive data)
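The DNS review in step 1 can be sketched as a simple log scan. This assumes query logs have been exported as one "timestamp client domain" record per line (the format and the domain list are illustrative; extend both for your environment and resolver):

```python
# Sketch: scan exported DNS query logs for known AI service domains.
# Log format ("timestamp client domain" per line) is an assumption.

AI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def find_ai_queries(log_lines):
    """Return (client, domain) pairs for queries to known AI services."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed records
        client, domain = parts[1], parts[2]
        if domain.lower().rstrip(".") in AI_DOMAINS:
            hits.append((client, domain.rstrip(".")))
    return hits

logs = [
    "2026-05-01T09:14:02 10.0.0.23 api.openai.com.",
    "2026-05-01T09:14:05 10.0.0.23 intranet.local.",
    "2026-05-01T09:15:11 10.0.0.41 claude.ai.",
]
print(find_ai_queries(logs))
# → [('10.0.0.23', 'api.openai.com'), ('10.0.0.41', 'claude.ai')]
```

Aggregating the hits per client IP over 30 days shows which teams to prioritise in the staff survey.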

Then implement technical controls:

  • DNS/firewall blocking: Block unapproved AI services at network level — most firewalls now include AI service categories
  • CASB: Monitor AI service access and data volumes flowing to them
  • DLP: Detect sensitive data patterns (NI numbers, credit cards) being pasted into AI interfaces
  • Network monitoring: Alert on unusual data volumes to AI-related domains
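The DLP control above boils down to pattern matching on outbound text. A minimal sketch follows; the patterns are deliberately simplified (production DLP validates NI prefix rules, applies Luhn checks to card numbers, and uses contextual analysis):

```python
import re

# Sketch: simplified DLP-style detection of sensitive UK data patterns.
# These regexes are illustrative and will over/under-match in production.

PATTERNS = {
    # UK National Insurance number: two letters, six digits, one letter
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    # 13-16 digits, optionally space- or dash-separated (credit cards)
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(text):
    """Return the names of sensitive-data patterns found in outbound text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

print(scan_outbound("Please summarise: employee QQ123456C, salary £42,000"))
# → ['ni_number']
```

Wired into a proxy or browser extension, a match can block the paste or alert the security team before the data reaches an AI service.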
Pro Tip

Don't ban AI outright. Employees will find workarounds, driving usage underground. Provide approved alternatives that meet productivity needs whilst maintaining security. Governance is always more effective than prohibition.

AI Acceptable Use Policy Essentials

  1. Approved tool list — which AI tools are authorised, which tiers, and for which teams
  2. Data classification rules — what can and cannot be shared with AI tools
  3. Prohibited actions — explicitly ban pasting customer data, source code, financial reports into unapproved tools
  4. Fast approval process — if requesting new tools takes weeks, staff will bypass it
  5. No-blame incident reporting — for employees who realise they've shared sensitive data
  6. Training requirements — mandatory AI data handling training with annual refreshers
  7. Quarterly review cycle — the AI landscape changes too fast for annual reviews
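Items 1 and 2 become enforceable once the approved-tool list maps each tool to the data classifications it may receive. A minimal sketch, with hypothetical tool names and classification labels:

```python
# Sketch: an approved-tool register mapping tools to permitted data classes.
# Tool names and classification labels are illustrative assumptions.

APPROVED_TOOLS = {
    "ChatGPT Enterprise": {"public", "internal"},
    "Copilot (M365 tenant)": {"public", "internal", "confidential"},
}

def check_usage(tool, data_class):
    """Return (allowed, reason) for a proposed tool/data combination."""
    allowed_classes = APPROVED_TOOLS.get(tool)
    if allowed_classes is None:
        return False, f"'{tool}' is not on the approved tool list"
    if data_class not in allowed_classes:
        return False, f"'{tool}' is not approved for {data_class} data"
    return True, "permitted"

print(check_usage("ChatGPT Enterprise", "confidential"))
```

The same register doubles as the source of truth for the fast approval process: adding a tool is a one-line change with an audit trail, not a weeks-long request.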

The Workforce Crisis and AI as Solution

The ISC2 2025 study revealed a global shortage of 3.5 to 4 million cybersecurity professionals, underpinning every challenge discussed at RSAC.

  • 4M: global cyber skills shortage
  • 20-30%: annual insurance premium rises
  • 90%+: insurance losses from ransomware

AI can be part of the solution. AI agents handling tier-one tasks — phishing triage, alert correlation, initial incident assessment — free hundreds of analyst hours monthly. For SMEs without dedicated security teams, managed security service providers offer AI-augmented monitoring at accessible price points.

Your Shadow AI Action Plan

This Week

  • Issue an all-staff communication announcing an AI governance review
  • Begin a network audit for AI service connections (DNS logs, last 30 days)
  • Draft an interim AI acceptable use policy

This Month

  • Complete the shadow AI audit (network + survey + risk categorisation)
  • Establish an approved AI tool list with enterprise agreements and DPAs
  • Implement DNS-level blocking for unapproved AI services
  • Conduct a DPIA covering AI tool usage
  • Review your cyber insurance for AI-related incident coverage

This Quarter

  • Deploy CASB or equivalent monitoring
  • Implement DLP policies for AI data transfer patterns
  • Roll out mandatory AI data handling training
  • Evaluate AI-augmented security tools

The Bottom Line

RSAC 2026 made one thing clear: AI adoption is not slowing down. The organisations that thrive will embrace AI whilst managing its risks — not impose blanket bans employees will circumvent.

UK SMEs face the same risks as large enterprises, with fewer resources. But a 50-person company can implement an AI policy, deploy DNS controls, and train staff in weeks, not the months an enterprise needs. Your size is an advantage.

Shadow AI is not a future threat. It's happening in your organisation right now. The question is whether you'll discover it through a structured governance programme — or through an ICO investigation after a breach.

Need Help Managing AI Risk?

CloudSwitched helps UK businesses implement secure AI strategies with proper governance, technical controls, and staff training — from shadow AI audits to acceptable use policies.
