At RSAC 2026 — the world's largest cybersecurity conference, held in San Francisco this April — the message was unambiguous: shadow AI is now the fastest-growing attack surface in business. Not ransomware variants. Not zero-day exploits. The biggest emerging threat comes from your own employees, using AI tools you don't know about, feeding your company's most sensitive data into external services without a second thought.
For UK SMEs, the implications are immediate. Your staff are almost certainly using ChatGPT, Claude, Gemini, Copilot and dozens of other AI tools — pasting customer records, financial data and proprietary information into systems your IT team has never approved or audited. Under GDPR, you remain the data controller regardless of which employee used which unapproved tool.
This guide distils the critical findings from RSAC 2026 into actionable intelligence for UK businesses — what shadow AI is, why agentic AI creates a new category of risk, and what steps you can take to protect your organisation.
What Is Shadow AI?
Shadow AI is the far more dangerous successor to shadow IT. Where shadow IT might involve someone using Dropbox instead of OneDrive, shadow AI involves employees feeding your confidential business data into third-party AI systems that learn from, store, and potentially expose that information.
According to figures cited at RSAC, 60% of organisations have transitioned to AI-augmented automation, up from less than 20% in 2023. That tripling has outpaced governance frameworks, creating a massive gap between what employees are doing with AI and what organisations have authorised.
These scenarios are happening in your business right now:
- A sales manager pastes a client's contract into ChatGPT to summarise key terms — exposing commercial terms and client details externally
- An HR officer uploads CVs to an AI screening tool — sharing candidates' personal data with an unvetted third party
- A developer pastes proprietary source code into an AI assistant — potentially exposing intellectual property
- A finance team member feeds quarterly figures into an AI tool — sharing unreleased financial data externally
Each action takes seconds. Each creates a data exfiltration path that bypasses every security control your organisation has invested in.
Under UK GDPR, your organisation is the data controller even when an employee uses an unapproved AI tool. A single employee pasting customer data into ChatGPT could constitute an unauthorised transfer without a lawful basis or Data Processing Agreement. The ICO can fine up to £17.5 million or 4% of annual global turnover, whichever is higher.
Agentic AI: When AI Stops Chatting and Starts Acting
RSAC 2026 introduced Shadow Agents — unapproved AI agents operating autonomously within businesses, entirely outside security visibility. These are browser extensions that auto-respond to emails, productivity tools connected to your CRM and calendar, or AI assistants granted access to company systems via OAuth.
Conversational AI responds to prompts. Agentic AI takes autonomous actions via APIs, tools, and integrations — a fundamentally different risk profile.
| Characteristic | Conversational AI | Agentic AI |
|---|---|---|
| Interaction | Prompt and response | Goal and autonomous action chain |
| System access | Text in, text out | APIs, databases, email, web |
| Persistence | Session-based | Continuous background operation |
| Risk profile | Data leakage via prompts | Exfiltration, lateral movement, action chains |
| Detection | Moderate | Extremely difficult |
If compromised, a single AI agent with email access could exfiltrate data, send phishing emails to your contact list, and modify documents across connected storage — all before anyone notices. RSAC researchers highlighted identity propagation: one compromised agent cascading through connected services, creating a blast radius far exceeding a single compromised account.
“An AI agent with the wrong permissions is not a tool — it's an insider threat with superhuman speed. We're building systems that take real-world actions without the security frameworks to contain them.”
NIST announced initiatives to define security standards for AI agents. Until those standards exist, organisations operate in a governance vacuum.
Ransomware in 2026: 51 Seconds to Catastrophe
Ransomware breakout time has collapsed to 51 seconds. Your security team has less than a minute to contain a threat before it spreads. Most organisations take 7 days to discover a breach, then 12 hours to resolve it.
80% of successful attacks are now malware-free — using stolen credentials, legitimate remote access tools, and identity manipulation. Traditional antivirus is increasingly ineffective.
Ransomware Payments: A Twenty-Fold Explosion
Global ransomware payments exploded from $39 million in 2018 to over $813 million by 2023. Ransomware accounts for over 90% of cyber insurance losses in H1 2025.
Agentic AI Defence: The Silver Lining
Agentic AI, properly deployed by security teams, can compress resolution from 12 hours to 38 seconds — turning the attackers' speed advantage back against them.
The Cyber Insurance Paradox
John Kindervag — creator of zero-trust security — argued at RSAC that cyber insurance is actively making ransomware worse.
“Companies with insurance are more likely to pay ransoms because the insurer covers it. Criminals target insured companies because the payout is guaranteed. We've created a funding mechanism for organised crime and called it risk management.”
The cycle is destructive: companies buy insurance, become more willing to pay, criminals target them, claims rise, premiums increase 20–30% annually. The answer isn't to forgo insurance — it's to ensure your security posture is strong enough that insurance becomes a backstop, not a crutch.
Real-World Shadow AI Incidents
| Incident | What Happened | Impact |
|---|---|---|
| Samsung Engineering Leak | Engineers pasted semiconductor source code into ChatGPT | Confidential IP exposed; company-wide AI ban |
| JPMorgan Chase | Employees used ChatGPT for financial analysis and client comms | Firm-wide AI restrictions; governance programme launched |
| Amazon Warning | ChatGPT responses resembled internal Amazon data | Urgent employee warning issued |
| Legal Sector (Multiple) | Lawyers used AI to draft filings; AI hallucinated fake citations | Court sanctions, misconduct proceedings |
These are large organisations with dedicated security teams. UK SMEs — typically without a CISO — are far more vulnerable.
GDPR Implications for UK Businesses
- You are the data controller — responsible when an employee pastes customer data into any AI tool
- No lawful basis — most employee AI usage occurs without a DPA, legitimate interest assessment, or consent
- International transfer risk — most AI services process data in the US without adequate safeguards
- Data minimisation breach — employees paste far more data than necessary
- No retention control — you lose control over how long data is kept or whether it trains models
UK SME AI Governance Readiness
[Comparison graphic: "No AI Policy" vs "Governed AI Approach"]
Shadow AI Audit and Technical Controls
Before you can govern AI usage, you need to understand what's happening:
- Network discovery: Review DNS logs and firewall records for AI service domains (api.openai.com, claude.ai, gemini.google.com). Check OAuth logs in M365 or Google Workspace. Examine expense reports for AI subscriptions
- Staff survey: Anonymous survey asking which AI tools staff use — frame it positively, not punitively. Ask about data types shared
- Risk categorisation: High risk (personal data, financial info, IP with no DPA), medium risk (enterprise tiers available but using free accounts), low risk (general tasks, no sensitive data)
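The network-discovery step above can be sketched in a few lines of Python. This is a minimal illustration, assuming a plain-text DNS query log with one lookup per line; real log formats vary by firewall and resolver, so the parsing and the watchlist of domains are assumptions you should adapt:

```python
# Sketch: flag AI-service lookups in a plain-text DNS query log.
# Assumes one log entry per line containing the queried domain --
# adapt the parsing to your firewall's or resolver's log format.
import re
from collections import Counter

# Illustrative watchlist -- extend with whatever your audit surfaces.
AI_DOMAINS = [
    "api.openai.com", "chat.openai.com", "chatgpt.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com", "generativelanguage.googleapis.com",
    "copilot.microsoft.com",
]
pattern = re.compile("|".join(re.escape(d) for d in AI_DOMAINS))

def scan_dns_log(lines):
    """Return a Counter of AI-service domains seen in the log lines."""
    hits = Counter()
    for line in lines:
        match = pattern.search(line)
        if match:
            hits[match.group(0)] += 1
    return hits

# Hypothetical log excerpt for demonstration.
sample = [
    "2026-04-02 09:14:03 query laptop-17 api.openai.com A",
    "2026-04-02 09:14:05 query laptop-17 intranet.local A",
    "2026-04-02 09:15:11 query desk-04 claude.ai A",
    "2026-04-02 09:16:40 query laptop-17 api.openai.com A",
]
print(scan_dns_log(sample))
```

Even a crude count like this tells you which devices are talking to which AI services and how often — usually enough to prioritise the conversation with the teams involved.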
Then implement technical controls:
- DNS/firewall blocking: Block unapproved AI services at network level — most firewalls now include AI service categories
- CASB: Monitor AI service access and data volumes flowing to them
- DLP: Detect sensitive data patterns (NI numbers, credit cards) being pasted into AI interfaces
- Network monitoring: Alert on unusual data volumes to AI-related domains
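For the DNS-level blocking control, one common mechanism is a response-policy zone (RPZ) on a BIND-style resolver, where a `CNAME .` entry returns NXDOMAIN for the listed name. The sketch below is illustrative only — the domain list is an assumption, and most commercial firewalls expose the same idea as a ready-made "AI services" blocklist category:

```
; rpz.ai-block.zone -- illustrative response-policy zone
; "CNAME ." causes the resolver to answer NXDOMAIN for that name.
$TTL 300
@                 IN SOA  localhost. admin.example.invalid. (1 3600 600 86400 300)
                  IN NS   localhost.
api.openai.com    CNAME .
chatgpt.com       CNAME .
*.chatgpt.com     CNAME .
claude.ai         CNAME .
*.claude.ai       CNAME .
gemini.google.com CNAME .
```

Remember to exempt any approved enterprise tiers from the blocklist, or you will break the sanctioned alternatives you want staff to use.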
Don't ban AI outright. Employees will find workarounds, driving usage underground. Provide approved alternatives that meet productivity needs whilst maintaining security. Governance is always more effective than prohibition.
AI Acceptable Use Policy Essentials
- Approved tool list — which AI tools are authorised, which tiers, and for which teams
- Data classification rules — what can and cannot be shared with AI tools
- Prohibited actions — explicitly ban pasting customer data, source code, financial reports into unapproved tools
- Fast approval process — if requesting new tools takes weeks, staff will bypass it
- No-blame incident reporting — for employees who realise they've shared sensitive data
- Training requirements — mandatory AI data handling training with annual refreshers
- Quarterly review cycle — the AI landscape changes too fast for annual reviews
The Workforce Crisis and AI as Solution
The ISC2 2025 study revealed a global shortage of 3.5 to 4 million cybersecurity professionals, underpinning every challenge discussed at RSAC.
AI can be part of the solution. AI agents handling tier-one tasks — phishing triage, alert correlation, initial incident assessment — free hundreds of analyst hours monthly. For SMEs without dedicated security teams, managed security service providers offer AI-augmented monitoring at accessible price points.
Your Shadow AI Action Plan
This Week
- Issue an all-staff communication announcing an AI governance review
- Begin a network audit for AI service connections (DNS logs, last 30 days)
- Draft an interim AI acceptable use policy
This Month
- Complete the shadow AI audit (network + survey + risk categorisation)
- Establish an approved AI tool list with enterprise agreements and DPAs
- Implement DNS-level blocking for unapproved AI services
- Conduct a DPIA covering AI tool usage
- Review your cyber insurance for AI-related incident coverage
This Quarter
- Deploy CASB or equivalent monitoring
- Implement DLP policies for AI data transfer patterns
- Roll out mandatory AI data handling training
- Evaluate AI-augmented security tools
The Bottom Line
RSAC 2026 made one thing clear: AI adoption is not slowing down. The organisations that thrive will embrace AI whilst managing its risks — not impose blanket bans employees will circumvent.
UK SMEs face the same risks as large enterprises, with fewer resources to manage them. But a 50-person company can implement an AI policy, deploy DNS controls, and train staff in weeks, not the months it takes an enterprise. Your size is an advantage.
Shadow AI is not a future threat. It's happening in your organisation right now. The question is whether you'll discover it through a structured governance programme — or through an ICO investigation after a breach.
Need Help Managing AI Risk?
CloudSwitched helps UK businesses implement secure AI strategies with proper governance, technical controls, and staff training — from shadow AI audits to acceptable use policies.