The default model behind Microsoft 365 Copilot has changed — and almost no UK business has noticed. On 4 May 2026 Microsoft’s Message Center notification MC1269241 took effect and Anthropic Claude became the default model for Copilot in Excel and PowerPoint, with Word following in summer 2026 under Microsoft 365 Roadmap ID 558440. Outlook is in beta with the same direction of travel. For most of the commercial cloud, this happened automatically. For UK and EU/EFTA tenants it did not happen at all — because Microsoft set Anthropic to off by default in these regions.
That asymmetry is the whole story. A British marketing director in London writing a pricing model in Excel on 5 May 2026 received output from a different model than a colleague in Texas using the same Microsoft 365 SKU. The London director did not get the option to choose — their tenant’s Global Administrator has to opt in explicitly first, and when they do, the data processing for any Claude-generated Copilot output moves outside the EU Data Boundary. This is not a small consent box. It is one of the most consequential AI governance decisions a UK Microsoft 365 admin has had to make this year, and it has landed almost silently in the middle of Patch Tuesday week.
What Microsoft actually changed on 4 May 2026
The mechanics are simple, the implications are not. Anthropic has now been formally onboarded as a Microsoft subprocessor. The old admin toggle — which allowed tenant admins to opt in to use Anthropic models under Anthropic’s separate commercial terms and data processing agreement — has been deprecated. In its place sits a new admin centre setting at Copilot → Settings → View all → AI providers operating as Microsoft subprocessors, governed by a new AI Administrator role.
For UK tenants the toggle appears, but it is set to Off. Until a Global Administrator flips it, every Copilot prompt in your tenant continues to run through the established OpenAI-based path. The moment the toggle is set to On, three things change at once: (1) users see UI indicators when a Claude model is in use; (2) the AI Administrator can assign access to specific users or Microsoft Entra ID security groups; and (3) Copilot in Excel and PowerPoint silently switches its default Researcher, Agent Mode, and document-agent model to Claude for those users. Word follows in summer 2026. The legacy toggle is gone — if you previously opted in under the old terms, you must opt in again under the new subprocessor model.
This is not a question of whether Anthropic models are “coming to” your tenant. They have come. The toggle is in your admin centre right now. The question is whether you have made a deliberate decision about it — written down, signed off, and explainable to your auditor, your DPO, and any customer who asks where their data is being processed. Microsoft has made the geography of the decision very explicit: opting in moves your Copilot data out of the EU Data Boundary. A UK business that lets the toggle drift without a documented decision is in a worse position than one that explicitly turns it on with the right paperwork, or one that explicitly leaves it off with a board-approved reason.
The 6-month timeline that brought us here
This was not announced last week. It was telegraphed through a careful rollout that very few UK admins kept up with. The sequence matters because most of the procurement and audit questions you will be asked over the next month will reference one of these dates:
- 7 January 2026: Anthropic is formally onboarded as a Microsoft subprocessor; the legacy opt-in toggle is deprecated.
- 27 April 2026: Cyber Essentials v3.3 comes into force, with explicit expectations around third-party AI processors.
- 4 May 2026: Message Center notification MC1269241 takes effect; Claude becomes the default Copilot model in Excel and PowerPoint, default-off for UK, EU and EFTA tenants.
- Summer 2026: Word follows under Microsoft 365 Roadmap ID 558440; Outlook is in beta with the same direction of travel.
What does “outside the EU Data Boundary” actually mean for a UK SME?
The EU Data Boundary is Microsoft’s contractual commitment that customer data for participating Microsoft Online Services is stored and processed within the European region. The UK has its own complementary commitments under the Microsoft Data Protection Addendum. When Microsoft says Anthropic processing is excluded from the EU Data Boundary, it means that for any Copilot prompt that gets routed through a Claude model, the geographic processing footprint is no longer guaranteed to sit inside Europe.
For a UK SME this has three concrete consequences. The first is the GDPR Article 28 conversation with your customers. If you process personal data in Excel or PowerPoint, and Copilot Excel uses Anthropic to generate output, then the relevant model run happens outside the EU Data Boundary. You need to be able to explain this in your Records of Processing Activities and in your customer-facing privacy notice. The second is the procurement question. Regulated buyers — financial services, healthcare, public sector, defence supply chain — will increasingly ask which AI model produced a deliverable. Documenting model attribution becomes part of your audit trail. The third is the Cyber Essentials v3.3 governance question: A1 (Firewalls), A2 (Secure Configuration), A4 (User Access Control) and the new A6 governance items all expect a documented, defensible position on third-party AI providers. A blank position is not a defensible position.
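If you want to make model attribution part of the audit trail, one lightweight option is a structured log entry per deliverable. The sketch below is ours, not a Microsoft or Cyber Essentials schema — the field names and the `record_entry` helper are illustrative assumptions:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelAttributionRecord:
    """One audit-trail entry recording which AI model produced a deliverable."""
    deliverable: str               # file name or document ID
    app: str                       # "Excel", "PowerPoint", "Word"
    provider: str                  # "OpenAI" or "Anthropic"
    model: str                     # model name as shown in the Copilot UI
    inside_eu_data_boundary: bool  # False for Anthropic-routed output
    produced_by: str               # user principal name
    recorded_at: str               # ISO 8601 UTC timestamp

def record_entry(deliverable: str, app: str, provider: str,
                 model: str, produced_by: str) -> str:
    """Serialise one attribution entry as a JSON line for the audit log."""
    entry = ModelAttributionRecord(
        deliverable=deliverable,
        app=app,
        provider=provider,
        model=model,
        # Anthropic processing sits outside the EU Data Boundary;
        # the established OpenAI-based path stays inside it.
        inside_eu_data_boundary=(provider != "Anthropic"),
        produced_by=produced_by,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(entry))
```

One JSON line per Copilot-assisted deliverable is enough to answer a regulated buyer's "which model produced this?" question months later.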
The UK opt-in rate so far — a low number, rising fast
From conversations with our managed Microsoft 365 customers and from public Microsoft tenant-health telemetry shared at the April 2026 Microsoft Partner roundtables, the picture for the UK SME segment is consistent. As of the second week of May 2026 fewer than one in five UK Microsoft 365 Copilot tenants have made an explicit decision either way. The rest are simply running on the default-off setting without a documented rationale, which is its own kind of risk — quieter than an active misconfiguration, but harder to defend in an audit.
Where most UK tenants are exposed today
The exposure is rarely about the model itself. Anthropic is a reputable provider with strong enterprise commitments and Claude Opus 4.7 and Sonnet 4.6 are well-tested production models. The exposure is governance: not knowing which model produced which output, not having an AI Administrator named, not having the Cyber Essentials and ISO/IEC 27001 documentation aligned with the new subprocessor model, and not having a customer-facing answer when a regulated buyer asks whether their data may have been processed outside the EU Data Boundary.
The realistic cost of getting the governance wrong
The costs of this decision are not exotic. They are the predictable consequences of audit failures, GDPR enquiries, and lost commercial deals. The numbers below are drawn from the realistic upper bound of an SME incident envelope — not worst-case headlines, but the figure your finance director should be comfortable defending if the decision is challenged.
| Business size | Audit & DPO remediation | Lost deal exposure (one regulated customer) | Realistic total envelope |
|---|---|---|---|
| Micro (1–9 staff) | £1,500 – £3,500 | £8,000 – £30,000 | £9,500 – £33,500 |
| Small (10–49 staff) | £3,500 – £9,000 | £20,000 – £90,000 | £23,500 – £99,000 |
| Medium (50–249 staff) | £10,000 – £28,000 | £60,000 – £350,000 | £70,000 – £378,000 |
| Upper SME (250–500 staff) | £25,000 – £70,000 | £150,000 – £1,200,000 | £175,000 – £1,270,000 |
The largest line item in the envelope is rarely the regulator and rarely the legal fees. It is the single regulated customer who quietly stops renewing because your AI governance documentation does not match theirs. That is the quiet cost of letting the toggle drift — you are not breached, you are not fined, you simply find out at renewal that the deal has gone elsewhere.
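The "realistic total envelope" column in the table above is simply the sum of the two cost ranges. A few lines of Python make the arithmetic reproducible; the figures are taken directly from the table:

```python
# Cost ranges from the table above, in GBP:
# business size -> ((audit_low, audit_high), (deal_low, deal_high))
ENVELOPES = {
    "Micro (1-9)":         ((1_500, 3_500),    (8_000, 30_000)),
    "Small (10-49)":       ((3_500, 9_000),    (20_000, 90_000)),
    "Medium (50-249)":     ((10_000, 28_000),  (60_000, 350_000)),
    "Upper SME (250-500)": ((25_000, 70_000),  (150_000, 1_200_000)),
}

def total_envelope(audit: tuple, deal: tuple) -> tuple:
    """Low and high bounds of the combined exposure."""
    return (audit[0] + deal[0], audit[1] + deal[1])

for size, (audit, deal) in ENVELOPES.items():
    low, high = total_envelope(audit, deal)
    print(f"{size}: £{low:,} – £{high:,}")
```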
Default-off drift vs. an actively-decided posture
The choice in front of every UK Microsoft 365 admin this week is not really “OpenAI vs. Anthropic”. It is “drift vs. decision”. A drift posture is the path of least resistance — do nothing, keep the toggle off, hope no one asks. A decision posture is to write down what you have chosen, why you have chosen it, who signed off, and what evidence supports it. Either toggle position can be defensible. Only the decision posture is defensible under questioning.
Drift posture (where most UK tenants are today)
Default-off, no documentation, no decision
- Anthropic toggle left at default-off by silent inertia
- No AI Administrator role assigned in Entra ID
- No Records of Processing Activities entry for Anthropic
- No privacy-notice line on third-party AI subprocessors
- No brand-voice retest planned ahead of the Word default switch
- No customer-facing answer when a regulated buyer asks the model question
- Cyber Essentials v3.3 governance items unaligned
- Indefensible under audit, DPIA, or regulated-buyer due diligence
Decision posture (where Cloudswitched takes you)
Explicit, documented, defensible — on or off
- Toggle position set by a written decision, signed off by a director
- AI Administrator role assigned, scoped, and time-bound
- RoPA, DPIA and privacy notice updated for Anthropic as a subprocessor
- Entra ID security group governs which staff may use Anthropic models
- Brand-voice prompts retested ahead of the summer Word default switch
- Customer-facing AI-governance one-pager ready for procurement questions
- Cyber Essentials v3.3 evidence pack updated for AI subprocessor controls
- Defensible at every level: ICO, customer DPO, Cyber Essentials assessor
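The Entra ID scoping item above can be prepared in advance. The sketch below builds the request body for creating a security-only group via Microsoft Graph (`POST https://graph.microsoft.com/v1.0/groups`); the group name is illustrative, the call would need an authenticated Graph client, and attaching the group to the AI provider is still done in the Microsoft 365 admin centre:

```python
def build_security_group_payload(display_name: str, mail_nickname: str) -> dict:
    """Request body for POST https://graph.microsoft.com/v1.0/groups
    creating a security-only group to scope Anthropic model access."""
    return {
        "displayName": display_name,
        "mailNickname": mail_nickname,
        "description": "Staff permitted to use Anthropic models in M365 Copilot",
        "securityEnabled": True,   # a security group...
        "mailEnabled": False,      # ...not a Microsoft 365 group with a mailbox
        "groupTypes": [],          # empty list => plain security group
    }

payload = build_security_group_payload(
    "Copilot-Anthropic-Users",        # illustrative name
    "copilot-anthropic-users",
)
```

Creating the group by script rather than by hand gives you a repeatable, reviewable artefact for the evidence pack.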
The 10-step Cloudswitched opt-in decision framework
The good news is that this is one of the most contained governance projects a UK SME can run this year. The whole sequence fits inside a single working week for most tenants, and it does not require any third-party tooling. The 10-step framework below is the exact sequence we run on a managed Microsoft 365 tenant.
1. Identify the current toggle state in the admin centre.
2. Assign the AI Administrator role.
3. Run the DPIA.
4. Add the RoPA entry for Anthropic as a subprocessor.
5. Update the customer-facing privacy notice.
6. Scope an Entra ID security group for permitted users.
7. Capture the board-approved decision in writing.
8. Set the toggle to match the decision.
9. Retest brand-voice prompts ahead of the summer Word default switch.
10. Ship a customer-facing AI-governance one-pager.
Your tenant readiness score — honest baseline
If you are reading this and have not yet run the 10-step sequence, your current tenant readiness for the Anthropic-default era is somewhere between 20 and 40 out of 100. That is not unusual and it is not a crisis. It is, however, the gap between today’s typical UK SME posture and the posture that will be expected by Cyber Essentials assessors, ICO enquiries, and regulated-buyer procurement teams within the next twelve months.
The v3.3 control framework that comes into force on 27 April 2026 explicitly expects defensible documentation around third-party data processors and AI tooling. The Anthropic-as-subprocessor decision is a textbook v3.3 evidence item. If you choose to opt in, the evidence pack needs a DPIA, an RoPA entry, a privacy-notice update, and the AI Administrator role assignment. If you choose to stay opted out, the evidence pack still needs a recorded decision and a sign-off. Either path produces defensible evidence; doing nothing produces none.
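The two evidence packs differ only in their contents, not in their necessity. A minimal sketch of that difference — the item names are ours, not a Cyber Essentials-mandated schema:

```python
# Evidence items per path, as described above. Names are illustrative.
REQUIRED_EVIDENCE = {
    "opt_in": {
        "dpia",
        "ropa_entry",
        "privacy_notice_update",
        "ai_admin_role_assignment",
    },
    "opt_out": {
        "recorded_decision",
        "sign_off",
    },
}

def missing_evidence(path: str, collected: set) -> set:
    """Items still missing for the chosen path ('opt_in' or 'opt_out')."""
    return REQUIRED_EVIDENCE[path] - collected

# A tenant that has only run the DPIA still owes three opt-in items:
missing_evidence("opt_in", {"dpia"})
```

The "doing nothing" posture corresponds to collecting no items on either path — every check fails.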
The two-path decision tree, in plain English
Most UK SMEs end up on one of two well-defined paths. The first is “opt in, govern explicitly”: enable Anthropic, scope it to a named security group, document the EU Data Boundary exclusion in the privacy notice, retest the brand-voice prompts, and ship a one-page customer-facing AI governance summary. The second is “opt out, document the choice”: leave the toggle off, record a board-approved decision that explains why, capture the date and the sign-off, and revisit the decision quarterly. There is no third path that is both safe and undocumented. The cost of either choice is small; the cost of avoiding the choice is not.
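Either path ends in the same artefact: a short, dated, signed decision record with a review cadence. A minimal sketch, with illustrative field names and an assumed quarterly review interval:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIProviderDecision:
    """A board-approved record of the Anthropic toggle decision."""
    toggle_on: bool            # True = opt in, False = opt out
    rationale: str             # why this position was chosen
    signed_off_by: str         # director who approved it
    decided_on: date
    review_every_days: int = 90   # revisit roughly quarterly

    def next_review(self) -> date:
        return self.decided_on + timedelta(days=self.review_every_days)

# An example opted-out record (all values illustrative):
d = AIProviderDecision(
    toggle_on=False,
    rationale="No regulated-buyer demand yet; EU Data Boundary takes priority",
    signed_off_by="J. Example (Director)",
    decided_on=date(2026, 5, 12),
)
```

Whatever form the record takes — a dataclass, a signed PDF, a board minute — the fields above are the ones an assessor will ask for.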
The at-a-glance summary for UK Microsoft 365 admins
| Question | The 12 May 2026 answer for a UK tenant |
|---|---|
| Is Anthropic now a Microsoft subprocessor? | Yes — effective 7 January 2026. |
| Is Anthropic on by default for my UK tenant? | No — default is OFF for all UK, EU and EFTA tenants. |
| Where is Anthropic processing done? | Outside the EU Data Boundary; specific geographies vary. |
| Does the Microsoft Customer Copyright Commitment (CCC) apply to Anthropic output? | Yes — within products covered by the CCC, including M365 Copilot and Copilot Studio. |
| Who has authority to opt my tenant in? | A Microsoft 365 Global Administrator. New AI Administrator role manages day-to-day controls. |
| What is the user-visible signal that Claude is in use? | UI indicators in Copilot (web, desktop, mobile) show when Claude is selected. Researcher, Agent Mode, and the Excel, PowerPoint and Word document agents offer Claude selection. |
| Can I restrict who in my tenant can use Anthropic? | Yes — via Entra ID security groups, applied at the provider level. |
| Does Anthropic work in GCC, GCC High, DoD, or sovereign clouds? | No — not currently available; no toggle appears. |
| What about my old “Anthropic legacy toggle” opt-in? | Deprecated. You must opt in again under the new subprocessor model. |
| When does Word switch its default Copilot model? | Summer 2026, under Microsoft 365 Roadmap ID 558440. |
| What is the Cyber Essentials v3.3 expectation? | A documented, defensible AI subprocessor decision in your evidence pack. |
Where this fits in the wider 2026 governance picture
The 12 May 2026 Anthropic-default story is not standalone. It sits inside a wider 2026 governance arc that any UK SME running Microsoft 365 needs to be tracking: the 17 April Microsoft outage wave that put resilience back on every board agenda; the 27 April Cyber Essentials v3.3 / Danzell go-live; the 23 April record-high UK business-leader concern about AI cyber risk; the 30 April Cyber Security Breaches Survey 2025/2026; the 1 May Cyber Resilience Pledge and £90m SME fund; the 11 May WordPress mass-takeover wave; and the 19 June 2026 Secure Boot certificate cliff. Today’s decision — opt in, opt out, or document the choice — folds neatly into the same evidence pack as all of those.
Want a Cloudswitched-managed opt-in decision for your tenant?
Our Microsoft 365 service runs the full Anthropic-as-subprocessor 10-step framework on your tenant: identify state, assign the AI Administrator role, run the DPIA, update RoPA and privacy notice, scope an Entra ID security group, capture the board-approved decision, set the toggle, retest brand-voice prompts, and ship a customer-facing AI-governance one-pager. Most UK SME tenants finish inside one working week, and the evidence pack is ready for your next Cyber Essentials assessment.
Talk to us about managed Microsoft 365
Make the decision — don’t let it drift
Whether you choose to opt in and scope Anthropic to specific teams, or opt out and document the rationale, Cloudswitched can run the 10-step framework on your Microsoft 365 tenant inside a single working week. You finish with an AI Administrator named, an Entra ID security group scoped, a DPIA on file, an updated RoPA, a privacy notice that reflects reality, brand-voice prompts retested, and a one-page customer-facing AI-governance summary ready for procurement. The decision is now overdue. The paperwork is genuinely small. The defensibility is genuinely large.
Book a Microsoft 365 governance review


