Back to Blog

How to Secure AI Tools and Large Language Models in Business

How to Secure AI Tools and Large Language Models in Business

Artificial intelligence is transforming the way UK businesses operate — from automating customer service with chatbots to generating marketing copy, summarising legal documents, and writing code. Tools built on large language models (LLMs) such as Microsoft Copilot, ChatGPT, Google Gemini, and a growing ecosystem of specialist AI applications are being adopted at a pace that most IT departments simply cannot match. The productivity gains are real and substantial. But so are the risks.

The challenge facing every business leader in 2025 is not whether to adopt AI — that ship has sailed. The challenge is how to adopt it securely. Without proper governance, AI tools can leak confidential data, introduce compliance violations, create intellectual property disputes, and open entirely new attack surfaces that traditional cyber security measures were never designed to address. The UK Information Commissioner’s Office (ICO) has already signalled that organisations will be held accountable for how AI processes personal data, and the EU AI Act — which has implications for UK businesses trading with European partners — introduces additional regulatory requirements.

This guide provides a comprehensive, practical framework for securing AI tools and LLMs in your business. Whether you are a managing director wondering what your staff are pasting into ChatGPT, an IT manager tasked with rolling out Microsoft Copilot securely, or a compliance officer trying to map AI usage against your data protection obligations, this article will give you the knowledge and structure you need to move forward with confidence.

75%
of UK knowledge workers have used generative AI tools at work in the past 12 months
£3.4bn
estimated UK business spending on AI tools and services in 2025
60%
of AI tool usage in businesses occurs without IT department knowledge or approval
38%
of employees admit to pasting sensitive company data into public AI tools

Understanding the AI Threat Landscape

Before you can secure AI tools, you need to understand exactly where the risks lie. AI and LLMs introduce a category of threats that are fundamentally different from traditional cyber security concerns. These are not just new versions of old problems — they require new thinking, new policies, and new technical controls.

Data Leakage Through AI Prompts

The single biggest risk most UK businesses face with AI tools is uncontrolled data leakage. Every time an employee pastes text into an AI tool, that data is transmitted to a third-party server for processing. Depending on the tool’s terms of service and data retention policies, that information may be stored, logged, or even used to train future models. Consider the scenarios that play out in offices across the country every day: a solicitor pastes a client’s contract into ChatGPT to summarise the key clauses; a finance manager uploads a spreadsheet of employee salaries to an AI tool for analysis; a sales director feeds customer contact details and deal values into an AI assistant to draft follow-up emails.

In each case, sensitive — potentially personally identifiable — data has left the organisation’s control. If that AI tool does not have a data processing agreement in place, if it retains data for model training, or if it stores information on servers outside the UK, the business may be in breach of UK GDPR without even realising it.

Prompt Injection Attacks

Prompt injection is an emerging attack vector specific to LLM-powered applications. In a prompt injection attack, a malicious actor crafts input designed to override the AI system’s instructions, causing it to behave in unintended ways. For example, if your business uses an AI chatbot on your website that has access to internal knowledge bases, an attacker could craft a carefully worded query that tricks the model into revealing information it was instructed to keep confidential — pricing structures, internal policies, or even database contents.

Indirect prompt injection is even more insidious. An attacker could embed hidden instructions in a document, email, or web page. When an AI tool processes that content — say, summarising an email or analysing a document — the hidden instructions are executed. This could cause the AI to exfiltrate data, generate misleading summaries, or take actions the user never intended.

Shadow AI

Shadow AI is the AI equivalent of shadow IT — the use of AI tools by employees without the knowledge, approval, or oversight of the IT department. It is arguably the most pervasive and difficult-to-manage risk because it is driven by well-intentioned employees trying to be more productive. A marketing executive signs up for an AI copywriting tool using their personal email. A developer uses a free AI code assistant that sends code snippets to external servers. An HR manager uses a consumer AI chatbot to draft sensitive employee communications.

Shadow AI is dangerous not because the tools themselves are inherently harmful, but because their use bypasses every security control, data protection policy, and vendor assessment process the organisation has in place. You cannot secure what you cannot see.

The Hidden Cost of Shadow AI

Shadow AI does not just create security risks — it creates legal liability. If an employee uses an unapproved AI tool to process personal data and a breach occurs, the ICO will hold your organisation responsible, not the employee. Under UK GDPR, the organisation is the data controller and must demonstrate that appropriate technical and organisational measures were in place. “We didn’t know our staff were using it” is not a defence — it is an admission of inadequate governance.

Model Hallucinations and Misinformation

LLMs generate text that is statistically plausible, not factually verified. They can and do produce confident, well-structured output that is entirely wrong. In a business context, this creates risks that range from embarrassing to legally consequential. An AI-generated report containing fabricated statistics could mislead board-level decision-making. AI-drafted legal advice based on hallucinated case law could expose the firm to negligence claims. Customer-facing content with incorrect product specifications or compliance claims could trigger regulatory action.

The risk is compounded by the fact that LLM output often looks authoritative. Staff without training in AI limitations may accept it at face value, particularly under time pressure.

Building Your AI Acceptable Use Policy

The foundation of any AI security framework is a clear, enforceable acceptable use policy (AUP) that every employee understands. This is not a technical control — it is an organisational one — but it is the single most important step you can take because it sets the boundaries within which all other controls operate.

What Your AI AUP Should Cover

An effective AI acceptable use policy must address every dimension of how staff interact with AI tools. It should be written in plain English, not legal jargon, and it should include real-world examples that employees can relate to their daily work.

Policy Area What to Define Example Rule
Approved Tools Which AI tools are sanctioned for business use Only Microsoft Copilot and the company ChatGPT Enterprise account may be used for work tasks
Prohibited Data Categories of data that must never be entered into any AI tool Personal data, financial records, client contracts, source code, and board papers must not be pasted into AI tools
Approval Process How new AI tools are requested, evaluated, and approved All new AI tools must be submitted to IT for a security and data protection assessment before use
Output Verification Requirements for human review of AI-generated content All AI-generated content must be reviewed for accuracy by a qualified person before publication or distribution
Data Classification Mapping data sensitivity levels to permitted AI interactions Public data may be used freely; internal data requires an approved tool; confidential and restricted data is prohibited
Intellectual Property Ownership and rights over AI-generated output AI-generated content must not be represented as original human work in client deliverables without disclosure
Incident Reporting How to report accidental data exposure or policy violations If sensitive data is accidentally entered into an AI tool, report it to IT within one hour via the security incident form
Making Your AUP Stick

The most common reason AI acceptable use policies fail is that they are written, circulated once, and then forgotten. To make your policy effective, integrate it into your onboarding process, require annual acknowledgement, and reinforce it through regular security awareness training. Consider creating a one-page “AI Do’s and Don’ts” quick-reference card that staff can keep at their desks. The goal is to make secure AI use the path of least resistance, not an obstacle to productivity.

Data Classification for AI

Your AI acceptable use policy will reference data sensitivity levels, so you need a data classification framework that maps directly to AI usage permissions. Many UK businesses already have some form of data classification, but few have updated it to account for AI-specific risks. The key principle is simple: the more sensitive the data, the more restrictive the AI controls.

Without AI Data Classification
  • Employees guess what is safe to paste into AI tools
  • No clear rules — different people make different decisions
  • Sensitive data regularly ends up in consumer AI platforms
  • Compliance team cannot demonstrate data governance
  • Incident response is reactive with no defined escalation
With AI Data Classification
  • Every data type has a clear AI usage permission level
  • Staff know exactly what they can and cannot do with each tool
  • Technical controls enforce classification boundaries automatically
  • Audit trail demonstrates compliance to regulators and clients
  • Incidents are detected early with defined response procedures

A Practical Four-Tier AI Data Classification

We recommend a straightforward four-tier model that aligns with most existing UK data classification schemes while adding AI-specific permissions at each level.

Tier 1 – Public: Information already in the public domain or intended for publication. Marketing copy, published blog content, publicly available product specifications. This data can be used freely with any approved AI tool.

Tier 2 – Internal: Business information not intended for public release but not highly sensitive. Internal process documents, general meeting notes, non-confidential project plans. This data may only be used with enterprise-licensed AI tools that have a data processing agreement and a commitment not to use input data for model training.

Tier 3 – Confidential: Sensitive business information, personal data, financial records, client information. Employee records, client contracts, financial reports, strategic plans. This data may only be processed by AI tools deployed within your own tenant (such as Microsoft Copilot with your Microsoft 365 data boundary) and must never be entered into any external or consumer AI tool.

Tier 4 – Restricted: The most sensitive categories: special category personal data under UK GDPR, trade secrets, legal privilege, regulatory submissions. This data must not be processed by any AI tool without explicit written approval from the data protection officer and a full data protection impact assessment (DPIA).

Securing Microsoft Copilot in Your Organisation

Microsoft Copilot is the AI tool most UK businesses will encounter first, given the dominance of Microsoft 365 in the enterprise market. Copilot integrates directly into Word, Excel, PowerPoint, Outlook, and Teams, which makes it extraordinarily useful — but also means it has access to everything your Microsoft 365 environment contains. If your permissions and data governance are not in order before you enable Copilot, you are essentially giving an AI assistant the keys to your entire digital estate.

Pre-Deployment Checklist for Microsoft Copilot

Before enabling Copilot for any users, work through these critical preparation steps. Each one addresses a specific risk that Copilot amplifies if left unresolved.

Review SharePoint and OneDrive permissionsCritical
Must do first
Apply sensitivity labels to documents and sitesCritical
Essential for data governance
Audit overshared mailboxes and Teams channelsHigh
Commonly overlooked
Configure Copilot access policies per user groupHigh
Role-based rollout
Enable audit logging for Copilot interactionsHigh
Compliance requirement
Train staff on responsible Copilot usageMedium
Ongoing programme

The most critical step — and the one most organisations underestimate — is the permissions review. Over years of use, Microsoft 365 environments accumulate permission sprawl: SharePoint sites shared with “everyone in the organisation,” Teams channels with overly broad membership, OneDrive folders shared with former employees or external contacts. Copilot inherits the permissions of the user who invokes it, which means if a junior employee has inadvertent access to the board’s SharePoint site, Copilot will happily surface that content in response to their queries.

The Oversharing Problem

In our experience working with UK businesses, the average Microsoft 365 environment has between 15% and 30% of its SharePoint content overshared — accessible to users who should not have access. Before Copilot, this was a latent risk because users had to actively navigate to content to find it. Copilot changes this dynamic entirely: it proactively surfaces relevant content, which means overshared documents are far more likely to be discovered. Fix your permissions before you switch Copilot on, not after.

Copilot Sensitivity Labels

Microsoft Purview sensitivity labels are your primary technical control for governing what Copilot can and cannot access. These labels can be applied to documents, emails, Teams meetings, and SharePoint sites to enforce encryption, access restrictions, and usage limitations that Copilot respects.

At minimum, configure labels for: Public (no restrictions, Copilot can reference freely), Internal (Copilot can reference within the organisation), Confidential (Copilot references restricted to labelled users), and Highly Confidential (excluded from Copilot indexing entirely). This maps directly to the four-tier data classification framework outlined earlier, creating a consistent governance model across both human and AI access.

GDPR Implications of AI Tool Usage

The UK General Data Protection Regulation applies to AI tools in exactly the same way it applies to any other data processing activity. If personal data is involved, GDPR is engaged. The fact that processing happens through an AI model rather than a traditional database does not change your obligations — if anything, it increases them because AI processing is less transparent and harder to audit.

Lawful Basis for AI Processing

Every use of personal data in an AI tool requires a lawful basis under Article 6 of UK GDPR. For most business AI use cases, the likely bases are:

Legitimate interests (Article 6(1)(f)): The most commonly relied-upon basis for business AI use. You must demonstrate that the processing is necessary for a legitimate business purpose, that it does not override the rights and freedoms of the data subjects, and that you have conducted a legitimate interests assessment (LIA). Using Copilot to summarise meeting notes containing employee names, for example, could be justified under legitimate interests if appropriate safeguards are in place.

Consent (Article 6(1)(a)): Rarely practical for employee data (due to the power imbalance) but may be appropriate for customer data in some contexts. Consent must be freely given, specific, informed, and unambiguous — which means you need to tell people exactly how AI will process their data.

Contractual necessity (Article 6(1)(b)): May apply where AI processing is directly necessary to fulfil a contract with the data subject.

DPIA Requirements

Under UK GDPR, you are required to carry out a Data Protection Impact Assessment (DPIA) before any processing that is “likely to result in a high risk” to individuals. The ICO considers AI-based processing of personal data to be inherently high-risk in most cases. If your AI tools process personal data — particularly employee data, customer data, or any special category data — you almost certainly need a DPIA. This is not optional; it is a legal requirement, and failure to conduct one when required is itself a GDPR violation.

Data Subject Rights and AI

Data subjects retain all their rights under UK GDPR regardless of whether their data is processed by a human or an AI system. This includes the right to be informed about how AI processes their data, the right of access to any data held about them (including data derived by AI), the right to erasure, and — critically — the right not to be subject to solely automated decision-making under Article 22. If your AI tools are making or significantly influencing decisions about individuals (recruitment screening, credit decisions, performance assessments), you must ensure meaningful human oversight is in place.

Vendor Risk Assessment for AI Tools

Every AI tool your business uses is a third-party data processor, and UK GDPR requires you to conduct due diligence on your processors. But AI vendors require a more thorough assessment than traditional software providers because of the unique ways they handle data.

Key Questions for AI Vendor Assessment

Assessment Area Key Questions Red Flags
Data Retention How long are prompts, inputs, and outputs retained? Can retention be configured? Indefinite retention, no deletion mechanism, vague privacy policy language
Training Data Use Is input data used to train or fine-tune models? Can this be opted out? Default opt-in to training, no enterprise opt-out, unclear data pipeline
Data Location Where is data processed and stored? Are servers in the UK or EU? US-only processing with no UK adequacy safeguards, no data residency options
Sub-processors Who are the vendor’s sub-processors? How is the chain managed? Undisclosed sub-processors, no notification of changes, long sub-processor chains
Security Certifications Does the vendor hold ISO 27001, SOC 2, or Cyber Essentials Plus? No recognised certifications, refusal to share audit reports, no penetration testing
Incident Response What is the vendor’s breach notification timeline and process? No defined SLA for breach notification, no UK-based support, unclear escalation
Data Processing Agreement Is a GDPR-compliant DPA available? Does it cover AI-specific processing? No DPA offered, generic terms that do not address AI, refusal to negotiate terms

Securing API Keys and AI Integrations

As businesses move beyond consumer AI tools towards API-based integrations — building AI-powered features into their own applications, automating workflows, or connecting AI to internal systems — a new category of security risk emerges around API key management and integration security.

API Key Security Best Practices

AI API keys are credentials that grant access to powerful (and expensive) services. A leaked OpenAI API key, for example, could allow an attacker to run thousands of pounds worth of API calls at your expense — or worse, use your key to process data through the API, creating a data breach attributed to your organisation.

Store API keys in environment variables or a dedicated secrets manager such as Azure Key Vault or AWS Secrets Manager. Never hard-code keys in application source code, configuration files committed to version control, or client-side JavaScript. Rotate keys on a regular schedule — quarterly at minimum — and immediately if there is any suspicion of compromise. Apply the principle of least privilege: create separate API keys for different applications and environments, each with the minimum permissions required. Monitor API usage for anomalies: unexpected spikes in usage, calls from unfamiliar IP addresses, or requests outside business hours should trigger alerts.

Hard-coded in source codeCritical Risk
Most common mistake
Shared in team chat or emailHigh Risk
Frequently occurs
Stored in .env files without .gitignoreHigh Risk
Easily preventable
Single key for all environmentsMedium Risk
Limits blast radius
Never rotated after creationMedium Risk
Set a rotation schedule

Securing AI Integrations in Your Applications

When you integrate AI APIs into your business applications, you create a data pipeline that requires the same security rigour as any other data flow. Implement input validation and sanitisation before sending data to AI APIs — this helps prevent prompt injection attacks from reaching the model. Apply output filtering to AI responses before presenting them to users or feeding them into downstream systems. Log all AI API interactions for audit purposes, including the prompts sent, responses received, and the user or system that initiated the request. Implement rate limiting and cost controls on API usage to prevent runaway costs from bugs, abuse, or compromised credentials.

Monitoring and Auditing AI Usage

You cannot secure what you cannot see, and you cannot demonstrate compliance without evidence. Monitoring AI usage across your organisation is essential for both security and regulatory purposes.

What to Monitor

A comprehensive AI monitoring programme should cover three dimensions: who is using AI tools, what data is being processed, and what outputs are being generated. For sanctioned tools like Microsoft Copilot, this is relatively straightforward — Microsoft provides audit logs through the Purview compliance portal that capture Copilot interactions, the documents referenced, and the users involved. For other enterprise AI tools, check whether the vendor provides admin-level usage analytics and audit logging.

Detecting shadow AI is harder but not impossible. Network monitoring tools can identify traffic to known AI service domains (api.openai.com, bard.google.com, claude.ai, and similar endpoints). Cloud access security brokers (CASBs) can flag when employees access unsanctioned AI applications. Endpoint detection and response (EDR) tools can identify AI-related browser extensions or desktop applications. Regular employee surveys and anonymous reporting channels can also surface shadow AI usage that technical controls miss.

4.2x
increase in shadow AI tool usage among UK businesses since 2023
£12,000
average annual spend on unsanctioned AI tools per 50-person UK business
23%
of UK organisations have formal AI usage monitoring in place

Building an AI Audit Trail

For regulatory compliance — particularly UK GDPR accountability requirements — you need to maintain records of how AI tools process personal data. This audit trail should include: which AI tools are approved and in use, the lawful basis for AI processing of personal data, data protection impact assessments for high-risk AI processing, records of AI vendor due diligence and data processing agreements, logs of AI interactions involving personal or sensitive data, records of any AI-related incidents or data breaches, and evidence of staff training on AI security and acceptable use.

Building an AI Security Framework

Pulling everything together into a cohesive, manageable framework is the final step. An AI security framework is not a single document — it is a structured collection of policies, processes, technical controls, and governance mechanisms that work together to enable secure AI adoption.

The Five Pillars of AI Security

We recommend structuring your framework around five pillars, each of which addresses a distinct aspect of AI risk management.

Pillar 1 – Governance: This is the strategic layer. It includes your AI acceptable use policy, your data classification framework, your AI steering committee or responsible person, and your approach to AI risk appetite. Governance defines the boundaries within which your organisation operates and ensures that AI adoption aligns with your business objectives and risk tolerance.

Pillar 2 – Technical Controls: These are the mechanisms that enforce your governance decisions. They include Microsoft Purview sensitivity labels, network-level blocking of unsanctioned AI services, API key management and secrets vaults, input validation and output filtering for AI integrations, and data loss prevention (DLP) policies that detect and block sensitive data being sent to AI tools.

Pillar 3 – Vendor Management: This covers the due diligence and ongoing oversight of AI tool providers. It includes your vendor assessment questionnaire, data processing agreements, sub-processor monitoring, regular security reviews, and exit strategies for each AI vendor relationship.

Pillar 4 – People and Training: Technology alone cannot secure AI usage. Your staff need to understand the risks, know the policies, and have the skills to use AI tools responsibly. This pillar covers security awareness training with AI-specific modules, role-specific guidance (what a finance team member needs to know differs from what a developer needs), regular policy refreshers, and a culture of reporting rather than concealing AI-related incidents.

Pillar 5 – Monitoring and Response: The ongoing operational layer. This includes continuous monitoring of AI tool usage, shadow AI detection, audit logging and compliance reporting, incident response procedures specific to AI-related breaches, and regular reviews and updates to the framework as the AI landscape evolves.

Ad-Hoc AI Adoption
  • No formal policy — employees decide individually
  • No visibility into what tools are being used or what data is exposed
  • Vendor selection based on popularity rather than security
  • No training — staff learn by trial and error
  • Incidents discovered after damage is done
  • Regulators find gaps during investigations
Governed AI Framework
  • Clear AUP with approved tools and prohibited data categories
  • Full visibility through monitoring, logging, and shadow AI detection
  • Vendor assessment with DPA, security certification, and exit strategy
  • Regular training programme with AI-specific security modules
  • Proactive detection with defined response procedures
  • Demonstrable compliance with full audit trail for regulators

Practical Steps to Get Started Today

If you are reading this and feeling that your organisation has a long way to go, you are not alone. The vast majority of UK businesses are still in the early stages of AI governance. The good news is that you do not need to implement everything at once. Focus on the highest-impact actions first and build from there.

Quick Wins (This Week)

Audit your current AI landscape. Ask department heads what AI tools their teams are using. You will almost certainly be surprised by the answers. Document every tool, who uses it, and what data it processes.

Block consumer AI tools on your corporate network. If your web filtering or firewall supports it, block access to consumer versions of ChatGPT, Gemini, Claude, and similar tools from corporate devices. This is a blunt instrument, but it immediately reduces your shadow AI risk while you develop more nuanced policies.

Draft a one-page AI guidance note. Even before your full AUP is ready, circulate a brief guidance note to all staff reminding them not to paste sensitive data into any AI tool. Keep it simple, specific, and non-threatening.

Medium-Term Actions (This Quarter)

Develop your full AI acceptable use policy. Use the framework outlined in this guide. Involve stakeholders from IT, legal, HR, and the business to ensure buy-in and practicality.

Conduct a Microsoft 365 permissions review. Before rolling out Copilot (or if you already have), audit your SharePoint, OneDrive, and Teams permissions to eliminate oversharing. This is the single most impactful technical action for Copilot security.

Complete vendor assessments for all AI tools in use. Apply the assessment framework from this guide to every AI tool currently in use or under consideration. Terminate or replace any tools that cannot meet your minimum security and data protection requirements.

Longer-Term Programme (This Year)

Implement sensitivity labels. Deploy Microsoft Purview sensitivity labels (or equivalent) across your document estate to provide granular control over what AI tools can access.

Roll out AI security training. Develop and deliver training that covers AI risks, your AUP, data classification, and practical guidance for secure AI usage in each role.

Establish ongoing monitoring. Implement the monitoring programme outlined in this guide, including shadow AI detection, usage analytics, and compliance reporting.

Build your AI audit trail. Ensure you have the documentation and evidence to demonstrate compliance to regulators, auditors, and clients.

The Bottom Line

AI is not going away, and the businesses that will thrive are those that learn to harness its power securely. The risks are real — data leakage, regulatory fines, prompt injection attacks, shadow AI, and reputational damage — but they are all manageable with the right framework in place.

The key message is this: AI security is not primarily a technology problem. It is a governance problem. The technical controls matter, but they only work within the context of clear policies, trained people, and robust processes. Start with governance, layer on technical controls, invest in your people, and build a monitoring programme that gives you visibility and evidence.

For UK businesses navigating these challenges, the stakes are high but the path forward is clear. The organisations that invest in AI governance now will not only avoid the pitfalls — they will build a competitive advantage through faster, more confident AI adoption, stronger client trust, and demonstrable regulatory compliance. Those that ignore AI security will eventually face a breach, a regulatory action, or a client audit that forces them to address it reactively and at far greater cost.

Need to Secure Your AI Tools?

CloudSwitched helps UK businesses deploy AI securely with proper governance. From Microsoft Copilot rollouts and data classification to AI acceptable use policies and shadow AI detection, our team provides the expertise and hands-on support you need to adopt AI with confidence. Get in touch for a free consultation and let us help you build an AI security framework that protects your business without slowing it down.

Tags:Cyber Security
CloudSwitched
CloudSwitched

London-based managed IT services provider offering support, cloud solutions and cybersecurity for SMEs.

From Our Blog

3
  • Cyber Security

How to Prepare Your Business for Cyber Essentials Plus Certification

3 Jun, 2026

Read more
1
  • Cloud Networking

Getting Started with Cisco Meraki: A Guide for Small Businesses

1 Feb, 2026

Read more
2
  • SEO

How to Plan a Website Redesign Without Losing SEO Rankings

2 Aug, 2025

Read more

Enquiry Received!

Thank you for getting in touch. A member of our team will review your enquiry and get back to you within 24 hours.