Azure virtual machines are the workhorses of cloud infrastructure for thousands of UK businesses. Whether you are running line-of-business applications, database servers, development environments, or web hosting platforms, the performance of your Azure VMs directly impacts user experience, operational efficiency, and ultimately your bottom line. Yet many organisations deploy VMs with default configurations and never revisit them — leaving significant performance gains and cost savings unrealised.
Optimising Azure VM performance is not a one-time task but an ongoing discipline. Workload patterns change, Azure introduces new VM families and features, application requirements evolve, and cost optimisation pressures intensify. UK businesses that treat VM optimisation as a regular operational practice consistently achieve better performance at lower cost than those who set and forget their cloud infrastructure.
This guide covers the key strategies for optimising Azure VM performance across compute, storage, networking, and cost dimensions — practical techniques that your IT team or managed cloud provider can implement to deliver measurable improvements.
Understanding Azure VM Families and Sizing
The foundation of VM performance optimisation is selecting the right VM family and size for your workload. Azure offers dozens of VM series, each designed for specific workload characteristics. Choosing the wrong series is like buying a sports car to deliver heavy freight — expensive, uncomfortable, and fundamentally the wrong tool for the job.
The most commonly used series for UK business workloads are the B-series (burstable, ideal for workloads with variable CPU usage like development servers and small web applications), D-series (general purpose, balanced CPU-to-memory ratio suitable for most business applications), E-series (memory optimised, ideal for database servers and in-memory caching), and F-series (compute optimised, suitable for batch processing and high-performance computing tasks).
Within each series, sizes range from small single-vCPU instances to massive configurations with hundreds of vCPUs and terabytes of memory. The key is matching the size to your actual workload requirements rather than your perceived requirements. Many UK businesses deploy D4s_v5 instances (4 vCPUs, 16 GB RAM) for workloads that would run perfectly well on a B2ms (2 vCPUs, 8 GB RAM) — doubling their compute costs unnecessarily.
Choosing the Right VM Generation
Azure regularly introduces new VM generations, and staying current matters more than many organisations realise. Each new generation typically delivers 15 to 30 per cent better price-performance than its predecessor, thanks to newer processor architectures and improved Azure hypervisor efficiency. For instance, the v5 generation of D-series VMs uses Intel Ice Lake or AMD EPYC Milan processors, offering significantly better single-threaded performance and throughput compared to v4 instances. UK businesses that deployed VMs two or three years ago and have not revisited their VM generation may be paying substantially more for less performance than current options provide.
Migration between VM generations within the same series is straightforward and typically requires only a brief reboot during a maintenance window. The process involves stopping the VM, changing the size to the equivalent in the newer generation (for example, from D4s_v4 to D4s_v5), and restarting. For most UK business applications, this can be completed in under five minutes and can be scheduled outside business hours to avoid any user impact. We recommend reviewing VM generations at least annually and including generation updates as part of your regular cloud optimisation reviews.
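Because generation upgrades within a series keep the same naming scheme, the target size name can be derived mechanically. The sketch below is illustrative only — the `newer_generation` helper is not an Azure API, and the actual resize is performed in the portal or with the Azure CLI's `az vm resize` command; this simply computes the equivalent newer-generation size name.

```python
import re

def newer_generation(size: str, target_version: int = 5) -> str:
    """Return the equivalent size name in a newer generation, e.g.
    Standard_D4s_v4 -> Standard_D4s_v5. Assumes the series keeps the
    same naming scheme across generations (true for the D/E/F-series
    sizes discussed above); raises on names it cannot parse."""
    match = re.fullmatch(r"(Standard_[A-Za-z]\d+[a-z]*_v)(\d+)", size)
    if match is None:
        raise ValueError(f"unrecognised size name: {size}")
    prefix, version = match.group(1), int(match.group(2))
    if version >= target_version:
        return size  # already on the target generation or newer
    return f"{prefix}{target_version}"
```

Always confirm that the computed size actually exists in your region before resizing — not every size is offered in every generation or location.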
Confidential Computing and Specialised VM Requirements
Some UK businesses — particularly those in financial services, healthcare, and government — have workloads that demand additional security guarantees beyond standard cloud infrastructure. Azure's DC-series confidential computing VMs use hardware-based trusted execution environments to protect data whilst it is being processed, not just at rest or in transit. For organisations handling sensitive personal data, payment card information, or classified government workloads, these specialised VMs provide an additional layer of assurance that can satisfy regulators and auditors who may otherwise be reluctant to approve cloud deployment.
| VM Series | Optimised For | Typical UK Business Use Cases | vCPU:RAM Ratio | Relative Cost |
|---|---|---|---|---|
| B-series | Burstable workloads | Dev/test, small web apps, micro-services | 1:4 | £ |
| D-series v5 | General purpose | Business apps, mid-tier databases, web servers | 1:4 | ££ |
| E-series v5 | Memory intensive | SQL Server, SAP, in-memory analytics | 1:8 | £££ |
| F-series v2 | Compute intensive | Batch processing, gaming servers, modelling | 1:2 | ££ |
| L-series v2 | Storage intensive | Large databases, data warehousing, log analytics | 1:8 | £££ |
Right-Sizing Your VMs
Right-sizing is the process of analysing actual resource utilisation and adjusting VM configurations to match real workload demands. It is the single most impactful optimisation you can perform, often delivering 20 to 40 per cent cost savings with no negative impact on performance.
Azure provides built-in tools for right-sizing analysis. Azure Advisor automatically reviews your VM utilisation metrics and recommends resizing when it detects consistently underutilised resources. Azure Monitor collects detailed performance metrics — CPU utilisation, memory usage, disk IOPS, and network throughput — that allow you to understand actual workload demands over time.
When analysing utilisation, look at patterns over at least 30 days rather than snapshots. A VM that averages 15% CPU utilisation might spike to 80% during month-end processing, and right-sizing based only on the average would create performance problems when it matters most. Conversely, a VM that consistently runs below 20% CPU and 40% memory utilisation is almost certainly oversized and should be downsized.
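The average-versus-peak distinction can be made concrete with a few lines of analysis. This sketch summarises a month of CPU samples (the sample values are invented for illustration) and shows why the month-end spike, not the quiet-day average, must drive the sizing decision:

```python
def utilisation_profile(cpu_samples):
    """Summarise a period of CPU samples (percent, any interval).
    The average hides spikes, so also report the 95th percentile
    and the peak, which should drive sizing decisions."""
    ordered = sorted(cpu_samples)
    n = len(ordered)
    # Nearest-rank approximation of the 95th percentile.
    p95 = ordered[min(n - 1, int(0.95 * n))]
    return {
        "average": sum(ordered) / n,
        "p95": p95,
        "peak": ordered[-1],
    }

# A VM that looks idle on average but spikes during month-end processing:
samples = [15] * 28 + [80, 85]   # 28 quiet days, 2 busy days
profile = utilisation_profile(samples)
```

Here the average is under 20%, which in isolation suggests aggressive downsizing, while the peak of 85% shows the VM genuinely needs most of its capacity two days a month.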
Practical Right-Sizing for UK Organisations
In practice, right-sizing requires more than just technical analysis — it requires organisational buy-in from application owners who may be reluctant to reduce the resources allocated to their systems. A common concern is that downsizing will cause performance problems during peak periods or unexpected demand spikes. Address this by presenting data clearly: show the actual utilisation graphs alongside the proposed new size, demonstrate that the new size still provides adequate headroom, and assure stakeholders that the change can be reversed quickly if problems arise.
A phased approach works well for most UK businesses. Start with the easiest wins — development and test environments, where the risk of impact is minimal and the savings are often the most dramatic. Many organisations find that their non-production environments are two to four times larger than necessary, simply because they were provisioned to match production specifications that were themselves oversized. Once the team has built confidence through successful non-production right-sizing, move on to less critical production workloads, then finally to mission-critical systems.
It is also worth considering the B-series burstable VMs for workloads that spend most of their time idle but need occasional bursts of CPU performance. These VMs accumulate CPU credits during idle periods and spend them during busy periods, making them significantly cheaper than standard VMs for bursty workloads such as development servers, low-traffic web applications, and monitoring tools. A B2ms instance costs roughly half as much as a comparable D2s_v5 whilst delivering equivalent burst performance — a meaningful saving when multiplied across dozens of VMs.
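The credit-banking behaviour of B-series VMs can be modelled simply. The following is a toy model, not Azure's exact credit accounting — the baseline percentage, credit cap, and interval granularity are illustrative assumptions — but it shows why long idle stretches make short bursts affordable:

```python
def simulate_credits(usage, baseline=40.0, start=0.0, cap=1000.0):
    """Toy model of B-series CPU credit banking: each interval the VM
    earns (baseline - usage) credits when running below its baseline
    and spends (usage - baseline) when above it, up to a credit cap.
    All figures are illustrative, not Azure's published accounting."""
    credits = start
    for u in usage:
        credits = min(cap, credits + (baseline - u))
        if credits < 0:
            return credits, False  # credits exhausted: CPU would throttle
    return credits, True

# Eight quiet hours banking credits, then a two-hour burst at 90% CPU:
final, sustained = simulate_credits([10] * 8 + [90] * 2, baseline=40.0)
```

In this run the quiet hours bank enough credit to cover the burst with headroom to spare; a workload that bursts for most of the day would exhaust its credits and should use a standard D-series size instead.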
Follow this systematic approach:
- Enable Azure Monitor and diagnostic settings on all VMs, then allow at least 30 days of data collection.
- Review CPU, memory, disk, and network utilisation using Azure Monitor workbooks or third-party tools.
- Identify VMs with peak utilisation consistently below 40% of provisioned capacity.
- Calculate the smallest VM size that accommodates the peak workload with a 20% headroom buffer.
- Schedule the resize during a maintenance window, as most resizes require a brief restart.
- Verify performance after the change and adjust if needed.
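The "smallest size with 20% headroom" step can be expressed as a simple selection over a size catalogue. The catalogue below is a hypothetical excerpt with invented relative costs — real sizing should use current Azure pricing and the full size list for your region:

```python
# Hypothetical catalogue excerpt: (name, vCPUs, RAM GiB, relative cost).
# Costs are invented placeholders, not Azure prices.
SIZES = [
    ("B2ms", 2, 8, 1.0),
    ("D2s_v5", 2, 8, 1.4),
    ("D4s_v5", 4, 16, 2.8),
    ("D8s_v5", 8, 32, 5.6),
]

def right_size(peak_vcpus_used, peak_ram_used_gib, headroom=0.20):
    """Pick the cheapest catalogued size whose capacity covers the
    observed peak plus a headroom buffer (20% by default, matching
    the process described above)."""
    need_cpu = peak_vcpus_used * (1 + headroom)
    need_ram = peak_ram_used_gib * (1 + headroom)
    candidates = [s for s in SIZES if s[1] >= need_cpu and s[2] >= need_ram]
    if not candidates:
        raise ValueError("no catalogued size fits the workload")
    return min(candidates, key=lambda s: s[3])[0]
```

For example, a workload peaking at 3 vCPUs and 12 GiB of RAM needs 3.6 vCPUs and 14.4 GiB with headroom, which this catalogue satisfies with a D4s_v5 rather than a larger size.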
Storage Performance Optimisation
Storage is frequently the performance bottleneck for Azure VMs, yet it is the most commonly overlooked optimisation area. The difference between Standard HDD, Standard SSD, Premium SSD, and Ultra Disk storage tiers is dramatic in terms of both IOPS (input/output operations per second) and latency — and selecting the right tier for each workload can transform application performance.
For database servers — particularly SQL Server instances, which are common in UK business environments — Premium SSD or Ultra Disk storage is essential. The latency difference between Standard SSD (typically 5-10ms) and Premium SSD (typically 1-2ms) can translate to significant improvements in query response times and transaction throughput. For application servers that primarily perform sequential reads, Standard SSD is usually sufficient and considerably cheaper.
Consider separating your OS disk, application disks, and data disks onto different storage tiers matched to their performance requirements. For example, a SQL Server VM might use a Standard SSD for the OS, a Premium SSD for the database data files, and an Ultra Disk for the transaction log files where write latency is most critical.
Disk Caching and Host Caching Strategies
Azure provides host-level caching options for managed disks that can substantially improve read performance without any changes to your application. Read-only caching stores frequently accessed data in the VM host's local SSD, reducing the need to fetch data from the remote storage back-end. For read-heavy workloads such as web servers loading static content or database servers performing frequent lookups, enabling read-only caching on data disks can improve read latency by 50 per cent or more.
However, caching must be configured thoughtfully. Azure offers three host caching settings per disk: None, ReadOnly, and ReadWrite. Write-heavy disks, such as database transaction logs, should be set to None, because ReadWrite (write-back) caching introduces a small risk of data loss or inconsistency if the host fails before cached writes reach durable storage. A common mistake is enabling ReadWrite caching on all disks without considering the workload characteristics — this can actually degrade write performance as well as introducing that durability risk. The correct approach is to analyse your I/O patterns first: use Azure Monitor to determine the read-to-write ratio for each disk, then apply ReadOnly caching only where the read ratio is high and data durability requirements permit it.
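That read-ratio analysis reduces to a small policy function. The 80% threshold below is an illustrative policy choice, not an Azure default — tune it to your own durability and performance requirements:

```python
def recommend_host_cache(reads, writes, read_heavy_threshold=0.8):
    """Suggest an Azure host-cache setting from observed I/O counts.
    Azure's settings are the strings "None", "ReadOnly", and
    "ReadWrite"; the 80% read-ratio threshold is an illustrative
    policy, not an Azure recommendation."""
    total = reads + writes
    if total == 0:
        return "None"
    read_ratio = reads / total
    if read_ratio >= read_heavy_threshold:
        return "ReadOnly"   # lookup-heavy data disks, static web content
    return "None"           # write-heavy disks, e.g. transaction logs
```

A disk serving mostly reads (say 9,000 reads to 500 writes in the sample window) gets ReadOnly caching; a log disk with the ratio inverted stays uncached.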
Managed Disk Bursting and Performance Tiers
Azure Premium SSDs support disk bursting, which allows your disks to temporarily exceed their baseline IOPS and throughput limits. This is invaluable for workloads that experience periodic spikes in storage demand — for example, an accounting application that runs intensive reports at month-end, or an e-commerce platform that handles surges during promotional events. On-demand bursting (available on Premium SSD disks larger than 512 GiB) can provide up to 30,000 IOPS temporarily, even if the baseline for that disk size is much lower. Understanding and leveraging disk bursting means you can provision smaller, cheaper disks that still handle peak loads comfortably, rather than over-provisioning to accommodate occasional demand spikes.
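Choosing a disk for a bursty workload means checking baseline against steady-state load and burst limit against peaks. The tier figures below are indicative of Premium SSD characteristics at the time of writing — verify current baselines and burst limits in the Azure documentation before provisioning:

```python
# Indicative Premium SSD tiers: (name, size GiB, baseline IOPS, burst IOPS).
# Verify against current Azure documentation; limits change over time.
TIERS = [
    ("P10", 128, 500, 3500),
    ("P20", 512, 2300, 3500),
    ("P30", 1024, 5000, 30000),
]

def smallest_disk_for(steady_iops, peak_iops):
    """Pick the smallest tier whose baseline covers steady-state load
    and whose burst limit covers short-lived peaks. Returns None when
    no listed tier fits."""
    for name, size_gib, baseline, burst in TIERS:
        if baseline >= steady_iops and burst >= peak_iops:
            return name
    return None
```

A workload averaging 400 IOPS with month-end peaks of 3,000 fits the smallest tier thanks to bursting, where sizing for the peak baseline alone would have forced a disk several tiers larger.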
Network Performance Tuning
Network performance affects every VM that communicates with other services — which is virtually all of them. Azure provides several features for optimising network throughput and latency that many UK businesses do not take advantage of.
Accelerated Networking bypasses the Azure host's software-defined network stack and uses SR-IOV (single root I/O virtualisation) to provide near-bare-metal network performance to your VM. It reduces latency, reduces jitter, and increases throughput — and is available at no additional cost on supported VM sizes. If your VMs support it, there is almost no reason not to enable it.
Proximity Placement Groups ensure that VMs that communicate frequently are placed as close together as possible within the Azure data centre, minimising network latency between them. This is particularly important for multi-tier applications where a web server communicates with an application server which in turn communicates with a database server. Placing all three in the same proximity placement group can reduce inter-tier latency from several milliseconds to sub-millisecond levels.
For UK businesses, choosing the correct Azure region is also important. The UK South (London) and UK West (Cardiff) regions provide the lowest latency for users and systems based in the United Kingdom. Deploying VMs in these regions rather than in Western Europe or North Europe can reduce network round-trip times by 10 to 30 milliseconds — a meaningful improvement for latency-sensitive applications.
Network Security and Performance Trade-offs
Network Security Groups (NSGs) and Azure Firewall add essential security layers to your VM networking, but they can also introduce latency if not configured properly. Each NSG rule is evaluated against network traffic, and overly complex rule sets with hundreds of rules can add measurable processing overhead. Best practice is to keep NSG rules as concise as possible, use Application Security Groups to simplify rule management, and avoid unnecessary duplication of rules across subnet and NIC-level NSGs.
Azure Firewall Premium, whilst offering advanced features such as TLS inspection and intrusion detection, adds additional processing to every packet that traverses it. For latency-sensitive internal traffic between VMs in the same virtual network, consider whether the traffic truly needs to pass through the firewall or whether NSG rules provide sufficient security at lower latency. Many UK businesses route all traffic through a centralised firewall out of habit rather than necessity, adding several milliseconds of latency to every internal request.
ExpressRoute for Hybrid Connectivity
For UK businesses running hybrid cloud architectures — with some workloads on-premises and others in Azure — the network connection between your data centre and Azure can become a critical bottleneck. Azure ExpressRoute provides a dedicated, private connection between your on-premises infrastructure and Azure, bypassing the public internet entirely. This delivers more predictable latency, higher bandwidth, and improved security compared to VPN connections. For organisations with latency-sensitive workloads that span on-premises and cloud environments, such as database replication or real-time application synchronisation, ExpressRoute can be the difference between acceptable and unacceptable performance. Several UK telecommunications providers offer ExpressRoute connectivity from their London data centres, making it readily accessible for most UK businesses.
Cost Optimisation Without Sacrificing Performance
Performance optimisation and cost optimisation are not opposing forces — in fact, they often go hand in hand. A right-sized VM that matches its workload runs better and costs less than an oversized VM that wastes resources. Beyond right-sizing, several Azure features can significantly reduce costs.
Azure Reserved Instances offer discounts of up to 40% (one-year term) or 60% (three-year term) compared to pay-as-you-go pricing in exchange for committing to a specific VM size and region. For VMs that run continuously — production application servers, database servers, domain controllers — reserved instances are almost always the right choice. Azure Hybrid Benefit allows UK businesses with existing Windows Server or SQL Server licences (acquired through Volume Licensing with Software Assurance) to apply those licences to Azure VMs; combined with reserved instances, this can reduce Windows VM costs by up to 85% compared to pay-as-you-go pricing.
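The commitment arithmetic is worth making explicit. This sketch applies the discount levels quoted above to an illustrative hourly rate — the £0.20/hour figure is invented for the example and the discounts are "up to" ceilings, so substitute your actual quoted prices:

```python
def commitment_cost(payg_hourly, hours_per_month=730,
                    ri_one_year=0.40, ri_three_year=0.60):
    """Compare monthly pay-as-you-go cost against reserved-instance
    pricing, using the up-to-40%/up-to-60% discount levels quoted
    above as illustrative inputs."""
    payg = payg_hourly * hours_per_month
    return {
        "payg": payg,
        "ri_1y": payg * (1 - ri_one_year),
        "ri_3y": payg * (1 - ri_three_year),
    }

# A continuously running VM at an illustrative £0.20/hour:
costs = commitment_cost(0.20)
```

At that rate, a single always-on VM drops from roughly £146 to £58 per month on a three-year reservation — a saving that compounds quickly across a fleet of production servers.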
For non-production workloads — development environments, testing servers, batch processing — consider Azure Spot VMs, which offer discounts of up to 90% by using spare Azure capacity. Spot VMs can be evicted when Azure needs the capacity back, so they are unsuitable for production workloads, but for interruptible tasks they represent extraordinary value.
Building a Cost Governance Framework
Cost optimisation is most effective when it is embedded in organisational governance processes rather than treated as an occasional clean-up exercise. Establish Azure budgets for each department or project, configure alerts at 50%, 75%, and 90% of budget thresholds, and require cost impact assessments before provisioning new VMs. Azure Cost Management provides the tools to implement this governance, but the real challenge is cultural — ensuring that every team member who provisions cloud resources understands the cost implications of their choices.
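The threshold-alert logic described above is simple enough to state precisely. This is a sketch of the evaluation rule, not Azure Cost Management's implementation — in practice you would configure these thresholds as budget alerts in the portal:

```python
def budget_alerts(spend, budget, thresholds=(0.50, 0.75, 0.90)):
    """Return the alert thresholds that current spend has crossed,
    mirroring the 50%/75%/90% alert levels described above."""
    return [t for t in thresholds if spend >= budget * t]
```

A department that has spent £8,000 of a £10,000 monthly budget has crossed the 50% and 75% marks but not yet 90%, so two alerts would have fired.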
For UK businesses with multiple teams or departments consuming Azure resources, implementing a tagging strategy is essential for cost attribution. Require tags for cost centre, environment (production, staging, development), application name, and responsible owner on every VM. These tags enable granular cost reporting that allows you to identify exactly where spend is occurring and hold the appropriate teams accountable. Without proper tagging, cloud costs become an opaque shared expense that nobody feels responsible for managing — and costs inevitably creep upwards.
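Tag compliance can be checked automatically before (or after) deployment. The required tag names below follow the policy described in this section and are examples, not Azure-mandated keys — in production this check would typically be enforced with Azure Policy rather than ad-hoc scripts:

```python
# Example governance policy: tag keys required on every VM.
# These names are illustrative, not Azure-mandated.
REQUIRED_TAGS = {"cost-centre", "environment", "application", "owner"}

def missing_tags(vm_tags):
    """List required governance tags absent from a VM's tag
    dictionary, so non-compliant VMs can be flagged for follow-up."""
    return sorted(REQUIRED_TAGS - set(vm_tags))
```

A VM tagged only with its environment and owner would be flagged as missing its application and cost-centre tags, making the gap visible before costs become unattributable.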
Auto-Scaling and Scheduled Scaling
Not every workload needs to run at full capacity around the clock. Azure Virtual Machine Scale Sets allow you to automatically adjust the number of VM instances based on demand, scaling out during peak periods and scaling in during quiet times. For web applications, this means you can handle traffic surges without over-provisioning for average demand. For UK businesses with predictable usage patterns — such as business applications that are heavily used during office hours but idle overnight and at weekends — scheduled scaling rules can automatically reduce capacity outside business hours, delivering substantial savings. A typical UK business application that scales from four instances during business hours to one instance overnight and at weekends can reduce compute costs by 50 to 60 per cent compared to running four instances continuously.
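The 50 to 60 per cent figure follows directly from instance-hour arithmetic. This sketch compares an always-on fleet with the business-hours schedule described above, assuming a 40-hour working week:

```python
def weekly_instance_hours(peak_instances, off_peak_instances,
                          business_hours_per_week=40):
    """Compare always-on capacity with a business-hours schedule
    (e.g. four instances 9-5 on weekdays, one instance otherwise).
    Returns (scheduled hours, always-on hours, fractional saving)."""
    total_hours = 24 * 7
    scheduled = (peak_instances * business_hours_per_week
                 + off_peak_instances * (total_hours - business_hours_per_week))
    always_on = peak_instances * total_hours
    return scheduled, always_on, 1 - scheduled / always_on

# Four instances during the working week, one overnight and at weekends:
scheduled, always_on, saving = weekly_instance_hours(4, 1)
```

Scheduled scaling here consumes 288 instance-hours per week against 672 for the always-on fleet — a saving of about 57%, squarely within the 50 to 60 per cent range quoted above.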
Quick Performance Wins
- Enable Accelerated Networking on all supported VMs
- Switch database VMs to Premium SSD storage
- Right-size based on 30-day utilisation data
- Use Proximity Placement Groups for multi-tier apps
- Deploy to UK South or UK West regions
- Enable VM diagnostics and Azure Monitor
- Configure auto-shutdown for dev/test VMs
- Review and apply Azure Advisor recommendations
Common Performance Mistakes
- Using Standard HDD for database workloads
- Deploying all VMs at the same oversized spec
- Ignoring Azure Advisor recommendations
- Running dev/test VMs 24/7 unnecessarily
- Not using Reserved Instances for stable workloads
- Placing communicating VMs in different regions
- Never reviewing utilisation after initial deployment
- Using general-purpose VMs for memory-intensive workloads
Monitoring and Continuous Optimisation
Optimisation is not a project with a finish date — it is an ongoing operational practice. Establish regular review cadences: weekly checks of Azure Advisor recommendations, monthly utilisation reviews to identify right-sizing opportunities, and quarterly architectural reviews to assess whether workloads would benefit from migration to different Azure services (for example, moving a SQL Server VM to Azure SQL Database, or containerising a web application into Azure Container Apps).
Azure Monitor, combined with Application Insights for web applications, provides the telemetry you need to understand performance trends, identify bottlenecks, and validate the impact of optimisation changes. Set up alerts for key performance thresholds — CPU above 85% sustained, memory above 90%, disk queue length above 2 — to catch emerging problems before they affect users.
Establishing a Performance Baseline
Before you can optimise effectively, you need a clear understanding of what normal performance looks like for each VM. Establish performance baselines by recording key metrics — average and peak CPU utilisation, memory usage, disk IOPS, disk latency, and network throughput — during a representative period that includes both typical and peak workload conditions. Document these baselines and store them alongside your VM configuration records so that when performance changes occur, you can quickly determine whether the change is within expected parameters or indicates a problem that needs investigation.
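Comparing current metrics against a recorded baseline can be automated. The metric names and the 25% drift tolerance below are illustrative assumptions — pick the metrics and tolerances that match your own baselines:

```python
def deviation_from_baseline(current, baseline, tolerance=0.25):
    """Flag metrics that have drifted more than `tolerance` (25% here,
    an illustrative threshold) from their recorded baseline values."""
    drifted = {}
    for metric, base in baseline.items():
        value = current.get(metric)
        if value is None or base == 0:
            continue  # no current reading, or baseline unusable
        change = (value - base) / base
        if abs(change) > tolerance:
            drifted[metric] = round(change, 2)
    return drifted

# Example baseline and a later reading (invented figures):
baseline = {"cpu_pct": 35, "disk_latency_ms": 2.0, "iops": 1200}
current = {"cpu_pct": 38, "disk_latency_ms": 6.5, "iops": 1150}
drift = deviation_from_baseline(current, baseline)
```

In this example CPU and IOPS are within tolerance, but disk latency has more than tripled against its baseline — exactly the kind of change that a raw dashboard glance might miss but a baseline comparison surfaces immediately.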
Azure Monitor workbooks provide an excellent way to create visual performance dashboards that display current metrics alongside historical baselines. Create a workbook for each critical application or VM tier that shows at a glance whether performance is within acceptable bounds. Share these workbooks with application owners and operations staff so that everyone has visibility into system health without needing to log into the Azure portal and navigate complex metric interfaces.
Automation and Infrastructure as Code
Manual VM management does not scale well. As your Azure estate grows, maintaining optimised configurations across dozens or hundreds of VMs becomes impractical without automation. Azure Resource Manager templates, Bicep, or Terraform allow you to define your VM configurations as code, ensuring that every new VM is deployed with the correct size, storage tier, networking configuration, and monitoring settings from the outset. This prevents configuration drift — where VMs gradually diverge from their intended specifications through manual changes — and ensures that optimisation best practices are consistently applied across your entire estate. For UK businesses working with a managed cloud provider like Cloudswitched, infrastructure as code also provides transparency and auditability, making it clear exactly what is deployed and how it is configured.
Optimise Your Azure Environment
Cloudswitched provides expert Azure cloud management for UK businesses, including VM performance optimisation, cost management, and ongoing monitoring. Whether you need a one-off optimisation review or continuous cloud management, our certified Azure engineers can help you get more performance for less cost. Get in touch for a free Azure assessment.