
Network Performance Benchmarking: What to Measure

Network performance is the invisible foundation upon which every modern business operates. When your network performs well, nobody notices — applications load quickly, files transfer seamlessly, video calls are crisp, and cloud services respond instantly. When it performs poorly, everything suffers. Staff waste hours waiting for applications to respond, video conferences freeze and stutter, file transfers crawl, and customer-facing services become sluggish or unreliable.

Yet despite its critical importance, network performance is one of the least understood and most poorly monitored aspects of most UK business IT environments. Many organisations operate without any baseline understanding of their network's capabilities, meaning they have no way of knowing whether performance is degrading, no data to support infrastructure investment decisions, and no evidence to present to internet service providers when disputing poor connectivity.

Network performance benchmarking is the systematic process of measuring, recording, and analysing key network metrics to establish baselines, identify bottlenecks, and track changes over time. This guide explains what to measure, how to measure it, and how to use the results to make informed decisions about your network infrastructure.

  • 82% of UK businesses have never benchmarked their network
  • 34 minutes of average daily productivity lost per user to network issues
  • £8,200 annual cost of poor network performance per 10 users
  • 3x faster problem resolution with established baselines

The Core Network Metrics

Effective network benchmarking requires measuring the right metrics. While there are dozens of network statistics you could track, the following core metrics provide the foundation for understanding your network's health and performance. Each metric tells a different part of the story, and together they paint a comprehensive picture of how well your network serves your business.

Bandwidth and Throughput

Bandwidth is the maximum theoretical capacity of your network connection, measured in megabits per second (Mbps) or gigabits per second (Gbps). It is the speed your internet service provider advertises and the number printed on your contract. Throughput, however, is the actual data transfer rate you achieve in practice — and it is almost always lower than your advertised bandwidth.

The difference between bandwidth and throughput is crucial. Your ISP might sell you a 1 Gbps leased line, but your actual throughput could be significantly lower due to network overhead, congestion, equipment limitations, or configuration issues. Benchmarking throughput at regular intervals — both internally across your LAN and externally to the internet — reveals whether you are getting the performance you are paying for and whether it is sufficient for your business needs.

When measuring throughput, it is important to test in both directions — download and upload — as they can differ significantly, particularly on asymmetric connections such as FTTC broadband. Many UK businesses have adopted video conferencing and cloud backup services that are heavily dependent on upload speeds, yet upload throughput is frequently overlooked in basic speed tests. A business with 80 Mbps download but only 10 Mbps upload will struggle with simultaneous video calls, cloud file synchronisation, and offsite backup operations.

Internal LAN throughput should also be tested separately from internet throughput. Your internal network may be capable of gigabit speeds between wired devices, but if your switches are ageing or your cabling is substandard, internal throughput could be a fraction of the theoretical maximum. This is particularly relevant for businesses that rely on file servers, network-attached storage, or internal applications hosted on local infrastructure.
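
As an illustration, the sketch below runs a pair of iPerf3 TCP tests from Python and reports throughput in both directions. It assumes iPerf3 is installed and that an iPerf3 server is already listening on a LAN host (the 192.168.1.10 address is a placeholder), so treat it as a starting point rather than a complete benchmarking tool.

```python
import json
import subprocess

IPERF_SERVER = "192.168.1.10"  # placeholder: a host on your LAN running `iperf3 -s`

def run_iperf3(server: str, reverse: bool = False, seconds: int = 10) -> float:
    """Run one iPerf3 TCP test and return the measured throughput in Mbps."""
    cmd = ["iperf3", "-c", server, "-t", str(seconds), "-J"]
    if reverse:
        cmd.append("-R")  # reverse mode: the server sends, measuring the other direction
    output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    result = json.loads(output)
    return result["end"]["sum_received"]["bits_per_second"] / 1_000_000

if __name__ == "__main__":
    to_server = run_iperf3(IPERF_SERVER)                  # workstation -> server
    from_server = run_iperf3(IPERF_SERVER, reverse=True)  # server -> workstation
    print(f"Throughput to server:   {to_server:.0f} Mbps")
    print(f"Throughput from server: {from_server:.0f} Mbps")
```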

Latency

Latency measures the time it takes for a data packet to travel from one point to another, typically expressed in milliseconds (ms). Low latency is critical for real-time applications such as VoIP phone calls, video conferencing, and remote desktop sessions. While a few milliseconds of latency is imperceptible for web browsing or email, it can make voice calls sound choppy and video conferences freeze.

For UK businesses, typical latency benchmarks are: under 1ms for traffic within your local network, under 10ms for traffic to a UK-based cloud service, under 20ms for traffic to a European data centre, and under 80ms for traffic to a US-based service. Latency that consistently exceeds these benchmarks indicates a problem that warrants investigation.

When measuring latency, it is essential to test against multiple destinations rather than relying on a single ping test. Latency to your primary cloud provider — whether that is Microsoft Azure, AWS, or Google Cloud — is the most relevant benchmark for cloud-dependent businesses. You should also measure latency to your VoIP provider's servers, your email provider, and any industry-specific cloud applications your staff use daily. Different destinations will show different latency profiles, and understanding these variations helps identify whether a latency problem is local to your network, ISP-related, or specific to a particular service.

It is also worth measuring latency at different times of day. Many UK businesses experience increased latency during peak hours when their ISP's network is under heavier load. If latency doubles between 9am and 11am, this pattern can explain why staff report that the internet feels slow in the morning — a complaint that might otherwise be dismissed as subjective perception rather than a measurable problem with a diagnosable cause.
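
One lightweight way to sample latency to several destinations, without parsing platform-specific ping output, is to time TCP connections, which roughly corresponds to one round trip. The sketch below takes that approach; the destination list and the VoIP hostname are placeholders you would replace with the services your business actually depends on.

```python
import socket
import statistics
import time

# Placeholders: substitute the cloud and VoIP services your staff actually use
DESTINATIONS = {
    "Microsoft 365": ("outlook.office365.com", 443),
    "Google APIs":   ("www.googleapis.com", 443),
    "VoIP provider": ("sip.example.invalid", 443),  # placeholder hostname
}

def connect_time_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds, a rough proxy for round-trip latency."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

for name, (host, port) in DESTINATIONS.items():
    try:
        print(f"{name:<15} {connect_time_ms(host, port):6.1f} ms")
    except OSError as exc:
        print(f"{name:<15} unreachable ({exc})")
```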

Metric           | What It Measures                 | Good Benchmark           | Warning Threshold  | Critical Threshold
Throughput (LAN) | Internal data transfer speed     | >900 Mbps on Gigabit     | <700 Mbps          | <500 Mbps
Throughput (WAN) | Internet data transfer speed     | >90% of contracted speed | <75% of contracted | <50% of contracted
Latency (LAN)    | Internal round-trip time         | <1 ms                    | 1-5 ms             | >5 ms
Latency (WAN)    | Internet round-trip time         | <20 ms (UK)              | 20-50 ms           | >50 ms
Packet Loss      | Data packets that fail to arrive | 0%                       | 0.1-1%             | >1%
Jitter           | Variation in latency             | <5 ms                    | 5-15 ms            | >15 ms

Packet Loss

Packet loss occurs when data packets travelling across the network fail to reach their destination. Even small amounts of packet loss can have a dramatic impact on application performance. A packet loss rate of just 1% can cause TCP-based applications to slow to a crawl because the protocol requires lost packets to be retransmitted, creating a cascade of delays. For real-time applications such as VoIP, packet loss causes audible gaps, clicks, and distortion that make communication difficult.

In a healthy network, packet loss should be zero or very close to zero. Any consistent packet loss above 0.1% warrants investigation. Common causes include faulty cabling, overloaded switches, misconfigured Quality of Service settings, wireless interference, and ISP-level congestion.

Diagnosing the source of packet loss requires systematic testing at different points in the network path. By running tests between devices on your local network, you can determine whether the loss is occurring internally — perhaps due to a failing switch or damaged cable — or externally on the path to the internet. Tools such as MTR (My Traceroute) can identify the specific network hop where packet loss is occurring, which is invaluable when presenting evidence to your ISP that the problem lies within their network rather than yours.

For UK businesses with multiple office locations connected via MPLS, SD-WAN, or site-to-site VPN, packet loss on the inter-site links can be particularly disruptive. Staff accessing centralised applications from a branch office may experience degraded performance that local IT support attributes to the application itself rather than the network connection between sites. Regular packet loss testing on all inter-site links should be a standard part of your benchmarking programme.
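
For a quick internal check before escalating to your ISP, something like the following can sample packet loss at a few points along the path. It shells out to the system ping command and parses its summary line, so it assumes Linux or macOS ping output, and the target addresses shown (default gateway, core switch, external resolver) are illustrative.

```python
import re
import subprocess

def packet_loss_percent(host: str, count: int = 50) -> float:
    """Send `count` ICMP echoes and parse the loss percentage from ping's summary line."""
    output = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True
    ).stdout
    match = re.search(r"([\d.]+)% packet loss", output)
    return float(match.group(1)) if match else float("nan")

# Illustrative test points: default gateway, a core switch, and an external resolver
for target in ["192.168.1.1", "192.168.1.2", "8.8.8.8"]:
    print(f"{target:<15} {packet_loss_percent(target):.1f}% loss")
```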

Jitter

Jitter is the variation in latency over time. While a consistent latency of 15ms is perfectly acceptable for most applications, a latency that fluctuates between 5ms and 50ms — even if the average is 15ms — causes significant problems for real-time communications. Voice calls become choppy, video conferences stutter, and interactive applications feel unpredictable.

Jitter is particularly important for businesses that rely heavily on Microsoft Teams, Zoom, or VoIP telephony. Industry guidance for VoIP commonly recommends keeping jitter below 30ms for acceptable voice quality, but most modern unified communications platforms perform best with jitter below 10ms.
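
Jitter can be calculated in several ways; one common and simple definition is the mean absolute difference between consecutive latency samples. The sketch below uses that definition, with invented readings, to show how two links with an identical average latency can have very different jitter.

```python
import statistics

def jitter_ms(latency_samples_ms: list[float]) -> float:
    """Mean absolute difference between consecutive latency samples."""
    diffs = [abs(b - a) for a, b in zip(latency_samples_ms, latency_samples_ms[1:])]
    return statistics.mean(diffs)

# Two links with the same average latency (15 ms) but very different stability
steady = [14, 15, 16, 15, 14, 15, 16, 15]
unstable = [5, 42, 8, 30, 6, 25, 4, 0]

print(f"Steady link jitter:   {jitter_ms(steady):.1f} ms")    # ~1 ms
print(f"Unstable link jitter: {jitter_ms(unstable):.1f} ms")  # ~23 ms
```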

Quality of Service for Real-Time Traffic

For businesses heavily reliant on VoIP telephony and video conferencing, Quality of Service (QoS) configuration is one of the most effective tools for managing jitter. QoS rules allow your network equipment to prioritise real-time traffic over less time-sensitive data such as file downloads, web browsing, and email. When a video call and a large file transfer compete for the same bandwidth, QoS ensures the video call gets priority, maintaining smooth communication whilst the file transfer proceeds at a reduced speed.

Implementing QoS properly requires understanding your network traffic patterns — which is another area where benchmarking data proves invaluable. Without knowing what traffic your network carries and when, QoS rules are based on guesswork. With benchmark data showing your traffic composition at different times of day, you can configure QoS policies that accurately reflect your business priorities. Most modern managed switches and enterprise-grade routers support QoS configuration, though the specific implementation varies between manufacturers.
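
QoS policies themselves live on your switches, routers, and firewalls and are configured per vendor, but the principle relies on traffic being marked with a DSCP class the network can recognise. As a minimal illustration only, the snippet below marks a UDP socket with the Expedited Forwarding class conventionally used for voice; the destination address is a documentation placeholder, and whether unprivileged processes may set this option varies by operating system.

```python
import socket

# DSCP Expedited Forwarding (decimal 46) is the class conventionally used for voice.
# The IP TOS byte carries the DSCP value in its upper six bits: 46 << 2 == 184.
DSCP_EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)

# Datagrams sent on this socket now carry the EF marking, which QoS-aware switches
# and routers can use to prioritise them ahead of bulk traffic.
sock.sendto(b"simulated voice payload", ("192.0.2.10", 5004))  # documentation address
```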

Why Averages Can Be Misleading

When benchmarking network performance, always look beyond average values. A network with an average latency of 10ms might be performing perfectly — or it might have latency that alternates between 2ms and 150ms, with the average masking severe intermittent problems. Always examine percentile values (particularly the 95th and 99th percentiles), standard deviation, and maximum values alongside averages. The 95th percentile tells you what performance looks like during the worst 5% of the time — which is often when your users are most frustrated.
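
A short sketch makes the point concrete. The summary below uses only the Python standard library, and the sample data (90 healthy readings plus 10 spikes) is invented purely to show how the mean stays respectable while the 95th and 99th percentiles expose the problem.

```python
import statistics

def summarise(latencies_ms: list[float]) -> dict[str, float]:
    """Summary statistics that reveal what an average on its own would hide."""
    ordered = sorted(latencies_ms)

    def percentile(p: float) -> float:
        # nearest-rank style lookup on the sorted samples
        return ordered[min(len(ordered) - 1, round(p * (len(ordered) - 1)))]

    return {
        "mean": statistics.mean(ordered),
        "p95": percentile(0.95),
        "p99": percentile(0.99),
        "max": ordered[-1],
        "stdev": statistics.stdev(ordered),
    }

# 90 healthy readings and 10 spikes: the mean looks tolerable, the percentiles do not
samples = [8.0] * 90 + [150.0] * 10
print(summarise(samples))  # mean ~22 ms, but p95 and p99 sit at 150 ms
```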

Internal vs External Benchmarking

A complete benchmarking programme measures performance both within your internal network (LAN benchmarking) and across your internet connection (WAN benchmarking). These are distinct measurements that reveal different types of problems.

LAN benchmarking tests the performance of your internal network infrastructure — switches, cabling, wireless access points, and internal routing. Poor LAN performance indicates problems with your own equipment or configuration. Common issues include ageing switches that cannot handle modern traffic volumes, Cat5 cabling that limits speeds to 100 Mbps, wireless dead spots or interference, and misconfigured VLANs creating unnecessary traffic bottlenecks.

A thorough LAN benchmark should test every segment of your internal network, not just the link between a single workstation and the server room. Test from different floors, different switch stacks, and both wired and wireless connections. You may discover that one floor consistently underperforms because it is served by an older switch, or that a particular wireless access point is creating a bottleneck because it has degraded to a slower standard due to interference or firmware issues.

For businesses operating structured cabling, the age and category of your cables directly affect the maximum achievable throughput. Cat5e cabling supports Gigabit Ethernet at distances up to 100 metres, but Cat5, the earlier standard without the enhanced specification, is rated only for 100 Mbps. If your building was cabled more than fifteen years ago, portions of your infrastructure may be using Cat5 cable that silently caps your internal network speed at a tenth of what modern equipment can deliver. A cable audit as part of your benchmarking programme can reveal these hidden limitations and inform a targeted re-cabling strategy.

WAN benchmarking tests the performance of your internet connection and the path to external services. Poor WAN performance may indicate problems with your ISP, your firewall or router, or congestion on the wider internet. WAN benchmarking should include tests to multiple destinations — not just a single speed test server — to differentiate between ISP-level problems and destination-specific issues.

Wireless Network Benchmarking

With an increasing proportion of office devices connecting wirelessly, benchmarking your wireless network deserves specific attention. Wireless performance is inherently more variable than wired performance, affected by physical obstacles, interference from other wireless networks, the number of connected clients, and the capabilities of both the access point and the client device. A comprehensive wireless benchmark should cover signal strength, throughput, latency, and packet loss across all areas where staff work — not just near the access points.

Heat mapping tools such as Ekahau or NetSpot allow you to create visual representations of wireless coverage and performance across your office floor plan. These maps often reveal surprising gaps — conference rooms with poor coverage despite being critical for video calls, or areas where the 2.4 GHz and 5 GHz bands overlap inefficiently. For UK businesses in multi-tenanted office buildings, channel congestion from neighbouring networks is a common problem that may only be apparent when all tenants are at full occupancy during core business hours.

Multi-Site and Remote Worker Benchmarking

Businesses with multiple office locations or a significant remote workforce face additional benchmarking challenges. Network performance between sites — typically across an MPLS, SD-WAN, or VPN connection — directly affects collaboration and access to shared resources. Benchmarking inter-site connectivity reveals whether your WAN solution is delivering adequate performance and helps identify sites that may be struggling with inadequate local internet connections.

For remote workers, benchmarking is more difficult but no less important. Providing staff with simple benchmarking tools that can report key metrics back to your IT team helps identify situations where a remote worker's performance problems stem from their home internet connection rather than your corporate systems. This data-driven approach prevents wasted time investigating server or application issues when the root cause is a remote worker's congested broadband connection during peak evening hours.
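
A minimal reporting script might look like the sketch below: it times TCP connections to a couple of key services and posts the readings to a collection endpoint. The collector URL, hostnames, and user identifier are all placeholders; in practice you would agree these with your IT team or use an agent supplied by your monitoring platform.

```python
import json
import socket
import time
import urllib.request

COLLECTOR_URL = "https://monitoring.example.invalid/api/remote-benchmarks"  # placeholder endpoint

def connect_time_ms(host: str, port: int = 443) -> float:
    """Time a TCP connection as a rough latency measurement."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        return (time.perf_counter() - start) * 1000

report = {
    "user": "remote-worker-01",  # identifier agreed with the IT team
    "timestamp": time.time(),
    "latency_vpn_ms": connect_time_ms("vpn.example.invalid"),  # placeholder hostname
    "latency_m365_ms": connect_time_ms("outlook.office365.com"),
}

request = urllib.request.Request(
    COLLECTOR_URL,
    data=json.dumps(report).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(request)  # hand the readings back to the central collector
```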

  • LAN Throughput: 940 Mbps
  • Wi-Fi Throughput (5GHz): 550 Mbps
  • Wi-Fi Throughput (2.4GHz): 150 Mbps
  • WAN Download: 480 Mbps
  • WAN Upload: 115 Mbps

Benchmarking Tools and Methodology

Effective benchmarking requires consistent methodology. Ad-hoc speed tests run from random devices at random times produce unreliable data that cannot be meaningfully compared. A structured benchmarking programme should use dedicated tools, run tests at consistent times, and record results systematically.

For LAN benchmarking, iPerf3 is the industry standard open-source tool. It measures TCP and UDP throughput between two endpoints on your network with high accuracy. For WAN benchmarking, commercial tools such as PRTG Network Monitor, SolarWinds, or Cisco ThousandEyes provide continuous automated testing with historical reporting. For wireless benchmarking, tools such as Ekahau or NetSpot can map signal strength, channel utilisation, and throughput across your office floor plan.

Benchmarks should be run at multiple times throughout the day to capture performance variations. A network that performs brilliantly at 7am but crawls at 10am when all staff are online and running cloud applications tells a very different story from one that performs consistently. Run tests during quiet periods (early morning, late evening), during peak hours (mid-morning, early afternoon), and during specific events (large file transfers, company-wide video calls, backup windows).
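
However you run the tests, record every result with a timestamp so measurements taken at different times can be compared. A minimal approach, sketched below, appends each reading to a CSV file; the file name and example values are illustrative, and a monitoring platform with a time-series database is the better long-term home for this data.

```python
import csv
import datetime
import pathlib

RESULTS_FILE = pathlib.Path("network_benchmarks.csv")  # illustrative local log

def record_result(metric: str, value: float, unit: str) -> None:
    """Append one timestamped measurement so readings can be trended over time."""
    first_write = not RESULTS_FILE.exists()
    with RESULTS_FILE.open("a", newline="") as handle:
        writer = csv.writer(handle)
        if first_write:
            writer.writerow(["timestamp", "metric", "value", "unit"])
        writer.writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            metric, value, unit,
        ])

# Call this from whichever tests you run, scheduled (for example via cron or
# Windows Task Scheduler) at quiet, peak, and event-driven times.
record_result("wan_download", 480.2, "Mbps")
record_result("latency_m365", 11.4, "ms")
```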

Automated and Continuous Monitoring

Whilst periodic manual benchmarking provides valuable snapshots, the most effective approach is automated, continuous monitoring that collects network metrics around the clock without manual intervention. Automated monitoring tools run tests at configurable intervals — typically every one to five minutes — and store the results in a time-series database that enables historical analysis and trend detection.

The advantage of continuous monitoring over periodic manual testing is twofold. First, it captures intermittent problems that manual testing might miss — a brief spike in packet loss at 2pm every Tuesday, for example, or a gradual decline in throughput that occurs so slowly it is imperceptible on a day-to-day basis. Second, it enables alerting: when a metric crosses a predefined threshold, your IT team is notified immediately rather than discovering the problem only when users complain. For UK businesses where IT resource is limited, automated monitoring transforms network management from reactive firefighting into proactive maintenance.

Setting appropriate alert thresholds requires baseline data, which brings us back to the fundamental importance of initial benchmarking. Without knowing what normal performance looks like for your specific network, any alert threshold is arbitrary. Establish your baselines first, then set warning thresholds at one standard deviation above normal and critical thresholds at two standard deviations. This approach minimises false alarms whilst ensuring genuine problems are flagged promptly.
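
As a rough illustration of that approach, the sketch below derives warning and critical thresholds from a set of baseline readings and checks a new measurement against them. The baseline values are invented, and the logic assumes a metric where higher is worse (such as latency); for throughput you would flip the comparison.

```python
import statistics

def thresholds(baseline: list[float]) -> tuple[float, float]:
    """Warning at one standard deviation above the baseline mean, critical at two."""
    mean = statistics.mean(baseline)
    spread = statistics.stdev(baseline)
    return mean + spread, mean + 2 * spread

def check(metric: str, value: float, warning: float, critical: float) -> None:
    if value >= critical:
        print(f"CRITICAL: {metric} = {value:.1f} (threshold {critical:.1f})")
    elif value >= warning:
        print(f"WARNING:  {metric} = {value:.1f} (threshold {warning:.1f})")

# Invented baseline: a month of latency readings (ms) to a key cloud service
baseline_latency = [11, 12, 10, 13, 11, 12, 14, 11, 12, 13]
warn, crit = thresholds(baseline_latency)
check("latency_m365_ms", 16.5, warn, crit)  # prints a CRITICAL alert
```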

Documenting and Reporting Results

The value of benchmarking data is directly proportional to how well it is documented and communicated. Raw numbers stored in spreadsheets that nobody reviews serve little purpose. Establish a standardised reporting format that captures the key metrics, compares them against your baselines, highlights any values that exceed warning or critical thresholds, and provides a trend view showing how performance has changed over recent weeks or months.

Monthly benchmarking reports should be reviewed by your IT team or managed service provider as part of regular service review meetings. Annual reports should be presented to business leadership as part of IT budget planning, providing the evidence base for infrastructure investment decisions. When your network monitoring shows that throughput has declined by 20% over twelve months and is approaching the threshold where video conferencing quality will be affected, the case for upgrading your internet connection or replacing ageing switches becomes self-evident.

Effective Benchmarking Approach

  • Consistent tools and methodology
  • Tests at multiple times of day
  • Both LAN and WAN measurements
  • Multiple metrics tracked simultaneously
  • Results recorded and trended over time
  • Percentile analysis, not just averages
  • Automated and scheduled testing
  • Documented baselines for comparison

Ineffective Benchmarking Approach

  • Random speed tests from web browsers
  • One-off tests with no historical data
  • Only measuring download speed
  • Testing from a single location
  • Ignoring latency, jitter, and packet loss
  • Relying solely on average values
  • Manual testing with inconsistent timing
  • No documented baselines or trends

Using Benchmarks to Drive Decisions

The value of benchmarking lies not in the numbers themselves but in how you use them to make informed decisions. Baseline benchmarks establish your network's normal performance, allowing you to detect degradation early — before users start complaining. Comparative benchmarks help you evaluate whether a proposed upgrade (new switches, faster internet, Wi-Fi 6 access points) delivers the expected improvement. And trend analysis reveals whether your network's performance is gradually declining as your business grows and places increasing demands on the infrastructure.

When presenting a business case for network investment, benchmark data transforms the conversation from subjective complaints — "the network feels slow" — into objective evidence: "our throughput has declined 30% over twelve months and now falls below the threshold for reliable video conferencing." This evidence-based approach is far more compelling to budget holders and decision makers.

Common Bottlenecks Revealed by Benchmarking

In our experience working with UK businesses of all sizes, certain bottlenecks appear with remarkable consistency during network assessments. The most common is an internet connection that has not been upgraded to match growing demand. A 100 Mbps connection that was adequate for a team of twenty using basic cloud applications five years ago is woefully insufficient for a team of forty running video conferencing, cloud file synchronisation, and hosted telephony today.

Wireless infrastructure is another frequent culprit. Many businesses invested in Wi-Fi access points years ago and have never updated them, despite the dramatic improvements in wireless technology. Older access points running Wi-Fi 4 or early Wi-Fi 5 cannot deliver the throughput that modern devices and applications demand, and their limited client capacity creates congestion when too many devices connect simultaneously. Upgrading to Wi-Fi 6 or Wi-Fi 6E access points, properly positioned based on a wireless site survey, can transform the user experience in open-plan offices and meeting rooms.

Firewall throughput is a less obvious but equally important bottleneck. Every packet entering or leaving your network passes through your firewall, and if that firewall is underpowered — particularly when running advanced security features such as deep packet inspection, intrusion prevention, or SSL decryption — it can become a chokepoint that limits your effective internet speed to a fraction of your contracted bandwidth. Benchmarking should include firewall throughput testing to ensure your security infrastructure is not inadvertently degrading your network performance.

Building a Network Performance Improvement Plan

Benchmark data naturally leads to a structured improvement plan. Once you know where your network's weak points are, you can prioritise investments by impact. If your wireless network is the primary bottleneck, upgrading to Wi-Fi 6E access points will deliver more noticeable improvement than upgrading a wired backbone that is already performing well. If your internet connection is consistently saturated during business hours, adding a secondary WAN link or upgrading your circuit will have a greater effect than replacing perfectly adequate internal switches.

A good improvement plan is phased and measurable. For each proposed change, document the specific metric you expect to improve, the target value, and how you will verify the improvement through post-implementation benchmarking. This approach ensures that every pound spent on network infrastructure delivers measurable value and prevents the common mistake of upgrading components that are not actually contributing to the performance problems your users experience.

  • Businesses using automated monitoring: 29%
  • Businesses with documented baselines: 18%
  • Businesses benchmarking at multiple times of day: 12%

Network performance benchmarking is not a one-off exercise. It is an ongoing discipline that should be embedded in your IT management practices. Regular benchmarking ensures your network keeps pace with your business requirements and provides the data foundation for proactive infrastructure planning rather than reactive firefighting.

Need Help Benchmarking Your Network?

Cloudswitched provides professional network assessment and benchmarking services for UK businesses. We establish baselines, identify bottlenecks, and provide actionable recommendations to optimise your network performance. Get in touch to arrange an assessment.
