
Network Performance Benchmarking: What to Measure


Network performance is the invisible foundation upon which every modern business operates. When your network performs well, nobody notices — applications load quickly, files transfer seamlessly, video calls are crisp, and cloud services respond instantly. When it performs poorly, everything suffers. Staff waste hours waiting for applications to respond, video conferences freeze and stutter, file transfers crawl, and customer-facing services become sluggish or unreliable.

Yet despite its critical importance, network performance is one of the least understood and most poorly monitored aspects of most UK business IT environments. Many organisations operate without any baseline understanding of their network's capabilities, meaning they have no way of knowing whether performance is degrading, no data to support infrastructure investment decisions, and no evidence to present to internet service providers when disputing poor connectivity.

Network performance benchmarking is the systematic process of measuring, recording, and analysing key network metrics to establish baselines, identify bottlenecks, and track changes over time. This guide explains what to measure, how to measure it, and how to use the results to make informed decisions about your network infrastructure.

  • 82% of UK businesses have never benchmarked their network
  • 34 min average daily productivity lost per user to network issues
  • £8,200 annual cost of poor network performance per 10 users
  • 3x faster problem resolution with established baselines

The Core Network Metrics

Effective network benchmarking requires measuring the right metrics. While there are dozens of network statistics you could track, the following core metrics provide the foundation for understanding your network's health and performance. Each metric tells a different part of the story, and together they paint a comprehensive picture of how well your network serves your business.

Bandwidth and Throughput

Bandwidth is the maximum theoretical capacity of your network connection, measured in megabits per second (Mbps) or gigabits per second (Gbps). It is the speed your internet service provider advertises and the number printed on your contract. Throughput, however, is the actual data transfer rate you achieve in practice — and it is almost always lower than your advertised bandwidth.

The difference between bandwidth and throughput is crucial. Your ISP might sell you a 1 Gbps leased line, but your actual throughput could be significantly lower due to network overhead, congestion, equipment limitations, or configuration issues. Benchmarking throughput at regular intervals — both internally across your LAN and externally to the internet — reveals whether you are getting the performance you are paying for and whether it is sufficient for your business needs.
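As a minimal sketch of this comparison, the following (using hypothetical figures, not measurements from any real line) expresses measured throughput as a fraction of the contracted bandwidth, which is how the WAN thresholds later in this article are framed:

```python
def throughput_ratio(measured_mbps: float, contracted_mbps: float) -> float:
    """Return measured throughput as a fraction of contracted bandwidth."""
    return measured_mbps / contracted_mbps

# Hypothetical example: a 1 Gbps leased line delivering 480 Mbps
# is well below the >90% "good" benchmark cited in this article.
ratio = throughput_ratio(480, 1000)
print(f"{ratio:.0%}")  # 48%
```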

Latency

Latency measures the time it takes for a data packet to travel from one point to another, typically expressed in milliseconds (ms). Low latency is critical for real-time applications such as VoIP phone calls, video conferencing, and remote desktop sessions. While a few milliseconds of latency is imperceptible for web browsing or email, excessive latency can make voice calls sound choppy and video conferences freeze.

For UK businesses, typical latency benchmarks are: under 1ms for traffic within your local network, under 10ms for traffic to a UK-based cloud service, under 20ms for traffic to a European data centre, and under 80ms for traffic to a US-based service. Latency that consistently exceeds these benchmarks indicates a problem that warrants investigation.

| Metric | What It Measures | Good Benchmark | Warning Threshold | Critical Threshold |
| --- | --- | --- | --- | --- |
| Throughput (LAN) | Internal data transfer speed | >900 Mbps on Gigabit | <700 Mbps | <500 Mbps |
| Throughput (WAN) | Internet data transfer speed | >90% of contracted speed | <75% of contracted | <50% of contracted |
| Latency (LAN) | Internal round-trip time | <1 ms | 1-5 ms | >5 ms |
| Latency (WAN) | Internet round-trip time | <20 ms (UK) | 20-50 ms | >50 ms |
| Packet Loss | Data packets that fail to arrive | 0% | 0.1-1% | >1% |
| Jitter | Variation in latency | <5 ms | 5-15 ms | >15 ms |
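The thresholds above can be applied mechanically once results are collected. This sketch (the function name and structure are illustrative, not from any particular tool) classifies a measurement, noting that throughput degrades downwards while latency, loss, and jitter degrade upwards:

```python
def classify(value: float, warning: float, critical: float,
             higher_is_better: bool) -> str:
    """Classify a measurement against warning/critical thresholds.

    Throughput is higher-is-better; latency, packet loss, and jitter
    are lower-is-better.
    """
    if higher_is_better:
        if value < critical:
            return "critical"
        if value < warning:
            return "warning"
        return "good"
    if value > critical:
        return "critical"
    if value > warning:
        return "warning"
    return "good"

# WAN latency of 35 ms falls in the 20-50 ms warning band.
print(classify(35, warning=20, critical=50, higher_is_better=False))
```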

Packet Loss

Packet loss occurs when data packets travelling across the network fail to reach their destination. Even small amounts of packet loss can have a dramatic impact on application performance. A packet loss rate of just 1% can cause TCP-based applications to slow to a crawl because the protocol requires lost packets to be retransmitted, creating a cascade of delays. For real-time applications such as VoIP, packet loss causes audible gaps, clicks, and distortion that make communication difficult.

In a healthy network, packet loss should be zero or very close to zero. Any consistent packet loss above 0.1% warrants investigation. Common causes include faulty cabling, overloaded switches, misconfigured Quality of Service settings, wireless interference, and ISP-level congestion.

Jitter

Jitter is the variation in latency over time. While a consistent latency of 15ms is perfectly acceptable for most applications, a latency that fluctuates between 5ms and 50ms — even if the average is 15ms — causes significant problems for real-time communications. Voice calls become choppy, video conferences stutter, and interactive applications feel unpredictable.

Jitter is particularly important for businesses that rely heavily on Microsoft Teams, Zoom, or VoIP telephony. The ITU-T (the International Telecommunication Union's standardisation sector) recommends jitter below 30ms for acceptable voice quality, but most modern unified communications platforms perform best with jitter below 10ms.
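One common simplification of jitter (the RFC 3550 estimator uses a smoothed variant) is the mean absolute difference between consecutive latency samples. This sketch illustrates why two streams with the same average latency can have very different jitter:

```python
def jitter_ms(latencies_ms: list[float]) -> float:
    """Jitter as the mean absolute difference between consecutive
    latency samples (a simplification of the RFC 3550 estimator)."""
    if len(latencies_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

# Stable samples give low jitter; large swings give high jitter
# even when the average latency is similar.
print(jitter_ms([15, 15, 16, 15]))  # well under the 10 ms UC target
print(jitter_ms([5, 50, 5, 50]))    # 45 ms: unusable for voice
```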

Why Averages Can Be Misleading

When benchmarking network performance, always look beyond average values. A network with an average latency of 10ms might be performing perfectly — or it might have latency that alternates between 2ms and 150ms, with the average masking severe intermittent problems. Always examine percentile values (particularly the 95th and 99th percentiles), standard deviation, and maximum values alongside averages. The 95th percentile tells you what performance looks like during the worst 5% of the time — which is often when your users are most frustrated.
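The point about averages masking intermittent problems can be demonstrated in a few lines with Python's standard library (the sample data is fabricated for illustration):

```python
import statistics

def p95(samples: list[float]) -> float:
    """95th percentile using the statistics module's inclusive method."""
    return statistics.quantiles(samples, n=100, method="inclusive")[94]

# 90 quiet samples at 2 ms plus 10 spikes at 150 ms: the mean looks
# healthy, while the 95th percentile exposes the intermittent problem.
latencies = [2.0] * 90 + [150.0] * 10
print(f"mean = {statistics.mean(latencies):.1f} ms")  # 16.8 ms
print(f"p95  = {p95(latencies):.1f} ms")              # 150.0 ms
```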

Internal vs External Benchmarking

A complete benchmarking programme measures performance both within your internal network (LAN benchmarking) and across your internet connection (WAN benchmarking). These are distinct measurements that reveal different types of problems.

LAN benchmarking tests the performance of your internal network infrastructure — switches, cabling, wireless access points, and internal routing. Poor LAN performance indicates problems with your own equipment or configuration. Common issues include ageing switches that cannot handle modern traffic volumes, Cat5 cabling that limits speeds to 100 Mbps, wireless dead spots or interference, and misconfigured VLANs creating unnecessary traffic bottlenecks.

WAN benchmarking tests the performance of your internet connection and the path to external services. Poor WAN performance may indicate problems with your ISP, your firewall or router, or congestion on the wider internet. WAN benchmarking should include tests to multiple destinations — not just a single speed test server — to differentiate between ISP-level problems and destination-specific issues.

  • LAN Throughput: 940 Mbps
  • Wi-Fi Throughput (5GHz): 550 Mbps
  • Wi-Fi Throughput (2.4GHz): 150 Mbps
  • WAN Download: 480 Mbps
  • WAN Upload: 115 Mbps

Benchmarking Tools and Methodology

Effective benchmarking requires consistent methodology. Ad-hoc speed tests run from random devices at random times produce unreliable data that cannot be meaningfully compared. A structured benchmarking programme should use dedicated tools, run tests at consistent times, and record results systematically.

For LAN benchmarking, iPerf3 is the industry standard open-source tool. It measures TCP and UDP throughput between two endpoints on your network with high accuracy. For WAN benchmarking, commercial tools such as PRTG Network Monitor, SolarWinds, or Cisco ThousandEyes provide continuous automated testing with historical reporting. For wireless benchmarking, tools such as Ekahau or NetSpot can map signal strength, channel utilisation, and throughput across your office floor plan.
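For recording results systematically, iPerf3's `--json` flag produces machine-readable output. As a sketch, the receiver-side TCP throughput sits under `end.sum_received` in that report (the sample JSON below is trimmed to the relevant field, not a full capture):

```python
import json

def received_mbps(iperf_json: str) -> float:
    """Extract receiver-side throughput in Mbps from `iperf3 --json` output.

    Assumes a TCP test; iperf3 reports bits_per_second under
    end.sum_received in its JSON summary.
    """
    report = json.loads(iperf_json)
    bps = report["end"]["sum_received"]["bits_per_second"]
    return bps / 1e6

# Trimmed sample of the JSON from: iperf3 -c <server> --json
sample = '{"end": {"sum_received": {"bits_per_second": 941000000.0}}}'
print(f"{received_mbps(sample):.0f} Mbps")  # 941 Mbps
```

Logging this value alongside a timestamp after each scheduled run is enough to build the historical trend data the rest of this article relies on.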

Benchmarks should be run at multiple times throughout the day to capture performance variations. A network that performs brilliantly at 7am but crawls at 10am when all staff are online and running cloud applications tells a very different story from one that performs consistently. Run tests during quiet periods (early morning, late evening), during peak hours (mid-morning, early afternoon), and during specific events (large file transfers, company-wide video calls, backup windows).

Effective Benchmarking Approach

  • Consistent tools and methodology
  • Tests at multiple times of day
  • Both LAN and WAN measurements
  • Multiple metrics tracked simultaneously
  • Results recorded and trended over time
  • Percentile analysis, not just averages
  • Automated and scheduled testing
  • Documented baselines for comparison

Ineffective Benchmarking Approach

  • Random speed tests from web browsers
  • One-off tests with no historical data
  • Only measuring download speed
  • Testing from a single location
  • Ignoring latency, jitter, and packet loss
  • Relying solely on average values
  • Manual testing with inconsistent timing
  • No documented baselines or trends

Using Benchmarks to Drive Decisions

The value of benchmarking lies not in the numbers themselves but in how you use them to make informed decisions. Baseline benchmarks establish your network's normal performance, allowing you to detect degradation early — before users start complaining. Comparative benchmarks help you evaluate whether a proposed upgrade (new switches, faster internet, Wi-Fi 6 access points) delivers the expected improvement. And trend analysis reveals whether your network's performance is gradually declining as your business grows and places increasing demands on the infrastructure.

When presenting a business case for network investment, benchmark data transforms the conversation from subjective complaints — "the network feels slow" — into objective evidence: "our throughput has declined 30% over twelve months and now falls below the threshold for reliable video conferencing." This evidence-based approach is far more compelling to budget holders and decision makers.

  • Businesses using automated monitoring: 29%
  • Businesses with documented baselines: 18%
  • Businesses benchmarking at multiple times of day: 12%

Network performance benchmarking is not a one-off exercise. It is an ongoing discipline that should be embedded in your IT management practices. Regular benchmarking ensures your network keeps pace with your business requirements and provides the data foundation for proactive infrastructure planning rather than reactive firefighting.

Need Help Benchmarking Your Network?

Cloudswitched provides professional network assessment and benchmarking services for UK businesses. We establish baselines, identify bottlenecks, and provide actionable recommendations to optimise your network performance. Get in touch to arrange an assessment.

Tags: Network Performance, Benchmarking, Monitoring
CloudSwitched

Based in Shoreditch, London, we offer a range of IT services and solutions to small and medium-sized companies.