If you had to choose one metric to focus on when improving the performance of your web pages, it would be speed index. However, it's fundamentally a subjective measurement, so to use it effectively you need to understand the consequences this has for alerting.
What is speed index?
Speed index is a measure of visual completeness. It is essentially a numeric representation of how much of the final page has been rendered over time, with lower numbers being better.

In this example, page A quickly reaches a significant proportion of visual completeness, so it has a low speed index compared to page B, which takes much longer to draw the majority of the page.
On these graphs the speed index is the area above the line.
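To make "the area above the line" concrete, here's a rough sketch of how a speed index falls out of a series of visual-completeness samples. The filmstrip data is made up for illustration; real tools like WebPagetest derive completeness from video frames.

```python
def speed_index(samples):
    """Approximate speed index from (time_ms, visual_completeness) samples.

    Integrates the area above the visual-completeness curve using the
    trapezoidal rule; a lower result means the page looked complete sooner.
    """
    total = 0.0
    for (t0, vc0), (t1, vc1) in zip(samples, samples[1:]):
        # Area above the curve for this interval: (1 - average completeness) * dt
        total += (1 - (vc0 + vc1) / 2) * (t1 - t0)
    return total

# Hypothetical filmstrips: page A paints most content early, page B late
page_a = [(0, 0.0), (500, 0.8), (1000, 0.9), (3000, 1.0)]
page_b = [(0, 0.0), (500, 0.1), (1000, 0.2), (3000, 1.0)]
```

Both pages finish at the same time, but page A's curve leaves far less area above it, so its speed index is much lower.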
Image credit: WebPagetest documentation
The effect of connection speed
Because it's calculated as a function of time, speed index does negate much of the effect of connection speed, but it doesn't isolate it entirely.
Let's take a look at the speed index for my homepage over the course of six days on a fast (cable) connection and a slow (3G) connection. No changes were made to the page or the hosting over this time so this is a stable test environment.
| | Cable | 3G |
|---|---|---|
| Coefficient of variation | 0.388 | 0.1247 |
TL;DR: This is a highly variable data set.
The most obvious difference is the effect of the speed of the connection for the same page on the same day - on average an increase of 520% from cable to 3G.
On the other axis we not only see a high variation from day to day (as expressed by the coefficient of variation), but the variation on fast and slow connections isn't correlated - a low 3G speed index can correspond to a high cable speed index and vice versa.
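The coefficient of variation quoted above is just the standard deviation divided by the mean, which makes it a unitless measure you can compare across the two connection profiles despite their very different absolute speed index values. A minimal sketch, using made-up daily readings rather than my actual measurements:

```python
import statistics

def coefficient_of_variation(values):
    # CV = standard deviation / mean; unitless, so comparable across
    # connection profiles with very different absolute speed indexes
    return statistics.pstdev(values) / statistics.mean(values)

# Hypothetical daily speed index readings (illustrative values only)
cable = [1200, 1900, 1100, 2400, 1300, 1500]
three_g = [8000, 7400, 9100, 7800, 8600, 8200]
```

Note that the slower connection can have the *lower* relative variation: the 3G numbers are all large, so day-to-day noise is a smaller fraction of the mean.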
Firstly, this isn't a "considered harmful" article: speed index is without doubt something you should be targeting and tracking when optimising your pages.
However, it's not a great metric to hand off to stakeholders who don't understand its subjective nature. This particularly applies to automated monitoring and alerting: whilst tracking speed index is useful for diagnostics (and for writing blog posts like this one), setting it as an alerting target will cause lots of false positives (which can be useful if you want inspiration for a blog post on the fluctuating nature of speed index as a metric).
What I've settled on as my preferred approach for alerting is to use the objective measures of request count and total page data. After optimising a page - relying heavily on speed index as part of this process - I'll draw a line in the sand and take the requests and data at that point as my alerting benchmark. Whilst it's technically possible for the perceived performance of the page to get worse without adding requests or data, this will at least catch things like the addition of extra scripts or unoptimised images.
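The "line in the sand" approach can be sketched as a simple budget check. The benchmark values and tolerance here are invented for illustration; in practice you'd capture them from your monitoring tool after each round of optimisation.

```python
# Hypothetical alerting check: compare a page's current request count and
# total transferred bytes against a benchmark captured after optimisation.

BENCHMARK = {"requests": 42, "total_bytes": 850_000}  # made-up baseline
TOLERANCE = 0.10  # allow 10% drift before alerting

def check_page_weight(requests, total_bytes,
                      benchmark=BENCHMARK, tolerance=TOLERANCE):
    """Return a list of alert messages; an empty list means within budget."""
    alerts = []
    if requests > benchmark["requests"] * (1 + tolerance):
        alerts.append(
            f"request count {requests} exceeds benchmark {benchmark['requests']}"
        )
    if total_bytes > benchmark["total_bytes"] * (1 + tolerance):
        alerts.append(
            f"total bytes {total_bytes} exceeds benchmark {benchmark['total_bytes']}"
        )
    return alerts
```

Unlike a speed index threshold, these checks are deterministic: they only fire when the page genuinely gains requests or bytes, not when the test environment has a noisy day.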
You should still be monitoring and doing spot checks on the perceived performance and when you make your next round of optimisations you're going to want to reset the alerting thresholds.
A couple of weeks after posting this I made a change that meant text assets (CSS & JS) weren't being gzipped, and it was the total-bytes alert that drew my attention to the issue. By contrast, the speed index was actually at the low end of its normal fluctuation range, so it wouldn't have highlighted the problem.
Want some more help with optimising your web performance? Get in touch.