Firefox and Chrome both follow a rolling, date-based release cycle. Each browser features a built-in updater that ensures updates are applied as soon as possible after they are released. This continuous updating is why they are referred to as evergreen browsers.
As a result of their evergreen status, support for these browsers is often phrased as "latest stable and previous stable", but how useful is this statement? To assess it reliably we need to consider how effective the automatic updates are and how long older versions remain in significant use.
To judge the update efficiency we will compare the share of traffic for the latest stable and previous stable versions on individual days, for a period of three weeks from the official release date. In this analysis we are ignoring all versions of the browser other than these two; see the section on the long tail for information on other browser versions.
The data for this was taken from a site with sufficiently high traffic to be representative (over ten thousand visits per day for each browser).
The data shown below is the percentage split of visits between the two versions rounded to the nearest whole number.
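The percentage split described above can be sketched as follows. The visit counts here are invented for illustration; only the method (restricting to the two versions and rounding to the nearest whole number) comes from the article.

```python
# Hypothetical daily visit counts: date -> (previous-stable visits, latest-stable visits).
daily_visits = {
    "2013-09-17": (11200, 1900),
    "2013-09-24": (4100, 9800),
}

for date, (prev, latest) in sorted(daily_visits.items()):
    total = prev + latest  # versions other than these two are ignored
    prev_pct = round(100 * prev / total)
    latest_pct = round(100 * latest / total)
    print(f"{date}: previous {prev_pct}%, latest {latest_pct}%")
```

Because the split is computed over the two versions only, the rounded figures for each day sum to (approximately) 100%.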
Firefox currently updates on a planned six-week cycle.
| Date | v23 (%) | v24 (%) |
| Date | v24 (%) | v25 (%) |
There seems to be a pattern in these two releases: the update reaches 15-20% adoption, plateaus there for several days, and then jumps to over 80% adoption within the next couple of days. Whilst I've not done the same detailed daily breakdown, the adoption curves in the analytics show the same pattern going back to at least the version 16 to 17 upgrade, so I'm guessing it might be a feature of the updater performing a staged roll-out. (Alternatively, it could be a consistent anomaly in the source of the statistics I'm using.)
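The ramp, plateau, and surge phases described above can be picked out mechanically with a simple day-over-day heuristic. This is my own rough classifier, not anything derived from the updater itself, and the daily percentages below are made up for illustration:

```python
# Hypothetical daily adoption percentages for the new version.
adoption = [3, 9, 16, 18, 18, 19, 19, 45, 82, 91]

def phase(prev_pct: float, pct: float) -> str:
    """Classify a day-over-day change in adoption (arbitrary thresholds)."""
    delta = pct - prev_pct
    if delta < 2:
        return "plateau"
    if delta > 10:
        return "surge"
    return "ramp"

phases = [phase(a, b) for a, b in zip(adoption, adoption[1:])]
print(phases)
# -> ['ramp', 'ramp', 'ramp', 'plateau', 'plateau', 'plateau', 'surge', 'surge', 'ramp']
```

The thresholds (2 and 10 percentage points) are arbitrary; the point is only that the plateau-then-surge shape is easy to spot programmatically once you have the daily series.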
Chrome currently updates on a planned six-week cycle.
| Date | v29 (%) | v30 (%) |
| Date | v30 (%) | v31 (%) |
Whilst the data for the update efficiency only looked at the latest two versions of a given browser, we also need to consider all versions of that browser. We have seen that during the transition period of the update we need to account for both versions, but we also need to ensure that there are no other versions making up a significant portion of our visits.
For this we will look at Firefox, as it has a longer transition period for the update and a more frequent release cycle than Chrome, which will magnify the effect we are looking for. This data is taken from 2013-10-29, the day before version 25 was officially released. The switch between versions 23 and 24 was slower than the subsequent 24 to 25 update, so we might reasonably expect this, if anything, to further magnify the effect.
As can be seen, there is a long tail of previous versions but, even in total, they are not particularly significant. In fact, taking the latest and previous stable versions together, at this point they make up 86% of visits.
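As a sketch of that aggregation: only the 86% figure for the latest-plus-previous aggregate comes from the article; the individual per-version shares below are hypothetical numbers chosen to match it (on 2013-10-29 the latest stable was v24 and the previous stable v23).

```python
# Hypothetical percentage share of visits by Firefox version on one day.
share = {"24": 70, "23": 16, "22": 5, "21": 2, "older": 7}

latest_and_previous = share["24"] + share["23"]
long_tail = sum(pct for version, pct in share.items() if version not in ("24", "23"))
print(latest_and_previous, long_tail)
# -> 86 14
```

Framed this way, the long tail is simply everything left over once the two supported versions are accounted for, and here it amounts to 14% of visits.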
During the transition phase, as browsers are updated, there will be a period of time when both the latest stable version and the previous stable version are in use. Immediately after the release of a new stable version, and during the subsequent transition phase, some visitors will be using the previous version but, just before the next version is released, use of the previous version will be insignificant.
Whilst you should always examine your own individual situation, in general for evergreen browsers it is safe to assume that by testing in the latest and previous stable versions you are covering the majority of your visitors using those browsers at any given moment.