
User agent profiling

by Orde Saunders

Server side user agent sniffing has received a bad name, mostly due to a history of misuse. By combining client side progressive enhancement and feature detection with server side user agent sniffing, it is possible to provide a better experience for new browsers whilst still supporting older ones.

Note: This technique assumes that there is a one-to-one mapping between a user agent and a particular browser with a defined feature set. In the vast majority of cases this will hold true but you should keep the limitations of this assumption in mind.

Probably the most widespread and high profile use of user agent sniffing that is currently employed is to switch users to (but frequently not back from) mobile specific websites. This has become such an epidemic that a number of browsers provide an easily accessible menu item to switch the user agent and the fact this is normally labelled "Request Desktop site" tells its own story.

Understandably, user agent sniffing has earned a bad name amongst web developers, to the point where it gets talked about in polarising terms: "user agent sniffing is evil". Whilst I'd agree that making macro decisions - such as site redirection based solely on user agent - is very rarely well implemented, taking the user agent as a signal can be very valuable.

User agent profiles are not new: static databases of browser capabilities such as WURFL and Device Atlas are the leading players in this area. I've not used Device Atlas, but my experience of WURFL has never been encouraging - it's large and complex, misidentification of user agents is not unusual, and the large list of capabilities it provides has never really covered anything I was particularly interested in for server side adaptation of content.

In my article on building a layered UI, I include an outline of how to employ broad categorisation of the user agent to provide a sensible starting point for client side feature detection. The approach outlined here extends this principle, combining client side detection with a server side user agent database.

Client side detection

The client side detection uses standard feature detection to build an object indicating the feature and its support status. For example, for a modern mobile browser we might have a profile object similar to the following:

var profile = {
  'touch': true,
  'svg': true
};

We can then pass this information back to the server. My preferred method is to encode this data as JSON and set it in a cookie as this will then be transmitted back to the server with each subsequent request.
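As a minimal sketch of this step (the cookie name "profile" and the helper function are illustrative assumptions, not a fixed API), the profile object can be serialised into a cookie string so it travels back with every subsequent request:

```javascript
// Sketch: serialise the feature-detection profile into a cookie string.
// The cookie name "profile" is an assumption; any name would do.
function buildProfileCookie(profile) {
  return 'profile=' + encodeURIComponent(JSON.stringify(profile)) + '; path=/';
}

// In the browser this would be assigned to document.cookie:
// document.cookie = buildProfileCookie({ 'touch': true, 'svg': true });
```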

Server side recording

On the server side we create a database of user agent strings and the corresponding profile information sent back from the client side detection. When a request comes in we are then able to query this and modify the output.
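A minimal sketch of the server side store follows - an in-memory object here for illustration, where a real implementation would use a persistent database, and the function names are assumptions:

```javascript
// Sketch: map user agent strings to the profile reported by that browser.
var uaProfiles = {};

// Record a profile JSON string (e.g. read from the cookie) against a user agent.
function recordProfile(userAgent, profileJson) {
  try {
    uaProfiles[userAgent] = JSON.parse(profileJson);
  } catch (e) {
    // Ignore malformed cookies rather than poisoning the database.
  }
}

// Look up a profile; null signals the "first load" state of an unknown browser.
function lookupProfile(userAgent) {
  return uaProfiles.hasOwnProperty(userAgent) ? uaProfiles[userAgent] : null;
}
```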

Modifying the output

Initially we have no information about any user agent so we have to send a version of the site that is maximally compatible. Taking SVG support as an example, we would initially set the source of images in a format that is very widely supported. If desired, at this stage we could use JavaScript to rewrite the DOM to provide support for the more advanced feature. Once we have a profile from the client that positively identifies that we have support, we can then modify the markup that we send from the server to allow the client to start with the advanced features.
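As a sketch of that decision on the server (the markup, class name and function name are illustrative assumptions), a null profile represents the first-load state and gets the safe fallback:

```javascript
// Sketch: choose the image markup based on the stored profile.
function imageMarkup(profile) {
  if (profile && profile.svg) {
    // Positively identified SVG support: serve the advanced format directly.
    return '<img src="/img/logo.svg" alt="logo"/>';
  }
  // Unknown or unsupporting browser: widely supported format, flagged
  // so client side script can upgrade it if support turns out to exist.
  return '<img src="/img/logo.png" class="svg-replace" alt="logo"/>';
}
```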


There are a number of ways this technique can be used, but here are two that I use on this site.


SVG images

SVG graphics have a number of advantages, including smaller file size and resolution independence; the main disadvantage is limited support in older browsers. Whilst it is possible to use client side DOM manipulation to provide a fallback in a more widely supported image format, a browser that doesn't support SVG images is also likely to have weaker JavaScript support, making this more difficult. As a result, for maximum support, we need to start with a widely supported format and then use DOM replacement to insert the SVG image. This has overheads in terms of script execution and downloading two versions of the image, so the ideal would be to serve the SVG in the markup to browsers that support the format.


The default is to serve a widely supported format and use DOM manipulation to replace the image:

<img src="/img/logo.png" class="svg-replace" alt="logo"/>
<script>
$(document).ready(function () {
  if (profile.svg) {
    $('.svg-replace').each(function () {
      $(this).attr('src', function (i, src) {
        return src.replace(/\.[^.\?]*($|\?)/, '.svg$1');
      });
      $(this).removeClass('svg-replace');
    });
  }
});
</script>


If we have profiled the user agent and know that it has support for SVG images we can then serve these directly to the browser:

<img src="/img/logo.svg" alt="logo"/>

Asynchronous scripts

During page load, downloading and executing JavaScript blocks the downloading of other assets and page rendering. To overcome this problem, the <script> tag now supports the async attribute, which allows the browser to download the script in parallel with other assets but, again, is not supported in older browsers. If we provided scripts in the head of the document with the async attribute set then, whilst it would be faster for new browsers, it would still block downloading of assets in older browsers.


To provide asynchronous support for older browsers we need to inject the script into the page using JavaScript:

    (function () {
      var s, s0;
      s = document.createElement('script');
      s.type = 'text/javascript';
      s.async = true;
      s.src = '/js/main.js';
      s0 = document.getElementsByTagName('script')[0];
      s0.parentNode.insertBefore(s, s0);
    }());

Whilst this works for all browsers it is not optimal for browsers that do support the async attribute as the browser's read ahead parser is not able to see the resource until it has been injected which happens late in the page load. If the script was available as an external <script> tag in the head of the document then the browser is able to see the resource early and proceed with the background download whilst the majority of the page loads.


If we know from our profiling that the browser supports asynchronous scripts we can add it to the head of the document:

  <script src="/js/main.js" async></script>


Considerations

This technique isn't a silver bullet; there are a number of things that need to be taken into consideration when adopting this approach.

First load

The most obvious issue with this approach is that it is not possible to optimise the page for a particular browser until we have a profile and we cannot get a profile until the page has been loaded. However, on a site with a lot of traffic or where users normally visit a number of pages, the impact of this problem is diminished.


An extension of the first load issue is bounce visits - i.e. only one page is viewed in a session. As the profile is only captured after the first load and is communicated back to the server via a cookie on a subsequent request, it will never reach the server, meaning the next visit from a browser with this user agent will still be in the first load state. If users of your site tend to visit more than one page then this isn't too much of an issue. But on a site like this, where people tend to read one article and leave, the bounce rate is high and you don't want to miss out on the profile data you have captured. To overcome this, after the page has finished loading, I send an Ajax call to a dedicated end point on the server that stores the data and returns a 204 (no content) response.
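A sketch of that dedicated end point follows; the handler shape and the in-memory store are assumptions for illustration, with the response object following Node's http.ServerResponse:

```javascript
// Sketch: store the posted profile against the requesting user agent
// and reply 204 so the browser has nothing further to render.
var uaProfiles = {};

function handleProfileBeacon(userAgent, body, res) {
  try {
    uaProfiles[userAgent] = JSON.parse(body);
  } catch (e) {
    // Ignore malformed payloads.
  }
  res.writeHead(204); // 204 No Content
  res.end();
}
```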

HTTP Vary header

If you are modifying the content served based on the user agent, it is important to set the Vary HTTP header for the user agent. This is not perfect, as we are also potentially varying the content within the same user agent, but, as we are providing safe starting points, it is not too much of a concern if a cache is populated with the unoptimised version of the page. What we are really concerned with is ensuring that a version of the page that includes support for advanced features is not served to an older browser.
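As a minimal sketch (the response object shape follows Node's http.ServerResponse; the helper name is an assumption), the header is set alongside whatever profile-dependent markup is served:

```javascript
// Sketch: mark the response as varying by user agent so a shared cache
// never serves markup tuned for one browser to a different one.
function addVaryUserAgent(res) {
  var existing = res.getHeader && res.getHeader('Vary');
  // Append rather than overwrite in case Vary is already set.
  res.setHeader('Vary', existing ? existing + ', User-Agent' : 'User-Agent');
}
```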

Changing user agents

Browsers on a rapid release schedule, such as Firefox and Chrome, change their user agent often, which causes them to be treated as a new browser each time. Fortunately, the user agents of these browsers tend to be consistent within an operating system, which reduces the impact of this.

Spoofed user agents

If a browser with support for a number of advanced features were set to have the user agent of a browser that did not support these features, then you could end up serving markup for features that are not supported, negating the value of this approach. On a small site this is not a significant problem, but on a popular site it could be an issue and you would need to put steps in place to combat this.


Conclusion

When combined with client side feature detection, server side user agent sniffing can be used to provide a better starting point for progressive enhancement. However, this approach is not universally suitable and requires careful consideration before adoption.