CSS specificity graphs in practice

By Orde Saunders

In a recent article Harry Roberts introduced the idea of CSS specificity graphs and Jonathan Snook followed up by showing how the graph applies in the SMACSS model. This article outlines my experience of putting them into practice.

On recent projects I've been looking to better manage the specificity of CSS to make authoring and - more importantly - maintenance easier. Whilst I haven't been thinking of it in quite these terms I can see that the specificity graph can be useful as a way to reason about CSS. To better understand its value I've tried applying this conceptual model to practical experience.


The key tenet of specificity graphs is that specificity should trend upwards as you move down the source order:

Macro level specificity graph showing trend

The reasoning behind this is that both specificity and the cascade (as expressed by source order) have an impact on the result of our CSS as it is applied by user agents. Essentially this means that the same rule in different places in the source can produce different results, as can the same rule in the same place but using a selector with different specificity.

Whilst techniques such as OOCSS, SMACSS and BEM are designed to mitigate these risks, on larger and/or longer running projects the risk of exposure is correspondingly greater. The specificity graph is an additional tool to help manage the code base as a whole.
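As a brief illustration of how these methodologies keep specificity under control (the class names here are hypothetical, not taken from any particular project), a BEM-style component uses only single-class selectors, leaving source order to do the work:

```css
/* Block, element and modifier are all single classes, so
   every rule sits at the same specificity of one class. */
.panel { padding: 1em; }
.panel__title { font-weight: bold; }
.panel--promo { background: #ffd700; }
```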


At the macro level we want the trend to increase but at the micro level we're going to see fluctuations:

Micro level specificity graph showing undulations

These fluctuations aren't necessarily an issue provided they aren't too great and they don't stray too far from the trend line. Big movements, and sections that sit a long way from the trend line, indicate that there is potentially an issue with the overall structure of our CSS that would benefit from further investigation.


Whilst it's generally a bad idea to think too much in terms of pre-processors as opposed to their output, they are a very useful tool for code organisation and, by its very nature, that organisation will have an impact on our source order.

What we're aiming for in this case is for specificity to increase within each source file whilst still maintaining the overall trend for an increase in specificity. This means we'll end up with saw teeth of specificity at the micro level:

Section of a specificity graph showing preprocessor source files

Trying to separate this out to make a smooth upward curve is going to be counterproductive: not only is it going to be difficult to author, it's going to be an absolute nightmare to maintain.

The key here is that micro level increases in specificity like this aren't a problem if they don't interact. What we're trying to avoid is having to fight specificity. If you go specific early and later need to change it for an alternative presentation then you have to at least match the specificity before you start extending.

For example, we might have put this selector early in the CSS:

#sidebar .panel p.highlight {
  color: red;
}

If we now have a second highlight presentation in a promotional panel that needs to be blue we can't just target the class, we need to start by matching the specificity before we can change things. As a result we can't use a generic promotional panel highlight class:

.promo .highlight {
  color: blue;
}

We have to use:

#sidebar .promo p.highlight {
  color: blue;
}

In this case, whilst putting .promo .highlight earlier in the source order would fit with an upward trend in the specificity graph, it would still be overridden by the more specific #sidebar .panel p.highlight, so simply fitting an upward trend doesn't seem to offer any benefit. However, rather than our goal being to fit our code to the graph, the graph is highlighting that this use of specificity may be inappropriate, so we would refactor down to reduce it.
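One possible refactoring along those lines - a sketch, not code from the article - drops the ID and element qualifiers so that the promotional variant only needs a modest step up in specificity to win:

```css
/* One class: the default highlight presentation. */
.highlight {
  color: red;
}

/* Two classes: later in the source and slightly more
   specific, so it wins inside a promotional panel without
   any fight against IDs or element qualifiers. */
.promo .highlight {
  color: blue;
}
```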

Alternatively, if this use of specificity is appropriate for what we need to achieve, then the graph indicates where to place it in the source. Whilst the placement in source order may not affect the result (although, due to the cascade, it might) it makes our CSS easier to reason about, more consistent and - ultimately - easier to maintain.

It's this latter use of the graph to help us decide where to place our code that is particularly useful. A common occurrence with long-lived CSS code bases is that when a new presentation is required it's added on to the end. However, just adding onto the end doesn't lead to maintainable code: we start creating issues with the cascade, which get circumvented by adding specificity, and the more we do it the more it compounds. Often code is added onto the end simply because it's not immediately obvious where it fits within the rest of the code base. In this case, seeing how what we want to do fits into the specificity graph can help us find where best to place it.

Media queries

The key to understanding how media queries fit into the specificity graph is that they don't act upon specificity, they act on the cascade. Take the following example:

.foo {
  width: 50%;
}

@media (min-width: 30em) {
  .foo {
    width: 25%;
  }
}
When the viewport is at least 30em wide, .foo will have a width of 25% - not because the media query is adding specificity but because it is further on in the source order and so takes priority in the cascade. If we had placed the media query above the naked class declaration then it would never take effect - it would be overridden by the cascade.

From this we can determine that the specificity graph for the above example will be flat - the specificity of the selector remains at 'one class' as we move along the source order.

If we keep our media queries close to the initial declarations that they affect in order to increase the maintainability of our code then we shouldn't see much effect on the specificity graph, at most some micro level sawtoothing. If we're seeing some macro level spikes resulting from media queries then it's an indication that we need to revisit our code to see if we can refactor it down to make it more maintainable.

!important annotations

Similarly to media queries, the !important annotation acts on the cascade rather than on specificity so, strictly speaking, we won't see rules containing !important standing out in the graph: the annotation itself has no specificity.

As !important won't show up in a pure specificity graph we need to find a way to include it because it's definitely something we are interested in seeing in our visualisation. In particular we want it to stand out more than an ID selector so, after some experimentation, I have settled on adding 1000 to the specificity of the selector of any rule that contains an !important annotation.
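As a sketch of that scoring, assuming the common base-10 weighting (element = 1, class = 10, ID = 100), which matches the four orders of magnitude mentioned later, and with hypothetical selectors:

```css
/* one class = 10 */
.alert { color: red; }

/* one ID + one class = 110 */
#header .alert { color: maroon; }

/* one class = 10, plus 1000 for the !important
   annotation, giving a graphed value of 1010 */
.alert--critical { color: crimson !important; }
```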

As we now have a methodology for showing !important in the graph we can see what effect this has on our code.

In the following example, despite having greater specificity and appearing later in the source order, the #username.error rule does not override the !important one:

.error {
  color: red !important;
}

#username.error {
  color: green;
}

If we now move the !important declaration to follow the initial declaration to fit in with an upward trend in the graph it doesn't change the effect but it does make the intention of the code easier to understand:

#username.error {
  color: green;
}

.error {
  color: red !important;
}


So far we've been talking about the graph in abstract terms but to really understand it we will need to see a graph of some actual CSS. Shown below is a graph of my baseline CSS which, as its name suggests, is relatively basic:

Specificity graph on a linear scale

In this format the graph doesn't seem particularly useful as the specificity of the uses of !important overwhelms the vertical scale: we can just about see a few bumps and a small spike for the use of an ID (included for the purposes of this demonstration).

The issue here is that we are using a linear scale to try and compare values spanning four orders of magnitude which inevitably leads to the lower orders being indistinguishable from each other. The solution to this is to use a logarithmic scale, and here is the same CSS graphed in this way:

Specificity graph on a log scale

The switch to a log scale has brought a significant benefit - we are now comparing degrees of specificity. On a linear scale the visual difference between one class and two classes is greater than the difference between one element and one class. By contrast, on the log scale the visual difference between one element and one class - a full order of magnitude - is far greater than the difference between one class and two classes. Essentially we are favouring the macro over the micro - we're highlighting the overall trend whilst simultaneously reducing the apparent significance of the saw teeth described above.

Looking at this graph of my CSS I can see a couple of areas for refactoring: there are some classes used at the start that would fit better after the element rules and the use of !important would be a better fit at the end of the source. This won't necessarily change the effect of my CSS but it will make it fit a more consistent model which will make it easier to maintain.


These two areas of my CSS were both contained within their own Sass source files so by moving the location of the import statements I was able to get a consistent trend to the graph with no change to the appearance of the site.

Refactored specificity graph on a log scale


CSS specificity graphs aren't an aim in themselves, they are a tool to help us reason about our code. Whilst writing CSS that closely follows a smooth upward trend on a graph is going to be much more of a hindrance than a help, the graph's real value is in helping us quickly and easily identify potentially problematic areas. It also provides us with a conceptual model for our code which - when applied consistently - makes it easier to maintain.



Thanks to Ben Seven (@ben_seven) and Lennie Sefton (@drgs100) for their help in preparing this article.