
Introduction

We recently completed a very successful round of training sessions for councillors, on behalf of the Local Government Association (LGA), to build their confidence in using data. While the content was tailored to the role of councillors in local government performance management, many of the concepts are universally applicable. So we thought we’d pull out some of the key themes from the training that should be useful to a range of audiences. Below we share some tips, including the pros and cons of using means and medians in performance management, simple ways of benchmarking, the importance of a deep dive under the headline data, and traps to look out for when interpreting data visualisations.

Skip to:

  1. Using means and medians in performance management
  2. The power of ranks and quartiles
  3. The importance of comparative trend data
  4. The perils of threshold indicators
  5. Deep dives under the headlines
  6. Beware the editorial slant

Tip 1: Using means and medians in performance management

We all remember the discussions of means and medians from our school days. But those terms are rarely used in day-to-day life or in the media, where the word “average” is far more common. Often “average” is used to mean, well, the mean. And we will often compare something or someone to the mean.

However, in a performance management context the median is a very useful measure. Take the below example comparing 15 local authorities. Imagine you are a councillor for Council A looking at how many affordable homes are being built.

Number of completed affordable homes delivered by council (in alphabetical order)

Council A is below the mean. However, more than half (ten) of the 15 local authorities are actually below this average, which is a bit counterintuitive; it’s because Wigan and Doncaster have delivered particularly large numbers of homes, “skewing” the mean upwards. When appraising Council A’s performance, we might want to see whether they are in the top half of local authorities in the comparator group. And, as shown in the chart below, it turns out that, when comparing to the median, Council A is actually in the top half of these 15 local authorities:

Number of completed affordable homes delivered by council (in rank order)

Of course, to truly appraise performance we should also take into consideration the size of the local authority, the demand for affordable housing, and maybe other factors like relative deprivation, to ensure we are comparing to sensible benchmark local authorities. This indicator alone doesn’t give us all the intelligence we need to form a proper judgement.
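To see the effect in miniature, here’s a small Python sketch with made-up figures (these are not the numbers behind the charts above): two councils with very large delivery volumes pull the mean well above the figure for most councils, while the median stays with the “typical” council.

```python
from statistics import mean, median

# Hypothetical counts of completed affordable homes for 15 councils.
# The two large values play the role of Wigan and Doncaster in the
# post's example, skewing the mean upwards.
homes = [120, 95, 80, 150, 110, 105, 90, 130, 85, 100, 115, 125, 640, 710, 75]

avg = mean(homes)    # dragged upwards by the two outliers
mid = median(homes)  # the middle council when ranked

below_mean = sum(1 for h in homes if h < avg)
print(f"mean={avg:.0f}, median={mid:.0f}, councils below mean={below_mean}")
# With these invented figures: mean=182, median=110, and 13 of the 15
# councils sit below the mean - so "below average" tells you very little.
```

Checking whether a council is above or below the median (here, 110) answers the more intuitive question: is it in the top half of its comparator group?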

Tip 2: The power of ranks and quartiles

One of the advantages of using medians is that it neatly links to the idea of ranking. As much as some people aren’t fans of the idea of a league table, ranking is an incredibly simple and easily-understood way of looking at performance – particularly when comparing to organisations with a similar context. Senior managers in complex organisations need to monitor a vast array of performance indicators and other measures – for example, some of our local authority dashboards have over 100 indicators – and looking at particularly high or low ranks is a quick way of identifying areas that need your focus. At Mime, we have a simple way of drawing attention to the key areas; our dashboards colour the rank of any indicator that is in the top or bottom quartile (i.e. the best and worst performing 25% of organisations). This is normally green or red for performance indicators, and yellow and purple for indicators which don’t have a “better or worse” dimension to them, such as the size of an organisation.

As an example, there are 153 top-tier local authorities in England. The top 25% of ranks (1st to 38th) are in the top, or best performing, quartile. The bottom 25% of ranks (116th to 153rd) are in the bottom, or lowest performing, quartile. To avoid ambiguity, we tend to use language like “best performing” rather than “top”, as “top quartile” can be confusing when it comes to indicators like absence – does top mean the most absence, or the least?

We often go a step further in our analysis and divide the top and bottom quartiles into two, highlighting the best performing “octile” in dark green, the 2nd best performing octile in light green, and so on.

Tip 3: The importance of comparative trend data

Looking at trends over time is clearly an essential part of performance management. As well as showing whether things have changed, trend data helps you set realistic targets, monitor progress towards goals, and judge whether a change is a blip or linked to a significant event or seasonal pattern.

However, looking at your own trend data in isolation can often miss important context; you often need to see how it compares to the same trend in data for similar organisations. As an example, see the chart below which looks at the GCSE performance of a real local authority in recent years (and note the truncated y-axis which can exaggerate the size of the change):

Council A: % of pupils achieving at least grade 5 in both English and maths

On the surface, this looks like the local authority was getting better until 2021/22, then had a bad year in 2022/23. However, how does your interpretation change when we add the England average for the same time period?

Council A vs England: % of pupils achieving at least grade 5 in both English and maths

The scale of the drop in 2022/23 actually looks pretty similar to the England drop in the same period – so maybe it’s not so bad after all. As with any piece of data, this should prompt further questions: why did England and Council A drop so much in a year – was it really to do with the ability of the cohort? It seems unlikely that, nationally, there was such a big difference in ability or in teaching. And sure enough, the actual reason for the national drop was that Ofqual, the exams regulator, changed grade boundaries to return them to pre-pandemic levels, following a period of higher grades that resulted from the cancellation of exams and the use of teacher assessments instead.

Because of this unusual trend pattern, it’s quite hard to gauge from the chart exactly how well Council A was doing over this period. So again, we return to the use of ranks, using the same indicator, but showing how Council A ranks against other local authorities nationally. The chart below shows that, ignoring the 2020/21 period of teacher assessed grades, there’s actually a very positive picture for Council A, even in the period where results appeared to be falling.

Council A’s National Rank for % of pupils achieving at least grade 5 in both English and maths

All our performance dashboards include at least 4 years of trend data, comparing these to the trends from comparator groups such as national or regional averages.
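A simple way to add that comparative context is to look at each year-on-year change alongside the national change, as in this sketch with invented figures (they are not the real Council A or England numbers):

```python
# Hypothetical GCSE pass rates (% achieving grade 5+ in English and maths).
council = {"2019/20": 48.0, "2020/21": 55.0, "2021/22": 50.0, "2022/23": 44.0}
england = {"2019/20": 49.9, "2020/21": 57.0, "2021/22": 50.0, "2022/23": 45.3}

years = list(council)
for prev, curr in zip(years, years[1:]):
    c_change = council[curr] - council[prev]
    e_change = england[curr] - england[prev]
    # A drop that mirrors the national drop is context,
    # not necessarily a local performance problem.
    print(f"{curr}: council {c_change:+.1f} pts, England {e_change:+.1f} pts, "
          f"gap vs England {c_change - e_change:+.1f} pts")
```

In this made-up series the council drops 6.0 points in the final year, but England drops 4.7 points over the same period, so the genuinely local movement is only about 1.3 points – the same logic that makes the rank chart above so much more reassuring than the raw trend.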

Tip 4: The perils of threshold indicators

A threshold indicator is simply one that counts the proportion of cases, or of an activity, that meets a certain threshold. There are myriad examples in all forms of organisation: emails responded to within a target timeframe, cases completed before a statutory deadline, widgets that meet a certain quality threshold, and so on. These are commonly used in performance management but have one major drawback: once the threshold is missed, there is little incentive to keep working hard on the case. If missed cases are then ignored completely, they can drift and drift, and are more likely to lead to very unsatisfied customers, generating complaints and even negative media coverage.

As an example, imagine councils have a threshold target of no longer than 12 months for families to be on a waiting list for council accommodation. The two charts below show two different made-up councils. On the surface, Council B appears to be performing slightly better in terms of the achievement of the threshold indicator.

However, when we explore the data further in the charts below, we can see that Council B has a number of cases taking way too long – whereas Council A has no households waiting more than 18 months. The distribution of cases for Council B suggests that they are very focussed on meeting the threshold, then take their eyes off the ball once the threshold is missed. Although this is fabricated data, we do see this type of pattern in the real world, when there is concerted effort directly before the threshold, then a big drop off and long tail after it.

Months on housing waiting list for two example councils (target 12 months)

There are a couple of relatively simple ways to avoid falling into this trap:

  • Show the standard deviation alongside the threshold and average indicators. In this case, Council A has a standard deviation of 1.9 months, and Council B has a standard deviation of 4.6 months – so B has a much bigger spread of times on the waiting list.
  • Show the minimum and maximum values, or the number of values over another, higher threshold.
  • Plot the distribution of data on a histogram like those shown here, and focus on the long tail – what are the longest wait times, and what are the characteristics of these long waits?
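The points above can be sketched in a few lines of Python. The waiting times below are invented (they don’t reproduce the 1.9 and 4.6 month figures from the charts), but they show the same pattern: the council with the better threshold figure can still have the worse tail.

```python
from statistics import mean, pstdev

# Hypothetical months on the housing waiting list for two councils.
council_a = [6, 8, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14]  # tight spread
council_b = [4, 5, 6, 7, 8, 9, 10, 11, 11, 12, 24, 30]     # long tail

TARGET = 12  # months

for name, waits in [("Council A", council_a), ("Council B", council_b)]:
    within = sum(w <= TARGET for w in waits) / len(waits)
    print(f"{name}: {within:.0%} within target, mean {mean(waits):.1f}, "
          f"sd {pstdev(waits):.1f}, max {max(waits)} months")
```

With these made-up figures, Council B beats Council A on the threshold indicator (83% vs 75% within 12 months) yet has a far larger standard deviation and a maximum wait of 30 months – exactly the long tail the threshold figure hides.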

Tip 5: Deep dives under the headlines

The other risk of just looking at headline measures, be they threshold indicators, means, medians or otherwise, is that they often cover an entire population or organisation, and therefore lose the nuance of what’s really going on. The key here is to use the headline single figures as a starting point, then ask further questions:

  • Break down the headline indicators by different characteristics, geographical areas or other dimensions. Does an overall average mask pockets of underperformance? For example, in the case of a school with overall good exam results, perhaps certain demographic groups, subjects or individual teachers aren’t performing as well and need additional support.
  • For indicators that look good, think about who is missing out. If 90% of cases meet the target, what are the characteristics of the other 10%? Do these disproportionately fall into certain demographic groups or geographical areas? Is there a risk that we are focussing our energies on the people who shout loudest, rather than the most important or needy groups?
  • Data is great at showing you what’s happening, but less good at telling you why. When you see blips in a trend, ask those on the ground what caused it. When performance in a particular area has seen a big improvement, ask what measures were taken to drive that performance, and if they could be replicated elsewhere. If applicable, carry out more detailed surveys or focus groups with service users in an area where headline data doesn’t look good. And if you have the resource and your team has the skills, consider carrying out statistical analysis such as regression, to attempt to isolate the drivers that are causing a particular trend.
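As a small illustration of the first two points, with entirely made-up case records: the headline figure looks healthy, but breaking it down reveals one group well below it.

```python
from collections import defaultdict

# Hypothetical case records: (demographic group, whether the target was met).
cases = [
    ("group_x", True), ("group_x", True), ("group_x", True), ("group_x", True),
    ("group_y", True), ("group_y", False), ("group_y", False),
    ("group_z", True), ("group_z", True), ("group_z", False),
]

headline = sum(met for _, met in cases) / len(cases)
print(f"Headline: {headline:.0%} of cases met the target")

by_group = defaultdict(list)
for group, met in cases:
    by_group[group].append(met)

# The overall average can mask a pocket of underperformance.
for group, results in sorted(by_group.items()):
    print(f"  {group}: {sum(results) / len(results):.0%} met the target")
```

Here the headline is 70%, but the hypothetical group_y sits at 33% – the kind of pocket of underperformance that only a breakdown reveals, and the prompt for the “who is missing out?” questions above.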

Tip 6: Beware the editorial slant

OK, our final tip. As much as we like to think data is objective and impartial, whenever data is presented, there are various editorial decisions taken – sometimes consciously, sometimes not. These decisions include, but are not limited to:

  • Colours – For example, red can create a strong emotional reaction, so deciding the point at which lower performance is highlighted in red changes the viewer’s response. If we have missed a target by 0.1%, should that be red, or, say, amber?
  • Icons – Similarly, we might choose to use icons to show a sense of scale, or whether we are higher or lower than the competition. Do these icons appear regardless of the scale of the difference, or only when we are a certain level above or below?
  • Axes – Has the y-axis of a chart been truncated to make differences or changes look bigger?
  • Labelling – Are there callouts on the data to highlight particular things that the author thinks are interesting? How is the chart titled – objectively, like “Number of things”, or with subjective interpretation, like “Number of things is growing significantly”?
  • The data included or left out – We’ve discussed in this blog post the importance of looking at data in context. Does the analysis show trends over time, or comparisons to similar organisations? And if not, why not? Because the data doesn’t exist, or because the author is worried about the changing perception if it turns out that performance is, say, on a downward trend?

There’s a great example of this in The Correspondent that we often use to make this point – it is summarised below. See how your emotional response to the analysis changes through the three visualisations:

Source: https://thecorrespondent.com/664/how-maps-in-the-media-make-us-more-negative-about-migrants/87812829368-c203ad78

The point with all this is to accept that some judgements have been made when visualising the data, to consider whether the author of the analysis has a particular angle they want to portray, and to bear that in mind when forming your own response to the analysis. This is particularly true in the media and in political propaganda, but it is also the case in any data dashboard, chart or table you see in your work.

Final thoughts

We love working with data, and it’s an essential tool in your performance management armoury. However, it’s easy to fall into some common traps if you don’t have the right context, and you should feel confident in challenging the data, or asking for more, if you don’t have all the information you need to form a solid judgement.

In the LGA training, we also discuss how to build a supportive, but appropriately challenging, performance management culture, key lines of questioning, equalities, and much more. If you are a councillor interested in data training, get in touch with the LGA; the data course is very popular and the LGA plan to run similar events in future. We’ve put some of the lovely feedback we received from councillors below!

And if you are interested in embedding a supportive, data-driven culture across your organisation, then do get in touch with us at Mime; we can help you with data strategies, training on interpreting data, and producing intuitive data dashboards which empower your decision makers and make you truly data-led.
