When I was writing my last blog post and thinking about Natural Variation, I remembered this article by PuMP founder Stacey Barr. It focuses on the risks of assigning traffic lights, the Red, Amber and Green coloured icons, to performance measures in your reports and dashboards.
Her article also delves into the choice of these particular colours, but I’d like to focus on the traffic light system itself and the problems that can come from it. You can read the full article here, or read a shortened copy below:
Rules for allocating traffic light colours to measures are inconsistently defined and used.
I’ve seen several different comparisons used as the basis for allocating green, amber or red to performance measures. […] To really make good use of any traffic light system, we need very specific rules for assigning a traffic light colour to each performance measure.
[…]
Traffic light rules are statistically invalid assessments of current performance.
Traditional traffic lights lead us astray unknowingly because their rules use data in a way that draws inappropriate conclusions about performance. This month being better than last month, even if it’s by more than 10%, does not mean performance is better. It does not mean that the measure needs a green light. If your measure’s values typically vary between 50 and 70 from month to month, a 10% difference is just a natural part of the normal variation.
A professional statistician would never analyse data this way. So why should anyone else do it, and claim that it’s appropriate? Statisticians study the phenomenon of variation, because everything we measure has variation. This variation is caused by a bazillion different factors, tiny and gigantic, that interact in our complex world to take full control of performance away from us. We can expect nothing to be precise and predictable and within our complete control.
This natural variation needs to be understood before we can tell if a set of data has any signals we can draw conclusions about. Further still, everything has its own unique pattern and amount of variation, especially performance measures. That’s what the majority of traffic light rules ignore, and thereby end up telling us to act when no action is needed, or telling us all is fine when in fact there is a real performance problem emerging.
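To make the point above concrete, here is a small Python sketch of the kind of rule being criticised: colour each month by its percentage change versus the previous month. The data and the 10% threshold are invented for illustration, not taken from the article.

```python
# Invented example data: a measure that naturally varies between
# roughly 50 and 70, with no real change in underlying performance.
values = [62, 55, 68, 51, 66, 58, 70, 53, 64, 57, 69]

def naive_traffic_light(previous, current, threshold=0.10):
    """A naive rule: green if this month is more than 10% better
    than last month, red if more than 10% worse, amber otherwise."""
    change = (current - previous) / previous
    if change > threshold:
        return "green"
    if change < -threshold:
        return "red"
    return "amber"

lights = [naive_traffic_light(p, c) for p, c in zip(values, values[1:])]
print(lights)
# Every single month gets a red or a green light (5 greens, 5 reds,
# 0 ambers), even though the data is just routine variation.
```

The rule fires on every transition because the routine month-to-month swings in this data regularly exceed 10%, which is exactly the failure mode described: constant calls to act when no action is needed.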
Are there any sensible traffic lights?
Don’t get me wrong: I’m not saying we have to accept the current amount of natural variation in our measures. In fact, performance improvement is all about trying to understand what contributes to that variation and finding ways to reduce it. The more variation we can control or manage, the better performance becomes. So we can’t ignore it or pretend it shouldn’t be there. And neither can our methods of reporting measures (if we are honest about using measures to improve performance, that is).
This is why I always use, and always recommend you use, XmR charts. They report our measures with a full and proper appreciation of the variation those measures have. In fact, these charts quantify that pattern of variation for us. XmR charts also come with clear and unambiguous rules for signal detection, so we know exactly when performance changes and whether that change is a good one or an unacceptable one.
To find out more about XmR charts and their use in PuMP, get in touch [link].