Quality Tools – Run Charts – Part 3: Shifts

This post is part 3 in our series on run charts. This one addresses the run chart rule for shifts. As an example, we look at data from an improvement project team watching their time interval data as they implement a change in their process for obtaining a 12 lead ECG on patients walking into the ED with chest pain.
(Duration = 12 min. 00 sec.)

Notes and Resources

  • An excellent summary article on use of run charts and application of run chart rules in healthcare – Perla RJ, Provost LP, Murray SK: The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Quality &amp; Safety. 2011;20:46-51. doi:10.1136/bmjqs.2009.037895. https://goo.gl/MaEcCL
  • Posted September 2018


In part two of this series on run charts, we introduced run chart rules and how they’re used to point out statistical signals that suggest something unusual is taking place in the process you’re watching. To be a bit more technical, these statistical signals are strong indicators of non-random process activity. Also in part 2, I presented one of the run chart rules – the one for identifying a trend.

When the variations we see are just random, that’s a pretty good sign that the process is operating in a very consistent manner. Now, it may be consistently showing wide variability in performance, it may be consistently good, it may be consistently bad, or it may be consistently somewhere in between. The point is that in the absence of statistical signals pointing out non-random behavior, the process is very likely to be performing consistently at whatever level of performance it happens to have settled itself into. The random variation we see in that consistently performing process is sometimes called common cause variation. It’s just the common everyday ‘stuff’ or ‘noise’ going on in the background that produces the variation.

In contrast, when the variation of the process is non-random, those statistical signals will start to show up in your run chart. That non-random variation is often referred to as ‘special cause’ variation. Something different seems to be going on. It’s a flag for someone to take a closer look to see what’s happening. If that something going on is bad, you’ll want to find a way to stop it and prevent it from happening again. If it’s good, you’ll want to understand why and maybe share it and try to make it happen more often or use it to help establish a better performing new normal.

So here, in part three of our series on run charts, is the next run chart rule – the shift. A shift has occurred when you have six or more data points in a row that all land above the centerline – or six or more in a row that are all below the centerline.

When a process just has common everyday variation going on, it is very unlikely, statistically speaking, that six or more data points in a row will all land above that centerline, or all below it.
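To see why, note that when the centerline is the median, any given point has roughly a 50/50 chance of landing above it, so a particular run of six on one side has a probability of about (1/2)^6, or roughly 1.6% – rare enough under purely random variation to treat as a signal. Here is a minimal sketch of how that shift check could be automated. This is illustrative code, not part of the original post; the function name is mine, and I’ve assumed the common run chart convention that points landing exactly on the centerline neither extend nor break a run.

```python
def shift_signal(values, centerline, run_length=6):
    """Return True if `values` contains `run_length` or more
    consecutive points all above, or all below, the centerline.

    Points exactly on the centerline are skipped: by the usual
    run chart convention they neither extend nor break a run.
    """
    run = 0   # length of the current same-side run
    side = 0  # +1 = above centerline, -1 = below, 0 = no run yet
    for v in values:
        if v == centerline:
            continue  # ignore points on the centerline
        s = 1 if v > centerline else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False
```

Fed a chronological series of time intervals plus the baseline centerline, this flags a shift as soon as the sixth consecutive same-side point arrives.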

Let’s consider an example. A local emergency department started a rapid cycle improvement project to reduce their arrival-at-triage to 12 lead ECG time on STEMI patients that presented as walk-ins. Looking at data from their STEMI registry records, they had been averaging 14 minutes 21 seconds, or about 14.4 decimal minutes. Decimal minutes is the way that programs like Excel will often show fractions of a minute. The ED staff leadership thought they could do better, so they formed an ad hoc QI project team to take this on. The cardiovascular nurse coordinator, having some strong QI training, knew that once-a-quarter data points were not going to be frequent enough for their rapid cycle improvement process. They wanted to make an impact in the next couple of weeks. So they decided to use the data they entered into their STEMI registry to plot their triage to ECG times for every eligible case that comes in. Each case would be plotted as a new data point on their run chart in chronological order. They have roughly 100 chest pain cases a week come into their ED as walk-ins, so they used the last 100 consecutive cases to establish a baseline. A baseline defines how the process is operating now, so that any changes that come in the future can be compared against it to see if the performance of the process has changed. An average or median is calculated for the performance during the baseline period – and that becomes the value of the centerline that’s plotted on the run chart, as I described in part 1 of this series.
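As a quick illustration of the decimal-minutes conversion mentioned above (the function name is mine, not from the post):

```python
def to_decimal_minutes(minutes, seconds):
    """Convert a minutes:seconds interval to decimal minutes,
    the fractional format programs like Excel often display."""
    return minutes + seconds / 60

# The baseline average of 14 min 21 sec works out to 14.35
# decimal minutes, which shows as 14.4 at one decimal place.
baseline = to_decimal_minutes(14, 21)
```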

Here is what their baseline run chart looks like. It is important to note that they were not making any changes to their process during the time the data for these last 100 consecutive cases was being collected. They applied the complete set of run chart rules and did not see any evidence of special cause variation in those 100 cases used for the baseline. The average arrival-at-triage to first 12 lead ECG time interval for these 100 baseline cases was 14.4 minutes, and that is the value you see for the centerline.

The team looked at the peer-reviewed scientific literature and thought back on some of the presentations they have seen at conferences and articles in trade journals to identify some best practices on this process that they thought they could apply to their ED. They took what they thought were the most promising ideas that could be applied to their setting, adjusted those benchmarked processes to fit their circumstances specifically, and then implemented those changes. Remember, the baseline cases were the first 100 cases.

The QI project team, by introducing changes to their process for how and when a 12 lead ECG is acquired at the ED triage desk, is intentionally trying to create non-random behavior. They are hoping that the status quo from the baseline results is disrupted. They are hoping to provoke favorable special cause variation showing up with -shorter- triage to ECG time intervals.

We want to know about a change in process behavior as soon as possible after it starts to happen. We watch as each new case is added to our run chart, anxiously hoping to see those statistical signals tell us when a significant process change has started to take place. If our process change is not effective, we will not see the favorable special cause variation show up. Remember, monitoring a run chart is like watching a movie. Each data point is like a new frame in that movie.

So let’s look at the run chart movie. It will start out with the baseline data in view and we’ll start adding new data points. I’ve obviously already added all of the data to the chart, but for purposes of this example, imagine that we are accelerating time and watching new points come into view as cases are entered into the STEMI registry. The data will scroll along to the left so that the newest data shows up on the right side of the graph.

We start out seeing data points 81 through 100 on the screen. These are the last 20 data points from the baseline period. As data is added to the chart, the chart will move to keep the most recent 20 or so data points in view. This is called a moving data frame. The new process is implemented and the results from the new process will start with case number 101. In this simulation, the data is being added pretty quickly, but soon we can see that fewer and fewer data points are showing up above the centerline. This is the shift that we have been talking about. The ED arrival to 1st ECG times are getting shorter, and as a result, the position of the data on the graph is shifting downwards. It’s pretty easy to see this well after the change has taken place. But we want to know as soon as possible so that we can lock in the process change – or, if the shift is bad, terminate the process change and try something else. The run chart rule for detecting a statistically significant shift gives us that early indication.

It can be hard to tell if and when a shift first occurs. Remember, it is at least six consecutive data points above the centerline or six consecutive points below the centerline. It’s often pretty obvious to see the shift well after it has occurred, but we want to know as soon as possible. So let’s watch the data flow again and apply the run chart rule for detecting a shift as the new data is added along the way. Remember, we are looking to see whether six or more data points in a row show up all above the centerline or all below the centerline.

We start out seeing data points 81 through 100 from the baseline period, and we’ll watch for instances where several data points in a row are above or below the centerline. Here we see several. 1 2 3 4 5. Not enough. Moving on, here is another grouping below the centerline. 1 2 3 4 5 6 7. So it’s case number 162 where the first set of six or more consecutive data points below the centerline first appears. This is more objective and reliable than just eyeballing it in all but the most obvious scenarios. But even with just our eyes, we can see that the shift goes even lower over time, representing even shorter time intervals.
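The same scan can be done programmatically, returning the case number at which the rule is first met. This is a sketch under my own assumptions – the function name and the synthetic data below are mine, not the team’s actual registry values; it follows the convention that points exactly on the centerline neither extend nor break a run.

```python
def first_shift_case(times, centerline, run_length=6, start_case=1):
    """Return the case number at which the shift rule is first met
    (the run_length-th consecutive same-side point), or None if it
    never is. Case numbering begins at `start_case`."""
    run, side = 0, 0
    for case, v in enumerate(times, start=start_case):
        if v == centerline:
            continue  # on-centerline points are skipped
        s = 1 if v > centerline else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return case
    return None
```

In the team’s data, a function like this would report case 162 – the same answer the manual count gave, but available the moment the sixth consecutive below-centerline point is entered.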

I’m going to show you the same data again, but in a different way that I think is more powerful. Instead of adjusting the view of the run chart to show the same amount of data each time as new data points are added – which is sometimes called a rolling or moving data frame – we are going to make a longer graph with space already added to hold new data in the same view. The advantage is that your eye does not have to readjust to a new view and take in the new data at the same time. You can focus on just the new data. I think it helps you see how the new data compares to the older data much better. I also think having the data points more compressed together when you have a lot of data helps your eye appreciate the overall changes rather than just focusing on the last few data points. Let’s take a look…

This first section of data shown here has the first 100 cases in the baseline period. The variation you’re seeing in that baseline period is common cause variation in the process – just the normal everyday ‘stuff’ or ‘noise’ going on in the background. In just a moment, data that came in after the improvement intervention began will start to be displayed. Here we go. The point where the shift criterion is first met is right now! Not all that obvious to the eye alone at this early stage. As more of the data scrolls into view, you begin to see, visually, how the shift downwards takes place. You can see how the performance settles into a new normal range. This compressed view makes this type of visualization possible with larger sets of data points. This variation shown as the downward shift is one type of ‘special cause’ variation that provides a statistical signal that the process is performing differently – in this case, differently for the better – which is what we were hoping to see when we made the change in the process for how and when 12 lead ECGs were captured on ED walk-in cases presenting with chest pain.

So there you have it. The run chart rule for shifts and perhaps a better appreciation for some options in how your run chart is formatted. In the next post, we will take a look at some more run chart rules.

On the ImproveTheSystem.com website page where this vlog is embedded, scroll down to see some notes and links to other resources that expand on the topic just presented. A complete transcript of what’s presented here is also provided on that page.

If you have any questions or comments, please contact me directly at M I C – Mic at Improve The System.com. Please feel free to reach out and I will do my best to reply to every inquiry.

Thanks for watching!