Stepwise programming is a very useful way to prioritise when there are many variables at play.
The key benefit of the approach is that you do NOT try to understand and prioritise everything at once against everything else. Instead you break the problem down into very small pieces and move through each one methodically. You then come up with a ranking of the variables from most important to least.
And the approach scales well. You can do it on your own or you can get a lot of people to use the process at once to form a consensus about what issues really matter the most. I have used the approach for everything from setting priorities in retrospectives to ranking the capabilities needed in a team to structure more effective training.
An example of where you can use it
Let’s assume that you are working with a team of testers who work on projects. They have been complaining about being unloved and their stakeholders are questioning whether they actually add value to projects.
You do a retrospective with the team to reflect on where they think there could be improvement, and you have some meetings with your key stakeholders.
As a result you find out that the team could get better at communication, team building, consistent tool use, work quality, knowledge of the systems being tested, working with the developers instead of against them, finding critical defects instead of just cosmetic things, problem solving, collaboration, good manners and a range of other things.
If you decide on one thing to fix then everyone will complain you are not fixing the rest, but if you try to fix everything you will not get anything done. Even worse, if the team don’t see value in what you are fixing then they will not help and you will fail. Even worse again, if your customers don’t see value then they will keep escalating and complaining about everything and you will be too busy apologising (or hiding from them) to focus on fixing things.
So you need to prioritise, and this is where you can use stepwise programming.
Applying stepwise programming
The approach is simple. First you create a matrix with the items you want to prioritise listed in both the left hand column and the top row:
To get better at | Root cause analysis | Collaboration | System knowledge | Use of automation | Reporting status | Total |
Root cause analysis | X |  |  |  |  |  |
Collaboration |  | X |  |  |  |  |
System knowledge |  |  | X |  |  |  |
Use of automation |  |  |  | X |  |  |
Reporting status |  |  |  |  | X |  |
Next you start on the first row and compare the heading in the left-hand column with each heading in the top row.
- If the variable in the left-hand column is a higher priority than the one in the top row, then enter a 1. Otherwise enter a 0.
For example I think it is important for the team to get better at both root cause analysis and collaboration. But if I had to choose between them, then I would choose root cause analysis. So I would enter a 1 in the appropriate field. I then continue through the first row doing the same thing:
To get better at | Root cause analysis | Collaboration | System knowledge | Use of Automation | Reporting status | Total |
Root cause analysis | X | 1 | 1 | 0 | 1 | 3 |
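As a rough sketch, the single-voter row above can be expressed in code (Python here purely for illustration; the article itself uses no code, and the choices simply mirror the example row):

```python
# One voter fills in the "Root cause analysis" row.
# Each head-to-head choice is recorded as 1 if root cause analysis
# is the higher priority, or 0 if the other item wins.
comparisons = {
    "Collaboration": 1,       # root cause analysis chosen
    "System knowledge": 1,    # root cause analysis chosen
    "Use of automation": 0,   # use of automation chosen
    "Reporting status": 1,    # root cause analysis chosen
}

# The Total column is simply the number of head-to-heads won.
total = sum(comparisons.values())
print(total)  # prints 3, the row total in the table
```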
Similarly, if instead of doing the exercise on my own I had my whole team of six people do it, I would get each person to complete the row separately and then update the table with the total votes that people gave to root cause analysis compared with everything else:
To get better at | Root cause analysis | Collaboration | System knowledge | Use of Automation | Reporting status | Total |
Root cause analysis | X | 4 | 3 | 5 | 3 | 15 |
So I could say that the team believe that focusing on getting better at root cause analysis is more important than focusing on collaboration and they also believe that getting better at root cause analysis is more important than using automation to improve testing.
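A minimal sketch of how one cell is tallied with a team, assuming six voters; the ballots are invented purely to reproduce the 4–2 split shown in the table above:

```python
# With a team, each cell holds the number of votes the left-hand
# item received in that head-to-head comparison.
ballots = [
    "root cause analysis",  # voter 1
    "collaboration",        # voter 2
    "root cause analysis",  # voter 3
    "root cause analysis",  # voter 4
    "collaboration",        # voter 5
    "root cause analysis",  # voter 6
]

rca_votes = ballots.count("root cause analysis")  # cell in the root cause analysis row
collab_votes = ballots.count("collaboration")     # mirror cell in the collaboration row
print(rca_votes, collab_votes)  # prints 4 2
```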
But the real benefit comes when you have completed the whole table:
To get better at | Root cause analysis | Collaboration | System knowledge | Use of automation | Reporting status | Total |
Root cause analysis | X | 4 | 3 | 5 | 3 | 15 |
Collaboration | 2 | X | 3 | 3 | 5 | 13 |
System knowledge | 1 | 3 | X | 3 | 1 | 8 |
Use of automation | 1 | 3 | 3 | X | 5 | 12 |
Reporting status | 3 | 1 | 1 | 1 | X | 6 |
So you can see from the table that the team (as a group) believe the biggest benefit would come from getting better at root cause analysis. And if we were to pick a top three to focus on, we would pick root cause analysis, collaboration and use of automation.
The numbers may not add up in this table because I just made them up quickly. But even in the real world, some people seem to vote that (say) root cause analysis is more important than collaboration the first time you compare them, and then vote the opposite way the second time. To counter this, some teams only allow each comparison once and then simply put the remaining votes (6 – 5) in the field that has collaboration as the left-hand heading and root cause analysis as the top. But I don't do this, because I think even that flip-flopping can help you get a better understanding of where the team feels the value is.
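As an illustrative sketch, the row totals and the final ranking can be computed from the completed matrix; the numbers below are the example votes from the table:

```python
# Totals and ranking from the completed team matrix.
# None marks the diagonal: an item is never compared with itself.
votes = {
    "Root cause analysis": [None, 4, 3, 5, 3],
    "Collaboration":       [2, None, 3, 3, 5],
    "System knowledge":    [1, 3, None, 3, 1],
    "Use of automation":   [1, 3, 3, None, 5],
    "Reporting status":    [3, 1, 1, 1, None],
}

# Sum each row, skipping the diagonal, then sort high to low.
totals = {item: sum(v for v in row if v is not None)
          for item, row in votes.items()}
ranking = sorted(totals, key=totals.get, reverse=True)

for item in ranking:
    print(item, totals[item])  # prints Root cause analysis 15 first
```

The top three in this ranking match the ones picked out above from the table.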
In the real world, I often use this technique when there are between 10 and 20 variables to compare. I find it a really good way to prioritise when I am faced with both intangible stuff (values and preferences) and great complexity (too many variables for my little brain).
Of course it does not tell you how to fix the problems, but it does help you choose which ones to work on first.