We often say that we are focused on quality and yet only report on time and money when reviewing our projects.
Or we report on “the number of defects” but not on the actual “quality” of the system, nor on “our ability to maintain this system when we go live”.
I think this is because people often think that measuring quality and maintainability is hard. But here is an easy (if not fully robust) approach to measuring quality.
At each retrospective (or team meeting, or implementation planning meeting) we can ask the team to report on their own perception of the quality of the work they are producing. But rather than asking for an objective measure of quality, we ask how they view their work compared to last time (the last iteration, or a typical project done by the team).
To do this we ask a question and score the answer as “0” if the quality (or other factor) is the same as usual. If it is better (or a lot better) then the answer is +1 (or +2). On the other hand, if it is not as good as usual, then the answer is -1, or -2 if it is a lot worse than usual.
For example, we could have the following survey for the team to complete prior to the retrospective:
| When we deploy this into production … | -2 | -1 | 0 | +1 | +2 |
| --- | --- | --- | --- | --- | --- |
| The quality of what we produced will be … |  |  |  |  |  |
| I will be proud of what we produced |  |  |  |  |  |
| Production support’s feedback will be … |  |  |  |  |  |
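If you want to track these scores from one retrospective to the next, a small script can do the tallying. Here is a minimal sketch in Python; the question wording, team member names, and scores are all hypothetical, and the only convention it relies on is the -2 to +2 scale described above:

```python
from statistics import mean

# Hypothetical survey questions; the wording is illustrative only.
QUESTIONS = [
    "The quality of what we produced will be ...",
    "I will be proud of what we produced",
    "Production support's feedback will be ...",
]

def summarise_survey(responses):
    """Average each question's score across the team.

    `responses` maps a team member to their list of scores, one per
    question, each in the range -2 .. +2 (0 = same as usual,
    positive = better than usual, negative = worse).
    """
    summary = {}
    for i, question in enumerate(QUESTIONS):
        summary[question] = mean(answers[i] for answers in responses.values())
    return summary

# Example: three (made-up) team members rate the three questions.
responses = {
    "Asha":  [1, 0, -1],
    "Ben":   [0, 1,  0],
    "Carol": [2, 1, -1],
}

for question, avg in summarise_survey(responses).items():
    trend = "better" if avg > 0 else "worse" if avg < 0 else "same as usual"
    print(f"{avg:+.1f}  ({trend})  {question}")
```

A positive average suggests the team feels that area is trending better than usual; a negative one is a prompt for discussion at the retrospective, not a verdict.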
These ratings are not designed for a project office to use in assessing projects against each other, but rather for the project team themselves to use in gauging how they are travelling and how to tune their own performance.