I talked about setting a goal in my last article, and also referred to an approach called “GQM,” or “Goal – Question – Metric”. I thought that this time I would drill a little deeper into coming up with a goal.
How does the goal fit with the Question and the Metric?
The focus I have when I use GQM is to come up with a clear goal before I measure something.
- Once I have a clear goal that is understood by me and the others who are interested in the goal, then we ask questions about how we would know if we achieved our goal, how we would know if we were on track etc.
- Once we have questions that we could ask then we seek metrics that could help us answer the questions. We might also complement these with deciding who we could ask for an opinion or other qualitative “measures”.
The key is that we do not start with the metrics or measures, we start with a reason to measure and then we ask questions about how we would achieve our goal, BEFORE we worry about things like leading/lagging indicators, qualitative/quantitative data, confidence intervals or how to visualise our results.
"It is better to have a rough measure that supports the right measurement goal than a precise measure that does not support a goal" – James King, 4 December 2022, while writing this blog.
A lazy goal
Rather than just reporting on velocity or outstanding bugs I want to have a goal.
My goal might be to “go really fast” or to “have fewer bugs”.
This is a pretty rough starting point, but it is better than measuring velocity without knowing why. Now at least the team can debate the value of velocity, cycle time or team gut feel for knowing if they are getting things done faster. They can also share their open derision for the whole idea of speeding up when what they really should be doing is removing impediments or improving quality.
So my lazy goal is already helpful, even if only a little bit.
When I tell the team this though, they might challenge “go really fast” and turn it into “identify impediments and issues that are slowing the team down” or “improve expectation setting and estimating” or something similar. We are still not there, but starting with the lazy goal and then realising it is not the right one sets us on the right track.
A SMART Goal
SMART goals have been all the rage for a long time now. I would say that they are less a fad and more an ongoing custom. I have heard slightly different versions over the years, but more or less (i.e. not word for word) they are something like:
- Specific – something concrete that you can see happened or did not
- Measurable – you can score/measure/rate/assess the goal
- Actionable – something the team can take action on themselves and achieve through that action. Note that I prefer actionable to achievable because it should be both something that is in their control and something that can reasonably be expected to happen within the commitments, resources and constraints of the team
- Relevant – the goal makes sense to the team and is relevant because it aligns to a higher goal, a strategy, a service agreement, a happier customer or a chance to make life easier for the team.
- Time based – there is a specific point in time where we can measure our progress. Eg “do this by a date” or “reduce this to a smaller number per month”.
I have nothing against SMART goals, although I do think people jump to a SMART goal too quickly sometimes when they should still be clarifying what their goal means and what questions they might ask to achieve it.
For example, the team might move from “go faster” to “increase velocity by 12.34% by 1 October at 3:15pm Eastern Australian Summer Time”. The goal looks smarter but might still be irrelevant or meaningless because the team didn’t take time to explore the goal properly.
So maybe the team could at least turn each of the letters of “SMART” into some questions like “what specifically…?” or “how is that relevant to our strategy?” etc. I believe we can do better than that, but for the moment, let’s assume that we have a SMART goal instead of a lazy goal.
A measurement table
If we have a SMART goal then it is relatively easy to turn that goal into something that can be measured and discussed as we move closer to the goal.
Let’s say we have a goal of “Reduce the number of bugs found in production within 60 days of each release where the feature containing the bug was worked on.”
We are nearly there, so we create a table with a couple of questions in it (disguised as headings). We complete this table and then discuss it in our retrospectives.
| Goal | Signal/source to measure | Target | Current baseline | Next check-in |
| --- | --- | --- | --- | --- |
| Reduce bugs in production | Bugs raised by support/product team and stored in backlog or resolved | <6 actual bugs raised within 60 days of the release | 10 within 60 days of each release | Each retro, since we release per sprint |
| Better allocate root causes to bugs | Bugs closed in the backlog and reviewed in the sprint review | 100% of bugs closed | 100% super-critical and roughly 10% of others | Each sprint review where the PO presents the bug report |
| Resolve critical bugs within one lunar cycle | Bugs in the backlog (that are at any point viewed as critical) | 95% resolved within 5 days of being raised and included in the next sprint release | No baseline, so assume 0 | End-of-quarter OKR review |
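For teams that keep these numbers in a tool rather than a document, each table row can be held as a simple record and checked automatically at retro time. A minimal sketch in Python – the field names and the `on_track` check are illustrative, using the first row of the table above:

```python
from dataclasses import dataclass

@dataclass
class MeasurementRow:
    goal: str
    signal: str    # where the measurement comes from
    target: int    # "<6 bugs within 60 days" -> target = 6, lower is better
    baseline: int  # where we started

def on_track(row: MeasurementRow, current: int) -> bool:
    """Crude check for a '<target' style goal where lower is better."""
    return current < row.target

row = MeasurementRow(
    goal="Reduce bugs in production",
    signal="Bugs raised by support/product team",
    target=6,
    baseline=10,
)
print(on_track(row, current=8))  # → False (still above the target)
```

A real version would also cover “higher is better” targets (like the 95% resolution row), but the point is only that a SMART goal is concrete enough to check mechanically.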
Slowing down with the “GQM goal approach”
So a lazy goal might have been enough and a SMART goal is our gold standard. But I mentioned that we can have SMART goals that are not as well thought through as they might have been.
The GQM approach usually takes a little longer between “Lazy goal” and “SMART Goals in Measurement table” because we refine the goal for a while until we think it is good enough.
This usually results in a goal tree rather than a simple goal, but it does not have to.
The way I learned GQM (last century) was to go from a goal that the team aims for (go faster or reduce bugs) to a goal for the measuring that we might do.
If we assume that we want to go faster, then that is great. But measuring something will not actually make us go faster. In the same way, measuring whether we resolve bugs and improve quality will not actually result in fewer bugs.
The goal of measuring things
The goal of measuring things is to:
- Help someone make a decision;
- Create a warning if we go beyond some tolerance level;
- Keep something visible to us so we can be reminded to stick with a new habit or practice; or
- BS people to make them think we are doing a great job (not part of the recommended GQM approach).
So let’s say we want to go faster and have fewer bugs. Why would we measure anything?
- We might want to help the team decide how to prioritise their work or we might help our business customer decide if the bugs are under control;
- We might want to say that we will just let the team manage their bugs, but if we get 1 super-critical or 3 critical or 2 critical and a major bug then we will stop doing other work and focus on clearing our bugs; or
- We might want to report our bugs next to our velocity and sprint goal discussions so that we remember to focus on bugs.
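The tolerance rule in the second bullet is concrete enough to encode as a warning. A hedged sketch – the severity names and trigger counts are just the ones used above, not a standard:

```python
def stop_the_line(bug_counts: dict) -> bool:
    """Return True when the team should stop other work and clear bugs.

    Trigger: 1 super-critical, or 3 critical, or 2 critical plus a major bug.
    """
    super_critical = bug_counts.get("super-critical", 0)
    critical = bug_counts.get("critical", 0)
    major = bug_counts.get("major", 0)
    return (
        super_critical >= 1
        or critical >= 3
        or (critical >= 2 and major >= 1)
    )

print(stop_the_line({"critical": 2}))              # → False, still within tolerance
print(stop_the_line({"critical": 2, "major": 1}))  # → True, tolerance exceeded
```

The useful part is not the code but the conversation that produces the thresholds: the team has agreed in advance what “bugs are no longer under control” means.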
With this in mind, the first questions to ask when setting up our metrics are:
- Whose perspective are we measuring this from?
- And what will the metrics/report/information help them to do?
So instead of “go faster” or “reduce bugs”, we add a perspective:
- We want to go faster from the perspective of the business customer; or
- We want to reduce the bugs found from the perspective of the first line support team.
Notice that our goal is now more specific and we have a better guide to the thing we might measure.
- Velocity is of no interest to the business customer. Perhaps they would rather measure the time between “request made” and “story ready to release” or perhaps they might prefer “time from commitment to release.”
- Bugs found in the sprint is not of any interest to the first line support team. They would just see this as part of the creative (development) process. They would rather measure “bugs impacting someone,” or “bugs identified in production.”
Once we settle on whose perspective we are taking, we just ask questions about what that person/team/perspective-taker might ask in order to get the information they need to make decisions, see a benefit etc.
So now we might have:
- The questions a business customer might ask about speed are:
- How long does it typically take to get a minor request actioned?
- How long does it take from when we ask for something to when it is committed to a sprint/release/baking competition?
- What is still in the backlog that is taking a lot of time?
- The questions the call center might ask about bugs are:
- What bugs have impacted customers? Have they been fixed?
- How many stupid-workarounds are still out there?
- How long will it usually take to get something fixed if it is really critical?
Finally – we can come back to the measures that actually answer the questions we are asking. Then we might have:
- Measure how long OKR-related epics take from commitment to MVP release, so that the business person can set expectations and stop hoping things will appear when any passing rabbit would know it is not happening
- Alert if a critical bug is taking longer than 3 days to fix, so we can jump on it before it reaches 5 days
- Report on how long critical bugs take on average so that we can start planning SLAs.
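The alert and the report in the last two bullets can both be derived from the same bug records. A small sketch – the bug data, field layout and thresholds are made up for illustration:

```python
from datetime import date

# Illustrative bug records: (severity, date raised, date resolved or None if open)
bugs = [
    ("critical", date(2022, 11, 1), date(2022, 11, 3)),
    ("critical", date(2022, 11, 20), None),   # still open
    ("major",    date(2022, 11, 5), date(2022, 11, 12)),
]

def overdue_criticals(bugs, today, alert_after_days=3):
    """Critical bugs still open for longer than the alert threshold."""
    return [
        b for b in bugs
        if b[0] == "critical" and b[2] is None
        and (today - b[1]).days > alert_after_days
    ]

def average_critical_resolution_days(bugs):
    """Average days to resolve the critical bugs that have been closed."""
    closed = [(b[2] - b[1]).days for b in bugs if b[0] == "critical" and b[2]]
    return sum(closed) / len(closed) if closed else None

today = date(2022, 11, 25)
print(len(overdue_criticals(bugs, today)))     # → 1 bug past the 3-day alert line
print(average_critical_resolution_days(bugs))  # → 2.0 days on average so far
```

Note that the alert (a warning when we drift past a tolerance) and the report (input to an SLA decision) serve two of the measurement goals listed earlier, which is exactly why they made the cut.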
OK – I am sure you can word them better, but at least we can now prioritise the things that we want to measure and explain the reason we are measuring them.
After that, where one or two measures really matter, we can focus on them while not expending our effort and attention on things less relevant.
At this point it might be worth looking at turning the few, valuable measures that we have into measures attached to SMART goals and to create a measurement table.
So that is where we will leave GQM for now.
But perhaps another time we can look at the old school wording of goals and the value of turning them into goal trees and measurement trees.