I have been thinking about how we measure our performance and how we track our goals at work.
Specifically, I want to look at how we would use assessment in teams that are customer-focused, product-led and/or agile. There are plenty of good books in each of these domains and they have a lot of good techniques, tips and even philosophical concepts to help me.
However, many of the books seem to look only at product metrics or only at agile delivery. So I thought that I would take a step back and look at what these teams all have in common, and then look at how we should approach assessing our teams and our goal progress from that perspective.
One thing all these teams have in common is that work is driven by empowered, self-organising teams. I know that R Simons has done some great work on how this might alter the way we implement reporting and assessment.
But then I thought about something else these teams have in common – continuous learning is core to the way product-led teams, customer experience teams and agile teams all plan and manage their work.
Then I also started thinking that assessment and measurement are really learning-focused processes. Why would you start measuring and assessing something if it is not for the purpose of ongoing learning?
With this in mind I thought I would look at how experts in learning (and teaching) might approach measuring our work, if it was up to them.

A rudimentary starting point – The Kirkpatrick model
The job of a teacher is to teach and the job of a student is to learn.
A simplistic approach to assessing learning would therefore be to test whether the student learned something they were meant to learn.
If the student learned something then they can move on to a new topic, and if they did not, then I guess they can give up or go back to try again. The teacher uses the testing results to make some decisions: they will keep teaching the same way if the students all passed the test, and change their approach if people did not seem to understand some of the concepts.
I guess if we applied this to a work context then measuring things would be simple. We would ensure that teams had clear goals (set by themselves ideally, but sometimes perhaps inflicted on them). Then we would test whether the team hit their goals. If they do, they set a new goal and move on; if not, they either quit or try again.
Leaders, coaches and others can then keep doing what they are doing if the team is hitting their goals or think about how best to support them if they are missing some core parts of their goals.
Measuring the impact of teaching is a little more complex though, and so teachers have gone beyond just testing whether a student passes a regular test. For example, you might say that a Geography lesson was unsuccessful even if the students did learn what the teacher told them to learn. What if the lesson was absorbed but the student could not apply what they learned later in different contexts? What if the student started to hate Geography and stopped learning in the future, missing out on the wonders of contour lines and the effect of hills on climate and people? What if they mastered the class but then found out that what they learned was fundamentally incorrect?
A man called Donald Kirkpatrick tried to tackle this complexity with a four-stage model that is now over 50 years old, but still in use.
While Kirkpatrick was looking specifically at the success of training, I think we could use the same approach to assess the success of the work that agile teams do, since agile teams are learning teams.
| Level | Original meaning | Applied to team delivery | Applied to customer value |
| --- | --- | --- | --- |
| 1. Reaction | How did the students rate the class? | What did the team think of its last sprint? How do the team feel about their work and culture? | What do customers and stakeholders think of our work? |
| 2. Learning goal | Was the intended learning acquired? | Was the intended goal achieved? This could be a definition of done, a sprint goal or an OKR. | Was the stakeholder’s goal achieved? |
| 3. Behaviour | How well do students apply what they learned? | Are teams applying their learning from customers and their own reflection? | Are customers actually using what we produced? |
| 4. Results | Did applying the learning lead to the results we wanted? | Is what the team produces creating the outcomes and value we want? | Is using our product solving the Jobs To Be Done? Are customer and stakeholder outcomes being achieved? |
I think if we tried to apply this lens to our work, we would start to see where our (potential) metrics are helping us to learn and improve.
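To make that audit concrete, here is a minimal sketch in Python. The measures, levels and audiences are invented examples, not a standard taxonomy; the point is that tagging each existing measure with the Kirkpatrick level it informs, and with who is meant to learn from it, makes the blind spots visible:

```python
# A minimal sketch: tag each measure a team already collects with the
# Kirkpatrick level it informs and the audience meant to learn from it.
# The measures and audiences below are invented examples, not a standard.
from collections import defaultdict

KIRKPATRICK_LEVELS = {1: "Reaction", 2: "Learning goal", 3: "Behaviour", 4: "Results"}

measures = [
    {"name": "Retro happiness vote",  "level": 1, "audience": "team"},
    {"name": "Sprint goal achieved?", "level": 2, "audience": "team"},
    {"name": "Burn-down chart",       "level": 2, "audience": "team, stakeholders"},
    {"name": "Feature adoption rate", "level": 3, "audience": "product manager"},
    # Nothing recorded at level 4 yet - the report below will flag that gap.
]

by_level = defaultdict(list)
for m in measures:
    by_level[m["level"]].append(m["name"])

for level, label in KIRKPATRICK_LEVELS.items():
    names = by_level[level] or ["(no measures - a blind spot)"]
    print(f"Level {level} ({label}): {', '.join(names)}")
```

In this invented example the report immediately shows measures clustering at levels 1 and 2, with nothing at level 4 – which is exactly the blind spot I come back to below.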
Many teams do stand-ups and regular retrospectives. These meetings are valuable but where do they fit into the “dodgy reinterpretation” of the Kirkpatrick Model?
Some teams limit their discussions to whether they feel they are on track and how they felt a sprint went. In other words, they are limited to learning and improving based on their own reactions. More mature teams will include a showcase with stakeholders to get their reactions, and will focus both stand-ups and retrospectives on the achievement of their goals. Each learning cycle (stand-up or retro) then generates a healthy discussion of team reactions, an assessment of whether (and how) the team achieved its goals, and the setting of new goals based on what was concluded.
In this context, I think we can start to assess the value of the team’s ongoing assessment processes:
- Team meetings (planning meetings, stand-ups, showcases, sprint reviews, retros, scrum of scrums – whatever the team is using to learn, plan and improve).
- Team metrics such as burn-down charts, cycle times, velocity, quality standards and customer feedback (a small sketch of computing one such metric follows this list)
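As an illustration of the second group, here is a small sketch of computing one of those metrics – cycle time – from ticket timestamps. The ticket fields and dates are invented for the example; a real team would pull them from their own tracker’s export:

```python
# Illustrative only: compute cycle time (start of work to completion)
# from ticket timestamps. The fields and dates are invented examples;
# a real team would pull these from their own tracker's export.
from datetime import datetime
from statistics import median

tickets = [
    {"id": "T-1", "started": "2024-05-01", "done": "2024-05-03"},
    {"id": "T-2", "started": "2024-05-02", "done": "2024-05-09"},
    {"id": "T-3", "started": "2024-05-06", "done": "2024-05-07"},
]

def cycle_time_days(ticket: dict) -> int:
    """Days between starting work on a ticket and finishing it."""
    started = datetime.fromisoformat(ticket["started"])
    done = datetime.fromisoformat(ticket["done"])
    return (done - started).days

times = [cycle_time_days(t) for t in tickets]
print(f"median cycle time: {median(times)} days")  # median of [2, 7, 1] -> 2
```

The number itself is only useful if it feeds a learning conversation – for example, a retro question about why T-2 took several times as long as the others.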
There are three further complications though, each of which I come back to below.
- Back in Kirkpatrick’s time, people were presumably assessing multiple training courses or classes; today we are working with multiple agile teams;
- Stand-ups and burn-down charts are great for understanding team reactions and potentially the achievement of team goals. However, if these are our primary measurement approaches, they leave us blind to whether customers use our shiny new features and whether we are achieving the outcomes that our teams, stakeholders and customers want in the longer term; and
- Traditional project measurement processes are often not built for ongoing team learning or customer value.
Multiple teams
Many organisations are now operating with one or more portfolios of product teams. Others have multiple teams in a program or value stream and others have a gaggle of semi-dependent teams.
If we want to apply an agile way of measuring and learning then we should remain loyal to the core principles of agility. These include empowerment, transparency and a focus on continual delivery of value.
This creates a challenge for scaling agile mindsets and practices. But the struggle to understand what you want to learn (measure) and how to put that into practice also adds substantial value for both the team and its stakeholders – I believe the exercise is a valuable end in itself.
An incomplete picture
If a team focuses entirely on showcases, internal discussions and measuring throughput (i.e. typical scrum ceremonies, supplemented by a story wall and a burn-down chart) then they are learning from a very limited understanding of their work.
To set up an agile team for success, you need to include some kind of feedback cycle that informs the team about what happened when they deployed their work. They need to learn about whether customers liked what they got, whether they actually used it on a regular basis and whether it helped them achieve their goals.
These things might be assessed at a team or feature level, but they are more often assessed at a product or portfolio level. Regardless, the team and the other decision makers must gain access to this information if they are to learn from it and make their own empowered decisions.
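A level 3 signal does not have to be sophisticated. The sketch below counts the share of active customers who actually touched a newly released feature, using an invented flat event log; real product analytics would be richer, but the learning question is the same:

```python
# Illustrative level 3 measure ("are customers actually using it?"):
# the share of active customers who used a new feature after release.
# The event log and field names are invented, not a real analytics schema.
events = [
    {"customer": "acme",    "feature": "bulk-export"},
    {"customer": "acme",    "feature": "search"},
    {"customer": "globex",  "feature": "search"},
    {"customer": "initech", "feature": "bulk-export"},
]

active_customers = {e["customer"] for e in events}
adopters = {e["customer"] for e in events if e["feature"] == "bulk-export"}

adoption = len(adopters) / len(active_customers)
print(f"bulk-export adoption: {adoption:.0%} of active customers")  # -> 67%
```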
There is also a significant impact on the agile attitude to testing. In an old-school team, people often completed their work and then someone assessed whether it “passed the test.” But in an agile team, testing is a core part of the team’s learning. We do not just test earlier to avoid risk but also to help the team learn to make better decisions about what creates quality and customer joy, and what typically leads to issues and rework.
So testing is no longer a pass/fail test but an ongoing assessment process that could also be applied with the “dodgy reinterpretation” of the Kirkpatrick Model. In agile teams, testing is learning – and since it is learning it should actually help the team (and stakeholders) learn continuously.
Thus we can apply the basic concept of basing our measures on the value they create for the customer of the measurement, asking three questions (a sketch of recording the answers follows the list):
- Who is going to learn from this specific measure?
- What will they learn?
- Why is it important that they learn that? (And is it worth the effort?)
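One way to keep yourself honest is to record the answers to those three questions alongside each measure, and to treat any measure with a blank answer as a candidate for retirement. A hypothetical sketch (the measures and answers are just examples):

```python
# A hypothetical "measure charter": every measure must name who learns
# from it, what they learn, and what action that learning enables.
# A measure no one will act on is a candidate for retirement.
from dataclasses import dataclass

@dataclass
class MeasureCharter:
    measure: str
    who_learns: str
    what_they_learn: str
    action_enabled: str

    def is_worth_keeping(self) -> bool:
        # Blank answers mean no one has claimed the learning or the action.
        return all([self.who_learns, self.what_they_learn, self.action_enabled])

velocity = MeasureCharter(
    measure="Velocity",
    who_learns="the team",
    what_they_learn="how much scope realistically fits a sprint",
    action_enabled="right-size the next sprint plan",
)
legacy_report = MeasureCharter("Weekly RAG status", "", "", "")

print(velocity.is_worth_keeping())       # True
print(legacy_report.is_worth_keeping())  # False
```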
However this might not necessarily result in exactly the same measures. You might or might not have a showcase across multiple teams, a shared burn-down chart and so on. Instead you need to go back to the “dodgy reinterpretation” of the Kirkpatrick model and ask what you want to learn.
The further you go through the levels (1-4) and the more you want to learn from customers rather than opinionated team members, the harder it will get. This means that you need to look at the real value and the real cost of setting up, using and applying the measures you put in place.
One conclusion I reached as a trainer was that measurement only mattered if you or someone else was going to act on it.
Traditional organisational measures may or may not apply
When I first started helping teams become more agile, we often ran into organisational metrics that did not align with the new way we wanted to work.
Sometimes we almost had to employ a “translator” who let the team get on with their work but then tried to reverse engineer reporting in the old format for steering committees and PMOs.
This did not work too well, so some people just claimed that agile teams were immune to the organisation’s “outdated bureaucracy.” I even remember telling stakeholders that if they really cared what the team was up to, then they should join the team ceremonies. The more they cared, the more time they should spend.
I still think there is a value in telling people to go and see the team in action rather than looking at charts that were produced a week ago.
I also recognise that sometimes we need to report in a way that does not actually help the team. Some organisations have complex reporting obligations around things like OPEX, CAPEX, anti-crime reporting and (in one of my jobs) ensuring that they still qualified for the R&D tax breaks and funding they received. Where this is the case, there is an argument for telling the team to suck it up and provide the required information. Part of the value they create (and part of their job) is to help the organisation survive and thrive. The difference in an agile team is that people empowered to make decisions need to understand the criteria and rationale for those decisions. In other words, we need to explain why we want to collect and track information that is not used by the team itself.
Having conceded these points though – I think we can do better nowadays. Rather than just saying “come and talk” or just applying last year’s organisational measures again, we should step back and question what we hope to learn from those measures and who is actually going to interpret and apply those lessons.
If you sit down and spend some time thinking about using your assessments to create learning, you will probably come up with something better than a dodgy reinterpretation of the old Kirkpatrick model. But even that dodgy reinterpretation is better than accepting a default of doing what we used to do, or applying some model that allegedly worked somewhere else, without thinking about how it will be useful in the context that you work in.
It will of course take work and experimentation, but I believe that the effort will be worthwhile because it will enable faster learning across the different teams in your organisation.
Also – Kirkpatrick created his model more than half a century ago. Since then, teachers in schools and universities have continued to learn how to use ongoing assessment to create better feedback for learning, so it would be worth looking at what they have come up with more recently.
In part 2 of this article I will look at how teachers have come to break their assessment into four different types, each with different goals to be achieved by specific stakeholders in different, related contexts.