Maybe our definition of coach is too narrow

Some time ago, I was wondering why I was noticing people moving away from agile approaches (and cultures). One of my theories was that people were neglecting “craftsmanship” when coaching agile teams.

I am an agile coach and I often explain to people that agile coaching is about helping teams build the capability to interact better together and better adapt to change. I also say that we are all about helping people discover, pursue and share value.

Alas, some stakeholders think that agile coaching is really about implementing agile best practices from the big agile book of correct behaviour.

But when I feel sorry for myself I think of the journey that some other “coaches” are going through and I realise that we agile crew still have it fairly easy. I spoke to a QA a while ago who saw her job as coaching the team to build better systems and to build learning into their work. This was an awesome approach to improving both the quality of the end product AND the quality of life for the team. Alas, the team disagreed. They thought that QA was a Latin word meaning “break things” or “notice things that developers should have seen and then raise a ticket for them to ignore”.

But really, helping the team to improve their technical learning processes through experimentation (testing) and through understanding what quality means in context, is a lot like what I do for a living.

I also see technical leads who are actively teaching coding to others in the team, both as a craft and as a way of thinking. Teaching how to merge practices, actions and thinking sounds a lot like coaching to me. Perhaps technical coach is a role that is included in the technical leadership role.

Now product managers are looking at “discovery coaching” and “transformation coaching” as different needs within the product community.

So maybe I should stop complaining that people do not know what agile coaching is and I should find better ways to help artisans and professionals to build more effective coaching into their existing roles.

What do you think: is coaching a role in and of itself (such as I like to think it is) or is it a capability that starts as a seed within many professions and now needs to flourish as a core competency among many roles?

Keeping people on track in meetings

Some people have amazing leadership skills such as communicating a clear vision or aligning people to a cause. I did not receive those awesome gifts, but I did inherit a super power that often comes in handy – managing Bureaucracy.

For example, I like to have meetings that are effective. This probably does not seem like a super power to you – but nor should meetings be a terrible curse.

When people encounter Scrum, they are often shocked that there are so many meetings (OK we hide the fact by calling them ceremonies, events, rituals, celebrations or something, but they are still times when people meet). In Kanban people are shocked that I sneak in a retro and in other agile approaches I also sneak in some meetings. I even sneak in some governance and sometimes budgets and things. You could call me a bureaucracy coach.

10% of my time in meetings? Say what?

A typical developer who didn’t see that coming

Not cool dude

Does this seem dodgy? You ask for a coach who is anti-waste and he brings in more meetings.

Hopefully though there is a difference – the meeting is not a goal in its own right. The meeting should improve the interaction of the team; the team should not go to meetings because they are a bunch of tools following a process (yes, I have said that before).

So if I invite/drag people into a meeting, it should be a valuable meeting. It should also take the minimum time needed to create that value.

Keeping people on track is good

A valuable meeting is one where there is a clear purpose, people stay on task to achieve that purpose and people leave feeling that they got to participate.

In some meetings it is fair enough that people chat and catch up – if that is what they want and it is helping with the interaction.

In other meetings we want people to stay on track and not convert the meeting into a chat fest about how somebody should break down some stories in our backlog. So how do we stay on track?

Here are some ideas:

Start by reminding people of the purpose of the meeting and the commitment it will take.

  • For (someone) this meeting will provide (something) so they (get some value) is a possible way to intro a meeting. If this sounds too “user story-ish” then at least let people know whether this meeting is
    • To make a decision together
    • To discuss something so one person can make a decision
    • To inform people of things and have a general conversation
    • To commit to or track actions
  • If there are people coming for the first time, you could ask them what they want to get out of the meeting. You can also ask if the right people are in the room – some might be happy to let others cover the topics or someone might be needed if you are going to reach an agreement
  • State how long the meeting should go for, and ask whether people plan to attend for the whole meeting and whether they agree with the goal

Consider having an agenda

  • If you do an agenda, leave time in it to wrap things up at the end. Otherwise everyone will be rushing off when you try to come to an agreement
  • Consider adding time at the start to read any pre-reading or to secretly admit that people will come late
  • Think about the order you put things in. Put things you don’t really want to talk about at the end and look for a generally good flow

Consider having a “5 minute rule” or a “7 minute rule.”

This means that if people deviate from the agenda or topic, anyone in the room can start a clock running. After 5 minutes, they announce that time is up and the group agrees to get back on task, or they agree that this deviation from the agenda is more important than the other topics.

Observe who talks and gently make time for others if needed

Some people will talk a lot and others will prefer to listen. This can be OK but it can also mean that the group are not really getting value from all the brain power in the room.

  • Give the task of taking minutes to the loudest person in the room so they are constantly distracted and shut down. You can even give them 3 tickets that they have to spend to talk, though this might seem extreme.
  • Stop for quick check-ins like a thumbs up or 5-finger vote to make sure things are on track
  • If you have time and presence, notice how much of the conversation is inquiry (asking questions), how much is advocacy (stating opinions) and how much is echoing (just repeating what others have said). Be brave and comment on this or ask people if they have questions/opinions

Warn people when the meeting is half over or near the end

  • I sometimes call a time out in meetings halfway through. I ask for a quick vote among the group to see if they are getting value and believe we are having the right discussions. Remember my superpower is bureaucracy so you might not do this, but I promise it works
  • State clearly something like “OK, 15 minutes to go… let’s focus on blah blah”
  • It can also be good to summarise the meeting at the end, referring back to the goal or to key agreements.
  • Sometimes it is worth rating the meeting half way through or at the end and then committing to improve where needed.

Don’t pretend to take minutes

  • Some people take minutes and action items, which seems like a good idea
  • Minutes are a good idea if you will want to refer back to them. Action items are a good idea if people will hold each other accountable for them.
  • Minutes and action items that will be lost in the void will undermine the sense of purpose in a meeting. If you will not use them after the meeting then just be honest and say you will rely on people to remember/do what they said.

OK – good luck and enjoy your meetings 🙂

How would a teacher assess our agile teams (part 2)

Previously in “teachers assessing agile teams”

In my previous post, I derived a lightweight model for building team and product metrics from the Kirkpatrick model, which is normally used to assess training effectiveness.

A key theme was that we should think about what, specifically, we want to learn from our metrics before adopting any typical “agile” or “product” metrics out of the box.

The result was a series of questions that you would want to use your metrics to answer. The questions are, similar to the Kirkpatrick model, taken from different levels or perspectives. But you do not need to answer all the questions with a fleet of metrics. Instead, you choose your focus based on the benefit and the relative effort needed to create an assessment based on only 1 or 2 perspectives/levels. That way, you only measure 1 or 2 things at a time, allowing you and the team to focus on learning from the assessment rather than capturing and reporting data.

This is the first cut of the model that I played with as I created the post:

A fairly basic assessment model

Using this model (or a similar one) we can combine data, feedback and team meetings to build a basis for team learning. We can also use a basic communication plan to decide on how we use the same (or additional) information to keep the team’s stakeholders well informed.

The story continues

Coaches, like teachers, should be good at assessing performance, results and improvement.

Assessment in education is quite sophisticated these days. In fact, from my reading as an outsider, we can look at 4 distinct types of assessment that are in common use, all of which include feedback loops, achieve specific goals and support continuous growth.

  • Diagnostic: creating a baseline of where a student or cohort is at across the board.
    • Value: understanding overall learning needs and maturity
    • Audience: instructional designers; administrators assigning students to classes; teachers building a curriculum
  • Summative (or “high stakes”): evaluating whether learning goals were achieved.
    • Value: recognising and certifying the outcomes of learning
    • Audience: assessors awarding a qualification; teachers evaluating student achievement
  • Formative (or “low stakes”): providing useful feedback during the learning process.
    • Value: creating useful feedback that assists ongoing learning
    • Audience: students as they learn; teachers adapting their lessons to student needs
  • Ecosystem: assessing both whether the current curriculum is valuable and whether the overall system of learning is as helpful as it might be.
    • Value: making education relevant and removing systemic impediments to learning
    • Audience: school management; teachers working on improvement beyond the classroom

Types of assessment in education

How is that relevant to agile teams?

The approach that educators take to assessment is different to what many agile teams (and agile managers) currently do.

Many teams start with the assumption that either:

  • We are agile, so we should empower teams to go and sprint all over the place, confident that things will end well if they are a good team;
  • Agile teams need to deliver, just like everyone else, so we can use the existing metrics and governance that the organisation has in place, shoe-horning what we already have into assessing self-managing teams that are continuously sensing and responding to new information; or
  • Agile frameworks include the metrics, ceremonies and governance that will work well in our context. Thus we can start by implementing the basic model and then adapt it over time as we become more mature at agile stuff. The agile framework or way of working we select will contain the most suitable metrics and other assessments available to our teams.

I might agree that we can trust the people in our agile teams; that many of the existing organisational measures and governance process that organisations have are good; and that agile frameworks often include useful metrics and ceremonies that we can use.

However I reject the idea that any of the three assumptions above is a sound starting point on its own. I believe that we should select fit-for-purpose assessment approaches (metrics, ceremonies, coaches assessing teams) based on what we are trying to achieve.

I think that the types of assessment used in modern education might actually be a better place to start, even though it will involve more work and iteration to create our overall assessment approach.

Let me see if I can identify the relevance of each category of assessment and see if it will lead us to a comprehensive assessment system (or toolkit) that can be used by teams, coaches and managers alike, rather than a reporting system that seems to consume energy without generating real learning or ownership of ongoing improvement.

Diagnostic measures are good and bad

In Australia, new students arriving at a school are often subjected to a battery of tests. We also inflict a similar battery of tests on students across our schools on a semi-regular basis (every couple of years) to baseline where students are at across a core set of learning areas.

This is fairly expensive in terms of student time, teacher time and administration time. Hence it is only done rarely.

The equivalent in an agile work context is a team health-check or a team assessment prior to a coach working with a team to set improvement goals in a coaching contract.

Where this adds a lot of value is in the “battery of tests”, which allows a holistic picture of the team’s development and growth needs.

We can look broadly across many aspects of the team’s processes, technical capability, interpersonal interactions, impediments and so forth. After doing this we can home in on the 1-2 key areas that will become the focus of development in the near future (the duration of the initial coaching agreement).

We can use a similar approach across multiple teams, to find common areas of need that can inform the creation of training or coaching sessions across the portfolio.

However, this is expensive (in terms of time) and can produce an overwhelming amount of data.

This is why teachers do not continually assess progress across the whole battery of tests. Rather, they assign students to an appropriate learning journey, they create a specific curriculum for the class or they design specific improvements based on what they discover.

Whatever they design then has its own specific objectives and related assessment mechanism.

In an agile work environment this should also be the case. A diagnostic assessment could become a semi-regular thing for a portfolio of teams (say once every 1-2 years). It also fits well in the toolkit that coaches choose from at the beginning of each coaching assignment.

But it is not well suited to continuous, ongoing assessment. Indeed it can be harmful:

  • We come up with a dozen improvement areas for the team. Teams, like students, do not know where to start and never get the chance to focus on small, specific and achievable goals. This is lazy coaching and leaves the job of the coach half done.
  • We compare teams, telling them who is good or bad, or secretly talking about them in our management meetings. This encourages a fixed mindset where teams, like students, learn to game the system. They learn to pass the next test really well, while minimising the actual learning and growth involved. This also encourages bias among coaches and managers, creating a halo effect for some teams and a hard-to-shift stigma for others.
  • We ask the team to track and report on the “gaps” or opportunities identified. While this seems sensible at first glance, it results in a lot of overhead, which means that teams either adopt a “compliance with instructions” mindset in reporting or drop the tracking when they inevitably get busy. Another problem is that teams will often mistake the metric (the gap) for a goal, when it is only a measure that informs goal setting. Putting in the extra effort to interpret the information from the diagnostic assessment, and then to create a specific initiative or team goal with its own agreed assessment mechanism, creates greater focus and better motivation.

What not to do

So diagnostic testing is, I believe, something that should be in every coach’s toolkit. It might also be something that teams use to “self-assess” without a coach, but it is important that teams understand that it is only a starting point. The next step is to find an area for focus and then agree on how to pursue and assess progress on the goal that they set. That will involve a combination of “summative” and “formative” assessment.

Assessing the ecosystem

Before we get to summative and formative assessment though, I want to look at assessing the ecosystem, because it is related to diagnostic testing.

Running a battery of tests on a student is usually aimed at understanding the student’s overall level of learning in order to derive their learning needs. We can then channel our students into the right courses and we can design a curriculum for a whole cohort that takes them to “the next level.”

However, teachers might also identify patterns in the learning maturity and needs of different groups of students. They might find that all students with a common set of characteristics, unrelated to the curriculum, are consistently performing well or badly. For example, students who live in dangerous neighbourhoods might be getting the same maths lessons as everyone else, but the vast majority of them might be struggling to reach the standard that we think they should be able to achieve.

We might realise that we should stop looking at (or blaming) the students for this gap and instead look for a cause that is outside the classroom and often outside the control of the student or the teacher.

We might find that, for example, constantly living in fear and being “hyper-aware” means there is no mental capacity left for learning geometry. Or we might find that the culture in an elite school seems to be toxic for some groups of students (girls, migrants, kids with specific needs).

We might even find that while students are passing their exams, none of them are getting jobs after they graduate. Our classes are now excellently designed to waste student time and teach them things that they will never use.

In education, teachers and school administrators often conduct a separate assessment of the ecosystem in which they are creating what they intend to be first class learning. This is not about assessing the student but identifying the systemic impediments and challenges that are outside the control of the student.

An excellent example of this approach can be found in the “data wise” approach to evidence based improvement in education. While it focuses on an education-based ecosystem, I have also found that it translates well into building an evidence based approach to identifying, leading and assessing systemic improvements in the work environments that agile coaches typically find themselves in.

In an agile work context or a product led organisation, this means moving from coaching the team to work well within the current context they work in to changing the context within which teams work.

Doing so can provide great benefits. However, this approach is time intensive and assumes:

  • There is an eager group of people who are committed to evidence based improvement;
  • That group’s stakeholders are willing to play a long game of slow but impactful improvement beyond team based improvement; and
  • The eager group and their stakeholders have the agency (permission and opportunity) to implement ongoing improvement.

So how does this impact our team metrics and ceremonies?

Most out of the box agile (and organisational) metrics are designed to assess team performance or product success. They are great in that context but only provide glimpses of useful insights about the organisation’s design, governance and business model.

There are however good foundations for assessing the wider ecosystem. The key is to understand systems thinking and that the system you are assessing is NOT the teams and their performance.

You will find some great starting points in lean approaches, business model driven approaches and even traditional change management and organisation design approaches.

If you are not sure where to start then you can start with “data wise” or a similar approach, slowly learning and adapting the way you use it to iteratively grow your own approach. You will be successful if you are persistent and if the three assumptions I listed above are in place.

I believe that for some coaches and teams, assessing the wider ecosystem is a key part of the toolkit, and that it involves creating fit-for-purpose metrics, which are generally not the same as the team (or even customer) metrics that come out of the box.

This toolkit should not be confused with the tools that are used to assess and support team growth, because they are designed to change the context within which teams operate, rather than helping teams improve within the current constraints of the organisation.

Thus, coaches and managers have to choose their focus. To what extent do they want to focus on team improvement and to what extent do they want to focus on improvements outside the teams themselves? This is where approaches like OKRs can be useful in signalling what coaches and managers are planning to focus on.

But most assessment happens “in the classroom.”

I have covered diagnostic assessment and assessing the wider ecosystem, but most of the metrics a coach and agile team use will probably fall under the categories of Summative and Formative assessment.

So let’s look at how these integrate into learning in schools and, I hope, how they are relevant for our teams of “grown ups” at work.

I visited my daughter’s “agile kindergarten class” a few years ago. I was shocked by how “agile” they really were and how natural their learning seemed to be. But the thing that really impressed me was how well assessment and feedback were seamlessly integrated into the day, even for a class of 5 and 6 year old kids.

My recollection of school was that we would have exams at the end of every learning adventure. But I did not see these as part of the learning process. Rather, they were like regular tornadoes or other natural disasters that I learned to mitigate.

I would muck around most of the term and then a test would suddenly appear on the horizon. I would plan to study but generally avoid doing so. Then I would pack a term of study into 1-2 days before the test, sit the test and then escape back to my normal life.

As soon as I escaped, all the knowledge I had packed into my brain for the test did the same thing – it escaped into the ether. I am not sure I would have done as well a week later if I had to repeat the test, let alone a month or two down the track.

That is not how it is supposed to work, at school or at work. Assessments, tests and exams are not supposed to be external events that you do because you have to do them. They are an integrated part of the learning (and continuous growth) process.

Even at a young age kids are taught how to learn. They are taught how to control their stress to move from their comfort zone to their learning zone and then they are taught how to recognise and manage themselves when they enter the “fear zone” where learning is hard because their bodies are moving from thinking to fight or flight.

This builds an ownership of learning where the child (hopefully) starts to aspire to grow and learn more. However this is not possible without clear learning objectives and ongoing assessment of progress toward that objective.

Summative assessment and feedback

Summative assessment is the high stakes assessment that evaluates whether the student achieved mastery of the subject (or a pass mark or a fail). This might be a project, exam or activity where the performance of the student is evaluated.

Summative assessment is still powerful and it is important. At the end of grade 1, kids get assessed to see if they are ready to move to grade 2. If a young girl wants to play basketball, she will be assessed to see if she makes the team.

A team might rate themselves as great in a retro, they might find no defects in testing and they might have a throughput of work that they are proud of. But when a customer encounters the work that the team did, there is an immediate moment of truth.

Is the product or feature that just arrived desirable? Does it support a job to be done, or is it just a feature produced because the team liked it? This moment of truth is the equivalent of a “summative feedback loop” or a “high stakes assessment”. The team achieves the goal or they do not.

Teams need some way to measure themselves against an external goal or standard. Metrics like velocity and retrospectives are not designed for this (which I will explain under formative assessment).

So in my dodgy Kirkpatrick-like model, you might want to assess not just level 1 “how did you enjoy that” types of feedback, but meatier (and harder to measure) feedback like contribution to revenue, containment of cost, customer adoption of the product or other metrics that tell the team that they are creating value and being successful.

The limitations of summative assessment

Kids with a growth mindset learn this lesson too. When they do a presentation or they sit a test, they are rated on their performance. The work was of a high standard or it was not.

However, students also learn that this is not a reflection of whether the student worked hard or had talent (although these help). It is a reflection of the quality of the work produced this time.

This is the same for the team at work. For example we might use customer adoption metrics. A team might build awesome products that are years ahead of their time, but if customers are not adopting the product, revenue will not grow. If revenue does not grow then the team will score an F, even if their product is awesome. This is not a reflection of whether the team worked hard or have great potential. This is an assessment of whether what the team are producing is worth buying.

Summative assessments often provide delayed feedback (the work is done, not in progress) and focus on rating performance rather than guiding the student (or team member) as they learn.

Formative assessment and feedback

Formative assessment is designed for the learner rather than the teacher. It is continuous feedback during the process of learning rather than at the end. For this reason it is sometimes called low stakes feedback or “assessment FOR learning.”

Since it is designed to support learning, as it is happening, formative assessment cannot be delayed. There is little point in telling a student that she has been holding her pencil wrong for the last 3 months. Instead the teacher observes the student in the moment and offers immediate, specific feedback, such as “hold the pencil further back.”

In an agile team, this may come from the interaction between the team members (and coach). It might involve a peer review of the code, or some testing. All of these things help the team improve as they work.

Sometimes formative assessment can look a little odd to old school parents who do not understand it. Students might be assessed by their fellow students instead of the teacher and the process of assessing others is teaching the student as much as the feedback they get from others. Some “exams” allow for multiple attempts. The student can actually sit the exam when they feel they are ready, receive results and then resit the sections they want to improve in. They can therefore set their own learning standard and quit when they hit it.

A similar thing happens with some of the “agile metrics” like velocity, retros and even cycle time. The team can define their own definition of done and define a story point that is not related to time or delivery. They can then assess their progress every day in the stand-up and in the ongoing discussions they have as they run outputs past each other.
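To make one of these formative metrics concrete, here is a minimal sketch of how a team might compute cycle time from ticket start and finish dates. The ticket data below is invented for illustration; a real team would pull it from their tracking tool.

```python
from datetime import date

# Hypothetical tickets with the dates work started and finished.
tickets = [
    {"id": "T-1", "started": date(2023, 3, 1), "done": date(2023, 3, 4)},
    {"id": "T-2", "started": date(2023, 3, 2), "done": date(2023, 3, 9)},
    {"id": "T-3", "started": date(2023, 3, 6), "done": date(2023, 3, 8)},
]

# Cycle time per ticket: days elapsed from start to done.
cycle_times = [(t["done"] - t["started"]).days for t in tickets]

# An average the team can reflect on in a retro or stand-up.
average = sum(cycle_times) / len(cycle_times)

print(cycle_times)  # per-ticket cycle times in days
print(average)      # average cycle time in days
```

The point is not the arithmetic but who consumes it: the team glances at the numbers themselves, notices that one ticket took much longer than the others, and adjusts how they slice or swarm on work.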

Even testing moves from being summative to formative as we use approaches like TDD, prototypes, MVPs, iterative discovery and evolutionary design.

The key thing about formative assessment is that it is not designed (primarily) for the assessor or stakeholders to understand and make a decision. It is designed for the one being assessed to understand and make a decision about what to do next.

This is where a lot of agile concepts align very well with modern education and this is where a lot of the metrics that came from the agile community have, at least broadly, their parentage. They assume that the consumer of the information is the self managed team, who are continuously inspecting and adapting and learning, with the goal of continual improvement and continual delivery of value.

So where does that leave us?

Teachers and others evaluating education have come to look at “learning” at different levels (such as in the Kirkpatrick model) and they have created a very robust assessment system that can include:

  • Assessing the outcome of learning;
  • Using continual feedback and ongoing assessment as part of the learning process;
  • Creating a known baseline for designing and improving a complete portfolio of learning programs at scale; and
  • Evaluating and questioning the system within which learning is taking place.

I believe this same approach can form the basis of building an assessment approach for our teams (product teams, agile teams, BAU support teams) and our portfolios of work. I think the existing metrics that are available to agile teams are well suited to this approach, but that they may not create a holistic approach if we simply start using them out of the box, rather than seeing them as part of the system (or Way of Working) that we work in.

I would like to try an example or a straw man to look at how that might be done, but I feel as though this article is already too long.

My only choices now are to edit this article effectively or to delay the construction of a template or model of assessment until I get another burst of writing energy.

I choose to leave things as they are and come back another time. But let me know if you can see value in this approach (based on what you see).

How would a teacher assess our agile teams (part 1)?

I have been thinking about how we measure our performance and how we track our goals at work.

Specifically, I want to look at how we would use assessment in the teams that are customer focused, product led and/or agile. There are plenty of good books in each of these domains and they have a lot of good techniques, tips and even philosophical concepts to help me.

However, many of the books seem to look only at product metrics or only at agile delivery. So I thought that I would take a step back and look at what these teams all have in common and then look at how we should approach assessing our teams and our goal progress from that perspective.

One thing all these teams have in common is that work is driven by empowered, self-organising teams. I know that R Simons has done some great work on how this might alter the way we implement reporting and assessment.

But then I thought about something else these teams have in common – continuous learning is core to the way product-led teams, customer experience teams and agile teams all plan and manage their work.

Then I also started thinking that assessment and measurement are really learning focused processes. Why would you start measuring and assessing something if it is not for the purpose of ongoing learning?

With this in mind I thought I would look at how experts in learning (and teaching) might approach measuring our work, if it was up to them.


A rudimentary starting point – The Kirkpatrick model

The job of a teacher is to teach and the job of a student is to learn.

A simplistic approach to assessing learning would therefore be to test whether the student learned something they were meant to learn.

If the student learned something then they can move on to a new topic, and if they did not, then I guess they can give up or go back and try again. The teacher uses the test results to make some decisions. They will keep teaching the same way if the students all pass the test, and change their approach if some did not seem to understand the concepts.

I guess if we applied this to a work context then measuring things is simple. We would ensure that teams had clear goals (set by themselves ideally, but sometimes perhaps inflicted on them). Then we would test whether the team hit their goals. If they do, then they set a new goal and move on; if not, they either quit or try again.

Leaders, coaches and others can then keep doing what they are doing if the team is hitting their goals or think about how best to support them if they are missing some core parts of their goals.

Measuring the impact of teaching is a little more complex though, and so teachers have gone beyond just testing whether a student passes a regular test. For example, you might say that a Geography lesson was unsuccessful even if the students did learn what the teacher told them to learn. What if the lesson was absorbed but the student could not apply what they learned later in different contexts? What if the student started to hate Geography and stopped learning in the future, missing out on the wonders of contour lines and the effect of hills on climate and people? What if they mastered the class but then found out that what they learned was fundamentally incorrect?

A man called Donald Kirkpatrick tried to tackle this complexity with a four-stage model that is now over 50 years old, but still in use.

While Kirkpatrick was looking specifically at the success of training, I think we could use the same approach to assess the success of the work that agile teams do, since agile teams are learning teams.

Level | Original meaning | Applied to team delivery | Applied to customer value
1 Reaction | How did the students rate the class? | What did the team think of its last sprint? How does the team feel about their work and culture? | What do customers and stakeholders think of our work?
2 Learning | Was the intended learning acquired? | Was the intended goal achieved? This could be a definition of done, sprint goal or OKR. | Was the stakeholder's goal achieved?
3 Behaviour | How well do students apply what they learned? | Are teams applying their learning from customers and their own reflection? | Are customers actually using what we produced?
4 Results | Did applying the learning lead to the results we wanted? | Is what the team produces creating the outcomes and value we want? | Is using our product solving the Jobs To Be Done? Are customer and stakeholder outcomes being achieved?

A dodgy reinterpretation of the Kirkpatrick model
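To make the reinterpreted model easier to walk through in a retro, the levels and their guiding questions could be captured as plain data. This is a hypothetical sketch (the level names are Kirkpatrick's; the questions are from the table above, and the function name is my own invention):

```python
# A hypothetical sketch: the four Kirkpatrick levels, reinterpreted for
# agile teams, captured as plain data so a team can walk through them.
KIRKPATRICK_LEVELS = {
    1: {
        "name": "Reaction",
        "team_delivery": "What did the team think of its last sprint?",
        "customer_value": "What do customers and stakeholders think of our work?",
    },
    2: {
        "name": "Learning",
        "team_delivery": "Was the intended goal (DoD, sprint goal, OKR) achieved?",
        "customer_value": "Was the stakeholder's goal achieved?",
    },
    3: {
        "name": "Behaviour",
        "team_delivery": "Are we applying our learning from customers and reflection?",
        "customer_value": "Are customers actually using what we produced?",
    },
    4: {
        "name": "Results",
        "team_delivery": "Is what we produce creating the outcomes we want?",
        "customer_value": "Are customer and stakeholder outcomes being achieved?",
    },
}

def retro_prompts(perspective: str) -> list[str]:
    """Return the questions for one perspective, ordered by level."""
    return [
        f"Level {n} ({lvl['name']}): {lvl[perspective]}"
        for n, lvl in sorted(KIRKPATRICK_LEVELS.items())
    ]
```

A team could print `retro_prompts("customer_value")` at the start of a retro to push the conversation past level 1 reactions.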

I think if we tried to apply this lens to our work, we would start to see where our (potential) metrics are helping us to learn and improve.

Many teams do stand-ups and regular retrospectives. These meetings are valuable but where do they fit into the “dodgy reinterpretation” of the Kirkpatrick Model?

Some teams limit their discussions to talking about whether they feel they are on track and how they felt they went in a sprint. In other words, they are limited to learning and improving based on their own reactions. More mature teams will include a showcase with stakeholders to get their reactions, and will focus both stand-ups and retrospectives on the achievement of their goals. Thus each learning cycle (stand-up or retro) will generate a healthy discussion of team reactions, an assessment of whether (and how) they achieved their goals, and then the setting of new goals based on what they concluded.

In this context, I think we can start to assess the value of the team's ongoing assessment processes:

  • Team meetings (planning meetings, stand-ups, showcases, sprint reviews, retros, scrum of scrums – whatever the team is using to learn, plan and improve).
  • Team metrics such as burn-down charts, cycle times, velocity, quality standards and customer feedback.

There are three more things to consider though.

  • Back in Kirkpatrick’s time, I assume that people were assessing multiple training courses or classes; today we are working with multiple agile teams.
  • Stand-ups and burn-down charts are great for understanding team reactions and potentially the achievement of team goals. However if these are our primary team measurement approaches, then they leave us blind to seeing whether customers use our shiny new features and whether we are achieving the outcomes that our teams, stakeholders and customers want to achieve in the longer term.
  • Traditional project measurement processes are often not built for ongoing team learning or customer value.

Multiple teams

Many organisations are now operating with one or more portfolios of product teams. Others have multiple teams in a program or value stream and others have a gaggle of semi-dependent teams.

If we want to apply an agile way of measuring and learning then we should remain loyal to the core principles of agility. These include empowerment, transparency and a focus on continual delivery of value.

This creates a challenge for scaling agile mindsets and practices, but it also adds substantial value for both the team and the stakeholders when you grapple with what you want to learn (measure) and how you can put that into practice. I believe this exercise is a valuable end in itself.

An incomplete picture

If a team focuses entirely on showcases, internal discussions and measuring throughput (i.e. typical scrum ceremonies, supplemented by a story wall and burn-down chart) then they are learning from a very limited understanding of their work.

To set up an agile team for success, you need to include some kind of feedback cycle that informs the team about what happened when they deployed their work. They need to learn about whether customers liked what they got, whether they actually used it on a regular basis and whether it helped them achieve their goals.

These things might be assessed at a team or feature level, but they are more often assessed at a product or portfolio level. Regardless, the team and the other decision makers must gain access to this information if they are to learn from it and make their own empowered decisions.

There is also a significant impact on the agile attitude to testing. In an old-school team, the team often completed their work and then someone assessed whether it “passed the test.” But in an agile team, testing is a core part of the team’s learning. We do not just test earlier to avoid risk, but also to help the team learn to make better decisions about what creates quality and customer joy, and what typically leads to issues and rework.

So testing is no longer a pass/fail test but an ongoing assessment process that could also be applied with the “dodgy reinterpretation” of the Kirkpatrick Model. In agile teams, testing is learning – and since it is learning it should actually help the team (and stakeholders) learn continuously.

Thus we can apply the basic concepts of basing our measures on the creation of value for the customer of the measurement:

  • Who is going to learn from this specific measure?
  • What will they learn?
  • Why is it important that they learn that? (And is it worth the effort?)

However this might not necessarily result in exactly the same measures. You might or might not have a showcase across multiple teams, a shared burn-down chart and so on. Instead you need to look at the “dodgy reinterpretation” of the Kirkpatrick model and ask what you want to learn.

The further you go through the levels (1-4) and the more you want to learn from customers rather than opinionated team members, the harder it will get. This means that you need to look at the real value and the real cost of setting up, using and applying the measures you put in place.

One conclusion I reached as a trainer was that measurement only mattered if you or someone else was going to act on it.

Traditional organisational measures may or may not apply

When I first started helping teams become more agile, we often ran into organisational metrics that did not align with the new way we wanted to work.

Sometimes we almost had to employ a “translator” who let the team get on with their work but then tried to reverse engineer reporting in the old format for steering committees and PMOs.

This did not work too well, so some people just claimed that agile teams were immune to the organisation’s “outdated bureaucracy.” Even I remember telling stakeholders that if they really cared what the team was up to, then they should join the team ceremonies. The more they cared, the more time they should spend.

I still think there is a value in telling people to go and see the team in action rather than looking at charts that were produced a week ago.

I also recognise that sometimes we need to report in a way that does not actually help the team. Some organisations have complex reporting that helps them decide on things like OPEX, CAPEX and anti-crime reporting, and (in one of my jobs) ensuring that they still qualify for the R&D tax breaks and funding they received. Where this is the case, there is an argument for telling the team to suck it up and provide the required information. Part of the value they create (and part of their job) is to help the organisation survive and thrive. The difference in an agile team is that people empowered to make decisions need to understand the criteria and rationale for those decisions. In other words, we need to explain why we want to collect and track information that is not used by the team itself.

Having conceded these points though – I think we can do better nowadays. Rather than just saying “come and talk” or just applying last year’s organisational measures again, we should step back and question what we hope to learn from those measures and who is actually going to interpret and apply those lessons.

If you sit down and spend some time thinking about using your assessments to create learning, you will probably come up with something better than a dodgy reinterpretation of the old Kirkpatrick model. But even that dodgy reinterpretation is better than accepting a default of doing what we used to do, or applying some model that allegedly worked somewhere else, without thinking about how it will be useful to you in the context that you work in.

It will of course take work and experimentation, but I believe that the effort will be worthwhile because it will enable faster learning across the different teams in your organisation.

Also – Kirkpatrick created his model more than half a century ago. Since then teachers in schools and universities have continued to learn how to better use ongoing assessment to create better feedback for learning and so it would be worth looking at what they have come up with more recently.

In part 2 of this article I will look at how teachers have come to break their assessment into 4 different types, each with different goals to be achieved by specific stakeholders in different, related contexts.