I help teams be “more agile.”
My favourite approach is to help them build a culture of self-improvement. I figure that if we get that right, the team will invent all the agile things they need.
This should work in theory, but it means the team is limited to learning from their own experience. So how will they know if they are learning the right skills?
A coach whom I respect told me that his favourite approach is to adopt the basic Scrum ceremonies for 2-3 sprints and just tell the team to do them properly.
With a couple of sprints under their belt, the team should have enough experience to decide whether all the meetings work for them.
The team also learn quickly how to use the ceremonies and tools of the approach they are using.
This is based on “Shu Ha Ri” and I know it works because I have done it successfully myself.
I also know there is a danger that the team will actually adopt “Shu Shu Shu”: implementing new ways of working because they think they are supposed to, and then not feeling free to throw out the “correct” ways of working and invent their own.
Yet another coach told me that, while it is good to teach ceremonies, focusing on them without teaching the relentless use of data limits the range of growth.
The reason is that the team is making improvements based on their own opinions.
He recommends a strenuous focus on metrics: for example, the cycle time of stories, or the time spent on crises.
His view is that agile is an evidence-based approach, and the coach is the one who brings good, objective data with which to identify improvements.
This also works, but assumes that the team will learn to create and use the right data, or that the coach will spend a lot of time analysing things on behalf of the team if they don’t. This could, I fear, leave the team feeling that they must focus a lot of effort on reporting.
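To make the cycle-time metric concrete, here is a minimal sketch of how a team might compute it from their own story data. The story IDs and dates are invented for the example; a real team would pull these from their tracking tool.

```python
from datetime import date

def cycle_time_days(started: date, finished: date) -> int:
    """Calendar days a story spent between starting and finishing."""
    return (finished - started).days

# Hypothetical sample of completed stories: (id, started, finished)
stories = [
    ("STORY-1", date(2023, 3, 1), date(2023, 3, 4)),
    ("STORY-2", date(2023, 3, 2), date(2023, 3, 10)),
    ("STORY-3", date(2023, 3, 6), date(2023, 3, 8)),
]

times = [cycle_time_days(started, finished) for _, started, finished in stories]
average = sum(times) / len(times)
print(f"Cycle times: {times}, average: {average:.1f} days")
```

Even a crude calculation like this gives the team something objective to discuss in a retrospective, rather than relying purely on opinion.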
Which approach do you think is the best?
In theory, different coaches might choose different approaches based on their own experience, the needs of the team and whatever seems cool at the moment. However, I think that we sometimes do teams a disservice when we choose a single approach to coaching and try to get teams to adopt what we believe is a better way of working.
I think we need to not only help the team lift their current performance (and happiness etc.) but also help them learn how to go on lifting their own performance. So whatever approach we use must involve a conscious focus on teaching the team to get better at learning and improving on their own.
All of the approaches I mentioned above work IF the team learns how to learn.
Having said that, I have been surprised at the amount of agile literature that seems to focus on delivery, but does not highlight “teaching the team to learn” as a goal.
I was equally surprised when I completed a MOOC recently, because it offered tools to help a team go on learning and improving as a part of the learning material.
In particular, I was surprised because the MOOC was not about agile, or team effectiveness, or leadership. It was the “Big History” course in Coursera.
I did the course to learn about the history of the big bang, cavemen and other things, but I also learned the idea of stopping at each module and asking “What did I rely on to form my opinions here?” using what you might call “claim testers.”
For example, I learned that the universe is really old, that it was once quite hot and then expanded really fast, and that it is all a bit weird. But what led me to that conclusion?
Well there was a scientist telling me stuff and they seemed intelligent, so one thing that might convince me is “Authority.”
Authority – Is the source credible? Should I take their word for it?
(Authority as a claim tester)
This should apply in a retrospective, for example. If someone says “we need to reduce the cyclomatic complexity of the devopsoids in sector 12,” then I will ignore them if they are an agile coach with no development experience. On the other hand, if the developers in an IT team are convinced that there is a better coding practice, then I would defer to their authority.
Taking this further, if you tell me we need to use “Epics according to SAFe and not BDD Epics,” then I will ignore you; but if you tell me that the requirements for an epic are based on specific privacy legislation, then I recommend we use the legislation, or an expert in it, as our authority to decide what is needed.
But authority is not always the best guide. Sometimes we have evidence that disputes what we hear from the “expert.” For example, when people (probably) believed that heavy objects fell faster than light ones, experimental evidence proved them wrong.
Evidence – Can we verify that? What is the claim based on? Is this an assumption or a fact?
(Evidence as a claim tester)
But is evidence always better than the authority we would like to rely on? At first glance I would say yes, but when I think about it there are many situations when I would defer to an expert or reference guide:
- When the cost of gathering data is high and the cost of being wrong is low.
- When the evidence seems valid, but perhaps not relevant to our situation, or perhaps is incomplete, out of date etc.
But if all we relied on were evidence and authority, I would be a pretty sad coach, because a lot of my decisions, recommendations and questions are based on gut feel. Is that a good thing?
Instinct/intuition – Does that make sense to me? What does my gut tell me?
(Instinct as a claim tester)
There are situations where things just seem wrong. I cannot put my finger on it, but something is not adding up. Similarly, there are times when someone asks a question and an answer just pops into my head.
Relying on instinct has the real advantage that I can move quickly, but it is prone to bias: confirmation bias, the recency effect and many other tricks of the mind.
In fact, when I was at school and university I annoyed some of those teaching me, because I would form my own opinion and undervalue the opinions of experts if what they said just did not seem right to me. So maybe I should defer to authority, or to the evidence, more often.
But in my defence I think sometimes people rely on a lot of data to justify something that experienced team members do not really believe. I have found that if you cannot get someone comfortable with an idea then it is hard to convince them. I could call this “change resistance” but I believe that would be to devalue the experience and expertise that often creates good instincts.
So there are times when I would ignore what the dial says and rely on what the pilot knows in their gut is the right course of action. Similarly there are times when I will act well before I have the evidence to support my action, and it is often the right thing to do.
Finally, there is logic. Even if a person who is an authority in their field is arguing with the support of a lot of evidence, I have often found that there is a flaw in the logic itself.
For example: it is dangerous to drink (alcohol) and drive (a car). This means that a drunk driver is in danger every second they are driving while drunk. This means that the best strategy to avoid a crash is to spend as little time on the road as possible. The conclusion of all this is that if a driver is drunk, then they should drive as fast as possible, so that they complete their journey quickly.
Hmmm, dodgy. But a similar thing happened when people thought that the earth was at the centre of the solar system. Things did seem to go around the earth, and some experts explained that this was the natural order of things. But models of the solar system became ridiculously complex and still could not explain what was going on. It was only when people decided to put the sun in the middle that the models worked and things made sense. Perhaps they would have got there through more observation, but it was tackling the logic that got things moving.
In agile scenarios I have actually worked with teams who have issues with defects and yet are still trying to increase their velocity. They want to reduce defects too, yet their focus is on going faster, and they have a lot of data about the things that are slowing them down … but in fact they are fixing the wrong problem. Some simple logic applied to the evidence already available shows that having a terrible product will eventually slow them down, while simultaneously making speed irrelevant. This may seem odd, but I have also had teams talking about the need to rebuild a system because of defects, when in fact the defects are in a related system.
Logic – Is the reasoning systematic and sound? If we apply critical thinking here, how well will the claim stand up?
(Logic as a claim tester)
So where does that leave us?
I know that I have a bias for instinct and logic. This is my natural tendency. However this means that I should force myself to question/defer to authority and gather/question evidence more often.
I would also suggest that if I want others to come on the journey with me, that I need to present an argument in terms that they understand. For some people, evidence, authority, logic or instinct will be a stronger driver than the other claim testers and if I do not know this, I might present the wrong argument to them. I might keep claiming something “does not make sense” while someone keeps claiming “this is correct according to this great authority.” We could end up in a pointless argument when we should be considering both the authority and the instincts/logic that cause me to question it.
Beyond this though, I am going to try and use this concept as a way of building the culture of improvement that I mentioned at the start of this article.
Rather than just using one approach (my authority as alleged agile guru, evidence from Jira, having a retro based on gut feel), I am going to try to formally introduce the concept of asking “how did you come to that conclusion?” in terms of the four claim testers: authority, evidence, instinct/intuition and logic.
Hopefully, by making the thinking process clear, coupled with basic psychological safety and collaboration, I will be able to coach teams to become awesome. I do not have any evidence yet, but my thinking tells me this should work.
PS – if you want to learn more about this, from those with greater expertise than me, then try the MOOC I mentioned, or have a look at examples like this one, where teachers are using these claim testers in their lessons.