In my last article, I gave some thought to using data. Rather than saying you should use data, which you should, I looked briefly at the limitations and dangers of using data.
This time, though, I thought I would start looking at how to use data for coaching teams and finding ongoing improvement. I use different approaches to doing this, depending on the mood I am in (though officially it is based on situation assessment, alignment of best practice to local context, and ancient geomancy-based methodologies).
Sometimes I do a full data collection and audit; sometimes I start with a goal and then generate questions; and sometimes I just observe and wait for insight.
Observing and waiting for insights
When I am observing and waiting for insight, I try not to come up with an initial hypothesis or goal before I start working. Instead I might have a high level goal such as “work with the team and see if we can do some good stuff.”
If this sounds dodgy, then let me introduce you to a legitimate and highly effective approach used in education – the “Spiral of Inquiry”. I am going to refer to it loosely here to justify my relaxed approach to coaching, but you can learn about it here (PDF), or, if you want a whole playbook on using it, here (PDF).
When I start coaching, I often ask leaders what they want from the team, but there is a danger that
- The leaders need coaching too, both enabling and constraining the team while trying to make things better;
- The team is a complex thing in a complex environment and the leader is simplifying things for me too much; or
- There are people in the team who are already doing really good work, but they have not been unleashed.
In this situation my first step is to act confident (saying I am sure I can help) while also being nervous about making any commitments.
My next step is to go and see the team in action. In this situation I sometimes have to put my own bias aside. For example:
- I might hear that the team has implemented the best agile framework on the planet (obviously https://scaledagiledevops.com/) but that the team members are resisting change, causing me to expect trouble and overlook what is actually working.
- I might have coached the team before, causing me to expect they are doing good stuff and trying hard.
So I try to just observe. You can call this “the mind of the child” or just listening without judgement, but the effect is the same. I just start to interact and see what is happening.
In the Spiral of Inquiry this is called “scanning.”
Next I want to work out where I can help, which means focusing on things that might be important.
This could be identifying a key stakeholder and asking what they are hoping for, or it could be taking notes on what seems to be important and then selecting something to dig into further. Sometimes I use a sophisticated approach like Perill to explore things in detail, but other times I just use my gut instinct.
Let’s assume I am just using my gut instinct. I still want to convert “a feeling” into something I can verbalise. So I ask myself 3 questions several times while I am observing:
- What is going on here?
- Why is it important?
- How do I know?
These general questions lead me to start forming hunches, such as “run very far away” or “maybe they need some help with their team ceremonies.”
I try to put these rough thoughts into a structure so I can think more deeply – I might ask more questions or just try to create a sentence:
- What is most important here?
- What is the challenge or opportunity for me to help?
- What do I want to see happen? What do I think the people I am helping want?
- What now?
Now I might have a hunch such as “these people need help defining what they want to work on. Their sprint goals seem flaky and they seem to be coping with random requests rather than getting closer to a goal.”
Finally, you might think, I gather some data.
Kind of – but what I actually do is share my hunch.
Now that I have shared my hunch, I might ask some questions to gain opinions, or I might look at some evidence (watching ceremonies, looking at what happens to a story as it is realised into production, or maybe looking at the data a team has).
I don’t study it to really understand in detail though. I gather enough evidence to gain the confidence to take some kind of action.
I might run a quick session on avoiding geomancy and actually testing stories to decide when a story is done, or I might help the team break a couple of stories down. Whatever it is, I act on the assumption that my hunch might be right but that I might still be wrong.
Since I honestly hope that my actions were helpful and that the team tried something new that might help, I want to find out if we were right. So I want to check in quickly to see if the action helped confirm my hunch and helped make things better.
This is where I will again use some kind of evidence (asking people, running a retro, checking if things got faster/safer/easier, etc.).
Since I might be wrong I also want to check if the new action was worth doing and if it had any unintended consequences.
Now that the team and I are learning something, we might start defining proper hypotheses and establishing better data. Just as often, though, we go back to the beginning and I start observing/scanning again.
Often I want to build on what we started, but just as often I notice new things that are going on, then form new hunches to share.
What do you think?
I have managed to get some really good results using this “hunch-based” approach, with data and evidence coming AFTER I have a hunch. I have also been told more than once that the approach is too informal and is not really repeatable.
Of course there are also times when I use a different approach – maybe I will share more on those approaches next time.
I believe that this “gut feel”, “hunch-based” approach can help to create “generative conversations”, where the team gets more used to questioning, inquiring and sharing hunches. Demonstrating that we can act on incomplete data and be wrong some of the time can be powerful.
What do you think – dodgy or a potentially effective approach?