I was talking to someone about a request from a client to “look into something weird.” The client was not sure whether it was a problem, so they raised it with someone they knew on the team, and a couple of hours later the case was solved.
It got me thinking. Where do these odd requests fit in?
Sometimes people do not know how a product works or how to use it, so they ask for guidance – is that “a matter for the help desk”, is it evidence of a need for client training or is it a hint that we need to improve our usability?
The requests often seem trivial, yet there is still a well-hidden hint of learning for the humble development team: a chance to better understand the context in which their clients operate. It might also be a breadcrumb on a trail to learning about their “jobs to be done.”
So a strange, unexpected request could be both a chance to deliver immediate value beyond the delivery of features and an opportunity to understand where there is room for improvement in the product or system we support.
It could also be a pointless-seeming diversion from producing new features and improvements that are already in what could be an endless backlog of work.
Customer collaboration
I used to think of this as “ongoing collaboration” with customers, but that seems to be going out of fashion now that we have chatbots that give the appearance of emerging as new life forms – ones that are actually more interested in talking to customers than many teams of humans are.
Well, maybe that is not entirely true, but many teams today have split the “ongoing discussion with customers” from the “building of value for customers”. The people talking to customers help them with their daily confusion or needs, while the builder team builds new value from its own research or from the requests of others in the organisation.
I guess it does not matter who is talking to customers, or even if we are making the conversations more efficient with technology. What matters is that we are learning from them as well as helping them.
1st level conversations
Helpful staff often solve customer problems. This is great, but the same staff sometimes lack the ability to capture and share how they helped, and whether there is an opportunity to be proactive in the future.
I like to think that part of the “sense and respond” in an agile team is to somehow sense what customers are experiencing and synthesise this into new solutions and remedies for old solutions that are not working so well.
Let’s assume this is happening though. Let’s assume that someone talks to customers when they contact the organisation and that the team gains some insights from this.
Sometimes though, the client asks a question that we cannot answer, even if we look it up in our big book of team knowledge.
Not only that, but similar situations arise internally.
Good testers ask annoying questions that go beyond the scope of checking if a story meets its definition of done. They discover something odd or intriguing, or they might even discover a bug or odd feature in an unrelated part of the system.
Peer reviews of code and of stories can also highlight points of curiosity not related to the subject at hand – things that neither help us code the specific story we are working on, nor help us break down a story in our backlog.
What should happen to these requests?
The deliberate ignorance strategy
Unstructured requests can involve an unknown amount of investigation. This is kind of like a detective investigating a crime before there is a clear mandate to do so.
I guess one approach to solving these mysteries is to simply ignore them so we can focus on our more concrete work and our existing commitments. This is more of a focus on “plan and execute” than “sense and respond,” but for busy people it is a tempting option.
There are several well-tried approaches to clearing these mysteries out of the way. They each help us maintain our velocity, but at the cost of also maintaining our ignorance.
One trick is to just add vague things to our backlog and then move on. We can then say that an investigation or request is “on the list of things to look at”, knowing that we will not in fact ever have time to properly understand the issue. Even if it bubbles up again and comes to the top of the list, we will not understand the context that is needed to actually investigate it properly.
When I put it like that it seems like a sub-optimal approach, but I see teams doing it from time to time.
A better approach is to be honest and tell people that you are not going to prioritise the analysis of this mystery. Instead you will focus on your team goals.
Single-point curiosity
A slightly different approach is to have a volunteer take on the role of investigator. This volunteer can be the scrum master, product owner, triage officer, service manager or whatever they are called.
This single person can then choose how much time they will spend helping unravel mysteries, versus the time they will spend managing the “backlog” of things the team has already committed to work on.
However, some mysteries cannot be solved without someone technical getting involved. That technical person needs to look at log files, look at code, consult the runes or do something else that helps unravel the mystery.
Perhaps this work is called a spike then? That is what I used to call it – where spike meant “any timeboxed detective work done by the team.” We added this to our wall of work but did not put points on it, instead just committing a limited amount of time for specific people to experiment away. We did not wait until a future sprint, so it usually caused our velocity to drop a bit, and we had to mention the spike in our stand-ups, which we were happy with.
But the term spike seems to mean something specific to a lot of people nowadays – they define it as “technical work needed to remove ambiguity from a story, create estimates or create a throw-away test of a possible solution before investing too much time.”
That means that we need to first come up with a story, prioritise it and then commission a spike, by which time the trail may have gone cold and we may not be able to solve the mystery.
Maybe call it an experiment then? No again – the client is not a scientist and there is no hypothesis to test or disprove yet. We do not yet know our hypothesis.
So maybe just call it an investigation.
I am happy to commit time to investigations during a sprint and drop my velocity, but I still think there is something missing. There should be both a resolution (or an acknowledged failure to resolve) and knowledge sharing with other team members. Doing this increases knowledge and future mystery-solving power – but again it distracts from velocity and the sprint goal focus.
A team of investigators
I remember a scrum master who reported to me put in place a “shield team” to protect the rest of the team from distractions caused by the support team, the business crew and me. Apparently my curiosity and requests for “a few minutes” risked wasting quite a bit of time.
The idea was that two people would volunteer (or be volunteered) to be the shield for the sprint. They would monitor things, cop requests and help the PO with investigations. They did not do so full time, but they had lower-priority stories to work on than others, so they could drop them to jump into investigations.
That approach worked really well, for that team.
It did require maturity for the shield team to remember what to do:
- Ask about the problem, knowing people did not fill out any template properly;
- Be curious, or escalate the panic if it is a symptom of a crisis;
- Commit specific time to resolve and a specific question to answer;
- Fix something or create a workaround and proper fix plan, especially if it is a “problem” and not an “incident/one-off query”; and
- Capture the learning to share with others.
I like the idea of having a mystery-solving team: dropping our throughput of features so that members of the team can take the time to stop, smell the roses, unravel mysteries and solve problems that others did not realise were problems. It will slow the delivery of new features and bug fixes, though.
What approach do you think your team should take here?