Sometimes my approach to coaching is to pay attention to what is going on, form a hunch and then act on it. Similarly, I sometimes just enter into conversations with people to hear their views and then discuss whether we should act on their hunches.
I believe that doing this can lead to some good insights and the ability to help others clarify their thinking, leading to even better insights and then meaningful action.
But my hunches are based on my own biases and on my ability to notice the signals that contain useful information, among the vast background of noise in the modern world.
So the team and I often seek to use data to help us make improvements beyond just a hunch about what might work and a feeling that we probably made a difference.
My goal in using data, then, is to create visibility of what is happening so that we can make improvements. This is a worthy goal and it often provides real value, as long as we actually use the data rather than serving it.
Data can be simple
Using data is often straightforward as long as we try to use the data to help ourselves and recognise when it has served its purpose before seeking new data. The trick here is to stay in tune with what benefit we want from the data and not get involved in overengineering the data collection itself.
For example, we want to know if we are a productive team or not. One thing that we can ask is “how long do things take to get done?”
Rather than building detailed flow models we can start with a quick look at the story wall. We might notice that things don’t all flow at the same speed and realise our question should be “What things take longer than others?”
Now we look into a different question, which then leads us to ask “Why do these things take us longer than those things do?”. We might then have a good discussion about when we are dependent on others or when we start work without really understanding what will be involved. We have now found a potential area for improvement.
A simple story wall can provide quite a bit of insight, as can a short, accurate backlog.
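To make the idea concrete, here is a minimal sketch of what "looking at the story wall" can become with a few lines of code. The story names, dates and cycle times below are invented for illustration; the point is only that sorting items by how long they took surfaces the "what takes longer than what?" question.

```python
from datetime import date

# Hypothetical story-wall export: (story, started, finished)
stories = [
    ("Login page",      date(2024, 3, 1), date(2024, 3, 4)),
    ("Payment API",     date(2024, 3, 1), date(2024, 3, 15)),
    ("Fix typo",        date(2024, 3, 2), date(2024, 3, 3)),
    ("Reporting batch", date(2024, 3, 5), date(2024, 3, 21)),
]

# Cycle time in days for each story
cycle_times = {name: (done - start).days for name, start, done in stories}

# Sort so the slowest items surface first - these are the ones
# worth asking "why did this take longer than the others?"
for name, days in sorted(cycle_times.items(), key=lambda kv: -kv[1]):
    print(f"{days:3d} days  {name}")
```

That is the whole "model": no tooling, no dashboards, just enough data to provoke a good conversation.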
But let’s say that instead, we wanted to log all requests and questions and activities into what becomes a huge backlog. The team might see that as a huge pool of potential data but I see it as a huge pile of, let’s say, noise. We could then build piles of reports on our pile of noise and create a lot of noise that shows us very little.
Data-phobes and data addicts
So data can be easy to use, and insight can come from a relatively small amount of information.
In fact we use data like this all the time in our lives. We look at the time to know when to turn up at a meeting, we look at the “time until the next train arrives” to know when the next train will come and we use our smart watches to decide if we should go for a walk to get some exercise. OK – the data that says you should get some exercise might not be useful, because we know the answer is always “yes – get some exercise,” but still, it does remind us that we should do something.
So humans are good at consuming data that is useful and simple to digest.
The use of data to improve the work we do can be similar, but psychology seems to get in the way.
Some people hesitate to look at data. If not terrified of it, they are at least worried that it will cause them to feel confused, overwhelmed or embarrassed. More than that, some people have memories of being interrogated about data that they did not really understand, making them feel foolish or inadequate in front of their peers.
People do not (mostly) fear clocks, because they know what the information means and how they will put it to use. The problem with things like cycle time, velocity, user churn and feature use funnels is that for many people they are vague things that might be useful, might be confusing or might just be a chance to feel stressed.
I guess in theory we could send people to data aversion therapy or start giving them rewards each time they successfully survive a meeting about data. That seems to be focused on the wrong problem though – instead of trying to coax people to use data that they don’t understand, we need to make the data easy to understand and at least potentially relevant to the people using it.
While some people seem to fear being hit by confusing data, others seem positively addicted to it.
They seek some data and then prepare a report on it. They present the data in all its glory and then start to discuss how there is still some data missing and that they will be able to get that together for the next meeting. The problem is that we are not seeing the “glory” of making improvements, we are seeing the gory detail of noise without improvement.
In fact I have seen people actually present things to me with recommendations, but when I ask what the data is that I am looking at, they do not know the answer. They have somehow come up with conclusions, allegedly based on evidence they neither understand nor took time to question.
A bad decision is still a bad decision, even when supported by numbers, and a biased opinion is still a biased opinion, even when there are multiple tables of data with multiple graphs showing things that seem complicated.
Is that really a thing though?
The problem is, I think, that people think data has power and so they either fear it or worship it. But the truth is that data is dumb and not at all powerful on its own.
Raw ingredients might have the potential to become a delicious cake, but they do not have the power to force a cook to present them in graphs, face questions from their peers and then find the food cooking itself perfectly. In the same way a good cook knows how to make use of the ingredients and what they are likely to get from cooking them, but the cook stays in charge.
No, I am not suggesting that we should use data to cook the books (though you can do so), but rather that we should not fear or admire data for its own sake. We should form an intent for how it can help us achieve what we want and then we should make use of it.
Starting with a goal
This is where an old technique called GQM comes in. I will not describe it fully here, but the core idea is this: you should not start with a metric and then decide on a goal; you should start with a goal before deciding on a metric or measure. (GQM stands for goal-question-metric.)
The first step is to stop asking “What are we going to measure?” or even the harmless-seeming question “How are you going to measure that?” until we define what we want to achieve. One of the greatest causes of data phobia and data addiction is the simple mistake of starting to measure something we do not understand.
To measure something we understand, like “how long until the next train gets here”, is possible, but measuring something obscure like “what is the average instance of train arrival with reference to our current temporal and physical location in the existing assumed timeline of our primary universe of existence” will probably not help me to work out when to get on the train.
“What gets measured gets managed, but if it is misunderstood then it gets managed badly” – James King, just then
Another problem with starting to measure something we do not understand is that humans seek certainty over uncertainty, so where there is an unclear goal and also something easy to measure, we will often measure what is easy and turn it into a goal. Once the measure itself becomes the goal then we will pursue the measure at the expense of value and common sense.
You may not believe me, but not long after I arrived in Sydney, people complained that the trains ran late. The government published timetables and measured how late trains were. It turned out trains were running late, so they instructed the drivers to skip stations if they needed to in order to catch up with the train timetable. The drivers were happy and the people in charge could report improvements. But apparently the passengers were a bit grumpy when they missed their station or they saw their train rocket past, on time but empty.
Anyway, I think I have made that point. So, back to my goal of suggesting that you start with a goal.
I know that train timetables are useful, but I don’t think I would use one unless I had a reason to know something about local trains.
A timetable is definitely useful if I want to know when to head to the station and I know roughly how long it will take to get to the station. So if my goal is “to know when to go to the station to get the next train without waiting a long time” then they are useful.
On the other hand, if I want to know “what time to catch the train if I want to get home before 6pm” then the arrival time of the train might be somewhat useful, but more useful is knowing how long the trip will take. In fact for me, the trains at peak hour run every few minutes but not all stop at my station. So knowing what time the next train leaves (in 6 minutes) is not that useful to me at all.
For my trip home each day, I just want to know that the train takes 30 minutes and is pretty frequent – plus a reminder to check what stations a train stops at before I jump onboard.
So I generally don’t check any data until I get to the station and even then the “minutes to the next train” or the “times trains arrive” is not as useful to me as it might at first sound.
On the other hand, if I am catching a train on a weekend (when there are fewer trains) and I want to get to a concert or event at a certain time, then I probably want to know both when the train leaves and how long it takes, so I can pick a train that gets me to the event on time without leaving me waiting longer than I want to when I get there.
Is it really “start with the customer” then?
So after all that – how do I know where to start with data? The answer is to not actually start with data.
The exception is where you are feeling curious and want to play with the data, forming hunches and then testing them out. If you want to do that then you can read my last article. But if you want to make data driven decisions then you probably want to know what decisions you are making before you start using data to make them.
So instead of asking “what should we measure?” you can ask:
- Who is going to use the data I want to collect?
- What are they going to use it for?
- If that is what they are going to use it for, what questions might they ask?
- What can I measure that can help answer some of those questions, or what information/data might help here?
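Those questions can be held in your head, but it sometimes helps to write the goal-question-metric chain down so the link stays visible. A minimal sketch, with an invented goal, questions and metrics (not a prescription):

```python
# A small GQM (goal-question-metric) structure. Every metric hangs
# off a question, and every question hangs off the goal, so no
# measure exists without a reason.
gqm = {
    "goal": "Reduce the time from starting a story to releasing it",
    "questions": {
        "Which stories take longest?": ["cycle time per story"],
        "Where do stories wait?": ["time spent in each wall column"],
        "Are we improving over time?": ["rolling median cycle time"],
    },
}

# Walk the structure top-down: goal first, then questions, then metrics.
print("Goal:", gqm["goal"])
for question, metrics in gqm["questions"].items():
    print(f"  Q: {question}")
    for metric in metrics:
        print(f"     M: {metric}")
```

If a metric cannot be traced back up to a question and a goal, that is a hint it may be noise rather than signal.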
Next you can start seeking the information and data that can help you (or your stakeholders) make better decisions or see the potential for improvement. You can also assess whether specific data are useful and worth collecting, rather than feeling obliged to do something with every number or data point that you see.
You can also start unpacking data that might be useful (or useless) by blaming the data for the gaps you have in it, rather than feeling guilty for not understanding it or being scared of it.