Should I replace my hunch with data?

Sometimes my approach to coaching is to pay attention to what is going on, form a hunch and then act on it. Similarly, I sometimes just enter into conversations with people to hear their views and then discuss whether we should act on their hunches.

I believe that doing this can lead to some good insights and the ability to help others clarify their thinking, leading to even better insights and then meaningful action.

But my hunches are based on my own biases and on my ability to notice the signals that contain useful information, among the vast background of noise in the modern world.

So the team and I often seek to use data to help us make improvements beyond just a hunch about what might work and a feeling that we probably made a difference.

My goal in using data then, is to use the data to create visibility of what is happening, so that we can make improvements. This is a worthy goal and it often provides real value, as long as we actually use the data rather than serving it.

Data can be simple

Using data is often straightforward as long as we try to use the data to help ourselves and recognise when it has served its purpose before seeking new data. The trick here is to stay in tune with what benefit we want from the data and not get involved in overengineering the data collection itself.

For example, we want to know if we are a productive team or not. One thing that we can ask is “how long do things take to get done?”

Rather than building detailed flow models we can start with a quick look at the story wall. We might notice that things don’t all flow at the same speed and realise our question should be “What things take longer than others?”

Now we look into a different question, which then leads us to ask “Why do these things take us longer than those things do?”. We might then have a good discussion about when we are dependent on others or when we start work without really understanding what will be involved. We have now found a potential area for improvement.
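If the wall happens to be digital, a quick throwaway script can answer "what takes longer?" without building detailed flow models. This is a minimal sketch with invented stories and dates, assuming we can export each story's type, start date and done date:

```python
from datetime import date

# Invented export from a digital story wall: (title, type, started, done).
stories = [
    ("Login page",     "feature",    date(2024, 3, 1), date(2024, 3, 4)),
    ("Fix typo",       "bug",        date(2024, 3, 2), date(2024, 3, 3)),
    ("Payments API",   "feature",    date(2024, 3, 1), date(2024, 3, 12)),
    ("Vendor upgrade", "dependency", date(2024, 3, 1), date(2024, 3, 15)),
]

# Cycle time in days, grouped by story type.
by_type: dict[str, list[int]] = {}
for title, kind, started, done in stories:
    by_type.setdefault(kind, []).append((done - started).days)

for kind, days in sorted(by_type.items()):
    print(f"{kind}: avg {sum(days) / len(days):.1f} days across {len(days)} stories")
```

Even a rough grouping like this is enough to start the "why do these take longer?" conversation, which is the point of the data.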

A simple story wall can provide quite a bit of insight, as can a short, accurate backlog.

But let’s say that instead, we wanted to log all requests and questions and activities into what becomes a huge backlog. The team might see that as a huge pool of potential data but I see it as a huge pile of, let’s say, noise. We could then build piles of reports on our pile of noise and create a lot of noise that shows us very little.

Data-phobes and data addicts

So data can be easy to use, and insight can come from a relatively small amount of information.

In fact we use data like this all the time in our lives. We look at the time to know when to turn up at a meeting, we look at the “time until the next train arrives” to know when the next train will come and we use our smart watches to decide if we should go for a walk to get some exercise. OK – the data that says you should get some exercise might not be useful because we know the answer is always “yes – get some exercise,” but still, it does remind us that we should do something.

So humans are good at consuming data that are useful and simple to digest.

The use of data to improve the work we do can be similar, but psychology seems to get in the way.


Some people hesitate to look at data. If not terrified of it, they are at least worried that it will cause them to feel confused, overwhelmed or embarrassed. More than that, some people have memories of being interrogated about data that they did not really understand, making them feel foolish or inadequate in front of their peers.

People do not (mostly) fear clocks, because they know what the information means and whether they will put it to use. The problem with things like cycle time, velocity, user churn and feature use funnels is that for many people they are vague things that might be useful, might be confusing or might be just a chance to feel stressed.

I guess in theory we could send people to data aversion therapy or start giving them rewards each time they successfully survive a meeting about data. That seems to be focused on the wrong problem though – instead of trying to coax people to use data that they don’t understand, we need to make the data easy to understand and at least potentially relevant to the people using it.

Data addicts

While some people seem to fear being hit by confusing data, others seem positively addicted to it.

They seek some data and then prepare a report on it. They present the data in all its glory and then start to discuss how there is still some data missing and that they will be able to get that together for the next meeting. The problem is that we are not seeing the “glory” of making improvements, we are seeing the gory detail of noise without improvement.

In fact I have seen people actually present things to me with recommendations, but when I ask what the data is that I am looking at, they do not know the answer. They have somehow come up with conclusions, allegedly based on evidence they neither understand nor took time to question.

A bad decision is still a bad decision, even when supported by numbers, and a biased opinion is still a biased opinion, even when there are multiple tables of data and multiple graphs showing things that seem complicated.

Is that really a thing though?

The problem is, I think, that people think data has power and so they either fear it or worship it. But the truth is that data is dumb and not at all powerful on its own.

Raw ingredients might have the potential to become a delicious cake, but they do not have the power to force a cook to present them in graphs, face questions from their peers and then find the food cooking itself perfectly. In the same way a good cook knows how to make use of the ingredients and what they are likely to get from cooking them, but the cook stays in charge.

No, I am not suggesting that we should use data to cook the books (though you can do so), but rather that we should not fear or admire data for its own sake. We should form an intent for how it can help us achieve what we want and then we should make use of it.

Starting with a goal

This is where an old technique called GQM (goal-question-metric) comes in. I will not describe it fully here, but rather say that you should not start with a metric and then decide on a goal; you should start with a goal before deciding on a metric or measure.
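To make the ordering concrete, GQM can be jotted down as a simple top-down structure before any measurement starts. The goal, questions and metrics below are invented examples, not part of the technique itself:

```python
# Goal-Question-Metric written top-down: the metrics are chosen last.
# The goal, questions and metrics here are illustrative only.
gqm = {
    "goal": "Reduce the time customers wait for the features they ask for",
    "questions": {
        "How long does a story take from commitment to release?": [
            "cycle time per story (days)",
        ],
        "Where do stories wait the longest?": [
            "time spent in each wall column (days)",
            "count of stories blocked by another team",
        ],
    },
}

# Reading it back keeps every metric tied to a question and a goal;
# a metric with no question above it is a warning sign.
for question, metrics in gqm["questions"].items():
    print(f"{question} -> {', '.join(metrics)}")
```

Writing it this way makes it awkward to add a metric that answers no question, which is exactly the discipline GQM is meant to give you.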

The first step is to stop asking “What are we going to measure?” or even the harmless-seeming question “How are you going to measure that?” until we define what we want to achieve. One of the greatest causes of data phobia and data addiction is the simple mistake of starting to measure something we do not understand.

To measure something we understand, like “how long until the next train gets here”, is possible, but measuring something obscure like “what is the average instance of train arrival with reference to our current temporal and physical location in the existing assumed timeline of our primary universe of existence” will probably not help me to work out when to get on the train.

What gets measured gets managed, but if it is misunderstood then it gets managed badly

James King – just then

Another problem with starting to measure something we do not understand is that humans seek certainty over uncertainty, so where there is an unclear goal and also something easy to measure, we will often measure what is easy and turn it into a goal. Once the measure itself becomes the goal then we will pursue the measure at the expense of value and common sense.

You may not believe me but not long after I arrived in Sydney, people complained that the trains ran late. The government published timetables and measured how late trains were. It turned out trains were running late, so they instructed the drivers to skip stations if they needed to in order to catch up with the train timetable. The drivers were happy and the people in charge could report improvements. But apparently the passengers were a bit grumpy when they missed their station or they saw their train rocket past, on time but empty.

Anyway I think I have made that point. So back to my goal of suggesting that you start with a goal.

I know that train timetables are useful, but I don’t think I would use one unless I had a reason to know something about local trains.

A timetable is definitely useful if I want to know when to head to the station and I know roughly how long it will take to get to the station. So if my goal is “to know when to go to the station to get the next train without waiting a long time” then they are useful.

On the other hand if I want to know “what time to catch the train if I want to get home before 6pm” then the arrival time of the train might be somewhat useful, but more useful is knowing how long the trip will take. In fact for me, the trains at peak hour run every few minutes but not all stop at my station. So knowing what time the next train leaves (in 6 minutes) is not that useful to me at all.

For my trip home each day, I just want to know that the train takes 30 minutes and is pretty frequent – plus a reminder to check what stations a train stops at before I jump onboard.

So I generally don’t check any data until I get to the station and even then the “minutes to the next train” or the “times trains arrive” is not as useful to me as it might at first sound.

On the other hand, if I am catching a train on a weekend (when there are fewer trains) and I want to get to a concert or event at a certain time, then I probably want to know both when the train leaves and how long it takes, so I can pick a train that gets me to the event on time without leaving me waiting for longer than I want to when I get there.

Is it really start with the customer then?

So after all that – how do I know where to start with data? The answer is to not actually start with data.

The exception is where you are feeling curious and want to play with the data, forming hunches and then testing them out. If you want to do that then you can read my last article. But if you want to make data driven decisions then you probably want to know what decisions you are making before you start using data to make them.

So instead of asking “what should we measure?” you can ask:

  • Who is going to use the data I want to collect?
  • What are they going to use it for?
  • If that is what they are going to use it for, what questions might they ask?
  • What can I measure that can help answer some of those questions, or what information/data might help here?

Next you can start seeking the information and data that can help you (or your stakeholders) make better decisions or see the potential for improvement. You can also assess whether specific data are useful and worth collecting, rather than feeling obliged to do something with every number or data point that you see.

You can also start unpacking data that might be useful (or useless) by blaming the data for the gaps you have in it, rather than feeling guilty for not understanding it or being scared of it.

Hunches, spirals, data and ongoing improvement

In my last article, I gave some thought to using data. Rather than saying you should use data, which you should, I looked briefly at the limitations and dangers of using data.

This time though I thought I would start looking at how to use data for coaching teams and finding ongoing improvement. I use different approaches to doing this, depending on the mood I am in (though officially it is based on situation assessment, alignment of best practice to local context and ancient geomancy-based methodologies).

Sometimes I do a full data collection and audit, sometimes I start with a goal and then generate questions and sometimes I just observe and wait for insight.

Observing and waiting for insights

When I am observing and waiting for insight, I try not to come up with an initial hypothesis or goal before I start working. Instead I might have a high level goal such as “work with the team and see if we can do some good stuff.”

If this sounds dodgy then let me introduce you to a legitimate and highly effective approach used in education – the “Spiral of Inquiry”. I am going to loosely refer to it here to justify my relaxed approach to coaching, but you can learn about it here (PDF), or if you want a whole playbook on using it, here (PDF).


When I start coaching, I often ask leaders what they want from the team, but there is a danger that

  • The leaders need coaching too, both enabling and constraining the team while trying to make things better;
  • The team is a complex thing in a complex environment and the leader is simplifying things for me too much; or
  • There are people in the team who are already doing really good work, but they have not been unleashed.

In this situation my first step is to act confident (saying I am sure I can help) while also being nervous about making any commitments.

My next step is to go and see the team in action. In this situation I sometimes have to put my own bias aside. For example:

  • I might hear that the team has implemented the best agile framework on the planet (obviously) but that the team members are resisting change, causing me to expect trouble and overlook what is actually working.
  • I might have coached the team before, causing me to expect they are doing good stuff and trying hard.

So I try to just observe. You can call this “the mind of the child” or just listening without judgement, but the effect is the same. I just start to interact and see what is happening.

In the Spiral of Inquiry this is called “scanning.”


Next I want to work out where I can help, which means focusing on things that might be important.

This could be identifying a key stakeholder and asking what they are hoping for, or it could be taking notes on what seems to be important and then selecting something to dig into further. Sometimes I use a sophisticated approach like Perill to explore things in detail, but other times I just use my gut instinct.

Let’s assume I am just using my gut instinct. I still want to convert “a feeling” into something I can verbalise. So I ask myself 3 questions several times while I am observing:

  1. What is going on here?
  2. Why is it important?
  3. How do I know?


These general questions lead me to start forming hunches, such as “run very far away” or “maybe they need some help with their team ceremonies.”

I try to put these rough thoughts into a structure so I can think more deeply – I might ask more questions or just try to create a sentence:

  • What is most important here?
  • What is the challenge or opportunity for me to help?
  • What do I want to see happen? What do I think the people I am helping want?
  • What now?

Now I might have a hunch such as “these people need help defining what they want to work on. Their sprint goals seem flaky and they seem to be coping with random requests rather than getting closer to a goal.”


Finally you think – I might gather some data.

Kind of – but what I actually do is share my hunch.

Now that I have shared my hunch I might ask some questions to gain opinions, or I might look at some evidence (watching ceremonies, looking at what happens to a story as it is realised into production or maybe looking at the data a team has).


I don’t study it to really understand in detail though. I gather enough evidence to gain the confidence to take some kind of action.

I might run a quick session in avoiding Geomancy and actually testing stories to decide on when the story is done, or I might help the team break a couple of stories down. Whatever it is I act on the assumption that my hunch might be right but that I might still be wrong.


Since I honestly hope that my actions were helpful and that the team has tried something new that might help, I want to find out if we were right. So I want to check in quickly to see if the action helped confirm my hunch and helped make things better.

This is where I will again use some kind of evidence (asking people, running a retro, checking if things got faster/safer/easier, etc.).

Since I might be wrong I also want to check if the new action was worth doing and if it had any unintended consequences.


Now that the team and I are learning something, we might start defining proper hypotheses and establishing better data. Just as often though we go back to the beginning and I start observing/scanning again.

Often I want to build on what we started, but just as often I notice new things that are going on, then form new hunches to share.

What do you think?

I have managed to get some really good results using this “hunch based” approach, with data and evidence coming AFTER I have a hunch. I have also been told more than once that the approach is too informal and is not really repeatable.

Of course there are also times when I use a different approach – maybe I will share more on those approaches next time.

I believe that this “gut feel”, “hunch-based” approach can help to create “generative conversations” where the team gets more used to questioning, inquiring and sharing hunches. Demonstrating that we can act on incomplete data and be wrong some of the time can be powerful.

What do you think – dodgy or a potentially effective approach?

Using data is good but watch out

A lot of product teams use a lot of data and I think that is great. I am even in the process of helping some teams better use the data that is available to them.

However, doing so has reminded me that “using data” and “benefiting from using data” are different things.

This article is about how using data might not be useful and why, based on my experience and the opinions of others (and absolutely no actual data), you should take some steps to protect yourself from being the victim of misleading data.

Any data appearing here is of a dubious nature

I have said that the steps I recommend are not based on “actual data” but that statement is “actually” subject to interpretation.

Often when I see data it is based on unclear goals or questions and therefore may not actually be relevant. Similarly, I sometimes see data that is based on something subjective but has been turned into a number, and thus it is like a well-laundered opinion.

The evidence is dirty and unreliable but it has been recreated to appear objective.

So a more accurate statement about data in this article is that it is of a dubious and unreliable nature. Additionally, where it might be accurate, it is not directly going to support the steps or recommendations I provide – there might be data but it does not support the conclusions that I share.

Does that happen outside the world of the blog article?

Just as I might rely on dubious data to support my recommendations here, I see teams in the real world trick themselves into doing the same thing.

The first cause of this, responsible for approximately 38.2% of all suffering in agile teams, is the desire to turn something subjective into something objective.

For example, a team uses story points to measure velocity. It helps them guess how much work they can commit to in the next sprint.

Then a well-meaning manager wants to look at team performance and sees the same number. She decides that if there are 5 people in the team and the velocity is 30, then each team member probably produces 6 points. Thus if someone has their name against 9 points of stories, they are performing above expectations, and if someone else only has 3 points then they are not doing so well. There is no malice involved but the number being used does not actually represent the thing being judged.

Worse though, the existence of a clear and dubious number causes changes in the team’s behaviour. People start to compete to score points rather than collaborate.

Fortunately the manager sees this change in behaviour and takes action to stop it, relying on their instinct and the existing trust in the team. Phew.

But the team is not yet safe – somewhere in the organisation, there is someone looking to compare the delivery of teams across their portfolio. It is so hard to know which teams are going fast and which teams are going slow. In the dark ages people had function points but they relied on the mystical work of the function points priests and sorcerers. But wait – all teams have story points and they are a measure of velocity. So teams with a high velocity are going fast, or so the portfolio analyst thinks to himself.

Again – the subjective points used to guess how much work to throw into the next sprint are, indeed, a number, but they are not an objective number representing speed or velocity. A point is not a kilometre of work, nor even a centimetre of work; it is a guess about how much a team can commit to in a week. 100% of attempts to use an arbitrary and basically random number to measure speed result in wasted effort and 42% of those attempts result in poor decisions. Fortunately, 58% of the time the people relying on comparative velocity are merely using it for gossip and not for actionable decision making, so this practice is relatively harmless.

You get my point though, the use of points to make decisions about performance or cross-team comparisons is pointless.
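A toy calculation shows why the cross-team comparison fails. The teams and numbers below are invented; each team calibrates its own point scale, so the "faster" team is an artefact of scale, not speed:

```python
# Two invented teams delivering the same number of similar stories.
# Team A calls a small story 1 point; Team B calls the same story 3 points.
team_a = {"stories_done": 10, "points_per_story": 1}
team_b = {"stories_done": 10, "points_per_story": 3}

velocity_a = team_a["stories_done"] * team_a["points_per_story"]
velocity_b = team_b["stories_done"] * team_b["points_per_story"]

# Identical delivery, "triple" the velocity: the comparison measures the
# scale each team chose for its points, not how fast the team is going.
print(f"Team A velocity: {velocity_a}, Team B velocity: {velocity_b}")
```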

Resulting advice

Based on this subjective anecdote, I think there are some obvious steps that we can take:

  • Realise that not all data are relevant, or at least not relevant to the question you are addressing.
  • Ask yourself what you want to decide or understand before selecting relevant data to use, rather than seeing data and trying to make it work.
  • Separate data used for curiosity (playing with the data to see if something reveals itself) from data used to answer specific questions (having a goal or hypothesis). Attempt to have “the mind of the child” when playing with data and avoid prejudgment. Similarly, attempt to ensure the relevance of data when using it to test a hypothesis or report on something.

If you want to learn more about these dubious areas, google “Selection bias”, “hindsight bias” and “confirmation bias” which undermine the use of data and result in 45% of irrational conclusions that look entirely rational.

When things do not add up

One of the best things about data is that it can reveal things that we have not noticed. Data can overcome our own subjective biases. But the opposite is also true. Data can be wrong where subjective experience is right.

If you want to have good data insights about customers, try running your conclusions by your customer support people. Sometimes you will have evidence that they have not seen, but often they also have direct observations and experience that your data do not account for. It is easy to assume that good, logical data and algorithms outperform experience and hunches, but that is not always the case – sometimes hunches and lived experience see what is invisible or misleading in the data.

That is an easy one, but it can result in people telling you that you are wrong, so 59% of product teams avoid validating their views with those who deal with customers every day. Similarly, many managers and HR teams base policies on recommendations based on data from other companies (say, Google), but don’t run the ideas past their own staff. While HR and managers do not collect data on this (79% of HR people are bad at using statistical inference, and this is totally not a stereotype or bias, it is a number), I think this is still prevalent (no supporting data).

Laundering subjective data

Laundering money is where money you don’t want to explain is converted into money you can explain, through dodgy practices and criminal activity.

Most of our teams would not participate in laundering money, but do we launder data? Let me look at one experience I had.

I asked people in a retrospective whether things were going well. Most said yes, or gave a non-committal answer. So I decided to get a more quantitative and objective view. I asked people to score things out of 5. We got a real number – in fact, even better, it was a positive rational number. Let’s say that the average rating was 3.9.

I then thought about comparing that number over time – if it went up then things are improving and if it went down then things are going the wrong way. The graph is easy to do and I can easily report it.

Now I have a rating that might look objective. I can clearly show that an improvement from 3.9 to 4.1 is a growth of 0.2 happy factor points. However it is still from the same source as the “I guess things are good” rating. The number looks objective but it is really a different representation of the same thing. It is still subject to peer group pressure, differing definitions of good, different internal attitudes to how to rate things and so forth.
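The laundering step is easy to see in code. The ratings below are invented, and the average that comes out looks precise to one decimal place, yet every input is still a subjective guess out of 5:

```python
# Invented retro ratings: each is still "I guess things are good"
# converted to a number under peer pressure and private definitions of good.
ratings = [4, 4, 3, 5, 4, 3, 4, 4, 4, 4]

average = sum(ratings) / len(ratings)
print(f"Team happiness: {average:.1f} / 5")  # looks objective; is still opinion
```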

I am not against the agile fist of five or the converting of opinion into numbers for discussions or surveys. My suggestion though, is to remember that it is still subjective.

So the step here is to ask what something actually means and where it came from. When you see an NPS, a velocity or an engagement score, be aware of the source and collection of the number. It might be useful but it should not gain credibility just because it has been “laundered” from opinion into a number or graph.

The lazy Bureaucrat

Measures are good because they support decision making. “What gets measured gets managed” as they say.

Measures are also good because they change behaviours. A team that sees something will react to it, so showing where their effort is being spent or the number of bugs that are being created can help them to adjust their behaviour to improve the future.

But there is another saying – what gets measured gets gamed.

I once worked with a customer support administration area that was tracking the time taken to resolve customer issues. An executive set a goal of 5 days when the current average was close to 20. The numbers improved quickly and people were happy. But I was then investigating some complaints from a customer as part of a warranty support role I played. I found they had been waiting weeks for answers to previous problems but that we were reporting that their requests were resolved within days.

When I checked, I found that a request could be put on hold if it was “awaiting information” which was there to account for customers not responding to requests for information. I was shocked to discover though, that in multiple work teams, a request was automatically put into the status of waiting for information if it was about to breach a service level agreement (SLA).

This offended me and I escalated to the head of the department. I have softened my view since then though – I think the practice is still outrageous, but I think the error is more systemic than moral now. I think many good people have committed “minor” sins to keep the boss happy without really pausing to think about the impact.

This has to do with 2 (or more) related things. The first is not seeing the impact of the incorrect number and the second is the avoidance of short-term stress when there is no clear resolution.

So the step here is to anticipate that every measure risks becoming a goal and that when it becomes a goal it is likely that people will forget the original goal and find the quickest, easiest and least painful way to achieve the (now gamed) measure.

So when you go to use any number or data for anything beyond a single use, consider the “lazy bureaucrat” who will find the easiest way to achieve the score, potentially without achieving the goal. Ask yourself “If someone wanted to achieve these numbers with the least effort possible, what would they do?”.

I got that test from the book “Upstream” by Dan Heath and it clarified something I had been observing for many years. Dan Heath also asks, in the same book, several other questions that influenced this article.

Another is “what are the unintended consequences of this?”, or more specifically “what if we achieved these short term measures but actually created a poor outcome – what could explain that happening?”

So the step here is to first ask these questions and then to also consider a “countermeasure” or secondary measure that keeps you alert to unintended consequences of behaviours changed by your measures.
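For the support example earlier, one possible countermeasure is a simple audit of status changes. This sketch uses invented field names and data, flagging requests that were put into "awaiting information" suspiciously close to the SLA deadline:

```python
from datetime import date, timedelta

SLA = timedelta(days=5)  # invented service level: resolve within 5 days

# Invented request log: (request id, date opened, date moved to
# "awaiting information", or None if never put on hold).
requests = [
    ("R1", date(2024, 5, 1), date(2024, 5, 2)),  # held early: plausible
    ("R2", date(2024, 5, 1), date(2024, 5, 5)),  # held the day before breach
    ("R3", date(2024, 5, 1), None),              # never held
]

# Flag holds that landed within a day of the SLA deadline - a pattern
# worth auditing, since it may be the measure being gamed.
suspicious = [
    req_id
    for req_id, opened, held in requests
    if held is not None and (opened + SLA) - held <= timedelta(days=1)
]
print(suspicious)
```

A flagged request is not proof of gaming, but a cluster of them answers the lazy bureaucrat question before the measure quietly replaces the goal.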

A rising tide lifts all ships

The final step in this growing article is to look at what else might explain the data we see.

There is a saying that a rising tide lifts all ships. In other words the captain of the ship is doing nothing, yet the ship is rising.

No great shock there, I guess, but the saying has broader implications.

  • A company’s share price might go up and the executives celebrate their leadership and the success of their latest efforts, but the whole market might have risen at the same time.
  • A home-owner might spend $100,000 on renovations and then sell their house for $500,000 more than they bought it, but the market went up while they were renovating. How much of the increase in price was market related and how much was the renovation?
  • Staff turnover might go down because of the time of year, or the outside market, while agile coaches think it is because of the learning culture they are building.
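The renovation example can be made concrete with invented numbers. Once the market movement is separated out, the apparent $500,000 windfall changes character entirely:

```python
# Invented numbers for the renovation example.
purchase_price = 800_000
renovation_cost = 100_000
sale_price = purchase_price + 500_000  # sold for $500k more than purchase

market_rise = 0.5  # suppose the local market rose 50% over the same period
market_effect = purchase_price * market_rise
renovation_effect = (sale_price - purchase_price) - market_effect

# The "gain" attributable to the renovation exactly matches its cost:
# the renovation broke even, and the rising tide did the rest.
print(f"market: ${market_effect:,.0f}, renovation: ${renovation_effect:,.0f}")
```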

OK – so this is the step that comes from the saying about rising tides. In complex systems most things have multiple causes and most causes have multiple effects. I cannot give a number on what “most” means here because 85% of attempts to use statistics and measures in complex adaptive systems fail to account for the complex and adaptive parts of the system and are thus simplistic numbers to make collectors feel good rather than predictors of outcomes.

Anyway – the step is this. When you look at a result or number, ask yourself:

  • What else could explain this?
  • What might someone else attribute this change to?

Ignorant and curious or worldly and wise

I spoke to someone about being a Product Manager and what made them successful in the role. Of course, whole books have been written about the topic so I don’t think I can do it justice in one blog article.

One topic stood out for me though – the value of a wise and experienced Product Manager (and the risk).

Product Managers can add enormous value to an organisation if they really understand the organisation’s customers AND they really understand how to manage product development AND they really understand the organisation itself.

It can be hard to be an effective Product Manager if you fall short on any of these. For example, even if you know your customers and know how to build a product, but do not know how to get things done in your organisation or how the internal stakeholders interact, you will struggle to do great work.

This suggests that we want Product Managers who are both skilled and knowledgeable.

But what if the reverse is true? What if expertise is a weakness in the role rather than a strength?

  • What if expertise means you know what to look for, and since you know what to look for you miss things?
  • What if people turn to you because you are an expert and they take your word for things rather than challenging you or creating their own hypotheses from their observations?

The case for ignorance

One of my strengths as a coach is my frequent, complete ignorance. My lack of a well-educated opinion means that I must seek to understand what others think. My foolish questions and reflecting back of answers can sometimes uncover new insights, even for the experienced coachee.

Can this work for a PM though?

As a new PM, you need to establish credibility, and complete ignorance is not the best approach.

In fact, expertise is great in any field. It feels good to be respected and learning to master something is extremely fulfilling in its own right. It also creates credibility with stakeholders and helps to drive good decisions.

Sometimes though, expertise means you don’t need to listen to others. Instead you can educate and advise them.

I know – we should always listen, but sometimes it is good to actually hear a single, clear view of the way forward. I guess, in theory, the PM should listen to all views and then demonstrate the sagacity of King Solomon in determining the way forward. In practice it is sometimes better to have a consistent view that we can test and respond to.

This presents a challenge for a new PM, or a PM crossing from one field to another.

However, the expertise and competence of the PM also come at a cost – one that the new, ignorant PM is incapable of paying.

So – if the new, potentially ignorant PM is neither experienced nor competent in a field – how can their ignorance be an advantage?

Firstly, people speak up more when they have to. Knowing that someone else has the answer tempts people to guess what that person will say, rather than to work out what they think the answer could be. It also means that people will question the answers less.

This is a clear gap – if we lock in the assumptions of the expert and never challenge that expertise, then we have essentially replaced the scientific approach to product management with the high-priest approach practiced back in the days of the pyramids. While that worked for building pyramids, I am not sure it works in a competitive, rapidly evolving product ecosystem.

There is another, often unseen, advantage to ignorance too, as long as it comes with humility and curiosity.

While displaying expertise can build credibility, listening to others and recognising their contribution can also do wonders for building strong relationships and coalitions. So the curious, open-minded (ignorant) PM can create a great product coalition by making sure that the different views of stakeholders are honestly shared and understood – not for judgement but for exploration.

So expertise can shut people down, while a willingness to display ignorance (with humility and curiosity) can build coalitions and wise crowds.

Which is better?

Which is better then – expertise or ignorance? It stands to reason that we want to work with experts, but at the same time I think we under-utilise our ignorance in all the roles we play.

So maybe the best answer is to start with some expertise and then leap on every chance to show that you learned something – not just what you learned, but who you learned it from and what the process was.

This is not about faking it (either expertise or ignorance) it is about using whichever tool is available to you at the time. At least that is my expert opinion πŸ™‚

Creating credibility

I remember my father talking about trust – he said “Whether you choose to trust someone or not, you will generally prove yourself right.” Based on this, and some recent reading I have done, my last article was about where to start with trust.

A quick warning about this article

This is an article about creating credibility, so I guess I should start with being trustworthy.

If you know me, you probably realise that no article from me that starts with the words “I remember my father talking about …” will be both short and succinct.

So in the interests of being honest, I want to warn you that this is a long article (2,000 words).

My intention is to bite off a large idea – that of building real and lasting credibility without needing amazing charisma or extraordinary abilities.

Based on my last article

Whatever follows in this article, I think that building trust starts with two things that are under your control. One is to start with yourself and the other is to take the risk of going first. I think these same rules apply to creating credibility.

Start with yourself

If you want others to trust you and find you credible, you need to think about whether you trust yourself and find yourself credible. You need to decide if you are worthy of the trust of others and to ask the, perhaps tough, question of whether you present to others someone they can trust.

This is a tough one, but the good news is that you do not need to master it, just to start reflecting on it and being willing to start showing yourself respect as you grow. Whether you are a bit cocky or wracked with doubt, you can follow the remaining steps here and you will (I believe) develop more authentic self-respect as your credibility is established.

The rule of going first

The other starting point is to believe that trusting others is a good way to go. You can trust yourself, but you also have to show that you are willing to trust others. You can show you are willing to expect high standards of others, but you still need to show you will trust them before they will really be willing to trust you.

In fact, I think there is a general rule about “going first” that applies here.

If you want to be trusted, a good starting point is to demonstrate that you trust others. Doing so will generally create a conversation where the other person feels some agency and also starts to share their trust in return.

The rule also applies to both empowerment and accountability. If you want empowerment or you want others to be accountable, it is generally good for you to be the first one to take the risk. Try being empowered and accountable and try empowering others before you know they will deliver.

In a similar way, if you want to be respected, a good starting point is to show (authentic) respect for others.

A cynical observation

I know that some people seem to be able to bully, harass and belittle others and still get their way, but I think this is more often an illusion rather than a way to create respect.

I believe that it is more about the arrogant person being, already, in a position of power. I do not think they actually earn respect as much as wield power, on the assumption the tide will not turn on them.

But the tide often does turn – pride cometh before the fall, as they say. When the once powerful stagger, it is surprising how quickly and dramatically they fall if they lack the respect, trust and, especially, the credibility needed to convince others to support them – when they no longer wield a big, threatening stick and must turn to others for help rather than demand favours from them.

Instead – real respect comes from listening to others and being willing to see that they have strengths and experience. This simple starting point is enough to create a basis for being credible yourself.

Back to the main story

So – respect is earned, partly by showing respect; and respect is closely related to being credible.

I do think though, that there is a subtle difference between respect and credibility. I think both involve extending respect to others and acting in a way that inspires trust, but there is a difference.

I respect Venus Williams for her tenacity, talent and her track record of winning at tennis; but if Venus Williams attempted to give me advice about how to build a nuclear reactor, or how to configure Jira for a team to use in tracking their work, I would need some convincing before I found her credible in that space.

And so the hard work begins

So how do we establish the credibility that is relevant in convincing teams to trust and rely on you?

Regardless of whether you are a coach, a new product manager or a people leader, I think it is important to build credibility and not just to be liked. Doing so will make life easier when you ask others to commit to a goal, or to share their views and listen to yours. Plus, I think, being taken seriously is good for the soul.

One of the things that I loved, when re-reading “The Speed of Trust,” was the simple, actionable advice the authors give on creating credibility.

It all starts with us trying to define what “credibility” actually means. Think about what it means to you and then see if the following aligns to what you come up with.

For me this “formula for credibility” helped to break the term down and provide some insight into where to start in building credibility. The formula is represented as:

Credibility = Character + Competence

This is useful for both asking yourself why you trust someone (or not) and why you expect others to find you credible.

If you are dodgy and incompetent, maybe others should not rely on what you say – and if you think that others are dodgy and incompetent, you are probably not going to find them too credible either.

On the other hand if you are “of good character” and “really good at this stuff” then people are likely to feel confident in relying on you.

Let’s break it down further though, because there are some specific components of “credibility” that I think can help us come up with actionable insights rather than just a nice definition.

Being of good character

Being of good character is something that I feel is important, even if you or others feel the term sounds dated.

Once again though, we are looking at a topic on which whole books have been written.

For our purposes though, we can define character as:

Character = Integrity + Intent

So you are credible if I perceive you to have integrity and I trust that your intentions align with what I want (or at least accept as just).

But how will you judge my integrity and the relevance of my intent? Well, the same book helps us again. For our purposes, integrity can be defined as:

Integrity = Honesty + Congruence (acting in accordance with what you say)

So integrity becomes something concrete that we can demonstrate. We can tell people things that are true and we can act in a way that is “congruent”.

Congruent means that we act in line with what we say, and that our actions are consistent over time rather than seeming arbitrary or random.

On this basis, to boost your own credibility, you should:

  • Say things that can be shown to be true, in a way that people can understand. Check for understanding and answer questions about your intentions;
  • Act in a way that is consistent with your belief in those truths – for example, if I say empowerment is important then I need to actually empower people.

We can, however, break things down even further:

Intent = Motive + Agenda

So if we want people to be clear on our intentions, we need them to understand both our motives and our current agenda. We want to share both what drives us (what we are motivated to achieve) and what we intend to do in order to fulfil our motivations (our agenda).

That can, potentially, be scary – having no hidden motives or agendas. I guess you could create fake, marketable motives and agendas, but I believe others will see through that and your credibility will suffer as they start to see more cracks.

A bigger risk for me is that I know my motivations and agendas are reasonable in the context I am operating in, so I often assume others know what they are. Unfortunately, others cannot read our minds, and I have sometimes found I lack the ability to persuade others to do sensible things because they turn out to misunderstand (or not even guess) what my intentions are.

I guess if my agenda was to call others out in public to embarrass them and make myself look good, or my motives were to suck up to management and claim that the team are now doing what managers want, people should doubt my intentions. But more often, I have just failed to share my real intentions, which actually are to help the team improve or to grow the capability of someone I lead.

So I think this is actionable – if you want to build credibility, then explicitly and deliberately:

  • Communicate what you hope to achieve and why you might be motivated to do that;
  • Communicate what you intend to do and how it might fit your real agenda (which you also share).

But there is one component left – Do I have the competence to come up with good advice and to achieve what I suggested I will do?


Just as everything else was broken down into simple components, competence also has a formula:

Competence = Capability + Results

My concern about Venus Williams giving advice on nuclear reactors is that I do not think she would have the knowledge or skills to understand how to build one. It is not a question of character or intentions; I just don’t have confidence in her ability to design a reactor. On the other hand, if she gave me advice on playing tennis, I would be very confident in her advice.

OK, I guess I don’t have to worry too much about Venus Williams trying to get me to build a nuclear reactor with her, nor even giving me advice on tennis. But I think the same principles apply in coaching.

When you start with a team you want to create trust and establish your credibility. It is easy if people have strong evidence of your capability and have seen you deliver results. Without these though, it will be harder.

Rather than just writing this off though – I think there are some important lessons.

To build trust and credibility, we should still start with demonstrating trust and showing respect. We should also share our intentions openly and start to show people that we are true to our word (and that we act in accordance to what we preach). This will get us some momentum.

But then we need to get into the trenches as quickly as possible with people, to test ourselves in their real world – not “the” real world – because we need to show results and capability in the environment that the stakeholders are living in. I think we can only maintain our credibility for so long based on victories outside the organisation or certifications in being a guru.

So as soon as possible, help someone fix something, and then do it again. The “something” can be small or large, easy or hard, but they need to see us help them.

Once we get some results and expose our capability, we will build credibility. This, I believe, is true even if we have limited experience or skill and the results are limited. Once people can baseline what they will get then we can build from there, but if there is no baseline or no direct evidence, people are often not quite sure.

A note on Vulnerability versus Capability and Results

I believe that being vulnerable increases trust and even your credibility. This is probably because it shows honesty and a willingness to share your agenda and motives.

It would be far worse to hedge and dodge and “fake it till you make it” when confronted with challenges than to show vulnerability.

However, I also think that it is better to start showing some results, however small, and hopefully some skill or knowledge, before being too vulnerable, too often. I think seeing great people act vulnerable really increases their credibility, such as if I saw Venus Williams openly discuss a bad game.

However, I personally think that it can be dangerous to appear vulnerable and potentially not competent. I think that might set off some warning signs for people and leave them nervous.

This is not established theory, but I really do think it is good to get some small results and then show vulnerability, and then hopefully more capability and results coming through. I think vulnerability without results can make it more difficult to really build credibility when asking others to take risks. I am curious to know if that is just my thinking or if others have a similar/different view.

Putting it all together

So where does that leave us?

Well, trust is a big topic, but we can start establishing trust fairly effectively by showing trust and respect. This does not involve blind trust with rose-coloured glasses, but rather a propensity to trust, with a willingness to question and exercise judgement. We need to understand the agenda and motives of others, and so we should ask, with honest curiosity, about the intentions others have.

We also need to do a lot of work on ourselves in order to be confident that we are worthy of trust.

But the next step is fairly straightforward if we want to take small continuous steps. In order to provide others with a credible, trustworthy person, we can reflect on and focus on building our credibility.

To do so we can see a basic model of what credibility is:

Credibility = Character + Competence – as described above

This means that we should consciously “demonstrate” our credibility to others. While this can sound like a motherhood statement, I believe that it boils down to some basic habits, that are often forgotten in the rush to get work done.

  • Share your intentions with others
  • Act as though your intentions matter
  • Let others know when you expect a result or when you find out it will not be achieved
  • Ask others what their intentions are and share with them when they do not seem to be acting in a way that is congruent with them
  • Be honest and explicitly communicate more than you think is needed.

On the joy of coaching

This way of thinking also means that coaches are in an ideal place to help leaders and teams build their own credibility and help them question each other meaningfully to allow them to trust each other more.

Every time we expose the capability and results of any member of the team, we are helping them build credibility. Every time we help others make their intentions clear to each other, we build shared trust and mutual credibility. Every time we create the safety and forums to be honest with others, we again build trust and credibility.

This is important because if, as coaches, we help people establish their credibility with each other and we thereby increase mutual trust, we improve quality, speed and joy in any kind of work.

Trusting agile coaches

I was re-reading a book called The Speed of Trust and it had some reminders in it that are both energising and scary.

The book starts by setting a challenge that it should provide advice that is Timely, Relevant and Actionable. Great, I thought – I can apply that to any coaching advice that I give too.

But then it had a scary reminder about trust. The book referred to companies being trusted, but I am going to twist it into a comment about coaches:

If a coach’s teams do not trust them, then the coach does not have a sustainable value proposition

James King – twisting the words of Stephen Covey

It is stark but true – coaching breaks down if people do not trust the coach. The obvious conclusion is that good coaches are good at establishing trust and great coaches are amazing at establishing AND MAINTAINING trust.

I have read some books and spoken to some people who have referred to the “fact” that teams are likely to be intimidated by a coach, but this is not at all aligned with establishing and maintaining trust.

So how do we go about building trust and sustaining it for the duration of our relationship with the people we coach?

Apparently one way is to be good at trusting others, and being good at trusting others is a good step towards asking for trust in return. This is a little subtle though – it is not about blind trust (assuming, regardless of the evidence, that we can trust people), it is about Smart Trust.

Smart trust involves both

  • A belief that we should trust people – that our results will overall be better if we trust rather than distrust; and
  • Being good at analysis – observing and exploring why and when we are let down or rewarded for trusting people

OK, let’s assume that coaches are good at analysing situations, what about our belief that we should trust others IN THE REAL WORLD and not in a theoretical situation?

Confusing actions and intentions

It is surprisingly common to assume that others know our intentions were good, even when we do not tell them, or when our actions (due of course to circumstance and not our innate cruelty or laziness) do not suggest our intentions are all that good.

Unfortunately, it is probably just as common for us to judge others by their actions and then infer poor attitudes or intentions.

So the absolute starting point is to start with some self-reflection. I know, right – we do that too much already, but this one will only take somewhere between 15 minutes and 60 years.

An exercise

Start by assuming that you own some “trust glasses” that alter the way you see the world. They were created over time by your history, your existing beliefs and your experience with different people:

  • What kind of “glasses” would you say you have – rose coloured glasses that lead you to be too trusting? Dark glasses that cause you to see shadows and become distrustful?
  • Where did those glasses come from?
  • You will act on the basis of what you see – are the glasses you see through creating the right outcomes for you? Is your instinct to trust/not trust in different situations serving you well, or is it constraining your opportunity to experience the joy and success you want?

Too easy right? Maybe.

But this one is harder. We are moving from thinking to acting.

Others do not get to see your intentions, but they are constantly seeing your actions and they might mistakenly trust you too much or judge you too harshly based on only that evidence.

When you interact with people:

  • Do you believe that you are worth trusting?
  • Do you present to them someone worth trusting?
  • What evidence does the way you act (say, at work) suggest:
    • That you trust (or do not trust) others?
    • That you believe you are worthy of their trust?
    • That you believe and apply the principle that you should lead with trust to be successful, while also setting high, clear and achievable expectations?

Anyway – this was not an article about others trusting the coach, it was an article about the “Trusting Coach” demonstrating that they trust others – not naively and not only in theory – but in practice, they are demonstrating Smart Trust and building a coaching relationship from there.

Creating the credibility needed to get others to trust us comes next week – trust me.

The obstacle is the way .. or not

I just heard the quote “the obstacle is the way,” which I have heard more than once.

I think it was originally said by Marcus Aurelius about 2,000 years ago. He probably actually said something similar in Greek and may have borrowed the idea from someone else.

Either way I think the idea is that β€œThe impediment to action advances action. What stands in the way becomes the way.”

The idea is both intriguing and annoying to me at the same time.

I prefer planning based on optimistic ideas and a happy path scenario … until I begin implementing my plan/daydream.

Once I start work on my plans I inevitably find impediments and I am tempted to go and revise my optimistic plan to be a little less optimistic, but still similar. In fact though, this rarely works for me.

What does work surprisingly well is to come up with a dream and avoid (for the moment) the plan to get there. Instead I focus on what is currently stopping me from achieving my goal or aspiration. Inevitably, a good plan falls out of this because I can then plan how to remove impediments.

Once I know how to remove impediments, the rest is cruising.

Of course some impediments are not going to be removed. I used to accept them and then plan for them, but this is actually the critical point where most of my initiatives seem to succeed or fail … even if it takes me a few weeks to realise.

By focusing on the major impediment, or biggest obstacle, I get frustrated; then I go and have a cup of coffee, and then I come back and get frustrated again. Then suddenly I often realise there is a way forward – and that it is not my original goal that I should be pursuing, but some insight about tackling or working with the big obstacle.

Of course sometimes it should also tell me – this is not the way. For example if I want to catch a train and the train is cancelled, maybe I should abort my plan and then work from home. Maybe the obstacle is telling me “this is not the way – go another way.”

I don’t know if others have the same experience, but I sometimes regret not spending more time focusing on the big annoying obstacle – and more often I gain an unexpected insight and way forward once I really understand the obstacle and why it matters.

Staying in the game is winning too

I was retrenched a few years ago (maybe 20) and one of the benefits I got was working with a coach who helped me work out what I wanted to do and also some “boring things” like writing resumes, going to interviews and other skills that I actually lacked.

Applying for a job was something I had just never given any attention to but, like everything, there is an art and a craft that you can work on to become excellent at it. It is a bit like learning to pass exams at school – a seemingly mundane part of life that you can suck at or master. Exam technique was a skill that carried me through many courses that I knew less about than some of my peers, who were bad at it.

I never did master the art of applying for a job and I mostly rely on luck, hoping that someone I know will remember me just at the right time and then call me to say they need someone with my (broad rather than specialist) skills. Unfortunately I also suck at networking.

Maybe I should spend more time honing my survival skills to make sure I am resilient in tougher times, rather than just rolling along happily when times are good.

But then I don’t tend to change jobs that often, so one thing I do focus on is trying to keep my skills up to date and remembering never to rest on my laurels. I learn new skills (currently data analysis, and then back to some cloud stuff); I review my “coaching contracts” to see if I made progress with teams or if I am just hanging out with them; and I try to get involved in different things at work that force me to re-apply my skills in different contexts.

But there is one thing that my “get a new job” coach taught me, that has been with me ever since. I used to think that applying for a job was about winning. Just like watching a reality show where people win challenges to avoid getting voted off the island.

However, what my coach taught me is that sometimes it is about staying in the game. The first evaluation a potential talent manager makes is to eliminate the candidates who are not even close to the mark. Then they pass through again and cull some more people. What this means is that they are not really looking for the awesome diamond candidate, they are looking for a reason to drop the candidate so they can focus more time on others.

This left me thinking – should I aspire to be mediocre, so I get through a round or two of culling, only to then compete with excellent candidates?

No – I want to be excellent (or lucky), but for some things it is about not dropping the ball, or being good enough to be OK. If I think I am good at interviews then I want to make sure that my Linkedin or resume is good enough to get me that far, but I do not need it to blow people away.

I rarely look at my resume, rarely network and I leave my LinkedIn profile looking pretty lame, but I do know that the time to work on them is when I don’t need them, so they are roughly ready.

I am pretty poor at applying that lesson, but one thing I believe that separates me from many coaches, product managers and decision makers is that I ask the question “what do we need to maintain in our teams to stay in the game?”

I think a lot of people ask “what should we excel at?” or “What feature will blow people away?” These are great questions, but I don’t think we should always be working on something amazing, while leaving other, more mundane, things to fall behind.

Many teams know the pain of letting their bugs get out of hand while they hit deadlines and then they “suddenly” find themselves in a crisis. So, I guess we should always be asking “What can we NOT drop the ball on here?” and “What is the standard we should maintain here?”. But that does not mean everything needs to be at an awesome standard (which would be nice) and it does not mean we can never drop the ball on anything. It means there are some things, that if they hit a boundary, we need to start bringing them back to “good enough” or “safe enough” to keep ourselves in the game.

Sometimes this means risk mitigation rather than new features and adventures. Sometimes this means spreading our skills through the team or reviewing our succession planning and internal opportunities so our good people don’t become stale or disgruntled people.

However sometimes it also means practicing the mundane and getting good enough to move something from “this sucks to do” all the way to “I don’t even notice we are doing it”. These things could include cleaning up our Jira backlogs, checking in that our whole team still knows what is going on or completing our release notes.

So one of the things I like to think I excel at is managing the mundane. Sometimes on Master Chef it is not about having the most amazing dish but it is about having a good enough dish to stay in the game. And sometimes at work it is not (just) about excelling at some important things or launching new products, it is about cleaning out bugs, closing the loop on customer requests, cleaning the noise from our team so they can focus on what they want to excel at.

Otherwise these mundane things will weigh on us, or inhibit us from being able to shine. Just as my competence in “exam technique” made many courses much easier for me, and just as “knowing how to get a job” makes it easier to be employed regardless of the economic environment, there are mundane-sounding but important things that we need to make visible and get good at – not because they are cool, but because otherwise they will constrain us or distract us from the real areas where we want to excel.

If being excellent is a goal for the team, then paying attention to the things that keep you in the game is also something we should all excel at. That is my focus for the next week at work – what should I be getting better at so that it becomes routine rather than a hindrance or stress-creator?

If legacy code is the excuse, legacy thinking is the cause

A lot of people I talk to are working with legacy code – old clunky systems that are hard to work with. These systems are slow to change, poorly understood and often fragile.

The often seem to rely on one or two gurus, who know the tips and traps for working with them.

What is less obvious to some teams though, is that they are also working with legacy processes and tools, old clunky ways of getting things done that are hard to work with. These ways of doing things are slow to change, poorly understood and often fragile.

I include expense processing, code reviews, issue tracking tools and even collaboration tools and internal web pages.

But these systems are not the law, they are not some ancient truth handed down in a great tome, where challenging them is the equivalent of rejecting the gods and forgetting the wisdom of the ancients, they are just old constraints we have not had the time or the passion to deal with.

But if we did not have the time to deal with them last year, and we don’t have the time to deal with them this year, what will happen next year?

Each month these legacy applications, clunky processes and even expectations of how we work together become slightly more obscure, slightly less understood and slightly more fragile.

Now we want to change them, they are overwhelming and we talk about having to form a task force, a project or a big budget to tackle them in one big hit.

The problem is that we can never win that way. The old systems are aging every day, relentlessly becoming more baked into how we work and more likely to become a constraint. It is entropy in action, and the forces of entropy do not take a break or wear out.

So day in, day out, chaos and ignorance grow, short-cuts are applied and workarounds get us through our day.

From time to time someone attempts to tackle something but everyone else just smirks and goes back to their complaining. It is all too hard.

But nothing about the above tale of doom and gloom has anything to do with technology.

Small wins, constantly applied create excellence

It is tempting to think we have to replace a whole legacy approach or system at once. It is tempting to believe that we need everyone to agree to our plan before we execute it, and it is also tempting to believe that we need big upfront design and a fully developed product before we can move forward.

Big up-front plans, which consume a whole team's time for weeks, are not efficient. We no longer think we should sit around pontificating before doing something new. So why do we not apply the same thinking to looking after something old?

Small concessions, over time, mean that we accept we will lose. But on the other hand small, micro-wins, consistently applied over time mean that we are always moving toward excellence.

So maybe the term “legacy system” should be replaced with “the world we accept today.” Maybe we should actually celebrate that a mainframe was built so well it is still there a generation later and maybe we should celebrate that the work someone did 5 years ago on our appraisal system worked for them in their circumstances.

But we should also take over the mantle and instead of accepting what is not working, we should constantly make small tweaks. The problems will not all be solved in the near future, but they will get better over time.

And I believe a better way to think is “can tomorrow be better than today?” rather than “can we fix everything at once?”.

Accepting that we can make constant small changes means that we take a little longer to get things done today, but a little less time tomorrow. It accepts that we need to spend a little more time making changes, spreading skills and iteratively making things easier.

Most importantly though, it requires a change in mindset – from focusing on building new things and then forgetting them, to maintaining and improving old things. We need to realise that maintenance, quality improvement and refactoring how we do things are constant, valuable activities.

We need to prioritise getting better over adding new things to the pile of things we have already.

We (should not) judge ourselves by our intentions

I recently posted some “agile tips from my Grandmother,” or some old sayings that still apply in agile teams today.

I was talking to someone this week and I shared another old saying:

We judge others by their actions and ourselves by our intentions.

An old quote

I think the saying is true: we do tend to observe the actions of others and then “reverse engineer” what we imagine their intentions to be. For example, they run late for a meeting, so we assume they did not care about being on time, even though we have no idea what happened to them on the way to the meeting.

What about the second part of the saying though? I often forgive my own mistakes because I know I meant well. This might be a source of latent conflict though, if

  • I am assessing and explaining my intentions; while
  • Others are observing my actions and then inferring (alleged) intentions based on them.

According to Stephen M. R. Covey, in his book “The Speed of Trust,” this simple disconnect has major ramifications and can be addressed fairly simply.

One tip is to state our own intentions before we act, which both lets others know our thinking AND keeps us honest when we try to be consistent with what we say.

Interestingly, there are two more tips:

  • Assume others are likely to misunderstand our intentions – so make them clear both upfront and as we go along. Then live up to them, by acting in a way that is in coherence with what we want.
  • Assume others have good intentions unless there is good reason to believe otherwise.

I think this is good advice, which can be put into practice by being curious and being open with others, both of which build trust over time.

There is a related phenomenon that Dominica DeGrandis mentions in her book “Making Work Visible,” apparently caused by us being human:

  • We tend to want to make connections with others, so we want to be nice to them. As a result we often say yes to requests without thinking about whether we really have time for them, or how much effort we will have to expend;
  • At the same time we usually underestimate the impact of our requests on others because we do not see all the other things they are doing.

Based on this, it is almost inevitable that we will take on more work than we can deliver AND inflict lots of unimportant work on others, who will say yes to doing it and then suffer as a result.

Our intention is NOT to take on too much work, nor to inflict suffering on others. It just turns out that way.

So the secret in her book is not to just mean well, but to MAKE OUR WORK VISIBLE to each other. If we make our work visible to each other, then we will overcome this phenomenon and collaborate better.

So both authors seem to suggest that good intentions alone are not enough. It is far better if we:

  • Openly share (or make visible) our intentions;
  • Make the work we are doing visible in some simple way;
  • Do not assume that others trust us just because we are doing the right thing (or mean to); and
  • Remain curious about what others intend and are up to, assuming that they mean well but that we do not really understand what is going on unless it becomes visible to us.

This is not about blind trust though. We should be not just curious but also open in our intention to validate our assumptions (and trust), not because we fail to trust the person, but because there is so much risk of miscommunication and mistaken assumptions and expectations.

So I guess I will continue to have good intentions and judge myself on them, but also try to remember that visibility is at least as important as intention. Maybe I should even judge myself on how well I make things visible to others and how well I stay curious without judgement when working with others.