Use the river diagram to communicate data

I run a lot of workshops, including planning workshops and retrospectives.  Sometimes the crew votes on things like "what is the best feature?" or "how did we go this time?"

But sometimes we collect data and then want to discuss it as a team.  And this presents a challenge – I like data in a spreadsheet and I like pictures on the wall but sometimes it seems hard to capture numbers in a useful way in the workshop.

But do not fear – the River Diagram is here and this is exactly what it is for.



Manual regression testing may not suck so badly after all

I often work with good developers and one thing I notice about all good developers is that they seem to love the idea of building robots.

Bad developers see problems and sit there waiting for someone to come up with a solution in enough detail for the developer to transcribe the solution into code, much like an old fashioned typist takes dictation and types it onto a page.

So if a bad developer noticed that their house needed cleaning, then he or she would simply complain that someone should clean it. Then if you point out that it is their house that needs cleaning, they will either claim management won't let them clean or that the problem is more complex than it seems and cannot be solved.

In fact even if you ask them to try and clean, they will just start to reveal that cleaning is "more than vacuuming" and could involve the removal of micro-particles that only quantum physicists could possibly manipulate. Indeed, they will contend, it is unlikely that anyone really cleans their house and the only practical solution would be to upgrade to a new, cleaner house.

But good developers are different. A good developer will notice that the house needs cleaning, work out that actually cleaning it is less fun than designing a better way to clean houses, and immediately begin working on the design for a new robot.

A daily status meeting? Really? Now you want daily performance measures? Really?

I run agile training courses and I often preach the benefit of a daily standup.  The idea is that it is 15 minutes a day where everyone in the team lists:

  • What they have done since last time
  • What they will do today
  • What obstacle or issue is in their way

It works really well in my slides, but now I am trapped in the real world, temporarily working on a real project. Do I really want a daily meeting?


Measuring the impact on production support

I was running a course in agile development when I mentioned that one of the good things about agile is being able to go live with something valuable sooner.

One of the class asked whether you can measure the benefit of going live sooner.  “Of course,” I replied, “and of course you should be doing so”.

Some of the group asked if we measured value in features deployed or some other way. So we had a good discussion around measuring value.

But then one of the group told us that his project was about "simplifying IT" and so his agile project manager had told him that, since the project was not adding any value to "the business", the only real measure of success was whether they deployed the features they were supposed to deploy.

But this seemed a bit silly. So we agreed that adding value to IT was in fact adding value to the business, since IT is part of the business.


Agile governance and the problem with measuring self-organising teams

I have been in some interesting conversations recently about agile development teams and sound organisational governance. 

One of the challenges faced by organisations is that the traditional measures used to monitor and control teams are not necessarily suited to the style and approach of agile teams, while agile approaches may seem to remain silent on, or even discourage, the outside governance of project teams.


Fortunately this is not a new problem and people have been discussing it since self-organising teams (or work cells, or self-managed teams) first appeared in management theory.


Talking about retrospectives on another blog

I was running a course on "facilitating workshops in agile projects" when some of the crew asked what different questions they could ask in retrospectives (instead of just "what worked and what didn't").

We got talking about the retrospective at the end of a project, and at the end of the discussion I promised to publish some of my comments.  So I put them together in a short story of sorts on how to run a retro for 70 people at the end of a project.  You can read it here if you are interested:

http://www.theagiletribe.net/2010/11/04/unusual-questions-to-ask-at-a-retrospective/

Some downloadable notes on estimating

One of the participants from a course asked for some more information on estimating – so here is a downloadable copy of my mini-ebook with some rough notes on estimating.

Most of the material also appears in this blog somewhere and I have since updated some of my thinking (as I am continuing to do).  So it would be great to get suggestions/challenges or comments for me to use in the future.

Estimation toolkit – June 2010 v1.0

Unusual predictors of team success

I was reading Daniel Pink's book "Drive" and I came across a passage about predicting which teams are likely to be successful.  It describes a guy who counts the number of times he hears people use the word "we" and the number of times they use the word "they" when referring to their own company.

Apparently the “they” teams are likely to fail and the “we” teams are generally successful.

This made sense to me because I have always listened to how managers use the terms "we", "you" and "I". The dodgy managers I have worked for tend to use pronouns like this:

  • “They” or “management” want us to … rather than “I would like to … “;
  • “You” messed up or “team member x messed up”; and
  • “I delivered” something.

I even heard a manager once say “if it was up to me I would do … but you know that management wouldn’t accept that”.  Which was interesting since the person I was talking to was “management”.

On the other hand I have also had good managers and I have noticed that they tend to say:

  • “We” messed up or “We” have a problem;
  • “You” did a good job; and
  • “The team” delivered something.

But I have not previously thought about actually counting the times members of the team say “they” versus “we” so I think I will try that next time I am auditing a project or coaching a team.

A similar thing I have used in the past though was to see if the team (particularly IT teams) refer to their internal customers by name (eg Brian or Mary) or whether they refer to them as “the business”.  It’s interesting how often people refer to “the business” as the customer. 

So my new predictor of success for teams I coach is going to be based on:

  1. The number of times they use the terms "we" or "us" versus "them", and the number of times they refer to stakeholders by name versus using the terms "they", "management" or "the business" (see the rough counting sketch after this list).
  2. The number of times team members say "we delivered something" versus "I delivered something".
  3. The number of times the team say "we stuffed up" versus "x stuffed up", "they don't know what they are doing", or "that's not how I think they should have done it".
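I have not actually built this counter yet, so the following is only a rough sketch in Python of the kind of tally I have in mind.  The transcript snippet, the word lists and the stakeholder names (Brian and Mary from the example above) are all made-up assumptions for illustration, not part of any real audit.

  import re
  from collections import Counter

  # These groupings are my own rough assumption, not a standard word list.
  WE_WORDS = {"we", "us", "our"}
  THEY_WORDS = {"they", "them", "management"}
  STAKEHOLDER_NAMES = {"brian", "mary"}  # hypothetical internal customers

  def pronoun_tally(transcript: str) -> Counter:
      """Tally 'we'-style words, 'they'-style words, stakeholder names
      and references to 'the business' in a meeting transcript."""
      text = transcript.lower()
      words = re.findall(r"[a-z']+", text)
      tally = Counter()
      tally["we/us"] = sum(1 for w in words if w in WE_WORDS)
      tally["they/them"] = sum(1 for w in words if w in THEY_WORDS)
      tally["by name"] = sum(1 for w in words if w in STAKEHOLDER_NAMES)
      tally["the business"] = len(re.findall(r"\bthe business\b", text))
      return tally

  # Made-up snippet of standup chatter.
  sample = ("We stuffed up the deployment, but Mary helped us sort it out. "
            "They keep changing what the business wants.")
  print(pronoun_tally(sample))
  # e.g. Counter({'we/us': 2, 'they/them': 1, 'by name': 1, 'the business': 1})

A high "they/them" and "the business" count relative to "we/us" and names would be the warning sign I would look for; whether that actually correlates with success is exactly the question below.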

What do you think – will there be a strong correlation between the use of different pronouns and team success?

Should measures be “SMART”?

I was having a debate with a friend of mine recently.  He said that all measures should be “SMART”.

The term is usually used for goals and says that the goal should be specific, measurable, achievable, relevant and timely (or something similar).

The problem I pointed out was that the M in SMART means measurable so the concept of SMART measures is a bit redundant. He responded that in his experience many of the “measures” he saw in place were not actually being measured and so, he claimed, it would be good to remind people to actually have measures that they will measure.

I am not sure if he is exactly right or not, but I came up with my own standard for measures.  It's a little lazy compared to some, but I find it useful.

I think measures should be Credible, Useful and Easy (CUE).  

The most important component is Useful – the measure must assist you to either make a better decision, change a specific behaviour or reduce the ambiguity in your understanding of something.  It is surprising how easy it is to forget that the measure is only worthwhile if it can be used for something – in which case you should understand what you are likely to use it for before designing the measure.

But the measure also needs to be Credible so that you and others are confident that you can rely on it.  Note however that being Credible without being Useful is actually worse than not being Credible – you will confidently make decisions on the wrong data.  Thus you also need to understand where the measure will sound good but not be relevant.

Easy means that it should be easy enough to measure to make it worth doing.  So a measure might be worthwhile even if it is a substantial effort, as long as you gain a huge benefit from it.  But my experience is that if it is hard to measure then people will forget to measure it, take short cuts and even fudge the information.  So the harder it is to perform the measure, the more you need to ensure the people doing the measuring believe it is Credible and Useful to measure it properly.

So there you have it – measures should be CUE.  Not quite the same ring to it as saying goals should be SMART but good enough to be useful (I hope).