I have been in some interesting conversations recently about agile development teams and sound organisational governance.
One of the challenges organisations face is that the traditional measures used to monitor and control teams are not necessarily suited to the style and approach of agile teams, while agile approaches can seem to remain silent on, or even discourage, outside governance of project teams.
Fortunately this is not a new problem and people have been discussing it since self-organising teams (or work cells, or self-managed teams) first appeared in management theory.
One of my favourite discussions is “Control in an Age of Empowerment – how can managers promote innovation while preventing unwelcome surprises?” by Robert Simons. If you have access to HBR then you can look for it in Harvard Business Review, vol. 73, no. 2 (1995), pp. 80–88. Apparently Simons also wrote a book on the subject but I have not read it.
Simons found that traditional measures (those inherited from hierarchical team structures) often hindered the effectiveness of empowered teams rather than supporting them. But he also found that leaving empowered teams to their own devices was inadequate and even potentially harmful.
So he came up with four “levers” that managers can apply to control the direction and success of teams. Two are measures (interactive and diagnostic) and two are cultural (belief systems and boundaries). But beyond their individual impact, he found that the interplay between the levers was what really drove success in empowered (self-organising) teams.
When Simons talks about diagnostic control systems he is referring to the collection and assessment of data (budgets, velocity, defects, etc.). These are effective for alerting management to trends and to variance outside given parameters.
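To make the idea of "variance outside given parameters" concrete, here is a minimal sketch of what a diagnostic control might look like in practice: it flags iterations whose velocity drifts outside an agreed tolerance band around the running average, so management is alerted by exception rather than by inspecting every data point. The function name, tolerance value, and sample data are all illustrative assumptions, not anything Simons prescribes.

```python
# Hypothetical diagnostic control: flag iterations whose velocity
# deviates from the running average of prior iterations by more than
# an agreed tolerance. All names and thresholds are illustrative.

def velocity_alerts(velocities, tolerance=0.25):
    """Return (iteration_index, velocity) pairs that deviate more than
    `tolerance` (as a fraction) from the average of earlier iterations."""
    alerts = []
    for i, v in enumerate(velocities):
        if i == 0:
            continue  # no baseline exists for the first iteration
        baseline = sum(velocities[:i]) / i  # average of prior iterations
        if abs(v - baseline) / baseline > tolerance:
            alerts.append((i, v))
    return alerts

# A stable team with one bad iteration and one suspiciously good one:
print(velocity_alerts([30, 32, 31, 18, 33, 45]))  # → [(3, 18), (5, 45)]
```

The point of such a measure is exactly what Simons describes: it surfaces exceptions for attention without requiring management to review every team's data every iteration.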
However some information will not be revealed through the collection of data, so the second lever is “interactive measurement”. Essentially I interpret this to mean meetings.
Simons argues that meetings should not focus on reviewing diagnostic measures, but rather on ambiguous or rapidly changing information that may need some exploration. Steering committees, stand-ups, iteration planning meetings and even monthly management or sales meetings fall into this category.
For example, one of my clients holds a monthly “cool or scary” meeting in which the team discuss recent innovations or releases by competitors (potentially “scary” new things to compete with) and also new ideas seen in other industries or on the web (potentially “cool” ideas). The discussions about these cool or scary things comprise opinions, deductive leaps and other things that regular data collection and analysis would not reveal. This information complements but does not replace the collection and analysis of web usage and usability testing done by the team (diagnostic measures).
Simons also found, however, that measures alone will not be sufficient to provide good governance and so his third lever is “belief systems”.
His view is that a shared vision is critical to a self-managing team. While this might not seem to be a control system, the belief systems of the team will constrain the team from making certain decisions and help to guide teams to prioritise and innovate effectively by providing a framework against which to measure “value”.
Examples of the controls covered by belief systems include a team purpose or charter, a value proposition, a corporate mission statement, shared team values, a project objective or problem statement and so forth.
Belief systems are powerful, but Simons found that some things are better communicated as boundaries or “no go zones”. For example, it is easy to say we value integrity, but you might add a boundary such as “we will accept no gifts from vendors” to provide true clarity. Examples of boundaries include cultural taboos (never tell the officers the truth?), project scope, and constraints such as fixed deadlines or budgets.
As I hinted above, these levers do not act independently. If people see no value in timesheets then this will affect the accuracy of what they report, and thus your ability to base future estimates on timesheet data.
Similarly, if you claim (and even believe) that people are your greatest asset but all your team meetings (interactive measures) focus on performance against budget, then this will lead people to think that all that matters to you is money and not people or the quality of the work they do. On the other hand, if your meetings focus on customer experiences then they will be more likely to believe that customer service is important, which will change the beliefs and culture of the team.
Thus Simons felt that clear boundaries and expectations from management enhance rather than inhibit agility. And he also found that measures (diagnostic and interactive) that are not aligned to the desired values and vision of the team do more harm than good.
(A note for fans of risk management: I neglected to include risk measures in the diagram above because risk measurement and management permeate all four levers, not because agile teams don’t need both diagnostic and interactive measurement of risk.)
Agile teams have a preference for interactive measures over diagnostic ones, yet they still support those interactive measures with a host of diagnostic data. I think demonstrating the robustness of these measures, and their integration into the values of self-organising teams (collaboration, accountability, joint ownership of solutions), will help to dispel the myth that agile teams are light on governance.
However I also think it is important for agile teams to understand that providing (or refusing to provide) useful data (diagnostic measures) and reporting (interactive measures) to central governance teams and senior management impacts the belief systems of those stakeholders, especially with respect to their ability to trust the team. And in return, understanding where they fit into the bigger picture improves the cultural levers of the team, making success and job satisfaction more likely.
A steering committee on a large enterprise project should probably not be questioning the velocity of every team in every iteration. But they should have valid information to focus their discussions on the key risks facing the overall program and the key areas where additional resources are needed or tough decisions need to be made.
Agile teams in turn should not be wasting effort duplicating all of their interactive measures (such as stand-ups and showcases) by producing reams of diagnostic data for the benefit of bureaucrats. But the local focus of some agile teams means that they become tribal and start optimising the success of their project (which is important) at the expense of contributing to the wider picture of creating organisational value or an improved customer experience (even more important).
So if they are not reporting their progress at all, or are not gaining information about the measures used in the wider organisation, then they are operating in ignorance and cannot live up to the potential of a truly self-managing, well-informed and accountable team.
Similarly, if an agile IT development team is part of a wider business solution involving not just technology but training, marketing, process change, and so forth then they must integrate their interactive and diagnostic measures into the wider program of work, or risk working in isolation and not providing an effective business or customer solution at all.
I am currently working on some other thoughts around balanced scorecards for agile teams and integrating self organising teams into the wider picture of IT operations management, so I would love to hear your views, your challenges and even your opposition to my current thinking.