When a small team is working with great rhythm, it often looks like there is no upfront planning or design … and sometimes that is in fact the case.
But more often than not, that small team is integrating into an existing “ecosystem” of products, services, internal processes, team structures and IT systems. And generally speaking the small team is building something complex that needs both some foresight and some willingness to learn and improve the system’s design as customer expectations or company direction change.
In fact, a small team of intelligent, well-meaning people can do a lot of damage if they are not suited to their ecosystem, or not aligned to the strategy and direction of their organisation.
That is all great – but how much needs to be done up front before the team start punching out value … and how much evolves as the team creates something and learns more by doing so?
The convenient answer is “Just enough” should be done upfront … which is pretty annoying even if it is true.
Many “agile teams” are forced to be “sort of agile” because the organisation welcomes changing requirements late in the piece as long as nothing changes; and is happy to empower the team to make decisions as long as those decisions all involve deciding to do exactly what the grand vizier of corporate architecture has decreed. Oh, and as long as any creativity is based on specific and detailed requirements before the team even thinks about working on anything.
That is not very agile …
So many teams cunningly avoid any communication with outside teams in order to be able to optimise their work based on the need for speed – which means no design documents, no performance or technical testing and no commitment to making sure things work during the haste to load features on top of an ever growing pile of (something).
So the real answer should be – of course we do SOME upfront design, and of course some decisions are better left until we have learned more down the track.
Sadly I can’t tell you exactly what should be done upfront, but I can give you a typical example of what I like.
Generally this is what I like to do up front – step one
I always want to agree a “technical stack”, which means that we agree on which tools we will use.
Since many of my projects are IT ones, we have a “white list” of the languages, platforms and classes of magic we will use by default.
Then there is a black list, consisting of languages that are not to be used because they are being retired, they suck or they are just uncool.
We also have a grey list of things we might use, but which require some thought or decisions before we do.
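If it helps to picture the three lists, here is a minimal sketch in Python. The tool names and the wording of the verdicts are invented for illustration – each team would fill in its own:

```python
# Hypothetical tech-stack lists for a team.
# The entries below are illustrative examples, not recommendations.
WHITE_LIST = {"python", "postgresql", "react"}   # use by default
BLACK_LIST = {"vb6", "flash"}                    # retired, sucky or uncool
GREY_LIST = {"graphql", "mongodb"}               # needs a decision first

def check_tool(tool: str) -> str:
    """Classify a proposed tool against the agreed stack."""
    tool = tool.lower()
    if tool in BLACK_LIST:
        return "blocked: pick something from the white list"
    if tool in GREY_LIST:
        return "pause: raise it with the team before using"
    if tool in WHITE_LIST:
        return "approved: go for it"
    return "unknown: add it to the grey list and discuss"
```

The point is not the code – it is that the team can answer “can we use X?” in seconds instead of re-litigating it every sprint.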
Next we need to agree on coding standards, security standards, error standards, integration standards, and accessibility standards.
These might not be complete (or even in existence) when we start but we should agree what is locked in, what is needed to evolve our solution and what is up to the individuals in the team.
Next we have support requirements (patching, monitoring, testing, security, astrological alignment etc). To discover these we need to identify the business and technical teams that will support what we produce.
But I am a little more thorough here because I used to be a production support manager and I am still negotiating with dark demons and monsters from your nightmares to bring violent retribution on some of those “agile” teams that have “handed over” (or “inflicted”) a minimum “totally-not-viable-to-support” viable product. So not everyone in the agile community says this stuff needs to be done upfront, but I am quite keen on it.
Next we define when we are done, which most people seem to be doing – but by this I don’t really mean “finished for the day” … I mean “this is now usable – we are good to implement it”.
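A definition of done in this sense is really just an agreed checklist. A sketch, with made-up example items (every team agrees its own list):

```python
# An illustrative "definition of done" checklist.
# These items are examples only; each team defines its own.
DEFINITION_OF_DONE = [
    "code reviewed and merged",
    "automated tests passing",
    "security standards checked",
    "monitoring and alerts in place",
    "support team briefed",
]

def is_done(completed: set) -> bool:
    """'Done' means usable and ready to implement,
    not 'finished for the day'."""
    return all(item in completed for item in DEFINITION_OF_DONE)
```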
Almost finally, I like to do a map of where our “thing we are delivering” fits into the rest of the ecosystem. This could be a “things that matter when we build each feature” table as shown in the diagram. It could also be an integration model, a context diagram or another map of what we impact (or will be impacted by), what we send data to; and what we get stuff from in order to do our stuff.
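However you draw the map, the underlying content is a list of what we touch and what touches us. A toy sketch, with invented system names:

```python
# A minimal "things that matter" context map.
# The system names and relationships are invented for illustration.
CONTEXT = {
    "our-thing": {
        "sends_data_to": ["billing-system", "data-warehouse"],
        "gets_data_from": ["customer-api"],
        "impacted_by": ["identity-provider"],
    },
}

def integration_points(system: str) -> set:
    """Everything we impact or are impacted by.
    Each entry is a conversation the team needs to have."""
    entry = CONTEXT[system]
    return (set(entry["sends_data_to"])
            | set(entry["gets_data_from"])
            | set(entry["impacted_by"]))
```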
Finally – the context diagram and the definition of done lead to a need to agree on what we test. So we should do a test strategy. But I have found something odd on my projects … doing a test strategy after you decide how to build your solution should help you design an efficient way to validate your delivery, but it doesn’t. I have actually found that the way I decide to test my “thing I am delivering” changes the way I build it and even the design of it. So I now believe you need to define a test strategy first, to learn your design parameters and priorities.
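To make that concrete: deciding up front that we will test without a live downstream service tends to push the design itself toward injected dependencies. A sketch, with a hypothetical payment example:

```python
# Illustrative only: deciding up front to test checkout without a real
# payment gateway pushes the design toward an injected dependency.
from typing import Callable

def checkout(total: float, charge: Callable[[float], bool]) -> str:
    """Because the test strategy demands a fake gateway,
    `charge` is injected rather than hard-wired to a real service."""
    if charge(total):
        return "paid"
    return "declined"
```

In tests we pass a stub instead of a real gateway – `checkout(10.0, lambda amount: True)` returns `"paid"` without touching any external system. The test strategy shaped the design, not the other way round.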
That’s it – except for step two. Once you have done step one, I also usually want you to do the following:
Step 1(a) part one – (and you thought this would be quick). Agree who is going to design what:
- The team will collaborate on some things – eg process
- The “techo gizmo” team (geeks, UX hippies, process lovers or data junkies) will take the high-level understanding of “stuff” and disappear into a cave where they will discuss highly obscure but cool things to achieve design harmony
- We will need to send some of the team on a quest to visit “The Grand Order of Wizards” to learn what we need to comply with to avoid the wrath of the dark integration monsters that will rip our beautiful system apart as it tries to integrate into the outside world unless we collect the Talisman of Compliance Virtue.
I like the team to all have a shared understanding of the business glossary, rough application design (for dummies version), rough process design for what we integrate into (touchpoint model, value stream map, swimlane, business model canvas … or something) and rough UX view (personas, empathy maps, customer journey maps, value proposition canvas or something)
There will then be some techo gizmo stuff that you can trust the team to agree on – it might be extensive or it might be “not to worry – we will sort it as we go”. Common gizmos are “SEO”, “UX”, “Quality”, “Ruby is better than dumb Java”, “Data model”, and so forth
Typical Grand Wizards include the Brand team, central IT architecture team, (unless you work for the Marketing department in which case you can just “move to the cloud” and thumb your nose at them), vendor management street gang (I mean “partner-collaboration” team), IT security, Internal audit, etc.
This is the generic set of things I like to do up-front. On many projects you don’t need all this and on others you need other stuff. But I think you can do all of the above, as a team, in about two weeks. People sometimes doubt me but generally it is more about getting the right people together for a short, focused effort to get it 80% right rather than waiting until it is well documented and set in concrete somewhere in the cellar.
Of course there are organizations that cannot get all this done in two weeks upfront – and they are generally great places for other people to work so they can learn why it is better to do it collaboratively, roughly but properly. For you and me – it is generally possible to do it in two weeks … or it is generally not possible to deliver the project in a way we can be proud of.