I have often admitted that I am not a test manager, but I know enough to realise that any project with an IT component needs to have an IT test strategy.
I also know that creating a test strategy should involve more than just taking the 25-page strategy from the last project and replacing the project name.
The minimum I think you can get away with is a discussion with the whole team about:
- What could go wrong;
- What needs to be done right; and
- What the team will actually get around to testing with the limited time and money available in the real world.
Hopefully, a professional tester or developer can do a lot better than that. But in the absence of anything else, I think a brief conversation or two should allow the team to produce the following “optimised for laziness” artefacts before diving into more detail.
Create some kind of understanding of the architecture
Start with a basic architecture diagram. It can be as simple as a heat map or as complex as you would like it to be.
Use the diagram as the starting point to discuss the kinds of things that can go wrong from a technical point of view.
Create a system impact table
In this system impact table I have captured the following data:
- The name of each system the team will work on or interface with;
- A column detailing whether I am changing the system or simply interfacing with it;
- A column detailing whether the system is a new one or an existing one;
- A column detailing how complex the system is to work with (though the team decided not to use this one);
- A summary of the risk of trouble when we work on the system:
  - How likely are we to make a noticeable mistake;
  - How bad would it be (the impact);
  - How easy would it be to detect or test for the errors we might make; and
  - How easy would it be to recover if we did deploy something wrong (for example, software already on a plane that has taken off, or distributed via DVD, would be hard to correct, while pages on an intranet might be easy to fix).
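The risk columns above can even be sketched as data if the team wants a quick way to sort systems by risk. Here is a minimal sketch in Python; the system names, the 1–5 scale and the scoring arithmetic are all illustrative assumptions, not part of the original approach:

```python
# Illustrative sketch of a system impact table: each row records a system,
# whether we change it or merely interface with it, whether it is new or
# existing, and the four risk factors discussed above (1 = low, 5 = high).
# All names and numbers below are made up for the example.
systems = [
    # name,       change,      status,     likelihood, impact, detectability, recoverability
    ("Billing",   "change",    "existing", 4,          5,      3,             2),
    ("Intranet",  "change",    "new",      3,          2,      4,             5),
    ("Data feed", "interface", "existing", 2,          4,      2,             3),
]

def risk_summary(likelihood, impact, detectability, recoverability):
    """Crude overall risk score: high likelihood and impact raise the risk;
    being easy to detect or recover from lowers it. Purely illustrative."""
    return likelihood + impact - (detectability + recoverability) // 2

# Print the systems, riskiest first, as a prompt for the workshop discussion.
for name, change, status, like, imp, det, rec in sorted(
        systems, key=lambda row: -risk_summary(*row[3:])):
    print(f"{name:10} {change:9} {status:8} risk={risk_summary(like, imp, det, rec)}")
```

Whether the summary is a number, a traffic-light colour or just a gut feel matters less than having the conversation the table forces.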
So far so good – I think most teams should be able to create this much understanding through a single workshop.
Of course, since many projects involve interfaces, integration or process changes, you can add these to the table just as you would another system.
Create a test strategy
Once again, for the lazy team, it is easier to create a table on a whiteboard than it is to create a 25-page document:
For each system (or process?) the team add a row to the table. Then they add columns for:
- The testing that will be done continuously throughout the project (every week or every iteration);
  - The team have added unit testing, UAT, regression testing, factor testing and sector testing.
  - I don’t actually know what factor or sector testing are, but that is part of the discussion. You would be amazed how many teams claim they are doing system or integration testing but are not sure what it means.
- The testing that will be done each time the software is released, for example DR testing, performance testing or user experience testing.
- The testing that will be done on an ad hoc basis or as a one-off at some point in the project.
- The testing that the team are going to skip for each system. Of course the team should robustly test for every conceivable bug on every system they are going near … but they also have no money or time available.
Finally, the team go through each column determining whether the testing will be:
- N/A – not done for this system;
- Manual – done by the team;
- Other – done by the customer, another team or another organisation. This requires coordination for the team but no actual testing effort;
- Automated – done by robots or through automated testing tools; and
- Specialist – done by a specialist vendor or team.
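The whiteboard table boils down to a simple mapping from system to test type to approach. A minimal sketch, assuming made-up system names and test types (the five approach labels are the ones listed above):

```python
# Illustrative test strategy table: system -> test type -> approach.
# "N/A", "Manual", "Other", "Automated" and "Specialist" are the options
# discussed above; the systems and test types are invented for the example.
strategy = {
    "Billing": {
        "unit testing":       "Automated",
        "regression testing": "Automated",
        "UAT":                "Manual",
        "performance":        "Specialist",
        "DR testing":         "N/A",    # consciously skipped - a known risk
    },
    "Intranet": {
        "unit testing":       "Manual",
        "regression testing": "N/A",
        "UAT":                "Other",  # run by the customer, not the team
        "performance":        "N/A",
        "DR testing":         "N/A",
    },
}

def skipped_tests(strategy):
    """List the (system, test type) pairs the team has chosen not to test,
    so the sponsor can see exactly which risks are being accepted."""
    return [(system, test)
            for system, tests in strategy.items()
            for test, approach in tests.items()
            if approach == "N/A"]

for system, test in skipped_tests(strategy):
    print(f"Not testing: {test} on {system}")
```

The point of the exercise is the list of "N/A" cells: those are the risks the sponsor is implicitly signing up for, so they should be visible rather than buried.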
Do better if it is justified
Of course, testing is a critical component of most IT projects so you might do a lot better than what I have detailed here.
But as a minimum, I think the sponsor and team members should understand what testing is being done and what is not being done (say, DR testing), so that they understand either the risk the team is taking or the reason the team is spending so much time and money on testing.