
An international standard for being stupid? The mistakes users always make

Before I worked in IT and even knew what testing was, I knew people made mistakes. But I didn’t know there was an international standard you should comply with when you want to make a mistake.

Then I worked on a project with a mining company and one of the consultants explained human factor analysis to me in simple terms. He told me that mine sites can be dangerous and part of his job was to “stop people killing themselves when they are stupid”.

I suggested he stop hiring stupid people, but he told me they had tried that and it didn’t work. Apparently you can be really intelligent on a mine site 800 days in a row, but then be stupid for 10 minutes one day, have an accident and be killed.

“Luckily, we have a standard for being stupid though,” he said.

I was surprised that he could get people to comply with a standard for making mistakes, but then he showed me “ISO 90210 (not a real standard) – the standard mistakes you can make when using a machine”:

  1. Action omitted
  2. Action too early
  3. Action too late
  4. Action too much/too often
  5. Action too little/not often enough
  6. Action too long
  7. Action too short
  8. Action in wrong direction
  9. Right action on wrong object
  10. Wrong action on right object
  11. Wrong action on wrong object
  12. Information not obtained/transmitted
  13. Wrong information obtained/transmitted

It seemed like a long list of obvious things, which is what it turned out to be. When I was analysing processes (which I was doing at the time), I found that many errors crept into a process whenever one of the things in the list happened. So I combined this with the “international standard for being scared” (FMEA, or Failure Mode and Effects Analysis) and found I could tear most processes apart and find errors really quickly.
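The usual FMEA arithmetic is a risk priority number, RPN = severity × occurrence × detection. As a rough sketch of how the two “standards” combine, you can score each standard mistake and test the riskiest ones first; the three modes below come from the list above, and the scores are invented purely for illustration:

    # Sketch only: prioritising which "standard mistakes" to test first, FMEA-style.
    # RPN = severity * occurrence * detection is the usual FMEA arithmetic;
    # the scores below are made up for illustration.

    failure_modes = [
        # (error mode,                      severity, occurrence, detection)
        ("Action omitted",                         7,          6,         4),
        ("Action too often (double submit)",       5,          8,         3),
        ("Wrong information transmitted",          9,          4,         7),
    ]

    def rpn(severity, occurrence, detection):
        """Risk Priority Number, as used in FMEA (higher = look at it sooner)."""
        return severity * occurrence * detection

    # Look at the highest-risk mistakes first.
    for mode, s, o, d in sorted(failure_modes, key=lambda m: -rpn(*m[1:])):
        print(f"{mode}: RPN = {rpn(s, o, d)}")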

Even if you can’t break the process instantly (which you usually can), you can almost always break it if you:

  • Apply one standard mistake; and
  • See what happens and then apply a second one rather than simply moving on (a sketch of this idea follows below).
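Here is a minimal sketch of that idea in code. The mistakes come from the list above (trimmed to five to keep the output short), and the charters function is just a name I made up; the point is how quickly a short list of standard mistakes turns into a long list of things to try:

    from itertools import permutations

    ERROR_MODES = [
        "Action omitted",
        "Action too late",
        "Action too often",
        "Right action on wrong object",
        "Wrong information obtained",
    ]  # trimmed from the 13 above for brevity

    def charters(modes):
        """Yield each single mistake, then every ordered pair of mistakes."""
        for mode in modes:
            yield (mode,)
        for pair in permutations(modes, 2):
            yield pair

    for charter in charters(ERROR_MODES):
        print("Try:", " then ".join(charter))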

This also applies to computer applications. For example:

  • The user gets to question 5 and realises they do not have the information they need to complete the form.
  • The system gives them a polite error message.
    • Rather than completing the form or hitting cancel (the right thing to do), they exit by hitting the back button or closing the browser (right action on wrong object); or
    • They hit cancel and nothing happens, so they hit it again three times (too often), and on the third click the browser decides to reload and processes the click on the wrong button (something from the next screen). A test for this case is sketched after this list.
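Here is a rough sketch of a regression test for that second case. FormSession is only a stand-in I made up for whatever your application really does with a half-finished form; the point is simply that cancelling “too often” should behave exactly like cancelling once:

    class FormSession:
        """Stand-in for an application holding a half-finished form."""

        def __init__(self):
            self.state = "open"

        def cancel(self):
            # Cancelling an already-cancelled form must be a harmless no-op.
            if self.state == "open":
                self.state = "cancelled"

    def test_cancel_clicked_too_often():
        session = FormSession()
        for _ in range(4):      # the impatient user hammers the cancel button
            session.cancel()
        assert session.state == "cancelled"

    test_cancel_clicked_too_often()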

I find that even though I am not a tester, I can often break applications in regression testing by assuming my users will comply with the standard for being stupid at least once or twice.

You can take things a lot further if you study “human factors” because it turns out that “human factor analysts” (I am not sure what they call themselves) have spent a lot of time trying to find common reasons for all kinds of things. For example:

  • Common reasons why people do the wrong thing deliberately (peer pressure, not enough time in the day, being rewarded for the wrong behaviour, etc.); and
  • Common reasons people make mistakes (information overload, falling back on how the old system worked, lack of knowledge, lack of caring, etc.).

So there seems to be a whole field of study about how to make mistakes properly. I guess if you are going to do something you may as well do it properly … and since we make mistakes so often, we may as well get really good at making them. :)

But here is what is of more use to IT projects where people want to build good systems:

  • If your system could kill people, you should probably get a safety expert to test it.
  • Even if your system won’t kill people … surely you want to anticipate the most obvious mistakes that users will almost certainly make. So it follows that you would also want to do at least some simple testing to find out what happens when they do.

So I think it is fair to say that any team that wants to produce good processes, good IT applications or good products will do some regular testing to see what happens when “the dumb users” make the same predictable mistakes we all tend to make.

So I would like to assume that every developer, BA and tester would do some simple exploratory testing as they add features, and I would assume that every team would do some regular regression testing as their system emerges.

But maybe that is uncharted territory for human factor analysts – why do IT teams consistently make the same mistake of assuming the users will do exactly what they are meant to, when everybody knows they won’t?

Why would anyone spend $5m and 6 months of their life on a project and then only test how a user might actually use the system at the very end of the project, when there is no time left to modify the way the system behaves?
