Every new experiment carries a real risk of failure. That does not stop us from experimenting, which is an important and positive human trait. But there is another side to every experiment: the lesson learned. Whether an experiment succeeds or fails, it is critical for the experimenter to conduct a retrospective. The entire undertaking will naturally be dear to the experimenter, but emotion should not bias the retrospective. If it is difficult to let go of the emotional attachment, it is best to engage a neutral party to conduct it.
The outcome of this retrospective holds great future benefits. It tells us:
- what we did right
- what we did wrong
- whether the rights outnumber the wrongs, or vice versa
- whether we need to repeat the same experiment
- whether the experiment should be aborted, or continued with enhancements
- whether the objective was met, and to what extent
- whether the stakeholders are satisfied or dissatisfied with the result
To put this theory into a real-life scenario, let's take the most common experiment in a tester's work life: "the automation experiment".
A great deal of rich and useful information has been written on the subject of automation by almost all the thought leaders of testing, such as Cem Kaner, James Bach, Michael Bolton, Jonathan Kohl, Scott Barber and others. Yet somehow the people who take the decisions seem unaware of it. I can safely vouch that in this case, ignorance is not bliss. Periodically, I have been subjected to the automation experiment by enthusiastic, starry-eyed seniors who assume that automation is THE one and only alternative to manual testing. To sell the idea, they draw up graphs and charts showing huge savings in time and effort if the entire regression suite is automated. A statement such as 'it will take 3 hours to run the entire automated test suite, as opposed to 3 weeks of manual testing by a 5-member team' is a strong seller indeed. It is not surprising, then, that management is willing to give this experiment the green signal.
What stumps me, though, is that after investing a good chunk of the year in this experiment and watching it fail, the very same initiators want to automate another product with a new tool. No retrospective, no lesson learned. Will a new tool magically prevent the same mistakes from being repeated? Will the new product bend itself to accommodate the new tool? Will testers who, in most cases, do not know the basics of programming suddenly begin to write sensible, robust test scripts?
Automation is not magic, and an automation tool is not a magic wand. First, the user of the tool has to be trained to use it effectively, and a good automation tester should be a good manual tester to begin with. Second, she should know the basics of programming, flow-charting and algorithms, or at least be able to write pseudo-code for the sequence of steps to be automated.
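As a minimal sketch of that pseudo-code skill, consider a data-driven check where the plan is written as comments and each step maps to a few lines of code. The `apply_discount` function here is hypothetical, a stand-in for whatever unit the tester is asked to automate, not anything from the original text:

```python
def apply_discount(price, percent):
    # Hypothetical function under test: reduce price by a percentage.
    return round(price * (1 - percent / 100), 2)

def run_discount_checks():
    # Step 1: keep inputs and expected outputs as data,
    #         not as many near-identical copies of the test.
    cases = [
        (100.0, 10, 90.0),
        (100.0, 0, 100.0),   # boundary: no discount
        (20.0, 50, 10.0),    # half price
        (80.0, 25, 60.0),
    ]
    failures = []
    # Step 2: loop over the data and compare actual to expected.
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append((price, percent, expected, actual))
    # Step 3: collect and report every failure, not just the first.
    return failures

if __name__ == "__main__":
    for case in run_discount_checks():
        print("FAIL:", case)
```

The point is not the arithmetic but the shape: a tester who can first write the three commented steps as pseudo-code can then grow them into a robust, maintainable script in any tool.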
It's important to automate, but it's even more important to form an effective automation team. And it's most important of all to hold a retrospective after each experiment, to avoid wasting precious time, money and effort, and, importantly, to maintain the sanity of testers!