Friday, November 18, 2011

Automation Coverage


Automation Coverage is a metric widely asked for by management to see how much of the test case repository is automated. Managers use this metric to estimate manual effort, automation script execution time, and so on.

An automation coverage percentage cannot give a good picture unless we also know what kind of test cases are automated, how they are automated, how maintainable they are, and the context in which the automation is going to be used. I believe automation is most useful for sanity tests, happy path test cases, post-production hot fix testing, test data creation, prerequisite setup creation, repeatable data-driven testing, and supporting regression testing. Hence, instead of using one coverage metric for automation, I think it is better to use a separate automation coverage metric for each type of testing: automation coverage of sanity tests, automation coverage of happy path test cases, automation coverage of regression test cases, and so on.
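
As a rough illustration of per-type coverage metrics, here is a minimal Python sketch; the repository export, the test type names, and the "automated" flag are hypothetical stand-ins for whatever your test management tool actually provides.

    # Minimal sketch: per-type automation coverage, assuming each test case
    # is tagged with a test type and an automated flag (hypothetical data).
    from collections import defaultdict

    test_cases = [
        {"id": "TC-1", "type": "sanity", "automated": True},
        {"id": "TC-2", "type": "sanity", "automated": False},
        {"id": "TC-3", "type": "regression", "automated": True},
        {"id": "TC-4", "type": "happy_path", "automated": True},
    ]

    totals = defaultdict(int)
    automated = defaultdict(int)
    for tc in test_cases:
        totals[tc["type"]] += 1
        if tc["automated"]:
            automated[tc["type"]] += 1

    for test_type in sorted(totals):
        coverage = 100.0 * automated[test_type] / totals[test_type]
        print("{0}: {1:.0f}% automated".format(test_type, coverage))

For the sample data this prints one figure per test type (happy_path 100%, regression 100%, sanity 50%), which tells management far more than a single repository-wide percentage.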

The power of automation lies in the way the test cases are automated. Automation should add value beyond the manual test cases and take care of the areas where manual testing finds it difficult. The framework on which the automated tests are built needs to be robust and easy to use. The efficiency of automation scripts should be improved so that tests complete faster and automation can cover more tests. Automation scripts need to be reviewed from time to time to improve their efficiency in finding bugs. Automation can drastically reduce effort where test cases must be executed with large volumes of data, it can ensure that positive paths are working fine as a final check, and it can help with test data creation. Automation is of great help when it covers the areas a manual tester cannot easily handle. It is a good aid to testing when it is meticulously planned, used as the context demands, and treated as seriously as software development.
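
To make the data-driven point concrete, here is a minimal pytest sketch; validate_discount() and the sample rows are hypothetical placeholders for real application logic and the volumes of data mentioned above.

    # Minimal data-driven test sketch using pytest.mark.parametrize.
    # validate_discount() and the rows below are hypothetical placeholders.
    import pytest

    def validate_discount(order_total):
        # Placeholder rule: 10% discount on orders of 100 or more.
        return order_total * 0.10 if order_total >= 100 else 0

    @pytest.mark.parametrize("order_total, expected_discount", [
        (50, 0),
        (100, 10),
        (250, 25),
    ])
    def test_discount_rules(order_total, expected_discount):
        # Each row runs as its own test case, so large data sets can be
        # exercised without writing any new test code.
        assert validate_discount(order_total) == expected_discount

Each additional row costs only a line of data, which is exactly where automation beats manual execution.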

In summary, test management tools should provide more realistic metrics on automation coverage than simply the percentage of automated test cases in the total test repository.

Wednesday, November 9, 2011

How are mind maps useful in testing?

I have been using mind maps for many months, but recently I started using them extensively in test design and for capturing test ideas. As a visual presentation is more powerful than a linear representation, I prefer mind maps wherever I would otherwise use multi-level bullet lists.

Earlier I used to write my test ideas as a linear list of bullet points. I found this approach not very productive when the system under test has integrations with multiple third-party systems or the system itself has a lot of interfaces.

Mind maps are useful in Lean Test Design, where they can save a lot of time in documenting test cases in test management tools. The technique is handy when I explain test ideas to peers and the dev team. The dev team can use a mind map created by a tester as a checklist for sanity testing before delivering the code to the test team (a small sketch of this follows). A mind map gives a quick snapshot of what the entire system looks like and how quickly I can test it.
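
As one example of turning a mind map into such a checklist, here is a minimal Python sketch that flattens a FreeMind .mm export (plain XML with a TEXT attribute on each node) into checklist lines; sanity_checklist.mm is a hypothetical file name.

    # Minimal sketch: flatten a FreeMind .mm export into a plain-text
    # checklist the dev team can walk through before handing over a build.
    # "sanity_checklist.mm" is a hypothetical file name.
    import xml.etree.ElementTree as ET

    def print_checklist(node, depth=0):
        text = node.get("TEXT")
        if text:
            print("  " * depth + "[ ] " + text)
        for child in node.findall("node"):
            print_checklist(child, depth + 1)

    mind_map = ET.parse("sanity_checklist.mm").getroot()
    for branch in mind_map.findall("node"):
        print_checklist(branch)

The same map that guided test design then doubles as the sanity checklist, with no extra documentation effort.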

In one of my projects, the system integrated with 4 external systems, and all of these systems were integrated with one another. I found that designing tests and writing test cases in the traditional model did not add value and made it tough to visualize how all these systems were integrated. When I used a mind map to represent the test design, more test ideas started flowing in and everyone on the team had more clarity on the system. The more I use mind maps, the more ideas I get and the better my understanding of the system.

I will write more on mind mapping, with examples, in my next blog post.

Tools: Open source desktop tools like FreeMind and XMind, or web-based tools like Mind Meister. I use XMind and Mind Meister in my work.

Further Help: Darren McMillan has written excellent articles on using mind maps in Lean Test Design and other areas of application.