Friday, November 18, 2011

Automation Coverage


Automation coverage is a metric widely requested by management to see how much of the test case repository is automated. Managers use this metric to estimate manual effort, automation script execution time, and so on.

An overall automation coverage percentage cannot give a good picture unless we also know what kind of test cases are automated, how they are automated, how maintainable they are, and the context in which the automation is going to be used. I believe automation is most useful for sanity tests, happy path test cases, post-production hot-fix testing, test data creation, pre-requisite setup creation, repeatable data-driven testing, and supporting regression testing. Hence, instead of using one coverage metric for automation, I think it is better to use a separate automation coverage metric for each type of testing: automation coverage of sanity tests, automation coverage of happy path test cases, automation coverage of regression test cases, and so on.
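To make the idea concrete, here is a minimal sketch in Python; the test types and counts are invented purely for illustration and are not taken from any real test management tool. It reports coverage per test type alongside the overall figure.

# Hypothetical illustration: report automation coverage per test type
# instead of a single overall percentage.
from collections import defaultdict

# (test_type, is_automated) pairs; the data here is made up for the example.
test_cases = [
    ("sanity", True), ("sanity", True), ("sanity", False),
    ("happy_path", True), ("happy_path", False),
    ("regression", True), ("regression", False), ("regression", False),
]

totals = defaultdict(int)
automated = defaultdict(int)
for test_type, is_automated in test_cases:
    totals[test_type] += 1
    if is_automated:
        automated[test_type] += 1

for test_type in totals:
    pct = 100.0 * automated[test_type] / totals[test_type]
    print(f"{test_type}: {pct:.0f}% automated ({automated[test_type]}/{totals[test_type]})")

overall = 100.0 * sum(automated.values()) / len(test_cases)
print(f"overall: {overall:.0f}% automated")

With a breakdown like this, a reasonable-looking overall number can no longer hide weak coverage in a single important area such as regression.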

The power of automation lies in the way the test cases are automated. Automation should add more value than the manual test cases alone, and it should take care of the areas that manual testing finds difficult. The framework on which automated tests are built needs to be robust and easy to use. The efficiency of automation scripts should be improved so that tests complete faster and automation can cover more tests. Automation scripts need to be reviewed from time to time to improve their efficiency in finding bugs. Automation can drastically reduce effort where test cases must be executed with large volumes of data, it can ensure that positive paths are working fine as a final check, and it can help with test data creation. Automation is of great help when it covers areas a manual tester cannot easily handle. It is a good aid to testing when it is meticulously planned, used as the context demands, and treated as seriously as software development.

In summary, test management tools should provide more realistic metrics on automation coverage than simply the percentage of automated test cases in the total test repository.

Wednesday, November 9, 2011

How are mind maps useful in testing?

I have been using mind maps for many months, but recently I started using them extensively in test design and for capturing test ideas. As visual presentation is more powerful than linear representation, I prefer mind maps wherever I would otherwise use multi-level bullet lists.

Earlier I used to write my test ideas as a linear list of bullet points. I found this approach not very productive when the system under test integrates with multiple third-party systems or the system itself has a lot of interfaces.

Mind maps are useful in Lean Test Design, where they can save a lot of time documenting test cases in test management tools. The technique is handy when I explain test ideas to peers and the dev team. The dev team can use a mind map created by a tester as a checklist for sanity testing before delivering the code to the test team. A mind map gives a quick snapshot of what the entire system looks like and how quickly I can test it.

In one of my projects, the system integrates with four external systems, and all of these systems are integrated with one another. I found that designing tests and writing test cases in the traditional model did not add value, and it was tough to visualize how all these systems are integrated. When I used a mind map to represent the test design, more test ideas started flowing in and everyone on the team had more clarity on the system. The more I use mind maps, the more ideas I get and the better my understanding of the system becomes.

I will write more on mind mapping, with examples, in my next blog post.

Tools: open source desktop tools like FreeMind and XMind, or web-based tools like Mind Meister. I use XMind and Mind Meister in my work.

Further Help: Darren McMillan has written excellent articles on using mind maps in Lean Test Design and other areas of application.

Wednesday, October 26, 2011

Why aren't students passionate about software testing?

Yesterday I attended an alumni meeting at the college where I completed my post-graduation. I gave a presentation on software testing: the opportunities, the skills required, how to improve those skills, and so on. Surprisingly, not many of the students knew that, beyond software developers, many other people are involved in the successful release of software products and projects. I spent more than an hour explaining the value of software testing and why students need to understand it. I tried to explain that students, especially in computer applications and computer science disciplines, should not look only for developer jobs; they should also realize that they need to acquire the skills required for software testing. There is a lot of passion and satisfaction in software testing.

When I asked the students how many of them were interested in software testing, none of them raised their hands. All of them answered that they want to become software developers when they take up jobs after completing their course. I think one reason for this response may be the way software testers are selected in campus placements and the recruitment process. Another reason may be that they got no exposure to software testing in their academics. Maybe they were not encouraged by seniors and faculty to take up software testing jobs.

At the end of the presentation, I felt I had made a little difference in changing the students' attitude towards software testing. I will be happy if any of them take the talk to heart and start working on developing their skills to become industry-ready, passionate software testers.




Thursday, October 20, 2011

Thoughts on Automation - I

Automation is always an interesting topic. It adds a lot of value to testing, especially when it comes to data-driven tests, end-to-end flows, and business logic validations. The strength of automation lies in the strength of the underlying manual tests and in very good test data.
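As a small sketch of what I mean by a data-driven test, here is one test body driven by a table of inputs using pytest; the discount function and the values below are hypothetical, invented only for this illustration and not taken from any real project.

# Hypothetical example of a data-driven test: one test body, many data rows.
import pytest

def discount(order_total):
    """Toy business rule used only for this illustration."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

@pytest.mark.parametrize(
    "order_total, expected",
    [
        (0, 0.0),        # boundary: empty order
        (499, 0.0),      # just below the first threshold
        (500, 0.05),     # first threshold
        (1000, 0.10),    # second threshold
        (2500, 0.10),    # well above both thresholds
    ],
)
def test_discount_rules(order_total, expected):
    assert discount(order_total) == expected

Adding one more scenario is just adding one more row to the table, which is exactly where automation pays off over repeating the same manual steps.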

Many times automation engineers tend to create complex tests for small features, and these tests later become very brittle and tough to maintain. The effort and time spent maintaining such scripts increase over the application development cycle, and eventually the return on investment of these scripts drops drastically. The power of automation depends on easy-to-maintain scripts and robust frameworks that drive these tests. The more complex a script is, the more likely it is to fail in the future.

In the race to get most of the scripts in the test repository automated and to improve automation coverage, we tend to automate every manual test case. Automating checks such as verification of color schemes, graphics, section widths, icons, backgrounds, minor images, layouts, and styles will not give good results. Rather, these checks can be done effectively by manual testing, with the human eye. Avoiding automation of such tests and keeping the scripts light and maintainable makes for a good automation strategy. The success of automation lies in judging what to automate and what not to automate. Tests for visual aspects and usability should be left to manual testing, which takes less time than automating them.

I'll cover some more automation concepts in the next post. Till then, happy testing!

Wednesday, October 19, 2011

First Post

I have been thinking of blogging on software testing for many years, and I have finally made it happen. I will start posting my understanding of and experiences with software testing. Stay tuned for updates on this blog.


I draw great inspiration from eminent testers in India and across the world, such as James Bach, Cem Kaner, Michael Bolton, Pradeep Soundararajan, Shrini Kulkarni, and many more. My blog posts may reflect some of their views and guidelines, or be in line with their thoughts, because I closely follow their schools of thought.


Disclaimer: Views expressed in this blog are my own and do not represent my employer in any way.