Monday, June 4, 2012

Collaborative Testing

If your team consists of many new members who are going to test a complex release, and you don't have enough time for training and knowledge-sharing sessions, what do you do? You may ask them to do exploratory testing, or to take up some test cases and execute the steps. But if the business logic of the system is complex, new team members have to interact with the senior members of the team for every question they have. That means the seniors have to spend a considerable amount of their time clarifying the newcomers' doubts, and in such situations the seniors may not get the time to complete their own assignments.

What do you do in such a situation? You want your new people to pick up the product functionality quickly while contributing to the testing effort at the same time. This requires more collaboration, brainstorming, and quick meetings within the team.
How about having the team sit in a conference room during the testing cycle rather than in their work bays? A team gathered around a table can have more interaction, more discussion, more brainstorming, and frequent bug-bash sessions. Testing in collaboration encourages testers to explore the application more, improves knowledge, and brings the team together. It reduces duplicate bugs and increases the number of quality bugs. It also helps the development, analyst, and test teams huddle quickly to discuss any changes in the requirements or technical aspects of the system.

I have seen the advantages of this technique in one of my project releases, in which we achieved good test coverage, brainstormed new test ideas, shared knowledge and tools, and found good bugs. Testers working on one module, and sometimes people across multiple modules, worked from huddle rooms throughout the testing cycle. Collaboration is the key to the success of any project: it brings people, ideas, and techniques together for greater test results. Such collaboration techniques can produce high-quality releases, especially with new teams, changing requirements, and complex systems.

The approach outlined here is not limited to teams with new members; it can be applied with existing members too. Nor is it limited to testing: it can be applied across all disciplines of software development. Recent studies suggest that smaller offices leave more room for ideas, and that such techniques improve collaboration and productivity.

Happy Testing.

Wednesday, February 1, 2012

Load Balancing Automation Scripts Execution


Some years back I developed a tool whose primary objective was to utilize the automation lab machines optimally and achieve faster execution cycles. The tool was developed in Visual Basic using its WinSock component. It automates many tasks of automation script execution: uninstalling the existing build on all targeted machines, installing the new build on them, executing the scripts on a next-available-machine basis, and providing a console where the administrator can view machine utilization, check the status of each machine, and track the status of each script execution. Let us discuss each component of the tool in detail.

The tool has two major components: a server and a client. The client is a small program installed on each client machine; it listens on a particular port for instructions from the server and periodically sends the machine's status and script execution status back to the server. A properties file on the remote client stores the hardware and software configuration details of the client and the details of the server. If the GUI automation tool provides a script event model (tools like SilkTest provide such options), we can write custom code in the tool's events (before script start, script execution in progress, script execution completed, an error occurred, etc.) to update the status in a common database table or an XML file. This design is feasible only if the automation tool supports command-line execution. If the status is written to a database, there is no burden on the client program to send script execution status to the server; if it is written to a text or XML file, the client program has to propagate the status to the server for every event change.
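The client/server exchange can be sketched in a few lines. The sketch below uses Python sockets in place of the original Visual Basic WinSock code; the JSON message format, port handling, and machine name are assumptions for illustration only:

```python
import json
import socket
import threading

def run_controller(host="127.0.0.1", port=0):
    """Minimal stand-in for the server: accepts one client connection,
    sends an instruction, and collects one status report."""
    srv = socket.create_server((host, port))
    port = srv.getsockname()[1]
    reports = []

    def serve():
        conn, _ = srv.accept()
        with conn:
            # Instruct the client which script to run.
            conn.sendall(json.dumps({"cmd": "run", "script": "smoke_suite"}).encode() + b"\n")
            # Receive the client's status report.
            reports.append(json.loads(conn.recv(4096).decode()))
        srv.close()

    t = threading.Thread(target=serve)
    t.start()
    return port, t, reports

def client_agent(host, port, machine_id):
    """Minimal stand-in for the client agent: receives an instruction,
    'executes' it, and reports status back to the server."""
    with socket.create_connection((host, port)) as sock:
        instruction = json.loads(sock.makefile().readline())
        status = {"machine": machine_id,
                  "script": instruction["script"],
                  "state": "completed"}
        sock.sendall(json.dumps(status).encode())

port, t, reports = run_controller()
client_agent("127.0.0.1", port, "lab-machine-01")
t.join()
print(reports[0]["state"])  # completed
```

In the real tool the client would also push periodic heartbeats with hardware utilization, but the request/report loop above is the essential shape.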

The server is an ASP web site that lets the administrator choose scripts from the existing inventory, based on the tags, priority, and type associated with each script, and create a master driver from the selection. The server interface also maintains a repository of client machines along with their hardware and software configurations. Once the scripts are selected and the master driver is created, the administrator can select client machines based on the configuration requirements and invoke the execution of the master driver. The server shows which client machines are available and which are not. The scripts repository maintains the average execution time and last execution time of each script; based on this data, the server estimates the total execution time of the master driver. The client program places log files, error logs, and screenshots on a shared folder or FTP site, so the administrator can access these resources from the server page whenever a script failure needs debugging. The server also provides various reports on the overall status of script execution, and at the end of the master driver run it zips all logs and error screenshots into a file and copies the zip to an FTP location. I also added options like subscribing to email notifications on errors, completion, and so on.
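The next-available-machine logic together with the execution-time estimate can be sketched as a simple greedy scheduler. This is a minimal illustration, not the original tool's code; the script names and average times are made up:

```python
import heapq

def schedule(scripts, machines, avg_times):
    """Greedy 'next available machine' assignment: each script goes to the
    machine that frees up earliest; returns the plan and the estimated
    wall-clock time for the whole master driver."""
    free_at = [(0.0, m) for m in machines]  # (time when free, machine)
    heapq.heapify(free_at)
    plan = []
    for script in scripts:
        t, machine = heapq.heappop(free_at)
        finish = t + avg_times[script]
        plan.append((script, machine, finish))
        heapq.heappush(free_at, (finish, machine))
    return plan, max(f for _, _, f in plan)

# Hypothetical average execution times (minutes) from the scripts repository.
avg = {"login": 5, "checkout": 12, "search": 8, "reports": 20}
plan, eta = schedule(["reports", "checkout", "search", "login"],
                     ["client-A", "client-B"], avg)
print(eta)  # 25.0
```

The same average-time data that drives the estimate also tells the scheduler which machine frees up first, which is why keeping per-script execution history in the repository pays off.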

Prerequisites for using such tools are stabilized automation scripts, a good recovery mechanism, ease of debugging errors, a stabilized automation framework, and logs that provide the information required to report bugs. These tools are very useful in continuous integration of builds: we can automatically install each new build as soon as it is successfully created on the build machine and start executing the scripts. Since the scripts run soon after the nightly build is created, test execution results are available to engineers by the time they are in the office the next day. The tool also saves hardware costs by connecting all machines to a common monitor through a switch and utilizing the existing hardware optimally. Such tools also help run the tests frequently, uncovering bugs introduced by any change in the system.
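The continuous-integration hookup amounts to polling the build machine and triggering the cycle when a new build appears. A minimal sketch, assuming a `get_latest` callable that returns the newest build label and an `on_new_build` callback that would kick off the uninstall/install/execute sequence (both names are invented for illustration):

```python
import time

def watch_builds(get_latest, on_new_build, polls=5, interval=0.01, seen=None):
    """Poll the build machine; when a new build label appears, trigger
    the uninstall/install/execute cycle on the lab machines."""
    for _ in range(polls):
        latest = get_latest()
        if latest is not None and latest != seen:
            on_new_build(latest)
            seen = latest
        time.sleep(interval)
    return seen

# Simulated nightly build machine publishing two builds over five polls.
published = iter([None, "build-101", "build-101", "build-102", "build-102"])
triggered = []
watch_builds(lambda: next(published), triggered.append)
print(triggered)  # ['build-101', 'build-102']
```

A production version would run the poll loop as a service and pass the build label through to the master driver, but the dedup-and-trigger shape is the same.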

Friday, January 27, 2012

Cross Browser Compatibility Testing

In the world of multiple browsers and browser versions, it is mandatory for web applications to support multiple browsers and versions on multiple platforms. A web page that works well in Firefox on Windows may not work as expected in Firefox on Linux, and the same goes for Safari on Windows versus Safari on Mac. As Mozilla releases new versions of Firefox more frequently, testing and development teams have to gear up to support the latest available versions of the browser.

Browser compatibility testing is one of the important non-functional testing types in any product release. You cannot develop an application assuming the end users will use only one or two browsers, and plan your development and testing around those browsers alone. Since there are many browsers and versions in the market, testing teams have to make sure browser compatibility testing is part of the final checklist, without which the product cannot be delivered.

There are a few ways testing teams usually cover browser compatibility: 1. a tester switches to a different browser and version for each build; 2. different testers work on different browsers; 3. all high-priority and sanity test cases are executed on all browser versions in the final or certification builds. These techniques fundamentally cover the browser compatibility of the application, but they are inefficient and time-consuming; issues are not reported early because functional testing takes priority, and the result is insufficient coverage across browsers.
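Technique 2 above is essentially distributing a browser-and-platform matrix across testers. A toy sketch of that distribution (the browser list, platforms, and tester names are all made up):

```python
from itertools import cycle, product

browsers = ["IE8", "IE9", "Firefox 9", "Chrome 16"]  # hypothetical targets
platforms = ["Windows XP", "Windows 7"]
testers = ["asha", "ravi", "kiran"]                  # hypothetical team

# Spread every browser/platform combination across testers round-robin,
# so each build gets coverage on the whole matrix.
assignments = {t: [] for t in testers}
for combo, tester in zip(product(browsers, platforms), cycle(testers)):
    assignments[tester].append(combo)

for tester, combos in assignments.items():
    print(tester, combos)
```

Rotating the assignments per build (technique 1) is a one-line change: offset the `cycle` start by the build number.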

There are various online services and desktop tools available to check web applications in various browsers. Popular online tools include BrowserShots, CrossBrowserTesting, and Adobe BrowserLab; desktop tools include IETester and Microsoft Expression Web SuperPreview. Smashing Magazine has a detailed review of cross-browser testing tools. If your web application cannot be accessed over the internet, then online services cannot be used for this testing. Microsoft SuperPreview is a good tool that supports all versions of IE and Firefox 3.5; it is yet to support other browsers like Chrome, Opera, and Safari, and the latest versions of Firefox.


P.S. I am sure this is not the latest topic or the latest happening in the testing industry. I recently had a specific requirement to test a web application with multiple browsers in one of my projects, hence I am sharing my views on the topic.

Tuesday, January 17, 2012

Web Services Load Testing


Nowadays a large number of web applications either consume web services or expose their business functions as web services. Just as the performance of accessing application services from the user interface (web pages or desktop forms) is important for end-user satisfaction, so is the performance of the web services the application provides. I have found that performance requirements for web services are often not explicitly mentioned in requirements documents; testers should ensure they are stated, just as performance requirements are stated for web pages and servers.

We can test the performance of web services much as we test the performance of normal web applications, tracking metrics like response time, throughput, memory consumption, application server performance, and I/O performance. I have worked on testing applications where the complete application is built on web services: a smart client application that interacts with the server via web services. In such scenarios it is difficult to performance-test the application from the user interface under heavy loads, so we can instead test the core of the application, the web services themselves, rather than the client. Performance testing of web services is supported by many open source and commercial tools, and some open source and free tools can serve basic performance testing requirements. I found JMeter to be a good choice among the open source tools: it offers good extensibility and scalability and has good options for performance-testing web services. In one of the projects, I had to test web services with 3000 concurrent users and huge data transfers, which is difficult with some SOAP testing tools because of the volume of data exchanged and processed with each request; JMeter's extensibility made it the right choice.
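Whatever the tool, the core of a web service load test is firing concurrent requests and computing response time, throughput, and error rate. A minimal Python sketch, with a stubbed-out function standing in for the real web service call (in practice JMeter or an HTTP/SOAP client would make the request):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_order_service(payload):
    """Stand-in for a real web service call; sleeps briefly to
    simulate network and processing time."""
    time.sleep(0.01)
    return {"status": "OK", "echo": payload}

def call_and_time(i):
    start = time.perf_counter()
    resp = fake_order_service({"order_id": i})
    elapsed = time.perf_counter() - start
    return elapsed, resp["status"] == "OK"  # assertion on the response

N = 50
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(call_and_time, range(N)))
wall = time.perf_counter() - start

times = [r[0] for r in results]
errors = sum(1 for r in results if not r[1])
print(f"avg response: {sum(times)/N:.3f}s, "
      f"throughput: {N/wall:.1f} req/s, error rate: {errors/N:.1%}")
```

The per-response check is the equivalent of a JMeter response assertion, and the failure count divided by total requests gives the error rate discussed below.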

The challenges in load testing web services are verifying the response of each web service, parameterizing the input data, and finding the error rate of the requests. We can add assertions on the web service response to check whether it is correct or not, but this places some overhead on JMeter to verify each response and may skew the measured metrics. If your web services have a robust logging mechanism, the correctness of responses can instead be verified from log files or log message tables.

Some considerations while designing performance tests: include think time between calls to different web service methods when trying to achieve a business flow; provide an appropriate ramp-up time; add more remote machines to JMeter when testing with a large number of concurrent virtual users; and configure JMeter's heap memory properly to accommodate the load.
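Ramp-up and think time are easy to see outside JMeter too. In the sketch below, each virtual user walks a hypothetical two-step business flow with think time between the steps, and user starts are staggered over the ramp-up period instead of beginning all at once (the flow names and timings are invented):

```python
import random
import threading
import time

completed = []
lock = threading.Lock()

def virtual_user(user_id):
    """One virtual user walking a hypothetical two-step business flow."""
    for step in ("create_order", "confirm_order"):
        time.sleep(0.005)                      # stand-in for the service call
        time.sleep(random.uniform(0.0, 0.01))  # think time between steps
    with lock:
        completed.append(user_id)

users, ramp_up = 20, 0.2  # start 20 users spread over 0.2 seconds
threads = []
for i in range(users):
    t = threading.Thread(target=virtual_user, args=(i,))
    t.start()
    threads.append(t)
    time.sleep(ramp_up / users)  # stagger starts instead of a thundering herd
for t in threads:
    t.join()
print(len(completed))  # 20
```

Without the staggered starts, all users hit the first service method in the same instant, which tests a spike rather than sustained load; the ramp-up makes the measured throughput reflect steady-state behavior.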