Prabakar's blog on Software Testing: 03/01/2011 - 04/01/2011

Friday, March 4, 2011

Twill - Web Automation Tool in Python

Twill is a simple language that allows users to browse the Web from a command-line interface. With twill, you can navigate through Web sites that use forms, cookies, and most standard Web features. Twill supports automated Web testing, has a simple Python interface, and is open source and written in Python. Unlike many test automation tools for web applications, Twill does not launch or need a browser. It works from the command line (or from your Python script) and lets you perform standard operations such as navigating to specific pages, filling in forms, handling cookies, and so on. Because Twill does not drive a real browser, its usefulness as a test automation tool is limited: it cannot test page rendering or JavaScript-dependent functionality. Tools like Selenium are better suited for that purpose. Twill is better suited for validating functionality that can be exercised through HTTP requests and responses and by analyzing page source.

Download link: http://darcs.idyll.org/~t/projects/twill-0.9.tar.gz

You can also use Python's easy_install to install or upgrade twill. It works with Python 2.3 or later.

To start using twill, install it and then type twill-sh. At the prompt type:

go http://www.slashdot.org/
show
showforms
showhistory
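Commands like showforms work by scanning the fetched page source for form elements. As a rough, stdlib-only sketch of that request/response-and-page-source style of checking (no twill or network needed here; the sample page source is made up for illustration):

```python
from html.parser import HTMLParser

class FormLister(HTMLParser):
    """Collects the attributes of each <form> found in a page source."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.forms.append(dict(attrs))

# In a real check this string would come from an HTTP response body.
page = """
<html><body>
  <form name="login" action="/login" method="post">
    <input name="user"><input name="pass" type="password">
  </form>
</body></html>
"""

lister = FormLister()
lister.feed(page)
print(len(lister.forms), lister.forms[0]["name"])
```

Assertions on the parsed forms (names, actions, field counts) are exactly the kind of validation twill-style tools are good at; anything that only exists after JavaScript runs is out of reach.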

Source: Testing Site

Thursday, March 3, 2011

Selenium IDE - Introduction


Selenium is an open source tool for web application testing. It is primarily built on JavaScript and browser technologies and hence supports all the major browsers on all the major platforms. For example, you can write your automation scripts for Firefox on Windows and run them on Firefox on a Mac; most of the time, you will not need to change the scripts for them to work. In terms of platform and browser coverage, Selenium is probably one of the best tools available for web applications. There are three variants of Selenium, which can be used in isolation or in combination to create a complete automation suite for your web applications.
  • Selenium IDE
  • Selenium Core
  • Selenium Remote Control

In this article, we will discuss Selenium IDE. Subsequent articles in the series will cover Selenium Remote Control and Selenium Core as well.

Selenium IDE

Selenium IDE is the easiest way to use Selenium, and most of the time it also serves as the starting point for your automation. Selenium IDE comes as an extension to the Firefox web browser and can be installed from either the OpenQA or the Mozilla distribution site. The extension is downloaded as an XPI file; if you open this file via File -> Open in Firefox, it should get installed.

The biggest drawback of Selenium IDE is its limited browser support. Though Selenium scripts can run on most browsers and operating systems, scripts written with Selenium IDE can run only in Firefox unless they are used together with Selenium RC or Selenium Core.

Selenium IDE is the only flavor of Selenium that lets you record user actions in the browser window. It can also export the recorded actions to most of the popular languages, such as Java, C#, Perl, and Ruby, which eliminates the need to learn a new vendor scripting language.

To execute scripts created in these languages, you will need to use Selenium Remote Control. If you do not want to use Remote Control, then you will need to create your test scripts in HTML format.
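For reference, a Selenium test in the HTML format is just a table of command / target / value rows. A hypothetical example (the page, locators, and expected text below are made up for illustration):

```html
<!-- Hypothetical Selenese test case in HTML table format -->
<table>
  <tr><td colspan="3">Login test</td></tr>
  <tr><td>open</td><td>/login</td><td></td></tr>
  <tr><td>type</td><td>username</td><td>testuser</td></tr>
  <tr><td>type</td><td>password</td><td>secret</td></tr>
  <tr><td>clickAndWait</td><td>submit</td><td></td></tr>
  <tr><td>verifyTextPresent</td><td>Welcome</td><td></td></tr>
</table>
```

Each row is one command: the first column is the Selenium command name, the second the element locator or argument, and the third an optional value.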

So if you are excited about the tool, let's start playing with Selenium IDE now. If it is installed properly, Selenium can be accessed from Tools -> Selenium IDE in your browser menu.

Compared to most test automation tools, it is very simple and lightweight. The small red button on the right-hand side indicates whether Selenium is in recording mode. Also, Selenium IDE will not record anything you do on your computer apart from the events in the Firefox browser window, so go ahead and read your mail, open a Word document, or do anything else; Selenium will record only your actions in the browser.

If you are curious about the other options on the Selenium IDE, there are not many :) . The remaining options on the toolbar are related to test execution: Run executes the tests at the maximum possible speed, Walk executes them relatively slowly, and in Step mode you tell Selenium when to take each small step.

The final button on the Selenium IDE toolbar is the Selenium TestRunner. TestRunner gives you a nice browser interface for executing your tests and a summary of how many tests were executed, passed, and failed, along with similar information on the commands that passed or failed. TestRunner is available only for tests developed in HTML.

If you open the options window by going to Options, you will see some self-explanatory settings, for example, the encoding of test files, timeouts, etc. You can also specify Selenium Core and Selenium IDE extensions on this page. Selenium extensions can be used to enhance the functionality provided by Selenium. They are not covered in this article; there will be a separate article on specifying and developing extensions for Selenium.

Tuesday, March 1, 2011

Test Automation [Key Factors]

Based on test automation experience, the following are the key factors that contribute to a successful test automation project:

1] Dedication to automation (Instead of treating it as a spare-time activity)

2] Commitment by the entire team (rather than just one or two testers)

3] Commitment to automation from the start (rather than trying to automate a manual process later)

4] Making use of correct tools/frameworks/technology

5] Allocating sufficient time and resources

Source: SoftwareQA Blogspot

QA Outsourcing [Reduce Testing Cost]

There are all manner of outsourcing firms who can provide you with impressive PowerPoint presentations showing how you can cut your QA costs by 30% or more by outsourcing the work. I would caution you to consider the source, however.

Of course outsourcing firms will produce evidence showing why their service delivers valuable cost savings; they will even have little problem letting the managers and executives who decided to use them generate the metrics that show how much was saved by outsourcing. From the outsourcing firm, this is little more than marketing speak intended to generate new business; expecting them to say otherwise would be like expecting a cigarette company executive to go on television and say, "Of course cigarettes kill people, but as long as they are willing to buy them we'll be happy to take their money." As for their customers, admitting that a decision to outsource was a failure would effectively end most executives' or managers' careers, so they have little incentive to look too critically at the numbers.

So how much is the cost savings?

That depends on a LOT of factors. First, how much do your inhouse testers cost? Not just their salaries but their total cost.

In the US and UK, the total cost of an employee is considered to be around double their salary (this varies somewhat based on the level of benefits offered, location, and other factors), and QA testers earn between $45,000 and $90,000 a year depending on experience level, industry, and location. This would mean that, on average, a tester in the US has an employment cost somewhere around $60 - $75 per hour. Only you would know what your own costs are; I'm just including that as a reference point.

The bill rates for outsourced testers range from $25 - $40 per hour in places like India and China up to as much as $60 an hour for outsourced testers in industrialized countries (and it can be MUCH higher than that in some locations or if you require specialized knowledge; I've personally seen rates as high as $150 per hour).

On top of that bill rate there are additional costs that work out to an extra $5 - $10 per hour to cover the legal expense of negotiating the contract, the network infrastructure work needed to let the outsourcing vendor's network interface securely with yours, and so on.

So, a reasonable projection is that in most cases outsourced IT work will cost around half of what it costs to do the work in house on a per-hour basis (at least for companies based in the US).

We're not at the end of the calculation yet, though, because that is a per-hour cost. It assumes that the outsourced testers are able to work exactly as efficiently as your in-house testers and with the same skill. While tester skill is highly variable from company to company, the question of efficiency is fairly predictable. Simply moving your testers into a different building within the same complex creates enough inefficiency to make testing projects take 25 to 50% longer than they would if the testers sat next to the developers. Add in different corporate cultures or, even worse, national cultures, language barriers, time zone differentials, etc., and the man-hour increase is closer to an additional 100% to as much as 200%.

So a 50% reduction in cost per man hour, when it takes you two to three times as many man hours, makes outsourcing anything from a break-even to a money-losing proposition, until you factor in variability in skill between your existing test staff and the staff at whoever you outsource to. If you have very weak testers and are unable to recruit better ones, then outsourcing may produce a small cost savings of perhaps 10 to 20% of your testing costs.
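The arithmetic above can be sketched as a quick back-of-the-envelope calculation. All rates, hours, and slowdown factors below are illustrative figures from this post, not measured data:

```python
inhouse_rate = 70           # $/hr, mid-range fully loaded cost of an in-house tester
outsourced_rate = 30 + 10   # $/hr bill rate plus the $5-$10 contract/network overhead
baseline_hours = 1000       # hours the project would take in house

inhouse_cost = inhouse_rate * baseline_hours
for slowdown in (2.0, 3.0):  # 100% to 200% more man hours offshore
    offshore_cost = outsourced_rate * baseline_hours * slowdown
    print(slowdown, inhouse_cost, offshore_cost)
```

At the half-price hourly rate, anything past roughly a 1.75x slowdown (70000 / 40000) already erases the savings, which is why a 2x-3x slowdown makes the deal break-even at best.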

I have yet to encounter an outsourced testing situation where it actually saved money. Many VPs, directors, and managers claimed it did on the basis of reduced per-hour labor costs, but I have never seen one that was a net positive to the company once all of the outsourcing costs were factored in.

Source: SoftwareQA Blogspot

Product Testing

A product is developed as a project first and undergoes all the tests that a project normally undergoes, namely unit, integration, and system testing. System testing is carried out more rigorously and on multiple systems. In addition, a product needs some more rigorous tests. These are:

1. Load Testing – in web applications and multi-user applications, large numbers of users are logged in and try to use the software in a random manner. The objective is to see whether the software manages multiple requests and serves up accurate results, or mixes them up. This unearths issues connected with bandwidth, the database, sufficiency of RAM, hard disk, etc.

2. Volume Testing – subject the software to a high volume of data and observe whether performance degrades.

3. Functional Testing – test that all functions expected of the software are functioning correctly.

4. End-to-End Testing – in this type of testing, one entity is tracked from birth to death in the application. For example, in a payroll application an employee joins the system, is promoted, is demoted, has salary increases and decreases effected, is kept in abeyance, is transferred, and is then retired, dismissed, or terminated, and so on, to ensure that the state transitions designed into the application happen as desired.

5. Parallel Testing – a number of users use the same function and either input or request the same data. This brings out the system's ability to handle simultaneous requests while preserving data integrity.

6. Concurrent Testing – carried out to unearth issues when two or more users use the same functionality and update or modify the same data with different values at the same time, normally using a testing tool. For example, take a ticket reservation scenario: there is only one seat, and it is shown as available to two people. When both confirm the purchase, the system should give the seat to only one and reject the other request. It should not happen that money is collected from both credit cards while the seat is reserved for only one; the credit card transaction must be reversed for the rejected party. Scenarios like this are tested.

7. Stress Testing – cause stress to the software by making expected resources unavailable, creating deadlock-like scenarios, not releasing resources, and so on, to ensure that the software has routines built in to handle such stress. This brings out the software's responses to events like machine restarts, Internet disconnection, server timeouts, etc.

8. Positive Testing – test the software as specified, without attempting any negative acts, to ensure that all defined functions are performing. Used mostly for customer / end user acceptance testing.

9. Negative Testing – use the software in a manner it is not expected to be used; this brings out hidden defects and ensures that even malicious usage does not affect the software or data integrity.

10. User Manual Testing – use the software in conformance with the user manual to ensure that the two are in sync with each other.

11. Deployment Testing – simulate the target environment, deploy the software, and ensure that the specified deployment is appropriate.

12. Sanity Testing – cursory testing to ensure that the components of the software package are complete and of the appropriate versions; carried out before delivery or before making a software build.

13. Regression Testing – testing carried out after unearthed defects are fixed, to confirm the fixes and check that nothing else broke.

14. Security Testing – testing to determine that an information system protects data and maintains functionality as intended.

15. Performance Testing – testing to ensure that the response times are in acceptable range

16. Usability Testing – testing the software for different types of usage to ensure that it satisfactorily fulfills the requirements of specified functional areas

17. Install / uninstall Testing – test the software on all target platforms to ensure that install and uninstall operations are satisfactorily performed

18. Comparison Testing – testing the product with competing products to contrast the differences for determining the relative position of the product

19. Intuitive Testing – testing without reference to user manuals to see if the product can be used without much reference to user guides
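The concurrent-testing scenario in item 6 can be modeled in a few lines. This is a toy sketch (the class and numbers are made up for illustration): two buyers confirm the same last seat at once, and a lock guarantees that exactly one purchase succeeds while the other is rejected.

```python
import threading

class SeatInventory:
    """One remaining seat; only the first confirmed purchase should get it."""
    def __init__(self, seats=1):
        self.seats = seats
        self._lock = threading.Lock()

    def purchase(self):
        with self._lock:        # serialize concurrent confirmations
            if self.seats > 0:
                self.seats -= 1
                return True     # charge this buyer's card
            return False        # reject; reverse any card authorization

inventory = SeatInventory(seats=1)
results = []
threads = [threading.Thread(target=lambda: results.append(inventory.purchase()))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.count(True), results.count(False))  # exactly one success, one rejection
```

A concurrent test would do the same thing against the real system: fire simultaneous requests at the same record and assert that the outcome (one success, one rejection, no double charge) holds every time.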

Source: SQA Forum

Test Effort Estimation

Effort estimation can be done using several available techniques, such as Function Point Analysis, COCOMO, Use Case Point Analysis, Test Case Point Analysis, and metrics-based estimation.

Any of the above techniques can be applied to the different test activities: test case preparation, automation script creation, and test execution.

Metrics-based estimation is very commonly used. Here is the procedure, using a Yahoo Mail application as the example:

Identify the requirements (Login Page, Inbox, Compose, Address)

Classify the requirements in different complexity (Simple, Average, High)

Based on past experience, collect metrics on how much time it took to write a test case for simple, average, and high-complexity functionality. Collect similar metrics for the other testing activities.

Now multiply the count of requirements in each complexity class by the time factor derived from the metrics to calculate the effort.

Don't report that total as your final effort; always add buffer time. The buffer varies based on the domain, tool, and other factors; for example, we use 20% of the previous total. This buffer will protect you against risks and other deadline factors.

Now add the buffer time and the effort calculated from metrics, and this is the Total effort for the activity.
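Putting those steps into numbers, a minimal sketch (the requirement counts, per-complexity hours, and the 20% buffer below are hypothetical; real values come from your own metrics):

```python
# Hypothetical classification of the Yahoo Mail requirements above.
counts = {"simple": 2, "average": 1, "high": 1}
hours_per_case = {"simple": 1.0, "average": 2.0, "high": 4.0}  # from past metrics

base = sum(counts[c] * hours_per_case[c] for c in counts)  # 2*1 + 1*2 + 1*4 = 8 hours
total = base * 1.20                                        # add the 20% buffer
print(base, total)
```

The same structure works for any activity (test case writing, scripting, execution); only the per-complexity time factors change.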

Test Case Point Analysis: You can use this effort estimation technique for Test Automation and Test Execution.

1. Identify the total number of test cases to be automated or executed.

2. Classify the steps into High, Medium, and Low complexity based on the business processes they perform.

3. Based on previous experience (metrics), determine how much time it takes to execute steps of each complexity, and take the average time.

4. Multiply this average time by the total number of steps to get the total.

5. Add the buffer time with the total to get Final estimation time. This buffer time will vary based on the application, domain and other factors.
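A sketch of those five steps with hypothetical numbers (the step counts and per-step minutes are made up; in practice they come from your own metrics):

```python
# Step counts by complexity and average execution minutes from past metrics.
steps = {"high": 10, "medium": 25, "low": 40}
avg_minutes = {"high": 6.0, "medium": 3.0, "low": 1.0}

base_minutes = sum(steps[k] * avg_minutes[k] for k in steps)  # 60 + 75 + 40 = 175
final_minutes = base_minutes * 1.20                           # add a 20% buffer
print(base_minutes, final_minutes)
```

The buffer percentage is the tunable part; as noted above, it varies with the application, domain, and other factors.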

Most of these effort estimation techniques are metrics-based: you record the actual time taken for each activity and then take the average.

Source: SoftwareQA Blogspot