Prabakar's blog on Software Testing: 2011

Sunday, July 10, 2011

Taking Selenium to Next Level

Excellent Selenium testing resources:
  1. Testing Versus Checking - Adam's opinion article on the difference between testing a Web application and checking that a Web application works
  2. Abstracting Locators in Selenese (Se-IDE) - How to write really concise and easily changeable Selenium element locators
  3. Selenium IDE (Se-IDE) plugin tutorial - How to write your extensions to the Selenium IDE record/playback tool
  4. CSS Locators Reference - a one-page quick reference
  5. Page Objects - an object-oriented approach to writing Selenium tests. Read an introductory article and an article on understanding Elements and Actions (a minimal sketch of the pattern follows this list)
  6. Using a Continuous Integration (CI) server as a Se-Grid replacement. Read the TestMaker CI Guide.
  7. Read Adam's Blog
  8. A 30-minute tutorial on Repurposing Selenium Tests as Load and Performance Tests
  9. Adam's Big Continuous Delivery Summary commentary on DevOps
  10. Selenium 2 RefCard from DZone
  11. Selenium RefCard (covers Selenium 1 and TestMaker)
Source: Cohen Blog
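
To give a flavour of the Page Object pattern in item 5, here is a minimal sketch using the Selenium 2 (WebDriver) Python bindings; the URL and element IDs are hypothetical:

from selenium import webdriver

class LoginPage:
    """Wraps the login page so tests call methods instead of raw locators."""

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("http://example.com/login")  # hypothetical URL
        return self

    def login_as(self, username, password):
        # Locators live here: if the page changes, only this class changes.
        self.driver.find_element_by_id("username").send_keys(username)
        self.driver.find_element_by_id("password").send_keys(password)
        self.driver.find_element_by_id("submit").click()

driver = webdriver.Firefox()
LoginPage(driver).open().login_as("testuser", "secret")
driver.quit()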

TestMaker Object Designer

TestMaker Object Designer provides fast and easy data-driven test authoring for Ajax and Flex applications. Designer is a free, open source testing (OST) tool published under the GPL license.

Designer for Ajax, Flex, Flash Record/Playback

TestMaker 6 simplifies test creation and maintenance. It comes with everything you need to build tests, run them, and present the results graphically, all with minimal training:

  • Functional Test Record/Playback Tool
  • The Open Source Alternative to HP QuickTest Pro (QTP)
  • The Alternative To Selenium IDE for Selenium test development
  • Record tests in Internet Explorer, Chrome, Firefox, Safari, and Opera
  • Record and playback functional tests of Flash and Flex (SWF) applications
  • Data-enable tests using simple drag-and-drop features
  • Add assertions and check-points to tests visually
  • If-then, looping, and conditional test execution without scripting
  • Object Repository for sharing Web page objects between team members
  • Instant and context-sensitive help and reference documentation
  • Support for Ajax and JavaScript asynchronous events without additional test scripting
  • Selenium, Sahi, and Flex test type support in one tool.
  • Outputs to Selenium unit tests, Selenium IDE Selenese table format, Sahi, and Flex test formats.
Source: Opensource Testing

Trends in Software Testing

As the complexity of software applications increases, testing becomes more crucial. And in the process, more time consuming. Here is a list of emerging testing practices.

Software is everywhere today and is becoming increasingly mission critical, whether in satellites and planes, or e-commerce websites. Software complexity is also on the rise - thanks to distributed, multi-tier applications targeting multiple devices (mobile, thin/thick clients, clouds, etc). Added to that are development methodologies like extreme programming and agile development. No wonder software testing professionals are finding it hard to keep up with the change.

As a result, many projects fail while the rest are completed significantly late, and provide only a subset of the originally planned functionality. Poorly tested software and buggy code cost corporations billions of dollars annually, and most defects are found by end users in production environments.

Given the magnitude of the problem, software-testing professionals are finding innovative means of keeping up - both in terms of tools and methodologies. This article covers some of the recent trends in software testing - and why they're making the headlines.

Test driven development (TDD):-

TDD is a software development technique that ensures your source code is thoroughly unit-tested as compared to traditional testing methodologies, where unit testing is recommended but not enforced. It combines test-first development (where you write a test before you write just enough code to fulfil that test), and refactoring (where, if the existing design isn't the best possible to enable you to implement a particular functionality, you improve it to enable the new feature).
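
To make the test-first rhythm concrete, here is a minimal sketch in Python using the standard unittest module (the add function is purely illustrative):

import unittest

# Step 1: write the tests first -- they fail until add() is implemented.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negatives(self):
        self.assertEqual(add(-1, 1), 0)

# Step 2: write just enough code to make the tests pass.
def add(a, b):
    return a + b

# Step 3: refactor with the tests as a safety net, then repeat the cycle.
if __name__ == "__main__":
    unittest.main()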

TDD is not a new technique - but it is suddenly centre stage, thanks to the continued popularity of software development methodologies such as agile development and extreme programming.

Optimisations to TDD include the use of tools (such as Microsoft's Pex for Visual Studio - http://research.microsoft.com/en-us/projects/pex/) to improve code coverage by creating parameterised unit tests that look for boundary conditions, exceptions, and assertion failures.

TDD is gaining popularity as it allows for incremental software development - where bugs are detected and fixed as soon as the code is written, rather than at the end of an iteration or a milestone.

For more details on TDD, use the following links:
http://en.wikipedia.org/wiki/Test-driven_development
http://www.agiledata.org/essays/tdd.html

Virtualisation testing:-

Testing is becoming increasingly complex - the test environment set-up, getting people access to the environment, and loading it with the right bits from development together take up about 30-50 per cent of the total testing time in a typical organisation. What is worse, when testers find bugs, it is hard to re-create the same environment for developers to investigate and fix them. Test organisations are therefore gravitating towards virtualisation technologies to cut test set-up times significantly. These technologies help teams to:

  • accelerate set-up/tear down and restoration of complex virtual environments to a clean state, improving machine utilisation
  • eliminate no repro bugs by allowing developers to recreate complex environments easily
  • improve quality by automating virtual machine provisioning, building deployment, and building verification testing in an integrated manner (details later)

As an offshoot, virtualisation ensures that test labs reduce their energy footprint, resulting in a positive environmental impact, as well as significant savings.

Some of the companies that have virtual test lab management solutions are VMware, VMLogix, and Surgient. Microsoft has recently announced a Lab Management (http://channel9.msdn.com/posts/VisualStudio/Lab-Management-coming-to-Visual-Studio-Team-System-2010/) product as part of its Visual Studio Team System 2010 release. Lab Management supports multiple environment management, snapshots to easily restore to a previous state, virtual network isolation to allow multiple test environments to run concurrently, and a workflow to allow developers to have easy access to environments to reproduce and fix defects.
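
As a small illustration of the snapshot workflow these products automate, here is a sketch that resets a test VM to a clean state using VirtualBox's VBoxManage command line; the VM and snapshot names are placeholders, and the VM is assumed to be running:

import subprocess

VM = "win7-test-lab"          # placeholder VM name
SNAPSHOT = "clean-baseline"   # snapshot of the freshly set-up environment

def reset_environment():
    # Power off, roll back to the clean snapshot, and boot again headless.
    subprocess.check_call(["VBoxManage", "controlvm", VM, "poweroff"])
    subprocess.check_call(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT])
    subprocess.check_call(["VBoxManage", "startvm", VM, "--type", "headless"])

reset_environment()  # the environment is now back to a known-good state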

Theresa Lanowitz, founder of Voke, a firm that analyses trends in the IT world, expects virtualisation to become ‘the defining technology of the 21st century', with organisations of every size set to benefit from virtualisation as part of their core infrastructure.

Continuous integration:-

CI is a trend that is rapidly being adopted in testing, where team members integrate their work with the rest of the development team on a frequent basis by committing all changes to a central versioning system. Beyond maintaining a common code repository, other characteristics of a CI environment include build automation, auto-deployment of the build into a production-like environment, and a self-test mechanism ensuring that, at the very least, a minimal set of tests is run to confirm that the code behaves as expected.

Leveraging virtualised test environments, tools such as Microsoft's Visual Studio Team System (VSTS) can create sophisticated CI workflows. As soon as code is checked in, a build workflow kicks in that compiles the code, deploys it onto a virtualised test environment, triggers a set of unit and functional tests on that environment, and reports on the results.
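
A toy sketch of such a check-in workflow, driving the build, deploy, and test steps from Python (the commands are placeholders for whatever your tool chain uses):

import subprocess
import sys

# Hypothetical commands; substitute your real build, deploy, and test steps.
STEPS = [
    ("build",  ["msbuild", "MyApp.sln"]),
    ("deploy", ["deploy.cmd", "test-lab-vm"]),
    ("test",   ["mstest", "/testcontainer:MyApp.Tests.dll"]),
]

for name, cmd in STEPS:
    print("Running step: " + name)
    if subprocess.call(cmd) != 0:
        # A failed step fails the build and is reported immediately.
        print("Step failed: " + name)
        sys.exit(1)

print("All steps passed.")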

VSTS takes the build workflow one step further and performs the build before the check-in is finalised, allowing the check-in to be aborted if it would break the build or fail the tests. And given historical code coverage data from test runs, the tool can identify which of the several thousand test cases need to be run when a new build comes out - significantly reducing the build validation time.

One obvious benefit of continuous integration is transparency. Failed builds and tests are found quickly rather than having to wait for the next build. The developer who checked in the offending code is probably still nearby and can quickly fix or roll back the change.

For a complete set of tools that help enable CI, see http://en.wikipedia.org/wiki/Continuous_Integration

Crowd testing:-

Crowd testing is a new and emerging trend in which, rather than relying on a dedicated team of testers (in-house or outsourced), companies rely on virtual test teams (created on demand) to get complete test coverage and reduce the time to market for their applications.

The company defines its test requirements in terms of scenarios, environments, and the type of testing (functional, performance, etc). A crowd test vendor (such as uTest - www.utest.com) identifies a pool of testers that meet the requirements, creates a project, and assigns work. Testers check the application, report bugs, and communicate with the company via an online portal. Crowd testing vendors also provide other tools, such as powerful reporting engines and test optimisation utilities. Some of the crowd testing vendors are domain specific - such as Mob4hire (www.mob4hire.com), which focuses on mobile application testing. Testers will bid on various projects specific to their handsets. Developers will choose the testers that they require, and will deploy test plans for the mobile application they are developing. On completion of the test, the mobile tester will get paid for the work.

One obvious advantage is in terms of reducing the test cycle time. But crowd testing is being used in various other scenarios as well - for example, to do usability studies on new user interfaces. The cost savings can be substantial.

Tool-driven developer testing:-

Traditionally, developer testing was limited primarily to unit testing and some code coverage metrics. However, as organisations realised that defects are far cheaper to fix when found in development than in test or production, they have begun to invest in tooling that enables developers to find bugs early on.

IDE-integrated tools have made the self-testing practice acceptable to developers by automating the unit-testing and coverage analysis process for them. These tools also make it easy to analyse performance and compare it against a baseline by extending the unit test infrastructure.

Development teams are also expected to perform a level of security testing (threat modelling, buffer overflows, SQL injection, etc). For teams developing in native languages such as C/C++, developers are also required to use run-time analysis tools to check for memory leaks, memory corruption, and thread deadlocks. Developers are also using static analysis tools to find accessibility, localisation and globalisation issues - and in some cases more sophisticated errors related to memory management and performance - by using data flow analysis and other techniques.

As a result of using these innovative methods, testers can now spend a lot more of their time on integration testing, stress, platform coverage, and end-to-end scenario testing. This will help them detect higher-level defects that would have otherwise trickled down to production.

Source: IT Magazine

Friday, March 4, 2011

Twill - Web Automation Tool in Python

Twill is a simple language that allows users to browse the Web from a command-line interface. With twill, you can navigate through websites that use forms, cookies, and most standard Web features. Twill supports automated Web testing and has a simple Python interface. Twill is open source and written in Python. Unlike many test automation tools for web applications, Twill does not launch or need a browser: it works from the command line (or from your Python script) and lets you perform standard operations like navigating to specific pages, using forms, handling cookies, and so on. Because Twill does not drive a real browser, its usefulness as a test automation tool is limited - it cannot be used to test page rendering, JavaScript-driven functionality, and so on. Tools like Selenium are better suited for that purpose. Twill is better suited to validating functionality that can be exercised through HTTP request/response and by analysing the page source.

Download link: http://darcs.idyll.org/~t/projects/twill-0.9.tar.gz

You can also use Python's easy_install to install or upgrade twill. It works with Python 2.3 or later.

To start using twill, install it and then type twill-sh. At the prompt type:

go http://www.slashdot.org/
show
showforms
showhistory
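
The same session can also be driven from a Python script through twill's commands module, for example:

# Driving twill from Python instead of the twill-sh prompt.
from twill.commands import go, code, show, showforms

go("http://www.slashdot.org/")
code(200)      # assert that the HTTP response code was 200
show()         # dump the page source
showforms()    # list the forms found on the page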

Source: Testing Site

Thursday, March 3, 2011

Selenium IDE - Introduction


Selenium is an open source tool for web application testing. The tool is primarily developed in JavaScript and browser technologies and hence supports all the major browsers on all platforms. For example, you can have your automation scripts written for Firefox on Windows and run them on Firefox on a Mac. Most of the time, you will not need to change your scripts for them to work on the Mac. In terms of platform and browser coverage, Selenium is probably one of the best tools available in the market for web applications. There are three variants of Selenium, which can be used in isolation or in combination to create a complete automation suite for your web applications.
  • Selenium IDE
  • Selenium Core
  • Selenium Remote Control

In this article, we will discuss Selenium IDE. Subsequent articles in the series will cover Selenium Remote Control and Selenium Core as well.

Selenium IDE

Selenium IDE is the easiest way to use Selenium, and most of the time it also serves as the starting point for your automation. Selenium IDE comes as an extension to the Firefox web browser and can be installed from either the OpenQA or the Mozilla distribution site. The Selenium extension is downloaded as an XPI file; if you open this file using File -> Open in Firefox, it should get installed.

The biggest drawback of Selenium IDE is its limited browser support. Though Selenium scripts can be run against most browsers and operating systems, scripts written using Selenium IDE can only be used with the Firefox browser unless they are combined with Selenium RC or Selenium Core.

Selenium IDE is the only flavor of Selenium which allows you to record user actions in a browser window. It can also export the recorded actions to most of the popular languages, like Java, C#, Perl, and Ruby. This eliminates the need to learn a vendor-specific scripting language.

For executing scripts created in these languages, you will need to use Selenium Remote Control. If you do not want to use Remote Control, then you will need to create your test scripts in HTML format.
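
For illustration, a recorded test exported to Python and executed through Selenium Remote Control typically looks something like this sketch (host, port, and locators are placeholders):

import unittest
from selenium import selenium  # the Selenium RC Python client

class ExampleTest(unittest.TestCase):
    def setUp(self):
        # Assumes a Selenium RC server is running on localhost:4444.
        self.sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
        self.sel.start()

    def test_search(self):
        self.sel.open("/")
        self.sel.type("q", "selenium")           # placeholder locator
        self.sel.click("btnSearch")              # placeholder locator
        self.sel.wait_for_page_to_load("30000")
        self.assertTrue(self.sel.is_text_present("selenium"))

    def tearDown(self):
        self.sel.stop()

if __name__ == "__main__":
    unittest.main()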

So, if you are excited about the tool, let's start playing with Selenium IDE. If installed properly, Selenium can be accessed from Tools -> Selenium IDE in your browser's menu.

Compared to most test automation tools, it is very simple and lightweight. The small red button on the right-hand side indicates whether Selenium is in recording mode. Note that Selenium IDE records nothing you do on your computer apart from events in the Firefox browser window - so go ahead and read your mail or open a Word document; Selenium will record only your actions in the browser.

If you are curious about the other options in Selenium IDE, there are not many :) . The remaining options on the Selenium IDE toolbar relate to test execution: Run executes the tests at the maximum possible speed, Walk executes them relatively slowly, and in Step mode you tell Selenium when to take each small step.

The final button on the Selenium IDE toolbar is the Selenium TestRunner. TestRunner gives you a nice browser interface for executing your tests, along with a summary of how many tests were executed, passed, and failed; it gives similar information on the commands that passed or failed. TestRunner is available only for tests developed in HTML.

If you open the options window by going to Options, you will see some self-explanatory settings - for example, the encoding of test files, timeouts, etc. You can also specify Selenium Core and Selenium IDE extensions on this page. Selenium extensions can be used to enhance the functionality provided by Selenium; they are not covered in this article - there will be a separate article on specifying and developing extensions for Selenium.

Tuesday, March 1, 2011

Test Automation [Key Factors]

Based on test automation experience, the following are the key factors that contribute to a successful test automation project:

1] Dedication to automation (Instead of treating it as a spare-time activity)

2] Commitment by the entire team (rather than just one or two testers)

3] Commitment to automation from the start (rather than trying to automate a manual process later)

4] Making use of correct tools/frameworks/technology

5] Allocating sufficient time and resources

Source: SoftwareQA Blogspot

QA Outsourcing [Reduce Testing Cost]

There are all manner of outsourcing firms who can provide you with impressive PowerPoint presentations showing how you can cut your QA costs by 30% or more by outsourcing the work. I would caution you to consider the source, however.

Of course outsourcing firms will produce evidence showing why their service is a valuable cost saving; they will even happily let the managers and executives who decided to use them generate the metrics that show how much money outsourcing saved. Coming from the outsourcing firm, this is little more than marketing speak aimed at generating new business - expecting them to say otherwise would be like expecting a cigarette company executive to go on television and say, "Of course cigarettes kill people, but as long as they are willing to buy them we'll be happy to take their money." As for their customers: admitting that a decision to outsource was a failure would effectively end most executives' or managers' careers, so they have little incentive to look too critically at the numbers.

So how much is the cost savings?

That depends on a LOT of factors. First, how much do your in-house testers cost? Not just their salaries, but their total cost.

In the US and UK, the total cost of an employee is considered to be around double their salary (this varies somewhat with the level of benefits offered, location, and similar factors), and QA testers earn between $45,000 and $90,000 a year depending on experience level, industry, and location. This means that, on average, a tester in the US has an employment cost of somewhere around $60 - $75 per hour. Only you know what your own costs are; I'm including this as a reference point.

The bill rates for outsourced testers range from $25 - $40 per hour in places like India and China up to as much as $60 an hour for outsourced testers in industrialised countries (and it can be MUCH higher than that in some locations, or if you require specialised knowledge; I've personally seen rates as high as $150 per hour).

On top of the outsourcing firm's bill rate there are additional costs - working out to an extra $5 - $10 per hour - to cover the legal expense of negotiating the contract, the network infrastructure work needed to let the vendor's network interface with yours securely, and so on.

So, a reasonable projection is that in most cases outsourcing IT work will cost around half of what it costs to do the work in house, on a per-hour basis (at least for companies based in the US).

We're not at the end of the calculation yet, though. That is a per-hour cost. It assumes that the outsourced testers work exactly as efficiently as your in-house testers, and with the same skill. While tester skill varies widely from company to company, the question of efficiency is fairly predictable. Simply moving your testers into a different building within the same complex creates enough inefficiency to make testing projects take 25 to 50% longer than they would if the testers sat next to the developers. Add in different corporate cultures (or worse, national cultures), language barriers, time zone differentials, etc., and the man-hour increase is closer to an additional 100% to as much as 200%.

So a 50% reduction in cost per man hour, combined with 2 to 3 times as many man hours, means outsourcing is anything from break-even to a money-losing proposition - until you factor in the variability in skill between your existing test staff and the staff of whoever you outsource to. If you have very weak testers and are unable to recruit better ones, then outsourcing may produce a small cost saving of perhaps 10 to 20% of your testing costs.
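
A back-of-the-envelope check using the figures above (all inputs are this article's rough estimates) illustrates the point:

# Rough cost model; every number here is an estimate from the text above.
inhouse_rate = 70.0            # $/hour, fully loaded in-house cost
outsourced_rate = 35.0 + 7.5   # $/hour bill rate plus contract/infrastructure overhead
baseline_hours = 1000.0        # hours the project takes in house
overhead_factor = 2.5          # outsourced work takes 2-3x the man hours

inhouse_cost = inhouse_rate * baseline_hours
outsourced_cost = outsourced_rate * baseline_hours * overhead_factor

print("In-house:   $%.0f" % inhouse_cost)     # $70000
print("Outsourced: $%.0f" % outsourced_cost)  # $106250 -- a net loss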

I have yet to encounter an outsourced testing situation where it actually saved money. Many VPs, Directors, and Managers claimed it did, on the basis of reduced per-hour labour costs, but I've never seen one that was a net positive to the company once all of the outsourcing costs were factored in.

Source: SoftwareQA Blogspot

Product Testing

A product is developed as a project first and undergoes all the tests that a project normally undergoes, namely unit, integration, and system testing. System testing is carried out more rigorously and on multiple systems. In addition, a product needs some more rigorous tests. These are:

1. Load Testing – in web applications and multi-user applications, large numbers of users are logged in and try to use the software in a random manner. The objective is to see whether the software manages multiple requests and serves accurate results, or mixes them up. This unearths issues connected with bandwidth, the database, sufficiency of RAM, hard disk, etc. (see the load-test sketch at the end of this list)

2. Volume Testing – subject the software to a high volume of data and observe its performance, checking whether it degrades.

3. Functional Testing – test that all functions expected of the software are functioning correctly.

4. End-to-End Testing – in this type of testing, one entity is tracked from birth to death in the application. For example, in a payroll application, an employee joins the system, is promoted, is demoted; salary increases and decreases are effected; the employee is kept in abeyance, transferred, retired, dismissed, terminated and so on – to ensure that the state transitions designed in the application happen as desired.

5. Parallel Testing – a number of users use the same function and either input or request the same data. This brings out the system’s ability to handle simultaneous requests while preserving data integrity.

6. Concurrent Testing – carried out to unearth issues when two or more users use the same functionality and update or modify the same data with different values at the same time, normally using a testing tool. For example, take a ticket reservation scenario: there is only one seat and it is shown as available to two people. When both confirm the purchase, the system should give the seat to only one and reject the other request. It must not happen that money is collected from both credit cards while the seat is reserved for only one – the credit card transaction must be reversed for the rejected party. Scenarios like this are tested.

7. Stress Testing – cause stress to the software by making expected resources unavailable, causing deadlock-like scenarios, not releasing resources, and so on, to ensure that the software has routines built in to handle such stress. This brings out the software’s responses to events like machine restart, Internet disconnection, server timeouts, etc.

8. Positive Testing – test the software as specified, without attempting any negative acts, to ensure that all defined functions perform. Used mostly for customer / end-user acceptance testing.

9. Negative Testing – using the software in a manner in which it is not expected to be used; this brings out hidden defects and ensures that even malicious usage does not affect the software or data integrity.

10. User Manual Testing – use the software in conformance with the user manual to ensure that the two are in sync with each other.

11. Deployment Testing – simulate the target environment, deploy the software, and ensure that the specified deployment procedure is appropriate.

12. Sanity Testing – cursory testing to ensure that the components of the software package are complete and of appropriate versions; carried out before delivery or before making a software build.

13. Regression Testing – testing carried out after unearthed defects are fixed, to ensure the fixes work and have not broken existing functionality.

14. Security Testing – testing to determine that an information system protects data and maintains functionality as intended.

15. Performance Testing – testing to ensure that response times are within the acceptable range.

16. Usability Testing – testing the software for different types of usage to ensure that it satisfactorily fulfills the requirements of specified functional areas

17. Install / uninstall Testing – test the software on all target platforms to ensure that install and uninstall operations are satisfactorily performed

18. Comparison Testing – testing the product with competing products to contrast the differences for determining the relative position of the product

19. Intuitive Testing – testing without reference to user manuals, to see whether the product can be used intuitively.

Source: SQA Forum
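
As promised in item 1 above, here is a minimal load-test sketch in Python that fires concurrent requests at a URL and counts failures. A real load test would use a dedicated tool; the target URL and user count are placeholders:

import threading
from urllib.request import urlopen

URL = "http://example.com/"   # placeholder target
USERS = 50                    # simulated concurrent users
errors = []

def one_user():
    try:
        response = urlopen(URL, timeout=30)
        if response.getcode() != 200:
            errors.append(response.getcode())
    except Exception as exc:
        errors.append(str(exc))

threads = [threading.Thread(target=one_user) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("%d of %d requests failed" % (len(errors), USERS))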

Test Effort Estimation

Effort estimation can be done based on different available techniques, such as Function Point Analysis, COCOMO, Use Case Point Analysis, Test Case Point Analysis, and metrics-based estimation.

Effort estimation using any of the above techniques is done for different test activities, such as Test Case Preparation, Automation Script Creation, and Test Execution.

Metrics-based estimation is a very commonly used procedure. Here is how it works, using effort estimation for a Yahoo Mail-style application as an example:

Identify the requirements (Login Page, Inbox, Compose, Address)

Classify the requirements into different complexity levels (Simple, Average, High)

Based on past experience, metrics are collected on how much time it took to write test cases for simple, average, and high-complexity functionality. Similar metrics are collected for the other testing activities.

Now multiply the count in each complexity class by the time factor derived from the metrics to calculate the effort.

Don't report that total as your final effort; always add buffer time. The buffer varies based on the domain, tool, and other factors; for example, we use 20% on top of the previous total. This buffer protects you against risk and other deadline factors.

Now add the buffer time to the effort calculated from the metrics; this is the total effort for the activity.
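
A quick sketch of that calculation (the per-complexity times and counts below are illustrative):

# Metrics-based effort estimation; hours-per-case figures come from past metrics.
HOURS_PER_CASE = {"simple": 0.5, "average": 1.0, "high": 2.0}  # illustrative
CASE_COUNTS    = {"simple": 40,  "average": 25,  "high": 10}   # illustrative

base = sum(HOURS_PER_CASE[c] * CASE_COUNTS[c] for c in CASE_COUNTS)
buffer = 0.20 * base           # 20% buffer for risk and deadline factors
total = base + buffer

print("Base effort: %.1f hours" % base)    # 65.0
print("With buffer: %.1f hours" % total)   # 78.0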

Test Case Point Analysis: You can use this effort estimation technique for Test Automation and Test Execution.

1. Identify the total number of test cases to be automated or executed.

2. Classify the steps into High, Medium, and Low complexity based on the business process they perform.

3. Based on previous experience (collected metrics), determine how much time it takes to execute steps of each complexity, and take the average time.

4. Multiply this average time by the total number of steps to get the total.

5. Add the buffer time to the total to get the final estimate. This buffer time will vary based on the application, domain, and other factors.

Most of these effort estimation techniques are metrics based: you track the actual time taken for each activity and then use the averages in future estimates.

Source: SoftwareQA Blogspot

Wednesday, February 23, 2011

TestLink 1.9.1 - Released

TestLink 1.9.1:

New features
- Requirement revisioning
- Requirement history with log messages
- New requirement and test case comparison method
- Expand/Collapse Buttons for trees
- PHPMAILER update (allows SSL or TLS) - googlemail can be used
- Mechanisms implemented to prevent data loss on editing when a session times out or the user tries to navigate away without saving the latest changes

Important Bugfixes (~70 in total)
- better MSSQL support
- Query metrics: start and end date input fields are respected
- minor usability improvements
- event viewer fixed for IE8

Source: opensourcetesting

Requirements and Test Management Repository [RTMR]

What is RTMR?

RTMR stands for Requirements and Test Management Repository.
It is an open source software testing tool that allows you to:
  • manage software requirements throughout their life cycle
  • describe the scenarios and test cases that validate these requirements
  • run targeted test campaigns
  • follow up all the anomalies encountered during testing, either via the internal fault handler or via an external one (Bugzilla, Mantis)
The solution includes a version management system - per project, requirement, scenario, and test case - that keeps track of software changes and makes it easy to cover all regression tests.

Why use a software testing tool?

The objective of testing is to validate that the software works properly against the needs and requirements gathered from users. It ensures a sufficient level of quality during the development cycle and throughout the software's life. The test phase makes it possible to identify malfunctions, anomalies, or regressions that prevent full or partial coverage of the requirements as initially formulated, and as they evolve with functional changes.

A software testing tool comes in at this point to structure, organise, and target the testing effort (by risks, priorities, etc.).

Why RTMR?

Although fundamental, the software testing profession has become widespread only recently in small and medium-sized IT organisations. And although the panel of test tools keeps growing, existing solutions are often architecturally constraining (proprietary software) and require a substantial financial investment (initial licensing cost, training, maintenance) from the organisations that need them.

RTMR materialises the idea of an alternative: an open source solution built on open technologies (GNU/Linux, PostgreSQL, Qt) and a robust 3-tier architecture, while providing a rich client interface available on different platforms (Linux, MacOS X, and Windows).

Source: opensourcetesting