Friday, April 27, 2007
Tester Tool Box
AllPairs - AllPairs is a free tool - written by James Bach - which is useful for generating test pairs for testing combinations of software features, two at a time.
BareTail-Free real-time log file monitoring and highlighting tool.
Bugzilla-The famous, free, open-source defect tracking system.
Daphne-Daphne is a free system tray application for killing, controlling and debugging Windows processes.
DbVisualizer-Free cross-platform database independent visual browsing and editing tool.
Ethereal-Tool for capturing and analyzing network traffic.
InstallWatch-Records modifications made to your PC during the installation of software, hardware, or configuration changes.
IrfanView-A small freeware graphic viewer for Windows.
Jenny-Jenny is a tool for generating pairwise regression tests.
JR Screen Ruler-Free virtual ruler for your computer screen.
Log-Watch-Tool - written by James Bach - to watch a log file and play a sound when the desired text appears.
MyIVO-Free remote PC access tool.
OpenSTA-OpenSTA is an open source tool used for HTTP and HTTPS load and performance testing.
PerlClip-PerlClip is a free tool - written by James Bach - used to create strings of test data.
Qlock-Qlock is a free World Clock for both your browser and your desktop.
Screen Rip 32-A freeware screen capture utility that lets you capture areas of the screen with a variety of methods.
SmartReplace-Free tool to search and replace text and file names at once in one simple action.
SysInternals Utilities-A collection of useful and powerful utilities for Windows.
TimeSnapper Classic-Automatic screenshot journal tool.
Xenu LinkSleuth-Site spidering tool used to check for broken links.
WebMon-Freeware web page update monitoring program by Colin Markwell.
WinMerge-Open Source visual text file differencing and merging tool for Win32 platforms.
WinTask-A terrific general-purpose Windows and browser automation tool.
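As a quick illustration of what pairwise tools like AllPairs and Jenny compute, here is a minimal sketch (with hypothetical parameters) that enumerates the value pairs a pairwise test set must cover at least once:

```python
from itertools import combinations, product

# Hypothetical feature parameters for a small app under test.
params = {
    "browser": ["IE", "Firefox"],
    "os": ["Windows", "Mac"],
    "connection": ["dial-up", "broadband"],
}

# Every pair of values (from different parameters) that a
# pairwise test set must cover at least once.
required_pairs = set()
for (p1, v1s), (p2, v2s) in combinations(params.items(), 2):
    for v1, v2 in product(v1s, v2s):
        required_pairs.add(((p1, v1), (p2, v2)))

print(len(required_pairs))  # 12 pairs to cover
```

Each test case covers three of these pairs at once, so a pairwise tool can cover all 12 pairs in just 4 test cases instead of the 8 needed for exhaustive combinations.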
Thursday, April 19, 2007
SQL Injection [Login Page]
http://www.securitydocs.com/library/2656
http://www.imperva.com/application_defense_center/glossary/sql_injection.html
http://www.securiteam.com/securityreviews/5DP0N1P76E.html
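The linked articles cover the technique in depth; as a minimal offline sketch (with a hypothetical users table), here is the classic login-page injection against string-built SQL, and the parameterized fix:

```python
import sqlite3

# Classic login-page injection sketch: string-built SQL lets an
# attacker comment out the password check; placeholders do not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

user, pw = "admin' --", "wrong"

# Vulnerable: attacker input becomes part of the SQL statement,
# and the trailing "--" comments out the password clause.
unsafe = f"SELECT * FROM users WHERE name = '{user}' AND pw = '{pw}'"
print(conn.execute(unsafe).fetchall())  # logs in without the password

# Safe: placeholders keep the input as data, not SQL.
safe = "SELECT * FROM users WHERE name = ? AND pw = ?"
print(conn.execute(safe, (user, pw)).fetchall())  # no match
```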
Courtesy: Internet
Tuesday, April 17, 2007
Test Efficiency
What is Test Execution Efficiency?
It is generally very difficult to measure the efficiency of the testing process or the testing team for a project. Test Efficiency quantifies this: how many defects leaked to the customer compared to the number of defects reported by the testing team. A defect leakage of roughly 10-15% is generally considered acceptable. In recent years, companies have started spending heavily on quality, and as a result the defect leakage percentage has come down to less than 10%.
How to Measure?
The attached Excel sheet helps us calculate the efficiency of a testing process based on the number of defects reported by the customer and the number of defects identified by the testing team.
Steps:
1) Provide Ranking to each severity.
In the excel sheet the severity rankings have been assigned as
a) Critical--4
b) Serious – 3
c) Moderate –2
d) Minor –1
For example:
The customer has reported 1 Critical, 1 Serious, 2 Moderate, and 5 Minor defects. The testing team should have identified these defects.
The testing team has reported 10 Critical, 5 Serious, 10 Moderate, and 10 Minor defects.
Test Efficiency is calculated as follows: (T / (T + C)) * 100
T = 10*4 + 5*3 + 10*2 + 10*1 = 85
C = 1*4 + 1*3 + 2*2 + 5*1 = 16
So Test Efficiency is (85 / (85 + 16)) * 100 = 84.16%
If the customer had not identified any defects in the above example, the Test Efficiency would be 100%.
Consider a small project in which neither the testing team nor the customer found any defects (assume you had a good programmer who did unit testing properly); the Test Efficiency would again be 100%. If the testing team failed to find any defects and the customer found them all, the testing efficiency would be 0%. The formula requires fine-tuning in these edge cases.
Find Defects Otherwise the Customer Will!
What Is Requirements Traceability?
In the broadest terms the RTM is simply a matrix of requirements showing each requirement’s relation to something. It could be the source for the requirement or the design component that describes the implementation of the requirement. Or it could be the test case(s) that covers the requirement. Just think of a spreadsheet with a list of requirements, a unique ID for each requirement, and the corresponding relationship.
Advantages of Using the RTM
The tool itself is very straightforward - it's the proper use of the tool that provides benefit to your project. The RTM is not something that is used solely by the analysts or testers - it is something that the entire team utilizes.
Ensure Complete Test Coverage
How do you approach defining your test cases? More importantly, how do you know when you're done? If you use an RTM, you always know the answer to this. As you plan out each test case, decide which requirements each test case will cover - then map those in the RTM. When all requirements are matched against a test case, you're done (though you'll probably still want to look at special cases, alternative tests, etc…).
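A minimal sketch of that bookkeeping, with hypothetical requirement and test-case IDs:

```python
# Hypothetical requirement IDs mapped to the test cases that cover them.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # not yet covered
    "REQ-004": ["TC-02"],
}

uncovered = [req for req, tests in rtm.items() if not tests]
print("Uncovered requirements:", uncovered)  # ['REQ-003']

coverage = (len(rtm) - len(uncovered)) / len(rtm) * 100
print(f"Requirement coverage: {coverage:.0f}%")  # 75%
```

You are done planning when the uncovered list is empty; the same mapping also answers the "did you test everything?" question from the sponsor.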
Help Focus Testing
Along with knowing when your test coverage is complete, an RTM can help reduce unnecessary focus on the same set of requirements. You can quickly see in the matrix when a requirement is covered and avoid continuing to assign more test cases to it.
Validation of Requirements Consistency
I’m not suggesting that testers are responsible for validating requirements - we’re not. This should be done during requirements reviews and walkthroughs, and if problems are not caught there, they should certainly be found during technical design. But the reality of life is that these things are often not done as well as they should be, and requirements sometimes conflict with each other. You may have one requirement state that all numeric input fields are integer, and then another one that states that input field #5 on screen #2 is a numeric field with precision to 2 decimals. Using an RTM can help find these kinds of problems.
Permit Prioritization
Another question for you - you’ve just been given the system by the development team and have been asked by management to do a quick assessment of quality. How would you approach this? There are probably numerous ways to attack this, but one way is to look at the RTM and see which set of test cases would cover the largest set of requirements and do those first.
Provide Proof of Testing
Your sponsor asks you “How do you know you’re done with testing? Did you test everything?” With an RTM it’s easy to provide the proof.
Help With Future Projects
Finally, if you have a system to collect the right kind of data, you can provide valuable insight to management for future work. Which requirements resulted in the most defects? Which requirements had the most Change Requests performed? Data like this allows management to assess high risk areas of the application and plan against those risks in the future.
Courtesy: Internet
Monday, April 16, 2007
Security Glossary
Access management - The centralized or unified implementation and management of user authentication and entitlement to a site's secure resources.
Audit - An examination of records and activities to ensure compliance with established security controls, policies, and procedures.
Authentication - Identifies an individual or application through the use of username/password, profiles, digital certificates or other means.
Authorization - Develops rules or policies relating to what information users are allowed to view and manipulate.
Basic authentication - Base64-encoding the username and password and transmitting the result to the server.
Biometric security - A security science where body or physical attributes are used for secure identification and authentication. Some of the common Biometric identifiers are fingerprints, voice patterns, face geometry, hand geometry, retinal scans, signatures, and typing patterns.
Certificate - A digital "passport." A certificate is a secure electronic identity conforming to the X.509 standard. Certificates typically contain a user's name and public key. A CA authorizes certificates by signing the contents using its CA signing private key.
Certificate expiry - The date after which a user's certificate should no longer be trusted. The certificate expiry date is contained within the certificate.
Certificate revocation - The act of identifying certificates that are no longer trusted. Revoked certificates are identified on Certificate Revocation Lists (CRLs).
Certification authority (CA) - The internal or trusted third party responsible for issuing secure electronic identities to users in the form of digital certificates.
Cryptography - The science of transforming readable text into cipher text and back again.
Confidentiality - Keeps information private.
Cookies - Snippets of user information delivered by a Web site to the user's browser to persist information during and between sessions.
Decryption - The process of transforming cipher text into readable text.
Digest authentication - Transmits username and password information in a manner that cannot be easily decoded. The Digest mechanism includes an encoding of the realm for which the credentials are valid, so a separate credentials database must be provided for each realm using the Digest method.
Digital ID - An encrypted file containing your personal security data, including your private keys.
Digital certificate - An electronic document that verifies the owner of a public key, issued by a certificate authority.
Digital signature - Any type of text or message, encrypted with a private key, thereby identifying the source.
Discretionary Access Control (DAC) - Check the validity of credentials given at the discretion of the user (e.g., username and password).
Encryption - The process of turning readable text into cipher text.
Encryption algorithm - A mathematical formula used to encrypt or decrypt a string of text.
Entitlements - These are your rights and privileges, from an application perspective, based on who you are.
Hash - A fixed-length value created mathematically to uniquely identify data.
Integrity - Proves that information has not been manipulated.
Identity-management - The processes and procedures for administering user authentication and authorization in the enterprise and between domains over the Internet.
Kerberos - A system that provides a central authentication mechanism for a variety of client/server applications, using passwords and secret keys. Developed at MIT.
Key - A single numeric value that is part of an algorithm for encrypting text.
Lightweight directory access protocol (LDAP) - A client-server protocol for accessing a directory service. It runs over TCP and can be used to access a stand-alone LDAP directory service or to access a directory service back-ended by X.509.
Mandatory Access Control (MAC) - Check the validity of credentials that validate aspects that the user cannot control (e.g., IP address, host name).
Non-repudiation - Ensures that information cannot be disowned.
Organization - A group of users and/or roles.
Public Key Infrastructure (PKI) - The infrastructure used to create a secure chain of trust for Internet-based communications. A PKI solution consists of a security policy, a Certificate Authority (CA), a Registration Authority (RA), certificate distribution system, and PKI-enabled applications.
Policy-based authorization - Enables development of rules or policies that define what information users are allowed to view and manipulate. Mirrors real-world business practices and policies depending upon factors such as who is making the request, where and when the request is generated, and why the user needs the data.
Policy-based provisioning - Policy-based provisioning automates the deployment of access rights to applications based on the business' policies to employees, contractors and business partners. It is a single point of administration for the set-up, teardown and reconciliation of access rights. It can maintain policies, assure privacy and reinforce security in changing business environments throughout the enterprise and beyond.
Private key - The key that a user keeps secret in asymmetric encryption. It can encrypt or decrypt data for a single transaction but cannot do both.
Public key - The key that a user allows the world to know in asymmetric encryption. It can encrypt or decrypt data for a single transaction but cannot do both.
Remote Authentication Dial-In User Service (RADIUS) - A standard for authenticating the identity of remote dial-in users.
Realm - A unique name given to each protected area on a server, whether it be a single document or an entire server.
Rights - The privileges a user or role has on a system.
Roles - A working description of a user. Roles are assigned rights.
RSA Encryption (Rivest-Shamir-Adleman) - A popular encryption and authentication standard that uses asymmetric keys and was developed by Rivest, Shamir, and Adleman. Based on a public key system, every user has 2 digital keys, one to encrypt information, and the other to decrypt. Authentication of both sender and recipient is achieved with this method.
Secret key encryption - A method in which a single key known only to the participants encrypts and decrypts data.
Security Assertion Markup Language (SAML) - Protocol that facilitates the secure exchange of authentication and authorization information between partners regardless of their security systems or e-commerce platforms.
Single Sign-On (SSO) - Users sign onto a site only once and are given access to one or more applications in a single domain or across multiple domains.
Smart card - A credit-card-size authentication device containing a microprocessor and data, which is read by a smart-card reader and sent across the network.
SSL (Secure Sockets Layer) - A transport-layer technology, developed by Netscape, that allows secure transactions among compliant browsers and servers, usually Web servers.
Sub administrator - Administrator with a limited set of administration rights.
Super administrator - Administrator with rights to the entire system.
Symmetric encryption - A method involving a single secret key for both encryption and decryption.
Token - A credit-card-size or key-fob-sized authentication device that a user carries. It usually displays numbers that change over time and synchronizes with an authentication server on the network, and it may also use a challenge/response scheme with the server. Tokens are based on something you know (a password or PIN) and something you have (an authenticator - the token).
Two-factor authentication - Provides a higher level of trust than passwords alone because it requires something a user knows, such as a password, as well as something that person has, such as a smart card or a token.
URL (Uniform Resource Locator) - A standard addressing system used on the Internet. The URL describes everything that is necessary for a Web Browser to locate the requested site.
Users - Accounts that are created to represent individuals.
X.509 - A standard for digital certificates developed by the International Telecommunications Union (ITU).
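Two of the entries above, Basic authentication and Hash, are easy to illustrate with Python's standard library (hypothetical credentials and message):

```python
import base64
import hashlib

# Basic authentication: username:password is only Base64-encoded,
# not encrypted, so it should travel over HTTPS. (Hypothetical creds.)
username, password = "alice", "s3cret"
token = base64.b64encode(f"{username}:{password}".encode()).decode()
print(f"Authorization: Basic {token}")
assert base64.b64decode(token).decode() == "alice:s3cret"  # trivially reversible

# Hash: a fixed-length digest identifies the data; any change to the
# input yields a completely different digest (an integrity check).
message = b"transfer $100 to account 42"
digest = hashlib.sha256(message).hexdigest()
assert hashlib.sha256(b"transfer $900 to account 42").hexdigest() != digest
print(digest)  # always 64 hex characters, regardless of input length
```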
Courtesy: Internet
How does SSL/TLS work?
• Handshake and cipher suite negotiations. Client and server contact each other and choose a common cipher suite. The suite includes a method for exchanging the shared secret key; a method for encrypting data; and a Message Authentication Code (MAC) specifying how application data will be hashed and signed to prove integrity.
• User identity authentication. The server always authenticates its identity to the client. However, whether the client needs to authenticate with the server depends on the application. The exact authentication method (primarily, which digital certificate format will be used) depends on the negotiated cipher suite.
• Key exchange. After choosing a cipher suite, the client and server exchange a key, or the precursors with which to create a key, that they will use for data encrypting (again, depending on the negotiated cipher suite's requirements).
• Application data exchange. The client application and the server application communicate with each other. All data is encrypted using the negotiated bulk encryption method.
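Python's standard ssl module drives all of these steps for you; a minimal sketch follows (the actual connection is commented out because it assumes network access to a real server):

```python
import ssl

# Building a default context selects the candidate cipher suites and
# turns on the server certificate checks used during the handshake.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # server identity is checked
print(len(context.get_ciphers()) > 0)            # candidate cipher suites

# Wrapping a socket would run the full handshake (network access assumed):
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.version(), tls.cipher())  # negotiated protocol and suite
```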
Courtesy: Internet
What Am I Worth?
In general, you are worth whatever someone is willing to pay you.
What you ultimately end up receiving for a salary depends on many factors:
- The specific details of the job
- What you have done, and how long you have done it
- What the employer thinks you can do
- Where you work
- The industry in which you work
- The size of the hiring company
- The size of the hiring department
- Your education
- How much the employer values the specific position
- How well you negotiate
- How many other people are vying for the same position
- The job market in general
- etc, etc
Remember that, within reason, everything about a job is negotiable, including salary.
If the hiring company wants you badly enough, they can sometimes increase their offer.
Remember also, that there is more to a job than simply the salary. Consider the whole package:
- Bonus
- 401(k) contributions
- Stock options
- Other benefits
- Opportunity for advancement
- Company culture
- Commuting distance
- Telecommuting options
- etc, etc.
Here are a few web sites that can help you calculate what particular jobs might be worth in your area:
http://www.payscale.com/mypayscale.aspx
http://www.telecomcareers.net/Resources/SalaryWizard/SalarySurvey.htm
http://techexpousa.salary.com/
http://www.pencom.com/isg.html
http://www.ticker.computerjobs.com/content/ticker.aspx
http://salary.monster.com/salarywizard/layoutscripts/swzl_newsearch.asp
http://www.salary.com/
http://hotjobs.yahoo.com/salary
http://www.vault.com/salaries.jsp
Courtesy: Internet
Perhaps They Should Have Tested More - Yahoo! Japan
Yahoo Japan mistakenly deletes 4.5 mil. e-mails
[The Yomiuri Shimbun]
Yahoo Japan Corp. accidentally deleted about 4.5 million e-mails sent to 275,600 users due to a mistake in its e-mail service system, the company said.
Yahoo allows users to exchange e-mail messages for free without using e-mail software. According to the company, most of the deleted e-mails were received between Dec. 26 and Feb. 25.
The system usually removes e-mails deemed "junk mail" from its server about 40 days after the e-mails are received. However, it wrongly deleted e-mails that should have been kept on the server.
The company discovered the mistake after receiving complaints from users that they could not open certain e-mails they had received.
(http://www.yomiuri.co.jp/dy/national/20070408TDY02004.htm)
Courtesy: Internet
QA and Testing Interview Questions (and some answers)
http://www.geocities.com/xtremetesting/InterviewQuestions.html
http://sqa.fyicenter.com/sqa/SQAInterviewQ.html
http://www.geekinterview.com/Interview-Questions/Testing/QA-Testing
http://www.devbistro.com/tech-interview-questions/Testing.jsp
http://www.careercup.com/show/?cat=cd70b254-e295-4ace-a656-1e262051bd6c
http://www.grove.co.uk/pdf_Files/Test_Questions.pdf
http://www.grove.co.uk/pdf_Files/Test_Answers.pdf
http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=6787
http://www.softwaretester.com/ContentDisplay.cfm?ContentID=28
Questions (and some answers) for any interview:
http://interview.monster.com/archives/interviewquestions/
http://hotjobs.yahoo.com/interview
http://www.dahlstromco.com/Samples/Questions.pdf
http://www.acetheinterview.com/interview/
http://www.careercc.com/interv3.shtml
http://www.quintcareers.com/job_interview_preparation.html
Interview questions (and some answers) for Microsoft/Google/etc:
http://www.acetheinterview.com/questions/cats/index.php/microsoft_google
http://www.sellsbrothers.com/fun/msiview/
http://www.unboxedsolutions.com/sean/articles/830.aspx
http://geekswithblogs.net/jolson/archive/2005/01/21/20636.aspx
http://www.drizzle.com/~jpaint/google.html
http://www.facebook.com/jobs_puzzles/
Puzzle-style Interview Questions (and some answers):
http://www.techinterview.org/
http://halcyon.usc.edu/~kiran/msqs.html#puzzles
http://tickletux.wordpress.com/2007/01/10/why-logic-puzzles-make-good-interview-questions/
http://www.softwaretester.com/ContentDisplay.cfm?S_ID=SWT_574284_40164449&ContentID=29
Other articles related to interviewing:
http://www.stickyminds.com/s.asp?F=S12122_COL_2
http://www.jrothman.com/Papers/detecting-great-testers.html
http://www.jrothman.com/Papers/hiringforteamfit.html
http://www.jrothman.com/Papers/interviewing-college-grads.html
http://www.jrothman.com/Papers/cultural-fits-starts.html
http://testobsessed.com/wordpress/wp-content/uploads/2007/01/taoiastbt.pdf
http://www.testobsessed.com/2004/11/01/the-art-of-interviewing-and-selecting-the-best-testers/
http://www.kaner.com/pdfs/QWjobs.pdf
Courtesy: Internet
Friday, April 13, 2007
Black Box Testing for Web-based Applications
1. Browser compatibility
* Is the browser compatible with the application design? There are many different types of browsers available.
* GUI design components
* Are the scroll bars, buttons, and frames compatible with the browser and functional?
* Check the functionality of the scroll bars on the Web page's interface to make sure the user can scroll through items and make the correct selection from a list of items.
* The buttons on the interface need to be functional, and each hyperlink should go to the correct page.
* If frames are used on the interface, they should be checked for the correct size and whether all of the components fit within the viewing screen of the monitor.
2. User Interface
One of the reasons the web browser is being used as the front end to applications is the ease of use. Users who have been on the web before will probably know how to navigate a well-built web site. While you are concentrating on this portion of testing it is important to verify that the application is easy to use. Many will believe that this is the least important area to test, but if you want to be successful, the site better be easy to use.
3. Instructions
You want to make sure there are instructions. Even if you think the web site is simple, there will always be someone who needs some clarification. Additionally, you need to test the documentation to verify that the instructions are correct. If you follow each instruction does the expected result occur?
4. Site map or navigational bar
Does the site have a map? Sometimes power users know exactly where they want to go and don't want to wade through lengthy introductions. Or new users get lost easily. Either way a site map and/or an ever-present navigational bar can help guide the user. You need to verify that the site map is correct. Does each link on the map actually exist? Are there links on the site that are not represented on the map? Is the navigational bar present on every screen? Is it consistent? Does each link work on each page? Is it organized in an intuitive manner?
5. Content
To a developer, functionality comes before wording. Anyone can slap together some fancy mission statement later, but while they are developing, they just need some filler to verify alignment and layout. Unfortunately, text produced like this may sneak through the cracks. It is important to check with the public relations department on the exact wording of the content.
You also want to make sure the site looks professional. Overuse of bold text, big fonts and blinking (ugh) can turn away a customer quickly. It might be a good idea to consult a graphic designer to look over the site during User Acceptance Testing. You wouldn't slap together a brochure with bold text everywhere, so you want to handle the web site with the same level of professionalism.
Finally, you want to make sure that any time a web reference is given that it is hyperlinked. Plenty of sites ask you to email them at a specific address or to download a browser from an address. But if the user can't click on it, they are going to be annoyed.
6. Colors/backgrounds
Ever since the web became popular, everyone thinks they are graphic designers. Unfortunately, some developers are more interested in their new backgrounds, than ease of use. Sites will have yellow text on a purple picture of a fractal pattern. (If you've never seen this, try most sites at GeoCities or AOL.) This may seem "pretty neat", but it's not easy to use.
Usually, the best idea is to use little or no background. If you have a background, it might be a single color on the left side of the page, containing the navigational bar. But, patterns and pictures distract the user.
7. Images
Whether it's a screen grab or a little icon that points the way, a picture is worth a thousand words. Sometimes, the best way to tell the user something is to simply show them. However, bandwidth is precious to the client and the server, so you need to conserve memory usage. Do all the images add value to each page, or do they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used for 30k less?
In general, you don't want large pictures on the front page, since most users who abandon a page due to a large load will do it on the front page. If you can get them to see the front page quickly, it will increase the chance they will stay.
8. Tables
You also want to verify that tables are setup properly. Does the user constantly have to scroll right to see the price of the item? Would it be more effective to put the price closer to the left and put miniscule details to the right? Are the columns wide enough or does every row have to wrap around? Are certain columns considerably longer than others?
9. Wrap-around
Finally, you will want to verify that wrap-around occurs properly. If the text refers to "a picture on the right", make sure the picture is on the right. Make sure that widowed and orphaned sentences and paragraphs don't layout in an awkward manner because of pictures.
10. Functionality
The functionality of the web site is why your company hired a developer and not just an artist. This is the part that interfaces with the server and actually "does stuff".
11. Links
A link is the vehicle that gets the user from page to page. You will need to verify two things for each link: that the link brings you to the page it said it would and that the pages you are linking to actually exist. It may sound a little silly but I have seen plenty of web sites with internal broken links.
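An offline sketch of the idea (hypothetical page and site inventory): extract every href and flag links that point at pages that don't exist. A real link checker like Xenu LinkSleuth does this by spidering the live site.

```python
from html.parser import HTMLParser

# Collect every href attribute from anchor tags on a page.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<a href="/home.html">Home</a> <a href="/oops.html">Broken</a>'
existing_pages = {"/home.html", "/about.html"}  # hypothetical site inventory

collector = LinkCollector()
collector.feed(page)
broken = [link for link in collector.links if link not in existing_pages]
print(broken)  # ['/oops.html']
```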
12. Forms
When a user submits information through a form it needs to work properly. The submit button needs to work. If the form is for an online registration, the user should be given login information (that works) after successful completion. If the form gathers shipping information, it should be handled properly and the customer should receive their package. In order to test this, you need to verify that the server stores the information properly and that systems down the line can interpret and use that information.
13. Data verification
If the system verifies user input according to business rules, then that needs to work properly. For example, a State field may be checked against a list of valid values. If this is the case, you need to verify that the list is complete and that the program actually calls the list properly (add a bogus value to the list and make sure the system accepts it).
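A minimal sketch of such a business-rule check, with a hypothetical (deliberately short) list of valid State codes:

```python
VALID_STATES = {"CA", "NY", "TX", "WA"}  # hypothetical short list

def validate_state(value):
    """Business rule: State must be one of the valid codes."""
    return value.strip().upper() in VALID_STATES

assert validate_state("ca")          # case and whitespace normalized
assert not validate_state("ZZ")      # bogus value must be rejected
```

Testing the validation means probing both sides of the rule: values that should pass and values that must be rejected.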
14. Cookies
Most users only like the kind with sugar, but developers love web cookies. If the system uses them, you need to check them. If they store login information, make sure the cookies work. If the cookie is used for statistics, verify that totals are being counted properly. And you'll probably want to make sure those cookies are encrypted too, otherwise people can edit their cookies and skew your statistics.
15. Application-specific functional requirements
Most importantly, you want to verify the application-specific functional requirements. Try to perform all functions a user would: place an order, change an order, cancel an order, check the status of the order, change shipping information before an order is shipped, pay online, ad nauseam. This is why your users will show up on your doorstep, so you need to make sure you can do what you advertise.
16. Interface Testing
Many times, a web site is not an island. The site will call external servers for additional data, verification of data or fulfillment of orders.
16a. Server interface
The first interface you should test is the interface between the browser and the server. You should attempt transactions, then view the server logs and verify that what you're seeing in the browser is actually happening on the server. It's also a good idea to run queries on the database to make sure the transaction data is being stored properly.
17. External interfaces
Some web systems have external interfaces. For example, a merchant might verify credit card transactions real-time in order to reduce fraud. You will need to send several test transactions using the web interface. Try credit cards that are valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using a Discover card. (A script can check the first digit of the credit card number: 3 for American Express, 4 for Visa, 5 for MasterCard, or 6 for Discover, before the transaction is sent.) Basically, you want to make sure that the software can handle every possible message returned by the external server.
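The first-digit screen described above can be sketched like this (hypothetical merchant policy; a real implementation would also run a Luhn checksum and handle the external server's response codes):

```python
# First-digit check from the text:
# 3 = American Express, 4 = Visa, 5 = MasterCard, 6 = Discover.
CARD_TYPES = {"3": "American Express", "4": "Visa",
              "5": "MasterCard", "6": "Discover"}

ACCEPTED = {"Visa", "MasterCard"}  # hypothetical merchant policy

def card_type(number):
    return CARD_TYPES.get(number[:1], "Unknown")

def accepted(number):
    """Screen the card type before sending the transaction out."""
    return card_type(number) in ACCEPTED

assert accepted("4111111111111111")      # Visa: send to the processor
assert not accepted("6011000000000004")  # Discover: reject locally
```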
18. Error handling
One of the areas left untested most often is interface error handling. Usually we try to make sure our system can handle all of our errors, but we never plan for the other systems' errors or for the unexpected. Try leaving the site mid-transaction - what happens? Does the order complete anyway? Try losing the internet connection from the user to the server. Try losing the connection from the server to the credit card verification server. Is there proper error handling for all these situations? Are charges still made to credit cards? If the interruption is not user-initiated, does the order get stored so customer service reps can call back if the user doesn't come back to the site?
19. Compatibility
You will also want to verify that the application can work on the machines your customers will be using. If the product is going to the web for the world to use, you will need to try different combinations of operating system, browser, video setting and modem speed.
20. Operating systems
Does the site work on both Macs and IBM-compatibles? Some fonts are not available on both systems, so make sure that secondary fonts are selected. Make sure that the site doesn't use plug-ins only available for one OS, if your users will use both.
21. Browsers
Does your site work with Netscape? Internet Explorer? Lynx? Some HTML commands or scripts only work for certain browsers. Make sure there are alternate tags for images, in case someone is using a text browser. If you're using SSL security, you only need to check browsers 3.0 and higher, but verify that there is a message for those using older browsers.
22. Video settings
Does the layout still look good at 640x480 or 800x600? Are fonts too small to read? Are they too big? Does all the text and graphic alignment still work?
23. Modem/connection speeds
Does it take 10 minutes to load a page with a 28.8 modem, but you tested hooked up to a T1? Users will expect long download times when they are grabbing documents or demos, but not on the front page. Make sure that the images aren't too large. Make sure that marketing didn't put 50k of font size -6 keywords for search engines.
24. Printers
Users like to print. The concept behind the web should save paper and reduce printing, but most people would rather read on paper than on the screen. So, you need to verify that the pages print properly. Sometimes images and text align on the screen differently than on the printed page. You need to at least verify that order confirmation screens can be printed properly.
25. Combinations
Now you get to try combinations. Maybe 800x600 looks good on the Mac but not on the IBM. Maybe IBM with Netscape works, but not with Lynx.
If the web site will be used internally it might make testing a little easier. If the company has an official web browser choice, then you just need to verify that it works for that browser. If everyone has a T1 connection, then you might not need to check load times. (But keep in mind, some people may dial in from home.) With internal applications, the development team can make disclaimers about system requirements and only support those system setups. But, ideally, the site should work on all machines so you don't limit growth and changes in the future.
26. Load/Stress
You will need to verify that the system can handle a large number of users at the same time, a large amount of data from each user, and a long period of continuous use. Accessibility is extremely important to users. If they get a "busy signal", they hang up and call the competition. Not only must the system be checked so your customers can gain access, but many times crackers will attempt to gain access to a system by overloading it. For the sake of security, your system needs to know what to do when it's overloaded and not simply blow up.
Many users at the same time
If the site just put up the results of a national lottery, it better be able to handle millions of users right after the winning numbers are posted. A load test tool would be able to simulate a large number of users accessing the site at the same time.
Large amount of data from each user
Most customers may only order 1-5 books from your new online bookstore, but what if a university bookstore decides to order 5000 different books? Or what if grandma wants to send a gift to each of her 50 grandchildren for Christmas (separate mailing addresses for each, of course.) Can your system handle large amounts of data from a single user?
Long period of continuous use
If the site is intended to take orders for flower deliveries, then it better be able to handle the week before Mother's Day. If the site offers web-based email, it better be able to run for months or even years, without downtimes.
You will probably want to use an automated test tool to implement these types of tests, since they are difficult to do manually. Imagine coordinating 100 people to hit the site at the same time. Now try 100,000 people. Generally, the tool will pay for itself the second or third time you use it. Once the tool is set up, running another test is just a click away.
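Even without a commercial tool, the shape of such a test is straightforward. A minimal sketch in Python (the `hit` and `load_test` helpers and their statistics are illustrative, not any particular product's API):

```python
import threading
import time
import urllib.request

def hit(url, timings, lock):
    """One simulated user: fetch the URL and record the response time."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
    except Exception:
        elapsed = None  # count this request as a failed transaction
    with lock:
        timings.append(elapsed)

def load_test(url, users=50):
    """Fire `users` concurrent requests and report simple statistics."""
    timings, lock = [], threading.Lock()
    threads = [threading.Thread(target=hit, args=(url, timings, lock))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ok = [t for t in timings if t is not None]
    return {"transactions": len(ok),
            "failures": len(timings) - len(ok),
            "avg_response": sum(ok) / len(ok) if ok else None}
```

A real tool adds ramp-up schedules, think time, and result graphing on top of this basic fan-out-and-measure loop.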
27. Security
Even if you aren't accepting credit card payments, security is very important. The web site will be the only exposure some customers have to your company. And, if that exposure is a hacked page, they won't feel safe doing business with you.
28. Directory setup
The most elementary step of web security is proper setup of directories. Each directory should have an index.html or main.html page so a directory listing doesn't appear.
One company I was consulting for didn't observe this principle. I right clicked on an image and found the path "...com/objects/images". I went to that directory manually and found a complete listing of the images on that site. That wasn't too important. Next, I went to the directory below that: "...com/objects" and I hit the jackpot. There were plenty of goodies, but what caught my eye were the historical pages. They had changed their prices every month and kept the old pages. I browsed around and could figure out their profit margin and how low they were willing to go on a contract. If a potential customer did a little browsing first, they would have had a definite advantage at the bargaining table.
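A quick automated check for this class of problem is to fetch candidate directory URLs and scan the response body for the telltale markup of an auto-generated index page. A minimal sketch (the marker patterns below are common Apache/IIS strings, not an exhaustive list):

```python
import re

# Common markers emitted by auto-generated directory index pages.
LISTING_SIGNS = (
    re.compile(r"<title>\s*Index of /", re.I),       # Apache-style
    re.compile(r"\[To Parent Directory\]", re.I),    # IIS-style
    re.compile(r"Parent Directory</a>", re.I),
)

def looks_like_directory_listing(html: str) -> bool:
    """Heuristic check of a fetched page body for an exposed listing."""
    return any(p.search(html) for p in LISTING_SIGNS)
```

Run it over every directory you can derive from image and script paths; any hit means a missing index.html.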
SSL
Many sites use SSL for secure transactions. You know you have entered an SSL site because there will be a browser warning and the HTTP in the location field on the browser will change to HTTPS. If your development group uses SSL, you need to make sure there is an alternate page for browsers with versions less than 3.0, since SSL is not compatible with those browsers. You also need to make sure that there are warnings when you enter and leave the secured site. Is there a timeout limit? What happens if the user tries a transaction after the timeout?
29. Logins
In order to validate users, several sites require customers to log in. This makes it easier for the customer, since they don't have to re-enter personal information every time. You need to verify that the system does not allow invalid usernames/passwords and that it does allow valid logins. Is there a maximum number of failed logins allowed before the server locks out the current user? Is the lockout based on IP? What if the maximum failed login attempts is three, and you try three, but then enter a valid login? What are the rules for password selection?
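The lockout rules are easy to get wrong at exactly these boundaries, so it helps to model them explicitly before testing the real server. A toy model (the `LoginGuard` class and its policy are hypothetical, for illustration only):

```python
class LoginGuard:
    """Toy lockout policy: N failed attempts lock the account,
    and a success before the limit resets the failure counter."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = {}     # user -> consecutive failed attempts
        self.locked = set()    # users locked out of the system

    def attempt(self, user, password, real_password):
        if user in self.locked:
            return "locked"    # even a valid login is refused now
        if password == real_password:
            self.failures[user] = 0
            return "ok"
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_failures:
            self.locked.add(user)
            return "locked"
        return "denied"

guard = LoginGuard(max_failures=3)
# Two failures followed by a valid login should NOT lock the account.
guard.attempt("ann", "x", "secret")
guard.attempt("ann", "y", "secret")
print(guard.attempt("ann", "secret", "secret"))  # ok
```

The interesting test cases fall straight out of the model: two failures then a success, exactly three failures, and a valid login after lockout.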
30. Log files
Behind the scenes, you will need to verify that server logs are working properly. Does the log track every transaction? Does it track unsuccessful login attempts? Does it only track stolen credit card usage? What does it store for each transaction? IP address? User name?
31. Scripting languages
Scripting languages are a constant source of security holes. The details are different for each language. Some exploits allow access to the root directory. Others allow access to the mail server. Find out what scripting languages are being used and research the loopholes. It might also be a good idea to subscribe to a security newsgroup that discusses the language you will be testing.
32. Web Server Testing Features
* Feature: Definition
* Transactions: The number of times the test script requested the current URL
* Elapsed time: The number of seconds it took to run the request
* Bytes transferred: The total number of bytes sent or received, less HTTP headers
* Response time: The average time it took for the server to respond to each individual request.
* Transaction rate: The average number of transactions the server was able to handle per second.
* Transferance: The average number of bytes transferred per second.
* Concurrency: The average number of simultaneous connections the server was able to handle during the test session.
* Status code nnn: This indicates how many times a particular HTTP status code was seen.
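All of these report fields can be derived from three pieces of raw data per run: the per-request byte counts, the per-request response times, and the wall-clock elapsed time. A minimal sketch (the function name and dictionary keys are my own, mirroring the list above):

```python
def server_metrics(byte_counts, response_times, elapsed):
    """Derive the report fields from raw per-request data.
    byte_counts/response_times are per-transaction; elapsed is
    the wall-clock duration of the whole test, in seconds."""
    n = len(response_times)
    return {
        "transactions": n,
        "bytes_transferred": sum(byte_counts),
        "response_time": sum(response_times) / n,
        "transaction_rate": n / elapsed,
        "transferance": sum(byte_counts) / elapsed,    # bytes per second
        "concurrency": sum(response_times) / elapsed,  # avg open connections
    }
```

Concurrency computed this way (total time spent in requests divided by elapsed time) is how several load tools report the average number of simultaneous connections.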
Load/Volume Test
Focus of Load/Volume Testing:
* Pushing through large amounts of data with extreme processing demands.
* Requesting many processes simultaneously.
* Repeating tasks over a long period of time.
Load/volume tests, which involve extreme conditions, are normally run after the execution of feature-level tests, which prove that a program functions correctly under normal conditions.
Difference between Load and Stress testing:
The idea of stress testing is to find the breaking point, in order to find bugs that could make that break harmful. Load testing is merely testing at the highest transaction arrival rate in performance testing, to see resource contention, database locks, etc.
Web Capacity Testing Load and Stress:
The performance of the load or stress test Web site should be monitored with the following in mind:
* The load test tool should be able to support all browsers.
* The load test tool should be able to support all Web servers.
* The tool should be able to simulate up to 500 users or playback machines.
* The tool should be able to run on Windows NT, Linux, Solaris, and most Unix variants.
* There should be a way to simulate various users at different connection speeds.
* After the tests are run, you should be able to report the transactions, URLs, and number of users who visited the site.
* The test cases should be assembled in a like fashion to set up test suites.
* There should be a way to test different server and port addresses.
* There should be a way to account for the users' cookies.
Performance Test:
The primary goal of performance-testing is to develop effective enhancement strategies for maintaining acceptable system performance. Performance testing is a capacity analysis and planning process in which measurement data are used to predict when load levels will exhaust system resources.
The Mock Test:
It is a good idea to set up a mock test before you begin your actual test. This is a way to measure the server's stressed performance. As you progress with your stress testing, you can set up a measurement of metrics to determine the efficiency of the test.
After the initial test, you can determine the breaking point for the server. It may be a processor problem or even a memory problem. You need to be able to check your log to determine the average amount of time it takes your processor to perform the test. Rendering graphics or even ASP pages can cause processor problems and a limitation every time you run your stress test.
Memory tends to be a problem with the stress test. This may be due to a memory leak or a lack of memory. You need to log and monitor the amount of disk capacity during the stress test. As mentioned earlier, bandwidth can account for the slowdown of Web site processing speed. If the test hangs and there is a long waiting period, your processor is unable to handle the amount of stress on the system.
Simulate Resources:
It is important to be able to run the system in a high-stress format so that you can actually simulate the resources and understand how to handle a specific load. For example, a bank transaction processing system may be designed to process up to 150 transactions per second, whereas an operating system may be designed to handle up to 200 separate terminals. The different tests need to be designed to ensure that the system can process the expected load. This type of testing usually involves planning a series of tests where the load is gradually increased to reflect the expected usage pattern. The stress tests can steadily increase the load on the system beyond the maximum design load until the system fails.
This type of testing has a dual function of testing the system for failure and looking for the combination of events that occurs when a load is placed on the server. Stress testing can then determine whether overloading the system results in loss of data or of user service to the customers. The use of stress testing is particularly relevant to an e-commerce system with a Web database.
Thursday, April 12, 2007
Web Application Testing Cheatsheet
Click this URL: Web Application Testing Cheatsheet
Google Vs. Microsoft
We all know that Google is no longer just a search company. In fact, Microsoft perceives Google as a major threat to its supremacy. Bill Gates may well be losing sleep over Google, as in spite of his efforts, Microsoft has failed to slow Google down.
An article in Fortune magazine talked about how Google, with all its innovations and success, has Microsoft worried. Here is a summary from the article:
1. Bill Gates sends an email to a handful of execs saying, ‘We had to watch these guys. It looks like they are building something to compete with us.’
2. Microsoft is facing a corporate identity crisis. Every month Google hires away one of Microsoft’s top developers. Recently, Marc Lucovsky, one of the chief architects of Windows, left Microsoft for Google.
3. To rub it in Microsoft’s face, Google even set up an office five miles down the road from Microsoft’s Redmond, Wash. headquarters.
4. Microsoft spent $150 million on an ad campaign and another $150 million to develop MSN Search, touted in the inner Microsoft circle as a Google killer. But it failed to create any buzz, and Microsoft only holds around 13% of search users. I think that also happens because MSN is set as a default page.
5. Hardly anyone talks about Hotmail anymore, it is Gmail with 2 gigabytes that is really cool.
6. Hardly anyone knows or uses Photo Story from Microsoft. Compare this with Google’s Picasa, the very popular photo management software.
7. Recently launched MSN Spaces is late to the world of blogging. Blogger still remains number 1.
8. Microsoft’s desktop search tool was two months behind Google’s. “Here Microsoft was spending $600 million a year in R&D for MSN, $1 billion a year on Windows, and Google gets desktop search out before us? It was a real wake-up call,” says a Microsoft exec. People said, ‘If they can do desktop search, what prevents them from doing a version of Excel, PowerPoint, or Word, or buying StarOffice?’
9. Google’s Maps and satellite imagery caused a great buzz among internet users. Microsoft has MapBlast, but Google Maps is way cooler.
10. In spring 2003, Gates told one of his executives, “These Google guys, they want to be billionaires and rock stars and go to conferences and all that. Let us see if they still want to run the business in two or three years.” Well, Mr Gates, Google has survived two or three years and is still rocking.
11. In fall 2003, Microsoft briefly considered buying Google, only to realize that even if Brin, Page, and their board could have been persuaded to sell (which seemed unlikely), Microsoft would have been left to explain to the world why it was running a search engine built entirely on Linux instead of Windows.
Unfortunately for Microsoft, it is battling some old warriors from its previous fights. Eric Schmidt, the CEO of Google, battled Gates as CTO of Sun Microsystems and as CEO of Novell. Omid Kordestani, Google’s head of ad sales, was a top executive at Netscape. Three of Google’s directors, Ram Shriram, John Doerr, and Michael Moritz, have been on the front lines of Silicon Valley’s war with Microsoft for years.
One reason Google has been rolling out so many new or improved products is that Schmidt understands that innovation is the only sure edge Google has. The moment Google allows itself to slow, Microsoft could overwhelm it.
Here is the article.
So, is Microsoft losing its edge? It is too early to say. One thing that Google does not have is huge cash reserves. Microsoft, with nearly $40 billion in revenues, is nearly ten times the size of Google. It has $34 billion in cash and generates $1 billion in new cash a month. Microsoft still remains, and will remain, the number 1 software company in the world.
Gmail Drive Shell Extension
GMail Drive creates a virtual filesystem on top of your Google Gmail account and enables you to save and retrieve files stored on your Gmail account directly from inside Windows Explorer. GMail Drive literally adds a new drive to your computer under the My Computer folder, where you can create new folders and copy and drag'n'drop files.
Ever since Google started to offer users a Gmail e-mail account, which includes storage space of 2000 megabytes, you have had plenty of storage space but not a lot to fill it up with. With GMail Drive you can easily copy files to your Gmail account and retrieve them again.
When you create a new file using GMail Drive, it generates an e-mail and posts it to your account. The e-mail appears in your normal Inbox folder, and the file is attached as an e-mail attachment. GMail Drive periodically checks your mail account (using the Gmail search function) to see if new files have arrived and to rebuild the directory structures. But basically GMail Drive acts as any other hard-drive installed on your computer.
You can copy files to and from the GMail Drive folder simply by using drag'n'drop like you're used to with the normal Explorer folders.
Because the Gmail files will clutter up your Inbox folder, you may wish to create a filter in Gmail to automatically move the files (prefixed with the GMAILFS letters in the subject) to your archived mail folder.
Selenium: Test tool for web applications
Selenium is a test tool for web applications. Selenium tests run directly in a browser, just as real users do. And they run in Internet Explorer, Mozilla and Firefox on Windows, Linux, and Macintosh. No other test tool covers such a wide array of platforms.
- Browser compatibility testing. Test your application to see if it works correctly on different browsers and operating systems. The same script can run on any Selenium platform.
- System functional testing. Create regression tests to verify application functionality and user acceptance.
Try it out! Get started with Selenium IDE for your first taste of Selenium's power. You can run Selenium IDE tests in any supported browser using Selenium Core.
Any Language! Want to write tests in your favorite programming language? Try Selenium Remote Control; we currently support writing tests in Java, .NET, Perl, Python and Ruby.
Tuesday, April 10, 2007
FireBug : A firefox extension for debugging
Thursday, April 5, 2007
Paros [For Web Application Security Assessment]
URL: http://www.parosproxy.org/index.shtml
What is Re-test? What is Regression Testing?
Wednesday, April 4, 2007
Interview Tips
Usually interviewers form an impression of whether the candidate is good or not within 5 minutes of seeing and talking to him/her. The rest of the time, they try to validate whether their impression is correct or wrong (at least this is what I do). So it is important that you make a good impression in the first few minutes.
Dress code:
Well, the dress you wear for the interview is not everything, but if you dress well, it will help you make a good initial impression. The color of the dress gives a certain impression, it seems. (I don't remember paying attention to the dress a candidate wore, but it seems it would have affected my impression at a subconscious level. And what do you lose by dressing up well?)
- Red: You wear it to attract other people's attention. If you are presenting something or you want others to pay attention to you, dress in red. This is not the best color to wear for interview.
- Blue: If you want other people to like you, dress in blue.
- White: It gives the impression that you are hardworking. Wear this. I mean not entirely in white, but the predominant color should be white. A black trouser and white shirt should do.
Resume:
- Whatever you mention in the resume, you must be thorough about it.
- Be honest about what you put in it.
- Try to keep it small; it should not be longer than 2 to 3 pages.
- One of the main reasons for rejection is that something is mentioned in the resume, and when asked questions about it, the candidate is not able to answer them, and in a few cases says, "I did that long back and I don't remember anything." In many cases it is true that you used a tool or something a year back and don't remember much about it. In that case, before attending the interview, brush up on at least some basics regarding it, do some homework, and prepare yourself to talk about it, or don't mention it in the resume if it is not that important.
Testing fundamentals: Testing terminology, definitions, etc.
Testing Processes: The testing processes used in your current/previous company. Knowledge of life cycle testing, etc.
Testing tools: Given a problem, how to solve it using testing tools.
Testing deliverables: Ability to write a test plan or test case for a given product or requirements.
Analytical skills: Solving puzzles.
Programming skills (optional): Ability to write pseudo code for a given problem.
Few more tips:
Good Communication skills: Obvious, isn't it?
Good Listening or understanding skills: Be able to understand the questions posed by the interviewer. Try your best to fully understand what the interviewer is saying the first time. But if something is not clear, it is better to ask, or to confirm that what you heard is correct, before answering.
Confidence level: Watch your body language, keep eye contact, and don't be overconfident; just be confident.
Two most important tips: You should have the right skills for the job. Equally important is that you SHOW or DISPLAY in front of the interviewer that you have the right skills.
Glossary: QA and Software Testing
Alpha Testing: Testing of a software product or system conducted at the developer’s site by the end user.
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well.
Agile Testing: Testing practice that emphasizes a test-first design paradigm.
Automated Testing: That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.
Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Black Box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.
Bottom-up Testing: An integration testing technique that tests the low-level components first using test drivers for those components that have not yet been developed to call the low-level components for test.
Boundary Testing: Testing that focuses on the boundary or limit conditions of the software being tested. Stress Testing can also be considered as form of boundary testing.
Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, mini-mum, just inside/outside boundaries, typical values, and error values.
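For an integer input range the technique above is mechanical enough to automate. A minimal sketch (the helper name is mine; it emits the extremes, the values just inside/outside them, and a typical mid-range value):

```python
def boundary_values(lo, hi):
    """Candidate test inputs for an integer range [lo, hi]:
    just outside, on, and just inside each boundary, plus a
    typical value from the middle of the range."""
    return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

# A field that accepts quantities from 1 to 100:
print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```

The two out-of-range values (0 and 101 here) double as the error-value cases the definition calls for.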
Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail. It is used when there is not enough time to execute all the test cases.
Bug: A design or implementation flaw that will result in symptoms exhibited by some module when the module is subjected to an appropriate test.
Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the functional specification have been implemented. A code-complete module may still be far from release, as it may have many bugs.
Code Coverage: An analysis method that determines which parts of the software/code have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Concurrency Testing: Multi-user testing geared toward determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores. This is one area where the cause for many bugs which were considered random can be identified.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Debugging: The process of finding and removing the causes of software failures. Tools used for debugging are called debuggers.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test. This is practically infeasible.
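A quick calculation shows why exhaustive combination testing blows up even for a small configuration matrix, and why pairwise tools like AllPairs (listed in the toolbox above) exist. A sketch using the compatibility factors from the checklist earlier in this post (the specific factor values are illustrative):

```python
from itertools import product
from math import prod

# Four configuration factors, as in the compatibility section above.
factors = {
    "os":      ["Windows", "Mac"],
    "browser": ["IE", "Netscape", "Lynx"],
    "video":   ["640x480", "800x600"],
    "modem":   ["28.8k", "T1"],
}

# Exhaustive testing needs every combination: 2 * 3 * 2 * 2 = 24 runs.
exhaustive = list(product(*factors.values()))
print(len(exhaustive))  # 24
assert len(exhaustive) == prod(len(v) for v in factors.values())
```

A pairwise set covering every two-factor combination needs far fewer runs, and the gap widens rapidly as factors are added, since the exhaustive count is the product of all the option counts.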
Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.
Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.
Fault-based Testing: Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically, frequently occurring faults.
Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.
Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.
Gorilla Testing: Testing one particular module, functionality heavily.
Gray Box Testing: A combination of Black Box and WhiteBox testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Heuristics Testing: Another term for failure-directed testing.
Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.
Infeasible Path: A program statement sequence that can never be executed, i.e., unreachable code.
Inspection: A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. It is a generic term for all inspections similar to code inspections.
Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.
Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.
Installation Testing: Confirms that the application under test installs and runs correctly on the supported platforms and configurations, including first-time installs, upgrades, and uninstalls.
IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.
Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.
Localization Testing: Testing of software that has been adapted for a specific locality.
Loop Testing: A white box testing technique that exercises program loops.
Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash. It is a form of ad hoc testing.
Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".
Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.
Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.
Peer Reviews: A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.
Path Testing: Testing wherein all paths in the program source code are tested at least once.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass".
Proof Checker: A program that checks formal proofs of program properties for logical correctness.
Qualification Testing: Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.
Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.
Ramp Testing: Continuously raising an input signal until the system breaks down. A form of stress testing.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Reliability: The probability of failure-free operation for a specified period.
Run Chart: A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.
Statement Coverage Testing: A test method satisfying coverage criteria that requires each statement be executed at least once.
Static Testing: Verification performed without executing the system’s code. Also called static analysis.
Statistical Process Control: The use of statistical techniques and tools to measure an ongoing process for change or stability.
Structural Coverage: This requires that each pair of module invocations be executed at least once.
Structural Testing: A testing method where the test data is derived solely from the program structure.
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Test Bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component. 2) A suite of test programs used in conducting the test of a component or system.
Test Development: The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.
Test Harness: A software tool that enables the testing of software components by linking test capabilities to perform specific tests: accepting program inputs, simulating missing components, comparing actual outputs with expected outputs to determine correctness, and reporting discrepancies.
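A minimal sketch of that idea, with the component and test data invented for illustration: the harness feeds inputs to the component under test, compares actual with expected outputs, and reports discrepancies.

```python
def run_harness(component, cases):
    """Run each (inputs, expected) case through the component under test
    and collect any discrepancies between actual and expected output."""
    failures = []
    for inputs, expected in cases:
        actual = component(*inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures

# Hypothetical component under test.
def add(a, b):
    return a + b

cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
assert run_harness(add, cases) == []                       # no discrepancies
assert run_harness(add, [((2, 2), 5)]) == [((2, 2), 5, 4)]  # one reported
```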
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing: 1) The process of exercising software to verify that it satisfies specified requirements and to detect errors. 2) The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). 3) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Case: A commonly used term for a specific test, usually the smallest unit of testing. A test case consists of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, and test environment. More formally: a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often producing roughly as many lines of test code as production code.
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
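The red-green rhythm this describes can be sketched in a few lines; the `slugify` function and its test below are made up for illustration:

```python
# Step 1 (red): the test exists before the production code. Running it
# at this point would fail with a NameError, since slugify does not exist.
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough production code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3: run the test again; it now passes.
test_spaces_become_hyphens()
```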
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or combination or features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a test suite varies from organization to organization; there may be several test suites for a particular product, for example. In most cases, however, a test suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
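A small sketch of how a stub stands in for a lower-level component while the top-level component is tested first; the report generator and order data here are hypothetical:

```python
# Top-level component under test: depends on a lower-level data source.
def generate_report(fetch_orders):
    orders = fetch_orders()
    total = sum(o["amount"] for o in orders)
    return f"{len(orders)} orders, total {total}"

# Stub simulating the not-yet-integrated lower-level component.
def fetch_orders_stub():
    return [{"amount": 10}, {"amount": 25}]

# The top-level logic can be verified before the real data layer exists.
assert generate_report(fetch_orders_stub) == "2 orders, total 35"
```

Once the real lower-level component is tested, it replaces the stub and the process repeats down the hierarchy.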
Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
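In its simplest form the matrix is just a mapping from requirements to the test cases that cover them; the IDs below are invented for illustration, and an empty entry immediately flags a coverage gap:

```python
# Hypothetical requirement and test-case IDs.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],  # covered by two test cases
    "REQ-002": ["TC-03"],
    "REQ-003": [],                  # gap: no test covers this requirement yet
}

uncovered = [req for req, tcs in traceability.items() if not tcs]
assert uncovered == ["REQ-003"]
```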
Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.
Unit Testing: The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.
Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
V-Diagram (Model): A diagram that visualizes the order of testing activities and their corresponding phases of development.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
Validation: The process of evaluating software to determine compliance with specified requirements.
Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.
White-box Testing: Testing approaches that examine the program structure and derive test data from the program logic. This is also known as clear box testing, glass-box or open-box testing. White box testing determines if program-code structure and logic is faulty. The test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows, which are expected to be utilized by the end-user.