Monday, December 26, 2011

Software Testing Types

Many software testing types are used to test a software product. Using different types of testing helps identify defects that any single type, used alone, would leave undetected.

What are the Software Testing Types?

The different software testing methodologies help establish the completeness, correctness, security and quality of developed software. The software testing life cycle is carried out on behalf of the stakeholders and reveals quality information about a particular software product.

White Box Testing
White box testing, as the name suggests, gives an internal view of the software. It is also known as structural testing or glass box testing, because the interest lies in what is inside the box. It is often used to measure the thoroughness of testing through the coverage of a set of structural elements or coverage items.

Unit Testing
Unit testing is also known as component testing, module testing or program testing. The aim of this testing type is to search for defects in, and verify the functioning of, individual software components.
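
As an illustration, here is a minimal unit test sketch using Python's built-in unittest framework; the `apply_discount` function and its expected behavior are hypothetical, invented for the example.

```python
import unittest

def apply_discount(price, percent):
    """Component under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```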

Static Testing
It is the testing of software or a software component at the specification or implementation level, without executing the software. The methodologies used include various forms of reviews, coding standard enforcement, code metrics, code structure analysis, etc.

Code Coverage
It is an analysis method used to determine which parts of the software have been covered by the test suite and which parts have not been executed. Common coverage measures are statement coverage, decision coverage and condition coverage. Statement coverage is the percentage of executable statements that have been exercised by a test suite. Decision coverage is the percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies 100% statement coverage.
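
The difference between statement and decision coverage is easiest to see in code. The sketch below uses a made-up `classify` function: one test executes every statement yet leaves a decision outcome untested. (Branch-aware coverage tools can report this gap.)

```python
def classify(value):
    # Statement coverage counts executed lines; decision coverage also
    # requires each outcome of the `if` (True and False) to be exercised.
    result = "low"
    if value > 10:          # decision with two outcomes
        result = "high"
    return result

# This single test executes every statement above (100% statement
# coverage), yet the False outcome of `value > 10` is never taken,
# so decision coverage is only 50%.
assert classify(20) == "high"

# Adding this second test exercises the remaining outcome and brings
# decision coverage to 100% as well.
assert classify(5) == "low"
```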

Error Guessing
A test design technique in which an experienced tester anticipates the defects that might be present in the software or component under test, as a result of errors made, and designs tests specifically to expose those defects.

Black Box Testing
Black box testing, as the name suggests, gives only an external view of the software. It involves testing either functional or non-functional aspects of the software without any reference to its internal structure. We will now see the different black box testing techniques.

Integration Testing
Integration testing involves testing the interfaces between components and the interactions with different parts of a system, such as the computer operating system, file system and hardware, as well as the interfaces between different software systems.
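
A minimal sketch of what this looks like in practice, assuming two invented components: a file-system-backed store and a service that uses it. The integration test exercises the interface between them rather than either component in isolation.

```python
import json
import os
import tempfile

class UserStore:
    """Hypothetical component: persists users to the file system."""
    def __init__(self, path):
        self.path = path

    def save(self, users):
        with open(self.path, "w") as f:
            json.dump(users, f)

    def load(self):
        with open(self.path) as f:
            return json.load(f)

class UserService:
    """Hypothetical component: business logic on top of the store."""
    def __init__(self, store):
        self.store = store

    def register(self, name):
        users = self.store.load() if os.path.exists(self.store.path) else []
        users.append(name)
        self.store.save(users)
        return len(users)

# Integration test: exercises the service *and* the real file-system
# store together, checking the interface between them.
def test_register_persists_through_store():
    with tempfile.TemporaryDirectory() as tmp:
        store = UserStore(os.path.join(tmp, "users.json"))
        service = UserService(store)
        service.register("alice")
        service.register("bob")
        assert store.load() == ["alice", "bob"]

test_register_persists_through_store()
```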

Functional Testing
It is testing based on an analysis of the specification of the functionality of software or a software component. Functional testing is often based on five main points: suitability, interoperability, security, accuracy and compliance.

Performance Testing
The testing methodology used to determine the performance of a software product. To understand performance testing better, take the example of a website: how does it perform in an environment of third-party products, such as servers and middleware? This type of testing helps identify performance bottlenecks in high-use applications. Performance tests are usually automated; they subject the software to normal, peak and exceptional load conditions and measure its response under each.
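
As a rough sketch of the idea in Python, the snippet below times repeated calls to a stand-in `handle_request` function and summarizes the response times; a real performance test would drive the deployed system with a dedicated tool.

```python
import statistics
import time

def handle_request():
    """Stand-in for the operation under test (e.g. rendering a page)."""
    time.sleep(0.01)  # simulate ~10 ms of work

# Measure response times over repeated calls and summarize them.
samples = []
for _ in range(50):
    start = time.perf_counter()
    handle_request()
    samples.append(time.perf_counter() - start)

samples.sort()
print(f"mean:  {statistics.mean(samples) * 1000:.1f} ms")
print(f"p95:   {samples[int(len(samples) * 0.95)] * 1000:.1f} ms")
print(f"worst: {samples[-1] * 1000:.1f} ms")
```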

Load Testing
This is a test conducted to measure the behavior of a component or the software as the load on it is increased, for example by increasing the number of parallel users and/or the number of transactions carried out on the system simultaneously, in order to find the highest load the component or software can handle.
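
A toy illustration of ramping up load, assuming a made-up `transaction` function in place of real requests to the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one user transaction against the system under test."""
    time.sleep(0.05)  # simulate a 50 ms server round trip
    return True

# Ramp up the number of parallel users and observe throughput;
# in a real load test each call would hit the actual system.
for users in (1, 10, 50, 100):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: transaction(), range(users)))
    elapsed = time.perf_counter() - start
    print(f"{users:>3} parallel users -> {len(results)} transactions "
          f"in {elapsed:.2f}s")
```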

Stress Testing
Stress testing and load testing are often confused, and the terms are sometimes used interchangeably, which is wrong. Stress testing involves evaluating the system at or beyond the limits of its specified requirements. It helps determine the load under which the software fails, and how it fails. The process is similar to performance testing, but the simulated load is of a very high level.

Exploratory Testing
This is a software testing technique with a hands-on approach: minimal planning and maximum test execution. The tester actively controls the design of the tests while those tests are performed, and uses the information gained while testing to design new and better tests.

Usability Testing
Usability testing involves tests carried out to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to users under specified conditions. The user-friendliness of the software is under scrutiny in this type of testing, and the application flow is checked for ease of navigation.

Reliability Testing
Reliability testing checks the ability of the software to perform its required functions under stated conditions for a specified period of time and/or for a specified number of operations or transactions.

Ad-Hoc Testing
It is the least formal method of testing software. It helps in deciding the scope and duration of the various tests that need to be carried out on the application, and it also helps the tester gain a better understanding of the software.

Smoke Testing
This software testing type covers the main functionality of a component or the software. It exercises the most crucial functions of the software without concerning itself with the finer details.
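
One possible shape for such a check, with an entirely hypothetical `FakeApp` standing in for the real build; the point is a handful of shallow checks, any failure of which rejects the build before deeper testing begins.

```python
from types import SimpleNamespace

class FakeApp:
    """Stand-in for the real application; a real smoke test would
    target an actual build or deployment. All names are hypothetical."""
    def start(self):
        return None

    def get(self, path):
        body = "login form" if path == "/login" else "welcome"
        return SimpleNamespace(status=200, body=body)

def smoke_test(app):
    """Shallow checks of the most crucial functions, breadth over depth."""
    checks = [
        ("application starts", lambda: app.start() is None),
        ("home page loads",    lambda: app.get("/").status == 200),
        ("login form present", lambda: "login" in app.get("/login").body),
    ]
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
        if not ok:
            return False  # reject the build; no point testing further
    return True

print("build accepted" if smoke_test(FakeApp()) else "build rejected")
```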

System Testing
This type of software testing involves testing the entire system in accordance with the requirements of the client. It is based on the overall requirements specification and covers all combined parts of the system.

End to End Testing
This software testing type involves testing the entire application in a real-world-like scenario: the software interacts with the database, uses the network for communication, and interacts with other hardware, applications or systems where necessary. Compatibility testing and security testing are part of end-to-end testing.

Regression Testing
One of the important types of testing carried out on a software product. The focus of regression testing is on retesting the software to verify that no new defects have been introduced after existing defects have been fixed.
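
A small sketch of the idea: a test pinned to a previously fixed (hypothetical) defect, kept in the suite so the bug cannot silently return after later changes.

```python
def total_price(items):
    """Function that previously had a defect: an empty order used to
    raise TypeError instead of returning 0 (hypothetical bug #123)."""
    if not items:
        return 0.0   # the fix
    return sum(items)

# Regression test: re-runs the exact scenario of the fixed defect so
# the suite fails immediately if a later change reintroduces it.
def test_empty_order_regression():
    assert total_price([]) == 0.0

# Normal behaviour must still hold after the fix.
def test_non_empty_order():
    assert total_price([10.0, 2.5]) == 12.5

test_empty_order_regression()
test_non_empty_order()
```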

Acceptance Testing
This is formal testing carried out to determine whether the system satisfies the acceptance criteria, enabling the users or another authorized entity to decide whether to accept the system. Acceptance testing is carried out with respect to the needs and requirements of the users and the business processes to be carried out using the software.

Alpha Testing
Alpha testing involves simulated or actual operational testing by potential users or an independent test team at the developer's site, but outside the development arena. It is often performed on off-the-shelf software products as a form of internal acceptance testing.

Beta Testing
Operational testing carried out by potential or existing users at an external site to determine if the system satisfies the user needs and fits within the business processes is known as beta testing. It is carried out as a form of acceptance testing for off-the-shelf software to acquire feedback from the market.

Besides white box and black box testing there is gray box testing. In gray box testing, the tester uses knowledge of the internal data structures and algorithms to design test cases; however, the testing itself is carried out as in black box testing.

Classification of Defects / Bugs

There are various ways in which we can classify defects. Below are some of the classifications:
 
Severity Wise:
  • Major: A defect, which will cause an observable product failure or departure from requirements.
  • Minor: A defect that will not cause a failure in execution of the product.
  • Fatal: A defect that will cause the system to crash or close abruptly, or affect other applications.
 Work product wise: 
  1. SSD: A defect from System Study document
  2. FSD: A defect from Functional Specification document
  3. ADS: A defect from Architectural Design Document
  4. DDS: A defect from Detailed Design document
  5. Source code: A defect from Source code
  6. Test Plan/ Test Cases: A defect from Test Plan/ Test Cases
  7. User Documentation: A defect from User manuals, Operating manuals
 Type of Errors Wise:  
  1. Comments: Inadequate/ incorrect/ misleading or missing comments in the source code
  2. Computational Error: Improper computation of the formulae / improper business validations in code.
  3. Data error: Incorrect data population / update in database
  4. Database Error: Error in the database schema/Design
  5. Missing Design: Design features/approach missed/not documented in the design document and hence does not correspond to requirements
  6. Inadequate or suboptimal Design: Design features/approach needs additional inputs to be complete, or the design features described do not provide the best (optimal) approach towards the required solution
  7. Incorrect Design: Wrong or inaccurate design
  8. Ambiguous Design: Design feature/approach is not clear to the reviewer. Also includes ambiguous use of words or unclear design features.
  9. Boundary Conditions Neglected: Boundary conditions not addressed/incorrect
  10. Interface Error: Interfacing errors internal or external to the application, incorrect handling of passed parameters, incorrect alignment, incorrect/misplaced fields/objects, unfriendly window/screen positions
  11. Logic Error: Missing or Inadequate or irrelevant or ambiguous functionality in source code
  12. Message Error: Inadequate/ incorrect/ misleading or missing error messages in source code
  13. Navigation Error: Navigation not coded correctly in source code
  14. Performance Error: An error related to performance/optimality of the code
  15. Missing Requirements: Implicit/Explicit requirements are missed/not documented during requirement phase
  16. Inadequate Requirements: Requirement needs additional inputs for it to be complete
  17. Incorrect Requirements: Wrong or inaccurate requirements
  18. Ambiguous Requirements: Requirement is not clear to the reviewer. Also includes ambiguous use of words, e.g. "like", "such as", "may be", "could be", "might", etc.
  19. Sequencing / Timing Error: Error due to incorrect/missing consideration to timeouts and improper/missing sequencing in source code.
  20. Standards: Standards not followed like improper exception handling, use of E & D Formats and project related design/requirements/coding standards
  21. System Error: Hardware and Operating System related error, Memory leak
  22. Test Plan / Cases Error: Inadequate/ incorrect/ ambiguous or duplicate or missing - Test Plan/ Test Cases & Test Scripts, Incorrect/Incomplete test setup
  23. Typographical Error: Spelling / Grammar mistake in documents/source code
  24. Variable Declaration Error: Improper declaration / usage of variables, Type mismatch error in source code
Status Wise: 
  • Open
  • Closed
  • Deferred
  • Cancelled

Static Testing & Dynamic Testing

Static Testing:
The verification activities fall into the category of static testing. During static testing, you have a checklist to check whether the work you are doing follows the set standards of the organization. These standards can be for coding, integration and deployment. Reviews, inspections and walkthroughs are static testing methodologies.

Dynamic Testing:
Dynamic testing involves executing the software, giving input values and checking whether the output is as expected. These are the validation activities. Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies.

Q: What is the difference between static and dynamic testing?


1. Static testing is about prevention, dynamic testing is about cure.
2. The static tools offer greater marginal benefits.
3. Static testing is many times more cost-effective than dynamic testing.
4. Static testing beats dynamic testing by a wide margin.
5. Static testing is more effective!
6. Static testing gives you comprehensive diagnostics for your code.
7. Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.
8. Dynamic testing usually takes longer than static testing. Dynamic testing may involve running several test cases, each of which may take longer than compilation.
9. Dynamic testing finds fewer bugs than static testing.
10. Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking.
11. Static testing can find all of the following, which dynamic testing cannot: syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform to coding standards, and ANSI violations.
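
A small Python illustration of point 10: the first check rejects a syntax error by compiling the source without executing it, while the logic defect in the second snippet compiles cleanly and is only exposed by actually running a test. Both code strings are invented for the example.

```python
# Static check: compiling the source text detects the syntax error
# without executing a single line of the program under test.
bad_source = "def area(r):\n    return 3.14 * r *\n"
try:
    compile(bad_source, "<candidate>", "exec")
except SyntaxError as exc:
    print(f"static check failed before any execution: {exc.msg}")

# Dynamic check: this defect compiles cleanly (the radius should be
# squared) and only shows up when the code is actually executed.
buggy_source = "def area(r):\n    return 3.14 * r\n"
namespace = {}
exec(compile(buggy_source, "<candidate>", "exec"), namespace)

result = namespace["area"](2)
expected = 3.14 * 2 * 2
print(f"dynamic test: area(2) = {result}, expected {expected} -> "
      f"{'PASS' if result == expected else 'FAIL'}")
```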

Bug Priority & Severity

Severity is defined as the impact of a defect on the application, and priority as the urgency with which it should be fixed. Defects are commonly categorized into four severity/priority levels:

Critical / Show Stopper — This type of defect prevents further testing of the product or function under test and is classified as a critical bug. Examples include a missing menu option, VB script errors, a security permission required to access a function under test, or broken links. Work cannot be continued on the application.

Major / High — A defect that does not function as intended, or that causes other functionality to fail to meet requirements, is categorized as a major bug. Examples include inaccurate calculations, the wrong field being updated, frames of the application giving errors, or links navigating to wrong pages. Work can be continued with a workaround.

Average / Medium — Defects that do not conform to standards and conventions are categorized as medium bugs. Examples include mismatched visuals or fonts, and text links that lead to different end points. A workaround can be used to achieve the functionality objectives.

Minor / Low — These defects, also called cosmetic defects, do not affect the functionality of the system and are classified as minor bugs. Testing of the application can continue, and these defects are fixed towards the end of the build. Examples: spelling errors in the descriptions of text boxes, etc.

Bug Life Cycle

Any abnormality in the software, or any behavior that does not match the requirements, is reported as a bug. The elimination of bugs from the application depends upon the efficiency of the testing done on it.

The different stages of a bug are as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed

Description of Stages:

1. New: When the bug is logged for the first time, its stage is "NEW". This means the bug has not yet been approved/validated.

2. Open: After a tester has logged a bug, the test lead approves that the bug is valid and changes the state to "OPEN". (Depending on the company's way of working, the tester may also change the status to "OPEN".)

3. Assign: Once the test lead validates the bug as "OPEN", he assigns the bug to the corresponding developer or developer team, and the bug status is changed to "ASSIGN".

4. Test: Once the developer fixes the bug, he assigns it back to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to "TEST", indicating that the fix is ready to be re-tested by the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in a later release. There can be many reasons for this status: the priority of the bug may be low, there may be a lack of time before the release, the bug may not be fixable for architectural reasons, or it may not have a major effect on the software.

6. Rejected: If the developer tries to replicate the bug and cannot reproduce the same results, he can reject the bug, and the status is changed to "REJECTED".

7. Duplicate: If the bug is reported twice, or two bugs describe the same scenario of replication, the status of one of them is changed to "DUPLICATE".

8. Verified: Once the bug is fixed and the status is changed to "TEST", the tester re-tests it. If the bug is no longer present in the software, has been fixed properly, and other functionality is unharmed by the fix, the tester approves the fix and changes the status to "VERIFIED".

9. Reopened: If the bug still exists after the developer's fix, or the fix creates problems in other parts of the application, the tester changes the status to "REOPENED", and the bug goes through the same life cycle once again.

10. Closed: Once the fix is verified, the tester confirms that the bug no longer exists in the software and changes its status to "CLOSED". At this stage the bug is fixed, tested and approved.
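
The stages above can be read as a state machine. Below is a minimal sketch of the transitions as described in this post; real bug trackers may allow slightly different moves.

```python
# Allowed status transitions, following the stages described above.
TRANSITIONS = {
    "NEW":      {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":     {"ASSIGN"},
    "ASSIGN":   {"TEST", "DEFERRED", "REJECTED", "DUPLICATE"},
    "TEST":     {"VERIFIED", "REOPENED"},
    "VERIFIED": {"CLOSED"},
    "REOPENED": {"ASSIGN"},   # the bug follows the same cycle again
    "DEFERRED": {"ASSIGN"},   # picked up again in a later release
}

def move(current, new_status):
    """Change a bug's status, refusing transitions the cycle forbids."""
    if new_status not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move {current} -> {new_status}")
    return new_status

# A bug that is fixed, re-tested and closed:
status = "NEW"
for step in ("OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
    status = move(status, step)
    print(status)
```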

Monday, December 12, 2011

Usability Testing Approaches

Usability requirements are not always testable and cannot always be measured accurately. The classic non-testable requirement is: "The system must be user-friendly." But think about this: user-friendly to whom? Who are the users?

Suggested Approaches for Usability Testing:
• Qualitative
• Quantitative

Qualitative Approach:  Each and every function should be available from all the pages of the site. The user should be able to submit each and every request within 4-5 actions. A confirmation message should be displayed for each and every submit.

Quantitative Approach:  A heuristic checklist should be prepared with all the general test cases that fall under the classification of checking. These generic test cases should be given to 10 different people, who are asked to exercise the system and mark the pass/fail status of each. The average of the 10 results should be considered the final result.
Example: some people may feel the system is more user-friendly if the submit button is on the left side of the screen, while others may feel it is better placed on the right side.
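
A small sketch of how the quantitative verdict might be computed, with entirely made-up checklist items and marks from 10 evaluators (1 = pass, 0 = fail):

```python
# Hypothetical pass/fail marks from 10 evaluators per checklist item.
marks = {
    "submit within 4-5 actions": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
    "confirmation on submit":    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "every function reachable":  [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
}

# The final verdict for each item is the average across evaluators,
# which smooths out individual preferences like button placement.
for item, votes in marks.items():
    pass_rate = sum(votes) / len(votes)
    print(f"{item}: {pass_rate:.0%} pass rate")
```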

Software Testing Life Cycle (STLC)

The different stages in the Software Testing Life Cycle are Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution and Test Cycle Closure.
Each of these stages has definite entry and exit criteria, activities and deliverables associated with it. In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met, but practically this is not always possible.

REQUIREMENT ANALYSIS
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements can be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.
Activities
• Identify types of tests to be performed.
• Gather details about testing priorities and focus.
• Prepare Requirement Traceability Matrix (RTM).
• Identify test environment details where testing is supposed to be carried out.
• Automation feasibility analysis (if required).
Deliverables
• RTM
• Automation feasibility report. (if applicable)
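
As an illustration, an RTM can be as simple as a mapping from requirements to the test cases that cover them, so coverage gaps are visible at a glance; all IDs below are hypothetical.

```python
# Minimal sketch of a Requirement Traceability Matrix.
rtm = {
    "REQ-001 user can log in":    ["TC-01", "TC-02"],
    "REQ-002 password is masked": ["TC-03"],
    "REQ-003 session times out":  [],  # not yet covered by any test
}

for requirement, test_cases in rtm.items():
    status = ", ".join(test_cases) if test_cases else "NO COVERAGE"
    print(f"{requirement} -> {status}")
```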

TEST PLANNING
This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines the effort and cost estimates for the project and prepares and finalizes the test plan.
Activities
• Preparation of test plan/strategy document for various types of testing
• Test tool selection
• Test effort estimation
• Resource planning and determining roles and responsibilities.
• Training requirement
Deliverables

• Test plan /strategy document.
• Effort estimation document.

TEST CASE DEVELOPMENT
This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed and then reworked as well.
Activities
• Create test cases, automation scripts (if applicable)
• Review and baseline test cases and scripts
• Create test data (If Test Environment is available)
Deliverables
• Test cases/scripts
• Test data

Test Environment Setup
The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the test case development stage. The test team may not be involved in this activity if the customer or development team provides the test environment, in which case the test team is required to do a readiness check (smoke test) of the given environment.
Activities
• Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. 
• Setup test Environment and test data
• Perform smoke test on the build
Deliverables
• Environment ready with test data set up
• Smoke Test Results.

Test Execution
During this phase test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction and retesting will be performed.
Activities
• Execute tests as per plan
• Document test results, and log defects for failed cases
• Map defects to test cases in RTM
• Retest the defect fixes
• Track the defects to closure
Deliverables
• Completed RTM with execution status
• Test cases updated with results
• Defect reports

Test Cycle Closure
The testing team meets, discusses and analyzes testing artifacts to identify strategies that should be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for similar projects in the future.
Activities
• Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives
• Prepare test metrics based on the above parameters.
• Document the learning out of the project
• Prepare Test closure report
• Qualitative and quantitative reporting of quality of the work product to the customer.
• Test result analysis to find out the defect distribution by type and severity.
Deliverables

• Test Closure report
• Test metrics

Finally, the STLC can be summarized in a table with one row per stage and the following columns: STLC Stage, Entry Criteria, Activity, Exit Criteria and Deliverables.