
Software Testing

 

1. Strategic Approaches to Software Testing:

 

A number of software testing strategies provide the software developer with a template for testing and all have the following generic characteristics:

•	Testing begins at the component level and works “outward” toward the integration of the entire computer-based system.

•	Different testing techniques are appropriate at different points in time.

•	Testing is conducted by the developer of the software and an independent test group.

•	Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

 

1.1. Verification and Validation

       

Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.

Verification: “Are we building the product right?”

Validation: “Are we building the right product?”

 

1.2. Organizing for Software Testing

 

A number of misconceptions can be erroneously inferred from the preceding discussion:

(1) That the developer of software should do no testing at all.

(2) That the software should be “tossed over the wall” to strangers who will test it mercilessly.

(3) That testers get involved with the project only when the testing steps are about to begin.

 

The software developer is always responsible for testing the individual units of the program, ensuring that each performs the function for which it was designed. In many cases the developer also conducts integration testing – a testing step that leads to the construction of the complete program structure. Only after the software architecture is complete does an independent test group become involved.

 

The role of an independent test group is to remove the inherent problems associated with letting the builder test what has been built. It also removes the conflict of interest that may otherwise be present.

 

2. Strategic Issues

 

The following issues must be addressed if a successful software testing strategy is to be implemented:

•	Specify product requirements in a quantifiable manner long before testing commences.

•	State testing objectives explicitly.

•	Understand the users of the software and develop a profile for each user category.

•	Develop a testing plan that emphasizes “rapid cycle testing”.

•	Build “robust” software that is designed to test itself.

•	Use effective formal technical reviews as a filter prior to testing.

•	Conduct formal technical reviews to assess the test strategy and test cases themselves.

•	Develop a continuous improvement approach for the testing process.

 

3. Unit Testing

 

Unit testing focuses verification effort on the smallest unit of software design – the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. Unit testing is white-box oriented.

 

3.1. Unit Test Considerations

 

Unit tests focus on the module interface, local data structures, boundary conditions, independent paths, and error-handling paths. The module interface is tested to ensure that information properly flows into and out of the program unit under test. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

 

The common errors in computation are

(1) misunderstood or incorrect arithmetic precedence,

(2) mixed mode operations,

(3) incorrect initialization,

(4) precision inaccuracy, and

(5) incorrect symbolic representation of an expression.

Comparison and control flow are closely coupled to one another. Test cases should uncover errors such as

(1) comparison of different data types,

(2) incorrect logical operators or precedence,

(3) expectation of equality when precision error makes equality unlikely (see the sketch after this list),

(4) incorrect comparison of variables,

(5) improper or nonexistent loop termination,

(6) failure to exit when divergent iteration is encountered, and

(7) improperly modified loop variables.
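
As a concrete illustration of error class (3), this minimal Python sketch (not from the original text) shows why a test case should compare floating-point results within a tolerance rather than expecting exact equality:

```python
import math

# Exact equality misfires: in IEEE-754 arithmetic,
# 0.1 + 0.2 evaluates to 0.30000000000000004, not 0.3.
exact_equal = (0.1 + 0.2) == 0.3
assert exact_equal is False

# A tolerant comparison is what a test case should use instead.
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
```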

Among the potential errors that should be tested when error handling is evaluated are

(1) error description is unintelligible,

(2) error noted does not correspond to error encountered,

(3) error condition causes system intervention prior to error handling,

(4) exception-condition processing is incorrect, and

(5) error description does not provide enough information to assist in locating the cause of the error.

 

3.2. Unit Test Procedures

 

Unit testing is normally considered an adjunct to the coding step. After source-level code has been developed, reviewed, and verified for correspondence to component-level design, unit test case design begins. Each test case should be coupled with a set of expected results. Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.
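
A minimal sketch of a unit test in Python's unittest framework; the component name and behavior are hypothetical, not from the original text. The test class plays the role of the driver, feeding inputs to the component and comparing actual results against expected results, including a boundary condition and an error-handling path:

```python
import unittest

# Hypothetical component under test: computes a reorder quantity.
# In a real system this would live in its own module.
def reorder_quantity(stock_level, threshold):
    if stock_level < 0:
        raise ValueError("stock_level must be non-negative")
    return max(threshold - stock_level, 0)

class ReorderQuantityDriver(unittest.TestCase):
    """Acts as the 'driver': feeds inputs and checks expected results."""

    def test_below_threshold(self):
        self.assertEqual(reorder_quantity(3, 10), 7)

    def test_boundary_at_threshold(self):
        # Boundary condition: stock exactly at the threshold.
        self.assertEqual(reorder_quantity(10, 10), 0)

    def test_error_handling(self):
        with self.assertRaises(ValueError):
            reorder_quantity(-1, 10)

if __name__ == "__main__":
    unittest.main()
```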

 

Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.

 

4. Integration Testing

 

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design. A non-incremental, “big bang” approach, in which all components are combined in advance and the entire program is tested as a whole, usually makes errors difficult to isolate and correct.

 

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied.

 

4.1. Top-down Integration

 

Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

 

The integration process is performed in a series of five steps:

(1) The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module (a stub sketch follows these steps).

(2) Depending on the integration approach selected, subordinate stubs are replaced one at a time as each component is integrated.

(3) Tests are conducted as each component is integrated.

(4) On completion of each set of tests, another stub is replaced with the real component.

(5) Regression testing may be conducted to ensure that new errors have not been introduced.
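
A minimal sketch of step (1), with hypothetical names: a stub stands in for a subordinate component so the main control logic can be tested before the real component is integrated:

```python
def fetch_exchange_rate_stub(currency):
    # Stub: stands in for a not-yet-integrated subordinate component.
    # Returns fixed, predictable values so upper-level logic can be tested.
    return {"USD": 1.0, "EUR": 0.9}.get(currency, 1.0)

def price_in_currency(amount, currency, rate_provider=fetch_exchange_rate_stub):
    # Main control logic under test; the real rate provider is injected
    # later, replacing the stub as integration proceeds (step 4).
    return round(amount * rate_provider(currency), 2)

assert price_in_currency(100.0, "EUR") == 90.0
```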

 

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. The top-down strategy sounds relatively uncomplicated, but in practice logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels.

 

4.2. Bottom-up Integration

 

Bottom-up integration begins construction and testing with atomic modules. Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated. A bottom-up integration strategy may be implemented with the following steps:

(1) Low-level components are combined into clusters that perform a specific software subfunction.

(2) A driver is written to coordinate test case input and output (see the driver sketch after these steps).

(3) The cluster is tested.

(4) Drivers are removed and clusters are combined moving upward in the program structure.
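
A minimal sketch of steps (1) through (3), with hypothetical names: a driver coordinates test case input and output for a small cluster of low-level functions:

```python
# Cluster of low-level components performing one subfunction.
def parse_record(line):
    name, qty = line.split(",")
    return name.strip(), int(qty)

def total_quantity(lines):
    return sum(parse_record(l)[1] for l in lines)

def cluster_driver():
    # Driver: feeds test case inputs to the cluster and checks outputs
    # until higher-level control modules exist to call it for real.
    cases = [(["a, 1", "b, 2"], 3), ([], 0)]
    for lines, expected in cases:
        actual = total_quantity(lines)
        assert actual == expected, f"{lines}: expected {expected}, got {actual}"
    print("cluster tests passed")

cluster_driver()
```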

 

4.3. Regression Testing

 

Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.

 

Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison. The regression test suite contains three different classes of test cases (tagged in the sketch after this list):

•	A representative sample of tests that will exercise all software functions.

•	Additional tests that focus on software functions that are likely to be affected by the change.

•	Tests that focus on the software components that have been changed.
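
A minimal sketch of how such a suite might be organized, assuming pytest and its custom markers (the test names and bodies are placeholders, not from the original text):

```python
import pytest

# Markers name the three regression classes; registering them in
# pytest.ini avoids "unknown mark" warnings. Test bodies are placeholders.

@pytest.mark.representative
def test_core_function_sample():
    assert True  # representative sample exercising a core function

@pytest.mark.affected_by_change
def test_function_near_change():
    assert True  # function likely to be affected by the latest change

@pytest.mark.changed_component
def test_changed_component():
    assert True  # directly exercises the component that was changed
```

Running pytest -m "affected_by_change or changed_component" then re-executes only the focused subset after a change.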

 

4.4. Smoke Testing

 

Smoke testing is an integration testing approach that is commonly used when “shrink wrapped” software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis. The smoke testing approach encompasses the following activities:

•	Software components that have been translated into code are integrated into a “build”. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.

•	A series of tests is designed to expose errors that will keep the build from properly performing its function.

•	The build is integrated with other builds, and the entire product is smoke tested daily.

Smoke testing provides a number of benefits when it is applied on complex, time-critical software engineering projects:

•	Integration risk is minimized.

•	The quality of the end product is improved.

•	Error diagnosis and correction are simplified.

•	Progress is easier to assess.

 

The integration test plan describes the overall strategy for integration. Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. For example, the phases for a typical interactive product might include “user interaction”, “data manipulation and analysis”, “display processing and generation”, and “database management”.

 

A history of actual test results, problems, or peculiarities is recorded in the test specification. This information can be vital during software maintenance.

 

5. Validation Testing

 

Once the software has been completely assembled as a package and interfacing errors have been uncovered and corrected, a final series of software tests, validation testing, begins. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer.

 

5.1. Validation Test Criteria

 

Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. The test plan and procedure are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all performance requirements are attained, documentation is correct, and human engineering and other requirements are met. After each validation test case has been conducted, one of two possible conditions exists: (1) the function or performance characteristic conforms to specification and is accepted, or (2) a deviation from specification is uncovered and a deficiency list is created.

 

5.2. Alpha and Beta Testing

 

It is virtually impossible for a software developer to foresee how the customer will really use a program.

 

The Alpha test is conducted at the developer’s site by a customer. The software is used in a natural setting with the developer “looking over the shoulder” of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.

 

The Beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a “live” application of the software in an environment that cannot be controlled by the developer. The customer records all problems that are encountered during beta testing and reports these to the developer at regular intervals.

 

6. System Testing

 

Software is only one element of a larger computer-based system. Software is incorporated with other system elements, and a series of system integration and validation tests is conducted. These tests fall outside the scope of the software process and are not conducted solely by software engineers. A classic system testing problem is “finger-pointing”. This occurs when an error is uncovered and each system element developer blames the others for the problem. The software engineer should anticipate potential interfacing problems and

(1) design error-handling paths that test all information coming from other elements of the system,

(2) conduct a series of tests that simulate bad data or other potential errors at the software interface,

(3) record the results of tests to use as “evidence” if finger-pointing does occur, and

(4) participate in planning and design of system tests to ensure that software is adequately tested.

 

System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.

 

6.1. Security Testing

 

Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, the tester plays the role of the individual who desires to penetrate the system. The role of the system designer is to make the cost of penetration greater than the value of the information that will be obtained.

 

6.2. Stress Testing

 

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. A variation of stress testing is a technique called sensitivity testing: a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation, and sensitivity testing attempts to uncover such data combinations.

 

6.3. Performance Testing

 

Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.


Software Testing

 

1. Software Testing Techniques:

               

Once source code has been generated, software must be tested to uncover as many errors as possible before delivery to your customer. Your goal is to design a series of test cases that have a high likelihood of finding errors. Software is tested from two different perspectives:

(1) Internal program logic is exercised using “white box” test case design techniques, and

(2) software requirements are exercised using “black box” test case design techniques.

In both cases the intent is to find the maximum number of errors with the minimum amount of effort and time.

 

1.1. Testing Objectives

 

  1. Testing is a process of executing a program with the intent of finding an error.
  2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
  3. A successful test is one that uncovers an as-yet-undiscovered error.

 

These objectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort.

 

1.2. Testing Principles

 

•	All tests should be traceable to customer requirements.

•	Tests should be planned long before testing begins.

•	The Pareto principle applies to software testing.

The Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components.

•	Testing should begin “in the small” and progress toward testing “in the large”.

•	Exhaustive testing is not possible.

•	To be most effective, testing should be conducted by an independent third party.

 

1.3. Testability

 

Software testability is simply how easily a computer program can be tested. Since testing is so profoundly difficult, it pays to know what can be done to streamline it. “Testability” is sometimes used to mean how adequately a particular set of tests will cover a product. It is also used by the military to mean how easily a tool can be checked and repaired in the field. Those two meanings are not the same as software testability. The following characteristics lead to testable software:

•	Operability. “The better it works, the more effectively it can be tested.”

•	Observability. “What you see is what you test.”

•	Controllability. “The better we can control the software, the more the testing can be automated and optimized.”

•	Decomposability. “By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting.”

•	Simplicity. “The less there is to test, the more quickly we can test it.”

•	Stability. “The fewer the changes, the fewer the disruptions to testing.”

•	Understandability. “The more information we have, the smarter we will test.”

 

Attributes of a “Good” Test:

1. A good test has a high probability of finding an error. To achieve this goal, the tester must understand the software and attempt to develop a mental picture of how the software might fail. Ideally, the classes of failure are probed.

2. A good test is not redundant. Testing time and resources are limited. There is no point in conducting a test that has the same purpose as another test. Every test should have a different purpose.

3. A good test should be “best of breed”. In a group of similar tests, the one with the highest likelihood of uncovering a whole class of errors should be used.

4. A good test should be neither too simple nor too complex. Overly complex tests are difficult to execute and may mask errors; in general, each test should be executed separately.

 

2. Test Case Design:

 

A rich variety of test case design methods have evolved for software. These methods provide the developer with a systematic approach to testing. (1) Knowing the specified function that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational; (2) knowing the internal workings of a product, tests can be conducted to ensure that “all gears mesh”, that is, internal operations are performed according to specifications and all internal components have been adequately exercised. The first approach is called black-box testing and the second, white-box testing. When computer software is considered, black-box testing alludes to tests that are conducted at the software interface. White-box testing of software is predicated on close examination of procedural detail.

 

3. White-Box Testing:

 

White-box testing, also called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that

(1) guarantee that all independent paths within a module have been exercised at least once,

(2) exercise all logical decisions on their true and false sides,

(3) execute all loops at their boundaries and within their operational bounds, and

(4) exercise internal data structures to ensure their validity.

 

Why don’t we spend all our energy on black-box tests? The answer lies in the nature of software defects:

•	Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.

•	We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis.

•	Typographical errors are random.

 

3.1. Basis Path Testing

 

Basis path testing is a white-box testing technique. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.

3.1.1. Flow Graph Notation: A simple notation for the representation of control flow is called a flow graph. The flow graph depicts logical control flow and is used as the procedural design representation.

3.1.2. Cyclomatic Complexity: Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set of a program and provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. It can be computed as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

3.1.3. Graph Matrices: To develop a software tool that assists in basis path testing, a data structure called a graph matrix can be used. A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes (see the sketch after this list).
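
The following minimal Python sketch (illustrative, not from the original text) ties 3.1.2 and 3.1.3 together by deriving V(G) = E - N + 2 directly from a graph matrix:

```python
def cyclomatic_complexity(matrix):
    # A nonzero entry marks an edge between two flow graph nodes.
    n = len(matrix)                                          # N: nodes
    e = sum(1 for row in matrix for entry in row if entry)   # E: edges
    return e - n + 2

# Flow graph with 4 nodes and 4 edges: 1->2, 2->3, 2->4, 3->4.
graph = [
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
assert cyclomatic_complexity(graph) == 2   # two independent paths
```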

 

3.2. Control Structure Testing

3.2.1. Condition Testing: Condition testing is a test case design method that exercises the logical conditions contained in a program module. A compound condition is composed of two or more simple conditions. A condition without relational expressions is referred to as a Boolean expression.

The purpose of condition testing is to detect not only errors in the conditions of a program but also other errors in the program.

 

Branch testing is probably the simplest condition testing strategy.

Domain testing requires three or four tests to be derived for a relational expression.

BRO (branch and relational operator) testing is a condition testing strategy that builds on the techniques just outlined. The technique guarantees the detection of branch and relational operator errors in a condition, provided that all Boolean variables and relational operators in the condition occur only once and have no common variables.

 

3.2.2. Data Flow Testing: The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.

 

3.2.3. Loop Testing: Loops are the cornerstone for the vast majority of all algorithms implemented in software. Loop testing focuses exclusively on the validity of loop constructs. A simple loop that can execute at most n passes is typically probed with the test values sketched below.
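
A minimal sketch (illustrative values, not from the original text) of the classic test set for a simple loop with a maximum of n passes:

```python
def loop_test_values(n, m=None):
    # Skip the loop entirely, one pass, two passes, a typical m passes,
    # and probe the n-1, n, and n+1 boundaries.
    m = m if m is not None else n // 2   # a typical middle value
    return [0, 1, 2, m, n - 1, n, n + 1]

print(loop_test_values(10))   # [0, 1, 2, 5, 9, 10, 11]
```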

 

4. Black-Box Testing

 

Black-box testing, also called behavioral testing, focuses on the functional requirements of the software. Black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. It tends to be applied during later stages of testing. Because black-box testing purposely disregards control structure, attention is focused on the information domain.

 

4.1. Graph-Based Testing Methods:

 

A graph is a collection of nodes that represent objects, links that represent the relationships between objects, node weights that describe the properties of a node, and link weights that describe some characteristic of a link. A directed link indicates that a relationship moves in only one direction. A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions. Parallel links are used when a number of different relationships are established between graph nodes.

 

The transitivity of sequential relationships is studied to determine how the impact of relationships propagates across objects defined in a graph. The symmetry of a relationship is also an important guide to the design of test cases. If a link is indeed bidirectional, it is important to test this feature.

 

4.2. Equivalence Partitioning:

An equivalence class represents a set of valid or invalid states for input conditions. Equivalence classes may be defined according to the following guidelines (a short sketch follows the list):

•	If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

•	If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.

•	If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.

•	If an input condition is Boolean, one valid and one invalid class are defined.
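
A minimal sketch of the range guideline, using a hypothetical month-number input (1 through 12); one valid class and two invalid classes are represented:

```python
def is_valid_month(m):
    return 1 <= m <= 12

valid_class = [6]     # any representative inside the range 1..12
invalid_low = [0]     # class of values below the range
invalid_high = [13]   # class of values above the range

for m in valid_class:
    assert is_valid_month(m)
for m in invalid_low + invalid_high:
    assert not is_valid_month(m)
```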

 

4.3. Boundary Value Analysis

 

Boundary value analysis is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, it selects test cases at the “edges” of the class. For example, if internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary.
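
Continuing the hypothetical month-range example from the equivalence partitioning sketch, boundary value analysis picks values at and just beyond the edges of the 1..12 range:

```python
def is_valid_month(m):
    return 1 <= m <= 12

# Values at the edges of the range and just outside them.
boundary_cases = [0, 1, 2, 11, 12, 13]
expected = [False, True, True, True, True, False]

assert [is_valid_month(m) for m in boundary_cases] == expected
```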

 

4.4. Comparison Testing

 

There are some situations in which the reliability of software is absolutely critical. In such applications, redundant hardware and software are often used to minimize the possibility of error. When redundant software is developed, separate software engineering teams develop independent versions of an application using the same specification. In such situations, each version can be tested with the same test data to ensure that all versions provide identical output. These independent versions form the basis of a black-box testing technique called comparison testing or back-to-back testing.

 
