Improve Your Technology

Just another blog for technology

Software Design

Software Design Concepts and Principles

 

1. Software Design and Software Engineering

 

Software design sits at the technical kernel of software engineering and is applied regardless of the software process model that is used. Analysis provides the information necessary to create the four design models required for a complete specification of design. The data design transforms the information domain model created during analysis into the data structures that will be required to implement the software. The data objects and relationships defined in the entity relationship diagram and the detailed data content depicted in the data dictionary provide the basis for the data design activity.

 

The architectural design defines the relationship between major structural elements of the software, the “design patterns” that can be used to achieve the requirements that have been defined for the system, and the constraints that affect the way in which architectural design patterns can be applied.

 

The interface design describes how the software communicates within itself, with the systems that interoperate with it, and with humans who use it. The component-level design transforms structural elements of the software architecture into a procedural description of software components.

 

The importance of software design can be stated with a single word – quality. Design is the place where quality is fostered in software engineering.

 

2. The Design Process

 

The design is initially represented at a high level of abstraction; a level that can be directly traced to specific system objectives and to more detailed data, functional, and behavioral requirements. As design iterations occur, subsequent refinement leads to design representations at much lower levels of abstraction.

 

2.1 Design and Software Quality

 

Three characteristics serve as a guide for the evaluation of a good design:

- The design must implement all of the explicit requirements contained in the analysis model, and it must accommodate all of the implicit requirements desired by the customer.

- The design must be a readable, understandable guide for those who generate code and for those who test and subsequently support the software.

- The design should provide a complete picture of the software, addressing the data, functional, and behavioral domains from an implementation perspective.

 

In order to evaluate the quality of a design representation, we must establish technical criteria for good design.

(1)     A design should exhibit an architectural structure that

a.        has been created using recognizable design patterns,

b.       is composed of components that exhibit good design characteristics, and

c.        can be implemented in an evolutionary fashion, thereby facilitating implementation and testing.

(2)     A design should be modular; that is, the software should be logically partitioned into elements that perform specific functions and subfunctions.

(3)     A design should contain distinct representations of data, architecture, interfaces, and components.

(4)     A design should lead to data structures that are appropriate for the objects to be implemented and are drawn from recognizable data patterns.

(5)     A design should lead to components that exhibit independent functional characteristics.

(6)     A design should lead to interfaces that reduce the complexity of connection between modules and with the external environment.

(7)     A design should be derived using a repeatable method that is driven by information obtained during software requirements analysis.

 

2.2 The Evolution of Software Design

 

Early design work concentrated on criteria for the development of modular programs and on methods for refining software structures in a top-down manner. Procedural aspects of design definition evolved into a philosophy called structured programming. Newer design approaches proposed an object-oriented approach to design derivation.

 

Each software design method introduces unique heuristics and notation, as well as a somewhat parochial view of what characterizes design quality. Yet all share common characteristics:

(1)     a mechanism for the translation of the analysis model into a design representation,

(2)     a notation for representing functional components and their interfaces,

(3)     heuristics for refinement and partitioning, and

(4)     guidelines for quality assessment.

 

3. Design Principles

 

The design process is a sequence of steps that enables the designer to describe all aspects of the software to be built. The design process is not simply a cookbook, however; basic design principles enable the software engineer to navigate it. The following is a set of principles for software design.

 

- The design process should not suffer from “tunnel vision”. A good design should consider alternative approaches, judging each based on the requirements of the problem.

- The design should be traceable to the analysis model.

- The design should not reinvent the wheel. Design time should be invested in representing truly new ideas and integrating those patterns that already exist.

- The design should “minimize the intellectual distance” between the software and the problem as it exists in the real world.

- The design should exhibit uniformity and integration. A design is uniform if it appears that one person developed the entire thing.

- The design should be structured to accommodate change.

- The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered.

- Design is not coding, and coding is not design. The level of abstraction of the design model is higher than that of source code. The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded.

- The design should be assessed for quality as it is being created, not after the fact.

- The design should be reviewed to minimize conceptual errors.

 

4. Design Concepts

 

4.1 Abstraction

 

At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, a more procedural orientation is taken. Problem-oriented terminology is coupled with implementation-oriented terminology in an effort to state a solution. Finally, at the lowest level of abstraction, the solution is stated in a manner that can be directly implemented.

 

Each step in the software process model is a refinement in the level of abstraction of the software solution. A procedural abstraction is a named sequence of instructions that has a specific and limited function. A data abstraction is a named collection of data that describes a data object. The original abstract data type is used as a template or generic data structure from which other data structures can be instantiated.
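As a rough illustration, the sketch below shows a procedural abstraction and a data abstraction in Python. The door example is only illustrative; the names Door and open_door are invented for this sketch.

```python
from dataclasses import dataclass

# Data abstraction: a named collection of data that describes a data object.
# Callers work with "a Door" without caring how its attributes are stored.
@dataclass
class Door:
    door_type: str
    swing_direction: str
    weight: float
    is_open: bool = False

# Procedural abstraction: a named sequence of instructions with a specific
# and limited function. The name implies the steps without exposing them.
def open_door(door: Door) -> None:
    # Details (reach for knob, turn, pull, step away...) stay hidden here.
    door.is_open = True

front_door = Door(door_type="wood", swing_direction="inward", weight=25.0)
open_door(front_door)
print(front_door.is_open)  # True
```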

 

Control abstraction is the third form of abstraction used in software design. Like procedural and data abstraction, control abstraction implies a program control mechanism without specifying details. An example of a control abstraction is the synchronization semaphore used to coordinate activities in an operating system.
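A minimal sketch of the semaphore idea in Python follows, assuming a made-up printer scenario and using the standard library's threading.Semaphore in place of an operating-system semaphore.

```python
import threading

# Control abstraction: the semaphore names a coordination mechanism without
# specifying how the waiting and signaling are implemented underneath.
printer_slots = threading.Semaphore(2)  # at most two jobs print concurrently

def print_job(job_id: int) -> None:
    with printer_slots:  # acquire a slot; block if both are taken
        print(f"printing job {job_id}")
    # the slot is released automatically when the with-block exits

threads = [threading.Thread(target=print_job, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```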

 

4.2 Refinement

 

Stepwise refinement is a top-down design strategy. A program is developed by successively refining levels of procedural detail. The process of program refinement is analogous to the process of refinement and partitioning that is used during requirements analysis.

 

Refinement is actually a process of elaboration. A high level of abstraction describes function or information conceptually but provides no information about the internal workings of the function or the internal structure of the information. Abstraction and refinement are complementary concepts. Abstraction enables a designer to specify procedure and data while suppressing low-level details. Refinement helps the designer to reveal low-level details as design progresses.
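A small hypothetical payroll example suggests the flavor of stepwise refinement: the first level states the solution in problem-oriented terms, and later levels elaborate individual steps. All names and the flat-rate deduction are invented for this sketch.

```python
from collections import namedtuple

Employee = namedtuple("Employee", "name hours rate")

# Level 1: the solution in problem-oriented terms; details are suppressed.
def weekly_pay(employee):
    gross = compute_gross_pay(employee)
    return gross - compute_deductions(gross)

# Level 2: one step is elaborated, revealing overtime rules only now.
def compute_gross_pay(employee):
    regular = min(employee.hours, 40) * employee.rate
    overtime = max(employee.hours - 40, 0) * employee.rate * 1.5
    return regular + overtime

# Level 2: another step elaborated with a placeholder flat-rate deduction.
def compute_deductions(gross):
    return gross * 0.2

print(weekly_pay(Employee("Ada", hours=45, rate=20.0)))  # 760.0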

 

4.3 Modularity

 

Software is divided into separately named and addressable components, often called modules, that are integrated to satisfy problem requirements. Modularity is the single attribute of software that allows a program to be intellectually manageable.

 

Five criteria enable us to evaluate a design method with respect to its ability to define an effective modular system:

- Modular decomposability. If a design method provides a systematic mechanism for decomposing the problem into sub-problems, it will reduce the complexity of the overall problem, thereby achieving an effective modular solution.

- Modular composability. If a design method enables existing design components to be assembled into a new system, it will yield a modular solution that does not reinvent the wheel.

- Modular understandability. If a module can be understood as a standalone unit, it will be easier to build and easier to change.

- Modular continuity. If small changes to the system requirements result in changes to individual modules, rather than system-wide changes, the impact of change-induced side effects will be minimized.

- Modular protection. If an aberrant condition occurs within a module and its effects are constrained within that module, the impact of error-induced side effects will be minimized.

 

4.4 Software Architecture

 

Software architecture alludes to “the overall structure of the software and the ways in which that structure provides conceptual integrity for a system”. Architecture is the hierarchical structure of program components, the manner in which these components interact and the structure of data that are used by the components.

 

One goal of software design is to derive an architectural rendering of a system. A set of architectural patterns enables a software engineer to reuse design-level concepts.

 

Structural models represent architecture as an organized collection of program components. Framework models increase the level of design abstraction by attempting to identify repeatable architectural design frameworks that are encountered in similar types of applications. Dynamic models address the behavioral aspects of the program architecture, indicating how the structure or system configuration may change as a function of external events. Process models focus on the design of the business or technical process that the system must accommodate. Functional models can be used to represent the functional hierarchy of a system. A number of architectural description languages have been proposed; the majority provide mechanisms for describing system components and the manner in which they are connected to one another.

 

4.5 Control Hierarchy

 

Control hierarchy, also called program structure, represents the organization of program components and implies a hierarchy of control. Different notations are used to represent control hierarchy for those architectural styles that are amenable to this representation. The most common is the treelike diagram.

 

4.6 Structural Partitioning

 

Horizontal partitioning defines separate branches of the modular hierarchy for each major program function. Control modules are used to coordinate communication between and execution of the functions. Partitioning the architecture horizontally provides a number of distinct benefits:

- Software that is easier to test

- Software that is easier to maintain

- Propagation of fewer side effects

- Software that is easier to extend

 

Because major functions are decoupled from one another, change tends to be less complex and extensions to the system tend to be easier to accomplish without side effects.

 

Vertical partitioning, often called factoring, suggests that control and work should be distributed top-down in the program structure. The nature of change in program structures justifies the need for vertical partitioning.

 

4.7 Data Structure

 

Data structure is a representation of the logical relationships among individual elements of data. Data structure is as important as program structure to the representation of software architecture. It dictates the organization, methods of access, degree of associativity, and processing alternatives for information.

 

A scalar item is the simplest of all data structures. A scalar item represents a single element of information that may be addressed by an identifier. When scalar items are organized as a list or contiguous group, a sequential vector is formed. When the sequential vector is extended to two, three, and ultimately an arbitrary number of dimensions, an n-dimensional space is created. A linked list is a data structure that organizes noncontiguous scalar items, vectors, or spaces in a manner that enables them to be processed as a list. A hierarchical data structure is implemented using multilinked lists that contain scalar items, vectors, and possibly n-dimensional spaces. A hierarchical structure is commonly encountered in applications that require information categorization and associativity.
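A minimal Python sketch of a linked list follows, assuming integer scalar items; the Node and traverse names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Each node holds one scalar item plus a link to the next node, so
# noncontiguous storage can still be processed sequentially, as a list.
@dataclass
class Node:
    value: int
    next: Optional["Node"] = None

def traverse(head: Optional[Node]) -> list:
    items = []
    while head is not None:
        items.append(head.value)
        head = head.next
    return items

head = Node(3, Node(1, Node(4)))
print(traverse(head))  # [3, 1, 4]
```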

 

4.8 Software Procedure

 

Program structure defines control hierarchy without regard to the sequence of processing and decisions. Software procedure focuses on the processing details of each module individually. Procedure must provide a precise specification of processing, including sequence of events, exact decision points, repetitive operations, and even data organization and structure.

 

4.9 Information Hiding

 

Information hiding implies that effective modularity can be achieved by defining a set of independent modules that communicate with one another only that information necessary to achieve software function. Abstraction helps to define the procedural entities that make up the software. Hiding defines and enforces access constraints to both procedural detail within a module and the local data structure used by the module.
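A minimal sketch of information hiding in Python, using an invented Inventory module whose storage layout is hidden behind two operations:

```python
class Inventory:
    """Only add() and on_hand() form the interface; storage is hidden."""

    def __init__(self) -> None:
        self._counts = {}  # leading underscore: internal detail, not interface

    def add(self, part: str, qty: int) -> None:
        self._counts[part] = self._counts.get(part, 0) + qty

    def on_hand(self, part: str) -> int:
        return self._counts.get(part, 0)

inv = Inventory()
inv.add("bolt", 100)
print(inv.on_hand("bolt"))  # 100; callers never touch _counts directly
```

Because callers depend only on the two operations, the underlying dictionary could later be replaced, say by a database table, without touching any client code.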

 

5. Effective Modular Design

 

5.1. Functional Independence

 

Functional independence is a direct outgrowth of modularity and of the concepts of abstraction and information hiding. Independence is measured using two qualitative criteria: cohesion and coupling. Cohesion is a measure of the relative functional strength of a module. Coupling is a measure of the relative interdependence among modules.

 

5.2. Cohesion:

 

Cohesion is a natural extension of the information hiding concept. A module that performs a set of tasks that relate to each other only loosely is said to be coincidentally cohesive. A module that performs tasks that are related logically is logically cohesive. A module that contains tasks related by the fact that all must be executed within the same span of time exhibits temporal cohesion.

 

When processing elements of a module are related and must execute in a specific order, procedural cohesion exists. When all processing elements concentrate on one area of a data structure, communicational cohesion is present.
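The contrast is easiest to see in code. In this hypothetical sketch, the first function is only coincidentally cohesive, while the second is functionally cohesive:

```python
# Coincidental cohesion: unrelated tasks grouped into one module by accident.
def misc_utilities(path: str, x: float, y: float):
    text = open(path).read()              # file input
    distance = (x ** 2 + y ** 2) ** 0.5   # geometry
    return text, distance

# Functional cohesion: every statement contributes to a single task.
def euclidean_distance(x: float, y: float) -> float:
    return (x ** 2 + y ** 2) ** 0.5
```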

 

5.3. Coupling:

 

Coupling is a measure of interconnection among modules in a software structure. Coupling depends on the interface complexity between modules.

 

As long as a simple argument list is present, low coupling (data coupling) is exhibited in this portion of the structure. A variation of data coupling, called stamp coupling, is found when a portion of a data structure is passed via a module interface.

 

When coupling is characterized by the passage of control between modules, control coupling is present. When modules are tied to an environment external to the software, external coupling occurs. When a number of modules reference a global data area, common coupling results. Content coupling occurs when one module makes use of data or control information maintained within the boundary of another module. Compiler coupling ties source code to specific attributes of a compiler; operating system coupling ties the design and resultant code to the operating system.
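The following hypothetical Python fragments sketch several of these coupling levels; the function names and the tax scenario are invented for illustration.

```python
# Data coupling: only the simple arguments that are needed are passed.
def compute_tax(amount: float, rate: float) -> float:
    return amount * rate

# Stamp coupling: a whole record is passed although one field is used.
def compute_tax_stamp(order: dict) -> float:
    return order["amount"] * 0.08

# Control coupling: a passed flag dictates the callee's internal logic.
def render(report: str, as_html: bool) -> str:
    return f"<p>{report}</p>" if as_html else report

# Common coupling: modules communicate through a shared global data area.
TAX_CONFIG = {"rate": 0.08}

def compute_tax_common(amount: float) -> float:
    return amount * TAX_CONFIG["rate"]

print(compute_tax(100.0, 0.08))                       # 8.0 (preferred)
print(compute_tax_stamp({"amount": 100.0, "id": 7}))  # 8.0
```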

 

6. Design Heuristics for Effective Modularity

 

The program structure can be manipulated according to the following set of heuristics:

1.        Evaluate the “first iteration” of the program structure to reduce coupling and improve cohesion. Once the program structure has been developed, modules may be exploded or imploded with an eye toward improving module independence. An exploded module becomes two or more modules in the final program structure. An imploded module is the result of combining the processing implied by two or more modules.

2.        Attempt to minimize structures with high fan-out; strive for fan-in as depth increases.

3.        Keep the scope of effect of a module within the scope of control of that module. The scope of effect of a module is defined as all other modules that are affected by a decision made in that module.

4.        Evaluate module interfaces to reduce complexity and redundancy and improve consistency. Module interface complexity is a prime cause of software errors. Interfaces should be designed to pass information simply and should be consistent with the function of a module.

5.        Define modules whose function is predictable, but avoid modules that are overly restrictive.

6.        Strive for “controlled entry” modules by avoiding “pathological connections”. This design heuristic warns against content coupling.


August 24, 2008 · Posted in Design, Software Engineer · 2 Comments

Software Testing

 

1.        Strategic Approaches to Software Testing:

 

A number of software testing strategies provide the software developer with a template for testing, and all have the following generic characteristics:

- Testing begins at the component level and works “outward” toward the integration of the entire computer-based system.

- Different testing techniques are appropriate at different points in time.

- Testing is conducted by the developer of the software and an independent test group.

- Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

 

1.1. Verification and Validation

       

Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.

Verification: “Are we building the product right?”

Validation: “Are we building the right product?”

 

1.2. Organizing for Software Testing

 

There are often a number of misconceptions that can be erroneously inferred from the preceding discussion:

(1)     That the developer of software should do no testing at all.

(2)     That the software should be “tossed over the wall” to strangers who will test it mercilessly.

(3)     That testers get involved with the project only when the testing steps are about to begin.

 

The software developer is always responsible for testing the individual units of the program, ensuring that each performs the function for which it was designed. In many cases the developer also conducts integration testing – a testing step that leads to the construction of the complete program structure. Only after the software architecture is complete does an independent test group become involved.

 

The role of an independent test group is to remove the inherent problems associated with letting the builder test what has been built. It also removes the conflict of interest that may otherwise be present.

 

2.        Strategic Issues

 

The following issues must be addressed if a successful software testing strategy is to be implemented:

- Specify product requirements in a quantifiable manner long before testing commences.

- State testing objectives explicitly.

- Understand the users of the software and develop a profile for each user category.

- Develop a testing plan that emphasizes “rapid cycle testing”.

- Build “robust” software that is designed to test itself.

- Use effective formal technical reviews as a filter prior to testing.

- Conduct formal technical reviews to assess the test strategy and test cases themselves.

- Develop a continuous improvement approach for the testing process.

 

3.        Unit Testing

 

Unit testing focuses verification effort on the smallest unit of software design – the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. Unit testing is white-box oriented.

 

3.1. Unit Test Considerations:

 

Several classes of tests occur as part of unit testing. The module interface is tested to ensure that information properly flows into and out of the program unit under test. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

 

The common errors in computations are

(1)     misunderstood or incorrect arithmetic precedence,

(2)     mixed mode operations,

(3)     incorrect initialization,

(4)     precision inaccuracy,

(5)     incorrect symbolic representation of an expression.

Comparison and control flow are closely coupled to one another. Test cases should uncover errors such as

(1)     comparison of different data types,

(2)     incorrect logical operators or precedence,

(3)     expectation of equality when precision error makes equality unlikely,

(4)     incorrect comparison of variables,

(5)     improper or nonexistent loop termination,

(6)     failure to exit when divergent iteration is encountered, and

(7)     improperly modified loop variables.

Among the potential errors that should be tested when error handling is evaluated are

(1)     Error description is unintelligible.

(2)     Error noted does not correspond to error encountered.

(3)     Error condition causes system intervention prior to error handling.

(4)     Exception – condition processing is incorrect.

(5)     Error description does not provide enough information to assist in the location of the cause of the error.
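A small sketch of how some of these error classes translate into unit test cases, using Python's standard unittest module and an invented average() component; the boundary, precision-aware comparison, and error-handling tests correspond to the lists above.

```python
import math
import unittest

# Hypothetical component under test.
def average(values):
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    def test_boundary_single_element(self):
        # Boundary condition: the smallest valid input.
        self.assertEqual(average([7.0]), 7.0)

    def test_precision_aware_comparison(self):
        # Expecting exact float equality is a classic comparison error;
        # compare with a tolerance instead.
        self.assertTrue(math.isclose(average([0.1, 0.2]), 0.15))

    def test_error_path(self):
        # Error handling: the error raised should match the error expected,
        # and its description should be intelligible.
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()
```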

 

3.2 Unit Test Procedures:

 

Unit testing is normally considered an adjunct to the coding step. After source-level code has been developed, reviewed, and verified for correspondence to the component-level design, unit test case design begins. Each test case should be coupled with a set of expected results. Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.

 

Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a component, the number of test cases is reduced and errors can be easily predicted and uncovered.
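A minimal sketch of a driver and a stub in Python; the currency conversion scenario and all names are invented for illustration.

```python
# Component under test: needs an exchange rate from a subordinate module.
def price_in_eur(amount_usd: float, get_rate) -> float:
    return round(amount_usd * get_rate("USD", "EUR"), 2)

# Stub: stands in for the real subordinate module (say, a network rate
# service), supplying just enough canned behavior for the test to run.
def stub_rate(src: str, dst: str) -> float:
    return 0.9

# Driver: feeds test data to the component and checks the expected result.
def driver() -> None:
    result = price_in_eur(100.0, stub_rate)
    assert result == 90.0, f"unexpected result: {result}"
    print("unit test passed")

driver()
```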

 

4.        Integration Testing

 

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design. In the nonincremental, “big bang” approach, all components are combined in advance and the entire program is tested as a whole; errors are then difficult to isolate and correct.

 

Incremental integration is the preferred alternative: the program is constructed and tested in small increments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied.

 

4.1. Top-down Integration

 

Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

 

The integration process is performed in a series of five steps:

(1)     The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.

(2)     Depending on the integration approach selected, subordinate stubs are replaced as each component is integrated.

(3)     Tests are conducted as each component is integrated.

(4)     On completion of each set of tests, another stub is replaced with the real component.

(5)     Regression testing may be conducted to ensure that new errors have not been introduced.
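A tiny hypothetical sketch of these steps in Python, with an invented thermostat control module exercised first against a stub:

```python
# Stub substituted for a component that is not yet integrated (step 1).
def read_temperature_stub() -> float:
    return 21.5  # canned value so the control module above it can run

# Main control module, exercised first in top-down integration.
def thermostat(read_temperature) -> str:
    return "heat on" if read_temperature() < 20.0 else "heat off"

# Tests are conducted against the stub (step 3)...
print(thermostat(read_temperature_stub))  # heat off

# ...later the stub is replaced with the real sensor component (step 4),
# and the same tests are re-run as regression tests (step 5).
```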

 

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. The top-down strategy sounds relatively uncomplicated, but in practice logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels.

 

4.2. Bottom-up Integration

 

Bottom-up integration begins construction and testing with atomic modules. Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated. A bottom-up integration strategy may be implemented with the following steps:

(1) Low-level components are combined into clusters that perform a specific software subfunction.

(2) A driver is written to coordinate test case input and output.

(3) The cluster is tested.

(4) Drivers are removed and clusters are combined moving upward in the program structure.

 

4.3 Regression Testing

 

Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.

 

Regression testing may be conducted manually, by re-executing a subset of all test cases or using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison. The regression test suite contains three different classes of test cases:

- A representative sample of tests that will exercise all software functions.

- Additional tests that focus on software functions that are likely to be affected by the change.

- Tests that focus on the software components that have been changed.
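One simple way to realize the third class (and approximate the second) is to tag each test case with the components it exercises and select the affected subset after a change. This is an invented sketch, not a description of any particular tool.

```python
# Each test is tagged with the components it exercises; after a change,
# the regression subset is every test that touches a changed component.
TEST_CATALOG = {
    "test_login":    {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"catalog"},
}

def regression_subset(changed_components: set) -> list:
    return [name for name, components in TEST_CATALOG.items()
            if components & changed_components]

print(regression_subset({"payment"}))  # ['test_checkout']
```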

 

4.4 Smoke Testing

 

Smoke testing is an integration testing approach that is commonly used when “shrink-wrapped” software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis. The smoke testing approach encompasses the following activities:

- Software components that have been translated into code are integrated into a “build”. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.

- A series of tests is designed to expose errors that will keep the build from properly performing its function.

- The build is integrated with other builds, and the entire product is smoke tested daily.

Smoke testing provides a number of benefits when it is applied to complex, time-critical software engineering projects:

- Integration risk is minimized.

- The quality of the end product is improved.

- Error diagnosis and correction are simplified.

- Progress is easier to assess.

 

The integration test plan describes the overall strategy for integration. Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. For example, the phases might address “user interaction”, “data manipulation and analysis”, “display processing and generation”, and “database management”.

 

A history of actual test results, problems, or peculiarities is recorded in the Test Specification. This information can be vital during software maintenance.

 

5.        Validation Testing

 

Once software is completely assembled as a package and interfacing errors have been uncovered and corrected, a final series of software tests – validation testing – begins. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer.

 

5.1 Validation Test Criteria

 

Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan and procedure are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all performance requirements are attained, documentation is correct, and human-engineered and other requirements are met. After each validation test case has been conducted, one of two possible conditions exists: (1) the function or performance characteristics conform to specification and are accepted, or (2) a deviation from specification is uncovered and a deficiency list is created.

 

5.2. Alpha and Beta Testing

 

It is virtually impossible for a software developer to foresee how the customer will really use a program.

 

The Alpha test is conducted at the developer’s site by a customer. The software is used in a natural setting with the developer “looking over the shoulder” of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.

 

The Beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a “live” application of the software in an environment that cannot be controlled by the developer. The customer records all problems that are encountered during beta testing and reports these to the developer at regular intervals.

 

6.        System Testing

 

Software is only one element of a larger computer-based system. Software is incorporated with other system elements, and a series of system integration and validation tests are conducted. These tests fall outside the scope of the software process and are not conducted solely by software engineers. A classic system testing problem is “finger-pointing”. This occurs when an error is uncovered and each system element developer blames the others for the problem. The software engineer should anticipate potential interfacing problems and

(1)     design error-handling paths that test all information coming from other elements of the system,

(2)     conduct a series of tests that simulate bad data or other potential errors at the software interface,

(3)     record the results of tests to use as “evidence” if finger-pointing does occur, and

(4)     participate in planning and design of system tests to ensure that software is adequately tested.

 

System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.

 

6.1 Security Testing

 

Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, the tester plays the role of the individual who desires to penetrate the system. The role of the system engineer is to make penetration cost more than the value of the information that will be obtained.

 

6.2 Stress Testing

 

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. A variation of stress testing is a technique called sensitivity testing. In some situations, a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation.

 

6.3 Performance Testing

 

Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.
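A minimal sketch of software instrumentation for performance measurement, using Python's time.perf_counter; the measure helper is invented for illustration.

```python
import time

def measure(fn, *args, repeats: int = 5):
    """Time fn over several repeats; report best and mean wall-clock time."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings)

best, mean = measure(sorted, list(range(100_000, 0, -1)))
print(f"best {best:.4f}s, mean {mean:.4f}s")
```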

August 23, 2008 · Posted in Software Engineer, Testing · Leave a comment