Project Documentation Standards

Introduction

This documentation standard will be used during a one-semester software engineering course. The standard suggests common content for each phase of the development process. It is a guideline rather than a law: each team should adapt the suggested content of each document to its specific situation.

Software Requirements Specification (SRS)

Audience and goals of SRS

The requirements specification has two distinct audiences: the client and the technical personnel who will design the product.
The SRS has several goals:

  • establish grounds for agreement among the key players as to what problem is to be solved by the software
  • define a baseline for the remaining technical activities, such as design, implementation, testing, and maintenance
  • give a basis for traceability of the requirements through the lifecycle
  • provide a baseline that management can use in guiding the development process

Preliminary SRS vs. Final SRS

Often, the requirements specification is developed in two distinct steps: the preliminary requirements specification and the final requirements specification. The former is generally the one that the client and the development team use to agree on the major aspects of the product. The latter is the document that will support the design effort.

SRS Suggested Content

The following content suggests a reasonable organization for the Software Requirements Specification.
1.  Introduction
    1.1  Purpose
    1.2  Scope
    1.3  Definitions, Acronyms, and Abbreviations
    1.4  References
    1.5  Overview
2.  General Description
    2.1  Product Perspective (i.e. relationship to other
         parts of the system or to other products)
    2.2  Product Functions
    2.3  User Characteristics
    2.4  General Constraints (e.g. hardware limitations,
         policies, interfaces to other application software,
         networks)
    2.5  Assumptions and Dependencies
3.  Specific Requirements
    3.1  External Interface Requirements (User, Hardware,
         Software, Communications)
    3.2  Functional Requirements (for each, give introduction,
         inputs, processing, and outputs)
    3.3  Performance Requirements
    3.4  Design Constraints (due to environmental limitations,
         hardware limitations, compliance with standards, etc.)
    3.5  Attributes (e.g. security, availability,
         maintainability)
    3.6  Other requirements

Tools for describing requirements

The following tools can be used in the SRS to describe various aspects of the requirements (based on Glass, p. 49):
  • natural language -- The requirements can be written in everyday narrative English
  • structured English -- In writing structured English, the language is constrained to a restricted vocabulary of nouns and verbs that are relevant to the problem domain.
  • scenarios -- A scenario presents a specific state of the software system, including screen appearance, the options that are available, etc. A series of scenarios can be used to show a sequence of behaviors during a user session.
  • finite state machine -- This provides a concise notation for representing the states of the system and the functions that map actions to states (and thus allow for transitions between states).
  • decision tables and decision trees -- Decision tables provide a tabular form and decision trees provide a graphical form for expressing alternatives.
  • data structure model -- This is a graphical notation that is used to represent relationships between the major data entities in the system.
  • data flow diagram -- Such a diagram represents the flow of data through a system and the states at which the data are processed.
  • data dictionary -- This is simply a table with definitions for each of the data items that are used in the requirements.
  • formal specification languages -- A formal notation is used to present the requirements. The notation will include defined operators and syntactic rules and will be used to express relations and constraints of the requirements.
  • Petri nets -- This is a graphical tool designed for modeling systems with interacting concurrent components. It has four components: a finite set of places, a finite set of transitions, an input function, and an output function.
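
As an illustrative sketch of the finite state machine tool above, the states and transitions can be captured as a simple transition table in code. The states and events here (a login dialog) are hypothetical, not drawn from any particular SRS:

```python
# Transition table for a hypothetical login dialog:
# (current state, event) -> next state.
TRANSITIONS = {
    ("logged_out", "submit_credentials"): "authenticating",
    ("authenticating", "auth_ok"): "logged_in",
    ("authenticating", "auth_fail"): "logged_out",
    ("logged_in", "logout"): "logged_out",
}

def next_state(state, event):
    """Return the next state, or raise if the event is invalid here."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
```

Because the table enumerates every legal (state, event) pair, it doubles as a concise requirements statement: any pair not listed is, by definition, an error.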

Software Design Specification (SDS)

Audience and goals of SDS

The design specification has as its primary audience the coders who will implement the design. The SDS has several goals:
  • to fully understand the problem being solved
  • to decompose the problem into appropriate parts
  • to provide a high-quality basis for the implementation phase

SDS Suggested Content

The following content suggests a reasonable organization for the Software Design Specification.
1.  Introduction
    1.1  Purpose
    1.2  Scope
    1.3  Definitions, Acronyms, and Abbreviations
    1.4  References (at a minimum, this should reference the SRS!)
    1.5  Overview of document
2.  System Architecture Description
    2.1  Overview of Modules / Components
    2.2  Structure and relationships
3.  Description of each component (see below for sections)
4.  Appendices (as needed), including pseudocode for all components 

Sections for each component described in Part 3 of the SDS

For each component described in section 3, the following information should be given:
  • Identification: The unique name for the component and the location of the component in the system.
  • Type: A module, a subprogram, a data file, a control procedure, a class, etc.
  • Purpose: Function and performance requirements implemented by the design component, including derived requirements. Derived requirements are not explicitly stated in the SRS, but are implied or adjunct to formally stated SRS requirements.
  • Function: What the component does, the transformation process, the specific inputs that are processed, the algorithms that are used, the outputs that are produced, where the data items are stored, and which data items are modified.
  • Subordinates: The internal structure of the component, the constituents of the component, and the functional requirements satisfied by each part.
  • Dependencies: How the component's function and performance relate to other components. How this component is used by other components. The other components that use this component. Interaction details such as timing, interaction conditions (such as order of execution and data sharing), and responsibility for creation, duplication, use, storage, and elimination of components.
  • Interfaces: Detailed descriptions of all external and internal interfaces as well as of any mechanisms for communicating through messages, parameters, or common data areas. All error messages and error codes should be identified. All screen formats, interactive messages, and other user interface components (originally defined in the SRS) should be given here.
  • Resources: A complete description of all resources (hardware or software) external to the component but required to carry out its functions. Some examples are CPU execution time, memory (primary, secondary, or archival), buffers, I/O channels, plotters, printers, math libraries, hardware registers, interrupt structures, and system services.
  • Processing: The full description of the functions presented in the Function subsection. Pseudocode can be used to document algorithms, equations, and logic.
  • Data: For the data internal to the component, describe the representation method, initial values, use, semantics, and format. This information will probably be recorded in the data dictionary.

Tools for describing designs

The following tools are some that can be used in the SDS to describe various aspects of the design (based on Behforooz and Hudson, p. 212). In addition, the tools listed for describing requirements can also be useful in capturing various parts of the design.
  • Data dictionary -- A comprehensive definition of all of the data (and control) elements in the software product or system. Includes a clear and complete definition of each data item and its synonyms. For each entry, the following information should be given:
    • Description of data item
    • Name of data item (its formal name)
    • Aliases or acronyms for this data item
    • Uses (which processes or modules use the data item; how and when the data item is used)
    • Format (standard format for representing the data item)
    • Additional information such as initial value(s), limitations, default values
  • Trade-off matrix -- a matrix that captures the decision criteria and the relative importance of each decision criterion; allows comparison of each alternative in quantifiable terms
  • Decision table -- a table that shows how an operator or system deals with input conditions and subsequent actions
  • Timing diagram -- describes the timing relationships among the various functions and behaviors of the component
  • Object Interaction Diagram: Illustrates the calling interaction of an object-oriented system. This diagram consists of an ellipse for the system driver, a box for each class, and directed lines between clients and servers. Each box contains the name of the object and its operations.
  • Inheritance Diagram: Shows the relationships among the various types and ADTs in the system. Inheritance diagrams should follow the notation defined by Rumbaugh et al. The Rumbaugh notation consists of one box for each class. Each box is positioned in a hierarchical tree according to its inheritance characteristics. Each class box includes the class name, its attributes, and its operations.
  • Aggregation Diagram: Shows the relationships between objects that contain or exclusively manage objects of another module, ADT, or class. Each module will be represented by its name. The relationship will be indicated by a directed line from container to contained, labeled with the cardinality of the relationship (1:1, 1:n, etc.).
  • Structure Chart: A tree diagram of the subroutines in a program. Indicates the interconnections among the subroutines. The subroutines should be labeled with the same name used in the pseudocode.
  • Pseudocode: Describes, in an easily readable and modular form, how the software system will solve the given problem. "Pseudocode" does not refer to a precise form of expression; it refers to the simple use of standard English terms in a restricted manner to describe the algorithmic process involved. Good pseudocode must use a restricted subset of English in such a way that it resembles a good high level programming language. It must be formatted similarly to actual code. The pseudocode description of the problem should state the problem solution so clearly that it can easily be translated to the programming language to be used. Thus, it must include flow of control.
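
To illustrate the pseudocode tool above, here is a sketch (the algorithm, binary search, is chosen only as an example) showing restricted-English pseudocode alongside its direct translation, so that the correspondence in flow of control is visible line by line:

```python
# Pseudocode (restricted English, formatted like code):
#   WHILE low <= high
#     SET mid TO (low + high) DIV 2
#     IF item AT mid EQUALS target THEN RETURN mid
#     ELSE IF item AT mid IS LESS THAN target THEN SET low TO mid + 1
#     ELSE SET high TO mid - 1
#   RETURN not-found

def binary_search(items, target):
    """Direct translation of the pseudocode above into Python."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found
```

Note that every control construct in the pseudocode maps to exactly one construct in the code; that is the "easily translated" property the standard asks for.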

Verification and validation plan (V&V Plan)

Audience and goals of V&V Plan

The V&V Plan describes the various types of testing. These activities can also be referred to as quality assurance activities or test and evaluation activities. The primary audience for this document will be those persons who are responsible for carrying out the testing activities.
Many authors include the various parts of the V&V Plan as part of the SDS. For the software engineering class, I have split these areas into two separate documents.

V&V Plan Suggested Content

The following content suggests a reasonable organization for the Verification and Validation Plan.
1.  Traceability from SRS to SDS (see below)
2.  Test plans and procedures for individual software components
3.  Test plans and procedures for the software product as a whole
4.  Test cases (includes the expected results)
5.  Test cross-reference (see below)
6.  Acceptance test and preparation for delivery
    6.1  Procedure by which the software product will be acceptance tested
    6.2  Specific acceptance criteria
    6.3  Scenario by which the software product will be installed
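
Item 4 above asks that test cases include their expected results. One way to make that pairing explicit is a table-driven test, sketched here with a hypothetical function under test:

```python
# Hypothetical function under test, for illustration only.
def is_valid_username(name):
    """A username is 3-12 alphanumeric characters."""
    return 3 <= len(name) <= 12 and name.isalnum()

# Each test case records the input and the expected result,
# as the V&V Plan's "Test cases" section requires.
TEST_CASES = [
    ("abc", True),        # minimum legal length
    ("ab", False),        # too short
    ("user_name", False), # underscore is not alphanumeric
]

def run_cases():
    """Return the list of (input, expected) pairs that failed."""
    return [(inp, expected) for inp, expected in TEST_CASES
            if is_valid_username(inp) != expected]
```

Keeping the cases as data rather than inline assertions makes the test cross-reference (item 5) straightforward: each row can carry a requirement identifier.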

Tools for describing the V&V Plan

  • Test plan -- A test plan should be created for each type of test. The following major components should be included, although the details will vary depending on which testing phase is being planned for.
    • Testing process: Describe the procedure in terms of phases and activities.
    • Requirements traceability: Relate the testing to the requirements given in the SRS.
    • Items or components tested: Identify the items being tested as precisely as possible.
    • Testing schedule and resources: Give the overall testing schedule and resource allocation.
    • Test recording procedures: Describe how the results of the tests will be systematically recorded. It must be possible for an outside agency to inspect the test plan, the test cases, and the test results to determine that the testing process has been carried out correctly.
    • Hardware and software requirements: Describe the software tools that are needed for running these tests as well as any hardware that will be needed.
    • Constraints: Give any constraints that will affect the testing process.
  • Traceability matrix -- ties individual requirements in the SRS to specific paragraphs in the SDS and to specific lines in the pseudocode; the objective is to assure that all requirements listed in the SRS have been addressed in the design.
  • Test requirement matrix -- a matrix that shows how every requirement listed in the SRS will be tested; for each requirement, the following information is generated:
    • a unique identifier for the requirement (probably a number)
    • a description of the requirement
    • a reference to the part of the SRS where the requirement is first described
    • the means by which the requirement will be tested (e.g. by analysis, inspection, demonstration, or testing)
    • specific variables that are associated with the requirement
    • ranges of values for the variables
    • system functionality associated with the requirement
    • comments
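
One row of the test requirement matrix above can be kept as plain structured data, so it can be filtered, cross-referenced, or exported. The requirement itself is hypothetical; the field names follow the list above:

```python
# One hypothetical row of a test requirement matrix.
requirement_row = {
    "id": "REQ-001",
    "description": "Reject passwords shorter than 8 characters",
    "srs_reference": "SRS 3.2.4",
    "test_means": "testing",  # analysis, inspection, demonstration, or testing
    "variables": ["password_length"],
    "value_ranges": {"password_length": "0..7 rejected, 8 or more accepted"},
    "functionality": "user account creation",
    "comments": "",
}

def untested(rows):
    """Return the ids of requirements with no test means assigned yet --
    a quick completeness check over the matrix."""
    return [row["id"] for row in rows if not row.get("test_means")]
```

Running a completeness check like `untested` over the whole matrix is one mechanical way to show that every SRS requirement will be exercised.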

Implementation documentation

Audience and goals of Implementation

The program code (or source listing) is the primary product of the implementation phase. The human audience of the implementation will be the team members during the implementation and test phases. After deployment, the main human audience will be those who are given the task of maintaining the product.

Issues related to the Implementation

Important issues include the physical organization of the system components, commenting, and programming style; these issues are addressed below.
Physical Organization of System Components
Different components of the system will appear in different compilation units, provided that the implementation language/environment supports this type of organization. This organization should be sensible and related to the overall logical design of the system. The physical organization should follow from the design documentation.
Comments
Each component of the implementation should be well commented and able to stand on its own. Comments fall into two categories: prologue and explanatory.
The prologue comments will appear at the beginning of each component, module, or block (referred to as a "unit of code"). The following suggests the contents of the prologue:

  • Purpose: what does this unit of code accomplish?
  • Administrative information: author(s), company or institution
  • Version information: date of completion, change history
  • Instructions for correctly executing the unit of code
  • Special needs (operating system, system software, operational needs)
  • Pre- and post-conditions to the unit of code
  • Variable list, with a brief description of their use
  • Classes, abstract data types, and data structures, with a brief description of their use
  • Input data, with a brief description
  • Output: All output produced
  • Subprograms of the unit
  • Other information relevant to understanding and modifying the unit
  • Pseudocode for the algorithms represented in this unit of code
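
A sketch of such a prologue, written as a Python docstring and covering several of the items above (the unit of code and all administrative details are hypothetical):

```python
def compute_average(scores):
    """Compute the arithmetic mean of a list of scores.

    Purpose: supports the grade-report unit of the (hypothetical) product.
    Author(s): Example Team, Example University.
    Version: 1.0, completed 2024-01-15; no changes since initial release.
    Pre-condition: scores is a non-empty list of numbers.
    Post-condition: returns a float; the input list is not modified.
    Input: scores -- list of numeric grade values.
    Output: the mean of the scores, as a float.
    Subprograms: none.
    """
    return sum(scores) / len(scores)
```

Placing the prologue in the docstring (rather than a plain comment) has the advantage that tools such as `help()` can retrieve it.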
Explanatory comments are used to clarify the logic of the code or the program structure. Explanatory comments should be used sparingly, like salt: "A few grains enhance the flavor, too much and the taste is overwhelmed."
Programming Style
Programming style refers to those conventions that enhance the readability of programs. Some conventions are given below.
Prettyprinting: Use indentation and empty lines so that the visual appearance of the program listing mirrors its logical structure. (Be consistent with indentation increments!) Declare only one data item or variable per line. For each declared data item or variable, include a brief comment documenting its purpose. Write only one program statement per line.
Meaningful identifier names: Well-chosen identifiers significantly enhance readability and are a significant element of internal program documentation. Identifiers should be meaningful; avoid cuteness, single-letter identifiers, meaningless abbreviations, and identifiers that too closely resemble one another. (For example, HT is not a meaningful variable name for a hash table.) Choose names that accurately describe the role of the associated entity (for example, COUNT or I is not the best name for an integer variable that indexes an array). Object names should indicate entities. Operation names should indicate actions. Observe any standards for abbreviations, prefixes, and suffixes.
Organizational consistency: Be systematic in grouping and ordering of declarations. For example, declared variables might be grouped by similar role, or listed alphabetically, but should not appear in random order. The same applies for all other declarations, such as subprogram declarations.
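
The conventions above, sketched on a small hypothetical fragment: one declaration per line, each with a purpose comment, and names chosen to describe the role of each entity rather than abbreviated:

```python
# Constants grouped by role, one per line, each documented.
MAX_RETRIES = 3       # upper bound on reconnection attempts
TIMEOUT_SECONDS = 30  # how long to wait before giving up

def count_failed_logins(attempts):
    """Count the login attempts whose status is 'failed'.

    'failed_total' names the variable's role; compare a cute
    abbreviation like 'cnt' or a bare index like 'i'.
    """
    failed_total = 0
    for attempt in attempts:
        if attempt["status"] == "failed":
            failed_total += 1
    return failed_total
```

Nothing here changes what the program computes; the point is that a maintainer can read the role of every name without consulting other documents.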

Verification and validation results (V&V Results)

Audience and goals of V&V Results

The Verification and Validation Results document describes the outcome of all testing. This report will be started as soon as component test is underway. Each section should be completed as the associated phase is completed.
There are several audiences for the V&V Results. Quality assurance will use the V&V Results to help monitor the quality of the product. Management will use the V&V Results in order to understand progress toward quality goals and planned completion dates. The development team will use the V&V Results to evaluate both the product and the process. The customer will want the results of acceptance testing.

V&V Results document suggested content

Each section in the V&V Results document should be related back to the relevant sections in the V&V Plan.
1.  Introduction and overview
2.  Results from testing individual components
    2.1  Summary of component test
    2.2  Evaluation of test cases
    2.3  Lessons learned
3.  Results from testing product as a whole
    3.1  Summary of integration test
    3.2  Evaluation of test cases
    3.3  Lessons learned
4.  Outcome of acceptance test and delivery
5.  Summary of defects and solutions
6.  Additional information

Tools for describing the V&V Results

The exact contents of the V&V Results document will depend on how information was presented in the V&V Plan. Rather than giving detailed results within the report, it is generally better to summarize the outcomes. If appropriate, detailed outcomes can be included as appendices. Tables and figures should be used to show the outcomes as clearly as possible.
Some questions that can be considered in writing the report:

  • How well did the actual outcome correspond to the predicted outcome?
  • Where were most of the defects found?
  • Were some modules more error prone than others?

Project legacy

The project legacy is the collection of all documents listed in this standard plus additional information that will assist future student teams should the project be extended or maintained. The project legacy will be turned in both electronically and on paper.
The project legacy will include the following information:

  • A README file that describes the contents of the project legacy
  • Final version of the SRS
  • Final version of the SDS
  • Final version of the V&V Plan
  • The final version of the implemented product
  • The final version of the V&V Report
  • Copies of all team reports
  • Copies of other relevant reports, if any
  • Copies of relevant email messages (to the CEO, the customer, between team members, etc.), if any
  • Final report
    1. Current status. What is the current status of the development project? Are there parts of the product that have not been adequately tested? Are there product features that could not be implemented due to time or resource constraints?
    2. Recommended work. List enhancements of existing features or new features or both. Some of these features may be described in your SRS or SDS; if so, provide references to the relevant document and pages.
    3. Advice to teams continuing this project. Use this section to help someone coming to this project understand information that is not documented elsewhere in this legacy. 
