11.3 Product Stability and Integrity

11.3.1 Practice 14. Inspect Requirements and Design

Practice Essentials

A. All products that are placed under CM and are used as a basis for subsequent development must successfully complete a formal inspection prior to their release to CM.

B. The inspection needs to follow a rigorous process defined in the software development plan and should be based on agreed-to entry and exit criteria for that specific product.

C. At each inspection, specific metrics should be collected and tracked that describe the defects found, defect removal efficiency, and the efficiency of the inspection process itself.

D. All products to be placed under CM should be inspected as close to their production as feasible.

E. Inspections should be conducted beginning with concept definition and ending with completion of the engineering process.

F. The program needs to fund inspections and track rework savings.

Implementation Guidelines

A. The DEVELOPER will implement a formal, structured inspection/peer review process that begins with the first system requirements products and continues through architecture, design, code, integration, testing, and documentation products and plans. The process needs to be documented and controlled per the SDP.

B. The project should set a goal of finding at least 80 percent of the defects in every product undergoing a structured peer review or other formal inspection.

C. Products should not be accepted into a CM baseline until they have satisfactorily completed a structured peer review.

D. The DEVELOPER needs to collect and report metrics concerning the number of defects found in each structured peer review, the time between creating and finding each defect, where and when each defect was identified, and the efficiency of defect removal (a sketch of such a metrics computation follows this list).

E. Successful completion of inspections should act as the task exit criteria for non-Level-of-Effort earned value metrics (and other metrics used to capture the effectiveness of the formal inspection process) and as gates to place items under increasing levels of CM control.

F. The DEVELOPER should use a structured architecture inspection technique to verify correctness and related system performance characteristics.
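Guideline D above calls for defect metrics; the following is a minimal sketch of how such metrics might be computed. The record fields and the defect-removal-efficiency formula (defects caught by the inspection divided by all defects eventually found in the product, including escapes discovered later) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Defect:
    product: str               # inspected work product, e.g., an SRS section
    injected_on: date          # when the defect was created
    found_on: date             # when the defect was identified
    found_in_inspection: bool  # True if caught by the structured peer review

def inspection_metrics(defects: list[Defect]) -> dict:
    """Summarize peer-review results for one work product."""
    caught = [d for d in defects if d.found_in_inspection]
    ages = [(d.found_on - d.injected_on).days for d in defects]
    return {
        "defects_found_in_inspection": len(caught),
        "defects_total": len(defects),
        # Defect removal efficiency: share of all known defects that the
        # inspection caught before the product's release to CM.
        "removal_efficiency": len(caught) / len(defects) if defects else 0.0,
        # Average time a defect stayed latent before being found.
        "mean_days_latent": sum(ages) / len(ages) if ages else 0.0,
    }
```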

11.3.2 Practice 15. Manage Testing As a Continuous Process

Practice Essentials

A. All testing should follow a preplanned process that is agreed to and funded.

B. Every product that is placed under CM should be tested by a corresponding testing activity.

C. All tests should address not only nominal system conditions but also anomalous conditions and system recovery.

D. Prior to delivery, the system needs to be tested in a stressed environment, nominally in excess of 150 percent of its rated capacities.

E. All test products (test cases, data, tools, configuration, and criteria) should be released through CM and be documented in a software version description document.

F. Every test should be described in a traceable procedure and include explicit pass-fail criteria (see the sketch following this list).
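As a minimal sketch of essential F, one way to record a traceable procedure with explicit pass-fail criteria is shown below; the field names, requirement IDs, and the 2-second threshold are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TestProcedure:
    test_id: str          # unique, traceable identifier
    traces_to: list[str]  # requirement IDs this test verifies
    steps: list[str]      # ordered, repeatable procedure steps
    pass_criteria: str    # explicit, checkable pass-fail statement

# Illustrative instance; IDs and the threshold are hypothetical.
login_test = TestProcedure(
    test_id="TP-042",
    traces_to=["REQ-AUTH-003", "REQ-PERF-011"],
    steps=[
        "Submit valid credentials to the login interface",
        "Measure response time under nominal load",
    ],
    pass_criteria="Login succeeds and response time is under 2 seconds",
)
```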

Implementation Guidelines

A. The testing process must be consistent with the RFP and the contract. The award fee should incentivize implementation of the testing practices described below.

B. The ACQUIRER and DEVELOPER each need to plan their portion of the test process and document this plan with test cases and detailed test descriptions. These test cases should be based on use cases drawn from projected operational mission scenarios.

C. The testing process should also include stress/load testing for stability purposes (e.g., verifying that the system remains stable at 95 percent CPU utilization); a load-test sketch follows this list.

D. The test plan should include justifiable testing-stoppage criteria. These give testers a concrete goal: once testing satisfies the criteria, the product is ready for release.

E. The test process should thoroughly test the interfaces between any in-house and COTS functionality, including the timing between the COTS functionality and the bespoke functionality. The test plans need to demonstrate that, if the COTS software fails, the rest of the software can recover adequately; this calls for rigorous stress testing using fault injection.

F. Software testing should include a traceable white-box test process (among other test processes) that verifies the implemented software against CM-controlled design documentation and the requirements traceability matrix.

G. A level of white-box test coverage appropriate to the software being tested should be specified.

H. The white-box and other testing should use automated tools to instrument the software to measure test coverage.

I. All builds for white-box testing need to be done with source code obtained from the CM library.

J. Frequent builds require a high degree of test automation, because more frequent compiles force quick turnaround on all tests, especially during regression testing.

K. A black-box test of integration builds needs to include functional, interface, error recovery, stress, and out-of-bounds input testing.

L. Reused components and objects require high-level testing consistent with the operational/target environment.

M. Software testing should include a separate black-box test level to validate the implemented software. All black-box software tests should trace to controlled requirements and be executed using software built from controlled CM libraries.

N. In addition to verifying static requirements, black-box tests of the fully integrated system should be run against scenarios/sequences of events designed to model field operation.

O. Performance requirements (e.g., sustaining 10,000 transactions per second while keeping response times under 2 seconds) should be verified as an integral part of the black-box test process.

P. An independent QA team should periodically audit selected test cases, test traceability, test execution, and test reports, providing the results of this audit to the ACQUIRER. (The results of this or similar audits may be used as a factor in the calculation of Award Fee.)

Q. Each test developed needs to include pass-fail criteria.
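Guidelines C and O above call for stress and performance testing against explicit thresholds. The sketch below drives concurrent load and applies a pass-fail criterion; the `transaction()` stand-in, the worker counts, and the 95th-percentile 2-second bound are assumptions for illustration.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def transaction() -> None:
    """Stand-in for one operation against the system under test."""
    time.sleep(0.01)  # replace with a real call in practice

def timed_call(_: int) -> float:
    start = time.perf_counter()
    transaction()
    return time.perf_counter() - start

def load_test(workers: int = 50, calls: int = 1000,
              max_p95_seconds: float = 2.0) -> bool:
    """Drive concurrent load and apply an explicit pass-fail criterion."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed_call, range(calls)))
    p95 = statistics.quantiles(latencies, n=20)[18]  # 95th percentile
    print(f"p95 latency under load: {p95:.3f}s")
    return p95 <= max_p95_seconds  # the documented pass-fail criterion

if __name__ == "__main__":
    assert load_test(), "performance pass-fail criterion not met"
```

The same harness can be pushed past rated capacity (e.g., the 150 percent load in Practice Essential D) by raising the worker count until the stability criterion, not just the latency criterion, is exercised.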

11.3.3 Practice 16. Compile and Smoke Test Frequently

Practice Essentials

A. All tests should use systems that are built on a frequent and regular basis (nominally no less than twice a week).

B. All new releases should be regression tested by CM prior to release to the test organization.

C. Smoke testing should qualify new capabilities or components only after successful regression test completion.

D. All smoke tests should be based on a preapproved and traceable procedure and run by an independent organization (not the engineers who produced the code).

E. All defects identified should be documented and subject to the program change control process.

F. Smoke test results should be visible and provided to all project personnel.

Implementation Guidelines

A. From the earliest opportunity to assess the progress of developed code, the DEVELOPER needs to use a process of frequent (one- to two-week intervals) software compile-builds as a means of finding software integration problems early (a sketch of such a build cycle follows this list).

B. A regression facility incorporating a full functional test suite is required as part of the build strategy.

C. The results of the testing of each software build should be made available to all project personnel.
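As an illustration of guidelines A through C, the sketch below runs one frequent-build cycle: fetch source from the CM library, compile-build, run the regression suite, then the smoke test, and report the outcome. The repository URL and the make targets are placeholders, not prescribed tooling.

```python
import subprocess
import sys
from datetime import datetime

# Placeholder CM library location and build commands; substitute the
# project's real repository, build system, and test suites.
CM_REPO = "https://cm.example.org/project.git"
STEPS = [
    (["git", "clone", "--depth=1", CM_REPO, "build_area"], "fetch from CM"),
    (["make", "-C", "build_area", "all"], "compile-build"),
    (["make", "-C", "build_area", "regression"], "regression suite"),
    (["make", "-C", "build_area", "smoke"], "smoke test"),
]

def run_build_cycle() -> bool:
    """Run one build-and-smoke cycle, stopping at the first failing step."""
    for cmd, label in STEPS:
        if subprocess.run(cmd).returncode != 0:
            print(f"{datetime.now():%Y-%m-%d %H:%M} FAILED at: {label}")
            return False
    print(f"{datetime.now():%Y-%m-%d %H:%M} build and smoke test passed")
    return True

if __name__ == "__main__":
    # Per guideline C, publish this result where all project personnel
    # can see it (dashboard, mailing list, etc.).
    sys.exit(0 if run_build_cycle() else 1)
```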


