Removing Defects In Software Quality

During the last
three decades, software engineering practitioners have produced many prediction
models. However, it is difficult to predict the behavior of software before
it is actually used. A good defect removal model must be able to provide
early warning signals, or signals of improvement, so that timely actions can be
planned and implemented. It should be possible to manage and monitor the
quality of the software while it is under development.

Defect removal is
one of the top expense elements in any software project. Effective defect
removal leads to:

  • Reduced development cycle

  • Reduced product cost

  • Improved product quality

  • Increased customer satisfaction


The literature
describes phase-based defect removal models used to predict the defects removed
at the end of each phase. Defects injected, removed, and escaped in each phase
are generally expressed in defects per KLOC (kilo lines of code). The
KLOC size measure may not be accurate when we are in the software requirements
specification (SRS) or design phase. In the current model, function points (FP)
are used for the size estimate, which can be computed more accurately, and from
user needs themselves. The International Function Point Users Group (IFPUG) has
standardized FP as a size metric, and the International Organization for
Standardization (ISO) has taken steps to standardize it, since there is
widespread belief that FPs are a better size metric than LOC. In addition, we
have considered the phase-wise defect injection percentage to estimate the
defects for each phase. This gets around the problems of lack of uniformity and
language dependence.

The closed loop defect removal model

This model
accounts for injected defects and escaped defects separately. The creation of
the work product is subject to defect injection during the SRS, design, and
implementation phases, whereas bad fixes cause defect injection during the
testing phase. Defects are generally removed through one or another form of
review and testing.

Generating initial estimates and tolerance limits

Using statistical
process control, the size of the project is estimated in terms of FP. The total
defect density (DD) value for the organization is baselined; it is expressed as
defects per FP. The total DD can also be set based on other data, such as the
project's defined software process, similarity to an earlier project, and an
equivalent team structure. The percentage defect injection is baselined for each
phase.
The mean, upper tolerance value,
and lower tolerance value for injected defects in each phase are obtained as
follows:

Mean number of defects injected in the phase = FP × total DD × percentage
injection for the phase.

Upper tolerance value for the
number of injected defects in SRS = (upper natural process limit − center
line) of DefectDensitySRS × FP.

Lower tolerance value for the
number of injected defects in SRS = (center line − lower natural process
limit) of DefectDensitySRS × FP.

Defects to be removed in a
phase = defect removal effectiveness × (defects existing on phase entry +
defects injected during the phase).
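The three formulas above can be sketched for a single phase. This is a minimal illustration: the project size, baselined defect density, injection percentage, DRE goal, and natural process limits below are all assumed values, not the paper's baselines.

```python
# Illustrative single-phase estimate, per the formulas above.
# All numeric baselines here are assumptions for demonstration.
fp = 100                      # project size in function points (assumed)
total_dd = 0.5                # baselined total defect density, defects/FP (assumed)
pct_injection_srs = 0.30      # baselined injection percentage for SRS (assumed)

# Mean number of defects injected in the SRS phase
mean_injected_srs = fp * total_dd * pct_injection_srs

# Natural process limits of the SRS defect-density control chart (assumed, defects/FP)
center_line = total_dd * pct_injection_srs   # 0.15 defects/FP
upper_npl = 0.20
lower_npl = 0.10

upper_tolerance = (upper_npl - center_line) * fp   # allowance above the mean
lower_tolerance = (center_line - lower_npl) * fp   # allowance below the mean

# Defects to be removed in the phase
dre_srs = 0.70                # phase DRE goal (assumed)
defects_on_entry = 0          # nothing escapes into the first phase
to_remove_srs = dre_srs * (defects_on_entry + mean_injected_srs)
```

With these assumed baselines the phase expects 15 injected defects, a tolerance band of ±5 defects around the mean, and about 10.5 defects removed in review.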

The phase-wise defect removal effectiveness (DRE) for each phase
is set based on the organizational goals for process improvement.

Initial estimates for the five phases (SRS, design, implementation, testing and
acceptance testing) are generated as per an algorithm. The testing defects
consist of unit test, module test and integration test defects.
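The paper does not reproduce the algorithm itself; the following is a minimal sketch of one plausible phase-by-phase cascade consistent with the formulas above, in which defects not removed in a phase escape into the next. The phase names follow the text, but the injection percentages and DRE goals are illustrative assumptions.

```python
# Sketch of generating phase-wise initial estimates by cascading
# escaped defects forward. Injection percentages and DRE values are
# illustrative assumptions, not baselined organizational data.
FP = 100
TOTAL_DD = 0.5   # defects per FP (assumed baseline)

# (phase, injection percentage, DRE goal); injection shares must sum to 1.0
phases = [
    ("SRS",        0.25, 0.70),
    ("Design",     0.25, 0.70),
    ("Coding",     0.40, 0.70),
    ("Testing",    0.10, 0.80),   # bad fixes inject defects during testing
    ("Acceptance", 0.00, 0.90),
]

escaped = 0.0    # defects carried into the next phase
estimates = {}
for name, pct_inject, dre in phases:
    injected = FP * TOTAL_DD * pct_inject
    existing = escaped + injected         # defects present in this phase
    removed = dre * existing              # defects to be removed in this phase
    escaped = existing - removed          # defects escaping to the next phase
    estimates[name] = removed
```

Summing the per-phase removals plus the final escaped count recovers the total estimated defect volume (FP × total DD), which is a useful sanity check on any such cascade.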

Feedback mechanism

At the end of
each phase, the actual number of defects found is fed into the model. This data
is an input to the decision process and action strategy to modify the process or
products. The model determines whether the defect detection profile is within
the tolerance limits established. If the actuals are less than the lower
natural process limit, it can be due to lower error injection or ineffective
reviews. If the actuals are more than the upper natural process limit, it can
be due to higher error injection or effective reviews. The effectiveness of
reviews is determined from the effort spent on the review and the experience of
the review team. High and low values can be defined for the organization. We
have made the following guidelines for interpreting the values.

For code review, if the review rate is
less than 30 LOC per person-hour (for example, three persons reviewing 100 LOC
at a rate of 100 LOC per hour), the review effort is considered high; otherwise
it is low. Similarly, for SRS and design reviews, the effort is considered high
if the rate is less than three pages per person-hour (for example, three
persons reviewing a 30-page document at a rate of 10 pages per hour). Likewise,
if defects per KLOC exceed 50 for code review, or defects per 10 pages exceed
seven for SRS and design review, the defect injection is considered high.

If the total experience per person of
the review team is more than three years, it is considered high.
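The interpretation guidelines above reduce to a few threshold checks. The thresholds below are taken from the text; the function names themselves are our own, not the paper's.

```python
# Threshold checks for the interpretation guidelines above.
# Thresholds come from the text; function names are illustrative.

def review_effort_high(code_review: bool, size: float, person_hours: float) -> bool:
    """Effort is high when the review rate is below the threshold:
    30 LOC per person-hour for code, 3 pages per person-hour for SRS/design."""
    rate = size / person_hours
    threshold = 30 if code_review else 3
    return rate < threshold

def injection_high(code_review: bool, defects: float, size: float) -> bool:
    """Injection is high above 50 defects per KLOC for code, or
    above 7 defects per 10 pages for SRS/design documents."""
    if code_review:
        return defects / (size / 1000.0) > 50    # size given in LOC
    return defects / (size / 10.0) > 7           # size given in pages

def team_experienced(years_per_person: float) -> bool:
    """More than three years of experience per person counts as high."""
    return years_per_person > 3
```

For instance, 100 LOC reviewed over four person-hours is 25 LOC per person-hour, below the 30 LOC threshold, so the effort would be rated high.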

Re-estimation is not required
when the actuals are within the tolerance limits. However, if the actuals are
consistently above the middle of the tolerance range, this indicates either
that the baselined process needs to adopt an improved methodology for defect
removal, or that the process is injecting more defects.


With proper input
on review effectiveness and defect injection, the model can re-estimate the
defects for each phase. If the data indicates high defect injection, the
estimates for subsequent phases are incremented proportionately, keeping
the phase-wise percentage injection constant. If reviews are rated ineffective,
the defect removal effectiveness for the phase is recalculated from the actual
output. If neither condition applies, the percentage injection for the
phases is revised based on the actual output. While doing this, the sum of the
injection percentages should be maintained at 100, and the ratio of the
injection percentages of subsequent phases should be kept constant.

This enables earlier
action to rectify problems that would otherwise not be found until later in the
development process. It provides developers with timely data to discover
departures from the development process and to return to control. The
re-estimates are done after the SRS, design, and coding phases. Data obtained
during testing is not used to modify the estimates. The test data (including
unit test, module test, and integration test) is used along with the review
data to predict the latent defects using the Rayleigh model.

Tools used

The closed loop
defect removal model is implemented using MS Excel. The user gives the size
of the project and actual defect data as input, and receives subsequent
estimates based on these. The baselines can be changed if desired.


The model
depends heavily on the accuracy of the FP estimates. It will not be accurate in
cases where there is a combination of high defect injection and ineffective
review, or low defect injection and ineffective review. These errors will be
reduced after

Application with actual data

The results found
after applying this model to a project are encouraging. A 100 FP project began
with a low defect density estimate, based on a similar past project. After
the SRS phase, however, the actual defects obtained were high, and the team
decided this was due to high defect injection. All estimates were subsequently
revised. The actual defects for the design phase were fewer in number, and the
team assigned the cause to poor review. Hence the model took all the escaped
defects into consideration and revised the estimates for the subsequent phases.
The code review defects were also fewer than predicted, and hence the testing
defects were re-estimated. The team focused more on the testing phase, and the
actuals came well within the limits for both the testing and acceptance test
phases. The data was found useful for taking decisions on the quality aspects
of the project.

Predicting reliability

The number of latent
defects in the software product when it becomes available to customers is an
objective statement of the quality of the product. The latent defects are
estimated using the Rayleigh model. An eight-step development process with step
size equal to one (software requirements specification, design, coding, unit
test, module test, integration test, acceptance test, and post-acceptance test)
is used. The model parameters can be derived from the actual defect data
obtained up to the acceptance testing phase. The end-product reliability can
then be obtained by substituting the data.
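A minimal sketch of that substitution step, using a standard Rayleigh cumulative curve over the eight unit-width steps: defects found by step t follow the curve, and the latent count is the total defect volume minus what the curve says has been found by the last step. The fitted constants below are illustrative stand-ins; the paper derives them from the actual defect data up to acceptance testing.

```python
# Latent-defect estimate from a Rayleigh curve over the eight unit steps.
# K (total defect volume) and tm (peak location) are illustrative values,
# not fitted parameters from the paper's data.
import math

def rayleigh_cdf(t: float, K: float, tm: float) -> float:
    """Cumulative defects found by time t for a Rayleigh curve with
    total defect volume K and defect-discovery peak at t = tm."""
    return K * (1.0 - math.exp(-(t * t) / (2.0 * tm * tm)))

K, tm = 60.0, 3.0                      # assumed fit: 60 defects, peak at step 3
found_by_step8 = rayleigh_cdf(8.0, K, tm)
latent = K - found_by_step8            # defects still in the shipped product
```

The latent count is small but nonzero here, which is the point of the exercise: the tail of the fitted curve beyond the last observed step is the objective statement of shipped quality.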

The effective model

From the above,
it is evident that the closed loop defect removal model is highly effective
for in-process quality management. The project leader can plan effectively for
the effort and time to be spent on reviews based on the output of the model.
Corrective actions can be taken during the course of the project itself.
Further, defect prevention activities can be applied during development with
proper planning, and consequently we end up with a better quality product and
improved customer satisfaction.

Excerpted from
Closed Loop Defect Removal Model Using SPC
Achamma Jose, Anju NK, SK Pilai

Courtesy: Network
Systems and Technologies, Trivandrum

This paper won QAI’s ‘Best of
the Best 2000’ award for the best paper category at the SEPG Conference India
2000, Bangalore, February 2000.
