Unit-4
Software Project Estimation
Content…
The Management Spectrum
Metrics for size estimation –
LOC(Line of Code), FP(Function Points)
Project Cost estimation approach
COCOMO(Constructive Cost Model),
COCOMO II
Risk Management- Identification ,
Risk Assessment, Risk Containment, RMMM
Strategy
Flow of S/W project Estimation
The Management Spectrum
The management spectrum describes
the management of a software project or how
to make a project successful.
It focuses on the four P's:
people, product, process, and project.
Here, the manager of the project has to
control all these P's to have a smooth flow in
the project progress and to reach the goal.
The People:
The people of a project range from the
manager to the developers, and from the
customer to the end users.
But mainly, the people of a project highlight
the developers.
It is very important to have highly skilled and
motivated developers.
The Product:
The product is the ultimate goal of the
project.
This is any type of software product that has
to be developed.
To develop a software product successfully,
all the product objectives and scopes should
be established, alternative solutions should
be considered, and technical and
management constraints should be identified
beforehand.
The Process:
A software process provides the framework
from which a comprehensive plan for
software development can be established.
A number of different task sets— tasks,
milestones, work products, and quality
assurance points—enable the framework
activities to be adapted to the characteristics
of the software project and the requirements
of the project team.
The Project:
The project is the complete software project
that includes requirement analysis,
development, delivery, maintenance and
updates.
The project manager of a project or sub-
project is responsible for managing the
people, product and process.
Metric
A Metric defines in quantitative terms the
degree to which a system, system
component, or process possesses a given
attribute.
Metrics for software project size
estimation
Accurate estimation of the problem size is
fundamental to satisfactory estimation of
effort, time duration and cost of a software
project.
The project size is a measure of the problem
complexity in terms of the effort and time
required to develop the product.
Currently, two metrics are widely used to
estimate size:
Lines of Code (LOC) and Function Points (FP).
Function Point (FP)
Function Point Analysis was initially
developed by Allan J. Albrecht in 1979 at IBM.
It has been further modified by the
International Function Point Users Group
(IFPUG).
It can be used to easily estimate the size of a
software product directly from the problem
specification.
It assesses the functionality delivered to its
users, based on the user’s external view of
the functional requirements.
It measures the logical view of an application
not the physically implemented view or the
internal technical view.
FP characterizes the complexity of the
software system and hence can be used to
depict the project time and the manpower
requirement.
The effort required to develop the project
depends on what the software does.
FP is programming language independent.
The FP method is used for data processing
and business systems, such as information
systems.
Information Domain Characteristics
Various functions used in an application can
be put under five types, as shown in the table:

Measurement Parameter                         Examples
1. Number of External Inputs (EI)             Input screens and tables
2. Number of External Outputs (EO)            Output screens and reports
3. Number of External Inquiries (EQ)          Prompts and interrupts
4. Number of Internal Logical Files (ILF)     Databases and directories
5. Number of External Interface Files (EIF)   Shared databases and shared routines
Formula to Calculate FP
FP = VAF * UFP
Where FP=Function Point
VAF=Value Adjustment Factor.
UFP=Unadjusted Function Point(Count Total).
VAF = (TDI * 0.01) + 0.65
TDI= Total Degree of Influence
The functional complexities are multiplied with the corresponding
weights against each function, and the values are added up to
determine the UFP (Unadjusted Function Point) of the subsystem.
Here the weighting factor will be simple, average, or
complex for each measurement parameter type.
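As a sketch, the UFP computation can be written out in code. The simple/average/complex weights below are the commonly cited Albrecht/IFPUG values; treat the example counts as purely illustrative.

```python
# Sketch of UFP (Unadjusted Function Point) computation.
# Weights are the commonly cited simple/average/complex values
# for each measurement parameter type (an assumption here).
WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},
    "EO":  {"simple": 4, "average": 5,  "complex": 7},
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},
    "ILF": {"simple": 7, "average": 10, "complex": 15},
    "EIF": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_fp(counts):
    """counts maps each parameter type to {complexity level: count}."""
    return sum(
        WEIGHTS[ptype][level] * n
        for ptype, levels in counts.items()
        for level, n in levels.items()
    )

# Example subsystem with all functions rated "average":
counts = {"EI": {"average": 5}, "EO": {"average": 4},
          "EQ": {"average": 3}, "ILF": {"average": 2}, "EIF": {"average": 1}}
print(unadjusted_fp(counts))  # 5*4 + 4*5 + 3*4 + 2*10 + 1*7 = 79
```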
How to calculate TDI
It is based on 14 general system characteristics
(GSC's) that rate the general functionality of the
application being counted.
Degree of Influence (DI) for each of these 14 GSCs
is assessed on a scale of 0 to 5.
If a particular GSC has no influence, then its weight
is taken as 0 and if it has a strong influence then its
weight is 5
Degree of Influence Rating
Rating Degree of Influence
0 Not present, or no influence
1 Incidental influence
2 Moderate influence
3 Average influence
4 Significant influence
5 Strong influence throughout
TDI = ∑(fi), where fi is the rating given to
each of the 14 general system characteristics
(on a scale of 0 to 5).
Also note that ∑(fi) ranges from 0 to 70, i.e.,
0 <= ∑(fi) <= 70,
and VAF ranges from 0.65 to 1.35, because:
When ∑(fi) = 0, then VAF = 0.65
When ∑(fi) = 70, then VAF = 0.65 + (0.01 *
70) = 0.65 + 0.7 = 1.35
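The TDI, VAF, and final FP formulas above can be sketched directly in code; the values printed at the end confirm the stated VAF bounds.

```python
# Sketch of the VAF and final FP computation from the formulas above.
def vaf(gsc_ratings):
    """Value Adjustment Factor from the 14 GSC ratings (each 0-5)."""
    tdi = sum(gsc_ratings)        # Total Degree of Influence, 0..70
    return (tdi * 0.01) + 0.65    # ranges from 0.65 to 1.35

def function_points(ufp, gsc_ratings):
    """FP = VAF * UFP."""
    return vaf(gsc_ratings) * ufp

print(round(vaf([0] * 14), 2))                   # 0.65
print(round(vaf([5] * 14), 2))                   # 1.35
print(round(function_points(100, [3] * 14), 2))  # 100 * 1.07 = 107.0
```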
General System Characteristic: Brief Description
GSC 1. Data communications: How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
GSC 2. Distributed data processing: How are distributed data and processing functions handled?
GSC 3. Performance: Was response time or throughput required by the user?
GSC 4. Heavily used configuration: How heavily used is the current hardware platform where the application will be executed?
GSC 5. Transaction rate: How frequently are transactions executed: daily, weekly, monthly, etc.?
GSC 6. On-line data entry: What percentage of the information is entered online?
GSC 7. End-user efficiency: Was the application designed for end-user efficiency?
GSC 8. On-line update: How many ILFs are updated by online transactions?
GSC 9. Complex processing: Does the application have extensive logical or mathematical processing?
GSC 10. Reusability: Was the application developed to meet one or many users' needs?
GSC 11. Installation ease: How difficult is conversion and installation?
GSC 12. Operational ease: How effective and/or automated are start-up, back-up, and recovery procedures?
GSC 13. Multiple sites: Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations?
GSC 14. Facilitate change: Was the application specifically designed, developed, and supported to facilitate change?
Based on the FP measure of software many
other metrics can be computed:
Errors/FP
$/FP.
Defects/FP
Pages of documentation/FP
Errors/PM.
Productivity = FP/PM (effort is measured in
person-months).
$/Page of Documentation.
LOC(Line of Code)
LOC is the simplest among all metrics
available to estimate project size.
LOC measures the size of a project by counting
the number of source instructions in the
developed program, ignoring comment lines
and header lines.
In order to estimate the LOC count at the
beginning of a project, project managers usually
divide the problem into modules, and each
module into submodules and so on, until the
sizes of the different leaf-level modules can be
approximately predicted.
Example: Consider the following modules of a student management system and their KLOC values.
The total KLOC value is 21.
Name of Module        KLOC (Kilo Lines of Code)
Login Module          0.2
Registration Module   0.3
Course Module         2
Search Module         1.5
Assignment Module     2
Attendance Module     5
Database Module       10
Total                 21
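The bottom-up LOC estimate above is just the sum of the leaf-level module estimates, as a short sketch shows:

```python
# Bottom-up LOC estimation: leaf-level module estimates are
# summed to get the total project size in KLOC.
modules = {
    "Login": 0.2, "Registration": 0.3, "Course": 2,
    "Search": 1.5, "Assignment": 2, "Attendance": 5, "Database": 10,
}
total_kloc = sum(modules.values())
print(round(total_kloc, 1))  # 21.0
```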
Project Estimation techniques
Estimation of various project parameters is a
basic project planning activity.
The important project parameters that are
estimated include:
project size, effort required to develop the
software, project duration, and cost.
These estimates not only help in quoting the
project cost to the customer, but are also
useful in resource planning and scheduling.
There are three broad categories of
estimation techniques:
Empirical estimation techniques
Heuristic techniques
Analytical estimation techniques
Empirical Estimation Techniques
Empirical techniques are based on making an
educated guess of the project parameters.
While using this technique, prior experience
with development of similar products is
helpful.
Two popular empirical techniques are the
expert judgment technique and Delphi cost
estimation.
Expert Judgment Technique
In this approach, an expert makes an
educated guess of the problem size after
analyzing the problem thoroughly.
The expert estimates the cost of the different
components (i.e., modules or subsystems) of
the system and then combines them to arrive
at the overall estimate.
Delphi cost estimation
Delphi estimation is carried out by a team
comprising a group of experts and a
coordinator.
In this approach, the coordinator provides
each estimator with a copy of the software
requirements specification (SRS) document
and a form for recording his cost estimate.
Estimators complete their individual
estimates anonymously and submit them to
the coordinator.
Heuristic Techniques
Heuristic techniques assume that the
relationships among the different project
parameters can be modeled using suitable
mathematical expressions.
Once the basic (independent) parameters are
known, the other (dependent) parameters can
be easily determined by substituting the
value of the basic parameters in the
mathematical expressions.
Heuristic estimation models can be divided into two classes:
single-variable models and multivariable models.
Single-variable estimation
Estimated Parameter = c1 * e^d1
where e is the characteristic of the software which has
already been estimated (independent variable), and
Estimated Parameter is the dependent parameter to
be estimated. The dependent parameter could be
effort, project duration, staff size, etc.; c1 and d1
are constants.
E.g., the basic COCOMO model is an example of a
single-variable cost estimation model.
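The single-variable form can be sketched in a few lines; the constants c1 = 2.4 and d1 = 1.05 below are the basic COCOMO organic-mode values, used purely as an illustration.

```python
# Single-variable heuristic model: Estimated Parameter = c1 * e**d1.
def single_variable_estimate(e, c1, d1):
    return c1 * e ** d1

# With the basic COCOMO organic-mode constants (c1=2.4, d1=1.05),
# the model reduces to the basic COCOMO effort equation.
effort_pm = single_variable_estimate(10, 2.4, 1.05)  # effort for 10 KLOC
print(round(effort_pm, 2))
```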
A multivariable cost estimation model takes
the following form:
Estimated Resource = c1*e1^d1 + c2*e2^d2 + ...
The intermediate COCOMO model can be
considered an example of a
multivariable estimation model.
Analytical Estimation Techniques
Analytical techniques derive the required
results starting with basic assumptions
regarding the project, and have a scientific
basis.
Halstead’s Software Science – An Analytical
Technique
Halstead’s software science is an analytical
technique to measure size, development
effort, and development cost of software
products.
Halstead used a few primitive program
parameters to develop the expressions for
overall program length, potential minimum
volume, actual volume, effort, and development
time.
Example:
Let us consider the following C program:
main( ) {
    int a, b, c, avg;
    scanf("%d %d %d", &a, &b, &c);
    avg = (a+b+c)/3;
    printf("avg = %d", avg);
}
The unique operators are: main, (), {}, int, scanf, &, ",", ";", =,
+, /, printf
The unique operands are: a, b, c, &a, &b, &c, a+b+c, avg,
3, "%d %d %d", "avg = %d"
Therefore, η1 = 12, η2 = 11
Estimated Length = η1*log2(η1) + η2*log2(η2) = (12*log2(12) + 11*log2(11))
= (12*3.58 + 11*3.45) = (43+38) = 81
Volume = Length*log2(η1+η2) = 81*log2(23) = 81*4.52 = 366
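The hand calculation above can be reproduced programmatically (the volume comes out at about 367 here because no intermediate rounding is applied):

```python
import math

# Halstead's length and volume estimates from the unique
# operator count (n1) and unique operand count (n2).
def halstead_length(n1, n2):
    return n1 * math.log2(n1) + n2 * math.log2(n2)

def halstead_volume(n1, n2):
    # Volume = Length * log2(vocabulary), where vocabulary = n1 + n2
    return halstead_length(n1, n2) * math.log2(n1 + n2)

print(round(halstead_length(12, 11)))  # 81
print(round(halstead_volume(12, 11)))  # 367 (366 above uses rounded factors)
```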
COCOMO (Constructive Cost Model)
COCOMO (Constructive Cost Model) is a
regression model based on LOC, i.e., the number
of Lines of Code.
It is used for predicting the various parameters
associated with a project, such as size, effort,
cost, time and quality.
It was proposed by Barry Boehm in 1981 and
is based on the study of 63 projects, which
makes it one of the best-documented models.
Effort: Amount of labor that will be required
to complete a task. It is measured in person-
months units.
Schedule: Simply means the amount of time
required for the completion of the job, which
is, of course, proportional to the effort put in.
It is measured in units of time such as
weeks or months.
Types of COCOMO model:
1. Basic COCOMO Model-The first level, Basic
COCOMO can be used for quick and slightly rough
calculations of Software Costs.
2. Intermediate COCOMO Model-
Intermediate COCOMO takes these Cost Drivers
into account
3. Detailed COCOMO Model-
Detailed COCOMO additionally accounts for the
influence of individual project phases
Types of S/W Projects (Modes)
Boehm’s definition of organic, semidetached,
and embedded systems:
Organic – A software project is said to be an
organic type if the team size required is
adequately small, the problem is well
understood and has been solved in the past
and also the team members have a nominal
experience regarding the problem.
Semi-detached –
A software project is said to be a Semi-
detached type if the vital characteristics such
as team-size, experience, knowledge of the
various programming environment lie in
between that of organic and Embedded.
Embedded – A software project requiring
the highest level of complexity, creativity, and
experience falls under this
category.
Formula to Calculate Effort, Time and
Avg. Staff
Example1: Suppose a project was estimated to be 400
KLOC. Calculate the effort and development time for
each of the three model i.e., organic, semi-detached &
embedded.
The basic COCOMO equations take the form
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 Months
Estimated Size of project= 400 KLOC
(i) Organic Mode
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 Months
(ii) Semidetached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 Months
(iii) Embedded Mode
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.81)^0.32 ≈ 38 Months
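The three-mode calculation in Example 1 can be sketched as a small function; the (a1, a2, b1, b2) constants are the standard basic COCOMO values used in the worked figures above.

```python
# Basic COCOMO sketch using the standard mode constants
# (a1, a2, b1, b2) from the worked example above.
MODES = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = MODES[mode]
    effort = a1 * kloc ** a2    # person-months
    tdev = b1 * effort ** b2    # months
    return effort, tdev

for mode in MODES:
    e, d = basic_cocomo(400, mode)
    print(f"{mode}: E = {e:.2f} PM, D = {d:.2f} Months")
```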
Example2: A project size of 200 KLOC is to be
developed. Software development team has average
experience on similar type of projects. The project
schedule is not very tight. Calculate the Effort,
development time, average staff size, and productivity
of the project.
The semidetached mode is the most appropriate mode,
keeping in view the size, schedule and experience of the
development team.
Hence E = 3.0 * (200)^1.12 = 1133.12 PM
D = 2.5 * (1133.12)^0.35 = 29.3 Months
Average staff size = E/D = 1133.12/29.3 = 38.67 persons
Productivity = KLOC/E = 200/1133.12 = 0.1765 KLOC/PM = 176.5 LOC/PM
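Example 2 can likewise be computed in a few lines, including the derived average staff size (E/D) and productivity (KLOC/E):

```python
# Example 2: semidetached mode, 200 KLOC, with the derived
# average staff size and productivity.
effort = 3.0 * 200 ** 1.12      # person-months
tdev = 2.5 * effort ** 0.35     # months
avg_staff = effort / tdev       # persons
productivity = 200 / effort     # KLOC per person-month

print(f"E = {effort:.2f} PM, D = {tdev:.1f} Months")
print(f"Avg staff = {avg_staff:.1f} persons")
print(f"Productivity = {productivity * 1000:.1f} LOC/PM")
```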
COCOMO-II
COCOMO II is the revised version of the original
COCOMO (Constructive Cost Model) and was developed at
the University of Southern California.
This model calculates the development time and effort
taken as the total of the estimates of all the individual
subsystems.
It was developed to support capabilities for
continuous model improvement and to provide a
quantitative analytic framework.
It is a hierarchy of estimation models that
address the following areas:
Application Composition Model
Early Design Stage Model
Post-Architecture Model
Reuse Model
COCOMO I vs COCOMO II
COCOMO I is useful in the waterfall model of the software development cycle, whereas COCOMO II is useful in non-sequential, rapid-development and reuse models of software.
COCOMO I provides estimates of effort and schedule, whereas COCOMO II provides estimates that represent one standard deviation around the most likely estimate.
COCOMO I is based upon the linear reuse formula, whereas COCOMO II is based upon the non-linear reuse formula.
COCOMO I is also based upon the assumption of reasonably stable requirements, whereas COCOMO II is based upon a reuse model which looks at the effort needed to understand and estimate.
In COCOMO I, the effort equation's exponent is determined by 3 development modes; in COCOMO II it is determined by 5 scale factors.
In COCOMO I, development begins with the requirements assigned to the software; COCOMO II follows a spiral type of development.
The number of submodels in COCOMO I is 3, with 15 cost drivers assigned; in COCOMO II, the number of submodels is 4, with 17 cost drivers assigned.
In COCOMO I, the size of software is stated in terms of Lines of Code; in COCOMO II, it is stated in terms of Object Points.
Risk Management
Risk management is the process of identifying,
addressing, and resolving problems before they
harm the project.
The risks can be broadly categorized into three
categories, as illustrated below:
Project risks are those that have an impact on the
project's schedule or resources.
Product risks affect the quality or performance of
the product being developed.
Business risks are risks to the corporation
developing or licensing the software.
Risk Identification
Several types of risks can impact a software product; a
few of them are defined below:
Technology Risks: Risks arising from the software or
hardware technologies utilised to construct the system.
People Risks: Risks associated with the people in the
development team.
Organizational Risks: Risks arising from the organization
in which the software is being produced.
Tools Risks: Risks arising from the software tools and
other support software used to build the system.
Requirement Risks: Risks associated with changes in
client requirements and the process of managing those
changes.
Estimation Risks: Risks arising from management
estimates of the resources necessary to create the
system.
Risk Analysis
During the risk analysis process, you must
analyze each identified risk and form opinions
about its likelihood and severity.
The risk's probability could be classified as
extremely low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%), or very
high (above 75%).
The risk's impact might be classified as
catastrophic (threatening the plan's survival),
severe (causing vital delays), bearable (delays
are within allowable contingencies), or trivial.
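The probability bands above amount to a simple classification rule; a sketch (the handling of exact boundary values is an assumption here) might look like:

```python
# Sketch: classifying a risk's estimated probability into the
# bands named above. Boundary handling is an assumption.
def probability_band(p):
    if p < 0.10:
        return "extremely low"
    if p < 0.25:
        return "low"
    if p < 0.50:
        return "moderate"
    if p < 0.75:
        return "high"
    return "very high"

print(probability_band(0.30))  # moderate
print(probability_band(0.80))  # very high
```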
Risk Planning
The risk planning technique considers each of
the significant risks that have been identified
and develops strategies to mitigate them.
Risk Monitoring
Risk monitoring ensures that your
assumptions about the product, process, and
business risks remain unchanged.
Risk Mitigation, Monitoring and Management(RMMM) Plan
In most cases, a risk management approach
can be found in the software project plan.
This can be broken down into three sections:
risk mitigation, monitoring, and management
(RMMM).
All work is done as part of the risk analysis in
this strategy.
The project manager typically uses this
RMMM plan as part of the overall project plan.
Risk Mitigation
Risk Mitigation is a technique for avoiding risks
(Risk Avoidance).
The following are steps to take to reduce the
risks:
Finding out the risk.
Removing causes that are the reason for risk
creation.
Controlling the corresponding documents from
time to time.
Conducting timely reviews to speed up the
work.
Risk Monitoring
Risk monitoring is an activity used to track a
project's progress.
The following are the critical goals of the task.
To check if predicted risks occur or not.
To ensure proper application of risk aversion
steps defined for risk.
To collect data for future risk analysis.
To determine which problems are caused by
which risks throughout the project.
Risk Management and Planning:
It assumes that the mitigation activity failed and the risk
is a reality.
This task is done by the project manager when a risk
becomes a reality and causes severe problems.
If the project manager effectively uses project mitigation
to remove risks successfully then it is easier to manage
the risks.
This shows the response that will be taken for each
risk by a manager.
The main output of the risk management plan is the
risk register.
This risk register describes and focuses on the predicted
threats to a software project.