Sepm Notes (Vtucode)
MODULE –1
CHAPTER 1 - INTRODUCTION
Software and Software Engineering
Software is more than just program code. A program is executable code that serves some
computational purpose. Software is a collection of executable program code, associated
libraries, and documentation. Software made for a specific requirement is called a software
product.
Engineering, on the other hand, is all about developing products using well-defined scientific
principles and methods.
Definitions
Fritz Bauer, a German computer scientist, defines software engineering as: Software engineering
is the establishment and use of sound engineering principles in order to obtain, economically,
software that is reliable and works efficiently on real machines.
Software is both a product and a vehicle that delivers a product. As a vehicle (process), software acts as the basis for:
• The control of the computer (operating systems)
• The communication of information (networks)
• The creation and control of other programs (software tools and environments)
As a product, software delivers the most important product of our time: information. It transforms
personal data (e.g., an individual’s financial transactions), manages business information, provides
a gateway to worldwide information networks (the Internet), and provides the means for acquiring
information in all of its forms.
1.1.1 Software
1. Instructions: programs that, when executed, provide desired function, features, and
performance.
2. Data structures: Enable the programs to adequately manipulate information.
3. Documents: Descriptive information in both hard copy and virtual forms that describes the
operation and use of the programs.
Characteristics of software
Software has characteristics that are considerably different from those of hardware:
➢ In the early stages of the hardware life cycle, the failure rate is high because of
manufacturing defects; once the defects are corrected, the failure rate drops.
➢ With the passage of time, hardware components suffer from the cumulative effects of dust,
vibration, temperature extremes, and many other environmental factors. Stated simply, the
hardware begins to wear out.
➢ Software is not susceptible to the environmental maladies (extreme temperature, dust, and
vibration) that cause hardware to wear out [Fig: 1.1].
The following figure shows the relationship between failure rate and time.
➢ When a hardware component wears out, it is replaced by a spare part. There are no software
spare parts.
➢ Every software failure indicates an error in design or in the process through which the design
was translated into machine-executable code. Therefore, the software maintenance tasks that
accommodate requests for change involve considerably more complexity than hardware
maintenance. The implication is clear—the software doesn’t wear out, but it does deteriorate
(because of frequent changes in requirements) [Fig: 1.2].
➢ A software component should be designed and implemented so that it can be reused in
many different programs (e.g., algorithms and data structures).
➢ Today, the software industry is trying to build libraries of reusable components. For example,
a graphical user interface is built using reusable components such as message windows,
pull-down menus, and many other such components (see the sketch after this list).
➢ In the hardware world, component reuse is a natural part of the engineering process.
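To make the idea of a reusable component concrete, the following minimal Python sketch (illustrative only; the class and method names are assumptions, not taken from these notes) shows a message-window component that any number of programs can reuse instead of re-implementing their own message display:

```python
class MessageWindow:
    """A small reusable component: any program can reuse it instead of
    re-implementing its own message display."""

    def __init__(self, title: str):
        self.title = title

    def show(self, text: str) -> None:
        # A real GUI toolkit would draw a window; here we simply print a framed message.
        line = f"[{self.title}] {text}"
        print("+" + "-" * (len(line) + 2) + "+")
        print("| " + line + " |")
        print("+" + "-" * (len(line) + 2) + "+")


if __name__ == "__main__":
    # Two different "applications" reuse the same component.
    MessageWindow("Editor").show("File saved successfully")
    MessageWindow("Installer").show("Installation complete")
```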
Nowadays, seven broad categories of computer software present continuing challenges for software
engineers:
1. System Software:
• A collection of programs written to service other programs. Some system software
(e.g., compilers, editors, and file management utilities) processes complex but
determinate information structures. Other system applications (e.g., operating system
components, drivers, networking software, telecommunication processors) process
largely indeterminate data.
• In both cases, there is heavy interaction with computer hardware, heavy usage by
multiple users, and concurrent scheduling and resource sharing.
2. Application Software:
• Stand-alone programs that solve a specific business need. [Help users to perform
specific tasks].
• Application software is used to control business functions in real time (e.g., point-of-
sale transaction processing, real-time manufacturing process control).
3. Engineering/Scientific Software:
• This category has been characterized by “number crunching” algorithms (complex
numeric computations).
• Applications range from astronomy to volcanology, from automotive stress analysis
to space shuttle orbital dynamics, and from molecular biology to automated
manufacturing. Computer-aided design, system simulation, and other interactive
applications have begun to take on real-time and even system software characteristics.
4. Embedded Software:
• It resides within a product or system and is used to implement and control features
and functions for the end user and for the system itself.
• Embedded software can perform limited and esoteric functions (e.g., keypad control
for a microwave oven) or provide significant function and control capability (e.g.,
digital functions in an automobile such as fuel control, dashboard displays, and
braking systems).
5. Product-line Software:
• Designed to provide a specific capability for use by many different customers.
• Product-line software can focus on a limited and esoteric marketplace (e.g.,
inventory control products) or address mass consumer markets (e.g., word
processing, spreadsheets, computer graphics, multimedia, entertainment, database
management, and personal and business financial applications).
6. Web Applications:
• Web applications are client-server programs in which the client runs in a web browser.
7. Artificial Intelligence Software:
• Makes use of nonnumerical algorithms to solve complex problems that are not amenable
to computation or straightforward analysis (e.g., robotics, expert systems, pattern
recognition, neural networks, and game playing).
➢ Net sourcing: Architecting simple and sophisticated applications that benefit targeted
end-user markets worldwide (the Web as a computing engine).
➢ Open Source: Distributing source code for computing applications so customers can make
local modifications easily and reliably (“free” source code open to the computing
community).
• The software must be adapted to meet the needs of new computing environments or
technologies.
Web-based systems and applications (WebApps) were born. Today, WebApps have evolved into
sophisticated computing tools that not only provide stand-alone function to the end user, but also
have been integrated with corporate databases and business applications.
WebApps are one of a number of distinct software categories. Web-based systems and
applications “involve a mixture between print publishing and software development, between
marketing and computing, between internal communications and external relations, and between
art and technology.”
• Network intensiveness. A WebApp resides on a network and must serve the needs of a
diverse community of clients. The network may enable worldwide access and
communication (i.e., the Internet) or more limited access and communication (e.g., a
corporate Intranet).
• Concurrency. A large number of users may access the WebApp at one time. In many
cases, the patterns of usage among end users will vary greatly.
• Unpredictable load. The number of users of the WebApp may vary by orders of
magnitude from day to day. One hundred users may show up on Monday; 10,000 may use
the system on Thursday.
• Data driven. The primary function of many WebApps is to use hypermedia to present text,
graphics, audio, and video content to the end user. In addition, WebApps are commonly
used to access information that exists on databases that are not an integral part of the Web-
based environment (e.g., e-commerce or financial applications).
• Content sensitive. The quality and aesthetic nature of content remains an important
determinant of the quality of a WebApp.
• Continuous evolution. Unlike conventional application software that evolves over a series
of planned, chronologically spaced releases, Web applications evolve continuously.
• Security. Because WebApps are available via network access, it is difficult, if not
impossible, to limit the population of end users who may access the application. In order to
protect sensitive content and provide secure modes of data transmission, strong security
measures must be implemented throughout the infrastructure that supports a WebApp and
within the application itself.
• Aesthetics. An undeniable part of the appeal of a WebApp is its look and feel. When an
application has been designed to market or sell products or ideas, aesthetics may have as
much to do with success as technical design.
These simple realities lead to one conclusion. Software in all of its forms and across all of its
application domains should be engineered.
Software Engineering:
Fritz Bauer defined it as:
Software engineering is the establishment and use of sound engineering principles in order to
obtain software that is reliable and works efficiently on real machines, in an economical manner.
Software engineering is a layered technology; to develop software, we move from one layer to
the next. All the layers are connected, and each layer demands the fulfillment of the previous
layer [Fig: 1.3].
• The foundation for software engineering is the process layer. The software engineering
process is the glue that holds the technology layers together and enables rational and timely
development of computer software. Process defines a framework that must be established
for effective delivery of software engineering technology.
• Software engineering methods provide the technical how-to’s for building software.
Methods encompass a broad array of tasks that include communication, requirements
analysis, design modeling, program construction, testing, and support.
• Software engineering tools provide automated or semi-automated support for the process
and the methods. When tools are integrated so that information created by one tool can be
used by another, a system for the support of software development, called computer-aided
software engineering (CASE), is established.
An activity strives to achieve a broad objective (e.g. communication with stakeholders) and is
applied regardless of the application domain, size of the project, complexity of the effort, or degree
of rigor with which software engineering is to be applied.
An action encompasses a set of tasks that produce a major work product (e.g., an architectural
design model).
A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that produces a
tangible outcome.
A process framework establishes the foundation for a complete software engineering process by
identifying a small number of framework activities that are applicable to all software projects,
regardless of their size or complexity. In addition, the process framework encompasses a set of
umbrella activities that are applicable across the entire software process.
• Communication. Before any technical work can commence, it is critically important to
communicate and collaborate with the customer and other stakeholders in order to understand
their objectives and to gather requirements that help define software features and functions.
• Planning. A software project is a complicated journey, and the planning activity creates a
“map” that helps guide the team as it makes the journey. The map—called a software
project plan—defines the software engineering work by describing the technical tasks to
be conducted, the risks that are likely, the resources that will be required, the work products
to be produced, and a work schedule.
• Modeling. A model (a “sketch” of the software) is created to better understand the
requirements and the design that will achieve them.
• Construction. This activity combines code generation and the testing that is required to
uncover errors in the code.
• Deployment. The software is delivered to the customer who evaluates the delivered
product and provides feedback based on the evaluation.
These five generic framework activities can be used during the development of small, simple
programs, the creation of large Web applications, and the engineering of large, complex
computer-based systems.
• Software project tracking and control—allows the software team to assess progress
against the project plan and take any necessary action to maintain the schedule.
• Risk management—assesses risks that may affect the outcome of the project or the quality
of the product.
• Software quality assurance—defines and conducts the activities required to ensure software
quality.
• Measurement—defines and collects process, project, and product measures that assist the
team in delivering software that meets stakeholders’ needs; it can be used in conjunction with
all other framework and umbrella activities.
The software engineering process is not rigid; it should be agile and adaptable. Therefore, a
process adopted for one project might be significantly different from a process adopted for another
project.
Among the differences are:
• Degree to which work tasks are defined within each framework activity
The Fifth Principle: Be Open to the Future. A system with a long lifetime has more value. Never
design yourself into a corner. Before beginning a software project, be sure the software has a
business purpose and that users perceive value in it.
The Sixth Principle: Plan Ahead for Reuse. Reuse saves time and effort. Planning ahead for
reuse reduces the cost and increases the value of both the reusable components and the systems
into which they are incorporated.
The Seventh Principle: Think! Placing clear, complete thought before action almost always
produces better results. When you think about something, you are more likely to do it right.
Management Myths:
Managers with software responsibility, like managers in most disciplines, are often under pressure
to maintain budgets, keep schedules from slipping, and improve quality. Like a drowning person
who grasps at a straw, a software manager often grasps at belief in a software myth.
Myth: We already have a book that’s full of standards and procedures for building software.
Won’t that provide my people with everything they need to know?
Reality:
• The book of standards may very well exist, but is it used?
• Are software practitioners aware of its existence?
• Does it reflect modern software engineering practice?
• Is it complete?
• Is it adaptable?
• Is it streamlined to improve time to delivery while still maintaining a focus on Quality?
Myth: If we get behind schedule, we can add more programmers and catch up.
Reality: Software development is not a mechanistic process like manufacturing. “Adding people
to a late software project makes it later.” At first, this statement may seem counterintuitive.
However, as new people are added, people who were working must spend time educating the
newcomers, thereby reducing the amount of time spent on productive development effort.
Myth: If we decide to outsource the software project to a third party, I can just relax and let that
firm build it.
Reality: If an organization does not understand how to manage and control software projects
internally, it will invariably struggle when it outsources a software project.
Customer Myths:
A customer who requests computer software may be a person at the next desk, a technical group
down the hall, the marketing /sales department, or an outside company that has requested software
under contract. In many cases, the customer believes myths about software because software
managers and practitioners do little to correct misinformation. Myths lead to false expectations
and, ultimately, dissatisfaction with the developers.
Myth: A general statement of objectives is sufficient to begin writing programs - we can fill in
details later.
Reality: Although a comprehensive and stable statement of requirements is not always possible, an
ambiguous statement of objectives is a recipe for disaster. Unambiguous requirements are
developed only through effective and continuous communication between customer and developer.
Myth: Project requirements continually change, but change can be easily accommodated because
software is flexible.
Reality: It’s true that software requirements change, but the impact of change varies with the time
at which it is introduced. When requirement changes are requested early, cost impact is relatively
small. However, as time passes, cost impact grows rapidly – resources have been committed, a
design framework has been established, and change can cause upheaval that requires additional
resources and major design modification.
Practitioner's myths.
Myths that are still believed by software practitioners have been fostered by 50 years of
programming culture. During the early days of software, programming was viewed as an art form.
Old ways and attitudes die hard.
Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to
get done.” Industry data indicate that between 60 and 80 percent of all effort consumed on
software will be consumed after it is delivered to the customer for the first time.
Myth: Until I get the program "running" I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied from the
inception of a project—the formal technical review. Software reviews are a "quality filter" that
have been found to be more effective than testing for finding certain classes of software defects.
Myth: The only deliverable work product for a successful project is the working program.
Reality: A working program is only one part of a software configuration that includes many
elements. Documentation provides a foundation for successful engineering and, more important,
guidance for software support.
Myth: Software engineering will make us create voluminous and unnecessary documentation and
will invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating quality. Better
quality leads to reduced rework. And reduced rework results in faster delivery times. Many
software professionals recognize the fallacy of the myths just described. Regrettably, habitual
attitudes and methods foster poor management and technical practices, even when reality dictates a
better approach. Recognition of software realities is the first step toward formulation of practical
solutions for software engineering.
A process was defined as a collection of work activities, actions, and tasks that are performed
when some work product is to be created. Each of these activities, actions, and tasks resides within
a framework or model that defines their relationship with the process and with one another.
The software process is represented schematically in Figure 2.1. Each framework activity is
populated by a set of software engineering actions. Each software engineering action is defined by
a task set that identifies the work tasks that are to be completed, the work products that will be
produced, the quality assurance points that will be required, and the milestones that will be used
to indicate progress.
As discussed in Chapter 1, a generic process framework for software engineering defines five
framework activities—communication, planning, modeling, construction, and deployment. In
addition, a set of umbrella activities—project tracking and control, risk management, quality
assurance, measurement, and others—is applied throughout the process.
An important aspect of the software process is process flow, which describes how the framework
activities and the actions and tasks that occur within each framework activity are organized with
respect to sequence and time; it is illustrated in Figure 2.2.
A linear process [Fig:2.2(a)] flow executes each of the five framework activities in sequence,
beginning with communication and culminating with deployment.
An iterative process flow [Fig:2.2(b)] repeats one or more of the activities before proceeding to
the next.
An evolutionary process flow [Fig:2.2(c)] executes the activities in a “circular” manner. Each
circuit through the five activities leads to a more complete version of the software.
A parallel process flow [Fig:2.2(d)] executes one or more activities in parallel with other
activities (e.g. modeling for one aspect of the software might be executed in parallel with
construction of another aspect of the software).
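The difference between a linear and an iterative flow can be sketched in a few lines of code. The sketch below is purely illustrative (the function names and the choice of which activities to repeat are assumptions, not part of these notes):

```python
# The five generic framework activities named in the notes.
FRAMEWORK_ACTIVITIES = ["communication", "planning", "modeling", "construction", "deployment"]


def linear_flow(activities):
    """Linear flow: each activity is executed once, in sequence."""
    for activity in activities:
        print(f"performing {activity}")


def iterative_flow(activities, repeats=None):
    """Iterative flow: one or more activities are repeated before proceeding to the next."""
    repeats = repeats or {"modeling": 2, "construction": 2}
    for activity in activities:
        for _ in range(repeats.get(activity, 1)):
            print(f"performing {activity}")


if __name__ == "__main__":
    linear_flow(FRAMEWORK_ACTIVITIES)
    print("---")
    iterative_flow(FRAMEWORK_ACTIVITIES)
```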
These five framework activities provide a basic definition of the software process. An important
question remains: what actions are appropriate for a framework activity, given the nature of the
problem to be solved, the characteristics of the people doing the work, and the stakeholders who
are sponsoring the project?
If the project was considerably more complex with many stakeholders, each with a different set of
requirements, the communication activity might have six distinct actions: inception, elicitation,
elaboration, negotiation, specification, and validation. Each of these software engineering actions
would have many work tasks and a number of distinct work products.
• Each software engineering action can be represented by a number of different task sets.
• Each task set is a collection of software engineering
o work tasks,
o related work products,
o quality assurance points, and
o project milestones.
• Choose a task set that best accommodates the needs of the project and the characteristics
of the software team.
• This implies that a software engineering action can be adapted to the specific needs of the
software project and the characteristics of the project team.
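A task set can be recorded as a small data structure. The following Python sketch is illustrative only (the task names are hypothetical, not taken from these notes); it shows one possible task set for a requirements gathering action on a small, simple project:

```python
# A hypothetical task set for a "requirements gathering" action on a small project.
requirements_gathering_task_set = {
    "work_tasks": [
        "make a list of stakeholders",
        "meet stakeholders to define features and functions",
        "discuss requirements and build a final list",
        "prioritize requirements and note areas of uncertainty",
    ],
    "work_products": ["prioritized requirements list"],
    "quality_assurance_points": ["informal review of the list with stakeholders"],
    "project_milestones": ["requirements list approved"],
}

# A larger, more complex project would choose a bigger task set for the same action.
print(len(requirements_gathering_task_set["work_tasks"]), "work tasks defined")
```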
A process pattern describes a process-related problem that is encountered during software
engineering work, identifies the environment in which it has been encountered, and suggests one
or more proven solutions to the problem. Stated in more general terms, a process pattern provides
you with a template—a consistent method for describing problem solutions within the context of
the software process.
Patterns can be defined at any level of abstraction. In some cases, a pattern might be used to
describe a problem (and solution) associated with a complete process model (e.g., prototyping).
In other situations, patterns can be used to describe a problem (and solution) associated with a
framework activity (e.g., planning) or an action within a framework activity (e.g., project
estimating).
1. Pattern Name. The pattern is given a meaningful name describing it within the context of the
software process (e.g., Technical Reviews).
2. Forces. The environment in which the pattern is encountered and the issues that make the
problem visible and may affect its solution.
3. Type. The pattern type is specified. Three types are possible:
I. Stage pattern—defines a problem associated with a framework activity for the process.
Since a framework activity encompasses multiple actions and work tasks, a stage pattern
incorporates multiple task patterns that are relevant to the stage (framework activity).
E.g., Establishing Communication would incorporate the task pattern Requirements
Gathering and others.
II. Task pattern—defines a problem associated with a software engineering action or work
task that is relevant to successful software engineering practice (e.g., Requirements
Gathering is a task pattern).
III. Phase pattern—defines the sequence of framework activities that occurs within the
process, even when the overall flow of activities is iterative in nature. E.g., Spiral Model
or Prototyping.
4. Initial context. Describes the conditions under which the pattern applies. Prior to
the initiation of the pattern:
(1) What organizational or team-related activities have already occurred?
(2) What is the entry state for the process?
(3) What software engineering information or project information already exists?
7. Resulting Context. Describes the conditions that will result once the pattern has been
successfully implemented. Upon completion of the pattern:
(1) What organizational or team-related activities must have occurred?
(2) What is the exit state for the process?
(3) What software engineering information or project information has been developed?
8. Related Patterns. Provide a list of all process patterns that are directly related to this one.
This may be represented as a hierarchy or in some other diagrammatic form.
9. Known Uses and Examples. Indicate the specific instances in which the pattern is
applicable.
The existence of a software process is no guarantee that software will be delivered on time or that
it will meet the customer’s needs.
Assessment attempts to understand the current state of the software process with the intent of
improving it.
A number of different approaches to software process assessment and improvement have been
proposed over the past few decades.
SPICE (ISO/IEC 15504)—a standard that defines a set of requirements for software process
assessment.
ISO 9001:2000 for Software—a generic standard that applies to any organization that wants
to improve the overall quality of the products, systems, or services that it provides. Therefore,
the standard is directly applicable to software organizations and companies.
• Each process model also prescribes a process flow (also called a work flow)—that is, the
manner in which the process elements are interrelated to one another.
• All software process models can accommodate the generic framework activities, but each
applies a different emphasis to these activities and defines a process flow that invokes each
framework activity in a different manner.
V-model
A variation in the representation of the waterfall model is called the V-model, represented
in Fig. 2.4. The V-model depicts the relationship of quality assurance actions to the
actions associated with communication, modeling, and early construction activities.
As a software team moves down the left side of the V, basic problem requirements are
refined into progressively more detailed and technical representations of the problem and its
solution. Once code has been generated, the team moves up the right side of the V, essentially
performing a series of tests that validate each of the models created as the team moved down the
left side. The V-model provides a way of visualizing how verification and validation actions are
applied to earlier engineering work.
The waterfall model is the oldest paradigm for software engineering. The problems that are
sometimes encountered when the waterfall model is applied are:
1. Real projects rarely follow the sequential flow that the model proposes. Although a linear
model can accommodate iteration, it does so indirectly. As a result, changes can cause
confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The waterfall model
requires this and has difficulty accommodating the natural uncertainty that exists at the
beginning of many projects.
3. The customer must have patience. A working version of the program(s) will not be
available until late in the project time span.
This model is suitable when there are only a limited number of new development efforts and when
requirements are well defined and reasonably stable.
The incremental model combines elements of linear and parallel process flows. Referring
to Fig. 2.5, the incremental model applies linear sequences in a staggered fashion as calendar time
progresses. Each linear sequence produces deliverable “increments” of the software in a manner
that is similar to the increments produced by an evolutionary process flow.
For example, word-processing software developed using the incremental paradigm might
deliver basic file management, editing, and document production functions in the first increment;
more sophisticated editing and document production capabilities in the second increment; spelling
and grammar checking in the third increment; and advanced page layout capability in the fourth
increment.
➢ When an incremental model is used, the first increment is often a core product. That is,
basic requirements are addressed but many extra features remain undelivered. The core
product is used by the customer. As a result of use and/or evaluation, a plan is developed for the
next increment.
➢ The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality. This process is repeated
following the delivery of each increment, until the complete product is produced.
Often, a customer defines a set of general objectives for software but does not identify detailed
requirements for functions and features. In other cases, the developer may be unsure of the
efficiency of an algorithm, the adaptability of an operating system, or the form that human-
machine interaction should take. In these, and many other situations, a prototyping paradigm may
offer the best approach.
The prototyping paradigm (Fig. 2.6) begins with communication. You meet with other
stakeholders to define the overall objectives for the software, identify whatever requirements are
known, and outline areas where further definition is mandatory. A prototyping iteration is planned
quickly, and modeling (in the form of a “quick design”) occurs. A quick design focuses on a
representation of those aspects of the software that will be visible to end users.
The quick design leads to the construction of a prototype. The prototype is deployed and
evaluated by stakeholders, who provide feedback that is used to further refine requirements.
Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders, while at the
same time enabling you to better understand what needs to be done.
The prototype serves as a mechanism for identifying software requirements. If a working prototype
is to be built, you can make use of existing program fragments or apply tools that enable working
programs to be generated quickly. The prototype can serve as “the first system.”
1. Stakeholders see what appears to be a working version of the software, unaware that the
prototype is held together haphazardly, and unaware that in the rush to get it working you haven’t
considered overall software quality or long-term maintainability.
Although problems can occur, prototyping can be an effective paradigm for software
engineering.
The spiral development model is a risk-driven process model generator that is used to
guide multi-stakeholder concurrent engineering of software-intensive systems. It has two main
distinguishing features. One is a cyclic approach for incrementally growing a system’s degree of
definition and implementation while decreasing its degree of risk. The other is a set of anchor
point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory
system solutions.
Using the spiral model, software is developed in a series of evolutionary releases. During
early iterations, the release might be a model or prototype. During later iterations, increasingly
more complete versions of the engineered system are produced.
A spiral model is divided into a set of framework activities defined by the software
engineering team [Fig: 2.7]. As this evolutionary process begins, the software team performs
activities that are implied by a circuit around the spiral in a clockwise direction, beginning at the
center.
Risk is considered as each revolution is made. Anchor point milestones, a combination
of work products and conditions that are attained along the path of the spiral, are noted for each
evolutionary pass.
The first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral might be used to develop a prototype and then
progressively more sophisticated versions of the software. Each pass through the planning region
results in adjustments to the project plan.
The spiral model can be adapted to apply throughout the life of the computer software.
Therefore, the first circuit around the spiral might represent a “concept development project”
that starts at the core of the spiral and continues for multiple iterations until concept development
is complete. The new product will evolve through a number of iterations around the spiral. Later, a
circuit around the spiral might be used to represent a “product enhancement project.”
The spiral model is a realistic approach to the development of large-scale systems and
software. Because software evolves as the process progresses, the developer and customer better
understand and react to risks at each evolutionary level. It maintains the systematic stepwise
approach suggested by the classic life cycle but incorporates it into an iterative framework that
more realistically reflects the real world.
Fig. 2.8 represents one software engineering activity within the modelling activity using a
concurrent modelling approach.
The activity modelling may be in any one of the states noted at any given time, similarly
other activities, actions or tasks (Communication, Construction) can be represented in analogous
manner.
All Software Engineering activities exist concurrently but reside in different states. E.g.
Early in a project the communication activity has completed its 1st iteration and exists in the
awaiting changes state.
The modelling activity, which existed in the inactive state while initial communication was
being completed, now makes a transition into the under-development state.
If however, the customer indicates that changes in requirement must be made, the
modelling activity moves from under-development state to awaiting changes state.
Concurrent modelling defines a series of events that will trigger transitions from state to
state for each of the software engineering activities, actions or tasks.
Concurrent modelling is applicable for all types of software development and provides an
accurate picture of the current state of the project.
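The event-triggered state transitions described above can be sketched as a tiny state machine. The following Python sketch is illustrative only (the state and event names follow the description in the notes; the code itself is an assumption, not part of the concurrent model):

```python
# (state, event) pairs that trigger a transition for a single software engineering activity.
TRANSITIONS = {
    ("inactive", "initial communication completed"): "under development",
    ("under development", "customer requests requirement changes"): "awaiting changes",
    ("awaiting changes", "changes incorporated"): "under development",
}


class Activity:
    def __init__(self, name: str, state: str = "inactive"):
        self.name = name
        self.state = state

    def on_event(self, event: str) -> None:
        """Move to a new state if the (state, event) pair defines a transition."""
        new_state = TRANSITIONS.get((self.state, event))
        if new_state:
            print(f"{self.name}: {self.state} -> {new_state} (event: {event})")
            self.state = new_state


if __name__ == "__main__":
    modeling = Activity("modeling")
    modeling.on_event("initial communication completed")        # inactive -> under development
    modeling.on_event("customer requests requirement changes")  # -> awaiting changes
```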
• Modeling and construction activities begin with the identification of candidate components.
These components can be designed as either conventional software modules or object-
oriented classes or packages of classes, regardless of the technology that is used to create
them.
• The component-based development model leads to software reuse, and reusability provides
software engineers with a number of measurable benefits.
• Formal methods enable you to specify, develop, and verify a computer-based system by
applying a rigorous mathematical notation. A variation on this approach, called
cleanroom software engineering, is currently applied by some software development
organizations.
• When formal methods are used during development, they provide a mechanism for
eliminating many of the problems that are difficult to overcome using other software
engineering paradigms. Ambiguity, incompleteness, and inconsistency can be discovered
and corrected more easily, not through ad hoc review but through the application of
mathematical analysis.
• When formal methods are used during design, they serve as a basis for program
verification, which enables you to discover and correct errors that might otherwise go
undetected. The formal methods model offers the promise of defect-free software.
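As a small illustration of the kind of rigorous notation formal methods rely on, the following Hoare-style specification (an illustrative example, not taken from these notes) states a precondition and postcondition for an integer square-root routine:

```latex
% Illustrative Hoare-style specification of an integer square-root routine.
% Precondition: the input x is a non-negative integer.
% Postcondition: the result r is the largest integer whose square does not exceed x.
\[
\{\, x \ge 0 \,\} \quad r := \mathit{isqrt}(x) \quad
\{\, r \ge 0 \ \wedge\ r^{2} \le x \ \wedge\ x < (r+1)^{2} \,\}
\]
```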
Drawbacks:
• The development of formal models is currently quite time-consuming and expensive.
• Because few software developers have the necessary background to apply formal
methods, extensive training is required.
• It is difficult to use formal models as a communication mechanism for technically
unsophisticated customers.
Grundy provides further discussion of aspects in the context of what he calls aspect-oriented
component engineering (AOCE).
MODULE 2
CHAPTER 1 - UNDERSTANDING REQUIREMENTS
• Requirements engineering is a major software engineering action that begins during the
communication activity and continues into the modeling activity. It must be adapted to
the needs of the process, the project, the product, and the people doing the work.
Requirements engineering provides the appropriate mechanism for understanding what the
customer wants, analyzing need, assessing feasibility, negotiating a reasonable solution, specifying
the solution unambiguously, validating the specification, and managing the requirements as they
are transformed into an operational system.
1. Inception: It establishes a basic understanding of the problem, the people who want a
solution, the nature of the solution that is desired, and the effectiveness of preliminary
communication and collaboration between the other stakeholders and the software team.
2. Elicitation: In this stage, the information needed to prepare and document the requirements
is extracted. It certainly seems simple enough: ask the customer, the users, and others
what the objectives for the system or product are, what is to be accomplished, how the
system or product fits into the needs of the business, and, finally, how the system or product
is to be used on a day-to-day basis.
➢ Problems of understanding. The customers/users are not completely sure of what is
needed, have trouble communicating needs to the system engineer, omit information
believed to be “obvious,” specify requirements that conflict with the needs of other
customers/users, or specify requirements that are ambiguous or untestable.
➢ Problems of volatility. In this problem, the requirements change from time to time
and it is difficult while developing the project.
3. Elaboration: The information obtained from the customer during inception and elicitation
is expanded and refined during elaboration. This task focuses on developing a refined
requirements model that identifies various aspects of software function, behavior, and
information. Elaboration is driven by the creation and refinement of user scenarios that
describe how the end user (and other actors) will interact with the system.
6. Validation: Requirements validation examines the specification to ensure that all software
requirements have been stated unambiguously; that inconsistencies, omissions, and errors
have been detected and corrected; and that the work products conform to the standards
established for the process, the project, and the product. The primary requirements
validation mechanism is the technical review. The review team that validates requirements
includes software engineers, customers, users, and other stakeholders who examine the
specification looking for errors in content or interpretation, areas where clarification may
be required, missing information, inconsistencies, conflicting requirements, or unrealistic
requirements.
Hence, by following the steps below, we can start the requirements engineering process.
• A stakeholder is anyone who benefits in a direct or indirect way from the system that is
being developed.
• Stakeholders include business operations managers, product managers, marketing people,
internal and external customers, end users, consultants, product engineers, software
engineers, support and maintenance engineers, and others.
• Each stakeholder has a different view of the system, achieves different benefits when the
system is successfully developed, and is open to different risks if the development effort
should fail.
• As many stakeholders exist, they all have different views regarding the system to be
developed, hence it is the duty of software engineers to consider all the viewpoints of
stakeholders in a way that allows decision makers to choose an internally consistent set of
requirements for the system.
• For example, the marketing group is interested in functions and features that will excite the
potential market, making the new system easy to sell; end users may want features that
are easy to learn and use.
Collaboration does not necessarily mean that requirements are defined by committee. In many
cases, stakeholders collaborate by providing their view of requirements, but a strong “project
champion” (e.g., a business manager or a senior technologist) may make the final decision about
which requirements make the cut.
Questions asked at the inception of the project should be “context free.” The first set of context-
free questions focuses on the customer and other stakeholders, and the overall project goals and
benefits. For example, you might ask:
These questions help to identify all stakeholders who will have interest in the software to be built.
In addition, the questions identify the measurable benefit of a successful implementation and
possible alternatives to custom software development.
The next set of questions enables you to gain a better understanding of the problem and allows the
customer to voice his or her perceptions about a solution:
• How would you characterize “good” output that would be generated by a successful
solution?
• What problem(s) will this solution address?
• Can you show me (or describe) the business environment in which the solution will be
used?
• Will special performance issues or constraints affect the way the solution is approached?
The final set of questions focuses on the effectiveness of the communication activity itself.
Gause and Weinberg call these “meta-questions” and propose the following list:
• Are you the right person to answer these questions? Are your answers “official”?
• Are my questions relevant to the problem that you have?
• Am I asking too many questions?
• Can anyone else provide additional information?
• Should I be asking you anything else?
These questions will help to “break the ice” and initiate the communication that is essential to
successful elicitation.
The goal is to identify the problem, propose elements of the solution, negotiate different
approaches, and specify a preliminary set of solution requirements in an atmosphere that is
conducive to the accomplishment of the goal.
During inception basic questions and answers establish the scope of the problem and the
overall perception of a solution. Out of these initial meetings, the developer and customers write a
one- or two-page “product request.”
A meeting place, time, and date are selected; a facilitator is chosen; and attendees from the
software team and other stakeholder organizations are invited to participate. The product request
is distributed to all attendees before the meeting date.
While reviewing the product request in the days before the meeting, each attendee is asked to
make a list of objects that are part of the environment that surrounds the system, other objects that
are to be produced by the system, and objects that are used by the system to perform its functions.
In addition, each attendee is asked to make another list of services that manipulate or interact with
the objects. Finally, lists of constraints (e.g., cost, size, business rules) and performance criteria
(e.g., speed, accuracy) are also developed. The attendees are informed that the lists are not expected
to be exhaustive but are expected to reflect each person’s perception of the system.
The lists of objects can be pinned to the walls of the room using large sheets of paper, stuck to
the walls using adhesive-backed sheets, or written on a wall board. After individual lists are
presented in one topic area, the group creates a combined list by eliminating redundant entries,
adding any new ideas that come up during the discussion, but not deleting anything.
• Organize a kickoff meeting with all stakeholders to introduce the project and its goals.
• Use this meeting to explain the importance of gathering accurate requirements and how it
will impact the final product.
Quality function deployment (QFD) is a quality management technique that translates the needs
of the customer into technical requirements for software. QFD “concentrates on maximizing
customer satisfaction from the software engineering process.” To accomplish this, QFD emphasizes
an understanding of what is valuable to the customer and then deploys these values throughout the
engineering process.
• Normal requirements. The objectives and goals that are stated for a product or system
during meetings with the customer. If these requirements are present, the customer is
satisfied. Examples of normal requirements might be requested types of graphical displays,
specific system functions, and defined levels of performance.
• Expected requirements. These requirements are implicit to the product or system and may
be so fundamental that the customer does not explicitly state them. Their absence will be a
cause for significant dissatisfaction.
➢ Examples of expected requirements are: ease of human/machine interaction, overall
operational correctness and reliability, and ease of software installation.
• Exciting requirements. These features go beyond the customer’s expectations and prove
to be very satisfying when present.
For example, a mobile phone may be requested with only standard features, but if the
developer adds a few additional capabilities such as voice search or a multi-touch screen,
the customer is delighted by those features.
Although QFD concepts can be applied across the entire software process, QFD uses
customer interviews and observation, surveys, and examination of historical data as raw data for
the requirements gathering activity. These data are then translated into a table of requirements—
called the customer voice table—that is reviewed with the customer and other stakeholders.
3) Usage Scenarios
• As requirements are gathered, an overall vision of system functions and features begins to
materialize.
• However, it is difficult to move into more technical software engineering activities until
you understand how these functions and features will be used by different classes of end
users.
• To accomplish this, developers and users can create a set of scenarios that identify a thread
of usage for the system to be constructed. The scenarios, often called use cases, provide a
description of how the system will be used.
Each of these work products is reviewed by all people who have participated in requirements
elicitation.
The first step in writing a use case is to define the set of “actors” that will be involved in
the story. Actors are the different people (or devices) that use the system or product within the
context of the function and behavior that is to be described.
Actors represent the roles that people (or devices) play as the system operates. Formally,
an actor is anything that communicates with the system or product and that is external to the system
itself.
Every actor has one or more goals when using the system. It is important to note that an
actor and an end user are not necessarily the same thing. A typical user may play a number of
different roles when using a system, whereas an actor represents a class of external entities (often,
but not always, people) that play just one role in the context of the use case. Different people may
play the role of each actor.
Because requirements elicitation is an evolutionary activity, not all actors are identified
during the first iteration. It is possible to identify primary actors during the first iteration and
secondary actors as more is learned about the system.
Primary actors interact to achieve required system function and derive the intended benefit
from the system. Secondary actors support the system so that primary actors can do their work.
Once actors have been identified, use cases can be developed.
1. The homeowner observes the SafeHome control panel as shown in Fig 5.1 to determine if the
system is ready for input. If the system is not ready, a not ready message is displayed on the LCD
display.
2. The homeowner uses the keypad to key in a four-digit password. The password is compared
with the valid password stored in the system. If the password is incorrect, the control panel beeps
once and resets itself for additional input. If the password is correct, the control panel awaits
further action.
3. The homeowner selects and keys in stay or away to activate the system.
✓ Stay activates only perimeter sensors (inside motion detecting sensors are deactivated).
✓ Away activates all sensors.
4. When activation occurs, a red alarm light can be observed by the homeowner. The basic use
case presents a high-level story that describes the interaction between the actor and the system.
Goal in context To set the system to monitor sensors when the homeowner leaves the house
or remains inside
Preconditions: System has been programmed for a password and to recognize various
sensors.
Trigger: The homeowner decides to “set” the system, i.e., to turn on the alarm
functions
Exceptions: 1. Control panel is not ready: homeowner checks all sensors to determine
which are open; closes them.
2. Password is incorrect (control panel beeps once): homeowner reenters
correct password.
3. Password not recognized: monitoring and response subsystem must be
contacted to reprogram password.
4. Stay is selected: control panel beeps twice and a stay light is lit;
perimeter sensors are activated.
5. Away is selected: control panel beeps three times and an away light is
lit; all sensors are activated.
When available: First increment
Note:
The basic use case presents a high-level story that describes the interaction between the actor and
the system.
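The flow of events in this use case can also be sketched in code. The following minimal Python sketch is only an illustration of the scenario above (the function, variable names, and stored password are assumptions, not part of the SafeHome specification):

```python
VALID_PASSWORD = "1234"  # hypothetical stored password


def arm_system(ready: bool, password: str, mode: str) -> str:
    """Follow the basic use case: check readiness, then the password, then the chosen mode."""
    if not ready:
        return "not ready"                       # homeowner must check and close open sensors
    if password != VALID_PASSWORD:
        return "beep once, reset for input"      # exception 2 in the use case
    if mode == "stay":
        return "beep twice, stay light lit, perimeter sensors activated"
    if mode == "away":
        return "beep three times, away light lit, all sensors activated"
    return "await further action"


if __name__ == "__main__":
    print(arm_system(ready=True, password="1234", mode="away"))
    print(arm_system(ready=True, password="0000", mode="stay"))
```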
The intent of the analysis model is to provide a description of the required informational,
functional, and behavioral domains for a computer-based system. The analysis model is a snapshot
of requirements at any given time.
The specific elements of the requirements model are dictated by the analysis modeling
method that is to be used. However, a set of generic elements is common to most requirements
models.
Scenario Based Elements: The system is described from the user’s point of view using a scenario-
based approach Ex: Use Case diagrams and Activity diagrams.
Class-based Elements: Each usage scenario implies a set of objects that are manipulated as an
actor interacts with the system. These objects are categorized into classes—a collection of things
that have similar attributes and common behaviors (operations). Ex: Class diagram,
Collaboration diagram.
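As a small illustration of how a usage scenario implies classes, the sketch below defines a hypothetical Sensor class with attributes and operations (the names are illustrative assumptions, not taken from these notes):

```python
class Sensor:
    """A class groups similar things by their attributes and their behaviors (operations)."""

    def __init__(self, name: str, sensor_type: str, zone: int):
        # Attributes describe the object.
        self.name = name
        self.sensor_type = sensor_type
        self.zone = zone
        self.armed = False

    # Operations (behaviors) manipulate the attributes.
    def arm(self) -> None:
        self.armed = True

    def disarm(self) -> None:
        self.armed = False

    def status(self) -> str:
        state = "armed" if self.armed else "disarmed"
        return f"{self.name} ({self.sensor_type}, zone {self.zone}): {state}"


if __name__ == "__main__":
    door = Sensor("front door", "perimeter", zone=1)
    door.arm()
    print(door.status())
```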
Behavioral Elements: In Software Engineering, the Behavioral Elements Model is a concept used
to describe the dynamic behavior of a software system. It focuses on how the system's components
interact with each other and with external entities to achieve specific tasks or behaviors. The state
diagram is one method for representing the behavior of a system by depicting its states and the
events that cause the system to change state. This model indicates how the software will respond
to the occurrence of external events. Ex: State diagram and Sequence diagram.
Analysis patterns suggest solutions (e.g., a class, a function, a behavior) within the application
domain that can be reused when modeling many applications.
Geyer-Schulz and Hahsler suggest two benefits that can be associated with the use of analysis
patterns:
First, analysis patterns speed up the development of abstract analysis models that capture the main
requirements of the concrete problem by providing reusable analysis models with examples as well
as a description of advantages and limitations.
Second, analysis patterns facilitate the transformation of the analysis model into a design model
by suggesting design patterns and reliable solutions for common problems. Analysis patterns are
integrated into the analysis model by reference to the pattern name. They are also stored in a
repository so that requirements engineers can use search facilities to find and reuse them.
The intent of negotiation is to develop a project plan that meets stakeholder needs while at
the same time reflecting the real-world constraints (e.g., time, people, budget) that have been
placed on the software team. The best negotiations strive for a “win-win” result. That is,
stakeholders win by getting the system or product that satisfies the majority of their needs and you
win by working to realistic and achievable budgets and deadlines.
Boehm defines a set of negotiation activities at the beginning of each software process
iteration. Rather than a single customer communication activity, the following activities are
defined:
1. Identification of the system or subsystem’s key stakeholders.
2. Determination of the stakeholders’ “win conditions.”
3. Negotiation of the stakeholders’ win conditions to reconcile them into a set of win-win
conditions for all concerned.
During requirements validation, requirements are prioritized by the stakeholders and grouped
within requirements packages that will be implemented as software increments. As the
requirements model is created, the following questions are asked:
• Is each requirement consistent with the overall objectives for the system/product?
• Have all requirements been specified at the proper level of abstraction? That is, do some
requirements provide a level of technical detail that is inappropriate at this stage?
• Is the requirement really necessary or does it represent an add-on feature that may not be essential
to the objective of the system?
• Does each requirement have attribution? That is, is a source (generally, a specific individual)
noted for each requirement?
• Is each requirement achievable in the technical environment that will house the system or
product?
• Does the requirements model properly reflect the information, function, and behavior of the
system to be built?
• Has the requirements model been “partitioned” in a way that exposes progressively more detailed
information about the system?
• Have all patterns been properly validated? Are all patterns consistent with customer
requirements?
These and other questions should be asked and answered to ensure that the requirements model is
an accurate reflection of stakeholder needs and that it provides a solid foundation for design.
• The requirements modeling action results in one or more of the following types of models:
➢ Data models that depict the information domain for the problem.
➢ Flow-oriented models that represent the functional elements of the system and how
they transform data as it moves through the system.
➢ Behavioral models that depict how the software behaves as a consequence of external
“events.”
• The intent of the analysis model is to provide a description of the required informational,
functional, and behavioral domains for a computer-based system. The analysis model is a
snapshot of requirements at any given time.
• These models provide a software designer with information that can be translated to
architectural, interface, and component-level designs. Finally, the requirements model
provides the developer and the customer with the means to assess quality once software is
built.
• Throughout requirements modeling, primary focus is on what, not how. What user
interaction occurs in a particular circumstance, what objects does the system manipulate,
what functions must the system perform, what behaviors does the system exhibit, what
interfaces are defined, and what constraints apply?
• The analysis model bridges the gap between a system-level description that describes
overall system or business functionality as it is achieved by applying software, hardware,
data, human, and other system elements and a software design that describes the software’s
application architecture, user interface, and component-level structure.
Arlow and Neustadt suggest a number of worthwhile rules of thumb that should be followed
when creating the analysis model:
• The model should focus on requirements that are visible within the problem or business
domain. The level of abstraction should be relatively high.
• Delay consideration of infrastructure and other nonfunctional models until design. That
is, a database may be required, but the classes necessary to implement it, the functions
required to access it, and the behavior that will be exhibited as it is used should be
considered only after problem domain analysis has been completed.
• Be certain that the requirements model provides value to all stakeholders. Each
constituency has its own use for the model
• Keep the model as simple as it can be. Don’t create additional diagrams when they add
no new information. Don’t use complex notational forms, when a simple list will do.
Domain analysis doesn’t look at a specific application, but rather at the domain in which
the application resides.
The “specific application domain” can range from avionics to banking, from multimedia
video games to software embedded within medical devices.
The goal of domain analysis is straightforward: to identify common problem-solving
elements that are applicable to all applications within the domain, to find or create those analysis
classes and/or analysis patterns that are broadly applicable so that they may be reused.
• One view of requirements modeling, called structured analysis, considers data and the
processes that transform the data as separate entities. Data objects are modeled in a way
that defines their attributes and relationships.
Each element of the requirements model, shown in the following figure, presents the problem
from a different point of view.
• Scenario-based elements depict how the user interacts with the system and the specific
sequence of activities that occur as the software is used.
• Class-based elements model the objects that the system will manipulate, the operations
that will be applied to the objects to effect the manipulation, relationships between the
objects, and the collaborations that occur between the classes that are defined.
• Behavioral elements depict how external events change the state of the system or the
classes that reside within it.
Scenario-based elements depict how the user interacts with the system and the specific
sequence of activities that occur as the software is used.
Alistair Cockburn characterizes a use case as a “contract for behavior.” The “contract” defines
the way in which an actor uses a computer-based system to accomplish some goal. In essence, a
use case captures the interactions that occur between producers and consumers of information and
the system itself.
A use case describes a specific usage scenario in straightforward language from the point of
view of a defined actor.
These are the questions that must be answered if use cases are to provide value as a requirements
modeling tool:
• what to write about,
• how much to write about it,
• how detailed to make the description, and
• how to organize the description?
To begin developing a set of use cases, list the functions or activities performed by a specific actor.
Use case: Access camera surveillance via the Internet—display camera views (ACS-DCV)
Actor: homeowner
Each step in the primary scenario is evaluated by asking the following questions:
• Can the actor take some other action at this point?
• Is it possible that the actor will encounter some error condition at this point? If so, what
might it be?
• Is it possible that the actor will encounter some other behavior at this point (e.g., behavior
that is invoked by some event outside the actor’s control)? If so, what might it be?
• Are there cases in which some “validation function” occurs during this use case? This
implies that validation function is invoked and a potential error condition might occur.
• Are there cases in which a supporting function (or actor) will fail to respond appropriately?
For example, a user action awaits a response but the function that is to respond times out.
When a use case involves a critical activity or describes a complex set of steps with a significant
number of exceptions, a more formal approach may be desirable.
The typical outline for a formal use case can be organized in the following manner (a brief sketch follows the list):
• The goal in context identifies the overall scope of the use case.
• The precondition describes what is known to be true before the use case is initiated.
• The trigger identifies the event or condition that “gets the use case started”
• The scenario lists the specific actions that are required by the actor and the appropriate
system responses.
• Exceptions identify the situations uncovered as the preliminary use case is refined
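To make the outline concrete, the following sketch (in Python) records a formal use case as a simple data structure; the ACS-DCV field values shown are illustrative assumptions, not quoted from the original text.

from dataclasses import dataclass, field
from typing import List

@dataclass
class FormalUseCase:
    name: str                 # use-case identifier
    goal_in_context: str      # overall scope of the use case
    precondition: str         # what is known to be true before the use case starts
    trigger: str              # event or condition that gets the use case started
    scenario: List[str] = field(default_factory=list)    # actor actions and system responses
    exceptions: List[str] = field(default_factory=list)  # situations uncovered as the use case is refined

# Illustrative instance for the ACS-DCV use case (values are assumptions).
acs_dcv = FormalUseCase(
    name="ACS-DCV: Access camera surveillance via the Internet",
    goal_in_context="Homeowner views camera output from a remote location",
    precondition="System is configured; homeowner has a valid user ID and password",
    trigger="The homeowner decides to check the house while away",
    scenario=[
        "Homeowner logs on to the web site",
        "Homeowner selects surveillance and picks a camera",
        "System displays the chosen camera view",
    ],
    exceptions=[
        "User ID or password is incorrect",
        "Selected camera is not available",
    ],
)
print(acs_dcv.trigger)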
However, scenario-based modeling is appropriate for a significant majority of all situations that
you will encounter as a software engineer.
The UML activity diagram supplements the use case by providing a graphical representation of
the flow of interaction within a specific scenario. Similar to the flowchart,
An activity diagram uses:
▪ Rounded rectangles to imply a specific system function
▪ Arrows to represent flow through the system
▪ Decision diamonds to depict a branching decision.
▪ Solid horizontal lines to indicate that parallel activities are occurring.
A UML activity diagram represents the actions and decisions that occur as some function is
performed.
The UML swimlane diagram is a useful variation of the activity diagram and allows you to
represent the flow of activities described by the use case and at the same time indicate which actor
or analysis class has responsibility for the action described by an activity rectangle.
Responsibilities are represented as parallel segments that divide the diagram vertically, like the
lanes in a swimming pool.
A data object can be an external entity (e.g., anything that produces or consumes
information), a thing (e.g., a report or a display), an occurrence (e.g., a telephone call) or event
(e.g., an alarm), a role (e.g., salesperson), an organizational unit (e.g., accounting department),
a place (e.g., a warehouse), or a structure (e.g., a file).
For example, a person or a car can be viewed as a data object in the sense that either can
be defined in terms of a set of attributes. The description of the data object incorporates the data
object and all of its attributes.
A data object encapsulates data only—there is no reference within a data object to
operations that act on the data. Therefore, the data object can be represented as a table as shown in
following table. The headings in the table reflect attributes of the object.
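As an illustration, the sketch below represents a car data object as an attribute-only record; consistent with the text, it carries data but no operations that act on that data. The specific attributes chosen are assumptions for the example.

from dataclasses import dataclass

# A data object encapsulates data only: attributes, no operations that act on the data.
@dataclass
class Car:
    make: str
    model: str
    body_type: str
    price: int

# Each instance corresponds to one row of the table; the attribute names are the column headings.
car = Car(make="Lexus", model="LS400", body_type="sedan", price=32000)
print(car)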
2.4.3 Relationships
Data objects are connected to one another in different ways. Consider the two data objects,
person and car. These objects can be represented using the following simple notation and
relationships are 1) A person owns a car, 2) A person is insured to drive a car.
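A minimal sketch of these two relationships, expressed simply as references between person and car objects; the class and field names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Car:
    make: str
    model: str

@dataclass
class Person:
    name: str
    owns: List[Car] = field(default_factory=list)              # "a person owns a car"
    insured_to_drive: List[Car] = field(default_factory=list)  # "a person is insured to drive a car"

family_car = Car(make="Toyota", model="Corolla")
owner = Person(name="Homeowner", owns=[family_car], insured_to_drive=[family_car])
print(len(owner.owns), len(owner.insured_to_drive))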
Class-based modeling represents the objects that the system will manipulate, the
operations that will be applied to the objects to effect the manipulation, relationships between
the objects, and the collaborations that occur between the classes that are defined.
The elements of a class-based model include classes and objects, attributes, operations, class
responsibility collaborator (CRC) models, collaboration diagrams, and packages.
We can begin to identify classes by examining the usage scenarios developed as part of the
requirements model and performing a “grammatical parse” on the use cases developed for
the system to be built. Analysis classes typically manifest themselves as one of the following:
• External entities (e.g., other systems, devices, people) that produce or consume
information to be used by a computer-based system.
• Things (e.g., reports, displays, letters, signals) that are part of the information domain for
the problem.
• Roles (e.g., manager, engineer, salesperson) played by people who interact with the system.
• Organizational units (e.g., division, group, team) that are relevant to an application.
• Places (e.g., manufacturing floor or loading dock) that establish the context of the problem
and the overall function of the system.
Coad and Yourdon suggest six selection characteristics that should be used as you consider each
potential class for inclusion in the analysis model:
1. Retained information. The potential class will be useful during analysis only if information
about it must be remembered so that the system can function.
2. Needed services. The potential class must have a set of identifiable operations that can change
the value of its attributes in some way.
3. Multiple attributes. During requirement analysis, the focus should be on “major” information;
a class with a single attribute may, in fact, be useful during design, but is probably better
represented as an attribute of another class during the analysis activity.
4. Common attributes. A set of attributes can be defined for the potential class and these attributes
apply to all instances of the class.
5. Common operations. A set of operations can be defined for the potential class and these
operations apply to all instances of the class.
6. Essential requirements. External entities that appear in the problem space and produce or
consume information essential to the operation of any solution for the system will almost always
be defined as classes in the requirements model.
Attributes describe a class that has been selected for inclusion in the requirements model.
In essence, it is the attributes that define the class—that clarify what is meant by the class in the
context of the problem space.
To develop a meaningful set of attributes for an analysis class, you should study each use
case and select those “things” that reasonably “belong” to the class.
Operations define the behavior of an object. Although many different types of operations
exist, they can generally be divided into four broad categories: (1) operations that manipulate data
in some way (e.g., adding, deleting, reformatting, selecting), (2) operations that perform a
computation, (3) operations that inquire about the state of an object, and (4) operations that monitor
an object for the occurrence of a controlling event.
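The sketch below shows one analysis class with a few attributes and one operation from each of the four broad categories; the class and member names are assumptions chosen only to illustrate the idea.

class Sensor:
    """Illustrative analysis class: attributes plus the four kinds of operations."""

    def __init__(self, sensor_id, sensor_type, location):
        # Attributes that clarify what is meant by the class in the problem space.
        self.sensor_id = sensor_id
        self.sensor_type = sensor_type
        self.location = location
        self.armed = False
        self.readings = []

    def add_reading(self, value):        # (1) manipulates data (adds a value)
        self.readings.append(value)

    def average_reading(self):           # (2) performs a computation
        return sum(self.readings) / len(self.readings) if self.readings else 0.0

    def is_armed(self):                  # (3) inquires about the state of the object
        return self.armed

    def exceeds(self, threshold):        # (4) monitors the object for a controlling event
        return any(r > threshold for r in self.readings)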
A CRC model is really a collection of standard index cards that represent classes.
The cards are divided into three sections. Along the top of the card, you write the name of the
class. In the body of the card, you list the class responsibilities on the left and the collaborators on
the right. The CRC model may make use of actual or virtual index cards. The intent is to develop
an organized representation of classes. Responsibilities are the attributes and operations that are
relevant for the class. i.e., a responsibility is “anything the class knows or does” Collaborators
are those classes that are required to provide a class with the information needed to complete a
responsibility. In general, a collaboration implies either a request for information or a request for
some action.
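A CRC card can be mimicked with a small record that holds the class name, its responsibilities, and its collaborators; the FloorPlan example is an assumption used only to show the card layout.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CRCCard:
    class_name: str
    responsibilities: List[str] = field(default_factory=list)  # what the class knows or does
    collaborators: List[str] = field(default_factory=list)     # classes needed to complete a responsibility

floor_plan_card = CRCCard(
    class_name="FloorPlan",
    responsibilities=["defines floor plan name and type", "incorporates walls, doors, and windows"],
    collaborators=["Wall", "Camera"],
)
print(floor_plan_card.class_name, floor_plan_card.collaborators)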
Classes: The taxonomy of class types can be extended by considering the following categories (a brief sketch follows the list):
• Entity classes, also called model or business classes, are extracted directly from the
statement of the problem. These classes typically represent things that are to be stored in a
database and persist throughout the duration of the application.
• Boundary classes are used to create the interface that the user sees and interacts with as
the software is used. Boundary classes are designed with the responsibility of managing
the way entity objects are represented to users.
• Controller classes manage a “unit of work” from start to finish. That is, controller classes
can be designed to manage (1) the creation or update of entity objects, (2) the instantiation
of boundary objects as they obtain information from entity objects, (3) complex
communication between sets of objects, (4) validation of data communicated between
objects or between the user and the application. In general, controller classes are not
considered until the design activity has begun.
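To ground the three class types, the stubs below assume a simple camera-surveillance feature: Camera acts as an entity class, CameraView as a boundary class, and DisplayCameraController as a controller class that manages the unit of work between them. All names are illustrative assumptions.

class Camera:
    """Entity class: problem-domain data that persists (e.g., stored in a database)."""
    def __init__(self, camera_id, location):
        self.camera_id = camera_id
        self.location = location

class CameraView:
    """Boundary class: manages how entity objects are represented to the user."""
    def show(self, camera):
        print(f"Displaying feed from camera {camera.camera_id} at {camera.location}")

class DisplayCameraController:
    """Controller class: manages the 'unit of work' from request to display."""
    def __init__(self, cameras):
        self.cameras = cameras

    def display(self, camera_id):
        camera = self.cameras[camera_id]   # obtain information from the entity object
        CameraView().show(camera)          # instantiate the boundary object to present it

controller = DisplayCameraController({"C1": Camera("C1", "driveway")})
controller.display("C1")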
Responsibilities: Wirfs-Brock and her colleagues suggest five guidelines for allocating
responsibilities to classes:
1. System intelligence should be distributed across classes to best address the needs
of the problem. Every application encompasses a certain degree of intelligence; that is,
what the system knows and what it can do.
3. Information and the behavior related to it should reside within the same class. This
achieves the object-oriented principle called encapsulation. Data and the processes that
manipulate the data should be packaged as a cohesive unit.
4. Information about one thing should be localized with a single class, not distributed
across multiple classes. A single class should take on the responsibility for storing and
manipulating a specific type of information. This responsibility should not, in general,
be shared across a number of classes. If information is distributed, software becomes
more difficult to maintain and more challenging to test.
A class can fulfill a particular responsibility in one of two ways: it can use its own operations to
manipulate its own attributes, or it can collaborate with other classes.
When a complete CRC model has been developed, stakeholders can review the model using
the following approach:
1. All participants in the review (of the CRC model) are given a subset of the CRC model
index cards. Cards that collaborate should be separated (i.e., no reviewer should have
two cards that collaborate).
2. All use-case scenarios (and corresponding use-case diagrams) should be organized into
categories.
3. The review leader reads the use case deliberately. As the review leader comes to a
named object, she passes a token to the person holding the corresponding class index
card.
4. When the token is passed, the holder of the card is asked to describe the responsibilities
noted on the card. The group determines whether one (or more) of the responsibilities
satisfies the use-case requirement.
5. If the responsibilities and collaborations noted on the index cards cannot accommodate
the use case, modifications are made to the cards. This may include the definition of
new classes (and corresponding CRC index cards) or the specification of new or revised
responsibilities or collaborations on existing cards.
2.5.6 Analysis Packages: An important part of analysis modeling is categorization. That is,
various elements of the analysis model (e.g., use cases, analysis classes) are categorized in a
manner that packages them as a grouping—called an analysis package—that is given a
representative name.
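As a hedged illustration, an analysis package can be mimicked as a named grouping of analysis classes; the package and class names below are assumptions (a video-game-style example).

# Illustrative grouping of analysis classes into named analysis packages.
analysis_packages = {
    "EnvironmentPkg": ["Tree", "Landscape", "Road", "Wall", "Building"],
    "RulesOfTheGamePkg": ["RulesOfMovement", "ConstraintsOnAction"],
    "CharactersPkg": ["Player", "Protagonist", "Antagonist", "SupportingRole"],
}

for package_name, classes in analysis_packages.items():
    print(package_name, "->", ", ".join(classes))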
MODULE 3
CHAPTER 1 - AGILE DEVELOPMENT
➢ An agile team is a nimble team able to appropriately respond to changes. Change is what software
development is very much about. Changes in the software being built, changes to the team
members, changes because of new technology, changes of all kinds that may have an impact on
the product they build or the project that creates the product. Support for change should be built
into everything we do in software; we embrace it because it is the heart and soul of software.
➢ An agile team recognizes that software is developed by individuals working in teams and that the
skills of these people, their ability to collaborate is at the core for the success of the project.
➢ The aim of an agile process is to deliver a working model of the software quickly to the customer.
For example, Extreme Programming (XP) is the best-known agile process.
➢ In conventional software development, the cost of change increases nonlinearly as a project
progresses (solid black curve in the following figure).
➢ An agile process reduces the cost of change because software is released in increments and
change can be better controlled within an increment.
➢ Proponents of agility argue that a well-designed agile process “flattens” the cost-of-change curve shown in
the following figure (shaded, solid curve), allowing a software team to accommodate changes late in
a software project without dramatic cost and time impact.
➢ When incremental delivery is coupled with other agile practices such as continuous unit testing
and pair programming, the cost of making a change is attenuated (reduced). Although debate about
the degree to which the cost curve flattens is ongoing, there is evidence to suggest that a
significant reduction in the cost of change can be achieved.
Any agile software process is characterized in a manner that addresses a number of key
assumptions about the majority of software projects:
1. It is difficult to predict in advance which software requirements will persist and which will
change. It is equally difficult to predict how customer priorities will change as the project
proceeds.
2. For many types of software, design and construction are interleaved. That is, both activities
should be performed in tandem so that design models are proven as they are created. It is
difficult to predict how much design is necessary before construction is used to prove the
design.
3. Analysis, design, construction, and testing are not as predictable (from a planning point of
view) as we might like.
Given these assumptions, how can a process manage unpredictability? The answer lies in process
adaptability. An agile process, therefore, must be adaptable. But continual adaptation without
forward progress accomplishes little. Therefore, an agile software process must adapt incrementally.
This iterative approach enables the customer to evaluate the software increment regularly, provide
necessary feedback to the software team, and influence the process adaptations that are made to
accommodate the feedback.
1. Our highest priority is to satisfy the customer through early and continuous delivery of
valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change
for the customer’s competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they
need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users
should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self– organizing teams
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts
its behavior accordingly.
Not every agile process model applies these 12 principles with equal weight, and some models
choose to ignore (or at least downplay) the importance of one or more of the principles.
• There is debate about the benefits and applicability of agile software development as opposed
to more conventional software engineering processes (which tend to produce documents rather
than a working product).
• Even within the agile community, there are many proposed process models, each with a different
approach to agility.
“Agile development focuses on the talents and skills of individuals, molding the process to specific
people and teams.” The key point in this statement is that the process molds to the needs of the people
and the team.
2. Common focus. Although members of the agile team may perform different tasks and bring
different skills to the project, all should be focused on one goal—to deliver a working
software increment to the customer within the time promised. To achieve this goal, the team
will also focus on continual adaptations (small and large) that will make the process fit the
needs of the team.
4. Decision-making ability. Any good software team (including agile teams) must be allowed
the freedom to control its own destiny. This implies that the team is given autonomy—
decision-making authority for both technical and project issues.
5. Fuzzy problem-solving ability. Software managers must recognize that the agile team will
continually have to deal with ambiguity and will continually be buffeted by change.
6. Mutual trust and respect. The agile team must become what DeMarco and Lister call a
“jelled” team. A jelled team exhibits the trust and respect that are necessary to make them “so
strongly knit that the whole is greater than the sum of the parts.”
Extreme Programming (XP), the most widely used approach to agile software development,
emphasizes business results first and takes an incremental, get-something-started approach to
building the product, using continual testing and revision. XP was proposed by Kent Beck during the
late 1980s.
1.4.1 XP Values
Beck defines a set of five values that establish a foundation for all work performed as part of XP—
communication, simplicity, feedback, courage, and respect. Each of these values is used as a
driver for specific XP activities, actions, and tasks.
To achieve simplicity, XP restricts developers to design only for immediate needs, rather than
consider future needs. The intent is to create a simple design that can be easily implemented in code.
If the design must be improved, it can be refactored at a later time.
Feedback is derived from three sources: the implemented software itself, the customer, and
other software team members. By designing and implementing an effective testing strategy the
software provides the agile team with feedback. XP makes use of the unit test as its primary testing
tactic. As each class is developed, the team develops a unit test to exercise each operation according
to its specified functionality.
Beck argues that strict adherence to certain XP practices demands courage. A better word
might be discipline. An agile XP team must have the discipline (courage) to design for today,
recognizing that future requirements may change dramatically, thereby demanding substantial rework
of the design and implemented code.
By following each of these values, the agile team inculcates respect among its members,
between other stakeholders and team members, and indirectly, for the software itself. As they achieve
successful delivery of software increments, the team develops growing respect for the XP process.
Extreme Programming uses an object-oriented approach as its preferred development paradigm and
encompasses a set of rules and practices that occur within the context of four framework activities:
planning, design, coding, and testing. Following figure illustrates the XP process and notes some of
the key ideas and tasks that are associated with each framework activity.
1) Planning. The planning activity begins with listening—a requirements gathering activity.
• Listening leads to the creation of a set of “stories” (also called user stories) that describe
required output, features, and functionality for software to be built.
• Each story is written by the customer and is placed on an index card. The customer
assigns a value (i.e., a priority) to the story based on the overall business value of the
feature or function.
• Members of the XP team then assess each story and assign a cost— measured in
development weeks—to it.
• If the story is estimated to require more than three development weeks, the customer is asked to
split it into smaller stories, and the assignment of value and cost occurs again. It is important to note
that new stories can be written at any time (a story-card sketch follows this list).
• The stories with highest value will be moved up in the schedule and implemented first
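A minimal sketch of an XP story card follows: the customer assigns a business value, the team assigns a cost in development weeks, and a story costing more than three weeks is flagged for splitting. The story titles and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    value: int          # priority assigned by the customer (higher means more business value)
    cost_weeks: float   # cost estimated by the XP team, in development weeks

    def needs_split(self):
        # Stories estimated at more than three development weeks are split into smaller stories.
        return self.cost_weeks > 3

stories = [
    UserStory("Display camera views via the Internet", value=90, cost_weeks=2),
    UserStory("Full home-automation dashboard", value=60, cost_weeks=5),
]

# Highest-value stories move up in the schedule and are implemented first.
for story in sorted(stories, key=lambda s: s.value, reverse=True):
    print(story.title, "-> split into smaller stories" if story.needs_split() else "-> schedule as is")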
2) Design.
• Refactoring is the process of changing a software system in such a way that it does not alter
the external behavior of the code yet improves the internal structure, as sketched below.
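A tiny refactoring sketch: the external behavior (the returned total) is unchanged while the internal structure improves. The function and tax rate are invented for illustration.

# Before refactoring: duplicated looping logic and a magic number buried in the code.
def invoice_total_before(prices):
    total = 0
    for p in prices:
        total = total + p
    total = total + total * 0.18
    return total

# After refactoring: same external behavior, clearer internal structure.
TAX_RATE = 0.18

def invoice_total(prices, tax_rate=TAX_RATE):
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

# External behavior is preserved.
assert abs(invoice_total_before([10, 20]) - invoice_total([10, 20])) < 1e-9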
3) Coding. After stories are developed and preliminary design work is done, the team does not
move directly to code; instead, it develops a series of unit tests for each of the stories that is to be
included in the current release (software increment). (A test-first sketch follows these points.)
• Once the unit test has been created, the developer is better able to focus on what must
be implemented to pass the test.
• Once the code is complete, it can be unit-tested immediately, thereby providing immediate
feedback to the developers.
• A key concept during the coding activity is pair programming, i.e., two people work
together at one computer workstation to create code for a story.
• As pair programmers complete their work, the code they develop is integrated with the
work of others.
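A hedged test-first sketch in the spirit of XP: the unit test is written before the code it exercises, using Python's built-in unittest framework so it can be automated and rerun as a regression test. The story (password validation) and the 8-character rule are illustrative assumptions.

import unittest

# Step 1: write the unit test first, from the story's specified functionality.
class TestPasswordValidator(unittest.TestCase):
    def test_rejects_short_password(self):
        self.assertFalse(is_valid_password("abc"))

    def test_accepts_long_enough_password(self):
        self.assertTrue(is_valid_password("correct-horse"))

# Step 2: write just enough code to make the tests pass.
def is_valid_password(candidate):
    return len(candidate) >= 8

if __name__ == "__main__":
    unittest.main()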
4) Testing. The creation of unit tests before coding commences is a key element of the XP
approach. The unit tests that are created should be implemented using a framework that
enables them to be automated. This encourages a regression testing strategy whenever code is
modified.
• As the individual unit tests are organized into a “universal testing suite” integration and
validation testing of the system can occur on a daily basis. This provides the XP team with a
continual indication of progress and also can raise warning flags early if things go awry.
Wells states: “Fixing small problems every few hours takes less time than fixing huge
problems just before the deadline.”
• XP acceptance tests, also called customer tests, are specified by the customer and focus on
overall system features and functionality that are visible and reviewable by the customer.
Acceptance tests are derived from user stories that have been implemented as part of a
software release.
1.4.3 Industrial XP
Joshua Kerievsky describes Industrial Extreme Programming (IXP) in the following manner:
“IXP is an organic evolution of XP. It is imbued with XP’s minimalist, customer-centric, test-driven
spirit. IXP differs most from the original XP in its greater inclusion of management, its expanded role
for customers, and its upgraded technical practices.” IXP incorporates six new practices that are
designed to help ensure that an XP project works successfully for significant projects within a large
organization.
Readiness assessment. Prior to the initiation of an IXP project, the organization should conduct a
readiness assessment. The assessment ascertains whether (1) an appropriate development
environment exists to support IXP, (2) the team will be populated by the proper set of stakeholders,
(3) the organization has a distinct quality program and supports continuous improvement, (4) the
organizational culture will support the new values of an agile team, and (5) the broader project
community will be populated appropriately.
Project community. Classic XP suggests that the right people be used to populate the agile team to
ensure success. The implication is that people on the team must be well-trained, adaptable and
skilled, and have the proper temperament to contribute to a self-organizing team. When XP is to be
applied for a significant project in a large organization, the concept of the “team” should morph into
that of a community. A community may have a technologist and customers who are central to the
success of a project as well as many other stakeholders (e.g., legal staff, quality auditors,
manufacturing or sales types) who “are often at the periphery of an IXP project yet they may play
important roles on the project”. In IXP, the community members and their roles should be explicitly
defined and mechanisms for communication and coordination between community members should
be established.
Project chartering. The IXP team assesses the project itself to determine whether an appropriate
business justification for the project exists and whether the project will further the overall goals and
objectives of the organization. Chartering also examines the context of the project to determine how it
complements, extends, or replaces existing systems or processes.
Test-driven management. An IXP project requires measurable criteria for assessing the state of the
project and the progress that has been made to date. Test-driven management establishes a series of
measurable “destinations” and then defines mechanisms for determining whether or not these
destinations have been reached.
Retrospectives. An IXP team conducts a specialized technical review after a software increment is
delivered. Called a retrospective, the review examines “issues, events, and lessons-learned” across a
software increment and/or the entire software release. The intent is to improve the IXP process.
Continuous learning. Because learning is a vital part of continuous process improvement, members
of the XP team are encouraged (and possibly, incented) to learn new methods and techniques that can
lead to a higher quality product.
Among the issues raised in the ongoing debate about XP are:
• Conflicting customer needs. Many projects have multiple customers, each with his or her own set
of needs.
• Requirements are expressed informally. User stories and acceptance tests are the only
explicit manifestation of requirements in XP; a more formal specification is often needed to remove
inconsistencies and errors before the system is built.
• Lack of formal design. When complex systems are built, the design must define the overall
structure of the software if it is to exhibit quality.
Other agile process models have been proposed and are in use across the industry. Among the
most common are:
Adaptive Software Development (ASD) has been proposed by Jim Highsmith as a technique
for building complex software and systems. The philosophical underpinnings of ASD focus on
human collaboration and team self-organization.
Highsmith argues that an agile, adaptive development approach based on collaboration is “as
much a source of order in our complex interactions as discipline and engineering.” He defines an
ASD “life cycle” that incorporates three phases: speculation, collaboration, and learning.
During speculation, the project is initiated and adaptive cycle planning is conducted. Adaptive
cycle planning uses project initiation information—the customer’s mission statement, project
constraints (e.g., delivery dates or user descriptions), and basic requirements—to define the set of
release cycles (software increments) that will be required for the project.
Motivated people use collaboration in a way that multiplies their talent and creative output beyond
their absolute numbers. This approach is a recurring theme in all agile methods. But collaboration is
not easy. It encompasses communication and teamwork, but it also emphasizes individualism,
because individual creativity plays an important role in collaborative thinking. It is, above all, a
matter of trust. People working together must trust one another to (1) criticize without animosity, (2)
assist without resentment, (3) work as hard as or harder than they do, (4) have the skill set to
contribute to the work at hand, and (5) communicate problems or concerns in a way that leads to
effective action.
As members of an ASD team begin to develop the components that are part of an adaptive cycle, the
emphasis is on “learning” as much as it is on progress toward a completed cycle.
ASD teams learn in three ways: focus groups, technical reviews, and project postmortems. ASD’s
overall emphasis on the dynamics of self-organizing teams, interpersonal collaboration, and
individual and team learning yield software project teams that have a much higher likelihood of
success.
Scrum
Scrum is an agile software development method that was conceived by Jeff Sutherland and his
development team in the early 1990s. Scrum principles are consistent with the agile manifesto and are
used to guide development activities within a process that incorporates the following framework
activities: requirements, analysis, design, evolution, and delivery. Within each framework activity,
work tasks occur within a process pattern called a sprint. The work conducted within a sprint is
adapted to the problem at hand and is defined and often modified in real time by the Scrum team. The
overall flow of the Scrum process is illustrated in following figure.
Scrum emphasizes the use of a set of software process patterns that have proven effective for projects
with tight timelines, changing requirements, and business criticality. Each of these process patterns
defines a set of development actions:
Backlog—a prioritized list of project requirements or features that provide business value for the
customer. Items can be added to the backlog at any time. The product manager assesses the backlog
and updates priorities as required.
Sprints—consist of work units that are required to achieve a requirement defined in the backlog that
must be fit into a predefined time-box (typically 30 days). Changes (e.g., backlog work items) are not
introduced during the sprint. Hence, the sprint allows team members to work in a short-term, but
stable environment.
Scrum meetings—are short (typically 15-minute) meetings held daily by the Scrum team. Three key
questions are asked and answered by all team members:
o What did you do since the last team meeting?
o What obstacles are you encountering?
o What do you plan to accomplish by the next team meeting?
A team leader, called a Scrum master, leads the meeting and assesses the responses from each
person. The Scrum meeting helps the team to uncover potential problems as early as possible. Also,
these daily meetings lead to “knowledge socialization”
Demos—deliver the software increment to the customer so that functionality that has been
implemented can be demonstrated and evaluated by the customer. It is important to note that the
demo may not contain all planned functionality, but rather those functions that can be delivered
within the time-box that was established.
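A minimal sketch of the Scrum flow described above: a prioritized backlog, a fixed time-box, and a sprint that pulls only the highest-priority items that fit. All items and estimates are illustrative assumptions.

# Illustrative product backlog: (priority, feature, estimated days); lower number = higher priority.
product_backlog = [
    (1, "User login", 5),
    (2, "View account balance", 8),
    (3, "Transfer funds", 12),
    (4, "Monthly statement export", 10),
]

SPRINT_TIME_BOX_DAYS = 30  # typical fixed time-box; backlog changes are not introduced during the sprint

def plan_sprint(backlog, capacity_days):
    """Pull the highest-priority items that fit the time-box; the rest stay on the backlog."""
    sprint, remaining = [], capacity_days
    for priority, feature, days in sorted(backlog):
        if days <= remaining:
            sprint.append(feature)
            remaining -= days
    return sprint

print(plan_sprint(product_backlog, SPRINT_TIME_BOX_DAYS))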
The Dynamic Systems Development Method (DSDM) defines a life cycle with three different iterative
cycles, preceded by two additional life-cycle activities:
• Business study—establishes the functional and information requirements that will allow the
application to provide business value; also, defines the basic application architecture and
identifies the maintainability requirements for the application.
• Design and build iteration—revisits prototypes built during functional model iteration to
ensure that each has been engineered in a manner that will enable it to provide operational
business value for end users. In some cases, functional model iteration and design and build
iteration occur concurrently.
Crystal
Alistair Cockburn and Jim Highsmith created the Crystal family of agile methods in order
to achieve a software development approach that puts a premium on “maneuverability” during what
Cockburn characterizes as “a resource limited, cooperative game of invention and communication,
with a primary goal of delivering useful, working software and a secondary goal of setting up for the
next game”
The Crystal family is actually a set of example agile processes that have been proven effective
for different types of projects. The intent is to allow agile teams to select the member of the crystal
family that is most appropriate for their project and environment.
Feature Driven Development (FDD) was originally conceived by Peter Coad and his
colleagues as a practical process model for object-oriented software engineering. Stephen Palmer and
John Felsing have extended and improved Coad’s work, describing an adaptive, agile process that can
be applied to moderately sized and larger software projects.
Like other agile approaches, FDD adopts a philosophy that (1) emphasizes collaboration among
people on an FDD team; (2) manages problem and project complexity using feature-based
decomposition followed by the integration of software increments, and (3) communication of
technical detail using verbal, graphical, and text-based means.
In the context of FDD, a feature “is a client-valued function that can be implemented in two
weeks or less.” The emphasis on the definition of features provides the following benefits:
• Because features are small blocks of deliverable functionality, users can describe them more
easily; understand how they relate to one another more readily; and better review them for
ambiguity, error, or omissions.
• Since a feature is the FDD deliverable software increment, the team develops operational
features every two weeks.
• Because features are small, their design and code representations are easier to inspect
effectively.
• Project planning, scheduling, and tracking are driven by the feature hierarchy, rather than an
arbitrarily adopted software engineering task set.
Coad and his colleagues suggest the following template for defining a feature:
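The feature-definition template itself appears as a figure in the source text; as an illustration only (not the authors' verbatim template), client-valued features for a simple sales system might be listed as below, each small enough to implement in two weeks or less.

# Illustrative FDD-style feature list; each entry names one small, client-valued function.
features = [
    "Calculate the total of a sale",
    "Display the technical specifications of a product",
    "Add a product to the shopping cart",
    "Generate the monthly sales report for a store",
]

# Planning and tracking are driven by the feature list rather than an arbitrary task set.
for number, feature in enumerate(features, start=1):
    print(f"Feature {number}: {feature}")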
FDD provides greater emphasis on project management guidelines and techniques than many other
agile methods. FDD defines six milestones during the design and implementation of a feature:
“design walkthrough, design, design inspection, code, code inspection, promote to build”
Lean Software Development (LSD) has adapted the principles of lean manufacturing to the
world of software engineering. The lean principles that inspire the LSD process can be summarized
as eliminate waste, build quality in, create knowledge, defer commitment, deliver fast, respect people,
and optimize the whole. Each of these principles can be adapted to the software process.
For example, eliminate waste, within the context of an agile software project, can be interpreted as removing any work product, feature, or activity that adds no value for the customer.
Agile modeling adopts all of the values that are consistent with the agile manifesto. The agile
modeling philosophy recognizes that an agile team must have the courage to make decisions that may
cause it to reject a design and refactor. The team must also have the humility to recognize that
technologists do not have all the answers and that business experts and other stakeholders should be
respected and embraced.
Agile Modeling (AM) suggests a wide array of “core” and “supplementary” modeling principles; those
that make AM unique are:
• Model with a purpose. A developer who uses AM should have a specific goal in mind before
creating the model. Once the goal for the model is identified, the type of notation to be used and
level of detail required will be more obvious.
• Use multiple models. There are many different models and notations that can be used to describe
software. Only a small subset is essential for most projects. AM suggests that to provide needed
insight, each model should present a different aspect of the system and only those models that
provide value to their intended audience should be used.
• Travel light. As software engineering work proceeds, keep only those models that will provide
long-term value and jettison the rest. Every work product that is kept must be maintained as
changes occur. This represents work that slows the team down. Ambler notes that “Every time
you decide to keep a model you trade off agility for the convenience of having that information
available to your team in an abstract manner.”
• Content is more important than representation. Modeling should impart information to its
intended audience. A syntactically perfect model that imparts little useful content is not as
valuable as a model with flawed notation that nevertheless provides valuable content for its
audience.
• Know the models and the tools you use to create them. Understand the strengths and
weaknesses of each model and the tools that are used to create it.
• Adapt locally. The modeling approach should be adapted to the needs of the agile team.
The Agile Unified Process (AUP) adopts a “serial in the large” and “iterative in the small”
philosophy for building computer-based systems. By adopting the classic UP phased activities—
inception, elaboration, construction, and transition—AUP provides a serial overlay that enables a
team to visualize the overall process flow for a software project. However, within each of the
activities, the team iterates to achieve agility and to deliver meaningful software increments to end
users as rapidly as possible. Each AUP iteration addresses the following activities:
• Modeling. UML representations of the business and problem domains are created.
• Testing. Like XP, the team designs and executes a series of tests to uncover errors and ensure
that the source code meets its requirements.
• Deployment. Like the generic process activity, deployment in this context focuses on the
delivery of a software increment and the acquisition of feedback from end users.
• Configuration and project management. Configuration management addresses change management
and the control of the persistent work products that are produced by the team; project management
tracks and controls the progress of the team and coordinates team activities.
Some proponents of the agile philosophy argue that automated software tools (e.g., design tools)
should be viewed as a minor supplement to the team’s activities, and not at all pivotal to the success
of the team.
However, Alistair Cockburn [Coc04] suggests that tools can have a benefit and that “agile teams
stress using tools that permit the rapid flow of understanding. Some of those tools are social, starting
even at the hiring stage. Some tools are technological, helping distributed teams simulate being
physically present.
Many tools are physical, allowing people to manipulate them in workshops.” Because acquiring the
right people (hiring), team collaboration, stakeholder communication, and indirect management are
key elements in virtually all agile process models, Cockburn argues that “tools” that address these
issues are critical success factors for agility.
For example, a hiring “tool” might be the requirement to have a prospective team member spend a
few hours pair programming with an existing member of the team. The “fit” can be assessed
immediately.
Collaborative and communication “tools” are generally low tech and incorporate any mechanism
(“physical proximity, whiteboards, poster sheets, index cards, and sticky notes” [Coc04]) that
provides information and coordination among agile developers.
Active communication is achieved via the team dynamics (e.g., pair programming), while passive
communication is achieved by “information radiators” (e.g., a flat panel display that presents the
overall status of different components of an increment).
Project management tools deemphasize the Gantt chart and replace it with earned value charts or
“graphs of tests created versus passed . . . other agile tools are used to optimize the environment in
which the agile team works (e.g., more efficient meeting areas), improve the team culture by
nurturing social interactions (e.g., collocated teams), physical devices (e.g., electronic whiteboards),
and process enhancement (e.g., pair programming or time-boxing)” [Coc04].
CHAPTER 2
PRINCIPLES THAT GUIDE THE PRACTICE
Practice is a collection of concepts, principles, methods, and tools that a software engineer calls upon
on a daily basis.
Practice allows managers to manage software projects and software engineers to build computer
programs.
Practice populates a software process model with the necessary technical and management how-to’s
to get the job done.
In an editorial published in IEEE Software a decade ago, Steve McConnell [McC99] made the
following comment:
You often hear people say that software development knowledge has a 3-year half-life: half of what
you need to know today will be obsolete within 3 years. In the domain of technology-related
knowledge, that’s probably about right. But there is another kind of software development
knowledge—a kind that I think of as “software engineering principles”—that does not have a three-
year half-life. These software engineering principles are likely to serve a professional programmer
throughout his or her career.
McConnell goes on to argue that the body of software engineering knowledge (circa the year 2000)
had evolved to a “stable core” that he estimated represented about “75 percent of the knowledge
needed to develop a complex system.” But what resides within this stable core? As McConnell
indicates, core principles—the elemental ideas that guide software engineers in the work that they
do—now provide a foundation from which software engineering models, methods, and tools can be
applied and evaluated.
Software engineering is guided by a collection of core principles that help in the application of a
meaningful software process and the execution of effective software engineering methods. At the
process level, core principles establish a philosophical foundation that guides a software team as it
performs framework and umbrella activities, navigates the process flow, and produces a set of
software engineering work products.
At the level of practice, core principles establish a collection of values and rules that serve as a guide
as you analyze a problem, design a solution, implement and test the solution, and ultimately deploy
the software in the user community. The sections that follow identify sets of core principles that span
software engineering process and practice.
The following set of core principles can be applied to the framework, and by extension, to every
software process.
Principle 1: Be agile. Whether the process model you choose is prescriptive or agile, the basic tenets of agile development should govern your approach:
1. Keep your technical approach as simple as possible.
2. Keep the work products you produce as concise (short) as possible.
3. Make decisions locally whenever possible.
Principle 2. Focus on quality at every step. Every process activity, action, and task should focus
on the quality of the work product it produces.
Principle 3. Be ready to adapt. Adapt your approach to the conditions imposed by the problem, the
people, and the project itself.
Principle 4. Build an effective team. Build a self-organizing team that has mutual trust and respect.
Principle 5. Establish mechanisms for communication and coordination. Projects often fail because
stakeholders fail to coordinate their efforts to create a successful end product.
Principle 6. Manage change. The methods must be established to manage the way changes are
requested, approved, and implemented.
Principle 7. Assess risk. Lots of things can go wrong as software is being developed.
Principle 8. Create work products that provide value for others. Create only those work products
that provide value for other process activities, actions and tasks.
▪ Software engineering practice has a single goal: to deliver on-time, high-quality,
operational software that contains functions and features that meet the needs of all
stakeholders.
▪ To achieve this goal, you should adopt a set of core principles that guide the technical work.
▪ The following set of core principles are fundamental to the practice of software
engineering:
Principle 1. Divide and conquer. A large problem is easier to solve if it is subdivided into a
collection of elements (or modules or components). Ideally, each element delivers distinct
functionality that can be developed independently.
Principle 3. Strive for consistency. Whether you are creating a requirements model, developing a
software design, generating source code, or creating test cases, apply a consistent approach; a
familiar, consistent context makes the software easier to understand and develop.
Principle 5. Build software that exhibits effective modularity. Modularity provides a mechanism
by which a complex system can be divided into manageable modules (components).
Principle 6. Look for patterns. Brad Appleton suggests that the goal of patterns within the
software community is to create a body of literature to help software developers resolve recurring
problems encountered throughout all of software development.
Principle 7. When possible, represent the problem and its solution from a number of different
perspectives. When a problem and its solution are examined from a number of different
perspectives (views), it is more likely that greater insight will be gained and that errors will be uncovered.
Principle 8. Remember that someone will maintain the software. Software will be corrected as
defects are removed, adapted as its environment changes, and enhanced as stakeholders request more
capabilities.
Customer requirements must be gathered through the communication activity before they can be
analyzed or modeled. The following principles apply once communication has begun.
Principle 1. Listen.
Try to focus on the speaker’s words, rather than formulating your response to those words. Ask for
clarification if something is unclear, but avoid constant interruptions. Never become contentious in
your words or actions (e.g., rolling your eyes or shaking your head) as a person is talking.
Principle 2. Prepare before you communicate. Spend the time to understand the problem before
you meet with others. If necessary, do some research to understand business domain jargon. If you
have responsibility for conducting a meeting, prepare an agenda in advance of the meeting.
Principle 3. Someone should facilitate the activity. Every communication meeting should have a
leader (a facilitator) (1) to keep the conversation moving in a productive direction, (2) to mediate any
conflict that does occur, and (3) to ensure that other principles are followed.
Principle 4. Face-to-face communication is best. But it usually works better when some other
representation of the relevant information is present. For example, a participant may create a drawing
or a “strawman” document that serves as a focus for discussion.
Principle 5. Take notes and document decisions. Things have a way of falling into the cracks.
Someone participating in the communication should serve as a “recorder” and write down all
important points and decisions.
Principle 6. Strive for collaboration. Collaboration and consensus occur when the collective
knowledge of members of the team is used to describe product or system functions or features. Each
small collaboration serves to build trust among team members and creates a common goal for the
team.
Principle 7. Stay focused; modularize your discussion. The more people involved in any
communication, the more likely that discussion will bounce from one topic to the next. The facilitator
should keep the conversation modular, leaving one topic only after it has been resolved.
Principle 8. If something is unclear, draw a picture. Verbal communication goes only so far. A
sketch or drawing can often provide clarity when words fail to do the job.
Principle 9. (a) Once you agree to something, move on. (b) If you can’t agree to something, move on.
(c) If a feature or function is unclear and cannot be clarified at the moment, move on.
Communication, like any software engineering activity, takes time. Rather than iterating endlessly,
the people who participate should recognize that many topics require discussion (see Principle 2) and
that “moving on” is sometimes the best way to achieve communication agility.
Principle 10. Negotiation is not a contest or a game. It works best when both parties win. There are
many instances in which you and other stakeholders must negotiate functions and features, priorities,
and delivery dates. If the team has collaborated well, all parties have a common goal. Still,
negotiation will demand compromise from all parties.
• The communication activity helps you to define your overall goals and objectives.
• The planning activity encompasses a set of management and technical practices that enable
the software team to define a road map as it travels toward the objectives.
Principle 1. Understand the scope of the project. It’s impossible to use a road map if you don’t
know where you’re going. Scope provides the software team with a destination.
Principle 2. Involve stakeholders in the planning activity. Stakeholders define priorities and
establish project constraints. To accommodate these realities, software engineers must often negotiate
order of delivery, time lines, and other project-related issues.
Principle 3. Recognize that planning is iterative. A project plan is never engraved in stone. As
work begins, it is very likely that things will change. As a consequence, the plan must be adjusted to
accommodate these changes. In addition, iterative, incremental process models dictate replanning
after the delivery of each software increment based on feedback received from users.
Principle 4. Estimate based on what you know. The intent of estimation is to provide an indication
of effort, cost, and task duration, based on the team’s current understanding of the work to be done. If
information is vague or unreliable, estimates will be equally unreliable.
Principle 5. Consider risk as you define the plan. If you have identified risks that have high impact
and high probability, contingency planning is necessary. In addition, the project plan (including the
schedule) should be adjusted to accommodate the likelihood that one or more of these risks will
occur.
Principle 6. Be realistic. People don’t work 100 percent of every day. Noise always enters into any
human communication. Omissions and ambiguity are facts of life. Change will occur. Even the best
software engineers make mistakes. These and other realities should be considered as a project plan is
established.
Principle 7. Adjust granularity as you define the plan. Granularity refers to the level of detail that
is introduced as a project plan is developed. A “high-granularity” plan provides significant work task
detail that is planned over relatively short time increments (so that tracking and control occur
frequently). A “low-granularity” plan provides broader work tasks that are planned over longer time
periods. In general, granularity moves from high to low as the project time line moves away from the
current date. Over the next few weeks or months, the project can be planned in significant detail.
Activities that won’t occur for many months do not require high granularity (too much can change).
Principle 8. Define how you intend to ensure quality. The plan should identify how the software
team intends to ensure quality. If technical reviews are to be conducted, they should be scheduled. If
pair programming (Chapter 3) is to be used during construction, it should be explicitly defined within
the plan.
Principle 9. Describe how you intend to accommodate change. Even the best planning can be
obviated by uncontrolled change. You should identify how changes are to be accommodated as
software engineering work proceeds. For example, can the customer request a change at any time? If
a change is requested, is the team obliged to implement it immediately? How is the impact and cost of
the change assessed?
Principle 10. Track the plan frequently and make adjustments as required. Software projects fall
behind schedule one day at a time. Therefore, it makes sense to track progress on a daily basis,
looking for problem areas and situations in which scheduled work does not conform to actual work
conducted. When slippage is encountered, the plan is adjusted accordingly.
Create models to gain a better understanding of the actual entity to be built. The modeling principles
are:
Principle 1. The primary goal of the software team is to build software, not create models.
Agility means getting software to the customer in the fastest possible time. Models that make this
happen are worth creating, but models that slow the process down or provide little new insight should
be avoided.
Principle 2. Travel light—don’t create more models than you need. Every model that is created must
be kept up-to-date as changes occur. More importantly, every new model takes time that might
otherwise be spent on construction (coding and testing). Therefore, create only those models that
make it easier and faster to construct the software.
Principle 3. Strive to produce the simplest model that will describe the problem or the software.
Don’t overbuild the software [Amb02b]. By keeping models simple, the resultant software will also
be simple. The result is software that is easier to integrate, easier to test, and easier to maintain (to
change). In addition, simple models are easier for members of the software team to understand and
critique, resulting in an ongoing form of feedback that optimizes the end result.
Principle 4. Build models in a way that makes them amenable to change. Assume that your
models will change, but in making this assumption don’t get sloppy. For example, since requirements
will change, there is a tendency to give requirements models short shrift. Why? Because you know
that they’ll change anyway. The problem with this attitude is that without a reasonably complete
requirements model, you’ll create a design (design model) that will invariably miss important
functions and features.
Principle 5. Be able to state an explicit purpose for each model that is created. Every time you
create a model, ask yourself why you’re doing so. If you can’t provide solid justification for the
existence of the model, don’t spend time on it.
Principle 6. Adapt the models you develop to the system at hand. It may be necessary to adapt
model notation or rules to the application; for example, a video game application might require a
different modeling technique than real-time, embedded software that controls an automobile engine.
Principle 7. Try to build useful models, but forget about building perfect models. When building
requirements and design models, a software engineer reaches a point of diminishing returns. That is,
the effort required to make the model absolutely complete and internally consistent is not worth the
benefits of these properties. Am I suggesting that modeling should be sloppy or low quality? The
answer is “no.” But modeling should be conducted with an eye to the next software engineering steps.
Iterating endlessly to make a model “perfect” does not serve the need for agility.
Principle 8. Don’t become dogmatic about the syntax of the model. If it communicates content
successfully, representation is secondary. Although everyone on a software team should try to use
consistent notation during modeling, the most important characteristic of the model is to
communicate information that enables the next software engineering task. If a model does this
successfully, incorrect syntax can be forgiven.
Principle 9. If your instincts tell you a model isn’t right even though it seems okay on paper,
you probably have reason to be concerned. If you are an experienced software engineer, trust your
instincts. Software work teaches many lessons—some of them on a subconscious level. If something
tells you that a design model is doomed to fail (even though you can’t prove it explicitly), you have
reason to spend additional time examining the model or developing a different one.
Principle 10. Get feedback as soon as you can. Every model should be reviewed by members of
the software team. The intent of these reviews is to provide feedback that can be used to correct
modeling mistakes, change misinterpretations, and add features or functions that were inadvertently
omitted.
The requirements modeling (analysis) principles are:
Principle 1. The information domain of a problem must be represented and understood. The
information domain encompasses the data that flow into the system (from end users, other systems, or
external devices), the data that flow out of the system (via the user interface, network interfaces,
reports, graphics, and other means), and the data stores that collect and organize persistent data
objects (i.e., data that are maintained permanently).
Principle 2. The functions that the software performs must be defined. Software functions
provide direct benefit to end users and also provide internal support for those features that are user
visible. Some functions transform data that flow into the system. In other cases, functions effect some
level of control over internal software processing or external system elements. Functions can be
described at many different levels of abstraction, ranging from a general statement of purpose to a
detailed description of the processing elements that must be invoked.
Principle 3. The behavior of the software (as a consequence of external events) must be
represented. The behavior of computer software is driven by its interaction with the external
environment. Input provided by end users, control data provided by an external system, or monitoring
data collected over a network all cause the software to behave in a specific way.
Principle 4. The models that depict information, function, and behavior must be partitioned in
a manner that uncovers detail in a layered (or hierarchical) fashion. Requirements modeling is
the first step in software engineering problem solving. It allows you to better understand the problem
and establishes a basis for the solution (design). Complex problems are difficult to solve in their
entirety. For this reason, you should use a divide-and-conquer strategy. A large, complex problem is
divided into subproblems until each subproblem is relatively easy to understand. This concept is
called partitioning or separation of concerns, and it is a key strategy in requirements modeling.
Principle 5. The analysis task should move from essential information toward implementation
detail. Requirements modeling begins by describing the problem from the end-user’s perspective.
The “essence” of the problem is described without any consideration of how a solution will be
implemented. For example, a video game requires that the player “instruct” its protagonist on what
direction to proceed as she moves into a dangerous maze. That is the essence of the problem.
Implementation detail (normally described as part of the design model) indicates how the essence will
be implemented. For the video game, voice input might be used. Alternatively,
The design principles are:
Principle 1. Design should be traceable to the requirements model. The requirements model
describes the information domain of the problem, user-visible functions, system behavior, and a set of
requirements classes that package business objects with the methods that service them. The design
model translates this information into an architecture, a set of subsystems that implement major
functions, and a set of components that are the realization of requirements classes. The elements of
the design model should be traceable to the requirements model.
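To make this principle concrete, here is a minimal sketch (not from the original notes) of how a team might record traceability links in Python; the requirement IDs and component names are hypothetical.

```python
# Minimal sketch: a traceability map from (hypothetical) requirement IDs
# to the design elements that realize them.
traceability = {
    "REQ-01 capture order": ["OrderForm", "OrderController"],
    "REQ-02 compute total": ["PricingService"],
    "REQ-03 persist order": ["OrderRepository"],
}

def untraced(requirements, trace):
    """Return requirement IDs that no design element claims to realize."""
    return [r for r in requirements if not trace.get(r)]

if __name__ == "__main__":
    reqs = list(traceability) + ["REQ-04 send confirmation email"]
    print(untraced(reqs, traceability))  # ['REQ-04 send confirmation email']
```

A check like this makes gaps between the requirements model and the design model visible before construction begins.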
Principle 2. Always consider the architecture of the system to be built. Software architecture is
the skeleton of the system to be built. It affects interfaces, data structures, program control flow and
behavior, the manner in which testing can be conducted, the maintainability of the resultant system,
and much more. For all of these reasons, design should start with architectural considerations. Only
after the architecture has been established should component-level issues be considered.
Principle 4. Interfaces (both internal and external) must be designed with care. The manner in
which data flows between the components of a system has much to do with processing efficiency,
error propagation, and design simplicity. A well-designed interface makes integration easier and
assists the tester in validating component functions.
Principle 5. User interface design should be tuned to the needs of the end user. However, in
every case, it should stress ease of use. The user interface is the visible manifestation of the software.
No matter how sophisticated its internal functions, no matter how comprehensive its data structures,
no matter how well designed its architecture, a poor interface design often leads to the perception that
the software is “bad.”
Principle 7. Components should be loosely coupled to one another and to the external
environment. Coupling is achieved in many ways— via a component interface, by messaging,
through global data. As the level of coupling increases, the likelihood of error propagation also
increases and the overall maintainability of the software decreases. Therefore, component coupling
should be kept as low as is reasonable.
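As an illustration only (the notes do not prescribe any particular mechanism), the short Python sketch below contrasts a tightly coupled function that depends on global data with a loosely coupled one that receives its collaborator through its interface.

```python
# Tightly coupled: the calculation depends on a module-level global,
# so any change to that global ripples into this component.
TAX_RATE = 0.18

def tightly_coupled_total(amount):
    return amount * (1 + TAX_RATE)

# Loosely coupled: the tax policy is passed in through the interface,
# so the component can be tested and reused with any policy.
def loosely_coupled_total(amount, tax_policy):
    return amount * (1 + tax_policy(amount))

flat_tax = lambda amount: 0.18
print(tightly_coupled_total(100))              # 118.0
print(loosely_coupled_total(100, flat_tax))    # 118.0
```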
Principle 9. The design should be developed iteratively. With each iteration, the designer should
strive for greater simplicity. Like almost all creative activities, design occurs iteratively. The first
iterations work to refine the design and correct errors, and later iterations should strive to make the design as simple as possible.
The construction activity encompasses a set of coding and testing tasks that lead to operational
software that is ready for delivery to the customer or end user. Coding may be
(1) the direct creation of programming language source code (e.g., Java), or
(2) the automatic generation of source code using an intermediate design-like representation of the
component to be built.
The initial focus of testing is at the component level, often called unit testing.
The following set of fundamental principles and concepts are applicable to coding and testing:
Coding Principles. The principles that guide the coding task are closely aligned with programming
style, programming languages, and programming methods. However, there are a number of
fundamental principles that can be stated:
Preparation principles: Before you write one line of code, be sure you
• Understand the problem you’re trying to solve.
• Pick a programming language that meets the needs of the software to be built and the
environment in which it will operate.
• Select a programming environment that provides tools that will make your work easier.
• Create a set of unit tests that will be applied once the component you code is completed (a short sketch follows this list).
• Understand the software architecture and create interfaces that are consistent with it.
• Select meaningful variable names and follow other local coding standards.
• Create a visual layout (e.g., indentation and blank lines) that aids understanding.
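For example, the “create a set of unit tests” preparation step might look like the hedged sketch below, written with Python’s standard unittest module for a hypothetical discount-calculation component; a simple stand-in implementation is included only so the example runs.

```python
import unittest

# Hypothetical component under test; only its intended interface is assumed here.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```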
Validation Principles: After you’ve completed your first coding pass, be sure you
• Conduct a code walkthrough when appropriate.
• Perform unit tests and correct the errors you’ve uncovered.
• Refactor the code.
Testing Principles: Glen Myers states a number of rules that can serve well as testing
objectives:
• A good test case is one that has a high probability of finding an as-yet undiscovered error.
Davis suggests a set of testing principles that have been adapted for use.
Principle 1. All tests should be traceable to customer requirements. The objective of software
testing is to uncover errors. It follows that the most severe defects (from the customer’s point of
view) are those that cause the program to fail to meet its requirements.
Principle 2. Tests should be planned long before testing begins. Test planning can begin as soon
as the requirements model is complete. Detailed definition of test cases can begin as soon as the
design model has been solidified. Therefore, all tests can be planned and designed before any code
has been generated.
Principle 3. The Pareto principle applies to software testing. In this context the Pareto principle
implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of
all program components. The problem, of course, is to isolate these suspect components and to
thoroughly test them.
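As a hedged illustration of isolating the suspect components (the defect data below are invented), a few lines of Python can tally defect reports per component and surface the small subset that accounts for roughly 80 percent of the errors.

```python
from collections import Counter

# Hypothetical defect log: one entry per error, tagged with the component it was found in.
defects = ["auth", "auth", "report", "auth", "billing", "auth", "report", "auth"]

counts = Counter(defects)
total = sum(counts.values())
running = 0
print("Components accounting for ~80% of defects:")
for component, n in counts.most_common():
    running += n
    print(f"  {component}: {n} defects")
    if running / total >= 0.8:
        break
```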
Principle 4. Testing should begin “in the small” and progress toward testing “in the large.” The
first tests planned and executed generally focus on individual components. As testing progresses,
focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the
entire system.
Principle 5. Exhaustive testing is not possible. The number of path permutations for even a
moderately sized program is exceptionally large. For this reason, it is impossible to execute every
combination of paths during testing. It is possible, however, to adequately cover program logic and to
ensure that all conditions in the component-level design have been exercised.
Deployment Principles
Principle 1. Customer expectations for the software must be managed. Too often, the customer
expects more than the team has promised to deliver, and disappointment occurs immediately. This
results in feedback that is not productive and ruins team morale. In her book on managing
expectations, Naomi Karten states: “The starting point for managing expectations is to become more
conscientious about what you communicate and how.” She suggests that a software engineer must be
careful about sending the customer conflicting messages (e.g., promising more than you can
reasonably deliver in the time frame provided or delivering more than you promise for one software
increment and then less than promised for the next).
Principle 2. A complete delivery package should be assembled and tested. A CD-ROM or other
media (including Web-based downloads) containing all executable software, support data files,
support documents, and other relevant information should be assembled and thoroughly beta-tested
with actual users. All installation scripts and other operational features should be thoroughly
exercised in as many different computing configurations (i.e., hardware, operating systems, peripheral
devices, networking arrangements) as possible.
Principle 3. A support regime must be established before the software is delivered. An end user
expects responsiveness and accurate information when a question or problem arises. If support is ad
hoc, or worse, nonexistent, the customer will become dissatisfied immediately. Support should be
planned, support materials should be prepared, and appropriate recordkeeping mechanisms should be
established so that the software team can conduct a categorical assessment of the kinds of support
requested.
Principle 4. Appropriate instructional materials must be provided to end users. The software
team delivers more than the software itself. Appropriate training aids (if required) should be
developed; troubleshooting guidelines should be provided, and when necessary, a “what’s different
about this software increment” description should be published.
Principle 5. Buggy software should be fixed first, delivered later. Under time pressure, some
software organizations deliver low-quality increments with a warning to the customer that bugs
“will be fixed in the next release.” This is a mistake. There’s a saying in the software business:
“Customers will forget you delivered a high-quality product a few days late, but they will never
forget the problems that a low-quality product caused them. The software reminds them every day.”
MODULE-4
1.1 INTRODUCTION TO SOFTWARE PROJECT MANAGEMENT
1. Software project management is the art and science of planning and leading software projects
from idea to reality.
2. Project management is the discipline of defining and achieving targets while optimizing the use
of resources (time, money, people, materials, energy, space, etc.) over the course of
a project (a set of activities of finite duration).
3. Project management involves the planning, monitoring, and control of people, process, and
events that occur during software development.
Everyone manages, but the scope of each person’s management activities varies according to his or
her role in the project.
Software needs to be managed because it is a complex undertaking with a long duration time.
Managers must focus on the four P’s to be successful (people, product, process, and project).
A project plan is a document that defines the four P’s in such a way as to ensure a cost effective,
high quality software product.
The only way to be sure that a project plan worked correctly is by observing that a high-quality
product was delivered on time and under budget.
The Software Development life cycle is a methodology that also forms the framework for
planning and controlling the creation, testing, and delivery of an information system.
The systems development life-cycle concept acts as the foundation for multiple development and
delivery methodologies, such as the hardware development life cycle and the software development
life cycle. While the hardware development life cycle deals specifically with hardware and the
software development life cycle deals with software, a systems development life cycle differs from
each in that it can deal with any combination of hardware and software, since a system can be
composed of hardware only, software only, or a combination of both.
o People
o Process
o Product
o Technology
The triangle illustrates the relationship between three primary forces in a project. Time is the
available time to deliver the project, cost represents the amount of money or resources available,
and quality represents the fitness-for-purpose that the project must achieve to be a success.
The normal situation is that one of these factors is fixed and the other two will vary in inverse
proportion to each other. For example, time is often fixed and the quality of the end product will
depend on the cost and resources available. Similarly, if you are working to a fixed level of quality,
then the cost of the project will largely depend upon the time available (if you have longer,
you can do it with fewer people).
The following key areas make software project management essential:
1. Complexity Management
o Software projects often involve intricate systems and interdependencies. Effective
management of this complexity ensures that the project remains coherent and
manageable.
2. Requirement Management
o Clear and precise requirement management is essential to ensure that the final
product meets user needs and expectations. Mismanagement here can lead to scope
creep and project failure.
3. Time and Budget Control
o Monitoring and controlling the project timeline and budget is vital. This includes
planning, estimating, and adhering to schedules and financial constraints to prevent
overruns.
4. Risk Management
o Identifying, assessing, and mitigating risks can prevent unforeseen issues from
derailing a project. This proactive approach helps in managing uncertainties
effectively.
5. Quality Assurance
o Ensuring that the project meets quality standards is crucial for user satisfaction and
reducing post-release defects. Continuous testing and validation are key practices.
6. Team Coordination
o Effective communication and coordination among team members are essential for
collaboration and timely problem-solving, ensuring that everyone is aligned with
project goals.
7. Stakeholder Management
o Engaging and managing stakeholders helps in gaining their support and addressing
their concerns, which is critical for project acceptance and success.
8. Scope Management
o Defining and controlling what is included in the project prevents scope creep,
ensures that all necessary features are delivered, and avoids unnecessary work.
9. Process Improvement
o Continuously improving processes ensures that the project is using the most
efficient methods and practices, leading to better performance and outcomes.
10. Resource Allocation
o Efficient allocation and management of resources (human, financial, and material)
ensure that the project has what it needs to succeed without wastage.
Conclusion
Effective software project management is essential due to the inherent complexities and challenges
of software development. The key areas outlined require diligent attention and management to
ensure project success. Industry statistics on project outcomes illustrate the high stakes involved and the
substantial impact that good project management can have on the success rates of software
projects. By focusing on these areas, businesses can significantly improve their chances of
delivering successful projects that meet deadlines, stay within budget, and satisfy quality
standards.
The definition of a project as being planned assumes that, to a large extent, we can determine
how we are going to carry out a task before we start. There may be some projects of an
exploratory nature where this might be quite hard. Planning is in essence thinking carefully
about something before you do it and even in the case of uncertain projects this is worth doing
as long as it is accepted that the resulting plans will have provisional and speculative elements.
Other activities, relating, for example, to routine maintenance, might have been performed
so many times that everyone involved knows exactly what needs to be done. In these cases,
planning hardly seems necessary, although procedures might need to be documented to ensure
consistency and to help newcomers to the job.
Here are some definitions of ‘project’. No doubt there are other ones: for example,
‘Unique process, consisting of a set of coordinated and controlled activities with start and finish
dates, undertaken to achieve an objective conforming to specific requirements, including
constraints of time, cost and resources.’
There is a hazy boundary between the non-routine project and the routine job. The first time you
do a routine task, it will be like a project. On the other hand, a project to develop a system similar
to previous ones you have developed will have a large element of the routine.
A project that employs 20 developers is likely to be disproportionately more difficult than one
with only a handful of staff, because of the need for additional coordination.
Many of the techniques of general project management are applicable to software project
management. One way of perceiving software project management is as the process of making
visible that which is invisible.
Invisibility: When a physical artifact such as a bridge or road is being constructed the progress
being made can actually be seen. With software, progress is not immediately visible.
Complexity: Software products contain more complexity than other engineered artifacts.
Conformity: The ‘traditional’ engineer is usually working with physical systems and physical
materials like cement and steel. These physical systems can have some complexity, but are
governed by physical laws that are consistent. Software developers have to conform to the
requirements of human clients. It is not just that individuals can be inconsistent.
Flexibility: The ease with which software can be changed is usually seen as one of its
strengths. However, this means that where the software system interfaces with a physical or
organizational system, it is expected that, where necessary, the software will change to
accommodate the other components rather than vice versa. This means the software systems
are likely to be subject to a high degree of change.
An example for infrastructure project is construction of a flyover. An example for a software
project is the development of a payroll management system for an organization using Oracle 10g
and Oracle Forms 10g.
• In-house projects are where the users and the developers of new software work for the
same organization.
• However, increasingly organizations contract out ICT development to outside
developers. Here, the client organization will often appoint a 'project manager' to
supervise the contract who will delegate many technically oriented decisions to the
contractors.
• Thus, the project manager will not worry about estimating the effort needed to write
individual software components as long as the overall project is within budget and on
time. On the supplier side, there will need to be project managers who deal with the more
technical issues.
➢ Contract management is the process of managing the creation, execution, and analysis
of contracts to maximize operational and financial performance and minimize risk.
➢ It involves various activities from the initial request for a contract, through negotiation,
execution, compliance, and renewal. Effective contract management ensures that all
parties to a contract fulfill their obligations as efficiently as possible.
1. Request and Creation:
Request: Identifying the need for a contract and gathering the necessary information to draft it.
Creation: Drafting the contract terms and conditions that align with the requirements and
objectives of all parties involved.
Example: A software company needs to hire a third-party developer to work on a new project.
The project manager identifies the need for a contract and gathers details about the scope of work,
timelines, payment terms, and other specifics.
2. Negotiation:
Parties involved discuss and negotiate the terms of the contract to reach a mutual agreement.
This stage often involves revisions and adjustments.
Example: The software company and the third-party developer negotiate the terms. The
developer might request more time or a higher payment, while the company might request
milestones for progress checks.
3. Approval and Signing:
The finalized contract is reviewed and formally approved by all parties.
Example: Once the terms are finalized, the contract is reviewed by both parties' legal teams.
After approval, both the software company and the developer sign the contract.
4. Execution and Compliance:
The contract is put into effect and both parties carry out their obligations.
Example: The developer starts working on the project, adhering to the deadlines and
deliverables specified in the contract. The software company provides the necessary resources
and makes payments as per the contract.
5. Amendment and Renewal:
Making necessary amendments if any changes occur during the contract period, and reviewing
and renewing contracts as needed.
Example: Midway through the project, the software company requests additional features not
covered in the original contract. An amendment is made to include these new features and
adjust the payment terms accordingly. As the project nears completion, the company and
developer may negotiate a renewal for ongoing maintenance.
6. Closure:
Completing all contractual obligations, ensuring all parties have met their requirements, and
formally closing the contract.
Example: The developer finishes the project, and the software company conducts a final
review to ensure all deliverables meet the agreed-upon standards. Once confirmed, the
contract is closed, and a final payment is made.
The benefits of effective contract management include:
Risk Mitigation: Identifies and manages potential risks early in the contract lifecycle.
Improved Compliance: Ensures that all parties comply with legal and regulatory
requirements.
Cost Savings: Avoids unnecessary costs and penalties by managing contracts efficiently.
Speed to Market: Accelerates project timelines by leveraging the vendor’s expertise and
resources.
Example: XYZ Tech outsources the development of a mobile app to an external vendor.
1. Identifying Needs: XYZ Tech identifies a need for a mobile app to complement its
existing software suite.
2. Selecting a Vendor: XYZ Tech shortlists several development firms based in India,
known for their expertise in mobile app development.
5. Project Management: XYZ Tech assigns a project manager to liaise with the vendor,
ensuring regular updates and adherence to milestones.
6. Delivery and Integration: The vendor delivers the app, which is integrated with XYZ
Tech’s software suite after thorough testing.
1.5.2 Planning:
If the feasibility study produces results which indicate that the prospective project appears
viable, planning of the project can take place. However, for a large project, we would not do all
our detailed planning right at the beginning. We would formulate an outline plan for the whole
project and a detailed one for the first stage. More detailed planning of the later stages would be
done as they approached. This is because we would have more detailed and accurate information
upon which to base our plans nearer to the start of the later stages.
1.5.3.2 Specification:
Detailed documentation of what the proposed system is to do.
1.5.3.3 Design:
A design has to be drawn up which meets the specification. This design will be in two
stages. One will be the external or user design concerned with the external appearance of the
application. The other produces the physical design which tackles the way that the data and
software procedures are to be structured internally.
➢ Architecture Design: This maps the requirements to the components of the system that is
to be built. At the system level, decisions will need to be made about which processes in
the new system will be carried out by the user and which can be computerized. This design
of the system architecture thus forms an input to the development of the software
requirements. A second architecture design process then takes place which maps the
software requirements to software components.
➢ Detailed Design: Each software component is made up of a number of software units that
can be separately coded and tested. The detailed design of these units is carried out
separately.
1.5.3.4 Coding:
This may refer to writing code in a procedural language or an object-oriented language or
could refer to the use of an application-builder. Even where software is not being built from
scratch, some modification to the base package could be required to meet the needs of the new
application.
Integration: The individual components are collected together and tested to see if they meet
the overall requirements. Integration could be at the level of software where different software
components are combined, or at the level of the system as a whole where the software and
other components of the system such as the hardware platforms and networks and the user
procedures are brought together.
Qualification Testing: The system, including the software components, has to be tested
carefully to ensure that all the requirements have been fulfilled.
A plan for an activity must be based on some idea of a method of work. To take a simple
example, if you were asked to test some software, even though you do not know anything about
the software to be tested, you could assume that you would need to:
• Analyze the requirements for the software
• Devise and write test cases that will check that each requirement has been satisfied
• Create test scripts and expected results for each test case
• Compare the actual results and the expected results and identify discrepancies (a short sketch follows).
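A minimal sketch of these steps, assuming a hypothetical requirement that an account is locked after three failed login attempts; the login function is only a stand-in for the system under test.

```python
# Hypothetical test cases derived from a requirement:
# "after three failed login attempts the account is locked".
def login(attempts_failed):
    """Stand-in for the system under test."""
    return "locked" if attempts_failed >= 3 else "active"

test_cases = [
    {"input": 2, "expected": "active"},
    {"input": 3, "expected": "locked"},
]

# Compare actual results with expected results and report discrepancies.
for case in test_cases:
    actual = login(case["input"])
    status = "PASS" if actual == case["expected"] else "FAIL"
    print(f"{status}: input={case['input']} expected={case['expected']} actual={actual}")
```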
While a method relates to a type of activity in general, a plan takes that method (and perhaps
others) and converts it to real activities, identifying for each activity its start and end dates, who will
carry it out, and what tools and materials will be used.
‘Materials’ in this context could include information, for example a requirements document. With
complex procedures, several methods may be deployed, in sequence or in parallel. The output from
one method might be the input to another. Groups of methods or techniques are often referred to
as methodologies.
Distinguishing different types of projects is important as different types of tasks need different
project approaches e.g.
In workplaces there are systems that staff have to use if they want to do something, such
as recording a sale. However, use of a system is increasingly voluntary, as in the case of
computer games. Here it is difficult to elicit precise requirements from potential users as we
could with a business system. What the game will do will thus depend much on the informed
ingenuity of the developers, along with techniques such as market surveys, focus groups and
prototype evaluation.
A traditional distinction has been between information systems which enable staff to carry
out office processes and embedded systems which control machines. A stock control system
would be an information system. An embedded, or process control, system might control the air
conditioning equipment in a building. Some systems may have elements of both where, for
example, the stock control system also controls an automated warehouse.
All types of software projects can broadly be classified into software product development
projects and software services projects. It can be further classified as shown in below Fig.1.7
A software product development project concerns developing software keeping the requirements of
general customers in mind, and the developed software is usually sold off the shelf.
Projects may be distinguished by whether their aim is to produce a product or to meet certain
objectives.
Many software projects have two stages. The first is an objective-driven project resulting in
recommendations which identify the need for a new software system, and the next stage is a
project actually to create the software product.
1.8 STAKEHOLDERS
These are people who have a stake or interest in the project. It is important that they be
identified as early as possible, because you need to set up adequate communication channels with
them right from the start. The project leader also has to be aware that not everybody who is
involved with a project has the same motivation and objectives. The end-users might, for instance,
be concerned about the ease of use of the system while their managers might be interested in the
staff savings the new system will allow.
Boehm and Ross proposed a ‘Theory W’ of software project management where the
manager concentrates on creating situations where all parties involved in a project benefit from it
and therefore have an interest in its success. (The ‘W’ stands for ‘win-win’.)
Stakeholders might be internal to the project team, external to the project team but in the
same organization, or totally external to the organization.
• Internal to the project team: This means that they will be under the direct managerial
control of the project leader.
• External to the project team but within the same organization: For example, the project
leader might need the assistance of the information management group in order to add
some additional data types to a database or the assistance of the users to carry out systems
testing. Here the commitment of the people involved has to be negotiated.
• External to both the project team and the organization: External stakeholders may be
customers (or users) who will benefit from the system that the project implements or
contractors who will carry out work for the project. One feature of the relationship with
these people is that it is likely to be based on a legally binding contract.
Different types of Stakeholders may have different objectives and one of the jobs of the
successful project leader is to recognize these different interests and to be able to reconcile them.
It should therefore come as no surprise that the project leader needs to be a good communicator
and negotiator.
• The objectives should define what the project team must achieve for project success.
• Objectives focus on the desired outcomes of the project rather than the tasks within it-they
are the ‘post-conditions’ of the project.
• Objectives could be a set of statements following the opening words ‘the project will be a
success if ….’ .
• To have a successful software project, the manager and the project team members must
know what will constitute success. This will make them concentrate on what is essential
to project success.
• There may be several sets of users of a system and there may be several different groups
of specialists involved in its development. There is a need for well-defined objectives that
are accepted by all these people. Where there is more than one user group, a project
authority needs to be identified which has overall authority over what the project is to
achieve.
• This authority is often held by a project steering committee (or project board or project
management board) which has overall responsibility for setting, monitoring and
modifying objectives. The project manager still has responsibility for running the project
on a day-to-day basis, but has to report to the steering committee at regular intervals. Only
the steering committee can authorize changes to the project objectives and resources.
Setting objectives can guide and motivate individuals and groups of staff. An effective
objective for an individual must be something that is within the control of that individual. An
objective might be that the software application to be produced must pay for itself by reducing
staff costs over two years. As an overall business objective this might be reasonable. For software
developers it would be unreasonable as, though they can control development costs, any reduction
in operational staff costs depends not just on them but on the operational management after the
application has ‘gone live’. What would be appropriate would be to set a goal or sub-objective
for the software developers to keep development costs within a certain budget.
Thus, objectives will need to be broken down into goals or sub-objectives. Here we say that
in order to achieve the objective we must achieve certain goals first. These goals are steps on the
way to achieving an objective, just as goals scored in a football match are steps towards the
objective of winning the match.
• Specific: Effective objectives are concrete and well defined. Vague aspirations such as
‘to improve customer relations’ are unsatisfactory. Objectives should be defined in such
a way that it is obvious to all whether the project has been successful or not.
• Measurable: Ideally there should be measures of effectiveness which tell us how successful
the project has been.
• Achievable: It must be within the power of the individual or group to achieve the
objective.
• Relevant: The objective must be relevant to the true purpose of the project.
• Time constrained: There should be a defined point in time by which the objective
should have been achieved.
• Most projects need to have a justification or business case: the effort and expense of
pushing the project through must be seen to be worthwhile in terms of the benefits that
will eventually be felt.
• The quantification of benefits will often require the formulation of a business model which
explains how the new application can generate the claimed benefits.
Any project plan must ensure that the business case is kept intact. For example:
• The development costs are not allowed to rise to a level which threatens to exceed the
value of benefits.
• The features of the system are not reduced to a level where the expected benefits cannot
be realized.
• The delivery date is not delayed so that there is an unacceptable loss of benefit.
• The project plan should be designed to ensure project success by preserving the business
case for the project.
• Different stakeholders have different interests, some stakeholders in a project might see
it as a success while others do not.
• The project objectives are the targets that the project team is expected to achieve—They
are summarized as delivering:
➢ The agreed functionality
➢ To the required level of quality
➢ In time
➢ Within budget
• A project could meet these targets but the application, once delivered could fail to meet
the business case. A computer game could be delivered on time and within budget, but
might then not sell.
• In business terms, the project is a success if the value of benefits exceeds the costs.
• A project can be a success on delivery but then be a business failure. On the other hand,
a project could be late and over budget, but its deliverables could still, over time, generate
benefits that outweigh the initial expenditure.
• The possible gap between project and business concerns can be reduced by having a
broader view of projects that includes business issues.
• Technical learning will increase costs on the earlier projects, but later projects benefit
as the learnt technologies can be deployed more quickly, cheaply, and accurately.
• Customer relationships can also be built up over a number of projects. If a client has
trust in a supplier who has done satisfactory work in the past, they are more likely to use
that company again.
1.12.1 MANAGEMENT:
Management involves the following activities:
• Planning - deciding what is to be done;
Much of the project manager’s time is spent in just three activities: project planning,
monitoring, and control. The time periods during which these activities are carried out are indicated
in Fig 1.5.
It shows that project management is carried out over three well-defined stages or processes
irrespective of the methodology used.
In the Project initiation stage, an initial plan is made. As a project starts, the project is
monitored and controlled to process as planned. Initial plan is revised periodically to
accommodate additional details and constraints about the project as they become available.
Finally, the project is closed.
Initial project planning is undertaken immediately after the feasibility study phase and before starting
the requirement analysis and specification process.
Initial project planning involves estimating several characteristics of a project. Based on these
estimates all subsequent project activities are planned.
The monitoring activity involves monitoring the progress of the project. Control activities are
initiated to minimize any significant variation from the plan.
Project planning is an important responsibility of the project manager. During project planning,
the project manager needs to perform a few well-defined activities that have been outlined below.
Several best practices have been proposed for software project planning activities. PRINCE2 is
used extensively in the UK and Europe. In the USA, the Project Management Institute’s PMBOK, which
refers to its publication “A Guide to the Project Management Body of Knowledge”, is used.
• Estimation: The following project attributes are estimated.
• Cost: How much is it going to cost to complete the project?
• Duration: How long is it going to take to complete the project?
• Effort: How much effort would be necessary to complete the project?
The effectiveness of subsequent planning activities such as scheduling and staffing depends on the
accuracy of these estimates (a simple estimation sketch follows this list).
• Scheduling: Based on estimations of effort and duration, the schedules for manpower
and other resources are developed.
• Staffing: Staff organization and staffing plans are made.
• Risk Management: This activity includes risk identification, analysis, and abatement
planning.
• Miscellaneous Plans: This includes making several other plans such as quality
assurance plan, configuration management plan etc.
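As a rough illustration of how effort, duration, and cost estimates relate (the notes do not prescribe a particular model; the constants below follow the basic COCOMO organic-mode formulas, and the size and cost rate are assumptions used only for the sketch):

```python
# Rough, illustrative estimate in the spirit of basic COCOMO (organic mode).
# The size, cost rate, and constants are assumptions, not values from these notes.
size_kloc = 12.0                       # estimated size in thousands of lines of code
effort_pm = 2.4 * size_kloc ** 1.05    # effort in person-months
duration_m = 2.5 * effort_pm ** 0.38   # duration in months
team_size = effort_pm / duration_m     # average staff needed
cost = effort_pm * 6000                # assumed cost per person-month

print(f"Effort   : {effort_pm:.1f} person-months")
print(f"Duration : {duration_m:.1f} months")
print(f"Team size: {team_size:.1f} people")
print(f"Cost     : {cost:,.0f} (currency units)")
```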
While carrying out project monitoring and control activities, a project manager may sometimes
find it necessary to change the plan to cope with specific situations and make the plan more
accurate as more project data becomes available.
Management involves setting objectives for a system and monitoring the performance of
the system.
• In the above figure, local managers are involved in data collection. Bare details such as “location X has
processed 2000 documents” may not be useful to higher management.
• Data processing is required to transform this raw data into useful information. This might be
in such forms as “percentage of records processed”, “average documents per day per person”,
and “estimated completion date” (a short sketch after this list illustrates the computation).
• In this example, the project management might examine the “estimated completion date” for
completing data transfer for each branch. They are comparing actual performance with overall
project objectives.
• They might find that one or two branches will fail to complete the transfer of details in time.
• It can be seen that a project plan is dynamic and will need constant adjustment during the
execution of the project. A good plan provides a foundation for a good project, but is nothing
without intelligent execution.
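A hedged sketch of the data-processing step described in the list above, turning raw branch figures (hypothetical numbers) into the kind of summary information higher management needs:

```python
from datetime import date, timedelta

# Hypothetical raw data collected by a local manager at location X.
processed = 2000          # documents processed so far
total = 5000              # documents to be transferred in all
staff = 4
days_worked = 10

percent_done = 100 * processed / total
per_person_per_day = processed / (staff * days_worked)
days_remaining = (total - processed) / (processed / days_worked)
estimated_completion = date.today() + timedelta(days=round(days_remaining))

print(f"Percentage of records processed: {percent_done:.0f}%")
print(f"Average documents per day per person: {per_person_per_day:.0f}")
print(f"Estimated completion date: {estimated_completion}")
```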
In Project Management process, the project manager carries out project initiation, planning,
execution, monitoring, controlling and closing.
The different phases of the project management life cycle are shown in Fig: 1.8.
1. Project Initiation: The project initiation phase starts with project concept development.
During concept development the different characteristics of the software to be developed
are thoroughly understood, which includes, the scope of the project, the project constraints,
the cost that would be incurred and the benefits that would accrue. Based on this
understanding, a feasibility study is undertaken to determine whether the project would be
financially and technically feasible.
Based on feasibility study, the business case is developed. Once the top management
agrees to the business case, the project manager is appointed, the project charter is
written and finally project team is formed. This sets the ground for the manager to start
the project planning phase.
W5HH Principle: Barry Boehm summarized the questions that need to be asked and answered in
order to have an understanding of these project characteristics.
2. Project Bidding: Once the top management is convinced by the business case, the project
charter is developed. For some categories of projects, it may be necessary to have formal
bidding process to select suitable vendor based on some cost-performance criteria. The
different types of bidding techniques are:
3. Project Planning: An important outcome of the project initiation phase is the project charter.
During project planning the project manager carries out several processes and creates
the following documents:
• Project plan: This document identifies the project tasks and a schedule
for the project tasks that assigns project resources and time frames to the tasks.
• Resource Plan: It lists the resources, manpower and equipment that would be
required to execute the project.
• Financial Plan: It documents the plan for manpower, equipment and other costs.
• Quality Plan: Plan of quality targets and control plans are included in this
document.
• Risk Plan: This document lists the identification of the potential risks, their
prioritization and a plan for the actions that would be taken to contain the different
risks.
4. Project Execution: In this phase the tasks are executed as per the project plan developed
during the planning phase. Quality of the deliverables is ensured through execution of
proper processes. Once all the deliverables are produced and accepted by the customer, the
project execution phase completes and the project closure phase starts.
5. Project Closure: Project closure involves completing the release of all the required
deliverables to the customer along with the necessary documentation. All the Project
resources are released and supply agreements with the vendors are terminated and all the
pending payments are completed. Finally, a post-implementation review is undertaken to
analyze the project performance and to list the lessons for use in future projects.
Software is rarely developed from scratch any more. Software development projects are based on
either tailoring some existing product or reusing certain pre-built libraries; both maximize
code reuse and compress project durations.
Other goals include facilitating and accommodating client feedback and customer participation in
project development work, and incremental delivery of the product with evolving functionality.
Some Important difference between modern management practices and traditional practices are:
• Planning Incremental Delivery: Earlier, projects were simpler and therefore more
predictable than the present-day projects. In those days, projects were planned with
sufficient detail much before the actual project execution started. After the project
initiation, monitoring and control activities were carried out to ensure that the project
execution proceeded as per plan, Now, the projects are required to be completed over a
much shorter duration, and rapid application development and deployment are considered
key strategies.
Instead of making a long-term project completion plan, the project manager now plans
incremental deliveries with evolving functionality.
• Change Management: Earlier, when the requirements were signed off by the customer,
any changes to the requirements were rarely entertained. Customer suggestions are now
actively solicited and incorporated throughout the development process. To facilitate
customer feedback, incremental delivery models are popularly being used. Product
development is being carried out through a series of product versions implementing
increasingly greater functionalities. The project manager plays a key role in product baselining
and version control. This has made change management a crucial responsibility of
the project manager. Change Management is also known as configuration management.
Case Study: Paul Duggan is the manager of a software development section. On Tuesday at 10.00
am he and his fellow section heads have a meeting with their group manager about the staffing
requirements for the coming year. Paul has already drafted a document ‘bidding’ for staff. This is
based on the work planned for his section for the next year. The document is discussed at the
meeting. At 2.00 pm Paul has a meeting with his senior staff about an important project his section
is undertaking. One of the software development staff has just had a road accident and will be in
hospital for some time. It is decided that the project can be kept on schedule by transferring
another team member from less urgent work to this project. A temporary replacement is to be
brought in to do the less urgent work, but this might take a week or so to arrange. Paul has to
phone both the personnel manager about getting a replacement and the user for whom the less
urgent work is being done explaining why it is likely to be delayed. Identify which of the eight
management responsibilities listed above Paul was responding to at different points during his
day.
Project Planning: In the project initiation stage, an initial plan is made. As the project starts, the
project is monitored and controlled to proceed as per the plan. But the initial plan is refined from
time to time to factor in additional details and constraints about the project as they become available.
Based on the details of Paul Duggan's day, we can map his activities to the eight management
responsibilities. The typical management responsibilities include:
1. Planning: Setting objectives and deciding on the actions needed to achieve them.
2. Organizing: Arranging tasks, people, and other resources to accomplish the work.
3. Staffing: Recruiting, selecting, training, and developing employees.
4. Directing: Leading, motivating, and communicating with employees.
5. Controlling: Monitoring and evaluating performance.
6. Coordinating: Ensuring all parts of the organization are working together towards
common goals.
7. Reporting: Keeping all stakeholders informed.
8. Budgeting: Planning and controlling financial resources.
In summary:
13.1 INTRODUCTION
Quality:
Objective assessment:
➢ We need to judge objectively whether a system meets our quality requirements, and this needs
measurement.
➢ Critical for package selection, e.g., Brigette at Brightmouth College.
Nowadays, delivering a high-quality product is one of the major objectives of all organizations.
Traditionally, the quality of a product means how well it fits its specified purpose. A product
is of good quality if it performs according to the user’s requirements. Good quality software should meet
all objectives defined in the SRS document. It is the responsibility of the quality managers to ensure that
the software attains a required level of quality.
Step 1: Identifying project scope and objectives. Some objectives could relate to the quality of the
application to be delivered.
Step 2: Identifying project infrastructure. Within this step, activity 2.2 involves identifying installation
standards and procedures. Some of these will almost certainly be about quality requirements.
Step 3: Analyze project characteristics. In this activity the application to be implemented will be
examined to see if it has any special quality requirements.
Step 4: Identify the products and activities of the project. It is at this point that the entry, exit and process
requirements are identified for each activity.
Step 8: Review and publicize plan. At this stage the overall quality aspects of the project plan are
reviewed.
13.3 THE IMPORTANCE OF SOFTWARE QUALITY
Nowadays, quality is an important aspect for all organizations. Good quality software is the
requirement of all users. There are many reasons why the quality of software is important;
a few of the most important are described below:
➢ The final customer or user is naturally anxious about the general quality of the software, especially
about its reliability.
➢ They are concerned about safety because of their dependency on the software system; systems such as
aircraft control systems are safety-critical.
➢ As software is developed through a number of phases, the output of one phase is given as input to the
next. So, if an error introduced in an initial phase is not found, then it is difficult to fix
at a later stage and the cost incurred is higher.
Attempts have been made to identify specific product qualities that are appropriate to software. McCall,
for instance, grouped software qualities into three sets: product operation qualities, product revision
qualities and product transition qualities.
Product operation qualities:
Correctness: The extent to which a program satisfies its specification and fulfil user objective.
Reliability: the extent to which a program can be expected to perform its intended function with required
precision.
Integrity: The extent to which access to software or data by unauthorized persons can be controlled.
Usability: The effort required to learn, operate, prepare input and interprets output.
Product revision qualities:
Maintainability: The effort required to locate and fix an error in an operational program.
Testability: The effort required to test a program to ensure it performs its intended function.
Product transition qualities:
Portability: The effort required to transfer a program from one hardware configuration and/or software
system environment to another.
When there is concern about the need for a specific quality characteristic in a software product, a
quality specification with at least the following details should be drafted:
1. Definition/Description
Definition: Clear definition of the quality characteristic.
Description: Detailed description of what the quality characteristic entails.
2. Scale
Unit of Measurement: The unit in terms of which the quality characteristic is measured.
3. Test
Practical Test: The method or process used to test the extent to which the quality attribute exists.
4. Minimally Acceptable
Worst Acceptable Value: The lowest acceptable value, below which the product would be
rejected.
5. Target Range
Planned Range: The range of values within which it is planned that the quality measurement
value should lie.
6. Current Value
Now: The value that applies currently to the quality characteristic.
When assessing quality characteristics in software, multiple measurements may be applicable. For
example, in the case of reliability, measurements could include:
1. Availability:
Definition: Percentage of a particular time interval that a system is usable.
Scale: Percentage (%).
Test: Measure the system's uptime versus downtime over a specified period.
Minimally Acceptable: Typically, high availability is desirable; specifics depend on system
requirements.
Target Range: E.g., 99.9% uptime.
3. Failure on Demand:
Definition: Probability that a system will not be available when required, or probability that a
transaction will fail.
Scale: Probability (0 to 1).
Test: Evaluate the system's response to demand or transaction processing.
Minimally Acceptable: Lower probability of failure is desired; varies by system criticality.
Target Range: E.g., Failure on demand probability of less than 0.01.
4. Support Activity:
Definition: Number of fault reports generated and processed.
Scale: Count (number of reports).
Test: Track and analyze the volume and resolution time of fault reports.
Minimally Acceptable: Lower number of fault reports indicates better reliability.
Target Range: E.g., Less than 10 fault reports per month.
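To illustrate how measurements such as availability and failure on demand might be computed in practice, the following is a minimal sketch; the monitoring figures, function names and thresholds are hypothetical and are not defined by ISO 9126.

```python
# Illustrative sketch: computing two reliability measurements from raw data.
# All figures are hypothetical examples.

def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Percentage of the observation interval during which the system was usable."""
    total = uptime_hours + downtime_hours
    return 100.0 * uptime_hours / total

def failure_on_demand(failed_requests: int, total_requests: int) -> float:
    """Probability (0..1) that a service request fails."""
    return failed_requests / total_requests

# Hypothetical monthly figures
print(f"Availability: {availability(719.2, 0.8):.2f}%")        # e.g. 99.89%
print(f"Failure on demand: {failure_on_demand(7, 10000):.4f}")  # e.g. 0.0007
```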
Quality models give a (hierarchical) characterization of software quality in terms of a set of characteristics of the software. The bottom level of the hierarchy can be measured directly, thereby enabling a quantitative assessment of the quality of the software.
1. McCall’s model.
2. Dromey’s Model.
3. Boehm’s Model.
4. ISO 9126 Model.
Garvin’s Quality Dimensions: David Garvin, a professor at Harvard Business School, defined the quality of any product in terms of eight general attributes of the product.
1) McCall’s Model: McCall defined the quality of a software product in terms of three broad parameters: its operational characteristics, how easy it is to fix defects and how easy it is to port it to different platforms. These three high-level quality attributes are defined in terms of eleven lower-level attributes of the software:
2) Dromey’s model: Dromey proposed that software product quality depends on four major high-level
properties of the software: Correctness, internal characteristics, contextual characteristics and certain
descriptive properties. Each of these high-level properties of a software product, in turn depends on
several lower-level quality attributes. Dromey’s hierarchical quality model is shown in Fig 13.2
3) Boehm’s Model: Boehm suggested that the quality of a software product can be defined based on three high-level characteristics that are important for its users. These three high-level characteristics are the following:
As-is utility: How well (easily, reliably and efficiently) can it be used?
Maintainability: How easy is it to understand, modify and then retest the software?
Portability: How difficult would it be to make the software work in a changed environment?
Boehm expressed these high-level product quality attributes in terms of several measurable product attributes. Boehm’s hierarchical quality model is shown in Fig 13.3.
13.6 ISO 9126
➢ The ISO 9126 standard was first introduced in 1991 to tackle the question of the definition of software quality. The original 13-page document was designed as a foundation upon which further, more detailed, standards could be built. The ISO 9126 documents are now very lengthy.
➢ ISO 9126 also introduces another set of elements – quality in use – for which the following elements have been identified:
• Effectiveness : the ability to achieve user goals with accuracy and completeness;
• Productivity : avoiding the excessive use of resources, such as staff effort, in achieving user
goals;
• Safety: within reasonable levels of risk of harm to people and other entities such as business,
software, property and the environment
• Satisfaction: smiling users
ISO 9126 is a significant standard in defining software quality attributes and providing a
framework for assessing them. Here are the key aspects and characteristics defined by ISO 9126:
1. Functionality:
Definition: The functions that a software product provides to satisfy user needs.
Sub-characteristics: Suitability, accuracy, interoperability, security, compliance.
‘Functionality compliance’ refers to the degree to which the software adheres to application-related standards or legal requirements. Typically, these could be auditing requirements. ‘Interoperability’ refers to the ability of the software to interact with other systems.
2. Reliability:
Definition: The capability of the software to maintain its level of performance under stated
conditions.
Sub-characteristics: Maturity, fault tolerance, recoverability.
Maturity refers to the frequency of failures due to faults in the software: the more the software has been used, the more faults will have been identified and removed. Recoverability refers to the ability to restore the system and its data after a failure; it should not be confused with security, which describes the control of access to a system.
3. Usability:
Definition: The effort needed to use the software.
Sub-characteristics: Understandability, learnability, operability, attractiveness.
Understandability seems a clear quality to grasp, although the definition ("attributes that bear on the users' efforts for recognizing the logical concept and its applicability") actually makes it less clear. Learnability has been distinguished from operability: a software tool might be easy to learn but time-consuming to use, say because it relies on a large number of nested menus. This may be acceptable for a package that is used only intermittently, but not where the system is used for several hours each day by the end user. In that case learnability has been achieved at the expense of operability.
4. Efficiency:
Definition: The ability to use resources in relation to the amount of work done.
Sub-characteristics: Time behavior, resource utilization.
5. Maintainability:
Definition: The effort needed to make changes to the software.
Sub-characteristics: Analysability, changeability, stability, testability.
Analysability is the quality that McCall called diagnosability: the ease with which the cause of a failure can be determined. Changeability is the quality that others call flexibility; the name 'changeability' is preferred because 'flexibility' could imply that the suppliers of the software are always changing it. Stability means that there is a low risk of a modification to the software having unexpected effects.
6. Portability:
Definition: The ability of the software to be transferred from one environment to another.
Sub-characteristics: Adaptability, installability, co-existence.
Portability compliance relates to those standards that have a bearing on portability. Replaceability refers to the factors that give upward compatibility between old software components and the new ones. 'Co-existence' refers to the ability of the software to share resources with other software components; unlike 'interoperability', no direct data passing is necessarily involved.
ISO 9126 provides guidelines for the use of the quality characteristics.
ISO 9126 provides structured guidelines for assessing and managing software quality characteristics
based on the specific needs and requirements of the software product. It emphasizes the variation in
importance of these characteristics depending on the type and context of the software product being
developed.
Once the software product requirements are established, ISO 9126 suggests the following steps:
1. Specify Quality Characteristics: Define and prioritize the relevant quality characteristics based
on the software's intended use and stakeholder requirements.
2. Define Metrics and Measurements: Establish measurable criteria and metrics for evaluating
each quality characteristic, ensuring they align with the defined objectives and user expectations.
3. Plan Quality Assurance Activities: Develop a comprehensive plan for quality assurance
activities, including testing, verification, and validation processes to ensure adherence to quality
standards.
4. Monitor and Improve Quality: Continuously monitor software quality throughout the
development lifecycle, identifying areas for improvement and taking corrective actions as
necessary.
5. Document and Report: Document all quality-related activities, findings, and improvements, and
provide clear and transparent reports to stakeholders on software quality status and compliance.
1. Judge the importance of each quality characteristic for the specific application. For example:
Reliability: Critical for safety-critical systems where failure can have severe consequences. Measures like mean time between failures (MTBF) are essential.
Efficiency: Important for real-time systems where timely responses are crucial. Measures such as response time are key indicators.
2. Select the external quality measurements within the ISO 9126 framework relevant to the
qualities prioritized above.
Determine appropriate external quality measurements that correspond to each quality
characteristic.
• Reliability: Measure with MTBF or similar metrics.
• Efficiency: Measure with response time or time behavior metrics.
3. Map measurements onto ratings that reflect user satisfaction. For example, the mappings might be as in Table 13.1 (a small code sketch of such a mapping appears after this list of steps).
For response time, user satisfaction could be mapped as follows (hypothetical example):
Excellent: Response time < 1 second
Good: Response time 1-3 seconds
Acceptable: Response time 3-5 seconds
Poor: Response time > 5 seconds
4. Identify the relevant internal measurements and the intermediate products in which they appear.
• Identify and track internal measurements such as cyclomatic complexity, code coverage,
defect density, etc.
• Relate these measurements to intermediate products like source code, test cases, and
documentation.
5. Overall assessment of product quality: To what extent is it possible to combine ratings for
different quality characteristics into a single overall rating for the software?
• Use weighted quality scores to assess overall product quality.
• Focus on key quality requirements and address potential weaknesses early to avoid the need
for an overall quality rating later.
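As an illustration of step 3 above, the following is a minimal sketch of how the hypothetical response-time bands could be mapped onto satisfaction ratings in code; the thresholds simply restate the example values and are not prescribed by ISO 9126.

```python
def satisfaction_rating(response_time_s: float) -> str:
    """Map a measured response time (seconds) to a user-satisfaction band.
    Thresholds follow the hypothetical example bands above."""
    if response_time_s < 1:
        return "Excellent"
    elif response_time_s <= 3:
        return "Good"
    elif response_time_s <= 5:
        return "Acceptable"
    return "Poor"

print(satisfaction_rating(0.4))   # Excellent
print(satisfaction_rating(2.5))   # Good
print(satisfaction_rating(6.0))   # Poor
```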
➢ Early Stages: Qualitative indicators like checklists and expert judgments are used to assess
compliance with predefined criteria. These are subjective and based on qualitative assessments.
➢ Later Stages: Objective and quantitative measurements become more prevalent as the software
product nears completion. These measurements provide concrete data about software performance
and quality attributes.
➢ Combining ratings for different quality characteristics into a single overall rating for software
is challenging due to:
• Different measurement scales and methodologies for each quality characteristic.
• Sometimes, enhancing one quality characteristic (e.g., efficiency) may compromise another (e.g., portability).
➢ Balancing these trade-offs can be complex and context-dependent.
Software Acquisition: Helps in evaluating software products from external suppliers based on
predefined quality criteria.
Independent Assessment: Aims to provide an unbiased evaluation of software quality for stakeholders
like regulators or consumers.
The following is a method for evaluating and comparing software products based on their quality characteristics:
The table 13.2 shows how different response times are mapped to quality scores on a scale of 0-5, with
shorter response times receiving higher scores. A rating scale (e.g., 1-5) is used to reflect the importance
of various quality characteristics.
1. Rating for User Satisfaction:
➢ Products are evaluated based on mandatory quality levels that must be met. Beyond these
mandatory levels, user satisfaction ratings in the range of 0 to 5 are assigned for other desirable
characteristics.
➢ Objective measurements of functions are used to determine different levels of user satisfaction,
which are then mapped to numerical ratings (see Table 13.2 for an example).
2. Importance Weighting:
➢ Each quality characteristic (e.g., usability, efficiency, maintainability) is assigned an importance
rating on a scale of 1 to 5.
➢ These importance ratings reflect how critical each quality characteristic is to the overall
evaluation of the software product.
3. Calculation of Overall Score:
➢ Weighted scores are calculated for each quality characteristic by multiplying the quality score
by its importance weight.
➢ The weighted scores for all characteristics are summed to obtain an overall score for each
software product.
4. Comparison and Preference Order:
➢ Products are then ranked in order of preference based on their overall scores. Higher scores
indicate products that are more likely to satisfy user requirements and preferences across the
evaluated quality characteristics.
This method provides a structured approach to evaluating software products based on user satisfaction
ratings and importance weights for quality characteristics. It allows stakeholders to compare and
prioritize products effectively based on their specific needs and preferences.
• This table 13.3 provides a comparison of two products (A and B) based on weighted quality
scores.
• Each product quality (Usability, Efficiency, Maintainability) is given an importance rating.
• Product A and B are scored for each quality, and these scores are multiplied by the importance
rating to obtain weighted scores.
• The total weighted scores are summed for each product to determine their overall ranking.
The approach thus involves assigning quality scores to the various characteristics, weighting these scores by their importance, and summing them to obtain an overall score for each product. This provides a comprehensive and objective way to compare and rank software products, ensuring that both essential and desirable characteristics are considered in the assessment.
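The weighted-score calculation described above can be sketched in code as follows; the importance weights and product scores are invented for illustration and do not reproduce the values of Table 13.3.

```python
# Sketch of the weighted quality score comparison described above.
# Importance weights (1-5) and quality scores (0-5) are hypothetical.

importance = {"Usability": 3, "Efficiency": 4, "Maintainability": 2}

scores = {
    "Product A": {"Usability": 1, "Efficiency": 3, "Maintainability": 2},
    "Product B": {"Usability": 3, "Efficiency": 3, "Maintainability": 4},
}

def overall_score(product_scores: dict) -> int:
    """Sum of (quality score x importance weight) over all characteristics."""
    return sum(importance[q] * s for q, s in product_scores.items())

# Rank products in order of preference (highest weighted score first).
ranked = sorted(scores, key=lambda p: overall_score(scores[p]), reverse=True)
for product in ranked:
    print(product, overall_score(scores[product]))
```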
The internal attributes may measure either some aspect of the product (product metrics) or of the development process (process metrics).
1. Product Metrics:
Purpose: Measure the characteristics of the software product being developed.
Examples:
Size Metrics: Such as Lines of Code (LOC) and Function Points, which quantify the size or
complexity of the software.
Effort Metrics: Like Person-Months (PM), which measure the effort required to develop the
software.
Time Metrics: Such as the duration in months or other time units needed to complete the
development.
2. Process Metrics:
Purpose: Measure the effectiveness and efficiency of the development process itself.
Examples:
Review Effectiveness: Measures how thorough and effective code reviews are in finding defects.
Defect Metrics: Average number of defects found per hour of inspection, average time taken to
correct defects, and average number of failures detected during testing per line of code.
Productivity Metrics: Measures the efficiency of the development team in terms of output per
unit of effort or time.
Quality Metrics: Such as the number of latent defects per line of code, which indicates the
robustness of the software after development.
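As a simple, hypothetical illustration of the distinction, the sketch below derives one product metric (defect density per KLOC) and one process metric (review effectiveness) from raw project counts; all figures and names are invented.

```python
# Hypothetical illustration of a product metric and a process metric.

def defect_density(defects_found: int, size_loc: int) -> float:
    """Product metric: defects per 1000 lines of code (KLOC)."""
    return defects_found / (size_loc / 1000)

def review_effectiveness(defects_in_review: int, total_defects: int) -> float:
    """Process metric: share of all known defects that were caught by reviews."""
    return defects_in_review / total_defects

print(f"{defect_density(46, 23000):.1f} defects/KLOC")          # 2.0
print(f"{review_effectiveness(30, 46):.0%} found in reviews")   # 65%
```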
Differences:
➢ Focus: Product metrics focus on the characteristics of the software being built (size, effort,
time), while process metrics focus on how well the development process is performing
(effectiveness, efficiency, quality).
➢ Use: Product metrics are used to gauge the attributes of the final software product, aiding in
planning, estimation, and evaluation. Process metrics help in assessing and improving the
development process itself, aiming to enhance quality, efficiency, and productivity.
➢ Application: Product metrics are typically applied during and after development phases to
assess the product's progress and quality. Process metrics are applied throughout the
development lifecycle to monitor and improve the development process continuously.
By employing both types of metrics effectively, software development teams can better manage projects,
optimize processes, and deliver high-quality software products that meet user expectations.
Product quality management focuses on evaluating and ensuring the quality of the software product itself.
This approach is typically more straightforward to implement and measure after the software has been
developed.
Aspects:
1. Measurement Focus: Emphasizes metrics that assess the characteristics and attributes of the final
software product, such as size (LOC, function points), reliability (defects found per LOC), performance
(response time), and usability (user satisfaction ratings).
2. Evaluation Timing: Product quality metrics are often measured and evaluated after the software
product has been completed or at significant milestones during development.
3. Benefits:
➢ Provides clear benchmarks for evaluating the success of the software development project.
➢ Facilitates comparisons with user requirements and industry standards.
➢ Helps in identifying areas for improvement in subsequent software versions or projects.
4. Challenges:
➢ Predicting final product quality based on intermediate stages (like early code modules or
prototypes) can be challenging.
➢ Metrics may not always capture the full complexity or performance of the final integrated product.
Process quality management focuses on assessing and improving the quality of the development
processes used to create the software. This approach aims to reduce errors and improve efficiency
throughout the development lifecycle.
Aspects:
1. Measurement Focus: Emphasizes metrics related to the development processes themselves, such as
defect detection rates during inspections, rework effort, productivity (e.g., lines of code produced per
hour), and adherence to defined standards and procedures.
2. Evaluation Timing: Process quality metrics are monitored continuously throughout the development
lifecycle, from initial planning through to deployment and maintenance.
3. Benefits:
➢ Helps in identifying and correcting errors early in the development process, reducing the cost and
effort of rework.
➢ Facilitates continuous improvement of development practices, leading to higher overall quality in
software products.
➢ Provides insights into the effectiveness of development methodologies and practices used by the
team.
4. Challenges:
➢ Requires consistent monitoring and analysis of metrics throughout the development lifecycle.
➢ Effectiveness of process improvements may not always translate directly into improved product
quality without careful management and integration.
➢ While product and process quality management approaches have distinct focuses, they are
complementary.
➢ Effective software development teams often integrate both approaches to achieve optimal results.
➢ By improving process quality, teams can enhance product quality metrics, leading to more
reliable, efficient, and user-friendly software products.
The British Standards Institution (BSI) has engaged in the creation of standards for quality management systems. The relevant British standard is now called BS EN ISO 9001:2000, which is identical to the international standard ISO 9001:2000. ISO 9000 describes the fundamental features of a quality management system (QMS) and its terminology. ISO 9001 describes how a QMS can be applied to the creation of products and the provision of services. ISO 9004 applies to process improvement.
ISO 9001:2000 is part of the ISO 9000 series, which sets forth guidelines and requirements for
implementing a Quality Management System (QMS). The focus of ISO 9001:2000 is on ensuring that
organizations have effective processes in place to consistently deliver products and services that meet
customer and regulatory requirements.
Key Elements:
1. Fundamental Features:
➢ Describes the basic principles of a QMS, including customer focus, leadership, involvement
of people, process approach, and continuous improvement.
➢ Emphasizes the importance of a systematic approach to managing processes and resources.
3. Certification Process:
➢ Organizations seeking ISO 9001:2000 certification undergo an audit process conducted by
an accredited certification body.
➢ Certification demonstrates that the organization meets the requirements of ISO 9001:2000
and has implemented an effective QMS.
4. Quality Management Principles:
❖ Customer focus: Meeting customer requirements and enhancing customer satisfaction.
❖ Leadership: Establishing unity of purpose and direction.
❖ Involvement of people: Engaging the entire organization in achieving quality objectives.
❖ Process approach: Managing activities and resources as processes to achieve desired outcomes.
❖ Continuous improvement: Continually improving QMS effectiveness.
❖ Define and document processes related to software development, testing, and maintenance.
❖ Establish quality objectives and metrics for monitoring and evaluating software development
processes.
❖ Implement corrective and preventive actions to address deviations from quality standards.
❖ Ensure that subcontractors and external vendors also adhere to quality standards through
effective quality assurance practices.
Perceived Value: Critics argue that ISO 9001 certification does not guarantee the quality of the end
product but rather focuses on the process.
Cost and Complexity: Obtaining and maintaining certification can be costly and time-consuming,
which may pose challenges for smaller organizations.
Focus on Compliance: Some organizations may become overly focused on meeting certification
requirements rather than improving overall product quality.
Despite these criticisms, ISO 9001:2000 provides a structured framework that, when implemented
effectively, can help organizations improve their software development processes and overall quality
management practices. Measurement is used to demonstrate that products conform to standards and that the QMS is effective, and to improve the effectiveness of the processes that create products or services.
It emphasizes continuous improvement and customer satisfaction, which are crucial aspects in the
competitive software industry.
Principles of BS EN ISO 9001:2000
1. Customer Focus:
Understanding and meeting customer requirements to enhance satisfaction.
2. Leadership:
Providing unity of purpose and direction for achieving quality objectives.
3. Involvement of People:
Engaging employees at all levels to contribute effectively to the QMS.
4. Process Approach:
❖ Focusing on individual processes that create products or deliver services.
❖ Managing these processes as a system to achieve organizational objectives.
5. Continuous Improvement:
❖ Continually enhancing the effectiveness of processes based on objective measurements and
analysis.
6. Factual Approach to Decision Making:
Making decisions based on analysis of data and information.
7. Mutually Beneficial Supplier Relationships:
Building and maintaining good relationships with suppliers to enhance capabilities and performance.
8. Analysis and Improvement:
Analyzing causes of discrepancies and implementing corrective actions to improve processes
continually.
Detailed Requirements
1. Documentation:
• Maintaining documented objectives, procedures (in a quality manual), plans, and records that
demonstrate adherence to the QMS.
• Implementing a change control system to manage and update documentation as necessary.
2. Management Responsibility:
Top management must actively manage the QMS and ensure that processes conform to quality
objectives.
3. Resource Management:
Ensuring adequate resources, including trained personnel and infrastructure, are allocated to
support QMS processes.
4. Production and Service Delivery:
❖ Planning, reviewing, and controlling production and service delivery processes to meet
customer requirements.
❖ Communicating effectively with customers and suppliers to ensure clarity and alignment on
requirements.
5. Measurement, Analysis, and Improvement:
❖ Implementing measures to monitor product conformity, QMS effectiveness, and process
improvements.
❖ Using data and information to drive decision-making and enhance overall organizational
performance.
The evolution of quality assurance paradigms from product assurance to process assurance marks a
significant shift in how organizations ensure quality in their outputs. Here’s an overview of some key
concepts and methodologies related to process-based quality management:
Historical Perspective
2. Shift to Process Assurance:
❖ Later paradigms emphasize that ensuring a good quality process leads to good quality
products.
❖ Modern quality assurance techniques prioritize recognizing, defining, analyzing, and
improving processes.
Total Quality Management (TQM)
1. Definition:
❖ TQM focuses on continuous improvement of processes through measurement and redesign.
❖ It advocates that organizations continuously enhance their processes to achieve higher levels
of quality.
Business Process Re-engineering (BPR)
1. Objective:
❖ BPR aims to fundamentally redesign and improve business processes.
❖ It seeks to achieve radical improvements in performance metrics, such as cost, quality,
service, and speed
To manage quality during development, process-based techniques are very important. SEI CMM, CMMI, ISO 15504, and Six Sigma are popular process capability models.
3. Six Sigma:
❖ Six Sigma focuses on reducing defects in processes to a level of 3.4 defects per million
opportunities (DPMO).
❖ It emphasizes data-driven decision-making and process improvement methodologies like
DMAIC (Define, Measure, Analyze, Improve, Control).
The SEI Capability Maturity Model (CMM) is a framework developed by the Software Engineering
Institute (SEI) to assess and improve the maturity of software development processes within
organizations.
It categorizes organizations into five maturity levels based on their process capabilities and practices:
1. Level 1: Initial
Characteristics:
❖ Chaotic and ad hoc development processes.
❖ Lack of defined processes or management practices.
❖ Relies heavily on individual heroics to complete projects.
Outcome:
❖ Project success depends largely on the capabilities of individual team members.
❖ High risk of project failure or delays.
2. Level 2: Repeatable
Characteristics:
❖ Basic project management practices like planning and tracking costs/schedules are in
place.
❖ Processes are somewhat documented and understood by the team.
Outcome:
❖ Organizations can repeat successful practices on similar projects.
❖ Improved project consistency and some level of predictability.
3. Level 3: Defined
Characteristics:
❖ Processes for both management and development activities are defined and documented.
❖ Roles and responsibilities are clear across the organization.
❖ Training programs are implemented to build employee capabilities.
❖ Systematic reviews are conducted to identify and fix errors early.
Outcome:
❖ Consistent and standardized processes across the organization.
❖ Better management of project risks and quality.
4. Level 4: Managed
Characteristics:
❖ Processes are quantitatively managed using metrics.
❖ Quality goals are set and measured against project outcomes.
❖ Process metrics are used to improve project performance.
Outcome:
❖ Focus on managing and optimizing processes to meet quality and performance goals.
❖ Continuous monitoring and improvement of project execution.
5. Level 5: Optimizing
Characteristics:
❖ Continuous process improvement is ingrained in the organization's culture.
❖ Process metrics are analyzed to identify areas for improvement.
❖ Lessons learned from projects are used to refine and enhance processes.
❖ Innovation and adoption of new technologies are actively pursued.
Outcome:
❖ Continuous innovation and improvement in processes.
❖ High adaptability to change and efficiency in handling new challenges.
❖ Leading edge in technology adoption and process optimization.
1) Capability Evaluation: Used by contract awarding authorities (like the US DoD) to assess
potential contractors' capabilities to predict performance if awarded a contract.
2) Process Assessment: Internally used by organizations to improve their own process capabilities
through assessment and recommendations for improvement. SEI CMM has been instrumental not
only in enhancing the software development practices within organizations but also in
establishing benchmarks for industry standards.
It encourages organizations to move from chaotic and unpredictable processes (Level 1) to optimized
and continuously improving processes (Level 5), thereby fostering better quality, efficiency, and
predictability in software development efforts.
The Capability Maturity Model Integration (CMMI) is an evolutionary improvement over its predecessor,
the Capability Maturity Model (CMM). Here's an overview of CMMI and its structure:
Expansion and Adaptation: Over time, various specific CMMs were developed for different domains
such as software acquisition (SA-CMM), systems engineering (SE-CMM), and people management
(PCMM). These models provided focused guidance but lacked integration and consistency.
1. Levels of Process Maturity: Like CMM, CMMI defines five maturity levels, each representing
a different stage of process maturity and capability.
These levels are:
❖ Level 1: Initial (similar to CMM Level 1)
❖ Level 2: Managed (similar to CMM Level 2)
❖ Level 3: Defined (similar to CMM Level 3)
❖ Level 4: Quantitatively Managed (an extension of CMM Level 4)
❖ Level 5: Optimizing (an extension of CMM Level 5)
2. Key Process Areas (KPAs):
Definition: Similar to CMM, each maturity level in CMMI is characterized by a set of Key
Process Areas (KPAs). These KPAs represent clusters of related activities that, when performed
collectively, achieve a set of goals considered important for enhancing process capability.
Gradual Improvement: KPAs provide a structured approach for organizations to incrementally
improve their processes as they move from one maturity level to the next.
Integration across Domains: Unlike the specific CMMs for various disciplines, CMMI uses a
more abstract and generalized set of terminologies that can be applied uniformly across different
domains.
Benefits of CMMI
❖ Broad Applicability: CMMI's abstract nature allows it to be applied not only to software
development but also to various other disciplines and industries.
❖ Consistency and Integration: Provides a unified framework for improving processes, reducing
redundancy, and promoting consistency across organizational practices.
❖ Continuous Improvement: Encourages organizations to continuously assess and refine their
processes to achieve higher levels of maturity and performance.
Process Reference Model
• Reference Model: ISO 15504 uses a process reference model as a benchmark against which actual
processes are evaluated. The default reference model is often ISO 12207, which outlines the processes in
the software development life cycle (SDLC) such as requirements analysis, architectural design,
implementation, testing, and maintenance.
Process Attributes
Nine Process Attributes: ISO 15504 assesses processes based on nine attributes, which are:
1. Process Performance (PP): Evaluates the extent to which the process achieves its intended outcomes.
2. Performance Management (PM): Evaluates how well the process is managed and controlled.
3. Work Product Management (WM): Assesses the management of work products like requirements
specifications, design documents, etc.
4. Process Definition (PD): Focuses on how well the process is defined and documented.
5. Process Deployment (PR): Examines how the process is deployed within the organization.
6. Process Measurement (PME): Evaluates the use of measurements to manage and control the process.
7. Process Control (PC): Assesses the monitoring and control mechanisms in place for the process.
8. Process Innovation (PI): Measures the degree of innovation and improvement in the process.
9. Process Optimization (PO): Focuses on optimizing the process to improve efficiency and
effectiveness.
Processes are assessed on the basis of these nine process attributes - see Table 13.5.
Compatibility with CMMI
Alignment with CMMI: ISO 15504 and CMMI share similar goals of assessing and improving software
development processes. While CMMI is more comprehensive and applicable to a broader range of
domains, ISO 15504 provides a structured approach to process assessment specifically tailored to
software development.
• Application: The standard is used by organizations to conduct process assessments either internally for
improvement purposes or externally for certification purposes.
Note: For a process to be judged to be at Level 3, for example, Levels 1 and 2 must also have been
achieved.
When assessors are judging the degree to which a process attribute is being fulfilled, they allocate one of the following scores:
N (Not achieved): 0 to 15% achievement
P (Partially achieved): over 15% and up to 50% achievement
L (Largely achieved): over 50% and up to 85% achievement
F (Fully achieved): over 85% achievement
In order to assess a process attribute as being at a certain level of achievement, indicators have to be found that provide evidence for the assessment.
In the context of assessing process attributes according to ISO/IEC 15504 (SPICE), evidence is crucial
to determine the level of achievement for each attribute.
Here’s how evidence might be identified and evaluated for assessing the process attributes, taking the
example of requirement analysis processes:
Using ISO/IEC 15504 Attributes
1) Process Performance (PP):
▪ Evidence: Performance metrics related to the effectiveness and efficiency of the requirements
analysis process, such as the accuracy of captured requirements, time taken for analysis, and
stakeholder satisfaction surveys.
▪ Assessment: Assessors would analyze the metrics to determine if the process meets its
performance objectives (e.g., accuracy, timeliness).
2) Process Control (PC):
▪ Evidence: Procedures and mechanisms in place to monitor and control the requirements
analysis process, such as regular reviews, audits, and corrective action reports.
▪ Assessment: Assessors would review the control mechanisms to ensure they effectively
monitor the process and address deviations promptly.
3) Process Optimization (PO):
▪ Evidence: Records of process improvement initiatives, feedback mechanisms from
stakeholders, and innovation in requirements analysis techniques.
▪ Assessment: Assessors would examine how the organization identifies opportunities for
process improvement and implements changes to optimize the requirements analysis process.
Importance of Evidence
❖ Objective Assessment: Evidence provides objective data to support the assessment of process
attributes.
❖ Validation: It validates that the process attributes are not just defined on paper but are
effectively deployed and managed.
❖ Continuous Improvement: Identifying evidence helps in identifying areas for improvement and
optimizing processes over time.
Implementing process improvement in UVW, especially in the context of software development for
machine tool equipment, involves addressing several key challenges identified within the organization.
Here’s a structured approach, drawing from CMMI principles, to address these issues and improve
process maturity:
1. Resource Overcommitment:
Issue: Lack of proper liaison between the Head of Software Engineering and Project Engineers
leads to resource overcommitment across new systems and maintenance tasks simultaneously.
Impact: Delays in software deliveries due to stretched resources.
2. Requirements Volatility:
Issue: Initial testing of prototypes often reveals major new requirements.
Impact: Scope creep and changes lead to rework and delays.
3. Change Control Challenges:
Issue: Lack of proper change control results in increased demands for software development
beyond original plans.
Impact: Increased workload and project delays.
4. Delayed System Testing:
Issue: Completion of system testing is delayed due to a high volume of bug fixes.
Impact: Delays in product release and customer shipment.
Reduced scope creep and unplanned changes disrupting project schedules.
Enhanced control over system modifications, minimizing delays and rework.
Focus: Transition from ad-hoc, chaotic practices to defined processes with formal planning and control
mechanisms.
Benefits: Improved predictability in project outcomes, better resource management, and reduced
project risks.
The next step would be to identify the processes involved in each stage of the development life cycle.
As in Fig 13.6. The steps of defining procedures for each development task and ensuring that they are
actually carried out help to bring an organization up to Level 3.
Time Management: PSP advocates that developers should track the way they spend their time. The actual time spent on a task should be measured with the help of a stop-clock to get an objective picture of the time spent. An engineer should measure the time spent on the various development activities such as designing, writing code and testing.
PSP Planning: Individuals must plan their project. The developers must estimate the maximum, minimum, and average LOC required for the product. They record the plan data in a project plan summary.
The PSP is schematically shown in Figure 13.7. An individual developer must plan the personal activities and make the basic plans before starting the development work. While carrying out the activities of the different phases of software development, the individual developer must record the log data using time measurement.
During post implementation project review, the developer can compare the log data with the initial plan
to achieve better planning in the future projects, to improve his process etc. The four maturity levels of
PSP have schematically been shown in Fig 13.8. The activities that the developer must perform for
achieving a higher level of maturity have also been annotated on the diagram.
Six Sigma
• Motorola, USA , initially developed the six-sigma method in the early 1980s. The purpose of six sigma
is to develop processes to do things better, faster, and at a lower cost.
• Six sigma becomes applicable to any activity that is concerned with cost, timeliness, and quality of
results. Therefore, it is applicable to virtually every industry.
• Six sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects and minimizing variability in processes.
• Six sigma is essentially a disciplined, data-driven approach to eliminate defects in any process. The
statistical representation of six sigma describes quantitatively how a process is performing. To achieve
six sigma, a process must not produce more than 3.4 defects per million defect opportunities.
• A six-sigma defect is defined as any system behavior that is not as per customer specifications. Total
number of six sigma defect opportunities is then the total number of chances for committing an error.
Sigma of a process can easily be calculated using a six-sigma calculator.
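A minimal sketch, using the conventional 1.5-sigma shift, of how a defect count can be converted to a DPMO figure and an approximate sigma level; the counts are invented and the helper names are not part of any particular Six Sigma tool.

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

def sigma_level(dpmo_value: float) -> float:
    """Approximate sigma level, using the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

d = dpmo(defects=34, units=100_000, opportunities_per_unit=100)
print(f"DPMO = {d:.1f}, sigma level ~ {sigma_level(d):.1f}")
# 34 defects in 10 million opportunities -> DPMO 3.4 -> roughly six sigma
```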
UVW, a company specializing in machine tool equipment with sophisticated control software, can benefit significantly from implementing Six Sigma methodologies. Here is how UVW can adopt and benefit from Six Sigma.
Overview of Six Sigma: Six Sigma is a disciplined, data-driven approach aimed at improving process outputs by identifying and eliminating causes of defects, thereby reducing variability in processes. The goal is to achieve a level of quality where the process produces no more than 3.4 defects per million opportunities.
1. Define:
Objective: Clearly define the problem areas and goals for improvement.
Action: Identify critical processes such as software development, testing, and deployment where
defects and variability are impacting quality and delivery timelines.
2. Measure:
Objective: Quantify current process performance and establish baseline metrics.
Action: Use statistical methods to measure defects, cycle times, and other relevant metrics in software
development and testing phases.
3. Analyze:
Objective: Identify root causes of defects and variability in processes.
Action: Conduct thorough analysis using tools like root cause analysis, process mapping, and
statistical analysis to understand why defects occur and where process variations occur.
4. Improve
Objective: Implement solutions to address root causes and improve process performance.
Action: Develop and implement process improvements based on the analysis findings. This could
include standardizing processes, enhancing communication between teams (e.g., software
development and testing), and implementing better change control procedures.
5. Control:
Objective: Maintain the improvements and prevent regression.
Action: Establish control mechanisms to monitor ongoing process performance. Implement measures
such as control charts, regular audits, and performance reviews to sustain improvements.
Focus Areas:
❖ Use of DMAIC (Define, Measure, Analyse, Improve, Control) for existing process improvements.
❖ Application of DMADV (Define, Measure, Analyse, Design, Verify) for new process or product
development to ensure high-quality outputs from the outset.
13.11 TECHNIQUES TO HELP ENHANCE SOFTWARE QUALITY
The discussion highlights several key themes in software quality improvement over time, emphasizing
shifts in practices and methodologies:
Increasing visibility: A landmark in the movement towards making the software development process more visible was the advocacy by the American software guru Gerald Weinberg of 'egoless programming'. Weinberg encouraged the simple practice of programmers looking at each other's code.
Procedural structure:
• Initially, software development lacked structured methodologies, but over time, methodologies
with defined processes for every stage (like Agile, Waterfall, etc.) have become prevalent.
• Structured programming techniques and 'clean-room' development further enforce procedural
rigor to enhance software quality
• Traditional approaches involved waiting until a complete, albeit imperfect, version of software
was ready for debugging.
• Contemporary methods emphasize checking and validating software components early in
development, reducing reliance on predicting external quality from early design documents.
Inspections :
▪ Inspections are critical in ensuring quality at various development stages, not just in coding but
also in documentation and test case creation.
It is a very effective way of removing superficial errors from a piece of work.
• It motivates developers to produce better structured and self-explanatory software.
• It helps spread good programming practice as the participants discuss the advantages and
disadvantages of specific piece of code.
• It enhances team spirit.
▪ Techniques like Fagan inspections, pioneered by IBM, formalize the review process with trained
moderators leading discussions to identify defects and improve quality.
▪ Learnings from Japanese quality practices, such as quality circles and continuous improvement,
have influenced global software quality standards, emphasizing rigorous inspection and feedback
loops.
Benefits of Inspections:
▪ Inspections are noted for their effectiveness in eliminating superficial errors, motivating
developers to write better-structured code, and fostering team collaboration and spirit.
▪ They also facilitate the dissemination of good programming practices and improve overall
software quality by involving stakeholders from different stages of development.
The late 1960s marked a pivotal period in software engineering where the complexity of software systems
began to outstrip the capacity of human understanding and testing capabilities. Here are the key
developments and concepts that emerged during this time:
▪ It involved three separate teams:
➢ Specification Team: Gathers user requirements and usage profiles.
➢ Development Team: Implements the code without conducting machine testing; focuses on
formal verification using mathematical techniques.
➢ Certification Team: Conducts testing to validate the software, using statistical models to
determine acceptable failure rates.
4. Incremental Development:
▪ Systems were developed incrementally, ensuring that each increment was capable of
operational use by end-users.
▪ This approach avoided the pitfalls of iterative debugging and ad-hoc modifications, which
could compromise software reliability.
Overall, these methodologies aimed to address the challenges posed by complex software systems by
promoting structured, systematic development processes that prioritize correctness from the outset rather
than relying on post hoc testing and debugging.
Formal methods
• Clean-room development uses mathematical verification techniques. These techniques use an unambiguous, mathematically based specification language, of which Z and VDM are examples. They are used to define preconditions and postconditions for each procedure.
• Preconditions define the allowable states, before processing, of the data items upon which a procedure is to work.
• Postconditions define the state of those data items after processing. The mathematical notation should ensure that such a specification is precise and unambiguous.
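The idea of preconditions and postconditions can be illustrated informally with runtime assertions rather than a formal language such as Z or VDM; the integer square-root routine below is an invented example, not taken from the text.

```python
import math

def integer_sqrt(x: int) -> int:
    """Return the largest integer r with r*r <= x."""
    # Precondition: the states of the input data the procedure may start from.
    assert x >= 0, "precondition violated: x must be non-negative"

    r = math.isqrt(x)

    # Postcondition: the required state of the result after processing.
    assert r * r <= x < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(17))  # 4
```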
• Staff are involved in the identification of sources of errors through the formation of quality circles. These can be set up in all departments of an organization, including those producing software, where they are known as software quality circles (SWQC).
• A quality circle is a group of four to ten volunteers working in the same area who meet for, say, an hour a week to identify, analyse and solve their work-related problems. One of their number is the group leader and there could be an outsider, a facilitator, who can advise on procedural matters.
• Associated with quality circles is the compilation of most probable error lists. For example, at
IOE, Amanda might find that the annual maintenance contracts project is being delayed because
of errors in the requirements specifications.
1. Identification of Common Errors: Teams, such as those in quality circles, assemble to identify the
most frequent types of errors occurring in a particular phase of software development, such as
requirements specification.
2. Analysis and Documentation: The team spends time analyzing past projects or current issues to
compile a list of these common errors. This list documents specific types of mistakes that have been
identified as recurring.
3. Proposing Solutions: For each type of error identified, the team proposes measures to reduce or
eliminate its occurrence in future projects.
For example:
Example Measure: Producing test cases simultaneously with requirements specification to ensure early
validation.
Example Measure: Conducting dry runs of test cases during inspections to catch errors early in the
process.
4. Development of Checklists: The proposed measures are formalized into checklists that can be used
during inspections or reviews of requirements specifications. These checklists serve as guidelines to
ensure that identified errors are systematically checked for and addressed.
5. Implementation and Feedback: The checklists are implemented into the software development
process. Feedback mechanisms are established to evaluate the effectiveness of these measures in reducing
errors and improving overall quality.
Improved Quality: By proactively identifying and addressing common errors, the quality of
requirements specifications (or other phases) improves.
Efficiency: Early detection and prevention of errors reduce the need for costly corrections later in the
development cycle.
Standardization: Checklists standardize the inspection process, ensuring that critical aspects are
consistently reviewed. This approach aligns well with quality circles and other continuous improvement
methodologies by fostering a culture of proactive problem-solving and learning from past experiences.
13.12 Testing
The final judgement of the quality of the application is whether it actually works correctly when executed.
• Consider the diagrammatic representation of the V-model, which stresses the necessity of validation activities that match the activities creating the products of the project.
• The V-process model expands the activity box 'testing' in the waterfall model.
• Each step has a matching validation process which can, where defects are found, cause a loop back to the corresponding development stage and a reworking of the following steps.
Verification and validation are both defect detection techniques which help to remove errors in software. The main differences between the two techniques are the following:
❖ Verification is the process of determining whether the output of one phase of software
development conforms to that of its previous phase. For example, a verification step can be to
check if the design documents produced after the design step conform to the requirement
specification.
❖ Validation is the process of determining whether fully developed software conforms to its
requirements specification. Validation is applied to the fully developed and integrated software
to check if it satisfies customer’s requirements.
❖ Verification is carried out during the development process to check if the development activities are being carried out correctly, whereas
❖ Validation is carried out towards the end of the development process to check if the right product, as required by the customer, has been developed.
• All the boxes on the right-hand side of the V-process model of Fig. 13.9 correspond to verification activities, except the system testing block, which corresponds to a validation activity.
1) Black-box approach
2) White-box(or glass-box) approach.
In the black-box approach, test cases are designed using only the functional specification of the software. That is, test cases are designed solely based on an analysis of the input/output behaviour. Hence black-box testing is also known as functional testing and as requirement-driven testing.
Design of white-box test cases requires analysis of the source code. It is also called structural testing or structure-driven testing.
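The difference between the two approaches can be illustrated with a trivial, invented function: black-box cases are derived only from the stated specification, while white-box cases are chosen after inspecting the code so that every branch is exercised.

```python
# Invented example: a function specified as "return the larger of two numbers".
def larger(a, b):
    if a >= b:
        return a
    return b

# Black-box (functional) test cases: derived from the specification alone.
assert larger(2, 5) == 5
assert larger(5, 2) == 5

# White-box (structural) test cases: chosen by inspecting the source so that
# both branches of the if-statement are executed at least once.
assert larger(7, 3) == 7   # exercises the 'a >= b' branch
assert larger(3, 7) == 7   # exercises the 'else' branch
```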
Levels of Testing:
A software product is normally tested at three different stages or levels. These three testing stages
are:
1) Unit Testing
2) Integration Testing
3) System Testing
During unit testing, the individual components (or units) of a program are tested. For every module, unit
testing is carried out as soon as the coding for it is complete. Since every module is tested separately,
there is a good scope for parallel activities during unit testing.
After testing all the units individually, the units are integrated over a number of steps and tested after
each step of integration (Integration testing).
Testing Activities:
1) Test Planning: Test Planning consists of determining the relevant test strategies and planning for
any test bed that may be required. A test bed usually includes setting up the hardware or simulator.
2) Test Case Execution and Result Checking: Each test case is run and the results are compared
with the expected results. A mismatch between the actual result and expected results indicates a
failure. The test cases for which the system fails are noted down for test reporting.
3) Test Reporting: When the test cases are run, the tester may raise issues, that is, report
discrepancies between the expected and the actual findings. A means of formally recording these
issues and their history is needed. A review body adjudicates these issues. The outcome of this
scrutiny would be one of the following:
• The issue is dismissed on the grounds that there has been a misunderstanding of a
requirement by the tester.
• The issue is identified as a fault which the developers need to correct -Where development
is being done by contractors, they would be expected to cover the cost of the correction.
• It is recognized that the software is behaving as specified, but the requirement originally
agreed is in fact incorrect.
• The issue is identified as a fault but is treated as an off-specification -It is decided that the
application can be made operational with the error still in place.
4) Debugging: For each failure observed during testing, debugging is carried out to identify the
statements that are in error.
5) Defect Retesting: Once a defect has been dealt with by the development team, the corrected code
is retested by the testing team to check whether the defect has successfully been addressed . Defect
retest is also called resolution testing. The resolution tests are a subset of the complete test suite
(Fig: 13.10).
6) Regression Testing: Regression testing checks whether the unmodified functionalities still continue to work correctly. It is needed because, whenever a defect is corrected and the change is incorporated in the program code, the change introduced to correct the error could actually introduce errors in functionalities that were previously working correctly.
7) Test Closure: Once the system successfully passes all the tests, documents related to lessons learned, results, logs, etc., are archived for use as a reference in future projects.
Many organizations have separate system testing groups to provide an independent assessment of the correctness of software before release. In other organizations, staff are allocated to a purely testing role but work alongside the developers instead of forming a separate group.
Test Automation:
1) Testing is the most time-consuming and laborious of all software development activities. With the growing size of programs and the increased importance being given to product quality, test automation is drawing considerable attention.
2) Test automation is automating one or some activities of the test process. This reduces human effort
and time which significantly increases the thoroughness of testing.
3) With automation, more sophisticated test case design techniques can be deployed. By using the
proper testing tools automated test results are more reliable and eliminates human errors during
testing.
4) Every software product undergoes significant change over time. Each time the code changes, it needs to be tested to check whether the changes induce any failures in the unchanged features. Thus, the originally designed test suite needs to be run repeatedly each time the code changes. Automated testing tools can be used to repeatedly run the same set of test cases.
➢ Capture and Playback Tools: In this type of tool, the test cases are executed manually only once. During manual execution, the sequence and values of the various inputs, as well as the outputs produced, are recorded. Later, the test can be automatically replayed and the results checked against the recorded output.
Advantage: This tool is useful for regression testing.
Disadvantage: Test maintenance can be costly when the unit under test changes, since some of the captured tests may become invalid.
➢ Automated Test Script Tool: Test Scripts are used to drive an automated test tool. The scripts
provide input to the unit under test and record the output. The testers employ a variety of languages
to express test scripts.
Advantage: Once the test script is debugged and verified, it can be rerun a large number of
times easily and cheaply.
Disadvantage: Debugging test scripts to ensure accuracy requires significant effort.
➢ Random Input Test Tools: In this type of an automatic testing tool, test values are randomly
generated to cover the input space of the unit under test. The outputs are ignored because analyzing
them would be extremely expensive.
Advantage: This is relatively easy and cost-effective for finding some types of defects.
Disadvantage: Is very limited form of testing. It finds only the defects that crash the unit under
test and not the majority of defects that do not crash but simply produce incorrect results.
➢ Model-Based Test Tools: A model is a simplified representation of program. These models can
either be structural models or behavioral models. Examples of behavioral models are state models
and activity models. A state model-based testing generates tests that adequately cover the state space
described by the model.
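As an illustration of the automated test script tools described above, here is a minimal sketch in the style of pytest; the module name billing and the function apply_discount are hypothetical stand-ins for the unit under test.

```python
# test_billing.py -- a minimal automated test script (pytest style).
# 'billing' and 'apply_discount' are hypothetical names for the unit under test.
import pytest
from billing import apply_discount

@pytest.mark.parametrize("amount, percent, expected", [
    (100.0, 10, 90.0),
    (200.0, 0, 200.0),
    (50.0, 100, 0.0),
])
def test_apply_discount(amount, percent, expected):
    # Drive the unit with scripted inputs and check the recorded output.
    assert apply_discount(amount, percent) == pytest.approx(expected)

# Once debugged, the whole suite can be rerun cheaply on every code change, e.g.:
#   pytest test_billing.py
```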
1) Using Historical Data: If historical error data from past projects are available, this data can be used to estimate the number of errors per 1000 lines of code. This method allows you to predict the number of errors likely to be found in a new system development of a known size.
2) Using Independent Reviewers: Instead of seeding known errors into the code, have two independent
reviewers inspect or test the same code.
• Collect three counts:
o n1: Number of valid errors found by reviewer A
o n2: Number of valid errors found by reviewer B
o n12: Number of cases where the same error is found by both A and B
• The smaller the proportion of errors found by both A and B compared to those found by only
one reviewer, the larger the total number of errors likely to be in the software.
• Estimate the total number of errors using the formula:
n=(n1×n2)/n12
This method helps in estimating the number of latent errors without deliberately introducing known
errors into the software.
For example, if A finds 30 errors and B finds 20 errors, of which 15 are common to both A and B, the
estimated total number of errors would be:
(30 × 20) / 15 = 40
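A small Python sketch of this estimate, using the worked example above; the latent-error count shown is simply the estimated total minus the distinct errors already found by either reviewer.

def estimate_total_errors(n1, n2, n12):
    """Two-reviewer estimate of the total number of errors: n = (n1 * n2) / n12."""
    if n12 == 0:
        raise ValueError("No common errors found; the estimate is undefined.")
    return (n1 * n2) / n12

n1, n2, n12 = 30, 20, 15          # worked example from the text
total = estimate_total_errors(n1, n2, n12)
found = n1 + n2 - n12             # distinct errors already detected by A or B
print(f"Estimated total errors: {total:.0f}, already found: {found}, latent: {total - found:.0f}")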
A software product having a large number of defects is unreliable. The reliability of the system improves
if the number of defects in it is reduced.
Reliability is observer-dependent: it depends on the relative frequency with which different users invoke
the functionalities of a system. Because of the different usage patterns of the available functionalities of
the software, a bug that frequently shows up for one user may not show up at all for another user, or may
show up only very infrequently.
The reliability of software keeps improving with time during the testing and operational phases as defects
are identified and repaired. The growth of reliability over the testing and operational phases can be
modelled using mathematical expressions called Reliability Growth Models (RGMs). Well-known examples include:
• Jelinski-Moranda model
• Littlewood-Verrall model
• Goel-Okumoto model
RGMs help predict reliability levels during the testing phase and determine when testing can be stopped.
The following challenges make software reliability more difficult to measure than hardware reliability.
Hardware vs. Software Reliability
➢ Hardware failures typically result from wear and tear, whereas software failures are due to bugs.
➢ Hardware failure often requires replacement or repair of physical components. Software failures
need bug fixes in the code, which can affect reliability positively or negatively.
➢ Hardware: Shows a "bathtub" curve where the failure rate is initially high, decreases during the useful
life, and increases again as components wear out. Software: Reliability generally improves over time
as bugs are identified and fixed, leading to decreased failure rates.
Figure 13.11(a): Illustrates the hardware product's failure rate over time, depicting the "bathtub" curve.
Figure 13.11(b): Shows the software product's failure rate, indicating a decline in failure rate over time due
to bug fixes and improvements.
2) Mean Time To Failure (MTTF):
• Time between two successive failures, averaged over a large number of failures.
• Calculation involves summing up the time intervals between successive failures and dividing by the
number of failures; roughly, MTTF = (sum of time intervals between successive failures) / (number of
observed failures).
• Only runtime is considered, excluding downtime for fixes.
5) Probability of Failure on Demand (POFOD):
• POFOD measures the likelihood of the system failing when a service request is made. For
example, a POFOD of 0.001 means that 1 out of every 1000 service requests would result in
a failure.
• Suitable for systems that are not required to run continuously.
6) Availability:
• Measures how likely the system is available for use during a given period.
• Takes into account both failure occurrences and repair times.
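A small Python sketch computing these metrics from made-up observation data: MTTF as the average interval between successive failures, POFOD as failed requests over total requests, and availability from uptime and downtime. All numbers below are hypothetical.

failure_times = [12.0, 30.0, 55.0, 90.0]        # hypothetical runtimes (hours) at which failures occurred
intervals = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
mttf = sum(intervals) / len(intervals)          # average time between successive failures

failed_requests, total_requests = 2, 2000       # hypothetical service-request counts
pofod = failed_requests / total_requests        # probability of failure on demand

uptime, downtime = 990.0, 10.0                  # hypothetical hours in service vs. under repair
availability = uptime / (uptime + downtime)

print(f"MTTF = {mttf:.1f} h, POFOD = {pofod}, Availability = {availability:.2%}")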
Classification of Failures:
• Transient: Failures occurring only for certain inputs while invoking a function.
• Permanent: Failures occurring for all inputs while invoking a function.
• Recoverable: The system can recover without having to be shut down.
• Unrecoverable: The system needs to be restarted to recover.
• Cosmetic: Minor irritations that do not produce incorrect results.
Reliability Growth Models:
• Definition: Mathematical models that predict how software reliability improves as errors are
detected and fixed.
• Application: Used to predict reliability levels and to determine when testing can be stopped.
1) Jelinski-Moranda Model
• Model characteristics:
• A step-function model assuming that reliability increases by a constant increment with each error fix.
• Assumes perfect error fixing, meaning all errors contribute equally to reliability growth.
• This is typically unrealistic, as different errors contribute differently to reliability growth. The
reliability growth predicted by this model is shown in Figure 13.12.
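A minimal sketch of the step-function behaviour, assuming the commonly quoted Jelinski-Moranda form in which the failure intensity after i perfect fixes is phi × (N − i), so every fix reduces the intensity by the same constant amount (equivalently, reliability grows in equal steps). The values of N and phi below are made up for illustration.

N = 10       # assumed total number of errors initially present in the code
phi = 0.05   # assumed per-error contribution to the failure intensity (failures/hour)

# Failure intensity drops by the same constant step (phi) after every perfect fix.
for fixes in range(N + 1):
    intensity = phi * (N - fixes)
    print(f"after {fixes:2d} fixes: failure intensity = {intensity:.2f} failures/hour")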
2) Littlewood-Verrall Model
• This model allows for negative reliability growth, acknowledging that repairs can introduce new
errors.
• It models the impact of error repairs on product reliability, which diminishes over time as the errors
with larger contributions to reliability are addressed first.
• Uses a Gamma distribution to treat an error's contribution to reliability improvement as an
independent random variable.
3) Goel-Okumoto Model
• This model assumes exponentially distributed execution times between failures and models the
cumulative number of failures μ(t) over time; μ(t + Δt) − μ(t) gives the expected number of failures
between time t and t + Δt.
• The failure process is assumed to follow a non-homogeneous Poisson process (NHPP), i.e., the
expected number of failures occurring between time t and t + Δt is proportional to the expected
number of undetected errors at time t.
o The rate of failures decreases exponentially over time.
o The model assumes immediate and perfect correction once a failure is detected.
• The number of failures over time is shown in Figure 13.13. The expected number of failures by time t
can be given by μ(t) = N(1 − e^(−bt)),
where N = expected total number of defects in the code and
b is the rate at which the failure rate decreases.
Graph: Illustrates the expected total number of errors over execution time, showing an initial rapid
increase in detected errors, which slows down as more errors are corrected.
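A short Python sketch evaluating μ(t) = N(1 − e^(−bt)) for made-up values of N and b, showing the rapid initial rise and later flattening described above.

import math

N = 100      # assumed expected total number of defects in the code
b = 0.02     # assumed rate at which the failure rate decreases (per hour of execution)

def expected_failures(t):
    """Goel-Okumoto mean value function: expected cumulative failures by execution time t."""
    return N * (1 - math.exp(-b * t))

for t in [0, 25, 50, 100, 200, 400]:
    print(f"t = {t:3d} h -> expected cumulative failures = {expected_failures(t):6.1f}")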
Quality Plans:
• Organizations produce quality plans for each project to show how the standard quality procedures and
standards from the organization's quality manual will be applied to the project.
• If quality-related activities and requirements have been identified by the main planning process, a
separate quality plan may not be necessary.
• When producing software for an external client, the client’s quality assurance staff might require a
dedicated quality plan to ensure the quality of the delivered products.
• A quality plan acts as a checklist to confirm that all quality issues have been addressed during the
planning process.
• Most of the content in a quality plan references other documents that detail specific quality procedures
and standards.
Components of a Quality Plan