Software Engineering 1
Table of Contents
1. What is software?
1.1. Types of Software
1.2. Characteristics and Problems of Software
1.3. Software Engineering Approach
1.4. Definition of Software Engineering
1.5. Software Attributes
2. Software Processes
2.1. The ‘Waterfall’ model
2.2. Evolutionary development
2.3. Describe productivity in software engineering
2.4. Formal systems development
2.5. Reuse-oriented development
2.6. Incremental Development
2.7. Spiral development
2.8. The RAD Model
3. Introduction to Unified Modeling Language
3.1. History of the UML
3.2. Brief overview of UML
3.3. A Conceptual Model of the UML
3.4. Types of UML Diagrams - Discussion
4. Software Design
4.1. Design concepts
4.1.1. Abstraction
4.1.2. Architecture
4.1.3. Patterns
4.1.4. Modularity
4.1.5. Cohesion
4.1.6. Coupling
4.1.7. Information hiding
4.1.8. Functional independence
4.1.9. Refinement
4.2. Architectural design
4.2.1. Repository model
4.2.2. Client-server model
4.2.3. Layered model
4.3. Explain traceability in software systems and describe the processes
4.4. Modular decomposition
4.5. Procedural design using structured methods
4.6. Object oriented design process
4.6.1. Understand and define the context and the modes of use of the system
4.6.2. Design the system architecture
4.6.3. Identify the principle objects of the system
4.6.4. Develop design model
4.6.5. Specify the object interfaces
5. Managing Software Projects
5.1. Need for the proper management of software projects
5.2. Describe the role of repositories
5.3. Management activities
5.3.1. Project planning
5.3.2. Estimating costs
5.3.3. Project scheduling
5.4. Risk management
5.4.1. Risk identification
5.4.2. Risk analysis
5.4.3. Risk Planning
1. What is software?
Many people equate the term software with computer programs. In fact, this is too restrictive
a view. Software is not just the programs but also all associated documentation and
configuration data which is needed to make these programs operate correctly.
A software system usually consists of a number of separate programs, configuration files
which are used to set up these programs, systems and user documentation which explains
how to use the system.
Individuals who develop software are termed software engineers. Software engineers are concerned with developing software products, i.e. software which can be sold to a customer. There are two categories of software products:
1. Generic products: These are stand-alone systems which are produced by a
development organization and sold on the open market to any customer who is able
to buy them. Examples: Word processors, Databases, Drawing Packages and Project
management tools.
2. Bespoke (or customized) products: These are systems which are commissioned
by a particular customer. The software is developed specially for that customer by a
software contractor.
Example: Software written to support a business process
Based on its use and purpose there are different types of software which are available in the
real world:
System Software – Systems software is a collection of programs written to service other
programs. They directly control the hardware resources and support the operation of
application software.
Examples of System software are:
1. Operating Systems - Windows, UNIX, Linux
2. Program Translators - Compilers, Interpreters
3. Utility Software - Merging, Sorting
1. Real time software – Software that monitors, analyzes or controls real-world events as they occur is called real-time software. Elements of real-time software include a data-gathering component that collects data, a component that transforms the data, and a component that responds to the external environment.
2. Business software – these are information systems that are used in many general
business applications. Examples are general TPS, MIS, etc.
4. Embedded software – intelligent products include these types of software, which are very commonly used nowadays. Embedded software resides in the read-only memory of the product. Examples: microwave ovens, vehicle dashboard displays, etc.
6. Web based software – web pages retrieved by a browser are software that incorporates executable instructions. Examples: CGI, HTML, Perl, etc.
As well as the services which they provide, software products have a number of other
associated attributes which reflect the quality of that software. These attributes are not
directly concerned with what the software does, rather they reflect its behaviour while it is
executing and the structure and organization of the source program and associated
documentation.
The specific set of attributes which you might expect from a software system obviously depends on its application. For example, a banking system must be secure, an interactive game must be responsive, etc.
Essential attributes of good software

Product characteristic - Description
Maintainability - Software should be written in such a way that it may evolve to meet the changing needs of customers. This is a critical attribute because software change is an inevitable consequence of a changing business environment.
Dependability - Software dependability has a range of characteristics, including reliability, security and safety. Dependable software should not cause physical or economic damage in the event of system failure.
Efficiency - Software should not make wasteful use of system resources such as memory and processor cycles. Efficiency therefore includes responsiveness, processing time, memory utilization, etc.
Usability - Software must be usable, without undue effort, by the type of user for whom it is designed. This means that it should have an appropriate user interface and adequate documentation.
2. Software Processes
A software process is a set of activities and associated results which lead to the production
of a software product. These may involve the development of software from scratch although
it is increasingly the case that new software is developed by extending and modifying
existing systems.
There is no ideal process and different organizations have developed completely different
approaches to software development. Processes have evolved to exploit the capabilities of
the people in an organization and the specific characteristics of the systems which are being
developed. Therefore, even within the same company there may be many different
processes used for software development.
Although there are many different software processes, there are fundamental activities which
are common to all software processes. These are:
• Software specification – The functionality of the software and the constraints on its
operations must be defined.
• Software design and implementation – The software to meet the specification must
be produced.
• Software validation – The software must be validated to ensure that it does what the
customer wants.
• Software evolution – The software must evolve to meet changing customer needs.
2.1. The ‘Waterfall’ model
The waterfall model takes the fundamental process activities of specification, development, validation and evolution and represents them as separate, sequential process phases:
1. Requirements analysis and definition: The system’s services, constraints and goals are established in consultation with system users. They are then defined in detail and serve as a system specification.
2. System and software design: The systems design process partitions the requirements to either hardware or software systems. It establishes an overall system architecture. Software design involves identifying and describing the fundamental software system abstractions and their relationships.
3. Implementation and unit testing: During this stage, the software design is realized
as a set of programs or program units. Unit testing involves verifying that each unit
meets its specification.
4. Integration and system testing: The individual program units or programs are
integrated and tested as a complete system to ensure that the software requirements
have been met. After testing, the software system is delivered to the customer.
5. Operation and maintenance: Normally (although not necessarily) this is the longest
life-cycle phase. The system is installed and put into practical use. Maintenance
involves correcting errors which were not discovered in earlier stages of the life cycle,
improving the implementation of system units and enhancing the system’s services
as new requirements are discovered.
[Figure: the waterfall model: requirements definition, system and software design, implementation and unit testing, integration and system testing, operation and maintenance.]
2.2. Evolutionary development
Evolutionary development is based on the idea of developing an initial implementation, exposing this to user comment and refining it through many versions until an adequate system has been developed. There are two fundamental types:
1. Exploratory development - where the objective of the process is to work with the customer to explore their requirements and deliver a final system. The development starts with the parts of the system which are understood. The system evolves by adding new features as they are proposed by the customer.
2. Throw-away prototyping - where the objective of the process is to understand the customer’s requirements and hence develop a better requirements definition for the system. The prototype concentrates on experimenting with those parts of the requirements which are poorly understood.
[Figure: evolutionary development: concurrent specification, development and validation activities transform an outline description into initial, intermediate and final versions of the system.]
2.3. Describe productivity in software engineering
Productivity in software engineering is normally calculated as size / effort: for example, lines of code produced per person-month. Note that there are various methods to measure software size. Each has its own features.
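As a hedged illustration (the figures below are invented, and lines of code is only one of several possible size measures), productivity can be computed as follows:

# Hypothetical illustration: productivity = size / effort.
def productivity(size_loc: int, effort_person_months: float) -> float:
    """Return productivity in lines of code per person-month."""
    return size_loc / effort_person_months

# A team that delivers 42,000 LOC with 60 person-months of effort:
print(productivity(42_000, 60.0))   # 700.0 LOC per person-month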
2.4. Formal systems development
Formal systems development is an approach based on the mathematical transformation of a system specification into an executable program. The critical distinctions between this approach and the waterfall model are:
1. The software requirements specification is refined into a detailed formal specification
which is expressed in a mathematical notation.
2. The development processes of design, implementation and unit testing are replaced
by a transformational development process where the formal specification is refined,
through a series of transformations, into a program.
[Figure: formal systems development: a formal specification is refined through a series of transformations (R1, R2, R3) into an executable program, with correctness arguments (P1 to P4) accompanying the transformations.]
2.5. Reuse-oriented development
This approach is based on the existence of a significant number of reusable components.
The system development process focuses on integrating these components into a system
rather than developing from scratch.
In the majority of software projects, there is some software reuse. This usually happens
informally when people working on the project know of designs or code which is similar to
that required. They look for these, modify them as required and incorporate them into their
system. In the evolutionary approach, reuse is often seen as essential for rapid system
development.
This informal reuse takes place irrespective of the generic process which is used. However,
in the past few years, an approach to software development (component based software
engineering) which relies on reuse has emerged and is becoming increasingly widely used.
This reuse-oriented approach relies on a large base of reusable software components which
can be accessed and some integrating framework for these components.
[Figure: reuse-oriented development: requirements specification, component analysis, requirements modification, system design with reuse, development and integration, and system validation.]
Sometimes, these components are systems in their own right (COTS or Commercial Off-The-
Shelf systems) that may be used to provide specific functionality such as text formatting,
numeric calculation, etc. The generic process model for reuse-oriented development is
shown above.
While the initial requirements specification stage and the validation stage are comparable
with other processes, the intermediate stages in a reuse-oriented process are different.
1. Component analysis: Given the requirements specification, a search is made for components to implement that specification. Usually, there is no exact match, and the components that may be used provide only some of the required functionality.
2. Requirements modification: During this stage, the requirements are analyzed using information about the components which have been discovered. They are then modified to reflect the available components. Where modifications are impossible, the component analysis activity may be re-entered to search for alternative solutions.
3. System design with reuse: During this phase, the framework of the system is designed or an existing framework is reused. The designers take into account the components which are reused and organize the framework to cater for this. Some new software may have to be designed if reusable components are not available.
4. Development and integration: Software which cannot be externally procured is developed, and the components and COTS systems are integrated to create the new system.
2.6. Incremental Development
The waterfall model of development requires customers for a system to commit to a set of
requirements before design begins and the designer to commit to particular design strategies
before implementation. Changes to the requirements during development require rework of
the requirements, design and implementation. However, the advantages of the waterfall
model are that it is a simple management model and its separation of design and
implementation should lead to robust systems which are amenable to change.
By contrast, an evolutionary approach to development allows requirements and design decisions to be delayed but also leads to software which may be poorly structured and difficult to understand and maintain. Incremental development is an in-between approach which combines the advantages of both of these models.
The incremental approach to development was suggested by Mills (Mills et al., 1980) as a
means of reducing rework in the development process and giving customers some
opportunities to delay decisions on their detailed requirements until they had some
experience with the system.
Once the system increments have been identified, the requirements for the services to be
delivered in the first increment are defined in detail and that increment is developed using the
most appropriate development process. During that development, further requirements
analysis for later increments can take place but requirements changes for the current
increment are not accepted.
Once an increment is completed and delivered, customers can put it into service. This means
that they take early delivery of part of the system functionality. They can experiment with the
system which helps them clarify their requirements for later increments and for later versions
of the current increment. As new increments are completed, they are integrated with existing increments so that the system functionality improves with each delivered increment. The common services may be implemented early in the process or may be implemented incrementally as functionality is required by an increment.
There is no need to use the same process for the development of each increment. Where the
services in an increment have a well-defined specification, a waterfall model of development
may be used for that increment. Where the specification is unclear, an evolutionary
development model may be used.
2.7. Spiral development
In the spiral model, proposed by Boehm, the software process is represented as a spiral rather than as a sequence of activities. Each loop in the spiral represents a phase of the process and is split into four sectors:
1. Objective setting - Specific objectives for that phase of the project are defined. Constraints on the process and the product are identified and a detailed management plan is drawn up. Project risks are identified. Alternative strategies, depending on these risks, may be planned.
2. Risk assessment and reduction - For each of the identified project risks, a detailed analysis is carried out. Steps are taken to reduce the risk. For example, if there is a risk that the requirements are inappropriate, a prototype system may be developed.
3. Development and validation- After risk evaluation, a development model for the
system is then chosen. For example, if user interface risks are dominant, an
appropriate development model might be evolutionary prototyping. If safety risks are
the main consideration, development based on formal transformations may be the
most appropriate and so on. The waterfall model may be the most appropriate
development model if the main identified risk is sub-system integration.
4. Planning- The project is reviewed and a decision made whether to continue with a
further loop of the spiral. If it is decided to continue, plans are drawn up for the next
phase of the project.
[Figure: Boehm’s spiral model. Each loop passes through four sectors: determine objectives, alternatives and constraints; evaluate alternatives and identify and resolve risks (risk analysis with prototypes 1 to 3 leading to an operational prototype, supported by simulations, models and benchmarks); develop and verify the next-level product (concept of operation, software requirements, product design, detailed design, code, unit test, integration test, acceptance test, service); and plan the next phase (requirements plan, life-cycle plan, development plan, integration and test plan, with a review between loops).]
2.8. The RAD Model
Rapid Application Development (RAD) is an incremental process model that emphasizes a very short development cycle. The RAD approach encompasses the following phases:
Business modeling.
The information flow among business functions is modeled in a way that answers the
following questions: What information drives the business process? What information is
generated? Who generates it? Where does the information go? Who processes it?
Data modeling.
The information flow defined as part of the business modeling phase is refined into a set of
data objects that are needed to support the business. The characteristics (called attributes)
of each object are identified and the relationships between these objects defined.
Process modeling.
The data objects defined in the data modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions are
created for adding, modifying, deleting, or retrieving a data object.
Application generation.
RAD assumes the use of fourth generation techniques. Rather than creating software using
conventional third generation programming languages the RAD process works to reuse
existing program components (when possible) or create reusable components (when
necessary). In all cases, automated tools are used to facilitate construction of the software.
Testing and turnover.
Since the RAD process emphasizes reuse, many of the program components have already been tested. This reduces overall testing time. However, new components must be tested and all interfaces must be fully exercised.
Obviously, the time constraints imposed on a RAD project demand “scalable scope”. If a business application can be modularized in a way that enables each major function to be completed in less than three months (using the approach described previously), it is a candidate for RAD. Each major function can be addressed by a separate RAD team and then integrated to form a whole.
3. Introduction to Unified Modeling Language
3.1. History of the UML
The use of different notations brought confusion to the market since one symbol meant
different things to different people. To resolve this confusion Unified Modeling Language
(UML) was introduced.
“UML is a language used to specify, visualize, and document the artifacts of an object-oriented system under development. It represents the unification of the Booch, OMT, and Objectory notations, as well as the best ideas from a number of other methodologies”.
3.2. Brief overview of UML
UML is an attempt to standardize the artifacts of analysis and design: semantic models,
syntactic notation, and diagrams.
In November 1997, the UML was adopted as the standard modeling language by the Object Management Group (OMG). The current version of the UML is UML 1.4 and work is progressing on UML 2.0.
The UML is only a language and so is just one part of a software development method.
The UML is process independent, although optimally it should be used in a process that is
use case driven, architecture-centric, iterative, and incremental.
UML is a Language
A language provides a vocabulary and the rules for combining words in that vocabulary for
the purpose of communication. A modeling language is a language whose vocabulary and
rules focus on the conceptual and physical representation of a system. A modeling language
such as UML is thus a standard language for software blueprints.
The vocabulary and rules of a language such as the UML tell you how to create and read
well-formed models, but they don’t tell you what models you should create and when you
should create them.
There are some things about a software system you can’t understand unless you build models that go beyond the textual programming language. Some things are best modeled textually; others are best modeled graphically. The UML is such a graphical language.
The UML is more than just a bunch of graphical symbols. Rather, behind each symbol in the UML notation is a well-defined semantics. In this manner, one developer can write a model in the UML, and another developer, or even another tool, can interpret that model unambiguously.
The UML is not limited to modeling software. In fact, it is expressive enough to model non
software systems, such as workflow in the legal system, the structure and behavior of a
patient healthcare system, and the design of hardware.
3.3. A Conceptual Model of the UML
To understand the UML, you need to form a conceptual model of the language, and this requires learning these major elements: the UML’s basic building blocks, the rules that dictate how those building blocks may be put together, and some common mechanisms that apply throughout the UML.
1. Things
2. Relationships
3. Diagrams
Things are the abstractions that are first-class citizens in a model; relationships tie these
things together; diagrams group interesting collections of things.
3.4. Types of UML Diagrams - Discussion
• Use Case Diagram displays the relationship among actors and use cases.
• Class Diagram models class structure and contents using design elements such as
classes, packages and objects. It also displays relationships such as containment,
inheritance, associations and others.
• Interaction Diagrams
1. Sequence Diagram displays the time sequence of the objects participating in the
interaction. This consists of the vertical dimension (time) and horizontal dimension
(different objects).
• State Diagram displays the sequences of states that an object of an interaction goes
through during its life in response to received stimuli, together with its responses and
actions.
• Activity Diagram displays a special state diagram where most of the states are action
states and most of the transitions are triggered by completion of the actions in the source
states. This diagram focuses on flows driven by internal processing.
• Physical Diagrams
1. Component Diagram displays the high-level packaged structure of the code itself. Dependencies among components are shown, including source code components, binary code components, and executable components. Some components exist at compile time, at link time, at run time, as well as at more than one time.
4. Software Design
Software design consists of various concepts, which are described below.
4.1. Design concepts
4.1.1. Abstraction
When we consider a modular solution to any problem, many levels of abstraction can be
posed. At the highest level of abstraction, a solution is stated in broad terms using the
language of the problem environment. At lower levels of abstraction, a more detailed
description of the solution is provided.
“Abstraction is one of the fundamental ways that we as humans cope with complexity”
As we move through different levels of abstraction, we work to create procedural and data
abstractions. A procedural abstraction refers to a sequence of instructions that have a
specific and limited function. The name of procedural abstraction implies these functions, but
specific details are suppressed. An example of a procedural abstraction would be the word
open for a door. Open implies a long sequence of procedural steps (e.g., walk to the door,
reach out and grasp knob, turn knob and pull door, step away from moving door, etc)
A data abstraction is a named collection of data that describes a data object. In the context
of the procedural abstraction open, we can define a data abstraction called door. Like any
data object, the data abstraction for door would encompass a set of attributes that describe
the door (e.g., door type, swing direction, opening mechanism, weight, dimensions). It
follows that the procedural abstraction open would make use of information contained in the
attributes of the data abstraction door.
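To make the example concrete, here is a minimal Python sketch; the attribute names and the steps hidden inside open_door are illustrative assumptions, not part of the original text.

from dataclasses import dataclass

@dataclass
class Door:
    # Data abstraction: a named collection of attributes describing a door.
    door_type: str
    swing_direction: str
    opening_mechanism: str
    weight_kg: float
    is_open: bool = False

def open_door(door: Door) -> None:
    # Procedural abstraction: the single name "open" stands for the whole
    # sequence (walk to the door, grasp the knob, turn, pull, step away).
    door.is_open = True

front = Door("panel", "inward", "knob", 25.0)
open_door(front)      # callers use the abstraction, not the details
print(front.is_open)  # True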
4.1.2. Architecture
Software architecture alludes to "the overall structure of the software and the ways in
which that structure provides conceptual integrity for a system". In its simplest form,
architecture is the structure or organization of program components (modules), the manner in
which these components interact, and the structure of data that are used by the components.
In a broader sense, however, components can be generalized to represent major system
elements and their interactions.
“Software architecture is the development work product that gives the highest return on
investment with respect to quality, schedule and cost”
The architectural design can be represented using one or more of a number of different models.
4.1.3. Patterns
"A pattern is a named nugget of insight which conveys the essence of a proven solution
to a recurring problem within a certain context amidst competing concerns".
Stated in another way, a design pattern describes a design structure that solves a particular
design problem within a specific context and amid "forces" that may have an impact on the
manner in which the pattern is applied and used.
“Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice”
The intent of each design pattern is to provide a description that enables a designer to determine whether the pattern is applicable to the current work, whether it can be reused, and whether it can serve as a guide for developing a similar, but functionally or structurally different, pattern.
4.1.4. Modularity
Software architecture and design patterns embody modularity; that is, software is divided into separately named and addressable components, sometimes called modules, that are integrated to satisfy problem requirements.
It has been stated that "modularity is the single attribute of software that allows a
program to be intellectually manageable". Monolithic software (i.e., a large program
composed of a single module) cannot be easily grasped by a software engineer. The number
of control paths, span of reference, number of variables, and overall complexity would make
understanding close to impossible. To illustrate this point, consider the following argument
based on observations of human problem solving.
Consider two problems, P1 and P2. If the perceived complexity of P1, is greater than the
perceived complexity of P2, it follows that the effort required to solve P1 is greater than the
effort required to solve P2. As a general case, this result is intuitively obvious. It does take
more time to solve a difficult problem.
It also follows that the perceived complexity of two problems when they are combined is often greater than the sum of the perceived complexity when each is taken separately. This leads to a "divide and conquer" strategy: it is easier to solve a complex problem when you break it into manageable pieces. This has important implications with regard to modularity and software. It is, in fact, an argument for modularity.
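The argument can be stated compactly. Writing C for perceived complexity and E for required effort (the notation follows the prose above), a sketch in LaTeX:

\[ C(p_1) > C(p_2) \implies E(p_1) > E(p_2) \]
\[ C(p_1 + p_2) > C(p_1) + C(p_2) \quad\Longrightarrow\quad E(p_1 + p_2) > E(p_1) + E(p_2) \]

The second line is the formal form of the divide-and-conquer observation: the combined problem costs more than the sum of its parts, so partitioning software into modules reduces total effort.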
[Figure: total software cost versus number of modules. The cost of effort per module falls as modules become smaller, while the cost to integrate the modules rises; the total cost curve has a minimum region M, so both under- and over-modularization are expensive.]
4.1.5. Cohesion
Cohesion can be defined as the single-mindedness of a component. Within the context of component-level design for object-oriented systems, cohesion implies that a component or class encapsulates only attributes and operations that are closely related to one another.
2. Layer cohesion – this type of cohesion occurs when a higher layer accesses the services of a lower layer, but the lower layer does not access the higher layer.
3. Communicational cohesion – all operations that access the same data are defined
within one class/component.
4.1.6. Coupling
Coupling is a quantitative measure of the degree to which classes are connected to one another. When classes or components are more interdependent, coupling increases.
If the system is to be easy to maintain, coupling must be kept as low as possible. This is termed low coupling.
2. Common coupling – occurs when a number of components all make use of a global
variable.
5. Data coupling – occurs when operations pass long strings of data arguments from one operation to another.
7. Type use coupling – occurs when component A uses the data type that is defined
by component B.
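As an illustrative sketch (the function and variable names are invented), the following contrasts common coupling through a global variable with the lower coupling obtained by passing data explicitly:

# Common coupling: both callers depend on one global variable, so any
# change to its meaning or format ripples through every user.
tax_rate = 0.15

def price_with_tax(price: float) -> float:
    return price * (1 + tax_rate)            # hidden dependency on a global

# Lower (data) coupling: the dependency is an explicit parameter, so the
# operation can be understood and tested in isolation.
def price_with_tax_explicit(price: float, rate: float) -> float:
    return price * (1 + rate)

print(price_with_tax(100.0))                  # 115.0, via the global
print(price_with_tax_explicit(100.0, 0.15))   # 115.0, via explicit data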
4.1.7. Information hiding
Hiding implies that effective modularity can be achieved by defining a set of independent
modules that communicate with one another only that information necessary to achieve
software function. Abstraction helps to define the procedural (or informational) entities that
make up the software. Hiding defines and enforces access constraints to both procedural
detail within a module and any local data structure used by the module.
The use of information hiding as a design criterion for modular systems provides the greatest
benefits when modifications are required during testing and later, during software
maintenance. Because most data and procedure are hidden from other parts of the software,
inadvertent errors introduced during modification are less likely to propagate to other
locations within the software.
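A minimal sketch of information hiding, assuming a simple stack module (the example is illustrative, not from the original text); clients see only the published operations, never the internal list:

class Stack:
    def __init__(self) -> None:
        self._items: list[int] = []   # hidden representation

    def push(self, value: int) -> None:
        self._items.append(value)

    def pop(self) -> int:
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2; the internal list could later be replaced by a
                 # linked structure without touching any client code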
4.1.8. Functional independence
The concept of functional independence is a direct outgrowth of modularity and the concepts
of abstraction and information hiding. Functional independence is achieved by developing
modules with "single-minded" function and an "aversion" to excessive interaction with other
modules. Stated another way, we want to design software so that each module addresses a
specific sub function of requirements and has a simple interface when viewed from other
parts of the program structure. It is fair to ask why independence is important.
Software with effective modularity, that is, independent modules, is easier to develop
because function may be compartmentalized and interfaces are simplified (consider the
ramifications when development is conducted by a team). Independent modules are easier to
maintain (and test) because secondary effects caused by design or code modification are
limited, error propagation is reduced, and reusable modules are possible. To summarize,
functional independence is a key to good design, and design is the key to software quality.
4.1.9. Refinement
Refinement is a top-down design strategy in which a program is developed by successively elaborating levels of procedural detail: a high-level statement of function is decomposed step by step until programming-language statements are reached. Abstraction and refinement are complementary concepts; abstraction suppresses low-level detail, while refinement progressively reveals it.
4.2. Architectural design
The main purpose of architectural design is to establish the overall structure of the software
system. Architectural design can be defined as the design process for identifying the sub-
systems making up a system and the framework for sub-system control and communication.
The output of this design process is a description of the software architecture.
Architectural design is an early stage of the system design process. It represents the link between the specification and design processes, is often carried out in parallel with some specification activities, and involves identifying major system components and their communications.
Sub-systems making up a system must exchange information so that they can work together
effectively. There are two fundamental ways in which this can be done:
1. All shared data is held in a central database that can be accessed by all subsystems.
A system model based on a shared database is sometimes called a repository model.
2. Each sub-system maintains its own database. Data is interchanged with other sub-
systems by passing messages to them.
4.2.1. Repository model
The majority of systems that use large amounts of data are organised around a shared
database or repository. This model is therefore suited to applications where data is
generated by one sub-system and used by another. Examples of this type of system include
command and control systems, management information systems, CAD systems and CASE
toolsets.
[Figure: the repository model of a CASE toolset, in which tools such as a design analyser and a report generator operate on a shared central repository.]
4.2.2. Client-server model
The client-server architectural model is a system model where the system is organised as a
set of services and associated servers and clients that access and use the services. The
major components of this model are:
1. A set of servers that offer services to other sub-systems. Examples of servers are
print servers that offer printing services, file servers that offer file management
services and a compile server, which offers programming language compilation
services.
2. A set of clients that call on the services offered by servers. These are normally sub-
systems in their own right. There may be several instances of a client program
executing concurrently.
3. A network that allows the clients to access these services. This is not strictly
necessary as both the clients and the servers could run on a single machine. In
practice, however, most client-server systems are implemented as distributed
systems.
Clients may have to know the names of the available servers and the services that they
provide. However, servers need not know either the identity of clients or how many clients
there are. Clients access the services provided by a server through remote procedure calls
using a request-reply protocol such as the http protocol used in the WWW. Essentially, a
client makes a request to a server and waits until it receives a reply.
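A hedged sketch of this request-reply interaction using Python's standard urllib module; the URL is a placeholder for an assumed catalogue service, not a real one:

from urllib.request import urlopen

def fetch_catalogue_entry(url: str) -> str:
    # The client sends a request and blocks until the server replies.
    with urlopen(url) as reply:
        return reply.read().decode("utf-8")

# Example call (placeholder URL):
# print(fetch_catalogue_entry("http://example.com/catalogue/42"))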
Diagram below shows an example of a system that is based on the client-server model. This
is a multi-user, web-based system to provide a film and photograph library. In this system,
several servers manage and display the different types of media. Video frames need to be
transmitted quickly and in synchrony but at relatively low resolution. They may be
compressed in a store, so the video server may handle video compression and
decompression into different formats. Still pictures, however, must be maintained at a high
resolution, so it is appropriate to maintain them on a separate server.
The catalogue must be able to deal with a variety of queries and provide links into the web information system that includes data about the film and video clip, and an e-commerce
system that supports the sale of film and video clips. The client program is simply an
integrated user interface, constructed using a web browser, to these services.
However, changes to existing clients and servers may be required to gain the full benefits of
integrating a new server. There may be no shared data model across servers and sub-
systems may organise their data in different ways. This means that specific data models may
be established on each server to allow its performance to be optimised. Of course, if an
XML-based representation of data is used, it may be relatively simple to convert from one
schema to another. However, XML is an inefficient way to represent data, so performance
problems can arise if this is used.
[Figure: a client-server film and photograph library in which client programs access catalogue, video, picture and web servers over the Internet.]
4.2.3. Layered model
In the layered model (sometimes called the abstract machine model), the system is organised into a series of layers, each of which provides a set of services to the layer above it. An example of a layered model is the OSI reference model of network protocols. Another influential example was proposed by Buxton (Buxton, 1980), who suggested a three-layer model for an Ada Programming Support Environment (APSE). The APSE structure shows how a configuration management system might be integrated using this abstract machine approach.
4.3. Explain traceability in software systems and describe the processes
Traceability works by linking two or more work items in application development. This link
indicates a dependency between the items. Requirements and test cases are often traced.
Requirements are traced forward through other development artifacts, including test cases,
test runs, and issues. Requirements are traced backward to the source of the requirement,
such as a stakeholder or a regulatory compliance mandate.
The purpose of requirements traceability is to verify that requirements are met. It also
accelerates development. That’s because it’s easier to get visibility over your requirements.
Traceability is also an important process for analysis. If a requirement changes, then you can
use traceability to determine the impact of change.
Traceability in software testing is the ability to trace tests forward and backward through the
development lifecycle.
Test cases are traced forward to test runs. And test runs are traced forward to issues that
need to be fixed. Test cases and test runs can also be traced backward to requirements.
Traceability in software testing is often done using a traceability matrix.
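A small sketch of a traceability matrix as a plain mapping; the requirement and test-case identifiers are invented for illustration:

# Each requirement maps to the test cases that verify it.
trace = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-03"],
    "REQ-03": [],                 # not yet covered by any test
}

# Forward trace: which tests verify REQ-01?
print(trace["REQ-01"])            # ['TC-01', 'TC-02']

# Backward trace: which requirement does TC-03 come from?
print([req for req, tests in trace.items() if "TC-03" in tests])

# Impact/coverage check: requirements with no tests need attention.
print([req for req, tests in trace.items() if not tests])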
4.4. Modular decomposition
After an overall system organisation has been chosen, you need to decide on the approach to be used in decomposing sub-systems into modules. There is not a rigid distinction between system organisation and modular decomposition. However, the components in modules are usually smaller than sub-systems, which allows alternative decomposition styles to be used.
There is no clear distinction between sub-systems and modules, but it can be useful to think of them as follows:
1. A sub-system is a system in its own right whose operation does not depend on the
services provided by other sub-systems. Sub-systems are composed of modules and
have defined interfaces, which are used for communication with other sub-systems.
2. A module is normally a system component that provides one or more services to
other modules. It makes use of services provided by other modules. It is not normally
considered to be an independent system. Modules are usually composed from a
number of other simpler system components.
There are two main strategies that you can use when decomposing a sub-system into modules: object-oriented decomposition and function-oriented pipelining.
In the object-oriented approach, modules are objects with private state and defined
operations on that state. In the pipelining model, modules are functional transformations. In
both cases, modules may be implemented as sequential components or as processes.
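A minimal sketch of the pipelining style with three invented stages; each module is a functional transformation whose output feeds the next, much like a shell pipeline:

def read_records(text: str) -> list[str]:
    return text.splitlines()

def select_valid(records: list[str]) -> list[str]:
    return [r for r in records if r.strip()]

def summarise(records: list[str]) -> int:
    return len(records)

raw = "alpha\n\nbeta\ngamma\n"
# The pipeline: read -> filter -> summarise.
print(summarise(select_valid(read_records(raw))))   # 3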
4.5. Procedural design using structured methods
The foundations of component-level design for conventional software components were formed in the early 1960s and were solidified with the work of Edsger Dijkstra and his colleagues. In the late 1960s, Dijkstra and others proposed the use of a set of constrained
logical constructs from which any program could be formed. The constructs emphasized
“maintenance of functional domain”. That is, each construct had a predictable logical
structure, was entered at the top and exited at the bottom, enabling a reader to follow
procedural flow more easily.
The constructs are sequence, condition and repetition. Sequence implements processing
steps that are essential in the specification of any algorithm. Condition provides the facility for
selected processing based on some logical occurrence, and repetition allows for looping.
These three constructs are fundamental to structured programming, an important component-level design technique.
The structured constructs were proposed to limit the procedural design of software to a small
number of predictable operations. Complexity metrics indicate that the use of the structured
constructs reduces program complexity and thereby enhances readability, testability and
maintainability.
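The three constructs can be shown in a few lines of Python (the classify function is an invented example); each block has a single entry at the top and a single exit at the bottom:

def classify(numbers: list[int]) -> dict[str, int]:
    evens, odds = 0, 0            # sequence: one step after another
    for n in numbers:             # repetition: looping over the input
        if n % 2 == 0:            # condition: selected processing
            evens += 1
        else:
            odds += 1
    return {"even": evens, "odd": odds}

print(classify([1, 2, 3, 4, 5]))  # {'even': 2, 'odd': 3}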
4.6. Object oriented design process
4.6.1. Understand and define the context and the modes of use of the system
The first step of the software design process is to develop an understanding of the
relationships between the software that is being designed and its external environment.
4.6.4. Develop design model
There are two main types of design models in object oriented design:
1. Static models.
2. Dynamic models
The following illustrates three main models which are discussed under this section:
a) Sub system models that show logical grouping of the objects into coherent sub systems. These are represented using a form of class diagram where each sub system is shown as a package. Sub system models are static diagrams.
b) Sequence models that show the sequence of object interactions. These are
represented using a UML sequence diagram or a collaboration diagram. Sequence
models are dynamic models.
c) State machine models that show how individual objects change their state in
response to events. These are represented in the UML using state charts diagrams.
State machine models are dynamic models.
4.6.5. Specify the object interfaces
a) Object interfaces have to be specified so that the objects and other components can
be designed in parallel
b) Designers should avoid designing the interface representation but should hide this in
the object itself
c) Objects may have several interfaces which are viewpoints on the methods provided
d) The UML uses class diagrams for interface specification but Java may also be used
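A hedged sketch of point (b): the interface is declared separately from any representation, here with a Python abstract base class (the weather-station names are illustrative assumptions, not from the original text):

from abc import ABC, abstractmethod

class WeatherStation(ABC):
    # The interface says what the object offers, not how it stores data.
    @abstractmethod
    def report_weather(self) -> dict: ...

    @abstractmethod
    def calibrate(self, instrument: str) -> None: ...

class GroundStation(WeatherStation):
    def report_weather(self) -> dict:
        return {"temp_c": 18.5, "wind_kmh": 12.0}

    def calibrate(self, instrument: str) -> None:
        pass   # the representation and procedure stay hidden

station: WeatherStation = GroundStation()
print(station.report_weather())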
5. Managing Software Projects
5.1. Need for the proper management of software projects
The failure of many large software projects in the 1960s and early 1970s was the first indication of the difficulties of software management. Software was delivered late, was unreliable, cost several times the original estimates and often exhibited poor performance characteristics (Brooks, 1975). These projects did not fail because managers or programmers were incompetent. On the contrary, these large, challenging projects attracted people of above-average ability. The fault lay in the management approach that was used. Management techniques derived from other engineering disciplines were applied and these were ineffective for software development.
Software managers are responsible for planning and scheduling project development. They
supervise the work to ensure that it is carried out to the required standards. They monitor
progress to check that the development is on time and within budget. Good management
cannot guarantee project success. However, bad management usually results in project
failure. The software is delivered late, costs more than originally estimated and fails to meet
its requirements.
Software managers do the same kind of job as other engineering project managers.
However, software engineering is distinct from other types of engineering in a number of
ways which can make software management particularly difficult. Some of the differences
are:
1. The product is intangible - The manager of a shipbuilding or civil engineering project can see the product being developed. Software is intangible; it cannot be seen or touched, so software project managers cannot see progress directly and must rely on others to produce the reports needed to review it.
2. There are no standard software processes - We do not have a clear understanding of the relationships between the software process and product types. In engineering disciplines with a long history, the process is tried and tested. The engineering process for particular types of system, such as a bridge, is well understood. Our understanding of the software process has developed significantly in the past few years. However, we still cannot predict with certainty when a particular software process is likely to cause development problems.
3. Large software projects are often ‘one-off’ projects - Large software projects are usually different in some ways from previous projects. Managers, therefore, do not have a large body of previous experience which can be used to reduce uncertainty in plans. Consequently, it is more difficult to anticipate problems. Furthermore, rapid technological changes in computers and communications can make a manager’s experience obsolete.
Because of these problems, it is not surprising that some software projects are late, over-
budget and behind schedule. Software systems are often new and technically innovative.
Engineering projects (such as new transport systems) which are innovative often also have
schedule problems. Given the difficulties involved, it is perhaps remarkable that so many
software projects are delivered on time and to budget.
5.3. Management activities
It is impossible to write a standard job description for a software manager. The job varies
tremendously depending on the organization and on the software product being developed.
However, most managers take responsibility at some stage for some or all of the following
activities:
• Proposal writing
• Project planning and scheduling
• Project costing
• Project monitoring and reviews
• Personnel selection and evaluation
• Report writing and presentations
The first stage in a software project may involve writing a proposal to carry out that project.
The proposal describes the objectives of the project and how it will be carried out. It usually
includes cost and schedule estimates. It may justify why the project contract should be
awarded to a particular organization or team.
Project planning is concerned with identifying the activities, milestones and deliverables
produced by a project. A plan must then be drawn up to guide the development towards the
project goals.
Cost estimation is a related activity that is concerned with estimating the resources required
to accomplish the project plan.
Project monitoring is a continuing project activity. The manager must keep track of the
progress of the project and compare actual and planned progress and cost. Although most
organizations have formal mechanisms for monitoring, a skilled manager can often form a
clear picture of what is going on by informal discussion with project staff.
Project managers usually have to select people to work on their project. Ideally, skilled staff with appropriate experience will be available to work on the project. However, in most cases, managers have to settle for a less than ideal project team. The reasons for this are:
• The project budget may not cover the use of highly paid staff. Less experienced, less well-paid staff may have to be used.
• Staff with the appropriate experience may not be available either within the organization or externally. It may be impossible to recruit new staff to the project; within the organization, the best people may already be allocated to other projects.
• The organization may wish to develop the skills of its employees. Inexperienced
staff may be assigned to a project to learn and to gain experience.
The software manager has to work within these constraints when selecting project staff. However, problems are likely unless at least one project member has some experience of the type of system being developed. Without this experience, many simple mistakes are likely to be made.
The project manager is usually responsible for reporting on the project to both client and
contractor organizations. Project managers must write concise, coherent documents which
abstract critical information from detailed project reports. They must be able to present this
information during progress reviews. Consequently, the ability to communicate effectively
both orally and in writing is an essential skill for a project manager.
5.3.1. Project planning
As well as a project plan, managers may also have to draw up other types of plan. These are briefly described in the table below.
Plan - Description
Quality plan - Describes the quality procedures and standards that will be used in a project.
Validation plan - Describes the approach, resources and schedule used for system validation.
Configuration management plan - Describes the configuration management procedures and structures to be used.
Maintenance plan - Predicts the maintenance requirements of the system, maintenance cost and effort required.
Staff development plan - Describes how the skills and experience of the project team members will be developed.
Project Plan
The project plan sets out the resources available to the project, the work breakdown and a
schedule for carrying out the work. Most plans should include the following sections:
1. Introduction -This briefly describes the objectives of the project and sets out the
constraints (e.g. budget, time, etc.) which affect the project management.
2. Project organization- This describes the way in which the development team is
organized, the people involved and their roles in the team.
3. Risk analysis- This describes possible project risks, the likelihood of these risks
arising and the risk reduction strategies which are proposed.
4. Hardware and software resource requirements-This describes the hardware and
the support software required to carry out the development. If hardware has to be
bought, estimates of the prices and the delivery schedule should be included.
5. Work breakdown -This describes the breakdown of the project into activities and
identifies the milestones and deliverables associated with each activity.
The project plan should be regularly revised during the project. Some parts, such as the
project schedule, will change frequently; other parts will be more stable. A document
organization which allows for the straightforward replacement of sections should be used.
5.3.2. Estimating costs
Software cost estimation can be defined as a management activity which involves predicting the resources required for a software development process: how much effort will be needed, how much calendar time it will take, and what the total cost will be.
Project cost estimation and project scheduling are normally carried out together. The costs of
development are primarily the cost of the effort involved, so the effort computation is used in
both the cost and the schedule estimate. However you may have to do some cost estimation
before detailed schedules are drawn up. These initial estimates may be used to establish a
budget for the project or to set a price for the software for the customer.
There are three parameters involved in computing the total cost of a software development project: hardware and software costs, travel and training costs, and effort costs (the costs of paying software engineers).
For most projects, the dominant cost is the effort cost. Computers that are powerful enough for software development are relatively cheap. Although extensive travel may be needed when a project is developed at different sites, the travel costs are usually a small fraction of the effort costs. Furthermore, using electronic communications systems such as e-mail, shared websites and video conferencing can significantly reduce the travel required. Electronic conferencing also means that travelling time is reduced and time can be used more productively in software development.
Effort costs are not just the salaries of the software engineers who are involved in the project. Organizations compute effort costs in terms of overhead costs, where they take the total cost of running the organization and divide this by the number of productive staff. The total effort cost therefore includes organizational overheads as well as salaries.
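As a hedged sketch, the effort cost can be estimated by applying an overhead multiplier to salary costs; the multiplier of 2.0 below is an assumed rule of thumb, not a fixed standard:

def effort_cost(person_months: float,
                salary_per_month: float,
                overhead_multiplier: float = 2.0) -> float:
    # Total cost = effort x (salary plus organizational overheads).
    return person_months * salary_per_month * overhead_multiplier

# 60 person-months at 8,000 per month salary, doubled for overheads:
print(effort_cost(60, 8_000))   # 960000.0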
5.3.3. Project scheduling
Project scheduling involves separating the total work involved in a project into separate activities and judging the time required to complete these activities. Usually, some of these activities are carried out in parallel. Project schedulers must coordinate these parallel activities and organize the work so that the workforce is used effectively. Milestones are identifiable points in the schedule against which progress can be assessed; deliverables are project results that are delivered to the customer. Deliverables are usually milestones, but milestones need not be deliverables. Milestones may be internal project results that are used by the project manager to check project progress but which are not delivered to the customer.
To establish milestones, the software process must be broken down into basic activities with
associated outputs.
Bar charts and activity networks are graphical notations which are used to illustrate the
project schedule. Bar charts show who is responsible for each activity and when the activity
is scheduled to begin and end. Activity networks show the dependencies between the
different activities making up a project.
Consider a set of activities tabulated with their durations and interdependencies. Suppose task T3 is dependent on task T1. This means that T1 must be completed before T3 starts. For example, T1 might be the preparation of a component design and T3, the implementation of that design. Before implementation starts, the design should be complete.
Given dependency and estimated duration of activities, an activity network which shows
activity sequences may be generated. It shows which activities can be carried out in parallel
and which must be executed in sequence because of a dependency on an earlier activity.
Activities are represented as rectangles. Milestones and project deliverables are shown with rounded corners. Dates in this diagram show the start date of the activity and are written in British style, in which the day precedes the month. You should read the network from left to right and from top to bottom.
In the project management tool used to produce this chart, all activities must end in milestones. An activity may start when its preceding milestone (which may depend on several activities) has been reached. Therefore, in the third column of the accompanying table, the corresponding milestone is shown (e.g. M5), which is reached when the tasks in that column finish.
[Figure: activity network starting 4/7/99: tasks T1 to T11 (with durations such as T1, 8 days; T2, 15 days; T3, 15 days; T6, 5 days; T9, 15 days; T11, 7 days) are linked through milestones M1, M3, M4 and M6, with milestone dates including 14/7/99, 25/7/99, 4/8/99 and 25/8/99.]
Before progress can be made from one milestone to another, all paths leading to it must be
completed. For example, task T9, shown in the activity network, cannot be started until tasks
T3 and T6 are finished. The arrival at milestone M4 shows that these tasks have been
completed.
The minimum time required to finish the project can be estimated by considering the longest path in the activity graph (the critical path). In this case, it is 11 weeks of elapsed time or 55 working days. In the activity network, the critical path is shown as a sequence of emboldened boxes. The overall schedule of the project depends on the critical path: any slippage in the completion of a critical activity causes project delays.
Delays in activities which do not lie on the critical path, however, need not cause an overall
schedule slippage. So long as the delays do not extend these activities so much that the total
time exceeds the critical path, the project schedule will not be affected. For example, if T8 is
delayed, it may not affect the final completion date of the project as it does not lie on the
critical path. The project bar chart shows the extent of the possible delay as a shaded bar.
[Project bar chart: a weekly calendar from 4/7 to 19/9 runs across the top; the rows show Start, tasks T1 to T12 and milestones M1 to M8 through to Finish, with shaded bars after non-critical activities showing how far they can slip.]
Some of the activities in the above chart are followed by a shaded bar whose length is
computed by the scheduling tool. This shows that there is some flexibility in the completion
date of these activities. If an activity does not complete on time, the critical path will not be
affected until the end of the period marked by the shaded bar. Activities which lie on the
critical path have no margin of error and they can be identified because they have no
associated shaded bar.
An important task of a project manager is to anticipate risks which might affect project
schedule or the quality of the software being developed and to take action to avoid these
risks. The results of the risk analysis should be documented in the project plan, along with an
analysis of the consequences of a risk occurring. Identifying risks and drawing up plans to
minimize their effect on the project is called risk management (Hall, 1998; Ould, 1999).
Simplistically, you can think of a risk as the probability that some adverse circumstance will
actually occur. Risks may threaten the project, the software that is being developed or the
organization. These categories of risk can be defined as follows:
1. Project risks are risks which affect the project schedule or resources.
2. Product risks are risks which affect the quality or performance of the software being
developed.
3. Business risks are risks which affect the organization developing or procuring the
software.
Of course, this is not an exclusive classification. If an experienced programmer leaves a
project this can be a project risk because the delivery of the system will be delayed. It can
also be a product risk because a replacement may not be as experienced and so may make
programming errors. Finally, it can be a business risk because the programmer’s experience
is not available for bidding for future business.
Risk management is particularly important for software projects because of the inherent
uncertainties which most projects face. These stem from loosely defined requirements,
difficulties in estimating the time and resources required for software development,
dependence on individual skills and requirements changes due to changes in customer
needs.
The project manager should anticipate risks, understand the impact of these risks on the
project, the product and the business and take steps to avoid these risks. Contingency plans
may be drawn up so that, if the risks do occur, immediate recovery action is possible.
The process of risk management is illustrated in the diagram below. It involves several
stages: risk identification, risk analysis, risk planning and risk monitoring.
The risk management process, like all other project planning, is an iterative process which
continues throughout the project. Once an initial set of plans has been drawn up, the situation is
monitored. As more information about the risks becomes available, they have to be re-
analyzed and new priorities established. The risk avoidance and contingency plans may be
modified as new risk information emerges.
The results of the risk management process should be documented in a risk management
plan. This should include a discussion of the risks faced by the project, an analysis of these
risks and the plans which are required to manage these risks. When appropriate, it may also
include some results of the risk management, i.e. specific contingency plans to be activated if
the risk occurs.
Risk identification is the first stage of risk management. It is concerned with discovering
possible risks to the project. In principle, these should not be assessed or prioritized at this
stage although, in practice, risks with very minor consequence or very low probability risks
are not usually considered.
Risk identification may be carried out as a team process using a brainstorming approach or
may simply be based on a manager’s experience. To help the process, a checklist of
different types of risk may be used. These types include:
1. Technology risks- Risks which derive from the software or hardware technologies
which are being used as part of the system being developed.
2. People risks- Risks which are associated with the people in the development team.
3. Organizational risks-Risks which derive from the organizational environment where
the software is being developed.
4. Tools risks- Risks which derive from the CASE tools and other support software
used to develop the system.
5. Requirements risks- Risks which derive from changes to the customer requirements
and the process of managing the requirements change.
6. Estimation risks -Risks which derive from the management estimates of the system
characteristics and the resources required to build the system.
During the risk analysis process, each identified risk is considered in turn and a judgment
made about the probability and the seriousness of the risk. There is no easy way to do this; it
relies on the judgment and experience of the project manager. These should not generally be
precise numeric assessments but should be based around a number of bands:
1. The probability of the risk might be assessed as very low (<10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (>75%).
2. The effects of the risk might be assessed as catastrophic, serious, tolerable or
insignificant.
The results of this analysis process should then be tabulated with the table ordered
according to seriousness of the risk.
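To show what such a tabulation might look like, here is a small Python sketch; the example risks, the band names and the ordering rule (most serious effect first, then most probable) are invented for illustration only.

# Order analysed risks by seriousness of effect, then by probability band.
EFFECT_RANK = {"catastrophic": 0, "serious": 1, "tolerable": 2, "insignificant": 3}
PROB_RANK = {"very high": 0, "high": 1, "moderate": 2, "low": 3, "very low": 4}

risks = [  # (risk description, probability band, effect band) - invented examples
    ("Key programmer leaves before system delivery", "moderate", "serious"),
    ("Customer requirements change, requiring major rework", "high", "serious"),
    ("CASE tools cannot be integrated", "low", "tolerable"),
]

for name, prob, effect in sorted(risks, key=lambda r: (EFFECT_RANK[r[2]], PROB_RANK[r[1]])):
    print(f"{name:55} {prob:10} {effect}")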
The risk planning process considers each of the key risks which have been identified and
identifies strategies to manage the risk. Again, there is no simple process which can be
followed to establish risk management plans. It relies on the judgment and experience of the
project manager.
These strategies fall into three categories:
1. Avoidance strategies - Following these strategies means that the probability that the
risk will arise will be reduced.
2. Minimization strategies - Following these strategies means that the impact of the risk
will be reduced.
3. Contingency plans - Following these strategies means that, if the worst happens, you
are prepared for it and have a strategy in place to deal with it.
Risk monitoring involves regularly assessing each of the identified risks to decide whether or
not that risk is becoming more or less probable and whether the effects of the risk have
changed. Of course, this cannot usually be observed directly, so you have to look at other
factors which give you clues about the risk probability and effects. These factors are
obviously dependent on the types of risk. Risk monitoring should be a continuous process
and, at every management progress review, each of the key risks should be considered
separately and discussed at the meeting.
Unfortunately, poor leadership is all too common in the software industry. Managers fail to
take into account the limitations of individuals and impose unrealistic deadlines on project
teams. They equate management with meetings yet fail to allow people in these meetings to
contribute to the project. They may accept new requirements without proper analysis of what
this means for the project team. They sometimes see their role as one of exploiting their staff
rather than working with them to identify how their work can contribute to both organisational
and personal goals.
4. Honesty - As a manager, you should always be honest about what is going well and
what is going badly in the team. You should also be honest about your own level of
technical knowledge and be willing to defer to staff with more knowledge when
necessary. If you are less than honest, you will eventually be found out and will lose
the respect of the group.
During and after the implementation process, the program being developed must be checked
to ensure that it meets specification and delivers the functionality expected by the people
paying for the software. Verification and Validation (V & V) is the name given to these
checking and analysis processes. Verification and Validation starts with requirements
reviews and continues through design reviews and code inspections to product testing.
Verification and validation are not the same thing, although they are often confused. Boehm
concisely expressed the difference between them as:
‘Validation: Are we building the right product?’
‘Verification: Are we building the product right?’
These definitions tell us that the role of verification involves checking that the software conforms
to its specification. You should check that it meets its specified functional and non-functional
requirements. Validation however is a more general process. The aim of validation is to
ensure that the software system meets the customer’s expectations. It goes beyond checking
that the system conforms to its specification to showing that the software does what the
customer expects it to do.
The ultimate goal of the verification and validation process is to establish confidence that the
software system is fit for purpose. This means that the system must be good enough for its
intended use. The level of required confidence depends on the system’s purpose, the
expectations of the system users and the current marketing environment for the system.
1. Software functions: the level of confidence required is dependent on how critical the
software is to an organization.
2. User expectation: It is a sad reflection on the software industry that many users
have low expectations of their software and are not surprised when it fails during use.
However, user tolerance of software failures has been decreasing in recent years.
3. Marketing environment: when a system is marketed, the sellers of the system must
take into account competing products, the price that customers are willing to pay for
a system and the required schedule for delivering that system. Where a company has
few competitors, it may decide to release a program before it has been fully tested
and debugged because it wants to be first into the market.
Within the V & V process, there are two complementary approaches to system checking and
analysis:
1. Software inspections or peer reviews - This is the process of analyzing and checking
the system representation such as the requirements document, design diagrams and the
program source code. They may apply at all stages of the process. Inspection may be
supplemented by some automatic analysis of the source text of a system or associated
documents. Software inspection and automated analysis are static V & V techniques, as they
do not require the system to be executed.
2. Software testing - This involves executing an implementation of the software with test
data and examining its outputs and operational behaviour. Testing is a dynamic V & V
technique, as it requires an executable version of the system.
[Diagram: software inspections (static) apply to the requirements specification, the design and the program; program testing (dynamic) applies to a prototype or executable program.]
The diagram above shows that software inspections and testing play complementary roles in
the software process. The arrows indicate the stages in the process where the techniques
may be used. Therefore you can use software inspections at all stages of the software
process. Starting with the requirements, any readable representations of the software can be
inspected. Requirements and design reviews are the main techniques used for error
detection in the specification and design.
You can only test a system when a prototype or an executable version of the program is
available. An advantage of incremental development is that a testable version of the program
is available at a fairly early stage in the development process. Functionality can be tested as
it is added to the system so you don’t have to have a complete implementation before testing
begins.
Inspection techniques include program inspections, automated source code analysis and
formal verification. However, static techniques can only check the correspondence between
a program and its specification (verification), they cannot demonstrate that the software is
operationally useful. You also cannot use static techniques to check emergent properties of
the software such as its performance and reliability.
Although software inspections are now widely used, program testing will always be the main
software verification and validation technique. Testing involves exercising the program using
data like the real data processed by the program. You discover program defects or
inadequacies by examining the outputs of the program and looking for anomalies. There are
two distinct types of testing that may be used at different stages in the software process:
1. Validation Testing – is intended to show that the software is what the customer
wants, that is, it meets its requirements. As part of validation testing, you may use
statistical testing to test the program’s performance and reliability and to check how it
works under operational conditions.
2. Defect Testing – is intended to reveal defects in the system rather than to simulate
its operational use. The goal of defect testing is to find inconsistencies between a
program and its specification.
Of course there is no hard and fast boundary between these approaches to testing. During
validation testing, you will find defects in the system; during defect testing some of the tests
will show that the program meets its requirements.
The processes of V & V and debugging are normally interleaved. As you discover faults in the
program that you are testing, you have to change the program to correct these faults.
However, testing (or, more generally, verification and validation) and debugging have different
goals: testing establishes the existence of defects, whereas debugging is concerned with
locating and correcting those defects.
[Diagram: the debugging process: test results and the specification are used to locate the error, design an error repair, repair the error and re-test the program.]
✓ Skilled debuggers look for patterns in the test output where the defect is
exhibited and use knowledge of the type of defect, the output pattern, the
programming language, and the programming process to locate the defect.
✓ Locating a fault is not always an easy task, since the fault need not be
close to the point where the program failed. Manual tracing of the program,
simulating its execution, may be required.
✓ Interactive debugging tools are generally part of a suite of language support tools
that are integrated with the compilation system.
✓ Users can often control execution by ‘stepping’ through the program statement
by statement. After each statement has been executed, the values of variables
can be examined and potential errors discovered.
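As a concrete illustration of this kind of stepping (a minimal sketch; the function and data are invented), Python's built-in breakpoint() call drops execution into the standard pdb debugger, where 'n' executes the next statement and 'p <name>' prints a variable's value:

def average(values):
    total = 0
    for v in values:
        breakpoint()        # enters pdb; step with 'n', inspect with 'p total'
        total += v
    return total / len(values)   # a latent defect: fails for an empty list

if __name__ == "__main__":
    print(average([4, 8, 15]))   # step through and watch 'total' grow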
After a defect in the program has been discovered, it must be corrected and the
system must be revalidated. This may involve re-inspecting the program or repeating
previous test runs. This process is termed regression testing.
The V & V process should be carefully planned and should start early in the development
process. V & V planning should strike a balance between static and dynamic
approaches to verification and validation.
Further, test planning is concerned with setting out standards for the testing process
rather than describing individual product tests. Test plans are not just management documents.
6.1. Explain how to use project estimating and project planning tools.
The challenge with estimating is that it involves uncertainty. The following factors contribute
to this uncertainty:
• Experience with Similar Projects: The lesser the experience with similar projects, the
greater the uncertainty.
• Planning Horizon: The longer the planning horizon, the greater the uncertainty.
• Project Duration: The longer the project, the greater the uncertainty.
• People: The quantity of people and their skill will be a huge factor in estimating their
costs.
There exist several tools and techniques that help with cost estimation.
Expert Judgement - uses the knowledge and experience of experts to estimate the cost of
the project. This technique uses unique factors specific to the project.
Analogous Estimating - uses historical data from similar projects as a basis for the cost
estimate. The estimate can be adjusted for known differences between the projects. This
type of estimate is less accurate than other methods.
Parametric Estimating – makes use of statistical modeling to develop a cost estimate.
Bottom-Up Estimating - uses the estimates of individual work packages which are then
summarized to determine an overall cost estimate for the project. This type of estimate is
more accurate than the others.
Three-Point Estimates – comes hand in hand with the Program Evaluation and Review
Technique (PERT).
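As a brief illustration (this uses the standard PERT expressions, which are not given in the text above): a three-point estimate combines an optimistic value O, a most likely value M and a pessimistic value P into an expected estimate

E = (O + 4M + P) / 6

For example, with O = 10, M = 14 and P = 24 person-days, E = (10 + 56 + 24) / 6 = 15 person-days.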
Reserve Analysis - used to account for cost uncertainty.
Cost of Quality - includes the money that is spent during the project to avoid failures and
money spent during and after the project due to failures.
Project Management Estimating Software - includes the cost estimating software
applications, spreadsheets, simulation applications and statistical software tools. This is
useful for looking at cost estimation alternatives.
Vendor Bid Analysis - used to estimate what the project should cost by comparing the bids of
multiple vendors.
Functional or black box testing is an approach to testing where the tests are derived from
the program or component specification. The system is a black box whose behaviour can only
be determined by studying its inputs and outputs.
The diagram given below illustrates the black box testing strategy.
[Diagram: black-box testing: input test data, including the subset Ie of inputs causing anomalous behaviour, is presented to the system, and the corresponding outputs are examined for evidence of defects.]
The tester presents inputs to the component or the system and examines the corresponding
outputs. If the outputs are not as predicted then the test has successfully detected a problem
with the software.
The main problem with this approach is selecting input test data that is likely to reveal
defects. This difficulty can be reduced by using equivalence partitioning.
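As a minimal sketch of equivalence partitioning (the component and its valid range of 1 to 100 are invented for illustration), the input space is divided into partitions and one value is chosen from each, plus the boundary values:

# Hypothetical component: accepts integers from 1 to 100 inclusive.
# Partitions: below range, within range, above range.
def in_range(n: int) -> bool:
    return 1 <= n <= 100

# One value per partition plus both boundaries gives a small, effective test set.
test_values = [0, 1, 50, 100, 101]
for v in test_values:
    print(v, in_range(v))   # expect: False, True, True, True, False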
The costs and complexity of the tasks involved in acquiring, operating and maintaining a
system, collectively known as the total cost of ownership (TCO) of a system, can push
managers to think of technology as a cost. Understanding this concept is critical when
making technology investment decisions.
6.4. Analyze and explain the software life cycle cost modelling
In modern software development practice, it is a necessity to know the cost and time
required for the software development before building software projects. One of the efficient
cost estimation models applied to many software projects is called “Constructive Cost Model
(COCOMO)”.
This is a procedural software cost estimation model that was proposed by Barry W. Boehm in
1981. This cost estimation model is extensively used in predicting the effort, development
time, average team size and effort required to develop a software project.
Software projects under the COCOMO model are categorized into three classes:
1. Organic
Suits a small software team since it has a generally stable development environment. Here,
the problem is well understood and has been solved in the past.
2. Semi-detached
Lies between organic and embedded: the team has a mixture of experienced and
inexperienced staff, and the project contains a mix of familiar and unfamiliar elements.
3. Embedded
This contains projects that operate under tight constraints and requirements. The developer
requires high experience and has to be creative to develop complex models.
Basic COCOMO Model - Used for rough calculations, which limits its accuracy in software
estimation. It is based on lines of source code together with constant values determined by
the software project type, rather than the other factors that have a major influence on the
software development process.
Intermediate COCOMO Model - An extension of the Basic COCOMO model which takes a
set of cost drivers into account in order to improve the accuracy of the cost estimate.
Detailed COCOMO Model - Incorporates all the qualities of both the Basic and Intermediate
COCOMO strategies, applied at each stage of the software engineering process.
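The Basic COCOMO equations can be sketched directly. In the sketch below (Python), the coefficients are the published Basic COCOMO constants from Boehm's 1981 model for the three project classes; the 32 KLOC project size is an invented example:

# Basic COCOMO: effort = a * KLOC^b (person-months), time = c * effort^d (months).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b     # estimated effort in person-months
    time = c * effort ** d     # estimated development time in months
    staff = effort / time      # implied average team size
    return effort, time, staff

effort, time, staff = basic_cocomo(32, "semi-detached")
print(f"{effort:.1f} person-months over {time:.1f} months, about {staff:.1f} people")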
[Diagram: structural (white-box) testing: the tester derives test data from an analysis of the component's code, and the tests' outputs are checked against it.]
Structural testing, illustrated in the diagram above, is an approach also termed white box
testing, glass box testing or clear box testing. This approach is used for relatively small
programs such as subroutines or the operations associated with objects.
Structural testing or white box testing focuses on three main areas of a program.
• Sequence or statement coverage – focuses on each and every line of code in the
program.
• Selection or condition coverage – emphasizes each condition or selection in a
program; two main classes of test values are therefore available, values within
the range and values out of the range.
E.g.: if the condition is x <= 4, then values within the range would be 3 and 4,
and values out of the range would be 5, 7, etc.
• Loop coverage – focuses on each iteration and has three main equivalence
classes: skip, one pass and more than one pass.
E.g.: for a loop controlled by while (y >= 3), where y decreases inside the loop,
the value y = 2 will skip the loop, y = 3 will execute the loop only once, and any
value greater than 3 will execute the loop several times.
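The sketch below (Python; the function and test values are invented for illustration) shows test data chosen to exercise each of the coverage classes above:

def classify(x, ys):
    # Condition coverage for "x <= 4": test a value within the range (4)
    # and a value outside it (5).
    label = "small" if x <= 4 else "large"
    # Loop coverage: ys = [] skips the loop, [3] gives one pass,
    # [1, 2, 3] gives more than one pass.
    total = 0
    for y in ys:
        total += y
    return label, total

assert classify(4, []) == ("small", 0)          # boundary value, loop skipped
assert classify(5, [3]) == ("large", 3)         # out of range, one pass
assert classify(2, [1, 2, 3]) == ("small", 6)   # in range, several passes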
As indicated above, testing each section of the program in this way can be further defined
as path testing.
Path testing is therefore a technique used under structural testing. However, path
testing does not usually concentrate on loop coverage, since covering every loop
path is rather cumbersome.
The following indicates the testing levels used by software engineers for testing conventional
and object oriented software:
Unit testing focuses on the verification of the smallest unit of software design, that is, the
software component or module. Using the component-level design description as a guide,
important control paths are tested to uncover errors within the boundary of the module.
This testing focuses on the internal processing logic and the internal data structures. This
type of testing can be carried out on multiple components in parallel.
The commonest approach is white box testing, but black box testing can also be applied
if the component is too large or less critical.
Unit testing in an object-oriented context changes significantly. The modules are the operations
defined in the OO classes, which must be tested for each subclass as they vary
through redefinition. Alternatively, OO class testing is also a form of unit testing, but it is much
broader than testing modules, since it focuses on testing both the functionality (operations) and
the behaviour (all states of objects). As the scope is wider, the preferred approach is
black-box testing.
Integration testing is the systematic technique for constructing software architecture while at
the same time conducting tests to uncover errors associated with interfacing. The objective is
to take the tested units and then build a program structure that has been given by design.
There are several strategies that can be used for system integration and testing, as outlined
below.
1. Top-down Integration - Top-down testing tests the high levels of a system before
testing its detailed components. The program is represented as a single abstract component,
with sub-components represented as stubs. Stubs have the same
interface as the component but very limited functionality. After the top-level components have
been implemented and tested, the lower-level components are implemented and tested in the
same way until the lowest-level components are reached. Top-down testing is generally used
with top-down development, so that system components are tested as soon as they are coded.
2. Bottom-up Integration - Bottom-up testing is the converse of top-down testing: the
lowest-level components are integrated and tested first, using test drivers that simulate the
environment of the components under test, and testing then works up the hierarchy until the
top level is reached.
[Diagrams: top-down integration, where the Level 1 components are tested first with lower levels replaced by stubs; bottom-up integration, where the Level N components are tested first through test drivers that are later replaced by the Level N-1 components.]
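A minimal sketch of the stub idea follows (Python; the component names are invented): the top-level component is tested against a stub that shares the real component's interface but returns a canned result:

def price_lookup_stub(item_id):
    # Stub: same interface as the real pricing component, fixed answer.
    return 100

def checkout(items, price_lookup=price_lookup_stub):
    # Top-level component under test; it only depends on the interface.
    return sum(price_lookup(i) for i in items)

# The top level can be tested before the real pricing component exists:
assert checkout(["a", "b"]) == 200
# Later, the stub is replaced by the real, separately tested implementation.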
Many components in a system are not simple functions or objects but are composite
components that are made up of several interacting objects. Testing these composite
components then is primarily concerned with testing that the component interface behaves
according to its specification.
[Diagram: interface testing: test cases are applied to the interface of a composite component made up of interacting objects (e.g. A and B).]
Interface testing is particularly important for object oriented and component based
development. Objects and components are defined by their interfaces and may be reused in
combination with other components in different systems. Interface errors in the composite
component cannot be detected by testing the individual objects or components. Errors in the
composite component may arise because of interactions between its parts.
There are different types of interfaces between program components and consequently
different types of interface errors that can occur such as:
1. Parameter interfaces
2. Shared memory interfaces
3. Procedural interfaces
4. Message passing interfaces
Interface errors are one of the most common forms of error in complex systems. These
errors fall into the following three categories:
1. Interface misuse
2. Interface misunderstanding
3. Timing errors
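As a small, invented illustration of the first category, interface misuse, in its parameter-interface form: a caller passes syntactically valid arguments in the wrong order, so each component works correctly in isolation and only a test of the combination reveals the defect:

def schedule_job(priority: int, name: str) -> str:
    # Component interface: (priority, name), in that order.
    return f"job {name} queued at priority {priority}"

def submit_all(jobs):
    # Interface misuse: the caller supplies (name, priority) instead.
    # Each component is individually correct; the combination is not.
    return [schedule_job(name, prio) for name, prio in jobs]

print(submit_all([("backup", 1)]))   # runs, but the fields are the wrong way round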
Software is only one element of a much larger system. Therefore, ultimately, when the
software is incorporated with other elements such as hardware, databases and people, a
series of tests is conducted. These include the following system tests:
1. Recovery Testing – forces the system to fail in a variety of ways and verifies that the
recovery is properly performed.
2. Security Testing – verifies that the protection mechanisms built into the system
successfully prevent unauthorized entry to the system, that is, that basic security
features are met.
3. Stress Testing – executes the system in a manner that demands resources in
abnormal quantity, frequency or volume, to check how it behaves under extreme load.
4. Performance Testing – designed to test the run-time performance of the system.
Validation testing begins at the end of integration testing when the full package of software is
produced. Its aim is to ensure that the software satisfies the functional and performance
requirements of the user. That is to check compliance to the validation criteria given in the
specification. There are two different approaches to achieve this:
1. Alpha Testing – software is tested in the developer’s site by the end users. Mainly
used for testing bespoke software. This is achieved in a developer controlled
environment with user participation.
2. Beta Testing – software is tested at the end user sites, therefore software is run in
environment that is not controlled by the developer. All errors experienced are
recorded by the user and reported to the developer at regular intervals. Thereafter
these are corrected by the developer to plan out a final release of the software to the
entire client base. This approach is mainly used for generic software.
Each time a new component is added, the control structure changes and new
interactions arise. The aim of regression testing is to repeat a subset of previous tests to
ensure that changes have not introduced any new errors.
Test case design is a part of system and component testing where you design the test cases
(inputs and predicted outputs) that test the system. The goal of the test case design is to
create a set of test cases that are effective in discovering program defects and showing that
the system meets its requirements.
To design a test case, you select a feature of the system or component that you are testing.
You then select a set of inputs that execute that feature, document the expected outputs or
output ranges and where possible design an automated check that tests that the actual and
expected outputs are the same.
There are various approaches that you can take to test case design such as:
1. Requirements based testing – where test cases are designed to test the system
requirements. This is mostly used at the system testing stage as system
requirements are usually implemented by several components. For each
requirement, you identify test cases that can demonstrate that the system meets that
requirement.
2. Partition testing – where you identify input and output partitions and design tests so
that the system executes inputs from all partitions and generates output in all
partitions. Partitions are groups of data that have common characteristics such as all
negative numbers, all names less than 30 characters, all events arising from
choosing items on a menu, and so on.
3. Structural testing – where you use knowledge of the program’s structure to design
tests that exercise all parts of the program. Essentially, when testing a program, you
should try to execute each statement at least once. Structural testing helps identify
test cases that can make this possible.
In general, when designing test cases, you should start with the highest level tests
from the requirements then progressively add more detailed tests using partition and
structural testing.
7. Software Maintenance
It is impossible to produce a system of any size which does not need to be changed. Once
software is put into use, new requirements emerge and existing requirements change as the
business using that software changes.
Parts of the software may have to be modified to correct errors that are found in operation,
improve its performance or other non-functional characteristics.
All of this means that, after delivery, software systems always evolve in response to demand
for change. There are a number of different strategies for software change.
• Software maintenance
• Architectural transformation
• Software re-engineering
Software change
Software change is inevitable due to the following factors:
• New requirements emerge when the software is used
• The business environment changes
• Errors must be repaired
• New equipment must be accommodated
• The performance or reliability may have to be improved
A key problem for organisations is implementing and managing change to their legacy
systems.
Software change strategies
• Software maintenance - Changes are made in response to changed requirements
but the fundamental software structure is stable
• Architectural transformation - The architecture of the system is modified generally
from a centralised architecture to a distributed architecture
• Software re-engineering - No new functionality is added to the system but it is
restructured and reorganised to facilitate future changes
These strategies may be applied separately or together.
7.1. Re-engineering
Software re-engineering
The diagram in the following page illustrates the re-engineering process. The input to the
process is a legacy program and the output is a structured, modularised version of the same
program. During program re-engineering, the data for the system may also be re-engineered.
The activities in this re-engineering process are:
1. Source code translation - The program is converted from an old programming
language to a more modern version of the same language or to a different language.
2. Reverse engineering - The program is analysed and information extracted from it.
This helps to document its organisation and functionality.
3. Program structure improvement - The control structure of the program is analysed
and modified to make it easier to read and understand.
4. Program modularisation - Related parts of the program are grouped together and,
where appropriate, redundancy is removed. In some cases, this stage may involve
architectural transformation.
5. Data re-engineering - The data processed by the program is changed to reflect
changes made to the program.
[Diagram: the re-engineering process: the original program passes through source code translation, reverse engineering, program structure improvement and program modularisation to produce a structured, re-engineered program, while the original data is re-engineered alongside it.]
System re-engineering may not necessarily require all of the steps given in the above
diagram. Source code translation may not be needed if the programming language used to
develop the system is still supported by the compiler supplier. If the re-engineering relies
completely on automated tools, then recovering documentation through reverse engineering
may be unnecessary. Data re-engineering is only required if the data structures in the
program change during system re-engineering. However, software re-engineering always
involves some program re-structuring.
To make the re-engineered system interoperate with the new software, you may have to
develop adaptor components. These hide the original interfaces of the software system and
present new, better-structured interfaces that can be used by other components. This
process of legacy system wrapping is an important technique for developing large-scale
reusable components.
The costs of re-engineering obviously depend on the extent of the work that is carried out.
Apart from the extent of the re-engineering, the principal factors that affect re-engineering
costs are:
1. The quality of the software to be re-engineered - The lower the quality of the software
and its associated documentation (if any), the higher the re-engineering costs.
2. The tool support available for re-engineering - It is not normally cost-effective to re-
engineer a software system unless you can use CASE tools to automate most of the
program changes.
3. The extent of data conversion required - If re-engineering requires large volumes of
data to be converted, the process cost increases significantly.
4. The availability of expert staff - If the staff responsible for maintaining the system
cannot be involved in the re-engineering process, the costs will increase because
system re-engineers will have to spend a great deal of time understanding the
system.
The main disadvantage of software re-engineering is that there are practical limits to the
extent that a system can be improved by re-engineering. It isn't possible, for example, to
convert a system written using a functional approach to an object-oriented system. Major
architectural changes or radical re-organisation of the system data management cannot be
carried out automatically, so they incur high additional costs. Although re-engineering can
improve maintainability, the re-engineered system will probably not be as maintainable as a
new system developed using modern software engineering methods.
7.2.1. Importance of CM
One practice that illustrates the importance of configuration management is the daily build,
in which a complete version of the system is built and tested every day:
1. The development organisation sets a delivery time (say 2 p.m.) for system
components. If developers have new versions of the components that they are
writing, they must deliver them by that time. Components may be incomplete but
should provide some basic functionality that can be tested.
2. A new version of the system is built from these components by compiling and linking
them to form a complete system.
3. This system is then delivered to the testing team, which carries out a set of
predefined system tests. At the same time, the developers are still working on their
components, adding to the functionality and repairing faults discovered in previous
tests.
4. Faults that are discovered during system testing are documented and returned to the
system developers. They repair these faults in a subsequent version of the
component.
The advantages of using daily builds of software are that the chances of finding problems
stemming from component interactions early in the process are increased. Furthermore, daily
building encourages thorough unit testing of components.
Psychologically, developers are put under pressure not to 'break the build', that is, deliver
versions of components that cause the whole system to fail. They are therefore reluctant to
deliver new component versions that have not been properly tested. Less system testing
time is spent discovering and coping with software faults that should have been found during
unit testing.
The successful use of daily builds requires a very stringent change management process to
keep track of the problems that have been discovered and repaired. It also leads to a very
large number of system and component versions that must be managed. Good configuration
management is therefore essential for this approach to be successful.
During the configuration management planning process, you decide exactly which items are to be
controlled. Documents, or groups of related documents, that are under configuration control are
known as formal documents or configuration items. Project plans, specifications, designs,
programs and test data suites are normally maintained as configuration items. However, all
documents which may be necessary for future system maintenance should be controlled.
The configuration database should be integrated with the version management system that is used to store and manage the
formal project documents. This approach, supported by some integrated CASE tools, makes
it possible to link changes directly with the documents and components affected by the
change.
A configuration database must be able to provide answers to a variety of queries about
system configurations.
7.2.3. Versioning
Version and release management are the processes of identifying and keeping track of
different versions and releases of a system. A system version is an instance of a system that
differs, in some way, from other instances.
• New versions of the system may have different functionality, performance or may
repair system faults. Some versions may be functionally equivalent but designed for
different hardware or software configurations.
• If there are only small differences between versions, one of these is sometimes called a
variant of the other.
Version Identification
Procedures for version management should define an unambiguous way of identifying each
component version. There are three techniques which may be used for component
identification:
• Version numbering- The component is given an explicit and unique version number.
This is the most commonly used identification scheme.
• Attribute-based identification - Each component has a name and an associated set
of attributes which differ for each version of the component. Components are therefore
identified by the combination of name and attribute set.
• Change-oriented identification- Each system is named as in attribute-based
identification but is also associated with one or more change requests. The system
version is identified by associating the name with the changes implemented in the
component.
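A minimal sketch of version numbering, the first scheme above, follows (Python; the major.minor.patch convention shown is one common choice, not something prescribed by the text):

from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Version:
    # Field order gives lexicographic comparison: major, then minor, then patch.
    major: int
    minor: int
    patch: int

    def __str__(self):
        return f"{self.major}.{self.minor}.{self.patch}"

v1 = Version(1, 4, 2)
v2 = Version(1, 5, 0)
assert v2 > v1                 # ordering lets tools pick the latest version
print(v1, "->", v2)            # 1.4.2 -> 1.5.0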
A system release is a version of the system that is distributed to customers. System release
managers are responsible for deciding when the system can be released to customers.
Release management is the process of creating the release and the distribution media and
documenting the release to ensure that it may be re-created exactly as distributed if this is
necessary. A system release is not just the executable code of the system. The release may also include:
• An installation program - used to help install the system on target hardware.
• Electronic and paper documentation- describing the system
• Packaging and associated publicity - which have been designed for the release.
SQA should aim to develop a “Quality Culture” where quality is seen as everyone’s
responsibility and not merely defining the standards and procedures.
Quality simply means that a product should meet its specifications. This is considered
problematical for software systems for several reasons: there is a tension between customer
quality requirements (such as efficiency and reliability) and developer quality requirements
(such as maintainability and reusability); some quality requirements are difficult to specify in
an unambiguous way; and software specifications are usually incomplete and often
inconsistent.
Traditional view - Quality is about perfection/ bug free code. Generally associated with
testing at the end of development. Testing can’t introduce quality to a product; it can only
reduce the number of defects in the product.
Modern view (ISO9000) - Good quality is not perfection but fit for the purpose. Build the right
product in the right way. Do not over engineer since it becomes too expensive and do not
under engineer since it may not fit for purpose.
The following are the activities that are carried out in quality management: quality assurance
(establishing organizational standards and procedures), quality planning (selecting and
adapting applicable standards for a particular project) and quality control (ensuring that the
development team follows the defined standards and procedures).
These standards may be embedded in procedures or processes which are applied during
development. Processes may be supported by tools that embed knowledge of the quality
standards.
There are two types of standards that may be established as a part of quality assurance:
• Product standards: these are standard that apply to the software product being
developed. They include standards such as document standards, coding standards
and user interface standards. Product quality attributes include reusability, usability,
portability, maintainability, etc.
• Process standards: these are standards that define the processes which should be
followed during software development. They may include definitions of specification,
design and validation process and a description of the documents which must be
generated in the course of these processes.
The document standards in a software project are particularly important as documents are
the only tangible way of representing the software and the software process.
There are three types of documentation standards
1. Documentation process standards - these standards define the process which
should be followed for document production.
2. Document standards- these are standards that govern the structure and
presentation of documents.
3. Document interchange standards- these are standards that ensure that all
electronic copies of documents are compatible
Quality planning should begin at an early stage in the software process. A quality plan should
set out the desired product qualities. It should select those organizational
standards that are appropriate to the particular product and development process. New
standards may have to be defined if the project uses new methods and tools. Humphrey, in
his classic book on software management, suggests an outline structure for a quality plan.
This includes product introduction, product plans, process descriptions, quality goals, and
risks and risk management.
Quality control involves overseeing the software development process to ensure that quality
assurance procedures and standards are being followed. The deliverables from the software
process are checked against the defined project standards in the quality control process. The
quality control process has its own set of procedures and reports that must be used during
software development. These procedures should be straightforward and easily understood by
the engineers developing the software. There are two approaches to quality control, as
described below:
1. Quality Reviews - Reviews are the most widely used method of validating the quality
process or product. They involve a group of people examining part or all of a software
process, system or its associated documentation to discover potential problems. The
conclusions of the review are formally recorded and passed to the author or whoever
is responsible for correcting the discovered problems. The following types of review
may be held; a document "signed off" at a review signifies that progress to the next
development stage has been approved by management.
a. Configuration reviews
b. Inspections for defect removal (Code reviews, Design reviews)
c. Reviews for progress assessment (Progress meetings)
d. Quality reviews (Product and Standards)
e. Test review
2. Automated consistency checking using CASE tools – These tools ensure that all
omissions (ignored parts) are included and all commissions (incorrectly represented
parts) are corrected. These integrated tools have this feature which can be used to
ensure that the consistency is maintained when transforming a specification into the
program code. These tools use a central data dictionary which can be used to
validate all abstractions of the system. The data dictionary enables any omissions or
commissions to be detected. This feature is a form of verification that improves the
overall quality.
Software measurement is concerned with deriving a numeric value for some attribute of a
software product or software process. Unless we capture metrics about the applications we
produce and the process by which they are produced, we cannot quantify any improvement in
quality or identify the areas that require further improvement. By comparing metrics to each
other and to standards which apply across an organization, it is possible to draw conclusions
about the quality of the software or the software process.
A software measurement process may be part of a quality control process. Data collected
during this process should be maintained as an organizational resource. Once a
measurement database has been established, comparisons across projects become
possible.
[Diagram: the measurement process: choose measurements to be made, select components to be assessed, measure component characteristics, identify anomalous measurements, and analyse anomalous components.]
Product metrics are concerned with characteristics of the software itself. Product metrics fall
into the following two classes:
1. Dynamic metrics which are collected by measurements made of a program in
execution
2. Static metrics which are collected by measurements made of the system
representations
These different types of metrics are related to different quality attributes. Dynamic metrics
help to assess the efficiency and reliability of a program, whereas static metrics help to
assess the complexity, understandability and maintainability of a software system.
• Depth of conditional nesting - Deeply nested if-statements are hard to understand and
are potentially error-prone.
• Fog index - A measure of the average length of words and sentences in documents.
The higher the value of the fog index, the more difficult the document may be to
understand.
• Fan-in/Fan-out - Fan-in is a measure of the number of functions that call some other
function, say X. Fan-out is the number of functions which are called by X. A high value
of fan-in means that X is tightly coupled to the rest of the design and changes to X will
have extensive knock-on effects. A high value of fan-out suggests that the overall
complexity of X may be high because of the complexity of the control logic needed to
coordinate the called components.
• Weighted methods per class - The number of methods included in a class, weighted by
the complexity of each method. A simple method may have a complexity of 1, and a
large and complex method a much higher value.
• Number of overriding operations - The number of operations in a superclass which are
overridden in a subclass.
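Since the fog index above has a well-known formula (Gunning's: 0.4 multiplied by the sum of the average sentence length and the percentage of words of three or more syllables), a rough Python sketch follows; the syllable counter is a crude vowel-group heuristic used purely for illustration:

import re

def fog_index(text: str) -> float:
    # Gunning fog: 0.4 * (words/sentence + 100 * complex_words/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    def syllables(w):
        # Approximation: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", w.lower())))
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

print(round(fog_index("The report describes maintainability metrics. "
                      "Readable documents score lower."), 1))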
CASE technology provides software process support by automating some process activities
and by providing information about the software that is being developed. Examples of
activities that can be automated using CASE include:
1. The development of graphical system models as part of the requirements
specification or the software design.
2. Understanding a design using a data dictionary that holds information about the
entities and relations in a design.
3. The generation of user interfaces from a graphical interface description that is created
interactively by the user.
CASE technology is now available for most routine activities in the software process. This
has led to some improvements in software quality and productivity, although these have
been less than predicted by early advocates of CASE. Early advocates suggested that
orders of magnitude improvement were likely if integrated CASE environments were used. In
fact, the improvements that have been achieved are of the order of 40% (Huff, 1992).
Although this is significant, the predictions when CASE tools were first introduced in the
1980s and 1990s were that the use of CASE technology would generate huge savings in
software process costs.
The improvements from the use of CASE are limited by two factors:
1. Software engineering is, essentially, a design activity based on creative thought.
Existing CASE systems automate routine activities, but attempts to harness artificial
intelligence technology to provide support for design have not been successful.
2. Software engineering is a team activity and, in large projects, much time is spent in
team interactions. CASE technology does not provide much support for these.
CASE classifications help us understand the types of CASE tools and their role in supporting
software process activities. There are several ways to classify CASE tools, each of which
gives us a different perspective on these tools. In this section, CASE tools are discussed
from three of these perspectives:
1. A functional perspective - where CASE tools are classified according to their specific
function.
2. A process perspective - where tools are classified according to the process activities
that they support.
3. An integration perspective - where CASE tools are classified according to how they
are organised into integrated units that provide support for one or more process
activities.
The table in the following page is a classification of CASE tools according to function. This
table lists a number of different types of CASE tools and gives specific examples of each
one. This is not a complete list of CASE tools. Specialised tools, such as tools to support
reuse, have not been included.
The diagram in the following page presents an alternative classification of CASE tools. It
shows the process phases supported by a number of types of CASE tools. Tools for planning
and estimating, text editing, document preparation and configuration management may be
used throughout the software process.
[Diagram: CASE tools classified by the process phase they support: planning tools, editing tools, documentation tools, configuration management tools, prototyping tools, language-processing tools, debugging tools, testing tools and re-engineering tools, mapped against the specification, design, implementation and verification/validation activities.]
The breadth of support for the software process offered by CASE technology is another
possible classification dimension. Fuggetta proposes that CASE systems should be
classified in three categories:
1. Tools support individual process tasks such as checking the consistency of a design,
compiling a program or comparing test results.
2. Workbenches support process phases or activities such as specification or design,
and normally consist of a set of integrated tools.
3. Environments support all, or at least a substantial part, of the software process and
normally include several integrated workbenches.
The diagram below illustrates this classification and shows some examples of these classes
of CASE support. Of course, this is an illustrative example; many types of tools and
workbenches have been left out of this diagram.