
Software Engineering 1

The document outlines the fundamentals of software engineering, including definitions, types of software, and the software development process. It discusses various software processes and models, such as the Waterfall model, and emphasizes the importance of software attributes like maintainability, dependability, efficiency, and usability. Additionally, it covers the role of software engineers and the need for systematic approaches to software development to address challenges in the industry.


Accredited Course Provider

Table of Contents
1. What is software? ................................................................................................. 1
1.1. Types of Software....................................................................................................... 1
1.2. Characteristics and Problems of Software ................................................................. 2
1.3. Software Engineering Approach ................................................................................ 2
1.4. Definition of Software Engineering ............................................................................. 3
1.5. Software Attributes ..................................................................................................... 3
2. Software Processes .............................................................................................. 4
2.1. The ‘Waterfall’ model .................................................................................................. 4
2.2. Evolutionary development .......................................................................................... 5
2.3. Describe productivity in software engineering ........................................................... 6
2.4. Formal systems development .................................................................................... 7
2.5. Reuse-oriented development ..................................................................................... 8
2.6. Incremental Development .......................................................................................... 9
2.7. Spiral development ................................................................................................... 10
2.8. The RAD Model ........................................................................................................ 12
3. Introduction to Unified Modeling Language .................................................... 14
3.1. History of the UML .................................................................................................... 14
3.2. Brief overview of UML .............................................................................................. 14
3.3. A Conceptual Model of the UML .............................................................................. 16
3.4. Types of UML Diagrams - Discussion ...................................................................... 16
4. Software Design .................................................................................................. 17
4.1. Design concepts ....................................................................................................... 17
4.1.1. Abstraction ................................................................................................................... 17
4.1.2. Architecture .................................................................................................................. 17
4.1.3. Patterns ........................................................................................................................ 18
4.1.4. Modularity ..................................................................................................................... 18
4.1.5. Cohesion ...................................................................................................................... 20
4.1.6. Coupling ....................................................................................................................... 20
4.1.7. Information hiding......................................................................................................... 21
4.1.8. Functional independence ............................................................................................. 21
4.1.9. Refinement ................................................................................................................... 22
4.2. Architectural design .................................................................................................. 22
4.2.1. Repository model ......................................................................................................... 22
4.2.2. Client-server model ...................................................................................................... 23
4.2.3. Layered model ............................................................................................................. 24
4.3. Explain traceability in software systems and describe the processes..................... 25
4.4. Modular decomposition ............................................................................................ 25
4.5. Procedural design using structured methods .......................................................... 26
4.6. Object oriented design process................................................................................ 26
4.6.1. Understand and define the context and the modes of use of the system ................... 27
4.6.2. Design the system architecture.................................................................................... 27
4.6.3. Identify the principle objects of the system .................................................................. 27
4.6.4. Develop design model ................................................................................................. 27
4.6.5. Specify the object interfaces ........................................................................................ 28
5. Managing Software Projects.............................................................................. 29
5.1. Need for the proper management of software projects ........................................... 29
5.2. Describe the role of repositories .............................................................................. 30
5.3. Management activities .............................................................................................. 30
5.3.1. Project planning ........................................................................................................... 31
5.3.2. Estimating costs ........................................................................................................... 32
5.3.3. Project scheduling ........................................................................................................ 33
5.4. Risk management ..................................................................................................... 36
5.4.1. Risk identification ......................................................................................................... 37
5.4.2. Risk analysis ................................................................................................................ 37
5.4.3. Risk Planning ............................................................................................................... 37

Diploma – Software Engineering 1 i



5.4.4. Risk monitoring............................................................................................................. 38


5.5. Managing people ..................................................................................................... 38
6. Verification and validation ................................................................................. 40
6.1. Explain how to use project estimating and project planning tools. ......................... 43
6.2. Black-box testing...................................................................................................... 44
6.3. Explain the total cost of system ownership ............................................................. 45
6.4. Analyze and explain the software life cycle cost modelling .................................... 45
6.5. White box testing ..................................................................................................... 46
6.6. Levels of testing ....................................................................................................... 48
6.6.1. Unit testing ................................................................................................................... 48
6.6.2. Integration Testing........................................................................................................ 48
6.6.3. Interface testing ............................................................................................................ 50
6.6.4. System testing .............................................................................................................. 51
6.6.5. Alpha and beta testing ................................................................................................. 51
6.6.6. Regression testing........................................................................................................ 51
6.7. Design of test cases................................................................................................. 51
7. Software Maintenance........................................................................................ 53
7.1. Re-engineering ........................................................................................................ 54
7.2. Configuration Management (CM) ............................................................................ 56
7.2.1. Importance of CM ......................................................................................................... 56
7.2.2. Configuration items ...................................................................................................... 57
7.2.3. Versioning .................................................................................................................... 58
7.2.4. Release Management .................................................................................................. 59
8. Software Quality Assurance (SQA) .................................................................. 60
8.1. Definition of Quality .................................................................................................. 60
8.2. Quality Management Activities ................................................................................ 60
8.3. Quality Assurance and Standards ........................................................................... 60
8.3.1. Process and Product Standards .................................................................................. 61
8.3.2. Documentation Standards............................................................................................ 61
8.4. Quality Planning ....................................................................................................... 61
8.5. Quality Control ......................................................................................................... 62
9. Software Measurements and Metrics............................................................... 63
9.1. The Measurement Process...................................................................................... 63
9.1.1. Product Metrics ............................................................................................................ 63
10. Computer Aided Software Engineering (CASE) ............................................. 65
10.1. Examples of CASE tools.......................................................................................... 65


1. What is software?

Many people equate the term software with computer programs. In fact, this is too restrictive
a view. Software is not just the programs but also all associated documentation and
configuration data which is needed to make these programs operate correctly.
A software system usually consists of a number of separate programs, configuration files which are used to set up these programs, and system and user documentation which explains how to use the system.
Individuals who develop software are termed software engineers. Software engineers are concerned with developing software products, i.e. software which can be sold to a customer. There are two categories of software products:
1. Generic products: These are stand-alone systems which are produced by a
development organization and sold on the open market to any customer who is able
to buy them. Examples: Word processors, Databases, Drawing Packages and Project
management tools.
2. Bespoke (or customized) products: These are systems which are commissioned
by a particular customer. The software is developed specially for that customer by a
software contractor.
Example: Software written to support a business process

1.1. Types of Software

Based on its use and purpose there are different types of software which are available in the
real world:
System Software – Systems software is a collection of programs written to service other
programs. They directly control the hardware resources and support the operation of
application software.
Examples of System software are:
1. Operating Systems - Windows, UNIX, Linux
2. Program Translators - Compilers, Interpreters
3. Utility Software - Merging, Sorting

Application Software – Application software serves the user requirements in a particular application domain. It can be categorized as follows:

1. Real time software – Software that monitors/analyzes/controls real world events as they occur is called real time software. Elements of real time software include a data gathering component that collects data, a transformer component, and a respondent component.

2. Business software – these are information systems that are used in many general
business applications. Examples are general TPS, MIS, etc.

3. Engineering and Scientific software – these include applications ranging from astronomy to volcanology. Examples: space shuttle orbital dynamics, molecular biology, etc.

4. Embedded software – intelligent products include these types of software and they are very commonly used nowadays. Embedded software resides in the read-only memory of the product. Examples: microwave ovens, vehicle dashboard displays, etc.


5. Personal computer software – this includes word processing software, spreadsheets, etc.

6. Web based software – web pages retrieved by a browser are software that incorporate executable instructions. Examples: CGI, HTML, Perl, etc.

7. Artificial Intelligence (AI) software – AI software makes use of non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Examples: pattern recognition, neural networks, etc.

1.2. Characteristics and Problems of Software

Characteristics of the software

Software which is developed will exhibit the following characteristics:
• Software is developed or engineered; it is not manufactured in the classical sense.
• Software doesn’t “wear out”.
• Although the industry is moving towards component-based assembly, most software
continues to be custom built.

1.3. Software Engineering Approach

The need for an engineering approach


The software development process is the set of activities and associated results which produce a software product. In the early days these activities were not distinctly identified and organized. Therefore, in the late 1960s, with the introduction of third-generation computers, the software industry suffered a major setback which came to be known as the “software crisis”. The main reasons for the crisis were:
• Existing software development methods were not good enough to build large software systems
• Software costs were rising and projects usually ran over budget
• Software releases failed to meet their deadlines
• Expected requirements were not completely fulfilled
• Software maintenance absorbed an increasing proportion of software effort
At this point the need for a more systematic approach to software development was recognized, and this approach was called “Software Engineering”. Software engineering has come far in its short lifetime but it still has far to go. Despite this, a set of software-related problems has persisted throughout the evolution of computer-based systems, and these problems continue to intensify. Therefore there are several reasons why software must be engineered rather than merely developed, as listed below:
• Hardware advances continue to outpace our ability to build software.
• Our ability to build software cannot keep pace with the demand for new programs, nor
can we build programs rapidly enough to meet business and market trends.
• The widespread use of computers has made society dependent on reliable operation
of software.
• Our ability to support and enhance existing programs is threatened by poor design
and inadequate resources.


1.4. Definition of Software Engineering

Software engineering is an engineering discipline which is concerned with all aspects of software production, from the early stages of system specification through to maintaining the system after it has gone into use. In this definition there are two key phrases:
1. ‘Engineering discipline’- Engineers make things work. They apply theories,
methods and tools where these are appropriate but they use them selectively and
always try to discover solutions to problems even when there are no applicable
theories and methods to support them. Engineers also recognize that they must work
to organizational and financial constraints, so they look for solutions within these
constraints.

2. ‘All aspects of software production’ - Software engineering is not just concerned with the technical processes of software development but also with activities such as software project management and with the development of tools, methods and theories to support software production.

1.5. Software Attributes

As well as the services which they provide, software products have a number of other
associated attributes which reflect the quality of that software. These attributes are not
directly concerned with what the software does, rather they reflect its behaviour while it is
executing and the structure and organization of the source program and associated
documentation.
The specific set of attributes which you might expect from a software system obviously depends on its application. For example, a banking system must be secure, an interactive game must be responsive, etc.
Essential attributes of good software

• Maintainability – Software should be written in such a way that it may evolve to meet the changing needs of customers. This is a critical attribute because software change is an inevitable consequence of a changing business environment.
• Dependability – Software dependability has a range of characteristics, including reliability, security and safety. Dependable software should not cause physical or economic damage in the event of system failure.
• Efficiency – Software should not make wasteful use of system resources such as memory and processor cycles. Efficiency therefore includes responsiveness, processing time, memory utilization, etc.
• Usability – Software must be usable, without undue effort, by the type of user for whom it is designed. This means that it should have an appropriate user interface and adequate documentation.


2. Software Processes

A software process is a set of activities and associated results which lead to the production
of a software product. These may involve the development of software from scratch although
it is increasingly the case that new software is developed by extending and modifying
existing systems.
There is no ideal process and different organizations have developed completely different
approaches to software development. Processes have evolved to exploit the capabilities of
the people in an organization and the specific characteristics of the systems which are being
developed. Therefore, even within the same company there may be many different
processes used for software development.
Although there are many different software processes, there are fundamental activities which
are common to all software processes. These are:

• Software specification – The functionality of the software and the constraints on its
operations must be defined.
• Software design and implementation – The software to meet the specification must
be produced.
• Software validation – The software must be validated to ensure that it does what the
customer wants.
• Software evolution – The software must evolve to meet changing customer needs.

Software Development Process Models


A software process model is an abstract representation of a software process. Each process
model represents a process from a particular perspective so it only provides partial
information about that process.
For many large systems there is no single software process that is used, different processes
are used to develop different parts of the system.

The following are some of the generic software process models:

2.1. The ‘Waterfall’ model


This takes the fundamental process activities of specification, development, validation and
evolution and represents them as separate process phases such as requirements
specification, software design, implementation, testing and so on.
The first published model of the software development process was derived from other
engineering processes (Royce, 1970).
The principal stages of the model map onto fundamental development activities:

1. Requirements analysis and definition: The system’s services, constraints and goals are established by consultation with system users. They are then defined in detail and serve as a system specification.

2. System and software design: The systems design process partitions the
requirements to either hardware or software systems. It establishes overall system
architecture. Software design involves identifying and describing the fundamental
software system abstractions and their relationships.


3. Implementation and unit testing: During this stage, the software design is realized
as a set of programs or program units. Unit testing involves verifying that each unit
meets its specification.

4. Integration and system testing: The individual program units or programs are
integrated and tested as a complete system to ensure that the software requirements
have been met. After testing, the software system is delivered to the customer.

5. Operation and maintenance: Normally (although not necessarily) this is the longest
life-cycle phase. The system is installed and put into practical use. Maintenance
involves correcting errors which were not discovered in earlier stages of the life cycle,
improving the implementation of system units and enhancing the system’s services
as new requirements are discovered.
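The unit testing introduced in stage 3 can be sketched with a short, hypothetical example in Python; the discount function and its specification below are invented for illustration and are not part of any particular system:

```python
# A minimal sketch of unit testing: verifying that one unit (a
# hypothetical discount function) meets its specification.
import unittest

def apply_discount(price, percent):
    """Specification: return price reduced by percent (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_value(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)

# Run the unit tests programmatically
suite = unittest.TestLoader().loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Each test checks one clause of the unit’s specification; in stage 4 these verified units would then be integrated and tested together as a complete system.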

[Figure: The Generic Waterfall model – Requirements definition → System and software design → Implementation and unit testing → Integration and system testing → Operation and maintenance]

2.2. Evolutionary development


This approach interleaves the activities of specification, development and validation. An initial system is rapidly developed from an abstract specification. This is then refined with customer input to produce a system which satisfies the customer’s needs.

Evolutionary development is based on the idea of developing an initial implementation, exposing this to user comment and refining it through many versions until an adequate system has been developed. Rather than having separate specification, development and validation activities, these are carried out concurrently with rapid feedback across these activities.

There are two types of evolutionary development:

1. Exploratory development- where the objective of the process is to work with the
customer to explore their requirements and deliver a final system. The development starts with the parts of the system which are understood. The system evolves by adding new features as they are proposed by the customer.

2. Throw-away prototyping - where the objective of the evolutionary development process is to understand the customer’s requirements and hence develop a better requirements definition for the system. The prototype concentrates on experimenting with those parts of the customer requirements which are poorly understood.

[Figure: Evolutionary development – an outline description feeds the concurrent activities of specification, development and validation, which produce an initial version, intermediate versions and a final version]

2.3. Describe productivity in software engineering


In software development, two factors are used to measure productivity. They are:

1. The effort required to build the system (input measure)

2. The size of the software that is delivered (output measure)

Productivity is then calculated as size divided by effort (for example, lines of code per person-month). Note that there are various methods to measure software size. Each has its own features.

However, productivity is only a single aspect of software development. Other important factors include speed to market, quality, staff retention and cost. These must also be measured to evaluate performance.

Reasons why productivity should be measured include:

• To determine if one methodology produces a faster result than another
• To find the most cost-effective techniques and tools
• To arrive at an optimum team size
• To track the overall cost difference between using experienced versus inexperienced resources
• To compare your team to industry competitors
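As a minimal sketch, with made-up figures, the size-over-effort calculation might look like this in Python:

```python
# Productivity = delivered size / effort. Size here is measured in
# lines of code (LOC) and effort in person-months; both projects
# below are hypothetical.
def productivity(size_loc, effort_person_months):
    """Return productivity in LOC per person-month."""
    return size_loc / effort_person_months

project_a = productivity(12000, 24)  # 500.0 LOC/person-month
project_b = productivity(9000, 30)   # 300.0 LOC/person-month
print(project_a > project_b)         # True
```

Note that LOC is only one possible size measure; comparing LOC-based productivity across different languages or teams can be misleading, which is why various size measures exist.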


2.4. Formal systems development

This approach is based on producing a formal mathematical system specification and transforming this specification, using mathematical methods, to construct a program. Verification of system components is carried out by making mathematical arguments that they conform to their specification.

Formal systems development is an approach to software development which has something in common with the waterfall model, but where the development process is based on formal mathematical transformation of a system specification into an executable program.

The critical distinctions between this approach and the waterfall model are:
1. The software requirements specification is refined into a detailed formal specification
which is expressed in a mathematical notation.
2. The development processes of design, implementation and unit testing are replaced
by a transformational development process where the formal specification is refined,
through a series of transformations, into a program.

In the transformation process, the formal mathematical representation of the system is systematically converted into a more detailed, but still mathematically correct, system representation. Each step adds detail until the formal specification is converted into an equivalent program. Transformations are sufficiently close that the effort of verifying each transformation is not excessive.

[Figure: Formal systems development – the requirements definition is refined into a formal specification; formal transformations T1–T4 produce intermediate representations R1–R3 and finally an executable program, with proofs of transformation correctness P1–P4, followed by integration and system testing]


It can therefore be guaranteed, assuming there are no verification errors, that the program is
a true implementation of the specification.
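The guarantee above rests on mathematical proof rather than testing. As a loose illustration of the underlying idea, a specification can be written as a checkable postcondition; the integer square root example below is hypothetical, and it spot-checks cases at run time where a real formal method would supply a proof:

```python
# Illustrative only: real formal development proves conformance
# mathematically rather than checking cases at run time.
def isqrt_spec(n, r):
    """Formal postcondition: r is the integer square root of n."""
    return r * r <= n < (r + 1) * (r + 1)

def isqrt(n):
    """An implementation obtained by stepwise refinement of the spec."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# Spot-check that the program conforms to its specification
assert all(isqrt_spec(n, isqrt(n)) for n in range(1000))
```

The point of the transformational process is that each refinement step from `isqrt_spec` toward `isqrt` is small enough that its correctness can be argued mathematically, so no such run-time check is needed.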

2.5. Reuse-oriented development

This approach is based on the existence of a significant number of reusable components. The system development process focuses on integrating these components into a system rather than developing from scratch.
In the majority of software projects, there is some software reuse. This usually happens
informally when people working on the project know of designs or code which is similar to
that required. They look for these, modify them as required and incorporate them into their
system. In the evolutionary approach, reuse is often seen as essential for rapid system
development.
This informal reuse takes place irrespective of the generic process which is used. However,
in the past few years, an approach to software development (component based software
engineering) which relies on reuse has emerged and is becoming increasingly widely used.
This reuse-oriented approach relies on a large base of reusable software components which
can be accessed and some integrating framework for these components.

[Figure: Reuse-oriented development – requirements specification → component analysis → requirements modification → system design with reuse → development and integration → system validation]


Sometimes, these components are systems in their own right (COTS or Commercial Off-The-
Shelf systems) that may be used to provide specific functionality such as text formatting,
numeric calculation, etc. The generic process model for reuse-oriented development is
shown above.
While the initial requirements specification stage and the validation stage are comparable
with other processes, the intermediate stages in a reuse-oriented process are different.

These stages are:

1. Component analysis: Given the requirements specification, a search is made for components to implement that specification. Usually, there is not an exact match and the components which may be used provide only some of the functionality required.

2. Requirements modification: During this stage, the requirements are analyzed using
information about the components which have been discovered. They are then
modified to reflect the available components. Where modifications are impossible, the
component analysis activity may be re-entered to search for alternative solutions.

3. System design with reuse: During this phase, the framework of the system is designed or an existing framework is reused. The designers take into account the components which are reused and organize the framework to cater for this. Some new software may have to be designed if reusable components are not available.

4. Development and integration: Software which cannot be bought in is developed, and the components and COTS systems are integrated to create the system. System integration, in this model, may be part of the development process rather than a separate activity.
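The component analysis and requirements modification steps above can be sketched as a search over a catalogue of reusable components. This is a minimal illustration only; the component names, feature lists, and the `analyse_components` helper are all invented for the example.

```python
# Sketch of component analysis: match required features against a
# catalogue of reusable components. All names here are illustrative.

def analyse_components(required, catalogue):
    """Return (matched components, features still unmet)."""
    matched, unmet = [], set(required)
    for component, features in catalogue.items():
        covered = unmet & set(features)
        if covered:                      # component provides something we need
            matched.append(component)
            unmet -= covered
    return matched, unmet

catalogue = {
    "TextFormatter": ["format_text"],
    "CalcEngine": ["numeric_calculation", "unit_conversion"],
}
matched, unmet = analyse_components(
    ["format_text", "numeric_calculation", "spell_check"], catalogue)
# spell_check has no match: either modify the requirements (step 2)
# or develop that functionality anew (step 4)
```

Any feature left unmet points either to a requirements modification or to new development and integration.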

2.6. Incremental Development

The waterfall model of development requires customers for a system to commit to a set of
requirements before design begins and the designer to commit to particular design strategies
before implementation. Changes to the requirements during development require rework of
the requirements, design and implementation. However, the advantages of the waterfall
model are that it is a simple management model and its separation of design and
implementation should lead to robust systems which are amenable to change.
By contrast, an evolutionary approach to development allows requirements and design
decisions to be delayed but also leads to software which may be poorly structured and
difficult to understand and maintain. Incremental development is an in- between approach
which combines the advantages of both of these models.
The incremental approach to development was suggested by Mills (Mills et al., 1980) as a
means of reducing rework in the development process and giving customers some
opportunities to delay decisions on their detailed requirements until they had some
experience with the system.


(Figure: the incremental development process. Define outline requirements → Assign requirements to increments → Design system architecture; then, while the system is incomplete: Develop system increment → Validate increment → Integrate increment → Validate system; finally, deliver the final system.)

In an incremental development process, customers identify, in outline, the services to be provided by the system. They identify which of the services are most important and which are
least important to them. A number of delivery increments are then defined, with each
increment providing a subset of the system functionality. The allocation of services to
increments depends on the service priority. The highest priority services are delivered first to
the customer.

Once the system increments have been identified, the requirements for the services to be
delivered in the first increment are defined in detail and that increment is developed using the
most appropriate development process. During that development, further requirements
analysis for later increments can take place but requirements changes for the current
increment are not accepted.

Once an increment is completed and delivered, customers can put it into service. This means
that they take early delivery of part of the system functionality. They can experiment with the
system which helps them clarify their requirements for later increments and for later versions
of the current increment. As new increments are completed, they are integrated with existing
increments so that the system functionality improves with each delivered increment. The common services may be implemented early in the process or may be implemented incrementally as functionality is required by an increment.

There is no need to use the same process for the development of each increment. Where the
services in an increment have a well-defined specification, a waterfall model of development
may be used for that increment. Where the specification is unclear, an evolutionary
development model may be used.
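The allocation of prioritized services to increments described above can be sketched as follows. The service names, the priority scale, and the increment size are illustrative assumptions, not part of any standard process.

```python
# Illustrative sketch: allocate services to delivery increments by priority,
# with the highest-priority services going into the earliest increment.

def assign_increments(services, size):
    """services: list of (name, priority) pairs; size: services per increment."""
    ordered = sorted(services, key=lambda s: s[1], reverse=True)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

services = [("reporting", 2), ("login", 5), ("billing", 4), ("export", 1)]
increments = assign_increments(services, 2)
# increment 1 delivers login and billing; increment 2 the rest
```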

2.7. Spiral development


The spiral model of the software process, originally proposed by Boehm (1988), is now widely known. Rather than representing the software process as a sequence of activities with some backtracking from one activity to another, the process is represented as a spiral.
Each loop in the spiral represents a phase of the software process. Thus, the innermost loop
might be concerned with system feasibility, the next loop with system requirements definition,
the next loop with system design and so on.


Each loop in the spiral is split into four sectors:

1. Objective setting – Specific objectives for that phase of the project are defined. Constraints on the process and the product are identified and a detailed management plan is drawn up. Project risks are identified. Alternative strategies, depending on these risks, may be planned.

2. Risk assessment and reduction – For each of the identified project risks, a detailed analysis is carried out. Steps are taken to reduce the risk. For example, if there is a risk that the requirements are inappropriate, a prototype system may be developed.

3. Development and validation – After risk evaluation, a development model for the system is then chosen. For example, if user interface risks are dominant, an appropriate development model might be evolutionary prototyping. If safety risks are the main consideration, development based on formal transformations may be the most appropriate, and so on. The waterfall model may be the most appropriate development model if the main identified risk is sub-system integration.

4. Planning – The project is reviewed and a decision is made whether to continue with a further loop of the spiral. If it is decided to continue, plans are drawn up for the next phase of the project.
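The four sectors above can be sketched as a simple control loop. This is only a schematic of the spiral's flow, with phase names invented for illustration; it omits the review decision that may stop the spiral early.

```python
# Hedged sketch of the spiral's control flow: each loop of the spiral
# passes through the same four sectors for one phase of the project.

def spiral(phases, max_loops=10):
    log = []
    for loop, phase in enumerate(phases[:max_loops], start=1):
        log.append((loop, "objective setting", phase))
        log.append((loop, "risk assessment and reduction", phase))
        log.append((loop, "development and validation", phase))
        log.append((loop, "planning", phase))   # review: continue or stop here
    return log

trace = spiral(["feasibility", "requirements"])
# two loops of the spiral, four sectors each
```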

(Figure: Boehm's spiral model. Each loop passes through four quadrants — determine objectives, alternatives and constraints; evaluate alternatives and identify and resolve risks (risk analysis, prototypes 1–3, operational prototype, simulations, models, benchmarks); develop and verify the next-level product (concept of operation, software requirements and validation, product design, detailed design, code, unit test, integration test, acceptance test); and plan the next phase (requirements plan, life-cycle plan, development plan, integration and test plan) — with a review between loops.)

2.8. The RAD Model

Rapid application development (RAD) is an incremental software development process model that emphasizes an extremely short development cycle. The RAD model is a “high-speed” adaptation of the linear sequential model in which rapid development is achieved by
using component-based construction. If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a “fully functional system” within very short time periods (e.g., 60 to 90 days). Used primarily for information systems applications, the RAD approach encompasses the following phases:

Business modeling.

The information flow among business functions is modeled in a way that answers the
following questions: What information drives the business process? What information is
generated? Who generates it? Where does the information go? Who processes it?

Data modeling.

The information flow defined as part of the business modeling phase is refined into a set of
data objects that are needed to support the business. The characteristics (called attributes)
of each object are identified and the relationships between these objects defined.

Process modeling.

The data objects defined in the data modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions are
created for adding, modifying, deleting, or retrieving a data object.
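A processing description of this kind boils down to add, modify, delete, and retrieve operations over a data object, which can be sketched as below. The `DataStore` class and the customer attributes are purely illustrative, not part of any RAD tool.

```python
# Sketch of the processing descriptions RAD produces for a data object:
# add, modify, delete, and retrieve over an in-memory store.

class DataStore:
    def __init__(self):
        self._objects = {}

    def add(self, key, attributes):
        self._objects[key] = dict(attributes)

    def modify(self, key, **changes):
        self._objects[key].update(changes)

    def retrieve(self, key):
        return self._objects.get(key)

    def delete(self, key):
        self._objects.pop(key, None)

store = DataStore()
store.add("C001", {"name": "Acme", "credit": 500})   # illustrative object
store.modify("C001", credit=750)
```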

Application generation.

RAD assumes the use of fourth-generation techniques. Rather than creating software using conventional third-generation programming languages, the RAD process works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.

Testing and turnover.

Since the RAD process emphasizes reuse, many of the program components have already been tested. This reduces overall testing time. However, new components must be tested and all interfaces must be fully exercised.

Obviously, the time constraints imposed on a RAD project demand “scalable scope”. If a business application can be modularized in a way that enables each major function to be completed in less than three months (using the approach described previously), it is a candidate for RAD. Each major function can be addressed by a separate RAD team and then integrated to form a whole.


3. Introduction to Unified Modeling Language

3.1. History of the UML


During the 1990s many different methodologies, along with their own sets of notations, were introduced to the market. Three popular ones were OMT, Booch, and OOSE (Jacobson). OMT was strong in analysis and weaker in the design area. Booch (1991) was strong in design and weaker in analysis. Jacobson was strong in behavior analysis and weaker in the other areas.

The use of different notations brought confusion to the market since one symbol meant
different things to different people. To resolve this confusion Unified Modeling Language
(UML) was introduced.

“UML is a language used to specify, visualize, and document the artifacts of an object-oriented system under development. It represents the unification of the Booch, OMT, and Objectory notations, as well as the best ideas from a number of other methodologies.”

(Figure: UML inputs — the Booch, OMT, and OOSE/Objectory notations that were unified into the UML.)

UML is an attempt to standardize the artifacts of analysis and design: semantic models,
syntactic notation, and diagrams.

In November 1997, the UML was adopted as the standard modeling language by the
Object Management Group (OMG). The current version of the UML is UML 1.4, and work is progressing on UML 2.0.

3.2. Brief overview of UML


The Unified Modeling Language (UML) is a standard language for writing software blueprints.
The UML may be used to visualize, specify, construct and document the artifacts of a
software-intensive system.

The UML is only a language and so is just one part of a software development method.
The UML is process independent, although optimally it should be used in a process that is
use case driven, architecture-centric, iterative, and incremental.


UML is a Language

A language provides a vocabulary and the rules for combining words in that vocabulary for
the purpose of communication. A modeling language is a language whose vocabulary and
rules focus on the conceptual and physical representation of a system. A modeling language
such as UML is thus a standard language for software blueprints.

The vocabulary and rules of a language such as the UML tell you how to create and read
well-formed models, but they don’t tell you what models you should create and when you
should create them.

1. The UML is a language for Visualizing

Writing models in the UML provides an explicit model which facilitates communication.

There are some things about a software system you can’t understand unless you build models that go beyond the textual programming language. Some things are best modeled textually; others are best modeled graphically. The UML is such a graphical language.

The UML is more than just a bunch of graphical symbols. Rather, behind each symbol in the UML notation is a well-defined semantics. In this manner, one developer can write a model in the UML, and another developer, or even another tool, can interpret that model unambiguously.

2. The UML is a language for Specifying


In this context, specifying means building models that are precise, unambiguous, and
complete. In particular, the UML addresses the specification of all the important analysis,
design, and implementation decisions that must be made in developing and deploying a
software-intensive system.

3. The UML is a language for Constructing


The UML is not a visual programming language, but its models can be directly connected to a variety of programming languages. This means that it is possible to map from a model in the UML to a programming language such as Java, C++, or Visual Basic, or even to tables in a relational database or the persistent store of an object-oriented database. Things that are best expressed graphically are done so in the UML, whereas things that are best expressed textually are done so in the programming language.
This mapping permits forward engineering: the generation of code from a UML model into a programming language. The reverse is also possible: you can reconstruct a model from an implementation back into the UML. This is called reverse engineering.

4. The UML is a language for Documenting


The UML addresses the documentation of a system’s architecture and all of its details.
The UML also provides a language for expressing requirements and for tests. Finally, the
UML provides a language for modeling the activities of project planning and release
management.

The UML is not limited to modeling software. In fact, it is expressive enough to model non-software systems, such as workflow in the legal system, the structure and behavior of a patient healthcare system, and the design of hardware.


3.3. A Conceptual Model of the UML

To understand the UML, you need to form a conceptual model of the language, and this requires learning these major elements: the UML’s basic building blocks, the rules that dictate how those building blocks may be put together, and some common mechanisms that apply throughout the UML.

The vocabulary of the UML encompasses three kinds of building blocks:

1. Things
2. Relationships
3. Diagrams

Things are the abstractions that are first-class citizens in a model; relationships tie these
things together; diagrams group interesting collections of things.

3.4. Types of UML Diagrams - Discussion


Each UML diagram is designed to let developers and customers view a software system
from a different perspective and in varying degrees of abstraction. UML diagrams commonly
created in visual modeling tools include:

• Use Case Diagram displays the relationship among actors and use cases.

• Class Diagram models class structure and contents using design elements such as
classes, packages and objects. It also displays relationships such as containment,
inheritance, associations and others.

• Interaction Diagrams

1. Sequence Diagram displays the time sequence of the objects participating in the
interaction. This consists of the vertical dimension (time) and horizontal dimension
(different objects).

2. Collaboration Diagram displays an interaction organized around the objects and their links to one another. Numbers are used to show the sequence of messages.

• State Diagram displays the sequences of states that an object of an interaction goes
through during its life in response to received stimuli, together with its responses and
actions.

• Activity Diagram displays a special state diagram where most of the states are action
states and most of the transitions are triggered by completion of the actions in the source
states. This diagram focuses on flows driven by internal processing.

• Physical Diagrams

1. Component Diagram displays the high-level packaged structure of the code itself. Dependencies among components are shown, including source code components, binary code components, and executable components. Some components exist at compile time, some at link time, some at run time, and some at more than one time.

2. Deployment Diagram displays the configuration of run-time processing elements and the software components, processes, and objects that live on them. Software component instances represent run-time manifestations of code units.


4. Software Design

Design is a meaningful engineering representation of something that is to be built. It can be traced to a customer’s requirements and at the same time assessed for quality against a set of predefined criteria for “good” design.

4.1. Design concepts

Software design consists of various concepts, which are included in applications as given
below:

4.1.1. Abstraction

When we consider a modular solution to any problem, many levels of abstraction can be
posed. At the highest level of abstraction, a solution is stated in broad terms using the
language of the problem environment. At lower levels of abstraction, a more detailed
description of the solution is provided.

“Abstraction is one of the fundamental ways that we as humans cope with complexity”

As we move through different levels of abstraction, we work to create procedural and data abstractions. A procedural abstraction refers to a sequence of instructions that has a specific and limited function. The name of a procedural abstraction implies this function, but specific details are suppressed. An example of a procedural abstraction would be the word open for a door. Open implies a long sequence of procedural steps (e.g., walk to the door, reach out and grasp the knob, turn the knob and pull the door, step away from the moving door, etc.).

A data abstraction is a named collection of data that describes a data object. In the context
of the procedural abstraction open, we can define a data abstraction called door. Like any
data object, the data abstraction for door would encompass a set of attributes that describe the door (e.g., door type, swing direction, opening mechanism, weight, dimensions). It
follows that the procedural abstraction open would make use of information contained in the
attributes of the data abstraction door.
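The door example can be made concrete in code: `open` is a procedural abstraction whose steps are suppressed behind a name, and `Door` is a data abstraction bundling the attributes that describe the object. This is an illustrative sketch; the attribute names are assumptions.

```python
# Data abstraction: Door bundles the attributes that describe the object.
# Procedural abstraction: open() names an operation whose detailed steps
# (grasp knob, turn knob, pull door, ...) are suppressed at this level.

class Door:
    def __init__(self, door_type, swing_direction, weight):
        self.door_type = door_type
        self.swing_direction = swing_direction
        self.weight = weight
        self.is_open = False

    def open(self):
        # callers see one named operation; the low-level steps are hidden
        self.is_open = True

front = Door("panel", "inward", 30)   # illustrative attribute values
front.open()
```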

4.1.2. Architecture

Software architecture alludes to "the overall structure of the software and the ways in
which that structure provides conceptual integrity for a system". In its simplest form,
architecture is the structure or organization of program components (modules), the manner in
which these components interact, and the structure of data that are used by the components.
In a broader sense, however, components can be generalized to represent major system
elements and their interactions.

“Software architecture is the development work product that gives the highest return on
investment with respect to quality, schedule and cost”


One goal of software design is to derive an architectural rendering of a system. This rendering serves as a framework from which more detailed design activities are conducted. A set of architectural patterns enables a software engineer to reuse design-level concepts.

The architectural design can be represented using one or more of a number of different
models:

• Structural models represent architecture as an organized collection of program components.
• Framework models increase the level of design abstraction by attempting to identify
repeatable architectural design frameworks that are encountered in similar types of
applications.
• Dynamic models address the behavioural aspects of the program architecture, indicating how the structure or system configuration may change as a function of external events.
• Process models focus on the design of the business or technical process that the system must accommodate.
• Functional models can be used to represent the functional hierarchy of a system.

4.1.3. Patterns

Brad Appleton defines a design pattern in the following manner:

"A pattern is a named nugget of insight which conveys the essence of a proven solution
to a recurring problem within a certain context amidst competing concerns".

Stated in another way, a design pattern describes a design structure that solves a particular
design problem within a specific context and amid "forces" that may have an impact on the
manner in which the pattern is applied and used.

“Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”

The intent of each design pattern is to provide a description that enables a designer to
determine:

• whether the pattern is applicable to the current work;
• whether the pattern can be reused (hence saving design time);
• whether the pattern can serve as a guide for developing a similar, but functionally or structurally different, pattern.

4.1.4. Modularity

Software architecture and design patterns embody modularity; that is, software is divided into separately named and addressable components, sometimes called modules, that are integrated to satisfy problem requirements.

It has been stated that "modularity is the single attribute of software that allows a
program to be intellectually manageable". Monolithic software (i.e., a large program
composed of a single module) cannot be easily grasped by a software engineer. The number
of control paths, span of reference, number of variables, and overall complexity would make


understanding close to impossible. To illustrate this point, consider the following argument
based on observations of human problem solving.

Consider two problems, P1 and P2. If the perceived complexity of P1 is greater than the perceived complexity of P2, it follows that the effort required to solve P1 is greater than the effort required to solve P2. As a general case, this result is intuitively obvious: it does take more time to solve a difficult problem.

It also follows that the perceived complexity of two problems when they are combined is often greater than the sum of the perceived complexity when each is taken separately. This leads to a “divide and conquer” strategy: it is easier to solve a complex problem when you break it into manageable pieces. This has important implications with regard to modularity and software. It is, in fact, an argument for modularity.

Is it possible to conclude that, if we subdivide software indefinitely, the effort required to develop it will become negligibly small? Unfortunately, other forces come into play, making this conclusion (sadly) invalid. Referring to the diagram below, the effort (cost) to
develop an individual software module does decrease as the total number of modules
increases. Given the same set of requirements, more modules means smaller individual size.
However, as the number of modules grows, the effort (cost) associated with integrating the
modules also grows. These characteristics lead to a total cost or effort curve shown in the
diagram. There is a number, M, of modules that would result in minimum development cost,
but we do not have the necessary sophistication to predict M with assurance.
The curves shown in the diagram do provide useful guidance when modularity is considered. We should modularize, but care should be taken to stay in the vicinity of M. Undermodularity or overmodularity should be avoided.
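The trade-off behind the curve can be imitated with a toy cost model. The cost functions below are invented purely to reproduce the shape of the curve — falling per-module development cost against rising integration cost — and do not predict real effort.

```python
# Toy model of the cost curve: development effort per module falls as
# modules get smaller, while integration effort grows with their number.
# Both cost functions are illustrative assumptions.

def total_cost(modules, work=100.0, k=2.0):
    # effort grows superlinearly with module size (work / modules),
    # so splitting the work lowers total development effort...
    development = modules * (work / modules) ** 1.2
    # ...while each extra module adds integration effort
    integration = k * modules
    return development + integration

# the minimum lies between under- and over-modularity, at some number M
best = min(range(1, 51), key=total_cost)
```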

We modularize a design (and the resulting program) so that

• development can be more easily planned;
• software increments can be defined and delivered;
• changes can be more easily accommodated;
• testing and debugging can be conducted more efficiently;
• long-term maintenance can be conducted without serious side effects.

(Figure: region of minimum total software cost. As the number of modules grows, the cost of effort per module falls while the cost to integrate rises; the total cost curve has a minimum in a region around M modules.)


4.1.5. Cohesion

Cohesion can be defined as the single-mindedness of a component. Within the context of component-level design for object-oriented systems, cohesion implies that a component or class encapsulates only attributes and operations that are closely related to one another.

Following are some of the types of cohesion:

1. Functional cohesion – exhibited mainly by operations; this type of cohesion occurs when a component performs only one specific function.

2. Layer cohesion – occurs when a higher layer accesses the services of a lower layer, but the lower layer does not access the higher layer.

3. Communicational cohesion – all operations that access the same data are defined within one class/component.

4. Sequential cohesion – components or operations are grouped in a manner that allows the first to provide input to the next, and so on. The intent is to implement a sequence of operations.

5. Procedural cohesion – components or operations are grouped in a manner that allows one to be invoked immediately after the preceding one was invoked, even when there is no data passed between them.

6. Temporal cohesion – operations that are performed to reflect a specific behaviour or state.
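The difference between strong and weak cohesion can be sketched with two classes; both classes and their operations are invented for illustration.

```python
# Sketch contrasting functional cohesion with its absence.

class TaxCalculator:
    """Functionally cohesive: every operation serves one purpose."""
    def __init__(self, rate):
        self.rate = rate

    def tax_due(self, amount):
        return amount * self.rate

    def net_of_tax(self, amount):
        return amount - self.tax_due(amount)

class Utilities:
    """Weakly cohesive: unrelated responsibilities lumped together."""
    def tax_due(self, amount, rate):
        return amount * rate

    def send_email(self, address, body):
        ...

    def draw_chart(self, data):
        ...

calc = TaxCalculator(0.2)   # illustrative rate
```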

4.1.6. Coupling

Coupling is a quantitative measure of the degree to which the classes are connected to one another. As classes or components become more interdependent, coupling increases.

If the system is to be implemented in a manner which is easy to maintain, then coupling must be kept low. This is termed low coupling.

Following are some of the types of coupling available:

1. Content coupling – occurs when one component surreptitiously (secretly) modifies data that is internal to another component. This violates information hiding.

2. Common coupling – occurs when a number of components all make use of a global
variable.

3. Control coupling – occurs when operations pass control information or flags to another operation.

4. Stamp coupling – occurs when ClassB is declared as a type for an argument of an operation of ClassA. Because ClassB is now a part of the definition of ClassA, modification of the system becomes more complicated.

5. Data coupling – occurs when operations pass long strings of data arguments from one operation to another.


6. Routine call coupling – occurs when one operation invokes another.

7. Type use coupling – occurs when component A uses the data type that is defined
by component B.

8. Inclusion or import coupling – occurs when component A imports or includes the data type that is defined by component B.

9. External coupling – occurs when a component communicates or collaborates with infrastructure components, e.g., operating system functions, database capabilities, etc.
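Common coupling versus data coupling can be illustrated in a few lines; the pricing functions and the global variable are invented for the example.

```python
# Sketch contrasting common coupling (shared global state) with the
# looser data coupling the text recommends.

DISCOUNT = 0.1   # global read by several components: common coupling

def price_with_global(amount):
    return amount * (1 - DISCOUNT)      # silently depends on shared state

def price_with_argument(amount, discount):
    return amount * (1 - discount)      # data coupling: dependency is explicit
```

Because `price_with_argument` receives everything it needs through its parameters, it can be understood, tested, and modified in isolation, which is exactly why low coupling eases maintenance.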

4.1.7. Information hiding

The concept of modularity leads every software designer to a fundamental question:

“How do we decompose a software solution to obtain the best set of modules? “

The principle of information hiding suggests that modules be “characterized by design decisions that (each) hides from all others.” In other words, modules should be specified and designed so that information (algorithms and data) contained within a module is inaccessible to other modules that have no need for such information.

Hiding implies that effective modularity can be achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve the software function. Abstraction helps to define the procedural (or informational) entities that make up the software. Hiding defines and enforces access constraints to both procedural detail within a module and any local data structure used by the module.

The use of information hiding as a design criterion for modular systems provides the greatest
benefits when modifications are required during testing and later, during software
maintenance. Because most data and procedures are hidden from other parts of the software,
inadvertent errors introduced during modification are less likely to propagate to other
locations within the software.
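Information hiding can be sketched as a class whose internal data structure is an inaccessible detail behind a narrow interface. The `BoundedQueue` example is illustrative; note that Python's leading-underscore convention merely signals "internal" rather than enforcing access control.

```python
# Sketch of information hiding: the queue's internal list and algorithm
# are hidden details; clients see only the put/get interface.

class BoundedQueue:
    def __init__(self, capacity):
        self._items = []          # leading underscore: internal detail
        self._capacity = capacity

    def put(self, item):
        if len(self._items) >= self._capacity:
            raise OverflowError("queue full")
        self._items.append(item)

    def get(self):
        return self._items.pop(0)

q = BoundedQueue(2)
q.put("a"); q.put("b")
```

Because clients depend only on `put` and `get`, the internal list could later be replaced (say, by a ring buffer) without changes propagating to other modules.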

4.1.8. Functional independence

The concept of functional independence is a direct outgrowth of modularity and the concepts
of abstraction and information hiding. Functional independence is achieved by developing
modules with "single-minded" function and an "aversion" to excessive interaction with other
modules. Stated another way, we want to design software so that each module addresses a
specific sub function of requirements and has a simple interface when viewed from other
parts of the program structure. It is fair to ask why independence is important.

Software with effective modularity, that is, independent modules, is easier to develop
because function may be compartmentalized and interfaces are simplified (consider the
ramifications when development is conducted by a team). Independent modules are easier to
maintain (and test) because secondary effects caused by design or code modification are
limited, error propagation is reduced, and reusable modules are possible. To summarize,
functional independence is a key to good design, and design is the key to software quality.

Independence is assessed using two qualitative criteria: “cohesion” and “coupling”.


4.1.9. Refinement

Stepwise refinement is a top-down design strategy. A program is developed by successively refining levels of procedural detail. A hierarchy is developed by decomposing a macroscopic statement of function (a procedural abstraction) in a stepwise fashion until programming language statements are reached.

Refinement is actually a process of elaboration. We begin with a statement of function (or description of data) that is defined at a high level of abstraction. That is, the statement
describes function or information conceptually but provides no information about the internal
workings of the function or the internal structure of the data. Refinement causes the designer
to elaborate on the original statement, providing more and more detail as each successive
refinement (elaboration) occurs.
Abstraction and refinement are complementary concepts. Abstraction enables a designer to
specify procedure and data and yet suppress low-level details. Refinement helps the
designer to reveal low-level details as design progresses. Both concepts aid the designer in
creating a complete design model as the design evolves.
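Stepwise refinement can be sketched by elaborating a high-level statement of function into more detailed operations; the payroll functions and tax rate below are invented for illustration.

```python
# Stepwise refinement sketch: net_pay is the macroscopic statement of
# function; each step it delegates to is a successive refinement.

def gross_pay(hours, rate):                # refined detail
    return hours * rate

def deductions(gross, tax_rate=0.2):       # refined detail (illustrative rate)
    return gross * tax_rate

def net_pay(hours, rate):                  # high-level statement of function
    gross = gross_pay(hours, rate)
    return gross - deductions(gross)
```

The top-level function suppresses the low-level details (abstraction), while the helper functions reveal them (refinement) — the two complementary concepts described above.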

4.2. Architectural design

The main purpose of architectural design is to establish the overall structure of the software
system. Architectural design can be defined as the design process for identifying the sub-
systems making up a system and the framework for sub-system control and communication.
The output of this design process is a description of the software architecture.

Architectural design is an early stage of the system design process. It represents the link between specification and design, is often carried out in parallel with some specification activities, and involves identifying major system components and their communications.

4.2.1. Repository model

Sub-systems making up a system must exchange information so that they can work together
effectively. There are two fundamental ways in which this can be done:
1. All shared data is held in a central database that can be accessed by all subsystems.
A system model based on a shared database is sometimes called a repository model.
2. Each sub-system maintains its own database. Data is interchanged with other sub-
systems by passing messages to them.
The majority of systems that use large amounts of data are organised around a shared
database or repository. This model is therefore suited to applications where data is
generated by one sub-system and used by another. Examples of this type of system include
command and control systems, management information systems, CAD systems and CASE
toolsets.
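The repository model can be sketched as sub-systems that communicate only through a shared data store, never directly with one another. The tool names echo the CASE toolset example, but the code itself is an invented illustration.

```python
# Sketch of the repository model: one sub-system writes to the shared
# repository, another reads from it; neither knows about the other.

class Repository:
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data[key]

def design_editor(repo):
    # produces a design and stores it centrally (illustrative content)
    repo.write("design", {"classes": ["Order", "Invoice"]})

def code_generator(repo):
    # consumes what another tool produced, via the repository only
    design = repo.read("design")
    return [f"class {name}: ..." for name in design["classes"]]

repo = Repository()
design_editor(repo)
code = code_generator(repo)
```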


Diagram below is an example of a CASE toolset architecture based on a shared repository.

(Figure: a CASE toolset architecture built around a shared project repository. A design editor, design translator, design analyser, code generator, program editor, and report generator all read from and write to the central project repository.)

4.2.2. Client-server model

The client-server architectural model is a system model where the system is organised as a
set of services and associated servers and clients that access and use the services. The
major components of this model are:

1. A set of servers that offer services to other sub-systems. Examples of servers are
print servers that offer printing services, file servers that offer file management
services and a compile server, which offers programming language compilation
services.
2. A set of clients that call on the services offered by servers. These are normally sub-
systems in their own right. There may be several instances of a client program
executing concurrently.
3. A network that allows the clients to access these services. This is not strictly
necessary as both the clients and the servers could run on a single machine. In
practice, however, most client-server systems are implemented as distributed
systems.

Clients may have to know the names of the available servers and the services that they
provide. However, servers need not know either the identity of clients or how many clients
there are. Clients access the services provided by a server through remote procedure calls
using a request-reply protocol such as the http protocol used in the WWW. Essentially, a
client makes a request to a server and waits until it receives a reply.
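This request-reply interaction can be sketched with Python's standard library. The service below is invented for illustration, but the shape is the essence of the model: a server that knows nothing about its clients, and a client that makes a request and waits for the reply.

```python
# Minimal request-reply sketch of the client-server model over HTTP.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server replies to each request; it needs no knowledge of
        # who its clients are or how many there are.
        body = b"service reply for " + self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), ServiceHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client makes a request and waits until it receives a reply.
port = server.server_address[1]
reply = urlopen(f"http://127.0.0.1:{port}/catalogue").read()
server.shutdown()
```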

The diagram below shows an example of a system that is based on the client-server model. This
is a multi-user, web-based system to provide a film and photograph library. In this system,
several servers manage and display the different types of media. Video frames need to be
transmitted quickly and in synchrony but at relatively low resolution. They may be
compressed in a store, so the video server may handle video compression and
decompression into different formats. Still pictures, however, must be maintained at a high
resolution, so it is appropriate to maintain them on a separate server.

The catalogue must be able to deal with a variety of queries and provide links into the web
information system that includes data about the film and video clip, and an e-commerce
system that supports the sale of film and video clips. The client program is simply an
integrated user interface, constructed using a web browser, to these services.


The most important advantage of the client-server model is that it is a distributed
architecture. Effective use can be made of networked systems with many distributed
processors. It is easy to add a new server and integrate it with the rest of the system or to
upgrade servers transparently without affecting other parts of the system.

However, changes to existing clients and servers may be required to gain the full benefits of
integrating a new server. There may be no shared data model across servers and sub-
systems may organise their data in different ways. This means that specific data models may
be established on each server to allow its performance to be optimised. Of course, if an
XML-based representation of data is used, it may be relatively simple to convert from one
schema to another. However, XML is an inefficient way to represent data, so performance
problems can arise if this is used.

Clients 1 to 4 access the catalogue server, video server, picture server and web server
over the Internet. These servers manage the library catalogue, film clip files, digitised
photographs, and film and photograph information respectively.

4.2.3. Layered model

The layered model of an architecture (sometimes called an abstract machine model)
organises a system into layers, each of which provides a set of services. Each layer can be
thought of as an abstract machine whose machine language is defined by the services
provided by the layer. This 'language' is used to implement the next level of abstract
machine. For example, a common way to implement a language is to define an ideal
'language machine' and compile the language into code for this machine. A further
translation step then converts this abstract machine code to real machine code.
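A minimal sketch of the abstract machine idea, with invented layer names modelled on the APSE structure discussed below: each layer offers services implemented only in terms of the layer directly beneath it.

```python
# Sketch of a layered (abstract machine) architecture. Each layer's
# 'machine language' is the set of services of the layer below it.
# All names and return values are illustrative.

class OperatingSystemLayer:
    def read_block(self, block_id):
        return f"raw-bytes-{block_id}"

class DatabaseLayer:
    def __init__(self, os_layer):
        self._os = os_layer
    def fetch_record(self, key):
        # Implemented purely in terms of the OS layer's services.
        return {"key": key, "data": self._os.read_block(key)}

class ObjectManagementLayer:
    def __init__(self, db_layer):
        self._db = db_layer
    def load_object(self, key):
        record = self._db.fetch_record(key)
        return ("object", record["data"])

class ConfigurationManagementLayer:
    def __init__(self, om_layer):
        self._om = om_layer
    def checkout(self, key):
        return self._om.load_object(key)

cm = ConfigurationManagementLayer(
    ObjectManagementLayer(DatabaseLayer(OperatingSystemLayer())))
result = cm.checkout(42)
```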

An example of a layered model is the OSI reference model of network protocols. Another
influential example was proposed by Buxton (Buxton, 1980), who suggested a three-layer
model for an Ada Programming Support Environment (APSE).

The diagram below reflects the APSE structure and shows how a configuration management
system might be integrated using this abstract machine approach.


Configuration Management system layer

Object management system layer

Database system layer

Operating System layer

4.3. Explain traceability in software systems and describe the processes
Traceability in the process of software engineering is the ability to trace work items across
the development lifecycle. It is used to keep track of the development lifecycle and to show
what has happened. Achieving regulatory compliance is a common purpose for traceability in
software engineering.

Traceability works by linking two or more work items in application development. This link
indicates a dependency between the items. Requirements and test cases are often traced.
Requirements are traced forward through other development artifacts, including test cases,
test runs, and issues. Requirements are traced backward to the source of the requirement,
such as a stakeholder or a regulatory compliance mandate.

The purpose of requirements traceability is to verify that requirements are met. It also
accelerates development. That’s because it’s easier to get visibility over your requirements.
Traceability is also an important process for analysis. If a requirement changes, then you can
use traceability to determine the impact of change.

Traceability in software testing is the ability to trace tests forward and backward through the
development lifecycle.

Test cases are traced forward to test runs. And test runs are traced forward to issues that
need to be fixed. Test cases and test runs can also be traced backward to requirements.
Traceability in software testing is often done using a traceability matrix.
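As a rough illustration (all identifiers are invented), a traceability matrix can be as simple as a mapping from requirements to the test cases that verify them; backward traces and coverage gaps then fall out directly.

```python
# Toy traceability matrix: requirements traced forward to test cases,
# plus a reverse index for backward tracing. Identifiers are made up.

forward = {
    "REQ-1": ["TC-1", "TC-2"],   # requirement -> verifying test cases
    "REQ-2": ["TC-3"],
    "REQ-3": [],                 # no coverage yet
}

# Backward trace: which requirement does each test case verify?
backward = {tc: req for req, tcs in forward.items() for tc in tcs}

# Impact/coverage analysis: untested requirements stand out immediately.
untested = [req for req, tcs in forward.items() if not tcs]
```

If REQ-2 changes, the matrix immediately shows that TC-3 must be re-examined; that is the impact analysis described above in miniature.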

4.4. Modular decomposition

After an overall system organisation has been chosen, you need to decide on the approach
to be used in decomposing sub-systems into modules. There is no rigid
distinction between system organisation and modular decomposition. However, the
components in modules are usually smaller than sub-systems, which allows alternative
decomposition styles to be used.

There is no clear distinction between sub-systems and modules, but it can be useful to think
of them as follows:


1. A sub-system is a system in its own right whose operation does not depend on the
services provided by other sub-systems. Sub-systems are composed of modules and
have defined interfaces, which are used for communication with other sub-systems.
2. A module is normally a system component that provides one or more services to
other modules. It makes use of services provided by other modules. It is not normally
considered to be an independent system. Modules are usually composed from a
number of other simpler system components.

There are two main strategies that you can use when decomposing a sub-system into
modules:

1. Object-oriented decomposition where you decompose a system into a set of
communicating objects.
2. Function-oriented pipelining where you decompose a system into functional
modules that accept input data and transform it into output data.

In the object-oriented approach, modules are objects with private state and defined
operations on that state. In the pipelining model, modules are functional transformations. In
both cases, modules may be implemented as sequential components or as processes.
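The two styles can be contrasted on a toy problem (computing an order total; all names are invented). The pipeline version is a chain of functional transformations on the data; the object-oriented version is an object with private state and operations on that state.

```python
# 1. Function-oriented pipelining: each stage transforms input data
#    into output data for the next stage.
def parse(lines):
    return [tuple(line.split(",")) for line in lines]

def price(items):
    return [int(qty) * int(unit) for _, qty, unit in items]

def total(amounts):
    return sum(amounts)

pipeline_result = total(price(parse(["widget,2,10", "gadget,1,5"])))

# 2. Object-oriented decomposition: an object with private state and
#    defined operations on that state.
class Order:
    def __init__(self):
        self._lines = []          # private state

    def add_line(self, qty, unit_price):
        self._lines.append((qty, unit_price))

    def total(self):
        return sum(q * p for q, p in self._lines)

order = Order()
order.add_line(2, 10)
order.add_line(1, 5)
oo_result = order.total()
```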

You should avoid making premature commitments to concurrency in a system. The
advantage of avoiding a concurrent system design is that sequential programs are easier to
design, implement, verify and test than parallel systems. Time dependencies between
processes are hard to formalise, control and verify. It is best to decompose systems into
modules, then decide during implementation whether these need to execute in sequence or
in parallel.

4.5. Procedural design using structured methods

The foundations of component level design for conventional software components were
formed in the early 1960s and were solidified with the work of Edsger Dijkstra and his
colleagues. In the late 1960s, Dijkstra and others proposed the use of a set of constrained
logical constructs from which any program could be formed. The constructs emphasized
“maintenance of functional domain”. That is, each construct had a predictable logical
structure, was entered at the top and exited at the bottom, enabling a reader to follow
procedural flow more easily.

The constructs are sequence, condition and repetition. Sequence implements processing
steps that are essential in the specification of any algorithm. Condition provides the facility for
selected processing based on some logical occurrence, and repetition allows for looping.
These three constructs are fundamental to structured programming, an important
component-level design technique.

The structured constructs were proposed to limit the procedural design of software to a small
number of predictable operations. Complexity metrics indicate that the use of the structured
constructs reduces program complexity and thereby enhances readability, testability and
maintainability.
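The three constructs are easily seen in a few lines of code. This toy function (invented for illustration) uses all three, is entered at the top, and exits at the bottom.

```python
def sum_of_evens(numbers):
    total = 0                 # sequence: steps executed one after another
    for n in numbers:         # repetition: looping over the input
        if n % 2 == 0:        # condition: selected processing
            total += n
    return total              # single exit at the bottom
```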

4.6. Object oriented design process


Following are the stages of an object oriented systems design:


1. Understand and define the context and the modes of use of the system.
2. Design the system architecture
3. Identify the principal objects of the system.
4. Develop design model.
5. Specify the object interfaces.

4.6.1. Understand and define the context and the modes of use of the system

The first step of the software design process is to develop an understanding of the
relationships between the software that is being designed and its external environment.

There are two basic system models available:


1. The system context is a static model that describes the other systems in that
environment.
2. The model of the system use is a dynamic model that describes how the system
actually interacts with the environment.

4.6.2. Design the system architecture

4.6.3. Identify the principal objects of the system

There are several methods for object identification:


a) Use a grammatical analysis of a natural language description of a system.
b) Use tangible entities in the application domain.
c) Use a behavioral approach where the designer first understands the overall behavior
of the system.
d) Use a scenario based analysis where various scenarios of the system use are
identified and analyzed in turn.

4.6.4. Develop design model

There are two main types of design models in object oriented design:
1. Static models.
2. Dynamic models

The following are the three main models which are discussed under this section:

a) Sub-system models that show the logical grouping of objects into coherent sub-
systems. These are represented using a form of class diagram where each sub-
system is shown as a package. Sub-system models are static models.
b) Sequence models that show the sequence of object interactions. These are
represented using a UML sequence diagram or a collaboration diagram. Sequence
models are dynamic models.
c) State machine models that show how individual objects change their state in
response to events. These are represented in the UML using state charts diagrams.
State machine models are dynamic models.


4.6.5. Specify the object interfaces

a) Object interfaces have to be specified so that the objects and other components can
be designed in parallel
b) Designers should avoid designing the interface representation but should hide this in
the object itself
c) Objects may have several interfaces which are viewpoints on the methods provided
d) The UML uses class diagrams for interface specification but Java may also be used
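Although the text mentions UML and Java, the idea of specifying an interface separately from its representation can be sketched in any language. Here is a Python version with invented names: the interface declares the operations, and the implementation hides its representation.

```python
# Sketch of object interface specification: the interface names the
# operations offered; the implementation hides how data is stored.
from abc import ABC, abstractmethod

class WeatherStation(ABC):
    """Interface: what operations are offered, not how they work."""
    @abstractmethod
    def report_weather(self): ...

class GroundStation(WeatherStation):
    def __init__(self):
        self._readings = [12.0, 14.5]   # representation is hidden

    def report_weather(self):
        # Clients depend only on the interface, so this representation
        # can change without affecting them.
        return sum(self._readings) / len(self._readings)

station = GroundStation()
avg = station.report_weather()
```

Because clients are written against `WeatherStation`, the object and its clients can be designed in parallel, as point (a) above suggests.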


5. Managing Software Projects

5.1. Need for the proper management of software projects

The failure of many large software projects in the 1960s and early 1970s was the first indication
of the difficulties of software management. Software was delivered late, was unreliable, cost
several times the original estimates and often exhibited poor performance characteristics
(Brooks, 1975). These projects did not fail because managers or programmers were
incompetent. On the contrary, these large, challenging projects attracted people of above
average ability. The fault lay in the management approach that was used. Management
techniques derived from other engineering disciplines were applied and these were
ineffective for software development.

The need for management is an important distinction between professional software
development and amateur programming. We need software project management because
professional software engineering is always subject to budget and schedule constraints.
These are set by the organization developing the software. The software project manager's
job is to ensure that the software project meets these constraints and delivers software which
contributes to the business goals.

Software managers are responsible for planning and scheduling project development. They
supervise the work to ensure that it is carried out to the required standards. They monitor
progress to check that the development is on time and within budget. Good management
cannot guarantee project success. However, bad management usually results in project
failure. The software is delivered late, costs more than originally estimated and fails to meet
its requirements.

Software managers do the same kind of job as other engineering project managers.
However, software engineering is distinct from other types of engineering in a number of
ways which can make software management particularly difficult. Some of the differences
are:

1. The product is intangible - The manager of a shipbuilding project or of an
engineering project can see the product being developed. If a schedule slips the
effect on the product is visible. Parts of the structure are obviously unfinished.
Software is intangible. It cannot be seen or touched. Software project managers
cannot see progress. They rely on others to produce the documentation to review
progress.

2. There are no standard software processes - We do not have a clear understanding
of the relationships between the software process and product types. In engineering
disciplines with a long history, the process is tried and tested. The engineering
process for particular types of system, such as a bridge, is well understood. Our
understanding of the software process has developed significantly in the past few
years. However, we still cannot predict with certainty when a particular software
process is likely to cause development problems.

3. Large software projects are often 'one-off' projects - Large software projects are usually
different from previous projects. Managers, therefore, do not have a large body of
previous experience which can be used to reduce uncertainty in plans. Consequently,
it is more difficult to anticipate problems. Furthermore, rapid technological changes in


computers and communications outdate previous experience. Lessons learned from
that experience may not be transferable to new projects.

Because of these problems, it is not surprising that some software projects are late, over-
budget and behind schedule. Software systems are often new and technically innovative.

Engineering projects (such as new transport systems) which are innovative often also have
schedule problems. Given the difficulties involved, it is perhaps remarkable that so many
software projects are delivered on time and to budget.

5.2. Describe the role of repositories


A software repository, or “repo” for short, is a storage location for software packages. Here, a
table of contents is also stored, along with metadata. A software repository is typically
managed by source control or repository managers.

5.3. Management activities

It is impossible to write a standard job description for a software manager. The job varies
tremendously depending on the organization and on the software product being developed.
However, most managers take responsibility at some stage for some or all of the following
activities:

• Proposal writing
• Project planning and scheduling
• Project costing
• Project monitoring and reviews
• Personnel selection and evaluation
• Report writing and presentations

The first stage in a software project may involve writing a proposal to carry out that project.
The proposal describes the objectives of the project and how it will be carried out. It usually
includes cost and schedule estimates. It may justify why the project contract should be
awarded to a particular organization or team.

Project planning is concerned with identifying the activities, milestones and deliverables
produced by a project. A plan must then be drawn up to guide the development towards the
project goals.

Cost estimation is a related activity that is concerned with estimating the resources required
to accomplish the project plan.

Project monitoring is a continuing project activity. The manager must keep track of the
progress of the project and compare actual and planned progress and cost. Although most
organizations have formal mechanisms for monitoring, a skilled manager can often form a
clear picture of what is going on by informal discussion with project staff.

During a project, it is normal to have a number of formal project management reviews.
They are concerned with reviewing overall progress and technical development of the project
and considering the project’s status against the aims of the organization commissioning the
software.


Project managers usually have to select people to work on their project. Ideally, skilled staff
with appropriate experience will be available to work on the project. However, in most cases,
managers have to settle for a less than ideal project team. The reasons for this are:

• The project budget may not cover the use of highly paid staff. Less
experienced, less well-paid staff may have to be used.
• Staff with the appropriate experience may not be available either within an
organization or externally. It may be impossible to recruit new staff to the project.
Within the organization, the best people may already be allocated to other projects.
• The organization may wish to develop the skills of its employees. Inexperienced
staff may be assigned to a project to learn and to gain experience.
The software manager has to work within these constraints when selecting project staff.
However, problems are likely unless at least one project member has some experience of
the type of system being developed. Without this experience, many simple mistakes are
likely to be made.
The project manager is usually responsible for reporting on the project to both client and
contractor organizations. Project managers must write concise, coherent documents which
abstract critical information from detailed project reports. They must be able to present this
information during progress reviews. Consequently, the ability to communicate effectively
both orally and in writing is an essential skill for a project manager.

5.3.1. Project planning

Effective management of a software project depends on thoroughly planning the progress of
the project. The project manager must anticipate problems which might arise and prepare
tentative solutions to those problems. A plan, drawn up at the start of a project, should be
used as the driver for the project. This initial plan should be the best possible plan given the
available information. It evolves as the project progresses and better information becomes
available.

As well as a project plan, managers may also have to draw up other types of plan. These are
briefly described in the table below.

Quality plan: Describes the quality procedures and standards that will be used in a project.
Validation plan: Describes the approach, resources and schedule used for system validation.
Configuration management plan: Describes the configuration management procedures and
structures to be used.
Maintenance plan: Predicts the maintenance requirements of the system, maintenance cost
and effort required.
Staff development plan: Describes how the skills and experience of the project team
members will be developed.

Project Plan

The project plan sets out the resources available to the project, the work breakdown and a
schedule for carrying out the work. Most plans should include the following sections:


1. Introduction -This briefly describes the objectives of the project and sets out the
constraints (e.g. budget, time, etc.) which affect the project management.
2. Project organization- This describes the way in which the development team is
organized, the people involved and their roles in the team.
3. Risk analysis- This describes possible project risks, the likelihood of these risks
arising and the risk reduction strategies which are proposed.
4. Hardware and software resource requirements-This describes the hardware and
the support software required to carry out the development. If hardware has to be
bought, estimates of the prices and the delivery schedule should be included.
5. Work breakdown -This describes the breakdown of the project into activities and
identifies the milestones and deliverables associated with each activity.

6. Project schedule - This describes the dependencies between activities, the
estimated time required to reach each milestone and the allocation of people to
activities.
7. Monitoring and reporting mechanisms -This describes the management reports
which should be produced, when these should be produced and the project
monitoring mechanisms used.

The project plan should be regularly revised during the project. Some parts, such as the
project schedule, will change frequently; other parts will be more stable. A document
organization which allows for the straightforward replacement of sections should be used.

Milestones and Deliverables


For example, in the requirements process, the activities might be: feasibility study,
requirements analysis, prototype development, design study and requirements specification.
The corresponding milestones produced at the end of these activities are: feasibility report,
requirements definition, evaluation report, architectural design and requirements
specification.
5.3.2. Estimating costs

Software cost estimation can be defined as a management activity which involves predicting
the resources required for a software development process. Software cost estimation
involves answering the following questions:

1. How much effort is required to complete each activity?
2. How much calendar time is needed to complete each activity?
3. What is the total cost of each activity?

Project cost estimation and project scheduling are normally carried out together. The costs of
development are primarily the cost of the effort involved, so the effort computation is used in
both the cost and the schedule estimate. However, you may have to do some cost estimation
before detailed schedules are drawn up. These initial estimates may be used to establish a
budget for the project or to set a price for the software for the customer.

There are three parameters involved in computing the total cost of software development
project as follows:


1. Hardware and software costs including maintenance
2. Travel and training costs
3. Effort costs (the costs of paying software engineers)

For most projects, the dominant cost is the effort cost. Computers that are powerful enough
for software development are relatively cheap. Although extensive travel costs may be
needed when a project is developed at different sites, the travel costs are usually a small
fraction of the effort costs. Furthermore, using electronic communications systems such as e-
mail, shared websites and video conferencing can significantly reduce the travel required.
Electronic conferencing also means that travelling time is reduced and time can be used
more productively in software development.

Effort costs are not just the salaries of the software engineers who are involved in the project.
Organizations compute effort costs in terms of overhead costs, where they take the total cost
of running the organization and divide this by the number of productive staff. Therefore, the
following costs are all part of the total effort cost:

1. Costs of providing, heating and lighting office space
2. Costs of support staff such as accountants, administrators, system managers,
cleaners and technicians.
3. Cost of networking and communications
4. Cost of central facilities such as a library or recreational facilities
5. Costs of Social security and employee benefits such as health insurance
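A back-of-the-envelope sketch of such an overhead-based computation (all figures are invented for the example): the rate charged to a project per effort day is salary plus overheads, divided by the productive working days in a year.

```python
# Illustrative effort-cost calculation. The daily rate charged to a
# project includes organisational overhead, not just salary.
# All figures below are invented.

salary_cost = 50_000           # annual salary of one engineer
overhead_rate = 1.0            # office space, support staff, networking,
                               # central facilities, benefits (100% here)
productive_days = 220          # working days per year

day_rate = salary_cost * (1 + overhead_rate) / productive_days

effort_days = 110              # estimated effort for the project
effort_cost = day_rate * effort_days
```

With a 100% overhead rate, the cost of an effort day is roughly twice the engineer's daily salary, which is why effort cost dominates most project budgets.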

5.3.3. Project scheduling

Project scheduling involves separating the total work involved in a project into separate
activities and judging the time required to complete these activities. Usually, some of these
activities are carried out in parallel. Project schedulers must coordinate these parallel
activities and organize the work so that the workforce is used optimally. Deliverables are usually milestones but
milestones need not be deliverables. Milestones may be internal project results that are used
by the project manager to check project progress but which are not delivered to the
customer.
To establish milestones, the software process must be broken down into basic activities with
associated outputs.

Bar charts and activity networks

Bar charts and activity networks are graphical notations which are used to illustrate the
project schedule. Bar charts show who is responsible for each activity and when the activity
is scheduled to begin and end. Activity networks show the dependencies between the
different activities making up a project.

The project scheduling process takes the software requirements as its input and produces
activity charts and bar charts as its output. It involves the following steps: identify
activities, identify activity dependencies, estimate resources for activities, allocate
people to activities, and create project charts.


Consider the set of activities given. This table shows activities, their duration, and activity
interdependencies. Task T3 is dependent on Task T1. This means that T1 must be
completed before T3 starts. For example, T1 might be the preparation of a component
design and T3, the implementation of that design. Before implementation starts, the design
should be complete.
Given dependency and estimated duration of activities, an activity network which shows
activity sequences may be generated. It shows which activities can be carried out in parallel
and which must be executed in sequence because of a dependency on an earlier activity.
Activities are represented as rectangles. Milestones and project deliverables are shown with
rounded corners. Dates in this diagram show the start date of the activity and are written in
British style, where the day precedes the month. You should read the network from left to right
and from top to bottom.
In the project management tool used to produce this chart, all activities must end in
milestones. An activity may start when its preceding milestone (which may depend on
several activities) has been reached. Therefore, in the third column of the table given below,
the corresponding milestone is shown (e.g. M5); the milestone is reached when the tasks
listed with it finish.

Task    Duration (days)    Dependencies


T1 8
T2 15
T3 15 T1(M1)
T4 10
T5 10 T2,T4(M2)
T6 5 T1,T2(M3)
T7 20 T1(M1)
T8 25 T4(M5)
T9 15 T3,T6(M4)
T10 15 T5,T7(M7)
T11 7 T9(M6)
T12 10 T11(M8)

In the activity network for these tasks, the project starts on 4/7/99. For example, T1
(8 days) leads to milestone M1 on 14/7/99; T9 starts from M4 on 4/8/99 and reaches M6 on
25/8/99; and the project finishes on 19/9/99 when T12 completes.

Before progress can be made from one milestone to another, all paths leading to it must be
completed. For example, task T9, shown in the activity network, cannot be started until tasks
T3 and T6 are finished. The arrival at milestone M4 shows that these tasks have been
completed.
The minimum time required to finish the project can be estimated by considering the longest
path in the activity graph (the critical path). In this case, it is 11 weeks of elapsed time or 55
working days. In the activity network, the critical path is shown as a sequence of emboldened
boxes. The overall schedule of the project depends on the critical path. Any slippage in the
completion of any critical activity causes project delays.
Delays in activities which do not lie on the critical path, however, need not cause an overall
schedule slippage. So long as the delays do not extend these activities so much that the total
time exceeds the critical path, the project schedule will not be affected. For example, if T8 is
delayed, it may not affect the final completion date of the project as it does not lie on the
critical path. The project bar chart shows the extent of the possible delay as a shaded bar.
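The critical path calculation can be reproduced directly from the task table above: the earliest finish of a task is its duration plus the latest earliest-finish of its dependencies, and the minimum project duration is the maximum over all tasks. This sketch confirms the 55-working-day figure.

```python
# Longest path through the activity network = minimum project duration.
# Durations (days) and dependencies are taken from the task table above.
from functools import lru_cache

tasks = {
    "T1": (8, []),           "T2": (15, []),           "T3": (15, ["T1"]),
    "T4": (10, []),          "T5": (10, ["T2", "T4"]), "T6": (5, ["T1", "T2"]),
    "T7": (20, ["T1"]),      "T8": (25, ["T4"]),       "T9": (15, ["T3", "T6"]),
    "T10": (15, ["T5", "T7"]), "T11": (7, ["T9"]),     "T12": (10, ["T11"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # A task can finish no earlier than its duration after the latest
    # finish of the tasks it depends on.
    duration, deps = tasks[task]
    return duration + max((earliest_finish(d) for d in deps), default=0)

project_duration = max(earliest_finish(t) for t in tasks)
```

Tracing back from T12 through T11, T9, T3 and T1 gives 10 + 7 + 15 + 15 + 8 = 55 days, the critical path; delaying any task on it delays the whole project, while T8 (finishing on day 35) has slack.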

The project bar chart plots tasks T1 to T12 and milestones M1 to M8 against a calendar
running from 4/7 to 19/9, showing when each activity is scheduled to start and finish.

Some of the activities in the above chart are followed by a shaded bar whose length is
computed by the scheduling tool. This shows that there is some flexibility in the completion
date of these activities. If an activity does not complete on time, the critical path will not be
affected until the end of the period marked by the shaded bar. Activities which lie on the
critical path have no margin of error and they can be identified because they have no
associated shaded bar.


5.4. Risk management

An important task of a project manager is to anticipate risks which might affect the project
schedule or the quality of the software being developed, and to take action to avoid these
risks. The results of the risk analysis should be documented in the project plan along with an
analysis of the consequences of a risk occurring. Identifying risks and drawing up plans to
minimize their effect on the project is called risk management (Hall, 1998; Ould, 1999).

Simplistically, you can think of a risk as the probability that some adverse circumstance will
actually occur. Risks may threaten the project, the software that is being developed or the
organization. These categories of risk can be defined as follows:
1. Project risks are risks which affect the project schedule or resources.
2. Product risks are risks which affect the quality or performance of the software being
developed.
3. Business risks are risks which affect the organization developing or procuring the
software.
Of course, this is not an exclusive classification. If an experienced programmer leaves a
project this can be a project risk because the delivery of the system will be delayed. It can
also be a product risk because a replacement may not be as experienced and so may make
programming errors. Finally, it can be a business risk because the programmer’s experience
is not available for bidding for future business.

Risk management is particularly important for software projects because of the inherent
uncertainties which most projects face. These stem from loosely defined requirements,
difficulties in estimating the time and resources required for software development,
dependence on individual skills and requirements changes due to changes in customer
needs.

The project manager should anticipate risks, understand the impact of these risks on the
project, the product and the business and take steps to avoid these risks. Contingency plans
may be drawn up so that, if the risks do occur, immediate recovery action is possible.
The process of risk management is illustrated in the diagram below. It involves several
stages:

1. Risk identification-Possible project, product and business risks are identified.


2. Risk analysis-The likelihood and consequences of these risks are assessed.
3. Risk planning-Plans to address the risk either by avoiding it or minimizing its effects
on the project are drawn up.
4. Risk monitoring-The risk is constantly assessed and plans for risk mitigation are
revised as more information about the risk becomes available.

[Diagram: the risk management process. Risk identification, risk analysis, risk planning and
risk monitoring follow in sequence, producing in turn a list of potential risks, a prioritised
risk list, risk avoidance and contingency plans, and a risk assessment.]

The risk management process, like all other project planning, is an iterative process which
continues throughout the project. Once an initial set of plans is drawn up, the situation is
monitored. As more information about the risks becomes available, they have to be re-analyzed
and new priorities established. The risk avoidance and contingency plans may be
modified as new risk information emerges.

The results of the risk management process should be documented in a risk management
plan. This should include a discussion of the risks faced by the project, an analysis of these
risks and the plans which are required to manage these risks. When appropriate, it may also
include some results of the risk management, i.e. specific contingency plans to be activated if
the risk occurs.

5.4.1. Risk identification

Risk identification is the first stage of risk management. It is concerned with discovering
possible risks to the project. In principle, these should not be assessed or prioritized at this
stage although, in practice, risks with very minor consequences or a very low probability
are not usually considered.

Risk identification may be carried out as a team process using a brainstorming approach or
may simply be based on a manager’s experience. To help the process, a checklist of
different types of risk may be used. These types include:

1. Technology risks- Risks which derive from the software or hardware technologies
which are being used as part of the system being developed.
2. People risks- Risks which are associated with the people in the development team.
3. Organizational risks-Risks which derive from the organizational environment where
the software is being developed.
4. Tools risks- Risks which derive from the CASE tools and other support software
used to develop the system.
5. Requirements risks- Risks which derive from changes to the customer requirements
and the process of managing the requirements change.
6. Estimation risks -Risks which derive from the management estimates of the system
characteristics and the resources required to build the system.

5.4.2. Risk analysis

During the risk analysis process, each identified risk is considered in turn and a judgment
made about the probability and the seriousness of the risk. There is no easy way to do this;
it relies on the judgment and experience of the project manager. These should not generally be
precise numeric assessments but should be based around a number of bands:

1. The probability of the risk might be assessed as very low (<10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (>75%).
2. The effects of the risk might be assessed as catastrophic, serious, tolerable or
insignificant.

The results of this analysis process should then be tabulated, with the table ordered
according to the seriousness of the risks.
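The tabulation step described above can be sketched in code. The following is a minimal sketch, assuming hypothetical risks, that ranks risks by effect band first and probability band second; the band names match those given above.

```python
# Band names from the text, in increasing order of probability / severity.
PROBABILITY_BANDS = ["very low", "low", "moderate", "high", "very high"]
EFFECT_BANDS = ["insignificant", "tolerable", "serious", "catastrophic"]

def order_by_seriousness(risks):
    """Sort risks so the most serious (by effect, then probability) come first."""
    return sorted(
        risks,
        key=lambda r: (EFFECT_BANDS.index(r["effect"]),
                       PROBABILITY_BANDS.index(r["probability"])),
        reverse=True,
    )

# Hypothetical example risks for illustration only.
risks = [
    {"risk": "Key staff ill at critical times", "probability": "moderate", "effect": "serious"},
    {"risk": "Organisation is restructured", "probability": "high", "effect": "serious"},
    {"risk": "Size of software underestimated", "probability": "high", "effect": "tolerable"},
    {"risk": "CASE tools underperform", "probability": "very high", "effect": "insignificant"},
]

for r in order_by_seriousness(risks):
    print(f'{r["risk"]:35} {r["probability"]:10} {r["effect"]}')
```

In practice the ordering rule is a judgment call; this sketch simply treats effect as more significant than probability.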

5.4.3. Risk Planning

The risk planning process considers each of the key risks which have been identified and
identifies strategies to manage the risk. Again, there is no simple process which can be
followed to establish risk management plans; it relies on the judgment and experience of the
project manager.
These strategies fall into three categories:

1. Avoidance strategies - Following these strategies means that the probability that the
risk will arise will be reduced.
2. Minimization strategies - Following these strategies means that the impact of the risk
will be reduced.
3. Contingency plans - Following these strategies means that, if the worst happens, you
are prepared for it and have a strategy in place to deal with it.

5.4.4. Risk monitoring

Risk monitoring involves regularly assessing each of the identified risks to decide whether
that risk is becoming more or less probable and whether the effects of the risk have
changed. Of course, this cannot usually be observed directly, so you have to look at other
factors which give you clues about the risk probability and effects. These factors are
obviously dependent on the types of risk. Risk monitoring should be a continuous process
and, at every management progress review, each of the key risks should be considered and
discussed separately at the meeting.

5.5. Managing people


The people working in a software organisation are its greatest assets. They represent
intellectual capital, and it is up to software managers to ensure that the organisation gets the
best possible return on its investment in people. In successful companies and economies,
this is achieved when people are respected by the organisation. They should have a level of
responsibility and reward that reflects their skills.

Effective management is therefore about managing the people in an organization. Project


managers have to solve technical and nontechnical problems by using the people in their
team in the most effective way possible. They have to motivate people, plan and organise
their work and ensure that the work is being done properly. Poor management of people is
one of the most significant contributors to project failure.

Unfortunately, poor leadership is all too common in the software industry. Managers fail to
take into account the limitations of individuals and impose unrealistic deadlines on project
teams. They equate management with meetings, yet fail to allow people in these meetings to
contribute to the project. They may accept new requirements without proper analysis of what
this means for the project team. They sometimes see their role as one of exploiting their staff
rather than working with them to identify how their work can contribute to both organisational
and personal goals.

There are four critical factors in people management:

1. Consistency - People in a project team should all be treated in a comparable way.
While no one expects all rewards to be identical, people should not feel that their
contribution to the organisation is undervalued.
2. Respect - Different people have different skills and managers should respect these
differences. All members of the team should be given an opportunity to make a
contribution. In some cases, of course, you will find that people simply don’t fit into a
team and cannot continue, but it is important not to jump to conclusions about this.
3. Inclusion - People contribute effectively when they feel that others listen to and take
account of their proposals. It is important to develop a working environment where all
views, even those of the most junior staff, are considered.


4. Honesty - As a manager, you should always be honest about what is going well and what is
going badly in the team. You should also be honest about your own level of technical
knowledge and be willing to defer to staff with more knowledge when necessary. If you are
less than honest, you will eventually be found out and will lose the respect of the group.


6. Verification and validation

During and after the implementation process, the program being developed must be checked
to ensure that it meets specification and delivers the functionality expected by the people
paying for the software. Verification and Validation (V & V) is the name given to these
checking and analysis processes. Verification and Validation starts with requirements
reviews and continues through design reviews and code inspections to product testing.

Verification and validation are not the same thing, although they are often confused. Boehm
concisely expressed the difference between them as:

• Validation: Are we building the right product?

• Verification: Are we building the product right?

These definitions tell us that the role of verification involves checking that the software conforms
to its specification. You should check that it meets its specified functional and non-functional
requirements. Validation however is a more general process. The aim of validation is to
ensure that the software system meets the customer’s expectations. It goes beyond checking
that the system conforms to its specification to showing that the software does what the
customer expects it to do.

The ultimate goal of the verification and validation process is to establish confidence that the
software system is fit for purpose. This means that the system must be good enough for its
intended use. The level of required confidence depends on the system’s purpose, the
expectations of the system users and the current marketing environment for the system.

Therefore the software should satisfy the following:

1. Software functions: the level of confidence required is dependent on how critical the
software is to an organization.

2. User expectations: It is a sad reflection on the software industry that many users
have low expectations of their software and are not surprised when it fails during use.
However, user tolerance of software failures has been decreasing in recent years.

3. Marketing environment: when a system is marketed, the sellers of the system must
take into account competing programs, the price that customers are willing to pay for
a system and the required schedule for delivering that system. Where a company has
few competitors, it may decide to release a program before it has been fully tested
and debugged because it wants to be first into the market.

Within the V & V process, there are two complementary approaches to system checking and
analysis:

1. Software Inspections or Peer Reviews - This is the process of analyzing and checking
system representations such as the requirements document, design diagrams and the
program source code. Inspections may be applied at all stages of the process and may be
supplemented by some automatic analysis of the source text of a system or associated
documents. Software inspection and automated analysis are static V & V techniques, as they
do not require the system to be executed.

2. Software Testing - Software testing involves executing an implementation of the software
with test data and examining the outputs of the software and its operational behaviour to check
that it is performing as required. Testing is a dynamic technique because it works with an
executable version of the system.

[Diagram: software inspections apply throughout the process, to the requirements
specification, high-level design, formal specification, detailed design and program;
program testing applies to the prototype and the program.]

The diagram above shows that software inspections and testing play complementary roles in
the software process. The arrows indicate the stages in the process where the techniques
may be used. Therefore you can use software inspections at all stages of the software
process. Starting with the requirements, any readable representations of the software can be
inspected. Requirements and design reviews are the main techniques used for error
detection in the specification and design.

You can only test a system when a prototype or an executable version of the program is
available. An advantage of incremental development is that a testable version of the program
is available at a fairly early stage in the development process. Functionality can be tested as
it is added to the system so you don’t have to have a complete implementation before testing
begins.

Inspection techniques include program inspections, automated source code analysis and
formal verification. However, static techniques can only check the correspondence between
a program and its specification (verification); they cannot demonstrate that the software is
operationally useful. Nor can you use static techniques to check emergent properties of the
software such as its performance and reliability.

Although software inspections are now widely used, program testing will always be the main
software verification and validation technique. Testing involves exercising the program using
data like the real data processed by the program. You discover program defects or
inadequacies by examining the outputs of the program and looking for anomalies. There are
two distinct types of testing that may be used at different stages in the software process:

1. Validation Testing – is intended to show that the software is what the customer
wants, that is it meets its requirements. As part of validation testing, you may use
statistical testing to test the program’s performance and reliability and to check how it
works under operational conditions.

2. Defect Testing – is intended to reveal defects in the system rather than to simulate
its operational use. The goal of defect testing is to find inconsistencies between a
program and its specification.


Of course there is no hard and fast boundary between these approaches to testing. During
validation testing, you will find defects in the system; during defect testing some of the tests
will show that the program meets its requirements.

The process of V & V and debugging are normally interleaved. As you discover faults in the
program that you are testing, you have to change the program to correct these faults.
However, testing (or, more generally verification and validation) and debugging have different
goals:

1. Verification and validation processes are intended to establish the existence of


defects in a software system.
2. Debugging is a process that locates and corrects these defects.

The following indicates the debugging process:

[Diagram: the debugging process. Test results and the specification are used to locate the
error; the repair is then designed, the error repaired and the program re-tested with new
test cases.]

Issues related to the debugging process:

✓ Skilled debuggers look for patterns in the test output where the defect is
exhibited and use knowledge of the type of defect, the output pattern, the
programming language and the programming process to locate the defect.

✓ Locating a fault is not always an easy task, since the fault need not necessarily be
close to the point where the program failed. Manual tracing of the program,
simulating execution, may be required.

✓ Interactive debugging tools are generally part of a suite of language support tools
that are integrated with the compilation system.

✓ Users can often control execution by ‘stepping’ their way through the program
statement by statement. After each statement has been executed, the values of the
variables can be examined and potential errors discovered.

After a defect in the program has been discovered, it must be corrected and the
system revalidated. This may involve re-inspecting the program or repeating
previous test runs. This process is termed regression testing.

The V & V process should be carefully planned, starting early in the development
process. V & V planning should strike a balance between static and dynamic
approaches to verification and validation.

Test planning is concerned with setting out standards for the testing process
rather than describing product tests. Test plans are not just management documents.


The major components of a test plan are given below:


1. The testing process - A description of the major phases of the testing process.
2. Requirements traceability - Users are most interested in the system meeting its
requirements and testing should be planned so that all requirements are
individually tested.
3. Tested items - The products of the software process which are to be tested
should be specified.
4. Testing schedule - An overall testing schedule and resource allocation for this
schedule. This, obviously, is linked to the more general project development
schedule.
5. Test recording procedures - It is not enough simply to run tests. The results of
the tests must be systematically recorded. It must be possible to audit the testing
process to check that it has been carried out correctly.
6. Hardware and software requirements - This section should set out software
tools required and estimated hardware utilization.
7. Constraints - Constraints affecting the testing process such as staff shortages
should be anticipated in this section.

6.1. Explain how to use project estimating and project planning tools.
The challenge with estimating is that it involves uncertainty. Factors that contribute to this
uncertainty include:

• Experience with Similar Projects: The lesser the experience with similar projects, the
greater the uncertainty.
• Planning Horizon: The longer the planning horizon, the greater the uncertainty.
• Project Duration: The longer the project, the greater the uncertainty.
• People: The quantity of people and their skill will be a huge factor in estimating their
costs.
Several tools and techniques help with cost estimation:
Expert Judgement - uses the knowledge and experience of experts to estimate the cost of
the project. This technique uses unique factors specific to the project.
Analogous Estimating - uses historical data from similar projects as a basis for the cost
estimate. The estimate can be adjusted for known differences between the projects. This
type of estimate is less accurate than other methods.
Parametric Estimating – makes use of statistical modeling to develop a cost estimate.
Bottom-Up Estimating - uses the estimates of individual work packages which are then
summarized to determine an overall cost estimate for the project. This type of estimate is
more accurate than the others.
Three-Point Estimates - used together with the Program Evaluation and Review
Technique (PERT).
Reserve Analysis - used to account for cost uncertainty.
Cost of Quality - includes the money that is spent during the project to avoid failures and
money spent during and after the project due to failures.
Project Management Estimating Software - includes the cost estimating software
applications, spreadsheets, simulation applications and statistical software tools. This is
useful for looking at cost estimation alternatives.
Vendor Bid Analysis - used to estimate what the project should cost by comparing the bids of
multiple vendors.
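The three-point estimate mentioned above combines an optimistic, a most likely and a pessimistic figure. The following is a minimal sketch of the standard PERT beta-distribution weighting; the task durations used are hypothetical.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Standard PERT three-point estimate: weighted mean and standard deviation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical task: 20 days optimistic, 30 most likely, 52 pessimistic.
expected, sd = pert_estimate(20, 30, 52)
print(f"Expected effort: {expected:.1f} days (std dev {sd:.1f})")
```

The 4:1 weighting of the most likely value reflects PERT's assumption that effort follows a beta distribution.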


6.2. Black-box testing

Functional or black-box testing is an approach to testing where the tests are derived from
the program or component specification. The system is a black box whose behavior can only
be determined by studying its inputs and outputs.

The diagram given below illustrates the black box testing strategy.

[Diagram: black-box testing. Input test data Ie, including inputs causing anomalous
behaviour, is fed to the system; output test results Oe reveal the presence of defects.]


The tester presents inputs to the component or the system and examines the corresponding
outputs. If the outputs are not as predicted then the test has successfully detected a problem
with the software.

The main problem with this approach is that the tester must select suitable test data. This
problem can be reduced by using equivalence partitioning.
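As a brief illustration of black-box test selection with equivalence partitioning, the sketch below uses a hypothetical grading component whose valid input partition is marks from 0 to 100; one representative value is chosen from each partition, plus the boundary values.

```python
# Hypothetical component under test: accepts an exam mark (valid range 0-100).
def grade(mark):
    if not 0 <= mark <= 100:
        raise ValueError("mark out of range")
    return "pass" if mark >= 50 else "fail"

# One representative input per equivalence partition, plus boundaries.
partition_cases = {
    "below valid range": -1,    # invalid partition
    "lower boundary": 0,
    "failing mark": 30,
    "pass boundary": 50,
    "upper boundary": 100,
    "above valid range": 101,   # invalid partition
}

for name, mark in partition_cases.items():
    try:
        print(f"{name}: grade({mark}) -> {grade(mark)}")
    except ValueError as err:
        print(f"{name}: grade({mark}) -> rejected ({err})")
```

Any other value from the same partition should, by assumption, exercise the same behaviour, so one representative per partition is enough.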

6.3. Explain the total cost of system ownership


There are costs associated with both design and documentation. There are also testing
costs. New programs should be tested before being deployed. Any remaining errors can slow
down a business or lead to costly mistakes.

Organizations need to engage in activities to support the system like;

• providing end user support


• collecting comments for system improvements
• auditing systems to ensure compliance
• providing regular backup of critical data
• planning for disaster recovery

The cost and complexity of these tasks can push managers to think of technology as a
cost. These costs are collectively known as the total cost of ownership (TCO) of a system.
Understanding this concept is critical when making technology investment decisions.

6.4. Analyze and explain the software life cycle cost modelling
In modern software development practice, it is a necessity to know the cost and time
required for the software development before building software projects. One of the efficient
cost estimation models applied to many software projects is called “Constructive Cost Model
(COCOMO)”.

This is a procedural software cost estimation model that was proposed by Barry W. Boehm in
1981. This cost estimation model is extensively used in predicting the effort, development
time, average team size and effort required to develop a software project.

Software projects under the COCOMO model are categorized into three types:

1. Organic

Suits a small software team since it has a generally stable development environment. Here,
the problem is well understood and has been solved in the past.

2. Semi-detached

Requires more experience and better guidance and creativity.

3. Embedded

This covers projects with tight operating constraints and requirements. The developer
requires considerable experience and has to be creative to develop complex models.

There are three different COCOMO models available;


1. Basic COCOMO model

Used for rough calculations, which limits the accuracy of the estimate. It is based on lines
of source code together with constant values determined by the software project type, rather
than other factors which have a major influence on the software development process.

2. Intermediate COCOMO model

This is an extension of the Basic COCOMO model which takes a set of cost drivers into
account in order to improve the accuracy of the cost estimate.

3. Complete/detailed COCOMO model

The model incorporates all characteristics of both the Basic and Intermediate COCOMO
strategies, applied to each phase of the software engineering process.
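As an illustration of the Basic COCOMO model described above, the sketch below uses Boehm's published 1981 coefficients for the three project categories; the 32 KLOC project size is a hypothetical example.

```python
# Boehm's 1981 Basic COCOMO coefficients (a, b, c, d) per project category.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b      # Effort = a * (KLOC)^b
    time = c * effort ** d      # Time = c * (Effort)^d
    return effort, time

effort, time = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} person-months, schedule: {time:.1f} months")
```

Average team size then follows as effort divided by development time.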

6.5. White box testing

[Diagram: white-box testing. Test data is derived from the component code; the tests
produce test outputs.]

Structural testing, as shown in the diagram above, is an approach also termed white-box
testing, glass-box testing or clear-box testing. This approach is used for relatively small
programs such as subroutines or operations associated with objects.

Structural testing or white box testing focuses on three main areas of a program.
• Sequence or statement coverage – this will focus on each and every line of
code in the program.
• Selection or condition coverage – this emphasizes each condition or selection
in a program, so two main classes are available: values within the range and
values out of the range.
E.g.: if the condition is x <= 4, then values within the range include 3 and 4,
and values out of the range include 5, 7, etc.
• Loop coverage – focuses on each iteration and has three main equivalence
classes: skip, one pass and more than one pass.
E.g.: for a loop with the condition while (y >= 3), the value y = 2 will skip the
loop, y = 3 will cause the loop to execute only once, and any value greater
than 3 will cause the loop to execute several times.
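The loop-coverage classes above can be illustrated with a small sketch. The count_passes function is hypothetical; it mirrors the text's while (y >= 3) loop, with a body that decrements y so the loop terminates, and returns how many passes were made.

```python
def count_passes(y):
    """Hypothetical component mirroring the text's while (y >= 3) loop."""
    passes = 0
    while y >= 3:
        passes += 1
        y -= 1   # loop body that eventually makes the condition false
    return passes

# One test value per loop-coverage equivalence class:
print(count_passes(2))   # skip: condition false on entry, zero passes
print(count_passes(3))   # exactly one pass
print(count_passes(6))   # more than one pass
```

The same three classes (skip, one pass, many passes) apply whatever the loop body, as long as the loop terminates.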

As indicated above testing each section of the program can be further defined as path
testing.


Path testing is a technique used as part of structural testing. However, path testing
does not generally address loop coverage, since covering every loop path is rather
cumbersome.


6.6. Levels of testing

The following indicates the testing levels used by software engineers for testing conventional
and object oriented software:

Conventional software:
1. Unit testing
2. Integration testing
3. Validation testing
4. System testing

Object-oriented software:
1. Unit testing
2. Object/class testing
3. Cluster testing
4. Validation testing
5. System testing

6.6.1. Unit testing

Unit testing focuses on the verification of the smallest unit of software design, the
software component or module. Using the component-level design description as a guide,
important control paths are tested to uncover errors within the boundary of the module.
This testing focuses on the internal processing logic and the internal data structures,
and can be carried out on multiple components in parallel.

The commonest approach is the white box testing but black box testing also could be applied
if the component is too large or less critical.

Unit testing in an object-oriented context changes significantly. The units are the operations
defined in the OO classes, which must be tested for each subclass because they may vary
through redefinition. Alternatively, OO class testing can be regarded as unit testing, but it is
much broader than testing modules: it covers both the functionality (operations) and the
behaviour (all states of objects) of the class. As the scope is greater, the preferred approach is
black-box testing.
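A minimal sketch of unit testing a single module in isolation, using Python's standard unittest framework; the is_leap_year component under test is hypothetical.

```python
import unittest

# Hypothetical component module under test.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    """Unit tests exercising paths within the boundary of the module."""

    def test_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Run the test case without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
unittest.TextTestRunner(verbosity=2).run(suite)
```

Each test exercises one control path of the module, so a failure points directly at the defective path.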

6.6.2. Integration Testing

Integration testing is a systematic technique for constructing the software architecture while at
the same time conducting tests to uncover errors associated with interfacing. The objective is
to take the unit-tested components and build the program structure that has been dictated by
the design.

Several strategies are used for system integration and test:

1. Top-down Integration - Top-down testing tests the high levels of the system before
testing its detailed components. The program is represented as a single abstract component,
with sub-components represented as stubs (dummy components). Stubs have the same
interface as the real component but limited functionality. After the top-level components are
implemented and tested, the lower-level components are implemented and tested in the same
way until the lowest-level components have been implemented. Top-down testing is generally
used with top-down development so that each system component is tested as soon as it is
coded.


[Diagram: top-down integration. The testing sequence starts with the Level 1 components,
then the Level 2 components, with Level 2 and Level 3 stubs standing in for the components
below the level under test.]

2. Bottom-up Testing - Bottom-up testing is the converse of top-down testing. It involves
testing the modules at the lower levels of the hierarchy and then working up the hierarchy
until the final module is tested. The advantages of bottom-up testing are the disadvantages
of top-down testing and vice versa. When using bottom-up testing, test drivers must be
written to exercise the lower-level components. Test drivers simulate the component's
environment and, if the components being tested are reusable, are valuable in their own
right: potential re-users can run those tests to satisfy themselves that the component
behaves as expected in their environment. Bottom-up testing is appropriate for object-oriented
systems in that individual objects may be tested using their own test drivers. If top-down
development is combined with bottom-up testing, all parts of the system must be implemented
before testing can begin (architectural faults are unlikely to be discovered until much of the
system has been tested).

[Diagram: bottom-up integration. The testing sequence starts with the Level N components,
exercised by test drivers, then the Level N–1 components with their own test drivers.]
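A driver and a stub from the strategies above can be sketched as follows. All names are hypothetical: a test driver exercises a lower-level pricing component, while a stub with the same interface but limited functionality stands in for a rate-lookup component that is not yet integrated.

```python
# Stub: same interface as the real rate lookup, but limited functionality.
def tax_rate_stub(country):
    return 0.20   # fixed rate until the real component is integrated

# Lower-level component under test; its collaborator is injected so a
# stub can replace it during integration testing.
def price_with_tax(net_price, country, rate_lookup=tax_rate_stub):
    return round(net_price * (1 + rate_lookup(country)), 2)

# Test driver: simulates the component's environment and checks behaviour.
def test_driver():
    assert price_with_tax(100.0, "UK") == 120.0
    assert price_with_tax(19.99, "UK") == 23.99
    print("price_with_tax: all driver checks passed")

test_driver()
```

In top-down integration the roles reverse: the high-level component is real and the stub stands in for price_with_tax itself.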

Integration Testing in Object Oriented Context


Since there is no hierarchical control structure in object-oriented software, traditional top-down
and bottom-up integration strategies have little meaning. Integration testing of OO
software is therefore called cluster testing, where groups of collaborating classes
(clusters) are identified by one or more of the following methods:

1. Thread based Testing


2. Use Case/ Scenario Based Testing
3. Method Message Path Testing


6.6.3. Interface testing

Many components in a system are not simple functions or objects but are composite
components that are made up of several interacting objects. Testing these composite
components then is primarily concerned with testing that the component interface behaves
according to its specification.

The diagram below illustrates the process of interface testing.

[Diagram: interface testing. Test cases are applied to the interface between the composite
components A and B.]

Interface testing is particularly important for object oriented and component based
development. Objects and components are defined by their interfaces and may be reused in
combination with other components in different systems. Interface errors in the composite
component cannot be detected by testing the individual objects or components. Errors in the
composite component may arise because of interactions between its parts.

There are different types of interfaces between program components and consequently
different types of interface errors that can occur such as:

1. Parameter interfaces
2. Shared memory interfaces
3. Procedural interfaces
4. Message passing interfaces

Interface errors are one of the most common forms of error in complex systems. These
errors fall into the following three categories:

1. Interface misuse
2. Interface misunderstanding
3. Timing errors


6.6.4. System testing

Software is only one element of a much larger system. Ultimately, when the software is
incorporated with other elements such as hardware, databases and people, a series of
system tests is conducted. These include the following:

1. Recovery Testing – forces the system to fail in a variety of ways and verifies that the
recovery is properly performed.

2. Security Testing – this verifies that the protection mechanisms built into the system
successfully prevent unauthorized entry, that is, that basic security requirements
are met.

3. Stress Testing – executes the system in a manner that demands resources in an


abnormal quantity, frequency or volume. This can assess how the system copes with
abnormal situations.

4. Performance Test – this is designed to test the run time performance of the system

6.6.5. Alpha and beta testing

Validation testing begins at the end of integration testing, when the full software package has
been produced. Its aim is to ensure that the software satisfies the functional and performance
requirements of the user, that is, to check compliance with the validation criteria given in the
specification. There are two different approaches to achieve this:

1. Alpha Testing – software is tested in the developer’s site by the end users. Mainly
used for testing bespoke software. This is achieved in a developer controlled
environment with user participation.

2. Beta Testing – software is tested at the end user sites, therefore software is run in
environment that is not controlled by the developer. All errors experienced are
recorded by the user and reported to the developer at regular intervals. Thereafter
these are corrected by the developer to plan out a final release of the software to the
entire client base. This approach is mainly used for generic software.

6.6.6. Regression testing

Each time a new component is added, the control structure changes and new
interactions arise. The aim of regression testing is to repeat a subset of previous tests to
ensure that the changes have not introduced any new errors.
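A regression suite can be as simple as re-running a fixed set of earlier test cases after every change. The following Python sketch illustrates the idea; the component and its test cases are hypothetical:

```python
def apply_discount(price, percent):
    """Hypothetical component under maintenance: price reduced by percent."""
    return round(price * (1 - percent / 100), 2)

# Regression suite: a subset of earlier test cases, re-run after every change.
REGRESSION_CASES = [
    # (price, percent, expected)
    (100.0, 10, 90.0),
    (50.0, 0, 50.0),
    (19.98, 50, 9.99),  # rounding behaviour must not silently change
]

def run_regression_suite():
    """Return the cases whose behaviour has regressed (empty list = all pass)."""
    return [(price, percent, expected, apply_discount(price, percent))
            for price, percent, expected in REGRESSION_CASES
            if apply_discount(price, percent) != expected]
```

If a later change to apply_discount alters any recorded behaviour, the suite reports exactly which earlier cases now fail.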

6.7. Design of test cases

Test case design is the part of system and component testing where you design the test cases
(inputs and predicted outputs) that test the system. The goal of test case design is to
create a set of test cases that are effective in discovering program defects and showing that
the system meets its requirements.

To design a test case, you select a feature of the system or component that you are testing.
You then select a set of inputs that execute that feature, document the expected outputs or
output ranges and where possible design an automated check that tests that the actual and
expected outputs are the same.

There are various approaches that you can take to test case design such as:

1. Requirements based testing – where test cases are designed to test the system
requirements. This is mostly used at the system testing stage as system
requirements are usually implemented by several components. For each
requirement, you identify test cases that can demonstrate that the system meets that
requirement.

2. Partition testing – where you identify input and output partitions and design tests so
that the system executes inputs from all partitions and generates output in all
partitions. Partitions are groups of data that have common characteristics such as all
negative numbers, all names less than 30 characters, all events arising from
choosing items on a menu, and so on.

3. Structural testing – where you use knowledge of the program’s structure to design
tests that exercise all parts of the program. Essentially, when testing a program, you
should try to execute each statement at least once. Structural testing helps identify
test cases that can make this possible.

In general, when designing test cases, you should start with the highest-level tests
derived from the requirements, and then progressively add more detailed tests using
partition and structural testing.
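As a concrete illustration, partition testing of a hypothetical score-classification component might look like this in Python (the function, partitions and representative values are all invented for illustration):

```python
def classify_score(score):
    """Hypothetical component: classify an exam mark in the range 0-100."""
    if not 0 <= score <= 100:
        return "invalid"
    return "pass" if score >= 50 else "fail"

# Input partitions, each with representative values including the boundaries.
PARTITION_CASES = {
    "below valid range": [(-10, "invalid"), (-1, "invalid")],
    "failing marks":     [(0, "fail"), (25, "fail"), (49, "fail")],
    "passing marks":     [(50, "pass"), (75, "pass"), (100, "pass")],
    "above valid range": [(101, "invalid"), (200, "invalid")],
}

def run_partition_tests():
    """Execute inputs from every partition; return the failing (partition, value) pairs."""
    return [(partition, value)
            for partition, cases in PARTITION_CASES.items()
            for value, expected in cases
            if classify_score(value) != expected]
```

Each partition contributes at least one test case, so the suite exercises all the distinct behaviours of the component rather than many near-identical inputs.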


7. Software Maintenance

It is impossible to produce a system of any size that does not need to be changed. Once
software is put into use, new requirements emerge and existing requirements change as the
business using that software changes.
Parts of the software may have to be modified to correct errors found in operation, or to
improve its performance or other non-functional characteristics.
All of this means that, after delivery, software systems always evolve in response to demands
for change. There are a number of different strategies for software change:
• Software maintenance
• Architectural transformation
• Software re-engineering
Software change
Software change is inevitable due to the following factors:
• New requirements emerge when the software is used
• The business environment changes
• Errors must be repaired
• New equipment must be accommodated
• The performance or reliability may have to be improved
A key problem for organisations is implementing and managing change to their legacy
systems.
Software change strategies
• Software maintenance - Changes are made in response to changed requirements
but the fundamental software structure is stable
• Architectural transformation - The architecture of the system is modified generally
from a centralised architecture to a distributed architecture
• Software re-engineering - No new functionality is added to the system but it is
restructured and reorganised to facilitate future changes
These strategies may be applied separately or together.

Different types of maintenance


There are three types of software maintenance:
1. Maintenance to repair software faults – Coding errors are usually relatively cheap
to correct, design errors are more expensive as they involve rewriting several
program components. Requirements errors are the most expensive to repair
because of the extensive system redesign that may be necessary.
2. Maintenance to adapt the software to a different operating environment – This
type of maintenance is required when some aspect of the system’s environment
such as the hardware, the platform operating system or other support software
changes. The application system must be modified to adapt it to cope with these
environmental changes.
3. Maintenance to add to or modify the system’s functionality – This type of
maintenance is necessary when the system requirements change in response to
organizational or business change. The scale of the changes required to the
software is often much greater than for the other types of maintenance.


7.1. Re-engineering

Software re-engineering is concerned with re-implementing legacy systems to make them
more maintainable. Re-engineering may involve re-documenting the system, organizing and
restructuring the system, translating the system to a more modern programming language,
and modifying and updating the structure and values of the system's data. The functionality
of the software is not changed and, normally, the system architecture also remains the same.
Re-engineering a software system has two key advantages over more radical approaches to
system evolution:
1. Reduced risk - There is a high risk in re-developing business-critical software. Errors
may be made in the system specification, or there may be development problems.
Delays in introducing the new software may mean that business is lost and extra
costs are incurred.
2. Reduced cost - The cost of re-engineering is significantly less than the cost of
developing new software.
The critical distinction between re-engineering and new software development is the starting
point for the development. Rather than starting with a written specification, the old system
acts as a specification for the new system. This distinction is illustrated in the diagram below.
Forward engineering starts with a system specification and involves the design and
implementation of a new system. Re-engineering starts with an existing system and the
development process for the replacement is based on understanding and transforming the
original system.

Forward engineering:  System specification → Design and implementation → New system
Re-engineering:       Existing software system → Understanding and transformation → Re-engineered system

Figure: Software re-engineering

The diagram in the following page illustrates the re-engineering process. The input to the
process is a legacy program and the output is a structured, modularised version of the same
program. During program re-engineering, the data for the system may also be re-engineered.
The activities in this re-engineering process are:
1. Source code translation - The program is converted from an old programming
language to a more modern version of the same language or to a different language.
2. Reverse engineering - The program is analysed and information extracted from it.
This helps to document its organisation and functionality.
3. Program structure improvement - The control structure of the program is analysed
and modified to make it easier to read and understand.
4. Program modularisation - Related parts of the program are grouped together and,
where appropriate, redundancy is removed. In some cases, this stage may involve


architectural transformation, where a centralised system intended for a single
computer is modified to run on a distributed platform.
5. Data re-engineering - The data processed by the program is changed to reflect
program changes.

Figure: The re-engineering process. A legacy (original) program passes through source
code translation, reverse engineering (which produces program documentation), program
structure improvement, program modularisation and data re-engineering, yielding a
structured, modularised program and re-engineered data.
System re-engineering may not necessarily require all of the steps given in the above
diagram. Source code translation may not be needed if the programming language used to
develop the system is still supported by the compiler supplier. If the re-engineering relies
completely on automated tools, then recovering documentation through reverse engineering
may be unnecessary. Data re-engineering is only required if the data structures in the
program change during system re-engineering. However, software re-engineering always
involves some program re-structuring.
To make the re-engineered system interoperate with the new software, you may have to
develop adaptor components. These hide the original interfaces of the software system and
present new, better-structured interfaces that can be used by other components. This
process of legacy system wrapping is an important technique for developing large-scale
reusable components.
The costs of re-engineering obviously depend on the extent of the work that is carried out.
Apart from the extent of the re-engineering, the principal factors that affect re-engineering
costs are:
1. The quality of the software to be re-engineered - The lower the quality of the software
and its associated documentation (if any), the higher the re-engineering costs.
2. The tool support available for re-engineering - It is not normally cost-effective to re-
engineer a software system unless you can use CASE tools to automate most of the
program changes.
3. The extent of data conversion required - If re-engineering requires large volumes of
data to be converted, the process cost increases significantly.
4. The availability of expert staff - If the staff responsible for maintaining the system
cannot be involved in the re-engineering process, the costs will increase because
system re-engineers will have to spend a great deal of time understanding the
system.


The main disadvantage of software re-engineering is that there are practical limits to the
extent that a system can be improved by re-engineering. It isn't possible, for example, to
convert a system written using a functional approach to an object-oriented system. Major
architectural changes or radical re-organisation of the system data management cannot be
carried out automatically, so they incur high additional costs. Although re-engineering can
improve maintainability, the re-engineered system will probably not be as maintainable as a
new system developed using modern software engineering methods.

7.2. Configuration Management (CM)

7.2.1. Importance of CM

Configuration management (CM) is the development and application of standards and
procedures for managing an evolving system product. CM procedures define how to record
and process proposed system changes, how to relate these to system components, and the
methods used to identify different versions of the system. CM tools are used to store
versions of system components, build systems from these components and track the releases
of system versions to customers.
Configuration management is sometimes considered to be part of software quality
management, with the same manager sharing quality management and configuration
management responsibilities. The software is initially released by the development team for
quality assurance. The QA team checks that the system is of acceptable quality. It then
becomes a controlled system, which means that changes to the system have to be agreed
on and recorded before they are implemented. Controlled systems are sometimes called
baselines because they are a starting point for further, controlled evolution.
There are many reasons why systems exist in different configurations. Configurations may
be produced for different computers, for different operating systems, incorporating client-
specific functions and so on. Configuration managers are responsible for keeping track of the
differences between software versions, for ensuring that new versions are derived in a
controlled way and for releasing new versions to the right customers at the right time.
In a traditional software development process based on the 'waterfall' model, software is
delivered to the configuration management team after development is complete and the
individual software components have been tested. This team then takes over the
responsibility for building the complete system and for managing system testing. Faults that
are discovered during system testing are passed back to the development team for repair.
After the faults have been repaired, the development team delivers a new version of the
repaired component to the quality assurance team. If the quality is acceptable, this then may
become the new base-line for further system development.
This model, where the CM team controls the system integration and testing processes, has
influenced the development of configuration management standards. Most CM standards
have an embedded assumption that a waterfall model will be used for system development.
This means that the standards have to be adapted to modern software development
approaches based on incremental specification and development.
To cater for incremental development, some organisations have developed a modified
approach to configuration management that supports concurrent development and system
testing. This approach relies on a very frequent (at least daily) build of the whole system from
its components:


1. The development organisation sets a delivery time (say 2 p.m.) for system
components. If developers have new versions of the components that they are
writing, they must deliver them by that time. Components may be incomplete but
should provide some basic functionality that can be tested.
2. A new version of the system is built from these components by compiling and linking
them to form a complete system.
3. This system is then delivered to the testing team, which carries out a set of
predefined system tests. At the same time, the developers are still working on their
components, adding to the functionality and repairing faults discovered in previous
tests.
4. Faults that are discovered during system testing are documented and returned to the
system developers. They repair these faults in a subsequent version of the
component.

The advantage of using daily builds of software is that the chance of finding problems
stemming from component interactions early in the process is increased. Furthermore, daily
building encourages thorough unit testing of components.

Psychologically, developers are put under pressure not to 'break the build', that is, deliver
versions of components that cause the whole system to fail. They are therefore reluctant to
deliver new component versions that have not been properly tested. Less system testing
time is spent discovering and coping with software faults that should have been found during
unit testing.

The successful use of daily builds requires a very stringent change management process to
keep track of the problems that have been discovered and repaired. It also leads to a very
large number of system and component versions that must be managed. Good configuration
management is therefore essential for this approach to be successful.

Configuration management in agile and rapid development approaches cannot be based
around rigid procedures and paperwork. While these may be necessary for large, complex
projects, they slow down the development process. Careful record-keeping is essential for
large, complex systems developed across several sites, but it is unnecessary for small
projects. In these projects, all team members work together in the same room, and the
overhead involved in record keeping slows down the development process. However, this
does not mean that CM should be completely abandoned when rapid development is
required. Rather, agile processes use simple CM tools, such as version management and
system-building tools, that enforce some control. All team members have to learn to use
these tools and conform to the disciplines that they impose.

7.2.2. Configuration items

During configuration management planning, you decide exactly which items are to be
controlled. Documents or groups of related documents under configuration control are formal
documents or configuration items. Project plans, specifications, designs, programs and test
data suites are normally maintained as configuration items. However, all documents which
may be necessary for future system maintenance should be controlled.

The configuration database is used to record all relevant information relating to
configurations. Its principal functions are to assist with assessing the impact of system
changes and to provide management information about the CM process. As well as defining
the configuration database schema, procedures for recording and retrieving project
information must be defined as part of the CM planning process. The configuration database
should be integrated with the version management system that is used to store and manage
the formal project documents. This approach, supported by some integrated CASE tools,
makes it possible to link changes directly with the documents and components affected by
the change.
A configuration database must be able to provide answers to a variety of queries about
system configurations.

Typical queries might be:


• Which customers have taken delivery of a particular version of the system?
• What hardware and operating system configuration is required to run a system
version?
• How many versions of a system have been created and what were their creation
dates?
• What versions of a system might be affected if a particular component is changed?
• How many change requests are outstanding on a particular version?
• How many reported faults exist in a particular version?
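Such a database can be sketched with Python's standard-library sqlite3 module. The schema, customers and version numbers below are invented purely for illustration, but they show how two of the typical queries above would be answered:

```python
import sqlite3

# A toy configuration database (schema invented for illustration) that can
# answer typical CM queries about versions, deliveries and reported faults.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE version  (version_id TEXT PRIMARY KEY, created TEXT);
    CREATE TABLE delivery (customer TEXT, version_id TEXT);
    CREATE TABLE fault    (fault_id INTEGER PRIMARY KEY, version_id TEXT);
""")
con.executemany("INSERT INTO version VALUES (?, ?)",
                [("1.0", "2023-01-10"), ("1.1", "2023-03-02")])
con.executemany("INSERT INTO delivery VALUES (?, ?)",
                [("Acme", "1.0"), ("Globex", "1.1"), ("Initech", "1.1")])
con.executemany("INSERT INTO fault (version_id) VALUES (?)",
                [("1.0",), ("1.0",), ("1.1",)])

# Which customers have taken delivery of version 1.1?
customers = [row[0] for row in con.execute(
    "SELECT customer FROM delivery WHERE version_id = ? ORDER BY customer",
    ("1.1",))]

# How many reported faults exist in version 1.0?
(faults,) = con.execute(
    "SELECT COUNT(*) FROM fault WHERE version_id = ?", ("1.0",)).fetchone()
```

Real CM tools maintain far richer schemas (baselines, change requests, build records), but the principle of relating versions, deliveries and faults is the same.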

7.2.3. Versioning

Version and release management is the process of identifying and keeping track of
different versions and releases of a system. A system version is an instance of a system that
differs, in some way, from other instances.
• New versions of the system may have different functionality or performance, or may
repair system faults. Some versions may be functionally equivalent but designed for
different hardware or software configurations.
• If there are only small differences between versions, one of these is sometimes called a
variant of the other.

A system release is a version that is distributed to customers.

• Each system release should either include new functionality or be intended for a
different hardware platform.
• There are always many more versions of a system than releases, as versions are
created within an organization for internal development or testing and are never
released to customers.
Version management is now usually supported by CASE tools, as discussed in the next
section. These tools manage the storage of each system version and control access to
system components.

Version Identification
Procedures for version management should define an unambiguous way of identifying each
component version. There are three techniques which may be used for component
identification:
• Version numbering - The component is given an explicit and unique version number.
This is the most commonly used identification scheme.
• Attribute-based identification - Each component has a name and an associated set
of attributes which differ for each version of the component. Components are therefore
identified by the combination of name and attribute set.
• Change-oriented identification - Each component is named as in attribute-based
identification but is also associated with one or more change requests. The system
version is identified by associating the name with the changes implemented in the
component.
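Version numbering is easy to get subtly wrong: compared as plain strings, "1.10" sorts before "1.9". A minimal Python sketch of unambiguous numeric comparison of dotted version numbers:

```python
def version_key(version):
    """Split a dotted version number into a tuple of integers for comparison."""
    return tuple(int(part) for part in version.split("."))

versions = ["1.9", "1.10", "1.2", "2.0"]
ordered = sorted(versions, key=version_key)
# A plain string sort would wrongly place "1.10" before "1.2" and "1.9";
# the numeric key orders them 1.2, 1.9, 1.10, 2.0 as intended.
```

Real schemes are often richer (major.minor.patch with pre-release tags), but the principle of comparing components numerically rather than lexically is the same.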


7.2.4. Release Management

A system release is a version of the system that is distributed to customers. System release
managers are responsible for deciding when the system can be released to customers.
Release management is the process of creating the release and the distribution media, and
documenting the release to ensure that it may be re-created exactly as distributed, if this is
necessary.

A system release is not just the executable code of the system. The release may also include:

• Configuration files - defining how the release should be configured for particular
installations.
• Data files - which are needed for successful system operation.
• An installation program - used to help install the system on the target hardware.
• Electronic and paper documentation - describing the system.
• Packaging and associated publicity - which have been designed for the release.


8. Software Quality Assurance (SQA)

Software Quality Assurance, also known as Software Quality Management, is concerned
with ensuring that the required level of quality is achieved in a software product. It
involves defining appropriate quality standards and procedures and ensuring that these are
followed.

SQA should aim to develop a “Quality Culture” in which quality is seen as everyone’s
responsibility, rather than merely defining standards and procedures.

8.1. Definition of Quality

Quality simply means that a product should meet its specification. This is problematic
for software systems for the following reasons:

• There is tension between customer quality requirements (efficiency, reliability, etc.)
and developer quality requirements (maintainability, reusability, etc.).
• Some quality requirements are difficult to specify in an unambiguous way.
• Software specifications are usually incomplete and often inconsistent.

Traditional view - Quality is about perfection/bug-free code, and is generally associated with
testing at the end of development. However, testing cannot introduce quality into a product; it
can only reduce the number of defects in the product.

Modern view (ISO 9000) - Good quality is not perfection but fitness for purpose: build the right
product in the right way. Do not over-engineer, since the product becomes too expensive, and
do not under-engineer, since it may not be fit for purpose.

8.2. Quality Management Activities

The following are the activities that are carried out in quality management:

1. Quality Assurance – Establish organizational procedures and standards for quality.
2. Quality Planning – Select applicable procedures and standards for a particular
project and modify these as required.
3. Quality Control – Ensure that procedures and standards are followed by the
software development team.

8.3. Quality Assurance and Standards


Quality Assurance (QA) activities define a framework for achieving software quality. The QA
process involves defining or selecting standards that should be applied to the software
development process or software product. These standards are the key to effective quality
management.

These standards may be embedded in procedures or processes which are applied during
development. Processes may be supported by tools that embed knowledge of the quality
standards.


Product standards                  Process standards

Design review form                 Design review conduct
Requirement document structure     Requirement specification
Procedure header format            Version release process
Java programming style             Project plan approval process
Project plan format                Change control process
Change request form                Test recording process

8.3.1. Process and Product Standards

There are two types of standards that may be established as a part of quality assurance:
• Product standards: these are standards that apply to the software product being
developed. They include standards such as document standards, coding standards
and user interface standards. Product quality attributes include reusability, usability,
portability, maintainability, etc.
• Process standards: these are standards that define the processes which should be
followed during software development. They may include definitions of the specification,
design and validation processes, and a description of the documents which must be
generated in the course of these processes.

8.3.2. Documentation Standards

The document standards in a software project are particularly important as documents are
the only tangible way of representing the software and the software process.
There are three types of documentation standards:
1. Documentation process standards - these define the process which should be
followed for document production.

2. Document standards - these govern the structure and presentation of documents.

3. Document interchange standards - these ensure that all electronic copies of
documents are compatible.

8.4. Quality Planning

Quality planning should begin at an early stage in the software process. A quality plan should
set out the desired product qualities and select those organizational standards that are
appropriate to a particular product and development process. New standards may have to be
defined if the project uses new methods and tools. Humphrey, in his classic book on software
management, suggests an outline structure for a quality plan. This includes product
introduction, product plans, process descriptions, quality goals, and risks and risk
management.


Software Quality Attributes


# Safety # Understandability # Portability
# Security # Testability # Usability
# Reliability # Adaptability # Reusability
# Resilience # Modularity # Efficiency
# Robustness # Complexity # Learnability

8.5. Quality Control

Quality control involves overseeing the software development process to ensure that quality
assurance procedures and standards are being followed. The deliverables from the software
process are checked against the defined project standards in the quality control process. The
quality control process has its own set of procedures and reports that must be used during
software development. These procedures should be straightforward and easily understood by
the engineers developing the software. There are two approaches to quality control, as
described below:
1. Quality Reviews - Reviews are the most widely used method of validating the quality
of a process or product. They involve a group of people examining part or all of a software
process, system or its associated documentation to discover potential problems. The
conclusions of the review are formally recorded and passed to the author or whoever
is responsible for correcting the discovered problems. Documents may be
“signed off” at a review, which signifies that progress to the next development
stage has been approved by management. Review types include:
a. Configuration reviews
b. Inspections for defect removal (code reviews, design reviews)
c. Reviews for progress assessment (progress meetings)
d. Quality reviews (product and standards)
e. Test reviews
2. Automated consistency checking using CASE tools – These tools ensure that all
omissions (ignored parts) are included and all commissions (incorrectly represented
parts) are corrected. Integrated tools can use this feature to ensure that consistency
is maintained when transforming a specification into program code. They use a
central data dictionary against which all abstractions of the system can be validated,
enabling any omissions or commissions to be detected. This is a form of verification
that improves overall quality.
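A toy sketch of the idea in Python. The data dictionary and design entries below are invented for illustration; real CASE tools perform this cross-check over full system models rather than flat name/type tables:

```python
# Data dictionary: the agreed names and types for system abstractions.
DATA_DICTIONARY = {"customer_id": "int", "customer_name": "str", "balance": "float"}

# Names and types as actually used in a design document (note the misspelling).
DESIGN_USAGE = {"customer_id": "int", "customer_name": "str", "balence": "float"}

def check_consistency(dictionary, usage):
    """Return (omissions, commissions) of the design relative to the dictionary:
    names used but never defined, and names whose declared type disagrees."""
    omissions = sorted(set(usage) - set(dictionary))
    commissions = sorted(name for name, typ in usage.items()
                         if name in dictionary and typ != dictionary[name])
    return omissions, commissions

omissions, commissions = check_consistency(DATA_DICTIONARY, DESIGN_USAGE)
# The misspelt "balence" is flagged as an undefined name.
```

The check catches the misspelt identifier automatically, which is exactly the kind of inconsistency that is tedious to find by manual review.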


9. Software Measurements and Metrics

Software measurement is concerned with deriving a numeric value for some attribute of a
software product or software process. Unless we capture metrics about the applications we
produce and the process by which we produce them, we cannot quantify any improvements in
quality or identify the areas that require further improvement. By comparing metrics with each
other and with standards which apply across an organization, it is possible to draw conclusions
about the quality of the software or the software process.

9.1. The Measurement Process

A software measurement process may be part of a quality control process. Data collected
during this process should be maintained as an organizational resource. Once a
measurement database has been established, comparisons across projects become
possible.
Choose measurements to be made → Select components to be assessed →
Measure component characteristics → Identify anomalous measurements →
Analyse anomalous components

Figure: The software measurement process
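The "identify anomalous measurements" step can be sketched as a simple statistical outlier check in Python. The component names and lines-of-code values are invented, and the 1.5-standard-deviation threshold is an illustrative choice (looser than the conventional 2.0 because the sample here is tiny):

```python
import statistics

# Lines of code per component; "monolith" is deliberately out of line.
COMPONENT_LOC = {"parser": 210, "lexer": 190, "emitter": 205,
                 "optimiser": 220, "monolith": 900}

def anomalous_components(measurements, threshold=1.5):
    """Flag components whose measurement lies more than `threshold` sample
    standard deviations from the mean of all components."""
    values = list(measurements.values())
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return sorted(name for name, value in measurements.items()
                  if abs(value - mean) > threshold * sd)
```

Flagged components are then analysed by hand: an anomalous value does not prove a quality problem, but it tells you where to look first.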

9.1.1. Product Metrics

Product metrics are concerned with characteristics of the software itself. They fall into the
following two classes:
1. Dynamic metrics, which are collected by measurements made of a program in
execution.
2. Static metrics, which are collected by measurements made of the system
representations.
These different types of metrics are related to different quality attributes. Dynamic metrics
help to assess the efficiency and reliability of a program, whereas static metrics help to
assess the complexity, understandability and maintainability of a software system.


Some software product metrics

Software metric                Description

Length of code                 A measure of the size of a program. Generally, the larger
                               the code size of a program component, the more complex
                               and error-prone that component is likely to be.

Cyclomatic complexity          A measure of the control complexity of a program.

Length of identifiers          A measure of the average length of distinct identifiers in
                               a program. The longer the identifiers, the more likely they
                               are to be meaningful.

Depth of conditional nesting   Deeply nested if-statements are hard to understand and
                               are potentially error-prone.

Fog index                      A measure of the average length of words and sentences
                               in documents. The higher the value of the fog index, the
                               more difficult the document may be to understand.

Fan-in/Fan-out                 Fan-in is a measure of the number of functions that call
                               some other function, say X. Fan-out is the number of
                               functions called by X. A high value of fan-in means that X
                               is tightly coupled to the rest of the design and changes to
                               X will have extensive knock-on effects. A high value of
                               fan-out suggests that the overall complexity of X may be
                               high because of the complexity of the control logic needed
                               to coordinate the called components.
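Two of these static metrics can be sketched for Python source using the standard-library ast module. The sample program is invented, and the complexity figure is a common approximation (1 + number of decision points, counting branches, loops and boolean operators), not the output of a standard tool:

```python
import ast

SOURCE = '''
def grade(score):
    if score < 0 or score > 100:
        raise ValueError(score)
    if score >= 50:
        return "pass"
    return "fail"
'''

tree = ast.parse(SOURCE)

# Cyclomatic complexity approximated as 1 + the number of decision points.
decisions = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp))
                for node in ast.walk(tree))
cyclomatic = 1 + decisions

# Average length of the distinct identifiers referenced in the program.
identifiers = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
avg_identifier_length = sum(map(len, identifiers)) / len(identifiers)
```

Because both metrics are computed from the program's syntax tree rather than from a running program, they are static metrics in the sense of the classification above.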

Object Oriented Metrics

Object oriented metric         Description

Depth of inheritance tree      The number of discrete levels in the inheritance tree
                               where subclasses inherit attributes and methods from
                               superclasses.

Weighted methods per class     The number of methods included in a class, weighted by
                               the complexity of each method. A simple method may
                               have a complexity of 1, while a large and complex method
                               has a much higher value.

Number of overriding           The number of operations in a superclass which are
operations                     overridden in a subclass.
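For Python classes, two of these metrics can be sketched directly from the language's introspection facilities. The classes are invented, and the counting conventions (excluding the implicit `object` root and double-underscore names) are illustrative assumptions:

```python
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

class SavingsAccount(Account):
    def withdraw(self, amount): ...   # overrides Account.withdraw

def depth_of_inheritance(cls):
    """Inheritance levels above cls, not counting the implicit `object` root."""
    return len(cls.__mro__) - 2

def overriding_operations(cls):
    """Names defined directly in cls that redefine an operation inherited
    from some superclass (dunder names are ignored)."""
    inherited = {name for base in cls.__mro__[1:] for name in vars(base)
                 if not name.startswith("__")}
    return sorted(name for name in vars(cls)
                  if not name.startswith("__") and name in inherited)
```

With these conventions, SavingsAccount has a depth of inheritance of 1 and one overriding operation, withdraw.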


10. Computer Aided Software Engineering (CASE)

Computer-Aided Software Engineering (CASE) is the name given to software used to
support software process activities such as requirements engineering, design, program
development and testing. CASE tools therefore include design editors, data dictionaries,
compilers, debuggers, system building tools and so on.

CASE technology provides software process support by automating some process activities
and by providing information about the software that is being developed. Examples of
activities that can be automated using CASE include:

1. The development of graphical system models as part of the requirements specification or the software design.

2. Understanding a design using a data dictionary that holds information about the
entities and relations in a design.

3. The generation of user interfaces from a graphical interface description that is created
interactively by the user.

4. Program debugging through the provision of information about an executing program.

5. The automated translation of programs from an old version of a programming language such as COBOL to a more recent version.

CASE technology is now available for most routine activities in the software process. This has led to some improvements in software quality and productivity, although these have been smaller than early advocates of CASE predicted. When CASE tools were first introduced in the 1980s and 1990s, advocates suggested that integrated CASE environments would deliver order-of-magnitude improvements and generate huge savings in software process costs. In fact, the improvements that have been achieved are of the order of 40% (Huff, 1992), which is significant but far short of those predictions.

The improvements from the use of CASE are limited by two factors:
1. Software engineering is, essentially, a design activity based on creative thought.
Existing CASE systems automate routine activities but attempts to harness artificial
intelligence technology to provide support for design have not been successful.

2. In most organisations, software engineering is a team activity, and software engineers spend quite a lot of time interacting with other team members. CASE technology does not provide much support for this.

10.1. Examples of CASE tools

CASE classifications help us understand the types of CASE tools and their role in supporting
software process activities. There are several ways to classify CASE tools, each of which
gives us a different perspective on these tools. In this section, CASE tools are discussed
from three of these perspectives:


1. A functional perspective - where CASE tools are classified according to their specific
function.

2. A process perspective - where tools are classified according to the process activities
that they support.

3. An integration perspective - where CASE tools are classified according to how they
are organised into integrated units that provide support for one or more process
activities.

The table below is a classification of CASE tools according to function. It lists a number of different types of CASE tools and gives specific examples of each one. This is not a complete list of CASE tools; specialised tools, such as tools to support reuse, have not been included.

Planning tools: PERT tools, estimation tools, spreadsheets
Editing tools: Text editors, diagram editors, word processors
Change management tools: Requirements traceability tools, change control systems
Configuration management tools: Version management systems, system building tools
Prototyping tools: Very high-level languages, user interface generators
Method-support tools: Design editors, data dictionaries, code generators
Language-processing tools: Compilers, interpreters
Program analysis tools: Cross-reference generators, static analyzers, dynamic analyzers
Testing tools: Test data generators, file comparators
Debugging tools: Interactive debugging systems
Documentation tools: Page layout programs, image editors
Reengineering tools: Cross-reference systems, program restructuring systems
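As a flavour of the simplest of these, a program analysis tool, the following sketch is a crude cross-reference generator: it maps each identifier in a piece of source text to the line numbers on which it appears. The regular expression is a deliberate simplification; it does not distinguish keywords or string contents from identifiers, as a real tool would.

```python
import re
from collections import defaultdict

def cross_reference(source):
    """Map each identifier-like token to the line numbers where it occurs."""
    xref = defaultdict(list)
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Crude tokenisation: anything shaped like an identifier counts.
        for ident in re.findall(r"[A-Za-z_]\w*", line):
            xref[ident].append(lineno)
    return dict(xref)

src = "total = 0\nfor n in data:\n    total = total + n\n"
print(cross_reference(src)["total"])  # prints: [1, 3, 3]
```

Even this trivial version illustrates the value of such tools: finding every use of a name before changing it is exactly the kind of routine, mechanical task that CASE technology automates well.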

The diagram below presents an alternative classification of CASE tools. It shows the process phases supported by a number of types of CASE tools. Tools for planning and estimating, text editing, document preparation and configuration management may be used throughout the software process.


[Diagram: activity-based classification of CASE tools, mapping tool types (reengineering, testing, debugging, program analysis, language-processing, method support, prototyping, configuration management, change management, documentation, editing and planning tools) onto the process phases they support: specification, design, implementation, and verification and validation.]

The breadth of support for the software process offered by CASE technology is another possible classification dimension. Fuggetta proposes that CASE systems should be classified in three categories:

1. Tools - support individual process tasks such as checking the consistency of a design, compiling a program and comparing test results. Tools may be general-purpose, standalone tools (e.g., a word processor) or grouped into workbenches.

2. Workbenches - support process phases or activities such as specification, design, etc. They normally consist of a set of tools with some greater or lesser degree of integration.

3. Environments - support all or at least a substantial part of the software process. They normally include several integrated workbenches.

The diagram below illustrates this classification and shows some examples of these classes
of CASE support. Of course, this is an illustrative example; many types of tools and
workbenches have been left out of this diagram.


Benefits and Drawbacks of CASE tools

Benefits

- Routine process activities, such as diagram editing, consistency checking, compiling and testing, can be automated.
- CASE tools provide information about the software that is being developed, for example through data dictionaries.
- Measurable improvements in software quality and productivity have been achieved, of the order of 40% (Huff, 1992).

Drawbacks

- The improvements achieved fall far short of the order-of-magnitude savings predicted by early advocates of CASE.
- Software engineering is essentially a creative design activity, and attempts to harness artificial intelligence to support design have not been successful.
- Software engineering is a team activity, and CASE technology does not provide much support for interaction between team members.