IT B.Tech II Year II Sem SE (R18A0511) Notes
ON
SOFTWARE ENGINEERING
UNIT - II:
Software Requirements: Functional and non-functional requirements, User
requirements, System requirements, Interface specification, the software requirements
document.
Requirements engineering process: Feasibility studies, Requirements elicitation and
analysis, Requirements validation, Requirements management.
System models: Context Models, Behavioral models, Data models, Object models,
structured methods. UML Diagrams.
UNIT - III:
Design Engineering: Design process and Design quality, Design concepts, the design model.
Creating an architectural design: Software architecture, Data design, Architectural styles and
patterns, Architectural Design.
Object-Oriented Design: Objects and object classes, An Object-Oriented design process,
Design evolution.
Performing User interface design: Golden rules, User interface analysis and design,
interface analysis, interface design steps, Design evaluation.
UNIT - IV:
Testing Strategies: A strategic approach to software testing, test strategies for conventional
software, Black-Box and White-Box testing, Validation testing, System testing, the art of
Debugging.
Product metrics: Software Quality, Metrics for Analysis Model, Metrics for Design Model,
Metrics for source code, Metrics for testing, Metrics for maintenance.
UNIT - V:
Risk management: Reactive vs. Proactive Risk strategies, software risks, Risk identification,
Risk projection, Risk refinement, RMMM, RMMM Plan.
TEXT BOOKS :
1. Software Engineering A practitioner’s Approach, Roger S Pressman, 6th edition. McGraw Hill
International Edition.
2. Software Engineering, Ian Sommerville, 7th edition, Pearson education.
REFERENCE BOOKS :
1. Software Engineering, A Precise Approach, Pankaj Jalote, Wiley India, 2010.
2. Software Engineering: A Primer, Waman S Jawadekar, Tata McGraw-Hill, 2008
3. Software Engineering, Principles and Practices, Deepak Jain, Oxford University Press.
4. Software Engineering1: Abstraction and modelling, Diner Bjorner, Springer International
edition, 2006.
5. Software Engineering2: Specification of systems and languages, Diner Bjorner, Springer
International edition 2006.
6. Software Engineering Principles and Practice, Hans Van Vliet, 3rd edition, John Wiley & Sons
Ltd.
7. Software Engineering3: Domains, Requirements, and Software Design, D. Bjorner, Springer
International Edition.
8. Introduction to Software Engineering, R. J. Leach, CRC Press.
Course Outcomes:
Students will have the ability:
1. To compare and select a process model for a business system.
2. To identify and specify the requirements for the development of an application.
3. To develop and maintain efficient, reliable and cost effective software solutions.
4. To critically think and evaluate assumptions and arguments
INDEX
UNIT I: Process models
UNIT II: Software Requirements
UNIT II: System models
UNIT IV: Testing Strategies
UNIT IV: Product metrics
UNIT V: Risk management
UNIT V: Quality Management
UNIT - I
INTRODUCTION:
Engineering is the branch of science and technology concerned with the design, building, and
use of engines, machines, and structures. It is the application of science, tools and methods to
find cost-effective solutions to simple and complex problems.
SOFTWARE ENGINEERING is defined as a systematic, disciplined and quantifiable
approach for the development, operation and maintenance of software.
Characteristics of software
• Software is developed or engineered, but it is not manufactured in the classical sense.
• Software does not wear out, but it deteriorates due to change.
• Software is custom built rather than assembling existing components.
LEGACY SOFTWARE
Legacy software comprises older programs that were developed decades ago.
The quality of legacy software is often poor because it has an inextensible design, convoluted code,
poor or nonexistent documentation, and test cases and results that were never archived.
As time passes legacy systems evolve due to following reasons:
• The software must be adapted to meet the needs of a new computing environment or technology.
• The software must be enhanced to implement new business requirements.
• The software must be extended to make it interoperable with more modern systems or databases.
• The software must be re-architected to make it viable within a network environment.
SOFTWARE MYTHS
Myths are widely held but false beliefs and views which propagate misinformation and
confusion.
Three types of myth are associated with software:
- Management myth
- Customer myth
- Practitioner’s myth
MANAGEMENT MYTHS
• Myth (1): The available standards and procedures for software are enough.
• Myth (2): Each organization feels it has state-of-the-art software development tools because it has the latest computers.
• Myth (3): Adding more programmers when the work is behind schedule will help the project catch up.
• Myth (4): If we outsource the software project to a third party, we can relax and let that party build it.
CUSTOMER MYTHS
• Myth (1): A general statement of objectives is enough to begin writing programs; the details can be filled in later.
• Myth (2): Software is easy to change because software is flexible.
PRACTITIONER’S MYTH
• Myth (1): Once the program is written, the job has been done.
• Myth (2): Until the program is running, there is no way of assessing its quality.
• Myth (3): The only deliverable work product is the working program.
• Myth (4): Software engineering creates voluminous and unnecessary documentation and invariably slows down software development.
A PROCESS FRAMEWORK
• Establishes the foundation for a complete software process
• Identifies a number of framework activities applicable to all software projects
• Also include a set of umbrella activities that are applicable across the entire
software process.
(Figure: Generic process framework. Each framework activity, from activity #1 to activity #n, is populated by software engineering actions, and each action is defined by a task set of work tasks, work products, quality assurance points, and project milestones.)
A PROCESS FRAMEWORK
Used as a basis for the description of process models
Generic process activities
• Communication
• Planning
• Modeling
• Construction
• Deployment
A PROCESS FRAMEWORK
Generic view of engineering complemented by a number of umbrella activities
– Software project tracking and control
– Formal technical reviews
– Software quality assurance
– Software configuration management
– Document preparation and production
– Reusability management
Continuous model:
- Lets an organization select the specific improvements that best meet its business objectives and minimize risk. Levels are called capability levels.
- Describes a process in two dimensions.
- Each process area is assessed against specific goals and practices and is rated according to the following capability levels.
CMMI
• Six levels of CMMI
– Level 0:Incomplete
– Level 1:Performed
– Level 2:Managed
– Level 3:Defined
– Level 4:Quantitatively managed
– Level 5:Optimized
CMMI
• Incomplete: The process is ad hoc; the objectives and goals of the process areas are not known.
• Performed: The goals, objectives, work tasks, work products and other activities of the software process are carried out.
• Managed: Activities are monitored, reviewed, evaluated and controlled.
• Defined: Activities are standardized, integrated and documented.
• Quantitatively Managed: Metrics and indicators are available to measure the process and product quality.
• Optimized: Continuous process improvement based on quantitative feedback from the users, and use of innovative ideas and techniques, statistical quality control and other methods for process improvement.
PROCESS ASSESSMENT
Process assessment does not specify the quality of the software, whether the software will be delivered on time, or whether it will stand up to the user requirements. It attempts to keep a check on the current state of the software process with the intention of improving it.
PROCESS ASSESSMENT
(Figure: The software process is examined by software process assessment, which leads to software process improvement and to capability determination, which in turn motivates further assessment.)
(Figure: The waterfall model, in which communication, planning, modeling, construction and deployment are performed in strict linear sequence.)
PROBLEMS IN WATERFALL MODEL
• Real projects rarely follow the sequential flow since they are always iterative
• The model requires requirements to be explicitly spelled out in the beginning,
which is often difficult
• A working model is not available until late in the project time plan
• Linear sequential model is not suited for projects which are iterative in nature
THE INCREMENTAL MODEL
• The incremental model suits such projects
• Used when initial requirements are reasonably well defined and there is a compelling need to provide limited functionality quickly
• Functionality is expanded further in later releases
• Software is developed in increments
(Figure: The incremental model. Increment 1 through increment n each pass through communication, planning, modeling, construction and deployment, and each increment delivers additional functionality.)
Problems in RAD
• Requires a number of RAD teams
• Requires commitment from both developer and customer for rapid-fire completion of
activities
• Requires modularity
• Not suited when technical risks are high
EVOLUTIONARY PROCESS MODELS
PROTOTYPING
(Figure: The prototyping paradigm: quick design, construction of the prototype, then deployment, delivery and feedback, repeated in a cycle.)
STEPS IN PROTOTYPING
• Begins with requirements gathering
• Identify whatever requirements are known
• Outline areas where further definition is mandatory
• A quick design occurs
• The quick design leads to the construction of a prototype
• The prototype is evaluated by the customer
• Requirements are refined
• The prototype is tuned to satisfy the needs of the customer
LIMITATIONS OF PROTOTYPING
• In the rush to get it working, overall software quality and long-term maintainability are generally overlooked
• Use of an inappropriate operating system or programming language
• Use of inefficient algorithms
THE SPIRAL MODEL
An evolutionary model which combines the best features of the classical life cycle and the iterative nature of the prototyping model. It adds a new element: risk assessment. The process starts in the middle and continually revisits the basic tasks of communication, planning, modeling, construction and deployment.
THE UNIFIED PROCESS
Evolved by Rumbaugh, Booch and Jacobson, combining the best features of their object-oriented models and adopting additional features proposed by other experts; this work resulted in the Unified Modeling Language (UML). The Unified Process, developed by Rumbaugh, Booch and Jacobson, is a framework for object-oriented software engineering using UML.
3. Construction Phase
*Design model
*System components
*Test plan and procedure
*Test cases
*Manual
4. Transition Phase
*Delivered software increment
*Beta test results
*General user feedback
UNIT II
SOFTWARE REQUIREMENTS
• Encompasses both the user's view of the requirements (the external view) and the developer's view (internal characteristics)
• User requirements
--Statements in a natural language plus diagrams, describing the services the system is expected to provide and the constraints under which it must operate
• System requirements
--Describe the system's functions, services and operational constraints in detail
SOFTWARE REQUIREMENTS
• System Functional Requirements
--Statement of services the system should provide
--Describe the behavior in particular situations
--Defines the system reaction to particular inputs
• Nonfunctional Requirements
- Constraints on the services or functions offered by the system
--Include timing constraints, constraints on the development process and standards
--Apply to system as a whole
• Domain Requirements
--Requirements that come from the application domain of the system
--Reflect the characteristics and constraints of that domain
FUNCTIONAL REQUIREMENTS
• Should be both complete and consistent
• Completeness
-- All services required by the user should be defined
• Consistency
-- Requirements should not have contradictory definitions
• Difficult to achieve completeness and consistency for large system
NON-FUNCTIONALREQUIREMENTS
Types of Non-functional Requirements
1. Product Requirements
-Specify product behavior
-Include the following
• Usability
• Efficiency
• Reliability
• Portability
2. Organisational Requirements
--Derived from policies and procedures
--Include the following:
• Delivery
• Implementation
• Standard
3. External Requirements
-- Derived from factors external to the system and its development process
--Includes the following
• Interoperability
• Ethical
• Legislative
STRUCTURED LANGUAGE SPECIFICATION
• Requirements are written in a standard way
• Ensures degree of uniformity
• Provide templates to specify system requirements
• Include control constructs and graphical highlighting to partition the specification
Interface Specification
• The new system must work together with existing systems
• The interfaces between them provide this capability and must be precisely specified
Purpose of SRS
• Communication between the customer, analyst, system developers, maintainers, ..
• A firm foundation for the design phase
• Support for system testing activities
• Support for project management and control
• Controlling the evolution of the system
Process activities
1. Requirements discovery -- Interaction with stakeholders to collect their requirements, including domain requirements from documentation
2. Requirements classification and organization -- Coherent clustering of requirements from
unstructured collection of requirements
3. Requirements prioritization and negotiation -- Assigning priority to requirements
--Resolves conflicting requirements through negotiation
4. Requirements documentation -- Requirements are documented and fed into the next round of the spiral
2. Interviewing -- Puts questions to stakeholders about the system they use and the system to be developed; requirements are derived from the answers.
Two types of interview
– Closed interviews where the stakeholders answer a pre-defined set of questions.
– Open interviews discuss a range of issues with the stakeholders for better understanding
their needs.
3. Scenarios -- People find it easier to relate to real-life examples than to abstract descriptions. A scenario starts with an outline of the interaction, and during elicitation details are added to create a complete description of that interaction.
Scenario includes:
• 1. Description at the start of the scenario
• 2. Description of normal flow of the event
• 3. Description of what can go wrong and how this is handled
• 4.Information about other activities parallel to the scenario
• 5.Description of the system state when the scenario finishes
LIBSYS scenario
• Initial assumption: The user has logged on to the LIBSYS system and has located the
journal containing the copy of the article.
• Normal: The user selects the article to be copied. He or she is then prompted by the system
to either provide subscriber information for the journal or to indicate how they will pay for
the article. Alternative payment methods are by credit card or by quoting an organisational
account number.
• The user is then asked to fill in a copyright form that maintains details of the transaction and
they then submit this to the LIBSYS system.
• The copyright form is checked and, if OK, the PDF version of the article is downloaded to
the LIBSYS working area on the user’s computer and the user is informed that it is available.
The user is asked to select a printer and a copy of the article is printed
LIBSYS scenario
• What can go wrong: The user may fail to fill in the copyright form correctly. In this case,
the form should be re-presented to the user for correction. If the resubmitted form is still
incorrect then the user’s request for the article is rejected.
• The payment may be rejected by the system. The user’s request for the article is rejected.
• The article download may fail. Retry until successful or the user terminates the session.
• Other activities: Simultaneous downloads of other articles.
• System state on completion: User is logged on. The downloaded article has been deleted
from LIBSYS workspace if it has been flagged as print-only.
4. Use cases -- scenario based technique for requirement elicitation. A fundamental feature of
UML, notation for describing object-oriented system models. Identifies a type of interaction
and the actors involved. Sequence diagrams are used to add information to a Use case
REQUIREMENTS VALIDATION
Concerned with showing that the requirements define the system that the customer wants. Important because errors in requirements can lead to extensive rework costs.
Validation checks
1. Validity checks -- Verify that the system performs the functions intended by the user
2. Consistency checks -- Requirements should not conflict
3. Completeness checks -- The requirements should define all functions and constraints intended by the system user
4. Realism checks -- Ensure that the requirements can actually be implemented
5. Verifiability -- Requirements should be testable, to avoid disputes between customer and developer
VALIDATION TECHNIQUES
1.REQUIREMENTS REVIEWS
Reviewers check the following:
(a) Verifiability: Testable
(b) Comprehensibility
(c) Traceability
(d) Adaptability
2.PROTOTYPING
3. TEST-CASE GENERATION
Requirements management
Requirements are likely to change for large software systems, and as such a requirements management process is required to handle changes.
Reasons for requirements changes
(a) A diverse user community in which users have different requirements and priorities
(b) The system customers and the end-users are different people
(c) Changes in the business and technical environment after installation
Two classes of requirements
(a) Enduring requirements: Relatively stable requirements
(b) Volatile requirements: Likely to change during system development process or during
operation
Traceability
Maintains three types of traceability information.
1. Source traceability--Links the requirements to the stakeholders
2. Requirements traceability--Links dependent requirements within the requirements
document
3. Design traceability-- Links from the requirements to the design module
A traceability matrix may be used to record these links between requirements, stakeholders and design elements.
SYSTEM MODELS
Used in the analysis process to develop an understanding of the existing system or of the new system. A system model is an abstraction of the system that excludes detail.
Types of system models
1. Context models
2. Behavioural models
3. Data models
4. Object models
5.Structured models
Behavioral models
Describes the overall behaviour of a system.
Two types of behavioural model
1. Data Flow models
2. State machine models
Data flow models --Concentrate on the flow of data and functional transformation on that data.
Show the processing of data and its flow through a sequence of processing steps. Help
analyst understand what is going on
Advantages
-- Simple and easily understandable
-- Useful during analysis of requirements
DATA MODELS
Used to describe the logical structure of data processed by the system. An entity-relation-
attribute model sets out the entities in the system, the relationships between these entities and
the entity attributes. Widely used in database design. Can readily be implemented using
relational databases. No specific notation provided in the UML but objects and associations
can be used.
OBJECT MODELS
INHERITANCE MODELS
A type of object-oriented model in which object classes have attributes and services. It arranges classes into an inheritance hierarchy with the most general object class at the top of the hierarchy; specialized objects inherit its attributes and services.
UML notation
-- Inheritance is shown upward rather than downward
--Single inheritance: every object class inherits its attributes and operations from a single parent class
--Multiple inheritance: a class inherits from several parent classes
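A minimal Python sketch of these ideas (the library-style class names are illustrative assumptions, not taken from the notes), showing a general class at the top of the hierarchy, single inheritance, and multiple inheritance:

# General object class at the top of the inheritance hierarchy.
class LibraryItem:
    def __init__(self, title):
        self.title = title                 # attribute inherited by specialized classes

    def catalogue_entry(self):             # service inherited by specialized classes
        return "Item: " + self.title

# Single inheritance: Book inherits attributes and operations from one parent class.
class Book(LibraryItem):
    def __init__(self, title, author):
        super().__init__(title)
        self.author = author

class AudioRecording:
    def play(self):
        return "playing audio"

# Multiple inheritance: TalkingBook inherits from several parent classes.
class TalkingBook(Book, AudioRecording):
    pass

tb = TalkingBook("Software Engineering", "Sommerville")
print(tb.catalogue_entry(), "-", tb.play())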
OBJECT MODELS
OBJECT AGGREGATION
Some objects are groupings of other objects: an aggregate object is composed of a set of other objects. The classes representing these objects may be modelled using an object aggregation model.
A diamond shape at the source end of the link represents composition.
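A short Python sketch of aggregation (the study-pack classes are hypothetical); the StudyPack object is an aggregate composed of other objects:

class Assignment:
    def __init__(self, title):
        self.title = title

class Slides:
    def __init__(self, count):
        self.count = count

class StudyPack:                                # aggregate object
    def __init__(self, assignments, slides):
        self.assignments = assignments          # grouped Assignment objects
        self.slides = slides                    # grouped Slides object

pack = StudyPack([Assignment("Exercise 1"), Assignment("Exercise 2")], Slides(40))
print(len(pack.assignments), "assignments,", pack.slides.count, "slides")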
OBJECT-BEHAVIORAL MODEL
-- Shows the operations provided by the objects
-- Sequence diagram of UML can be used for behavioral modeling
UNIT III
DESIGN ENGINEERING
DESIGN PROCESS AND DESIGN QUALITY
ENCOMPASSES the set of principles, concepts and practices that lead to the development of
high quality system or product. Design creates a representation or model of the software.
Design model provides details about S/W architecture, interfaces and components that are
necessary to implement the system. Quality is established during design. Design should
exhibit firmness, commodity and delight. Design sits at the technical kernel of S/W engineering.
Design sets the stage for construction.
QUALITY GUIDELINES
• Uses recognizable architectural styles or patterns
• Modular; that is logically partitioned into elements or subsystems
• Distinct representation of data, architecture, interfaces and components
• Appropriate data structures for the classes to be implemented
• Independent functional characteristics for components
• Interfaces that reduce the complexity of connections between components
• Repeatable method
QUALITY ATTRIBUTES
FURPS quality attributes
• Functionality
* Feature set and capabilities of programs
* Security of the overall system
• Usability
* user-friendliness
* Aesthetics
* Consistency
* Documentation
• Reliability
* Evaluated by measuring the frequency and severity of failure
* MTTF
• Performance
* Processing speed, response time, resource consumption, throughput, efficiency
• Supportability
* Extensibility
* Adaptability
* Serviceability
DESIGN CONCEPTS
1. Abstractions
2. Architecture
3. Patterns
4. Modularity
5. Information Hiding
6. Functional Independence
7. Refinement
8. Re-factoring
9. Design Classes
DESIGN CONCEPTS
ABSTRACTION
Many levels of abstraction.
Highest level of abstraction: the solution is stated in broad terms using the language of the
problem environment
Lower levels of abstraction : More detailed description of the solution is provided
• Procedural abstraction -- Refers to a sequence of instructions that have a specific and
limited function
• Data abstraction-- Named collection of data that describe a data object
DESIGN CONCEPTS
ARCHITECTURE -- The structure or organization of program components (modules) and their
interconnections
Architecture Models
(a) Structural Models-- An organised collection of program components
(b) Framework Models-- Represents the design in more abstract way
(c) Dynamic Models-- Represents the behavioral aspects indicating changes as a function of
external events
(d). Process Models-- Focus on the design of the business or technical process
PATTERNS
Provides a description that enables a designer to determine the following:
(a). Whether the pattern is applicable to the current work
(b). Whether the pattern can be reused
(c). Whether the pattern can serve as a guide for developing a similar but functionally or
structurally different pattern
MODULARITY
Divides software into separately named and addressable components, sometimes called modules.
Modules are integrated to satisfy problem requirements. Consider two problems p1 and p2. If the perceived complexity of p1 is C(p1) and the perceived complexity of p2 is C(p2), with corresponding efforts E(p1) and E(p2), then if C(p1) > C(p2) it follows that E(p1) > E(p2).
The perceived complexity of two problems when they are combined is often greater than the sum of the perceived complexities when each is taken separately, i.e. C(p1 + p2) > C(p1) + C(p2), and therefore E(p1 + p2) > E(p1) + E(p2).
• Based on a divide-and-conquer strategy: it is easier to solve a complex problem when it is broken into sub-modules.
INFORMATION HIDING
Information contained within a module is inaccessible to other modules that have no need for such information. This is achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve the software's function. It provides the greatest benefit when modifications are required during testing and later maintenance, because errors introduced during modification are less likely to propagate to other locations within the software.
DESIGN CLASSES
Class represents a different layer of design architecture.
Five types of Design Classes
1. User interface class -- Defines all abstractions that are necessary for human computer
interaction
2. Business domain class -- Refinement of the analysis classes that identify the attributes and
services needed to implement some element of the business domain
3.Process class -- implements lower level business abstractions required to fully manage the
business domain classes
4.Persistent class -- Represent data stores that will persist beyond the execution of the
software
5.System class -- Implements management and control functions to operate and
communicate within the computer environment and with the outside world.
Data Design
The data design action translates data objects defined as part of the analysis model into data
structures at the component level and a database architecture at application level when
necessary.
ARCHITECTURAL STYLES
Describes a system category that encompasses:
(1) a set of components
(2) a set of connectors that enable communication and coordination among components
(3) constraints that define how components can be integrated to form the system
(4) semantic models that help a designer understand the overall properties of the system
Data-flow architectures
Shows the flow of input data, its computational components and output data. This structure is also called pipe-and-filter. A pipe provides a path for the flow of data, while filters manipulate the data and work independently of their neighbouring filters. If the data flow degenerates into a single line of transforms, it is termed batch sequential.
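A minimal pipe-and-filter sketch in Python (the filters here are hypothetical text transformations); each filter works independently and the pipe passes the output of one filter on as the input of the next:

# Each filter is an independent transformation on the data flowing through the pipe.
def strip_blanks(lines):
    return [ln.strip() for ln in lines if ln.strip()]

def to_upper(lines):
    return [ln.upper() for ln in lines]

def number(lines):
    return ["%d: %s" % (i, ln) for i, ln in enumerate(lines, start=1)]

def pipeline(data, *filters):
    for f in filters:                 # the pipe: output of one filter feeds the next
        data = f(data)
    return data

print(pipeline(["  alpha", "", "beta  "], strip_blanks, to_upper, number))
# ['1: ALPHA', '2: BETA']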
Object-oriented architectures
The components of a system encapsulate data and the operations that must be applied to that data. Communication and coordination between components is accomplished via message passing.
Layered architectures
A number of different layers are defined, as sketched below:
• Inner layer (interfaces with the operating system)
• Intermediate layers (utility services and application functions)
• Outer layer (user interface)
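A minimal sketch of the layered idea in Python (hypothetical functions): the outer user-interface layer calls only the intermediate layer, which in turn calls only the inner layer:

# Inner layer: interfaces with the operating system / storage (simulated here).
def read_record(key):
    return {"key": key, "value": 42}

# Intermediate layer: utility services and application functions.
def fetch_value(key):
    return read_record(key)["value"]

# Outer layer: user interface; it talks only to the intermediate layer.
def show(key):
    print(key, "=", fetch_value(key))

show("threshold")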
ARCHITECTURAL PATTERNS
A template that specifies approach for some behavioral characteristics of the system
Patterns are imposed on the architectural styles
Pattern Domains
1. Concurrency
--Handles multiple tasks that simulate parallelism
--Approaches(Patterns)
(a) Operating system process management pattern
(b) A task scheduler pattern
2.Persistence
--Data survives past the execution of the process
--Approaches (Patterns)
(a) Data base management system pattern
(b) Application Level persistence Pattern( word processing software)
OBJECT-ORIENTED DESIGN: Objects and object classes, an object-oriented design process, design evolution.
PERFORMING USER INTERFACE DESIGN: Golden rules, user interface analysis and design, interface analysis, interface design steps, design evaluation.
Systems context and modes of use: these specify the context of the system and the relationships between the software that is being designed and its external environment.
• If the system context is a static model, it describes the other systems in that environment.
• If the system context is a dynamic model, it describes how the system actually interacts with its environment.
System Architecture
Once the interactions between the software system being designed and its environment have been defined, this information can be used as a basis for designing the system architecture.
Object Identification -- This process is concerned with identifying the object classes. We can identify object classes using the following approaches:
1) Grammatical analysis of a natural-language description of the system
2) Identification of tangible entities in the application domain
3) A behavioural approach
4) A scenario-based approach
Golden Rules
1. Place the user in control
2. Reduce the user’s memory load
3. Make the interface consistent
Make the Interface Consistent. Allow the user to put the current task into a meaningful
context. Maintain consistency across a family of applications. If past interactive models have
created user expectations, do not make changes unless there is a compelling reason to do
so.
Interface analysis
- Understanding the users who will interact with the system, based on their skill levels (i.e. requirements gathering)
- The tasks the user performs to accomplish the goals of the system are identified, described and elaborated
- Analysis of the work environment
Interface design
In interface design, all interface objects and actions that enable a user to perform all desired
tasks are defined
Implementation
A prototype is initially constructed and then later user interface development tools may be
used to complete the construction of the interface.
• Validation
The correctness of the system is validated against the user requirement
User Analysis
• Are users trained professionals, technicians, clerical workers, or manufacturing workers?
• What level of formal education does the average user have?
• Are the users capable of learning from written materials or have they expressed a desire for
classroom training?
• Are users expert typists or keyboard phobic?
• What is the age range of the user community?
• Will the users be represented predominately by one gender?
• How are users compensated for the work they perform?
• Do users work normal office hours or do they work until the job is done?
(Figure: The interface design evaluation cycle. A preliminary design is produced, prototype #1 of the interface is built, the user evaluates the interface, the evaluation is studied by the designer, design modifications are made, prototype #n is built, and the cycle continues until the interface design is complete.)
Testing Strategies
Software is tested to uncover errors introduced during design and construction. Testing often accounts for more project effort than any other software engineering activity, so it has to be done carefully using a testing strategy. The strategy is developed by the project manager, software engineers and testing specialists.
Testing is the process of executing a program with the intention of finding errors. It typically involves about 40% of the total project cost.
A testing strategy provides a road map that describes the steps to be conducted as part of testing. It should incorporate test planning, test case design, test execution, and resultant data collection and evaluation.
Verification refers to the set of activities that ensure that the software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
V&V encompasses a wide array of software quality assurance activities.
Unit Testing begins at the vortex of the spiral and concentrates on each unit of software in source
code. It uses testing techniques that exercise specific paths in a component and its control structure
to ensure complete coverage and maximum error detection. It focuses on the internal processing
logic and data structures. Test cases should uncover errors.
Boundary testing also should be done as s/w usually fails at its boundaries. Unit tests can be
designed before coding begins or after source code is generated.
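A minimal unit-test sketch using Python's built-in unittest module; the component under test (discount) and its rules are hypothetical, but the tests illustrate exercising specific paths in the control structure, including a boundary:

import unittest

def discount(amount):
    # Hypothetical component under test: 10% discount for amounts of 100 or more.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * 0.9 if amount >= 100 else amount

class DiscountUnitTest(unittest.TestCase):
    def test_below_threshold(self):
        self.assertEqual(discount(99), 99)        # path: no discount applied
    def test_at_boundary(self):
        self.assertEqual(discount(100), 90.0)     # boundary: software often fails here
    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            discount(-1)                          # error-handling path

if __name__ == "__main__":
    unittest.main()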
Integration testing: Here the focus is on the design and construction of the software architecture. It addresses the issues associated with the dual problems of verification and program construction by testing inputs and outputs. Though modules function independently, problems may arise because of interfacing, and this technique uncovers errors associated with interfacing. We can use top-down integration, wherein modules are integrated by moving downward through the control hierarchy, beginning with the main control module. The other strategy is bottom-up integration, which begins construction and testing with atomic modules that are combined into clusters as we move up the hierarchy. A combined approach called the sandwich strategy can be used, i.e. top-down for higher-level modules and bottom-up for lower-level modules.
Validation Testing: Through validation testing, requirements are validated against the software as constructed. These are high-order tests in which validation criteria must be evaluated to assure that the software meets all functional, behavioural and performance requirements. Validation succeeds when the software functions in a manner that can reasonably be expected by the customer. It involves:
1) Validation test criteria
2) Configuration review
3) Alpha and beta testing
The validation criteria described in the SRS form the basis for this testing. Here, alpha and beta testing are performed.
Alpha testing is performed at the developer's site by end users, in a natural setting but in a controlled environment.
Beta testing is conducted at end-user sites. It is a 'live' application of the software and the environment is not controlled by the developer. End users record all problems and report them to the developer, who then makes modifications and releases the product.
System Testing: In system testing, s/w and other system elements are tested as a whole. This is the
last high-order testing step which falls in the context of computer system engineering. Software is
combined with other system elements like H/W, People, Database and the overall functioning is
checked by conducting a series of tests. These tests fully exercise the computer based system. The
types of tests are:
1. Recovery testing: Systems must recover from faults and resume processing within a pre specified
time. It forces the system to fail in a variety of ways and verifies that recovery is properly
performed. Here the Mean Time To Repair (MTTR) is evaluated to see if it is within acceptable
limits.
2. Security Testing: This verifies that protection mechanisms built into a system will protect it from
improper penetrations. Tester plays the role of hacker. In reality given enough resources and time
it is possible to ultimately penetrate any system. The role of system designer is to make penetration
cost more than the value of the information that will be obtained.
3. Stress testing: It executes a system in a manner that demands resources in abnormal quantity,
frequency or volume and tests the robustness of the system.
4. Performance Testing: This is designed to test the run-time performance of s/w within the context
of an integrated system. They require both h/w and s/w instrumentation.
Testing Tactics:
The goal of testing is to find errors and a good test is one that has a high probability of finding an
error. A good test is not redundant and it should be neither too simple nor too complex.
Two major categories of software testing
Black box testing: It examines some fundamental aspect of a system, tests whether each
function of product is fully operational.
White box testing: It examines the internal operations of a system and examines the procedural
detail.
Example: If the valid input range is 0.0 <= x <= 1.0, then boundary-value test cases are 0.0 and 1.0 for valid input, and -0.1 and 1.1 for invalid input.
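The same boundary-value cases can be expressed as a tiny executable check in Python (in_range is a hypothetical validator standing in for the function under test):

def in_range(x):
    # Hypothetical function under test: accepts values with 0.0 <= x <= 1.0.
    return 0.0 <= x <= 1.0

valid_boundary_cases = [0.0, 1.0]        # just inside the valid partition
invalid_boundary_cases = [-0.1, 1.1]     # just outside the valid partition

assert all(in_range(x) for x in valid_boundary_cases)
assert not any(in_range(x) for x in invalid_boundary_cases)
print("boundary-value cases pass")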
Debugging occurs as a consequence of successful testing. It is an action that results in the removal of errors. It is very much an art.
Debugging Strategies:
The objective of debugging is to find and correct the cause of a software error which is realized by a
combination of systematic evaluation, intuition and luck. Three strategies are proposed:
1) Brute Force Method.
2) Back Tracking
3) Cause Elimination
Brute Force: The most common and least efficient method for isolating the cause of a software error; it is applied when all else fails. Memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements, in the hope of finding the cause somewhere in this mass of information. It often leads to wasted time and effort.
Back tracking: A common debugging approach that is useful for small programs. Beginning at the site where the symptom has been uncovered, the source code is traced backward until the cause is found. As the number of source lines grows, the number of potential backward paths can become unmanageable.
Cause Elimination: Based on the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. A 'cause hypothesis' is devised, and the data are used to prove or disprove it. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each cause.
Automated Debugging: This supplements the above approaches with debugging tools that provide
semi-automated support like debugging compilers, dynamic debugging aids, test case generators,
mapping tools etc.
Regression Testing: When a new module is added as part of integration testing the software
changes. This may cause problems with the functions which worked properly before. This testing is
the re-execution of some subset of tests that are already conducted to ensure that changes have not
propagated unintended side effects. It ensures that changes do not introduce unintended behaviour
or errors. This can be done manually or automated.
Software Quality
Conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected of all
professionally developed software.
Factors that affect software quality can be categorized in two broad groups:
Factors that can be directly measured (e.g. defects uncovered during testing)
Factors that can be measured only indirectly (e.g. usability or maintainability)
• McCall's quality factors
1. Product operation
a. Correctness
b. Reliability
c. Efficiency
d. Integrity
e. Usability
2. Product revision
a. Maintainability
b. Flexibility
c. Testability
3. Product transition
a. Portability
b. Reusability
c. Interoperability
• ISO 9126 quality factors
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Product metrics
Function point (FP) metrics measure the functionality delivered by the system. FP is computed from counts of the following information domain values, weighted by complexity:

Information domain value          Simple  Average  Complex
External inputs (EIs)                3       4        6
External outputs (EOs)               4       5        7
External inquiries (EQs)             3       4        6
Internal logical files (ILFs)        7      10       15
External interface files (EIFs)      5       7       10
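The weighted count total is then combined with fourteen value-adjustment factors Fi using the standard relationship FP = count total x (0.65 + 0.01 x sum(Fi)). A small Python sketch with hypothetical counts (all assumed to be of average complexity) illustrates the computation:

# Hypothetical information-domain counts, all rated at average complexity.
counts = {"EIs": 20, "EOs": 12, "EQs": 8, "ILFs": 4, "EIFs": 2}
weights = {"EIs": 4, "EOs": 5, "EQs": 4, "ILFs": 10, "EIFs": 7}   # average-column weights

count_total = sum(counts[k] * weights[k] for k in counts)

# Fourteen value-adjustment factors, each rated 0 (no influence) to 5 (essential).
value_adjustment_factors = [3] * 14            # hypothetical ratings

fp = count_total * (0.65 + 0.01 * sum(value_adjustment_factors))
print(count_total, round(fp, 1))               # 226 241.8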
S5: Number of unique database items
S6: Number of database segments
S7: Number of modules with single entry and single exit
• DSQI = Σ(wi x Di), the design structure quality index, where the values Di are derived from counts such as S1-S7 and the wi are relative weights
Halstead's metrics are primitive measures that may be derived after the code is generated, or estimated once design is complete:
• Volume V = N log2(n1 + n2), where N is the program length (total occurrences of operators and operands) and n1 and n2 are the numbers of distinct operators and distinct operands
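A quick numeric sketch of the volume measure, with hypothetical operator and operand counts:

import math

n1, n2 = 10, 16        # hypothetical numbers of distinct operators and distinct operands
N1, N2 = 40, 55        # hypothetical total operator and operand occurrences

N = N1 + N2                       # program length
V = N * math.log2(n1 + n2)        # volume V = N log2(n1 + n2)
print(round(V, 1))                # about 446.5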
Software Measurement:
Software measurement can be categorized as
1) Direct Measure and
2) Indirect Measure
Software Measurement
The metrics used in software measurement are:
• Size-oriented metrics
• Function-oriented metrics
• Object-oriented metrics
• Web-based application metrics
Size Oriented Metrics
Size-oriented metrics are direct measures of the software and of the process by which it is developed.
A software company maintains a simple record for calculating the size of the software.
It includes LOC, effort, cost ($$), pages of documentation, errors, defects and people.
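From such a record, simple size-oriented ratios can be derived. A small Python sketch with purely illustrative project figures:

# Hypothetical entries from a size-oriented record for one project.
loc = 12100          # lines of code
effort = 24          # person-months
cost = 168000        # cost in dollars
errors = 134         # errors found before release
defects = 29         # defects reported after release
pages = 365          # pages of documentation

kloc = loc / 1000
print("errors per KLOC:", round(errors / kloc, 2))
print("defects per KLOC:", round(defects / kloc, 2))
print("cost per LOC:", round(cost / loc, 2))
print("pages of documentation per KLOC:", round(pages / kloc, 1))
print("LOC per person-month:", round(loc / effort))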
Function oriented metrics
Function-oriented metrics measure the functionality delivered by the application.
The most widely used function-oriented metric is the function point.
The function point is independent of programming language.
It measures functionality from the user's point of view.
Risk Management
Risk is an undesired event or circumstance that may occur while a project is underway.
It is necessary for the project manager to anticipate and identify different risks that a project may be
susceptible to.
UNIT-V
Risk Management
It aims at reducing the impact of all kinds of risk that may affect a project by identifying, analyzing and managing them.
Reactive vs. Proactive risk strategies
Reactive: The project is monitored for likely risks and resources are set aside to deal with them should they become real problems.
Proactive: Risks are identified in advance, and their probability and impact are assessed.
Software Risk
Software risk involves two characteristics:
Uncertainty: The risk may or may not happen.
Loss: If the risk becomes a reality, unwanted loss or consequences will occur.
It includes:
1) Project risk
2) Technical risk
3) Business risk
4) Known risk
5) Unpredictable risk
6) Predictable risk
• Project risk: Threaten the project plan and affect schedule and resultant cost
Technical risk: Threaten the quality and timeliness of software to be produced
• Business risk: Threaten the viability of software to be built
• Known risk: These risks can be uncovered after careful evaluation of the project plan and its environment
• Predictable risk: Risks that are extrapolated from past project experience
• Unpredictable risk: Risks that can and do occur, but are extremely difficult to identify in advance
Risk Identification
It is concerned with the identification of risks.
Step 1: Identify all possible risks
Step 2: Create a risk item checklist
Step 3: Categorize the risks into risk components: performance risk, cost risk, support risk and schedule risk
Step 4: Rate the impact of each risk as one of four categories: negligible (0), marginal (1), critical, or catastrophic
Risk Identification
Risk Identification includes
Product size
Business impact
Development environment
Process definition
Customer characteristics
Technology to be built
Staff size and experience
Risk Projection
Also called risk estimation. It estimates the impact of a risk on the project and the product. Estimation is done by using a risk table. Risk projection rates each risk in two ways:
• The likelihood or probability that the risk is real (Li)
• The consequences of the problems associated with the risk, should it occur (Xi)
Risk Projection
Steps in Risk projection
1. Estimate Li for each risk
2. Estimate the consequence Xi
3. Estimate the impact
4. Draw the risk table
Ignore risks for which management concern is low, i.e. risks having high or low impact but a low probability of occurrence.
Consider all risks for which management concern is high, i.e. those with high impact and a high or moderate probability of occurrence, or with low impact and a high probability of occurrence.
Risk Refinement
Also called Risk assessment
Refines the risk table by reviewing the risk impact based on the following three factors:
a. Nature: Likely problems if risk occurs
b. Scope: Just how serious is it?
c. Timing: When and how long
Risk Refinement
It is based on risk elaboration.
Calculate the risk exposure RE = P x C,
where P is the probability that the risk will occur and C is the cost to the project should the risk occur.
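A short Python sketch (the risks, probabilities and costs are hypothetical) that computes risk exposure with this relation and ranks the risks:

# Hypothetical risks: (description, probability P, cost C in dollars if the risk occurs).
risks = [
    ("Key staff leave mid-project", 0.30, 60000),
    ("Reusable components fail to integrate", 0.60, 25000),
    ("Customer changes core requirements", 0.80, 45000),
]

# Risk exposure RE = P * C, ranked so that high-exposure risks get attention first.
for name, p, c in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(name, ": RE =", round(p * c))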
Risk Mitigation Monitoring And Management (RMMM)
Its goal is to assist project team in developing a strategy for dealing with risk
There are three issues in RMMM:
1) Risk avoidance
2) Risk monitoring
3) Risk management: the actions to be taken in the event that mitigation steps have failed and the risk has become a live problem
These are documented in an RMMM plan (Risk Mitigation, Monitoring and Management Plan).
RMMM plan
It documents all work performed as a part of risk analysis.
Each risk is documented individually by using a Risk Information Sheet.
RIS is maintained by using a database system
Quality Management
Quality Concepts
Variation control is the heart of quality control
From one project to another, we want to minimize the difference between the predicted
resources needed to complete a project and the actual resources used, including staff,
equipment, and calendar time
Quality of design
Refers to characteristics that designers specify for the end product
Quality Management
Quality of conformance
Degree to which design specifications are followed in manufacturing the product
Quality control
Series of inspections, reviews, and tests used to ensure conformance of a work product to its
specifications
Quality assurance
Consists of a set of auditing and reporting functions that assess the effectiveness and
completeness of quality control activities
Cost of Quality
Prevention costs
Quality planning, formal technical reviews, test equipment, training
Appraisal costs
In-process and inter-process inspection, equipment calibration and maintenance, testing
Internal failure costs
rework, repair, failure mode analysis
External failure costs
Complaint resolution, product return and replacement, help line support, warranty work
SQA Activities
Prepare SQA plan for the project.
Participate in the development of the project's software process description.
Review software engineering activities to verify compliance with the defined software process.
Audit designated software work products to verify compliance with those defined as part of the
software process.
Ensure that any deviations in software or work products are documented and handled according to a
documented procedure.
Record any evidence of noncompliance and report it to management.
Software Reviews
The purpose of reviews is to find errors before they are passed on to another software engineering activity or released to the customer. Software engineers (and others) conduct formal technical reviews (FTRs) for software quality assurance. Using formal technical reviews (walkthroughs or inspections) is an effective means of improving software quality.
Formal Technical Review
An FTR is a software quality control activity performed by software engineers and others. The objectives are:
To uncover errors in function, logic or implementation for any representation of the software.
To verify that the software under review meets its requirements.
To ensure that the software has been represented according to predefined standards.
To achieve software that is developed in a uniform manner and
To make projects more manageable.
The FTR meeting is started by introducing the agenda, after which the producer introduces the product. The producer then 'walks through' the product while the reviewers raise issues which they have prepared in advance. If errors are found, the recorder notes them down.
Review Guidelines
Review the product, not the producer
Set an agenda and maintain it
Limit debate and rebuttal
Enunciate problem areas, but don't attempt to solve every problem noted
Take written notes
Limit the number of participants and insist upon advance preparation
Develop a checklist for each product that is likely to be reviewed
Allocate resources and schedule time for FTRs
Conduct meaningful training for all reviewers
Review your early reviews
Software Defects
Industry studies suggest that design activities introduce 50-65% of all defects or errors during the software process. Review techniques have been shown to be up to 75% effective in uncovering design flaws, which ultimately reduces the cost of subsequent activities in the software process.
Six Sigma for Software Engineering
The most widely used strategy for statistical quality assurance. It has three core steps:
1. Define customer requirements, deliverables, and project goals via well-defined methods of customer communication.
2. Measure each existing process and its output to determine current quality performance (e.g., compute defect metrics).
3. Analyze defect metrics and determine the vital few causes.
For an existing process that needs improvement:
1. Improve the process by eliminating the root causes of defects.
2. Control future work to ensure that it does not reintroduce the causes of defects.
If new processes are being developed:
1. Design each new process to avoid the root causes of defects and to meet customer requirements.
2. Verify that the process model will avoid defects and meet customer requirements.
CMMI (Capability Maturity Model Integration) is a proven industry framework to improve product
quality and development efficiency for both hardware and software.
The staged representation of CMMI uses 5 levels to describe the maturity of the organization.
CMMI is an evolutionary improvement path for software organizations from immature process to a
mature, disciplined one.
Provides guidance on how to gain control of processes for developing and maintaining software.
CMMI describes the key elements of an effective software process.
Software Reliability
Software reliability is defined as the probability of failure-free operation of a computer program in a specified environment for a specified time period. It can be measured directly and estimated using historical and developmental data. Software reliability problems can usually be traced back to errors in design or implementation.
Measures of Reliability
Mean time between failure (MTBF) = MTTF + MTTR
MTTF = mean time to failure
MTTR = mean time to repair
Availability = [MTTF / (MTTF + MTTR)] x 100%
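A quick numeric sketch of these measures with hypothetical failure and repair times:

# Hypothetical reliability data, both measured in hours.
mttf = 480.0      # mean time to failure
mttr = 8.0        # mean time to repair

mtbf = mttf + mttr                             # mean time between failures
availability = mttf / (mttf + mttr) * 100      # percentage of time the system is usable
print("MTBF =", mtbf, "hours; availability =", round(availability, 2), "%")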
ISO 9000 describes the quality elements that must be present for a quality assurance system to be
compliant with the standard, but it does not describe how an organization should implement
these elements.
ISO 9001:2000 is the quality standard that contains 20 requirements that must be present in an
effective software quality assurance system.