SOFTWARE PROJECT
MANAGEMENT
A UNIFIED FRAMEWORK
WALKER ROYCE
Foreword by Barry Boehm
The Addison-Wesley Object Technology Series
Grady Booch, Ivar Jacobson, and James Rumbaugh, Series Editors
For more information check out the series web site [http://www.awl.com/cseng/otseries/]
Armour/Miller, Advanced Use Case Modeling, Volume 1
Binder, Testing Object-Oriented Systems: Models, Patterns, and Tools
Blakley, CORBA Security: An Introduction to Safe Computing with Objects
Booch, Object Solutions: Managing the Object-Oriented Project
Booch, Object-Oriented Analysis and Design with Applications, Second Edition
Booch/Rumbaugh/Jacobson, The Unified Modeling Language User Guide
Box, Essential COM
Box/Brown/Ewald/Sells, Effective COM: 50 Ways to Improve Your COM and MTS-based Applications
Cockburn, Surviving Object-Oriented Projects: A Manager's Guide
Collins, Designing Object-Oriented User Interfaces
Conallen, Building Web Applications with UML
D'Souza/Wills, Objects, Components, and Frameworks with UML: The Catalysis Approach
Douglass, Doing Hard Time: Developing Real-Time Systems with UML, Objects, Frameworks, and Patterns
Douglass, Real-Time UML, Second Edition: Developing Efficient Objects for Embedded Systems
Fowler, Analysis Patterns: Reusable Object Models
Fowler/Beck/Brant/Opdyke/Roberts, Refactoring: Improving the Design of Existing Code
Fowler/Scott, UML Distilled, Second Edition: A Brief Guide to the Standard Object Modeling Language
Gomaa, Designing Concurrent, Distributed, and Real-Time Applications with UML
Gorton, Enterprise Transaction Processing Systems: Putting the CORBA OTS, Encina++ and Orbix OTM to Work
Graham, Object-Oriented Methods, Third Edition: Principles and Practice
Heinckiens, Building Scalable Database Applications: Object-Oriented Design, Architectures, and Implementations
Hofmeister/Nord/Soni, Applied Software Architecture
Jacobson/Booch/Rumbaugh, The Unified Software Development Process
Jacobson/Christerson/Jonsson/Overgaard, Object-Oriented Software Engineering: A Use Case Driven Approach
Jacobson/Ericsson/Jacobson, The Object Advantage: Business Process Reengineering with Object Technology
Jacobson/Griss/Jonsson, Software Reuse: Architecture, Process and Organization for Business Success
Jordan, C++ Object Databases: Programming with the ODMG Standard
Kruchten, The Rational Unified Process: An Introduction, Second Edition
Lau, The Art of Objects: Object-Oriented Design and Architecture
Leffingwell/Widrig, Managing Software Requirements: A Unified Approach
Marshall, Enterprise Modeling with UML: Designing Successful Software through Business Analysis
Mowbray/Ruh, Inside CORBA: Distributed Object Standards and Applications
Oestereich, Developing Software with UML: Object-Oriented Analysis and Design in Practice
Page-Jones, Fundamentals of Object-Oriented Design in UML
Pohl, Object-Oriented Programming Using C++, Second Edition
Pooley/Stevens, Using UML: Software Engineering with Objects and Components
Quatrani, Visual Modeling with Rational Rose 2000 and UML
Rector/Sells, ATL Internals
Reed, Developing Applications with Visual Basic and UML
Rosenberg/Scott, Use Case Driven Object Modeling with UML: A Practical Approach
Royce, Software Project Management: A Unified Framework
Ruh/Herron/Klinker, IIOP Complete: Understanding CORBA and Middleware Interoperability
Rumbaugh/Jacobson/Booch, The Unified Modeling Language Reference Manual
Schneider/Winters, Applying Use Cases: A Practical Guide
Shan/Earle, Enterprise Computing with Objects: From Client/Server Environments to the Internet
Warmer/Kleppe, The Object Constraint Language: Precise Modeling with UML
White, Software Configuration Management Strategies and Rational ClearCase®: A Practical Introduction

The Component Software Series
Clemens Szyperski, Series Editor
For more information check out the series web site [http://www.awl.com/cseng/csseries/].

Allen, Realizing eBusiness with Components
Cheesman/Daniels, UML Components: A Simple Process for Specifying Component-Based Software

SOFTWARE
PROJECT
MANAGEMENT
A Unified
Framework
WALKER ROYCE
RATIONAL SOFTWARE CORPORATION
ADDISON-WESLEY

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as
trademarks. Where those designations appear in this book, and we were aware of a trademark claim, the
designations have been printed in initial capital letters or in all capitals.

The author and publisher have taken care in the preparation of this book, but make no expressed or
implied warranty of any kind and assume no responsibility for errors or omissions. No liability is
assumed for incidental or consequential damages in connection with or arising out of the use of the
information or programs contained herein.

The publisher offers discounts on this book when ordered in quantity for special sales. For more informa-
tion, please contact:

Pearson Education Corporate Sales Division
One Lake Street
Upper Saddle River, NJ 07458
(800) 382-3419
corpsales@pearsontechgroup.com

Visit AW on the Web: www.awl.com/cseng/
Library of Congress Cataloging-in-Publication Data

Royce, Walker, 1955-
    Software project management : a unified framework / Walker Royce.
        p. cm. — (The Addison-Wesley object technology series)
    Includes bibliographical references and index.
    ISBN 0-201-30958-0
    1. Computer software—Development—Management. I. Title. II. Series.
QA76.76.D47R69 1998
005.1'2—dc21                                          98-2071
                                                          CIP

Special permission to paraphrase and use the Maturity Questionnaire, CMU/SEI-94-SR-007 © 1998 by
Carnegie Mellon University, in the book Software Project Management: A Unified Framework is granted
by the Software Engineering Institute.

Capability Maturity Model is a service mark of Carnegie Mellon University. CMM is registered in the
U.S. Patent and Trademark Office.

Copyright © 1998 by Addison-Wesley

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form, or by any means, electronic, mechanical, photocopying, recording, or other-
wise, without the prior consent of the publisher. Printed in the United States of America. Published
simultaneously in Canada.

ISBN 0-201-30958-0

Text printed on recycled paper
6 7 8 9 10—MA—04 03 02 01 00
Sixth printing, November 2000

This work is dedicated to my
father, Winston Royce, whose
vision and practicality were
always in balance.
—Walker

Contents

List of Figures
List of Tables
Foreword
Preface

PART I     SOFTWARE MANAGEMENT RENAISSANCE

CHAPTER 1  Conventional Software Management
           1.1  The Waterfall Model
                1.1.1  In Theory
                1.1.2  In Practice
           1.2  Conventional Software Management Performance

CHAPTER 2  Evolution of Software Economics
           2.1  Software Economics
           2.2  Pragmatic Software Cost Estimation

CHAPTER 3  Improving Software Economics
           3.1  Reducing Software Product Size
                3.1.1  Languages
                3.1.2  Object-Oriented Methods and Visual Modeling
                3.1.3  Reuse
                3.1.4  Commercial Components
           3.2  Improving Software Processes
           3.3  Improving Team Effectiveness
           3.4  Improving Automation through Software Environments
           3.5  Achieving Required Quality
           3.6  Peer Inspections: A Pragmatic View

CHAPTER 4  The Old Way and the New
           4.1  The Principles of Conventional Software Engineering
           4.2  The Principles of Modern Software Management
           4.3  Transitioning to an Iterative Process

PART II    A SOFTWARE MANAGEMENT PROCESS FRAMEWORK

CHAPTER 5  Life-Cycle Phases
           5.1  Engineering and Production Stages
           5.2  Inception Phase
           5.3  Elaboration Phase
           5.4  Construction Phase
           5.5  Transition Phase

CHAPTER 6  Artifacts of the Process
           6.1  The Artifact Sets
                6.1.1  The Management Set
                6.1.2  The Engineering Sets
                6.1.3  Artifact Evolution over the Life Cycle
                6.1.4  Test Artifacts
           6.2  Management Artifacts
           6.3  Engineering Artifacts
           6.4  Pragmatic Artifacts

CHAPTER 7  Model-Based Software Architectures
           7.1  Architecture: A Management Perspective
           7.2  Architecture: A Technical Perspective

CHAPTER 8  Workflows of the Process
           8.1  Software Process Workflows
           8.2  Iteration Workflows

CHAPTER 9  Checkpoints of the Process
           9.1  Major Milestones
           9.2  Minor Milestones
           9.3  Periodic Status Assessments

PART III   SOFTWARE MANAGEMENT DISCIPLINES

CHAPTER 10 Iterative Process Planning
           10.1  Work Breakdown Structures
                 10.1.1  Conventional WBS Issues
                 10.1.2  Evolutionary Work Breakdown Structures
           10.2  Planning Guidelines
           10.3  The Cost and Schedule Estimating Process
           10.4  The Iteration Planning Process
           10.5  Pragmatic Planning

CHAPTER 11 Project Organizations and Responsibilities
           11.1  Line-of-Business Organizations
           11.2  Project Organizations
           11.3  Evolution of Organizations

CHAPTER 12 Process Automation
           12.1  Tools: Automation Building Blocks
           12.2  The Project Environment
                 12.2.1  Round-Trip Engineering
                 12.2.2  Change Management
                 12.2.3  Infrastructures
                 12.2.4  Stakeholder Environments

CHAPTER 13 Project Control and Process Instrumentation
           13.1  The Seven Core Metrics
           13.2  Management Indicators
                 13.2.1  Work and Progress
                 13.2.2  Budgeted Cost and Expenditures
                 13.2.3  Staffing and Team Dynamics
           13.3  Quality Indicators
                 13.3.1  Change Traffic and Stability
                 13.3.2  Breakage and Modularity
                 13.3.3  Rework and Adaptability
                 13.3.4  MTBF and Maturity
           13.4  Life-Cycle Expectations
           13.5  Pragmatic Software Metrics
           13.6  Metrics Automation

CHAPTER 14 Tailoring the Process
           14.1  Process Discriminants
                 14.1.1  Scale
                 14.1.2  Stakeholder Cohesion or Contention
                 14.1.3  Process Flexibility or Rigor
                 14.1.4  Process Maturity
                 14.1.5  Architectural Risk
                 14.1.6  Domain Experience
           14.2  Example: Small-Scale Project versus Large-Scale Project

PART IV    LOOKING FORWARD

CHAPTER 15 Modern Project Profiles
           15.1  Continuous Integration
           15.2  Early Risk Resolution
           15.3  Evolutionary Requirements
           15.4  Teamwork among Stakeholders
           15.5  Top 10 Software Management Principles
           15.6  Software Management Best Practices

CHAPTER 16 Next-Generation Software Economics
           16.1  Next-Generation Cost Models
           16.2  Modern Software Economics

CHAPTER 17 Modern Process Transitions
           17.1  Culture Shifts
           17.2  Denouement

PART V     CASE STUDIES AND BACKUP MATERIAL

APPENDIX A The State of the Practice in Software Management

APPENDIX B The COCOMO Cost Estimation Model
           B.1  COCOMO
           B.2  Ada COCOMO
           B.3  COCOMO II

APPENDIX C Change Metrics
           C.1  Overview
           C.2  Metrics Derivation
                C.2.1  Collected Statistics
                C.2.2  End-Product Quality Metrics
                C.2.3  In-Progress Indicators
           C.3  Pragmatic Change Metrics

APPENDIX D CCPDS-R Case Study
           D.1  Context for the Case Study
           D.2  Common Subsystem Overview
           D.3  Project Organization
           D.4  Common Subsystem Product Overview
           D.5  Process Overview
                D.5.1  Risk Management: Build Content
                D.5.2  The Incremental Design Process
                D.5.3  Component Evolution
                D.5.4  The Incremental Test Process
                D.5.5  DOD-STD-2167A Artifacts
           D.6  Demonstration-Based Assessment
           D.7  Core Metrics
                D.7.1  Development Progress
                D.7.2  Test Progress
                D.7.3  Stability
                D.7.4  Modularity
                D.7.5  Adaptability
                D.7.6  Maturity
                D.7.7  Cost Expenditures by Activity
           D.8  Other Metrics
                D.8.1  Software Size Evolution
                D.8.2  Subsystem Process Improvements
                D.8.3  SCO Resolution Profile
                D.8.4  CSCI Productivities and Quality Factors
           D.9  People Factors
                D.9.1  Core Team
                D.9.2  Award Fee Flowdown Plan
           D.10 Conclusions

APPENDIX E Process Improvement and Mapping to the CMM
           E.1  CMM Overview
           E.2  Pragmatic Process Improvement
           E.3  Maturity Questionnaire
           E.4  Questions Not Asked by the Maturity Questionnaire
           E.5  Overall Process Assessment

Glossary
References
Index
List of Figures

FIGURE 1-1   The waterfall model
FIGURE 1-2   Progress profile of a conventional software project
FIGURE 1-3   Risk profile of a conventional software project across its life cycle
FIGURE 1-4   Suboptimal software component organization resulting from a requirements-driven approach
FIGURE 2-1   Three generations of software economics leading to the target objective
FIGURE 2-2   Return on investment in different domains
FIGURE 2-3   The predominant cost estimation process
FIGURE 3-1   Cost and schedule investments necessary to achieve reusable components
FIGURE 4-1   The top five principles of a modern process
FIGURE 5-1   The phases of the life-cycle process
FIGURE 6-1   Overview of the artifact sets
FIGURE 6-2   Life-cycle focus on artifact sets
FIGURE 6-3   Life-cycle evolution of the artifact sets
FIGURE 6-4   Typical business case outline
FIGURE 6-5   Typical release specification outline
FIGURE 6-6   Typical software development plan outline
FIGURE 6-7   Typical release description outline
FIGURE 6-8   Artifact sequences across a typical life cycle
FIGURE 6-9   Typical vision document outline
FIGURE 6-10  Typical architecture description outline
FIGURE 7-1   Architecture, an organized and abstracted view into the design models
FIGURE 8-1   Activity levels across the life-cycle phases
FIGURE 8-2   The workflow of an iteration
FIGURE 8-3   Iteration emphasis across the life cycle
FIGURE 8-4   A typical build sequence associated with a layered architecture
FIGURE 9-1   A typical sequence of life-cycle checkpoints
FIGURE 9-2   Engineering artifacts available at the life-cycle architecture milestone
FIGURE 9-3   Default agendas for the life-cycle architecture milestone
FIGURE 9-4   Typical minor milestones in the life cycle of an iteration
FIGURE 10-1  Conventional work breakdown structure, following the product hierarchy
FIGURE 10-2  Default work breakdown structure
FIGURE 10-3  Evolution of planning fidelity in the WBS over the life cycle
FIGURE 10-4  Planning balance throughout the life cycle
FIGURE 11-1  Default roles in a software line-of-business organization
FIGURE 11-2  Default project organization and responsibilities
FIGURE 11-3  Software management team activities
FIGURE 11-4  Software architecture team activities
FIGURE 11-5  Software development team activities
FIGURE 11-6  Software assessment team activities
FIGURE 11-7  Software project team evolution over the life cycle
FIGURE 12-1  Typical automation and tool components that support the process workflows
FIGURE 12-2  Round-trip engineering
FIGURE 12-3  The primitive components of a software change order
FIGURE 12-4  Example release histories for a typical project and a typical product
FIGURE 12-5  Organization policy outline
FIGURE 12-6  Extending environments into stakeholder domains
FIGURE 13-1  Expected progress for a typical project with three major releases
FIGURE 13-2  The basic parameters of an earned value system
FIGURE 13-3  Assessment of book progress (example)
FIGURE 13-4  Typical staffing profile
FIGURE 13-5  Stability expectation over a healthy project's life cycle
FIGURE 13-6  Modularity expectation over a healthy project's life cycle
FIGURE 13-7  Adaptability expectation over a healthy project's life cycle
FIGURE 13-8  Maturity expectation over a healthy project's life cycle
FIGURE 13-9  Examples of the fundamental metrics classes
FIGURE 13-10 Example SPCP display for a top-level project situation
FIGURE 14-1  The two primary dimensions of process variability
FIGURE 14-2  Priorities for tailoring the process framework
FIGURE 15-1  Progress profile of a modern project
FIGURE 15-2  Risk profile of a typical modern project across its life cycle
FIGURE 15-3  Organization of software components resulting from a modern process
FIGURE 15-4  Balanced application of modern principles to achieve economic results
FIGURE 16-1  Next-generation cost models
FIGURE 16-2  Differentiating potential solutions through cost estimation
FIGURE 16-3  Automation of the construction process in next-generation environments
FIGURE 17-1  Next-generation project performance
FIGURE B-1   Profile of a conventional project
FIGURE B-2   Software estimation over a project life cycle
FIGURE B-3   COCOMO II estimation over a project life cycle
FIGURE C-1   Expected trends for in-progress indicators
FIGURE C-2   Expectations for quality trends
FIGURE D-1   CCPDS-R life-cycle overview
FIGURE D-2   Full-scale development phase project organization
FIGURE D-3   Common Subsystem SAS evolution
FIGURE D-4   Overview of the CCPDS-R macroprocess, milestones, and schedule
FIGURE D-5   Common Subsystem builds
FIGURE D-6   Basic activities sequence for an individual build
FIGURE D-7   Incremental baseline evolution and test activity flow
FIGURE D-8   CCPDS-R first demonstration activities and schedule
FIGURE D-9   Development progress summary
FIGURE D-10  Common Subsystem development progress
FIGURE D-11  Common Subsystem test progress
FIGURE D-12  Common Subsystem stability
FIGURE D-13  Common Subsystem modularity
FIGURE D-14  Common Subsystem adaptability
FIGURE D-15  Common Subsystem maturity
FIGURE D-16  Common Subsystem SCO change profile
FIGURE E-1   Project performance expectations for CMM maturity levels
List of Tables

TABLE 1-1   Expenditures by activity for a conventional software project
TABLE 1-2   Results of conventional software project design reviews
TABLE 3-1   Important trends in improving software economics
TABLE 3-2   Language expressiveness of some of today's popular languages
TABLE 3-3   Advantages and disadvantages of commercial components versus custom software
TABLE 3-4   Three levels of process and their attributes
TABLE 3-5   General quality improvements with a modern process
TABLE 4-1   Modern process approaches for solving conventional problems
TABLE 5-1   The two stages of the life cycle: engineering and production
TABLE 8-1   The artifacts and life-cycle emphases associated with each workflow
TABLE 9-1   The general status of plans, requirements, and products across the major milestones
TABLE 9-2   Default content of status assessment reviews
TABLE 10-1  WBS budgeting defaults
TABLE 10-2  Default distributions of effort and schedule by phase
TABLE 12-1  Representative examples of changes at opposite ends of the project spectrum
TABLE 13-1  Overview of the seven core metrics
TABLE 13-2  Measurement of actual progress of book development (example)
TABLE 13-3  The default pattern of life-cycle metrics evolution
TABLE 14-1  Process discriminators that result from differences in project size
TABLE 14-2  Process discriminators that result from differences in stakeholder cohesion
TABLE 14-3  Process discriminators that result from differences in process flexibility
TABLE 14-4  Process discriminators that result from differences in process maturity
TABLE 14-5  Process discriminators that result from differences in architectural risk
TABLE 14-6  Process discriminators that result from differences in domain experience
TABLE 14-7  Schedule distribution across phases for small and large projects
TABLE 14-8  Differences in workflow priorities between small and large projects
TABLE 14-9  Differences in artifacts between small and large projects
TABLE 15-1  Differences in workflow cost allocations between a conventional process and a modern process
TABLE 15-2  Results of major milestones in a modern process
TABLE A-1   Technologies used on software projects
TABLE A-2   Social factors observed on software projects
TABLE A-3   Factors that affect the success of software projects
TABLE B-1   COCOMO project characterization parameters
TABLE B-2   Effort and schedule partition across conventional life-cycle phases
TABLE B-3   Default effort allocations across COCOMO WBS activities
TABLE B-4   Ada COCOMO improvements to the effort adjustment factors
TABLE B-5   Early design model effort adjustment factors
TABLE B-6   COCOMO II post-architecture model updates to Ada COCOMO and COCOMO
TABLE B-7   COCOMO II process exponent parameters
TABLE C-1   Definitions of collected statistics
TABLE C-2   End-product quality metrics
TABLE C-3   Definitions of in-progress indicators
TABLE D-1   CSCI summary
TABLE D-2   A typical component evolution from creation through turnover
TABLE D-3   NAS CSCI metrics summary at month 10
TABLE D-4   CCPDS-R software artifacts
TABLE D-5   Software development file evolution
TABLE D-6   SCO characteristics for build 2 BIT testing
TABLE D-7   Requirements verification work by test type and CSCI
TABLE D-8   Common Subsystem cost expenditures by top-level WBS element
TABLE D-9   Common Subsystem lower level WBS elements
TABLE D-10  Common Subsystem CSCI sizes
TABLE D-11  SLOC-to-ESLOC conversion factors
TABLE D-12  Common Subsystem CSCI sizes in ESLOC
TABLE D-13  CCPDS-R subsystem changes by CSCI
TABLE D-14  Common Subsystem CSCI summary
TABLE D-15  CCPDS-R technology improvements
TABLE E-1   Industry distribution across maturity levels

Foreword
This book blazes the way toward the next generation of software management prac-
tice. Many organizations still cling to the waterfall model because, even with its
shortfalls, it provides the most fully elaborated management guidelines on how to
proceed in a given software situation.
It has been difficult to find a fully articulated alternative management approach
for dealing with such issues as commercial component integration, software reuse,
risk management, and evolutionary/incremental/spiral software processes. This book
provides a new experience-tested framework and set of guidelines on how to proceed.
Walker Royce developed and tested this software management approach during
his inception-to-delivery participation in the large, successful CCPDS-R project per-
formed by TRW for the U.S. Air Force. He then refined and generalized it across a
wide spectrum of government, aerospace, and commercial software development
experiences at Rational.
Chapters 1 through 4 of the book motivate the approach by showing how it
gives you management control of the key software economics leverage points with
respect to traditional software management. These are (1) reducing the amount of
software you need to build, (2) reducing rework via improved processes and team-
work, and (3) reducing the labor-intensiveness of the remaining work via automation.
Chapters 5 through 10 present the specifics of a new organization of the soft-
ware life cycle, which also forms the management basis for Rational’s Unified process.
It combines the flexibility of the spiral model with the discipline of risk management
and a set of major life-cycle phases and milestones. These milestones are focused on
major management commitments to life-cycle courses of action.
As with our Anchor Point approach at USC, the life-cycle objectives milestone
involves a management commitment to engage in a software architecting effort based
on a business case analysis (or not to engage, in which case the project is mercifully
killed). The life-cycle architecture milestone involves a management commitment to
proceed into full-scale development based on establishing and demonstrating a sound
architecture and resolving all major risk items. The initial operational capability mile-
stone involves a management commitment to proceed to beta testing the product with
outside users, or its equivalent.
In these chapters, Royce provides a set of views showing how these milestones
differ from conventional document-oriented or code-oriented milestones. Instead, the
key product artifact sets (requirements, design, implementation, deployment) concur-
rently evolve and coalesce in a manner consistent with the project’s objectives and its
strategies for controlling risk.
In Chapters 10 through 14, Royce addresses how to ensure that the software
project’s management artifacts are also concurrently evolving and coalescing. These
include the project’s plans and associated cost and schedule estimates, the project’s
organization and team-building activities, and the project’s metrics, instrumentation,
and control processes. Chapter 14 is particularly noteworthy. It not only emphasizes
that the management solutions are situation-dependent, it also provides guidelines for
tailoring them to the project's scale, team culture, process maturity, architectural risk,
and domain experience.
In Chapters 15 through 17, Royce looks forward to where the best software
developers are going with their practices: toward product line management, round-
trip engineering, and smaller teams with managers as performers and quality assur-
ance as everyone’s job. Appendixes relate his software management approach to the
current state of the practice, to the COCOMO and COCOMO II family of cost mod-
els, and to the SEI Capability Maturity Model. Appendix D provides a convincing
case study of how the approach was successfully used on the large, technically chal-
lenging CCPDS-R project.
Royce has a refreshing candor about some of the fads, follies, and excesses in the
software field. This comes out particularly in several “pragmatic” sections that
address such topics as software cost estimation, inspections, artifacts, planning, and
metrics. Not everyone will agree with all of his assessments, particularly on inspec-
tions, but they are incisive and thought-provoking.
I feel extremely fortunate to have been able to work with both Walker Royce
and his equally insightful father, Winston Royce; to have learned from their experi-
ences; and to have interacted with them as they evolved their path-breaking ideas.
Barry Boehm
Director, USC Center for Software Engineering
April 1998

Preface
The software industry moves unrelentingly toward new methods for managing the
ever-increasing complexity of software projects. In the past, we have seen evolu-
tions, revolutions, and recurring themes of success and failure. While software technol-
ogies, processes, and methods have advanced rapidly, software engineering remains a
people-intensive process. Consequently, techniques for managing people, technology,
resources, and risks have profound leverage.
This book captures a software management perspective that emphasizes a bal-
anced view of these elements:
* Theory and practice
* Technology and people
* Customer value and provider profitability
* Strategies and tactics
Throughout, you should observe a recurring management theme of paramount
importance: balance. It is especially important to achieve balance among the objec-
tives of the various stakeholders, who communicate with one another in a variety of
languages and notations. Herein is the motivation for the part opener art, an abstract
portrayal of the Rosetta stone. The three fundamental representation languages inher-
ent in software engineering are requirements (the language of the problem space),
design (the transformation languages of software engineers), and realizations (the lan-
guage of the solution space executable on computers). Just as the Rosetta stone enabled
the translation of Egyptian hieroglyphics, software management techniques enable the
translation of a problem statement into a solution that satisfies all stakeholders.
There is no cookbook for software management. There are no recipes for obvi-
ous good practices. I have tried to approach the issues with as much science, realism,
and experience as possible, but management is largely a matter of judgment, (un)com-
mon sense, and situation-dependent decision making. That’s why managers are paid
big bucks.
Some chapters include sections with a pragmatic and often hard-hitting treat-
ment of a particular topic. To differentiate this real-world guidance from the general
process models, techniques, and disciplines, headings of these sections include the
word pragmatic. By pragmatic I mean having no illusions and facing reality squarely,
which is exactly the intent of these sections. They contain strong opinions and pro-
vocative positions, and will strike nerves in readers who are entrenched in some obso-
lete or overhyped practices, tools, or techniques.
I have attempted to differentiate among proven techniques, new approaches,
and obsolete techniques using appropriate substantiation. In most cases, I support my
positions with simple economic arguments and common sense, along with anecdotal
experience from field applications. Much of the material synthesizes lessons learned
(state-of-the-practice) managing successful software projects over the past 10 years.
On the other hand, some of the material represents substantially new (state-of-the-
art), hypothesized approaches that do not have clear substantiation in practice.
I have struggled with whether to position this book as management education or
management training. The distinction may seem nitpicky, but it is important. An
example I heard 15 years ago illustrates the difference. Suppose your 14-year-old
daughter came home from school one day and asked, “Mom and Dad, may I take the
sex education course offered at school?” Your reaction would likely be different if she
asked, “May I take the sex training course offered at school?” (This meant less to me
then than it does now that my three daughters are teenagers!)
Training has an aspect of applied knowledge that makes the knowledge more or
less immediately useful. Education, on the other hand, is focused more on teaching
the principles, experience base, and spirit of the subject, with the application of such
knowledge left to the student. I have tried to focus this book as a vehicle for software
management education. (I am not sure there is such a thing as management training
other than on-the-job experience.) I will not pretend that my advice is directly appli-
cable on every project. Although I have tried to substantiate as many of the position
statements as possible, some of them are left unsubstantiated as pure hypotheses. I
hope my conjecture and advice will stimulate further debate and progress.
My intended audience runs the gamut of practicing software professionals. Pri-
mary target readers are decision makers: those people who authorize investment and
expenditure of software-related budgets. This group includes organization managers,
project managers, software acquisition officials, and their staffs. For this audience, I
am trying to provide directly applicable guidance for use in today's tactical decision
making and tomorrow's strategic investments. Another important audience is soft-
ware practitioners who negotiate and execute software project plans and deliver on
organizational and project objectives.
Style
Because I am writing for a wide audience, I do not delve into technical perspectives or
technical artifacts, many of which are better discussed in other books. Instead, I pro-
vide fairly deep discussions of the economics, management artifacts, work breakdown
strategies, organization strategies, and metrics necessary to plan and execute a success-
ful software project.
Illustrations are included to make these complex topics more understandable.
The precision and accuracy of the figures and tables merit some comment. While most
of the numerical data accurately describe some concept, trend, expectation, or rela-
tionship, the presentation formats are purposely imprecise. In the context of software
management, the difference between precision and accuracy is not as trivial as it may
seem, for two reasons:
1. Software management is full of gray areas, situation dependencies, and
ambiguous trade-offs. It is difficult, if not impossible, to provide an accu-
rate depiction of many concepts and to retain precision of the presentation
across a broad range of domains.
2. Understanding the difference between precision and accuracy is a funda-
mental skill of good software managers, who must accurately forecast esti-
mates, risks, and the effects of change. Unjustified precision—in
requirements or plans—has proven to be a substantial, yet subtle, recurring
obstacle to success.
In many of my numeric presentations, the absolute values are unimportant and quite
variable across different domains and project circumstances. The relative values con-
stitute the gist of most of the figures and tables.
I occasionally provide anecdotal evidence and actual field experience to put the
management approaches into a tangible context and provide relatively accurate and
precise benchmarks of performance under game conditions. Several appendixes clar-
ify how the techniques presented herein can be applied in real-world contexts. My
flagship case study is a thoroughly documented, successful, large-scale project that
provides a concrete example of how well many of these management approaches can
work. It also provides a framework for rationalizing some of the improved processes
and techniques.
Organization
The book is laid out in five parts, each with multiple chapters:
* Part I, Software Management Renaissance. Describes the current state of
software management practice and software economics, and introduces the
state transitions necessary for improved software return on investment.
* Part II, A Software Management Process Framework. Describes the process
primitives and a framework for modern software management, including
the life-cycle phases, artifacts, workflows, and checkpoints.
* Part III, Software Management Disciplines. Summarizes some of the criti-
cal techniques associated with planning, controlling, and automating a
modern software process.
* Part IV, Looking Forward. Hypothesizes the project performance expectations
for modern projects and next-generation software economics, and discusses
the culture shifts necessary for success.
* Part V, Case Studies and Backup Material. Five appendixes provide substan-
tial foundations for some of the recommendations, guidance, and opinions
presented elsewhere.
Acknowledgments
Although my perspective of iterative development has been influenced by many sources,
I have drawn on relatively few published works in writing this book. Providing a more
detailed survey of related publications might have helped some readers and satisfied
some authors, but most of the correlation with my views would be coincidental.
The foundation of my material comes basically from three sources, on which I
have drawn extensively:
1. TRW’s Ada Process Model Guidebook [Royce, Walker, 1989]. I wrote this
guidebook to capture the process description implemented successfully on
a large-scale TRW project so that it could be used throughout TRW.
2. Rational Software Corporation’s software management seminar [Royce,
Walker, 1997]. I wrote this two-day seminar on software best practices to
describe Rational’s software management approach. The peer reviewers for
this material included Don Andres (TRW), Barry Boehm (University of
Southern California), Larry Druffel (Software Engineering Institute), Lloyd
Mosemann (U.S. Air Force), and Winston Royce (TRW), in addition to
numerous field practitioners and executives within Rational. The seminar
was delivered dozens of times in the mid-1990s to a broad range of audi-
ences, including government groups, defense contractors, and commercial
organizations.

3. Rational's Unified process. The acquisition of Objectory by Rational
resulted in a large internal investment to merge the techniques of the
Objectory process (focused on use-case-driven techniques) and the existing
Rational process (focused on management techniques and object-oriented
modeling). This investment is on-going, as Rational continues to broaden
the process description and prescription across more of the life-cycle activi-
ties, tools, and methods, resulting in the Unified process.
Several other sources had a significant effect on the management process pre~
sented in this book. Their influence is the result of long-term relationships that encap-
sulate years of interaction, exchange of ideas, and extensive firsthand communication.
My association with Barry Boehm over the past 15 years has been a rich
source of software engineering knowledge.
Don Andres's extraordinary leadership and project management expertise
set him apart from the many project managers I have worked for and with,
and I have learned much from him.
Dave Bernstein, Robert Bond, Mike Devlin, Kevin Haar, Paul Levy, John
Lovitt, and Joe Marasco, senior managers at Rational, have evolved a nim-
ble company with a clear vision of software engineering as a business.
Philippe Kruchten’s work on software architecture and process frame-
works, as well as his own field experience, has helped gel many of my per-
spectives and presentations.
Grady Booch, Ivar Jacobson, and Jim Rumbaugh, Rational’s three senior
methodologists, have done the software engineering community a great
service in defining the Unified Modeling Language.
Hundreds of dedicated software professionals in the Rational field organi-
zation have been responsible for delivering value to software projects and
transitioning software engineering theory into practice.
The most important influence on this work was my father, Winston Royce, who
set my context, validated my positions, critiqued my presentation, and strengthened
my resolve to take a provocative stand and stimulate progress.
Several people invested their own time reviewing early versions of my manu-
script and contributing to the concepts, presentation, and quality contained herein.
My special thanks go to Ali Ali, Don Andres, Peter Biche, Barry Boehm, Grady
Booch, Doug Ishigaki, Ivar Jacobson, Capers Jones, Hartmut Kocher, Philippe
Kruchten, Eric Larsen, Joe Marasco, Lloyd Mosemann, Roger Oberg, Rich Reitman,
Jim Rumbaugh, and John Smith.
Finally, the overall presentation quality, consistency, and understandability of
this material are substantially the work of Karen Ailor. Her critique, sense of organi-
zation, attention to detail, and aggressive nitpicking contributed greatly to the overall
substance captured in this book.

SOFTWARE MANAGEMENT RENAISSANCE

CHAPTER 1   CONVENTIONAL SOFTWARE MANAGEMENT
CHAPTER 2   EVOLUTION OF SOFTWARE ECONOMICS
CHAPTER 3   IMPROVING SOFTWARE ECONOMICS
CHAPTER 4   THE OLD WAY AND THE NEW

The software industry is experiencing a renaissance. Many ingrained software
engineering principles are going by the wayside, obsolesced by new technol-
ogy or replaced by better techniques or advanced levels of automation.
No matter what the discipline, it is important that the practitioner under-
stand the current state before attempting to transition to a new one. Before con-
sidering a software management framework for the future, it is necessary to
understand where the industry is today and how it got here.
In the past 10 years, I have participated in the software process improve-
ment efforts of several Fortune 500 companies. Typical goals of these efforts are
to achieve a 2X, 3X, or 10X increase in productivity, quality, time to market, or
some combination of all three, where X corresponds to how well the company
does now. The funny thing is that many of these organizations have no idea
what X is, in objective terms.
The chapters in Part | introduce the state of the practice in the software
industry and define the X associated with the conventional software manage-
ment process.

CHAPTER 1
Conventional Software Management
KEY POINTS

▲ Conventional software management practices are mostly sound in theory, but
practice is still tied to archaic technology and techniques.

▲ Conventional software economics provides a benchmark of performance for
conventional software management principles.

The best thing about software is its flexibility: It can be programmed to do almost
anything. The worst thing about software is also its flexibility: The “almost anything”
characteristic has made it difficult to plan, monitor, and control software development.
This unpredictability is the basis of what has been referred to for the past 30 years as
the “software crisis.”
In the mid-1990s, at least three important analyses of the state of the software
engineering industry were performed. The results were presented in Patterns of Soft-
ware Systems Failure and Success [Jones, 1996], in “Chaos” [Standish Group, 1995],
and in Report of the Defense Science Board Task Force on Acquiring Defense Software
Commercially [Defense Science Board, 1994]. Appendix A highlights some of the rele-
vant results.
All three analyses reached the same general conclusion: The success rate for soft-
ware projects is very low. Although the analyses had some differing perspectives, their
primary messages were complementary and consistent. They can be summarized as
follows:
1. Software development is still highly unpredictable. Only about 10% of
software projects are delivered successfully within initial budget and sched-
ule estimates.
2. Management discipline is more of a discriminator in success or failure than
are technology advances.
3. The level of software scrap and rework is indicative of an immature process.
The three analyses provide a good introduction to the magnitude of the software
problem and the current norms for conventional software management performance.
There is much room for improvement.
The remainder of this chapter summarizes the software management process
framework that most conventional software projects have used. While this frame-
work, known as the waterfall model, has many derivatives, it is the baseline process
for most of the software project experience amassed to date. And while it is dangerous
to generalize, it is important to lay out a good context for the process improvement
techniques discussed throughout this book.
1.1 THE WATERFALL MODEL
Most software engineering texts present the waterfall model as the source of the “con-
ventional” software process. I regard it more as a benchmark of that process. This
section examines and critiques the waterfall model theory, then looks at how most of
the industry has practiced the conventional software process. In reality, although the
industry has ignored much of the theory, it has still managed to evolve many good
practices (and some not-so-good practices), especially when they are used with mod-
ern technologies.
1.1.1 IN THEORY
In 1970, my father, Winston Royce, presented a paper titled “Managing the Develop-
ment of Large Scale Software Systems” at IEEE WESCON [Royce, Winston, 1970].
This paper, based on lessons he had learned managing large software projects,
remains the most quoted source of the waterfall model. It provides an insightful and
concise summary of conventional software management philosophy circa 1970, and
most of its 30-year-old advice has stood the test of time in the face of immense tech-
nology turnover.
The paper made three primary points. (Quotations and paraphrased statements
are presented in italics.)
1. There are two essential steps common to the development of computer
programs: analysis and coding.
2. In order to manage and control all of the intellectual freedom associated
with software development, one must introduce several other “overhead”
steps, including system requirements definition, software requirements def-
inition, program design, and testing. These steps supplement the analysis
and coding steps. Figure 1-1 illustrates the resulting project profile and the
basic steps in developing a large-scale program.
[Figure: the waterfall model in three parts. Part 1 shows the two basic steps to building a program, analysis and coding, both of which involve creative work that directly contributes to the usefulness of the end product. Part 2 shows the large-scale system approach, a cascade from system requirements and software requirements through analysis, program design, coding, and testing. Part 3 lists five necessary improvements for this approach to work: (1) complete program design before analysis and coding begin; (2) maintain current and complete documentation; (3) do the job twice, if possible; (4) plan, control, and monitor testing; (5) involve the customer.]

FIGURE 1-1. The waterfall model
3. The basic framework described in the waterfall model is risky and invites
failure. The testing phase that occurs at the end of the development cycle is
the first event for which timing, storage, inputloutput transfers, etc., are
experienced as distinguished from analyzed. The resulting design changes
are likely to be so disruptive that the software requirements upon which
the design is based are likely violated. Either the requirements must be
modified or a substantial design change is warranted.
Item 1, which is seemingly trivial, will be expanded later into one of my overall
management themes: the separation of the engineering stage from the production
stage.
Seven of the article’s nine pages are devoted to describing five improvements to
the basic waterfall process that would eliminate most of the development risks
alluded to in item 3. These five improvements are presented next. (Quotations and
paraphrased statements are presented in italics, followed by my comments, in the con-
text of today’s technology and terminology.)
1. Program design comes first. The first step toward a fix is to insert a prelim-
inary program design phase between the software requirements generation
phase and the analysis phase. By this technique, the program designer
assures that the software will not fail because of storage, timing, and data
flux. As analysis proceeds in the succeeding phase, the program designer
must impose on the analyst the storage, timing, and operational constraints
in such a way that he senses the consequences. If the total resources to be
applied are insufficient or if the embryonic operational design is wrong, it
will be recognized at this early stage and the iteration with requirements
and preliminary design can be redone before final design, coding, and test
commences. How is this program design procedure implemented? The fol-
lowing steps are required:
Begin the design process with program designers, not analysts or pro-
grammers.
Design, define, and allocate the data processing modes even at the risk of
being wrong. Allocate processing functions, design the database, allo-
cate execution time, define interfaces and processing modes with the
operating system, describe input and output processing, and define pre-
liminary operating procedures.
Write an overview document that is understandable, informative, and
current so that every worker on the project can gain an elemental under-
standing of the system.
▲ The essence of the process framework I present in later chapters is architecture-
first development. Although a few terms may be changed (for example, architecture
is used instead of program design), the essence of a modern process is congruent
with the explanation given here. As described later, the architecture comes first and
is designed and developed in parallel with planning and requirements definition as
part of the engineering stage of a project.
2. Document the design. The amount of documentation required on most
software programs is quite a lot, certainly much more than most program-
mers, analysts, or program designers are willing to do if left to their own
devices. Why do we need so much documentation? (1) Each designer must
communicate with interfacing designers, managers, and possibly custom-
ers. (2) During early phases, the documentation is the design. (3) The real
monetary value of documentation is to support later modifications by a
separate test team, a separate maintenance team, and operations personnel
who are not software literate.
▲ If we ignore the technological inadequacies of the time frame in which the paper
was written, the essence of this "document the design" message is still valid. Under-
standable representations of the artifacts, accessible by all stakeholders and teams,
are essential. However, major advances in notations, languages, browsers, tools,
and methods have rendered the need for many of the documents obsolete. In later
chapters, I argue at length that to focus on documentation is wrong and counter-
productive. This is because today's technologies support rigorous and self-docu-
menting notations for requirements, designs, and implementations.
3. Do it twice. If a computer program is being developed for the first time,
arrange matters so that the version finally delivered to the customer for
operational deployment is actually the second version insofar as critical
design/operations are concerned. Note that this is simply the entire process
done in miniature, to a time scale that is relatively small with respect to the
overall effort. In the first version, the team must have a special broad com-
petence where they can quickly sense trouble spots in the design, model
them, model alternatives, forget the straightforward aspects of the design
that aren't worth studying at this early point, and, finally, arrive at an
error-free program.
▲ This is a concise and simplistic description of architecture-first development, in
which an architecture team is responsible for the initial engineering. Generalizing
this practice, as I do later, results in a "do it N times" approach that is a principle of
modern-day iterative development.
Without this first pass, the project manager is at the mercy of human judg-
ment. With this first-pass "simulation," he can at least perform experimen-
tal test of some key hypotheses and scope down what remains for human
judgment, which in the area of computer program design (as in the estima-
tion of takeoff gross weight, costs to complete, or the daily double) is
invariably and seriously optimistic.
▲ This is a great description of the spirit of iterative development and its inherent
advantages for risk management.
4. Plan, control, and monitor testing. Without question, the biggest user of
project resources—manpower, computer time, andlor management judg-
ment—is the test phase. This is the phase of greatest risk in terms of cost
and schedule. It occurs at the latest point in the schedule, when backup
alternatives are least available, if at all. The previous three recommenda-
tions were all aimed at uncovering and solving problems before entering
the test phase. However, even after doing these things, there is still a test
phase and there are still important things to be done, including: (1) employ
a team of test specialists who were not responsible for the original design;
(2) employ visual inspections to spot the obvious errors like dropped minus
signs, missing factors of two, jumps to wrong addresses (do not use the
computer to detect this kind of thing, it is too expensive); (3) test every
logic path; (4) employ the final checkout on the target computer.
▲ Here we have some good advice and some obsolete advice. Items 1 and 4, still
good advice, are discussed at length in later chapters. Item 2 is still a popular quality
assurance fad (use software inspections), but its purpose as presented here is mostly
obsolete. Although it may have been a good, cost-effective practice using 1970
technology, it is not today. Computers, compilers, analyzers, and other tools are far
more efficient mechanisms for catching obvious errors. As for item 3, testing every
logic path was difficult enough in 1970, without the added complexity of distribu-
tion, reusable components, and several other complicating factors. It is certainly not
feasible with most of today’s systems. This is especially true with distributed comput-
ing, in which, with time as an additional dimension, there are an infinite number of
logic paths. In a modern process, testing is a life-cycle activity that, when executed
properly, requires fewer total resources and uncovers issues far earlier in the life
cycle, when backup alternatives can still be used.
5. Involve the customer. For some reason, what a software design is going to
do is subject to wide interpretation, even after previous agreement. It is
important to involve the customer in a formal way so that he has committed
himself at earlier points before final delivery. There are three points follow-
ing requirements definition where the insight, judgment, and commitment
of the customer can bolster the development effort. These include a “prelim-
inary software review” following the preliminary program design step, a
sequence of “critical software design reviews” during program design, and a
“final software acceptance review” following testing.
▲ This insight has been pursued for many years and, where practiced, has produced
positive results. Involving the customer with early demonstrations and planned
alpha/beta releases is a proven, valuable technique.
I have always been overwhelmed by the insight presented in this paper. While
most of the industry has spent considerable energy bashing the waterfall model
approach, I find only minor flaws in the theory even when it is applied in the context
of today’s technology. The criticism should have been targeted at the practice of the
approach, which incorporated various unsound and unworkable elements. I suspect
that most critics never really understood this theory; they just understood the default
practice.
Throughout this book, I refer to the past and current practice of the waterfall
model approach, discussed next, as the “conventional” software management
approach or process. I argue that it is no longer a good framework for modern soft-
ware engineering practices and technologies, and I use it as the reality benchmark to
rationalize an improved process that eliminates some of its fundamental flaws.
1.1.2 IN PRACTICE
Despite the advice of many software experts and the theory behind the waterfall
model, some software projects still practice the conventional software management
approach. However, because its use is declining and was far more prevalent in the
past, I refer to it in the past tense throughout.
It is useful to summarize the characteristics of the conventional process as it has
typically been applied, which is not necessarily as it was intended. Projects destined
for trouble frequently exhibit the following symptoms:
* Protracted integration and late design breakage
* Late risk resolution
* Requirements-driven functional decomposition
* Adversarial stakeholder relationships
* Focus on documents and review meetings
Protracted Integration and Late Design Breakage
For a typical development project that used a waterfall model management process,
Figure 1-2 illustrates development progress versus time. Progress is defined as percent
coded, that is, demonstrable in its target form. (The software was compilable and exe-
cutable; it was not necessarily complete, compliant, nor up to specifications.) The fol-
lowing sequence was common:
* Early success via paper designs and thorough (often too thorough) briefings
* Commitment to code late in the life cycle
* Integration nightmares due to unforeseen implementation issues and interface ambiguities
* Heavy budget and schedule pressure to get the system working
* Late shoe-horning of nonoptimal fixes, with no time for redesign
* A very fragile, unmaintainable product delivered late
[Figure: development progress (percent coded) plotted against the project schedule for the sequential activities requirements, design, coding, integration, and testing. The formats evolve from ad hoc text and flowcharts to configuration baselines; the products evolve from documents, to coded units, to fragile baselines; the activities run from requirements analysis and program design through coding and unit testing into a protracted integration and testing period, with late design breakage appearing around the original target date.]

FIGURE 1-2. Progress profile of a conventional software project
TABLE 1-1. Expenditures by activity for a conventional software project

    ACTIVITY                    COST
    Management                    5%
    Requirements                  5%
    Design                       10%
    Code and unit testing        30%
    Integration and test         40%
    Deployment                    5%
    Environment                   5%
    Total                       100%
Given the immature languages and technologies used in the conventional
approach, there was substantial emphasis on perfecting the “software design” before
committing it to the target programming language, where it was difficult to under-
stand or change. This practice resulted in the use of multiple formats (requirements in
English, preliminary design in flowcharts, detailed design in program design languages,
and implementations in the target language, such as FORTRAN, COBOL, or
C) and error-prone, labor-intensive translations between formats.
Conventional techniques that imposed a waterfall model on the design process
inevitably resulted in late integration and performance showstoppers. In the conven-
tional model, the entire system was designed on paper, then implemented all at once,
then integrated. Only at the end of this process was it possible to perform system test-
ing to verify that the fundamental architecture (interfaces and structure) was sound.
One of the recurring themes of projects following the conventional process was that
testing activities consumed 40% or more of life-cycle resources. Table 1-1 provides a
typical profile of cost expenditures across the spectrum of software activities.
Late Risk Resolution
A serious issue associated with the waterfall life cycle was the lack of early risk resolu-
tion. This was not so much a result of the waterfall life cycle as it was of the focus on
early paper artifacts, in which the real design, implementation, and integration risks
were still relatively intangible. Figure 1-3 illustrates a typical risk profile for conven-
tional waterfall model projects. It includes four distinct periods of risk exposure,
where risk is defined as the probability of missing a cost, schedule, feature, or quality
goal.

[Figure 1-3 plots risk, from low to high, against the project life cycle and shows four successive periods of risk exposure: a risk exploration period, a risk elaboration period, a focused risk resolution period, and a controlled risk management period.]

FIGURE 1-3. Risk profile of a conventional software project across its life cycle

Early in the life cycle, as the requirements were being specified, the actual risk
exposure was highly unpredictable. After a design concept was available to balance
the understanding of the requirements, even if it was just on paper, the risk exposure
stabilized. However, it usually stabilized at a relatively high level because there were
too few tangible facts for a software manager to achieve an objective assessment. As
the system was coded, some of the individual component risks got resolved. Then
integration began, and the real system-level qualities and risks started becoming tan-
gible. It was usually during this period that many of the real design issues were
resolved and engineering trade-offs were made. However, resolving these issues late in
the life cycle, when there was great inertia inhibiting changes to the mass of artifacts,
was very expensive. Consequently, projects tended to have a protracted integration
phase (as illustrated in Figure 1-2) as major redesign initiatives were implemented.
This process tended to resolve the important risks, but not without sacrificing the
quality of the end product, especially its maintainability. I use the term redesign
loosely. Most of this effort would be described better as shoe-horning late fixes and
patches into the existing implementation so that the overall resolution effort was min-
imized. These sorts of changes did not conserve the overall design integrity and its
corresponding maintainability.
Requirements-Driven Functional Decomposition
Traditionally, the software development process has been requirements-driven: An
attempt is made to provide a precise requirements definition and then to implement
exactly those requirements. This approach depends on specifying requirements com-
pletely and unambiguously before other development activities begin. It naively treats
all requirements as equally important, and depends on those requirements remaining
constant over the software development life cycle. These conditions rarely occur in
the real world. Specification of requirements is a difficult and important part of the
software development process. As discussed in Appendix A, virtually every major
software program suffers from severe difficulties in requirements specification. More-
over, the equal treatment of all requirements drains away substantial numbers of engi-
neering hours from the driving requirements and wastes those efforts on paperwork
associated with traceability, testability, logistics support, and so on—paperwork that
is inevitably discarded later as the driving requirements and subsequent design under-
standing evolve.
As an example, consider a large-scale project such as CCPDS-R, presented as a
case study in Appendix D, where the software requirements included 2,000 shalls. (A
shall is a discrete requirement such as "the system shall tolerate all single-point hard-
ware failures with no loss of critical capabilities.") Dealing adequately with the design
drivers in such systems (typically only 20 to 50 of the shalls) is difficult when the con-
tractual standards require that all 2,000 shalls be defined first and dealt with at every
major milestone. The level of engineering effort that can be expended on the important
design issues is significantly diluted by carrying around the excess baggage of more
than 1,950 shalls and dealing with traceability, testability, documentation, and so on.
Another property of the conventional approach is that the requirements were
typically specified in a functional manner. Built into the classic waterfall process was
the fundamental assumption that the software itself was decomposed into functions;
requirements were then allocated to the resulting components. This decomposition
was often very different from a decomposition based on object-oriented design and
the use of existing components. The functional decomposition also became anchored
in contracts, subcontracts, and work breakdown structures, often precluding a more
architecture-driven approach. Figure 1-4 illustrates the result of requirements-driven
approaches: a software structure that is organized around the requirements specifica-
tion structure.
Adversarial Stakeholder Relationships
The conventional process tended to result in adversarial stakeholder relationships, in
large part because of the difficulties of requirements specification and the exchange of
information solely through paper documents that captured engineering information
in ad hoc formats. The lack of rigorous notation resulted mostly in subjective reviews
and opinionated exchanges of information.16 CONVENTIONAL SOFTWARE MANAGEMENT
FIGURE 1-4. Suboptimal software component organization resulting from a requirements-driven approach
The following sequence of events was typical for most contractual software
efforts:
1. The contractor prepared a draft contract-deliverable document that cap-
tured an intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30
days).

3. The contractor incorporated these comments and submitted (typically
within 15 to 30 days) a final version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of
customers and contractors. The overhead of such a paper exchange review process
was intolerable. This approach also resulted in customer-contractor relationships
degenerating into mutual distrust, making it difficult to achieve a balance among
requirements, schedule, and cost.
Focus on Documents and Review Meetings
The conventional process focused on producing various documents that attempted to
describe the software product, with insufficient focus on producing tangible incre-
ments of the products themselves. Major milestones were usually implemented as
ceremonious meetings defined solely in terms of specific documents.

TABLE 1-2. Results of conventional software project design reviews

APPARENT RESULTS                         REAL RESULTS
Big briefing to a diverse audience       Only a small percentage of the audience understands
                                         the software. Briefings and documents expose few of
                                         the important assets and risks of complex software
                                         systems.
A design that appears to be compliant    There is no tangible evidence of compliance.
                                         Compliance with ambiguous requirements is of
                                         little value.
Coverage of requirements (typically      Few (tens) are design drivers. Dealing with all
hundreds)                                requirements dilutes the focus on the critical drivers.
A design considered "innocent until      The design is always guilty. Design flaws are
proven guilty"                           exposed later in the life cycle.

Contractors were
driven to produce literally tons of paper to meet milestones and demonstrate progress
to stakeholders, rather than spend their energy on tasks that would reduce risk and
produce quality software. Typically, presenters and the audience reviewed the simple
things that they understood rather than the complex and important issues. Most
design reviews therefore resulted in low engineering value and high cost in terms of
the effort and schedule involved in their preparation and conduct. They presented
merely a facade of progress. Table 1-2 summarizes the results of a typical design
review.
Diagnosing the five symptoms of projects headed for trouble (just discussed) can
be difficult, especially in early phases of the life cycle when problems with the conven-
tional approach would have been most easily cured. Consequently, modern software
projects must use mechanisms that assess project health in early life-cycle phases and
that continue with objective, periodic checkups.
1.2 CONVENTIONAL SOFTWARE MANAGEMENT
PERFORMANCE
Barry Boehm's one-page "Industrial Software Metrics Top 10 List" [Boehm, 1987] is
a good, objective characterization of the state of software development. (There is very
little evidence of significant changes in the past decade.) Although many of the metrics
are gross generalizations, they accurately describe some of the fundamental economic
relationships that resulted from the conventional software process practiced over the
past 30 years.
In the following paragraphs, quotations from Boehm’s top 10 list are presented
in italics, followed by my comments.
1. Finding and fixing a software problem after delivery costs 100 times more
than finding and fixing the problem in early design phases.
This metric dominates the rationale for virtually every dimension of process
improvement discussed in this or any other book. It is not unique to software devel-
opment. When one of the big automobile companies implements a recall for a post-
delivery defect, the cost of repair can be many orders of magnitude greater than the
cost of fixing the defect during the engineering or production stage.
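To make the economics concrete, here is a minimal arithmetic sketch in Python. Only the 100:1 cost ratio comes from the metric; the defect count and the early-detection fractions are hypothetical values chosen purely for illustration.

    # Illustrative sketch of metric 1. The 100:1 ratio is from the metric;
    # the defect count and early-detection fractions below are hypothetical.
    COST_EARLY = 1.0    # relative cost to fix a defect during early design
    COST_LATE = 100.0   # relative cost to fix the same defect after delivery

    def total_fix_cost(defects, fraction_found_early):
        """Total relative cost of fixing all defects, given how many are caught early."""
        early = defects * fraction_found_early
        late = defects - early
        return early * COST_EARLY + late * COST_LATE

    print(total_fix_cost(100, 0.90))   # 90 early, 10 after delivery -> 1090.0 cost units
    print(total_fix_cost(100, 0.99))   # 99 early,  1 after delivery ->  199.0 cost units

Even a handful of defects that escape to the field dominates the total repair bill, which is why early defect detection anchors most process-improvement arguments.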
2. You can compress software development schedules 25% of nominal, but no more.
One reason for this is that an N% reduction in schedule would require an M%
increase in personnel resources (assuming that other parameters remain fixed). Any
increase in the number of people requires more management overhead. In general,
the limit of flexibility in this overhead, along with scheduling concurrent activities,
conserving sequential activities, and other resource constraints, is about 25%. Opti-
mally, a 100-staff-month effort may be achievable in 10 months by 10 people.
Could the job be done in one month with 100 people? Two months with 50 people?
How about 5 months with 20 people? Clearly, these alternatives are unrealistic. The
25% compression metric says that the limit in this case is 7.5 months (and would
require additional staff-months, perhaps as many as 20). Any further schedule com-
pression is doomed to fail. On the other hand, an optimal schedule could be
extended almost arbitrarily and, depending on the people, could be performed in a
much longer time with many fewer staff resources. For example, if you have the luxury
of a 25-month schedule, you may need only 75 staff-months and three people.
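The following sketch works through this arithmetic. The 25% floor and the example figures (a 100-staff-month, 10-month nominal plan, with perhaps 20 extra staff-months under maximum compression) come from the discussion above; modeling the extra effort as a flat 20% penalty is an assumption for illustration, not a calibrated estimating model.

    # Illustrative sketch of the 25% schedule-compression limit (metric 2).
    # The 20% effort penalty is an assumed parameter, not part of the metric.
    def compression_limit(nominal_months, nominal_staff_months, effort_penalty=0.20):
        """Return (minimum achievable schedule, rough effort at that schedule)."""
        min_schedule = 0.75 * nominal_months                         # 25% compression floor
        compressed_effort = nominal_staff_months * (1 + effort_penalty)
        return min_schedule, compressed_effort

    schedule, effort = compression_limit(nominal_months=10, nominal_staff_months=100)
    print(f"Schedule floor: {schedule} months, effort: about {effort:.0f} staff-months")
    # -> Schedule floor: 7.5 months, effort: about 120 staff-months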
3. For every $1 you spend on development, you will spend $2 on maintenance.
Boehm calls this the "iron law of software development." Whether you build a
long-lived product that undergoes commercial version upgrades twice a year or
build a one-of-a-kind custom software system, twice as much money will probably
be spent over the maintenance life cycle than was spent in the development life
cycle. It is hard to tell at first whether this relationship is good or bad. In the com-
mercial product domain, the primary driver of this relationship is the product's suc-
cess in the marketplace. Successful software products (such as Oracle, Microsoft
applications, Rational Rose, and the UNIX operating system) are very long lived and
can result in much higher ratios of maintenance cost to development cost. Manag-
ers of one-of-a-kind software projects, on the other hand, rarely plan to expend this
much on software maintenance. In either case, anyone working in the software
industry over the past 10 to 20 years knows that most of the software in operation is
considered to be difficult to maintain.
4. Software development and maintenance costs are primarily a function of
the number of source lines of code.
This metric is primarily the result of the predominance of custom software devel-
opment, lack of commercial componentry, and lack of reuse inherent in the era of
the conventional process.
5. Variations among people account for the biggest differences in software
productivity.
This is a key piece of conventional wisdom: Hire good people. This metric is a sub-
ject of both overhype and underhype. When you don't know objectively why you
succeeded or failed, the obvious scapegoat is the quality of the people. This judg-
ment is subjective and difficult to challenge.
6. The overall ratio of software to hardware costs is still growing. In 1955 it
was 15:85; in 1985, 85:15.
The fact that software represents 85% of the cost of most systems is not so much
a statement about software productivity (which is, arguably, not as good as we
want) as it is about the level of functionality being allocated to software in system
solutions. The need for software, its breadth of applications, and its complexity con-
tinue to grow almost without limits.
7. Only about 15% of software development effort is devoted to programming.
This is an important indicator of the need for balance. Many activities besides
coding are necessary for software project success. Requirements management,
design, testing, planning, project control, change management, and toolsmithing
are equally important considerations that consume roughly 85% of the resources.
8. Software systems and products typically cost 3 times as much per SLOC as
individual software programs. Software-system products (i.e., systems of
systems) cost 9 times as much.
This exponential relationship is the essence of what is called diseconomy of scale.
Unlike other commodities, the more software you build, the more expensive it is per
source line.
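A small sketch of this diseconomy of scale follows. The 3x and 9x per-SLOC multipliers come from the metric itself; the baseline cost per source line and the 100,000-SLOC size are hypothetical figures used only to make the comparison concrete.

    # Illustrative sketch of metric 8 (diseconomy of scale).
    # Only the 3x and 9x multipliers are from the metric; the baseline
    # cost per SLOC and the program size are hypothetical.
    PER_SLOC_MULTIPLIER = {
        "individual program": 1,
        "software system or product": 3,
        "software-system product (system of systems)": 9,
    }

    def estimated_cost(sloc, category, baseline_cost_per_sloc=50.0):
        """Rough cost estimate: size x baseline cost per line x category multiplier."""
        return sloc * baseline_cost_per_sloc * PER_SLOC_MULTIPLIER[category]

    for category in PER_SLOC_MULTIPLIER:
        print(f"{category}: ${estimated_cost(100_000, category):,.0f}")
    # The same 100,000 SLOC costs 3x or 9x more per line as integration scope grows.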
9. Walkthroughs catch 60% of the errors.
This may be true. However, given metric 1, walkthroughs are not catching the
errors that matter and certainly are not catching them early enough in the life cycle.
All defects are not created equal. In general, walkthroughs and other forms of
human inspection are good at catching surface problems and style issues. If you are
using ad hoc design notations, human review may be your primary quality assurance