
Running Head: QUANTITATIVE ANALYSIS METHODS

Quantitative Analysis Methods for

Information Technology

Project Management Processes

Gabriel Tocci

Abstract. According to the literature on project management, software projects are costly

endeavors that can fail—often catastrophically. To reduce the risk of failure, various authorities have

proposed statistical methodologies for software project management. The need for statistical project

control is suggested, in part, by studies that compare the results of outsourcing to companies with weak

and strong statistical control processes, as measured by the Capability Maturity Model (CMM), a

common metric for assessing software process quality. Work on statistical methodologies includes

studies by Sauer et al., which propose internal company expectation benchmarks based on project size

and volatility; a study by Xia and Lee on Information Systems Development Projects (ISDPs), which

proposes strategies for preventing and resolving project complexities; and a study by Keil et al. on risk

identification frameworks, which describes a universal set of software development project risks and a

high level framework that may be used to create effective risk mitigation strategies.

Introduction

Project Management is the process of planning, organizing, monitoring and adjusting a project’s

direction (Tsui). A 2001 report showed that the US spends 2.3 trillion dollars on projects every year, an

amount equal to 25% of the nation’s gross domestic product (PMI). The failure to successfully complete

major software projects has even proved fatal to their sponsoring organizations, including American

LaFrance and FoxMeyer Drugs.

Standish Report

The Standish Group International, Inc. is a research advisory firm founded in 1989 and

headquartered in Massachusetts (Standish, 1994). The Standish Group is best known for a series of

reports on failures in the software development industry. The first of these reports, released in 1994,

asserted that the software development industry is in a state of chaos. The report found that 31% of the

projects that respondents undertook were cancelled before completion. Moreover, 52% of projects cost

189% of their original estimated budgets (Standish, 1994). The 1994 report presents a series of

case studies, including a software project failure in the Denver airport baggage-handling system that

cost taxpayers 1.1 million dollars per day (Standish, 1994). The report estimated that in 1995 over 81 billion dollars would

be spent on cancelled software projects in the United States (Standish, 1994).

A follow-on report, published in 2003, shows some major improvements since the 1994 report.

Here, the Standish Group reported that 15% of all software projects were failing, with an average cost

overrun reduced to 43% (PMI). The total dollar estimate for cancelled software projects in the US was

reduced to 38 billion dollars (PMI).

Trilogy

From September 2000 to April 2005, The Federal Bureau of Investigation (FBI) spent over 100

million dollars on a software system that has since been discarded. This system, alternately known as the

Virtual Case File (VCF) system and Trilogy, was intended as a response to the 9/11 commission’s

observation that the FBI had failed to correlate data that might have helped prevent the 9/11 hijackings.

The FBI’s current system, Automated Case Support (ACS), is a command line driven application that is

supported by an antiquated German database. Its interface was deemed unusable by many FBI agents

(Goldstein).

The Trilogy initiative attempted to upgrade the ACS system’s hardware, software, and

networking support. The hardware and networking portions were contracted to DynCorp and the software

portion was contracted to the Science Applications International Corporation (SAIC).

In the face of intense public and congressional pressure, the project manager made an extremely

poor decision to move the deadline up, from June 2004 to December 2003 (Goldstein). In March 2004 an

arbitrator found 59 key problems with the project, including 40 key development errors by SAIC and 19

changes in system requirements by the FBI (Goldstein).

Various reasons have been given for the project’s failure. A blue ribbon panel of business leaders,

lawyers and academics known as the Markle Foundation Task Force on National Security concluded that,

“among other things, [the FBI] has failed to develop an adequate strategic plan, has no comprehensive

strategic human capital plan, has personnel with inadequate language skills, antiquated computer

hardware and software, no enterprise architecture and several disabling cultural traditions” (Kumagai).

These root causes produced many of the missteps along Trilogy’s development path. The four basic

project management activity sets—planning, organizing, monitoring and adjusting—were not followed

(Tsui). Management’s incompetence was rampant, and the FBI failed to establish controls and maintain

accountability throughout the Trilogy project.

Capability Maturity Model

Software development problems arise because the projects are often complex, multifaceted, and

in a constant state of change. Guiding these projects successfully requires effective project

management processes and controls. Authorities have described a variety of quantitative strategies for

increasing the likelihood of project success.

A common starting point for using metrics to assess the quality of an organization’s software

managerial processes is the Capability Maturity Model (CMM). The CMM was developed by The

Software Engineering Institute at Carnegie Mellon University in 1987 at the request of the United States

Department of Defense to create software improvement practices. CMM is a five level model used as a

scale to evaluate the software development process maturity of an organization. It helps software

development organizations identify process weaknesses and defines specific areas where improvements

can be made.

• Level 1 – The Initial Level – At this level the software development practices are unstable and the
resulting products are unpredictable.
• Level 2 – The Repeatable Level – At this level the organization is employing project management
practices that allow the organization to repeat past successful processes and procedures. Key
development processes required to be measured at this level include requirements management,
project planning, project tracking, software quality assurance, and configuration management.
• Level 3 – The Defined Level – At this level the organization has defined and documented
software engineering and project management practices. Key processes required to be measured
at this level include organizational process definition, formal training programs, intergroup
coordination and peer reviews.
• Level 4 – The Managed Level – At this level the organization focuses on productivity, quality
and statistical assessment. The key process required to be measured at this level is quantitative
process management.
• Level 5 – The Optimizing Level – At this level the organization focuses on continuous process
improvement. The organization is concerned with identifying process weaknesses and product
defects and improving upon them. Key processes required to be measured at
this level include defect prevention, technology change management and process change
management. (Biberoglu)
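
The five levels and their key process areas can be captured in a small lookup table. The sketch below is an illustrative Python representation of the scale as described above; the dictionary layout is this survey’s own, not part of the CMM specification.

    # Illustrative summary of the CMM levels described above.
    CMM_LEVELS = {
        1: ("Initial", []),
        2: ("Repeatable", ["requirements management", "project planning",
                           "project tracking", "software quality assurance",
                           "configuration management"]),
        3: ("Defined", ["organizational process definition", "formal training programs",
                        "intergroup coordination", "peer reviews"]),
        4: ("Managed", ["quantitative process management"]),
        5: ("Optimizing", ["defect prevention", "technology change management",
                           "process change management"]),
    }

    name, areas = CMM_LEVELS[4]
    print(name, areas)  # Managed ['quantitative process management']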
Case studies have shown that the higher the CMM level, the better the overall performance of the

development organization (Biberoglu). Two of these studies are described below.

CMM in Korea

In “Software development risk and project performance measurement: Evidence in Korea”, Na

uses measures of Information Technology (IT) performance to validate the author’s three-part model of

software project risk (Figure 1). Na’s model is based on eleven hypotheses about risk and

organizational performance:
H1: An increase in standardization will be directly associated with a decrease in residual performance risk.
H2: An increase in requirements uncertainty will be directly associated with an increase in residual performance risk.
H3: An increase in residual performance risk will be positively associated with cost overrun.
H4: An increase in residual performance risk will be positively associated with schedule overrun.
H5: An increase in functional development risk will be directly associated with an increase in system development risk.
H6: An increase in system development risk will be positively associated with cost overrun.
H7: An increase in system development risk will be positively associated with schedule overrun.
H8: An increase in system development risk will be positively associated with system development risk.
H9: Increases in system development risk will be associated with a decrease in process performance.
H10: Increases in system development risk will be associated with a decrease in product performance.
H11: Increases in process performance will be associated with an increase in product performance.

Data for Na’s study were gathered from Korean IT firms with over 25,000 employees. Large

companies were examined because “large scale organizations provide an appropriate setting for this study

because they require extensive cooperation, communication and autonomy to successfully complete

projects” (Na). Historically, IT risk management studies have focused on mature IT development

companies in the US and India. Seventy-four of the seventy-seven Korean firms examined had a

CMM level of one or two (Na).

Na’s results were analyzed using structural equation models, a “multivariate technique that

combines the attributes of both factor analysis and multiple regressions to simultaneously estimate a

series of dependence relationships” (Na). Na’s analyses—see Figure 2 for coefficients of correlation—

confirmed all eleven hypotheses.
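
Na tested these dependence relationships simultaneously with structural equation modeling, which is beyond a short example. As a simpler, hedged illustration, the Python sketch below checks a single hypothesis (H3) with ordinary least squares; the column names and values are hypothetical, not Na’s survey data.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical survey data: one row per project (not Na's dataset).
    projects = pd.DataFrame({
        "residual_performance_risk": [2.1, 3.4, 1.2, 4.0, 2.8, 3.9, 1.5, 4.5],
        "cost_overrun_pct":          [5, 22, 0, 35, 15, 30, 2, 40],
    })

    # H3: higher residual performance risk should predict higher cost overrun.
    X = sm.add_constant(projects["residual_performance_risk"])
    h3 = sm.OLS(projects["cost_overrun_pct"], X).fit()

    # A positive, statistically significant slope is consistent with H3.
    print(h3.params["residual_performance_risk"], h3.pvalues["residual_performance_risk"])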



Korea’s poor performance may stem from its immature IT infrastructure, compared to that of the

US. Many US firms run CMM-based software process improvement (SPI) programs; Korea lags behind

the US in adopting SPI.

The study’s primary limitation is its focus on a single country, Korea. Determining the study’s

external validity would require analyses of more firms with low CMM ratings. Other factors that may

have impacted the outcome of this limited study include project complexity, managerial incentives and

internal team conflicts (Na).



CMM in India

Tata Consultancy Services (TCS), India’s largest Information Technology (IT) development

enterprise, uses an in house quality management system, TCS-QMS, to promote continuous quality

improvement. The system provides TCS development center managers with a path for achieving Software

Capability Maturity Model (SW-CMM) level five. TCS-QMS standardizes software engineering practices

and management processes throughout TCS. In 2003, 15 of TCS’s 17 development centers were

operating at SW-CMM level five (Murugappan 43).

The main goal of TCS-QMS is to improve Tata’s operations by ensuring proper customer service.

The system’s four main desired outcomes are organizational knowledge sharing, data-driven

quality management, effective project management and continuous process improvement.

TCS-QMS ensures effective project management by standardizing common project management

tasks, including monitoring cost, schedule, quality and scope. The system creates baselines for estimation

and prescribes procedures, checklists, project management review meetings and the use of project tracking tools.

Tata reports that TCS-QMS increases the probability of project success (Murugappan). Tata claims that

its system accomplishes this increase through an analysis of prior successes on similar projects from a

historical archive.

TCS has also used its system to establish and manage an environment that supports organization

wide knowledge sharing. TCS has created a dedicated software engineering process management group,

whose members are assigned a single development process to manage. TCS bases project management on

well defined software development life cycles and provides training to ensure that developers understand

TCS development process and can do their job. Service level agreements (SLA) are used to inform

development teams about customer expectations. SLAs can also be used internally to ensure clarity about

expected services. These services may be delivered between different groups inside TCS. A process

assets library supports the storing and sharing of best practices throughout the organization.

TCS uses statistical techniques to support data driven product quality management. These

techniques define targets and tolerances for product quality and then analyze the relationship of these

statistics to development capability baselines. If actions are required to increase product quality, they are

developed based on the data and then deployed.

The final function of TCS-QMS is continuous process improvement. Continuous process

improvement uses quantifiable feedback from the development processes to identify and prioritize

sources of process problems and improvements. This is done using Pareto analysis, cause-effect analysis

and the blending of SW-CMM and Six Sigma.
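
As a rough illustration of the Pareto step, the Python sketch below ranks defect sources by frequency and isolates the “vital few” categories that account for roughly 80% of all defects. The categories and counts are invented, not TCS data.

    import pandas as pd

    # Hypothetical defect counts by source (not TCS data).
    defects = pd.Series({
        "requirements misunderstanding": 120,
        "coding error": 85,
        "configuration mistake": 40,
        "test gap": 25,
        "documentation error": 10,
    })

    ranked = defects.sort_values(ascending=False)
    cumulative_pct = ranked.cumsum() / ranked.sum() * 100

    # The "vital few": sources within the first ~80% of all defects.
    vital_few = cumulative_pct[cumulative_pct <= 80].index.tolist()
    print(vital_few)  # ['requirements misunderstanding', 'coding error']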

Project Size and Volatility

In “The Impact of Size and Volatility on IT Project Performance”, Sauer et al. examine the effect

of project size and volatility on project risk. Data for this study were gathered from 412 responses to a

survey of registered readers of a U.K. trade publication, with an average of “17 years in the industry and

nine years as a project manager” (Sauer).

The first half of this study analyzes the relationship between underperformance and project size.

Sauer et al. divided surveyed projects into five performance based groups: Star Performers (7%), Good

Performers (60%), Budget Challenged (5%), Schedule Challenged (18%) and Abandoned Projects (9%).

This data is summarized in Figure 3.



Excluding the Star Performers, the data show a strong correlation between project size and

project risk. Survey respondents reported that projects on average exceed budget by 13% and schedule by

20%, while under-delivering on scope by 7%.

Projects with a larger team size, a longer duration and more effort are at a higher risk of failure.

The authors attributed the strong performance of the Star Performers to high quality project management.

While I agree with this assessment, I would suggest that performance is more likely related to budget. The

average budget for Star Performer projects is double and the median budget is quadruple that of good

performers. Because the team size is only 30% smaller, I think the additional funds could have been used

to acquire more experienced development staff including but not limited to project management.

The authors then divided the five performance groups into underperformers and better

performers. The underperformers comprise 33% of the sample and include the abandoned and

challenged groups. The remainder of the sample — better performers — includes good performers and

Star Performers. Figure 4 relates risk of underperformance to team size, project duration and effort.

Based on this data, the authors argue that “…conventional wisdom that restricts project size using budget

or duration is somewhat misguided. A focus first on effort, then on team size and duration will limit risk

of underperformance” (Sauer). The data also showed a minimum risk of 25% for all projects, regardless

of size.
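
A hedged sketch of the kind of tabulation behind Figure 4: group projects into effort bands, then compute the share of each band that underperformed. The data below are invented for illustration and do not reproduce Sauer et al.’s survey; note that even the small band shows a floor of risk, echoing the 25% minimum reported above.

    import pandas as pd

    # Hypothetical projects: effort in person-months and an underperformance flag.
    projects = pd.DataFrame({
        "effort_person_months": [12, 40, 150, 300, 25, 500, 60, 90, 200, 35],
        "underperformed":       [0, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    })

    # Band projects by effort, then compute the share underperforming per band.
    bands = pd.cut(projects["effort_person_months"],
                   bins=[0, 50, 150, 600], labels=["small", "medium", "large"])
    risk = projects.groupby(bands, observed=False)["underperformed"].mean() * 100
    print(risk)  # percent of projects underperforming in each effort band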

The second half of this study analyzes the relationship between underperformance and project

volatility. Project volatility is defined as any unexpected disruptions to the project. The authors divide

volatility into two categories: target volatility, or changes to project budget, schedule or scope, and

governance volatility, or changes in project management or sponsorship.

Figure 5 relates risk of underperformance to project volatility. In the data sample, half of all

projects changed project managers at least once and one quarter changed sponsors. Target changes

occurred eight times per project on average. Underperforming projects had an average of 1.5 governance

changes and 12 target changes per project. Better performers had an average of 0.4 governance changes

and 7 target changes per project.



A regression analysis of the volatility data (cf. Figure 6) shows that changes to a project’s sponsor

tend to reduce a project’s scope by 5.6%, while leaving its budget and schedule unchanged. In one case, a

single change of project manager increased the total budget by 4%, schedule by 8% and reduced the

scope by 3.5%. The ex-project manager, when told of these findings, noted, “If only I’d known

beforehand that my departure would add £80 million to the budget, I’d have offered to stay for half that!”

(Sauer).

Project Complexity

In “Grasping the Complexity of IS Development Projects”, Xia and Lee studied the effect of project

complexity on Information Systems Development Projects (ISDPs). For this work, the authors created a

classification scheme for ISDP complexity based on two pairs of distinctions. The first contrasts

organizational complexity, the “relationships among hierarchical levels and formal organizational

units…” with technological complexity, the “relationships among inputs, outputs, tasks and technologies

[within the project]…” (Xia). The second contrasts structural complexity, the complexity of a project’s

underlying structure, with dynamic complexity, the uncertainty and extent of possible change to structural

complexity.

This research was divided into four phases: conceptualization, measurement development, data

collection and measurement validation-data analysis. In the conceptualization phase the authors

developed a framework for classifying complexity, based on extensive literature reviews and interviews

with six experienced ISDP project managers. In the measurement development phase, they held

interviews and focus groups with 45 ISDP managers to discuss the proposed measurements. The

managers involved with designing measurements were excluded from the final study. These first two

phases, conceptualization and measurement development, produced a web questionnaire for the data

collection phase.

The data collection phase was administered by the Information Systems Special Interest Group of

the Project Management Institute (www.pmi-issig.org). Surveys were only accepted from North

American project managers who had managed projects completed within the past three years.

The questionnaire produced 541 usable responses, a response rate of 31%. A wide variety of industries
were represented in this study including manufacturing (13.7%), finance/insurance (20.6%), retailing
(5.3%), consulting (6.3%), health care (5.9%), software (9.7%), transportation (4.0%), government (9.2%)
and utilities (7.4%). The median annual sales of these organizations were $800 million. The sample
included three types of ISDP’s: in-house software development (38.1%), packaged software
implementation (33.9%) and major enhancement of existing software (28%). The median project budget
was $555,000, with a median duration of nine months. …To test validity in the results [the authors] applied
factor analysis … the test results indicated that the measures demonstrated adequate validity and reliability
(Xia 73).

Participants indicated that an ISDP’s most complex component is the structural complexity of

Information Technology (IT). This was followed by the dynamic uncertainty in organizational structures,

the structural complexity of organizational structures and the dynamic uncertainty of IT. In the IT

complexity category, project managers perceived the structural complexity factors to be of greater

concern than the dynamic complexity factors. In the organizational category, project managers perceived

the dynamic complexity factors to be of greater concern than the structural complexity factors (Figure 7).

The study also correlated these aspects of project complexity with project performance. Project

performance was assessed according to four dimensions: accuracy in project delivery time, cost,

functionality and end user satisfaction. The correlation analysis showed that overall project complexity

negatively affects overall project performance.

Regression analysis was used to correlate the four classifications of complexity with the four

components of project performance. Structural organizational complexity showed strong correlations with

all four classifications of project performance. Its greatest influence was a negative effect on end user

satisfaction, followed by accuracy in delivery time, functionality and cost, respectively. Dynamic

organizational complexity correlated to accuracy in cost performance and dynamic IT complexity

correlated to accuracy in system functionality. Structural IT complexity did not correlate with any aspect

of project performance (Figure 8).
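
As an illustration of this kind of analysis, the sketch below regresses one performance dimension (end user satisfaction) on the four complexity classes. The variable names and 1-7 scale values are hypothetical, not Xia and Lee’s data.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical 1-7 survey scores per project (not Xia and Lee's data).
    data = pd.DataFrame({
        "structural_org":    [3, 5, 2, 6, 4, 5, 3, 6],
        "dynamic_org":       [2, 4, 3, 5, 3, 6, 2, 5],
        "structural_it":     [4, 3, 5, 4, 6, 3, 4, 5],
        "dynamic_it":        [3, 2, 4, 5, 2, 4, 3, 5],
        "user_satisfaction": [6, 4, 6, 2, 5, 3, 6, 2],
    })

    X = sm.add_constant(data[["structural_org", "dynamic_org",
                              "structural_it", "dynamic_it"]])
    fit = sm.OLS(data["user_satisfaction"], X).fit()

    # In the study, structural organizational complexity had the strongest
    # negative effect on end user satisfaction.
    print(fit.params)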



Although technological complexity is the most apparent problem for ISDP managers, it does not

appear to have a strong correlation with project performance. The authors believe this is because

technological complexities are controllable by a skilled project manager and organizational complexity is

not.

Project manager focus points are determined from a project’s areas of concern. For example, if

the main concern of a project is end user satisfaction and accuracy in delivery time, the project manager

should focus on an organization’s structural complexity. If the main concern is cost, the project manager

should focus on the organization’s dynamic complexity. If the main concern is full system functionality,

the project manager should focus on the technology’s dynamic complexity.

Risk Management

Risk management is the process of identifying a project’s most important risks, rationally

prioritizing those risks and allocating resources to mitigate the risks. Experts in risk management argue

that well designed risk mitigation strategies can do much to reduce software development project risks.

Boehm’s work in the 1980s focused solely on risk identification. Due to the complexity of current

organizational and technological environments, strategies for risk management should also include steps

for identifying, classifying and ranking the relative importance of project risks.

In “A Framework for Identifying Software Project Risks”, Keil et al. develop a framework for

systematically analyzing software project risks and developing meaningful strategies for mitigating these

risks. The authors surveyed experienced project managers from Finland, Hong Kong and the U.S., asking

them to identify specific risk factors and rank them by importance. Participants identified the following

eleven concerns as the most important risks:

1. Lack of top management commitment to the project
2. Failure to gain user commitment
3. Misunderstanding the requirements
4. Lack of adequate user involvement
5. Failure to manage end user expectations
6. Changing scope/objectives
7. Lack of required knowledge/skill in project personnel
8. Lack of frozen requirements
9. Introduction of new technology
10. Insufficient/inappropriate staffing
11. Conflict between user departments

The survey’s cross-cultural nature suggested that these risks are a “universal set of risks with

global relevance”. These findings are consistent with the findings of a similar Delphi study completed in

1997 (Schmidt).

The results indicate the importance of commitment from top level management and end users.

Respondents stated that a proper commitment from top level management was probably essential for

implementing effective risk mitigation. They also stated that the system’s end users are a product’s

ultimate consumers and that without their support the final product could be one that no one wants.

Risks outside the direct control of the project manager were ranked much higher than risks

directly controllable by the project manager. This distinction, along with relative importance, is used

throughout the remainder of the study as a method of categorization.

Figure 9 shows the authors’ framework for identifying risks. Each category requires a different

management strategy. All four categories must be actively managed throughout a project’s lifetime (Keil

et al.).
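
A minimal sketch of how the Figure 9 framework could be encoded: each risk is placed in a quadrant by its relative importance and the project manager’s perceived level of control. The numeric scales, cutoff, and example entries are illustrative, not values from Keil et al.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        importance: float  # relative importance, e.g. 1 (low) to 10 (high)
        control: float     # perceived project manager control, 1 (low) to 10 (high)

    def categorize(risk: Risk, cutoff: float = 5.0) -> str:
        """Place a risk in one of the four quadrants of the framework."""
        high_importance = risk.importance >= cutoff
        high_control = risk.control >= cutoff
        if high_importance and not high_control:
            return "Category 1: customer mandate"
        if high_importance and high_control:
            return "Category 2: scope and requirements"
        if not high_importance and high_control:
            return "Category 3: execution"
        return "Category 4: environment"

    for risk in [Risk("lack of top management commitment", 9, 2),
                 Risk("changing scope/objectives", 8, 7),
                 Risk("insufficient/inappropriate staffing", 4, 8),
                 Risk("political change", 3, 2)]:
        print(risk.name, "->", categorize(risk))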

The customer mandate category—Category One—includes high risks with a low perception of

control. The Keil report suggests mitigating such risks requires positive long term relationships with

customers and top level management. Developing strong relationships requires trust to be built over time

through consistent commitment fulfillment (Keil et al.). Theory-W is an effective approach for

establishing and mandating commitments within a project (Boehm). McFarlan suggests emphasizing

large payoffs to top level managers to create opportunity for them to publicly display their support

(McFarlan). Although the risks in this category are not directly controllable by a project manager, the

project manager does have some level of influence.

The scope and requirements category–Category Two–involves high risks that are perceived

as controllable by project managers. The Keil report suggests that effectively managing requirement

ambiguity and change is the key to mitigating these risks. The popularity of evolutionary software

development models—such as the spiral, iterative and incremental and rapid prototype models—and

evolutionary requirements gathering techniques—such as JAD sessions, storyboarding, scenario analysis

and rapid prototyping—indicates the necessity and difficulty of managing requirement risks.

These risks also indicate the importance of a well developed scope management strategy and software

requirement specification (SRS). Engaging end users is a key to effective requirements development. In

“Portfolio approach to information systems”, McFarlan states a supporting conclusion: “Ensuring the

project requirements are derived from the end user and not the development team is an effective tactic in

the reduction of requirement and scope risks”. The following survey response is an effective method of

engaging end users to assist in mitigating the risks described in categories one and two. “[I tell them] You

own the application layer—whatever you need it to look like and however you need it to function that’s

yours and if you don’t tell me I can’t make it. I don’t know your job. You have to tell me how it works”

(Keil et al.).

The execution category–Category Three—involves moderate risks that are directly related to

project execution. Project manager competency is highly influential to the mitigation of risks in this

category because execution tasks are under the direct control of the project manager. In “Project Managers,

Can We Make Them or Just Make Them Better?”, Brewer observes that organizations are becoming more

project-based as project management positions become more prominent. Based on this observation,

Brewer argues that it has become critical for organizations to assign project management roles to

individuals that have the skills to succeed in these roles. To ensure the project maintains a direction

towards success, Keil recommends using a disciplined development process along with internal and

external process evaluations (Keil et al.).

The environmental category–Category Four–involves moderate risks over which project managers

have little or no control. Environmental risks are changes internal and external to the organization

that modify a project’s objectives. In Project Management: The Managerial Process, political, social,

economic and technological changes are defined as external factors, while strengths and weaknesses in

management, facilities, core competencies and financial condition are defined as internal factors (Gray).

Keil categorizes these factors at a lower relative level of risk because his study finds them to have a

low probability of occurrence. To help deal with these risks in the unlikely event that they occur, Keil suggests project

managers create contingency plans (Keil et al.).

In “Institutional Memory and Risk Management”, Riehle argues that software development companies

should use analyses of risk management metrics to anticipate risk. These metrics should be generated

from complete and accurate historical statistics on stakeholder-project interaction, and maintained in a

database, where they can be used to improve the quality of analyses. Riehle sees his recommendations as

consistent with best practice in all other engineering and science disciplines, which use the historical

record to predict project outcomes. Using historical data retrieved from a risk management database

would put software engineering projects on a par with other engineering and scientific disciplines.

Risk management data should be collected from all stakeholders involved with project

development, including project managers, quality assurance personnel and end users. Instruments for data

collection include risk assessments, quality assessment reports, and end user feedback.

In the past, the Capability Maturity Model (CMM) has been used to assess the organizational

competency of software engineering firms. Riehle views the CMM’s focus on current and recent projects

as a problem, because it fails to use past knowledge and experiences to identify long term patterns of risk.

A risk database can recognize patterns and predict current and future project risks with much more

accuracy. This increased accuracy would give project managers more time to change direction in the

project or reallocate resources.

The problems created by a failure to maintain a risk management database are compounded by

employee turnover. With a risk management database, an organization maintains a record of the risks that

former employees faced, along with the actions taken as a result of the risk, and any outcomes of the

actions.

A risk database can also be used to train employees, whether newly hired or newly promoted. These

employees are generally not trained or informed about past failures in the organization. They could

benefit from access to these past lessons learned.

This is especially important for project managers. Project managers make highly significant

decisions based on their evaluation of risk. Historical risk databases that have been categorized and

quantified can be mined for patterns of risk, including probability density by risk type with confidence

intervals and risk exposure assessments (Riehle). These measurements would aid the project managers in

decision making.
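
As a hedged illustration of such mining, the sketch below computes, for one risk type, an occurrence probability with a normal-approximation 95% confidence interval and a simple risk exposure figure (probability times average impact). The schema and rows are hypothetical, not from Riehle.

    import math

    # Hypothetical history rows: (risk_type, occurred, impact_cost_if_occurred).
    history = [
        ("requirements change", True, 50_000),
        ("requirements change", True, 80_000),
        ("requirements change", False, 0),
        ("staff turnover", True, 30_000),
        ("staff turnover", False, 0),
        ("staff turnover", False, 0),
        ("staff turnover", False, 0),
    ]

    def summarize(risk_type):
        rows = [r for r in history if r[0] == risk_type]
        n = len(rows)
        hits = [r for r in rows if r[1]]
        p = len(hits) / n
        # Normal-approximation 95% confidence interval for the probability.
        half_width = 1.96 * math.sqrt(p * (1 - p) / n)
        interval = (max(0.0, p - half_width), min(1.0, p + half_width))
        avg_impact = sum(r[2] for r in hits) / len(hits) if hits else 0.0
        return p, interval, p * avg_impact  # probability, CI, exposure

    print(summarize("requirements change"))
    print(summarize("staff turnover"))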

Such a database would also allow organizations to part with project managers they

are dissatisfied with but feel they cannot afford to lose because of their accumulated project experience.

Conclusion

The software development industry appears to have emerged from the level of crisis described in

the Standish reports. However, recent project failures like the Trilogy project show that proven,

effective software development practices are still not used with enough regularity.

Adequate project management processes were not used in the development of Trilogy. This is

particularly disheartening because the government believed the system was necessary to protect U.S.

citizens, yet was still unable to deliver the product successfully.

The project size and volatility study provides a benchmark for future expectations related to

project change. It also shows that project managers cannot be wholly responsible for project

underperformance when change is present. Quality steering committees and top management are crucial

to the success of IT projects.

The study on ISDP complexity gives ISDP project managers a strategy for preventing and

resolving project complexities. The study correlates a project manager’s focus points with the project’s

areas of concern.

The study on risk identification frameworks examines a universal set of software development

project risks and provides a high level framework that can be used to create effective risk mitigation

strategies.

The Korean and Tata case studies show the difference in outsourcing to a company with a low

CMM level versus a company with a high CMM level. The lack of process standardization in immature

IT development companies is a problem. Companies that complain about outsourcing complications and

do not feel they benefit from outsourcing should make sure the company they deal with has a high CMM

level.

Future Research

This survey characterizes only a small portion of the literature on quantitative analysis methods

for software development project management. Other areas to study include the following:

• ISO 9001
• Software Process Improvement (SPI)
• CMM, CMMI, and SW-CMM
• SPICE
• Return on Investment (ROI)
• The Software Engineering Institute (SEI)
• Quality Management Systems (QMS)
• Legal ramifications of nonconformity to industry standards
• Six Sigma
• Case Studies

References

Biberoglu, E., & Haddad, H. (2002). A Survey of Industrial Experiences with CMM and the Teaching of
CMM Practices. Kennesaw State University.

Boehm, B. & Ross, R. (1989). Theory-W software project management: Principles and examples. IEEE
Trans. Softw. Eng. 15(7). 902-916

Brewer, J., (2005). Project Managers, Can We Make Them or Just Make Them Better? Purdue
University, 167-173

Goldstein, H. (2005). “Who Killed the Virtual Case File?” IEEE Spectrum. 24-35.

Gray, C., & Larson, E. (2002). Project Management: The Managerial Process. McGraw Hill
Professional, 14

Keil, M., Cule, P., Lyytinen, K., & Schmidt, R. (1998). A Framework for Identifying Software Project
Risks. Communications of the ACM. 41(11). 76-83

Kumagai, J. Mission Impossible? IEEE Spectrum <http://www.spectrum.ieee.org/archive/1508>

McFarlan, F. (1981). Portfolio approach to information systems. Harvard Business Review. 59(5). 142-150

Na, K. (2007). Software development risk and project performance measurement: Evidence in Korea. The
Journal of Systems and Software. 80. 596-605

Murugappan, M. & Keeni, G. (2003). Blending CMM and Six Sigma to Meet Business Goals. IEEE
Software. 20(2). 42-48

Patton, M. (2002). Neohapsis Archives. <http://archives.neohapsis.com/archives/isn/2002-q4/00990.html>

Project Management Institute (PMI). (2001). The PMI Project Management Fact Book.

Riehle, R. (2007). Institutional Memory and Risk Management. ACM SIGSOFT Software Engineering
Notes. 32(6)

Sauer, C., Gemino, A., & Horner R. (2007). The Impact of Size and Volatility on IT Project Performance.
Communications of the ACM. 50(11). 79-84

Schmidt, R. (1997). Managing Delphi surveys using nonparametric statistical techniques. Decision
Sciences. 28(3). 763–774

Standish Group, (2003). Latest Standish Group CHAOS Report Shows Project Success Rates Have
Improved by 50%

Standish Group. (1994). Chaos Report.

Tsui, F., & Karam, O. (2007). Essentials of Software Engineering.

Xia, W. & Lee, G. (2004) Grasping the Complexity of IS Development Projects. Communications of the
ACM. 47(5). 69-74.
