


Statistical Quality Control

If you are new to quality management, visit your plant’s QC department and the production shops first. Then read this text—slowly, and
not in one sitting! But after reading this material you will be able to:

 Apply statistical thinking to manage quality in your factory


 Use 7 tools to troubleshoot quality issues
 Determine process capability in production processes—a good starting point
 Explain what a control chart is and what sampling plans are
 Select proper control charts for different applications
 Understand the principles behind sampling methods
 Understand the role of 6-sigma and other advanced methods

Structure of this Text

 What is Statistical Quality Control?


 Process Capability: A Trusted Measure of Performance
 The Seven Quality Improvement Tools
 Statistical Process Control (SPC) Methodology
 Control Charts for Variables Data
 Constructing xbar and R charts and establishing Statistical Control
 Interpreting Patterns in Control Charts
 Process Monitoring and Control

 Estimating Process Capability
 Fraction Nonconforming (p) Chart.
 Variable Sample Size
 np-Charts for Number Nonconforming Charts
 c-and u-charts for Defects
 Control Chart Selection
 Process Capability Analysis

 Review Questions and Sampling Plans
 Six-Sigma and Design of Experiments

Figure 1.1 Input, Environmental and Output Variables in Manufacturing

1 What is Statistical Quality Control?

Statistical quality control (SQC) refers to the use of statistical methods to maintain and improve the quality of products. Perhaps in no other area of a factory does one employ analytics (some of which we review in this text) and statistics as much as in quality control. This task is seldom trivial because the real world of production is full of random and uncontrolled variations.

Even within a well-managed factory, process conditions fluctuate with time. Such variations occur in incoming materials, machine conditions and operator performance, and everything isn't always "in control", making the delivery of acceptable output a big challenge. For example, a steel plant may purchase consistently good-quality ore from a mine, but the physical and chemical characteristics of batches of ore coming from different locations in the mine may vary. All calculations required for SQC—we note—can be done in Excel®.

Figure 1.2 An Idealized Process Model (Representation)

Besides ore, blast furnace conditions may change from batch to batch. In a welding process, it is not possible to form two exactly
identical joints and faulty joints may occur occasionally. In a cutting process, the size of each piece of material cut varies; even the most
high-quality cutting machine has some inherent variability. In addition to such inherent variability, a large number of other factors may
also influence processes (Figure 1.1).

Many of these variations cannot be predicted with certainty, although sometimes it is possible to trace the unusual patterns of such
variations to their root cause(s). If we have collected sufficient data on these variations, we can tell, in terms of probability, what is most likely to occur next if no action is taken. If we know what is likely to occur next given certain conditions, we can take suitable actions to maintain or improve the quality of the output. This is the rationale of statistical quality control.

Another area in which statistical methods can help to improve product quality is the design of products and processes. It is now well understood that over two-thirds of all product malfunctions may be traced to their design. Indeed, the characteristics or quality of a
product depend greatly on the choice of materials, settings of various parameters in the design of the product and the production process
settings. In order to locate an optimal setting of the various parameters which gives the best product, we may consider using models
governing the outcome and the various parameters, if such models can be established by theory or through experimental work. Such a
model can be expressed diagrammatically as shown in Figure 1.2.

However, in many cases, a theoretical quality control model y = f(x) relating the final output responses (y1, y2, y3, …) and the input
parameters (x1, x2, x3, …) is either extremely difficult to establish or mathematically intractable. The following two examples illustrate
such cases.

Example 1: In the bakery industry, the taste, tenderness and texture of a kind of bread depend on various input parameters such as the origin of the flour used, the amount of sugar, the amount of baking powder, the baking temperature profile and baking time, the type of oven used, and so on. In order to improve the quality of the bread produced, the baker may use a model which relates the input parameters and the output quality of the bread. To find theoretical models quantifying the taste, tenderness and texture of the bread produced and relate these quantities to the various input parameters based on our present scientific knowledge is a formidable task.
However, an analyst can easily use statistical methods in regression analysis to establish empirical models and use them to locate an
optimal setting of the input parameters.
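To make Example 1 concrete, here is a minimal sketch, in Python with entirely hypothetical bakery data and variable names, of fitting such an empirical model by ordinary least squares (the text prescribes no particular software and notes that Excel suffices):

import numpy as np

# Hypothetical observations: two inputs and one measured quality response.
sugar     = np.array([20, 20, 30, 30, 40, 40])        # grams of sugar (x1)
bake_time = np.array([30, 40, 30, 40, 30, 40])        # minutes in the oven (x2)
quality   = np.array([6.0, 6.5, 7.5, 7.8, 7.1, 6.9])  # taste-panel score (y)

# Empirical model y = b0 + b1*sugar + b2*sugar^2 + b3*bake_time,
# fitted by ordinary least squares.
X = np.column_stack([np.ones_like(sugar), sugar, sugar**2, bake_time])
coeffs, *_ = np.linalg.lstsq(X, quality, rcond=None)
print("fitted coefficients:", coeffs)

# Evaluating the fitted model over a grid of candidate settings then points to
# a promising (near-optimal) combination of the inputs.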

Example 2: Sometimes there are great difficulties in solving an engineering problem using established theoretical models. The heat
accumulated on a chip in an electronic circuit during normal operation will raise the temperature of the chip and shorten its life. In order
to improve the quality of the circuit, the designer would like to optimize the design of the circuit so that the heat accumulated on the chip
will not exceed a certain level. This heat accumulated can be expressed theoretically in terms of other parameters in the circuit using a
complicated system of ten or more daunting partial differential equations, which could in principle be used to optimize the circuit design. However, it is usually not possible to solve such a system analytically, and solving it numerically on a computer also presents computational difficulties.

In this situation, a statistical methodology known as design of experiments (DOE) can be used to find an optimal design of the circuit
without going through the complicated method of solving partial differential equations.

In other cases control may need to be exercised even on-line, while the process is in progress, based on how the process is performing, in order to maintain product quality. Thus the problems addressed by statistical quality control (SQC) are numerous and diverse. SQC engages three main methodologies, as follows.

1.1 Acceptance Sampling

This method is also called "sampling inspection." When products are required to be inspected but it is not feasible to inspect 100% of the
products, samples of the product may be taken for inspection and conclusions drawn using the results of inspecting the samples. This
technique specifies how to draw samples from a population and what rules to use to determine the acceptability of the product being inspected.
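As an illustration only (the plan below, with sample size n = 50 and acceptance number c = 1, is hypothetical and not taken from the text), the accept/reject rule of a single-sampling plan and the resulting probability of accepting a lot can be sketched as:

from math import comb

def prob_accept(p_defective, n=50, c=1):
    # Accept the lot if at most c nonconforming items are found in a sample of n;
    # the count of nonconforming items in the sample follows a binomial distribution.
    return sum(comb(n, d) * p_defective**d * (1 - p_defective)**(n - d)
               for d in range(c + 1))

for p in (0.01, 0.02, 0.05, 0.10):
    print(f"lot fraction defective {p:.2f} -> P(accept) = {prob_accept(p):.3f}")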

1.2 Statistical Process Control (SPC)

Even in an apparently stable production process, products produced are subject to random variations. SPC aims at controlling the
variability of process output using a device called the control chart. On a control chart, a certain characteristic of the product is plotted.
Under normal conditions these plotted points are expected to vary in a "usual way" on the chart. When abnormal points or patterns
appear on the chart, it is a statistical indication that the process parameters or production conditions might have changed undesirably.
At this point an investigation is conducted to discover unusual or abnormal conditions (e.g. tool breakdown, use of wrong raw material,
temperature controller failure, etc.). Subsequently, corrective actions are taken to remove the abnormality. In addition to the use of
control charts, SPC also monitors process capability, an indicator of the adequacy of the manufacturing process to meet customer
requirements under routine operating conditions. In summary, SPC aims at maintaining a stable, capable and predictable process.

Note, however, that since SPC requires processes to display measurable variation, it is ineffective for quality levels approaching six-sigma
though it is quite effective for companies in the early stages of quality improvement efforts.

1.3 Design of Experiments

Trial and error can be used to run experiments in the design of products and design of processes, in order to find an optimal setting of
the parameters so that products of good quality will be produced. However, performing experiments by trial and error, unscientifically, is frequently very inefficient in the search for an optimal solution. Application of the statistical methodology of "design of experiments"
(DOE) can help us in performing such experiments scientifically and systematically. Additionally, such methods greatly reduce the total
effort used in product or process development experiments, increasing at the same time the accuracy of the results. DOE forms an
integral part of Taguchi methods: techniques that produce high-quality and robust product and process designs.

2 The Correct Use of Statistical Quality Control Methods

The production of a product typically progresses as indicated in the simplified flow diagram shown in Figure 1.3. In order to improve the
quality of the final product, design of experiments (DOE) may be used in Step 1 and Step 2, acceptance sampling may be used in Step 3
and Step 5, and statistical process control (SPC) may be used in Step 4.

Design of the Product → Design of the Process → Procurement of materials and parts → Production → Dispatch of the Product

Figure 1.3 Production from Design to Dispatch

We now introduce another important concept employed in thinking statistically about real-life processes. Process capability is the range
over which the "natural variation" of a process occurs as determined by the system of common or random causes; that is, process
capability indicates what the process can achieve under "stable" conditions when it is said to be under statistical control. The capability
of a process is the fraction of output that can be routinely found to be within specifications. A capable process has 99.73% or more of its
output within specifications (Figures 1.4 and 1.5).

Process capability refers to how capable a process is of making parts that are within the range of engineering or customer specifications.
Figure 1.4 shows the distribution of the dimension of parts for a machining process whose output follows the bell-shaped normal
distribution. This process is capable because the distribution of its output is wholly within the customer’s spec range. By contrast, the
process shown by Figure 1.5 is not capable.

Process Control on the other hand refers to maintaining the performance of a process at its current capability level. Process control
involves a range of activities such as sampling the process product, charting its performance, determining causes of any excessive
variation and taking corrective action.

2.1 Process Capability is an Analytic Measure of Process Performance

As mentioned above, the capability of a process is an expression of the comparison of product specs to the range of natural variability
seen in the process. In simple terms, process capability expresses the proportion or fractional output that a process can routinely deliver
within the specifications. A process when subjected to a capability study answers two key questions, "Does the process need to be
improved?” and "How much improvement does the process require?"

Knowing process capability allows manufacturing and quality managers to predict, quantitatively, how well a process will meet specs
and to specify equipment requirements and the level of control necessary to maintain the firm's capability. For example, if the design specs
require a length of metal tubing to be cut within one-tenth of an inch, a process consisting of a worker using a ruler and hacksaw will
probably result in a large percentage of nonconforming product. In this case, the process, due to its high inherent or natural variability, is
not capable of meeting the design specs. Management would face here three possible choices: (1) measure each piece and either re-cut
or scrap nonconforming tubing, (2) develop a better process by investing in new technology, or (3) change the specifications.

Such decisions are usually based on economics. Remember that under routine production, the cost to produce one unit of the product (i.e., its unit cost) is the same whether that unit ultimately ends up within or outside specs. To recover this cost, the firm may be forced to raise the market price of the within-spec products and thus weaken its competitive position.

"Scrap and/or rework out-of-spec or defective parts" is therefore a poor business strategy since labor, utilities, machine time and
materials have already been invested in the unacceptable product produce Additionally, inspection errors will probably allow some
nonconforming products to leave the production facility if the firm aims at making parts that just meet the specs. On the other hand, new
technology might require substantial investment the firm cannot affor

Changes in design, on the other hand, may sacrifice fitness-for-use requirements and result in a lower quality product. Thus, these
factors demonstrate the need to consider process capability during product design and in the acceptance of new contracts. Many firms
now require process capability data from their vendors. Both ISO 9000 and QS 9000 quality management systems require a firm to
determine its process capability. The motivation for improving quality is the cost of (poor) quality.

Process capability has three important components: (1) the design specifications, (2) the centering of the natural variation, and (3) the
range, or spread, of variation. Figures 2.1 to 2.4 illustrate four possible outcomes that can arise when natural process variability is
compared with product specs. In Figure 2.1, the specifications are wider than the natural variation; one would therefore expect that this
process will always produce conforming products as long as it remains in control. It may even be possible to reduce costs by investing in
a cheaper technology that permits a larger variation in the process output. In Figure 2.2, the natural variation and specifications are the same. A small percentage of nonconforming products might be produced; thus, the process should be closely monitored.

In Figure 2.3, the range of natural variability is larger than the specification; thus, the current process would not always meet
specifications even when it is in control. This situation often results from a lack of adequate communication between the design
department and manufacturing, a task entrusted to manufacturing engineers.

If the process is in control but cannot produce according to the design specifications, the question should be raised whether the
specifications have been correctly applied or if they may be relaxed without adversely affecting the assembly or subsequent use of the
product. If the specifications are realistic and firm, an effort must be made to improve the process to the point where it is capable of
producing consistently within specifications.

Finally, in Figure 2.4, the capability is the same as in Figure 2.2, but the process average is off-center. Usually this can be corrected by a
simple adjustment of a machine setting or re-calibrating the inspection equipment used to capture the measurements. If no action is
taken, however, a substantial portion of output will fall outside the spec limits even though the process has the inherent capability to
meet specifications.

We may define the study of process capability from another perspective. A capability study is a technique for analyzing the random
variability found in a production process. In every manufacturing process there is some variability. This variability may be large or
small, but it is always present, in two types:

 Variability due to common (random) causes


 Variability due to assignable (special) causes

The first type of variability is said to be inherent in the process and can be expected to occur naturally within a process. It is attributed to common causes, which behave like a constant system of chance causes: the process is affected by a multitude of factors such as equipment vibration, passing traffic, atmospheric pressure or temperature changes, electrical or humidity fluctuations, changes in the operator's physical or emotional condition, and so on. Such are the forces that determine whether a tossed coin ends up showing a head or a tail on the floor. Together, however, these "chance causes" form a unique, stable and describable distribution. This variability may be reduced by changing the environment or the technology, but given a set of operating conditions, this variability can never be
completely eliminated from a process. Variability due to assignable causes, on the other hand, refers to the variation that can be linked to
specific or special causes that disturb a process. Examples are tool failure, power supply interruption, process controller malfunction,
adding wrong ingredients or wrong quantities, switching a vendor, etc.

Assignable causes are fewer in number and are usually identifiable through investigation on the shop floor or an examination of process
logs. The effect (i.e., the variation in the process) caused by an assignable factor, however, is usually large when compared to the
inherent variability seen in the process. Therefore, if the assignable causes are modified or controlled properly, the total process
variability associated with them can be reduced and even eliminated. Still, the effect of assignable causes cannot be described by a single
distribution.

A capability study measures the inherent variability or the performance potential of a process when no assignable causes are present
(i.e., when the process is said to be in statistical control). Since inherent variability can be described by a unique distribution, usually a
normal distribution, capability can be evaluated by utilizing the properties of this distribution. Recall that capability is the proportion of
routine process output that remains within product specs.

Thus, capability calculations utilize statistical concepts and allow predictions to be made regarding quality. Even approximate
capability calculations done using histograms enable manufacturers to take a preventive approach to defects. This approach is in
contrast with the traditional approach to manufacturing, which is a two-step process: production personnel make the product while QC
personnel inspect and screen out those products from shipments that do not meet specifications.

Such QC is wasteful and expensive since it allows plant resources including time and materials to be put into products that are not
salable. It is also unreliable since even 100% inspection would fail to catch all defective products.

2.2 Control Limits are Not an Indication of Capability

Those new to SPC often have the misconception that they don't need to calculate capability indices. Some even think that they can
compare their control limits to the specification limits instead. This is not true, because control limits look at the distribution of averages
(xbar, Range, etc.) while capability indices look at the distribution of individual measurements (x). The distribution of x for a process
will always be more spread out than the distribution of its xbar values (Figure 2.5). Therefore, the control limits are often within the
specification limits but the plus and minus 3 sigma distribution of individual parts dimensions is not.

The central limit theorem of statistics says that the averages of subgroups {xbar} follow a normal distribution more closely than the individual measurements do. This is why we can construct control charts on process data that are themselves not normally distributed; their averages are. But averages
cannot be used for capability calculation because capability concerns itself with individual parts or samples from a process. After all,
parts get shipped to customers, not averages.
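This point can be checked with a small simulation; the process parameters below (a normally distributed dimension with target 0.75 and sigma 0.05, sampled in subgroups of 5) are invented purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05
individuals = rng.normal(loc=0.75, scale=sigma, size=30 * 5)  # simulated parts
subgroup_means = individuals.reshape(30, 5).mean(axis=1)      # 30 samples of n = 5

print("spread (std) of individual parts :", individuals.std(ddof=1))     # roughly sigma
print("spread (std) of subgroup averages:", subgroup_means.std(ddof=1))  # roughly sigma / sqrt(5)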

2.3 What Capability Studies tell you

Capability (Cp/Cpk) studies are most often used to quickly determine whether a process can meet customer specs or what fraction of
routine production will fail to be within specification limits. As well, there are numerous other business uses of a capability study:

 Estimating percentage of defective parts to be expected
 Evaluating new equipment purchases
 Predicting whether design tolerances can be met
 Assigning equipment to production
 Planning process control checks
 Analyzing the interrelationship of sequential processes
 Making adjustments during manufacture
 Setting specifications
 Costing out contracts

Since a capability study determines the inherent reproducibility of parts created in a process, it can even be applied to many problems
outside the domain of manufacturing, such as inspection, administration, and engineering.

There are instances where capability measurements are valuable even when it is not practical to determine in advance if the process is in
control. Such an analysis is called a performance study. Performance studies can be useful for examining incoming lots of materials or
one-time-only production runs. In the case of an incoming lot, a performance study cannot tell us that the process that produced the
materials is in control, but the shape of the distribution may tell us what percent of the parts are out of specs or, more importantly,
whether the distribution was truncated by the vendor sorting out the obvious bad parts.

2.4 How to set up a Capability Study

Before we set up a capability study, we must select the critical dimension or quality characteristic (must be a measurable variable) to be
examined. This dimension is the one that must meet product specs. In the simplest case, the study dimension is the result of a single,
direct product and measurement process. In more complicated studies, the critical dimension may be the result of several processing
steps or stages. It may become necessary in these cases to perform capability studies on each process stage. Studies on early process
stages frequently prove to be more valuable than elaborate capability studies done on later processes since early processes lay the
foundation (i.e., constitute the input) which may affect later operations.

Once the critical dimension is selected, data measurements can be collected. This can be accomplished manually or by using automatic
gaging and fixturing linked to a data collection device or computer. When measurements on a critical dimension are made, it is
important we ensure that the measuring instrument is as precise as possible, preferably one order of magnitude finer than the
specification. Otherwise, the measuring process itself will contribute excess variation to the dimension data as recorded. Using handheld
data collectors with automatic gages may help reduce errors introduced by the process of measurement, data recording, and
transcription for post processing by computer.

The ideal situation for data collection is to collect as much data as possible over a defined time period. This will yield a capability study
which is very reliable since it is based upon a large sample size. In the steps of process improvement, determining process capability is
Step 5:

Step 1 Gather process data


Step 2 Plot the data on control charts.
Step 3 Find the control limits.
Step 4 Get the process in control (in other words, identify and eliminate assignable causes).
Step 5 Calculate process capability.
Step 6 If process capability is not sufficient, improve the process (reduce the random cause variation), and go back to Step 1.

Capability Calculations Condition 1: The Process Must be in Control! It should be free of the influence of assignable factors.

Process capability formulas commonly used by industry require that the process must be in control and normally distributed before one
takes samples to estimate process capability. All standard capability indices calculated assume that the process is in control and the
individual data follow a normal distribution. If the process is not in control, capability indices are not valid, even if they appear to
indicate the process is capable.

Three different statistical tools are to be used together to determine whether a process is in control and follows a normal distribution.
These are
 Control charts
 Visual analysis of a histogram
 Mathematical analysis of the distribution to test that the distribution is normal.

Note that no single tool can do the job here and all three must be used together. Control charts (discussed in detail later in this Unit) are
the most common method for maintaining a process in statistical control. For a process to be in control, all points plotted on the control
chart must be inside the control limits, with no apparent patterns (e.g., trends) present. A histogram (described below) allows us to
quickly see (a) if any parts are outside the spec limits and (b) what the distribution's position is relative to the specification range. If the
process is one that is naturally a normal distribution, then the histogram should approximate a bell-shaped curve if the process is in
control. However, note that a process can be in control but not have its individuals following a normal distribution if the process is
inherently non-normal.

Capability Calculations Condition 2: The Process Must be Inherently Normal

Many processes naturally follow a bell-shaped curve (a normal distribution) but some do not! Examples of non-normal dimensions are
roundness, squareness, flatness and positional tolerances; they have a natural barrier at zero. In these cases, a perfect measurement is
zero (for example, no ovality in the roundness measurement in crank shaft grinding). There can never be a Cp/Cpk value less than zero
in such situations. The standard capability indices are not valid for such non-normal distributions. Tests for normality are available in SPC textbooks that can help you identify whether or not a process is normal. If a process is not normal, you may have to use special
capability measures that apply to non-normal distributions.
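One common formal check, shown here only as an illustration of what such textbook tests do, is the Shapiro-Wilk test; a minimal sketch on simulated data using scipy:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
measurements = rng.normal(loc=0.75, scale=0.05, size=100)  # simulated individual data

w_stat, p_value = stats.shapiro(measurements)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
# A small p-value (say below 0.05) suggests the data are not normal, in which
# case the standard Cp/Cpk formulas should not be applied directly.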

The Cpk Index

While Cp is only a measure of dispersion, Cpk measures both dispersion and centering.

The Cpk formula takes into account both the process spread and the location of the process average in relation to the spec midpoint. The formula is as follows:

Cpk = the lesser of (USL − mean) / (3σ) and (mean − LSL) / (3σ)

"The lesser of" actually determines how capable the process is on the worst side. Using the data of an example plant we obtain

Cpk = The lesser of (5.0 - 2.0)/1.5 or (2.0 - 1.0)/1.5

= Min (2.0, 0.67) = 0.67

This low Cpk indicates that a noticeable fraction of this plant's routine production is expected to be off-spec (roughly 2% falling below the lower spec limit, assuming a normal distribution; see the sketch further below). The greater the Cpk value, the higher the fraction of the output meeting specs and hence the better the process. A Cpk value greater than 1.0 means that the 6σ spread of the data falls completely within the spec limits. An example is the process shown in Figure 1.4.

A Cpk value of 1.0 indicates that 99.73% of the parts produced by the process would be within the spec limits. In this process only about 3
out of a thousand parts would be scrapped or rejected. In other words, such a process just meets specs.
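A minimal sketch of the Cpk arithmetic for the example plant above (reading USL = 5.0, LSL = 1.0, mean = 2.0 and 3σ = 1.5, i.e. σ = 0.5, from the worked numbers), together with the within-spec fraction implied if the output is normally distributed:

from statistics import NormalDist

USL, LSL, mean, sigma = 5.0, 1.0, 2.0, 0.5

cpu = (USL - mean) / (3 * sigma)   # capability against the upper spec limit
cpl = (mean - LSL) / (3 * sigma)   # capability against the lower spec limit
cpk = min(cpu, cpl)                # "the lesser of" the two sides
print(f"Cpk = {cpk:.2f}")          # 0.67

dist = NormalDist(mu=mean, sigma=sigma)
within_spec = dist.cdf(USL) - dist.cdf(LSL)
print(f"fraction within spec = {within_spec:.4f}")  # about 0.977, i.e. roughly 2.3% off-spec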

Do we need to improve the process (i.e., reduce its inherent variability) further? Improvement beyond just meeting specs may greatly
improve the fit of the parts during assembly and also cut warranty costs.

The different special process conditions detectable by Cpk calculations are as follows. Note that these formulas assume that measurement
data are normally distributed.

Cpk value              Process Output Distribution
> 1.0                  All output completely within spec limits
= 1.0                  One end of the 6σ process spread falls on a spec limit; the other end may be within the other spec limit
0 ≤ Cpk < 1.0          Part of the 6σ spread falls outside a spec limit
Cpk < 0.0 (negative)   Process mean is not within the spec limits

Many automotive companies, especially those with QS 9000 registration, require their vendors to demonstrate Cpk values of 1.33 or beyond. A Cpk of 1.33 corresponds to about 99.994% of products within specs.

A Cpk value of 2.0 is the coveted "six sigma" quality level. To reach this stage advanced SQC methods including design-of-experiments
(DOE) would be required. At this level no more than 3 or 4 parts per million of the products produced would fall outside the spec limits. Such
small variation is not visible on xbar-R control charts in the normal operation of the process.

You should understand that control and capability are two very different concepts. As shown in Figure 2.6, in general, a process may be
capable or not capable, or in control or out of control, independently of each other. Clearly, we would like every process to be both
capable and in (statistical) control. If a process is neither capable nor in control, we must take two corrective actions to improve it. First
we should get it in a state of control by removing special causes of variation, and then attack the common causes to improve its
capability. If a process is capable but not in control (as the above example illustrated), we should work to get it back in control.

3 Statistical Process Control (SPC) Methodology

Control charts, like the other basic tools for quality improvement, are relatively simple to use “on-line” control tools that when combined
with troubleshooting can minimize off-spec production. Control charts have three basic applications: (1) to establish a state of statistical
control, (2) to monitor a process and signal when the process goes out of control, and (3) to determine process capability. The following
is a summary of the steps required to develop and use control charts. Steps 1 through 4 focus on establishing a state of statistical control;
in step 5, the charts are used for ongoing monitoring; and finally, in step 6, the data are used for process capability analysis.

1. Preparation

a. Choose the variable or attribute to be measured

b. Determine the basis, size, and frequency of sampling.

c. Set up the correct control chart.

2. Data Collection

a. Record the data

b. Calculate relevant statistics: averages, ranges, proportions, and so on.

c. Plot the statistics on the chart.

3. Determination of trial control limits

a. Draw the center line (process average) on the chart.

b. Compute the upper and lower control limits.

4. Analysis and interpretation

a. Investigate the chart for lack of control

b. Eliminate out-of-control points.

c. Re-compute control limits if necessary.

5. Use as a problem-solving tool

a. Continue data collection and plotting.

b. Identify out-of-control situations and take corrective action.

6. Use the control chart data to determine process capability, if desired.

The next section reviews the "seven quality improvement tools" (simple methods popularized by the Japanese) that can do a great deal to bring a poorly performing process into control and then to improve it further.

In Section 5 we discuss the SPC methodology in detail and the construction, interpretation, and use of the different types of process
control charts. Although many different charts will be described, they will differ only in the type of measurement for which the chart is
used; the same analysis and interpretation methodology described applies to each of them.

4 The Seven Quality Improvement Tools

In the 1950's Japanese industry began to learn and apply the statistical quality control tools and thinking that American statisticians
Walter Shewhart and W. Edwards Deming developed in the 1930's and 1940's. Subsequently, progress in continuous quality
improvement led to significant expansion of the use of these tools on shop floors in Japan. Kaoru Ishikawa, head of the Japanese Union of
Scientists and Engineers (JUSE), formalized the use of statistical approaches in Japanese manufacturing with the introduction of the 7
Quality Control (7 QC) tools in the 1960's. The 7 QC tools are now an integral part of controlling quality around the world and many Indian
industries use them routinely. In this section we briefly review these tools. An extended description of these tools may be found in the
quality management standard ISO 9004-4 (1994).

4.1 Flowchart

The flowchart lists the order of activities in a project or process and their interdependency. It expresses detailed process knowledge. To
express this knowledge certain standard symbols are used. The oval symbol indicates the beginning or end of the process. The boxes
indicate action items while diamonds indicate decision or check points. The flowchart can be used to identify the steps affecting quality
and the potential control points. Another effective use of the flowchart would be to map the ideal process and the actual process and to
identify their differences as the targets for improvements. Flowcharting is often the first step in Business Process Reengineering (BPR).

4.2 Histogram

The histogram is a bar chart showing a distribution of variable quantities or characteristics. An example of a "live" histogram would be
to line up by height a group of students enrolled in a course. Normally, one individual would be the tallest and one the shortest, with a
cluster of individuals bunched around the average height. In manufacturing, the histogram can rapidly identify the nature of quality
problems in a process by the shape of the distribution as well as the width of the distribution. It informally establishes process
capability. It can also help compare two or more distributions.

4.3 Pareto Chart

The Pareto chart, as shown above, indicates the distribution of effects attributable to various causes or factors arranged from the most
frequent to the least frequent. This tool is named after Vilfredo Pareto, the Italian economist who determined that wealth is not evenly distributed and that a few people hold most of it. The tool is a graphical picture of the relative frequencies of different types of

quality problems with the most frequent problem type obtaining clear visibility. Thus the Pareto chart identifies the vital few and the
trivial many, and it highlights the problems that should be worked on first to get the most improvement. Historically, about 80% of problems are caused
by 20% of the factors.

4.4 Cause and Effect or Ishikawa Diagram

The cause and effect diagram is also called the fishbone chart because of its appearance and the Ishikawa diagram after the man who
popularized its use in Japan. Its most frequent use is to list the causes of some particular quality problem or defect. The lines coming off
the core horizontal line are the main causes while the lines coming off those are subcauses.

The cause and effect diagram identifies problem areas where data should be collected and analyzed. It is used to develop reaction plans to
help investigate out-of-control points found on control charts. It is also the first step for planning design of experiments (DOE) studies
and for applying Taguchi methods to improve product and process designs.

4.5 Scatter Diagram

The scatter diagram shows any existing pattern in the relationship between two variables that are thought to be related. For example,
is there a relationship between outside temperature and cases of the common cold? As temperatures drop, do cases of the common cold
rise in number? The closer the scatter points hug a diagonal line, the more nearly one-to-one the relationship between the variables being studied. Thus, the scatter diagram may be used to develop informal models to predict the future based on past correlations.
However, you must note that correlation does not imply causality—correlation is not a tool for process control (see Chapter 1, Section 9).

4.6 Run Chart

The run chart shows the history and pattern of variation. It is a plot of data points in time sequence, connected by a line. Its primary use is in determining trends over time. The analyst should indicate on the chart whether up is good or down is good. This tool is used at the
beginning of the change process to see what the problems are. It is also used at the end (or check) part of the change process to see
whether the change made has resulted in a permanent process improvement.

4.7 Control Chart

The control chart is a powerful process quality monitoring device and it constitutes the core of statistical process control (SPC). It is a
line chart marked with control limits at 3 standard deviations (3σ) above and below the average quality level. These limits are based on
the statistical studies conducted by Walter Shewhart in the 1930s. By comparing certain measures of the process output such as xbar, R,
p, u, c etc. (see Section 5) to their control limits one can determine quality variation that is due to common or random causes and
variation that is produced by the occurrence of assignable events (special causes, see Montgomery 2012).

Shewhart demonstrated using actual industry data that by eliminating the special causes first with the help of the control chart and then
systematically reducing common causes, the variation in process output can be reduced and its acceptability improved. Control charts are discussed in detail in Section 5 of this unit.

Failure to distinguish between common causes and special causes of variation can actually increase the variation in the output of a
process. This is often due to the mistaken belief that whenever process output is off target, some adjustment must be made. Knowing
when to leave a process alone is an important step in maintaining control over a process. Equally important is knowing when to take
action to prevent the production of nonconforming product.

Statistical process control (SPC) is a methodology for monitoring a process to identify the special causes of variation and signal the need
to take corrective action when it is appropriate. When special causes are present, the process is deemed to be out of control. If the
variation in the process is due to common causes alone, the process is said to be in statistical control. A practical definition of statistical
control is that both the process averages and variances are constant over time.

SPC relies on control charts as the basic tool to improve both quality and productivity. SPC provides a means by which a firm may
demonstrate its quality capability, an activity necessary for survival in today's highly competitive markets. Also, many customers (e.g.,
the automotive companies) now require evidence that their suppliers use SPC in managing their operations. Note, however, that SPC requires processes to display measurable variation; even though it is quite effective for companies in the early stages of quality efforts, it becomes ineffective at producing improvements once the quality level approaches six-sigma.

Before we leave this section, we repeat that process capability calculations make little sense if the process is not in statistical
control because the data are confounded by special (or assignable) causes and thus do not represent the inherent capability of the
process. The simple tools described in this section may be good enough to enable you to check this. To see this, consider the data in
Table 2.1, which shows 150 measurements of a quality characteristic from a manufacturing process with specifications 0.75 ± 0.25. Each row corresponds to a sample of size 5 taken every 15 minutes. The average of each sample is also given in the last column. A frequency distribution and histogram of these data are shown in Figure 3.1. The data form a relatively symmetric distribution with a mean of 0.762 and standard deviation 0.0738. Using these values, we find that Cpk = 1.075 and form the impression that the process capability is at least marginally acceptable. Some key questions, however, remain to be answered. Because the data were taken over an extended period of time, we cannot determine if the process remained stable throughout that period. In a histogram the dimension of time is not considered.

Thus, histograms do not allow you to distinguish between common and special causes of variation. So it is unclear whether special
causes are influencing Cpk. If we plot the average of each sample against the time at which the sample was taken (since the time
increments between samples are equal, the sample number is an appropriate surrogate for time), we obtain the run chart shown in
Figure 3.6. The run chart hints that the mean might have shifted up at about sample 1 In fact, the average for the first 16 samples was
0.738 while for the remaining it was 0.789. Hence, while the overall average is close to the target (0.75), at no time was the actual
process operating centered near the target. In the next section you will see why we should conclude that this process is not in statistical
control and so we should not pay much attention to the process capability index Cpk calculated as 1.075.
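The quoted capability figure can be reproduced from the summary statistics alone; a minimal sketch (the same arithmetic can, of course, be done in Excel):

# Specs 0.75 +/- 0.25; overall mean 0.762 and standard deviation 0.0738 of the
# 150 individual measurements in Table 2.1.
target, tol = 0.75, 0.25
USL, LSL = target + tol, target - tol            # 1.00 and 0.50
mean, sigma = 0.762, 0.0738

cpk = min((USL - mean) / (3 * sigma), (mean - LSL) / (3 * sigma))
print(round(cpk, 3))                             # about 1.075

# The histogram behind this number ignores time order, so it hides the upward
# shift in the subgroup averages (about 0.738 for the first 16 samples versus
# about 0.789 afterwards) that the run chart makes visible.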

5 Control Charts for Variables Data

As we mentioned in the previous section, the control chart is a powerful process quality monitoring device and it constitutes the core of
statistical process control (SPC). In the SPC methodology, knowing when to leave a process alone is an important step in maintaining
control over a process. Equally important is knowing when to take action to prevent the production of nonconforming product. Indeed,
failure to distinguish between variation produced by common causes and special causes can actually increase the variation in the output
of a process (see Montgomery 2012).

This differentiation is precisely what control charts enable us to do. When a plotted point falls outside the control limits, special causes are indicated to be present; the process is deemed to be out of control and should be investigated and the source of disturbance removed. Otherwise, when variation stays within the control limits, it is indicated to be due to common causes alone. The process is then said to be in "statistical control" and should be left alone. A practical definition of statistical control is that in this state both the process averages and variances are constant over time, and hence the process output is stable and predictable.

Table 2.1 Thirty samples of Quality Measurements.

Sample # X1 X2 X3 X4 X5 Average
1 0.682 0.689 0.776 0.798 0.714 0.732
2 0.787 0.860 0.601 0.749 0.779 0.755
3 0.780 0.667 0.838 0.785 0.723 0.759
4 0.591 0.727 0.812 0.775 0.730 0.727
5 0.693 0.708 0.790 0.758 0.671 0.724
6 0.749 0.714 0.738 0.719 0.606 0.705
7 0.791 0.713 0.689 0.877 0.603 0.735
8 0.744 0.779 0.660 0.737 0.822 0.748
9 0.769 0.773 0.641 0.644 0.725 0.71
10 0.718 0.671 0.708 0.850 0.712 0.732
11 0.787 0.821 0.764 0.658 0.708 0.748
12 0.622 0.802 0.818 0.872 0.727 0.768
13 0.657 0.822 0.893 0.544 0.750 0.733
14 0.806 0.749 0.859 0.801 0.701 0.783
15 0.660 0.681 0.644 0.747 0.728 0.692

16 0.816 0.817 0.768 0.716 0.649 0.753
17 0.826 0.777 0.721 0.770 0.809 0.781
18 0.828 0.829 0.865 0.778 0.872 0.834
19 0.805 0.719 0.612 0.938 0.807 0.776
20 0.802 0.756 0.786 0.815 0.801 0.792
21 0.876 0.803 0.701 0.789 0.672 0.768
22 0.855 0.783 0.722 0.856 0.751 0.793
23 0.762 0.705 0.804 0.805 0.809 0.777
24 0.703 0.837 0.759 0.975 0.732 0.801
25 0.737 0.723 0.776 0.748 0.732 0.743
26 0.748 0.686 0.856 0.811 0.838 0.788
27 0.826 0.803 0.764 0.823 0.886 0.82
28 0.728 0.721 0.820 0.772 0.639 0.736
29 0.803 0.892 0.740 0.816 0.770 0.804
30 0.774 0.837 0.872 0.849 0.818 0.83

Most processes that deliver a "product" or a "service" may be monitored by measuring their output over time and then plotting these measurements appropriately. However, processes differ in the nature of their output. Variables data are those output characteristics
that are measurable along a continuous scale. Examples of variables data are dimension, weight, or viscosity. By contrast, some output
may only be judged to be good or bad, or "acceptable" or "unacceptable", such as print quality of a photocopier or defective knots
produced per meter by a weaving machine. In such cases we categorize the output as an attribute that is either acceptable or
unacceptable; we cannot put it on a continuous scale as done with weight or viscosity. However, SPC methodology provides us with a
variety of different types of control charts to work with such diversity.

Control charts most commonly used for variables data are the " xbar" chart and the "R-chart" (range chart). The xbar chart is used to
monitor the centering of the process, and the R-chart is used to monitor the dispersion or variability in the process. The range is used as
a measure of variation simply for convenience, particularly when workers on the factory floor perform control chart calculations by hand.
An example is the turbine blade machining shop in BHEL, Haridwar. For large samples and when data are analyzed by computer
programs, the standard deviation is a better measure of variability.

5.1 Constructing xbar and R-Charts and Establishing Statistical Control

The first step in developing xbar and R-charts is to gather data. Usually, about 25 to 30 samples are collected. Sample sizes between 3 and 10 are generally used, with 5 being the most common. The number of samples is denoted by k, and n denotes the sample size. For each sample i, the mean (denoted by xbari) and the range (Ri) are computed and plotted on their respective control charts. Next, the overall mean (x_doublebar) and the average range (Rbar) are calculated; these specify the center lines for the xbar and R-charts, respectively. The mean of each sample is

xbari = (x1 + x2 + ... + xn) / n

and the range of each sample is

Ri = max(xi) − min(xi)

The overall mean and the average range, and from them the control limits for the xbar and R-charts, are easily calculated using the following formulas:

x_doublebar = (xbar1 + xbar2 + ... + xbark) / k

Rbar = (R1 + R2 + ... + Rk) / k

UCLR = D4 Rbar

LCLR = D3 Rbar

UCLxbar = x_doublebar + A2 Rbar

LCLxbar = x_doublebar - A2 Rbar

where the constants D3, D4 and A2 depend on sample size n and are shown in Table 5.1 below. Control limits represent the range
between which all points are expected to fall if the process is in statistical control, i.e., operating only under the influence of random or
common causes. If any points fall outside the control limits or if any unusual patterns are observed, then some special (called assignable)
cause has likely affected the process. In such a case the process should be studied using a "reaction plan", process logs and other tools and
devices to determine and eliminate that cause.

Table 5.1 Factors used in calculating Control Chart Limits

Sample Size n A2 D4 d2
2 1.880 3.267 1.128
3 1.023 2.574 1.693
4 0.729 2.282 2.059
5 0.577 2.114 2.326
6 0.483 2.004 2.534

Note, however, that if assignable causes are affecting the process, then the process data are not representative of the true state of
statistical control and hence the calculations of the center line and control limits would be biased. To be effective, SPC requires the center line and the control limit calculations to be unbiased. Therefore, before control charts are set up for routine use by the factory, any out-of-
control data points should be eliminated from the data table and new values for x_doublebar, Rbar, and the control limits re-computed, as
illustrated below.

In order to determine whether a process is in statistical control, the R-chart is always analyzed first. Since the control limits in the xbar
chart depend on the average range, special causes in the R-chart may produce unusual patterns in the xbar chart, even when the
centering of the process is in control. (An example of this is given later in this unit). Once statistical control is established for the R-chart,
attention may turn to the xbar chart. The process standard deviation σ is estimated by Rbar/d2.
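A minimal sketch (not from the text) that puts the formulas above together with the Table 5.1 factors for n = 5; the three subgroups used here are simply the first rows of Table 2.1 and serve only to show the mechanics:

def xbar_r_limits(samples, A2=0.577, D3=0.0, D4=2.114, d2=2.326):
    # samples: a list of subgroups, each a list of n = 5 measurements
    # (the default factors are the Table 5.1 values for n = 5).
    xbars  = [sum(s) / len(s) for s in samples]
    ranges = [max(s) - min(s) for s in samples]
    x_dbar = sum(xbars) / len(xbars)      # grand average: center line of the xbar chart
    r_bar  = sum(ranges) / len(ranges)    # average range: center line of the R chart
    return {
        "UCL_xbar": x_dbar + A2 * r_bar,
        "LCL_xbar": x_dbar - A2 * r_bar,
        "UCL_R":    D4 * r_bar,
        "LCL_R":    D3 * r_bar,
        "sigma_hat": r_bar / d2,          # estimated process standard deviation
    }

subgroups = [
    [0.682, 0.689, 0.776, 0.798, 0.714],
    [0.787, 0.860, 0.601, 0.749, 0.779],
    [0.780, 0.667, 0.838, 0.785, 0.723],
]
print(xbar_r_limits(subgroups))   # in practice use all 25 to 30 subgroups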

Figure 4.2 shows a typical data sheet used for shop floor recording. This form provides space for descriptive information about the
process, recording of sample observations and computed statistics, and drawing the control charts. The construction and analysis of
control charts is best seen by example. A critical entry is under “Notes”, which records any abnormal events or conditions that existed
when the particular sample was taken. This data is critical in successful quality troubleshooting (see Figure 4.3).

Part Name: Process Step: Spec Limits:
Operator: Machine: Gage: Unit of Measure:

Date
Time
Sample 1
2
3
4
5
xbar
R
Notes

Figure 4.2 A “Variables” Control Chart Data Sheet

Example 3: Control Charts for Silicon Wafer Production.

The thickness of silicon wafers used in the production of semiconductors must be carefully controlled during wafer manufacture. The
tolerance of one such wafer is specified as ± 0.0050 inches. In one production facility, three wafers were randomly selected each hour
and the thickness measured carefully (Figure 4.3).

Subsequently, xbar and R were calculated for each sample. For example, the average of the first sample was

xbar1 = (41 + 70 + 22)/3 = 133/3 = 44.33

The range of the first sample was R1 = 70 − 22 = 48. (Note: in practice, calculations may be rounded to the nearest integer for simplicity.)

The calculations of average range, overall mean, and control limits are shown in Figure 4.3. The average range is the sum of the sample
ranges (676) divided by the number of samples (25); the overall mean is the sum of the sample averages (1,221) divided by the number
of samples (25). Since the sample size is 3, the factors used in computing the control limits are A2 = 1.023 and D4 = 2.574 (Table 5.1). For
sample size of 6 or less, factor D3 = 0; therefore, the lower control limit on the range chart is zero. The center line and control limits are
drawn on the chart shown in Figure 4.3. As a factory convention, out-of-control points are noted directly on the charts.

Sample #  Date   Time    x1   x2   x3    Sum   Avg (xbar)   Range (R)
1         12/1   8:00    41   70   22    133   44.3         48
2         12/1   11:00   78   53   68    199   66.3         25
3         12/1   2:00    84   34   48    166   55.3         50
4         12/1   5:00    60   36   25    121   40.3         35
5         12/1   8:00    46   47   29    122   40.6         18
6         12/1   11:00   64   16   56    136   45.3         48
7         13/1   8:00    43   53   64    160   53.3         21
8         13/1   11:00   37   43   30    110   36.7         13
9         13/1   2:00    50   29   57    136   45.3         28
10        13/1   5:00    57   83   32    172   57.3         51
11        13/1   8:00    24   42   39    105   35.0         18
12        13/1   11:00   78   48   39    165   55.0         39
13        14/1   8:00    51   57   50    158   52.8         7
14        14/1   11:00   41   29   35    105   35.0         12
15        14/1   2:00    56   64   36    156   52.0         28
16        14/1   5:00    46   41   16    103   34.3         30
17        14/1   8:00    99   86   98    283   94.3         13
18        14/1   11:00   71   54   39    164   54.7         32
19        15/1   8:00    41   2    53    96    32.0         51
20        15/1   11:00   41   39   36    116   38.7         5
21        15/1   2:00    22   40   46    108   36.0         24
22        15/1   5:00    62   70   46    178   59.3         24
23        15/1   8:00    64   52   57    173   57.7         12
24        15/1   11:00   44   38   60    142   48.3         22
25        16/1   8:00    41   63   62    166   55.3         22

Notes recorded against individual samples on the data sheet: pump stalled once; gas flow adjusted; AC malfunctioning; new cleaning tried.

Figure 4.3 Silicon Wafer Thickness Data as observed

On examining the range chart first we infer that the process is in control. All points lie within the control limits and no unusual patterns
exist. On the xbar chart, however, the xbar value for sample 17 lies above the upper control limit. On investigation we find that some
suspicious cleaning material had been used in the process at this point (an assignable cause for variation). Therefore, data from sample
17 should be eliminated from the control chart calculations. Figure 4.4 shows the calculations after sample 17 was removed. The revised center lines and control limits are shown in the figure. The resulting xbar and R charts are both in control.

Control Limit Calculations for this example

Calculation basis: All Subgroups included
Rbar = 676/25 = 27.0
x_doublebar = 1221/25 = 48.8
Spec Midpoint = 50.0
A2 × Rbar = 1.023 × 27.0 = 27.6
UCLxbar = x_doublebar + A2 × Rbar = 76.4
LCLxbar = x_doublebar − A2 × Rbar = 21.2
UCLR = D4 × Rbar = 69.5
LCLR = D3 × Rbar = 0.0
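The same numbers can be reproduced from the summary figures above; a minimal sketch using the n = 3 factors from Table 5.1 (small differences from the printed values come only from intermediate rounding):

def wafer_limits(sum_xbar, sum_R, k, A2=1.023, D4=2.574, D3=0.0):
    x_dbar = sum_xbar / k            # overall mean
    r_bar  = sum_R / k               # average range
    return {"x_doublebar": round(x_dbar, 1), "Rbar": round(r_bar, 1),
            "UCL_xbar": round(x_dbar + A2 * r_bar, 1),
            "LCL_xbar": round(x_dbar - A2 * r_bar, 1),
            "UCL_R": round(D4 * r_bar, 1), "LCL_R": round(D3 * r_bar, 1)}

print(wafer_limits(1221, 676, 25))               # all 25 subgroups
print(wafer_limits(1221 - 94.3, 676 - 13, 24))   # subgroup 17 (xbar 94.3, R 13) removed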

The Initial xbar and R Charts (raw data plotted as is): each sample average xbari is plotted against the center line x_doublebar = 48.8 with UCLxbar = 76.4 and LCLxbar = 21.2, and each sample range Ri is plotted against UCLR = 69.5 and LCLR = 0.0.

Revised Control Chart Calculations: The control limits are re-calculated after removing any "out of control" points (here Sample #17).

Calculation basis:                  All Subgroups included    Subgroup #17 removed
Rbar                                676/25 = 27.0             663/24 = 27.6
x_doublebar                         1221/25 = 48.8            1127/24 = 47.0
Spec Midpoint                       50.0                      50.0
A2 × Rbar                           1.023 × 27.0 = 27.6       1.023 × 27.6 = 28.2
UCLxbar = x_doublebar + A2 × Rbar   76.4                      75.2
LCLxbar = x_doublebar − A2 × Rbar   21.2                      18.8
UCLR = D4 × Rbar                    69.5                      71.0
LCLR = D3 × Rbar                    0.0                       0.0

The Revised xbar and R Charts: the sample averages (excluding subgroup 17) are re-plotted against the revised center line x_doublebar ≈ 47.0 with UCLxbar = 75.2 and LCLxbar = 18.8, and the sample ranges against UCLR = 71.0 and LCLR = 0.0. All points now fall within the revised limits.

5.2 Interpreting Abnormal Patterns in Control Charts

When a process is in “statistical control,” the points on a control chart fluctuate randomly between the control limits with no
recognizable, non-random pattern. The process in such cases should be left alone! The following checklist provides a set of general rules
for examining a process to determine whether it is in statistical control:

1. No points are outside control limits.


2. The number of points above and below the center line is about the same.
3. The points seem to fall randomly above and below the center line.
4. Most points, but not all, are near the center line, and only a few are close to the control limits.

The underlying assumption behind these rules is that the distribution of sample means is normal. This assumption follows from the
central limit theorem of statistics, which states that the distribution of sample means approaches a normal distribution as the sample size

increases regardless of the original distribution. Of course, for small sample sizes, the distribution of the original data must be
reasonably normal for this assumption to hold. The upper and lower control limits are computed to be three standard deviations from the
overall mean. Thus, the probability that any sample mean falls outside the control limits is very small. This probability is the origin of
rule 1—no points outside control limits.
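A quick numeric check, assuming a normal distribution as the argument does, of the probability behind rule 1 and of the one-standard-deviation figure used in the next paragraph:

from statistics import NormalDist

z = NormalDist()                     # standard normal distribution
print(2 * (1 - z.cdf(3.0)))          # P(point beyond +/- 3 sigma), about 0.0027
print(z.cdf(1.0) - z.cdf(-1.0))      # P(point within +/- 1 sigma), about 0.683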

Since the normal distribution is symmetric, about the same number of points fall above as below the center line. Also, since the mean of
the normal distribution is the median, about half the points fall on either side of the center line. Finally, about 68 percent of a normal
distribution falls within one standard deviation of the mean; thus, most but not all points should be close to the center line. These
characteristics will hold provided that the mean and variance of the original data have not changed during the time the data were
collected; that is, the process is stable. Several types of unusual patterns arise in control charts, which are reviewed here along with an
indication of the typical causes of such patterns.

One Point Outside Control Limits

The process now requires troubleshooting! A single point outside the control limits (see below) is usually produced by a special cause.

Often, the R-chart provides a similar indication. Once in a while, however, such points are a normal part of the process and occur simply by
chance. A common reason for a point falling outside a control limit is an error in the calculation of xbar or R for the sample. You should
always check your calculations whenever this occurs. Other possible causes are a sudden power surge, a broken tool, measurement
error, or an incomplete or omitted operation in the process.

Sudden Shift in the Process Average

An unusual number of consecutive points falling on one side of the center line (see below) is usually an indication that the process
average has suddenly shifted. Typically, this occurrence is the result of an external influence that has affected the process, which would be
considered a special cause. In both the xbar and R-charts, possible causes might be a new operator, a new inspector, a new machine
setting, or a change in the setup or method.

If the shift is up in the R-chart, the process has become less uniform. Typical causes are carelessness of operators, poor or inadequate
maintenance, or possibly a fixture in need of repair. If the shift is down in the R-chart, the uniformity of the process has improved. This
might be the result of improved workmanship or better machines or materials. As mentioned, every effort should be made to determine
the reason for the improvement and to maintain it.

Three rules of thumb are used for early detection of process shifts. A simple rule is that if eight consecutive points fall on one side of the
center line, one can conclude that the mean has shifted. For the other two rules, divide the region between the center line and each control
limit into three equal parts. Then if (1) two of three consecutive points fall in the outer one-third region between the center line and one of the
control limits, or (2) four of five consecutive points fall beyond the inner one-third region (that is, in the outer two-thirds), one would also
conclude that the process has gone out of control.
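These rules of thumb are easy to automate. The following is a minimal sketch (Python; the function name and the example numbers are illustrative only, not from this text) that scans a sequence of xbar values against a given center line and upper control limit, taking each "third" as one third of the distance from the center line to the limit.

```python
# Sketch: checking the run rules described above on a sequence of xbar values.
def run_rule_signals(xbars, center, ucl):
    """Return a list of (rule, index) signals; zones are thirds of (ucl - center)."""
    sigma_zone = (ucl - center) / 3.0          # one-third of CL-to-limit distance
    signals = []
    for i in range(len(xbars)):
        # Rule A: eight consecutive points on one side of the center line.
        if i >= 7:
            window = xbars[i - 7:i + 1]
            if all(x > center for x in window) or all(x < center for x in window):
                signals.append(("8 on one side", i))
        # Rule B: two of three consecutive points in the outer one-third region.
        if i >= 2:
            window = xbars[i - 2:i + 1]
            if sum(abs(x - center) > 2 * sigma_zone for x in window) >= 2:
                signals.append(("2 of 3 in outer third", i))
        # Rule C: four of five consecutive points beyond the inner one-third region.
        if i >= 4:
            window = xbars[i - 4:i + 1]
            if sum(abs(x - center) > sigma_zone for x in window) >= 4:
                signals.append(("4 of 5 beyond inner third", i))
    return signals

# Example: a gradual upward shift triggers the run rules before any point
# crosses the control limit itself.
print(run_rule_signals([49, 50, 52, 55, 58, 61, 63, 65, 66], center=48.8, ucl=76.4))
```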

Cycles

Cycles are short, repeated patterns in the chart, alternating high peaks and low valleys (see below). These patterns are the result of
causes that come and go on a regular basis. In the xbar chart, cycles may be the result of operator rotation or fatigue at the end of a shift,
different gauges used by different inspectors, seasonal effects such as temperature or humidity, or differences between day and night
shifts. In the R-chart, cycles can occur from maintenance schedules, rotation of fixtures or gauges, differences between shifts, or operator
fatigue.

Trends

A trend is the result of some cause that gradually affects the quality characteristics of the product and causes the points on a control
chart to gradually move up or down from the center line. As a new group of operators gains experience on the job, for example, or as
maintenance of equipment improves over time, a trend may occur. In the xbar chart, trends may be the result of improving operator
skills, dirt or chip buildup in fixtures, tool wear, changes in temperature or humidity, or aging of equipment. In the R-chart, an increasing
trend may be due to a gradual decline in material quality, operator fatigue, gradual loosening of a fixture or a tool, or dulling of a tool. A
decreasing trend often is the result of improved operator skill or work methods, better purchased materials, or improved or more
frequent maintenance.

Hugging the Center Line

Hugging the center line occurs when nearly all the points fall close to the center line (see Figure 4.8). In the control chart, it appears that
the control limits are too wide. A common cause of hugging the center line is that the sample includes one item systematically taken from
each of several machines, spindles, operators, and so on. A simple example will serve to illustrate this pattern. Suppose that one
machine produces parts whose diameters average 508 with variation of only a few thousandths; a second machine produces parts whose
diameters average 502, again with only a small variation. Taken together, parts from both machines would yield a range of variation that
would probably be between 500 and 510, and average about 505, since one will always be high and the second will always be low. Even
though a large variation will occur in the parts taken as a whole, the sample averages will not reflect this variation. In such a case, a
control chart should be constructed for each machine, spindle, operator, and so on. An often overlooked cause for this pattern is
miscalculation of the control limits, perhaps by using the wrong factor from the table, or misplacing the decimal point in the
computations.

Hugging the Control Limits

This pattern shows up when many points are near the control limits with very few in between. It is often called a mixture and is actually
a combination of two different patterns on the same chart. A mixture can be split into two separate patterns. A mixture pattern can
result when different lots of material are used in one process, or when parts are produced by different machines but fed into a common
inspection group.

Instability

Instability is characterized by unnatural and erratic fluctuations on both sides of the chart over a period of time (see below). Points will
often lie outside both UCL and LCL without a consistent pattern. Assignable causes may be more difficult to identify here than when
specific patterns are present. A frequent cause of instability is over-adjustment of a machine, or the same conditions that cause hugging of the control
limits.

As suggested earlier, the R-chart should be analyzed before the xbar chart, because some out-of-control conditions in the R-chart may
cause out-of-control conditions in the xbar chart. Also, as the variability in the process decreases, all the sample observations will be
closer to the true population mean, and therefore their average, xbar, will not vary much from sample to sample. If this reduction in R
can be identified and controlled, then new control limits should be computed for both charts.

5.3 Routine Process Monitoring and Control

After a process is determined to be in control, the charts should be used on a daily basis to monitor production, identify any special
causes that might arise, and make corrections as necessary. More important, the chart tells when to leave the process alone!
Unnecessary adjustments to a process result in nonproductive labor, reduced production, and increased variability of output.

It is more productive if the operators themselves take the samples and chart the data. In this way, they can react quickly to changes in
the process and immediately make adjustments. To do this effectively, training of the operators is essential. Many companies conduct
in-house training programs to teach operators and supervisors the elementary methods of statistical quality control. Not only does this
training provide the mathematical and technical skills that are required, but it also gives the shop-floor personnel increased
quality-consciousness.

Improvements in conformance typically follow the introduction of control charts on the shop floor, particularly when the process is labor
intensive. Apparently, management involvement in operators' work often produces positive behavioral modifications (as first
demonstrated in the famous Hawthorne studies). Under such circumstances, and as good practice, management and operators should
revise the control limits periodically and determine a new process capability as improvements take place.

Control charts are designed to be used by production operators rather than by inspectors or quality control personnel. Under the
philosophy of statistical process control, the burden of quality rests with the operators themselves. The use of control charts allows
operators to react quickly to special causes of variation. The range is used in place of the standard deviation for the very reason that it
allows shop-floor personnel to easily make the necessary computations to plot points on a control chart. Only simple calculations are
required.

6 Estimating Plant Process Capability

After a process has been brought to a state of statistical control by eliminating special causes of variation, the data may be used to obtain a
rough estimate of process capability. This approach uses the average range Rbar rather than the estimated standard deviation of the
original data. Nevertheless, it is a quick and useful method, provided that the distribution of the original data is reasonably normal.

Under the normality assumption, the standard deviation (σx) of the original data {x} can be estimated as follows:

σ (= σx) = Rbar/d2

where d2 is a constant that depends on the sample size and is given in Table 1. Therefore, process capability may be determined by
comparing the spec range to 6σx. The natural variation of individual measurements is given by x_doublebar ± 3σx. The following
example illustrates these calculations.

Example 4: Estimating Process Capability for Silicon Wafer Manufacture

In this example, the capability (Cp and Cpk, see Section 2.4) calculations for the silicon wafer production data displayed in Figure 4.3 are
shown. The overall distribution of the data is indicated by Figure 4.10. For a sample of size (n) = 3, d2 is 1.693. ULx and LLx
represent the upper and lower ±3σx limits on the data for individual observations. Thus, wafer thickness is expected to vary between -1.9
and 95.9. The "zero point" of acceptable wafers is the lower specification, meaning that the thickness of all wafers produced is expected,
without adjustment, to vary from 0.0019 below the lower specification to 0.0959 above the lower specification.

Therefore,

Cp = 100/98 = 1.02

Thus, Cp for this process looks OK. However, lower and upper capability indices are

Cpl = (47 - 0)/48.9 = 0.96


Cpu = (100 - 47)/48.9 = 1.08

This gives a Cpk value equal to 0.96, which is less than 1.0. This analysis suggests that both the centering and the variation of the
manufacturing process must be improved.

The above capability calculations are shown below in a step-by-step manner

Step 1: Find Modified (corrected) Control limits

Note: The initial xbar control chart (Figure 4.3) shows one xbar point (Sample #17) out of control (beyond UCLxbar). This point should
be removed and the control limits should be re-calculated. This is done as follows.

Calculation basis:                     All Subgroups included      Subgroup #17 Removed

Rbar                                   676/25 = 27.0               663/24 = 27.6
x_doublebar                            1221/25 = 48.8              1127/24 = 47.0
Spec Midpoint                          50.0                        50.0
A2 × Rbar                              1.023 × 27 = 27.6           1.023 × 27.6 = 28.2
UCLxbar = x_doublebar + A2 × Rbar      76.4                        75.2
LCLxbar = x_doublebar - A2 × Rbar      21.2                        18.8
UCLR = D4 × Rbar                       69.5                        71.0
LCLR = D3 × Rbar                       0.0                         0.0

Step 2: Find the revised Process Standard Deviation (σx)

σx = Rbar/d2 = 27.6/1.693 = 16.3

Step 3: Compare tolerance limits (specification) with the revised 6σx

US (Upper Spec Limit) = 100.0

LS (Lower Spec Limit) = 0.0

US - LS = 100.0 - 0.0 = 100.0

6σx = 6 × 16.3 = 97.8 ≈ 98

Step 4: Compute upper and lower (±3σx) limits of process variation under statistical control:

ULx = x_doublebar + 3σx = 47 + 3 × 16.3 = 95.9

LLx = x_doublebar - 3σx = 47 - 3 × 16.3 = -1.9

If the individual observations are normally distributed, then the probability of being out of specification can be computed. In the example
above we assumed that the data are normal. The revised mean (estimated by x_doublebar) is 47 and the standard deviation (σx) is
98/6 = 16.3.

Figure 4.10 shows the z calculations for the specification limits of 0 and 100. For the lower limit, z = (0 - 47)/16.3 = -2.88; the area
between 0 and the mean 47 is then 0.4980, as determined from the standard normal distribution table. Thus about 0.2 percent of the
output (wafer production, {x}) would be expected to fall below the lower specification.

The area to the right of 100 is approximately zero. Therefore, all the output can be expected to meet the upper specification.
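For readers who want to script the arithmetic of Steps 1 to 4 and the out-of-specification estimate, the following sketch (Python) reproduces this example's numbers; the function name is made up, the inputs (x_doublebar = 47, Rbar = 27.6, d2 = 1.693, specs 0 and 100) come from the example, and normality of the individual measurements is assumed.

```python
# Sketch: reproducing the capability arithmetic of this example.
from math import erf, sqrt

def capability(xdd, rbar, d2, lsl, usl):
    sigma = rbar / d2                       # sigma_x estimated from Rbar/d2
    cp  = (usl - lsl) / (6 * sigma)
    cpl = (xdd - lsl) / (3 * sigma)
    cpu = (usl - xdd) / (3 * sigma)
    cpk = min(cpl, cpu)
    # Fraction outside spec, assuming individual values are normal.
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
    frac_out = phi((lsl - xdd) / sigma) + (1 - phi((usl - xdd) / sigma))
    return cp, cpk, frac_out

cp, cpk, frac_out = capability(xdd=47.0, rbar=27.6, d2=1.693, lsl=0.0, usl=100.0)
print(round(cp, 2), round(cpk, 2), round(100 * frac_out, 2), "% out of spec")
```

Running this reproduces Cp of about 1.02, Cpk of about 0.96 and roughly 0.2 percent of output below the lower specification, matching the figures above.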

Note!!! Control limits are not specification limits!

A word of caution deserves emphasis here. Many people confuse control limits with specification or "spec" limits. Spec limits, normally
expressed in engineering units, indicate the range of variation in a quality characteristic that is acceptable to the customer. Specification
dimensions are usually stated in relation to individual parts for "hard" goods, such as automotive hardware. However, in other
applications, such as in chemical processes, specifications are stated in terms of average characteristics. Thus, control charts might
mislead one into thinking that if all sample averages fall within the control limits, all output will conform to specs. This assumption is not
true. Control limits relate to sample averages while specification limits relate to individual measurements. A sample average may fall
within the upper and lower control limits even though some of the individual observations are out of specification. Since σxbar = σx/√n,
control limits are narrower than the natural variation in the process (Figure 2.5) and they do not represent process capability.

The modified control limits allow for more variation than the ordinary control limits and still provide high confidence that the product
produced will remain within specifications. It is important to note that modified limits apply only if process capability is at least 60 to 75
percent of tolerance. Nonetheless, if the mean must be controlled closely, a conventional xbar-chart should be used even if the process
capability is good. Also, if the standard deviation (σx) of the process is likely to shift, modified control limits are not appropriate.

7 Control Charts for Attributes

Attributes quality data assume only two values: good or bad, pass or fail. Attributes usually cannot be measured, but they can be
observed and counted, and they are useful in quality management in many practical situations. For instance, in printing packages for
consumer products, color quality can be rated as acceptable or not acceptable, or a sheet of cardboard either is damaged or is not.
Usually, attributes data are easy to collect, often by visual inspection. Many accounting records, such as percent scrapped, are also
usually readily available. However, one drawback in using attributes data is that large samples are necessary to obtain valid statistical
results.

7.1 Fraction Nonconforming (p) Chart

Several different types of control charts are used for attribute data. One of the most common is the p-chart (introduced in this section).
Other types of attributes charts are presented in the subsections that follow. One distinction that we must make is between the terms defects and
defectives. A defect is a single nonconforming quality characteristic of an item. An item may have several defects. The term defective
refers to items having one or more defects. Since certain attributes charts are used for defectives while others are used for defects, one
must understand the difference. In quality control literature, the term nonconforming is often used instead of defective.

A p-chart monitors the proportion of nonconforming items produced in a lot. Often it is also called a fraction nonconforming or fraction
defective chart. As with variables data, a p-chart is constructed by first gathering 25 to 30 samples of the attribute being measured. The
size of each sample should be large enough to have several nonconforming items. If the probability of finding a nonconforming item is
small, a large sample size is usually necessary. Samples are chosen over time periods so that any special causes that are identified can be
investigated.

Let us suppose that k samples, each of size n, are selected. If y represents the number of nonconforming items (defectives) in a particular
sample, the proportion nonconforming is y/n. Let pi be the fraction nonconforming in the ith sample; the average fraction
nonconforming pbar for the group of k samples then is

pbar = (p1 + p2 + ... + pk)/k
This statistic pbar reflects the average performance of the process. One would expect a high percentage of samples to have a fraction
nonconforming within three standard deviations of pbar. An estimate of the standard deviation is given by

sp = sqrt( pbar (1 - pbar) / n )
Therefore, upper and lower control limits may be given by

UCLp = pbar + 3 sp

LCLp = pbar - 3 sp

If LCLp is less than zero, a value of zero is used. Analysis of a p-chart is similar to that of the xbar or R-chart. Points outside the control
limits signify an out-of-statistical-control situation, i.e., the process has been disturbed by an assignable factor. Patterns and trends
should also be sought to identify the presence of assignable factors.
A point on a p-chart below the lower control limit, or the development of a trend below the center line, indicates that the
process might have improved, since the ideal is zero defectives. However, caution is advised before such conclusions are drawn, because
errors may have been made in computation. An example of a p-chart is presented next.

Example 5: Constructing a p-Chart.

The mail sorting personnel in a post office must read the PIN code on a letter and divert the letter to the proper carrier route. Over one
month's time, 25 samples of 100 letters were chosen and the number of errors was recorded. This information is summarized in Figure
6.2. The average fraction nonconforming, pbar, is

pbar = (0.03 + 0.01 + ... + 0.01)/25 = 0.022

Figure 6.1 Attribute (p) Chart for Example 5

Here sp = sqrt(0.022 × (1 - 0.022)/100) = 0.01467. Thus, the upper control limit, UCLp, is 0.022 + 3(0.01467) = 0.066, and the lower
control limit, LCLp, is 0.022 - 3(0.01467) = -0.022. Since this latter figure is negative and a fraction nonconforming can never be
negative, LCLp is set to zero (0). The control chart for this example is shown in Figure 6.1.
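A minimal sketch of the same p-chart arithmetic is shown below (Python, constant sample size n = 100, using the error counts of Figure 6.2); the helper name is illustrative.

```python
# Sketch: p-chart center line and limits for constant sample size.
from math import sqrt

def p_chart_limits(defective_counts, n):
    k = len(defective_counts)
    pbar = sum(defective_counts) / (k * n)          # average fraction nonconforming
    sp = sqrt(pbar * (1 - pbar) / n)                # estimated standard deviation
    ucl = pbar + 3 * sp
    lcl = max(0.0, pbar - 3 * sp)                   # negative LCL is set to zero
    return pbar, lcl, ucl

errors = [3, 1, 0, 0, 2, 5, 3, 6, 1, 4, 0, 2, 1, 3, 4,
          1, 1, 2, 5, 2, 3, 4, 1, 0, 1]
print(p_chart_limits(errors, n=100))   # about (0.022, 0.0, 0.066)
```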

Figure 6.2: Data and Control Limit Calculations for Example 5

Sample #    Number of errors    Sample Size    p = Fraction Nonconforming    pbar    UCLp
1 3 100 0.03 0.022 0.066
2 1 100 0.01 0.022 0.066
3 0 100 0.00 0.022 0.066
4 0 100 0.00 0.022 0.066
5 2 100 0.02 0.022 0.066
6 5 100 0.05 0.022 0.066
7 3 100 0.03 0.022 0.066
8 6 100 0.06 0.022 0.066
9 1 100 0.01 0.022 0.066
10 4 100 0.04 0.022 0.066
11 0 100 0.00 0.022 0.066
12 2 100 0.02 0.022 0.066
13 1 100 0.01 0.022 0.066
14 3 100 0.03 0.022 0.066
15 4 100 0.04 0.022 0.066
16 1 100 0.01 0.022 0.066
17 1 100 0.01 0.022 0.066
18 2 100 0.02 0.022 0.066
19 5 100 0.05 0.022 0.066
20 2 100 0.02 0.022 0.066
21 3 100 0.03 0.022 0.066
22 4 100 0.04 0.022 0.066
23 1 100 0.01 0.022 0.066
24 0 100 0.00 0.022 0.066
25 1 100 0.01 0.022 0.066

The sorting process appears to be in statistical control. Any values found above the upper control limit or evidence of upward trend
might indicate the need for re-training the personnel.

7.2 p-chart with Variable Sample Size

Often 100 percent inspection is performed on process output during fixed sampling periods; however, the number of units produced in
each sampling period may vary. In this case, the p-chart would have a variable sample size.

One way of handling this is to compute a standard deviation for each individual sample. Thus,

Std_Dev = sqrt( pbar (1 - pbar) / ni )


where ni is the number of observations in the ith sample. The control limits for this sample will be given by

pbar ± 3 sqrt( pbar (1 - pbar) / ni )
where

pbar = (total number nonconforming in all samples) / (Σ ni)

Example 6: A Variable Sample Size p chart.

Figure 6.3 shows 20 samples with varying sample sizes. The value of pbar is computed as

pbar = (18 + 20 + 14 + ... + 18) / (137 + 158 + 92 + ... + 160) = 271/2980 = 0.0909

Therefore, control limits for sample #1 would be

LCLp = 0.0909 - 3 sqrt( 0.0909 (1 - 0.0909) / 137 ) = 0.017

UCLp = 0.0909 + 3 sqrt( 0.0909 (1 - 0.0909) / 137 ) = 0.165

Note carefully that because the sample sizes vary, control limits would be different for each sample. The p-chart is shown in Figure 6.4.
Points 13 and 15 are outside the control limits and would require investigation by production personnel.
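The per-sample limits can be generated with a short loop, as in the sketch below (Python). Only the first three samples of Figure 6.3 are used, so the pbar printed here is for those three samples alone and differs from the 0.0909 obtained from the full data set; the names are illustrative.

```python
# Sketch: per-sample control limits for a p-chart with varying sample sizes.
from math import sqrt

def p_chart_variable(counts, sizes):
    pbar = sum(counts) / sum(sizes)                  # overall fraction nonconforming
    rows = []
    for y, n in zip(counts, sizes):
        s = sqrt(pbar * (1 - pbar) / n)              # limits depend on each n_i
        rows.append((y / n, max(0.0, pbar - 3 * s), pbar + 3 * s))
    return pbar, rows

pbar, rows = p_chart_variable([18, 20, 14], [137, 158, 92])
print(round(pbar, 4))
print([(round(p, 3), round(l, 3), round(u, 3)) for p, l, u in rows])
```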

Figure 6.3 Data and Calculations for Example 6

Sample #i    Value    Sample Size (ni)    p = Fraction Nonconforming    Std Dev    LCLp    UCLp
1 18 137 0.1314 0.0245648 0.0172 0.1646
2 20 158 0.1266 0.0228741 0.0223 0.1596
3 14 92 0.1522 0.0299764 0.001 0.1809
4 6 122 0.0492 0.0260311 0.0128 0.169
5 11 86 0.1279 0.031004 0 0.184
6 22 187 0.1176 0.0210258 0.0279 0.154
7 6 156 0.0385 0.0230203 0.0219 0.16
8 9 117 0.0769 0.0265815 0.0112 0.1707
9 14 110 0.1273 0.0274143 0.0087 0.1732
10 12 142 0.0845 0.0241284 0.0186 0.1633
11 8 140 0.0571 0.0243001 0.018 0.1638
12 13 179 0.0726 0.0214905 0.0265 0.1554
13 5 196 0.0255 0.0205374 0.0293 0.1526
14 15 163 0.0920 0.0225206 0.0234 0.1585
15 25 140 0.1786 0.0243001 0.018 0.1638
16 12 135 0.0889 0.0247461 0.0167 0.1652
17 16 186 0.0860 0.0210822 0.0277 0.1542
18 12 193 0.0622 0.0206964 0.0289 0.153
19 15 181 0.0829 0.0213714 0.0268 0.1551
20 18 160 0.1125 0.0227307 0.0227 0.1591

7.3 np-charts for Number Nonconforming

In the p-chart, the fraction nonconforming of the ith sample is given by

pi = yi/n

where yi is the number found nonconforming and n is the sample size. Multiplying both sides of this equation by n yields

yi = npi

That is, the number nonconforming is equal to the sample size times the proportion nonconforming. Instead of using a chart for the
fraction nonconforming, an equivalent alternative, a chart for the number of nonconforming items, is useful. Such a control chart is
called an np-chart.

The np-chart is a control chart for the number of nonconforming items in a sample. To use the np-chart, the size of each sample must be
constant. Suppose that two samples of sizes 10 and 15 each have four nonconforming items. Clearly, the fraction nonconforming in each
sample is different. Thus, equal sample sizes are necessary to have a common base for measurement. Equal sample sizes are not required
for p-charts, since the fraction nonconforming is invariant to the sample size.

The np-chart is a useful alternative to the p-chart because it is often easier for production personnel to understand: the number of
nonconforming items is more meaningful than a fraction. Also, since it requires only a count, the computations are simpler.

Figure 6.6 Data and Calculations for Example 7


Sample # Number Nonconforming (np) LCLnp UCLnp
1 3 0 6.6
2 1 0 6.6
3 0 0 6.6
4 0 0 6.6
5 2 0 6.6
6 5 0 6.6
7 3 0 6.6
8 6 0 6.6
9 1 0 6.6
10 4 0 6.6
11 0 0 6.6
12 2 0 6.6
13 1 0 6.6
14 3 0 6.6
15 4 0 6.6
16 1 0 6.6
17 1 0 6.6
18 2 0 6.6
19 5 0 6.6
20 2 0 6.6
21 3 0 6.6
22 4 0 6.6
23 1 0 6.6
24 0 0 6.6
25 1 0 6.6

The control limits for the np-chart, like those for the p-chart, are based on the binomial probability distribution. The center line is the
average number of nonconforming items per sample, denoted npbar, which is calculated by taking k samples of size n, summing the
number of nonconforming items yi in each sample, and dividing by k. That is,

npbar = (y1 + y2 + ... + yk)/k

An estimate of the standard deviation is

s_np = sqrt( npbar (1 - pbar) )

where pbar = npbar/n. Using three-sigma limits as before, the control limits are

UCL_np = npbar + 3 sqrt( npbar (1 - pbar) )

LCL_np = npbar - 3 sqrt( npbar (1 - pbar) )

Example 7: An np-chart for a Post Office.

The np data for the post office example discussed earlier are given in Figure 6.6. The average number of errors found is

npbar = (3 + 1 + ... + 0 + 1)/25 = 2.2

so that

pbar = 2.2/100 = 0.022

To find the standard deviation, we compute

s_np = sqrt( 2.2 × (1 - 0.022) ) = 1.4668

Then, the control limits are computed as

UCL_np = 2.2 + 3(1.4668) = 6.6

LCL_np = 2.2 - 3(1.4668) = -2.2

Since the lower control limit is less than zero, a value of LCL = 0 is used. The control chart for this example is given in Figure 6.5.
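A minimal sketch of the np-chart calculation for this example follows (Python, n = 100 letters per sample, counts from Figure 6.6); the function name is illustrative.

```python
# Sketch: np-chart limits for the post-office data (n = 100 letters per sample).
from math import sqrt

def np_chart_limits(counts, n):
    npbar = sum(counts) / len(counts)          # average number nonconforming per sample
    pbar = npbar / n
    s = sqrt(npbar * (1 - pbar))
    return npbar, max(0.0, npbar - 3 * s), npbar + 3 * s

errors = [3, 1, 0, 0, 2, 5, 3, 6, 1, 4, 0, 2, 1, 3, 4,
          1, 1, 2, 5, 2, 3, 4, 1, 0, 1]
print(np_chart_limits(errors, n=100))   # about (2.2, 0.0, 6.6)
```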

7.4 Charts for Defects

Recall that a defect is a single nonconforming characteristic of an item, while the term defective refers to an item that has one or more
defects in it. In some situations, quality assurance personnel may be interested not only in whether an individual item is defective but
also in how many defects it has. For example, in complex assemblies such as electronics, automobiles, or hotel suites being inspected,
the number of defects is just as important as whether the product is defective. Two charts can be applied in such situations. The c-chart
is used to control the total number of defects per unit when subgroup size is constant. If subgroup sizes are variable, a u-chart is used to
control the average number of defects per unit.

The c-chart is based on the Poisson probability distribution. To construct a c-chart, first estimate the average number of defects
per unit, cbar, by taking at least 25 samples of equal size, counting the number of defects per sample, and finding the average. The
standard deviation (sc) of the Poisson distribution is the square root of the mean:

sc = sqrt(cbar)

Thus, three-sigma control limits for the c-chart are given by

UCLc = cbar + 3 sqrt(cbar)

LCLc = cbar - 3 sqrt(cbar)

Example 8: Constructing a c-Chart

Figure 6.7 shows the number of machine failures in a factory over a 25-day period. The total number of failures is 45; therefore, the
average number of failures per day is

cbar = 45/25 = 1.8

Control limits for the c-chart here are therefore given by

UCLc = 1.8 + 3 sqrt(1.8) = 5.82
LCLc = 1.8 - 3 sqrt(1.8) = -2.22, which is taken as zero

The chart is shown in Figure 6.7 and appears to be in control. The inspection data is shown in Figure 6.8. Such a chart can be used for
continued control or for monitoring the effectiveness of a quality improvement program.

Sample # Number of Defects (c) LCLc UCLc


1 2 0 5.82
2 3 0 5.82
3 0 0 5.82
4 1 0 5.82
5 3 0 5.82
6 5 0 5.82
7 3 0 5.82
8 1 0 5.82
9 2 0 5.82
10 2 0 5.82
11 0 0 5.82
12 1 0 5.82
13 0 0 5.82
14 2 0 5.82
15 4 0 5.82
16 1 0 5.82
17 2 0 5.82
18 0 0 5.82
19 3 0 5.82
20 2 0 5.82
21 1 0 5.82
22 4 0 5.82

23 0 0 5.82
24 0 0 5.82
25 3 0 5.82

Figure 6.8 Data and Calculations for Example 8
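The c-chart limits of Example 8 can be reproduced with a few lines, as in the sketch below (Python, using the failure counts listed above); the function name is illustrative.

```python
# Sketch: c-chart limits for the machine-failure counts of Example 8.
from math import sqrt

def c_chart_limits(defect_counts):
    cbar = sum(defect_counts) / len(defect_counts)
    return cbar, max(0.0, cbar - 3 * sqrt(cbar)), cbar + 3 * sqrt(cbar)

failures = [2, 3, 0, 1, 3, 5, 3, 1, 2, 2, 0, 1, 0, 2, 4,
            1, 2, 0, 3, 2, 1, 4, 0, 0, 3]
print(c_chart_limits(failures))   # about (1.8, 0.0, 5.82)
```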

As long as sample size is constant, a c-chart is appropriate. In many cases, however, the subgroup size is not constant or the nature of the
production process does not yield discrete, measurable units. For example, suppose that in an auto assembly plant, several different
models are produced that vary in surface area. The number of defects will not then be a valid comparison among different car models.
Other applications, such as the production of textiles, photographic film, or paper, have no convenient set of items to measure. In such
cases, a standard unit of measurement is used, such as defects per square foot or defects per square inch. The control chart used in these
situations is called a u-chart. The variable u represents the average number of defects per unit of measurement, that is, ui = ci/ni, where ni
is the size of subgroup #i (such as square feet). The center line ubar for k samples each of size ni is computed as follows:

ubar = (c1 + c2 + ... + ck) / (n1 + n2 + ... + nk)

The standard deviation su of the ith sample is estimated by

su = sqrt( ubar / ni )

The control limits, based on three standard deviations for the ith sample are then

UCLu = ubar + 3 sqrt( ubar / ni )

LCLu = ubar - 3 sqrt( ubar / ni )

Note that if the size (ni) of the subgroups varies, so will the control limits. This result is similar to the p-chart with variable sample sizes.
In general, whenever the sample size n varies, the control limits will also vary.

Example 9: Constructing a u-Chart

A catalog distributor ships a variety of orders each day. The packing slips often contain errors such as wrong purchase order numbers,
wrong quantities, or incorrect sizes. Figure 6.10 shows the error data collected during August, 1999. Since the sample size varies each
day, a u-chart is appropriate for use here.

To construct the chart, first compute the number of errors per slip as shown in column 3. The average number of errors per slip, ubar, is
found by dividing the total number of errors (217) by the total number of packing slips (2,843). This gives ubar = 217/2843 = 0.076. The
standard deviation for a particular sample size ni is

su = sqrt( 0.076 / ni )
The control limits are shown in Figure 6.10. As with a p-chart, individual control limits will vary with the sample size. The control chart
is shown in Figure 6.9. One point (sample #2) appears to be out of control.
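A sketch of the u-chart computation follows (Python). For brevity it uses only the first three days of the packing-slip data, so its limits are based on those three subgroups alone rather than on the month-long ubar = 0.076 of Figure 6.10; the names are illustrative.

```python
# Sketch: u-chart with variable subgroup sizes (defects per packing slip).
from math import sqrt

def u_chart(defects, sizes):
    ubar = sum(defects) / sum(sizes)               # overall defects per unit
    limits = []
    for n in sizes:
        s = sqrt(ubar / n)
        limits.append((max(0.0, ubar - 3 * s), ubar + 3 * s))
    return ubar, limits

# First three days of the packing-slip data, for illustration only:
ubar, limits = u_chart([8, 15, 6], [92, 69, 86])
print(round(ubar, 3), [(round(l, 3), round(u, 3)) for l, u in limits])
```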

Figure 6.10 Data and Calculations for Example 9


Sample # Count of Defects ci Sample Size ni ui = ci/ni = Defects/unit Standard Deviation su LCLu UCLu
1 8 92 0.0870 0.0288036 0 0.1627
2 15 69 0.2174 0.0332596 0 0.1761
3 6 86 0.0698 0.0297915 0 0.1657
4 13 85 0.1529 0.0299662 0 0.1662
5 5 123 0.0407 0.0249109 0.0016 0.1511
6 5 87 0.0575 0.0296198 0 0.1652
7 3 74 0.0405 0.0321163 0 0.1727
8 8 83 0.0964 0.0303251 0 0.1673
9 4 103 0.0388 0.0272222 0 0.158
10 6 60 0.1000 0.0356669 0 0.1833
11 7 136 0.0515 0.0236904 0.0053 0.1474
12 4 80 0.0500 0.0308885 0 0.169
13 2 70 0.0286 0.0330212 0 0.1754
14 11 73 0.1507 0.0323355 0 0.1733
15 13 89 0.1461 0.0292951 0 0.1642
16 6 129 0.0465 0.0243246 0.0034 0.1493
17 6 78 0.0769 0.031282 0 0.1702
18 3 88 0.0341 0.029451 0 0.1647
19 8 76 0.1053 0.0316909 0 0.1714
20 9 101 0.0891 0.0274904 0 0.1588
21 8 92 0.0870 0.0288036 0 0.1627
22 2 70 0.0286 0.0330212 0 0.1754
23 9 54 0.1667 0.0375963 0 0.1891
24 5 83 0.0602 0.0303251 0 0.1673
25 13 165 0.0788 0.021508 0.0118 0.1409
26 5 137 0.0365 0.0236038 0.0055 0.1471
27 8 79 0.1013 0.0310834 0 0.1696
28 6 76 0.0789 0.0316909 0 0.1714
29 7 147 0.0476 0.0227868 0.008 0.1447
30 4 80 0.0500 0.0308885 0 0.169
31 8 78 0.1026 0.031282 0 0.1702

7.5 A Special Application of Attribute Charts

One application of c-charts and u-charts is in a quality rating system. When some defects are considered to be more serious than others,
they can be rated, or categorized, into different classes. Then counts of defects found in each class may be used to produce c- or u-charts.
For instance,
A - Very serious, B - Serious, C - Moderately serious, D - Not serious

Each category can be weighted using a point scale, such as 100 for A, 50 for B, 10 for C, and 1 for D. These points, or demerits, can be used
as the basis for a c- or u-chart that would measure total demerits or demerits per unit, respectively. Such charts are often used for
internal quality control and as a means of rating suppliers.

8 Choosing the Correct Process Control (SPC) Chart

Confusion often exists over which chart is appropriate for a specific application, since the c- and u-charts apply to situations in which the
quality characteristics inspected do not necessarily come from discrete units. Figure 8.1 shows the choices and guidelines.

The key issue to consider is whether the sampling unit is constant. For example, suppose that an electronics manufacturer produces
circuit boards. The boards may contain various defects, such as faulty components and/or missing connections. Because the sampling
unit—the circuit board—is constant (assuming that all boards are the same), a c-chart is appropriate. If the process produces boards of
varying sizes with different numbers of components and connections, then a u-chart would apply.

As another example, consider a telemarketing firm that wants to track the number of calls needed to make one sale. In this case, the firm
has no physical sampling unit. However, an analogy can be made with the circuit boards. The successful sale corresponds to the defect-
free circuit board, and the number of failed calls to the number of defects. In both examples, the number of occurrences in relation to
a constant entity is being measured. Thus, a c-chart is appropriate.

Figure 8.1 Control Chart Selection

9 Key Points about Control Chart Construction

 Statistical process control (SPC) is a methodology for monitoring a process to identify special causes of variation and signal
the need to take corrective action when appropriate.
 Capability and control are independent concepts. Ideally, we would like a process to have both high capability and be in
control. If a process is not in control, it should first be brought into control before attempting to evaluate process capability.
 Control charts have three basic applications:
 Establishing a state of statistical control,
 Monitoring a process to identify special causes, and
 Determining process capability.
 Control charts for variables data include xbar- and R-charts, xbar- and s-charts, and individual and moving range charts. xbar- and s-charts
are alternatives to xbar- and R-charts for larger sample sizes. The sample standard deviation provides a better indication of
process variability than the range. Individuals charts are useful when every item can be inspected and when a long lead time
exists for producing an item. Moving ranges are used to measure the variability in individuals charts.
 A process is in control if no points are outside control limits; the number of points above and below the center line is about the
same; the points seem to fall randomly above and below the center line; and most points (but not all) are near the center line,
with only a few close to the control limits.
 Typical out-of-control conditions are represented by sudden shifts in the mean value, cycles, trends, hugging of the center line,
hugging of the control limits, and instability.
 Modified control limits can be used when the process capability is known to be good. These wider limits reduce the amount of
investigation of isolated points that would fall outside the usual control limits.
 Charts for attributes include p-, np-, c- and u-charts. The np-chart is an alternative to the p-chart, and controls the number
nonconforming for attributes data. Charts for defects include the c-chart and u-chart. The c-chart is used for constant sample
size and the u-chart is used for variable sample size.

10 Review Questions about Control Charts

Manufacturing engineers and supervisors often skip plotting process control charts, or they may plot control charts using
historical data rather than live inspection data. The real value of the chart, however, lies in its serving as an online process control tool that allows
personnel to react quickly to unacceptable production. As Juran and Deming repeatedly remarked, one need not worry about small
fluctuations in quality in the factory. Rather, one should troubleshoot when a control chart signals that an assignable cause has probably
disturbed the process. To review such concepts, some questions are provided below that may be used in internally organized SPC
workshops. For actually preparing the control charts, no more than Excel® would be needed.

1. Define statistical process control and discuss its advantages

2. What does the term in statistical control mean? Explain the difference between capability and control.

3. What are the disadvantages of simply using histograms to study process capability (Cpk)?

4. Discuss the three primary applications of control charts.

5. Describe the difference between variables and attributes data. What types of control charts are used for each?

6. Briefly describe the methodology of constructing and using control charts.

7. What does one look for in interpreting control charts? Explain the possible causes of different out-of-control indicators.

8. How should control charts be used by shop-floor personnel?

9. What are modified control limits? Under what conditions should they be used?

10. How are variables control charts used to determine process capability?

11. Describe the difference between control limits and specification limits.

12. Why is the s-chart sometimes used in place of the R-Chart?

13. Describe some situations in which a chart for individual measurements would be used.

14. Explain the concept of a moving range. Why is a moving range chart difficult to interpret?

15. Explain the difference between defects and defectives.

16. Briefly describe the process of constructing a p-chart. What are the key differences compared with an x-chart?

17. Does an np-chart provide any different information than a p-chart? Why would an np-chart be used?

18. Explain the difference between a c-chart and a u-chart.

19. Discuss how to use charts for defects in a quality rating system.

20. Describe the rules for determining the appropriate control chart to use in any given situation.

11 Self-Assessment Questions about Statistical Process Control

1. Warmup question: Fill in all the labels missing in the boxes:

Is this process in statistical control? What actions will you take if it is not?

2. Thirty samples of size 3 listed in the following table were the dimensional measurements taken from a machining process over a
15-hour period.
a. Which control chart will you use?
b. Use Excel to compute the mean and standard deviation of the data.
c. Compute also the mean and range of each sample and plot them on the control chart.
d. Is this process in statistical control?

Sample Observations
1 3.55 3.64 4.37
2 3.61 3.42 4.07
3 3.61 3.36 4.34
4 4.13 3.50 3.61
5 4.06 3.28 3.07
6 4.48 4.32 3.71
7 3.25 3.58 3.51
8 4.25 3.38 3.00
9 4.35 3.64 3.20
10 3.62 3.61 3.43
11 3.09 3.28 3.12
12 3.38 3.15 3.09
13 2.85 3.44 4.06
14 3.59 3.61 3.34
15 3.60 2.83 2.84
16 2.69 3.57 3.28
17 3.07 3.18 3.11
18 2.86 3.69 3.05
19 3.68 3.59 3.93
20 2.90 3.41 3.37
21 3.57 3.63 2.72
22 2.82 3.55 3.56
23 3.82 2.91 3.80
24 3.14 3.83 3.80
25 3.97 3.34 3.65
26 3.77 3.60 3.81
27 4.12 3.38 3.37
28 3.92 3.60 3.54
29 3.50 4.08 4.09
30 4.23 3.62 3.00

3. Fifty samples (subgroups) of size 5 each listed in the following table were taken from a machining process over a 25-hour period.
a. Compute the mean and standard deviation of the measurement data shown.
b. Compute the mean and range of each sample and plot them on control charts. Does the process appear to be in statistical
control? Why or why not?

Subgroup Number →
(each column below is one subgroup; the five rows are the five observations within that subgroup)
1 2 3 4 5 6 7 8 9 10
1 9.999 10.022 10.001 10.007 10.011 10.019 10.015 9.988 9.980 10.017
2 9.992 9.998 10.006 10.006 9.979 10.017 10.015 9.990 10.001 10.017
3 10.002 10.037 10.002 10.004 9.991 10.018 9.978 10.008 10.013 9.988
4 10.003 9.994 9.993 10.018 9.996 10.008 10.006 10.002 9.998 10.010
5 10.009 10.003 10.011 10.011 9.994 10.018 9.997 9.989 10.015 9.980

11 12 13 14 15 16 17 18 19 20
1 9.980 10.004 10.025 9.992 9.985 9.977 9.996 10.014 10.001 9.982
2 10.038 9.990 9.989 10.023 10.002 9.975 9.991 10.010 9.979 9.975
3 9.990 10.002 9.981 10.019 10.008 10.002 10.005 10.000 10.001 9.976
4 9.996 10.003 10.006 9.990 10.008 10.021 10.009 10.001 10.015 10.012
5 10.016 9.996 9.998 10.003 9.998 9.989 9.977 10.006 10.009 9.994

21 22 23 24 25 26 27 28 29 30
1 10.010 9.988 9.991 10.005 9.987 9.994 9.994 9.972 10.018 9.985
2 10.003 10.004 9.996 10.003 9.993 10.007 9.987 9.994 10.007 10.010
3 9.990 10.001 10.020 10.027 9.992 10.013 10.027 9.969 9.980 9.998
4 10.010 9.995 10.002 9.996 9.987 9.997 10.030 10.011 9.987 10.033
5 10.015 9.977 10.022 9.970 10.008 10.014 9.989 9.985 10.014 9.994

31 32 33 34 35 36 37 38 39 40
1 10.009 9.987 9.990 9.985 9.991 10.002 10.045 9.970 10.019 9.954
2 10.013 10.012 9.973 10.038 9.999 9.989 9.993 9.999 9.989 10.011
3 10.008 10.015 9.996 9.991 9.989 9.983 10.007 9.989 9.998 10.003
4 9.990 9.995 9.990 9.988 10.014 10.013 9.990 9.999 9.997 9.987
5 10.008 10.021 9.980 9.986 9.997 9.980 10.010 10.014 9.986 10.005

41 42 43 44 45 46 47 48 49 50
1 9.978 10.008 10.035 10.012 10.009 10.004 10.023 9.972 9.988 10.032
2 10.010 10.025 10.015 10.025 10.007 9.998 9.979 9.993 10.002 10.017
3 9.996 10.017 9.994 9.972 10.027 10.026 10.002 10.005 9.992 9.999
4 9.991 10.008 9.991 10.013 9.988 10.032 9.979 9.982 10.002 9.979
5 9.995 9.989 9.998 10.011 9.967 10.019 10.020 9.999 9.988 10.002

4. The fraction defective for 20 samples for an automotive piston production process is given below. Two hundred pistons are
inspected each day. Construct a p-chart and interpret the results.

Sample # Fraction Defective Sample # Fraction Defective


1 0.11 11 0.16
2 0.16 12 0.23
3 0.12 13 0.15
4 0.10 14 0.12
5 0.09 15 0.11
6 0.12 16 0.11
7 0.12 17 0.14
8 0.15 18 0.16
9 0.09 19 0.10
10 0.13 20 0.13

12 Acceptance Sampling

A factory that uses JIT in production does not worry about the quality of incoming parts—they are all guaranteed by vendors to be 100%
defect free. These parts are directly fed on the assembly line. However, not all factories have such vendors. Sampling inspection is a
screening mechanism often used in material receiving departments to separate items of good quality from items of poor quality; it actually does
not improve the quality of any item inspected.

The most obvious way to tell whether a product item is of good quality is to inspect it or to use it. If this can be done for every
product item, that is, if a 100% inspection can be performed, there is no need for acceptance sampling. However, in many cases,
it is not economical or not possible to do a 100% inspection. For instance, if the cost of inspection for an item is higher than the value of the
item, which usually is true for low-cost mass-produced products, a 100% inspection is not justified; if the equipment cost and labor
cost to inspect an item is very high, only a small fraction of the product can be inspected. If the inspection is a destructive test (for
example, a burn-in test for electronic components or a car crash test), obviously a 100% inspection will destroy all the products. When
inspection is necessary and 100% inspection is not possible, acceptance sampling can be employed.

A sampling plan is a method for guiding the acceptance sampling process. It specifies the procedure for drawing samples to inspect from
a batch of products and then the rule for deciding whether to accept or reject the whole batch based on the results of this inspection. The
sample is a small number of items taken from the batch rather than the whole batch. The action of rejecting the batch means not
accepting it for consumption and this may include downgrading the batch or selling it at a lower price or returning it to its vendor.

If we aren’t doing 100% inspection, let’s look at what we risk by sampling only a few of the items from the batch received. Suppose that a
plant’s sampling plan specifies that (a) n items are drawn randomly from a batch of parts (perhaps a truckload) received, to form a sample,
and (b) the batch is rejected if and only if more than c of these n items are defective or non-conforming. Probability theory can help us
here in estimating the risk of accepting an unacceptable batch. An operating characteristic curve, or OC-curve, of a sampling plan is
defined as the plot of the probability that the batch will be accepted (Pa(p)) against the fraction p of defective products in the batch. The
larger the magnitude of Pa(p), the more likely it is that the batch will be accepted. (See Figures 11.1 and 11.2.) A higher likelihood of
acceptance benefits the producer (the vendor). On the other hand, the smaller the magnitude of Pa(p), the harder it will be for the
batch to be accepted. This would benefit and even protect the consumer, who would normally want some assurance against receiving bad
parts and would prefer accepting batches with a low p (fraction defective) value.

In order to specify a sampling plan with the Pa(p) characteristics given above, the numbers n and c must be correctly specified. This
specification requires us to first specify two batch quality (fraction defective) levels, namely the AQL (acceptable quality level) and the
RQL (rejection quality level). AQL and RQL (explained below) are two key quality parameters frequently used in designing a
sampling plan. Of course, an ideal sampling plan is one that accepts the batch with 100% probability if the fraction of defective items in it
is less than or equal to AQL and rejects the batch with 100% probability if the fraction of defective items in the batch is larger than
AQL.

AQL and RQL

Most customers realize that perfect quality (e.g., a batch or lot of parts containing no defective parts in it) may be impossible to expect;
some defectives will always be there. Therefore, customers often decide to tolerate a small fraction of defectives in their purchases.
However, a customer certainly wants a high level of assurance that the sampling plan used to screen the incoming lots will reject lots
with fraction defective levels exceeding some decidedly poor quality threshold—called the rejection quality level or RQL. In reality, RQL
is a defect level that causes a great deal of heartache once such a lot enters the customer's factory. It’s a disaster!

The producer or supplier of those parts, on the other hand, wants to ensure that the customer's acceptance sampling plan will not reject
too many lots with defect levels that are certainly within the customer's tolerance, i.e., acceptable to the customer on the average.
Generally, the customer sets a quality threshold here also, called AQL. This is actually the worst lot fraction defective but a level that is
acceptable to the customer in the shipments he/she receives on a regular basis.

The Operating Characteristic (OC) Curve of a Single Sampling Plan

A bit of thinking will indicate that only a 100% inspection of all items in a lot would accept the batch with 100% probability if the fraction
of defective items in it is lower than AQL and reject the lot with 100% probability if the fraction of defective items in it is larger than
AQL. Such performance cannot be realized otherwise. The OC-curve of such a sampling plan is shown in Figure 11.1.

In order to correctly evaluate the OC-curve of an arbitrary (not 100%-inspection) sampling plan with parameters (n, c) we need to use
certain principles of probability theory as follows. Suppose that the batch is large or the production process is continuous, so that
drawing a sample item with or without replacement has about the same result. In such cases we are permitted to assume that the
number of defective items x in a sample of sufficiently large size n follows a binomial probability distribution. Pa(p) is then given by the
following expression.

Pa(p) = Σ (x = 0 to c) [ n! / (x! (n - x)!) ] p^x (1 - p)^(n - x)

Now, if the producer is willing to sacrifice a little, so that a batch with fraction defective AQL is accepted with a probability of at least
(1 - α), where α is a small positive number (a probability), and the consumer of the parts is willing to sacrifice a little, so that a batch
with fraction defective RQL is accepted with a probability of at most β, where β is a small positive number and RQL > AQL, then the
following two inequalities can be established:

Σ (x = 0 to c) [ n! / (x! (n - x)!) ] AQL^x (1 - AQL)^(n - x) ≥ 1 - α

Σ (x = c+1 to n) [ n! / (x! (n - x)!) ] RQL^x (1 - RQL)^(n - x) ≥ 1 - β

From the above two inequalities, the numbers n and c can be solved for (but the solution may not be unique). The OC-curve of such a
sampling plan is shown in the figure below. The number α is called the producer's risk, and the number β the consumer's risk. As a
common practice in industry, the magnitudes of α and β are usually set at some value from 0.01 to 0.1. Nomograms are available for
obtaining solution(s) of the inequalities given above easily.
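In place of a nomogram, a small computer search also works. The sketch below (Python) evaluates Pa(p) from the binomial expression above and finds the smallest (n, c) plan meeting chosen risks; AQL = 0.01, RQL = 0.06, α = 0.05 and β = 0.10 are illustrative values, not taken from this text.

```python
# Sketch: OC curve of an (n, c) single sampling plan and a brute-force search
# for the smallest plan meeting given AQL/RQL and alpha/beta risks.
from math import comb

def prob_accept(n, c, p):
    """Pa(p) = P(at most c defectives in a sample of n), binomial model."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

def find_plan(aql, rql, alpha=0.05, beta=0.10, n_max=2000):
    """Return the smallest (n, c) with Pa(AQL) >= 1-alpha and Pa(RQL) <= beta."""
    for n in range(1, n_max + 1):
        for c in range(0, n + 1):
            if prob_accept(n, c, aql) >= 1 - alpha and prob_accept(n, c, rql) <= beta:
                return n, c
    return None   # no plan found within n_max

plan = find_plan(aql=0.01, rql=0.06)
print(plan, prob_accept(plan[0], plan[1], 0.01), prob_accept(plan[0], plan[1], 0.06))
```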

The above scheme is called a single sampling plan, because the decision to accept or reject the batch is made in a single stage after
drawing one single sample from it.

Double Sampling Plans

A double sampling plan is one in which a sample of size n1 is drawn and inspected, and the batch is accepted or rejected according to whether
the number d1 of defective items found in the sample is ≤ r1 or ≥ r2 (where r1 < r2); if the number of defective items lies between r1
and r2, a further sample of size n2 is drawn and inspected, and the batch is accepted or rejected according to whether the total number d1
+ d2 of defective items in the first and second samples is ≤ r3 or > r3. This procedure is shown in Figure 11.3.


The average sample number (ASN) of a double sampling plan is n1 + (1 - P1(p)) n2, where P1(p) is the probability that a decision (to
accept or to reject the batch) is reached on the first sample alone. If the value of p in the batch is very low or very high, a decision can
usually be made upon inspection of the first sample, and the ASN of a double sampling plan will be smaller than the sample size of a
single sampling plan with the same producer's risk (α) and consumer's risk (β).

The quantity [p Pa(p)] is called the average outgoing quality (AOQ) of a sampling plan at fraction defective p. This is the quality to be
expected at the customer's end if the sampling plan is used consistently and repeatedly to accept or reject lots being receive It is clear
that if p = 0, then AOQ = 0. If p = 1, that is, all the product items in the batch are defective, then Pa(p) of any sampling plan that we are
using should be equal to 0 for otherwise the plan would not have any utility. Hence AOQ = 0 also when p = 1. Since AOQ can never be
negative, it has a global maximum point in the range (0, 1); this maximum is called the average outgoing quality limit (AOQL) of the
sampling plan.

The idea of the double sampling plan can be extended to construct sampling plans of more than two stages, namely multiple sampling plans.
A sequential sampling plan is a sampling plan in which one item is drawn and inspected at a time, in such a way that if a decision (of
accepting or rejecting the batch) can be made upon inspection of the first item, the process is stopped; if a decision cannot be made, a second
item is drawn and inspected, and if a decision can be made upon inspection of the second item, the plan is stopped; otherwise, a
third item is drawn and inspected, and so on, until a decision can be made. However, multiple sampling plans and sequential sampling
plans are not so commonly used, because their implementation in practice is more complicated than single and double sampling plans.
The benefit of multiple or sequential sampling is that it reduces the amount of inspection.

Although a sampling plan can be specified by setting the producer's risk (α) and consumer's risk (β) at AQL and RQL, the quantity AOQL
can also be used to specify a sampling plan. These methods are described in texts of quality assurance such as Montgomery (2012).

Another type of sampling plan, which is different from the above, is the continuous sampling procedure (CSP). The rationale of CSP is
that, if we are not sure that the products produced by a process are of good quality, 100% inspection is adopted; if the quality of the
products is found to be good, then only a fraction of the products is inspected. In the simplest CSP, 100% inspection is performed initially;
during 100% inspection, if no defective items are found after a specified number of items have been inspected (which suggests that the
quality of the product being produced is good), 100% inspection is stopped and only a fraction f of the products is inspected. During
fraction inspection of the products, if a defective item is found (which suggests that the quality of the products might have deteriorated),
fraction inspection is stopped and 100% inspection is resumed. More refined CSPs have also been constructed, for example, by
setting f at 1/2 at the first stage, 1/4 at the second stage, and so on.

All sampling plans described above are called attribute sampling plans, because the inspection procedure is based on a "go"/ "no go"
basis, that is, an item is either regarded as non-defective and accepted, or it is regarded as defective and not accepted. Variable sampling
plans are sampling plans in which continuous measurements (such as dimensional or weight checks) are made on each item in the
sample, and the decision as to whether to accept or to reject the batch is based on the sample mean or the average of the measurements
obtained from all items contained in the sample. A variable sampling plan can be used, for example, when a product item is regarded as
acceptable if a certain measurement x (diameter, length, hardness, etc.) of it exceeds a pre-set lower specification limit L; otherwise the
item is regarded as not acceptable (see Figure 11.4).

Measurements {x} of the products (for instance ring dimensions in automotive bearings manufacture) would vary from item to item, but
these measurements have a population mean µ, say. When µ is considerably larger than L, we can expect that most items will have an x
value greater than L and all such items would be acceptable; when µ is much less than L, we can expect that most items will have x values
less than L and all such items would not be acceptable. A variable sampling plan can be constructed by specifying a sample size n and a
lower cut-off value c for the sample mean xbarn such that if the sample of size n is drawn and items in this sample are measured, the
batch is accepted if the sample mean xbarn exceeds c. The batch is rejected otherwise. We require that when the population mean of the
product produced is µ1 or larger, the batch is accepted with a probability of at least 1 - α, and when the population mean is µ2 or
smaller, the batch is accepted with a probability of at most β, where α is the producer's risk and β is the consumer's risk (see Figure
11.5).

Suppose that x_γ is the value of x such that the probability of x < x_γ is γ. According to the criterion given above we can derive two
relationships that relate n and c. However, this system of inequalities may not have a unique (n, c) solution. From elementary
statistical theory, if the distribution of x is known (for example, when x follows a normal distribution), from these inequalities we can
determine a minimum value for the sample size n (which is an integer) and a range for the cut-off point c for the sample mean xbar.
Such a sampling plan is called a single specification limit variable sampling plan. If an upper specification limit U instead of a lower
specification limit L is set for x, we only need to consider the lower specification limit problem with (-x) replacing x and (-U) replacing L.

When a product item is regarded as acceptable only if a certain measurement x of it lies between a lower specification limit L and an
upper specification limit U, a double specification limit variable sampling plan is used. In a double specification limit variable
sampling plan, a sample size n, a lower cut-off value cL and an upper cut-off value cU for the sample mean xbar are specified. A batch is
accepted if and only if the sample mean of a sample of size n from the batch lies between cL and cU. Calculations for cL, cU, and n are more
complicated than in the single specification case.

For the single (lower) specification limit case with known σ, the two relationships mentioned above take the form

n ≥ [ (z_α + z_β) σ / (µ1 - µ2) ]²

µ2 + z_β σ/√n ≤ c ≤ µ1 - z_α σ/√n

where z_α and z_β are the standard normal points cutting off upper-tail probabilities α and β.

International standards for sampling plans are now available. Many of these are based on the work of Professors Dodge and Romig
(1959). The plan originally developed for single and multiple attribute sampling for the US Army during WW II is now widely
used in industry; it is called MIL-STD-105E. An equivalent Indian standard, known as IS 2500, has been published by the Bureau of
Indian Standards. Many other official standards for various attribute sampling plans (such as those based on AOQ, or CSPs, and so on)
and variable sampling plans (assuming the variable has a normal distribution, with the population variance known or unknown, and
so on) have been published by the US government and the British Standards Institution.

Before we end this section, we stress again that sampling inspection is only a screening tool for separating batches of good quality
products from batches of poor quality products. To some extent this screening assures the quality of incoming parts and materials.

Actually, the use of sampling plans helps an industry do this screening more effectively than drawing samples arbitrarily.
Therefore, sampling inspection can be used during purchasing, for checking the quality of incoming materials, whenever one is not sure
about the conditions and QC procedures in use in the vendor's plant.

Acceptance sampling can also be used for the final checking of products after production. This, to a limited degree, guarantees the
quality of the products being readied for a customer before they are physically dispatched. Even mobile phone producers use acceptance
sampling as a temporary means of quality control until permanent corrective actions can be implemented. But note that, unlike SPC,
acceptance sampling does not help in the prevention of the production of poor quality products.

In the following section we discuss analytical procedures that come from the domain of quality engineering, a recently formalized
discipline that aims at developing products whose superior performance delights the discriminating user, not only when the package is
opened, but also throughout their lifetime of use. The quality of such products is robust, i.e., it remains unaffected by the deleterious
impact of environmental or other factors often beyond the users' control. We also introduce the conduct of experiments to lead to high
quality products.

13 What are Taguchi Methods?

Since the topic of quality engineering is of notably broad appeal, we include below a brief review of the associated rationale and methods.

The term “quality engineering” (QE) was, until recently, used mainly by Japanese quality experts. One such expert is Genichi Taguchi (1986), who reasoned that even the best available manufacturing technology was by itself no assurance that the final product would actually function in the hands of its user as desired. To achieve this, Taguchi suggested that the designer must “engineer” quality into the product, just as he/she specifies the product’s physical dimensions to make the dimensions of the final product correct.

QE requires systematic experimentation with carefully developed prototypes whose performance is tested in actual field conditions. The
object is to discover the optimum set-point values of the design parameters, to ensure that the final product would perform as expected
consistently in actual use. A product designed by QE is robust.

13.1 The Secret of Creating a Robust Design

A practice common in traditional engineering design is sensitivity analysis. For instance, in traditional electronic circuit design, as well as in the development of performance design equations, sensitivity analysis of the circuit remains a key step that the designer must complete before the job is over. Sensitivity analysis evaluates the likely changes in the device's performance, usually
due to element value tolerances or due to value changes with time and temperature.

Sensitivity analysis also determines the changes to be expected in the design’s performance due to factor variations of
uncontrollable character. If the design is found to be too sensitive, the designer projects the worst-case scenario to help plan for the unexpected. However, studies indicate that worst-case projections or conservative designs are often unnecessary and that a
“robust design” can greatly reduce off-target performance caused by poorly controlled manufacturing conditions, temperature or
humidity shifts, wider component tolerances used during fabrication, and also field abuse that might occur due to
voltage/frequency fluctuations, vibration, etc.

Robust design should not be confused with rugged or conservative design, which adds to unit cost by using heavier insulation or
high reliability, high tolerance components. As an engineering methodology robust design seeks to reduce the sensitivity of the
product/process performance to the uncontrolled factors through a careful selection of the values of the design parameters. One
straightforward way to produce robust designs is to apply the "Taguchi method."

The Taguchi method may be illustrated as follows. Suppose that a European product (chocolate bars) is to be introduced in a
tropical country where the ambient temperature rises to 45°C. If the European product formulation is directly adopted, the result

may be molten bars on store shelves in Mumbai and Singapore and gooey hands and dirty dresses, due to the high temperature
sensitivity of the European formula (Curve 1, Figure 12.1).

The behavior of the chocolate bar’s plasticity may be experimentally explored to determine its robustness to temperature, but few
product designers actually attempt this. Taguchi would suggest that we do some special “statistical experiments” in which both the bar’s formulation (the original European and perhaps an alternate prototype formulation that we would call “X”) and ambient temperature would be varied simultaneously and systematically, and the consequent plasticities observed.

Taguchi was able to show that by such experiments it is often possible to discover an alternate bar design (here an appropriate
chocolate bar formulation) that would be robust to temperature. The trick, he said, is to uncover any “exploitable” interaction
between the effect of changing the design (e.g. from the European formulation to Formulation “X”) and temperature. In the
language of statistics, two factors are said to interact when the influence of one on a response is found to depend on the setting of
the other factor (Montgomery, 1997). Figure 12.2 shows such an interaction, experimentally uncovered (the two plotted response
lines are not parallel). Thus, a “robust” chocolate bar may be created for the tropical market if the original European formulation is
changed to Formulation “X.”
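To make the idea of an interaction concrete, here is a tiny numerical sketch; the plasticity readings are hypothetical, invented only to mimic the pattern of Figure 12.2:

    # Hypothetical plasticity readings (arbitrary units) at two ambient temperatures
    plasticity = {
        ("Eu", 20): 5.00, ("Eu", 45): 9.50,   # European bar softens badly in the heat
        ("X",  20): 5.25, ("X",  45): 5.75,   # Formulation "X" changes very little
    }

    heat_effect_Eu = plasticity[("Eu", 45)] - plasticity[("Eu", 20)]   # 4.5
    heat_effect_X  = plasticity[("X", 45)] - plasticity[("X", 20)]     # 0.5

    # Non-parallel response lines correspond to a nonzero interaction:
    interaction = heat_effect_Eu - heat_effect_X                       # 4.0
    print(heat_effect_Eu, heat_effect_X, interaction)

The unequal temperature effects (4.5 versus 0.5 here) are exactly what "non-parallel response lines" mean, and such an interaction is what a robust design exploits.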

13.2 Robust Design by the "Two-step" Taguchi Method

Note that a product’s performance is “fixed” primarily by its design, i.e., by the settings selected for its various design factors.
Performance may also be affected by noise: environmental factors, unit-to-unit variation in material, workmanship, methods, etc., or aging/deterioration (Figure 12.3). The breakthrough in product design that Taguchi achieved renders performance robust even in the presence of noise, without actually controlling the noise factors themselves. Taguchi’s special “design-noise array” experiments (Figure 12.4) discover those optimum settings. Briefly, the procedure first builds a special assortment of
prototype designs (as guided by the “design array”) and then tests these prototypes for their robustness in “noisy” conditions. For
this, each prototype is “shaken” by deliberately subjecting it to different levels of noise (selected from the “noise array”, which
simulates noise variation in field conditions). Thus performance is studied systematically under noise in order to find eventually a
design that is insensitive to the influence of noise.

To guide the discovery of the “optimum” design factor settings, Taguchi suggested a two-step procedure. In Step 1, optimum settings for certain design factors (called “robustness seeking factors”) are sought so as to ensure that the response (for the bar, plasticity) becomes robust (i.e., the bar does not collapse into a blob at temperatures up to at least 50°C). In Step 2, the optimum setting of some
other design factor (called the “adjustment” factor) is sought to put the design’s average response at the desired target (e.g., for
plasticity a level that is easily chewable).
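A minimal sketch of this two-step logic, with entirely hypothetical summary numbers standing for the mean and spread of each candidate design across the noise levels, might look like this:

    # Hypothetical response summaries from a design-noise array experiment
    candidates = {
        "Eu": {"mean": 7.2, "std": 2.3},   # plasticity mean and spread over noise
        "X":  {"mean": 5.5, "std": 0.3},
    }
    target = 5.0   # desired plasticity level (hypothetical)

    # Step 1: pick the setting of the robustness-seeking factor that
    # minimizes variation across the noise levels.
    robust = min(candidates, key=lambda k: candidates[k]["std"])

    # Step 2: use an adjustment factor (one that shifts the mean without
    # inflating the spread) to move the chosen design's mean onto the target.
    required_shift = target - candidates[robust]["mean"]
    print(robust, required_shift)   # X -0.5

Taguchi's own procedure works with signal-to-noise ratios rather than raw standard deviations, but the division of labor between the two steps is the same.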

For the chocolate bar design problem, the alternative design “factors” are two candidate formulations: one the original European, and the other that we called “X”. Thus, the design array would contain two alternatives, “X” and “Eu.” “Noise” here is ambient temperature, to be experimentally varied over the range 0°C to 50°C, winter to summer, as seen in the tropics.

The experimental outcomes of the design-noise array experiments were most interesting. For instance, response value Plasticity X_50 was observed when Formulation X was tested at 50°C. Figure 12.2 is a compact illustrative display of these experimental results. It is evident from Figure 12.2 that Formulation X’s behavior is quite robust (has minimum variation) even at tropical temperatures. Therefore, the adoption of Formulation “X” would make the chocolate bar “robust”, i.e., its plasticity would not vary
much even if ambient temperature had wide swings.

Note, however, that Taguchi’s “two-step” procedure works well in about 50% of actual design situations when design factor effects are
separable, enabling identification of the robustness-seeking and the adjustment factors. Numerous other design projects such as
unbreakable plastic parts (that must retain their toughness), robust electronic devices (that must consistently deliver amplification,
display of signals, error-free data transmission features, etc. as designed), or metallurgical processes (that must withstand wide
variations in ore quality) must use more powerful modeling and optimization methods. However, the concept of robustness remains a
unique contribution by Taguchi in quality assurance.

14 The Six Sigma Principle

The Six Sigma principle is Motorola's own rendering of what is known in the quality literature as the zero defects (ZD) program. Zero
defects is a philosophical benchmark or standard of excellence in quality proposed by Philip Crosby. Crosby explained the mission and
essence of ZD by the statement "What standard would you set on how many babies nurses are allowed to drop?" ZD is aimed at
stimulating each employee to care about accuracy and completeness, to pay attention to detail, and to improve work habits. By adopting
this mind-set, everyone assumes the responsibility toward reducing his or her own errors to zero.

One might think that having the natural variability (xbar ± 3σx) equal to the tolerance (= upper spec limit - lower spec limit) would mean good quality. After all, if the distribution is normal, only 0.27% of the output would be expected to fall outside the product's specs or tolerance range. But what does this really mean? In reality, performance that stays within the natural variability (xbar ± 3σx) implies

• At least 10,000 wrong prescriptions written by doctors each year
• No electricity or water for about nine hours each year in a typical township
• 500 incorrect eye operations each year in local hospitals
• 2,000 lost mail items each month in a city

For a typical consumer perhaps this is no surprise; he/she routinely experiences even worse performance. Recently 50 newborn babies
died in a north Indian hospital over one week; this was traced to unsafe acts by the staff. This would also equal 3 unsafe aircraft landings
every week somewhere in the country. Can this be called satisfactory performance? Motorola did not think so. The company set the following
quality goal in 1987 as a principle:

Improve product and services quality ten times by 1989, and at least one hundred fold by 1991. Achieve six-sigma
capability by 1992. With a deep sense of urgency, spread dedication to quality to every facet of the corporation, and
achieve a culture of continual improvement to assure total customer satisfaction. There is only one goal: zero
defects in everything we do.

The concept of six-sigma quality is to shrink the inherent variation in a process to half of the spec range (Cp = 2.0) while allowing the mean to shift at most 1.5 sigma from the spec midpoint, the target quality. The area under the shifted curve beyond the six-sigma range (the tolerance limits) is only 0.0000034, or 3.4 parts per million. If the process mean can be controlled to within 1.5σx of the target, a maximum of 3.4 defects per million pieces produced can be expected. If the process mean is held exactly on target, only 2.0 defects per billion would be expected. Within its organization Motorola defines six sigma as a state of the production or service unit that represents "almost perfect quality with no more than 3.4 defects per million."

Motorola prescribes six steps to achieve the six sigma state, as follows.

Step 1: Identify the product you create or service you provide.
Step 2: Identify the customer(s) and determine what they consider important.
Step 3: Identify what you need in order to provide the product or service that satisfies the customer.
Step 4: Define the process for doing the work.
Step 5: Mistake-proof the process and eliminate wasted effort.
Step 6: Ensure continuous improvement by DMAIC (see below).

Motorola’s method is enshrined in an analytical framework called DMAIC: Define, Measure, Analyze, Improve and Control. The actual work of improving quality is done in the “I” (Improve) phase; the principal technique used there is design of experiments.

15 Taguchi Experiments for Quality Improvement

Recall the message given by Figures 1.1 and 1.2. Rather typically, the performance of a product is determined by a multitude of factors.
This also holds for processes that deliver those products. It is also well known that over two-thirds of all product malfunctions may be traced
to the design of the product. To the extent basic scientific and engineering principles allow the designer to guide his/her design, the
designer does his/her best to come up with selections and settings of design parameter values that would ensure good performance.
Frequently, though, not everything can be predicted by theory. Experimentation or prototyping must then be resorted to, and the design must be empirically optimized. The planning and analysis of the results of such multifactor investigations fall in the domain of statistical
design of experiments (DOE) (Montgomery 2004).

Many textbook methods for DOE are too elaborate and computationally cumbersome. This discourages many practicing engineers from
trying out this powerful methodology in real product design and process optimization work. Genichi Taguchi of Japan, an electrical
engineer, observed this and popularized a class of simpler experimental plans (called orthogonal array or OA experiments) that can still
reveal a lot about the performance of a product or process, without the burden of heavy statistical theory. We show below this with a real
design example.

A fire extinguisher is to be designed so as to effectively cover flames in a fire. The designer wishes to achieve this by either using higher
pressure inside the CO2 cylinder, or by altering the nozzle design. Theoretical models of such systems using computational fluid dynamics
(CFD) are too complex and too cumbersome to optimize. The question to be answered here is: which is more effective, higher pressure or a wider nozzle?

The Taguchi method would set up experiments guided by a matrix of factor setting combinations. In this method two settings of each of the two design factors would be needed and four experiments would be run. Table 14.1 shows the experimental plan and the results observed. The same results are shown graphically in Figure 14.1. The observed response was the spray area for each factor setting combination (one experiment). From the response plots it is clear that expanding the nozzle diameter from 5 to 10 mm did raise the area covered. However, raising the CO2 pressure from 2 to 4 bar gave a more pronounced effect: a wider spray resulted. So if only one factor could be adjusted, the designer would raise the pressure; if resources also allowed adjusting the nozzle diameter, he could use 10 mm.
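Since Table 14.1 is not reproduced here, the spray areas in the sketch below are invented; the point is only to show how such a 2 x 2 plan separates the two main effects:

    # Hypothetical runs: (CO2 pressure in bar, nozzle diameter in mm, spray area in m^2)
    runs = [
        (2,  5, 1.00),
        (2, 10, 1.25),
        (4,  5, 1.75),
        (4, 10, 2.00),
    ]

    def main_effect(runs, col, low, high):
        """Average spray area at the high level minus the average at the low level."""
        lo = [r[2] for r in runs if r[col] == low]
        hi = [r[2] for r in runs if r[col] == high]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    print("CO2 pressure effect (2 -> 4 bar):", main_effect(runs, 0, 2, 4))   # 0.75
    print("Nozzle effect (5 -> 10 mm):", main_effect(runs, 1, 5, 10))        # 0.25

With these made-up numbers the pressure effect is three times the nozzle effect, mirroring the conclusion drawn from Figure 14.1.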

This small example is entirely realistic, and this is how many designs and processes can be optimized: experimentally.

16 The Japanese keep their statistical experiments simple

Design, Performance and Factor Effects

When unpacking Japanese LCD TVs or driving off in a new Toyota, customers at far corners of the globe point at the superlative operation
of these products as the main reason for buying them. These products are competitively priced, yet, surprisingly, they also excel in their
ability to delight the customer. The strong rivalry that Honda and Toyota sedans have provided to US and European autos from the 1980s onward can be traced to two things: the superiority of the manufacturing methods the Japanese employ to produce these vehicles, and the
excellence in crafting their design.

Robust design, another Japanese innovation that uses systematically conducted experiments during R&D, constitutes a key plank in these
products reaching such high performance levels. This section of the text is a novice’s introduction to the reasons why product/process
developers should incorporate “statistical experiments” into their design development strategy. Statistical experiments are a collection
of empirical procedures that can economically and reliably uncover the key factors that influence the performance of the system or the
process being designed.

16.1 Performance Improvement by Experimentation

Industry discovered the value of statistical experiments in process improvement out of dire economic necessity. In 1953, a Japanese
ceramic tile manufacturer found that while the unevenness of the temperature at which tiles were baked strongly affected the percentage of defective tiles produced, kiln temperature was rather costly to control to within a close tolerance. A statistician and consultant, Genichi Taguchi, suggested that an alternative quality control method could be devised by studying, experimentally, the interaction of the effect of the tile’s recipe (formulation of ingredients) with baking temperature. The object would be to discover a formulation that rendered
the tiles less sensitive (or robust) to the uneven temperature distribution in the kiln. (We return to discuss this study later in this
section.)

Taguchi incidentally was also the first to provide industry the motivation for and propose a method to deliver excellence in product as
well as process design. Taguchi proposed a radically different view of how quality should be defined and perceived. He coined the term
"quality engineering," a design and development task to be completed before manufacturing begins. Quality engineering focuses directly
on achieving robustness, i.e., on the reduction of variability in product/process performance (Taguchi and Clausing 1990).

In sharp contrast to conventional product/process design methods, robust design is a fundamentally different approach to improving an
initial, functional engineering design. In conventional engineering design, sensitivity analysis is the final step that the designer completes
before handing over the design to the fabricator. Sensitivity analysis unveils the extent of changes in the product's or process's
performance due to variation that may occur in the selected values of the different design parameters. If sensitivity analysis shows, for

instance, an electronic circuit to be too sensitive to the effect of temperature or component quality, the designer may then devise suitable
safeguards.

In such cases the common practice is to design for the worst-case scenario, to help protect against the unexpected, or to improve the
design’s “ruggedness” with more expensive components and materials. By contrast, the approach of robust design aims at maintaining a
target level of performance in the face of factors outside the designer’s (or the product’s user’s) control. Rather than make the product
rugged, robust design attempts to devise inexpensive but effective countermeasures to minimize the influence of the uncontrollable factors, commonly called “noise” (Fig. 1). Robust design systematically seeks out the optimum settings for the design parameter
values in order that the final design has reduced sensitivity to noise.

Robust design is thus proactive. It optimizes the basic design to prevent a good laboratory design from delivering poor performance in
the field. Frequently, as Taguchi showed, this feat can be accomplished by conducting certain experiments with different design
prototypes and then prudently adjusting the design parameters, without increasing manufacturing costs.

The distinction between the traditional approach to product/process design and robust design may be illustrated as follows. Suppose
that the objective is to optimize the fuel economy in a motorcycle design. Suppose also that carburetor design, engine displacement, tire
pressure, octane number of fuel, and driving speed have been identified as the factors influencing fuel economy. It is obvious that only
two of these factors, namely, carburetor design and engine displacement, would be under the control of the designer. These are called
the design parameters (see Figure 2.3). The other three factors would likely remain outside the designer’s control (unless the designer
himself is riding the bike). These factors would be the noise factors.

Traditional engineering design would attempt to maximize fuel economy, given the constraints of power requirement and cost, by
manipulating only the two design parameters, leaving the noise factors alone. Taguchi, however, would require us to take an important
step further. He would suggest that the undesirable large difference (a loss) in fuel economy between highway and city driving could
possibly be reduced, without adding to the cost of the product, by studying the interaction of the effects of the two design factors and the
noise factors. We explain this procedure now by returning to ceramic tile manufacturing example we mentioned earlier.

16.2 Tile Manufacturing using Statistical Experiments

A Japanese ceramics manufacturer, the Ina Tile Company, had purchased an 80-meter-long kiln for baking tiles. The kiln, purchased from Germany, however, had a quality problem: the tiles at the center were baked at a lower temperature than the outer tiles. This resulted in nonuniform dimensions as well as other defects. Consequently, when baking was complete, 100% inspection had to be used to screen out the unacceptable tiles, a wasteful and expensive activity.

The kiln could be redesigned to give uniform dimensions, but that would be very expensive and there was no guarantee that it would correct the problem. Instead, the company was advised by Genichi Taguchi to run an “orthogonal statistical experiment” involving all seven process variables (factors) identified by the company as potentially influential. (In practice this identification may be done using the Ishikawa diagram.) Taguchi suggested that all seven factors be studied at two levels each (A1 and A2, B1 and B2, etc.), together, in experiments run according to a special scheme known as the orthogonal experimental plan (OA plan).

Table 15.1 displays the OA plan that guided these statistical experiments. Why are such experiments called “statistical experiments”? The answer is quite precise. The results of these experiments are analyzed using principles provided by statistics, the science that enables us to study the behavior of observed or measured data. Such an analysis does not merely judge in “rough terms” whether an influencing factor has an effect on the response; it does so precisely, determining how significant that effect is relative to the influence of noise: factors such as vibration, temperature fluctuations, measurement errors, etc., not controlled by the investigator.
We must point out here that for statistical principles to be applicable to the tile making experiments, the different variables had to be set
strictly according to Table 15.1, each row precisely indicating the process conditions for one “experimental run”.

Notice also that the plan shown in Table 15.1 allowed varying more than one variable at a time. Yet, and this was the key benefit of using the scheme shown in Table 15.1, the associated data analysis could precisely separate the individual effect of each of the seven
process variables. This permitted the investigators to quickly determine which of the seven process variables had significant influence
on the quality of the tiles without compelling them to conduct extensive and elaborate investigations.

The experimental scheme shown in Table 15.1 is one example of orthogonal experimental (or OA) schemes. OA schemes are often
sufficient in many process R&D investigations. Particularly complex processes involving intricate interactions among variable effects
may require the use of more powerful statistical experimental schemes, such as the “full factorial” experimental plan.

Table 15.1 Orthogonal Experimental Plan used by Ina Tile Company to reduce production of defective tiles

Process Variable →      A    B    C    D    E    F    G      Fraction of tiles found defective
Expt #1 A1 B1 C1 D1 E1 F1 G1 16/100
2 A1 B1 C1 D2 E2 F2 G2 17/100
3 A1 B2 C2 D1 E1 F2 G2 12/100
4 A1 B2 C2 D2 E2 F1 G1 6/100
5 A2 B1 C2 D1 E2 F1 G2 6/100
6 A2 B1 C2 D2 E1 F2 G1 68/100
7 A2 B2 C1 D1 E2 F2 G1 42/100
8 A2 B2 C1 D2 E1 F1 G2 26/100

The special arrangement of process variable levels as shown in Table 15.1, as you may have already noted, is not arbitrary. The
arrangement follows a particular pattern, as is characteristic of experiments conducted based on statistical principles. One key
advantage of this special arrangement is economy in the number of trials to be run for the amount of information generated per
experiment. The other advantage is the ease with which one is able to compute the individual variable effects from the observed
response data.

Such considerations are obviously important for the manager with limited resources who wishes to complete a technical investigation
quickly and inexpensively.

The object of the tile making experiments was to uncover the “main effect” of each of the seven process factors A, B, C, ..., G. The term
main effect designates the effect an individual cause factor has on the response when the factor’s setting is changed, say from A1 to A2.
For example, the main effect of process variable A is (35.50 - 12.75) = 22.75. That of variable B is -5.25. For the tile making problem, the
main effects computed are shown in Table 15.2 (see Text E for the procedure). The effects are graphically displayed in Figure 15.1.
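The level averages in Table 15.2 can be reproduced directly from the data of Table 15.1. The short script below is a sketch of that calculation (the defect fractions are simply read as percentages); it averages the four runs at each level of every factor:

    # Level (1 or 2) of each factor in the eight runs of Table 15.1
    levels = {
        "A": [1, 1, 1, 1, 2, 2, 2, 2],
        "B": [1, 1, 2, 2, 1, 1, 2, 2],
        "C": [1, 1, 2, 2, 2, 2, 1, 1],
        "D": [1, 2, 1, 2, 1, 2, 1, 2],
        "E": [1, 2, 1, 2, 2, 1, 2, 1],
        "F": [1, 2, 2, 1, 1, 2, 2, 1],
        "G": [1, 2, 2, 1, 2, 1, 1, 2],
    }
    defective_pct = [16, 17, 12, 6, 6, 68, 42, 26]   # % defective in runs 1..8

    for factor, setting in levels.items():
        avg = {lvl: sum(y for y, s in zip(defective_pct, setting) if s == lvl) / 4
               for lvl in (1, 2)}
        # e.g. factor A: level 1 averages 12.75, level 2 averages 35.50
        print(factor, avg[1], avg[2], "main effect:", avg[2] - avg[1])

Because each level of every factor appears in exactly four of the eight runs (the orthogonality of the plan), a simple average per level is all that is needed to isolate each main effect.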

In the Ina Tile Company’s experiments it turned out that one variable (Factor “G”, the lime content of the clay used for making the tiles)
when adjusted from level “G1” (1%) to level “G2” (5%) caused defect levels to be considerably reduced. This was considered by the company to be cheaper than adjusting some of the other process factors, and hence implemented. (Which other factors do you think they could
have also adjusted?)

Sometimes the effect of one variable is also dependent on the level at which the other influencing variables are set. In such cases the
factor effects are said to interact. An interaction occurs when the difference in response (here, % defective tiles produced)
between the levels of one process variable is not the same at all levels of the other variables. Interactions can make it difficult for us to
adjust the response in some desired direction, without studying the process more closely, using experimental schemes more elaborate
than the orthogonal (OA) scheme. One such scheme is the “full factorial experiment” (Montgomery and Runger 1999).

Table 15.2 and Figure 15.1 were obtained by running eight orthogonal experiments rather than a full factorial experiment (see Text C).
Consequently, it is not possible here to tell whether the effects of factors A, B, C, etc. in the tile making example interact with each other in
some manner. Thus, the above procedure could improve the process, but perhaps not optimize it. For the time being you should
remember that if you are seeking exact optimization of a process, generally speaking, you will have to conduct more elaborate statistical
experiments. There is no escape from this. However, the information you will thus produce can be of enormous value for your
company’s quest for quality and productivity!

Figure 15.1: Influence of changing process conditions observed on defective tile production

Table 15.2 Summary of Estimated Process Factor Effects in Tile making

Factor level        % Defective
A1 12.75
A2 35.50
B1 26.75
B2 21.50
C1 25.25
C2 23.00
D1 19.00
D2 29.25
E1 30.50
E2 17.75
F1 13.50
F2 34.75
G1 33.00
G2 15.25

17 Statistical Experiments are the Cornerstone of Quality Engineering!

In many real life design problems it is often not possible to develop an exact response model of the system being designed in precise
mathematical terms. In such cases the cornerstone for high performance product and process design, as illustrated by the example
above, remains systematic experimental investigation suitably combined with optimization. As discovered by the Japanese and adopted
as a key strategy in their own product/process R&D, statistically planned experiments remain the most effective method for studying the
effect of a large number of influencing factors on a complex process (Montgomery 2004). Indeed, no other method is known today that
can match statistically planned experiments in economy as well as in producing results that can be reliable and reproducible. This
methodology is applicable not only in mechanical and metallurgical processes, but also in chemical, electronic, software development,
and a variety of other processes involving multiple influencing variables. Text E provides the methodology for DOE.

18 References

1. Bernard, G A (1982). “Causation”, Encyclopedia of Statistical Sciences, Vol 1, eds. S Kotz, N Johnson and C Read, John Wiley, 387-389.
2. Dodge, H F and H G Romig (1959). Sampling Inspection Tables, Single and Double Sampling, 2nd ed., Wiley.
3. Montgomery, D C (2012). Statistical Quality Control: A Modern Introduction, 6th ed., Wiley.
4. Montgomery, D C (2004). Design and Analysis of Experiments, 5th ed., Wiley.
5. Montgomery, D C and G C Runger (1999). Applied Statistics and Probability for Engineers, 2nd ed., Wiley.
6. Taguchi, G and D Clausing (1990). “Robust Quality”, Harvard Business Review, January-February, pp. 65-75.
7. Washio, Y (1993). “Steps for Developing New Products Viewed from the Standpoint of Total Quality Control”, in Quality Engineering, W Kuo (ed.), Elsevier Science, pp. 65-70.
