
IE 361 Module 10

Introduction to Shewhart Control Charting


(Statistical Process Control, or More Helpfully: Statistical Process Monitoring)

Profs. Stephen B. Vardeman and Max D. Morris

Reading: Section 3.1, Statistical Quality Assurance Methods for Engineers


Generalities About Shewhart Control Charting

SPC (SPM) is process watching for purposes of change detection.

Figure 1: SPC (SPM) is About "Process Watching"

One famous statistician has called it "organized attention to process data."

Walter Shewhart, working at Bell Labs in the late 1920s and early 1930s, reasoned
that while some variation is inevitable in any real process, the variation seen in
data taken on a process can be decomposed as

overall observed variation = baseline variation + variation that can be eliminated

   baseline                                    that which can be eliminated
   inherent to a system configuration
   due to system/common/universal causes       due to special/assignable causes
   random                                      nonrandom
   short term                                  long term
If one accepts Shewhart’s conceptualization ... how is one to detect the presence of the second kind of variation so that appropriate steps can be taken to eliminate it? The hope is to leave behind a process that might be termed physically stable (not without variation, but consistent in its pattern of variation). The point of Shewhart control charting is to provide a detection tool.

Shewhart’s charting idea was to periodically take a sample from a process and

• compute the value of a statistic meant to summarize process behavior at the period in question

• plot it against time order of observation

• compare it to so-called control limits


For a generic statistic, Q, this looks like

Figure 2: A Generic Shewhart Control Chart

Points plotting "out of control" indicate process change, i.e., the presence of variation of the second kind, and signal the need for intervention (of some unspecified type) to find and take action on the physical source of any assignable cause.

There are many different kinds of Shewhart charts, corresponding to various choices of the plotted statistic, Q.

A basic question is how to set the control limits, UCL_Q and LCL_Q. Shewhart’s answer was essentially:

If one models process output (individual measurements from the process) under stable conditions as random draws from a fixed distribution, then probability theory can often be invoked to produce a distribution for Q and corresponding mean μ_Q and standard deviation σ_Q. For many distributions, most of the probability is within three standard deviations of the mean. So, if μ_Q and σ_Q are respectively a "stable process" mean and standard deviation for Q, common generic control limits are

$$UCL_Q = \mu_Q + 3\sigma_Q \quad \text{and} \quad LCL_Q = \mu_Q - 3\sigma_Q,$$

and it is common to draw a "center line" on a Shewhart control chart at

$$CL_Q = \mu_Q.$$
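To make the recipe concrete in code, here is a minimal sketch (not part of the module itself; shewhart_limits is just an illustrative name) of the generic three-sigma limits:

```python
def shewhart_limits(mu_Q, sigma_Q, k=3.0):
    """Generic k-sigma Shewhart control limits for a plotted statistic Q."""
    lcl = mu_Q - k * sigma_Q   # LCL_Q
    cl = mu_Q                  # center line CL_Q
    ucl = mu_Q + k * sigma_Q   # UCL_Q
    return lcl, cl, ucl
```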

To make this concrete, consider the sample mean of n individual measurements, Q = x̄. If individuals can be modeled as random draws from a process distribution with mean μ and standard deviation σ, elementary probability implies that Q = x̄ has a distribution with mean μ_Q = μ_x̄ = μ and standard deviation σ_Q = σ_x̄ = σ/√n. It follows that typical control limits for x̄ are

$$UCL_{\bar{x}} = \mu + 3\frac{\sigma}{\sqrt{n}} \quad \text{and} \quad LCL_{\bar{x}} = \mu - 3\frac{\sigma}{\sqrt{n}},$$

with a center line drawn at μ.
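Building on the illustrative sketch above, the x̄ version simply substitutes σ_x̄ = σ/√n into the generic recipe (again, xbar_limits is an assumed name used only for exposition):

```python
import math

def xbar_limits(mu, sigma, n, k=3.0):
    """k-sigma limits for the mean of n individuals drawn from a stable
    process with mean mu and standard deviation sigma."""
    sigma_xbar = sigma / math.sqrt(n)   # standard deviation of x-bar
    return mu - k * sigma_xbar, mu, mu + k * sigma_xbar   # LCL, CL, UCL
```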

This illustrates that very often process parameters appear in formulas for control
limits ... and values for them must come from somewhere. There are two
possibilities:

• Sometimes past experience with a process, engineering standards, or other considerations made prior to a particular application specify what values should be used. (This is a standards given situation.)

• In other circumstances, one has no information on a process outside a series of samples that are presented along with the question "Is it plausible that the process was physically stable over the period represented by these data?" This is sometimes called an as-past-data or retrospective scenario. Here all that one can do is

— tentatively assume that, in fact, the process was stable,

— make provisional estimates of process parameters and plug them into formulas for control limits, and

— apply those limits to the data in hand as a means of criticizing the provisional assumption of stability.

In a standards given context, with each new sample one faces the question
"Are process parameters currently at their standard values?"

In a retrospective context, one can only wait until a number of samples have been collected (often, using data from a minimum of 20-25 time periods is recommended) and then, looking back over the data, ask the question

"Are these data consistent with any fixed set of process parameters?"

The greatest importance of control charts is in standards given applications to real-time process monitoring. Textbooks, however, are full of problems involving retrospective charts. (Of course, what other kinds of data sets could be in a textbook???!!)

Example 10-1 (The "Brown Bag" Process) The population of numbers on washers used in the "Deming Drama" in IE 361 Labs is (approximately) Normal with mean μ = 5 and standard deviation σ = 1.715. If (as in the Deming Drama) one is sampling n = 5 washers per period and doing process monitoring with the hope of maintaining the "brown bag parameters," appropriate control limits for x̄ are

$$UCL_{\bar{x}} = 5 + 3\frac{1.715}{\sqrt{5}} = 7.3 \quad \text{and} \quad LCL_{\bar{x}} = 5 - 3\frac{1.715}{\sqrt{5}} = 2.7,$$

and these are to be applied in an "on-line" fashion to sample means as they are observed.
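For example, the illustrative xbar_limits helper sketched earlier reproduces these standards given limits:

```python
lcl, cl, ucl = xbar_limits(mu=5.0, sigma=1.715, n=5)
print(round(lcl, 1), cl, round(ucl, 1))   # 2.7 5.0 7.3
```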

As a first illustration of retrospective calculations for Shewhart charts, the following table shows sample means and ranges for 15 samples collected in a Deming Drama. The first 12 of those samples were drawn from the brown bag and the last 3 from a much different population.
Sample    1     2     3     4     5     6     7     8     9
x̄        5.4   6.2   6.0   5.6   4.4   3.4   4.0   4.8   4.4
R         3     5     5     6     6     5     2     2     3

Sample   10    11    12   ||   13    14    15
x̄       6.2   5.6   4.8   ||   8.0   9.6   12.8
R         2     6     4    ||   10     9     5

Note, by the way, that all 12 of the samples actually drawn from the brown bag have sample means inside the control limits for x̄. Only after the process change between samples 12 and 13 (indicated by the vertical double line in the table) do the x̄'s begin to plot outside the standards given control limits 2.7 and 7.3.
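As a quick check (a throwaway sketch; the numbers are simply copied from the table above), one can flag which of the tabled x̄ values fall outside the standards given limits:

```python
xbars = [5.4, 6.2, 6.0, 5.6, 4.4, 3.4, 4.0, 4.8, 4.4,
         6.2, 5.6, 4.8, 8.0, 9.6, 12.8]

LCL, UCL = 2.7, 7.3
out = [i + 1 for i, xbar in enumerate(xbars) if not (LCL <= xbar <= UCL)]
print(out)   # [13, 14, 15] -- only the last three samples plot out of control
```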

But what could one do in the way of control charting if the brown bag parameters were not provided? Is it possible to look at the 15 sample means and ranges in the table above and conclude that there was clearly some kind of process change over the period of data collection? One might reason (retrospectively) as follows.

IF in fact there had been no process change, all 15 of the sample means x̄ would be estimates of μ (that we know to be 5 for the first 12 samples). So a reasonable empirical replacement for μ in control limit formulas might be

$$\bar{\bar{x}} = \frac{5.4 + 6.2 + \cdots + 12.8}{15} = 6.08.$$
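Reusing the xbars list from the earlier sketch, this grand average is reproduced by:

```python
xbarbar = sum(xbars) / len(xbars)   # average of the 15 sample means
print(round(xbarbar, 2))            # 6.08
```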
Further (in a manner similar to what we did in the range-based estimation of σ_repeatability in Gauge R&R analyses), one might invent an estimate of σ from each of the 15 ranges in the table, as σ̂ = R/d_2, where d_2 is based on the sample size of n = 5. Supposing again that there had been no process change in the data collection, these might be averaged across the 15 samples to produce

$$\frac{\bar{R}}{d_2} = \frac{(3 + 5 + \cdots + 9 + 5)/15}{2.326} = 2.092$$
as a potential empirical replacement for σ in control limit formulas. That is, as-past-data control limits for x̄ in the Deming Drama might have been

$$CL_{\bar{x}} = \hat{\mu} = 6.08,$$

$$UCL_{\bar{x}} = \hat{\mu} + 3\frac{\hat{\sigma}}{\sqrt{n}} = 6.08 + 3\frac{2.092}{\sqrt{5}} = 8.9,$$

and

$$LCL_{\bar{x}} = 6.08 - 3\frac{2.092}{\sqrt{5}} = 3.3.$$
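Continuing the same sketch (xbars and xbarbar come from the snippets above, and d_2 = 2.326 for n = 5 as on the slide), the retrospective limits and the samples they flag are:

```python
ranges = [3, 5, 5, 6, 6, 5, 2, 2, 3, 2, 6, 4, 10, 9, 5]
d2 = 2.326                                    # control chart constant for n = 5

sigma_hat = (sum(ranges) / len(ranges)) / d2  # R-bar / d2, about 2.092
lcl_r, cl_r, ucl_r = xbar_limits(mu=xbarbar, sigma=sigma_hat, n=5)
print(round(lcl_r, 1), round(ucl_r, 1))       # 3.3 8.9

flagged = [i + 1 for i, xbar in enumerate(xbars) if not (lcl_r <= xbar <= ucl_r)]
print(flagged)                                # [14, 15]
```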

If these limits are applied to the 15 sample means in the table, we see that this method of retrospective calculation does detect the fact that there was some kind of process change in the period of data collection, even though 3 samples from a process different from the brown bag process "contaminate" our estimates of the brown bag parameters (and even though the change between samples 12 and 13 is not caught until sample 14).

A matter that needs some general introductory discussion is that of rational subgrouping or rational sampling. It is essential that anything one calls a single "sample" be collected over a short enough period that there is little question that the process was "physically stable" during that period. It must be clear that a "random draws from a fixed distribution" model is appropriate for describing data in a given sample. This is because the variation within such a sample essentially specifies the level of "background noise" against which one looks for process changes. If what one calls "samples" often contain data from genuinely different process conditions, the level of background noise will be so large as to mask important process changes.

In high-volume manufacturing applications of control charts, single samples (rational subgroups) typically consist of n consecutive items taken from a production line. On the other hand, in extremely low-volume (or batch) operations, where one unit of product might take many hours to produce and there is significant opportunity for real process change between consecutive units, the only natural samples may be of size n = 1.

An unfortunate and persistent type of confusion concerns the much different concepts of control limits and engineering specifications.

   Control Limits                          Specifications
   have to do with process stability       have to do with product acceptability
   apply to Q                              apply to individuals, x
   usually derived from process data       derived from performance requirements
A real process can be stable without being acceptable and vice versa. There is
not a direct link between being "in control" and producing acceptable results.

Example 10-1 continued The Deming Drama as typically run in an IE 361 Lab uses the understanding that washers from L = 3 to U = 7 are "in specifications"/functional, while those outside those limits are not. This has nothing whatsoever to do with either the truth about the brown bag process or any data collected in the Drama. In fact, using the process parameters one might compute

$$z_1 = \frac{U - \mu}{\sigma} = \frac{7 - 5}{1.715} = 1.17$$

and

$$z_2 = \frac{L - \mu}{\sigma} = \frac{3 - 5}{1.715} = -1.17$$
and note that the fact that

P[−1.17 ≤ a standard normal random variable ≤ 1.17] ≈ 0.76

suggests that only about 76% of the brown bag process output actually meets those specifications/requirements. (We are here ignoring the fact that the basic discreteness of what is drawn from the brown bag serves to make this estimate lower than the truth.) And, of course, lacking known values for μ and σ, one could use past data to estimate μ and σ and then estimate this kind of fraction conforming for the brown bag process.
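As a numerical check (a small sketch using only Python's standard library, and ignoring the discreteness caveat just mentioned), the ≈ 76% figure comes directly from the standard normal cdf:

```python
from statistics import NormalDist

mu, sigma = 5.0, 1.715
L, U = 3.0, 7.0

z1 = (U - mu) / sigma              # about 1.17
z2 = (L - mu) / sigma              # about -1.17
std_normal = NormalDist()          # standard normal distribution
frac_conforming = std_normal.cdf(z1) - std_normal.cdf(z2)
print(round(frac_conforming, 2))   # 0.76
```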

Control charting can signal the need for process intervention and can keep one from ill-advised and detrimental over-adjustment of a process that is behaving in a stable fashion. But in doing so, what is achieved is simply reducing variation to the minimum possible for a given system configuration (in terms of equipment, methods of operation, methods of measurement, etc.). That achievement of process stability provides a solid background against which to evaluate possible innovations and fundamental/order-of-magnitude improvements in production methods.
