A Computer Virus
program virus:=
{1234567;
subroutine infect-executable:=
{loop:file = get-random-executable-file;
if first-line-of-file = 1234567 then goto loop;
prepend virus to file;
}
subroutine do-damage:=
{whatever damage is to be done}
subroutine trigger-pulled:=
{return true if some condition holds}
main-program:=
{infect-executable;
if trigger-pulled then do-damage;
goto next;}
next:}
A Simple Virus "V"
This example virus (V) searches for an uninfected executable file (E) by looking for
executable files without the "1234567" in the beginning, and prepends V to E, turning
it into an infected file (I). V then checks whether some triggering condition is true
and, if so, does damage. Finally, V executes the rest of the program it was prepended to.
When the user attempts to execute E, I is executed in its place; it infects another file
and then executes as if it were E. With the exception of a slight delay for infection, I
appears to be E until the triggering condition causes damage.
A common misconception of a virus relates it to programs that simply propagate
through networks. The worm program, 'core wars', and other similar programs have
done this, but none of them actually involve infection. The key property of a virus is
its ability to infect other programs, thus reaching the transitive closure of sharing
between users. As an example, if V infected one of user A's executables (E), and user
B then ran E, V could spread to user B's files as well.
It should be pointed out that a virus need not be used for evil purposes or be a Trojan
horse. As an example, a compression virus could be written to find uninfected
executables, compress them with the user's permission, and prepend itself to them.
Upon execution, the infected program decompresses itself and executes normally.
Since it always asks permission before performing services, it is not a Trojan horse,
but since it has the infection property, it is still a virus. Studies indicate that such a
virus could save over 50% of the space taken up by executable files in an average
system. The performance of infected programs would decrease slightly as they are
decompressed, and thus the compression virus implements a particular time-space
tradeoff. A sample compression virus could be written as follows:
program compression-virus:=
{01234567;
subroutine infect-executable:=
{loop:file = get-random-executable-file;
if first-line-of-file = 01234567 then goto loop;
compress file;
prepend compression-virus to file;
}
main-program:=
{if ask-permission then infect-executable;
uncompress the-rest-of-this-file into tmpfile;
run tmpfile;}
}
A Compression Virus "C"
This program (C) finds an uninfected executable (E), compresses it, and prepends C to
form an infected executable (I). It then uncompresses the rest of itself into a
temporary file and executes normally. When I is run, it will seek out and compress
another executable before decompressing E into a temporary file and executing it. The
effect is to spread through the system compressing executable files, and decompress
them as they are to be executed. Users will experience significant delays as their
executables are decompressed before being run.
As a simple denial-of-services example, V's trigger-pulled subroutine can be replaced so that the damage is done once a given date has passed:
subroutine trigger-pulled:=
{if year>1984 then return true otherwise return false;}
...
A Denial of Services Virus
Given a general purpose system in which users are capable of using information in
their possession as they wish and passing such information as they see fit to others, it
should be clear that the ability to share information is transitive. That is, if there is a
path from user A to user B, and there is a path from user B to user C, then there is a
path from user A to user C with the witting or unwitting cooperation of user B.
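The transitivity claim can be made concrete as simple graph reachability. The following sketch (in Python, with an invented sharing graph; it is not part of the original work) computes every user that information, and hence a virus, can reach from a given source.

# Minimal sketch: transitive closure of information sharing.
# The graph and user names are invented; "shares_with" maps each
# user to the users who directly run or read that user's programs.

def reachable(shares_with, source):
    """Return every user information can reach from 'source'."""
    seen = {source}
    frontier = [source]
    while frontier:
        user = frontier.pop()
        for neighbor in shares_with.get(user, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen

if __name__ == "__main__":
    # A shares with B and B shares with C, so a virus planted by A
    # can reach C even though A never shares with C directly.
    print(reachable({"A": ["B"], "B": ["C"], "C": []}, "A"))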
Partition Models
Two limits on the paths of information flow can be distinguished, those that partition
users into closed proper subsets under transitivity, and those that don't. Flow
restrictions that result in closed subsets can be viewed as partitions of a system into
isolated subsystems. These limit each infection to one partition. This is a viable
means of preventing complete viral takeover at the expense of limited isolationism,
and is equivalent to giving each partition its own computer.
The integrity model [Biba77] is an example of a policy that can be used to partition
systems into closed subsets under transitivity. In the Biba model, an integrity level is
associated with all information. The strict integrity properties are the dual of the Bell-
LaPadula properties; no user at a given integrity level can read an object of lower
integrity or write an object of higher integrity. In Biba's original model, a distinction
was made between read and execute access, but this cannot be enforced without
restricting the generality of information interpretation since a high integrity program
can write a low integrity object, make low integrity copies of itself, and then read low
integrity input and produce low integrity output.
If the integrity model and the Bell-LaPadula model coexist, a form of limited
isolationism results which divides the space into closed subsets under transitivity. If
the same divisions are used for both mechanisms (higher integrity corresponds to
higher security), isolationism results since information moving up security levels also
moves up integrity levels, and this is not permitted. When the Biba model has
boundaries within the Bell-LaPadula boundaries, infection can only spread from the
higher integrity levels to lower ones within a given security level. Finally, when the
Bell-LaPadula boundaries are within the Biba boundaries, infection can only spread
from lower security levels to higher security levels within a given integrity level.
There are actually 9 cases corresponding to all pairings of lower boundaries with
upper boundaries, but the three shown graphically below are sufficient for
understanding.
Same Divisions Biba within B-L B-L within Biba
-------------- --------------- ---------------
Biba B-L Result Biba B-L Result Biba B-L Result
---- ---- ---- ---- ---- ---- ---- ---- ----
|\\| |//| |XX| |\\| |//| |XX| |\\| |//| |XX|
|\\| |//| |XX| |\\| | | |\\| | | |//| |//|
| | + | | = | | | | + | | = | | | | + | | = | |
|//| |\\| |XX| |//| | | |//| | | |\\| |\\|
|//| |\\| |XX| |//| |\\| |XX| |//| |\\| |XX|
---- ---- ---- ---- ---- ---- ---- ---- ----
\\ = can't write // = can't read XX = no access \ + / = X
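A small model may make the combined rules concrete. The sketch below (in Python, with invented integer levels; it is not from the paper) checks a read or write against both the Bell-LaPadula and strict integrity conditions; when both models use the same divisions, no information can move between distinct levels in either direction.

# Sketch of combined Bell-LaPadula (secrecy) and strict Biba
# (integrity) checks. Levels are integers; higher means more secret
# or higher integrity. All values below are invented examples.

def can_read(subj_sec, subj_int, obj_sec, obj_int):
    # BLP: no read up (secrecy); Biba: no read down (integrity).
    return subj_sec >= obj_sec and subj_int <= obj_int

def can_write(subj_sec, subj_int, obj_sec, obj_int):
    # BLP: no write down (secrecy); Biba: no write up (integrity).
    return subj_sec <= obj_sec and subj_int >= obj_int

if __name__ == "__main__":
    # Same divisions for both models: a level-1 subject and a level-2
    # object can exchange no information in either direction.
    print(can_read(1, 1, 2, 2), can_write(1, 1, 2, 2))   # False False
    print(can_read(2, 2, 1, 1), can_write(2, 2, 1, 1))   # False False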
Biba's work also included two other integrity policies, the 'low water mark' policy
which makes output the lowest integrity of any input, and the 'ring' policy in which
users cannot invoke everything they can read. The former policy tends to move all
information towards lower integrity levels, while the latter attempts to make a
distinction that cannot be made with generalized information interpretation.
Just as systems based on the Bell-LaPadula model tend to cause all information to
move towards higher levels of security by always increasing the level to meet the
highest level user, the Biba model tends to move all information towards lower
integrity levels by always reducing the integrity of results to that of the lowest
incoming integrity. We also know that a precise system for integrity is NP-complete
(just as its dual is NP-complete).
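The low-water-mark drift can be illustrated with a minimal sketch (invented integrity values, not from the paper): every result takes the minimum integrity of its inputs, so repeated computation can only push integrity downward.

# Sketch of Biba's 'low water mark' rule: a result's integrity is the
# lowest integrity of anything that went into it. Higher numbers mean
# more trustworthy; the values are invented.

def output_integrity(*input_integrities):
    return min(input_integrities)

if __name__ == "__main__":
    report = output_integrity(3, 3)      # two high-integrity inputs
    mixed = output_integrity(report, 1)  # one low-integrity input added
    print(report, mixed)                 # 3 1 -- integrity only falls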
The most trusted programmer is (by definition) the programmer that can write
programs executable by the most users. In order to maintain the Bell-LaPadula policy,
high level users cannot write programs used by lower level users. This means that the
most trusted programmers must be those at the lowest security level. This seems
contradictory. When we mix the Biba and Bell-LaPadula models, we find that the
resulting isolationism secures us from viruses, but doesn't permit any user to write
programs that can be used throughout the system. Somehow, just as we allow
encryption or declassification of data to move it from higher security levels to lower
ones, we should be able to use program testing and verification to move information
from lower integrity levels to higher ones.
Another policy commonly used to partition systems into closed subsets is the
category policy used in typical military applications. This policy partitions users into
categories, with each user only able to access information required for their duties. If
every user in a strict category system has access to only one category at a time, the
system is secure from viral attack across category boundaries because they are
isolated. Unfortunately, in current systems, users may have simultaneous access to
multiple categories. In this case, infection can spread across category boundaries to
the transitive closure of information flow.
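The effect of simultaneous access can be sketched as follows (user and category names are invented): categories linked by any user who holds more than one of them collapse into a single region that one infection can cover.

# Sketch: categories that share a user with simultaneous access form a
# single region reachable by one infection. Names are invented.

def infectable_regions(user_categories):
    """Merge categories connected by users who hold more than one."""
    regions = []  # disjoint sets of mutually reachable categories
    for cats in user_categories.values():
        cats = set(cats)
        merged, rest = set(cats), []
        for region in regions:
            if region & cats:
                merged |= region
            else:
                rest.append(region)
        regions = rest + [merged]
    return regions

if __name__ == "__main__":
    # One category per user keeps the regions isolated; giving 'carol'
    # two categories joins them into a single infectable region.
    print(infectable_regions({"alice": ["crypto"], "bob": ["nuclear"]}))
    print(infectable_regions({"alice": ["crypto"], "bob": ["nuclear"],
                              "carol": ["crypto", "nuclear"]}))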
Flow Models
In policies that don't partition systems into closed proper subsets under transitivity,
it is possible to limit the extent over which a virus can spread. The 'flow distance'
policy implements a distance metric by keeping track of the distance (number of
sharings) over which data has flowed. The rules are: the distance of output
information is the maximum of the distances of input information, and the distance
of shared information is one more than the distance of the same information before
sharing. Protection is provided by enforcing a threshold above which information
becomes unusable. Thus a file with distance 8 shared into a process with distance 2
increases the process to distance 9, and any further output will be at least that
distance.
A B C D E
+-+ +-+ +-+ +-+ +-+
|X|---|1|---|0|---|1|---|X|
+-+ +-+ +-+ +-+ +-+
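A minimal sketch of the flow-distance rules follows; the threshold of 7 is an arbitrary assumption (the paper fixes no value), and only the two rules and the threshold test come from the text above.

# Sketch of the 'flow distance' policy: combining inputs takes the
# maximum distance, each sharing adds one, and information beyond the
# threshold is unusable. The threshold value is an assumption.

THRESHOLD = 7

def share(distance):
    """Distance of information after one more sharing."""
    return distance + 1

def combine(*distances):
    """Distance of output derived from several inputs."""
    return max(distances)

def usable(distance):
    return distance <= THRESHOLD

if __name__ == "__main__":
    # A file at distance 8 shared into a process holding distance-2
    # data drives all further output to at least distance 9.
    process = combine(2, share(8))
    print(process, usable(process))  # 9 False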
The following figure shows an example of a flow list system implementing different
restrictions (indicated by A and B) for different users (at row,col 1,3 and 2,5). Notice
that although information is allowed to get to 1,5, it can't actually get there because
there is no path from its source at 1,3. As in the distance metric system, transitivity of
information flow does not hold, so that even if information indicated by B were able
to reach 2,3, it could not transit any further.
Rules:
F(output)=Union(F(inputs))
Information is accessible iff B(F)=1
1 2 3 4 5 6
+-+ +-+ +-+ +-+ +-+ +-+
1 |A|---|A|---|A|---| |---|A|---|B|
+-+ +-+ +-+ +-+ +-+ +-+
| | | | | |
+-+ +-+ +-+ +-+ +-+ +-+
2 | |---| |---|A|---| |---|B|---|B|
+-+ +-+ +-+ +-+ +-+ +-+
| | | | | |
+-+ +-+ +-+ +-+ +-+ +-+
3 |B|---|B|---|B|---|B|---|B|---|B|
+-+ +-+ +-+ +-+ +-+ +-+
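The flow-list rules can be sketched in the same way; the example predicate B (rejecting anything that the invented user 'mallory' has affected) is purely illustrative.

# Sketch of a flow-list system: each datum carries the set of users
# whose information has flowed into it, and a per-user predicate B
# over that set decides accessibility. The example predicate and user
# names are invented.

def output_flow(*input_flows):
    """F(output) = union of F(inputs)."""
    result = set()
    for flow in input_flows:
        result |= flow
    return result

def accessible(flow, predicate):
    """Information is usable by a user iff B(F) = 1."""
    return predicate(flow)

if __name__ == "__main__":
    clean = {"alice"}
    tainted = output_flow(clean, {"mallory"})
    no_mallory = lambda flow: "mallory" not in flow
    print(accessible(clean, no_mallory))    # True
    print(accessible(tainted, no_mallory))  # False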
In a system with unlimited information paths, limited transitivity may have an effect if
users don't use all available paths, but since there is always a direct path between any
two users, there is always the possibility of infection. As an example, in a system with
transitivity limited to a distance of 1 it is 'safe' to share information with any user you
'trust' without having to worry about whether that user has incorrectly trusted another
user.
Limited Interpretation
With limits on the generality of interpretation less restrictive than fixed first order
interpretation, the ability to infect is an open question because infection depends on
the functions permitted. Certain functions are required for infection; the ability to
write is one, but it cannot simply be removed since any useful program must have output. It is possible to design a
set of operations that don't allow infection in even the most general case of sharing
and transitivity, but it is not known whether any such set includes non fixed first
order functions.
As an example, a system with only the function 'display-file' can only display the
contents of a file to a user, and cannot possibly modify any file. In fixed database or
mail systems, this may have practical applications, but certainly not in a development
environment. In many cases, computer mail is a sufficient means of communications,
and so long as the computer mail system is partitioned from other applications so that
no information can flow between them except in the covert channel through the user,
this may be used to prevent infection.
Although no fixed interpretation scheme can itself be infected, a high order fixed
interpretation scheme can be used to infect programs written to be interpreted by it.
As an example, the microcode of a computer may be fixed, but code in the machine
language it interprets can still be infected. LISP, APL, and Basic are all examples of
fixed interpretation schemes that can interpret information in general ways. Since their
ability to interpret is general, it is possible to write a program in any of these
languages that infects programs in any or all of these languages.
In limited interpretation systems, infections cannot spread any further than in general
interpretation systems, because every function available in a limited system can also be
performed in a general system. The previous results therefore provide upper
bounds on the spread of a virus in systems with limited interpretation.
Precision Problems
Although isolationism and limited transitivity offer solutions to the infection
problem, they are not ideal in the sense that widespread sharing is generally
considered a valuable tool in computing. Of these policies, only isolationism can be
precisely implemented in practice because tracing exact information flow requires
NP-complete time, and maintaining markings requires large amounts of
space [Denning82]. This leaves us with imprecise techniques. The problem with
imprecise techniques is that they tend to move systems towards isolationism. This is
because they use conservative estimates of effects in order to prevent potential
damage. The philosophy behind this is that it is better to be safe than sorry.
The problem is that when information has been unjustly deemed unreadable by a
given user, the system becomes less usable for that user. This is a form of denial of
services in that access to information that should be accessible is denied. Such a
system always tends to make itself less and less usable for sharing until it either
becomes completely isolationist or reaches a stability point where all estimates are
precise. If such a stability point existed, we would have a precise system for that
stability point. Since we know that any precise stability point besides isolationism
requires the solution to an NP-complete problem, we know that any non NP-complete
solution must tend towards isolationism.
A virus can also evolve as it spreads, inserting random statements between its own statements as it copies itself, so that no two copies need look alike:
program evolutionary-virus:=
{...
subroutine copy-virus-with-random-insertions:=
{loop: copy evolutionary-virus to virus till semicolon-found;
if random-bit = 1 then print-random-statement;
if ~end-of-input-file goto loop;
}
main-program:=
{copy-virus-with-random-insertions;
infect-executable;
if trigger-pulled do-damage;
goto next;}
next:}
An Evolutionary Virus "EV"
In general, proof of the equivalence of two evolutions of a program 'P' ('P1' and 'P2')
is undecidable because any decision procedure 'D' capable of finding their equivalence
could be invoked by P1 and P2. If found equivalent they perform different operations,
and if found different they act the same, and are thus equivalent. This is exemplified
by the following modification to program EV in which the decision procedure D
returns "true" iff two input programs are equivalent.
program undecidable-evolutionary-virus:=
{...
subroutine copy-with-undecidable-assertion:=
{copy undecidable-evolutionary-virus to file till line-starts-with-zzz;
if file = P1 then print "if D(P1,P2) then print 1;";
if file = P2 then print "if D(P1,P2) then print 0;";
copy undecidable-evolutionary-virus to file till end-of-input-file;
}
main-program:=
{if random-bit = 0 then file = P1 otherwise file = P2;
copy-with-undecidable-assertion;
zzz:
infect-executable;
if trigger-pulled do-damage;
goto next;}
next:}
Undecidable Equivalence of Evolutions of a Virus "UEV"
The program UEV evolves into one of two types of programs, P1 or P2. If the program
type is P1, the statement labeled "zzz" will become:
if D(P1,P2) then print 1;
while if the program type is P2, the statement labeled "zzz" will become:
if D(P1,P2) then print 0;
The two evolutions each call decision procedure D to decide whether they are
equivalent. If D indicates that they are equivalent, then P1 will print a 1 while P2 will
print a 0, and D will be contradicted. If D indicates that they are different, neither
prints anything. Since they are otherwise equal, D is again contradicted. Therefore,
the hypothetical decision procedure D is self contradictory, and the precise
determination of the equivalence of these two programs by their appearance is
undecidable.
Since both P1 and P2 are evolutions of the same program, the equivalence of
evolutions of a program is undecidable, and since they are both viruses, the
equivalence of evolutions of a virus is undecidable. Program UEV also demonstrates
that two unequivalent evolutions can both be viruses. The evolutions are equivalent in
terms of their viral effects, but may have slightly different side effects.
This result leads to the idea that a balance of coexistent viruses and defenses could
exist, such that a given virus could only do damage to a given portion of the system,
while a given protection scheme could only protect against a given set of viruses. If
each user and attacker used identical defenses and viruses, there could be an ultimate
virus or defense. It makes sense from both the attacker's point of view and the
defender's point of view to have a set of (perhaps incompatible) viruses and defenses.
In the case where viruses and protection schemes didn't evolve, this would likely lead
to some set of fixed survivors, but since programs can be written to evolve, a
program that evolved into a difficult-to-attack form would be more likely to survive, as
would a virus that was more difficult to detect. As evolution takes place, balances tend
to change, with the eventual result being unclear in all but the simplest circumstances.
This has very strong analogies to biological theories of evolution [Dawkins78], and
might relate well to genetic theories of diseases. Similarly, the spread of viruses
through systems might well be analyzed by using mathematical models used in the
study of infectious diseases [Baily57].
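As one hedged illustration of that analogy, a discrete-time susceptible-infected model of the kind used in epidemiology can be sketched as follows; the population size and contact rate are invented and carry no experimental meaning.

# Sketch of a discrete-time susceptible-infected (SI) model of viral
# spread, in the spirit of the epidemiological models cited above.
# The parameters are invented for illustration only.

def si_model(population, initially_infected, contact_rate, steps):
    infected = initially_infected
    history = [infected]
    for _ in range(steps):
        susceptible = population - infected
        new_cases = contact_rate * infected * susceptible / population
        infected = min(population, infected + new_cases)
        history.append(infected)
    return history

if __name__ == "__main__":
    # Spread starts slowly, accelerates, then saturates as few
    # susceptible machines remain.
    for step, count in enumerate(si_model(1000, 1, 0.8, 20)):
        print(step, round(count, 1))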
Since we cannot precisely detect a virus, we are left with the problem of defining
potentially illegitimate use in a decidable and easily computable way. We might be
willing to detect many programs that are not viruses and even not detect some viruses
in order to detect a large number of viruses. If an event is relatively rare in 'normal'
use, it has high information content when it occurs, and we can define a threshold at
which reporting is done. If sufficient instrumentation is available, flow lists can be
kept which track all users who have affected any given file. Users who appear in many
incoming flow lists could be considered suspicious. The rate at which users enter
incoming flow lists might also be a good indicator of a virus.
This type of measure could be of value if the services used by viruses are rarely used
by other programs, but presents several problems. If the threshold is known to the
attacker, the virus can be made to work within it. An intelligent thresholding scheme
could adapt so the threshold could not be easily determined by the attacker. Although
this 'game' can clearly be played back and forth, the frequency of dangerous requests
might be kept low enough to slow the undetected virus without interfering
significantly with legitimate use.
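One hedged sketch of such instrumentation follows; the suspicion threshold of three files is an arbitrary assumption, as are the file and user names. Each file carries the list of users who have affected it, and users appearing in unusually many such lists are flagged.

# Sketch of the flow-list detection heuristic described above. The
# threshold and all names are invented.

from collections import Counter

SUSPICION_THRESHOLD = 3

def suspicious_users(flow_lists):
    """flow_lists maps file name -> set of users who affected it."""
    counts = Counter()
    for users in flow_lists.values():
        counts.update(users)
    return [user for user, n in counts.items() if n >= SUSPICION_THRESHOLD]

if __name__ == "__main__":
    flow_lists = {
        "ls":   {"root", "eve"},
        "mail": {"root", "eve"},
        "vd":   {"eve"},
        "cc":   {"alice"},
    }
    print(suspicious_users(flow_lists))  # ['eve']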
Several systems were examined for their abilities to detect viral attacks. Surprisingly,
none of these systems even include traces of the owner of a program run by other
users. Marking of this sort must almost certainly be used if even the simplest of viral
attacks are to be detected.
Once a virus is implanted, it may not be easy to fully remove. If the system is kept
running during removal, a disinfected program could be reinfected. This presents the
potential for infinite tail chasing. Without some denial of services, removal is likely to
be impossible unless the program performing removal is faster at spreading than the
virus being removed. Even in cases where the removal is slower than the virus, it may
be possible to allow most activities to continue during removal without having the
removal process be very fast. For example, one could isolate a user or subset of users
and cure them without denying services to other users.
The initial infection was implanted in 'vd', a program that displays Unix file structures
graphically, and introduced to users via the system bulletin board. Since vd was a new
program on the system, no performance characteristics or other details of its operation
were known. The virus was implanted at the beginning of the program so that it was
performed before any other processing.
In order to keep the attack under control several precautions were taken. All infections
were performed manually by the attacker, and no damage was done, only reporting.
Traces were included to assure that the virus would not spread without detection,
access controls were used for the infection process, and the code required for the
attack was kept in segments, each encrypted and protected to prevent illicit use.
In each of five attacks, all system rights were granted to the attacker in under an hour.
The shortest time was under 5 minutes, and the average under 30 minutes. Even those
who knew the attack was taking place were infected. In each case, files were
'disinfected' after experimentation to assure that no user's privacy would be violated. It
was expected that the attack would be successful, but the very short takeover times
were quite surprising. In addition, the virus was fast enough (under 1/2 second) that
the delay to infected programs went unnoticed.
Once the results of the experiments were announced, administrators decided that no
further computer security experiments would be permitted on their system. This ban
included the planned addition of traces which could track potential viruses and
password augmentation experiments which could potentially have improved security
to a great extent. This apparent fear reaction is typical: rather than trying to solve
technical problems technically, policy solutions are often chosen.
After successful experiments had been performed on a Unix system, it was quite
apparent that the same techniques would work on many other systems. In particular,
experiments were planned for a Tops-20 system, a VMS system, a VM/370 system,
and a network containing several of these systems. In the process of negotiating with
administrators, feasibility was demonstrated by developing and testing prototypes.
Prototype attacks were developed for the Tops-20 system by an experienced Tops-20
user in 6 hours, for the VM/370 system by a novice user with the help of an experienced programmer in
30 hours, and for the VMS system by a novice user without assistance in 20 hours. These programs
demonstrated the ability to find files to be infected, infect them, and cross user
boundaries.
After several months of negotiation and administrative changes, it was decided that
the experiments would not be permitted. The security officer at the facility was in
constant opposition to security experiments, and would not even read any proposals.
This is particularly interesting given that systems programmers and security officers
were offered the opportunity to observe and oversee all aspects of all
experiments. In addition, systems administrators were unwilling to allow sanitized
versions of log tapes to be used to perform offline analysis of the potential threat of
viruses, and were unwilling to have additional traces added to their systems by their
programmers to help detect viral attacks. Although there is no apparent threat posed
by these activities, and they require little time, money, and effort, administrators were
unwilling to allow investigations. It appears that their reaction was the same as the
fear reaction of the Unix administrators.
A Bell-LaPadula Based System
In March of 1984, negotiations began over the performance of experiments on a
Bell-LaPadula [Bell73] based system implemented on a Univac 1108. The experiment
was agreed upon in principle in a matter of hours, but took several months to
become solidified. In July of 1984, a two week period was arranged for
experimentation. The purpose of this experiment was merely to demonstrate the
feasibility of a virus on a Bell-LaPadula based system by implementing a prototype.
Because of the extremely limited time allowed for development (26 hours of computer
usage by a user who had never used an 1108, with the assistance of a programmer
who hadn't used an 1108 in 5 years), many issues were ignored in the implementation.
In particular, performance and generality of the attack were completely ignored. As a
result, each infection took about 20 seconds, even though they could easily have been
done in under a second. Traces of the virus were left on the system although they
could have been eliminated to a large degree with little effort. Rather than infecting
many files at once, only one file at a time was infected. This allowed the progress of a
virus to be demonstrated very clearly without involving a large number of users or
programs. As a security precaution, the system was used in a dedicated mode with
only a system disk, one terminal, one printer, and accounts dedicated to the
experiment.
After 18 hours of connect time, the 1108 virus performed its first infection. The host
provided a fairly complete set of user manuals, use of the system, and the assistance
of a competent past user of the system. After 26 hours of use, the virus was
demonstrated to a group of about 10 people including administrators, programmers,
and security officers. The virus demonstrated the ability to cross user boundaries and
move from a given security level to a higher security level. Again it should be
emphasized that no system bugs were involved in this activity, but rather that the Bell-
LaPadula model allows this sort of activity to legitimately take place.
All in all, the attack was not difficult to perform. The code for the virus consisted of 5
lines of assembly code, about 200 lines of Fortran code, and about 50 lines of
command files. It is estimated that a competent systems programmer could write a
much better virus for this system in under 2 weeks. In addition, once the nature of a
viral attack is understood, developing a specific attack is not difficult. Each of the
programmers present was convinced that they could have built a better virus in the
same amount of time. (This is believable since this attacker had no previous 1108
experience.)
Instrumentation
In early August of 1984, permission was granted to instrument a VAX Unix system to
measure sharing and analyze viral spreading. Data at this time is quite limited, but
several trends have appeared. The degree of sharing appears to vary greatly
between systems, and many systems may have to be instrumented before these
deviations are well understood. A small number of users appear to account for the
vast majority of sharing, and a virus could be greatly slowed by protecting them. The
protection of a few 'social' individuals might also slow biological diseases. The
instrumentation was conservative in the sense that infection could happen without
the instrumentation picking it up, so estimated attack times are unrealistically slow.
Users who were not shared with are ignored in these calculations, but other
experiments indicate that any user can get shared with by offering a program on the
system bulletin board. Detailed analysis demonstrated that systems administrators
tend to try these programs as soon as they are announced. This allows normal users to
infect system files within minutes. Administrators used their accounts for running
other users' programs and storing commonly executed system files, and several
normal users owned very commonly used files. These conditions make viral attack
very quick. The use of separate accounts for systems administrators during normal use
was immediately suggested, and the systematic movement (after verification) of
commonly used programs into the system domain was also considered.
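The value of protecting heavily shared accounts can be sketched with an invented sharing graph: immunizing the single 'social' hub below confines a virus to its starting account, where it would otherwise reach every user within two sharing steps.

# Sketch: spread over a sharing graph with and without immunizing the
# most heavily shared account. The graph and names are invented.

def steps_to_full_spread(shares_with, start, immune=()):
    infected = {start} - set(immune)
    steps = 0
    while True:
        new = {v for u in infected for v in shares_with.get(u, ())
               if v not in infected and v not in immune}
        if not new:
            return steps, len(infected)
        infected |= new
        steps += 1

if __name__ == "__main__":
    # 'admin' runs every user's programs, so it bridges ordinary users.
    shares_with = {
        "u1": ["admin"], "u2": ["admin"], "u3": ["admin"],
        "u4": ["admin"], "u5": ["admin"],
        "admin": ["u1", "u2", "u3", "u4", "u5"],
    }
    print(steps_to_full_spread(shares_with, "u1"))                    # (2, 6)
    print(steps_to_full_spread(shares_with, "u1", immune=["admin"]))  # (0, 1)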
Other Experiments
Similar experiments have since been performed on a variety of systems to
demonstrate feasibility and determine the ease of implementing a virus on many
systems. Simple viruses have been written for VAX VMS and VAX Unix in the
respective command languages, and neither program required more than 10 lines of
command language to implement. The Unix virus is independent of the computer on
which it is implemented, and is therefore able to run under IDRIS, VENIX, and a host
of other UNIX based operating systems on a wide variety of systems. A virus written
in Basic has been implemented in under 100 lines for the Radio Shack TRS-80, the
IBM PC, and several other machines with extended Basic capabilities. Although this is
a source level virus and could be detected fairly easily by the originator of any given
program, it is rare that a working program is examined by its creator after it is in
operation. In all of these cases, the viruses have been written so that the traces in
the respective operating systems would be incapable of determining the source of
the virus even if the virus itself had been detected. Since the UNIX and Basic viruses
could spread through a heterogeneous network so easily, they are seen as quite
dangerous.
The problems with policies that prevent controlled security experiments are clear;
denying users the ability to continue their work promotes illicit attacks; and if one
user can launch an attack without using system bugs or special knowledge, other users
will also be able to. By simply telling users not to launch attacks, little is
accomplished; users who can be trusted will not launch attacks; but users who would
do damage cannot be trusted, so only legitimate work is blocked. The perspective that
every attack allowed to take place reduces security is in the author's opinion a fallacy.
The idea of using attacks to learn of problems is even required by government policies
for trusted systems [Klein83] [Kaplan82]. It would be more rational to use open and
controlled experiments as a resource to improve security.
It has been demonstrated that a virus has the potential to spread throughout any
system which allows sharing. Every general purpose system currently in use is open to
at least limited viral attack. In many current 'secure' systems, viruses tend to spread
further when created by less trusted users. Experiments show the viability of viral
attack, and indicate that viruses spread quickly and are easily created on a variety of
operating systems. Further experimentation is still underway.
The results presented are not operating system or implementation specific, but are
based on the fundamental properties of systems. More importantly, they reflect
realistic assumptions about systems currently in use. Further, nearly every 'secure'
system currently under development is based on the Bell-LaPadula or lattice policy
alone, and this work has clearly demonstrated that these models are insufficient to
prevent viral attack. The virus essentially proves that integrity control must be
considered an essential part of any secure operating system.
Several undecidable problems have been identified with respect to viruses and
countermeasures, among them the precise detection of a virus and the determination of
whether two evolutions of a program (or of a virus) are equivalent.
Several potential countermeasures were examined in some depth, and none appear
to offer ideal solutions. Several of the techniques suggested in this paper which
could offer limited viral protection are in limited use at this time. To be perfectly
secure against viral attacks, a system must protect against incoming information
flow, while to be secure against leakage of information a system must protect
against outgoing information flow. In order for systems to allow sharing, there must
be some information flow. It is therefore the major conclusion of this paper that the
goals of sharing in a general purpose multilevel security system may be in such direct
opposition to the goals of viral security as to make their reconciliation and
coexistence impossible.
The most important ongoing research involves the effect of viruses on computer
networks. Of primary interest is determining how quickly a virus could spread to a
large percentage of the computers in the world. This is being done through simplified
mathematical models and studies of viral spreading in 'typical' computer networks.
The implications of a virus in a secure network are also of great interest. Since the
virus leads us to believe that both integrity and security must be maintained in a
system in order to prevent viral attack, a network must also maintain both criteria in
order to allow multilevel sharing between computers. This introduces significant
constraints on these networks.
Acknowledgements
Because of the sensitive nature of much of this research and the experiments
performed in its course, many of the people to whom I am greatly indebted cannot
be explicitly thanked. Rather than ignoring anyone's help, I have decided to give only
first names. Len and David have provided a lot of good advice in both the research
and writing of this paper, and without them I likely would never have gotten it to this
point. John, Frank, Connie, Chris, Peter, Terry, Dick, Jerome, Mike, Marv, Steve, Lou,
Steve, Andy, and Loraine all put their noses on the line more than just a little bit in
their efforts to help perform experiments, publicize results, and lend covert support
to the work. Martin, John, Magdy, Xi-an, Satish, Chris, Steve, JR, Jay, Bill, Fadi, Irv,
Saul, and Frank all listened and suggested, and their patience and friendship were
invaluable. Alice, John, Mel, Ann, and Ed provided better blocking than the USC front
4 ever has.