Linear Mixed Models

Chapter 2: Mixed Models

Craig Anderson
Blocking

Blocks are groups of experimental units that are formed so that units within blocks are as homogeneous as possible.

Blocks are almost always random effects in mixed model designs.

The purpose of blocking is to isolate variability due to extraneous causes.
Blocking

Blocking is a statistical technique designed to identify and control variation among groups of experimental units.

A blocking factor is often referred to as a nuisance factor because it is a source of variability but usually not of research interest.

Examples of blocking factors include units of test equipment or machinery, batches of raw material, people, and time.
Example
Tread wear

In order to study the amount of tread wear of four brands of tyres, these tyres must be mounted onto cars.

There will be variation from car to car where these tyres can be used → car: blocking factor.

One approach would be to randomly assign all the tyres to all the wheels across all cars included in the study (completely randomized design).

However, we can instead use a randomised complete block (RCB) design by randomly assigning the four brands of tyres to the four wheels on each car.
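As an illustration, here is a minimal R sketch of this within-car randomisation; the brand labels and the number of cars are hypothetical, not taken from the notes.

```r
set.seed(1)
brands <- c("brand1", "brand2", "brand3", "brand4")  # four tyre brands (hypothetical labels)
cars   <- paste0("car", 1:6)                         # hypothetical number of cars

# RCB randomisation: each car (block) receives all four brands,
# assigned to its four wheel positions in random order
plan <- do.call(rbind, lapply(cars, function(cc) {
  data.frame(car = cc, wheel = 1:4, brand = sample(brands))
}))
head(plan, 8)
```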

Randomised complete block design

The randomised complete block design usually gives more accurate results than the completely randomised design.

One can also consider further blocking to produce an even more accurate design.

In the car example, we could also block on the positions of the wheels to improve the accuracy of our results.
Question

Plant Breeding

A plant breeder wants to study the effect of three levels of nitrogen and four levels of potassium on his new variety of corn.
His treatment structure consists of 12 treatment combinations.
He carries out his study using several different plots of land, each of which is partitioned into 12 parts.
Each treatment combination is assigned at random to one part in each plot.
What is the blocking factor here?
Example

Adhesives

An engineer wants to test the strength of three adhesives used as bonding agents.
Seven toys are randomly selected from a population of toys and are used for this strength test.
Three different brands of adhesives, a, b, and c, are used to glue parts from each toy.
The amount of pressure required to break the bond is then recorded.
Data source: SAS.
Adhesives example
Randomized complete block design
This consists of the effects:
adhesive: a treatment effect. This is a fixed effect because only three adhesives (a, b, and c) are used in the study, and the engineer is only interested in making inference about these three adhesives.
toy: a blocking effect. This is a random effect because the seven toys are randomly selected from a population of toys, and the inference about the treatment means is made over the entire population of toys.

The treatments are assumed not to interact with the blocking variable.
Adhesives example

Goal
The purpose of such an experiment is to

1. Estimate and compare the treatment means over the entire population of blocks.
2. Account for the variability in the response variable due to the blocks.
Pressure by toy number

[Figure: pressure (y-axis, approx. 70–85) plotted against toy number (x-axis, 1–7), with points distinguished by adhesive (a, b, c).]
Pressure by adhesive type

[Figure: pressure (y-axis, approx. 70–85) plotted against adhesive type (x-axis: a, b, c).]
Adhesives example

Model

$$y_{ij} = \mu + \alpha_i + b_j + e_{ij}$$

where
$y_{ij}$ is the breaking strength for the $i$th adhesive and $j$th toy, $i = 1, \ldots, I$ ($I = 3$) and $j = 1, \ldots, J$ ($J = 7$);
$\mu$ is the overall mean;
$\alpha_i$ is the fixed effect associated with the $i$th adhesive;
$b_j$ is the random effect associated with the $j$th toy (block);
$e_{ij}$ is the experimental error associated with samples within blocks.
Adhesives example

Distributional assumptions

The random variables $b_j$ are i.i.d. $N(0, \sigma_B^2)$. The variance $\sigma_B^2$ is the parameter to be estimated in the mixed model for this effect.

The $e_{ij}$ are i.i.d. $N(0, \sigma_E^2)$. The variance $\sigma_E^2$ is the parameter to be estimated in the mixed model for random error.

The effects $b_j$ and $e_{ij}$ are assumed to be independent random variables.
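Under these assumptions, a model of this form could be fitted in R with lme4. A minimal sketch; the data frame name adhesive_df is an assumption, while pressure, adhesive and toy mirror the variable names seen in the plots and output.

```r
library(lme4)

# Fixed effect of adhesive, random intercept for toy (the block)
fit <- lmer(pressure ~ adhesive + (1 | toy), data = adhesive_df)
summary(fit)   # reports fixed-effect estimates and variance components
```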
Adhesives example

Expectation

$$E(Y_{ij}) = \mu + \alpha_i$$

This is the mean pressure for adhesive $i$ averaged across all toys in the population.

Variance

$$\mathrm{Var}(Y_{ij}) = \sigma_B^2 + \sigma_E^2$$

The variance of an observation is the sum of the variances due to blocks (often referred to as between-block variation) and random errors (within-block variation).
Adhesives example

Hypotheses about fixed effects

Hypotheses about the fixed effects are the same as those in the fixed-effects model:

$$H_A: \alpha_i = 0 \quad \text{for } i = 1, \ldots, I.$$

Are there significant treatment effects?
Adhesives example

Hypotheses about random effects

Our hypothesis about the random effects takes the simple form:

$$H_B: \sigma_B^2 = 0.$$

Are the variance components associated with the random effects equal to zero? In other words, is there significant variation due to these random variables?
Adhesives example

Important Note

When the main interest is in the fixed effects, inferences about the random effects are of little interest in themselves.

The primary role of random effects is to model sources of variation so that the fixed effects can be more accurately estimated and tested.
Results: Fixed Effects

Fixed effects:
Estimate Std. Error t value
(Intercept) 70.1857 1.7655 39.75
adhesiveb 5.7143 1.7214 3.32
adhesivec 0.9143 1.7214 0.53

Adhesive a acts as a baseline and the other two are compared to it.

The t-values can be used to test significance based on a t-distribution.

Here we see a significant difference between a and b, and thus we reject $H_A$.
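As a quick check, each t value is the estimate divided by its standard error; for adhesive b,
$$t = \frac{5.7143}{1.7214} \approx 3.32.$$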

Results: Random Effects

Random effects:
Groups Name Variance Std.Dev.
toy (Intercept) 11.45 3.383
Residual 10.37 3.220

These give us our estimates of $\sigma_B^2$ and $\sigma_E^2$ respectively.

We can test $H_B$ using a likelihood ratio test; we will look at this later in the course.
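As a preview, one way such a likelihood ratio test could be carried out in R is to refit the model by maximum likelihood with and without the random toy effect and compare log-likelihoods. A sketch under the same assumed data frame name adhesive_df:

```r
library(lme4)

# Refit by maximum likelihood, with and without the random toy effect
m1 <- lmer(pressure ~ adhesive + (1 | toy), data = adhesive_df, REML = FALSE)
m0 <- lm(pressure ~ adhesive, data = adhesive_df)

# Likelihood ratio statistic; the chi-squared(1) p-value is conservative
# because sigma_B^2 = 0 lies on the boundary of the parameter space
lrt  <- as.numeric(2 * (logLik(m1) - logLik(m0)))
pval <- pchisq(lrt, df = 1, lower.tail = FALSE)
```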

Variance
The variance of a treatment mean is a function of both $\sigma_E^2$ (error variance) and $\sigma_B^2$ (block variance):

$$\mathrm{Var}(\bar{Y}_{i\cdot}) = \mathrm{Var}(\bar{b}_{\cdot} + \bar{e}_{i\cdot}) = \frac{1}{J}(\sigma_B^2 + \sigma_E^2)$$

The variance of the difference between two treatment means is a function of only $\sigma_E^2$:

$$\mathrm{Var}(\bar{Y}_{i_1\cdot} - \bar{Y}_{i_2\cdot}) = \mathrm{Var}\big(\bar{b}_{\cdot} + \bar{e}_{i_1\cdot} - (\bar{b}_{\cdot} + \bar{e}_{i_2\cdot})\big) = \mathrm{Var}(\bar{e}_{i_1\cdot} - \bar{e}_{i_2\cdot}) = \frac{2}{J}\sigma_E^2.$$

For instance, the standard error for the adhesive a treatment mean is 1.766, while that for the difference between adhesives a and b is 1.721.
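These values can be reproduced from the variance component estimates on the previous slide ($\hat{\sigma}_B^2 = 11.45$, $\hat{\sigma}_E^2 = 10.37$) with $J = 7$:
$$\sqrt{(11.45 + 10.37)/7} \approx 1.766, \qquad \sqrt{2 \times 10.37/7} \approx 1.721.$$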
Two-factor mixed model with interaction

Consider the model

$$y_{ijk} = \mu + \alpha_i + b_j + (\alpha b)_{ij} + e_{ijk}$$

for $i = 1, \ldots, I$, $j = 1, \ldots, J$ and $k = 1, \ldots, K$.

Factor A: fixed
Factor B: random

To ensure identifiability, let $\sum_{i=1}^{I} \alpha_i = 0$.
Two-factor mixed model with interaction

Random variables

$$b_j \overset{\text{iid}}{\sim} N(0, \sigma_B^2)$$

$$(\alpha b)_{ij} \overset{\text{iid}}{\sim} N(0, \sigma_{AB}^2)$$

$$e_{ijk} \overset{\text{iid}}{\sim} N(0, \sigma_E^2)$$

The effects $b_j$, $(\alpha b)_{ij}$ and $e_{ijk}$ are assumed to be mutually independent.
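To make these assumptions concrete, here is a minimal R sketch that simulates data from this model; all sizes and parameter values are illustrative only.

```r
set.seed(42)
I <- 3; J <- 5; K <- 6                        # illustrative sizes
mu      <- 20
alpha   <- c(-2, 0, 2)                        # fixed effects, summing to zero
sigma_B <- 2; sigma_AB <- 1.5; sigma_E <- 3   # illustrative standard deviations

b  <- rnorm(J, 0, sigma_B)                    # random effects of factor B
ab <- matrix(rnorm(I * J, 0, sigma_AB), I, J) # random interaction effects

dat   <- expand.grid(k = 1:K, j = 1:J, i = 1:I)
dat$y <- with(dat, mu + alpha[i] + b[j] + ab[cbind(i, j)]) +
         rnorm(nrow(dat), 0, sigma_E)
head(dat)
```

The sizes $I = 3$, $J = 5$, $K = 6$ match the grass example that follows.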

Two-factor mixed model with interaction

Notation
$$\bar{Y} = \frac{1}{IJK} \sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{k=1}^{K} Y_{ijk}$$

$$\bar{Y}_{i\cdot\cdot} = \frac{1}{JK} \sum_{j=1}^{J} \sum_{k=1}^{K} Y_{ijk}$$

$$\bar{Y}_{\cdot j\cdot} = \frac{1}{IK} \sum_{i=1}^{I} \sum_{k=1}^{K} Y_{ijk}$$

$$\bar{Y}_{ij\cdot} = \frac{1}{K} \sum_{k=1}^{K} Y_{ijk}$$
Data decomposition

Data

$$y_{ijk} = \bar{y} + (\bar{y}_{i\cdot\cdot} - \bar{y}) + (\bar{y}_{\cdot j\cdot} - \bar{y}) + (\bar{y}_{ij\cdot} - \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot j\cdot} + \bar{y}) + (y_{ijk} - \bar{y}_{ij\cdot})$$

Degrees of freedom

$$IJK = 1 + (I - 1) + (J - 1) + (I - 1)(J - 1) + IJ(K - 1)$$
Sums of squares
The sum of squares can be broken down as

$$\mathrm{SST} = \mathrm{SSA} + \mathrm{SSB} + \mathrm{SSAB} + \mathrm{SSE}$$

as follows:

$$\sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{k=1}^{K} (y_{ijk} - \bar{y})^2 = JK \sum_{i=1}^{I} (\bar{y}_{i\cdot\cdot} - \bar{y})^2 + IK \sum_{j=1}^{J} (\bar{y}_{\cdot j\cdot} - \bar{y})^2 + K \sum_{i=1}^{I} \sum_{j=1}^{J} (\bar{y}_{ij\cdot} - \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot j\cdot} + \bar{y})^2 + \sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{k=1}^{K} (y_{ijk} - \bar{y}_{ij\cdot})^2$$
Expected mean squares

Fixed Effect

$$\mathrm{MSA} = \frac{\mathrm{SSA}}{I - 1} = \frac{JK}{I - 1} \sum_{i=1}^{I} (\bar{y}_{i\cdot\cdot} - \bar{y})^2$$

and the fixed-effect quadratic form appearing in its expectation is

$$Q[A] \equiv \frac{1}{I - 1} \sum_{i=1}^{I} (\alpha_i - \bar{\alpha}_{\cdot})^2 = \frac{1}{I - 1} \sum_{i=1}^{I} \alpha_i^2$$

(using the constraint $\sum_{i=1}^{I} \alpha_i = 0$, so $\bar{\alpha}_{\cdot} = 0$).

Random effects

MSB, MSAB and MSE are the same as in the model with interaction for two random factors.
ANOVA table

Source   MS                     E(MS)                        F
A        SSA / (I−1)            σ_E² + K σ_AB² + JK Q[A]     F_A = MSA / MSAB
B        SSB / (J−1)            σ_E² + K σ_AB² + IK σ_B²     F_B = MSB / MSAB
AB       SSAB / ((I−1)(J−1))    σ_E² + K σ_AB²               F_AB = MSAB / MSE
Error    SSE / (IJ(K−1))        σ_E²
Hypothesis tests

Hypothesis for fixed effect

$$H_A: \alpha_i = 0, \quad i = 1, \ldots, I$$

Test statistic

$$F_A = \frac{\mathrm{MSA}}{\mathrm{MSAB}} \sim F(I - 1, (I - 1)(J - 1))$$

under $H_A$.
Hypothesis tests

Hypotheses for random effects

Under $H_B: \sigma_B^2 = 0$,

$$F_B = \frac{\mathrm{MSB}}{\mathrm{MSAB}} \sim F(J - 1, (I - 1)(J - 1))$$

Under $H_{AB}: \sigma_{AB}^2 = 0$,

$$F_{AB} = \frac{\mathrm{MSAB}}{\mathrm{MSE}} \sim F((I - 1)(J - 1), IJ(K - 1))$$
Example

Grass

Three seed growth methods are applied to seeds from each of five varieties of turf grass.

Six pots are planted with seeds from each method-by-variety combination.

The 90 pots are randomly placed in a uniform growth chamber, and dry matter yields are measured from clippings at the end of four weeks.

Data source: SAS.
Grass Example

This is an example of a completely randomized experiment with a factorial arrangement of treatments.

The fifteen treatments are the combinations of levels of the two factors:
variety (five levels), and
method (three levels).

Assume that the five varieties were randomly chosen from a broader population of varieties.

Interest is not in these particular five varieties but in the population from which they were chosen.
Grass Example

The variable variety is considered to be a random effect.

The variable method is considered to be a fixed effect because the interest is only in these three methods.

The method-by-variety interaction is a random effect. Since both random and fixed effects are involved, the model is defined as being mixed.

The varieties are random, and inference about method means and their differences should apply across the entire population of varieties.
Grass Example

Goal
The purpose of such an experiment is to
1. Estimate and compare the mean yield for the three growth methods over the entire population of grass varieties.
2. Account for the variability in yield due to variety and the variety-by-method combination.
Grass Example

Model
$$y_{ijk} = \mu + \alpha_i + b_j + (\alpha b)_{ij} + e_{ijk}$$

$y_{ijk}$ is the $k$th observation ($k = 1, \ldots, 6$) for the $i$th method ($i = 1, 2, 3$) and $j$th variety ($j = 1, \ldots, 5$);
$\mu$ is the overall mean;
$\alpha_i$ is the fixed effect associated with the $i$th growth method;
$b_j$ is the random effect associated with the $j$th grass variety;
$(\alpha b)_{ij}$ is the interaction between the $i$th method and the $j$th variety;
$e_{ijk}$ is the experimental error.
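A hedged sketch of how a model of this form could be fitted with lme4 in R; the data frame name grass_df is an assumption, while yield, method and variety mirror the names used in the plots and output.

```r
library(lme4)

# Fixed effect of method; random intercepts for variety and for the
# method-by-variety interaction
fit <- lmer(yield ~ method + (1 | variety) + (1 | method:variety),
            data = grass_df)
summary(fit)
```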

Grass Example

Distributional assumptions
$$b_j \overset{\text{iid}}{\sim} N(0, \sigma_B^2).$$

$$(\alpha b)_{ij} \overset{\text{iid}}{\sim} N(0, \sigma_{AB}^2).$$

$$e_{ijk} \overset{\text{iid}}{\sim} N(0, \sigma_E^2).$$

The effects $b_j$, $(\alpha b)_{ij}$ and $e_{ijk}$ are assumed to be independent random variables.
Grass Example

Expectation

$$E(Y_{ijk}) = \mu + \alpha_i = \mu_i$$

This is the mean yield for method $i$ averaged across all varieties in the population.

Variance

$$\mathrm{Var}(Y_{ijk}) = \sigma_B^2 + \sigma_{AB}^2 + \sigma_E^2$$

The variance components are $\sigma_B^2$ (variety variance), $\sigma_{AB}^2$ (method-by-variety variance) and $\sigma_E^2$ (error variance).
Grass Example

Hypotheses about fixed effects

$$H_A: \alpha_i = 0 \quad \text{for } i = 1, \ldots, I.$$

Hypotheses about random effects

$$H_B: \sigma_B^2 = 0$$

$$H_{AB}: \sigma_{AB}^2 = 0.$$
Interaction plot

[Figure: interaction plot of yield (y-axis, approx. 5–30) against variety (x-axis, 1–5) for methods A, B and C.]
ANOVA

Type 1 Analysis of Variance


Source   DF    SS        MS       E(MS)
M         2   925.923   462.961   Var(Res) + 6 Var(M*V) + Q(M)
V         4   219.105    54.766   Var(Res) + 6 Var(M*V) + 18 Var(V)
M*V       8   376.510    47.064   Var(Res) + 6 Var(M*V)
Res      75  1382.600    18.435   Var(Res)

Type 1 Analysis of Variance


Source Error Term Error DF F Value Pr > F
M MS(M*V) 8 9.84 0.0070
V MS(M*V) 8 1.16 0.3946
M*V MS(Res) 75 2.55 0.0162
Res
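The F values are the ratios implied by the expected mean squares: $F_M = 462.961/47.064 \approx 9.84$, $F_V = 54.766/47.064 \approx 1.16$ and $F_{M*V} = 47.064/18.435 \approx 2.55$, matching the output above. As a further illustration, equating observed mean squares to the E(MS) column gives ANOVA (method-of-moments) estimates of the variance components, e.g. $\widehat{\mathrm{Var}}(M*V) = (47.064 - 18.435)/6 \approx 4.77$ and $\widehat{\mathrm{Var}}(V) = (54.766 - 47.064)/18 \approx 0.43$.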

Contrasts

Estimates
Label Estimate St. Error DF t Value Pr > |t|
A vs B and C 6.7583 1.5340 8 4.41 0.0023
Method A mean 23.0100 1.2863 11.9 40.27 <.0001

Confidence intervals
Label Alpha Lower Upper
A vs B and C 0.05 3.2209 10.2958
Method A mean 0.05 20.2058 25.8142
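As a check on the first interval: with 8 degrees of freedom the 97.5% quantile of the t distribution is approximately 2.306, so
$$6.7583 \pm 2.306 \times 1.5340 \approx (3.22,\ 10.30),$$
matching the tabulated limits, and $6.7583/1.5340 \approx 4.41$ reproduces the t value.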
