Lecture 5
Monotone Comparative Statics
1 Where we are
• Last time, we saw the setup for the one-dimensional version of Monotone Comparative Statics:
Theorem. Let x∗(t) = arg max_{x∈X} g(x, t), where X ⊆ R. If g has increasing differences, then
x∗(t) is increasing in t via the Strong Set Order.
• Today we’ll build a little bit more intuition with the single-dimensional case,
then quickly move on to the more interesting case where X is multi-dimensional
1.1 How do we use the result?
• To go back to the example we did right at the end of class Tuesday: a firm buys input z at price w per unit and sells output f(z) at price p, so it solves
max_z g(z, p, w) = p f(z) − wz
with f increasing
• Since f is increasing, ∂g/∂p = f(z) is increasing in z,
so g has increasing differences in p and z –
so when p goes up, input used (and output produced) goes up
• We can introduce a new variable ŵ = −w, and think of the problem in terms of ŵ
• This is obviously the same problem; but the new objective function has increasing differences
in z and ŵ
• So z ∗ is increasing in ŵ
• (Note that we can apply this parameter by parameter to any parameter in the problem –
we don’t need to worry about the relationship between the different parameters,
we can just think of holding the others fixed while we change one,
so we just worry about the relationship between the choice variable and one parameter at a
time)
• Here’s the fun part – the objective function still has increasing differences in z and −w,
so without knowing anything about the shape of demand P(·) or the production function f(·),
we know that when w goes up, z must go down (at least via the SSO)
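• To see this numerically, here’s a minimal sketch (an illustration, not from the lecture – the square-root production function is an arbitrary increasing choice) that grid-searches z∗ and checks the two comparative statics:

```python
import numpy as np

def f(z):
    return np.sqrt(z)  # any increasing f works; no concavity assumption is needed

z_grid = np.linspace(0.0, 10.0, 2001)

def z_star(p, w):
    """Grid-search maximizer of g(z) = p f(z) - w z."""
    profit = p * f(z_grid) - w * z_grid
    return z_grid[np.argmax(profit)]

# z* should be weakly increasing in p and weakly decreasing in w:
print([z_star(p, 1.0) for p in (0.5, 1.0, 2.0, 4.0)])  # ≈ [0.06, 0.25, 1.0, 4.0]
print([z_star(2.0, w) for w in (0.5, 1.0, 2.0, 4.0)])  # ≈ [4.0, 1.0, 0.25, 0.06]
```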
2 Minor Extensions
• First of all, recall that the Strong Set Order is just regular weak inequality when the sets are
singletons, so:
• Corollary. If g has increasing differences and x∗(t) is single-valued, then x∗ is weakly increasing in t in the “usual” sense.
• Theorem. Suppose g has strictly increasing differences, that is, g(x′, t) − g(x, t) is strictly increasing in t for any x′ > x. Then for any t′ > t, x ∈ x∗(t), and x′ ∈ x∗(t′), we have x′ ≥ x.
• Our examples so far have had strictly increasing differences, so we get the stronger result
• Note, though, that even with strictly increasing differences,
we’re not claiming x∗ is strictly increasing in t –
that would require some differentiability assumptions –
although with those, we could get a strictly-increasing result.
– To see why strictly increasing differences does not imply x∗(t) strictly increasing,
consider the example
g(x, t) = tx for x ≤ 0, and g(x, t) = (t − 3)x for x > 0
on X = R and T = {1, 2}
– g has strictly increasing differences –
for any x′ > x, g(x′, t) − g(x, t) is strictly increasing in t –
– but x = 0 is optimal for both t = 1 and t = 2
– What goes wrong here is the “kink” –
since the objective function is kinked at the optimum (not differentiable in x),
even a strictly positive interaction between x and t does not ensure that x∗(t) moves
strictly as t moves
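– (To check the claim: for x ≤ 0, g(x, t) = tx ≤ 0, and for x > 0, g(x, t) = (t − 3)x < 0 when t ≤ 2, so g(0, t) = 0 is the maximum at both t = 1 and t = 2; and whether x and x′ are both below 0, both above, or straddle it, g(x′, t) − g(x, t) has slope x′ − x > 0 in t, so differences are strictly increasing)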
• One other technical note: increasing differences requires that g(x′, t) − g(x, t)
is increasing in t,
but all we actually need is that when this is positive for one value of t,
it’s also positive for higher values of t
• That is, we don’t really care whether this difference is 5 or 7, we only care about when it’s
positive and when it’s negative, and that the sign can only flip the “right way” as t increases
• So we can weaken the “increasing differences” condition to what’s called single crossing
differences – that is, for any x′ > x and t′ > t, g(x′, t) − g(x, t) ≥ 0 implies g(x′, t′) − g(x, t′) ≥ 0, and g(x′, t) − g(x, t) > 0 implies g(x′, t′) − g(x, t′) > 0
• It’s easy to show that the proof we gave of Topkis’ Theorem only relies on this, not increasing
differences
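• (For example – this one’s not from the lecture – take g(x, t) = t·v(x) with t > 0: every difference g(x′, t) − g(x, t) = t[v(x′) − v(x)] has the same sign for all t, so g has single crossing differences for any v; but unless v is increasing, some of those differences are strictly decreasing in t, so g fails increasing differences)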
3 Motivating the bigger problem
• So we get a nice clean result – when the choice variable is one-dimensional,
if the objective function has increasing differences in the choice variable and the parameter,
the optimal choice is increasing in the parameter
• What about a firm with two inputs – capital and labor, say – and a production function f(k, ℓ)?
• If p goes up, will the firm use more capital and more labor?
Or more capital and less labor? Or more labor and less capital?
• That’s where we’re headed next – generalizing Topkis to cover the case of a multi-dimensional
choice problem like the firm’s choice of inputs
4 The Multi-Dimensional Case
4.1 Setup
• So let’s consider a general multi-dimensional optimization problem, x∗(t) = arg max_{x∈X} g(x, t), with X ⊆ Rm. To get an analogous result, we’ll need to:
1. Extend the Strong Set Order to a way to rank sets that are subsets of Rm rather than R,
so we’ll know what it means to say x∗ is increasing in t
2. Put a condition on the choice set X to make our approach work
3. Generalize increasing differences to a condition on the objective function g in a multi-
dimensional problem
4. Show the analogous result: given the conditions on X and g, x∗ (t) is increasing in t
4.2 First: when is a subset of Rm above another one?
• To be able to say that x∗ is increasing in t,
we need to know what it means for x∗ (t0 ) to be greater than x∗ (t),
when they are both sets of points in Rm
• In two dimensions:
[figure: two unordered points x and y in R2, with the join x ∨ y at the upper-right corner and the meet x ∧ y at the lower-left corner of the rectangle they span]
(Note that if x ≥ y, then the join is just x and the meet is just y)
• If we consider the partial order on individual points where x ≥ y if it’s weakly higher in every
dimension,
then the join of two points is their least upper bound – the “lowest” point bigger than both;
and the meet is the greatest lower bound – the highest point lower than both
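• (A quick worked example: for x = (1, 4) and y = (3, 2), neither point is above the other; the join is x ∨ y = (3, 4), the componentwise max, and the meet is x ∧ y = (1, 2), the componentwise min)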
• With the meet and the join defined,
we’ll say that a set A is bigger than a set B, A ≥ B, if
for any x ∈ A and y ∈ B, the join x ∨ y is in A and the meet x ∧ y is in B
• If X is one-dimensional, this is identical to the Strong Set Order,
because a ∨ b = max{a, b} and a ∧ b = min{a, b}
• This is what we’ll mean when we say A ≥ B when they’re both subsets of Rm;
so our goal will be to show x∗(t′) ≥ x∗(t) via this ranking when t′ > t
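• (Sanity check with singletons: if A = {a} and B = {b}, then A ≥ B requires a ∨ b ∈ A and a ∧ b ∈ B, which forces a ∨ b = a and a ∧ b = b – exactly a ≥ b, so this set order extends the order on points)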
4.3 Second: what conditions do we need for X?
• So, we’re defining a set x∗(t′) to be above another set x∗(t) if for any points in the two sets,
x∗(t′) also contains the join, and x∗(t) also contains the meet
• Since x∗ is a subset of X,
this will only make sense if the meet and the join are also in the choice set X
• For this reason, we can only apply Monotone Comparative Statics to choice sets X that have
a certain shape
• Specifically, we need the set X to be closed under the meet and the join –
for any x, y ∈ X, we need X to also contain x ∨ y and x ∧ y
• This actually rules out a lot of the problems we consider this semester
• When we think about a firm’s profit maximization problem over a production set Y ,
we’re optimizing over some weird shape that would not satisfy this condition
• When we think about cost minimization,
we’re minimizing over the set of input vectors generating enough output,
and that set need not be closed under the meet and the join either
[figures: a production set Y with x, y ∈ Y but x ∨ y ∉ Y; a budget set B(p, w) with x, y ∈ B(p, w) but x ∨ y ∉ B(p, w)]
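• (Concretely – an illustrative calculation, not from the slides: with prices (1, 1) and wealth 5, both x = (1, 4) and y = (3, 2) are affordable, but their join x ∨ y = (3, 4) costs 7 > 5, so the budget set is not closed under the join)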
• So, what kind of choice sets do work? Product sets:
X = X1 × X2 × · · · × Xm
where each Xi ⊆ R
• (This is another reason the consumer problem doesn’t fit this setup –
a consumer’s budget set changes as prices change)
• We can’t analyze the general profit maximization over Y problem this way,
because Y is almost certainly not a product set;
but if we think about just choosing input combinations,
we’re choosing over Rm+ = [0, ∞) × · · · × [0, ∞), which is a product set, so we can do it this way.
4.4 Third: what do we need for g?
• So now we know when a set A is greater than a set B,
so we’ll know how to say that x∗ is increasing in t,
and we know what type of choice set X we’re able to consider
• The last ingredient is a condition on g: we’ll say g is supermodular in X if
g(x ∨ y, t) + g(x ∧ y, t) ≥ g(x, t) + g(y, t)
for any x, y ∈ X.
• This sounds like a tough condition to check, but it turns out to be equivalent to a simpler
one: g is supermodular if and only if it has pairwise increasing differences – increasing differences in (xi, xj) for every pair of choice variables i ≠ j (see the quick check below)
• (We’ll get intuition for why pairwise increasing differences is good enough,
when we talk about the intuition for the upcoming result)
• We’ll also say g has increasing differences in (X, T ) if it has increasing differences in
(xi, tj) for each i and j.
• This will ensure that all “feedback loops” and indirect effects reinforce the primary effects,
which will give us strong results
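• (A quick check of the pairwise test in two dimensions, with g(x1, x2) = x1x2: for x = (a1, a2) and y = (b1, b2) with a1 ≥ b1 and b2 ≥ a2 – the only non-trivial configuration – g(x ∨ y) + g(x ∧ y) − g(x) − g(y) = a1b2 + b1a2 − a1a2 − b1b2 = (a1 − b1)(b2 − a2) ≥ 0; equivalently, ∂²g/∂x1∂x2 = 1 ≥ 0, matching the pairwise test)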
• So now, we know what it means to say x∗(t′) ≥ x∗(t);
we know what type of choice set X we want to allow;
and we have a condition on g that we can impose
• Topkis’ Theorem. If
1. g is supermodular in X, and
2. g has increasing differences in (X, T ),
then x∗(t′) ≥ x∗(t) (in the set order above) whenever t′ > t
• That is, if x ∈ x∗(t) and x′ ∈ x∗(t′), with t′ > t, then x ∨ x′ ∈ x∗(t′), and x ∧ x′ ∈ x∗(t)
4.5 Example
• Before getting into the proof, an example will help clarify exactly what’s going on: a firm
chooses capital k and labor ℓ to solve
max_{k,ℓ} g(k, ℓ, p, r, w) = p f(k, ℓ) − rk − wℓ
• For simplicity, let’s suppose that f is twice differentiable, and that ∂²f/∂k∂ℓ ≥ 0
• Then g is twice differentiable, and ∂²g/∂k∂ℓ = p ∂²f/∂k∂ℓ ≥ 0,
so g is supermodular in the choice variables X = (k, ℓ)
• In differentiable cases, I find the easiest way to check is to take first derivatives of g with
respect to each choice variable, and check whether they’re monotonic in each parameter
• In this case, we’re best off thinking of the parameters as T = (p, −w, −r)
• ∂g/∂k = p ∂f/∂k − r is increasing in p and −r,
and since it doesn’t depend on w, we’re free to say it’s (weakly) increasing in −w
• And ∂g/∂ℓ = p ∂f/∂ℓ − w is increasing in p and −w,
and since it doesn’t depend on r, it’s (weakly) increasing in −r
• So g has increasing differences in (X, T ), where X = (k, ℓ) and T = (p, −w, −r)
• Since g is supermodular in X and has increasing differences in (X, T ), we can apply Topkis’
Theorem
• In this case, if we assume that the firm’s problem has a unique solution (so we don’t have to
worry about stating things in terms of sets above other sets),
we simply get that (k∗, ℓ∗) is increasing in p and decreasing in w and r
• For intuition, suppose the wage w goes up. The obvious first response is that the firm reduces the labor input ℓ
• But since ∂f/∂k is increasing in ℓ,
when ℓ goes down, that reduces the marginal product of capital;
so the firm reduces its use of capital k
• But since ∂f/∂ℓ is increasing in k,
when the firm reduces its use of capital,
that reduces the marginal product of labor,
so the firm reduces its labor demand again
• Supermodularity basically ensures that all the feedback loops go in the same direction –
every change the firm wants to make, reinforces the other changes
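• Here’s a numerical sketch of this feedback-loop logic (an illustration, not from the lecture; the Cobb–Douglas f is an assumed example whose cross-partial is positive):

```python
import numpy as np

# Assumed example: f(k, l) = k**0.3 * l**0.5 has ∂²f/∂k∂l > 0,
# so g = p f - r k - w l is supermodular in (k, l).
k = np.linspace(0.01, 20.0, 500)
l = np.linspace(0.01, 20.0, 500)
K, L = np.meshgrid(k, l, indexing="ij")

def optimum(p, r, w):
    """Grid-search maximizer of g(k, l) = p f(k, l) - r k - w l."""
    g = p * K**0.3 * L**0.5 - r * K - w * L
    i, j = np.unravel_index(np.argmax(g), g.shape)
    return k[i], l[j]

print(optimum(1.0, 0.5, 0.5))  # baseline
print(optimum(1.5, 0.5, 0.5))  # p up: k* and l* both rise
print(optimum(1.0, 0.5, 0.8))  # w up: k* and l* both fall
```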
• Next, let’s consider the same example, but when ∂²f/∂k∂ℓ < 0
• Well, when we had an interaction between a choice variable and a parameter that went in the
“wrong direction,”
we just flipped the sign of the parameter
• Now,
∂²g/∂ℓ∂k̂ = −p ∂²f/∂ℓ∂k ≥ 0
so if we think of the firm as choosing (ℓ, k̂) = (ℓ, −k), the problem is now supermodular
• Well, ∂g/∂ℓ = p ∂f/∂ℓ(−k̂, ℓ) − w is increasing in p and −w
• And ∂g/∂k̂ = −p ∂f/∂k(−k̂, ℓ) + r is decreasing in p and increasing in r
• So with T = (−w, r), the problem has increasing differences, and Topkis’ Theorem applies:
when w goes up, ℓ falls and k̂ falls – so k rises;
when r goes up, k̂ rises – so k falls – and ℓ rises
• But p pushes the two derivatives in opposite directions, so it can’t be included as a parameter – ignore p!
• Of course, ∂²f/∂k∂ℓ < 0 is basically saying that when capital goes up,
the marginal product of labor goes down –
or capital and labor are substitutes!
• What about p?
Well, when p goes up, we know the firm will want to produce more,
but without knowing more about f ,
we can’t tell whether it will use more capital and less labor,
more labor and less capital,
or more of both
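• A numerical sketch of the substitutes case (again an illustration; the CES-style f below is an assumed example whose cross-partial is negative):

```python
import numpy as np

# Assumed example: f(k, l) = (k**0.8 + l**0.8)**0.625 has ∂²f/∂k∂l < 0,
# so capital and labor are substitutes.
k = np.linspace(0.01, 20.0, 500)
l = np.linspace(0.01, 20.0, 500)
K, L = np.meshgrid(k, l, indexing="ij")

def optimum(p, r, w):
    """Grid-search maximizer of g(k, l) = p f(k, l) - r k - w l."""
    g = p * (K**0.8 + L**0.8) ** 0.625 - r * K - w * L
    i, j = np.unravel_index(np.argmax(g), g.shape)
    return k[i], l[j]

print(optimum(1.0, 0.5, 0.5))  # baseline
print(optimum(1.0, 0.5, 0.8))  # w up: l* falls but k* rises (substitutes)
print(optimum(1.5, 0.5, 0.5))  # p up: not signed by the theorem (here both rise)
```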
5 Why Is Topkis’ Theorem True?
• I’m not going to give the formal proof,
but a “half proof” will give good intuition for why it holds
• The result is that as a parameter t increases, the set of solutions to a problem increases,
provided the problem is supermodular and has increasing differences in the choice variables and t
• For the “half proof,” consider a problem with two choice variables,
max_{x,y} g(x, y, t)
with t ∈ {0, 1}, and label the four points we’ll need: a = (3, 10), b = (5, 10), c = (3, 20), d = (5, 20)
• There are a few possible cases, depending on which points are optimal at t = 0 and which at t = 1; here’s one of the interesting ones
• Suppose b = (5, 10) is optimal at t = 0 and c = (3, 20) is optimal at t = 1; their join is d = (5, 20), and we want to show d is also optimal at t = 1
• By increasing differences (first step) and supermodularity (second step),
g(5, 20, 1) − g(3, 20, 1) ≥ g(5, 20, 0) − g(3, 20, 0) ≥ g(5, 10, 0) − g(3, 10, 0)
or
g(5, 20, 1) − g(3, 20, 1) ≥ 0, since g(5, 10, 0) − g(3, 10, 0) ≥ 0 by optimality of b at t = 0
• But we started with c = (3, 20) optimal at t = 1; so if d = (5, 20) is at least as good,
then d must also be optimal at t = 1, which is what we wanted to show
• Case 3: a = (3, 10) is optimal at t = 1 and d = (5, 20) is optimal at t = 0
• Since d is optimal at t = 0, telescoping g(d, 0) − g(a, 0) ≥ 0 gives
[g(5, 20, 0) − g(3, 20, 0)] + [g(3, 20, 0) − g(3, 10, 0)] ≥ 0
• Each bracketed difference is increasing in t by increasing differences, so
[g(5, 20, 1) − g(3, 20, 1)] + [g(3, 20, 1) − g(3, 10, 1)]
≥
[g(5, 20, 0) − g(3, 20, 0)] + [g(3, 20, 0) − g(3, 10, 0)]
≥
0
or
g(5, 20, 1) − g(3, 10, 1) ≥ 0
• But since a = (3, 10) is optimal at t = 1, and d = (5, 20) is at least as good,
d is also optimal at t = 1, which is what we wanted to show
6 Proof (skipped – included in notes for completeness)
• So suppose x ∈ x∗(t), and x′ ∈ x∗(t′), with t′ > t
• We want to show that if g is supermodular in X and has increasing differences in (X, T ),
then x ∧ x′ ∈ x∗(t) and x ∨ x′ ∈ x∗(t′).
• Optimality of x at t and of x′ at t′ gives
g(x, t) − g(x ∧ x′, t) ≥ 0 and g(x′, t′) − g(x ∨ x′, t′) ≥ 0
• Since x ≥ x ∧ x′, increasing differences says the first difference is increasing in t; flipping the sign of the second, we get
g(x, t′) − g(x ∧ x′, t′) ≥ 0 ≥ g(x ∨ x′, t′) − g(x′, t′)
• But supermodularity says g(x ∨ x′, t′) + g(x ∧ x′, t′) ≥ g(x, t′) + g(x′, t′), or that
g(x ∨ x′, t′) − g(x′, t′) ≥ g(x, t′) − g(x ∧ x′, t′)
meaning all of these terms must be equal, and therefore g(x∨x′, t′) = g(x′, t′) = max_{x̂∈X} g(x̂, t′), so x ∨ x′ ∈ x∗(t′); and since g(x, t′) = g(x ∧ x′, t′), increasing differences forces g(x, t) = g(x ∧ x′, t) as well, so x ∧ x′ ∈ x∗(t).
• (We would also want to prove that the meet/join definition of supermodularity is equivalent
to pairwise increasing differences,
which is mechanical – like with the “example” above,
it just means breaking up a difference into a bunch of component differences)