
Lecture 5: Monotone Comparative Statics

1 Where we are
• Last time, we saw the setup for the one-dimensional version of Monotone Comparative Statics

• We start with a parameterized optimization problem over a one-dimensional choice set,

max_{x∈X} g(x, t),    X ⊆ R

and let x∗(t) = arg max_{x∈X} g(x, t) be the set of maximizers

• We define a partial order on subsets of R, the Strong Set Order:

A ≥SSO B if for any a ∈ A and b ∈ B, max{a, b} ∈ A and min{a, b} ∈ B,
which reduces to a ≥ b when A = {a} and B = {b} are singletons

• We define the function g as having increasing differences if for any x′ > x,

the difference g(x′, t) − g(x, t) is weakly increasing in t,
which is equivalent to ∂g/∂x increasing in t, or ∂g/∂t increasing in x, or ∂²g/∂x∂t ≥ 0,
whenever these derivatives exist

• And we gave the “punchline”, which I called “Baby Topkis”:

Theorem. Let x∗(t) = arg max_{x∈X} g(x, t), where X ⊆ R. If g has increasing differences, then
x∗(t) is increasing in t via the Strong Set Order.

• Today we’ll build a little bit more intuition with the single-dimensional case,
then quickly move on to the more interesting case where X is multi-dimensional

• But first: any questions?

1.1 How do we use the result?
• to go back to the example we did right at the end of class Tuesday...

• a single-input, single-output firm with production function f , solving

max_{z≥0} {pf(z) − wz}

with f increasing
• Since f is increasing, ∂g/∂p = f(z) is increasing in z,
so g has increasing differences in p and z –
so when p goes up, input used (and output produced) goes up

• What about when w changes?


• Well, ∂g/∂w = −z, so ∂²g/∂w∂z = −1, so the problem does not have increasing differences in z and
w

• So what can we do?

• Well, basically, we can flip the sign of w

• We can introduce a new variable ŵ = −w, and think of the problem in terms of ŵ

• That is, consider


z∗(ŵ) = arg max_{z≥0} {pf(z) + ŵz}

• This is obviously the same problem; but the new objective function has increasing differences
in z and ŵ

• So z ∗ is increasing in ŵ

• But since ŵ is just −w, we learn z ∗ is decreasing in w

– In practice, we don’t need to formally define a new variable,


we can just think of −w instead of w as the parameter of interest,
and note that g has increasing differences in z and −w

• (Note that we can apply this parameter by parameter to any parameter in the problem –
we don’t need to worry about the relationship between the different parameters,
we can just think of holding the others fixed while we change one,
so we just worry about the relationship between the choice variable and one parameter at a
time)
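• (An aside, not from the lecture: we can sanity-check this comparative static numerically. The production function f(z) = √z and the price p = 2 below are purely illustrative choices of mine:)

```python
# Illustrative check (my own example): a price-taking firm with f(z) = sqrt(z),
# solved by brute-force grid search; input z* should fall as the wage w rises.
import numpy as np

grid = np.linspace(0.0, 10.0, 10001)

def z_star(p, w):
    """Maximizer of p*f(z) - w*z over the grid."""
    return grid[np.argmax(p * np.sqrt(grid) - w * grid)]

inputs = [z_star(2.0, w) for w in (0.5, 1.0, 2.0)]
assert all(a >= b for a, b in zip(inputs, inputs[1:]))  # z* decreasing in w
```

(Here the analytic solution is z∗ = (p/2w)², so the grid search should return roughly 4, 1, and 0.25 as w rises.)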

• Let’s do one more little complication,


so at least we’re proving something that we didn’t already know from the Law of Supply

• Suppose the firm isn’t a price taker in the output market,


but faces a downward-sloping demand curve giving inverse demand P (q) at each q

• The firm is now solving


max_z {P(f(z))f(z) − wz}

• Here’s the fun part – the objective function still has increasing differences in z and −w,
so without knowing anything about the shape of demand P (·) or the production function
f (·),
we know that when w goes up, z must go down (at least via the SSO)

2 Minor Extensions
• First of all, recall that the Strong Set Order is just regular weak inequality when the sets are
singletons, so:

• Corollary. If g has increasing differences and x∗(t) is single-valued, then x∗ is weakly
increasing in t in the “usual” sense.

• We can also show a stronger result if we have strictly increasing differences:

• Theorem. Suppose g has strictly increasing differences, that is, g(x′, t) − g(x, t) is strictly
increasing in t for any x′ > x. Then for any t′ > t, x ∈ x∗(t), and x′ ∈ x∗(t′), we have x′ ≥ x.

• That is, x∗ (t) is increasing in t in the more intuitive sense –


every solution at t′ is at least as big as every solution at t < t′

– To prove it, suppose t′ > t, x ∈ x∗(t), and x′ ∈ x∗(t′)

– x ∈ x∗(t) requires g(x, t) − g(x′, t) ≥ 0
– and x′ ∈ x∗(t′) requires g(x, t′) − g(x′, t′) ≤ 0
– So together, g(x, t′) − g(x′, t′) ≤ g(x, t) − g(x′, t)
– If g has strictly increasing differences, this is impossible for x > x′,
which proves x′ ≥ x

• This is called the Monotone Selection Theorem:

for any selections x and x′ from x∗(t) and x∗(t′) with t′ > t,
x′ ≥ x

• Our examples so far have had strictly increasing differences, so we get the stronger result

• (which we already knew from the Law of Supply)

• Note, though, that even with strictly increasing differences,
we’re not claiming x∗ is strictly increasing in t –
that would require some differentiability assumptions –
although with those, we could get a strictly-increasing result.

– To see why strictly increasing differences does not imply x∗(t) strictly increasing,
consider the example

g(x, t) = tx        if x ≤ 0
        = (t − 3)x  if x > 0

on X = R and T = {1, 2}
– g has strictly increasing differences –
for any x′ > x, g(x′, t) − g(x, t) is strictly increasing in t –
– but x = 0 is optimal for both t = 1 and t = 2
– What goes wrong here is the “kink” –
since the objective function is kinked at the optimum (not differentiable in x),
even a strictly positive interaction between x and t does not ensure that x∗(t) moves
strictly as t moves
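• (A quick numerical confirmation of this example, on a grid of my own choosing: the maximizer stays at x = 0 at both t = 1 and t = 2, despite strictly increasing differences:)

```python
# The kinked example: g(x,t) = t*x for x <= 0, (t-3)*x for x > 0.
import numpy as np

def g(x, t):
    return t * x if x <= 0 else (t - 3) * x

xs = np.linspace(-2.0, 2.0, 4001)
for t in (1, 2):
    best = xs[int(np.argmax([g(x, t) for x in xs]))]
    assert abs(best) < 1e-6  # x* = 0 at both parameter values
```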

• One other technical note

• We made the assumption that


g(x′, t) − g(x, t)

is increasing in t,
but all we actually need is that when this is positive for one value of t,
it’s also positive for higher values of t

• That is, we don’t really care whether this difference is 5 or 7, we only care about when it’s
positive and when it’s negative, and whether that is increasing in t

• So we can weaken the “increasing differences” condition to what’s called single crossing
differences – that

g(x′, t) − g(x, t) ≥ 0 ⟹ g(x′, t′) − g(x, t′) ≥ 0

for any x′ > x and t′ > t

• It’s easy to show that the proof we gave of Topkis’ Theorem only relies on this, not increasing
differences

• We go with increasing differences because it’s typically easier to check


(Whether g has increasing differences depends only on “interaction effects” between x and t;
single-crossing differences also depends on levels,
so adding a function of x that isn’t a function of t to g could change whether single-crossing
differences holds, but doesn’t change whether increasing differences holds;
so I think it’s easier to check increasing differences,
but it’s good to know the weaker condition still gives the same result)

3 Motivating the bigger problem
• So we get a nice clean result – when the choice variable is one-dimensional,
if the objective function has increasing differences in the choice variable and the parameter,
the optimal choice is increasing in the parameter

• But what if there’s more than one choice variable?

• What about a firm with two inputs, capital and labor, say, and a production function f

• The firm’s problem is


max_{k,ℓ≥0} {pf(k, ℓ) − wk − rℓ}

• We already know that if p goes up, output will go up –


because we can restate this as a one-dimensional quantity-setting problem with some cost
function c(q),
or from the Law of Supply

• But what about inputs?

• If p goes up, will the firm use more capital and more labor?
Or more capital and less labor? Or more labor and less capital?

• If w goes up, the Law of Supply says k will go down,


but what about ℓ, and output?

• What do we need to answer this question?

• That’s where we’re headed next – generalizing Topkis to cover the case of a multi-dimensional
choice problem like the firm’s choice of inputs

4 The Multi-Dimensional Case
4.1 Setup
• So let’s consider a general multi-dimensional optimization problem,

x∗(t) = arg max_{x∈X} g(x, t)

where g : X × T → R and now X ⊆ R^m

(We can still let T ⊆ R, since we only need to consider one parameter at a time,
although for notational convenience I’ll often think of T ⊆ R^n as well)

• Our goal is the same as last time:


to say when the solution x∗ changes in a predictable direction when a parameter changes

• To do this, we’ll need to do the following:

1. Extend the Strong Set Order to a way to rank subsets of R^m rather than R,
so we’ll know what it means to say x∗ is increasing in t
2. Put a condition on the choice set X to make our approach work
3. Generalize increasing differences to a condition on the objective function g in a multi-dimensional problem
4. Show the analogous result: given the conditions on X and g, x∗ (t) is increasing in t

4.2 First: when is a subset of R^m above another one?
• To be able to say that x∗ is increasing in t,
we need to know what it means for x∗(t′) to be greater than x∗(t),
when they are both sets of points in R^m

• To do that, we introduce a generalization of the Strong Set Order

• For two points a, b ∈ X ⊆ R^m, we’ll define their componentwise maximum

a ∨ b = “a join b” = (max{a1, b1}, max{a2, b2}, . . . , max{am, bm})

and their componentwise minimum

a ∧ b = “a meet b” = (min{a1, b1}, min{a2, b2}, . . . , min{am, bm})

• In two dimensions: [figure: x and y span a rectangle, with x ∨ y at its upper-right corner and x ∧ y at its lower-left]

(Note that if x ≥ y, then the join is just x and the meet is just y)
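• (In code, the join and meet are just componentwise max and min — a two-line sketch of my own:)

```python
# Componentwise join (∨) and meet (∧) for points in R^m, as tuples.
def join(a, b):
    return tuple(max(ai, bi) for ai, bi in zip(a, b))

def meet(a, b):
    return tuple(min(ai, bi) for ai, bi in zip(a, b))

x, y = (5, 10), (3, 20)
assert join(x, y) == (5, 20) and meet(x, y) == (3, 10)
assert join((5, 20), (3, 10)) == (5, 20)  # if x >= y, the join is just x
```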

• If we consider the partial order on individual points where x ≥ y if it’s weakly higher in every
dimension,
then the join of two points is their least upper bound – the “lowest” point bigger than both;
and the meet is the greatest lower bound – the highest point lower than both

• With the meet and the join defined,
we’ll say that a set A is bigger than a set B, A ≥ B, if

a ∈ A and b ∈ B ⟹ a ∨ b ∈ A and a ∧ b ∈ B

• If X is one-dimensional, this is identical to the Strong Set Order,
because a ∨ b = max{a, b} and a ∧ b = min{a, b}

• If X is multi-dimensional but A and B are singleton sets {a} and {b},

then this requires that a ≥ b, that is, the point in A is weakly bigger than the point in B in
every dimension

• But this also allows a bunch of other configurations:

[figure: several configurations of a set A lying above a set B]

• This is what we’ll mean when we say A ≥ B when they’re both subsets of R^m;
so our goal will be to show x∗(t′) ≥ x∗(t) via this ranking when t′ > t
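• (For finite sets of points, this set order is easy to check mechanically — a small sketch of my own:)

```python
# Check A >= B in the generalized strong set order, for finite sets in R^m
# represented as Python sets of tuples.
from itertools import product

def sso_geq(A, B):
    for a, b in product(A, B):
        jn = tuple(max(x, y) for x, y in zip(a, b))
        mt = tuple(min(x, y) for x, y in zip(a, b))
        if jn not in A or mt not in B:
            return False
    return True

assert sso_geq({(5, 20)}, {(3, 10)})       # singletons: componentwise >=
assert not sso_geq({(5, 10)}, {(3, 20)})   # join (5,20) is missing from A
```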

4.3 Second: what conditions do we need for X?
• So, we’re defining a set x∗(t′) to be above another set x∗(t) if for any points in the two sets,
x∗(t′) also contains the join, and x∗(t) also contains the meet

• Since x∗ is a subset of X,
this will only make sense if the meet and the join are also in the choice set X

• For this reason, we can only apply Monotone Comparative Statics to choice sets X that have
a certain shape

• Specifically, we need the set X to be closed under the meet and the join –
for any x, y ∈ X, we need X to also contain x ∨ y and x ∧ y

• This actually rules out a lot of the problems we consider this semester

• When we think about a firm’s profit maximization problem over a production set Y ,
we’re optimizing over some weird shape that would not satisfy this condition

• When we think about cost minimization,
we’re minimizing over the set of input vectors generating enough output,

or the upper contour set of a production function –
this would also not satisfy this condition

• When we get to consumer theory,

we’ll generally be assuming that consumers choose from budget sets,
which are triangles, and don’t satisfy this condition

[figure: a production set Y and a budget set B(p, w), each containing points x and y but not their join x ∨ y]

• So, what kind of choice sets do work?

• It suffices for X to be a product set

X = X1 × X2 × . . . × Xm

where each Xi ⊆ R

• (X doesn’t need to be a product set –


formally, it needs to be a sublattice of R^m,
which just means for any two points in X, the meet and join are also in X –
but this is a natural assumption, and a sufficient one)

• So basically, X is a grid or a rectangle, not a triangle or some other funny-shaped thing

• We also assume that X is fixed while the parameter changes;

this can also be relaxed some, but not completely, and it’s safest to just leave it fixed

• (This is another reason the consumer problem doesn’t fit this setup –
a consumer’s budget set changes as prices change)

• This is another reason the single-output, production-function formulation is useful:

• We can’t analyze the general profit maximization over Y problem this way,
because Y is almost certainly not a product set;
but if we think about just choosing input combinations,
we’re choosing over R^m_+, so we can do it this way.

4.4 Third: what do we need for g?
• So now we know when a set A is greater than a set B,
so we’ll know how to say that x∗ is increasing in t,
and we know what type of choice set X we’re able to consider

• What we need now is conditions on the objective function g,


which will allow us to say that x∗ (t) is increasing in t

• Basically, we need to extend Increasing Differences to a multi-dimensional environment

• For now, fix t, so we can think of g as a function from X to R

• Definition. For X a product set in R^m, a function g : X → R is supermodular if

g(x ∨ y) + g(x ∧ y) ≥ g(x) + g(y)

for any x, y ∈ X.

• This sounds like a tough condition to check, but it turns out to be equivalent to a simpler
one:

• Equivalent Definition. A function g : X → R is supermodular if and only if it has


increasing differences in xi and xj for every pair (i, j), holding the other variables fixed.

• This is awesome, because we already know that if g is twice differentiable,

this just means all its mixed partials ∂²g/∂xi∂xj ≥ 0, which is easy to check if we know g

• (We’ll get intuition for why pairwise increasing differences is good enough,
when we talk about the intuition for the upcoming result)

• We’ll also say g has increasing differences in (X, T ) if it has increasing differences in
(xi , tj ) for each i and j.

• So basically, the conditions we want come down to pairwise increasing differences –


increasing differences between any two of the choice variables,
and increasing differences between any choice variable and any parameter we’re considering

• This will ensure that all “feedback loops” and indirect effects reinforce the primary effects,
which will give us strong results
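• (The lattice definition is also easy to test by brute force on a finite grid — the example functions below are my own, not the lecture’s:)

```python
# Brute-force check of g(x∨y) + g(x∧y) >= g(x) + g(y) over all pairs on a grid.
from itertools import product

def is_supermodular(g, points):
    for x, y in product(points, repeat=2):
        jn = tuple(max(a, b) for a, b in zip(x, y))
        mt = tuple(min(a, b) for a, b in zip(x, y))
        if g(jn) + g(mt) < g(x) + g(y) - 1e-12:
            return False
    return True

pts = list(product(range(4), repeat=2))
assert is_supermodular(lambda v: v[0] * v[1], pts)       # x1*x2: cross-partial +1
assert not is_supermodular(lambda v: -v[0] * v[1], pts)  # -x1*x2: fails
```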

• So now, we know what it means to say x∗(t′) ≥ x∗(t);
we know what type of choice set X we want to allow;
and we have a condition on g that we can impose

• And that gives us the result:

Theorem (Topkis). Let X be a product set in R^m, T ⊆ R^n, g : X × T → R, and

x∗(t) = arg max_{x∈X} g(x, t)

If...

1. g is supermodular in X, and

2. g has increasing differences in X and T ,

then x∗ (t) is increasing in t.

• That is, if x ∈ x∗(t) and x′ ∈ x∗(t′), with t′ > t, then x ∨ x′ ∈ x∗(t′), and x ∧ x′ ∈ x∗(t)

• Corollary. If x∗ is single-valued, this means x∗(t) is weakly increasing in every dimension

(That is, if t′ > t, then x∗(t′) ≥ x∗(t), meaning x∗_i(t′) ≥ x∗_i(t) for every dimension i)

4.5 Example
• Before getting into the proof, an example will help clarify exactly what’s going on

• Let’s consider the two-input firm I mentioned earlier,


which uses capital k and labor ℓ as inputs, and solves

max_{k,ℓ≥0} {pf(k, ℓ) − wℓ − rk}

• For simplicity, let’s suppose that f is twice differentiable, and that ∂²f/∂k∂ℓ ≥ 0

• Then g is differentiable, and ∂²g/∂k∂ℓ = p·∂²f/∂k∂ℓ ≥ 0,
so g is supermodular in the choice variables X = (k, ℓ)

• What about increasing differences in (X, T )?

• In differentiable cases, I find the easiest way to check is to take first derivatives of g with
respect to each choice variable, and check whether they’re monotonic in each parameter

• In this case, we’re best off thinking of the parameters as T = (p, −w, −r)

• ∂g/∂k = p·∂f/∂k − r is increasing in p and −r,
and since it doesn’t depend on w, we’re free to say it’s (weakly) increasing in −w

• And ∂g/∂ℓ = p·∂f/∂ℓ − w is increasing in p and −w,
and (weakly) increasing in −r as well

• So g has increasing differences in (X, T ), where X = (k, ℓ) and T = (p, −w, −r)

• Since g is supermodular in X and has increasing differences in (X, T ), we can apply Topkis’
Theorem

• In this case, if we assume that the firm’s problem has a unique solution (so we don’t have to
worry about stating things in terms of sets above other sets),
we simply get that (k∗, ℓ∗) is increasing in p and decreasing in w and r

• So if the price of output goes up,


the firm demands more labor and more capital,
and therefore produces more output (as we already knew);

• and if either w or r goes up,


the firm demands less capital and less labor,
and therefore produces less output

• Why does this make sense?

• Suppose the price of labor, w, goes up

• The obvious first response is that the firm reduces the labor input ℓ

• But since ∂f/∂k is increasing in ℓ,
when ℓ goes down, that reduces the marginal product of capital;
so the firm reduces its use of capital k

• But since ∂f/∂ℓ is increasing in k,
when the firm reduces its use of capital,
that reduces the marginal product of labor,
so the firm reduces its labor demand again

• And so on, and so on

• Supermodularity basically ensures that all the feedback loops go in the same direction –
every change the firm wants to make, reinforces the other changes

• Here, we assumed f was differentiable and the solution was single-valued,


but we could easily drop these assumptions;
the only really substantive assumption we needed was that f is supermodular,
i.e., more capital makes labor more productive and vice versa,
i.e., capital and labor are complements in production!
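• (To see Topkis “in action” here, we can grid-search a concrete complements case. The Cobb-Douglas f(k, ℓ) = k^0.3·ℓ^0.3 and all parameter values are my own illustrative choices:)

```python
# Two-input firm with complementary inputs (positive cross-partial),
# solved by brute force; both inputs should rise with p and fall with w.
import numpy as np
from itertools import product

grid = np.linspace(0.01, 5.0, 150)

def opt(p, w, r):
    best, arg = -np.inf, None
    for k, l in product(grid, repeat=2):
        profit = p * k**0.3 * l**0.3 - w * l - r * k
        if profit > best:
            best, arg = profit, (k, l)
    return arg

k1, l1 = opt(p=2.0, w=1.0, r=1.0)
k2, l2 = opt(p=3.0, w=1.0, r=1.0)   # higher output price
k3, l3 = opt(p=2.0, w=2.0, r=1.0)   # higher wage
assert k2 >= k1 and l2 >= l1        # p up: more of both inputs
assert k3 <= k1 and l3 <= l1        # w up: less of both inputs
```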

• Next, we’ll consider the same example, but when ∂²f/∂k∂ℓ < 0

• Topkis’ Theorem only applies when the objective function is supermodular,

and we know ∂²g/∂k∂ℓ = p·∂²f/∂k∂ℓ, so what do we do when this is negative?

• Well, when we had an interaction between a choice variable and a parameter that went in the
“wrong direction,”
we just flipped the sign of the parameter

• Here, we can do the same thing

• Define a new variable k̂ = −k, and think of the firm’s problem as

max_{ℓ≥0, k̂≤0} {pf(−k̂, ℓ) − wℓ + rk̂}

• Now,

∂²g/∂ℓ∂k̂ = −p·∂²f/∂ℓ∂k ≥ 0

so if we think of the firm as choosing (ℓ, k̂) = (ℓ, −k), the problem is now supermodular

• What about increasing differences in X and T ?

• Well, ∂g/∂ℓ = p·∂f/∂ℓ(−k̂, ℓ) − w is increasing in p and −w

• And ∂g/∂k̂ = −p·∂f/∂k + r is decreasing in p and increasing in r

• So if we need to flip the sign of k to make the problem supermodular,


we can’t get increasing differences in both choice variables and p

• So what can we do?

• Ignore p!

• If we still think of the firm as choosing ℓ and −k,

the objective function has increasing differences in X = (ℓ, −k) and T = (−w, r),
and we just won’t be able to say anything about p

• So here, Topkis’ Theorem says that if r goes up,

then ℓ and −k both go up;
or if the price of capital goes up,
the firm uses less capital but more labor
• (Since ∂f/∂ℓ is decreasing in k,
when the price of capital goes up and the firm uses less capital,
that increases the productivity of labor, so the firm hires more;
since ∂f/∂k is decreasing in ℓ,
hiring more labor depresses the productivity of capital,
so the firm demands even less capital,
and so on)

• And if the wage w goes up, then −w goes down,

so ℓ and −k go down,
so the firm uses less labor and more capital

• Of course, ∂²f/∂k∂ℓ < 0 is basically saying that when capital goes up,
the marginal product of labor goes down –
or capital and labor are substitutes!

• So when capital gets more expensive,


the firm uses less capital,
but that makes labor more productive,
so then the firm uses more labor!

• What about p?
Well, when p goes up, we know the firm will want to produce more,
but without knowing more about f ,
we can’t tell whether it will use more capital and less labor,
more labor and less capital,
or more of both
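• (Again we can check this numerically; the substitutes production function f(k, ℓ) = √(k + ℓ) below is my own illustrative choice — its cross-partial is negative:)

```python
# Substitutes case: when capital gets pricier, capital use falls but labor RISES.
import numpy as np
from itertools import product

grid = np.linspace(0.0, 4.0, 201)

def opt(p, w, r):
    best, arg = -np.inf, None
    for k, l in product(grid, repeat=2):
        profit = p * np.sqrt(k + l) - w * l - r * k
        if profit > best:
            best, arg = profit, (k, l)
    return arg

k1, l1 = opt(p=2.0, w=1.5, r=1.0)   # capital cheaper than labor
k2, l2 = opt(p=2.0, w=1.5, r=2.0)   # capital now pricier than labor
assert k2 <= k1 and l2 >= l1        # r up: less capital, more labor
```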

5 Why Is Topkis’ Theorem True?
• I’m not going to give the formal proof,
but a “half proof” will give good intuition for why it holds

• The result is that as a parameter t increases, the set of solutions to a problem increases,
provided the problem is supermodular and has ID in the choice variables and t

• So let’s consider a problem in two dimensions,

max_{x,y} g(x, y, t)

where g is supermodular in X = (x, y) and has increasing differences in (x, y) and t

• And let’s consider t ∈ {0, 1}

• What Topkis says here is that if a point (x, y) is optimal at t = 0,

and a point (x′, y′) is optimal at t = 1,
then the point (max{x, x′}, max{y, y′}) must also be optimal at t = 1,
and the point (min{x, x′}, min{y, y′}) must also be optimal at t = 0

• There are a few possible cases; throughout, label the four grid points a = (3, 10), b = (5, 10),
c = (3, 20), and d = (5, 20), so that a = b ∧ c and d = b ∨ c

• Case 1: a is optimal at t = 0 and d is optimal at t = 1

• In that case, we need to show that a ∨ d = d is optimal at t = 1,


and a ∧ d = a is optimal at t = 0,
so there’s nothing to show – we’re done

• Case 2: b is optimal at t = 0 and c is optimal at t = 1

• Here, we need to show that d = b ∨ c is also optimal at t = 1,


and a = b ∧ c is also optimal at t = 0

• Let’s do the first

• If b = (5, 10) is optimal at t = 0, this means b is at least as good as a at t = 0, or

g(5, 10, 0) ≥ g(3, 10, 0)

or
g(5, 10, 0) − g(3, 10, 0) ≥ 0

• But supermodularity implies g(5, y, t) − g(3, y, t) is increasing in y, giving

g(5, 20, 0) − g(3, 20, 0) ≥ g(5, 10, 0) − g(3, 10, 0) ≥ 0

and increasing differences implies g(5, y, t) − g(3, y, t) is increasing in t, giving

g(5, 20, 1) − g(3, 20, 1) ≥ g(5, 20, 0) − g(3, 20, 0) ≥ 0

• But we started with c = (3, 20) being optimal at t = 1; so if d = (5, 20) is at least as good,
then d must also be optimal at t = 1, which is what we wanted to show

• (We could use analogous steps to show a optimal at t = 0)

• By symmetry, it should be obvious that if c were optimal at t = 0 and b optimal at t = 1,


we’d get the same result; so all that’s left is...

• Case 3: a is optimal at t = 1 and d is optimal at t = 0

• In that case, we need to show d is also optimal at t = 1, and a at t = 0

• Again, we’ll show the first

• If d is optimal at t = 0, it’s at least as good as a, so

g(5, 20, 0) − g(3, 10, 0) ≥ 0

• Let’s add and subtract g(3, 20, 0), giving

[g(5, 20, 0) − g(3, 20, 0)] + [g(3, 20, 0) − g(3, 10, 0)] ≥ 0

• By increasing differences, g(5, y, t) − g(3, y, t) is increasing in t,


and g(x, 20, t) − g(x, 10, t) is increasing in t, so

[g(5, 20, 1) − g(3, 20, 1)] + [g(3, 20, 1) − g(3, 10, 1)]
≥ [g(5, 20, 0) − g(3, 20, 0)] + [g(3, 20, 0) − g(3, 10, 0)]
≥ 0

or
g(5, 20, 1) − g(3, 10, 1) ≥ 0

• But since a = (3, 10) is optimal at t = 1, and d = (5, 20) is at least as good,
d is also optimal at t = 1, which is what we wanted to show

• (Again, analogous arguments show a optimal at t = 0)

• So that’s the result

• if (x, y) is optimal at t and (x′, y′) is optimal at higher t′,

then their join is optimal at t′ and their meet is optimal at t

• Or, if the problem has a unique solution at each value of t,

then that solution is weakly increasing in the parameter, in every dimension

6 Proof (skipped – included in notes for completeness)
• So suppose x ∈ x∗(t), and x′ ∈ x∗(t′), with t′ > t

• We want to show that if g is supermodular in X and has increasing differences in (X, t),
then x ∧ x′ ∈ x∗(t) and x ∨ x′ ∈ x∗(t′).

• Now, if x ∈ x∗ (t), then


g(x, t) ≥ g(y, t)

for any y ∈ X; in particular, plugging in y = x ∧ x′,

g(x, t) − g(x ∧ x′, t) ≥ 0

• Recall that x ∧ x′ is the componentwise min of x and x′, so x ≥ x ∧ x′

• And if g has increasing differences in t and each element of x, then

g(x, t′) − g(x ∧ x′, t′) ≥ g(x, t) − g(x ∧ x′, t) ≥ 0

• Of course, if x′ ∈ x∗(t′), then g(x′, t′) ≥ g(y, t′) for any y, so in particular

g(x′, t′) − g(x ∨ x′, t′) ≥ 0

or
g(x, t′) − g(x ∧ x′, t′) ≥ 0 ≥ g(x ∨ x′, t′) − g(x′, t′)

• However, supermodularity tells us that

g(x ∧ x′, t′) + g(x ∨ x′, t′) ≥ g(x, t′) + g(x′, t′)

or that

g(x ∨ x′, t′) − g(x′, t′) ≥ g(x, t′) − g(x ∧ x′, t′) ≥ 0 ≥ g(x ∨ x′, t′) − g(x′, t′)

meaning all of these terms must be equal, and therefore g(x ∨ x′, t′) = g(x′, t′) = max_{x̂∈X} g(x̂, t′).

• That proves x ∨ x′ ∈ x∗(t′); the proof that x ∧ x′ ∈ x∗(t) is analogous

• (We would also want to prove that the meet/join definition of supermodularity is equivalent
to pairwise increasing differences,
which is mechanical – like with the “example” above,
it just means breaking up a difference into a bunch of component differences)
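• (As a final sanity check, we can verify the theorem’s conclusion by brute force on a toy problem of my own: g(x1, x2, t) = x1·x2 − (x1 − t)² − (x2 − t)² is supermodular in (x1, x2) and has increasing differences in t:)

```python
# Enumerate maximizer sets at two parameter values and verify that joins land
# in x*(t') and meets land in x*(t), as Topkis' Theorem asserts.
from itertools import product

def g(x, t):
    x1, x2 = x
    return x1 * x2 - (x1 - t)**2 - (x2 - t)**2

def argmax_set(t, points):
    m = max(g(x, t) for x in points)
    return {x for x in points if abs(g(x, t) - m) < 1e-9}

pts = list(product(range(6), repeat=2))
A, B = argmax_set(3, pts), argmax_set(1, pts)   # t' = 3 > t = 1
for a, b in product(A, B):
    jn = tuple(max(u, v) for u, v in zip(a, b))
    mt = tuple(min(u, v) for u, v in zip(a, b))
    assert jn in A and mt in B
```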
