3602 Final Practice Questions

The document is a practice exam covering topics in the Theory of Computation, including Regular Languages, Context-Free Languages, Computability Theory, and Complexity Theory. It consists of various problems such as designing DFAs, converting NFAs to DFAs, proving languages are not regular or context-free, and discussing closure properties and decidability. Solutions to the problems are provided, demonstrating the necessary concepts and proofs in each area.

Uploaded by Darian Jagdeo

Theory of Computation Practice Exam

Topic 1: Regular Languages


1. DFA Design: Consider the language,

L = {w ∈ {0, 1}∗ | w contains an odd number of 0s and ends with a 1}.

Design a DFA that accepts this language.

The DFA, as a transition table (→ marks the start state, * the accepting state):

              0     1
      → q0    q1    q2
        q1    q0    q3
        q2    q1    q2
      * q3    q0    q3

2. NFA to DFA Conversion: Convert the following NFA into an equivalent DFA using
the subset construction method. The alphabet is Σ = {a, b}.
Original NFA (start state q0, accepting state q2):

              a            b
      → q0    {q0, q1}     {q0}
        q1    ∅            {q2}
      * q2    ∅            ∅

Resulting DFA (start state A, accepting state C):

                         a    b
      → A = {q0}         B    A
        B = {q0, q1}     B    C
      * C = {q0, q2}     B    A

3. Regular Expressions: Consider the language L over Σ = {a, b} where every ‘a’ must
be immediately followed by a ‘b’. Write a regular expression for L.

4. Pumping Lemma: Use the Pumping Lemma for Regular Languages to prove that
the following language is not regular: L = {a^k b^{k+1} | k ≥ 0}.

5. Closure Properties: Are the regular languages closed under the difference opera-
tion? Justify your answer.
(That is, if L1 and L2 are regular, is L1 \ L2 also guaranteed to be regular?)

6. Closure Properties (Proofs): Prove by construction that if L1 and L2 are regular
languages, then so are their union, concatenation, and intersection.

Topic 2: Context-Free Languages
1. CFG Design: Provide a Context-Free Grammar (CFG) for the language
L = {a^i b^j c^k | i, j, k ≥ 0 and i + j = k}.

2. PDA Design: Design a Pushdown Automaton (PDA) that recognizes the language
of even-length palindromes over Σ = {0, 1}, i.e., L = {ww^R | w ∈ {0, 1}*}.

      qstart --ε, ε → $--> qloop
      qloop  --0, ε → 0--> qloop      (push 0)
      qloop  --1, ε → 1--> qloop      (push 1)
      qloop  --ε, ε → ε--> qpop       (guess the midpoint)
      qpop   --0, 0 → ε--> qpop       (pop and match 0)
      qpop   --1, 1 → ε--> qpop       (pop and match 1)
      qpop   --ε, $ → ε--> qacc       (empty stack; accept)

3. Pumping Lemma: Use the Pumping Lemma for Context-Free Languages to prove
that the language L = {a^i b^j c^k | 0 ≤ i ≤ j ≤ k} is not context-free.

4. Ambiguity: Show that the following grammar G is ambiguous by finding a string
that has two different leftmost derivations.

S → S + S | S ∗ S | a

5. Closure Properties: Prove that Context-Free Languages are not closed under inter-
section.
(Hint: Consider L1 = {an bn cm | n, m ≥ 0} and L2 = {am bn cn | n, m ≥ 0}).

Topic 3: Computability Theory
1. Turing Machine Design: Provide an implementation-level description of a Turing
Machine that decides the language L = {a^n b^{2n} | n ≥ 0}.

2. Decidability vs. Recognizability: Define the terms Turing-decidable and Turing-
recognizable. Give an example of a language that is recognizable but not decidable.

3. Undecidability Proof: Let ALL_TM = {⟨M⟩ | M is a TM and L(M) = Σ*}. Prove
that ALL_TM is undecidable by providing a reduction from A_TM.

4. TM Variants: Explain why a Turing Machine with a doubly infinite tape is equivalent
in power to a standard Turing Machine with a tape that is infinite in only one direction.

5. Rice's Theorem: State Rice's Theorem. Then, use it to prove that the language
L_PAL = {⟨M⟩ | L(M) contains only palindromes} is undecidable.

6. Diagonalization Proof: Use a direct diagonalization argument to prove that the
language A_TM = {⟨M, w⟩ | M is a TM that accepts string w} is undecidable.

Topic 4: Complexity Theory
1. P vs. NP: Explain what it means for a decision problem to be in class P and class
NP. What is a polynomial-time verifier, and what is its role in the definition of NP?

2. NP-Completeness Proof: The SUBSET-SUM problem is defined as:

• Input: A set of integers S and a target integer t.
• Question: Is there a non-empty subset of S whose elements sum to t?

Prove that SUBSET-SUM is NP-complete. You may assume it is in NP; focus on
the reduction.

3. co-NP: Define the complexity class co-NP. Give an example of a problem in co-NP
and explain why the question "Is P = co-NP?" is equivalent to "Is P = NP?".

4. Polynomial-Time Reduction: Show that CLIQUE is polynomial-time reducible to
VERTEX-COVER (CLIQUE ≤_P VERTEX-COVER). Explain your reduction clearly.

5. 2-SAT vs. 3-SAT: The 2-SAT problem is a variant of SAT where every clause
has exactly two literals. Briefly explain why 2-SAT is in P, contrasting this with the
NP-completeness of 3-SAT.

Solutions
Topic 1: Regular Languages
1. The provided DFA is correct. The states represent the following conditions:

• q0 : An even number of 0s have been read, and the string does not end with a 1
(initial state).
• q1 : An odd number of 0s have been read, and the string does not end with a 1.
• q2 : An even number of 0s have been read, and the string ends with a 1.
• q3 : An odd number of 0s have been read, and the string ends with a 1 (accepting
state).

The transitions correctly update these conditions based on the input symbol. For
example, from q1 (odd 0s, not ending in 1), reading a ’0’ leads to q0 (even 0s, not
ending in 1), and reading a ’1’ leads to q3 (odd 0s, ending in 1).
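These transitions can be checked mechanically. A minimal Python sketch (an illustrative encoding, not part of the exam) runs the transition table directly:

```python
# The DFA above as a lookup table; q3 (odd 0s, ends in 1) is accepting.
DELTA = {
    ("q0", "0"): "q1", ("q0", "1"): "q2",
    ("q1", "0"): "q0", ("q1", "1"): "q3",
    ("q2", "0"): "q1", ("q2", "1"): "q2",
    ("q3", "0"): "q0", ("q3", "1"): "q3",
}

def accepts(w: str) -> bool:
    state = "q0"
    for ch in w:
        state = DELTA[(state, ch)]
    return state == "q3"
```

Comparing `accepts` against the defining property (odd number of 0s, ends with 1) over all short strings is a quick correctness check.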

2. The provided DFA is the correct result of the subset construction.

• Initial State: The start state of the NFA is q0 , so the start state of the DFA is
A = {q0 }.
• Transitions from A (= {q0 }):
– δ(A, a) = δ(q0 , a) = {q0 , q1 }. This is a new state, B = {q0 , q1 }.
– δ(A, b) = δ(q0 , b) = {q0 } = A.
• Transitions from B (= {q0 , q1 }):
– δ(B, a) = δ(q0 , a) ∪ δ(q1 , a) = {q0 , q1 } ∪ ∅ = {q0 , q1 } = B.
– δ(B, b) = δ(q0 , b) ∪ δ(q1 , b) = {q0 } ∪ {q2 } = {q0 , q2 }. This is a new state,
C = {q0 , q2 }.
• Transitions from C (= {q0 , q2 }):
– δ(C, a) = δ(q0 , a) ∪ δ(q2 , a) = {q0 , q1 } ∪ ∅ = {q0 , q1 } = B.
– δ(C, b) = δ(q0 , b) ∪ δ(q2 , b) = {q0 } ∪ ∅ = {q0 } = A.
• Accepting States: Any state containing an NFA accepting state (q2 ) is an
accepting state. Thus, C = {q0 , q2 } is the only accepting state.
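The same computation can be automated. A Python sketch of the subset construction, with the NFA's moves hard-coded from the solution (helper names are illustrative):

```python
# The NFA's transition function from the solution; there are no ε-moves,
# and missing entries denote the empty set.
NFA = {
    ("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}

def subset_construction(start, accept, alphabet):
    # Standard worklist algorithm: explore only reachable subsets.
    start_set = frozenset([start])
    states, worklist, delta = {start_set}, [start_set], {}
    while worklist:
        S = worklist.pop()
        for sym in alphabet:
            T = frozenset(r for q in S for r in NFA.get((q, sym), ()))
            delta[(S, sym)] = T
            if T not in states:
                states.add(T)
                worklist.append(T)
    accepting = {S for S in states if accept in S}
    return states, delta, accepting
```

Running it reproduces the three DFA states A, B, C found by hand above.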

3. A string is in this language if it is composed of blocks of 'b' and 'ab'. For instance, b,
ab, bab, and bbab are all valid. The empty string is also valid. This can be captured
by the regular expression:
(b ∪ ab)*
or equivalently, (b|ab)*.
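The expression can be checked against the defining property by brute force over short strings; a Python sketch (Python's re module writes the union ∪ as |):

```python
import re

# fullmatch anchors the pattern to the whole input string.
PATTERN = re.compile(r"(?:b|ab)*")

def every_a_followed_by_b(w: str) -> bool:
    # The defining property of L, checked directly for comparison.
    return all(w[i + 1 : i + 2] == "b"
               for i in range(len(w)) if w[i] == "a")
```

Enumerating all strings over {a, b} up to a small length and comparing the two predicates confirms the expression.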

4. Proof by Pumping Lemma:

(a) Assume L = {a^k b^{k+1} | k ≥ 0} is regular.
(b) Let p be the pumping length given by the lemma.
(c) Choose the string s = a^p b^{p+1}. We have s ∈ L and |s| = 2p + 1 ≥ p.
(d) By the lemma, s can be split into xyz such that |xy| ≤ p and |y| > 0.
(e) Since |xy| ≤ p and s begins with p a's, xy must consist only of a's. So y = a^m
for some m ≥ 1.
(f) The lemma states xy^i z ∈ L for all i ≥ 0. Let's pump up with i = 2.
(g) The new string is xy^2 z = a^{p+m} b^{p+1}.
(h) For this string to be in L, the number of a's must be one less than the number of
b's. The number of b's is p + 1, so we would need p a's. But our string has p + m
a's, and since m ≥ 1, p + m ≠ p.
(i) This is a contradiction. Therefore, the assumption that L is regular is false.

5. Yes, the regular languages are closed under the difference operation.
Justification: The set difference L1 \ L2 is equal to L1 ∩ L2^c, where L2^c denotes
the complement of L2.

• If L2 is a regular language, its complement L2^c is also regular.
• If L1 and L2^c are regular languages, their intersection L1 ∩ L2^c is also regular.

Since L1 \ L2 can be constructed using operations (complementation, intersection)
under which regular languages are closed, it is also regular.

6. Let L1 and L2 be regular languages, recognized by finite automata M1 = (Q1, Σ, δ1, q1, F1)
and M2 = (Q2, Σ, δ2, q2, F2) respectively. (State-diagram sketches are the clearest way
to present these constructions; here they are described symbolically.)
Union (L1 ∪ L2): We construct an NFA N that recognizes L1 ∪ L2.

• Create a new start state q0.
• The states of N are Q1 ∪ Q2 ∪ {q0}.
• Add ε-transitions from q0 to the start states of M1 and M2.
• The accepting states of N are F1 ∪ F2.

N accepts an input if either M1 or M2 accepts it. Since there is an NFA for the
union, the language is regular.
Concatenation (L1 ◦ L2): We construct an NFA N that recognizes L1 ◦ L2.

• The start state of N is the start state of M1, q1.
• The states of N are Q1 ∪ Q2.
• The accepting states of N are the accepting states of M2, F2.
• For every state in F1 (the accepting states of M1), add an ε-transition to the start
state of M2, q2.

This machine accepts a string w1 w2 where w1 is accepted by M1 and w2 is accepted
by M2 . Since there is an NFA for the concatenation, the language is regular.
Intersection (L1 ∩ L2 ): We construct a DFA M that recognizes L1 ∩ L2 using a
product construction (assuming M1 and M2 are DFAs).

• The states of M are Q = Q1 × Q2.
• The start state is q = (q1, q2).
• The transition function is δ((r1, r2), a) = (δ1(r1, a), δ2(r2, a)). The machine simu-
lates M1 and M2 in parallel.
• The accepting states are F = F1 × F2. A string is accepted only if it ends in an
accepting state in both machines.

Since we have constructed a DFA for the intersection, the language is regular.
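The product construction is short to code. A Python sketch, with two small illustrative DFAs (even number of a's; ends in b) that are assumptions of this sketch, not from the exam:

```python
# Product construction for intersection.  Each machine is a total DFA
# given as (states, start, delta, accepting).
def product_dfa(d1, d2, alphabet):
    Q1, s1, delta1, F1 = d1
    Q2, s2, delta2, F2 = d2
    # Simulate both machines in parallel on paired states.
    delta = {((r1, r2), a): (delta1[(r1, a)], delta2[(r2, a)])
             for r1 in Q1 for r2 in Q2 for a in alphabet}
    accepting = {(r1, r2) for r1 in F1 for r2 in F2}
    return ((s1, s2), delta, accepting)

def run(dfa, w):
    state, delta, accepting = dfa
    for ch in w:
        state = delta[(state, ch)]
    return state in accepting

# M1: even number of 'a's.  M2: strings ending in 'b'.
M1 = ({"E", "O"}, "E",
      {("E", "a"): "O", ("E", "b"): "E",
       ("O", "a"): "E", ("O", "b"): "O"}, {"E"})
M2 = ({"N", "Y"}, "N",
      {("N", "a"): "N", ("N", "b"): "Y",
       ("Y", "a"): "N", ("Y", "b"): "Y"}, {"Y"})
BOTH = product_dfa(M1, M2, "ab")
```

`BOTH` then accepts exactly the strings accepted by both machines.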

Topic 2: Context-Free Languages
1. The grammar must generate a number of c’s equal to the number of a’s plus b’s. We
can achieve this by having one rule generate an ‘a‘ at the front and a ‘c‘ at the back,
and another set of rules to generate a ‘b‘ in the middle and a ‘c‘ at the back.

S → aSc | B
B → bBc | ε

The rule S → aSc generates a^i and a matching c^i. When it stops, it transitions to
B. The rule B → bBc generates b^j and a matching c^j. The final string has the form
a^i (b^j c^j) c^i = a^i b^j c^{i+j}.
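One way to gain confidence in the grammar is to enumerate its derivations. A Python sketch (helper names are illustrative) that expands sentential forms breadth-first and compares the result with {a^i b^j c^{i+j}} up to a length bound:

```python
def generate(max_len):
    # Breadth-first expansion of sentential forms of S -> aSc | B,
    # B -> bBc | ε, collecting terminal strings of length <= max_len.
    # The terminal count of a form never decreases, so forms that are
    # already too long can be pruned safely.
    rules = {"S": ["aSc", "B"], "B": ["bBc", ""]}
    seen, out, frontier = set(), set(), {"S"}
    while frontier:
        nxt = set()
        for form in frontier:
            i = next((k for k, ch in enumerate(form) if ch.isupper()), None)
            if i is None:                       # all terminals: a candidate
                if len(form) <= max_len:
                    out.add(form)
                continue
            for rhs in rules[form[i]]:
                new = form[:i] + rhs + form[i + 1:]
                if sum(ch.islower() for ch in new) <= max_len and new not in seen:
                    seen.add(new)
                    nxt.add(new)
        frontier = nxt
    return out
```

The generated set matches a^i b^j c^{i+j} exactly for every length within the bound.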

2. The provided PDA is correct. It operates as follows:

• Phase 1 (Push): The machine starts in qstart, pushes a bottom-of-stack marker
'$', and moves to qloop. In qloop, it reads the first half of the string, w, and pushes
each symbol onto the stack.
• Phase 2 (Midpoint): It non-deterministically decides it has reached the middle
of the string via the ε-transition to qpop . This is the ”guess” that it has finished
reading w and is about to read wR .
• Phase 3 (Pop and Match): In qpop , it reads the second half of the string, wR .
For each input symbol, it must match the top of the stack. If a match occurs, the
stack symbol is popped.
• Phase 4 (Accept): If the entire input is consumed and the stack contains only
the ‘$’ marker, it pops the ‘$’ and moves to the final state qacc to accept.
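The nondeterministic midpoint guess can be simulated deterministically by trying every split point. A Python sketch mirroring the four phases:

```python
def accepts_even_palindrome(w: str) -> bool:
    # Try every midpoint, standing in for the PDA's nondeterministic guess:
    # push the first half, then pop while matching the second half.
    for mid in range(len(w) + 1):
        stack = []
        for ch in w[:mid]:              # phase 1: push
            stack.append(ch)
        ok = True
        for ch in w[mid:]:              # phase 3: pop and match
            if not stack or stack.pop() != ch:
                ok = False
                break
        if ok and not stack:            # phase 4: input consumed, stack empty
            return True
    return False
```

The input is accepted exactly when some split makes the second half the reverse of the first, i.e. w = x x^R for some x.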

3. Proof by Pumping Lemma for CFLs:

(a) Assume L = {a^i b^j c^k | 0 ≤ i ≤ j ≤ k} is a CFL.
(b) Let p be the pumping length.
(c) Choose the string s = a^p b^p c^p. We have s ∈ L and |s| = 3p ≥ p.
(d) The lemma says s can be split into uvwxy where |vwx| ≤ p, |vx| > 0, and
uv^i wx^i y ∈ L for all i ≥ 0.
(e) Since |vwx| ≤ p, the substring vwx cannot contain a's, b's, and c's; it contains
at most two different symbols.
(f) Case 1: vwx contains no c's. The substring consists of only a's and b's.
Pumping up (i = 2) gives s′ = uv^2 wx^2 y. This adds a's or b's (or both), but no
c's, so the number of c's remains p. Since |vx| > 0, either the count of a's or b's
increases. If the b-count increases, the new string has > p b's, violating the j ≤ k
condition. If only the a-count increases, the new string has > p a's, violating the
i ≤ j condition. In either case, s′ ∉ L.
(g) Case 2: vwx contains no a's. The substring consists of only b's and c's.
Pumping down (i = 0) gives s′ = uwy. This removes b's or c's (or both); the
number of a's remains p. Since |vx| > 0, either the count of b's or c's decreases.
If the b-count decreases, the new string has < p b's, violating the i ≤ j condition.
If only the c-count decreases, the new string has < p c's, violating the j ≤ k
condition. In either case, s′ ∉ L.
(h) We have found a contradiction in all cases. Thus, L is not a CFL.

4. The grammar G is ambiguous because the string a + a ∗ a has two different leftmost
derivations, corresponding to two different parse trees.

• Leftmost Derivation 1 (multiplication grouped deeper, i.e. a + (a ∗ a)):

S ⇒ S + S ⇒ a + S ⇒ a + S ∗ S ⇒ a + a ∗ S ⇒ a + a ∗ a

• Leftmost Derivation 2 (addition grouped deeper, i.e. (a + a) ∗ a):

S ⇒ S ∗ S ⇒ S + S ∗ S ⇒ a + S ∗ S ⇒ a + a ∗ S ⇒ a + a ∗ a

Since there is more than one leftmost derivation for the same string, the grammar is
ambiguous.
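The two derivations can also be found mechanically. A Python sketch (names are illustrative) that enumerates all leftmost derivations of a target string in this grammar, pruning by minimum possible yield:

```python
def leftmost_derivations(target: str):
    # Enumerate leftmost derivations of `target` in S -> S+S | S*S | a.
    # No rule shrinks a sentential form, so any form whose minimum yield
    # (every S replaced by 'a') exceeds the target can be pruned.
    rules = ["S+S", "S*S", "a"]
    results = []

    def expand(form, steps):
        if "S" not in form:
            if form == target:
                results.append(steps)
            return
        if len(form.replace("S", "a")) > len(target):
            return
        i = form.index("S")             # always expand the leftmost S
        for rhs in rules:
            expand(form[:i] + rhs + form[i + 1:], steps + [rhs])

    expand("S", [])
    return results
```

For the string a+a*a it finds the two derivations above, while unambiguous strings such as a+a admit only one.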

5. Proof by counterexample:

(a) Let L1 = {a^n b^n c^m | n, m ≥ 0}. L1 is a CFL; it can be generated by
S → XC, X → aXb | ε, C → cC | ε.
(b) Let L2 = {a^m b^n c^n | n, m ≥ 0}. L2 is a CFL; it can be generated by
S → AY, A → aA | ε, Y → bY c | ε.
(c) Consider the intersection L = L1 ∩ L2. A string in L must satisfy the properties
of both languages.
• From L1, the number of a's must equal the number of b's.
• From L2, the number of b's must equal the number of c's.
(d) Therefore, for a string to be in the intersection, the numbers of a's, b's, and c's
must all be equal. So L = {a^n b^n c^n | n ≥ 0}.
(e) The language {a^n b^n c^n | n ≥ 0} is a canonical example of a language that is
not context-free.
(f) Since the intersection of two CFLs resulted in a non-CFL, the class of Context-
Free Languages is not closed under intersection.

Topic 3: Computability Theory
1. TM Description for L = {a^n b^{2n} | n ≥ 0}: The TM systematically marks off one
'a' for every two 'b's.

(a) Scan the tape from left to right. If the tape is blank, accept (the n = 0 case). If
the first symbol is a 'b', reject.
(b) If the first symbol is an 'a', replace it with a marker 'X' and move right.
(c) Scan right past any 'a's and markers until the first 'b' is found. If no 'b' is found,
reject.
(d) Replace the 'b' with a marker 'Y' and continue scanning right.
(e) Find the next 'b'. If none is found, reject. Replace this second 'b' with a 'Y'.
(f) Move the head all the way back to the left, stopping at the first symbol after the
block of 'X's.
(g) Repeat the process from step (b).
(h) If, after marking off all 'a's, the scan for the next 'a' in step (b) finds a 'Y', all
'a's are gone. The machine must now verify that nothing unmarked remains.
(i) Scan right across the 'Y's. If any unmarked symbol ('a' or 'b') is found, reject.
If the scan reaches a blank, accept.
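An implementation-level description like this translates almost line-for-line into a simulation. A Python sketch, where an up-front a*b* shape check stands in for the final verification pass of steps (h)-(i):

```python
def decides_a_b2n(s: str) -> bool:
    # Simulate the marking procedure: cross off one 'a' (as 'X') for every
    # two 'b's (as 'Y').  The verification pass rejects any stray unmarked
    # symbol, which amounts to requiring the shape a*b*; that is checked
    # up front here ("ba" occurs iff an 'a' follows a 'b').
    if "ba" in s:
        return False
    tape = list(s)
    while True:
        i = next((k for k, ch in enumerate(tape) if ch in "ab"), None)
        if i is None:
            return True                 # everything marked; covers n = 0 too
        if tape[i] == "b":
            return False                # leftover 'b's with no 'a' to pair
        tape[i] = "X"
        for _ in range(2):              # mark the next two unmarked 'b's
            j = next((k for k, ch in enumerate(tape) if ch == "b"), None)
            if j is None:
                return False            # fewer than two 'b's for this 'a'
            tape[j] = "Y"
```

The simulation always halts, matching the claim that the machine is a decider.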

2. • Turing-recognizable: A language L is recognizable if there is a Turing Machine
M that accepts every string w ∈ L. For strings not in L, M may either reject or
loop forever.
• Turing-decidable: A language L is decidable if there is a Turing Machine M
that halts on every input: M accepts all strings in L and rejects all strings
not in L. A decider never loops.
• Example: The language A_TM = {⟨M, w⟩ | M is a TM that accepts string w} is
recognizable but not decidable. It is recognized by a universal TM that simulates
M on w. It is not decidable because of the halting problem: if M loops on w, the
simulation will also loop.

3. Proof by reduction from A_TM:

(a) Assume, for contradiction, that ALL_TM is decidable by a TM called H.
(b) We will use H to construct a decider for A_TM, which we know is impossible.
(c) Let our new decider for A_TM be D. On input ⟨M, w⟩, D does the following:
i. Construct a new TM, M_w, which takes an input x and behaves as follows:
• Ignore its input x.
• Run the original TM M on the fixed string w.
• If M accepts w, then M_w accepts x.
ii. Now, analyze the language of M_w:
• If M accepts w, then M_w accepts all possible inputs x. So L(M_w) = Σ*.
• If M does not accept w (it rejects or loops), then M_w never accepts any
input. So L(M_w) = ∅.
iii. Our decider D now runs the hypothetical decider H on ⟨M_w⟩.
iv. If H accepts ⟨M_w⟩, it means L(M_w) = Σ*, which implies M accepts w. So
D accepts.
v. If H rejects ⟨M_w⟩, it means L(M_w) ≠ Σ*, which implies M does not accept
w. So D rejects.
(d) We have constructed a decider for A_TM. This is a contradiction.
(e) Therefore, our assumption was false, and ALL_TM is undecidable.
4. A TM with a doubly infinite tape can be simulated by a standard TM with a singly
infinite tape.
Simulation Method: We can use a two-track tape on the standard TM.
• Imagine folding the doubly infinite tape at the start cell (cell 0).
• The top track of the standard TM’s tape will store the non-negative portion of
the doubly infinite tape (cells 0, 1, 2, ...).
• The bottom track will store the negative portion in reverse (cells -1, -2, -3, ...).
• A special symbol on the standard TM’s tape can mark the location of the ”fold”
(cell 0).
• The standard TM's transition function is extended to simulate the head move-
ment. Moving right on the non-negative side is a simple right move. Moving left
from cell 0 of the doubly infinite tape is simulated by moving to the first cell of
the bottom track and reversing the direction logic (a logical "left" becomes a
physical "right" on the bottom track).
Since a simulation is possible, the models are equivalent in power.
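The fold amounts to a tiny address translation. A Python sketch of the cell-index mapping (function names are illustrative):

```python
def fold(z: int):
    # Fold the doubly infinite tape at cell 0: non-negative cells land on
    # the top track, negative cells (in reverse order) on the bottom track.
    return ("top", z) if z >= 0 else ("bottom", -z - 1)

def unfold(track: str, pos: int) -> int:
    # Inverse mapping, back to a doubly-infinite-tape cell index.
    return pos if track == "top" else -pos - 1
```

Since the mapping is a bijection, every head position on the doubly infinite tape corresponds to exactly one track cell, which is what makes the simulation work.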
5. • Rice’s Theorem: Any non-trivial property of the languages recognized by Tur-
ing machines is undecidable. A property is “non-trivial” if there exists at least
one TM whose language has the property and at least one whose language does
not. A property is about the language, not the TM itself, meaning it must hold
for all TMs recognizing the same language.
• Proof for L_PAL: We want to prove the undecidability of L_PAL = {⟨M⟩ |
L(M) contains only palindromes}.
(a) Is it a language property? Yes. Whether a language contains only palin-
dromes depends only on the set of strings in the language (L(M )), not the
machine’s internal details.
(b) Is it non-trivial?
– Yes, there exists a TM for a language with the property. E.g., a TM
recognizing L = {101, aba}.
– Yes, there exists a TM for a language without the property. E.g., a TM
recognizing L = {10}, which is not a palindrome.

(c) Since the property is a non-trivial language property, by Rice's Theorem,
L_PAL is undecidable.

6. Proof of A_TM's Undecidability by Diagonalization:

(a) Assume, for contradiction, that A_TM is decidable. This means there exists a
Turing Machine H (a decider) such that:

    H(⟨M, w⟩) = accept, if M accepts w;
                reject, if M does not accept w.

(b) Using H as a subroutine, we construct a new Turing Machine D that takes as
input the encoding of a single TM, ⟨M⟩.
(c) D works as follows on input ⟨M ⟩:
i. It runs H on the input ⟨M, ⟨M ⟩⟩. This is the diagonalization step, where M
is fed its own description as input.
ii. D then does the opposite of H’s output. If H accepts, D rejects. If H rejects,
D accepts.
(d) In summary, the behavior of D is:

    D(⟨M⟩) = reject, if H accepts ⟨M, ⟨M⟩⟩;
             accept, if H rejects ⟨M, ⟨M⟩⟩.

(e) Now, consider what happens when we run D on its own description, ⟨D⟩. We
substitute M with D in the definition above:

    D(⟨D⟩) = reject, if D accepts ⟨D⟩;
             accept, if D does not accept ⟨D⟩.

(f) This statement is a logical paradox.

• If D accepts ⟨D⟩, the definition states that it must reject ⟨D⟩. Contradiction.
• If D rejects ⟨D⟩ (i.e., does not accept), the definition states that it must
accept ⟨D⟩. Contradiction.
(g) Since the existence of the machine D leads to a contradiction, our initial as-
sumption that a decider H for A_TM exists must be false. Therefore, A_TM is
undecidable.

Topic 4: Complexity Theory
1. • Class P: A decision problem is in P if there is a deterministic algorithm (e.g., a
deterministic Turing machine) that solves it in polynomial time with respect to
the input size. These are problems considered ”efficiently solvable.”
• Class NP: A decision problem is in NP if a given solution (”certificate” or ”wit-
ness”) can be verified as correct in polynomial time by a deterministic algorithm.
• Polynomial-time verifier: A verifier is an algorithm V that takes a problem
instance x and a certificate c. For a problem to be in NP, there must be a
polynomial-time verifier V such that for every "yes" instance x there is a certifi-
cate c with V(x, c) = "yes", and for every "no" instance x no such certificate
exists. The verifier underlies the common "guess and check" characterization of
NP.

2. To prove SUBSET-SUM is NP-complete, we reduce a known NP-complete problem,
3-SAT, to it.
Reduction: 3-SAT ≤_P SUBSET-SUM

(a) Given a 3-SAT formula ϕ with l variables (x1 , . . . , xl ) and k clauses (C1 , . . . , Ck ).
(b) We construct a set of integers S and a target t. The numbers are large and
have l + k digits, preventing carries in the sum. The first l digits correspond to
variables, the last k to clauses.
(c) Numbers for variables: For each variable xi , create two numbers, yi and zi .
• yi (for xi =true): A ‘1’ in digit i. For each clause Cj containing xi , a ‘1’ in
digit l + j.
• zi (for xi =false): A ‘1’ in digit i. For each clause Cj containing ¬xi , a ‘1’ in
digit l + j.
(d) Slack numbers for clauses: For each clause Cj , create two numbers: sj with a
‘1’ in digit l + j, and s′j with a ‘2’ in digit l + j.
(e) Target t: A ‘1’ in each of the first l digits. A ‘4’ in each of the last k digits.
(f) Correctness: A satisfying assignment for ϕ corresponds to a subset of S summing
to t.
• To get a ‘1’ in each variable digit, we must pick exactly one of yi or zi for
each variable, which mirrors a truth assignment.
• For each clause Cj , it is satisfied, so 1 to 3 chosen variable numbers will
contribute a ‘1’ to the l + j digit. The target for this digit is ‘4’. We can
always reach ‘4’ by adding the appropriate slack variables (sj and/or s′j ).
(g) This reduction can be done in polynomial time. Since SUBSET-SUM is in NP
and is NP-hard, it is NP-complete.
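The digit scheme can be made concrete. A Python sketch (clause encoding as lists of signed integers is an assumption of this sketch) that builds the instance and checks tiny cases by brute force:

```python
from itertools import combinations

def subset_sum_instance(num_vars, clauses):
    # Build the numbers in base 10: one digit per variable (positions
    # 0..l-1), one per clause (positions l..l+k-1).  A clause digit sums
    # to at most 3 + 1 + 2 = 6, so no carries occur.  A clause like
    # (x1 v ¬x2 v x3) is written [1, -2, 3].
    l, k = num_vars, len(clauses)

    def num(digits):
        return sum(v * 10 ** (l + k - 1 - p) for p, v in digits.items())

    S = []
    for i in range(1, l + 1):
        for lit in (i, -i):                     # y_i (true) and z_i (false)
            digits = {i - 1: 1}
            for j, clause in enumerate(clauses):
                if lit in clause:
                    digits[l + j] = 1
            S.append(num(digits))
    for j in range(k):                          # slack numbers s_j, s'_j
        S.append(num({l + j: 1}))
        S.append(num({l + j: 2}))
    target = {p: 1 for p in range(l)}
    target.update({l + j: 4 for j in range(k)})
    return S, num(target)

def has_subset_sum(S, t):
    # Exponential brute force; fine for tiny illustrative instances.
    return any(sum(c) == t
               for r in range(1, len(S) + 1)
               for c in combinations(S, r))
```

On a satisfiable formula the target is reachable; on an unsatisfiable one it is not, mirroring the correctness argument above.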

3. • co-NP: co-NP is the class of problems whose complements are in NP. If a problem
is in co-NP, a "no" instance has a short, efficiently verifiable proof.

• Example: TAUTOLOGY, the problem of determining if a Boolean formula is
true for all assignments. Its complement is determining if a formula is not a
tautology. A proof for this is a single assignment that makes the formula false,
which is easy to verify. Thus, the complement is in NP, so TAUTOLOGY is in
co-NP.
• Equivalence: The class P is closed under complementation (if you can solve a
problem efficiently, you can solve its complement efficiently by just flipping the
answer).
– If P = NP, then for any problem in co-NP, its complement is in NP (=P).
Since P is closed under complement, the original problem is also in P. Thus
P = co-NP.
– If P = co-NP, then for any problem in NP, its complement is in co-NP (=P).
Since P is closed under complement, the original problem is also in P. Thus
P = NP.
Therefore, the two questions are equivalent.

4. Reduction: CLIQUE ≤P VERTEX-COVER

(a) Input: A graph G = (V, E) and an integer k for the CLIQUE problem.
(b) Construction: Create an instance ⟨G′ , k ′ ⟩ for VERTEX-COVER.
• Let G′ = Ḡ (the complement graph of G). Ḡ has the same vertices as G, and
an edge (u, v) exists in Ḡ if and only if it does not exist in G.
• Let k ′ = |V | − k.
(c) Correctness: A set of vertices C is a clique in G if and only if the set V \ C is
a vertex cover in Ḡ.
• (⇒) If C is a k-clique in G, consider any edge (u, v) in Ḡ. By definition, there
is no edge between u and v in G. Therefore, u and v cannot both be in the
clique C. So at least one of them must be in V \ C. Thus V \ C is a vertex
cover of size |V | − k in Ḡ.
• (⇐) If V ′ is a (|V | − k)-vertex cover in Ḡ, consider the set C = V \ V ′ . For
any two vertices u, v ∈ C, they are not in the vertex cover V ′ , so there cannot
be an edge between them in Ḡ. This means there must be an edge between
them in G. Therefore, C is a clique of size k in G.
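The reduction is a few lines of code, and its correctness can be spot-checked exhaustively on small graphs. A Python sketch (the graph encoding is an assumption of this sketch):

```python
from itertools import combinations

def clique_to_vc(vertices, edges, k):
    # The reduction itself: complement the edge set and set k' = |V| - k.
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    comp = all_pairs - {frozenset(e) for e in edges}
    return vertices, comp, len(vertices) - k

def has_clique(vertices, edges, k):
    # Brute-force checker: some k-subset with all pairs adjacent.
    E = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in E for p in combinations(c, 2))
               for c in combinations(vertices, k))

def has_vertex_cover(vertices, edges, k):
    # Brute-force checker: some k-subset touching every edge.
    return any(all(set(e) & set(c) for e in edges)
               for c in combinations(vertices, k))
```

Checking every k on a small graph confirms that ⟨G, k⟩ is a "yes" instance of CLIQUE exactly when ⟨Ḡ, |V| − k⟩ is a "yes" instance of VERTEX-COVER.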

5. • 2-SAT is in P: Each clause in 2-SAT, such as (x ∨ y), is equivalent to two
implications: (¬x → y) and (¬y → x). We can build an "implication graph" where
vertices are the literals (xi , ¬xi ) and directed edges represent these implications.
The formula is unsatisfiable if and only if a variable and its negation (xi and ¬xi )
belong to the same strongly connected component (SCC) in this graph. Finding
SCCs can be done in linear time (e.g., Tarjan’s algorithm), so 2-SAT is in P.
• Contrast with 3-SAT: 3-SAT is NP-complete. A 3-literal clause like (x ∨ y ∨ z)
corresponds to more complex implications like (¬x ∧ ¬y) → z, which cannot be

directly modeled with simple edges between single literals. This structural differ-
ence fundamentally increases the problem’s combinatorial complexity, pushing it
from P to NP-complete.
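The satisfiability criterion (x and ¬x share an SCC iff each is reachable from the other) fits in a short Python sketch; literals are signed integers, an assumed encoding:

```python
from collections import defaultdict

def two_sat_satisfiable(num_vars, clauses):
    # Implication graph: a clause (a v b) contributes edges -a -> b and
    # -b -> a.  Literals are signed ints, e.g. -2 stands for ¬x2.
    graph = defaultdict(list)
    for a, b in clauses:
        graph[-a].append(b)
        graph[-b].append(a)

    def reaches(src, dst):
        # Plain depth-first reachability in the implication graph.
        stack, seen = [src], {src}
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    # Unsatisfiable iff some x_i and ¬x_i are each reachable from the
    # other, i.e. they lie in the same strongly connected component.
    return not any(reaches(x, -x) and reaches(-x, x)
                   for x in range(1, num_vars + 1))
```

A production solver would compute SCCs once with Tarjan's algorithm for linear time; pairwise reachability is quadratic but enough to illustrate the criterion.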

