
Commit 7734f8a

antmarakis authored and norvig committed

NLP Notebook: Languages (aimacode#586)

* Update nlp.py
* Update test_nlp.py
* Add files via upload
* Update nlp.ipynb
* add generate_random

1 parent b022791 commit 7734f8a

4 files changed: +279 -12 lines changed


images/parse_tree.png (13.3 KB)

nlp.ipynb

Lines changed: 258 additions & 2 deletions
@@ -20,7 +20,7 @@
 "outputs": [],
 "source": [
 "import nlp\n",
-"from nlp import Page, HITS"
+"from nlp import Page, HITS, Lexicon, Rules, Grammar"
 ]
 },
 {
@@ -32,6 +32,7 @@
 "## CONTENTS\n",
 "\n",
 "* Overview\n",
+"* Languages\n",
 "* HITS\n",
 "* Question Answering"
 ]
@@ -45,6 +46,261 @@
 "`TODO...`"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## LANGUAGES\n",
+"\n",
+"Languages can be represented by a set of grammar rules over a lexicon of words. Different languages can be represented by different types of grammar, but in Natural Language Processing we are mainly interested in context-free grammars.\n",
+"\n",
+"### Context-Free Grammars\n",
+"\n",
+"A lot of natural and programming languages can be represented by a **Context-Free Grammar (CFG)**. A CFG is a grammar whose rules each have a single non-terminal symbol on the left-hand side. That means a non-terminal can be replaced by the right-hand side of the rule regardless of context. An example of a CFG:\n",
+"\n",
+"```\n",
+"S -> aSb | e\n",
+"```\n",
+"\n",
+"That means `S` can be replaced by either `aSb` or `e` (with `e` we denote the empty string). The lexicon of the language is comprised of the terminals `a` and `b`, while with `S` we denote the non-terminal symbol. In general, non-terminals are capitalized while terminals are not, and we usually name the starting non-terminal `S`. The language generated by the above grammar is the language a<sup>n</sup>b<sup>n</sup> for n greater than or equal to 0."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Probabilistic Context-Free Grammar\n",
+"\n",
+"While a simple CFG can be very useful, we might want to know the chance of each rule occurring. Above, we do not know if `S` is more likely to be replaced by `aSb` or `e`. **Probabilistic Context-Free Grammars (PCFG)** are built to fill exactly that need. Each rule has a probability, given in brackets, and the probabilities of a rule's alternatives sum up to 1:\n",
+"\n",
+"```\n",
+"S -> aSb [0.7] | e [0.3]\n",
+"```\n",
+"\n",
+"Now we know it is more likely for `S` to be replaced by `aSb` than by `e`."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Lexicon\n",
+"\n",
+"The lexicon of a language is defined as a list of allowable words. These words are grouped into the usual classes: `verbs`, `nouns`, `adjectives`, `adverbs`, `pronouns`, `names`, `articles`, `prepositions` and `conjunctions`. For the first five classes it is impossible to list all words, since new words are continuously being added to them. Recently \"google\" was added to the list of verbs, and words like that will continue to pop up and get added to the lists. For that reason, these first five categories are called **open classes**. The remaining categories contain far fewer words and change very little. Words like \"thou\" were commonly used in the past but have almost completely fallen out of use; such changes take decades or centuries to manifest, so we can safely assume these categories will remain static for the foreseeable future. Thus, these categories are called **closed classes**.\n",
+"\n",
+"An example lexicon for a PCFG (note that other classes can also be used according to the language, like `digits`, or `RelPro` for relative pronoun):\n",
+"\n",
+"```\n",
+"Verb -> is [0.3] | say [0.1] | are [0.1] | ...\n",
+"Noun -> robot [0.1] | sheep [0.05] | fence [0.05] | ...\n",
+"Adjective -> good [0.1] | new [0.1] | sad [0.05] | ...\n",
+"Adverb -> here [0.1] | lightly [0.05] | now [0.05] | ...\n",
+"Pronoun -> me [0.1] | you [0.1] | he [0.05] | ...\n",
+"RelPro -> that [0.4] | who [0.2] | which [0.2] | ...\n",
+"Name -> john [0.05] | mary [0.05] | peter [0.01] | ...\n",
+"Article -> the [0.35] | a [0.25] | an [0.025] | ...\n",
+"Preposition -> to [0.25] | in [0.2] | at [0.1] | ...\n",
+"Conjuction -> and [0.5] | or [0.2] | but [0.2] | ...\n",
+"Digit -> 1 [0.3] | 2 [0.2] | 0 [0.2] | ...\n",
+"```"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Grammar\n",
+"\n",
+"With grammars we combine words from the lexicon into valid phrases. A grammar is comprised of **grammar rules**. Each rule transforms the left-hand side of the rule into the right-hand side. For example, `A -> B` means that `A` transforms into `B`. Let's build a grammar for the language we started building with the lexicon. We will use a PCFG.\n",
+"\n",
+"```\n",
+"S -> NP VP [0.9] | S Conjuction S [0.1]\n",
+"\n",
+"NP -> Pronoun [0.3] | Name [0.1] | Noun [0.1] | Article Noun [0.25] |\n",
+"      Article Adjs Noun [0.05] | Digit [0.05] | NP PP [0.1] |\n",
+"      NP RelClause [0.05]\n",
+"\n",
+"VP -> Verb [0.4] | VP NP [0.35] | VP Adjective [0.05] | VP PP [0.1] |\n",
+"      VP Adverb [0.1]\n",
+"\n",
+"Adjs -> Adjective [0.8] | Adjective Adjs [0.2]\n",
+"\n",
+"PP -> Preposition NP [1.0]\n",
+"\n",
+"RelClause -> RelPro VP [1.0]\n",
+"```\n",
+"\n",
+"Some valid phrases the grammar produces: \"`mary is sad`\", \"`you are a robot`\" and \"`you say 1 and he is a good fence`\".\n",
+"\n",
+"What if we wanted to check if the phrase \"`mary is sad`\" is actually a valid sentence? We can use a **parse tree** to constructively prove that a string of words is a valid phrase in the given language and even calculate the probability of the generation of the sentence.\n",
+"\n",
+"![parse_tree](images/parse_tree.png)\n",
+"\n",
+"The probability of the whole tree can be calculated by multiplying the probabilities of each individual rule transformation: `0.9 * 0.1 * 0.05 * 0.05 * 0.4 * 0.05 * 0.3 = 0.00000135`.\n",
+"\n",
+"To conserve space, we can also write the tree in linear form:\n",
+"\n",
+"[S [NP [Name **mary**]] [VP [VP [Verb **is**]] [Adjective **sad**]]]\n",
+"\n",
+"Unfortunately, the current grammar **overgenerates**, that is, it creates sentences that are not grammatically correct (according to the English language), like \"`the fence are john which say`\". It also **undergenerates**, which means there are valid sentences it does not generate, like \"`he believes mary is sad`\"."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Implementation\n",
+"\n",
+"In the module we have implemented the `Lexicon` and `Rules` functions, whose outputs we can combine to create a `Grammar` object.\n",
+"\n",
+"Execute the cells below to view the implementations:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 2,
+"metadata": {
+"collapsed": true
+},
+"outputs": [],
+"source": [
+"%psource Lexicon"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 3,
+"metadata": {
+"collapsed": true
+},
+"outputs": [],
+"source": [
+"%psource Rules"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 4,
+"metadata": {
+"collapsed": true
+},
+"outputs": [],
+"source": [
+"%psource Grammar"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Let's build a lexicon and a grammar for the above language:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 2,
+"metadata": {},
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"Lexicon {'Article': ['the', 'a', 'an'], 'Adverb': ['here', 'lightly', 'now'], 'Digit': ['1', '2', '0'], 'Pronoun': ['me', 'you', 'he'], 'Name': ['john', 'mary', 'peter'], 'Adjective': ['good', 'new', 'sad'], 'Conjuction': ['and', 'or', 'but'], 'Preposition': ['to', 'in', 'at'], 'RelPro': ['that', 'who', 'which'], 'Verb': ['is', 'say', 'are'], 'Noun': ['robot', 'sheep', 'fence']}\n",
+"\n",
+"Rules: {'Adjs': [['Adjective'], ['Adjective', 'Adjs']], 'PP': [['Preposition', 'NP']], 'RelClause': [['RelPro', 'VP']], 'VP': [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']], 'NP': [['Pronoun'], ['Name'], ['Noun'], ['Article', 'Noun'], ['Article', 'Adjs', 'Noun'], ['Digit'], ['NP', 'PP'], ['NP', 'RelClause']], 'S': [['NP', 'VP'], ['S', 'Conjuction', 'S']]}\n"
+]
+}
+],
+"source": [
+"lexicon = Lexicon(\n",
+"    Verb=\"is | say | are\",\n",
+"    Noun=\"robot | sheep | fence\",\n",
+"    Adjective=\"good | new | sad\",\n",
+"    Adverb=\"here | lightly | now\",\n",
+"    Pronoun=\"me | you | he\",\n",
+"    RelPro=\"that | who | which\",\n",
+"    Name=\"john | mary | peter\",\n",
+"    Article=\"the | a | an\",\n",
+"    Preposition=\"to | in | at\",\n",
+"    Conjuction=\"and | or | but\",\n",
+"    Digit=\"1 | 2 | 0\"\n",
+")\n",
+"\n",
+"print(\"Lexicon\", lexicon)\n",
+"\n",
+"rules = Rules(\n",
+"    S=\"NP VP | S Conjuction S\",\n",
+"    NP=\"Pronoun | Name | Noun | Article Noun | Article Adjs Noun | Digit | NP PP | NP RelClause\",\n",
+"    VP=\"Verb | VP NP | VP Adjective | VP PP | VP Adverb\",\n",
+"    Adjs=\"Adjective | Adjective Adjs\",\n",
+"    PP=\"Preposition NP\",\n",
+"    RelClause=\"RelPro VP\"\n",
+")\n",
+"\n",
+"print(\"\\nRules:\", rules)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Both functions return a dictionary keyed by the left-hand sides of the rules. For the lexicon, the values are the terminal words for each class, while for the rules the values are the right-hand sides as lists of symbols.\n",
+"\n",
+"We can now use the variables `lexicon` and `rules` to build a grammar. After we've done so, we can find the transformations of a non-terminal (the `Noun`, `Verb` and the other basic classes do **not** count as proper non-terminals in the implementation). We can also check if a word is in a particular class."
+]
+},
+{
+"cell_type": "code",
+"execution_count": 3,
+"metadata": {},
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"How can we rewrite 'VP'? [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']]\n",
+"Is 'the' an article? True\n",
+"Is 'here' a noun? False\n"
+]
+}
+],
+"source": [
+"grammar = Grammar(\"A Simple Grammar\", rules, lexicon)\n",
+"\n",
+"print(\"How can we rewrite 'VP'?\", grammar.rewrites_for('VP'))\n",
+"print(\"Is 'the' an article?\", grammar.isa('the', 'Article'))\n",
+"print(\"Is 'here' a noun?\", grammar.isa('here', 'Noun'))"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Finally, we can generate random phrases using our grammar. Most of them will be complete gibberish, since they fall under the phrases the grammar overgenerates. This goes to show that the grammatically sensible phrases are vastly outnumbered by the overgenerated ones."
+]
+},
+{
+"cell_type": "code",
+"execution_count": 7,
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"'a robot is to a robot sad but robot say you 0 in me in a robot at the sheep at 1 good an fence in sheep in me that are in john new lightly lightly here a new good new robot lightly new in sheep lightly'"
+]
+},
+"execution_count": 7,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"from nlp import generate_random\n",
+"\n",
+"generate_random(grammar)"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -245,7 +501,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.5.2+"
+"version": "3.5.3"
 }
 },
 "nbformat": 4,

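The new cells above introduce the toy PCFG `S -> aSb [0.7] | e [0.3]` and the probability of a parse tree, and both ideas can be checked with a few lines of standalone Python. First, a PCFG can be read operationally: expand `S` by picking one alternative in proportion to its probability. The sketch below is only an illustration (the name `sample_S` is invented here and is not part of nlp.py):

```python
import random

def sample_S(p_recurse=0.7):
    """Sample one string of a^n b^n (n >= 0) from S -> aSb [0.7] | e [0.3]."""
    if random.random() < p_recurse:
        return 'a' + sample_S(p_recurse) + 'b'   # chose S -> aSb [0.7]
    return ''                                    # chose S -> e   [0.3]

# Five samples; '' (the empty string) is shown as 'e', which is why n can be 0.
print([sample_S() or 'e' for _ in range(5)])
```

Second, the parse-tree probability quoted for "mary is sad" (`0.9 * 0.1 * 0.05 * 0.05 * 0.4 * 0.05 * 0.3 = 0.00000135`) is just the product of the probabilities of the rules used in the tree. A throwaway check, with rule labels that are only bookkeeping for this example and not identifiers from nlp.py:

```python
# Multiply the probability of every rule application in the parse tree of
# "mary is sad" from the PCFG above.
rule_probs = {
    'S -> NP VP': 0.9,
    'NP -> Name': 0.1,
    'Name -> mary': 0.05,
    'VP -> VP Adjective': 0.05,
    'VP -> Verb': 0.4,
    'Verb -> is': 0.3,
    'Adjective -> sad': 0.05,
}

probability = 1.0
for prob in rule_probs.values():
    probability *= prob

print(probability)  # roughly 1.35e-06, i.e. 0.00000135
```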
nlp.py

Lines changed: 6 additions & 6 deletions
@@ -1,4 +1,4 @@
-"""A chart parser and some grammars. (Chapter 22)"""
+"""Natural Language Processing; Chart Parsing and PageRanking (Chapter 22-23)"""
 
 # (Written for the second edition of AIMA; expect some discrepancies
 # from the third edition until this gets reviewed.)
@@ -23,8 +23,8 @@ def Rules(**rules):
 
 def Lexicon(**rules):
     """Create a dictionary mapping symbols to alternative words.
-    >>> Lexicon(Art = "the | a | an")
-    {'Art': ['the', 'a', 'an']}
+    >>> Lexicon(Article = "the | a | an")
+    {'Article': ['the', 'a', 'an']}
     """
     for (lhs, rhs) in rules.items():
         rules[lhs] = [word.strip() for word in rhs.split('|')]
@@ -96,8 +96,8 @@ def __repr__(self):
     N='man'))
 
 
-def generate_random(grammar=E_, s='S'):
-    """Replace each token in s by a random entry in grammar (recursively).
+def generate_random(grammar=E_, S='S'):
+    """Replace each token in S by a random entry in grammar (recursively).
     This is useful for testing a grammar, e.g. generate_random(E_)"""
     import random
 
@@ -111,7 +111,7 @@ def rewrite(tokens, into):
             into.append(token)
         return into
 
-    return ' '.join(rewrite(s.split(), []))
+    return ' '.join(rewrite(S.split(), []))
 
 # ______________________________________________________________________________
 # Chart Parsing
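
The hunks above show only fragments of `generate_random`. Purely as a sketch of the idea it implements (recursively replace non-terminals with a random right-hand side, and word classes with a random word, until only terminals remain), a simplified standalone version over the plain dictionaries that `Rules` and `Lexicon` return could look like the following; it is not the actual nlp.py code, and the tiny rules/lexicon are made up for the example:

```python
import random

# Toy rules and lexicon in the dictionary shape that Rules and Lexicon return.
rules = {'S': [['NP', 'VP']], 'NP': [['Article', 'Noun']], 'VP': [['Verb']]}
lexicon = {'Article': ['the', 'a'], 'Noun': ['robot', 'fence'], 'Verb': ['is']}

def generate(symbol='S'):
    """Recursively rewrite symbol until only lexicon words remain."""
    if symbol in rules:                    # non-terminal: pick a random rewrite
        rhs = random.choice(rules[symbol])
        return ' '.join(generate(token) for token in rhs)
    if symbol in lexicon:                  # word class: pick a random word
        return random.choice(lexicon[symbol])
    return symbol                          # anything else is already a terminal

print(generate())      # a random sentence, e.g. 'the robot is'
print(generate('NP'))  # start from a different symbol, as with the S parameter
```

One point the `s` to `S` rename makes clearer: since `S` is just the initial token string, passing a different symbol (for example `generate_random(grammar, S='NP')`) should produce a single phrase of that category rather than a whole sentence.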

tests/test_nlp.py

Lines changed: 15 additions & 4 deletions
@@ -4,20 +4,31 @@
 from nlp import loadPageHTML, stripRawHTML, findOutlinks, onlyWikipediaURLS
 from nlp import expand_pages, relevant_pages, normalize, ConvergenceDetector, getInlinks
 from nlp import getOutlinks, Page, determineInlinks, HITS
-from nlp import Rules, Lexicon
+from nlp import Rules, Lexicon, Grammar
 # Clumsy imports because we want to access certain nlp.py globals explicitly, because
-# they are accessed by function's within nlp.py
+# they are accessed by functions within nlp.py
 
 from unittest.mock import patch
 from io import BytesIO
 
 
 def test_rules():
-    assert Rules(A="B C | D E") == {'A': [['B', 'C'], ['D', 'E']]}
+    check = {'A': [['B', 'C'], ['D', 'E']], 'B': [['E'], ['a'], ['b', 'c']]}
+    assert Rules(A="B C | D E", B="E | a | b c") == check
 
 
 def test_lexicon():
-    assert Lexicon(Art="the | a | an") == {'Art': ['the', 'a', 'an']}
+    check = {'Article': ['the', 'a', 'an'], 'Pronoun': ['i', 'you', 'he']}
+    assert Lexicon(Article="the | a | an", Pronoun="i | you | he") == check
+
+
+def test_grammar():
+    rules = Rules(A="B C | D E", B="E | a | b c")
+    lexicon = Lexicon(Article="the | a | an", Pronoun="i | you | he")
+    grammar = Grammar("Simplegram", rules, lexicon)
+
+    assert grammar.rewrites_for('A') == [['B', 'C'], ['D', 'E']]
+    assert grammar.isa('the', 'Article')
 
 
 # ______________________________________________________________________________
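
To exercise the updated module locally, one convenient option (assuming pytest, which these tests are written for) is to invoke it programmatically and select just the three grammar-related tests by node id:

```python
# Run only the Rules/Lexicon/Grammar tests (assumes pytest is installed and the
# working directory is the repository root).
import pytest

pytest.main(["-q",
             "tests/test_nlp.py::test_rules",
             "tests/test_nlp.py::test_lexicon",
             "tests/test_nlp.py::test_grammar"])
```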
