NLP Notebook: Languages #586

Merged: 5 commits, merged Jul 24, 2017
Binary file added images/parse_tree.png
260 changes: 258 additions & 2 deletions nlp.ipynb
@@ -20,7 +20,7 @@
"outputs": [],
"source": [
"import nlp\n",
"from nlp import Page, HITS"
"from nlp import Page, HITS, Lexicon, Rules, Grammar"
]
},
{
@@ -32,6 +32,7 @@
"## CONTENTS\n",
"\n",
"* Overview\n",
"* Languages\n",
"* HITS\n",
"* Question Answering"
]
@@ -45,6 +46,261 @@
"`TODO...`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## LANGUAGES\n",
"\n",
"Languages can be represented by a set of grammar rules over a lexicon of words. Different languages can be represented by different types of grammar, but in Natural Language Processing we are mainly interested in context-free grammars.\n",
"\n",
"### Context-Free Grammars\n",
"\n",
"A lot of natural and programming languages can be represented by a **Context-Free Grammar (CFG)**. A CFG is a grammar that has a single non-terminal symbol on the left-hand side. That means a non-terminal can be replaced by the right-hand side of the rule regardless of context. An example of a CFG:\n",
"\n",
"```\n",
"S -> aSb | e\n",
"```\n",
"\n",
"That means `S` can be replaced by either `aSb` or `e` (with `e` we denote the empty string). The lexicon of the language is comprised of the terminals `a` and `b`, while with `S` we denote the non-terminal symbol. In general, non-terminals are capitalized while terminals are not, and we usually name the starting non-terminal `S`. The language generated by the above grammar is the language a<sup>n</sup>b<sup>n</sup> for n greater or equal than 1."
]
},
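{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the rewriting concrete, below is a minimal sketch (plain Python, independent of this module; the function name `derive_anbn` is ours, purely for illustration) that derives a string of a<sup>n</sup>b<sup>n</sup> by applying `S -> aSb` a fixed number of times and then `S -> e`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: derive a^n b^n from the grammar S -> aSb | e\n",
"def derive_anbn(n):\n",
"    s = 'S'\n",
"    for _ in range(n):\n",
"        s = s.replace('S', 'aSb')  # apply S -> aSb\n",
"    return s.replace('S', '')      # apply S -> e (the empty string)\n",
"\n",
"print([derive_anbn(n) for n in range(4)])  # ['', 'ab', 'aabb', 'aaabbb']"
]
},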
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Probabilistic Context-Free Grammar\n",
"\n",
"While a simple CFG can be very useful, we might want to know the chance of each rule occuring. Above, we do not know if `S` is more likely to be replaced by `aSb` or `e`. **Probabilistic Context-Free Grammars (PCFG)** are built to fill exactly that need. Each rule has a probability, given in brackets, and the probabilities of a rule sum up to 1:\n",
"\n",
"```\n",
"S -> aSb [0.7] | e [0.3]\n",
"```\n",
"\n",
"Now we know it is more likely for `S` to be replaced by `aSb` than by `e`."
]
},
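{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of how such a rule can be sampled (a throwaway sketch, not part of the module), each expansion of `S` below picks `aSb` with probability 0.7 and `e` with probability 0.3:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"# Expand one S: 'aSb' with probability 0.7, the empty string with probability 0.3\n",
"def sample_S():\n",
"    if random.random() < 0.7:\n",
"        return 'a' + sample_S() + 'b'\n",
"    return ''\n",
"\n",
"print([sample_S() for _ in range(5)])"
]
},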
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Lexicon\n",
"\n",
"The lexicon of a language is defined as a list of allowable words. These words are grouped into the usual classes: `verbs`, `nouns`, `adjectives`, `adverbs`, `pronouns`, `names`, `articles`, `prepositions` and `conjuctions`. For the first five classes it is impossible to list all words, since words are continuously being added in the classes. Recently \"google\" was added to the list of verbs, and words like that will continue to pop up and get added to the lists. For that reason, these first five categories are called **open classes**. The rest of the categories have much fewer words and much less development. While words like \"thou\" were commonly used in the past but have declined almost completely in usage, most changes take many decades or centuries to manifest, so we can safely assume the categories will remain static for the foreseeable future. Thus, these categories are called **closed classes**.\n",
"\n",
"An example lexicon for a PCFG (note that other classes can also be used according to the language, like `digits`, or `RelPro` for relative pronoun):\n",
"\n",
"```\n",
"Verb -> is [0.3] | say [0.1] | are [0.1] | ...\n",
"Noun -> robot [0.1] | sheep [0.05] | fence [0.05] | ...\n",
"Adjective -> good [0.1] | new [0.1] | sad [0.05] | ...\n",
"Adverb -> here [0.1] | lightly [0.05] | now [0.05] | ...\n",
"Pronoun -> me [0.1] | you [0.1] | he [0.05] | ...\n",
"RelPro -> that [0.4] | who [0.2] | which [0.2] | ...\n",
"Name -> john [0.05] | mary [0.05] | peter [0.01] | ...\n",
"Article -> the [0.35] | a [0.25] | an [0.025] | ...\n",
"Preposition -> to [0.25] | in [0.2] | at [0.1] | ...\n",
"Conjuction -> and [0.5] | or [0.2] | but [0.2] | ...\n",
"Digit -> 1 [0.3] | 2 [0.2] | 0 [0.2] | ...\n",
"```"
]
},
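{
"cell_type": "markdown",
"metadata": {},
"source": [
"One straightforward way to represent such a class in code (our own sketch, not the representation the module uses) is a list of word/probability pairs, sampled by walking the cumulative distribution. The elided entries (`...`) mean the listed probabilities need not sum to 1, so we fall back to the last listed word:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"# A lexical class as (word, probability) pairs -- an illustrative sketch\n",
"verb = [('is', 0.3), ('say', 0.1), ('are', 0.1)]\n",
"\n",
"def sample_word(entries):\n",
"    r = random.random()\n",
"    cumulative = 0.0\n",
"    for word, p in entries:\n",
"        cumulative += p\n",
"        if r < cumulative:\n",
"            return word\n",
"    return entries[-1][0]  # probability mass of the elided words ('...')\n",
"\n",
"print([sample_word(verb) for _ in range(5)])"
]
},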
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Grammar\n",
"\n",
"With grammars we combine words from the lexicon into valid phrases. A grammar is comprised of **grammar rules**. Each rule transforms the left-hand side of the rule into the right-hand side. For example, `A -> B` means that `A` transforms into `B`. Let's build a grammar for the language we started building with the lexicon. We will use a PCFG.\n",
"\n",
"```\n",
"S -> NP VP [0.9] | S Conjuction S [0.1]\n",
"\n",
"NP -> Pronoun [0.3] | Name [0.1] | Noun [0.1] | Article Noun [0.25] |\n",
" Article Adjs Noun [0.05] | Digit [0.05] | NP PP [0.1] |\n",
" NP RelClause [0.05]\n",
"\n",
"VP -> Verb [0.4] | VP NP [0.35] | VP Adjective [0.05] | VP PP [0.1]\n",
" VP Adverb [0.1]\n",
"\n",
"Adjs -> Adjective [0.8] | Adjective Adjs [0.2]\n",
"\n",
"PP -> Preposition NP [1.0]\n",
"\n",
"RelClause -> RelPro VP [1.0]\n",
"```\n",
"\n",
"Some valid phrases the grammar produces: \"`mary is sad`\", \"`you are a robot`\" and \"`she likes mary and a good fence`\".\n",
"\n",
"What if we wanted to check if the phrase \"`mary is sad`\" is actually a valid sentence? We can use a **parse tree** to constructively prove that a string of words is a valid phrase in the given language and even calculate the probability of the generation of the sentence.\n",
"\n",
"![parse_tree](images/parse_tree.png)\n",
"\n",
"The probability of the whole tree can be calculated by multiplying the probabilities of each individual rule transormation: `0.9 * 0.1 * 0.05 * 0.05 * 0.4 * 0.05 * 0.3 = 0.00000135`.\n",
"\n",
"To conserve space, we can also write the tree in linear form:\n",
"\n",
"[S [NP [Name **mary**]] [VP [VP [Verb **is**]] [Adjective **sad**]]]\n",
"\n",
"Unfortunately, the current grammar **overgenerates**, that is, it creates sentences that are not grammatically correct (according to the English language), like \"`the fence are john which say`\". It also **undergenerates**, which means there are valid sentences it does not generate, like \"`he believes mary is sad`\"."
]
},
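{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of that product, the sketch below simply transcribes the probabilities of the rules used in the tree above and multiplies them:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rules used in the parse tree of 'mary is sad':\n",
"# S -> NP VP, NP -> Name, Name -> mary, VP -> VP Adjective,\n",
"# VP -> Verb, Verb -> is, Adjective -> sad\n",
"rule_probs = [0.9, 0.1, 0.05, 0.05, 0.4, 0.3, 0.05]\n",
"\n",
"p = 1.0\n",
"for prob in rule_probs:\n",
"    p *= prob\n",
"print(p)  # ~1.35e-06, i.e. 0.00000135 (up to floating-point rounding)"
]
},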
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Implementation\n",
"\n",
"In the module we have implemented a `Lexicon` and a `Rules` function, which we can combine to create a `Grammar` object.\n",
"\n",
"Execute the cells below to view the implemenations:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%psource Lexicon"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%psource Rules"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%psource Grammar"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's build a lexicon and a grammar for the above language:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Lexicon {'Article': ['the', 'a', 'an'], 'Adverb': ['here', 'lightly', 'now'], 'Digit': ['1', '2', '0'], 'Pronoun': ['me', 'you', 'he'], 'Name': ['john', 'mary', 'peter'], 'Adjective': ['good', 'new', 'sad'], 'Conjuction': ['and', 'or', 'but'], 'Preposition': ['to', 'in', 'at'], 'RelPro': ['that', 'who', 'which'], 'Verb': ['is', 'say', 'are'], 'Noun': ['robot', 'sheep', 'fence']}\n",
"\n",
"Rules: {'Adjs': [['Adjective'], ['Adjective', 'Adjs']], 'PP': [['Preposition', 'NP']], 'RelClause': [['RelPro', 'VP']], 'VP': [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']], 'NP': [['Pronoun'], ['Name'], ['Noun'], ['Article', 'Noun'], ['Article', 'Adjs', 'Noun'], ['Digit'], ['NP', 'PP'], ['NP', 'RelClause']], 'S': [['NP', 'VP'], ['S', 'Conjuction', 'S']]}\n"
]
}
],
"source": [
"lexicon = Lexicon(\n",
" Verb=\"is | say | are\",\n",
" Noun=\"robot | sheep | fence\",\n",
" Adjective=\"good | new | sad\",\n",
" Adverb=\"here | lightly | now\",\n",
" Pronoun=\"me | you | he\",\n",
" RelPro=\"that | who | which\",\n",
" Name=\"john | mary | peter\",\n",
" Article=\"the | a | an\",\n",
" Preposition=\"to | in | at\",\n",
" Conjuction=\"and | or | but\",\n",
" Digit=\"1 | 2 | 0\"\n",
")\n",
"\n",
"print(\"Lexicon\", lexicon)\n",
"\n",
"rules = Rules(\n",
" S=\"NP VP | S Conjuction S\",\n",
" NP=\"Pronoun | Name | Noun | Article Noun | Article Adjs Noun | Digit | NP PP | NP RelClause\",\n",
" VP=\"Verb | VP NP | VP Adjective | VP PP | VP Adverb\",\n",
" Adjs=\"Adjective | Adjective Adjs\",\n",
" PP=\"Preposition NP\",\n",
" RelClause=\"RelPro VP\"\n",
")\n",
"\n",
"print(\"\\nRules:\", rules)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Both the functions return a dictionary with keys the left-hand side of the rules. For the lexicon, the values are the terminals for each left-hand side non-terminal, while for the rules the values are the right-hand sides as lists.\n",
"\n",
"We can now use the variables `lexicon` and `rules` to build a grammar. After we've done so, we can find the transformations of a non-terminal (the `Noun`, `Verb` and the other basic classes do **not** count as proper non-terminals in the implementation). We can also check if a word is in a particular class."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"How can we rewrite 'VP'? [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']]\n",
"Is 'the' an article? True\n",
"Is 'here' a noun? False\n"
]
}
],
"source": [
"grammar = Grammar(\"A Simple Grammar\", rules, lexicon)\n",
"\n",
"print(\"How can we rewrite 'VP'?\", grammar.rewrites_for('VP'))\n",
"print(\"Is 'the' an article?\", grammar.isa('the', 'Article'))\n",
"print(\"Is 'here' a noun?\", grammar.isa('here', 'Noun'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can generate random phrases using our grammar. Most of them will be complete gibberish, falling under the overgenerated phrases of the grammar. That goes to show that in the grammar the valid phrases are much fewer than the overgenerated ones."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'a robot is to a robot sad but robot say you 0 in me in a robot at the sheep at 1 good an fence in sheep in me that are in john new lightly lightly here a new good new robot lightly new in sheep lightly'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from nlp import generate_random\n",
"\n",
"generate_random(grammar)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -245,7 +501,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2+"
"version": "3.5.3"
}
},
"nbformat": 4,
12 changes: 6 additions & 6 deletions nlp.py
@@ -1,4 +1,4 @@
"""A chart parser and some grammars. (Chapter 22)"""
"""Natural Language Processing; Chart Parsing and PageRanking (Chapter 22-23)"""

# (Written for the second edition of AIMA; expect some discrepancies
# from the third edition until this gets reviewed.)
@@ -23,8 +23,8 @@ def Rules(**rules):

def Lexicon(**rules):
"""Create a dictionary mapping symbols to alternative words.
>>> Lexicon(Art = "the | a | an")
{'Art': ['the', 'a', 'an']}
>>> Lexicon(Article = "the | a | an")
{'Article': ['the', 'a', 'an']}
"""
for (lhs, rhs) in rules.items():
rules[lhs] = [word.strip() for word in rhs.split('|')]
@@ -96,8 +96,8 @@ def __repr__(self):
N='man'))


def generate_random(grammar=E_, s='S'):
"""Replace each token in s by a random entry in grammar (recursively).
def generate_random(grammar=E_, S='S'):
"""Replace each token in S by a random entry in grammar (recursively).
This is useful for testing a grammar, e.g. generate_random(E_)"""
import random

@@ -111,7 +111,7 @@ def rewrite(tokens, into):
into.append(token)
return into

return ' '.join(rewrite(s.split(), []))
return ' '.join(rewrite(S.split(), []))

# ______________________________________________________________________________
# Chart Parsing
19 changes: 15 additions & 4 deletions tests/test_nlp.py
@@ -4,20 +4,31 @@
from nlp import loadPageHTML, stripRawHTML, findOutlinks, onlyWikipediaURLS
from nlp import expand_pages, relevant_pages, normalize, ConvergenceDetector, getInlinks
from nlp import getOutlinks, Page, determineInlinks, HITS
from nlp import Rules, Lexicon
from nlp import Rules, Lexicon, Grammar
# Clumsy imports because we want to access certain nlp.py globals explicitly, because
# they are accessed by function's within nlp.py
# they are accessed by functions within nlp.py

from unittest.mock import patch
from io import BytesIO


def test_rules():
assert Rules(A="B C | D E") == {'A': [['B', 'C'], ['D', 'E']]}
check = {'A': [['B', 'C'], ['D', 'E']], 'B': [['E'], ['a'], ['b', 'c']]}
assert Rules(A="B C | D E", B="E | a | b c") == check


def test_lexicon():
assert Lexicon(Art="the | a | an") == {'Art': ['the', 'a', 'an']}
check = {'Article': ['the', 'a', 'an'], 'Pronoun': ['i', 'you', 'he']}
assert Lexicon(Article="the | a | an", Pronoun="i | you | he") == check


def test_grammar():
rules = Rules(A="B C | D E", B="E | a | b c")
lexicon = Lexicon(Article="the | a | an", Pronoun="i | you | he")
grammar = Grammar("Simplegram", rules, lexicon)

assert grammar.rewrites_for('A') == [['B', 'C'], ['D', 'E']]
assert grammar.isa('the', 'Article')


# ______________________________________________________________________________
Expand Down