diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index c8a165a25..ed17ed4da 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -23,7 +23,7 @@ In more detail: ## Port to Python 3; Pythonic Idioms; py.test -- Check for common problems in [porting to Python 3](http://python3porting.com/problems.html), such as: `print` is now a function; `range` and `map` and other functions no longer produce `list`s; objects of different types can no longer be compared with `<`; strings are now Unicode; it would be nice to move `%` string formating to `.format`; there is a new `next` function for generators; integer division now returns a float; we can now use set literals. +- Check for common problems in [porting to Python 3](http://python3porting.com/problems.html), such as: `print` is now a function; `range` and `map` and other functions no longer produce `list`s; objects of different types can no longer be compared with `<`; strings are now Unicode; it would be nice to move `%` string formatting to `.format`; there is a new `next` function for generators; dividing two integers with `/` now returns a float; we can now use set literals. - Replace old Lisp-based idioms with proper Python idioms. For example, we have many functions that were taken directly from Common Lisp, such as the `every` function: `every(callable, items)` returns true if every element of `items` is callable. This is good Lisp style, but good Python style would be to use `all` and a generator expression: `all(callable(f) for f in items)`. Eventually, fix all calls to these legacy Lisp functions and then remove the functions. - Add more tests in `test_*.py` files. Strive for terseness; it is ok to group multiple asserts into one `def test_something():` function. Move most tests to `test_*.py`, but it is fine to have a single `doctest` example in the docstring of a function in the `.py` file, if the purpose of the doctest is to explain how to use the function, rather than test the implementation. @@ -83,7 +83,7 @@ Reporting Issues - Under which versions of Python does this happen? -- Provide an example of the issue occuring. +- Provide an example of the issue occurring. - Is anybody working on this?
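To make the Lisp-to-Python guideline above concrete, here is a minimal sketch; the `every` helper shown is a hypothetical stand-in for the legacy function described, and the real one in `utils.py` may differ in name or signature:

```python
# Hypothetical legacy helper in the Common Lisp style described above;
# the actual helper in utils.py may differ.
def every(predicate, seq):
    """Return True if predicate(x) is true for every x in seq."""
    return all(predicate(x) for x in seq)

items = [len, str, 42]

# Legacy Lisp-flavored call: is every element of items callable?
print(every(callable, items))           # False, because 42 is not callable

# Preferred Python idiom: the built-in all() with a generator expression.
print(all(callable(f) for f in items))  # False
```

Once every call site uses the built-in form, the legacy helper can be removed, as the guideline suggests.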
diff --git a/agents.ipynb b/agents.ipynb index 6c547ee6c..ed6920bd0 100644 --- a/agents.ipynb +++ b/agents.ipynb @@ -566,7 +566,7 @@ " print('{} decided to move {}wards at location: {}'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n", " agent.moveforward()\n", " else:\n", - " print('{} decided to move {}wards at location: {}, but couldnt'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n", + " print('{} decided to move {}wards at location: {}, but couldn\\'t'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n", " agent.moveforward(False)\n", " elif action == \"eat\":\n", " items = self.list_things_at(agent.location, tclass=Food)\n", @@ -605,17 +605,17 @@ "EnergeticBlindDog decided to move downwards at location: [0, 1]\n", "EnergeticBlindDog drank Water at location: [0, 2]\n", "EnergeticBlindDog decided to turnright at location: [0, 2]\n", - "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldnt\n", + "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldn't\n", "EnergeticBlindDog decided to turnright at location: [0, 2]\n", "EnergeticBlindDog decided to turnright at location: [0, 2]\n", "EnergeticBlindDog decided to turnleft at location: [0, 2]\n", "EnergeticBlindDog decided to turnleft at location: [0, 2]\n", - "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldnt\n", + "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldn't\n", "EnergeticBlindDog decided to turnleft at location: [0, 2]\n", "EnergeticBlindDog decided to turnright at location: [0, 2]\n", - "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldnt\n", + "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldn't\n", "EnergeticBlindDog decided to turnleft at location: [0, 2]\n", - "EnergeticBlindDog decided to move downwards at location: [0, 2], but couldnt\n", + "EnergeticBlindDog decided to move downwards at location: [0, 2], but couldn't\n", "EnergeticBlindDog decided to turnright at location: [0, 2]\n", "EnergeticBlindDog decided to turnleft at location: [0, 2]\n", "EnergeticBlindDog decided to turnleft at location: [0, 2]\n", @@ -684,7 +684,7 @@ " print('{} decided to move {}wards at location: {}'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n", " agent.moveforward()\n", " else:\n", - " print('{} decided to move {}wards at location: {}, but couldnt'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n", + " print('{} decided to move {}wards at location: {}, but couldn\\'t'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n", " agent.moveforward(False)\n", " elif action == \"eat\":\n", " items = self.list_things_at(agent.location, tclass=Food)\n", @@ -1012,7 +1012,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "EnergeticBlindDog decided to move leftwards at location: [0, 3], but couldnt\n" + "EnergeticBlindDog decided to move leftwards at location: [0, 3], but couldn't\n" ] }, { @@ -1069,7 +1069,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "EnergeticBlindDog decided to move leftwards at location: [0, 3], but couldnt\n" + "EnergeticBlindDog decided to move leftwards at location: [0, 3], but couldn't\n" ] }, { diff --git a/csp.ipynb b/csp.ipynb index 2192352cf..f6414f701 100644 --- a/csp.ipynb +++ b/csp.ipynb @@ -647,7 +647,7 @@ "source": [ "## TREE CSP SOLVER\n", "\n", - "The `tree_csp_solver` function (**Figure 6.11** in the book) can be 
used to solve problems whose constraint graph is a tree. Given a CSP, with `neighbors` forming a tree, it returns an assignement that satisfies the given constraints. The algorithm works as follows:\n", + "The `tree_csp_solver` function (**Figure 6.11** in the book) can be used to solve problems whose constraint graph is a tree. Given a CSP, with `neighbors` forming a tree, it returns an assignment that satisfies the given constraints. The algorithm works as follows:\n", "\n", "First it finds the *topological sort* of the tree. This is an ordering of the tree where each variable/node comes after its parent in the tree. The function that accomplishes this is `topological_sort`, which builds the topological sort using the recursive function `build_topological`. That function is an augmented DFS, where each newly visited node of the tree is pushed on a stack. The stack in the end holds the variables topologically sorted.\n", "\n", @@ -896,7 +896,7 @@ "\n", "visualize_callback = make_visualize(iteration_slider)\n", "\n", - "visualize_button = widgets.ToggleButton(desctiption = \"Visualize\", value = False)\n", + "visualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\n", "time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n", "\n", "a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\n", @@ -1055,7 +1055,7 @@ "\n", "visualize_callback = make_visualize(iteration_slider)\n", "\n", - "visualize_button = widgets.ToggleButton(desctiption = \"Visualize\", value = False)\n", + "visualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\n", "time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n", "\n", "a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\n", @@ -1138,7 +1138,7 @@ "\n", "visualize_callback = make_visualize(iteration_slider)\n", "\n", - "visualize_button = widgets.ToggleButton(desctiption = \"Visualize\", value = False)\n", + "visualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\n", "time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n", "\n", "a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\n", diff --git a/games.ipynb b/games.ipynb index 042116969..51a2015b4 100644 --- a/games.ipynb +++ b/games.ipynb @@ -210,7 +210,7 @@ "\n", "\n", "\n", - "The states are represented wih capital letters inside the triangles (eg. \"A\") while moves are the labels on the edges between states (eg. \"a1\"). Terminal nodes carry utility values. Note that the terminal nodes are named in this example 'B1', 'B2' and 'B2' for the nodes below 'B', and so forth.\n", + "The states are represented with capital letters inside the triangles (eg. \"A\") while moves are the labels on the edges between states (eg. \"a1\"). Terminal nodes carry utility values. 
Note that the terminal nodes are named in this example 'B1', 'B2' and 'B3' for the nodes below 'B', and so forth.\n", "\n", "We will model the moves, utilities and initial state like this:" ] diff --git a/gui/xy_vacuum_environment.py b/gui/xy_vacuum_environment.py index 14c3abc1a..4ba4497ea 100644 --- a/gui/xy_vacuum_environment.py +++ b/gui/xy_vacuum_environment.py @@ -124,7 +124,7 @@ def update_env(self): xf, yf = agt.location def reset_env(self, agt): - """Resets the GUI environment to the intial state.""" + """Resets the GUI environment to the initial state.""" self.read_env() for i, btn_row in enumerate(self.buttons): for j, btn in enumerate(btn_row): diff --git a/learning.ipynb b/learning.ipynb index 16bb4bd6b..f58d60e85 100644 --- a/learning.ipynb +++ b/learning.ipynb @@ -1065,7 +1065,7 @@ "source": [ "The implementation of `DecisionTreeLearner` provided in [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py) uses information gain as the metric for selecting which attribute to test for splitting. The function builds the tree top-down in a recursive manner. Based on the input it makes one of the four choices:\n",
    \n", - "
  1. If the input at the current step has no training data we return the mode of classes of input data recieved in the parent step (previous level of recursion).
  2. \n", + "
  3. If the input at the current step has no training data we return the mode of classes of input data received in the parent step (previous level of recursion).
  4. \n", "
  5. If all values in training data belong to the same class it returns a `DecisionLeaf` whose class label is the class which all the data belongs to.
  6. \n", "
  7. If the data has no attributes that can be tested we return the class with highest plurality value in the training data.
  8. \n", "
  9. We choose the attribute which gives the highest amount of entropy gain and return a `DecisionFork` which splits based on this attribute. Each branch recursively calls `decision_tree_learning` to construct the sub-tree.
  10. \n", @@ -1155,7 +1155,7 @@ "\n", "*a)* The probability of **Class** in the dataset.\n", "\n", - "*b)* The conditional probability of each feature occuring in an item classified in **Class**.\n", + "*b)* The conditional probability of each feature occurring in an item classified in **Class**.\n", "\n", "*c)* The probabilities of each individual feature.\n", "\n", @@ -1339,7 +1339,7 @@ "source": [ "You can see the means of the features for the \"Setosa\" class and the deviations for \"Versicolor\".\n", "\n", - "The prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occuring with the conditional probabilities of the feature values for the class.\n", + "The prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occurring with the conditional probabilities of the feature values for the class.\n", "\n", "Since we are using the Gaussian distribution, we will input the value for each feature into the Gaussian function, together with the mean and deviation of the feature. This will return the probability of the particular feature value for the given class. We will repeat for each class and pick the max value." ] diff --git a/logic.ipynb b/logic.ipynb index fb42df7aa..4ac164861 100644 --- a/logic.ipynb +++ b/logic.ipynb @@ -766,7 +766,7 @@ "metadata": {}, "source": [ "\"Nono ... has some missiles\"
    \n", - "This states the existance of some missile which is owned by Nono. $\\exists x \\text{Owns}(\\text{Nono}, x) \\land \\text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.\n", + "This states the existence of some missile which is owned by Nono. $\\exists x \\text{Owns}(\\text{Nono}, x) \\land \\text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.\n", "\n", "$\\text{Owns}(\\text{Nono}, \\text{M1}), \\text{Missile}(\\text{M1})$" ] diff --git a/mdp.ipynb b/mdp.ipynb index af46f948c..6af87d401 100644 --- a/mdp.ipynb +++ b/mdp.ipynb @@ -327,7 +327,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "With this we have sucessfully represented our MDP. Later we will look at ways to solve this MDP." + "With this we have successfully represented our MDP. Later we will look at ways to solve this MDP." ] }, { @@ -754,7 +754,7 @@ "\n", "visualize_callback = make_visualize(iteration_slider)\n", "\n", - "visualize_button = widgets.ToggleButton(desctiption = \"Visualize\", value = False)\n", + "visualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\n", "time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n", "a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\n", "display(a)" diff --git a/nlp.ipynb b/nlp.ipynb index f95d8283c..7d4f3c87a 100644 --- a/nlp.ipynb +++ b/nlp.ipynb @@ -79,7 +79,7 @@ "source": [ "### Probabilistic Context-Free Grammar\n", "\n", - "While a simple CFG can be very useful, we might want to know the chance of each rule occuring. Above, we do not know if `S` is more likely to be replaced by `aSb` or `ε`. **Probabilistic Context-Free Grammars (PCFG)** are built to fill exactly that need. Each rule has a probability, given in brackets, and the probabilities of a rule sum up to 1:\n", + "While a simple CFG can be very useful, we might want to know the chance of each rule occurring. Above, we do not know if `S` is more likely to be replaced by `aSb` or `ε`. **Probabilistic Context-Free Grammars (PCFG)** are built to fill exactly that need. Each rule has a probability, given in brackets, and the probabilities of a rule sum up to 1:\n", "\n", "```\n", "S -> aSb [0.7] | ε [0.3]\n", @@ -89,7 +89,7 @@ "\n", "An issue with *PCFGs* is how we will assign the various probabilities to the rules. We could use our knowledge as humans to assign the probabilities, but that is a laborious and prone to error task. Instead, we can *learn* the probabilities from data. Data is categorized as labeled (with correctly parsed sentences, usually called a **treebank**) or unlabeled (given only lexical and syntactic category names).\n", "\n", - "With labeled data, we can simply count the occurences. For the above grammar, if we have 100 `S` rules and 30 of them are of the form `S -> ε`, we assign a probability of 0.3 to the transformation.\n", + "With labeled data, we can simply count the occurrences. For the above grammar, if we have 100 `S` rules and 30 of them are of the form `S -> ε`, we assign a probability of 0.3 to the transformation.\n", "\n", "With unlabeled data we have to learn both the grammar rules and the probability of each rule. We can go with many approaches, one of them the **inside-outside** algorithm. 
It uses a dynamic programming approach, that first finds the probability of a substring being generated by each rule, and then estimates the probability of each rule." ] @@ -119,7 +119,7 @@ "source": [ "### Lexicon\n", "\n", - "The lexicon of a language is defined as a list of allowable words. These words are grouped into the usual classes: `verbs`, `nouns`, `adjectives`, `adverbs`, `pronouns`, `names`, `articles`, `prepositions` and `conjuctions`. For the first five classes it is impossible to list all words, since words are continuously being added in the classes. Recently \"google\" was added to the list of verbs, and words like that will continue to pop up and get added to the lists. For that reason, these first five categories are called **open classes**. The rest of the categories have much fewer words and much less development. While words like \"thou\" were commonly used in the past but have declined almost completely in usage, most changes take many decades or centuries to manifest, so we can safely assume the categories will remain static for the foreseeable future. Thus, these categories are called **closed classes**.\n", + "The lexicon of a language is defined as a list of allowable words. These words are grouped into the usual classes: `verbs`, `nouns`, `adjectives`, `adverbs`, `pronouns`, `names`, `articles`, `prepositions` and `conjunctions`. For the first five classes it is impossible to list all words, since words are continuously being added in the classes. Recently \"google\" was added to the list of verbs, and words like that will continue to pop up and get added to the lists. For that reason, these first five categories are called **open classes**. The rest of the categories have much fewer words and much less development. While words like \"thou\" were commonly used in the past but have declined almost completely in usage, most changes take many decades or centuries to manifest, so we can safely assume the categories will remain static for the foreseeable future. Thus, these categories are called **closed classes**.\n", "\n", "An example lexicon for a PCFG (note that other classes can also be used according to the language, like `digits`, or `RelPro` for relative pronoun):\n", "\n", @@ -133,7 +133,7 @@ "Name -> john [0.05] | mary [0.05] | peter [0.01] | ...\n", "Article -> the [0.35] | a [0.25] | an [0.025] | ...\n", "Preposition -> to [0.25] | in [0.2] | at [0.1] | ...\n", - "Conjuction -> and [0.5] | or [0.2] | but [0.2] | ...\n", + "Conjunction -> and [0.5] | or [0.2] | but [0.2] | ...\n", "Digit -> 1 [0.3] | 2 [0.2] | 0 [0.2] | ...\n", "```" ] @@ -147,7 +147,7 @@ "With grammars we combine words from the lexicon into valid phrases. A grammar is comprised of **grammar rules**. Each rule transforms the left-hand side of the rule into the right-hand side. For example, `A -> B` means that `A` transforms into `B`. Let's build a grammar for the language we started building with the lexicon. 
We will use a PCFG.\n", "\n", "```\n", - "S -> NP VP [0.9] | S Conjuction S [0.1]\n", + "S -> NP VP [0.9] | S Conjunction S [0.1]\n", "\n", "NP -> Pronoun [0.3] | Name [0.1] | Noun [0.1] | Article Noun [0.25] |\n", " Article Adjs Noun [0.05] | Digit [0.05] | NP PP [0.1] |\n", @@ -216,9 +216,9 @@ "name": "stdout", "output_type": "stream", "text": [ - "Lexicon {'Adverb': ['here', 'lightly', 'now'], 'Verb': ['is', 'say', 'are'], 'Digit': ['1', '2', '0'], 'RelPro': ['that', 'who', 'which'], 'Conjuction': ['and', 'or', 'but'], 'Name': ['john', 'mary', 'peter'], 'Pronoun': ['me', 'you', 'he'], 'Article': ['the', 'a', 'an'], 'Noun': ['robot', 'sheep', 'fence'], 'Adjective': ['good', 'new', 'sad'], 'Preposition': ['to', 'in', 'at']}\n", + "Lexicon {'Adverb': ['here', 'lightly', 'now'], 'Verb': ['is', 'say', 'are'], 'Digit': ['1', '2', '0'], 'RelPro': ['that', 'who', 'which'], 'Conjunction': ['and', 'or', 'but'], 'Name': ['john', 'mary', 'peter'], 'Pronoun': ['me', 'you', 'he'], 'Article': ['the', 'a', 'an'], 'Noun': ['robot', 'sheep', 'fence'], 'Adjective': ['good', 'new', 'sad'], 'Preposition': ['to', 'in', 'at']}\n", "\n", - "Rules: {'RelClause': [['RelPro', 'VP']], 'Adjs': [['Adjective'], ['Adjective', 'Adjs']], 'NP': [['Pronoun'], ['Name'], ['Noun'], ['Article', 'Noun'], ['Article', 'Adjs', 'Noun'], ['Digit'], ['NP', 'PP'], ['NP', 'RelClause']], 'S': [['NP', 'VP'], ['S', 'Conjuction', 'S']], 'VP': [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']], 'PP': [['Preposition', 'NP']]}\n" + "Rules: {'RelClause': [['RelPro', 'VP']], 'Adjs': [['Adjective'], ['Adjective', 'Adjs']], 'NP': [['Pronoun'], ['Name'], ['Noun'], ['Article', 'Noun'], ['Article', 'Adjs', 'Noun'], ['Digit'], ['NP', 'PP'], ['NP', 'RelClause']], 'S': [['NP', 'VP'], ['S', 'Conjunction', 'S']], 'VP': [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']], 'PP': [['Preposition', 'NP']]}\n" ] } ], @@ -233,14 +233,14 @@ " Name = \"john | mary | peter\",\n", " Article = \"the | a | an\",\n", " Preposition = \"to | in | at\",\n", - " Conjuction = \"and | or | but\",\n", + " Conjunction = \"and | or | but\",\n", " Digit = \"1 | 2 | 0\"\n", ")\n", "\n", "print(\"Lexicon\", lexicon)\n", "\n", "rules = Rules(\n", - " S = \"NP VP | S Conjuction S\",\n", + " S = \"NP VP | S Conjunction S\",\n", " NP = \"Pronoun | Name | Noun | Article Noun \\\n", " | Article Adjs Noun | Digit | NP PP | NP RelClause\",\n", " VP = \"Verb | VP NP | VP Adjective | VP PP | VP Adverb\",\n", @@ -393,9 +393,9 @@ "name": "stdout", "output_type": "stream", "text": [ - "Lexicon {'Noun': [('robot', 0.4), ('sheep', 0.4), ('fence', 0.2)], 'Name': [('john', 0.4), ('mary', 0.4), ('peter', 0.2)], 'Adverb': [('here', 0.6), ('lightly', 0.1), ('now', 0.3)], 'Digit': [('0', 0.35), ('1', 0.35), ('2', 0.3)], 'Adjective': [('good', 0.5), ('new', 0.2), ('sad', 0.3)], 'Pronoun': [('me', 0.3), ('you', 0.4), ('he', 0.3)], 'Article': [('the', 0.5), ('a', 0.25), ('an', 0.25)], 'Preposition': [('to', 0.4), ('in', 0.3), ('at', 0.3)], 'Verb': [('is', 0.5), ('say', 0.3), ('are', 0.2)], 'Conjuction': [('and', 0.5), ('or', 0.2), ('but', 0.3)], 'RelPro': [('that', 0.5), ('who', 0.3), ('which', 0.2)]}\n", + "Lexicon {'Noun': [('robot', 0.4), ('sheep', 0.4), ('fence', 0.2)], 'Name': [('john', 0.4), ('mary', 0.4), ('peter', 0.2)], 'Adverb': [('here', 0.6), ('lightly', 0.1), ('now', 0.3)], 'Digit': [('0', 0.35), ('1', 0.35), ('2', 0.3)], 'Adjective': [('good', 0.5), ('new', 0.2), ('sad', 0.3)], 'Pronoun': [('me', 0.3), ('you', 0.4), ('he', 0.3)], 
'Article': [('the', 0.5), ('a', 0.25), ('an', 0.25)], 'Preposition': [('to', 0.4), ('in', 0.3), ('at', 0.3)], 'Verb': [('is', 0.5), ('say', 0.3), ('are', 0.2)], 'Conjunction': [('and', 0.5), ('or', 0.2), ('but', 0.3)], 'RelPro': [('that', 0.5), ('who', 0.3), ('which', 0.2)]}\n", "\n", - "Rules: {'S': [(['NP', 'VP'], 0.6), (['S', 'Conjuction', 'S'], 0.4)], 'RelClause': [(['RelPro', 'VP'], 1.0)], 'VP': [(['Verb'], 0.3), (['VP', 'NP'], 0.2), (['VP', 'Adjective'], 0.25), (['VP', 'PP'], 0.15), (['VP', 'Adverb'], 0.1)], 'Adjs': [(['Adjective'], 0.5), (['Adjective', 'Adjs'], 0.5)], 'PP': [(['Preposition', 'NP'], 1.0)], 'NP': [(['Pronoun'], 0.2), (['Name'], 0.05), (['Noun'], 0.2), (['Article', 'Noun'], 0.15), (['Article', 'Adjs', 'Noun'], 0.1), (['Digit'], 0.05), (['NP', 'PP'], 0.15), (['NP', 'RelClause'], 0.1)]}\n" + "Rules: {'S': [(['NP', 'VP'], 0.6), (['S', 'Conjunction', 'S'], 0.4)], 'RelClause': [(['RelPro', 'VP'], 1.0)], 'VP': [(['Verb'], 0.3), (['VP', 'NP'], 0.2), (['VP', 'Adjective'], 0.25), (['VP', 'PP'], 0.15), (['VP', 'Adverb'], 0.1)], 'Adjs': [(['Adjective'], 0.5), (['Adjective', 'Adjs'], 0.5)], 'PP': [(['Preposition', 'NP'], 1.0)], 'NP': [(['Pronoun'], 0.2), (['Name'], 0.05), (['Noun'], 0.2), (['Article', 'Noun'], 0.15), (['Article', 'Adjs', 'Noun'], 0.1), (['Digit'], 0.05), (['NP', 'PP'], 0.15), (['NP', 'RelClause'], 0.1)]}\n" ] } ], @@ -410,14 +410,14 @@ " Name = \"john [0.4] | mary [0.4] | peter [0.2]\",\n", " Article = \"the [0.5] | a [0.25] | an [0.25]\",\n", " Preposition = \"to [0.4] | in [0.3] | at [0.3]\",\n", - " Conjuction = \"and [0.5] | or [0.2] | but [0.3]\",\n", + " Conjunction = \"and [0.5] | or [0.2] | but [0.3]\",\n", " Digit = \"0 [0.35] | 1 [0.35] | 2 [0.3]\"\n", ")\n", "\n", "print(\"Lexicon\", lexicon)\n", "\n", "rules = ProbRules(\n", - " S = \"NP VP [0.6] | S Conjuction S [0.4]\",\n", + " S = \"NP VP [0.6] | S Conjunction S [0.4]\",\n", " NP = \"Pronoun [0.2] | Name [0.05] | Noun [0.2] | Article Noun [0.15] \\\n", " | Article Adjs Noun [0.1] | Digit [0.05] | NP PP [0.15] | NP RelClause [0.1]\",\n", " VP = \"Verb [0.3] | VP NP [0.2] | VP Adjective [0.25] | VP PP [0.15] | VP Adverb [0.1]\",\n", diff --git a/nlp.py b/nlp.py index f34d088b5..ace6de90d 100644 --- a/nlp.py +++ b/nlp.py @@ -214,7 +214,7 @@ def __repr__(self): E_Prob = ProbGrammar('E_Prob', # The Probabilistic Grammar from the notebook ProbRules( - S="NP VP [0.6] | S Conjuction S [0.4]", + S="NP VP [0.6] | S Conjunction S [0.4]", NP="Pronoun [0.2] | Name [0.05] | Noun [0.2] | Article Noun [0.15] \ | Article Adjs Noun [0.1] | Digit [0.05] | NP PP [0.15] | NP RelClause [0.1]", VP="Verb [0.3] | VP NP [0.2] | VP Adjective [0.25] | VP PP [0.15] | VP Adverb [0.1]", @@ -232,7 +232,7 @@ def __repr__(self): Name="john [0.4] | mary [0.4] | peter [0.2]", Article="the [0.5] | a [0.25] | an [0.25]", Preposition="to [0.4] | in [0.3] | at [0.3]", - Conjuction="and [0.5] | or [0.2] | but [0.3]", + Conjunction="and [0.5] | or [0.2] | but [0.3]", Digit="0 [0.35] | 1 [0.35] | 2 [0.3]" )) diff --git a/nlp_apps.ipynb b/nlp_apps.ipynb index d50588cb7..2da3b9283 100644 --- a/nlp_apps.ipynb +++ b/nlp_apps.ipynb @@ -30,7 +30,7 @@ "\n", "First we need to build our dataset. We will take as input text in English and in German and we will extract n-gram character models (in this case, *bigrams* for n=2). For English, we will use *Flatland* by Edwin Abbott and for German *Faust* by Goethe.\n", "\n", - "Let's build our text models for each language, which will hold the probability of each bigram occuring in the text." 
+ "Let's build our text models for each language, which will hold the probability of each bigram occurring in the text." ] }, { diff --git a/notebook.py b/notebook.py index 3fe64de2d..6e1a0fbfc 100644 --- a/notebook.py +++ b/notebook.py @@ -260,7 +260,7 @@ class Canvas: """Inherit from this class to manage the HTML canvas element in jupyter notebooks. To create an object of this class any_name_xyz = Canvas("any_name_xyz") The first argument given must be the name of the object being created. - IPython must be able to refernce the variable name that is being passed.""" + IPython must be able to reference the variable name that is being passed.""" def __init__(self, varname, width=800, height=600, cid=None): self.name = varname @@ -279,10 +279,10 @@ def mouse_move(self, x, y): raise NotImplementedError def execute(self, exec_str): - """Stores the command to be exectued to a list which is used later during update()""" + """Stores the command to be executed to a list which is used later during update()""" if not isinstance(exec_str, str): print("Invalid execution argument:", exec_str) - self.alert("Recieved invalid execution command format") + self.alert("Received invalid execution command format") prefix = "{0}_canvas_object.".format(self.cid) self.exec_list.append(prefix + exec_str + ';') diff --git a/planning.ipynb b/planning.ipynb index 37461ee9b..1054f1ee8 100644 --- a/planning.ipynb +++ b/planning.ipynb @@ -63,7 +63,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It is interesting to see the way preconditions and effects are represented here. Instead of just being a list of expressions each, they consist of two lists - `precond_pos` and `precond_neg`. This is to work around the fact that PDDL doesn't allow for negations. Thus, for each precondition, we maintain a seperate list of those preconditions that must hold true, and those whose negations must hold true. Similarly, instead of having a single list of expressions that are the result of executing an action, we have two. The first (`effect_add`) contains all the expressions that will evaluate to true if the action is executed, and the the second (`effect_neg`) contains all those expressions that would be false if the action is executed (ie. their negations would be true).\n", + "It is interesting to see the way preconditions and effects are represented here. Instead of just being a list of expressions each, they consist of two lists - `precond_pos` and `precond_neg`. This is to work around the fact that PDDL doesn't allow for negations. Thus, for each precondition, we maintain a separate list of those preconditions that must hold true, and those whose negations must hold true. Similarly, instead of having a single list of expressions that are the result of executing an action, we have two. The first (`effect_add`) contains all the expressions that will evaluate to true if the action is executed, and the the second (`effect_neg`) contains all those expressions that would be false if the action is executed (ie. their negations would be true).\n", "\n", "The constructor parameters, however combine the two precondition lists into a single `precond` parameter, and the effect lists into a single `effect` parameter." 
] diff --git a/probability.py b/probability.py index 5c9e28245..a9f65fbb0 100644 --- a/probability.py +++ b/probability.py @@ -651,7 +651,7 @@ def particle_filtering(e, N, HMM): return s # _________________________________________________________________________ -## TODO: Implement continous map for MonteCarlo similar to Fig25.10 from the book +## TODO: Implement continuous map for MonteCarlo similar to Fig25.10 from the book class MCLmap: """Map which provides probability distributions and sensor readings. diff --git a/rl.ipynb b/rl.ipynb index b0920b8ed..019bef3b7 100644 --- a/rl.ipynb +++ b/rl.ipynb @@ -336,7 +336,7 @@ "source": [ "The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the __ call __ method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a mdp similar to the PassiveTDAgent.\n", "\n", - " Let us use the same GridMDP object we used above. **Figure 17.1 (sequential_decision_environment)** is similar to **Figure 21.1** but has some discounting as **gamma = 0.9**. The class also implements an exploration function **f** which returns fixed **Rplus** untill agent has visited state, action **Ne** number of times. This is the same as the one defined on page **842** of the book. The method **actions_in_state** returns actions possible in given state. It is useful when applying max and argmax operations." + " Let us use the same GridMDP object we used above. **Figure 17.1 (sequential_decision_environment)** is similar to **Figure 21.1** but has some discounting as **gamma = 0.9**. The class also implements an exploration function **f** which returns fixed **Rplus** until the agent has visited the state, action pair **Ne** number of times. This is the same as the one defined on page **842** of the book. The method **actions_in_state** returns the actions possible in a given state. It is useful when applying max and argmax operations." ] }, { @@ -381,7 +381,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now let us see the Q Values. The keys are state-action pairs. Where differnt actions correspond according to:\n", + "Now let us see the Q Values. The keys are state-action pairs, where the different actions correspond to:\n", "\n", "north = (0, 1)\n", "south = (0,-1)\n", diff --git a/rl.py b/rl.py index 868784e9f..3258bfffe 100644 --- a/rl.py +++ b/rl.py @@ -13,7 +13,7 @@ class PassiveADPAgent: on a given MDP and policy. [Figure 21.2]""" class ModelMDP(MDP): - """ Class for implementing modifed Version of input MDP with + """ Class for implementing a modified version of the input MDP with an editable transition model P and a custom function T.
""" def __init__(self, init, actlist, terminals, gamma, states): super().__init__(init, actlist, terminals, gamma) diff --git a/search-4e.ipynb b/search-4e.ipynb index 73da69119..c2d0dae61 100644 --- a/search-4e.ipynb +++ b/search-4e.ipynb @@ -929,7 +929,7 @@ " \"\"\"Provide an initial state and optional goal states.\n", " A subclass can have additional keyword arguments.\"\"\"\n", " self.initial = initial # The initial state of the problem.\n", - " self.goals = goals # A collection of possibe goal states.\n", + " self.goals = goals # A collection of possible goal states.\n", " self.__dict__.update(**additional_keywords)\n", "\n", " def actions(self, state):\n", @@ -2706,7 +2706,7 @@ " // Register the callback with on_msg.\n", " comm.on_msg(function(msg) {\n", " //console.log('receiving', msg['content']['data'], msg);\n", - " // Pass the mpl event to the overriden (by mpl) onmessage function.\n", + " // Pass the mpl event to the overridden (by mpl) onmessage function.\n", " ws.onmessage(msg['content']['data'])\n", " });\n", " return ws;\n", @@ -3559,7 +3559,7 @@ " // Register the callback with on_msg.\n", " comm.on_msg(function(msg) {\n", " //console.log('receiving', msg['content']['data'], msg);\n", - " // Pass the mpl event to the overriden (by mpl) onmessage function.\n", + " // Pass the mpl event to the overridden (by mpl) onmessage function.\n", " ws.onmessage(msg['content']['data'])\n", " });\n", " return ws;\n", diff --git a/search.ipynb b/search.ipynb index 6da1d0ef5..bf3fe5a37 100644 --- a/search.ipynb +++ b/search.ipynb @@ -2091,7 +2091,7 @@ "source": [ "### Explanation\n", "\n", - "Before we solve problems using the genetic algorithm, we will explain how to intuitively understand the algorithm using a trivial exmaple.\n", + "Before we solve problems using the genetic algorithm, we will explain how to intuitively understand the algorithm using a trivial example.\n", "\n", "#### Generating Phrases\n", "\n", diff --git a/search.py b/search.py index b705d6f28..8458cb132 100644 --- a/search.py +++ b/search.py @@ -894,7 +894,7 @@ def mutate(x, gene_pool, pmut): class Graph: - """A graph connects nodes (verticies) by edges (links). Each edge can also + """A graph connects nodes (vertices) by edges (links). Each edge can also have a length associated with it. The constructor call is something like: g = Graph({'A': {'B': 1, 'C': 2}) this makes a graph with 3 nodes, A, B, and C, with an edge of length 1 from diff --git a/tests/test_utils.py b/tests/test_utils.py index a07bc76ef..dbc1bc01a 100644 --- a/tests/test_utils.py +++ b/tests/test_utils.py @@ -281,7 +281,7 @@ def test_FIFOQueue() : front_head += 1 # check for __len__ method assert len(queue) == front_head - back_head - # chek for __contains__ method + # check for __contains__ method if front_head - back_head > 0 : assert random.choice(test_data[back_head:front_head]) in queue diff --git a/text.ipynb b/text.ipynb index aeebf8ecd..f8c3aea13 100644 --- a/text.ipynb +++ b/text.ipynb @@ -115,7 +115,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We see that the most used word in *Flatland* is 'the', with 2081 occurences, while the most used sequence is 'of the' with 368 occurences. Also, the probability of 'an' is approximately 0.003, while for 'i was' it is close to 0.001. Note that the strings used as keys are all lowercase. 
For the unigram model, the keys are single strings, while for n-gram models we have n-tuples of strings.\n", "\n", "Below we take a look at how we can get information from the conditional probabilities of the model, and how we can generate the next word in a sequence." ] }, { @@ -297,7 +297,7 @@ "\n", "We are given a string containing words of a sentence, but all the spaces are gone! It is very hard to read and we would like to separate the words in the string. We can accomplish this by employing the `Viterbi Segmentation` algorithm. It takes as input the string to segment and a text model, and it returns a list of the separate words.\n", "\n", - "The algorithm operates in a dynamic programming approach. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmentating the string into \"windows\", each window representing a word (real or gibberish). It then calculates the probability of the sequence up that window/word occuring and updates its solution. When it is done, it traces back from the final word and finds the complete sequence of words." + "The algorithm operates in a dynamic programming approach. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmenting the string into \"windows\", each window representing a word (real or gibberish). It then calculates the probability of the sequence up to that window/word occurring and updates its solution. When it is done, it traces back from the final word and finds the complete sequence of words." ] }, { @@ -386,7 +386,7 @@ "\n", "How does an IR system determine which documents are relevant though? We can sign a document as relevant if all the words in the query appear in it, and sign it as irrelevant otherwise. We can even extend the query language to support boolean operations (for example, \"paint AND brush\") and then sign as relevant the outcome of the query for the document. This technique though does not give a level of relevancy. All the documents are either relevant or irrelevant, but in reality some documents are more relevant than others.\n", "\n", - "So, instead of a boolean relevancy system, we use a *scoring function*. There are many scoring functions around for many different situations. One of the most used takes into account the frequency of the words appearing in a document, the frequency of a word appearing across documents (for example, the word \"a\" appears a lot, so it is not very important) and the length of a document (since large documents will have higher occurences for the query terms, but a short document with a lot of occurences seems very relevant). We combine these properties in a formula and we get a numeric score for each document, so we can then quantify relevancy and pick the best documents.\n", + "So, instead of a boolean relevancy system, we use a *scoring function*. There are many scoring functions around for many different situations.
One of the most used takes into account the frequency of the words appearing in a document, the frequency of a word appearing across documents (for example, the word \"a\" appears a lot, so it is not very important) and the length of a document (since large documents will have higher occurrences for the query terms, but a short document with a lot of occurrences seems very relevant). We combine these properties in a formula and we get a numeric score for each document, so we can then quantify relevancy and pick the best documents.\n", "\n", "These scoring functions are not perfect though and there is room for improvement. For instance, for the above scoring function we assume each word is independent. That is not the case though, since words can share meaning. For example, the words \"painter\" and \"painters\" are closely related. If in a query we have the word \"painter\" and in a document the word \"painters\" appears a lot, this might be an indication that the document is relevant but we are missing out since we are only looking for \"painter\". There are a lot of ways to combat this. One of them is to reduce the query and document words into their stems. For example, both \"painter\" and \"painters\" have \"paint\" as their stem form. This can improve slightly the performance of algorithms.\n", "\n", @@ -527,7 +527,7 @@ "source": [ "## INFORMATION EXTRACTION\n", "\n", - "**Information Extraction (IE)** is a method for finding occurences of object classes and relationships in text. Unlike IR systems, an IE system includes (limited) notions of syntax and semantics. While it is difficult to extract object information in a general setting, for more specific domains the system is very useful. One model of an IE system makes use of templates that match with strings in a text.\n", + "**Information Extraction (IE)** is a method for finding occurrences of object classes and relationships in text. Unlike IR systems, an IE system includes (limited) notions of syntax and semantics. While it is difficult to extract object information in a general setting, for more specific domains the system is very useful. One model of an IE system makes use of templates that match with strings in a text.\n", "\n", "A typical example of such a model is reading prices from web pages. Prices usually appear after a dollar and consist of numbers, maybe followed by two decimal points. Before the price, usually there will appear a string like \"price:\". Let's build a sample template.\n", "\n", @@ -535,7 +535,7 @@ "\n", "`[$][0-9]+([.][0-9][0-9])?`\n", "\n", - "Where `+` means 1 or more occurences and `?` means at most 1 occurence. Usually a template consists of a prefix, a target and a postfix regex. In this template, the prefix regex can be \"price:\", the target regex can be the above regex and the postfix regex can be empty.\n", + "Where `+` means 1 or more occurrences and `?` means at most 1 occurrence. Usually a template consists of a prefix, a target and a postfix regex. In this template, the prefix regex can be \"price:\", the target regex can be the above regex and the postfix regex can be empty.\n", "\n", "A template can match with multiple strings. If this is the case, we need a way to resolve the multiple matches. Instead of having just one template, we can use multiple templates (ordered by priority) and pick the match from the highest-priority template. We can also use other ways to pick. For the dollar example, we can pick the match closer to the numerical half of the highest match. 
For the text \"Price $90, special offer $70, shipping $5\" we would pick \"$70\" since it is closer to the half of the highest match (\"$90\")." ] diff --git a/utils.py b/utils.py index e5dbfd5cd..709c5621f 100644 --- a/utils.py +++ b/utils.py @@ -22,7 +22,7 @@ def sequence(iterable): def removeall(item, seq): - """Return a copy of seq (or string) with all occurences of item removed.""" + """Return a copy of seq (or string) with all occurrences of item removed.""" if isinstance(seq, str): return seq.replace(item, '') else: @@ -135,7 +135,7 @@ def element_wise_product(X, Y): def matrix_multiplication(X_M, *Y_M): - """Return a matrix as a matrix-multiplication of X_M and arbitary number of matrices *Y_M""" + """Return a matrix as a matrix-multiplication of X_M and arbitrary number of matrices *Y_M""" def _mat_mult(X_M, Y_M): """Return a matrix as a matrix-multiplication of two matrices X_M and Y_M @@ -418,7 +418,7 @@ def open_data(name, mode='r'): def failure_test(algorithm, tests): """Grades the given algorithm based on how many tests it passes. - Most algorithms have arbitary output on correct execution, which is difficult + Most algorithms have arbitrary output on correct execution, which is difficult to check for correctness. On the other hand, a lot of algorithms output something particular on fail (for example, False, or None). tests is a list with each element in the form: (values, failure_output).""" diff --git a/vacuum_world.ipynb b/vacuum_world.ipynb index 34bcd2d5b..2679eb464 100644 --- a/vacuum_world.ipynb +++ b/vacuum_world.ipynb @@ -116,7 +116,7 @@ "# Initialize the two-state environment\n", "trivial_vacuum_env = TrivialVacuumEnvironment()\n", "\n", - "# Check the intial state of the environment\n", + "# Check the initial state of the environment\n", "print(\"State of the Environment: {}.\".format(trivial_vacuum_env.status))" ] }, @@ -305,7 +305,7 @@ "source": [ "## SIMPLE REFLEX AGENT PROGRAM\n", "\n", - "A simple reflex agent program selects actions on the basis of the *current* percept, ignoring the rest of the percept history. These agents work on a **condition-action rule** (also called **situation-action rule**, **production** or **if-then rule**), which tells the agent the action to trigger when a particular situtation is encountered. \n", + "A simple reflex agent program selects actions on the basis of the *current* percept, ignoring the rest of the percept history. These agents work on a **condition-action rule** (also called **situation-action rule**, **production** or **if-then rule**), which tells the agent the action to trigger when a particular situation is encountered. \n", "\n", "The schematic diagram shown in **Figure 2.9** of the book will make this more clear:\n", "\n", @@ -415,7 +415,7 @@ "source": [ "## MODEL-BASED REFLEX AGENT PROGRAM\n", "\n", - "A model-based reflex agent maintains some sort of **internal state** that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. In additon to this, it also requires a **model** of the world, that is, knowledge about \"how the world works\".\n", + "A model-based reflex agent maintains some sort of **internal state** that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. 
In addition to this, it also requires a **model** of the world, that is, knowledge about \"how the world works\".\n", "\n", "The schematic diagram shown in **Figure 2.11** of the book will make this more clear:\n", "" @@ -442,7 +442,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We need a another function UPDATE-STATE which will be reponsible for creating a new state description." + "We need another function UPDATE-STATE which will be responsible for creating a new state description." ] }, {