diff --git a/.coveragerc b/.coveragerc
new file mode 100644
index 000000000..2398f62e3
--- /dev/null
+++ b/.coveragerc
@@ -0,0 +1,3 @@
+[report]
+omit =
+ tests/*
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index 9a4bb620f..58e83214e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -44,6 +44,7 @@ nosetests.xml
coverage.xml
*,cover
.hypothesis/
+*.pytest_cache
# Translations
*.mo
@@ -70,3 +71,8 @@ target/
# dotenv
.env
+.idea
+
+# for macOS
+.DS_Store
+._.DS_Store
diff --git a/.travis.yml b/.travis.yml
index e0932e6b2..e465e8e4c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,20 +1,19 @@
-language:
- - python
+language: python
python:
- - "3.4"
+ - 3.5
+ - 3.6
+ - 3.7
+ - 3.8
before_install:
- git submodule update --remote
install:
- - pip install six
- - pip install flake8
- - pip install ipython
- - pip install matplotlib
+ - pip install --upgrade -r requirements.txt
script:
- - py.test
+ - py.test --cov=./
- python -m doctest -v *.py
after_success:
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 400455274..f92643700 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,12 +1,14 @@
How to Contribute to aima-python
==========================
-Thanks for considering contributing to `aima-python`! Whether you are an aspiring [Google Summer of Code](https://summerofcode.withgoogle.com/organizations/5663121491361792/) student, or an independent contributor, here is a guide on how you can help.
+Thanks for considering contributing to `aima-python`! Whether you are an aspiring [Google Summer of Code](https://summerofcode.withgoogle.com/organizations/5431334980288512/) student, or an independent contributor, here is a guide on how you can help.
-The main ways you can contribute to the repository are the following:
+First of all, you can read these write-ups from past GSoC students to get an idea about what you can do for the project. [Chipe1](https://github.com/aimacode/aima-python/issues/641) - [MrDupin](https://github.com/aimacode/aima-python/issues/632)
+
+In general, the main ways you can contribute to the repository are the following:
1. Implement algorithms from the [list of algorithms](https://github.com/aimacode/aima-python/blob/master/README.md#index-of-algorithms).
-1. Add tests for algorithms that are missing them (you can also add more tests to algorithms that already have some).
+1. Add tests for algorithms.
1. Take care of [issues](https://github.com/aimacode/aima-python/issues).
1. Write on the notebooks (`.ipynb` files).
1. Add and edit documentation (the docstrings in `.py` files).
@@ -19,20 +21,16 @@ In more detail:
- Look at the [issues](https://github.com/aimacode/aima-python/issues) and pick one to work on.
- One of the issues is that some algorithms are missing from the [list of algorithms](https://github.com/aimacode/aima-python/blob/master/README.md#index-of-algorithms) and that some don't have tests.
-## Port to Python 3; Pythonic Idioms; py.test
+## Port to Python 3; Pythonic Idioms
-- Check for common problems in [porting to Python 3](http://python3porting.com/problems.html), such as: `print` is now a function; `range` and `map` and other functions no longer produce `list`s; objects of different types can no longer be compared with `<`; strings are now Unicode; it would be nice to move `%` string formating to `.format`; there is a new `next` function for generators; integer division now returns a float; we can now use set literals.
+- Check for common problems in [porting to Python 3](http://python3porting.com/problems.html), such as: `print` is now a function; `range` and `map` and other functions no longer produce `list`; objects of different types can no longer be compared with `<`; strings are now Unicode; it would be nice to move `%` string formatting to `.format`; there is a new `next` function for generators; integer division now returns a float; we can now use set literals.
- Replace old Lisp-based idioms with proper Python idioms. For example, we have many functions that were taken directly from Common Lisp, such as the `every` function: `every(predicate, items)` returns true if `predicate` holds for every element of `items`. This is good Lisp style, but good Python style would be to use `all` and a generator expression: `all(predicate(f) for f in items)`. Eventually, fix all calls to these legacy Lisp functions and then remove the functions.
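As a sketch of the idiom swap described above (the `every` helper here is a simplified stand-in for the legacy Lisp-style function, not the repository's exact version):

```python
def every(predicate, items):
    """Legacy Lisp-style helper: True if predicate holds for every element."""
    for item in items:
        if not predicate(item):
            return False
    return True

# The idiomatic Python replacement: all() with a generator expression.
nums = [2, 4, 6]
assert every(lambda x: x % 2 == 0, nums) == all(x % 2 == 0 for x in nums)
```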
-- Add more tests in `test_*.py` files. Strive for terseness; it is ok to group multiple asserts into one `def test_something():` function. Move most tests to `test_*.py`, but it is fine to have a single `doctest` example in the docstring of a function in the `.py` file, if the purpose of the doctest is to explain how to use the function, rather than test the implementation.
## New and Improved Algorithms
- Implement functions that were in the third edition of the book but were not yet implemented in the code. Check the [list of pseudocode algorithms (pdf)](https://github.com/aimacode/pseudocode/blob/master/aima3e-algorithms.pdf) to see what's missing.
- As we finish chapters for the new fourth edition, we will share the new pseudocode in the [`aima-pseudocode`](https://github.com/aimacode/aima-pseudocode) repository, and describe what changes are necessary.
We hope to have an `algorithm-name.md` file for each algorithm, eventually; it would be great if contributors could add some for the existing algorithms.
-- Give examples of how to use the code in the `.ipynb` files.
-
-We still support a legacy branch, `aima3python2` (for the third edition of the textbook and for Python 2 code).
## Jupyter Notebooks
@@ -67,21 +65,12 @@ a one-line docstring suffices. It is rarely necessary to list what each argument
- At some point I may add [Pep 484](https://www.python.org/dev/peps/pep-0484/) type annotations, but I think I'll hold off for now;
I want to get more experience with them, and some people may still be in Python 3.4.
-
-Contributing a Patch
-====================
-
-1. Submit an issue describing your proposed change to the repo in question (or work on an existing issue).
-1. The repo owner will respond to your issue promptly.
-1. Fork the desired repo, develop and test your code changes.
-1. Submit a pull request.
-
Reporting Issues
================
- Under which versions of Python does this happen?
-- Provide an example of the issue occuring.
+- Provide an example of the issue occurring.
- Is anybody working on this?
@@ -95,28 +84,8 @@ Patch Rules
without your patch.
- Follow the style guidelines described above.
-
-Running the Test-Suite
-=====================
-
-The minimal requirement for running the testsuite is ``py.test``. You can
-install it with:
-
- pip install pytest
-
-Clone this repository:
-
- git clone https://github.com/aimacode/aima-python.git
-
-Fetch the aima-data submodule:
-
- cd aima-python
- git submodule init
- git submodule update
-
-Then you can run the testsuite from the `aima-python` or `tests` directory with:
-
- py.test
+- Refer to the issue you have fixed.
+- Briefly explain what changes you have made and name the affected files.
# Choice of Programming Languages
diff --git a/README.md b/README.md
index 0174290c2..17f1d6085 100644
--- a/README.md
+++ b/README.md
@@ -1,125 +1,172 @@
-
-
-
+
# `aima-python` [](https://travis-ci.org/aimacode/aima-python) [](http://mybinder.org/repo/aimacode/aima-python)
Python code for the book *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu).* You can use this in conjunction with a course on AI, or for study on your own. We're looking for [solid contributors](https://github.com/aimacode/aima-python/blob/master/CONTRIBUTING.md) to help.
-## Python 3.4
+# Updates for 4th Edition
+
+The 4th edition of the book is out now (2020), and thus we are updating the code. All code here will reflect the 4th edition. Changes include:
+
+- Move from Python 3.5 to 3.7.
+- More emphasis on Jupyter (Ipython) notebooks.
+- More projects using external packages (tensorflow, etc.).
+
+
+
+# Structure of the Project
+
+When complete, this project will have Python implementations for all the pseudocode algorithms in the book, as well as tests and examples of use. For each major topic, such as `search`, we provide the following files:
+
+- `search.ipynb` and `search.py`: Implementations of all the pseudocode algorithms, and necessary support functions/classes/data. The `.py` file is generated automatically from the `.ipynb` file; the idea is that it is easier to read the documentation in the `.ipynb` file.
+- `search_XX.ipynb`: Notebooks that show how to use the code, broken out into various topics (the `XX`).
+- `tests/test_search.py`: A lightweight test suite, using `assert` statements, designed for use with [`py.test`](http://pytest.org/latest/), but also usable on its own.
+
+# Python 3.7 and up
+
+The code for the 3rd edition was in Python 3.5; the current 4th edition code is in Python 3.7. It should also run in later versions, but does not run in Python 2. You can [install Python](https://www.python.org/downloads) or use a browser-based Python interpreter such as [repl.it](https://repl.it/languages/python3).
+You can run the code in an IDE, or from the command line with `python -i filename.py` where the `-i` option puts you in an interactive loop where you can run Python functions. All notebooks are available in a [binder environment](http://mybinder.org/repo/aimacode/aima-python). Alternatively, visit [jupyter.org](http://jupyter.org/) for instructions on setting up your own Jupyter notebook environment.
+
+Features from Python 3.6 and 3.7 that we will be using for this version of the code:
+- [f-strings](https://docs.python.org/3.6/whatsnew/3.6.html#whatsnew36-pep498): all string formatting should be done with `f'var = {var}'`, not with `'var = {}'.format(var)` nor `'var = %s' % var`.
+- [`typing` module](https://docs.python.org/3.7/library/typing.html): declare functions with type hints: `def successors(state) -> List[State]:`; that is, give type declarations, but omit them when the type is obvious. I don't need to say `state: State`, but in another context it would make sense to say `s: State`.
+- Underscores in numerics: write a million as `1_000_000` not as `1000000`.
+- [`dataclasses` module](https://docs.python.org/3.7/library/dataclasses.html#module-dataclasses): replace `namedtuple` with `dataclass`.
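As an illustrative sketch of how these features combine (not code from the repository; `State` and `successors` are hypothetical names here):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class State:
    name: str
    population: int  # underscores improve readability of large literals


def successors(state) -> List[State]:
    # Hypothetical successor function using an f-string and an
    # underscore-separated numeric literal.
    return [State(name=f'{state.name}-child', population=1_000_000)]


s = State(name='A', population=5_000_000)
child = successors(s)[0]
assert child.name == 'A-child'
assert child.population == 1_000_000
```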
+
+
+[//]: # (There is a sibling [aima-docker](https://github.com/rajatjain1997/aima-docker) project that shows you how to use docker containers to run more complex problems in more complex software environments.)
+
+
+## Installation Guide
+
+To download the repository:
+
+`git clone https://github.com/aimacode/aima-python.git`
+
+Then you need to install the basic dependencies to run the project on your system:
+
+```
+cd aima-python
+pip install -r requirements.txt
+```
+
+You also need to fetch the datasets from the [`aima-data`](https://github.com/aimacode/aima-data) repository:
+
+```
+git submodule init
+git submodule update
+```
+
+Wait for the datasets to download; it may take a while. Once they are downloaded, you need to install `pytest` so that you can run the test suite:
-This code is in Python 3.4 (Python 3.5 and later also works, but Python 2.x does not). You can [install the latest Python version](https://www.python.org/downloads) or use a browser-based Python interpreter such as [repl.it](https://repl.it/languages/python3).
-You can run the code in an IDE, or from the command line with `python -i filename.py` where the `-i` option puts you in an interactive loop where you can run Python functions.
+`pip install pytest`
-In addition to the `filename.py` files, there are also `filename.ipynb` files, which are Jupyter (formerly IPython) notebooks. You can read these notebooks, and you can also run the code embedded with them. See [jupyter.org](http://jupyter.org/) for instructions on setting up a Jupyter notebook environment. Some modules also have `filename_apps.ipynb` files, which are notebooks for applications of the module.
+Then to run the tests:
-## Structure of the Project
+`py.test`
-When complete, this project will have Python code for all the pseudocode algorithms in the book. For each major topic, such as `nlp`, we will have the following three files in the main branch:
+And you are good to go!
-- `nlp.py`: Implementations of all the pseudocode algorithms, and necessary support functions/classes/data.
-- `nlp.ipynb`: A Jupyter (IPython) notebook that explains and gives examples of how to use the code.
-- `nlp_apps.ipynb`: A Jupyter notebook that gives example applications of the code.
-- `tests/test_nlp.py`: A lightweight test suite, using `assert` statements, designed for use with [`py.test`](http://pytest.org/latest/), but also usable on their own.
# Index of Algorithms
-Here is a table of algorithms, the figure, name of the algorithm in the book and in the repository, and the file where they are implemented in the repository. This chart was made for the third edition of the book and needs to be updated for the upcoming fourth edition. Empty implementations are a good place for contributors to look for an issue. The [aima-pseudocode](https://github.com/aimacode/aima-pseudocode) project describes all the algorithms from the book. An asterisk next to the file name denotes the algorithm is not fully implemented.
-
-| **Figure** | **Name (in 3rd edition)** | **Name (in repository)** | **File** | **Tests**
-|:--------|:-------------------|:---------|:-----------|:-------|
-| 2.1 | Environment | `Environment` | [`agents.py`][agents] | Done |
-| 2.1 | Agent | `Agent` | [`agents.py`][agents] | Done |
-| 2.3 | Table-Driven-Vacuum-Agent | `TableDrivenVacuumAgent` | [`agents.py`][agents] | |
-| 2.7 | Table-Driven-Agent | `TableDrivenAgent` | [`agents.py`][agents] | |
-| 2.8 | Reflex-Vacuum-Agent | `ReflexVacuumAgent` | [`agents.py`][agents] | Done |
-| 2.10 | Simple-Reflex-Agent | `SimpleReflexAgent` | [`agents.py`][agents] | |
-| 2.12 | Model-Based-Reflex-Agent | `ReflexAgentWithState` | [`agents.py`][agents] | |
-| 3 | Problem | `Problem` | [`search.py`][search] | Done |
-| 3 | Node | `Node` | [`search.py`][search] | Done |
-| 3 | Queue | `Queue` | [`utils.py`][utils] | Done |
-| 3.1 | Simple-Problem-Solving-Agent | `SimpleProblemSolvingAgent` | [`search.py`][search] | |
-| 3.2 | Romania | `romania` | [`search.py`][search] | Done |
-| 3.7 | Tree-Search | `tree_search` | [`search.py`][search] | Done |
-| 3.7 | Graph-Search | `graph_search` | [`search.py`][search] | Done |
-| 3.11 | Breadth-First-Search | `breadth_first_search` | [`search.py`][search] | Done |
-| 3.14 | Uniform-Cost-Search | `uniform_cost_search` | [`search.py`][search] | Done |
-| 3.17 | Depth-Limited-Search | `depth_limited_search` | [`search.py`][search] | Done |
-| 3.18 | Iterative-Deepening-Search | `iterative_deepening_search` | [`search.py`][search] | Done |
-| 3.22 | Best-First-Search | `best_first_graph_search` | [`search.py`][search] | Done |
-| 3.24 | A\*-Search | `astar_search` | [`search.py`][search] | Done |
-| 3.26 | Recursive-Best-First-Search | `recursive_best_first_search` | [`search.py`][search] | Done |
-| 4.2 | Hill-Climbing | `hill_climbing` | [`search.py`][search] | Done |
-| 4.5 | Simulated-Annealing | `simulated_annealing` | [`search.py`][search] | Done |
-| 4.8 | Genetic-Algorithm | `genetic_algorithm` | [`search.py`][search] | Done |
-| 4.11 | And-Or-Graph-Search | `and_or_graph_search` | [`search.py`][search] | Done |
-| 4.21 | Online-DFS-Agent | `online_dfs_agent` | [`search.py`][search] | |
-| 4.24 | LRTA\*-Agent | `LRTAStarAgent` | [`search.py`][search] | Done |
-| 5.3 | Minimax-Decision | `minimax_decision` | [`games.py`][games] | Done |
-| 5.7 | Alpha-Beta-Search | `alphabeta_search` | [`games.py`][games] | Done |
-| 6 | CSP | `CSP` | [`csp.py`][csp] | Done |
-| 6.3 | AC-3 | `AC3` | [`csp.py`][csp] | Done |
-| 6.5 | Backtracking-Search | `backtracking_search` | [`csp.py`][csp] | Done |
-| 6.8 | Min-Conflicts | `min_conflicts` | [`csp.py`][csp] | Done |
-| 6.11 | Tree-CSP-Solver | `tree_csp_solver` | [`csp.py`][csp] | Done |
-| 7 | KB | `KB` | [`logic.py`][logic] | Done |
-| 7.1 | KB-Agent | `KB_Agent` | [`logic.py`][logic] | Done |
-| 7.7 | Propositional Logic Sentence | `Expr` | [`logic.py`][logic] | Done |
-| 7.10 | TT-Entails | `tt_entails` | [`logic.py`][logic] | Done |
-| 7.12 | PL-Resolution | `pl_resolution` | [`logic.py`][logic] | Done |
-| 7.14 | Convert to CNF | `to_cnf` | [`logic.py`][logic] | Done |
-| 7.15 | PL-FC-Entails? | `pl_fc_resolution` | [`logic.py`][logic] | Done |
-| 7.17 | DPLL-Satisfiable? | `dpll_satisfiable` | [`logic.py`][logic] | Done |
-| 7.18 | WalkSAT | `WalkSAT` | [`logic.py`][logic] | Done |
-| 7.20 | Hybrid-Wumpus-Agent | `HybridWumpusAgent` | | |
-| 7.22 | SATPlan | `SAT_plan` | [`logic.py`][logic] | Done |
-| 9 | Subst | `subst` | [`logic.py`][logic] | Done |
-| 9.1 | Unify | `unify` | [`logic.py`][logic] | Done |
-| 9.3 | FOL-FC-Ask | `fol_fc_ask` | [`logic.py`][logic] | Done |
-| 9.6 | FOL-BC-Ask | `fol_bc_ask` | [`logic.py`][logic] | Done |
-| 9.8 | Append | | | |
-| 10.1 | Air-Cargo-problem | `air_cargo` | [`planning.py`][planning] | Done |
-| 10.2 | Spare-Tire-Problem | `spare_tire` | [`planning.py`][planning] | Done |
-| 10.3 | Three-Block-Tower | `three_block_tower` | [`planning.py`][planning] | Done |
-| 10.7 | Cake-Problem | `have_cake_and_eat_cake_too` | [`planning.py`][planning] | Done |
-| 10.9 | Graphplan | `GraphPlan` | [`planning.py`][planning] | |
-| 10.13 | Partial-Order-Planner | | | |
-| 11.1 | Job-Shop-Problem-With-Resources | `job_shop_problem` | [`planning.py`][planning] | Done |
-| 11.5 | Hierarchical-Search | `hierarchical_search` | [`planning.py`][planning] | |
-| 11.8 | Angelic-Search | | | |
-| 11.10 | Doubles-tennis | `double_tennis_problem` | [`planning.py`][planning] | |
-| 13 | Discrete Probability Distribution | `ProbDist` | [`probability.py`][probability] | Done |
-| 13.1 | DT-Agent | `DTAgent` | [`probability.py`][probability] | |
-| 14.9 | Enumeration-Ask | `enumeration_ask` | [`probability.py`][probability] | Done |
-| 14.11 | Elimination-Ask | `elimination_ask` | [`probability.py`][probability] | Done |
-| 14.13 | Prior-Sample | `prior_sample` | [`probability.py`][probability] | |
-| 14.14 | Rejection-Sampling | `rejection_sampling` | [`probability.py`][probability] | Done |
-| 14.15 | Likelihood-Weighting | `likelihood_weighting` | [`probability.py`][probability] | Done |
-| 14.16 | Gibbs-Ask | `gibbs_ask` | [`probability.py`][probability] | |
-| 15.4 | Forward-Backward | `forward_backward` | [`probability.py`][probability] | Done |
-| 15.6 | Fixed-Lag-Smoothing | `fixed_lag_smoothing` | [`probability.py`][probability] | Done |
-| 15.17 | Particle-Filtering | `particle_filtering` | [`probability.py`][probability] | Done |
-| 16.9 | Information-Gathering-Agent | | |
-| 17.4 | Value-Iteration | `value_iteration` | [`mdp.py`][mdp] | Done |
-| 17.7 | Policy-Iteration | `policy_iteration` | [`mdp.py`][mdp] | Done |
-| 17.9 | POMDP-Value-Iteration | | | |
-| 18.5 | Decision-Tree-Learning | `DecisionTreeLearner` | [`learning.py`][learning] | Done |
-| 18.8 | Cross-Validation | `cross_validation` | [`learning.py`][learning] | |
-| 18.11 | Decision-List-Learning | `DecisionListLearner` | [`learning.py`][learning]\* | |
-| 18.24 | Back-Prop-Learning | `BackPropagationLearner` | [`learning.py`][learning] | Done |
-| 18.34 | AdaBoost | `AdaBoost` | [`learning.py`][learning] | |
-| 19.2 | Current-Best-Learning | `current_best_learning` | [`knowledge.py`](knowledge.py) | Done |
-| 19.3 | Version-Space-Learning | `version_space_learning` | [`knowledge.py`](knowledge.py) | Done |
-| 19.8 | Minimal-Consistent-Det | `minimal_consistent_det` | [`knowledge.py`](knowledge.py) | Done |
-| 19.12 | FOIL | `FOIL_container` | [`knowledge.py`](knowledge.py) | Done |
-| 21.2 | Passive-ADP-Agent | `PassiveADPAgent` | [`rl.py`][rl] | Done |
-| 21.4 | Passive-TD-Agent | `PassiveTDAgent` | [`rl.py`][rl] | Done |
-| 21.8 | Q-Learning-Agent | `QLearningAgent` | [`rl.py`][rl] | Done |
-| 22.1 | HITS | `HITS` | [`nlp.py`][nlp] | Done |
-| 23 | Chart-Parse | `Chart` | [`nlp.py`][nlp] | Done |
-| 23.5 | CYK-Parse | `CYK_parse` | [`nlp.py`][nlp] | Done |
-| 25.9 | Monte-Carlo-Localization| `monte_carlo_localization` | [`probability.py`][probability] | Done |
+Here is a table of algorithms, the figure, name of the algorithm in the book and in the repository, and the file where they are implemented in the repository. This chart was made for the third edition of the book and is being updated for the upcoming fourth edition. Empty implementations are a good place for contributors to look for an issue. The [aima-pseudocode](https://github.com/aimacode/aima-pseudocode) project describes all the algorithms from the book. An asterisk next to the file name denotes the algorithm is not fully implemented. Another great place for contributors to start is by adding tests and writing on the notebooks. You can see which algorithms have tests and notebook sections below. If the algorithm you want to work on is covered, don't worry! You can still add more tests and provide some examples of use in the notebook!
+
+| **Figure** | **Name (in 3rd edition)** | **Name (in repository)** | **File** | **Tests** | **Notebook**
+|:-------|:----------------------------------|:------------------------------|:--------------------------------|:-----|:---------|
+| 2 | Random-Vacuum-Agent | `RandomVacuumAgent` | [`agents.py`][agents] | Done | Included |
+| 2 | Model-Based-Vacuum-Agent | `ModelBasedVacuumAgent` | [`agents.py`][agents] | Done | Included |
+| 2.1 | Environment | `Environment` | [`agents.py`][agents] | Done | Included |
+| 2.1 | Agent | `Agent` | [`agents.py`][agents] | Done | Included |
+| 2.3 | Table-Driven-Vacuum-Agent | `TableDrivenVacuumAgent` | [`agents.py`][agents] | Done | Included |
+| 2.7 | Table-Driven-Agent | `TableDrivenAgent` | [`agents.py`][agents] | Done | Included |
+| 2.8 | Reflex-Vacuum-Agent | `ReflexVacuumAgent` | [`agents.py`][agents] | Done | Included |
+| 2.10 | Simple-Reflex-Agent | `SimpleReflexAgent` | [`agents.py`][agents] | Done | Included |
+| 2.12 | Model-Based-Reflex-Agent | `ReflexAgentWithState` | [`agents.py`][agents] | Done | Included |
+| 3 | Problem | `Problem` | [`search.py`][search] | Done | Included |
+| 3 | Node | `Node` | [`search.py`][search] | Done | Included |
+| 3 | Queue | `Queue` | [`utils.py`][utils] | Done | No Need |
+| 3.1 | Simple-Problem-Solving-Agent | `SimpleProblemSolvingAgent` | [`search.py`][search] | Done | Included |
+| 3.2 | Romania | `romania` | [`search.py`][search] | Done | Included |
+| 3.7 | Tree-Search | `depth/breadth_first_tree_search` | [`search.py`][search] | Done | Included |
+| 3.7 | Graph-Search | `depth/breadth_first_graph_search` | [`search.py`][search] | Done | Included |
+| 3.11 | Breadth-First-Search | `breadth_first_graph_search` | [`search.py`][search] | Done | Included |
+| 3.14 | Uniform-Cost-Search | `uniform_cost_search` | [`search.py`][search] | Done | Included |
+| 3.17 | Depth-Limited-Search | `depth_limited_search` | [`search.py`][search] | Done | Included |
+| 3.18 | Iterative-Deepening-Search | `iterative_deepening_search` | [`search.py`][search] | Done | Included |
+| 3.22 | Best-First-Search | `best_first_graph_search` | [`search.py`][search] | Done | Included |
+| 3.24 | A\*-Search | `astar_search` | [`search.py`][search] | Done | Included |
+| 3.26 | Recursive-Best-First-Search | `recursive_best_first_search` | [`search.py`][search] | Done | Included |
+| 4.2 | Hill-Climbing | `hill_climbing` | [`search.py`][search] | Done | Included |
+| 4.5 | Simulated-Annealing | `simulated_annealing` | [`search.py`][search] | Done | Included |
+| 4.8 | Genetic-Algorithm | `genetic_algorithm` | [`search.py`][search] | Done | Included |
+| 4.11 | And-Or-Graph-Search | `and_or_graph_search` | [`search.py`][search] | Done | Included |
+| 4.21 | Online-DFS-Agent | `online_dfs_agent` | [`search.py`][search] | Done | Included |
+| 4.24 | LRTA\*-Agent | `LRTAStarAgent` | [`search.py`][search] | Done | Included |
+| 5.3 | Minimax-Decision | `minimax_decision` | [`games.py`][games] | Done | Included |
+| 5.7 | Alpha-Beta-Search | `alphabeta_search` | [`games.py`][games] | Done | Included |
+| 6 | CSP | `CSP` | [`csp.py`][csp] | Done | Included |
+| 6.3 | AC-3 | `AC3` | [`csp.py`][csp] | Done | Included |
+| 6.5 | Backtracking-Search | `backtracking_search` | [`csp.py`][csp] | Done | Included |
+| 6.8 | Min-Conflicts | `min_conflicts` | [`csp.py`][csp] | Done | Included |
+| 6.11 | Tree-CSP-Solver | `tree_csp_solver` | [`csp.py`][csp] | Done | Included |
+| 7 | KB | `KB` | [`logic.py`][logic] | Done | Included |
+| 7.1 | KB-Agent | `KB_AgentProgram` | [`logic.py`][logic] | Done | Included |
+| 7.7 | Propositional Logic Sentence | `Expr` | [`utils.py`][utils] | Done | Included |
+| 7.10 | TT-Entails | `tt_entails` | [`logic.py`][logic] | Done | Included |
+| 7.12 | PL-Resolution | `pl_resolution` | [`logic.py`][logic] | Done | Included |
+| 7.14 | Convert to CNF | `to_cnf` | [`logic.py`][logic] | Done | Included |
+| 7.15 | PL-FC-Entails? | `pl_fc_entails` | [`logic.py`][logic] | Done | Included |
+| 7.17 | DPLL-Satisfiable? | `dpll_satisfiable` | [`logic.py`][logic] | Done | Included |
+| 7.18 | WalkSAT | `WalkSAT` | [`logic.py`][logic] | Done | Included |
+| 7.20 | Hybrid-Wumpus-Agent | `HybridWumpusAgent` | | | |
+| 7.22 | SATPlan | `SAT_plan` | [`logic.py`][logic] | Done | Included |
+| 9 | Subst | `subst` | [`logic.py`][logic] | Done | Included |
+| 9.1 | Unify | `unify` | [`logic.py`][logic] | Done | Included |
+| 9.3 | FOL-FC-Ask | `fol_fc_ask` | [`logic.py`][logic] | Done | Included |
+| 9.6 | FOL-BC-Ask | `fol_bc_ask` | [`logic.py`][logic] | Done | Included |
+| 10.1 | Air-Cargo-problem | `air_cargo` | [`planning.py`][planning] | Done | Included |
+| 10.2 | Spare-Tire-Problem | `spare_tire` | [`planning.py`][planning] | Done | Included |
+| 10.3 | Three-Block-Tower | `three_block_tower` | [`planning.py`][planning] | Done | Included |
+| 10.7 | Cake-Problem | `have_cake_and_eat_cake_too` | [`planning.py`][planning] | Done | Included |
+| 10.9 | Graphplan | `GraphPlan` | [`planning.py`][planning] | Done | Included |
+| 10.13 | Partial-Order-Planner | `PartialOrderPlanner` | [`planning.py`][planning] | Done | Included |
+| 11.1 | Job-Shop-Problem-With-Resources | `job_shop_problem` | [`planning.py`][planning] | Done | Included |
+| 11.5 | Hierarchical-Search | `hierarchical_search` | [`planning.py`][planning] | Done | Included |
+| 11.8 | Angelic-Search | `angelic_search` | [`planning.py`][planning] | Done | Included |
+| 11.10 | Doubles-tennis | `double_tennis_problem` | [`planning.py`][planning] | Done | Included |
+| 13 | Discrete Probability Distribution | `ProbDist` | [`probability.py`][probability] | Done | Included |
+| 13.1 | DT-Agent | `DTAgent` | [`probability.py`][probability] | Done | Included |
+| 14.9 | Enumeration-Ask | `enumeration_ask` | [`probability.py`][probability] | Done | Included |
+| 14.11 | Elimination-Ask | `elimination_ask` | [`probability.py`][probability] | Done | Included |
+| 14.13 | Prior-Sample | `prior_sample` | [`probability.py`][probability] | Done | Included |
+| 14.14 | Rejection-Sampling | `rejection_sampling` | [`probability.py`][probability] | Done | Included |
+| 14.15 | Likelihood-Weighting | `likelihood_weighting` | [`probability.py`][probability] | Done | Included |
+| 14.16 | Gibbs-Ask | `gibbs_ask` | [`probability.py`][probability] | Done | Included |
+| 15.4 | Forward-Backward | `forward_backward` | [`probability.py`][probability] | Done | Included |
+| 15.6 | Fixed-Lag-Smoothing | `fixed_lag_smoothing` | [`probability.py`][probability] | Done | Included |
+| 15.17 | Particle-Filtering | `particle_filtering` | [`probability.py`][probability] | Done | Included |
+| 16.9 | Information-Gathering-Agent | `InformationGatheringAgent` | [`probability.py`][probability] | Done | Included |
+| 17.4 | Value-Iteration | `value_iteration` | [`mdp.py`][mdp] | Done | Included |
+| 17.7 | Policy-Iteration | `policy_iteration` | [`mdp.py`][mdp] | Done | Included |
+| 17.9 | POMDP-Value-Iteration | `pomdp_value_iteration` | [`mdp.py`][mdp] | Done | Included |
+| 18.5 | Decision-Tree-Learning | `DecisionTreeLearner` | [`learning.py`][learning] | Done | Included |
+| 18.8 | Cross-Validation | `cross_validation` | [`learning.py`][learning]\* | | |
+| 18.11 | Decision-List-Learning | `DecisionListLearner` | [`learning.py`][learning]\* | | |
+| 18.24 | Back-Prop-Learning | `BackPropagationLearner` | [`learning.py`][learning] | Done | Included |
+| 18.34 | AdaBoost | `AdaBoost` | [`learning.py`][learning] | Done | Included |
+| 19.2 | Current-Best-Learning | `current_best_learning` | [`knowledge.py`](knowledge.py) | Done | Included |
+| 19.3 | Version-Space-Learning | `version_space_learning` | [`knowledge.py`](knowledge.py) | Done | Included |
+| 19.8 | Minimal-Consistent-Det | `minimal_consistent_det` | [`knowledge.py`](knowledge.py) | Done | Included |
+| 19.12 | FOIL | `FOIL_container` | [`knowledge.py`](knowledge.py) | Done | Included |
+| 21.2 | Passive-ADP-Agent | `PassiveADPAgent` | [`rl.py`][rl] | Done | Included |
+| 21.4 | Passive-TD-Agent | `PassiveTDAgent` | [`rl.py`][rl] | Done | Included |
+| 21.8 | Q-Learning-Agent | `QLearningAgent` | [`rl.py`][rl] | Done | Included |
+| 22.1 | HITS | `HITS` | [`nlp.py`][nlp] | Done | Included |
+| 23 | Chart-Parse | `Chart` | [`nlp.py`][nlp] | Done | Included |
+| 23.5 | CYK-Parse | `CYK_parse` | [`nlp.py`][nlp] | Done | Included |
+| 25.9 | Monte-Carlo-Localization | `monte_carlo_localization` | [`probability.py`][probability] | Done | Included |
# Index of data structures
@@ -127,20 +174,20 @@ Here is a table of algorithms, the figure, name of the algorithm in the book and
Here is a table of the implemented data structures, the figure, name of the implementation in the repository, and the file where they are implemented.
| **Figure** | **Name (in repository)** | **File** |
-|:-----------|:-------------------------|:---------|
-| 3.2 | romania_map | [`search.py`][search] |
-| 4.9 | vacumm_world | [`search.py`][search] |
-| 4.23 | one_dim_state_space | [`search.py`][search] |
-| 6.1 | australia_map | [`search.py`][search] |
-| 7.13 | wumpus_world_inference | [`logic.py`][logic] |
-| 7.16 | horn_clauses_KB | [`logic.py`][logic] |
-| 17.1 | sequential_decision_environment | [`mdp.py`][mdp] |
-| 18.2 | waiting_decision_tree | [`learning.py`][learning] |
+|:-------|:--------------------------------|:--------------------------|
+| 3.2 | romania_map | [`search.py`][search] |
+| 4.9 | vacumm_world | [`search.py`][search] |
+| 4.23 | one_dim_state_space | [`search.py`][search] |
+| 6.1 | australia_map | [`search.py`][search] |
+| 7.13 | wumpus_world_inference | [`logic.py`][logic] |
+| 7.16 | horn_clauses_KB | [`logic.py`][logic] |
+| 17.1 | sequential_decision_environment | [`mdp.py`][mdp] |
+| 18.2 | waiting_decision_tree | [`learning.py`][learning] |
# Acknowledgements
-Many thanks for contributions over the years. I got bug reports, corrected code, and other support from Darius Bacon, Phil Ruggera, Peng Shao, Amit Patil, Ted Nienstedt, Jim Martin, Ben Catanzariti, and others. Now that the project is on GitHub, you can see the [contributors](https://github.com/aimacode/aima-python/graphs/contributors) who are doing a great job of actively improving the project. Many thanks to all contributors, especially @darius, @SnShine, @reachtarunhere, @MrDupin, and @Chipe1.
+Many thanks for contributions over the years. I got bug reports, corrected code, and other support from Darius Bacon, Phil Ruggera, Peng Shao, Amit Patil, Ted Nienstedt, Jim Martin, Ben Catanzariti, and others. Now that the project is on GitHub, you can see the [contributors](https://github.com/aimacode/aima-python/graphs/contributors) who are doing a great job of actively improving the project. Many thanks to all contributors, especially [@darius](https://github.com/darius), [@SnShine](https://github.com/SnShine), [@reachtarunhere](https://github.com/reachtarunhere), [@antmarakis](https://github.com/antmarakis), [@Chipe1](https://github.com/Chipe1), [@ad71](https://github.com/ad71) and [@MariannaSpyrakou](https://github.com/MariannaSpyrakou).
[agents]:../master/agents.py
diff --git a/SUBMODULE.md b/SUBMODULE.md
index b9048ea4c..2c080bb91 100644
--- a/SUBMODULE.md
+++ b/SUBMODULE.md
@@ -1,4 +1,4 @@
-This is a guide on how to update the `aima-data` submodule. This needs to be done every time something changes in the [aima-data](https://github.com/aimacode/aima-data) repository. All the below commands should be executed from the local directory of the `aima-python` repository, using `git`.
+This is a guide on how to update the `aima-data` submodule to the latest version. This needs to be done every time something changes in the [aima-data](https://github.com/aimacode/aima-data) repository. All the below commands should be executed from the local directory of the `aima-python` repository, using `git`.
```
git submodule deinit aima-data
diff --git a/agents.ipynb b/agents.ipynb
index 968c8cdc9..636df75e3 100644
--- a/agents.ipynb
+++ b/agents.ipynb
@@ -4,26 +4,120 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# AGENT #\n",
+ "# Intelligent Agents #\n",
"\n",
- "An agent, as defined in 2.1 is anything that can perceive its environment through sensors, and act upon that environment through actuators based on its agent program. This can be a dog, robot, or even you. As long as you can perceive the environment and act on it, you are an agent. This notebook will explain how to implement a simple agent, create an environment, and create a program that helps the agent act on the environment based on its percepts.\n",
+ "This notebook serves as supporting material for topics covered in **Chapter 2 - Intelligent Agents** from the book *Artificial Intelligence: A Modern Approach.* This notebook uses implementations from [agents.py](https://github.com/aimacode/aima-python/blob/master/agents.py) module. Let's start by importing everything from agents module."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from agents import *\n",
+ "from notebook import psource"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## CONTENTS\n",
"\n",
- "Before moving on, review the Agent and Environment classes in [agents.py](https://github.com/aimacode/aima-python/blob/master/agents.py).\n",
+ "* Overview\n",
+ "* Agent\n",
+ "* Environment\n",
+ "* Simple Agent and Environment\n",
+ "* Agents in a 2-D Environment\n",
+ "* Wumpus Environment\n",
"\n",
- "Let's begin by importing all the functions from the agents.py module and creating our first agent - a blind dog."
+ "## OVERVIEW\n",
+ "\n",
+ "An agent, as defined in 2.1, is anything that can perceive its environment through sensors, and act upon that environment through actuators based on its agent program. This can be a dog, a robot, or even you. As long as you can perceive the environment and act on it, you are an agent. This notebook will explain how to implement a simple agent, create an environment, and implement a program that helps the agent act on the environment based on its percepts.\n",
+ "\n",
+ "## AGENT\n",
+ "\n",
+ "Let us now see how we define an agent. Run the next cell to see how `Agent` is defined in agents module."
]
},
{
"cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "collapsed": false,
- "scrolled": true
- },
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
- "from agents import *\n",
+ "psource(Agent)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `Agent` has two methods.\n",
+ "* `__init__(self, program=None)`: The constructor defines various attributes of the Agent. These include\n",
+ "\n",
+ " * `alive`: which keeps track of whether the agent is alive or not \n",
+ " \n",
+ " * `bump`: which tracks if the agent collides with an edge of the environment (for eg, a wall in a park)\n",
+ " \n",
+ " * `holding`: which is a list containing the `Things` an agent is holding, \n",
+ " \n",
+ " * `performance`: which evaluates the performance metrics of the agent \n",
+ " \n",
+ " * `program`: which is the agent program and maps an agent's percepts to actions in the environment. If no implementation is provided, it defaults to asking the user to provide actions for each percept.\n",
+ " \n",
+ "* `can_grab(self, thing)`: Is used when an environment contains things that an agent can grab and carry. By default, an agent can carry nothing.\n",
+ "\n",
+ "## ENVIRONMENT\n",
+ "Now, let us see how environments are defined. Running the next cell will display an implementation of the abstract `Environment` class."
+ ]
+ },
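The attributes and defaults described above can be mimicked in a tiny self-contained sketch (`SketchAgent` is a hypothetical stand-in for illustration, not the class from agents.py):

```python
# Illustrative sketch of the Agent interface described above.
# SketchAgent and its defaults are our own names, not part of agents.py.
class SketchAgent:
    def __init__(self, program=None):
        self.alive = True          # is the agent alive?
        self.bump = False          # did the agent just collide with an edge?
        self.holding = []          # Things the agent is carrying
        self.performance = 0       # running performance score
        # default program: simply echo the percept back as the chosen action
        self.program = program if program is not None else (lambda percept: percept)

    def can_grab(self, thing):
        # by default an agent can carry nothing
        return False

agent = SketchAgent()
print(agent.alive)
print(agent.program('Feel Food'))  # the default program just echoes the percept
```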
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "psource(Environment)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`Environment` class has lot of methods! But most of them are incredibly simple, so let's see the ones we'll be using in this notebook.\n",
+ "\n",
+ "* `thing_classes(self)`: Returns a static array of `Thing` sub-classes that determine what things are allowed in the environment and what aren't\n",
+ "\n",
+ "* `add_thing(self, thing, location=None)`: Adds a thing to the environment at location\n",
+ "\n",
+ "* `run(self, steps)`: Runs an environment with the agent in it for a given number of steps.\n",
+ "\n",
+ "* `is_done(self)`: Returns true if the objective of the agent and the environment has been completed\n",
"\n",
+ "The next two functions must be implemented by each subclasses of `Environment` for the agent to recieve percepts and execute actions \n",
+ "\n",
+ "* `percept(self, agent)`: Given an agent, this method returns a list of percepts that the agent sees at the current time\n",
+ "\n",
+ "* `execute_action(self, agent, action)`: The environment reacts to an action performed by a given agent. The changes may result in agent experiencing new percepts or other elements reacting to agent input."
+ ]
+ },
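As a minimal illustration of that contract, here is a toy environment implementing only `percept` and `execute_action` (all names in this sketch are our own, not from the agents module):

```python
# A toy environment honoring the percept/execute_action contract described above.
# TinyEnvironment is a hypothetical illustration, not code from agents.py.
class TinyEnvironment:
    def __init__(self):
        self.things = {}  # maps location -> list of thing names at that spot

    def add_thing(self, thing, location):
        self.things.setdefault(location, []).append(thing)

    def percept(self, agent):
        # what the agent senses at its current location
        return self.things.get(agent['location'], [])

    def execute_action(self, agent, action):
        # the environment reacts to the agent's action
        if action == 'move down':
            agent['location'] += 1

agent = {'location': 0}
env = TinyEnvironment()
env.add_thing('food', 2)
for _ in range(2):
    env.execute_action(agent, 'move down')
print(env.percept(agent))  # the agent has walked onto the food
```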
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## SIMPLE AGENT AND ENVIRONMENT\n",
+ "\n",
+ "Let's begin by using the `Agent` class to creating our first agent - a blind dog."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
"class BlindDog(Agent):\n",
" def eat(self, thing):\n",
" print(\"Dog: Ate food at {}.\".format(self.location))\n",
@@ -43,19 +137,9 @@
},
{
"cell_type": "code",
- "execution_count": 2,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "True\n"
- ]
- }
- ],
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
"source": [
"print(dog.alive)"
]
@@ -72,20 +156,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# ENVIRONMENT #\n",
- "\n",
- "A park is an example of an environment because our dog can perceive and act upon it. The Environment class in agents.py is an abstract class, so we will have to create our own subclass from it before we can use it. The abstract class must contain the following methods:\n",
+ "### ENVIRONMENT - Park\n",
"\n",
- "
percept(self, agent) - returns what the agent perceives
\n",
- "
execute_action(self, agent, action) - changes the state of the environment based on what the agent does.
"
+ "A park is an example of an environment because our dog can perceive and act upon it. The Environment class is an abstract class, so we will have to create our own subclass from it before we can use it."
]
},
{
"cell_type": "code",
- "execution_count": 3,
- "metadata": {
- "collapsed": false
- },
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
"class Food(Thing):\n",
@@ -96,7 +175,7 @@
"\n",
"class Park(Environment):\n",
" def percept(self, agent):\n",
- " '''prints & return a list of things that are in our agent's location'''\n",
+ " '''return a list of things that are in our agent's location'''\n",
" things = self.list_things_at(agent.location)\n",
" return things\n",
" \n",
@@ -130,35 +209,16 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"source": [
- "# PROGRAM - BlindDog #\n",
- "Now that we have a Park Class, we need to implement a program module for our dog. A program controls how the dog acts upon it's environment. Our program will be very simple, and is shown in the table below.\n",
- "
\n",
- "
\n",
- "
Percept:
\n",
- "
Feel Food
\n",
- "
Feel Water
\n",
- "
Feel Nothing
\n",
- "
\n",
- "
\n",
- "
Action:
\n",
- "
eat
\n",
- "
drink
\n",
- "
move down
\n",
- "
\n",
- " \n",
- "
\n"
+ "### PROGRAM - BlindDog\n",
+ "Now that we have a Park Class, we re-implement our BlindDog to be able to move down and eat food or drink water only if it is present.\n"
]
},
{
"cell_type": "code",
- "execution_count": 4,
- "metadata": {
- "collapsed": false
- },
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
"class BlindDog(Agent):\n",
@@ -170,19 +230,46 @@
" def eat(self, thing):\n",
" '''returns True upon success or False otherwise'''\n",
" if isinstance(thing, Food):\n",
- " #print(\"Dog: Ate food at {}.\".format(self.location))\n",
" return True\n",
" return False\n",
" \n",
" def drink(self, thing):\n",
" ''' returns True upon success or False otherwise'''\n",
" if isinstance(thing, Water):\n",
- " #print(\"Dog: Drank water at {}.\".format(self.location))\n",
" return True\n",
- " return False\n",
+ " return False"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now its time to implement a program module for our dog. A program controls how the dog acts upon its environment. Our program will be very simple, and is shown in the table below.\n",
+ "
\n",
+ "
\n",
+ "
Percept:
\n",
+ "
Feel Food
\n",
+ "
Feel Water
\n",
+ "
Feel Nothing
\n",
+ "
\n",
+ "
\n",
+ "
Action:
\n",
+ "
eat
\n",
+ "
drink
\n",
+ "
move down
\n",
+ "
\n",
" \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
"def program(percepts):\n",
- " '''Returns an action based on it's percepts'''\n",
+ " '''Returns an action based on the dog's percepts'''\n",
" for p in percepts:\n",
" if isinstance(p, Food):\n",
" return 'eat'\n",
@@ -195,28 +282,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Lets now run our simulation by creating a park with some food, water, and our dog."
+ "Let's now run our simulation by creating a park with some food, water, and our dog."
]
},
{
"cell_type": "code",
- "execution_count": 5,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "BlindDog decided to move down at location: 1\n",
- "BlindDog decided to move down at location: 2\n",
- "BlindDog decided to move down at location: 3\n",
- "BlindDog decided to move down at location: 4\n",
- "BlindDog ate Food at location: 5\n"
- ]
- }
- ],
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
"source": [
"park = Park()\n",
"dog = BlindDog(program)\n",
@@ -235,26 +308,14 @@
"source": [
"Notice that the dog moved from location 1 to 4, over 4 steps, and ate food at location 5 in the 5th step.\n",
"\n",
- "Lets continue this simulation for 5 more steps."
+ "Let's continue this simulation for 5 more steps."
]
},
{
"cell_type": "code",
- "execution_count": 6,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "BlindDog decided to move down at location: 5\n",
- "BlindDog decided to move down at location: 6\n",
- "BlindDog drank Water at location: 7\n"
- ]
- }
- ],
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
"source": [
"park.run(5)"
]
@@ -263,32 +324,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Perfect! Note how the simulation stopped after the dog drank the water - exhausting all the food and water ends our simulation, as we had defined before. Lets add some more water and see if our dog can reach it."
+ "Perfect! Note how the simulation stopped after the dog drank the water - exhausting all the food and water ends our simulation, as we had defined before. Let's add some more water and see if our dog can reach it."
]
},
{
"cell_type": "code",
- "execution_count": 7,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "BlindDog decided to move down at location: 7\n",
- "BlindDog decided to move down at location: 8\n",
- "BlindDog decided to move down at location: 9\n",
- "BlindDog decided to move down at location: 10\n",
- "BlindDog decided to move down at location: 11\n",
- "BlindDog decided to move down at location: 12\n",
- "BlindDog decided to move down at location: 13\n",
- "BlindDog decided to move down at location: 14\n",
- "BlindDog drank Water at location: 15\n"
- ]
- }
- ],
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
"source": [
"park.add_thing(water, 15)\n",
"park.run(10)"
@@ -298,26 +341,29 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This is how to implement an agent, its program, and environment. However, this was a very simple case. Lets try a 2-Dimentional environment now with multiple agents.\n",
+ "Above, we learnt to implement an agent, its program, and an environment on which it acts. However, this was a very simple case. Let's try to add complexity to it by creating a 2-Dimensional environment!\n",
+ "\n",
"\n",
+ "## AGENTS IN A 2D ENVIRONMENT\n",
"\n",
- "# 2D Environment #\n",
- "To make our Park 2D, we will need to make it a subclass of XYEnvironment instead of Environment. Please note that our park is indexed in the 4th quadrant of the X-Y plane.\n",
+ "For us to not read so many logs of what our dog did, we add a bit of graphics while making our Park 2D. To do so, we will need to make it a subclass of GraphicEnvironment instead of Environment. Parks implemented by subclassing GraphicEnvironment class adds these extra properties to it:\n",
"\n",
- "We will also eventually add a person to pet the dog."
+ " - Our park is indexed in the 4th quadrant of the X-Y plane.\n",
+ " - Every time we create a park subclassing GraphicEnvironment, we need to define the colors of all the things we plan to put into the park. The colors are defined in typical [RGB digital 8-bit format](https://en.wikipedia.org/wiki/RGB_color_model#Numeric_representations), common across the web.\n",
+ " - Fences are added automatically to all parks so that our dog does not go outside the park's boundary - it just isn't safe for blind dogs to be outside the park by themselves! GraphicEnvironment provides `is_inbounds` function to check if our dog tries to leave the park.\n",
+ " \n",
+ "First let us try to upgrade our 1-dimensional `Park` environment by just replacing its superclass by `GraphicEnvironment`. "
]
},
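To make the two properties above concrete, here is a small standalone sketch. Only the color-dictionary format mirrors the notebook; the `valid_rgb` and `is_inbounds` helpers are our own illustrations of the idea, not the library's implementation:

```python
# Colors for a GraphicEnvironment-style park are ordinary 8-bit RGB triples
# keyed by class name (this dict matches the one used later in the notebook).
colors = {'BlindDog': (200, 0, 0), 'Water': (0, 200, 200), 'Food': (230, 115, 40)}

def valid_rgb(color):
    # an RGB triple has exactly three channels, each in 0..255
    return len(color) == 3 and all(0 <= channel <= 255 for channel in color)

# An is_inbounds-style check for a park of width 5 and height 20, indexed in
# the fourth quadrant of the X-Y plane (x grows rightward, y grows downward).
def is_inbounds(location, width=5, height=20):
    x, y = location
    return 0 <= x < width and 0 <= y < height

print(all(valid_rgb(c) for c in colors.values()))
print(is_inbounds([0, 7]))   # inside the park
print(is_inbounds([0, 25]))  # beyond the bottom fence
```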
{
"cell_type": "code",
- "execution_count": 8,
- "metadata": {
- "collapsed": true
- },
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
- "class Park2D(XYEnvironment):\n",
+ "class Park2D(GraphicEnvironment):\n",
" def percept(self, agent):\n",
- " '''prints & return a list of things that are in our agent's location'''\n",
+ " '''return a list of things that are in our agent's location'''\n",
" things = self.list_things_at(agent.location)\n",
" return things\n",
" \n",
@@ -349,8 +395,8 @@
" return dead_agents or no_edibles\n",
"\n",
"class BlindDog(Agent):\n",
- " location = [0,1]# change location to a 2d value\n",
- " direction = Direction(\"down\")# variable to store the direction our dog is facing\n",
+ " location = [0,1] # change location to a 2d value\n",
+ " direction = Direction(\"down\") # variable to store the direction our dog is facing\n",
" \n",
" def movedown(self):\n",
" self.location[1] += 1\n",
@@ -365,58 +411,23 @@
" ''' returns True upon success or False otherwise'''\n",
" if isinstance(thing, Water):\n",
" return True\n",
- " return False\n",
- " \n",
- "def program(percepts):\n",
- " '''Returns an action based on it's percepts'''\n",
- " for p in percepts:\n",
- " if isinstance(p, Food):\n",
- " return 'eat'\n",
- " elif isinstance(p, Water):\n",
- " return 'drink'\n",
- " return 'move down'"
+ " return False"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now lets test this new park with our same dog, food and water"
+ "Now let's test this new park with our same dog, food and water. We color our dog with a nice red and mark food and water with orange and blue respectively."
]
},
{
"cell_type": "code",
- "execution_count": 9,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "BlindDog decided to move down at location: [0, 1]\n",
- "BlindDog decided to move down at location: [0, 2]\n",
- "BlindDog decided to move down at location: [0, 3]\n",
- "BlindDog decided to move down at location: [0, 4]\n",
- "BlindDog ate Food at location: [0, 5]\n",
- "BlindDog decided to move down at location: [0, 5]\n",
- "BlindDog decided to move down at location: [0, 6]\n",
- "BlindDog drank Water at location: [0, 7]\n",
- "BlindDog decided to move down at location: [0, 7]\n",
- "BlindDog decided to move down at location: [0, 8]\n",
- "BlindDog decided to move down at location: [0, 9]\n",
- "BlindDog decided to move down at location: [0, 10]\n",
- "BlindDog decided to move down at location: [0, 11]\n",
- "BlindDog decided to move down at location: [0, 12]\n",
- "BlindDog decided to move down at location: [0, 13]\n",
- "BlindDog decided to move down at location: [0, 14]\n",
- "BlindDog drank Water at location: [0, 15]\n"
- ]
- }
- ],
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
"source": [
- "park = Park2D(5,20) # park width is set to 5, and height to 20\n",
+ "park = Park2D(5,20, color={'BlindDog': (200,0,0), 'Water': (0, 200, 200), 'Food': (230, 115, 40)}) # park width is set to 5, and height to 20\n",
"dog = BlindDog(program)\n",
"dogfood = Food()\n",
"water = Water()\n",
@@ -425,6 +436,7 @@
"park.add_thing(water, [0,7])\n",
"morewater = Water()\n",
"park.add_thing(morewater, [0,15])\n",
+ "print(\"BlindDog starts at (1,1) facing downwards, lets see if he can find any food!\")\n",
"park.run(20)"
]
},
@@ -432,11 +444,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This works, but our blind dog doesn't make any use of the 2 dimensional space available to him. Let's make our dog more energetic so that he turns and moves forward, instead of always moving down. We'll also need to make appropriate changes to our environment to be able to handle this extra motion.\n",
+ "Adding some graphics was a good idea! We immediately see that the code works, but our blind dog doesn't make any use of the 2 dimensional space available to him. Let's make our dog more energetic so that he turns and moves forward, instead of always moving down. In doing so, we'll also need to make some changes to our environment to be able to handle this extra motion.\n",
"\n",
- "# PROGRAM - EnergeticBlindDog #\n",
+ "### PROGRAM - EnergeticBlindDog\n",
"\n",
- "Lets make our dog turn or move forwards at random - except when he's at the edge of our park - in which case we make him change his direction explicitly by turning to avoid trying to leave the park. Our dog is blind, however, so he wouldn't know which way to turn - he'd just have to try arbitrarily.\n",
+ "Let's make our dog turn or move forwards at random - except when he's at the edge of our park - in which case we make him change his direction explicitly by turning to avoid trying to leave the park. However, our dog is blind so he wouldn't know which way to turn - he'd just have to try arbitrarily.\n",
"\n",
"
\n",
"
\n",
@@ -470,24 +482,19 @@
},
{
"cell_type": "code",
- "execution_count": 10,
- "metadata": {
- "collapsed": false
- },
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
"from random import choice\n",
"\n",
- "turn = False# global variable to remember to turn if our dog hits the boundary\n",
"class EnergeticBlindDog(Agent):\n",
" location = [0,1]\n",
" direction = Direction(\"down\")\n",
" \n",
" def moveforward(self, success=True):\n",
- " '''moveforward possible only if success (ie valid destination location)'''\n",
- " global turn\n",
+ " '''moveforward possible only if success (i.e. valid destination location)'''\n",
" if not success:\n",
- " turn = True # if edge has been reached, remember to turn\n",
" return\n",
" if self.direction.direction == Direction.R:\n",
" self.location[0] += 1\n",
@@ -504,30 +511,28 @@
" def eat(self, thing):\n",
" '''returns True upon success or False otherwise'''\n",
" if isinstance(thing, Food):\n",
- " #print(\"Dog: Ate food at {}.\".format(self.location))\n",
" return True\n",
" return False\n",
" \n",
" def drink(self, thing):\n",
" ''' returns True upon success or False otherwise'''\n",
" if isinstance(thing, Water):\n",
- " #print(\"Dog: Drank water at {}.\".format(self.location))\n",
" return True\n",
" return False\n",
" \n",
"def program(percepts):\n",
" '''Returns an action based on it's percepts'''\n",
- " global turn\n",
+ " \n",
" for p in percepts: # first eat or drink - you're a dog!\n",
" if isinstance(p, Food):\n",
" return 'eat'\n",
" elif isinstance(p, Water):\n",
" return 'drink'\n",
- " if turn: # then recall if you were at an edge and had to turn\n",
- " turn = False\n",
- " choice = random.choice((1,2));\n",
- " else:\n",
- " choice = random.choice((1,2,3,4)) # 1-right, 2-left, others-forward\n",
+ " if isinstance(p,Bump): # then check if you are at an edge and have to turn\n",
+ " turn = False\n",
+ " choice = random.choice((1,2));\n",
+ " else:\n",
+ " choice = random.choice((1,2,3,4)) # 1-right, 2-left, others-forward\n",
" if choice == 1:\n",
" return 'turnright'\n",
" elif choice == 2:\n",
@@ -541,141 +546,33 @@
"cell_type": "markdown",
"metadata": {},
"source": [
+ "### ENVIRONMENT - Park2D\n",
+ "\n",
"We also need to modify our park accordingly, in order to be able to handle all the new actions our dog wishes to execute. Additionally, we'll need to prevent our dog from moving to locations beyond our park boundary - it just isn't safe for blind dogs to be outside the park by themselves."
]
},
{
"cell_type": "code",
- "execution_count": 11,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "class Park2D(XYEnvironment):\n",
- " def percept(self, agent):\n",
- " '''prints & return a list of things that are in our agent's location'''\n",
- " things = self.list_things_at(agent.location)\n",
- " return things\n",
- " \n",
- " def execute_action(self, agent, action):\n",
- " '''changes the state of the environment based on what the agent does.'''\n",
- " if action == 'turnright':\n",
- " print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))\n",
- " agent.turn(Direction.R)\n",
- " #print('now facing {}'.format(agent.direction.direction))\n",
- " elif action == 'turnleft':\n",
- " print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))\n",
- " agent.turn(Direction.L)\n",
- " #print('now facing {}'.format(agent.direction.direction))\n",
- " elif action == 'moveforward':\n",
- " loc = copy.deepcopy(agent.location) # find out the target location\n",
- " if agent.direction.direction == Direction.R:\n",
- " loc[0] += 1\n",
- " elif agent.direction.direction == Direction.L:\n",
- " loc[0] -= 1\n",
- " elif agent.direction.direction == Direction.D:\n",
- " loc[1] += 1\n",
- " elif agent.direction.direction == Direction.U:\n",
- " loc[1] -= 1\n",
- " #print('{} at {} facing {}'.format(agent, loc, agent.direction.direction))\n",
- " if self.is_inbounds(loc):# move only if the target is a valid location\n",
- " print('{} decided to move {}wards at location: {}'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n",
- " agent.moveforward()\n",
- " else:\n",
- " print('{} decided to move {}wards at location: {}, but couldnt'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n",
- " agent.moveforward(False)\n",
- " elif action == \"eat\":\n",
- " items = self.list_things_at(agent.location, tclass=Food)\n",
- " if len(items) != 0:\n",
- " if agent.eat(items[0]):\n",
- " print('{} ate {} at location: {}'\n",
- " .format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))\n",
- " self.delete_thing(items[0])\n",
- " elif action == \"drink\":\n",
- " items = self.list_things_at(agent.location, tclass=Water)\n",
- " if len(items) != 0:\n",
- " if agent.drink(items[0]):\n",
- " print('{} drank {} at location: {}'\n",
- " .format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))\n",
- " self.delete_thing(items[0])\n",
- " \n",
- " def is_done(self):\n",
- " '''By default, we're done when we can't find a live agent, \n",
- " but to prevent killing our cute dog, we will stop before itself - when there is no more food or water'''\n",
- " no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)\n",
- " dead_agents = not any(agent.is_alive() for agent in self.agents)\n",
- " return dead_agents or no_edibles\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "dog started at [0,0], facing down. Lets see if he found any food or water!\n",
- "EnergeticBlindDog decided to move downwards at location: [0, 0]\n",
- "EnergeticBlindDog decided to move downwards at location: [0, 1]\n",
- "EnergeticBlindDog drank Water at location: [0, 2]\n",
- "EnergeticBlindDog decided to turnright at location: [0, 2]\n",
- "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldnt\n",
- "EnergeticBlindDog decided to turnright at location: [0, 2]\n",
- "EnergeticBlindDog decided to turnright at location: [0, 2]\n",
- "EnergeticBlindDog decided to turnleft at location: [0, 2]\n",
- "EnergeticBlindDog decided to turnleft at location: [0, 2]\n",
- "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldnt\n",
- "EnergeticBlindDog decided to turnleft at location: [0, 2]\n",
- "EnergeticBlindDog decided to turnright at location: [0, 2]\n",
- "EnergeticBlindDog decided to move leftwards at location: [0, 2], but couldnt\n",
- "EnergeticBlindDog decided to turnleft at location: [0, 2]\n",
- "EnergeticBlindDog decided to move downwards at location: [0, 2], but couldnt\n",
- "EnergeticBlindDog decided to turnright at location: [0, 2]\n",
- "EnergeticBlindDog decided to turnleft at location: [0, 2]\n",
- "EnergeticBlindDog decided to turnleft at location: [0, 2]\n",
- "EnergeticBlindDog decided to move rightwards at location: [0, 2]\n",
- "EnergeticBlindDog ate Food at location: [1, 2]\n"
- ]
- }
- ],
- "source": [
- "park = Park2D(3,3)\n",
- "dog = EnergeticBlindDog(program)\n",
- "dogfood = Food()\n",
- "water = Water()\n",
- "park.add_thing(dog, [0,0])\n",
- "park.add_thing(dogfood, [1,2])\n",
- "park.add_thing(water, [2,1])\n",
- "morewater = Water()\n",
- "park.add_thing(morewater, [0,2])\n",
- "print('dog started at [0,0], facing down. Lets see if he found any food or water!')\n",
- "park.run(20)"
- ]
- },
- {
- "cell_type": "markdown",
+ "execution_count": null,
"metadata": {},
- "source": [
- "This is good, but it still lacks graphics. What if we wanted to visualize our park as it changed? To do that, all we have to do is make our park a subclass of GraphicEnvironment instead of XYEnvironment. Lets see how this looks."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "metadata": {
- "collapsed": true
- },
"outputs": [],
"source": [
- "class GraphicPark(GraphicEnvironment):\n",
+ "class Park2D(GraphicEnvironment):\n",
" def percept(self, agent):\n",
- " '''prints & return a list of things that are in our agent's location'''\n",
+ " '''return a list of things that are in our agent's location'''\n",
" things = self.list_things_at(agent.location)\n",
+ " loc = copy.deepcopy(agent.location) # find out the target location\n",
+ " #Check if agent is about to bump into a wall\n",
+ " if agent.direction.direction == Direction.R:\n",
+ " loc[0] += 1\n",
+ " elif agent.direction.direction == Direction.L:\n",
+ " loc[0] -= 1\n",
+ " elif agent.direction.direction == Direction.D:\n",
+ " loc[1] += 1\n",
+ " elif agent.direction.direction == Direction.U:\n",
+ " loc[1] -= 1\n",
+ " if not self.is_inbounds(loc):\n",
+ " things.append(Bump())\n",
" return things\n",
" \n",
" def execute_action(self, agent, action):\n",
@@ -683,28 +580,12 @@
" if action == 'turnright':\n",
" print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))\n",
" agent.turn(Direction.R)\n",
- " #print('now facing {}'.format(agent.direction.direction))\n",
" elif action == 'turnleft':\n",
" print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))\n",
" agent.turn(Direction.L)\n",
- " #print('now facing {}'.format(agent.direction.direction))\n",
" elif action == 'moveforward':\n",
- " loc = copy.deepcopy(agent.location) # find out the target location\n",
- " if agent.direction.direction == Direction.R:\n",
- " loc[0] += 1\n",
- " elif agent.direction.direction == Direction.L:\n",
- " loc[0] -= 1\n",
- " elif agent.direction.direction == Direction.D:\n",
- " loc[1] += 1\n",
- " elif agent.direction.direction == Direction.U:\n",
- " loc[1] -= 1\n",
- " #print('{} at {} facing {}'.format(agent, loc, agent.direction.direction))\n",
- " if self.is_inbounds(loc):# move only if the target is a valid location\n",
- " print('{} decided to move {}wards at location: {}'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n",
- " agent.moveforward()\n",
- " else:\n",
- " print('{} decided to move {}wards at location: {}, but couldnt'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n",
- " agent.moveforward(False)\n",
+ " print('{} decided to move {}wards at location: {}'.format(str(agent)[1:-1], agent.direction.direction, agent.location))\n",
+ " agent.moveforward()\n",
" elif action == \"eat\":\n",
" items = self.list_things_at(agent.location, tclass=Food)\n",
" if len(items) != 0:\n",
@@ -732,419 +613,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "That is the only change we make. The rest of our code stays the same. There is a slight difference in usage though. Every time we create a GraphicPark, we need to define the colors of all the things we plan to put into the park. The colors are defined in typical [RGB digital 8-bit format](https://en.wikipedia.org/wiki/RGB_color_model#Numeric_representations), common across the web."
+ "Now that our park is ready for the 2D motion of our energetic dog, lets test it!"
]
},
{
"cell_type": "code",
- "execution_count": 19,
- "metadata": {
- "collapsed": false,
- "scrolled": true
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "dog started at [0,0], facing down. Lets see if he found any food or water!\n"
- ]
- },
- {
- "data": {
- "text/html": [
- "