diff --git a/adxyz_agents_chap2.ipynb b/adxyz_agents_chap2.ipynb
new file mode 100644
index 000000000..33569a7ae
--- /dev/null
+++ b/adxyz_agents_chap2.ipynb
@@ -0,0 +1,1827 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# AGENT #\n",
+ "\n",
+ "An agent, as defined in 2.1 is anything that can perceive its environment through sensors, and act upon that environment through actuators based on its agent program. This can be a dog, robot, or even you. As long as you can perceive the environment and act on it, you are an agent. This notebook will explain how to implement a simple agent, create an environment, and create a program that helps the agent act on the environment based on its percepts.\n",
+ "\n",
+ "Before moving on, review the Agent and Environment classes in [agents.py](https://github.com/aimacode/aima-python/blob/master/agents.py).\n",
+ "\n",
+ "Let's begin by importing all the functions from the agents.py module and creating our first agent - a blind dog."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false,
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "#from agents import *\n",
+ "\n",
+ "#class BlindDog(Agent):\n",
+ "# def eat(self, thing):\n",
+ "# print(\"Dog: Ate food at {}.\".format(self.location))\n",
+ "# \n",
+ "# def drink(self, thing):\n",
+ "# print(\"Dog: Drank water at {}.\".format( self.location))\n",
+ "#\n",
+ "#dog = BlindDog()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "What we have just done is create a dog who can only feel what's in his location (since he's blind), and can eat or drink. Let's see if he's alive..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#print(dog.alive)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# ENVIRONMENT #\n",
+ "\n",
+ "A park is an example of an environment because our dog can perceive and act upon it. The Environment class in agents.py is an abstract class, so we will have to create our own subclass from it before we can use it. The abstract class must contain the following methods:\n",
+ "\n",
+ "
percept(self, agent) - returns what the agent perceives\n",
+ "execute_action(self, agent, action) - changes the state of the environment based on what the agent does."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#class Food(Thing):\n",
+ "# pass\n",
+ "\n",
+ "#class Water(Thing):\n",
+ "# pass\n",
+ "\n",
+ "#class Park(Environment):\n",
+ "# def percept(self, agent):\n",
+ "# '''prints & return a list of things that are in our agent's location'''\n",
+ "# things = self.list_things_at(agent.location)\n",
+ "# print(things)\n",
+ "# return things\n",
+ " \n",
+ "# def execute_action(self, agent, action):\n",
+ "# '''changes the state of the environment based on what the agent does.'''\n",
+ "# if action == \"move down\":\n",
+ "# agent.movedown()\n",
+ "# elif action == \"eat\":\n",
+ "# items = self.list_things_at(agent.location, tclass=Food)\n",
+ "# if len(items) != 0:\n",
+ "# if agent.eat(items[0]): #Have the dog pick eat the first item\n",
+ "# self.delete_thing(items[0]) #Delete it from the Park after.\n",
+ "# elif action == \"drink\":\n",
+ "# items = self.list_things_at(agent.location, tclass=Water)\n",
+ "# if len(items) != 0:\n",
+ "# if agent.drink(items[0]): #Have the dog drink the first item\n",
+ "# self.delete_thing(items[0]) #Delete it from the Park after.\n",
+ " \n",
+ "# def is_done(self):\n",
+ "# '''By default, we're done when we can't find a live agent, \n",
+ "# but to prevent killing our cute dog, we will or it with when there is no more food or water'''\n",
+ "# no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)\n",
+ "# dead_agents = not any(agent.is_alive() for agent in self.agents)\n",
+ "# return dead_agents or no_edibles\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Wumpus Environment"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#from ipythonblocks import BlockGrid\n",
+ "#from agents import *\n",
+ "\n",
+ "#color = {\"Breeze\": (225, 225, 225),\n",
+ "# \"Pit\": (0,0,0),\n",
+ "# \"Gold\": (253, 208, 23),\n",
+ "# \"Glitter\": (253, 208, 23),\n",
+ "# \"Wumpus\": (43, 27, 23),\n",
+ "# \"Stench\": (128, 128, 128),\n",
+ "# \"Explorer\": (0, 0, 255),\n",
+ "# \"Wall\": (44, 53, 57)\n",
+ "# }\n",
+ "\n",
+ "#def program(percepts):\n",
+ "# '''Returns an action based on it's percepts'''\n",
+ "# print(percepts)\n",
+ "# return input()\n",
+ "\n",
+ "#w = WumpusEnvironment(program, 7, 7) \n",
+ "#grid = BlockGrid(w.width, w.height, fill=(123, 234, 123))\n",
+ "\n",
+ "#def draw_grid(world):\n",
+ "# global grid\n",
+ "# grid[:] = (123, 234, 123)\n",
+ "# for x in range(0, len(world)):\n",
+ "# for y in range(0, len(world[x])):\n",
+ "# if len(world[x][y]):\n",
+ "# grid[y, x] = color[world[x][y][-1].__class__.__name__]\n",
+ "\n",
+ "#def step():\n",
+ "# global grid, w\n",
+ "# draw_grid(w.get_world())\n",
+ "# grid.show()\n",
+ "# w.step()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": true
+ },
+ "source": [
+ "# PROGRAM #\n",
+ "Now that we have a Park Class, we need to implement a program module for our dog. A program controls how the dog acts upon it's environment. Our program will be very simple, and is shown in the table below.\n",
+ "\n",
+ " \n",
+ " Percept: | \n",
+ " Feel Food | \n",
+ " Feel Water | \n",
+ " Feel Nothing | \n",
+ "
\n",
+ " \n",
+ " Action: | \n",
+ " eat | \n",
+ " drink | \n",
+ " move up | \n",
+ "
\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#class BlindDog(Agent):\n",
+ "# location = 1\n",
+ " \n",
+ "# def movedown(self):\n",
+ "# self.location += 1\n",
+ " \n",
+ "# def eat(self, thing):\n",
+ "# '''returns True upon success or False otherwise'''\n",
+ "# if isinstance(thing, Food):\n",
+ "# print(\"Dog: Ate food at {}.\".format(self.location))\n",
+ "# return True\n",
+ "# return False\n",
+ " \n",
+ "# def drink(self, thing):\n",
+ "# ''' returns True upon success or False otherwise'''\n",
+ "# if isinstance(thing, Water):\n",
+ "# print(\"Dog: Drank water at {}.\".format(self.location))\n",
+ "# return True\n",
+ "# return False\n",
+ " \n",
+ "#def program(percepts):\n",
+ "# '''Returns an action based on it's percepts'''\n",
+ "# for p in percepts:\n",
+ "# if isinstance(p, Food):\n",
+ "# return 'eat'\n",
+ "# elif isinstance(p, Water):\n",
+ "# return 'drink'\n",
+ "# return 'move down' "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#park = Park()\n",
+ "#dog = BlindDog(program)\n",
+ "#dogfood = Food()\n",
+ "#water = Water()\n",
+ "#park.add_thing(dog, 0)\n",
+ "#park.add_thing(dogfood, 5)\n",
+ "#park.add_thing(water, 7)\n",
+ "\n",
+ "#park.run(10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "That's how easy it is to implement an agent, its program, and environment. But that was a very simple case. What if our environment was 2-Dimentional instead of 1? And what if we had multiple agents?\n",
+ "\n",
+ "To make our Park 2D, we will need to make it a subclass of XYEnvironment instead of Environment. Also, let's add a person to play fetch with the dog."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "#class Park(XYEnvironment):\n",
+ "# def percept(self, agent):\n",
+ "# '''prints & return a list of things that are in our agent's location'''\n",
+ "# things = self.list_things_at(agent.location)\n",
+ "# print(things)\n",
+ "# return things\n",
+ " \n",
+ "# def execute_action(self, agent, action):\n",
+ "# '''changes the state of the environment based on what the agent does.'''\n",
+ "# if action == \"move down\":\n",
+ "# agent.movedown()\n",
+ "# elif action == \"eat\":\n",
+ "# items = self.list_things_at(agent.location, tclass=Food)\n",
+ "# if len(items) != 0:\n",
+ "# if agent.eat(items[0]): #Have the dog pick eat the first item\n",
+ "# self.delete_thing(items[0]) #Delete it from the Park after.\n",
+ "# elif action == \"drink\":\n",
+ "# items = self.list_things_at(agent.location, tclass=Water)\n",
+ "# if len(items) != 0:\n",
+ "# if agent.drink(items[0]): #Have the dog drink the first item\n",
+ "# self.delete_thing(items[0]) #Delete it from the Park after.\n",
+ " \n",
+ "# def is_done(self):\n",
+ "# '''By default, we're done when we can't find a live agent, \n",
+ "# but to prevent killing our cute dog, we will or it with when there is no more food or water'''\n",
+ "# no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)\n",
+ "# dead_agents = not any(agent.is_alive() for agent in self.agents)\n",
+ "# return dead_agents or no_edibles"
+ ]
+ },
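+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Below is a minimal sketch (my own, not from the book) of how the two-dimensional Park might be exercised, assuming the commented-out Food, Water, and program definitions above have been uncommented and run first. In an XYEnvironment, locations are (x, y) tuples, so the dog here is given a tuple location and a movedown that steps along the y-axis; the class name BlindDog2D is illustrative only.\n",
+ "\n",
+ "    class BlindDog2D(Agent):\n",
+ "        location = (0, 0)   # (x, y) location in the XYEnvironment\n",
+ "\n",
+ "        def movedown(self):\n",
+ "            # step one square down the y-axis\n",
+ "            self.location = (self.location[0], self.location[1] + 1)\n",
+ "\n",
+ "        def eat(self, thing):\n",
+ "            if isinstance(thing, Food):\n",
+ "                print('Dog: Ate food at {}.'.format(self.location))\n",
+ "                return True\n",
+ "            return False\n",
+ "\n",
+ "        def drink(self, thing):\n",
+ "            if isinstance(thing, Water):\n",
+ "                print('Dog: Drank water at {}.'.format(self.location))\n",
+ "                return True\n",
+ "            return False\n",
+ "\n",
+ "    park = Park()                      # the XYEnvironment subclass defined above\n",
+ "    dog = BlindDog2D(program)          # reuse the program defined earlier\n",
+ "    park.add_thing(dog, (0, 0))\n",
+ "    park.add_thing(Food(), (0, 5))\n",
+ "    park.add_thing(Water(), (0, 7))\n",
+ "    park.run(15)\n"
+ ]
+ },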
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Notes and exercises from the book."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Part I Artificial Intelligence \n",
+ " 1 Introduction \n",
+ "The basis of the course is the idea of an intelligent agent (to be described later). The intelligent agent receives input from its environment, is able to do some processing, and then act on the environment to modify it in some desired fashion. \n",
+ "In some cases, large amounts of training data will make a poor algorithm outperform a good algorithm on a smaller amount of data. (This is very interesting). \n",
+ "--Yarowsky 1995, Word Sense Disambiguiation. \n",
+ "--Banko and Brill, 2001, bootstrap \n",
+ "--Hays and Efros, 2007, filling photos with holes \n",
+ "Exercises: \n",
+ "1.1 Define in your own words intelligence, artificial intelligence, agent, rationality, logical reasoning. \n",
+ "Intelligence is the ability to take information from external sources, and along with internal stored information, take actions which change the external environment in a way that appears to have a structure or pattern that can be observed and recognized by others. For example, if a door is closed and locked, and a person tries to open it by reaching into their pocket and seeking a key that fits, this would be considered an act of intelligence. By contrast, simply kicking the door down would not be considered intelligence, since it is wasteful of resources. \n",
+ "Artificial Intelligence is simulating the intelligence of living beings by non-living machines, but not necessarily by duplicating the methods (which may be unknown). The pattern of taking external input, processing it in some way to reach a choice, and then acting on that choice externally are the key elements to reproduce. \n",
+ " \n",
+ "Notes on Turing (1950): In this paper, Alan Turing describes the \"Imitation Game\" which is what we now think of as the Turing test. In his original version, there is the interregator, the test subject, and also a third player who is human, and tries to provide evidence of human responses to the interogator (probably to simulate a control-test experimental setup). Other parts of the paper describe learning machines, reinforcement learning, and the characterization of \"thinking\" as being related to storage size. At a storage size of 1Gbit or so, Turing believes machines will pass the \"Imitation Game\" test in that an interogator will be unable to distinguish between a human and machine. The human brain, estimated at the time of this paper, was asserted to have approximately 10^10 to 10^15 bits of storage, which is 1GB to 100TB – this is a fairly wide range, but with current day technology, easily achievable either in the cloud or on native hardware."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2 Intelligent Agents \n",
+ " \n",
+ "Rational Agent Design: \n",
+ "A rational agent will maximize the performance measure given the percept value it has seen so far. \n",
+ "Four Types of Agents: \n",
+ "-a- Goal Seeking \n",
+ "-b- Utility Maximization \n",
+ "-c- Reflex \n",
+ "-d- Model based \n",
+ "-e- Learning agents \n",
+ " \n",
+ "Exercises: \n",
+ "2.1 Suppose that the performance measure is concerned with just the first T steps of the environment and ignores everything thereafter. Show that a rational agent's action may depend not on just the state of the environment but also the time step it has reached: \n",
+ "In the general case, we would have a situation where you would need to include the time step and compare it to T in order to decide on an action. Once the time step T is reached, any further action cannot change the performance measure, any action is acceptable. However, before that time step is reached, the action taken must be such that the performance measure is maximized. \n",
+ " \n",
+ " 2.2: \n",
+ "A) Show that the simple vacuum-cleaner agent function described in Figure 2.3 is indeed rational under the assumptions listed on page 38: \n",
+ "Note: A time step consists of observing a percept, taking an action, and having the performance measure updated: \n",
+ "Definition of a Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. \n",
+ "Assumptions: \n",
+ "-a- The performance measure awards one point for each clean square at each time step, over a \"lifetime\" of 1000 time steps. (Note: The performance measure is assessed from the state of the entire environment, not agent, point of view) \n",
+ "-b- The geography of the environment is known a priori (Figure 2.2), but the dirt distribution and initial location of the agent are not. Clean squares stay clean and sucking cleans the current square. The Left and Right actions move the agent outside the environment, in which case the agent remains where it is. \n",
+ "-c- The only available actions are Left, Right, and Suck. \n",
+ "-d- The agent correctly perceives its location and whether that location contains dirt. \n",
+ "Solution: \n",
+ "There are four possible environments that can arise as initial condition (not including the initial location of the agent, which will double the count) \n",
+ "Case 1: A clean, B clean \n",
+ "Case 2: A dirty, B clean \n",
+ "Case 3: A clean, B dirty \n",
+ "Case 4: A dirty, B dirty \n",
+ "----> \n",
+ "For the following analysis, assume that the agent is always initially places in square A. \n",
+ "For Case 1: The maximum theoretical performance value is 2x1000=2000 points, since at each time step, both squares are clean, so we are awarded 1 point for each panel and the total time history is 1000. The agent will reproduce this theoretical maximum, because it will merely oscillate between squares A and B. \n",
+ "For Case 2: The maximum theoretical performance value is also 2x1000=2000 points, since we assume that if there is dirt in square A, then it will be cleaned, and at the end of the time step, the evaluation of the measure will be based on two clean squares since we are assuming that our agent is in square A initially. Our agent performs actions that will produce the same result. \n",
+ "For Case 3: The maximum theoretical performance value is (1+0) + 2*999=1999. During the first time step, we only have one clean square, and the agent will have to move from A to B in order to clean square B. Including and after the second time step, we have two clean squares and gain 2 points for each time step. Our agent reproduces this action and thus will match the maximum possible performance. \n",
+ "For Case 4: The maximum theoretical performance value is (1+0) + (1+0)+2*998= 1998. On the first time step, we have can only clean the first square A, thus only obtaining 1 points. On the second time step, we move to square B, and only have the clean square A to give a point. On the third and subsequent steps, we have both squares clean, and get a full two points each step. The agent will match this theoretical performance by its actions. \n",
+ " \n",
+ "B) Describe a rational agent function for the case where each movement costs one point. Does the corresponding agent program require internal state? \n",
+ "A key observation is that once a given square is clean you don't want to waste points by moving back to it, so we need to remember whether or not a cell is clean. \n",
+ "Like in part A, there are four cases that the environment can be in (8 if you include the position of the vacuum). \n",
+ "Set of rules for agent program: \n",
+ "[A, clean; A-State:unknown, B-state:unknown] -> [Right, A-State=clean, B-State=unknown] \n",
+ "[B, clean; A-State:unknown, B-state:unknown] -> [Left, A-State=unknown, B-State=Clean] \n",
+ "[A, dirty; A-State:unknown, B-state:unknown] -> [Suck, A-State=clean, B-State=unknown] \n",
+ "[B, dirty; A-State:unknown, B-state:unknown] -> [Suck, A-State=unknown, B-State=clean] \n",
+ "[A, clean; A-State:clean, B-state:unknown] -> [Right, A-State=clean, B-State=unknown] \n",
+ "[B, clean; A-State:clean, B-state:unknown] -> [Suck, A-State=clean, B-State=clean] \n",
+ "XX[A, dirty; A-State:clean, B-state:unknown] -> [Left, A-State=unknown, B-State=unknown] \n",
+ "[B, dirty; A-State:clean, B-state:unknown] -> [Suck, A-State=clean, B-State=clean] \n",
+ "[A, clean; A-State:unknown, B-state:clean] -> [Suck, A-State=clean, B-State=clean] \n",
+ "[B, clean; A-State:unknown, B-state:clean] -> [Left, A-State=unknown, B-State=clean] \n",
+ "[A, dirty; A-State:unknown, B-state:clean] -> [Suck, A-State=clean, B-State=clean] \n",
+ "XX[B, dirty; A-State:unknown, B-state:clean] -> [Left, A-State=unknown, B-State=unknown] \n",
+ "[A, clean; A-State:clean, B-state:clean] -> [Suck, A-State=clean, B-State=clean] \n",
+ "[B, clean; A-State:clean, B-state:clean] -> [Suck, A-State=clean, B-State=clean] \n",
+ "XX[A, dirty; A-State:clean, B-state:clean] -> [Left, A-State=unknown, B-State=unknown] \n",
+ "XX[B, dirty; A-State:clean, B-state:clean] -> [Left, A-State=unknown, B-State=unknown] \n",
+ " \n",
+ "2.3) \n",
+ "An agent that senses only partial information about the state cannot be perfectly rational. \n",
+ "The definition of a rational agent is: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. \n",
+ "By this definition, as long as the agent selects an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence, it is rational. If the percept sequence is something like [A, maybe dirty], there will be an action for this case that will attempt to maximize the known performance measure. \n",
+ "FALSE: Here is an example where the agent is perfectly rational. The environment is one square and a point is given for each time step where the square is clean. The only action is Suck. Assume the worse case that the agent cannot detect whether the square is clean or not. This becomes irrelevant because each time step, the agent will take the single action Suck and either clean the square if it is dirty, and get the point, or simply clean an already clean square, with no loss. Under the given performance measure (which is awarding points for clean squares) this is optimal. \n",
+ " \n",
+ "There exist task environments in which no pure reflex agent can behave rationally. \n",
+ "A pure reflex agent only uses the current percept to make a decision. It is not allowed to store information about previous percepts. Imagine a situation where a point is deducted for each move on the two square vacuum world, and one point is given for each clean square. Once a square has been cleaned, the agent shouldn't return to it. However, without this knowledge being stored, the agent is destined to repeatedly return to previously clean squares, which is not rational, given the fact that these precepts have already been observed. This assumes that the reflex agent is restricted to observing only the current square it is on. "
+ ]
+ },
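+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A small, runnable sketch (my own, not from the book) of the 2.2(b) rule table as an agent program with internal state. The belief dictionary plays the role of the A-State/B-State columns above; the percept format ('A', 'dirty') and the action names are illustrative only.\n",
+ "\n",
+ "    belief = {'A': 'unknown', 'B': 'unknown'}   # what the agent believes about each square\n",
+ "\n",
+ "    def vacuum_with_state(percept):\n",
+ "        square, status = percept                # e.g. ('A', 'dirty')\n",
+ "        other = 'B' if square == 'A' else 'A'\n",
+ "        belief[square] = 'clean'                # clean now, or about to be after sucking\n",
+ "        if status == 'dirty':\n",
+ "            return 'Suck'\n",
+ "        if belief[other] == 'clean':\n",
+ "            return 'NoOp'                       # both squares believed clean: stop moving\n",
+ "        return 'Right' if square == 'A' else 'Left'\n",
+ "\n",
+ "    print(vacuum_with_state(('A', 'dirty')))    # -> Suck\n",
+ "    print(vacuum_with_state(('A', 'clean')))    # -> Right\n",
+ "    print(vacuum_with_state(('B', 'dirty')))    # -> Suck\n",
+ "    print(vacuum_with_state(('B', 'clean')))    # -> NoOp\n"
+ ]
+ },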
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Chapter 2 Exercises"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "2.7) Write pseudocode for the goal-based and utility based agents:\n",
+ "\n",
+ "** Goal-based: **\n",
+ "\n",
+ " currentDeltaAction=0\n",
+ " currentBestAction=[]\n",
+ " While goal==false:\n",
+ " for iAction in listOfActions:\n",
+ " if deltaValue(iAction)>currentDeltaAction:\n",
+ " currentBestAction=iAction\n",
+ " agent_action(currentBestAction):\n",
+ " \n",
+ "\n",
+ "** Utility-based: **\n",
+ "\n",
+ " currentDeltaUtility=0\n",
+ " currentBestAction=[]\n",
+ " While true:\n",
+ " if iAction in listOfActions:\n",
+ " if deltaUtility(iAction)>currentBestUtility:\n",
+ " currentBestAction=iAction\n",
+ " \n",
+ " agent_action(currentBestAction)"
+ ]
+ },
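+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A toy, runnable version of the goal-based selection loop above (my own sketch). The \"environment\" is just a number line: the state starts at 0, the goal is to reach 5, and the actions move the state by -1, 0, or +1; deltaValue measures progress toward the goal.\n",
+ "\n",
+ "    state = 0\n",
+ "    goal = 5\n",
+ "    listOfActions = [-1, 0, +1]\n",
+ "\n",
+ "    def deltaValue(action):\n",
+ "        # progress toward the goal if we took this action (bigger is better)\n",
+ "        return abs(goal - state) - abs(goal - (state + action))\n",
+ "\n",
+ "    while state != goal:\n",
+ "        # pick the action with the largest progress toward the goal\n",
+ "        currentBestAction = max(listOfActions, key=deltaValue)\n",
+ "        state += currentBestAction\n",
+ "        print('took action', currentBestAction, '-> state', state)\n"
+ ]
+ },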
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "2.8) Implement a performance-measuring environment simulator for the vacuum-cleaner world depicted in Figure 2.2 and specified on page 38. Your implementation should be modular so that the sensors, actuators, and enviroment characteristics (size, shape, dirt placement, etc.) can be changed easily."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Agent Name: Vacuum Robot Agent\n",
+ "-------------------------------\n",
+ "*Performance Measure:* +1 point for each clean square at each time step, for 1000 time steps\n",
+ "\n",
+ "*Environment:* Two squares at positions (0,0) and (1,0). The squares can either be dirty or clean. The agent cannot go outside those two positions.\n",
+ "\n",
+ "*Actuators:* The actuators for the agent consist of the ability to move between the squares and the ability to suck up dirt.\n",
+ "\n",
+ "*Sensors:* The sensors allow for the agent to know current location and also whether there is dirt or not at the square the currently occupy."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "from agents import *\n",
+ "\n",
+ "# Define the dirt clump class\n",
+ "class DirtClump(Thing):\n",
+ " pass\n",
+ "\n",
+ "#Define the environment class\n",
+ "class adxyz_VacuumEnvironment(XYEnvironment):\n",
+ "\n",
+ "# Need to override the percept method \n",
+ " def percept(self, agent):\n",
+ " print ()\n",
+ " print (\"In adxyz_VacuumEnvironment - percept override:\")\n",
+ " print (\"Self = \", self)\n",
+ " print (\"Self.things = \", self.things)\n",
+ " print (\"Agent ID = \", agent)\n",
+ " print (\"Agent location = \", agent.location)\n",
+ " print (\"Agent performance = \", agent.performance)\n",
+ " \n",
+ " for iThing in self.things:\n",
+ " if iThing.location==agent.location: #check location\n",
+ " if iThing != agent: # Don't return agent information\n",
+ " if (isinstance(iThing, DirtClump)):\n",
+ " print (\"A thing which is not agent, but a dirt clump = \", iThing )\n",
+ " print (\"Location = \", iThing.location)\n",
+ " return agent.location, \"DirtClump\"\n",
+ " \n",
+ " return agent.location, \"CleanSquare\" #Default, if we don't find a dirt clump.\n",
+ " \n",
+ "# Need to override the action method (and update performance measure.)\n",
+ " def execute_action(self, agent, action):\n",
+ " print ()\n",
+ " print (\"In adxyz_VacuumEnvironment - execute_action override:\")\n",
+ " print(\"self = \", self)\n",
+ " print(\"agent = \", agent)\n",
+ " print(\"current agent action = \", action)\n",
+ " print()\n",
+ " if action==\"Suck\":\n",
+ " print(\"Action-Suck\")\n",
+ " print(\"Need to remove dirt clump at correct location\")\n",
+ " deleteList = []\n",
+ " for iThing in self.things:\n",
+ " if iThing.location==agent.location: #check location\n",
+ " if (isinstance(iThing, DirtClump)): # Only suck dirt\n",
+ " print (\"A thing which is not agent, but a dirt clump = \", iThing)\n",
+ " print (\"Location of dirt clod = \", iThing.location)\n",
+ " self.delete_thing(iThing)\n",
+ " break # can only do one deletion per action.\n",
+ " \n",
+ " elif action==\"MoveRight\":\n",
+ " print(\"Action-MoveRight\")\n",
+ " print(\"agent direction before MoveRight = \", agent.direction)\n",
+ " print(\"agent location before MoveRight = \", agent.location)\n",
+ " agent.bump = False\n",
+ " agent.direction.direction = \"right\"\n",
+ " agent.bump = self.move_to(agent, agent.direction.move_forward(agent.location))\n",
+ " print(\"agent direction after MoveRight = \", agent.direction)\n",
+ " print(\"agent location after MoveRight = \", agent.location)\n",
+ " print()\n",
+ " \n",
+ " elif action==\"MoveLeft\":\n",
+ " print(\"Action-MoveLeft\")\n",
+ " print(\"agent direction before MoveLeft = \", agent.direction)\n",
+ " print(\"agent location before MoveLeft = \", agent.location)\n",
+ " agent.bump = False\n",
+ " agent.direction.direction = \"left\"\n",
+ " agent.bump = self.move_to(agent, agent.direction.move_forward(agent.location))\n",
+ " print(\"agent direction after MoveLeft = \", agent.direction)\n",
+ " print(\"agent location after MoveLeft = \", agent.location)\n",
+ " print()\n",
+ " \n",
+ " elif action==\"DoNothing\":\n",
+ " print(\"Action-DoNothing\")\n",
+ " \n",
+ " else:\n",
+ " print(\"Action-Not Understood\") #probably error. Don't go to score section.\n",
+ " return\n",
+ " \n",
+ "###\n",
+ "### Count up number of clean squares (indirectly)\n",
+ "### and add that to the agent peformance score\n",
+ "###\n",
+ "\n",
+ " print(\"Before dirt count update, agent.performance = \", agent.performance)\n",
+ " dirtCount=0\n",
+ " for iThing in self.things:\n",
+ " if isinstance(iThing, DirtClump):\n",
+ " dirtCount = dirtCount+1\n",
+ "\n",
+ " cleanSquareCount = self.width*self.height-dirtCount \n",
+ " agent.performance=agent.performance + cleanSquareCount\n",
+ " print(\"After execute_action, agent.performance = \", agent.performance)\n",
+ " return "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "2.9) Implement a simple reflex agent for the vacuum environment in Exercise 2.8. Run the environment with this agent for all possible initial dirt configurations and agent locations. Record the performance score for each consideration and the overall average score."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# The program for the simple reflex agent is:\n",
+ "# \n",
+ "# Percept: Action:\n",
+ "# -------- -------\n",
+ "# [(0,0),Clean] -> Right\n",
+ "# [(0,0),Dirty] -> Suck\n",
+ "# [(1,0),Clean] -> Left\n",
+ "# [(1,0),Dirty] -> Suck\n",
+ "#\n",
+ "\n",
+ "def adxyz_SimpleReflexVacuum(percept):\n",
+ " \n",
+ " if percept[0] == (0,0) and percept[1]==\"DirtClump\":\n",
+ " return \"Suck\"\n",
+ " elif percept[0] == (1,0) and percept[1]==\"DirtClump\":\n",
+ " return \"Suck\"\n",
+ " elif percept[0] == (0,0) and percept[1]==\"CleanSquare\":\n",
+ " return \"MoveRight\"\n",
+ " elif percept[0] == (1,0) and percept[1]==\"CleanSquare\":\n",
+ " return \"MoveLeft\"\n",
+ " else:\n",
+ " return \"DoNothing\" # Not sure how you would get here, but DoNothing to be safe.\n",
+ "\n",
+ "# Instantiate a simple reflex vacuum agent\n",
+ "class adxyz_SimpleReflexVacuumAgent(Agent):\n",
+ " pass"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# Define the initial dirt configurations\n",
+ "initDirt=[]\n",
+ "initDirt.append([]) # neither location dirty - format(X,Y)-locations:A=(0,0), B=(1,0)\n",
+ "###initDirt.append([(0,0)]) # square A dirty, square B clean\n",
+ "##initDirt.append([(1,0)]) # square A clean, square B dirty\n",
+ "###initDirt.append([(0,0),(1,0)]) # square A dirty, square B dirty\n",
+ "\n",
+ "print(\"initDirt = \", initDirt)\n",
+ "\n",
+ "#\n",
+ "# Create agent placements\n",
+ "#\n",
+ "initAgent=[]\n",
+ "initAgent.append((0,0))\n",
+ "initAgent.append((1,0))\n",
+ "print(\"initAgent = \", initAgent)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# Create a loop over environments to run simulation\n",
+ "\n",
+ "# Loop over agent placements\n",
+ "for iSimAgentPlacement in range(len(initAgent)):\n",
+ "###for iSimAgentPlacement in range(1):\n",
+ " print(\"Simulation: iSimAgentPlacement = \", iSimAgentPlacement)\n",
+ "\n",
+ "# Loop over dirt placements\n",
+ " for iSimDirtPlacement in range(len(initDirt)):\n",
+ " print (\"Simulation: iSimDirtPlacement = \" , iSimDirtPlacement)\n",
+ " myVacEnv = adxyz_VacuumEnvironment() #Create a new environment for each dirt/agent setup\n",
+ " myVacEnv.width = 2\n",
+ " myVacEnv.height = 1\n",
+ "\n",
+ " for iPlace in range(len(initDirt[iSimDirtPlacement])):\n",
+ " print (\"Simulation: iPlace = \" , iPlace)\n",
+ " myVacEnv.add_thing(DirtClump(),location=initDirt[iSimDirtPlacement][iPlace])\n",
+ " \n",
+ "#\n",
+ "# Now setup the agent.\n",
+ "#\n",
+ " myAgent=adxyz_SimpleReflexVacuumAgent()\n",
+ " myAgent.program=adxyz_SimpleReflexVacuum #Place the agent program here\n",
+ " myAgent.performance=0\n",
+ "\n",
+ "# Instantiate a direction object for 2D generality\n",
+ " myAgent.direction = Direction(\"up\") # need to leverage heading mechanism\n",
+ " \n",
+ "# Add agent to environment\n",
+ " myVacEnv.add_thing(myAgent,location=initAgent[iSimAgentPlacement])\n",
+ " print()\n",
+ " print(\"Environment:\")\n",
+ " for iThings in myVacEnv.things:\n",
+ " print(iThings, iThings.location)\n",
+ " print()\n",
+ " \n",
+ "#\n",
+ "# Now step the environment clock\n",
+ "#\n",
+ " numSteps = 5\n",
+ " for iStep in range(numSteps):\n",
+ " print()\n",
+ " print(\"<-START->\")\n",
+ " print(\"Simulation: step =\", iStep)\n",
+ " myVacEnv.step()\n",
+ " print(\"---END---\")\n",
+ " print(\"---------\")\n",
+ " print()\n",
+ " \n",
+ " print() \n",
+ " print(\"<====>\")\n",
+ " print(\"<====>\")\n",
+ " #need to keep running tally of initial configuration and final performance\n",
+ " print(\"Final performance measure for Agent = \", myAgent.performance)\n",
+ " print(\"======\")\n",
+ " print(\"======\")\n",
+ " print()\n",
+ "#\n",
+ "# End of script\n",
+ "#"
+ ]
+ },
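+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Exercise 2.9 also asks for the score of each configuration and the overall average. One way to do that (a sketch, matching the todo note in the loop above) is to append an (agent start, dirt layout, score) tuple to a results list right after each simulation finishes, and summarize afterwards:\n",
+ "\n",
+ "    results = []\n",
+ "    # inside the simulation loop, after the steps have run:\n",
+ "    # results.append((initAgent[iSimAgentPlacement], initDirt[iSimDirtPlacement], myAgent.performance))\n",
+ "\n",
+ "    # after all simulations have run:\n",
+ "    if results:\n",
+ "        for agentStart, dirtLayout, score in results:\n",
+ "            print('agent start:', agentStart, 'dirt:', dirtLayout, 'score:', score)\n",
+ "        print('average score:', sum(score for _, _, score in results) / len(results))\n"
+ ]
+ },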
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Todo:\n",
+ "- Clean up comments/prints (mostly done)\n",
+ "- Make processing more generalized\n",
+ "-- Introduce multiple dirt clods.\n",
+ "-- Introduce multiple agents.\n",
+ "- Move data to cloud"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "2.10) Consider the modified version of the performance metric where the agent is penalized on point for each movement:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "a) Can a simple reflex agent be perfectly rational for this environment?"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For this problem, there are 8 cases to consider (8 states of the environment): 4 initial dirt configurations and 2 initial agent configurations.\n",
+ "\n",
+ "Case 1a) Clean A, Clean B, agent in square A: The maximum performance score would be 2 points awarded at each step, because there are two clean squares. If we were to design a reflex agent, we could use the following program: [(clean, squareA)-->DoNothing]\n",
+ "Case 1b) Clean A, Clean B, agent in square B: The maximum performance score would be 2 points awarded at each step, because there are two clean squares. If we were to design a reflex agent, we could use the following program: [(clean, SquareB)-->DoNothing]\n",
+ "\n",
+ "Case 2a) Dirt A, Clean B, agent in square A: The maximum performance score would be 2 points, once the dirt is removed from square A. The agent program that could accomplish this is: [(dirt, squareA)-->suck], [(clean, squareA)-->DoNothing]\n",
+ "Case 2b) Dirt A, Clean B, agent in square B: The maximum performance score would be 1-1 (1 point for clean B, -1 for move to A), then 2 points for each step after that. The agent program that could accomplish this is: [(clean, squareB)-->MoveLeft], [(dirt, squareA)-->suck], [(clean, squareA)-->DoNothing]. However, this is in conflict with the optimum program for Case 1b.\n",
+ "\n",
+ "Case 3a) Clean A, Dirt B, agent in squareA: The maximum peformance score would be 1(for clean initial square) -1 (for move to B) = 0 points for step 1. 2 points each step from then on. The agent program that could accomplish this would be: [(clean, squareA)-->MoveRight], [(Dirt, SquareB)-->Suck], [(clean,SquareB)-->doNothing]. However, we can see from this situation that our program for 3a is in conflict with the program for 1a.\n",
+ "Case 3b) Clean A, Dirt B, agent in squareB: The maximum performance score for this would be 2 per time step: The following agent program could accomplish this [(Dirt, SquareB)-->Suck][(clean,SquareB)-->doNothing.\n",
+ "\n",
+ "Case 4a) Dirt A, Dirt B, Agent in Square A: The maximum possible performance points would be 1 for first step, 1-1 for second step, 2 points from that step onwards. An agent function that could accomplish this is: [(dirt,squareA)-->suck], [(clean,squareA)-->moveRight], [(dirt,SquareB)-->suck], [(clean,SquareB)-->doNothing]. However, this includes an instruction which is in conflict with the optimum program in case 1a.\n",
+ "\n",
+ "Case 4b) Dirt A, Dirt B, Agent in Square B: The maximum possible performance points would be 1 for the first step, 1-1 for the second step, and 2 points from the step onwards. An agent function that could accomplish this is: [(dirt, squareB)-->suck], [(clean,squareB)-->moveLeft], [(dirt,squareA)-->suck], [(clean, squareA)-->doNothing]. This has instruction which are in conflict with case 1b. \n",
+ "\n",
+ "Because we have conflicting instructions in order to achieve optimum performance results, we would have to choose one or the other, which would lead to a suboptimal result in at least one case. Thus a perfectly rational agent cannot be designed. By perfectly rational, I mean one that is optimum in every case, since we must assume all cases are possible to occur."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "b) What about a reflex agent with state? Design such an agent."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# The program for the simple reflex agent with state is:\n",
+ "# \n",
+ "# Percept: Action:\n",
+ "# -------- -------\n",
+ "# [(0,0),Clean] -> Right\n",
+ "# [(0,0),Dirty] -> Suck\n",
+ "# [(1,0),Clean] -> Left\n",
+ "# [(1,0),Dirty] -> Suck\n",
+ "#\n",
+ "\n",
+ "def adxyz_SimpleReflexStateVacuum(percept):\n",
+ " \n",
+ " if percept[0] == (0,0) and percept[1]==\"DirtClump\":\n",
+ " return \"Suck\"\n",
+ " elif percept[0] == (1,0) and percept[1]==\"DirtClump\":\n",
+ " return \"Suck\"\n",
+ " elif percept[0] == (0,0) and percept[1]==\"CleanSquare\":\n",
+ " return \"MoveRight\"\n",
+ " elif percept[0] == (1,0) and percept[1]==\"CleanSquare\":\n",
+ " return \"MoveLeft\"\n",
+ " else:\n",
+ " return \"DoNothing\" # Not sure how you would get here, but DoNothing to be safe.\n",
+ "\n",
+ "# Instantiate a simple reflex vacuum agent\n",
+ "class adxyz_SimpleReflexStateVacuumAgent(Agent):\n",
+ " pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Part II Problem Solving \n",
+ "\n",
+ "## Chapter 3 (Solving Problems by Searching"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We are looking to design agents that can solve goal seeking problems.\n",
+ "Step 1: Define the goal, which is a state of the environment. For example, the desired goal might be \"Car in Bucharest\" or \"Robot in square (10,10) with all squares clean\" \n",
+ "Step 2: Define the problem. \n",
+ "- Define the states of the environment (atomic)\n",
+ "- Define the initial state\n",
+ "- Define legal actions\n",
+ "- Define transitions (How the states change based on the actions)\n",
+ "- Define goal test\n",
+ "- Define path/step costs"
+ ]
+ },
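+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The pieces listed above can be collected into a single problem object. Below is a minimal sketch of that structure (my own; it is not the Problem class from the aima-python search.py), with the ingredients as methods:\n",
+ "\n",
+ "    class SimpleProblem:\n",
+ "        def __init__(self, initial, goal):\n",
+ "            self.initial = initial          # initial state\n",
+ "            self.goal = goal                # goal state\n",
+ "\n",
+ "        def actions(self, state):\n",
+ "            return []                       # legal actions available in this state\n",
+ "\n",
+ "        def result(self, state, action):\n",
+ "            return state                    # transition model: state reached by doing action\n",
+ "\n",
+ "        def goal_test(self, state):\n",
+ "            return state == self.goal       # goal test\n",
+ "\n",
+ "        def step_cost(self, state, action, next_state):\n",
+ "            return 1                        # step cost (unit cost by default)\n"
+ ]
+ },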
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "graph-search: A key algorithm for expanding the search space, that avoids redundent paths. The search methods in this chapter are based on graph-search algorithm.\n",
+ "Each step of the algorithm does this:\n",
+ "Unexplored state -> frontier states -> explored states.\n",
+ "A state can only be in one of the three above categories."
+ ]
+ },
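+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A sketch of that skeleton in Python (my own, using the SimpleProblem interface sketched earlier): every state is unexplored, on the frontier, or explored, and explored states are never revisited. With a FIFO frontier this behaves like breadth-first search; other frontier orderings give the other algorithms in this chapter.\n",
+ "\n",
+ "    from collections import deque\n",
+ "\n",
+ "    def graph_search(problem):\n",
+ "        frontier = deque([problem.initial])     # FIFO queue of frontier states\n",
+ "        explored = set()\n",
+ "        while frontier:\n",
+ "            state = frontier.popleft()\n",
+ "            if problem.goal_test(state):\n",
+ "                return state\n",
+ "            explored.add(state)\n",
+ "            for action in problem.actions(state):\n",
+ "                child = problem.result(state, action)\n",
+ "                if child not in explored and child not in frontier:\n",
+ "                    frontier.append(child)\n",
+ "        return None                             # no goal state reachable\n"
+ ]
+ },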
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Infrastructure for search algorithms:\n",
+ "Graphs - nodes that include references to \n",
+ "parent nodes\n",
+ "state descriptions\n",
+ "action that got from parent to child node\n",
+ "path cost (from initial state).\n",
+ "\n",
+ "Types of cost:\n",
+ "Search cost (time to determine solution)\n",
+ "Path cost (cost of actual solution - for example distance on a roadmap)\n",
+ "Total cost: Sum of search + path cost (with appropriate scaling to put them in common units)."
+ ]
+ },
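+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch of such a node (my own, not the Node class from the aima-python search.py): it records the state, the parent node, the action that led here, and the path cost accumulated from the initial state.\n",
+ "\n",
+ "    class SearchNode:\n",
+ "        def __init__(self, state, parent=None, action=None, step_cost=0):\n",
+ "            self.state = state\n",
+ "            self.parent = parent            # None for the root node\n",
+ "            self.action = action            # action that got from the parent to this node\n",
+ "            self.path_cost = (parent.path_cost if parent else 0) + step_cost\n"
+ ]
+ },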
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Types of Search Strategies"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Algorithm evaluation criteria:\n",
+ "- Completeness (Does the algorithm find a solution - or all solutions)\n",
+ "- Optimality (Does the algorithm find the best solution)\n",
+ "- Time complexity (how long does the algorithm take to find solution)\n",
+ "- Space complexity (how much memory is used)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Uninformed search"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This includes all search algorithms that have no idea whether one choice is \"more promising\" than another non-goal state. These algorithms generate non-goal states and test for goal states."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- Breadth-first search: Each node is expanded into the successor nodes one level at a time. Uses a FIFO queue for the frontier."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Pseudo-code for BFS search:\n",
+ "\n",
+ " unexploredNodes = dict()\n",
+ " exploredNodes = dict()\n",
+ " frontierNodes = initialState\n",
+ " goalNodeFound = False\n",
+ " \n",
+ " while not frontierNodes.empty:\n",
+ " currentNode = frontierNodes.pop\n",
+ " if currentNode.goal == True:\n",
+ " currentNode.pathCost=currentNode.parent.pathCost+currentNode.stepCost\n",
+ " goalNodeFound=True\n",
+ " break\n",
+ " else:\n",
+ " exploredNodes[currentNode]=True # add current node to explored nodes\n",
+ " for childNode,dummy in currentNode.links.items(): #Any link is a \"child\"\n",
+ " if (childNode in exploredNodes) or (childNode in frontierNodes):\n",
+ " continue\n",
+ " else:\n",
+ " frontierNodes.push(childNode)\n",
+ " childNode.stepCost=childNode.link[currentNode] # provide step cost\n",
+ " childNode.parent=currentNode\n",
+ " del unexploredNodes[childNode]\n",
+ " \n",
+ " If goalNodeFound != True: # goal node was not set\n",
+ " error\n",
+ " \n",
+ "Need to start at goal node and work back to initial state to provide solution pathway:\n",
+ "\n",
+ " pathSequence = queue.LifoQueue()\n",
+ "\n",
+ " currentNode = goalNode\n",
+ " pathSequence.put(currentNode)\n",
+ "\n",
+ " while currentNode != currentNode.parent:\n",
+ " pathSequence.put(currentNode.parent)\n",
+ " currentNode=currentNode.parent\n",
+ "\n",
+ " pathSequence.put(currentNode)\n",
+ "\n",
+ " while not pathSequence.empty():\n",
+ " print(\"Path sequence = \", pathSequence.get())\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We want to create a generic graph that could be undirected in general and search it using BFS and a FIFO frontier queue."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "class GraphNode():\n",
+ " def __init__(self, initName):\n",
+ " self.links=dict() # (name of link:step cost)\n",
+ " self.parent=None # Is assigned during BFS\n",
+ " self.goal=False # True if goal state\n",
+ " self.pathCost=0\n",
+ " self.stepCost=0\n",
+ " self.frontier=False # True if node has been added to frontier\n",
+ " self.name=initName"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# create map\n",
+ "#\n",
+ "#\n",
+ "# Node1 ----- 10 ----- Node2 ---7--- Node6\n",
+ "# | 28--------/ |\n",
+ "# | / 6\n",
+ "# | / |\n",
+ "# 15 / Node5\n",
+ "# | | |\n",
+ "# | | =======8======= \n",
+ "# |/ |\n",
+ "# Node3 \n",
+ "# |\n",
+ "# |\n",
+ "# |\n",
+ "# 17\n",
+ "# |\n",
+ "# |\n",
+ "# |\n",
+ "# Node4\n",
+ "#\n",
+ "#\n",
+ "Node1=GraphNode(\"Node1\")\n",
+ "Node2=GraphNode(\"Node2\")\n",
+ "Node3=GraphNode(\"Node3\")\n",
+ "Node4=GraphNode(\"Node4\")\n",
+ "Node5=GraphNode(\"Node5\")\n",
+ "Node6=GraphNode(\"Node6\")\n",
+ "\n",
+ "Node1.links[Node2]=10\n",
+ "Node1.links[Node3]=15\n",
+ "\n",
+ "Node2.links[Node1]=10\n",
+ "Node2.links[Node3]=28\n",
+ "Node2.links[Node5]=6\n",
+ "Node2.links[Node6]=7\n",
+ "\n",
+ "Node3.links[Node1]=15\n",
+ "Node3.links[Node2]=28\n",
+ "Node3.links[Node4]=17\n",
+ "Node3.links[Node5]=8\n",
+ "\n",
+ "Node4.links[Node3]=17\n",
+ "\n",
+ "Node5.links[Node2]=6\n",
+ "Node5.links[Node3]=8\n",
+ "\n",
+ "Node6.links[Node2]=7\n",
+ "\n",
+ "print(\"NodeSetup:\")\n",
+ "print(\"Node1 = \", Node1)\n",
+ "print(\"Node2 = \", Node2)\n",
+ "print(\"Node3 = \", Node3)\n",
+ "print(\"Node4 = \", Node4)\n",
+ "print(\"Node5 = \", Node5)\n",
+ "print(\"Node6 = \", Node6)\n",
+ "\n",
+ "print(\"Node1 links = \", Node1.links)\n",
+ "print(\"Node2 links = \", Node2.links)\n",
+ "print(\"Node3 links = \", Node3.links)\n",
+ "print(\"Node4 links = \", Node4.links)\n",
+ "print(\"Node5 links = \", Node5.links)\n",
+ "print(\"Node6 links = \", Node6.links)\n",
+ "\n",
+ "Node1.parent=Node1 # node1 is the initial node - pointing to itself as parent is the flag.\n",
+ "\n",
+ "Node6.goal=True\n",
+ "print(\"Node6.goal = \", Node6.goal)\n",
+ "print()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Run the BFS process\n",
+ "#\n",
+ "\n",
+ "import queue\n",
+ "\n",
+ "###exploredNodes = dict()\n",
+ "frontierNodes = queue.Queue()\n",
+ "goalNodeFound = False\n",
+ "\n",
+ "#\n",
+ "# Initialize the frontier queue\n",
+ "#\n",
+ "\n",
+ "frontierNodes.put(Node1)\n",
+ "Node1.frontier=True\n",
+ "\n",
+ "# Main loop\n",
+ "\n",
+ "while not frontierNodes.empty():\n",
+ " print(\"Exploring frontier nodes: \")\n",
+ " currentNode = frontierNodes.get()\n",
+ " if currentNode.goal == True:\n",
+ " goalNodeFound=True\n",
+ " break\n",
+ " else: \n",
+ " print(\"Expanding current node: \", currentNode.name)\n",
+ " for childNode,dummy in currentNode.links.items(): #Any link is a potential \"child\" \n",
+ " if (childNode.frontier==True):\n",
+ " print(\"Child Node has been seen before: \", childNode.name)\n",
+ " continue\n",
+ " else:\n",
+ " print(\"Child Node is being added to frontier: \", childNode.name)\n",
+ " frontierNodes.put(childNode)\n",
+ " childNode.frontier=True\n",
+ " childNode.parent=currentNode\n",
+ " childNode.stepCost=childNode.links[currentNode] # provide step cost\n",
+ " childNode.pathCost=currentNode.pathCost+childNode.stepCost\n",
+ " \n",
+ " print(\"End of frontier loop:\")\n",
+ " print(\"-------\")\n",
+ " print()\n",
+ " \n",
+ "if goalNodeFound != True: # goal node was not set\n",
+ " print (\"Goal node not found.\")\n",
+ "else:\n",
+ " print (\"Goal node found.\")\n",
+ " print (\"Current Node = \", currentNode.name)\n",
+ " print (\"Current Node Parent = \", currentNode.parent.name)\n",
+ " print (\"Current Node Step Cost = \", currentNode.stepCost)\n",
+ " print (\"Current Node Path Cost = \", currentNode.pathCost)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Report out the solution path working backwords from the goal node to the\n",
+ "# initial node (which is flagged by having the parent=node)\n",
+ "#\n",
+ "\n",
+ "pathSequence = queue.LifoQueue()\n",
+ "pathSequence.put(currentNode)\n",
+ "\n",
+ "while currentNode != currentNode.parent:\n",
+ " pathSequence.put(currentNode.parent)\n",
+ " currentNode=currentNode.parent\n",
+ "\n",
+ "# Add the final node, which is the initial in this case\n",
+ "# The initial node was specially marked to point to itself as parent\n",
+ "\n",
+ "while not pathSequence.empty():\n",
+ " print(\"Path sequence = \", pathSequence.get().name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Uniform Cost search"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This approach uses a priority queue for the frontier based on the smallest path cost to a given new node (need to check if this is smallest path cost or smallest step cost)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ " Example:\n",
+ "\n",
+ " Sib-----99-----Fagar\n",
+ " | |\n",
+ " | |\n",
+ " 80 |\n",
+ " | |\n",
+ " | |\n",
+ " RimV 211 \n",
+ " | /\n",
+ " | / \n",
+ " 97 / \n",
+ " | /\n",
+ " | /\n",
+ " Pit /\n",
+ " | /\n",
+ " | /\n",
+ " 101 /\n",
+ " | /\n",
+ " | /\n",
+ " Bucharest\n",
+ "\n",
+ "Initialize:\n",
+ "Frontier <- Sib\n",
+ "\n",
+ "Processing steps:\n",
+ "1. Pop frontier (priority queue, total path cost order), \n",
+ "2. Goal test\n",
+ "3. Generate descendent nodes and insert descendent nodes into frontier.\n",
+ "\n",
+ "Initialize:\n",
+ "Frontier <- Sib\n",
+ "\n",
+ "Order of examining nodes.\n",
+ " 1. Frontier(Sib) \n",
+ " 2. Frontier.pop -> Sib\n",
+ " 3. GoalTest(Sib) -> False\n",
+ " 4. Expand(Sib) -> RimV, Fagar\n",
+ " 5. Frontier(RimV, Fagar)\n",
+ " 6. Frontier.pop -> RimV\n",
+ " 7. GoalTest(RimV) -> False\n",
+ " 8. Expand(RimV) -> Pit\n",
+ " 9. Frontier(Fagar, Pit)\n",
+ " 10. Frontier.pop -> Fagar\n",
+ " 11. GoalTest(Fagar) -> False\n",
+ " 12. Expand(Fagar) -> Fagar-Bucharest\n",
+ " 13. Frontier(Pit, Fagar-Bucharest)\n",
+ " 14. Frontier.pop -> Pit\n",
+ " 15. GoalTest(Pit) -> False\n",
+ " 16. Expand(Pit) -> Pit-Bucharest\n",
+ " 17. Frontier(Pit-Bucharest, Fagar-Bucharest)\n",
+ " 18. Frontier.pop -> Pit-Bucharest\n",
+ " 19. GoalTest(Pit-Bucharest) -> True\n",
+ " 20. STOP"
+ ]
+ },
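+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A runnable sketch of this strategy (my own, again using the SimpleProblem interface sketched earlier): the frontier is a priority queue keyed by the total path cost g(n) from the start, and the goal test is applied when a node is popped, exactly as in the trace above.\n",
+ "\n",
+ "    import heapq, itertools\n",
+ "\n",
+ "    def uniform_cost_search(problem):\n",
+ "        counter = itertools.count()             # tie-breaker so the heap never compares states\n",
+ "        frontier = [(0, next(counter), problem.initial, [problem.initial])]\n",
+ "        explored = set()\n",
+ "        while frontier:\n",
+ "            cost, _, state, path = heapq.heappop(frontier)\n",
+ "            if problem.goal_test(state):\n",
+ "                return cost, path               # cheapest path and its cost\n",
+ "            if state in explored:\n",
+ "                continue\n",
+ "            explored.add(state)\n",
+ "            for action in problem.actions(state):\n",
+ "                child = problem.result(state, action)\n",
+ "                if child not in explored:\n",
+ "                    step = problem.step_cost(state, action, child)\n",
+ "                    heapq.heappush(frontier, (cost + step, next(counter), child, path + [child]))\n",
+ "        return None\n"
+ ]
+ },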
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Related proofs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- Prove optimality of uniform cost search (TBD)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Part II Problem Solving \n",
+ "#### 3 Solving Problems by Searching \n",
+ "The Chapter introduces solutions to environments that are deterministic, observable, static, and completely known. \n",
+ "Proof that uniform cost search is optimal (The algorithm will find the path to the goal state with the smallest path cost) \n",
+ "Algorithm (Frontier is Priority Queue with path cost as the priority – smaller path cost, higher priority: \n",
+ "Initialize: Frontier <- Initial State \n",
+ "Frontier.pop -> Ns = Node with smallest path cost in Frontier \n",
+ "GoalTest(Ns). If True, then stop else expand Ns(and mark as expanded) and place children in Frontier. \n",
+ "Repeat steps 2 & 3. \n",
+ "\n",
+ "Lemma 1: The path from the starting node to any unexpanded node in the graph must cross the Frontier. This is by the graph separation property. \n",
+ "\n",
+ "Definitions: A graph can be partitioned into three mutually exclusive sets. \n",
+ "\n",
+ "Expanded nodes: A node that is on any path from the initial state node to any frontier node. A node becomes expanded after two steps: 1) it has been added to the frontier and (2) its descendants have been added to the frontier at which point the node itself is marked as \"expanded\" and removed from the frontier set. \n",
+ "\n",
+ "Frontier nodes: A node which is currently in the frontier, but not its descendants. \n",
+ "\n",
+ "Unexpanded nodes: All other nodes in the graph. \n",
+ "\n",
+ "Proof: \n",
+ "\n",
+ "Base case. The start node is placed in the frontier during initialization, and thus every other node has to be outside the frontier that reaches the initial node. \n",
+ "\n",
+ "Inductive step: Assume a node is a frontier node. We expand all its descendants and make them frontier nodes and remove the original node from the frontier, marking it \"expanded.\" Note that there might be descendent nodes that are already frontier nodes. There are two possible paths to reach the original node from an unexpanded node. \n",
+ "\n",
+ "Path 1: Through a descendent node. Since each descendent node is on the frontier, this would entail crossing the frontier. \n",
+ "\n",
+ "Path 2: Through the parent(s) of the original node. However, since the original node was in the frontier, its parent must have been marked as expanded, meaning that all of its descendants had to be in the frontier thus preventing an unexpanded node from reaching the parent without crossing the frontier. This process can be repeated by induction until the initial node is reached, whereby definition it is already inside of the frontier and cannot be reached by an unexpanded node without crossing the frontier. \n",
+ "\n",
+ "Lemma 2: At each step, the unexpanded node with the smallest path cost will be selected from the Frontier for expansion. \n",
+ "\n",
+ "Base case: The start node is placed in the frontier. The path cost is zero (we assume all path costs to be non-zero positive numbers that require moving to a different node). Thus, the start node will be selected for expansion since the smallest possible path cost is zero. \n",
+ "\n",
+ "Inductive case: The unexpanded node with smallest path cost will be selected from the priority queue frontier for expansion. \n",
+ "Additionally, this path cost is optimal (there is no smaller path cost to this node). \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- Prove that the graph-search version of A-star is optimal if the heuristic function is consistent (TBD)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Informed search (heuristics)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "These approaches for searching the graph tend to produce faster results, but are dependent on information that may or may not be available at all times. At each step, an evaluation function is applied to each node in the frontier, and the one that has the optimal evaluation function value will be expanded."
+ ]
+ },
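+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A sketch of that idea (my own): the frontier is ordered by an evaluation function f. Here f takes the state and the path cost g accumulated so far, which is one possible convention; with f(state, g) = g + h(state) this behaves like A*, and with f(state, g) = h(state) it behaves like greedy best-first search.\n",
+ "\n",
+ "    import heapq, itertools\n",
+ "\n",
+ "    def best_first_search(problem, f):\n",
+ "        counter = itertools.count()             # tie-breaker for the heap\n",
+ "        frontier = [(f(problem.initial, 0), next(counter), problem.initial, 0)]\n",
+ "        explored = set()\n",
+ "        while frontier:\n",
+ "            _, _, state, g = heapq.heappop(frontier)\n",
+ "            if problem.goal_test(state):\n",
+ "                return state, g\n",
+ "            if state in explored:\n",
+ "                continue\n",
+ "            explored.add(state)\n",
+ "            for action in problem.actions(state):\n",
+ "                child = problem.result(state, action)\n",
+ "                g2 = g + problem.step_cost(state, action, child)\n",
+ "                if child not in explored:\n",
+ "                    heapq.heappush(frontier, (f(child, g2), next(counter), child, g2))\n",
+ "        return None\n"
+ ]
+ },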
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Exercises in Chapter 3: \n",
+ "3.1 Explain why problem formulation must follow goal formulation. The goal (state) specifies the structure of the answer being sought by the agent. For example, the goal might be \"have all the tiles in an 8-tile puzzle in the correct location starting from an arbitrary random arrangement.\" This provides the framework that specifies the state space (arrangements tiles in an 8-tile puzzle), and also implies restrictions on how this can be accomplished (namely following the mechanics of how 8-til puzzles work, one-tile can be moved at a time into a blank, or alternatively, a blank can be moved in one of four directions from center, one of three on edge, one of two in a corner position. \n",
+ "\n",
+ "3.2 Your goal is to navigate a robot out of a maze. The robot starts at the center facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move a certain distance forward, although it will stop before hitting a wall. \n",
+ "\n",
+ "A) How large is the state space: If we consider the state space to be the location of the robot on a discrete x-y grid consisting of N locations, then the state space will be 4*N, for each of the four directions the robot can be facing at each location. \n",
+ "\n",
+ "B) We change the problem so that the only place you can turn is at the intersection of two or more corridors, how big is the state space now?: Let M be the number of intersections of the type just mentioned. Then, we would have a state space of 4*M. Each state would have the form, , which would then yield the next possible state that is reachable in the search tree. \n",
+ "\n",
+ "C) From each point in the maze, we can move in any of the four directions until we reach a turning point, and this is the only action we need to do. Reformulate the problem now. Do we still need to keep track of the robot's orientation? A state could be defined like this: \n",
+ ", Direction, Evaluation Function. No, you don’t need to keep track of the robots orientation. \n",
+ "\n",
+ "D) List the simplifications to the real world. Simplifications: \n",
+ "1) Only four directions are possible \n",
+ "2) You can only turn at intersections \n",
+ "3) The knowledge of the robots location and heading are exact. \n",
+ "\n",
+ "3.3) Suppose two friends live in different cities on a map, such as the Romania one. On every turn we can simultaneously move each friend to a neighboring city on the map. The amount of time to move from one city to another is the road distance d(I,j), but the friend that arrives first must wait for the other to arrive at their city before the next step. We want the two friends to meet as quickly as possible: \n",
+ "\n",
+ "A) Write a detailed formulation of the search problem: \n",
+ "\n",
+ "Initial State: FriendA in their starting location, FriendB in their starting location \n",
+ "\n",
+ "Actions: FriendA & FriendB select next destination that minimizes evaluation function. This action list should also include not moving from the given location (think of the degenerate case of only two cities- In the event of a tie on the goal contour, friendA can arbitrarily be selected to do the travelling. \n",
+ "\n",
+ "Transition Model: (Describing the state changes as a consequence of the actions): New locations for FriendA and FriendB \n",
+ "\n",
+ "Goal Test Function: Are FriendA and FriendB at same map location? \n",
+ "\n",
+ "Path Cost Function: Distance to next town + distance traveled so far. \n",
+ "If we imagine the state to consist of pairs of cities (with the goal state being the same city), then we can precompute the straight line distance between the city pairs, and this would be an admissible heuristic (it does not over estimate the travel time). At each time step, we would expand the nodes and take the state with the smallest heuristic. \n",
+ "\n",
+ "B) Admissible heuristics?\n",
+ "\n",
+ "C) Are there completely connected maps that have no solution? \n",
+ "\n",
+ "One possible case is a map consisting of two nodes that are connected. If the search algorithm doesn't take this into account, then the friends could swap cities, and then no longer be able to either swap back (since the state has been visited) and cannot meet.\n",
+ "\n",
+ "CityA ------------ CityB\n",
+ "\n",
+ "D) Are there maps in which all solutions require one friend to visit the same city twice?\n",
+ "\n",
+ "Consider the following map:\n",
+ "\n",
+ "\n",
+ "CityA--5---CityE\n",
+ "| |\n",
+ "| |\n",
+ "10 15\n",
+ "| |\n",
+ "| CityD\n",
+ "CityB (3) /\n",
+ " /\n",
+ "(BC=20) 25\n",
+ "(AC=30) /\n",
+ "CityC----\n",
+ "\n",
+ "\n",
+ "FriendA starts in CityA\n",
+ "FriendB starts in CityC\n",
+ "This map would require FriendA to visit CityA twice if we used the straightline distance heuristic (which we assert is the same as the road distances shown on the graph).\n",
+ "\n",
+ "3.4) Show that the 8-tile puzzle states are divided into two disjoint sets. You can reach any state from another other state within a given set, but cannot go between sets.\n",
+ "\n",
+ "B12 \n",
+ "345\n",
+ "678\n",
+ "\n",
+ "1B2\n",
+ "345\n",
+ "678\n",
+ "\n",
+ "12B\n",
+ "345\n",
+ "678\n",
+ "\n",
+ "125\n",
+ "34B\n",
+ "678\n",
+ "\n",
+ "125\n",
+ "348\n",
+ "67B\n",
+ "\n",
+ "3.5). Consider the 8-queens problem with the efficient incremental implementation on page 72. Explain why the state space has at least cube root of n factorial states and estimate the largest n for which exhaustive search is feasible.\n",
+ "\n",
+ "A given state is defined as the positions of the n-queens in n separate columns.\n",
+ "\n",
+ "In a given column we have the following situation.\n",
+ "\n",
+ "From the first column:\n",
+ "\n",
+ "In each column, we reduce the potential state space by 3 squares in each remaining column to the right in the worst case. Thus, the next column, i, will have at least N_i-3 new nodes to add to the search tree (although, the exact nodes as they refer to board locations can depend on the selected position of the queens in earlier rounds).\n",
+ "\n",
+ "Therefore, the sequence of branching is:\n",
+ "\n",
+ "i=0: N\n",
+ "i=1: N-3(i)\n",
+ "i=2: N-3(i)\n",
+ ".\n",
+ ".\n",
+ ".\n",
+ "i=7: N-3(i)\n",
+ "\n",
+ "The total number of states then is\n",
+ "\n",
+ "Prod(i=0 to N-1) max{N-3(i),1}\n",
+ "For the case of 8-queens, this is (although after the first three terms, the rest are set to 1): \n",
+ "\n",
+ " N(N-3)(N-6) * (1)(1)(1) * (1)(1) \n",
+ " \n",
+ "<= N(N-1)(N-2) * (N-3)(N-4)(N-5) * (N-6)(N-7) = N!\n",
+ "\n",
+ " X * X * X = N!\n",
+ "\n",
+ "where X=cuberoot(N!)\n",
+ "\n",
+ "Is the cuberoot of N! <= N(N-3)(N-6) ?\n",
+ "\n",
+ "if the cuberoot of N! = N(N-3)(N-6), then N! = N(N-3)(N-6) * N(N-3)(N-6) * N(N-3)(N-6), which it does not. Is N(N-3)(N-6) greater or less than the cuberoot of N! ? \n",
+ "\n",
+ "\n",
+ "N(N-3)(N-6) * N(N-3)(N-6) * N(N-3)(N-6)\n",
+ "\n",
+ "N*N*N >= N (N-1)(N-2)\n",
+ "(N-3)(N-3)(N-3) >= (N-3)(N-4)(N-5)\n",
+ "(N-6)(N-6)(N-6) >= (N-6)(N-7)(1)\n",
+ "\n",
+ "Therefore, cuberoot of N! <= N(N-3)(N-6), which itself was a lower bound on the number of states that would need to be searched, therefore the proof is complete. The number of states that must be searched is at least cuberoot of N!. This proof depends on being able to split up the N! evenly into 3 products, the first of which only includes those terms up to the point where 3i < N, for integer i.\n",
+ "\n",
+ "\n",
+ "3.6)\n",
+ "a) Using only four colors, you have to color a planar map such that no two adjacent regions have the same color.\n",
+ "\n",
+ "i) States:\n",
+ "A planar map, each region represented by a node, and adjacent regions are connected via links. Each node has a color attribute.\n",
+ "Choose any region as the initial state, and place it in frontier.\n",
+ "\n",
+ "ii) Actions: \n",
+ "a) While frontier is not empty, pop next node.\n",
+ "b) Check each linked node of this new node, placing unexplored nodes into the frontier, obtain their colors (if they have been assigned) and determine which color(s) have not been assigned.\n",
+ "c) Choose a color from the unassigned list and assign it to the current node. Mark this node as explored and return to step (a). \n",
+ "\n",
+ "iii) Transition: An additional node has color assigned to it.\n",
+ "\n",
+ "iv) Goal test:\n",
+ "No further regions remain uncolored: (Frontier is empty)\n",
+ "\n",
+ "v) Search/Path cost: Search cost will depend on search algorithm. Path cost is not relevant as only the final state matters for correctness and completeness."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "NodeSetup:\n",
+ "Node1 = <__main__.GraphNode object at 0x0000000004BF8940>\n",
+ "Node2 = <__main__.GraphNode object at 0x0000000004BF8978>\n",
+ "Node3 = <__main__.GraphNode object at 0x0000000004BF89B0>\n",
+ "Node4 = <__main__.GraphNode object at 0x0000000004BF89E8>\n",
+ "Node5 = <__main__.GraphNode object at 0x0000000004BF8A20>\n",
+ "Node6 = <__main__.GraphNode object at 0x0000000004BF8A58>\n",
+ "Node1 links = {<__main__.GraphNode object at 0x0000000004BF89B0>: 1, <__main__.GraphNode object at 0x0000000004BF8978>: 1}\n",
+ "Node2 links = {<__main__.GraphNode object at 0x0000000004BF8A20>: 1, <__main__.GraphNode object at 0x0000000004BF89B0>: 1, <__main__.GraphNode object at 0x0000000004BF8940>: 1, <__main__.GraphNode object at 0x0000000004BF8A58>: 1}\n",
+ "Node3 links = {<__main__.GraphNode object at 0x0000000004BF8A20>: 1, <__main__.GraphNode object at 0x0000000004BF8940>: 1, <__main__.GraphNode object at 0x0000000004BF89E8>: 1, <__main__.GraphNode object at 0x0000000004BF8978>: 1}\n",
+ "Node4 links = {<__main__.GraphNode object at 0x0000000004BF89B0>: 1}\n",
+ "Node5 links = {<__main__.GraphNode object at 0x0000000004BF89B0>: 1, <__main__.GraphNode object at 0x0000000004BF8978>: 1}\n",
+ "Node6 links = {<__main__.GraphNode object at 0x0000000004BF8978>: 1}\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test implementation of 4-color map (Exercise 3.6 a)\n",
+ "\n",
+ "#\n",
+ "# create map\n",
+ "#\n",
+ "#\n",
+ "# Node1 ----- 1 ----- Node2 ---1--- Node6\n",
+ "# | 1--------/ |\n",
+ "# | / 1\n",
+ "# | / |\n",
+ "# 1 / Node5\n",
+ "# | | |\n",
+ "# | | =======1======= \n",
+ "# |/ |\n",
+ "# Node3 \n",
+ "# |\n",
+ "# |\n",
+ "# |\n",
+ "# 1\n",
+ "# |\n",
+ "# |\n",
+ "# |\n",
+ "# Node4\n",
+ "#\n",
+ "#\n",
+ "\n",
+ "class GraphNode():\n",
+ " def __init__(self, initName=None):\n",
+ " self.links=dict() # (name of link:step cost)\n",
+ " self.parent=None # Is assigned during BFS\n",
+ " self.goal=False # True if goal state\n",
+ " self.pathCost=0\n",
+ " self.stepCost=0\n",
+ " self.frontier=False # True if node has been added to frontier\n",
+ " self.name=initName\n",
+ " self.color=None\n",
+ " \n",
+ "Node1=GraphNode(\"Node1\")\n",
+ "Node2=GraphNode(\"Node2\")\n",
+ "Node3=GraphNode(\"Node3\")\n",
+ "Node4=GraphNode(\"Node4\")\n",
+ "Node5=GraphNode(\"Node5\")\n",
+ "Node6=GraphNode(\"Node6\")\n",
+ "\n",
+ "Node1.links[Node2]=1\n",
+ "Node1.links[Node3]=1\n",
+ "\n",
+ "Node2.links[Node1]=1\n",
+ "Node2.links[Node3]=1\n",
+ "Node2.links[Node5]=1\n",
+ "Node2.links[Node6]=1\n",
+ "\n",
+ "Node3.links[Node1]=1\n",
+ "Node3.links[Node2]=1\n",
+ "Node3.links[Node4]=1\n",
+ "Node3.links[Node5]=1\n",
+ "\n",
+ "Node4.links[Node3]=1\n",
+ "\n",
+ "Node5.links[Node2]=1\n",
+ "Node5.links[Node3]=1\n",
+ "\n",
+ "Node6.links[Node2]=1\n",
+ "\n",
+ "print(\"NodeSetup:\")\n",
+ "print(\"Node1 = \", Node1)\n",
+ "print(\"Node2 = \", Node2)\n",
+ "print(\"Node3 = \", Node3)\n",
+ "print(\"Node4 = \", Node4)\n",
+ "print(\"Node5 = \", Node5)\n",
+ "print(\"Node6 = \", Node6)\n",
+ "\n",
+ "print(\"Node1 links = \", Node1.links)\n",
+ "print(\"Node2 links = \", Node2.links)\n",
+ "print(\"Node3 links = \", Node3.links)\n",
+ "print(\"Node4 links = \", Node4.links)\n",
+ "print(\"Node5 links = \", Node5.links)\n",
+ "print(\"Node6 links = \", Node6.links)\n",
+ "\n",
+ "Node1.parent=Node1 # node1 is the initial node - pointing to itself as parent is the flag."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Run the BFS process\n",
+ "#\n",
+ "\n",
+ "import queue\n",
+ "\n",
+ "###exploredNodes = dict()\n",
+ "frontierNodes = queue.Queue()\n",
+ "goalNodeFound = False\n",
+ "\n",
+ "#\n",
+ "# Initialize the frontier queue\n",
+ "#\n",
+ "\n",
+ "frontierNodes.put(Node1)\n",
+ "Node1.frontier=True\n",
+ "\n",
+ "# Main loop\n",
+ "\n",
+ "while not frontierNodes.empty():\n",
+ " print(\"Exploring frontier nodes: \")\n",
+ " currentNode = frontierNodes.get()\n",
+ " if currentNode.goal == True:\n",
+ " goalNodeFound=True\n",
+ " break\n",
+ " else: \n",
+ " print(\"Expanding current node: \", currentNode.name)\n",
+ " for childNode,dummy in currentNode.links.items(): #Any link is a potential \"child\" \n",
+ " if (childNode.frontier==True):\n",
+ " print(\"Child Node has been seen before: \", childNode.name)\n",
+ " continue\n",
+ " else:\n",
+ " print(\"Child Node is being added to frontier: \", childNode.name)\n",
+ " frontierNodes.put(childNode)\n",
+ " childNode.frontier=True\n",
+ " childNode.parent=currentNode\n",
+ " childNode.stepCost=childNode.links[currentNode] # provide step cost\n",
+ " childNode.pathCost=currentNode.pathCost+childNode.stepCost\n",
+ " \n",
+ " print(\"End of frontier loop:\")\n",
+ " print(\"-------\")\n",
+ " print()\n",
+ " \n",
+ "if goalNodeFound != True: # goal node was not set\n",
+ " print (\"Goal node not found.\")\n",
+ "else:\n",
+ " print (\"Goal node found.\")\n",
+ " print (\"Current Node = \", currentNode.name)\n",
+ " print (\"Current Node Parent = \", currentNode.parent.name)\n",
+ " print (\"Current Node Step Cost = \", currentNode.stepCost)\n",
+ " print (\"Current Node Path Cost = \", currentNode.pathCost)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Report out the solution path working backwords from the goal node to the\n",
+ "# initial node (which is flagged by having the parent=node)\n",
+ "#\n",
+ "\n",
+ "pathSequence = queue.LifoQueue()\n",
+ "pathSequence.put(currentNode)\n",
+ "\n",
+ "while currentNode != currentNode.parent:\n",
+ " pathSequence.put(currentNode.parent)\n",
+ " currentNode=currentNode.parent\n",
+ "\n",
+ "# Add the final node, which is the initial in this case\n",
+ "# The initial node was specially marked to point to itself as parent\n",
+ "\n",
+ "while not pathSequence.empty():\n",
+ " print(\"Path sequence = \", pathSequence.get().name)"
+ ]
+ },
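+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The cells above only traverse the map; the color-assignment step from 3.6 a) ii)-iii) is never applied. The next cell is a minimal sketch of that greedy step, reusing the GraphNode objects Node1-Node6 defined earlier: visit the nodes in BFS order and give each node the first of the four colors not already used by a linked neighbour. A greedy pass like this is not guaranteed to stay within four colors on every planar map, but it does on this one, and it illustrates the formulation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# Greedy coloring sketch for Exercise 3.6 a): BFS over the map, assigning each\n",
+ "# node the first palette color not already used by one of its linked neighbours.\n",
+ "# Assumes GraphNode and Node1..Node6 from the earlier cell have been run.\n",
+ "\n",
+ "import queue\n",
+ "\n",
+ "palette = ['red', 'green', 'blue', 'yellow']\n",
+ "allNodes = (Node1, Node2, Node3, Node4, Node5, Node6)\n",
+ "\n",
+ "for n in allNodes:\n",
+ "    n.color = None        # reset any previous assignment\n",
+ "    n.frontier = False    # reuse the frontier flag for this pass\n",
+ "\n",
+ "colorFrontier = queue.Queue()\n",
+ "colorFrontier.put(Node1)\n",
+ "Node1.frontier = True\n",
+ "\n",
+ "while not colorFrontier.empty():\n",
+ "    node = colorFrontier.get()\n",
+ "    usedColors = set(nbr.color for nbr in node.links if nbr.color is not None)\n",
+ "    freeColors = [c for c in palette if c not in usedColors]\n",
+ "    node.color = freeColors[0] if freeColors else None   # None would mean four colors did not suffice\n",
+ "    for nbr in node.links:\n",
+ "        if not nbr.frontier:\n",
+ "            nbr.frontier = True\n",
+ "            colorFrontier.put(nbr)\n",
+ "\n",
+ "for n in allNodes:\n",
+ "    print(n.name, '->', n.color)"
+ ]
+ },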
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4 Beyond Classical Search \n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5 Adversarial Search\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 6 Constraint Satisfaction Problems\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": true
+ },
+ "source": [
+ "Part III Knowledge and Reasoning \n",
+ " 7 Logical Agents \n",
+ " \n",
+ " 8 First-Order Logic \n",
+ " \n",
+ " 9 Inference in First-Order Logic \n",
+ " \n",
+ " 10 Classical Planning \n",
+ " \n",
+ " 11 Planning and Acting in the Real World \n",
+ " \n",
+ " 12 Knowledge Representation \n",
+ " \n",
+ "Part IV Uncertain Knowledge and Reasoning \n",
+ " 13 Quantifying Uncertainty \n",
+ " \n",
+ " 14 Probabilistic Reasoning \n",
+ " \n",
+ " 15 Probabilistic Reasoning over Time \n",
+ " \n",
+ " 16 Making Simple Decisions \n",
+ " \n",
+ " 17 Making Complex Decisions \n",
+ " \n",
+ "Part V Learning \n",
+ " 18 Learning from Examples \n",
+ " \n",
+ " 19 Knowledge in Learning \n",
+ " \n",
+ " 20 Learning Probabilistic Models \n",
+ " \n",
+ " 21 Reinforcement Learning \n",
+ " \n",
+ "Part VII Communicating, Perceiving, and Acting \n",
+ " 22 Natural Language Processing \n",
+ " \n",
+ " 23 Natural Language for Communication \n",
+ " \n",
+ " 24 Perception \n",
+ " \n",
+ " 25 Robotics \n",
+ " \n",
+ "Part VIII Conclusions \n",
+ " 26 Philosophical Foundations \n",
+ " \n",
+ " 27 AI: The Present and Future "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A Mathematical Background [pdf] \n",
+ " B Notes on Languages and Algorithms [pdf] \n",
+ " Bibliography [pdf and histograms] \n",
+ " Index [html or pdf] "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### OVERALL NOTES: \n",
+ "Performance Measures: \n",
+ "We always consider first the performance measure that is evaluated on any given sequence of environment states (not states of the agent). This is critical. \n",
+ "As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave. \n",
+ "Rational agents maximize expected performance measures. \n",
+ "The definition of a rational agent is: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. \n",
+ "Perfect agents maximize actual performance measures. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "EXAMPLE: PEAS framework – Create this first, before designing the agent. \n",
+ "Agent Type \n",
+ "Performance Measure (EXTERNAL TO AGENT) \n",
+ "Environment (EXTERNAL TO AGENT) \n",
+ "Actuators \n",
+ "(AVAILABLE TO AGENT) \n",
+ "Sensors \n",
+ "(AVAILABLE TO AGENT) \n",
+ "Taxi Driver \n",
+ "Safe, fast, legal, comfortable trip, maximize profits \n",
+ "Roads, other traffic, pedestrians, customers \n",
+ "Steering, accelerator, brake, signal, horn, display \n",
+ "Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Types of Rational Agents: \n",
+ "Simple Reflex \n",
+ " \n",
+ "Model Based \n",
+ " \n",
+ "Goal Based \n",
+ "Problem-solving agent (Chap 3): Using atomic representations – states are black boxes. \n",
+ " \n",
+ "Planning agents (Chap 7): Using factored or structured state representations. \n",
+ " \n",
+ "Utility Based \n",
+ " \n",
+ "Learning Agents \n",
+ "Types of task environments: \n",
+ "1) Observability: \n",
+ "Fully observable \n",
+ "Partially observable \n",
+ "Totally unobservable \n",
+ "2) Agents: \n",
+ "Single \n",
+ "Multiple \n",
+ "3) Determinism: \n",
+ "Deterministic \n",
+ "Stochastic \n",
+ "4) Episode: \n",
+ "Episodic \n",
+ "Sequential \n",
+ "5) Dynamic: \n",
+ "Static \n",
+ "Semi-Dynamic \n",
+ "Dynamic \n",
+ "6) Discreteness: \n",
+ "Discrete \n",
+ "Continuous \n",
+ " \n",
+ "Types of states of the environment. \n",
+ "1) Atomic – each state of the environment is a discrete, indivisible state. \n",
+ "Search & Game Playing (Chapters 3-5) \n",
+ "Hidden Markov Models (Chapter 15) \n",
+ "Markov Decision Process (Chapter 17) \n",
+ "2) Factored -- each state of the environment can be described by internal values such as variables, booleans. \n",
+ "Constraint satisfaction (Chapter 6) \n",
+ "Propositional logic (Chapter 7) \n",
+ "Planning (Chapter 10-11) \n",
+ "Bayesian networks (13-16) \n",
+ "Machine Learning (18,20,21) \n",
+ "3) Structured -- Each state can consists of an internal structure with objects that have relationships to each other. \n",
+ "Relational Databases and first order logic (Chapter 8,9, 12) \n",
+ "First order probability models (Chapter 14) \n",
+ "Knowledge-based learning (Chapter 19) \n",
+ "Natural language processing (Chapter 22, 23) "
+ ]
+ }
+ ],
+ "metadata": {
+ "anaconda-cloud": {},
+ "celltoolbar": "Raw Cell Format",
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.5.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/agents.ipynb b/agents.ipynb
index db42f8d33..e2e852755 100644
--- a/agents.ipynb
+++ b/agents.ipynb
@@ -135,9 +135,21 @@
"cell_type": "code",
"execution_count": 4,
"metadata": {
- "collapsed": true
+ "collapsed": false
},
- "outputs": [],
+ "outputs": [
+ {
+ "ename": "ImportError",
+ "evalue": "No module named 'ipythonblocks'",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)",
+ "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[1;32mfrom\u001b[0m \u001b[0mipythonblocks\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mBlockGrid\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0magents\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[1;33m*\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m color = {\"Breeze\": (225, 225, 225),\n\u001b[1;32m 5\u001b[0m \u001b[1;34m\"Pit\"\u001b[0m\u001b[1;33m:\u001b[0m \u001b[1;33m(\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
+ "\u001b[0;31mImportError\u001b[0m: No module named 'ipythonblocks'"
+ ]
+ }
+ ],
"source": [
"from ipythonblocks import BlockGrid\n",
"from agents import *\n",
@@ -177,32 +189,11 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": null,
"metadata": {
"collapsed": false
},
- "outputs": [
- {
- "data": {
- "text/html": [
- ""
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "[[], [None], [], [], [None]]\n",
- "2\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"step()"
]
@@ -340,8 +331,9 @@
}
],
"metadata": {
+ "anaconda-cloud": {},
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [default]",
"language": "python",
"name": "python3"
},
@@ -355,7 +347,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.5.1"
+ "version": "3.5.2"
}
},
"nbformat": 4,