|
| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# ACTIVE REINFORCEMENT LEARNING\n", |
| 8 | + "\n", |
| 9 | +    "This notebook focuses on active reinforcement learning algorithms. For a general introduction to reinforcement learning and to passive algorithms, please refer to the **[Passive Reinforcement Learning](./Passive%20Reinforcement%20Learning.ipynb)** notebook.\n", |
| 10 | + "\n", |
| 11 | +    "Unlike in Passive Reinforcement Learning, in Active Reinforcement Learning the agent is not bound to a fixed policy $\\pi$ and must choose its own actions. In other words, the agent needs to learn an optimal policy. The fundamental trade-off the agent faces is that of exploration vs. exploitation.\n", |
| 12 | + "\n", |
| 13 | + "## QLearning Agent\n", |
| 14 | + "\n", |
| 15 | + "The QLearningAgent class in the rl module implements the Agent Program described in **Fig 21.8** of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not require a transition model and hence is a model-free method. Let us look into the source before we see some usage examples." |
| 16 | + ] |
| 17 | + }, |
| 18 | + { |
| 19 | + "cell_type": "code", |
| 20 | + "execution_count": null, |
| 21 | + "metadata": {}, |
| 22 | + "outputs": [], |
| 23 | + "source": [ |
| 24 | + "%psource QLearningAgent" |
| 25 | + ] |
| 26 | + }, |
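|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "The core of the agent is the temporal-difference update of the action-value table:\n", |
|  | +    "\n", |
|  | +    "$$Q(s, a) \\leftarrow Q(s, a) + \\alpha \\, \\big(r + \\gamma \\max_{a'} Q(s', a') - Q(s, a)\\big)$$\n", |
|  | +    "\n", |
|  | +    "Below is a minimal, self-contained sketch of this update, not the class's actual code; the dictionary `Q`, the helper `q_update` and the toy states used here are purely illustrative." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "from collections import defaultdict\n", |
|  | +    "\n", |
|  | +    "def q_update(Q, s, a, r, s_prime, next_actions, alpha=0.1, gamma=0.9):\n", |
|  | +    "    # One temporal-difference Q-learning update (illustrative sketch).\n", |
|  | +    "    # Q            : dict mapping (state, action) -> estimated value\n", |
|  | +    "    # s, a         : previous state and the action taken there\n", |
|  | +    "    # r, s_prime   : reward received and the state reached\n", |
|  | +    "    # next_actions : actions available in s_prime (empty if terminal)\n", |
|  | +    "    best_next = max((Q[(s_prime, a1)] for a1 in next_actions), default=0.0)\n", |
|  | +    "    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])\n", |
|  | +    "\n", |
|  | +    "# Tiny usage example on a made-up two-state problem:\n", |
|  | +    "Q = defaultdict(float)\n", |
|  | +    "q_update(Q, s=0, a='right', r=1.0, s_prime=1, next_actions=['left', 'right'])\n", |
|  | +    "Q[(0, 'right')]  # moved from 0.0 towards the one-step target: 0.1" |
|  | +   ] |
|  | +  }, |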
| 27 | + { |
| 28 | + "cell_type": "markdown", |
| 29 | + "metadata": {}, |
| 30 | + "source": [ |
| 31 | +    "The Agent Program can be obtained by creating an instance of the class and passing the appropriate parameters. Because of its `__call__` method, the object behaves like a callable and returns an appropriate action, as most Agent Programs do. To instantiate the object we need an `mdp` object, just as we did for `PassiveTDAgent`.\n", |
| 32 | + "\n", |
| 33 | +    "Let us use the same `GridMDP` object we used in the **Passive Reinforcement Learning** notebook. **Figure 17.1 (sequential_decision_environment)** is similar to **Figure 21.1** but uses a discount factor **gamma = 0.9**. The agent also uses an exploration function **f**, which returns a fixed **Rplus** until the agent has tried a given state-action pair **Ne** number of times. The method **actions_in_state** returns the actions possible in a given state; it is useful when applying max and argmax operations." |
| 34 | + ] |
| 35 | + }, |
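|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "To make the exploration idea concrete, here is a small sketch of what an exploration function of this kind looks like. It only illustrates the idea (the actual method lives inside `QLearningAgent`): `u` is the current utility estimate and `n` is the number of times the state-action pair has been tried." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "def exploration_function(u, n, Ne=5, Rplus=2):\n", |
|  | +    "    # Be optimistic: pretend rarely-tried actions are worth Rplus until\n", |
|  | +    "    # they have been tried at least Ne times, then trust the estimate u.\n", |
|  | +    "    if n < Ne:\n", |
|  | +    "        return Rplus\n", |
|  | +    "    return u\n", |
|  | +    "\n", |
|  | +    "exploration_function(0.1, 2), exploration_function(0.1, 10)  # -> (2, 0.1)" |
|  | +   ] |
|  | +  }, |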
| 36 | + { |
| 37 | + "cell_type": "markdown", |
| 38 | + "metadata": {}, |
| 39 | + "source": [ |
| 40 | +    "Let us create our object now. We use the **same alpha** as given in the footnote of the book on **page 769**: $\\alpha(n)=60/(59+n)$. We also use **Rplus = 2** and **Ne = 5** as defined in the book. The pseudocode can be found in **Fig 21.8** of the book." |
| 41 | + ] |
| 42 | + }, |
| 43 | + { |
| 44 | + "cell_type": "code", |
| 45 | + "execution_count": 12, |
| 46 | + "metadata": {}, |
| 47 | + "outputs": [], |
| 48 | + "source": [ |
| 49 | + "import os, sys\n", |
| 50 | + "sys.path = [os.path.abspath(\"../../\")] + sys.path\n", |
| 51 | + "from rl4e import *\n", |
| 52 | +    "from mdp import sequential_decision_environment, value_iteration\n", |
|  | +    "from collections import defaultdict  # used below to collect utility estimates" |
| 53 | + ] |
| 54 | + }, |
| 55 | + { |
| 56 | + "cell_type": "code", |
| 57 | + "execution_count": 6, |
| 58 | + "metadata": {}, |
| 59 | + "outputs": [], |
| 60 | + "source": [ |
| 61 | + "q_agent = QLearningAgent(sequential_decision_environment, Ne=5, Rplus=2, \n", |
| 62 | + " alpha=lambda n: 60./(59+n))" |
| 63 | + ] |
| 64 | + }, |
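|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "As a quick sanity check (not part of the algorithm itself), we can look at how this learning-rate schedule decays with the number of visits $n$:" |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "alpha = lambda n: 60. / (59 + n)\n", |
|  | +    "[round(alpha(n), 3) for n in (1, 10, 100, 1000)]  # roughly [1.0, 0.87, 0.377, 0.057]" |
|  | +   ] |
|  | +  }, |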
| 65 | + { |
| 66 | + "cell_type": "markdown", |
| 67 | + "metadata": {}, |
| 68 | + "source": [ |
| 69 | +    "Now, to try out the q_agent, we make use of the **run_single_trial** function from the `rl4e` module (which was also used in the Passive Reinforcement Learning notebook). Let us run **200** trials." |
| 70 | + ] |
| 71 | + }, |
| 72 | + { |
| 73 | + "cell_type": "code", |
| 74 | + "execution_count": 7, |
| 75 | + "metadata": {}, |
| 76 | + "outputs": [], |
| 77 | + "source": [ |
| 78 | + "for i in range(200):\n", |
| 79 | + " run_single_trial(q_agent,sequential_decision_environment)" |
| 80 | + ] |
| 81 | + }, |
| 82 | + { |
| 83 | + "cell_type": "markdown", |
| 84 | + "metadata": {}, |
| 85 | + "source": [ |
| 86 | +    "Now let us see the Q-values. The keys are state-action pairs, where each action is encoded as a direction vector:\n", |
| 87 | + "\n", |
| 88 | + "north = (0, 1) \n", |
| 89 | + "south = (0,-1) \n", |
| 90 | + "west = (-1, 0) \n", |
| 91 | + "east = (1, 0)" |
| 92 | + ] |
| 93 | + }, |
| 94 | + { |
| 95 | + "cell_type": "code", |
| 96 | + "execution_count": null, |
| 97 | + "metadata": {}, |
| 98 | + "outputs": [], |
| 99 | + "source": [ |
| 100 | + "q_agent.Q" |
| 101 | + ] |
| 102 | + }, |
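|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "For example, to look up the learned value of moving north from the bottom-left state `(0, 0)`, we index the table with a `(state, action)` pair (the exact number will differ from run to run):" |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "# Q is keyed by (state, action); here: state (0, 0), action north = (0, 1)\n", |
|  | +    "q_agent.Q[((0, 0), (0, 1))]" |
|  | +   ] |
|  | +  }, |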
| 103 | + { |
| 104 | + "cell_type": "markdown", |
| 105 | + "metadata": {}, |
| 106 | + "source": [ |
| 107 | + "The Utility U of each state is related to Q by the following equation.\n", |
| 108 | + "\n", |
| 109 | +    "$$U(s) = \\max_{a} Q(s, a)$$\n", |
| 110 | + "\n", |
| 111 | + "Let us convert the Q Values above into U estimates.\n", |
| 112 | + "\n" |
| 113 | + ] |
| 114 | + }, |
| 115 | + { |
| 116 | + "cell_type": "code", |
| 117 | + "execution_count": 9, |
| 118 | + "metadata": {}, |
| 119 | + "outputs": [], |
| 120 | + "source": [ |
| 121 | +    "U = defaultdict(lambda: -1000.)  # start with a very large negative value so any observed Q-value replaces it\n", |
| 122 | + "for state_action, value in q_agent.Q.items():\n", |
| 123 | + " state, action = state_action\n", |
| 124 | + " if U[state] < value:\n", |
| 125 | + " U[state] = value" |
| 126 | + ] |
| 127 | + }, |
| 128 | + { |
| 129 | + "cell_type": "markdown", |
| 130 | + "metadata": {}, |
| 131 | + "source": [ |
| 132 | + "Now we can output the estimated utility values at each state:" |
| 133 | + ] |
| 134 | + }, |
| 135 | + { |
| 136 | + "cell_type": "code", |
| 137 | + "execution_count": 10, |
| 138 | + "metadata": {}, |
| 139 | + "outputs": [ |
| 140 | + { |
| 141 | + "data": { |
| 142 | + "text/plain": [ |
| 143 | + "defaultdict(<function __main__.<lambda>()>,\n", |
| 144 | + " {(0, 0): -0.0036556430391564178,\n", |
| 145 | + " (1, 0): -0.04862675963288682,\n", |
| 146 | + " (2, 0): 0.03384490363100474,\n", |
| 147 | + " (3, 0): -0.16618771401113092,\n", |
| 148 | + " (3, 1): -0.6015323978614368,\n", |
| 149 | + " (0, 1): 0.09161077177913537,\n", |
| 150 | + " (0, 2): 0.1834607974581678,\n", |
| 151 | + " (1, 2): 0.26393277962204903,\n", |
| 152 | + " (2, 2): 0.32369726495311274,\n", |
| 153 | + " (3, 2): 0.38898341569576245,\n", |
| 154 | + " (2, 1): -0.044858154562400485})" |
| 155 | + ] |
| 156 | + }, |
| 157 | + "execution_count": 10, |
| 158 | + "metadata": {}, |
| 159 | + "output_type": "execute_result" |
| 160 | + } |
| 161 | + ], |
| 162 | + "source": [ |
| 163 | + "U" |
| 164 | + ] |
| 165 | + }, |
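|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "Since we already have the Q-table, we can also read off a greedy policy by taking the argmax over actions in each state. The snippet below is just for illustration; the `policy` dictionary and the direction names are purely for display." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "names = {(0, 1): 'north', (0, -1): 'south', (-1, 0): 'west', (1, 0): 'east'}\n", |
|  | +    "\n", |
|  | +    "# For each state, keep the action whose Q-value is highest.\n", |
|  | +    "policy = {}\n", |
|  | +    "best = defaultdict(lambda: -1000.)\n", |
|  | +    "for (state, action), value in q_agent.Q.items():\n", |
|  | +    "    if action is not None and value > best[state]:\n", |
|  | +    "        best[state] = value\n", |
|  | +    "        policy[state] = names.get(action, action)\n", |
|  | +    "policy" |
|  | +   ] |
|  | +  }, |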
| 166 | + { |
| 167 | + "cell_type": "markdown", |
| 168 | + "metadata": {}, |
| 169 | + "source": [ |
| 170 | +    "Finally, let us compare these estimates to the utilities computed by `value_iteration`." |
| 171 | + ] |
| 172 | + }, |
| 173 | + { |
| 174 | + "cell_type": "code", |
| 175 | + "execution_count": 13, |
| 176 | + "metadata": {}, |
| 177 | + "outputs": [ |
| 178 | + { |
| 179 | + "name": "stdout", |
| 180 | + "output_type": "stream", |
| 181 | + "text": [ |
| 182 | + "{(0, 1): 0.3984432178350045, (1, 2): 0.649585681261095, (3, 2): 1.0, (0, 0): 0.2962883154554812, (3, 0): 0.12987274656746342, (3, 1): -1.0, (2, 1): 0.48644001739269643, (2, 0): 0.3447542300124158, (2, 2): 0.7953620878466678, (1, 0): 0.25386699846479516, (0, 2): 0.5093943765842497}\n" |
| 183 | + ] |
| 184 | + } |
| 185 | + ], |
| 186 | + "source": [ |
| 187 | + "print(value_iteration(sequential_decision_environment))" |
| 188 | + ] |
| 189 | +  }, |
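|  | +  { |
|  | +   "cell_type": "markdown", |
|  | +   "metadata": {}, |
|  | +   "source": [ |
|  | +    "To make the comparison concrete, we can put the TD estimates and the value-iteration utilities side by side for each state. This is a small illustrative snippet; `vi` is just a local name for the value-iteration result, and the gap shrinks as the agent is trained for more trials." |
|  | +   ] |
|  | +  }, |
|  | +  { |
|  | +   "cell_type": "code", |
|  | +   "execution_count": null, |
|  | +   "metadata": {}, |
|  | +   "outputs": [], |
|  | +   "source": [ |
|  | +    "vi = value_iteration(sequential_decision_environment)\n", |
|  | +    "\n", |
|  | +    "# Per-state comparison of the learned estimates and the value-iteration utilities.\n", |
|  | +    "for state in sorted(vi):\n", |
|  | +    "    print(state, ' Q-learning: %+.3f' % U[state], ' value iteration: %+.3f' % vi[state])" |
|  | +   ] |
|  | +  } |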
| 190 | + ], |
| 191 | + "metadata": { |
| 192 | + "kernelspec": { |
| 193 | + "display_name": "Python 3", |
| 194 | + "language": "python", |
| 195 | + "name": "python3" |
| 196 | + }, |
| 197 | + "language_info": { |
| 198 | + "codemirror_mode": { |
| 199 | + "name": "ipython", |
| 200 | + "version": 3 |
| 201 | + }, |
| 202 | + "file_extension": ".py", |
| 203 | + "mimetype": "text/x-python", |
| 204 | + "name": "python", |
| 205 | + "nbconvert_exporter": "python", |
| 206 | + "pygments_lexer": "ipython3", |
| 207 | + "version": "3.7.2" |
| 208 | + } |
| 209 | + }, |
| 210 | + "nbformat": 4, |
| 211 | + "nbformat_minor": 2 |
| 212 | +} |