|
929 | 929 | "source": [
|
930 | 930 | "elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()"
|
931 | 931 | ]
|
| 932 | + }, |
| 933 | + { |
| 934 | + "cell_type": "markdown", |
| 935 | + "metadata": {}, |
| 936 | + "source": [ |
| 937 | + "## Approximate Inference in Bayesian Networks\n", |
| 938 | + "\n", |
| 939 | + "Exact inference fails to scale for very large and complex Bayesian Networks. This section covers implementation of randomized sampling algorithms, also called Monte Carlo algorithms." |
| 940 | + ] |
| 941 | + }, |
| 942 | + { |
| 943 | + "cell_type": "code", |
| 944 | + "execution_count": null, |
| 945 | + "metadata": { |
| 946 | + "collapsed": false |
| 947 | + }, |
| 948 | + "outputs": [], |
| 949 | + "source": [ |
| 950 | + "%psource BayesNode.sample" |
| 951 | + ] |
| 952 | + }, |
| 953 | + { |
| 954 | + "cell_type": "markdown", |
| 955 | + "metadata": {}, |
| 956 | + "source": [ |
| 957 | + "Before we consider the different algorithms in this section let us look at the **BayesNode.sample** method. It samples from the distribution for this variable conditioned on event's values for parent_variables. That is, return True/False at random according to with the conditional probability given the parents. The **probability** function is a simple helper from **utils** module which returns True with the probability passed to it.\n", |
| 958 | + "\n", |
| 959 | + "### Prior Sampling\n", |
| 960 | + "\n", |
| 961 | + "The idea of Prior Sampling is to sample from the Bayesian Network in a topological order. We start at the top of the network and sample as per **P(X<sub>i</sub> | parents(X<sub>i</sub>)** i.e. the probability distribution from which the value is sampled is conditioned on the values already assigned to the variable's parents. This can be thought of as a simulation." |
| 962 | + ] |
| 963 | + }, |
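| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "Before moving on to prior sampling, the next cell is a quick, illustrative check of the **probability** helper described above (assuming it can be imported from the **utils** module): the fraction of **True** results over many trials should be close to the probability we pass in." |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": { |
| | + "collapsed": false |
| | + }, |
| | + "outputs": [], |
| | + "source": [ |
| | + "from utils import probability  # helper that returns True with the given probability\n", |
| | + "\n", |
| | + "# The fraction of True results over many trials should be roughly 0.75\n", |
| | + "sum(probability(0.75) for _ in range(10000)) / 10000" |
| | + ] |
| | + }, |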
| 964 | + { |
| 965 | + "cell_type": "code", |
| 966 | + "execution_count": null, |
| 967 | + "metadata": { |
| 968 | + "collapsed": true |
| 969 | + }, |
| 970 | + "outputs": [], |
| 971 | + "source": [ |
| 972 | + "%psource prior_sample" |
| 973 | + ] |
| 974 | + }, |
| 975 | + { |
| 976 | + "cell_type": "markdown", |
| 977 | + "metadata": {}, |
| 978 | + "source": [ |
| 979 | + "The function **prior_sample** implements the algorithm described in **Figure 14.13** of the book. Nodes are sampled in the topological order. The old value of the event is passed as evidence for parent values. We will use the Bayesian Network in **Figure 14.12** to try out the **prior_sample**\n", |
| 980 | + "\n", |
| 981 | + "<img src=\"files/images/sprinklernet.jpg\" height=\"500\" width=\"500\">\n", |
| 982 | + "\n", |
| 983 | + "We store the samples on the observations. Let us find **P(Rain=True)**" |
| 984 | + ] |
| 985 | + }, |
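| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "Before generating the samples, here is a condensed, illustrative sketch of the loop inside **prior_sample** (the actual implementation is the one shown by **%psource** above). It assumes **bn.nodes** is already in topological order and that each node's **sample** method conditions on the parent values already placed in the event." |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": { |
| | + "collapsed": true |
| | + }, |
| | + "outputs": [], |
| | + "source": [ |
| | + "def prior_sample_sketch(bn):\n", |
| | + "    # Sketch only: sample every node in topological order,\n", |
| | + "    # conditioned on the values already assigned to its parents.\n", |
| | + "    event = {}\n", |
| | + "    for node in bn.nodes:\n", |
| | + "        event[node.variable] = node.sample(event)\n", |
| | + "    return event" |
| | + ] |
| | + }, |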
| 986 | + { |
| 987 | + "cell_type": "code", |
| 988 | + "execution_count": null, |
| 989 | + "metadata": { |
| 990 | + "collapsed": false |
| 991 | + }, |
| 992 | + "outputs": [], |
| 993 | + "source": [ |
| 994 | + "N = 1000\n", |
| 995 | + "all_observations = [prior_sample(sprinkler) for x in range(N)]" |
| 996 | + ] |
| 997 | + }, |
| 998 | + { |
| 999 | + "cell_type": "markdown", |
| 1000 | + "metadata": {}, |
| 1001 | + "source": [ |
| 1002 | + "Now we filter to get the observations where Rain = True" |
| 1003 | + ] |
| 1004 | + }, |
| 1005 | + { |
| 1006 | + "cell_type": "code", |
| 1007 | + "execution_count": null, |
| 1008 | + "metadata": { |
| 1009 | + "collapsed": true |
| 1010 | + }, |
| 1011 | + "outputs": [], |
| 1012 | + "source": [ |
| 1013 | + "rain_true = [observation for observation in all_observations if observation['Rain'] == True]" |
| 1014 | + ] |
| 1015 | + }, |
| 1016 | + { |
| 1017 | + "cell_type": "markdown", |
| 1018 | + "metadata": {}, |
| 1019 | + "source": [ |
| 1020 | + "Finally, we can find **P(Rain=True)**" |
| 1021 | + ] |
| 1022 | + }, |
| 1023 | + { |
| 1024 | + "cell_type": "code", |
| 1025 | + "execution_count": null, |
| 1026 | + "metadata": { |
| 1027 | + "collapsed": false |
| 1028 | + }, |
| 1029 | + "outputs": [], |
| 1030 | + "source": [ |
| 1031 | + "answer = len(rain_true) / N\n", |
| 1032 | + "print(answer)" |
| 1033 | + ] |
| 1034 | + }, |
| 1035 | + { |
| 1036 | + "cell_type": "markdown", |
| 1037 | + "metadata": {}, |
| 1038 | + "source": [ |
| 1039 | + "To evaluate a conditional distribution. We can use a two-step filtering process. We first separate out the variables that are consistent with the evidence. Then for each value of query variable, we can find probabilities. For example to find **P(Cloudy=True | Rain=True)**. We have already filtered out the values consistent with our evidence in **rain_true**. Now we apply a second filtering step on **rain_true** to find **P(Rain=True and Cloudy=True)**" |
| 1040 | + ] |
| 1041 | + }, |
| 1042 | + { |
| 1043 | + "cell_type": "code", |
| 1044 | + "execution_count": null, |
| 1045 | + "metadata": { |
| 1046 | + "collapsed": false |
| 1047 | + }, |
| 1048 | + "outputs": [], |
| 1049 | + "source": [ |
| 1050 | + "rain_and_cloudy = [observation for observation in rain_true if observation['Cloudy'] == True]\n", |
| 1051 | + "answer = len(rain_and_cloudy) / len(rain_true)\n", |
| 1052 | + "print(answer)" |
| 1053 | + ] |
| 1054 | + }, |
| 1055 | + { |
| 1056 | + "cell_type": "markdown", |
| 1057 | + "metadata": {}, |
| 1058 | + "source": [ |
| 1059 | + "### Rejection Sampling\n", |
| 1060 | + "\n", |
| 1061 | + "Rejection Sampling is based on an idea similar to what we did just now. First, it generates samples from the prior distribution specified by the network. Then, it rejects all those that do not match the evidence. The function **rejection_sampling** implements the algorithm described by **Figure 14.14**" |
| 1062 | + ] |
| 1063 | + }, |
| 1064 | + { |
| 1065 | + "cell_type": "code", |
| 1066 | + "execution_count": null, |
| 1067 | + "metadata": { |
| 1068 | + "collapsed": true |
| 1069 | + }, |
| 1070 | + "outputs": [], |
| 1071 | + "source": [ |
| 1072 | + "%psource rejection_sampling" |
| 1073 | + ] |
| 1074 | + }, |
| 1075 | + { |
| 1076 | + "cell_type": "markdown", |
| 1077 | + "metadata": {}, |
| 1078 | + "source": [ |
| 1079 | + "The function keeps counts of each of the possible values of the Query variable and increases the count when we see an observation consistent with the evidence. It takes in input parameters **X** - The Query Variable, **e** - evidence, **bn** - Bayes net and **N** - number of prior samples to generate.\n", |
| 1080 | + "\n", |
| 1081 | + "**consistent_with** is used to check consistency." |
| 1082 | + ] |
| 1083 | + }, |
| 1084 | + { |
| 1085 | + "cell_type": "code", |
| 1086 | + "execution_count": null, |
| 1087 | + "metadata": { |
| 1088 | + "collapsed": true |
| 1089 | + }, |
| 1090 | + "outputs": [], |
| 1091 | + "source": [ |
| 1092 | + "%psource consistent_with" |
| 1093 | + ] |
| 1094 | + }, |
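| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "Putting these pieces together, the next cell is a condensed, illustrative sketch of rejection sampling (the actual implementation is the one shown by **%psource rejection_sampling** above). It uses a plain **Counter** and returns an ordinary dictionary instead of the probability distribution object used by the real function, and it assumes at least one sample survives the filtering." |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": { |
| | + "collapsed": false |
| | + }, |
| | + "outputs": [], |
| | + "source": [ |
| | + "from collections import Counter\n", |
| | + "\n", |
| | + "def rejection_sampling_sketch(X, e, bn, N):\n", |
| | + "    # Sketch only: generate N prior samples, keep those that agree with the\n", |
| | + "    # evidence e, and tally the query variable X over the surviving samples.\n", |
| | + "    counts = Counter()\n", |
| | + "    for _ in range(N):\n", |
| | + "        sample = prior_sample(bn)\n", |
| | + "        if all(sample[var] == val for var, val in e.items()):\n", |
| | + "            counts[sample[X]] += 1\n", |
| | + "    total = sum(counts.values())  # assumed to be nonzero\n", |
| | + "    return {value: count / total for value, count in counts.items()}\n", |
| | + "\n", |
| | + "rejection_sampling_sketch('Cloudy', dict(Rain=True), sprinkler, 1000)" |
| | + ] |
| | + }, |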
| 1095 | + { |
| 1096 | + "cell_type": "markdown", |
| 1097 | + "metadata": {}, |
| 1098 | + "source": [ |
| 1099 | + "To answer **P(Cloudy=True | Rain=True)**" |
| 1100 | + ] |
| 1101 | + }, |
| 1102 | + { |
| 1103 | + "cell_type": "code", |
| 1104 | + "execution_count": null, |
| 1105 | + "metadata": { |
| 1106 | + "collapsed": false |
| 1107 | + }, |
| 1108 | + "outputs": [], |
| 1109 | + "source": [ |
| 1110 | + "p = rejection_sampling('Cloudy', dict(Rain=True), sprinkler, 1000)\n", |
| 1111 | + "p[True]" |
| 1112 | + ] |
| 1113 | + }, |
| 1114 | + { |
| 1115 | + "cell_type": "markdown", |
| 1116 | + "metadata": {}, |
| 1117 | + "source": [ |
| 1118 | + "### Likelihood Weighting\n", |
| 1119 | + "\n", |
| 1120 | + "Rejection sampling tends to reject a lot of samples if our evidence consists of a large number of variables. Likelihood Weighting solves this by fixing the evidence (i.e. not sampling it) and then using weights to make sure that our overall sampling is still consistent.\n", |
| 1121 | + "\n", |
| 1122 | + "The pseudocode in **Figure 14.15** is implemented as **likelihood_weighting** and **weighted_sample**." |
| 1123 | + ] |
| 1124 | + }, |
| 1125 | + { |
| 1126 | + "cell_type": "code", |
| 1127 | + "execution_count": null, |
| 1128 | + "metadata": { |
| 1129 | + "collapsed": true |
| 1130 | + }, |
| 1131 | + "outputs": [], |
| 1132 | + "source": [ |
| 1133 | + "%psource weighted_sample" |
| 1134 | + ] |
| 1135 | + }, |
| 1136 | + { |
| 1137 | + "cell_type": "markdown", |
| 1138 | + "metadata": {}, |
| 1139 | + "source": [ |
| 1140 | + "\n", |
| 1141 | + "**weighted_sample** samples an event from Bayesian Network that's consistent with the evidence **e** and returns the event and its weight, the likelihood that the event accords to the evidence. It takes in two parameters **bn** the Bayesian Network and **e** the evidence.\n", |
| 1142 | + "\n", |
| 1143 | + "The weight is obtained by multiplying **P(x<sub>i</sub> | parents(x<sub>i</sub>))** for each node in evidence. We set the values of **event = evidence** at the start of the function." |
| 1144 | + ] |
| 1145 | + }, |
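| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "The next cell is a condensed, illustrative sketch of this logic (the actual implementation is the one shown by **%psource weighted_sample** above). It assumes each node exposes a **p(value, event)** method giving the conditional probability of a value given the parent values in the event." |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": { |
| | + "collapsed": true |
| | + }, |
| | + "outputs": [], |
| | + "source": [ |
| | + "def weighted_sample_sketch(bn, e):\n", |
| | + "    # Sketch only: evidence variables keep their observed values and multiply\n", |
| | + "    # P(x_i | parents(x_i)) into the weight; every other variable is sampled.\n", |
| | + "    w = 1\n", |
| | + "    event = dict(e)  # start the event from the evidence\n", |
| | + "    for node in bn.nodes:\n", |
| | + "        X = node.variable\n", |
| | + "        if X in e:\n", |
| | + "            w *= node.p(e[X], event)\n", |
| | + "        else:\n", |
| | + "            event[X] = node.sample(event)\n", |
| | + "    return event, w" |
| | + ] |
| | + }, |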
| 1146 | + { |
| 1147 | + "cell_type": "code", |
| 1148 | + "execution_count": null, |
| 1149 | + "metadata": { |
| 1150 | + "collapsed": false |
| 1151 | + }, |
| 1152 | + "outputs": [], |
| 1153 | + "source": [ |
| 1154 | + "weighted_sample(sprinkler, dict(Rain=True))" |
| 1155 | + ] |
| 1156 | + }, |
| 1157 | + { |
| 1158 | + "cell_type": "code", |
| 1159 | + "execution_count": null, |
| 1160 | + "metadata": { |
| 1161 | + "collapsed": true |
| 1162 | + }, |
| 1163 | + "outputs": [], |
| 1164 | + "source": [ |
| 1165 | + "%psource likelihood_weighting" |
| 1166 | + ] |
| 1167 | + }, |
| 1168 | + { |
| 1169 | + "cell_type": "markdown", |
| 1170 | + "metadata": {}, |
| 1171 | + "source": [ |
| 1172 | + "**likelihood_weighting** implements the algorithm to solve our inference problem. The code is similar to **rejection_sampling** but instead of adding one for each sample we add the weight obtained from **weighted_sampling**." |
| 1173 | + ] |
| 1174 | + }, |
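| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "A condensed, illustrative sketch of that accumulation step is shown below (the actual implementation is the one shown by **%psource likelihood_weighting** above); the sketch returns an ordinary dictionary rather than the probability distribution object used in the notebook." |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": { |
| | + "collapsed": true |
| | + }, |
| | + "outputs": [], |
| | + "source": [ |
| | + "from collections import defaultdict\n", |
| | + "\n", |
| | + "def likelihood_weighting_sketch(X, e, bn, N):\n", |
| | + "    # Sketch only: add each sample's weight to the running total for the\n", |
| | + "    # value its query variable took, then normalize the totals.\n", |
| | + "    W = defaultdict(float)\n", |
| | + "    for _ in range(N):\n", |
| | + "        sample, weight = weighted_sample(bn, e)\n", |
| | + "        W[sample[X]] += weight\n", |
| | + "    total = sum(W.values())\n", |
| | + "    return {value: w / total for value, w in W.items()}" |
| | + ] |
| | + }, |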
| 1175 | + { |
| 1176 | + "cell_type": "code", |
| 1177 | + "execution_count": null, |
| 1178 | + "metadata": { |
| 1179 | + "collapsed": false |
| 1180 | + }, |
| 1181 | + "outputs": [], |
| 1182 | + "source": [ |
| 1183 | + "likelihood_weighting('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()" |
| 1184 | + ] |
| 1185 | + }, |
| 1186 | + { |
| 1187 | + "cell_type": "markdown", |
| 1188 | + "metadata": {}, |
| 1189 | + "source": [ |
| 1190 | + "### Gibbs Sampling\n", |
| 1191 | + "\n", |
| 1192 | + "In likelihood sampling, it is possible to obtain low weights in cases where the evidence variables reside at the bottom of the Bayesian Network. This can happen because influence only propagates downwards in likelihood sampling.\n", |
| 1193 | + "\n", |
| 1194 | + "Gibbs Sampling solves this. The implementation of **Figure 14.16** is provided in the function **gibbs_ask** " |
| 1195 | + ] |
| 1196 | + }, |
| 1197 | + { |
| 1198 | + "cell_type": "code", |
| 1199 | + "execution_count": null, |
| 1200 | + "metadata": { |
| 1201 | + "collapsed": true |
| 1202 | + }, |
| 1203 | + "outputs": [], |
| 1204 | + "source": [ |
| 1205 | + "%psource gibbs_ask" |
| 1206 | + ] |
| 1207 | + }, |
| 1208 | + { |
| 1209 | + "cell_type": "markdown", |
| 1210 | + "metadata": {}, |
| 1211 | + "source": [ |
| 1212 | + "In **gibbs_ask** we initialize the non-evidence variables to random values. And then select non-evidence variables and sample it from **P(Variable | value in the current state of all remaining vars) ** repeatedly sample. In practice, we speed this up by using **markov_blanket_sample** instead. This works because terms not involving the variable get canceled in the calculation. The arguments for **gibbs_ask** are similar to **likelihood_weighting**" |
| 1213 | + ] |
| 1214 | + }, |
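| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "The next cell sketches that loop for Boolean variables (illustrative only; the actual implementation is the one shown by **%psource gibbs_ask** above). It assumes the network exposes a **variables** list and that **markov_blanket_sample(X, e, bn)** resamples a single variable given the current state." |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": { |
| | + "collapsed": true |
| | + }, |
| | + "outputs": [], |
| | + "source": [ |
| | + "import random\n", |
| | + "from collections import Counter\n", |
| | + "\n", |
| | + "def gibbs_ask_sketch(X, e, bn, N):\n", |
| | + "    # Sketch only, Boolean variables assumed: start from a random completion\n", |
| | + "    # of the evidence, then repeatedly resample one non-evidence variable\n", |
| | + "    # from its Markov blanket and tally the query variable's value.\n", |
| | + "    non_evidence = [var for var in bn.variables if var not in e]\n", |
| | + "    state = dict(e)\n", |
| | + "    for var in non_evidence:\n", |
| | + "        state[var] = random.choice([True, False])\n", |
| | + "    counts = Counter()\n", |
| | + "    for _ in range(N):\n", |
| | + "        for var in non_evidence:\n", |
| | + "            state[var] = markov_blanket_sample(var, state, bn)\n", |
| | + "            counts[state[X]] += 1\n", |
| | + "    total = sum(counts.values())\n", |
| | + "    return {value: count / total for value, count in counts.items()}" |
| | + ] |
| | + }, |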
| 1215 | + { |
| 1216 | + "cell_type": "code", |
| 1217 | + "execution_count": null, |
| 1218 | + "metadata": { |
| 1219 | + "collapsed": false |
| 1220 | + }, |
| 1221 | + "outputs": [], |
| 1222 | + "source": [ |
| 1223 | + "gibbs_ask('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()" |
| 1224 | + ] |
932 | 1225 | }
|
933 | 1226 | ],
|
934 | 1227 | "metadata": {
|
|
948 | 1241 | "nbconvert_exporter": "python",
|
949 | 1242 | "pygments_lexer": "ipython3",
|
950 | 1243 | "version": "3.4.3"
|
| 1244 | + }, |
| 1245 | + "widgets": { |
| 1246 | + "state": {}, |
| 1247 | + "version": "1.1.1" |
951 | 1248 | }
|
952 | 1249 | },
|
953 | 1250 | "nbformat": 4,
|
|