|
20 | 20 | " * Tic-Tac-Toe\n",
|
21 | 21 | " * Figure 5.2 Game\n",
|
22 | 22 | "* Min-Max\n",
|
| 23 | + "* Alpha-Beta\n", |
23 | 24 | "* Players\n",
|
24 | 25 | "* Let's Play Some Games!"
|
25 | 26 | ]
|
|
347 | 348 | "cell_type": "markdown",
|
348 | 349 | "metadata": {},
|
349 | 350 | "source": [
|
350 |
| - "`utility`: Returns the value of the terminal state for a player ('MAX' and 'MIN')." |
| 351 | + "`utility`: Returns the value of the terminal state for a player ('MAX' and 'MIN'). Note that for 'MIN' the value returned is the negative of the utility." |
351 | 352 | ]
|
352 | 353 | },
|
353 | 354 | {
|
|
363 | 364 | },
|
364 | 365 | {
|
365 | 366 | "cell_type": "code",
|
366 |
| - "execution_count": 12, |
| 367 | + "execution_count": 3, |
367 | 368 | "metadata": {},
|
368 | 369 | "outputs": [
|
369 | 370 | {
|
370 | 371 | "name": "stdout",
|
371 | 372 | "output_type": "stream",
|
372 | 373 | "text": [
|
373 |
| - "3\n" |
| 374 | + "3\n", |
| 375 | + "-3\n" |
374 | 376 | ]
|
375 | 377 | }
|
376 | 378 | ],
|
377 | 379 | "source": [
|
378 |
| - "print(fig52.utility('B1', 'MAX'))" |
| 380 | + "print(fig52.utility('B1', 'MAX'))\n", |
| 381 | + "print(fig52.utility('B1', 'MIN'))" |
379 | 382 | ]
|
380 | 383 | },
|
381 | 384 | {
|
|
472 | 475 | "source": [
|
473 | 476 | "# MIN-MAX\n",
|
474 | 477 | "\n",
|
475 |
| - "This algorithm (often called *Minimax*) computes the next move for a player (MIN or MAX) at their current state. It recursively computes the minimax value of successor states, until it reaches terminals (the leaves of the tree). Using the `utility` value of the terminal states, it computes the values of parent states until it reaches the initial node (the root of the tree). The algorithm returns the move that returns the optimal value of the initial node's successor states.\n", |
| 478 | + "## Overview\n", |
476 | 479 | "\n",
|
477 |
| - "Below is the code for the algorithm:" |
| 480 | + "This algorithm (often called *Minimax*) computes the next move for a player (MIN or MAX) at their current state. It recursively computes the minimax value of successor states, until it reaches terminals (the leaves of the tree). Using the `utility` value of the terminal states, it computes the values of parent states until it reaches the initial node (the root of the tree).\n", |
| 481 | + "\n", |
| 482 | + "It is worth noting that the algorithm works in a depth-first manner." |
| 483 | + ] |
| 484 | + }, |
| 485 | + { |
| 486 | + "cell_type": "markdown", |
| 487 | + "metadata": {}, |
| 488 | + "source": [ |
| 489 | + "## Implementation\n", |
| 490 | + "\n", |
| 491 | + "In the implementation we are using two functions, `max_value` and `min_value` to calculate the best move for MAX and MIN respectively. These functions interact in an alternating recursion; one calls the other until a terminal state is reached. When the recursion halts, we are left with scores for each move. We return the max. Despite returning the max, it will work for MIN too since for MIN the values are their negative (hence the order of values is reversed, so the higher the better for MIN too)." |
478 | 492 | ]
|
479 | 493 | },
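As a companion to the description above, here is a minimal sketch of that alternating recursion, assuming only the `Game` interface used throughout the notebook (`actions`, `result`, `utility`, `terminal_test`, `to_move`); the library's own `minimax_decision`, shown with `%psource` below, may differ in detail.

```python
def minimax_decision_sketch(state, game):
    """Pick the move with the best minimax value for the player to move."""
    player = game.to_move(state)

    def max_value(state):
        # MAX takes the highest value among MIN's replies.
        if game.terminal_test(state):
            return game.utility(state, player)
        return max(min_value(game.result(state, a)) for a in game.actions(state))

    def min_value(state):
        # MIN takes the lowest value among MAX's replies.
        if game.terminal_test(state):
            return game.utility(state, player)
        return min(max_value(game.result(state, a)) for a in game.actions(state))

    # At the root we always maximize; for MIN this still works because the
    # utilities returned for MIN are the negatives of MAX's (see above).
    return max(game.actions(state), key=lambda a: min_value(game.result(state, a)))
```

On `fig52` this should return `a1` for state `'A'`, matching the `minimax_decision` call further down.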
|
480 | 494 | {
|
|
492 | 506 | "cell_type": "markdown",
|
493 | 507 | "metadata": {},
|
494 | 508 | "source": [
|
| 509 | + "## Example\n", |
| 510 | + "\n", |
495 | 511 | "We will now play the Fig52 game using this algorithm. Take a look at the Fig52Game from above to follow along.\n",
|
496 | 512 | "\n",
|
497 | 513 | "It is the turn of MAX to move, and he is at state A. He can move to B, C or D, using moves a1, a2 and a3 respectively. MAX's goal is to maximize the end value. So, to make a decision, MAX needs to know the values at the aforementioned nodes and pick the greatest one. After MAX, it is MIN's turn to play. So MAX wants to know what will the values of B, C and D be after MIN plays.\n",
|
|
546 | 562 | "print(minimax_decision('A', fig52))"
|
547 | 563 | ]
|
548 | 564 | },
|
| 565 | + { |
| 566 | + "cell_type": "markdown", |
| 567 | + "metadata": {}, |
| 568 | + "source": [ |
| 569 | + "# ALPHA-BETA\n", |
| 570 | + "\n", |
| 571 | + "## Overview\n", |
| 572 | + "\n", |
| 573 | + "While *Minimax* is great for computing a move, it can get tricky when the number of games states gets bigger. The algorithm needs to search all the leaves of the tree, which increase exponentially to its depth.\n", |
| 574 | + "\n", |
| 575 | + "For Tic-Tac-Toe, where the depth of the tree is 9 (after the 9th move, the game ends), we can have at most 9! terminal states (at most because not all terminal nodes are at the last level of the tree; some are higher up because the game ended before the 9th move). This isn't so bad, but for more complex problems like chess, we have over $10^{40}$ terminal nodes. Unfortunately we have not found a way to cut the exponent away, but we nevertheless have found ways to alleviate the workload.\n", |
| 576 | + "\n", |
| 577 | + "Here we examine *pruning* the game tree, which means removing parts of it that we do not need to examine. The particular type of pruning is called *alpha-beta*, and the search in whole is called *alpha-beta search*.\n", |
| 578 | + "\n", |
| 579 | + "To showcase what parts of the tree we don't need to search, we will take a look at the example `Fig52Game`.\n", |
| 580 | + "\n", |
| 581 | + "In the example game, we need to find the best move for player MAX at state A, which is the maximum value of MIN's possible moves at successor states.\n", |
| 582 | + "\n", |
| 583 | + "`MAX(A) = MAX( MIN(B), MIN(C), MIN(D) )`\n", |
| 584 | + "\n", |
| 585 | + "`MIN(B)` is the minimum of 3, 12, 8 which is 3. So the above formula becomes:\n", |
| 586 | + "\n", |
| 587 | + "`MAX(A) = MAX( 3, MIN(C), MIN(D) )`\n", |
| 588 | + "\n", |
| 589 | + "Next move we will check is c1, which leads to a terminal state with utility of 2. Before we continue searching under state C, let's pop back into our formula with the new value:\n", |
| 590 | + "\n", |
| 591 | + "`MAX(A) = MAX( 3, MIN(2, c2, .... cN), MIN(D) )`\n", |
| 592 | + "\n", |
| 593 | + "We do not know how many moves state C allows, but we know that the first one results in a value of 2. Do we need to keep searching under C? The answer is no. The value MIN will pick on C will at most be 2. Since MAX already has the option to pick something greater than that, 3 from B, he does not need to keep searching under C.\n", |
| 594 | + "\n", |
| 595 | + "In *alpha-beta* we make use of two additional parameters for each state/node, *a* and *b*, that describe bounds on the possible moves. The parameter *a* denotes the best choice (highest value) for MAX along that path, while *b* denotes the best choice (lowest value) for MIN. As we go along we update *a* and *b* and prune a node branch when the value of the node is worse than the value of *a* and *b* for MAX and MIN respectively.\n", |
| 596 | + "\n", |
| 597 | + "In the above example, after the search under state B, MAX had an *a* value of 3. So, when searching node C we found a value less than that, 2, we stopped searching under C." |
| 598 | + ] |
| 599 | + }, |
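Stated compactly, with $\alpha$ the best value MAX has found so far along the path (here $\alpha = 3$ from B), the pruning condition at C from the walkthrough above is:

$$\operatorname{MIN}(C) \le 2 < 3 = \alpha \quad\Longrightarrow\quad \text{the remaining successors of } C \text{ can be skipped.}$$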
| 600 | + { |
| 601 | + "cell_type": "markdown", |
| 602 | + "metadata": {}, |
| 603 | + "source": [ |
| 604 | + "## Implementation\n", |
| 605 | + "\n", |
| 606 | + "Like *minimax*, we again make use of functions `max_value` and `min_value`, but this time we utilise the *a* and *b* values, updating them and stopping the recursive call if we end up on nodes with values worse than *a* and *b* (for MAX and MIN). The algorithm finds the maximum value and returns the move that results in it.\n", |
| 607 | + "\n", |
| 608 | + "The implementation:" |
| 609 | + ] |
| 610 | + }, |
| 611 | + { |
| 612 | + "cell_type": "code", |
| 613 | + "execution_count": 2, |
| 614 | + "metadata": { |
| 615 | + "collapsed": true |
| 616 | + }, |
| 617 | + "outputs": [], |
| 618 | + "source": [ |
| 619 | + "%psource alphabeta_search" |
| 620 | + ] |
| 621 | + }, |
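Since the `%psource` output is not part of the diff, here is a minimal sketch of the search under the same assumed `Game` interface as before; the library's own `alphabeta_search` may differ in its exact signature and details.

```python
def alphabeta_search_sketch(state, game):
    """Return the best action for the player to move, pruning branches
    that cannot influence the final decision."""
    player = game.to_move(state)

    def max_value(state, alpha, beta):
        if game.terminal_test(state):
            return game.utility(state, player)
        v = -float('inf')
        for a in game.actions(state):
            v = max(v, min_value(game.result(state, a), alpha, beta))
            if v >= beta:          # MIN above can already do no worse than beta: prune
                return v
            alpha = max(alpha, v)
        return v

    def min_value(state, alpha, beta):
        if game.terminal_test(state):
            return game.utility(state, player)
        v = float('inf')
        for a in game.actions(state):
            v = min(v, max_value(game.result(state, a), alpha, beta))
            if v <= alpha:         # MAX above can already do no worse than alpha: prune
                return v
            beta = min(beta, v)
        return v

    # At the root, pick the action whose MIN reply has the highest value,
    # threading alpha through so later branches can be pruned.
    best_score, best_action = -float('inf'), None
    for a in game.actions(state):
        v = min_value(game.result(state, a), best_score, float('inf'))
        if v > best_score:
            best_score, best_action = v, a
    return best_action
```

Calling this sketch at state `'A'` of `fig52` should return `a1` without expanding C's remaining successors, which is exactly the pruning traced above.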
| 622 | + { |
| 623 | + "cell_type": "markdown", |
| 624 | + "metadata": {}, |
| 625 | + "source": [ |
| 626 | + "## Example\n", |
| 627 | + "\n", |
| 628 | + "We will play the Fig52 Game with the *alpha-beta* search algorithm. It is the turn of MAX to play at state A." |
| 629 | + ] |
| 630 | + }, |
| 631 | + { |
| 632 | + "cell_type": "code", |
| 633 | + "execution_count": 8, |
| 634 | + "metadata": {}, |
| 635 | + "outputs": [ |
| 636 | + { |
| 637 | + "name": "stdout", |
| 638 | + "output_type": "stream", |
| 639 | + "text": [ |
| 640 | + "a1\n" |
| 641 | + ] |
| 642 | + } |
| 643 | + ], |
| 644 | + "source": [ |
| 645 | + "print(alphabeta_search('A', fig52))" |
| 646 | + ] |
| 647 | + }, |
| 648 | + { |
| 649 | + "cell_type": "markdown", |
| 650 | + "metadata": {}, |
| 651 | + "source": [ |
| 652 | + "The optimal move for MAX is a1, for the reasons given above. MIN will pick move b1 for B resulting in a value of 3, updating the *a* value of MAX to 3. Then, when we find under C a node of value 2, we will stop searching under that sub-tree since it is less than *a*. From D we have a value of 2. So, the best move for MAX is the one resulting in a value of 3, which is a1.\n", |
| 653 | + "\n", |
| 654 | + "Below we see the best moves for MIN starting from B, C and D respectively. Note that the algorithm in these cases works the same way as *minimax*, since all the nodes below the aforementioned states are terminal." |
| 655 | + ] |
| 656 | + }, |
| 657 | + { |
| 658 | + "cell_type": "code", |
| 659 | + "execution_count": 7, |
| 660 | + "metadata": {}, |
| 661 | + "outputs": [ |
| 662 | + { |
| 663 | + "name": "stdout", |
| 664 | + "output_type": "stream", |
| 665 | + "text": [ |
| 666 | + "b1\n", |
| 667 | + "c1\n", |
| 668 | + "d3\n" |
| 669 | + ] |
| 670 | + } |
| 671 | + ], |
| 672 | + "source": [ |
| 673 | + "print(alphabeta_search('B', fig52))\n", |
| 674 | + "print(alphabeta_search('C', fig52))\n", |
| 675 | + "print(alphabeta_search('D', fig52))" |
| 676 | + ] |
| 677 | + }, |
549 | 678 | {
|
550 | 679 | "cell_type": "markdown",
|
551 | 680 | "metadata": {},
|
|
561 | 690 | "The `random_player` is a function that plays random moves in the game. That's it. There isn't much more to this guy. \n",
|
562 | 691 | "\n",
|
563 | 692 | "## alphabeta_player\n",
|
564 |
| - "The `alphabeta_player`, on the other hand, calls the `alphabeta_full_search` function, which returns the best move in the current game state. Thus, the `alphabeta_player` always plays the best move given a game state, assuming that the game tree is small enough to search entirely.\n", |
| 693 | + "The `alphabeta_player`, on the other hand, calls the `alphabeta_search` function, which returns the best move in the current game state. Thus, the `alphabeta_player` always plays the best move given a game state, assuming that the game tree is small enough to search entirely.\n", |
565 | 694 | "\n",
|
566 | 695 | "## play_game\n",
|
567 | 696 | "The `play_game` function will be the one that will actually be used to play the game. You pass as arguments to it an instance of the game you want to play and the players you want in this game. Use it to play AI vs AI, AI vs human, or even human vs human matches!"
|
|
580 | 709 | },
|
581 | 710 | {
|
582 | 711 | "cell_type": "code",
|
583 |
| - "execution_count": 2, |
| 712 | + "execution_count": 3, |
584 | 713 | "metadata": {
|
585 | 714 | "collapsed": true
|
586 | 715 | },
|
|
672 | 801 | },
|
673 | 802 | {
|
674 | 803 | "cell_type": "code",
|
675 |
| - "execution_count": 6, |
| 804 | + "execution_count": 5, |
676 | 805 | "metadata": {},
|
677 | 806 | "outputs": [
|
678 | 807 | {
|
|
681 | 810 | "'a1'"
|
682 | 811 | ]
|
683 | 812 | },
|
684 |
| - "execution_count": 6, |
| 813 | + "execution_count": 5, |
685 | 814 | "metadata": {},
|
686 | 815 | "output_type": "execute_result"
|
687 | 816 | }
|
688 | 817 | ],
|
689 | 818 | "source": [
|
690 |
| - "alphabeta_full_search('A', game52)" |
| 819 | + "alphabeta_search('A', game52)" |
691 | 820 | ]
|
692 | 821 | },
|
693 | 822 | {
|
|