Meeting 06: Alpha-Beta Pruning and Practical Game Play

Reading: AIAMA 5.3-5.4

We now move on to playing real games of moderate complexity under practical constraints. Section 5.3 describes alpha-beta pruning, an ingenious way to speed up minimax search without compromising the decision it reaches. Section 5.4 describes how to deal with time limits in real play and shows that the name of the game is searching as deeply as possible into the game tree and constructing a good evaluation function.
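
To make the mechanics concrete, here is a minimal sketch of depth-limited alpha-beta search in the spirit of Sections 5.3-5.4. The Game interface assumed below (actions, result, is_terminal, utility, and a heuristic evaluate at the cutoff) and all names are illustrative assumptions, not code from the text.

```python
import math

def alpha_beta_search(game, state, depth_limit):
    """Return the action MAX should take from `state`, searching to depth_limit plies.
    Assumes a hypothetical `game` object with actions/result/is_terminal/utility/evaluate."""

    def max_value(state, alpha, beta, depth):
        if game.is_terminal(state):
            return game.utility(state)
        if depth >= depth_limit:
            return game.evaluate(state)      # heuristic evaluation at the cutoff
        v = -math.inf
        for action in game.actions(state):
            v = max(v, min_value(game.result(state, action), alpha, beta, depth + 1))
            if v >= beta:                    # MIN already has a better option higher up: prune
                return v
            alpha = max(alpha, v)
        return v

    def min_value(state, alpha, beta, depth):
        if game.is_terminal(state):
            return game.utility(state)
        if depth >= depth_limit:
            return game.evaluate(state)
        v = math.inf
        for action in game.actions(state):
            v = min(v, max_value(game.result(state, action), alpha, beta, depth + 1))
            if v <= alpha:                   # MAX already has a better option higher up: prune
                return v
            beta = min(beta, v)
        return v

    # Pick the move whose MIN response value is largest.
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), -math.inf, math.inf, 1))
```
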

Questions you should be able to answer after reading are:

  1. How does alpha-beta pruning work?
  2. Does minimax with and without alpha-beta pruning reach the same decision for a full game tree?
  3. Does minimax with and without alpha-beta pruning reach the same decision for a game tree with a fixed ply depth?
  4. What is the importance of the evaluation function in real game play?
  5. Are good evaluation functions game specific? (See the sketch after this list.)
  6. What is the horizon effect?
  7. After combining the approaches in sections 5.1-5.4, do we have an effective algorithm for games like checkers or chess?
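
As a partial illustration for questions 4 and 5, a classic chess-flavored evaluation is material counting. The board representation below (a dict from squares to piece letters, uppercase for MAX, lowercase for MIN) and the function name are made-up for this sketch, not from the text.

```python
# Standard material values; the king gets 0 because it is never captured.
PIECE_VALUES = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9, 'k': 0}

def material_evaluate(board):
    """Positive scores favor MAX (uppercase pieces), negative favor MIN (lowercase)."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

# Example: MAX is up a rook against a knight.
print(material_evaluate({'a1': 'R', 'e1': 'K', 'e8': 'k', 'b8': 'n'}))  # 5 - 3 = 2
```
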

There has been a lot of progress in game playing using reinforcement learning and self-play. We will discuss this briefly as an alternative to search techniques like alpha-beta.