

Authors: Jack Ding, Michael Yau

Link to GitHub

What is the problem?

Go is a two-player abstract strategy board game invented in China over 2,500 years ago. Players take turns placing black and white stones on a 9x9, 13x13, or 19x19 board, with the aim of surrounding as much territory as possible. Although the rules are very simple, the game itself is extremely complex (far more so than chess, for example).

For more details, see the Wikipedia page on Go.

We will be implementing Go in Haskell, to be played via command line.

What is the something extra?

Some claim that Go is the most complex game in the world. It has long been considered one of the most difficult games for artificial intelligence to conquer due to its incredibly vast search space. As a comparison, the first chess program to defeat a reigning world champion did so in 1997 (IBM's Deep Blue against Garry Kasparov), whereas it was not until 2015 that a computer defeated a professional Go player without handicap in an official match (Google DeepMind's AlphaGo against Fan Hui). In 2016, AlphaGo defeated Lee Sedol (considered one of the strongest players of the 21st century) 4 games to 1.

We will attempt to create a Go AI, called BetaGo, using techniques similar to those we used for 2048 Prolog (some variation of exhaustive search with possible optimizations).

What did we learn from doing this?


Though the rules of Go are fairly straightforward, mastering it is much harder. It turns out the same philosophy applies to implementing the game in Haskell. Broken down, these are the core components of the game's functionality:

  • A game loop that alternates between 2 players: each player on their turn can either place a stone of their respective color (black or white) on the board or skip their turn (as in the 2048 Prolog implementation, this is just an infinitely recursing function)
  • A board state represented as an NxN (N = 19 by default) list of lists of Chars, where each Char is either a black piece 'b', a white piece 'w', or an unoccupied position ' '
  • A series of checks to determine whether a move is valid (the entered coordinates are within the bounds of the game board, no piece already occupies that position, and the move does not result in a suicide)
  • Executing the move if it is valid by placing the new piece on the board
  • Checking if the move results in a capture of neighboring pieces and executing captures
  • Displaying the board state before and after every move
  • Some additional helpers to assist with the above tasks (placing and getting a piece on a board, checking if a position is out of bounds or occupied, checking if a piece is surrounded by hostile pieces, etc.)
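The board representation and a few of the helpers listed above can be sketched as follows. This is a minimal illustration under the representation described in the text; the names (emptyBoard, getPiece, inBounds, occupied) are our own illustrative choices, not necessarily the project's actual identifiers.

```haskell
-- The board is an NxN list of lists of Chars: 'b' for black,
-- 'w' for white, ' ' for an unoccupied position.
type Board = [[Char]]

boardSize :: Int
boardSize = 19

-- An NxN board with every position unoccupied.
emptyBoard :: Board
emptyBoard = replicate boardSize (replicate boardSize ' ')

-- Look up the piece at (row, col); the caller must ensure the
-- position is in bounds first.
getPiece :: Board -> (Int, Int) -> Char
getPiece board (r, c) = (board !! r) !! c

-- Check that a coordinate lies on the board.
inBounds :: (Int, Int) -> Bool
inBounds (r, c) = r >= 0 && r < boardSize && c >= 0 && c < boardSize

-- A position is occupied if it holds a black or white stone.
occupied :: Board -> (Int, Int) -> Bool
occupied board pos = getPiece board pos /= ' '
```

A move-validity check then composes these: a placement is legal only if the position is in bounds, unoccupied, and not a suicide.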

For the most part, Haskell handles these tasks in a fairly clean and efficient manner: any functionality that manipulates the board state is expressed easily with the take and drop list operations, and a series of board state changes (such as captures) can thus be performed recursively, with the positions to be manipulated stored as a list of moves. That said, these operations are also straightforward in an imperative language, where the board could be represented as a 2D array, each position could be looked up and manipulated in constant time, and changes to the board state would not require creating a new board. One advantage of Haskell is that it forced us to be very deliberate with each of our expressions. Once we implemented a function and got it to compile, we found very few bugs compared to our typical experience with other, less strict languages. We spent significantly less time debugging our code and more time thinking through the logic and behavior required of each function.
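The take/drop update style described above can be sketched like this: replacing one position builds a new board from the unchanged prefix, a modified row, and the unchanged suffix, leaving the original board untouched. `setAt` and `placePiece` are illustrative names, not necessarily the project's own.

```haskell
type Board = [[Char]]

-- Replace the element at index i in a list using take and drop.
setAt :: Int -> a -> [a] -> [a]
setAt i x xs = take i xs ++ [x] ++ drop (i + 1) xs

-- Place a stone at (row, col), yielding a new board; the old
-- board value is unchanged (persistent update).
placePiece :: Char -> (Int, Int) -> Board -> Board
placePiece piece (r, c) board = setAt r (setAt c piece (board !! r)) board
```

A capture can then be executed by folding `placePiece ' '` over the list of captured positions.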


Compared to 2048, which has a well-defined endgame condition (no new moves can be made), Go's win conditions are much more subtle. A player's score is ever-changing as territories are taken and retaken. In a real game of Go, play almost never reaches completion: one player recognizes from the enemy's positioning that the game is unwinnable and surrenders. This is not easy to account for from a programming perspective. One naive approach would be to count a player's capture score and territory score and consider the game won once they reach a certain amount, but how can we determine when a territory is completely won (i.e., the other player cannot possibly claim it without the first player deliberately allowing it)? This was beyond the scope of our resources and time, so we decided to display only the capture score and let the players themselves decide who won.
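The counting half of the naive approach is at least simple to express. As a crude sketch (assuming the board representation above; `countPieces` is an illustrative name), tallying the stones of one colour on the board is a one-liner, though it says nothing about which territory is settled:

```haskell
type Board = [[Char]]

-- Count how many positions on the board hold the given piece.
-- Flatten the rows, keep matching Chars, and take the length.
countPieces :: Char -> Board -> Int
countPieces piece = length . filter (== piece) . concat
```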


Our approach to the AI was to generate a pool of all legal moves, assign a weight to each move, and select a random one to execute. This is fairly easy on an empty board: in Go, certain areas of the board are considered more valuable to take than others, specifically the corners and sides, so we start by assigning a base score to each position that is highest at the corners and dips toward the middle. Once there are pieces on the board and those positions are no longer in play, it becomes harder to assess which moves are better. A good AI should be able to contest and defend territory against the enemy player, but there is no "right" way to do this: Go is a game of action and reaction, the board is huge, and many different moves can accomplish these tasks. A brute-force approach would be to simulate every possible move a few steps ahead and rank moves by the resulting difference in score. However, this would be incredibly slow and, given the earlier problem of fluctuating scores, probably not very effective either. In the end, our AI acts like a decent player for the first few turns but deteriorates the longer the game goes on. We hope that in the future we can become proficient enough at Haskell to come up with a better approach, perhaps even one that incorporates machine learning. For now, it is truly a BetaGo.
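The base-weighting idea described above can be sketched as a score that falls off with distance from the nearest edge, so corners and sides outrank the centre. The linear falloff and the names `distToEdge`/`baseScore` are assumptions for illustration, not BetaGo's exact formula:

```haskell
boardSize :: Int
boardSize = 19

-- Distance from (row, col) to the nearest board edge;
-- 0 on the edge, largest at the centre.
distToEdge :: (Int, Int) -> Int
distToEdge (r, c) = minimum [r, c, boardSize - 1 - r, boardSize - 1 - c]

-- Base weight for an empty position: highest at the corners,
-- dipping the closer the position is to the middle.
baseScore :: (Int, Int) -> Int
baseScore pos = (boardSize `div` 2) - distToEdge pos
```

Each legal move's weight starts from `baseScore` and can then be adjusted by situational heuristics before a weighted random draw picks the move to play.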