I previously posted about building a big decision tree that, I claimed, proved player 2 could force player 1 to win in tic-tac-toe. Turns out I was reading my results incorrectly: a result of "2" at a node actually meant player 2 could not force a player 1 win. That sucks.
I have proven the negation of my hypothesis, so that's something! Next up is seeing whether P2 can never win (so every game ends in a P1 win or a draw).
In a previous post I described my efforts to make a Tic Tac Toe AI that always loses, trying to do it with a pretty little algorithm and nothing more. Maybe there is a pretty algorithm out there, but I didn't find it.
My new approach is to play every possible game in order to build a decision tree for the AI to follow. The tree is pretty big (a ~25 MB nested JSON object), but the process is fast and a lot simpler in terms of code. Best of all, I can prove whether my hypothesis, that player two can always force a player one win, is true or not.
Each node of the tree looks like the top image above. The root has 9 children, which correspond to the 9 moves player one (P1) can make on the first turn. Each of those children has 8 children of its own (second image, above), which correspond to P2's first turn. This continues down to the leaves, where the board is completely filled in and we know the game is over.
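A rough sketch of how such a tree could be built in Python (the names `build_tree`, `winner`, and the leaf format are my own assumptions, not the actual code from the project); this version ends a branch as soon as the game is decided, which matches the trimmed tree described below:

```python
# Hypothetical sketch: build the whole game tree by playing every game.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X', 'O', or None for a board stored as a 9-element list."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def build_tree(board=None, player='X'):
    """Map each legal move (0-8) to its subtree; leaves record the winner."""
    if board is None:
        board = [None] * 9
    w = winner(board)
    if w is not None or None not in board:
        return {'winner': w}                  # leaf: game over (None = draw)
    nxt = 'O' if player == 'X' else 'X'
    node = {}
    for i in range(9):
        if board[i] is None:
            board[i] = player                 # make the move...
            node[i] = build_tree(board, nxt)  # ...recurse into the result...
            board[i] = None                   # ...and undo it
    return node
```

Dumping this dict with `json.dumps` gives a nested JSON object in the same spirit as the one linked below (JSON turns the integer move keys into strings).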
By simulating each game, we can assign each leaf "true" (for cases where P1 wins) or "false" (where P2 wins or there's a draw). A leaf's parent node (corresponding to turn 8, P2's last turn) can be assigned "true" if any of its children are true, meaning P2 has at least one move it can make where P1 wins. The parent of that node (corresponding to turn 7, P1's choice of three remaining spaces) can be assigned "true" only if all of its children are true, meaning every move P1 can make results in an eventual P1 win. This alternating AND/OR pattern continues up the tree to the root node. If the root node is "true", then no matter what moves P1 makes, P2 can respond with a move that leads to a P1 win.
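The alternating OR/AND evaluation can be sketched by recursing directly over positions rather than over a prebuilt tree (again, hypothetical code of my own, with the board helpers repeated so the snippet stands alone): OR at P2's turns, AND at P1's turns, with a leaf counting as true only when P1 has won.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def p1_win_forced(board, player='X'):
    """True if P2 can force a P1 (X) win from this position:
    OR over P2's choices, AND over P1's choices."""
    w = winner(board)
    if w is not None or None not in board:
        return w == 'X'                      # leaf: true only if P1 won
    nxt = 'O' if player == 'X' else 'X'
    results = []
    for i in range(9):
        if board[i] is None:
            board[i] = player
            results.append(p1_win_forced(board, nxt))
            board[i] = None
    # P2 needs just one good reply; P1 must be trapped on every move.
    return any(results) if player == 'O' else all(results)
```

In this sketch the root, `p1_win_forced([None] * 9)`, evaluates to false, consistent with the corrected reading above: P2 cannot force a P1 win.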
You can view the JSON file here on Github; it has been cut down to about 11 MB by removing all moves that would occur after a game has been completed. I haven't done anything to confirm that my process works and is giving correct data, other than confirming that all 255,168 possible games were simulated. But that will come after I build the AI into my web version of the game and run tests there (and let humans battle this un-un-beatable foe!)