The Rival Books of Aster collectible-card game needed an AI.
About half of the time I spent on this game was devoted to server logic, and the other half was devoted to building an AI opponent. Surprisingly, the AI turned out to be far smarter than I thought it would be — but more on that later.
Defining the Scope of the AI Problem
One of the things I’ve learned about programming is that a little bit of clearly-thought-out design work before jumping into the code is a superb idea. Design is about establishing what we need to build, how we plan to go about building it, and whether the underlying technology we’re planning on using is capable of delivering the performance we need. Design can also be about adjusting the specifications, feature sets, proposed system architecture, underlying technologies chosen, and other elements with the intent of arriving at a simple, low-risk plan for delivering a solution.
My intent with the AI was simple: I asked myself, “Given the needs of this game, what is the simplest possible AI I can build?” Future work can do something more complicated, but the primary goal of this work was not to explore complex designs, but rather to realize a minimum viable product.
First of all, the game needed an AI. The normal mode of play is player-versus-player (PvP), but when a player logs in and can’t be immediately matched with another player, an AI is required (the alternative would be to not let the player play, which was unacceptable). The AI was also needed for tutorial sessions — the tutorials are interactive, so even in tutorials an AI was needed to respond to the player’s actions.
Second, the AI needed to be able to play all possible move types and card types. This requirement was non-trivial because of the large number of cards and move types available, and because many moves could trigger a variety of effects that modified player/creature health, attack/defense strength, magic effects, and other stats. Worse still, some effects are capable of causing other effects in a chain reaction. The AI needed to understand enough about all types of moves, cards, and effects to play intelligently.
Ideally, the AI would allow for maximal code reuse. There was a choice that had to be made about where the AI logic would run — it could run on the client or the server. If the AI were implemented on the client, then the client would have to be aware of two modes of gameplay: one set of logic for PvP, another set for player versus AI. Furthermore, the game logic to compute all of the effects of a particular move would have to exist on both the client and the server, increasing the time required to add, change, or test new move or effect types. Since the server was already responsible for computing all move effects, we decided to avoid code duplication by writing an AI that would run on the server.
Why Most Programmers Can’t Write Good AI
I’ve seen this problem over and over again: When faced with a complex problem, the go-to mode of thought for most programmers is to think about types of conditions and responses. This condition-response type of thinking goes like this: “If condition A happens, then the AI should perform response X. Otherwise, if condition B happens, then the AI should perform response Y… etc.” While this kind of condition-response thinking is suitable for producing simple responses in simple situations, it utterly fails when applied to complex situations where the complete set of possible conditions is hard to imagine.
The difficulty of imagining all possible situations that the AI may find itself in leads to reductive thinking on the part of the programmer: rather than choosing a response based on an analysis of all factors that may be relevant, a response is chosen based only on a very few factors that the programmer assumes will be “most important” in a given situation. The problem is this: aside from trivially simple situations, programmers are remarkably bad at consistently predicting what the most-important situational factors will actually be. Reductive thinking by the programmer usually results in reductive thinking by the AI: Rather than fully considering and responding to all aspects of its current situation, a reductive AI will sometimes ignore extremely important events and objects that a normal human would have responded to.
It Doesn’t Have to be Complex, but a Change in Thinking is Needed
Dealing with legacy thinking is extremely difficult. Established patterns of thought often leave no room for better ways of thinking — these patterns must be pushed aside (at least temporarily) to make room for better patterns of thought.
Changing how you think is both the hardest and the most necessary step in making a real improvement. And there are payoffs: the AI for Rival Books of Aster is both conceptually simple and non-reductive. It doesn’t always win, but it plays reasonably well; and even more importantly, it was constructed in less than 3 person-months of work.
So what did I do? I adapted the notion of a board evaluation function, which is used in a great number of chess- and checkers-playing programs. A board evaluation function is a kind of heuristic that estimates how “good” a particular board position is.
Normally, a board evaluation function produces a single number. But for Rival Books of Aster, I had the evaluation function produce a vector of numbers. Each element of the vector corresponded to one type of feature — for example, player health was one feature, combined attack strength another, and the presence of walls was another.
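A minimal sketch of what a vector-valued evaluation function could look like. The `BoardState` fields and feature choices here are hypothetical simplifications, not the game's actual data model:

```python
from dataclasses import dataclass

@dataclass
class BoardState:
    """Hypothetical, simplified view of the board from the AI's side."""
    my_health: int
    enemy_health: int
    my_attack_total: int
    my_wall_count: int

def evaluate(state: BoardState) -> list[float]:
    """Return one number per feature; element order is fixed by convention."""
    return [
        float(state.my_health - state.enemy_health),  # health advantage
        float(state.my_attack_total),                 # combined attack strength
        float(state.my_wall_count),                   # presence of walls
    ]
```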
Why vectors? The reason for vectors was to support different play styles. For example, some AIs favoured attacking over defending, or playing lots of weak cards over a few stronger cards. The vector representation was convenient for representing these variations: an element with a large number meant that the AI placed a high value on that factor, and a small number meant that the AI placed a low value on that factor.
By doing an element-wise multiplication of the evaluation function output with a play style vector, then adding all the elements together, you arrive at a single number representing how “good” that AI thinks the board state is. The mathematically astute among you will recognize that this operation is simply the dot product of the feature vector and the play-style vector.
But we’re not quite done yet. One quirk of this game is that each player may make zero, one, or several moves on any given turn. Not all moves are beneficial, so the player must decide when there are no more moves worth making (and end the turn). To accommodate this behaviour, the evaluation function doesn’t merely evaluate a single board state, but instead evaluates the difference between two board states. One board state is the current board state, and the other is the simulated result of making a move. By subtracting the current from the simulated state, the final result will be positive if the AI considers the move to be beneficial, and negative if the AI considers the move to be detrimental. The choice of when to end the turn then becomes easy: the AI ends the turn when there are no positively-rated moves left.
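The turn loop this implies can be sketched as follows. The `legal_moves`, `simulate`, and `score` callables are placeholders for whatever the game engine provides; this is an illustration of the greedy delta-rating idea, not the actual server code:

```python
from typing import Callable

def take_turn(state, legal_moves: Callable, simulate: Callable,
              score: Callable) -> list:
    """Greedy turn loop: keep playing the best positively-rated move;
    end the turn when no remaining move improves the board."""
    played = []
    while True:
        # Rate each move by the *difference* between the simulated
        # board state and the current board state.
        rated = [(score(simulate(state, m)) - score(state), m)
                 for m in legal_moves(state)]
        positive = [(delta, m) for delta, m in rated if delta > 0]
        if not positive:
            break  # no beneficial moves left: end the turn
        delta, best = max(positive, key=lambda p: p[0])
        state = simulate(state, best)
        played.append(best)
    return played
```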
As an engineering detail, the AI was built to function asynchronously. Since it was on the server, the computational resources had to be shared with potentially hundreds of concurrent users. Furthermore, the server still had to be responsive to user events while the AI was running. To avoid problems with responsiveness, the AI was built to run as a set of asynchronous tasks that could be scheduled on other threads — some tasks were triggered by user messages, others by timer events.
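The pattern is roughly the following (a sketch using Python's standard thread pool; the actual server used its own task scheduler, and `choose_move` here is a hypothetical stand-in for the move search):

```python
from concurrent.futures import ThreadPoolExecutor

# A small shared pool so AI work never monopolizes the server's threads.
ai_pool = ThreadPoolExecutor(max_workers=2)

def choose_move(board_state):
    """Stand-in for the (potentially slow) AI move search."""
    return max(board_state["moves"])

def on_player_message(board_state, send_reply):
    """Message handler returns immediately; the AI work runs as an
    asynchronous task and replies via a callback when it finishes."""
    future = ai_pool.submit(choose_move, board_state)
    future.add_done_callback(lambda f: send_reply(f.result()))
    return future  # the caller stays responsive while the AI thinks
```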
I also built a lookahead option. It simulates the result of two or more consecutive moves, so it is computationally more expensive. However, lookahead did not seem to make a significant difference in how well the AI performed.
Results and Expectations
The AI’s performance exceeded my expectations. I didn’t think it would win very many games, but the feedback I got from the rest of the team was that it was winning too many games!
Before getting that feedback, I was thinking about how I might improve the AI’s performance: something that did multi-turn lookahead, perhaps some kind of Monte Carlo method — but that’s not what was practically needed. What was needed was an option to make the AI play worse! Otherwise, it would have been able to beat beginners too easily.
The solution I landed on was, like the rest of the AI, as simple as possible. The AI had a configurable parameter between 0% and 100%. The value of this parameter is multiplied by the largest move rating to determine a minimum cutoff value. The actual move chosen is randomly selected from those moves that have at least the minimum cutoff rating. So at 100%, the highest-rated move is always chosen, and at 0% any non-negative move may be chosen.
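A sketch of that selection rule, assuming the candidate moves have already been rated (move names and ratings here are invented for illustration):

```python
import random

def pick_move(rated_moves: dict, difficulty: float):
    """rated_moves maps each candidate move to its non-negative rating.
    difficulty is in [0.0, 1.0]: at 1.0 only the top-rated move(s) can be
    picked; at 0.0 any non-negatively rated move can be picked."""
    cutoff = difficulty * max(rated_moves.values())
    candidates = [m for m, r in rated_moves.items() if r >= cutoff]
    return random.choice(candidates)
```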
So why did such a simple AI work so well? I’m not entirely sure, but I have two guesses. First, the structure of the game is such that a lot of information is hidden (your opponent’s hand and deck, and even the ordering of cards in your own hand), so uncertainty increases very quickly with deeper lookahead. Greater uncertainty reduces what can be known, which limits the maximum possible benefit of strategizing. What’s interesting here is that the rest of the team thought the AI was doing multi-turn lookahead (because it is often able to make move sequences that humans interpret as being planned).
The second guess about the AI’s performance is that it is more precise and less reductive in its thought process than the typical human. When making a decision, it does a complete survey of all possible moves (humans may overlook a few). It also does an accurate projection of all the effects of every move (humans have difficulty imagining the full effects of a move, particularly if some of those effects are indirect). Finally, it performs precise comparisons among moves (humans have difficulty maintaining a stable ranking among choices when presented with a large number of options).
In conclusion, I was satisfied with what I was able to accomplish with the Rival Books of Aster AI. A change in thinking about the problem produced a remarkably clean and easy-to-maintain design with low computational requirements and more than enough intelligence. Moving forward, I think there are ways to make AI programming more accessible to people without advanced degrees or genius-level intelligence. Practical AI may require nothing more than a change in thinking.