I have recently stumbled upon the game 2048. You merge similar tiles by moving them in any of the four directions to make “bigger” tiles. After each move, a new tile appears at a random empty position with a value of either 2 or 4. The game terminates when all the boxes are filled and there are no moves that can merge tiles, or you create a tile with a value of 2048.
Now, I need to follow a well-defined strategy to reach the goal. So, I thought of writing a program for it.
My current algorithm:
while (!game_over) {
    for each possible move:
        count_no_of_merges_for_2-tiles and 4-tiles
    choose the move with a large number of merges
}
What I am doing is, at any point, I try to merge the tiles with values 2 and 4; that is, I try to keep as few 2 and 4 tiles as possible. When I tried it this way, all the other tiles were automatically getting merged, and the strategy seemed good.
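A minimal sketch of that merge-counting step (an illustrative reconstruction, not my actual program; the board layout and function names are invented):

```cpp
#include <array>

using Board = std::array<std::array<int, 4>, 4>;  // tile values, 0 = empty

// Count merges of 2- and 4-tiles produced by sliding one row to the left.
int merges_in_row(const std::array<int, 4>& row) {
    int merges = 0, last = 0;
    for (int v : row) {
        if (v == 0) continue;
        if (v == last && (v == 2 || v == 4)) { ++merges; last = 0; }  // each pair merges once
        else last = v;
    }
    return merges;
}

// Count small-tile merges for a "left" move; the other three directions
// follow by reversing each row or transposing the board first.
int small_merges_left(const Board& b) {
    int total = 0;
    for (const auto& row : b) total += merges_in_row(row);
    return total;
}
```

For the board {{2,2,4,4},{0,2,0,2},{8,8,0,0},{0,0,0,0}}, small_merges_left returns 3: two small merges in the first row, one in the second, and none for the 8s, which are deliberately ignored.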
But, when I actually use this algorithm, I only get around 4000 points before the game terminates. The maximum score AFAIK is slightly more than 20,000 points, which is way larger than my current score. Is there a better algorithm than the above?
I developed a 2048 AI using expectimax optimization, instead of the minimax search used by @ovolve’s algorithm. The AI simply performs maximization over all possible moves, followed by expectation over all possible tile spawns (weighted by the probability of the tiles, i.e. 10% for a 4 and 90% for a 2). As far as I’m aware, it is not possible to prune expectimax optimization (except to remove branches that are exceedingly unlikely), and so the algorithm used is a carefully optimized brute force search.
Performance
The AI in its default configuration (max search depth of 8) takes anywhere from 10ms to 200ms to execute a move, depending on the complexity of the board position. In testing, the AI achieves an average move rate of 5-10 moves per second over the course of an entire game. If the search depth is limited to 6 moves, the AI can easily execute 20+ moves per second, which makes for some interesting watching.
To assess the score performance of the AI, I ran the AI 100 times (connected to the browser game via remote control). For each tile, here are the proportions of games in which that tile was achieved at least once:
2048: 100%
4096: 100%
8192: 100%
16384: 94%
32768: 36%
The minimum score over all runs was 124024; the maximum score achieved was 794076; the median score was 387222. The AI never failed to obtain the 2048 tile (so it never lost the game even once in 100 games); in fact, it achieved the 8192 tile at least once in every run!
Here’s the screenshot of the best run:
This game took 27830 moves over 96 minutes, or an average of 4.8 moves per second.
Implementation
My approach encodes the entire board (16 entries) as a single 64-bit integer (where tiles are the nybbles, i.e. 4-bit chunks). On a 64-bit machine, this enables the entire board to be passed around in a single machine register.
Bit shift operations are used to extract individual rows and columns. A single row or column is a 16-bit quantity, so a table of size 65536 can encode transformations which operate on a single row or column. For example, moves are implemented as 4 lookups into a precomputed “move effect table” which describes how each move affects a single row or column (for example, the “move right” table contains the entry “1122 -> 0023” describing how the row [2,2,4,4] becomes the row [0,0,4,8] when moved to the right).
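A minimal sketch of this representation (an illustration consistent with the description above, not the actual 2048-ai source; putting the first cell in the low nibble is an assumed ordering):

```cpp
#include <cstdint>

// Nibble-packed board: each of the 16 cells stores the tile's exponent
// (tile value = 2^n, 0 = empty), so the whole board fits in one uint64_t.
using board_t = uint64_t;
using row_t   = uint16_t;

// Read the 4-bit exponent at cell (r, c).
int get_cell(board_t b, int r, int c) {
    return (b >> (4 * (4 * r + c))) & 0xF;
}

// Write exponent v at cell (r, c), returning the new board.
board_t set_cell(board_t b, int r, int c, int v) {
    int shift = 4 * (4 * r + c);
    return (b & ~(board_t(0xF) << shift)) | (board_t(v) << shift);
}

// Extract row r as a 16-bit value, usable directly as an index
// into a 65536-entry move or score table.
row_t get_row(board_t b, int r) {
    return (b >> (16 * r)) & 0xFFFF;
}
```

With rows available as 16-bit indices, a move becomes four table loads plus shifts to reassemble the board; packing [2,2,4,4] into row 0 as exponents gives get_row(b, 0) == 0x2211 under this nibble order.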
Scoring is also done using table lookup. The tables contain heuristic scores computed on all possible rows/columns, and the resultant score for a board is simply the sum of the table values across each row and column.
This board representation, along with the table lookup approach for movement and scoring, allows the AI to search a huge number of game states in a short period of time (over 10,000,000 game states per second on one core of my mid-2011 laptop).
The expectimax search itself is coded as a recursive search which alternates between “expectation” steps (testing all possible tile spawn locations and values, and weighting their optimized scores by the probability of each possibility), and “maximization” steps (testing all possible moves and selecting the one with the best score). The tree search terminates when it sees a previously-seen position (using a transposition table), when it reaches a predefined depth limit, or when it reaches a board state that is highly unlikely (e.g. it was reached by getting 6 “4” tiles in a row from the starting position). The typical search depth is 4-8 moves.
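The alternation between maximization and expectation steps can be sketched as follows (a simplified illustration on a plain 4×4 array with a toy empty-cell heuristic, without the transposition table, probability cutoff, or bitboard tricks described above):

```cpp
#include <array>
#include <algorithm>

using Board = std::array<std::array<int, 4>, 4>;  // tile values, 0 = empty

// Slide one row to the left, merging each equal pair at most once.
std::array<int, 4> slide_left(const std::array<int, 4>& row) {
    std::array<int, 4> out{};
    int j = 0;
    bool merged = false;
    for (int v : row) {
        if (!v) continue;
        if (j > 0 && out[j - 1] == v && !merged) { out[j - 1] = 2 * v; merged = true; }
        else { out[j++] = v; merged = false; }
    }
    return out;
}

// Apply a move (0=left, 1=right, 2=up, 3=down) by reducing everything to
// "slide each row left" via row reversal and board transposition.
Board apply_move(Board b, int dir) {
    auto transpose = [](Board& x) {
        for (int r = 0; r < 4; ++r)
            for (int c = r + 1; c < 4; ++c) std::swap(x[r][c], x[c][r]);
    };
    auto reverse_rows = [](Board& x) {
        for (auto& row : x) std::reverse(row.begin(), row.end());
    };
    if (dir >= 2) transpose(b);
    if (dir % 2 == 1) reverse_rows(b);
    for (auto& row : b) row = slide_left(row);
    if (dir % 2 == 1) reverse_rows(b);
    if (dir >= 2) transpose(b);
    return b;
}

// Toy heuristic standing in for the real scoring tables: count empty cells.
double heuristic(const Board& b) {
    int empties = 0;
    for (const auto& row : b)
        for (int v : row)
            if (!v) ++empties;
    return empties;
}

// Alternate maximization (our moves) with expectation over tile spawns,
// weighted 90% for a 2 and 10% for a 4.
double expectimax(const Board& b, int depth, bool player_turn) {
    if (depth == 0) return heuristic(b);
    if (player_turn) {
        double best = -1;
        for (int d = 0; d < 4; ++d) {
            Board nb = apply_move(b, d);
            if (nb != b) best = std::max(best, expectimax(nb, depth - 1, false));
        }
        return best < 0 ? heuristic(b) : best;  // no legal move: score as-is
    }
    double total = 0;
    int empties = 0;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            if (b[r][c]) continue;
            ++empties;
            Board b2 = b, b4 = b;
            b2[r][c] = 2;
            b4[r][c] = 4;
            total += 0.9 * expectimax(b2, depth, true) + 0.1 * expectimax(b4, depth, true);
        }
    return empties ? total / empties : heuristic(b);
}
```

Picking the actual move is then one final maximization: apply each legal move and keep the one whose expectimax value is highest.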
Heuristics
Several heuristics are used to direct the optimization algorithm towards favorable positions. The precise choice of heuristic has a huge effect on the performance of the algorithm. The various heuristics are weighted and combined into a positional score, which determines how “good” a given board position is. The optimization search will then aim to maximize the average score of all possible board positions. The actual score, as shown by the game, is not used to calculate the board score, since it is too heavily weighted in favor of merging tiles (when delayed merging could produce a large benefit).
Initially, I used two very simple heuristics, granting “bonuses” for open squares and for having large values on the edge. These heuristics performed pretty well, frequently achieving 16384 but never getting to 32768.
Petr Morávek (@xificurk) took my AI and added two new heuristics. The first heuristic was a penalty for having non-monotonic rows and columns which increased as the ranks increased, ensuring that non-monotonic rows of small numbers would not strongly affect the score, but non-monotonic rows of large numbers hurt the score substantially. The second heuristic counted the number of potential merges (adjacent equal values) in addition to open spaces. These two heuristics served to push the algorithm towards monotonic boards (which are easier to merge), and towards board positions with lots of merges (encouraging it to align merges where possible for greater effect).
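Sketched on a single row, these heuristics might look like the following (an illustration only: the weights are invented placeholders rather than the tuned values, and the merge count here ignores gaps):

```cpp
#include <array>
#include <algorithm>
#include <cmath>

// Per-row heuristic in the spirit described above:
//  - bonus for empty cells and for adjacent equal tiles (potential merges)
//  - penalty for non-monotonicity that grows with tile rank
double row_heuristic(const std::array<int, 4>& row) {
    const double EMPTY_W = 2.7, MERGE_W = 1.4, MONO_W = 0.5;  // placeholder weights

    double empties = 0, merges = 0, mono_left = 0, mono_right = 0;
    for (int i = 0; i < 4; ++i) {
        if (row[i] == 0) { ++empties; continue; }
        if (i > 0 && row[i] == row[i - 1]) ++merges;
    }

    // Rank = log2(tile), so big tiles out of order cost much more than small ones.
    auto rank = [](int v) { return v ? std::log2((double)v) : 0.0; };
    for (int i = 1; i < 4; ++i) {
        double a = rank(row[i - 1]), b = rank(row[i]);
        if (a > b) mono_right += a - b;  // violates increasing left-to-right
        else       mono_left  += b - a;  // violates increasing right-to-left
    }

    // Penalize only the milder direction, so a row sorted either way is fine.
    return EMPTY_W * empties + MERGE_W * merges
         - MONO_W * std::min(mono_left, mono_right);
}
```

In the full scheme these values would be precomputed for all 65536 possible rows and summed over rows and columns by table lookup.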
Furthermore, Petr also optimized the heuristic weights using a “meta-optimization” strategy (using an algorithm called CMA-ES), where the weights themselves were adjusted to obtain the highest possible average score.
The effect of these changes is extremely significant. The algorithm went from achieving the 16384 tile around 13% of the time to achieving it over 90% of the time, and the algorithm began to achieve 32768 over 1/3 of the time (whereas the old heuristics never once produced a 32768 tile).
I believe there’s still room for improvement on the heuristics. This algorithm definitely isn’t yet “optimal”, but I feel like it’s getting pretty close.
That the AI achieves the 32768 tile in over a third of its games is a huge milestone; I'd be surprised to hear of any human players who have achieved 32768 on the official game (i.e. without using tools like savestates or undo). I think the 65536 tile is within reach!
You can try the AI for yourself. The code is available at https://github.com/nneonneo/2048-ai.
@RobL: 2’s appear 90% of the time; 4’s appear 10% of the time. It’s in the source code:
var value = Math.random() < 0.9 ? 2 : 4;
– nneonneo, Apr 4, 2014 at 5:22
Currently porting to Cuda so the GPU does the work for even better speeds!
– nimsson, Apr 11, 2014 at 21:54
@nneonneo I ported your code with emscripten to javascript, and it works quite well in the browser now! Cool to watch, without the need to compile and everything… In Firefox, performance is quite good…
Aug 23, 2014 at 17:11
The theoretical limit on a 4×4 grid is actually 131072, not 65536. However, that requires getting a 4 at the right moment (i.e. the entire board filled with 4 .. 65536, each once; 15 fields occupied), and the board has to be set up at that moment so that you can actually combine.
Jul 27, 2015 at 15:16
@nneonneo You might want to check our AI, which seems even better, getting to 32k in 60% of games: github.com/aszczepanski/2048
– cauchy, Dec 23, 2015 at 17:21
I’m the author of the AI program that others have mentioned in this thread. You can view the AI in action or read the source.
Currently, the program achieves about a 90% win rate running in javascript in the browser on my laptop given about 100 milliseconds of thinking time per move, so while not perfect (yet!) it performs pretty well.
Since the game is a discrete state space, perfect information, turn-based game like chess and checkers, I used the same methods that have been proven to work on those games, namely minimax search with alpha-beta pruning. Since there is already a lot of info on that algorithm out there, I’ll just talk about the two main heuristics that I use in the static evaluation function and which formalize many of the intuitions that other people have expressed here.
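The shape of minimax with alpha-beta pruning can be illustrated on an explicit game tree (a generic sketch, not my actual code; in the real search the “min” player is the tile placer and the leaves are scored by the static evaluation function):

```cpp
#include <vector>
#include <algorithm>
#include <limits>

// Explicit game tree: leaves carry static evaluation scores,
// internal nodes alternate between max and min players.
struct Node {
    double value = 0;             // used only at leaves
    std::vector<Node> children;   // empty => leaf
};

double alphabeta(const Node& n, double alpha, double beta, bool maximizing) {
    if (n.children.empty()) return n.value;
    if (maximizing) {
        double best = -std::numeric_limits<double>::infinity();
        for (const Node& c : n.children) {
            best = std::max(best, alphabeta(c, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (alpha >= beta) break;  // beta cutoff: min would never allow this
        }
        return best;
    }
    double best = std::numeric_limits<double>::infinity();
    for (const Node& c : n.children) {
        best = std::min(best, alphabeta(c, alpha, beta, true));
        beta = std::min(beta, best);
        if (alpha >= beta) break;      // alpha cutoff: max has a better line already
    }
    return best;
}
```

For a max root whose two min children offer leaves {3, 5} and {2, 9}, the search returns 3, and the second branch is cut off as soon as the 2 is seen.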
Monotonicity
This heuristic tries to ensure that the values of the tiles are all either increasing or decreasing along both the left/right and up/down directions. This heuristic alone captures the intuition that many others have mentioned, that higher valued tiles should be clustered in a corner. It will typically prevent smaller valued tiles from getting orphaned and will keep the board very organized, with smaller tiles cascading in and filling up into the larger tiles.
Here’s a screenshot of a perfectly monotonic grid. I obtained this by running the algorithm with the eval function set to disregard the other heuristics and only consider monotonicity.
Smoothness
The above heuristic alone tends to create structures in which adjacent tiles are decreasing in value, but of course in order to merge, adjacent tiles need to be the same value. Therefore, the smoothness heuristic just measures the value difference between neighboring tiles, trying to minimize this count.
A commenter on Hacker News gave an interesting formalization of this idea in terms of graph theory.
Here’s a screenshot of a perfectly smooth grid.
Free Tiles
And finally, there is a penalty for having too few free tiles, since options can quickly run out when the game board gets too cramped.
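Putting the three heuristics together, a static evaluation function might look like this (an illustrative sketch only; the weights are invented and my actual implementation differs in detail):

```cpp
#include <array>
#include <algorithm>
#include <cmath>

using Board = std::array<std::array<int, 4>, 4>;  // tile values, 0 = empty

// Combine smoothness, monotonicity, and free-tile heuristics into one score.
double evaluate(const Board& b) {
    const double SMOOTH_W = 0.1, MONO_W = 1.0, FREE_W = 2.7;  // placeholder weights
    auto rank = [](int v) { return v ? std::log2((double)v) : 0.0; };

    // Smoothness: penalize rank differences between occupied neighbors.
    double smooth = 0;
    int free_tiles = 0;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            if (!b[r][c]) { ++free_tiles; continue; }
            if (c + 1 < 4 && b[r][c + 1])
                smooth -= std::abs(rank(b[r][c]) - rank(b[r][c + 1]));
            if (r + 1 < 4 && b[r + 1][c])
                smooth -= std::abs(rank(b[r][c]) - rank(b[r + 1][c]));
        }

    // Monotonicity: per row/column, charge only the milder of the two
    // directions, so a line sorted either way goes unpenalized.
    double mono = 0;
    for (int r = 0; r < 4; ++r) {
        double up = 0, down = 0;
        for (int c = 1; c < 4; ++c) {
            double d = rank(b[r][c]) - rank(b[r][c - 1]);
            (d > 0 ? up : down) += std::abs(d);
        }
        mono -= std::min(up, down);
    }
    for (int c = 0; c < 4; ++c) {
        double up = 0, down = 0;
        for (int r = 1; r < 4; ++r) {
            double d = rank(b[r][c]) - rank(b[r - 1][c]);
            (d > 0 ? up : down) += std::abs(d);
        }
        mono -= std::min(up, down);
    }

    return SMOOTH_W * smooth + MONO_W * mono + FREE_W * free_tiles;
}
```

A monotonic, smooth arrangement of the same tiles scores strictly higher than a scrambled one, which is exactly what steers the search toward corner-stacked boards.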
And that’s it! Searching through the game space while optimizing these criteria yields remarkably good performance. One advantage to using a generalized approach like this rather than an explicitly coded move strategy is that the algorithm can often find interesting and unexpected solutions. If you watch it run, it will often make surprising but effective moves, like suddenly switching which wall or corner it’s building up against.
Edit:
Here’s a demonstration of the power of this approach. I uncapped the tile values (so it kept going after reaching 2048) and here is the best result after eight trials.
Yes, that’s a 4096 alongside a 2048. =) That means it achieved the elusive 2048 tile three times on the same board.
You can treat the computer placing the ‘2’ and ‘4’ tiles as the ‘opponent’.
– Wei Yen, Mar 15, 2014 at 2:53
@WeiYen Sure, but regarding it as a min-max problem is not faithful to the game logic, because the computer is placing tiles randomly with certain probabilities, rather than intentionally minimising the score.
– koo, Mar 15, 2014 at 14:55
Even though the AI is randomly placing the tiles, the goal is not to lose. Getting unlucky is the same thing as the opponent choosing the worst move for you. The “min” part means that you try to play conservatively so that there are no awful moves where you could get unlucky.
– FryGuy, Mar 16, 2014 at 4:17
I had an idea to create a fork of 2048, where the computer instead of placing the 2s and 4s randomly uses your AI to determine where to put the values. The result: sheer impossibleness. Can be tried out here: sztupy.github.io/2048Hard
– SztupY, Mar 17, 2014 at 1:03
@SztupY Wow, this is evil. Reminds me of Hatetris (qntm.org/hatetris), which also tries to place the piece that will improve your situation the least.
– Patashu, Mar 17, 2014 at 2:27
I became interested in the idea of an AI for this game containing no hard-coded intelligence (i.e. no heuristics, scoring functions, etc.). The AI should “know” only the game rules, and “figure out” the game play. This is in contrast to most AIs (like the ones in this thread) where the game play is essentially brute force steered by a scoring function representing human understanding of the game.
AI Algorithm
I found a simple yet surprisingly good playing algorithm: To determine the next move for a given board, the AI plays the game in memory using random moves until the game is over. This is done several times while keeping track of the end game score. Then the average end score per starting move is calculated. The starting move with the highest average end score is chosen as the next move.
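The loop above can be sketched generically (the `legal_moves`/`play`/`over`/`score` interface and the toy game are hypothetical stand-ins, not my actual code):

```cpp
#include <random>
#include <vector>

// Pure Monte Carlo policy: for each legal first move, play many purely
// random games to completion and average the final scores; pick the first
// move with the highest average. `G` is any copyable game type.
template <typename G, typename M>
M monte_carlo_move(const G& state, int runs, std::mt19937& rng) {
    M best{};
    double best_avg = -1e300;
    for (M first : state.legal_moves()) {
        double total = 0;
        for (int i = 0; i < runs; ++i) {
            G g = state;
            g.play(first);
            while (!g.over()) {  // random playout to the end of the game
                auto legal = g.legal_moves();
                std::uniform_int_distribution<size_t> pick(0, legal.size() - 1);
                g.play(legal[pick(rng)]);
            }
            total += g.score();
        }
        double avg = total / runs;  // average end-game score for this first move
        if (avg > best_avg) { best_avg = avg; best = first; }
    }
    return best;
}

// Tiny stand-in game to exercise the policy: add the chosen number to a
// running sum for 5 turns; the final sum is the score.
struct ToyGame {
    int sum = 0, turns = 0;
    std::vector<int> legal_moves() const { return {1, 2}; }
    void play(int m) { sum += m; ++turns; }
    bool over() const { return turns >= 5; }
    double score() const { return sum; }
};
```

In the toy game, opening with 2 scores one point higher on average than opening with 1, so with a few hundred rollouts the policy reliably selects 2 even though every playout after the first move is random.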
With just 100 runs (i.e. in-memory games) per move, the AI achieves the 2048 tile 80% of the time and the 4096 tile 50% of the time. Using 10000 runs gets the 2048 tile 100% of the time, 70% for the 4096 tile, and about 1% for the 8192 tile.
The best achieved score is shown here:
An interesting fact about this algorithm is that while the random-play games are unsurprisingly quite bad, choosing the best (or least bad) move leads to very good game play: A typical AI game can reach 70000 points and last 3000 moves, yet the in-memory random play games from any given position yield an average of 340 additional points in about 40 extra moves before dying. (You can see this for yourself by running the AI and opening the debug console.)
This graph illustrates this point: The blue line shows the board score after each move. The red line shows the algorithm’s best random-run end game score from that position. In essence, the red values are “pulling” the blue values upwards towards them, as they are the algorithm’s best guess. It’s interesting to see the red line is just a tiny bit above the blue line at each point, yet the blue line continues to increase more and more.
I find it quite surprising that the algorithm doesn’t need to actually foresee good game play in order to choose the moves that produce it.
Searching later, I found that this algorithm might be classified as a Pure Monte Carlo Tree Search algorithm.
Implementation and Links
First I created a JavaScript version which can be seen in action here. This version can run hundreds of runs in decent time. Open the console for extra info.
(source)
Later, in order to play around some more, I used @nneonneo’s highly optimized infrastructure and implemented my version in C++. This version allows for up to 100000 runs per move and even 1000000 if you have the patience. Building instructions are provided. It runs in the console and also has a remote control to play the web version.
(source)
Results
Surprisingly, increasing the number of runs does not drastically improve the game play. There seems to be a limit to this strategy at around 80000 points with the 4096 tile and all the smaller ones, very close to achieving the 8192 tile. Increasing the number of runs from 100 to 100000 increases the odds of getting to this score limit (from 5% to 40%) but not breaking through it.
Running 10000 runs with a temporary increase to 1000000 near critical positions managed to break this barrier less than 1% of the time, achieving a max score of 129892 and the 8192 tile.
Improvements
After implementing this algorithm I tried many improvements, including using the min or max scores, or a combination of min, max, and avg. I also tried using depth: instead of trying K runs per move, I tried K runs per move list of a given length (“up, up, left” for example), selecting the first move of the best-scoring move list.
Later I implemented a scoring tree that took into account the conditional probability of being able to play a move after a given move list.
However, none of these ideas showed any real advantage over the simple first idea. I left the code for these ideas commented out in the C++ code.
I did add a “Deep Search” mechanism that increased the run number temporarily to 1000000 when any of the runs managed to accidentally reach the next highest tile. This offered a time improvement.
I’d be interested to hear if anyone has other improvement ideas that maintain the domain-independence of the AI.
2048 Variants and Clones
Just for fun, I’ve also implemented the AI as a bookmarklet, hooking into the game’s controls. This allows the AI to work with the original game and many of its variants.
This is possible due to the domain-independent nature of the AI. Some of the variants are quite distinct, such as the Hexagonal clone.
+1. As an AI student I found this really interesting. Will take a closer look at this in my free time.
– Isaac, May 25, 2014 at 22:18
This is amazing! I just spent hours optimizing weights for a good heuristic function for expectimax; then I implemented this in 3 minutes, and it completely smashes it.
May 29, 2014 at 17:09
Watching this play is enlightening. It blows away all the heuristics, and yet it works. Congratulations!
Jul 23, 2014 at 20:03
This might help! ov3y.github.io/2048AI
Mar 12, 2014 at 6:12
@nitish712 By the way, your algorithm is greedy, since you have “choose the move with a large number of merges”, which quickly leads to local optima. – Mar 12, 2014 at 12:45
@500InternalServerError: If I were to implement an AI with alpha-beta game tree pruning, it would be assuming that the new blocks are adversarially placed. It’s a worst-case assumption, but might be useful.
Mar 14, 2014 at 20:52
A fun distraction when you don’t have time to aim for a high score: Try to get the lowest score possible. In theory it’s alternating 2s and 4s.
Mar 19, 2014 at 0:31
Discussion on this question’s legitimacy can be found on meta: meta.stackexchange.com/questions/227266/…
Mar 30, 2014 at 20:37
