Solving Sprinkfield


Sprinkfield is a logic game in which we must water a given area or field, following a number of rules (for a brief description of the rules, see here). At first it seemed a good idea to create new levels by simply drawing them; however, this caused two problems:

  • it was hard to decide if the new level was solvable; and
  • we didn’t know what the best solution (most points covered by the shortest route) was.

To address the above problems, we started developing a solver program to automate level solvability checks.

In this post, we try to summarize what we did during the roughly 2 months we spent on this task. As always, it was not as easy as we had first thought. We will also show you some of the more interesting dead ends, because they were a very useful experience for us (and maybe they can be useful for others too).

Best solution

We wanted to check level solvability and find the best solution with a single program (at first we tried to do these separately, but it soon turned out that the small gain in runtime was not worth the work of writing two programs in parallel).

We started with the simplest method: using brute force, we tried all possible watering combinations and chose the best one from those that resulted in a solution. ("Best" means the shortest solution with the highest score in Puzzle mode, and the one with the highest score in Fullfield mode.)

This exhaustive search was implemented through a depth-first tree walk.

Simple, isn’t it?

Without claiming completeness, let's now take a more detailed look at how we cut the runtime of our code from 1 week down to 14 seconds.

Graph

The solving algorithm represents the game as a graph where nodes represent the game positions while edges represent the allowed moves (i.e. waterings) leading from one game position to the next. These edges are created by the move generator.

[Image: sprinkfield-solver-1]

This is how the solver maps the game. Walk order: from top to bottom and from left to right.

Our brute-force algorithm explores this graph from the opening position, trying every allowed move and then analyzing the resulting new positions until no allowed move remains. When there are no more moves, the code returns to a former game position (i.e. we implemented a backtracking search).
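To make the walk structure concrete, here is a minimal sketch of such a backtracking search on a toy 1-D stand-in for a level: water a strip of N tiles using sprinklers that each cover a contiguous run of 1–3 tiles. All names and rules here are illustrative; the real Sprinkfield move generator is of course more involved.

```python
# Toy stand-in for a level: tiles 0..N-1 must all be watered.
N = 6

def generate_moves(watered):
    """Allowed moves: water a contiguous, still-dry run of 1..3 tiles."""
    moves = []
    for start in range(N):
        for length in (1, 2, 3):
            run = range(start, start + length)
            if start + length <= N and all(t not in watered for t in run):
                moves.append((start, length))
    return moves

def solve(watered, path, best):
    """Depth-first walk: try every allowed move, recurse, then backtrack."""
    if len(watered) == N:                      # level fully covered
        if best is None or len(path) < len(best):
            best = list(path)                  # shortest solution so far
        return best
    for start, length in generate_moves(watered):
        tiles = set(range(start, start + length))
        watered |= tiles                       # make the move
        path.append((start, length))
        best = solve(watered, path, best)
        path.pop()
        watered -= tiles                       # undo it (backtrack)
    return best

best = solve(set(), [], None)
print(best)    # shortest solution: two 3-tile waterings
```

Even on this toy example the walk visits every combination of moves, which is exactly why the runtime explodes on real levels.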

Runtime

The above concept is very simple but, unfortunately, very slow as well.

When finding the best solution for a single level takes a whole week on a desktop PC, something must be done. We wanted to create 70+60 levels for the game, so finding the best solutions this way would have taken 130 weeks. And that does not even count the solved levels we would later decide to replace with something else.

Multithreading #1

The solution seemed easy: let's use multithreading. If we run the solver on parallel threads, it will finish the task quicker. True, but technically a parallel graph search is not that simple…

We implemented a fairly basic multithreaded method. The threads shared a common hash table in which each thread recorded the game positions it had already analyzed, and each thread checked whether the next game position to be analyzed had already been dealt with by another thread.

We were happy with this solution, but not for long – as is often the case. Unfortunately, this multithreading did not speed the runtime up linearly, only logarithmically. That is, running the search on 4 threads sped it up only about 2.5–3 times, not 4 times.
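A minimal sketch of the shared-table idea, with a toy "move generator" standing in for the real one. The names, the work queue, and the toy state transitions are illustrative, not the actual solver code:

```python
# Worker threads explore positions and consult one common "seen" table
# so no position is analyzed twice.
import threading
from queue import Queue

seen = set()                  # shared table of already-analyzed positions
seen_lock = threading.Lock()  # every access must be synchronized...
work = Queue()

def worker():
    while True:
        pos = work.get()
        if pos is None:
            break
        with seen_lock:            # ...and this lock is contended, which
            if pos in seen:        # is one reason the speed-up stays
                work.task_done()   # well below linear
                continue
            seen.add(pos)
        for nxt in ((pos * 2) % 97, (pos + 7) % 97):  # toy "moves"
            work.put(nxt)
        work.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
work.put(1)            # the opening position
work.join()            # wait until every reachable position is done
for t in threads:
    work.put(None)     # shut the workers down
for t in threads:
    t.join()
print(len(seen))       # number of distinct positions analyzed
```

Every thread funnels through the same lock, so adding threads mostly adds waiting: the same effect, at larger scale, that limited our speed-up to roughly 2.5–3× on 4 threads.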

Multithreading #2

Then came the next idea: let’s run the solver on several independent threads in parallel so each separate thread can process a separate level by itself.

This was a very, very simple idea that did not take long to implement, and indeed we achieved a speed-up (in total runtime) that was nearly linear in the number of threads. Why didn't we start with this…?

Basically, we returned to a single-threaded algorithm but added the list of levels to solve as a command line argument. When a thread had solved a level, it saved the solution in a log file. However, before starting to analyze a new level, it first checked whether that level already had a log entry; if yes, it picked another level to process. So we left the task of multithreading to the operating system: we could run the algorithm in parallel on as many threads as we wished, because each thread analyzed a different level.
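The scheme can be sketched in a few lines. Everything here is a hypothetical stand-in (the log format, the file name, and `solve_level` are invented for illustration); the point is only the claim-by-log-entry coordination:

```python
# Each solver instance picks the next level with no log entry yet,
# solves it, and appends the result to a shared log file.
import os
import tempfile

LOG = os.path.join(tempfile.mkdtemp(), "solutions.log")

def solved_levels():
    """Level names that already have a log entry."""
    if not os.path.exists(LOG):
        return set()
    with open(LOG) as f:
        return {line.split("\t")[0] for line in f if line.strip()}

def solve_level(level):
    """Toy stand-in for the expensive search."""
    return f"best-of-{level}"

def run(levels):
    for level in levels:
        if level in solved_levels():   # another instance already did it
            continue
        solution = solve_level(level)
        with open(LOG, "a") as f:
            f.write(f"{level}\t{solution}\n")

run(["level01", "level02"])
run(["level02", "level03"])            # second "instance": skips level02
print(sorted(solved_levels()))
```

Note that the check-then-append is not atomic; the scheme works because a level takes hours to solve, so in the worst case two instances waste one level's work, they never corrupt each other's results.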

Efficiency

Now we were satisfied with the multithreading part, but solving a level could still take up to a week, so we needed further tricks…

Changing the paradigm

As we said earlier, we first tried to solve the problem using depth-first search. Then we tried other methods too.

To a certain extent, all of these methods worked. However, they had two big problems:

  • they required too much memory and/or
  • the algorithm did not know when the search was finished.

The A* search algorithm seemed the best idea. To implement it, however, we needed a priority queue, which in turn ate up a lot of memory – sometimes all of it. We also tried implementing this queue with an interval heap, which made it possible to limit the queue length by dropping the weakest candidate from the end. This solved the memory problem, but sometimes we lost an important element, and then the program could not find the best solution.

Because of all this, we finally went back to depth-first search (where we knew exactly when the entire graph had been analyzed, and it did not need much memory either).

What’s wrong with depth-first search?

There is more than one problem with it. On the one hand, our graph is huge. On a typical level of 10×10 fields, there are about 100 possible starting moves. After each move, the number of valid next moves decreases by about 10, on average. So for a 10-move solution we have to analyze 100×90×80×…×10 positions, that is, 36,288,000,000,000,000 game situations.
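The arithmetic behind that estimate, checked directly:

```python
# ~100 first moves, ~10 fewer options after each of the 10 moves:
# 100 * 90 * 80 * ... * 10 positions in total.
from math import prod

positions = prod(range(10, 101, 10))
print(positions)   # 36288000000000000
```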

On the other hand, an even bigger problem (partly due to this large number) is that the algorithm cannot remember which graph nodes it has already walked through, so it re-walks them whenever an already-analyzed position is reached on a different path.

That is, the algorithm walks the graph as if it were a tree.

Making it graph again

To solve this problem, each game state is indexed by a (practically) unique Zobrist key, which is the bitwise XOR combination of randomly generated 64-bit numbers assigned to every field type. For now, it is enough to say that the key can be computed incrementally (it is enough to recalculate it for the changed part of the level, we don’t need to deal with the entire level each time), that it always gives the same value for the same game position, and that the chance of a key collision between different game positions is negligible (and can be handled easily anyway).

So, using Zobrist keys, we can define a hash table of any size (the bigger the better – up to a limit), indexed by the lower bits of the key. If the position we are about to analyze is already in the table, we can skip it and move on to the next one.

Note that even with this hash table we cannot recognize all repetitions; still, each eliminated repetition speeds up the search by orders of magnitude (if we find a repetition near the “top” of the graph, we can skip a huge subgraph).
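A minimal sketch of Zobrist hashing as described above, assuming one random 64-bit number per (tile, field type) pair, which is the standard formulation; the field types listed are illustrative, not the game's actual tile set:

```python
# One random 64-bit number per (tile, field-type) pair; a position's key
# is the XOR of the numbers for its current contents, so changing one
# tile updates the key with a single XOR-out and XOR-in.
import random

random.seed(42)                               # reproducible table
TILES = 100                                   # a 10x10 level
TYPES = ["dry", "watered", "sprinkler", "scarecrow"]
ZOBRIST = {(t, ft): random.getrandbits(64)
           for t in range(TILES) for ft in TYPES}

def full_key(board):
    """Key computed from scratch (needed once, for the start position)."""
    key = 0
    for tile, ftype in enumerate(board):
        key ^= ZOBRIST[(tile, ftype)]
    return key

def update_key(key, tile, old_type, new_type):
    """Incremental update: XOR the old contents out, the new ones in."""
    return key ^ ZOBRIST[(tile, old_type)] ^ ZOBRIST[(tile, new_type)]

board = ["dry"] * TILES
key = full_key(board)
key = update_key(key, 17, "dry", "watered")   # water tile 17...
board[17] = "watered"
assert key == full_key(board)                 # incremental == from-scratch
key = update_key(key, 17, "watered", "dry")   # ...and undoing a move
board[17] = "dry"                             # restores the key exactly
assert key == full_key(board)
```

The hash-table index is then just the lower bits of the key, e.g. `key & (table_size - 1)` for a power-of-two table size.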

[Image: sprinkfield-solver-2]

An example of different paths leading to the same game position. If we have already walked the upper path, we do not need to further analyze the lower path from the common game position (node) onwards.

Chopping the trees (branch cutting)

The use of Zobrist keys essentially made it possible to cut certain branches off our tree, that is, to omit them from the search – without compromising the accuracy of the final result.

This idea can be honed further: there are certain cases (game positions) that can be omitted from further search because they do not influence the final result.

Such cases are:

  • calculating an upper bound on the achievable score (from a given game position): if this estimate is not higher than the score of the best solution so far, there is no point in further analyzing the given move combination;
  • if the number of still-uncovered tiles is larger than the number of tiles coverable with the available sprinklers;
  • if we arrive at a graph level that is deeper (i.e. we have already made more moves, walked a longer path) than the best solution so far (in Puzzle mode we seek the shortest solution);
  • if there are more isolated (non-waterable) tiles than the number of deployable scarecrows (because we have to place a scarecrow on every unwatered field);
  • also, we made the move generator smarter: while searching for the solution, it respects not only the game rules but the available resources as well (e.g. if there are only as many unwatered tiles left as scarecrows to deploy, the code does not generate any further moves involving those tiles).

Basically, every cut we recognize (and execute) shrinks the graph to walk exponentially. Therefore it pays to look for new cuts even if finding them is relatively time-consuming, because the effort pays off many times over.
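The first cut in the list is plain branch-and-bound, and can be sketched on a toy problem (pick tiles to maximize a score). The deliberately crude `upper_bound` (sum of all remaining tile values) is an invented stand-in; the post's real estimate is more refined:

```python
# Before descending into a subtree, compare an optimistic upper bound on
# the score still achievable with the best score found so far, and skip
# the subtree if it cannot win.

def upper_bound(current_score, remaining_values):
    return current_score + sum(remaining_values)  # optimistic: take all

def search(current_score, remaining_values, best, stats):
    stats["visited"] += 1
    if upper_bound(current_score, remaining_values) <= best:
        stats["pruned"] += 1          # this whole subtree can be skipped
        return best
    if not remaining_values:
        return max(best, current_score)
    head, *tail = remaining_values
    best = search(current_score + head, tail, best, stats)  # take the tile
    best = search(current_score, tail, best, stats)         # or skip it
    return best

stats = {"visited": 0, "pruned": 0}
best = search(0, [5, 3, 8, 1], float("-inf"), stats)
print(best, stats)   # pruning never changes the result (best = 17),
                     # only how much of the tree we walk
```

This also shows why the quality of the estimate matters so much: the closer `upper_bound` is to the true maximum, the earlier the `<=` test fires and the larger the skipped subtrees.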

“Careful with that axe”

The above list could be much longer, but many cuts we came up with did not pay off in the end. There were two main reasons for this:

  • the method was wrong: we cut more than we should have, and thus lost the best solution too;
  • the cut was too costly: detecting it cost more (in time) than executing it gained.

Also, one cut was skipped because we managed to incorporate it in another one.

Without completeness, here are some of the omitted cuts:

  • Larger moves first: While walking through the graph, at a deeper node we never tried a move which was larger than those made at a higher node (closer to the starting point). It was a mistake.
  • Forward probability cut-off: We tried to fine-tune the algorithm using various magic numbers, that is, after ordering the possible moves we tried only the first 50-60-…-95%. We knew we could lose solutions this way, but we hoped it wouldn't hurt. Unfortunately, the move ordering was not good enough to be used for every position (either we cut too much or there was no point in it at all).
  • The upper estimate of the achievable score was reworked several times. The difficulty was that we had to give an upper bound without walking the entire graph – and the less it overestimates the actual score, the more useful it is.
  • In the beginning, we also monitored whether there were too many muddy or too few ground tiles. Later this task was delegated to the move generator, so we did not have to check it during the search.

A very important lesson was that if a single cut sped up the algorithm by an order of magnitude, it was likely to be a wrong cut.

Move ordering

A key cut in the above list is the one based on the estimated maximum score.

If we can somehow arrange the walk so that the best solution is found early in the search, then we can skip a large part of the work.

In our experience, a human can find a fairly good (or even the best) solution quite fast on every level. Analyzing how we ourselves think, we found the following:

  • We try big waterings earlier than the small ones;
  • We make forced moves (e.g. corner tiles) without thinking;
  • First we focus on fully covering the level and only then we go for a high score (in Fullfield mode).

We incorporated the above observations into the solver to analyze the moves possible at a certain game position in the following order:

  1. important moves are first: at least one of these has to be executed (e.g. involving still unwatered or dry tiles);
  2. moves involving more unwatered tiles have priority over the others;
  3. larger moves are tried first.

Applying this order, the solver finds the best solution on most levels within a few seconds. The rest of its time is spent proving (indirectly) that no better solution exists.
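The three rules above translate naturally into a composite sort key. The `Move` record and its fields are hypothetical, invented just to carry the information each rule needs:

```python
# Move ordering as a sort key: rule 1 dominates, then rule 2, then
# rule 3 (Python's sort compares the key tuples left to right).
from dataclasses import dataclass

@dataclass
class Move:
    name: str
    forced: bool          # rule 1: at least one such move must be played
    unwatered_tiles: int  # rule 2: how many still-unwatered tiles it hits
    size: int             # rule 3: how large the watering is

def order_moves(moves):
    return sorted(moves, key=lambda m: (not m.forced,        # forced first
                                        -m.unwatered_tiles,  # more is better
                                        -m.size))            # larger first

moves = [Move("small-edge", False, 1, 1),
         Move("big-center", False, 4, 5),
         Move("corner", True, 2, 2)]
print([m.name for m in order_moves(moves)])
# ['corner', 'big-center', 'small-edge']
```

Good ordering and the score-bound cut reinforce each other: the earlier a near-best solution is found, the tighter the bound, and the more subtrees the cut can skip.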

[Image: sprinkfield-solver-3]

Subgraphs (blue triangles) of the graph (white triangle) omitted from the walk.
Left: possible cuts using unordered walk.
Right: possible cuts using ordered walk.

Symmetry

The algorithm can be improved further if, in the case of positions symmetric to the 4 main symmetry axes, we analyze only one of each pair. One might ask whether this is of any use at all, since very few levels are symmetric.

That is true – but we do encounter many symmetric positions (not in the strict sense of the word, but in terms of valid moves) during the walk.

[Image: img5]

A symmetric game position: in terms of valid moves, the situation is symmetric about the minor diagonal, and thus it is enough to try only the moves involving one symmetric half.

If we analyze only one of these symmetric positions, we can lose solutions but not score.
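One standard way to exploit this (our guess at an implementation; the post does not show one) is to map each position to a canonical form, so all of its mirror/rotation variants share a single hash-table entry:

```python
# Canonical form of a square board: the lexicographically smallest of
# its 8 variants (4 rotations x optional mirror, which covers the 4
# symmetry axes mentioned above).

def rotate(board):
    """Rotate a square board (tuple of row-tuples) 90 degrees."""
    return tuple(zip(*board[::-1]))

def canonical(board):
    variants = []
    for b in (board, tuple(row[::-1] for row in board)):  # board + mirror
        for _ in range(4):
            variants.append(b)
            b = rotate(b)
    return min(variants)

a = ((1, 0), (0, 0))                  # the same position...
b = ((0, 0), (0, 1))                  # ...rotated by 180 degrees
print(canonical(a) == canonical(b))   # True: analyze only one of them
```

Hashing `canonical(board)` instead of `board` makes symmetric positions collide on purpose, at the cost of computing up to 8 variants per lookup, so it only pays off when symmetric positions are frequent enough.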

Optimization

In this post we have summarized the tools and methods we use in the solver.

However, we can gain further valuable hours of runtime by making our program code more efficient.

Programming language

We tried developing solvers in several programming languages:

  • JavaScript;
  • Java;
  • C;
  • Pascal.

In our next blog entry, we will tell you which of the above languages we opted to use for our final code. We will also discuss the language specific optimizations.

