
Scientific Sermon: Computing Winning Sets From Winning Strategies

[latexpage]
In the last post of the series, we saw how to obtain winning strategies in polytime if all you can do is compute winning sets (without strategies) in polynomial time. In this post, we'll try to solve a related problem: given a pair of strategies for both players that are guaranteed to be winning on the winning sets of the game, can we extract these winning sets in polynomial time? The answer is, as we will see, yes.

In order to simplify the problem, let $G$ be a parity game and $\sigma$ a player 0 strategy; everything can be defined accordingly for player 1.
Define the player 0 $\sigma$-winning set as follows:
\[
W_0(G,\sigma) = \{v \in V \mid \text{all $\sigma$-conforming plays starting in $v$ are won by pl~0}\}
\]

It is easy to see that $W_0(G,\sigma) \subseteq W_0(G)$, and $W_0(G,\sigma) = W_0(G)$ iff $\sigma$ is a winning strategy on $W_0(G)$.

Hence, in order to solve our problem, it suffices to compute $W_0(G,\sigma)$.

Now we simplify our problem even more by considering the so-called strategy subgame. Given a game $G=(V,V_0,V_1,E,\Omega)$ and a player 0 strategy $\sigma$, the strategy subgame $G|_\sigma = (V,V_0,V_1,E',\Omega)$ restricts the set of edges as follows:
\[
E’ = \{(v,w) \in E \mid v \in V_0 \Rightarrow \sigma(v) = w\}
\]
In other words, all edges of player 1 remain in the game, while every player 0 edge not chosen by $\sigma$ is removed from the game.

It is not hard to see that $W_0(G,\sigma) = W_0(G|_\sigma)$, because the game on the right-hand side still includes the strategy $\sigma$. So we have reduced our original problem to computing $W_0(G|_\sigma)$.
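
To make the construction concrete, here is a small sketch of how the restricted edge set $E'$ could be computed. The representation of games as plain sets and of strategies as dictionaries is my assumption for illustration, not something fixed by the post:

```python
def strategy_subgame_edges(edges, v0, sigma):
    """Edge set E' of the strategy subgame G|_sigma: every player 1 edge
    stays, while each player 0 node keeps only the edge chosen by sigma."""
    return {(v, w) for (v, w) in edges if v not in v0 or sigma[v] == w}
```

For instance, with edges $\{(0,1),(0,2),(1,0),(2,0)\}$, player 0 owning node 0, and $\sigma = \{0 \mapsto 1\}$, the player 0 edge $(0,2)$ is dropped while everything else stays.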

Why is that of any help? If you look closely, you'll see that $G|_\sigma$ is a so-called one-player game, i.e. a game in which at most one of the two players (in this case at most player 1) has more than one strategy. So our remaining task can be phrased as follows: how do we solve one-player parity games in polynomial time?

There are a bunch of algorithms that solve one-player parity games in polytime, but let us consider an algorithm very similar to the recursive algorithm that we have described earlier. You can also take a look at the Wikipedia page for an outline of the algorithm.

Our algorithm is recursive in the number of different priorities. If we only have one priority, we can immediately see who wins the whole game. If we have more than one priority, let $p$ be the largest one.

  1. Case $p$ is odd: Let $P$ be the set of all nodes with priority $p$. Visiting $P$ once does not help player 1; he needs to see priority $p$ again and again. Since player 1 makes all choices in our one-player game and $p$ is the largest priority around, he can do so exactly from the nodes that can reach a cycle through $P$. So let $P'$ be the set of nodes of $P$ that lie on a cycle, and compute the $1$-attractor $U$ of $P'$. It is not too hard to see that $U \subseteq W_1$, with the strategy that attracts to $P'$ and then loops along such a cycle as winning strategy for player 1. We remove $U$ from the game; the remaining nodes of $P$ are visited at most finitely often by any play, so we can lower their priority to $0$ and continue with the rest.
  2. Case $p$ is even: Let $P$ be the set of all nodes with priority $p$. We compute the $0$-attractor $U$ of $P$ and remove it from the game, resulting in a game $G'$. We solve the game $G'$ by recursion and obtain winning sets $W_0'$ and $W_1'$. We compute the $1$-attractor $A$ in $G$ of $W_1'$ and remove it from the game as it is clearly won by player 1. This leaves us with $U \setminus A$ and $W_0' \setminus A$.
    We now claim that $W_0' \setminus A$ is won by player 0. Assume it's not. Then player 1 must have an escape from $W_0' \setminus A$ that eventually reaches $W_1'$. But every play reaching $W_1'$ has to enter $A$, and since $A$ is a $1$-attractor, any node from which player 1 can force a visit to $A$ belongs to $A$ itself, a contradiction. Let $X$ be the player 0 attractor of $W_0' \setminus A$. Hence, we can remove $X$ as a safe win for player 0 from the game.
    We are left with $U \setminus (A \cup X)$. Since $U$ is a $0$-attractor and player 0 has no choices in our one-player game, every play that stays in this remaining set visits priority $p$ infinitely often, and every escape from it leads into $X$; hence the remaining set is won by player 0 as well, and we are done with this case.
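
Putting the two cases together, the one-player solver can be sketched as follows. This is only an illustration under assumptions not fixed by the post (games as sets and dicts, player 1 being the player with choices). Note the two key facts the code relies on: in the odd case only those priority-$p$ nodes that lie on a cycle actually help player 1, and in the even case everything that survives outside the $1$-attractor $A$ is won by player 0:

```python
def solve_one_player(nodes, edges, owner, priority):
    """Sketch: solve a parity game in which only player 1 has real choices
    (player 0 nodes have a unique outgoing edge). Returns (W0, W1)."""
    prio = dict(priority)                 # local copy, we may lower priorities
    W0, W1 = set(), set()
    ns, es = set(nodes), set(edges)

    def attr(ns, es, i, U):
        # i-attractor of U within the subgame (ns, es)
        A = set(U)
        while True:
            add = {v for v in ns - A
                   if (owner[v] == i and any(w in A for (x, w) in es if x == v))
                   or (owner[v] != i and all(w in A for (x, w) in es if x == v))}
            if not add:
                return A
            A |= add

    def restrict(ns, es, keep):
        return keep, {(v, w) for (v, w) in es if v in keep and w in keep}

    def on_cycle(v, es):
        # can the play return from v back to v in at least one step?
        seen, stack = set(), [w for (x, w) in es if x == v]
        while stack:
            u = stack.pop()
            if u == v:
                return True
            if u not in seen:
                seen.add(u)
                stack.extend(w for (x, w) in es if x == u)
        return False

    while ns:
        p = max(prio[v] for v in ns)
        P = {v for v in ns if prio[v] == p}
        if p % 2 == 1:
            recurrent = {v for v in P if on_cycle(v, es)}
            if recurrent:
                U = attr(ns, es, 1, recurrent)  # player 1 can loop through p forever
                W1 |= U
                ns, es = restrict(ns, es, ns - U)
            else:
                for v in P:                     # p occurs at most finitely often
                    prio[v] = 0
        else:
            U = attr(ns, es, 0, P)
            ns1, es1 = restrict(ns, es, ns - U)
            _, w1sub = solve_one_player(ns1, es1, owner, prio)
            A = attr(ns, es, 1, w1sub)
            W1 |= A
            W0 |= ns - A    # U \ A and W0' \ A are all won by player 0
            break
    return W0, W1
```

Each round either removes nodes or eliminates the top priority, so the number of rounds, and hence the running time, stays polynomial.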

As an exercise, you can check that the algorithm runs in polytime.

In the next post of the series, we consider an assumption that you are allowed to make if you're trying to find a polytime algorithm for solving parity games: you can assume that one of the two players, say player 0, wins the whole game. All you need to do is compute her winning strategy.

Scientific Sermon: Extracting Winning Strategies from Parity Game Winning Sets

[latexpage]
If you've read the last posts in the series, you might remember that solving a parity game means determining the partition into winning sets for both players and providing witnessing winning strategies on these sets.

In today's post, we'll have a look at a polytime-equivalent variation. Assume that we have an algorithm $A$ that takes a parity game $G$ and computes the partition into winning sets without providing any witnessing winning strategies, i.e.
\[
(W_0,W_1) = A(G)
\]

The question is, can we easily (in the sense that the blow-up is only polynomial-time) create an algorithm $A’$ based on $A$ s.t. $A’$ additionally computes witnessing winning strategies? The answer is yes and the iterative algorithm is fairly easy:

$A'(G)$:
  $(W_0,W_1) := A(G)$
  $E' := \{(v,w) \in E \mid v \in (V_0 \cap W_0) \cup (V_1 \cap W_1)\}$
  $F := E \setminus E'$
  While there are two edges in $E'$ with the same source Do
    $(v,w) :=$ pick one such edge from $E'$
    $(W_0',W_1') := A(G|_{(E' \cup F) \setminus \{(v,w)\}})$
    if $W_0 = W_0'$
      $E' := E' \setminus \{(v,w)\}$
    else
      $E' := E' \setminus \{(v,u) \in E' \mid u \not= w\}$
  return the trivial strategies in $G|_{E' \cup F}$ on the winning sets

The basic idea is to reduce the number of strategies for both players on their winning sets until only one strategy remains for each player while maintaining the invariant that after removing strategies there will still be winning strategies left.
We reduce the number of strategies by removing edges from nodes with more than one outgoing edge. Note that we only remove player 0 edges from nodes of the player 0 winning set and player 1 edges from nodes of the player 1 winning set.
Let $e$ be such an edge. Remove it from the game and solve it again using algorithm $A$. If the algorithm returns the same winning sets, we know that there is a winning strategy left for the controller of $e$ that doesn’t require $e$, so we can safely remove it for good. If the winning sets change after removing $e$, we know that $e$ is essential for the winning strategy, so we can remove all other edges except for $e$ emanating from that node.
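
The pruning loop above can be sketched in code. Everything about the representation here (the game as an edge set with all other components fixed, the oracle $A$ as a function from edge sets to a pair of winning sets) is an assumption for illustration:

```python
def strategies_from_solver(A, edges, v0, v1):
    """Sketch of A': compute winning strategies given only a winning-set
    oracle A. A(es) must return the pair (W0, W1) of the game with edge
    set es; all other parts of the game stay fixed."""
    W0, W1 = A(edges)
    # candidate edges we may still prune: player i edges inside W_i
    Ep = {(v, w) for (v, w) in edges
          if (v in v0 and v in W0) or (v in v1 and v in W1)}
    F = edges - Ep                        # these edges stay untouched

    def multi_source():
        # some node with at least two remaining candidate edges, if any
        seen = set()
        for (v, w) in Ep:
            if v in seen:
                return (v, w)
            seen.add(v)
        return None

    e = multi_source()
    while e is not None:
        v, w = e
        W0new, _ = A((Ep | F) - {e})      # re-solve without the edge e
        if W0new == W0:
            Ep.discard(e)                 # e is not needed: remove it for good
        else:
            # e is essential: remove every other edge leaving v instead
            Ep = {(x, u) for (x, u) in Ep if x != v or (x, u) == e}
        e = multi_source()
    # every node now has at most one candidate edge left: the strategy
    sigma0 = {v: w for (v, w) in Ep if v in v0}
    sigma1 = {v: w for (v, w) in Ep if v in v1}
    return sigma0, sigma1
```

As a toy check, take $V_0=\{0\}$, $V_1=\{1\}$, edges $(0,0)$, $(0,1)$, $(1,0)$, with priorities 2 on node 0 and 1 on node 1: player 0 wins everywhere, so a constant oracle suffices, and the procedure prunes node 0 down to a single strategy edge.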

As each iteration removes at least one edge from $E'$, the algorithm $A$ is invoked at most $|E|$ times, hence the blow-up of $A'$ over $A$ is linear.

In the next post of the series, we will consider a related problem: given a pair of (total) strategies for both players that are guaranteed to be winning on the respective winning sets (without being given the winning sets), can we determine the winning sets easily?

Scientific Sermon: Parity Game Solving

[latexpage]
Last time, we saw what parity games look like and what their complexity-theoretic status is – we can solve them by a non-deterministic polytime algorithm, but we don't know how to do it in deterministic polytime.

In this post, we will see one of the most natural algorithms for solving parity games deterministically. Along the way, we will see why parity games are positionally determined, i.e. why every node is won by one of the two players using a positional strategy, no matter how the opponent is playing. Recall that a general strategy is a plan that tells the player which outgoing edge to take from a node by taking the whole history of the play into account. Positional strategies, on the other hand, don't care about the history of the play. In other words, no matter what the history looks like, a positional strategy will always choose the same outgoing edge from a node.

Before we get started, let us fix a mathematical formulation of parity games first. We say that $G=(V,V_0,V_1,E,\Omega)$ is a parity game if $(V,E)$ is a directed, total graph – i.e. the arena on which the game is based, where total means that every node has at least one outgoing edge –, $V_0$ and $V_1$ are a partition of $V$ into the node sets of each player – if you recall the last post, the circular nodes belong to $V_0$ while the rectangular nodes belong to $V_1$ –, and $\Omega: V \rightarrow \mathbb{N}$ is the priority assignment function that maps each node to a natural number, its priority.

The algorithm is based on the notion of attractors. Let $i=0,1$ be one of the two players. Given a set of nodes $U$, the $i$-attractor of $U$ is the least superset $A$ of $U$ such that player $i$ can force every play starting in a node $v \in A$ to visit a node $u \in U$. Intuitively, you can think of $U$ as a set of nodes beneficial to player $i$ and $A$ to be an extension in which every path, no matter what the other player is doing, leads to the set $U$.

Mathematically, the attractor $A$ is determined iteratively, starting with $A := U$. In every step, we include all nodes of player $i$ that have an edge leading directly to $A$. Obviously, player $i$ can force these nodes to visit $A$ and, by induction, to visit $U$. Also, we include all nodes of the other player whose outgoing edges all lead to $A$. In other words, no matter what player $1-i$ is doing, he ends up in $A$ and, by induction, in $U$. This process is iterated until no new nodes have been added to $A$. It is immediate to see that the attractor computation can be done in deterministic polytime. Let $Attr_i(U)$ denote the $i$-attractor of $U$.
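
The iteration just described can be written down directly. Here is a naive sketch; the set-based game representation is my assumption, and this version favors clarity over the linear-time bookkeeping one would use in practice:

```python
def attractor(nodes, edges, owner, i, U):
    """i-attractor of U: least superset A of U from which player i can
    force every play to visit U. owner maps each node to 0 or 1."""
    A = set(U)
    while True:
        add = {v for v in nodes - A
               # player i needs one edge into A; the opponent must have all edges into A
               if (owner[v] == i and any(w in A for (x, w) in edges if x == v))
               or (owner[v] != i and all(w in A for (x, w) in edges if x == v))}
        if not add:
            return A
        A |= add
```

For example, in a game where player 0's node 0 has an edge into $U = \{2\}$ and player 1's node 1 has all of its edges leading into $\{0, 2\}$, both nodes end up in the $0$-attractor.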

It should be easy to see that player $i$ can force a visit from $A$ to $U$ by a positional strategy.

The algorithm for solving parity games that we consider here is called the recursive algorithm for parity games. It is based on a recursive descent in the number of priorities. The base case is the empty game. Assume now that the game $G$ is not empty and let $p$ be the largest priority occurring. Let $i$ be the player “owning” the priority, i.e. $i = p \mod 2$. Let $U$ be the set of nodes with priority $p$. It is easy to see that every play that visits $U$ infinitely often is won by player $i$, because the largest priority occurring is $p$. So consider $U$ to be a “good” region for player $i$. We can even extend $U$ to a potentially larger good region by building the $i$-attractor $A = Attr_i(U)$. Every play that visits $A$ infinitely often is won by player $i$.

The question remains whether $i$ can enforce infinitely many visits to $A$. In order to answer that question, we consider the game $G' = G \setminus A$ in which all $A$-nodes and $A$-edges have been removed. Note that the arena of $G'$ is total again only due to the fact that $A$ is an attractor. By recursion, we solve $G'$ and obtain a partition into winning sets for both players $W_i'$ and $W_{1-i}'$.

There are two cases now. The easy one is $W_{1-i}' = \emptyset$, i.e. player $1-i$ can't win anything in the subgame $G'$. In that case, player $i$ wins from every node in $G$, because if player $1-i$ decides to visit $A$ infinitely often, player $i$ wins by forcing infinitely many visits to $U$, and if player $1-i$ decides to visit $A$ only finitely often, player $i$ wins by induction. It is easy to see by induction that this can be done with a positional strategy.

The other case is $W_{1-i}' \not= \emptyset$. Here, we only know for sure that player $1-i$ can win on $W_{1-i}'$ as there is no player $i$ escape from $W_{1-i}'$ to $A$ (because $A$ is an attractor already). So what can we do now? We enlarge $W_{1-i}'$ by computing the $1-i$-attractor $B = Attr_{1-i}(W_{1-i}')$. It is immediate to see that player $1-i$ wins on $B$ with a positional strategy. We consider a second subgame $G'' = G \setminus B$ and solve it recursively again. We obtain a partition into winning sets $W_i''$ and $W_{1-i}''$, which we can use to define the winning sets for $G$, i.e. $W_i := W_i''$ and $W_{1-i} := W_{1-i}'' \cup B$.
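
The two cases can be put together into a short sketch. Again, the concrete representation (sets and dicts) is an assumption, and the attractor computation described earlier is inlined so that the snippet is self-contained:

```python
def solve_recursive(nodes, edges, owner, priority):
    """Sketch of the recursive algorithm; returns the winning sets (W0, W1)."""
    def attr(ns, es, i, U):
        # i-attractor of U within the subgame (ns, es)
        A = set(U)
        while True:
            add = {v for v in ns - A
                   if (owner[v] == i and any(w in A for (x, w) in es if x == v))
                   or (owner[v] != i and all(w in A for (x, w) in es if x == v))}
            if not add:
                return A
            A |= add

    def sub(ns, es, removed):
        keep = ns - removed
        return keep, {(v, w) for (v, w) in es if v in keep and w in keep}

    if not nodes:
        return set(), set()
    p = max(priority[v] for v in nodes)
    i = p % 2                                   # the player "owning" priority p
    U = {v for v in nodes if priority[v] == p}
    A = attr(nodes, edges, i, U)
    W = list(solve_recursive(*sub(nodes, edges, A), owner, priority))
    if not W[1 - i]:
        win = [set(), set()]
        win[i] = set(nodes)                     # easy case: player i wins everywhere
        return tuple(win)
    B = attr(nodes, edges, 1 - i, W[1 - i])     # player 1-i surely wins on B
    W2 = list(solve_recursive(*sub(nodes, edges, B), owner, priority))
    win = [set(), set()]
    win[i] = W2[i]
    win[1 - i] = W2[1 - i] | B
    return tuple(win)
```

On the two-node game with self-loops, priorities 2 and 1, each player keeps exactly their own node.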

I’ve updated the respective Wikipedia article as well where you can find a pseudocode outline of the algorithm.

The recursive algorithm is probably the most straightforward algorithm for solving parity games. It can easily be seen that the algorithm is potentially exponential because of the two recursive descents, and one can, in fact, provide a family of games on which the algorithm runs in exponential time. Interestingly, the algorithm is one of the fastest in practice. It also shows that all nodes of a parity game can be partitioned into the winning sets of both players and that they can win with a positional strategy on their respective winning sets.

The next post in this series will consider some interesting, seemingly simpler subclasses of parity games that would allow us to define a polytime algorithm for general parity games, given that we would have a polytime algorithm for one of the simpler classes.

Scientific Sermon: Parity Games & Complexity

Last time, we described how parity games could be seen as an algorithmic backend to verification problems in practice. In this post of the series, we introduce parity games concretely and explain why they are an interesting subject on their own from a complexity theoretic (we’ll get to that) point of view.

From a game-theoretic point of view, a parity game is a two-player zero-sum perfect-information game. Let’s decode what that means. First, two-player game means that there are two players playing this game, like in chess.

Now zero-sum means that for one player to be winning, the other player must be losing. Many casual games are zero-sum games. In more general terms, zero-sum means the following: after the two players have played the game, they both get an outcome of the play – that could, for instance, be money that a player gets or that a player has to pay. Whatever amount one player gains must be a loss for the other player. In other words, the sum of the outcomes for both players must be zero. In our scenario, we could say that winning results in outcome 1 while losing results in outcome -1, hence the sum will always be zero.

Third, it’s a perfect-information game. That means that both players know the complete state of the game at any time. Chess, for instance, is a perfect-information game, because both players can see the whole board. Most card games, on the other hand, are imperfect-information games, because the players can’t see which cards the other players have. They can guess, of course, and compute probabilities, but in general they don’t know for sure which cards the other people hold.

So what do parity games look like? First of all mind the plural: parity games really describe a class of games – there are infinitely many different parity games. To put that into perspective, the contemporarily popular Sudoku games are also a class of games – there are many different instances of the Sudoku grid to start with.

Parity games are so-called graph-based games. A graph consists of two ingredients: a set of positions, called nodes, and a set of (one-way) connections, called edges. Every edge in a graph connects two nodes with each other. For our purposes, a graph can be seen as the abstraction of a game board – they usually look grid-like and every position is directly connected to its immediate neighbors. But parity games don’t necessarily connect immediate neighbors with each other. Additionally, we want to have “bridges” that connect nodes being far away with each other. So a graph really is an abstraction of the two-dimensional game boards.

The nodes (positions) of the graph have two shapes: circular and rectangular. The circular nodes belong to player 0, the rectangular nodes belong to player 1. For mathematical reasons, we call both players 0 and 1 instead of 1 and 2. We also call them player “Even” and player “Odd”. Furthermore, every node is labeled with a natural number. Intuitively, player Even likes large even numbers and player Odd likes large odd numbers. The game is played by putting a pebble on a designated starting node of the graph. If this starting node belongs to Even, then she can choose one of the outgoing edges (connections) and move the pebble along the chosen edge to a new node. Otherwise, it's Odd's turn to choose the edge. The game then continues in that fashion for an infinite number of steps.

Wait a minute – how could two players play for an infinite number of steps? Of course, this could not be done by real living players. We will see later how this strange issue can be resolved. But bear with me for the time being. Just assume that both players played for an infinite number of steps – resulting, for instance, in the following sequence of nodes:

4, 1, 6, 8, 2, 0, 3, 2, 0, 3, 2, 0, 3, 2, 0, 3, …

Who wins this play? The winning condition is as follows: if the largest number that occurs infinitely often is even, Even wins; otherwise, Odd wins. So in this example play, player Odd wins, because 3 is the largest number (called priority) that occurs infinitely often.
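
Plays like the one above are ultimately periodic: a finite prefix followed by a cycle repeated forever. Under that lasso representation (my assumption, not something fixed by the post), determining the winner is a one-liner, since exactly the cycle priorities occur infinitely often:

```python
def winner_of_lasso(prefix, cycle):
    """Winner of the play prefix . cycle^omega, both given as priority lists.
    Returns 0 if player Even wins, 1 if player Odd wins."""
    return max(cycle) % 2   # only the cycle priorities occur infinitely often
```

For the play above, the prefix 4, 1, 6, 8, 2 followed by the cycle 0, 3, 2 yields 1, i.e. player Odd wins, matching the explanation above.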

The algorithmic question is, given a parity game and a designated starting position, can we tell whether one of the two players has a winning strategy against the other player? In other words, is there a clever plan for one of the two players such that no matter what the opponent does, the player is winning? And if so, can we also understand why?

The answer to all these questions is yes: parity games are determined, so there must be a winning strategy for one of the players from a designated starting position; we can also algorithmically decide which of the two players can win; furthermore, we can provide an additional winning strategy that witnesses the clear win of the respective player.

If you think about it, it follows that the witnessing winning strategy must have a finite description; otherwise, there would be no way for the algorithm to print it out. So what could strategies look like for both players?

A strategy should tell the player which outgoing edge to take from a node when the pebble has been put on it. This decision can depend on the whole history of the play, i.e. on everything that has happened beforehand. So in general, a strategy can be an object of infinite size. Luckily, parity games are not only determined, but positionally determined. That means that there is no strategic benefit in making decisions depend on the history. If a player chooses to move from node $v$ to node $u$ at some point, then the player should stick to that choice – in other words, whenever the pebble falls back to node $v$, the player should choose the edge leading to node $u$ again. Such positional strategies are finite objects. So why is there no strategic benefit in non-positional strategies? We will see that whenever we have found a non-positional winning strategy, we can also find a positional winning strategy – so there is no need to hassle around with non-positional strategies in the first place.

Consider the example parity game again for a minute. The blue and red areas describe the winning sets of player Even and player Odd, respectively. A winning set is the set of nodes from which the respective player has a winning strategy. The boldly marked blue and red edges are witnessing positional winning strategies. You can try it out yourself: no matter which choices you make as an opponent (be they positional or non-positional), you will lose against the other player in his / her winning region.

Which brings us back to the algorithmic question: deciding which player has a positional winning strategy from a designated starting node. As we will also see, it is easy to determine whether a given positional strategy is indeed a winning strategy. So algorithmically, this problem is in the complexity class NP – but we don't know whether it is NP-hard, nor do we know whether it is in P. Because of the positional determinacy, the problem is also contained in coNP. In case you have no idea what I'm talking about, please read the wonderful article by Scott Aaronson on the P vs. NP problem, which is one of the most important open problems in maths and theoretical computer science. Since there are not many natural problems that are contained in both NP and coNP without being easily seen to be in P, the problem of parity game solving touches one of the most interesting subjects of pure complexity theory. In other words, it would be a major breakthrough if we could prove parity game solving to be in P. Or if we could prove that it is NP-hard, which (I guess) no one really believes.

The next post in this series will feature an algorithm that solves parity games and prove some interesting results along the way.

Scientific Sermon: Wikis and Parity Games

[latexpage]
This will be the first post in a series called “scientific sermon”. It will feature folklore knowledge in my scientific area of expertise as well as introductory articles to the field. Eventually, I will also include complementary explanations to my research.

Along the way, I will also try to update the related Wikipedia articles. However, most of the content will probably be too specific; nevertheless, it would be helpful to have all of that in a wiki. At least I would like to have scientific results organized in a wiki – it would be much more accessible for outsiders, and the scientific community could collaboratively share and store their knowledge in a holistic way.

It is a little surprising to me that no such “science wiki” seems to exist. There is no doubt in my mind that such a thing will come to life in the near future – you can already see the first attempts going in the right direction. There is, for instance, the very well maintained Complexity Zoo featuring essentially all known complexity classes and their relationships. Then, on the other hand, there is the intriguing Polymath Blog, which is a remarkably successful online community for doing topnotch collaborative scientific research.

So I will in the end set up a small wiki for theoretical computer science, and of course invite other scientists to contribute to it. But let’s see how far I’ll get with good old Wikipedia first.

My first post will be about parity games and why they are an important abstraction in the theory of computation.

Theoretical computer science deals with a bunch of important questions. One of them is to find fast algorithms for practical problems and another is to understand in general what computers are capable of. The problem of parity game solving is related to both questions.

Let us start with the practical problems which should serve as the original motivation why people got interested in parity games. One central goal of modern computer science is to decide whether software is correct, i.e. bug-free. In other words, whether the human beings that wrote the software made any mistakes. This is particularly important for critical applications such as power plants, railroad systems and so on. We just can’t afford to make any mistakes here.

Thus, we desire an additional computer program that we can give our software to, so that it checks it for bugs. After some time, it should tell us whether the software is bug-free and, if it's not, what the reason is (so we can correct our mistakes). Well, first we have to specify what we mean by “bug-free”. Any computer program performing such automated verification wouldn't know which behavior we want from our software. Hence, it is the duty of the software developers to also provide a specification that defines “correct” behavior.

For example, we want that whenever a processor adds two natural numbers, the result is the correct sum. A specification is usually formulated in a logic; for instance, correct addition could be specified as follows:
\[
\forall a, b \in \mathbb{N}: add(a, b) = a + b
\]
The left-hand side refers to an addition operator of the processor while the right-hand side refers to the mathematical addition of natural numbers.

The question that we now want to answer is, given a blueprint (called “structure” or “interpretation”) of the processor, whether it satisfies the specification given by formulas in a certain logic. Such a scenario is called a model checking problem.

It should be noted that not every model checking problem can be solved (even when we assume that time is not the issue), because some logics are so complicated to handle that no computer can answer such a question in general. See the halting problem for more information about that.

However many practical model checking problems can be solved and are solved in practice. Of course, it is desirable that the process of automated verification terminates fast – it wouldn’t be of too much use to wait years to get an answer. Hence, theoretical computer science tries to speed up algorithms solving that problem.

As a mathematician, you always try to simplify and abstract problems when you fancy solving them. One core idea of abstraction is to remove every detail of the problem that is unnecessary and distracting.

And this is where parity games come into play. They are an abstraction of the model checking problem for several modal logics, most importantly for a fixed-point logic called the modal $\mu$-calculus.

Some other applications and relations of parity game solving can be seen in the following diagram:

In the next post in this series, I will show what parity games actually look like and explain why they are an interesting subject on their own – in order to understand what computers are capable of in general.