Category Archives: Series

Perfect Tools Part 2: Actionable Remembering on How to do Things

Have you ever googled for a quick tutorial on how to scp files from one computer to another? Or searched for the right commands to create and switch to a new git branch? What are the perfect tools to do that job? You might say, it depends on what the objective is – scp is fairly different from git.

And you are right. But there are two more things to it. First, you have to find out how it works in order to perform the task. Google is usually your friend here, and I assume that you know how to phrase successful search queries. So what’s the second thing? Performing the task a second or a third time. And forgetting how you’ve done it before. I’m sure that happened to you countless times.

So what’s the perfect tool for the job? It’s actually your favourite text editor, and using it for this job is more a matter of habit than of expertise: whenever you find out how to perform a specific task, write it down. Not a long text, just as much as you need to understand as quickly as possible how to do it again. As a bonus, you might want to write a shell script instead that performs the task for you.

The way I’m doing it is two-fold. I have one directory with text files describing how to do things. Here are two examples:

  •  Git Branching Workflow
git checkout -b NEW_BRANCH
git add .
git commit -a -m "Finished Work"
git checkout master
git merge NEW_BRANCH
git branch -d NEW_BRANCH
git push
  •  Configure Apache on Mac OS X
sudo apachectl start
cd /etc/apache2/other
sudo nano test.conf
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName localhost
    <Directory /YOUR_WEBSERVER_ROOT>
        Allow from all
        AllowOverride AuthConfig
        Options FollowSymLinks
    </Directory>
</VirtualHost>
sudo apachectl -k restart

But even better than such notes is making them actionable by writing scripts that do the work for you. I keep all my utility scripts in one directory that is included in the PATH variable of my terminal. In fact, all scripts sit in a Dropbox folder, so all my machines are always up to date. All scripts are prefixed with “cmd-”, so I can easily find and execute any of them by simply typing “cmd-” in my terminal and then auto-completing the specific task by hitting tab.

Here are a couple of examples:

  • Limit your downstream bandwidth
ipfw add pipe 1 all from any to any in
ipfw pipe 1 config bw $1Kbit/s delay $2ms
  • Convert video to mp4
ffmpeg -i $1 $1.mp4
  • Find file globally
find / -name $1
  • Search file contents recursively
grep -r "$1" .
  • Overwrite your MAC address
sudo ifconfig en0 ether $1
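To make this concrete, here is a minimal sketch of how such a script could be set up. The folder location and the script name are my own illustrative choices (the actual scripts may look different), and the script body just wraps the ffmpeg one-liner from above:

```shell
# Illustrative location for the script folder (mine syncs via Dropbox).
SCRIPT_DIR="$HOME/Dropbox/scripts"
mkdir -p "$SCRIPT_DIR"

# A hypothetical "cmd-" script wrapping the mp4 conversion one-liner.
cat > "$SCRIPT_DIR/cmd-video-to-mp4" <<'EOF'
#!/bin/sh
# Convert a video file to mp4; $1 is the input file.
[ $# -eq 1 ] || { echo "Usage: cmd-video-to-mp4 INPUT_FILE" >&2; exit 1; }
ffmpeg -i "$1" "$1.mp4"
EOF
chmod +x "$SCRIPT_DIR/cmd-video-to-mp4"

# Put the folder on PATH (e.g. in ~/.profile) so that typing "cmd-" plus
# tab completion finds every script.
export PATH="$SCRIPT_DIR:$PATH"
```

With that in place, typing “cmd-” followed by tab lists every utility script on any machine that syncs the folder.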

I have about 30 text files and 50 scripts of this kind now. The additional time that you need to write up these little documents when encountering the task for the first time is nothing compared to the time that you’ll need to find out how it works again for the second or third time. Not to mention your frustration about feeling repetitive.

Perfect Tools Part 1: Selection and Consumption of Posts

If you are like me, then you are always searching for ways to improve on the efficiency of your daily work flow. I feel like I’m already closing in on – at least – local optima in some areas, and since it took me some time to find and combine the right tools to do the job, I might as well share it here.

I am an avid reader of many different blogs and news sites, but like most people, I only read a tiny fraction of all the articles that are posted each day. So the first task I have to do every day is filtering: deciding which posts are of interest to me. For me, the process of filtering is completely separate from reading the actual articles.

Since I don’t want to browse 20 different websites each day to find new posts, I use an aggregator that pulls in all articles and presents the new ones to me in a uniform fashion that makes it easy to go over the different posts. There are essentially two types of aggregators: intelligent and dumb ones.

Intelligent aggregators try to guess which sources and which posts are of interest to you and only present you with that selection. Dumb aggregators, in turn, show you the sources that you have selected, and within those sources all the posts. While most people probably decide to use intelligent aggregators, I decided to use dumb ones. For one, I don’t want to see a semi-random selection of sources, since I have hand-picked my own; and I always have the feeling that I might be missing out on interesting articles if a machine-learning algorithm is selecting them for me. In particular, I doubt that serendipitous discovery would survive an intelligent aggregator. (If you are looking for intelligent aggregators, I’d recommend checking out Prismatic and Flipboard.)

So which tool am I using for dumb aggregation? It’s called Feedly and allows you to select the sources you’re interested in and presents all new articles as lists. Perfect for filtering. You can also use it for reading, but that’s not what I’m doing.

My workflow starts by going to my Feedly subscriptions and scanning over them. If I’m interested in an item, I open it in a new tab and continue to scan. I will not start to read any of the posts until I’m finished scanning. (Hint: on the Mac, the shortcut for opening a link in a new tab without automatically switching to it is Cmd + Click.) I call this first pass the scanning phase.

After scanning all new articles, I close Feedly and go through the tabs. I decide which articles to read now, which to read later and which to discard. When I decide to discard an article, I immediately close the tab. When I decide to read an article now, I immediately read it. These are mostly posts that are only relevant for a short time and need to be read on that day. The articles that are of interest to me but of no immediate timely relevance are marked to be read later (which I will describe next). More than 50% fall into that category. I call this second pass the triage phase.

How do I save articles for reading them later? I was using Instapaper for a long time but now switched to Pocket. Pocket installs a browser extension that allows you to mark posts with one click to read them later. That’s exactly what I do with interesting articles that are of no immediate timely relevance.

I read most of the Pocket articles on the go using the smartphone or tablet app, which lets you read all your saved articles even when offline. So whenever I’m in the subway, a cab, a train, a plane, in the gym etc., I read the articles from there. I call this third pass the reading phase.

When I find that one article is so interesting that I need to take action based on the content later (i.e. send it to a friend, check out links etc.), I “heart” the article. I’ll check that “heart category” from time to time when using my computer to go through that list. I call this fourth pass the action phase.

But there is still room for improvement. What do you do when you are literally on the go? Reading while walking slows down your walking, and since walking is your primary task, there is no point in this trade-off. Wouldn’t it be great if you could listen to the articles you’ve saved? You can, and the text-to-speech is actually quite nice and auto-detects the right language for each article (in my case, that’s English and German). The app that I’m using for that is called Lisgo, and it synchronises with all your saved Pocket articles. That’s also the primary reason I switched from Instapaper to Pocket: there was no text-to-speech extension for Instapaper.

I am pretty happy with the combination of Feedly, Pocket and Lisgo right now, and don’t see much room for improvement. How do you consume your daily news from the web? For me, it’s broken up into scanning, triage, reading and action phase. Which is, to some extent, pretty similar to how I treat my email inbox.

Why do people prefer to archive their life instead of living it?

When young-ish people do activities nowadays, like going to a party, to a concert or whatever, you’ll see many of them perceiving most of the event through the display of their phones, busy taking videos and pictures. It seems like it is more important to many folks to document what they’re doing than actually doing it.

Why do people rather record videos and photos of their activities than just enjoy the moment? If you ask them, most will claim that they want to be able to remember what happened years later, and that archiving it digitally is the best way to not lose track of what was going on. While that is undeniably true, I suspect that most people will never look at most of their old photos and videos ever again. The predominant behaviour, on the other hand, is to immediately post and share the videos and photos, most likely in order to improve the digital depiction of oneself. In other words, it might be more important to people to improve the way others perceive them digitally than to really live what they’re pretending to be doing.

What is even more surprising than archiving real life instead of living it is that many people would rather talk (via written messages) to other people on their phones than to the people they’re socialising with. Sure, whenever I get an email or a text, I too feel slightly pressured to answer as quickly as possible. But only very few of my incoming messages actually have to be answered within an hour or so. And that won’t be much different for other people, so that can’t be the only reason.

Is it that we usually meet up with people that are less important than other folks we’re texting with? I don’t think that’s the reason either. So what is then? I think part of the answer might be the decreasing attention span of people. It might just be too hard for people to devote their attention for an extended period to the folks hanging out with them. The other reason might be social efficiency. Since we are already spending time with the people we’re hanging out with (even if we’re texting instead), we can socialise with other people on our phone at the same time. We can, in other words, satisfy the socialising needs of more people at the same time, even if we’re sacrificing the direct interaction with the people next to us. But since most of them are doing the same thing, it can’t be that harmful, right?

Do I do it? Well, certainly not the life-archiving part, but that’s mostly because I don’t really enjoy taking photos too much. How about communicating with other people on the phone while socialising in real life? I try not to. Call me old school, but I still find it a little insulting. However, I sometimes do it if I really feel that I have to answer certain texts in a timely manner. I also start doing it when the people I’m socialising with are doing it, basically to hold a mirror up to them; needless to say that no one ever gets my subtle behavioural critique though.

People should not be afraid of their governments. But actually they couldn’t care less.

After all has been said and done over and over, it still took me a fair amount of time to realize what bothered me the most about the NSA affair. Have we been really surprised by what the NSA is doing? Not so much. Is Edward Snowden a hero? Probably, but hero is too much of a militaristic term for my taste. Should we be okay with being spied on? Of course not. Isn’t the data’s content much more important than the meta-data that the NSA is tracking? Not at all. But those who have nothing to hide have nothing to fear? Quite on the contrary.

The idea of spying massively on people is not new. Most classic dystopian stories like 1984 are centered around an omnipresent all-seeing eye kind of state. Even in our own recent history we had governments like the German Democratic Republic in Eastern Germany that massively spied on their people with severe consequences for major parts of the population. So not much about what we’re seeing today is a genuinely new phenomenon.

The graphic novel V for Vendetta also draws the picture of a dystopian state, and the main character that tries to liberate the oppressed people states at some point one of the most iconic sentences about dystopian states: “People should not be afraid of their governments. Governments should be afraid of their people.”

I always liked that sentence, and I still do. Now interestingly, while it should apply to the current situation, it doesn’t. It’s actually the other way around, although the government is spying on the people.

Edward Snowden himself closed with the following words in his famous interview with the Guardian, speaking about what he fears to come out of all of this:

The greatest fear that I have regarding the outcome for America of these disclosures is that nothing will change. People will see in the media all of these disclosures. They’ll know the lengths that the government is going to grant themselves powers unilaterally to create greater control over American society and global society. But they won’t be willing to take the risks necessary to stand up and fight to change things to force their representatives to actually take a stand in their interests.

But people are not only not willing to stand up; they, by and large, couldn’t care less. To me, that’s the most interesting thing about the whole affair. And even I feel fairly detached about the matter. But haven’t I been idealistic once? Haven’t I sworn to myself to be one of the first to stand up against any form of state-sanctioned oppression or state-sanctioned undermining of civil rights? And yet, here I am, shrugging my shoulders. Is this what getting old feels like? You give up on your ideals? Maybe. But even if so – sadly enough – this can only be half the truth.

Because I’m not the only one. There are millions of young people that must have had the same thoughts that I had at that age. But apparently they don’t care either. Is it the openness of the Internet that transformed our innermost perception of privacy and of the importance of privacy? Do we first need to see with our own eyes how our governments might use our own data against us?

I am not trying to be apologetic here. I am just surprised by how little people care, including me.

Once upon a time in Japan, one year ago

The following is an old blog post from last year that I’ve posted on a different blog that I’ve now closed. In retrospect, not much has changed since then in my views. I have to admit, however, that some thoughts are fairly pathetic – I guess I was still overwhelmed by everything when I wrote it.

I’ve spent the first couple of days in Tokyo and then went to Kyoto. Tokyo is an incredibly large city packed with skyscrapers. If you go up one of the high buildings around the center to get a good view, it seems like the city is never ending, like the whole world is covered with skyscrapers. Like the Death Star (without being dramatic here). It’s almost surreal. Let there be no doubt, the skyline of New York is one of the most impressive sceneries, but compared to Tokyo, Manhattan is just a very small district in this gigantic city. Tokyo has multilayered streets all over the place. It is also the first city I’ve seen that has cars but no parking spots.

I’ve spent the rest of my time in Kyoto. It is the old imperial city west of Tokyo, much smaller and greener. The best way to get around there is to get yourself a bike. My colleague was kind enough to lend me one of his bikes, which made it perfectly easy for me to visit all the places I was looking for. Japan in general and Kyoto in particular are beautiful. The people devote quite some attention to the smallest details. Every tree has been brought into perfect shape and each plant’s color is intense. Lime green. Crimson red. It’s just a form of art. It’s like stepping into a science fiction movie. Which sounds cliché and overgeneralizing, but to me it was.

As you may know, one of the most important occasions for Japanese people is the so-called cherry blossom season, a two-week period in which the cherry blossom trees bloom. It usually takes place by the end of March, but this year the trees started to bloom a little later, so I was lucky enough to see them in full bloom. People sit under the trees and have picnics. It is a national celebration called “Hanami”, and most Japanese people get a warm tone in their voice when talking about their cherry blossom trees. You don’t really get it at first, but after some time you start to understand that Hanami means a lot to them. It also shows you how much of a national tragedy Fukushima was, as they didn’t celebrate Hanami last year.

When you go by the river in Kyoto, you see many people just sitting around, looking at the river or the trees beside it. They simply enjoy the beauty of the moment. We have somehow forgotten how this works; we are always obsessed with using our time as efficiently as possible, even our free time.

For me, life has always been about choosing between hedonism and delayed gratification, and I think that is pretty typical for Western cultures. Hedonism, I always thought, might give you an instant gratification, a good fun, without giving your life any meaning. You’re missing the big picture and you might as well lose yourself in perpetuating meaningless stuff over and over again. Delayed gratification, on the other hand, seems to give you the big picture. It allows you to focus on doing something you might not enjoy too much, but you are promised to be rewarded later on for doing it.

I’ve always interpreted carpe diem along these lines. Steve Jobs’ quote “Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life” seems to support this interpretation. You should always have the big picture in mind.

But what I now begin to understand is that delayed gratification tends to build up endless chains of in-order-to thinking, postponing the gratification possibly infinitely long. “I’ll be dead soon” also means that you might as well be dead without ever being gratified for whatever it is you’re doing. Your big picture might not even work out as you planned it, and then you’re screwed.

Looking at the Japanese culture, I now realize that it is neither necessary nor desirable to choose between hedonism and delayed gratification. You should focus on the big picture, but you should also try to enjoy life every day. There is so much beauty in this world. We just need to relearn how to see and enjoy it. Today might be your last day, so don’t waste it doing stuff that you hate. Don’t waste it following other people’s dreams. I might be completely off the point to identify this as a Japanese thing. (Edit: I am now pretty sure I was off the point.) Maybe it was just me thinking along these lines when I was there.

Religion in Japan is obviously quite different from ours. The original Japanese religion is Shinto, which now coexists with Buddhism. There are many temples for both religions, and some of them are even mixed. Shinto is a polytheistic nature religion, where you basically have gods for everything. But instead of going there just to praise a lord, you pray for health or fertility or good luck in an exam. Your relationship with the god is much more demanding from your side. What I found particularly charming were the trees placed in front of the temples, to which everybody can attach small papers with the wishes you want the god to take care of. If you think of fortune cookies, you basically take the good wishes with you and let the restaurant deal with all the bad stuff. It’s an optimistic and non-fatalistic way to deal with the future.

I probably don’t have to tell you much about the food: it’s incredibly good and fresh, and the Japanese dishes that we get in the Western world are only about ten percent of the different kinds of dishes that you get here. Many of them were exotic but also very tasty. Be careful when trying Japanese sea cucumber, though, which tastes pretty terrible, at least to me. And even some Japanese people admitted that only a small fraction of them actually likes sea cucumber, so I think you’ll be on the safe side if you don’t like it.

In comparison to Japanese dishes, our food looks like it’s prepared by some uncivilized barbaric tribe. Instead of using elegant chopsticks, we get a set of tools to work on the food in order to cut it into manageable pieces. Even with Western food like steaks, it’s a good question why it isn’t already cut when served. It would be much nicer to eat.

Everything, every ritual, is carried out with the utmost perfection. If people hand you something, they always turn it around in front of you before giving it to you, so that you can see the item in the right orientation. And they always hand it to you with both hands. This gesture somehow turns a simple action like handing something over into an almost transcendental celebration.

People seem to have a fundamentally different understanding of how people should behave in society. Let me give you two examples. The public places and streets in Kyoto have no trash bins, but there is also no trash lying around. Not a single thing. I first thought that you’re probably going to be punished quite badly when being caught throwing stuff on the ground, but that’s not the thing. People see public places like their own gardens. The public is themselves. So why should they throw garbage on the ground? That just doesn’t make sense. Of course, our society has a completely different take on this. Throwing garbage on the ground in public places? As long as no one sees me, who cares?

Also small-scale stealing seems to be out of the picture. People let their bikes just stand around, leaving bags of stuff that they just bought in the baskets, and go into a shop to buy some other items. They just leave their bags and bikes without attendance. That’s amazing and it seems to just work like that.

People also seem to be generally friendly and kind. Although many do not speak English, people always try to help you, even if it takes them quite some time to accompany you to your desired destination. As practical advice, you should take any directions people give you with caution, as they sometimes would rather give you a wrong direction than tell you that they don’t know the way. From what I’ve understood, it’s about not giving you negative feedback. It’s about not losing face, and that goes for both parties.

So go with the flow and try to be polite and sensitive yourself. I’ll definitely come back soon.

Scientific Sermon: Computing Winning Sets From Winning Strategies

In our last post of the series, we have seen how to obtain winning strategies in polytime if all you can do is compute winning sets (without strategies) in polynomial time. In this post, we’ll try to solve a related problem: given a pair of strategies for both players that are guaranteed to be winning on the winning sets of the game, can we extract these winning sets in polynomial time? The answer is, as we will see, yes.

In order to simplify the problem, let $G$ be a parity game and $\sigma$ be a player 0 strategy. We can define everything accordingly for player 1.
Define the player 0 $\sigma$-winning set as follows:
$W_0(G,\sigma) = \{v \in V \mid \text{all $\sigma$-conforming plays starting in $v$ are won by player 0}\}$

It is easy to see that $W_0(G,\sigma) \subseteq W_0(G)$, and $W_0(G,\sigma) = W_0(G)$ iff $\sigma$ is a winning strategy on $W_0(G)$.

Hence, in order to solve our problem, it suffices to compute $W_0(G,\sigma)$.

Now we simplify our problem even more by considering the so-called strategy subgame. Given a game $G=(V,V_0,V_1,E,\Omega)$ and a player 0 strategy $\sigma$, the strategy subgame $G|_\sigma = (V,V_0,V_1,E',\Omega)$ restricts the set of edges as follows:
$E' = \{(v,w) \in E \mid v \in V_0 \Rightarrow \sigma(v) = w\}$
In other words, all edges of player 1 remain in the game, while all player 0 edges not chosen by $\sigma$ are removed from the game.

It is not hard to see that $W_0(G,\sigma) = W_0(G|_\sigma)$, because the game on the right-hand side still includes the strategy $\sigma$. So we have reduced our original problem to computing $W_0(G|_\sigma)$.

Why is that of any help? If you look closely, you’ll see that $G|_\sigma$ is a so-called one-player game, i.e. a game in which at most one of the two players (in this case at most player 1) has more than one strategy. So our remaining task can be phrased as follows: how do we solve one-player parity games in polynomial time?

There are a bunch of algorithms that solve one-player parity games in polytime, but let us consider an algorithm very similar to the recursive algorithm that we have described earlier. You can also take a look at the Wikipedia page for an outline of the algorithm.

Our algorithm is recursive in the number of different priorities. If we only have one priority, we can immediately see who wins the whole game. If we have more than one priority, let $p$ be the largest one.

  1. Case $p$ is odd: Let $P$ be the set of all nodes with priority $p$. We compute the $1$-attractor $U$ of $P$. It is not too hard to see that $U \subseteq W_1$ with the attractor strategy of player 1 on $U$ as winning strategy. We remove $U$ from the game and continue with the rest.
  2. Case $p$ is even: Let $P$ be the set of all nodes with priority $p$. We compute the $0$-attractor $U$ of $P$ and remove it from the game, resulting in a game $G'$. We solve the game $G'$ by recursion and obtain winning sets $W_0'$ and $W_1'$. We compute the $1$-attractor $A$ in $G$ of $W_1'$ and remove it from the game, as it is clearly won by player 1. This leaves us with $U \setminus A$ and $W_0' \setminus A$.
    We now claim that $W_0' \setminus A$ is won by player 0. Assume it’s not. Then there must be a player 1 escape going through $A \setminus W_0' \cup U$ reaching $W_1'$. But since $A$ is an attractor and the game is a one-player game, this is impossible. Let $X$ be the player 0 attractor of $W_0' \setminus A$. Hence, we can remove $X$ as a safe win for player 0 from the game.
    We are left with $U \setminus (A \cup X)$, with which we continue.

As an exercise, you can check that the algorithm runs in polytime.
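Both cases of the algorithm rely on attractor computation. As a rough, self-contained OCaml sketch (the game representation – an owner array plus successor lists – is my own choice for illustration, not something fixed in the series), the $i$-attractor of a node set can be computed by a naive fixpoint iteration:

```ocaml
(* A node belongs to the i-attractor of the set s if it lies in s, or if
   its owner is i and some successor is already attracted, or if its
   owner is the opponent and all successors are already attracted.
   The naive fixpoint iteration below runs in polynomial time. *)
let attractor owner succ i s =
  let n = Array.length owner in
  let attr = Array.make n false in
  List.iter (fun v -> attr.(v) <- true) s;
  let changed = ref true in
  while !changed do
    changed := false;
    for v = 0 to n - 1 do
      if not attr.(v) then begin
        let pull =
          if owner.(v) = i
          then List.exists (fun w -> attr.(w)) succ.(v)
          else List.for_all (fun w -> attr.(w)) succ.(v)
        in
        if pull then begin
          attr.(v) <- true;
          changed := true
        end
      end
    done
  done;
  attr

let () =
  (* tiny example: cycle 0 -> 1 -> 2 -> 0, node 1 owned by player 1 *)
  let owner = [| 0; 1; 0 |] and succ = [| [1]; [2]; [0] |] in
  assert (attractor owner succ 0 [2] = [| true; true; true |])
```

More efficient linear-time attractor implementations exist, but the fixpoint version matches the definition most directly.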

In the next post of the series, we consider an assumption that you are allowed to make if you’re trying to find a polytime algorithm for solving parity games: you may assume that one of the two players, say player 0, wins the whole game. All you need to do is compute her winning strategy.

Applied Engineering: Exhaustive Search

In the last post of the series, we introduced modules, functors and maps. We ended with the implementation of the value assignment function that maps a given table assignment to a (ranking) value.

In this post, we will conclude the series and solve the optimal table assignment problem in OCaml. We will implement a function that searches the space of potential table assignments to find one with the best possible value, using a very simple exhaustive search algorithm.

First, we introduce a new module similar to the Map module we’ve already used previously. It models a set of objects and allows us to add, remove, and find elements, and to iterate over them efficiently. For our purposes, a set is the right data structure for walking through the search space.

module IntSet = Set.Make(struct
      type t = int
      let compare = compare
end)

let int_list_to_set int_list =
    List.fold_left (fun set_acc list_el -> IntSet.add list_el set_acc)
    IntSet.empty int_list

As before, we have to specify a compare function, and we use the default one once again. We also add a function int_list_to_set that converts a list of integers to a set of integers by folding the list and adding one element at a time using IntSet.add while starting with the empty set IntSet.empty.
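For a quick sanity check, here is a self-contained snippet (repeating the definitions from above so it runs on its own) showing that duplicates collapse and that IntSet.elements returns a sorted list:

```ocaml
module IntSet = Set.Make(struct
  type t = int
  let compare = compare
end)

let int_list_to_set int_list =
  List.fold_left (fun set_acc list_el -> IntSet.add list_el set_acc)
    IntSet.empty int_list

let () =
  let s = int_list_to_set [3; 1; 2; 3] in
  (* the duplicate 3 is collapsed, and elements come back sorted *)
  assert (IntSet.elements s = [1; 2; 3]);
  assert (IntSet.cardinal s = 3)
```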

If we want to get the sets of people and seats, we could use the mapping function as follows:

let people_set = int_list_to_set (List.map (fun (x,_) -> x) people)
let seats_set = int_list_to_set (List.map (fun (x,_,_) -> x) table)

Before we can turn to the exhaustive search algorithm, we consider one last type that is particularly useful in functional programs: the option type. It is a box that can contain at most one instance of some other type – in other words, it’s either an empty box or a singleton. It is already defined by OCaml, but it can easily be redefined:

type 'a option = None | Some of 'a

You can use the type as follows:

let x = None
let y = Some 5
match option_variable with
  None -> do_something
| Some i -> do_something_else

Back to our initial problem: finding an optimal table assignment. Just for fun, we will also try to find a worst table assignment, so the search algorithm should be given a comparison routine that allows the algorithm to select the more “appropriate” table assignment when comparing two.

We will therefore consider the following two comparison functions:

let better_selector assignment assignment' =
    if assignment_value assignment > assignment_value assignment'
    then assignment
    else assignment'

let worse_selector assignment assignment' =
    if assignment_value assignment > assignment_value assignment'
    then assignment'
    else assignment
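To see the two selectors in action, here is a tiny self-contained check. Note that assignment_value is stubbed as the identity on integers purely for illustration – the real function over table assignments was developed in the previous post:

```ocaml
(* Stub: the real assignment_value operates on table assignments. *)
let assignment_value assignment = assignment

let better_selector assignment assignment' =
    if assignment_value assignment > assignment_value assignment'
    then assignment
    else assignment'

let worse_selector assignment assignment' =
    if assignment_value assignment > assignment_value assignment'
    then assignment'
    else assignment

let () =
  assert (better_selector 3 5 = 5);
  assert (worse_selector 3 5 = 3)
```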

Our exhaustive search algorithm will be called find_assignment, and it will be given one of the comparison routines as its only parameter. We will then be able to create the following two convenience functions searching for either a best or a worst assignment:

let best_assignment = find_assignment better_selector
let worst_assignment = find_assignment worse_selector

The basic idea of our exhaustive search algorithm is as follows. At any point in the algorithm we have four objects:

  1. the currently “best” assignment (initially will be None),
  2. a partial assignment (initially will be the empty map),
  3. the set of empty seats  (initially will be the set of all seats), and
  4. the set of persons without a seat (initially will be the set of all persons)

The algorithm will then proceed as follows. If there are no more people without a seat, the partial assignment (which is a full assignment at that point) will be compared with the currently best assignment, and the better one will be returned. If otherwise there are no more seats left, the algorithm will fail, because there are not enough seats for all persons. If there are still seats and persons left, the algorithm will pick the first person unseated and the first unoccupied seat, seat the person there, and recursively find the best assignment. Then, the algorithm will free the seat again, and seat the second person on the same seat and recursively find the best assignment again. This will be done for all persons unseated, so in the end, we’ll find a best assignment.

Here is the code:

let find_assignment selector =
    let people_left = int_list_to_set (List.map (fun (x,_) -> x) people) in
    let seats_left = int_list_to_set (List.map (fun (x,_,_) -> x) table) in
    let rec find reference assignment people_left' seats_left' =
        if IntSet.is_empty people_left' then begin
            match reference with
                None -> Some assignment
            |   Some other_assignment -> Some (selector other_assignment assignment)
        end
        else if IntSet.is_empty seats_left' then
            failwith "There are not enough seats!"
        else
            IntSet.fold (fun person reference' ->
                let people_left'' = IntSet.remove person people_left' in
                let seat = IntSet.choose seats_left' in
                let seats_left'' = IntSet.remove seat seats_left' in
                let assignment' = IntMap.add person seat assignment in
                find reference' assignment' people_left'' seats_left''
            ) people_left' reference
    in
    match find None IntMap.empty people_left seats_left with
        None -> failwith "There must be one assignment."
    |   Some assignment -> assignment

So what do we get in the end? A best seating assignment gets a score of 7.5 and the assignment is:

Hank sits on seat #5
Karen sits on seat #4
Becka sits on seat #1
Mia sits on seat #2
Julian sits on seat #0
Trixi sits on seat #3

A worst seating assignment gets a score of -0.625 and the assignment is:

Hank sits on seat #0
Karen sits on seat #3
Becka sits on seat #1
Mia sits on seat #5
Julian sits on seat #4
Trixi sits on seat #2

This completes our visit to the functional language OCaml. If you would like to download the full source code for the example, please click here.


Scientific Sermon: Extracting Winning Strategies from Parity Game Winning Sets

If you’ve read the last posts in the series, you might remember that solving a parity game means determining the partition into winning sets for both players and providing witnessing winning strategies on these sets.

In today’s post, we’ll have a look at a polytime-equivalent variation. Assume that we have an algorithm $A$ that takes a parity game $G$ and computes the partition into winning sets without providing any witnessing winning strategies, i.e.
$(W_0,W_1) = A(G)$

The question is, can we easily (in the sense that the blow-up is only polynomial-time) create an algorithm $A’$ based on $A$ s.t. $A’$ additionally computes witnessing winning strategies? The answer is yes and the iterative algorithm is fairly easy:

  $(W_0,W_1) := A(G)$
  $E' := \{(v,w) \in E \mid v \in (V_0 \cap W_0) \cup (V_1 \cap W_1)\}$
  $F := E \setminus E'$
  While there are two edges in $E'$ with the same source Do
    $(v,w) :=$ pick one such edge in $E'$
    $(W_0',W_1') := A(G|_{(E' \cup F) \setminus \{(v,w)\}})$
    if $W_0 = W_0'$ then
      $E' := E' \setminus \{(v,w)\}$
    else
      $E' := E' \setminus \{(v,u) \in E' \mid u \not= w\}$
  return the trivial strategies in $G|_{E' \cup F}$ on the winning sets

The basic idea is to reduce the number of strategies for both players on their winning sets until only one strategy remains for each player while maintaining the invariant that after removing strategies there will still be winning strategies left.
We reduce the number of strategies by removing edges from nodes with more than one outgoing edge. Note that we only remove player 0 edges from nodes of the player 0 winning set and player 1 edges from nodes of the player 1 winning set.
Let $e$ be such an edge. Remove it from the game and solve it again using algorithm $A$. If the algorithm returns the same winning sets, we know that there is a winning strategy left for the controller of $e$ that doesn’t require $e$, so we can safely remove it for good. If the winning sets change after removing $e$, we know that $e$ is essential for the winning strategy, so we can remove all other edges except for $e$ emanating from that node.

As we can only remove edges linearly often, the blow-up of $A$ is linear.
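To make the elimination loop concrete, here is a hypothetical OCaml skeleton. The solver $A$ is passed in as a black box that returns player 0’s winning set, the game is represented simply by its edge lists, and all names are made up for illustration – this is a sketch under those assumptions, not a real parity game library.

```ocaml
module IntSet = Set.Make(Int)

(* Sketch of the edge-elimination loop.  [solve] stands in for the
   black-box algorithm A and returns player 0's winning set;
   [candidates] are the edges E' emanating from nodes owned by their
   winner, [fixed] are the remaining edges F. *)
let extract_strategy solve candidates fixed =
  let w0 = solve (candidates @ fixed) in
  let rec loop e' =
    (* look for a source node with more than one outgoing edge in E' *)
    let multi =
      List.find_opt
        (fun (v, _) ->
          List.length (List.filter (fun (v', _) -> v' = v) e') > 1)
        e'
    in
    match multi with
    | None -> e'                  (* every node has a unique choice left *)
    | Some (v, w) ->
        let without = List.filter (fun edge -> edge <> (v, w)) e' in
        if IntSet.equal (solve (without @ fixed)) w0 then
          loop without            (* (v,w) is not needed: drop it for good *)
        else                      (* (v,w) is essential: drop its siblings *)
          loop (List.filter (fun (v', w') -> v' <> v || w' = w) e')
  in
  loop candidates
```

When the loop terminates, every source node keeps exactly one outgoing edge, i.e. the remaining edges form the positional strategies on the winning sets.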

In the next post of the series, we will consider a related problem: given a pair of (total) strategies for both players that are guaranteed to be winning on the respective winning sets (without being given the winning sets), can we determine the winning sets easily?


Applied Engineering: Modules, Functors and Maps

In the last post of the series, we introduced the functional programming language OCaml and basic types like lists. We applied higher-order functions like map to them (recall that a higher-order function essentially is a function taking a function as an argument).

Recall that we’ve considered the list of people:

let people = [
  (0, "Hank");
  (1, "Karen");
  (2, "Becka");
  (3, "Mia");
  (4, "Julian");
  (5, "Trixi")
]

If we now want to look up the name of the person with id, let’s say, 4, we can only walk through the list until we find an entry that matches the respective number. This is a very inefficient algorithm for looking up data (its worst-case and average-case complexity are both linear). We can be much faster using a different data structure that stores all entries in order and applies binary search. The classic structure is a balanced tree, and all database systems rely on such an organization of data. Luckily, OCaml already provides a module for handling such data. A module can be seen as a namespace for types and functions that belong together.

We will use the map module, which allows us to quickly look up a key and return a corresponding value – in our case, the keys would be the people’s ids and the values would be the people’s names. The map module is general enough to allow arbitrary keys and arbitrary values to be stored in a balanced tree.

If you think about that for a minute, you might see an issue here. In order to organize the data in the tree, you need to be able to compare keys with each other, so they can be arranged in the right order. But how would the map module compare stuff with each other if the map module doesn’t know how to compare certain types? Well, it doesn’t. We have to specify how our keys should be compared with each other.

How do we explain to the map module how our keys can be compared with each other? By parametrizing the map module by an auxiliary module that explains the key comparison. More precisely, we map the auxiliary module to a concrete map module by a functor (a mapping from a module to a module).

module IntMap = Map.Make(struct
    type t = int
    let compare = compare
end)

In other words: we define the module IntMap to be the Map.Make functor applied to the auxiliary module with key type int (our people’s ids) and the comparator compare (OCaml provides a “magic” compare function for every type that we use here – for ints, it’s the natural ordering on integers).

Next, we want to convert our list of people to an IntMap, mapping ids to people’s names. In order to build and use the map, we need three IntMap-functions: one to create an initial empty IntMap-instance, one to add assignments to it and one to look up keys.

IntMap.empty: 'a IntMap.t
IntMap.add: int -> 'a -> 'a IntMap.t -> 'a IntMap.t
IntMap.find: int -> 'a IntMap.t -> 'a

Recall that ‘a is a type variable. In our case, the type variable corresponds to the value type – and since the map does not care about the specifics of this type, we don’t have to fix it via any auxiliary modules and functors.

The first function, IntMap.empty, simply returns an empty instance of an IntMap. The second function, IntMap.add, takes a key, a value and an existing IntMap as arguments and returns a new IntMap that added the new key-value assignment to the existing IntMap. The third function, IntMap.find, takes a key and an existing IntMap as arguments and returns the corresponding value (assuming that the key-value pair is present).
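As a quick, self-contained illustration of these three functions (the IntMap definition from above is repeated so the snippet runs on its own):

```ocaml
module IntMap = Map.Make(struct
    type t = int
    let compare = compare
end)

(* build a map with two entries, then look one of them up *)
let m = IntMap.add 1 "Karen" (IntMap.add 0 "Hank" IntMap.empty)
let name = IntMap.find 0 m    (* evaluates to "Hank" *)
```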

Our conversion function therefore looks as follows:

let int_list_to_map int_list =
	let empty_map = IntMap.empty in
	let rec helper rest_list acc_map =
		match rest_list with
			[] -> acc_map
		|	(domain, range)::rest_list' ->
				helper rest_list' (IntMap.add domain range acc_map)
	in
	helper int_list empty_map

The function starts with an empty map and then walks recursively through the list, adding the assignments one by one to the map. We can now use it to generate our people mapping and use it to look up people’s names:

let people_map = int_list_to_map people
let people_func p = IntMap.find p people_map

Recall that in our initial post, we had two more lists – the list of constraints (which person likes to sit next to which person) and the table configuration (which seat is close to which seat). Every entry in each of these lists is a triple – two ids and a value (how likable / how close).

let constraints = [
	(0, 1, 1.0);
	(0, 2, 1.0);
	(0, 3, -0.5);
	(0, 4, -1.0);
	(1, 0, 0.75);
	(1, 2, 1.0);
	(1, 3, 0.5);
	(1, 4, 0.5);
	(1, 5, -0.75);
	(2, 0, 0.5);
	(2, 1, 0.5);
	(2, 3, 0.75);
	(2, 4, -0.75);
	(3, 0, 1.0);
	(3, 5, 0.5);
	(4, 1, 0.5);
	(5, 0, 1.0);
	(5, 1, -0.5)
]

let table = [
	(0, 1, 1.0);
	(0, 4, 1.0);
	(1, 0, 1.0);
	(1, 4, 1.0);
	(1, 2, 1.0);
	(1, 5, 0.5);
	(2, 3, 1.0);
	(2, 1, 1.0);
	(2, 5, 1.0);
	(2, 4, 0.5);
	(3, 2, 1.0);
	(3, 5, 1.0);
	(4, 0, 1.0);
	(4, 1, 1.0);
	(4, 5, 1.0);
	(4, 2, 0.5);
	(5, 3, 1.0);
	(5, 4, 1.0);
	(5, 2, 1.0);
	(5, 1, 0.5)
]

We again want to convert both lists to maps, but in this case, the key consists of two ids, i.e. is an int product-type. We need to create a new module Int2Map by mapping a new auxiliary module (that explains how to compare key pairs with each other) to a map:

module Int2Map = Map.Make(struct
	  type t = int * int
	  let compare = compare
end)

let int2_list_to_map int2_list =
	let empty_map = Int2Map.empty in
	let rec helper rest_list acc_map =
		match rest_list with
			[] -> acc_map
		|	(domain, domain2, range)::rest_list' ->
				helper rest_list' (Int2Map.add (domain, domain2) range acc_map)
	in
	helper int2_list empty_map

let constraint_map = int2_list_to_map constraints

let table_map = int2_list_to_map table

We again want to define functions to look up the respective values by key pairs. There will be cases, however, in which certain key pairs don’t exist in the mappings. We didn’t specify, for instance, any proximity for the two seats 0 and 2 with respect to each other, because we implicitly assign them the proximity value 0. The same holds true for the constraints. We therefore need to make sure that our look-up routines catch the exception that is raised when no key pair could be found.

let constraint_func p q =
	try
		Int2Map.find (p, q) constraint_map
	with
		Not_found -> 0.0

let table_func p q =
	try
		Int2Map.find (p, q) table_map
	with
		Not_found -> 0.0

Here we say: try to execute the look-up code provided by the Map module; if it raises the exception Not_found, return 0 instead.
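The try/with pattern can be seen in isolation in the following self-contained sketch (the map and its single entry are made up just for this illustration):

```ocaml
module Int2Map = Map.Make(struct
    type t = int * int
    let compare = compare
end)

let example_map = Int2Map.add (0, 1) 1.0 Int2Map.empty

(* return the stored value, or 0.0 if the key pair is absent *)
let lookup p q =
    try Int2Map.find (p, q) example_map with Not_found -> 0.0

let present = lookup 0 1    (* 1.0 *)
let absent  = lookup 0 2    (* 0.0, the default *)
```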

Next, we need to define a new type for handling dinner table assignments. It should be a map from a person’s id to a seat id – in other words an int IntMap.t. To play around with it, let us setup a test assignment that assigns each person the seat with the same id:

let test_assignment = int_list_to_map [(0,0); (1,1); (2,2); (3,3); (4,4); (5,5)]

So far, so good. To get a feeling of what’s going on, let us print the table assignment. We will make use of another higher-order function provided by the Map module that allows us to walk through the map:

IntMap.fold: (int -> 'a -> 'b -> 'b) -> 'a IntMap.t -> 'b -> 'b

This looks pretty complicated, so let’s check the details. The first argument is a function that accepts a key and a value, and maps some data (of type ‘b) to updated data (of type ‘b again). The second argument is the map and the third argument is some initial data (of type ‘b). Overall, the routine returns data of type ‘b. Internally, the function starts with the initial data and the first key-value pair and calls the user function to obtain an updated data object. It continues in that fashion with the second key-value pair until all key-value pairs have been handled. The final updated data object is then returned.
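A tiny self-contained example might help; the map here is made up just to show the mechanics. Counting the entries and summing the keys are both instances of fold with an int accumulator:

```ocaml
module IntMap = Map.Make(struct
    type t = int
    let compare = compare
end)

let m = IntMap.add 5 "c" (IntMap.add 3 "b" (IntMap.add 1 "a" IntMap.empty))

(* start with 0 and add 1 for every key-value pair *)
let size = IntMap.fold (fun _ _ n -> n + 1) m 0               (* 3 *)

(* same idea, but accumulate the sum of all keys instead *)
let key_sum = IntMap.fold (fun key _ acc -> key + acc) m 0    (* 9 *)
```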

We will make use of it to build a textual representation of dinner table assignments:

let format_assignment assignment =
	IntMap.fold (fun person seat acc_format ->
		acc_format ^
                people_func person ^
                " sits on seat #" ^
                string_of_int seat ^ "\n"
	) assignment ""

How does it work? It starts with the empty string “” and adds a line for every assignment pair in the user-defined function. The current string is given by the accumulator variable acc_format. The output accumulator is built by concatenating (denoted by ^ in OCaml) the accumulator variable with the person’s name and the seat number.

Applying that to our test assignment, we get the following output:

# print_string (format_assignment test_assignment);;
Hank sits on seat #0
Karen sits on seat #1
Becka sits on seat #2
Mia sits on seat #3
Julian sits on seat #4
Trixi sits on seat #5

We conclude the post by realizing the assignment value function of our initial post that allows us to rate a given dinner table assignment:
$\sum_{p,q \in People} constraint(p,q) \cdot proximity(assign(p), assign(q))$
We make extensive use of the fold operation again (which corresponds to the sum operation here):

let assignment_value assignment = 
	IntMap.fold (fun p _ acc_value ->
		IntMap.fold (fun q _ acc_value' ->
			acc_value' +. constraint_func p q *.
                        table_func (IntMap.find p assignment)
                                   (IntMap.find q assignment)
		) people_map acc_value
	) people_map 0.0

For the record, the assignment value of our test assignment is 3.5. We should find an assignment that yields a higher value, right? We will see how to accomplish that in our final post in the series.


Scientific Sermon: Parity Game Solving

Last time, we saw what parity games look like and what their complexity-theoretic status is – we can solve them by a non-deterministic polytime algorithm, but we don’t know how to do it in deterministic polytime.

In this post, we will see one of the most natural algorithms for solving parity games deterministically. Along the way, we will see why parity games are positionally determined, i.e. why every node is won by one of the two players using a positional strategy, no matter how the opponent plays. Recall that a general strategy is a plan that tells the player which outgoing edge to take from a node by taking the whole history of the play into account. Positional strategies, on the other hand, don’t care about the history of the play. In other words, no matter what the history looks like, a positional strategy will always choose the same outgoing edge from a node.

Before we get started, let us fix a mathematical formulation of parity games first. We say that $G=(V,V_0,V_1,E,\Omega)$ is a parity game if $(V,E)$ is a directed, total graph – i.e. the arena on which the game is based -, $V_0$ and $V_1$ are a partition of $V$ into the node sets of each player – if you recall the last post, the circular nodes belong to $V_0$ while the rectangular nodes belong to $V_1$ -, and $\Omega: V \rightarrow \mathbb{N}$ is the priority assignment function that maps each node to a natural number, its priority.

The algorithm is based on the notion of attractors. Let $i=0,1$ be one of the two players. Given a set of nodes $U$, the $i$-attractor of $U$ is the least superset $A$ of $U$ such that player $i$ can force every play starting in a node $v \in A$ to visit a node $u \in U$. Intuitively, you can think of $U$ as a set of nodes beneficial to player $i$ and $A$ to be an extension in which every path, no matter what the other player is doing, leads to the set $U$.

Mathematically, the attractor $A$ is determined iteratively, starting with $A := U$. In every step, we include all nodes of player $i$ that have an edge leading directly to $A$. Obviously, player $i$ can force these nodes to visit $A$ and, by induction, to visit $U$. Also, we include all nodes of the other player whose outgoing edges all lead to $A$. In other words, no matter what player $1-i$ does, he ends up in $A$ and, by induction, in $U$. This process is iterated until no new nodes have been added to $A$. It is immediate to see that the attractor computation can be done in deterministic polytime. Let $Attr_i(U)$ denote the $i$-attractor of $U$.

It should be easy to see that player $i$ can force a visit from $A$ to $U$ by a positional strategy.
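The iterative computation can be sketched in a few lines of OCaml. The representation – nodes as ints, the game given by an edge list plus an ownership function – is made up for this illustration, and the sketch assumes a total arena (every node has a successor):

```ocaml
module IntSet = Set.Make(Int)

(* compute the i-attractor of [u] in the game given by [edges] and [owner] *)
let attractor owner edges i u =
  let succs v =
    List.filter_map (fun (x, y) -> if x = v then Some y else None) edges in
  let rec grow a =
    let a' =
      List.fold_left
        (fun acc (v, _) ->
          if IntSet.mem v acc then acc
          else if owner v = i
                  && List.exists (fun s -> IntSet.mem s acc) (succs v) then
            IntSet.add v acc    (* player i can move into the attractor *)
          else if owner v <> i
                  && List.for_all (fun s -> IntSet.mem s acc) (succs v) then
            IntSet.add v acc    (* the opponent cannot avoid the attractor *)
          else acc)
        a edges
    in
    if IntSet.equal a a' then a else grow a'
  in
  grow u
```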

The algorithm for solving parity games that we consider here is called the recursive algorithm for parity games. It is based on a recursive descent in the number of priorities. The base case is the empty game. Assume now that the game $G$ is not empty and let $p$ be the largest priority occurring. Let $i$ be the player “owning” the priority, i.e. $i = p \mod 2$. Let $U$ be the set of nodes with priority $p$. It is easy to see that every play that visits $U$ infinitely often is won by player $i$, because the largest priority occurring is $p$. So consider $U$ to be a “good” region for player $i$. We can even extend $U$ to a potentially larger good region by building the $i$-attractor $A = Attr_i(U)$. Every play that visits $A$ infinitely often is won by player $i$.

The question remains whether $i$ can enforce infinitely many visits to $A$. In order to answer that question, we consider the game $G’ = G \setminus A$ in which all $A$-nodes and $A$-edges have been removed. Note that the arena $G’$ is total again only due to the fact that $A$ is an attractor. By recursion, we solve $G’$ and obtain a partition into winning sets for both players $W_i’$ and $W_{1-i}’$.

There are two cases now. The easy one is $W_{1-i}’ = \emptyset$, i.e. player $1-i$ can’t win anything in the sub game $G’$. In that case, player $i$ wins from every node in $G$, because if player $1-i$ decides to visit $A$ infinitely often, player $i$ wins by forcing the visits to $U$, and if player $1-i$ decides to visit $A$ only finitely often, player $i$ wins by induction. It is easy to see by induction that this can be done with a positional strategy.

The other case is $W_{1-i}’ \not= \emptyset$. Here, we only know for sure that player $1-i$ can win on $W_{1-i}’$, as player $i$ cannot escape from $W_{1-i}’$ to $A$ (because $A$ is an attractor already). So what can we do now? We enlarge $W_{1-i}’$ again by computing the $1-i$-attractor $B = Attr_{1-i}(W_{1-i}’)$. It is immediate to see that player $1-i$ wins on $B$ with a positional strategy. We consider a second sub game $G” = G \setminus B$ and solve it recursively again. We obtain a partition into winning sets $W_i”$ and $W_{1-i}”$, which we can use to define the winning sets for $G$, i.e. $W_i := W_i”$ and $W_{1-i} := W_{1-i}” \cup B$.
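Putting the pieces together, the whole algorithm fits into a short sketch. The following OCaml code is an illustrative implementation that computes the winning sets only (no strategies); the game representation – a list of (node, owner, priority, successors) tuples – and all names are invented for this example:

```ocaml
module IntSet = Set.Make(Int)

(* i-attractor of [u] in [game]; successors are cut down to the subgame *)
let attractor game i u =
  let nodes = List.map (fun (v, _, _, _) -> v) game in
  let entry v = List.find (fun (x, _, _, _) -> x = v) game in
  let owner v = let (_, o, _, _) = entry v in o in
  let succs v =
    let (_, _, _, s) = entry v in List.filter (fun w -> List.mem w nodes) s in
  let rec grow a =
    let a' =
      List.fold_left
        (fun acc v ->
          if IntSet.mem v acc then acc
          else if owner v = i
                  && List.exists (fun w -> IntSet.mem w acc) (succs v) then
            IntSet.add v acc
          else if owner v <> i && succs v <> []
                  && List.for_all (fun w -> IntSet.mem w acc) (succs v) then
            IntSet.add v acc
          else acc)
        a nodes
    in
    if IntSet.equal a a' then a else grow a'
  in
  grow u

(* the recursive algorithm: returns the winning sets (W_0, W_1) *)
let rec solve game =
  if game = [] then (IntSet.empty, IntSet.empty)
  else
    let p = List.fold_left (fun m (_, _, pr, _) -> max m pr) 0 game in
    let i = p mod 2 in
    let u =
      IntSet.of_list
        (List.filter_map
           (fun (v, _, pr, _) -> if pr = p then Some v else None) game) in
    let a = attractor game i u in
    let g' = List.filter (fun (v, _, _, _) -> not (IntSet.mem v a)) game in
    let (w0', w1') = solve g' in
    let wo' = if i = 0 then w1' else w0' in  (* opponent's winning set in G' *)
    if IntSet.is_empty wo' then begin
      (* easy case: player i wins from every node of G *)
      let all = IntSet.of_list (List.map (fun (v, _, _, _) -> v) game) in
      if i = 0 then (all, IntSet.empty) else (IntSet.empty, all)
    end
    else begin
      (* hard case: enlarge the opponent's winning set and recurse again *)
      let b = attractor game (1 - i) wo' in
      let g'' = List.filter (fun (v, _, _, _) -> not (IntSet.mem v b)) game in
      let (w0'', w1'') = solve g'' in
      if i = 0 then (w0'', IntSet.union w1'' b)
      else (IntSet.union w0'' b, w1'')
    end
```

For instance, on the two-node cycle where node 0 (player 0, priority 0) and node 1 (player 1, priority 1) point at each other, the only play sees priority 1 infinitely often, so the sketch assigns both nodes to player 1’s winning set.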

I’ve updated the respective Wikipedia article as well where you can find a pseudocode outline of the algorithm.

The recursive algorithm is probably the most straight-forward algorithm for solving parity games. It can be easily seen that the algorithm is potentially exponential because of the two recursive descents and one can, in fact, provide a family of games on which the algorithm runs in exponential time. Interestingly, the algorithm is one of the fastest in practice. It also shows that all nodes of a parity game can be partitioned into the winning sets of both players and that they can win with a positional strategy on their respective winning sets.

The next post in this series will consider some interesting, seemingly simpler subclasses of parity games that would allow us to define a polytime algorithm for general parity games, given that we would have a polytime algorithm for one of the simpler classes.
