
Tech Scene: Payment Models for Digital Goods

In the history of the internet, a couple of different payment models have emerged for digital goods. There are various types of digital goods, and many of them existed long before the internet: books, news, music, videos, TV shows and TV channels, to name a few. Some goods, like virtual goods in computer games, are inherently digital.

In the early days of the internet, there was only one payment model: free digital goods sponsored by advertising, and we still have that model today. But only a few digital goods were available back then, mostly news articles.

Newer models have emerged since then. The music and movie industries, for instance, decided to offer most of their goods via direct payment per item. For virtual goods like those in Zynga games, the freemium model disrupted major parts of the gaming industry: you can play the games for free, but once you’re hooked, you have to pay in order to succeed.

The disruption of the direct payment model for music is now driven by services like Spotify or Simfy. Instead of paying for every single item, you pay a flat monthly fee to listen to any music you want. You no longer own the titles; you just get the right to listen to them for as long as you pay for the service.

Which payment model is best depends on the type of goods you offer. I think there are three categories of digital items: collectable, consumable and enabling items. A collectable item has an intrinsic value to people and they want to own it (even after consuming it). Consumable items on the other hand only have value to people before they’ve consumed them. An enabling item allows people to improve on something existing, so they need to have it to get one step further.

Of course, no item falls completely into one category, and where to put an item also depends on the mindset of the customer. To me, books and movies are mostly collectable items, which is why we still mainly use direct payment for such goods. TV shows and music, on the other hand, are more consumable: sure, there are some shows and some music that pop-culture aficionados want to watch or listen to all day long over and over, but most shows are only consumed once or twice. Likewise, we usually listen to a number of songs until we get bored of them and move on; we don’t really care whether we still have them. Hence, flat fee models are on the rise for consumable items. It goes without saying that the freemium model fits enabling items such as virtual goods best.

What category is news? Most of the time, it is a consumable good, because a news item is usually only relevant to you until it isn’t new anymore; that’s what news is, after all. Sure, news portals do contain articles with long-term value, but most content is only relevant for a short time. So what’s the right payment model? In the early days of the internet, news was free to the consumer and sponsored by advertising. It was acceptable for a while that this payment model didn’t produce much revenue, because real news publishing was still happening offline. Nowadays, digital news portals have to generate more revenue as the offline revenue stream breaks down.

So what other payment models are there? News publishers transfer their issue-based offline system to the net by offering flat fee subscriptions on a monthly or yearly basis, or per-issue payment as in the offline world. Considered at a fine-grained level, a news issue is itself a flat fee deal: you pay once for reading any number of articles in the issue. So that’s the flat fee model for a consumable item, which makes sense. But not enough people are going for it yet, as there are still many free news websites.

There is a tendency among news publishers to introduce (soft) paywalls to account for that. A (soft) paywall restricts your access to digital goods, articles in particular, by requiring you to pay a very small fee per article you read, or by allowing you a small number of free articles until you have to pay either per article or by subscription.

Time will tell whether per-article payment works. Since articles are mostly consumable goods of short relevance with many free competitors, I would rather suggest a different payment model: neither free, nor a flat fee, nor a plain per-article payment. The problem with per-article payment is that you can’t really give people a demo of what they’re going to get, so people hesitate to pay for an article because they might regret it if it doesn’t meet their expectations.

My suggestion therefore would be a model that I call “capped satisfactory payment”. It consists of two rules. First, you pay on a per-article basis, but if you didn’t like the article after reading it, you can get your money back. Obviously, you combine that with a fair-use policy and tracking of rejected articles to exclude free riders. Second, the overall payment for read articles is capped per issue: if a reader really digests a lot of articles in an issue, he shouldn’t pay much more than the flat fee for the whole issue would have been. To my mind, this model clearly communicates that you take the readers’ opinions seriously while dramatically lowering the hurdle to read and pay for an article.
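
To make the two rules concrete, here is a minimal OCaml sketch; the per-article fee of 20 cents and the issue cap of 250 cents are made-up placeholder values, not part of the model itself:

```ocaml
let article_price = 20   (* hypothetical per-article fee, in cents *)
let issue_cap = 250      (* hypothetical flat fee for the whole issue, in cents *)

(* [reads] lists the reader's verdicts for one issue:
   true = satisfied (keeps paying), false = rejected (refunded).
   Rule 1: only satisfied articles are billed.
   Rule 2: the total is capped at the issue's flat fee. *)
let payment reads =
  let satisfied = List.filter (fun kept -> kept) reads in
  min (List.length satisfied * article_price) issue_cap
```

With these placeholder prices, a reader who keeps three articles pays 60 cents, while a reader who keeps twenty articles still only pays the 250-cent cap.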



Applied Engineering: Types, Lists and Functions

In the last post, we introduced the problem of assigning seats at your dinner table to your guests in an optimal way – and by optimal, we mean that as many constraints as possible are satisfied as accurately as possible.

In order to solve the problem, we use the functional programming language OCaml. Functional programs are very close to mathematical formulations – it is about the definition of data and functions operating on data and not so much about how to compute stuff with it. So let us define the list of people first. Every person is described by a number and his or her name – mathematically, we describe that by a pair, which is exactly what happens in OCaml:

(0, "Hank")

OCaml is a typed language, so it will tell you the type of the definition you just made. The type corresponds to the set-theoretic universe in which the pair lives. Integers like 0 have the type int while words like “Hank” have the type string. Therefore OCaml will infer the following type (you can try that out by starting “ocaml” in your terminal and entering the pair terminated by “;;”):

- : int * string = (0, "Hank")

That makes sense – a pair (“*”) of an integer and a string.

The description of the whole table is a different story – in general, we have an arbitrary number of people. Mathematically, we could describe that by a set, and a similar type exists in functional languages as well. But we first introduce a simpler type: the list. A list contains an arbitrary number of objects of the same type in a fixed order. In OCaml, we introduce the list of persons as follows:

let people = [
(0, "Hank");
(1, "Karen");
(2, "Becka");
(3, "Mia");
(4, "Julian");
(5, "Trixi")
]

Again, OCaml infers the type, which is a list of pairs:

val people : (int * string) list

Next, we want to crunch some numbers. For starters, let’s try to find out how many people there are in the list. As a mathematician, one would define a recursive function that sums up the number of objects in the list – and that’s exactly what we will do in a functional language:

let rec cardinal list =
  match list with
    [] -> 0
  | obj::list_rest -> 1 + cardinal list_rest

The first line introduces a definition again – we define a recursive function named “cardinal” that takes one argument named “list”. The function is defined by pattern matching on the given list. If the list is empty “[]”, the function returns 0. Otherwise, the list can be partitioned into a head named “obj” and a tail named “list_rest”, and the number of items in the list can be computed by counting the number of items in the tail of the list plus 1.

OCaml gives our function “cardinal” a type again:

val cardinal : 'a list -> int

Therefore “cardinal” is a function that maps a list over a type variable ‘a to an integer. The type variable means that “cardinal” does not depend on what the objects of the list look like. Indeed, we can apply “cardinal” to our list of people:

cardinal people;;
- : int = 6

Let us consider another example of a recursive definition. We could be interested in obtaining the list of names without the integers. For this, we actually have to solve two problems: obtaining an element of a pair and mapping every object of a list to a new object. In order to select the second element of a pair, we use pattern matching again:

let second (x,y) = y

OCaml infers the following type:

val second : 'a * 'b -> 'b

Therefore, “second” is a function that takes a pair with type variables ‘a and ‘b and maps it to the type variable ‘b. In other words, the function does not care about the type of the first entry of the pair and preserves the type of the second entry of the pair.

We continue with mapping a list of objects to a new list of new objects. For this, we assume that “f” is a function that maps an object to a new object. Then, we can define the mapping operation as follows:

let rec map f list =
  match list with
    [] -> []
  | obj::list_rest -> (f obj)::(map f list_rest)

As before, we define “map” to be a recursive function that takes “f” and “list” as arguments. If “list” is an empty list, it returns an empty list as well. Otherwise, the list can be partitioned into a head “obj” and a tail “list_rest”. We apply “f” to “obj” in order to get a new object and use it as the head of our new list, which we create by calling “map” recursively on the tail of the list.

OCaml infers the following type:

val map : ('a -> 'b) -> 'a list -> 'b list

The type might look scary at first sight – it says the following: “map” is a function that maps the argument “f” to a function that maps the argument “list” to a result. The first argument “f” has the type (‘a -> ‘b) which requires “f” itself to be a function that maps something of type ‘a to something of type ‘b. Then, “map” maps a list with objects of type ‘a to a list with objects of type ‘b.

We can now apply “map” to our function “second” that selects the second entry of a pair:

map second;;
- : ('a * 'b) list -> 'b list = <fun>

In other words, “map second” is now a function that takes a list of object pairs and maps it to a list of objects that share the type of the second pair items of the original list. If we now apply this to our list of people, we get the following result:

map second people;;
- : string list = ["Hank"; "Karen"; "Becka"; "Mia"; "Julian"; "Trixi"]
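
As an aside (not covered in the post itself), OCaml’s standard library already provides all three building blocks we just wrote by hand: List.length corresponds to our “cardinal”, snd to our “second”, and List.map to our “map”:

```ocaml
(* the same list of people as above *)
let people =
  [ (0, "Hank"); (1, "Karen"); (2, "Becka");
    (3, "Mia"); (4, "Julian"); (5, "Trixi") ]

(* List.length : 'a list -> int                    -- our "cardinal"
   snd         : 'a * 'b -> 'b                     -- our "second"
   List.map    : ('a -> 'b) -> 'a list -> 'b list  -- our "map" *)
let count = List.length people
let names = List.map snd people
```

So in real code you would rely on the standard library; writing them yourself, as we did here, is the best way to understand recursion and type inference.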

In our next post in the series, we will consider advanced types like Sets and Maps, and introduce a type for describing table assignments, bringing us closer to the solution of our problem.


Modern Living: Information Overload

We live in the era of abundant information; in particular, it is real-time information that competes for our attention. And our peers obviously know that all this information is immediately available to us, so we are supposed to stay informed. It has become common sense to be aware of new information and to react to it quickly.

There are several different kinds of digital information that we are flooded with on an hourly basis.

First, there are social networks like Facebook or Google+ that keep you up to date with your friends’ interactions, activities and thoughts. You might feel like you’re missing out on some important information, so you find yourself checking your wall every now and then. However, a lot of that time is spent reading through spam postings. Then there is the real-time service Twitter that feeds you the newest bits of information from the people you follow. Again, you have to spend an awful lot of time reading through spam tweets.

Second, there are the classical news portals like the New York Times that provide you with serious news about politics, the economy, sports, arts, culture and all that. Since you’re supposed to stay informed, you regularly check the news sites throughout the day to see if something important has happened. Most of the time, however, you end up reading lurid or funny articles that you wouldn’t describe as being of any importance to you. In fact, you would be happier if you hadn’t spent time reading them, but since you stumbled upon them, you felt the immediate urge to consume them.

Third, there are special interest blogs and sites that feed you with information about particular subjects. Usually, only a handful of articles are really worth a read from your point of view, but you still have to check all posts to see what deserves your attention. And again, it happens that you get distracted by posts of no particular importance to you.

There are a couple of problems related to the way we digest digital information today. First and foremost, there is the problem of scanning: most of the incoming information bits are not relevant to us. But we have to commit our attention to every single item, at least for a short amount of time, to decide whether it is of any relevance to us. In other words, we are filtering through the content.

Second, there is the problem of content aggregation: information bits that belong together but come from different sources are not grouped. This entails the problems of invalidation, duplication and subsumption. We want to read information bits that belong to one context in one go. We do not want to read information that is no longer valid or that has been superseded by an update. We do not want to read the same content twice. And we do not want to read an article that is contained in another one (because then we read content twice again).

Third, there is the problem of temporal relevance. On the one hand, we need to decide whether an information piece needs our immediate attention or whether we can consume it at a later stage. On the other hand, when we are about to consume an information piece, we need to decide whether it is still relevant. If it’s not, there is no point in reading it, even if we had decided to save it for later (but intentionally not that late) in the first place.

There are a couple of software solutions that try to solve the problem. Google News, for instance, aggregates news and presents them in a uniform way. At least for news, that solves the problem of duplication and aggregation. However, you never know whether the aggregation shows you real duplicates and you don’t really know whether one article is subsumed by another. There is also no way of invalidating news – there is only an implicit invalidation given by temporal distance and number of readers per temporal distance, but that isn’t necessarily the right way of invalidating news.

Other software systems like Flipboard or Prismatic ask you for your interests and then try to compile a personalized dashboard with information pieces published by news sites, blogs and your social networks that you might enjoy. However, the algorithms are not transparent to their users. Sure, there is a lot of statistics going on, association rule mining and all that, but none of it is transparent to the layman who just wants to consume relevant information, and so these systems do not solve the problem of filtering accurately. They might filter out information that would have been relevant to you. Even more problematically, they filter out information types that are completely new to you, so you couldn’t even say whether you’re interested in them or not. In other words, the discovery of new information is not possible with such systems in a satisfactory way.

The problem of temporal relevance is not handled in any useful way by any of these applications.

To my mind, a combination of both software and people could help to ease the problem of information overload. The software would be more of a framework like Wikipedia that allows people to commit their capacity to solve the problem of collective information overload.

First, the problem of filtering could be solved in several ways by involving people. Instead of letting software filter information streams for you, you could subscribe to filtered and aggregated streams published by people you trust. There could be a stream by Guy Kawasaki featuring the important articles from the important tech blogs about the relevant new Valley hot shots. There could be a stream by Kofi Annan on important articles on international policy. And so on. I would trust experts much more than any algorithm to filter information for me.

Second, the problem of content aggregation and the issues it entails could also be solved by people. They could mark posts as invalidated, as duplicates, or as subsumed by other posts. They could even digest the most important information contained in a lengthy article. In many cases, I would rather subscribe to the digested version of economic news that just outlines an article’s facts about a company.

Third, the problem of temporal relevance can also easily be decided by the people. If there is an article about movie history, there is obviously almost no temporal relevance at all. If there is a feature on the upcoming soccer game, it is only relevant as long as the soccer game hasn’t happened.

All this can be decided by the people. A software system would give people the capabilities to act on information in such ways and would promote a peer-reviewed process like Wikipedia’s to prevent bias and vandalism. What would really happen is that a small fraction of people devote their time to filtering, tagging, digesting and selecting information so that a large fraction of people can save time – because today, all these things are done in parallel by everyone individually. And that seems like a waste of time from society’s point of view.

Scientific Sermon: Parity Games & Complexity

Last time, we described how parity games could be seen as an algorithmic backend to verification problems in practice. In this post of the series, we introduce parity games concretely and explain why they are an interesting subject on their own from a complexity theoretic (we’ll get to that) point of view.

From a game-theoretic point of view, a parity game is a two-player zero-sum perfect-information game. Let’s decode what that means. First, two-player game means that there are two players playing this game, like in chess.

Now, zero-sum means that for one player to win, the other player must lose. Many casual games are zero-sum games. In more general terms, zero-sum means the following: after the two players have played the game, they both get an outcome of the play – that could, for instance, be money that a player gains or has to pay. Whatever amount one player gains must be lost by the other player. In other words, the sum of the outcomes for both players must be zero. In our scenario, we could say that winning results in outcome 1 while losing results in outcome -1, hence the sum will always be zero.

Third, it’s a perfect-information game. That means that both players know the complete state of the game at any time. Chess, for instance, is a perfect-information game, because both players can see the whole board. Most card games, on the other hand, are imperfect-information games, because the players can’t see which cards the other players have. They can guess, of course, and compute probabilities, but in general they don’t know for sure which cards the other people hold.

So what do parity games look like? First of all, mind the plural: parity games really describe a class of games – there are infinitely many different parity games. To put that into perspective, the contemporarily popular Sudoku puzzles are also a class of games: there are many different instances of the Sudoku grid to start with.

Parity games are so-called graph-based games. A graph consists of two ingredients: a set of positions, called nodes, and a set of (one-way) connections, called edges. Every edge in a graph connects two nodes with each other. For our purposes, a graph can be seen as an abstraction of a game board: boards usually look grid-like, and every position is directly connected to its immediate neighbors. Parity games, however, don’t only connect immediate neighbors; we additionally want “bridges” that connect nodes that are far apart. So a graph really is an abstraction of the two-dimensional game board.

The nodes (positions) of the graph have two shapes: circular and rectangular. The circular nodes belong to player 0, the rectangular nodes to player 1. For mathematical reasons, we call the players 0 and 1 instead of 1 and 2; we also call them player “Even” and player “Odd”. Furthermore, every node is labeled with a natural number. Intuitively, player Even likes large even numbers and player Odd likes large odd numbers. The game is played by putting a pebble on a designated starting node of the graph. If this starting node belongs to Even, then she can choose one of the outgoing edges (connections) and move the pebble along the chosen edge to a new node. Otherwise, it’s Odd’s turn to choose the edge. The game then continues in that fashion for an infinite number of steps.
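
To make these ingredients concrete, here is one possible OCaml sketch of such a game graph – an illustration only, not an encoding prescribed by this series:

```ocaml
type player = Even | Odd

(* One node of the game graph. *)
type node = {
  owner      : player;   (* who chooses the outgoing edge at this node *)
  priority   : int;      (* the natural number labelling the node *)
  successors : int list  (* indices of the nodes reachable in one move *)
}

(* A parity game is a finite graph of such nodes, indexed by position. *)
type game = node array

(* A toy game: Even's node 0 (priority 2) and Odd's node 1 (priority 1)
   pointing at each other. *)
let toy : game =
  [| { owner = Even; priority = 2; successors = [1] };
     { owner = Odd;  priority = 1; successors = [0] } |]
```

The record simply makes the three ingredients of the definition – owner, priority and outgoing edges – explicit.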

Wait a minute – how could two players play for an infinite number of steps? Of course, this could not be done by real living players. We will see later how this strange issue can be resolved. But bear with me for the time being. Just assume that both players played for an infinite number of steps – resulting, for instance, in the following sequence of nodes:

4, 1, 6, 8, 2, 0, 3, 2, 0, 3, 2, 0, 3, 2, 0, 3, …

Who wins this play? The winning condition is as follows: if the largest number that occurs infinitely often is even, Even wins; otherwise, Odd wins. So in this example play, player Odd wins, because 3 is the largest number (called priority) that occurs infinitely often.
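
We can even make the winning condition executable. Assuming the play is eventually periodic – a finite prefix followed by a cycle repeated forever, which (as discussed later in this post) is what happens when both players play positionally – the priorities occurring infinitely often are exactly those on the cycle. The prefix/cycle representation here is an assumption for illustration:

```ocaml
type player = Even | Odd

(* The winner of an eventually periodic play only depends on the cycle:
   the largest priority on the cycle decides the play. *)
let winner cycle =
  let max_priority = List.fold_left max 0 cycle in  (* priorities are >= 0 *)
  if max_priority mod 2 = 0 then Even else Odd
```

For the example play above, the cycle is 2, 0, 3, and winner [2; 0; 3] indeed yields Odd.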

The algorithmic question is, given a parity game and a designated starting position, can we tell whether one of the two players has a winning strategy against the other player? In other words, is there a clever plan for one of the two players such that no matter what the opponent does, the player is winning? And if so, can we also understand why?

The answer to all these questions is yes: parity games are determined, so there must be a winning strategy for one of the players from a designated starting position; we can also algorithmically decide which of the two players can win; furthermore, we can provide an additional winning strategy that witnesses the clear win of the respective player.

If you think about it, it follows that the witnessing winning strategy must have a finite description; otherwise, there would be no way for the algorithm to print it out. So what could strategies look like for both players?

A strategy should tell the player which outgoing edge to take from a node once the pebble has been put on it. This decision can depend on the whole history of the play, i.e. on everything that has happened beforehand, so in general a strategy can be an object of infinite size. Luckily, parity games are not only determined, but positionally determined. That means that there is no strategic benefit in making decisions depend on the history: if a player chooses to move from node v to node u at some point, then the player should stick to that choice – in other words, whenever the pebble falls back to node v, the player should choose the edge leading to node u again. Such positional strategies are finite objects. So why is there no strategic benefit in non-positional strategies? We will see that whenever we have found a non-positional winning strategy, we can also find a positional winning strategy – so there is no need to hassle with non-positional strategies in the first place.

Consider the example parity game again for a minute. The blue and red areas describe the winning sets of player Even and player Odd, respectively. A winning set is the set of nodes from which the respective player has a winning strategy. The boldly marked blue and red edges are witnessing positional winning strategies. You can try it out yourself: no matter which choices you make as an opponent (be they positional or non-positional), you will lose against the other player in his or her winning region.

Which brings us back to the algorithmic question: deciding which player has a positional winning strategy from a designated starting node. As we will also see, it is easy to determine whether a given positional strategy is indeed a winning strategy. So algorithmically, this problem is in the complexity class NP – but we know neither whether it is NP-hard nor whether it is in P. Because of the positional determinacy, the problem is also contained in coNP. In case you have no idea what I’m talking about, please read the wonderful article by Scott Aaronson on the P vs. NP problem, one of the most important open problems in maths and theoretical computer science. Since there are not many natural problems that are contained in both NP and coNP without being easily seen to be in P, parity game solving touches one of the most interesting subjects of pure complexity theory. In other words, it would be a major breakthrough if we could prove parity game solving to be in P. Or if we could prove it is NP-hard, which (I guess) no one really believes.

The next post in this series will feature an algorithm that solves parity games and prove some interesting results along the way.

Tech Scene: Platform APIs and Standards

This post will be about platform application programming interfaces (APIs), protocols and standards. When we build software that has to integrate with components written by other people or when our software has to communicate with some other program (for instance via the internet), both programs have to agree on a common language. Otherwise, they could not exchange any meaningful data or commands.

The designers of the software can create any language they want for communicating, but all involved components have to agree on it. The way software components talk to each other is usually called a protocol. It can be seen as both the grammar and the vocabulary that all components understand. Your browser, for instance, used the HTTP protocol to retrieve this website from my web server. They both agreed to speak HTTP. The vocabulary, in this case, was a formal way of your browser saying “give me the following page” and my web server replying “there you go” with the full page attached to it.

This set of commands can be seen as an application programming interface: the server specifies which commands it understands. But an API is not necessarily tied to a protocol; it is just an abstract way of specifying the supported command set.

Within the last couple of years, many applications on the internet have developed so-called platform APIs – a way of opening up their applications to other programmers. You could, for instance, write a service that hooks into the Facebook API, so your application could browse through friends, interests and all that.

While all this is great, there is usually no standard attached to these APIs. This means that similar applications offer different APIs; in other words, in order for your application to access the friends of Google+, it has to use a different API than when accessing the friends of Facebook. Note that this completely differs from the HTTP protocol for accessing websites. Whenever your browser requests a page from a server, it uses the exact same command set, because all HTTP servers have the same API.

And that’s great, because it makes browsers so versatile – they can browse every page. The same thing holds true for emails: there is a single API that unifies all mail servers. The email system is even more interesting, as it is completely decentralised (with all its benefits and handicaps).

The reason why systems like web browsing and email work so well together is standards: the internet world and the industry agreed a long time ago to all use these protocols and the associated APIs. Standards contribute to an accessible market, they simplify planning, and they make it much easier for customers to switch between providers of a certain service; the decentralisation makes a standard’s ecosystem robust, reliable and competitive. It even allows users to communicate with each other across providers. Hence, there are a lot of benefits associated with standards.

However, standards also serve as a barrier to innovation and evolution, because it is so hard to change them once they’re successfully in place. The best example is good old email: it’s insecure, out of fashion, full of spam, and yet it is still the most successful communication platform we have on the internet. And it will take a lot of time for this to change.

But the specific platform APIs as we have them now on Facebook, Google+, Instagram, Instapaper, Dropbox, Foursquare, Twitter and so on also have their downsides. Every developer that wants to build on their services has to write specific code for each supported platform. While you can say “I support email”, you can’t really say “I support social networking” – because “social networking” has not been standardised. As a consequence, developers have to spend an extended amount of time to integrate different kinds of platforms and even more importantly have to make a selection of supported services. By this, big players like Facebook are of course favoured while smaller players miss out on the opportunity to be supported by other services.

For the customer, too, it can have unpleasant side effects at times, particularly when a specific service closes down or when the customer wants to move to a different service. Without standards, there is usually no comfortable way to migrate your data. You can’t just move all your likes, interests, statuses or contacts from Facebook to Google+. Similarly, services that store your online playlists, like Simfy or Spotify, don’t allow you to migrate to a competitor. And so on; the list could be continued indefinitely.

For the big players, this is kind of neat, because it protects their markets and user bases, but for the customers, it makes it more difficult to change platforms. It is also not possible to communicate with people on other platforms, something that is, of course, perfectly simple with email. In other words, these “closed systems” with their proprietary platform APIs foster monopolies, which is usually not in the best interest of the customer.

The different incompatible platform APIs have also contributed to another trend, which I would call the middleware service trend: new applications are being built that try to interlink all the different kinds of APIs. This can be on the software-as-a-service level, like ShareThis, but it can also take the form of consumer products like IFTTT.

The best example of where we are still desperately lacking a ubiquitous standard is account management and passwords: you still have to sign up on every single page and keep track of the passwords. This is a mess. There is also the problem of personal data that you want to share with different services, such as your payment information with an online shop. The most promising standard here is OpenID, which is meant to serve as a decentralised authentication service. However, adoption is only so-so. Most websites that feature sign-in via external identity providers preselect “Login via Facebook” or “Login via Twitter” – which again features specific platform APIs instead of standards. And this chains you even further to one of the big players.

It will be very interesting to see whether the OpenID standard will gain serious traction in the future, and how the battle between platform APIs and standards will play out in general.

Applied Engineering: Functional Languages and the Dinner Table Problem

This will be the first post in a series called “applied engineering”. It will feature small tutorials on engineering problems as well as gentle introductions into the art of software engineering. Even non-tech people should be able to follow the series.

My first post in the series considers functional languages, a class of programming languages. Functional languages are mostly used in scientific environments but are now gaining traction in industry as well. I will add a couple of posts on functional languages to get the interested reader started.

As with all language paradigms, there is a bunch of concrete realisations that we can use. For our purposes here, we use OCaml – so if you want to try our little snippets, go ahead and install OCaml on your own machine.

Now what exactly are language paradigms, and what are the different programming languages? Language paradigms are based on the same idea as language families in natural languages – you group languages together that share the same roots. The main language paradigms that we have in computer science are imperative languages, functional languages, markup languages and logic languages, plus object orientation, although object orientation could be seen as a “language topping”: it is mostly, but not exclusively, used in combination with imperative languages.

So we are using the functional language called OCaml. We could also use Haskell – we would stay in the same language family (like staying within the Germanic languages). In other words, once you’ve understood the concepts of functional languages using OCaml, you can easily adapt to Haskell in case you want to.

Let’s start with a real-world problem to see how we could solve it using OCaml. Assume that you host a dinner party and you’re having a hard time assigning people to your table, as some people want to sit right next to each other while others cannot without ruining your evening. After a couple of minutes writing down possible seatings on a piece of paper, you give up – there are just too many combinations.

Hence, let the computer solve the problem for us with an OCaml program – because that’s what programs are all about: you give them some well-structured data – in our case, who does or does not want to sit next to whom – and let them compute a solution from the data – in our case, a favourable dinner table seating assignment.

Before writing a single line of code, let us structure the data. For the purpose of this example, we need a bunch of people, a bunch of constraints (who wants to sit or not sit next to whom) and some table configuration (i.e. which seats are next to each other).

First, let us start with the people. We will assign consecutive natural numbers starting from 0 to all people:

0 Hank
1 Karen
2 Becka
3 Mia
4 Julian
5 Trixi
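To preview how this might look in OCaml – a sketch only; the names `people` and `name_of` are illustrative helpers, not part of the final program – the guest list can be a simple association list from numbers to names:

```ocaml
(* The guest list: each person is identified by a consecutive
   natural number, paired with their name. *)
let people = [
  (0, "Hank");
  (1, "Karen");
  (2, "Becka");
  (3, "Mia");
  (4, "Julian");
  (5, "Trixi");
]

(* Look up a name by number; e.g. name_of 3 evaluates to "Mia". *)
let name_of id = List.assoc id people
```

Association lists are the most direct fit here; for larger guest lists, an array or map would be a natural refinement.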

Second, let us add some constraints. A constraint will be of the following form:

Person A likes Person B by +/-p%

This means that person A would like to sit next to person B with a specific rating, ranging from 0% to 100%. If person A would rather not sit next to person B, we let p% range from -100% to 0%. Note that person A wanting to sit next to person B does not necessarily imply the converse. Let us assume the following constraints:

Hank likes Karen by 100%
Hank likes Becka by 100%
Hank likes Mia by -50%
Hank likes Julian by -100%
Karen likes Hank by 75%
Karen likes Becka by 100%
Karen likes Mia by 50%
Karen likes Julian by 50%
Karen likes Trixi by -75%
Becka likes Hank by 50%
Becka likes Karen by 50%
Becka likes Mia by 75%
Becka likes Julian by -75%
Mia likes Hank by 100%
Mia likes Trixi by 50%
Julian likes Karen by 50%
Trixi likes Hank by 100%
Trixi likes Karen by -50%

This tells us, for instance, that while Hank isn’t too eager to sit next to Mia, she, on the other hand, would like that very much.
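Using the numbering from above, one hypothetical OCaml encoding of these constraints is a list of triples (a, b, p), read as “person a likes person b by p percent”, with percentages stored as plain integers and any missing pair defaulting to 0 (no preference either way):

```ocaml
(* Constraints as triples (a, b, p): person a likes person b by p%,
   with p ranging from -100 to 100. *)
let constraints = [
  (0, 1, 100); (0, 2, 100); (0, 3, -50); (0, 4, -100);
  (1, 0, 75);  (1, 2, 100); (1, 3, 50);  (1, 4, 50); (1, 5, -75);
  (2, 0, 50);  (2, 1, 50);  (2, 3, 75);  (2, 4, -75);
  (3, 0, 100); (3, 5, 50);
  (4, 1, 50);
  (5, 0, 100); (5, 1, -50);
]

(* Pairs not listed default to 0: no preference either way. *)
let constraint_of a b =
  try
    let (_, _, p) =
      List.find (fun (x, y, _) -> x = a && y = b) constraints
    in p
  with Not_found -> 0
```

For example, `constraint_of 0 3` yields -50 (Hank on Mia), while `constraint_of 3 4` yields 0, since Mia has no stated preference about Julian.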

Third, let us add the table configuration. We do this by specifying the proximity between seats, where no proximity specification means that two seats are so distant from each other that the two people sitting there could not be considered next to each other. For our purposes, we assume a small dinner table with one seat on each end and two seats on each side. We will again give every seat a number, assigning 0 and 3 to the left resp. the right end, 1–2 to the bottom side and 4–5 to the top side. We will give the proximity specification in lines of the following form:

i j p%

This means that seat $i$ is in p%-proximity of seat $j$, where 100% means directly next to each other and 50%, for instance, means that the two chairs are near each other but not exactly next to each other. For our table, the specification could look as follows:

0 1 100%
0 4 100%
1 0 100%
1 4 100%
1 2 100%
1 5 50%
2 3 100%
2 1 100%
2 5 100%
2 4 50%
3 2 100%
3 5 100%
4 0 100%
4 1 100%
4 5 100%
4 2 50%
5 3 100%
5 4 100%
5 2 100%
5 1 50%

This table tells us, for instance, that the bottom left seat 1 is right next to the left head seat 0, the bottom right seat 2 and the top left seat 4, while it is only “near” the top right seat 5 and not at all next to the right head seat 3.
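This table can be encoded just like the constraints – again a sketch, with percentages as plain integers and unlisted pairs defaulting to 0 (not next to each other at all):

```ocaml
(* The proximity specification as triples (i, j, p):
   seat i is in p%-proximity of seat j. *)
let proximity_spec = [
  (0, 1, 100); (0, 4, 100);
  (1, 0, 100); (1, 4, 100); (1, 2, 100); (1, 5, 50);
  (2, 3, 100); (2, 1, 100); (2, 5, 100); (2, 4, 50);
  (3, 2, 100); (3, 5, 100);
  (4, 0, 100); (4, 1, 100); (4, 5, 100); (4, 2, 50);
  (5, 3, 100); (5, 4, 100); (5, 2, 100); (5, 1, 50);
]

(* Unlisted seat pairs default to 0: too far apart to matter. *)
let proximity_of i j =
  try
    let (_, _, p) =
      List.find (fun (x, y, _) -> x = i && y = j) proximity_spec
    in p
  with Not_found -> 0
```

Note that unlike the likes/dislikes, proximity is symmetric, which is why every pair appears twice in the listing.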

A dinner table seating assignment is now an identification like “Hank sits on seat number 3 and Karen sits on seat number 0”. Our program will search for an assignment that optimally satisfies the given constraints. We will use a mathematical formulation to explain what we mean by optimal – intuitively, we want to maximize the accumulated satisfaction of the constraints.

More formally, let $assign: People \rightarrow Seat$ be a table seating assignment, let $proximity: Seat \times Seat \rightarrow Percentage$ be the proximity configuration between two seats and let $constraint: People \times People \rightarrow Percentage$ be the constraint between two people. We then want to maximize the value of
\sum_{p,q \in People} constraint(p,q) \cdot proximity(assign(p), assign(q))
with respect to the table seating assignment. In other words we build the sum over all pairs of people, where the value of a pair is given by the product of its proximity and its constraint, i.e. the further away two people are the less relevant the constraint is.
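The objective can be sketched directly in OCaml. The function `score` below is a hypothetical helper, not yet the final program of the series: it takes the list of people, the constraint and proximity tables as functions, and a seating assignment, and computes the sum from the formula (with percentages as plain integers, so a perfect pair contributes 100 · 100 = 10000):

```ocaml
(* Sum constraint p q * proximity (assign p) (assign q)
   over all ordered pairs of distinct people. *)
let score people constraint_of proximity_of assign =
  List.fold_left (fun acc p ->
      List.fold_left (fun acc q ->
          if p = q then acc
          else acc + constraint_of p q * proximity_of (assign p) (assign q))
        acc people)
    0 people

(* Tiny illustration: two people who like each other by 100%,
   sitting on two directly adjacent seats. Each ordered pair
   contributes 100 * 100 = 10000, so the score is 20000. *)
let demo =
  score [0; 1]
    (fun _ _ -> 100)   (* everyone likes everyone by 100% *)
    (fun _ _ -> 100)   (* all seats directly next to each other *)
    (fun p -> p)       (* person i sits on seat i *)
```

Maximizing this score over all assignments – for instance by enumerating the permutations of seats – is what the program in the next post will do.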

Are we still on the same page? Have you encountered this or similar problems before? Great! You have just seen how to model a real-world problem in terms of mathematics and machine-processable structures. In the next post, we will write our first OCaml program and integrate this data.

Modern Living: It’s about time

There is no doubt that we lack time. Even our youngest students are cutting down their sleep to the viable minimum [NY Mag]. The market for personal productivity tools is a booming Web 2.0 business. Self-help and self-organization books have ranked among the top 10 best-selling books for a couple of years straight now [Amazon]. The burnout syndrome, a form of depression that occurs when people feel persistently unable to fulfill all the demands that life puts on them, has become one of the most frequently diagnosed psychological illnesses.

Paradoxically, on the other hand, we are increasingly procrastinating. More people are playing computer games for longer than ever before [PC Gamer]. Despite all the prophets predicting the death of television, it has never been as popular as today [LA Times]. People spend hours repeatedly checking their Facebook wall and all the links and videos that other people share with them. So it seems like there should be enough time after all. What is going on here?

First, I think there are two main reasons contributing to all kinds of procrastination. Procrastination gives us short-term incentives and almost instant gratification, be it of a social, entertaining or achievement-related kind. From an evolutionary point of view, our brain is geared towards short-term successes rather than going the extra mile for the long-term perspective. The other main reason might be related to fear: it seems favorable to postpone a stressful situation for a little while when the situation might give you negative feedback or the impression that you need to apply yourself even more in order to succeed.

Second, the information overload of our contemporary societies is eating up a lot of time. News is broadcast almost instantaneously on many different channels, and you are supposed to stay informed. This also holds true for our fields of expertise. Tech people, for instance, have to read a whole bunch of tech blogs and articles, economists have to check every bit of news about macro- and microeconomic developments, scientists have to check all the new publications in their fields (and the publication rate is steadily increasing), and so on. Then there are your social feeds, hobbies and brands that you love, such as favorite musicians, authors, directors, actors etc. Although it’s a blessing that all this information is available to us on the internet, it also takes a lot of time to digest all of it. And yet we often get the feeling that we’re under-informed.

Third, there are many more things to do than in earlier times (I am mostly referring to the first and second world here), and all of them are competing for your attention. Both the digital revolution and the rural exodus contribute to that. As a consequence, you always feel like you’re missing out on something. This particularly holds true for your professional life. Now that we finally have the freedom to do almost anything (I know, I know, I am oversimplifying here), we are scared that the grass might be greener on the other side. We tend to think that we are making a mistake by sticking to a certain profession. There are also a number of studies concluding that an increased number of choices actually makes people more desperate than preselecting choices for them [TED].

Fourth, it seems like we are trying to pack ever more duties, responsibilities and training into young people, while we are not leveraging the fact that people are getting older and older. By stretching the load to higher ages, we could disburden the young and middle-aged, involve the elderly and at the same time cope with the problem of paying their benefits. I think that, in general, the phenomenon of delayed gratification is a trend in our cultures that might seem rational at first sight but overall is not beneficial to the individual’s mental well-being.

Fifth – and I think this is one of the most important reasons, at least in your professional life – since the beginning of globalization and free markets, we, as people, have been increasingly competing with each other all around the world. So by simple game theory, it follows that if my competitor is working harder and longer, I should also boost my workload, unless I am okay with missing out on a job, a patent, an invention, etc. This trend will clearly continue, assuming, of course, that political systems won’t change in a substantial way. And it is obviously independent of technological advances.

I neither want to propose any solutions nor condemn any of the observed phenomena – I am struggling myself on a day-by-day basis with the feeling of not having enough time.

To give you my two cents, I think it helps to assume that you only have one life. I do not claim that you only have one life, and I do not want to digress into a discussion of whether religion, atheism or agnosticism makes sense – I just want to stress that assuming you only have one life really helps you make the right decisions. And if you are lucky enough to have the opportunity to do different things, don’t do something that you don’t like. You should try stuff to see whether you like it, but if you don’t, do not stick with it for whatever reason. It also shows you that indefinitely delayed gratification is not a good strategy to follow. I will write another blog post on that topic.

To my mind, our present-day societies are still stuck in anachronistic patterns of educating people, putting people to work and retiring older people. I am fairly sure that this will change gradually, but in the end substantially, within the next decades. I will cover this in another blog post.

As for the issue of information overload, I am very confident that technology will find a solution to that problem within the next years. I will write another blog post on that topic.

Finally, I don’t think there is any viable solution to the problem of global competition. There are interesting concepts like the basic income guarantee and so on, but even they don’t change the general pattern of being better off by doing more than your competitor. And I think that, in general, liberal democratic systems have proven to be quite motivating and fruitful for moving forward.

Speaking of time, do we all perceive time at the same rate? And if so, is this perceived rate of time an inherently human thing or do we just learn to synchronize our perception of time with our peers while we’re growing up?

And as a follow-up question: is this question rather a philosophical or a neurobiological one?

Scientific Sermon: Wikis and Parity Games

This will be the first post in a series called “scientific sermon”. It will feature folklore knowledge in my scientific area of expertise as well as introductory articles to the field. Eventually, I will also include complementary explanations to my research.

Along the way, I will also try to update the related Wikipedia articles. Most of the content will probably be too specific for Wikipedia, but it would nevertheless be helpful to have all of it in a wiki. At least I would like to have scientific results organized in a wiki – it would be much more accessible to outsiders, and the scientific community could collaboratively share and store its knowledge in a holistic way.

It is a little surprising to me that no such “science wiki” seems to exist. There is no doubt in my mind that such a thing will come to life in the near future – you can already see the first attempts going in the right direction. There is, for instance, the very well maintained Complexity Zoo, featuring essentially all known complexity classes and their relationships. Then, on the other hand, there is the intriguing Polymath Blog, a remarkably successful online community for doing top-notch collaborative scientific research.

So I will eventually set up a small wiki for theoretical computer science, and of course invite other scientists to contribute to it. But let’s see how far I’ll get with good old Wikipedia first.

My first post will be about parity games and why they are an important abstraction in the theory of computation.

Theoretical computer science deals with a bunch of important questions. One of them is to find fast algorithms for practical problems and another is to understand in general what computers are capable of. The problem of parity game solving is related to both questions.

Let us start with the practical problems, which served as the original motivation for people getting interested in parity games. One central goal of modern computer science is to decide whether software is correct, i.e. bug-free – in other words, whether the human beings that wrote the software made any mistakes. This is particularly important for critical applications such as power plants, railroad systems and so on. We just can’t afford to make any mistakes there.

Thus, we would like to have an additional computer program to which we can give our software to check it for bugs. After some time, it should tell us whether the software is bug-free and, if it’s not, what the reason is (so we can correct our mistakes). Well, first we have to specify what we mean by “bug-free”. A computer program performing such automated verification wouldn’t know which behavior we want from our software. Hence, it is the duty of the software developers to also provide a specification that defines “correct” behavior.

For example, we want that whenever a processor adds two natural numbers, the result is the correct sum. A specification is usually formulated in a logic; for instance, correct addition could be specified as follows:
\forall a, b \in \mathbb{N}: add(a, b) = a + b
The left-hand side refers to an addition operator of the processor while the right-hand side refers to the mathematical addition of natural numbers.
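To make the gap between implementation and specification concrete, here is a toy OCaml sketch (the name `add` is hypothetical) that merely tests an implementation against the specification on a finite sample of inputs. Note that sampling like this is ordinary testing, which is far weaker than model checking: verification has to cover all inputs, not just a sample.

```ocaml
(* A hypothetical implementation under test. *)
let add a b = a + b

(* Check the specification add(a, b) = a + b on a finite sample only.
   Real verification would have to establish this for all inputs. *)
let spec_holds_on_sample =
  let sample = [0; 1; 2; 5; 100] in
  List.for_all
    (fun a -> List.for_all (fun b -> add a b = a + b) sample)
    sample
```
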

The question that we now want to answer is, given a blueprint (called “structure” or “interpretation”) of the processor, whether it satisfies the specification given by formulas in a certain logic. Such a scenario is called a model checking problem.

It should be noted that not every model checking problem can be solved (even when we assume that time is not an issue), because some logics are so complicated to handle that no computer can answer such a question in general. See the halting problem for more information on that.

However, many practical model checking problems can be solved and are solved in practice. Of course, it is desirable that the process of automated verification terminates fast – it wouldn’t be of much use to wait years for an answer. Hence, theoretical computer science tries to speed up the algorithms solving this problem.

As a mathematician, you always try to simplify and abstract problems when you fancy solving them. One core idea of abstraction is to remove every detail of the problem that is unnecessary and distracting.

And this is where parity games come into play. They are an abstraction of the model checking problem for several modal logics, most importantly for a fixed-point logic called the modal $\mu$-calculus.

Some other applications and relations of parity game solving can be seen in the following diagram:

In the next post in this series, I will show what parity games actually look like and explain why they are an interesting subject in their own right – in order to understand what computers are capable of in general.