Category Archives: Tech Scene

Perfect Tools Part 2: Actionable Remembering on How to do Things

Have you ever googled for a quick tutorial on how to scp files from one computer to another? Or searched for the right commands to create and switch to a new git branch? What are the perfect tools to do that job? You might say, it depends on what the objective is – scp is fairly different from git.

And you are right. But there are two more things to it. First, you have to find out how it works in order to perform the task. Google is usually your friend here, and I assume that you know how to phrase successful search queries. So what’s the second thing? Performing the task a second or a third time. And forgetting how you’ve done it before. I’m sure that happened to you countless times.

So what’s the perfect tool for the job? It’s actually your favourite text editor, and using it for this job is more a matter of habit than of expertise: whenever you find out how to perform a specific task, write it down. Not a long text, just as much as you need to understand as quickly as possible how to do it again. As a bonus, you might want to write a shell script instead that performs the task for you.

The way I do it is two-fold. I have one directory with text files describing how to do things. Here are two examples:

  •  Git Branching Workflow
git checkout -b NEW_BRANCH        # create and switch to a new branch
git add .                         # stage new files
git commit -a -m "Finished Work"  # commit all changes
git checkout master
git merge NEW_BRANCH              # merge the finished branch into master
git branch -d NEW_BRANCH          # delete the merged branch
git push
  •  Configure Apache on Mac OS X
sudo apachectl start
cd /etc/apache2/other
sudo nano test.conf
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName localhost
    DocumentRoot /YOUR_WEBSERVER_ROOT
    <Directory /YOUR_WEBSERVER_ROOT>
        Allow from all
        AllowOverride AuthConfig
        Options FollowSymLinks
    </Directory>
</VirtualHost>
sudo apachectl -k restart

But even better than such notes is making them actionable by writing scripts that do the work for you. I keep all my utility scripts in one directory that is included in the PATH variable of my terminal. In fact, all scripts sit in a Dropbox folder, so all my machines are always up to date. All scripts are prefixed with “cmd-”, so I can easily find and execute any of them by simply typing “cmd-” in my terminal and then auto-completing the specific task by hitting tab.
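The setup described above boils down to a single line in your shell configuration. A minimal sketch, assuming the scripts live in ~/Dropbox/scripts (the folder name is my assumption; adjust it to your own layout):

```shell
# Put the Dropbox scripts folder on the PATH so every cmd- script
# is executable from anywhere. This line goes into ~/.bashrc or ~/.zshrc.
export PATH="$HOME/Dropbox/scripts:$PATH"

# New scripts only need the executable bit to show up in tab completion:
#   chmod +x ~/Dropbox/scripts/cmd-convert-mp4
```

After that, typing “cmd-” and hitting tab completes to any executable script in the folder, on every machine Dropbox syncs to.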

Here are a couple of examples:

  • Limit your downstream bandwidth
ipfw add pipe 1 all from any to any in
ipfw pipe 1 config bw $1Kbit/s delay $2ms
  • Convert a video to mp4
ffmpeg -i $1 $1.mp4
  • Find a file globally by name
find / -name $1
  • Search file contents recursively
grep -r "$1" .
  • Override your MAC address
sudo ifconfig en0 ether $1
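As an example of how such a one-liner grows into a cmd- script: the bare ffmpeg line above turns video.avi into video.avi.mp4. Here is a sketch of a slightly friendlier wrapper, written as a shell function; the name, the usage message and the extension handling are my additions:

```shell
# Hypothetical body of a "cmd-convert-mp4" script: wraps the ffmpeg
# one-liner above with a usage check, and strips the old extension
# so video.avi becomes video.mp4 instead of video.avi.mp4.
convert_to_mp4() {
    if [ "$#" -ne 1 ]; then
        echo "usage: cmd-convert-mp4 INPUT_FILE" >&2
        return 1
    fi
    out="${1%.*}.mp4"      # drop the last extension, append .mp4
    ffmpeg -i "$1" "$out"
}
```

The point is not the three extra lines, but that future-you gets a reminder of how to call it instead of a cryptic ffmpeg error.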

I have about 30 text files and 50 scripts of this kind by now. The additional time you need to write up these little documents when encountering a task for the first time is nothing compared to the time you’ll need to figure out how it works again the second or third time. Not to mention the frustration of repeating yourself.

Perfect Tools Part 1: Selection and Consumption of Posts

If you are like me, then you are always searching for ways to improve the efficiency of your daily workflow. I feel like I’m already closing in on – at least – local optima in some areas, and since it took me some time to find and combine the right tools for the job, I might as well share them here.

I am an avid reader of many different blogs and news sites, but like most people, I only read a tiny fraction of all the articles posted each day. So the first task I face every day is filtering: deciding which posts are of interest to me. For me, filtering is completely separate from reading the actual articles.

Since I don’t want to browse 20 different websites each day to find new posts, I use an aggregator that pulls in all articles and presents the new ones to me in a uniform fashion that makes it easy to go over the different posts. There are essentially two types of aggregators: intelligent and dumb ones.

Intelligent aggregators try to guess which sources and which posts are of interest to you and only present you with that selection. Dumb aggregators in turn show you the sources that you have selected and, within those sources, all the posts. While most people probably decide to use intelligent aggregators, I decided to use dumb ones. For one, I don’t want to see a semi-random selection of sources, since I have hand-picked my own, and I always have the feeling that I might be missing out on interesting articles if a machine-learning algorithm selects them for me. In particular, I suspect that an intelligent aggregator would get in the way of serendipitous discovery. (If you are looking for intelligent aggregators, I’d recommend checking out Prismatic and Flipboard.)

So which tool am I using for dumb aggregation? It’s called Feedly; it lets you select the sources you’re interested in and presents all new articles as lists. Perfect for filtering. You can also use it for reading, but that’s not what I’m doing.

My workflow starts by going to my Feedly subscriptions and scanning over them. If I’m interested in an item, I open it in a new tab and continue to scan. I will not start to read any of the posts until I’m finished scanning. (Hint: on the Mac, the shortcut for opening a link in a new tab without automatically switching to it is Cmd + Click.) I call this first pass the scanning phase.

After scanning all new articles, I close Feedly and go through the tabs. I decide which articles to read now, which to read later and which to discard. When I decide to discard an article, I immediately close the tab. When I decide to read an article now, I immediately read it. These are mostly posts of short-lived, timely relevance that need to be read that day. The articles that are of interest to me but have no immediate timely relevance are marked to be read later (which I will describe next). More than 50% fall into that category. I call this second pass the triage phase.

How do I save articles for reading later? I used Instapaper for a long time but have now switched to Pocket. Pocket installs a browser extension that lets you mark posts with one click to read them later. That’s exactly what I do with interesting articles that have no immediate timely relevance.

I read most of the Pocket articles on the go using the smartphone or tablet app, which lets you read all your saved articles even when you’re offline. So whenever I’m on the subway, in a cab, on a train, on a plane, at the gym etc., I read the articles from there. I call this third pass the reading phase.

When I find that one article is so interesting that I need to take action based on the content later (i.e. send it to a friend, check out links etc.), I “heart” the article. I’ll check that “heart category” from time to time when using my computer to go through that list. I call this fourth pass the action phase.

But there is still room for improvement. What do you do when you are literally on the go? Reading while walking slows down your walking, and since walking is your primary task, there is no point in that trade-off. Wouldn’t it be great if you could listen to the articles you’ve saved? You can, and the text-to-speech is actually quite nice and auto-detects the right language for each article (in my case, that’s English and German). The app I’m using for that is called Lisgo, and it synchronises with all your saved Pocket articles. That’s also the primary reason I switched from Instapaper to Pocket: there was no text-to-speech extension for Instapaper.

I am pretty happy with the combination of Feedly, Pocket and Lisgo right now, and don’t see much room for improvement. How do you consume your daily news from the web? For me, it’s broken up into the scanning, triage, reading and action phases. Which is, to some extent, pretty similar to how I treat my email inbox.

Tech Scene: Payment Models for Digital Goods

In the history of the internet, a couple of different payment models have emerged for digital goods. There are various types of digital goods, many of which existed long before the internet: books, news, music, videos, TV shows and TV channels, to name a few. Some goods are inherently digital, like virtual goods in computer games.

In the beginning of the internet, there was only one payment model: free digital goods sponsored by advertising, and we still have that model today. But back then only a few digital goods were available, mostly news articles.

Newer models have emerged since then. The music and movie industries, for instance, decided to offer most of their goods via direct payment per item. For virtual goods, as in Zynga games, the freemium model disrupted major parts of the gaming industry – you can play the games for free, but once you’re hooked, you have to pay in order to succeed.

The disruption of the direct payment model for music is now driven by services like Spotify or Simfy. Instead of paying for every single item, you pay a flat fee per month to listen to any music you want. You don’t own the titles anymore; you just get the right to listen to them for as long as you’re paying for the service.

Which payment model is best depends on the type of goods you offer. I think there are three categories of digital items: collectable, consumable and enabling items. A collectable item has an intrinsic value to people and they want to own it (even after consuming it). Consumable items on the other hand only have value to people before they’ve consumed them. An enabling item allows people to improve on something existing, so they need to have it to get one step further.

Of course, no item falls completely into one category, and it also depends on the mindset of the customer where to put an item. To me, books and movies are mostly collectable items, which is why we still mainly use direct payment for such goods. TV shows and music, on the other hand, are more consumable items – sure, there are some shows and some music that pop-culture aficionados want to watch or listen to all day long, over and over, but most shows are only consumed once or twice. Likewise, we usually listen to a number of songs until we get bored of them and move on. We don’t really care whether we still have them. Hence, flat-fee models are on the rise for consumable items. It goes without saying that the freemium model fits enabling items such as virtual goods best.

What category is news? Most of the time, it is a consumable good, because news is usually only relevant to you until it’s not new anymore. Because that’s what news is – it’s new. Sure, news portals do contain articles that have long-term value, but most content is only relevant for a short time. So what’s the right payment model? In the old days of the internet, news was free to the consumer and sponsored by advertising. It was okay for some time that this model didn’t produce much revenue, because real news publishing was still happening offline. Nowadays, digital news portals have to generate more revenue as the offline revenue stream breaks down.

So what other payment models are there? News publishers transfer their issue-based offline system to the net by offering flat-fee subscriptions on a monthly or yearly basis, or per-issue payment like in the offline world. Considered at a fine-granular level, an issue means you pay a flat fee for reading any number of articles in it. So that’s the flat-fee model for a consumable item. Makes sense. But not enough people are paying for that yet, as there are still many free news websites.

To account for that, news publishers have a tendency to introduce (soft) paywalls. A (soft) paywall restricts your access to digital goods – articles in particular – by requiring you to pay a very small fee per article you read, or by allowing you to read a small number of articles for free until you have to pay either on a per-article basis or by subscription.

Time will tell whether per-article payment works. Since articles are mostly consumable goods of short relevance with many free competitors, I would suggest a different payment model: neither free, nor a flat fee, nor per-article payment. The problem with per-article payment is that you can’t really give people a demo of what they’re going to get, so people are hesitant to pay for an article they might regret buying if it doesn’t meet their expectations.

My suggestion therefore would be a model that I call “capped satisfactory payment”. It consists of two rules. First, you pay on a per-article basis, but if you didn’t like an article after reading it, you can get your money back. Obviously, you combine that with a fair-use policy and tracking of rejected articles, so you exclude free-riders. Second, the overall payment for read articles is capped per issue: if a reader really digests a lot of articles in an issue, they shouldn’t pay much more than the flat fee for the whole issue would have been in the first place. To my mind, this model clearly communicates that you take the readers’ opinion seriously, while dramatically lowering the hurdle to read and pay for an article.
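The two rules boil down to simple arithmetic. A minimal sketch in shell, where the per-article price and the issue cap are made-up numbers purely for illustration:

```shell
# "Capped satisfactory payment" sketch: charge per article read,
# subtract refunded (disliked) articles, and never charge more
# than the flat fee for the whole issue. Prices are in cents.
charge_for_issue() {
    articles_read=$1   # articles the reader opened
    refunded=$2        # articles they got their money back for
    price=20           # hypothetical price per article
    cap=200            # hypothetical flat fee for the whole issue
    total=$(( (articles_read - refunded) * price ))
    if [ "$total" -gt "$cap" ]; then
        total=$cap     # rule 2: cap at the issue flat fee
    fi
    echo "$total"
}
```

So a reader who opens 5 articles and rejects 2 pays for 3, while a heavy reader who opens 20 pays no more than the issue would have cost outright.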


Tech Scene: Platform APIs and Standards

This post will be about platform application programming interfaces (APIs), protocols and standards. When we build software that has to integrate with components written by other people or when our software has to communicate with some other program (for instance via the internet), both programs have to agree on a common language. Otherwise, they could not exchange any meaningful data or commands.

The designers of the software can create any language they want for communicating, but all involved components have to agree on it. The way software components talk to each other is usually called a protocol. It can be seen as both the grammar and the vocabulary that all components understand. Your browser, for instance, used the HTTP protocol to retrieve this website from my web server. They both agreed to speak HTTP. The vocabulary, in this case, was a formal way of your browser saying “give me the following page” and my web server replying “there you go” with the full page attached.
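That exchange can be spelled out literally. Below is a sketch of the bytes a browser sends; example.com stands in for any web server, and actually sending the request is left commented out in case you are offline:

```shell
# The formal "give me the following page": a request line, a Host
# header and a blank line, each terminated by \r\n as HTTP requires.
request="$(printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n')"
echo "$request"

# To actually send it over the network:
#   printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
# The server's "there you go" is a status line such as "HTTP/1.1 200 OK",
# followed by headers and the page itself.
```

Every HTTP server understands this exact command set, which is the point the next paragraphs build on.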

This set of commands can be seen as an application programming interface: the server specifies which commands it understands. But an API is not necessarily tied to a protocol; it is just an abstract way of specifying the supported command set.

Within the last couple of years, many applications on the internet have developed so-called platform APIs – a way of opening up their applications to other programmers. You could, for instance, write a service that hooks into the Facebook API, so your application can browse through friends, interests and all that.

While all this is great, there is usually no standard attached to these APIs. This means that similar applications offer different APIs – in other words, in order for your application to access the friends of Google+, it has to use a different API than when accessing the friends of Facebook. Note that this completely differs from the HTTP protocol for accessing websites: whenever your browser requests a page from a server, it uses the exact same command set – because all HTTP servers have the same API.

And that’s great, because it makes browsers so versatile – they can browse every page. The same holds true for email: there is a single API that unifies all mail servers. The email system is even more interesting, as it is completely decentralised (with all its benefits and handicaps).

The reason why systems like web browsing and email work so well together is standards: the internet world and the industry agreed a long time ago to all use these protocols and the associated APIs. Standards contribute to an accessible market, they simplify planning, and they make it much easier for customers to switch between providers of a certain service; decentralisation makes a standard’s ecosystem robust, reliable and competitive. It even allows users to communicate with each other across providers. Hence, there are a lot of benefits associated with standards.

However, standards can also be a barrier to innovation and evolution – because it is so hard to change them once they’re successfully in place. The best example is good old email – it’s insecure, out of fashion, full of spam, and yet it is still the most successful communication platform we have on the internet. And it will take a lot of time for this to change.

But the specific platform APIs as we have them now on Facebook, Google+, Instagram, Instapaper, Dropbox, Foursquare, Twitter and so on also have their downsides. Every developer who wants to build on these services has to write specific code for each supported platform. While you can say “I support email”, you can’t really say “I support social networking” – because “social networking” has not been standardised. As a consequence, developers have to spend considerable time integrating the different platforms and, even more importantly, have to make a selection of supported services. This of course favours big players like Facebook, while smaller players miss out on the opportunity to be supported by other services.

For the customer, too, it can have unpleasant side effects at times, particularly when a specific service closes down or when the customer wants to move to a different service. Without standards, there is usually no way to migrate your data in a comfortable way. You can’t just move all your likes, interests, statuses or contacts from Facebook to Google+. Similarly, services that store your online playlists, like Simfy or Spotify, don’t allow you to migrate to a competitor. And so on; the list could be continued indefinitely.

For the big players, this is kind of neat, because it protects their markets and user bases, but for the customers, it makes it more difficult to change platforms. It is also not possible to communicate with people on other platforms – something that is trivial with email. In other words, these “closed systems” with their proprietary platform APIs foster monopolies, which is usually not in the best interest of the customer.

The different incompatible platform APIs have also contributed to another trend, which I would call the middleware service trend: new applications are being built that try to interlink all the different kinds of APIs. This can be on the software-as-a-service level, like ShareThis, but it can also take the form of consumer products like Ifttt.

The best example of where we are still desperately lacking a ubiquitous standard is account management and passwords: you still have to sign up on every single page and keep track of the passwords. This is a mess. There is also the problem of personal data that you want to share with different services – such as your payment information with an online shop. The most promising standard here is OpenID, which serves as a decentralised authentication service. However, adoption is only so-so. Most websites that feature sign-in via external identity providers preselect “Login via Facebook” or “Login via Twitter” – which again means specific platform APIs instead of standards. And this chains you even further to one of the big players.

It will be very interesting to see whether the OpenID standard will gain serious traction in the future, and how the battle between platform APIs and standards will play out in general.